content (string) | title (string) | question (string) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (string)
---|---|---|---|---|---|---|---|---|
Q:
How do you index a float value in Python?
Just looking for an answer to a question no amount of googling appears to resolve.
If:
a = 1.23
I would like to be able to take the 1 and multiply it, yet keep the .23.
How is this possible?
Thanks in advance!
A:
In the comments to munkhd's answer you said:
I want to be able to input a value as hours then convert it to minutes.
So if it was 1.20 I would multiply the 1 by 60 then add 20.
I'm sure there must be an easier method :)
Thus your program will receive 1.20 as a string, so you can use string methods on it, e.g.
>>> dotted_time = '1.20'
>>> h, m = [int(s) for s in dotted_time.split('.')]
>>> print h, m
1 20
>>> minutes = 60*h + m
>>> hours = minutes / 60.0
>>> print minutes, hours
80 1.33333333333
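Note that the session above uses Python 2 syntax; in Python 3 the print statements become function calls:
print(minutes, hours)  # Python 3 equivalent of the last print above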
Alternatively, you can do colon_time = dotted_time.replace('.', ':'). colon_time is then in a format that the standard time functions can understand and manipulate for you. This is probably the more sensible way to proceed, and it will easily cope if you want to process times with seconds, like '1.20.30', which .replace('.', ':') will convert to '1:20:30'.
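A minimal sketch of that approach, assuming the input is always hours.minutes (use '%H:%M:%S' instead if seconds are present):
from datetime import datetime

dotted_time = '1.20'
colon_time = dotted_time.replace('.', ':')
t = datetime.strptime(colon_time, '%H:%M')  # parses '1:20' into a time object
minutes = 60 * t.hour + t.minute
print(minutes)  # 80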
A:
Is int() what you're looking for?
In [39]: a = 1.23
In [40]: int(a)
Out[40]: 1
In [41]: a
Out[41]: 1.23
A:
You can't index a float as such. It's not a collection. For more info on how floating point numbers work, read http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
The math library gives you a way to get a tuple with the two parts of the number though
>>> import math
>>> a = 1.23
>>> math.modf(a)
(0.22999999999999998, 1.0)
This is not exact because of the way floating point numbers are represented, but the error is on the order of 1e-17.
Of course, this is always possible too...
>>> map(int, str(a).split('.'))
[1, 23]
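Note that in Python 3, map() returns an iterator, so you would wrap it in list() to get the same output:
list(map(int, str(a).split('.')))  # [1, 23] in Python 3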
A:
While PM 2Ring's answer seems to solve [1] your actual problem, you may "index" floats, of course after converting them to strings, but be aware of the limited accuracy. So use the built-in round function to define the accuracy required by your solution:
s = str(round(a, 2)) # round a to two digits
Now, you may apply all the string functions you want to it. After that you will probably need to convert your results back to float or int.
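For example, a minimal sketch of indexing into that string:
a = 1.23
s = str(round(a, 2))  # '1.23'
print(s[0], s[2:])    # prints: 1 23 -- s is indexable like any string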
[1] Others have already suggested this: when dealing with common models like time in a high-level language like Python, expect to be supported by reliable solutions that already exist.
A:
modf() and round()
Use modf() to get the whole number and fractional part. modf() will return a tuple of (fractional, whole). Use round() to control how precise you want the answer to be.
>>> import math
>>> a = 1.23
>>> math.modf(a)
(0.22999999999999998, 1.0)
>>> math.modf(a)[0]  # the zeroth element of the tuple is the fractional part
0.22999999999999998  # some precision lost; use round() to fix
>>> round(math.modf(a)[0], 2)
0.23  # this exactly matches the source's fractional part
Further, if you want to index each specific portion of the float you can use this:
def get_pos_float(num, unit, precision=3):
    if unit >= 10:
        num = abs(round(math.modf(num)[0], precision))  # get just the fractional part
        num *= 10  # move the decimal point one place to the right
        return get_pos_float(num, unit / 10, precision)
    retVal = int(math.modf(num)[1])  # return the whole number part
    return retVal
It is used as follows:
>>> get_pos_float(1.23, 10)
2
>>> get_pos_float(1.23, 100)
3
|
How do you index a float value in Python?
|
Just looking for an answer to a question no amount of googling appears to resolve.
if..
a = 1.23
I would like to be able to take the 1 and multiply this number yet keep the .23
How is this possible??
Thanks in advance!
|
[
"In the comments to munkhd's answer you said:\n\nI have want to be able to input a value as hours then convert them to minutes.\n So if it was 1.20 I would multiply the 1 by 60 then add 20.\n Im sure there must be an easier method :)\n\nThus your program will receive 1.20 as a string. So you can use string methods on it, eg\n>>> dotted_time = '1.20'\n>>> h, m = [int(s) for s in dotted_time.split('.')]\n>>> print h, m\n1 20\n>>> minutes = 60*h + m\n>>> hours = minutes / 60.0\n>>> print minutes, hours\n80 1.33333333333\n\nAlternatively, you can do colon_time = dotted_time.replace('.', ':'); colon_time is in a format that the standard time functions can understand and manipulate for you. This is probably the more sensible way to proceed, and it will easily cope if you want to process times with seconds like '1.20.30' which .replace('.', ':') will convert to '1:20:30'.\n",
"Is int() what you're looking for:\nIn [39]: a = 1.23\n\nIn [40]: int(a)\nOut[40]: 1\n\nIn [41]: a\nOut[41]: 1.23\n\n",
"You can't index a float as such. It's not a collection. For more info on how floating point numbers work, read http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html\nThe math library gives you a way to get a tuple with the two parts of the number though\n>>> import math\n>>> a = 1.23\n>>> math.modf(a)\n(0.22999999999999998, 1.0)\n\nThis is not exacct because of the way floating point numbers are represented, but the value is on the order of 1e-17.\nOf course, this is always possible too...\n>>> map(int, str(a).split('.'))\n[1, 23]\n\n",
"Besides PM 2Ring's answer seems to solve [1] your actual problem, you may \"index floats\", of course after converting it to strings, but be aware of the limited accuracy. So use the built-in round function to define the accuracy required by your solution:\ns = str(round(a, 2)) # round a to two digits\n\nNow, you may apply all the string functions you want to it. After that you probably need to convert your results back to float or int ...\n\n[1] Others already suggested this: when dealing with common models like time in a high-level language like python, expect to be supported by reliable solutions that are already there.\n",
"modf() and round()\nUse modf() to get the whole number and fractional part. modf() will return a tuple of (fractional, whole). Use round() to account for how precise you want the answer.\n>>> import math\n>>> a = 1.23\n>>> math.modf(a) (0.22999999999999998, 1.0)\n>>> math.modf(a)[0] #the zeroth location of the tuple is the fractional part\n0.22999999999999998 # some precision lost, use round() to fix\n>>> round(math.modf(a)[0], 2)\n0.23 # This is an exact match the source fractional part\n\nFurther, if you want to index each specific portion of the float you can use this:\ndef get_pos_float(num, unit, precision=3):\n if unit >= 10:\n num = abs(round(math.modf(num)[0], 3)) # Get just the fractional part\n num *= 10 # Move the decimal point one place to the right\n return get_pos_float(num, unit/10)\n retVal = int(math.modf(num)[1]) #Return the whole number part\n return retVal\n\nIt is used as follows:\n>>> get_pos_float(1.23, 10)\n2\n>>> get_pos_float(1.23, 100)\n3\n\n"
] |
[
2,
0,
0,
0,
0
] |
[] |
[] |
[
"indexing",
"python"
] |
stackoverflow_0025919129_indexing_python.txt
|
Q:
process json file with pandas
I have a JSON file with objects like this:
{"_id":"62b2eb94955fe1001d22576a","datasetName":"training-set","x":[1.062747597694397,0.010748463682830334,0.5052880048751831,0.7953124046325684,0.4599417448043823,0.5107740, 0.005278450902551413,0,0.372520387172699,0.9956972002983093],"y":"Contemporary", "team":"A"}
Can you help me extract the "x" arrays as a list in pandas, based on the team and the y value? I mean extracting an array of "x" as a pandas list specifically for "team": "A" and "y": "Contemporary", for example.
I don't know how to write two nested for loops to extract the "x" arrays as a list for each team per "y" value.
A:
I'm guessing here, but I assume you want:
df = pd.DataFrame(data).groupby(["team", "y"])["x"].apply(list).reset_index()["x"].squeeze()
print(df)
Output:
[1.062747597694397, 0.010748463682830334, 0.5052880048751831, 0.7953124046325684, 0.4599417448043823, 0.510774, 0.005278450902551413, 0.0, 0.372520387172699, 0.9956972002983093]
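If you want just the x lists for one specific combination (say "team": "A" and "y": "Contemporary", as in the question), a plain boolean filter is a reasonable sketch, assuming data is the list of parsed JSON objects:
import pandas as pd

df = pd.DataFrame(data)
mask = (df["team"] == "A") & (df["y"] == "Contemporary")
x_lists = df.loc[mask, "x"].tolist()  # list of the matching x arrays
print(x_lists)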
|
process json file with pandas
|
I have a json file with objects like this: `
{"_id":"62b2eb94955fe1001d22576a","datasetName":"training-set","x":[1.062747597694397,0.010748463682830334,0.5052880048751831,0.7953124046325684,0.4599417448043823,0.5107740, 0.005278450902551413,0,0.372520387172699,0.9956972002983093],"y":"Contemporary", "team":"A"}
`
can you help me extract the "x" arrays as a list in panda based on the team and the y value? I mean extracting an array of "x" as a panda list specically for "team":"A" and "y":"contemporary" for example.
I don't know how to write a nested two for loops to extract "x" arrays as a list for each team per each "y" value.
|
[
"I'm guessing here but am assuming you want?\ndf = pd.DataFrame(data).groupby([\"team\", \"y\"])[\"x\"].apply(list).reset_index()[\"x\"].squeeze()\nprint(df)\n\nOutput:\n[1.062747597694397, 0.010748463682830334, 0.5052880048751831, 0.7953124046325684, 0.4599417448043823, 0.510774, 0.005278450902551413, 0.0, 0.372520387172699, 0.9956972002983093]\n\n"
] |
[
0
] |
[] |
[] |
[
"json",
"pandas",
"python"
] |
stackoverflow_0074585220_json_pandas_python.txt
|
Q:
how to extract text from a div block or hyperlink in BS4
imagine I have a webpage
from bs4 import BeautifulSoup
import requests
url = 'https://www.example.com/something'
result = requests.get(url).content
#print(result)
soup = BeautifulSoup(result,"lxml")
result = soup.find_all("div", class_="header-subtitle")
print(result)
the result will be like
[<div class="header-subtitle"><a href="https://example/cat1" title="cat1">Cat1</a> <span>/</span> <a href="https://www.example.com/cat2" title="cat2">Cat2</a> <span>/</span> <a href="https://www.example.com/cat3" title="cat3">Cat3</a>
</div>]
where I need to extract Cat1, Cat2 and Cat3; it could be one level only, or keep going to Cat4, Cat5, Cat6, etc.
I tried
for div in result:
    print(div.find('a').contents)
but it only gives me Cat1; the other two are missing.
I also tried extracting with regex, but couldn't work out a regex that fits.
A:
You could reach your goal as mentioned by MattDMo, or alternatively take a look at CSS selectors:
[a.get_text(strip=True) for a in soup.select('div.header-subtitle a')]
To get the text of an element you could use get_text()
Example
from bs4 import BeautifulSoup
html='''
<div class="header-subtitle"><a href="https://example/cat1" title="cat1">Cat1</a> <span>/</span> <a href="https://www.example.com/cat2" title="cat2">Cat2</a> <span>/</span> <a href="https://www.example.com/cat3" title="cat3">Cat3</a>
</div>
'''
soup = BeautifulSoup(html)
[a.get_text(strip=True) for a in soup.select('div.header-subtitle a')]
Output
['Cat1', 'Cat2', 'Cat3']
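For reference, the find_all() route mentioned above would be a sketch like this, reusing the same soup:
for div in soup.find_all('div', class_='header-subtitle'):
    # find() returns only the first <a>; find_all() returns every one
    print([a.get_text(strip=True) for a in div.find_all('a')])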
|
how to extract text from a div block or hyperlink in BS4
|
imagine I have a webpage
from bs4 import BeautifulSoup
import requests
url = 'https://www.example.com/something'
result = requests.get(url).content
#print(result)
soup = BeautifulSoup(result,"lxml")
result = soup.find_all("div", class_="header-subtitle")
print(result)
the result will be like
[<div class="header-subtitle"><a href="https://example/cat1" title="cat1">Cat1</a> <span>/</span> <a href="https://www.example.com/cat2" title="cat2">Cat2</a> <span>/</span> <a href="https://www.example.com/cat3" title="cat3">Cat3</a>
</div>]
where I need to extract Cat1 , Cat2 and Cat3 , it could be one level only , or keep going to Cat4 , Cat5, Cat6 ...etc
I tried
for div in result:
print(div.find('a').contents)
but it only gives me the Cat1 , the rest 2 are missing
also tried with regex to extract, but couldn't make out the regex properly to fit it
|
[
"You could get your goal as mentioned by MattDMo or alternativly take a look at css selectors:\n[a.get_text(strip=True) for a in soup.select('div.header-subtitle a')]\n\nTo get the text of an element you could use get_text()\nExample\nfrom bs4 import BeautifulSoup\nhtml='''\n<div class=\"header-subtitle\"><a href=\"https://example/cat1\" title=\"cat1\">Cat1</a> <span>/</span> <a href=\"https://www.example.com/cat2\" title=\"cat2\">Cat2</a> <span>/</span> <a href=\"https://www.example.com/cat3\" title=\"cat3\">Cat3</a>\n</div>\n'''\n\nsoup = BeautifulSoup(html)\n\n[a.get_text(strip=True) for a in soup.select('div.header-subtitle a')]\n\nOutput\n['Cat1', 'Cat2', 'Cat3']\n\n"
] |
[
0
] |
[] |
[] |
[
"beautifulsoup",
"python",
"python_3.x"
] |
stackoverflow_0074585152_beautifulsoup_python_python_3.x.txt
|
Q:
Getting a ValueError after running the LabelEncoder command
I'm working on an ML web app and am training on data from a CSV file. When converting the data array to float, the ValueError below appears.
CODE
X[:, 0] = le_country.transform(X[:,0])
X[:, 1] = le_education.transform(X[:,1])
X = X.astype(float)
X
ERROR
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In [54], line 1
----> 1 X[:, 0] = le_country.transform(X[:,0])
2 X[:, 1] = le_education.transform(X[:,1])
3 X = X.astype(float)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\sklearn\preprocessing\_label.py:138, in LabelEncoder.transform(self, y)
135 if _num_samples(y) == 0:
136 return np.array([])
--> 138 return _encode(y, uniques=self.classes_)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\sklearn\utils\_encode.py:226, in _encode(values, uniques, check_unknown)
224 return _map_to_integer(values, uniques)
225 except KeyError as e:
--> 226 raise ValueError(f"y contains previously unseen labels: {str(e)}")
227 else:
228 if check_unknown:
ValueError: y contains previously unseen labels: 'United States'
A:
If you are fitting an encoder then you should use:
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit([1, 2, 2, 6])
You are probably using the encoder without having fitted it first, or the new data you are transforming contains labels (here 'United States') that the encoder did not see when it was fitted.
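One hedged sketch of a guard against unseen labels, assuming le_country is the fitted encoder from the question and X is the data array:
import numpy as np

known = set(le_country.classes_)      # labels the encoder saw during fit
mask = np.isin(X[:, 0], list(known))  # rows whose country is known
X = X[mask]                           # drop (or otherwise handle) unseen rows
X[:, 0] = le_country.transform(X[:, 0])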
|
Getting a ValueError after running the LabelEncoder command
|
I'm working on a ML webapp and am training data from a CSV file. When converting the data array to float the ValueError appears
CODE
X[:, 0] = le_country.transform(X[:,0]) X[:, 1] = le_education.transform(X[:,1]) X = X.astype(float) X
ERROR
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In [54], line 1
----> 1 X[:, 0] = le_country.transform(X[:,0])
2 X[:, 1] = le_education.transform(X[:,1])
3 X = X.astype(float)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\sklearn\preprocessing\_label.py:138, in LabelEncoder.transform(self, y)
135 if _num_samples(y) == 0:
136 return np.array([])
--> 138 return _encode(y, uniques=self.classes_)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\sklearn\utils\_encode.py:226, in _encode(values, uniques, check_unknown)
224 return _map_to_integer(values, uniques)
225 except KeyError as e:
--> 226 raise ValueError(f"y contains previously unseen labels: {str(e)}")
227 else:
228 if check_unknown:
ValueError: y contains previously unseen labels: 'United States'
|
[
"If you are fitting an encoder then you should use:\nfrom sklearn import preprocessing\nle = preprocessing.LabelEncoder()\nle.fit([1, 2, 2, 6])\n\nYou are probably using the encoder without having it been fit or the new data which you are using to train a model does not have the labels ('United States') which you fitted the encoder with.\n"
] |
[
0
] |
[] |
[] |
[
"jupyter_notebook",
"label_encoding",
"python",
"scikit_learn"
] |
stackoverflow_0074585304_jupyter_notebook_label_encoding_python_scikit_learn.txt
|
Q:
How do I convert multiple JSON files with unidentical structure to a single pandas dataframe?
The input is many JSON files differing in structure, and the desired output is a single dataframe.
Input Description:
Each JSON file may have 1 or many attackers and exactly 1 victim. The attackers key points to a list of dictionaries; each dictionary is 1 attacker, with keys such as character_id, corporation_id, alliance_id, etc. The victim key points to a dictionary with similar keys. The important thing to note here is that the keys might differ within the same JSON file. For example, a JSON file may have an attackers key which looks like this:
{
"attackers": [
{
"alliance_id": 99005678,
"character_id": 94336577,
"corporation_id": 98224639,
"damage_done": 3141,
"faction_id": 500003,
"final_blow": true,
"security_status": -9.4,
"ship_type_id": 73796,
"weapon_type_id": 3178
},
{
"damage_done": 1614,
"faction_id": 500003,
"final_blow": false,
"security_status": 0,
"ship_type_id": 32963
}
],
...
Here the JSON file has 2 attackers. But only the first attacker has the afore-mentioned keys. Similarly, the victim may look like this:
...
"victim": {
"character_id": 2119076173,
"corporation_id": 98725195,
"damage_taken": 4755,
"faction_id": 500002,
"items": [...
...
Output Description:
As output I want to create a dataframe from many (about 400,000) such JSON files stored in the same directory. Each row of the resulting dataframe should have 1 attacker and 1 victim. JSONs with multiple attackers should be split into an equal number of rows, where the attackers' properties differ but the victim properties are the same: for example, 3 rows if there are 3 attackers, with NaN values where a certain attacker doesn't have a key-value pair. So the character_id for the second attacker in the above example should be NaN.
Current Method:
To achieve this, I first create an empty list. Then iterate through all the files, open them, load them as JSON objects, convert to dataframe then append dataframe to the list. Please note that pd.DataFrame([json.load(fi)]) has the same output as pd.json_normalize(json.load(fi)).
mainframe = []
for file in tqdm(os.listdir("D:/Master/killmails_jul"), ncols=100, ascii=' >'):
    with open("%s/%s" % ("D:/Master/killmails_jul", file), 'r') as fi:
        mainframe.append(pd.DataFrame([json.load(fi)]))
After this loop, I am left with a list of dataframes which I concatenate using pd.concat().
mainframe = pd.concat(mainframe)
At this point, the dataframe only has 1 row per JSON irrespective of the number of attackers. To fix this, I use DataFrame.explode() in the next step.
mainframe = mainframe.explode('attackers')
mainframe.reset_index(drop=True, inplace=True)
Now I have separate rows for each attacker; however, the attackers & victim keys are still hidden in their respective columns. To fix this I 'explode' the two columns horizontally with .apply(pd.Series) and add a prefix for easy recognition, as follows:
intframe = mainframe["attackers"].apply(pd.Series).add_prefix("attackers_").join(mainframe["victim"].apply(pd.Series).add_prefix("victim_"))
In the next step I join this intermediate frame with the mainframe to retain the killmail_id and killmail_hash columns. Then remove the attackers & victim columns as I have now expanded them.
mainframe = intframe.join(mainframe)
mainframe.fillna(0, inplace=True)
mainframe.drop(['attackers','victim'], axis=1, inplace=True)
This gives me the desired output with the following 24 columns:
['attackers_character_id', 'attackers_corporation_id', 'attackers_damage_done', 'attackers_final_blow', 'attackers_security_status', 'attackers_ship_type_id', 'attackers_weapon_type_id', 'attackers_faction_id', 'attackers_alliance_id', 'victim_character_id', 'victim_corporation_id', 'victim_damage_taken', 'victim_items', 'victim_position', 'victim_ship_type_id', 'victim_alliance_id', 'victim_faction_id', 'killmail_id', 'killmail_time', 'solar_system_id', 'killmail_hash', 'http_last_modified', 'war_id', 'moon_id']
Question:
Is there a better way to do this than I am doing right now? I tried to use generators but couldn't get them to work. I get an AttributeError: 'str' object has no attribute 'read'
all_files_paths = glob(os.path.join('D:\\Master\\kmrest', '*.json'))
def gen_df(files):
    for file in files:
        with open(file, 'r'):
            data = json.load(file)
            data = pd.DataFrame([data])
            yield data
mainframe = pd.concat(gen_df(all_files_paths), ignore_index=True)
Will using the pd.concat() function with generators lead to quadratic copying?
Also, I am worried that opening and closing many files is slowing down computation. Maybe it would be better to create a JSONL file from all the JSONs first and then create the dataframe from each line.
If you'd like to get your hands on the files I am trying to work with, you can click here. Let me know if further information is needed.
A:
You could use pd.json_normalize() to help with the heavy lifting:
First, load your data:
import io  # needed for io.BytesIO below (missing from the original snippet)
import json
import requests
import tarfile
import pandas as pd  # used by pd.json_normalize() further down
from tqdm.notebook import tqdm

url = 'https://data.everef.net/killmails/2022/killmails-2022-11-22.tar.bz2'
with requests.get(url, stream=True) as r:
    fobj = io.BytesIO(r.raw.read())
    with tarfile.open(fileobj=fobj, mode='r:bz2') as tar:
        json_files = [it for it in tar if it.name.endswith('.json')]
        data = [json.load(tar.extractfile(it)) for it in tqdm(json_files)]
To do the same with your files:
import json
from glob import glob
def json_load(filename):
    with open(filename) as f:
        return json.load(f)

topdir = '...'  # the dir containing all your json files
data = [json_load(fn) for fn in tqdm(glob(f'{topdir}/*.json'))]
Once you have a list of dicts in data:
others = ['killmail_id', 'killmail_hash']
a = pd.json_normalize(data, 'attackers', others, record_prefix='attackers.')
v = pd.json_normalize(data).drop('attackers', axis=1)
df = a.merge(v, on=others)
Some quick inspection:
>>> df.shape
(44903, 26)
# check:
>>> sum([len(d['attackers']) for d in data])
44903
>>> df.columns
Index(['attackers.alliance_id', 'attackers.character_id',
'attackers.corporation_id', 'attackers.damage_done',
'attackers.final_blow', 'attackers.security_status',
'attackers.ship_type_id', 'attackers.weapon_type_id',
'attackers.faction_id', 'killmail_id', 'killmail_hash', 'killmail_time',
'solar_system_id', 'http_last_modified', 'victim.alliance_id',
'victim.character_id', 'victim.corporation_id', 'victim.damage_taken',
'victim.items', 'victim.position.x', 'victim.position.y',
'victim.position.z', 'victim.ship_type_id', 'victim.faction_id',
'war_id', 'moon_id'],
dtype='object')
>>> df.iloc[:5, :5]
attackers.alliance_id attackers.character_id attackers.corporation_id attackers.damage_done attackers.final_blow
0 99007887.0 1.450608e+09 2.932806e+08 1426 False
1 99010931.0 1.628193e+09 5.668252e+08 1053 False
2 99007887.0 1.841341e+09 1.552312e+09 1048 False
3 99007887.0 2.118406e+09 9.872458e+07 662 False
4 99005839.0 9.573650e+07 9.947834e+08 630 False
>>> df.iloc[-5:, -5:]
victim.position.z victim.ship_type_id victim.faction_id war_id moon_id
44898 1.558110e+11 670 NaN NaN NaN
44899 -7.678686e+10 670 NaN NaN NaN
44900 -7.678686e+10 670 NaN NaN NaN
44901 -7.678686e+10 670 NaN NaN NaN
44902 -7.678686e+10 670 NaN NaN NaN
Note also that, as desired, missing keys for attackers are NaN:
>>> df.iloc[15:20, :2]
attackers.alliance_id attackers.character_id
15 99007887.0 2.117497e+09
16 99011893.0 1.593514e+09
17 NaN 9.175132e+07
18 NaN 2.119191e+09
19 99011258.0 1.258332e+09
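As an aside, the AttributeError from the question's generator comes from json.load(file) being handed the filename string rather than a file object: the with open(file, 'r'): line never binds the handle to a name. A fixed sketch:
def gen_df(files):
    for file in files:
        with open(file, 'r') as f:  # bind the file object, then pass it to json.load
            yield pd.DataFrame([json.load(f)])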
|
How do I convert multiple JSON files with unidentical structure to a single pandas dataframe?
|
The input is many JSON files differing in structure, and the desired output is a single dataframe.
Input Description:
Each JSON file may have 1 or many attackers and exactly 1 victim. The attackers key points to a list of dictionaries. Each dictionary is 1 attacker with keys such as character_id, corporation_id, alliance_id, etc. The victim key points to dictionary with similar keys. Important thing to note here is that the keys might differ between the same JSON. For example, a JSON file may have attackers key which looks like this:
{
"attackers": [
{
"alliance_id": 99005678,
"character_id": 94336577,
"corporation_id": 98224639,
"damage_done": 3141,
"faction_id": 500003,
"final_blow": true,
"security_status": -9.4,
"ship_type_id": 73796,
"weapon_type_id": 3178
},
{
"damage_done": 1614,
"faction_id": 500003,
"final_blow": false,
"security_status": 0,
"ship_type_id": 32963
}
],
...
Here the JSON file has 2 attackers. But only the first attacker has the afore-mentioned keys. Similarly, the victim may look like this:
...
"victim": {
"character_id": 2119076173,
"corporation_id": 98725195,
"damage_taken": 4755,
"faction_id": 500002,
"items": [...
...
Output Description:
As an output I want to create a dataframe from many (about 400,000) such JSON files stored in the same directory. Each row of the resulting dataframe should have 1 attacker and 1 victim. JSONs with multiple attackers should be split into equal number of rows, where the attackers' properties are different, but the victim properties are the same. For e.g., 3 rows if there are 3 attackers and NaN values where a certain attacker doesn't have a key-value pair. So, the character_id for the second attacker in the dataframe of the above example should be NaN.
Current Method:
To achieve this, I first create an empty list. Then iterate through all the files, open them, load them as JSON objects, convert to dataframe then append dataframe to the list. Please note that pd.DataFrame([json.load(fi)]) has the same output as pd.json_normalize(json.load(fi)).
mainframe = []
for file in tqdm(os.listdir("D:/Master/killmails_jul"), ncols=100, ascii=' >'):
with open("%s/%s" % ("D:/Master/killmails_jul", file),'r') as fi:
mainframe.append(pd.DataFrame([json.load(fi)]))
After this loop, I am left with a list of dataframes which I concatenate using pd.concat().
mainframe = pd.concat(mainframe)
As of yet, the dataframe only has 1 row per JSON irrespective of the number of attackers. To fix this, I use pd.explode() in the next step.
mainframe = mainframe.explode('attackers')
mainframe.reset_index(drop=True, inplace=True)
Now I have separate rows for each attacker, however the attackers & victim keys are still hidden in their respective column. To fix this I 'explode' the the two columns horizontally by pd.apply(pd.Series) and apply prefix for easy recognition as follows:
intframe = mainframe["attackers"].apply(pd.Series).add_prefix("attackers_").join(mainframe["victim"].apply(pd.Series).add_prefix("victim_"))
In the next step I join this intermediate frame with the mainframe to retain the killmail_id and killmail_hash columns. Then remove the attackers & victim columns as I have now expanded them.
mainframe = intframe.join(mainframe)
mainframe.fillna(0, inplace=True)
mainframe.drop(['attackers','victim'], axis=1, inplace=True)
This gives me the desired output with the following 24 columns:
['attackers_character_id', 'attackers_corporation_id', 'attackers_damage_done', 'attackers_final_blow', 'attackers_security_status', 'attackers_ship_type_id', 'attackers_weapon_type_id', 'attackers_faction_id', 'attackers_alliance_id', 'victim_character_id', 'victim_corporation_id', 'victim_damage_taken', 'victim_items', 'victim_position', 'victim_ship_type_id', 'victim_alliance_id', 'victim_faction_id', 'killmail_id', 'killmail_time', 'solar_system_id', 'killmail_hash', 'http_last_modified', 'war_id', 'moon_id']
Question:
Is there a better way to do this than I am doing right now? I tried to use generators but couldn't get them to work. I get an AttributeError: 'str' object has no attribute 'read'
all_files_paths = glob(os.path.join('D:\\Master\\kmrest', '*.json'))
def gen_df(files):
for file in files:
with open(file, 'r'):
data = json.load(file)
data = pd.DataFrame([data])
yield data
mainframe = pd.concat(gen_df(all_files_paths), ignore_index=True)
Will using the pd.concat() function with generators lead to quadratic copying?
Also, I am worried opening and closing many files is slowing down computation. Maybe it would be better to create a JSONL file from all the JSONs first and then creating a dataframe for each line.
If you'd like to get your hands on the files, I am trying to work with you can click here. Let me know if further information is needed.
|
[
"You could use pd.json_normalize() to help with the heavy lifting:\nFirst, load your data:\nimport json\nimport requests\nimport tarfile\nfrom tqdm.notebook import tqdm\n\nurl = 'https://data.everef.net/killmails/2022/killmails-2022-11-22.tar.bz2'\nwith requests.get(url, stream=True) as r:\n fobj = io.BytesIO(r.raw.read())\n with tarfile.open(fileobj=fobj, mode='r:bz2') as tar:\n json_files = [it for it in tar if it.name.endswith('.json')]\n data = [json.load(tar.extractfile(it)) for it in tqdm(json_files)]\n\n\nTo do the same with your files:\nimport json\nfrom glob import glob\n\ndef json_load(filename):\n with open(filename) as f:\n return json.load(f)\n\ntopdir = '...' # the dir containing all your json files\ndata = [json_load(fn) for fn in tqdm(glob(f'{topdir}/*.json'))]\n\nOnce you have a list of dicts in data:\nothers = ['killmail_id', 'killmail_hash']\na = pd.json_normalize(data, 'attackers', others, record_prefix='attackers.')\nv = pd.json_normalize(data).drop('attackers', axis=1)\ndf = a.merge(v, on=others)\n\nSome quick inspection:\n>>> df.shape\n(44903, 26)\n\n# check:\n>>> sum([len(d['attackers']) for d in data])\n44903\n\n>>> df.columns\nIndex(['attackers.alliance_id', 'attackers.character_id',\n 'attackers.corporation_id', 'attackers.damage_done',\n 'attackers.final_blow', 'attackers.security_status',\n 'attackers.ship_type_id', 'attackers.weapon_type_id',\n 'attackers.faction_id', 'killmail_id', 'killmail_hash', 'killmail_time',\n 'solar_system_id', 'http_last_modified', 'victim.alliance_id',\n 'victim.character_id', 'victim.corporation_id', 'victim.damage_taken',\n 'victim.items', 'victim.position.x', 'victim.position.y',\n 'victim.position.z', 'victim.ship_type_id', 'victim.faction_id',\n 'war_id', 'moon_id'],\n dtype='object')\n\n>>> df.iloc[:5, :5]\n attackers.alliance_id attackers.character_id attackers.corporation_id attackers.damage_done attackers.final_blow\n0 99007887.0 1.450608e+09 2.932806e+08 1426 False \n1 99010931.0 1.628193e+09 5.668252e+08 1053 False \n2 99007887.0 1.841341e+09 1.552312e+09 1048 False \n3 99007887.0 2.118406e+09 9.872458e+07 662 False \n4 99005839.0 9.573650e+07 9.947834e+08 630 False \n\n>>> df.iloc[-5:, -5:]\n victim.position.z victim.ship_type_id victim.faction_id war_id moon_id\n44898 1.558110e+11 670 NaN NaN NaN \n44899 -7.678686e+10 670 NaN NaN NaN \n44900 -7.678686e+10 670 NaN NaN NaN \n44901 -7.678686e+10 670 NaN NaN NaN \n44902 -7.678686e+10 670 NaN NaN NaN \n\nNote also that, as desired, missing keys for attackers are NaN:\n>>> df.iloc[15:20, :2]\n attackers.alliance_id attackers.character_id\n15 99007887.0 2.117497e+09 \n16 99011893.0 1.593514e+09 \n17 NaN 9.175132e+07 \n18 NaN 2.119191e+09 \n19 99011258.0 1.258332e+09 \n\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"json",
"pandas",
"python"
] |
stackoverflow_0074582715_dataframe_json_pandas_python.txt
|
Q:
Django and DRF Why isn't my password hashing
I am using DRF and I have these pieces of code as models, register view and serializer
But any time I sign up a user the password does not get hashed, and I can't seem to figure out why.
models.py
class UserManager(BaseUserManager):
    def create_user(self, email, password=None, **kwargs):
        if not email:
            raise ValueError("Users must have an email")
        email = self.normalize_email(email).lower()
        user = self.model(email=email, **kwargs)
        user.set_password(password)
        user.save()
        return user

    def create_superuser(self, email, password, **extra_fields):
        if not password:
            raise ValueError("Password is required")
        user = self.create_user(email, password)
        user.is_superuser = True
        user.is_staff = True
        user.save()
        return user
class User(AbstractBaseUser, PermissionsMixin):
    email = models.EmailField(max_length=255, unique=True)
    first_name = models.CharField(max_length=255)
    last_name = models.CharField(max_length=255)
    role = models.CharField(max_length=255)
    department = models.CharField(max_length=255)
    is_active = models.BooleanField(default=True)
    is_verified = models.BooleanField(default=False)
    is_staff = models.BooleanField(default=False)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    objects = UserManager()

    USERNAME_FIELD = "email"
    REQUIRED_FIELDS = ["first_name", "last_name", "role", "department"]

    def __str__(self):
        return self.email
serializers.py
class RegisterSerializer(serializers.ModelSerializer):
    email = serializers.CharField(max_length=255)
    password = serializers.CharField(min_length=8, write_only=True)
    first_name = serializers.CharField(max_length=255)
    last_name = serializers.CharField(max_length=255)
    role = serializers.CharField(max_length=255)
    department = serializers.CharField(max_length=255)

    class Meta:
        model = User
        fields = ["email", "password", "first_name", "last_name", "role", "department"]

    def create(self, validated_data):
        return User.objects.create(**validated_data)

    def validate_email(self, value):
        if User.objects.filter(email=value).exists():
            raise serializers.ValidationError("This email already exists!")
        return value
views.py
class RegisterView(APIView):
    serializer_class = RegisterSerializer

    def post(self, request, *args):
        serializer = self.serializer_class(data=request.data)
        serializer.is_valid(raise_exception=True)
        serializer.save()
        user_data = serializer.data
        user = User.objects.get(email=user_data.get("email"))
        return Response(user_data, status=status.HTTP_201_CREATED)
For some reason, which I don't know, any time a user is created the password is saved in clear text; it does not hash the passwords. The superuser's password is hashed, however, because I created it from the command line, but the API doesn't hash the password. I need some help to fix this.
A:
Instead of passing the plain password, you should use the make_password method provided by Django.
from django.contrib.auth.hashers import make_password
make_password(password, salt=None, hasher='default')
Creates a hashed password in the format used by this application. It takes one mandatory argument: the password in plain-text (string or bytes). Optionally, you can provide a salt and a hashing algorithm to use, if you don’t want to use the defaults (first entry of PASSWORD_HASHERS setting). See Included hashers for the algorithm name of each hasher. If the password argument is None, an unusable password is returned (one that will never be accepted by check_password()).
You can try something like this
hashed_pass = make_password(your_password_here, salt=None, hasher='default')
user = self.create_user(email, hashed_pass)
Source: https://docs.djangoproject.com/en/4.1/topics/auth/passwords/
A:
Try this:
def create(self, validated_data):
    password = validated_data.pop('password')
    user = User.objects.create(**validated_data)
    user.set_password(password)
    user.save()
    return user
A:
I found why it wasn't working. In the create method of the RegisterSerializer I did return User.objects.create(**validated_data),
which saves the request data to the database as-is, including the unhashed password. The correct way is to call the create_user method instead, like so: return User.objects.create_user(**validated_data), since that method uses set_password to hash the password.
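Put together, the corrected serializer method is just (a minimal sketch against the models above):
def create(self, validated_data):
    # create_user() calls set_password() internally, so the password is stored hashed
    return User.objects.create_user(**validated_data)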
|
Django and DRF Why isn't my password hashing
|
I am using DRF and I have these pieces of code as models, register view and serializer
But anytime I signup a user the password does not hashed and I can't see to figure out why.
models.py
class UserManager(BaseUserManager):
def create_user(self, email, password=None, **kwargs):
if not email:
raise ValueError("Users must have an email")
email = self.normalize_email(email).lower()
user = self.model(email=email, **kwargs)
user.set_password(password)
user.save()
return user
def create_superuser(self, email, password, **extra_fields):
if not password:
raise ValueError("Password is required")
user = self.create_user(email, password)
user.is_superuser = True
user.is_staff = True
user.save()
return user
class User(AbstractBaseUser, PermissionsMixin):
email = models.EmailField(max_length=255, unique=True)
first_name = models.CharField(max_length=255)
last_name = models.CharField(max_length=255)
role = models.CharField(max_length=255)
department = models.CharField(max_length=255)
is_active = models.BooleanField(default=True)
is_verified = models.BooleanField(default=False)
is_staff = models.BooleanField(default=False)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
objects = UserManager()
USERNAME_FIELD = "email"
REQUIRED_FIELDS = ["first_name", "last_name", "role", "department"]
def __str__(self):
return self.email
serializers.py
class RegisterSerializer(serializers.ModelSerializer):
email = serializers.CharField(max_length=255)
password = serializers.CharField(min_length=8, write_only=True)
first_name = serializers.CharField(max_length=255)
last_name = serializers.CharField(max_length=255)
role = serializers.CharField(max_length=255)
department = serializers.CharField(max_length=255)
class Meta:
model = User
fields = ["email", "password", "first_name", "last_name", "role", "department"]
def create(self, validated_data):
return User.objects.create(**validated_data)
def validate_email(self, value):
if User.objects.filter(email=value).exists():
raise serializers.ValidationError("This email already exists!")
return value
views.py
class RegisterView(APIView):
serializer_class = RegisterSerializer
def post(self, request, *args):
serializer = self.serializer_class(data=request.data)
serializer.is_valid(raise_exception=True)
serializer.save()
user_data = serializer.data
user = User.objects.get(email=user_data.get("email"))
return Response(user_data, status=status.HTTP_201_CREATED)
for some reason, which I don't know anytime a user is created the password is save in clear text. It does not hash the passwords. The superuser's password is however hashed because I created it with the command line but the api doesn't hash the password. I some help to fix this.
|
[
"Instead of passing plain password you should use make_password method provided by django.\nfrom django.contrib.auth.hashers import make_password\n\nmake_password(password, salt=None, hasher='default')\n\nCreates a hashed password in the format used by this application. It takes one mandatory argument: the password in plain-text (string or bytes). Optionally, you can provide a salt and a hashing algorithm to use, if you don’t want to use the defaults (first entry of PASSWORD_HASHERS setting). See Included hashers for the algorithm name of each hasher. If the password argument is None, an unusable password is returned (one that will never be accepted by check_password()).\nYou can try something like this\nhashed_pass = make_password(your_password_here, salt=None, hasher='default')\nuser = self.create_user(email, hashed_pass)\n\nSource: https://docs.djangoproject.com/en/4.1/topics/auth/passwords/\n",
"Try this:\n def create(self, validated_data):\n password = validated_data.pop('password')\n user = User.objects.create(**validated_data)\n user.set_password(password)\n user.save()\n return user\n\n",
"I found why it wasn't working. In the create method in the RegisterSerializer I did a return User.objects.create(**validated_data)\nwhich saves on the request data in the database, including the password without hashing it first. The correct way is to be calling the create_user method instead, like so return User.objects.create_user(**validated_data) since that method has the set_password mechanism to hash the password.\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"django",
"django_models",
"django_rest_framework",
"python",
"python_3.x"
] |
stackoverflow_0074577582_django_django_models_django_rest_framework_python_python_3.x.txt
|
Q:
Writing a one liner code that outputs filenames with more than 10 characters and whose content has more than 10 lines
Thank you all for the help before. I have now completed a task (so I thought) to achieve the following: I needed to write a one-liner which outputs filenames that have more than 10 characters and whose contents consist of more than 10 lines. My code is the following:
import os; [filename for filename in os.listdir(".") if len(filename) > 10 and len([line for line in open(filename)]) > 10 if filename =! filename.endswith(".ipynb_checkpoints")]
The problem that exists is the following error:
[Errno 13] Permission denied: '.ipynb_checkpoints'
I tried to exclude the .ipynb_checkpoint files but it still doesn't work properly. Do you have any suggestions on how to exclude them or otherwise solve my problem? Thank you for your time!
Greetings.
A:
The issue is partly the condition itself: filename.endswith() returns a boolean and cannot be compared to the string (and =! is not a valid operator). __pycache__ files also cause errors, so I excluded them as well.
Some of the characters in ipynb files cause UnicodeDecodeError: 'charmap' codec can't decode errors, and decoding as Latin-1 seemed to fix that. (UnicodeDecodeError: 'charmap' codec can't decode byte X in position Y: character maps to <undefined>)
you could try something like:
import os; print([filename for filename in os.listdir(".") if not filename.endswith(".ipynb_checkpoints") and not filename.endswith("__pycache__") and len(filename) > 10 and len([line for line in open(filename, encoding="Latin-1")]) > 10])
This solution seems to work, but maybe a more general solution makes more sense, i.e. one that handles PermissionError: [Errno 13] Permission denied directly.
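An expanded, more defensive sketch of the same idea, skipping anything that is not a regular readable file instead of special-casing names:
import os

matches = []
for filename in os.listdir("."):
    if len(filename) > 10 and os.path.isfile(filename):
        try:
            with open(filename, encoding="Latin-1") as f:
                if sum(1 for _ in f) > 10:  # count lines lazily
                    matches.append(filename)
        except OSError:
            pass  # unreadable entry (e.g. PermissionError); skip it
print(matches)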
|
Writing a one liner code that outputs filenames with more than 10 characters and whose content has more than 10 lines
|
Thank you all for the help before. I now had completed a task (so I thought) in order to achieve the following: I needed to write a one liner which outputs filenames that have more than 10 characters and also their contents consists out of more than 10 lines. My code is the following:
import os; [filename for filename in os.listdir(".") if len(filename) > 10 and len([line for line in open(filename)]) > 10 if filename =! filename.endswith(".ipynb_checkpoints")]
The problem that exists is the following error:
[Errno 13] Permission denied: '.ipynb_checkpoints'
I tried to exclude the .ipynb_checkpoint-files but it still doesn´t work properly. Do you have any suggestions on how to exclude them or solve my problem? Thank you for your time!
Greetings.
|
[
"The issue might be the order of your if statements, String.ends with returns a boolean and cannot be compared to the string, and __pycache__ files also cause errors so I added it.\nSome of the characters in ipynb files cause UnicodeDecodeError: 'charmap' codec can't decode errors and decoding in Latin-1 seemed to fix that. (UnicodeDecodeError: 'charmap' codec can't decode byte X in position Y: character maps to <undefined>)\nyou could try something like:\nimport os; print([filename for filename in os.listdir(\".\") if not filename.endswith(\".ipynb_checkpoints\") and not filename.endswith(\"__pycache__\") and len(filename) > 10 and len([line for line in open(filename, encoding=\"Latin-1\")]) > 10])\n\nthis solution seems to work but maybe a more general solution makes more sense.ie: PermissionError: [Errno 13] Permission denied\n"
] |
[
1
] |
[] |
[] |
[
"filenames",
"listdir",
"python"
] |
stackoverflow_0074585327_filenames_listdir_python.txt
|
Q:
I need to read a file and turn it into a dictionary
I have a file of recipes
banana pancake
1 cups of flour
2 tablespoons of sugar
1 eggs
1 cups of milk
3 teaspoons of cinnamon
2 teaspoons of baking powder
0 slices of bread
2 bananas
0 apples
0 peaches
And I need to create a dictionary, where the key is the name of the ingredient and the value is the respective quantity.
Example {
'cups of flour': 1
'tablespoons of sugar': 2
...
}
This is what I tried to do
file = open('recipe.txt', 'r')
alpha = 'abcdefghijklmnopqrstuvwxyz'
d = {}
for line in file:
    if line[0] in alpha:  # this makes sure that the for loop "ignores" the first line for each recipe, which contains the name of the meal
        continue
    else:
        line1 = line.split(' ')  # splits lines into lists
        d[line1[1:]] = line1[0]  # grabs keys and values
print(d)
A:
Running your code generates an error that looks something like this:
TypeError Traceback (most recent call last)
<ipython-input-14-08149fb7c88a> in <module>
6 else:
7 line1 = line.split(' ') #splits lines into lists
----> 8 d[line1[1:]] = line1[0] #grabs keys and values
9 print(d)
10
TypeError: unhashable type: 'list'
The error message indicates that you're trying to use a list (line1[1:]) as a dictionary key, which is not allowed. That's not really what you want either. Take the list of words and turn them back into a string and use that as your key instead.
key = ' '.join(line1[1:]) # join list elements together, separated by a space
d[key] = line1[0]
This question has more details about what you can and can't use as dict keys.
In the future be sure to include details like any error or traceback messages you get in your question. Welcome to SO!
A:
You don't need the alpha variable to ignore the first line; you can just pop the first element from the list of the file's lines. The code should look like this:
file = open("recipe.txt", "r")
dictionary = dict()
lines = file.readlines()
lines.pop(0)

for line in lines:
    line = line.split()
    ingredients = line[1:]
    ingredients = ' '.join(ingredients)
    dictionary[ingredients] = int(line[0])

print(dictionary)
Output Example of your file:
{'cups of flour': 1, 'tablespoons of sugar': 2, 'eggs': 1, 'cups of milk': 1, 'teaspoons of cinnamon': 3, 'teaspoons of baking powder': 2, 'slices of bread': 0, 'bananas': 2, 'apples': 0, 'peaches': 0}
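If the file can hold several recipes back to back (the comment in the question's code suggests each recipe starts with a name line), skipping every line that doesn't start with a digit generalizes the pop(0) approach; a sketch:
d = {}
with open('recipe.txt') as f:
    for line in f:
        parts = line.split()
        if not parts or not parts[0].isdigit():
            continue  # skip blank lines and recipe-name lines
        d[' '.join(parts[1:])] = int(parts[0])
print(d)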
|
I need to read a file and turn it into a dictionary
|
I have a file of recipes
banana pancake
1 cups of flour
2 tablespoons of sugar
1 eggs
1 cups of milk
3 teaspoons of cinnamon
2 teaspoons of baking powder
0 slices of bread
2 bananas
0 apples
0 peaches
And I need to create a dictionary, where the key is the name of the ingredient and the value is the respective unit.
Example {
'cups of flower': 1
'tablespoons of sugar': 2
...
}
This is what I tried to do
file = open('recipe.txt', 'r')
alpha = 'abcdefghijklmnopqrstuvwxyz'
d = {}
for line in file:
if line[0] in alpha:#this makes sure that the for loop "ignores" the first line for each recipe, which contains the name of the meal
continue
else:
line1 = line.split(' ') #splits lines into lists
d[line1[1:]] = line1[0] #grabs keys and values
print(d)
|
[
"Running your code generates an error that looks something like this:\nTypeError Traceback (most recent call last)\n<ipython-input-14-08149fb7c88a> in <module>\n 6 else:\n 7 line1 = line.split(' ') #splits lines into lists\n----> 8 d[line1[1:]] = line1[0] #grabs keys and values\n 9 print(d)\n 10\n\nTypeError: unhashable type: 'list'\n\nThe error message indicates that you're trying to use a list (line1[1:]) as a dictionary key, which is not allowed. That's not really what you want either. Take the list of words and turn them back into a string and use that as your key instead.\nkey = ' '.join(line1[1:]) # join list elements together, separated by a space\nd[key] = line1[0]\n\nThis question has more details about what you can and can't use as dict keys.\nIn the future be sure to include details like any error or traceback messages you get in your question. Welcome to SO!\n",
"You don't need the alpha variable to ignore the first line you just pop the first element from the list of lines of the file. Code should be like this:\nfile = open(\"recipe.txt\", \"r\")\ndictionary = dict()\nlines = file.readlines()\nlines.pop(0)\n\nfor line in lines:\n line = line.split()\n ingredients = line[1:]\n ingredients = ' '.join(ingredients)\n dictionary[ingredients] = int(line[0])\n\nprint(dictionary)\n\nOutput Example of your file:\n{'cups of flour': 1, 'tablespoons of sugar': 2, 'eggs': 1, 'cups of milk': 1, 'teaspoons of cinnamon': 3, 'teaspoons of baking powder': 2, 'slices of bread': 0, 'bananas': 2, 'apples': 0, 'peaches': 0}\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"dictionary",
"python",
"python_3.x"
] |
stackoverflow_0074585167_dictionary_python_python_3.x.txt
|
Q:
Stack python function debugging issue
I have implemented a stack in Python.
class stack:
    arrlen = 0
    def __init__(self, arr, poin):
        self.arr = arr
        self.poin = poin
        arrlen = len(self.arr)
    def push(obj):
        self.poin = (self.poin+1)%arrlen
        self.arr[self.poin] = obj
    def pop():
        self.poin = (self.poin-1)%arrlen
    def printStack():
        for i in range(self.poin):
            print("", self.arr[i])
        print("\n")
I am trying to run this code:
x = int(input("Hello.Please write down the number of tasks you want to do for today:\n"))
array = [None]*x
y = 0
mytasks = stack(array,x);
while(y is not 4):
    y = int(input("What do you want to do 1)add task 2) remove task 3)print remaining tasks 4)exit\n"))
    if y is 1:
        tsk = input("Write down the name of your task:\n")
        mytasks.push(tsk)
    if y is 2:
        mytasks.pop()
    if y is 3:
        mytasks.printStack()
However, every time I type 1 to add a task to the stack, when the push() function is called it stops debugging and shows me an error message:
TypeError: stack.push() takes 1 positional argument but 2 were given
What does this mean?
Before this issue I had had an issue with the __init__ function, so I removed the arrlen variable, but then I re-added it, so I don't know where I am wrong. I am using the IDLE Shell 3.10.4.
A:
self is the first argument of every Python class method. Therefore, your push method should look something like this:
def push(self, obj):
    self.poin = (self.poin+1)%arrlen
    self.arr[self.poin] = obj
And even for methods that don't take any inputs, you should put self as the only parameter.
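Going one step further, note that arrlen = len(self.arr) inside __init__ only creates a local variable, and a bare arrlen inside push will not find the class attribute either, so the methods above would still fail with a NameError once self is fixed. A minimal corrected sketch (one reasonable way to structure it, not the only one):
class Stack:
    def __init__(self, arr, poin):
        self.arr = arr
        self.poin = poin
        self.arrlen = len(arr)  # store on the instance, not in a local

    def push(self, obj):
        self.poin = (self.poin + 1) % self.arrlen
        self.arr[self.poin] = obj

    def pop(self):
        self.poin = (self.poin - 1) % self.arrlen

    def printStack(self):
        for i in range(self.poin):
            print("", self.arr[i])
        print("\n")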
|
Stack python function debugging issue
|
I have implemented the stack in python code.
class stack:
arrlen = 0
def __init__(self,arr,poin):
self.arr = arr
self.poin = poin
arrlen = len(self.arr)
def push(obj):
self.poin = (self.poin+1)%arrlen
self.arr[self.poin] = obj
def pop():
self.poin = (self.poin-1)%arrlen
def printStack():
for i in range(self.poin):
print("",self.arr[i])
print("\n")
I am try to run this code:
x = int(input("Hello.Please write down the number of tasks you want to do for today:\n"))
array = [None]*x
y = 0
mytasks = stack(array,x);
while(y is not 4):
y = int(input("What do you want to do 1)add task 2) remove task 3)print remaining tasks 4)exit\n"))
if y is 1:
tsk = input("Write down the name of your task:\n")
mytasks.push(tsk)
if y is 2:
mytasks.pop()
if y is 3:
mytasks.printStack()
However every time I type 1 to add a task to the stack when the push()function is called it stops debugging and it shows me a error message:
TypeError: stack.push() takes 1 positional argument but 2 were given
What does this mean?
Before this issue I had had a issue with the init function so I had removed the arrlen variable but then I readded the arrlen variable so I dont know where I am wrong.I am using the IDLE Shell 3.10.4
|
[
"self is the first argument of every python class method. Therefore, your method push should look like something like:\ndef push(self, obj):\n\n self.poin = (self.poin+1)%arrlen\n\n self.arr[self.poin] = obj\n\nAnd even the methods where you don't want to take any inputs, you should put self as the only parameter.\n"
] |
[
1
] |
[] |
[] |
[
"debugging",
"python",
"stack"
] |
stackoverflow_0074585374_debugging_python_stack.txt
|
Q:
Drawing recursive circles PYTHON
I can't get it to draw the other, smaller circles; using recursion is confusing me. How can I draw the rest of the circles? I called the function recursively, but it only draws one more, smaller round of circles.
Wanted result:
My result:
import turtle

def centered_circle(circle_radius, turtle):
    turtle.penup()
    turtle.forward(circle_radius)
    turtle.left(90)
    turtle.pendown()
    turtle.circle(circle_radius)
    turtle.penup()
    turtle.right(90)
    turtle.backward(circle_radius)
    turtle.penup()

def circles(circle_radius, turtle):
    centered_circle(circle_radius, turtle)
    turtle.left(90)
    turtle.forward(circle_radius * 1.5)
    centered_circle(circle_radius * 0.5, turtle)
    turtle.backward(circle_radius * 1.5)
    turtle.right(180)
    turtle.forward(circle_radius * 1.5)
    centered_circle(circle_radius * 0.5, turtle)
    turtle.backward(circle_radius * 1.5)
    turtle.left(90)
    turtle.forward(circle_radius * 1.5)
    centered_circle(circle_radius * 0.5, turtle)
    turtle.backward(circle_radius * 1.5)

# Here is the recursive function which my question is about
def recursive_circles(circle_radius, turtle):
    if circle_radius > 2:
        centered_circle(circle_radius, turtle)
        turtle.left(90)
        turtle.forward(circle_radius * 1.5)
        centered_circle(circle_radius * 0.5, turtle)
        recursive_circles(circle_radius - 25, turtle)  # Recursive call
        turtle.backward(circle_radius * 1.5)
        turtle.right(180)
        turtle.forward(circle_radius * 1.5)
        centered_circle(circle_radius * 0.5, turtle)
        recursive_circles(circle_radius - 25, turtle)  # Recursive call
        turtle.backward(circle_radius * 1.5)
        turtle.left(90)
        turtle.forward(circle_radius * 1.5)
        centered_circle(circle_radius * 0.5, turtle)
        recursive_circles(circle_radius - 25, turtle)  # Recursive call
        turtle.backward(circle_radius * 1.5)

def main():
    # Set up the turtle and window
    recursive_turtle = turtle.Turtle()
    recursive_turtle.speed(0)
    myWin = turtle.Screen()
    recursive_turtle.penup()
    recursive_turtle.left(90)
    recursive_turtle.backward(100)
    # Draw the circles
    recursive_turtle.penup()
    recursive_turtle.goto(0, 100)
    recursive_turtle.setheading(90)
    recursive_circles(50, recursive_turtle)
    myWin.exitonclick()

if __name__ == "__main__":
    main()
The if condition has to stay > 2, and no for or while loops are allowed.
A:
You are overthinking some parts of your program.
The recursive_circles() function only needs to draw its own circle, move to the other relative positions, and call recursive_circles() to draw all the other circles down from there.
Also, the radius should be halved on each recursive call.
import turtle

def centered_circle(circle_radius, turtle):
    turtle.penup()
    turtle.forward(circle_radius)
    turtle.left(90)
    turtle.pendown()
    turtle.circle(circle_radius)
    turtle.penup()
    turtle.right(90)
    turtle.backward(circle_radius)
    turtle.penup()

def recursive_circles(circle_radius, turtle):
    if circle_radius > 2:
        centered_circle(circle_radius, turtle)
        turtle.left(90)
        turtle.forward(circle_radius * 1.5)
        recursive_circles(circle_radius * 0.5, turtle)  # Recursive call
        turtle.backward(circle_radius * 1.5)
        turtle.right(180)
        turtle.forward(circle_radius * 1.5)
        recursive_circles(circle_radius * 0.5, turtle)  # Recursive call
        turtle.backward(circle_radius * 1.5)
        turtle.left(90)
        turtle.forward(circle_radius * 1.5)
        recursive_circles(circle_radius * 0.5, turtle)  # Recursive call
        turtle.backward(circle_radius * 1.5)

def main():
    # Set up the turtle and window
    recursive_turtle = turtle.Turtle()
    recursive_turtle.speed(0)
    recursive_turtle.hideturtle()
    myWin = turtle.Screen()
    # Draw the circles
    recursive_turtle.penup()
    recursive_turtle.goto(0, 100)
    recursive_turtle.setheading(90)
    recursive_circles(50, recursive_turtle)
    myWin.exitonclick()

if __name__ == "__main__":
    main()
|
Drawing recursive circles PYTHON
|
I can't get to draw the other smaller circles. Using recursive circles it's confusing me. How can I draw the rest of the circles? I called recursively the function draw_fractal_circles but it only draws one more smaller round of circles.
Wanted result:
My result:
import turtle
def centered_circle(circle_radius, turtle):
turtle.penup()
turtle.forward(circle_radius)
turtle.left(90)
turtle.pendown()
turtle.circle(circle_radius)
turtle.penup()
turtle.right(90)
turtle.backward(circle_radius)
turtle.penup()
def circles(circle_radius, turtle):
centered_circle(circle_radius, turtle)
turtle.left(90)
turtle.forward(circle_radius * 1.5)
centered_circle(circle_radius * 0.5, turtle)
turtle.backward(circle_radius * 1.5)
turtle.right(180)
turtle.forward(circle_radius * 1.5)
centered_circle(circle_radius * 0.5, turtle)
turtle.backward(circle_radius * 1.5)
turtle.left(90)
turtle.forward(circle_radius * 1.5)
centered_circle(circle_radius * 0.5, turtle)
turtle.backward(circle_radius * 1.5)
# Here is the recursive function which my question is about
def recursive_circles(circle_radius, turtle):
if circle_radius > 2:
centered_circle(circle_radius, turtle)
turtle.left(90)
turtle.forward(circle_radius * 1.5)
centered_circle(circle_radius * 0.5, turtle)
recursive_circles(circle_radius - 25, turtle) #Recursive call
turtle.backward(circle_radius * 1.5)
turtle.right(180)
turtle.forward(circle_radius * 1.5)
centered_circle(circle_radius * 0.5, turtle)
recursive_circles(circle_radius - 25, turtle) #Recursive call
turtle.backward(circle_radius * 1.5)
turtle.left(90)
turtle.forward(circle_radius * 1.5)
centered_circle(circle_radius * 0.5, turtle)
recursive_circles(circle_radius - 25, turtle) #Recursive call
turtle.backward(circle_radius * 1.5)
def main():
# Set up the turtle and window
recursive_turtle = turtle.Turtle()
recursive_turtle.speed(0)
myWin = turtle.Screen()
recursive_turtle.penup()
recursive_turtle.left(90)
recursive_turtle.backward(100)
# Draw the circles
recursive_turtle.penup()
recursive_turtle.goto(0, 100)
recursive_turtle.setheading(90)
recursive_circles(50, recursive_turtle)
myWin.exitonclick()
if __name__ == "__main__":
main()
The if statement has to check > 2, and no for or while loops are allowed.
|
[
"You are overthinking some parts of your program.\nThe recursive_circles() function only needs to draw its own circle, move to other relative positions and call recursive_circles() to draw all the other circles down from there,\nAlso the radius should be halved in size on forward calls.\nimport turtle\n\ndef centered_circle(circle_radius, turtle):\n turtle.penup()\n turtle.forward(circle_radius)\n turtle.left(90)\n turtle.pendown()\n turtle.circle(circle_radius)\n turtle.penup()\n turtle.right(90)\n turtle.backward(circle_radius)\n turtle.penup()\n\ndef recursive_circles(circle_radius, turtle):\n if circle_radius > 2:\n centered_circle(circle_radius, turtle)\n turtle.left(90)\n turtle.forward(circle_radius * 1.5)\n recursive_circles(circle_radius * 0.5, turtle) #Recursive call\n \n turtle.backward(circle_radius * 1.5)\n turtle.right(180)\n turtle.forward(circle_radius * 1.5)\n recursive_circles(circle_radius * 0.5, turtle) #Recursive call \n\n turtle.backward(circle_radius * 1.5)\n turtle.left(90)\n turtle.forward(circle_radius * 1.5)\n recursive_circles(circle_radius * 0.5, turtle) #Recursive call \n turtle.backward(circle_radius * 1.5)\n\n\ndef main():\n # Set up the turtle and window\n recursive_turtle = turtle.Turtle()\n recursive_turtle.speed(0)\n recursive_turtle.hideturtle()\n myWin = turtle.Screen()\n\n # Draw the circles\n recursive_turtle.penup()\n recursive_turtle.goto(0, 100)\n recursive_turtle.setheading(90)\n recursive_circles(50, recursive_turtle)\n\n myWin.exitonclick()\n\nif __name__ == \"__main__\":\n main()\n\n"
] |
[
1
] |
[] |
[] |
[
"python",
"recursion"
] |
stackoverflow_0074585281_python_recursion.txt
|
Q:
Adding markers to Sympy plots
I have created two lines in a Sympy plot and would like to add markers to each line. Using the tip from this post, the following does what I expect.
import sympy as sp
x = sp.symbols('x')
sp.plot(x,-x, markers=[{'args' : [5,5, 'r*'], 'ms' : 10},
{'args' : [5,-5,'r*'],'ms' : 10}])
However, when I break up the above call into two separate plot commands, each with their own set of markers, both lines show up, but only the marker on the first line shows up.
p0 = sp.plot(x, markers=[{'args' : [5,5, 'r*'],'ms' : 10}],show=False)
p1 = sp.plot(-x,markers=[{'args' : [5,-5,'r*'],'ms' : 10}], show=False)
p0.extend(p1)
p0.show()
What am I missing?
Update: One reason for breaking up the single call into two plot calls is to add custom labels to each expression.
A:
Sadly, the extend method only considers the data series, not the markers. You can index the plot object in order to access the data series and apply a label. For example:
from sympy import *
var("x")
p = plot(x,-x, markers=[{'args' : [5, 5, 'r*'], 'ms' : 10, "label": "a"},
{'args' : [5, -5, 'r*'],'ms' : 10, "label": "b"}], show=False, legend=True)
p[0].label = "a"
p[1].label = "b"
p.show()
Note that you have to set legend=True to make the legend visible.
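A minimal sketch of the two-call workflow with labels applied after extend() (the markers of the second plot are still dropped, as explained above):
import sympy as sp
x = sp.symbols('x')
p0 = sp.plot(x, markers=[{'args': [5, 5, 'r*'], 'ms': 10}], show=False)
p1 = sp.plot(-x, show=False)
p0.extend(p1)      # merges the data series only
p0[0].label = "a"  # label each series by indexing the combined plot
p0[1].label = "b"
p0.legend = True   # required for the labels to be visible
p0.show()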
|
Adding markers to Sympy plots
|
I have created two lines in a Sympy plot and would like to add markers to each line. Using the tip from this post, the following does what I expect.
import sympy as sp
x = sp.symbols('x')
sp.plot(x,-x, markers=[{'args' : [5,5, 'r*'], 'ms' : 10},
{'args' : [5,-5,'r*'],'ms' : 10}])
However, when I break up the above call into two separate plot commands, each with their own set of markers, both lines show up, but only the marker on the first line shows up.
p0 = sp.plot(x, markers=[{'args' : [5,5, 'r*'],'ms' : 10}],show=False)
p1 = sp.plot(-x,markers=[{'args' : [5,-5,'r*'],'ms' : 10}], show=False)
p0.extend(p1)
p0.show()
What am I missing?
Update: One reason for breaking up the single call into two plot calls is to add custom labels to each expression.
|
[
"Sadly, the extend method only consider the data series, not the markers. You can index the plot object in order to access the data series and apply a label. For example:\nfrom sympy import *\nvar(\"x\")\np = plot(x,-x, markers=[{'args' : [5, 5, 'r*'], 'ms' : 10, \"label\": \"a\"},\n {'args' : [5, -5, 'r*'],'ms' : 10, \"label\": \"b\"}], show=False, legend=True)\np[0].label = \"a\"\np[1].label = \"b\"\np.show()\n\nNote that you have to set legend=True to make the legend visible.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"sympy"
] |
stackoverflow_0074584653_python_sympy.txt
|
Q:
Exploding a Pandas Crosstab Table
I have created a pandas crosstab table. My data has groupings as shown along the left column, where a particular row can correspond to more than one organization. I would like to 'explode' these out such that values in [org1, org2] would be counted in both org1 and org2. Therefore, I am trying to display one row in the crosstab table for each organization (i.e., one row for each org1, org2, org3, and org4).
Is there a good function or series of functions I can call to accomplish this explosion/semi-level of detail calculation? I have not been successful.
Here is some sample data:
Here is some expected output... assuming I did my mental math correctly :)
A:
Have you tried using explode before crosstab?
#import ast
#df['ColName2007.1099']=df['ColName2007.1099'].apply(ast.literal_eval)
df = df.explode('ColName2007.1099')
df = pd.crosstab(df['ColName2007.1099'],columns=df['Category'],margins=True)
print(df)
'''
Category Class A Class B Class D All
ColName2007.1099
org1 2 3 1 6
org2 3 2 2 7
org3 2 1 0 3
org4 1 0 1 2
All 8 6 4 18
'''
Note, the data you gave as an example does not match your expected output (for example, there is no Class C in the data you provided), but the code works correctly on the data given. If this is not what you want, please provide data that can be copied and pasted and is consistent with the expected output.
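For completeness, a self-contained sketch with made-up data (the column names are hypothetical) showing the explode-then-crosstab pattern:
import pandas as pd
# hypothetical data: each row can belong to several organisations
df = pd.DataFrame({
    'orgs': [['org1', 'org2'], ['org2'], ['org1', 'org3'], ['org4']],
    'Category': ['Class A', 'Class B', 'Class A', 'Class B'],
})
exploded = df.explode('orgs')  # one row per (row, organisation) pair
print(pd.crosstab(exploded['orgs'], columns=exploded['Category'], margins=True))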
|
Exploding a Pandas Crosstab Table
|
I have created a pandas crosstab table. My data has groupings as shown along the left column, where a particular row can correspond to more than one organization. I would like to 'explode' these out such that values in [org1, org2] would be counted in both org1 and org2. Therefore, I am trying to display one row in the crosstab table for each organization (i.e., one row for each org1, org2, org3, and org4).
Is there a good function or series of functions I can call to accomplish this explosion/semi-level of detail calculation? I have not been successful.
Here is some sample data:
Here is some expected output... assuming I did my mental math correctly :)
|
[
"Have you tried using explode before using crosstab ?\n#import ast\n#df['ColName2007.1099']=df['ColName2007.1099'].apply(ast.literal_eval)\n\ndf = df.explode('ColName2007.1099')\ndf = pd.crosstab(df['ColName2007.1099'],columns=df['Category'],margins=True)\nprint(df)\n'''\nCategory Class A Class B Class D All\nColName2007.1099 \norg1 2 3 1 6\norg2 3 2 2 7\norg3 2 1 0 3\norg4 1 0 1 2\nAll 8 6 4 18\n'''\n\nNote, the data you gave as an example does not match your expected output (For example, there is no Class C in the data you provided.\n). But it works correctly in the data you provided. If this is not what you want, please provide data that can be copied and pasted and compatible with the expected output.\n"
] |
[
0
] |
[] |
[] |
[
"list",
"pandas",
"pivot_table",
"python",
"python_3.x"
] |
stackoverflow_0074584953_list_pandas_pivot_table_python_python_3.x.txt
|
Q:
Can a parquet file exceed 2.1GB?
I'm having an issue storing a large dataset (around 40GB) in a single parquet file.
I'm using the fastparquet library to append pandas.DataFrames to this parquet dataset file. The following is a minimal example program that appends chunks to a parquet file until it crashes as the file-size in bytes exceeds the int32 threshold of 2147483647 (2.1GB):
Link to minimum reproducible example code
Everything goes fine until the dataset hits 2.1GB, at which point I get the following errors:
OverflowError: value too large to convert to int
Exception ignored in: 'fastparquet.cencoding.write_thrift'
Because the exception is ignored internally, it's very hard to figure out which specific thrift it's upset about and get a stack trace. However, it's very clear that it is linked to the file size exceeding the int32 range.
Also these thrift definitions come from the parquet format repo itself, so I wonder if this is a limitation built into the design of the parquet format?
A:
Finally, I figured out that I was running into a genuine bug in the python library fastparquet, which resulted in a fix in the main library.
This is a link to the salient issue on Github.
The commit in which the issue is fixed is 89d16a2.
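For reference, the append pattern from the question looks roughly like this (a minimal sketch; the file name and chunk sizes are made up, and it assumes a fastparquet release that contains the fix):
import pandas as pd
from fastparquet import write
df = pd.DataFrame({'a': range(1_000_000)})
write('big.parquet', df)  # create the dataset file
for _ in range(10):
    write('big.parquet', df, append=True)  # append further chunks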
|
Can a parquet file exceed 2.1GB?
|
I'm having an issue storing a large dataset (around 40GB) in a single parquet file.
I'm using the fastparquet library to append pandas.DataFrames to this parquet dataset file. The following is a minimal example program that appends chunks to a parquet file until it crashes as the file-size in bytes exceeds the int32 threshold of 2147483647 (2.1GB):
Link to minimum reproducible example code
Everything goes fine until the dataset hits 2.1GB, at which point I get the following errors:
OverflowError: value too large to convert to int
Exception ignored in: 'fastparquet.cencoding.write_thrift'
Because the exception is ignored internally, it's very hard to figure out which specific thrift it's upset about and get a stack trace. However, it's very clear that it is linked to the file size exceeding the int32 range.
Also these thrift definitions come from the parquet format repo itself, so I wonder if this is a limitation built into the design of the parquet format?
|
[
"Finally, I figured out that I was running into a genuine bug in the python library fastparquet, which resulted in a fix in the main library.\nThis is a link to the salient issue on Github.\nThe commit in which the issue is fixed is 89d16a2.\n"
] |
[
1
] |
[] |
[] |
[
"dataset",
"fastparquet",
"machine_learning",
"parquet",
"python"
] |
stackoverflow_0074562453_dataset_fastparquet_machine_learning_parquet_python.txt
|
Q:
Writing a constraint with cplex python
I shared the parameters, variables and notation of the model:
I have difficulty in writing equation 7, which is one of the constraints of the model, with cplex. The code block I wrote is as follows:
mdl.add_constraints(T[i, j, k] >= mdl.sum(p[l]*y[i, l, s] + s[l]*x[i, l, s] for l in N for s in ???)- d[j] - 100000*(1 - x[i, j, k])
for i in M
for j in N
for k in N) #7
Could you please help me about this? It will be very welcome. If desired, I can also share all the model code I wrote.
A:
The "kicker" (hard part) in that constraint is the fact that the range of the sum over s is bounded by the index k. Because your indices are numerical, you could just use a range command to generate the appropriate subset.
caution: you have 2 elements named s in there, so you will need to rename one. I changed your index variable.
also:
you had mdl.sum() You don't / shouldn't need mdl. there ?
None of your variables / sets are "model objects" unless you renamed them previously (not recommended). I would expect to see something like mdl.x[...] etc. and mdl.M etc....
Code:
mdl.add_constraints(T[i, j, k] >= sum(p[l]*y[i, l, s_idx] + s[l]*x[i, l, s_idx] for l in N for s_idx in range(1, k+1))- d[j] - 100000*(1 - x[i, j, k])
for i in M
for j in N
for k in N) #7
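As an aside, docplex also provides Model.sum(), so if you prefer the model object to build the expression, the same constraint can be written as follows (a sketch assuming the same names as above):
mdl.add_constraints(T[i, j, k] >= mdl.sum(p[l]*y[i, l, s_idx] + s[l]*x[i, l, s_idx] for l in N for s_idx in range(1, k+1)) - d[j] - 100000*(1 - x[i, j, k])
                    for i in M
                    for j in N
                    for k in N)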
|
Writing a constraint with cplex python
|
I shared the parameters, variables and notation of the model:
I have difficulty in writing equation 7, which is one of the constraints of the model, with cplex. The code block I wrote is as follows:
mdl.add_constraints(T[i, j, k] >= mdl.sum(p[l]*y[i, l, s] + s[l]*x[i, l, s] for l in N for s in ???)- d[j] - 100000*(1 - x[i, j, k])
for i in M
for j in N
for k in N) #7
Could you please help me about this? It will be very welcome. If desired, I can also share all the model code I wrote.
|
[
"The \"kicker\" (hard part) in that constraint is the fact that the range of the sum over s is bounded by the index k. Because your indices are numerical, you could just use a range command to generate the appropriate subset.\ncaution: you have 2 elements named s in there, so you will need to rename one. I changed your index variable.\nalso:\n\nyou had mdl.sum() You don't / shouldn't need mdl. there ?\n\nNone of your variables / sets are \"model objects\" unless you renamed them previously (not recommended). I would expect to see something like mdl.x[...] etc. and mdl.M etc....\n\n\nCode:\nmdl.add_constraints(T[i, j, k] >= sum(p[l]*y[i, l, s_idx] + s[l]*x[i, l, s_idx] for l in N for s_idx in range(1, k+1))- d[j] - 100000*(1 - x[i, j, k]) \n for i in M\n for j in N\n for k in N) #7\n\n"
] |
[
0
] |
[] |
[] |
[
"constraints",
"cplex",
"optimization",
"python"
] |
stackoverflow_0074585013_constraints_cplex_optimization_python.txt
|
Q:
AttributeError at /sign-up and /sign-in 'WSGIRequest' object has no attribute 'is_ajax'
I am getting this problem; any help will be appreciated. I'm getting an error trying to sign in or sign up. Error below.
AttributeError at /sign-up
'WSGIRequest' object has no attribute 'is_ajax'. I know that function is deprecated now, but I can't seem to fix the issue.
mixins.py
class AjaxFormMixin(object):
'''
Mixin to ajaxify django form - can be over written in view by calling form_valid method
'''
def form_invalid(self, form):
response = super(AjaxFormMixin, self).form_invalid(form)
if self.request.is_ajax():
message = FormErrors(form)
return JsonResponse({'result': 'Error', 'message': message})
return response
def form_valid(self, form):
response = super(AjaxFormMixin, self).form_valid(form)
if self.request.is_ajax():
form.save()
return JsonResponse({'result': 'Success', 'message': ""})
return response
views.py
def profile_view(request):
'''
function view to allow users to update their profile
'''
user = request.user
up = user.userprofile
form = UserProfileForm(instance=up)
if request.is_ajax():
form = UserProfileForm(data=request.POST, instance=up)
if form.is_valid():
obj = form.save()
obj.has_profile = True
obj.save()
result = "Success"
message = "Your profile has been updated"
else:
message = FormErrors(form)
data = {'result': result, 'message': message}
return JsonResponse(data)
else:
context = {'form': form}
context['google_api_key'] = settings.GOOGLE_API_KEY
context['base_country'] = settings.BASE_COUNTRY
return render(request, 'users/profile.html', context)
class SignUpView(AjaxFormMixin, FormView):
'''
Generic FormView with our mixin for user sign-up with reCAPTURE security
'''
template_name = "users/sign_up.html"
form_class = UserForm
success_url = "/"
# reCAPTURE key required in context
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context["recaptcha_site_key"] = settings.RECAPTCHA_PUBLIC_KEY
return context
# over write the mixin logic to get, check and save reCAPTURE score
def form_valid(self, form):
response = super(AjaxFormMixin, self).form_valid(form)
if self.request.is_ajax():
token = form.cleaned_data.get('token')
captcha = reCAPTCHAValidation(token)
if captcha["success"]:
obj = form.save()
obj.email = obj.username
obj.save()
up = obj.userprofile
up.captcha_score = float(captcha["score"])
up.save()
login(self.request, obj,
backend='django.contrib.auth.backends.ModelBackend')
# change result & message on success
result = "Success"
message = "Thank you for signing up"
data = {'result': result, 'message': message}
return JsonResponse(data)
return response
class SignInView(AjaxFormMixin, FormView):
'''
Generic FormView with our mixin for user sign-in
'''
template_name = "users/sign_in.html"
form_class = AuthForm
success_url = "/"
def form_valid(self, form):
response = super(AjaxFormMixin, self).form_valid(form)
if self.request.is_ajax():
username = form.cleaned_data.get('username')
password = form.cleaned_data.get('password')
# attempt to authenticate user
user = authenticate(
self.request, username=username, password=password)
if user is not None:
login(self.request, user,
backend='django.contrib.auth.backends.ModelBackend')
result = "Success"
message = 'You are now logged in'
else:
message = FormErrors(form)
data = {'result': result, 'message': message}
return JsonResponse(data)
return response
I know there's a post with similar issue, but I'm kinda struggling to fix it on my end.
A:
Use if request.headers.get('x-requested-with') == 'XMLHttpRequest': everywhere instead, like so:
def profile_view(request):
'''
function view to allow users to update their profile
'''
user = request.user
up = user.userprofile
form = UserProfileForm(instance=up)
if request.headers.get('x-requested-with') == 'XMLHttpRequest':
form = UserProfileForm(data=request.POST, instance=up)
if form.is_valid():
obj = form.save()
obj.has_profile = True
obj.save()
result = "Success"
message = "Your profile has been updated"
else:
message = FormErrors(form)
data = {'result': result, 'message': message}
return JsonResponse(data)
else:
context = {'form': form}
context['google_api_key'] = settings.GOOGLE_API_KEY
context['base_country'] = settings.BASE_COUNTRY
return render(request, 'users/profile.html', context)
For class-based views, use it as if self.request.headers.get('x-requested-with') == 'XMLHttpRequest':
A:
As of django-3.1, the .is_ajax() method [Django-doc] was deprecated. Indeed, in the release notes, we see:
The HttpRequest.is_ajax() method is deprecated as it relied on a jQuery-specific way of signifying AJAX calls, while current usage tends to use the JavaScript Fetch API. Depending on your use case, you can either write your own AJAX detection method, or use the new HttpRequest.accepts() method if your code depends on the client Accept HTTP header.
Originally, it used:
def is_ajax():
return request.headers.get('x-requested-with') == 'XMLHttpRequest'
But note that this is, and has always been, something specific to jQuery, and therefore it doesn't make much sense: a browser or HTTP library can always mimic this behavior, and you can make AJAX requests without this header, so it is not reliable.
You can check if the browser accepts json/xml with .accepts(…) [Django-doc] which is probably what an AJAX request will try to accept, so:
self.request.accepts('application/json')
or:
self.request.accepts('application/xml')
are likely candidates for this.
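For example, a sketch of the mixin's form_valid rewritten with .accepts(…) (Django >= 3.1); note it treats any client whose Accept header covers JSON as an AJAX caller, which is an assumption you may want to tighten:
from django.http import JsonResponse

class AjaxFormMixin(object):
    def form_valid(self, form):
        response = super().form_valid(form)
        # assumption: a JSON-accepting client is an AJAX caller
        if self.request.accepts('application/json'):
            form.save()
            return JsonResponse({'result': 'Success', 'message': ''})
        return response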
|
AttributeError at /sign-up and /sign-in 'WSGIRequest' object has no attribute 'is_ajax'
|
I am getting this problem; any help will be appreciated. I'm getting an error trying to sign in or sign up. Error below.
AttributeError at /sign-up
'WSGIRequest' object has no attribute 'is_ajax'. I know that function is deprecated now, but I can't seem to fix the issue.
mixins.py
class AjaxFormMixin(object):
'''
Mixin to ajaxify django form - can be over written in view by calling form_valid method
'''
def form_invalid(self, form):
response = super(AjaxFormMixin, self).form_invalid(form)
if self.request.is_ajax():
message = FormErrors(form)
return JsonResponse({'result': 'Error', 'message': message})
return response
def form_valid(self, form):
response = super(AjaxFormMixin, self).form_valid(form)
if self.request.is_ajax():
form.save()
return JsonResponse({'result': 'Success', 'message': ""})
return response
views.py
def profile_view(request):
'''
function view to allow users to update their profile
'''
user = request.user
up = user.userprofile
form = UserProfileForm(instance=up)
if request.is_ajax():
form = UserProfileForm(data=request.POST, instance=up)
if form.is_valid():
obj = form.save()
obj.has_profile = True
obj.save()
result = "Success"
message = "Your profile has been updated"
else:
message = FormErrors(form)
data = {'result': result, 'message': message}
return JsonResponse(data)
else:
context = {'form': form}
context['google_api_key'] = settings.GOOGLE_API_KEY
context['base_country'] = settings.BASE_COUNTRY
return render(request, 'users/profile.html', context)
class SignUpView(AjaxFormMixin, FormView):
'''
Generic FormView with our mixin for user sign-up with reCAPTURE security
'''
template_name = "users/sign_up.html"
form_class = UserForm
success_url = "/"
# reCAPTURE key required in context
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context["recaptcha_site_key"] = settings.RECAPTCHA_PUBLIC_KEY
return context
# over write the mixin logic to get, check and save reCAPTURE score
def form_valid(self, form):
response = super(AjaxFormMixin, self).form_valid(form)
if self.request.is_ajax():
token = form.cleaned_data.get('token')
captcha = reCAPTCHAValidation(token)
if captcha["success"]:
obj = form.save()
obj.email = obj.username
obj.save()
up = obj.userprofile
up.captcha_score = float(captcha["score"])
up.save()
login(self.request, obj,
backend='django.contrib.auth.backends.ModelBackend')
# change result & message on success
result = "Success"
message = "Thank you for signing up"
data = {'result': result, 'message': message}
return JsonResponse(data)
return response
class SignInView(AjaxFormMixin, FormView):
'''
Generic FormView with our mixin for user sign-in
'''
template_name = "users/sign_in.html"
form_class = AuthForm
success_url = "/"
def form_valid(self, form):
response = super(AjaxFormMixin, self).form_valid(form)
if self.request.is_ajax():
username = form.cleaned_data.get('username')
password = form.cleaned_data.get('password')
# attempt to authenticate user
user = authenticate(
self.request, username=username, password=password)
if user is not None:
login(self.request, user,
backend='django.contrib.auth.backends.ModelBackend')
result = "Success"
message = 'You are now logged in'
else:
message = FormErrors(form)
data = {'result': result, 'message': message}
return JsonResponse(data)
return response
I know there's a post with similar issue, but I'm kinda struggling to fix it on my end.
|
[
"Use it like if request.headers.get('x-requested-with') == 'XMLHttpRequest': everywhere so:\ndef profile_view(request):\n '''\n function view to allow users to update their profile\n '''\n user = request.user\n up = user.userprofile\n\n form = UserProfileForm(instance=up)\n\n if request.headers.get('x-requested-with') == 'XMLHttpRequest':\n form = UserProfileForm(data=request.POST, instance=up)\n if form.is_valid():\n obj = form.save()\n obj.has_profile = True\n obj.save()\n result = \"Success\"\n message = \"Your profile has been updated\"\n else:\n message = FormErrors(form)\n data = {'result': result, 'message': message}\n return JsonResponse(data)\n\n else:\n\n context = {'form': form}\n context['google_api_key'] = settings.GOOGLE_API_KEY\n context['base_country'] = settings.BASE_COUNTRY\n\n return render(request, 'users/profile.html', context)\n\nFor class based views use it as if self.request.headers.get('x-requested-with') == 'XMLHttpRequest':\n",
"As of django-3.1, the .is_ajax() method [Django-doc] was deprecated. Indeed, in the release notes, we see:\n\nThe HttpRequest.is_ajax() method is deprecated as it relied on a jQuery-specific way of signifying AJAX calls, while current usage tends to use the JavaScript Fetch API. Depending on your use case, you can either write your own AJAX detection method, or use the new HttpRequest.accepts() method if your code depends on the client Accept HTTP header.\n\nOriginally, it used:\n\ndef is_ajax():\n return request.headers.get('x-requested-with') == 'XMLHttpRequest'\n\n\nBut note that this is, and has always been something specific to jQuery, and therefore it makes not much sense: a browser or HTTP library can always mimic this behavior, and you can make AJAX requests without this header, and thus it is not reliable.\nYou can check if the browser accepts json/xml with .accepts(…) [Django-doc] which is probably what an AJAX request will try to accept, so:\nself.request.accepts('application/json')\n\nor:\nself.request.accepts('application/xml')\n\nare likely candidates for this.\n"
] |
[
2,
1
] |
[] |
[] |
[
"ajax",
"django",
"django_forms",
"django_views",
"python"
] |
stackoverflow_0074585414_ajax_django_django_forms_django_views_python.txt
|
Q:
Multiple Linear Regression Using Scikit-learn Error
I'm relatively new to Python and I am trying to make a Multiple Linear Regression model which has two predictor variables and one dependent. While doing my research on this, I found that Scikit provides a class to do this. I tried to get a model for my variables and I got the following message:
Shape of passed values is (3, 1), indices imply (2, 1)
The code I've used is:
from sklearn import linear_model
data = pd.read_csv('data.csv',delimiter=',',header=0)
SEED_VALUE = 12356789
np.random.seed(SEED_VALUE)
data_train, data_test = train_test_split(data, test_size=0.3, random_state=SEED_VALUE)
print('Train size: {}'.format(data_train.shape[0]))
print('Test size: {}'.format(data_test.shape[0]))
data_train_X = data_train.values[:,0:2] #predictor variables
data_train_Y = data_train.values[:,2].astype('float') # dependant
model = linear_model.LinearRegression()
np.random.seed(SEED_VALUE)
model.fit(data_train_X, data_train_Y)
coef = pd.DataFrame([model.intercept_, *model.coef_], ['(Intercept)', *data_train_X.columns], columns=['Coefficients'])
coef
I got the error in the model.fit(data_train_X, data_train_Y) line. I have searched online for different ways to use this scikit-learn method and I have found that other people with the same code had no error, so I don't know where my mistake could be.
Thank you all so much
The data file is like this:
"retardation","distrust","degree"
2.80,6.1,44
3.10,5.1,25
2.59,6.0,10
3.36,6.9,28
2.80,7.0,25
3.35,5.6,72
2.99,6.3,45
2.99,7.2,25
2.92,6.9,12
3.23,6.5,24
3.37,6.8,46
2.72,6.6, 8
A:
It would be great if you could also provide the input data, but even without it, this is most likely due to the fact that you use an index column from your file. You should remove it and it will be fine.
If you can give an example of the data that you use (columns), I will be able to check it further.
I reran your code with the data you've provided and it does not show any errors for the model.fit(data_train_X, data_train_Y) line :)
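If an extra index column is indeed the culprit, selecting the predictors by name sidesteps the slicing problem, and keeping data_train_X as a DataFrame also makes the later *data_train_X.columns expression valid (a sketch using the column names from the posted file):
from sklearn import linear_model
data_train_X = data_train[['retardation', 'distrust']]  # predictor variables
data_train_Y = data_train['degree'].astype(float)       # dependent variable
model = linear_model.LinearRegression()
model.fit(data_train_X, data_train_Y)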
|
Multiple Linear Regression Using Scikit-learn Error
|
I'm relatively new to Python and I am trying to make a Multiple Linear Regression model which has two predictor variables and one dependent. While doing my research on this, I found that Scikit provides a class to do this. I tried to get a model for my variables and I got the following message:
Shape of passed values is (3, 1), indices imply (2, 1)
The code I've used is:
from sklearn import linear_model
data = pd.read_csv('data.csv',delimiter=',',header=0)
SEED_VALUE = 12356789
np.random.seed(SEED_VALUE)
data_train, data_test = train_test_split(data, test_size=0.3, random_state=SEED_VALUE)
print('Train size: {}'.format(data_train.shape[0]))
print('Test size: {}'.format(data_test.shape[0]))
data_train_X = data_train.values[:,0:2] #predictor variables
data_train_Y = data_train.values[:,2].astype('float') # dependant
model = linear_model.LinearRegression()
np.random.seed(SEED_VALUE)
model.fit(data_train_X, data_train_Y)
coef = pd.DataFrame([model.intercept_, *model.coef_], ['(Intercept)', *data_train_X.columns], columns=['Coefficients'])
coef
I got the error in the model.fit(data_train_X, data_train_Y) line. I have searched online for different ways to use this scikit-learn method and I have found that other people with the same code had no error, so I don't know where my mistake could be.
Thank you all so much
The data file is like this:
"retardation","distrust","degree"
2.80,6.1,44
3.10,5.1,25
2.59,6.0,10
3.36,6.9,28
2.80,7.0,25
3.35,5.6,72
2.99,6.3,45
2.99,7.2,25
2.92,6.9,12
3.23,6.5,24
3.37,6.8,46
2.72,6.6, 8
|
[
"Would be great if you could also provide the input of the data, but even without it's most likely due to the fact that you use index column from your file. You should remove it and it will be fine.\nIf you can give the example of data that you use (columns), will be able to check it further.\nI rerun your code, with the data you've provided and it does not show any errors for model.fit(data_train_X, data_train_Y) line :) \n"
] |
[
0
] |
[] |
[] |
[
"jupyter_notebook",
"linear_regression",
"machine_learning",
"python",
"scikit_learn"
] |
stackoverflow_0074585492_jupyter_notebook_linear_regression_machine_learning_python_scikit_learn.txt
|
Q:
How does Waitress handle concurrent tasks?
I'm trying to build a python webserver using Django and Waitress, but I'd like to know how Waitress handles concurrent requests, and when blocking may occur.
While the Waitress documentation mentions that multiple worker threads are available, it doesn't provide a lot of information on how they are implemented and how the python GIL affects them (emphasis my own):
When a channel determines the client has sent at least one full valid HTTP request, it schedules a "task" with a "thread dispatcher". The thread dispatcher maintains a fixed pool of worker threads available to do client work (by default, 4 threads). If a worker thread is available when a task is scheduled, the worker thread runs the task. The task has access to the channel, and can write back to the channel's output buffer. When all worker threads are in use, scheduled tasks will wait in a queue for a worker thread to become available.
There doesn't seem to be much information on Stackoverflow either. From the question "Is Gunicorn's gthread async worker analogous to Waitress?":
Waitress has a master async thread that buffers requests, and enqueues each request to one of its sync worker threads when the request I/O is finished.
These statements don't address the GIL (at least from my understanding) and it'd be great if someone could elaborate more on how worker threads work for Waitress. Thanks!
A:
Here's how the event-driven asynchronous servers generally work:
Start a process and listen to incoming requests. Utilizing the event notification API of the operating system makes it very easy to serve thousands of clients from single thread/process.
Since there's only one process managing all the connections, you don't want to perform any slow (or blocking) tasks in this process. Because then it will block the program for every client.
To perform blocking tasks, the server delegates the tasks to "workers". Workers can be threads (running in the same process) or separate processes (or subprocesses). Now the main process can keep on serving clients while workers perform the blocking tasks.
How does Waitress handle concurrent tasks?
Pretty much the same way I just described above. And for workers it creates threads, not processes.
how the python GIL affects them
Waitress uses threads for workers. So, yes they are affected by GIL in that they aren't truly concurrent though they seem to be. "Asynchronous" is the correct term.
Threads in Python run inside a single process, on a single CPU core, and don't run in parallel. A thread acquires the GIL for a very small amount of time and executes its code and then the GIL is acquired by another thread.
But since the GIL is released on network I/O, the parent process will always acquire the GIL whenever there's a network event (such as an incoming request) and this way you can stay assured that the GIL will not affect the network bound operations (like receiving requests or sending response).
On the other hand, Python processes are actually concurrent: they can run in parallel on multiple cores. But Waitress doesn't use processes.
Should you be worried?
If you're just doing small blocking tasks like database read/writes and serving only a few hundred users per second, then using threads isn't really that bad.
For serving a large volume of users or doing long running blocking tasks, you can look into using external task queues like Celery. This will be much better than spawning and managing processes yourself.
A:
Hint: Those were my comments to the accepted answer and the conversation below, moved to a separate answer for space reasons.
Wait.. The 5th request will stay in the queue until one of the 4 threads is done with its previous handling, and has therefore gone back to the pool. One thread will only ever serve one request at a time. "IO bound" tasks only help in that the threads waiting for IO will implicitly (e.g. by calling time.sleep) tell the scheduler (python's internal one) that it can pass the GIL along to another thread since there's currently nothing to do, so that the others will get more CPU time for their stuff. On thread level this is fully sequential, which is still concurrent and asynchronous on process level, just not parallel. Just to get some wording straight.
Also, Python threads are "standard" OS threads (like those in C). So they will use all CPU cores and make full use of them. The only thing restricting them is that they need to hold the GIL when calling Python C-API functions, because the whole API in general is not thread-safe. On the other hand, calls to non-Python functions, i.e. functions in C extensions like numpy for example, but also many database APIs, including anything loaded via ctypes, do not hold the GIL while running. Why should they, they are running external C binaries which don't know anything of the Python interpreter running in the parent process. Therefore, such tasks will run truly in parallel when called from a WSGI app hosted by waitress. And if you've got more cores available, turn the thread number up to that amount (threads=X kwarg on waitress.create_server).
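A minimal sketch of sizing that worker pool when serving an app with waitress:
from waitress import serve

def app(environ, start_response):
    # trivial WSGI app standing in for a real Django application
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

# threads=8 sizes the worker pool; match it to your cores and workload
serve(app, host='0.0.0.0', port=8080, threads=8)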
|
How does Waitress handle concurrent tasks?
|
I'm trying to build a python webserver using Django and Waitress, but I'd like to know how Waitress handles concurrent requests, and when blocking may occur.
While the Waitress documentation mentions that multiple worker threads are available, it doesn't provide a lot of information on how they are implemented and how the python GIL affects them (emphasis my own):
When a channel determines the client has sent at least one full valid HTTP request, it schedules a "task" with a "thread dispatcher". The thread dispatcher maintains a fixed pool of worker threads available to do client work (by default, 4 threads). If a worker thread is available when a task is scheduled, the worker thread runs the task. The task has access to the channel, and can write back to the channel's output buffer. When all worker threads are in use, scheduled tasks will wait in a queue for a worker thread to become available.
There doesn't seem to be much information on Stackoverflow either. From the question "Is Gunicorn's gthread async worker analogous to Waitress?":
Waitress has a master async thread that buffers requests, and enqueues each request to one of its sync worker threads when the request I/O is finished.
These statements don't address the GIL (at least from my understanding) and it'd be great if someone could elaborate more on how worker threads work for Waitress. Thanks!
|
[
"Here's how the event-driven asynchronous servers generally work:\n\nStart a process and listen to incoming requests. Utilizing the event notification API of the operating system makes it very easy to serve thousands of clients from single thread/process.\nSince there's only one process managing all the connections, you don't want to perform any slow (or blocking) tasks in this process. Because then it will block the program for every client.\nTo perform blocking tasks, the server delegates the tasks to \"workers\". Workers can be threads (running in the same process) or separate processes (or subprocesses). Now the main process can keep on serving clients while workers perform the blocking tasks.\n\n\n\nHow does Waitress handle concurrent tasks?\n\nPretty much the same way I just described above. And for workers it creates threads, not processes.\n\nhow the python GIL affects them \n\nWaitress uses threads for workers. So, yes they are affected by GIL in that they aren't truly concurrent though they seem to be. \"Asynchronous\" is the correct term.\nThreads in Python run inside a single process, on a single CPU core, and don't run in parallel. A thread acquires the GIL for a very small amount of time and executes its code and then the GIL is acquired by another thread. \nBut since the GIL is released on network I/O, the parent process will always acquire the GIL whenever there's a network event (such as an incoming request) and this way you can stay assured that the GIL will not affect the network bound operations (like receiving requests or sending response).\nOn the other hand, Python processes are actually concurrent: they can run in parallel on multiple cores. But Waitress doesn't use processes. \nShould you be worried?\nIf you're just doing small blocking tasks like database read/writes and serving only a few hundred users per second, then using threads isn't really that bad.\nFor serving a large volume of users or doing long running blocking tasks, you can look into using external task queues like Celery. This will be much better than spawning and managing processes yourself.\n",
"Hint: Those were my comments to the accepted answer and the conversation below, moved to a separate answer for space reasons.\nWait.. The 5th request will stay in the queue until one of the 4 threads is done with their previous handling, and therefore gone back to the pool. One thread will only ever server one request at a time. \"IO bound\" tasks only help in that the threads waiting for IO will implicitly (e.g. by calling time.sleep) tell the scheduler (python's internal one) that it can pass the GIL along to another thread since there's currently nothing to do, so that the others will get more CPU time for their stuff. On thread level this is fully sequential, which is still concurrent and asynchronous on process level, just not parallel. Just to get some wording staight.\nAlso, Python threads are \"standard\" OS threads (like those in C). So they will use all CPU cores and make full use of them. The only thing restricting them is that they need to hold the GIL when calling Python C-API functions, because the whole API in general is not thread-safe. On the other hand, calls to non-Python functions, i.e. functions in C extensions like numpy for example, but also many database APIs, including anything loaded via ctypes, do not hold the GIL while running. Why should they, they are running external C binaries which don't know anything of the Python interpreter running in the parent process. Therefore, such tasks will run truely in parallel when called from a WSGI app hosted by waitress. And if you've got more cores available, turn the thread number up to that amount (threads=X kwarg on waitress.create_server).\n"
] |
[
11,
0
] |
[] |
[] |
[
"django",
"python",
"waitress",
"wsgi"
] |
stackoverflow_0059838433_django_python_waitress_wsgi.txt
|
Q:
How can I access primary key in template tag?
I am trying to create an update view that allows users to update their data. I am trying to access the data by using primary keys. My problem is that I do not know the syntax to implement it.
models.py
class Detail(models.Model):
"""
This is the one for model.py
"""
username = models.ForeignKey(User, on_delete=models.CASCADE, null=True, default="")
matricno = models.CharField(max_length=9, default="")
email = models.EmailField(default="")
first_name = models.CharField(max_length=200, default="")
last_name = models.CharField(max_length=255, default="")
class Meta:
verbose_name_plural = "Detail"
def __str__(self):
return self.first_name+ " "+self.last_name
views.py
def success(request):
return render(request, "success.html", {})
@login_required(login_url="signin")
def details(request):
form = Details()
if request.method == "POST":
form = Details(request.POST)
if form.is_valid():
detail = form.save(commit=False)
detail.username = request.user
detail.save()
return redirect(success)
else:
form = Details(initial={"matricno":request.user.username})
return render(request, "details.html", {"form":form})
def updatedetails(request, pk):
detail = Detail.objects.get(id=pk)
form = Details(instance=detail)
if request.method == "POST":
form = Details(request.POST, instance=detail)
if form.is_valid():
form.save()
return redirect(success)
return render(request, "details.html", {"form":form})
urls.py
from django.urls import path
from . import views
urlpatterns = [
path("details/", views.details, name="details"),
path("success/", views.success, name="success"),
path("edit/<str:pk>/", views.updatedetails, name="updatedetails"),
]
my html template
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Success</title>
</head>
<body>
<h1>Thank You for Filling Out the Form</h1>
<p><a href="/edit/{{request.detail.id}}/">Click Here To Edit</a></p>
</body>
</html>
So what I am trying to figure out is how to call the primary key in my template.
A:
Answer to the original question
How can I access primary key in template tag?
Well, you have to pass it to the view that renders the template, either through the context, or in this case, since you need it in the success page, which you are getting to via a redirect, send it as a parameter in your redirect.
Second, if you are trying to create a form from a model, then use a ModelForm. This you do not do just by setting a form variable equal to a model instance. You do this by creating a file called forms.py:
# forms.py
from django.forms import ModelForm
from .models import Detail
class DetailForm(ModelForm):
class Meta:
model = Detail
# Include below the fields from the Detail model you
# would like to include in your form
fields = ['username', 'matricno', 'email', 'first_name', 'last_name', ]
Then in your views you can access this form:
#views.py
from .models import Detail
from .forms import DetailForm
def success(request, pk):
# NOTE that here you must receive the pk
return render(request, "success.html", {'pk': pk})
@login_required(login_url="signin")
def details(request):
form = DetailForm()
if request.method == "POST":
form = DetailForm(request.POST)
if form.is_valid():
detail = form.save(commit=False)
detail.username = request.user
detail.save()
# NOTE: here you can send the detail primary key, pk, or id
# to the success view where you are trying to use it
return redirect('success', pk=detail.pk)
else:
form = Details(initial={"matricno": request.user.username})
return render(request, "details.html", {"form": form})
def updatedetails(request, pk):
detail = Detail.objects.get(id=pk)
form = DetailForm(instance=detail)
if request.method == "POST":
form = Details(request.POST, instance=detail)
if form.is_valid():
form.save()
return redirect('success', pk=detail.pk)
return render(request, "details.html", {"form":form})
Note how you can send the primary key to the success template in the redirect. That is the answer to the original question.
Now you can access the primary key in your template simply:
<!--html template -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Success</title>
</head>
<body>
<h1>Thank You for Filling Out the Form</h1>
<p><a href="{% url 'updatedetails' pk %}">Click Here To Edit</a></p>
</body>
</html>
Since the success view needed to be changed to get the primary key, we have to modify the urls.py so that it can receive it:
# urls.py
from django.urls import path
from . import views
urlpatterns = [
path("details/", views.details, name="details"),
path("success/<int:pk>/", views.success, name="success"),
path("edit/<int:pk>/", views.updatedetails, name="updatedetails"),
]
A:
First, I'd recommend you change <str:pk> to <int:pk> in the updatedetails path, so urls.py should be:
from django.urls import path
from . import views
urlpatterns = [
path("details/", views.details, name="details"),
path("success/<int:pk>/", views.success, name="success"),
path("edit/<int:pk>/", views.updatedetails, name="updatedetails"),
]
Secondly, I'd recommend you use get_object_or_404() instead of get(). Also, if you look closely, you are missing "" in redirect(success); it should be redirect("success"). So views.py should be:
from django.shortcuts import get_object_or_404
def updatedetails(request, pk):
detail = get_object_or_404(Detail,id=pk)
form = Details(instance=detail)
if request.method == "POST":
form = Details(request.POST, instance=detail)
if form.is_valid():
form.save()
return redirect("success", pk)
return render(request, "details.html", {"form":form})
@login_required(login_url="signin")
def details(request):
form = Details()
if request.method == "POST":
form = Details(request.POST)
if form.is_valid():
detail = form.save(commit=False)
detail.username = request.user
detail.save()
return redirect("success", detail.id)
else:
form = Details(initial={"matricno":request.user.username})
return render(request, "details.html", {"form":form})
def success(request,pk):
return render(request, "success.html", {"id":pk})
Then, in the template you can use url tags and pass id to updatedetails view as:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Success</title>
</head>
<body>
<h1>Thank You for Filling Out the Form</h1>
<p><a href="{% url 'updatedetails' id %}">Click Here To Edit</a></p>
</body>
</html>
A:
Try using
href="{% url 'updatedetails' pk=request.detail.id %}"
Let me know if that works. This will look for the path named updatedetails in your urlpatterns and provide the id as the key. I'm somewhat new at this also, so I'm hoping this helps and is correct.
|
How can I access primary key in template tag?
|
I am trying to create an update view that allows users to update their data. I am trying to access the data by using primary keys. My problem is that I do not know the syntax to implement it.
models.py
class Detail(models.Model):
"""
This is the one for model.py
"""
username = models.ForeignKey(User, on_delete=models.CASCADE, null=True, default="")
matricno = models.CharField(max_length=9, default="")
email = models.EmailField(default="")
first_name = models.CharField(max_length=200, default="")
last_name = models.CharField(max_length=255, default="")
class Meta:
verbose_name_plural = "Detail"
def __str__(self):
return self.first_name+ " "+self.last_name
views.py
def success(request):
return render(request, "success.html", {})
@login_required(login_url="signin")
def details(request):
form = Details()
if request.method == "POST":
form = Details(request.POST)
if form.is_valid():
detail = form.save(commit=False)
detail.username = request.user
detail.save()
return redirect(success)
else:
form = Details(initial={"matricno":request.user.username})
return render(request, "details.html", {"form":form})
def updatedetails(request, pk):
detail = Detail.objects.get(id=pk)
form = Details(instance=detail)
if request.method == "POST":
form = Details(request.POST, instance=detail)
if form.is_valid():
form.save()
return redirect(success)
return render(request, "details.html", {"form":form})
urls.py
from django.urls import path
from . import views
urlpatterns = [
path("details/", views.details, name="details"),
path("success/", views.success, name="success"),
path("edit/<str:pk>/", views.updatedetails, name="updatedetails"),
]
my html template
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Success</title>
</head>
<body>
<h1>Thank You for Filling Out the Form</h1>
<p><a href="/edit/{{request.detail.id}}/">Click Here To Edit</a></p>
</body>
</html>
So what I am trying to figure out is how to call the primary key in my template.
|
[
"Answer to the original question\n\nHow can I access primary key in template tag?\n\nWell, you have to pass it to the view that renders the template, either through the context, or in this case, since you need it in the success page, which you are getting to via a redirect, send it as a parameter in your redirect.\nSecond, if you are trying to create a form from a model, then use a ModelForm. This you do not do just by setting a form variable equal to a model instance. You do this by creating a file called forms.py:\n# forms.py\n\nfrom django.forms import ModelForm\nfrom .models import Detail\n\nclass DetailForm(ModelForm):\n class Meta:\n model = Detail\n # Include below the fields from the Detail model you\n # would like to include in your form\n fields = ['username', 'matricno', 'email', 'first_name', 'last_name', ]\n\nThen in your views you can access this form:\n#views.py\n\nfrom .models import Detail\nfrom .forms import DetailForm\n\ndef success(request, pk):\n # NOTE that here you must receive the pk\n return render(request, \"success.html\", {'pk': pk})\n\n@login_required(login_url=\"signin\")\ndef details(request):\n form = DetailForm()\n if request.method == \"POST\":\n form = DetailForm(request.POST)\n if form.is_valid():\n detail = form.save(commit=False)\n detail.username = request.user\n detail.save()\n # NOTE: here you can send the detail primary key, pk, or id\n # to the success view where you are trying to use it\n return redirect('success', pk=detail.pk)\n else:\n form = Details(initial={\"matricno\": request.user.username})\n return render(request, \"details.html\", {\"form\": form})\n\ndef updatedetails(request, pk):\n detail = Detail.objects.get(id=pk)\n form = DetailForm(instance=detail)\n if request.method == \"POST\":\n form = Details(request.POST, instance=detail)\n if form.is_valid():\n form.save()\n return redirect('success', pk=detail.pk)\n return render(request, \"details.html\", {\"form\":form})\n\nNote how you can send the primary key to the success template in the redirect. That is the answer to the original question.\nNow you can access the primary in your template simply:\n<!--html template -->\n<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n <meta charset=\"UTF-8\">\n <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\">\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n <title>Success</title>\n</head>\n<body>\n <h1>Thank You for Filling Out the Form</h1>\n <p><a href=\"{% url 'updatedetails' pk %}\">Click Here To Edit</a></p>\n</body>\n</html>\n\nSince the success view needed to be changed to get the primary key, we have to modify the urls.py so that it can receive it:\n# urls.py\nfrom django.urls import path\n\nfrom . import views\n\nurlpatterns = [\n path(\"details/\", views.details, name=\"details\"),\n path(\"success/<int:pk>/\", views.success, name=\"success\"),\n path(\"edit/<int:pk>/\", views.updatedetails, name=\"updatedetails\"),\n]\n\n",
"At first, I'd recommend you to change <str:pk> to <int:pk> in updatedetails path so urls.py should be:\nfrom django.urls import path\n\nfrom . import views\n\nurlpatterns = [\n path(\"details/\", views.details, name=\"details\"),\n path(\"success/<int:pk>/\", views.success, name=\"success\"),\n path(\"edit/<int:pk>/\", views.updatedetails, name=\"updatedetails\"),\n]\n\nSecondly, I'd recommend you to use get_object_or_404() instead of get() and also if you see it correctly you are also missing \"\" in redirect(success) that should be redirect(\"success\") so views.py should be:\nfrom django.shortcuts import get_object_or_404\n\ndef updatedetails(request, pk):\n detail = get_object_or_404(Detail,id=pk)\n form = Details(instance=detail)\n if request.method == \"POST\":\n form = Details(request.POST, instance=detail)\n if form.is_valid():\n form.save()\n return redirect(\"success\", args=(pk))\n return render(request, \"details.html\", {\"form\":form})\n\n@login_required(login_url=\"signin\")\ndef details(request):\n form = Details()\n if request.method == \"POST\":\n form = Details(request.POST)\n if form.is_valid():\n detail = form.save(commit=False)\n detail.username = request.user\n detail.save()\n return redirect(success,args=(detail.id))\n else:\n form = Details(initial={\"matricno\":request.user.username})\n return render(request, \"details.html\", {\"form\":form})\n\n\ndef success(request,pk):\n \n return render(request, \"success.html\", {\"id\":pk})\n\n\nThen, in the template you can use url tags and pass id to updatedetails view as:\n<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n <meta charset=\"UTF-8\">\n <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\">\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n <title>Success</title>\n</head>\n<body>\n <h1>Thank You for Filling Out the Form</h1>\n <p><a href=\"{% url 'updatedetails' id %}\">Click Here To Edit</a></p>\n</body>\n</html>\n\n",
"Try using\nhref=\"{% url 'updatedetails' pk=request.detail.id %}\"\n\nLet me know if that works. This will look for the path name updatedetails located in your urlpatterns and provide the id as the key. I'm somewhat new at this also, so i'm hoping this helps and is correct.\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"django",
"django_forms",
"django_templates",
"django_urls",
"python"
] |
stackoverflow_0074578041_django_django_forms_django_templates_django_urls_python.txt
|
Q:
Can I reconnect to the main window with PyWinAuto?
I can open the browser, click the button on the screen, and click the button in the pop-up window without problems. The only problem is that when I close the pop-up window by clicking the "Close Tor Browser" button, I can't reconnect to my previous window (the main, first window). Any tips?
from pywinauto.application import Application
import time
for i in range(0,2):
try:
for i in range(0,1):
try:
app=Application(backend='uia').start('\...\firefox.exe')
app=Application(backend='uia').connect(title='Connect to Tor — Tor Browser',timeout=40)
time.sleep(5)
app.window(best_match='Dialog', top_level_only=True).child_window(best_match='Ver todos los servicios').click()
except Exception:
time.sleep(1)
app2=Application(backend='uia').connect(title='Close Tor Browser',timeout=40)
app2.window(best_match='Dialog', top_level_only=True).child_window(best_match='Cancel').click()
time.sleep(5)
app=Application(backend='uia').connect(title='Connect to Tor — Tor Browser',timeout=40)
app.window(best_match='Dialog', top_level_only=True).child_window(best_match='Ver todos los servicios').click()
time.sleep(10)
app.kill()
time.sleep(1)
except Exception:
pass
I can open the browser easily, click the button on the screen, and click the button in the pop-up window without problems. The only problem is that when I close the pop-up window by clicking the "Close Tor Browser" button, I can't reconnect to my previous window (the main, first window). Any tips?
A:
Your best bet to control this (assuming the extension has an interactable pop-out, e.g. options, once you click it or press a specified hotkey) is as follows (kudos to the BrowserStack folk and Rob Wu - CRX / page taxonomy etc.).
This is the solution I explore here - I will leave a link in a comment for an alternative pywinauto method that should also achieve what you need (it uses object elements instead of inaccurate picture identification, although it can do that too, I believe; the backend method may be able to serve your case re: using a different desktop at the same time).
Recommended approach:
Get ID for extension (in URL on respective Webstore page, if one exists, or in app details within settings)
i.e. https://chrome.yada.yada.yada.then.BOOM./dknlfmjaanfblgfdfebhijalfmhmjjjo
Plug this ID into the following url in place of YOURID (take out the latter, put the former in its place):
Courtesy Rob Wu site: https://robwu.nl/crxviewer/?crx=https://chrome.google.com/webstore/detail/**`YOURID`**
Yes, it has two https: parts - don't get hung up on that, it's supposed to. See the references / blog site below if you feel sceptical re: legitimacy/validity etc. ¯_(ツ)_/¯
In my case, this would be:
https://robwu.nl/crxviewer/?crx=https://chrome.google.com/webstore/detail/dknlfmjaanfblgfdfebhijalfmhmjjjo
This will provide a webpage that looks like this - with useful detail regarding the extension - each file/pop-up html component (LHS), wiht RegEx to find the page you want to automate.
Notice you can download crx and unloaded package from here - but
that's or another rainy day - back to the Story Rory
In this example I get a popup out when I clikc on the extension - so
I've filtered for all htmls, per penultimate pic below, to retieve
ultimate pic for this step:
Phew! Left with just one html - which sounds very promising given it's entitled 'popup.html' (this is non uncommon) -
You now have popup.html (let's call this the OPTION_PAGE) - use this in the following url (again, substituting where appropriate your extension's details in the obvious places in this url) and voila! (first below = generic e.g., mine is the live demo! :)
chrome-extension://**YOUR_ID**/**YOUR_OPTION_PAGE** which became (in my case)
chrome-extension://dknlfmjaanfblgfdfebhijalfmhmjjjo/popup.html
And it opens to a full webpage comprising the relevant interactable / programmatically friendly option page!
( ᵔ ͜ʖ ͡ᵔ)
Notes/sample example:
Features:
Can be more up to date / forward looking
No requirement to control historically with physical intervention
Should be able to run in hidden mode (that would be great - but prob. not allowed! :)
Ability to multi-process & especially demand modelling
Code to sign into NopeCHA using a dummy ID (go ahead give it a go - you might get lucky! not :))
Some context RE: variables/fns:
p is an element of the vector ps = [0,1,2,...,n] for n browsers - you can ignore this and pretend variables like d[p] simply represent driver, where driver = webdriver.Chrome(options=o, executable_path=path_exec)
activ(p) activates the pth browser - ignore
ss = time.sleep (so ss(0.5) aka time.sleep(0.5) = 0.5 sec delay)
import time
from selenium.webdriver.common.action_chains import ActionChains as AC
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.chrome.options import Options
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
#*(see [here][7]) - as I say, not essential*
def nopecha(p):
    print('nopecha')
    global d # selenium webdriver - with the early thinking we might have given to access, and done the hand-me-down thing :)
    #d[p].implicitly_wait(10)
    activ(p) # this activates the window in question
    d[p].get("chrome-extension://dknlfmjaanfblgfdfebhijalfmhmjjjo/popup.html")
    d[p].find_element_by_class_name('plan_button.clickable').click()
    ss(0.5)
    AC(d[p]).key_down(Keys.CONTROL).send_keys('a').key_up(Keys.CONTROL).perform() #send_keys_to_element()
    #kd('ctrl'), pr('a'), ku('ctrl'), pr('delete')
    ss(0.5)
    el = d[p].find_element_by_class_name('plan_info')
    # d[p].find_element_by_class_name('plan_info').clear()
    AC(d[p]).send_keys_to_element(el, 'made-up-example-for-extension-login-serial-hope-you-understand-ta-k-bye-now').perform() #note this doesn't hijack your mouse/keyboard - you could still be using it while multiple browsers are being threaded - as in this case
    ss(0.5)
    AC(d[p]).send_keys_to_element(el, Keys.ENTER).perform()
    # d[p].find_element_by_class_name('plan_button.clickable').click()
    # AC(d[p]).send_keys(Keys.ENTER).perform()
    try:
        d[p].find_element_by_class_name('btn off').click()
    except:
        pass
    el = ev[p]('''return document.querySelector('.menu').children[1].lastElementChild''')
    el.click(), cc(2)
    with UIPath(u"chrome-extension://dknlfmjaanfblgfdfebhijalfmhmjjjo/popup.html - Chromium||Pane"):
        click(u"||Document->Image||Text")
        click(u"||Document->Speech||Text")

    # for p in ps: th_0(p)
Gif shot:
|
Can I reconnect to the main window with PyWinAuto?
|
I can open the browser, click the button on the screen, and click the button in the pop-up window without problems. The only problem is that when I close the pop-up window by clicking the button "Close Tor Browser", I can't reconnect to my previous window (the main and first window). Any tips??
from pywinauto.application import Application
import time

for i in range(0,2):
    try:
        for i in range(0,1):
            try:
                app=Application(backend='uia').start('\...\firefox.exe')
                app=Application(backend='uia').connect(title='Connect to Tor — Tor Browser',timeout=40)
                time.sleep(5)
                app.window(best_match='Dialog', top_level_only=True).child_window(best_match='Ver todos los servicios').click()
            except Exception:
                time.sleep(1)
                app2=Application(backend='uia').connect(title='Close Tor Browser',timeout=40)
                app2.window(best_match='Dialog', top_level_only=True).child_window(best_match='Cancel').click()
                time.sleep(5)
                app=Application(backend='uia').connect(title='Connect to Tor — Tor Browser',timeout=40)
                app.window(best_match='Dialog', top_level_only=True).child_window(best_match='Ver todos los servicios').click()
        time.sleep(10)
        app.kill()
        time.sleep(1)
    except Exception:
        pass
I can open the browser easily, click the button on the screen, and click the button in the pop-up window without problems. The only problem is that when I close the pop-up window by clicking the button "Close Tor Browser", I can't reconnect to my previous window (the main and first window). Any tips??
|
[
"Best bet to contorl (assuming extension has interactable popout [e.g. options, whatever, once you click/depress specfiiced hotkey) as follows (kukos to BrowserStack folk, and RobWuRobWu - CRX / page taxomy etc.).\nThis is the soluiton I explore here - will leave link in comment for altenrative concerning pywinauto method that should also achieve what you need (uses elements of objects instead of inaccurate pic identification, although it can do that too I believe, the backend method may be able to serve your case RE: using a diferent deskopt at the same time).\nRecommneded approach:\n\n\nGet ID for extension (in URL on respective Webstore page, if one exists, or in app details within settings)\n\n[\nOR\n\ni.e. https://chrome.yada.yada.yada.then.BOOM./dknlfmjaanfblgfdfebhijalfmhmjjjo\n\n\nPlug this ID into the following url in place of YOURID (take out the latter, put the former in its place):\n\n\nCourtesy Rob Wu site: https://robwu.nl/crxviewer/?crx=https://chrome.google.com/webstore/detail/**`YOURID`**\n\nYes it has two https: don't get hung up by that its supposed to. See references / blog site below if you feel sceptical RE: legitamicy/validity etc. ¯_(ツ)_/¯\n\nIn my case, this would be:\nhttps://robwu.nl/crxviewer/?crx=https://chrome.google.com/webstore/detail/dknlfmjaanfblgfdfebhijalfmhmjjjo\n\n\nThis will provide a webpage that looks like this - with useful detail regarding the extension - each file/pop-up html component (LHS), wiht RegEx to find the page you want to automate.\n\n\nNotice you can download crx and unloaded package from here - but\nthat's or another rainy day - back to the Story Rory\nIn this example I get a popup out when I clikc on the extension - so\nI've filtered for all htmls, per penultimate pic below, to retieve\nultimate pic for this step:\n\n\nPhew! Left with just one html - which sounds very promising given it's entitled 'popup.html' (this is non uncommon) -\n\n\nYou now have popup.html (let's call this the OPTION_PAGE - use this in the following url (again, substituting where appropriate your extension's details in the obvious plaecs in this url and voila! (first belo = generaic e.g., min is live demo! :)\n\nchrome-extension://**YOUR_ID**/**YOUR_OPTION_PAGE** which became (in my case)\n\nchrome-extension://dknlfmjaanfblgfdfebhijalfmhmjjjo/popup.html\n\n\nAnd it opens to a full webpage comprising the relevant interactabel / programmaticly friendly option page !\n( ᵔ ͜ʖ ͡ᵔ)\n\n**Notes/sample example:\nFeaures:\n\nCan be more up to date/looking ward\nNo requirement to contorlhistoric with physical intevention\nShould be able to run in hidden mode (that woold be great - but prob. not allowed! :)\nAbility to multi-procewss & especially demand modelling\n\nCode to sign into noptcha using a dummy ID (go ahead give it a go - you might get lucky! 
not :))\nSome context RE: variables/fns:\np element of vector ps =[0,1,2,...,n] for n browsers - you can ignore this and pretend variables like d[p] simply represented driver, where driver = webdriver.Chrome(options=o, executable_path=path_exec)\nactiv(p) activates the pth browser - ignore\nss = time.time (so ss(0.5) aka time.time(0.5) = 0.5 sec delay)\nimport time\n\nfrom selenium.webdriver.common.action_chains import ActionChains as AC\nfrom selenium.webdriver.common.keys import Keys\nfrom selenium.webdriver.chrome.options import Options\nfrom concurrent.futures import hreadPoolExecutor, ProcessPoolExecutor \n#*(see [here][7]) - as I say, not essential* \n\n\n\ndef nopecha(p):\n\n print('nopecha')\n global d # selenium webdriver - with early thinking as wemight have given to access, and done the hand me donw ting L)\n #d[p].implicitly_wait(10)\n activ(p) #this activates window in quesiton\n d[p].get(\"chrome-extension://dknlfmjaanfblgfdfebhijalfmhmjjjo/popup.html\")\n d[p].find_element_by_class_name('plan_button.clickable').click()\n ss(0.5)\n AC(d[p]).key_down(Keys.CONTROL).send_keys('a').key_up('keys.CONTROL').perform() #send_keys_to_element()\n #kd('ctrl'), pr('a'), ku('ctrl'), pr('delete')\n ss(0.5)\n el = d[p].find_element_by_class_name('plan_info')\n # d[p].find_element_by_class_name('plan_info').clear()\n AC(d[p]).send_keys_to_element(el, 'made-up-example-for-extension-login-serial-hope-you-understand-ta-k-bye-now').perform() #note this doesn't hijack your mouse/keyboard - you could still be using it while multiple browsers being threaded - as in this case \n ss(0.5)\n AC(d[p]).send_keys_to_element(el, Keys.ENTER).perform()\n # d[p].find_element_by_class_name('plan_button.clickable').click()\n # AC(d[p]).send_keys(Keys.ENTER).perform()\n try:\n d[p].find_element_by_class_name('btn off').click()\n except:\n pass\n el = ev[p]('''return document.querySelector('.menu').children[1].lastElementChild''')\n el.click(), cc(2)\n with UIPath(u\"chrome-extension://dknlfmjaanfblgfdfebhijalfmhmjjjo/popup.html - Chromium||Pane\"):\n click(u\"||Document->Image||Text\")\n click(u\"||Document->Speech||Text\")\n\n # for p in ps: th_0(p)\n\n\nGif shot:\n"
] |
[
0
] |
[] |
[] |
[
"automation",
"bots",
"python",
"pywinauto",
"tor"
] |
stackoverflow_0071905553_automation_bots_python_pywinauto_tor.txt
|
Q:
Compare and match range of timestamps in pandas two different dataframes
How to compare and match the beginning and end of two ranges of timestamps in two different dataframes, when the frequency of timestamps varies and it is not known which range starts earlier and finishes later? Then discard the unmatched beginning and end, so the two ranges are the same.
It is easy to do manually in a txt file; how to do it in Python with pandas dataframes?
Sample first dataframe:
0 1
0 2022-10-30 14:11:57
1 2022-10-30 14:11:57
2 2022-10-30 14:11:57
3 2022-10-30 14:11:58
4 2022-10-30 14:11:59
... ...
149801 2022-10-30 15:22:11
149802 2022-10-30 15:22:11
149803 2022-10-30 15:22:11
149804 2022-10-30 15:22:11
149805 2022-10-30 15:22:11
[149806 rows x 2 columns]
Sample second dataframe:
0 1
0 2022-10-30 14:11:59
1 2022-10-30 14:11:59
2 2022-10-30 14:12:00
3 2022-10-30 14:12:00
4 2022-10-30 14:12:00
... ...
21065 2022-10-30 15:22:11
21066 2022-10-30 15:22:11
21067 2022-10-30 15:22:12
21068 2022-10-30 15:22:13
21069 2022-10-30 15:22:13
Column 1 is filled with data.
Comparing two timestamps in a specific row would look like:
if first_df[0].iloc[0] == second_df[0].iloc[0]:
    print('hit')
else:
    print('miss')
How to do it over the full range, so it would be possible to discard the unmatched beginning and end while preserving what's inside?
Sample match of those two ranges:
First dataframe:
0 1
4 2022-10-30 14:11:59
... ...
149801 2022-10-30 15:22:11
149802 2022-10-30 15:22:11
149803 2022-10-30 15:22:11
149804 2022-10-30 15:22:11
149805 2022-10-30 15:22:11
Second dataframe:
0 1
0 2022-10-30 14:11:59
1 2022-10-30 14:11:59
2 2022-10-30 14:12:00
3 2022-10-30 14:12:00
4 2022-10-30 14:12:00
... ...
21065 2022-10-30 15:22:11
21066 2022-10-30 15:22:11
Edit:
Consider this code (note that the frequency of timestamps in each dataframe is different):
import pandas as pd
from datetime import datetime
df1 = pd.DataFrame({'val_1' : [10,11,12,13,14,15]},
index = [pd.DatetimeIndex([datetime.strptime(s, '%Y-%m-%d %H:%M:%S')])[0]
for s in ['2022-11-12 09:03:59',
'2022-11-12 09:03:59',
'2022-11-12 09:03:59',
'2022-11-12 09:04:00',
'2022-11-12 09:04:01',
'2022-11-12 09:04:02'
] ])
df2 = pd.DataFrame({'val_2': [11,22,33,44]},
index = [pd.DatetimeIndex([datetime.strptime(s, '%Y-%m-%d %H:%M:%S')])[0]
for s in ['2022-11-12 09:03:58',
'2022-11-12 09:03:59',
'2022-11-12 09:03:59',
'2022-11-12 09:04:00',
] ])
What I would like as a result is this:
val_1 val_2
2022-11-12 09:03:59 10 NaN
2022-11-12 09:03:59 11 22
2022-11-12 09:03:59 12 33
2022-11-12 09:04:00 13 44
or:
df1:
2022-11-12 09:03:59 10
2022-11-12 09:03:59 11
2022-11-12 09:03:59 12
2022-11-12 09:04:00 13
and df2:
2022-11-12 09:03:59 22
2022-11-12 09:03:59 33
2022-11-12 09:04:00 44
I tried both join and merge with probably every combination of options and couldn't achieve this.
A:
New answer on the new example data:
The problem with merging here is that you have duplicated index dates, so there can't be an unambiguous assignment.
But you could do it separately, as you suggested in the beginning.
You said you don't know which of the two df's starts earlier or ends later.
Find the min value of both indexes and take the max of those two. Same for the upper bound: get both max values and take the min of those two. Then slice your df's with the lower and upper bound.
lower, upper = max(df1.index.min(), df2.index.min()), min(df1.index.max(), df2.index.max())
df1 = df1.loc[lower:upper]
print(df1)
val_1
2022-11-12 09:03:59 10
2022-11-12 09:03:59 11
2022-11-12 09:03:59 12
2022-11-12 09:04:00 13
df2 = df2.loc[lower:upper]
print(df2)
val_2
2022-11-12 09:03:59 22
2022-11-12 09:03:59 33
2022-11-12 09:04:00 44
OLD:
Since you didn't provide usable data, here is my own example input data:
np.random.seed(42)
df1 = pd.DataFrame(
{
'A' : np.random.randint(0,10, size=10)
},
index= pd.date_range('2022-11-26 08:00', periods=10, freq='10T')
)
df2 = pd.DataFrame(
{
'B' : np.random.randint(0,10, size=10)
},
index= pd.date_range('2022-11-26 08:30', periods=10, freq='10T')
)
which creates this data:
#df1
A
2022-11-26 08:00:00 6
2022-11-26 08:10:00 3
2022-11-26 08:20:00 7
2022-11-26 08:30:00 4
2022-11-26 08:40:00 6
2022-11-26 08:50:00 9
2022-11-26 09:00:00 2
2022-11-26 09:10:00 6
2022-11-26 09:20:00 7
2022-11-26 09:30:00 4
#df2
B
2022-11-26 08:30:00 3
2022-11-26 08:40:00 7
2022-11-26 08:50:00 7
2022-11-26 09:00:00 2
2022-11-26 09:10:00 5
2022-11-26 09:20:00 4
2022-11-26 09:30:00 1
2022-11-26 09:40:00 7
2022-11-26 09:50:00 5
2022-11-26 10:00:00 1
I think a decent approach still would be to merge the data to find out which edges are off.
Just an offer: if you leave them merged, you could compare them directly like this:
combined = df1.merge(df2, how='inner', left_index=True, right_index=True)
combined['compare'] = np.where(combined['A']==combined['B'], 'hit', 'miss')
print(combined)
Output of combined:
A B compare
2022-11-26 08:30:00 4 3 miss
2022-11-26 08:40:00 6 7 miss
2022-11-26 08:50:00 9 7 miss
2022-11-26 09:00:00 2 2 hit
2022-11-26 09:10:00 6 5 miss
2022-11-26 09:20:00 7 4 miss
2022-11-26 09:30:00 4 1 miss
If you really need them to stay separated, just add:
df1_new = combined[['A']]
df2_new = combined[['B']]
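If you also want the merged val_1/val_2 view from the question despite the duplicated timestamps, one option is to number the duplicates and merge on the timestamp plus that counter. A sketch on the sliced df1/df2 from above; note it pairs duplicates in order of occurrence, so the alignment differs slightly from the question's example:
d1 = df1.reset_index().rename(columns={'index': 'date'})
d2 = df2.reset_index().rename(columns={'index': 'date'})
d1['n'] = d1.groupby('date').cumcount()  # 0, 1, 2, ... within each timestamp
d2['n'] = d2.groupby('date').cumcount()
merged = d1.merge(d2, on=['date', 'n'], how='left').drop(columns='n')
print(merged)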
|
Compare and match range of timestamps in pandas two different dataframes
|
How to compare and match the beginning and end of two ranges of timestamps in two different dataframes, when the frequency of timestamps varies and it is not known which range starts earlier and finishes later? Then discard the unmatched beginning and end, so the two ranges are the same.
It is easy to do manually in a txt file; how to do it in Python with pandas dataframes?
Sample first dataframe:
0 1
0 2022-10-30 14:11:57
1 2022-10-30 14:11:57
2 2022-10-30 14:11:57
3 2022-10-30 14:11:58
4 2022-10-30 14:11:59
... ...
149801 2022-10-30 15:22:11
149802 2022-10-30 15:22:11
149803 2022-10-30 15:22:11
149804 2022-10-30 15:22:11
149805 2022-10-30 15:22:11
[149806 rows x 2 columns]
Sample second dataframe:
0 1
0 2022-10-30 14:11:59
1 2022-10-30 14:11:59
2 2022-10-30 14:12:00
3 2022-10-30 14:12:00
4 2022-10-30 14:12:00
... ...
21065 2022-10-30 15:22:11
21066 2022-10-30 15:22:11
21067 2022-10-30 15:22:12
21068 2022-10-30 15:22:13
21069 2022-10-30 15:22:13
Column 1 is filled with data.
Comparing two timestamps in a specific row would look like:
if first_df[0].iloc[0] == second_df[0].iloc[0]:
    print('hit')
else:
    print('miss')
How to do it over the full range, so it would be possible to discard the unmatched beginning and end while preserving what's inside?
Sample match of those two ranges:
First dataframe:
0 1
4 2022-10-30 14:11:59
... ...
149801 2022-10-30 15:22:11
149802 2022-10-30 15:22:11
149803 2022-10-30 15:22:11
149804 2022-10-30 15:22:11
149805 2022-10-30 15:22:11
Second dataframe:
0 1
0 2022-10-30 14:11:59
1 2022-10-30 14:11:59
2 2022-10-30 14:12:00
3 2022-10-30 14:12:00
4 2022-10-30 14:12:00
... ...
21065 2022-10-30 15:22:11
21066 2022-10-30 15:22:11
Edit:
Consider this code (note that the frequency of timestamps in each dataframe is different):
import pandas as pd
from datetime import datetime
df1 = pd.DataFrame({'val_1' : [10,11,12,13,14,15]},
index = [pd.DatetimeIndex([datetime.strptime(s, '%Y-%m-%d %H:%M:%S')])[0]
for s in ['2022-11-12 09:03:59',
'2022-11-12 09:03:59',
'2022-11-12 09:03:59',
'2022-11-12 09:04:00',
'2022-11-12 09:04:01',
'2022-11-12 09:04:02'
] ])
df2 = pd.DataFrame({'val_2': [11,22,33,44]},
index = [pd.DatetimeIndex([datetime.strptime(s, '%Y-%m-%d %H:%M:%S')])[0]
for s in ['2022-11-12 09:03:58',
'2022-11-12 09:03:59',
'2022-11-12 09:03:59',
'2022-11-12 09:04:00',
] ])
What I would like as a result is this:
val_1 val_2
2022-11-12 09:03:59 10 NaN
2022-11-12 09:03:59 11 22
2022-11-12 09:03:59 12 33
2022-11-12 09:04:00 13 44
or:
df1:
2022-11-12 09:03:59 10
2022-11-12 09:03:59 11
2022-11-12 09:03:59 12
2022-11-12 09:04:00 13
and df2:
2022-11-12 09:03:59 22
2022-11-12 09:03:59 33
2022-11-12 09:04:00 44
I tried both join and merge with probably every combination of options and couldn't achieve this.
|
[
"New answer on the new example data:\nThe problem with merging here is that you have duplicated index Dates, so there can't be unambigous assignment done.\nBut you could do it seperately as you suggested in the beginning.\nYou said you don't know which of both df's have start earlier or end later.\nFind the min value of both indexes and get the max value of these two. Same for the upper bound, get both max values and take the min value of these two values. Then you slice your df's with the lower and upper bound.\nlower, upper = max(df1.index.min(), df2.index.min()), min(df1.index.max(), df2.index.max())\n\ndf1 = df1.loc[lower:upper]\nprint(df1)\n\n val_1\n2022-11-12 09:03:59 10\n2022-11-12 09:03:59 11\n2022-11-12 09:03:59 12\n2022-11-12 09:04:00 13\n\ndf2 = df2.loc[lower:upper]\nprint(df2)\n\n val_2\n2022-11-12 09:03:59 22\n2022-11-12 09:03:59 33\n2022-11-12 09:04:00 44\n\nOLD:\nSince you didn't provide usable data, here my own example input data:\nnp.random.seed(42)\ndf1 = pd.DataFrame(\n {\n 'A' : np.random.randint(0,10, size=10)\n },\n index= pd.date_range('2022-11-26 08:00', periods=10, freq='10T')\n)\n\ndf2 = pd.DataFrame(\n {\n 'B' : np.random.randint(0,10, size=10)\n },\n index= pd.date_range('2022-11-26 08:30', periods=10, freq='10T')\n)\n\nwhich creates this data:\n#df1\n A\n2022-11-26 08:00:00 6\n2022-11-26 08:10:00 3\n2022-11-26 08:20:00 7\n2022-11-26 08:30:00 4\n2022-11-26 08:40:00 6\n2022-11-26 08:50:00 9\n2022-11-26 09:00:00 2\n2022-11-26 09:10:00 6\n2022-11-26 09:20:00 7\n2022-11-26 09:30:00 4\n\n#df2\n B\n2022-11-26 08:30:00 3\n2022-11-26 08:40:00 7\n2022-11-26 08:50:00 7\n2022-11-26 09:00:00 2\n2022-11-26 09:10:00 5\n2022-11-26 09:20:00 4\n2022-11-26 09:30:00 1\n2022-11-26 09:40:00 7\n2022-11-26 09:50:00 5\n2022-11-26 10:00:00 1\n\nI think a decent approach still would be to merge the data to find out the edges that are off.\nJust a offer, if you leave them merged, you could compare them directly like this:\ncombined = df1.merge(df2, how='inner', left_index=True, right_index=True)\ncombined['compare'] = np.where(combined['A']==combined['B'], 'hit', 'miss')\nprint(combined)\n\nOutput of combined:\n A B compare\n2022-11-26 08:30:00 4 3 miss\n2022-11-26 08:40:00 6 7 miss\n2022-11-26 08:50:00 9 7 miss\n2022-11-26 09:00:00 2 2 hit\n2022-11-26 09:10:00 6 5 miss\n2022-11-26 09:20:00 7 4 miss\n2022-11-26 09:30:00 4 1 miss\n\nIf you really need them to stay seperated, just add:\ndf1_new = combined[['A']]\ndf2_new = combined[['B']]\n\n"
] |
[
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074585368_pandas_python.txt
|
Q:
UserWarning: X does not have valid feature names, but DecisionTreeClassifier was fitted with feature names
I am learning machine learning from the Programming with Mosh channel.
I got the desired output in this case.
output=array(['HipHop', 'Acoustic', 'Classical'], dtype=object)
but there is a warning like this and I cannot find which part is wrong.
C:\Users\User\anaconda3\lib\site-packages\sklearn\base.py:450: UserWarning: X does not have valid feature names, but DecisionTreeClassifier was fitted with feature names
warnings.warn(
Do you know how I can correct this?
Code:
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
music_data=pd.read_csv('C:\\Users\\User\\Desktop\\machine learning tutorial\\Python Tutorial Supplementary Materials\\music.csv')
y=music_data['genre']
X=music_data.drop(columns=['genre'])
model = DecisionTreeClassifier()
model.fit(X,y)
predictions=model.predict([[22,1],[26,0],[39,1]])
predictions
A:
After line 5, before "model = DecisionTreeClassifier" add two more lines:
X = X.values
y = y.values
A more in-depth solution and explanation can be found here:
UserWarning: X does not have valid feature names, but LogisticRegression was fitted with feature names
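Alternatively, keep fitting on the DataFrame and pass a DataFrame with matching column names at prediction time instead of a bare list, since the warning is triggered by predicting on input that carries no feature names. A sketch reusing X and model from the question:
X_new = pd.DataFrame([[22, 1], [26, 0], [39, 1]], columns=X.columns)
predictions = model.predict(X_new)  # no UserWarning: X_new carries the feature names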
|
UserWarning: X does not have valid feature names, but DecisionTreeClassifier was fitted with feature names
|
I am learning machine learning from the Programming with Mosh channel.
I got the desired output in this case.
output=array(['HipHop', 'Acoustic', 'Classical'], dtype=object)
but there is a warning like this and I cannot find which part is wrong.
C:\Users\User\anaconda3\lib\site-packages\sklearn\base.py:450: UserWarning: X does not have valid feature names, but DecisionTreeClassifier was fitted with feature names
warnings.warn(
Do you know how I can correct this?
Code:
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
music_data=pd.read_csv('C:\\Users\\User\\Desktop\\machine learning tutorial\\Python Tutorial Supplementary Materials\\music.csv')
y=music_data['genre']
X=music_data.drop(columns=['genre'])
model = DecisionTreeClassifier()
model.fit(X,y)
predictions=model.predict([[22,1],[26,0],[39,1]])
predictions
|
[
"After line 5, before \"model = DecisionTreeClassifier\" add two more lines:\nX = X.values\ny = y.values\n\nA more in-depth solution and explanation can be found here:\nUserWarning: X does not have valid feature names, but LogisticRegression was fitted with feature names\n"
] |
[
0
] |
[] |
[] |
[
"decision_tree",
"python",
"scikit_learn"
] |
stackoverflow_0073914558_decision_tree_python_scikit_learn.txt
|
Q:
How to extract only the pixels of an image where it is masked? (Python numpy array operation)
I have an image and its corresponding mask for the cob as numpy arrays:
The image numpy array has shape (332, 107, 3).
The mask is Boolean (consists of True/False) and has the shape (332, 107).
[[False False False ... False False False]
[False False False ... False False False]
[False False False ... False False False]
...
[False False False ... False False False]
[False False False ... False False False]
[False False False ... False False False]]
How can I get the color pixels of the cob (all pixels in the color image where the mask is True)?
A:
Thanks to the useful comment of M.Setchell, I was able to find the answer myself.
Basically, I had to expand the dimensions of the mask array (2D) to the same dimensions as the image (3D with 3 color channels).
y=np.expand_dims(mask,axis=2)
newmask=np.concatenate((y,y,y),axis=2)
Then I had to simply multiply the new mask with the image to get the colored mask:
cob = img * newmask
And here just for visualization the result:
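As a side note, NumPy broadcasting gives the same result without building the concatenated mask. A minimal sketch, assuming img has shape (H, W, 3) and mask has shape (H, W):
cob = img * mask[:, :, np.newaxis]  # broadcast the 2D mask across all 3 color channels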
A:
If you want to get an array of the pixels, i.e. array with shape (n,3):
# assuming mask.shape = (h, w) and mask.dtype = bool
pixels = img[mask]
and if you want to produce the image in your answer then simply do this:
cop = img.copy()
cop[~mask] = 0  # zero out everything outside the mask
|
How to extract only the pixels of an image where it is masked? (Python numpy array operation)
|
I have an image and its corresponding mask for the cob as numpy arrays:
The image numpy array has shape (332, 107, 3).
The mask is Boolean (consists of True/False) and has the shape (332, 107).
[[False False False ... False False False]
[False False False ... False False False]
[False False False ... False False False]
...
[False False False ... False False False]
[False False False ... False False False]
[False False False ... False False False]]
How can I get the color pixels of the cob (all pixels in the color image where the mask is True)?
|
[
"Thanks to the useful comment of M.Setchell, I was able to find the answer myself.\nBasically, I had to expand the dimensions of the mask array (2D) to the same dimension of the image (3D with 3 color channels).\ny=np.expand_dims(mask,axis=2)\nnewmask=np.concatenate((y,y,y),axis=2)\n\nThen I had to simply multiply the new mask with the image to get the colored mask:\ncob= img * newmask\n\nAnd here just for visualization the result:\n\n",
"If you want to get an array of the pixels, i.e. array with shape (n,3):\n#assuming mask.shape = (h,w) , and mask.dtype = bool\npixels = img[[mask]]\n\nand if you want to produce the image in your answer then simply do this:\ncop = img.copy()\ncop[mask] = 0\n\n"
] |
[
5,
0
] |
[] |
[] |
[
"image",
"mask",
"numpy",
"python"
] |
stackoverflow_0059160337_image_mask_numpy_python.txt
|
Q:
Call a function to replace every regex in a file
I have some entries in a file and I want to modify specific values that match a regex. This file must remain identical to the original except for the values I want to replace.
Here is my file:
dn: uid=alan,cn=users,dc=mysite,dc=dc=com
objectclass: organizationalPerson
objectclass: person
objectclass: top
objectclass: inetOrgPerson
uid: wpsadmin
userpassword: wpsadmin
specificattribute: abc123, cvb765
sn: admin
givenName: fgh876
cn: wps admin
dn: uid=alice,cn=users,dc=mysite,dc=dc=com
objectclass: organizationalPerson
objectclass: person
objectclass: top
objectclass: inetOrgPerson
uid: wasadmin
userpassword: wasadmin
specificattribute: def456
sn: admin
givenName: aaa000
cn: was admin
dn: uid=lana,cn=users,dc=mysite,dc=dc=com
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
uid: wpsbind
userpassword: wpsbind
specificattribute: ghi789
sn: bind
givenName: wps
cn: wps bind
I want to replace each value like aaa000 with another one stored in a dict.
#!/usr/bin/env python3
import re
Dict = {'abc123': 'zzz999', 'cde456': 'xxx888', 'fgh789': 'www777'} # and so on...
def replacement(val):
    val2 = Dict.get(val)
    return val2  # return the replacement value rather than printing it
I've found a solution to match the regex, but not to call the function named 'replacement':
with open('file.txt', "r+") as f:
content = f.read()
content_new = re.sub('[a-z]{3}[0-9]{3}', r'abc123', content)
f.seek(0)
f.write(content_new)
f.truncate()
This code changes every regex match to abc123, but this is not what I want.
A:
You can try the following:
with open('file.txt', 'r') as f:
    content = f.read()

for old_value, new_value in Dict.items():
    content = content.replace(old_value, new_value)

with open('file.txt', 'w') as f:
    f.write(content)
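If you want to keep the regex and have a function called for every match, re.sub also accepts a callable. A sketch using the Dict from the question; matches with no dict entry are left unchanged:
import re

def replacement(match):
    # look up the matched text; fall back to the match itself
    return Dict.get(match.group(0), match.group(0))

with open('file.txt', 'r') as f:
    content = f.read()

content = re.sub(r'[a-z]{3}[0-9]{3}', replacement, content)

with open('file.txt', 'w') as f:
    f.write(content)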
|
Call a function to replace every regex in a file
|
I have some entries in a file and I want to modify specific values that match a regex. This file must remain identical to the original except for the values I want to replace.
Here is my file:
dn: uid=alan,cn=users,dc=mysite,dc=dc=com
objectclass: organizationalPerson
objectclass: person
objectclass: top
objectclass: inetOrgPerson
uid: wpsadmin
userpassword: wpsadmin
specificattribute: abc123, cvb765
sn: admin
givenName: fgh876
cn: wps admin
dn: uid=alice,cn=users,dc=mysite,dc=dc=com
objectclass: organizationalPerson
objectclass: person
objectclass: top
objectclass: inetOrgPerson
uid: wasadmin
userpassword: wasadmin
specificattribute: def456
sn: admin
givenName: aaa000
cn: was admin
dn: uid=lana,cn=users,dc=mysite,dc=dc=com
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
uid: wpsbind
userpassword: wpsbind
specificattribute: ghi789
sn: bind
givenName: wps
cn: wps bind
I want to replace each value like aaa000 with another one stored in a dict.
#!/usr/bin/env python3
import re
Dict = {'abc123': 'zzz999', 'cde456': 'xxx888', 'fgh789': 'www777'} # and so on...
def replacement(val):
    val2 = Dict.get(val)
    return val2  # return the replacement value rather than printing it
I've found a solution to match the regex, but not to call the function named 'replacement':
with open('file.txt', "r+") as f:
content = f.read()
content_new = re.sub('[a-z]{3}[0-9]{3}', r'abc123', content)
f.seek(0)
f.write(content_new)
f.truncate()
This code changes every regex match to abc123, but this is not what I want.
|
[
"You can try the following:\nwith open('file.txt', 'r') as f:\n content = f.read()\n\nfor old_value, new_value in Dict.items():\n content = content.replace(old_value, new_value)\n\nwith open('file.txt', 'w') as f:\n f.write(content)\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0074585539_python_regex.txt
|
Q:
Downloading Shares Gives "JSONDecodeError: Expecting value: line 1 column 1 (char 0)"
I am downloading shares from Yahoo Finance. There is this error prompt:
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
I have checked all the similar questions and applied their solutions, but all in vain; that's why I am asking this question again, specifying my problem.
I am downloading share details of the top 100 current stocks of Nasdaq using this repo, which is basically for the prediction of stock shares based on financial analysis. There is this stage of downloading shares and saving them as a Pandas DataFrame. The code is:
shares = []
tickers_done = []
for ticker in tqdm(tickers):
    if ticker in tickers_done:
        continue
    d = requests.get(f"https://query1.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/{ticker}?symbol={ticker}&padTimeSeries=true&type=annualPreferredSharesNumber,annualOrdinarySharesNumber&merge=false&period1=0&period2=2013490868")
    if not d.ok:
        time.sleep(300)
        d = requests.get(f"https://query1.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/{ticker}?symbol={ticker}&padTimeSeries=true&type=annualPreferredSharesNumber,annualOrdinarySharesNumber&merge=false&period1=0&period2=2013490868")
    ctn = d.json()['timeseries']['result']
    dct = dict()
    for n in ctn:
        type = n['meta']['type'][0]
        dct[type] = dict()
        if type in n:
            for o in n[type]:
                if o is not None:
                    dct[type][o['asOfDate']] = o['reportedValue']['raw']
    df = pd.DataFrame.from_dict(dct)
    df['symbol'] = ticker
    shares.append(df)
    tickers_done.append(ticker)
    time.sleep(1)

# save dataframe
df = pd.concat(shares)
df['date'] = df.index
df.to_csv(f"data/{project}/shares.csv", index=False)
And the error screen shot is:
And I have checked each ticker request's status_code as:
for x in tickers:
    d = requests.get(f"https://query1.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/{x}?symbol={x}&padTimeSeries=true&type=annualPreferredSharesNumber,annualOrdinarySharesNumber&merge=false&period1=0&period2=2013490868")
    print(d.status_code)
The results are all 403. Your kind help will be highly appreciated. Thanks!
But opening the link https://query1.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/AAPL?symbol=AAPL&padTimeSeries=true&type=annualPreferredSharesNumber,annualOrdinarySharesNumber&merge=false&period1=0&period2=2013490868 in Chrome, with one of the tickers (like AAPL) filled in, gives some data, like:
A:
The error
Per your error trace, the below line is throwing an error:
ctn = d.json()['timeseries']['result']
The error is trying to tell you that the data in d is not JSON-formatted:
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
So basically the very first character (line 1, column 1) is not the expected { that starts JSON.
Add a print/breakpoint prior to the line above and see for yourself what data is in d (and whether or not it contains valid JSON).
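For example, a quick check just before the failing line:
print(d.status_code)
print(d.text[:200])  # the raw body; a 403 error page is not JSON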
The cause
As you point out towards the end:
The results are all 403. Your kind help will be highly appreciated. Thanks!
Since the requests.get calls are being rejected by the server, the responses don't contain valid JSON.
Find out why the server is rejecting your calls (403 suggests you don't have access to the requested resource); start with the most basic call that returns [any] data and then add bits until you either get a working query, or an error.
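Yahoo's endpoints often reject requests that lack a browser-like User-Agent header, so that is worth checking first. A sketch, where url stands for the query URL from the question and the header value is just an example:
headers = {"User-Agent": "Mozilla/5.0"}
d = requests.get(url, headers=headers)
print(d.status_code)  # 200 here would mean the missing header was the problem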
|
Downloading Shares Gives "JSONDecodeError: Expecting value: line 1 column 1 (char 0)"
|
I am downloading shares from Yahoo Finance. There is this error prompt:
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
I have checked all the similar questions and applied their solutions, but all in vain; that's why I am asking this question again, specifying my problem.
I am downloading share details of the top 100 current stocks of Nasdaq using this repo, which is basically for the prediction of stock shares based on financial analysis. There is this stage of downloading shares and saving them as a Pandas DataFrame. The code is:
shares = []
tickers_done = []
for ticker in tqdm(tickers):
    if ticker in tickers_done:
        continue
    d = requests.get(f"https://query1.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/{ticker}?symbol={ticker}&padTimeSeries=true&type=annualPreferredSharesNumber,annualOrdinarySharesNumber&merge=false&period1=0&period2=2013490868")
    if not d.ok:
        time.sleep(300)
        d = requests.get(f"https://query1.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/{ticker}?symbol={ticker}&padTimeSeries=true&type=annualPreferredSharesNumber,annualOrdinarySharesNumber&merge=false&period1=0&period2=2013490868")
    ctn = d.json()['timeseries']['result']
    dct = dict()
    for n in ctn:
        type = n['meta']['type'][0]
        dct[type] = dict()
        if type in n:
            for o in n[type]:
                if o is not None:
                    dct[type][o['asOfDate']] = o['reportedValue']['raw']
    df = pd.DataFrame.from_dict(dct)
    df['symbol'] = ticker
    shares.append(df)
    tickers_done.append(ticker)
    time.sleep(1)

# save dataframe
df = pd.concat(shares)
df['date'] = df.index
df.to_csv(f"data/{project}/shares.csv", index=False)
And the error screen shot is:
And I have checked each ticker request's status_code as:
for x in tickers:
    d = requests.get(f"https://query1.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/{x}?symbol={x}&padTimeSeries=true&type=annualPreferredSharesNumber,annualOrdinarySharesNumber&merge=false&period1=0&period2=2013490868")
    print(d.status_code)
The results are all 403. Your kind help will be highly appreciated. Thanks!
But opening the link https://query1.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/AAPL?symbol=AAPL&padTimeSeries=true&type=annualPreferredSharesNumber,annualOrdinarySharesNumber&merge=false&period1=0&period2=2013490868 in Chrome, with one of the tickers (like AAPL) filled in, gives some data, like:
|
[
"The error\nPer your error trace, the below line is throwing an error:\n\nctn = d.json()['timeseries']['result']\n\nThe error is trying to tell you that the data in d is not JSON-formatted:\n\nJSONDecodeError: Expecting value: line 1 column 1 (char 0)\nSo basically the very first character (line 1, column 1) is not the expected { that starts JSON.\n\nAdd a print/breakpoint prior to the line above and see for yourself what data is in d (and whether or not it contains valid JSON).\nThe cause\nAs you point out towards the end:\n\nThe results are all 403. Your kind help will be highly appreciable. Thanks!\n\nSince the requests.get calls are being rejected by the server, the responses don't contain valid JSON.\nFind out why the server is rejecting your calls (403 suggests you don't have access to the requested resource); start with the most basic call that returns [any] data and then add bits until you either get a working query, or an error.\n"
] |
[
0
] |
[] |
[] |
[
"http_status_code_403",
"json",
"python",
"yahoo_finance",
"yfinance"
] |
stackoverflow_0074585557_http_status_code_403_json_python_yahoo_finance_yfinance.txt
|
Q:
`DefaultCredentialsError` when attempting to import google cloud libraries in python
I am attempting to import the google-cloud and bigquery libraries and am running into a default credentials error. I have attempted to set the credentials by downloading the JSON file from the cloud portal and specifying the path to the file.
## Google Big Query
%reload_ext google.cloud.bigquery
from google.cloud import bigquery
bqclient = bigquery.Client(project = "dat-exp")
os.environ.setdefault("GCLOUD_PROJECT", "dat-exp")
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/xxxxx.json"
---------------------------------------------------------------------------
DefaultCredentialsError Traceback (most recent call last)
/tmp/ipykernel_2944/2163850103.py in <cell line: 81>()
79 get_ipython().run_line_magic('reload_ext', 'google.cloud.bigquery')
80 from google.cloud import bigquery
---> 81 bqclient = bigquery.Client(project = "dat-exp")
82 os.environ.setdefault("GCLOUD_PROJECT", "dat-exp")
DefaultCredentialsError: Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://cloud.google.com/docs/authentication/getting-started
A:
Application Default Credentials (ADC) is a method of searching for credentials.
Your code is setting the environment after the client has attempted to locate credentials. That means the client failed to locate credentials before you set up credentials. A quick solution is to move the line with bigquery.Client(...) to be after the os.environ(...) lines.
os.environ.setdefault("GCLOUD_PROJECT", "dat-exp")
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/xxxxx.json"
bqclient = bigquery.Client(project = "dat-exp")
I do not recommend the method that you are using (modifying the environment inside the program). Either modify the environment before the program starts or specify the credentials to use when creating the client with bigquery.Client().
from google.cloud import bigquery
from google.oauth2 import service_account
key_path = "path/to/service_account.json"
credentials = service_account.Credentials.from_service_account_file(
    key_path, scopes=["https://www.googleapis.com/auth/cloud-platform"])
client = bigquery.Client(credentials=credentials, project='dat-exp')
Provide credentials for Application Default Credentials
However, the correct method of specifying credentials depends on where you are deploying your code. For example, applications can fetch credentials from the compute metadata service when deployed in Google Cloud.
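To see which credentials ADC actually resolves in a given environment, you can ask the google-auth library directly (a quick sketch; this raises DefaultCredentialsError if nothing is found):
import google.auth

credentials, project_id = google.auth.default()
print(project_id)  # the project the resolved credentials belong to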
|
`DefaultCredentialsError` when attempting to import google cloud libraries in python
|
I am attempting to import the google-cloud and bigquery libraries and am running into a default credentials error. I have attempted to set the credentials by downloading the JSON file from the cloud portal and specifying the path to the file.
## Google Big Query
%reload_ext google.cloud.bigquery
from google.cloud import bigquery
bqclient = bigquery.Client(project = "dat-exp")
os.environ.setdefault("GCLOUD_PROJECT", "dat-exp")
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/xxxxx.json"
---------------------------------------------------------------------------
DefaultCredentialsError Traceback (most recent call last)
/tmp/ipykernel_2944/2163850103.py in <cell line: 81>()
79 get_ipython().run_line_magic('reload_ext', 'google.cloud.bigquery')
80 from google.cloud import bigquery
---> 81 bqclient = bigquery.Client(project = "dat-exp")
82 os.environ.setdefault("GCLOUD_PROJECT", "dat-exp")
DefaultCredentialsError: Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://cloud.google.com/docs/authentication/getting-started
|
[
"Default Credentials (ADC) is a method of searching for credentials.\nYour code is setting the environment after the client has attempted to locate credentials. That means the client failed to locate credentials before you set up credentials. A quick solution is to move the line with bigquery.Client(...) to be after the os.environ(...) lines.\nos.environ.setdefault(\"GCLOUD_PROJECT\", \"dat-exp\")\nos.environ[\"GOOGLE_APPLICATION_CREDENTIALS\"] = \"path/xxxxx.json\"\nbqclient = bigquery.Client(project = \"dat-exp\")\n\nI do not recommend the method that you are using (modify the environment inside the program). Either modify the environment before the program starts or specify the credentials to use when creating the client bigquery.Client().\nfrom google.cloud import bigquery\nfrom google.oauth2 import service_account\n\nkey_path = \"path/to/service_account.json\"\n\ncredentials = service_account.Credentials.from_service_account_file(\n key_path, scopes=[\"https://www.googleapis.com/auth/cloud-platform\"])\n\nclient = bigquery.Client(credentials=credentials, project='dat-exp')\n\nProvide credentials for Application Default Credentials\nHowever, the correct method of specifying credentials depends on where you are deploying your code. For example, applications can fetch credentials from the compute metadata service when deployed in Google Cloud.\n"
] |
[
1
] |
[] |
[] |
[
"google_bigquery",
"google_cloud_platform",
"python"
] |
stackoverflow_0074585427_google_bigquery_google_cloud_platform_python.txt
|
Q:
Python datetime.fromisoformat requires colon in tzinfo?
I am parsing date+time values in isoformat generated by another application, which look like this:
"2022-07-31T01:51:05-0400"
Note: The time zone info part is "-0400", without a colon.
But I can't seem to use datetime.fromisoformat to do this:
# This works fine (with a colon):
datetime.fromisoformat("2022-07-31T01:51:05-04:00")
datetime.datetime(2022, 7, 31, 1, 51, 5, tzinfo=datetime.timezone(datetime.timedelta(days=-1, seconds=72000)))
# But this does not (without a colon)
datetime.fromisoformat("2022-07-31T01:51:05-0400")
Traceback (most recent call last):
File "/usr/lib/python3.10/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
ValueError: Invalid isoformat string: '2000-10-20T01:02:03-0400'
The definition of valid ISO 8601 datetime formats explicitly includes strings like "-0400":
Time offsets from UTC
The UTC offset is appended to the time in the same way that 'Z' was above, in the form ±[hh]:[mm], ±[hh][mm], or ±[hh].
Negative UTC offsets describe a time zone west of UTC±00:00, where the civil time is behind (or earlier) than UTC so the zone designator will look like "−03:00","−0300", or "−03".
Positive UTC offsets describe a time zone at or east of UTC±00:00, where the civil time is the same as or ahead (or later) than UTC so the zone designator will look like "+02:00","+0200", or "+02".
Source: https://en.wikipedia.org/wiki/ISO_8601#Time_offsets_from_UTC
and the Python datetime documentation says
classmethod datetime.fromisoformat(date_string)
Return a datetime corresponding to a date_string in any valid ISO 8601 format, with the following exceptions:
Time zone offsets may have fractional seconds.
The T separator may be replaced by any single unicode character.
Ordinal dates are not currently supported.
Fractional hours and minutes are not supported.
The non-colon form does not fall into any of the prohibited categories.
I know I can do something like this:
datetime.strptime("2022-07-31T01:51:05-0400", '%Y-%m-%dT%H:%M:%S%z')
where I explicitly indicate the datetime format, but the whole point of wanting to use fromisoformat() is so that I don't have to look up the format codes and spell it out explicitly.
Is there some way to do this and still use fromisoformat?
A:
You could do a workaround by making sure the colon is in the UTC offset portion of the string:
def fix_iso(s):
    pos = len("2022-07-31T01:51:05-0400") - 2  # take off end "00"
    if len(s) == pos:  # missing minutes completely
        s += ":00"
    elif s[pos:pos+1] != ':':  # missing UTC offset colon
        s = f"{s[:pos]}:{s[pos:]}"
    return s
Just process the string with this before feeding to fromisoformat(). This takes care of missing colon and also when there are no minutes in the offset.
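For example:
from datetime import datetime

dt = datetime.fromisoformat(fix_iso("2022-07-31T01:51:05-0400"))
print(dt)  # 2022-07-31 01:51:05-04:00
Also note that Python 3.11 broadened fromisoformat() to accept most valid ISO 8601 strings, so on 3.11+ the original "-0400" form should parse without any preprocessing.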
From what the OP and other commenters have stated, it sounds like Python is overclaiming standards compliance...
|
Python datetime.fromisoformat requires colon in tzinfo?
|
I am parsing date+time values in isoformat generated by another application, which look like this:
"2022-07-31T01:51:05-0400"
Note: The time zone info part is "-0400", without a colon.
But I can't seem to use datetime.fromisoformat to do this:
# This works fine (with a colon):
datetime.fromisoformat("2022-07-31T01:51:05-04:00")
datetime.datetime(2022, 7, 31, 1, 51, 5, tzinfo=datetime.timezone(datetime.timedelta(days=-1, seconds=72000)))
# But this does not (without a colon)
datetime.fromisoformat("2022-07-31T01:51:05-0400")
Traceback (most recent call last):
File "/usr/lib/python3.10/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
ValueError: Invalid isoformat string: '2000-10-20T01:02:03-0400'
The definition of valid ISO 8601 datetime formats explicitly includes strings like "-0400":
Time offsets from UTC
The UTC offset is appended to the time in the same way that 'Z' was above, in the form ±[hh]:[mm], ±[hh][mm], or ±[hh].
Negative UTC offsets describe a time zone west of UTC±00:00, where the civil time is behind (or earlier) than UTC so the zone designator will look like "−03:00","−0300", or "−03".
Positive UTC offsets describe a time zone at or east of UTC±00:00, where the civil time is the same as or ahead (or later) than UTC so the zone designator will look like "+02:00","+0200", or "+02".
Source: https://en.wikipedia.org/wiki/ISO_8601#Time_offsets_from_UTC
and the Python datetime documentation says
classmethod datetime.fromisoformat(date_string)
Return a datetime corresponding to a date_string in any valid ISO 8601 format, with the following exceptions:
Time zone offsets may have fractional seconds.
The T separator may be replaced by any single unicode character.
Ordinal dates are not currently supported.
Fractional hours and minutes are not supported.
The non-colon form does not fall into any of the prohibited categories.
I know I can do something like this:
datetime.strptime("2022-07-31T01:51:05-0400", '%Y-%m-%dT%H:%M:%S%z')
where I explicitly indicate the datetime format, but the whole point of wanting to use fromisoformat() is so that I don't have to look up the format codes and spell it out explicitly.
Is there some way to do this and still use fromisoformat?
|
[
"You could do a workaround by making sure the colon is in the UTC offset portion of the string:\ndef fix_iso(s):\n pos = len(\"2022-07-31T01:51:05-0400\") - 2 # take off end \"00\"\n if len(s) == pos: # missing minutes completely\n s += \":00\"\n elif s[pos:pos+1] != ':': # missing UTC offset colon\n s = f\"{s[:pos]}:{s[pos:]}\"\n return s\n\nJust process the string with this before feeding to fromisoformat(). This takes care of missing colon and also when there are no minutes in the offset.\nFrom what the OP and other commenters have stated, it sounds like Python is overclaiming standards compliance...\n"
] |
[
0
] |
[] |
[] |
[
"datetime",
"iso8601",
"python"
] |
stackoverflow_0074585548_datetime_iso8601_python.txt
|
Q:
Sprite not being drawn
I am a beginner programmer practicing with Python. I am trying to make a simple game; what I have so far is just adding the character sprite onto the map. What I'm trying to do is have the character sprite continuously switch between two animations when no keys are being pressed.
When running the game, it just loads up a black screen and the game freezes if you click on the screen.
CODE:
class Character(pygame.sprite.Sprite):
    clock = pygame.time.Clock()
    sprite_frame = 0

    def __init__(self):
        self.standing_right_frame1 = pygame.image.load("C:\\...\\character1_standing_facing_right_1.png")
        self.standing_right_frame2 = pygame.image.load("C:\\...\\character1_standing_facing_right_2.png")
        self.standing_right = [self.standing_right_frame1, self.standing_right_frame2]
        self.rect1 = self.standing_right_frame1.get_rect()
        self.rect2 = self.standing_right_frame2.get_rect()
        self.rect1.center = (300, 200)
        self.rect2.center = (300, 200)

    def update(self):
        game_running = True
        self.sprite_frame = 0
        while game_running:
            if self.sprite_frame >= len(self.standing_right):
                self.sprite_frame = 0
            self.character_sprite = self.standing_right[self.sprite_frame]
            self.sprite_frame = self.sprite_frame + 1

    def draw(self, display):
        self.game_screen = display
        self.game_screen.blit(self.character_sprite, self.rect)
pygame.init()
character = Character()
#GAME COLORS:
color_black = pygame.Color(0, 0, 0)
color_white = pygame.Color(255, 255, 255)
color_light_grey = pygame.Color(212, 212, 212)
color_grey = pygame.Color(150, 150, 150)
color_dark_grey = pygame.Color(100, 100, 100)
color_dark_green = pygame.Color(41, 169, 48)
#GAME INITIALIZATION
pygame.display.set_caption("My First Game")
screen_width = 800
screen_height = 600
game_screen = pygame.display.set_mode((800, 600))
game_screen.fill(color_white)
game_clock = pygame.time.Clock()
game_clock.tick(60)
game_running = True
# MAIN GAME LOOP
while game_running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
        else:
            character.update()
            character.draw(game_screen)
A:
You must do the update in the application loop and not in the event loop. The application loop is executed once per frame, the event loop is executed only when an event occurs. Also clear the display and update the display every frame and limit the frames per second with pygame.time.Clock.tick.
And very importantly, don't try to animate anything with a loop inside the application loop. You don't need this inner loop at all, because the application loop runs continuously.
Also see How can I make a sprite move when key is held down and use pygame.key.get_pressed() to detect if any key is held down:
class Character(pygame.sprite.Sprite):
    def __init__(self):
        # [...]

        self.rect = self.rect1
        self.character_sprite = self.standing_right_frame1

    def update(self):
        sprite_index = self.sprite_frame // 30
        if sprite_index >= len(self.standing_right):
            self.sprite_frame = 0
            sprite_index = 0
        self.character_sprite = self.standing_right[sprite_index]
        self.sprite_frame = self.sprite_frame + 1

clock = pygame.time.Clock()
while game_running:
    clock.tick(60)
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            game_running = False

    keys = pygame.key.get_pressed()
    if any(keys):
        character.update()

    game_screen.fill(color_white)
    character.draw(game_screen)
    pygame.display.flip()

pygame.quit()
Additionally I recommend reading Animated sprite from few images and How do I create animated sprites using Sprite Sheets in Pygame?
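As a further option, the animation can be driven by elapsed time instead of a per-frame counter, which keeps its speed independent of the frame rate. A minimal sketch of update(), assuming the standing_right list from the question:
def update(self):
    frame_ms = 250  # milliseconds per animation frame (an arbitrary choice)
    sprite_index = (pygame.time.get_ticks() // frame_ms) % len(self.standing_right)
    self.character_sprite = self.standing_right[sprite_index]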
|
Sprite not being drawn
|
I am a beginner programmer practicing with Python. I am trying to make a simple game; what I have so far is just adding the character sprite onto the map. What I'm trying to do is have the character sprite continuously switch between two animations when no keys are being pressed.
When running the game, it just loads up a black screen and the game freezes if you click on the screen.
CODE:
class Character(pygame.sprite.Sprite):
    clock = pygame.time.Clock()
    sprite_frame = 0

    def __init__(self):
        self.standing_right_frame1 = pygame.image.load("C:\\...\\character1_standing_facing_right_1.png")
        self.standing_right_frame2 = pygame.image.load("C:\\...\\character1_standing_facing_right_2.png")
        self.standing_right = [self.standing_right_frame1, self.standing_right_frame2]
        self.rect1 = self.standing_right_frame1.get_rect()
        self.rect2 = self.standing_right_frame2.get_rect()
        self.rect1.center = (300, 200)
        self.rect2.center = (300, 200)

    def update(self):
        game_running = True
        self.sprite_frame = 0
        while game_running:
            if self.sprite_frame >= len(self.standing_right):
                self.sprite_frame = 0
            self.character_sprite = self.standing_right[self.sprite_frame]
            self.sprite_frame = self.sprite_frame + 1

    def draw(self, display):
        self.game_screen = display
        self.game_screen.blit(self.character_sprite, self.rect)
pygame.init()
character = Character()
#GAME COLORS:
color_black = pygame.Color(0, 0, 0)
color_white = pygame.Color(255, 255, 255)
color_light_grey = pygame.Color(212, 212, 212)
color_grey = pygame.Color(150, 150, 150)
color_dark_grey = pygame.Color(100, 100, 100)
color_dark_green = pygame.Color(41, 169, 48)
#GAME INITIALIZATION
pygame.display.set_caption("My First Game")
screen_width = 800
screen_height = 600
game_screen = pygame.display.set_mode((800, 600))
game_screen.fill(color_white)
game_clock = pygame.time.Clock()
game_clock.tick(60)
game_running = True
# MAIN GAME LOOP
while game_running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
        else:
            character.update()
            character.draw(game_screen)
|
[
"You must do the update in the application loop and not in the event loop. The application loop is executed once per frame, the event loop is executed only when an event occurs. Also clear the display and update the display every frame and limit the frames per second with pygame.time.Clock.tick.\nAnd very importantly, don't try to animate anything with a loop inside the application loop. You don't need this inner loop at all, because the application loop runs continuously.\nAlso see How can I make a sprite move when key is held down and use pygame.key.get_pressed() to detect if any key is hold down:\nclass Character(pygame.sprite.Sprite):\n def __init__(self):\n # [...]\n\n self.rect = self.rect1\n self.character_sprite = self.standing_right_frame1\n\n def update(self):\n sprite_index = self.sprite_frame // 30\n if sprite_index >= len(self.standing_right):\n self.sprite_frame = 0\n sprite_index = 0\n self.character_sprite = self.standing_right[sprite_index]\n self.sprite_frame = self.sprite_frame + 1\n\nclock = pygame.time.Clock()\nwhile game_running:\n clock.tick(60)\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n game_running = False \n \n keys = pygame.key.get_pressed()\n if any(keys):\n character.update()\n\n game_screen.fill(color_white)\n character.draw(game_screen)\n pygame.display.flip()\n\npygame.quit()\n\n\nAdditionally I recommend the reading Animated sprite from few images and How do I create animated sprites using Sprite Sheets in Pygame?\n"
] |
[
2
] |
[] |
[] |
[
"pygame",
"pygame_surface",
"python"
] |
stackoverflow_0074585698_pygame_pygame_surface_python.txt
|
Q:
Randomly Replacing Characters in String with Character Other than the Current Character
Suppose that I have a string that I would like to modify at random with a defined set of options from another string. First, I created my original string and the potential replacement characters:
string1 = "abcabcabc"
replacement_chars = "abc"
Then I found this function on a forum that will randomly replace n characters:
def randomlyChangeNChar(word, value):
    length = len(word)
    word = list(word)
    # This will select the distinct index for us to replace
    k = random.sample(range(0, length), value)
    for index in k:
        # This will replace the characters at the specified index with the generated characters
        word[index] = random.choice(replacement_chars)
    # Finally print the string in the modified format.
    return "".join(word)
This code does what I want with one exception -- it does not account for characters in string1 that match the random replacement character. I understand that the problem is in the function that I am trying to adapt, I suspect under the for loop, but I am unsure what to add to prevent the substituting character from equaling the old character from string1. All advice appreciated; if I'm overcomplicating things, please educate me!
A:
In the function you retrieved, replacing:
word[index] = random.choice(replacement_chars)
with
word[index] = random.choice(replacement_chars.replace(word[index], ''))
will do the job. It simply replaces word[index] (the char you want to replace) with an empty string in the replacement_chars string, effectively removing it from the replacement characters.
Another approach, which will predictably be less efficient on average, is to redraw until you get a character different from the original one:
that is, replacing:
word[index] = random.choice(replacement_chars)
with
char = word[index]
while char == word[index]:
    char = random.choice(replacement_chars)
word[index] = char
or
while True:
    char = random.choice(replacement_chars)
    if char != word[index]:
        word[index] = char
        break
WARNING: if replacement_chars only features 1 character, both methods would fail when the original character is the same as the replacement one!
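Putting the first approach together, a minimal runnable sketch (assuming replacement_chars contains at least two distinct characters):
import random

string1 = "abcabcabc"
replacement_chars = "abc"

def randomlyChangeNChar(word, value):
    word = list(word)
    for index in random.sample(range(len(word)), value):
        # choose from every replacement character except the current one
        word[index] = random.choice(replacement_chars.replace(word[index], ''))
    return "".join(word)

print(randomlyChangeNChar(string1, 3))  # e.g. 'bbaabcacc'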
|
Randomly Replacing Characters in String with Character Other than the Current Character
|
Suppose that I have a string that I would like to modify at random with a defined set of options from another string. First, I created my original string and the potential replacement characters:
string1 = "abcabcabc"
replacement_chars = "abc"
Then I found this function on a forum that will randomly replace n characters:
def randomlyChangeNChar(word, value):
length = len(word)
word = list(word)
# This will select the distinct index for us to replace
k = random.sample(range(0, length), value)
for index in k:
# This will replace the characters at the specified index with the generated characters
word[index] = random.choice(replacement_chars)
# Finally print the string in the modified format.
return "".join(word)
This code does what I want with one exception -- it does not account for characters in string1 that match the random replacement character. I understand that the problem is in the function that I am trying to adapt, likely under the for loop, but I am unsure what to add to prevent the substituting character from equaling the old character from string1. All advice is appreciated; if I'm overcomplicating things, please educate me!
|
[
"In the function you retrieved, replacing:\nword[index] = random.choice(replacement_chars)\n\nwith\nword[index] = random.choice(replacement_chars.replace(word[index],'')\n\nwill do the job. It simply replaces word[index] (the char you want to replace) with an empty string in the replacement_chars string, effectively removing it from the replacement characters.\nAnother approach, that will predictably be less efficient on average, is to redraw until you get a different character from the original one:\nthat is, replacing:\nword[index] = random.choice(replacement_chars)\n\nwith\nchar = word[index]\nwhile char == word[index]:\n char = random.choice(replacement_chars)\nword[index] = char\n\nor\nwhile True:\n char = random.choice(replacement_chars)\n if char != word[index]:\n word[index] = char\n break\n\nWARNING: if replacement_chars only features 1 character, both methods would fail when the original character is the same as the replacement one!\n"
] |
[
3
] |
[] |
[] |
[
"python",
"random",
"string"
] |
stackoverflow_0074585606_python_random_string.txt
|
Q:
combine python lists into sql table
I have two lists which I want to combine into an sql table using pandas.read_sql. I tried using unnest, but it gives me the wrong output. Attempt below:
import pandas as pd
from sqlalchemy import create_engine
engine = create_engine(
"postgresql+psycopg2://postgres:password@localhost:5432/database"
)
list1 = ["a", "b", "c"]
list2 = [1, 2, 3]
# expected output
df_expected = pd.DataFrame({"list1": list1, "list2": list2})
# query
df_query = pd.read_sql_query(
"""
select *
from unnest(%(list1)s) as list1,
unnest(%(list2)s) as list2
""",
con=engine,
params={"list1": list1, "list2": list2},
)
# throws assertion error
assert df_query.equals(df_expected)
A:
df_expected
list1 list2
0 a 1
1 b 2
2 c 3
Your original query:
df_query = pd.read_sql_query(
"""
select *
from unnest(%(list1)s) as list1,
unnest(%(list2)s) as list2
""",
con=engine,
params={"list1": list1, "list2": list2},
)
df_query
list1 list2
0 a 1
1 a 2
2 a 3
3 b 1
4 b 2
5 b 3
6 c 1
7 c 2
8 c 3
So you are doing a cross join between the two unnest sets: every element of list1 gets paired with every element of list2.
What you want:
df_query = pd.read_sql_query(
"""
select unnest(%(list1)s) as list1,
unnest(%(list2)s) as list2
""",
con=engine,
params={"list1": list1, "list2": list2},
)
df_query
list1 list2
0 a 1
1 b 2
2 c 3
#The below succeeds
assert df_query.equals(df_expected)
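As an aside, if the goal is only to pair two in-memory Python lists, the SQL round trip is not needed at all; the same frame can be built directly (this is exactly the df_expected from the question):
import pandas as pd

list1 = ["a", "b", "c"]
list2 = [1, 2, 3]

# Pair the lists positionally, no database involved
df = pd.DataFrame({"list1": list1, "list2": list2})
print(df)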
|
combine python lists into sql table
|
I have two lists which I want to combine into an sql table using pandas.read_sql. I tried using unnest, but it gives me the wrong output. Attempt below:
import pandas as pd
from sqlalchemy import create_engine
engine = create_engine(
"postgresql+psycopg2://postgres:password@localhost:5432/database"
)
list1 = ["a", "b", "c"]
list2 = [1, 2, 3]
# expected output
df_expected = pd.DataFrame({"list1": list1, "list2": list2})
# query
df_query = pd.read_sql_query(
"""
select *
from unnest(%(list1)s) as list1,
unnest(%(list2)s) as list2
""",
con=engine,
params={"list1": list1, "list2": list2},
)
# throws assertion error
assert df_query.equals(df_expected)
|
[
"df_expected \n list1 list2\n0 a 1\n1 b 2\n2 c 3\n\nYour original query:\ndf_query = pd.read_sql_query(\n \"\"\"\n select * \n from unnest(%(list1)s) as list1, \n unnest(%(list2)s) as list2\n \"\"\",\n con=engine,\n params={\"list1\": list1, \"list2\": list2},\n)\n df_query \n list1 list2\n0 a 1\n1 a 2\n2 a 3\n3 b 1\n4 b 2\n5 b 3\n6 c 1\n7 c 2\n8 c 3\n\nSo you are doing a join between the two unnest sets.\nWhat you want:\ndf_query = pd.read_sql_query(\n \"\"\"\n select unnest(%(list1)s) as list1, \n unnest(%(list2)s) as list2\n \"\"\",\n con=engine,\n params={\"list1\": list1, \"list2\": list2},\n)\n df_query \n list1 list2\n0 a 1\n1 b 2\n2 c 3\n\n#The below succeeds\nassert df_query.equals(df_expected)\n\n\n"
] |
[
1
] |
[] |
[] |
[
"pandas",
"postgresql",
"python",
"sql"
] |
stackoverflow_0074585449_pandas_postgresql_python_sql.txt
|
Q:
Cannot display images or play sound in my .exe if it is in a different folder
When I convert my file to an .exe and want, for example, an image to be displayed or a sound to be played in my app, I am forced to put them in the same folder; otherwise the image will not appear and the sound will not play.
How can I display an image (png) / play a sound (an mp3) if my .exe and the resources it uses (the image and the sound) are in different folders?
I've done for example myimageframe = PhotoImage(file='myimage.png') and playsound('mysound.mp3')
I would like to access the resources even on another machine, e.g. if I transfer my .exe to a friend or to another computer of mine.
For the image (an error is raised):
The sound just doesn't play (nothing happens)
A:
file='myimage.png' is what's called a relative path - since there are no folders listed before, the app will look in the same folder where the .exe is executed from.
From the same machine
use absolute paths to the files (e.g. C:\Temp\myImage.png), or
change the working directory to another location at runtime (e.g. os.chdir("C:\Temp"))
From another machine
find a way to embed the files' contents into the .py file (e.g. a base64-encoded string??),
find a way to embed the files' contents into the .exe (no idea here - read the docs for your .exe bundler?)
host the files on the public internet (e.g. a github repo, or file-supporting alternatives to PasteBin) & pull them down using requests
NOTE: that last option will require the running machine to have internet access, and should probably handle things gracefully if it doesn't....
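For the same-machine case, one common pattern (a sketch; the "assets" subfolder is just an example name) is to resolve resource paths relative to the executable itself rather than the current working directory, which keeps things working as long as the resources ship alongside the .exe:
import os
import sys

# Directory containing the running script or bundled executable
base_dir = os.path.dirname(os.path.abspath(sys.argv[0]))

image_path = os.path.join(base_dir, "assets", "myimage.png")
sound_path = os.path.join(base_dir, "assets", "mysound.mp3")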
|
Cannot display images or play sound in my .exe if it is in a different folder
|
When I convert my file to an .exe and want, for example, an image to be displayed or a sound to be played in my app, I am forced to put them in the same folder; otherwise the image will not appear and the sound will not play.
How can I display an image (png) / play a sound (an mp3) if my .exe and the resources it uses (the image and the sound) are in different folders?
I've done for example myimageframe = PhotoImage(file='myimage.png') and playsound('mysound.mp3')
I would like to access the resources even on another machine, e.g. if I transfer my .exe to a friend or to another computer of mine.
For the image (an error is raised):
The sound just doesn't play (nothing happens)
|
[
"file='myimage.png' is what's called a relative path - since there are no folders listed before, the app will look in the same folder where the .exe is executed from.\nFrom the same machine\n\nuse absolute paths to the files (e.g. C:\\Temp\\myImage.png), or\nchange the working directory to another location at runtime (e.g. os.chdir(\"C:\\Temp))\n\nFrom another machine\n\nfind a way to embed the files' contents into the .py file (e.g. a base64-encoded string??),\nfind a way to embed the files' contents into the .exe (no idea here - read the docs for your .exe bundler?)\nhost the files on the public internet (e.g. a github repo, or file-supporting alternatives to PasteBin) & pull them down using requests\n\nNOTE: that last option will require the running machine to have internet access, and should probably handle things gracefully if it doesn't....\n"
] |
[
1
] |
[] |
[] |
[
"python",
"tkinter"
] |
stackoverflow_0074585658_python_tkinter.txt
|
Q:
Python Socket Not Reading UDP Packets on Jetson Hardware
I have UDP packets that are being sent to my device via an Ethernet connection. I am attempting to read the data using the socket library in Python (version 3.8.10); however, despite the packets being displayed on the device when I run tcpdump, my Python program never receives the data, and I'm not sure why.
I want to receive the data from my local port 2368; however, it is not working. I will provide more information below.
Here is the code that I'm using to read data from the socket that I'm interested in.
import socket
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as soc:
soc.bind(("", 2368))
data = soc.recv(2000) # I'm using an arbitrary buffer size
print("Received Data!", data)
Main Issue: The hardware that I'm having trouble with running this code on is a Jetson Xavier with Ubuntu 20.04.5 LTS and Gnome 3.36.8 (I'm not exactly too sure what Gnome is). When I try to run the Python code above on this hardware, the soc.recv(2000) function blocks as it is unable to read data from the port, and it never reaches the print statement. However, when I run tcpdump in the command line, I get the following output.
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:07:29.621066 IP 192.168.1.201.2368 > 255.255.255.255.2368: UDP, length 1206
15:07:29.622382 IP 192.168.1.201.2368 > 255.255.255.255.2368: UDP, length 1206
15:07:29.623765 IP 192.168.1.201.2368 > 255.255.255.255.2368: UDP, length 1206
15:07:29.625010 IP 192.168.1.201.2368 > 255.255.255.255.2368: UDP, length 1206
15:07:29.626340 IP 192.168.1.201.2368 > 255.255.255.255.2368: UDP, length 1206
15:07:29.627667 IP 192.168.1.201.2368 > 255.255.255.255.2368: UDP, length 1206
15:07:29.628815 IP 192.168.1.201.8308 > 255.255.255.255.8308: UDP, length 512
15:07:29.628819 IP 192.168.1.201.2368 > 255.255.255.255.2368: UDP, length 1206
15:07:29.630328 IP 192.168.1.201.2368 > 255.255.255.255.2368: UDP, length 1206
15:07:29.631656 IP 192.168.1.201.2368 > 255.255.255.255.2368: UDP, length 1206
10 packets captured
24 packets received by filter
0 packets dropped by kernel
As a result, I believe that the Jetson (the computer I'm on) is receiving the UDP packets via the ethernet connection; however, for some reason, I'm unable to read them from the Python UDP server. If anybody has any suggestions on why this is occurring or how to get it fixed, please let me know. I would just like to read the UDP packets via a Python program. I'm not sure if this is an error due to an incorrect program (I don't think so, given the additional information I provide below), or whether the traffic seen by the Python program and by tcpdump is somehow split, or a firewall is blocking it, such as in this link here. I'm quite unfamiliar with Ubuntu, so if this is a firewall issue, I would appreciate some guidance on how to adjust the firewall to make this work. I've attempted to disable ufw and add a rule as well. Thank you!
Additional Information: I attempted to run this code on a 2019 MacBook Pro running macOS Monterey (12.5.1) with the ethernet connection attached, and it ran perfectly fine: the blocking soc.recv(2000) call returned, and the print statement was reached. Below, I've listed what the tcpdump output looked like when the UDP packets were sent to the MacBook. This makes me believe that my code is correct, but for some reason related to the OS or hardware, I'm unable to simply copy the code over to the Jetson and run it there.
There are some significant differences between the two tcpdump outputs, the Jetson (above, which is not working) and the MacBook (below, which is working), such as the mention of broadcasthost and opentable. I'm not entirely sure what these mean, so any help with my understanding of these would be greatly appreciated as well.
tcpdump: data link type PKTAP
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on pktap, link-type PKTAP (Apple DLT_PKTAP), capture size 262144 bytes
15:47:32.528190 IP 192.168.1.201.opentable > broadcasthost.opentable: UDP, length 1206
15:47:32.529466 IP 192.168.1.201.opentable > broadcasthost.opentable: UDP, length 1206
15:47:32.530802 IP 192.168.1.201.opentable > broadcasthost.opentable: UDP, length 1206
15:47:32.532092 IP 192.168.1.201.opentable > broadcasthost.opentable: UDP, length 1206
15:47:32.533425 IP 192.168.1.201.opentable > broadcasthost.opentable: UDP, length 1206
15:47:32.534734 IP 192.168.1.201.opentable > broadcasthost.opentable: UDP, length 1206
15:47:32.536098 IP 192.168.1.201.opentable > broadcasthost.opentable: UDP, length 1206
15:47:32.536350 IP 192.168.1.201.8308 > broadcasthost.8308: UDP, length 512
15:47:32.537433 IP 192.168.1.201.opentable > broadcasthost.opentable: UDP, length 1206
15:47:32.537868 IP 192.168.1.201.8308 > broadcasthost.8308: UDP, length 512
10 packets captured
16 packets received by filter
0 packets dropped by kernel
If there is any additional information that I can provide or any guidance on how to structure my question on Stackoverflow, please let me know, thank you!
A:
The packets that were being sent to the computer were sent by a Velodyne LiDAR. There are instructions to manually set the IP Address of the ethernet connection that we are receiving data from and following those enabled me to use the code that I wrote above and read packets from the LiDAR.
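For reference, a minimal receive loop for broadcast UDP like this might look as follows (a sketch; SO_REUSEADDR is optional but convenient when restarting the listener):
import socket

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as soc:
    soc.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    soc.bind(("", 2368))  # bind to all interfaces on port 2368
    while True:
        data, addr = soc.recvfrom(2000)
        print(f"Received {len(data)} bytes from {addr}")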
|
Python Socket Not Reading UDP Packets on Jetson Hardware
|
I have UDP packets that are being sent to my device via an Ethernet connection. I am attempting to read the data using the socket library in Python (version 3.8.10); however, despite the packets being displayed on the device when I run tcpdump, my Python program never receives the data, and I'm not sure why.
I want to receive the data from my local port 2368; however, it is not working. I will provide more information below.
Here is the code that I'm using to read data from the socket that I'm interested in.
import socket
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as soc:
soc.bind(("", 2368))
data = soc.recv(2000) # I'm using an arbitrary buffer size
print("Received Data!", data)
Main Issue: The hardware that I'm having trouble with running this code on is a Jetson Xavier with Ubuntu 20.04.5 LTS and Gnome 3.36.8 (I'm not exactly too sure what Gnome is). When I try to run the Python code above on this hardware, the soc.recv(2000) function blocks as it is unable to read data from the port, and it never reaches the print statement. However, when I run tcpdump in the command line, I get the following output.
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:07:29.621066 IP 192.168.1.201.2368 > 255.255.255.255.2368: UDP, length 1206
15:07:29.622382 IP 192.168.1.201.2368 > 255.255.255.255.2368: UDP, length 1206
15:07:29.623765 IP 192.168.1.201.2368 > 255.255.255.255.2368: UDP, length 1206
15:07:29.625010 IP 192.168.1.201.2368 > 255.255.255.255.2368: UDP, length 1206
15:07:29.626340 IP 192.168.1.201.2368 > 255.255.255.255.2368: UDP, length 1206
15:07:29.627667 IP 192.168.1.201.2368 > 255.255.255.255.2368: UDP, length 1206
15:07:29.628815 IP 192.168.1.201.8308 > 255.255.255.255.8308: UDP, length 512
15:07:29.628819 IP 192.168.1.201.2368 > 255.255.255.255.2368: UDP, length 1206
15:07:29.630328 IP 192.168.1.201.2368 > 255.255.255.255.2368: UDP, length 1206
15:07:29.631656 IP 192.168.1.201.2368 > 255.255.255.255.2368: UDP, length 1206
10 packets captured
24 packets received by filter
0 packets dropped by kernel
As a result, I believe that the Jetson (the computer I'm on) is receiving the UDP packets via the ethernet connection; however, for some reason, I'm unable to read them from the Python UDP server. If anybody has any suggestions on why this is occurring or how to get it fixed, please let me know. I would just like to read the UDP packets via a Python program. I'm not sure if this is an error due to an incorrect program (I don't think so, given the additional information I provide below), or whether the traffic seen by the Python program and by tcpdump is somehow split, or a firewall is blocking it, such as in this link here. I'm quite unfamiliar with Ubuntu, so if this is a firewall issue, I would appreciate some guidance on how to adjust the firewall to make this work. I've attempted to disable ufw and add a rule as well. Thank you!
Additional Information: I attempted to run this code on a 2019 MacBook Pro running macOS Monterey (12.5.1) with the ethernet connection attached, and it ran perfectly fine: the blocking soc.recv(2000) call returned, and the print statement was reached. Below, I've listed what the tcpdump output looked like when the UDP packets were sent to the MacBook. This makes me believe that my code is correct, but for some reason related to the OS or hardware, I'm unable to simply copy the code over to the Jetson and run it there.
There are some significant differences between the two tcpdump outputs, the Jetson (above, which is not working) and the MacBook (below, which is working), such as the mention of broadcasthost and opentable. I'm not entirely sure what these mean, so any help with my understanding of these would be greatly appreciated as well.
tcpdump: data link type PKTAP
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on pktap, link-type PKTAP (Apple DLT_PKTAP), capture size 262144 bytes
15:47:32.528190 IP 192.168.1.201.opentable > broadcasthost.opentable: UDP, length 1206
15:47:32.529466 IP 192.168.1.201.opentable > broadcasthost.opentable: UDP, length 1206
15:47:32.530802 IP 192.168.1.201.opentable > broadcasthost.opentable: UDP, length 1206
15:47:32.532092 IP 192.168.1.201.opentable > broadcasthost.opentable: UDP, length 1206
15:47:32.533425 IP 192.168.1.201.opentable > broadcasthost.opentable: UDP, length 1206
15:47:32.534734 IP 192.168.1.201.opentable > broadcasthost.opentable: UDP, length 1206
15:47:32.536098 IP 192.168.1.201.opentable > broadcasthost.opentable: UDP, length 1206
15:47:32.536350 IP 192.168.1.201.8308 > broadcasthost.8308: UDP, length 512
15:47:32.537433 IP 192.168.1.201.opentable > broadcasthost.opentable: UDP, length 1206
15:47:32.537868 IP 192.168.1.201.8308 > broadcasthost.8308: UDP, length 512
10 packets captured
16 packets received by filter
0 packets dropped by kernel
If there is any additional information that I can provide or any guidance on how to structure my question on Stackoverflow, please let me know, thank you!
|
[
"The packets that were being sent to the computer were sent by a Velodyne LiDAR. There are instructions to manually set the IP Address of the ethernet connection that we are receiving data from and following those enabled me to use the code that I wrote above and read packets from the LiDAR.\n"
] |
[
0
] |
[] |
[] |
[
"jetson_xavier",
"python",
"sockets",
"ubuntu",
"udp"
] |
stackoverflow_0074585619_jetson_xavier_python_sockets_ubuntu_udp.txt
|
Q:
How to display which key holds the most number of values and the corresponding total?
I have a dictionary with four keys and would like to display which key has the most elements or values in it and how many values are stored.
my dictionary looks like this
{'rank': [1, 2, 3, 4, 5], 'trolley': [5, 10, 15, 25, 30], 'ward': [0, 12, 10, 8, 3], 'patients': [200, 100, 1000, 500, 375]}
in the variable explorer and my Excel file I can see that the key or column "patients" has the largest number of total values with 220 (so 220 rows in Excel) and the smallest is the "trolley" key/column with 64 (64 rows in the Excel file)
I have tried multiple resources online, which suggest the use of the following:
print("Highest key is:", max(hosp,key=hosp.get), "with total", max(hosp.values()))
print("Highest key is:", min(hosp,key=hosp.get), "with total", min(hosp.values()))
all this does for me is print the key that has the highest individual value, e.g. "patients", and then print the entire "patients" key with all its individual values, i.e. "[200, 100, 1000, 500, 375]"
but I want it to show which key has the most and which key has the least elements/rows in it, which should result in "patients with 220 values" as it has 220 entries or rows from the excel file. and the min should show "trolley with 64 values"
also for the purpose of this I need to accomplish this without any modules so I cannot import anything
A:
Please correct me if I interpret it wrong. So far, you have been able to execute the code you provided, with one exception: the value being printed is the whole list rather than the maximum value in the list.
Secondly, you have mentioned that you are trying to display the number of rows rather than the maximum value.
The len function can help you achieve this: it returns the length of an iterable.
I am assuming here that the list you have displayed is only an extract from the Excel file.
print("Highest key is:", max(hosp,key=hosp.get), "with total", len(max(hosp.values())))
From the code you have provided, the suitable output is below:
Highest key is: patients with total 5
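Note that max(hosp, key=hosp.get) compares the value lists themselves (element by element) rather than their lengths, which happens to pick "patients" here but is fragile. A sketch that compares lengths directly:
hosp = {'rank': [1, 2, 3, 4, 5], 'trolley': [5, 10, 15, 25, 30],
        'ward': [0, 12, 10, 8, 3], 'patients': [200, 100, 1000, 500, 375]}

# Compare keys by how many values each one holds; with this sample data all
# lists have the same length, so the first key wins the tie, but on the real
# data (220 vs 64 rows) this picks the right columns
longest = max(hosp, key=lambda k: len(hosp[k]))
shortest = min(hosp, key=lambda k: len(hosp[k]))
print("Highest key is:", longest, "with total", len(hosp[longest]))
print("Lowest key is:", shortest, "with total", len(hosp[shortest]))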
|
How to display which key holds the most number of values and the corresponding total?
|
I have a dictionary with four keys and would like to display which key has the most elements or values in it and how many values are stored.
my dictionary looks like this
{'rank': [1, 2, 3, 4, 5], 'trolley': [5, 10, 15, 25, 30], 'ward': [0, 12, 10, 8, 3], 'patients': [200, 100, 1000, 500, 375]}
in the variable explorer and my Excel file I can see that the key or column "patients" has the largest number of total values with 220 (so 220 rows in Excel) and the smallest is the "trolley" key/column with 64 (64 rows in the Excel file)
I have tried multiple resources online, which suggest the use of the following:
print("Highest key is:", max(hosp,key=hosp.get), "with total", max(hosp.values()))
print("Highest key is:", min(hosp,key=hosp.get), "with total", min(hosp.values()))
all this does for me is print the key that has the highest individual value, e.g. "patients", and then print the entire "patients" key with all its individual values, i.e. "[200, 100, 1000, 500, 375]"
but I want it to show which key has the most and which key has the least elements/rows in it, which should result in "patients with 220 values" as it has 220 entries or rows from the excel file. and the min should show "trolley with 64 values"
also for the purpose of this I need to accomplish this without any modules so I cannot import anything
|
[
"Please correct me if I interpret it wrong. Till now, you have been able to execute the code you have provided, with an exception that the values being printed is the whole list and not the maximum value among the list.\nSecondly, you have mentioned that you are trying to display the amount of rows rather than the maximum value.\nlen function can help you achieve it. It returns the length of an iterable.\nI am assuming here, that the list you have displayed is only a extract from the excel file.\nprint(\"Highest key is:\", max(hosp,key=hosp.get), \"with total\", len(max(hosp.values())))\n\nFrom the code you have provided, the suitable output is below:\n\nHighest key is: patients with total 5\n\n"
] |
[
0
] |
[] |
[] |
[
"dictionary",
"python"
] |
stackoverflow_0074585586_dictionary_python.txt
|
Q:
Python Lists, Files, For Loops, Index
I am confused about how to tackle this. I have done the first few parts by opening the file, converting it to a list and completing the first prompt, but now I don't really understand how to do this part. The prompt for it is:
All Words that Have a User Chosen Letter in a Specified Location
This task builds on the first task, but instead of the letter appearing anywhere in the word, you need to output every word that has the letter in a very specific location. The user provides a single letter, and what location in the word it appears in. Make sure you give clear instructions to the user on how to denote the letter location (is the first letter 0? 1? something else? They won't know what your program expects unless you tell them).
All Words that Have a User Specified Number of Vowels
Supposedly the best way to play Wordle is to use a word with lots of vowels as your first guess. For this task you will let the user choose how many vowels they want in their word, and then output all words that have that number of vowels. This task is the hardest of the required tasks, so think about it after you've reasoned through the others. Recall that vowels in English are the letters a, e, i, o, and u.
import os.path
def name_file():
# asking user name of file
file_name = input("What is the name of the file to read the names from?")
while not os.path.exists(file_name):
print("This file does not exist")
file_name = input("What is the name of the file to read the names from?")
return file_name
name_file()
file_opener = open("5letterwords.txt","r")
read_line_by_line = file_opener.readlines()
word_list = []
for line in read_line_by_line:
word_list.append(line.strip())
print(word_list)
letter = input("Pick a letter of your choosing and every word with that letter will be outputted:")
for word in word_list:
if letter in word:
print(word)
letter2 = input("Pick another letter and every word with that letter will be outputted")
location = input("Pick what location you want the letter to be in from 1-5")
for word in word_list:
if letter2 in word and :
I currently have that done, and starting from the variable letter2 is where I begin working on the prompt, but I am clueless as to what to do. I would like some hints and tips on how to do this; I've been working on it for almost 3 weeks now with not much progress.
A:
I think there are three steps to your second task:
Get the input letter and index.
Iterate over the list of words.
Check each word to see if the letter is in the desired index and print the word if so.
You have part of number 1 and all of number 2 handled already.
To finish the first step, you need to convert your location value, which is a numeric string, into an integer and make sure it matches the zero-indexing of Python strings.
For the third step, you need to index each word string with the converted index value you got from the user. You might want some error handling here, to correctly report invalid indexes, but since that isn't specifically called out in your requirements, something minimal might serve (e.g. print an error message and quit). The comparison of the indexed letter to the input letter should be trivial after that.
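A sketch of those steps, plus the vowel-counting task, building on the word_list from the question (assuming the user types positions 1-5 as the question's prompt suggests, hence the - 1 to convert to Python's zero-based indexing):
letter2 = input("Pick another letter: ")
location = int(input("Pick what location you want the letter to be in, from 1-5: ")) - 1

for word in word_list:
    # Guard against out-of-range positions, then compare the letter at that index
    if 0 <= location < len(word) and word[location] == letter2:
        print(word)

vowel_target = int(input("How many vowels do you want in your word? "))
for word in word_list:
    # Count how many characters of the word are vowels
    if sum(ch in "aeiou" for ch in word) == vowel_target:
        print(word)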
|
Python Lists, Files, For Loops, Index
|
I am confused about how to tackle this. I have done the first few parts by opening the file, converting it to a list and completing the first prompt, but now I don't really understand how to do this part. The prompt for it is:
All Words that Have a User Chosen Letter in a Specified Location
This task builds on the first task, but instead of the letter appearing anywhere in the word, you need to output every word that has the letter in a very specific location. The user provides a single letter, and what location in the word it appears in. Make sure you give clear instructions to the user on how to denote the letter location (is the first letter 0? 1? something else? They won't know what your program expects unless you tell them).
All Words that Have a User Specified Number of Vowels
Supposedly the best way to play Wordle is to use a word with lots of vowels as your first guess. For this task you will let the user choose how many vowels they want in their word, and then output all words that have that number of vowels. This task is the hardest of the required tasks, so think about it after you've reasoned through the others. Recall that vowels in English are the letters a, e, i, o, and u.
import os.path
def name_file():
# asking user name of file
file_name = input("What is the name of the file to read the names from?")
while not os.path.exists(file_name):
print("This file does not exist")
file_name = input("What is the name of the file to read the names from?")
return file_name
name_file()
file_opener = open("5letterwords.txt","r")
read_line_by_line = file_opener.readlines()
word_list = []
for line in read_line_by_line:
word_list.append(line.strip())
print(word_list)
letter = input("Pick a letter of your choosing and every word with that letter will be outputted:")
for word in word_list:
if letter in word:
print(word)
letter2 = input("Pick another letter and every word with that letter will be outputted")
location = input("Pick what location you want the letter to be in from 1-5")
for word in word_list:
if letter2 in word and :
I currently have that done, and starting from the variable letter2 is where I begin working on the prompt, but I am clueless as to what to do. I would like some hints and tips on how to do this; I've been working on it for almost 3 weeks now with not much progress.
|
[
"I think there are three steps to your second task:\n\nGet the input letter and index.\nIterate over the list of words.\nCheck each word to see if the letter is in the desired index and print the word if so.\n\nYou have part of number 1 and all of number 2 handled already.\nTo finish the first step, you need to convert your location value, which is a numeric string, into an integer and make sure it matches the zero-indexing of Python strings.\nFor the third step, you need to index your each word string with the converted index value you got from the user. You might want some error handling here, to correctly report invalid indexes, but since that isn't specifically called out in your requirements something minimal might serve (e.g. print an error message and quit). The comparison of the indexed letter to the input letter should be trivial after that.\n"
] |
[
0
] |
[] |
[] |
[
"file",
"for_loop",
"list",
"python"
] |
stackoverflow_0074579068_file_for_loop_list_python.txt
|
Q:
Avoid globals in tkinter
I made some simple code, and my question is whether there is a way to avoid globals in tkinter in this kind of scenario:
root = Tk()
root.title('Main')
root.minsize(400, 450)
toggle = True
def change_now():
global toggle
root.config(bg='blue') if toggle else root.config(bg='black')
toggle = not toggle
my_button = Button(root, text='Click me!', command=change_now)
my_button.pack()
root.mainloop()
I know the best option is an object-oriented approach, but that means refactoring the entire code. Is there a quick solution in this example? I know using global variables is bad practice.
A:
In this specific case, there is a way: use tkinter variables, instead of globals
from tkinter import Tk, Button, BooleanVar
root = Tk()
root.title('Main')
root.minsize(400, 450)
toggle_tkinter = BooleanVar(value=True)
def change_now():
root.config(bg='blue') if toggle_tkinter.get() else root.config(bg='black')
toggle_tkinter.set(not toggle_tkinter.get())
my_button = Button(root, text='Click me!', command=change_now)
my_button.pack()
root.mainloop()
Though, I have personally found a case where globals make a decent design choice (sticking with the assumption that OOP is not viable): imagine that you have 30 variables that you need to pass on to a checking function that validates the input and then uses only a subset of these variables (or a mixture of them), depending on the user input. In this case, instead of passing all 30 variables to your checker function and then having the checker function pass whatever is needed onwards, I've opted to set them all as globals; that way they can be extracted and managed much more easily. See here (the repo contains relevant READMEs and a paper describing the infrastructure) for such a use case if interested.
A:
You can remove the need to manage a global variable by using the background color as the Boolean. Set it to black if it’s currently blue, and blue if it’s black.
def change_now():
    color = "black" if root.cget("bg") == "blue" else "blue"
    root.configure(bg=color)
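A third option that needs neither globals nor tkinter variables is to keep the state in a closure over an iterator; a minimal sketch using itertools.cycle:
from tkinter import Tk, Button
from itertools import cycle

root = Tk()
root.title('Main')
root.minsize(400, 450)

colors = cycle(['blue', 'black'])  # alternates between the two colors forever

def change_now():
    root.config(bg=next(colors))

my_button = Button(root, text='Click me!', command=change_now)
my_button.pack()

root.mainloop()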
|
Avoid globals in tkinter
|
I made some simple code, and my question is whether there is a way to avoid globals in tkinter in this kind of scenario:
root = Tk()
root.title('Main')
root.minsize(400, 450)
toggle = True
def change_now():
global toggle
root.config(bg='blue') if toggle else root.config(bg='black')
toggle = not toggle
my_button = Button(root, text='Click me!', command=change_now)
my_button.pack()
root.mainloop()
I know the best option is an object-oriented approach, but that means refactoring the entire code. Is there a quick solution in this example? I know using global variables is bad practice.
|
[
"In this specific case, there is a way: use tkinter variables, instead of globals\nfrom tkinter import Tk, Button, BooleanVar\nroot = Tk()\nroot.title('Main')\nroot.minsize(400, 450)\n\ntoggle_tkinter = BooleanVar(value=True)\n\ndef change_now():\n root.config(bg='blue') if toggle_tkinter.get() else root.config(bg='black')\n toggle_tkinter.set(not toggle_tkinter.get())\n\nmy_button = Button(root, text='Click me!', command=change_now)\nmy_button.pack()\n\nroot.mainloop()\n\nThough, I have personally found a case where globals make a decent design choice (sticking under the assumption that OOP is not viable): imagine that you have a 30 variables that you need to pass on to a checking function that validates the input and then uses only a subset of these variables (or a mixture of them), depending on the user input. In this case, instead of passing all 30 variables to your checker function and then your checker function passing whatever is needed onwards, I've opted to set them all as globals - that way they can be extracted and managed much more easily; see here (the repo contains relevant READMEs and a paper describe the infrastructure) for such a use case if interested.\n",
"You can remove the need to manage a global variable by using the background color as the Boolean. Set it to black if it’s currently blue, and blue if it’s black.\ndef chande_now():\n color = “black” if root.cget(“bg”) == “blue” else “blue”\n root.configure(bg=color)\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"global_variables",
"python",
"tkinter",
"variables"
] |
stackoverflow_0074577606_global_variables_python_tkinter_variables.txt
|
Q:
Writing data to a file in python
I keep getting this error:
FileNotFoundError: [Errno 2] No such file or directory: 'Bayofagbenro.txt'
Here is my code:
def main():
outfile = open('Bayofagbenro.txt')
Bayofagbenro =outfile.write ('Modupeola\n')
Bayofagbenro =outfile.w ('Ayobami\n')
Bayofagbenro =outfile.w ('AKintola\n')
Bayofagbenro =outfile.w ('Omonike\n')
Bayofagbenro =outfile.w ('Fehintoluwa\n')
Bayofagbenro =outfile.w ('Modupeola is 44yrs, Ayobami is 42 years, AKintola is 38 years, omonike is 36 years while fehintoluwa is 30 years')
outfile = Bayofagbenro.r
outfile.close
if __name__ == "__main__":
main()
While expecting something like this:
A:
Try something like this:
def main():
with open('Bayofagbenro.txt', 'w') as f:
f.write('Modupeola\n')
f.write('Ayobami\n')
f.write('AKintola\n')
f.write('Omonike\n')
f.write('Fehintoluwa\n')
f.write('Modupeola is 44yrs, Ayobami is 42 years, AKintola is 38 years, omonike is 36 years while fehintoluwa is 30 years')
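The key differences: mode 'w' creates the file if it does not exist (the original open('Bayofagbenro.txt') defaulted to read mode 'r', which raises FileNotFoundError for a missing file), and the with block closes the file automatically. To verify the contents afterwards, a quick read-back:
with open('Bayofagbenro.txt') as f:
    print(f.read())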
|
Writing data to a file in python
|
I keep getting this error:
FileNotFoundError: [Errno 2] No such file or directory: 'Bayofagbenro.txt'
Here is my code:
def main():
outfile = open('Bayofagbenro.txt')
Bayofagbenro =outfile.write ('Modupeola\n')
Bayofagbenro =outfile.w ('Ayobami\n')
Bayofagbenro =outfile.w ('AKintola\n')
Bayofagbenro =outfile.w ('Omonike\n')
Bayofagbenro =outfile.w ('Fehintoluwa\n')
Bayofagbenro =outfile.w ('Modupeola is 44yrs, Ayobami is 42 years, AKintola is 38 years, omonike is 36 years while fehintoluwa is 30 years')
outfile = Bayofagbenro.r
outfile.close
if __name__ == "__main__":
main()
While expecting something like this:
|
[
"Try something like this:\ndef main():\n with open('Bayofagbenro.txt', 'w') as f:\n f.write('Modupeola\\n')\n f.write('Ayobami\\n')\n f.write('AKintola\\n')\n f.write('Omonike\\n')\n f.write('Fehintoluwa\\n')\n f.write('Modupeola is 44yrs, Ayobami is 42 years, AKintola is 38 years, omonike is 36 years while fehintoluwa is 30 years')\n\n"
] |
[
1
] |
[] |
[] |
[
"file",
"python"
] |
stackoverflow_0074585759_file_python.txt
|
Q:
Using python to solve for value that meets a condition
new true value that meets the condition = v
previous true value = vprev
I am trying to look for a v such that the hash of str(power(v, 2) + power(vprev, 3)) begins with ee
I tried this
import hashlib
values_list = []# a list where v and prev will be
solved = False
v = 1 # to start looping from 1
while not solved:
for index, v in enumerate(values_list):
vprev = values_list[(index - 1)]
results = str(v**2 + vprev**3)
results_encoded = results.encode()
results_hashed = hashlib.sha256(results_encoded).hexdigest()
if results[0:2] == "ee":
solved = True
values_list.append(v)
else: v += 1
print(values_list)
I'm expecting a list with the first true value but I have failed
A:
Commenting your own code:
# ...
solved = False
v = 1 # to start looping from 1
while solved:
# This block is never executed: the `while` condition-check fails since the initial state of `solved` is `False`
print(values_list)
you'll probably want to use while not solved: instead
A:
import hashlib
values_list = []# a list where v and prev will be
solved = False
v = 1 # to start looping from 1
while not solved:
for index, value in enumerate(values_list):
vprev = values_list[(index - 1)]
results = str(v**2 + vprev**3)
results_encoded = results.encode()
results_hashed = hashlib.sha256(results_encoded).hexdigest()
if results[0:2] == "ee":
values_list.append(v)
solved = True
else: v += 1
print(values_list)
A while loop keeps running while its condition is true, so you have to change the statements in your code accordingly. Once the condition becomes false, the while loop exits.
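Beyond the loop condition, note that the code in the question tests results[0:2] (the decimal string) instead of results_hashed[0:2], and the for loop over values_list never executes because the list starts empty. A minimal working sketch of the search (assuming vprev starts from some seed value, here 1):
import hashlib

vprev = 1  # assumed starting "previous true value"
v = 1
while True:
    # Hash the string form of v**2 + vprev**3 and check the hex digest's prefix
    results_hashed = hashlib.sha256(str(v**2 + vprev**3).encode()).hexdigest()
    if results_hashed.startswith("ee"):
        break
    v += 1

print([vprev, v])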
|
Using python to solve for value that meets a condition
|
new true value that meets the condition = v
previous true value = vprev
I am trying to look for a v such that the hash of str(power(v, 2) + power(vprev, 3)) begins with ee
I tried this
import hashlib
values_list = []# a list where v and prev will be
solved = False
v = 1 # to start looping from 1
while not solved:
for index, v in enumerate(values_list):
vprev = values_list[(index - 1)]
results = str(v**2 + vprev**3)
results_encoded = results.encode()
results_hashed = hashlib.sha256(results_encoded).hexdigest()
if results[0:2] == "ee":
solved = True
values_list.append(v)
else: v += 1
print(values_list)
I'm expecting a list with the first true value but I have failed
|
[
"Commenting your own code:\n# ...\nsolved = False\nv = 1 # to start looping from 1\n\nwhile solved:\n # This block is never executed: the `while` condition-check fails since the initial state of `solved` is `False`\n\nprint(values_list)\n\nyou'll probably want to use while not solved: instead\n",
"import hashlib\nvalues_list = []# a list where v and prev will be\nsolved = False\nv = 1 # to start looping from 1\n\nwhile not solved:\n for index, value in enumerate(values_list):\n vprev = values_list[(index - 1)]\n results = str(v**2 + vprev**3)\n results_encoded = results.encode()\n results_hashed = hashlib.sha256(results_encoded).hexdigest()\n if results[0:2] == \"ee\":\n values_list.append(v)\n solved = True\n else: v += 1\n\nprint(values_list)\n\nWhile loop is working for \"true\" statment. So you have to change your statments on code. And If you make the statment false then while loop breaks the loop.\n"
] |
[
0,
0
] |
[] |
[] |
[
"hash",
"list",
"loops",
"python",
"solver"
] |
stackoverflow_0074585664_hash_list_loops_python_solver.txt
|
Q:
What is the difference between zeromq binding with * and 127.0.0.1
As in the title, here are two ways of binding a zeromq socket.
socket.bind("tcp://*:port")
socket.bind("tcp://127.0.0.1:port")
Both of these ways work for me, but I am still curious about the difference.
A:
In general, the server binds to an endpoint and the client connects to an endpoint as follows:
# Server
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")
The client connects the socket:
# Client
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5555")
By binding to 127.0.0.1 you restrict requests to the server to 127.0.0.1 only. Running locally will work just fine. But when you use different machines with different IPs you will notice the effect. Hence the use of "*".
A:
To add to sitWolf's answer, note that you can bind a socket multiple times, to multiple protocols. For instance:
socket = context.socket(zmq.REQ);
socket.bind("tcp://127.0.0.1:5555"); // Bind to localhost
socket.bind("ipc:///tmp/mypipe"); // Also bind to a local pipe
socket.bind("tcp://192.168.0.2:4444"); // Also bind to a specific NIC
Also, the socket type is independent of whether you bind it or connect it, though for some socket types it's natural to bind, and others it's natural to connect. So, a PUB socket makes most sense if it's bound, and the corresponding SUB socket connects. Other sockets, one can choose as suits circumstances (e.g. if there is a machine that is more obviously in a server role, that'd be the one to bind).
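One related detail: the * wildcard is only meaningful for bind(); a connecting client always needs a concrete address (the address below is taken from the example above):
# Client side: connect to a specific address, never "*"
socket.connect("tcp://192.168.0.2:4444")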
|
What is the difference between zeromq binding with * and 127.0.0.1
|
As in the title, here are two ways of binding a zeromq socket.
socket.bind("tcp://*:port")
socket.bind("tcp://127.0.0.1:port")
Both of these ways work for me, but I am still curious about the difference.
|
[
"In general, the server binds to an endpoint and the client connects to an endpoint as follows:\n# Server\nsocket = context.socket(zmq.REP)\nsocket.bind(\"tcp://*:5555\")\n\nconnect the socket:\n# Client\nsocket = context.socket(zmq.REQ)\nsocket.connect(\"tcp://localhost:5555\")\n\nBy binding to 127.0.0.1 you restrict requests to the server to 127.0.0.1 only. Running locally will work just fine. But when you use different machines with different IPs you will notice the effect. Hence the use of \"*\".\n",
"To add to sitWolf's answer, note that you can bind a socket multiple times, to multiple protocols. For instance:\nsocket = context.socket(zmq.REQ);\nsocket.bind(\"tcp://127.0.0.1:5555\"); // Bind to localhost\nsocket.bind(\"ipc:///tmp/mypipe\"); // Also bind to a local pipe\nsocket.bind(\"tcp://192.168.0.2:4444\"); // Also bind to a specific NIC\n\nAlso, the socket type is independent of whether you bind it or connect it, though for some socket types it's natural to bind, and others it's natural to connect. So, a PUB socket makes most sense if it's bound, and the corresponding SUB socket connects. Other sockets, one can choose as suits circumstances (e.g. if there is a machine that is more obviously in a server role, that'd be the one to bind).\n"
] |
[
0,
0
] |
[] |
[] |
[
"python",
"pyzmq",
"zeromq"
] |
stackoverflow_0074561434_python_pyzmq_zeromq.txt
|
Q:
RegexValidator in Django Models not validating email correctly
I'm making a Django form with an email field and using a RegexValidator, and I want a specific format for the email, but it does not seem to validate the field correctly
email = models.EmailField(
unique=True,
validators=[
RegexValidator(
regex=r"^[2][2][a-zA-Z]{3}\d{3}@[nith.ac.in]*",
message=
"Only freshers with college email addresses are authorised.")
],
)
When I enter an email ending with @nith.ac.in or @nith.acin, both get accepted, while @nith.acin should not be accepted... any solutions?
A:
The [nith.ac.in] does not match literal text; it matches any single character in the group, meaning the address can end with any sequence of those characters, e.g. ns, is, nis, etc.
Your regex should look like:
email = models.EmailField(
unique=True,
validators=[
RegexValidator(
regex=r'^[2][2][a-zA-Z]{3}\d{3}@nith[.]ac[.]in$',
message='Only freshers with college email addresses are authorised.',
)
],
)
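A quick way to confirm the difference with Python's re module (a standalone check; the two addresses are made-up examples):
import re

pattern = r'^[2][2][a-zA-Z]{3}\d{3}@nith[.]ac[.]in$'
print(bool(re.match(pattern, '22abc123@nith.ac.in')))  # True
print(bool(re.match(pattern, '22abc123@nith.acin')))   # False: the dots are literal now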
|
RegexValidator in Django Models not validating email correctly
|
I'm making a Django form with an email field and using a RegexValidator, and I want a specific format for the email, but it does not seem to validate the field correctly
email = models.EmailField(
unique=True,
validators=[
RegexValidator(
regex=r"^[2][2][a-zA-Z]{3}\d{3}@[nith.ac.in]*",
message=
"Only freshers with college email addresses are authorised.")
],
)
When I enter an email ending with @nith.ac.in or @nith.acin, both get accepted, while @nith.acin should not be accepted... any solutions?
|
[
"The [nith.ac.in] does not parse literal text, it parses any character in te group, meaning it can end with a sequence of ns, is, nis, etc.\nYour regex should look like:\nemail = models.EmailField(\n unique=True,\n validators=[\n RegexValidator(\n regex=r'^[2][2][a-zA-Z]{3}\\d{3}@nith[.]ac[.]in$',\n message='Only freshers with college email addresses are authorised.',\n )\n ],\n)\n"
] |
[
1
] |
[] |
[] |
[
"django",
"django_models",
"django_validation",
"python"
] |
stackoverflow_0074585815_django_django_models_django_validation_python.txt
|
Q:
Two threads kept running without completed in Python
I'm trying to get the result below by running 2 threads alternately. *Thread A prints Step 1 and Step 3, and thread B prints Step 2 and Step 4 (I use Python 3.8.5):
Step 1
Step 2
Step 3
Step 4
So, with global variables, locks and while statements, I created the code below to try to get the result above:
import threading
lock = threading.Lock()
flow = "Step 1"
def test1():
global flow
while True:
while True:
if flow == "Step 1":
lock.acquire()
print(flow)
flow = "Step 2"
lock.release()
break
while True:
if flow == "Step 3":
lock.acquire()
print(flow)
flow = "Step 4"
lock.release()
break
break
def test2():
global flow
while True:
while True:
if flow == "Step 2":
lock.acquire()
print(flow)
flow = "Step 3"
lock.release()
break
while True:
if flow == "Step 4":
lock.acquire()
print(flow)
lock.release()
break
break
t1 = threading.Thread(target=test1)
t2 = threading.Thread(target=test2)
t1.start()
t2.start()
t1.join()
t2.join()
But the code above produced the result below, without Step 3 and Step 4, and then the program kept running without completing. *Thread A printed Step 1, then thread B printed Step 2, then the program kept running without printing Step 3 and Step 4:
Step 1
Step 2
I couldn't find any mistakes, so how can I get the proper result with Step 3 and Step 4? And why did I get the result without Step 3 and Step 4?
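For reference, strict alternation can also be done without busy-waiting at all, using threading.Event; a minimal sketch (an alternative structure, not a fix to the exact code above):
import threading

turn_a = threading.Event()
turn_b = threading.Event()
turn_a.set()  # thread A goes first

def test1():
    for step in ("Step 1", "Step 3"):
        turn_a.wait()   # block until it is A's turn
        turn_a.clear()
        print(step)
        turn_b.set()    # hand the turn to thread B

def test2():
    for step in ("Step 2", "Step 4"):
        turn_b.wait()
        turn_b.clear()
        print(step)
        turn_a.set()

t1 = threading.Thread(target=test1)
t2 = threading.Thread(target=test2)
t1.start()
t2.start()
t1.join()
t2.join()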
|
Two threads kept running without completed in Python
|
I'm trying to get the result below by running 2 threads alternately. *Thread A prints Step 1 and Step 3, and thread B prints Step 2 and Step 4 (I use Python 3.8.5):
Step 1
Step 2
Step 3
Step 4
So, with global variables, locks and while statements, I created the code below to try to get the result above:
import threading
lock = threading.Lock()
flow = "Step 1"
def test1():
global flow
while True:
while True:
if flow == "Step 1":
lock.acquire()
print(flow)
flow = "Step 2"
lock.release()
break
while True:
if flow == "Step 3":
lock.acquire()
print(flow)
flow = "Step 4"
lock.release()
break
break
def test2():
global flow
while True:
while True:
if flow == "Step 2":
lock.acquire()
print(flow)
flow = "Step 3"
lock.release()
break
while True:
if flow == "Step 4":
lock.acquire()
print(flow)
lock.release()
break
break
t1 = threading.Thread(target=test1)
t2 = threading.Thread(target=test2)
t1.start()
t2.start()
t1.join()
t2.join()
But the code above produced the result below, without Step 3 and Step 4, and then the program kept running without completing. *Thread A printed Step 1, then thread B printed Step 2, then the program kept running without printing Step 3 and Step 4:
Step 1
Step 2
I couldn't find any mistakes, so how can I get the proper result with Step 3 and Step 4? And why did I get the result without Step 3 and Step 4?
|
[] |
[] |
[
"In both second while loops you have an extra break that terminates the execution after first run. (outside the if-block). And you have also souperflous extra outside-while loops.\n\ndef test1():\n global flow\n while True: # <--- DELETE ME\n while True:\n if flow == \"Step 1\":\n lock.acquire()\n print(flow)\n flow = \"Step 2\"\n lock.release()\n break\n \n while True:\n if flow == \"Step 3\":\n lock.acquire()\n print(flow)\n flow = \"Step 4\"\n lock.release()\n break\n break # <--- DELETE ME\n\n\nHowever, you are reading the flow outside of the lock state. In an real world application this could case an error, as the variable can change between the if and the lock. Thus, this code is NOT really THREADSAFE even with the lock.\n",
"The position of each last break in test1() and test2() is wrong so you need to put each last break just in the outer while statement as shown below:\nimport threading\nlock = threading.Lock()\n\nflow = \"Step 1\"\n\ndef test1():\n global flow\n while True:\n while True:\n if flow == \"Step 1\":\n lock.acquire()\n print(flow)\n flow = \"Step 2\"\n lock.release()\n break\n \n while True:\n if flow == \"Step 3\":\n lock.acquire()\n print(flow)\n flow = \"Step 4\"\n lock.release()\n break\n break # This breaks the outer \"while\" statement\n\ndef test2():\n global flow\n while True:\n while True:\n if flow == \"Step 2\":\n lock.acquire()\n print(flow)\n flow = \"Step 3\"\n lock.release()\n break\n \n while True:\n if flow == \"Step 4\":\n lock.acquire()\n print(flow)\n lock.release()\n break\n break # # This breaks the outer \"while\" statement\n\nt1 = threading.Thread(target=test1)\nt2 = threading.Thread(target=test2)\n\nt1.start()\nt2.start()\n\nt1.join()\nt2.join()\n\nThen, you can get the proper result below:\nStep 1\nStep 2\nStep 3\nStep 4\n\n"
] |
[
-1,
-1
] |
[
"alternate",
"multithreading",
"python",
"python_3.x",
"python_multiprocessing"
] |
stackoverflow_0074585534_alternate_multithreading_python_python_3.x_python_multiprocessing.txt
|
Q:
pandas read_sql_query with params matching multiple columns
I'm trying to query a table using pandas.read_sql_query, where I want to match multiple columns to python lists passed in as param arguments. Running into various psycopg2 errors when trying to accomplish this.
Ideally, I would provide a reproducible example, but unfortunately, that's not possible here due to the SQL connection requirement. If there is some way to provide a reproducible example, please let me know and I will edit the code below. Assume that the entries of col1 are strings and those of col2 are numeric values.
Note that I'm trying to ensure that each row of col1 and col2 matches the corresponding combination of list1 and list2, so it would not be possible to do separate where clauses for each, i.e., where col1 = any(%(list1)s) and col2 = any(%(list2)s).
First, I tried passing the lists as separate parameters and then combining them into an array within the SQL query:
import pandas as pd
list1 = ['a', 'b', 'c']
list2 = [1,2,3]
pd.read_sql_query(
"""
select * from table
where (col1, col2) = any(array[(%(list1)s, %(list2)s)])
""",
con = conn,
params = {'list1': list1, 'list2':list2}
)
When I try this, I get Datatype Mismatch: cannot compare dissimilar columns of type text and text[] at column 1.
Also tried the following variant, where I passed a list of lists into param:
pd.read_sql_query(
"""
select * from table
where (col1, col2) = any(%(arr)s)
""",
con = conn,
params = {'arr': [[x,y] for x,y in zip(list1,list2)]}
)
Here, I got DataError: (psycopg2.errors.InvalidTextRepresentation) in valid input syntax for integer: "a".
Tried a few other minor variants of the above, but every attempt threw some kind of error. So, what's the syntax needed in order to accomplish this?
EDIT:
Including a reproducible example:
import numpy as np
import pandas as pd
from sqlalchemy import create_engine
list1 = ["a", "b", "c"]
list2 = [1, 2, 3]
n = 100
engine = create_engine(
"postgresql+psycopg2://postgres:password@localhost:5432/database"
)
np.random.seed(2022)
df = pd.DataFrame(
{
"col1": np.random.choice(list1, n, replace=True),
"col2": np.random.choice(list2, n, replace=True),
}
)
# write table to database
df.to_sql("toy_table", engine, if_exists="replace", index=False)
# query with where any
df_query = pd.read_sql_query(
"""
select * from toy_table
where col1 = any(%(list1)s) and col2=any(%(list2)s)
""",
con=engine,
params={"list1": list1, "list2": list2},
)
# expected output
rows = [(x, y) for x, y in zip(list1, list2)]
df_expected = df.loc[df.apply(lambda x: tuple(x.values) in rows, axis=1)]
# Throws assertion error
assert df_expected.equals(df_query)
A:
To make a comparison of exact pairs you could convert your array to a dictionary then to JSON, taking advantage of PostgreSQL JSON functions and operators, like this:
#combine lists into a dictionary then convert to json
json1 = json.dumps(dict(zip(list1, list2)))
then query request should be
df = pd.read_sql_query(
"""
select *
from "table"
where concat('{"',col1,'":',col2,'}')::jsonb <@ %s::jsonb
""",
con = conn,
params = (json1,)
)
Or a more general approach for n columns
list1 = ['a', 'b', 'c']
list2 = [1,2,3]
list3 = ['z', 'y', 'x']
#assemble a json
json1 = ','.join(['"f1": "'+x+'"' for x in list1])
json2 = ','.join(['"f2": '+str(x) for x in list2])
json3 = ','.join(['"f3": "'+x+'"' for x in list3])
json_string = '{'+json1+', '+json2+ ', '+json3+'}'
the query
df = pd.read_sql_query(
"""
select *
from "table"
where row_to_json(row(col1,col2,col3))::jsonb <@ %s::jsonb
""",
con = conn,
params = (json_string,)
)
Tested with python 3.10.6
A:
Found a solution that works for any number of input lists:
df_query = pd.read_sql_query(
"""
select * from toy_table
join (select unnest(%(list1)s) as list1, unnest(%(list2)s) as list2) as tmp
on col1 = list1 and col2 = list2
""",
con=engine,
params={"list1": list1, "list2": list2},
)
# expected output
rows = [(x, y) for x, y in zip(list1, list2)]
df_expected = df.loc[df.apply(lambda x: tuple(x.values) in rows, axis=1)]
assert df_query.equals(df_expected)
|
pandas read_sql_query with params matching multiple columns
|
I'm trying to query a table using pandas.read_sql_query, where I want to match multiple columns to python lists passed in as param arguments. Running into various psycopg2 errors when trying to accomplish this.
Ideally, I would provide a reproducible example, but unfortunately, that's not possible here due to the SQL connection requirement. If there is some way to provide a reproducible example, please let me know and I will edit the code below. Assume that the entries of col1 are strings and those of col2 are numeric values.
Note that I'm trying to ensure that each row of col1 and col2 matches the corresponding combination of list1 and list2, so it would not be possible to do separate where clauses for each, i.e., where col1 = any(%(list1)s) and col2 = any(%(list2)s).
First, I tried passing the lists as separate parameters and then combining them into an array within the SQL query:
import pandas as pd
list1 = ['a', 'b', 'c']
list2 = [1,2,3]
pd.read_sql_query(
"""
select * from table
where (col1, col2) = any(array[(%(list1)s, %(list2)s)])
""",
con = conn,
params = {'list1': list1, 'list2':list2}
)
When I try this, I get Datatype Mismatch: cannot compare dissimilar columns of type text and text[] at column 1.
Also tried the following variant, where I passed a list of lists into param:
pd.read_sql_query(
"""
select * from table
where (col1, col2) = any(%(arr)s)
""",
con = conn,
params = {'arr': [[x,y] for x,y in zip(list1,list2)]}
)
Here, I got DataError: (psycopg2.errors.InvalidTextRepresentation) in valid input syntax for integer: "a".
Tried a few other minor variants of the above, but every attempt threw some kind of error. So, what's the syntax needed in order to accomplish this?
EDIT:
Including a reproducible example:
import numpy as np
import pandas as pd
from sqlalchemy import create_engine
list1 = ["a", "b", "c"]
list2 = [1, 2, 3]
n = 100
engine = create_engine(
"postgresql+psycopg2://postgres:password@localhost:5432/database"
)
np.random.seed(2022)
df = pd.DataFrame(
{
"col1": np.random.choice(list1, n, replace=True),
"col2": np.random.choice(list2, n, replace=True),
}
)
# write table to database
df.to_sql("toy_table", engine, if_exists="replace", index=False)
# query with where any
df_query = pd.read_sql_query(
"""
select * from toy_table
where col1 = any(%(list1)s) and col2=any(%(list2)s)
""",
con=engine,
params={"list1": list1, "list2": list2},
)
# expected output
rows = [(x, y) for x, y in zip(list1, list2)]
df_expected = df.loc[df.apply(lambda x: tuple(x.values) in rows, axis=1)]
# Throws assertion error
assert df_expected.equals(df_query)
|
[
"To make a comparison of exact pairs you could convert your array to a dictionary then to JSON, taking advantage of PostgreSQL JSON functions and operators, like this:\n#combine lists into a dictionary then convert to json\njson1 = json.dumps(dict(zip(list1, list2)))\n\nthen query request should be\ndf = pd.read_sql_query(\n \"\"\"\n select * \n from \"table\"\n where concat('{\"',col1,'\":',col2,'}')::jsonb <@ %s::jsonb\n \"\"\",\n con = conn,\n params = (json1,)\n )\n\nOr a more general approach for n columns\nlist1 = ['a', 'b', 'c']\nlist2 = [1,2,3]\nlist3 = ['z', 'y', 'x']\n\n#assemble a json\njson1 = ','.join(['\"f1\": \"'+x+'\"' for x in list1])\njson2 = ','.join(['\"f2\": '+str(x) for x in list2])\njson3 = ','.join(['\"f3\": \"'+x+'\"' for x in list3])\njson_string = '{'+json1+', '+json2+ ', '+json3+'}' \n\nthe query\ndf = pd.read_sql_query(\n \"\"\"\n select * \n from \"table\"\n where row_to_json(row(col1,col2,col3))::jsonb <@ %s::jsonb\n \"\"\",\n con = conn,\n params = (json_string,)\n )\n\nTested with python 3.10.6\n",
"Found a solution that works for any number of input lists:\ndf_query = pd.read_sql_query(\n \"\"\"\n select * from toy_table\n join (select unnest(%(list1)s) as list1, unnest(%(list2)s) as list2) as tmp\n on col1 = list1 and col2 = list2\n \"\"\",\n con=engine,\n params={\"list1\": list1, \"list2\": list2},\n)\n\n# expected output\nrows = [(x, y) for x, y in zip(list1, list2)]\ndf_expected = df.loc[df.apply(lambda x: tuple(x.values) in rows, axis=1)]\n\nassert df_query.equals(df_expected)\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"pandas",
"postgresql",
"python"
] |
stackoverflow_0074575599_pandas_postgresql_python.txt
|
Q:
how to do a continuous sum with python pandas
I would like to do the same in python pandas as shown on the picture.
pandas image
This is a SUM function where the first cell is fixed, so the formula calculates a running ("continuous") sum.
I tried to recreate this in a pandas DataFrame, but I did not manage to do it exactly.
A:
I would refer to pandas cumsum()
For example:
df['NEW_COLUMN_CUMULATED'] = df['OLD_COLUMN'].cumsum()
A:
You can use DataFrame.cumsum() to achieve what you want:
import pandas as pd
df = pd.DataFrame([10, 20, 30])
print(df.cumsum())
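For reference, this prints the running totals:
    0
0  10
1  30
2  60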
A:
Based on the example you shared, use pandas.Series.cumsum with pandas.Series.ffill:
import pandas as pd
import numpy as np
df= pd.DataFrame({'Nuber': [10.0, np.NaN, 20.0, np.NaN, np.NaN, 10.0, np.NaN, np.NaN, 30.0]})
df["Sum cont"]= df["Nuber"].cumsum().ffill()
# Output :
print(df)
Nuber Sum cont
0 10.0 10.0
1 NaN 10.0
2 20.0 30.0
3 NaN 30.0
4 NaN 30.0
5 10.0 40.0
6 NaN 40.0
7 NaN 40.0
8 30.0 70.0
# Edit :
If you read the data from an Excel spreadsheet, use this:
import pandas as pd
df = pd.read_excel("python inport.xlsx")
df["Sum cont"]= df["Nuber"].cumsum().ffill()
Or this :
import pandas as pd
df = (
pd.read_excel("python inport.xlsx")
.assign(Sum_Count= lambda x: x["Nuber"].cumsum().ffill())
)
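As a hedged alternative (not from the original answer), pandas.Series.expanding().sum() gives the same result here in one step, since expanding sums skip NaN values:
df["Sum cont"] = df["Nuber"].expanding().sum()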
A:
You could use the pandas sum function on a specific range of rows, using the index of the start and end row:
df.iloc[0:5].sum()
The sum of the rows with index values between 0 and 5 (not including 5)
Output:
30
|
how to do a continuous sum with python pandas
|
I would like to do the same in python pandas as shown on the picture.
pandas image
This is sum function where the first cell is fixed and the formula calculates "continuous sum".
I tried to create pandas data frame however I did not manage to do this exactly.
|
[
"I would refer to pandas cumsum()\nFor example:\ndf['NEW_COLUMN_CUMULATED'] = df['OLD_COLUMN'].cumsum()\n\n",
"You can use DataFrame.cumsum() to achieve what you want:\n\nimport pandas as pd\ndf = pd.DataFrame([10, 20, 30])\nprint(df.cumsum())\n\n\n",
"Based on the example you shared, use pandas.Series.cumsum with pandas.Series.ffill :\nimport pandas as pd\nimport numpy as np\n\ndf= pd.DataFrame({'Nuber': [10.0, np.NaN, 20.0, np.NaN, np.NaN, 10.0, np.NaN, np.NaN, 30.0]})\n\ndf[\"Sum cont\"]= df[\"Nuber\"].cumsum().ffill()\n\n# Output :\nprint(df)\n\n Nuber Sum cont\n0 10.0 10.0\n1 NaN 10.0\n2 20.0 30.0\n3 NaN 30.0\n4 NaN 30.0\n5 10.0 40.0\n6 NaN 40.0\n7 NaN 40.0\n8 30.0 70.0\n\n# Edit :\nIf you read the data from an Excel spreadsheet, use this :\nimport pandas as pd\n\ndf = pd.read_excel(\"python inport.xlsx\")\ndf[\"Sum cont\"]= df[\"Nuber\"].cumsum().ffill()\n\nOr this :\nimport pandas as pd\n\ndf = (\n pd.read_excel(\"python inport.xlsx\")\n .assign(Sum_Count= lambda x: x[\"Nuber\"].cumsum().ffill())\n )\n\n",
"You could use sum function in pandas for specific range of rows using index of start and end row:\ndf.iloc[0:5].sum()\n\nThe sum of the rows with index values between 0 and 5 (not including 5)\nOutput:\n30\n\n"
] |
[
0,
0,
0,
0
] |
[] |
[] |
[
"excel",
"function",
"pandas",
"python",
"sum"
] |
stackoverflow_0074581136_excel_function_pandas_python_sum.txt
|
Q:
UNIQUE constraint failed: auth_user.username while trying to register the user in django
I have written the following code in views.py of my project, but when I try to register the user info as an object in the database, the above-mentioned error appears.
views.py
from django.shortcuts import render
from django.http import HttpResponse
from django.contrib.auth.models import User
from django.contrib.auth import authenticate, login
from django.contrib import messages
from django.shortcuts import redirect, render
# Create your views here.
def home(request):
return render(request, "authentication/index.html")
def signin(request):
if request.method == "post":
username = request.POST['username']
pass1 = request.POST['pass1']
user = authenticate(username = username, password=pass1)
if user is not None:
login(request, user)
fname = user.first_name
return render(request, 'authenticate/index.html', {'fname' : fname})
else:
messages.error(request, 'Bad credentials')
return redirect('home')
return render(request, "authentication/signin.html")
def signup(request):
if request.method == "POST":
# username = request.POST.get('username')
username = request.POST['username']
fname = request.POST['fname']
lname = request.POST['lname']
email = request.POST['email']
pass1 = request.POST['pass1']
pass2 = request.POST['pass2']
myuser = User.objects.create_user(username, email, pass1) ## Error is coming in this line ##
myuser.firstname = fname
myuser.lastname = lname
myuser.save()
messages.success(request, 'Your account has been succesfully created.')
return redirect('/signin')
return render(request, "authentication/signup.html")
def signout(request):
pass
I am trying to make a user login system and the following error occurs in views.py of my app.
A:
Most probably the username already exists in the database. Since you haven't used Django forms, the validation may not be done correctly with your manually defined HTML fields.
Also, create_user() saves the user and hashes the password for you, so a separate save() is only needed for fields you set afterwards. Avoid plain create() with a raw password here: the password would be stored unhashed and authentication would later fail.
You should first check whether a user with that username already exists in the database.
Use this view:
from django.contrib import messages
def signup(request):
if request.method == "POST":
username = request.POST['username']
try:
user_exist_or_not=User.objects.get(username=username)
messages.error(request,"user already exists")
return redirect("some_error_page")
except User.DoesNotExist:
print("user does not exist")
fname = request.POST['fname']
lname = request.POST['lname']
email = request.POST['email']
pass1 = request.POST['pass1']
pass2 = request.POST['pass2']
myuser = User.objects.create_user(username=username, email=email, password=pass1, first_name=fname, last_name=lname)  # create_user hashes the password
messages.success(request, 'Your account has been succesfully created.')
return redirect('/signin')
return render(request, "authentication/signup.html")
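For completeness, a minimal sketch using Django's built-in UserCreationForm, which handles the uniqueness check and password hashing for you (note it expects its own field names such as username, password1 and password2, not the custom HTML fields above):
from django.contrib import messages
from django.contrib.auth.forms import UserCreationForm
from django.shortcuts import redirect, render

def signup(request):
    if request.method == "POST":
        form = UserCreationForm(request.POST)
        if form.is_valid():
            form.save()  # creates the user with a properly hashed password
            messages.success(request, 'Your account has been successfully created.')
            return redirect('/signin')
        messages.error(request, form.errors)  # includes "username already exists"
    return render(request, "authentication/signup.html")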
|
UNIQUE constraint failed: auth_user.username while trying to register the user in django
|
I have written following set of code in views.py of my project but when I try to register the user info as an object in the database, above mentioned error arrived
views.py
from django.shortcuts import render
from django.http import HttpResponse
from django.contrib.auth.models import User
from django.contrib.auth import authenticate, login
from django.contrib import messages
from django.shortcuts import redirect, render
# Create your views here.
def home(request):
return render(request, "authentication/index.html")
def signin(request):
if request.method == "post":
username = request.POST['username']
pass1 = request.POST['pass1']
user = authenticate(username = username, password=pass1)
if user is not None:
login(request, user)
fname = user.first_name
return render(request, 'authenticate/index.html', {'fname' : fname})
else:
messages.error(request, 'Bad credentials')
return redirect('home')
return render(request, "authentication/signin.html")
def signup(request):
if request.method == "POST":
# username = request.POST.get('username')
username = request.POST['username']
fname = request.POST['fname']
lname = request.POST['lname']
email = request.POST['email']
pass1 = request.POST['pass1']
pass2 = request.POST['pass2']
myuser = User.objects.create_user(username, email, pass1) ## Error is coming in this line ##
myuser.firstname = fname
myuser.lastname = lname
myuser.save()
messages.success(request, 'Your account has been succesfully created.')
return redirect('/signin')
return render(request, "authentication/signup.html")
def signout(request):
pass
I am trying to make a user login system and the following error occurs in views.py of my app.
|
[
"Most probably the username already exists in the database as you haven't used django forms so maybe the validation isn't done correctly with your manually defined html fields.\nAlso create_user() method doesn't require save() method to be called for updating the fields, the thing you can do is save(commit=False) but I think you can directly use create() method.\nYou should first check whether the user with username already exists in the database or not.\nUse this view:\nfrom django.contrib import messages\ndef signup(request):\n\n if request.method == \"POST\":\n \n username = request.POST['username']\n try: \n user_exist_or_not=User.objects.get(username=username) \n messages.error(request,\"user already exists\")\n return redirect(\"some_error_page\")\n \n except User.DoesNotExist:\n print(\"user does not exist\")\n \n fname = request.POST['fname']\n lname = request.POST['lname']\n email = request.POST['email']\n pass1 = request.POST['pass1']\n pass2 = request.POST['pass2']\n\n myuser = User.objects.create(username=username, email=email, password=pass1,first_name=fname,last_name=lname)\n\n messages.success(request, 'Your account has been succesfully created.')\n\n return redirect('/signin')\n \n\n\n return render(request, \"authentication/signup.html\")\n\n"
] |
[
0
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0074585791_django_python.txt
|
Q:
pandas merge two dataframe and sort by compared column in adjacent column
I compare two dataframes and the result is shown below:
import pandas as pd
exam_1 = {
'Name': ['Jonn', 'Tomas', 'Fran', 'Olga', 'Veronika', 'Stephan'],
'Mat': [85, 75, 50, 93, 88, 90],
'Science': [96, 97, 99, 87, 90, 88],
'Reading': [80, 60, 72, 86, 84, 77],
'Wiritng': [78, 82, 88, 78, 86, 82],
'Lang': [77, 79, 77, 72, 90, 92],
}
exam_2 = {
'Name': ['Jonn', 'Tomas', 'Fran', 'Olga', 'Veronika', 'Stephan'],
'Mat': [80, 80, 90, 90, 85, 80],
'Science': [50, 60, 85, 90, 66, 82],
'Reading': [60, 75, 55, 90, 85, 60],
'Wiritng': [56, 66, 90, 82, 60, 80],
'Lang': [80, 78, 76, 90, 77, 66],
}
df_1 = pd.DataFrame(exam_1)
df_2 = pd.DataFrame(exam_2)
cmp = pd.merge(df_1, df_2, how="outer", on=["Name"], suffixes=("_1", "_2"))
print(cmp)
Name Mat_1 Science_1 Reading_1 Wiritng_1 Lang_1 Mat_2 Science_2 Reading_2 Wiritng_2 Lang_2
0 Jonn 85 96 80 78 77 80 50 60 56 80
1 Tomas 75 97 60 82 79 80 60 75 66 78
2 Fran 50 99 72 88 77 90 85 55 90 76
3 Olga 93 87 86 78 72 90 90 90 82 90
4 Veronika 88 90 84 86 90 85 66 85 60 77
5 Stephan 90 88 77 82 92 80 82 60 80 66
But I want to see Mat_1 and Mat_2 in adjacent columns, and likewise for the other subjects.
I tried to do it manually, but is there an easy way to do it, like an already built-in function?
A:
You can use pandas.DataFrame.sort_index on axis=1.
Replace this :
cmp = pd.merge(df_1, df_2, how="outer", on=["Name"], suffixes=("_1", "_2"))
By this :
cmp = (
pd.merge(df_1, df_2, how="outer", on=["Name"], suffixes=("_1", "_2"))
.set_index("Name")
.sort_index(axis=1)
.reset_index()
)
# Output :
print(cmp.to_string())
Name Lang_1 Lang_2 Mat_1 Mat_2 Reading_1 Reading_2 Science_1 Science_2 Wiritng_1 Wiritng_2
0 Jonn 77 80 85 80 80 60 96 50 78 56
1 Tomas 79 78 75 80 60 75 97 60 82 66
2 Fran 77 76 50 90 72 55 99 85 88 90
3 Olga 72 90 93 90 86 90 87 90 78 82
4 Veronika 90 77 88 85 84 85 90 66 86 60
5 Stephan 92 66 90 80 77 60 88 82 82 80
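If you would rather keep the original subject order (Mat, Science, ...) instead of the alphabetical order above, a small sketch that builds the column order from df_1 explicitly:
cols = ["Name"] + [f"{c}_{i}" for c in df_1.columns[1:] for i in (1, 2)]
cmp = cmp[cols]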
|
pandas merge two dataframe and sort by compared column in adjacent column
|
I compare two dataframe and result can be shown below;
import pandas as pd
exam_1 = {
'Name': ['Jonn', 'Tomas', 'Fran', 'Olga', 'Veronika', 'Stephan'],
'Mat': [85, 75, 50, 93, 88, 90],
'Science': [96, 97, 99, 87, 90, 88],
'Reading': [80, 60, 72, 86, 84, 77],
'Wiritng': [78, 82, 88, 78, 86, 82],
'Lang': [77, 79, 77, 72, 90, 92],
}
exam_2 = {
'Name': ['Jonn', 'Tomas', 'Fran', 'Olga', 'Veronika', 'Stephan'],
'Mat': [80, 80, 90, 90, 85, 80],
'Science': [50, 60, 85, 90, 66, 82],
'Reading': [60, 75, 55, 90, 85, 60],
'Wiritng': [56, 66, 90, 82, 60, 80],
'Lang': [80, 78, 76, 90, 77, 66],
}
df_1 = pd.DataFrame(exam_1)
df_2 = pd.DataFrame(exam_2)
cmp = pd.merge(df_1, df_2, how="outer", on=["Name"], suffixes=("_1", "_2"))
print(cmp)
Name Mat_1 Science_1 Reading_1 Wiritng_1 Lang_1 Mat_2 Science_2 Reading_2 Wiritng_2 Lang_2
0 Jonn 85 96 80 78 77 80 50 60 56 80
1 Tomas 75 97 60 82 79 80 60 75 66 78
2 Fran 50 99 72 88 77 90 85 55 90 76
3 Olga 93 87 86 78 72 90 90 90 82 90
4 Veronika 88 90 84 86 90 85 66 85 60 77
5 Stephan 90 88 77 82 92 80 82 60 80 66
But I want to see Mat_1 and Mat_2 in adjacent column and also others.
I try to do it manually but is there any easy way to do it like already built-in function.
|
[
"You can use pandas.DataFrame.sort_index on axis=1.\nReplace this :\ncmp = pd.merge(df_1, df_2, how=\"outer\", on=[\"Name\"], suffixes=(\"_1\", \"_2\"))\n\nBy this :\ncmp = (\n pd.merge(df_1, df_2, how=\"outer\", on=[\"Name\"], suffixes=(\"_1\", \"_2\"))\n .set_index(\"Name\")\n .sort_index(axis=1)\n .reset_index()\n )\n\n\n# Output :\nprint(cmp.to_string()) \n\n Name Lang_1 Lang_2 Mat_1 Mat_2 Reading_1 Reading_2 Science_1 Science_2 Wiritng_1 Wiritng_2\n0 Jonn 77 80 85 80 80 60 96 50 78 56\n1 Tomas 79 78 75 80 60 75 97 60 82 66\n2 Fran 77 76 50 90 72 55 99 85 88 90\n3 Olga 72 90 93 90 86 90 87 90 78 82\n4 Veronika 90 77 88 85 84 85 90 66 86 60\n5 Stephan 92 66 90 80 77 60 88 82 82 80\n\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"merge",
"pandas",
"python"
] |
stackoverflow_0074585876_dataframe_merge_pandas_python.txt
|
Q:
Using an Input to Retrieve a Corresponding Element From a 2D Array
This might be a really obvious solution to some, but, being pretty new to python, I'm unsure how to do it - In short, I want to take a user's input, and find the corresponding element on a 2D array, i.e. an input of '1' would print 'a', '2' would print 'b', and so on. Is there a way to do this?
The code I've written so far is below
var=[["1","a"],["2","b"],["3","c"]]
inp='x'
while inp!='1' and inp!='2' and inp!='3':
inp=str(input("Enter a number 1-3\n"))
I've not got a clue what to try, and I'm yet to find a solution - that just might be due to my poor phrasing though - so any help is greatly appreciated!
A:
Loop through the sub-lists inside var until you find one where the first element matches the user input. Then print the second element of that sub-list.
for sublist in var:
if sublist[0] == inp:
print(sublist[1])
A:
There are a lot of ways of doing what you want to do. Here's a way that is easier for a beginner to understand (hopefully):
var=[["1","a"],["2","b"],["3","c"]]
inp='x'
while inp!='1' and inp!='2' and inp!='3':
inp=input("Enter a number 1-3\n")
for entry in var:
if entry[0] == inp:
print(entry[1])
break
Note that I removed the call to str() that you had put around your call to input(), as that function already returns a string value.
A better way to do this, if you can choose the form of your input data structure, is to use a map (a dict in Python) rather than a list. Here's how to do that. Note how much simpler the added code becomes:
var= {'1': 'a', '2': 'b', '3': 'c'}
inp='x'
while inp!='1' and inp!='2' and inp!='3':
inp=input("Enter a number 1-3\n")
print(var[inp])
...and to make it so you don't have to hard-code valid inputs:
var= {'1': 'a', '2': 'b', '3': 'c'}
inp='x'
while inp not in var:
inp=input("Enter a number 1-3\n")
print(var[inp])
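Since var is already a list of two-element pairs, you can also build that dict directly from it instead of hard-coding it:
var = [["1", "a"], ["2", "b"], ["3", "c"]]
lookup = dict(var)  # {'1': 'a', '2': 'b', '3': 'c'}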
A:
One of the simplest Pythonic ways to do this would be the following.
Note that your question is relatively simple, so many answers are possible:
var=[["1","a"],["2","b"],["3","c"]]
inp = '1'
def function(lst, inp):
for val in lst:
if val[0] == inp:
return val[1]
print(function(var, inp))  # prints 'a'
|
Using an Input to Retrieve a Corresponding Element From a 2D Array
|
This might be a really obvious solution to some, but, being pretty new to python, I'm unsure how to do it - In short, I want to take a user's input, and find the corresponding element on a 2D array, i.e. an input of '1' would print 'a', '2' would print 'b', and so on. Is there a way to do this?
The code I've written so far is below
var=[["1","a"],["2","b"],["3","c"]]
inp='x'
while inp!='1' and inp!='2' and inp!='3':
inp=str(input("Enter a number 1-3\n"))
I've not got a clue what to try, and I'm yet to find a solution - that just might be due to my poor phrasing though - so any help is greatly appreciated!
|
[
"Loop through the sub-lists inside var until you find one where the first element matches the user input. Then print the second element of that sub-list.\nfor sublist in var:\n if sublist[0] == inp:\n print(sublist[1])\n\n",
"There are a lot of ways of doing what you want to do. Here's a way that is easier for a beginner to understand (hopefully):\nvar=[[\"1\",\"a\"],[\"2\",\"b\"],[\"3\",\"c\"]]\ninp='x'\nwhile inp!='1' and inp!='2' and inp!='3':\n inp=input(\"Enter a number 1-3\\n\")\n\nfor entry in var:\n if entry[0] == inp:\n print(entry[1])\n break\n\nNote that I removed the call to str() that you had put around your call to input(), as that function already returns a string value.\nA better way to do this if you can choose the form of your input data structure is to use a map rather than a list. Here's how to do that. Note how much simpler the added code becomes:\nvar= {'1': 'a', '2': 'b', '3': 'c'}\ninp='x'\nwhile inp!='1' and inp!='2' and inp!='3':\n inp=input(\"Enter a number 1-3\\n\")\n\nprint(var[inp])\n\n...and to make it so you don't have to hard-code valid inputs:\nvar= {'1': 'a', '2': 'b', '3': 'c'}\ninp='x'\nwhile inp not in var:\n inp=input(\"Enter a number 1-3\\n\")\n\nprint(var[inp])\n\n",
"One of the simplest pythonic ways to do this would be this:\nNote that your question is relatively easy, and that therefore are many answers possible\nvar=[[\"1\",\"a\"],[\"2\",\"b\"],[\"3\",\"c\"]]\n\ninp = '1'\n\ndef function(lst, inp):\n for val in lst:\n if val[0] == inp:\n return val[1]\n \nfunction(var, inp)\n\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"arrays",
"multidimensional_array",
"python"
] |
stackoverflow_0074585875_arrays_multidimensional_array_python.txt
|
Q:
ERROR: Proxy URL had no scheme. However, URL & Proxies are properly setup
I'm getting the error:
urllib3.exceptions.ProxySchemeUnknown: Proxy URL had no scheme, should start with http:// or https://
but the proxies are fine & so is the URL.
URL = f"https://google.com/search?q={query2}&num=100"
mysite = self.listbox.get(0)
headers = {"user-agent": USER_AGENT}
while True:
proxy = next(proxy_cycle)
print(proxy)
proxies = {"http": proxy, "https": proxy}
print(proxies)
resp = requests.get(URL, proxies=proxies, headers=headers)
if resp.status_code == 200:
break
Print results:
41.139.253.91:8080
{'http': '41.139.253.91:8080', 'https': '41.139.253.91:8080'}
A:
On Linux, unset http_proxy and https_proxy in the terminal, from the current location of your project:
unset http_proxy
unset https_proxy
A:
I had the same problem and setting in my terminal https_proxy variable really helped me. You can set it as follows:
set HTTPS_PROXY=http://username:password@proxy.example.com:8080
set https_proxy=http://username:password@proxy.example.com:8080
Where proxy.example.com is the proxy address (in my case it is "localhost") and 8080 is my port.
You can figure out your username by typing echo %username% in your command line. As for the proxy server, on Windows, you need to go to "Internet Options" -> "Connections" -> LAN Settings and tick "Use a proxy server for your LAN". There, you can find your proxy address and port.
An important note here. If you're using PyCharm, try first running your script from the terminal. I say this because you may get the same error if you will just run the file by "pushing" the button. But using the terminal may help you get rid of this error.
P.S. Also, you can try to downgrade your pip to 20.2.3 as it may help you too.
A:
I was having the same issue. I resolved it by upgrading the requests library in python3:
pip3 install --upgrade requests
I think it is related to a lower version of the requests library conflicting with a higher version of python3.
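Given that the proxies printed in the question ({'http': '41.139.253.91:8080', ...}) have no scheme, the direct fix for this exact error is usually to prepend one when building the dict; a minimal sketch against the question's loop:
proxy = next(proxy_cycle)  # e.g. '41.139.253.91:8080'
# newer urllib3 (used by requests) requires an explicit scheme on proxy URLs
proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}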
|
ERROR: Proxy URL had no scheme. However, URL & Proxies are properly setup
|
I'm getting the error:
urllib3.exceptions.ProxySchemeUnknown: Proxy URL had no scheme, should start with http:// or https://
but the proxies are fine & so is the URL.
URL = f"https://google.com/search?q={query2}&num=100"
mysite = self.listbox.get(0)
headers = {"user-agent": USER_AGENT}
while True:
proxy = next(proxy_cycle)
print(proxy)
proxies = {"http": proxy, "https": proxy}
print(proxies)
resp = requests.get(URL, proxies=proxies, headers=headers)
if resp.status_code == 200:
break
Print results:
41.139.253.91:8080
{'http': '41.139.253.91:8080', 'https': '41.139.253.91:8080'}
|
[
"On Linux unset http_proxy and https_proxy using terminal on the current location of your project\nunset http_proxy\n\nunset https_proxy\n\n",
"I had the same problem and setting in my terminal https_proxy variable really helped me. You can set it as follows:\nset HTTPS_PROXY=http://username:password@proxy.example.com:8080 \nset https_proxy=http://username:password@proxy.example.com:8080\n\nWhere proxy.example.com is the proxy address (in my case it is \"localhost\") and 8080 is my port.\nYou can figure out your username by typing echo %username% in your command line. As for the proxy server, on Windows, you need to go to \"Internet Options\" -> \"Connections\" -> LAN Settings and tick \"Use a proxy server for your LAN\". There, you can find your proxy address and port.\nAn important note here. If you're using PyCharm, try first running your script from the terminal. I say this because you may get the same error if you will just run the file by \"pushing\" the button. But using the terminal may help you get rid of this error.\nP.S. Also, you can try to downgrade your pip to 20.2.3 as it may help you too.\n",
"I was having same issue. I resolved with upgrading requests library in python3 by\npip3 install --upgrade requests\n\nI think it is related to lower version of requests library conflicting higher version of python3\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"python",
"python_3.x",
"python_requests"
] |
stackoverflow_0067010503_python_python_3.x_python_requests.txt
|
Q:
Getting Video Links from Youtube Channel in Python Selenium
I am using Selenium in Python to scrape the videos from Youtube channels' websites. Below is a set of code. The line videos = driver.find_elements(By.CLASS_NAME, 'style-scope ytd-grid-video-renderer') repeatedly returns no links to the videos (a.k.a. the print(videos) after it outputs an empty list). How would you modify it to find all the videos on the loaded page?
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
driver = webdriver.Chrome()
driver.get('https://www.youtube.com/wendoverproductions/videos')
videos = driver.find_elements(By.CLASS_NAME, 'style-scope ytd-grid-video-renderer')
print(videos)
urls = []
titles = []
dates = []
for video in videos:
video_url = video.find_element(by=By.XPATH, value='.//*[@id="video-title"]').get_attribute('href')
urls.append(video_url)
video_title = video.find_element(by=By.XPATH, value='.//*[@id="video-title"]').text
titles.append(video_title)
video_date = video.find_element(by=By.XPATH, value='.//*[@id="metadata-line"]/span[2]').text
dates.append(video_date)
A:
If you don't have a YouTube Data API v3 developer key:
The following procedure requires you to have a Google account.
Go to: https://console.cloud.google.com/projectcreate
Click on the CREATE button.
Go to: https://console.cloud.google.com/marketplace/product/google/youtube.googleapis.com
Click on the ENABLE button.
Click on the CREATE CREDENTIALS button.
Choose the Public data option.
Click on the NEXT button.
Note the displayed API Key and continue reading.
If you have a YouTube Data API v3 developer key:
To get the videos of a given YouTube channel id (see this answer if you don't know how to get the channel id of a given YouTube channel), replace its second character (C) with U to obtain its uploads playlist id and provide it as a playlistId to the YouTube Data API v3 PlaylistItems: list endpoint.
This is a Python sample code listing videos of a given channel id (don't forget to replace API_KEY with your YouTube Data API v3 developer key):
import requests, json
CHANNEL_ID = 'UC07-dOwgza1IguKA86jqxNA'
PLAYLIST_ID = 'UU' + CHANNEL_ID[2:]
API_KEY = 'AIzaSy...'
URL = f'https://www.googleapis.com/youtube/v3/playlistItems?part=snippet&playlistId={PLAYLIST_ID}&maxResults=50&key={API_KEY}'
pageToken = ''
while True:
pageUrl = URL
if pageToken != '':
pageUrl += f'&pageToken={pageToken}'
response = json.loads(requests.get(pageUrl).text)
print(response['items'])
if 'nextPageToken' in response:
pageToken = response['nextPageToken']
else:
break
For documentation concerning the pagination, see this webpage.
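For reference, each returned item exposes the video id and title under snippet (per the PlaylistItems resource); a small sketch of extracting them inside the loop above:
for item in response['items']:
    video_id = item['snippet']['resourceId']['videoId']
    title = item['snippet']['title']
    print(f'https://www.youtube.com/watch?v={video_id}', title)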
A:
Implementation using Selenium:
First of all, I wanted to solve this by pulling the data through the YouTube API, and I nearly got there, but because of some API restrictions (such as API key request limits) and other complexities I couldn't grab the complete data. That's why I went with the powerful Selenium engine as my last resort, and it works like a charm.
Full working code as an example:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
import time
import pandas as pd
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
options = webdriver.ChromeOptions()
#All are optional
options.add_experimental_option("detach", True)
options.add_argument("--disable-extensions")
options.add_argument("--disable-notifications")
options.add_argument("--disable-Advertisement")
options.add_argument("--disable-popup-blocking")
options.add_argument("start-maximized")
s=Service('./chromedriver')
driver= webdriver.Chrome(service=s,options=options)
driver.get('https://www.youtube.com/wendoverproductions/videos')
time.sleep(3)
item = []
SCROLL_PAUSE_TIME = 1
last_height = driver.execute_script("return document.documentElement.scrollHeight")
item_count = 180
while item_count > len(item):
driver.execute_script("window.scrollTo(0,document.documentElement.scrollHeight);")
time.sleep(SCROLL_PAUSE_TIME)
new_height = driver.execute_script("return document.documentElement.scrollHeight")
if new_height == last_height:
break
last_height = new_height
data = []
try:
for e in WebDriverWait(driver, 20).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, 'div#details'))):
title = e.find_element(By.CSS_SELECTOR,'a#video-title-link').get_attribute('title')
vurl = e.find_element(By.CSS_SELECTOR,'a#video-title-link').get_attribute('href')
views= e.find_element(By.XPATH,'.//*[@id="metadata"]//span[@class="inline-metadata-item style-scope ytd-video-meta-block"][1]').text
date_time = e.find_element(By.XPATH,'.//*[@id="metadata"]//span[@class="inline-metadata-item style-scope ytd-video-meta-block"][2]').text
data.append({
'video_url':vurl,
'title':title,
'date_time':date_time,
'views':views
})
except:
pass
item = data
print(item)
print(len(item))
# df = pd.DataFrame(item)
# print(df)
OUTPUT:
[{'video_url': 'https://www.youtube.com/watch?v=oL0umpPPe-8', 'title': 'Samsung’s Dangerous Dominance over South Korea', 'date_time': '10 days ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=GBp_NgrrtPM', 'title': 'China’s Electricity Problem', 'date_time': '3 weeks ago', 'views': '1.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=YBNcYxHJPLE', 'title': 'How the World’s Wealthiest People Travel', 'date_time': '1 month ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=iIpPuJ_r8Xg', 'title': 'The US Military’s Massive Global Transportation System', 'date_time': '1 month ago', 'views': '1.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=MY8AB1wYOtg', 'title': 'The Absurd Logistics of Concert Tours', 'date_time': '2 months ago', 'views': '1.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=8xzINLykprA', 'title': 'Money’s Mostly Digital, So Why Is Moving It So Hard?', 'date_time': '2 months ago', 'views': '1.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=f66GfsKPTUg', 'title': 'How This Central African City Became the World’s Most Expensive', 'date_time': '3 months ago', 'views': '3.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=IDLkOWW0_xg', 'title': 'The Simple Genius of NYC’s Water Supply System', 'date_time': '3 months ago', 'views': '1.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=U9jirFqex6g', 'title': 'Europe’s Experiment: Treating Trains Like Planes', 'date_time': '3 months ago', 'views': '2M views'}, {'video_url': 'https://www.youtube.com/watch?v=eoWcQUjNM8o', 'title': 'How the YouTube Creator Economy Works', 'date_time': '4 months ago', 'views': '1M views'}, {'video_url': 'https://www.youtube.com/watch?v=V0Xx0E8cs7U', 'title': 'The Incredible Logistics Behind Weather Forecasting', 'date_time': '4 months ago', 'views': '1.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=v0aGGOK4kAM', 'title': 'Australia Had a Mass-Shooting Problem. Here’s How it Stopped', 'date_time': '5 months ago', 'views': '947K views'}, {'video_url': 'https://www.youtube.com/watch?v=AW3gaelBypY', 'title': 'The Carbon Offset Problem', 'date_time': '5 months ago', 'views': '1.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=xhYl7Jjefo8', 'title': 'Jet Lag: The Game - A New Channel by Wendover Productions', 'date_time': '6 months ago', 'views': '328K views'}, {'video_url': 'https://www.youtube.com/watch?v=oESoI6XxZTg', 'title': 'How to Design a Theme Park (To Take Tons of Your Money)', 'date_time': '6 months ago', 'views': '1.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=AQbmpecxS2w', 'title': 'Why Gas Got So Expensive (It’s Not the War)', 'date_time': '6 months ago', 'views': '3.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=U_7CGl6VWaQ', 'title': 'How Cyberwarfare Actually Works', 'date_time': '7 months ago', 'views': '1.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=R9pxFgJwxFE', 'title': 'The Incredible Logistics Behind Corn Farming', 'date_time': '7 months ago', 'views': '1.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=SrTrpwzVt4g', 'title': 'The Sanction-Fueled Destruction of
the Russian Aviation Industry', 'date_time': '8 months ago', 'views': '4.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=b1JlYZQG3lI', 'title': "Why There are Now So Many Shortages (It's Not COVID)", 'date_time': '1 year ago', 'views': '7.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=N4dOCfWlgBw', 'title': 'The Insane Logistics of Shutting Down the Cruise Industry', 'date_time': '1 year ago', 'views': '2M views'}, {'video_url': 'https://www.youtube.com/watch?v=3CuPqeIJr3U', 'title': "China's Vaccine Diplomacy", 'date_time': '1 year ago', 'views': '916K views'}, {'video_url': 'https://www.youtube.com/watch?v=DlTq8DbRs4k', 'title': "The UK's Failed Experiment in Rail Privatization", 'date_time': '1 year ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=VjiH3mpxyrQ', 'title': 'How to Start an Airline', 'date_time': '1 year ago', 'views': '1.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=pLcqJ2DclEg', 'title': 'The Electric Vehicle Charging Problem', 'date_time': '1 year ago', 'views': '4.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=3gdCH1XUIlE', 'title': "How Air Ambulances (Don't) Work", 'date_time': '1 year ago', 'views': '1M views'}, {'video_url': 'https://www.youtube.com/watch?v=2qanMpnYsjk', 'title': "How Amazon's Super-Complex Shipping System
Works", 'date_time': '1 year ago', 'views': '2.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=7R7jNWHp0D0', 'title': 'The News You Missed in 2020, From Every Country in the World (Part 2)', 'date_time': '1 year ago', 'views': '1.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=GIFV_Z7Y9_w', 'title': 'The News You Missed in 2020, From Every Country in the World (Part 1)', 'date_time': '1 year ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=KXRtNwUju5g', 'title': "How China Broke the World's Recycling", 'date_time': '1 year ago', 'views': '3.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=fTyUE162lrw', 'title':
'Why Long-Haul Low-Cost Airlines Always Go Bankrupt', 'date_time': '1 year ago', 'views': '1.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=ZAEydOjNWyQ', 'title': 'How Living at the
South Pole Works', 'date_time': '2 years ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=_BCY0SPOFpE', 'title': "Egypt's Dam Problem: The Geopolitics of the Nile", 'date_time': '2 years ago', 'views': '1.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=v_rXhuaI0W8', 'title': 'The 8 Flights That Show How COVID-19 Reinvented Aviation', 'date_time':
'2 years ago', 'views': '1.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=Ongqf93rAcM', 'title': "How to Beat the Casino, and How They'll Stop You", 'date_time': '2 years ago', 'views': '3.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=byW1GExQB84', 'title': 'Distributing the COVID Vaccine: The Greatest Logistics Challenge Ever', 'date_time': '2 years ago', 'views': '911K views'}, {'video_url': 'https://www.youtube.com/watch?v=DTIDCA7mjZs', 'title': 'How to Illegally Cross the Mexico-US Border', 'date_time': '2 years ago', 'views': '1.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=H_akzwzghWQ', 'title': 'How COVID-19 Broke the Airline
Pricing Model', 'date_time': '2 years ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=7C1fPocIFgU', 'title': 'The Broken Economics of Organ Transplants', 'date_time': '2 years ago', 'views': '600K views'}, {'video_url': 'https://www.youtube.com/watch?v=3J06af5xHD0', 'title': 'The Final Years of Majuro [Documentary]', 'date_time': '2 years ago', 'views': '2M views'}, {'video_url': 'https://www.youtube.com/watch?v=YgiMqePRp0Y', 'title': 'The Logistics of Covid-19 Testing', 'date_time': '2 years ago', 'views': '533K views'}, {'video_url': 'https://www.youtube.com/watch?v=Rtmhv5qEBg0', 'title': "Airlines' Protocol for After a Plane Crash", 'date_time': '2 years ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=6GMoUmvw8kU', 'title': 'Why Taiwan and China are Battling over Tiny Island Countries', 'date_time': '2 years ago', 'views': '855K views'}, {'video_url': 'https://www.youtube.com/watch?v=QlPrAKtegFQ', 'title': 'How Long-Haul Trucking Works', 'date_time': '2 years ago', 'views': '2.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=uAG4zCsiA_w', 'title': 'Why Helicopter Airlines Failed', 'date_time': '2 years ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=NtX-Ibi21tU', 'title': 'The Five Rules of Risk', 'date_time': '2 years ago', 'views': '1M
views'}, {'video_url': 'https://www.youtube.com/watch?v=r2oPk20OHBE', 'title': "Air Cargo's Coronavirus Problem", 'date_time': '2 years ago', 'views': '1.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=ABIkWS_YavM', 'title': 'How Offshore Oil Rigs Work', 'date_time': '2 years ago', 'views': '2.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=-pNBAxx4IRo', 'title':
"How the US' Hospital Ships Work", 'date_time': '2 years ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=VX2e2iEg_pM', 'title': 'COVID-19: How Aviation is Fighting for Survival', 'date_time': '2 years ago', 'views': '1.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=Ppjv0H-Yt5Q', 'title': 'The Logistics of the US Census', 'date_time': '2 years ago', 'views': '887K views'}, {'video_url': 'https://www.youtube.com/watch?v=QvUpSFGRqEo', 'title': 'How Boeing Will Get the 737 MAX Flying Again', 'date_time': '2 years ago', 'views': '1.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=3Sh7hghljuQ', 'title': 'How China Built a Hospital in 10 Days', 'date_time': '2 years ago', 'views': '2.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=vpcUVOjUrKk', 'title': 'The Business of Ski Resorts', 'date_time': '2 years ago', 'views': '2.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=FCEwPio2bkg', 'title': "American Sports' Battle for China", 'date_time': '2 years ago', 'views': '494K views'}, {'video_url': 'https://www.youtube.com/watch?v=5-QejUTDCWw', 'title': "The World's Most Useful Airport
[Documentary]", 'date_time': '2 years ago', 'views': '9M views'}, {'video_url': 'https://www.youtube.com/watch?v=RyG7nzteG64', 'title': 'The Logistics of the US Election', 'date_time': '2 years
ago', 'views': '838K views'}, {'video_url': 'https://www.youtube.com/watch?v=dSw7fWCrDk0', 'title': 'Amtrak’s Grand Plan for Profitability', 'date_time': '2 years ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=5SDUm1bx7Zc', 'title': "Australia's China Problem", 'date_time': '3 years ago', 'views': '6.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=U1a73gdNs0M', 'title': 'The US Government Program That Pays For Your Flights', 'date_time': '3 years ago', 'views': '886K views'}, {'video_url': 'https://www.youtube.com/watch?v=tLS9IK693KI', 'title': 'The Logistics of Disaster Response', 'date_time': '3 years ago', 'views': '938K views'}, {'video_url': 'https://www.youtube.com/watch?v=cnfoTAxhpzQ', 'title': 'Why So Many Airlines are Going Bankrupt', 'date_time': '3 years ago', 'views': '2.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=erS2YMYcZO8', 'title': 'The Logistics of Filming Avengers', 'date_time': '3
years ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=XjbYloKJX7c',
'title': "Boeing's China Problem", 'date_time': '3 years ago', 'views': '3.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=A0qt0hdCQtg', 'title': "The US' Overseas Military Base Strategy", 'date_time': '3 years ago', 'views': '3.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=I9ttpHvK6yw', 'title': 'How to Stop an Epidemic', 'date_time': '3 years ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=pJ_LUFBSoqM', 'title': "The NFL's Logistics Problem", 'date_time': '3 years ago', 'views': '2.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=jYPrH4xANpU', 'title': 'The Economics of Private Jets', 'date_time': '3 years
ago', 'views': '4.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=KgsxapE27NU', 'title': "The World's Shortcut: How the Panama Canal Works", 'date_time': '3 years ago', 'views': '4.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=C1f2GwWLB3k', 'title': 'How Air Traffic
Control Works', 'date_time': '3 years ago', 'views': '4M views'}, {'video_url': 'https://www.youtube.com/watch?v=u2-ehDQM6TM', 'title': 'Extremities: A New Scripted Podcast from Wendover', 'date_time': '3 years ago', 'views': '193K views'}, {'video_url': 'https://www.youtube.com/watch?v=17oZPYcpPnQ', 'title': "Iceland's Tourism Revolution", 'date_time': '3 years ago', 'views': '1.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=SUsqnD9-42g', 'title': 'Mini Countries Abroad: How Embassies Work', 'date_time': '3 years ago', 'views': '5.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=BfNEOfEGe3I', 'title': 'The Economics That Made Boeing Build the 737 Max', 'date_time': '3 years ago', 'views': '2.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=EkRRo5DN9lI', 'title': 'The Logistics of the International Space Station', 'date_time': '3 years ago', 'views': '3.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=E3jfvncofiA', 'title': 'How Airlines Decide Where to Fly', 'date_time': '3 years ago', 'views': '2.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=xX0ozxrZlEQ', 'title': 'How Rwanda is Becoming the Singapore of Africa', 'date_time': '3 years ago', 'views': '6.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=9poImReDFeY', 'title': 'How Freight Trains Connect the World', 'date_time': '3 years ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=69EVxLLhciQ', 'title': 'How Hong Kong Changed Countries', 'date_time': '3 years ago', 'views': '2.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=gdy0gBVWAzE', 'title': 'Living Underwater:
How Submarines Work', 'date_time': '3 years ago', 'views': '10M views'}, {'video_url': 'https://www.youtube.com/watch?v=bnoUBfLxZz0', 'title': 'The Super-Fast Logistics of Delivering Blood By Drone', 'date_time': '3 years ago', 'views': '1M views'}, {'video_url': 'https://www.youtube.com/watch?v=msjuRoZ0Vu8', 'title': 'The New Economy of the Warming Arctic', 'date_time': '3 years ago', 'views': '844K views'}, {'video_url': 'https://www.youtube.com/watch?v=c0pS3Zx7Fc8', 'title': 'Cities at Sea: How Aircraft Carriers Work', 'date_time': '3 years ago', 'views': '13M views'}, {'video_url': 'https://www.youtube.com/watch?v=TNUomfuWuA8', 'title': 'The Rise of 20-Hour Long Flights', 'date_time': '3 years ago', 'views': '3.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=0JDoll8OEFE', 'title': 'Why China Is so Good at Building Railways', 'date_time': '4 years ago', 'views': '8.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=7cjIWMUgPtY', 'title': 'The Magic Economics of Gambling', 'date_time': '4 years ago', 'views': '2.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=cognzTud3Wg', 'title': 'Why the World is Running Out of
Pilots', 'date_time': '4 years ago', 'views': '4.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=EodxubsO8EI', 'title': 'How Fighting Wildfires Works', 'date_time': '4 years ago', 'views': '1.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=30XpSozOZII', 'title': 'How to Build a $100 Million Satellite', 'date_time': '4 years ago', 'views': '1.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=zQV_DKQkT8o', 'title': "How Africa is Becoming China's China", 'date_time': '4 years ago', 'views': '8.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=wdU1WTBJMl0', 'title': 'How Airports Make Money', 'date_time': '4 years ago', 'views': '4.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=6OLVFa8YRfM', 'title': 'The Insane Logistics of Formula 1', 'date_time': '4 years ago', 'views': '6.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=jdNDYBt9e_U', 'title': 'The Most Valuable Airspace in the World', 'date_time': '4 years ago', 'views': '5.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=5r90DYjZ76g', 'title': "Guam: Why America's Most Isolated Territory Exists", 'date_time': '4 years ago', 'views': '6M views'}, {'video_url': 'https://www.youtube.com/watch?v=1Y1kJpHBn50', 'title': 'How to Design Impenetrable Airport Security', 'date_time': '4 years ago', 'views': '4.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=hiRBQxHrxNw', 'title': 'Space: The Next Trillion Dollar Industry', 'date_time': '4 years ago', 'views': '3.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=-s3j-ptJD10', 'title': 'The Logistics of Living in Antarctica', 'date_time': '4
years ago', 'views': '5M views'}, {'video_url': 'https://www.youtube.com/watch?v=y3qfeoqErtY', 'title': 'How Overnight Shipping Works', 'date_time': '4 years ago', 'views': '7.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=IvAvHjYoLUU', 'title': 'Why Cities Exist', 'date_time':
'4 years ago', 'views': '3.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=72hlr-E7KA0', 'title': 'How Airlines Price Flights', 'date_time': '4 years ago', 'views': '3.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=voozHXadYYE', 'title': 'The Gene Patent Question', 'date_time': '4 years ago', 'views': '587K views'}, {'video_url': 'https://www.youtube.com/watch?v=uU3kLBo_ruo', 'title': 'The Nuclear Waste Problem', 'date_time': '5 years ago', 'views': '3.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=V1YMPk3XhCc', 'title': 'The Little Plane War', 'date_time': '5 years ago', 'views': '4.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=h97fXhDN5qE', 'title': "Elon Musk's Basic Economics", 'date_time': '5 years ago', 'views':
'5.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=GiBF6v5UAAE', 'title': "China's Geography Problem", 'date_time': '5 years ago', 'views': '12M views'}, {'video_url': 'https://www.youtube.com/watch?v=-cjfTG8DbwA', 'title': 'Why Public Transportation Sucks in the US', 'date_time': '5 years ago', 'views': '3.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=ql0Op1VcELw', 'title': "What's Actually the Plane of the Future", 'date_time': '5 years ago', 'views': '5M views'}, {'video_url': 'https://www.youtube.com/watch?v=RaGG50laHgI', 'title': 'TWL is back! (But not here...)', 'date_time': '5 years ago', 'views': '303K views'}, {'video_url': 'https://www.youtube.com/watch?v=yT9bit2-1pg', 'title': 'How to Stop a Riot', 'date_time': '5 years ago', 'views': '5.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=e-WO-c9xHms', 'title': 'How Geography Gave the US Power', 'date_time': '5 years ago', 'views': '3.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=E7Jfrzkmzyc', 'title': 'Why Chinese Manufacturing Wins', 'date_time': '5 years ago', 'views': '6.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=dGXahSnA_oA', 'title': 'How Airlines Schedule Flights', 'date_time': '5 years ago', 'views': '4M views'},
{'video_url': 'https://www.youtube.com/watch?v=N4PW66_g6XA', 'title': 'How to Fix Traffic Forever', 'date_time': '5 years ago', 'views': '3.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=j48Z3W35FI0', 'title': 'How the US Government Will Survive Doomsday', 'date_time': '5 years
ago', 'views': '4.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=MP1OAm7Pzps', 'title': 'Why the Northernmost Town in America Exists', 'date_time': '5 years ago', 'views': '8.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=lkCeKc1GTMs', 'title': 'Which Country Are International Airports In?', 'date_time': '5 years ago', 'views': '3.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=Nu2WOxXxsHw', 'title': 'How the Post Office Made America', 'date_time': '5 years ago', 'views': '1.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=HSxSgbNQi-g', 'title': 'Small Planes Over Big Oceans (ETOPS Explained)', 'date_time': '5 years ago', 'views': '3.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=ZcDwtO4RWmo', 'title': "Canada's New Shipping Shortcut", 'date_time': '5 years ago', 'views': '3.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=EqWksuyry5w', 'title': 'Why Airlines Sell More Seats Than They Have', 'date_time': '5 years ago', 'views': '2.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=fwjwePe-HmA', 'title': 'Why Trains are so Expensive', 'date_time': '5 years ago', 'views': '3.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=v3C_5bsdQWg', 'title': "Russia's Geography Problem", 'date_time': '5 years ago', 'views': '8.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=BzB5xtGGsTc', 'title': 'The Economics of Airline Class', 'date_time': '5 years
ago', 'views': '10M views'}, {'video_url': 'https://www.youtube.com/watch?v=n1QEj09Pe6k', 'title': "Why Planes Don't Fly Faster", 'date_time': '5 years ago', 'views': '7.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=YJRqB1xtIxg', 'title': "The US President's $2,614 Per Minute Transport System", 'date_time': '5 years ago', 'views': '11M views'}, {'video_url': 'https://www.youtube.com/watch?v=ancuYECRGN8', 'title': 'Every State in the US', 'date_time': '5 years ago', 'views': '6.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=bL2WPDtLYNU', 'title': 'How
to Make First Contact', 'date_time': '5 years ago', 'views': '3.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=3PWWtqfwacQ', 'title': 'Why Cities Are Where They Are', 'date_time': '5 years ago', 'views': '5.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=_lj127TKu4Q', 'title': 'Is the European Union a Country?', 'date_time': '5 years ago', 'views': '2M views'}, {'video_url': 'https://www.youtube.com/watch?v=d1CVVoAihBc', 'title': 'Every Country in the World
(Part 2)', 'date_time': '5 years ago', 'views': '3.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=P-b4SUOfn_4', 'title': 'Every Country in the World (Part 1)', 'date_time': '5 years
ago', 'views': '8.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=thqbjA2DC-E', 'title': 'The Five Freedoms of Aviation', 'date_time': '6 years ago', 'views': '3.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=_rk2hPrEnk8', 'title': "Why Chicken Sandwiches Don't Cost $1500", 'date_time': '6 years ago', 'views': '3M views'}, {'video_url': 'https://www.youtube.com/watch?v=aQSxPzafO_k', 'title': 'Urban Geography: Why We Live Where We Do', 'date_time': '6 years ago', 'views': '3.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=NlIdzF1_b5M', 'title': 'Big Plane vs Little Plane (The Economics of Long-Haul Flights)', 'date_time': '6 years ago', 'views': '4.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=HpfvOc8HJdg', 'title': 'TWL #10.5: Argleton (+ Patreon)', 'date_time': '6 years ago', 'views': '202K views'}, {'video_url':
'https://www.youtube.com/watch?v=7ouiTMXuDAQ', 'title': 'How Much Would it Cost to Live on the Moon?', 'date_time': '6 years ago', 'views': '1M views'}, {'video_url': 'https://www.youtube.com/watch?v=-aQ2E0mlRQI', 'title': 'The Plane Highway in the Sky', 'date_time': '6 years ago', 'views': '4.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=mbEfzuCLoAQ', 'title': 'Why Trains Suck in America', 'date_time': '6 years ago', 'views': '5.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=N7CvEt51fz4', 'title': 'How Maritime Law Works', 'date_time': '6 years ago', 'views': '3.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=7PsmkAxVHdM', 'title': 'Why College is so Expensive', 'date_time': '6 years ago', 'views': '1.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=JoYNhX15w4k', 'title': 'TWL #10: The Day Sweden Switched Driving Directions', 'date_time': '6 years ago', 'views': '1.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=1c-jBfZPVv4', 'title': 'TWL #9: The Secret Anti-Counterfeit Symbol', 'date_time': '6 years ago', 'views': '1.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=069y1MpOkQY', 'title': 'How Budget Airlines Work', 'date_time': '6 years ago', 'views': '9.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=n7RHv_MIIT0', 'title': 'TWL #8: Immortality Through Quantum Suicide', 'date_time': '6 years ago', 'views': '1.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=6Oe8T3AvydU', 'title': 'Why Flying is So Expensive', 'date_time': '6 years ago',
'views': '4.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=LnEyjwdoj7g', 'title': 'TWL #7: This Number is Illegal', 'date_time': '6 years ago', 'views': '3.7M views'}, {'video_url':
'https://www.youtube.com/watch?v=5XdYbmova_s', 'title': 'TWL #6: Big Mac Economics', 'date_time': '6 years ago', 'views': '1.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=F53TA37Mqck', 'title': 'TWL #5: Timekeeping on Mars', 'date_time': '6 years ago', 'views': '810K views'}, {'video_url': 'https://www.youtube.com/watch?v=O6o5C-i02c8', 'title': 'TWL #4: Which Way Should Toilet Paper Face?', 'date_time': '6 years ago', 'views': '739K views'}, {'video_url': 'https://www.youtube.com/watch?v=JllpzZgAAl8', 'title': 'TWL #3: Paper Towns- Fake Places Made to Catch Copyright Thieves', 'date_time': '6 years ago', 'views': '1.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=v_iaurPRhCs', 'title': 'TWL #2: Bir Tawil- The Land Without a Country', 'date_time': '6 years ago', 'views': '852K views'}, {'video_url': 'https://www.youtube.com/watch?v=_ugvJi2pIck', 'title': 'TWL #1: Presidents Are 4x More Likely to be Lefties', 'date_time': '6 years ago', 'views': '546K views'}, {'video_url': 'https://www.youtube.com/watch?v=RRWggYusSds', 'title':
"Why Time is One of Humanity's Greatest Inventions", 'date_time': '6 years ago', 'views': '1.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=h6AWAoc_Lr0', 'title': 'Space Law-What Laws are There in Space?', 'date_time': '6 years ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=2H9KEcb74aA', 'title': "A Map Geek's Tour of the World #2", 'date_time': '6 years ago', 'views': '248K views'}, {'video_url': 'https://www.youtube.com/watch?v=ThNeIT7aceI', 'title': 'How the Layouts of Grocery Stores are Secretly Designed to Make You Spend More Money', 'date_time': '6 years ago', 'views': '953K views'}, {'video_url': 'https://www.youtube.com/watch?v=bg2VZIPfX0U', 'title': 'How to Create a Country', 'date_time': '6 years ago', 'views': '3.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=pVB4TEeMcgA', 'title': "A Map Geek's Tour of the World", 'date_time': '6 years ago', 'views': '405K views'}, {'video_url': 'https://www.youtube.com/watch?v=avh7ez858xM', 'title': 'The Messy Ethics of Self Driving Cars', 'date_time': '6 years ago', 'views': '461K views'}, {'video_url': 'https://www.youtube.com/watch?v=Y3jlFtBg0Y8', 'title': 'The Surprising Easternmost Point in the US', 'date_time': '6 years ago', 'views': '213K views'}, {'video_url': 'https://www.youtube.com/watch?v=F-ZskaqBshs', 'title': "Containerization: The Most Influential Invention That You've Never Heard Of", 'date_time': '6 years ago', 'views': '684K views'}, {'video_url': 'https://www.youtube.com/watch?v=Nn-ym8y1_kw', 'title': 'The World is Shrinking', 'date_time': '7 years ago', 'views': '183K views'}, {'video_url': 'https://www.youtube.com/watch?v=8LqqVfPduTs', 'title': 'How Marketers Manipulate Us: Psychological Manipulation in Advertising', 'date_time': '7 years ago', 'views': '458K views'}]
Total: 189
|
Getting Video Links from Youtube Channel in Python Selenium
|
I am using Selenium in Python to scrape the videos from Youtube channels' websites. Below is a set of code. The line videos = driver.find_elements(By.CLASS_NAME, 'style-scope ytd-grid-video-renderer') repeatedly returns no links to the videos (a.k.a. the print(videos) after it outputs an empty list). How would you modify it to find all the videos on the loaded page?
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
driver = webdriver.Chrome()
driver.get('https://www.youtube.com/wendoverproductions/videos')
videos = driver.find_elements(By.CLASS_NAME, 'style-scope ytd-grid-video-renderer')
print(videos)
urls = []
titles = []
dates = []
for video in videos:
video_url = video.find_element(by=By.XPATH, value='.//*[@id="video-title"]').get_attribute('href')
urls.append(video_url)
video_title = video.find_element(by=By.XPATH, value='.//*[@id="video-title"]').text
titles.append(video_title)
video_date = video.find_element(by=By.XPATH, value='.//*[@id="metadata-line"]/span[2]').text
dates.append(video_date)
|
[
"If you don't have a YouTube Data API v3 developer key:\nThe following procedure requires you to have a Google account.\nGo to: https://console.cloud.google.com/projectcreate\nClick on the CREATE button.\nGo to: https://console.cloud.google.com/marketplace/product/google/youtube.googleapis.com\nClick on the ENABLE button.\nClick on the CREATE CREDENTIALS button.\nChoose the Public data option.\nClick on the NEXT button.\nNote the displayed API Key and continue reading.\nIf you have a YouTube Data API v3 developer key:\nTo get the videos of a given YouTube channel id (see this answer if you don't know how to get the channel id of a given YouTube channel), replace its second character (C) to U to obtain its uploads playlist id and provide it as a playlistId to YouTube Data API v3 PlaylistItems: list endpoint.\nThis is a Python sample code listing videos of a given channel id (don't forget to replace API_KEY with your YouTube Data API v3 developer key):\nimport requests, json\n\nCHANNEL_ID = 'UC07-dOwgza1IguKA86jqxNA'\nPLAYLIST_ID = 'UU' + CHANNEL_ID[2:]\nAPI_KEY = 'AIzaSy...'\n\nURL = f'https://www.googleapis.com/youtube/v3/playlistItems?part=snippet&playlistId={PLAYLIST_ID}&maxResults=50&key={API_KEY}'\n\npageToken = ''\n\nwhile True:\n pageUrl = URL\n if pageToken != '':\n pageUrl += f'&pageToken={pageToken}'\n response = json.loads(requests.get(pageUrl).text)\n print(response['items'])\n if 'nextPageToken' in response:\n pageToken = response['nextPageToken']\n else:\n break\n\nFor documentation concerning the pagination, see this webpage.\n",
"Implementation using Selenium:\nFirst of all, I wanna solve the problem meaning wanted to pull data with the help of YouTube API and I'm about to reach the goal but for some API's restrictions like API KEY's rquests restrict and some other complexities, I couldn't grab complete data that's why I go with super powerful Selenium engine as my last resort and it works like a charm.\nFull working code as an example:\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.common.by import By\nimport time\nimport pandas as pd\nfrom selenium.webdriver.support.wait import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = webdriver.ChromeOptions()\n#All are optional\noptions.add_experimental_option(\"detach\", True)\noptions.add_argument(\"--disable-extensions\")\noptions.add_argument(\"--disable-notifications\")\noptions.add_argument(\"--disable-Advertisement\")\noptions.add_argument(\"--disable-popup-blocking\")\noptions.add_argument(\"start-maximized\")\n\ns=Service('./chromedriver')\ndriver= webdriver.Chrome(service=s,options=options)\n\ndriver.get('https://www.youtube.com/wendoverproductions/videos')\ntime.sleep(3)\n\nitem = []\nSCROLL_PAUSE_TIME = 1\nlast_height = driver.execute_script(\"return document.documentElement.scrollHeight\")\n\nitem_count = 180\n\nwhile item_count > len(item):\n driver.execute_script(\"window.scrollTo(0,document.documentElement.scrollHeight);\")\n time.sleep(SCROLL_PAUSE_TIME)\n new_height = driver.execute_script(\"return document.documentElement.scrollHeight\")\n\n if new_height == last_height:\n break\n last_height = new_height\n \n\ndata = []\ntry:\n for e in WebDriverWait(driver, 20).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, 'div#details'))):\n title = e.find_element(By.CSS_SELECTOR,'a#video-title-link').get_attribute('title')\n vurl = e.find_element(By.CSS_SELECTOR,'a#video-title-link').get_attribute('href')\n views= e.find_element(By.XPATH,'.//*[@id=\"metadata\"]//span[@class=\"inline-metadata-item style-scope ytd-video-meta-block\"][1]').text\n date_time = e.find_element(By.XPATH,'.//*[@id=\"metadata\"]//span[@class=\"inline-metadata-item style-scope ytd-video-meta-block\"][2]').text\n data.append({\n 'video_url':vurl,\n 'title':title,\n 'date_time':date_time,\n 'views':views\n })\nexcept:\n pass\n \nitem = data\nprint(item)\nprint(len(item))\n# df = pd.DataFrame(item)\n# print(df)\n\nOUTPUT:\n[{'video_url': 'https://www.youtube.com/watch?v=oL0umpPPe-8', 'title': 'Samsung’s Dangerous Dominance over South Korea', 'date_time': '10 days ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=GBp_NgrrtPM', 'title': 'China’s Electricity Problem', 'date_time': '3 weeks ago', 'views': '1.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=YBNcYxHJPLE', 'title': 'How the World’s Wealthiest People Travel', 'date_time': '1 month ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=iIpPuJ_r8Xg', 'title': 'The US Military’s Massive Global Transportation System', 'date_time': '1 month ago', 'views': '1.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=MY8AB1wYOtg', 'title': 'The Absurd Logistics of Concert Tours', 'date_time': '2 months ago', 'views': '1.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=8xzINLykprA', 'title': 'Money’s Mostly Digital, So Why Is Moving It So Hard?', 'date_time': '2 months ago', 'views': '1.1M views'}, {'video_url': 
'https://www.youtube.com/watch?v=f66GfsKPTUg', 'title': 'How This Central African City Became the World’s Most Expensive', 'date_time': '3 months ago', 'views': '3.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=IDLkOWW0_xg', 'title': 'The Simple Genius of NYC’s Water Supply System', 'date_time': '3 months ago', 'views': '1.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=U9jirFqex6g', 'title': 'Europe’s Experiment: Treating Trains Like Planes', 'date_time': '3 months ago', 'views': '2M views'}, {'video_url': 'https://www.youtube.com/watch?v=eoWcQUjNM8o', 'title': 'How the YouTube Creator Economy Works', 'date_time': '4 months ago', 'views': '1M views'}, {'video_url': 'https://www.youtube.com/watch?v=V0Xx0E8cs7U', 'title': 'The Incredible Logistics Behind Weather Forecasting', 'date_time': '4 months ago', 'views': '1.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=v0aGGOK4kAM', 'title': 'Australia Had a Mass-Shooting Problem. Here’s How it Stopped', 'date_time': '5 months ago', 'views': '947K views'}, {'video_url': 'https://www.youtube.com/watch?v=AW3gaelBypY', 'title': 'The Carbon Offset Problem', 'date_time': '5 months ago', 'views': '1.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=xhYl7Jjefo8', 'title': 'Jet Lag: The Game - A New Channel by Wendover Productions', 'date_time': '6 months ago', 'views': '328K views'}, {'video_url': 'https://www.youtube.com/watch?v=oESoI6XxZTg', 'title': 'How to Design a Theme Park (To Take Tons of Your Money)', 'date_time': '6 months ago', 'views': '1.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=AQbmpecxS2w', 'title': 'Why Gas Got So Expensive (It’s Not the War)', 'date_time': '6 months ago', 'views': '3.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=U_7CGl6VWaQ', 'title': 'How Cyberwarfare Actually Works', 'date_time': '7 months ago', 'views': '1.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=R9pxFgJwxFE', 'title': 'The Incredible Logistics Behind Corn Farming', 'date_time': '7 months ago', 'views': '1.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=SrTrpwzVt4g', 'title': 'The Sanction-Fueled Destruction of \nthe Russian Aviation Industry', 'date_time': '8 months ago', 'views': '4.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=b1JlYZQG3lI', 'title': \"Why There are Now So Many Shortages (It's Not COVID)\", 'date_time': '1 year ago', 'views': '7.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=N4dOCfWlgBw', 'title': 'The Insane Logistics of Shutting Down the Cruise Industry', 'date_time': '1 year ago', 'views': '2M views'}, {'video_url': 'https://www.youtube.com/watch?v=3CuPqeIJr3U', 'title': \"China's Vaccine Diplomacy\", 'date_time': '1 year ago', 'views': '916K views'}, {'video_url': 'https://www.youtube.com/watch?v=DlTq8DbRs4k', 'title': \"The UK's Failed Experiment in Rail Privatization\", 'date_time': '1 year ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=VjiH3mpxyrQ', 'title': 'How to Start an Airline', 'date_time': '1 year ago', 'views': '1.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=pLcqJ2DclEg', 'title': 'The Electric Vehicle Charging Problem', 'date_time': '1 year ago', 'views': '4.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=3gdCH1XUIlE', 'title': \"How Air Ambulances (Don't) Work\", 'date_time': '1 year ago', 'views': '1M views'}, {'video_url': 'https://www.youtube.com/watch?v=2qanMpnYsjk', 'title': \"How Amazon's Super-Complex Shipping System \nWorks\", 'date_time': '1 year ago', 'views': '2.9M views'}, 
{'video_url': 'https://www.youtube.com/watch?v=7R7jNWHp0D0', 'title': 'The News You Missed in 2020, From Every Country in the World (Part 2)', 'date_time': '1 year ago', 'views': '1.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=GIFV_Z7Y9_w', 'title': 'The News You Missed in 2020, From Every Country in the World (Part 1)', 'date_time': '1 year ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=KXRtNwUju5g', 'title': \"How China Broke the World's Recycling\", 'date_time': '1 year ago', 'views': '3.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=fTyUE162lrw', 'title': \n'Why Long-Haul Low-Cost Airlines Always Go Bankrupt', 'date_time': '1 year ago', 'views': '1.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=ZAEydOjNWyQ', 'title': 'How Living at the \nSouth Pole Works', 'date_time': '2 years ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=_BCY0SPOFpE', 'title': \"Egypt's Dam Problem: The Geopolitics of the Nile\", 'date_time': '2 years ago', 'views': '1.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=v_rXhuaI0W8', 'title': 'The 8 Flights That Show How COVID-19 Reinvented Aviation', 'date_time': \n'2 years ago', 'views': '1.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=Ongqf93rAcM', 'title': \"How to Beat the Casino, and How They'll Stop You\", 'date_time': '2 years ago', 'views': '3.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=byW1GExQB84', 'title': 'Distributing the COVID Vaccine: The Greatest Logistics Challenge Ever', 'date_time': '2 years ago', 'views': '911K views'}, {'video_url': 'https://www.youtube.com/watch?v=DTIDCA7mjZs', 'title': 'How to Illegally Cross the Mexico-US Border', 'date_time': '2 years ago', 'views': '1.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=H_akzwzghWQ', 'title': 'How COVID-19 Broke the Airline \nPricing Model', 'date_time': '2 years ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=7C1fPocIFgU', 'title': 'The Broken Economics of Organ Transplants', 'date_time': '2 years ago', 'views': '600K views'}, {'video_url': 'https://www.youtube.com/watch?v=3J06af5xHD0', 'title': 'The Final Years of Majuro [Documentary]', 'date_time': '2 years ago', 'views': '2M views'}, {'video_url': 'https://www.youtube.com/watch?v=YgiMqePRp0Y', 'title': 'The Logistics of Covid-19 Testing', 'date_time': '2 years ago', 'views': '533K views'}, {'video_url': 'https://www.youtube.com/watch?v=Rtmhv5qEBg0', 'title': \"Airlines' Protocol for After a Plane Crash\", 'date_time': '2 years ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=6GMoUmvw8kU', 'title': 'Why Taiwan and China are Battling over Tiny Island Countries', 'date_time': '2 years ago', 'views': '855K views'}, {'video_url': 'https://www.youtube.com/watch?v=QlPrAKtegFQ', 'title': 'How Long-Haul Trucking Works', 'date_time': '2 years ago', 'views': '2.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=uAG4zCsiA_w', 'title': 'Why Helicopter Airlines Failed', 'date_time': '2 years ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=NtX-Ibi21tU', 'title': 'The Five Rules of Risk', 'date_time': '2 years ago', 'views': '1M \nviews'}, {'video_url': 'https://www.youtube.com/watch?v=r2oPk20OHBE', 'title': \"Air Cargo's Coronavirus Problem\", 'date_time': '2 years ago', 'views': '1.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=ABIkWS_YavM', 'title': 'How Offshore Oil Rigs Work', 'date_time': '2 years ago', 
'views': '2.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=-pNBAxx4IRo', 'title': \n\"How the US' Hospital Ships Work\", 'date_time': '2 years ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=VX2e2iEg_pM', 'title': 'COVID-19: How Aviation is Fighting for Survival', 'date_time': '2 years ago', 'views': '1.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=Ppjv0H-Yt5Q', 'title': 'The Logistics of the US Census', 'date_time': '2 years ago', 'views': '887K views'}, {'video_url': 'https://www.youtube.com/watch?v=QvUpSFGRqEo', 'title': 'How Boeing Will Get the 737 MAX Flying Again', 'date_time': '2 years ago', 'views': '1.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=3Sh7hghljuQ', 'title': 'How China Built a Hospital in 10 Days', 'date_time': '2 years ago', 'views': '2.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=vpcUVOjUrKk', 'title': 'The Business of Ski Resorts', 'date_time': '2 years ago', 'views': '2.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=FCEwPio2bkg', 'title': \"American Sports' Battle for China\", 'date_time': '2 years ago', 'views': '494K views'}, {'video_url': 'https://www.youtube.com/watch?v=5-QejUTDCWw', 'title': \"The World's Most Useful Airport \n[Documentary]\", 'date_time': '2 years ago', 'views': '9M views'}, {'video_url': 'https://www.youtube.com/watch?v=RyG7nzteG64', 'title': 'The Logistics of the US Election', 'date_time': '2 years \nago', 'views': '838K views'}, {'video_url': 'https://www.youtube.com/watch?v=dSw7fWCrDk0', 'title': 'Amtrak’s Grand Plan for Profitability', 'date_time': '2 years ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=5SDUm1bx7Zc', 'title': \"Australia's China Problem\", 'date_time': '3 years ago', 'views': '6.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=U1a73gdNs0M', 'title': 'The US Government Program That Pays For Your Flights', 'date_time': '3 years ago', 'views': '886K views'}, {'video_url': 'https://www.youtube.com/watch?v=tLS9IK693KI', 'title': 'The Logistics of Disaster Response', 'date_time': '3 years ago', 'views': '938K views'}, {'video_url': 'https://www.youtube.com/watch?v=cnfoTAxhpzQ', 'title': 'Why So Many Airlines are Going Bankrupt', 'date_time': '3 years ago', 'views': '2.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=erS2YMYcZO8', 'title': 'The Logistics of Filming Avengers', 'date_time': '3 \nyears ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=XjbYloKJX7c', \n'title': \"Boeing's China Problem\", 'date_time': '3 years ago', 'views': '3.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=A0qt0hdCQtg', 'title': \"The US' Overseas Military Base Strategy\", 'date_time': '3 years ago', 'views': '3.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=I9ttpHvK6yw', 'title': 'How to Stop an Epidemic', 'date_time': '3 years ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=pJ_LUFBSoqM', 'title': \"The NFL's Logistics Problem\", 'date_time': '3 years ago', 'views': '2.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=jYPrH4xANpU', 'title': 'The Economics of Private Jets', 'date_time': '3 years \nago', 'views': '4.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=KgsxapE27NU', 'title': \"The World's Shortcut: How the Panama Canal Works\", 'date_time': '3 years ago', 'views': '4.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=C1f2GwWLB3k', 'title': 'How Air Traffic \nControl Works', 'date_time': '3 
years ago', 'views': '4M views'}, {'video_url': 'https://www.youtube.com/watch?v=u2-ehDQM6TM', 'title': 'Extremities: A New Scripted Podcast from Wendover', 'date_time': '3 years ago', 'views': '193K views'}, {'video_url': 'https://www.youtube.com/watch?v=17oZPYcpPnQ', 'title': \"Iceland's Tourism Revolution\", 'date_time': '3 years ago', 'views': '1.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=SUsqnD9-42g', 'title': 'Mini Countries Abroad: How Embassies Work', 'date_time': '3 years ago', 'views': '5.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=BfNEOfEGe3I', 'title': 'The Economics That Made Boeing Build the 737 Max', 'date_time': '3 years ago', 'views': '2.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=EkRRo5DN9lI', 'title': 'The Logistics of the International Space Station', 'date_time': '3 years ago', 'views': '3.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=E3jfvncofiA', 'title': 'How Airlines Decide Where to Fly', 'date_time': '3 years ago', 'views': '2.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=xX0ozxrZlEQ', 'title': 'How Rwanda is Becoming the Singapore of Africa', 'date_time': '3 years ago', 'views': '6.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=9poImReDFeY', 'title': 'How Freight Trains Connect the World', 'date_time': '3 years ago', 'views': '2.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=69EVxLLhciQ', 'title': 'How Hong Kong Changed Countries', 'date_time': '3 years ago', 'views': '2.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=gdy0gBVWAzE', 'title': 'Living Underwater: \nHow Submarines Work', 'date_time': '3 years ago', 'views': '10M views'}, {'video_url': 'https://www.youtube.com/watch?v=bnoUBfLxZz0', 'title': 'The Super-Fast Logistics of Delivering Blood By Drone', 'date_time': '3 years ago', 'views': '1M views'}, {'video_url': 'https://www.youtube.com/watch?v=msjuRoZ0Vu8', 'title': 'The New Economy of the Warming Arctic', 'date_time': '3 years ago', 'views': '844K views'}, {'video_url': 'https://www.youtube.com/watch?v=c0pS3Zx7Fc8', 'title': 'Cities at Sea: How Aircraft Carriers Work', 'date_time': '3 years ago', 'views': '13M views'}, {'video_url': 'https://www.youtube.com/watch?v=TNUomfuWuA8', 'title': 'The Rise of 20-Hour Long Flights', 'date_time': '3 years ago', 'views': '3.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=0JDoll8OEFE', 'title': 'Why China Is so Good at Building Railways', 'date_time': '4 years ago', 'views': '8.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=7cjIWMUgPtY', 'title': 'The Magic Economics of Gambling', 'date_time': '4 years ago', 'views': '2.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=cognzTud3Wg', 'title': 'Why the World is Running Out of \nPilots', 'date_time': '4 years ago', 'views': '4.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=EodxubsO8EI', 'title': 'How Fighting Wildfires Works', 'date_time': '4 years ago', 'views': '1.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=30XpSozOZII', 'title': 'How to Build a $100 Million Satellite', 'date_time': '4 years ago', 'views': '1.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=zQV_DKQkT8o', 'title': \"How Africa is Becoming China's China\", 'date_time': '4 years ago', 'views': '8.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=wdU1WTBJMl0', 'title': 'How Airports Make Money', 'date_time': '4 years ago', 'views': '4.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=6OLVFa8YRfM', 'title': 'The 
Insane Logistics of Formula 1', 'date_time': '4 years ago', 'views': '6.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=jdNDYBt9e_U', 'title': 'The Most Valuable Airspace in the World', 'date_time': '4 years ago', 'views': '5.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=5r90DYjZ76g', 'title': \"Guam: Why America's Most Isolated Territory Exists\", 'date_time': '4 years ago', 'views': '6M views'}, {'video_url': 'https://www.youtube.com/watch?v=1Y1kJpHBn50', 'title': 'How to Design Impenetrable Airport Security', 'date_time': '4 years ago', 'views': '4.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=hiRBQxHrxNw', 'title': 'Space: The Next Trillion Dollar Industry', 'date_time': '4 years ago', 'views': '3.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=-s3j-ptJD10', 'title': 'The Logistics of Living in Antarctica', 'date_time': '4 \nyears ago', 'views': '5M views'}, {'video_url': 'https://www.youtube.com/watch?v=y3qfeoqErtY', 'title': 'How Overnight Shipping Works', 'date_time': '4 years ago', 'views': '7.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=IvAvHjYoLUU', 'title': 'Why Cities Exist', 'date_time': \n'4 years ago', 'views': '3.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=72hlr-E7KA0', 'title': 'How Airlines Price Flights', 'date_time': '4 years ago', 'views': '3.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=voozHXadYYE', 'title': 'The Gene Patent Question', 'date_time': '4 years ago', 'views': '587K views'}, {'video_url': 'https://www.youtube.com/watch?v=uU3kLBo_ruo', 'title': 'The Nuclear Waste Problem', 'date_time': '5 years ago', 'views': '3.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=V1YMPk3XhCc', 'title': 'The Little Plane War', 'date_time': '5 years ago', 'views': '4.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=h97fXhDN5qE', 'title': \"Elon Musk's Basic Economics\", 'date_time': '5 years ago', 'views': \n'5.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=GiBF6v5UAAE', 'title': \"China's Geography Problem\", 'date_time': '5 years ago', 'views': '12M views'}, {'video_url': 'https://www.youtube.com/watch?v=-cjfTG8DbwA', 'title': 'Why Public Transportation Sucks in the US', 'date_time': '5 years ago', 'views': '3.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=ql0Op1VcELw', 'title': \"What's Actually the Plane of the Future\", 'date_time': '5 years ago', 'views': '5M views'}, {'video_url': 'https://www.youtube.com/watch?v=RaGG50laHgI', 'title': 'TWL is back! 
(But not here...)', 'date_time': '5 years ago', 'views': '303K views'}, {'video_url': 'https://www.youtube.com/watch?v=yT9bit2-1pg', 'title': 'How to Stop a Riot', 'date_time': '5 years ago', 'views': '5.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=e-WO-c9xHms', 'title': 'How Geography Gave the US Power', 'date_time': '5 years ago', 'views': '3.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=E7Jfrzkmzyc', 'title': 'Why Chinese Manufacturing Wins', 'date_time': '5 years ago', 'views': '6.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=dGXahSnA_oA', 'title': 'How Airlines Schedule Flights', 'date_time': '5 years ago', 'views': '4M views'}, \n{'video_url': 'https://www.youtube.com/watch?v=N4PW66_g6XA', 'title': 'How to Fix Traffic Forever', 'date_time': '5 years ago', 'views': '3.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=j48Z3W35FI0', 'title': 'How the US Government Will Survive Doomsday', 'date_time': '5 years \nago', 'views': '4.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=MP1OAm7Pzps', 'title': 'Why the Northernmost Town in America Exists', 'date_time': '5 years ago', 'views': '8.9M views'}, {'video_url': 'https://www.youtube.com/watch?v=lkCeKc1GTMs', 'title': 'Which Country Are International Airports In?', 'date_time': '5 years ago', 'views': '3.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=Nu2WOxXxsHw', 'title': 'How the Post Office Made America', 'date_time': '5 years ago', 'views': '1.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=HSxSgbNQi-g', 'title': 'Small Planes Over Big Oceans (ETOPS Explained)', 'date_time': '5 years ago', 'views': '3.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=ZcDwtO4RWmo', 'title': \"Canada's New Shipping Shortcut\", 'date_time': '5 years ago', 'views': '3.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=EqWksuyry5w', 'title': 'Why Airlines Sell More Seats Than They Have', 'date_time': '5 years ago', 'views': '2.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=fwjwePe-HmA', 'title': 'Why Trains are so Expensive', 'date_time': '5 years ago', 'views': '3.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=v3C_5bsdQWg', 'title': \"Russia's Geography Problem\", 'date_time': '5 years ago', 'views': '8.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=BzB5xtGGsTc', 'title': 'The Economics of Airline Class', 'date_time': '5 years \nago', 'views': '10M views'}, {'video_url': 'https://www.youtube.com/watch?v=n1QEj09Pe6k', 'title': \"Why Planes Don't Fly Faster\", 'date_time': '5 years ago', 'views': '7.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=YJRqB1xtIxg', 'title': \"The US President's $2,614 Per Minute Transport System\", 'date_time': '5 years ago', 'views': '11M views'}, {'video_url': 'https://www.youtube.com/watch?v=ancuYECRGN8', 'title': 'Every State in the US', 'date_time': '5 years ago', 'views': '6.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=bL2WPDtLYNU', 'title': 'How \nto Make First Contact', 'date_time': '5 years ago', 'views': '3.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=3PWWtqfwacQ', 'title': 'Why Cities Are Where They Are', 'date_time': '5 years ago', 'views': '5.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=_lj127TKu4Q', 'title': 'Is the European Union a Country?', 'date_time': '5 years ago', 'views': '2M views'}, {'video_url': 'https://www.youtube.com/watch?v=d1CVVoAihBc', 'title': 'Every Country in the World \n(Part 2)', 'date_time': '5 years ago', 
'views': '3.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=P-b4SUOfn_4', 'title': 'Every Country in the World (Part 1)', 'date_time': '5 years \nago', 'views': '8.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=thqbjA2DC-E', 'title': 'The Five Freedoms of Aviation', 'date_time': '6 years ago', 'views': '3.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=_rk2hPrEnk8', 'title': \"Why Chicken Sandwiches Don't Cost $1500\", 'date_time': '6 years ago', 'views': '3M views'}, {'video_url': 'https://www.youtube.com/watch?v=aQSxPzafO_k', 'title': 'Urban Geography: Why We Live Where We Do', 'date_time': '6 years ago', 'views': '3.3M views'}, {'video_url': 'https://www.youtube.com/watch?v=NlIdzF1_b5M', 'title': 'Big Plane vs Little Plane (The Economics of Long-Haul Flights)', 'date_time': '6 years ago', 'views': '4.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=HpfvOc8HJdg', 'title': 'TWL #10.5: Argleton (+ Patreon)', 'date_time': '6 years ago', 'views': '202K views'}, {'video_url': \n'https://www.youtube.com/watch?v=7ouiTMXuDAQ', 'title': 'How Much Would it Cost to Live on the Moon?', 'date_time': '6 years ago', 'views': '1M views'}, {'video_url': 'https://www.youtube.com/watch?v=-aQ2E0mlRQI', 'title': 'The Plane Highway in the Sky', 'date_time': '6 years ago', 'views': '4.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=mbEfzuCLoAQ', 'title': 'Why Trains Suck in America', 'date_time': '6 years ago', 'views': '5.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=N7CvEt51fz4', 'title': 'How Maritime Law Works', 'date_time': '6 years ago', 'views': '3.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=7PsmkAxVHdM', 'title': 'Why College is so Expensive', 'date_time': '6 years ago', 'views': '1.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=JoYNhX15w4k', 'title': 'TWL #10: The Day Sweden Switched Driving Directions', 'date_time': '6 years ago', 'views': '1.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=1c-jBfZPVv4', 'title': 'TWL #9: The Secret Anti-Counterfeit Symbol', 'date_time': '6 years ago', 'views': '1.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=069y1MpOkQY', 'title': 'How Budget Airlines Work', 'date_time': '6 years ago', 'views': '9.2M views'}, {'video_url': 'https://www.youtube.com/watch?v=n7RHv_MIIT0', 'title': 'TWL #8: Immortality Through Quantum Suicide', 'date_time': '6 years ago', 'views': '1.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=6Oe8T3AvydU', 'title': 'Why Flying is So Expensive', 'date_time': '6 years ago', \n'views': '4.8M views'}, {'video_url': 'https://www.youtube.com/watch?v=LnEyjwdoj7g', 'title': 'TWL #7: This Number is Illegal', 'date_time': '6 years ago', 'views': '3.7M views'}, {'video_url': \n'https://www.youtube.com/watch?v=5XdYbmova_s', 'title': 'TWL #6: Big Mac Economics', 'date_time': '6 years ago', 'views': '1.6M views'}, {'video_url': 'https://www.youtube.com/watch?v=F53TA37Mqck', 'title': 'TWL #5: Timekeeping on Mars', 'date_time': '6 years ago', 'views': '810K views'}, {'video_url': 'https://www.youtube.com/watch?v=O6o5C-i02c8', 'title': 'TWL #4: Which Way Should Toilet Paper Face?', 'date_time': '6 years ago', 'views': '739K views'}, {'video_url': 'https://www.youtube.com/watch?v=JllpzZgAAl8', 'title': 'TWL #3: Paper Towns- Fake Places Made to Catch Copyright Thieves', 'date_time': '6 years ago', 'views': '1.1M views'}, {'video_url': 'https://www.youtube.com/watch?v=v_iaurPRhCs', 'title': 'TWL #2: Bir Tawil- The Land Without a Country', 
'date_time': '6 years ago', 'views': '852K views'}, {'video_url': 'https://www.youtube.com/watch?v=_ugvJi2pIck', 'title': 'TWL #1: Presidents Are 4x More Likely to be Lefties', 'date_time': '6 years ago', 'views': '546K views'}, {'video_url': 'https://www.youtube.com/watch?v=RRWggYusSds', 'title': \n\"Why Time is One of Humanity's Greatest Inventions\", 'date_time': '6 years ago', 'views': '1.4M views'}, {'video_url': 'https://www.youtube.com/watch?v=h6AWAoc_Lr0', 'title': 'Space Law-What Laws are There in Space?', 'date_time': '6 years ago', 'views': '1.5M views'}, {'video_url': 'https://www.youtube.com/watch?v=2H9KEcb74aA', 'title': \"A Map Geek's Tour of the World #2\", 'date_time': '6 years ago', 'views': '248K views'}, {'video_url': 'https://www.youtube.com/watch?v=ThNeIT7aceI', 'title': 'How the Layouts of Grocery Stores are Secretly Designed to Make You Spend More Money', 'date_time': '6 years ago', 'views': '953K views'}, {'video_url': 'https://www.youtube.com/watch?v=bg2VZIPfX0U', 'title': 'How to Create a Country', 'date_time': '6 years ago', 'views': '3.7M views'}, {'video_url': 'https://www.youtube.com/watch?v=pVB4TEeMcgA', 'title': \"A Map Geek's Tour of the World\", 'date_time': '6 years ago', 'views': '405K views'}, {'video_url': 'https://www.youtube.com/watch?v=avh7ez858xM', 'title': 'The Messy Ethics of Self Driving Cars', 'date_time': '6 years ago', 'views': '461K views'}, {'video_url': 'https://www.youtube.com/watch?v=Y3jlFtBg0Y8', 'title': 'The Surprising Easternmost Point in the US', 'date_time': '6 years ago', 'views': '213K views'}, {'video_url': 'https://www.youtube.com/watch?v=F-ZskaqBshs', 'title': \"Containerization: The Most Influential Invention That You've Never Heard Of\", 'date_time': '6 years ago', 'views': '684K views'}, {'video_url': 'https://www.youtube.com/watch?v=Nn-ym8y1_kw', 'title': 'The World is Shrinking', 'date_time': '7 years ago', 'views': '183K views'}, {'video_url': 'https://www.youtube.com/watch?v=8LqqVfPduTs', 'title': 'How Marketers Manipulate Us: Psychological Manipulation in Advertising', 'date_time': '7 years ago', 'views': '458K views'}]\n\nTotal: 189\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"python",
"selenium",
"selenium_webdriver",
"web_scraping",
"youtube"
] |
stackoverflow_0074578175_python_selenium_selenium_webdriver_web_scraping_youtube.txt
|
Q:
While importing the Tensorflow. Failed to load the native TensorFlow runtime
I've got an error message when I am trying to import tensorflow in my project.
It's a little bit of a long traceback, but I'm sure there are some heroes out there to help me tackle it.
import tkinter as tk
from PIL import ImageTk, Image
from tkinter import filedialog
import numpy as np
import tensorflow
from tensorflow.keras.applications import vgg16
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.applications.vgg16 import decode_predictions
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so, 0x0006): tried: '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so' (mach-o file, but is an incompatible architecture (have (x86_64), need (arm64e)))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/volfcan/Documents/Coding/Python/Python/Tkinter/Image Classifier w: Tkinter/image_classifier_tkinter.py", line 5, in <module>
import tensorflow
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/__init__.py", line 24, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so, 0x0006): tried: '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so' (mach-o file, but is an incompatible architecture (have (x86_64), need (arm64e)))
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
Thank you for your interest in moving the world forward.
I am trying to deploy a machine learning image classifier project with the tkinter module in Python.
A:
As answered in: Stackoverflow answer
You can try uninstalling numpy package:
pip3 uninstall numpy
and reinstalling it:
pip3 install numpy
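Note that the key line in the traceback is 'incompatible architecture (have (x86_64), need (arm64e))', which suggests an x86_64 TensorFlow wheel installed under an arm64 (Apple Silicon) Python rather than a numpy problem. A minimal check, assuming macOS:
import platform

print(platform.machine())  # 'arm64' on Apple Silicon, 'x86_64' on Intel Macs

If this prints 'arm64', installing an arm64 build of TensorFlow (for example pip3 install tensorflow-macos) is the usual fix.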
|
While importing the Tensorflow. Failed to load the native TensorFlow runtime
|
I've got an error message when I am trying to import tensorflow in my project.
It's a little bit of a long traceback, but I'm sure there are some heroes out there to help me tackle it.
import tkinter as tk
from PIL import ImageTk, Image
from tkinter import filedialog
import numpy as np
import tensorflow
from tensorflow.keras.applications import vgg16
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.applications.vgg16 import decode_predictions
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so, 0x0006): tried: '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so' (mach-o file, but is an incompatible architecture (have (x86_64), need (arm64e)))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/volfcan/Documents/Coding/Python/Python/Tkinter/Image Classifier w: Tkinter/image_classifier_tkinter.py", line 5, in <module>
import tensorflow
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/__init__.py", line 24, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so, 0x0006): tried: '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so' (mach-o file, but is an incompatible architecture (have (x86_64), need (arm64e)))
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
Thank you for your interest in moving the world forward.
I am trying to deploy a machine learning image classifier project with the tkinter module in Python.
|
[
"As answered in: Stackoverflow answer\nYou can try uninstalling numpy package:\npip3 uninstall numpy\n\nand reinstalling it:\npip3 install numpy\n\n"
] |
[
0
] |
[] |
[] |
[
"import",
"machine_learning",
"python",
"tensorflow",
"tkinter"
] |
stackoverflow_0074585808_import_machine_learning_python_tensorflow_tkinter.txt
|
Q:
Transferring the data from a file to a pandas dataframe, when the file has no file extension
I would like to use the SMS Spam Collection Data Set, which can be found on the UCI Machine Learning Repository, to build a classification model. The data file that is shared on the repository has no file extension. The data looks like the following:
ham Go until jurong point, crazy.. Available only in bugis n great world la e buffet... Cine there got amore wat...
ham Ok lar... Joking wif u oni...
spam Free entry in 2 a wkly comp to win FA Cup final tkts 21st May 2005. Text FA to 87121 to receive entry question(std txt rate)T&C's apply 08452810075over18's
ham U dun say so early hor... U c already then say...
ham Nah I don't think he goes to usf, he lives around here though
spam FreeMsg Hey there darling it's been 3 week's now and no word back! I'd like some fun you up for it still? Tb ok! XxX std chgs to send, £1.50 to rcv
Here ham or spam should be the class attribute and the rest of the line is the message. How could I transfer the dataset into a Pandas dataframe? The dataframe should look like the following:
Message Class Messages
ham Go until jurong point, crazy.. Available only in bugis n great world la e buffet... Cine there got amore wat...
ham Ok lar... Joking wif u oni...
spam Free entry in 2 a wkly comp to win FA Cup final tkts 21st May 2005. Text FA to 87121 to receive entry question(std txt rate)T&C's apply 08452810075over18's
ham U dun say so early hor... U c already then say...
ham Nah I don't think he goes to usf, he lives around here though
spam FreeMsg Hey there darling it's been 3 week's now and no word back! I'd like some fun you up for it still? Tb ok! XxX std chgs to send, £1.50 to rcv
A:
The file seems to be a tab-separated .txt, so you can use pandas.read_csv:
import pandas as pd
df = pd.read_csv(filepath_or_buffer= "SMSSpamCollection",
header=None, sep="\t", names=["Message Class", "Messages"])
# Output :
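As a rough illustration, using the first rows shown in the question, df.head(2) would then print something like this (pandas truncates long strings in the display):
print(df.head(2))
#   Message Class                                           Messages
# 0           ham  Go until jurong point, crazy.. Available only...
# 1           ham                      Ok lar... Joking wif u oni...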
A:
This should work
df= pd.read_csv("your_file.csv", sep="\t")
df.dropna(how="any", inplace=True, axis=1)
df.columns = ['label', 'message']
df.head()
|
Transferring the data from a file to a pandas dataframe, when the file has no file extension
|
I would like to use the SMS Spam Collection Data Set, which can be found on the UCI Machine Learning Repository, to build a classification model. The data file that is shared on the repository has no file extension. The data looks like the following:
ham Go until jurong point, crazy.. Available only in bugis n great world la e buffet... Cine there got amore wat...
ham Ok lar... Joking wif u oni...
spam Free entry in 2 a wkly comp to win FA Cup final tkts 21st May 2005. Text FA to 87121 to receive entry question(std txt rate)T&C's apply 08452810075over18's
ham U dun say so early hor... U c already then say...
ham Nah I don't think he goes to usf, he lives around here though
spam FreeMsg Hey there darling it's been 3 week's now and no word back! I'd like some fun you up for it still? Tb ok! XxX std chgs to send, £1.50 to rcv
Here ham or spam should be the class attribute and the rest of the line is the message. How could I transfer the dataset into a Pandas dataframe? The dataframe should look like the following:
Message Class Messages
ham Go until jurong point, crazy.. Available only in bugis n great world la e buffet... Cine there got amore wat...
ham Ok lar... Joking wif u oni...
spam Free entry in 2 a wkly comp to win FA Cup final tkts 21st May 2005. Text FA to 87121 to receive entry question(std txt rate)T&C's apply 08452810075over18's
ham U dun say so early hor... U c already then say...
ham Nah I don't think he goes to usf, he lives around here though
spam FreeMsg Hey there darling it's been 3 week's now and no word back! I'd like some fun you up for it still? Tb ok! XxX std chgs to send, £1.50 to rcv
|
[
"The file seems like a .txt tab separated, so you can use pandas.read_csv :\nimport pandas as pd\n\ndf = pd.read_csv(filepath_or_buffer= \"SMSSpamCollection\",\n header=None, sep=\"\\t\", names=[\"Message Class\", \"Messages\"])\n\n# Output :\n\n",
"This should work\ndf= pd.read_csv(\"your_file.csv\", sep=\"\\t\")\ndf.dropna(how=\"any\", inplace=True, axis=1)\ndf.columns = ['label', 'message']\ndf.head()\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074585893_dataframe_pandas_python.txt
|
Q:
Why is QSystemTrayIcon.isSystemTrayAvailable() False running as root but True as user?
On Ubuntu 20.04 with Gnome3 and X11 and Qt5.12.8 (PyQt 5.14.1)
from PyQt5.QtWidgets import QSystemTrayIcon, QApplication
qapp = QApplication([''])
print(str(QSystemTrayIcon.isSystemTrayAvailable()))
shows True as user and False as root (via sudo as well as via pkexec).
How can I find out the reason for "False as root" (is there a way to enable logging for this) and
how can I enable system tray icons also for code running as root?
I have also tried to inject the user's X11 env vars but this does not help:
pkexec env DISPLAY=$DISPLAY XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR XAUTHORITY=$XAUTHORITY python3 -c "from PyQt5.QtWidgets import QSystemTrayIcon, QApplication; qapp = QApplication(['']); print(str(QSystemTrayIcon.isSystemTrayAvailable()))"
Edit: I think the relevant Qt5 source code is here:
QSystemTrayIcon::isSystemTrayAvailable()
QSystemTrayIconPrivate::isSystemTrayAvailable_sys()
A:
Regarding question 1: I have found the correct logging settings and reason
How can I find out the reason for "False as root" (is there a way to enable logging for this)?
I have to enable the logging for the logging category qt.qpa.*
pkexec env "QT_LOGGING_RULES=qt.qpa.*=true" DISPLAY=$DISPLAY XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR XAUTHORITY=$XAUTHORITY python3 -c "from PyQt5.QtWidgets import QSystemTrayIcon, QApplication; qapp = QApplication(['']); print(str(QSystemTrayIcon.isSystemTrayAvailable()))"
to get a lot of diagnostic output with this important line:
qt.qpa.menu: StatusNotifierHost is not registered
Looking into the source code of Qt5 I found out that isSystemTrayAvailable() is trying to query a user session D-Bus property at
org.kde.StatusNotifierWatcher/StatusNotifierWatcher/IsStatusNotifierHostRegistered
which is only available when running as user but not when running as root.
This can be verified by running qdbusviewer as user
vs. as root (via the same pkexec call as in the question):
Summary: As root, the user's D-Bus session cannot be called (without further tweaks)
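For reference, here is a minimal sketch of querying that property directly from Python, assuming PyQt5 was built with the QtDBus module (as it usually is on Linux):
from PyQt5.QtCore import QCoreApplication
from PyQt5.QtDBus import QDBusConnection, QDBusInterface

app = QCoreApplication([])  # D-Bus access wants an application object
bus = QDBusConnection.sessionBus()
watcher = QDBusInterface('org.kde.StatusNotifierWatcher',
                         '/StatusNotifierWatcher',
                         'org.kde.StatusNotifierWatcher', bus)
# Prints True as user; as root the session bus is unreachable and the
# property read returns nothing.
print(watcher.property('IsStatusNotifierHostRegistered'))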
|
Why is QSystemTrayIcon.isSystemTrayAvailable() False running as root but True as user?
|
On Ubuntu 20.04 with Gnome3 and X11 and Qt5.12.8 (PyQt 5.14.1)
from PyQt5.QtWidgets import QSystemTrayIcon, QApplication
qapp = QApplication([''])
print(str(QSystemTrayIcon.isSystemTrayAvailable()))
shows True as user and False as root (via sudo as well as via pkexec).
How can I find out the reason for "False as root" (is there a way to enable logging for this) and
how can I enable system tray icons also for code running as root?
I have also tried to inject the user's X11 env vars but this does not help:
pkexec env DISPLAY=$DISPLAY XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR XAUTHORITY=$XAUTHORITY python3 -c "from PyQt5.QtWidgets import QSystemTrayIcon, QApplication; qapp = QApplication(['']); print(str(QSystemTrayIcon.isSystemTrayAvailable()))"
Edit: I think the relevant Qt5 source code is here:
QSystemTrayIcon::isSystemTrayAvailable()
QSystemTrayIconPrivate::isSystemTrayAvailable_sys()
|
[
"Regarding question 1: I have found the correct logging settings and reason\n\n\nHow can I find out the reason for \"False as root\" (is there a way to enable logging for this)?\n\n\nI have to enable the logging for the logging category qt.qpa.*\npkexec env \"QT_LOGGING_RULES=qt.qpa.*=true\" DISPLAY=$DISPLAY XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR XAUTHORITY=$XAUTHORITY python3 -c \"from PyQt5.QtWidgets import QSystemTrayIcon, QApplication; qapp = QApplication(['']); print(str(QSystemTrayIcon.isSystemTrayAvailable()))\"\n\nto get a lot of diagnostic output with this important line:\n\nqt.qpa.menu: StatusNotifierHost is not registered\n\nLooking into the source code of Qt5 I found out that isSystemTrayAvailable() is trying to query a user session D-Bus property at\norg.kde.StatusNotifierWatcher/StatusNotifierWatcher/IsStatusNotifierHostRegistered\nwhich is only available when running as user but not when running as root.\nThis can be verified by running qdbusviewer as user\n\nvs. as root (via the same pkexec call like in the question):\n\nSummary: As root the user' D-Bus session cannot be called (without further tweaks)\n"
] |
[
0
] |
[] |
[] |
[
"pyqt5",
"python",
"qt",
"qt5"
] |
stackoverflow_0074573870_pyqt5_python_qt_qt5.txt
|
Q:
'>' not supported between instances of 'type' and 'datetime.date'
I'm creating a CRUD application that displays activities available on or after today. I'm working through the filtering mechanism for displaying these activities; however, I'm having a nightmare trying to show only the activities that are on or after today.
I'm getting the following error when I try to use the '>=' operator:
'>' not supported between instances of 'type' and 'datetime.date'
Below is my views.py for the comparison:
today= date.today()
available_activities = Activity.objects.filter(available = True).values()
activities = available_activities.filter(date > today).values()
activities= available_activities.order_by('date','start_time')
Below is a screenshot of the error traceback as well, showing the format of the data in the DB.
A:
You can filter with the __gt lookup [Django-doc] so:
today = date.today()
available_activities = Activity.objects.filter(
available=True, date__gt=today
).order_by('date', 'start_time')
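Since the question asks for activities on or after today, the __gte lookup is likely the better fit for the boundary case; a minimal variant of the same query:
available_activities = Activity.objects.filter(
    available=True, date__gte=today
).order_by('date', 'start_time')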
|
'>' not supported between instances of 'type' and 'datetime.date'
|
I'm creating a CRUD application that displays activities available on or after today. I'm working through the filtering mechanism for displaying these activities; however, I'm having a nightmare trying to show only the activities that are on or after today.
I'm getting the following error when I try to use the '>=' operator:
'>' not supported between instances of 'type' and 'datetime.date'
Below is my views.py for the comparison:
today= date.today()
available_activities = Activity.objects.filter(available = True).values()
activities = available_activities.filter(date > today).values()
activities= available_activities.order_by('date','start_time')
Below is a screenshot of the error traceback as well, showing the format of the data in the DB.
|
[
"You can filter with the __gt lookup [Django-doc] so:\ntoday = date.today()\navailable_activities = Activity.objects.filter(\n available=True, date__gt=today\n).order_by('date', 'start_time')\n"
] |
[
2
] |
[] |
[] |
[
"date",
"datetime",
"django",
"python"
] |
stackoverflow_0074585991_date_datetime_django_python.txt
|
Q:
Scala code returns false for 1012 > 977 and a few other values
I have scala code and python code that are attempting the same task (2021 advent of code day 1 https://adventofcode.com/2021/day/1).
The Python returns the correct solution, the Scala does not. I ran diff on both of the outputs and have determined that my Scala code is incorrectly evaluating the following pairs:
1001 > 992 -> false
996 > 1007 -> true
1012 > 977 -> false
the following is my Python code:
import pandas as pd
data = pd.read_csv("01_input.csv", header=None)
incr = 0
prevval = 99999
for index, row in data.iterrows():
    if index != 0:
        if row[0] > prevval:
            print(f"{index}-{row[0]}-{prevval}")
            incr += 1
            prevval = row[0]
    prevval = row[0]
print(incr)
and here is my Scala code:
import scala.io.Source; // library to read input file in
object advent_of_code_2021_01 {
  def main(args: Array[String]): Unit = {
    val lines = Source.fromFile("01_input.csv").getLines().toList; // file as immutable list
    var increases = 0;
    for (i <- 1 until lines.length) { // iterate over list by index
      if (lines(i) > lines(i-1)) {
        increases += 1;
        println(s"$i-${lines(i)}-${lines(i-1)}")
      }
    }
    println(increases);
  }
}
I do not understand what is causing this issue on these particular values. In the shell, Scala evaluates them correctly, but I do not know where to even begin with this. Is there some behavior I need to know about that I'm not accounting for? Am I just doing something stupid? Any help is appreciated, thank you.
A:
As @Edward Peters https://stackoverflow.com/users/6016064/edward-peters correctly identified, my problem was that I was doing string comparisons, and not numerical comparisons, so I needed to convert my values to Int and not String. I did this with the very simple .toInt and it fixed all my issues.
fixed scala code:
import scala.io.Source; // library to read input file in
object advent_of_code_2021_01 {
  def main(args: Array[String]): Unit = {
    val lines = Source.fromFile("01_input.csv").getLines().toList; // file as immutable list
    var increases = 0;
    for (i <- 1 until lines.length) { // iterate over list by index
      if (lines(i).toInt > lines(i-1).toInt) { // evaluate
        increases += 1; // increment when true
        println(s"$i-${lines(i)}-${lines(i-1)}") // debug line
      }
    }
    println(increases); // result
  }
}
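The underlying behavior is easy to reproduce in any language with lexicographic string ordering; the first differing character decides the comparison. For illustration, the same thing in Python:
print("1012" > "977")   # False: '1' < '9', so "1012" sorts before "977"
print(1012 > 977)       # True: numeric comparison compares magnitudes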
|
Scala code returns false for 1012 > 977 and a few other values
|
I have scala code and python code that are attempting the same task (2021 advent of code day 1 https://adventofcode.com/2021/day/1).
The Python returns the correct solution, the Scala does not. I ran diff on both of the outputs and have determined that my Scala code is incorrectly evaluating the following pairs:
1001 > 992 -> false
996 > 1007 -> true
1012 > 977 -> false
the following is my Python code:
import pandas as pd
data = pd.read_csv("01_input.csv", header=None)
incr = 0
prevval = 99999
for index, row in data.iterrows():
    if index != 0:
        if row[0] > prevval:
            print(f"{index}-{row[0]}-{prevval}")
            incr += 1
            prevval = row[0]
    prevval = row[0]
print(incr)
and here is my Scala code:
import scala.io.Source; // library to read input file in
object advent_of_code_2021_01 {
  def main(args: Array[String]): Unit = {
    val lines = Source.fromFile("01_input.csv").getLines().toList; // file as immutable list
    var increases = 0;
    for (i <- 1 until lines.length) { // iterate over list by index
      if (lines(i) > lines(i-1)) {
        increases += 1;
        println(s"$i-${lines(i)}-${lines(i-1)}")
      }
    }
    println(increases);
  }
}
I do not understand what is causing this issue on these particular values. In the shell, Scala evaluates them correctly, but I do not know where to even begin with this. Is there some behavior I need to know about that I'm not accounting for? Am I just doing something stupid? Any help is appreciated, thank you.
|
[
"As @Edward Peters https://stackoverflow.com/users/6016064/edward-peters correctly identified, my problem was that I was doing string comparisons, and not numerical comparisons, so I needed to convert my values to Int and not String. I did this with the very simple .toInt and it fixed all my issues.\nfixed scala code:\nimport scala.io.Source; // library to read input file in\n\nobject advent_of_code_2021_01 {\n def main(args: Array[String]): Unit = {\n\n val lines = Source.fromFile(\"01_input.csv\").getLines().toList; // file as immutable list\n var increases = 0;\n\n for (i <- 1 until lines.length) { // iterate over list by index\n if (lines(i).toInt > lines(i-1).toInt) { // evaluate\n increases += 1; // increment when true\n println(s\"$i-${lines(i)}-${lines(i-1)}\") // debug line\n }\n }\n\n println(increases); // result\n }\n}\n\n"
] |
[
1
] |
[] |
[] |
[
"python",
"scala"
] |
stackoverflow_0074585988_python_scala.txt
|
Q:
Why can't my beam DoFn see my global imports?
I have a beam pipeline that uses a custom DoFn and references imports (like time) inside of its body.
Full code is here, the idea is below.
import time
class MyView(beam.DoFn):
@beam.DoFn.yields_elements
def process_batch(self, batch: List[Dict[str, Any]]) -> Iterator[Tuple[str, MyType]]:
start_time = time.perf_counter() # fails
# rest of code
I have a strange issue where my pipeline will fail if I run it from Github CI, but not if I run it directly on my machine. The most recent failure was
NameError: name 'time' is not defined
It just fails at whichever is the first import that it hits in the DoFn. I can move the imports into the DoFn body but I shouldn't need to do that, especially since it works when I run it locally. I'm running it locally and in CI with the [same command](https://github.com/whylabs/dataflow-templates/blob/so-question-imports/Makefile#L31-L49) as well, so something about the runtime environment is causing the issue. That pipeline already has pipeline_options.view_as(SetupOptions).save_main_session = True also, which I thought was supposed to address this problem by pickling the entire main.
A:
The root cause was a Python version mismatch. I'm using 3.8 for this project and, despite specifying 3.8 in CI using [abatilo/actions-poetry](https://github.com/abatilo/actions-poetry), I was getting 3.9. I assume the issue there was that I had the poetry step before the setup-python step, but whenever I put it after the setup-python step I got an error.
I switched to [Gr1N/setup-poetry](https://github.com/Gr1N/setup-poetry) instead and everything worked as expected. The Python version mismatch probably just led to some strange pickling issues.
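One cheap guard against this class of problem is to assert the expected interpreter version at pipeline startup; a minimal sketch, assuming the project targets 3.8 as stated above:
import sys

# Fail fast if CI resolved a different interpreter than the one the
# pipeline was developed and pickled against.
assert sys.version_info[:2] == (3, 8), \
    f"expected Python 3.8, got {sys.version_info.major}.{sys.version_info.minor}"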
|
Why can't my beam DoFn see my global imports?
|
I have a beam pipeline that uses a custom DoFn and references imports (like time) inside of its body.
Full code is here, the idea is below.
import time
class MyView(beam.DoFn):
@beam.DoFn.yields_elements
def process_batch(self, batch: List[Dict[str, Any]]) -> Iterator[Tuple[str, MyType]]:
start_time = time.perf_counter() # fails
# rest of code
I have a strange issue where my pipeline will fail if I run it from Github CI, but not if I run it directly on my machine. The most recent failure was
NameError: name 'time' is not defined
It just fails at whichever is the first import that it hits in the DoFn. I can move the imports into the DoFn body but I shouldn't need to do that, especially since it works when I run it locally. I'm running it locally and in CI with the [same command](https://github.com/whylabs/dataflow-templates/blob/so-question-imports/Makefile#L31-L49) as well, so something about the runtime environment is causing the issue. That pipeline already has pipeline_options.view_as(SetupOptions).save_main_session = True also, which I thought was supposed to address this problem by pickling the entire main.
|
[
"The root cause was a python version mismatch. I'm using 3.8 for this project and, despite specifying 3.8 in CI using (abatilo/actions-poetry)[https://github.com/abatilo/actions-poetry], I was getting 3.9. I assume the issue there was that I had the poetry step before the setup-python step, but whenever I put it after the setup-python step I got an error.\nI switched to (Gr1N/setup-poetry)[https://github.com/Gr1N/setup-poetry] instead and everything worked as expected. The python version mismatch probably just lead to some strange pickling issues.\n"
] |
[
1
] |
[] |
[] |
[
"apache_beam",
"google_cloud_dataflow",
"python"
] |
stackoverflow_0074586062_apache_beam_google_cloud_dataflow_python.txt
|
Q:
Matplotlib: AttributeError: 'PolarAxesSubplot' object has no attribute 'polar'
I have to plot one polar and one scatter. Here is the code:
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection='polar')
iterator = lidar.iter_scans()
line = ax.scatter([0, 0], [0, 0], s=5, color="xkcd:salmon")
ax.set_rmax(DMAX)
ax.grid(True)
data = []
def environment():
for i, scan in enumerate(lidar.iter_scans(5000)):
if i <= 100:
if i > 2:
data.extend(scan,)
else:
break
lidar.stop()
lidar.get_info()
lidar.get_info()
lidar.get_health()
env_variable= np.array([(np.radians(meas[1]), meas[2])
for meas in data])
theta= [eachValue[0] for eachValue in env_variable]
r= [eachValue[1] for eachValue in env_variable]
ax.polar(theta, r)
This last line (ax.polar(theta, r)) creates an error. How can I plot this polar plot in the same graph, or what is the best way to plot two graphs in the same place?
A:
I think it depends on the Python version.
Using Python 3.9 works just fine.
I did try Python 3.6 and I got the error, but initializing in this way worked for me:
fig, ax1 = plt.subplots(figsize=(6, 6), subplot_kw=dict(polar=True))
without using the call:
ax1.polar(angles, values)
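For reference, a minimal sketch with made-up data: a polar-projection Axes has no .polar() method, but ordinary plot()/scatter() calls on it already draw in polar coordinates, so both plots can share the same Axes:
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 100)  # angles in radians
r = 1 + np.sin(3 * theta)               # arbitrary demo radii

fig, ax = plt.subplots(subplot_kw=dict(polar=True))
ax.plot(theta, r)                       # line plot in polar coordinates
ax.scatter(theta[::10], r[::10], s=5)   # scatter on the same polar Axes
plt.show()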
|
Matplotlib: AttributeError: 'PolarAxesSubplot' object has no attribute 'polar'
|
I have to plot one polar and one scatter. Here is the code:
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection='polar')
iterator = lidar.iter_scans()
line = ax.scatter([0, 0], [0, 0], s=5, color="xkcd:salmon")
ax.set_rmax(DMAX)
ax.grid(True)
data = []
def environment():
for i, scan in enumerate(lidar.iter_scans(5000)):
if i <= 100:
if i > 2:
data.extend(scan,)
else:
break
lidar.stop()
lidar.get_info()
lidar.get_info()
lidar.get_health()
env_variable= np.array([(np.radians(meas[1]), meas[2])
for meas in data])
theta= [eachValue[0] for eachValue in env_variable]
r= [eachValue[1] for eachValue in env_variable]
ax.polar(theta, r)
This last line (ax.polar(theta, r)) creates an error. How can I plot this polar plot in the same graph, or what is the best way to plot two graphs in the same place?
|
[
"I think depends on the python version\nUsing python 3.9 works just fine\nI did try python 3.6 and I got the error, but initialiazing in this way worked for me:\nfig, ax1 = plt.subplots(figsize=(6, 6), subplot_kw=dict(polar=True))\n\nwithout using the\nax1.polar(angles, values)\n\n"
] |
[
0
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0054989283_matplotlib_python.txt
|
Q:
pyFirmata gives error: module 'inspect' has no attribute 'getargspec'
I'm trying to use pyFirmata, but I can't get it to work. Even the most basic use of the library does not work. I guess there is something wrong with the library code.
from pyfirmata import Arduino,util
import time
port = 'COM5'
board = Arduino(port)
I get this error:
Traceback (most recent call last):
File "c:\Users\Public\pythonpublic\arduino.py", line 5, in <module>
board = Arduino(port)
^^^^^^^^^^^^^
File "C:\Users\marce\AppData\Roaming\Python\Python311\site-packages\pyfirmata\__init__.py", line 19, in __init__
super(Arduino, self).__init__(*args, **kwargs)
File "C:\Users\marce\AppData\Roaming\Python\Python311\site-packages\pyfirmata\pyfirmata.py", line 101, in __init__
self.setup_layout(layout)
File "C:\Users\marce\AppData\Roaming\Python\Python311\site-packages\pyfirmata\pyfirmata.py", line 157, in setup_layout
self._set_default_handlers()
File "C:\Users\marce\AppData\Roaming\Python\Python311\site-packages\pyfirmata\pyfirmata.py", line 161, in _set_default_handlers
self.add_cmd_handler(ANALOG_MESSAGE, self._handle_analog_message)
File "C:\Users\marce\AppData\Roaming\Python\Python311\site-packages\pyfirmata\pyfirmata.py", line 185, in add_cmd_handler
len_args = len(inspect.getargspec(func)[0])
^^^^^^^^^^^^^^^^^^
AttributeError: module 'inspect' has no attribute 'getargspec'. Did you mean: 'getargs'?
A:
According to the first line of pyFirmata docs:
It runs on Python 2.7, 3.6 and 3.7
You are using Python 3.11. The inspect (core library module) has changed since Python 3.7.
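For context, inspect.getargspec() was deprecated long ago and removed in Python 3.11; inspect.getfullargspec() is the modern replacement for counting positional arguments. A minimal sketch of the difference (handler is a made-up example function, not pyFirmata code):
import inspect

def handler(self, data):
    pass

# On Python 3.11+, inspect.getargspec(handler) raises AttributeError,
# while getfullargspec() still returns the argument names.
len_args = len(inspect.getfullargspec(handler)[0])
print(len_args)  # 2, for ('self', 'data')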
|
pyFirmata gives error: module 'inspect' has no attribute 'getargspec'
|
I'm trying to use pyFirmata, but I can't get it to work. Even the most basic use of the library does not work. I guess there is something wrong with the library code.
from pyfirmata import Arduino,util
import time
port = 'COM5'
board = Arduino(port)
I get this error:
Traceback (most recent call last):
File "c:\Users\Public\pythonpublic\arduino.py", line 5, in <module>
board = Arduino(port)
^^^^^^^^^^^^^
File "C:\Users\marce\AppData\Roaming\Python\Python311\site-packages\pyfirmata\__init__.py", line 19, in __init__
super(Arduino, self).__init__(*args, **kwargs)
File "C:\Users\marce\AppData\Roaming\Python\Python311\site-packages\pyfirmata\pyfirmata.py", line 101, in __init__
self.setup_layout(layout)
File "C:\Users\marce\AppData\Roaming\Python\Python311\site-packages\pyfirmata\pyfirmata.py", line 157, in setup_layout
self._set_default_handlers()
File "C:\Users\marce\AppData\Roaming\Python\Python311\site-packages\pyfirmata\pyfirmata.py", line 161, in _set_default_handlers
self.add_cmd_handler(ANALOG_MESSAGE, self._handle_analog_message)
File "C:\Users\marce\AppData\Roaming\Python\Python311\site-packages\pyfirmata\pyfirmata.py", line 185, in add_cmd_handler
len_args = len(inspect.getargspec(func)[0])
^^^^^^^^^^^^^^^^^^
AttributeError: module 'inspect' has no attribute 'getargspec'. Did you mean: 'getargs'?
|
[
"According to the first line of pyFirmata docs:\n\nIt runs on Python 2.7, 3.6 and 3.7\n\nYou are using Python 3.11. The inspect (core library module) has changed since Python 3.7.\n"
] |
[
0
] |
[
"As already pointed out in another answer, the pyFirmata modules is currently documented to run on Python 2.7, 3.6 and 3.7. This doesn't mean it won't work on other versions, but probably that it hasn't been tested on other versions by the author and it isn't officially supported. So it may or may not work on newer Python versions.\nYour error message is caused by a missing function inspect.getargspec(). This function is part of the Python Standard Library, but has been deprecated since Python 3.0 (which came out in 2008). Unfortunately the author wasn't aware of this or simply didn't bother to fix it, so now the code doesn't work anymore with the latest version of Python.\nIn the Python documentation, you can see that the function is still available in version 3.10, but not in version 3.11.\nTo solve this you have a number of options:\n\nDowngrade to Python 3.10, which is currently still a good option (Python 3.10 is \"alive\" until 2026-10-04). I don't know if all other functionality does work. I guess it will, but you would have to find out yourself.\nDowngrade to Python 3.7, which is claimed to be supported. Given that Python 3.7 is also still alive (until 2023-06-27), that's a reasonable option as well.\nCreate an issue for the pyFirmata module and hope the author will fix the problem. Note that an issue has already been created by someone in 2019 but apparently without effect. You could leave a comment there confirming that this has now broken for real.\nClone the library and fix it yourself (and create a Pull Request to get it in the official library).\nFind another, similar, library that does work with Python 3.11.\nWrite the code yourself.\n\nDowngrading to a Python version between 3.7 and 3.10 is certainly the simplest option, and leaving some feedback to the author will give you a chance it will be fixed in the future, in case your planning to use your script for a longer time.\n"
] |
[
-1
] |
[
"arduino",
"attributeerror",
"pyfirmata",
"python",
"python_3.11"
] |
stackoverflow_0074585622_arduino_attributeerror_pyfirmata_python_python_3.11.txt
|
Q:
pip command does nothing
I just installed Python 2.7.10 on Windows 10.
I have added my python and pip directory to my PATH like so:
My Scripts folder looks like this:
My problem is, when I type "pip" in the command prompt and press enter, absolutely nothing happens, even if I wait several minutes. If I remove the Scripts directory from the PATH variable I just get an error message like "pip not recognized as internal or external command". Python works fine. I have also tried to reinstall both pip and Python, but the same problem occurs.
So, does anyone have any idea about why pip does not do anything?
**Edit: ** when I say it does not do anything, I mean the cmd "hangs", like if it is waiting for something to happen. The cursor just keeps on blinking.
A:
One command that is bound to work is writing:
python -m pip install requests
This works because you hand off the script invocation to python, which you know works, instead of relying on the PATH environment variable of windows, which can be dodgy.
Packages like numpy that require c-extensions to be built, will not work with pip unless you have a C Compiler installed on your system. More information can be found in this question.
If you are, as you're saying, unfamiliar with the python environment, then let me assure you, you will have a better day by installing Anaconda.
Anaconda is a completely free Python distribution (including for
commercial use and redistribution). It includes more than 300 of the
most popular Python packages for science, math, engineering, and data
analysis.
Anaconda comes with numpy, of course.
A:
Since Python started including pip as a bundled package, the pip command sometimes does not work on its own.
In that case you can use pip through Python like this:
python -m pip <pip commands that you want>
A:
Try disabling your virus scanner. If this fixes it, exclude the C:\Python27\ folder from scanning (at your own risk).
I had this same issue: typing pip on the command line just puts the cursor on the next line, and nothing happens. I was sure my PATH system variable had C:\Python27\ and C:\Python27\Scripts\ in it, and I could verify it using echo %PATH% on the command line.
I found that I had to disable my virus scanner (Avast). I excluded the C:\Python27\ from virus scanning, and now everything works. Apparently the scanner is interfering with Python's ability to load the module.
A:
Add the following path or you can also cd to the path and then try pip command, it will work fine.
C:\Python27\Lib\site-packages\pip
A:
I had the same issue; my antivirus was blocking the script. After uninstalling it, the issue was resolved.
|
pip command does nothing
|
I just installed Python 2.7.10 on Windows 10.
I have added my python and pip directory to my PATH like so:
My Scripts folder looks like this:
My problem is, when I type "pip" in the command prompt and press enter, absolutely nothing happens, even if I wait several minutes. If I remove the Scripts directory from the PATH variable I just get an error message like "pip not recognized as internal or external command". Python works fine. I have also tried to reinstall both pip and Python, but the same problem occurs.
So, does anyone have any idea about why pip does not do anything?
**Edit: ** when I say it does not do anything, I mean the cmd "hangs", like if it is waiting for something to happen. The cursor just keeps on blinking.
|
[
"One command that is bound to work is writing:\npython -m pip install requests\n\nThis works because you hand off the script invocation to python, which you know works, instead of relying on the PATH environment variable of windows, which can be dodgy.\nPackages like numpy that require c-extensions to be built, will not work with pip unless you have a C Compiler installed on your system. More information can be found in this question.\nIf you are, as you're saying, unfamiliar with the python environment, then let me assure you, you will have a better day by installing Anaconda.\n\nAnaconda is a completely free Python distribution (including for\n commercial use and redistribution). It includes more than 300 of the\n most popular Python packages for science, math, engineering, and data\n analysis.\n\nAnaconda comes with numpy, of course.\n",
"After Python including pip at package, pip commands not work sometimes.\nThen you can use pip through python like\npython -m pip <pip commands that you want>\n\n",
"Try disabling your virus scanner. If this fixes it, exclude the C:\\Python27\\ folder from scanning (at your own risk).\n\nI had this same issue: typing pip on the command line just puts the cursor on the next line, and nothing happens. I was sure my PATH system variable had C:\\Python27\\ and C:\\Python27\\Scripts\\ in it, and I could verify it using echo %PATH% on the command line.\nI found that I had to disable my virus scanner (Avast). I excluded the C:\\Python27\\ from virus scanning, and now everything works. Apparently the scanner is interfering with Python's ability to load the module.\n",
"Add the following path or you can also cd to the path and then try pip command, it will work fine.\nC:\\Python27\\Lib\\site-packages\\pip\n",
"I had the same issue after uninstalling my antivirus, which was blocking the script. The issue was resolved.\n"
] |
[
14,
4,
2,
0,
0
] |
[] |
[] |
[
"pip",
"python"
] |
stackoverflow_0033918678_pip_python.txt
|
Q:
How do you update column headers to change a Single index dataframe into a MultiIndex dataframe?
I have a dataset that arrives with commingled column headers as a wide dataframe that also has row groups. For instance, several types of furniture that have yearly row data and the column levels are product size and colors...
But for flattening the data to be processed/graphed, I have to create a color column (I know I can use .stack() or .melt() at that point)
Remaining Question: I simply can't find how to let Python know that I need to rename these columns to two indices.
Note: for the MWE I created a MultiIndex, but that was for an empty dataframe and I don't know how to REname the columns...
Unless maybe the solution is in the way I import the columns to begin with? I use Jupyter notebook, so it's currently just brought in with a plain call of:
df=pd.read_csv("filename")
Which is where the joint column name of size and color come from.
Honestly, I tried the solution here but couldn't get to try the rename() because I can't get at the MultiIndex second level. With this documentation I can make the incoming sample wide data and show how it currently reformats to tidy data in df_stk.
For the way I want it to look, I understand how to create the dataframe from scratch (which I did for df_multi), but without knowing how to simultaneously rename and reindex the columns for df_mix, I am unsure how to get what I want... which is st1.
# Minimum Working Example of incoming data & attempt to flatten
import pandas as pd
import numpy as np
colhead = ["Small Black", "Small White", "Small Brown", "Medium Black", "Medium White", "Medium Brown", "Large Black", "Large White", "Large Brown"]
rowhead = pd.MultiIndex.from_product([['sofa','table','chair'],[2011, 2012, 2013, 2014, 2015]])
df_mix = pd.DataFrame(np.random.randint(1,10, size=(15,9)),index=rowhead,columns=colhead)
df_stk = df_mix.stack()
# showing weird tidy with mixed colors and sizes
print(df_stk)
>Output:
sofa 2011 Small Black 2
Small White 3
Small Brown 6
Medium Black 5...
# Format of What I actually want to see as tidy data
colhead1 = pd.MultiIndex.from_product([['small','medium','large'],['Black','White','Brown']])
df_multi = pd.DataFrame(np.random.randint(1,10, size=(15,9)),index=rowhead,columns=colhead1)
st1 = df_multi.stack()
print(st1)
>Output:
large medium small
sofa 2011 Black 1 4 6
Brown 4 2 4
White 5 1 2
2012 Black 1 6 2
Brown 3 9 1
# Attempt to reindex
hierarch1 = ["Small", "Small", "Small", "Medium", "Medium", "Medium", "Large", "Large", "Large"]
hierarch2 = ["Black", "White", "Brown", "Black", "White", "Brown", "Black", "White", "Brown"]
df_mixfix = df_mix
df_mixfix.columns = [hierarch1,df_mix.columns]
print(df_mixfix)
>Output:
Small Medium \
Small Black Small White Small Brown Medium Black Medium White
sofa 2011 2 3 6 5 9
2012 4 4 4 9 2
...The main point is this is slightly different from all the other questions previously posted
A:
Per the suggestions in the comments on the question, I have pieced together the answer. To rename the columns, and to label both the columns and the indices, the code should be using .columns and .names, respectively:
# Minimum Working Example of incoming data in wide format
import pandas as pd
import numpy as np
colhead = ["Small Black", "Small White", "Small Brown", "Medium Black", "Medium White", "Medium Brown", "Large Black", "Large White", "Large Brown"]
rowhead = pd.MultiIndex.from_product([['sofa','table','chair'],[2011, 2012, 2013, 2014, 2015]])
df_mix = pd.DataFrame(np.random.randint(1, 10, size=(15, 9)), index=rowhead, columns=colhead)
# Reindex by list and use .names to label dataframe hierarchy
hierarch1 = ["Small", "Small", "Small", "Medium", "Medium", "Medium", "Large", "Large", "Large"]
hierarch2 = ["Black", "White", "Brown", "Black", "White", "Brown", "Black", "White", "Brown"]
df_mixfix = df_mix
df_mixfix.columns = [hierarch1, hierarch2]
df_mixfix.columns.names = ['Size', 'Color']
df_mixfix.index.names = ['Product', 'Year']
# Stack for tidy data
stk = df_mixfix.stack()
df_stk = stk.stack()
print(df_stk)
This gives the output, in the desired format, with the proper labels, except for the last column, which should be named 'Sales' (and it fixed the wide format dataframe into a properly labelled MultiIndex):
Product Year Color Size
sofa 2011 Black Large 1
Medium 9
Small 2
Brown Large 1
Medium 3
..
chair 2015 Brown Medium 4
Small 9
White Large 5
Medium 3
Small 9
Length: 135, dtype: int32
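If that last unnamed column should indeed be called 'Sales', one small follow-up (a sketch, not part of the answer above) is to name the stacked Series, since stacking a DataFrame twice yields a Series:
df_stk = df_stk.rename('Sales')  # Series.rename with a scalar sets the Series name
print(df_stk.name)               # Sales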
|
How do you update column headers to change a Single index dataframe into a MultiIndex dataframe?
|
I have a dataset that arrives with commingled column headers as a wide dataframe that also has row groups. For instance, several types of furniture that have yearly row data and the column levels are product size and colors...
But for flattening the data to be processed/graphed, I have to create a color column (I know I can use .stack() or .melt() at that point)
Remaining Question: I simply can't find how to let Python know that I need to rename these columns to two indices.
Note: for the MWE I created a MultiIndex, but that was for an empty dataframe and I don't know how to REname the columns...
Unless maybe the solution is in the way I import the columns to begin with? I use Jupyter notebook, so it's currently just brought in with a plain call of:
df=pd.read_csv("filename")
Which is where the joint column name of size and color come from.
Honestly, I tried the solution here but couldn't get to try the rename() because I can't get at the MultiIndex second level. With this documentation I can make the incoming sample wide data and show how it currently reformats to tidy data in df_stk.
For the way I want it to look, I understand how to create the dataframe from scratch (which I did for df_multi), but without knowing how to simultaneously rename and reindex the columns for df_mix, I am unsure how to get what I want... which is st1.
# Minimum Working Example of incoming data & attempt to flatten
import pandas as pd
import numpy as np
colhead = ["Small Black", "Small White", "Small Brown", "Medium Black", "Medium White", "Medium Brown", "Large Black", "Large White", "Large Brown"]
rowhead = pd.MultiIndex.from_product([['sofa','table','chair'],[2011, 2012, 2013, 2014, 2015]])
df_mix = pd.DataFrame(np.random.randint(1,10, size=(15,9)),index=rowhead,columns=colhead)
df_stk = df_mix.stack()
# showing weird tidy with mixed colors and sizes
print(df_stk)
>Output:
sofa 2011 Small Black 2
Small White 3
Small Brown 6
Medium Black 5...
# Format of What I actually want to see as tidy data
colhead1 = pd.MultiIndex.from_product([['small','medium','large'],['Black','White','Brown']])
df_multi = pd.DataFrame(np.random.randint(1,10, size=(15,9)),index=rowhead,columns=colhead1)
st1 = df_multi.stack()
print(st1)
>Output:
large medium small
sofa 2011 Black 1 4 6
Brown 4 2 4
White 5 1 2
2012 Black 1 6 2
Brown 3 9 1
# Attempt to reindex
hierarch1 = ["Small", "Small", "Small", "Medium", "Medium", "Medium", "Large", "Large", "Large"]
hierarch2 = ["Black", "White", "Brown", "Black", "White", "Brown", "Black", "White", "Brown"]
df_mixfix = df_mix
df_mixfix.columns = [hierarch1,df_mix.columns]
print(df_mixfix)
>Output:
Small Medium \
Small Black Small White Small Brown Medium Black Medium White
sofa 2011 2 3 6 5 9
2012 4 4 4 9 2
...The main point is this is slightly different from all the other questions previously posted
|
[
"Per the suggestions in the comments on the question, I have pieced together the answer. To rename the columns, and to label both the columns and the indices, the code should be using .columns and .names, respectively:\n# Minimum Working Example of incoming data in wide format\nimport pandas as pd\nimport numpy as np\ncolhead = [\"Small Black\", \"Small White\", \"Small Brown\", \"Medium Black\", \"Medium White\", \"Medium Brown\", \"Large Black\", \"Large White\", \"Large Brown\"]\nrowhead = pd.MultiIndex.from_product([['sofa','table','chair'],[2011, 2012, 2013, 2014, 2015]])\ndf_mix = pd.DataFrame(np.random.randint(1,10, size=15,9)), index=rowhead, columns=colhead)\n\n# Reindex by list and use .names to label dataframe hierarchy\nhierarch1 = [\"Small\", \"Small\", \"Small\", \"Medium\", \"Medium\", \"Medium\", \"Large\", \"Large\", \"Large\"]\nhierarch2 = [\"Black\", \"White\", \"Brown\", \"Black\", \"White\", \"Brown\", \"Black\", \"White\", \"Brown\"]\ndf_mixfix = df_mix\ndf_mixfix.columns = [hierarch1, hierarch2]\ndf_mixfix.columns.names = ['Size', 'Color']\ndf_mixfix.index.names = ['Product', 'Year']\n\n# Stack for tidy data\nstk = df_mixfix.stack()\ndf_stk = stk.stack()\nprint(df_stk)\n\nThis gives the output, in the desired format, with the proper labels, except for the last column, which should be named 'Sales' (and it fixed the wide format dataframe into a properly labelled MultiIndex):\nProduct Year Color Size \nsofa 2011 Black Large 1\n Medium 9\n Small 2\n Brown Large 1\n Medium 3\n ..\nchair 2015 Brown Medium 4\n Small 9\n White Large 5\n Medium 3\n Small 9\nLength: 135, dtype: int32\n\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"multi_index",
"pandas",
"python",
"rename"
] |
stackoverflow_0074553654_dataframe_multi_index_pandas_python_rename.txt
|
Q:
How to start python Eel in any available browser of user system?
I am trying to create a Windows-based application using Eel. I want it to start in any available browser on the user's system. How can I do it? (Consider that the user may not have Chrome installed on their system.)
A:
You can pass in mode='default' to the eel.start function, and it will try to open the system's default browser. Something like this:
eel.start('index.html', mode='default')
In this case, behind-the-scenes Eel will be proxying the open request to Python's webbrowser.open function, so it's best to check there if you want to know the logic being used. Or you can look at the Python source file for webbrowser.py on your machine... on Windows it would be located somewhere like: C:\Python39\Lib\webbrowser.py.
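Putting it together, a minimal sketch, assuming a web/ folder containing index.html (the folder name is an assumption, not from the question):
import eel

eel.init('web')                          # folder that holds index.html
eel.start('index.html', mode='default')  # open in the system's default browser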
A:
Maybe it's too late for you; I had the same problem. To resolve it, open the folder containing Eel's source code, open __init__.py, and change the browser option.
|
How to start python Eel in any available browser of user system?
|
I am trying to create a Windows-based application using Eel. I want it to start in any available browser on the user's system. How can I do it? (Consider that the user may not have Chrome installed on their system.)
|
[
"You can pass in mode='default' to the eel.start function, and it will try to open the system's default browser. Something like this:\neel.start('index.html', mode='default')\n\nIn this case, behind-the-scenes Eel will be proxying the open request to Python's webbrowser.open function, so it's best to check there if you want to know the logic being used. Or you can look at the Python source file for webbrowser.py on your machine... on Windows it would be located somewhere like: C:\\Python39\\Lib\\webbrowser.py.\n",
"Maybe it's too late for you, I had the same problem. So to resolve it you have open your folder with code source of eel and open init.py and change browser option.\n"
] |
[
1,
0
] |
[] |
[] |
[
"eel",
"python"
] |
stackoverflow_0068740121_eel_python.txt
|
Q:
Wireshark/pcap file format for serial data?
I would like a Python file that uses a serial port to generate Wireshark/pcap compatible "trace" files of the serial data being exchanged. Can someone point me at the format of the pcap file I need to create for such data? For example do I have to fake a SLIP/PPP type file or is there such a thing as a "raw serial data" file?
Note that I appreciate that serial data does not have to be "packetized" although in the case I'm working on, it logically is.
And if there is a Python library that already allows me to create the file without much effort, even better!
Thanks.
A:
For example do I have to fake a SLIP/PPP type file
No.
or is there such a thing as a "raw serial data" file?
No.
What you can do is use one of the user-defined private LINKTYPE_USER0 through LINKTYPE_USER15 values for your packets. Note that other pcap or pcapng files may use those values for different types of packets, so it's not as if your choice will be universal.
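For illustration, a minimal sketch of writing such a file by hand with only the standard library; LINKTYPE_USER0 is 147, and the payload bytes here are made-up serial data:
import struct
import time

LINKTYPE_USER0 = 147  # first of the user-defined link-layer types

with open('serial.pcap', 'wb') as f:
    # pcap global header: magic, major, minor, thiszone, sigfigs, snaplen, linktype
    f.write(struct.pack('<IHHiIII', 0xA1B2C3D4, 2, 4, 0, 0, 65535, LINKTYPE_USER0))
    payload = b'\x7e\x01\x02\x03\x7e'  # one logical "packet" of serial bytes
    now = time.time()
    sec, usec = int(now), int((now % 1) * 1_000_000)
    # per-packet record header: ts_sec, ts_usec, captured length, original length
    f.write(struct.pack('<IIII', sec, usec, len(payload), len(payload)))
    f.write(payload)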
|
Wireshark/pcap file format for serial data?
|
I would like a Python file that uses a serial port to generate Wireshark/pcap compatible "trace" files of the serial data being exchanged. Can someone point me at the format of the pcap file I need to create for such data? For example do I have to fake a SLIP/PPP type file or is there such a thing as a "raw serial data" file?
Note that I appreciate that serial data does not have to be "packetized" although in the case I'm working on, it logically is.
And if there is a Python library that already allows me to create the file without much effort, even better!
Thanks.
|
[
"\nFor example do I have to fake a SLIP/PPP type file\n\nNo.\n\nor is there such a thing as a \"raw serial data\" file?\n\nNo.\nWhat you can do is use one of the user-defined private LINKTYPE_USER0 through LINKTYPE_USER15 values for your packets. Note that other pcap or pcapng files may use those values for different types of packets, so it's not as if your choice will be universal.\n"
] |
[
0
] |
[] |
[] |
[
"pcap",
"python",
"wireshark"
] |
stackoverflow_0074573201_pcap_python_wireshark.txt
|
Q:
parallelize a time-consuming Python loop
I have a nested for loop that is time-consuming. I think parallelization can make it faster, but I do not know how I use it. this is my for loop in my code :
for itr2 in range(K):
tmp_cl=clusters[itr2+1]
if len(tmp_cl)>1:
BD_cent=np.zeros((len(tmp_cl),1))
for itr3 in range(len(tmp_cl)):
sumv=0
for itr5 in range(len(tmp_cl)):
condition = psnr_bitrate == tmp_cl[itr3,:]
where_result = np.where(condition)
tidx1 = where_result[0]
condition = psnr_bitrate == tmp_cl[itr5,:]
where_result = np.where(condition)
tidx2 = where_result[0]
BD_R=bd_rate(rate[tidx1[0],:],tmp_cl[itr3,:],rate[tidx2[0],:],tmp_cl[itr5,:])
BD_R=(BD_R-min_BDR)/(max_BDR-min_BDR)
BD_Q=bd_PSNR(rate[tidx1[0],:],tmp_cl[itr3,:],rate[tidx2[0],:],tmp_cl[itr5,:])
BD_Q=(BD_Q-min_BDQ)/(max_BDQ-min_BDQ)
value=(wr*BD_R+wq*BD_Q)
if value!=np.NINF:
sumv+=(value)
else:
sumv+=1000#for the curve which has not overlap with others
BD_cent[itr3]=sumv/len(tmp_cl)
new_centroid_index=np.argmin(BD_cent)
centroid[itr2]=clusters[itr2+1][new_centroid_index]
I checked some other examples about parallelization on Stack Overflow, but as a beginner I could not understand the solution. Do I have to define a function for the code in the for loops? These for loops compute the distance between every two points in K=6 different clusters, but for parallelization I do not know how to use asyncio or joblib. Is it possible for these loops or not?
A:
CPython implementation detail: In CPython, due to the Global Interpreter Lock, only one thread can execute Python code at once (even though certain performance-oriented libraries might overcome this limitation). If you want your application to make better use of the computational resources of multi-core machines, you are advised to use multiprocessing or concurrent.futures.ProcessPoolExecutor.
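A minimal self-contained sketch of ProcessPoolExecutor (the worker here is a stand-in; in the question's code the natural unit of work to move into a function is one itr2 cluster):
from concurrent.futures import ProcessPoolExecutor

def slow_work(x):  # stand-in for the per-cluster centroid computation
    return x * x

if __name__ == '__main__':  # guard required for multiprocessing on Windows
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(slow_work, range(6)))  # one task per cluster, K=6
    print(results)  # [0, 1, 4, 9, 16, 25]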
|
parallelize a time-consuming Python loop
|
I have a nested for loop that is time-consuming. I think parallelization can make it faster, but I do not know how I use it. this is my for loop in my code :
for itr2 in range(K):
tmp_cl=clusters[itr2+1]
if len(tmp_cl)>1:
BD_cent=np.zeros((len(tmp_cl),1))
for itr3 in range(len(tmp_cl)):
sumv=0
for itr5 in range(len(tmp_cl)):
condition = psnr_bitrate == tmp_cl[itr3,:]
where_result = np.where(condition)
tidx1 = where_result[0]
condition = psnr_bitrate == tmp_cl[itr5,:]
where_result = np.where(condition)
tidx2 = where_result[0]
BD_R=bd_rate(rate[tidx1[0],:],tmp_cl[itr3,:],rate[tidx2[0],:],tmp_cl[itr5,:])
BD_R=(BD_R-min_BDR)/(max_BDR-min_BDR)
BD_Q=bd_PSNR(rate[tidx1[0],:],tmp_cl[itr3,:],rate[tidx2[0],:],tmp_cl[itr5,:])
BD_Q=(BD_Q-min_BDQ)/(max_BDQ-min_BDQ)
value=(wr*BD_R+wq*BD_Q)
if value!=np.NINF:
sumv+=(value)
else:
sumv+=1000#for the curve which has not overlap with others
BD_cent[itr3]=sumv/len(tmp_cl)
new_centroid_index=np.argmin(BD_cent)
centroid[itr2]=clusters[itr2+1][new_centroid_index]
I checked some other examples about parallelization on Stack Overflow, but as a beginner I could not understand the solution. Do I have to define a function for the code in the for loops? These for loops compute the distance between every two points in K=6 different clusters, but for parallelization I do not know how to use asyncio or joblib. Is it possible for these loops or not?
|
[
"CPython implementation detail: In CPython, due to the Global Interpreter Lock, only one thread can execute Python code at once (even though certain performance-oriented libraries might overcome this limitation). If you want your application to make better use of the computational resources of multi-core machines, you are advised to use multiprocessing or concurrent.futures.ProcessPoolExecutor.\n"
] |
[
0
] |
[] |
[] |
[
"parallel_processing",
"python",
"python_3.x"
] |
stackoverflow_0073941643_parallel_processing_python_python_3.x.txt
|
Q:
How to count occurrences of a specific dict key in dicts list and some dicts values contains list and append the count in value
I'm trying to count the number of times a specified key occurs in my list of dicts. I've used loops and sum to count up all the keys, but how can I find the count for a specific key? I have this code, which does not work currently:
for dico in data:
for ele in dico['people']:
print(ele['name']+str(len(ele['animals'])))
The "entries" data looks like this:
[
{
"name": "Uzuzozne",
"people": [
{
"name": "Lillie Abbott",
"animals": [
{
"name": "John Dory"
}
]
}
]
},
{
"name": "Satanwi",
"people": [
{
"name": "Anthony Bruno",
"animals": [
{
"name": "Oryx"
}
]
}
]
},
{
"name": "Dillauti",
"people": [
{
"name": "Winifred Graham",
"animals": [
{ "name": "Anoa" },
{ "name": "Duck" },
{ "name": "Narwhal" },
{ "name": "Badger" },
{ "name": "Cobra" },
{ "name": "Crow" }
]
},
{
"name": "Blanche Viciani",
"animals":
[{ "name": "Barbet" },
{ "name": "Rhea" },
{ "name": "Snakes" },
{ "name": "Antelope" },
{ "name": "Echidna" },
{ "name": "Crow" },
{ "name": "Guinea Fowl" },
{ "name": "Deer Mouse" }]
}
]
}
]
My goal is to print the counts of People and Animals by counting the number of children and appending it to the name, e.g. Satanwi [2].
[ { name: 'Dillauti [5]',
people:
[ { name: 'Winifred Graham [6]',
animals:
[ { name: 'Anoa' },
{ name: 'Duck' },
{ name: 'Narwhal' },
{ name: 'Badger' },
{ name: 'Cobra' },
{ name: 'Crow' } ] },
{ name: 'Blanche Viciani [8]',
animals:
[ { name: 'Barbet' },
{ name: 'Rhea' },
{ name: 'Snakes' },
{ name: 'Antelope' },
{ name: 'Echidna' },
{ name: 'Crow' },
{ name: 'Guinea Fowl' },
{ name: 'Deer Mouse' } ] },
...
...
]
A:
This will do what you want; feel free to ask if you need explanations:
for dico in data:
children = 0
for ele in dico['people']:
animals = len(ele['animals'])
children += 1 + animals
ele['name'] += f" [{animals}]"
dico['name'] += f" [{children}]"
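Running this on the sample data above mutates it in place; for example (using the entry order shown in the question):
print(data[1]['name'])               # Satanwi [2]  -> 1 person + 1 animal
print(data[2]['people'][0]['name'])  # Winifred Graham [6]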
|
How to count occurrences of a specific dict key in dicts list and some dicts values contains list and append the count in value
|
I'm trying to count the number of times a specified key occurs in my list of dicts. I've used loops and sum to count up all the keys, but how can I find the count for a specific key? I have this code, which does not work currently:
for dico in data:
for ele in dico['people']:
print(ele['name']+str(len(ele['animals'])))
The "entries" data looks like this:
[
{
"name": "Uzuzozne",
"people": [
{
"name": "Lillie Abbott",
"animals": [
{
"name": "John Dory"
}
]
}
]
},
{
"name": "Satanwi",
"people": [
{
"name": "Anthony Bruno",
"animals": [
{
"name": "Oryx"
}
]
}
]
},
{
"name": "Dillauti",
"people": [
{
"name": "Winifred Graham",
"animals": [
{ "name": "Anoa" },
{ "name": "Duck" },
{ "name": "Narwhal" },
{ "name": "Badger" },
{ "name": "Cobra" },
{ "name": "Crow" }
]
},
{
"name": "Blanche Viciani",
"animals":
[{ "name": "Barbet" },
{ "name": "Rhea" },
{ "name": "Snakes" },
{ "name": "Antelope" },
{ "name": "Echidna" },
{ "name": "Crow" },
{ "name": "Guinea Fowl" },
{ "name": "Deer Mouse" }]
}
]
}
]
My goal is to print the counts of People and Animals by counting the number of children and appending it to the name, e.g. Satanwi [2].
[ { name: 'Dillauti [5]',
people:
[ { name: 'Winifred Graham [6]',
animals:
[ { name: 'Anoa' },
{ name: 'Duck' },
{ name: 'Narwhal' },
{ name: 'Badger' },
{ name: 'Cobra' },
{ name: 'Crow' } ] },
{ name: 'Blanche Viciani [8]',
animals:
[ { name: 'Barbet' },
{ name: 'Rhea' },
{ name: 'Snakes' },
{ name: 'Antelope' },
{ name: 'Echidna' },
{ name: 'Crow' },
{ name: 'Guinea Fowl' },
{ name: 'Deer Mouse' } ] },
...
...
]
|
[
"This will do what you want; feel free to ask if you need explanations:\nfor dico in data:\n children = 0\n for ele in dico['people']:\n animals = len(ele['animals'])\n children += 1 + animals\n ele['name'] += f\" [{animals}]\"\n dico['name'] += f\" [{children}]\"\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0074586097_python_python_3.x.txt
|
Q:
CNN model that inputs image list outputs a list of [int,int,float]
I'm still new to deep learning and CNNs, and I don't know what this error means, so can anyone help me?
This model is designed to take an image array as input and output a list of items, where each item contains an int, an int and a float, and I cannot make a model that contains no error.
The error
ValueError: Dimensions must be equal, but are 162 and 3 for '{{node mean_squared_error/SquaredDifference}} = SquaredDifference[T=DT_FLOAT](sequential/dense_1/Relu, mean_squared_error/Cast)' with input shapes: [?,162], [?,54,3].
The code
import numpy as np
import os
from skimage import io
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
def SeperateDataFrameItems(imagesArray, maxCount):
tempList = []
finalList = []
ct = 0
for i in range(len(imagesArray)):
if imagesArray[i][0] != '#':
tempList.append(np.array([imagesArray[i][1], imagesArray[i][2], imagesArray[i][3]]))
ct = ct + 1
else:
for i in range(ct, maxCount):
tempList.append(np.array([-1,-1,-1]))
finalList.append(tempList)
tempList = []
ct = 0
return np.stack(finalList, axis=0)
def HandleDesigndata(start, end):
path = 'Designs/designs/'
all_images = []
for image_path in os.listdir(path):
img = io.imread(path + image_path, as_gray=True)
img = img.reshape([768, 607, 1])
all_images.append(img)
return np.array(all_images)
image_list = HandleDesigndata(100, 200)
circles_data = pd.read_csv('Data.csv')
circles_data = SeperateDataFrameItems(circles_data.to_numpy(), 54)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(768, 607, 1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(162, activation='relu'))
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
model.fit(x=image_list, y=circles_data, batch_size=100, epochs=10, validation_split=0.1)
A:
You have to add a final dense layer before compiling the model, where n_number is the number of outputs of your model:
model.add(Dense(n_number,activation='sigmoid'))
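The shapes in the error ([?,162] vs [?,54,3]) also hint at the root cause: 54 × 3 = 162, so the labels and the Dense(162) output only disagree in shape. A hedged sketch, assuming the existing Dense(162) head is kept, is to flatten the targets instead of changing the model:
# Flatten each (54, 3) target into a 162-vector so y matches the Dense(162) output.
circles_data = circles_data.reshape(len(circles_data), -1)
model.fit(x=image_list, y=circles_data, batch_size=100, epochs=10, validation_split=0.1)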
|
CNN model that inputs image list outputs a list of [int,int,float]
|
I'm still new to deep learning and CNNs, and I don't know what this error means, so can anyone help me?
This model is designed to take an image array as input and output a list of items, where each item contains an int, an int and a float, and I cannot make a model that contains no error.
The error
ValueError: Dimensions must be equal, but are 162 and 3 for '{{node mean_squared_error/SquaredDifference}} = SquaredDifference[T=DT_FLOAT](sequential/dense_1/Relu, mean_squared_error/Cast)' with input shapes: [?,162], [?,54,3].
The code
import numpy as np
import os
from skimage import io
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
def SeperateDataFrameItems(imagesArray, maxCount):
tempList = []
finalList = []
ct = 0
for i in range(len(imagesArray)):
if imagesArray[i][0] != '#':
tempList.append(np.array([imagesArray[i][1], imagesArray[i][2], imagesArray[i][3]]))
ct = ct + 1
else:
for i in range(ct, maxCount):
tempList.append(np.array([-1,-1,-1]))
finalList.append(tempList)
tempList = []
ct = 0
return np.stack(finalList, axis=0)
def HandleDesigndata(start, end):
path = 'Designs/designs/'
all_images = []
for image_path in os.listdir(path):
img = io.imread(path + image_path, as_gray=True)
img = img.reshape([768, 607, 1])
all_images.append(img)
return np.array(all_images)
image_list = HandleDesigndata(100, 200)
circles_data = pd.read_csv('Data.csv')
circles_data = SeperateDataFrameItems(circles_data.to_numpy(), 54)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(768, 607, 1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(162, activation='relu'))
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
model.fit(x=image_list, y=circles_data, batch_size=100, epochs=10, validation_split=0.1)
|
[
"You have to add a final dense layer before the compiler. And the n_number is the number of the output of your model:\nmodel.add(Dense(n_number,activation='sigmoid'))\n\n"
] |
[
0
] |
[] |
[] |
[
"conv_neural_network",
"deep_learning",
"python",
"tensorflow"
] |
stackoverflow_0066985400_conv_neural_network_deep_learning_python_tensorflow.txt
|
Q:
How to upload images using wordpress REST api in python?
I think I've got this 90% working, but it ends up 'uploading' a blank transparent image. I get a 201 response after the upload. I think that's probably a proxy for when WP finds a missing image. I'm unsure if I'm passing the image incorrectly (i.e. it doesn't leave my computer) or if I'm not tagging it properly to WP's liking.
from base64 import b64encode
import json
import requests
def imgUploadREST(imgPath):
url = 'https://www.XXXXXXXXXX.com/wp-json/wp/v2/media'
auth = b64encode('{}:{}'.format('USERNAME','PASS'))
payload = {
'type': 'image/jpeg', # mimetype
'title': 'title',
"Content":"content",
"excerpt":"Excerpt",
}
headers = {
'post_content':'post_content',
'Content':'content',
'Content-Disposition' : 'attachment; filename=image_20170510.jpg',
'Authorization': 'Basic {}'.format(auth),
}
with open(imgPath, "rb") as image_file:
files = {'field_name': image_file}
r = requests.post(url, files=files, headers=headers, data=payload)
print r
response = json.loads(r.content)
print response
return response
I've seen a fair number of answers in PHP or Node.js, but I'm having trouble understanding the syntax in Python. Thank you for any help!
A:
I've figured it out!
With this function I'm able to upload images via the WP REST api to my site (Photo Gear Hunter.) The function returns the ID of the image. You can then pass that id to a new post call and make it the featured image, or do whatever you wish with it.
def restImgUL(imgPath):
url='http://xxxxxxxxxxxx.com/wp-json/wp/v2/media'
data = open(imgPath, 'rb').read()
fileName = os.path.basename(imgPath)
res = requests.post(url='http://xxxxxxxxxxxxx.com/wp-json/wp/v2/media',
data=data,
headers={ 'Content-Type': 'image/jpg','Content-Disposition' : 'attachment; filename=%s'% fileName},
auth=('authname', 'authpass'))
# pp = pprint.PrettyPrinter(indent=4) ## print it pretty.
# pp.pprint(res.json()) #this is nice when you need it
newDict=res.json()
newID= newDict.get('id')
link = newDict.get('guid').get("rendered")
print newID, link
return (newID, link)
A:
An improved version to handle non-ASCII in the filename (Greek letters for example). What it does is convert all non-ASCII characters to a UTF-8 escape sequence.
(Original code from @my Year Of Code)
def uploadImage(filePath):
data = open(filePath, 'rb').read()
fileName = os.path.basename(filePath)
espSequence = bytes(fileName, "utf-8").decode("unicode_escape")
# Convert all non ASCII characters to UTF-8 escape sequence
res = requests.post(url='http://xxxxxxxxxxxxx.com/wp-json/wp/v2/media',
data=data,
headers={'Content-Type': 'image/jpeg',
'Content-Disposition': 'attachment; filename=%s' % espSequence,
},
auth=('authname', 'authpass'))
newDict=res.json()
newID= newDict.get('id')
link = newDict.get('guid').get("rendered")
print newID, link
return (newID, link)
From my understanding, a POST header can only contain ASCII characters.
A:
To specify additional fields supported by the api such as alt text, description ect:
from requests_toolbelt.multipart.encoder import MultipartEncoder
import requests
import os
fileName = os.path.basename(imgPath)
multipart_data = MultipartEncoder(
fields={
# a file upload field
'file': (fileName, open(imgPath, 'rb'), 'image/jpg'),
# plain text fields
'alt_text': 'alt test',
'caption': 'caption test',
'description': 'description test'
}
)
response = requests.post('http://example/wp-json/wp/v2/media', data=multipart_data,
headers={'Content-Type': multipart_data.content_type},
auth=('user', 'pass'))
A:
Thanks to @my Year Of Code.
My final working code:
import requests
HOST = "https://www.crifan.com"
API_MEDIA = HOST + "/wp-json/wp/v2/media"
JWT_TOKEN = "eyJxxxxxxxxjLYB4"
imgMime = gImageSuffixToMime[imgSuffix] # 'image/png'
imgeFilename = "%s.%s" % (processedGuid, imgSuffix) # 'f6956c30ef0b475fa2b99c2f49622e35.png'
authValue = "Bearer %s" % JWT_TOKEN
curHeaders = {
"Authorization": authValue,
"Content-Type": imgMime,
'Content-Disposition': 'attachment; filename=%s' % imgeFilename,
}
# curHeaders={'Authorization': 'Bearer eyJ0xxxyyy.zzzB4', 'Content-Type': 'image/png', 'Content-Disposition': 'attachment; filename=f6956c30ef0b475fa2b99c2f49622e35.png'}
uploadImgUrl = API_MEDIA
resp = requests.post(
uploadImgUrl,
# proxies=cfgProxies,
headers=curHeaders,
data=imgBytes,
)
A return code of 201 means Created (OK).
The response JSON looks like:
{
"id": 70393,
"date": "2020-03-07T18:43:47",
"date_gmt": "2020-03-07T10:43:47",
"guid": {
"rendered": "https://www.crifan.com/files/pic/uploads/2020/03/f6956c30ef0b475fa2b99c2f49622e35.png",
"raw": "https://www.crifan.com/files/pic/uploads/2020/03/f6956c30ef0b475fa2b99c2f49622e35.png"
},
...
More details in my (Chinese) post: 【已解决】用Python通过WordPress的REST API上传图片 ("[Solved] Uploading images via the WordPress REST API in Python")
A:
This is my working code for uploading an image into WordPress from a local image or from a URL. Hope someone finds it useful.
import requests, json, os, base64
def wp_upload_image(domain, user, app_pass, imgPath):
# imgPath can be local image path or can be url
url = 'https://'+ domain + '/wp-json/wp/v2/media'
filename = imgPath.split('/')[-1] if len(imgPath.split('/')[-1])>1 else imgPath.split('\\')[-1]
extension = imgPath[imgPath.rfind('.')+1 : len(imgPath)]
if imgPath.find('http') == -1:
try: data = open(imgPath, 'rb').read()
except:
print('image local path not exits')
return None
else:
rs = requests.get(imgPath)
if rs.status_code == 200:
data = rs.content
else:
print('url get request failed')
return None
headers = { "Content-Disposition": f"attachment; filename={filename}" , "Content-Type": str("image/" + extension)}
rs = requests.post(url, auth=(user, app_pass), headers=headers, data=data)
print(rs)
return (rs.json()['source_url'], rs.json()['id'])
|
How to upload images using wordpress REST api in python?
|
I think I've got this 90% working, but it ends up 'uploading' a blank transparent image. I get a 201 response after the upload. I think that's probably a proxy for when WP finds a missing image. I'm unsure if I'm passing the image incorrectly (i.e. it doesn't leave my computer) or if I'm not tagging it properly to WP's liking.
from base64 import b64encode
import json
import requests
def imgUploadREST(imgPath):
url = 'https://www.XXXXXXXXXX.com/wp-json/wp/v2/media'
auth = b64encode('{}:{}'.format('USERNAME','PASS'))
payload = {
'type': 'image/jpeg', # mimetype
'title': 'title',
"Content":"content",
"excerpt":"Excerpt",
}
headers = {
'post_content':'post_content',
'Content':'content',
'Content-Disposition' : 'attachment; filename=image_20170510.jpg',
'Authorization': 'Basic {}'.format(auth),
}
with open(imgPath, "rb") as image_file:
files = {'field_name': image_file}
r = requests.post(url, files=files, headers=headers, data=payload)
print r
response = json.loads(r.content)
print response
return response
I've seen a fair number of answers in PHP or Node.js, but I'm having trouble understanding the syntax in Python. Thank you for any help!
|
[
"I've figured it out! \nWith this function I'm able to upload images via the WP REST api to my site (Photo Gear Hunter.) The function returns the ID of the image. You can then pass that id to a new post call and make it the featured image, or do whatever you wish with it.\ndef restImgUL(imgPath):\n url='http://xxxxxxxxxxxx.com/wp-json/wp/v2/media'\n data = open(imgPath, 'rb').read()\n fileName = os.path.basename(imgPath)\n res = requests.post(url='http://xxxxxxxxxxxxx.com/wp-json/wp/v2/media',\n data=data,\n headers={ 'Content-Type': 'image/jpg','Content-Disposition' : 'attachment; filename=%s'% fileName},\n auth=('authname', 'authpass'))\n # pp = pprint.PrettyPrinter(indent=4) ## print it pretty. \n # pp.pprint(res.json()) #this is nice when you need it\n newDict=res.json()\n newID= newDict.get('id')\n link = newDict.get('guid').get(\"rendered\")\n print newID, link\n return (newID, link)\n\n",
"An improved version to handle non-ASCII in the filename (Greek letters for example). What it does is convert all non-ASCII characters to a UTF-8 escape sequence.\n(Original code from @my Year Of Code)\ndef uploadImage(filePath):\n data = open(filePath, 'rb').read()\n \n fileName = os.path.basename(filePath)\n \n espSequence = bytes(fileName, \"utf-8\").decode(\"unicode_escape\") \n # Convert all non ASCII characters to UTF-8 escape sequence\n\n res = requests.post(url='http://xxxxxxxxxxxxx.com/wp-json/wp/v2/media',\n data=data,\n headers={'Content-Type': 'image/jpeg',\n 'Content-Disposition': 'attachment; filename=%s' % espSequence,\n },\n auth=('authname', 'authpass'))\n newDict=res.json()\n newID= newDict.get('id')\n link = newDict.get('guid').get(\"rendered\")\n print newID, link\n return (newID, link)\n\nFrom my understading a POST HEADER can only containg ASCII characters\n",
"To specify additional fields supported by the api such as alt text, description ect:\nfrom requests_toolbelt.multipart.encoder import MultipartEncoder\nimport requests\nimport os\nfileName = os.path.basename(imgPath)\nmultipart_data = MultipartEncoder(\n fields={\n # a file upload field\n 'file': (fileName, open(imgPath, 'rb'), 'image/jpg'),\n # plain text fields\n 'alt_text': 'alt test',\n 'caption': 'caption test',\n 'description': 'description test'\n }\n)\n\nresponse = requests.post('http://example/wp-json/wp/v2/media', data=multipart_data,\n headers={'Content-Type': multipart_data.content_type},\n auth=('user', 'pass'))\n\n",
"thanks to @my Year Of Code. \nmy final working code:\nimport \nHOST = \"https://www.crifan.com\"\nAPI_MEDIA = HOST + \"/wp-json/wp/v2/media\"\nJWT_TOKEN = \"eyJxxxxxxxxjLYB4\"\n\n imgMime = gImageSuffixToMime[imgSuffix] # 'image/png'\n imgeFilename = \"%s.%s\" % (processedGuid, imgSuffix) # 'f6956c30ef0b475fa2b99c2f49622e35.png'\n authValue = \"Bearer %s\" % JWT_TOKEN\n curHeaders = {\n \"Authorization\": authValue,\n \"Content-Type\": imgMime,\n 'Content-Disposition': 'attachment; filename=%s' % imgeFilename,\n }\n # curHeaders={'Authorization': 'Bearer eyJ0xxxyyy.zzzB4', 'Content-Type': 'image/png', 'Content-Disposition': 'attachment; filename=f6956c30ef0b475fa2b99c2f49622e35.png'}\n uploadImgUrl = API_MEDIA\n resp = requests.post(\n uploadImgUrl,\n # proxies=cfgProxies,\n headers=curHeaders,\n data=imgBytes,\n )\n\nreturn 201 means Created OK\nresponse json look like:\n{\n \"id\": 70393,\n \"date\": \"2020-03-07T18:43:47\",\n \"date_gmt\": \"2020-03-07T10:43:47\",\n \"guid\": {\n \"rendered\": \"https://www.crifan.com/files/pic/uploads/2020/03/f6956c30ef0b475fa2b99c2f49622e35.png\",\n \"raw\": \"https://www.crifan.com/files/pic/uploads/2020/03/f6956c30ef0b475fa2b99c2f49622e35.png\"\n },\n...\n\nmore details refer my (Chinese) post: 【已解决】用Python通过WordPress的REST API上传图片\n",
"This is my working code for uploading image into WordPress from local image or from url. hope someone find it useful.\nimport requests, json, os, base64\n\ndef wp_upload_image(domain, user, app_pass, imgPath):\n # imgPath can be local image path or can be url\n url = 'https://'+ domain + '/wp-json/wp/v2/media'\n filename = imgPath.split('/')[-1] if len(imgPath.split('/')[-1])>1 else imgPath.split('\\\\')[-1]\n extension = imgPath[imgPath.rfind('.')+1 : len(imgPath)]\n if imgPath.find('http') == -1:\n try: data = open(imgPath, 'rb').read()\n except:\n print('image local path not exits')\n return None\n else:\n rs = requests.get(imgPath)\n if rs.status_code == 200:\n data = rs.content\n else:\n print('url get request failed')\n return None\n headers = { \"Content-Disposition\": f\"attachment; filename={filename}\" , \"Content-Type\": str(\"image/\" + extension)}\n rs = requests.post(url, auth=(user, app_pass), headers=headers, data=data)\n print(rs)\n return (rs.json()['source_url'], rs.json()['id'])\n\n\n"
] |
[
18,
1,
1,
0,
0
] |
[
"import base64\nimport os\n\nimport requests\n\ndef rest_image_upload(image_path):\n message = '<user_name>' + \":\" + '<application password>'\n message_bytes = message.encode('ascii')\n base64_bytes = base64.b64encode(message_bytes)\n base64_message = base64_bytes.decode('ascii')\n\n # print(base64_message)\n\n data = open(image_path, 'rb').read()\n file_name = os.path.basename(image_path)\n res = requests.post(url='https://<example.com>/wp-json/wp/v2/media',\n data=data,\n headers={'Content-Type': 'image/jpg',\n 'Content-Disposition': 'attachment; filename=%s' % file_name,\n 'Authorization': 'Basic ' + base64_message})\n new_dict = res.json()\n new_id = new_dict.get('id')\n link = new_dict.get('guid').get(\"rendered\")\n # print(new_id, link)\n return new_id, link\n\n"
] |
[
-1
] |
[
"python",
"python_requests",
"rest",
"wordpress",
"wp_api"
] |
stackoverflow_0043915184_python_python_requests_rest_wordpress_wp_api.txt
|
Q:
Convert categorical data using 'if'
I have categorical data
df(notes) with the resulting number of a sum from 0 to 6.
How do I convert this to another categorical variable (0, 1) using the conditional 'if'?
For example: if df(notes) >= 1, result = 1;
if df(notes) = 0, result = 0.
I don't know how to integrate the conditional with only values from 0 to 6.
A:
If I correctly understood your question, you seem to have a dataframe with a column named notes that holds categorical integer values between 0 and 6. And you need to transform them to 0 or 1 values, 0 if the note is 0 and 1 if it is strictly positive.
If it is the case, you can achieve this by using the apply function on your notes column:
df['notes'] = df['notes'].apply(lambda x: 1 if x >= 1 else 0)
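An equivalent vectorized form (a minor alternative, typically faster on large frames) avoids the Python-level lambda:
df['notes'] = (df['notes'] >= 1).astype(int)  # True/False -> 1/0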
|
Convert categorical data using 'if'
|
I have categorical data
df(notes) with the resulting number of a sum from 0 to 6.
How do I convert this to another categorical variable (0, 1) using the conditional 'if'?
For example: if df(notes) >= 1, result = 1;
if df(notes) = 0, result = 0.
I don't know how to integrate the conditional with only values from 0 to 6.
|
[
"If I correctly understood your question, you seem to have a dataframe with a column named notes that holds categorical integer values between 0 and 6. And you need to transform them to 0 or 1 values, 0 if the note is 0 and 1 if it is strictly positive.\nIf it is the case, you can achieve this by using the apply function on your notes column:\ndf['notes'] = df['notes'].apply(lambda x: 1 if x >= 1 else 0)\n\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074586158_python.txt
|
Q:
Can't grab anything (title, price, etc) from a webpage using scrapy
I'm trying to extract the title of some products, but it doesn't work and yields an empty list every time. I tried grabbing the CSS and XPath of the 'title' using the SelectorGadget extension but failed, and tried to grab the path by inspecting the element, yet I failed.
These are some CSS, XPath (from the SelectorGadget tool) and inspect-element paths that I tried that didn't work:
css:
response.css('.eyNLqb > span > span > span:nth-child(1)').css('::text').extract()
xpath:
response.xpath('//*[contains(concat( " ", @class, " " ), concat( " ", "eyNLqb", " " ))]//>//span//>//span//>//span[(((count(preceding-sibling::*) + 1) = 1) and parent::*)]/text()').extract()
inspectingElement:
response.css('div.sc-9d1cc060-20.eyNLqb span span span').css('::text').extract()
here is the full code:
import scrapy
from ..items import NoonItem
class NoonspiderSpider(scrapy.Spider):
name = 'noonspider'
allowed_domains = ['noon.com']
start_urls = ['https://www.noon.com/uae-en/search/?q=figurine']
def parse(self, response):
items = NoonItem()
items['title'] = response.css('.eyNLqb span').css('::text').extract()
yield items
here is items.py
import scrapy
class NoonItem(scrapy.Item):
# define the fields for your item here like:
title = scrapy.Field()
here is the log
2022-11-27 02:10:27 [scrapy.utils.log] INFO: Scrapy 2.7.1 started (bot: noon)
2022-11-27 02:10:27 [scrapy.utils.log] INFO: Versions: lxml 4.9.1.0, libxml2 2.9.12, cssselect 1.2.0, parsel 1.7.0, w3lib 2.0.1, Twisted 22.10.0, Python 3.9.5 (tags/v3.9.5:0a7dcbd, May 3 2021, 17:27:52) [MSC v.1928 64 bit (AMD64)], pyOpenSSL 22.1.0 (OpenSSL 3.0.7 1 Nov 2022), cryptography 38.0.3, Platform Windows-8.1-6.3.9600-SP0
2022-11-27 02:10:27 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'noon',
'NEWSPIDER_MODULE': 'noon.spiders',
'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
'ROBOTSTXT_OBEY': True,
'SPIDER_MODULES': ['noon.spiders'],
'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2022-11-27 02:10:27 [asyncio] DEBUG: Using selector: SelectSelector
2022-11-27 02:10:27 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2022-11-27 02:10:27 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2022-11-27 02:10:27 [scrapy.extensions.telnet] INFO: Telnet Password: d24399c47a8a1a1f
2022-11-27 02:10:27 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2022-11-27 02:10:28 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2022-11-27 02:10:28 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2022-11-27 02:10:28 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2022-11-27 02:10:28 [scrapy.core.engine] INFO: Spider opened
2022-11-27 02:10:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2022-11-27 02:10:28 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2022-11-27 02:10:28 [filelock] DEBUG: Attempting to acquire lock 354146442064 on C:\Users\Mohamed.aldhuhoori\.cache\python-tldextract\3.9.5.final__ScrapyTutorial__4afa8a__tldextract-3.4.0\publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-11-27 02:10:28 [filelock] DEBUG: Lock 354146442064 acquired on C:\Users\Mohamed.aldhuhoori\.cache\python-tldextract\3.9.5.final__ScrapyTutorial__4afa8a__tldextract-3.4.0\publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-11-27 02:10:29 [filelock] DEBUG: Attempting to release lock 354146442064 on C:\Users\Mohamed.aldhuhoori\.cache\python-tldextract\3.9.5.final__ScrapyTutorial__4afa8a__tldextract-3.4.0\publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-11-27 02:10:29 [filelock] DEBUG: Lock 354146442064 released on C:\Users\Mohamed.aldhuhoori\.cache\python-tldextract\3.9.5.final__ScrapyTutorial__4afa8a__tldextract-3.4.0\publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-11-27 02:10:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.noon.com/robots.txt> (referer: None)
2022-11-27 02:10:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.noon.com/uae-en/search/?q=figurine> (referer: None)
2022-11-27 02:10:29 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/uae-en/search/?q=figurine>
{'title': []}
2022-11-27 02:10:29 [scrapy.core.engine] INFO: Closing spider (finished)
2022-11-27 02:10:29 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 477,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 71960,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'elapsed_time_seconds': 1.162434,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2022, 11, 26, 22, 10, 29, 841064),
'httpcompression/response_bytes': 285255,
'httpcompression/response_count': 2,
'item_scraped_count': 1,
'log_count/DEBUG': 10,
'log_count/INFO': 10,
'response_received_count': 2,
'robotstxt/request_count': 1,
'robotstxt/response_count': 1,
'robotstxt/response_status_count/200': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2022, 11, 26, 22, 10, 28, 678630)}
2022-11-27 02:10:29 [scrapy.core.engine] INFO: Spider closed (finished)
A:
They are using JavaScript to load the page dynamically. Fortunately their search API is fairly straightforward and most likely provides all of the information you are looking for.
import scrapy

class NoonspiderSpider(scrapy.Spider):
    name = 'noonspider'
    allowed_domains = ['noon.com']
    start_urls = ['https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine']

    def parse(self, response):
        for i in response.json()["hits"]:
            yield {'title': i['name']}
{'title': 'Gold Plated Attractive Jewelry Box Multicolour '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Football World Cup Trophy Gold '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Sitting Camel Figurine Multicolour '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Gold Plated Attractive Decorative Aftaba Set Golden '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Merry Go Round Carousel Music Box Pink 110x190x110millimeter '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Astronaut Moon Lamp Spaceman Night Light Battery Operated Space Figurine Desktop Lamp Gifts for Outerspace Party Favors Bedroom Decor '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': "Arts & Crafts Toys,Two Boxes Pack Children's Plaster Painting Set,DIY Graffiti Toys,Paint Your Own Figurines,STEAM Creative DIY Toys,Ceramics Plaster Painting Set Gift Toys For 6+ Year Old Boys & Girl "}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Christmas Ornaments Santa Claus Figurine '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Harry Potter Black Edition Classic Mini Music Box '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Home Decor Sculptures Collectible Figurines Stand Artwork Modern Graffiti Art for Home Decor Living Room Bedroom Office Retail Decoration Gift Bulldog Statue Resin L25xH16xW10 cm '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Standing Camel Figurine Brown '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Feng Shui Natural Citrine Gem Money Tree Yellow 470g '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Trivia Metal Abstract Flute Man Figurine Silver 20cm '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Fuse Face Changer Figurine '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': "Music Box Love Engraved Vintage Music Box Best Gift for Girlfriend Valentine's Day to Girlfriend "}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Tacto Dino by PlayShifu - Interactive Dinosaur Figurines | Explore 100+ Facts | Works with iPads, Android tablets, Amazon Fire tablets | Gift for Boys & Girls, Ages 4-8 (Tablet Not Included) 31 x 26 x 6cm '
}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Grendizer SFC Collectible PVC Figure 33cm Tall Statue Anime Manga Figurine Home Room Office Décor Gift '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'You are My Sunshine Wood Music Box for Wife Daughter Son Laser Engraved Vintage Wooden Hand Crank Music Box Gifts '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Bell with Doll Figurine 2.5inch Assorted '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': '6pcs/set Jujutsu Kaisen Anime Figure Itadori Yuji Fushiguro Megumi Action Figure Gojo Satoru Kugisaki Nobara Figurine Model Toys '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Black Panther Figurine '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Trivia Metal Kick In Progress Figurine Silver 24cm '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Modern Cute Coin Bank Box Resin KAWS Figurine Home Decorations Coin Storage Box Holder Toy Child Gift Organizer Money Box KAWS Blue Type B 10x25x9cm '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Christmas Ornaments Santa Claus Figurine '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Vintage Gramophone Shaped Music Box Gold/Red 130x225x107millimeter '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Standing Animal Collection Figurine Fox DY1940-3 Multicolour '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Wooden Music Box Hand Crank Carved Vintage Mechanism Music Box for Home Decor Gifts '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Bald Eagle Figurine '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Lighted Nativity Crèche Figurine Gold/Brown/Red 2.99inch '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'New Pink Wooden Merry Go Round Carousel Classic Music Box Gift Toy '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Burj Khalifa And Sitting Camel With Waterball On Top Figurine Multicolour '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Trivia Metal Men With Log Gold 14cm '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Inaara Metal Hanging Man Figurine Gold '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': "Vintage Merry-Go-Round Horse Valentine's Birthday Gift Carousel Music Box Pink 11 x 18cm "}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Girl Dress Figurine For Home Décor Gold/Grey 36.6x23.2x52cm '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'My Daughter You Are Wood Music Box for Wife Daughter Son Dad Laser Engraved Vintage Wooden Hand Crank Music Box Gifts '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Electroplating Trophy Gold 52cm '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Roblox Game Zombie Attack Playset 7cm PVC Suite Dolls Action Figures Boys Toys Model Figurines for Collection Birthday Gifts for Kids (21 pcs) '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Merry Go Round Carousel Music Box Blue 110x190x110millimeter '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Attractive Jewelry Box Golden/White '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Scout Wooden Horse Bust Figurine Brown '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Astronaut Figurines Cake Topper Outer Space Birthday Decoration Spaceman Model Display Miniature Toys Set Planet Rocket Pearl Balls and Star DIY Toppers for Kids Party (4Pcs) '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Snoopy With Woodstock In Nest Collectible Figure Beige/Green/Brown 6.75inch '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Electronic Burj Al Arab Showpiece with USB Cable Silver/White '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Romantic Christmas Trees Music Box White/Red/Blue 21x11centimeter '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Tumbler 3D Figurine Avengers Comic Heroes Iron Man 360ml '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Cinderella Ball Dress Figurine '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Double Sided Camel With Waterball on Top Figurines Multicolour '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Midnight Dragon Water Snow Globe Figurine Grey/Blue/Green 3.5inch '}
2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>
{'title': 'Sheep Shape Creative Hanging Double sided Black White Message Board Hanging Black 36 x 1 x 28cm '}
2022-11-26 15:03:14 [scrapy.core.engine] INFO: Closing spider (finished)
2022-11-26 15:03:14 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 329,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 79492,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'elapsed_time_seconds': 0.721553,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2022, 11, 26, 23, 3, 14, 214060),
'item_scraped_count': 50,
'log_count/DEBUG': 56,
'log_count/INFO': 10,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2022, 11, 26, 23, 3, 13, 492507)}
2022-11-26 15:03:14 [scrapy.core.engine] INFO: Spider closed (finished)
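As an aside, since the question title also mentions prices: the same hits objects most likely carry pricing fields as well. This is only a hedged sketch; the 'price' and 'sale_price' key names are assumptions, so print one hit's keys() first to confirm what the API actually returns:

    def parse(self, response):
        for i in response.json()["hits"]:
            yield {
                'title': i['name'],
                # assumed key names: verify with print(i.keys()) on one hit
                'price': i.get('price'),
                'sale_price': i.get('sale_price'),
            }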
|
Can't grab anything (title, price, etc) from a webpage using scrapy
|
I'm trying to extract the titles of some products, but it doesn't work and yields an empty list every time. I tried grabbing the CSS and XPath of the 'title' using the SelectorGadget extension but failed, and I tried to grab the path by inspecting the element, yet I failed.
These are some CSS, XPath (from the SelectorGadget tool) and inspect-element paths that I tried that didn't work:
css:
response.css('.eyNLqb > span > span > span:nth-child(1)').css('::text').extract()
xpath:
response.xpath('//*[contains(concat( " ", @class, " " ), concat( " ", "eyNLqb", " " ))]//>//span//>//span//>//span[(((count(preceding-sibling::*) + 1) = 1) and parent::*)]/text()').extract()
inspectingElement:
response.css('div.sc-9d1cc060-20.eyNLqb span span span').css('::text').extract()
here is the full code:
import scrapy
from ..items import NoonItem

class NoonspiderSpider(scrapy.Spider):
    name = 'noonspider'
    allowed_domains = ['noon.com']
    start_urls = ['https://www.noon.com/uae-en/search/?q=figurine']

    def parse(self, response):
        items = NoonItem()
        items['title'] = response.css('.eyNLqb span').css('::text').extract()
        yield items
here is items.py
import scrapy

class NoonItem(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
here is the log
2022-11-27 02:10:27 [scrapy.utils.log] INFO: Scrapy 2.7.1 started (bot: noon)
2022-11-27 02:10:27 [scrapy.utils.log] INFO: Versions: lxml 4.9.1.0, libxml2 2.9.12, cssselect 1.2.0, parsel 1.7.0, w3lib 2.0.1, Twisted 22.10.0, Python 3.9.5 (tags/v3.9.5:0a7dcbd, May 3 2021, 17:27:52) [MSC v.1928 64 bit (AMD64)], pyOpenSSL 22.1.0 (OpenSSL 3.0.7 1 Nov 2022), cryptography 38.0.3, Platform Windows-8.1-6.3.9600-SP0
2022-11-27 02:10:27 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'noon',
'NEWSPIDER_MODULE': 'noon.spiders',
'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
'ROBOTSTXT_OBEY': True,
'SPIDER_MODULES': ['noon.spiders'],
'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2022-11-27 02:10:27 [asyncio] DEBUG: Using selector: SelectSelector
2022-11-27 02:10:27 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2022-11-27 02:10:27 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2022-11-27 02:10:27 [scrapy.extensions.telnet] INFO: Telnet Password: d24399c47a8a1a1f
2022-11-27 02:10:27 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2022-11-27 02:10:28 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2022-11-27 02:10:28 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2022-11-27 02:10:28 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2022-11-27 02:10:28 [scrapy.core.engine] INFO: Spider opened
2022-11-27 02:10:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2022-11-27 02:10:28 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2022-11-27 02:10:28 [filelock] DEBUG: Attempting to acquire lock 354146442064 on C:\Users\Mohamed.aldhuhoori\.cache\python-tldextract\3.9.5.final__ScrapyTutorial__4afa8a__tldextract-3.4.0\publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-11-27 02:10:28 [filelock] DEBUG: Lock 354146442064 acquired on C:\Users\Mohamed.aldhuhoori\.cache\python-tldextract\3.9.5.final__ScrapyTutorial__4afa8a__tldextract-3.4.0\publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-11-27 02:10:29 [filelock] DEBUG: Attempting to release lock 354146442064 on C:\Users\Mohamed.aldhuhoori\.cache\python-tldextract\3.9.5.final__ScrapyTutorial__4afa8a__tldextract-3.4.0\publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-11-27 02:10:29 [filelock] DEBUG: Lock 354146442064 released on C:\Users\Mohamed.aldhuhoori\.cache\python-tldextract\3.9.5.final__ScrapyTutorial__4afa8a__tldextract-3.4.0\publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-11-27 02:10:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.noon.com/robots.txt> (referer: None)
2022-11-27 02:10:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.noon.com/uae-en/search/?q=figurine> (referer: None)
2022-11-27 02:10:29 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/uae-en/search/?q=figurine>
{'title': []}
2022-11-27 02:10:29 [scrapy.core.engine] INFO: Closing spider (finished)
2022-11-27 02:10:29 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 477,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 71960,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'elapsed_time_seconds': 1.162434,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2022, 11, 26, 22, 10, 29, 841064),
'httpcompression/response_bytes': 285255,
'httpcompression/response_count': 2,
'item_scraped_count': 1,
'log_count/DEBUG': 10,
'log_count/INFO': 10,
'response_received_count': 2,
'robotstxt/request_count': 1,
'robotstxt/response_count': 1,
'robotstxt/response_status_count/200': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2022, 11, 26, 22, 10, 28, 678630)}
2022-11-27 02:10:29 [scrapy.core.engine] INFO: Spider closed (finished)
|
[
"They are using javascript to load their page dynamically. Fortunately their search api is fairly straight forward and provides all of the information you are looking for most likely.\nimport scrapy\n\nclass NoonspiderSpider(scrapy.Spider):\n name = 'noonspider'\n allowed_domains = ['noon.com']\n start_urls = ['https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine']\n\n def parse(self, response):\n for i in response.json()[\"hits\"]:\n yield {'title': i['name']}\n\n{'title': 'Gold Plated Attractive Jewelry Box Multicolour '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Football World Cup Trophy Gold '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Sitting Camel Figurine Multicolour '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Gold Plated Attractive Decorative Aftaba Set Golden '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Merry Go Round Carousel Music Box Pink 110x190x110millimeter '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Astronaut Moon Lamp Spaceman Night Light Battery Operated Space Figurine Desktop Lamp Gifts for Outerspace Party Favors Bedroom Decor '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': \"Arts & Crafts Toys,Two Boxes Pack Children's Plaster Painting Set,DIY Graffiti Toys,Paint Your Own Figurines,STEAM Creative DIY Toys,Ceramics Plaster Painting Set Gift Toys For 6+ Year Old Boys & Girl \"}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Christmas Ornaments Santa Claus Figurine '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Harry Potter Black Edition Classic Mini Music Box '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Home Decor Sculptures Collectible Figurines Stand Artwork Modern Graffiti Art for Home Decor Living Room Bedroom Office Retail Decoration Gift Bulldog Statue Resin L25xH16xW10 cm '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Standing Camel Figurine Brown '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Feng Shui Natural Citrine Gem Money Tree Yellow 470g '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Trivia Metal Abstract Flute Man Figurine Silver 20cm '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Fuse Face Changer Figurine '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': \"Music Box Love Engraved 
Vintage Music Box Best Gift for Girlfriend Valentine's Day to Girlfriend \"}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Tacto Dino by PlayShifu - Interactive Dinosaur Figurines | Explore 100+ Facts | Works with iPads, Android tablets, Amazon Fire tablets | Gift for Boys & Girls, Ages 4-8 (Tablet Not Included) 31 x 26 x 6cm '\n}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Grendizer SFC Collectible PVC Figure 33cm Tall Statue Anime Manga Figurine Home Room Office Décor Gift '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'You are My Sunshine Wood Music Box for Wife Daughter Son Laser Engraved Vintage Wooden Hand Crank Music Box Gifts '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Bell with Doll Figurine 2.5inch Assorted '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': '6pcs/set Jujutsu Kaisen Anime Figure Itadori Yuji Fushiguro Megumi Action Figure Gojo Satoru Kugisaki Nobara Figurine Model Toys '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Black Panther Figurine '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Trivia Metal Kick In Progress Figurine Silver 24cm '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Modern Cute Coin Bank Box Resin KAWS Figurine Home Decorations Coin Storage Box Holder Toy Child Gift Organizer Money Box KAWS Blue Type B 10x25x9cm '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Christmas Ornaments Santa Claus Figurine '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Vintage Gramophone Shaped Music Box Gold/Red 130x225x107millimeter '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Standing Animal Collection Figurine Fox DY1940-3 Multicolour '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Wooden Music Box Hand Crank Carved Vintage Mechanism Music Box for Home Decor Gifts '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Bald Eagle Figurine '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Lighted Nativity Crèche Figurine Gold/Brown/Red 2.99inch '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'New Pink Wooden Merry Go Round Carousel Classic Music Box Gift Toy '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 
https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Burj Khalifa And Sitting Camel With Waterball On Top Figurine Multicolour '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Trivia Metal Men With Log Gold 14cm '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Inaara Metal Hanging Man Figurine Gold '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': \"Vintage Merry-Go-Round Horse Valentine's Birthday Gift Carousel Music Box Pink 11 x 18cm \"}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Girl Dress Figurine For Home Décor Gold/Grey 36.6x23.2x52cm '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'My Daughter You Are Wood Music Box for Wife Daughter Son Dad Laser Engraved Vintage Wooden Hand Crank Music Box Gifts '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Electroplating Trophy Gold 52cm '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Roblox Game Zombie Attack Playset 7cm PVC Suite Dolls Action Figures Boys Toys Model Figurines for Collection Birthday Gifts for Kids (21 pcs) '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Merry Go Round Carousel Music Box Blue 110x190x110millimeter '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Attractive Jewelry Box Golden/White '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Scout Wooden Horse Bust Figurine Brown '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Astronaut Figurines Cake Topper Outer Space Birthday Decoration Spaceman Model Display Miniature Toys Set Planet Rocket Pearl Balls and Star DIY Toppers for Kids Party (4Pcs) '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Snoopy With Woodstock In Nest Collectible Figure Beige/Green/Brown 6.75inch '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Electronic Burj Al Arab Showpiece with USB Cable Silver/White '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Romantic Christmas Trees Music Box White/Red/Blue 21x11centimeter '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Tumbler 3D Figurine Avengers Comic Heroes Iron Man 360ml '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 
https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Cinderella Ball Dress Figurine '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Double Sided Camel With Waterball on Top Figurines Multicolour '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Midnight Dragon Water Snow Globe Figurine Grey/Blue/Green 3.5inch '}\n2022-11-26 15:03:14 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.noon.com/_svc/catalog/api/v3/u/search/?q=figurine>\n{'title': 'Sheep Shape Creative Hanging Double sided Black White Message Board Hanging Black 36 x 1 x 28cm '}\n2022-11-26 15:03:14 [scrapy.core.engine] INFO: Closing spider (finished)\n2022-11-26 15:03:14 [scrapy.statscollectors] INFO: Dumping Scrapy stats:\n{'downloader/request_bytes': 329,\n 'downloader/request_count': 1,\n 'downloader/request_method_count/GET': 1,\n 'downloader/response_bytes': 79492,\n 'downloader/response_count': 1,\n 'downloader/response_status_count/200': 1,\n 'elapsed_time_seconds': 0.721553,\n 'finish_reason': 'finished',\n 'finish_time': datetime.datetime(2022, 11, 26, 23, 3, 14, 214060),\n 'item_scraped_count': 50,\n 'log_count/DEBUG': 56,\n 'log_count/INFO': 10,\n 'response_received_count': 1,\n 'scheduler/dequeued': 1,\n 'scheduler/dequeued/memory': 1,\n 'scheduler/enqueued': 1,\n 'scheduler/enqueued/memory': 1,\n 'start_time': datetime.datetime(2022, 11, 26, 23, 3, 13, 492507)}\n2022-11-26 15:03:14 [scrapy.core.engine] INFO: Spider closed (finished)\n\n"
] |
[
2
] |
[] |
[] |
[
"python",
"scrapy",
"web_scraping"
] |
stackoverflow_0074586059_python_scrapy_web_scraping.txt
|
Q:
(Python) I'm trying to make a game, and if a choice is chosen a certain number of times, a print statement will appear
How can I make it so that after choosing choice A 3 times, the program tells the user they need to sleep?
Choice = input("Are you ready to play? (yes/no) ")
if Choice.lower() == "yes":
    Choice = input(" \n A. Work overtime \n B. Get Some Rest \n C. Grab Something to Eat \n D. Status Check \n Q. Quit \n What will be your choice? ")
    if Choice == "a":
        print("You continue working on your cure.")
A:
First of all, welcome to Stack Overflow!
I totally understand @Edward Peters' suggestion of learning the basics of Python before posting this question.
I can help you with a method to approach this:
Create a counter variable and initialize it to its starting value (e.g. 0)
Increment the counter each time choice "a" is chosen, and check whether it has reached 3; if so, print your message and reset the counter to its initial state (see the sketch below)
If any other choice is selected, reset the counter to its initial state
To help you learn and write Python code, refer to this link: Python Tutorial
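A minimal sketch of that counter approach (the while loop and the exact wording of the messages are additions, since the original snippet only asks once):

counter = 0
ready = input("Are you ready to play? (yes/no) ")
if ready.lower() == "yes":
    while True:
        choice = input(" \n A. Work overtime \n B. Get Some Rest \n C. Grab Something to Eat \n D. Status Check \n Q. Quit \n What will be your choice? ").lower()
        if choice == "a":
            print("You continue working on your cure.")
            counter += 1
            if counter == 3:
                print("You need to sleep!")  # shown after choosing A three times
                counter = 0  # back to the initial state
        elif choice == "q":
            break
        else:
            counter = 0  # any other choice resets the count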
|
(Python) I'm trying to make a game, and if a choice is chosen a certain number of times, a print statement will appear
|
How can I make it so that after choosing choice A 3 times, the program tells the user they need to sleep?
Choice = input("Are you ready to play? (yes/no) ")
if Choice.lower() == "yes":
    Choice = input(" \n A. Work overtime \n B. Get Some Rest \n C. Grab Something to Eat \n D. Status Check \n Q. Quit \n What will be your choice? ")
    if Choice == "a":
        print("You continue working on your cure.")
|
[
"First of all, Welcome to stackoverflow !\nI totally understand @Edward Peters suggestion of learning basics of python before posting this question.\nI can help you with an method to approach this method:\n\nCreate a counter variable and initialize it to 1\nAdd a check in the if choice == \"a\" loop to check if counter is 3; if it so make counter variable in initial state(here:1) and print your message\nIf other if condition is selected make counter variable in initial state\nTo help you to learn and write python code: refer this link :Python Tutorial\n\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074586177_python.txt
|
Q:
Why does removing grid() make my program turn blank?
I'm using .place() in my tkinter program, so I wanted to remove references to grid(). So far my program works but for some reason there's a single .grid() line that makes my whole program turn blank if it's removed. This shouldn't happen, since I'm entirely using .place(). Here is that line:
AllFrames.grid(row=0, column=0, sticky='nsew')
And here is my full code:
from tkinter import *
root = Tk()
root.title("Account Signup")
DarkBlue = "#2460A7"
LightBlue = "#B3C7D6"
root.geometry('350x230')
Menu = Frame()
loginPage = Frame()
registerPage = Frame()
for AllFrames in (Menu, loginPage, registerPage):
    AllFrames.grid(row=0, column=0, sticky='nsew')
    AllFrames.configure(bg=LightBlue)

def show_frame(frame):
    frame.tkraise()
root.grid_rowconfigure(0, weight=1)
root.grid_columnconfigure(0, weight=1)
show_frame(Menu)
# ============= Menu Page =========
menuTitle = Label(Menu, text="Menu", font=("Arial", 25), bg=LightBlue)
menuTitle.place(x=130, y=25)
loginButton1 = Button(Menu, width=25, text="Login", command=lambda: show_frame(loginPage))
loginButton1.place(x=85, y=85)
registerButton1 = Button(Menu, width=25, text="Register", command=lambda: show_frame(registerPage))
registerButton1.place(x=85, y=115)
# ======== Login Page ===========
loginUsernameL = Label(loginPage, text='Username').place(x=30, y=60)
loginUsernameE = Entry(loginPage).place(x=120, y=60)
loginPasswordL = Label(loginPage, text='Password').place(x=30, y=90)
loginPasswordE = Entry(loginPage).place(x=120, y=90)
backButton = Button(loginPage, text='Back', command=lambda: show_frame(Menu)).place(x=0, y=0)
loginButton = Button(loginPage, text='Login', width=20).place(x=100, y=150)
# ======== Register Page ===========
root.mainloop()
I've also noticed that changing anything in the parentheses causes the same result. For example, if I change sticky='nsew' to sticky='n' or row=0 to row=1 it will show a blank page.
How do I remove .grid() from my program without it turning blank?
A:
The place() manager does not reserve any space, unless you tell it directly.
The grid(sticky='nsew') makes the widget expand to fill the entire available space, in this case the containing widget. The widgets inside all use place() which will not take any space. When you change to grid(sticky='n') you place the zero size widget at the top of the containing widget.
But, for your current problem you can assign a size to the widgets:
AllFrames.place(relwidth=1, relheight=1) # w/h relative to size of master
I would recommend using the grid() geometry manager if you are going to make any more complicated layouts.
For more info have a look at effbot on archive.org
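Applied to the posted code, the loop then becomes (a minimal sketch of the suggestion above; the root.grid_rowconfigure/grid_columnconfigure calls also become unnecessary once grid() is gone):

for AllFrames in (Menu, loginPage, registerPage):
    # place() with relative sizes makes each frame fill its master window,
    # so no grid() call is needed anywhere in the program
    AllFrames.place(relwidth=1, relheight=1)
    AllFrames.configure(bg=LightBlue)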
|
Why does removing grid() make my program turn blank?
|
I'm using .place() in my tkinter program, so I wanted to remove references to grid(). So far my program works but for some reason there's a single .grid() line that makes my whole program turn blank if it's removed. This shouldn't happen, since I'm entirely using .place(). Here is that line:
AllFrames.grid(row=0, column=0, sticky='nsew')
And here is my full code:
from tkinter import *
root = Tk()
root.title("Account Signup")
DarkBlue = "#2460A7"
LightBlue = "#B3C7D6"
root.geometry('350x230')
Menu = Frame()
loginPage = Frame()
registerPage = Frame()
for AllFrames in (Menu, loginPage, registerPage):
    AllFrames.grid(row=0, column=0, sticky='nsew')
    AllFrames.configure(bg=LightBlue)

def show_frame(frame):
    frame.tkraise()
root.grid_rowconfigure(0, weight=1)
root.grid_columnconfigure(0, weight=1)
show_frame(Menu)
# ============= Menu Page =========
menuTitle = Label(Menu, text="Menu", font=("Arial", 25), bg=LightBlue)
menuTitle.place(x=130, y=25)
loginButton1 = Button(Menu, width=25, text="Login", command=lambda: show_frame(loginPage))
loginButton1.place(x=85, y=85)
registerButton1 = Button(Menu, width=25, text="Register", command=lambda: show_frame(registerPage))
registerButton1.place(x=85, y=115)
# ======== Login Page ===========
loginUsernameL = Label(loginPage, text='Username').place(x=30, y=60)
loginUsernameE = Entry(loginPage).place(x=120, y=60)
loginPasswordL = Label(loginPage, text='Password').place(x=30, y=90)
loginPasswordE = Entry(loginPage).place(x=120, y=90)
backButton = Button(loginPage, text='Back', command=lambda: show_frame(Menu)).place(x=0, y=0)
loginButton = Button(loginPage, text='Login', width=20).place(x=100, y=150)
# ======== Register Page ===========
root.mainloop()
I've also noticed that changing anything in the parentheses causes the same result. For example, if I change sticky='nsew' to sticky='n' or row=0 to row=1 it will show a blank page.
How do I remove .grid() from my program without it turning blank?
|
[
"The place() manager does not reserve any space, unless you tell it directly.\nThe grid(sticky='nsew') makes the widget expand to fill the entire available space, in this case the containing widget. The widgets inside all use place() which will not take any space. When you change to grid(sticky='n') you place the zero size widget at the top of the containing widget.\nBut, for your current problem you can assign a size to the widgets:\nAllFrames.place(relwidth=1, relheight=1) # w/h relative to size of master\n\nI would recommend using the grid() geometry manager if you are going to make any more complicated layouts.\nFor more info have a look at effbot on archive.org\n"
] |
[
2
] |
[] |
[] |
[
"python",
"screen",
"tkinter"
] |
stackoverflow_0074586068_python_screen_tkinter.txt
|
Q:
python - No module named 'pywidevine.L3'
Have been trying a Python script which has required a bunch of additional modules, which I've installed according to the error messages, each leading to the next one until I got stuck. The specific error is: "ModuleNotFoundError: No module named 'pywidevine.L3'"
Have already installed "pywidevine". Did this on a whim without really knowing what to do when the previous module error message called for "pywidevine.L3.cdm.formats.widevine_pssh_data_pb2", after which this current module error message calling for "pywidevine.L3" comes up. Little to no idea what I'm actually doing or how these things work – only trying to get a script to work.
edit: https://github.com/JockeyJarus/AppleMusic-DownloaderV2
how to use:
python applemusic.py [applemusic link]
A:
The error is correct: there is no module called L3 in pywidevine; please refer to the source code of the package repository. Please edit your question so that we can know what you are trying to achieve.
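As a quick way to check what the installed package actually exposes, a small standard-library sketch like this can help:

import pkgutil
import pywidevine

# List the top-level submodules shipped with the installed pywidevine;
# an 'L3' entry would have to appear here for 'pywidevine.L3' to import.
# If it is missing, the script was likely written against a different
# distribution or fork of the package.
for module_info in pkgutil.iter_modules(pywidevine.__path__):
    print(module_info.name)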
|
python - No module named 'pywidevine.L3'
|
Have been trying a Python script which has required a bunch of additional modules, which I've installed according to the error messages, each leading to the next one until I got stuck. The specific error is: "ModuleNotFoundError: No module named 'pywidevine.L3'"
Have already installed "pywidevine". Did this on a whim without really knowing what to do when the previous module error message called for "pywidevine.L3.cdm.formats.widevine_pssh_data_pb2", after which this current module error message calling for "pywidevine.L3" comes up. Little to no idea what I'm actually doing or how these things work – only trying to get a script to work.
edit: https://github.com/JockeyJarus/AppleMusic-DownloaderV2
how to use:
python applemusic.py [applemusic link]
|
[
"The error is correct there is no module called L3 in pywidevine. please refer to the source code of the package repository. Please make edit to your question so that we can know what are you trying to achieve?\n"
] |
[
0
] |
[] |
[] |
[
"homebrew",
"python",
"widevine"
] |
stackoverflow_0074586258_homebrew_python_widevine.txt
|
Q:
Read certain column in excel given two other values match using pandas
Month Year Open High Low Close/Price Volume
6 2019 86.78 87.11 86.06 86.55 1507828
6 2019 86.63 87.23 84.81 85.06 2481284
6 2019 85.38 85.81 84.75 85.33 2034693
6 2019 85.65 86.86 85.13 86.43 1394847
6 2019 86.66 87.74 86.66 87.55 3025379
7 2019 88.84 89.72 87.77 88.45 4017249
7 2019 89.21 90 87.95 88.87 2237183
7 2019 89.14 91.08 89.14 90.67 1647124
7 2019 90.39 90.95 89.07 90.59 3227673
I want to get the monthly average of: Open High Low Close/Price
How do I set two values (Month, Year) as parameters for getting a value that is in another column?
df = pd.read_excel('DatosUnited.xlsx')
month = df.groupby('Month')
year = df.groupby('Year')
june2019 = month.get_group("6")
year2019 = year.get_group('2019')
I tried something like this, but I don't know how to use both as filters simultaneously
A:
You can use .groupby() with multiple columns, and then you can use .mean() to get the desired averages:
df.groupby(["Month", "Year"]).mean()
This outputs:
Open High Low Close/Price Volume
Month Year
6 2019 86.220 86.9500 85.4820 86.184 2088806.20
7 2019 89.395 90.4375 88.4825 89.645 2782307.25
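Since the question only asks for the averages of Open, High, Low and Close/Price, you can also select just those columns before aggregating, which leaves Volume out of the result:

df.groupby(["Month", "Year"])[["Open", "High", "Low", "Close/Price"]].mean()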
|
Read certain column in excel given two other values match using pandas
|
Month Year Open High Low Close/Price Volume
6 2019 86.78 87.11 86.06 86.55 1507828
6 2019 86.63 87.23 84.81 85.06 2481284
6 2019 85.38 85.81 84.75 85.33 2034693
6 2019 85.65 86.86 85.13 86.43 1394847
6 2019 86.66 87.74 86.66 87.55 3025379
7 2019 88.84 89.72 87.77 88.45 4017249
7 2019 89.21 90 87.95 88.87 2237183
7 2019 89.14 91.08 89.14 90.67 1647124
7 2019 90.39 90.95 89.07 90.59 3227673
I want to get the monthly average of: Open High Low Close/Price
How do I set two values (Month, Year) as parameters for getting a value that is in another column?
df = pd.read_excel('DatosUnited.xlsx')
month = df.groupby('Month')
year = df.groupby('Year')
june2019 = month.get_group("6")
year2019 = year.get_group('2019')
I tried something like this, but I don't know how to use both as filters simultaneously
|
[
"You can use .groupby() with multiple columns, and then you can use .mean() to get the desired averages:\ndf.groupby([\"Month\", \"Year\"]).mean()\n\nThis outputs:\n Open High Low Close/Price Volume\nMonth Year\n6 2019 86.220 86.9500 85.4820 86.184 2088806.20\n7 2019 89.395 90.4375 88.4825 89.645 2782307.25\n\n"
] |
[
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074586321_pandas_python.txt
|
Q:
Timing operation with increasing list size - unexpected behaviour
Problem: How long does it take to generate a Python list of prime numbers from 1 to N? Plot a graph of time taken against N.
I used SymPy to generate the list of primes.
I expected the time to increase monotonically.
But why is there a dip?
import numpy as np
import matplotlib.pyplot as plt
from time import perf_counter as timer
from sympy import sieve

T = []
tic = timer()
N = np.logspace(1, 8, 30)
for Nup in N:
    tic = timer()
    A = list(sieve.primerange(1, Nup))
    toc = timer()
    T.append(toc - tic)

plt.loglog(N, T, 'x-')
plt.grid()
plt.show()
Time taken to generate primes up to N
A:
The runtime of the sieve grows steadily (roughly linearly, up to log factors) in the upper bound N, and your N values are spaced exponentially, so plotting the pure runtime of a sieve on a log-log scale should come out as roughly a straight line for large numbers.
In your copy of the plot, it looks like it's actually getting a bit worse over time, but when I run your script it's not perfectly straight, yet close to a straight line on the log scale towards the end. However, there is a bit of a bend at the start, as with your result.
This makes sense because the sieve caches previous results, but initially it gets little benefit from that; there's the small overhead of setting up the cache and increasing its size, which goes down over time, and more importantly there's the overhead of the actual call to the sieve routine. Also, this type of performance measurement is very sensitive to anything else going on on your system, including whatever Python and your IDE are doing.
Here's your code with some added code to loop over various initial runs, warming the cache of the sieve before every run - it shows pretty clearly what the effect is:
import numpy as np
import matplotlib.pyplot as plt
from time import perf_counter as timer, sleep
from sympy import sieve

for warmup_step in range(0, 5):
    warmup = 100 ** warmup_step
    sieve._reset()  # this resets the internal cache of the sieve
    _ = list(sieve.primerange(1, warmup))  # warming the sieve's cache

    _ = timer()  # avoid initial delays from other elements of the code
    sleep(3)
    print('Start')

    times = []
    tic = timer()

    numbers = np.logspace(1, 8, 30)
    for n in numbers:
        tic = timer()
        _ = list(sieve.primerange(1, n))
        toc = timer()
        times.append(toc - tic)
        print(toc, n)  # provide some visual feedback of speed

    plt.loglog(numbers, times, 'x-')
    plt.title(f'Warmup: {warmup}')
    plt.ylim(1e-6, 1e+1)  # fix the y-axis, so the charts are easily comparable
    plt.grid()
    plt.show()
The lesson to be learned here is that you need to consider overhead. Of your own code and the libraries you use, but also the entire system that sits around it: the Python VM, your IDE, whatever is running on your workstation, the OS that runs it, the hardware.
The test above is better, but if you want really nice results, run the whole thing a dozen times and average out the results over runs.
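For example, a minimal sketch of such averaging with the standard library's timeit, reusing the sieve._reset() call from the script above so every measurement is a cold run:

import timeit
from sympy import sieve

def cold_sieve_time(n, repeats=12):
    """Average time of a cold sieve run up to n, resetting the cache each run."""
    def run():
        sieve._reset()  # same internal cache reset used in the warmup script
        list(sieve.primerange(1, n))
    return timeit.timeit(run, number=repeats) / repeats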
Results: [five plots, one per warmup value, omitted]
|
Timing operation with increasing list size - unexpected behaviour
|
Problem: How long does it take to generate a Python list of prime numbers from 1 to N? Plot a graph of time taken against N.
I used SymPy to generate the list of primes.
I expected the time to increase monotonically.
But why is there a dip?
import numpy as np
import matplotlib.pyplot as plt
from time import perf_counter as timer
from sympy import sieve

T = []
tic = timer()
N = np.logspace(1, 8, 30)
for Nup in N:
    tic = timer()
    A = list(sieve.primerange(1, Nup))
    toc = timer()
    T.append(toc - tic)

plt.loglog(N, T, 'x-')
plt.grid()
plt.show()
Time taken to generate primes up to N
|
[
"The sieve itself requires an exponential amount of time to compute ever larger numbers of primes, so plotting the pure runtime of a sieve should come out to roughly a straight line for large numbers.\nIn your copy of the plot, it looks like it's actually getting a bit worse over time, but when I run your script it's not perfectly straight, but close to a straight line on the log scale towards the end. However, there is a bit of a bend at the start, as with your result.\nThis makes sense because the sieve caches previous results, but initially it gets little benefit from that and there's the small overhead of setting up the cache and increasing its size which goes down over time, and more importantly there's the overhead of the actual call to the sieve routine. Also, this type of performance measurement is very sensitive to anything else going on on your system, including whatever Python and your IDE are doing\nHere's your code with some added code to loop over various initial runs, warming the cache of the sieve before every run - it shows pretty clearly what the effect is:\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom time import perf_counter as timer, sleep\nfrom sympy import sieve\n\nfor warmup_step in range(0, 5):\n warmup = 100 ** warmup_step\n sieve._reset() # this resets the internal cache of the sieve\n _ = list(sieve.primerange(1, warmup)) # warming the sieve's cache\n\n _ = timer() # avoid initial delays from other elements of the code\n sleep(3)\n print('Start')\n\n times = []\n tic = timer()\n\n numbers = np.logspace(1, 8, 30)\n for n in numbers:\n tic = timer()\n _ = list(sieve.primerange(1, n))\n toc = timer()\n times.append(toc - tic)\n print(toc, n) # provide some visual feedback of speed\n\n plt.loglog(numbers, times, 'x-')\n plt.title(f'Warmup: {warmup}')\n plt.ylim(1e-6, 1e+1) # fix the y-axis, so the charts are easily comparable\n plt.grid()\n plt.show()\n\nThe lesson to be learned here is that you need to consider overhead. Of your own code and the libraries you use, but also the entire system that sits around it: the Python VM, your IDE, whatever is running on your workstation, the OS that runs it, the hardware.\nThe test above is better, but if you want really nice results, run the whole thing a dozen times and average out the results over runs.\nResults:\n\n\n\n\n\n"
] |
[
0
] |
[] |
[] |
[
"primes",
"python",
"sympy",
"timing"
] |
stackoverflow_0074585845_primes_python_sympy_timing.txt
|
Q:
Convert pandas dataframe to a specific layout
I'm using software that takes data in a certain format. In order for this software to work, I need to convert a dataframe, like the first screenshot, to a different format, like the second screenshot. Any ideas how I can do that? Thanks in advance!
Original data format
Converted data format
A:
Given a simple dataframe:
0 1 2 3
0 1 2.0 4 1.0
1 2 3.0 r NaN
2 3 NaN 6 NaN
I created a single-column dataframe using:
import pandas as pd
import numpy as np
df = pd.read_csv('test.csv',header=None,sep=";")
joined = pd.concat([df[column].dropna() for column in df.columns], ignore_index=True).to_frame()
Although I don't really understand the need for repeated data in a dataset, for the next step I iterate over the initial dataframe's columns to obtain the index, append an empty column, and replace the required values in place by positional assignment.
initial = 0
for column in df.columns:
    cols = len(joined.columns)
    joined[cols] = np.nan
    bit = df[column].dropna()
    final = initial + len(bit)
    joined.iloc[initial:final, column + 1] = bit
    initial = final
The resulting joined DataFrame:
0 1 2 3 4
0 1 1.0 NaN NaN NaN
1 2 2.0 NaN NaN NaN
2 3 3.0 NaN NaN NaN
3 2.0 NaN 2.0 NaN NaN
4 3.0 NaN 3.0 NaN NaN
5 4 NaN NaN 4 NaN
6 r NaN NaN r NaN
7 6 NaN NaN 6 NaN
8 1.0 NaN NaN NaN 1.0
|
Convert pandas dataframe to a specific layout
|
I'm using software that takes data in a certain format. In order for this software to work, I need to convert a dataframe, like the first screenshot, to a different format, like the second screenshot. Any ideas how I can do that? Thanks in advance!
Original data format
Converted data format
|
[
"Given a simple dataframe:\n\n 0 1 2 3\n 0 1 2.0 4 1.0\n 1 2 3.0 r NaN\n 2 3 NaN 6 NaN\n\n\nCreated a single column dataframe using:\n\n import pandas as pd\n import numpy as np\n \n df = pd.read_csv('test.csv',header=None,sep=\";\")\n joined = pd.concat([df[column].dropna() for column in df.columns], ignore_index=True).to_frame()\n\n\nAlthough I really don't understand the need of repeated data in a dataset,\nas for the next step I iterate over the initial dataframe columns to obtain the index, appended an empty column and replaced the required values in place by positional assignment.\n\n initial = 0\n for column in df.columns:\n cols = len(joined.columns)\n joined[cols] = np.nan\n bit = df[column].dropna()\n final = initial + len(bit)\n joined.iloc[initial:final, column+1] = bit\n initial = final\n\n\nThe resulting joined DataFrame:\n\n 0 1 2 3 4\n 0 1 1.0 NaN NaN NaN\n 1 2 2.0 NaN NaN NaN\n 2 3 3.0 NaN NaN NaN\n 3 2.0 NaN 2.0 NaN NaN\n 4 3.0 NaN 3.0 NaN NaN\n 5 4 NaN NaN 4 NaN\n 6 r NaN NaN r NaN\n 7 6 NaN NaN 6 NaN\n 8 1.0 NaN NaN NaN 1.0\n\n\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074585358_dataframe_pandas_python.txt
|
Q:
Scrapy + Playwright error: "node:events:505 throw er; // Unhandled 'error' event"
I'm scraping an e-commerce website with Scrapy + Playwright. After approximately one hour it has returned 42k records, and then it breaks with the message:
node:events:505
throw er; // Unhandled 'error' event
^
Error: write EPIPE
at WriteWrap.onWriteComplete [as oncomplete] (node:internal/stream_base_commons:94:16)
Emitted 'error' event on Socket instance at:
at emitErrorNT (node:internal/streams/destroy:157:8)
at emitErrorCloseNT (node:internal/streams/destroy:122:3)
at processTicksAndRejections (node:internal/process/task_queues:83:21) {
errno: -32,
code: 'EPIPE',
syscall: 'write'
}
UPDATE I check that browser memory increases a lot to a point that it crashes, it seems that the browser, scrapy or playwright do not close the open instances.
UPDATE2 I'm posting the code because I think some routine could be blowing up the memory. If anyone has any ideas it would be most welcome.
import scrapy
from scrapy_playwright.page import PageMethod


class LojaSpider(scrapy.Spider):
    name = 'loja'
    start_urls = ['https://www.loja.com.br']
    allowed_domains = ['loja.com.br']

    def parse(self, response, **kwargs):
        # Get the links to the main departments
        for link in response.xpath('//div[1][@class="nav-list-column"]//ul//li[@class="nav-item"]/a/@href'):
            yield response.follow(
                url=link.root,
                errback=self.errback,
                meta={
                    'playwright': True,
                    'playwright_page_methods': [PageMethod('wait_for_selector', 'div.css-1431dge-generic-carousel-generic-carousel--with-offset.e1w2odgr0')],
                },
                callback=self.parse_sublinks)

    def parse_sublinks(self, response):
        # Visit every sublink and collect the links to the product grid.
        for link in response.css('div.swiper-slide'):
            url = link.css('a').attrib['href']
            yield response.follow(
                url=url,
                errback=self.errback,
                meta={
                    'playwright': True,
                    # "playwright_include_page": True,
                    'playwright_page_methods': [PageMethod('wait_for_selector', 'div.css-agchm8-products--product-list.e1r1bq5m1')],
                },
                callback=self.parse_products)

    async def parse_products(self, response):
        print('Link verificado: ', response.url)
        # Check whether we are on the product grid and collect the items.
        if response.css('div.css-agchm8-products--product-list.e1r1bq5m1'):
            for product in response.css('div.new-product-thumb'):
                yield {
                    'description': product.css('span.css-1eaoahv-ellipsis.enof0xo0::text').get(),
                    'price': product.css('span.css-gcz93e-price-tag__price.ehxrgxb3::text').get(),
                    'installment_price': product.css('span.css-4id40w-price-tag__content-wrapper.ehxrgxb0::text').get(),
                    'code': product.css('span.css-19qfvzb-new-product-thumb__product-code.ecqorlx1::text').get(),
                    'link': product.css('a').attrib['href']
                }
            current_page = response.css('div.css-bslizp-badge::text').get()
            next_link = response.css('i.glyph.glyph-arrow-right').get()
            next_page = int(current_page) + 1 if next_link else None
            if next_page:
                if int(current_page) > 1:
                    next_page_url = response.url[:-8] + '?page=' + str(next_page).rjust(2, '0')
                else:
                    next_page_url = response.url + '?page=' + str(next_page).rjust(2, '0')
                yield response.follow(
                    next_page_url,
                    errback=self.errback,
                    meta={
                        'playwright': True,
                        # "playwright_include_page": True,
                        'playwright_page_methods': [PageMethod('wait_for_selector', 'div.css-agchm8-products--product-list.e1r1bq5m1')],
                    },
                    callback=self.parse_products
                )
        # Check whether we are on a direct product page and collect the items.
        elif response.css('div.box-1'):
            print('Entrou no elif do div.box-1')
            yield {
                'description': response.css('h1.product-title.align-left.color-text product-description::text').get(),
                'price': response.css('span.css-rwb0cd-to-price__integer.e17u5sne7::text').get(),
                'installment_price': response.css('p.css-1b49m6w-text-text--bold-text-color--n400-text--kilo-heading--no-margin::text').get(),
                'code': response.css('div.badge.product-code.badge-product-code::text').get(),
            }
        else:
            print('Entrou no else para obter os links')
            # If we are neither on the grid nor on a product page, go back to
            # parse_sublinks with the url so the JS carousel links can be captured.
            yield response.follow(
                url=response.url,
                errback=self.errback,
                meta={
                    'playwright': True,
                },
                callback=self.parse_sublinks)
            # page = response.meta['playwright_page']
            # await page.close()

    async def errback(self, failure):
        page = failure.request.meta["playwright_page"]
        await page.close()
I searched a lot but didn't find anything relevant, considering I'm using Scrapy (Python) + Playwright. Before this I had an error about memory overflow, but I solved it with export NODE_OPTIONS="--max-old-space-size=8192"; I don't know if there's any connection.
A:
I had the same issue and it was caused by an unclosed page instance; however, I see that in your code the page is closed.
Please make sure you did the following steps:
Close all playwright instances
Close all BrowserContext objects
Close all opened Page objects
An opened Playwright instance looks like this:
playwright_instance = await async_playwright().start()
browser = await playwright_instance.firefox.launch(headless=headless, proxy=PROXIES[0])
context = await browser.new_context(java_script_enabled=True)
Here is my solution:
await page.close()
await browser.close()
await playwright_instance.stop()
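If closing pages alone doesn't stop the growth, capping how many pages and contexts scrapy-playwright keeps alive can also help. A hedged sketch for settings.py; these setting names exist in recent scrapy-playwright releases, but verify them against the installed version before relying on them:

# settings.py -- assumed setting names; check the scrapy-playwright docs
PLAYWRIGHT_MAX_PAGES_PER_CONTEXT = 4   # cap simultaneous Playwright pages
PLAYWRIGHT_MAX_CONTEXTS = 1            # reuse a single browser context
CONCURRENT_REQUESTS = 8                # core Scrapy request throttle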
|
[
0
] |
[] |
[] |
[
"playwright",
"python",
"scrapy"
] |
stackoverflow_0073962320_playwright_python_scrapy.txt
|
Q:
File reading only first digit of a number rather than the whole number
The problem is that I have a text file in which each line is
PlayerName wins with x points
with each player name and number being different, and since they differ, the numbers are different numbers of digits long.
The problem is that, as part of the code, it needs to read the entire file and print the players with the top 5 scores, like this:
TOP 5 ALL TIME SCORES
Martin wins with 123 points
Jamie wins with 54 points
Kyle wins with 43 points
Andrew wins with 32 points
Dylan wins with 21 points
(The full code is a dice game with random rolling, a gambling system and a tiebreaker, but I'm at my dad's house right now so I don't have access to the full code)
name = str(input("Player 1 name"))
name2 = str(input("Player 2 name"))
score = str(input("Player 1 score"))
score2 = str(input("Player 2 score"))
text_file = open("CH30.txt", "r+")
if score > score2:
    content = text_file.readlines(30)
    if len(content) > 0:
        text_file.write("\n")
    text_file.write(name)
    text_file.write(" wins with ")
    text_file.write(score)
    text_file.write(" points")
else:
    content = text_file.readlines(30)
    if len(content) > 0:
        text_file.write("\n")
    text_file.write(name2)
    text_file.write(" wins with ")
    text_file.write(score2)
    text_file.write(" points")

text_file = open('CH30.txt.', 'r')
Lines = text_file.readlines()
PlayersScores = []

# read each line, get the player name and points
for line in Lines:
    # split the line into a list of strings
    line = line.split(" ")
    # remove \n from the last element
    line[-1] = line[-1].replace("\n", "")
    print(line)
    # find the player name position
    playerName = line.index("wins") - 1
    # find the points position
    points = line.index("points") - 1
    points = int(points)
    # add the tuple (playerName, points) to a list
    PlayersScores.append((line[playerName], line[points]))

# sort by player score in descending order
PlayersScores = sorted(PlayersScores, key=lambda t: t[1], reverse=True)

# get the first 5 players
print("Highest Scores:\n")
for i in range(5):
    print(str(i+1) + ". " + PlayersScores[i][0] + " " + PlayersScores[i][1] + " points")
The output of this is:
Jamie wins with 54 points
Kyle wins with 43 points
Andrew wins with 32 points
Dylan wins with 21 points
Martin wins with 123 points
Note: the names shown are just an example; it actually reads from a text file (CH30.txt) rather than using 5 preselected names.
Any help greatly appreciated
Thanks
A:
Below is the fixed part of the code to get the correct value for points.
I did not touch the other parts of the code.
text_file = open('CH30.txt.', 'r')
Lines = text_file.readlines()
PlayersScores = []
for line in Lines:
    line = line.split()
    playerName = line[0]
    points = int(line[3])
    .....
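One follow-up worth flagging: once points is an int, the question's final print would raise a TypeError when concatenating it to a string. A minimal sketch of the tail end, assuming PlayersScores is built as above:

# Sort numerically now that points is an int, then print the top 5.
PlayersScores.sort(key=lambda t: t[1], reverse=True)
for i, (name, points) in enumerate(PlayersScores[:5], start=1):
    print(f"{i}. {name} {points} points")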
|
[
0
] |
[] |
[] |
[
"numbers",
"python",
"text_files"
] |
stackoverflow_0074585551_numbers_python_text_files.txt
|
Q:
Django Rest how to show list of comment which belongs only from related Blog and Author?
Assume author Jhone writes a blog titled "This blog written by author Jhone" and author Joe writes a blog "This blog written by author Joe". Jhone's blog received 20 comments and Joe's blog received 10 comments. When Jhone logs in to his account he should only be able to see comments that belong to his blog posts, and the same goes for Joe. I tried the query Comment.objects.all().filter(blog__author=request.user.id), but everyone can still see each other's blog comments through my API URL. Here is my code:
from rest_framework import status
from rest_framework.decorators import api_view
from rest_framework.response import Response


@api_view(['POST', 'GET'])
def comment_api(request):
    if request.method == 'POST':
        serializer = CommentSerializer(data=request.data)
        if serializer.is_valid():
            serializer.save()
            return Response(serializer.data, status=status.HTTP_201_CREATED)
        return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)

    if request.method == 'GET':
        comment = Comment.objects.all().filter(blog__author=request.user.id)
        serializer = CommentSerializer(comment, many=True)
        return Response(serializer.data)
serializer.py
class CommentSerializer(serializers.ModelSerializer):
    class Meta:
        model = Comment
        fields = '__all__'
models.py
from django.conf import settings
from django.db import models


class Blog(models.Model):
    author = models.ForeignKey(
        settings.AUTH_USER_MODEL, on_delete=models.CASCADE, blank=True, null=True)
    blog_title = models.CharField(max_length=200, unique=True)


class Comment(models.Model):
    name = models.CharField(max_length=100)
    email = models.EmailField(max_length=100)
    comment = models.TextField()
    blog = models.ForeignKey(Blog, on_delete=models.CASCADE)
A:
I was missing the author id. Instead of Comment.objects.all().filter(blog__author=request.user.id) it should be Comment.objects.all().filter(blog__author_id=request.user.id).
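For context, a minimal sketch of the corrected GET branch. It assumes the endpoint requires authentication: for an anonymous user request.user.id is None, and since author is nullable, the filter would then only match comments on blogs with no author.

if request.method == 'GET':
    comments = Comment.objects.filter(blog__author_id=request.user.id)
    serializer = CommentSerializer(comments, many=True)
    return Response(serializer.data)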
|
[
0
] |
[] |
[] |
[
"django",
"django_rest_framework",
"python",
"python_3.x"
] |
stackoverflow_0074586209_django_django_rest_framework_python_python_3.x.txt
|
Q:
Euclidean distance between features vectors
I have a dataset such as:
team y
A African Dance [[1.059685349464416, 0.328705966472625, 0.3115...
Ballet [[0.486603736877441, 1.678925514221191, 0.0157...
Contemporary [[0.06553386151790601, 2.121821165084839, 0, 0...
B African Dance [[1.129618763923645, 0.775617241859436, 0.0577...
Ballet [[1.164714455604553, 0.6662477850914, 0, 0.138...
Contemporary [[0.050464563071727, 0.856616079807281, 0, 0.3...
I want to go through each row and calculate the Euclidean distance between every pair of array instances in that row.
for i in range(features_vectors.size):
    for j in range(len(features_vectors[i]) - 1):
        fv1 = np.array(features_vectors[i][j])
        fv2 = np.array(features_vectors[i][j+1])
        print(np.linalg.norm(fv1 - fv2))
But I know that this way it won't see all the instances in an array, because I want to calculate the distance between [0][0] and [0][1], then [0][0] and [0][2], and so on. How should I write the nested loop to visit the data in this order?
A:
This will skip duplicates:
for i, fv1 in enumerate(features_vectors):
    for fv2 in features_vectors[i+1:]:
        print(np.linalg.norm(fv1 - fv2))
You haven't really told us anything about the input data (is it one column? Several columns?), so this might need adapting.
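If the vectors stack into a regular 2-D array, SciPy can compute every pairwise Euclidean distance in one call; a hedged sketch, where the vstack step (i.e. that all vectors share one length) is the assumption:

import numpy as np
from scipy.spatial.distance import pdist

X = np.vstack(features_vectors)   # shape (n_vectors, n_features)
distances = pdist(X)              # all n*(n-1)/2 pairwise Euclidean distances
print(distances)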
|
[
0
] |
[] |
[] |
[
"arrays",
"euclidean_distance",
"numpy",
"python"
] |
stackoverflow_0074586430_arrays_euclidean_distance_numpy_python.txt
|
Q:
PyTorch - Receiving 0 Filled List from Prediction
ISSUE:
EXPECTED:tensor([[1., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.,
0., 0., 1., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1.,
0., 0., 1., 1.]], dtype=torch.float64)
ACTUAL:tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], device='cuda:0')
I'm building my own dataloader where I have a list of images, and each image has 40 attributes, with each attribute being a 0 or 1.
E.g., index 1 might represent "Attractive", and index 2 might represent "Big Nose".
I am trying to send in this list of numbers alongside each image, with each index in the list corresponding to an attribute and holding a 0 or 1 for that image.
My issue is that when training, the predicted values are all 0.
__getitem__ in the dataloader returns the image alongside a list of numbers called 'attributes':
def __getitem__(self, idx):
    if torch.is_tensor(idx):
        idx = idx.tolist()
    img_name = os.path.join(self.root_dir,
                            self.malefemale_frame.iloc[idx, 0])
    image = read_image(img_name)
    attributes = self.malefemale_frame.iloc[idx, 2:]
    attributes = [float(i) for i in attributes]
    attributes = torch.from_numpy(np.asarray(attributes))
    # print(attributes)
    return image.float(), attributes
Batch size used here
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import os

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

batch_size = 8

trainset = MaleFemaleDataset(csv_file='attribute-files/CelebAMask-HQ-attribute-anno.txt', root_dir='CelebA-HQ-img//', transform=transform, train=True)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
                                          shuffle=True, num_workers=0)

testset = MaleFemaleDataset(csv_file='attribute-files/CelebAMask-HQ-attribute-anno.txt', root_dir='CelebA-HQ-img//', transform=transform, train=False)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
                                         shuffle=False, num_workers=0)

print("Training/Testing sets initialized")

criterion = nn.CrossEntropyLoss(reduction='none')
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
Training Loop
for epoch in range(1):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        inputs, labels = data[0].to(device), data[1].to(device)
        # print(data[1])
        optimizer.zero_grad()
        outputs = net(inputs.to(device))
        loss = criterion(outputs, labels)
        loss.backward(gradient=torch.ones_like(loss))
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i % 100 == 99:  # print every 100 mini-batches
            print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 100:.3f}')
            running_loss = 0.0

print('Finished Training')
It doesn't get much better than 30...
[1, 100] loss: 33.356
[1, 200] loss: 36.982
[1, 300] loss: 36.495
[1, 400] loss: 33.763
...etc
Testing Loop
correct, total = 0, 0
with torch.no_grad():
    # Iterate over the test data and generate predictions
    for i, data in enumerate(testloader, 0):
        # Get inputs
        inputs, targets = data
        # Generate outputs
        outputs = net(inputs.to(device))
        # Set total and correct
        _, predicted = torch.max(outputs, 0)
        total += targets.size(0)
        print("EXPECTED:" + str(targets))
        print("ACTUAL:" + str(predicted.tolist()))
        # for predict in predicted:
        #     print(predict.item())
        correct += (predicted == targets).sum().item()

# Print accuracy
print('Accuracy: %d %%' % (100 * correct / total))
Final Output
EXPECTED:tensor([[1., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.,
0., 0., 1., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1.,
0., 0., 1., 1.]], dtype=torch.float64)
ACTUAL:tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], device='cuda:0')
A:
I got it working by modifying my testing loop: I removed the torch.max() call and simply used the raw values returned by the net.
correctAttributes = 0
total = 0  # missing in the original snippet; needed for the final division
threshold = .5
with torch.no_grad():
    for i, data in enumerate(testloader, 0):
        # Get inputs
        inputs, targets = data
        # Generate outputs
        outputs = net(inputs.to(device))
        # Check predictions
        for predictedAttributes, targetAttributes in zip(outputs.tolist(), targets.tolist()):
            for predictedAttribute, targetAttribute in zip(predictedAttributes, targetAttributes):
                if predictedAttribute > threshold and targetAttribute > threshold:
                    correctAttributes += 1
                elif predictedAttribute < threshold and targetAttribute < threshold:
                    correctAttributes += 1
                total += 1

# Print accuracy
print('Accuracy: %d %%' % (100 * correctAttributes / total))
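As background on why the original loop printed zeros: torch.max(outputs, 0) reduces over the batch dimension, so predicted held per-attribute argmax positions across the batch rather than 0/1 labels. For a 40-attribute multi-label setup, the usual decision rule is a sigmoid followed by a threshold (and nn.BCEWithLogitsLoss rather than CrossEntropyLoss at training time). A minimal sketch, assuming outputs holds raw logits of shape (batch, 40):

import torch

probs = torch.sigmoid(outputs)       # squash logits into [0, 1]
predicted = (probs > 0.5).long()     # one independent 0/1 decision per attribute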
|
[
0
] |
[] |
[] |
[
"artificial_intelligence",
"numpy",
"python",
"pytorch"
] |
stackoverflow_0074579784_artificial_intelligence_numpy_python_pytorch.txt
|
Q:
graph that combines bar and line
I would like to reproduce this chart with Python, i.e. the unemployment rate with shaded periods corresponding to recessions.
I downloaded the two series from the FRED database with:
import numpy as np
import pandas as pd
import pandas_datareader as wb
import datetime as dt
import matplotlib.pyplot as plt
data_fred = wb.get_data_fred(['USRECD'], dt.datetime(1969, 12, 31)).asfreq('MS')
data_fred_m = wb.get_data_fred(['UNRATE', ],dt.datetime(1969, 12, 31))
data = pd.concat([data_fred,data_fred_m], axis =1)
A:
Unfortunately I do not have your data, so I generated something on my own. Nevertheless, just look at the plotting part; I propose using patches from matplotlib. Here is the script:
#!/usr/bin/env ipython
# ---------------------
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pylab as plt
import matplotlib.patches as mpatches
import datetime
# ==============================================================================================
# HERE IS THE DATA GENERATION:
def get_rate(dd, ds, decay):
    dt = dd - ds
    dt = dt.days + dt.seconds/86400
    if dt < 0: return 0.0
    val = np.exp(-dt/decay)
    return val
# ----------------------------
# let us generate random data:
d0 = datetime.datetime(1950, 1, 1)
ndays = (datetime.datetime.now() - d0).days
dvec = [d0 + ii*datetime.timedelta(days=1) for ii in range(ndays)]
# ------------------------------------------------------------
unemp = 7.5 + np.random.random(np.size(dvec))*1
# ------------------------------------------------------------
# some recession periods:
recper_start = [datetime.datetime(1955,12,3), datetime.datetime(1970,6,1), datetime.datetime(1981,9,9), datetime.datetime(1999,4,23), datetime.datetime(2020,3,1)]
recper_len = [500, 700, 300, 160, 190]
recper_end = [recper_start[ii] + datetime.timedelta(days=recper_len[ii]) for ii in range(len(recper_len))]
# ------------------------------------------------------------
for ii in range(len(recper_len)):
    magnitude = 7.5 + 10*np.random.random()
    addunemp = [magnitude*get_rate(dval, recper_start[ii], recper_len[ii]) for dval in dvec]
    unemp = unemp + addunemp
# ------------------------------------------------------------
recdict = {'start': recper_start, 'end': recper_end}
# ================================================================================================
# HERE IS THE PLOTTING:
datadict = {'date': dvec, 'data': unemp}  # data of unemployment
data_a = pd.DataFrame.from_dict(datadict); data_a = data_a.set_index('date')
datadict = {'start': recper_start, 'end': recper_end}
data_b = pd.DataFrame.from_dict(datadict)  # data of start and end of recession
# ------------------------------------------------------------
# Make a plot:
fig = plt.figure(figsize=(20,5)); ax = fig.add_subplot(111)
ax.plot(data_a)
for ir, row in data_b.iterrows():
    # the min-max of rec does not change:
    yy = (np.nanmin(data_a), np.nanmax(data_a))
    # ---------------------------------------------------------
    left, bottom, width, height = (row.start, yy[0], row.end - row.start, np.diff(yy))
    rect = mpatches.Rectangle((left, bottom), width, height, alpha=0.2, fill=True, facecolor="0.25")
    ax.add_patch(rect)
ax.set_ylim(yy); ax.set_xlim(np.min(dvec), np.max(dvec))
plt.savefig('test.png', bbox_inches='tight'); plt.show()
and here is the result:
Hope it helps!
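Since the question already pulls USRECD (a 0/1 recession flag) and UNRATE, the same shading can be applied directly to that frame with axvspan; a hedged sketch, assuming data is the concatenated frame built in the question:

import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(12, 4))
ax.plot(data.index, data['UNRATE'], color='tab:blue')

# Find the start/end of every contiguous USRECD == 1 run and shade it.
rec = data['USRECD'].fillna(0).astype(int)
changes = rec.diff().fillna(rec.iloc[0])
starts = data.index[changes == 1]
ends = data.index[changes == -1]
if len(ends) < len(starts):          # recession still running at the end
    ends = ends.append(data.index[-1:])
for s, e in zip(starts, ends):
    ax.axvspan(s, e, color='0.85')

ax.set_ylabel('Unemployment rate (%)')
plt.show()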
|
[
0
] |
[] |
[] |
[
"matplotlib",
"pandas",
"python"
] |
stackoverflow_0074585379_matplotlib_pandas_python.txt
|
Q:
Create an instance of a class passing values of list as arguments in Python
I'm currently trying to pass the items of a list to create an instance of a class.
My code looks like this
args = ["1", "John", "Doe"]
class Token:
def __init__ = (self, id, name, last_name)
self.id = id
self.name = name
self.last_name = last_name
instance1 = Token(args)
I get the error
TypeError: Token.__init__() missing 3 required positional arguments:
I tried changing the type, the data I received comes from a previous for loop that filters the data I need.
Expecting
To create an instance with the values of the list and pass them as args.
Note
The tuple is currently in order.
A:
Use the iterable unpacking operator:
Token(*args)
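For completeness, a minimal sketch of the unpacking in context, using the same names as the question:

args = ["1", "John", "Doe"]

class Token:
    def __init__(self, id, name, last_name):
        self.id = id
        self.name = name
        self.last_name = last_name

instance1 = Token(*args)   # *args spreads the list into three positional arguments
print(instance1.name)      # John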
|
[
0
] |
[] |
[] |
[
"class",
"instance_variables",
"list",
"python"
] |
stackoverflow_0074586544_class_instance_variables_list_python.txt
|
Q:
How do I make this pandas function faster?
I'm trying to make a function that solves for n in this equation (and then p, q and r; the equation itself was attached as an image). x, y, z are known. It uses binary search to find n with a tolerance of ±0.0001.
# game is a series with implied probabilites for each outcome in a football match
def logfunc(game):
n_range = [1, 0]
n = 0.5
probs = 1 / game
expression = (probs ** (1/n)).sum()
while not math.isclose(expression, 1, abs_tol=0.0001):
if expression > 1:
n_range[0] = n
else:
n_range[1] = n
n = (n_range[0] + n_range[1]) / 2
expression = (probs ** (1/n)).sum()
return game[ODDS] ** (1/n)
I was able to come up with this, but it's painfully slow (on a dataframe with about 300,000 rows).
Bonus question: :)
binary search has big O of O(log n). what is n in this function? Is it number of numbers between 0 and 1 at 0.0001 intervals?
A:
One big issue is that working with Pandas Series is generally slow. A faster alternative is to work with Numpy arrays (which Pandas uses internally). Numpy arrays are not labelled like Series, so the game[ODDS] expression needs adapting if ODDS is not an integer (the positional index of the label must be computed in that case). You can switch to Numpy arrays by just adding the line np_game = game.to_numpy() and replacing the variable game with np_game. This is 25 times faster on my machine. If that is not enough, you can use Numba to speed up the computation further (since it removes the overhead of calling Numpy functions); that should be at least one order of magnitude faster again.
binary search has big O of O(log n). what is n in this function? Is it number of numbers between 0 and 1 at 0.0001 intervals?
Not exactly. It would be true if the convergence check were directly based on n, but this is not the case. The check is based on the convergence of (probs ** (1/n)).sum(), and this expression does not behave linearly at all. For example, for the provided input, we get 0.456 for n=0.5, 0.120 for n=0.25 and 0.013 for n=0.125. Note that the last value is divided by ~10 while n is divided by 2, whereas it was only divided by ~4 in the prior step. I expect this computation to converge at an exponential rate initially and finish in O((log n)**k), where k is a constant that is a bit hard to pin down (certainly >=1) and where n is indeed the number of values between 0 and 1 at 0.0001 intervals. The reason is that the search range is halved at every step and the computed expression can be approximated by a polynomial function. When the range becomes very small, I expect the function to be smooth and even pseudo-linear, so the binary search should be quite efficient for the last steps and k should be a small constant >=1, unless probs is carefully chosen so as not to converge (which is very unlikely to happen with random values, and probably even with real-world ones).
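A hedged sketch of the Numpy rewrite this answer describes; ODDS is the question's label, treated here as a plain positional index, which is an assumption:

import math
import numpy as np

def logfunc_np(game, odds_idx=0):
    # game: 1-D array of implied probabilities for one match
    probs = 1.0 / np.asarray(game, dtype=float)
    n_range = [1.0, 0.0]            # [upper, lower] bound for n, as in the original
    n = 0.5
    expression = (probs ** (1.0 / n)).sum()
    while not math.isclose(expression, 1.0, abs_tol=0.0001):
        if expression > 1.0:
            n_range[0] = n          # shrink the upper bound
        else:
            n_range[1] = n          # raise the lower bound
        n = (n_range[0] + n_range[1]) / 2.0
        expression = (probs ** (1.0 / n)).sum()
    return game[odds_idx] ** (1.0 / n)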
|
[
0
] |
[] |
[] |
[
"big_o",
"optimization",
"pandas",
"python"
] |
stackoverflow_0074585408_big_o_optimization_pandas_python.txt
|
Q:
Multiply Columns Together Based on Condition
Is there a way for me to dynamically multiply columns together based on a value in another column in Python? I'm using Polars if that makes a difference. For example, if calendar_year is 2018, I'd want to multiply columns 2018, 2019, 2020, and 2021 together, but if calendar_year is 2019, I'd only want to multiply columns 2019, 2020, and 2021 together. I'd like to store the result in a new column called product. In the future, we'll have additional columns such as 2022, and 2023, so I'd love the ability to have my formula account for these new columns without having to go into the code base each year and add them to my product manually.
id   ...  calendar_year  2017   2018   2019   2020   2021   product
123  ...  2018           0.998  0.997  0.996  0.995  0.994  0.9801
456  ...  2019           0.993  0.992  0.991  0.990  0.989  0.9557
Thanks in advance for the help!
A:
It looks like you want to multiply CY factors for all years beyond calendar_year, and not have to update this logic for each year.
If that's the case, one way to avoid hard-coding the CY selections is to use melt and filter the results.
(
    df
    .select([
        'id',
        'calendar_year',
        pl.col('^20\d\d$')
    ])
    .melt(
        id_vars=['id', 'calendar_year'],
        variable_name='CY',
        value_name='CY factor',
    )
    .with_column(pl.col('CY').cast(pl.Int64))
    .filter(pl.col('CY') >= pl.col('calendar_year'))
    .groupby('id')
    .agg(
        pl.col('CY factor').product().alias('product')
    )
)
shape: (2, 2)
┌─────┬──────────┐
│ id ┆ product │
│ --- ┆ --- │
│ i64 ┆ f64 │
╞═════╪══════════╡
│ 456 ┆ 0.970298 │
├╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┤
│ 123 ┆ 0.982119 │
└─────┴──────────┘
From there, you can join the result back to your original dataset (using the id column).
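A minimal sketch of that final join, assuming result holds the aggregated frame produced by the snippet above:

# Attach each id's product back onto the original rows.
df_with_product = df.join(result, on='id', how='left')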
|
[
2
] |
[
"I think you could use the np.where(condition,then,else) function to do that.\nDo you want to create a new column with the result of that operation? This could work\ndf['2018_result'] = np.where(df.calendar_year.isin(['2019','2020','2021']),df.2019*df.2020*df.2021, 'add more calculations')\n\nWould be great if you can provide more details :)\n"
] |
[
-2
] |
[
"dataframe",
"python",
"python_polars"
] |
stackoverflow_0074586017_dataframe_python_python_polars.txt
|
Q:
Betting system won't seem to process the user input correctly, how can this be improved?
I'm trying to code a game of craps in which the player has a virtual 'wallet' to gamble with. I've gotten to the user-input part and it's not going as planned.
Here is what I have so far:
import random
import sys

money = "500"

# start the game
a = input("Hello travler, to start the game, please type 'yes'. If you are not ready to begin your journey, type 'no' ")
if a.lower() == "no":
    sys.exit()
else:
    print("Welcome")

print("Hold on... I'm missing something,")
name = input("What is your name, traveler?: ")
print("Welcome,", (name))

# those who need instructions can ask for it,
# others can start the game directly.
a = input("Welcome to a game that'll determine the rest of your life. Do you need instructions? (yes) or (no)? \n")
if a.lower() == "yes":
    print('''1. the player will roll two six sided dice. the sum of the two dice is the player's number for the round.
2. rolling a 7 or an 11 as your first roll win you the game, but be weary, a 2, 3, or 12 automatically causes the player to lose. no more game for you. If a 4, 5, 6, 8, 9, or 10 are rolled on this first roll, that number becomes the 'point.'
3. the fated player continues to roll the two dice again until one of two things occur: either they roll the 'point' again, causing them to win the game; or they roll a 7, causing them to be doomed to the eternals.''')
elif a.lower() == "no":
    print("may luck be with you on this fateful day,", name)

print("You will start off with 500 pieces of luck, if you leave this game with over 1000 pieces of luck, you will be fated for sucsess. On the other hand, should you drop below 0 peices of luck, you will find yourself in debt to the universe, a misfortune that few recover from.")
print("You currently have", money, "luck to spare")

# betting time
while True:
    bet = input("How much luck do you wish to bet? ")
    if bet.isdigit() is True:
        if money > bet:
            print("Bet accepted")
    if bet.isdigit() is True:
        if bet > money:
            print("How unfortunate, you do not have the luck to make this bet.")
    elif bet.isdigit() is False:
        print("Sorry, luck can only be quantitated in number values.")
    # if you bet higher than your money, it wont allow you to bet within after that.
The code below the line # betting time is what I'm banging my head against.
I need the code to do the following:
check if the user input is in fact a number, if it's not a number, make them enter a new input
if the user input is a number, determine if it is within the amount they have in their 'wallet' (if not, make them input a new number)
if the number is within the amount in their 'wallet' and is a digit, give the user the option to "roll dice" triggering the game to start.
If they win, they need to be granted double what they bet, if they lose, they need to lose the amount they bet. The wallet will need to update after each play, giving the user the option to play again with their updated wallet (so if they bet 50 and win, their new balance will be 550. The process continues until the user either reaches 1000 or 0, causing the game to end either way.)
A:
For checking whether the user's bet is a number, use the .isnumeric() method on the bet:
bet.isnumeric()
For the second thing you needed help with, you could compare the bet against the wallet, roughly like this (with proper syntax):
if bet < wallet:
    ...  # bet is affordable, carry on
elif bet > wallet:
    print("You do not have enough money")
To roll the dice, you could use the random.randint() function like this:
random.randint(1,6)
I hope that helps!
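Pulling those pieces together, a hedged sketch of a betting loop; the core fix over the question's code is keeping money numeric instead of the string "500" and converting bet with int() before comparing. The dice rules are simplified to the first-roll win/lose case, so the point-rolling phase from the instructions still needs its own inner loop:

import random

money = 500                         # numeric wallet, not "500"

while 0 < money < 1000:
    bet = input("How much luck do you wish to bet? ")
    if not bet.isdigit():
        print("Sorry, luck can only be quantified in number values.")
        continue
    bet = int(bet)
    if bet > money:
        print("How unfortunate, you do not have the luck to make this bet.")
        continue
    roll = random.randint(1, 6) + random.randint(1, 6)
    if roll in (7, 11):             # simplified first-roll rule only
        money += bet
        print("You rolled", roll, "and won! Luck:", money)
    else:
        money -= bet
        print("You rolled", roll, "and lost. Luck:", money)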
|
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074586524_python.txt
|
Q:
Normalising Histograms Matplotlib
Hi, I am plotting three different histograms which have different total frequencies, but I want to normalise them so that the total frequencies are the same.
As you can see from the picture, the three sets have different total frequencies but I want to normalise them so that they have the same total frequencies but that I want to preserve the proportion of the frequency at each value of the x-axis.
Here's the code I am using to plot the histograms
setA = [22.972972972972972, 0.0, 0.0, 27.5, 25.0, 18.64406779661017, 8.88888888888889, 20.512820512820515, 11.11111111111111, 15.151515151515152, 17.741935483870968, 13.333333333333334, 16.923076923076923, 12.820512820512821, 27.77777777777778, 4.0, 0.0, 15.625, 14.814814814814815, 7.142857142857143, 15.384615384615385, 14.545454545454545, 38.095238095238095, 17.647058823529413, 21.951219512195124, 21.428571428571427, 32.432432432432435, 10.526315789473685, 36.8421052631579, 13.114754098360656, 17.91044776119403, 12.64367816091954, 16.0, 22.727272727272727, 18.181818181818183, 9.523809523809524, 17.105263157894736, 11.904761904761905, 20.58823529411765, 10.714285714285714, 15.686274509803921, 27.5, 16.129032258064516, 21.333333333333332, 40.90909090909091, 11.904761904761905, 13.157894736842104]
setB = [1.492537313432836, 3.5714285714285716, 17.94871794871795, 11.363636363636363, 13.513513513513514, 14.285714285714286, 15.686274509803921, 17.94871794871795, 9.090909090909092, 41.07142857142857, 10.714285714285714, 25.0, 20.0, 40.0, 13.333333333333334, 13.793103448275861, 3.5714285714285716, 17.073170731707318, 25.675675675675677, 15.625, 17.46031746031746, 8.333333333333334, 18.64406779661017, 14.285714285714286, 0.0, 6.0606060606060606, 6.976744186046512, 18.181818181818183, 26.785714285714285, 22.80701754385965, 6.666666666666667, 12.5]
setC = [13.846153846153847, 23.076923076923077, 25.0, 10.714285714285714, 16.666666666666668, 9.75609756097561, 10.0, 10.0, 17.857142857142858, 20.0, 9.75609756097561, 26.470588235294116, 12.5, 13.333333333333334, 4.3478260869565215, 5.882352941176471, 14.545454545454545, 13.333333333333334, 8.571428571428571, 11.764705882352942, 0.0]
plt.figure('sets')
n, bins, patches = plt.hist(setA, 20, alpha=0.40 , label = 'setA')
n, bins, patches = plt.hist(setB, 20, alpha=0.40 , label = 'setB')
n, bins, patches = plt.hist(setC, 20, alpha=0.40 , label = 'setC')
plt.xlabel('Set')
plt.ylabel('Frequency')
plt.title('Different Sets that need to be normalised')
plt.legend()
plt.grid(True)
plt.show()
As a plus, since my aim is to compare the distributions of the three sets, is there a better visualisation than this histogram that I can use to compare them graphically?
A:
You could normalise the histograms using the normed=True option. This means the area under each histogram will add up to 1.
You could also make the plot look a bit tidier by using the same fixed bins for all three histograms (using the bins option to hist: bins = np.arange(0,48,2), for example).
Try this:
import numpy as np
...
mybins = np.arange(0,48,2)
n, bins, patches = plt.hist(setA, bins=mybins, alpha=0.40 , label = 'setA', normed=True)
n, bins, patches = plt.hist(setB, bins=mybins, alpha=0.40 , label = 'setB', normed=True)
n, bins, patches = plt.hist(setC, bins=mybins, alpha=0.40 , label = 'setC', normed=True)
Another option is to plot all three histograms in one call to plt.hist, in which case you can use the stacked=True option, which can further clean up your plot.
Note: this method normalises all three histograms, so the total integral is 1. It does not make all three histograms add up to the same value.
n, bins, patches = plt.hist([setA,setB,setC], bins=mybins,
label = ['setA','setB','setC'],
normed=True, stacked=True)
Or, finally, if a stacked histogram is not to your taste, you can plot the bars next to each other, by again plotting all three histograms in one call, but removing the stacked=True option from the line above:
n, bins, patches = plt.hist([setA,setB,setC], bins=mybins,
label = ['setA','setB','setC'],
normed=True)
As discussed in comments, when using stacked=True, the normed option just means the sum of all three histograms will equal 1, so they may not be normalised in the same way as in the other methods.
To counter this, we can use np.histogram, and plot the results using plt.bar.
For example, using the same data sets from above:
mybins = np.arange(0,48,2)
nA,binsA = np.histogram(setA,bins=mybins,normed=True)
nB,binsB = np.histogram(setB,bins=mybins,normed=True)
nC,binsC = np.histogram(setC,bins=mybins,normed=True)
# Since the sum of each of these will be 1., let's divide by 3.,
# so the sum of the stacked histogram will be 1.
nA/=3.
nB/=3.
nC/=3.
# Use bottom= to set where the bars should begin
plt.bar(binsA[:-1],nA,width=2,color='b',label='setA')
plt.bar(binsB[:-1],nB,width=2,color='g',label='setB',bottom=nA)
plt.bar(binsC[:-1],nC,width=2,color='r',label='setC',bottom=nA+nB)
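Note: in matplotlib 3.1 and later the normed keyword was removed; density=True is the replacement (each histogram then integrates to 1). A minimal sketch reusing setA, setB, setC from the question:
import numpy as np
import matplotlib.pyplot as plt

mybins = np.arange(0, 48, 2)
# density=True is the modern replacement for normed=True
plt.hist([setA, setB, setC], bins=mybins,
         label=['setA', 'setB', 'setC'], density=True)
plt.legend()
plt.show()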
A:
I personally like this function:
import numpy as np
import matplotlib.pyplot as plt
from typing import Optional

def get_histogram(array: np.ndarray,
xlabel: str,
ylabel: str,
title: str,
dpi=200, # dots per inch,
facecolor: str = 'white',
bins: int = None,
show: bool = False,
tight_layout=False,
linestyle: Optional[str] = '--',
alpha: float = 0.75,
edgecolor: str = "black",
stat: Optional = 'count',
color: Optional[str] = None,
):
""" """
# - check it's of size (N,)
if isinstance(array, list):
array: np.ndarray = np.array(array)
assert array.shape == (array.shape[0],)
assert len(array.shape) == 1
assert isinstance(array.shape[0], int)
# -
n: int = array.shape[0]
if bins is None:
bins: int = get_num_bins(n, option='square_root')
# bins: int = get_num_bins(n, option='square_root')
print(f'using this number of {bins=} and data size is {n=}')
# -
fig = plt.figure(dpi=dpi)
fig.patch.set_facecolor(facecolor)
import seaborn as sns
p = sns.histplot(array, stat=stat, color=color)
# n, bins, patches = plt.hist(array, bins=bins, facecolor='b', alpha=alpha, edgecolor=edgecolor, density=True)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.title(title)
# plt.xlim(40, 160)
# plt.ylim(0, 0.03)
plt.grid(linestyle=linestyle) if linestyle else None
plt.tight_layout() if tight_layout else None
plt.show() if show else None
sample plot:
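A hedged usage example for get_histogram above; get_num_bins is not shown in the answer, so a simple square-root rule stands in for it here (both definitions are assumed to live in the same module):
import math
import numpy as np

def get_num_bins(n: int, option: str = 'square_root') -> int:
    # stand-in for the answer's unshown helper: square-root bin rule
    return max(1, int(math.sqrt(n)))

data = np.random.randn(500)
get_histogram(data, xlabel='value', ylabel='count', title='demo', show=True)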
|
Normalising Histograms Matplotlib
|
Hi, I am plotting three different histograms which have different total frequencies, and I want to normalise them so that the total frequencies are the same.
As you can see from the picture, the three sets have different total frequencies; I want to normalise them so they have the same total frequency while preserving the proportion of the frequency at each value on the x-axis.
Here's the code I am using to plot the histograms
setA = [22.972972972972972, 0.0, 0.0, 27.5, 25.0, 18.64406779661017, 8.88888888888889, 20.512820512820515, 11.11111111111111, 15.151515151515152, 17.741935483870968, 13.333333333333334, 16.923076923076923, 12.820512820512821, 27.77777777777778, 4.0, 0.0, 15.625, 14.814814814814815, 7.142857142857143, 15.384615384615385, 14.545454545454545, 38.095238095238095, 17.647058823529413, 21.951219512195124, 21.428571428571427, 32.432432432432435, 10.526315789473685, 36.8421052631579, 13.114754098360656, 17.91044776119403, 12.64367816091954, 16.0, 22.727272727272727, 18.181818181818183, 9.523809523809524, 17.105263157894736, 11.904761904761905, 20.58823529411765, 10.714285714285714, 15.686274509803921, 27.5, 16.129032258064516, 21.333333333333332, 40.90909090909091, 11.904761904761905, 13.157894736842104]
setB = [1.492537313432836, 3.5714285714285716, 17.94871794871795, 11.363636363636363, 13.513513513513514, 14.285714285714286, 15.686274509803921, 17.94871794871795, 9.090909090909092, 41.07142857142857, 10.714285714285714, 25.0, 20.0, 40.0, 13.333333333333334, 13.793103448275861, 3.5714285714285716, 17.073170731707318, 25.675675675675677, 15.625, 17.46031746031746, 8.333333333333334, 18.64406779661017, 14.285714285714286, 0.0, 6.0606060606060606, 6.976744186046512, 18.181818181818183, 26.785714285714285, 22.80701754385965, 6.666666666666667, 12.5]
setC = [13.846153846153847, 23.076923076923077, 25.0, 10.714285714285714, 16.666666666666668, 9.75609756097561, 10.0, 10.0, 17.857142857142858, 20.0, 9.75609756097561, 26.470588235294116, 12.5, 13.333333333333334, 4.3478260869565215, 5.882352941176471, 14.545454545454545, 13.333333333333334, 8.571428571428571, 11.764705882352942, 0.0]
plt.figure('sets')
n, bins, patches = plt.hist(setA, 20, alpha=0.40 , label = 'setA')
n, bins, patches = plt.hist(setB, 20, alpha=0.40 , label = 'setB')
n, bins, patches = plt.hist(setC, 20, alpha=0.40 , label = 'setC')
plt.xlabel('Set')
plt.ylabel('Frequency')
plt.title('Different Sets that need to be normalised')
plt.legend()
plt.grid(True)
plt.show()
As a plus, since my aim is to compare the distributions of the three sets, is there a better visualisation than this histogram that I can use to compare them graphically?
|
[
"You could normalise the histograms using the normed=True option. This will mean that the area of all histograms will add up to 1.\nYou could also make the plot look a bit tidier by using the same fixed bins for all three histograms (using the bins option to hist: bins = np.arange(0,48,2), for example).\nTry this:\nimport numpy as np\n\n...\n\nmybins = np.arange(0,48,2)\n\nn, bins, patches = plt.hist(setA, bins=mybins, alpha=0.40 , label = 'setA', normed=True) \nn, bins, patches = plt.hist(setB, bins=mybins, alpha=0.40 , label = 'setB', normed=True)\nn, bins, patches = plt.hist(setC, bins=mybins, alpha=0.40 , label = 'setC', normed=True) \n\n\n\nAnother option is to plot all three histograms in one call to plt.hist, in which case you can used the stacked=True option, which can further clean up your plot. \nNote: this method normalises all three histograms, so the total integral is 1. It does not make all three histograms add up to the same value.\nn, bins, patches = plt.hist([setA,setB,setC], bins=mybins, \n label = ['setA','setB','setC'], \n normed=True, stacked=True)\n\n\n\nOr, finally, if a stacked histogram is not to your taste, you can plot the bars next to each other, by again plotting all three histograms in one call, but removing the stacked=True option from the line above:\nn, bins, patches = plt.hist([setA,setB,setC], bins=mybins, \n label = ['setA','setB','setC'], \n normed=True)\n\n\n\nAs discussed in comments, when used stacked=True, the normed option just means the sum of all three histograms will equal 1, so they may not be normalised in the same way as in the other methods.\nTo counter this, we can use np.histogram, and plot the results using plt.bar.\nFor example, using the same data sets from above:\nmybins = np.arange(0,48,2)\n\nnA,binsA = np.histogram(setA,bins=mybins,normed=True)\nnB,binsB = np.histogram(setB,bins=mybins,normed=True)\nnC,binsC = np.histogram(setC,bins=mybins,normed=True)\n\n# Since the sum of each of these will be 1., lets divide by 3.,\n# so the sum of the stacked histogram will be 1.\nnA/=3.\nnB/=3.\nnC/=3.\n\n# Use bottom= to set where the bars should begin\nplt.bar(binsA[:-1],nA,width=2,color='b',label='setA')\nplt.bar(binsB[:-1],nB,width=2,color='g',label='setB',bottom=nA)\nplt.bar(binsC[:-1],nC,width=2,color='r',label='setC',bottom=nA+nB)\n\n\n",
"I personally like this function:\ndef get_histogram(array: np.ndarray,\n xlabel: str,\n ylabel: str,\n title: str,\n\n dpi=200, # dots per inch,\n facecolor: str = 'white',\n bins: int = None,\n show: bool = False,\n tight_layout=False,\n linestyle: Optional[str] = '--',\n alpha: float = 0.75,\n edgecolor: str = \"black\",\n stat: Optional = 'count',\n color: Optional[str] = None,\n ):\n \"\"\" \"\"\"\n # - check it's of size (N,)\n if isinstance(array, list):\n array: np.ndarray = np.array(array)\n assert array.shape == (array.shape[0],)\n assert len(array.shape) == 1\n assert isinstance(array.shape[0], int)\n # -\n n: int = array.shape[0]\n if bins is None:\n bins: int = get_num_bins(n, option='square_root')\n # bins: int = get_num_bins(n, option='square_root')\n print(f'using this number of {bins=} and data size is {n=}')\n # -\n fig = plt.figure(dpi=dpi)\n fig.patch.set_facecolor(facecolor)\n\n import seaborn as sns\n p = sns.histplot(array, stat=stat, color=color)\n # n, bins, patches = plt.hist(array, bins=bins, facecolor='b', alpha=alpha, edgecolor=edgecolor, density=True)\n\n plt.xlabel(xlabel)\n plt.ylabel(ylabel)\n plt.title(title)\n # plt.xlim(40, 160)\n # plt.ylim(0, 0.03)\n plt.grid(linestyle=linestyle) if linestyle else None\n plt.tight_layout() if tight_layout else None\n plt.show() if show else None\n\nsample plot:\n\n"
] |
[
5,
0
] |
[] |
[] |
[
"histogram",
"matplotlib",
"python"
] |
stackoverflow_0035482543_histogram_matplotlib_python.txt
|
Q:
Module not found Error when using Google Search API (SERPAPI)
When I run this example code on my local machine:
from serpapi import GoogleSearch
params = {
"api_key": "secret_api_key",
"engine": "google",
"q": "Coffee",
"location": "Austin, Texas, United States",
"google_domain": "google.com",
"gl": "us",
"hl": "en"
}
search = GoogleSearch(params)
results = search.get_dict()
I get the error:
ModuleNotFoundError: No module named 'serpapi'
I can't find anything on Google/Bing search nor the documentation on how to go about installing this module or where it is even located. It does not respond to a pip install.
A:
You can use the command below to solve the issue:
pip install google-search-results
You can find more details on the GitHub link
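A quick, hedged check that the module is importable from the interpreter you are actually running (the PyPI package is google-search-results, but the import name is serpapi):
import importlib.util
import sys

# the package installs the "serpapi" module despite its PyPI name
if importlib.util.find_spec("serpapi") is None:
    print(f"serpapi not found for {sys.executable}; run: pip install google-search-results")
else:
    print("serpapi is importable")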
A:
I was having the same issue even after installing google-search-results.
Problem: the interpreter selected in PyCharm was in a different virtual environment that did not have google-search-results installed.
Solution: in your IDE, add/select the interpreter from the working directory/virtual env where google-search-results was installed.
|
Module not found Error when using Google Search API (SERPAPI)
|
When I run this example code on my local machine:
from serpapi import GoogleSearch
params = {
"api_key": "secret_api_key",
"engine": "google",
"q": "Coffee",
"location": "Austin, Texas, United States",
"google_domain": "google.com",
"gl": "us",
"hl": "en"
}
search = GoogleSearch(params)
results = search.get_dict()
I get the error:
ModuleNotFoundError: No module named 'serpapi'
I can't find anything on Google/Bing search nor the documentation on how to go about installing this module or where it is even located. It does not respond to a pip install.
|
[
"Use can use the below command to solve the above issue \\\npip install google-search-results\n\nYou can find more details on the GitHub link\n",
"I was having the same issue even after installing google-search-results\nProblem: the selected interpreter on pycharm was on a different virtual environment that did not have Google-search installed\nSolution: on your IDE add/select interpreter from working directory/virtual env where google-search-results was installed\n"
] |
[
4,
1
] |
[] |
[] |
[
"google_api",
"google_search_api",
"python"
] |
stackoverflow_0068912059_google_api_google_search_api_python.txt
|
Q:
typing recursive class and inheritance
I have the following class hierarchy:
#!/usr/bin/env python3
from typing import List, Optional, Tuple, Type
class Attribute:
def __init__(self, name: bytes) -> None:
self._name = name
@property
def name(self) -> bytes:
return self._name
class Element:
def __init__(self, name: bytes, attributes: Tuple[Type['Attribute'], ...], elements: Tuple['Element', ...]) -> None:
self._name = name
self._elements = elements
self._attributes = attributes
@property
def name(self) -> bytes:
return self._name
@property
def elements(self) -> Tuple['Element', ...]:
return self._elements
@property
def attributes(self) -> Tuple[Type['Attribute'], ...]:
return self._attributes
class SubAttribute1(Attribute):
def __init__(self, name: bytes, field1: bytes) -> None:
super().__init__(name)
self._afield1 = field1
class SubElement1(Element):
def __init__(self, name: bytes, attributes: Tuple[Type[Attribute], ...], elements: Tuple['Element', ...], field1: bytes, field2: bytes) -> None:
super().__init__(name, attributes, elements)
self._field1 = field1
self._field2 = field2
if __name__ == '__main__':
subE = SubElement1(b'name', None, None, b'', b'')
subA = SubAttribute1(b'name', b'field1')
subE2 = SubElement1(b'name', (subA,), (subE,), b'', b'')
print(subE2.elements[0]._field1)
print(subE2.attributes[0]._afield1)
print(type(subE2.elements[0]))
I subclass the base classes Element and Attribute to add additional fields. The fields 'elements' and 'attributes' should store the derived-class objects respectively; for SubElement1, SubElement1().elements stores a tuple of SubElement1 objects. All works fine, but I get the following mypy errors:
question.py:45: error: Argument 2 to "SubElement1" has incompatible type "Tuple[SubAttribute1]"; expected "Tuple[Type[Attribute], ...]"
question.py:46: error: "Element" has no attribute "_field1"
question.py:47: error: "Type[Attribute]" has no attribute "_afield1"
How can I change the code to eliminate the mypy errors?
A:
This question is quite interesting; I had thought that PEP 646 support was slightly better than it actually is.
I assume python 3.10 and most recent released version of specific checker as of now, unless explicitly specified: mypy==0.991; pyre-check==0.9.17; pyright==1.1.281
Make elements proper
First of all, here's the (simple enough) code that resolves the "elements" issue, but does not help with attributes:
from typing import Generic, List, Optional, Sequence, Tuple, Type, TypeVar
_Self = TypeVar('_Self', bound='Element')
class Attribute:
def __init__(self, name: bytes) -> None:
self._name = name
@property
def name(self) -> bytes:
return self._name
class Element:
def __init__(self: _Self, name: bytes, attributes: tuple[Attribute, ...], elements: Sequence[_Self]) -> None:
self._name = name
self._elements = tuple(elements)
self._attributes = attributes
@property
def name(self) -> bytes:
return self._name
@property
def elements(self: _Self) -> Tuple[_Self, ...]:
return self._elements
@property
def attributes(self) -> Tuple[Attribute, ...]:
return self._attributes
class SubAttribute1(Attribute):
def __init__(self, name: bytes, field1: bytes) -> None:
super().__init__(name)
self._afield1 = field1
class SubElement1(Element):
def __init__(self: _Self, name: bytes, attributes: tuple[Attribute, ...], elements: Sequence[_Self], field1: bytes, field2: bytes) -> None:
super().__init__(name, attributes, elements)
self._field1 = field1
self._field2 = field2
if __name__ == '__main__':
subE = SubElement1(b'name', tuple(), tuple(), b'', b'')
subA = SubAttribute1(b'name', b'field1')
subE2 = SubElement1(b'name', (subA,), (subE,), b'', b'')
print(subE2.elements[0]._field1)
print(subE2.attributes[0]._afield1) # E: "Attribute" has no attribute "_afield1" [attr-defined]
print(type(subE2.elements[0]))
This gives one error (commented in source). Here's playground.
In the near future (this works even on the mypy master branch, but not on 0.991) you'll be able to replace _Self with from typing_extensions import Self and skip annotating the self argument, like this:
# import from typing, if python >= 3.11
from typing_extensions import Self
class Element:
def __init__(self, name: bytes, attributes: tuple[Attribute, ...], elements: Sequence[Self]) -> None:
self._name = name
self._elements = tuple(elements)
self._attributes = attributes
You can try it here - same 1 error.
Variadic attributes
Now you want to preserve the attributes' types: they can be heterogeneous, so you need PEP 646 to continue. The class becomes generic in an unknown number of type variables. pyre and pyright claim to support this (mypy does not; the work is currently in progress). pyre failed to typecheck the solution below, giving a few spurious errors. pyright succeeded (though I personally dislike it, so won't recommend switching). The Pyright sandbox is unofficial and not up-to-date, and doesn't work here: copy it locally, install and run pyright to verify.
from typing import Generic, List, Optional, Sequence, Tuple, Type, TypeVar
from typing_extensions import Unpack, Self, TypeVarTuple
_Ts = TypeVarTuple('_Ts')
class Attribute:
def __init__(self, name: bytes) -> None:
self._name = name
@property
def name(self) -> bytes:
return self._name
class Element(Generic[Unpack[_Ts]]):
def __init__(self, name: bytes, attributes: tuple[Unpack[_Ts]], elements: Sequence[Self]) -> None:
self._name = name
self._elements = tuple(elements)
self._attributes = attributes
@property
def name(self) -> bytes:
return self._name
@property
def elements(self) -> Tuple[Self, ...]:
return self._elements
@property
def attributes(self) -> Tuple[Unpack[_Ts]]:
return self._attributes
class SubAttribute1(Attribute):
def __init__(self, name: bytes, field1: bytes) -> None:
super().__init__(name)
self._afield1 = field1
class SubElement1(Element[Unpack[_Ts]]):
def __init__(self, name: bytes, attributes: tuple[Unpack[_Ts]], elements: Sequence[Self], field1: bytes, field2: bytes) -> None:
super().__init__(name, attributes, elements)
self._field1 = field1
self._field2 = field2
if __name__ == '__main__':
subE = SubElement1(b'name', tuple(), tuple(), b'', b'')
subA = SubAttribute1(b'name', b'field1')
subE2 = SubElement1(b'name', (subA,), (subE,), b'', b'')
print(subE2.elements[0]._field1)
print(subE2.attributes[0]._afield1)
print(type(subE2.elements[0]))
Pyright says 0 errors, 0 warnings, 0 informations, pyre errors:
ƛ Found 2 type errors!
t/t.py:15:14 Undefined or invalid type [11]: Annotation `Unpack` is not defined as a type.
t/t.py:15:14 Undefined or invalid type [11]: Annotation `_Ts` is not defined as a type.
mypy goes completely crazy even with experimental flags; paste it into the mypy playground if you want to take a look.
Homogeneous attributes
However, if your attributes can be represented by a homogeneous sequence (so that, say, SubElement1 instances can contain only SubAttribute1), things are much simpler, and a generic with a regular TypeVar is sufficient:
from typing import Generic, List, Optional, Sequence, Tuple, Type, TypeVar
_Self = TypeVar('_Self', bound='Element')
_A = TypeVar('_A', bound='Attribute')
class Attribute:
def __init__(self, name: bytes) -> None:
self._name = name
@property
def name(self) -> bytes:
return self._name
class Element(Generic[_A]):
def __init__(self: _Self, name: bytes, attributes: Sequence[_A], elements: Sequence[_Self]) -> None:
self._name = name
self._elements = tuple(elements)
self._attributes = tuple(attributes)
@property
def name(self) -> bytes:
return self._name
@property
def elements(self: _Self) -> Tuple[_Self, ...]:
return self._elements
@property
def attributes(self) -> Tuple[_A, ...]:
return self._attributes
class SubAttribute1(Attribute):
def __init__(self, name: bytes, field1: bytes) -> None:
super().__init__(name)
self._afield1 = field1
class SubElement1(Element[SubAttribute1]):
def __init__(self: _Self, name: bytes, attributes: Sequence[SubAttribute1], elements: Sequence[_Self], field1: bytes, field2: bytes) -> None:
super().__init__(name, attributes, elements)
self._field1 = field1
self._field2 = field2
if __name__ == '__main__':
subE = SubElement1(b'name', tuple(), tuple(), b'', b'')
subA = SubAttribute1(b'name', b'field1')
subE2 = SubElement1(b'name', (subA,), (subE,), b'', b'')
print(subE2.elements[0]._field1)
print(subE2.attributes[0]._afield1)
print(type(subE2.elements[0]))
And this works.
Bonus
All the code you present is called "writing Java in Python" (Citation). You definitely do not need getters for simple attribute access, because you can always add them later. You shouldn't write dataclasses by hand: the standard dataclasses module will do it better. So your example really reduces to much more concise and maintainable Python:
from typing import Generic, Sequence, TypeVar
from typing_extensions import Self
from dataclasses import dataclass
_A = TypeVar('_A', bound='Attribute')
@dataclass
class Attribute:
name: bytes
@dataclass
class Element(Generic[_A]):
name: bytes
attributes: Sequence[_A]
elements: Sequence[Self]
# OK, if you need different names in constructor signature and class dict
class SubAttribute1(Attribute):
def __init__(self, name: bytes, field1: bytes) -> None:
super().__init__(name)
self._afield1 = field1
# But I'd really prefer
# @dataclass
# class SubAttribute1(Attribute):
# field1: bytes
# And adjust calls below to use `field1` instead of `_afield1` - you try to expose it anyway
@dataclass
class SubElement1(Element[SubAttribute1]):
field1: bytes
field2: bytes
if __name__ == '__main__':
subE = SubElement1(b'name', tuple(), tuple(), b'', b'')
subA = SubAttribute1(b'name', b'field1')
subE2 = SubElement1(b'name', (subA,), (subE,), b'', b'')
print(subE2.elements[0].field1)
print(subE2.attributes[0]._afield1)
print(type(subE2.elements[0]))
... and it works. Well, it will work soon: currently Self is not fully supported in mypy, and checking this results in an internal error (crash), reported here by me. Pyright responds with no errors. UPDATE: the error is fixed on mypy master, and the example above typechecks.
|
typing recursive class and inheritance
|
I have the following class hierarchy:
#!/usr/bin/env python3
from typing import List, Optional, Tuple, Type
class Attribute:
def __init__(self, name: bytes) -> None:
self._name = name
@property
def name(self) -> bytes:
return self._name
class Element:
def __init__(self, name: bytes, attributes: Tuple[Type['Attribute'], ...], elements: Tuple['Element', ...]) -> None:
self._name = name
self._elements = elements
self._attributes = attributes
@property
def name(self) -> bytes:
return self._name
@property
def elements(self) -> Tuple['Element', ...]:
return self._elements
@property
def attributes(self) -> Tuple[Type['Attribute'], ...]:
return self._attributes
class SubAttribute1(Attribute):
def __init__(self, name: bytes, field1: bytes) -> None:
super().__init__(name)
self._afield1 = field1
class SubElement1(Element):
def __init__(self, name: bytes, attributes: Tuple[Type[Attribute], ...], elements: Tuple['Element', ...], field1: bytes, field2: bytes) -> None:
super().__init__(name, attributes, elements)
self._field1 = field1
self._field2 = field2
if __name__ == '__main__':
subE = SubElement1(b'name', None, None, b'', b'')
subA = SubAttribute1(b'name', b'field1')
subE2 = SubElement1(b'name', (subA,), (subE,), b'', b'')
print(subE2.elements[0]._field1)
print(subE2.attributes[0]._afield1)
print(type(subE2.elements[0]))
I subclass the base classes Element and Attribute to add additional fields. The fields 'elements' and 'attributes' should store the derived-class objects respectively; for SubElement1, SubElement1().elements stores a tuple of SubElement1 objects. All works fine, but I get the following mypy errors:
question.py:45: error: Argument 2 to "SubElement1" has incompatible type "Tuple[SubAttribute1]"; expected "Tuple[Type[Attribute], ...]"
question.py:46: error: "Element" has no attribute "_field1"
question.py:47: error: "Type[Attribute]" has no attribute "_afield1"
How can I change the code to eliminate the mypy errors?
|
[
"This question is quite interesting, I thought that PEP646 support is slightly better.\nI assume python 3.10 and most recent released version of specific checker as of now, unless explicitly specified: mypy==0.991; pyre-check==0.9.17; pyright==1.1.281\nMake elements proper\nFirst of all, here's the (simple enough) code that resolves \"elements\" issue, but does not help with attributes:\nfrom typing import Generic, List, Optional, Sequence, Tuple, Type, TypeVar\n\n\n_Self = TypeVar('_Self', bound='Element')\n\n\nclass Attribute:\n def __init__(self, name: bytes) -> None:\n self._name = name\n\n @property\n def name(self) -> bytes:\n return self._name\n\n\nclass Element:\n def __init__(self: _Self, name: bytes, attributes: tuple[Attribute, ...], elements: Sequence[_Self]) -> None:\n self._name = name\n self._elements = tuple(elements)\n self._attributes = attributes\n\n @property\n def name(self) -> bytes:\n return self._name\n\n @property\n def elements(self: _Self) -> Tuple[_Self, ...]:\n return self._elements\n\n @property\n def attributes(self) -> Tuple[Attribute, ...]:\n return self._attributes\n\n\nclass SubAttribute1(Attribute):\n def __init__(self, name: bytes, field1: bytes) -> None:\n super().__init__(name)\n self._afield1 = field1\n\n\nclass SubElement1(Element):\n def __init__(self: _Self, name: bytes, attributes: tuple[Attribute, ...], elements: Sequence[_Self], field1: bytes, field2: bytes) -> None:\n super().__init__(name, attributes, elements)\n self._field1 = field1\n self._field2 = field2\n\n\nif __name__ == '__main__':\n subE = SubElement1(b'name', tuple(), tuple(), b'', b'')\n subA = SubAttribute1(b'name', b'field1')\n subE2 = SubElement1(b'name', (subA,), (subE,), b'', b'')\n print(subE2.elements[0]._field1)\n print(subE2.attributes[0]._afield1) # E: \"Attribute\" has no attribute \"_afield1\" [attr-defined]\n print(type(subE2.elements[0]))\n\nThis gives one error (commented in source). Here's playground.\nIn nearest future (works even on mypy master branch, but not on 0.991) you'll be able to replace _Self with from typing_extensions import Self and skip annotating self argument, like this:\n# import from typing, if python >= 3.11\nfrom typing_extensions import Self\n\nclass Element:\n def __init__(self, name: bytes, attributes: tuple[Attribute, ...], elements: Sequence[Self]) -> None:\n self._name = name\n self._elements = tuple(elements)\n self._attributes = attributes\n\nYou can try it here - same 1 error.\nVariadic attributes\nNow you want to preserve attributes types - they can be heterogeneous, thus you need PEP646 to continue. The class becomes generic in unknown amount of variables. pyre and pyright claim to support this (mypy does not, the work is currently in progress). pyre failed to typecheck the solution below, giving a few spurious errors. pyright succeed (though I personally dislike it, so won't recommend switching). 
Pyright sandbox is unofficial and not up-to-date, and doesn't work here - copy it locally, install and run pyright to verify.\nfrom typing import Generic, List, Optional, Sequence, Tuple, Type, TypeVar\nfrom typing_extensions import Unpack, Self, TypeVarTuple\n\n_Ts = TypeVarTuple('_Ts')\n\n\nclass Attribute:\n def __init__(self, name: bytes) -> None:\n self._name = name\n\n @property\n def name(self) -> bytes:\n return self._name\n\nclass Element(Generic[Unpack[_Ts]]):\n def __init__(self, name: bytes, attributes: tuple[Unpack[_Ts]], elements: Sequence[Self]) -> None:\n self._name = name\n self._elements = tuple(elements)\n self._attributes = attributes\n\n @property\n def name(self) -> bytes:\n return self._name\n\n @property\n def elements(self) -> Tuple[Self, ...]:\n return self._elements\n\n @property\n def attributes(self) -> Tuple[Unpack[_Ts]]:\n return self._attributes\n\nclass SubAttribute1(Attribute):\n def __init__(self, name: bytes, field1: bytes) -> None:\n super().__init__(name)\n self._afield1 = field1\n\nclass SubElement1(Element[Unpack[_Ts]]):\n def __init__(self, name: bytes, attributes: tuple[Unpack[_Ts]], elements: Sequence[Self], field1: bytes, field2: bytes) -> None:\n super().__init__(name, attributes, elements)\n self._field1 = field1\n self._field2 = field2\n \nif __name__ == '__main__':\n subE = SubElement1(b'name', tuple(), tuple(), b'', b'')\n subA = SubAttribute1(b'name', b'field1')\n subE2 = SubElement1(b'name', (subA,), (subE,), b'', b'')\n print(subE2.elements[0]._field1)\n print(subE2.attributes[0]._afield1)\n print(type(subE2.elements[0]))\n\nPyright says 0 errors, 0 warnings, 0 informations, pyre errors:\nƛ Found 2 type errors!\nt/t.py:15:14 Undefined or invalid type [11]: Annotation `Unpack` is not defined as a type.\nt/t.py:15:14 Undefined or invalid type [11]: Annotation `_Ts` is not defined as a type.\n\nmypy goes completely crazy even with experimental flags, paste into mypy playground if you want to look at this.\nHomogeneous attributes\nHowever, if your attributes can be represented by homogeneous sequence (so that, say, SubElement1 instances can contain only SubAttribute1), things are much simpler, and the generic with regular TypeVar is sufficient:\nfrom typing import Generic, List, Optional, Sequence, Tuple, Type, TypeVar\n\n\n_Self = TypeVar('_Self', bound='Element')\n_A = TypeVar('_A', bound='Attribute')\n\n\nclass Attribute:\n def __init__(self, name: bytes) -> None:\n self._name = name\n\n @property\n def name(self) -> bytes:\n return self._name\n\n\nclass Element(Generic[_A]):\n def __init__(self: _Self, name: bytes, attributes: Sequence[_A], elements: Sequence[_Self]) -> None:\n self._name = name\n self._elements = tuple(elements)\n self._attributes = tuple(attributes)\n\n @property\n def name(self) -> bytes:\n return self._name\n\n @property\n def elements(self: _Self) -> Tuple[_Self, ...]:\n return self._elements\n\n @property\n def attributes(self) -> Tuple[_A, ...]:\n return self._attributes\n\n\nclass SubAttribute1(Attribute):\n def __init__(self, name: bytes, field1: bytes) -> None:\n super().__init__(name)\n self._afield1 = field1\n\n\nclass SubElement1(Element[SubAttribute1]):\n def __init__(self: _Self, name: bytes, attributes: Sequence[SubAttribute1], elements: Sequence[_Self], field1: bytes, field2: bytes) -> None:\n super().__init__(name, attributes, elements)\n self._field1 = field1\n self._field2 = field2\n\n\nif __name__ == '__main__':\n subE = SubElement1(b'name', tuple(), tuple(), b'', b'')\n subA = SubAttribute1(b'name', 
b'field1')\n subE2 = SubElement1(b'name', (subA,), (subE,), b'', b'')\n print(subE2.elements[0]._field1)\n print(subE2.attributes[0]._afield1)\n print(type(subE2.elements[0]))\n\n\nAnd this works.\nBonus\nAll the code you present is called \"writing Java in Python\" (Citation). You definitely do not need getters with simple attribute access, because you can always add them later. You shouldn't write dataclasses by hands - dataclasses standard module will do it better. So, your example really reduces to much more concise and maintainable python:\nfrom typing import Generic, Sequence, TypeVar\nfrom typing_extensions import Self\nfrom dataclasses import dataclass\n\n\n_A = TypeVar('_A', bound='Attribute')\n\n\n@dataclass\nclass Attribute:\n name: bytes\n\n\n@dataclass\nclass Element(Generic[_A]):\n name: bytes\n attributes: Sequence[_A]\n elements: Sequence[Self]\n\n\n# OK, if you need different names in constructor signature and class dict\nclass SubAttribute1(Attribute):\n def __init__(self, name: bytes, field1: bytes) -> None:\n super().__init__(name)\n self._afield1 = field1\n \n \n# But I'd really prefer\n# @dataclass\n# class SubAttribute1(Attribute):\n# field1: bytes\n# And adjust calls below to use `field1` instead of `_afield1` - you try to expose it anyway\n\n@dataclass\nclass SubElement1(Element[SubAttribute1]):\n field1: bytes\n field2: bytes\n\n\nif __name__ == '__main__':\n subE = SubElement1(b'name', tuple(), tuple(), b'', b'')\n subA = SubAttribute1(b'name', b'field1')\n subE2 = SubElement1(b'name', (subA,), (subE,), b'', b'')\n print(subE2.elements[0].field1)\n print(subE2.attributes[0]._afield1)\n print(type(subE2.elements[0]))\n\n... and it works. Well, will work soon - currently Self is not fully supported in mypy, and checking this results in internal error (crash), reported here by me. Pyright responds with no errors. UPDATE: the error is fixed on mypy master, and the example above typechecks.\n"
] |
[
1
] |
[] |
[] |
[
"mypy",
"python"
] |
stackoverflow_0074569502_mypy_python.txt
|
Q:
The shuffling order of DataLoader in pytorch
I am really confused about the shuffle order of DataLoader in pytorch.
Suppose I have a dataset:
datasets = [0,1,2,3,4]
In scenario I, the code is:
torch.manual_seed(1)
G = torch.Generator()
G.manual_seed(1)
ran_sampler = RandomSampler(data_source=datasets,generator=G)
dataloader = DataLoader(dataset=datasets,sampler=ran_sampler)
the shuffling result is 0,4,2,3,1.
In scenario II, the code is:
torch.manual_seed(1)
G = torch.Generator()
G.manual_seed(1)
ran_sampler = RandomSampler(data_source=datasets)
dataloader = DataLoader(dataset=datasets, sampler=ran_sampler, generator=G)
the shuffling result is 1,3,4,0,2.
In scenario III, the code is:
torch.manual_seed(1)
G = torch.Generator()
G.manual_seed(1)
ran_sampler = RandomSampler(data_source=datasets, generator=G)
dataloader = DataLoader(dataset=datasets, sampler=ran_sampler, generator=G)
the shuffling result is 4,1,3,0,2.
Can someone explain what is going on here?
A:
Based on your code, I made a small modification (to scenario II) and did some inspection:
import torch
from torch.utils.data import DataLoader, RandomSampler

datasets = [0,1,2,3,4]
torch.manual_seed(1)
G = torch.Generator()
G = G.manual_seed(1)
ran_sampler = RandomSampler(data_source=datasets, generator=G)
dataloader = DataLoader(dataset=datasets, sampler=ran_sampler)
print(id(dataloader.generator)==id(dataloader.sampler.generator))
xs = []
for x in dataloader:
xs.append(x.item())
print(xs)
torch.manual_seed(1)
G = torch.Generator()
G.manual_seed(1)
# this is different from OP's scenario II because in that case the ran_sampler is not initialized with the right generator.
dataloader = DataLoader(dataset=datasets, shuffle=True, generator=G)
print(id(dataloader.generator)==id(dataloader.sampler.generator))
xs = []
for x in dataloader:
xs.append(x.item())
print(xs)
torch.manual_seed(1)
G = torch.Generator()
G.manual_seed(1)
ran_sampler = RandomSampler(data_source=datasets, generator=G)
dataloader = DataLoader(dataset=datasets, sampler=ran_sampler, generator=G)
print(id(dataloader.generator)==id(dataloader.sampler.generator))
xs = []
for x in dataloader:
xs.append(x.item())
print(xs)
The outputs are:
False
[0, 4, 2, 3, 1]
True
[4, 1, 3, 0, 2]
True
[4, 1, 3, 0, 2]
The reason why the above three seemingly equivalent setups lead to different outcomes is that two different generators are actually used inside the DataLoader; in the first scenario, one of them is None.
To make it clear, let's analyze the source. It seems that the generator not only decides the random number generation of the _index_sampler inside DataLoader but also affects the initialization of _BaseDataLoaderIter. See the source code
if sampler is None: # give default samplers
if self._dataset_kind == _DatasetKind.Iterable:
# See NOTE [ Custom Samplers and IterableDataset ]
sampler = _InfiniteConstantSampler()
else: # map-style
if shuffle:
sampler = RandomSampler(dataset, generator=generator) # type: ignore[arg-type]
else:
sampler = SequentialSampler(dataset) # type: ignore[arg-type]
and
self.sampler = sampler
self.batch_sampler = batch_sampler
self.generator = generator
and
def _get_iterator(self) -> '_BaseDataLoaderIter':
if self.num_workers == 0:
return _SingleProcessDataLoaderIter(self)
else:
self.check_worker_number_rationality()
return _MultiProcessingDataLoaderIter(self)
and
class _BaseDataLoaderIter(object):
def __init__(self, loader: DataLoader) -> None:
...
self._index_sampler = loader._index_sampler
Scenario II & Scenario III
Both setups are equivalent. We pass a generator to DataLoader and do not specify the sampler. DataLoader automatically creates a RandomSampler object with the generator and assigns the same generator to self.generator.
Scenario I
We pass a sampler with the right generator to DataLoader but do not explicitly specify the keyword argument generator in DataLoader.__init__(...). DataLoader initializes the sampler with the given sampler, but uses the default generator None for self.generator and for the _BaseDataLoaderIter object returned by self._get_iterator().
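In short, a minimal sketch of the reproducible pattern the answer recommends (let DataLoader build the sampler so both generators are the same object):
import torch
from torch.utils.data import DataLoader

datasets = [0, 1, 2, 3, 4]
G = torch.Generator()
G.manual_seed(1)
# shuffle=True makes DataLoader create a RandomSampler with this generator,
# so dataloader.generator and dataloader.sampler.generator are the same object.
loader = DataLoader(datasets, shuffle=True, generator=G)
print([x.item() for x in loader])  # [4, 1, 3, 0, 2], matching scenarios II and III above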
|
The shuffling order of DataLoader in pytorch
|
I am really confused about the shuffle order of DataLoader in pytorch.
Suppose I have a dataset:
datasets = [0,1,2,3,4]
In scenario I, the code is:
torch.manual_seed(1)
G = torch.Generator()
G.manual_seed(1)
ran_sampler = RandomSampler(data_source=datasets,generator=G)
dataloader = DataLoader(dataset=datasets,sampler=ran_sampler)
the shuffling result is 0,4,2,3,1.
In scenario II, the code is:
torch.manual_seed(1)
G = torch.Generator()
G.manual_seed(1)
ran_sampler = RandomSampler(data_source=datasets)
dataloader = DataLoader(dataset=datasets, sampler=ran_sampler, generator=G)
the shuffling result is 1,3,4,0,2.
In scenario III, the code is:
torch.manual_seed(1)
G = torch.Generator()
G.manual_seed(1)
ran_sampler = RandomSampler(data_source=datasets, generator=G)
dataloader = DataLoader(dataset=datasets, sampler=ran_sampler, generator=G)
the shuffling result is 4,1,3,0,2.
Can someone explain what is going on here?
|
[
"Based on your code, I did a little modification (on scenario II) and inspection:\ndatasets = [0,1,2,3,4]\n\ntorch.manual_seed(1)\nG = torch.Generator()\nG = G.manual_seed(1)\n\nran_sampler = RandomSampler(data_source=datasets, generator=G)\ndataloader = DataLoader(dataset=datasets, sampler=ran_sampler)\nprint(id(dataloader.generator)==id(dataloader.sampler.generator))\nxs = []\nfor x in dataloader:\n xs.append(x.item())\nprint(xs)\n\ntorch.manual_seed(1)\nG = torch.Generator()\nG.manual_seed(1)\n\n# this is different from OP's scenario II because in that case the ran_sampler is not initialized with the right generator.\ndataloader = DataLoader(dataset=datasets, shuffle=True, generator=G)\nprint(id(dataloader.generator)==id(dataloader.sampler.generator))\nxs = []\nfor x in dataloader:\n xs.append(x.item())\nprint(xs)\n\ntorch.manual_seed(1)\nG = torch.Generator()\nG.manual_seed(1)\n\n\nran_sampler = RandomSampler(data_source=datasets, generator=G)\ndataloader = DataLoader(dataset=datasets, sampler=ran_sampler, generator=G)\nprint(id(dataloader.generator)==id(dataloader.sampler.generator))\nxs = []\nfor x in dataloader:\n xs.append(x.item())\nprint(xs)\n\nThe outputs are:\nFalse\n[0, 4, 2, 3, 1]\nTrue\n[4, 1, 3, 0, 2]\nTrue\n[4, 1, 3, 0, 2]\n\nThe reason why the above three seemingly equivalent setups lead to different outcomes is that there are two different generators actually being used inside the DataLoader, one of which is None, in the first scenario.\nTo make it clear, let's analyze the source. It seems that the generator not only decides the random number generation of the _index_sampler inside DataLoader but also affects the initialization of _BaseDataLoaderIter. See the source code\n if sampler is None: # give default samplers\n if self._dataset_kind == _DatasetKind.Iterable:\n # See NOTE [ Custom Samplers and IterableDataset ]\n sampler = _InfiniteConstantSampler()\n else: # map-style\n if shuffle:\n sampler = RandomSampler(dataset, generator=generator) # type: ignore[arg-type]\n else:\n sampler = SequentialSampler(dataset) # type: ignore[arg-type]\n\nand\n self.sampler = sampler\n self.batch_sampler = batch_sampler\n self.generator = generator\n\nand\n def _get_iterator(self) -> '_BaseDataLoaderIter':\n if self.num_workers == 0:\n return _SingleProcessDataLoaderIter(self)\n else:\n self.check_worker_number_rationality()\n return _MultiProcessingDataLoaderIter(self)\n\nand\nclass _BaseDataLoaderIter(object):\n def __init__(self, loader: DataLoader) -> None:\n ...\n self._index_sampler = loader._index_sampler\n\n\nScenario II & Scenario III\n\nBoth setups are equivalent. We pass a generator to DataLoader and do not specify the sampler. DataLoader automatically creates a RandomSampler object with the generator and assign the same generator to self.generator.\n\nScenario I\n\nWe pass a sampler to DataLoader with the right generator but do not explicitly specify the keyword argument generator in DataLoader.__init__(...). DataLoader initializes the sampler with the given sampler but uses the default generator None for self.generator and the _BaseDataLoaderIter object returned by self._get_iterator().\n"
] |
[
3
] |
[] |
[] |
[
"python",
"pytorch",
"pytorch_dataloader"
] |
stackoverflow_0074580942_python_pytorch_pytorch_dataloader.txt
|
Q:
How to rid output of quotation mark
I'm trying to write a program which converts number words to their related integer form for a school project on Grok. It works, but when the program outputs the result it gives it back in quotation marks, because the number is a string and not an integer. Sorry if this post isn't formatted correctly; it's my first post. Any help is appreciated, thanks!
def words2number(s):
a = ""
for word in s:
s = s.replace("zero", "0")
s = ints.replace("one", "1")
s = s.replace("two", "2")
s = s.replace("three", "3")
s = s.replace("four", "4")
s = s.replace("five", "5")
s = s.replace("six", "6")
s = s.replace("seven", "7")
s = s.replace("eight", "8")
s = s.replace("nine", "9")
s = s.strip("'")
return(a.join(b))
I thought I could strip the quotation marks using a function as shown in the post but that doesn't help. I've also tried converting the output to an integer but that raises an error.
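For reference, a minimal corrected sketch of what the function appears to be aiming for (the ints and b in the original look like leftovers from earlier drafts; that reading is an assumption):
WORDS = {"zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
         "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9"}

def words2number(s):
    for word, digit in WORDS.items():
        s = s.replace(word, digit)
    # returning an int means no quotation marks when the result is shown
    return int(s)

print(words2number("onetwothree"))  # 123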
|
How to rid output of quotation mark
|
I'm trying to write a program which converts number words to their related integer form for a school project on Grok. It works, but when the program outputs the result it gives it back in quotation marks, because the number is a string and not an integer. Sorry if this post isn't formatted correctly; it's my first post. Any help is appreciated, thanks!
def words2number(s):
a = ""
for word in s:
s = s.replace("zero", "0")
s = ints.replace("one", "1")
s = s.replace("two", "2")
s = s.replace("three", "3")
s = s.replace("four", "4")
s = s.replace("five", "5")
s = s.replace("six", "6")
s = s.replace("seven", "7")
s = s.replace("eight", "8")
s = s.replace("nine", "9")
s = s.strip("'")
return(a.join(b))
I thought I could strip the quotation marks using a function as shown in the post but that doesn't help. I've also tried converting the output to an integer but that raises an error.
|
[] |
[] |
[
"As I understand it, b is an early defined variable with a word.\nYou can try use int for return.\nreturn int(a.join(b))\n\n"
] |
[
-1
] |
[
"python"
] |
stackoverflow_0074586528_python.txt
|
Q:
point to point line-of-sight verification with barrier python
I'm trying to write code that calculates whether the line between sta1 and sta2 passes through a rectangle (obstacle), given that I only know the positions of sta1 and sta2 plus the x_min, x_max, y_min, and y_max of the rectangle.
sta1 position(x,y) = (1,5) sta2 position(x,y) = (5,1)
rectangle x_min = 3 rectangle x_max = 4
rectangle y_min = 2 rectangle y_max = 3
I already have the function that returns the position of the stations:
sta1.position
(1,5)
I also already have the code to get the Euclidean distance between the points:
distance sta1 sta2
example: 20
That is, I need code where, given two points (x, y) and a rectangular obstacle (x_min, x_max, y_min, y_max), the output reports whether there is a line of sight or the obstacle is in the way.
Some help in python doing this calculation would help.
A:
I solved it using the equation of the line:
# line/rectangle intersection (obstacle)
# node coordinates
source_x = 1
source_y = 5
destination_x = 5
destination_y = 1
# rectangle vertices
vertices_x = [3, 3, 4, 4]
vertices_y = [3, 2, 3, 2]
# Reduced equation of the line
# y = ax + b
# 1 - find the slope
# a = y2 - y1/x2 - x1
a = (destination_y - source_y) / (destination_x - source_x)
# 2 - find the linear coefficient
b = source_y - (a * source_x)
# f(x) = -(a*x) + y-(b)
signal_first = ''
result = ''
signal = ''
for i in range(0, 4):
fx = -(a * vertices_x[i]) + vertices_y[i] - (b)
if i == 0:
value = fx
if value > 0:
signal_first = 'positive'
elif value < 0:
signal_first = 'negative'
if fx > 0:
signal = 'positive'
elif fx < 0:
signal = 'negative'
if signal != signal_first:
result = 'stations have no line of sight'
print(result)
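Note that this tests the infinite line through the two stations, not the segment between them, so an obstacle lying beyond either station can be misreported. A hedged sketch of a stricter segment-vs-rectangle test (a separating-axis check; all names here are illustrative):
def segment_hits_rect(p1, p2, x_min, x_max, y_min, y_max):
    (x1, y1), (x2, y2) = p1, p2
    # Axes 1 and 2: the segment's bounding box must overlap the rectangle.
    if max(x1, x2) < x_min or min(x1, x2) > x_max:
        return False
    if max(y1, y2) < y_min or min(y1, y2) > y_max:
        return False
    # Axis 3: if all four corners lie strictly on one side of the line, there is no hit.
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    corners = [(x_min, y_min), (x_min, y_max), (x_max, y_min), (x_max, y_max)]
    signs = [a * x + b * y + c for x, y in corners]
    return not (all(s > 0 for s in signs) or all(s < 0 for s in signs))

print(segment_hits_rect((1, 5), (5, 1), 3, 4, 2, 3))  # True: the sight line is blocked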
|
point to point line-of-sight verification with barrier python
|
I'm trying to write code that calculates whether the line between sta1 and sta2 passes through a rectangle (obstacle), given that I only know the positions of sta1 and sta2 plus the x_min, x_max, y_min, and y_max of the rectangle.
sta1 position(x,y) = (1,5) sta2 position(x,y) = (5,1)
rectangle x_min = 3 rectangle x_max = 4
rectangle y_min = 2 rectangle y_max = 3
I already have the function that returns the position of the stations:
sta1.position
(1,5)
I also already have the code to get the Euclidean distance between the points:
distance sta1 sta2
example: 20
That is, I need code where, given two points (x, y) and a rectangular obstacle (x_min, x_max, y_min, y_max), the output reports whether there is a line of sight or the obstacle is in the way.
Some help in python doing this calculation would help.
|
[
"I solved using the equation of the line:\n# line/rectangle intersection (obstacle)\n\n# node coordinates\nsource_x = 1\nsource_y = 5\n\ndestination_x = 5\ndestination_y = 1\n\n# rectangle vertices\n\nvertices_x = [3, 3, 4, 4]\nvertices_y = [3, 2, 3, 2]\n\n# Reduced equation of the line\n# y = ax + b\n\n# 1 - find the slope\n# a = y2 - y1/x2 - x1\n\na = (destination_y - source_y) / (destination_x - source_x)\n\n# 2 - find the linear coefficient\n\nb = source_y - (a * source_x)\n\n# f(x) = -(a*x) + y-(b)\n\nsignal_first = ''\nresult = ''\nsignal = ''\nfor i in range(0, 4):\n fx = -(a * vertices_x[i]) + vertices_y[i] - (b)\n if i == 0:\n value = fx\n if value > 0:\n signal_first = 'positive'\n elif value < 0:\n signal_first = 'negative'\n if fx > 0:\n signal = 'positive'\n elif fx < 0:\n signal = 'negative'\n\n if signal != signal_first:\n result = 'stations have no line of sight'\n\nprint(result)\n\n"
] |
[
0
] |
[] |
[] |
[
"geometry",
"python"
] |
stackoverflow_0074579101_geometry_python.txt
|
Q:
Python: plot a dict of keys and values
I would like to plot an x, y graph of a data dictionary (key and values).
The key is a datetime value.
Each value for a key contains an object, i.e. my_data, that has attributes: name, count, totalCount.
On the graph, on the x-axis, I would like to use the key (datetime)
On the y-axis, I would like to multi-plot the my_data attributes, a separate point for each attribute: my_data.name, my_data.count, my_data.totalCount.
Please advise.
Thanks
A:
Here is an example of what I had in mind for the solution:
#!/usr/bin/env ipython
# --------------------
import numpy as np
import matplotlib as mpl
mpl.rcParams['font.size'] = 20
import matplotlib.pylab as plt
import datetime
# -------------------------------------
# ==================================================
N = 5
dates = [datetime.datetime(2022,11,10)+ii*datetime.timedelta(days=1) for ii in range(5)];
count = 5+5*np.random.random(N,)
name = ['Name'+str(ii).rjust(2,'0') for ii in range(5)]
totcount = 10+2*np.random.random(N,)
# --------------------------------------------------
plotdata = {dates[ii]:{1:round(count[ii],1),2:name[ii],3:round(totcount[ii],1)} for ii in range(len(dates))}
# --------------------------------------------------
fig = plt.figure(figsize=(20,10));ax = fig.add_subplot(111);
for dval,pdict in plotdata.items():
for kv,vv in pdict.items():
ax.text(dval,kv,str(vv),ha='left',va='bottom')
ax.plot(dval,kv,ms=10,color='k');
ax.set_ylim((0,5)) # to squeeze more together the values
ax.set_yticklabels('') # to remove y-axis text
plt.show()
|
Python: plot a dict of keys and values
|
I would like to plot an x, y graph of a data dictionary (key and values).
The key is a datetime value.
Each value for a key contains an object, i.e. my_data, that has attributes: name, count, totalCount.
On the graph, on the x-axis, I would like to use the key (datetime)
On the y-axis, I would like to multi-plot the my_data attributes, a separate point for each attribute: my_data.name, my_data.count, my_data.totalCount.
Please advise.
Thanks
|
[
"Here is an example of what I had in mind for the solution:\n#!/usr/bin/env ipython\n# --------------------\nimport numpy as np\nimport matplotlib as mpl\nmpl.rcParams['font.size'] = 20\nimport matplotlib.pylab as plt\nimport datetime\n# -------------------------------------\n# ==================================================\nN = 5\ndates = [datetime.datetime(2022,11,10)+ii*datetime.timedelta(days=1) for ii in range(5)];\ncount = 5+5*np.random.random(N,)\nname = ['Name'+str(ii).rjust(2,'0') for ii in range(5)]\ntotcount = 10+2*np.random.random(N,)\n# --------------------------------------------------\nplotdata = {dates[ii]:{1:round(count[ii],1),2:name[ii],3:round(totcount[ii],1)} for ii in range(len(dates))}\n# --------------------------------------------------\nfig = plt.figure(figsize=(20,10));ax = fig.add_subplot(111);\nfor dval,pdict in plotdata.items():\n for kv,vv in pdict.items():\n ax.text(dval,kv,str(vv),ha='left',va='bottom')\n ax.plot(dval,kv,ms=10,color='k');\nax.set_ylim((0,5)) # to squeeze more together the values\nax.set_yticklabels('') # to remove y-axis text\nplt.show()\n\n"
] |
[
0
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0074553478_matplotlib_python.txt
|
Q:
Crop a video in Python, centered on a 16x9 video, cropped to 9x16, moviepy?
I want to crop a video that is 16x9 resolution to 9x16. This can be done by cropping a centered 607px-wide rectangle from the 16x9 video. Can this be done? EDIT: I do not care about staying within moviepy; I want something fast. Currently, writing a 5 min video file with moviepy takes 10+ minutes.
cropping video from 16x9 to 9x16
from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_subclip
import moviepy.editor as mpy
origVideo = 'video.mp4'
video = mpy.VideoFileClip(origVideo)
#Crop 'video' here and output 'cropped-video.mp4'
video.write_videofile('video-cropped.mp4')
Currently only getting a black screen with no audio. The video won't play, but it has a time code.
EDIT: Solution here: https://github.com/JerlinJR/Crop-a-Video/blob/main/crop.py
A:
You can use moviepy.video.fx.all.crop. Documentation are here. For example,
import moviepy.editor as mpy
from moviepy.video.fx.all import crop
clip = mpy.VideoFileClip("path/to/video.mp4")
(w, h) = clip.size
crop_width = h * 9/16
# x1,y1 is the top left corner, and x2, y2 is the lower right corner of the cropped area.
x1, x2 = (w - crop_width)//2, (w+crop_width)//2
y1, y2 = 0, h
cropped_clip = crop(clip, x1=x1, y1=y1, x2=x2, y2=y2)
# or you can specify center point and cropped width/height
# cropped_clip = crop(clip, width=crop_width, height=h, x_center=w/2, y_center=h/2)
cropped_clip.write_videofile('path/to/cropped/video.mp4')
The code is not tested; if there are any further questions, please let me know.
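Since the question also asks for speed, a hedged alternative is to shell out to ffmpeg directly (this assumes ffmpeg is on the PATH; the crop filter centres the crop window by default):
import subprocess

src, dst = "video.mp4", "video-cropped.mp4"
# out_w = ih*9/16, out_h = ih; x and y default to centring the crop
subprocess.run(["ffmpeg", "-i", src, "-vf", "crop=ih*9/16:ih",
                "-c:a", "copy", dst], check=True)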
|
Crop a video in Python, centered on a 16x9 video, cropped to 9x16, moviepy?
|
I want to crop a video that is 16x9 resolution to 9x16. This can be done by cropping a centered 607px-wide rectangle from the 16x9 video. Can this be done? EDIT: I do not care about staying within moviepy; I want something fast. Currently, writing a 5 min video file with moviepy takes 10+ minutes.
cropping video from 16x9 to 9x16
from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_subclip
import moviepy.editor as mpy
origVideo = 'video.mp4'
video = mpy.VideoFileClip(origVideo)
#Crop 'video' here and output 'cropped-video.mp4'
video.write_videofile('video-cropped.mp4')
Currently only getting a black screen with no audio. The video won't play, but it has a time code.
EDIT: Solution here: https://github.com/JerlinJR/Crop-a-Video/blob/main/crop.py
|
[
"You can use moviepy.video.fx.all.crop. Documentation are here. For example,\nimport moviepy.editor as mpy\nfrom moviepy.video.fx.all import crop\n\nclip = mpy.VideoFileClip(\"path/to/video.mp4\")\n(w, h) = clip.size\n\ncrop_width = h * 9/16\n# x1,y1 is the top left corner, and x2, y2 is the lower right corner of the cropped area.\n\nx1, x2 = (w - crop_width)//2, (w+crop_width)//2\ny1, y2 = 0, h\ncropped_clip = crop(clip, x1=x1, y1=y1, x2=x2, y2=y2)\n# or you can specify center point and cropped width/height\n# cropped_clip = crop(clip, width=crop_width, height=h, x_center=w/2, y_center=h/2)\ncropped_clip.write_videofile('path/to/cropped/video.mp4')\n\nThe code is not tested. If there is any further question, please let me know.\n"
] |
[
0
] |
[] |
[] |
[
"crop",
"ffmpeg",
"moviepy",
"python",
"video"
] |
stackoverflow_0074586467_crop_ffmpeg_moviepy_python_video.txt
|
Q:
Multiple Python Flask sites on IIS
I want to run three separate Python Flask URLs; let's call them test, staging, and production. I want one IIS website to serve up these three different applications.
So I have created a website, created three IIS Applications under this site, and set the root folders for these to d:\execution\test, d:\execution\staging, d:\execution\prod respectively.
Under each of these folders is a hello_world.py that looks like this
import datetime
from flask import Flask
app = Flask(__name__)
@app.route('/test')
def hello_world():
return 'Hello World (Test)!' + datetime.datetime.now().strftime("%Y/%m/%d %H:%M:%S")
if __name__ == '__main__':
app.run(debug=True)
Obviously with the route changed to /test, /staging and /prod as needed
Each folder also contains a web.config that looks like this
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.webServer>
        <defaultDocument>
            <files>
                <add value="hello_world.py" />
            </files>
        </defaultDocument>
        <directoryBrowse enabled="true" />
    </system.webServer>
    <system.web>
        <identity impersonate="false" />
    </system.web>
    <appSettings>
        <add key="PYTHONPATH" value="D:\Execution\Test" />
        <add key="WSGI_HANDLER" value="hello_world.app" />
    </appSettings>
</configuration>
Again with the PYTHONPATH changed as required
The main website root has a web.config that says
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.webServer>
        <handlers>
            <add name="Python Flask" path="*" verb="*" modules="FastCgiModule" scriptProcessor="C:\Python34\python.exe|C:\Python34\Lib\site-packages\wfastcgi.py" resourceType="Unspecified" />
        </handlers>
    </system.webServer>
</configuration>
This is the python handler. If I take this out, I can serve up basic HTML pages fine from the websites and the three application sub-folder/URLs. If I put it back in, I can't get anything, not even just a basic HTML page. IIS throws a generic "The webpage cannot be found. HTTP 404 Not Found" when I try to navigate to anything, be it directory browsing, an index.html or the http://server:port/test app route as mentioned above for my python flask app.
Does anyone have any suggestions as to what I am doing wrong?
A:
I think the root of your problem lies in the fact that your site has a CGI-based handler specified that is going to override the WSGI_HANDLER values that you provide for each of your applications.
Are you really intending to run WSGI over CGI?
If your answer is no, then that could well be your problem.
A:
I know this is an old question but I have a similar problem and found a solution which may help people using Flask + IIS:
Solution: use blueprints. Each application should set its blueprint URL prefix to include the name of its Virtual Directory (Application).
For example if you have the following structure on IIS:
DefaultWebSite
    \YourFlaskApp01
    \YourFlaskApp02
    ...
On YourFlaskApp01 code set a Flask Blueprint:
bpname = flask.Blueprint('bpname', __name__, url_prefix='/YourFlaskApp01')
On YourFlaskApp02 code set a Flask Blueprint:
bpname = flask.Blueprint('bpname', __name__, url_prefix='/YourFlaskApp02')
And use the blueprints on your endpoint methods:
@bpname.route("/foo", methods=["GET"])
def foo():
    ...
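For completeness, here is a minimal runnable sketch of the full wiring (the app name follows the example above); the key step is register_blueprint, without which the prefix never takes effect:
import flask

app = flask.Flask(__name__)
bpname = flask.Blueprint('bpname', __name__, url_prefix='/YourFlaskApp01')

@bpname.route("/foo", methods=["GET"])
def foo():
    return "foo from YourFlaskApp01"

app.register_blueprint(bpname)  # the route is now served at /YourFlaskApp01/foo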
If you are not using blueprints, I guess it also works if you set the prefix directly on the route decorator:
@app.route('/YourFlaskApp01/test')
But I have not tested it. :(
On IIS, always remember to give appropriate permissions to the user associated with the application pool.
General instructions on setting up Flask + IIS:
Microsoft Learn - Configure Python web apps for IIS
Medium.com - Deploy a Python Flask Application in IIS Server and run on machine IP address
Flask - Modular Applications with Blueprints
Hope this may help someone!
|
Multiple Python Flask sites on IIS
|
I want to run three separate Python Flask URLS, let's call them test, staging, and production. I want one IIS website to serve up these three different applications.
So I have created a website, created three IIS Applications under this site, and set the root folders for these to d:\execution\test, d:\execution\staging, d:\execution\prod respectively.
Under each of these folders is a hello_world.py that looks like this
import datetime
from flask import Flask
app = Flask(__name__)
@app.route('/test')
def hello_world():
return 'Hello World (Test)!' + datetime.datetime.now().strftime("%Y/%m/%d %H:%M:%S")
if __name__ == '__main__':
app.run(debug=True)
Obviously with the route changed to /test, /staging and /prod as needed
Each folder also contains a web.config that looks like this
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<system.webServer>
<defaultDocument>
<files>
<add value="hello_world.py" />
</files>
</defaultDocument>
<directoryBrowse enabled="true" />
</system.webServer>
<system.web>
<identity impersonate="false" />
</system.web>
<appSettings>
<add key="PYTHONPATH" value="D:\Execution\Test" />
<add key="WSGI_HANDLER" value="hello_world.app" />
</appSettings>
</configuration>
Again with the PYTHONPATH changed as required
The main website root has a web.config in that says
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<system.webServer>
<handlers>
<add name="Python Flask" path="*" verb="*" modules="FastCgiModule" scriptProcessor="C:\Python34\python.exe|C:\Python34\Lib\site-packages\wfastcgi.py" resourceType="Unspecified" />
</handlers>
</system.webServer>
</configuration>
This is the python handler. If I take this out, I can serve up basic HTML pages fine from the websites and the three application sub-folder/URLs. If I put it back in, I can't get anything, not even just a basic HTML page. IIS throws a generic "The webpage cannot be found. HTTP 404 Not Found" when I try to navigate to anything, be it directory browsing, an index.html or the http://server:port/test app route as mentioned above for my python flask app.
Does anyone have any suggestions as to what I am doing wrong?
|
[
"I think the root of your problem lies in the fact that your site has a CGIbased handler specified that is going to override the wsgi_handler values that you provide for each of your applications.\n..are you really intending to run wsgi over cgi? \nIf your answer is no, then that definitely could be your problem.\n",
"I know this is an old question but I have a similar problem and found a solution which may help people using Flask + IIS:\nSolution: Use blueprints... Each application should set the blueprint URL including the name of the Virtual Directory (Application)\nFor example if you have the following structure on IIS:\nDefaultWebSite\n \\YourFlaskApp01\n \\YourFlaskApp02\n ...\n\nOn YourFlaskApp01 code set a Flask Blueprint:\nbpname = flask.Blueprint('bpname', __name__, url_prefix='/YourFlaskApp01')\n\nOn YourFlaskApp02 code set a Flask Blueprint:\nbpname = flask.Blueprint('bpname', __name__, url_prefix='/YourFlaskApp02')\n\nAnd use the blue prints on your endpoint methods:\n@bpname.route(\"/foo\", methods=[\"GET\"])\ndef foo():\n ...\n\nIf you are not using blueprints I guess it also works if you set the correct setting directly on\n@app.route('/YourFlaskApp01/test')\n\nBut I have not tested it. :(\nOn IIS remember always to give appropriate permissions to the user associated to the application pool.\nGeneral instruction on setting up Flask + IIS:\nMicrosoft Learn - Configure Python web apps for IIS\nMedium.com - Deploy a Python Flask Application in IIS Server and run on machine IP address\nFlask - Modular Applications with Blueprints\nHope this may help someone!\n"
] |
[
0,
0
] |
[] |
[] |
[
"asp.net",
"flask",
"iis",
"python"
] |
stackoverflow_0037325602_asp.net_flask_iis_python.txt
|
Q:
using print() after invoking recursion
Can someone tell me what exactly is happening here? Is the print statement executing after all the draw calls [3, 2, 1] are completed, or is it happening simultaneously? I tried adding print(n) but still couldn't figure it out. Is it unpacking after storing the values of ('#'*n)? I am getting what I desired, but I just need to understand what is actually happening.
def draw(n: int):
    if n < 0:
        return
    draw(n - 1)
    print('#' * n)
draw(3)
A:
The print comes after executing all the draw calls. If you want to print in the same order as the calls, make sure the print comes first, before calling draw again:
def draw(n: int):
    if n < 0:
        return
    print('#' * n)  # this should come first, before the next call
    draw(n - 1)
To understand the sequence of the recursion, use this visualization tool; just paste your provided script and run it:
https://pythontutor.com/visualize.html#mode=edit
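To make the order concrete, here is the original version annotated with the output it produces; nothing is printed until the deepest call returns:
def draw(n: int):
    if n < 0:
        return
    draw(n - 1)     # recurse all the way down to n = -1 first...
    print('#' * n)  # ...then print while the calls unwind

draw(3)
# Output, printed as the calls return (n = 0 prints an empty line):
#
# #
# ##
# ###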
|
using print() after invoking recursion
|
Can Someone tell me what exactly is happening here.Is the print statement executing after all the draw [3,2,1] are completed or it's happening simultaneously.I tried adding print(n) but still couldn't figure out. Is it unpacking after storing the values of ('#'*n).I am getting what I desired but just needed to understand what is actually happening
def draw(n:int):
if n<0:
return
draw(n-1)
print ('#'*n)
draw(3)
|
[
"the print comes after executing all the draw calls. if you want to print it in the same order make sure print comes first before calling the draw again\ndef draw(n:int):\n if n<0:\n return\n print ('#'*n) # this should come first before the next call\n draw(n-1)\n\nto understand the sequence of the recursion, follow this visualization tool. just write your provided script and run it.\nhttps://pythontutor.com/visualize.html#mode=edit\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_3.x",
"recursion",
"tail_recursion"
] |
stackoverflow_0074586621_python_python_3.x_recursion_tail_recursion.txt
|
Q:
Python Kivy: 2 sets of buttons and connection with each other
Maybe someone could give me a hint about what direction to go in, because I've gotten stuck on my problem. I would be very grateful.
So, the thing is, I am working on my Kivy project. The task is to create a line with 2 buttons, where one of them has to rename the other one in the line.
Actually it works, but only with 1 row. If we have a set of lines, 5 for instance, whatever line you choose, only the first button in the first line will be renamed, not the specified one. I have been trying to explore indexing of the button set and lists of buttons, but unsuccessfully.
py code is here:
class Radiators(Screen):
    btn_lst = []
    btn_cn_lst = []
    i = 0
    cn = 0

    def add_button_radiator_setting(self):
        self.i += 1
        self.btn_rad = Button(text="room")
        self.btn_lst.append(self.btn_rad)
        self.ids.button_grid_radiators.add_widget(self.btn_rad)
        self.cn = self.cn + 1
        self.btn_context = Button(text="...")
        self.btn_cn_lst.append(self.btn_context)
        self.btn_cn_lst[self.cn - 1].bind(on_press=self.rename_btn)
        self.ids.button_context_menu.add_widget(self.btn_context)

    def reject_button_radiator_setting(self):
        self.ids.button_grid_radiators.remove_widget(self.btn_lst[self.i - 1])
        self.i -= 1
        self.btn_lst.pop(-1)
        self.ids.button_context_menu.remove_widget(self.btn_cn_lst[self.cn - 1])
        self.cn -= 1
        self.btn_cn_lst.pop(-1)

    def rename_btn(self, ind):
        self.ids.button_grid_radiators.children[self.cn - 1].text = str(input("enter: "))

class radiatorsApp(App):
    pass

radiatorsApp().run()
kivy code is here:
Radiators:

<Radiators>:
    name: "Radiators"
    BoxLayout:
        orientation: "horizontal"
        size_hint: 1, 0.1
        pos_hint: {"top": 1}
        Label:
            text: "Radiators"
            font_size: 32
    BoxLayout:
        orientation: "horizontal"
        size_hint: 1, 0.8
        BoxLayout:
            size_hint: 0.15, 0.8
        BoxLayout:
            size_hint: 0.6, 0.8
            GridLayout:
                orientation: "tb-lr"
                id: button_grid_radiators
                row_force_default: True
                row_default_height: 40
                cols: 1
        BoxLayout:
            size_hint: 0.05, 0.8
            GridLayout:
                orientation: "tb-lr"
                id: button_context_menu
                row_force_default: True
                row_default_height: 40
                cols: 1
        BoxLayout:
            size_hint: 0.15, 0.8
    BoxLayout:
        size_hint: 1, 0.1
        Button:
            text: "<-- back to previous Window"
            on_press: root.manager.current = "Heat System"
        Button:
            text: "Reset"
        Button:
            text: "-"
            on_press: root.reject_button_radiator_setting()
        Button:
            text: "+"
            on_press: root.add_button_radiator_setting()
A:
The on_press callback gets the Button that was pressed as an argument. You can use that to find the other Button:
def rename_btn(self, pressed_button):
    index = self.btn_cn_lst.index(pressed_button)  # get index in list of Buttons
    butt = self.btn_lst[index]  # get the Button from the other list at the same index
    butt.text = str(input("enter: "))  # change the text of the Button
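An alternative sketch, not part of the answer above: capture the row index at bind time with functools.partial, so no list search is needed when the button is pressed (note the captured indices go stale if earlier rows are later removed):
from functools import partial
from kivy.uix.button import Button
from kivy.uix.screenmanager import Screen

class Radiators(Screen):
    btn_lst = []
    btn_cn_lst = []
    cn = 0

    def add_button_radiator_setting(self):
        self.btn_rad = Button(text="room")
        self.btn_lst.append(self.btn_rad)
        self.ids.button_grid_radiators.add_widget(self.btn_rad)
        self.cn += 1
        self.btn_context = Button(text="...")
        self.btn_cn_lst.append(self.btn_context)
        # bake the row index into the callback instead of searching a list later
        self.btn_context.bind(on_press=partial(self.rename_btn, self.cn - 1))
        self.ids.button_context_menu.add_widget(self.btn_context)

    def rename_btn(self, index, pressed_button):
        # partial() prepends the captured index; Kivy passes the pressed button itself
        self.btn_lst[index].text = input("enter: ")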
|
Python Kivy: 2 sets of buttons and connection with each other
|
Maybe someone would give me a hint what direction have I go in, couz I"ve stucked on my problem. I will be very grateful.
So, the thing is I am working on my Kivy project. The task is to evoke a line with 2 buttons, one of them have to rename another in the line.
Actually it works, but only with 1 row. If we have set of lines, 5 for instance, whatever line you would choose, only first button in first line will be renamed, not specified one. I am trying to explore indexing of button set and lists of buttons but unsuccessfully
py code is here:
class Radiators(Screen):
btn_lst = []
btn_cn_lst = []
i = 0
cn = 0
def add_button_radiator_setting(self):
self.i += 1
self.btn_rad = Button(text="room")
self.btn_lst.append(self.btn_rad)
self.ids.button_grid_radiators.add_widget(self.btn_rad)
self.cn = self.cn + 1
self.btn_context = Button(text="...")
self.btn_cn_lst.append(self.btn_context)
self.btn_cn_lst[self.cn-1].bind(on_press = self.rename_btn)
self.ids.button_context_menu.add_widget(self.btn_context)
def reject_button_radiator_setting(self):
self.ids.button_grid_radiators.remove_widget(self.btn_lst[self.i-1])
self.i -= 1
self.btn_lst.pop(-1)
self.ids.button_context_menu.remove_widget(self.btn_cn_lst[self.cn-1])
self.cn -= 1
self.btn_cn_lst.pop(-1)
def rename_btn(self, ind):
self.ids.button_grid_radiators.children[self.cn-1].text = str(input("enter: "))
class radiatorsApp(App):
pass
radiatorsApp().run()
kivy code is here:
Radiators:
<Radiators>:
name: "Radiators"
BoxLayout:
orientation: "horizontal"
size_hint: 1, 0.1
pos_hint: {"top": 1}
Label:
text: "Radiators"
font_size: 32
BoxLayout:
orientation: "horizontal"
size_hint: 1, 0.8
BoxLayout:
size_hint: 0.15, 0.8
BoxLayout:
size_hint: 0.6, 0.8
GridLayout:
orientation: "tb-lr"
id: button_grid_radiators
row_force_default: True
row_default_height: 40
cols: 1
BoxLayout:
size_hint: 0.05, 0.8
GridLayout:
orientation: "tb-lr"
id: button_context_menu
row_force_default: True
row_default_height: 40
cols: 1
BoxLayout:
size_hint: 0.15, 0.8
BoxLayout:
size_hint: 1, 0.1
Button:
text: "<-- back to previous Window"
on_press: root.manager.current = "Heat System"
Button:
text: "Reset"
Button:
text: "-"
on_press: root.reject_button_radiator_setting()
Button:
text: "+"
on_press: root.add_button_radiator_setting()
|
[
"The on_press method gets the Button that was pressed as an argument. You can use that to find the other Button:\ndef rename_btn(self, pressed_button):\n index = self.btn_cn_lst.index(pressed_button) # get index in list of Buttons\n butt = self.btn_lst[index] # get the Button from the other list at the same index\n butt.text = str(input(\"enter: \")) # change the text of the Button\n\n"
] |
[
0
] |
[] |
[] |
[
"kivy",
"python"
] |
stackoverflow_0074585788_kivy_python.txt
|
Q:
ModuleNotFoundError when import from constants file in python
app-main-folder
    /local
        /__init__.py
        /run.py
    constants.py
I am trying to import from constants in run.py and it's throwing this error:
Traceback (most recent call last):
  File "local/run.py", line 4, in <module>
    from __init__ import app
  File "/home/manavarthivenkat/ANUESERVICES--BACKEND/local/__init__.py", line 5, in <module>
    from constants import BaseConstants
ModuleNotFoundError: No module named 'constants'
A:
pip install constants

Try this in your shell, and then try running run.py.
Make sure you load the constants library.
A:
This is because Python assumes all your Python files are in the current directory; you need to tell the interpreter to look for the file somewhere else.
from app-main-folder.constants import constants # assuming constants is the name of your class in that constants.py
(Note that for this import to work, the package folder name must be a valid Python identifier, so a hyphenated name like app-main-folder would have to be renamed.)
A:
You can use sys.path.append to tell Python interpreter where to search the modules.
First, you can check the current Python path
import sys
from pprint import pprint
pprint(sys.path)
If the path to the directory of constants.py is not included in the output, you can manually add that path:
import sys
sys.path.append("/path/to/directory")  # the directory that contains constants.py, not the file itself
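In this particular layout, a sketch of what run.py could do, assuming run.py lives in app-main-folder/local/ and constants.py sits one level up:
import os
import sys

# add app-main-folder (the parent of this file's directory) to the search path
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from constants import BaseConstants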
|
ModuleNotFoundError when import from constants file in python
|
app-main-folder
/local
/__init__.py
/run.py
constants.py
I am trying to import from constants in run.py it's throwing this error
Traceback (most recent call last):
File "local/run.py", line 4, in <module>
from init import app
File "/home/manavarthivenkat/ANUESERVICES--BACKEND/local/init.py", line 5, in <module>
from constants import BaseConstants
ModuleNotFoundError: No module named 'constants'
|
[
"pip install constants\n\nTry this on your shell\nand try running run.py\nmake sure you load the constants library\n",
"this is because Python assumes all your python files are in the current directory. you should tell the compiler you are looking for the file somewhere else.\nfrom app-main-folder.constants import constants # assuming constants is the name of your class in that constants.py\n\n",
"You can use sys.path.append to tell Python interpreter where to search the modules.\nFirst, you can check the current Python path\nimport sys\nfrom pprint import pprint\npprint(sys.path)\n\nIf the path to the directory of constants.py is not included in the output, you can manually add that path, by\nimport sys\nsys.path.append(\"/path/to/constants.py\")\n\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074586603_python.txt
|
Q:
i made a function and then i used the same code that is in the function to get the minimum value through a for loop but i am getting different answers
The function calculates cost based on earnings, loan taken, fees, rates, and duration, but when I apply the exact same code that is in the function inside a for loop, the answers come out different. Can you help me out, as I am new to programming?
#income tax calculation and cost calc
# earning, loan y n, interest rate, years, fees
def IT_calc(earn, l, ir, d, f):
    interest_payable = f * ir * d
    amount_payable = f + interest_payable
    if l == 0:
        if earn <= 250000:
            income_tax = 0
        elif earn <= 500000:
            income_tax = (earn-250000)*0.05
        elif earn <= 750000:
            income_tax = (250000*0.05) + (earn-500000)*0.10
        elif earn <= 1000000:
            income_tax = (250000)*(0.05+0.10) + (earn-750000)*0.15
        elif earn <= 1250000:
            income_tax = (250000)*(0.05+0.10+0.15) + (earn-1000000)*0.20
        elif earn <= 1500000:
            income_tax = (250000)*(0.05+0.10+0.15+0.20) + (earn-1250000)*0.25
        else:
            income_tax = (250000)*(0.05+0.10+0.15+0.20+0.25) + (earn-1500000)*0.30
        cost = (income_tax * 1.04 * d) + f
        print('Your total cost over', d, 'years without loan is:', cost)
    else:
        earn = earn - (amount_payable//d)
        if earn <= 250000:
            income_tax = 0
        elif earn <= 500000:
            income_tax = (earn-250000)*0.05
        elif earn <= 750000:
            income_tax = (250000*0.05) + (earn-500000)*0.10
        elif earn <= 1000000:
            income_tax = (250000)*(0.05+0.10) + (earn-750000)*0.15
        elif earn <= 1250000:
            income_tax = (250000)*(0.05+0.10+0.15) + (earn-1000000)*0.20
        elif earn <= 1500000:
            income_tax = (250000)*(0.05+0.10+0.15+0.20) + (earn-1250000)*0.25
        else:
            income_tax = (250000)*(0.05+0.10+0.15+0.20+0.25) + (earn-1500000)*0.30
        cost = (income_tax * 1.04 * d) + amount_payable
        print('Your total cost over', d, 'years with loan is:', cost)
IT_calc(1200000,1,0.085,3,1600000)
>> Your total cost over 3 years with loan is: 2056568.104
#For loop
b = []
f = 1600000
ir = 0.085
earn = 1200000
l = 1
for t in range(1, 20):
    interest_payable = f * ir * t
    amount_payable = f + interest_payable
    if l == 0:
        if earn <= 250000:
            income_tax = 0
        elif earn <= 500000:
            income_tax = (earn-250000)*0.05
        elif earn <= 750000:
            income_tax = (250000*0.05) + (earn-500000)*0.10
        elif earn <= 1000000:
            income_tax = (250000)*(0.05+0.10) + (earn-750000)*0.15
        elif earn <= 1250000:
            income_tax = (250000)*(0.05+0.10+0.15) + (earn-1000000)*0.20
        elif earn <= 1500000:
            income_tax = (250000)*(0.05+0.10+0.15+0.20) + (earn-1250000)*0.25
        else:
            income_tax = (250000)*(0.05+0.10+0.15+0.20+0.25) + (earn-1500000)*0.30
        cost = (income_tax * 1.04 * t) + f
    else:
        earn = earn - (amount_payable//t)
        if earn <= 250000:
            income_tax = 0
        elif earn <= 500000:
            income_tax = (earn-250000)*0.05
        elif earn <= 750000:
            income_tax = (250000*0.05) + (earn-500000)*0.10
        elif earn <= 1000000:
            income_tax = (250000)*(0.05+0.10) + (earn-750000)*0.15
        elif earn <= 1250000:
            income_tax = (250000)*(0.05+0.10+0.15) + (earn-1000000)*0.20
        elif earn <= 1500000:
            income_tax = (250000)*(0.05+0.10+0.15+0.20) + (earn-1250000)*0.25
        else:
            income_tax = (250000)*(0.05+0.10+0.15+0.20+0.25) + (earn-1500000)*0.30
        cost = (income_tax * 1.04 * t) + amount_payable
    b.append(cost)
b
>>[1736000.0,
1872000.0,
2008000.0,
2144000.0,
2280000.0,
2416000.0,
2552000.0,
2688000.0,
2824000.0,
2960000.0,
3096000.0,
3232000.0,
3368000.0,
3504000.0,
3640000.0,
3776000.0,
3912000.0,
4048000.0,
4184000.0]
I was trying to estimate the minimum costs, but I am getting the wrong answer through the for loop. Am I missing something?
A:
In your for loop, you are changing the value of earn on every loop with the line earn = earn-(amount_payable//t), so that it is not starting at 1200000 for each case.
Either reset it inside the for loop, or as OneMadGypsy suggests, change the IT_calc function to return the value instead of printing and use it in your for loop instead of having a slightly modified copy of the entire function.
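A minimal sketch of that refactor, assuming IT_calc is changed to return cost instead of printing it:
b = [IT_calc(1200000, 1, 0.085, t, 1600000) for t in range(1, 20)]
print(min(b))  # minimum cost across durations 1..19, with earn reset on every call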
|
i made a function and then i used the same code that is in the function to get the minimum value through a for loop but i am getting different answers
|
the function is to calculate cost based on earning, loan taken, fees, rates and duration, but when applying the exact same code that is there in the function to a for loop the answers are coming different, can help me out as i am new to programming
#income tax calculation and cost calc
# earning, loan y n, interest rate, years, fees
def IT_calc(earn,l,ir,d,f):
interest_payable = f * ir * d
amount_payable = f + interest_payable
if l == 0:
if earn <= 250000:
income_tax = 0
elif earn <= 500000:
income_tax = (earn-250000)*0.05
elif earn <= 750000:
income_tax = (250000*0.05)+ (earn-500000)*0.10
elif earn <= 1000000:
income_tax = (250000)*(0.05+0.10) + (earn-750000)*0.15
elif earn <= 1250000:
income_tax = (250000)*(0.05+0.10+0.15) + (earn-1000000)*0.20
elif earn <= 1500000:
income_tax = (250000)*(0.05+0.10+0.15+0.20) + (earn-1250000)*0.25
else:
income_tax = (250000)*(0.05+0.10+0.15+0.20+0.25) + (earn-1500000)*0.30
cost = (income_tax * 1.04 * d) + f
print('Your total cost over', d, 'years without loan is:', cost)
else:
earn = earn-(amount_payable//d)
if earn <= 250000:
income_tax = 0
elif earn <= 500000:
income_tax = (earn-250000)*0.05
elif earn <= 750000:
income_tax = (250000*0.05)+ (earn-500000)*0.10
elif earn <= 1000000:
income_tax = (250000)*(0.05+0.10) + (earn-750000)*0.15
elif earn <= 1250000:
income_tax = (250000)*(0.05+0.10+0.15) + (earn-1000000)*0.20
elif earn <= 1500000:
income_tax = (250000)*(0.05+0.10+0.15+0.20) + (earn-1250000)*0.25
else:
income_tax = (250000)*(0.05+0.10+0.15+0.20+0.25) + (earn-1500000)*0.30
cost = (income_tax * 1.04 * d) + amount_payable
print('Your total cost over', d, 'years with loan is:', cost)
IT_calc(1200000,1,0.085,3,1600000)
>> Your total cost over 3 years with loan is: 2056568.104
#For loop
b = []
f = 1600000
ir = 0.085
earn = 1200000
l = 1
for t in range(1,20):
interest_payable = f * ir * t
amount_payable = f + interest_payable
if l == 0:
if earn <= 250000:
income_tax = 0
elif earn <= 500000:
income_tax = (earn-250000)*0.05
elif earn <= 750000:
income_tax = (250000*0.05)+ (earn-500000)*0.10
elif earn <= 1000000:
income_tax = (250000)*(0.05+0.10) + (earn-750000)*0.15
elif earn <= 1250000:
income_tax = (250000)*(0.05+0.10+0.15) + (earn-1000000)*0.20
elif earn <= 1500000:
income_tax = (250000)*(0.05+0.10+0.15+0.20) + (earn-1250000)*0.25
else:
income_tax = (250000)*(0.05+0.10+0.15+0.20+0.25) + (earn-1500000)*0.30
cost = (income_tax * 1.04 * t) + f
else:
earn = earn-(amount_payable//t)
if earn <= 250000:
income_tax = 0
elif earn <= 500000:
income_tax = (earn-250000)*0.05
elif earn <= 750000:
income_tax = (250000*0.05)+ (earn-500000)*0.10
elif earn <= 1000000:
income_tax = (250000)*(0.05+0.10) + (earn-750000)*0.15
elif earn <= 1250000:
income_tax = (250000)*(0.05+0.10+0.15) + (earn-1000000)*0.20
elif earn <= 1500000:
income_tax = (250000)*(0.05+0.10+0.15+0.20) + (earn-1250000)*0.25
else:
income_tax = (250000)*(0.05+0.10+0.15+0.20+0.25) + (earn-1500000)*0.30
cost = (income_tax * 1.04 * t) + amount_payable
b.append(cost)
b
>>[1736000.0,
1872000.0,
2008000.0,
2144000.0,
2280000.0,
2416000.0,
2552000.0,
2688000.0,
2824000.0,
2960000.0,
3096000.0,
3232000.0,
3368000.0,
3504000.0,
3640000.0,
3776000.0,
3912000.0,
4048000.0,
4184000.0]
i was trying to estimate the minimum costs but i am getting the wrong answer through the for loop, am i missing something?
|
[
"In your for loop, you are changing the value of earn on every loop with the line earn = earn-(amount_payable//t), so that it is not starting at 1200000 for each case.\nEither reset it inside the for loop, or as OneMadGypsy suggests, change the IT_calc function to return the value instead of printing and use it in your for loop instead of having a slightly modified copy of the entire function.\n"
] |
[
1
] |
[] |
[] |
[
"jupyter_notebook",
"python"
] |
stackoverflow_0074586664_jupyter_notebook_python.txt
|
Q:
metadata-generation-failed while installing Scipy
I have been trying to install SciPy and I got an error called metadata-generation-failed, and I came over to Stack Overflow looking for a solution, but none of them worked for me. Neither updating pip, nor using flags such as --use-deprecated=legacy-resolver or --use-deprecated=backtrack-on-build-failures. I have run out of ideas; if somebody can help, I would appreciate it. The output is right below:
Collecting scipy
Using cached scipy-1.9.3.tar.gz (42.1 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [75 lines of output]
The Meson build system
Version: 0.64.1
Source dir: /private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-install-6luprtwk/scipy
Build dir: /private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-install-6luprtwk/scipy/.mesonpy-be8nnvg4/build
Build type: native build
Project name: SciPy
Project version: 1.9.3
C compiler for the host machine: cc (clang 12.0.5 "Apple clang version 12.0.5 (clang-1205.0.22.11)")
C linker for the host machine: cc ld64 650.9
C++ compiler for the host machine: c++ (clang 12.0.5 "Apple clang version 12.0.5 (clang-1205.0.22.11)")
C++ linker for the host machine: c++ ld64 650.9
Host machine cpu family: aarch64
Host machine cpu: aarch64
Compiler for C supports arguments -Wno-unused-but-set-variable: NO
Compiler for C supports arguments -Wno-unused-but-set-variable: NO (cached)
Compiler for C supports arguments -Wno-unused-function: YES
Compiler for C supports arguments -Wno-conversion: YES
Compiler for C supports arguments -Wno-misleading-indentation: YES
Compiler for C supports arguments -Wno-incompatible-pointer-types: YES
Library m found: YES
../../meson.build:57:0: ERROR: Unknown compiler(s): [['gfortran'], ['flang'], ['nvfortran'], ['pgfortran'], ['ifort'], ['ifx'], ['g95']]
The following exception(s) were encountered:
Running `gfortran --version` gave "[Errno 2] No such file or directory: 'gfortran'"
Running `gfortran -V` gave "[Errno 2] No such file or directory: 'gfortran'"
Running `flang --version` gave "[Errno 2] No such file or directory: 'flang'"
Running `flang -V` gave "[Errno 2] No such file or directory: 'flang'"
Running `nvfortran --version` gave "[Errno 2] No such file or directory: 'nvfortran'"
Running `nvfortran -V` gave "[Errno 2] No such file or directory: 'nvfortran'"
Running `pgfortran --version` gave "[Errno 2] No such file or directory: 'pgfortran'"
Running `pgfortran -V` gave "[Errno 2] No such file or directory: 'pgfortran'"
Running `ifort --version` gave "[Errno 2] No such file or directory: 'ifort'"
Running `ifort -V` gave "[Errno 2] No such file or directory: 'ifort'"
Running `ifx --version` gave "[Errno 2] No such file or directory: 'ifx'"
Running `ifx -V` gave "[Errno 2] No such file or directory: 'ifx'"
Running `g95 --version` gave "[Errno 2] No such file or directory: 'g95'"
Running `g95 -V` gave "[Errno 2] No such file or directory: 'g95'"
A full log can be found at /private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-install-6luprtwk/scipy/.mesonpy-be8nnvg4/build/meson-logs/meson-log.txt
+ meson setup --prefix=/Library/Frameworks/Python.framework/Versions/3.10 /private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-install-6luprtwk/scipy /private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-install-6luprtwk/scipy/.mesonpy-be8nnvg4/build --native-file=/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-install-6luprtwk/scipy/.mesonpy-native-file.ini -Ddebug=false -Doptimization=2
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 144, in prepare_metadata_for_build_wheel
hook = backend.prepare_metadata_for_build_wheel
AttributeError: module 'mesonpy' has no attribute 'prepare_metadata_for_build_wheel'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 351, in <module>
main()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 333, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 148, in prepare_metadata_for_build_wheel
whl_basename = backend.build_wheel(metadata_directory, config_settings)
File "/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-build-env-5s9erroi/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 1060, in build_wheel
with _project(config_settings) as project:
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-build-env-5s9erroi/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 975, in _project
with Project.with_temp_working_dir(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-build-env-5s9erroi/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 750, in with_temp_working_dir
yield cls(source_dir, tmpdir, build_dir, meson_args)
File "/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-build-env-5s9erroi/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 632, in __init__
self._configure(reconfigure=bool(build_dir) and not native_file_mismatch)
File "/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-build-env-5s9erroi/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 680, in _configure
self._meson('setup', *setup_args)
File "/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-build-env-5s9erroi/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 657, in _meson
return self._proc('meson', *args)
File "/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-build-env-5s9erroi/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 652, in _proc
subprocess.check_call(list(args), env=self._env)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['meson', 'setup', '--prefix=/Library/Frameworks/Python.framework/Versions/3.10', '/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-install-6luprtwk/scipy', '/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-install-6luprtwk/scipy/.mesonpy-be8nnvg4/build', '--native-file=/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-install-6luprtwk/scipy/.mesonpy-native-file.ini', '-Ddebug=false', '-Doptimization=2']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> scipy
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
I tried using flags such as --use-deprecated=legacy-resolver and --use-deprecated=backtrack-on-build-failures that I found in several cases on this site, expecting to solve my issue and finally get to install the library, but none of them worked, and I always end up with the same result:
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> scipy
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
A:
Are you trying to install scipy on macOS 11? If so, then we don't support scipy on that version of macOS. For M1 wheels you need to be on macOS 12 or later; I think Intel is OK on macOS 11.
If you are trying to install scipy from source, you need various packages installed; the absence of gfortran is what's giving this error.
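Two usual ways out, sketched here as suggestions rather than official guidance: install a Fortran compiler so the source build can proceed (with Homebrew, gfortran ships as part of the gcc formula), or tell pip to only accept a prebuilt wheel, where one exists for your OS and architecture:
brew install gcc                        # provides gfortran
pip install --only-binary=:all: scipy   # fail fast instead of building from source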
|
metadata-generation-failed while installing Scipy
|
I have been trying to install Scipy and I got an error called metadata-generation-failed, and I came over to stackoverflow looking for a solution but non of them worked for me. Neither updating pip, nor using commands such as --use-deprecated=legacy-resolver nor --use-deprecated=backtrack-on-build-failures. I ran out of ideas, if somebody can help I would appreciate. Code is right below:
`
Collecting scipy
Using cached scipy-1.9.3.tar.gz (42.1 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [75 lines of output]
The Meson build system
Version: 0.64.1
Source dir: /private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-install-6luprtwk/scipy
Build dir: /private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-install-6luprtwk/scipy/.mesonpy-be8nnvg4/build
Build type: native build
Project name: SciPy
Project version: 1.9.3
C compiler for the host machine: cc (clang 12.0.5 "Apple clang version 12.0.5 (clang-1205.0.22.11)")
C linker for the host machine: cc ld64 650.9
C++ compiler for the host machine: c++ (clang 12.0.5 "Apple clang version 12.0.5 (clang-1205.0.22.11)")
C++ linker for the host machine: c++ ld64 650.9
Host machine cpu family: aarch64
Host machine cpu: aarch64
Compiler for C supports arguments -Wno-unused-but-set-variable: NO
Compiler for C supports arguments -Wno-unused-but-set-variable: NO (cached)
Compiler for C supports arguments -Wno-unused-function: YES
Compiler for C supports arguments -Wno-conversion: YES
Compiler for C supports arguments -Wno-misleading-indentation: YES
Compiler for C supports arguments -Wno-incompatible-pointer-types: YES
Library m found: YES
../../meson.build:57:0: ERROR: Unknown compiler(s): [['gfortran'], ['flang'], ['nvfortran'], ['pgfortran'], ['ifort'], ['ifx'], ['g95']]
The following exception(s) were encountered:
Running `gfortran --version` gave "[Errno 2] No such file or directory: 'gfortran'"
Running `gfortran -V` gave "[Errno 2] No such file or directory: 'gfortran'"
Running `flang --version` gave "[Errno 2] No such file or directory: 'flang'"
Running `flang -V` gave "[Errno 2] No such file or directory: 'flang'"
Running `nvfortran --version` gave "[Errno 2] No such file or directory: 'nvfortran'"
Running `nvfortran -V` gave "[Errno 2] No such file or directory: 'nvfortran'"
Running `pgfortran --version` gave "[Errno 2] No such file or directory: 'pgfortran'"
Running `pgfortran -V` gave "[Errno 2] No such file or directory: 'pgfortran'"
Running `ifort --version` gave "[Errno 2] No such file or directory: 'ifort'"
Running `ifort -V` gave "[Errno 2] No such file or directory: 'ifort'"
Running `ifx --version` gave "[Errno 2] No such file or directory: 'ifx'"
Running `ifx -V` gave "[Errno 2] No such file or directory: 'ifx'"
Running `g95 --version` gave "[Errno 2] No such file or directory: 'g95'"
Running `g95 -V` gave "[Errno 2] No such file or directory: 'g95'"
A full log can be found at /private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-install-6luprtwk/scipy/.mesonpy-be8nnvg4/build/meson-logs/meson-log.txt
+ meson setup --prefix=/Library/Frameworks/Python.framework/Versions/3.10 /private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-install-6luprtwk/scipy /private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-install-6luprtwk/scipy/.mesonpy-be8nnvg4/build --native-file=/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-install-6luprtwk/scipy/.mesonpy-native-file.ini -Ddebug=false -Doptimization=2
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 144, in prepare_metadata_for_build_wheel
hook = backend.prepare_metadata_for_build_wheel
AttributeError: module 'mesonpy' has no attribute 'prepare_metadata_for_build_wheel'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 351, in <module>
main()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 333, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 148, in prepare_metadata_for_build_wheel
whl_basename = backend.build_wheel(metadata_directory, config_settings)
File "/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-build-env-5s9erroi/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 1060, in build_wheel
with _project(config_settings) as project:
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-build-env-5s9erroi/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 975, in _project
with Project.with_temp_working_dir(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-build-env-5s9erroi/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 750, in with_temp_working_dir
yield cls(source_dir, tmpdir, build_dir, meson_args)
File "/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-build-env-5s9erroi/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 632, in __init__
self._configure(reconfigure=bool(build_dir) and not native_file_mismatch)
File "/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-build-env-5s9erroi/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 680, in _configure
self._meson('setup', *setup_args)
File "/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-build-env-5s9erroi/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 657, in _meson
return self._proc('meson', *args)
File "/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-build-env-5s9erroi/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 652, in _proc
subprocess.check_call(list(args), env=self._env)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['meson', 'setup', '--prefix=/Library/Frameworks/Python.framework/Versions/3.10', '/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-install-6luprtwk/scipy', '/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-install-6luprtwk/scipy/.mesonpy-be8nnvg4/build', '--native-file=/private/var/folders/sy/vl408hcx0d11wftbc8rshy9r0000gn/T/pip-install-6luprtwk/scipy/.mesonpy-native-file.ini', '-Ddebug=false', '-Doptimization=2']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> scipy
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
`
I tried using commands such as --use-deprecated=legacy-resolver and --use-deprecated=backtrack-on-build-failures that I found in several cases on this page, expecting to solve my issue and finally get to install the library, but non of them worked, and I always end up with the same result:
`
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> scipy
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
`
|
[
"Are you trying to install scipy on macOS 11. If so, then we don't support scipy on that version of macOS. For M1 wheels you need to be on macOS 12 or later. I think Intel is ok on macOS 11.\nIf you are trying to install scipy from source you need to install various packages for installation, the absence of gfortran is what's giving this error.\n"
] |
[
1
] |
[] |
[] |
[
"macos",
"python",
"scipy"
] |
stackoverflow_0074565108_macos_python_scipy.txt
|
Q:
Hitting connection paused fork issues with pymongo cause I need to access to db to configure before multiprocessing starts
I've been struggling with this for a couple of months now and have tried a lot of different things to alleviate it, but I am not sure what to do anymore. All the examples that I see are different from what I need, and in my case they just wouldn't work.
To preface the problem, I have processor applications that get spawned by a manager as a docker container. The processor is a single class that gets run in a forever while loop, processing the same list of items over and over again and running a function on them. The code I'm working with is quite large, so I created a smaller version of the problem below.
This is how I create my engine:
db.py
from os import getenv, getpid
from pymongo import MongoClient

_mongo_client = None
_mongo_client_pid = None

def get_mongodb_uri(MONGO_DB_HOST, MONGO_DB_PORT) -> str:
    return 'mongodb://{}:{}/{}'.format(MONGO_DB_HOST, MONGO_DB_PORT, 'taskprocessor')

def get_db_engine():
    global _mongo_client, _mongo_client_pid
    curr_pid = getpid()
    if curr_pid != _mongo_client_pid:
        # host/port read from the environment so the call matches the signature above
        uri = get_mongodb_uri(getenv("MONGO_DB_HOST"), getenv("MONGO_DB_PORT"))
        _mongo_client = MongoClient(uri, connect=False)
        _mongo_client_pid = curr_pid
    return _mongo_client

def get_db(name):
    return get_db_engine()['taskprocessor'][name]
These are my DB models
processor.py
from uuid import uuid4
from taskprocessor.db import get_db

class ProcessorModel():
    db = get_db("processors")

    def __init__(self, **kwargs):
        self.uid = kwargs.get('uid', str(uuid4()))
        self.exceptions = kwargs.get('exceptions', [])
        self.to_process = kwargs.get('to_process', [])
        self.functions = kwargs.get('functions', ["int", "round"])

    def save(self):
        return self.db.insert_one(self.__dict__).inserted_id is not None

    @classmethod
    def get(cls, uid):
        res = cls.db.find_one(dict(uid=uid))
        return ProcessorModel(**res)
result.py
from uuid import uuid4
from taskprocessor.db import get_db

class ResultModel():
    db = get_db("results")

    def __init__(self, **kwargs):
        self.uid = kwargs.get('uid', str(uuid4()))
        self.res = kwargs.get('res', dict())

    def save(self):
        return self.db.insert_one(self.__dict__).inserted_id is not None
And here is my main.py that gets started as a docker container to run a forever loop:
import os
from time import sleep
from taskprocessor.db.processor import ProcessorModel
from taskprocessor.db.result import ResultModel
from multiprocessing import Pool

class Processor:
    def __init__(self):
        self.id = os.getenv("PROCESSOR_ID")
        self.db_model = ProcessorModel.get(self.id)
        self.to_process = self.db_model.to_process  # list of floats [1.23, 1.535, 1.33499, 242.2352, 352.232]
        self.functions = self.db_model.functions  # list i.e ["round", "int"]

    def run(self):
        while True:
            try:
                pool = Pool(2)
                res = list(pool.map(self.analyse, self.to_process))
                print(res)
                sleep(100)
            except Exception as e:
                self.db_model = ProcessorModel.get(os.getenv("PROCESSOR_ID"))
                self.db_model.exceptions.append(f"exception {e}")
                self.db_model.save()
                print("Exception")

    def analyse(self, item):
        res = {}
        for func in self.functions:
            if func == "round":
                res['round'] = round(item)
            if func == "int":
                res['int'] = int(item)
        ResultModel(res=res).save()
        return res

if __name__ == "__main__":
    p = Processor()
    p.run()
I've tried setting connect=False, and even tried closing the connection after the configuration, but then I end up with connection-closed errors. I also tried a scheme of recognizing the PID and giving each process a different client, but that still did not help.
Almost all examples I see are ones where DB access is not needed before the multiprocessing fork. In my case the initial configuration is heavy and cannot efficiently be done every single time in the process loop. Furthermore, the items to process themselves depend on data from the DB.
I can live with not being able to save the exceptions to the db object from the main pid.
I'm seeing the error logs around fork safety, as well as hitting connection-pool-paused errors, as a symptom of this issue.
A:
If anybody sees this: I was using pymongo 4.0.2, upgraded to 4.3.3, and am no longer seeing the errors I was previously seeing.
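Independently of the version bump, here is a minimal sketch of the usual fork-safety pattern (URI and collection names are placeholders): create the MongoClient after the fork, once per worker, via a Pool initializer:
from multiprocessing import Pool
from pymongo import MongoClient

worker_client = None  # one client per worker process

def init_worker():
    global worker_client
    worker_client = MongoClient("mongodb://localhost:27017/")  # created post-fork

def analyse(item):
    res = {"round": round(item), "int": int(item)}
    worker_client["taskprocessor"]["results"].insert_one({"res": res})
    return res

if __name__ == "__main__":
    with Pool(2, initializer=init_worker) as pool:
        print(pool.map(analyse, [1.23, 1.535, 242.2352]))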
|
Hitting connection paused fork issues with pymongo cause I need to access to db to configure before multiprocessing starts
|
I've been struggling with this for a couple months now and have tried a lot of different things to try to alleviate but am not sure what to do anymore. All the examples that I see are different than what I need and in my case it just wouldn't work.
To preface the problem, I have processor applications that get spawned by a manager as a docker container. The processor is a single class that gets run in a forever while loop that processes the same list of items over and over again and runs a function on them. The code I'm working with is quite large so I created a smaller version of the problem below.
this is how I create my engine
db.py
from os import getpid
from pymongo import MongoClient
_mongo_client = None
_mongo_client_pid = None
def get_mongodb_uri(MONGO_DB_HOST, MONGO_DB_PORT) -> str:
return 'mongodb://{}:{}/{}'.format(MONGO_DB_HOST, MONGO_DB_PORT, 'taskprocessor')
def get_db_engine():
global _mongo_client, _mongo_client_pid
curr_pid = getpid()
if curr_pid != _mongo_client_pid:
_mongo_client = MongoClient(get_mongodb_uri(), connect=False)
_mongo_client_pid = curr_pid
return _mongo_client
def get_db(name):
return get_db_engine()['taskprocessor'][name]
These are my DB models
processor.py
from uuid import uuid4
from taskprocessor.db import get_db
class ProcessorModel():
db = get_db("processors")
def __init__(self, **kwargs):
self.uid = kwargs.get('uid', str(uuid4()))
self.exceptions = kwargs.get('exceptions', [])
self.to_process = kwargs.get('to_process', [])
self.functions = kwargs.get('functions', ["int", "round"])
def save(self):
return self.db.insert_one(self.__dict__).inserted_id is not None
@classmethod
def get(cls, uid):
res = cls.db.find_one(dict(uid=uid))
return ProcessorModel(**res)
result.py
from uuid import uuid4
from taskprocessor.db import get_db
class ResultModel():
db = get_db("results")
def __init__(self, **kwargs):
self.uid = kwargs.get('uid', str(uuid4()))
self.res = kwargs.get('res', dict())
def save(self):
return self.db.insert_one(self.__dict__).inserted_id is not None
And my main.py that gets started as a docker container
to run a forever loop
import os
from time import sleep
from taskprocessor.db.processor import ProcessorModel
from taskprocessor.db.result import ResultModel
from multiprocessing import Pool
class Processor:
def __init__(self):
self.id = os.getenv("PROCESSOR_ID")
self.db_model = ProcessorModel.get(self.id)
self.to_process = self.db_model.to_process # list of floats [1.23, 1.535, 1.33499, 242.2352, 352.232]
self.functions = self.db_model.functions # list i.e ["round", "int"]
def run(self):
while True:
try:
pool = Pool(2)
res = list(pool.map(self.analyse, self.to_process))
print(res)
sleep(100)
except Exception as e:
self.db_model = ProcessorModel.get(os.getenv("PROCESSOR_ID"))
self.db_model.exceptions.append(f"exception {e}")
self.db_model.save()
print("Exception")
def analyse(self, item):
res = {}
for func in self.functions:
if func == "round":
res['round'] = round(item)
if func == "int":
res['int'] = int(item)
ResultModel(res=res).save()
return res
if __name__ == "__main__":
p = Processor()
p.run()
I've tried setting connect=False, or even trying to close the connection after the configuration but then end up with connection closed errors. I also tried using a system of recognizing the PID and giving a different client but that still did not help.
Almost all examples I see are where the DB access is not needed before the multiprocessing fork. In my case the initial configuration is heavy and cannot be efficient to do every single time in the process loop. Furthermore the items to process themselves depends on the data from the DB.
I can live with not being able to save the exceptions to the db object from the main pid.
I'm seeing the error logs around fork safety as well as hitting connection pool paused errors as as symptom of this issue.
|
[
"If anybody sees this, I was using pymongo 4.0.2 and upgraded to 4.3.3 and not seeing the errors I was previously seeing.\n"
] |
[
0
] |
[] |
[] |
[
"fork",
"mongodb",
"multiprocessing",
"pymongo",
"python"
] |
stackoverflow_0074555523_fork_mongodb_multiprocessing_pymongo_python.txt
|
Q:
BUG: Cannot install SciPy 1.9.3 in Python 3.10 in macOS
I tried to install SciPy from the terminal using pip and from IDLE using GitHub, but none of it worked.
The documentation says that it should support Python 3.10, so I do not know the reason for the issue.
Every other package I have had no issue installing.
Any ideas about how to solve this?
This is the error shown:
Collecting scipy
Using cached scipy-1.9.3.tar.gz (42.1 MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'error'
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [68 lines of output]
+ meson setup --native-file=/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-install-gkrcavxq/scipy_93c0e9d042a14457b9a3a4073762f4c2/.mesonpy-native-file.ini -Ddebug=false -Doptimization=2 --prefix=/Library/Frameworks/Python.framework/Versions/3.10 /private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-install-gkrcavxq/scipy_93c0e9d042a14457b9a3a4073762f4c2 /private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-install-gkrcavxq/scipy_93c0e9d042a14457b9a3a4073762f4c2/.mesonpy-flu011q5/build
The Meson build system
Version: 0.64.0
Source dir: /private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-install-gkrcavxq/scipy_93c0e9d042a14457b9a3a4073762f4c2
Build dir: /private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-install-gkrcavxq/scipy_93c0e9d042a14457b9a3a4073762f4c2/.mesonpy-flu011q5/build
Build type: native build
Project name: SciPy
Project version: 1.9.3
C compiler for the host machine: cc (clang 12.0.5 "Apple clang version 12.0.5 (clang-1205.0.22.11)")
C linker for the host machine: cc ld64 650.9
C++ compiler for the host machine: c++ (clang 12.0.5 "Apple clang version 12.0.5 (clang-1205.0.22.11)")
C++ linker for the host machine: c++ ld64 650.9
Host machine cpu family: aarch64
Host machine cpu: aarch64
Compiler for C supports arguments -Wno-unused-but-set-variable: NO
Compiler for C supports arguments -Wno-unused-but-set-variable: NO (cached)
Compiler for C supports arguments -Wno-unused-function: YES
Compiler for C supports arguments -Wno-conversion: YES
Compiler for C supports arguments -Wno-misleading-indentation: YES
Compiler for C supports arguments -Wno-incompatible-pointer-types: YES
Library m found: YES
../../meson.build:57:0: ERROR: Unknown compiler(s): [['gfortran'], ['flang'], ['nvfortran'], ['pgfortran'], ['ifort'], ['ifx'], ['g95']]
The following exception(s) were encountered:
Running `gfortran --version` gave "[Errno 2] No such file or directory: 'gfortran'"
Running `gfortran -V` gave "[Errno 2] No such file or directory: 'gfortran'"
Running `flang --version` gave "[Errno 2] No such file or directory: 'flang'"
Running `flang -V` gave "[Errno 2] No such file or directory: 'flang'"
Running `nvfortran --version` gave "[Errno 2] No such file or directory: 'nvfortran'"
Running `nvfortran -V` gave "[Errno 2] No such file or directory: 'nvfortran'"
Running `pgfortran --version` gave "[Errno 2] No such file or directory: 'pgfortran'"
Running `pgfortran -V` gave "[Errno 2] No such file or directory: 'pgfortran'"
Running `ifort --version` gave "[Errno 2] No such file or directory: 'ifort'"
Running `ifort -V` gave "[Errno 2] No such file or directory: 'ifort'"
Running `ifx --version` gave "[Errno 2] No such file or directory: 'ifx'"
Running `ifx -V` gave "[Errno 2] No such file or directory: 'ifx'"
Running `g95 --version` gave "[Errno 2] No such file or directory: 'g95'"
Running `g95 -V` gave "[Errno 2] No such file or directory: 'g95'"
A full log can be found at /private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-install-gkrcavxq/scipy_93c0e9d042a14457b9a3a4073762f4c2/.mesonpy-flu011q5/build/meson-logs/meson-log.txt
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 351, in <module>
main()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 333, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-build-env-0imxa7cb/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 969, in get_requires_for_build_wheel
with _project(config_settings) as project:
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-build-env-0imxa7cb/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 948, in _project
with Project.with_temp_working_dir(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-build-env-0imxa7cb/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 777, in with_temp_working_dir
yield cls(source_dir, tmpdir, build_dir)
File "/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-build-env-0imxa7cb/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 682, in __init__
self._configure(reconfigure=bool(build_dir) and not native_file_mismatch)
File "/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-build-env-0imxa7cb/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 713, in _configure
self._meson(
File "/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-build-env-0imxa7cb/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 696, in _meson
return self._proc('meson', *args)
File "/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-build-env-0imxa7cb/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 691, in _proc
subprocess.check_call(list(args))
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['meson', 'setup', '--native-file=/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-install-gkrcavxq/scipy_93c0e9d042a14457b9a3a4073762f4c2/.mesonpy-native-file.ini', '-Ddebug=false', '-Doptimization=2', '--prefix=/Library/Frameworks/Python.framework/Versions/3.10', '/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-install-gkrcavxq/scipy_93c0e9d042a14457b9a3a4073762f4c2', '/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-install-gkrcavxq/scipy_93c0e9d042a14457b9a3a4073762f4c2/.mesonpy-flu011q5/build']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
A:
I'm guessing you're on macOS 11 with an M1? (Please always give the OS and version as part of a question.) If so, SciPy doesn't build wheels for macOS 11 + M1; there are a few bugs that can't be worked around. I advise upgrading to macOS 12 in this situation.
|
BUG: Cannot install SciPy 1.9.3 in Python 3.10 in macOS
|
I tried to install SciPy from the terminal using pip and from IDLE via GitHub, but neither worked.
The documentation says that it should support Python 3.10, so I do not know the cause of the issue.
I have had no issue installing every other package.
Any ideas about how to solve this?
This is the error shown:
Collecting scipy
Using cached scipy-1.9.3.tar.gz (42.1 MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'error'
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [68 lines of output]
+ meson setup --native-file=/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-install-gkrcavxq/scipy_93c0e9d042a14457b9a3a4073762f4c2/.mesonpy-native-file.ini -Ddebug=false -Doptimization=2 --prefix=/Library/Frameworks/Python.framework/Versions/3.10 /private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-install-gkrcavxq/scipy_93c0e9d042a14457b9a3a4073762f4c2 /private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-install-gkrcavxq/scipy_93c0e9d042a14457b9a3a4073762f4c2/.mesonpy-flu011q5/build
The Meson build system
Version: 0.64.0
Source dir: /private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-install-gkrcavxq/scipy_93c0e9d042a14457b9a3a4073762f4c2
Build dir: /private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-install-gkrcavxq/scipy_93c0e9d042a14457b9a3a4073762f4c2/.mesonpy-flu011q5/build
Build type: native build
Project name: SciPy
Project version: 1.9.3
C compiler for the host machine: cc (clang 12.0.5 "Apple clang version 12.0.5 (clang-1205.0.22.11)")
C linker for the host machine: cc ld64 650.9
C++ compiler for the host machine: c++ (clang 12.0.5 "Apple clang version 12.0.5 (clang-1205.0.22.11)")
C++ linker for the host machine: c++ ld64 650.9
Host machine cpu family: aarch64
Host machine cpu: aarch64
Compiler for C supports arguments -Wno-unused-but-set-variable: NO
Compiler for C supports arguments -Wno-unused-but-set-variable: NO (cached)
Compiler for C supports arguments -Wno-unused-function: YES
Compiler for C supports arguments -Wno-conversion: YES
Compiler for C supports arguments -Wno-misleading-indentation: YES
Compiler for C supports arguments -Wno-incompatible-pointer-types: YES
Library m found: YES
../../meson.build:57:0: ERROR: Unknown compiler(s): [['gfortran'], ['flang'], ['nvfortran'], ['pgfortran'], ['ifort'], ['ifx'], ['g95']]
The following exception(s) were encountered:
Running `gfortran --version` gave "[Errno 2] No such file or directory: 'gfortran'"
Running `gfortran -V` gave "[Errno 2] No such file or directory: 'gfortran'"
Running `flang --version` gave "[Errno 2] No such file or directory: 'flang'"
Running `flang -V` gave "[Errno 2] No such file or directory: 'flang'"
Running `nvfortran --version` gave "[Errno 2] No such file or directory: 'nvfortran'"
Running `nvfortran -V` gave "[Errno 2] No such file or directory: 'nvfortran'"
Running `pgfortran --version` gave "[Errno 2] No such file or directory: 'pgfortran'"
Running `pgfortran -V` gave "[Errno 2] No such file or directory: 'pgfortran'"
Running `ifort --version` gave "[Errno 2] No such file or directory: 'ifort'"
Running `ifort -V` gave "[Errno 2] No such file or directory: 'ifort'"
Running `ifx --version` gave "[Errno 2] No such file or directory: 'ifx'"
Running `ifx -V` gave "[Errno 2] No such file or directory: 'ifx'"
Running `g95 --version` gave "[Errno 2] No such file or directory: 'g95'"
Running `g95 -V` gave "[Errno 2] No such file or directory: 'g95'"
A full log can be found at /private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-install-gkrcavxq/scipy_93c0e9d042a14457b9a3a4073762f4c2/.mesonpy-flu011q5/build/meson-logs/meson-log.txt
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 351, in <module>
main()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 333, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-build-env-0imxa7cb/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 969, in get_requires_for_build_wheel
with _project(config_settings) as project:
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-build-env-0imxa7cb/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 948, in _project
with Project.with_temp_working_dir(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-build-env-0imxa7cb/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 777, in with_temp_working_dir
yield cls(source_dir, tmpdir, build_dir)
File "/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-build-env-0imxa7cb/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 682, in __init__
self._configure(reconfigure=bool(build_dir) and not native_file_mismatch)
File "/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-build-env-0imxa7cb/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 713, in _configure
self._meson(
File "/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-build-env-0imxa7cb/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 696, in _meson
return self._proc('meson', *args)
File "/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-build-env-0imxa7cb/overlay/lib/python3.10/site-packages/mesonpy/__init__.py", line 691, in _proc
subprocess.check_call(list(args))
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['meson', 'setup', '--native-file=/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-install-gkrcavxq/scipy_93c0e9d042a14457b9a3a4073762f4c2/.mesonpy-native-file.ini', '-Ddebug=false', '-Doptimization=2', '--prefix=/Library/Frameworks/Python.framework/Versions/3.10', '/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-install-gkrcavxq/scipy_93c0e9d042a14457b9a3a4073762f4c2', '/private/var/folders/tn/l95fzft50tb2hg345rn97ykr0000gn/T/pip-install-gkrcavxq/scipy_93c0e9d042a14457b9a3a4073762f4c2/.mesonpy-flu011q5/build']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
|
[
"I'm guessing you're on macOS 11 with M1? (Please always give the OS and version as part of a question). If so, then scipy doesn't make wheels for macOS11 + M1, there are a few bugs that can't be removed. I advise upgrading to macOS 12 in this situation.\n"
] |
[
0
] |
[] |
[] |
[
"installation",
"python",
"scipy"
] |
stackoverflow_0074512612_installation_python_scipy.txt
|
Q:
Only show certain variables in Stargazer output (Python, not R)
I am using Stargazer for Python (not R).
I trained six statsmodels models and stored them in a list named models.
I want to filter the independent variables that are displayed by Stargazer. How can this be done?
This is what I have so far:
# Import
from stargazer.stargazer import Stargazer
# There are six statsmodels objects in `models`
len(models)
> 6
# What I'm currently doing
star_out = Stargazer(models)
I'm looking for a method with which I can select only a subset of the independent variables used to train each model. Say, something along the lines of star_out.select_covariates(['var1','var8']).
Is this possible?
A:
You can use the .covariate_order() method to select which covariates you want to display:
star_out.covariate_order(['var1','var8'])
See more examples at https://github.com/mwburke/stargazer/blob/master/examples.ipynb
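For context, a minimal end-to-end sketch (the toy data, column names, and two-model setup below are made up for illustration, not from your code):
import numpy as np
import pandas as pd
import statsmodels.api as sm
from stargazer.stargazer import Stargazer

# Toy data with three regressors; only var1 and var8 will be displayed.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(100, 3)), columns=['var1', 'var2', 'var8'])
y = X['var1'] + 0.5 * X['var8'] + rng.normal(size=100)

models = [sm.OLS(y, sm.add_constant(X)).fit(),
          sm.OLS(y, sm.add_constant(X[['var1', 'var8']])).fit()]

star_out = Stargazer(models)
star_out.covariate_order(['var1', 'var8'])  # restrict the displayed rows
html = star_out.render_html()               # or star_out.render_latex()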
|
Only show certain variables in Stargazer output (Python, not R)
|
I am using Stargazer for Python (not R).
I trained six statsmodels models and stored them in a list named models.
I want to filter the independent variables that are displayed by Stargazer. How can this be done?
This is what I have so far:
# Import
from stargazer.stargazer import Stargazer
# There are six statsmodels objects in `models`
len(models)
> 6
# What I'm currently doing
star_out = Stargazer(models)
I'm looking for a method with which I can select only a subset of the independent variables used to train each model. Say, something along the lines of star_out.select_covariates(['var1','var8']).
Is this possible?
|
[
"You can use the .covariate_order() function to select which covariates you want to display.\nstar_out.covariate_order(['var1','var8'])\n\nsee more at https://github.com/mwburke/stargazer/blob/master/examples.ipynb\n"
] |
[
0
] |
[] |
[] |
[
"python",
"stargazer"
] |
stackoverflow_0073585682_python_stargazer.txt
|
Q:
Python colored output doesn't work except when piping
I'm using Git Bash on Windows inside Windows Terminal and I'm writing a python script which needs to output colored text. As an example, I have the following one-line script named example.py:
print('\033[35m\033[K' + 'hello world' + '\033[m\033[K')
When I run the command python example.py, I expect to see colored output, but instead I get this:
←[35m←[Khello world←[m←[K
However, if I run python example.py | cat, I get the colored output I expect. How weird. I also get nice colored output if I run the script from cmd instead of bash, or if I run the line from the live interpreter (but not if it is a child of bash).
Any ideas? If possible I prefer to solve this without bringing in dependencies like Colorama.
EDIT: I resigned myself to using Colorama after seeing the replies. All it took to fix it was a call to the aptly named colorama.just_fix_windows_console(). Dependency-less solutions still welcome.
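For reference, the whole fix amounts to this (requires colorama 0.4.6 or newer, which introduced the function):
import colorama

colorama.just_fix_windows_console()  # enable ANSI escape handling on Windows consoles
print('\033[35m\033[K' + 'hello world' + '\033[m\033[K')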
EDIT 2: Interestingly, this problem does not occur on my laptop which has what I thought was the exact same setup.
A:
For the record, I tried the following basic script,
#!/usr/bin/python3.8
print('\033[35m\033[K' + 'hello world' + '\033[m\033[K')
and I got the result you were likely looking for, namely the colored "hello world" output (screenshot omitted here).
Since you mentioned bash: to get the same effect from the shell, you need to read the printf section of the bash man page more closely.
Round brackets are used in AWK, C, and others, but bash's usage syntax makes no mention of them. You do, however, need to add a "format" specifier (NOT the same as AWK's), per the man page.
Your statement should be reworked as follows:
printf "%b\n" "\033[35m\033[K" "hello world" "\033[m\033[K"
OR ... you could just replace the printf "%b\n" with simply echo -e to get the same result.
Of possible interest, I also created for myself what I call a "Bourne Header" file, in which I defined some preset combinations with variable strings that can be re-used in other scripts by simply source'ing the *.bh file. I include it here for general usage.
#!/bin/sh
##########################################################################################################
### $Id: INCLUDES__TerminalEscape_SGR.bh,v 1.2 2022/09/03 01:57:31 root Exp root $
###
### This includes string variables defined to perform various substitutions for the ANSI Terminal Escape Sequences, i.e. SGR (Select Graphic Rendition subset)
##########################################################################################################
### REFERENCE: https://en.wikipedia.org/wiki/ANSI_escape_code
### https://www.ecma-international.org/publications-and-standards/standards/ecma-48/
### "\e" is same as "\033"
style_e()
{
boldON="\e[1m"
boldOFF="\e[0m"
italicON="\e[3m"
italicOFF="\e[0m"
underlineON="\e[4m"
underlineOFF="\e[0m"
blinkON="\e[5m"
blinkOFF="\e[0m"
cyanON="\e[96;1m"
cyanOFF="\e[0m"
cyanDarkON="\e[36;1m"
cyanDarkOFF="\e[0m"
greenON="\e[92;1m"
greenOFF="\e[0m"
yellowON="\e[93;1m"
yellowOFF="\e[0m"
redON="\e[91;1m"
redOFF="\e[0m"
orangeON="\e[33;1m"
orangeOFF="\e[0m"
blueON="\e[94;1m"
blueOFF="\e[0m"
blueSteelON="\e[34;1m"
blueSteelOFF="\e[0m"
darkBlueON="\e[38;2;95;95;255m"
darkBlueOFF="\e[0m"
magentaON="\e[95;1m"
magentaOFF="\e[0m"
}
style_o()
{
boldON="\033[1m"
boldOFF="\033[0m"
italicON="\033[3m"
italicOFF="\033[0m"
underlineON="\033[4m"
underlineOFF="\033[0m"
blinkON="\033[5m"
blinkOFF="\033[0m"
cyanON="\033[96;1m"
cyanOFF="\033[0m"
cyanDarkON="\033[36;1m"
cyanDarkOFF="\033[0m"
greenON="\033[92;1m"
greenOFF="\033[0m"
yellowON="\033[93;1m"
yellowOFF="\033[0m"
redON="\033[91;1m"
redOFF="\033[0m"
orangeON="\033[33;1m"
orangeOFF="\033[0m"
blueON="\033[94;1m"
blueOFF="\033[0m"
blueSteelON="\033[34;1m"
blueSteelOFF="\033[0m"
darkBlueON="\033[38;2;95;95;255m"
darkBlueOFF="\033[0m"
magentaON="\033[95;1m"
magentaOFF="\033[0m"
}
#style_e
style_o
##########################################################################################################
### Usage Examples:
##########################################################################################################
# echo "\t RSYNC process is ${redON}not${redOFF} running (or has already ${greenON}terminated${greenOFF}).\n"
# echo "\t ${PID} is ${cyanON}${italicON}${descr}${italicOFF}${cyanOFF} process ..."
# echo "\t ${PID} is ${yellowON}${descr}${yellowOFF} process ..."
# echo "\n\n\t RSYNC process (# ${pid}) has ${greenON}completed${greenOFF}.\n"
##########################################################################################################
### Example of scenario where escape codes are hard-coded; \e was not accepted by awk
##########################################################################################################
# echo "\n\t ${testor}\n" | sed 's+--+\n\t\t\t\t\t\t\t\t--+g' | awk '{
# rLOC=index($0,"rsync") ;
# if( rLOC != 0 ){
# sBeg=sprintf("%s", substr($0,1,rLOC-1) ) ;
# sEnd=sprintf("%s", substr($0,rLOC+5) ) ;
# sMid="\033[91;1mrsync\033[0m" ;
# printf("%s%s%s\n", sBeg, sMid, sEnd) ;
# }else{
# print $0 ;
# } ;
# }'
##########################################################################################################
echo "\n\t Imported LIBRARY: INCLUDES__TerminalEscape_SGR.bh ..."
##########################################################################################################
You could likewise define your own set of preset escape sequences as formatting variables.
|
Python colored output doesn't work except when piping
|
I'm using Git Bash on Windows inside Windows Terminal and I'm writing a python script which needs to output colored text. As an example, I have the following one-line script named example.py:
print('\033[35m\033[K' + 'hello world' + '\033[m\033[K')
When I run the command python example.py, I expect to see colored output, but instead I get this:
←[35m←[Khello world←[m←[K
However, if I run python example.py | cat, I get the colored output I expect. How weird. I also get nice colored output if I run the script from cmd instead of bash, or if I run the line from the live interpreter (but not if it is a child of bash).
Any ideas? If possible I prefer to solve this without bringing in dependencies like Colorama.
EDIT: I resigned to using Colorama after seeing the replies. All it took to fix it was a call to the aptly named colorama.just_fix_windows_console(). Dependency-less solutions still welcome.
EDIT 2: Interestingly, this problem does not occur on my laptop which has what I thought was the exact same setup.
|
[
"For the record, I tried the following basic script,\n#!/usr/bin/python3.8\nprint('\\033[35m\\033[K' + 'hello world' + '\\033[m\\033[K')\n\nand I got the result which you were likely looking for, namely\n\nSince you mentioned bash, to get what you wanted, you need to read the section of the bash man page for printf more closely.\nThe round brackets are used in AWK, C, others. Bash shows no mention of round brackets in the usage syntax. However, you need to add a \"format\" specifier (NOT same as AWK), but per the man page.\nYour statement should be reworked to show as this:\nprintf \"%b\\n\" \"\\033[35m\\033[K\" \"hello world\" \"\\033[m\\033[K\"\n\nOR ... you could just replace the printf \"%b\\n\" with simply echo -e to get the same result.\nOf possible interest, I also created for myself what I call a \"Bourne Header\" file, in which I defined some preset combinations with variable strings that can be re-used in other scripts by simply source'ing the *.bh file. I include it here for general usage.\n#!/bin/sh\n\n##########################################################################################################\n### $Id: INCLUDES__TerminalEscape_SGR.bh,v 1.2 2022/09/03 01:57:31 root Exp root $\n###\n### This includes string variables defined to perform various substitutions for the ANSI Terminal Escape Sequences, i.e. SGR (Select Graphic Rendition subset)\n##########################################################################################################\n\n### REFERENCE: https://en.wikipedia.org/wiki/ANSI_escape_code\n### https://www.ecma-international.org/publications-and-standards/standards/ecma-48/\n\n### \"\\e\" is same as \"\\033\"\n\nstyle_e()\n{\nboldON=\"\\e[1m\"\nboldOFF=\"\\e[0m\"\n\nitalicON=\"\\e[3m\"\nitalicOFF=\"\\e[0m\"\n\nunderlineON=\"\\e[4m\"\nunderlineOFF=\"\\e[0m\"\n\nblinkON=\"\\e[5m\"\nblinkOFF=\"\\e[0m\"\n\ncyanON=\"\\e[96;1m\"\ncyanOFF=\"\\e[0m\"\n\ncyanDarkON=\"\\e[36;1m\"\ncyanDarkOFF=\"\\e[0m\"\n\ngreenON=\"\\e[92;1m\"\ngreenOFF=\"\\e[0m\"\n\nyellowON=\"\\e[93;1m\"\nyellowOFF=\"\\e[0m\"\n\nredON=\"\\e[91;1m\"\nredOFF=\"\\e[0m\"\n\norangeON=\"\\e[33;1m\"\norangeOFF=\"\\e[0m\"\n\nblueON=\"\\e[94;1m\"\nblueOFF=\"\\e[0m\"\n\nblueSteelON=\"\\e[34;1m\"\nblueSteelOFF=\"\\e[0m\"\n\ndarkBlueON=\"\\e[38;2;95;95;255m\"\ndarkBlueOFF=\"\\e[0m\"\n\nmagentaON=\"\\e[95;1m\"\nmagentaOFF=\"\\e[0m\"\n}\n\nstyle_o()\n{\nboldON=\"\\033[1m\"\nboldOFF=\"\\033[0m\"\n\nitalicON=\"\\033[3m\"\nitalicOFF=\"\\033[0m\"\n\nunderlineON=\"\\033[4m\"\nunderlineOFF=\"\\033[0m\"\n\nblinkON=\"\\033[5m\"\nblinkOFF=\"\\033[0m\"\n\ncyanON=\"\\033[96;1m\"\ncyanOFF=\"\\033[0m\"\n\ncyanDarkON=\"\\033[36;1m\"\ncyanDarkOFF=\"\\033[0m\"\n\ngreenON=\"\\033[92;1m\"\ngreenOFF=\"\\033[0m\"\n\nyellowON=\"\\033[93;1m\"\nyellowOFF=\"\\033[0m\"\n\nredON=\"\\033[91;1m\"\nredOFF=\"\\033[0m\"\n\norangeON=\"\\033[33;1m\"\norangeOFF=\"\\033[0m\"\n\nblueON=\"\\033[94;1m\"\nblueOFF=\"\\033[0m\"\n\nblueSteelON=\"\\033[34;1m\"\nblueSteelOFF=\"\\033[0m\"\n\ndarkBlueON=\"\\033[38;2;95;95;255m\"\ndarkBlueOFF=\"\\033[0m\"\n\nmagentaON=\"\\033[95;1m\"\nmagentaOFF=\"\\033[0m\"\n}\n\n#style_e\nstyle_o\n\n##########################################################################################################\n### Usage Examples:\n##########################################################################################################\n\n# echo \"\\t RSYNC process is ${redON}not${redOFF} running (or has already ${greenON}terminated${greenOFF}).\\n\"\n# echo \"\\t ${PID} is 
${cyanON}${italicON}${descr}${italicOFF}${cyanOFF} process ...\"\n# echo \"\\t ${PID} is ${yellowON}${descr}${yellowOFF} process ...\"\n# echo \"\\n\\n\\t RSYNC process (# ${pid}) has ${greenON}completed${greenOFF}.\\n\"\n\n##########################################################################################################\n### Example of scenario where escape codes are hard-coded; \\e was not accepted by awk\n##########################################################################################################\n\n# echo \"\\n\\t ${testor}\\n\" | sed 's+--+\\n\\t\\t\\t\\t\\t\\t\\t\\t--+g' | awk '{\n# rLOC=index($0,\"rsync\") ;\n# if( rLOC != 0 ){\n# sBeg=sprintf(\"%s\", substr($0,1,rLOC-1) ) ;\n# sEnd=sprintf(\"%s\", substr($0,rLOC+5) ) ;\n# sMid=\"\\033[91;1mrsync\\033[0m\" ;\n# printf(\"%s%s%s\\n\", sBeg, sMid, sEnd) ;\n# }else{\n# print $0 ;\n# } ;\n# }'\n\n##########################################################################################################\n echo \"\\n\\t Imported LIBRARY: INCLUDES__TerminalEscape_SGR.bh ...\"\n##########################################################################################################\n\nYou could likewise define your own set of preset escape sequences as formatting variables.\n"
] |
[
0
] |
[] |
[] |
[
"bash",
"colors",
"git_bash",
"python",
"windows_terminal"
] |
stackoverflow_0074584828_bash_colors_git_bash_python_windows_terminal.txt
|
Q:
sublist to dictionary
So I have:
a = [["Hello", "Bye"], ["Morning", "Night"], ["Cat", "Dog"]]
And I want to convert it to a dictionary.
I tried using:
i = iter(a)
b = dict(zip(a[0::2], a[1::2]))
But it gave me an error: TypeError: unhashable type: 'list'
A:
Simply:
>>> a = [["Hello", "Bye"], ["Morning", "Night"], ["Cat", "Dog"]]
>>> dict(a)
{'Cat': 'Dog', 'Hello': 'Bye', 'Morning': 'Night'}
I love Python's simplicity.
You can see here all the ways to construct a dictionary:
To illustrate, the following examples all return a dictionary equal to {"one": 1, "two": 2, "three": 3}:
>>> a = dict(one=1, two=2, three=3)
>>> b = {'one': 1, 'two': 2, 'three': 3}
>>> c = dict(zip(['one', 'two', 'three'], [1, 2, 3]))
>>> d = dict([('two', 2), ('one', 1), ('three', 3)]) #<-Your case(Key/value pairs)
>>> e = dict({'three': 3, 'one': 1, 'two': 2})
>>> a == b == c == d == e
True
A:
Maybe you can try the following code:
a = [
["Hello", "Bye"],
["Morning", "Night"],
["Cat", "Dog"]
]
b = {}
for x in a:
b[x[0]] = x[1]
print(b)
And if you want each value to hold more than one item (in the form of a list),
you can slightly change the line
b[x[0]] = x[1]
to:
b[x[0]] = x[1:]
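Equivalently, as a dict comprehension (a compact sketch using the same sample list):
a = [["Hello", "Bye"], ["Morning", "Night"], ["Cat", "Dog"]]
b = {x[0]: x[1] for x in a}   # {'Hello': 'Bye', 'Morning': 'Night', 'Cat': 'Dog'}
c = {x[0]: x[1:] for x in a}  # values kept as lists, e.g. {'Hello': ['Bye'], ...}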
Hope it will help you :)
|
sublist to dictionary
|
So I have:
a = [["Hello", "Bye"], ["Morning", "Night"], ["Cat", "Dog"]]
And I want to convert it to a dictionary.
I tried using:
i = iter(a)
b = dict(zip(a[0::2], a[1::2]))
But it gave me an error: TypeError: unhashable type: 'list'
|
[
"Simply:\n>>> a = [[\"Hello\", \"Bye\"], [\"Morning\", \"Night\"], [\"Cat\", \"Dog\"]]\n>>> dict(a)\n{'Cat': 'Dog', 'Hello': 'Bye', 'Morning': 'Night'}\n\nI love python's simplicity\nYou can see here for all the ways to construct a dictionary:\n\nTo illustrate, the following examples all return a dictionary equal to {\"one\": 1, \"two\": 2, \"three\": 3}:\n\n>>> a = dict(one=1, two=2, three=3)\n>>> b = {'one': 1, 'two': 2, 'three': 3}\n>>> c = dict(zip(['one', 'two', 'three'], [1, 2, 3]))\n>>> d = dict([('two', 2), ('one', 1), ('three', 3)]) #<-Your case(Key/value pairs)\n>>> e = dict({'three': 3, 'one': 1, 'two': 2})\n>>> a == b == c == d == e\nTrue\n\n",
"Maybe you can try this following code :\na = [\n [\"Hello\", \"Bye\"],\n [\"Morning\", \"Night\"],\n [\"Cat\", \"Dog\"]\n ]\n\nb = {}\nfor x in a:\n b[x[0]] = x[1]\nprint(b)\n\nAnd if you want your value have more than 1 value (in the form of a list),\nyou can slightly change the code :\nb[x[0]] = x[1]\n\nto code :\nb[x[0]] = x[1:]\n\nHope it will help you :)\n"
] |
[
8,
0
] |
[] |
[] |
[
"dictionary",
"list",
"python",
"sublist"
] |
stackoverflow_0015875678_dictionary_list_python_sublist.txt
|
Q:
AWS Batch Job Execution Results in Step Function
I'm a newbie to AWS Step Functions and AWS Batch. I'm trying to integrate an AWS Batch job with a Step Function. The AWS Batch job executes simple Python scripts which output a string value (a high-level, simplified requirement). I need the Python script's output to be available to the next state of the step function. How can I accomplish this? The AWS Batch job output does not contain the results of the Python script; instead it contains all the container-related information along with the input values.
Example: the AWS Batch job executes a Python script which outputs "Hello World". I need "Hello World" available to the next state of the step function to execute a lambda associated with it.
A:
I was able to do it. Below is my state machine: I took the sample project for running a batch job, Manage a Batch Job (AWS Batch, Amazon SNS), and modified it with two lambdas for passing input/output.
{
"Comment": "An example of the Amazon States Language for notification on an AWS Batch job completion",
"StartAt": "Submit Batch Job",
"TimeoutSeconds": 3600,
"States": {
"Submit Batch Job": {
"Type": "Task",
"Resource": "arn:aws:states:::batch:submitJob.sync",
"Parameters": {
"JobName": "BatchJobNotification",
"JobQueue": "arn:aws:batch:us-east-1:1234567890:job-queue/BatchJobQueue-737ed10e7ca3bfd",
"JobDefinition": "arn:aws:batch:us-east-1:1234567890:job-definition/BatchJobDefinition-89c42b1f452ac67:1"
},
"Next": "Notify Success",
"Catch": [
{
"ErrorEquals": [
"States.ALL"
],
"Next": "Notify Failure"
}
]
},
"Notify Success": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-east-1:1234567890:function:readcloudwatchlogs",
"Parameters": {
"LogStreamName.$": "$.Container.LogStreamName"
},
"ResultPath": "$.lambdaOutput",
"Next": "ConsumeLogs"
},
"ConsumeLogs": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-east-1:1234567890:function:consumelogs",
"Parameters": {
"randomstring.$": "$.lambdaOutput.logs"
},
"End": true
},
"Notify Failure": {
"Type": "Task",
"Resource": "arn:aws:states:::sns:publish",
"Parameters": {
"Message": "Batch job submitted through Step Functions failed",
"TopicArn": "arn:aws:sns:us-east-1:1234567890:StepFunctionsSample-BatchJobManagement17968f39-e227-47ab-9a75-08a7dcc10c4c-SNSTopic-1GR29R8TUHQY8"
},
"End": true
}
}
}
The key to reading the logs is in the Submit Batch Job output, which contains LogStreamName. I passed that to my lambda named function:readcloudwatchlogs, read the logs there, and then passed the log text on to the next function, function:consumelogs. You can see in the attached screenshot the consumelogs function printing the logs.
{
"Attempts": [
{
"Container": {
"ContainerInstanceArn": "arn:aws:ecs:us-east-1:1234567890:container-instance/BatchComputeEnvironment-4a1593ce223b3cf_Batch_7557555f-5606-31a9-86b9-83321eb3e413/6d11fdbfc9eb4f40b0d6b85c396bb243",
"ExitCode": 0,
"LogStreamName": "BatchJobDefinition-89c42b1f452ac67/default/2ad955bf59a8418893f53182f0d87b4b",
"NetworkInterfaces": [],
"TaskArn": "arn:aws:ecs:us-east-1:1234567890:task/BatchComputeEnvironment-4a1593ce223b3cf_Batch_7557555f-5606-31a9-86b9-83321eb3e413/2ad955bf59a8418893f53182f0d87b4b"
},
"StartedAt": 1611329367577,
"StatusReason": "Essential container in task exited",
"StoppedAt": 1611329367748
}
],
"Container": {
"Command": [
"echo",
"Hello world"
],
"ContainerInstanceArn": "arn:aws:ecs:us-east-1:1234567890:container-instance/BatchComputeEnvironment-4a1593ce223b3cf_Batch_7557555f-5606-31a9-86b9-83321eb3e413/6d11fdbfc9eb4f40b0d6b85c396bb243",
"Environment": [
{
"Name": "MANAGED_BY_AWS",
"Value": "STARTED_BY_STEP_FUNCTIONS"
}
],
"ExitCode": 0,
"Image": "137112412989.dkr.ecr.us-east-1.amazonaws.com/amazonlinux:latest",
"LogStreamName": "BatchJobDefinition-89c42b1f452ac67/default/2ad955bf59a8418893f53182f0d87b4b",
"TaskArn": "arn:aws:ecs:us-east-1:1234567890:task/BatchComputeEnvironment-4a1593ce223b3cf_Batch_7557555f-5606-31a9-86b9-83321eb3e413/2ad955bf59a8418893f53182f0d87b4b",
..
},
..
"Tags": {
"resourceArn": "arn:aws:batch:us-east-1:1234567890:job/d36ba07a-54f9-4acf-a4b8-3e5413ea5ffc"
}
}
Read Logs Lambda code:
import boto3
client = boto3.client('logs')
def lambda_handler(event, context):
print(event)
response = client.get_log_events(
logGroupName='/aws/batch/job',
logStreamName=event.get('LogStreamName')
)
    log = {'logs': response['events'][0]['message']}  # note: returns only the first log event
return log
Consume Logs Lambda Code
import json
print('Loading function')
def lambda_handler(event, context):
print(event)
A:
You could pass your step function execution ID ($$.Execution.Id) to the batch process, and then your batch process could write its response to DynamoDB using the execution ID and a primary key (or other field). You would then need a subsequent step to read directly from DynamoDB and capture the process response.
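A minimal sketch of that pattern (the table name, key schema, and event field below are assumptions, not part of the original answer):
import boto3

table = boto3.resource('dynamodb').Table('batch-results')  # assumed table name

# Inside the Batch job: store the script's result under the execution ID.
def save_result(execution_id, result):
    table.put_item(Item={'execution_id': execution_id, 'result': result})

# In the subsequent Lambda state: read the result back.
def lambda_handler(event, context):
    item = table.get_item(Key={'execution_id': event['executionId']})['Item']
    return item['result']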
I have been on the hunt for a way to do this without the subsequent step, but thus far no dice.
A:
While you can't do waitForTaskToken with submitJob, you can still use the callback pattern by passing the task token in the Parameters and referencing it in the command override with Ref::TaskToken:
...
"Submit Batch Job": {
"Type": "Task",
"Resource": "arn:aws:states:::batch:submitJob.sync",
"Parameters": {
"TaskToken.$": "$$.Task.Token"
},
"ContainerOverrides": {
"command": ["python3",
"my_script.py",
"Ref::TaskToken"]
}
...
Then when your script is done doing its processing, you just call StepFunctions.SendTaskSuccess or StepFunctions.SendTaskFailure:
import sys
import boto3

client = boto3.client('stepfunctions')

def main():
    args = sys.argv[1:]
    # the task token arrives as the first command-line argument;
    # output must be a JSON document, hence the embedded quotes
    client.send_task_success(taskToken=args[0], output='"Hello World"')
This will tell StepFunctions your job is complete and the output should be 'Hello World'. This pattern can also be useful if your Batch job completes the work required to resume the state machine, but needs to do some cleanup work afterward. You can send_task_success with the results and the state machine can resume while the Batch job does the cleanup work.
|
AWS Batch Job Execution Results in Step Function
|
I'm a newbie to AWS Step Functions and AWS Batch. I'm trying to integrate an AWS Batch job with a Step Function. The AWS Batch job executes simple Python scripts which output a string value (a high-level, simplified requirement). I need the Python script's output to be available to the next state of the step function. How can I accomplish this? The AWS Batch job output does not contain the results of the Python script; instead it contains all the container-related information along with the input values.
Example: the AWS Batch job executes a Python script which outputs "Hello World". I need "Hello World" available to the next state of the step function to execute a lambda associated with it.
|
[
"I was able to do it, below is my state machine, I took the sample project for running the batch job Manage a Batch Job (AWS Batch, Amazon SNS) and modified it for two lambdas for passing input/output.\n{\n \"Comment\": \"An example of the Amazon States Language for notification on an AWS Batch job completion\",\n \"StartAt\": \"Submit Batch Job\",\n \"TimeoutSeconds\": 3600,\n \"States\": {\n \"Submit Batch Job\": {\n \"Type\": \"Task\",\n \"Resource\": \"arn:aws:states:::batch:submitJob.sync\",\n \"Parameters\": {\n \"JobName\": \"BatchJobNotification\",\n \"JobQueue\": \"arn:aws:batch:us-east-1:1234567890:job-queue/BatchJobQueue-737ed10e7ca3bfd\",\n \"JobDefinition\": \"arn:aws:batch:us-east-1:1234567890:job-definition/BatchJobDefinition-89c42b1f452ac67:1\"\n },\n \"Next\": \"Notify Success\",\n \"Catch\": [\n {\n \"ErrorEquals\": [\n \"States.ALL\"\n ],\n \"Next\": \"Notify Failure\"\n }\n ]\n },\n \"Notify Success\": {\n \"Type\": \"Task\",\n \"Resource\": \"arn:aws:lambda:us-east-1:1234567890:function:readcloudwatchlogs\",\n \"Parameters\": {\n \"LogStreamName.$\": \"$.Container.LogStreamName\"\n },\n \"ResultPath\": \"$.lambdaOutput\",\n \"Next\": \"ConsumeLogs\"\n },\n \"ConsumeLogs\": {\n \"Type\": \"Task\",\n \"Resource\": \"arn:aws:lambda:us-east-1:1234567890:function:consumelogs\",\n \"Parameters\": {\n \"randomstring.$\": \"$.lambdaOutput.logs\"\n },\n \"End\": true\n },\n \"Notify Failure\": {\n \"Type\": \"Task\",\n \"Resource\": \"arn:aws:states:::sns:publish\",\n \"Parameters\": {\n \"Message\": \"Batch job submitted through Step Functions failed\",\n \"TopicArn\": \"arn:aws:sns:us-east-1:1234567890:StepFunctionsSample-BatchJobManagement17968f39-e227-47ab-9a75-08a7dcc10c4c-SNSTopic-1GR29R8TUHQY8\"\n },\n \"End\": true\n }\n }\n}\n\nThe key to read logs was in the Submit Batch Job output which contains LogStreamName, that I passed to my lambda named function:readcloudwatchlogs and read the logs and then eventually passed the read logs to the next function named function:consumelogs. 
You can see in the attached screenshot consumelogs function printing the logs.\n\n{\n \"Attempts\": [\n {\n \"Container\": {\n \"ContainerInstanceArn\": \"arn:aws:ecs:us-east-1:1234567890:container-instance/BatchComputeEnvironment-4a1593ce223b3cf_Batch_7557555f-5606-31a9-86b9-83321eb3e413/6d11fdbfc9eb4f40b0d6b85c396bb243\",\n \"ExitCode\": 0,\n \"LogStreamName\": \"BatchJobDefinition-89c42b1f452ac67/default/2ad955bf59a8418893f53182f0d87b4b\",\n \"NetworkInterfaces\": [],\n \"TaskArn\": \"arn:aws:ecs:us-east-1:1234567890:task/BatchComputeEnvironment-4a1593ce223b3cf_Batch_7557555f-5606-31a9-86b9-83321eb3e413/2ad955bf59a8418893f53182f0d87b4b\"\n },\n \"StartedAt\": 1611329367577,\n \"StatusReason\": \"Essential container in task exited\",\n \"StoppedAt\": 1611329367748\n }\n ],\n \"Container\": {\n \"Command\": [\n \"echo\",\n \"Hello world\"\n ],\n \"ContainerInstanceArn\": \"arn:aws:ecs:us-east-1:1234567890:container-instance/BatchComputeEnvironment-4a1593ce223b3cf_Batch_7557555f-5606-31a9-86b9-83321eb3e413/6d11fdbfc9eb4f40b0d6b85c396bb243\",\n \"Environment\": [\n {\n \"Name\": \"MANAGED_BY_AWS\",\n \"Value\": \"STARTED_BY_STEP_FUNCTIONS\"\n }\n ],\n \"ExitCode\": 0,\n \"Image\": \"137112412989.dkr.ecr.us-east-1.amazonaws.com/amazonlinux:latest\",\n \"LogStreamName\": \"BatchJobDefinition-89c42b1f452ac67/default/2ad955bf59a8418893f53182f0d87b4b\",\n \"TaskArn\": \"arn:aws:ecs:us-east-1:1234567890:task/BatchComputeEnvironment-4a1593ce223b3cf_Batch_7557555f-5606-31a9-86b9-83321eb3e413/2ad955bf59a8418893f53182f0d87b4b\",\n..\n },\n..\n \"Tags\": {\n \"resourceArn\": \"arn:aws:batch:us-east-1:1234567890:job/d36ba07a-54f9-4acf-a4b8-3e5413ea5ffc\"\n }\n}\n\n\n\nRead Logs Lambda code:\n\nimport boto3\n\nclient = boto3.client('logs')\n\ndef lambda_handler(event, context):\n print(event)\n response = client.get_log_events(\n logGroupName='/aws/batch/job',\n logStreamName=event.get('LogStreamName')\n )\n log = {'logs': response['events'][0]['message']}\n return log\n\n\nConsume Logs Lambda Code\n\nimport json\n\nprint('Loading function')\n\n\ndef lambda_handler(event, context):\n print(event)\n\n\n\n",
"You could pass your step function execution ID ($$.Execution.ID) to the batch process and then your batch process could write its response to DynamoDB using the execution ID and a primary key (or other filed). You would then need a subsequent step to read directly from DynamoDB and capture the process response.\nI have been on the hunt for a way to do this without the subsequent step, but thus far no dice.\n",
"While you can't do waitForTaskToken with submitJob, you can still use the callback pattern by passing the task token in the Parameters and referencing it in the command override with Ref::TaskToken:\n...\n \"Submit Batch Job\": {\n \"Type\": \"Task\",\n \"Resource\": \"arn:aws:states:::batch:submitJob.sync\",\n \"Parameters\": {\n \"TaskToken.$\": \"$$.Task.Token\"\n },\n \"ContainerOverrides\": {\n \"command\": [\"python3\", \n \"my_script.py\", \n \"Ref::TaskToken\"]\n }\n...\n\nThen when your script is done doing its processing, you just call StepFunctions.SendTaskSuccess or StepFunctions.SendTaskFailure:\nimport boto3\n\nclient = boto3.client('stepfunctions')\n\ndef main()\n args = sys.argv[1:]\n client.send_task_success(taskToken=args[0], output='Hello World')\n\nThis will tell StepFunctions your job is complete and the output should be 'Hello World'. This pattern can also be useful if your Batch job completes the work required to resume the state machine, but needs to do some cleanup work afterward. You can send_task_success with the results and the state machine can resume while the Batch job does the cleanup work.\n"
] |
[
4,
2,
0
] |
[] |
[] |
[
"amazon_web_services",
"aws_batch",
"aws_step_functions",
"python"
] |
stackoverflow_0065835855_amazon_web_services_aws_batch_aws_step_functions_python.txt
|