Dataset columns:
content - string (length 85 to 101k)
title - string (length 0 to 150)
question - string (length 15 to 48k)
answers - list
answers_scores - list
non_answers - list
non_answers_scores - list
tags - list
name - string (length 35 to 137)
Q: Python C++ API function with multiple arguments I'm trying to create a python module using my C++ code and I want to declare a function with multiple arguments. (3 in this case) I've read the docs and it says that I must declare METH_VARARGS which I did, but I think I also must change something inside my function to actually receive the arguments. Otherwise it gives me "too many arguments" error when I use my function in python. Here is the code snippet I'm using: ... // This function can be called inside a python file. static PyObject * call_opencl(PyObject *self, PyObject *args) { const char *command; int sts; // We except at least one argument to this function // Not sure how to accept more than one. if (!PyArg_ParseTuple(args, "s", &command)) return NULL; OpenCL kernel = OpenCL(); kernel.init(); std::cout << "This message is called from our C code: " << std::string(command) << std::endl; sts = 21; return PyLong_FromLong(sts); } static PyMethodDef NervebloxMethods[] = { {"call_kernel", call_opencl, METH_VARARGS, "Creates an opencv instance."}, {NULL, NULL, 0, NULL} /* Sentinel */ }; ... A: You are still expecting one argument. if (!PyArg_ParseTuple(args, "s", &command)) the documentation defines how you can expect optional or additional arguments, for example "s|dd" will expect a string and two optional numbers, you still have to pass two doubles to the function for when the numbers are available. double a = 0; // initial value double b = 0; if (!PyArg_ParseTuple(args, "s|dd", &command, &a, &b))
Python C++ API function with multiple arguments
I'm trying to create a python module using my C++ code and I want to declare a function with multiple arguments. (3 in this case) I've read the docs and it says that I must declare METH_VARARGS which I did, but I think I also must change something inside my function to actually receive the arguments. Otherwise it gives me "too many arguments" error when I use my function in python. Here is the code snippet I'm using: ... // This function can be called inside a python file. static PyObject * call_opencl(PyObject *self, PyObject *args) { const char *command; int sts; // We except at least one argument to this function // Not sure how to accept more than one. if (!PyArg_ParseTuple(args, "s", &command)) return NULL; OpenCL kernel = OpenCL(); kernel.init(); std::cout << "This message is called from our C code: " << std::string(command) << std::endl; sts = 21; return PyLong_FromLong(sts); } static PyMethodDef NervebloxMethods[] = { {"call_kernel", call_opencl, METH_VARARGS, "Creates an opencv instance."}, {NULL, NULL, 0, NULL} /* Sentinel */ }; ...
[ "You are still expecting one argument.\nif (!PyArg_ParseTuple(args, \"s\", &command))\n\nthe documentation defines how you can expect optional or additional arguments, for example \"s|dd\" will expect a string and two optional numbers, you still have to pass two doubles to the function for when the numbers are available.\ndouble a = 0; // initial value\ndouble b = 0;\nif (!PyArg_ParseTuple(args, \"s|dd\", &command, &a, &b))\n\n" ]
[ 2 ]
[]
[]
[ "c++", "python", "python_3.x" ]
stackoverflow_0074597442_c++_python_python_3.x.txt
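Once the format string in PyArg_ParseTuple is widened (for example "s|dd" as in the answer, or "sii" for a string plus two required integers), the extension function can accept the extra arguments from Python. A small usage sketch, assuming the compiled module is importable under a name like the one in the method table (the module name here is hypothetical):

    import nerveblox  # hypothetical name of the compiled extension module

    nerveblox.call_kernel("run")             # only the required string
    nerveblox.call_kernel("run", 1.5, 2.0)   # string plus the two optional doubles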
Q: BeautifulSoup Data Scraping : Unable to fetch correct information from the page I am trying to scrape data from:- https://www.canadapharmacy.com/ below are a few pages that I need to scrape:- https://www.canadapharmacy.com/products/abilify-tablet https://www.canadapharmacy.com/products/accolate https://www.canadapharmacy.com/products/abilify-mt I need all the information from the page. I wrote the below code:- base_url = 'https://www.canadapharmacy.com' data = [] for i in tqdm(range(len(medicine_url))): r = requests.get(base_url+medicine_url[i]) soup = BeautifulSoup(r.text,'lxml') # Scraping medicine Name try: main_name = (soup.find('h1',{"class":"mn"}).text.lstrip()).rstrip() except: main_name = None try: sec_name = (soup.find('div',{"class":"product-name"}).find('h3').text.lstrip()).rstrip() except: sec_name = None try: generic_name = (soup.find('div',{"class":"card product generic strength equal"}).find('div').find('h3').text.lstrip()).rstrip() except: generic_name = None # Description try: des1 = soup.find('div',{"class":"answer expanded"}).find_all('p')[1].text except: des1 = '' try: des2 = soup.find('div',{"class":"answer expanded"}).find('ul').text except: des2 = '' try: des3 = soup.find('div',{"class":"answer expanded"}).find_all('p')[2].text except: des3 = '' desc = (des1+des2+des3).replace('\n',' ') #Directions try: dir1 = soup.find('div',{"class":"answer expanded"}).find_all('h4')[1].text except: dir1 = '' try: dir2 = soup.find('div',{"class":"answer expanded"}).find_all('p')[5].text except: dir2 = '' try: dir3 = soup.find('div',{"class":"answer expanded"}).find_all('p')[6].text except: dir3 = '' try: dir4 = soup.find('div',{"class":"answer expanded"}).find_all('p')[7].text except: dir4 = '' directions = dir1+dir2+dir3+dir4 #Ingredients try: ing = soup.find('div',{"class":"answer expanded"}).find_all('p')[9].text except: ing = None #Cautions try: c1 = soup.find('div',{"class":"answer expanded"}).find_all('h4')[3].text except: c1 = None try: c2 = soup.find('div',{"class":"answer expanded"}).find_all('p')[11].text except: c2 = '' try: c3 = soup.find('div',{"class":"answer expanded"}).find_all('p')[12].text #//div[@class='answer expanded']//p[2] except: c3 = '' try: c4 = soup.find('div',{"class":"answer expanded"}).find_all('p')[13].text except: c4 = '' try: c5 = soup.find('div',{"class":"answer expanded"}).find_all('p')[14].text except: c5 = '' try: c6 = soup.find('div',{"class":"answer expanded"}).find_all('p')[15].text except: c6 = '' caution = (c1+c2+c3+c4+c5+c6).replace('\xa0','') #Side Effects try: se1 = soup.find('div',{"class":"answer expanded"}).find_all('h4')[4].text except: se1 = '' try: se2 = soup.find('div',{"class":"answer expanded"}).find_all('p')[18].text except: se2 = '' try: se3 = soup.find('div',{"class":"answer expanded"}).find_all('ul')[1].text except: se3 = '' try: se4 = soup.find('div',{"class":"answer expanded"}).find_all('p')[19].text except: se4 = '' try: se5 = soup.find('div',{"class":"post-author-bio"}).text except: se5 = '' se = (se1 + se2 + se3 + se4 + se5).replace('\n',' ') for j in soup.find('div',{"class":"answer expanded"}).find_all('h4'): if 'Product Code' in j.text: prod_code = j.text #prod_code = soup.find('div',{"class":"answer expanded"}).find_all('h4')[5].text #//div[@class='answer expanded']//h4 pharma = {"primary_name":main_name, "secondary_name":sec_name, "Generic_Name":generic_name, "Description":desc, "Directions":directions, "Ingredients":ing, "Caution":caution, "Side_Effects":se, "Product_Code":prod_code} data.append(pharma) But, 
each page is having different positions for the tags hence not giving correct data. So, I tried:- soup.find('div',{"class":"answer expanded"}).find_all('h4') which gives me the output:- [<h4>Description </h4>, <h4>Directions</h4>, <h4>Ingredients</h4>, <h4>Cautions</h4>, <h4>Side Effects</h4>, <h4>Product Code : 5513 </h4>] I want to create a data frame where the description contains all the information given in the description, directions contain all the information of directions given on the web page. for i in soup.find('div',{"class":"answer expanded"}).find_all('h4'): if 'Description' in i.text: print(soup.find('div',{"class":"answer expanded"}).findAllNext('p')) but it prints all the after the soup.find('div',{"class":"answer expanded"}).find_all('h4'). but I want only the tags are giving me the description of the medicine and no others. Can anyone suggest how to do this? Also, how to scrape the rate table from the page as it gives me values in unappropriate fashion? A: You can try the next working example: import requests from bs4 import BeautifulSoup import pandas as pd data = [] r = requests.get('https://www.canadapharmacy.com/products/abilify-tablet') soup = BeautifulSoup(r.text,"lxml") try: card = ''.join([x.get_text(' ',strip=True) for x in soup.select('div.answer.expanded')]) des = card.split('Directions')[0].replace('Description','') #print(des) drc = card.split('Directions')[1].split('Ingredients')[0] #print(drc) ingre= card.split('Directions')[1].split('Ingredients')[1].split('Cautions')[0] #print(ingre) cau=card.split('Directions')[1].split('Ingredients')[1].split('Cautions')[1].split('Side Effects')[0] #print(cau) se= card.split('Directions')[1].split('Ingredients')[1].split('Cautions')[1].split('Side Effects')[1] #print(se) except: pass data.append({ 'Description':des, 'Directions':drc, 'Ingredients':ingre, 'Cautions':cau, 'Side Effects':se }) print(data) # df = pd.DataFrame(data) # print(df) Output: [{'Description': " Abilify Tablet (Aripiprazole) Abilify (Aripiprazole) is a medication prescribed to treat or manage different conditions, including: Agitation associated with schizophrenia or bipolar mania (injection formulation only) Irritability associated with autistic disorder Major depressive disorder , adjunctive treatment Mania and mixed episodes associated with Bipolar I disorder Tourette's disorder Schizophrenia Abilify works by activating different neurotransmitter receptors located in brain cells. Abilify activates D2 (dopamine) and 5-HT1A (serotonin) receptors and blocks 5-HT2A (serotonin) receptors. This combination of receptor activity is responsible for the treatment effects of Abilify. Conditions like schizophrenia, major depressive disorder, and bipolar disorder are caused by neurotransmitter imbalances in the brain. Abilify helps to correct these imbalances and return the normal functioning of neurons. ", 'Directions': ' Once you are prescribed and buy Abilify, then take Abilify exactly as prescribed by your doctor. The dose will vary based on the condition that you are treating. The starting dose of Abilify ranges from 2-15 mg once daily, and the recommended dose for most conditions is between 5-15 mg once daily. The maximum dose is 30 mg once daily. Take Abilify with or without food. ', 'Ingredients': ' The active ingredient in Abilify medication is aripiprazole . ', 'Cautions': ' Abilify and other antipsychotic medications have been associated with an increased risk of death in elderly patients with dementia-related psychosis. 
When combined with other dopaminergic agents, Abilify can increase the risk of neuroleptic malignant syndrome. Abilify can cause metabolic changes and in some cases can induce high blood sugar in people with and without diabetes . Abilify can also weight gain and increased risk of dyslipidemia. Blood glucose should be monitored while taking Abilify. Monitor for low blood pressure and heart rate while taking Abilify; it can cause orthostatic hypertension which may lead to dizziness or fainting. Use with caution in patients with a history of seizures. ', 'Side Effects': ' The side effects of Abilify vary greatly depending on what condition is being treated, what other medications are being used concurrently, and what dose is being taken. Speak with your doctor or pharmacist for a full list of side effects that apply to you. Some of the most common side effects include: Akathisia Blurred vision Constipation Dizziness Drooling Extrapyramidal disorder Fatigue Headache Insomnia Nausea Restlessness Sedation Somnolence Tremor Vomiting Buy Abilify online from Canada Pharmacy . Abilify can be purchased online with a valid prescription from a doctor. About Dr. Conor Sheehy (Page Author) Dr. Sheehy (BSc Molecular Biology, PharmD) works a clinical pharmacist specializing in cardiology, oncology, and ambulatory care. He’s a board-certified pharmacotherapy specialist (BCPS), and his experience working one-on-one with patients to fine tune their medication and therapy plans for optimal results makes him a valuable subject matter expert for our pharmacy. Read More.... IMPORTANT NOTE: The above information is intended to increase awareness of health information and does not suggest treatment or diagnosis. This information is not a substitute for individual medical attention and should not be construed to indicate that use of the drug is safe, appropriate, or effective for you. See your health care professional for medical advice and treatment. Product Code : 5513'}]
BeautifulSoup Data Scraping : Unable to fetch correct information from the page
I am trying to scrape data from:- https://www.canadapharmacy.com/ below are a few pages that I need to scrape:- https://www.canadapharmacy.com/products/abilify-tablet https://www.canadapharmacy.com/products/accolate https://www.canadapharmacy.com/products/abilify-mt I need all the information from the page. I wrote the below code:- base_url = 'https://www.canadapharmacy.com' data = [] for i in tqdm(range(len(medicine_url))): r = requests.get(base_url+medicine_url[i]) soup = BeautifulSoup(r.text,'lxml') # Scraping medicine Name try: main_name = (soup.find('h1',{"class":"mn"}).text.lstrip()).rstrip() except: main_name = None try: sec_name = (soup.find('div',{"class":"product-name"}).find('h3').text.lstrip()).rstrip() except: sec_name = None try: generic_name = (soup.find('div',{"class":"card product generic strength equal"}).find('div').find('h3').text.lstrip()).rstrip() except: generic_name = None # Description try: des1 = soup.find('div',{"class":"answer expanded"}).find_all('p')[1].text except: des1 = '' try: des2 = soup.find('div',{"class":"answer expanded"}).find('ul').text except: des2 = '' try: des3 = soup.find('div',{"class":"answer expanded"}).find_all('p')[2].text except: des3 = '' desc = (des1+des2+des3).replace('\n',' ') #Directions try: dir1 = soup.find('div',{"class":"answer expanded"}).find_all('h4')[1].text except: dir1 = '' try: dir2 = soup.find('div',{"class":"answer expanded"}).find_all('p')[5].text except: dir2 = '' try: dir3 = soup.find('div',{"class":"answer expanded"}).find_all('p')[6].text except: dir3 = '' try: dir4 = soup.find('div',{"class":"answer expanded"}).find_all('p')[7].text except: dir4 = '' directions = dir1+dir2+dir3+dir4 #Ingredients try: ing = soup.find('div',{"class":"answer expanded"}).find_all('p')[9].text except: ing = None #Cautions try: c1 = soup.find('div',{"class":"answer expanded"}).find_all('h4')[3].text except: c1 = None try: c2 = soup.find('div',{"class":"answer expanded"}).find_all('p')[11].text except: c2 = '' try: c3 = soup.find('div',{"class":"answer expanded"}).find_all('p')[12].text #//div[@class='answer expanded']//p[2] except: c3 = '' try: c4 = soup.find('div',{"class":"answer expanded"}).find_all('p')[13].text except: c4 = '' try: c5 = soup.find('div',{"class":"answer expanded"}).find_all('p')[14].text except: c5 = '' try: c6 = soup.find('div',{"class":"answer expanded"}).find_all('p')[15].text except: c6 = '' caution = (c1+c2+c3+c4+c5+c6).replace('\xa0','') #Side Effects try: se1 = soup.find('div',{"class":"answer expanded"}).find_all('h4')[4].text except: se1 = '' try: se2 = soup.find('div',{"class":"answer expanded"}).find_all('p')[18].text except: se2 = '' try: se3 = soup.find('div',{"class":"answer expanded"}).find_all('ul')[1].text except: se3 = '' try: se4 = soup.find('div',{"class":"answer expanded"}).find_all('p')[19].text except: se4 = '' try: se5 = soup.find('div',{"class":"post-author-bio"}).text except: se5 = '' se = (se1 + se2 + se3 + se4 + se5).replace('\n',' ') for j in soup.find('div',{"class":"answer expanded"}).find_all('h4'): if 'Product Code' in j.text: prod_code = j.text #prod_code = soup.find('div',{"class":"answer expanded"}).find_all('h4')[5].text #//div[@class='answer expanded']//h4 pharma = {"primary_name":main_name, "secondary_name":sec_name, "Generic_Name":generic_name, "Description":desc, "Directions":directions, "Ingredients":ing, "Caution":caution, "Side_Effects":se, "Product_Code":prod_code} data.append(pharma) But, each page is having different positions for the tags hence not giving correct data. 
So, I tried:- soup.find('div',{"class":"answer expanded"}).find_all('h4') which gives me the output:- [<h4>Description </h4>, <h4>Directions</h4>, <h4>Ingredients</h4>, <h4>Cautions</h4>, <h4>Side Effects</h4>, <h4>Product Code : 5513 </h4>] I want to create a data frame where the description contains all the information given in the description, directions contain all the information of directions given on the web page. for i in soup.find('div',{"class":"answer expanded"}).find_all('h4'): if 'Description' in i.text: print(soup.find('div',{"class":"answer expanded"}).findAllNext('p')) but it prints all the after the soup.find('div',{"class":"answer expanded"}).find_all('h4'). but I want only the tags are giving me the description of the medicine and no others. Can anyone suggest how to do this? Also, how to scrape the rate table from the page as it gives me values in unappropriate fashion?
[ "You can try the next working example:\nimport requests\nfrom bs4 import BeautifulSoup\nimport pandas as pd\n\ndata = []\nr = requests.get('https://www.canadapharmacy.com/products/abilify-tablet')\n\nsoup = BeautifulSoup(r.text,\"lxml\")\ntry:\n card = ''.join([x.get_text(' ',strip=True) for x in soup.select('div.answer.expanded')])\n\n des = card.split('Directions')[0].replace('Description','')\n #print(des)\n\n drc = card.split('Directions')[1].split('Ingredients')[0]\n #print(drc)\n ingre= card.split('Directions')[1].split('Ingredients')[1].split('Cautions')[0]\n #print(ingre)\n\n cau=card.split('Directions')[1].split('Ingredients')[1].split('Cautions')[1].split('Side Effects')[0]\n #print(cau)\n se= card.split('Directions')[1].split('Ingredients')[1].split('Cautions')[1].split('Side Effects')[1]\n #print(se)\nexcept:\n pass \n\ndata.append({\n 'Description':des,\n 'Directions':drc,\n 'Ingredients':ingre,\n 'Cautions':cau,\n 'Side Effects':se\n})\n\nprint(data)\n# df = pd.DataFrame(data)\n# print(df)\n\nOutput:\n[{'Description': \" Abilify Tablet (Aripiprazole) Abilify (Aripiprazole) is a medication prescribed to treat or manage different conditions, including: Agitation associated with schizophrenia or bipolar mania (injection formulation only) Irritability associated with autistic disorder Major depressive disorder , adjunctive treatment Mania and mixed episodes associated with Bipolar I disorder Tourette's disorder Schizophrenia Abilify works by activating different neurotransmitter receptors located in brain cells. Abilify activates D2 (dopamine) and 5-HT1A (serotonin) receptors and blocks 5-HT2A (serotonin) receptors. This combination of receptor activity is responsible for the treatment effects of Abilify. Conditions like schizophrenia, major depressive disorder, and bipolar disorder are caused by neurotransmitter imbalances in the brain. Abilify helps to correct these imbalances and return the normal functioning of neurons. \", 'Directions': ' Once you are prescribed and buy Abilify, then take Abilify exactly as prescribed by your \ndoctor. The dose will vary based on the condition that you are treating. The starting dose of Abilify ranges from 2-15 mg once daily, and the recommended dose for most conditions is between 5-15 mg once daily. The maximum dose is 30 mg once daily. Take Abilify with or without food. ', 'Ingredients': ' The active ingredient in Abilify medication is aripiprazole . ', 'Cautions': ' Abilify and other antipsychotic medications have been associated with an increased risk of death in elderly patients with dementia-related psychosis. When combined with other dopaminergic agents, Abilify can increase the risk of neuroleptic malignant syndrome. Abilify can cause metabolic changes and in some cases can induce high blood sugar in people with and without diabetes . Abilify can also weight gain and increased risk of dyslipidemia. Blood glucose should be monitored while taking Abilify. Monitor for low blood pressure and heart rate while taking Abilify; it can cause orthostatic hypertension which may lead to dizziness or fainting. Use with caution in patients with a history of seizures. ', 'Side Effects': ' The side effects of Abilify vary greatly depending \non what condition is being treated, what other medications are being used concurrently, and what dose is being taken. Speak with your doctor or pharmacist for a full list of side effects that apply to you. 
Some of the most common side effects include: Akathisia Blurred vision Constipation Dizziness Drooling Extrapyramidal disorder Fatigue Headache Insomnia Nausea Restlessness Sedation Somnolence Tremor Vomiting Buy Abilify online from Canada Pharmacy . Abilify can be purchased online with a valid prescription from a doctor. About Dr. Conor Sheehy (Page Author) Dr. Sheehy (BSc Molecular Biology, PharmD) works a clinical pharmacist specializing in cardiology, oncology, and ambulatory care. He’s a board-certified pharmacotherapy specialist (BCPS), and his experience working one-on-one with patients to fine tune their medication and therapy plans for optimal results makes him a valuable subject matter expert for our pharmacy. Read More.... IMPORTANT NOTE: The above information is intended to increase awareness of health information \nand does not suggest treatment or diagnosis. This information is not a substitute for individual medical attention and should not be construed to indicate that use of the drug is safe, appropriate, or effective for you. See your health care professional for medical advice and treatment. Product Code : 5513'}]\n\n" ]
[ 1 ]
[]
[]
[ "beautifulsoup", "python", "web_scraping" ]
stackoverflow_0074596440_beautifulsoup_python_web_scraping.txt
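Splitting the card's text on section names works for this page, but it breaks if a heading word also appears inside a paragraph. A more structural sketch, assuming the h4 headings and their paragraphs are siblings inside the div.answer.expanded card (true for the sample page, but worth verifying for every product):

    import requests
    from bs4 import BeautifulSoup

    r = requests.get('https://www.canadapharmacy.com/products/abilify-tablet')
    soup = BeautifulSoup(r.text, 'lxml')

    sections = {}
    card = soup.select_one('div.answer.expanded')
    if card:
        for h4 in card.find_all('h4'):
            parts = []
            # Walk forward through siblings until the next heading, so each
            # section keeps only its own paragraphs and lists.
            for sib in h4.find_next_siblings():
                if sib.name == 'h4':
                    break
                parts.append(sib.get_text(' ', strip=True))
            sections[h4.get_text(strip=True)] = ' '.join(parts)

    print(list(sections))   # e.g. ['Description', 'Directions', 'Ingredients', ...]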
Q: Remove duplicated elements (not lists) from 2D list, Python I would like to delete all the elements from list of lists that appear more than once and am looking for a smoother solution than this: Removing Duplicate Elements from List of Lists in Prolog I am not trying to remove duplicated lists inside of the parent list like here: How to remove duplicates from nested lists Consider this Int: list = [ [1, 3, 4, 5, 77], [1, 5, 10, 3, 4], [1, 5, 100, 3, 4], [1, 3, 4, 5, 89], [1, 3, 5, 47, 48]] Desired output: new_list= [ [77], [10], [100], [89], [47, 48]] Thanks. I am going to use this in Pandas: the new_list will serve as a new column with unique values on each row compare to the original column. A: There may be sleeker ways, but this works: from collections import Counter mylist = [ [1, 3, 4, 5, 77], [1, 5, 10, 3, 4], [1, 5, 100, 3, 4], [1, 3, 4, 5, 89], [1, 3, 5, 47, 48]] flat = [y for x in mylist for y in x] count = Counter(flat) uniq = [x for x,y in count.items() if y == 1] new_list = [[x for x in y if x in uniq] for y in mylist] which gives [[77], [10], [100], [89], [47, 48]] A: for bi,lst in enumerate(l): for el in lst: for i in range(len(l)): if bi != i: if el in l[i]: print(f'element:{el}') print(f'passing over list:{l[i]}') l[i].remove(el) try: l[bi].remove(el) except: continue that one is not that useful but I see that usually other answers(including your linked posts) uses another modules so I tried different approach. A: My solution: compare list of elements that appear multiple times (once we have that) with the original 2D list: list1 = [1,3,4,5] for list_of_numbers in numbers: for number in list_of_numbers: while number in list_of_numbers and list1: list_of_numbers.remove(number) [[77], [10], [100], [89], [47, 48]] Can anyone express the same with iteration?
Remove duplicated elements (not lists) from 2D list, Python
I would like to delete all the elements from list of lists that appear more than once and am looking for a smoother solution than this: Removing Duplicate Elements from List of Lists in Prolog I am not trying to remove duplicated lists inside of the parent list like here: How to remove duplicates from nested lists Consider this Int: list = [ [1, 3, 4, 5, 77], [1, 5, 10, 3, 4], [1, 5, 100, 3, 4], [1, 3, 4, 5, 89], [1, 3, 5, 47, 48]] Desired output: new_list= [ [77], [10], [100], [89], [47, 48]] Thanks. I am going to use this in Pandas: the new_list will serve as a new column with unique values on each row compare to the original column.
[ "There may be sleeker ways, but this works:\nfrom collections import Counter\n\nmylist = [\n[1, 3, 4, 5, 77],\n[1, 5, 10, 3, 4],\n[1, 5, 100, 3, 4], \n[1, 3, 4, 5, 89], \n[1, 3, 5, 47, 48]]\n\n\nflat = [y for x in mylist for y in x] \ncount = Counter(flat)\nuniq = [x for x,y in count.items() if y == 1]\nnew_list = [[x for x in y if x in uniq] for y in mylist]\n\nwhich gives\n[[77], [10], [100], [89], [47, 48]]\n\n", "for bi,lst in enumerate(l):\n for el in lst:\n for i in range(len(l)):\n if bi != i:\n if el in l[i]:\n print(f'element:{el}')\n print(f'passing over list:{l[i]}')\n l[i].remove(el)\n try: l[bi].remove(el)\n except: continue\n\nthat one is not that useful but I see that usually other answers(including your linked posts) uses another modules so I tried different approach.\n", "My solution: compare list of elements that appear multiple times (once we have that) with the original 2D list:\nlist1 = [1,3,4,5]\n\nfor list_of_numbers in numbers:\n for number in list_of_numbers:\n while number in list_of_numbers and list1:\n list_of_numbers.remove(number)\n \n\n\n[[77], [10], [100], [89], [47, 48]]\n\nCan anyone express the same with iteration?\n" ]
[ 2, 1, 0 ]
[]
[]
[ "list", "pandas", "python" ]
stackoverflow_0074583876_list_pandas_python.txt
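Since the question mentions using the result in Pandas, the same Counter idea can be applied directly to a column of lists; a small sketch with the sample data:

    from collections import Counter
    import pandas as pd

    df = pd.DataFrame({'numbers': [[1, 3, 4, 5, 77], [1, 5, 10, 3, 4],
                                   [1, 5, 100, 3, 4], [1, 3, 4, 5, 89],
                                   [1, 3, 5, 47, 48]]})

    # Count every element across all rows, then keep only those seen exactly once.
    counts = Counter(x for row in df['numbers'] for x in row)
    df['unique_only'] = df['numbers'].apply(lambda row: [x for x in row if counts[x] == 1])
    print(df['unique_only'].tolist())   # [[77], [10], [100], [89], [47, 48]]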
Q: How can I prove that a value from input is a number in Python? For a task I had to write a programm the programm functions nicely so I dont have a problem there. But I have to use input() and than I have to prove if the type is correct. I only needs integer but the type of input(5) is a str. Althought I need a int. But if use int(input()) thats also dont work because I want that my programm says this is a str or a float and because of this we cant move on. So that the programm now this is a number or not I did try with only input() that were all Strings regardless of the content and i know why this is so but I dont like it. Then I tried int(input()) but this only works if I use actually only numbers. But I have also to type in strings and floats and then the programm should only say it is the wrong type but shouldnt print out an error message A: s = input() try: print(int(s)) except: print("not int") A: We can achieve this by using eval only. e.g: val = input() try: val = eval(val) except NameError: pass In try it will try to return the exact data type, like int, float, bool, dict, and list will works fine but if input value is string it will go to NameError and print val is string. if we want to handle some case on the basis of data type, we can do it using this: if isinstance(val, int): print("This is integer") if isinstance(val, float): print("This is float") if isinstance(val, str): print("This is string") similary for others as well.
How can I prove that a value from input is a number in Python?
For a task I had to write a programm the programm functions nicely so I dont have a problem there. But I have to use input() and than I have to prove if the type is correct. I only needs integer but the type of input(5) is a str. Althought I need a int. But if use int(input()) thats also dont work because I want that my programm says this is a str or a float and because of this we cant move on. So that the programm now this is a number or not I did try with only input() that were all Strings regardless of the content and i know why this is so but I dont like it. Then I tried int(input()) but this only works if I use actually only numbers. But I have also to type in strings and floats and then the programm should only say it is the wrong type but shouldnt print out an error message
[ "s = input()\n\n\ntry:\n print(int(s))\n\nexcept:\n print(\"not int\")\n\n", "We can achieve this by using eval only.\ne.g:\nval = input()\ntry:\n val = eval(val)\nexcept NameError:\n pass\n\nIn try it will try to return the exact data type, like int, float, bool, dict, and list will works fine but if input value is string it will go to NameError and print val is string.\nif we want to handle some case on the basis of data type, we can do it using this:\nif isinstance(val, int):\n print(\"This is integer\")\n\nif isinstance(val, float):\n print(\"This is float\")\n\nif isinstance(val, str):\n print(\"This is string\")\n\nsimilary for others as well.\n" ]
[ 0, -1 ]
[]
[]
[ "integer", "python", "string" ]
stackoverflow_0074597394_integer_python_string.txt
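A variant that classifies the text without eval (which executes whatever expression the user types), trying int first and then float; this is only a sketch of the type check the question describes:

    def classify(text):
        # Try int first, then float; anything else is treated as a plain string.
        try:
            int(text)
            return 'int'
        except ValueError:
            pass
        try:
            float(text)
            return 'float'
        except ValueError:
            return 'str'

    print(classify('5'))      # int
    print(classify('5.2'))    # float
    print(classify('hello'))  # str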
Q: How would I reference an external .py file containing an array, in another .py file, and use the array to retrieve information I am trying to retrive the data from an external array, to use in this program, but it gave me the error IndexError: list index out of range. This is the example data I used: [["James", 23], ["Jack", 27], ["Jimothy", 21],["Jillian", 22]] And my example code: import random data = open('array.py', 'r').readlines() randomInt= random.randint(0,4) randomName = data[randomInt][0] age = data[randomInt][1] print(randomName, age) in 2 separate files, and I was expecting it to output any of these 4: >>>James 23 >>>Jack 27 >>>Jimothy 21 >>>Jillian 22 But instead I received IndexError: list index out of range How would I fix this? It still outputs the same error if I use random.randint(0,3) A: You reading a python file as a text file & splitting accordingly. You don't even need to do that. Consider this example. array.py arr_from_file_one = [["James", 23], ["Jack", 27], ["Jimothy", 21],["Jillian", 22]] Main.py from array import arr_from_file_one for x in arr_from_file_one: print(x) Gives # ['James', 23] ['Jack', 27] ['Jimothy', 21] ['Jillian', 22] >>> Note: You have to make sure that array.py and main.py should be on same folder.
How would I reference an external .py file containing an array, in another .py file, and use the array to retrieve information
I am trying to retrive the data from an external array, to use in this program, but it gave me the error IndexError: list index out of range. This is the example data I used: [["James", 23], ["Jack", 27], ["Jimothy", 21],["Jillian", 22]] And my example code: import random data = open('array.py', 'r').readlines() randomInt= random.randint(0,4) randomName = data[randomInt][0] age = data[randomInt][1] print(randomName, age) in 2 separate files, and I was expecting it to output any of these 4: >>>James 23 >>>Jack 27 >>>Jimothy 21 >>>Jillian 22 But instead I received IndexError: list index out of range How would I fix this? It still outputs the same error if I use random.randint(0,3)
[ "You reading a python file as a text file & splitting accordingly. You don't even need to do that. Consider this example.\narray.py\narr_from_file_one = [[\"James\", 23], [\"Jack\", 27], [\"Jimothy\", 21],[\"Jillian\", 22]]\n\nMain.py\nfrom array import arr_from_file_one\n\nfor x in arr_from_file_one:\n print(x)\n\nGives #\n['James', 23]\n['Jack', 27]\n['Jimothy', 21]\n['Jillian', 22]\n>>> \n\n\nNote: You have to make sure that array.py and main.py should be on\nsame folder.\n\n" ]
[ 0 ]
[]
[]
[ "multidimensional_array", "python" ]
stackoverflow_0074597493_multidimensional_array_python.txt
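One caveat with the answer above: naming the data file array.py can shadow Python's standard-library array module when importing. A sketch with a hypothetical module name avoids that, and also shows picking a random entry, which the question's original code was attempting:

    # companies.py  (hypothetical file name, chosen to avoid clashing with the stdlib "array" module)
    people = [["James", 23], ["Jack", 27], ["Jimothy", 21], ["Jillian", 22]]

    # main.py
    import random
    from companies import people

    name, age = random.choice(people)   # picks one [name, age] pair at random
    print(name, age)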
Q: Two different Python max() function articles in GeeksForGeeks? I was reading about the key argument in max() in Python and came across two articles in GeeksForGeeks. First article: Python – max() function Second article: Python String | max(). In the first article, Example 2 reads: "By default, max() will return the string with the maximum lexicographic value" and gave the following sample code which prints "for". var1 = "geeks" var2 = "for" var3 = "geek" max_val = max(var1, var2, var3) print(max_val) # "for" However, when I plugged that code in my IDE, it prints "geeks". Am I missing something here or is there an error in the article? I also read Python's official doc but it didn't address the question. I've tried the following codes on my own IDE and the outcome is exactly as expected. var1 = "geeks" var2 = "for" var3 = "geek" print(max(var1, var2, var3)) # prints "geeks" lst1 = ["geeks", "for", "geek"] print(max(lst1)) # prints "geeks" print(max('aa','az')) # prints 'az' since it has the highest "lexicographic value" aka alphabetical order
Two different Python max() function articles in GeeksForGeeks?
I was reading about the key argument in max() in Python and came across two articles in GeeksForGeeks. First article: Python – max() function Second article: Python String | max(). In the first article, Example 2 reads: "By default, max() will return the string with the maximum lexicographic value" and gave the following sample code which prints "for". var1 = "geeks" var2 = "for" var3 = "geek" max_val = max(var1, var2, var3) print(max_val) # "for" However, when I plugged that code in my IDE, it prints "geeks". Am I missing something here or is there an error in the article? I also read Python's official doc but it didn't address the question. I've tried the following codes on my own IDE and the outcome is exactly as expected. var1 = "geeks" var2 = "for" var3 = "geek" print(max(var1, var2, var3)) # prints "geeks" lst1 = ["geeks", "for", "geek"] print(max(lst1)) # prints "geeks" print(max('aa','az')) # prints 'az' since it has the highest "lexicographic value" aka alphabetical order
[]
[]
[ "The article is incorrect in one place and correct in another - you've typed the data incorrectly. It has Geeks: not geeks. Geeks has a lower lexicographical value than for, geeks has a higher one.\n" ]
[ -2 ]
[ "python" ]
stackoverflow_0074597526_python.txt
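A quick way to see the discrepancy the comment describes, assuming the article's sample data really used capital-G strings:

    print(max("geeks", "for", "geek"))   # geeks  (all lowercase, as typed in the question)
    print(max("Geeks", "for", "Geek"))   # for    (capital 'G' sorts before lowercase 'f')
    print(ord('G'), ord('f'))            # 71 102 -> uppercase letters compare as smaller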
Q: Delete similar values in an array python numpy Is there an easy possibility to delete similar values in an array (with condition) without using a for loop? For example lets say I have an array np.array([1.2, 3.4, 3.5, 8.9, 10.9]) In this case, i would set the condition for example difference < 0.3 and get as an output np.array([1.2, 3.4, 8.9, 10.9]) I haven't seen anything on the internet similar to this question. Of course there is the function .unique() but that is only for exactly the same values. A: If you want to delete the successive values, you can compute the successive differences and perform boolean indexing: a = np.array([1.2, 3.4, 3.5, 8.9, 10.9]) out = a[np.r_[True, np.diff(a)>=0.3]] Or, if you want the absolute difference: out = a[np.r_[True, np.abs(np.diff(a))>=0.3]] Output: array([ 1.2, 3.4, 8.9, 10.9]) Intermediates: np.diff(a) # array([2.2, 0.1, 5.4, 2. ]) np.diff(a)>=0.3 # array([ True, False, True, True]) np.r_[True, np.diff(a)>=0.3] # array([ True, True, False, True, True])
Delete similar values in an array python numpy
Is there an easy possibility to delete similar values in an array (with condition) without using a for loop? For example lets say I have an array np.array([1.2, 3.4, 3.5, 8.9, 10.9]) In this case, i would set the condition for example difference < 0.3 and get as an output np.array([1.2, 3.4, 8.9, 10.9]) I haven't seen anything on the internet similar to this question. Of course there is the function .unique() but that is only for exactly the same values.
[ "If you want to delete the successive values, you can compute the successive differences and perform boolean indexing:\na = np.array([1.2, 3.4, 3.5, 8.9, 10.9])\n\nout = a[np.r_[True, np.diff(a)>=0.3]]\n\nOr, if you want the absolute difference:\nout = a[np.r_[True, np.abs(np.diff(a))>=0.3]]\n\nOutput:\narray([ 1.2, 3.4, 8.9, 10.9])\n\nIntermediates:\nnp.diff(a)\n# array([2.2, 0.1, 5.4, 2. ])\n\nnp.diff(a)>=0.3\n# array([ True, False, True, True])\n\nnp.r_[True, np.diff(a)>=0.3]\n# array([ True, True, False, True, True])\n\n" ]
[ 1 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074597543_numpy_python.txt
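For completeness, the same approach as a self-contained script (the answer assumes numpy is already imported as np):

    import numpy as np

    a = np.array([1.2, 3.4, 3.5, 8.9, 10.9])
    # np.r_ prepends True so the first element is always kept; the mask then
    # drops any value closer than 0.3 to the element before it.
    mask = np.r_[True, np.abs(np.diff(a)) >= 0.3]
    print(a[mask])   # [ 1.2  3.4  8.9 10.9]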
Q: How to make vscode refer to a .py file when I click on a python method? When I press ctrl+click on a built-in or library function method, it redirects me not to the source file of that method, but to a stub .pyi file, which is wildly annoying. For example, if I go to print() function, the IDE will open builtins.pyi file instead of builtins.py. I know this problem doesn't exist in PyCharm, but I want to work in vscode. A: Just create a .env file in root of your project and put path to python there, like that: PYTHONPATH=~/.venv/bin/python3 If needed, use : as a separator in case of multiple paths.
How to make vscode refer to a .py file when I click on a python method?
When I press ctrl+click on a built-in or library function method, it redirects me not to the source file of that method, but to a stub .pyi file, which is wildly annoying. For example, if I go to print() function, the IDE will open builtins.pyi file instead of builtins.py. I know this problem doesn't exist in PyCharm, but I want to work in vscode.
[ "Just create a .env file in root of your project and put path to python there, like that:\nPYTHONPATH=~/.venv/bin/python3\nIf needed, use : as a separator in case of multiple paths.\n" ]
[ 0 ]
[]
[]
[ "pylance", "python", "visual_studio_code" ]
stackoverflow_0071209751_pylance_python_visual_studio_code.txt
Q: Multiple conditions for string variable I am trying to add a new column "profile_type" to a dataframe "df_new" which contains the string "Decision Maker" if the "job_title" has any one of the following words: (Head or VP or COO or CEO or CMO or CLO or Chief or Partner or Founder or Owner or CIO or CTO or President or Leaders), "Key Influencer" if the "job_title" has any one of the following words: (Senior or Consultant or Manager or Learning or Training or Talent or HR or Human Resources or Consultant or L&D or Lead), and "Influencer" for all other fields in "job_title". For example, if the 'job_title' includes a row "Learning and Development Specialist", the code has to pull out just the word 'Learning' and segregate it as 'Key Influencer' under 'profile_type'. A: I would try something like this: import numpy as np dm_titles = ['Head', 'VP', 'COO', ...] ki_titles = ['Senior ', 'Consultant', 'Manager', ...] conditions = [ (any([word in new_df['job_title'] for word in dm_titles])), (any([word in new_df['job_title'] for word in ki_titles])), (all([word not in new_df['job_title'] for word in dm_titles] + [word not in new_df['job_title'] for word in ki_titles])) ] values = ["Decision Maker", "Key Influencer", "Influencer"] df_new['profile_type'] = np.select(conditions, values) Let me know if you need any clarification! A: The below code worked for me. import re s1 = pd.Series(df['job_title']) condition1 = s1.str.contains('Director|Head|VP|COO|CEO...', flags=re.IGNORECASE, regex=True) condition2 = s1.str.contains('Senior|Consultant|Manager|Learning...', flags=re.IGNORECASE, regex=True) df_new['profile_type'] = np.where(condition1 == True, 'Decision Maker', (np.where(condition2 == True, 'Key Influencer', 'Influencer')))
Multiple conditions for string variable
I am trying to add a new column "profile_type" to a dataframe "df_new" which contains the string "Decision Maker" if the "job_title" has any one of the following words: (Head or VP or COO or CEO or CMO or CLO or Chief or Partner or Founder or Owner or CIO or CTO or President or Leaders), "Key Influencer" if the "job_title" has any one of the following words: (Senior or Consultant or Manager or Learning or Training or Talent or HR or Human Resources or Consultant or L&D or Lead), and "Influencer" for all other fields in "job_title". For example, if the 'job_title' includes a row "Learning and Development Specialist", the code has to pull out just the word 'Learning' and segregate it as 'Key Influencer' under 'profile_type'.
[ "I would try something like this:\nimport numpy as np\n\ndm_titles = ['Head', 'VP', 'COO', ...]\nki_titles = ['Senior ', 'Consultant', 'Manager', ...]\n\n\nconditions = [\n(any([word in new_df['job_title'] for word in dm_titles])),\n(any([word in new_df['job_title'] for word in ki_titles])),\n(all([word not in new_df['job_title'] for word in dm_titles] + [word not in new_df['job_title'] for word in ki_titles]))\n]\n\nvalues = [\"Decision Maker\", \"Key Influencer\", \"Influencer\"]\n\ndf_new['profile_type'] = np.select(conditions, values)\n\n\nLet me know if you need any clarification!\n", "The below code worked for me.\nimport re\ns1 = pd.Series(df['job_title'])\n\ncondition1 = s1.str.contains('Director|Head|VP|COO|CEO...', flags=re.IGNORECASE, regex=True)\n\ncondition2 = s1.str.contains('Senior|Consultant|Manager|Learning...', flags=re.IGNORECASE, regex=True)\n\ndf_new['profile_type'] = np.where(condition1 == True, 'Decision Maker', \n (np.where(condition2 == True, 'Key Influencer', 'Influencer')))\n\n" ]
[ 0, 0 ]
[ "First, define a function that acts on a row of the dataframe, and returns what you want: in your case, 'Decision Maker' if the job_title contains any words in your list.\ndef is_key_worker(row):\n if (row[\"job_title\"] == \"CTO\" or row[\"job_title\"]==\"Founder\") # add more here.\n\nNext, apply the function to your dataframe, along axis 1.\ndf_new[\"Key influencer\"] = df_new.apply(is_key_worker, axis=1)\n\n" ]
[ -1 ]
[ "multiple_conditions", "pandas", "python" ]
stackoverflow_0074570681_multiple_conditions_pandas_python.txt
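A sketch combining both answers with np.select, which checks the conditions in order (so a title matching both lists is labelled Decision Maker) and falls back to Influencer; the word lists are abbreviated here and should be extended with the full sets from the question:

    import re
    import numpy as np
    import pandas as pd

    df_new = pd.DataFrame({'job_title': ['Learning and Development Specialist',
                                         'Chief Marketing Officer',
                                         'Software Engineer']})

    dm_pattern = r'Head|VP|COO|CEO|CMO|Chief|Founder|Owner|President'
    ki_pattern = r'Senior|Consultant|Manager|Learning|Training|Talent|HR|Lead'

    conditions = [
        df_new['job_title'].str.contains(dm_pattern, flags=re.IGNORECASE, regex=True),
        df_new['job_title'].str.contains(ki_pattern, flags=re.IGNORECASE, regex=True),
    ]
    df_new['profile_type'] = np.select(conditions, ['Decision Maker', 'Key Influencer'],
                                       default='Influencer')
    print(df_new)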
Q: How do you convert a Dictionary to a List? For example, if the Dictionary is {0:0, 1:0, 2:0} making a list: [0, 0, 0]. If this isn't possible, how do you take the minimum of a dictionary, meaning the dictionary: {0:3, 1:2, 2:1} returning 1? A: convert a dictionary to a list is pretty simple, you have 3 flavors for that .keys(), .values() and .items() >>> test = {1:30,2:20,3:10} >>> test.keys() # you get the same result with list(test) [1, 2, 3] >>> test.values() [30, 20, 10] >>> test.items() [(1, 30), (2, 20), (3, 10)] >>> (in python 3 you would need to call list on those) finding the maximum or minimum is also easy with the min or max function >>> min(test.keys()) # is the same as min(test) 1 >>> min(test.values()) 10 >>> min(test.items()) (1, 30) >>> max(test.keys()) # is the same as max(test) 3 >>> max(test.values()) 30 >>> max(test.items()) (3, 10) >>> (in python 2, to be efficient, use the .iter* versions of those instead ) the most interesting one is finding the key of min/max value, and min/max got that cover too >>> max(test.items(),key=lambda x: x[-1]) (1, 30) >>> min(test.items(),key=lambda x: x[-1]) (3, 10) >>> here you need a key function, which is a function that take one of whatever you give to the main function and return the element(s) (you can also transform it to something else too) for which you wish to compare them. lambda is a way to define anonymous functions, which save you the need of doing this >>> def last(x): return x[-1] >>> min(test.items(),key=last) (3, 10) >>> A: You can simply take the minimum with: min(dic.values()) And convert it to a list with: list(dic.values()) but since a dictionary is unordered, the order of elements of the resulting list is undefined. In python-2.7 you do not need to call list(..) simply dic.values() will be sufficient: dic.values() A: >>> a = {0:0, 1:2, 2:4} >>> a.keys() [0, 1, 2] >>> a.values() [0, 2, 4] A: Here is my one-liner solution for a flattened list of keys and values: d = {'foo': 'bar', 'zoo': 'bee'} list(sum(d.items(), tuple())) And the result: ['foo', 'bar', 'zoo', 'bee']
How do you convert a Dictionary to a List?
For example, if the Dictionary is {0:0, 1:0, 2:0} making a list: [0, 0, 0]. If this isn't possible, how do you take the minimum of a dictionary, meaning the dictionary: {0:3, 1:2, 2:1} returning 1?
[ "convert a dictionary to a list is pretty simple, you have 3 flavors for that .keys(), .values() and .items()\n>>> test = {1:30,2:20,3:10}\n>>> test.keys() # you get the same result with list(test)\n[1, 2, 3]\n>>> test.values()\n[30, 20, 10]\n>>> test.items()\n[(1, 30), (2, 20), (3, 10)]\n>>> \n\n(in python 3 you would need to call list on those)\nfinding the maximum or minimum is also easy with the min or max function\n>>> min(test.keys()) # is the same as min(test)\n1\n>>> min(test.values())\n10\n>>> min(test.items())\n(1, 30)\n>>> max(test.keys()) # is the same as max(test)\n3\n>>> max(test.values())\n30\n>>> max(test.items())\n(3, 10)\n>>> \n\n(in python 2, to be efficient, use the .iter* versions of those instead )\nthe most interesting one is finding the key of min/max value, and min/max got that cover too\n>>> max(test.items(),key=lambda x: x[-1])\n(1, 30)\n>>> min(test.items(),key=lambda x: x[-1])\n(3, 10)\n>>> \n\nhere you need a key function, which is a function that take one of whatever you give to the main function and return the element(s) (you can also transform it to something else too) for which you wish to compare them.\nlambda is a way to define anonymous functions, which save you the need of doing this\n>>> def last(x):\n return x[-1] \n\n>>> min(test.items(),key=last)\n(3, 10)\n>>> \n\n", "You can simply take the minimum with:\nmin(dic.values())\n\nAnd convert it to a list with:\nlist(dic.values())\n\nbut since a dictionary is unordered, the order of elements of the resulting list is undefined.\nIn python-2.7 you do not need to call list(..) simply dic.values() will be sufficient:\ndic.values()\n\n", ">>> a = {0:0, 1:2, 2:4}\n>>> a.keys()\n[0, 1, 2]\n>>> a.values()\n[0, 2, 4]\n\n", "Here is my one-liner solution for a flattened list of keys and values:\nd = {'foo': 'bar', 'zoo': 'bee'}\nlist(sum(d.items(), tuple()))\n\nAnd the result:\n['foo', 'bar', 'zoo', 'bee'] \n\n" ]
[ 5, 2, 1, 0 ]
[ "A dictionary is defined as the following:\ndict{[Any]:[Any]} = {[Key]:[Value]}\n\nThe problem with your question is that you haven't clarified what the keys are. \n1: Assuming the keys are just numbers and in ascending order without gaps, dict.values() will suffice, as other authors have already pointed out.\n2: Assuming the keys are just numbers in strictly ascending order but not in the right order:\ni = 0\nlist = []\nwhile i < max(mydict.keys()):\n list.append(mydict[i])\n i += 1\n\n3: Assuming the keys are just numbers but not in strictly ascending order:\n There still is a way, but you have to get the keys first and do it via the maximum of the keys and an try-except block\n4: If none of these is the case, maybe dict is not what you are looking for and a 2d or 3d array would suffice? This also counts if one of the solutions do work. Dict seems to be a bad choice for what you are doing.\n" ]
[ -1 ]
[ "dictionary", "list", "python", "python_2.7" ]
stackoverflow_0041915545_dictionary_list_python_python_2.7.txt
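A short Python 3 recap of the points above (in Python 3, .keys(), .values() and .items() return views, so wrap them in list() when a real list is needed):

    d = {0: 3, 1: 2, 2: 1}
    print(list(d.values()))    # [3, 2, 1]
    print(min(d.values()))     # 1 -> the minimum value the question asks for
    print(min(d, key=d.get))   # 2 -> the key that holds that minimum value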
Q: Creating a Function to Count Unique Values Based on Another Column I have data that look like this. company_name new_company_status A Co.,Ltd Yes B. Inc No PT XYZ No PT DFE, Tbk. Yes A Co.,Ltd Yes PT DFE, Tbk. Yes I want to create a function in python to check every unique company name from 'company_name' column and compare the 'new_company_status', if the 'new_company_status' is "Yes" for every unique company name, it will count as 1 and iterate to get the total number of new company. So far this is the code that I write: ` def new_comp(DataFrame): comp_list = df['Company_Name'].values.tolist uniq_comp = set(comp_list) for x in uniq_comp: if df['Status_New_Company'] == "Yes": uniq_comp += 1 print('New Companies: ', uniq_comp) ` Can anyone help me to complete and/or revise the code? I expect the output is integer to define the total of new company. Thank u in advance. A: You can use masks and boolean addition to count the matches: # keep one company of each m1 = ~df['company_name'].duplicated() # is this a yes? m2 = df['new_company_status'].eq('Yes') # count cases for which both conditions are True out = (m1&m2).sum() Output: 2 If a given company can have both Yes and No and you want to count 1 if there is at least one Yes, you can use a groupby.any: out = (df['new_company_status'] .eq('Yes') .groupby(df['company_name']).any() .sum() ) Output: 2 A: If need total unique values of company_name if new_company_status match Yes filter and count length of sets: N = len(set(df.loc[df['new_company_status'].eq('Yes'), 'company_name'])) If need count number of Yes per company_name to new DataFrame aggregate boolean mask by sum: df1 = (df['new_company_status'].eq('Yes') .groupby(df['company_name']) .sum() .reset_index(name='countYes'))
Creating a Function to Count Unique Values Based on Another Column
I have data that look like this. company_name new_company_status A Co.,Ltd Yes B. Inc No PT XYZ No PT DFE, Tbk. Yes A Co.,Ltd Yes PT DFE, Tbk. Yes I want to create a function in python to check every unique company name from 'company_name' column and compare the 'new_company_status', if the 'new_company_status' is "Yes" for every unique company name, it will count as 1 and iterate to get the total number of new company. So far this is the code that I write: ` def new_comp(DataFrame): comp_list = df['Company_Name'].values.tolist uniq_comp = set(comp_list) for x in uniq_comp: if df['Status_New_Company'] == "Yes": uniq_comp += 1 print('New Companies: ', uniq_comp) ` Can anyone help me to complete and/or revise the code? I expect the output is integer to define the total of new company. Thank u in advance.
[ "You can use masks and boolean addition to count the matches:\n# keep one company of each\nm1 = ~df['company_name'].duplicated()\n# is this a yes?\nm2 = df['new_company_status'].eq('Yes')\n\n# count cases for which both conditions are True\nout = (m1&m2).sum()\n\nOutput: 2\nIf a given company can have both Yes and No and you want to count 1 if there is at least one Yes, you can use a groupby.any:\nout = (df['new_company_status']\n .eq('Yes')\n .groupby(df['company_name']).any()\n .sum()\n)\n\nOutput: 2\n", "If need total unique values of company_name if new_company_status match Yes filter and count length of sets:\nN = len(set(df.loc[df['new_company_status'].eq('Yes'), 'company_name']))\n\nIf need count number of Yes per company_name to new DataFrame aggregate boolean mask by sum:\ndf1 = (df['new_company_status'].eq('Yes')\n .groupby(df['company_name'])\n .sum()\n .reset_index(name='countYes'))\n\n" ]
[ 1, 0 ]
[]
[]
[ "function", "pandas", "python", "unique_values" ]
stackoverflow_0074597716_function_pandas_python_unique_values.txt
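The grouped variant from the answers as a self-contained script, using the sample table from the question:

    import pandas as pd

    df = pd.DataFrame({'company_name': ['A Co.,Ltd', 'B. Inc', 'PT XYZ',
                                        'PT DFE, Tbk.', 'A Co.,Ltd', 'PT DFE, Tbk.'],
                       'new_company_status': ['Yes', 'No', 'No', 'Yes', 'Yes', 'Yes']})

    # Number of distinct companies that have at least one "Yes" row.
    new_companies = (df['new_company_status'].eq('Yes')
                       .groupby(df['company_name'])
                       .any()
                       .sum())
    print(new_companies)   # 2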
Q: Auto fill a field in model via function django I was searching to find a way to fill a field of a model via a function. example: def myfunction(): return a_file class SomeModel(models.Model): a_field_name=models.FileField(value=my_function()) I some how was thinking to rewrite the save().share with me your idea A: Well as per my understanding your question you can try it like this: def Function(self, parameter: int): return Models.objects.update_or_create( variable=parameter ) Please reply to this message If the issue still persist.
Auto fill a field in model via function django
I was searching to find a way to fill a field of a model via a function. example: def myfunction(): return a_file class SomeModel(models.Model): a_field_name=models.FileField(value=my_function()) I some how was thinking to rewrite the save().share with me your idea
[ "Well as per my understanding your question you can try it like this:\n def Function(self, parameter: int):\n return Models.objects.update_or_create(\n variable=parameter\n )\n\nPlease reply to this message If the issue still persist.\n" ]
[ 0 ]
[]
[]
[ "django", "django_models", "django_rest_framework", "python" ]
stackoverflow_0074597633_django_django_models_django_rest_framework_python.txt
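The question seems to be after one of two standard Django patterns: a callable default on the field, or computing the value in an overridden save(). A minimal sketch of both, with a hypothetical placeholder value since the real file source isn't shown:

    from django.db import models

    def my_function():
        # Hypothetical: return whatever value the field should receive, e.g. a path.
        return 'uploads/placeholder.txt'

    class SomeModel(models.Model):
        # Passing the callable itself (no parentheses) makes Django call it for
        # every new instance that needs a default value.
        a_field_name = models.FileField(default=my_function)

        def save(self, *args, **kwargs):
            # Alternative: compute the value just before the row is written.
            if not self.a_field_name:
                self.a_field_name = my_function()
            super().save(*args, **kwargs)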
Q: How do I skip some value when I'm using pandas df.transform I want to convert the names of items that occur less than two times to None But I don't want some items to be changed. The original df | Column A | Column B | | -------- | -------- | | Cat | Fish | | Cat | Bone | | Camel | Fish | | Dog | Bone | | Dog | Bone | | Tiger | Bone | I have tried to use this to convert the names df.loc[df.groupby('Column A').Column A.transform('count').lt(2), 'Column A'] = "None" | Column A | Column B | | -------- | -------- | | Cat | Fish | | Cat | Bone | | None | Fish | | Dog | Bone | | Dog | Bone | | None | Bone | What should I do if I want to keep the "Tiger"? A: Use several conditions for your boolean indexing: # is the count <= 2? m1 = df.groupby('Column A')['Column A'].transform('count').lt(2) # is the name NOT Tiger? m2 = df['Column A'].ne('Tiger') # if both conditions are True, change to "None" df.loc[m1&m2, 'Column A'] = "None"
How do I skip some value when I'm using pandas df.transform
I want to convert the names of items that occur less than two times to None But I don't want some items to be changed. The original df | Column A | Column B | | -------- | -------- | | Cat | Fish | | Cat | Bone | | Camel | Fish | | Dog | Bone | | Dog | Bone | | Tiger | Bone | I have tried to use this to convert the names df.loc[df.groupby('Column A').Column A.transform('count').lt(2), 'Column A'] = "None" | Column A | Column B | | -------- | -------- | | Cat | Fish | | Cat | Bone | | None | Fish | | Dog | Bone | | Dog | Bone | | None | Bone | What should I do if I want to keep the "Tiger"?
[ "Use several conditions for your boolean indexing:\n# is the count <= 2?\nm1 = df.groupby('Column A')['Column A'].transform('count').lt(2)\n# is the name NOT Tiger?\nm2 = df['Column A'].ne('Tiger')\n\n# if both conditions are True, change to \"None\"\ndf.loc[m1&m2, 'Column A'] = \"None\"\n\n" ]
[ 3 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074597748_pandas_python.txt
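The answer as a runnable script with the sample frame, to show that only Camel is replaced while Tiger survives the count filter:

    import pandas as pd

    df = pd.DataFrame({'Column A': ['Cat', 'Cat', 'Camel', 'Dog', 'Dog', 'Tiger'],
                       'Column B': ['Fish', 'Bone', 'Fish', 'Bone', 'Bone', 'Bone']})

    m1 = df.groupby('Column A')['Column A'].transform('count').lt(2)
    m2 = df['Column A'].ne('Tiger')
    df.loc[m1 & m2, 'Column A'] = 'None'
    print(df)   # Camel becomes "None"; Tiger is kept even though it appears only once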
Q: How to catch an element from a for loop in Django template? I'm looping through to different categories and rendering results (of its name and its associated pages). This code is rendering correctly. {% for category in categories %} <div class="row"> <div class="col-lg-4"> <h3><a href="{{category.get_absolute_url}}"></a>{{category.category_name}}</h3> {% for page in category.page_set.all %} <p><a href="{{page.get_absolute_url}}">{{page.page_title}}</p> {% endfor %} </div> </div> {% endfor %} I'd like to catch an specific element inside the for loop (to be precise the 4th one in the forloop counter) and customize it (adding some html and text only to that one). I tried using the forloop.counter like this: {% if forloop.counter == 4 %} <!-- modified content --> {% endif %} But there isn't any results (not showing any changes). Is there any way to do this? A: As per my understanding of your question. You can try it soo: You can place this inside your for loop so that whenever you it iterates and by that you can use your if value to do some operations for specific value. {% if some_variable == some_value %} {{ do_something }} {% endif %}
How to catch an element from a for loop in Django template?
I'm looping through to different categories and rendering results (of its name and its associated pages). This code is rendering correctly. {% for category in categories %} <div class="row"> <div class="col-lg-4"> <h3><a href="{{category.get_absolute_url}}"></a>{{category.category_name}}</h3> {% for page in category.page_set.all %} <p><a href="{{page.get_absolute_url}}">{{page.page_title}}</p> {% endfor %} </div> </div> {% endfor %} I'd like to catch an specific element inside the for loop (to be precise the 4th one in the forloop counter) and customize it (adding some html and text only to that one). I tried using the forloop.counter like this: {% if forloop.counter == 4 %} <!-- modified content --> {% endif %} But there isn't any results (not showing any changes). Is there any way to do this?
[ "As per my understanding of your question. You can try it soo:\nYou can place this inside your for loop so that whenever you it iterates and by that you can use your if value to do some operations for specific value.\n{% if some_variable == some_value %}\n {{ do_something }}\n{% endif %}\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_templates", "for_loop", "python" ]
stackoverflow_0074597538_django_django_templates_for_loop_python.txt
Q: I want to make a Guess the number code without input import random number = random.randint(1, 10) player_name = "doo" number_of_guesses = 0 print('I\'m glad to meet you! {} \nLet\'s play a game with you, I will think a number between 1 and 10 then you will guess, alright? \nDon\'t forget! You have only 3 chances so guess:'.format(player_name)) while number_of_guesses < 3: guess = int(input()) number_of_guesses += 1 if guess < number: print('Your estimate is too low, go up a little!') if guess > number: print('Your estimate is too high, go down a bit!') if guess == number: break if guess == number: print( 'Congratulations {}, you guessed the number in {} tries!'.format(player_name, number_of_guesses)) else: print('Close but no cigar, you couldn\'t guess the number. \nWell, the number was {}.'.format(number)) Above is the Guess the number project using input. I want to make it without input. Can't we use a list or variable to make it? So I tried. import random list=[1, 2, 3, 4, 5, 8, 9, 10] print('I am Guessing a number between 1 and 10:\n') for number in lis: number_of_guesses = 0 while number_of_guesses<3: guess_number=random.randint(1,10) if number<guess_number: number_of_guesses+=1 print('Your guess number is high '+str(guess_number)) elif number>guess_number: number_of_guesses+=1 print('Your guess number is low '+str(guess_number)) else: print("You guess Right The number is: "+str(guess_number)+"\nNumber of guess taken "+str(number_of_guesses+1)) break if number_of_guesses==3: print("Sorry your chances of guessing is over! You can not guess the number correct") Failed to create Guess the number code without input. Help me. A: As i understand, you want to "test" your program and therefore use a list of inputs instead of real user inputs? However, you made a mistake in line 6 for number in lis: -> for number in list: If I change this line it gives me this output. Is that what you wanted? I am Guessing a number between 1 and 10: Your guess number is high 9 Your guess number is high 7 Your guess number is high 8 Sorry your chances of guessing is over! You can not guess the number correct You guess Right The number is: 2 Number of guess taken 1 Your guess number is high 10 You guess Right The number is: 3 Number of guess taken 2 Your guess number is high 9 Your guess number is low 1 Your guess number is low 2 Sorry your chances of guessing is over! You can not guess the number correct Your guess number is low 2 Your guess number is high 10 Your guess number is high 6 Sorry your chances of guessing is over! You can not guess the number correct You guess Right The number is: 8 Number of guess taken 1 Your guess number is high 10 Your guess number is low 7 Your guess number is low 2 Sorry your chances of guessing is over! You can not guess the number correct Your guess number is low 7 Your guess number is low 8 Your guess number is low 8 Sorry your chances of guessing is over! You can not guess the number correct
I want to make a Guess the number code without input
import random number = random.randint(1, 10) player_name = "doo" number_of_guesses = 0 print('I\'m glad to meet you! {} \nLet\'s play a game with you, I will think a number between 1 and 10 then you will guess, alright? \nDon\'t forget! You have only 3 chances so guess:'.format(player_name)) while number_of_guesses < 3: guess = int(input()) number_of_guesses += 1 if guess < number: print('Your estimate is too low, go up a little!') if guess > number: print('Your estimate is too high, go down a bit!') if guess == number: break if guess == number: print( 'Congratulations {}, you guessed the number in {} tries!'.format(player_name, number_of_guesses)) else: print('Close but no cigar, you couldn\'t guess the number. \nWell, the number was {}.'.format(number)) Above is the Guess the number project using input. I want to make it without input. Can't we use a list or variable to make it? So I tried. import random list=[1, 2, 3, 4, 5, 8, 9, 10] print('I am Guessing a number between 1 and 10:\n') for number in lis: number_of_guesses = 0 while number_of_guesses<3: guess_number=random.randint(1,10) if number<guess_number: number_of_guesses+=1 print('Your guess number is high '+str(guess_number)) elif number>guess_number: number_of_guesses+=1 print('Your guess number is low '+str(guess_number)) else: print("You guess Right The number is: "+str(guess_number)+"\nNumber of guess taken "+str(number_of_guesses+1)) break if number_of_guesses==3: print("Sorry your chances of guessing is over! You can not guess the number correct") Failed to create Guess the number code without input. Help me.
[ "As i understand, you want to \"test\" your program and therefore use a list of inputs instead of real user inputs?\nHowever, you made a mistake in line 6\nfor number in lis: -> for number in list:\nIf I change this line it gives me this output. Is that what you wanted?\nI am Guessing a number between 1 and 10:\n\nYour guess number is high 9\nYour guess number is high 7\nYour guess number is high 8\nSorry your chances of guessing is over! You can not guess the number correct\nYou guess Right The number is: 2\nNumber of guess taken 1\nYour guess number is high 10\nYou guess Right The number is: 3\nNumber of guess taken 2\nYour guess number is high 9\nYour guess number is low 1\nYour guess number is low 2\nSorry your chances of guessing is over! You can not guess the number correct\nYour guess number is low 2\nYour guess number is high 10\nYour guess number is high 6\nSorry your chances of guessing is over! You can not guess the number correct\nYou guess Right The number is: 8\nNumber of guess taken 1\nYour guess number is high 10\nYour guess number is low 7\nYour guess number is low 2\nSorry your chances of guessing is over! You can not guess the number correct\nYour guess number is low 7\nYour guess number is low 8\nYour guess number is low 8\nSorry your chances of guessing is over! You can not guess the number correct\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074596571_python.txt
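A minimal sketch of the underlying idea — driving the original game loop from a predefined list of guesses instead of input(); the scripted_guesses values are an illustrative assumption, not part of the original post:

import random

scripted_guesses = [5, 7, 3]          # hypothetical stand-in for what a player would type
guess_iter = iter(scripted_guesses)

number = random.randint(1, 10)
number_of_guesses = 0
while number_of_guesses < 3:
    guess = next(guess_iter)          # replaces int(input())
    number_of_guesses += 1
    if guess < number:
        print('Your estimate is too low, go up a little!')
    elif guess > number:
        print('Your estimate is too high, go down a bit!')
    else:
        break

if guess == number:
    print('Guessed the number in {} tries!'.format(number_of_guesses))
else:
    print('Out of chances, the number was {}.'.format(number))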
Q: How to split number into combination to make the same number when sum I have bit stuck trying to implement a combination for example : inp = 3 Need combination which could make again the same value like below `(1,1,1) -> sum -> 3 (2,1) -> sum -> 3 (1,2) -> sum -> 3 (0,3) -> sum -> 3 (3,0) -> sum -> 3` Not sure how to achieve this. Any idea to start with the approach A: I remember this question. My teacher told me to solve this. This is the solution: # arr - array to store the combination # index - next location in array # num - given number # reducedNum - reduced number def findCombinationsUtil(arr, index, num, reducedNum): # Base condition if (reducedNum < 0): return # If combination is # found, print it if (reducedNum == 0): for i in range(index): print(arr[i], end = " ") print("") return # Find the previous number stored in arr[]. # It helps in maintaining increasing order prev = 1 if(index == 0) else arr[index - 1] # note loop starts from previous # number i.e. at array location # index - 1 for k in range(prev, num + 1): # next element of array is k arr[index] = k # call recursively with # reduced number findCombinationsUtil(arr, index + 1, num, reducedNum - k) # Function to find out all # combinations of positive numbers # that add upto given number. # It uses findCombinationsUtil() def findCombinations(n): # array to store the combinations # It can contain max n elements arr = [0] * n # find all combinations findCombinationsUtil(arr, 0, n, n) # Driver code n = 5 # This is the Sum findCombinations(n) # This code is contributed by mits Note: When my teacher gave me this, I didn’t solve this. I got the code from geeksforgekks.com. So I’m putting the link to website too: here
How to split number into combination to make the same number when sum
I am a bit stuck trying to implement a combination. For example: inp = 3. I need the combinations that sum back to the same value, like below: `(1,1,1) -> sum -> 3 (2,1) -> sum -> 3 (1,2) -> sum -> 3 (0,3) -> sum -> 3 (3,0) -> sum -> 3` Not sure how to achieve this. Any idea how to start with the approach?
[ "I remember this question. My teacher told me to solve this.\nThis is the solution:\n\n# arr - array to store the combination\n# index - next location in array\n# num - given number\n# reducedNum - reduced number \n\n\ndef findCombinationsUtil(arr, index, num,\n\n reducedNum):\n \n\n # Base condition\n\n if (reducedNum < 0):\n\n return\n \n\n # If combination is \n\n # found, print it\n\n if (reducedNum == 0):\n \n\n for i in range(index):\n\n print(arr[i], end = \" \")\n\n print(\"\")\n\n return\n \n\n # Find the previous number stored in arr[]. \n\n # It helps in maintaining increasing order\n\n prev = 1 if(index == 0) else arr[index - 1]\n \n\n # note loop starts from previous \n\n # number i.e. at array location\n\n # index - 1\n\n for k in range(prev, num + 1):\n\n \n\n # next element of array is k\n\n arr[index] = k\n \n\n # call recursively with\n\n # reduced number\n\n findCombinationsUtil(arr, index + 1, num, \n\n reducedNum - k)\n \n# Function to find out all \n# combinations of positive numbers \n# that add upto given number.\n# It uses findCombinationsUtil() \n\ndef findCombinations(n):\n\n \n\n # array to store the combinations\n\n # It can contain max n elements\n\n arr = [0] * n\n \n\n # find all combinations\n\n findCombinationsUtil(arr, 0, n, n)\n \n# Driver code\n\nn = 5 # This is the Sum\nfindCombinations(n)\n \n# This code is contributed by mits\n\nNote: When my teacher gave me this, I didn’t solve this. I got the code from geeksforgekks.com.\nSo I’m putting the link to website too: here\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074597578_python.txt
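A shorter alternative sketch that matches the ordered tuples in the question, such as (2,1) versus (1,2) — it generates compositions with positive parts. This is an illustrative addition rather than the approach the answer quotes, and tuples containing 0 (like (0,3)) would additionally require fixing the tuple length:

def compositions(n):
    # every ordered tuple of positive integers that sums to n
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

print(list(compositions(3)))   # [(1, 1, 1), (1, 2), (2, 1), (3,)]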
Q: Numba: No implementation of function Function() found for signature: I was able to use Numba to solve a slow itertuples iteration issue with code provided in the answer of this question however when I try it on a new function with a similar loop format i run into this error No implementation of function Function(<built-in function setitem>) found for signature: >>> setitem(array(float64, 1d, C), int64, datetime64[ns]) There are 16 candidate implementations: - Of which 16 did not match due to: Overload of function 'setitem': File: <numerous>: Line N/A. With argument(s): '(array(float64, 1d, C), int64, datetime64[ns])': No match. During: typing of setitem at C:\.... File "dtb.py", line 774: def RI_finder(idx, start_dtm, end_dtm): <source elided> if #check conditions: start_dtm[i] = start_idx ^ There are some existing questions like this one, and this one, that have a similar issue however the solutions provided don't fix my error. Here is a sample of my code @njit def RI_finder(idx, start_dtm, end_dtm): for i in prange(len(start_dtm)): end_idx = idx[i] if #check conditions: for j in prange(len(start_dtm)): start_idx = idx[j] if #check conditions: for k in prange(len(start_dtm)): if #check conditions if #check conditions start_dtm[i] = start_idx end_dtm[i] = end_idx and here is a sample dataframe from which I am extracting the data used in the code provided import numpy as np import pandas as pd start_dtm = ['2022-10-29 06:59:59.999', '2022-10-29 07:14:59.999', '2022-10-29 07:29:59.999', '2022-10-29 07:44:59.999', '2022-10-29 07:59:59.999', '2022-10-29 08:14:59.999', '2022-10-29 08:29:59.999'] end_dtm = ['2022-10-29 06:59:59.999', '2022-10-29 07:05:59.999', '2022-10-29 07:10:59.999', '2022-10-29 07:15:59.999', '2022-10-29 07:20:59.999', '2022-10-29 07:25:59.999', '2022-10-29 07:30:59.999'] idx = ['2022-10-29 06:59:59.999', '2022-10-29 07:01:59.999', '2022-10-29 07:02:59.999', '2022-10-29 07:03:59.999', '2022-10-29 07:04:59.999', '2022-10-29 07:05:59.999', '2022-10-29 07:06:59.999'] df = pd.DataFrame({'start_dtm': start_dtm, 'end_dtm': end_dtm}, index=idx) What I am attempting is to loop through the arrays and find when certain conditions are met. Once those conditions are met I update start_dtm, end_dtm. Seeing as my code is so similar in structure to the code in this questions answer, (multiple loops to check conditions and then make changes). I can't see why mine is not working the same Edit: As suggested, making sure the variables are the same type worked to stop the error I was getting however now I am getting a new error which I have been unable to solve also. non-precise type array(pyobject, 1d, C) During: typing of argument at C: .... File "dtb.py", line 734: def RI_finder(idx, start_dtm, end_dtm): <source elided> for i in prange(len(open)): ^ A: It would help if you simplified the question and added a reproducible example. But based on the Exception, it looks like you're trying to set the item with a different incompatible type: setitem(array(float64, 1d, C), int64, datetime64[ns]) start_dtm = float64 i = int64 start_idx = datetime64[ns] Making sure both start_dtm and start_idx share the same datatype would probably solve the issue. edit: The error related to the "pyobject 1D" seems to be cause by the len() function not supporting object type arrays, which is caused by Pandas in this case. 
It can be reproduced by something like: @njit def func(x): return len(x) func(np.asarray(["x"], dtype=object)) I'm not sure what the idea behind it all is, but casting all three arrays to datetime (pd.to_datetime(start_dtm)) solves that, or making it a string type should also work.
Numba: No implementation of function Function() found for signature:
I was able to use Numba to solve a slow itertuples iteration issue with code provided in the answer of this question however when I try it on a new function with a similar loop format i run into this error No implementation of function Function(<built-in function setitem>) found for signature: >>> setitem(array(float64, 1d, C), int64, datetime64[ns]) There are 16 candidate implementations: - Of which 16 did not match due to: Overload of function 'setitem': File: <numerous>: Line N/A. With argument(s): '(array(float64, 1d, C), int64, datetime64[ns])': No match. During: typing of setitem at C:\.... File "dtb.py", line 774: def RI_finder(idx, start_dtm, end_dtm): <source elided> if #check conditions: start_dtm[i] = start_idx ^ There are some existing questions like this one, and this one, that have a similar issue however the solutions provided don't fix my error. Here is a sample of my code @njit def RI_finder(idx, start_dtm, end_dtm): for i in prange(len(start_dtm)): end_idx = idx[i] if #check conditions: for j in prange(len(start_dtm)): start_idx = idx[j] if #check conditions: for k in prange(len(start_dtm)): if #check conditions if #check conditions start_dtm[i] = start_idx end_dtm[i] = end_idx and here is a sample dataframe from which I am extracting the data used in the code provided import numpy as np import pandas as pd start_dtm = ['2022-10-29 06:59:59.999', '2022-10-29 07:14:59.999', '2022-10-29 07:29:59.999', '2022-10-29 07:44:59.999', '2022-10-29 07:59:59.999', '2022-10-29 08:14:59.999', '2022-10-29 08:29:59.999'] end_dtm = ['2022-10-29 06:59:59.999', '2022-10-29 07:05:59.999', '2022-10-29 07:10:59.999', '2022-10-29 07:15:59.999', '2022-10-29 07:20:59.999', '2022-10-29 07:25:59.999', '2022-10-29 07:30:59.999'] idx = ['2022-10-29 06:59:59.999', '2022-10-29 07:01:59.999', '2022-10-29 07:02:59.999', '2022-10-29 07:03:59.999', '2022-10-29 07:04:59.999', '2022-10-29 07:05:59.999', '2022-10-29 07:06:59.999'] df = pd.DataFrame({'start_dtm': start_dtm, 'end_dtm': end_dtm}, index=idx) What I am attempting is to loop through the arrays and find when certain conditions are met. Once those conditions are met I update start_dtm, end_dtm. Seeing as my code is so similar in structure to the code in this questions answer, (multiple loops to check conditions and then make changes). I can't see why mine is not working the same Edit: As suggested, making sure the variables are the same type worked to stop the error I was getting however now I am getting a new error which I have been unable to solve also. non-precise type array(pyobject, 1d, C) During: typing of argument at C: .... File "dtb.py", line 734: def RI_finder(idx, start_dtm, end_dtm): <source elided> for i in prange(len(open)): ^
[ "It would help if you simplified the question and added a reproducible example.\nBut based on the Exception, it looks like you're trying to set the item with a different incompatible type:\nsetitem(array(float64, 1d, C), int64, datetime64[ns])\nstart_dtm = float64\ni = int64\nstart_idx = datetime64[ns]\n\nMaking sure both start_dtm and start_idx share the same datatype would probably solve the issue.\nedit:\nThe error related to the \"pyobject 1D\" seems to be cause by the len() function not supporting object type arrays, which is caused by Pandas in this case. It can be reproduced by something like:\n@njit\ndef func(x):\n return len(x)\n\nfunc(np.asarray([\"x\"], dtype=object))\n\nI'm not sure what the idea behind it all is, but casting all three arrays to datetime (pd.to_datetime(start_dtm)) solves that, or making it a string type should also work.\n" ]
[ 1 ]
[]
[]
[ "numba", "numpy", "python" ]
stackoverflow_0074595975_numba_numpy_python.txt
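A minimal, runnable sketch of the casting fix suggested in the answer — convert the object-dtype strings to int64 nanosecond timestamps once, outside the jitted code, so the loop only ever touches Numba-friendly types. Here later_of is a made-up stand-in for the question's RI_finder logic, and the two short lists are trimmed sample data:

import numpy as np
import pandas as pd
from numba import njit

idx = ['2022-10-29 06:59:59.999', '2022-10-29 07:01:59.999']
start = ['2022-10-29 07:00:59.999', '2022-10-29 07:00:30.999']

# cast the object-dtype strings up front; the jitted loop sees plain int64
idx_ns = pd.to_datetime(idx).values.astype(np.int64)
start_ns = pd.to_datetime(start).values.astype(np.int64)

@njit
def later_of(a, b):
    out = np.empty(len(a), dtype=np.int64)
    for i in range(len(a)):
        out[i] = a[i] if a[i] >= b[i] else b[i]   # int64 comparison and setitem
    return out

print(pd.to_datetime(later_of(idx_ns, start_ns)))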
Q: PLY reduce/reduce conflict I am encountering a reduce / reduce conflict that I am unsure how to tackle. I have the following grammar : Type -> int | bool | void | string | char | Identifier Expression -> ... ... | Expression Index | Identifier VariableDecl -> Type Identifier Initializer Semi | Type LBRAC RBRAC Identifier Initializer Semi Index -> LBRAC Expression RBRAC The problem arises when I provide the following snippet to my parser : array[5] = 5; where array is an int[]. Following my parser.out I see that the sequence "array[" is being captured as a VariableDecl instead of an expression with a member access operator. Identifier is used throughout my spec but only appeared as an issue with this specific case. A: Either you need to use some mechanism outside of the parser (such as a symbol table) to distinguish between Types and other Identifiers, or you need to avoid forcing the parser to immediately reduce identifier. That's a bit of a pain, because it requires a bit of duplication of grammar rules. Here's one possibility, which might or might not look a lot like your parser. I added a simple assignment statement and a couple of expression operators, to show the pattern (in the actual grammar, there is a precedence array to avoid the shift-reduce conflicts in expressions; that's not really relevant to this question). prog -> stmts stmt -> decl ; stmt -> expr ; stmt -> assign ; stmts -> <empty> stmts -> stmts stmt assign -> lvalue = expr decl -> type IDENT init decl -> TYPE [ ] IDENT array_init decl -> IDENT [ ] IDENT array_init type -> TYPE type -> IDENT init -> <empty> array_init -> <empty> init -> = expr array_init -> = array_val array_val -> { } array_val -> { expr_list } expr_list -> expr expr_list -> expr_list , expr expr -> term expr -> IDENT expr -> CHAR expr -> INT term -> STRING term -> index lvalue -> IDENT lvalue -> index index -> term [ expr ] index -> IDENT [ expr ] expr -> expr + expr expr -> expr - expr expr -> expr * expr expr -> expr / expr term -> ( expr ) Some of the decisions I made there fairly arbitrary (and might be wrong), but it gives an idea, I hope. Basically, the things at the end of the expr hierarchy, which might just be terms in a simpler grammar, are divided into several overlapping categories: term: things other than IDENT which can be indexed; lvalue: things which can be assigned to; literals other than STRING: just tossed into expr because they don't fit into the other categories; IDENT: handled specially with redundant productions to avoid premature reduction. Also, IDENT is separated from type and handled with a redundant production for the same reason.
PLY reduce/reduce conflict
I am encountering a reduce / reduce conflict that I am unsure how to tackle. I have the following grammar : Type -> int | bool | void | string | char | Identifier Expression -> ... ... | Expression Index | Identifier VariableDecl -> Type Identifier Initializer Semi | Type LBRAC RBRAC Identifier Initializer Semi Index -> LBRAC Expression RBRAC The problem arises when I provide the following snippet to my parser : array[5] = 5; where array is an int[]. Following my parser.out I see that the sequence "array[" is being captured as a VariableDecl instead of an expression with a member access operator. Identifier is used throughout my spec but only appeared as an issue with this specific case.
[ "Either you need to use some mechanism outside of the parser (such as a symbol table) to distinguish between Types and other Identifiers, or you need to avoid forcing the parser to immediately reduce identifier. That's a bit of a pain, because it requires a bit of duplication of grammar rules.\nHere's one possibility, which might or might not look a lot like your parser. I added a simple assignment statement and a couple of expression operators, to show the pattern (in the actual grammar, there is a precedence array to avoid the shift-reduce conflicts in expressions; that's not really relevant to this question).\nprog -> stmts\nstmt -> decl ; \nstmt -> expr ; \nstmt -> assign ;\nstmts -> <empty>\nstmts -> stmts stmt\nassign -> lvalue = expr\ndecl -> type IDENT init\ndecl -> TYPE [ ] IDENT array_init\ndecl -> IDENT [ ] IDENT array_init\ntype -> TYPE\ntype -> IDENT\ninit -> <empty>\narray_init -> <empty>\ninit -> = expr\narray_init -> = array_val\narray_val -> { }\narray_val -> { expr_list }\nexpr_list -> expr\nexpr_list -> expr_list , expr\nexpr -> term\nexpr -> IDENT\nexpr -> CHAR\nexpr -> INT\nterm -> STRING\nterm -> index\nlvalue -> IDENT \nlvalue -> index \nindex -> term [ expr ]\nindex -> IDENT [ expr ]\nexpr -> expr + expr\nexpr -> expr - expr\nexpr -> expr * expr\nexpr -> expr / expr\nterm -> ( expr )\n\nSome of the decisions I made there fairly arbitrary (and might be wrong), but it gives an idea, I hope. Basically, the things at the end of the expr hierarchy, which might just be terms in a simpler grammar, are divided into several overlapping categories:\n\nterm: things other than IDENT which can be indexed;\nlvalue: things which can be assigned to;\nliterals other than STRING: just tossed into expr because they don't fit into the other categories;\nIDENT: handled specially with redundant productions to avoid premature reduction.\nAlso, IDENT is separated from type and handled with a redundant production for the same reason.\n\n" ]
[ 1 ]
[]
[]
[ "compiler_construction", "ply", "python" ]
stackoverflow_0074596969_compiler_construction_ply_python.txt
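As a complement, a small runnable sketch of the other option mentioned at the top of the answer — letting the lexer consult a symbol table so Type and Identifier stay distinct tokens. The token names and the declared_types set are illustrative assumptions, not the asker's actual grammar:

# The lexer re-tags identifiers that are known type names, so the parser
# never has to guess whether an IDENTIFIER is a type.
import ply.lex as lex

tokens = ('TYPE_NAME', 'IDENTIFIER', 'LBRAC', 'RBRAC', 'SEMI', 'NUMBER')

t_LBRAC = r'\['
t_RBRAC = r'\]'
t_SEMI = r';'
t_ignore = ' \t\n'

declared_types = {'int', 'bool', 'void', 'string', 'char'}  # could grow as declarations are parsed

def t_IDENTIFIER(t):
    r'[A-Za-z_][A-Za-z_0-9]*'
    if t.value in declared_types:
        t.type = 'TYPE_NAME'   # keeps Type and Identifier distinct for the grammar
    return t

def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

def t_error(t):
    t.lexer.skip(1)

lexer = lex.lex()
lexer.input('int[] xs; xs[5];')
for tok in lexer:
    print(tok.type, tok.value)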
Q: MySQL Deadlock when using DataFrame.to_sql in multithreaded environment I have a multithreaded ETL process inside a docker container that looks like this simplified code: class Query(abc.ABC): def __init__(self): self.connection = sqlalchemy.create_engine(MYSQL_CONNECTION_STR) def load(self, df: pd.DataFrame) -> None: df.to_sql( name=self.table, con=self.connection, if_exists="replace", index=False, ) @abc.abstractmethod def transform(self, data: object) -> pd.DataFrame: pass @abc.abstractmethod def extract(self) -> object: pass # other methods... class ComplianceInfluxQuery(Query): # Implements abstract methods... load method is the same as Query class ALL_QUERIES = [ComplianceInfluxQuery("cc_influx_share_count"),ComplianceInfluxQuery("cc_influx_we_count")....] while True: with ThreadPoolExecutor(max_workers=8) as pool: for query in ALL_QUERIES: pool.submit(execute_etl, query) # execute_etl function calls extract, transform and load Many classes inherit from Query, with the same implementation for load() as shown in class Query which simply loads a pandas DataFrame object to an sql table, and replacing the table if it exists. All the classes run concurrently and load the results to MySQL database after they finish Extract() and Transform(). Every class loads a different table to the database. Quite often I get a deadlock from a random thread when the load() method is called: 2020-09-17 09:48:28,138 | INFO | root | query | 44 | 1 | ComplianceInfluxQuery.load() 2020-09-17 09:48:28,160 | INFO | root | query | 44 | 1 | ComplianceInfluxQuery.load() 2020-09-17 09:48:28,241 | ERROR | root | calculate | 124 | 1 | Failed to execute query ComplianceInfluxQuery Traceback (most recent call last): File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context self.dialect.do_execute( File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 593, in do_execute cursor.execute(statement, parameters) File "/usr/local/lib/python3.8/site-packages/pymysql/cursors.py", line 163, in execute result = self._query(query) File "/usr/local/lib/python3.8/site-packages/pymysql/cursors.py", line 321, in _query conn.query(q) File "/usr/local/lib/python3.8/site-packages/pymysql/connections.py", line 505, in query self._affected_rows = self._read_query_result(unbuffered=unbuffered) File "/usr/local/lib/python3.8/site-packages/pymysql/connections.py", line 724, in _read_query_result result.read() File "/usr/local/lib/python3.8/site-packages/pymysql/connections.py", line 1069, in read first_packet = self.connection._read_packet() File "/usr/local/lib/python3.8/site-packages/pymysql/connections.py", line 676, in _read_packet packet.raise_for_error() File "/usr/local/lib/python3.8/site-packages/pymysql/protocol.py", line 223, in raise_for_error err.raise_mysql_exception(self._data) File "/usr/local/lib/python3.8/site-packages/pymysql/err.py", line 107, in raise_mysql_exception raise errorclass(errno, errval) pymysql.err.OperationalError: (1213, 'Deadlock found when trying to get lock; try restarting transaction') The above exception was the direct cause of the following exception: Traceback (most recent call last): File "calculate.py", line 119, in execute_etl query.run() File "/chil_etl/query.py", line 45, in run self.load(df) File "/chil_etl/query.py", line 22, in load df.to_sql( File "/usr/local/lib/python3.8/site-packages/pandas/core/generic.py", line 2653, in to_sql sql.to_sql( File "/usr/local/lib/python3.8/site-packages/pandas/io/sql.py", line 512, in 
to_sql pandas_sql.to_sql( File "/usr/local/lib/python3.8/site-packages/pandas/io/sql.py", line 1316, in to_sql table.create() File "/usr/local/lib/python3.8/site-packages/pandas/io/sql.py", line 649, in create self._execute_create() File "/usr/local/lib/python3.8/site-packages/pandas/io/sql.py", line 641, in _execute_create self.table.create() File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/schema.py", line 927, in create bind._run_visitor(ddl.SchemaGenerator, self, checkfirst=checkfirst) File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2097, in _run_visitor conn._run_visitor(visitorcallable, element, **kwargs) File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1656, in _run_visitor visitorcallable(self.dialect, self, **kwargs).traverse_single(element) File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/visitors.py", line 145, in traverse_single return meth(obj, **kw) File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/ddl.py", line 827, in visit_table self.connection.execute( File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1011, in execute return meth(self, multiparams, params) File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/ddl.py", line 72, in _execute_on_connection return connection._execute_ddl(self, multiparams, params) File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1068, in _execute_ddl ret = self._execute_context( File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1316, in _execute_context self._handle_dbapi_exception( File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1510, in _handle_dbapi_exception util.raise_( File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_ raise exception File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context self.dialect.do_execute( File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 593, in do_execute cursor.execute(statement, parameters) File "/usr/local/lib/python3.8/site-packages/pymysql/cursors.py", line 163, in execute result = self._query(query) File "/usr/local/lib/python3.8/site-packages/pymysql/cursors.py", line 321, in _query conn.query(q) File "/usr/local/lib/python3.8/site-packages/pymysql/connections.py", line 505, in query self._affected_rows = self._read_query_result(unbuffered=unbuffered) File "/usr/local/lib/python3.8/site-packages/pymysql/connections.py", line 724, in _read_query_result result.read() File "/usr/local/lib/python3.8/site-packages/pymysql/connections.py", line 1069, in read first_packet = self.connection._read_packet() File "/usr/local/lib/python3.8/site-packages/pymysql/connections.py", line 676, in _read_packet packet.raise_for_error() File "/usr/local/lib/python3.8/site-packages/pymysql/protocol.py", line 223, in raise_for_error err.raise_mysql_exception(self._data) File "/usr/local/lib/python3.8/site-packages/pymysql/err.py", line 107, in raise_mysql_exception raise errorclass(errno, errval) sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction') [SQL: CREATE TABLE cc_influx_share_count ( unique_identifier TEXT, nfs_share_count FLOAT(53), smb_share_count FLOAT(53), s3_bucket_count FLOAT(53) ) ] (Background on this error at: http://sqlalche.me/e/13/e3q8) The log shows the load() method called by two threads at 
almost the same time. This could happen in all of the classes regardless of the data. I ran the command SHOW ENGINE INNODB STATUS and no deadlocks were listed there. I checked the general_log table to understand what happened during the deadlock better, but didn't notice anything useful apart from the fact that the thread which deadlocked did not inserted any values to the table cc_influx_share_count when (i think) it should have: The error was raised at 09:48:28,241 SELECT * FROM mysql.general_log WHERE event_time >= "2020-09-17 09:48:27" AND event_time <= "2020-09-17 09:48:29" ORDER BY event_time ASC; Output (sensitive data removed): Time: 2020-09-17 09:48:27.010747. Event: COMMIT Time: 2020-09-17 09:48:27.011075. Event: ROLLBACK Time: 2020-09-17 09:48:27.012042. Event: CREATE TABLE cc_influx_last_metric_time ( unique_identifier TEXT, timestamp TIMESTAMP NULL, uptime BIGINT ) Time: 2020-09-17 09:48:27.033973. Event: COMMIT Time: 2020-09-17 09:48:27.034327. Event: ROLLBACK Time: 2020-09-17 09:48:27.053837. Event: INSERT INTO cc_influx_last_metric_time (unique_identifier, timestamp, uptime) VALUES (...) Time: 2020-09-17 09:48:27.066930. Event: COMMIT Time: 2020-09-17 09:48:27.068657. Event: ROLLBACK Time: 2020-09-17 09:48:27.887579. Event: DESCRIBE `container_post_deployments` Time: 2020-09-17 09:48:27.889705. Event: ROLLBACK Time: 2020-09-17 09:48:27.890186. Event: DESCRIBE `container_post_deployments` Time: 2020-09-17 09:48:27.892125. Event: ROLLBACK Time: 2020-09-17 09:48:27.892619. Event: SHOW FULL TABLES FROM `chil_etl` Time: 2020-09-17 09:48:27.894964. Event: SHOW CREATE TABLE `container_post_deployments` Time: 2020-09-17 09:48:27.896491. Event: ROLLBACK Time: 2020-09-17 09:48:27.897097. Event: DROP TABLE container_post_deployments Time: 2020-09-17 09:48:27.907816. Event: COMMIT Time: 2020-09-17 09:48:27.908322. Event: ROLLBACK Time: 2020-09-17 09:48:27.909890. Event: CREATE TABLE container_post_deployments ( image TEXT, `clientId` TEXT, message TEXT, timestamp TIMESTAMP NULL, status_code BIGINT, something TEXT, user_agent TEXT ) Time: 2020-09-17 09:48:27.928665. Event: COMMIT Time: 2020-09-17 09:48:27.929089. Event: ROLLBACK Time: 2020-09-17 09:48:27.932310. Event: INSERT INTO container_post_deployments (image, `clientId`, message, timestamp, status_code, something, user_agent) VALUES (...) Time: 2020-09-17 09:48:27.934410. Event: COMMIT Time: 2020-09-17 09:48:27.936774. Event: ROLLBACK Time: 2020-09-17 09:48:28.140219. Event: DESCRIBE `cc_influx_share_count` Time: 2020-09-17 09:48:28.163517. Event: DESCRIBE `cc_influx_we_count` Time: 2020-09-17 09:48:28.166070. Event: ROLLBACK Time: 2020-09-17 09:48:28.168159. Event: DESCRIBE `cc_influx_share_count` Time: 2020-09-17 09:48:28.169895. Event: ROLLBACK Time: 2020-09-17 09:48:28.170583. Event: DESCRIBE `cc_influx_we_count` Time: 2020-09-17 09:48:28.174444. Event: ROLLBACK Time: 2020-09-17 09:48:28.176339. Event: SHOW FULL TABLES FROM `chil_etl` Time: 2020-09-17 09:48:28.177915. Event: ROLLBACK Time: 2020-09-17 09:48:28.179331. Event: SHOW FULL TABLES FROM `chil_etl` Time: 2020-09-17 09:48:28.182284. Event: SHOW CREATE TABLE `cc_influx_share_count` Time: 2020-09-17 09:48:28.185154. Event: ROLLBACK Time: 2020-09-17 09:48:28.192493. Event: SHOW CREATE TABLE `cc_influx_we_count` Time: 2020-09-17 09:48:28.192887. Event: DROP TABLE cc_influx_share_count Time: 2020-09-17 09:48:28.194530. Event: ROLLBACK Time: 2020-09-17 09:48:28.195707. Event: DROP TABLE cc_influx_we_count Time: 2020-09-17 09:48:28.207712. 
Event: COMMIT Time: 2020-09-17 09:48:28.208141. Event: ROLLBACK Time: 2020-09-17 09:48:28.210087. Event: CREATE TABLE cc_influx_share_count ( unique_identifier TEXT, nfs_share_count FLOAT(53), smb_share_count FLOAT(53), s3_bucket_count FLOAT(53) ) Time: 2020-09-17 09:48:28.215350. Event: COMMIT Time: 2020-09-17 09:48:28.216115. Event: ROLLBACK Time: 2020-09-17 09:48:28.217996. Event: CREATE TABLE cc_influx_we_count ( unique_identifier TEXT, timestamp TIMESTAMP NULL, `ANF` FLOAT(53), `S3` FLOAT(53), `CVO` FLOAT(53) ) Time: 2020-09-17 09:48:28.240455. Event: ROLLBACK Time: 2020-09-17 09:48:28.240908. Event: ROLLBACK Time: 2020-09-17 09:48:28.244425. Event: COMMIT Time: 2020-09-17 09:48:28.244965. Event: ROLLBACK Time: 2020-09-17 09:48:28.249009. Event: INSERT INTO cc_influx_we_count (unique_identifier, timestamp, `ANF`, `S3`, `CVO`) VALUES (...) Time: 2020-09-17 09:48:28.253638. Event: COMMIT Time: 2020-09-17 09:48:28.256299. Event: ROLLBACK Time: 2020-09-17 09:48:28.525814. Event: DESCRIBE `cc_influx_disk_usage` Time: 2020-09-17 09:48:28.530211. Event: ROLLBACK Time: 2020-09-17 09:48:28.532392. Event: DESCRIBE `cc_influx_disk_usage` Time: 2020-09-17 09:48:28.539685. Event: ROLLBACK Time: 2020-09-17 09:48:28.541868. Event: SHOW FULL TABLES FROM `chil_etl` Time: 2020-09-17 09:48:28.560271. Event: SHOW CREATE TABLE `cc_influx_disk_usage` Time: 2020-09-17 09:48:28.565451. Event: ROLLBACK Time: 2020-09-17 09:48:28.569257. Event: DROP TABLE cc_influx_disk_usage Time: 2020-09-17 09:48:28.585562. Event: COMMIT Time: 2020-09-17 09:48:28.595193. Event: ROLLBACK Time: 2020-09-17 09:48:28.598230. Event: CREATE TABLE cc_influx_disk_usage ( unique_identifier TEXT, timestamp TIMESTAMP NULL, total_gb FLOAT(53), used_gb FLOAT(53) ) Time: 2020-09-17 09:48:28.619580. Event: COMMIT Time: 2020-09-17 09:48:28.620411. Event: ROLLBACK Time: 2020-09-17 09:48:28.625385. Event: INSERT INTO cc_influx_disk_usage (unique_identifier, timestamp, total_gb, used_gb) VALUES (....) Time: 2020-09-17 09:48:28.628706. Event: COMMIT Time: 2020-09-17 09:48:28.631955. Event: ROLLBACK Time: 2020-09-17 09:48:28.840143. Event: DESCRIBE `cc_influx_aws_subscription` Time: 2020-09-17 09:48:28.844303. Event: ROLLBACK Time: 2020-09-17 09:48:28.845637. Event: DESCRIBE `cc_influx_aws_subscription` Time: 2020-09-17 09:48:28.848076. Event: ROLLBACK Time: 2020-09-17 09:48:28.848646. Event: SHOW FULL TABLES FROM `chil_etl` Time: 2020-09-17 09:48:28.851165. Event: SHOW CREATE TABLE `cc_influx_aws_subscription` Time: 2020-09-17 09:48:28.852202. Event: ROLLBACK Time: 2020-09-17 09:48:28.852691. Event: DROP TABLE cc_influx_aws_subscription Time: 2020-09-17 09:48:28.861657. Event: COMMIT Time: 2020-09-17 09:48:28.862099. Event: ROLLBACK Time: 2020-09-17 09:48:28.863288. Event: CREATE TABLE cc_influx_aws_subscription ( unique_identifier TEXT, timestamp TIMESTAMP NULL, is_subscribed BIGINT ) Time: 2020-09-17 09:48:28.878554. Event: COMMIT Time: 2020-09-17 09:48:28.879113. Event: ROLLBACK Time: 2020-09-17 09:48:28.881054. Event: INSERT INTO cc_influx_aws_subscription (unique_identifier, timestamp, is_subscribed) VALUES (....) Time: 2020-09-17 09:48:28.882642. Event: COMMIT Time: 2020-09-17 09:48:28.884614. Event: ROLLBACK Time: 2020-09-17 09:48:28.918677. Event: DESCRIBE `hubspot_data` Time: 2020-09-17 09:48:28.922938. Event: ROLLBACK Time: 2020-09-17 09:48:28.923993. Event: DESCRIBE `hubspot_data` Time: 2020-09-17 09:48:28.928181. Event: ROLLBACK Time: 2020-09-17 09:48:28.928808. Event: SHOW FULL TABLES FROM `chil_etl` Time: 2020-09-17 09:48:28.931225. 
Event: SHOW CREATE TABLE `hubspot_data` Time: 2020-09-17 09:48:28.934269. Event: ROLLBACK Time: 2020-09-17 09:48:28.934851. Event: DROP TABLE hubspot_data Time: 2020-09-17 09:48:28.949309. Event: COMMIT Time: 2020-09-17 09:48:28.949778. Event: ROLLBACK Time: 2020-09-17 09:48:28.953829. Event: CREATE TABLE hubspot_data (...) Time: 2020-09-17 09:48:28.973177. Event: COMMIT Time: 2020-09-17 09:48:28.973652. Event: ROLLBACK This ETL is the only process running MySQL. I have read the documentation about why deadlocks occur but I can't understand how two different tables with no connection between them can cause a deadlock. I know I can simply run the load() method again until it succeeds but I want to understand why the deadlocks occur, and how to prevent them. MySQL version is 8.0.21. python 3.8.4. sqlalchemy 1.3.19. pandas 1.0.5. PyMySQL 0.10.1. A: If multiple connections try to INSERT or UPDATE to the same table concurrently, you can get deadlocks from contention in the tables' indexes. Your question says you perform your INSERTs from multiple threads. Performing INSERTs requires checking constraints such primary key uniqueness and foreign key validity, and then updating the indexes representing those constraints. So multiple concurrent updates have to lock indexes for reading, then lock them for writing. From your question it seems that MySQL sometimes gets into a deadlock situation (one thread locking indexes in a,b order and the other in b,a order). If different threads can INSERT rows to different tables concurrently and the tables are related to each other by foreign key constraints, it's relatively easy for index maintenance to tumble into deadlock situations. You may be able to remediate this by altering the tables you're populating to drop any indexes (except autoincrementing primary keys) before doing your load, and then recreating them afterward. Or, you can get rid of your concurrency and do the L of your ETL with just one thread. Because of all the index maintenance, threads don't help throughput as much as they intuitively should. Avoid running data definition language (CREATE TABLE, CREATE INDEX, etc) on multiple concurrent threads. Troubleshooting that stuff is more trouble than it's worth. Also, wrapping the INSERTs for each chunk of hundred or so rows in a transaction can help ETL throughput in astonishing ways. Before each trunk, say BEGIN TRANSACTION; After each chunk say COMMIT; Why does this help? Because COMMIT operations take time, and every operation that isn't in an explicit transaction has an implicit COMMIT right after it. A: A possible solution I found for this issue was a retry mechanism. If a deadlock occurs - sleep and try a few more times until success while keeping the DF in memory: class Query(abc.ABC): def __init__(self): self.engine = MysqlEngine.engine() .... .... def load(self, df: pd.DataFrame) -> None: for i in range(5): # If load fails due to a deadlock, try 4 more times try: df.to_sql( name=self.table, con=self.engine.connect(), if_exists="replace", index=False, ) return except sqlalchemy.exc.OperationalError as ex: if "1213" in repr(ex): logging.warning( "Failed to acquire lock for %s", self.__class__.__name__ ) sleep(1) The deadlocks still occur and you lose some performance, but it beats doing the entire Extrac - Transform all over again.
MySQL Deadlock when using DataFrame.to_sql in multithreaded environment
I have a multithreaded ETL process inside a docker container that looks like this simplified code: class Query(abc.ABC): def __init__(self): self.connection = sqlalchemy.create_engine(MYSQL_CONNECTION_STR) def load(self, df: pd.DataFrame) -> None: df.to_sql( name=self.table, con=self.connection, if_exists="replace", index=False, ) @abc.abstractmethod def transform(self, data: object) -> pd.DataFrame: pass @abc.abstractmethod def extract(self) -> object: pass # other methods... class ComplianceInfluxQuery(Query): # Implements abstract methods... load method is the same as Query class ALL_QUERIES = [ComplianceInfluxQuery("cc_influx_share_count"),ComplianceInfluxQuery("cc_influx_we_count")....] while True: with ThreadPoolExecutor(max_workers=8) as pool: for query in ALL_QUERIES: pool.submit(execute_etl, query) # execute_etl function calls extract, transform and load Many classes inherit from Query, with the same implementation for load() as shown in class Query which simply loads a pandas DataFrame object to an sql table, and replacing the table if it exists. All the classes run concurrently and load the results to MySQL database after they finish Extract() and Transform(). Every class loads a different table to the database. Quite often I get a deadlock from a random thread when the load() method is called: 2020-09-17 09:48:28,138 | INFO | root | query | 44 | 1 | ComplianceInfluxQuery.load() 2020-09-17 09:48:28,160 | INFO | root | query | 44 | 1 | ComplianceInfluxQuery.load() 2020-09-17 09:48:28,241 | ERROR | root | calculate | 124 | 1 | Failed to execute query ComplianceInfluxQuery Traceback (most recent call last): File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context self.dialect.do_execute( File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 593, in do_execute cursor.execute(statement, parameters) File "/usr/local/lib/python3.8/site-packages/pymysql/cursors.py", line 163, in execute result = self._query(query) File "/usr/local/lib/python3.8/site-packages/pymysql/cursors.py", line 321, in _query conn.query(q) File "/usr/local/lib/python3.8/site-packages/pymysql/connections.py", line 505, in query self._affected_rows = self._read_query_result(unbuffered=unbuffered) File "/usr/local/lib/python3.8/site-packages/pymysql/connections.py", line 724, in _read_query_result result.read() File "/usr/local/lib/python3.8/site-packages/pymysql/connections.py", line 1069, in read first_packet = self.connection._read_packet() File "/usr/local/lib/python3.8/site-packages/pymysql/connections.py", line 676, in _read_packet packet.raise_for_error() File "/usr/local/lib/python3.8/site-packages/pymysql/protocol.py", line 223, in raise_for_error err.raise_mysql_exception(self._data) File "/usr/local/lib/python3.8/site-packages/pymysql/err.py", line 107, in raise_mysql_exception raise errorclass(errno, errval) pymysql.err.OperationalError: (1213, 'Deadlock found when trying to get lock; try restarting transaction') The above exception was the direct cause of the following exception: Traceback (most recent call last): File "calculate.py", line 119, in execute_etl query.run() File "/chil_etl/query.py", line 45, in run self.load(df) File "/chil_etl/query.py", line 22, in load df.to_sql( File "/usr/local/lib/python3.8/site-packages/pandas/core/generic.py", line 2653, in to_sql sql.to_sql( File "/usr/local/lib/python3.8/site-packages/pandas/io/sql.py", line 512, in to_sql pandas_sql.to_sql( File 
"/usr/local/lib/python3.8/site-packages/pandas/io/sql.py", line 1316, in to_sql table.create() File "/usr/local/lib/python3.8/site-packages/pandas/io/sql.py", line 649, in create self._execute_create() File "/usr/local/lib/python3.8/site-packages/pandas/io/sql.py", line 641, in _execute_create self.table.create() File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/schema.py", line 927, in create bind._run_visitor(ddl.SchemaGenerator, self, checkfirst=checkfirst) File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2097, in _run_visitor conn._run_visitor(visitorcallable, element, **kwargs) File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1656, in _run_visitor visitorcallable(self.dialect, self, **kwargs).traverse_single(element) File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/visitors.py", line 145, in traverse_single return meth(obj, **kw) File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/ddl.py", line 827, in visit_table self.connection.execute( File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1011, in execute return meth(self, multiparams, params) File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/ddl.py", line 72, in _execute_on_connection return connection._execute_ddl(self, multiparams, params) File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1068, in _execute_ddl ret = self._execute_context( File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1316, in _execute_context self._handle_dbapi_exception( File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1510, in _handle_dbapi_exception util.raise_( File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_ raise exception File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context self.dialect.do_execute( File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 593, in do_execute cursor.execute(statement, parameters) File "/usr/local/lib/python3.8/site-packages/pymysql/cursors.py", line 163, in execute result = self._query(query) File "/usr/local/lib/python3.8/site-packages/pymysql/cursors.py", line 321, in _query conn.query(q) File "/usr/local/lib/python3.8/site-packages/pymysql/connections.py", line 505, in query self._affected_rows = self._read_query_result(unbuffered=unbuffered) File "/usr/local/lib/python3.8/site-packages/pymysql/connections.py", line 724, in _read_query_result result.read() File "/usr/local/lib/python3.8/site-packages/pymysql/connections.py", line 1069, in read first_packet = self.connection._read_packet() File "/usr/local/lib/python3.8/site-packages/pymysql/connections.py", line 676, in _read_packet packet.raise_for_error() File "/usr/local/lib/python3.8/site-packages/pymysql/protocol.py", line 223, in raise_for_error err.raise_mysql_exception(self._data) File "/usr/local/lib/python3.8/site-packages/pymysql/err.py", line 107, in raise_mysql_exception raise errorclass(errno, errval) sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction') [SQL: CREATE TABLE cc_influx_share_count ( unique_identifier TEXT, nfs_share_count FLOAT(53), smb_share_count FLOAT(53), s3_bucket_count FLOAT(53) ) ] (Background on this error at: http://sqlalche.me/e/13/e3q8) The log shows the load() method called by two threads at almost the same time. 
This could happen in all of the classes regardless of the data. I ran the command SHOW ENGINE INNODB STATUS and no deadlocks were listed there. I checked the general_log table to understand what happened during the deadlock better, but didn't notice anything useful apart from the fact that the thread which deadlocked did not inserted any values to the table cc_influx_share_count when (i think) it should have: The error was raised at 09:48:28,241 SELECT * FROM mysql.general_log WHERE event_time >= "2020-09-17 09:48:27" AND event_time <= "2020-09-17 09:48:29" ORDER BY event_time ASC; Output (sensitive data removed): Time: 2020-09-17 09:48:27.010747. Event: COMMIT Time: 2020-09-17 09:48:27.011075. Event: ROLLBACK Time: 2020-09-17 09:48:27.012042. Event: CREATE TABLE cc_influx_last_metric_time ( unique_identifier TEXT, timestamp TIMESTAMP NULL, uptime BIGINT ) Time: 2020-09-17 09:48:27.033973. Event: COMMIT Time: 2020-09-17 09:48:27.034327. Event: ROLLBACK Time: 2020-09-17 09:48:27.053837. Event: INSERT INTO cc_influx_last_metric_time (unique_identifier, timestamp, uptime) VALUES (...) Time: 2020-09-17 09:48:27.066930. Event: COMMIT Time: 2020-09-17 09:48:27.068657. Event: ROLLBACK Time: 2020-09-17 09:48:27.887579. Event: DESCRIBE `container_post_deployments` Time: 2020-09-17 09:48:27.889705. Event: ROLLBACK Time: 2020-09-17 09:48:27.890186. Event: DESCRIBE `container_post_deployments` Time: 2020-09-17 09:48:27.892125. Event: ROLLBACK Time: 2020-09-17 09:48:27.892619. Event: SHOW FULL TABLES FROM `chil_etl` Time: 2020-09-17 09:48:27.894964. Event: SHOW CREATE TABLE `container_post_deployments` Time: 2020-09-17 09:48:27.896491. Event: ROLLBACK Time: 2020-09-17 09:48:27.897097. Event: DROP TABLE container_post_deployments Time: 2020-09-17 09:48:27.907816. Event: COMMIT Time: 2020-09-17 09:48:27.908322. Event: ROLLBACK Time: 2020-09-17 09:48:27.909890. Event: CREATE TABLE container_post_deployments ( image TEXT, `clientId` TEXT, message TEXT, timestamp TIMESTAMP NULL, status_code BIGINT, something TEXT, user_agent TEXT ) Time: 2020-09-17 09:48:27.928665. Event: COMMIT Time: 2020-09-17 09:48:27.929089. Event: ROLLBACK Time: 2020-09-17 09:48:27.932310. Event: INSERT INTO container_post_deployments (image, `clientId`, message, timestamp, status_code, something, user_agent) VALUES (...) Time: 2020-09-17 09:48:27.934410. Event: COMMIT Time: 2020-09-17 09:48:27.936774. Event: ROLLBACK Time: 2020-09-17 09:48:28.140219. Event: DESCRIBE `cc_influx_share_count` Time: 2020-09-17 09:48:28.163517. Event: DESCRIBE `cc_influx_we_count` Time: 2020-09-17 09:48:28.166070. Event: ROLLBACK Time: 2020-09-17 09:48:28.168159. Event: DESCRIBE `cc_influx_share_count` Time: 2020-09-17 09:48:28.169895. Event: ROLLBACK Time: 2020-09-17 09:48:28.170583. Event: DESCRIBE `cc_influx_we_count` Time: 2020-09-17 09:48:28.174444. Event: ROLLBACK Time: 2020-09-17 09:48:28.176339. Event: SHOW FULL TABLES FROM `chil_etl` Time: 2020-09-17 09:48:28.177915. Event: ROLLBACK Time: 2020-09-17 09:48:28.179331. Event: SHOW FULL TABLES FROM `chil_etl` Time: 2020-09-17 09:48:28.182284. Event: SHOW CREATE TABLE `cc_influx_share_count` Time: 2020-09-17 09:48:28.185154. Event: ROLLBACK Time: 2020-09-17 09:48:28.192493. Event: SHOW CREATE TABLE `cc_influx_we_count` Time: 2020-09-17 09:48:28.192887. Event: DROP TABLE cc_influx_share_count Time: 2020-09-17 09:48:28.194530. Event: ROLLBACK Time: 2020-09-17 09:48:28.195707. Event: DROP TABLE cc_influx_we_count Time: 2020-09-17 09:48:28.207712. Event: COMMIT Time: 2020-09-17 09:48:28.208141. 
Event: ROLLBACK Time: 2020-09-17 09:48:28.210087. Event: CREATE TABLE cc_influx_share_count ( unique_identifier TEXT, nfs_share_count FLOAT(53), smb_share_count FLOAT(53), s3_bucket_count FLOAT(53) ) Time: 2020-09-17 09:48:28.215350. Event: COMMIT Time: 2020-09-17 09:48:28.216115. Event: ROLLBACK Time: 2020-09-17 09:48:28.217996. Event: CREATE TABLE cc_influx_we_count ( unique_identifier TEXT, timestamp TIMESTAMP NULL, `ANF` FLOAT(53), `S3` FLOAT(53), `CVO` FLOAT(53) ) Time: 2020-09-17 09:48:28.240455. Event: ROLLBACK Time: 2020-09-17 09:48:28.240908. Event: ROLLBACK Time: 2020-09-17 09:48:28.244425. Event: COMMIT Time: 2020-09-17 09:48:28.244965. Event: ROLLBACK Time: 2020-09-17 09:48:28.249009. Event: INSERT INTO cc_influx_we_count (unique_identifier, timestamp, `ANF`, `S3`, `CVO`) VALUES (...) Time: 2020-09-17 09:48:28.253638. Event: COMMIT Time: 2020-09-17 09:48:28.256299. Event: ROLLBACK Time: 2020-09-17 09:48:28.525814. Event: DESCRIBE `cc_influx_disk_usage` Time: 2020-09-17 09:48:28.530211. Event: ROLLBACK Time: 2020-09-17 09:48:28.532392. Event: DESCRIBE `cc_influx_disk_usage` Time: 2020-09-17 09:48:28.539685. Event: ROLLBACK Time: 2020-09-17 09:48:28.541868. Event: SHOW FULL TABLES FROM `chil_etl` Time: 2020-09-17 09:48:28.560271. Event: SHOW CREATE TABLE `cc_influx_disk_usage` Time: 2020-09-17 09:48:28.565451. Event: ROLLBACK Time: 2020-09-17 09:48:28.569257. Event: DROP TABLE cc_influx_disk_usage Time: 2020-09-17 09:48:28.585562. Event: COMMIT Time: 2020-09-17 09:48:28.595193. Event: ROLLBACK Time: 2020-09-17 09:48:28.598230. Event: CREATE TABLE cc_influx_disk_usage ( unique_identifier TEXT, timestamp TIMESTAMP NULL, total_gb FLOAT(53), used_gb FLOAT(53) ) Time: 2020-09-17 09:48:28.619580. Event: COMMIT Time: 2020-09-17 09:48:28.620411. Event: ROLLBACK Time: 2020-09-17 09:48:28.625385. Event: INSERT INTO cc_influx_disk_usage (unique_identifier, timestamp, total_gb, used_gb) VALUES (....) Time: 2020-09-17 09:48:28.628706. Event: COMMIT Time: 2020-09-17 09:48:28.631955. Event: ROLLBACK Time: 2020-09-17 09:48:28.840143. Event: DESCRIBE `cc_influx_aws_subscription` Time: 2020-09-17 09:48:28.844303. Event: ROLLBACK Time: 2020-09-17 09:48:28.845637. Event: DESCRIBE `cc_influx_aws_subscription` Time: 2020-09-17 09:48:28.848076. Event: ROLLBACK Time: 2020-09-17 09:48:28.848646. Event: SHOW FULL TABLES FROM `chil_etl` Time: 2020-09-17 09:48:28.851165. Event: SHOW CREATE TABLE `cc_influx_aws_subscription` Time: 2020-09-17 09:48:28.852202. Event: ROLLBACK Time: 2020-09-17 09:48:28.852691. Event: DROP TABLE cc_influx_aws_subscription Time: 2020-09-17 09:48:28.861657. Event: COMMIT Time: 2020-09-17 09:48:28.862099. Event: ROLLBACK Time: 2020-09-17 09:48:28.863288. Event: CREATE TABLE cc_influx_aws_subscription ( unique_identifier TEXT, timestamp TIMESTAMP NULL, is_subscribed BIGINT ) Time: 2020-09-17 09:48:28.878554. Event: COMMIT Time: 2020-09-17 09:48:28.879113. Event: ROLLBACK Time: 2020-09-17 09:48:28.881054. Event: INSERT INTO cc_influx_aws_subscription (unique_identifier, timestamp, is_subscribed) VALUES (....) Time: 2020-09-17 09:48:28.882642. Event: COMMIT Time: 2020-09-17 09:48:28.884614. Event: ROLLBACK Time: 2020-09-17 09:48:28.918677. Event: DESCRIBE `hubspot_data` Time: 2020-09-17 09:48:28.922938. Event: ROLLBACK Time: 2020-09-17 09:48:28.923993. Event: DESCRIBE `hubspot_data` Time: 2020-09-17 09:48:28.928181. Event: ROLLBACK Time: 2020-09-17 09:48:28.928808. Event: SHOW FULL TABLES FROM `chil_etl` Time: 2020-09-17 09:48:28.931225. 
Event: SHOW CREATE TABLE `hubspot_data` Time: 2020-09-17 09:48:28.934269. Event: ROLLBACK Time: 2020-09-17 09:48:28.934851. Event: DROP TABLE hubspot_data Time: 2020-09-17 09:48:28.949309. Event: COMMIT Time: 2020-09-17 09:48:28.949778. Event: ROLLBACK Time: 2020-09-17 09:48:28.953829. Event: CREATE TABLE hubspot_data (...) Time: 2020-09-17 09:48:28.973177. Event: COMMIT Time: 2020-09-17 09:48:28.973652. Event: ROLLBACK This ETL is the only process running MySQL. I have read the documentation about why deadlocks occur but I can't understand how two different tables with no connection between them can cause a deadlock. I know I can simply run the load() method again until it succeeds but I want to understand why the deadlocks occur, and how to prevent them. MySQL version is 8.0.21. python 3.8.4. sqlalchemy 1.3.19. pandas 1.0.5. PyMySQL 0.10.1.
[ "If multiple connections try to INSERT or UPDATE to the same table concurrently, you can get deadlocks from contention in the tables' indexes.\nYour question says you perform your INSERTs from multiple threads. Performing INSERTs requires checking constraints such primary key uniqueness and foreign key validity, and then updating the indexes representing those constraints. So multiple concurrent updates\n\nhave to lock indexes for reading, then\nlock them for writing.\n\nFrom your question it seems that MySQL sometimes gets into a deadlock situation (one thread locking indexes in a,b order and the other in b,a order). If different threads can INSERT rows to different tables concurrently and the tables are related to each other by foreign key constraints, it's relatively easy for index maintenance to tumble into deadlock situations.\nYou may be able to remediate this by altering the tables you're populating to drop any indexes (except autoincrementing primary keys) before doing your load, and then recreating them afterward.\nOr, you can get rid of your concurrency and do the L of your ETL with just one thread. Because of all the index maintenance, threads don't help throughput as much as they intuitively should.\nAvoid running data definition language (CREATE TABLE, CREATE INDEX, etc) on multiple concurrent threads. Troubleshooting that stuff is more trouble than it's worth.\nAlso, wrapping the INSERTs for each chunk of hundred or so rows in a transaction can help ETL throughput in astonishing ways. Before each trunk, say BEGIN TRANSACTION; After each chunk say COMMIT; Why does this help? Because COMMIT operations take time, and every operation that isn't in an explicit transaction has an implicit COMMIT right after it.\n", "A possible solution I found for this issue was a retry mechanism.\nIf a deadlock occurs - sleep and try a few more times until success while keeping the DF in memory:\nclass Query(abc.ABC):\n def __init__(self):\n self.engine = MysqlEngine.engine()\n\n ....\n ....\n\n def load(self, df: pd.DataFrame) -> None:\n for i in range(5): # If load fails due to a deadlock, try 4 more times\n try:\n df.to_sql(\n name=self.table,\n con=self.engine.connect(),\n if_exists=\"replace\",\n index=False,\n )\n return\n except sqlalchemy.exc.OperationalError as ex:\n if \"1213\" in repr(ex):\n logging.warning(\n \"Failed to acquire lock for %s\", self.__class__.__name__\n )\n sleep(1)\n\nThe deadlocks still occur and you lose some performance, but it beats doing the entire Extrac - Transform all over again.\n" ]
[ 1, 0 ]
[]
[]
[ "deadlock", "multithreading", "mysql", "python", "sqlalchemy" ]
stackoverflow_0063940226_deadlock_multithreading_mysql_python_sqlalchemy.txt
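A minimal sketch of the first answer's transaction advice applied to this loader — one shared engine, and an explicit transaction around each load so the chunked INSERTs commit together rather than one implicit COMMIT per statement. The connection URL and table name are placeholders, and the DDL that if_exists="replace" performs (DROP/CREATE) is still best kept off concurrent threads, per that answer:

import pandas as pd
import sqlalchemy

engine = sqlalchemy.create_engine("mysql+pymysql://user:pass@localhost/etl_db")  # placeholder URL

def load(df: pd.DataFrame, table: str) -> None:
    # engine.begin() hands out one connection wrapped in one explicit
    # transaction; every chunk written by to_sql commits together
    with engine.begin() as conn:
        df.to_sql(name=table, con=conn, if_exists="replace",
                  index=False, chunksize=1000)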
Q: Python progress bar and downloads I have a Python script that launches a URL that is a downloadable file. Is there some way to have Python display the download progress as oppose to launching the browser? A: I've just written a super simple (slightly hacky) approach to this for scraping PDFs off a certain site. Note, it only works correctly on Unix based systems (Linux, mac os) as PowerShell does not handle "\r": import sys import requests link = "http://indy/abcde1245" file_name = "download.data" with open(file_name, "wb") as f: print("Downloading %s" % file_name) response = requests.get(link, stream=True) total_length = response.headers.get('content-length') if total_length is None: # no content length header f.write(response.content) else: dl = 0 total_length = int(total_length) for data in response.iter_content(chunk_size=4096): dl += len(data) f.write(data) done = int(50 * dl / total_length) sys.stdout.write("\r[%s%s]" % ('=' * done, ' ' * (50-done)) ) sys.stdout.flush() It uses the requests library so you'll need to install that. This outputs something like the following into your console: >Downloading download.data >[=============                            ] The progress bar is 52 characters wide in the script (2 characters are simply the [] so 50 characters of progress). Each = represents 2% of the download. A: You can use the 'clint' package (written by the same author as 'requests') to add a simple progress bar to your downloads like this: from clint.textui import progress r = requests.get(url, stream=True) path = '/some/path/for/file.txt' with open(path, 'wb') as f: total_length = int(r.headers.get('content-length')) for chunk in progress.bar(r.iter_content(chunk_size=1024), expected_size=(total_length/1024) + 1): if chunk: f.write(chunk) f.flush() which will give you a dynamic output which will look like this: [################################] 5210/5210 - 00:00:01 It should work on multiple platforms as well! You can also change the bar to dots or a spinner with .dots and .mill instead of .bar. Enjoy! A: Python 3 with TQDM This is the suggested technique from the TQDM docs. import urllib.request from tqdm import tqdm class DownloadProgressBar(tqdm): def update_to(self, b=1, bsize=1, tsize=None): if tsize is not None: self.total = tsize self.update(b * bsize - self.n) def download_url(url, output_path): with DownloadProgressBar(unit='B', unit_scale=True, miniters=1, desc=url.split('/')[-1]) as t: urllib.request.urlretrieve(url, filename=output_path, reporthook=t.update_to) A: There is an answer with requests and tqdm. import requests from tqdm import tqdm def download(url: str, fname: str): resp = requests.get(url, stream=True) total = int(resp.headers.get('content-length', 0)) # Can also replace 'file' with a io.BytesIO object with open(fname, 'wb') as file, tqdm( desc=fname, total=total, unit='iB', unit_scale=True, unit_divisor=1024, ) as bar: for data in resp.iter_content(chunk_size=1024): size = file.write(data) bar.update(size) Gist: https://gist.github.com/yanqd0/c13ed29e29432e3cf3e7c38467f42f51 A: Another good option is wget: import wget wget.download('http://download.geonames.org/export/zip/US.zip') The output will look like this: 11% [........ ] 73728 / 633847 Source: https://medium.com/@petehouston/download-files-with-progress-in-python-96f14f6417a2 A: You can also use click. 
It has a good library for progress bar: import click with click.progressbar(length=total_size, label='Downloading files') as bar: for file in files: download(file) bar.update(file.size) A: Sorry for being late with an answer; just updated the tqdm docs: https://github.com/tqdm/tqdm/#hooks-and-callbacks Using urllib.urlretrieve and OOP: import urllib from tqdm.auto import tqdm class TqdmUpTo(tqdm): """Provides `update_to(n)` which uses `tqdm.update(delta_n)`.""" def update_to(self, b=1, bsize=1, tsize=None): """ b : Blocks transferred so far bsize : Size of each block tsize : Total size """ if tsize is not None: self.total = tsize self.update(b * bsize - self.n) # will also set self.n = b * bsize eg_link = "https://github.com/tqdm/tqdm/releases/download/v4.46.0/tqdm-4.46.0-py2.py3-none-any.whl" eg_file = eg_link.split('/')[-1] with TqdmUpTo(unit='B', unit_scale=True, unit_divisor=1024, miniters=1, desc=eg_file) as t: # all optional kwargs urllib.urlretrieve( eg_link, filename=eg_file, reporthook=t.update_to, data=None) t.total = t.n or using requests.get and file wrappers: import requests from tqdm.auto import tqdm eg_link = "https://github.com/tqdm/tqdm/releases/download/v4.46.0/tqdm-4.46.0-py2.py3-none-any.whl" eg_file = eg_link.split('/')[-1] response = requests.get(eg_link, stream=True) with tqdm.wrapattr(open(eg_file, "wb"), "write", miniters=1, total=int(response.headers.get('content-length', 0)), desc=eg_file) as fout: for chunk in response.iter_content(chunk_size=4096): fout.write(chunk) You could of course mix & match techniques. A: # Define Progress Bar function def print_progressbar(total, current, barsize=60): progress = int(current*barsize/total) completed = str(int(current*100/total)) + '%' print('[', chr(9608)*progress, ' ', completed, '.'*(barsize-progress), '] ', str(i)+'/'+str(total), sep='', end='\r', flush=True) # Sample Code total = 6000 barsize = 60 print_frequency = max(min(total//barsize, 100), 1) print("Start Task..", flush=True) for i in range(1, total+1): if i%print_frequency == 0 or i == 1: print_progressbar(total, i, barsize) print("\nFinished", flush=True) # Snapshot of Progress Bar : Below lines are for illustrations only. In command prompt you will see single progress bar showing incremental progress. [ 0%............................................................] 1/6000 [██████████ 16%..................................................] 1000/6000 [████████████████████ 33%........................................] 2000/6000 [██████████████████████████████ 50%..............................] 3000/6000 [████████████████████████████████████████ 66%....................] 4000/6000 [██████████████████████████████████████████████████ 83%..........] 5000/6000 [████████████████████████████████████████████████████████████ 100%] 6000/6000 A: The tqdm package now includes a function designed to handle exactly this type of situation: wrapattr. You just wrap an object's read (or write) attribute, and tqdm handles the rest. Here's a simple download function that puts it all together with requests: def download(url, filename): import functools import pathlib import shutil import requests import tqdm r = requests.get(url, stream=True, allow_redirects=True) if r.status_code != 200: r.raise_for_status() # Will only raise for 4xx codes, so... 
raise RuntimeError(f"Request to {url} returned status code {r.status_code}") file_size = int(r.headers.get('Content-Length', 0)) path = pathlib.Path(filename).expanduser().resolve() path.parent.mkdir(parents=True, exist_ok=True) desc = "(Unknown total file size)" if file_size == 0 else "" r.raw.read = functools.partial(r.raw.read, decode_content=True) # Decompress if needed with tqdm.tqdm.wrapattr(r.raw, "read", total=file_size, desc=desc) as r_raw: with path.open("wb") as f: shutil.copyfileobj(r_raw, f) return path A: Just some improvements of @rich-jones's answer import re import request from clint.textui import progress def get_filename(cd): """ Get filename from content-disposition """ if not cd: return None fname = re.findall('filename=(.+)', cd) if len(fname) == 0: return None return fname[0].replace('"', "") def stream_download_file(url, output, chunk_size=1024, session=None, verbose=False): if session: file = session.get(url, stream=True) else: file = requests.get(url, stream=True) file_name = get_filename(file.headers.get('content-disposition')) filepath = "{}/{}".format(output, file_name) if verbose: print ("Downloading {}".format(file_name)) with open(filepath, 'wb') as f: total_length = int(file.headers.get('content-length')) for chunk in progress.bar(file.iter_content(chunk_size=chunk_size), expected_size=(total_length/chunk_size) + 1): if chunk: f.write(chunk) f.flush() if verbose: print ("Finished") A: I come up with a solution that looks a bit nicer based on tqdm. My implementation is based on the answer of @Endophage. The effect: # import the download_file definition from the next cell first. >>> download_file(url, 'some_data.dat') Downloading some_data.dat. 7%|█▎ | 195.31MB/2.82GB: [00:04<01:02, 49.61MB/s] The implementation: import time import math import requests from tqdm import tqdm def download_file(url, filename, update_interval=500, chunk_size=4096): def memory2str(mem): sizes = ['B', 'KB', 'MB', 'GB', 'TB', 'PB'] power = int(math.log(mem, 1024)) size = sizes[power] for _ in range(power): mem /= 1024 if power > 0: return f'{mem:.2f}{size}' else: return f'{mem}{size}' with open(filename, 'wb') as f: response = requests.get(url, stream=True) total_length = response.headers.get('content-length') if total_length is None: f.write(response.content) else: print(f'Downloading {filename}.', flush=True) downloaded, total_length = 0, int(total_length) total_size = memory2str(total_length) bar_format = '{percentage:3.0f}%|{bar:20}| {desc} [{elapsed}<{remaining}' \ '{postfix}]' if update_interval * chunk_size * 100 >= total_length: update_interval = 1 with tqdm(total=total_length, bar_format=bar_format) as bar: counter = 0 now_time, now_size = time.time(), downloaded for data in response.iter_content(chunk_size=chunk_size): f.write(data) downloaded += len(data) counter += 1 bar.update(len(data)) if counter % update_interval == 0: ellapsed = time.time() - now_time runtime_downloaded = downloaded - now_size now_time, now_size = time.time(), downloaded cur_size = memory2str(downloaded) speed_size = memory2str(runtime_downloaded / ellapsed) bar.set_description(f'{cur_size}/{total_size}') bar.set_postfix_str(f'{speed_size}/s') counter = 0
Python progress bar and downloads
I have a Python script that launches a URL that is a downloadable file. Is there some way to have Python display the download progress as opposed to launching the browser?
[ "I've just written a super simple (slightly hacky) approach to this for scraping PDFs off a certain site. Note, it only works correctly on Unix based systems (Linux, mac os) as PowerShell does not handle \"\\r\":\nimport sys\nimport requests\n\nlink = \"http://indy/abcde1245\"\nfile_name = \"download.data\"\nwith open(file_name, \"wb\") as f:\n print(\"Downloading %s\" % file_name)\n response = requests.get(link, stream=True)\n total_length = response.headers.get('content-length')\n\n if total_length is None: # no content length header\n f.write(response.content)\n else:\n dl = 0\n total_length = int(total_length)\n for data in response.iter_content(chunk_size=4096):\n dl += len(data)\n f.write(data)\n done = int(50 * dl / total_length)\n sys.stdout.write(\"\\r[%s%s]\" % ('=' * done, ' ' * (50-done)) ) \n sys.stdout.flush()\n\nIt uses the requests library so you'll need to install that. This outputs something like the following into your console:\n\n>Downloading download.data\n>[=============                            ]\n\nThe progress bar is 52 characters wide in the script (2 characters are simply the [] so 50 characters of progress). Each = represents 2% of the download.\n", "You can use the 'clint' package (written by the same author as 'requests') to add a simple progress bar to your downloads like this:\nfrom clint.textui import progress\n\nr = requests.get(url, stream=True)\npath = '/some/path/for/file.txt'\nwith open(path, 'wb') as f:\n total_length = int(r.headers.get('content-length'))\n for chunk in progress.bar(r.iter_content(chunk_size=1024), expected_size=(total_length/1024) + 1): \n if chunk:\n f.write(chunk)\n f.flush()\n\nwhich will give you a dynamic output which will look like this:\n[################################] 5210/5210 - 00:00:01\n\nIt should work on multiple platforms as well! You can also change the bar to dots or a spinner with .dots and .mill instead of .bar.\nEnjoy!\n", "Python 3 with TQDM\nThis is the suggested technique from the TQDM docs.\nimport urllib.request\n\nfrom tqdm import tqdm\n\n\nclass DownloadProgressBar(tqdm):\n def update_to(self, b=1, bsize=1, tsize=None):\n if tsize is not None:\n self.total = tsize\n self.update(b * bsize - self.n)\n\n\ndef download_url(url, output_path):\n with DownloadProgressBar(unit='B', unit_scale=True,\n miniters=1, desc=url.split('/')[-1]) as t:\n urllib.request.urlretrieve(url, filename=output_path, reporthook=t.update_to)\n\n", "There is an answer with requests and tqdm.\nimport requests\nfrom tqdm import tqdm\n\n\ndef download(url: str, fname: str):\n resp = requests.get(url, stream=True)\n total = int(resp.headers.get('content-length', 0))\n # Can also replace 'file' with a io.BytesIO object\n with open(fname, 'wb') as file, tqdm(\n desc=fname,\n total=total,\n unit='iB',\n unit_scale=True,\n unit_divisor=1024,\n ) as bar:\n for data in resp.iter_content(chunk_size=1024):\n size = file.write(data)\n bar.update(size)\n\nGist: https://gist.github.com/yanqd0/c13ed29e29432e3cf3e7c38467f42f51\n", "Another good option is wget:\nimport wget\nwget.download('http://download.geonames.org/export/zip/US.zip')\n\nThe output will look like this:\n11% [........ ] 73728 / 633847\n\nSource: https://medium.com/@petehouston/download-files-with-progress-in-python-96f14f6417a2\n", "You can also use click. 
It has a good library for progress bar:\nimport click\n\nwith click.progressbar(length=total_size, label='Downloading files') as bar:\n for file in files:\n download(file)\n bar.update(file.size)\n\n", "Sorry for being late with an answer; just updated the tqdm docs:\nhttps://github.com/tqdm/tqdm/#hooks-and-callbacks\nUsing urllib.urlretrieve and OOP:\nimport urllib\nfrom tqdm.auto import tqdm\n\nclass TqdmUpTo(tqdm):\n \"\"\"Provides `update_to(n)` which uses `tqdm.update(delta_n)`.\"\"\"\n def update_to(self, b=1, bsize=1, tsize=None):\n \"\"\"\n b : Blocks transferred so far\n bsize : Size of each block\n tsize : Total size\n \"\"\"\n if tsize is not None:\n self.total = tsize\n self.update(b * bsize - self.n) # will also set self.n = b * bsize\n\neg_link = \"https://github.com/tqdm/tqdm/releases/download/v4.46.0/tqdm-4.46.0-py2.py3-none-any.whl\"\neg_file = eg_link.split('/')[-1]\nwith TqdmUpTo(unit='B', unit_scale=True, unit_divisor=1024, miniters=1,\n desc=eg_file) as t: # all optional kwargs\n urllib.urlretrieve(\n eg_link, filename=eg_file, reporthook=t.update_to, data=None)\n t.total = t.n\n\nor using requests.get and file wrappers:\nimport requests\nfrom tqdm.auto import tqdm\n\neg_link = \"https://github.com/tqdm/tqdm/releases/download/v4.46.0/tqdm-4.46.0-py2.py3-none-any.whl\"\neg_file = eg_link.split('/')[-1]\nresponse = requests.get(eg_link, stream=True)\nwith tqdm.wrapattr(open(eg_file, \"wb\"), \"write\", miniters=1,\n total=int(response.headers.get('content-length', 0)),\n desc=eg_file) as fout:\n for chunk in response.iter_content(chunk_size=4096):\n fout.write(chunk)\n\nYou could of course mix & match techniques.\n", "# Define Progress Bar function\ndef print_progressbar(total, current, barsize=60):\n progress = int(current*barsize/total)\n completed = str(int(current*100/total)) + '%'\n print('[', chr(9608)*progress, ' ', completed, '.'*(barsize-progress), '] ', str(i)+'/'+str(total), sep='', end='\\r', flush=True)\n\n# Sample Code\ntotal = 6000\nbarsize = 60\nprint_frequency = max(min(total//barsize, 100), 1)\nprint(\"Start Task..\", flush=True)\nfor i in range(1, total+1):\n if i%print_frequency == 0 or i == 1:\n print_progressbar(total, i, barsize)\nprint(\"\\nFinished\", flush=True)\n\n# Snapshot of Progress Bar :\nBelow lines are for illustrations only. In command prompt you will see single progress bar showing incremental progress.\n[ 0%............................................................] 1/6000\n\n[██████████ 16%..................................................] 1000/6000\n\n[████████████████████ 33%........................................] 2000/6000\n\n[██████████████████████████████ 50%..............................] 3000/6000\n\n[████████████████████████████████████████ 66%....................] 4000/6000\n\n[██████████████████████████████████████████████████ 83%..........] 5000/6000\n\n[████████████████████████████████████████████████████████████ 100%] 6000/6000\n\n", "The tqdm package now includes a function designed to handle exactly this type of situation: wrapattr. You just wrap an object's read (or write) attribute, and tqdm handles the rest. 
Here's a simple download function that puts it all together with requests:\ndef download(url, filename):\n import functools\n import pathlib\n import shutil\n import requests\n import tqdm\n \n r = requests.get(url, stream=True, allow_redirects=True)\n if r.status_code != 200:\n r.raise_for_status() # Will only raise for 4xx codes, so...\n raise RuntimeError(f\"Request to {url} returned status code {r.status_code}\")\n file_size = int(r.headers.get('Content-Length', 0))\n\n path = pathlib.Path(filename).expanduser().resolve()\n path.parent.mkdir(parents=True, exist_ok=True)\n\n desc = \"(Unknown total file size)\" if file_size == 0 else \"\"\n r.raw.read = functools.partial(r.raw.read, decode_content=True) # Decompress if needed\n with tqdm.tqdm.wrapattr(r.raw, \"read\", total=file_size, desc=desc) as r_raw:\n with path.open(\"wb\") as f:\n shutil.copyfileobj(r_raw, f)\n\n return path\n\n", "Just some improvements of @rich-jones's answer\n import re\n import request\n from clint.textui import progress\n\n def get_filename(cd):\n \"\"\"\n Get filename from content-disposition\n \"\"\"\n if not cd:\n return None\n fname = re.findall('filename=(.+)', cd)\n if len(fname) == 0:\n return None\n return fname[0].replace('\"', \"\")\n\ndef stream_download_file(url, output, chunk_size=1024, session=None, verbose=False):\n \n if session:\n file = session.get(url, stream=True)\n else:\n file = requests.get(url, stream=True)\n \n file_name = get_filename(file.headers.get('content-disposition'))\n filepath = \"{}/{}\".format(output, file_name)\n \n if verbose: \n print (\"Downloading {}\".format(file_name))\n \n with open(filepath, 'wb') as f:\n total_length = int(file.headers.get('content-length'))\n for chunk in progress.bar(file.iter_content(chunk_size=chunk_size), expected_size=(total_length/chunk_size) + 1): \n if chunk:\n f.write(chunk)\n f.flush()\n if verbose: \n print (\"Finished\")\n\n", "I come up with a solution that looks a bit nicer based on tqdm. 
My implementation is based on the answer of @Endophage.\nThe effect:\n# import the download_file definition from the next cell first.\n>>> download_file(url, 'some_data.dat')\nDownloading some_data.dat.\n 7%|█▎ | 195.31MB/2.82GB: [00:04<01:02, 49.61MB/s]\n\nThe implementation:\nimport time\nimport math\nimport requests\nfrom tqdm import tqdm\n\n\ndef download_file(url, filename, update_interval=500, chunk_size=4096):\n def memory2str(mem):\n sizes = ['B', 'KB', 'MB', 'GB', 'TB', 'PB']\n power = int(math.log(mem, 1024))\n size = sizes[power]\n for _ in range(power):\n mem /= 1024\n if power > 0:\n return f'{mem:.2f}{size}'\n else:\n return f'{mem}{size}'\n with open(filename, 'wb') as f:\n response = requests.get(url, stream=True)\n total_length = response.headers.get('content-length')\n if total_length is None:\n f.write(response.content)\n else:\n print(f'Downloading {filename}.', flush=True)\n downloaded, total_length = 0, int(total_length)\n total_size = memory2str(total_length)\n bar_format = '{percentage:3.0f}%|{bar:20}| {desc} [{elapsed}<{remaining}' \\\n '{postfix}]'\n if update_interval * chunk_size * 100 >= total_length:\n update_interval = 1\n with tqdm(total=total_length, bar_format=bar_format) as bar:\n counter = 0\n now_time, now_size = time.time(), downloaded\n for data in response.iter_content(chunk_size=chunk_size):\n f.write(data)\n downloaded += len(data)\n counter += 1\n bar.update(len(data))\n if counter % update_interval == 0:\n ellapsed = time.time() - now_time\n runtime_downloaded = downloaded - now_size\n now_time, now_size = time.time(), downloaded\n\n cur_size = memory2str(downloaded)\n speed_size = memory2str(runtime_downloaded / ellapsed)\n bar.set_description(f'{cur_size}/{total_size}')\n bar.set_postfix_str(f'{speed_size}/s')\n\n counter = 0\n\n" ]
[ 147, 79, 55, 27, 10, 7, 6, 4, 4, 0, 0 ]
[ "You can stream a downloads as it is here -> Stream a Download.\nAlso you can Stream Uploads.\nThe most important streaming a request is done unless you try to access the response.content\nwith just 2 lines\nfor line in r.iter_lines(): \n if line:\n print(line)\n\nStream Requests\n" ]
[ -1 ]
[ "download", "progress_bar", "python" ]
stackoverflow_0015644964_download_progress_bar_python.txt
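One more variant that the answers above do not cover, using the rich library's progress bar together with requests; the URL and filename are placeholders.

import requests
from rich.progress import Progress

url = "https://example.com/big-file.bin"  # placeholder URL
filename = "big-file.bin"

with requests.get(url, stream=True) as resp:
    resp.raise_for_status()
    total = int(resp.headers.get("content-length", 0))
    with Progress() as progress:
        # total=None gives an indeterminate bar when the server sends no content-length.
        task = progress.add_task(f"Downloading {filename}", total=total or None)
        with open(filename, "wb") as f:
            for chunk in resp.iter_content(chunk_size=8192):
                f.write(chunk)
                progress.update(task, advance=len(chunk))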
Q: ImportError: cannot import name 'docevents' from 'botocore.docs.bcdoc' in AWS CodeBuild ImportError: cannot import name 'docevents' from 'botocore.docs.bcdoc' (/python3.7/site-packages/botocore/docs/bcdoc/init.py) Traceback (most recent call last): File "/root/.pyenv/versions/3.7.6/bin/aws", line 19, in <module> import awscli.clidriver File "/root/.pyenv/versions/3.7.6/lib/python3.7/site-packages/awscli/clidriver.py", line 36, in <module> from awscli.help import ProviderHelpCommand File "/root/.pyenv/versions/3.7.6/lib/python3.7/site-packages/awscli/help.py", line 23, in <module> from botocore.docs.bcdoc import docevents ImportError: cannot import name 'docevents' from 'botocore.docs.bcdoc' (/root/.pyenv/versions/3.7.6/lib/python3.7/site-packages/botocore/docs/bcdoc/__init__.py) [Container] 2020/10/29 16:48:39 Command did not exit successfully aws --version exit status 1 The failure occurs in the PRE_BUILD. And this is my spec build file: buildspec-cd.yml pre_build: commands: - AWS_REGION=${AWS_DEFAULT_REGION} - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7) - IMAGE_VERSION=${COMMIT_HASH} - REPOSITORY_URI=${CONTAINER_REGISTRY}/${APPLICATION_NAME} - aws --version - echo Logging in to Amazon ECR... - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email) The codebuild was working correctly and nothing has been changed. Only stopped working. A: Reading this GitHub issue #2596. i fixed my error. Just before the PRE_BUILD section, I added this line to my buildspec-cd.yml file: pip3 install --upgrade awscli install: commands: - pip3 install awsebcli --upgrade - eb --version - pip3 install --upgrade awscli pre_build: commands: - AWS_REGION=${AWS_DEFAULT_REGION} - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7) - IMAGE_VERSION=${COMMIT_HASH} ... A: For me it's a version issue. So, I fixed it with below versions: aws-cli/1.18.105 Command: sudo python3 -m pip3 install awscli==1.18.105 botocore/1.17.28 Command: sudo python3 -m pip3 install botocore==1.17.28 A: In my case this error occurs running the command 'aws --version' on ubuntu 20.04. And the solution was: python3 -m pip install --upgrade pip python3 -m pip uninstall awscli python3 -m pip install awscli A: Was getting the same error on Ubuntu 20.04, the answer from @vijay rajput did not work at the beginning, fixed by replacing pip3 with pip - sudo python3 -m pip install awscli==1.18.105 and sudo python3 -m pip install botocore==1.17.28 Thx A: pip uninstall botocore worked for me A: I also got this error on Ubuntu 20.04 (though not in CodeBuild, I was running aws lambda invoke). Installing the AWS CLI v2 (https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html) worked for me: curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" unzip awscliv2.zip sudo ./aws/install A: Upgrade pip (not necessary but it's better to do otherwise it will throw a Warning message while running the second command.) python3 -m pip install --upgrade pip Upgrade awscli (Necessary) pip3 install --upgrade awscli Add sudo in both commands if required to have root user permissions.
ImportError: cannot import name 'docevents' from 'botocore.docs.bcdoc' in AWS CodeBuild
ImportError: cannot import name 'docevents' from 'botocore.docs.bcdoc' (/python3.7/site-packages/botocore/docs/bcdoc/init.py) Traceback (most recent call last): File "/root/.pyenv/versions/3.7.6/bin/aws", line 19, in <module> import awscli.clidriver File "/root/.pyenv/versions/3.7.6/lib/python3.7/site-packages/awscli/clidriver.py", line 36, in <module> from awscli.help import ProviderHelpCommand File "/root/.pyenv/versions/3.7.6/lib/python3.7/site-packages/awscli/help.py", line 23, in <module> from botocore.docs.bcdoc import docevents ImportError: cannot import name 'docevents' from 'botocore.docs.bcdoc' (/root/.pyenv/versions/3.7.6/lib/python3.7/site-packages/botocore/docs/bcdoc/__init__.py) [Container] 2020/10/29 16:48:39 Command did not exit successfully aws --version exit status 1 The failure occurs in the PRE_BUILD. And this is my spec build file: buildspec-cd.yml pre_build: commands: - AWS_REGION=${AWS_DEFAULT_REGION} - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7) - IMAGE_VERSION=${COMMIT_HASH} - REPOSITORY_URI=${CONTAINER_REGISTRY}/${APPLICATION_NAME} - aws --version - echo Logging in to Amazon ECR... - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email) The codebuild was working correctly and nothing has been changed. Only stopped working.
[ "Reading this GitHub issue #2596. i fixed my error.\nJust before the PRE_BUILD section, I added this line to my buildspec-cd.yml file:\npip3 install --upgrade awscli\ninstall:\n commands:\n - pip3 install awsebcli --upgrade\n - eb --version\n - pip3 install --upgrade awscli\n\n pre_build:\n commands:\n - AWS_REGION=${AWS_DEFAULT_REGION}\n - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)\n - IMAGE_VERSION=${COMMIT_HASH}\n ...\n\n", "For me it's a version issue. So, I fixed it with below versions:\n\naws-cli/1.18.105\n\nCommand: sudo python3 -m pip3 install awscli==1.18.105\n\nbotocore/1.17.28\n\nCommand: sudo python3 -m pip3 install botocore==1.17.28\n", "In my case this error occurs running the command 'aws --version' on ubuntu 20.04.\nAnd the solution was:\npython3 -m pip install --upgrade pip\npython3 -m pip uninstall awscli\npython3 -m pip install awscli\n\n", "Was getting the same error on Ubuntu 20.04, the answer from @vijay rajput did not work at the beginning, fixed by replacing pip3 with pip - sudo python3 -m pip install awscli==1.18.105 and sudo python3 -m pip install botocore==1.17.28\nThx\n", "pip uninstall botocore\nworked for me\n", "I also got this error on Ubuntu 20.04 (though not in CodeBuild, I was running aws lambda invoke). Installing the AWS CLI v2 (https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html) worked for me:\ncurl \"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip\" -o \"awscliv2.zip\"\nunzip awscliv2.zip\nsudo ./aws/install\n\n", "Upgrade pip (not necessary but it's better to do otherwise it will throw a Warning message while running the second command.)\npython3 -m pip install --upgrade pip\nUpgrade awscli (Necessary)\npip3 install --upgrade awscli\nAdd sudo in both commands if required to have root user permissions.\n" ]
[ 146, 14, 7, 3, 1, 0, 0 ]
[]
[]
[ "amazon_web_services", "aws_codebuild", "docker", "python" ]
stackoverflow_0064596394_amazon_web_services_aws_codebuild_docker_python.txt
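The fixes above all come down to aligning the awscli and botocore versions. A small sketch of how that check and the suggested upgrade could be scripted; it uses importlib.metadata (Python 3.8+) and is only an illustration, not part of the original buildspec.

import subprocess
import sys
from importlib.metadata import PackageNotFoundError, version

# Show which versions are currently installed.
for pkg in ("awscli", "botocore"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "is not installed")

# Scripted equivalent of the `pip3 install --upgrade awscli` step added to buildspec-cd.yml.
subprocess.check_call([sys.executable, "-m", "pip", "install", "--upgrade", "awscli"])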
Q: Save the loop output into csv file I want to save the loop result into a csv file or dataframe; the below code just writes the tweets to the console. j =1 sortedDF = tweets_df.sort_values(by = ['Polarity']) for i in range (0, sortedDF.shape[0]): if(sortedDF['Analysis'][i] == 'Positive'): print(str(j)+')'+ sortedDF['transalted'][i]) print() j = j+1 A: with open("some.csv", "w") as f: j = 1 sortedDF = tweets_df.sort_values(by=['Polarity']) for i in range(0, sortedDF.shape[0]): if (sortedDF['Analysis'][i] == 'Positive'): f.write(str(j) + ')' + sortedDF['transalted'][i]) print() j = j + 1 A: We can call writelines at the end instead of write in for loop which is more optimized solution. sortedDF = tweets_df.sort_values(by=['Polarity']) file = open("positive_tweets.csv", "w") lines = [] j = 1 for i in range(0, sortedDF.shape[0]): if (sortedDF['Analysis'][i] == 'Positive'): lines.append(str(j) + ')' + sortedDF['transalted'][i]) j += 1 file.writelines(lines) file.close()
Save the loop output into csv file
I want to save the loop result into a csv file or dataframe; the below code just writes the tweets to the console. j =1 sortedDF = tweets_df.sort_values(by = ['Polarity']) for i in range (0, sortedDF.shape[0]): if(sortedDF['Analysis'][i] == 'Positive'): print(str(j)+')'+ sortedDF['transalted'][i]) print() j = j+1
[ "with open(\"some.csv\", \"w\") as f:\n    j = 1\n    sortedDF = tweets_df.sort_values(by=['Polarity'])\n    for i in range(0, sortedDF.shape[0]):\n        if (sortedDF['Analysis'][i] == 'Positive'):\n            f.write(str(j) + ')' + sortedDF['transalted'][i])\n            print()\n            j = j + 1\n\n", "We can call writelines at the end instead of calling write inside the for loop, which is a more optimized solution.\nsortedDF = tweets_df.sort_values(by=['Polarity'])\nfile = open(\"positive_tweets.csv\", \"w\")\nlines = []\nj = 1\nfor i in range(0, sortedDF.shape[0]):\n    if (sortedDF['Analysis'][i] == 'Positive'):\n        lines.append(str(j) + ')' + sortedDF['transalted'][i])\n        j += 1\nfile.writelines(lines)\nfile.close()\n\n" ]
[ 0, 0 ]
[]
[]
[ "export_to_csv", "python" ]
stackoverflow_0074597750_export_to_csv_python.txt
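A pandas-only alternative to the loop in the question that writes the positive rows straight to CSV; the tiny DataFrame below is a stand-in, and the column names (including the 'transalted' spelling) are assumed to match the question's data.

import pandas as pd

# Stand-in for the question's tweets_df.
tweets_df = pd.DataFrame({
    "transalted": ["great product", "bad service", "love it"],
    "Polarity": [0.8, -0.5, 0.9],
    "Analysis": ["Positive", "Negative", "Positive"],
})

sortedDF = tweets_df.sort_values(by=["Polarity"])
positive = sortedDF.loc[sortedDF["Analysis"] == "Positive", ["transalted"]]
positive.insert(0, "rank", list(range(1, len(positive) + 1)))  # mirrors the j counter in the question
positive.to_csv("positive_tweets.csv", index=False)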
Q: Confused by convention in paho-mqtt for asigning a built in method as a function that i create (but without arguments) I am trying to get my head around the paho-MQTT library. I am struggling to understand what is clearly a convention in the coding of the library, but which doesn't make sense to me. I am happy to look this up, if someone can give me the topic I should be looking for. A lot of paho-mqtt tutorials and PAHO Foundation page (https://www.eclipse.org/paho/index.php?page=clients/python/docs/index.php#constructor-reinitialise) talk about handling the on_message, and on_connect methods (or at least what i think are methods) for the Client object. The tutorials all provide a standard way to engage with these methods, but in a way that I can't understand. It goes as follows: Define a function that takes a set number of arguments. Something like: def on_message(client, userdata, message): print(message.payload, 'on', message.topic) The process then is to create an 'mqtt Client' object and connect it to the broker. After that, to see the messages that client is subscribed to, i do the following: client.on_message = on_message This is this part that I don't understand. I understand this as meaning I am assigning a method the value of a function (but without calling its arguments or indicating its a function). I would have thought that client.on_message would have returned a 3 tupple, that I would have accessed via the function above, as follows: on_message(client.on_message) When I call type(client.on_message) I get NoneType back, indicating that mqtt.Client.on_message doesn't return anything. This explains why i can't call my function on the method. Perhaps this is just a syntax issue, but could someone explain the convention here (or tell me what i should be looking up). Is Client.on_message a method of the class mqtt.Client? and how am I assigning it the value of a function I defined without providing any arguments, despite specifying arguments when I defined them? Further, how am I assigning the function without indicating the parenthesis (on_message())? Below is a full bit of working code for paho-mqtt: #import the library import paho.mqtt.client as mqtt #Write the function to get the payload content (i.e. the text) from the message object def on_message(client, userdata, message): print('Recieved message', str(message.payload)) #create the mqtt object and connect to the broker MQTT_BROKER = [broker-IP] client = mqtt.Client('Client1') client.connect(MQTT_BROKER) #subscribe to the topic client.subscribe('TEST_TOPIC') #Somehow invoke the function defined above on the mqtt on.message method - i.e. what i don't understand client.on_message = on_message #Do this continually so i can keep looking for messages published on this topic client.loop_forever() The problem I am having is that i still can't see messages in the console, on the topic to which I am subscribed. I know these are being published elsewhere (on another client) because I can see them on the broker when running mosquitto_sub -t 'TEST_TOPIC'. At the moment I am just trying to understand the convention so that I can troubleshoot. A: You will never explicitly call any of the callback functions. The client will call them from it's event loop at the appropriate time. client.on_message = on_message is how yo tell the client which function to call (in the future) when a message arrives. Passing a function name with no arguments passes a handle to the function it's self. 
Similar with the on_connect function, this is a function that the client will call once it has completed connecting to the broker. I have re-arranged your code to ensure things are done in the right order using the on_connect callback to subscribe to the topic once it has finished connecting. #import the library import paho.mqtt.client as mqtt #Write the function to get the payload content (i.e. the text) from the message object def on_message(client, userdata, message): print('Recieved message', str(message.payload)) def on_connect(client, userdata, flags, rc): #subscribe to the topic client.subscribe('TEST_TOPIC') #create the mqtt object and connect to the broker MQTT_BROKER = [broker-IP] client = mqtt.Client('Client1') client.on_connect = on_connect client.on_message = on_message client.connect(MQTT_BROKER) #Do this continually so i can keep looking for messages published on this topic client.loop_forever()
Confused by convention in paho-mqtt for assigning a built-in method as a function that I create (but without arguments)
I am trying to get my head around the paho-MQTT library. I am struggling to understand what is clearly a convention in the coding of the library, but which doesn't make sense to me. I am happy to look this up, if someone can give me the topic I should be looking for. A lot of paho-mqtt tutorials and PAHO Foundation page (https://www.eclipse.org/paho/index.php?page=clients/python/docs/index.php#constructor-reinitialise) talk about handling the on_message, and on_connect methods (or at least what i think are methods) for the Client object. The tutorials all provide a standard way to engage with these methods, but in a way that I can't understand. It goes as follows: Define a function that takes a set number of arguments. Something like: def on_message(client, userdata, message): print(message.payload, 'on', message.topic) The process then is to create an 'mqtt Client' object and connect it to the broker. After that, to see the messages that client is subscribed to, i do the following: client.on_message = on_message This is this part that I don't understand. I understand this as meaning I am assigning a method the value of a function (but without calling its arguments or indicating its a function). I would have thought that client.on_message would have returned a 3 tupple, that I would have accessed via the function above, as follows: on_message(client.on_message) When I call type(client.on_message) I get NoneType back, indicating that mqtt.Client.on_message doesn't return anything. This explains why i can't call my function on the method. Perhaps this is just a syntax issue, but could someone explain the convention here (or tell me what i should be looking up). Is Client.on_message a method of the class mqtt.Client? and how am I assigning it the value of a function I defined without providing any arguments, despite specifying arguments when I defined them? Further, how am I assigning the function without indicating the parenthesis (on_message())? Below is a full bit of working code for paho-mqtt: #import the library import paho.mqtt.client as mqtt #Write the function to get the payload content (i.e. the text) from the message object def on_message(client, userdata, message): print('Recieved message', str(message.payload)) #create the mqtt object and connect to the broker MQTT_BROKER = [broker-IP] client = mqtt.Client('Client1') client.connect(MQTT_BROKER) #subscribe to the topic client.subscribe('TEST_TOPIC') #Somehow invoke the function defined above on the mqtt on.message method - i.e. what i don't understand client.on_message = on_message #Do this continually so i can keep looking for messages published on this topic client.loop_forever() The problem I am having is that i still can't see messages in the console, on the topic to which I am subscribed. I know these are being published elsewhere (on another client) because I can see them on the broker when running mosquitto_sub -t 'TEST_TOPIC'. At the moment I am just trying to understand the convention so that I can troubleshoot.
[ "You will never explicitly call any of the callback functions. The client will call them from its event loop at the appropriate time.\nclient.on_message = on_message is how you tell the client which function to call (in the future) when a message arrives. Passing a function name with no arguments passes a handle to the function itself.\nSimilarly with the on_connect function, this is a function that the client will call once it has completed connecting to the broker.\nI have re-arranged your code to ensure things are done in the right order using the on_connect callback to subscribe to the topic once it has finished connecting.\n#import the library\nimport paho.mqtt.client as mqtt \n\n#Write the function to get the payload content (i.e. the text) from the message object\ndef on_message(client, userdata, message):\n    print('Received message', str(message.payload))\n\ndef on_connect(client, userdata, flags, rc):\n    #subscribe to the topic\n    client.subscribe('TEST_TOPIC')\n\n#create the mqtt object and connect to the broker\nMQTT_BROKER = [broker-IP]\nclient = mqtt.Client('Client1')\nclient.on_connect = on_connect\nclient.on_message = on_message\nclient.connect(MQTT_BROKER)\n\n#Do this continually so I can keep looking for messages published on this topic\nclient.loop_forever()\n\n" ]
[ 0 ]
[]
[]
[ "mqtt", "paho", "python" ]
stackoverflow_0074595606_mqtt_paho_python.txt
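A toy illustration of the convention the question asks about — assigning a function object (no parentheses) to an attribute so that other code can call it later with the right arguments. This is a simplified stand-in, not the real paho-mqtt internals.

class ToyClient:
    def __init__(self):
        self.on_message = None  # slot for a callback supplied by the user

    def _simulate_incoming(self, payload):
        # The client, not the user, calls the callback and supplies its arguments.
        if self.on_message is not None:
            self.on_message(self, None, payload)

def on_message(client, userdata, message):
    print("Received:", message)

client = ToyClient()
client.on_message = on_message       # store the function itself; it is not called here
client._simulate_incoming(b"hello")  # prints: Received: b'hello'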
Q: How to add a requirement.txt in my project python In fact When I do pip freeze > requirements.txt to put all the packages that I use in my project in a requirements.txt, it puts all python packages that I have in my pc and this despite I have activated my visual environment. In my project path I activated my venv then I did pip freeze > requirements.txt I had a requirement.txt with packages that had nothing to do with my project. I installed all my packages with pip. What did i did wrong. Any help would be welcome A: You may have inherited some global site packages when you created the venv. Likely ones you had pip installed while not in any venv. Try creating the venv using Windows: python -m venv (venv name here) --no-site-packages Linux: python3 -m venv (venv name here) --no-site-packages The no site packages argument tells python to ignore all global-site packages when creating the new venv. A: pip freeze is not a good option if you have chunk of installed packages, because pip freeze will include all unused packages and packages from different venv that has nothing to do with your current project in the requirements.txt. You should use $ pipreqs --encoding=utf8 <project-dir> which is supposed to generate requirements.txt based on used packages in the current project. Read pipreqs doc You first run the below pip command in terminal to install pipreqs incase you don't have it installed. pip install pipreqs After the installation run the following in terminal to generate your requirements.txt: $ pipreqs --encoding=utf8 [<path>] or pipreqs --encoding=utf8 [<path>] Note: Make sure you replace \ with \\ for your path in case you encounter FileNotFoundError: [Errno 2] No such file or directory:. Example: pipreqs --encoding=utf8 projects\working_directory output: FileNotFoundError: [Errno 2] No such file or directory: 'projectsworking_directory\\requirements.txt' Solved pipreqs --encoding=utf8 projects\\working_directory output: INFO: Successfully saved requirements file in projects\working_directory A: OK, let's start out with the idea that you have a mixed globals + local packages venv. And you want to only freeze what's in the virtualenv. You want to see what's in what pip freeze > local.global.txt deactivate pip freeze > global.txt OK, so now you have 2 lists. Let's make a BIG ASSUMPTION! All the packages your project REALLY need are in the local venv site_packages. This is probably wrong, but bear with me. So let's look at what's in what by using a file differ. diff global.txt local.global.txt I am going to filter this a bit. > means it's local+global, < means it's global only. > Django==3.2.14 > django-celery-results==2.4.0 > django-debug-toolbar==3.5.0 > django-redis-cache==3.0.1 > django-waffle==2.5.0 > django-webpack-loader==1.6.0 All this django stuff? It's only in my virtualenv. I want it in requirements.txt for this project. Now, let's look at what I don't see in this diff but is in BOTH global.txt and global.local.txt jupyter==1.0.0 jupyter-client==7.1.2 jupyter-console==6.4.0 jupyter-core==4.9.2 jupyterlab-pygments==0.1.2 jupyterlab-widgets==1.0.2 Yes, I can see jupyter from within my virtualenv. But I did not install it there, I am only seeing the global version. You don't want it. Put anything in the requirements.txt that comes with > # yes the 2 > are not a typo. diff global.txt local.global.txt | grep '>' > requirements.txt Then you need to clean up the leading > Sorry, these are some unix utilities being used here. 
Windows will have similar utilities in powershell or wsl (I think). If worse comes to worse you will have to build your requirements.txt manually, by only keeping lines that DONT exist in global.txt About that big assumption: This is where you create a new venv with only this requirements.txt and you try to run your application and/or its tests. Anything that it uses that you did not install locally in the virtualenv will cause errors. Figuring out stuff that you had in the virtualenv because you installed it there by mistake (my jupyter packages for example, if I had put it there) is harder to identify.
How to add a requirement.txt in my project python
When I do pip freeze > requirements.txt to put all the packages that I use in my project into a requirements.txt, it lists all the Python packages I have on my PC, even though I have activated my virtual environment. In my project path I activated my venv, then I did pip freeze > requirements.txt, and I got a requirements.txt with packages that had nothing to do with my project. I installed all my packages with pip. What did I do wrong? Any help would be welcome.
[ "You may have inherited some global site packages when you created the venv. Likely ones you had pip installed while not in any venv. Try creating the venv using\nWindows:\npython -m venv (venv name here) --no-site-packages\nLinux:\npython3 -m venv (venv name here) --no-site-packages\nThe no site packages argument tells python to ignore all global-site packages when creating the new venv.\n", "pip freeze is not a good option if you have chunk of installed packages, because pip freeze will include all unused packages and packages from different venv that has nothing to do with your current project in the requirements.txt.\nYou should use $ pipreqs --encoding=utf8 <project-dir> which is supposed to generate requirements.txt based on used packages in the current project.\nRead pipreqs doc\nYou first run the below pip command in terminal to install pipreqs incase you don't have it installed.\npip install pipreqs\n\nAfter the installation run the following in terminal to generate your requirements.txt:\n$ pipreqs --encoding=utf8 [<path>]\n\nor\npipreqs --encoding=utf8 [<path>]\n\nNote: Make sure you replace \\ with \\\\ for your path in case you encounter FileNotFoundError: [Errno 2] No such file or directory:.\nExample:\npipreqs --encoding=utf8 projects\\working_directory\n\noutput:\nFileNotFoundError: [Errno 2] No such file or directory: 'projectsworking_directory\\\\requirements.txt'\n\nSolved\npipreqs --encoding=utf8 projects\\\\working_directory\n\noutput:\nINFO: Successfully saved requirements file in projects\\working_directory\n\n", "OK, let's start out with the idea that you have a mixed globals + local packages venv. And you want to only freeze what's in the virtualenv.\nYou want to see what's in what\npip freeze > local.global.txt\n\ndeactivate\n\npip freeze > global.txt\n\nOK, so now you have 2 lists.\nLet's make a BIG ASSUMPTION!\nAll the packages your project REALLY need are in the local venv site_packages.\nThis is probably wrong, but bear with me.\nSo let's look at what's in what by using a file differ.\ndiff global.txt local.global.txt\n\nI am going to filter this a bit. > means it's local+global, < means it's global only.\n> Django==3.2.14\n> django-celery-results==2.4.0\n> django-debug-toolbar==3.5.0\n> django-redis-cache==3.0.1\n> django-waffle==2.5.0\n> django-webpack-loader==1.6.0\n\nAll this django stuff? It's only in my virtualenv. I want it in requirements.txt for this project.\nNow, let's look at what I don't see in this diff but is in BOTH global.txt and global.local.txt\njupyter==1.0.0\njupyter-client==7.1.2\njupyter-console==6.4.0\njupyter-core==4.9.2\njupyterlab-pygments==0.1.2\njupyterlab-widgets==1.0.2\n\nYes, I can see jupyter from within my virtualenv. But I did not install it there, I am only seeing the global version. You don't want it.\nPut anything in the requirements.txt that comes with >\n# yes the 2 > are not a typo.\ndiff global.txt local.global.txt | grep '>' > requirements.txt \n\nThen you need to clean up the leading >\nSorry, these are some unix utilities being used here. 
Windows will have similar utilities in powershell or wsl (I think).\nIf worse comes to worse you will have to build your requirements.txt manually, by only keeping lines that DONT exist in global.txt\nAbout that big assumption:\nThis is where you create a new venv with only this requirements.txt and you try to run your application and/or its tests.\nAnything that it uses that you did not install locally in the virtualenv will cause errors.\nFiguring out stuff that you had in the virtualenv because you installed it there by mistake (my jupyter packages for example, if I had put it there) is harder to identify.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "pip", "python", "python_venv", "requirements.txt" ]
stackoverflow_0074592548_pip_python_python_venv_requirements.txt.txt
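A small Python equivalent of the diff/grep step described in the last answer: keep only the packages that appear in the venv freeze but not in the global one. The two input filenames are the ones used in that answer.

from pathlib import Path

global_pkgs = set(Path("global.txt").read_text().splitlines())
venv_pkgs = set(Path("local.global.txt").read_text().splitlines())

# Packages that only exist inside the virtualenv become the project requirements.
only_in_venv = sorted(venv_pkgs - global_pkgs)
Path("requirements.txt").write_text("\n".join(only_in_venv) + "\n")
print(f"Wrote {len(only_in_venv)} packages to requirements.txt")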
Q: AttributeError: module 'h11' has no attribute 'Event' ` Already up to date. venv "C:\StableDiffusion\stable-diffusion-webui\venv\Scripts\Python.exe" Python 3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC v.1916 64 bit (AMD64)] Commit hash: 828438b4a190759807f9054932cae3a8b880ddf1 Installing requirements for Web UI Launching Web UI with arguments: Traceback (most recent call last): File "launch.py", line 251, in <module> start() File "launch.py", line 242, in start import webui File "C:\StableDiffusion\stable-diffusion-webui\webui.py", line 13, in <module> from modules import devices, sd_samplers, upscaler, extensions, localization File "C:\StableDiffusion\stable-diffusion-webui\modules\sd_samplers.py", line 11, in <module> from modules import prompt_parser, devices, processing, images File "C:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 15, in <module> import modules.sd_hijack File "C:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack.py", line 10, in <module> import modules.textual_inversion.textual_inversion File "C:\StableDiffusion\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 13, in <module> from modules import shared, devices, sd_hijack, processing, sd_models, images, sd_samplers File "C:\StableDiffusion\stable-diffusion-webui\modules\shared.py", line 8, in <module> import gradio as gr File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\__init__.py", line 3, in <module> import gradio.components as components File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\components.py", line 31, in <module> from gradio import media_data, processing_utils, utils File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\processing_utils.py", line 20, in <module> from gradio import encryptor, utils File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 35, in <module> import httpx File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpx\__init__.py", line 2, in <module> from ._api import delete, get, head, options, patch, post, put, request, stream File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpx\_api.py", line 4, in <module> from ._client import Client File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpx\_client.py", line 29, in <module> from ._transports.default import AsyncHTTPTransport, HTTPTransport File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpx\_transports\default.py", line 30, in <module> import httpcore File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpcore\__init__.py", line 1, in <module> from ._api import request, stream File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpcore\_api.py", line 5, in <module> from ._sync.connection_pool import ConnectionPool File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpcore\_sync\__init__.py", line 1, in <module> from .connection import HTTPConnection File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpcore\_sync\connection.py", line 13, in <module> from .http11 import HTTP11Connection File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpcore\_sync\http11.py", line 44, in <module> class HTTP11Connection(ConnectionInterface): File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpcore\_sync\http11.py", line 140, in HTTP11Connection self, 
event: h11.Event, timeout: Optional[float] = None AttributeError: module 'h11' has no attribute 'Event' Appuyez sur une touche pour continuer... ` Hello everyone I'm a newbie trying to use Stable Diffusion :'( I tried to launch it for the first time but it got stuck each time to this error, I honestly don't really know what i'm supposed to do right now. A: Seems you have to reinstall httpcore in version 0.15 pip install --force-reinstall httpcore==0.15 works as a temporary workaround until some other fix is found. That or just append it to your requirement.txt This comes from a recent update in httpcore and nothing related to this repository. Source: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/4833 A: Don't feel bad, for me it starts right up in windows. But I've tried several distros and installations since Aug and it has never run once. update:I kept banging away at it, and it finally worked! I just made this I ran it with "python launch.py" and every time an error popped up I searched for it and applied the recommended fix...good luck! A: I solved this issue by adding the below line in pipfile httpcore = "<=0.15" that is the solution for who using pipenv
AttributeError: module 'h11' has no attribute 'Event'
` Already up to date. venv "C:\StableDiffusion\stable-diffusion-webui\venv\Scripts\Python.exe" Python 3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC v.1916 64 bit (AMD64)] Commit hash: 828438b4a190759807f9054932cae3a8b880ddf1 Installing requirements for Web UI Launching Web UI with arguments: Traceback (most recent call last): File "launch.py", line 251, in <module> start() File "launch.py", line 242, in start import webui File "C:\StableDiffusion\stable-diffusion-webui\webui.py", line 13, in <module> from modules import devices, sd_samplers, upscaler, extensions, localization File "C:\StableDiffusion\stable-diffusion-webui\modules\sd_samplers.py", line 11, in <module> from modules import prompt_parser, devices, processing, images File "C:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 15, in <module> import modules.sd_hijack File "C:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack.py", line 10, in <module> import modules.textual_inversion.textual_inversion File "C:\StableDiffusion\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 13, in <module> from modules import shared, devices, sd_hijack, processing, sd_models, images, sd_samplers File "C:\StableDiffusion\stable-diffusion-webui\modules\shared.py", line 8, in <module> import gradio as gr File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\__init__.py", line 3, in <module> import gradio.components as components File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\components.py", line 31, in <module> from gradio import media_data, processing_utils, utils File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\processing_utils.py", line 20, in <module> from gradio import encryptor, utils File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 35, in <module> import httpx File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpx\__init__.py", line 2, in <module> from ._api import delete, get, head, options, patch, post, put, request, stream File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpx\_api.py", line 4, in <module> from ._client import Client File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpx\_client.py", line 29, in <module> from ._transports.default import AsyncHTTPTransport, HTTPTransport File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpx\_transports\default.py", line 30, in <module> import httpcore File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpcore\__init__.py", line 1, in <module> from ._api import request, stream File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpcore\_api.py", line 5, in <module> from ._sync.connection_pool import ConnectionPool File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpcore\_sync\__init__.py", line 1, in <module> from .connection import HTTPConnection File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpcore\_sync\connection.py", line 13, in <module> from .http11 import HTTP11Connection File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpcore\_sync\http11.py", line 44, in <module> class HTTP11Connection(ConnectionInterface): File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\httpcore\_sync\http11.py", line 140, in HTTP11Connection self, event: h11.Event, timeout: Optional[float] = None 
AttributeError: module 'h11' has no attribute 'Event' Appuyez sur une touche pour continuer... ` Hello everyone, I'm a newbie trying to use Stable Diffusion :'( I tried to launch it for the first time, but it got stuck at this error each time, and I honestly don't really know what I'm supposed to do right now.
[ "Seems you have to reinstall httpcore in version 0.15\n\npip install --force-reinstall httpcore==0.15\nworks as a temporary workaround until some other fix is found.\nThat or just append it to your requirement.txt\nThis comes from a recent update in httpcore and nothing related to this repository.\n\nSource: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/4833\n", "Don't feel bad, for me it starts right up in windows. But I've tried several distros and installations since Aug and it has never run once.\nupdate:I kept banging away at it, and it finally worked!\nI just made this\nI ran it with \"python launch.py\" and every time an error popped up I searched for it and applied the recommended fix...good luck!\n", "I solved this issue by adding the below line in pipfile\nhttpcore = \"<=0.15\"\n\nthat is the solution for who using pipenv\n" ]
[ 0, 0, 0 ]
[]
[]
[ "attributeerror", "events", "python" ]
stackoverflow_0074578145_attributeerror_events_python.txt
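A quick, optional check of whether the installed httpcore is newer than the 0.15 pin suggested in the answers above; it assumes the packaging library is available and is only a diagnostic sketch.

from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version

try:
    installed = Version(version("httpcore"))
    if installed > Version("0.15.0"):
        print(f"httpcore {installed} found; the answers suggest: pip install --force-reinstall httpcore==0.15")
    else:
        print(f"httpcore {installed} is at or below the suggested pin")
except PackageNotFoundError:
    print("httpcore is not installed in this environment")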
Q: Looking to create a string that is joint by commas to parse a json array I have the below code, however I need the output to return with commas between the pair of curly brackets. i.e., {},{}. ` for i in range (0,Eqpt_List.shape[0]): EquipmentCode = Eqpt_List['assetitemindex'].iloc[i] TotalRate = Eqpt_List['hourlycostprice'].iloc[i] test1 = { "equipmentCode": str(EquipmentCode), "totalRate": (TotalRate), "operatingRate": 0, "ownershipRate": 0, "id": "ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092" } print(test1) ` The output is below.... {'equipmentCode': '1002', 'totalRate': 10.0, 'operatingRate': 0, 'ownershipRate': 0, 'id': 'ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092'} {'equipmentCode': '1006', 'totalRate': 10.0, 'operatingRate': 0, 'ownershipRate': 0, 'id': 'ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092'} {'equipmentCode': '1007', 'totalRate': 10.0, 'operatingRate': 0, 'ownershipRate': 0, 'id': 'ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092'} but I require it return with commas in between such as: {'equipmentCode': '1002', 'totalRate': 10.0, 'operatingRate': 0, 'ownershipRate': 0, 'id': 'ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092'}, {'equipmentCode': '1006', 'totalRate': 10.0, 'operatingRate': 0, 'ownershipRate': 0, 'id': 'ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092'}, {'equipmentCode': '1007', 'totalRate': 10.0, 'operatingRate': 0, 'ownershipRate': 0, 'id': 'ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092'} I have tried to use the ", ".join however i do not think that i was executing this method correctly as it returned errors. The reason i am doing the for loop is to be able to use the output in a nested jsons array to execute a put request into an API. Rather than loop through the whole block of code if i move the for loop to the start and reference the output then i think it will be more efficient... but am very new to this so bear with me. i want to insert the 'test1' output into the below code: for i in range (0,Eqpt_List.shape[0]): EquipmentCode = Eqpt_List['assetitemindex'].iloc[i] TotalRate = Eqpt_List['hourlycostprice'].iloc[i] hdr ={ 'Authorization': 'Bearer '+token, 'Content-Type': 'application/json' #'Cache-Control': 'no-cache', } # Request body data = { "businessUnitCode": "manager", "equipmentRates": [ (test1) ], "effectiveDate": (date), "rateSetGroupCode": "NIMBUS", "rateSetGroupDescription": "TEST NIMBUS - DO NOT USE", "id": "98437f72-839f-48f4-a9f9-6c96304d85f0" } data = json.dumps(data) #print(data) response = requests.put('https://api.hcssapps.com/setups/api/v1/RateSet/Equipment/ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092', headers=hdr, data=data) u=(response.content) print(response.status_code)
Looking to create a string that is joined by commas to parse a JSON array
I have the below code, however I need the output to return with commas between the pair of curly brackets. i.e., {},{}. ` for i in range (0,Eqpt_List.shape[0]): EquipmentCode = Eqpt_List['assetitemindex'].iloc[i] TotalRate = Eqpt_List['hourlycostprice'].iloc[i] test1 = { "equipmentCode": str(EquipmentCode), "totalRate": (TotalRate), "operatingRate": 0, "ownershipRate": 0, "id": "ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092" } print(test1) ` The output is below.... {'equipmentCode': '1002', 'totalRate': 10.0, 'operatingRate': 0, 'ownershipRate': 0, 'id': 'ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092'} {'equipmentCode': '1006', 'totalRate': 10.0, 'operatingRate': 0, 'ownershipRate': 0, 'id': 'ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092'} {'equipmentCode': '1007', 'totalRate': 10.0, 'operatingRate': 0, 'ownershipRate': 0, 'id': 'ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092'} but I require it return with commas in between such as: {'equipmentCode': '1002', 'totalRate': 10.0, 'operatingRate': 0, 'ownershipRate': 0, 'id': 'ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092'}, {'equipmentCode': '1006', 'totalRate': 10.0, 'operatingRate': 0, 'ownershipRate': 0, 'id': 'ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092'}, {'equipmentCode': '1007', 'totalRate': 10.0, 'operatingRate': 0, 'ownershipRate': 0, 'id': 'ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092'} I have tried to use the ", ".join however i do not think that i was executing this method correctly as it returned errors. The reason i am doing the for loop is to be able to use the output in a nested jsons array to execute a put request into an API. Rather than loop through the whole block of code if i move the for loop to the start and reference the output then i think it will be more efficient... but am very new to this so bear with me. i want to insert the 'test1' output into the below code: for i in range (0,Eqpt_List.shape[0]): EquipmentCode = Eqpt_List['assetitemindex'].iloc[i] TotalRate = Eqpt_List['hourlycostprice'].iloc[i] hdr ={ 'Authorization': 'Bearer '+token, 'Content-Type': 'application/json' #'Cache-Control': 'no-cache', } # Request body data = { "businessUnitCode": "manager", "equipmentRates": [ (test1) ], "effectiveDate": (date), "rateSetGroupCode": "NIMBUS", "rateSetGroupDescription": "TEST NIMBUS - DO NOT USE", "id": "98437f72-839f-48f4-a9f9-6c96304d85f0" } data = json.dumps(data) #print(data) response = requests.put('https://api.hcssapps.com/setups/api/v1/RateSet/Equipment/ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092', headers=hdr, data=data) u=(response.content) print(response.status_code)
[]
[]
[ "Why not just print comma then new line char after print?\nIn python print() we have end argument that print() prints after the message. By default it’s = new line. So we just need to change it to either comma & new line or just comma.\nI’m putting both the comma & new line but if you just need comma then delete \\n in end!\nfor i in range (0,Eqpt_List.shape[0]):\n EquipmentCode = Eqpt_List['assetitemindex'].iloc[i]\n TotalRate = Eqpt_List['hourlycostprice'].iloc[i]\n test1 = {\n \"equipmentCode\": str(EquipmentCode),\n \"totalRate\": (TotalRate),\n \"operatingRate\": 0,\n \"ownershipRate\": 0,\n \"id\": \"ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092\"\n }\n print(test1, end = ‘,\\n’)\n\n\nThe only problem with above code is that it will print comma with last element too. So what we need to do is that reduce the for loop with last element and do last element after loop like below.\nComplete code:\nfor i in range (0,Eqpt_List.shape[0] - 1):\n EquipmentCode = Eqpt_List['assetitemindex'].iloc[i]\n TotalRate = Eqpt_List['hourlycostprice'].iloc[i]\n test1 = {\n \"equipmentCode\": str(EquipmentCode),\n \"totalRate\": (TotalRate),\n \"operatingRate\": 0,\n \"ownershipRate\": 0,\n \"id\": \"ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092\"\n }\n print(test1, end = ‘,\\n’)\n\n\nEquipmentCode = Eqpt_List['assetitemindex'].iloc[Eqpt_List.shape[0] - 1]\nTotalRate = Eqpt_List['hourlycostprice'].iloc[Eqpt_List.shape[0] - 1]\ntest1 = {\n \"equipmentCode\": str(EquipmentCode),\n \"totalRate\": (TotalRate),\n \"operatingRate\": 0,\n \"ownershipRate\": 0,\n \"id\": \"ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092\"\n}\nprint(test1)\n\n\n" ]
[ -1 ]
[ "arrays", "for_loop", "json", "python" ]
stackoverflow_0074597868_arrays_for_loop_json_python.txt
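For the JSON-array question above, a minimal sketch of the usual approach: collect the per-row dicts into a Python list and let json.dumps insert the commas and brackets, rather than printing them. Eqpt_List, token and date are the objects from the question; everything else mirrors the request body shown there.

import json
import requests

equipment_rates = []
for i in range(Eqpt_List.shape[0]):
    equipment_rates.append({
        "equipmentCode": str(Eqpt_List['assetitemindex'].iloc[i]),
        "totalRate": Eqpt_List['hourlycostprice'].iloc[i],
        "operatingRate": 0,
        "ownershipRate": 0,
        "id": "ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092",
    })

data = {
    "businessUnitCode": "manager",
    "equipmentRates": equipment_rates,  # the whole list becomes the JSON array
    "effectiveDate": date,
    "rateSetGroupCode": "NIMBUS",
    "rateSetGroupDescription": "TEST NIMBUS - DO NOT USE",
    "id": "98437f72-839f-48f4-a9f9-6c96304d85f0",
}

hdr = {'Authorization': 'Bearer ' + token, 'Content-Type': 'application/json'}
response = requests.put(
    'https://api.hcssapps.com/setups/api/v1/RateSet/Equipment/ef4cb06d-9cbd-4fa7-9e33-59e64fcaa092',
    headers=hdr, data=json.dumps(data))
print(response.status_code)

This sends one PUT containing all rates instead of one request per row, which also matches how the request body in the question is shaped.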
Q: Regular expression to make non-greedy I have a text like this EXPRESS blood| muscle| testis| normal| tumor| fetus| adult RESTR_EXPR soft tissue/muscle tissue tumor Right now I want to only extract the last item in EXPRESS line, which is adult. My pattern is: [|](.*?)\n The code goes greedy to muscle| testis| normal| tumor| fetus| adult. Can I know if there is any way to solve this issue? A: You can take the capture group value exclude matching pipe chars after matching a pipe char followed by optional spaces. If there has to be a newline at the end of the string: \|[^\S\n]*([^|\n]*)\n Explanation \| Match | [^\S\n]* Match optional whitespace chars without newlines ( Capture group 1 [^|\n]* Match optional chars except for | or a newline ) Close group 1 \n Match a newline Regex demo Or asserting the end of the string: \|[^\S\n]*([^|\n]*)$ A: You could use this one. It spares you the space before, handle the \r\n case and is non-greedy: \|\s*([^\|])*?\r?\n Tested here
Regular expression to make non-greedy
I have a text like this EXPRESS blood| muscle| testis| normal| tumor| fetus| adult RESTR_EXPR soft tissue/muscle tissue tumor Right now I want to only extract the last item in EXPRESS line, which is adult. My pattern is: [|](.*?)\n The code goes greedy to muscle| testis| normal| tumor| fetus| adult. Can I know if there is any way to solve this issue?
[ "You can take the capture group value exclude matching pipe chars after matching a pipe char followed by optional spaces.\nIf there has to be a newline at the end of the string:\n\\|[^\\S\\n]*([^|\\n]*)\\n\n\nExplanation\n\n\\| Match |\n[^\\S\\n]* Match optional whitespace chars without newlines\n( Capture group 1\n\n[^|\\n]* Match optional chars except for | or a newline\n\n\n) Close group 1\n\\n Match a newline\n\nRegex demo\nOr asserting the end of the string:\n\\|[^\\S\\n]*([^|\\n]*)$\n\n", "You could use this one. It spares you the space before, handle the \\r\\n case and is non-greedy:\n\\|\\s*([^\\|])*?\\r?\\n\n\nTested here\n" ]
[ 1, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0074596624_python_regex.txt
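A short sketch applying the first answer's pattern in Python, using the sample text from the question (the exact spacing does not matter):

import re

text = ("EXPRESS       blood| muscle| testis| normal| tumor| fetus| adult\n"
        "RESTR_EXPR     soft tissue/muscle tissue tumor\n")

m = re.search(r"\|[^\S\n]*([^|\n]*)\n", text)
if m:
    print(m.group(1))  # adult

The capture group can only succeed between the last '|' and the end of the line, because every earlier attempt still has another '|' before the newline.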
Q: How to speed up calculation to get the minimum value of row by row a calculation over 2 dataframes in pandas I have 2 (for presentation simplified) Dataframes: table1_id lat long table2_id 1 5.5 45.5 2 5.2 50.2 3 8.9 49.7 table2_id lat long 1 5.0 47.2 2 8.5 22.5 3 2.1 33.3 Table1 has >40000 rows. Table2 has 3000 rows. What I want is to find the table2_id for each item in table 1 which has the shortest distance to its location using latitude/longitude and for the distance calculation I am using geopy.distance. To do this the slow way is to iterate over each coordinate in table1 and for each of those iterate over all rows of table 2 to find the minimum. Which is very slow using DataFrame.iterrows() or DataFrame.apply. Would look somewhat like that: for idx, row in table1_df.iterrows(): location1 = (row["lat"], row["long"]) min_table2id = 0 min_distance = 9999999 for idx2, row2 in table2_df.iterrows(): location2 = (row2["lat"], row2["long"]) distance = geopy.distance.geodesic(location1, location2).km if distance < min_distance: min_distance = distance min_table2id = row2[table2_id] row[table2_id] = min_table2id I've only done simple things over smaller dataset where speed was never a problem, but this is going for minutes, which was somewhat expected by those 2 for loops over that large of a table. I am not too familiar with vectorization (only used it to manipulate single columns in a dataframe) and was wondering if there is a good way to vectorize this, or speed it up in another way. Thanks! A: As far as I understand, the geodesic function does not support vectorization. Therefore, you need to implement the distance calculation function for coordinate vectors yourself. Fortunately, there are many such implementations. Here is a very simple implementation. Here is a complete example of the solution to your question. I have created two dataframes with the dimensions you provided. import pandas as pd import numpy as np df1 = pd.DataFrame({'df1_id':range(40_000), 'lat':np.random.uniform(-10, 10, size=40_000), 'lon':np.random.uniform(-10, 10, size=40_000)}) df1['df2_id'] = np.nan df2 = pd.DataFrame({'df2_id':range(3000), 'lat':np.random.uniform(-10, 10, size=3000), 'lon':np.random.uniform(-10, 10, size=3000)}) def haversine(lon1, lat1, lon2, lat2): lon1, lat1, lon2, lat2 = np.radians([lon1, lat1, lon2, lat2]) dlon = lon2 - lon1 dlat = lat2 - lat1 haver_formula = np.sin(dlat/2)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon/2)**2 r = 6371 #6371 for distance in KM for miles use 3958.756 dist = 2 * r * np.arcsin(np.sqrt(haver_formula)) return dist def foo(row, table2_df): size = table2_df.shape[0] distances = haversine([row[1]]*size,[row[0]]*size, table2_df.lon.values, table2_df.lat.values) return table2_df.iloc[np.argmin(distances), 0] %%timeit df1[['lat', 'lon']].apply(foo, axis=1, args=(df2,)) Out: 14 s ± 251 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) You can also parallelize the apply method using the parallel-pandas library. It's very simple and doesn't require you to rewrite your code. # pip install parallel-pandas from parallel_pandas import ParallelPandas ParallelPandas.initialize(disable_pr_bar=True) # p_apply is parallel analogue of apply method %%timeit df1[['lat', 'lon']].p_apply(foo, axis=1, args=(df2,)) Out: 1.79 s ± 75.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) Thus, the total time takes just over a second. I hope this is acceptable to you. Good luck!
How to speed up calculation to get the minimum value of row by row a calculation over 2 dataframes in pandas
I have 2 (for presentation simplified) Dataframes: table1_id lat long table2_id 1 5.5 45.5 2 5.2 50.2 3 8.9 49.7 table2_id lat long 1 5.0 47.2 2 8.5 22.5 3 2.1 33.3 Table1 has >40000 rows. Table2 has 3000 rows. What I want is to find the table2_id for each item in table 1 which has the shortest distance to its location using latitude/longitude and for the distance calculation I am using geopy.distance. To do this the slow way is to iterate over each coordinate in table1 and for each of those iterate over all rows of table 2 to find the minimum. Which is very slow using DataFrame.iterrows() or DataFrame.apply. Would look somewhat like that: for idx, row in table1_df.iterrows(): location1 = (row["lat"], row["long"]) min_table2id = 0 min_distance = 9999999 for idx2, row2 in table2_df.iterrows(): location2 = (row2["lat"], row2["long"]) distance = geopy.distance.geodesic(location1, location2).km if distance < min_distance: min_distance = distance min_table2id = row2[table2_id] row[table2_id] = min_table2id I've only done simple things over smaller dataset where speed was never a problem, but this is going for minutes, which was somewhat expected by those 2 for loops over that large of a table. I am not too familiar with vectorization (only used it to manipulate single columns in a dataframe) and was wondering if there is a good way to vectorize this, or speed it up in another way. Thanks!
[ "As far as I understand, the geodesic function does not support vectorization. Therefore, you need to implement the distance calculation function for coordinate vectors yourself. Fortunately, there are many such implementations.\nHere is a very simple implementation. Here is a complete example of the solution to your question. I have created two dataframes with the dimensions you provided.\nimport pandas as pd\nimport numpy as np\n\ndf1 = pd.DataFrame({'df1_id':range(40_000), 'lat':np.random.uniform(-10, 10, size=40_000), 'lon':np.random.uniform(-10, 10, size=40_000)})\ndf1['df2_id'] = np.nan\n\ndf2 = pd.DataFrame({'df2_id':range(3000), 'lat':np.random.uniform(-10, 10, size=3000), 'lon':np.random.uniform(-10, 10, size=3000)})\n\ndef haversine(lon1, lat1, lon2, lat2):\n lon1, lat1, lon2, lat2 = np.radians([lon1, lat1, lon2, lat2])\n dlon = lon2 - lon1\n dlat = lat2 - lat1\n\n haver_formula = np.sin(dlat/2)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon/2)**2\n\n r = 6371 #6371 for distance in KM for miles use 3958.756\n dist = 2 * r * np.arcsin(np.sqrt(haver_formula))\n return dist\n\n\ndef foo(row, table2_df):\n size = table2_df.shape[0]\n distances = haversine([row[1]]*size,[row[0]]*size, table2_df.lon.values, table2_df.lat.values)\n return table2_df.iloc[np.argmin(distances), 0]\n\n%%timeit\ndf1[['lat', 'lon']].apply(foo, axis=1, args=(df2,))\n\nOut:\n 14 s ± 251 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nYou can also parallelize the apply method using the parallel-pandas library. It's very simple and doesn't require you to rewrite your code.\n# pip install parallel-pandas\nfrom parallel_pandas import ParallelPandas\nParallelPandas.initialize(disable_pr_bar=True)\n\n# p_apply is parallel analogue of apply method\n%%timeit\ndf1[['lat', 'lon']].p_apply(foo, axis=1, args=(df2,))\n\nOut:\n 1.79 s ± 75.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nThus, the total time takes just over a second. I hope this is acceptable to you. Good luck!\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python", "vectorization" ]
stackoverflow_0074594718_dataframe_pandas_python_vectorization.txt
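If pulling in scikit-learn is acceptable (an assumption, it is not mentioned in the question), the nearest-neighbour search itself can be done without any apply at all via a BallTree with the haversine metric; the column names follow the answer's example and the tree returns distances in radians.

import numpy as np
import pandas as pd
from sklearn.neighbors import BallTree

df1 = pd.DataFrame({'df1_id': range(40_000),
                    'lat': np.random.uniform(-10, 10, size=40_000),
                    'lon': np.random.uniform(-10, 10, size=40_000)})
df2 = pd.DataFrame({'df2_id': range(3000),
                    'lat': np.random.uniform(-10, 10, size=3000),
                    'lon': np.random.uniform(-10, 10, size=3000)})

# BallTree with metric='haversine' expects (lat, lon) pairs in radians
tree = BallTree(np.radians(df2[['lat', 'lon']].values), metric='haversine')
dist, idx = tree.query(np.radians(df1[['lat', 'lon']].values), k=1)

df1['df2_id'] = df2['df2_id'].values[idx[:, 0]]
df1['dist_km'] = dist[:, 0] * 6371  # convert radians back to kilometres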
Q: Call Dlang function in struct with Python ctypes I have a .so (written in Dlang) which has a struct as below struct A { static A* load(string folder) { } } I am trying to consume the .so in Python. I am not sure how can i call the function which is present inside the structure. My python code is below from ctypes import * class A(Structure): _fields_ = [ ("load", CFUNCTYPE(c_void_p,c_char_p)) ] if __name__ == '__main__': so_path = "./libmycode.so" lib = CDLL(so_path) /* not sure how to call the load method */ I want to call the functions available in .so which is present inside the struct from python code. A: The ctypes module is designed to access a C module. So, you should define a C compatible structure and implement a C compatible function that calls your A.load() and converts arguments and return value. This is an example for that, assuming the A has a string member. import std.string; import std.conv; import core.stdc.stdlib; import core.stdc.string; struct A { string s; static A* load(string s) { auto a = new A; a.s = s ~ s; return a; } }; struct AForC { char* s; }; extern (C) AForC* loadAForC(const char* s) { auto a = A.load(to!string(s)); auto aForC = cast(AForC*)malloc(AForC.sizeof); if (aForC) { aForC.s = strdup(a.s.toStringz()); } return aForC; } Now you can call loadAForC() in Python using the ctypes. As a side note, a static member function of a struct in D or C++ is not stored inside a struct instance. So you should not define it in the _fields_ in Python.
Call Dlang function in struct with Python ctypes
I have a .so (written in Dlang) which has a struct as below struct A { static A* load(string folder) { } } I am trying to consume the .so in Python. I am not sure how can i call the function which is present inside the structure. My python code is below from ctypes import * class A(Structure): _fields_ = [ ("load", CFUNCTYPE(c_void_p,c_char_p)) ] if __name__ == '__main__': so_path = "./libmycode.so" lib = CDLL(so_path) /* not sure how to call the load method */ I want to call the functions available in .so which is present inside the struct from python code.
[ "The ctypes module is designed to access a C module. So, you should define a C compatible structure and implement a C compatible function that calls your A.load() and converts arguments and return value.\nThis is an example for that, assuming the A has a string member.\nimport std.string;\nimport std.conv;\nimport core.stdc.stdlib;\nimport core.stdc.string;\n\nstruct A\n{\n string s;\n static A* load(string s) {\n auto a = new A;\n a.s = s ~ s;\n return a;\n }\n};\n\nstruct AForC\n{\n char* s;\n};\n\nextern (C) AForC* loadAForC(const char* s)\n{\n auto a = A.load(to!string(s));\n auto aForC = cast(AForC*)malloc(AForC.sizeof);\n if (aForC) {\n aForC.s = strdup(a.s.toStringz());\n }\n return aForC;\n}\n\nNow you can call loadAForC() in Python using the ctypes.\nAs a side note, a static member function of a struct in D or C++ is not stored inside a struct instance. So you should not define it in the _fields_ in Python.\n" ]
[ 0 ]
[]
[]
[ "ctypes", "d", "python" ]
stackoverflow_0074595383_ctypes_d_python.txt
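A possible Python counterpart for the answer's loadAForC wrapper; the struct layout and function name follow the D code above, and the library name is the one from the question. Depending on the D runtime, an extra exported function that calls Runtime.initialize() may also be needed before the first call - treat that as an assumption to verify.

from ctypes import CDLL, Structure, POINTER, c_char_p

class AForC(Structure):
    _fields_ = [("s", c_char_p)]  # mirrors the C-compatible struct returned by loadAForC

lib = CDLL("./libmycode.so")
lib.loadAForC.argtypes = [c_char_p]
lib.loadAForC.restype = POINTER(AForC)

a = lib.loadAForC(b"some folder")
print(a.contents.s.decode())  # the doubled string built by A.load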
Q: Pulling files from real devices in appium iOS Im having a difficult time trying to pull files and folders in one of my automated tests using appium. We use real devices for testing and I would like to use driver.pull_file() to accomplish this task. The files I want exist in the On My iPad folder, and I cannot figure out how to get the file path of the actual file in that location on the device. Does anyone know where exactly I can find the right path? or what it would look like? A: How to get the file path of a file on iOS.
Pulling files from real devices in appium iOS
Im having a difficult time trying to pull files and folders in one of my automated tests using appium. We use real devices for testing and I would like to use driver.pull_file() to accomplish this task. The files I want exist in the On My iPad folder, and I cannot figure out how to get the file path of the actual file in that location on the device. Does anyone know where exactly I can find the right path? or what it would look like?
[ "How to get the file path of a file on iOS.\n" ]
[ 0 ]
[]
[]
[ "appium", "ios", "python" ]
stackoverflow_0074594979_appium_ios_python.txt
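A heavily hedged sketch of what the path usually looks like with the XCUITest driver: files an app exposes to the Files app ("On My iPad") live in that app's container and are addressed with an @bundle-id prefix. The bundle id, container name and file name below are assumptions - check the exact format against the Appium docs for your driver version, and note the app needs file sharing (UIFileSharingEnabled) turned on.

import base64

# assumed format: '@<bundle_id>:<container>/<path inside the container>'
payload = driver.pull_file('@com.mycompany.myapp:documents/report.pdf')
with open('report.pdf', 'wb') as f:
    f.write(base64.b64decode(payload))  # pull_file returns base64-encoded content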
Q: FastAPI: Internal server error when accessing through OpenAPI docs I am exposing API using OpenAPI which is developed using FastAPI. Here is my pydantic model: class ComponentListResponse(BaseModel): """ This model is to list the component """ tag_info = ComponentSummaryTagInfoResp heath_status : Optional[str] = Field(alias="healthStatus") stage : Optional[str] = Field(alias="stage") component_notes: List[dict] =List[ComponentNotes] class ComponentList(BaseModel): """ This is the base model for component List """ data: List[dict] = List[ComponentListResponse] Here is the resource file: from .schema import ( ComponentListResponse,ComponentList ) from .service import ( get_component_list ) router = APIRouter(prefix="/component", tags=["Component"]) @router.get( "/componentList/{component_id}", response_description="List component by componentId & CompanyId", response_model=ComponentList, status_code=status.HTTP_200_OK, ) def get_component_endpoint( request: Request, component_id: str, company_id: str ): """ API handler function for component List API. """ component_list = get_component_list(component_id, company_id) print (component_list) if component_list: return component_list else: raise HTTPException( status_code=status.HTTP_404_NOT_FOUND, detail="Component List not found", ) I am getting the response properly when I am trying to make a GET request from browser. but when I am trying to access the same using OpenAPI docs through Swagger UI, it raises an error (Internal server error). I sense that this is caused due to data: List[dict] = List[ComponentListResponse]. Can anyone tell me how to solve this? A: Your models are wrongly defined. The "=" sign should be use to provide default values not type definitions. Therefore your models should be define as follows: class ComponentListResponse(BaseModel): """ This model is to list the component """ tag_info = ComponentSummaryTagInfoResp heath_status : Optional[str] = Field(None, alias="healthStatus") stage : Optional[str] = Field(None, alias="stage") component_notes: List[ComponentNotes] class ComponentList(BaseModel): """ This is the base model for component List """ data: List[ComponentListResponse] or if you really want to have default values: class ComponentListResponse(BaseModel): """ This model is to list the component """ tag_info = ComponentSummaryTagInfoResp heath_status : Optional[str] = Field(None, alias="healthStatus") stage : Optional[str] = Field(None, alias="stage") component_notes: List[ComponentNotes] = [] class ComponentList(BaseModel): """ This is the base model for component List """ data: List[ComponentListResponse] = [] Besides don't forget to specify None as a default value for your field that are optional when you are using Field. If you don't put it, FastAPI and Pydantic will expect that those values are always set.
FastAPI: Internal server error when accessing through OpenAPI docs
I am exposing API using OpenAPI which is developed using FastAPI. Here is my pydantic model: class ComponentListResponse(BaseModel): """ This model is to list the component """ tag_info = ComponentSummaryTagInfoResp heath_status : Optional[str] = Field(alias="healthStatus") stage : Optional[str] = Field(alias="stage") component_notes: List[dict] =List[ComponentNotes] class ComponentList(BaseModel): """ This is the base model for component List """ data: List[dict] = List[ComponentListResponse] Here is the resource file: from .schema import ( ComponentListResponse,ComponentList ) from .service import ( get_component_list ) router = APIRouter(prefix="/component", tags=["Component"]) @router.get( "/componentList/{component_id}", response_description="List component by componentId & CompanyId", response_model=ComponentList, status_code=status.HTTP_200_OK, ) def get_component_endpoint( request: Request, component_id: str, company_id: str ): """ API handler function for component List API. """ component_list = get_component_list(component_id, company_id) print (component_list) if component_list: return component_list else: raise HTTPException( status_code=status.HTTP_404_NOT_FOUND, detail="Component List not found", ) I am getting the response properly when I am trying to make a GET request from browser. but when I am trying to access the same using OpenAPI docs through Swagger UI, it raises an error (Internal server error). I sense that this is caused due to data: List[dict] = List[ComponentListResponse]. Can anyone tell me how to solve this?
[ "Your models are wrongly defined. The \"=\" sign should be use to provide default values not type definitions.\nTherefore your models should be define as follows:\nclass ComponentListResponse(BaseModel):\n \"\"\"\n This model is to list the component\n\n \"\"\"\n \n tag_info = ComponentSummaryTagInfoResp \n heath_status : Optional[str] = Field(None, alias=\"healthStatus\")\n stage : Optional[str] = Field(None, alias=\"stage\")\n component_notes: List[ComponentNotes]\n\nclass ComponentList(BaseModel):\n \"\"\"\n This is the base model for component List\n \"\"\"\n \n data: List[ComponentListResponse]\n\nor if you really want to have default values:\nclass ComponentListResponse(BaseModel):\n \"\"\"\n This model is to list the component\n\n \"\"\"\n \n tag_info = ComponentSummaryTagInfoResp \n heath_status : Optional[str] = Field(None, alias=\"healthStatus\")\n stage : Optional[str] = Field(None, alias=\"stage\")\n component_notes: List[ComponentNotes] = []\n\nclass ComponentList(BaseModel):\n \"\"\"\n This is the base model for component List\n \"\"\"\n \n data: List[ComponentListResponse] = []\n\nBesides don't forget to specify None as a default value for your field that are optional when you are using Field. If you don't put it, FastAPI and Pydantic will expect that those values are always set.\n" ]
[ 0 ]
[]
[]
[ "fastapi", "openapi", "python", "swagger" ]
stackoverflow_0074596711_fastapi_openapi_python_swagger.txt
Q: Django - How can view function see difference of the endpoint being hit , without any value stated in the url? I'm fairly new to Django and here's my case. If i have 3 endpoints that i can't modify, and i need to point them to one same View function such as : urls.py urlpatterns = [ ... url(r'^a/', views.functionz.as_view(), name='a'), url(r'^b/', views.functionz.as_view(), name='b'), url(r'^c/', views.functionz.as_view(), name='c'), ... ] If I'm restricted from changing the endpoints a/, b/, and c/ to something else that accepts parameters like xyz/a or xyz/b, how can my view function functionz identify the difference between them when it is being called? Can I do something like this pseudocode? views.py Class XYZ(API View): def post(self, request, format=None): if request.endpoint == '/a/': # do things if and only if the client hits /a/ A: I don't know if mapping diffrent url patterns to same view is a good idea, but if you want to do that logic with in the post you probably can use the get_full_path to get the current path and parse the last the item. Class XYZ(API View): def post(self, request, format=None): current_path = request.get_full_path().rsplit("/", 1) if current_path == 'a': # Do something if current_path == 'b': # Do something
Django - How can view function see difference of the endpoint being hit , without any value stated in the url?
I'm fairly new to Django and here's my case. If i have 3 endpoints that i can't modify, and i need to point them to one same View function such as : urls.py urlpatterns = [ ... url(r'^a/', views.functionz.as_view(), name='a'), url(r'^b/', views.functionz.as_view(), name='b'), url(r'^c/', views.functionz.as_view(), name='c'), ... ] If I'm restricted from changing the endpoints a/, b/, and c/ to something else that accepts parameters like xyz/a or xyz/b, how can my view function functionz identify the difference between them when it is being called? Can I do something like this pseudocode? views.py Class XYZ(API View): def post(self, request, format=None): if request.endpoint == '/a/': # do things if and only if the client hits /a/
[ "I don't know if mapping diffrent url patterns to same view is a good idea, but if you want to do that logic with in the post you probably can use the get_full_path to get the current path and parse the last the item.\nClass XYZ(API View):\n def post(self, request, format=None):\n current_path = request.get_full_path().rsplit(\"/\", 1)\n if current_path == 'a':\n # Do something\n if current_path == 'b':\n # Do something\n \n\n" ]
[ 0 ]
[]
[]
[ "backend", "django", "django_rest_framework", "django_views", "python" ]
stackoverflow_0074598040_backend_django_django_rest_framework_django_views_python.txt
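Since each url() in the question already carries a name, a sketch that avoids parsing the path at all is to read request.resolver_match.url_name inside the view; the DRF imports are assumed from the question's tags and its APIView-style class.

from rest_framework.views import APIView
from rest_framework.response import Response

class XYZ(APIView):
    def post(self, request, format=None):
        endpoint = request.resolver_match.url_name  # 'a', 'b' or 'c' from name=... in urls.py
        if endpoint == 'a':
            pass  # do things if and only if the client hits /a/
        elif endpoint == 'b':
            pass
        return Response({"endpoint": endpoint})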
Q: Python check if values of a dataframe are present in another dataframe index I have two dataframes. I want to drop the values in first dataframe (default) after comparing with second dataframe (provided by user) def_df = pd.DataFrame([['alpha','beta'],['gamma','delta']],index=['ab_plot',gd_plot]) 0 1 ab_plot alpha beta gd_plot gamma delta rk_plot ray kite user_df = pd.DataFrame([10,20],index=['alpha','beta']) 0 alpha 10 beta 20 I want to compare two dataframes and know the possible plots for given user data. Expected answer ['ab_plot'] # since user has provided data for `'alpha','beta'` My approach: posble_plots_with_user_data = [True for x in posble_plots.values if x in df.columns] Present answer: TypeError: unhashable type: 'numpy.ndarray' A: If need test all values if match at least one value by index from user_df use DataFrame.isin with DataFrame.any and filter def_df.index: #changed data def_df = pd.DataFrame([['alpha','beta'],['gamma','beta']],index=['ab_plot','gd_plot']) user_df = pd.DataFrame([10,20],index=['alpha','beta']) posble_plots_with_user_data = def_df.index[def_df.isin(user_df.index).any(axis=1)].tolist() print (posble_plots_with_user_data) ['ab_plot', 'gd_plot'] If need rows with match all values per rows use DataFrame.all: posble_plots_with_user_data = def_df.index[def_df.isin(user_df.index).all(axis=1)].tolist() print (posble_plots_with_user_data) ['ab_plot'] Details: print (def_df.isin(user_df.index)) 0 1 ab_plot True True gd_plot False True
Python check if values of a dataframe are present in another dataframe index
I have two dataframes. I want to drop the values in first dataframe (default) after comparing with second dataframe (provided by user) def_df = pd.DataFrame([['alpha','beta'],['gamma','delta']],index=['ab_plot',gd_plot]) 0 1 ab_plot alpha beta gd_plot gamma delta rk_plot ray kite user_df = pd.DataFrame([10,20],index=['alpha','beta']) 0 alpha 10 beta 20 I want to compare two dataframes and know the possible plots for given user data. Expected answer ['ab_plot'] # since user has provided data for `'alpha','beta'` My approach: posble_plots_with_user_data = [True for x in posble_plots.values if x in df.columns] Present answer: TypeError: unhashable type: 'numpy.ndarray'
[ "If need test all values if match at least one value by index from user_df use DataFrame.isin with DataFrame.any and filter def_df.index:\n#changed data\ndef_df = pd.DataFrame([['alpha','beta'],['gamma','beta']],index=['ab_plot','gd_plot'])\n\nuser_df = pd.DataFrame([10,20],index=['alpha','beta'])\n\nposble_plots_with_user_data = def_df.index[def_df.isin(user_df.index).any(axis=1)].tolist()\nprint (posble_plots_with_user_data)\n['ab_plot', 'gd_plot']\n\nIf need rows with match all values per rows use DataFrame.all:\nposble_plots_with_user_data = def_df.index[def_df.isin(user_df.index).all(axis=1)].tolist()\nprint (posble_plots_with_user_data)\n['ab_plot']\n\nDetails:\nprint (def_df.isin(user_df.index))\n 0 1\nab_plot True True\ngd_plot False True\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "numpy", "numpy_ndarray", "pandas", "python" ]
stackoverflow_0074598122_dataframe_numpy_numpy_ndarray_pandas_python.txt
Q: Trying to execute a `ddb_to_es.py` file in order to backfill OpenSearch index on my DynamoDB table (for @searchable Amplify directive) TLDR: I'm trying to execute a ddb_to_es.py file in order to backfill OpenSearch index on my DynamoDB table. But when I run the command in the terminal nothing happens. I've made an update to my Amplify/GraphQL schema and added a @searchable directive. I need to backfill OpenSearch index on my DynamoDB table, as per the grey info paragraph in the docs https://docs.amplify.aws/cli/graphql/search-and-result-aggregations/: Once the @searchable directive is added, all new records added to the model are streamed to OpenSearch. To backfill existing data, see Backfill OpenSearch index from DynamoDB table. The docs direct to these docs: https://docs.amplify.aws/cli/graphql/troubleshooting/#backfill-opensearch-index-from-dynamodb-table We are instructed to use the provided python file with this command: python3 ddb_to_es.py \ --rn 'us-west-2' \ # Use the region in which your table and OpenSearch domain reside --tn 'Post-XXXX-dev' \ # Table name --lf 'arn:aws:lambda:us-west-2:<...>:function:amplify-<...>-OpenSearchStreamingLambd-<...>' \ # Lambda function ARN, find the DynamoDB to OpenSearch streaming functions, copy entire ARN --esarn 'arn:aws:dynamodb:us-west-2:<...>:table/Post-<...>/stream/2019-20-03T00:00:00.350' # Event source ARN, copy the full DynamoDB table ARN I've tried this with my region, ARN's, and DynamoDB references but when I hit enter in my CLI it just goes to the next command line and nothing happens? I've not used python before. There are import statements at the top of the file, but I'm only trying to run the file in isolation. Is there an environment I need to set up? A: I have got it working. It seemed to respond this morning where it didn't on Friday. This is what I did today: Install file dependencies: pip3 install boto3 Execute file in cli (I changed the file name to the pathname): python3 /Users/myName/myPythonExecution/ddb_to_es.py \ --rn 'eu-west-2' \ --tn '<DynamoDB table name from console>-dev' \ --lf 'arn:aws:lambda:eu-west-2:<AWS Account number>:function:amplify-<name of app>-OpenSearchStreamingLambd-<id of lambda, found in Lambda Console>' \ --esarn 'arn:aws:dynamodb:eu-west-2:<AWS account number>:table/<DynamoDB table name from console>/stream/<date/time of table creation, found in DynamoDB console table information>'
Trying to execute a `ddb_to_es.py` file in order to backfill OpenSearch index on my DynamoDB table (for @searchable Amplify directive)
TLDR: I'm trying to execute a ddb_to_es.py file in order to backfill OpenSearch index on my DynamoDB table. But when I run the command in the terminal nothing happens. I've made an update to my Amplify/GraphQL schema and added a @searchable directive. I need to backfill OpenSearch index on my DynamoDB table, as per the grey info paragraph in the docs https://docs.amplify.aws/cli/graphql/search-and-result-aggregations/: Once the @searchable directive is added, all new records added to the model are streamed to OpenSearch. To backfill existing data, see Backfill OpenSearch index from DynamoDB table. The docs direct to these docs: https://docs.amplify.aws/cli/graphql/troubleshooting/#backfill-opensearch-index-from-dynamodb-table We are instructed to use the provided python file with this command: python3 ddb_to_es.py \ --rn 'us-west-2' \ # Use the region in which your table and OpenSearch domain reside --tn 'Post-XXXX-dev' \ # Table name --lf 'arn:aws:lambda:us-west-2:<...>:function:amplify-<...>-OpenSearchStreamingLambd-<...>' \ # Lambda function ARN, find the DynamoDB to OpenSearch streaming functions, copy entire ARN --esarn 'arn:aws:dynamodb:us-west-2:<...>:table/Post-<...>/stream/2019-20-03T00:00:00.350' # Event source ARN, copy the full DynamoDB table ARN I've tried this with my region, ARN's, and DynamoDB references but when I hit enter in my CLI it just goes to the next command line and nothing happens? I've not used python before. There are import statements at the top of the file, but I'm only trying to run the file in isolation. Is there an environment I need to set up?
[ "I have got it working. It seemed to respond this morning where it didn't on Friday. This is what I did today:\nInstall file dependencies:\npip3 install boto3\n\nExecute file in cli (I changed the file name to the pathname):\npython3 /Users/myName/myPythonExecution/ddb_to_es.py \\\n --rn 'eu-west-2' \\\n --tn '<DynamoDB table name from console>-dev' \\\n --lf 'arn:aws:lambda:eu-west-2:<AWS Account number>:function:amplify-<name of app>-OpenSearchStreamingLambd-<id of lambda, found in Lambda Console>' \\\n --esarn 'arn:aws:dynamodb:eu-west-2:<AWS account number>:table/<DynamoDB table name from console>/stream/<date/time of table creation, found in DynamoDB console table information>'\n\n" ]
[ 0 ]
[]
[]
[ "amazon_dynamodb", "amazon_web_services", "aws_appsync", "graphql", "python" ]
stackoverflow_0074565212_amazon_dynamodb_amazon_web_services_aws_appsync_graphql_python.txt
Q: Python- list of lists with values.tolist() I need to read a single column from db table as a list of values. I tried to do that with following line of code: ids_list = (pd.read_sql_query(q, db_internal.connection)).values.tolist() when I use the values.tolist() it does what documentation says, that is turns the df into list of lists: [['0x0043Fcb34e7470130fDe28198571DeE092c70Bd7'], ['0x00f93fBf00F97170B6cf295DC58888073CB5c2b8'], ['0x01FE650EF2f8e2982295489AE6aDc1413bF6011F'], ['0x0212133321479B183637e52942564162bCc37C1D']] Because I am reading single column, I would like to transform it into list of values, not list of lists: ['0x0043Fcb34e7470130fDe28198571DeE092c70Bd7', '0x00f93fBf00F97170B6cf295DC58888073CB5c2b8', '0x01FE650EF2f8e2982295489AE6aDc1413bF6011F', '0x0212133321479B183637e52942564162bCc37C1D'] What would be a way to do that? I keep on finding solutions that are focused on list of lists A: You can simply iterate through the list with: [i for i in ids_list[0]] I hope this helps. A: Just convert tolist() generated list into list with the first element only. ids_list = list(i[0] for i in (pd.read_sql_query(q, db_internal.connection)).values.tolist())
Python- list of lists with values.tolist()
I need to read a single column from db table as a list of values. I tried to do that with following line of code: ids_list = (pd.read_sql_query(q, db_internal.connection)).values.tolist() when I use the values.tolist() it does what documentation says, that is turns the df into list of lists: [['0x0043Fcb34e7470130fDe28198571DeE092c70Bd7'], ['0x00f93fBf00F97170B6cf295DC58888073CB5c2b8'], ['0x01FE650EF2f8e2982295489AE6aDc1413bF6011F'], ['0x0212133321479B183637e52942564162bCc37C1D']] Because I am reading single column, I would like to transform it into list of values, not list of lists: ['0x0043Fcb34e7470130fDe28198571DeE092c70Bd7', '0x00f93fBf00F97170B6cf295DC58888073CB5c2b8', '0x01FE650EF2f8e2982295489AE6aDc1413bF6011F', '0x0212133321479B183637e52942564162bCc37C1D'] What would be a way to do that? I keep on finding solutions that are focused on list of lists
[ "You can simply iterate through the list with:\n[i for i in ids_list[0]]\n\nI hope this helps.\n", "Just convert tolist() generated list into list with the first element only.\nids_list = list(i[0] for i in (pd.read_sql_query(q, db_internal.connection)).values.tolist())\n\n" ]
[ 2, 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074598156_pandas_python.txt
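If the query really returns a single column, it may be simpler to take that column as a Series and call .tolist() on it directly; q and db_internal.connection are the objects from the question.

import pandas as pd

ids_list = pd.read_sql_query(q, db_internal.connection).iloc[:, 0].tolist()
# ['0x0043...', '0x00f9...', ...]  - one flat list, no nested lists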
Q: Hypothesis create column with pd.datetime dtype in given test-dataframe I want to test whether a certain method can handle different dates in a pandas dataframe, which it takes as an argument. The following example should clarify what kind of setup I want. In the example column('Date', dtype=pd.datetime) does not work for creating a date column in the test dataframe: from hypothesis import given from hypothesis.extra.pandas import column, data_frames import pandas as pd from unittest import TestCase class TestExampleClass(TestCase): @given(data_frames([column('A', dtype=str), column('B', dtype=int),column('Date', dtype=pd.datetime)])) def test_example_test_method(self, dataframe): self.assertTrue(True) Any ideas? I am aware of How to create a datetime indexed pandas DataFrame with hypothesis library?, but it did not help for my specific case. A: Use dtype="datetime64[ns]" instead of dtype=pd.datetime. I've opened an issue to look into this in more detail and give helpful error messages when passed pd.datetime, datetime, or a unitless datetime dtype; this kind of confusion isn't the user experience we want to offer!
Hypothesis create column with pd.datetime dtype in given test-dataframe
I want to test whether a certain method can handle different dates in a pandas dataframe, which it takes as an argument. The following example should clarify what kind of setup I want. In the example column('Date', dtype=pd.datetime) does not work for creating a date column in the test dataframe: from hypothesis import given from hypothesis.extra.pandas import column, data_frames import pandas as pd from unittest import TestCase class TestExampleClass(TestCase): @given(data_frames([column('A', dtype=str), column('B', dtype=int),column('Date', dtype=pd.datetime)])) def test_example_test_method(self, dataframe): self.assertTrue(True) Any ideas? I am aware of How to create a datetime indexed pandas DataFrame with hypothesis library?, but it did not help for my specific case.
[ "Use dtype=\"datetime64[ns]\" instead of dtype=pd.datetime.\n\nI've opened an issue to look into this in more detail and give helpful error messages when passed pd.datetime, datetime, or a unitless datetime dtype; this kind of confusion isn't the user experience we want to offer!\n" ]
[ 1 ]
[]
[]
[ "pandas", "python", "python_hypothesis", "unit_testing" ]
stackoverflow_0074591552_pandas_python_python_hypothesis_unit_testing.txt
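A minimal sketch of the accepted fix applied to the test from the question:

from unittest import TestCase

from hypothesis import given
from hypothesis.extra.pandas import column, data_frames

class TestExampleClass(TestCase):
    @given(data_frames([column('A', dtype=str),
                        column('B', dtype=int),
                        column('Date', dtype="datetime64[ns]")]))
    def test_example_test_method(self, dataframe):
        self.assertTrue(True)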
Q: Error when writting image from python to Excel I am trying to run this code: wb=openpyxl.load_workbook('Output_Report_v16.xlsm',read_only=False,keep_vba=True) sheets=wb.sheetnames sheet_InputData_Overview=wb [sheets[7]] img=openpyxl.drawing.image.Image('Eink_Liq.png') sheet_InputData_Overview.add_image(ws.cell(2,28)) wb.save('Output_Report_v16.xlsm') When python runs the last line of code this error arises: 'Cell' object has no attribute '_id' The excel file contains VBA which should not be changed or deleted. Do you have any idea what may be wrong with this code? A: Seem to be a few issues with your code but the problem is appying the image. You created the img object but never use it. ws.cell references an object not defined ... wb=openpyxl.load_workbook('Output_Report_v16.xlsm',read_only=False,keep_vba=True) sheets=wb.sheetnames sheet_InputData_Overview=wb [sheets[7]] img=openpyxl.drawing.image.Image('Eink_Liq.png') ### This line is wrong and references 'ws' object not defined # sheet_InputData_Overview.add_image(ws.cell(2,28)) ### Set the position for the image in the sheet img.anchor = sheet_InputData_Overview.cell(row=2, column=28).coordinate ### Add the image 'img' to the sheet sheet_InputData_Overview.add_image(img) wb.save('Output_Report_v16.xlsm')
Error when writing image from Python to Excel
I am trying to run this code: wb=openpyxl.load_workbook('Output_Report_v16.xlsm',read_only=False,keep_vba=True) sheets=wb.sheetnames sheet_InputData_Overview=wb [sheets[7]] img=openpyxl.drawing.image.Image('Eink_Liq.png') sheet_InputData_Overview.add_image(ws.cell(2,28)) wb.save('Output_Report_v16.xlsm') When python runs the last line of code this error arises: 'Cell' object has no attribute '_id' The excel file contains VBA which should not be changed or deleted. Do you have any idea what may be wrong with this code?
[ "Seem to be a few issues with your code but the problem is appying the image.\nYou created the img object but never use it.\nws.cell references an object not defined\n...\nwb=openpyxl.load_workbook('Output_Report_v16.xlsm',read_only=False,keep_vba=True)\nsheets=wb.sheetnames\nsheet_InputData_Overview=wb [sheets[7]]\n\nimg=openpyxl.drawing.image.Image('Eink_Liq.png')\n\n### This line is wrong and references 'ws' object not defined\n# sheet_InputData_Overview.add_image(ws.cell(2,28))\n\n### Set the position for the image in the sheet\nimg.anchor = sheet_InputData_Overview.cell(row=2, column=28).coordinate\n### Add the image 'img' to the sheet\nsheet_InputData_Overview.add_image(img)\n\n\nwb.save('Output_Report_v16.xlsm')\n\n" ]
[ 1 ]
[ "Simple fix after looking at the attributes of Cell. Try this:\nws[cell].style = Style(font=Font(color=Color(colors.RED))) \n\n" ]
[ -1 ]
[ "excel", "image", "numpy", "openpyxl", "python" ]
stackoverflow_0074597600_excel_image_numpy_openpyxl_python.txt
Q: check if string exists on a json (python) i only need to print something if a certain condition was met, not all of the data data = json.loads(r.text) for value in data: if value['username'] == 'GDjkhp': print(value['content'], '\n') the following code gives me a keyerror
check if string exists on a json (python)
I only need to print something if a certain condition was met, not all of the data data = json.loads(r.text) for value in data: if value['username'] == 'GDjkhp': print(value['content'], '\n') the following code gives me a KeyError
[]
[]
[ "try using the get() method:\nif value.get('username','no_username') == 'GDjkhp':\n print(value.get('content',''), '\\n')\n\n\nif you need to check if a key is in there:\nif 'content' not in value:\n print('content not in json')\n\n" ]
[ -1 ]
[ "json", "python" ]
stackoverflow_0074598184_json_python.txt
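A small sketch that puts the reply's .get() suggestion into the loop from the question, so neither a missing 'username' nor a missing 'content' key raises a KeyError; r is the response object from the question.

import json

data = json.loads(r.text)
for value in data:
    if value.get('username') == 'GDjkhp':
        print(value.get('content', ''), '\n')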
Q: Retrieve Azure IoT connection string using Python I have the following code conn_str = "HostName=my_host.azure-devices.net;DeviceId=MY_DEVICE;SharedAccessKey=MY_KEY" device_conn = IoTHubDeviceClient.create_from_connection_string(conn_str) await device_conn.connect() This works fine, but only because I've manually retrieved this from the IoT hub and pasted it into the code. We are going to have hundreds of these devices, so is there a way to retrieve this connection string programmatically? It'll be the equivalent of the following az iot hub device-identity connection-string show --device-id MY_DEVICCE --hub-name MY_HUB --subscription ABCD1234 How do I do this? A: The device id and key are you give to the each device and you choose where to store/how to load it. The connection string is just a concept for easy to get started but it has no meaning in the actual technical level. You can use create_from_symmetric_key(symmetric_key, hostname, device_id, **kwargs) to direct pass key, id and hub uri to sdk. A: I found it's not possible to retrieve the actual connection string, but a connection string can be built from the device primary key from azure.iot.hub import IoTHubRegistryManager from azure.iot.device import IoTHubDeviceClient # HUB_HOST is YOURHOST.azure-devices.net # SHARED_ACCESS_KEY is from the registryReadWrite connection string reg_str = "HostName={0};SharedAccessKeyName=registryReadWrite;SharedAccessKey={1}".format( HUB_HOST, SHARED_ACCESS_KEY) device = IoTHubRegistryManager(reg_str).get_device("MY_DEVICE_ID") device_key = device.authentication.symmetric_key.primary_key conn_str = "HostName={0};DeviceId={1};SharedAccessKey={2}".format( HUB_HOST, "MY_DEVICE_ID", device_key) client = IoTHubDeviceClient.create_from_connection_string( conn_str) client.connect() # Remaining code here... A: Other options you could consider include: Use the Device Provisioning service to manage provisioning and connecting your device to your IoT hub. You won't need to generate your connection strings manually in this case. Use X.509 certificates (recommended for production environments instead of SAS). Each device has an X.509 cert derived from the root cert in your hub. See: https://learn.microsoft.com/azure/iot-hub/tutorial-x509-introduction
Retrieve Azure IoT connection string using Python
I have the following code conn_str = "HostName=my_host.azure-devices.net;DeviceId=MY_DEVICE;SharedAccessKey=MY_KEY" device_conn = IoTHubDeviceClient.create_from_connection_string(conn_str) await device_conn.connect() This works fine, but only because I've manually retrieved this from the IoT hub and pasted it into the code. We are going to have hundreds of these devices, so is there a way to retrieve this connection string programmatically? It'll be the equivalent of the following az iot hub device-identity connection-string show --device-id MY_DEVICCE --hub-name MY_HUB --subscription ABCD1234 How do I do this?
[ "The device id and key are you give to the each device and you choose where to store/how to load it. The connection string is just a concept for easy to get started but it has no meaning in the actual technical level.\nYou can use create_from_symmetric_key(symmetric_key, hostname, device_id, **kwargs) to direct pass key, id and hub uri to sdk.\n", "I found it's not possible to retrieve the actual connection string, but a connection string can be built from the device primary key\nfrom azure.iot.hub import IoTHubRegistryManager\nfrom azure.iot.device import IoTHubDeviceClient\n\n# HUB_HOST is YOURHOST.azure-devices.net\n# SHARED_ACCESS_KEY is from the registryReadWrite connection string\nreg_str = \"HostName={0};SharedAccessKeyName=registryReadWrite;SharedAccessKey={1}\".format(\n HUB_HOST, SHARED_ACCESS_KEY)\n\ndevice = IoTHubRegistryManager(reg_str).get_device(\"MY_DEVICE_ID\")\ndevice_key = device.authentication.symmetric_key.primary_key\nconn_str = \"HostName={0};DeviceId={1};SharedAccessKey={2}\".format(\n HUB_HOST, \"MY_DEVICE_ID\", device_key)\n\nclient = IoTHubDeviceClient.create_from_connection_string(\n conn_str)\n\nclient.connect()\n\n# Remaining code here...\n\n", "Other options you could consider include:\n\nUse the Device Provisioning service to manage provisioning and connecting your device to your IoT hub. You won't need to generate your connection strings manually in this case.\nUse X.509 certificates (recommended for production environments instead of SAS). Each device has an X.509 cert derived from the root cert in your hub. See: https://learn.microsoft.com/azure/iot-hub/tutorial-x509-introduction\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "azure_iot_hub", "python" ]
stackoverflow_0074563593_azure_iot_hub_python.txt
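A sketch of the symmetric-key variant mentioned in the first answer, reusing the device_key fetched via IoTHubRegistryManager in the second answer; the host name and device id are the placeholders from the question.

from azure.iot.device import IoTHubDeviceClient

client = IoTHubDeviceClient.create_from_symmetric_key(
    symmetric_key=device_key,  # primary key obtained via IoTHubRegistryManager
    hostname="my_host.azure-devices.net",
    device_id="MY_DEVICE",
)
client.connect()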
Q: I want to create a wordlist of incrementing decimal numbers by 1 using python I know i can create a wordlist using programms like 'crunch' but i wanted to use python in hopes of learning something new. so I'm doing this CTF where i need a wordlist of numbers from 1 to maybe 10,000 or more. all the wordlists in Seclists have at least 3 zeroes in front of them, i dont want to use those files because i need to hash each entry through md5. if there are zeros in front of a numbers the hash differs from the same number without any zeros in front of it. i need each numbers in its own line, starting with 1 to however many lines or number i want. I feel like there may be a github gist for this out there but i havnt been looking long or hard enough to find one. if you have a link for one pls let me know! A: this works better, then pipe the results into a file. #!/usr/bin/python3 def generate(): n = 10000 print("\n".join(str(v) for v in range(1, n + 1))) generate() A: Here is how you can create a wordlist of such numbers and get it into a .csv file: def generate(min=0,max=10000): ''' Generates a wordlist of numbers from min to max. ''' r = range(min,max+1,1) with open('myWordlist.csv','a') as file: for i in r: file.write(f'{i}\n') generate()
I want to create a wordlist of incrementing decimal numbers by 1 using python
I know i can create a wordlist using programms like 'crunch' but i wanted to use python in hopes of learning something new. so I'm doing this CTF where i need a wordlist of numbers from 1 to maybe 10,000 or more. all the wordlists in Seclists have at least 3 zeroes in front of them, i dont want to use those files because i need to hash each entry through md5. if there are zeros in front of a numbers the hash differs from the same number without any zeros in front of it. i need each numbers in its own line, starting with 1 to however many lines or number i want. I feel like there may be a github gist for this out there but i havnt been looking long or hard enough to find one. if you have a link for one pls let me know!
[ "this works better, then pipe the results into a file.\n#!/usr/bin/python3\n\ndef generate():\n\n n = 10000\n print(\"\\n\".join(str(v) for v in range(1, n + 1)))\ngenerate()\n\n\n", "Here is how you can create a wordlist of such numbers and get it into a .csv file:\ndef generate(min=0,max=10000):\n '''\n Generates a wordlist of numbers from min to max.\n '''\n r = range(min,max+1,1)\n with open('myWordlist.csv','a') as file:\n for i in r:\n file.write(f'{i}\\n')\n \ngenerate()\n\n" ]
[ 1, 0 ]
[]
[]
[ "ctf", "python" ]
stackoverflow_0074597397_ctf_python.txt
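Since the question also mentions hashing every entry with MD5, a small sketch that writes both the plain number and its digest; the output file name and the number:hash layout are illustrative choices, not something from the question.

import hashlib

with open("wordlist_md5.txt", "w") as f:
    for n in range(1, 10001):
        word = str(n)  # no leading zeroes, so the hash matches the bare number
        digest = hashlib.md5(word.encode()).hexdigest()
        f.write(f"{word}:{digest}\n")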
Q: A regex pattern that matches all words starting from a word with an s and stopping before a word that starts with an s I'm trying to capture words in a string such that the first word starts with an s, and the regex stops matching if the next word also starts with an s. For example. I have the string " Stack, Code and StackOverflow". I want to capture only " Stack, Code and " and not include "StackOverflow" in the match. This is what I am thinking: Start with a space followed by an s. Match everything except if the group is a space and an s (I'm using negative lookahead). The regex I have tried: (?<=\s)S[a-z -,]*(?!(\sS)) I don't know how to make it work. A: I think this should work. I adapted the regex from this thread. You can also test it out here. I have also included a non-regex solution. I basically track the first occurrence of a word starting with an 's' and the next word starting with an 's' and get the words in that range. import re teststring = " Stack, Code and StackOverflow" extractText = re.search(r"(\s)[sS][^*\s]*[^sS]*", teststring) print(extractText[0]) #non-regex solution listwords = teststring.split(' ') # non regex solution start = 0 end = 0 for i,word in enumerate(listwords): if word.startswith('s') or word.startswith('S'): if start == 0: start = i else: end = i break newstring = " " + " ".join([word for word in listwords[start:end]]) print(newstring) Output Stack, Code and Stack, Code and A: You could use for example a capture group: (S(?<!\S.).*?)\s*S(?<!\S.) Explanation ( Capture group 1 S(?<!\S.) Match S and assert that to the left of the S there is not a whitespace boundary .*? Match any character, as few as possible ) Close group \s* Match optional whitespace chars S(?<!\S.) Match S and assert that to the left of the S there is not a whitespace boundary See a regex demo and a Python demo. Example code: import re pattern = r"(S(?<!\S.).*?)\s*S(?<!\S.)" s = "Stack, Code and StackOverflow" m = re.search(pattern, s) if m: print(m.group(1)) Output Stack, Code and Another option using a lookaround to assert the S to the right and not consume it to allow multiple matches after each other: S(?<!\S.).*?(?=\s*S(?<!\S.)) Regex demo import re pattern = r"S(?<!\S.).*?(?=\s*S(?<!\S.))" s = "Stack, Code and StackOverflow test Stack" print(re.findall(pattern, s)) Output ['Stack, Code and', 'StackOverflow test']
A regex pattern that matches all words starting from a word with an s and stopping before a word that starts with an s
I'm trying to capture words in a string such that the first word starts with an s, and the regex stops matching if the next word also starts with an s. For example. I have the string " Stack, Code and StackOverflow". I want to capture only " Stack, Code and " and not include "StackOverflow" in the match. This is what I am thinking: Start with a space followed by an s. Match everything except if the group is a space and an s (I'm using negative lookahead). The regex I have tried: (?<=\s)S[a-z -,]*(?!(\sS)) I don't know how to make it work.
[ "I think this should work. I adapted the regex from this thread. You can also test it out here. I have also included a non-regex solution. I basically track the first occurrence of a word starting with an 's' and the next word starting with an 's' and get the words in that range.\nimport re\n\nteststring = \" Stack, Code and StackOverflow\"\nextractText = re.search(r\"(\\s)[sS][^*\\s]*[^sS]*\", teststring)\n\nprint(extractText[0])\n\n#non-regex solution\nlistwords = teststring.split(' ')\n\n# non regex solution\nstart = 0\nend = 0\nfor i,word in enumerate(listwords):\n if word.startswith('s') or word.startswith('S'):\n if start == 0:\n start = i\n else:\n end = i\n break\n\nnewstring = \" \" + \" \".join([word for word in listwords[start:end]])\nprint(newstring)\n\nOutput\n Stack, Code and\n Stack, Code and\n\n", "You could use for example a capture group:\n(S(?<!\\S.).*?)\\s*S(?<!\\S.)\n\nExplanation\n\n( Capture group 1\n\nS(?<!\\S.) Match S and assert that to the left of the S there is not a whitespace boundary\n.*? Match any character, as few as possible\n\n\n) Close group\n\\s* Match optional whitespace chars\nS(?<!\\S.) Match S and assert that to the left of the S there is not a whitespace boundary\n\nSee a regex demo and a Python demo.\nExample code:\nimport re\n\npattern = r\"(S(?<!\\S.).*?)\\s*S(?<!\\S.)\"\ns = \"Stack, Code and StackOverflow\"\nm = re.search(pattern, s)\nif m:\n print(m.group(1))\n\nOutput\nStack, Code and\n\n\nAnother option using a lookaround to assert the S to the right and not consume it to allow multiple matches after each other:\n S(?<!\\S.).*?(?=\\s*S(?<!\\S.))\n\nRegex demo\nimport re\n\npattern = r\"S(?<!\\S.).*?(?=\\s*S(?<!\\S.))\"\ns = \"Stack, Code and StackOverflow test Stack\"\nprint(re.findall(pattern, s))\n\nOutput\n['Stack, Code and', 'StackOverflow test']\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "regex", "string" ]
stackoverflow_0074595679_python_regex_string.txt
Q: Can't find ADP login page password box by webdriver I want to download report from ADP platform through webdriver, but I can't locate the login page password box. Can anyone help me, thanks a lot! Blow is my part code: print("start to login") chrome.get('https://online.adp.com/signin/v1/?APPID=WFNPortal&productId=80e309c3-7085-bae1-e053-3505430b5495&returnURL=https://workforcenow.adp.com/&callingAppId=WFN&TARGET=-SM-https://workforcenow.adp.com/theme/unified.html') wait.until(lambda x: x.find_elements(By.TAG_NAME, 'input')) user_name=chrome.find_element(By.XPATH, '//*[@id="login-form_username"]') user_name.send_keys('phenix.gao') wait.until(lambda x: x.find_element(By.ID, "verifUseridBtn")) chrome.find_element(By.ID, "verifUseridBtn").click() print("input ps") time.sleep(3) # wait.until(lambda x: x.find_element(By.XPATH,'//*[@id="login-form_password"]')) # wait.until(lambda x: x.find_element(By.ID, "login-form_password")) chrome.find_element(By.ID, "login-form_password").send_keys('123') print("click login button") wait.until(lambda x: x.find_elements(By.ID,'signBtn')) chrome.find_element(By.ID,'signBtn').click() wait.until(lambda x: x.find_element(By.XPATH, '//*[@title="Dashboards"]')) print("login success") ADP login page error I try to find it by XPATH and ID, and also add wait code, but none of them worked. Moreover, it is not in ifame. A: The selector seems ok. I think the problem is in the application. When running the test the application shows an error when next button is clicked. (By.ID, "login-form_password") You can try with this as well: WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.ID, "login-form_password")) The easiest way to wait for the elements is to set implicit wait: driver.implicitly_wait(10) # seconds
Can't find ADP login page password box by webdriver
I want to download report from ADP platform through webdriver, but I can't locate the login page password box. Can anyone help me, thanks a lot! Blow is my part code: print("start to login") chrome.get('https://online.adp.com/signin/v1/?APPID=WFNPortal&productId=80e309c3-7085-bae1-e053-3505430b5495&returnURL=https://workforcenow.adp.com/&callingAppId=WFN&TARGET=-SM-https://workforcenow.adp.com/theme/unified.html') wait.until(lambda x: x.find_elements(By.TAG_NAME, 'input')) user_name=chrome.find_element(By.XPATH, '//*[@id="login-form_username"]') user_name.send_keys('phenix.gao') wait.until(lambda x: x.find_element(By.ID, "verifUseridBtn")) chrome.find_element(By.ID, "verifUseridBtn").click() print("input ps") time.sleep(3) # wait.until(lambda x: x.find_element(By.XPATH,'//*[@id="login-form_password"]')) # wait.until(lambda x: x.find_element(By.ID, "login-form_password")) chrome.find_element(By.ID, "login-form_password").send_keys('123') print("click login button") wait.until(lambda x: x.find_elements(By.ID,'signBtn')) chrome.find_element(By.ID,'signBtn').click() wait.until(lambda x: x.find_element(By.XPATH, '//*[@title="Dashboards"]')) print("login success") ADP login page error I try to find it by XPATH and ID, and also add wait code, but none of them worked. Moreover, it is not in ifame.
[ "The selector seems ok. I think the problem is in the application. When running the test the application shows an error when next button is clicked.\n(By.ID, \"login-form_password\")\n\nYou can try with this as well:\nWebDriverWait(driver, 10).until(\n EC.presence_of_element_located((By.ID, \"login-form_password\"))\n\nThe easiest way to wait for the elements is to set implicit wait:\ndriver.implicitly_wait(10) # seconds\n\n" ]
[ 0 ]
[]
[]
[ "adp", "python", "selenium", "webdriver" ]
stackoverflow_0074597372_adp_python_selenium_webdriver.txt
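The explicit-wait suggestion from the answer, written out with the imports it needs; the locator, timeout and the chrome driver name follow the question and answer.

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(chrome, 10)
password_box = wait.until(
    EC.presence_of_element_located((By.ID, "login-form_password"))
)
password_box.send_keys("123")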
Q: How to join lists inside two or more different lists in arranged order? I have three lists as follows. A = [1, 2, 3]; B = [[3, 4, 5], [4, 5, 6], [4, 5, 7], [7, 4, 3]]; C = [[2, 3, 1], [2, 3, 3], [2, 4, 5], [4, 5, 6], [7, 3, 1]] I want to create another list containing all the above inner lists starting from A to C. Desired = [elements of A, elements of B, elements of C] just like this. Desired = [[1, 2, 3], [3, 4, 5], [4, 5, 6], [4, 5, 7], [7, 4, 3], [2, 3, 1], [2, 3, 3], [2, 4, 5], [4, 5, 6], [7, 3, 1]] A: The method I use below is to see if the inner_list contents are infact a list of themselves. If they are, then append the inner list. If they are not, then append the outer_list. A = [1, 2, 3]; B = [[3, 4, 5], [4, 5, 6], [4, 5, 7], [7, 4, 3]]; C = [[2, 3, 1], [2, 3, 3], [2, 4, 5], [4, 5, 6], [7, 3, 1]] desired = [] for outer_list in (A,B,C): list_state = False for inner_list in outer_list: if isinstance(inner_list, list): desired.append(inner_list) else: list_state = True # Add outer_list only if the inner_list was not a list if list_state: desired.append(outer_list) print(desired) OUTPUT: [[1, 2, 3], [3, 4, 5], [4, 5, 6], [4, 5, 7], [7, 4, 3], [2, 3, 1], [2, 3, 3], [2, 4, 5], [4, 5, 6], [7, 3, 1]] A: It seems like you know which list is nested (B and C) and which isn't (A). If that's the case then you could simply do: result = [A] + B + C If you don't know but can determine the nestedness by looking at the first item, then you could do: result = [ item for numbers in (A, B, C) for item in (numbers if isinstance(numbers[0], list) else [numbers]) ] If that's also not the case, i.e. there could be mixed lists (lists that contain numbers and lists), then you could use itertools.groupby from the standard library: from itertools import groupby result = [] for key, group in groupby(A + B + C, key=lambda item: isinstance(item, list)): if not key: group = [[*group]] result.extend(group) Results for all versions (with A, B and C like provided): [[1, 2, 3], [3, 4, 5], [4, 5, 6], [4, 5, 7], [7, 4, 3], [2, 3, 1], [2, 3, 3], [2, 4, 5], [4, 5, 6], [7, 3, 1]] A: I made something to make a list to an nested list if it is not. A = [1, 2, 3] B = [[3, 4, 5], [4, 5, 6], [4, 5, 7], [7, 4, 3]] C = [[2, 3, 1], [2, 3, 3], [2, 4, 5], [4, 5, 6], [7, 3, 1]] listt = [ 'A', 'B', 'C' ] des = [] for i in listt: if type((globals()[i])[0]) != list: globals()[i] = [[i for i in globals()[i]]] #turns into nested list if not (line 7-9) for i in (A,B,C): for x in i: des.append(x) print(des) Output: [[1, 2, 3], [3, 4, 5], [4, 5, 6], [4, 5, 7], [7, 4, 3], [2, 3, 1], [2, 3, 3], [2, 4, 5], [4, 5, 6], [7, 3, 1]]
How to join lists inside two or more different lists in arranged order?
I have three lists as follows. A = [1, 2, 3]; B = [[3, 4, 5], [4, 5, 6], [4, 5, 7], [7, 4, 3]]; C = [[2, 3, 1], [2, 3, 3], [2, 4, 5], [4, 5, 6], [7, 3, 1]] I want to create another list containing all the above inner lists starting from A to C. Desired = [elements of A, elements of B, elements of C] just like this. Desired = [[1, 2, 3], [3, 4, 5], [4, 5, 6], [4, 5, 7], [7, 4, 3], [2, 3, 1], [2, 3, 3], [2, 4, 5], [4, 5, 6], [7, 3, 1]]
[ "The method I use below is to see if the inner_list contents are infact a list of themselves.\n\nIf they are, then append the inner list.\nIf they are not, then append the outer_list.\n\nA = [1, 2, 3]; \nB = [[3, 4, 5], [4, 5, 6], [4, 5, 7], [7, 4, 3]]; \nC = [[2, 3, 1], [2, 3, 3], [2, 4, 5], [4, 5, 6], [7, 3, 1]]\n\ndesired = []\n\nfor outer_list in (A,B,C):\n list_state = False\n for inner_list in outer_list:\n if isinstance(inner_list, list):\n desired.append(inner_list)\n else:\n list_state = True\n \n # Add outer_list only if the inner_list was not a list\n if list_state:\n desired.append(outer_list)\n \nprint(desired)\n\nOUTPUT:\n[[1, 2, 3], [3, 4, 5], [4, 5, 6], [4, 5, 7], [7, 4, 3], [2, 3, 1], [2, 3, 3], [2, 4, 5], [4, 5, 6], [7, 3, 1]]\n\n", "It seems like you know which list is nested (B and C) and which isn't (A). If that's the case then you could simply do:\nresult = [A] + B + C\n\nIf you don't know but can determine the nestedness by looking at the first item, then you could do:\nresult = [\n item\n for numbers in (A, B, C)\n for item in (numbers if isinstance(numbers[0], list) else [numbers])\n]\n\nIf that's also not the case, i.e. there could be mixed lists (lists that contain numbers and lists), then you could use itertools.groupby from the standard library:\nfrom itertools import groupby\n\nresult = []\nfor key, group in groupby(A + B + C, key=lambda item: isinstance(item, list)):\n if not key:\n group = [[*group]]\n result.extend(group)\n\nResults for all versions (with A, B and C like provided):\n[[1, 2, 3], [3, 4, 5], [4, 5, 6], [4, 5, 7], [7, 4, 3],\n [2, 3, 1], [2, 3, 3], [2, 4, 5], [4, 5, 6], [7, 3, 1]]\n\n", "I made something to make a list to an nested list if it is not.\nA = [1, 2, 3]\nB = [[3, 4, 5], [4, 5, 6], [4, 5, 7], [7, 4, 3]]\nC = [[2, 3, 1], [2, 3, 3], [2, 4, 5], [4, 5, 6], [7, 3, 1]]\nlistt = [ 'A', 'B', 'C' ]\ndes = []\n\nfor i in listt:\n if type((globals()[i])[0]) != list:\n globals()[i] = [[i for i in globals()[i]]] #turns into nested list if not (line 7-9)\n\nfor i in (A,B,C):\n for x in i:\n des.append(x)\n\nprint(des)\n\nOutput:\n[[1, 2, 3], [3, 4, 5], [4, 5, 6], [4, 5, 7], [7, 4, 3], [2, 3, 1], [2, 3, 3], [2, 4, 5], [4, 5, 6], [7, 3, 1]]\n\n" ]
[ 2, 2, 1 ]
[]
[]
[ "arrays", "list", "python" ]
stackoverflow_0074580747_arrays_list_python.txt
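An equivalent one-liner with the standard library, assuming (as in the question) that A is the only flat list and B and C are already nested:
from itertools import chain

A = [1, 2, 3]
B = [[3, 4, 5], [4, 5, 6], [4, 5, 7], [7, 4, 3]]
C = [[2, 3, 1], [2, 3, 3], [2, 4, 5], [4, 5, 6], [7, 3, 1]]

Desired = list(chain([A], B, C))  # wrap the flat list so every element stays a sub-list
print(Desired)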
Q: Why PyTorch BatchNorm1D gives "batch_norm" not implemented for 'Long'" error while normalizing Integer type tensor? I am trying to learn some functions in Pytorch framework and was stuck due to below error while normalizing a simple integer tensor. Could someone please help me with this. Here is the sample code to reproduce the error - import torch import torch.nn as nn #Integer type tensor test_int_input = torch.randint(size = [3,5],low=1,high=9) # BatchNorm1D object batchnorm1D = nn.BatchNorm1d(num_features=5) test_output = batchnorm1D(test_int_input) Error - --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-38-6c672cd731fa> in <module> 1 batchnorm1D = nn.BatchNorm1d(num_features=5) ----> 2 test_output = batchnorm1D(test_input) /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: --> 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) /opt/conda/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py in forward(self, input) 105 input, self.running_mean, self.running_var, self.weight, self.bias, 106 self.training or not self.track_running_stats, --> 107 exponential_average_factor, self.eps) 108 109 /opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps) 1668 return torch.batch_norm( 1669 input, weight, bias, running_mean, running_var, -> 1670 training, momentum, eps, torch.backends.cudnn.enabled 1671 ) 1672 RuntimeError: "batch_norm" not implemented for 'Long' However, if we try to apply the same on a different non-int tensor, then it works. Here is the example - import torch import torch.nn as nn #Integer type tensor #test_input = torch.randn(size = [3,5]) # BatchNorm1D object batchnorm1D = nn.BatchNorm1d(num_features=5) test_output = batchnorm1D(test_input) test_output Output - tensor([[ 0.4311, -1.1987, 0.9059, 1.1424, 1.2174], [-1.3820, 1.2492, -1.3934, 0.1508, 0.0146], [ 0.9509, -0.0505, 0.4875, -1.2931, -1.2320]], grad_fn=<NativeBatchNormBackward>) A: Your input tensor should be a floating point: >>> batchnorm1D(test_int_input.float()) tensor([[-5.9605e-08, -1.3887e+00, -9.8058e-01, 2.6726e-01, 1.4142e+00], [-1.2247e+00, 4.6291e-01, 1.3728e+00, -1.3363e+00, -7.0711e-01], [ 1.2247e+00, 9.2582e-01, -3.9223e-01, 1.0690e+00, -7.0711e-01]], grad_fn=<NativeBatchNormBackward0>)
Why PyTorch BatchNorm1D gives "batch_norm" not implemented for 'Long'" error while normalizing Integer type tensor?
I am trying to learn some functions in Pytorch framework and was stuck due to below error while normalizing a simple integer tensor. Could someone please help me with this. Here is the sample code to reproduce the error - import torch import torch.nn as nn #Integer type tensor test_int_input = torch.randint(size = [3,5],low=1,high=9) # BatchNorm1D object batchnorm1D = nn.BatchNorm1d(num_features=5) test_output = batchnorm1D(test_int_input) Error - --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-38-6c672cd731fa> in <module> 1 batchnorm1D = nn.BatchNorm1d(num_features=5) ----> 2 test_output = batchnorm1D(test_input) /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: --> 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) /opt/conda/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py in forward(self, input) 105 input, self.running_mean, self.running_var, self.weight, self.bias, 106 self.training or not self.track_running_stats, --> 107 exponential_average_factor, self.eps) 108 109 /opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps) 1668 return torch.batch_norm( 1669 input, weight, bias, running_mean, running_var, -> 1670 training, momentum, eps, torch.backends.cudnn.enabled 1671 ) 1672 RuntimeError: "batch_norm" not implemented for 'Long' However, if we try to apply the same on a different non-int tensor, then it works. Here is the example - import torch import torch.nn as nn #Integer type tensor #test_input = torch.randn(size = [3,5]) # BatchNorm1D object batchnorm1D = nn.BatchNorm1d(num_features=5) test_output = batchnorm1D(test_input) test_output Output - tensor([[ 0.4311, -1.1987, 0.9059, 1.1424, 1.2174], [-1.3820, 1.2492, -1.3934, 0.1508, 0.0146], [ 0.9509, -0.0505, 0.4875, -1.2931, -1.2320]], grad_fn=<NativeBatchNormBackward>)
[ "Your input tensor should be a floating point:\n>>> batchnorm1D(test_int_input.float())\ntensor([[-5.9605e-08, -1.3887e+00, -9.8058e-01, 2.6726e-01, 1.4142e+00],\n [-1.2247e+00, 4.6291e-01, 1.3728e+00, -1.3363e+00, -7.0711e-01],\n [ 1.2247e+00, 9.2582e-01, -3.9223e-01, 1.0690e+00, -7.0711e-01]],\n grad_fn=<NativeBatchNormBackward0>)\n\n" ]
[ 1 ]
[]
[]
[ "deep_learning", "machine_learning", "normalization", "python", "pytorch" ]
stackoverflow_0074598178_deep_learning_machine_learning_normalization_python_pytorch.txt
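The same fix can be applied when the tensor is created; a sketch only, with num_features=5 taken from the question (batch norm needs a floating-point input because it computes means and variances):
import torch
import torch.nn as nn

test_input = torch.randint(low=1, high=9, size=(3, 5)).float()  # cast the integers up front
test_output = nn.BatchNorm1d(num_features=5)(test_input)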
Q: Custom huggingface Tokenizer with custom model I am working on molecule data with representation called SMILES. an example molecule string looks like Cc1ccccc1N1C(=O)NC(=O)C(=Cc2cc(Br)c(N3CCOCC3)o2)C1=O. Now, I want a custom Tokenizer which can be used with Huggingface transformer APIs. I also donot want to use the existing tokenizer models like BPE etc. I want the SMILES string parsed through regex to give individual characters as tokens as follows: import re SMI_REGEX_PATTERN = r"""(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=| #|-|\+|\\|\/|:|~|@|\?|>>?|\*|\$|\%[0-9]{2}|[0-9])""" regex = re.compile(SMI_REGEX_PATTERN) molecule = 'Cc1ccccc1N1C(=O)NC(=O)C(=Cc2cc(Br)c(N3CCOCC3)o2)C1=O' tokens = regex.findall(molecule) It is fairly simple to do the above, but I need a tokenizer which works with, let's say BERT API of Huggingface. Also, I donot want to use lowercase conversion, but still use BERT. A: This code snippet provides a tokenizer that can be used with Hugging Face transformers. It uses a simple Word Level (= mapping) "algorithm". from tokenizers import Regex, Tokenizer from tokenizers.models import WordLevel from tokenizers.pre_tokenizers import Split from tokenizers.processors import TemplateProcessing from tokenizers.trainers import WordLevelTrainer from transformers import PreTrainedTokenizerFast SMI_REGEX_PATTERN = r"""(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|-|\+|\\|\/|:|~|@|\?|>>?|\*|\$|\%[0-9]{2}|[0-9])""" BOS_TOKEN = "^" EOS_TOKEN = "&" PAD_TOKEN = " " UNK_TOKEN = "?" MODEL_MAX_LENGTH = 512 smi = "CC(C)(C)c1ccc2occ(CC(=O)Nc3ccccc3F)c2c1" smiles_tokenizer = Tokenizer(WordLevel(unk_token=UNK_TOKEN)) smiles_tokenizer.pre_tokenizer = Split( pattern=Regex(SMI_REGEX_PATTERN), behavior="isolated", invert=False ) smiles_trainer = WordLevelTrainer( special_tokens=[BOS_TOKEN, EOS_TOKEN, PAD_TOKEN, UNK_TOKEN] ) smiles_tokenizer.train_from_iterator([smi], trainer=smiles_trainer) smiles_tokenizer.post_processor = TemplateProcessing( single=BOS_TOKEN + " $A " + EOS_TOKEN, special_tokens=[ (BOS_TOKEN, smiles_tokenizer.token_to_id(BOS_TOKEN)), (EOS_TOKEN, smiles_tokenizer.token_to_id(EOS_TOKEN)), ], ) tokenizer_pretrained = PreTrainedTokenizerFast( tokenizer_object=smiles_tokenizer, model_max_length=MODEL_MAX_LENGTH, padding_side="right", truncation_side="left", bos_token=BOS_TOKEN, eos_token=EOS_TOKEN, pad_token=PAD_TOKEN, unk_token=UNK_TOKEN, ) print(tokenizer_pretrained.encode(smi)) # [0, 5, 5, 6, 5, ..., 4, 8, 1]
Custom huggingface Tokenizer with custom model
I am working on molecule data with representation called SMILES. an example molecule string looks like Cc1ccccc1N1C(=O)NC(=O)C(=Cc2cc(Br)c(N3CCOCC3)o2)C1=O. Now, I want a custom Tokenizer which can be used with Huggingface transformer APIs. I also donot want to use the existing tokenizer models like BPE etc. I want the SMILES string parsed through regex to give individual characters as tokens as follows: import re SMI_REGEX_PATTERN = r"""(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=| #|-|\+|\\|\/|:|~|@|\?|>>?|\*|\$|\%[0-9]{2}|[0-9])""" regex = re.compile(SMI_REGEX_PATTERN) molecule = 'Cc1ccccc1N1C(=O)NC(=O)C(=Cc2cc(Br)c(N3CCOCC3)o2)C1=O' tokens = regex.findall(molecule) It is fairly simple to do the above, but I need a tokenizer which works with, let's say BERT API of Huggingface. Also, I donot want to use lowercase conversion, but still use BERT.
[ "This code snippet provides a tokenizer that can be used with Hugging Face transformers. It uses a simple Word Level (= mapping) \"algorithm\".\nfrom tokenizers import Regex, Tokenizer\nfrom tokenizers.models import WordLevel\nfrom tokenizers.pre_tokenizers import Split\nfrom tokenizers.processors import TemplateProcessing\nfrom tokenizers.trainers import WordLevelTrainer\nfrom transformers import PreTrainedTokenizerFast\n\nSMI_REGEX_PATTERN = r\"\"\"(\\[[^\\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\\(|\\)|\\.|=|-|\\+|\\\\|\\/|:|~|@|\\?|>>?|\\*|\\$|\\%[0-9]{2}|[0-9])\"\"\"\nBOS_TOKEN = \"^\"\nEOS_TOKEN = \"&\"\nPAD_TOKEN = \" \"\nUNK_TOKEN = \"?\"\nMODEL_MAX_LENGTH = 512\n\nsmi = \"CC(C)(C)c1ccc2occ(CC(=O)Nc3ccccc3F)c2c1\"\n\nsmiles_tokenizer = Tokenizer(WordLevel(unk_token=UNK_TOKEN))\nsmiles_tokenizer.pre_tokenizer = Split(\n pattern=Regex(SMI_REGEX_PATTERN), behavior=\"isolated\", invert=False\n)\nsmiles_trainer = WordLevelTrainer(\n special_tokens=[BOS_TOKEN, EOS_TOKEN, PAD_TOKEN, UNK_TOKEN]\n)\nsmiles_tokenizer.train_from_iterator([smi], trainer=smiles_trainer)\nsmiles_tokenizer.post_processor = TemplateProcessing(\n single=BOS_TOKEN + \" $A \" + EOS_TOKEN,\n special_tokens=[\n (BOS_TOKEN, smiles_tokenizer.token_to_id(BOS_TOKEN)),\n (EOS_TOKEN, smiles_tokenizer.token_to_id(EOS_TOKEN)),\n ],\n)\n\ntokenizer_pretrained = PreTrainedTokenizerFast(\n tokenizer_object=smiles_tokenizer,\n model_max_length=MODEL_MAX_LENGTH,\n padding_side=\"right\",\n truncation_side=\"left\",\n bos_token=BOS_TOKEN,\n eos_token=EOS_TOKEN,\n pad_token=PAD_TOKEN,\n unk_token=UNK_TOKEN,\n)\n\nprint(tokenizer_pretrained.encode(smi)) # [0, 5, 5, 6, 5, ..., 4, 8, 1]\n\n" ]
[ 1 ]
[]
[]
[ "huggingface_tokenizers", "huggingface_transformers", "nlp", "python" ]
stackoverflow_0067513831_huggingface_tokenizers_huggingface_transformers_nlp_python.txt
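A possible follow-up for reusing the tokenizer built in the answer with the rest of the transformers API; the directory name is only an example:
tokenizer_pretrained.save_pretrained("smiles_tokenizer")  # example path, adjust as needed

from transformers import PreTrainedTokenizerFast
reloaded = PreTrainedTokenizerFast.from_pretrained("smiles_tokenizer")
print(reloaded.encode(smi))  # same ids as before saving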
Q: If/Else or One Line Say I've got a class who has a property called MyClass.name. I'm looping through some data where I want to arrange names to either be MyClass.name or other. I've got a method: def return_name(self, the_name): if the_name == self.name: return the_name else: return 'other' Would it make sense to rewrite the method as: def return_name(self, the_name): return the_name * (self.name == the_name) + 'other' * (self.name != the_name) I get that both examples produce the same output (the second might even have slightly better performance due to being branchless, but that's gotta be negligible, the method is so short that it's isn't going to affect the runtime at all), so I'm asking purely from a readability versus code length standpoint. Which one is to be preferred? A: The first one is better due to readability. As you mansion performance is not an issue. You could test it with a timer. The branchless argument is decent, however '*' is often a slow operation. Python also allows you to use the if else in a single line. def return_name(self, the_name): return the_name if self.name == the_name else 'other' A: I would strongly recommend against the second version: it's not quite clear what you are doing unless you know that, in Python, booleans "are" integers. If you want a single expression prefer this instead def return_name(self, the_name): return the_name if the_name == self.name else "other" Between this version and the branching version, I would prefer the former, but I think this is more of a personal taste than actual Python coding styles. As a general advice, you should not prefer a (maybe) slightly faster version of code which is also much less readable. Note that, in some languages, the branchless version would be the idiomatic solution (in J for instance), but not in Python (not that I know of at least).
If/Else or One Line
Say I've got a class who has a property called MyClass.name. I'm looping through some data where I want to arrange names to either be MyClass.name or other. I've got a method: def return_name(self, the_name): if the_name == self.name: return the_name else: return 'other' Would it make sense to rewrite the method as: def return_name(self, the_name): return the_name * (self.name == the_name) + 'other' * (self.name != the_name) I get that both examples produce the same output (the second might even have slightly better performance due to being branchless, but that's gotta be negligible, the method is so short that it's isn't going to affect the runtime at all), so I'm asking purely from a readability versus code length standpoint. Which one is to be preferred?
[ "The first one is better due to readability. As you mansion performance is not an issue. You could test it with a timer.\nThe branchless argument is decent, however '*' is often a slow operation.\nPython also allows you to use the if else in a single line.\ndef return_name(self, the_name):\n return the_name if self.name == the_name else 'other'\n\n", "I would strongly recommend against the second version: it's not quite clear what you are doing unless you know that, in Python, booleans \"are\" integers. If you want a single expression prefer this instead\ndef return_name(self, the_name):\n return the_name if the_name == self.name else \"other\"\n\nBetween this version and the branching version, I would prefer the former, but I think this is more of a personal taste than actual Python coding styles.\nAs a general advice, you should not prefer a (maybe) slightly faster version of code which is also much less readable.\nNote that, in some languages, the branchless version would be the idiomatic solution (in J for instance), but not in Python (not that I know of at least).\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074598418_python.txt
Q: Merge PDF files Is it possible, using Python, to merge separate PDF files? Assuming so, I need to extend this a little further. I am hoping to loop through folders in a directory and repeat this procedure. And I may be pushing my luck, but is it possible to exclude a page that is contained in each of the PDFs (my report generation always creates an extra blank page). A: You can use PyPdf2s PdfMerger class. File Concatenation You can simply concatenate files by using the append method. from PyPDF2 import PdfMerger pdfs = ['file1.pdf', 'file2.pdf', 'file3.pdf', 'file4.pdf'] merger = PdfMerger() for pdf in pdfs: merger.append(pdf) merger.write("result.pdf") merger.close() You can pass file handles instead file paths if you want. File Merging If you want more fine grained control of merging there is a merge method of the PdfMerger, which allows you to specify an insertion point in the output file, meaning you can insert the pages anywhere in the file. The append method can be thought of as a merge where the insertion point is the end of the file. e.g. merger.merge(2, pdf) Here we insert the whole pdf into the output but at page 2. Page Ranges If you wish to control which pages are appended from a particular file, you can use the pages keyword argument of append and merge, passing a tuple in the form (start, stop[, step]) (like the regular range function). e.g. merger.append(pdf, pages=(0, 3)) # first 3 pages merger.append(pdf, pages=(0, 6, 2)) # pages 1,3, 5 If you specify an invalid range you will get an IndexError. Note: also that to avoid files being left open, the PdfFileMergers close method should be called when the merged file has been written. This ensures all files are closed (input and output) in a timely manner. It's a shame that PdfFileMerger isn't implemented as a context manager, so we can use the with keyword, avoid the explicit close call and get some easy exception safety. You might also want to look at the pdfcat script provided as part of pypdf2. You can potentially avoid the need to write code altogether. The PyPdf2 github also includes some example code demonstrating merging. PyMuPdf Another library perhaps worth a look is PyMuPdf. Merging is equally simple. From command line: python -m fitz join -o result.pdf file1.pdf file2.pdf file3.pdf and from code import fitz result = fitz.open() for pdf in ['file1.pdf', 'file2.pdf', 'file3.pdf']: with fitz.open(pdf) as mfile: result.insertPDF(mfile) result.save("result.pdf") With plenty of options, detailed in the projects wiki. A: Use Pypdf or its successor PyPDF2: A Pure-Python library built as a PDF toolkit. It is capable of: splitting documents page by page, merging documents page by page, (and much more) Here's a sample program that works with both versions. #!/usr/bin/env python import sys try: from PyPDF2 import PdfFileReader, PdfFileWriter except ImportError: from pyPdf import PdfFileReader, PdfFileWriter def pdf_cat(input_files, output_stream): input_streams = [] try: # First open all the files, then produce the output file, and # finally close the input files. This is necessary because # the data isn't read from the input files until the write # operation. 
Thanks to # https://stackoverflow.com/questions/6773631/problem-with-closing-python-pypdf-writing-getting-a-valueerror-i-o-operation/6773733#6773733 for input_file in input_files: input_streams.append(open(input_file, 'rb')) writer = PdfFileWriter() for reader in map(PdfFileReader, input_streams): for n in range(reader.getNumPages()): writer.addPage(reader.getPage(n)) writer.write(output_stream) finally: for f in input_streams: f.close() output_stream.close() if __name__ == '__main__': if sys.platform == "win32": import os, msvcrt msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY) pdf_cat(sys.argv[1:], sys.stdout) A: Merge all pdf files that are present in a dir Put the pdf files in a dir. Launch the program. You get one pdf with all the pdfs merged. import os from PyPDF2 import PdfFileMerger x = [a for a in os.listdir() if a.endswith(".pdf")] merger = PdfFileMerger() for pdf in x: merger.append(open(pdf, 'rb')) with open("result.pdf", "wb") as fout: merger.write(fout) How would I make the same code above today from glob import glob from PyPDF2 import PdfFileMerger def pdf_merge(): ''' Merges all the pdf files in current directory ''' merger = PdfFileMerger() allpdfs = [a for a in glob("*.pdf")] [merger.append(pdf) for pdf in allpdfs] with open("Merged_pdfs.pdf", "wb") as new_file: merger.write(new_file) if __name__ == "__main__": pdf_merge() A: The pdfrw library can do this quite easily, assuming you don't need to preserve bookmarks and annotations, and your PDFs aren't encrypted. cat.py is an example concatenation script, and subset.py is an example page subsetting script. The relevant part of the concatenation script -- assumes inputs is a list of input filenames, and outfn is an output file name: from pdfrw import PdfReader, PdfWriter writer = PdfWriter() for inpfn in inputs: writer.addpages(PdfReader(inpfn).pages) writer.write(outfn) As you can see from this, it would be pretty easy to leave out the last page, e.g. something like: writer.addpages(PdfReader(inpfn).pages[:-1]) Disclaimer: I am the primary pdfrw author. A: Is it possible, using Python, to merge seperate PDF files? Yes. The following example merges all files in one folder to a single new PDF file: #!/usr/bin/env python # -*- coding: utf-8 -*- from argparse import ArgumentParser from glob import glob from pyPdf import PdfFileReader, PdfFileWriter import os def merge(path, output_filename): output = PdfFileWriter() for pdffile in glob(path + os.sep + '*.pdf'): if pdffile == output_filename: continue print("Parse '%s'" % pdffile) document = PdfFileReader(open(pdffile, 'rb')) for i in range(document.getNumPages()): output.addPage(document.getPage(i)) print("Start writing '%s'" % output_filename) with open(output_filename, "wb") as f: output.write(f) if __name__ == "__main__": parser = ArgumentParser() # Add more options if you like parser.add_argument("-o", "--output", dest="output_filename", default="merged.pdf", help="write merged PDF to FILE", metavar="FILE") parser.add_argument("-p", "--path", dest="path", default=".", help="path of source PDF files") args = parser.parse_args() merge(args.path, args.output_filename) A: here, http://pieceofpy.com/2009/03/05/concatenating-pdf-with-python/, gives an solution. 
similarly: from pyPdf import PdfFileWriter, PdfFileReader def append_pdf(input,output): [output.addPage(input.getPage(page_num)) for page_num in range(input.numPages)] output = PdfFileWriter() append_pdf(PdfFileReader(file("C:\\sample.pdf","rb")),output) append_pdf(PdfFileReader(file("c:\\sample1.pdf","rb")),output) append_pdf(PdfFileReader(file("c:\\sample2.pdf","rb")),output) append_pdf(PdfFileReader(file("c:\\sample3.pdf","rb")),output) output.write(file("c:\\combined.pdf","wb")) ------ Updated on 25th Nov. ------ ------ Seems above code doesn't work anymore------ ------ Please use the following:------ from PyPDF2 import PdfFileMerger, PdfFileReader import os merger = PdfFileMerger() file_folder = "C:\\My Ducoments\\" root, dirs, files = next(os.walk(file_folder)) for path, subdirs, files in os.walk(root): for f in files: if f.endswith(".pdf"): merger.append(file_folder + f) merger.write(file_folder + "Economists-1.pdf") A: from PyPDF2 import PdfFileMerger import webbrowser import os dir_path = os.path.dirname(os.path.realpath(__file__)) def list_files(directory, extension): return (f for f in os.listdir(directory) if f.endswith('.' + extension)) pdfs = list_files(dir_path, "pdf") merger = PdfFileMerger() for pdf in pdfs: merger.append(open(pdf, 'rb')) with open('result.pdf', 'wb') as fout: merger.write(fout) webbrowser.open_new('file://'+ dir_path + '/result.pdf') Git Repo: https://github.com/mahaguru24/Python_Merge_PDF.git A: You can use pikepdf too (source code documentation). Example code could be (taken from the documentation): from glob import glob from pikepdf import Pdf pdf = Pdf.new() for file in glob('*.pdf'): # you can change this to browse directories recursively with Pdf.open(file) as src: pdf.pages.extend(src.pages) pdf.save('merged.pdf') pdf.close() If you want to exclude pages, you might proceed another way, for instance copying pages to a new pdf (you can select which ones you do not copy, then, the pdf.pages object behaving like a list). It is still actively maintained, which, as of february 2022, does not seem to be the case of PyPDF2 nor pdfrw. I haven't benchmarked it, so I don't know if it is quicker or slower than other solutions. One advantage over PyMuPDF, in my case, is that an official Ubuntu package is available (python3-pikepdf), what is practical to package my own software depending on it. A: Here's a time comparison for the most common answers for my specific use case: combining a list of 5 large single-page pdf files. I ran each test twice. (Disclaimer: I ran this function within Flask, your mileage may vary) TL;DR pdfrw is the fastest library for combining pdfs out of the 3 I tested. 
PyPDF2 start = time.time() merger = PdfFileMerger() for pdf in all_pdf_obj: merger.append( os.path.join( os.getcwd(), pdf.filename # full path ) ) formatted_name = f'Summary_Invoice_{date.today()}.pdf' merge_file = os.path.join(os.getcwd(), formatted_name) merger.write(merge_file) merger.close() end = time.time() print(end - start) #1 66.50084733963013 #2 68.2995400428772 PyMuPDF start = time.time() result = fitz.open() for pdf in all_pdf_obj: with fitz.open(os.path.join(os.getcwd(), pdf.filename)) as mfile: result.insertPDF(mfile) formatted_name = f'Summary_Invoice_{date.today()}.pdf' result.save(formatted_name) end = time.time() print(end - start) #1 2.7166640758514404 #2 1.694727897644043 pdfrw start = time.time() result = fitz.open() writer = PdfWriter() for pdf in all_pdf_obj: writer.addpages(PdfReader(os.path.join(os.getcwd(), pdf.filename)).pages) formatted_name = f'Summary_Invoice_{date.today()}.pdf' writer.write(formatted_name) end = time.time() print(end - start) #1 0.6040127277374268 #2 0.9576816558837891 A: A slight variation using a dictionary for greater flexibility (e.g. sort, dedup): import os from PyPDF2 import PdfFileMerger # use dict to sort by filepath or filename file_dict = {} for subdir, dirs, files in os.walk("<dir>"): for file in files: filepath = subdir + os.sep + file # you can have multiple endswith if filepath.endswith((".pdf", ".PDF")): file_dict[file] = filepath # use strict = False to ignore PdfReadError: Illegal character error merger = PdfFileMerger(strict=False) for k, v in file_dict.items(): print(k, v) merger.append(v) merger.write("combined_result.pdf") A: I used pdf unite on the linux terminal by leveraging subprocess (assumes one.pdf and two.pdf exist on the directory) and the aim is to merge them to three.pdf import subprocess subprocess.call(['pdfunite one.pdf two.pdf three.pdf'],shell=True) A: You can use PdfFileMerger from the PyPDF2 module. For example, to merge multiple PDF files from a list of paths you can use the following function: from PyPDF2 import PdfFileMerger # pass the path of the output final file.pdf and the list of paths def merge_pdf(out_path: str, extracted_files: list [str]): merger = PdfFileMerger() for pdf in extracted_files: merger.append(pdf) merger.write(out_path) merger.close() merge_pdf('./final.pdf', extracted_files) And this function to get all the files recursively from a parent folder: import os # pass the path of the parent_folder def fetch_all_files(parent_folder: str): target_files = [] for path, subdirs, files in os.walk(parent_folder): for name in files: target_files.append(os.path.join(path, name)) return target_files # get a list of all the paths of the pdf extracted_files = fetch_all_files('./parent_folder') Finally, you use the two functions declaring.a parent_folder_path that can contain multiple documents, and an output_pdf_path for the destination of the merged PDF: # get a list of all the paths of the pdf parent_folder_path = './parent_folder' outup_pdf_path = './final.pdf' extracted_files = fetch_all_files(parent_folder_path) merge_pdf(outup_pdf_path, extracted_files) You can get the full code from here (Source): How to merge PDF documents using Python A: The answer from Giovanni G. 
PY in an easily usable way (at least for me): import os from PyPDF2 import PdfFileMerger def merge_pdfs(export_dir, input_dir, folder): current_dir = os.path.join(input_dir, folder) pdfs = os.listdir(current_dir) merger = PdfFileMerger() for pdf in pdfs: merger.append(open(os.path.join(current_dir, pdf), 'rb')) with open(os.path.join(export_dir, folder + ".pdf"), "wb") as fout: merger.write(fout) export_dir = r"E:\Output" input_dir = r"E:\Input" folders = os.listdir(input_dir) [merge_pdfs(export_dir, input_dir, folder) for folder in folders]; A: conda activate py_envs pip install PyPDF2 from PyPDF2 import PdfFileMerger #set path files os.chdir('directory_files') cwd = os.path.abspath('') files = os.listdir(cwd) def merge_pdf_files(): merger = PdfFileMerger() pdf_files = [x for x in files if x.endswith(".pdf")] [merger.append(pdf) for pdf in pdf_files] with open("merged_pdf_all.pdf", "wb") as new_file: merger.write(new_file) if __name__ == "__main__": pdf_merge()
Merge PDF files
Is it possible, using Python, to merge separate PDF files? Assuming so, I need to extend this a little further. I am hoping to loop through folders in a directory and repeat this procedure. And I may be pushing my luck, but is it possible to exclude a page that is contained in each of the PDFs (my report generation always creates an extra blank page).
[ "You can use PyPdf2s PdfMerger class.\nFile Concatenation\nYou can simply concatenate files by using the append method.\nfrom PyPDF2 import PdfMerger\n\npdfs = ['file1.pdf', 'file2.pdf', 'file3.pdf', 'file4.pdf']\n\nmerger = PdfMerger()\n\nfor pdf in pdfs:\n merger.append(pdf)\n\nmerger.write(\"result.pdf\")\nmerger.close()\n\nYou can pass file handles instead file paths if you want.\nFile Merging\nIf you want more fine grained control of merging there is a merge method of the PdfMerger, which allows you to specify an insertion point in the output file, meaning you can insert the pages anywhere in the file. The append method can be thought of as a merge where the insertion point is the end of the file.\ne.g.\nmerger.merge(2, pdf)\n\nHere we insert the whole pdf into the output but at page 2.\nPage Ranges\nIf you wish to control which pages are appended from a particular file, you can use the pages keyword argument of append and merge, passing a tuple in the form (start, stop[, step]) (like the regular range function).\ne.g.\nmerger.append(pdf, pages=(0, 3)) # first 3 pages\nmerger.append(pdf, pages=(0, 6, 2)) # pages 1,3, 5\n\nIf you specify an invalid range you will get an IndexError.\nNote: also that to avoid files being left open, the PdfFileMergers close method should be called when the merged file has been written. This ensures all files are closed (input and output) in a timely manner. It's a shame that PdfFileMerger isn't implemented as a context manager, so we can use the with keyword, avoid the explicit close call and get some easy exception safety.\nYou might also want to look at the pdfcat script provided as part of pypdf2. You can potentially avoid the need to write code altogether.\nThe PyPdf2 github also includes some example code demonstrating merging.\nPyMuPdf\nAnother library perhaps worth a look is PyMuPdf. Merging is equally simple.\nFrom command line:\npython -m fitz join -o result.pdf file1.pdf file2.pdf file3.pdf\n\nand from code\nimport fitz\n\nresult = fitz.open()\n\nfor pdf in ['file1.pdf', 'file2.pdf', 'file3.pdf']:\n with fitz.open(pdf) as mfile:\n result.insertPDF(mfile)\n \nresult.save(\"result.pdf\")\n\nWith plenty of options, detailed in the projects wiki.\n", "Use Pypdf or its successor PyPDF2:\n\nA Pure-Python library built as a PDF toolkit. It is capable of:\n\nsplitting documents page by page,\nmerging documents page by page,\n\n\n(and much more)\nHere's a sample program that works with both versions.\n#!/usr/bin/env python\nimport sys\ntry:\n from PyPDF2 import PdfFileReader, PdfFileWriter\nexcept ImportError:\n from pyPdf import PdfFileReader, PdfFileWriter\n\ndef pdf_cat(input_files, output_stream):\n input_streams = []\n try:\n # First open all the files, then produce the output file, and\n # finally close the input files. This is necessary because\n # the data isn't read from the input files until the write\n # operation. 
Thanks to\n # https://stackoverflow.com/questions/6773631/problem-with-closing-python-pypdf-writing-getting-a-valueerror-i-o-operation/6773733#6773733\n for input_file in input_files:\n input_streams.append(open(input_file, 'rb'))\n writer = PdfFileWriter()\n for reader in map(PdfFileReader, input_streams):\n for n in range(reader.getNumPages()):\n writer.addPage(reader.getPage(n))\n writer.write(output_stream)\n finally:\n for f in input_streams:\n f.close()\n output_stream.close()\n\nif __name__ == '__main__':\n if sys.platform == \"win32\":\n import os, msvcrt\n msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)\n pdf_cat(sys.argv[1:], sys.stdout)\n\n", "Merge all pdf files that are present in a dir\nPut the pdf files in a dir. Launch the program. You get one pdf with all the pdfs merged.\nimport os\nfrom PyPDF2 import PdfFileMerger\n\nx = [a for a in os.listdir() if a.endswith(\".pdf\")]\n\nmerger = PdfFileMerger()\n\nfor pdf in x:\n merger.append(open(pdf, 'rb'))\n\nwith open(\"result.pdf\", \"wb\") as fout:\n merger.write(fout)\n\nHow would I make the same code above today\nfrom glob import glob\nfrom PyPDF2 import PdfFileMerger\n\n\n\ndef pdf_merge():\n ''' Merges all the pdf files in current directory '''\n merger = PdfFileMerger()\n allpdfs = [a for a in glob(\"*.pdf\")]\n [merger.append(pdf) for pdf in allpdfs]\n with open(\"Merged_pdfs.pdf\", \"wb\") as new_file:\n merger.write(new_file)\n\n\nif __name__ == \"__main__\":\n pdf_merge()\n\n", "The pdfrw library can do this quite easily, assuming you don't need to preserve bookmarks and annotations, and your PDFs aren't encrypted. cat.py is an example concatenation script, and subset.py is an example page subsetting script.\nThe relevant part of the concatenation script -- assumes inputs is a list of input filenames, and outfn is an output file name:\nfrom pdfrw import PdfReader, PdfWriter\n\nwriter = PdfWriter()\nfor inpfn in inputs:\n writer.addpages(PdfReader(inpfn).pages)\nwriter.write(outfn)\n\nAs you can see from this, it would be pretty easy to leave out the last page, e.g. 
something like:\n writer.addpages(PdfReader(inpfn).pages[:-1])\n\nDisclaimer: I am the primary pdfrw author.\n", "Is it possible, using Python, to merge seperate PDF files?\nYes.\nThe following example merges all files in one folder to a single new PDF file:\n#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom argparse import ArgumentParser\nfrom glob import glob\nfrom pyPdf import PdfFileReader, PdfFileWriter\nimport os\n\ndef merge(path, output_filename):\n output = PdfFileWriter()\n\n for pdffile in glob(path + os.sep + '*.pdf'):\n if pdffile == output_filename:\n continue\n print(\"Parse '%s'\" % pdffile)\n document = PdfFileReader(open(pdffile, 'rb'))\n for i in range(document.getNumPages()):\n output.addPage(document.getPage(i))\n\n print(\"Start writing '%s'\" % output_filename)\n with open(output_filename, \"wb\") as f:\n output.write(f)\n\nif __name__ == \"__main__\":\n parser = ArgumentParser()\n\n # Add more options if you like\n parser.add_argument(\"-o\", \"--output\",\n dest=\"output_filename\",\n default=\"merged.pdf\",\n help=\"write merged PDF to FILE\",\n metavar=\"FILE\")\n parser.add_argument(\"-p\", \"--path\",\n dest=\"path\",\n default=\".\",\n help=\"path of source PDF files\")\n\n args = parser.parse_args()\n merge(args.path, args.output_filename)\n\n", "here, http://pieceofpy.com/2009/03/05/concatenating-pdf-with-python/, gives an solution.\nsimilarly:\nfrom pyPdf import PdfFileWriter, PdfFileReader\n\ndef append_pdf(input,output):\n [output.addPage(input.getPage(page_num)) for page_num in range(input.numPages)]\n\noutput = PdfFileWriter()\n\nappend_pdf(PdfFileReader(file(\"C:\\\\sample.pdf\",\"rb\")),output)\nappend_pdf(PdfFileReader(file(\"c:\\\\sample1.pdf\",\"rb\")),output)\nappend_pdf(PdfFileReader(file(\"c:\\\\sample2.pdf\",\"rb\")),output)\nappend_pdf(PdfFileReader(file(\"c:\\\\sample3.pdf\",\"rb\")),output)\n\noutput.write(file(\"c:\\\\combined.pdf\",\"wb\"))\n\n------ Updated on 25th Nov. ------\n------ Seems above code doesn't work anymore------\n------ Please use the following:------\nfrom PyPDF2 import PdfFileMerger, PdfFileReader\nimport os\n\nmerger = PdfFileMerger()\n\nfile_folder = \"C:\\\\My Ducoments\\\\\"\n\nroot, dirs, files = next(os.walk(file_folder))\n\nfor path, subdirs, files in os.walk(root):\n for f in files:\n if f.endswith(\".pdf\"):\n merger.append(file_folder + f)\n\nmerger.write(file_folder + \"Economists-1.pdf\")\n\n", "from PyPDF2 import PdfFileMerger\nimport webbrowser\nimport os\ndir_path = os.path.dirname(os.path.realpath(__file__))\n\ndef list_files(directory, extension):\n return (f for f in os.listdir(directory) if f.endswith('.' 
+ extension))\n\npdfs = list_files(dir_path, \"pdf\")\n\nmerger = PdfFileMerger()\n\nfor pdf in pdfs:\n merger.append(open(pdf, 'rb'))\n\nwith open('result.pdf', 'wb') as fout:\n merger.write(fout)\n\nwebbrowser.open_new('file://'+ dir_path + '/result.pdf')\n\nGit Repo: https://github.com/mahaguru24/Python_Merge_PDF.git\n", "You can use pikepdf too (source code documentation).\nExample code could be (taken from the documentation):\nfrom glob import glob\n\nfrom pikepdf import Pdf\n\npdf = Pdf.new()\n\nfor file in glob('*.pdf'): # you can change this to browse directories recursively\n with Pdf.open(file) as src:\n pdf.pages.extend(src.pages)\n\npdf.save('merged.pdf')\npdf.close()\n\nIf you want to exclude pages, you might proceed another way, for instance copying pages to a new pdf (you can select which ones you do not copy, then, the pdf.pages object behaving like a list).\nIt is still actively maintained, which, as of february 2022, does not seem to be the case of PyPDF2 nor pdfrw.\nI haven't benchmarked it, so I don't know if it is quicker or slower than other solutions.\nOne advantage over PyMuPDF, in my case, is that an official Ubuntu package is available (python3-pikepdf), what is practical to package my own software depending on it.\n", "Here's a time comparison for the most common answers for my specific use case: combining a list of 5 large single-page pdf files. I ran each test twice.\n(Disclaimer: I ran this function within Flask, your mileage may vary)\nTL;DR\npdfrw is the fastest library for combining pdfs out of the 3 I tested.\nPyPDF2\nstart = time.time()\nmerger = PdfFileMerger()\nfor pdf in all_pdf_obj:\n merger.append(\n os.path.join(\n os.getcwd(), pdf.filename # full path\n )\n )\nformatted_name = f'Summary_Invoice_{date.today()}.pdf'\nmerge_file = os.path.join(os.getcwd(), formatted_name)\nmerger.write(merge_file)\nmerger.close()\nend = time.time()\nprint(end - start) #1 66.50084733963013 #2 68.2995400428772\n\nPyMuPDF\nstart = time.time()\nresult = fitz.open()\n\nfor pdf in all_pdf_obj:\n with fitz.open(os.path.join(os.getcwd(), pdf.filename)) as mfile:\n result.insertPDF(mfile)\nformatted_name = f'Summary_Invoice_{date.today()}.pdf'\n\nresult.save(formatted_name)\nend = time.time()\nprint(end - start) #1 2.7166640758514404 #2 1.694727897644043\n\npdfrw\nstart = time.time()\nresult = fitz.open()\n\nwriter = PdfWriter()\nfor pdf in all_pdf_obj:\n writer.addpages(PdfReader(os.path.join(os.getcwd(), pdf.filename)).pages)\n\nformatted_name = f'Summary_Invoice_{date.today()}.pdf'\nwriter.write(formatted_name)\nend = time.time()\nprint(end - start) #1 0.6040127277374268 #2 0.9576816558837891\n\n", "A slight variation using a dictionary for greater flexibility (e.g. 
sort, dedup):\nimport os\nfrom PyPDF2 import PdfFileMerger\n# use dict to sort by filepath or filename\nfile_dict = {}\nfor subdir, dirs, files in os.walk(\"<dir>\"):\n for file in files:\n filepath = subdir + os.sep + file\n # you can have multiple endswith\n if filepath.endswith((\".pdf\", \".PDF\")):\n file_dict[file] = filepath\n# use strict = False to ignore PdfReadError: Illegal character error\nmerger = PdfFileMerger(strict=False)\n\nfor k, v in file_dict.items():\n print(k, v)\n merger.append(v)\n\nmerger.write(\"combined_result.pdf\")\n\n", "I used pdf unite on the linux terminal by leveraging subprocess (assumes one.pdf and two.pdf exist on the directory) and the aim is to merge them to three.pdf\n import subprocess\n subprocess.call(['pdfunite one.pdf two.pdf three.pdf'],shell=True)\n\n", "You can use PdfFileMerger from the PyPDF2 module.\nFor example, to merge multiple PDF files from a list of paths you can use the following function:\nfrom PyPDF2 import PdfFileMerger\n\n# pass the path of the output final file.pdf and the list of paths\ndef merge_pdf(out_path: str, extracted_files: list [str]):\n merger = PdfFileMerger()\n \n for pdf in extracted_files:\n merger.append(pdf)\n\n merger.write(out_path)\n merger.close()\n\nmerge_pdf('./final.pdf', extracted_files)\n\nAnd this function to get all the files recursively from a parent folder:\nimport os\n\n# pass the path of the parent_folder\ndef fetch_all_files(parent_folder: str):\n target_files = []\n for path, subdirs, files in os.walk(parent_folder):\n for name in files:\n target_files.append(os.path.join(path, name))\n return target_files \n\n# get a list of all the paths of the pdf\nextracted_files = fetch_all_files('./parent_folder')\n\nFinally, you use the two functions declaring.a parent_folder_path that can contain multiple documents, and an output_pdf_path for the destination of the merged PDF:\n# get a list of all the paths of the pdf\nparent_folder_path = './parent_folder'\noutup_pdf_path = './final.pdf'\n\nextracted_files = fetch_all_files(parent_folder_path)\nmerge_pdf(outup_pdf_path, extracted_files)\n\nYou can get the full code from here (Source): How to merge PDF documents using Python\n", "The answer from Giovanni G. PY in an easily usable way (at least for me):\nimport os\nfrom PyPDF2 import PdfFileMerger\n\ndef merge_pdfs(export_dir, input_dir, folder):\n current_dir = os.path.join(input_dir, folder)\n pdfs = os.listdir(current_dir)\n \n merger = PdfFileMerger()\n for pdf in pdfs:\n merger.append(open(os.path.join(current_dir, pdf), 'rb'))\n\n with open(os.path.join(export_dir, folder + \".pdf\"), \"wb\") as fout:\n merger.write(fout)\n\nexport_dir = r\"E:\\Output\"\ninput_dir = r\"E:\\Input\"\nfolders = os.listdir(input_dir)\n[merge_pdfs(export_dir, input_dir, folder) for folder in folders];\n\n", "conda activate py_envs\n\npip install PyPDF2\n\nfrom PyPDF2 import PdfFileMerger\n\n#set path files\n\nos.chdir('directory_files')\ncwd = os.path.abspath('')\nfiles = os.listdir(cwd)\n\ndef merge_pdf_files():\n merger = PdfFileMerger()\n pdf_files = [x for x in files if x.endswith(\".pdf\")]\n [merger.append(pdf) for pdf in pdf_files]\n with open(\"merged_pdf_all.pdf\", \"wb\") as new_file:\n merger.write(new_file)\n\nif __name__ == \"__main__\":\n pdf_merge()\n\n" ]
[ 414, 153, 37, 15, 9, 3, 3, 3, 2, 1, 1, 1, 0, 0 ]
[ "def pdf_merger(path):\n\"\"\"Merge the pdfs into one pdf\"\"\"\nimport logging\nlogging.basicConfig(filename = 'output.log', level = logging.DEBUG, format = '%(asctime)s %(levelname)s %(message)s' )\n\ntry:\n import glob, os\n import PyPDF2\n \n os.chdir(path)\n \n pdfs = []\n \n for file in glob.glob(\"*.pdf\"):\n pdfs.append(file)\n \n if len(pdfs) == 0:\n logging.info(\"No pdf in the given directory\")\n \n else:\n merger = PyPDF2.PdfFileMerger()\n \n for pdf in pdfs:\n merger.append(pdf)\n \n merger.write('result.pdf')\n merger.close()\n \nexcept Exception as e:\n logging.error('Error has happened')\n logging.exception('Exception occured' + str(e))\n\n" ]
[ -1 ]
[ "file_io", "pdf", "pypdf", "pypdf2", "python" ]
stackoverflow_0003444645_file_io_pdf_pypdf_pypdf2_python.txt
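On recent installs the maintained package is pypdf rather than PyPDF2, and the writer itself can append whole files; a minimal sketch, assuming pypdf 3 or newer and example file names:
from pypdf import PdfWriter

writer = PdfWriter()
for path in ["file1.pdf", "file2.pdf", "file3.pdf"]:
    writer.append(path)  # append(path, pages=(start, stop)) lets you drop the trailing blank page
writer.write("merged.pdf")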
Q: extract number of ranking position in pandas dataframe I have a pandas dataframe with a column named ranking_pos. All the rows of this column look like this: #123 of 12,216. The output I need is only the number of the ranking, so for this example: 123 (as an integer). How do I extract the number after the # and get rid of the of 12,216? Currently the type of the column is object, just converting it to integer with .astype() doesn't work because of the other characters. A: You can use .str.extract: df['ranking_pos'].str.extract(r'#(\d+)').astype(int) or you can use .str.split(): df['ranking_pos'].str.split(' of ').str[0].str.replace('#', '').astype(int) A: df.loc[:,"ranking_pos"] =df.loc[:,"ranking_pos"].str.replace("#","").astype(int)
extract number of ranking position in pandas dataframe
I have a pandas dataframe with a column named ranking_pos. All the rows of this column look like this: #123 of 12,216. The output I need is only the number of the ranking, so for this example: 123 (as an integer). How do I extract the number after the # and get rid of the of 12,216? Currently the type of the column is object, just converting it to integer with .astype() doesn't work because of the other characters.
[ "You can use .str.extract:\ndf['ranking_pos'].str.extract(r'#(\\d+)').astype(int)\n\nor you can use .str.split():\ndf['ranking_pos'].str.split(' of ').str[0].str.replace('#', '').astype(int)\n\n", "df.loc[:,\"ranking_pos\"] =df.loc[:,\"ranking_pos\"].str.replace(\"#\",\"\").astype(int)\n\n" ]
[ 1, 0 ]
[]
[]
[ "dataframe", "integer", "pandas", "python", "type_conversion" ]
stackoverflow_0074598438_dataframe_integer_pandas_python_type_conversion.txt
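The same extraction written back into the frame; expand=False makes str.extract return a Series when there is a single capture group, with the column name taken from the question:
df["ranking_int"] = df["ranking_pos"].str.extract(r"#(\d+)", expand=False).astype(int)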
Q: How to check whether the HTML value is 1 in Django? home.html {% if stud.scrapper_status == 1 %} <td>{{stud.scrapper_status}} --> Started</td> {% else %} <td>{{stud.scrapper_status}} --> Completed</td> {% endif %} Output Image If my value is 1 I should get Started, but for every value it shows Completed. How can I check whether the value is 1 or not in the HTML? A: It should be "1" not 1 so: home.html {% if stud.scrapper_status == "1" %} <td>{{stud.scrapper_status}} --> Started</td> {% else %} <td>{{stud.scrapper_status}} --> Completed</td> {% endif %}
How to check whether the HTML value is 1 in Django?
home.html {% if stud.scrapper_status == 1 %} <td>{{stud.scrapper_status}} --> Started</td> {% else %} <td>{{stud.scrapper_status}} --> Completed</td> {% endif %} Output Image If my value is 1 I should get Started, but for every value it shows Completed. How can I check whether the value is 1 or not in the HTML?
[ "It should be \"1\" not 1 so:\nhome.html\n\n {% if stud.scrapper_status == \"1\" %}\n <td>{{stud.scrapper_status}} --> Started</td>\n {% else %}\n <td>{{stud.scrapper_status}} --> Completed</td>\n {% endif %}\n\n" ]
[ 3 ]
[]
[]
[ "django", "django_templates", "python" ]
stackoverflow_0074598520_django_django_templates_python.txt
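An alternative is to normalise the type in the view so the numeric comparison in the template works as written; a sketch, assuming the rows are loaded by some hypothetical helper and scrapper_status arrives as a string:
from django.shortcuts import render

def home(request):
    studs = get_student_rows()  # hypothetical helper, replace with the real query
    for stud in studs:
        stud.scrapper_status = int(stud.scrapper_status)  # "1" -> 1, so {% if ... == 1 %} matches
    return render(request, "home.html", {"studs": studs})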
Q: Trying to wrap the header of data frame /excel but gives me error Final_Inv = Inv_report2[["Product ID","Product","Net Activations","Books(at warehouse)","Books(Received by Retailer)","In Transit","Activated","Lost/Stolen (After Activation)","Lost/Stolen(Before Activation)","Total stock at Retailer","Activations(Books)","Stock in Network(Weeks)","Sell Through(%)","Stock at warehouse(Week)","Game Start Date"]] with pd.ExcelWriter('K:\SnW Project\Inventory Report\Output Excels\Output.xlsx') as writer: Final_Inv.style.set_properties(**{'text-align': 'center'}).to_excel(writer,sheet_name='Inv_report',index = 0,startrow=0, startcol=0) workbook = writer.book worksheet = writer.sheets['Inv_report'] format.set_text_wrap worksheet.set_row('A:Z',30,format) write.save() when I try to wrap the header row with above code it gives me below error. **ERROR : ** AttributeError Traceback (most recent call last) Input In [29], in <cell line: 64>() 66 workbook = writer.book 67 worksheet = writer.sheets['Inv_report'] ---> 68 format.set_text_wrap 69 worksheet.set_column('A:Z',30,format) 70 write.save() AttributeError: 'builtin_function_or_method' object has no attribute 'set_text_wrap' Tried changing the few things around code but no success A: Because format is a built-in function. Can you try this: cell_format = workbook.add_format() cell_format.set_text_wrap() Full code: Final_Inv = Inv_report2[["Product ID","Product","Net Activations","Books(at warehouse)","Books(Received by Retailer)","In Transit","Activated","Lost/Stolen (After Activation)","Lost/Stolen(Before Activation)","Total stock at Retailer","Activations(Books)","Stock in Network(Weeks)","Sell Through(%)","Stock at warehouse(Week)","Game Start Date"]] with pd.ExcelWriter('K:\SnW Project\Inventory Report\Output Excels\Output.xlsx') as writer: Final_Inv.style.set_properties(**{'text-align': 'center'}).to_excel(writer,sheet_name='Inv_report',index = 0,startrow=0, startcol=0) workbook = writer.book worksheet = writer.sheets['Inv_report'] cell_format = workbook.add_format() cell_format.set_text_wrap() worksheet.set_row('A:Z',30,cell_format) write.save()
Trying to wrap the header of data frame /excel but gives me error
Final_Inv = Inv_report2[["Product ID","Product","Net Activations","Books(at warehouse)","Books(Received by Retailer)","In Transit","Activated","Lost/Stolen (After Activation)","Lost/Stolen(Before Activation)","Total stock at Retailer","Activations(Books)","Stock in Network(Weeks)","Sell Through(%)","Stock at warehouse(Week)","Game Start Date"]] with pd.ExcelWriter('K:\SnW Project\Inventory Report\Output Excels\Output.xlsx') as writer: Final_Inv.style.set_properties(**{'text-align': 'center'}).to_excel(writer,sheet_name='Inv_report',index = 0,startrow=0, startcol=0) workbook = writer.book worksheet = writer.sheets['Inv_report'] format.set_text_wrap worksheet.set_row('A:Z',30,format) write.save() when I try to wrap the header row with above code it gives me below error. **ERROR : ** AttributeError Traceback (most recent call last) Input In [29], in <cell line: 64>() 66 workbook = writer.book 67 worksheet = writer.sheets['Inv_report'] ---> 68 format.set_text_wrap 69 worksheet.set_column('A:Z',30,format) 70 write.save() AttributeError: 'builtin_function_or_method' object has no attribute 'set_text_wrap' Tried changing the few things around code but no success
[ "Because format is a built-in function.\nCan you try this:\ncell_format = workbook.add_format()\ncell_format.set_text_wrap()\n\nFull code:\nFinal_Inv = Inv_report2[[\"Product ID\",\"Product\",\"Net Activations\",\"Books(at warehouse)\",\"Books(Received by Retailer)\",\"In Transit\",\"Activated\",\"Lost/Stolen (After Activation)\",\"Lost/Stolen(Before Activation)\",\"Total stock at Retailer\",\"Activations(Books)\",\"Stock in Network(Weeks)\",\"Sell Through(%)\",\"Stock at warehouse(Week)\",\"Game Start Date\"]]\n\n\nwith pd.ExcelWriter('K:\\SnW Project\\Inventory Report\\Output Excels\\Output.xlsx') as writer:\n Final_Inv.style.set_properties(**{'text-align': 'center'}).to_excel(writer,sheet_name='Inv_report',index = 0,startrow=0, startcol=0)\n workbook = writer.book\n worksheet = writer.sheets['Inv_report']\n cell_format = workbook.add_format()\n cell_format.set_text_wrap()\n worksheet.set_row('A:Z',30,cell_format)\n write.save()\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "excel", "python", "word_wrap" ]
stackoverflow_0074596233_dataframe_excel_python_word_wrap.txt
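One detail worth double-checking in the snippets above: in xlsxwriter, set_row takes a zero-based row index while set_column takes a letter range such as 'A:Z', and the final save call has to use the writer variable; a sketch of just that part, with the workbook/worksheet names assumed from the answer:
cell_format = workbook.add_format({"text_wrap": True, "align": "center"})
worksheet.set_row(0, 30, cell_format)  # row 0 holds the header written by to_excel
worksheet.set_column("A:Z", 30)        # column widths use the letter-range form
# inside "with pd.ExcelWriter(...) as writer:" no explicit writer.save() is needed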
Q: Pandas: faster string operations in dataframes I am working on a python script that read data from a database and save this data into a .csv file. In order to save it correctly I need to escape different characters such as \r\n or \n. Here is how I am currently doing it: Firstly, I use the read_sql pandas function in order to read the data from the database. import pandas as pd df = pd.read_sql( sql = 'SELECT * FROM exampleTable', con = SQLAlchemyConnection ) The table I get has different types of values. Then, the script updates the dataframe obtained changing every string value to raw string. In order to achive that I use two nested for loops in order to operate with every single value. def update_df(df) for rowIndex, row in df.iterrows(): for colIndex, values in row.items(): if isinstance(df[rowIndex, colIndex], str): df.at[rowIndex, colIndex] = repr(df.at[rowIndex, colIndex]) return df However, the amount of data I need to elaborate is large (more than 1 million rows with more than 100 columns) and it takes hours. What I need is a way to create the csv file in a faster way. Thank you in advance. A: It should be faster to use applymap if really you have mixed types: df = df.applymap(lambda x: repr(x) if isinstance(x, str) else x) However, if you can identify string columns, then you can slice them, (maybe in combination with re.escape?).: import re str_cols = ['col1', 'col2'] df[str_cols] = df[str_cols].applymap(re.escape)
Pandas: faster string operations in dataframes
I am working on a python script that read data from a database and save this data into a .csv file. In order to save it correctly I need to escape different characters such as \r\n or \n. Here is how I am currently doing it: Firstly, I use the read_sql pandas function in order to read the data from the database. import pandas as pd df = pd.read_sql( sql = 'SELECT * FROM exampleTable', con = SQLAlchemyConnection ) The table I get has different types of values. Then, the script updates the dataframe obtained changing every string value to raw string. In order to achive that I use two nested for loops in order to operate with every single value. def update_df(df) for rowIndex, row in df.iterrows(): for colIndex, values in row.items(): if isinstance(df[rowIndex, colIndex], str): df.at[rowIndex, colIndex] = repr(df.at[rowIndex, colIndex]) return df However, the amount of data I need to elaborate is large (more than 1 million rows with more than 100 columns) and it takes hours. What I need is a way to create the csv file in a faster way. Thank you in advance.
[ "It should be faster to use applymap if really you have mixed types:\ndf = df.applymap(lambda x: repr(x) if isinstance(x, str) else x)\n\nHowever, if you can identify string columns, then you can slice them, (maybe in combination with re.escape?).:\nimport re\nstr_cols = ['col1', 'col2']\ndf[str_cols] = df[str_cols].applymap(re.escape)\n\n" ]
[ 3 ]
[]
[]
[ "csv", "pandas", "performance", "python" ]
stackoverflow_0074598581_csv_pandas_performance_python.txt
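A way to pick the string columns automatically instead of listing them by hand, assuming the text columns come back from read_sql with object dtype:
str_cols = df.select_dtypes(include="object").columns
df[str_cols] = df[str_cols].applymap(lambda x: repr(x) if isinstance(x, str) else x)
df.to_csv("output.csv", index=False)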
Q: Unable to import module 'app': No module named '_tkinter'", "errorType": "Runtime.ImportModuleError" I am trying to create a docker container to deploy on AWS lambda but I continuously keep getting the error: "Unable to import module 'app': No module named '_tkinter'", "errorType": "Runtime.ImportModuleError", "stackTrace": []} The docker file I have created is as below: FROM public.ecr.aws/lambda/python:3.8 RUN yum -y update RUN yum -y install gcc RUN yum install -y gcc-c++ RUN yum install -y git RUN yum install -y which COPY requirements.txt ./requirements.txt RUN pip install -r requirements.txt \ && pip install -e git+https://github.com/ganesh3/icevision.git@master#egg=icevision[inference] --upgrade -q COPY model_dir ./model_dir COPY /app/app.py ./ CMD ["app.handler"] The requirements.txt is as below: --find-links https://download.pytorch.org/whl/torch_stable.html torch==1.10.0+cpu torchvision==0.11.1+cpu --find-links https://download.openmmlab.com/mmcv/dist/cpu/torch1.10.0/index.html mmcv-full==1.3.17 mmdet==2.17.0 numpy Pillow tk The app.py is as below: # app.py used in the early stages of the project just to test if I was able to import the icevision library import sys import os print("Executing install for fonts") os.system('mkdir -p /root/.icevision/fonts/') os.system('curl -LJO https://raw.githubusercontent.com/airctic/storage/master/SpaceGrotesk-Medium.ttf') os.system('cp SpaceGrotesk-Medium.ttf /root/.icevision/fonts/') os.system('yum install -y tkinter tcl-devel tk-devel') os.system('yum search tkinter') os.system('yum install -y python3-tkinter.x86_64') print("Before tkinter import") import tkinter print("After tkinter import") import icevision def handler(event, context): return 'Hello from AWS Lambda using Python ' + sys.version + ' and IceVision ' + icevision.__version__ + '!' 
I also logged into the docker container using: docker exec -it <container_name> sh Then I logged into the python shell and ran: >>> import icevision Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/var/task/src/icevision/icevision/__init__.py", line 3, in <module> from icevision import parsers File "/var/task/src/icevision/icevision/parsers/__init__.py", line 1, in <module> from icevision.parsers.parser import * File "/var/task/src/icevision/icevision/parsers/parser.py", line 7, in <module> from icevision.data import * File "/var/task/src/icevision/icevision/data/__init__.py", line 5, in <module> from icevision.data.convert_records_to_coco_style import * File "/var/task/src/icevision/icevision/data/convert_records_to_coco_style.py", line 17, in <module> from icevision.models.inference import * File "/var/task/src/icevision/icevision/models/__init__.py", line 15, in <module> from icevision.models import mmdet File "/var/task/src/icevision/icevision/models/mmdet/__init__.py", line 2, in <module> from icevision.models.mmdet.models import * File "/var/task/src/icevision/icevision/models/mmdet/models/__init__.py", line 18, in <module> from icevision.models.mmdet.models import mask_rcnn File "/var/task/src/icevision/icevision/models/mmdet/models/mask_rcnn/__init__.py", line 2, in <module> from icevision.models.mmdet.common.mask.two_stage import * File "/var/task/src/icevision/icevision/models/mmdet/common/mask/two_stage/__init__.py", line 2, in <module> from icevision.models.mmdet.common.mask.two_stage.model import * File "/var/task/src/icevision/icevision/models/mmdet/common/mask/two_stage/model.py", line 3, in <module> from turtle import back File "/var/lang/lib/python3.8/turtle.py", line 107, in <module> import tkinter as TK File "/var/lang/lib/python3.8/tkinter/__init__.py", line 36, in <module> import _tkinter # If this fails your Python may not be configured for Tk ModuleNotFoundError: No module named '_tkinter' I ran the following in the python shell: import subprocess subprocess.call(['pip', 'install', 'tk']) I also ran the following to install tkinter: yum install -y tkinter yum install -y python3-tkinter The import for tkinter still fails. I am checking this as I am getting this error irrespective of importing tkinter or not as some other library is importing it internally. Can someone please suggest changes to resolve the error? Warm Regards Ganesh Bhat A: Commented out the below code to resolve the issue: from turtle import back
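A hypothetical way to bake that fix into the image instead of patching by hand; the file path is copied from the traceback above and may differ for other icevision versions, so verify it before relying on this Dockerfile fragment:

# Dockerfile fragment (sketch): comment out the stray turtle import after installing icevision
RUN sed -i 's/^from turtle import back/# from turtle import back/' \
    /var/task/src/icevision/icevision/models/mmdet/common/mask/two_stage/model.py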
Unable to import module 'app': No module named '_tkinter'", "errorType": "Runtime.ImportModuleError"
I am trying to create a docker container to deploy on AWS lambda but I continuously keep getting the error: "Unable to import module 'app': No module named '_tkinter'", "errorType": "Runtime.ImportModuleError", "stackTrace": []} The docker file I have created is as below: FROM public.ecr.aws/lambda/python:3.8 RUN yum -y update RUN yum -y install gcc RUN yum install -y gcc-c++ RUN yum install -y git RUN yum install -y which COPY requirements.txt ./requirements.txt RUN pip install -r requirements.txt \ && pip install -e git+https://github.com/ganesh3/icevision.git@master#egg=icevision[inference] --upgrade -q COPY model_dir ./model_dir COPY /app/app.py ./ CMD ["app.handler"] The requirements.txt is as below: --find-links https://download.pytorch.org/whl/torch_stable.html torch==1.10.0+cpu torchvision==0.11.1+cpu --find-links https://download.openmmlab.com/mmcv/dist/cpu/torch1.10.0/index.html mmcv-full==1.3.17 mmdet==2.17.0 numpy Pillow tk The app.py is as below: # app.py used in the early stages of the project just to test if I was able to import the icevision library import sys import os print("Executing install for fonts") os.system('mkdir -p /root/.icevision/fonts/') os.system('curl -LJO https://raw.githubusercontent.com/airctic/storage/master/SpaceGrotesk-Medium.ttf') os.system('cp SpaceGrotesk-Medium.ttf /root/.icevision/fonts/') os.system('yum install -y tkinter tcl-devel tk-devel') os.system('yum search tkinter') os.system('yum install -y python3-tkinter.x86_64') print("Before tkinter import") import tkinter print("After tkinter import") import icevision def handler(event, context): return 'Hello from AWS Lambda using Python ' + sys.version + ' and IceVision ' + icevision.__version__ + '!' I also logged into the docker container using: docker exec -it <container_name> sh Then I logged into the python shell and ran: >>> import icevision Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/var/task/src/icevision/icevision/__init__.py", line 3, in <module> from icevision import parsers File "/var/task/src/icevision/icevision/parsers/__init__.py", line 1, in <module> from icevision.parsers.parser import * File "/var/task/src/icevision/icevision/parsers/parser.py", line 7, in <module> from icevision.data import * File "/var/task/src/icevision/icevision/data/__init__.py", line 5, in <module> from icevision.data.convert_records_to_coco_style import * File "/var/task/src/icevision/icevision/data/convert_records_to_coco_style.py", line 17, in <module> from icevision.models.inference import * File "/var/task/src/icevision/icevision/models/__init__.py", line 15, in <module> from icevision.models import mmdet File "/var/task/src/icevision/icevision/models/mmdet/__init__.py", line 2, in <module> from icevision.models.mmdet.models import * File "/var/task/src/icevision/icevision/models/mmdet/models/__init__.py", line 18, in <module> from icevision.models.mmdet.models import mask_rcnn File "/var/task/src/icevision/icevision/models/mmdet/models/mask_rcnn/__init__.py", line 2, in <module> from icevision.models.mmdet.common.mask.two_stage import * File "/var/task/src/icevision/icevision/models/mmdet/common/mask/two_stage/__init__.py", line 2, in <module> from icevision.models.mmdet.common.mask.two_stage.model import * File "/var/task/src/icevision/icevision/models/mmdet/common/mask/two_stage/model.py", line 3, in <module> from turtle import back File "/var/lang/lib/python3.8/turtle.py", line 107, in <module> import tkinter as TK File 
"/var/lang/lib/python3.8/tkinter/__init__.py", line 36, in <module> import _tkinter # If this fails your Python may not be configured for Tk ModuleNotFoundError: No module named '_tkinter' I ran the following in the python shell: import subprocess subprocess.call(['pip', 'install', 'tk']) I also ran the following to install tkinter: yum install -y tkinter yum install -y python3-tkinter The import for tkinter still fails. I am checking this as I am getting this error irrespective of importing tkinter or not as some other library is importing it internally. Can someone please suggest changes to resolve the error? Warm Regards Ganesh Bhat
[ "Commented out the below code to resolve the issue:\nfrom turtle import back\n\n" ]
[ 0 ]
[]
[]
[ "aws_lambda", "docker", "python", "pytorch", "tkinter" ]
stackoverflow_0074473315_aws_lambda_docker_python_pytorch_tkinter.txt
Q: Python | Create combination of dictionary based on conditions I'm trying to create combination of dictionary based on some condition below is the main dictionary: payload = { "type": ["sedan","suv"], "name": ["car1","car2"], "color": ["black","white","green"], "version": ["mid","top"], "model": ["2","5","13"], } below are the conditions: color = { "car1": ["black","green"], "car2":["white"] } model = { "mid":["5"], "top":["13","5"] } For payload["name"] = "car1" the colors can be only be "black" or "green" even if the payload["color"] has more than these color. for payload["name"] = "car2", it can have only be "white" color. Same goes for model also, for mid version it can only have model as "5" and for top version it can have only "13" and "5". below is the expected output: [ {'type': 'sedan', 'name': 'car1', 'color': 'black', 'version': 'mid', 'model': '5'}, {'type': 'sedan', 'name': 'car1', 'color': 'black', 'version': 'top', 'model': '5'}, {'type': 'sedan', 'name': 'car1', 'color': 'black', 'version': 'top', 'model': '13'}, {'type': 'sedan', 'name': 'car1', 'color': 'green', 'version': 'mid', 'model': '5'}, {'type': 'sedan', 'name': 'car1', 'color': 'green', 'version': 'top', 'model': '5'}, {'type': 'sedan', 'name': 'car1', 'color': 'green', 'version': 'top', 'model': '13'}, {'type': 'suv', 'name': 'car1', 'color': 'black', 'version': 'top', 'model': '5'}, {'type': 'suv', 'name': 'car1', 'color': 'black', 'version': 'top', 'model': '13'}, {'type': 'suv', 'name': 'car1', 'color': 'black', 'version': 'mid', 'model': '5'}, {'type': 'suv', 'name': 'car1', 'color': 'green', 'version': 'top', 'model': '5'}, {'type': 'suv', 'name': 'car1', 'color': 'green', 'version': 'top', 'model': '13'}, {'type': 'suv', 'name': 'car1', 'color': 'green', 'version': 'mid', 'model': '5'}, {'type': 'sedan', 'name': 'car2', 'color': 'white', 'version': 'mid', 'model': '5'}, {'type': 'sedan', 'name': 'car2', 'color': 'white', 'version': 'top', 'model': '5'}, {'type': 'sedan', 'name': 'car2', 'color': 'white', 'version': 'top', 'model': '13'}, {'type': 'suv', 'name': 'car2', 'color': 'white', 'version': 'mid', 'model': '5'}, {'type': 'suv', 'name': 'car2', 'color': 'white', 'version': 'top', 'model': '5'}, {'type': 'suv', 'name': 'car2', 'color': 'white', 'version': 'top', 'model': '13'} ] Below is my code..it brings out all the combinations, how can i add the condition check to this. Can someone help? 
import itertools payload = { "type": ["sedan","suv"], "name": ["car1","car2"], "color": ["black","white","green"], "version": ["mid","top"], "model": ["2","5","13"], } color = { "car1": ["black","green"], "car2":["white"] } model = { "mid":["5"], "top":["13","5"] } output = [dict(zip(payload.keys(), a)) for a in itertools.product(*payload.values())] print(output) A: You can try to create a dataframe with the outputs and the filter it for each condition like below import itertools import pandas as pd payload = { "type": ["sedan","suv"], "name": ["car1","car2"], "color": ["black","white","green"], "version": ["mid","top"], "model": ["2","5","13"], } color = { "car1": ["black","green"], "car2":["white"] } model = { "mid":["5"], "top":["13","5"] } output = [dict(zip(payload.keys(), a)) for a in itertools.product(*payload.values())] print(output) df = pd.DataFrame(output) # filter for car1 color df = df.loc[~((df.name == 'car1') & ~(df.color.isin(color['car1']))),:] # filter for car2 color df = df.loc[~((df.name == 'car2') & ~(df.color.isin(color['car2']))),:] #filter for mid model df = df.loc[~((df.version == 'mid') & ~(df.model.isin(model['mid']))),:] #filter for topmodel df = df.loc[~((df.version == 'top') & ~(df.model.isin(model['top']))),:] df
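A plain-itertools sketch that adds the condition check without pandas, assuming payload, color and model are exactly the dictionaries defined in the question:

import itertools

output = [dict(zip(payload.keys(), combo)) for combo in itertools.product(*payload.values())]
# keep a combination only if its color is allowed for that name and its model for that version
filtered = [
    row for row in output
    if row["color"] in color[row["name"]] and row["model"] in model[row["version"]]
]
print(filtered)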
Python | Create combination of dictionary based on conditions
I'm trying to create combination of dictionary based on some condition below is the main dictionary: payload = { "type": ["sedan","suv"], "name": ["car1","car2"], "color": ["black","white","green"], "version": ["mid","top"], "model": ["2","5","13"], } below are the conditions: color = { "car1": ["black","green"], "car2":["white"] } model = { "mid":["5"], "top":["13","5"] } For payload["name"] = "car1" the colors can be only be "black" or "green" even if the payload["color"] has more than these color. for payload["name"] = "car2", it can have only be "white" color. Same goes for model also, for mid version it can only have model as "5" and for top version it can have only "13" and "5". below is the expected output: [ {'type': 'sedan', 'name': 'car1', 'color': 'black', 'version': 'mid', 'model': '5'}, {'type': 'sedan', 'name': 'car1', 'color': 'black', 'version': 'top', 'model': '5'}, {'type': 'sedan', 'name': 'car1', 'color': 'black', 'version': 'top', 'model': '13'}, {'type': 'sedan', 'name': 'car1', 'color': 'green', 'version': 'mid', 'model': '5'}, {'type': 'sedan', 'name': 'car1', 'color': 'green', 'version': 'top', 'model': '5'}, {'type': 'sedan', 'name': 'car1', 'color': 'green', 'version': 'top', 'model': '13'}, {'type': 'suv', 'name': 'car1', 'color': 'black', 'version': 'top', 'model': '5'}, {'type': 'suv', 'name': 'car1', 'color': 'black', 'version': 'top', 'model': '13'}, {'type': 'suv', 'name': 'car1', 'color': 'black', 'version': 'mid', 'model': '5'}, {'type': 'suv', 'name': 'car1', 'color': 'green', 'version': 'top', 'model': '5'}, {'type': 'suv', 'name': 'car1', 'color': 'green', 'version': 'top', 'model': '13'}, {'type': 'suv', 'name': 'car1', 'color': 'green', 'version': 'mid', 'model': '5'}, {'type': 'sedan', 'name': 'car2', 'color': 'white', 'version': 'mid', 'model': '5'}, {'type': 'sedan', 'name': 'car2', 'color': 'white', 'version': 'top', 'model': '5'}, {'type': 'sedan', 'name': 'car2', 'color': 'white', 'version': 'top', 'model': '13'}, {'type': 'suv', 'name': 'car2', 'color': 'white', 'version': 'mid', 'model': '5'}, {'type': 'suv', 'name': 'car2', 'color': 'white', 'version': 'top', 'model': '5'}, {'type': 'suv', 'name': 'car2', 'color': 'white', 'version': 'top', 'model': '13'} ] Below is my code..it brings out all the combinations, how can i add the condition check to this. Can someone help? import itertools payload = { "type": ["sedan","suv"], "name": ["car1","car2"], "color": ["black","white","green"], "version": ["mid","top"], "model": ["2","5","13"], } color = { "car1": ["black","green"], "car2":["white"] } model = { "mid":["5"], "top":["13","5"] } output = [dict(zip(payload.keys(), a)) for a in itertools.product(*payload.values())] print(output)
[ "You can try to create a dataframe with the outputs and the filter it for each condition like below\nimport itertools\nimport pandas as pd\npayload = {\n \"type\": [\"sedan\",\"suv\"],\n \"name\": [\"car1\",\"car2\"],\n \"color\": [\"black\",\"white\",\"green\"],\n \"version\": [\"mid\",\"top\"],\n \"model\": [\"2\",\"5\",\"13\"],\n}\n\ncolor = {\n \"car1\": [\"black\",\"green\"],\n \"car2\":[\"white\"]\n }\nmodel = {\n \"mid\":[\"5\"],\n \"top\":[\"13\",\"5\"]\n}\noutput = [dict(zip(payload.keys(), a)) for a in itertools.product(*payload.values())]\nprint(output)\n\ndf = pd.DataFrame(output)\n\n\n# filter for car1 color\ndf = df.loc[~((df.name == 'car1') & ~(df.color.isin(color['car1']))),:] \n\n# filter for car2 color\ndf = df.loc[~((df.name == 'car2') & ~(df.color.isin(color['car2']))),:]\n\n#filter for mid model\ndf = df.loc[~((df.version == 'mid') & ~(df.model.isin(model['mid']))),:]\n\n#filter for topmodel\ndf = df.loc[~((df.version == 'top') & ~(df.model.isin(model['top']))),:]\ndf\n\n" ]
[ 0 ]
[]
[]
[ "combinations", "dictionary", "python", "python_3.x" ]
stackoverflow_0074595956_combinations_dictionary_python_python_3.x.txt
Q: Getting a list of months between two dates I need to implement the code which will help me to get the list of months between two dates. I already have the code which will give the month delta , That is the number of months. Actually, I need the way to achieve getting the list of months between two dates. Here it is code for getting month delta. import calendar import datetime def calculate_monthdelta(date1, date2): def is_last_day_of_the_month(date): days_in_month = calendar.monthrange(date.year, date.month)[1] return date.day == days_in_month imaginary_day_2 = 31 if is_last_day_of_the_month(date2) else date2.day monthdelta = ( (date2.month - date1.month) + (date2.year - date1.year) * 12 + (-1 if date1.day > imaginary_day_2 else 0) ) print monthdelta return monthdelta date2 = datetime.datetime.today() date1 = date2.replace(month=01) calculate_monthdelta(date1, date2) Now I need the way to get the list of months between the same. Any help is appreciated If there is any way to get the list of months between two dates. Note: Please suggest any idea (If available) apart from the code I have used here. A: try this import datetime import time from dateutil.rrule import rrule, MONTHLY months = [dt.strftime("%m") for dt in rrule(MONTHLY, dtstart=date1, until=date2)] print months A: You can use the datae_range function in pandas library. import pandas as pd months = pd.date_range(data1, date2, freq="MS").strftime("%Y-%m").tolist() It will give you a list of all months with the selected format (here in "%Y-%m") in the range of date1 and date2 which looks like somewhat like this: ['2012-01', '2012-02', '2012-03', '2012-04', '2012-05'] If you want only the distinct months between date1 and date2, you can use datae_range function like this: months = pd.date_range(date1, date2, freq="MS").strftime("%m").tolist() months = list(dict.fromkeys(months)) And the output looks like this: ['01', '02', '03', '04', '05']
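If adding dateutil or pandas is not an option, a dependency-free sketch that walks month by month between the two dates:

import datetime

def months_between(date1, date2):
    months = []
    year, month = date1.year, date1.month
    while (year, month) <= (date2.year, date2.month):
        months.append("%02d" % month)  # or datetime.date(year, month, 1) for full dates
        month += 1
        if month > 12:
            month, year = 1, year + 1
    return months

print(months_between(datetime.date(2016, 1, 14), datetime.date(2016, 5, 26)))
# ['01', '02', '03', '04', '05']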
Getting a list of months between two dates
I need to implement the code which will help me to get the list of months between two dates. I already have the code which will give the month delta , That is the number of months. Actually, I need the way to achieve getting the list of months between two dates. Here it is code for getting month delta. import calendar import datetime def calculate_monthdelta(date1, date2): def is_last_day_of_the_month(date): days_in_month = calendar.monthrange(date.year, date.month)[1] return date.day == days_in_month imaginary_day_2 = 31 if is_last_day_of_the_month(date2) else date2.day monthdelta = ( (date2.month - date1.month) + (date2.year - date1.year) * 12 + (-1 if date1.day > imaginary_day_2 else 0) ) print monthdelta return monthdelta date2 = datetime.datetime.today() date1 = date2.replace(month=01) calculate_monthdelta(date1, date2) Now I need the way to get the list of months between the same. Any help is appreciated If there is any way to get the list of months between two dates. Note: Please suggest any idea (If available) apart from the code I have used here.
[ "try this\nimport datetime\nimport time\nfrom dateutil.rrule import rrule, MONTHLY\nmonths = [dt.strftime(\"%m\") for dt in rrule(MONTHLY, dtstart=date1, until=date2)]\nprint months\n\n", "You can use the datae_range function in pandas library. \nimport pandas as pd\n\nmonths = pd.date_range(data1, date2, freq=\"MS\").strftime(\"%Y-%m\").tolist()\n\nIt will give you a list of all months with the selected format (here in \"%Y-%m\") in the range of date1 and date2 which looks like somewhat like this:\n['2012-01', '2012-02', '2012-03', '2012-04', '2012-05']\n\nIf you want only the distinct months between date1 and date2, you can use datae_range function like this:\nmonths = pd.date_range(date1, date2, freq=\"MS\").strftime(\"%m\").tolist()\nmonths = list(dict.fromkeys(months))\n\nAnd the output looks like this:\n['01', '02', '03', '04', '05']\n\n" ]
[ 13, 0 ]
[ "Change your print statement to:\nprint calendar.month_name[:monthdelta]\n", "Because I don't know your desired output I can't format mine, but this returns an integer of the total number of months between two dates.\ndef calculate_monthdelta(date1, date2):\n print abs(date1.year - date2.year) * 12 + abs(date1.month - date2.month)\n\n" ]
[ -1, -1 ]
[ "calendar", "datetime", "python" ]
stackoverflow_0037456421_calendar_datetime_python.txt
Q: CDK: How to get L2 construct instance from L1 (CFN)? In my CDK code there is a low-level ecs.CfnTaskDefinition task definition. my_task_definition = aws_cdk.ecs.CfnTaskDefinition( scope=self, id="my_task_definition", # rest of the parameters... ) I want to use this task definition to create an ECS service, like this. my_service = aws_cdk.ecs.Ec2Service( scope=self, id="my_service", cluster=my_cluster, task_definition=my_task_definition, # NOT COMPATIBLE desired_count=1, # rest of the parameters.. ) But as the task_definition argument of Ec2Service should be an instance of aws_cdk.aws_ecs.TaskDefinition, it's not possible to use my_task_definition here, which is an instance of aws_cdk.aws_ecs.CfnTaskDefinition. So the question is: is it possible to get an aws_cdk.aws_ecs.TaskDefinition object from an aws_cdk.aws_ecs.CfnTaskDefinition instance? A: Is it possible to get aws_cdk.aws_ecs.TaskDefinition object from aws_cdk.aws_ecs.CfnTaskDefinition instance? ❌ No. You cannot get a L2 Something construct from a L1 CfnSomething. You can get a L2 ISomething interface construct with from_task_definition_arn (see below). But the task_definition prop won't accept the interface type. ✅ In your case, start with a L2 TaskDefinition or ECSTaskDefinition construct. Then, if you need to muck about with the L1 attributes, use escape hatch syntax to modify its underlying CfnTaskDefinition. How to get L2 construct instance from L1 (CFN)? More generally, yes, there are two ways to get a L2 ISomething from a L1 CfnSomething: (1) The CDK "unescape hatch" syntax, but only for S3 Buckets and KMS keys: cfnBucket = CfnBucket(stack, 'CfnBucket') bucket: IBucket = Bucket.from_cfn_bucket(cfn_bucket=cfnBucket) (2) For other constructs, the from_something_arn reference methods achieve the same result.
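A rough Python sketch of the direction the answer recommends: build the L2 task definition first and use the node.default_child escape hatch only where a low-level CloudFormation property really needs to be set. The container image and the property override below are placeholders, not values from the question:

from aws_cdk import aws_ecs as ecs

task_definition = ecs.Ec2TaskDefinition(self, "my_task_definition")
task_definition.add_container(
    "app",
    image=ecs.ContainerImage.from_registry("nginx"),  # placeholder image
    memory_limit_mib=256,
)

# escape hatch: reach the underlying L1 CfnTaskDefinition when an L2 prop is missing
cfn_task_definition = task_definition.node.default_child
cfn_task_definition.add_property_override("NetworkMode", "bridge")  # placeholder override

my_service = ecs.Ec2Service(
    self,
    "my_service",
    cluster=my_cluster,  # the cluster from the question
    task_definition=task_definition,
    desired_count=1,
)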
CDK: How to get L2 construct instance from L1 (CFN)?
In my CDK code there is a low-level ecs.CfnTaskDefinition task definition. my_task_definition = aws_cdk.ecs.CfnTaskDefinition( scope=self, id="my_task_definition", # rest of the parameters... ) I want to use this task definition to create an ECS service, like this. my_service = aws_cdk.ecs.Ec2Service( scope=self, id="my_service", cluster=my_cluster, task_definition=my_task_definition, # NOT COMPATIBLE desired_count=1, # rest of the parameters.. ) But as the task_definition argument of Ec2Service should be an instance of aws_cdk.aws_ecs.TaskDefinition, it's not possible to use my_task_definition here, which is an instance of aws_cdk.aws_ecs.CfnTaskDefinition. So the question is: is it possible to get an aws_cdk.aws_ecs.TaskDefinition object from an aws_cdk.aws_ecs.CfnTaskDefinition instance?
[ "\nIs it possible to get aws_cdk.aws_ecs.TaskDefinition object from aws_cdk.aws_ecs.CfnTaskDefinition instance?\n\n❌ No. You cannot get a L2 Something construct from a L1 CfnSomething. You can get a L2 ISomething interface construct with from_task_definition_arn (see below). But the task_definition prop won't accept the interface type.\n✅ In your case, start with a L2 TaskDefinition or ECSTaskDefinition construct. Then, if you need to muck about with the L1 attributes, use escape hatch syntax to modify its underlying CfnTaskDefinition.\n\nHow to get L2 construct instance from L1 (CFN)?\n\nMore generally, yes, there are two ways to get a L2 ISomething from a L1 CfnSomething:\n(1) The CDK \"unescape hatch\" syntax, but only for S3 Buckets and KMS keys:\ncfnBucket = CfnBucket(stack, 'CfnBucket')\nbucket: IBucket = Bucket.from_cfn_bucket(cfn_bucket=cfnBucket)\n\n(2) For other constructs, the from_something_arn reference methods achieve the same result.\n" ]
[ 2 ]
[]
[]
[ "amazon_ecs", "amazon_web_services", "aws_cdk", "aws_cdk_python", "python" ]
stackoverflow_0074592569_amazon_ecs_amazon_web_services_aws_cdk_aws_cdk_python_python.txt
Q: Python loop through rows and then calculate doesn't wok What I wanted to do, is to loop through each row. If the category is "HR contacts" and it's number is smaller than 500 then keep it. Otherwise only keep 500 as part of it. My code is: cntByUserNm['keep #'] = np.nan cntByUserNm['rest #'] = np.nan for index, row in cntByUserNm.iterrows(): print(row['Owner Name'], row['source']) if row['source'] == 'HR': if row['total number'] <= 500: row['keep #'] = row['total number'] row['rest #'] = 0 else: row['keep #'] = 500 row['rest #'] = row['total number'] - 500 But this seems doesn't work, all of the keep # and rest # still remains nan. How to fix this? for i in range(0, len(cntByUserNm)): print(cntByUserNm.iloc[i]['Owner Name'], cntByUserNm.iloc[i]['blizday source']) if cntByUserNm.iloc[i]['blizday source'] == mainCat: if cntByUserNm.iloc[i][befCnt] <= destiNum: cntByUserNm.iloc[i]['keep #'] = cntByUserNm.iloc[i][befCnt] cntByUserNm.iloc[i]['rest #'] = 0 else: cntByUserNm.iloc[i]['keep #'] = destiNum cntByUserNm.iloc[i]['rest #'] = cntByUserNm.iloc[i][befCnt] - destiNum``` A: You are updating the copy of row of the dataframe, instead of the dataframe itself. Assuming that your row index is continuous (from 0 to len(dataframe)), you can use .loc to modify directly on the dataframe. for index, row in cntByUserNm.iterrows(): print(row['Owner Name'], row['source']) if row['source'] == 'HR': if row['total number'] <= 500: cntByUserNm.loc[index, 'keep #'] = row['total number'] cntByUserNm.loc[index, 'rest #'] = 0 else: cntByUserNm.loc[index, 'keep #'] = 500 cntByUserNm.loc[index, 'rest #'] = row['total number'] - 500 If the index is not continuous, you can get the column integer location of keep # and rest # and use .iloc keep_idx = df.columns.get_loc('keep #') rest_idx = df.columns.get_loc('rest #') for index, row in cntByUserNm.iterrows(): print(row['Owner Name'], row['source']) if row['source'] == 'HR': if row['total number'] <= 500: cntByUserNm.iloc[index, keep_idx] = row['total number'] cntByUserNm.iloc[index, rest_idx] = 0 else: cntByUserNm.iloc[index, keep_idx] = 500 cntByUserNm.iloc[index, rest_idx] = row['total number'] - 500 A: In pandas working with vectors is faster. So I suggest: cntByUserNm['keep #'] = np.nan cntByUserNm['rest #'] = np.nan mask = (cntByUserNm.loc[:, 'source'] == 'HR') & (cntByUserNm.loc[:, 'total number'] <= 500) cntByUserNm.loc[mask, 'keep #'] = cntByUserNm.loc[mask, 'total number'] cntByUserNm.loc[mask, 'rest #'] = 0 cntByUserNm.loc[~mask, 'keep #'] = 500 cntByUserNm.loc[~mask, 'rest #'] = cntByUserNm.loc[~mask, 'total number'] - 500 A: Answer: keep_idx = df.columns.get_loc('keep #') rest_idx = df.columns.get_loc('rest #') for index, row in cntByUserNm.iterrows(): print(row['Owner Name'], row['source']) if row['source'] == 'HR': if row['total number'] <= 500: cntByUserNm.iloc[index, keep_idx] = row['total number'] cntByUserNm.iloc[index, rest_idx] = 0 else: cntByUserNm.iloc[index, keep_idx] = 500 cntByUserNm.iloc[index, rest_idx] = row['total number'] - 500
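For reference, the cap-at-500 rule can also be written without an explicit loop, assuming the 'total number' column is numeric (column names as in the question):

import numpy as np

hr = cntByUserNm['source'] == 'HR'
total = cntByUserNm.loc[hr, 'total number']
cntByUserNm.loc[hr, 'keep #'] = np.minimum(total, 500)
cntByUserNm.loc[hr, 'rest #'] = (total - 500).clip(lower=0)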
Python loop through rows and then calculate doesn't work
What I wanted to do, is to loop through each row. If the category is "HR contacts" and it's number is smaller than 500 then keep it. Otherwise only keep 500 as part of it. My code is: cntByUserNm['keep #'] = np.nan cntByUserNm['rest #'] = np.nan for index, row in cntByUserNm.iterrows(): print(row['Owner Name'], row['source']) if row['source'] == 'HR': if row['total number'] <= 500: row['keep #'] = row['total number'] row['rest #'] = 0 else: row['keep #'] = 500 row['rest #'] = row['total number'] - 500 But this seems doesn't work, all of the keep # and rest # still remains nan. How to fix this? for i in range(0, len(cntByUserNm)): print(cntByUserNm.iloc[i]['Owner Name'], cntByUserNm.iloc[i]['blizday source']) if cntByUserNm.iloc[i]['blizday source'] == mainCat: if cntByUserNm.iloc[i][befCnt] <= destiNum: cntByUserNm.iloc[i]['keep #'] = cntByUserNm.iloc[i][befCnt] cntByUserNm.iloc[i]['rest #'] = 0 else: cntByUserNm.iloc[i]['keep #'] = destiNum cntByUserNm.iloc[i]['rest #'] = cntByUserNm.iloc[i][befCnt] - destiNum```
[ "You are updating the copy of row of the dataframe, instead of the dataframe itself. Assuming that your row index is continuous (from 0 to len(dataframe)), you can use .loc to modify directly on the dataframe.\nfor index, row in cntByUserNm.iterrows():\n print(row['Owner Name'], row['source'])\n if row['source'] == 'HR':\n if row['total number'] <= 500:\n cntByUserNm.loc[index, 'keep #'] = row['total number']\n cntByUserNm.loc[index, 'rest #'] = 0\n else:\n cntByUserNm.loc[index, 'keep #'] = 500\n cntByUserNm.loc[index, 'rest #'] = row['total number'] - 500\n\nIf the index is not continuous, you can get the column integer location of keep # and rest # and use .iloc\nkeep_idx = df.columns.get_loc('keep #')\nrest_idx = df.columns.get_loc('rest #')\nfor index, row in cntByUserNm.iterrows():\n print(row['Owner Name'], row['source'])\n if row['source'] == 'HR':\n if row['total number'] <= 500:\n cntByUserNm.iloc[index, keep_idx] = row['total number']\n cntByUserNm.iloc[index, rest_idx] = 0\n else:\n cntByUserNm.iloc[index, keep_idx] = 500\n cntByUserNm.iloc[index, rest_idx] = row['total number'] - 500\n\n", "In pandas working with vectors is faster. So I suggest:\ncntByUserNm['keep #'] = np.nan\ncntByUserNm['rest #'] = np.nan\nmask = (cntByUserNm.loc[:, 'source'] == 'HR') & (cntByUserNm.loc[:, 'total number'] <= 500)\ncntByUserNm.loc[mask, 'keep #'] = cntByUserNm.loc[mask, 'total number']\ncntByUserNm.loc[mask, 'rest #'] = 0\ncntByUserNm.loc[~mask, 'keep #'] = 500\ncntByUserNm.loc[~mask, 'rest #'] = cntByUserNm.loc[~mask, 'total number'] - 500\n\n", "Answer:\nkeep_idx = df.columns.get_loc('keep #')\nrest_idx = df.columns.get_loc('rest #')\nfor index, row in cntByUserNm.iterrows():\n print(row['Owner Name'], row['source'])\n if row['source'] == 'HR':\n if row['total number'] <= 500:\n cntByUserNm.iloc[index, keep_idx] = row['total number']\n cntByUserNm.iloc[index, rest_idx] = 0\n else:\n cntByUserNm.iloc[index, keep_idx] = 500\n cntByUserNm.iloc[index, rest_idx] = row['total number'] - 500\n\n\n" ]
[ 2, 2, 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074598297_pandas_python.txt
Q: How to assign value to variable in list [PYTHON] How can I do something like this on python : class Game: def __init__(self, size: int): self.settings = { 'timeout_turn' = 0 'timeout_match' = 0 'max_memory' = 0 'time_left' = 2147483647 'game_type' = 0 'rule' = 0 'evaluate' = 0 'folder' = './' } there is an error, I think this is not the right way to do it but I didn't find an other solution. Thanks in advance A: To make it work you need to replace the = by and : and add a , after every entry. class Game: def __init__(self, size: int): self.settings = { 'timeout_turn': 0, 'timeout_match': 0, 'max_memory': 0, 'time_left': 2147483647, 'game_type': 0, 'rule': 0, 'evaluate': 0, 'folder': './' } By doing this you are creating a class variable named settings that is of type dictionary. Dictionaries are also known as maps in other languages. They use immutable values as indexes (keys) instead of numbers like in lists. A: here you go but it's a dictionnary not a list class Game: def __init__(self, size: int): self.settings = { 'timeout_turn' : 0, 'timeout_match' : 0, 'max_memory' : 0, 'time_left' : 2147483647, 'game_type' : 0, 'rule' : 0, 'evaluate' : 0, 'folder' : './', } A: If you want your settings data to be a dictionnary : instead of = to affect value to the key (variable name). If you just want to create new variables you could do it like that : class Game: def __init__(self, size: int): self.timeout_turn=0 self.max_memory=0 .... A: Hi I Hope you are doing well! Because you are using dict the syntax should have : instead of =: class Game: def __init__(self, size: int) -> None: """Initialize.""" self.settings = { "timeout_turn": 0, "timeout_match": 0, "max_memory": 0, "time_left": 2147483647, "game_type": 0, "rule": 0, "evaluate": 0, "folder": "./", } or if you want to have it as attributes you can do this: class Game: def __init__(self, size: int) -> None: """Initialize.""" self.timeout_turn = 0 self.timeout_match = 0 ...
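A quick usage check once the dictionary syntax is fixed as shown in the answers above:

game = Game(19)                       # the size argument is arbitrary here
print(game.settings["time_left"])     # 2147483647
game.settings["rule"] = 1             # individual settings can be updated later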
How to assign value to variable in list [PYTHON]
How can I do something like this on python : class Game: def __init__(self, size: int): self.settings = { 'timeout_turn' = 0 'timeout_match' = 0 'max_memory' = 0 'time_left' = 2147483647 'game_type' = 0 'rule' = 0 'evaluate' = 0 'folder' = './' } there is an error, I think this is not the right way to do it but I didn't find an other solution. Thanks in advance
[ "To make it work you need to replace the = by and : and add a , after every entry.\nclass Game:\n def __init__(self, size: int):\n self.settings = { \n 'timeout_turn': 0,\n 'timeout_match': 0,\n 'max_memory': 0,\n 'time_left': 2147483647,\n 'game_type': 0,\n 'rule': 0,\n 'evaluate': 0,\n 'folder': './'\n }\n\nBy doing this you are creating a class variable named settings that is of type dictionary. Dictionaries are also known as maps in other languages. They use immutable values as indexes (keys) instead of numbers like in lists.\n", "here you go but it's a dictionnary not a list\nclass Game:\n def __init__(self, size: int):\n self.settings = { \n 'timeout_turn' : 0,\n 'timeout_match' : 0,\n 'max_memory' : 0,\n 'time_left' : 2147483647,\n 'game_type' : 0,\n 'rule' : 0,\n 'evaluate' : 0,\n 'folder' : './',\n }\n\n", "If you want your settings data to be a dictionnary : instead of = to affect value to the key (variable name).\nIf you just want to create new variables you could do it like that :\nclass Game:\n\ndef __init__(self, size: int):\n self.timeout_turn=0\n self.max_memory=0\n ....\n \n \n\n", "Hi I Hope you are doing well!\nBecause you are using dict the syntax should have : instead of =:\nclass Game:\n\n def __init__(self, size: int) -> None:\n \"\"\"Initialize.\"\"\"\n\n self.settings = {\n \"timeout_turn\": 0,\n \"timeout_match\": 0,\n \"max_memory\": 0,\n \"time_left\": 2147483647,\n \"game_type\": 0,\n \"rule\": 0,\n \"evaluate\": 0,\n \"folder\": \"./\",\n }\n\nor if you want to have it as attributes you can do this:\nclass Game:\n\n def __init__(self, size: int) -> None:\n \"\"\"Initialize.\"\"\"\n\n self.timeout_turn = 0\n self.timeout_match = 0\n\n ...\n\n" ]
[ 3, 2, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0074598563_python.txt
Q: Can I use pymysql.connect() with "with" statement? The following is listed as example in pymysql: conn = pymysql.connect(...) with conn.cursor() as cursor: cursor.execute(...) ... conn.close() Can I use the following instead, or will this leave a lingering connection? (it executes successfully) import pymysql with pymysql.connect(...) as cursor: cursor.execute('show tables') (python 3, latest pymysql) A: This does not look safe, if you look here, the __enter__ and __exit__ functions are what are called in a with clause. For the pymysql connection they look like this: def __enter__(self): """Context manager that returns a Cursor""" return self.cursor() def __exit__(self, exc, value, traceback): """On successful exit, commit. On exception, rollback""" if exc: self.rollback() else: self.commit() So it doesn't look like the exit clause closes the connection, which means it would be lingering. I'm not sure why they did it this way. You could make your own wrappers that do this though. You could recycle a connection by creating multiple cursors with it (the source for cursors is here) the cursor methods look like this: def __enter__(self): return self def __exit__(self, *exc_info): del exc_info self.close() So they do close themselves. You could create a single connection and reuse it with multiple cursors in with clauses. If you want to hide the logic of closing connections behind a with clause, e.g. a context manager, a simple way to do it would be like this: from contextlib import contextmanager import pymysql @contextmanager def get_connection(*args, **kwargs): connection = pymysql.connect(*args, **kwargs) try: yield connection finally: connection.close() You could then use that context manager like this: with get_connection(...) as con: with con.cursor() as cursor: cursor.execute(...) A: As it was pointed out, the Cursor takes care of itself, but all the Connection's support for context manager was removed completely just a few days ago, so the only option now is to write yours: https://github.com/PyMySQL/PyMySQL/pull/763 https://github.com/PyMySQL/PyMySQL/issues/446 A: As an alternative to this, since I wanted to support the context manager pattern for a connection, I implemented it with a monkey patch. Not the best approach, but it's something. import pymysql MONKEYPATCH_PYMYSQL_CONNECTION = True def monkeypatch_pymysql_connection(): Connection = pymysql.connections.Connection def enter_patch(self): return self def exit_patch(self, exc, value, traceback): try: self.rollback() # Implicit rollback when connection closed per PEP-249 finally: self.close() Connection.__enter__ = enter_patch Connection.__exit__ = exit_patch if MONKEYPATCH_PYMYSQL_CONNECTION: monkeypatch_pymysql_connection() MONKEYPATCH_PYMYSQL_CONNECTION = False # Prevent patching more than once This approach worked for my use case. I would prefer to have __enter__ and __exit__ methods in the Connection class. That approach, however, was rejected by the developers when they addressed the issue in late 2018. A: A recent update to Pymysql (https://github.com/PyMySQL/PyMySQL/pull/886/files) now calls close() on exit, so using with is supported and reflected in their documentation. import pymysql.cursors # Connect to the database connection = pymysql.connect(host='localhost', user='user', password='passwd', database='db', cursorclass=pymysql.cursors.DictCursor) with connection: . .
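With a recent PyMySQL release (where Connection.__exit__ finally closes the connection, as the last answer notes) the whole pattern can be written as two nested context managers; this sketch uses placeholder credentials:

import pymysql

connection = pymysql.connect(host="localhost", user="user",
                             password="passwd", database="db")
with connection:                          # closes the connection on exit
    with connection.cursor() as cursor:   # closes the cursor on exit
        cursor.execute("show tables")
        print(cursor.fetchall())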
Can I use pymysql.connect() with "with" statement?
The following is listed as example in pymysql: conn = pymysql.connect(...) with conn.cursor() as cursor: cursor.execute(...) ... conn.close() Can I use the following instead, or will this leave a lingering connection? (it executes successfully) import pymysql with pymysql.connect(...) as cursor: cursor.execute('show tables') (python 3, latest pymysql)
[ "This does not look safe, if you look here, the __enter__ and __exit__ functions are what are called in a with clause. For the pymysql connection they look like this:\ndef __enter__(self):\n \"\"\"Context manager that returns a Cursor\"\"\"\n return self.cursor()\n\ndef __exit__(self, exc, value, traceback):\n \"\"\"On successful exit, commit. On exception, rollback\"\"\"\n if exc:\n self.rollback()\n else:\n self.commit()\n\nSo it doesn't look like the exit clause closes the connection, which means it would be lingering. I'm not sure why they did it this way. You could make your own wrappers that do this though.\nYou could recycle a connection by creating multiple cursors with it (the source for cursors is here) the cursor methods look like this:\ndef __enter__(self):\n return self\n\ndef __exit__(self, *exc_info):\n del exc_info\n self.close()\n\nSo they do close themselves. You could create a single connection and reuse it with multiple cursors in with clauses.\nIf you want to hide the logic of closing connections behind a with clause, e.g. a context manager, a simple way to do it would be like this:\nfrom contextlib import contextmanager\nimport pymysql\n\n\n@contextmanager\ndef get_connection(*args, **kwargs):\n connection = pymysql.connect(*args, **kwargs)\n try:\n yield connection\n finally:\n connection.close()\n\nYou could then use that context manager like this:\nwith get_connection(...) as con:\n with con.cursor() as cursor:\n cursor.execute(...)\n\n", "As it was pointed out, the Cursor takes care of itself, but all the Connection's support for context manager was removed completely just a few days ago, so the only option now is to write yours:\nhttps://github.com/PyMySQL/PyMySQL/pull/763\nhttps://github.com/PyMySQL/PyMySQL/issues/446\n", "As an alternative to this, since I wanted to support the context manager pattern for a connection, I implemented it with a monkey patch. Not the best approach, but it's something.\nimport pymysql\n\n\nMONKEYPATCH_PYMYSQL_CONNECTION = True\n\n\ndef monkeypatch_pymysql_connection():\n Connection = pymysql.connections.Connection\n\n def enter_patch(self):\n return self\n\n def exit_patch(self, exc, value, traceback):\n try:\n self.rollback() # Implicit rollback when connection closed per PEP-249\n finally:\n self.close()\n\n Connection.__enter__ = enter_patch\n Connection.__exit__ = exit_patch\n\n\nif MONKEYPATCH_PYMYSQL_CONNECTION:\n monkeypatch_pymysql_connection()\n MONKEYPATCH_PYMYSQL_CONNECTION = False # Prevent patching more than once\n\n\nThis approach worked for my use case. I would prefer to have __enter__ and __exit__ methods in the Connection class. That approach, however, was rejected by the developers when they addressed the issue in late 2018.\n", "A recent update to Pymysql (https://github.com/PyMySQL/PyMySQL/pull/886/files) now calls close() on exit, so using with is supported and reflected in their documentation.\nimport pymysql.cursors\n\n# Connect to the database\nconnection = pymysql.connect(host='localhost',\n user='user',\n password='passwd',\n database='db',\n cursorclass=pymysql.cursors.DictCursor)\n\nwith connection:\n.\n.\n\n\n" ]
[ 18, 7, 1, 0 ]
[]
[]
[ "pymysql", "python", "with_statement" ]
stackoverflow_0031214658_pymysql_python_with_statement.txt
Q: Linkedin LogIn with Selenium I am creating a short program in Python - Selenium to login to my Linkedin Profile, it opens the new windows but I get an error on line 13 during the debug: Exception has occurred: AttributeError 'WebDriver' object has no attribute 'find_element_by_xpath' File "C:\Users\viale\Desktop\Automation\linkedin_selenium_auto.py", line 13, in <module> username = driver.find_element_by_xpath("//input[@name='session_key']") from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from selenium.webdriver.support.wait import WebDriverWait import time driver = webdriver.Chrome("C:/Users/viale/Desktop/Automation/chromedriver.exe") driver.get("https://linkedin.com") time.sleep(4) username = driver.find_element_by_xpath("//input[@name='session_key']") password = driver.find_element_by_xpath("//input[@name='session_password']") username.send_keys("username@gmail.com") password.send_keys("******") time.sleep(4) submit = driver.find_element_by_xpath("//button[@type='submit']").click() time.sleep(4) A: You have to mention like this: driver.find_element(By.XPATH,"//input[@name='session_key']")
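A fuller sketch of the script updated for Selenium 4 syntax, since find_element_by_* was removed there; it keeps the asker's driver path and XPaths, which are assumptions about the target machine and page:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By

driver = webdriver.Chrome(service=Service("C:/Users/viale/Desktop/Automation/chromedriver.exe"))
driver.get("https://linkedin.com")

username = driver.find_element(By.XPATH, "//input[@name='session_key']")
password = driver.find_element(By.XPATH, "//input[@name='session_password']")
username.send_keys("username@gmail.com")
password.send_keys("******")
driver.find_element(By.XPATH, "//button[@type='submit']").click()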
Linkedin LogIn with Selenium
I am creating a short program in Python - Selenium to login to my Linkedin Profile, it opens the new windows but I get an error on line 13 during the debug: Exception has occurred: AttributeError 'WebDriver' object has no attribute 'find_element_by_xpath' File "C:\Users\viale\Desktop\Automation\linkedin_selenium_auto.py", line 13, in <module> username = driver.find_element_by_xpath("//input[@name='session_key']") from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from selenium.webdriver.support.wait import WebDriverWait import time driver = webdriver.Chrome("C:/Users/viale/Desktop/Automation/chromedriver.exe") driver.get("https://linkedin.com") time.sleep(4) username = driver.find_element_by_xpath("//input[@name='session_key']") password = driver.find_element_by_xpath("//input[@name='session_password']") username.send_keys("username@gmail.com") password.send_keys("******") time.sleep(4) submit = driver.find_element_by_xpath("//button[@type='submit']").click() time.sleep(4)
[ "You have to mention like this:\ndriver.find_element(By.XPATH,\"//input[@name='session_key']\")\n\n" ]
[ 0 ]
[]
[]
[ "authentication", "linkedin", "python", "selenium" ]
stackoverflow_0074598558_authentication_linkedin_python_selenium.txt
Q: What is the shortcut key to comment multiple lines using PyCharm IDE? In Corey Schafer's Programming Terms: Mutable vs Immutable, at 3:06, he selected multiple lines and commented them out in PyCharm all in one action. What is this action? Is it a built-in shortcut in PyCharm that I can use or configure myself? A: This is a setting you can change and define in "Settings". The default is with Ctrl+/ for Windows, or Cmd+/ for Mac. A: Is depends on you're text editor , but probably all text editor use (ctrl + /) just highlight all the code you need to comments and use the shortcut , to know what shortcut using in you're favorite text editor search in google : YourTextEditor shortcuts A: If you use macbook build-in keyboard, this shortcut does not work. So you can assign new shortcut for this purpose by following steps; 1.Go keymap menu PyCharm -> Preferences -> Keymap 2.Find "comment with line comment" then click pencil sign "add keyboard shortcut" then assign your custom shortcut (press your favorite keyboard combination) A: This heavily depends on where you're writing your python code. If you're writing it in Notepad, there won't be a shortcut for commenting a line. However, if you use an IDE, you will probably have such capability alongside the ability to change the shortcut. Just search Google for keyboard shortcuts for your preferred IDE.
What is the shortcut key to comment multiple lines using PyCharm IDE?
In Corey Schafer's Programming Terms: Mutable vs Immutable, at 3:06, he selected multiple lines and commented them out in PyCharm all in one action. What is this action? Is it a built-in shortcut in PyCharm that I can use or configure myself?
[ "This is a setting you can change and define in \"Settings\".\nThe default is with Ctrl+/ for Windows, or Cmd+/ for Mac.\n", "Is depends on you're text editor , but probably all text editor use (ctrl + /) just highlight all the code you need to comments and use the shortcut , to know what shortcut using in you're favorite text editor search in google : YourTextEditor shortcuts \n", "If you use macbook build-in keyboard, this shortcut does not work. So you can assign new shortcut for this purpose by following steps;\n1.Go keymap menu\nPyCharm -> Preferences -> Keymap\n\n2.Find \"comment with line comment\" then click pencil sign \"add keyboard shortcut\" then assign your custom shortcut (press your favorite keyboard combination)\n", "This heavily depends on where you're writing your python code. If you're writing it in Notepad, there won't be a shortcut for commenting a line.\nHowever, if you use an IDE, you will probably have such capability alongside the ability to change the shortcut.\nJust search Google for keyboard shortcuts for your preferred IDE.\n" ]
[ 40, 4, 1, 0 ]
[]
[]
[ "comments", "pycharm", "python" ]
stackoverflow_0053426322_comments_pycharm_python.txt
Q: Django formsubmission gives me a 405 error I am trying to display a form and and take the submission in post of my class-based view. I am not using Django's form as it breaks my design. Code for my form: <form action="." method="POST" > <input type='hidden' name='pf_id' value='{{pf.id}}' /> <input type='hidden' name='content_type' value='portfolio' /> <textarea id="id_comment" name="comment"></textarea> <section><input type="submit" value="submit" name="commentSubmit" class="comment-button" title="submit" class="comment-button" /></section> </form> In views.py: class ProjectDetailView(FormMixin, DetailView): template_name = 'account/inner-profile-page.html' model = ProjectDetail context_object_name = 'project' def get_object(self, queryset=None): return get_object_or_404(ProjectDetail, title_slug = self.kwargs['title_slug']) def get_context_data(self, **kwargs): context = super(ProjectDetailView, self).get_context_data(**kwargs) projects = [] for st in SubType.objects.all(): user = self.get_object().user pd = ProjectDetail.objects.filter(user=user,project_sub_type__sub_type=st) if pd.count() > 0: projects.append((st.name, pd.count())) context['projects'] = projects return context def post(self, request, *args, **kwargs): import pdb;pdb.set_trace() I am expecting the post method to be called when form is submitted (hopefully I am right in my assumption), but it does not, as submitting this form takes me to a blank page. The URL does not change and I get 405 error message in my runserver shell. Why is this happening ? my urls are like this: url(r'^project-detail/(?P<title_slug>\w+)/$',ProjectDetailView.as_view(), name="project-detail-view"), url(r'^project-page/(?P<user_slug>.+)/$',projectPage.as_view(),name='projectPage'), A: I guess the problem is with your view. As you have inherited the FormMixin and DetailView neither does implement the POST method and hence django returns 405 error code. Try inheriting an updateview or createview to support post functionality. A: For those who don't want to use CreateView because they are not dealing with create model object and just want to use TemplateView and FormMixin to manage forms that are not related with the actual model, yo need to understand that this combination doesn't have a POSTmethod implementation. I you really want to achieve this with CBV you have to define your form as the following: class ProjectDetailView(FormMixin, DetailView, ProcessFormView): #Your code here def form_valid(self, form): return super().form_valid(form) def post(self, request, *args, **kwargs): return super().post(request, *args, **kwargs) Then you can use either post or form_valid method to manage the submit action. Which do the trick is ProcessFormView that will allow you POST methods in your view. A: The best solution is: success_url = reverse_lazy(your_path) You also need to import reverse_lazy from django.urls like so: from django.urls import reverse_lazy
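For completeness, the FormMixin plus DetailView combination documented by Django handles POST roughly like this. CommentForm is a hypothetical form class (the view needs a form_class even when the template renders the inputs by hand), and the success URL simply points back at the same page:

from django import forms
from django.urls import reverse
from django.views.generic import DetailView
from django.views.generic.edit import FormMixin

class CommentForm(forms.Form):          # hypothetical form matching the hand-written template
    comment = forms.CharField(widget=forms.Textarea)

class ProjectDetailView(FormMixin, DetailView):
    model = ProjectDetail               # the model from the question
    form_class = CommentForm
    template_name = 'account/inner-profile-page.html'

    def get_success_url(self):
        return reverse('project-detail-view', kwargs={'title_slug': self.object.title_slug})

    def post(self, request, *args, **kwargs):
        self.object = self.get_object()
        form = self.get_form()
        if form.is_valid():
            return self.form_valid(form)
        return self.form_invalid(form)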
Django formsubmission gives me a 405 error
I am trying to display a form and and take the submission in post of my class-based view. I am not using Django's form as it breaks my design. Code for my form: <form action="." method="POST" > <input type='hidden' name='pf_id' value='{{pf.id}}' /> <input type='hidden' name='content_type' value='portfolio' /> <textarea id="id_comment" name="comment"></textarea> <section><input type="submit" value="submit" name="commentSubmit" class="comment-button" title="submit" class="comment-button" /></section> </form> In views.py: class ProjectDetailView(FormMixin, DetailView): template_name = 'account/inner-profile-page.html' model = ProjectDetail context_object_name = 'project' def get_object(self, queryset=None): return get_object_or_404(ProjectDetail, title_slug = self.kwargs['title_slug']) def get_context_data(self, **kwargs): context = super(ProjectDetailView, self).get_context_data(**kwargs) projects = [] for st in SubType.objects.all(): user = self.get_object().user pd = ProjectDetail.objects.filter(user=user,project_sub_type__sub_type=st) if pd.count() > 0: projects.append((st.name, pd.count())) context['projects'] = projects return context def post(self, request, *args, **kwargs): import pdb;pdb.set_trace() I am expecting the post method to be called when form is submitted (hopefully I am right in my assumption), but it does not, as submitting this form takes me to a blank page. The URL does not change and I get 405 error message in my runserver shell. Why is this happening ? my urls are like this: url(r'^project-detail/(?P<title_slug>\w+)/$',ProjectDetailView.as_view(), name="project-detail-view"), url(r'^project-page/(?P<user_slug>.+)/$',projectPage.as_view(),name='projectPage'),
[ "I guess the problem is with your view. As you have inherited the FormMixin and DetailView neither does implement the POST method and hence django returns 405 error code. Try inheriting an updateview or createview to support post functionality.\n", "For those who don't want to use CreateView because they are not dealing with create model object and just want to use TemplateView and FormMixin to manage forms that are not related with the actual model, yo need to understand that this combination doesn't have a POSTmethod implementation.\nI you really want to achieve this with CBV you have to define your form as the following:\nclass ProjectDetailView(FormMixin, DetailView, ProcessFormView):\n #Your code here\n\n def form_valid(self, form):\n return super().form_valid(form)\n\n def post(self, request, *args, **kwargs):\n return super().post(request, *args, **kwargs)\n\n\nThen you can use either post or form_valid method to manage the submit action.\nWhich do the trick is ProcessFormView that will allow you POST methods in your view.\n", "The best solution is:\nsuccess_url = reverse_lazy(your_path)\n\nYou also need to import reverse_lazy from django.urls like so:\nfrom django.urls import reverse_lazy\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "django", "forms", "python" ]
stackoverflow_0022477586_django_forms_python.txt
Q: Cygwin: (python) ERROR: Failed building wheel for cryptography I'm using cygwin to develop a django application. And I'm stuck at a package install call digikey-api. It requires a cryptography package to be installed and it fails with the following error messages: generating cffi module 'build/temp.cygwin-3.2.0-x86_64-3.8/_openssl.c' running build_rust =============================DEBUG ASSISTANCE============================= If you are seeing a compilation error please try the following steps to successfully install cryptography: 1) Upgrade to the latest pip and try again. This will fix errors for most users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip 2) Read https://cryptography.io/en/latest/installation.html for specific instructions for your platform. 3) Check our frequently asked questions for more information: https://cryptography.io/en/latest/faq.html 4) Ensure you have a recent Rust toolchain installed: https://cryptography.io/en/latest/installation.html#rust 5) If you are experiencing issues with Rust for *this release only* you may set the environment variable `CRYPTOGRAPHY_DONT_BUILD_RUST=1`. =============================DEBUG ASSISTANCE============================= error: can't find Rust compiler If you are using an outdated pip version, it is possible a prebuilt wheel is available for this package but pip is not able to install from it. Installing from the wheel would avoid the need for a Rust compiler. To update pip, run: pip install --upgrade pip and then retry package installation. If you did intend to build this package from source, try installing a Rust compiler from your system package manager and ensure it is on the PATH during installation. Alternatively, rustup (available at https://rustup.rs) is the recommended way to download and update the Rust compiler toolchain. This package requires Rust >=1.41.0. ---------------------------------------- ERROR: Failed building wheel for cryptography Failed to build cryptography ERROR: Could not build wheels for cryptography which use PEP 517 and cannot be installed directly (venv) I've been trying various solution but no success so far (ie. pip install --upgrade pip did not solve the issue). Funny this is that it fails in this python virtual env. but it is succesfull outside this environment (same python version 3.8.10). thanks for any help getting around this. Sebastien A: The message is very clear error: can't find Rust compiler As Cygwin has NO rust compiler, you can not build it https://cygwin.com/packages/package_list.html A: FYI, The current workaround is to force installation of cryptography at 3.2.1 and pyopenssl at 21.0.0. (or cryptography==3.3.2)
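The pinned-version workaround from the second answer, together with the environment variable suggested in the build output, would look something like this inside the virtualenv:

export CRYPTOGRAPHY_DONT_BUILD_RUST=1   # only relevant for releases that can still build without Rust
pip install "cryptography==3.3.2" "pyOpenSSL==21.0.0"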
Cygwin: (python) ERROR: Failed building wheel for cryptography
I'm using cygwin to develop a django application. And I'm stuck at a package install call digikey-api. It requires a cryptography package to be installed and it fails with the following error messages: generating cffi module 'build/temp.cygwin-3.2.0-x86_64-3.8/_openssl.c' running build_rust =============================DEBUG ASSISTANCE============================= If you are seeing a compilation error please try the following steps to successfully install cryptography: 1) Upgrade to the latest pip and try again. This will fix errors for most users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip 2) Read https://cryptography.io/en/latest/installation.html for specific instructions for your platform. 3) Check our frequently asked questions for more information: https://cryptography.io/en/latest/faq.html 4) Ensure you have a recent Rust toolchain installed: https://cryptography.io/en/latest/installation.html#rust 5) If you are experiencing issues with Rust for *this release only* you may set the environment variable `CRYPTOGRAPHY_DONT_BUILD_RUST=1`. =============================DEBUG ASSISTANCE============================= error: can't find Rust compiler If you are using an outdated pip version, it is possible a prebuilt wheel is available for this package but pip is not able to install from it. Installing from the wheel would avoid the need for a Rust compiler. To update pip, run: pip install --upgrade pip and then retry package installation. If you did intend to build this package from source, try installing a Rust compiler from your system package manager and ensure it is on the PATH during installation. Alternatively, rustup (available at https://rustup.rs) is the recommended way to download and update the Rust compiler toolchain. This package requires Rust >=1.41.0. ---------------------------------------- ERROR: Failed building wheel for cryptography Failed to build cryptography ERROR: Could not build wheels for cryptography which use PEP 517 and cannot be installed directly (venv) I've been trying various solution but no success so far (ie. pip install --upgrade pip did not solve the issue). Funny this is that it fails in this python virtual env. but it is succesfull outside this environment (same python version 3.8.10). thanks for any help getting around this. Sebastien
[ "The message is very clear\nerror: can't find Rust compiler\n\nAs Cygwin has NO rust compiler, you can not build it\nhttps://cygwin.com/packages/package_list.html\n", "FYI,\nThe current workaround is to force installation of cryptography at 3.2.1 and pyopenssl at 21.0.0.\n(or cryptography==3.3.2)\n" ]
[ 1, 0 ]
[]
[]
[ "cygwin", "pip", "python" ]
stackoverflow_0068438667_cygwin_pip_python.txt
Q: Can't install python packages on my ubuntu virtual machine This is the full script: (venv) ubuntu@ubuntu:~$ pip install wxPython Collecting wxPython Using cached wxPython-4.2.0.tar.gz (71.0 MB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [12 lines of output] Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "/tmp/pip-install-jlwwpkvj/wxpython_66c7996a596740a4b92c4f3a3724336d/setup.py", line 27, in <module> from buildtools.config import Config, msg, opj, runcmd, canGetSOName, getSOName File "/tmp/pip-install-jlwwpkvj/wxpython_66c7996a596740a4b92c4f3a3724336d/buildtools/config.py", line 30, in <module> from attrdict import AttrDict File "/home/ubuntu/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/attrdict/__init__.py", line 5, in <module> from attrdict.mapping import AttrMap File "/home/ubuntu/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/attrdict/mapping.py", line 4, in <module> from collections import Mapping ImportError: cannot import name 'Mapping' from 'collections' (/usr/lib/python3.10/collections/__init__.py) [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. (venv) ubuntu@ubuntu:~$ I think cause of the problem is virtuel machine. I can download packages on my host OS. I am using UTM for ubuntu. I try updating pip and setuptolls. I reinstalled differently ubuntu for multiple times. I am searcing forums for weeks and still nothing. A: You need to use an older version of Python (I am guessing 3.9). The best option is probably to set it up in virtualenv like this: sudo apt update sudo apt install python3.9 sudo apt-get install python3.9-dev python3.9-venv python3.9 -m venv myenv source venv/bin/activate pip install wxPython A: Try to download the wheel (*.whl) file for that package and then: pip install <wheel file path>
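A note worth adding to the answers above: the traceback shows the failure happening inside the attrdict build dependency, whose from collections import Mapping stopped working in Python 3.10 because those aliases now live only in collections.abc. The sketch below only illustrates that root cause (it is not a patch you can apply to the pip build); it runs on both 3.9 and 3.10:

import sys

print(sys.version_info)

try:
    from collections import Mapping        # works on Python <= 3.9 (deprecated alias)
except ImportError:
    from collections.abc import Mapping    # the only location on Python 3.10+
    print("collections.Mapping is gone - this is why attrdict's import fails")

print(Mapping)                              # the ABC, whichever branch succeeded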
Can't install python packages on my ubuntu virtual machine
This is the full script: (venv) ubuntu@ubuntu:~$ pip install wxPython Collecting wxPython Using cached wxPython-4.2.0.tar.gz (71.0 MB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [12 lines of output] Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "/tmp/pip-install-jlwwpkvj/wxpython_66c7996a596740a4b92c4f3a3724336d/setup.py", line 27, in <module> from buildtools.config import Config, msg, opj, runcmd, canGetSOName, getSOName File "/tmp/pip-install-jlwwpkvj/wxpython_66c7996a596740a4b92c4f3a3724336d/buildtools/config.py", line 30, in <module> from attrdict import AttrDict File "/home/ubuntu/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/attrdict/__init__.py", line 5, in <module> from attrdict.mapping import AttrMap File "/home/ubuntu/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/attrdict/mapping.py", line 4, in <module> from collections import Mapping ImportError: cannot import name 'Mapping' from 'collections' (/usr/lib/python3.10/collections/__init__.py) [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. (venv) ubuntu@ubuntu:~$ I think cause of the problem is virtuel machine. I can download packages on my host OS. I am using UTM for ubuntu. I try updating pip and setuptolls. I reinstalled differently ubuntu for multiple times. I am searcing forums for weeks and still nothing.
[ "You need to use an older version of Python (I am guessing 3.9). The best option is probably to set it up in virtualenv like this:\nsudo apt update\nsudo apt install python3.9\nsudo apt-get install python3.9-dev python3.9-venv\npython3.9 -m venv myenv\nsource venv/bin/activate\npip install wxPython\n\n", "Try to download the wheel (*.whl) file for that package and then:\npip install <wheel file path>\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "ubuntu" ]
stackoverflow_0074598555_python_ubuntu.txt
Q: The Fastest way to convert bytes to int32_t list in Python I need to convert bytes data to signed int32_t format and put it in to a list. (It is about recieveing ADC data from external ADC converter via Ethernet). For me the fastest method so far: tempADC = np.ndarray(256,np.intc,rawADC).tolist() 256 - I will have 256 int32_t values rawADC - are raw bytes: b'T\x08\x00\x00W\xf2\xff\xff\xfe\.... Some measurement (1000 iterations): mean = 20 usec, max = 1130 usec Second method: ln = int(len(rawADC)/4) tadc = [0]*256 #prealloc list - buffer for i in range(ln): tadc[i] = (int.from_bytes(rawADC[i:(i+4)], byteorder='little', signed=True)) Some measurement (1000 iterations): mean = 181 usec, max = 1150 usec Is there any more effective way to do that? It is weird the big difference between mean and max time - but I have 2 threads in my program so probably this is the reason?!? A: I presume your input data from the ADC can be simulated with: import numpy as np input = np.arange(256,dtype=np.uint32).tobytes() So, I would try this to unpack: import struct %timeit struct.unpack('<256I',input) 735 ns ± 1.57 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
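One caveat worth adding to the struct answer above: the format character I is unsigned, while the question asks for signed int32_t values; the signed counterpart is lowercase i. A small sketch with made-up stand-in data (the real rawADC payload from the ADC is not reproduced here):

import struct
import numpy as np

raw = np.arange(-128, 128, dtype=np.int32).tobytes()   # 1024 bytes of stand-in ADC data

# struct: '<' = little-endian, 'i' = signed 32-bit (use 'I' only for unsigned data)
as_struct = list(struct.unpack('<256i', raw))

# NumPy alternative: frombuffer gives a zero-copy int32 view of the bytes
as_numpy = np.frombuffer(raw, dtype='<i4').tolist()

assert as_struct == as_numpy
assert as_struct[0] == -128 and as_struct[-1] == 127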
The Fastest way to convert bytes to int32_t list in Python
I need to convert bytes data to signed int32_t format and put it into a list. (It is about receiving ADC data from an external ADC converter via Ethernet). For me the fastest method so far: tempADC = np.ndarray(256,np.intc,rawADC).tolist() 256 - I will have 256 int32_t values rawADC - are raw bytes: b'T\x08\x00\x00W\xf2\xff\xff\xfe\.... Some measurement (1000 iterations): mean = 20 usec, max = 1130 usec Second method: ln = int(len(rawADC)/4) tadc = [0]*256 #prealloc list - buffer for i in range(ln): tadc[i] = (int.from_bytes(rawADC[i:(i+4)], byteorder='little', signed=True)) Some measurement (1000 iterations): mean = 181 usec, max = 1150 usec Is there any more effective way to do that? The big difference between the mean and max time is strange - but I have 2 threads in my program, so that is probably the reason?!?
[ "I presume your input data from the ADC can be simulated with:\nimport numpy as np\ninput = np.arange(256,dtype=np.uint32).tobytes()\n\nSo, I would try this to unpack:\nimport struct\n%timeit struct.unpack('<256I',input)\n735 ns ± 1.57 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)\n\n" ]
[ 2 ]
[]
[]
[ "list", "numpy", "python" ]
stackoverflow_0074598518_list_numpy_python.txt
Q: Selecting dataframe columns with boolean, rest have to be false I am trying to filter a dataframe with a specific condition, but I don't know how to make sure that all other columns are False. A | B | C | D | E | F True True False False False False True False True False False True True True True False False False Given this df I want to select every row where A is True and B or C is True. df.loc[(df.A == True) & ((df.B == True) or (df.C == True))] or df.query('A and (b or C) ') and my result would be A | B | C | D | E | F True True False False False False True False True False False True True False True False False False But how can I make sure that all other columns that are not mentioned (D,E,F) are False, so that the result is A | B | C | D | E | F True True False False False False True False True False False False A: You can use another mask with columns.difference and any: m1 = df['A'] & (df['B'] | df['C']) m2 = ~df[df.columns.difference(['A', 'B', 'C'])].any(axis=1) df.loc[m1 & m2] Output: A B C D E F 0 True True False False False False 2 True True True False False False
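For completeness, a self-contained run of the accepted idea on a toy frame (the column values below are made up to mirror the question), plus a query-style equivalent assembled from the same columns.difference call:

import pandas as pd

df = pd.DataFrame({'A': [True, True, True],
                   'B': [True, False, True],
                   'C': [False, True, True],
                   'D': [False, False, False],
                   'E': [False, False, False],
                   'F': [False, True, False]})

m1 = df['A'] & (df['B'] | df['C'])                             # A and (B or C)
m2 = ~df[df.columns.difference(['A', 'B', 'C'])].any(axis=1)   # every other column is False
print(df.loc[m1 & m2])

# The same filter as a query string built from the remaining column names
others = ' or '.join(df.columns.difference(['A', 'B', 'C']))
print(df.query(f"A and (B or C) and not ({others})"))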
Selecting dataframe columns with boolean, rest have to be false
I am trying to filter a dataframe with a specific condition, but I don't know how to make sure that all other columns are False. A | B | C | D | E | F True True False False False False True False True False False True True True True False False False Given this df I want to select every row where A is True and B or C is True. df.loc[(df.A == True) & ((df.B == True) or (df.C == True))] or df.query('A and (b or C) ') and my result would be A | B | C | D | E | F True True False False False False True False True False False True True False True False False False But how can I make sure that all other columns that are not mentioned (D,E,F) are False, so that the result is A | B | C | D | E | F True True False False False False True False True False False False
[ "You can use another mask with columns.difference and any:\nm1 = df['A'] & (df['B'] | df['C'])\nm2 = ~df[df.columns.difference(['A', 'B', 'C'])].any(axis=1)\ndf.loc[m1 & m2]\n\nOutput:\n A B C D E F\n0 True True False False False False\n2 True True True False False False\n\n" ]
[ 1 ]
[]
[]
[ "boolean_logic", "dataframe", "lines_of_code", "pandas", "python" ]
stackoverflow_0074598811_boolean_logic_dataframe_lines_of_code_pandas_python.txt
Q: How to update values of python 2D array using loop and animate changing the colors I am looking for a way to update the values of the array numphy array created by creating an update function to update the values of the previous array and change the colors of the new values updated below is my code though it only display the final frame. My Question is how do i display the entire process to show how each cell satisfying the condition given change color step by step. from tkinter import N import args as args import numpy as np from matplotlib import pyplot as plt, animation from matplotlib import colors # setting up the values for the grid ON = 1 OFF = 0 vals = [ON, OFF] data = np.array([ [0,0,0,0,1,1,0,0,0,0], [0,0,0,0,0,0,0,0,0,0], [0,0,2,0,0,0,0,2,0,0], [0,0,0,0,0,0,0,0,0,0], [0,0,0,0,2,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0] ]) def update(previous_array): for row in range(previous_array): for cell in range(row): if cell > 1: color = 'red' return previous_array cmap = colors.ListedColormap(['Blue','red']) # plt.figure(figsize=(6,6)) fig, ax = plt.subplots() # im = ax.imshow(data, cmap=cmap) plt.pcolor(data[::-1], cmap=cmap, edgecolors='k', linewidths=2) for i in range(len(data)): for j in range(len(data[1])): color = 'red' if data[i, j] == 0 else 'blue' ani = animation.FuncAnimation(fig, update, frames=30, interval=50, save_count=50) ani.save('basic_animation.mp4', fps=30, extra_args=['-vcodec', 'libx264']) plt.show() `` A: Does it need to be a movie file format? You can probably use the Plotly animation feature, where you create a new heatmap for each frame: https://plotly.com/python/animations/ https://plotly.com/python/visualizing-mri-volume-slices/ https://plotly.com/python/heatmaps/ edit: someone did something similar already (at least in an earlier version): https://plotly.com/python/v3/heatmap-animation/ The example below takes your data and some altered versions into an animation. You shoud replace the list of frames of data by your own list (pre-generate these results). import plotly.graph_objects as go import numpy as np data = np.array([ [0,0,0,0,1,1,0,0,0,0], [0,0,0,0,0,0,0,0,0,0], [0,0,2,0,0,0,0,2,0,0], [0,0,0,0,0,0,0,0,0,0], [0,0,0,0,2,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0] ]) data2 = data.copy() data3 = data.copy() data4 = data.copy() data5 = data.copy() data2[data2 == 1] = 3 data3[data3 == 2] = 0 data4[data4 == 0] = 4 data5[data5 > 0] = 2 fig = go.Figure( data=[go.Heatmap(z=data*0)], layout=go.Layout( xaxis=dict(range=[0, data.shape[1]-1], dtick=1), yaxis=dict(range=[0, data.shape[0]-1], dtick=1), title="Start Title", updatemenus=[dict( type="buttons", buttons=[dict(label="Play", method="animate", args=[None])])] ), frames=[go.Frame(data=[go.Heatmap(z=data)], layout=go.Layout(title_text="Title1")), go.Frame(data=[go.Heatmap(z=data2)], layout=go.Layout(title_text="Title2")), go.Frame(data=[go.Heatmap(z=data3)], layout=go.Layout(title_text="Title3")), go.Frame(data=[go.Heatmap(z=data4)], layout=go.Layout(title_text="Title4")), go.Frame(data=[go.Heatmap(z=data5)], layout=go.Layout(title_text="Title5"))] ) fig.show()
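If staying with matplotlib (rather than switching to Plotly) is preferred, the missing piece in the question's code is that update() has to modify the grid and push it back into the drawn artist; FuncAnimation then records one frame per call. A minimal sketch follows - the "values greater than 1 spread one cell to the right" rule is an invented stand-in for whatever update rule the real simulation uses, and saving to mp4 assumes ffmpeg is on the PATH:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors, animation

data = np.array([
    [0,0,0,0,1,1,0,0,0,0],
    [0,0,0,0,0,0,0,0,0,0],
    [0,0,2,0,0,0,0,2,0,0],
    [0,0,0,0,0,0,0,0,0,0],
    [0,0,0,0,2,0,0,0,0,0],
    [0,0,0,0,0,0,0,0,0,0],
    [0,0,0,0,0,0,0,0,0,0],
    [0,0,0,0,0,0,0,0,0,0]])

cmap = colors.ListedColormap(['blue', 'red'])
fig, ax = plt.subplots()
im = ax.imshow(data > 1, cmap=cmap, vmin=0, vmax=1)   # red wherever the cell value is > 1

def update(frame):
    global data
    shifted = np.roll(data, 1, axis=1)                # invented rule: values > 1 spread right
    data = np.where(shifted > 1, shifted, data)
    im.set_data(data > 1)                             # push the new grid into the artist
    return [im]

ani = animation.FuncAnimation(fig, update, frames=30, interval=200, blit=True)
ani.save('basic_animation.mp4', fps=5)                # requires ffmpeg
plt.show()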
How to update values of python 2D array using loop and animate changing the colors
I am looking for a way to update the values of the array numphy array created by creating an update function to update the values of the previous array and change the colors of the new values updated below is my code though it only display the final frame. My Question is how do i display the entire process to show how each cell satisfying the condition given change color step by step. from tkinter import N import args as args import numpy as np from matplotlib import pyplot as plt, animation from matplotlib import colors # setting up the values for the grid ON = 1 OFF = 0 vals = [ON, OFF] data = np.array([ [0,0,0,0,1,1,0,0,0,0], [0,0,0,0,0,0,0,0,0,0], [0,0,2,0,0,0,0,2,0,0], [0,0,0,0,0,0,0,0,0,0], [0,0,0,0,2,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0] ]) def update(previous_array): for row in range(previous_array): for cell in range(row): if cell > 1: color = 'red' return previous_array cmap = colors.ListedColormap(['Blue','red']) # plt.figure(figsize=(6,6)) fig, ax = plt.subplots() # im = ax.imshow(data, cmap=cmap) plt.pcolor(data[::-1], cmap=cmap, edgecolors='k', linewidths=2) for i in range(len(data)): for j in range(len(data[1])): color = 'red' if data[i, j] == 0 else 'blue' ani = animation.FuncAnimation(fig, update, frames=30, interval=50, save_count=50) ani.save('basic_animation.mp4', fps=30, extra_args=['-vcodec', 'libx264']) plt.show() ``
[ "Does it need to be a movie file format?\nYou can probably use the Plotly animation feature, where you create a new heatmap for each frame:\nhttps://plotly.com/python/animations/\nhttps://plotly.com/python/visualizing-mri-volume-slices/\nhttps://plotly.com/python/heatmaps/\nedit: someone did something similar already (at least in an earlier version):\nhttps://plotly.com/python/v3/heatmap-animation/\nThe example below takes your data and some altered versions into an animation. You shoud replace the list of frames of data by your own list (pre-generate these results).\nimport plotly.graph_objects as go\nimport numpy as np\n\ndata = np.array([\n [0,0,0,0,1,1,0,0,0,0],\n [0,0,0,0,0,0,0,0,0,0],\n [0,0,2,0,0,0,0,2,0,0],\n [0,0,0,0,0,0,0,0,0,0],\n [0,0,0,0,2,0,0,0,0,0],\n [0,0,0,0,0,0,0,0,0,0],\n [0,0,0,0,0,0,0,0,0,0],\n [0,0,0,0,0,0,0,0,0,0]\n ])\n\ndata2 = data.copy()\ndata3 = data.copy()\ndata4 = data.copy()\ndata5 = data.copy()\n\ndata2[data2 == 1] = 3\ndata3[data3 == 2] = 0\ndata4[data4 == 0] = 4\ndata5[data5 > 0] = 2\n\nfig = go.Figure(\n data=[go.Heatmap(z=data*0)],\n layout=go.Layout(\n xaxis=dict(range=[0, data.shape[1]-1],\n dtick=1),\n yaxis=dict(range=[0, data.shape[0]-1],\n dtick=1),\n title=\"Start Title\",\n updatemenus=[dict(\n type=\"buttons\",\n buttons=[dict(label=\"Play\",\n method=\"animate\",\n args=[None])])]\n ),\n frames=[go.Frame(data=[go.Heatmap(z=data)],\n layout=go.Layout(title_text=\"Title1\")),\n go.Frame(data=[go.Heatmap(z=data2)],\n layout=go.Layout(title_text=\"Title2\")),\n go.Frame(data=[go.Heatmap(z=data3)],\n layout=go.Layout(title_text=\"Title3\")),\n go.Frame(data=[go.Heatmap(z=data4)],\n layout=go.Layout(title_text=\"Title4\")),\n go.Frame(data=[go.Heatmap(z=data5)],\n layout=go.Layout(title_text=\"Title5\"))]\n)\n\nfig.show()\n\n" ]
[ 0 ]
[]
[]
[ "2d", "animation", "arrays", "python" ]
stackoverflow_0074598238_2d_animation_arrays_python.txt
Q: Drop function removes more indices in pandas than it should I am trying to concatinate an older df(main_df) with a newer (ellicom_df) and then drop all the rows where i have the same manufacturer but a different date from the one making the update. However the code drops far too many lines than it should. In the example below old the main_df has 6269 lines the new (ellicom_df) has 7126 , the updated has (correctly) 13395 however after the drop i only get 3472 lines in the updated main_df, but i should have 7126 . I know the problem is either in the drop or in the date but i cant figure out where print(ellicom_df.shape) print(main_df.shape) (7126, 4) (6269, 8) date_=pd.Timestamp('now').date() ellicom_df['UPDATED'] = date_ ellicom_df['MANUFACTURER']='ELLICOM' main_df = pd.concat([main_df, ellicom_df], sort=False) main_df.shape (13395, 9) main_df.drop(main_df.loc[(main_df['MANUFACTURER']=='ELLICOM') & (main_df['UPDATED']!=date_)].index, inplace=True) main_df.shape (3472, 9) here is an example of the df's main_df: A: Why not simply filter this way: main_df = main_df[(main_df['MANUFACTURER']!='ELLICOM') | (main_df['UPDATED']==date_)]
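A likely cause worth spelling out (an educated guess from the shapes shown, not something the poster confirmed): pd.concat without ignore_index=True keeps both frames' original row labels, so each index label collected for .drop() can match several rows. A tiny sketch of the effect and of the fix:

import pandas as pd

a = pd.DataFrame({'MANUFACTURER': ['X', 'Y']})               # labels 0, 1
b = pd.DataFrame({'MANUFACTURER': ['ELLICOM', 'ELLICOM']})   # labels 0, 1 again

df = pd.concat([a, b], sort=False)
print(df.drop(0).shape)    # (2, 1): dropping label 0 removed a row from *both* frames

df = pd.concat([a, b], sort=False, ignore_index=True)        # labels 0..3, all unique
print(df.drop(0).shape)    # (3, 1): exactly one row removed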
Drop function removes more indices in pandas than it should
I am trying to concatenate an older df (main_df) with a newer one (ellicom_df) and then drop all the rows that have the same manufacturer but a different date from the one making the update. However, the code drops far more lines than it should. In the example below the old main_df has 6269 lines, the new (ellicom_df) has 7126, and the concatenated result has (correctly) 13395 lines; however, after the drop I only get 3472 lines in the updated main_df, but I should have 7126. I know the problem is either in the drop or in the date, but I can't figure out where. print(ellicom_df.shape) print(main_df.shape) (7126, 4) (6269, 8) date_=pd.Timestamp('now').date() ellicom_df['UPDATED'] = date_ ellicom_df['MANUFACTURER']='ELLICOM' main_df = pd.concat([main_df, ellicom_df], sort=False) main_df.shape (13395, 9) main_df.drop(main_df.loc[(main_df['MANUFACTURER']=='ELLICOM') & (main_df['UPDATED']!=date_)].index, inplace=True) main_df.shape (3472, 9) Here is an example of the df's main_df:
[ "Why not simply filter this way:\nmain_df = main_df[(main_df['MANUFACTURER']!='ELLICOM') | (main_df['UPDATED']==date_)]\n\n" ]
[ 1 ]
[]
[]
[ "concatenation", "drop", "pandas", "python" ]
stackoverflow_0074598819_concatenation_drop_pandas_python.txt
Q: How to cache pip packages within Azure Pipelines Although this source provides a lot of information on caching within Azure pipelines, it is not clear how to cache Python pip packages for a Python project. How to proceed if one is willing to cache Pip packages on an Azure pipelines build? According to this, it may be so that pip cache will be enabled by default in the future. As far as I know it is not yet the case. A: I used the pre-commit documentation as inspiration: https://pre-commit.com/#azure-pipelines-example https://github.com/asottile/azure-pipeline-templates/blob/master/job--pre-commit.yml and configured the following Python pipeline with Anaconda: pool: vmImage: 'ubuntu-latest' variables: CONDA_ENV: foobar-env CONDA_HOME: /usr/share/miniconda/envs/$(CONDA_ENV)/ steps: - script: echo "##vso[task.prependpath]$CONDA/bin" displayName: Add conda to PATH - task: Cache@2 displayName: Use cached Anaconda environment inputs: key: conda | environment.yml path: $(CONDA_HOME) cacheHitVar: CONDA_CACHE_RESTORED - script: conda env create --file environment.yml displayName: Create Anaconda environment (if not restored from cache) condition: eq(variables.CONDA_CACHE_RESTORED, 'false') - script: | source activate $(CONDA_ENV) pytest displayName: Run unit tests A: To cache a standard pip install use this: variables: # variables are automatically exported as environment variables # so this will override pip's default cache dir - name: pip_cache_dir value: $(Pipeline.Workspace)/.pip steps: - task: Cache@2 inputs: key: 'pip | "$(Agent.OS)" | requirements.txt' restoreKeys: | pip | "$(Agent.OS)" path: $(pip_cache_dir) displayName: Cache pip - script: | pip install -r requirements.txt displayName: "pip install" A: I wasn't very happy with the standard pip cache implementation that is mentioned in the official documentation. You basically always install your dependencies normally, which means that pip will perform loads of checks that take up time. Pip will find the cached builds (*.whl, *.tar.gz) eventually, but it all takes up time. You can opt to use venv or conda instead, but for me it lead to buggy situations with unexpected behaviour. What I ended up doing instead was using pip download and pip install separately: variables: pipDownloadDir: $(Pipeline.Workspace)/.pip steps: - task: Cache@2 displayName: Load cache inputs: key: 'pip | "$(Agent.OS)" | requirements.txt' path: $(pipDownloadDir) cacheHitVar: cacheRestored - script: pip download -r requirements.txt --dest=$(pipDownloadDir) displayName: "Download requirements" condition: eq(variables.cacheRestored, 'false') - script: pip install -r requirements.txt --no-index --find-links=$(pipDownloadDir) displayName: "Install requirements"
How to cache pip packages within Azure Pipelines
Although this source provides a lot of information on caching within Azure pipelines, it is not clear how to cache Python pip packages for a Python project. How to proceed if one is willing to cache Pip packages on an Azure pipelines build? According to this, it may be so that pip cache will be enabled by default in the future. As far as I know it is not yet the case.
[ "I used the pre-commit documentation as inspiration:\n\nhttps://pre-commit.com/#azure-pipelines-example\nhttps://github.com/asottile/azure-pipeline-templates/blob/master/job--pre-commit.yml\n\nand configured the following Python pipeline with Anaconda:\npool:\n vmImage: 'ubuntu-latest'\n\nvariables:\n CONDA_ENV: foobar-env\n CONDA_HOME: /usr/share/miniconda/envs/$(CONDA_ENV)/\n\nsteps:\n- script: echo \"##vso[task.prependpath]$CONDA/bin\"\n displayName: Add conda to PATH\n\n- task: Cache@2\n displayName: Use cached Anaconda environment\n inputs:\n key: conda | environment.yml\n path: $(CONDA_HOME)\n cacheHitVar: CONDA_CACHE_RESTORED\n\n- script: conda env create --file environment.yml\n displayName: Create Anaconda environment (if not restored from cache)\n condition: eq(variables.CONDA_CACHE_RESTORED, 'false')\n\n- script: |\n source activate $(CONDA_ENV)\n pytest\n displayName: Run unit tests\n\n", "To cache a standard pip install use this:\nvariables:\n # variables are automatically exported as environment variables\n # so this will override pip's default cache dir\n - name: pip_cache_dir\n value: $(Pipeline.Workspace)/.pip\n\nsteps:\n - task: Cache@2\n inputs:\n key: 'pip | \"$(Agent.OS)\" | requirements.txt'\n restoreKeys: |\n pip | \"$(Agent.OS)\"\n path: $(pip_cache_dir)\n displayName: Cache pip\n\n - script: |\n pip install -r requirements.txt\n displayName: \"pip install\"\n\n", "I wasn't very happy with the standard pip cache implementation that is mentioned in the official documentation. You basically always install your dependencies normally, which means that pip will perform loads of checks that take up time. Pip will find the cached builds (*.whl, *.tar.gz) eventually, but it all takes up time. You can opt to use venv or conda instead, but for me it lead to buggy situations with unexpected behaviour. What I ended up doing instead was using pip download and pip install separately:\nvariables:\n pipDownloadDir: $(Pipeline.Workspace)/.pip\n\nsteps:\n- task: Cache@2\n displayName: Load cache\n inputs:\n key: 'pip | \"$(Agent.OS)\" | requirements.txt'\n path: $(pipDownloadDir)\n cacheHitVar: cacheRestored\n\n- script: pip download -r requirements.txt --dest=$(pipDownloadDir)\n displayName: \"Download requirements\"\n condition: eq(variables.cacheRestored, 'false')\n\n- script: pip install -r requirements.txt --no-index --find-links=$(pipDownloadDir)\n displayName: \"Install requirements\"\n\n" ]
[ 4, 3, 0 ]
[]
[]
[ "azure_pipelines", "caching", "pip", "python" ]
stackoverflow_0062420695_azure_pipelines_caching_pip_python.txt
Q: How to store the positions of an element in string in a dictionary (Python)? I want to get all the positions (indexes) of an element in a string and store them in a dictionary. This is what I've tried: string = "This is an example" test = {letter: pos for pos, letter in enumerate(string)} But this only gives the last position of the letter. I'd like all positions, desired output: test["a"] {8, 13} A: At the moment you are overwriting the dictionary values. For example, >>> my_dict = {} >>> my_dict['my_val'] = 1 # creating new value >>> my_dict {'my_val': 1} >>> my_dict['my_val'] = 2 # overwriting the value for `my_val` >>> my_dict {'my_val': 2} If you want to keep all values for a key you can use a list along with dict.setdefault method . >>> print(dict.setdefault.__doc__) Insert key with a value of default if key is not in the dictionary. Return the value for key if key is in the dictionary, else default. >>> >>> result = {} >>> string = "This is an example" >>> >>> for index, value in enumerate(string): ... result.setdefault(value, []).append(index) ... >>> result["a"] [8, 13] A: FOR LIST Creating a dictionary with the keys being the characters in the input_string and the characters being the indices of the characters in the input_string. output = {} Creating a input_string variable called input_string and assigning it the character "This is an example" input_string = "This is an example" Iterating through the input_string and assigning the index of the character to index and the character to character. output.setdefault(character, []) is checking if the key character exists in the dictionary output. If it does not exist, it will create the key character and assign it the character []. If it does exist, it will return the character of the key character. Then, .append(index) will append the character of index to the character of the key character. for index, character in enumerate(input_string): output.setdefault(character, []).append(index) Desired Output output["i"] [2, 5] In SORT CODE BE LIKE:- output = {} input_string = "This is an example" for index, character in enumerate(input_string): output.setdefault(character, []).append(index) FOR DICT/SET Creating a dictionary with the keys being the characters in the input string and the values being the indices of the characters in the input string. output = {} input_string = "This is an example" for index, character in enumerate(input_string): output.setdefault(character, set()).add(index)
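A slightly shorter variant of the same idea using collections.defaultdict, with a set as the container so the result matches the {8, 13} shape shown in the question:

from collections import defaultdict

string = "This is an example"

positions = defaultdict(set)
for index, letter in enumerate(string):
    positions[letter].add(index)

print(positions["a"])   # {8, 13}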
How to store the positions of an element in string in a dictionary (Python)?
I want to get all the positions (indexes) of an element in a string and store them in a dictionary. This is what I've tried: string = "This is an example" test = {letter: pos for pos, letter in enumerate(string)} But this only gives the last position of the letter. I'd like all positions, desired output: test["a"] {8, 13}
[ "At the moment you are overwriting the dictionary values. For example,\n>>> my_dict = {}\n>>> my_dict['my_val'] = 1 # creating new value\n>>> my_dict\n{'my_val': 1}\n>>> my_dict['my_val'] = 2 # overwriting the value for `my_val`\n>>> my_dict\n{'my_val': 2}\n\nIf you want to keep all values for a key you can use a list along with dict.setdefault method .\n>>> print(dict.setdefault.__doc__)\nInsert key with a value of default if key is not in the dictionary.\nReturn the value for key if key is in the dictionary, else default.\n>>>\n>>> result = {}\n>>> string = \"This is an example\"\n>>> \n>>> for index, value in enumerate(string):\n... result.setdefault(value, []).append(index)\n... \n>>> result[\"a\"]\n[8, 13]\n\n", "FOR LIST\nCreating a dictionary with the keys being the characters in the input_string and the characters being the\nindices of the characters in the input_string.\noutput = {}\n\nCreating a input_string variable called input_string and assigning it the character \"This is an example\"\ninput_string = \"This is an example\"\n\nIterating through the input_string and assigning the index of the character to index and the character\nto character.\noutput.setdefault(character, []) is checking if the key character exists in the dictionary output. If it does not exist, it will create the key character and assign it the character []. If it does exist, it will return the character of the key character. Then, .append(index) will append the character of index to the character of the key character.\nfor index, character in enumerate(input_string):\n output.setdefault(character, []).append(index) \n \n\nDesired Output\noutput[\"i\"]\n[2, 5]\n\n\nIn SORT CODE BE LIKE:-\noutput = {}\n\ninput_string = \"This is an example\"\n\nfor index, character in enumerate(input_string):\n output.setdefault(character, []).append(index) \n \n\nFOR DICT/SET\n\nCreating a dictionary with the keys being the characters in the input string and the values being\nthe indices of the characters in the input string.\noutput = {}\n\ninput_string = \"This is an example\"\n\nfor index, character in enumerate(input_string):\n output.setdefault(character, set()).add(index) \n \n\n" ]
[ 3, 1 ]
[]
[]
[ "dictionary", "python", "string" ]
stackoverflow_0074598636_dictionary_python_string.txt
Q: Python - Pandas - DROPNA(subset) deleting value for no apparent reasons? I'm cleaning some data and I've been struggling with one thing. I have a dataframe with 7740 rows and 68 columns. Most of the columns contains Nan values. What i'm interested in, is to remove NaN values when it is NaN in those two columns : [SERIAL_ID],[NUMBER_ID] Example : SERIAL_ID NUMBER_ID 8RY68U4R NaN 8756ERT5 8759321 NaN NaN NaN 7896521 7EY68U4R NaN 95856ERT5 988888 NaN NaN NaN 4555555 Results SERIAL_ID NUMBER_ID 8RY68U4R NaN 8756ERT5 8759321 NaN 7896521 7EY68U4R NaN 95856ERT5 988888 NaN 4555555 Removing rows when NaN is in the two columns. I've used the followings to do so : df.dropna(subset=['SERIAL_ID', 'NUMBER_ID'], how='all', inplace=True) When I use this on my dataframe with 68 columns the result I get is this one : SERIAL_ID NUMBER_ID NaN NaN NaN NaN NaN NaN NaN 7896521 NaN NaN 95856ERT5 NaN NaN NaN NaN 4555555 I tried with a copy of the dataframe with only 3 columns, it is working fine. It is somehow working (I can tel cause I have an identical ID in another column) but remove some of the value, and I have no idea why. Please help I've been struggling the whole day with this. Thanks again. A: I don't know why it only works for 3 columns and not for 68 originals. However, we can obtain desired output in other way. use boolean indexing: df[df[['SERIAL_ID', 'NUMBER_ID']].notnull().any(axis=1)] A: You can use boolean logic or simple do something like this for any given column: import numpy as np import pandas as pd # sample dataframe d = {'SERIAL_ID':['8RY68U4R', '8756ERT5', np.nan, np.nan], 'NUMBER_ID':[np.nan, 8759321, np.nan ,7896521]} df = pd.DataFrame(d) # apply logic to columns df['nans'] = df['NUMBER_ID'].isnull() * df['SERIAL_ID'].isnull() # filter columns df_filtered = df[df['nans']==False] print(df_filtered) which returns this: SERIAL_ID NUMBER_ID nans 0 8RY68U4R NaN False 1 8756ERT5 8759321.0 False 3 NaN 7896521.0 False
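For what it's worth, the dropna(subset=..., how='all') call behaves as intended on a small frame, which suggests the surprise comes from the 68-column data itself (for example, empty strings are not NaN and therefore count as present) rather than from pandas. A toy run with made-up values echoing the question:

import numpy as np
import pandas as pd

df = pd.DataFrame({'SERIAL_ID': ['8RY68U4R', '8756ERT5', np.nan, np.nan],
                   'NUMBER_ID': [np.nan, 8759321, np.nan, 7896521],
                   'OTHER':     [np.nan, np.nan, np.nan, np.nan]})

print(df.dropna(subset=['SERIAL_ID', 'NUMBER_ID'], how='all'))   # only row 2 is dropped

# Equivalent boolean-indexing form from the first answer:
print(df[df[['SERIAL_ID', 'NUMBER_ID']].notna().any(axis=1)])

print(pd.isna(''))   # False - an empty string would survive both versions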
Python - Pandas - DROPNA(subset) deleting value for no apparent reasons?
I'm cleaning some data and I've been struggling with one thing. I have a dataframe with 7740 rows and 68 columns. Most of the columns contains Nan values. What i'm interested in, is to remove NaN values when it is NaN in those two columns : [SERIAL_ID],[NUMBER_ID] Example : SERIAL_ID NUMBER_ID 8RY68U4R NaN 8756ERT5 8759321 NaN NaN NaN 7896521 7EY68U4R NaN 95856ERT5 988888 NaN NaN NaN 4555555 Results SERIAL_ID NUMBER_ID 8RY68U4R NaN 8756ERT5 8759321 NaN 7896521 7EY68U4R NaN 95856ERT5 988888 NaN 4555555 Removing rows when NaN is in the two columns. I've used the followings to do so : df.dropna(subset=['SERIAL_ID', 'NUMBER_ID'], how='all', inplace=True) When I use this on my dataframe with 68 columns the result I get is this one : SERIAL_ID NUMBER_ID NaN NaN NaN NaN NaN NaN NaN 7896521 NaN NaN 95856ERT5 NaN NaN NaN NaN 4555555 I tried with a copy of the dataframe with only 3 columns, it is working fine. It is somehow working (I can tel cause I have an identical ID in another column) but remove some of the value, and I have no idea why. Please help I've been struggling the whole day with this. Thanks again.
[ "I don't know why it only works for 3 columns and not for 68 originals.\nHowever, we can obtain desired output in other way.\nuse boolean indexing:\ndf[df[['SERIAL_ID', 'NUMBER_ID']].notnull().any(axis=1)]\n\n", "You can use boolean logic or simple do something like this for any given column:\nimport numpy as np\nimport pandas as pd\n\n# sample dataframe\nd = {'SERIAL_ID':['8RY68U4R', '8756ERT5', np.nan, np.nan],\n 'NUMBER_ID':[np.nan, 8759321, np.nan ,7896521]}\ndf = pd.DataFrame(d)\n\n# apply logic to columns\ndf['nans'] = df['NUMBER_ID'].isnull() * df['SERIAL_ID'].isnull()\n\n# filter columns\ndf_filtered = df[df['nans']==False]\nprint(df_filtered)\n\n\nwhich returns this:\n SERIAL_ID NUMBER_ID nans\n0 8RY68U4R NaN False\n1 8756ERT5 8759321.0 False\n3 NaN 7896521.0 False\n\n" ]
[ 1, 0 ]
[]
[]
[ "data_cleaning", "dataframe", "pandas", "python" ]
stackoverflow_0074596404_data_cleaning_dataframe_pandas_python.txt
Q: Recursively find and replace string in text files I want to recursively search through a directory with subdirectories of text files and replace every occurrence of {$replace} within the files with the contents of a multi line string. How can this be achieved with Python? So far all I have is the recursive code using os.walk to get a list of files that are required to be changed. import os import sys fileList = [] rootdir = "C:\\test" for root, subFolders, files in os.walk(rootdir): if subFolders != ".svn": for file in files: fileParts = file.split('.') if len(fileParts) > 1: if fileParts[1] == "php": fileList.append(os.path.join(root,file)) print fileList A: os.walk is great. However, it looks like you need to filer file types (which I would suggest if you are going to walk some directory). To do this, you should add import fnmatch. import os, fnmatch def findReplace(directory, find, replace, filePattern): for path, dirs, files in os.walk(os.path.abspath(directory)): for filename in fnmatch.filter(files, filePattern): filepath = os.path.join(path, filename) with open(filepath) as f: s = f.read() s = s.replace(find, replace) with open(filepath, "w") as f: f.write(s) This allows you to do something like: findReplace("some_dir", "find this", "replace with this", "*.txt") A: Check out os.walk: import os replacement = """some multi-line string""" for dname, dirs, files in os.walk("some_dir"): for fname in files: fpath = os.path.join(dname, fname) with open(fpath) as f: s = f.read() s = s.replace("{$replace}", replacement) with open(fpath, "w") as f: f.write(s) The above solution has flaws, such as the fact that it opens literally every file it finds, or the fact that each file is read entirely into memory (which would be bad if you had a 1GB text file), but it should be a good starting point. You also may want to look into the re module if you want to do a more complex find/replace than looking for a specific string. A: For those using Python 3.5+ you can now use a glob recursively with the use of ** and the recursive flag. Here's an example replacing hello with world for all .txt files: for filepath in glob.iglob('./**/*.txt', recursive=True): with open(filepath) as file: s = file.read() s = s.replace('hello', 'world') with open(filepath, "w") as file: file.write(s) A: To avoid recursing into .svn directories, os.walk() allows you to change the dirs list inplace. To simplify the text replacement in a file without requiring to read the whole file in memory, you could use fileinput module. And to filter filenames using a file pattern, you could use fnmatch module as suggested by @David Sulpy: #!/usr/bin/env python from __future__ import print_function import fnmatch import os from fileinput import FileInput def find_replace(topdir, file_pattern, text, replacement): for dirpath, dirs, files in os.walk(topdir, topdown=True): dirs[:] = [d for d in dirs if d != '.svn'] # skip .svn dirs files = [os.path.join(dirpath, filename) for filename in fnmatch.filter(files, file_pattern)] for line in FileInput(files, inplace=True): print(line.replace(text, replacement), end='') find_replace(r"C:\test", "*.php", '{$replace}', "multiline\nreplacement") A: This is an old question but I figured I'd provide an updated and simpler answer using current libraries in python3.8. 
from pathlib import Path import re rootdir = Path("C:\\test") pattern = r'REGEX for the text you want to replace' replace = r'REGEX for what to replace it with' for file in [ f for f in rootdir.glob("**.php") ]: #modify glob pattern as needed file_contents = file.read_text() new_file_contents = re.sub(f"{pattern}", f"{replace}", file_contents) file.write_text(new_file_contents) A: Here's my code (which I think is the same as the above but I'm including it just in case there's something subtly different about it): import os, fnmatch, sys def findReplace(directory, find, replace, filePattern): for path, dirs, files in os.walk(os.path.abspath(directory)): for filename in fnmatch.filter(files, filePattern): filepath = os.path.join(path, filename) with open(filepath) as f: s = f.read() s = s.replace(find, replace) with open(filepath, "w") as f: f.write(s) it runs without error. BUT, the file, in z:\test is unchanged. I've put in print statements, like print("got here") but they don't print out either. A: Sulpy's answer is good but incomplete. The user would be likely to want to input the parameters through an entry widget, so we might have something more like this (also incomplete, but left as an exercise): import os, fnmatch from Tkinter import * fields = 'Folder', 'Search', 'Replace', 'FilePattern' def fetch(entvals): # print entvals # print ents entItems = entvals.items() for entItem in entItems: field = entItem[0] text = entItem[1].get() print('%s: "%s"' % (field, text)) def findReplace(entvals): # print ents directory = entvals.get("Folder").get() find = entvals.get("Search").get() replace = entvals.get("Replace").get() filePattern = entvals.get("FilePattern").get() for path, dirs, files in os.walk(os.path.abspath(directory)): for filename in fnmatch.filter(files, filePattern): # print filename filepath = os.path.join(path, filename) print filepath # Can be commented out -- used for confirmation with open(filepath) as f: s = f.read() s = s.replace(find, replace) with open(filepath, "w") as f: f.write(s) def makeform(root, fields): entvals = {} for field in fields: row = Frame(root) lab = Label(row, width=17, text=field+": ", anchor='w') ent = Entry(row) row.pack(side=TOP, fill=X, padx=5, pady=5) lab.pack(side=LEFT) ent.pack(side=RIGHT, expand=YES, fill=X) entvals[field] = ent # print ent return entvals if __name__ == '__main__': root = Tk() root.title("Recursive S&R") ents = makeform(root, fields) # print ents root.bind('<Return>', (lambda event, e=ents: fetch(e))) b1 = Button(root, text='Show', command=(lambda e=ents: fetch(e))) b1.pack(side=LEFT, padx=5, pady=5) b2 = Button(root, text='Execute', command=(lambda e=ents: findReplace(e))) b2.pack(side=LEFT, padx=5, pady=5) b3 = Button(root, text='Quit', command=root.quit) b3.pack(side=LEFT, padx=5, pady=5) root.mainloop() A: Use: pip3 install manip This lets you use a decorator to create something like: @manip(at='.php$', recursive=True) # to apply to subfolders def replace_on_php(text, find, replacement): return text.replace(find, replacement) Now in your prompt you should be able to call replace_on_php('explode', 'myCustomExplode', path='./myPhPFiles', modify=True) and this should make the function apply itself on the entire folder. A: As mentioned by others, the way to go is glob. However, use binary mode to prevent changing line endings when you find/replace. #! 
/usr/bin/env python3 replacements = [ # find, replace (b'#include <Common/LogLevelInfo.h>', b'#include <Common/LogLevelErr.h>'), (b'#include <Common/LogLevelDbg.h>', b'#include <Common/LogLevelErr.h>'), (b'#include <Common/LogLevelErr.h>', b'#include <Common/LogLevelNone.h>'), ] # path to files. here all cpp files in the current working directory globpath = './**/*.cpp' import glob for filepath in glob.iglob(globpath, recursive=True): with open(filepath, 'rb') as file: s = file.read() for f, r in replacements: s = s.replace(f, r) with open(filepath, "wb") as file: file.write(s)
Recursively find and replace string in text files
I want to recursively search through a directory with subdirectories of text files and replace every occurrence of {$replace} within the files with the contents of a multi line string. How can this be achieved with Python? So far all I have is the recursive code using os.walk to get a list of files that are required to be changed. import os import sys fileList = [] rootdir = "C:\\test" for root, subFolders, files in os.walk(rootdir): if subFolders != ".svn": for file in files: fileParts = file.split('.') if len(fileParts) > 1: if fileParts[1] == "php": fileList.append(os.path.join(root,file)) print fileList
[ "os.walk is great. However, it looks like you need to filer file types (which I would suggest if you are going to walk some directory). To do this, you should add import fnmatch.\nimport os, fnmatch\ndef findReplace(directory, find, replace, filePattern):\n for path, dirs, files in os.walk(os.path.abspath(directory)):\n for filename in fnmatch.filter(files, filePattern):\n filepath = os.path.join(path, filename)\n with open(filepath) as f:\n s = f.read()\n s = s.replace(find, replace)\n with open(filepath, \"w\") as f:\n f.write(s)\n\nThis allows you to do something like:\nfindReplace(\"some_dir\", \"find this\", \"replace with this\", \"*.txt\")\n\n", "Check out os.walk:\nimport os\nreplacement = \"\"\"some\nmulti-line string\"\"\"\nfor dname, dirs, files in os.walk(\"some_dir\"):\n for fname in files:\n fpath = os.path.join(dname, fname)\n with open(fpath) as f:\n s = f.read()\n s = s.replace(\"{$replace}\", replacement)\n with open(fpath, \"w\") as f:\n f.write(s)\n\nThe above solution has flaws, such as the fact that it opens literally every file it finds, or the fact that each file is read entirely into memory (which would be bad if you had a 1GB text file), but it should be a good starting point.\nYou also may want to look into the re module if you want to do a more complex find/replace than looking for a specific string.\n", "For those using Python 3.5+ you can now use a glob recursively with the use of ** and the recursive flag.\nHere's an example replacing hello with world for all .txt files:\nfor filepath in glob.iglob('./**/*.txt', recursive=True):\n with open(filepath) as file:\n s = file.read()\n s = s.replace('hello', 'world')\n with open(filepath, \"w\") as file:\n file.write(s)\n\n", "To avoid recursing into .svn directories, os.walk() allows you to change the dirs list inplace. To simplify the text replacement in a file without requiring to read the whole file in memory, you could use fileinput module. 
And to filter filenames using a file pattern, you could use fnmatch module as suggested by @David Sulpy:\n#!/usr/bin/env python\nfrom __future__ import print_function\nimport fnmatch\nimport os\nfrom fileinput import FileInput\n\ndef find_replace(topdir, file_pattern, text, replacement):\n for dirpath, dirs, files in os.walk(topdir, topdown=True):\n dirs[:] = [d for d in dirs if d != '.svn'] # skip .svn dirs\n files = [os.path.join(dirpath, filename)\n for filename in fnmatch.filter(files, file_pattern)]\n for line in FileInput(files, inplace=True):\n print(line.replace(text, replacement), end='')\n\nfind_replace(r\"C:\\test\", \"*.php\", '{$replace}', \"multiline\\nreplacement\")\n\n", "This is an old question but I figured I'd provide an updated and simpler answer using current libraries in python3.8.\nfrom pathlib import Path\nimport re\n\nrootdir = Path(\"C:\\\\test\")\npattern = r'REGEX for the text you want to replace'\nreplace = r'REGEX for what to replace it with'\n\nfor file in [ f for f in rootdir.glob(\"**.php\") ]: #modify glob pattern as needed\n file_contents = file.read_text()\n new_file_contents = re.sub(f\"{pattern}\", f\"{replace}\", file_contents)\n file.write_text(new_file_contents)\n\n", "Here's my code (which I think is the same as the above but I'm including it just in case there's something subtly different about it):\nimport os, fnmatch, sys\ndef findReplace(directory, find, replace, filePattern):\n for path, dirs, files in os.walk(os.path.abspath(directory)):\n for filename in fnmatch.filter(files, filePattern): \n filepath = os.path.join(path, filename)\n with open(filepath) as f:\n s = f.read()\n s = s.replace(find, replace)\n with open(filepath, \"w\") as f:\n f.write(s)\n\nit runs without error. \nBUT, the file, in z:\\test is unchanged.\nI've put in print statements, like print(\"got here\") but they don't print out either.\n", "Sulpy's answer is good but incomplete. 
The user would be likely to want to input the parameters through an entry widget, so we might have something more like this (also incomplete, but left as an exercise):\nimport os, fnmatch\nfrom Tkinter import *\nfields = 'Folder', 'Search', 'Replace', 'FilePattern'\n\ndef fetch(entvals):\n# print entvals\n# print ents\n entItems = entvals.items()\n for entItem in entItems:\n field = entItem[0]\n text = entItem[1].get()\n print('%s: \"%s\"' % (field, text))\n\ndef findReplace(entvals):\n# print ents\n directory = entvals.get(\"Folder\").get()\n find = entvals.get(\"Search\").get()\n replace = entvals.get(\"Replace\").get()\n filePattern = entvals.get(\"FilePattern\").get()\n for path, dirs, files in os.walk(os.path.abspath(directory)):\n for filename in fnmatch.filter(files, filePattern):\n# print filename\n filepath = os.path.join(path, filename)\n print filepath # Can be commented out -- used for confirmation\n with open(filepath) as f:\n s = f.read()\n s = s.replace(find, replace)\n with open(filepath, \"w\") as f:\n f.write(s)\n\ndef makeform(root, fields):\n entvals = {}\n for field in fields:\n row = Frame(root)\n lab = Label(row, width=17, text=field+\": \", anchor='w')\n ent = Entry(row)\n row.pack(side=TOP, fill=X, padx=5, pady=5)\n lab.pack(side=LEFT)\n ent.pack(side=RIGHT, expand=YES, fill=X)\n entvals[field] = ent\n# print ent\n return entvals\n\nif __name__ == '__main__':\n root = Tk()\n root.title(\"Recursive S&R\")\n ents = makeform(root, fields)\n# print ents\n root.bind('<Return>', (lambda event, e=ents: fetch(e)))\n b1 = Button(root, text='Show', command=(lambda e=ents: fetch(e)))\n b1.pack(side=LEFT, padx=5, pady=5)\n b2 = Button(root, text='Execute', command=(lambda e=ents: findReplace(e)))\n b2.pack(side=LEFT, padx=5, pady=5)\n b3 = Button(root, text='Quit', command=root.quit)\n b3.pack(side=LEFT, padx=5, pady=5)\n root.mainloop()\n\n", "Use:\npip3 install manip\n\nThis lets you use a decorator to create something like:\n@manip(at='.php$', recursive=True) # to apply to subfolders\ndef replace_on_php(text, find, replacement):\n return text.replace(find, replacement)\n\nNow in your prompt you should be able to call\nreplace_on_php('explode', 'myCustomExplode', path='./myPhPFiles', modify=True)\n\nand this should make the function apply itself on the entire folder.\n", "As mentioned by others, the way to go is glob. However, use binary mode to prevent changing line endings when you find/replace.\n#! /usr/bin/env python3\n\nreplacements = [\n# find, replace\n(b'#include <Common/LogLevelInfo.h>', b'#include <Common/LogLevelErr.h>'),\n(b'#include <Common/LogLevelDbg.h>', b'#include <Common/LogLevelErr.h>'),\n(b'#include <Common/LogLevelErr.h>', b'#include <Common/LogLevelNone.h>'),\n]\n# path to files. here all cpp files in the current working directory\nglobpath = './**/*.cpp'\n\nimport glob\nfor filepath in glob.iglob(globpath, recursive=True):\n with open(filepath, 'rb') as file:\n s = file.read()\n for f, r in replacements:\n s = s.replace(f, r)\n with open(filepath, \"wb\") as file:\n file.write(s)\n\n" ]
[ 67, 35, 15, 7, 2, 0, 0, 0, 0 ]
[ "How about just using:\nclean = ''.join([e for e in text if e != 'string'])\n\n", "Multiple files string change\nimport glob\nfor allfiles in glob.glob('*.txt'):\nfor line in open(allfiles,'r'):\n change=line.replace(\"old_string\",\"new_string\")\n output=open(allfiles,'w')\n output.write(change) \n\n" ]
[ -1, -4 ]
[ "python" ]
stackoverflow_0004205854_python.txt
Q: MongoDB extract specific value python I want to extract value of name : x = col.find({},{'_id': 0, 'country.name': 1}) for data in x: #if data == 'India': print(data['country']) This above code generate this output: {'name': 'India'} {'name': 'Colombia'} {'name': 'Iran (Islamic Republic of)'} {'name': 'Germany'} Desire output: India Colombia Iran (Islamic Republic of) Germany In mongodb every entris have id but name of country is subarray of array A: We need to create a list and append country name in that, and use join outside the for loop, It will print the expected output. countries = [] for data in x: countries.append(data.get("country", {}).get("name")) print("Output: ", " ".join(countries))
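If the goal is literally the one-name-per-line output shown under "Desired output", printing inside the loop avoids the join entirely. The sketch below is self-contained, but the connection details and the testdb/countries names are stand-ins; in the question's context, col is the already-existing pymongo collection:

from pymongo import MongoClient

client = MongoClient()                    # assumes a local MongoDB instance
col = client['testdb']['countries']       # stand-in database/collection names

for doc in col.find({}, {'_id': 0, 'country.name': 1}):
    name = doc.get('country', {}).get('name')
    if name is not None:
        print(name)                       # one country per line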
MongoDB extract specific value python
I want to extract the value of name: x = col.find({},{'_id': 0, 'country.name': 1}) for data in x: #if data == 'India': print(data['country']) The above code generates this output: {'name': 'India'} {'name': 'Colombia'} {'name': 'Iran (Islamic Republic of)'} {'name': 'Germany'} Desired output: India Colombia Iran (Islamic Republic of) Germany In MongoDB every entry has an id, but the name of the country is nested inside the country sub-document.
[ "We need to create a list and append country name in that, and use join outside the for loop, It will print the expected output.\ncountries = []\nfor data in x:\n countries.append(data.get(\"country\", {}).get(\"name\"))\nprint(\"Output: \", \" \".join(countries))\n\n" ]
[ 1 ]
[]
[]
[ "bash", "mongodb", "python" ]
stackoverflow_0074598861_bash_mongodb_python.txt
Q: get value from tuple keys dictionary and sequential I have a dictionary d = {(1,100) : 0.5 , (1,150): 0.7 ,(1,190) : 0.8, (2,100) : 0.5 , (2,120): 0.7 ,(2,150) : 0.8, (3,100) : 0.5 , (3,110): 0.7 ,(4,100) : 0.5 , (4,150): 0.7 ,(4,190) : 0.8,(5,100) : 0.5 , (5,150): 0.7} list = [4,2,1,3,5] for (k1,k2),k3 in d.items(): for k1 in list : print(k1,k2 : ,k3) I want get the value of dictionary sequential like my list for the key 1 and for key 2 I have diferrent score and count (4,100) : 0.5 , (4,150): 0.7 ,(4,190) : 0.8,(2,100) : 0.5 , (2,120): 0.7 ,(2,150) : 0.8,(1,100) : 0.5 , (1,150): 0.7 ,(1,190) : 0.8,(3,100) : 0.5 , (3,110): 0.7 ,(5,100) : 0.5 , (5,150): 0.7} A: You can use sorted() with the the values from the tuple as index in the list d = dict(sorted(d.items(), key=lambda x: lst.index(x[0][0]))) print(d) Output {(4, 100): 0.5, (4, 150): 0.7, (4, 190): 0.8, (2, 100): 0.5, (2, 120): 0.7, (2, 150): 0.8, (1, 100): 0.5, (1, 150): 0.7, (1, 190): 0.8, (3, 100): 0.5, (3, 110): 0.7, (5, 100): 0.5, (5, 150): 0.7} A: Try the following: d = {(1, 100): 0.5, (1, 150): 0.5, (1, 190): 0.8, (2, 100): 0.5, (2, 120): 0.7, (2, 150): 0.8, (3, 100): 0.5, (3, 110): 0.7, (4, 100): 0.5, (4, 150): 0.7, (4, 190): 0.8, (5, 100): 0.5, (5, 150): 0.7} lst = [4, 2, 1, 3, 5] for key, k3 in d.items(): print(f'({lst[key[0]-1]},{key[1]}) : ,{k3}') Output: (4,100) : ,0.5 (4,150) : ,0.5 (4,190) : ,0.8 (2,100) : ,0.5 (2,120) : ,0.7 (2,150) : ,0.8 (1,100) : ,0.5 (1,110) : ,0.7 (3,100) : ,0.5 (3,150) : ,0.7 (3,190) : ,0.8 (5,100) : ,0.5 (5,150) : ,0.7
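If rebuilding or sorting the whole dictionary is unnecessary, the same ordering can be produced on the fly by looping over the list first and matching on the first element of each key. A short sketch on a trimmed-down copy of the question's data (and avoiding list as a variable name, since it shadows the built-in):

d = {(1, 100): 0.5, (1, 150): 0.7, (2, 100): 0.5, (2, 120): 0.7,
     (3, 100): 0.5, (4, 100): 0.5, (4, 150): 0.7, (5, 100): 0.5}
order = [4, 2, 1, 3, 5]

for wanted in order:
    for (k1, k2), score in d.items():
        if k1 == wanted:
            print(f"({k1},{k2}) : {score}")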
get value from tuple keys dictionary and sequential
I have a dictionary d = {(1,100) : 0.5 , (1,150): 0.7 ,(1,190) : 0.8, (2,100) : 0.5 , (2,120): 0.7 ,(2,150) : 0.8, (3,100) : 0.5 , (3,110): 0.7 ,(4,100) : 0.5 , (4,150): 0.7 ,(4,190) : 0.8,(5,100) : 0.5 , (5,150): 0.7} list = [4,2,1,3,5] for (k1,k2),k3 in d.items(): for k1 in list : print(k1,k2 : ,k3) I want get the value of dictionary sequential like my list for the key 1 and for key 2 I have diferrent score and count (4,100) : 0.5 , (4,150): 0.7 ,(4,190) : 0.8,(2,100) : 0.5 , (2,120): 0.7 ,(2,150) : 0.8,(1,100) : 0.5 , (1,150): 0.7 ,(1,190) : 0.8,(3,100) : 0.5 , (3,110): 0.7 ,(5,100) : 0.5 , (5,150): 0.7}
[ "You can use sorted() with the the values from the tuple as index in the list\nd = dict(sorted(d.items(), key=lambda x: lst.index(x[0][0])))\nprint(d)\n\nOutput\n{(4, 100): 0.5, (4, 150): 0.7, (4, 190): 0.8, (2, 100): 0.5, (2, 120): 0.7, (2, 150): 0.8, (1, 100): 0.5, (1, 150): 0.7, (1, 190): 0.8, (3, 100): 0.5, (3, 110): 0.7, (5, 100): 0.5, (5, 150): 0.7}\n\n", "Try the following:\nd = {(1, 100): 0.5,\n (1, 150): 0.5,\n (1, 190): 0.8,\n (2, 100): 0.5,\n (2, 120): 0.7,\n (2, 150): 0.8,\n (3, 100): 0.5,\n (3, 110): 0.7,\n (4, 100): 0.5,\n (4, 150): 0.7,\n (4, 190): 0.8,\n (5, 100): 0.5,\n (5, 150): 0.7}\n\n\nlst = [4, 2, 1, 3, 5]\n\nfor key, k3 in d.items():\n\n print(f'({lst[key[0]-1]},{key[1]}) : ,{k3}')\n\nOutput:\n(4,100) : ,0.5\n(4,150) : ,0.5\n(4,190) : ,0.8\n(2,100) : ,0.5\n(2,120) : ,0.7\n(2,150) : ,0.8\n(1,100) : ,0.5\n(1,110) : ,0.7\n(3,100) : ,0.5\n(3,150) : ,0.7\n(3,190) : ,0.8\n(5,100) : ,0.5\n(5,150) : ,0.7\n\n" ]
[ 1, 0 ]
[]
[]
[ "dictionary", "python", "tuples" ]
stackoverflow_0074598712_dictionary_python_tuples.txt
Q: Can't display SVG pictures with IPython.HTML I Tried to plot out SVG images with IPython using this from from IPython.display import HTML, display if '.svg' in link: #img_data = bunch of tags display(HTML(img_data))) continue image = io.imread(link) ratio = image.shape[1]/image.shape[0] print(ratio) print(image.shape) resized = cv.cvtColor(image, cv.COLOR_BGR2RGB) resized = cv.resize(resized,(int(HEIGHT_RESIZED*ratio),HEIGHT_RESIZED),interpolation = cv.INTER_AREA) cv2_imshow(resized) but when It reaches SVG images it shows this small picture icon instead of the SVG image A: You can display SVG images in IPython with: from from IPython import display svg = '<svg ...' # Replace with your SVG content. display.SVG(svg)
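A runnable sketch of the IPython route, with a tiny inline SVG standing in for the real image data; when the link points at an .svg URL, IPython.display.SVG can also fetch it directly through its url argument:

from IPython.display import SVG, display

svg_markup = ('<svg xmlns="http://www.w3.org/2000/svg" width="120" height="120">'
              '<circle cx="60" cy="60" r="50" fill="teal"/></svg>')

display(SVG(svg_markup))      # renders the vector image inline in Jupyter/Colab

# For a remote file, e.g. the `link` variable from the question:
# display(SVG(url=link))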
Can't display SVG pictures with IPython.HTML
I Tried to plot out SVG images with IPython using this from from IPython.display import HTML, display if '.svg' in link: #img_data = bunch of tags display(HTML(img_data))) continue image = io.imread(link) ratio = image.shape[1]/image.shape[0] print(ratio) print(image.shape) resized = cv.cvtColor(image, cv.COLOR_BGR2RGB) resized = cv.resize(resized,(int(HEIGHT_RESIZED*ratio),HEIGHT_RESIZED),interpolation = cv.INTER_AREA) cv2_imshow(resized) but when It reaches SVG images it shows this small picture icon instead of the SVG image
[ "You can display SVG images in IPython with:\nfrom from IPython import display\nsvg = '<svg ...' # Replace with your SVG content.\ndisplay.SVG(svg)\n\n" ]
[ 1 ]
[]
[]
[ "html", "python", "svg" ]
stackoverflow_0071004245_html_python_svg.txt
Q: May I ask there are any algorithms(in python) could filter "deep valley" data points on a sloping straight line? I have a group of datasets, each of them containing 251 points, which will be fitted as a sloping straight line. However there are around 30 outliers forming a lot "deep valleys" as shown below in every dataset.enter image description here My task is to remove these deep valleys for future data processing and my initial idea was like this below: lastData = limit def limiting(nowData, limit): global lastData if (abs(nowData-lastData) > limit): return lastData else: lastData = nowData return nowData and my code is shown as below: limit = 250 index = np.random.randint(0, 250) last_data = honing_data_matrix[index, 0] data_filtered = np.zeros((251, 251)) for i in range(0, len(data[index])): current_data = data[index, i] if abs(current_data - last_data) <= limit: data_filtered[index, i] = current_data last_data = current_data else: data_filtered[index, i] = last_data last_data = data_filtered[index, i] data_filtered[index, 0] = data[index, 0] It looked ok in several dataset but on most of the datasets the results were bad as shown below, the blue line is the filtered dataset: enter image description here This one up here looks good enter image description here But this one not The filtered data is as below: [5455. 5467. 5463. 5468. 5477. 5484. 5480. 5488. 5497. 5501. 5414. 5446. 5501. 5505. 5509. 5530. 5534. 5538. 5541. 5550. 5548. 5553. 5574. 5569. 5558. 5578. 5567. 5568. 5575. 5580. 5587. 5592. 5594. 5605. 5611. 5614. 5612. 5617. 5580. 5441. 5378. 5520. 5642. 5657. 5657. 5673. 5688. 5644. 5637. 5678. 5694. 5696. 5686. 5690. 5712. 5730. 5700. 5706. 5725. 5719. 5714. 5712. 5712. 5712. 5712. 5712. 5712. 5533. 5700. 5685. 5676. 5725. 5756. 5772. 5776. 5714. 5640. 5698. 5752. 5563. 5476. 5563. 5645. 5712. 5783. 5831. 5835. 5861. 5791. 5650. 5631. 5724. 5806. 5854. 5875. 5889. 5896. 5904. 5900. 5908. 5905. 5907. 5910. 5916. 5915. 5930. 5934. 5935. 5938. 5949. 5945. 5917. 5768. 5783. 5840. 5712. 5547. 5499. 5572. 5775. 5769. 5670. 5793. 5969. 6039. 6025. 6000. 6016. 6026. 6013. 5978. 6005. 6036. 6044. 6047. 6061. 6072. 6080. 6080. 6090. 6097. 6101. 5971. 5828. 5751. 5751. 5751. 5751. 5525. 5525. 5525. 5525. 5525. 5525. 5525. 5525. 5525. 5525. 5525. 5525. 5525. 5525. 5525. 5654. 5520. 5755. 5755. 5755. 5755. 5564. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326.] The original data is as below: [5455. 5467. 5463. 5468. 5477. 5484. 5480. 5488. 5497. 5501. 5414. 5446. 5501. 5505. 5509. 5530. 5534. 5538. 5541. 5550. 5548. 5553. 5574. 5569. 5558. 5578. 5567. 5568. 5575. 5580. 5587. 5592. 5594. 5605. 5611. 5614. 5612. 5617. 5580. 5441. 5378. 5520. 5642. 5657. 5657. 5673. 5688. 5644. 5637. 5678. 5694. 5696. 5686. 5690. 5712. 5730. 5700. 5706. 5725. 5719. 5714. 5712. 5202. 4653. 4553. 4836. 5205. 5533. 5700. 5685. 5676. 5725. 5756. 5772. 5776. 5714. 5640. 5698. 5752. 5563. 5476. 5563. 5645. 5712. 5783. 5831. 5835. 5861. 5791. 5650. 5631. 5724. 5806. 5854. 5875. 5889. 5896. 5904. 5900. 5908. 5905. 5907. 5910. 5916. 5915. 5930. 5934. 5935. 5938. 5949. 5945. 5917. 5768. 
5783. 5840. 5712. 5547. 5499. 5572. 5775. 5769. 5670. 5793. 5969. 6039. 6025. 6000. 6016. 6026. 6013. 5978. 6005. 6036. 6044. 6047. 6061. 6072. 6080. 6080. 6090. 6097. 6101. 5971. 5828. 5751. 5433. 4973. 4978. 5525. 5976. 6079. 6111. 6139. 6154. 6154. 6161. 6182. 6161. 6164. 6194. 6174. 6163. 6058. 5654. 5520. 5755. 6049. 6185. 6028. 5564. 5326. 5670. 6048. 6197. 6204. 6140. 5937. 5807. 5869. 6095. 6225. 6162. 5791. 5610. 5831. 6119. 6198. 5980. 5801. 5842. 5999. 6177. 6273. 6320. 6335. 6329. 6336. 6358. 6363. 6355. 6357. 6373. 6350. 6099. 6045. 6236. 6371. 6385. 6352. 6353. 6366. 6392. 6394. 6403. 6405. 6416. 6415. 6425. 6428. 6426. 6374. 6313. 6239. 6059. 6077. 6197. 6293. 6365. 6437. 6448. 6469. 6486. 6470. 6473. 6451. 6476. 6509. 6514. 6517. 6535. 6545. 6525. 6364. 6295. 6388. 6510. 6556. 6568. 6570. 6459. 6343.] Should I not filter the data one by one? Is there any other better filter for these kinds of sloping straight line data? A: What an outlier is can be very dependent on the dataset. Here is a similar question that was answered that might help: Is there a numpy builtin to reject outliers from a list In your case it comes done to keeping track of a running average, and checking if values are further out then you would prefer. Last but not least scipy has great tools for al sorts of these problems. They have a section on outliers here: https://scikit-learn.org/stable/modules/outlier_detection.html
May I ask there are any algorithms(in python) could filter "deep valley" data points on a sloping straight line?
I have a group of datasets, each of them containing 251 points, which will be fitted as a sloping straight line. However there are around 30 outliers forming a lot "deep valleys" as shown below in every dataset.enter image description here My task is to remove these deep valleys for future data processing and my initial idea was like this below: lastData = limit def limiting(nowData, limit): global lastData if (abs(nowData-lastData) > limit): return lastData else: lastData = nowData return nowData and my code is shown as below: limit = 250 index = np.random.randint(0, 250) last_data = honing_data_matrix[index, 0] data_filtered = np.zeros((251, 251)) for i in range(0, len(data[index])): current_data = data[index, i] if abs(current_data - last_data) <= limit: data_filtered[index, i] = current_data last_data = current_data else: data_filtered[index, i] = last_data last_data = data_filtered[index, i] data_filtered[index, 0] = data[index, 0] It looked ok in several dataset but on most of the datasets the results were bad as shown below, the blue line is the filtered dataset: enter image description here This one up here looks good enter image description here But this one not The filtered data is as below: [5455. 5467. 5463. 5468. 5477. 5484. 5480. 5488. 5497. 5501. 5414. 5446. 5501. 5505. 5509. 5530. 5534. 5538. 5541. 5550. 5548. 5553. 5574. 5569. 5558. 5578. 5567. 5568. 5575. 5580. 5587. 5592. 5594. 5605. 5611. 5614. 5612. 5617. 5580. 5441. 5378. 5520. 5642. 5657. 5657. 5673. 5688. 5644. 5637. 5678. 5694. 5696. 5686. 5690. 5712. 5730. 5700. 5706. 5725. 5719. 5714. 5712. 5712. 5712. 5712. 5712. 5712. 5533. 5700. 5685. 5676. 5725. 5756. 5772. 5776. 5714. 5640. 5698. 5752. 5563. 5476. 5563. 5645. 5712. 5783. 5831. 5835. 5861. 5791. 5650. 5631. 5724. 5806. 5854. 5875. 5889. 5896. 5904. 5900. 5908. 5905. 5907. 5910. 5916. 5915. 5930. 5934. 5935. 5938. 5949. 5945. 5917. 5768. 5783. 5840. 5712. 5547. 5499. 5572. 5775. 5769. 5670. 5793. 5969. 6039. 6025. 6000. 6016. 6026. 6013. 5978. 6005. 6036. 6044. 6047. 6061. 6072. 6080. 6080. 6090. 6097. 6101. 5971. 5828. 5751. 5751. 5751. 5751. 5525. 5525. 5525. 5525. 5525. 5525. 5525. 5525. 5525. 5525. 5525. 5525. 5525. 5525. 5525. 5654. 5520. 5755. 5755. 5755. 5755. 5564. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326. 5326.] The original data is as below: [5455. 5467. 5463. 5468. 5477. 5484. 5480. 5488. 5497. 5501. 5414. 5446. 5501. 5505. 5509. 5530. 5534. 5538. 5541. 5550. 5548. 5553. 5574. 5569. 5558. 5578. 5567. 5568. 5575. 5580. 5587. 5592. 5594. 5605. 5611. 5614. 5612. 5617. 5580. 5441. 5378. 5520. 5642. 5657. 5657. 5673. 5688. 5644. 5637. 5678. 5694. 5696. 5686. 5690. 5712. 5730. 5700. 5706. 5725. 5719. 5714. 5712. 5202. 4653. 4553. 4836. 5205. 5533. 5700. 5685. 5676. 5725. 5756. 5772. 5776. 5714. 5640. 5698. 5752. 5563. 5476. 5563. 5645. 5712. 5783. 5831. 5835. 5861. 5791. 5650. 5631. 5724. 5806. 5854. 5875. 5889. 5896. 5904. 5900. 5908. 5905. 5907. 5910. 5916. 5915. 5930. 5934. 5935. 5938. 5949. 5945. 5917. 5768. 5783. 5840. 5712. 5547. 5499. 5572. 5775. 5769. 5670. 5793. 5969. 6039. 6025. 6000. 6016. 6026. 6013. 5978. 6005. 
6036. 6044. 6047. 6061. 6072. 6080. 6080. 6090. 6097. 6101. 5971. 5828. 5751. 5433. 4973. 4978. 5525. 5976. 6079. 6111. 6139. 6154. 6154. 6161. 6182. 6161. 6164. 6194. 6174. 6163. 6058. 5654. 5520. 5755. 6049. 6185. 6028. 5564. 5326. 5670. 6048. 6197. 6204. 6140. 5937. 5807. 5869. 6095. 6225. 6162. 5791. 5610. 5831. 6119. 6198. 5980. 5801. 5842. 5999. 6177. 6273. 6320. 6335. 6329. 6336. 6358. 6363. 6355. 6357. 6373. 6350. 6099. 6045. 6236. 6371. 6385. 6352. 6353. 6366. 6392. 6394. 6403. 6405. 6416. 6415. 6425. 6428. 6426. 6374. 6313. 6239. 6059. 6077. 6197. 6293. 6365. 6437. 6448. 6469. 6486. 6470. 6473. 6451. 6476. 6509. 6514. 6517. 6535. 6545. 6525. 6364. 6295. 6388. 6510. 6556. 6568. 6570. 6459. 6343.] Should I not filter the data one by one? Is there any other better filter for these kinds of sloping straight line data?
[ "What an outlier is can be very dependent on the dataset. Here is a similar question that was answered that might help: Is there a numpy builtin to reject outliers from a list\nIn your case it comes done to keeping track of a running average, and checking if values are further out then you would prefer.\nLast but not least scipy has great tools for al sorts of these problems. They have a section on outliers here: https://scikit-learn.org/stable/modules/outlier_detection.html\n" ]
[ 0 ]
[]
[]
[ "curve_fitting", "data_fitting", "filter", "outliers", "python" ]
stackoverflow_0074598916_curve_fitting_data_fitting_filter_outliers_python.txt
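For the "deep valley" question above, neither the question nor the answer shows a complete detrended filter, so here is a minimal sketch of one common approach: fit the sloping straight line first, then reject points that fall far below it and interpolate over them. It assumes the trace is a 1-D sequence of readings like the arrays printed above; the threshold value and the function name are illustrative, not taken from the original post.

import numpy as np

def remove_deep_valleys(trace, threshold=150.0):
    trace = np.asarray(trace, dtype=float)
    # Fit the overall sloping straight line (degree-1 polynomial).
    x = np.arange(len(trace))
    slope, intercept = np.polyfit(x, trace, 1)
    baseline = slope * x + intercept
    # Deep valleys sit far below that line; flag anything more than
    # `threshold` units below it (the threshold is a guess, tune per dataset).
    valley = (trace - baseline) < -threshold
    # Replace flagged points by interpolating between the surviving ones.
    cleaned = trace.copy()
    cleaned[valley] = np.interp(x[valley], x[~valley], trace[~valley])
    return cleaned

# e.g. cleaned = remove_deep_valleys(honing_data_matrix[index], threshold=150)

Because the baseline is refit from the data rather than tracked point by point, a long run of bad values cannot drag the filter down the way the last-good-value approach in the question does.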
Q: Access Tools from .venv What is the best way to access python tools from within a script? For my example I want to use msgfmt.py and pygettext from the Tools/i18n package(?). On a Linux system probably no issue, since they are already on the PATH, but under Windows I have to call them with python as interpreter, so setting the directory on the path doesn't work like in Linux. So is calling os.system('my_pygettext_command') actually the right attempt or should I do this with a different approach, like importing? If importing would be correct, how can I access them, if they are only installed on the system installation and not in venv A: Researching a bit further into the topic, I got really mind gobbled: Maybe I am doing this totally wrong under windows, but my line ratio (linux/windows) to call a python tool from inside a venv: 1/34. I did not fully tested the final call under linux yet, but this ratio is only for getting the subprocess command. This is my momentary solution and I am open for better approaches: windows-utils import sys from typing import Dict def stdout_get_command_to_dict(string: str): lines = [s for s in string.split("\n") if s] # remove header lines = lines[2:] stdout_dict = {} for idx, line in enumerate(lines): # Reduce Spaces while " " in line: line = line.replace(" ", " ") line_as_list = line.split() stdout_dict[idx] = { "Version": line_as_list[2][:5], "Source": line_as_list[3], "Venv": line_as_list[3].find("venv") > 0, } return stdout_dict def get_system_py_path(stdout_dict: Dict[int, Dict[str, str]]): major = sys.version_info.major minor = sys.version_info.minor micro = sys.version_info.micro env_version = f"{major}.{minor}.{micro}" for key in stdout_dict.keys(): if stdout_dict[key]["Version"] == env_version and not stdout_dict[key]["Venv"]: return stdout_dict[key]["Source"] and the script: if platform.system() == "Windows": cmd = ["powershell", "get-command", "python", "-totalCount", "4"] processed_cmd = subprocess.run(cmd, shell=True, capture_output=True, text=True) stdout_as_dict = stdout_get_command_to_dict(processed_cmd.stdout) sys_python_path = Path(get_system_py_path(stdout_as_dict)).parent tool = sys_python_path / "Tools/i18n/msgfmt.py" tool_cmd = f"python {tool}" elif platform.system() == "Linux": tool_cmd = "msgfmt"
Access Tools from .venv
What is the best way to access python tools from within a script? For my example I want to use msgfmt.py and pygettext from the Tools/i18n package(?). On a Linux system probably no issue, since they are already on the PATH, but under Windows I have to call them with python as interpreter, so setting the directory on the path doesn't work like in Linux. So is calling os.system('my_pygettext_command') actually the right attempt or should I do this with a different approach, like importing? If importing would be correct, how can I access them, if they are only installed on the system installation and not in venv
[ "Researching a bit further into the topic, I got really mind gobbled:\nMaybe I am doing this totally wrong under windows, but my line ratio (linux/windows) to call a python tool from inside a venv: 1/34. I did not fully tested the final call under linux yet, but this ratio is only for getting the subprocess command.\nThis is my momentary solution and I am open for better approaches:\nwindows-utils\nimport sys\nfrom typing import Dict\n\n\ndef stdout_get_command_to_dict(string: str):\n lines = [s for s in string.split(\"\\n\") if s]\n # remove header\n lines = lines[2:]\n stdout_dict = {}\n for idx, line in enumerate(lines):\n # Reduce Spaces\n while \" \" in line:\n line = line.replace(\" \", \" \")\n line_as_list = line.split()\n stdout_dict[idx] = {\n \"Version\": line_as_list[2][:5],\n \"Source\": line_as_list[3],\n \"Venv\": line_as_list[3].find(\"venv\") > 0,\n }\n return stdout_dict\n\n\ndef get_system_py_path(stdout_dict: Dict[int, Dict[str, str]]):\n major = sys.version_info.major\n minor = sys.version_info.minor\n micro = sys.version_info.micro\n env_version = f\"{major}.{minor}.{micro}\"\n for key in stdout_dict.keys():\n if stdout_dict[key][\"Version\"] == env_version and not stdout_dict[key][\"Venv\"]:\n return stdout_dict[key][\"Source\"]\n\nand the script:\nif platform.system() == \"Windows\":\n cmd = [\"powershell\", \"get-command\", \"python\", \"-totalCount\", \"4\"]\n processed_cmd = subprocess.run(cmd, shell=True, capture_output=True, text=True)\n stdout_as_dict = stdout_get_command_to_dict(processed_cmd.stdout)\n sys_python_path = Path(get_system_py_path(stdout_as_dict)).parent\n tool = sys_python_path / \"Tools/i18n/msgfmt.py\"\n tool_cmd = f\"python {tool}\"\n elif platform.system() == \"Linux\":\n tool_cmd = \"msgfmt\"\n\n" ]
[ 0 ]
[]
[]
[ "gettext", "python" ]
stackoverflow_0074572814_gettext_python.txt
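A shorter route for the venv question above, sketched here rather than taken from the answer: inside a virtual environment, sys.base_prefix already points at the base interpreter, so on Windows there should be no need to parse Get-Command output. This assumes the python.org installer laid the Tools directory down next to the interpreter; the subprocess call at the end is a placeholder.

import platform
import sys
from pathlib import Path

if platform.system() == "Windows":
    # sys.base_prefix points at the base installation even inside a venv.
    msgfmt = Path(sys.base_prefix) / "Tools" / "i18n" / "msgfmt.py"
    tool_cmd = [sys.executable, str(msgfmt)]
else:
    tool_cmd = ["msgfmt"]

# Example call (file names are placeholders):
# import subprocess
# subprocess.run(tool_cmd + ["-o", "messages.mo", "messages.po"], check=True)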
Q: Detect Changes In Two Or More CSVs Using Pandas I am trying to use Pandas to detect changes across two CSVs. I would like it ideally to highlight which UIDs have been changed. I've attached an example of the ideal output here. CSV 1 (imported as DataFrame): | UID | Email | | -------- | --------------- | | U01 | u01@email.com | | U02 | u02@email.com | | U03 | u03@email.com | | U04 | u04@email.com | CSV 2 (imported as DataFrame): | UID | Email | | -------- | --------------- | | U01 | u01@email.com | | U02 | newemail@email.com | | U03 | u03@email.com | | U04 | newemail2@email.com | | U05 | u05@email.com | | U06 | u06@email.com | Over the two CSVs, U02 and U04 saw email changes, whereas U05 and U06 were new records entirely. I have tried using the pandas compare function, and unfortunately it doesn't work because CSV2 has more records than CSV1. I have since concatenated the UID and email field, like so, and then created a new field called "Unique" to show whether the concatenated value is a duplication as True or False (but doesn't show if it's a new record entirely) df3['Concatenated'] = df3["UID"] +"~"+ df3["Email"] df3['Unique'] = ~df3['Concatenated'].duplicated(keep=False) This works to an extent, but it feels clunky, and I was wondering if anyone had a smarter way of doing this - especially when it comes into showing whether the record is new or not. A: The strategy here is to merge the two dataframes on UID, then compare the email columns, and finally see if the new UIDs are in the UID list. df_compare = pd.merge(left=df, right=df_new, how='outer', on='UID') df_compare['Change Status'] = df_compare.apply(lambda x: 'No Change' if x.Email_x == x.Email_y else 'Change', axis=1) df_compare.loc[~df_compare.UID.isin(df.UID),'Change Status'] = 'New Record' df_compare = df_compare.drop(columns=['Email_x']).rename(columns={'Email_y': 'Email'}) gives df_compare as: UID Email Change Status 0 U01 u01@email.com No Change 1 U02 newemail@email.com Change 2 U03 u03@email.com No Change 3 U04 newemail2@email.com Change 4 U05 u05@email.com New Record 5 U06 u06@email.com New Record
Detect Changes In Two Or More CSVs Using Pandas
I am trying to use Pandas to detect changes across two CSVs. I would like it ideally to highlight which UIDs have been changed. I've attached an example of the ideal output here. CSV 1 (imported as DataFrame): | UID | Email | | -------- | --------------- | | U01 | u01@email.com | | U02 | u02@email.com | | U03 | u03@email.com | | U04 | u04@email.com | CSV 2 (imported as DataFrame): | UID | Email | | -------- | --------------- | | U01 | u01@email.com | | U02 | newemail@email.com | | U03 | u03@email.com | | U04 | newemail2@email.com | | U05 | u05@email.com | | U06 | u06@email.com | Over the two CSVs, U02 and U04 saw email changes, whereas U05 and U06 were new records entirely. I have tried using the pandas compare function, and unfortunately it doesn't work because CSV2 has more records than CSV1. I have since concatenated the UID and email field, like so, and then created a new field called "Unique" to show whether the concatenated value is a duplication as True or False (but doesn't show if it's a new record entirely) df3['Concatenated'] = df3["UID"] +"~"+ df3["Email"] df3['Unique'] = ~df3['Concatenated'].duplicated(keep=False) This works to an extent, but it feels clunky, and I was wondering if anyone had a smarter way of doing this - especially when it comes into showing whether the record is new or not.
[ "The strategy here is to merge the two dataframes on UID, then compare the email columns, and finally see if the new UIDs are in the UID list.\ndf_compare = pd.merge(left=df, right=df_new, how='outer', on='UID')\n\ndf_compare['Change Status'] = df_compare.apply(lambda x: 'No Change' if x.Email_x == x.Email_y else 'Change', axis=1)\ndf_compare.loc[~df_compare.UID.isin(df.UID),'Change Status'] = 'New Record'\n\ndf_compare = df_compare.drop(columns=['Email_x']).rename(columns={'Email_y': 'Email'})\n\ngives df_compare as:\n UID Email Change Status\n0 U01 u01@email.com No Change\n1 U02 newemail@email.com Change\n2 U03 u03@email.com No Change\n3 U04 newemail2@email.com Change\n4 U05 u05@email.com New Record\n5 U06 u06@email.com New Record\n\n" ]
[ 0 ]
[]
[]
[ "csv", "dataframe", "pandas", "python", "validation" ]
stackoverflow_0074595978_csv_dataframe_pandas_python_validation.txt
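As a small extension of the merge idea in the answer above, pandas can also report rows that disappeared between the two CSVs by using the merge indicator. The frames below are made-up stand-ins for the question's data, so treat this as a sketch rather than the original solution.

import pandas as pd
import numpy as np

df_old = pd.DataFrame({"UID": ["U01", "U02", "U07"],
                       "Email": ["u01@email.com", "u02@email.com", "u07@email.com"]})
df_new = pd.DataFrame({"UID": ["U01", "U02", "U05"],
                       "Email": ["u01@email.com", "newemail@email.com", "u05@email.com"]})

merged = df_old.merge(df_new, on="UID", how="outer",
                      suffixes=("_old", "_new"), indicator=True)
# _merge tells us where each UID came from; np.select picks the first matching label.
merged["Change Status"] = np.select(
    [merged["_merge"] == "right_only",
     merged["_merge"] == "left_only",
     merged["Email_old"] != merged["Email_new"]],
    ["New Record", "Deleted", "Change"],
    default="No Change",
)
print(merged[["UID", "Email_new", "Change Status"]])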
Q: Does anyone know how to break this loop in python if the correct password has been entered? I'm new to Python and programming in general so I decided to set myself a bit of a challenge to code a password storage system. It basically writes and reads from a text file with passwords stored inside it. At the end, the user needs to put in the password that matches in the text file. All of it works but I can't seem to get the while loop to end after the correct password has been entered. It just keeps asking to enter the password even if the user got it correct. Does anyone know how to correct this? I'm sure it's quite a small mistake that I overlooked but I've been stuck on this for the past few hours now. Many thanks in advance :) Luke my_list = ["yes", "no"] while True: password_question: str = input('Do you want to create a new password?, please type yes or no ') word = password_question if word.lower() in my_list: break else: print("Sorry, try again") if password_question == my_list[0]: print("Great, let's get started!") text_write = open("passwords.txt", "a") lst = [] new_password = input("Please enter a new password -> ") lst.append(new_password) text_write.write(new_password + '\n') text_write.close() print("Okay, your new password is " + new_password) while True: password: str = input("Please enter your password -> ") found: bool = False with open("passwords.txt", "r") as text_read: for line in text_read: if line.rstrip() == password: print("Great, you are logged in!") found = True break if not found: print('Incorrect password, try again') I tried to move the while loop around as I thought it was in the wrong place initially but then that messed everything else up. I also tried moving the "break" statement to see if it works elsewhere but that didn't work either. A: while True: is creating an "infinite" loop, which you probably tried to break with the break statement, but it breaks an internal for loop only. I think a good solution is to put "not found" into the while condition, so it will run the input sequence only if found is False found: bool = False while not found: password: str = input("Please enter your password -> ") with open("passwords.txt", "r") as text_read: for line in text_read: if line.rstrip() == password: print("Great, you are logged in!") found = True break if not found: print('Incorrect password, try again') print('here we go')
Does anyone know how to break this loop in python if the correct password has been entered?
I'm new to Python and programming in general so I decided to set myself a bit of a challenge to code a password storage system. It basically writes and reads from a text file with passwords stored inside it. At the end, the user needs to put in the password that matches in the text file. All of it works but I can't seem to get the while loop to end after the correct password has been entered. It just keeps asking to enter the password even if the user got it correct. Does anyone know how to correct this? I'm sure it's quite a small mistake that I overlooked but I've been stuck on this for the past few hours now. Many thanks in advance :) Luke my_list = ["yes", "no"] while True: password_question: str = input('Do you want to create a new password?, please type yes or no ') word = password_question if word.lower() in my_list: break else: print("Sorry, try again") if password_question == my_list[0]: print("Great, let's get started!") text_write = open("passwords.txt", "a") lst = [] new_password = input("Please enter a new password -> ") lst.append(new_password) text_write.write(new_password + '\n') text_write.close() print("Okay, your new password is " + new_password) while True: password: str = input("Please enter your password -> ") found: bool = False with open("passwords.txt", "r") as text_read: for line in text_read: if line.rstrip() == password: print("Great, you are logged in!") found = True break if not found: print('Incorrect password, try again') I tried to move the while loop around as I thought it was in the wrong place initially but then that messed everything else up. I also tried moving the "break" statement to see if it works elsewhere but that didn't work either.
[ "while True: is creating an \"infinite\" loop, which you probably tried to break with the break statement, but it breaks an internal for loop only.\nI think a good solution is to put \"not found\" into the while condition, so it will run the input sequence only if found is False\nfound: bool = False\nwhile not found:\n password: str = input(\"Please enter your password -> \")\n with open(\"passwords.txt\", \"r\") as text_read:\n for line in text_read:\n if line.rstrip() == password:\n print(\"Great, you are logged in!\")\n found = True\n break\n if not found:\n print('Incorrect password, try again')\nprint('here we go')\n\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074599082_python_python_3.x.txt
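Another way to read the same fix, sketched here: since Python has no labelled break, putting the prompt loop in a function lets return end both the loop and the file scan at once. The function name and file path are illustrative.

def login(path="passwords.txt"):
    while True:
        password = input("Please enter your password -> ")
        with open(path) as text_read:
            # any() stops at the first matching line.
            if any(line.rstrip() == password for line in text_read):
                print("Great, you are logged in!")
                return
        print("Incorrect password, try again")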
Q: How do i get my bars in my df.plot() to be one column and the rest of the columns to be on the x axis? i have this dataframe: Week Income Fuel ... Extras Total costs Remainder 18-18-218 2000.0 1200.0 ... 122.0 1842.0 158.0 12-12-2012 1750.0 100.0 ... 100.0 820.0 930.0 nan 1786.0 289.0 ... 109.0 1060.0 726.0 What i have now is: chartdf = pd.DataFrame(chartlist) chartdf.plot.bar(x="Week", rot=0) plt.show() And what i would like is: for the bars in my plot.bar() to be the column "week" and then the x axis to the other columns headers. So that instead of it showing 9 bars in 3 categories, it shows 3 bars(weeks) and 9 categories. The data is just dummydata to see if it works thats why the dates are weird haha. A: You need to change your data so that Week becomes the index and is dropped from the columns; then create the plot from the transpose of the data, which gives the expected output: chartdf = chartdf.set_index('Week', drop=True) chartdf.T.plot.bar(rot=0)
How do i get my bars in my df.plot() to be one column and the rest of the columns to be on the x axis?
i have this dataframe: Week Income Fuel ... Extras Total costs Remainder 18-18-218 2000.0 1200.0 ... 122.0 1842.0 158.0 12-12-2012 1750.0 100.0 ... 100.0 820.0 930.0 nan 1786.0 289.0 ... 109.0 1060.0 726.0 What i have now is: chartdf = pd.DataFrame(chartlist) chartdf.plot.bar(x="Week", rot=0) plt.show() And what i would like is: for the bars in my plot.bar() to be the column "week" and then the x axis to the other columns headers. So that instead of it showing 9 bars in 3 categories, it shows 3 bars(weeks) and 9 categories. The data is just dummydata to see if it works thats why the dates are weird haha.
[ "you need to change your data so you set the Week as an index and drop it from the columns then you create the plot from the transpose of the data and it will give you your expected output:\nchartdf=chartdf.set_index('Week', drop=True)\n \nchartdf.T.plot.bar(rot=0)\n\nas an example of results:\n\n" ]
[ 1 ]
[]
[]
[ "matplotlib", "pandas", "python" ]
stackoverflow_0074598829_matplotlib_pandas_python.txt
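The screenshot of the resulting chart did not survive, so here is a self-contained sketch that rebuilds the question's dummy frame and applies the transpose idea from the answer; the numbers are copied from the question, everything else is illustrative.

import pandas as pd
import matplotlib.pyplot as plt

chartdf = pd.DataFrame({
    "Week": ["18-18-218", "12-12-2012", "nan"],
    "Income": [2000.0, 1750.0, 1786.0],
    "Fuel": [1200.0, 100.0, 289.0],
    "Extras": [122.0, 100.0, 109.0],
    "Total costs": [1842.0, 820.0, 1060.0],
    "Remainder": [158.0, 930.0, 726.0],
})

chartdf = chartdf.set_index("Week")   # Week values become the bar labels
chartdf.T.plot.bar(rot=0)             # after transposing: categories on the x axis, one bar per week
plt.show()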
Q: Plugin or Settings for Python/Pycharms for Code Editor is there a plugin or way that if i press the run button pycharms starts to highlight the code that is currently running step by step so i can see what pycharms is doing I looked into the pycharms editor settings but didnt really find anything that would help me A: I think what you are referring to is the debugger tool, which runs the script line by line or lets you put breakpoints to check specific values. Check this A: For me there are two possibilities: -use the Thonny IDE (I don't like it, but its debug mode is interesting for exactly this) -use the tutor module (https://pythontutor.com if you want the browser version) Those are the only solutions I know of.
Plugin or Settings for Python/Pycharms for Code Editor
is there a plugin or way that if i press the run button pycharms starts to highlight the code that is currently running step by step so i can see what pycharms is doing I looked into the pycharms editor settings but didnt really find anything that would help me
[ "I think that what you are referring if the debugger tool running the script line by line or just putting breakpoints to check specific values. check this\n", "for me there is two possibility :\n-use thonny IDE (i dont like it but the debug mod is interresting for that)\n-use the tutor module (https://pythontutor.com if you want navigator version)\ni only know those solution.\n" ]
[ 0, 0 ]
[]
[]
[ "pycharm", "python" ]
stackoverflow_0074599017_pycharm_python.txt
Q: How to normalize-scale data in attribute in range <-1;1> Hello i have used many options for normalize data in my dataframe attribute elnino_1["air_temp"] ,but it always shows me an error like "Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample." or "'int' object is not callable" . I try this code: (1) elnino_1["air_temp"].min=-1 elnino_1["air_temp"].max=1 elnino_1_std = (elnino_1["air_temp"] - elnino_1["air_temp"].min(axis=0)) / (elnino_1["air_temp"].max(axis=0) - elnino_1["air_temp"].min(axis=0)) elnino_1_scaled = elnino_1_std * (max - min) + min (2) XD=elnino_1["air_temp"] scaler = MinMaxScaler(feature_range=(-1, 1)) scaler = MinMaxScaler(feature_range=(-1, 1)) In both option I use libraries: from sklearn.preprocessing import scale from sklearn import preprocessing What I should to do for normalize this data please? A: As I do not have access to your dataset, here I'm using make_classification to generate some synthetic data. Please run through in a notebook to gain understanding. (Do note as well there may be slight differences as I'm using a numpy array as dataset, yours is a DataFrame.) import pandas as pd import numpy as np from sklearn.preprocessing import MinMaxScaler from sklearn.datasets import make_classification X, y = make_classification(n_samples=100, n_features=2, n_redundant=0, n_informative=1) pd.DataFrame(X).head() Thereafter, we fit a MinMaxScaler to the data. MinMaxScaler accepts a 2d array as input by default. In other words, a 'table'. Throughout these, please call X.shape to understand how array shapes work. For e.g in the above, X.shape >> (100, 2) where it is (numrows, numcolumns) scaler = MinMaxScaler(feature_range=(-1, 1)) X_norm = scaler.fit_transform(X) pd.DataFrame(X_norm).head() In your case, you are only trying to fit/scale a single column. When you only fit elnino_1["air_temp"] it is a 1d array, shape is something like (100,). So we have to reshape it into a 2d array. x1_norm = scaler.fit_transform(X[:, 1].reshape(-1,1)) pd.DataFrame(x1_norm) For example, if xyz.shape is (100,) and I want it to be (100,1), I can use xyz.reshape(100,1) if I'm being specific. The length of the dimension set to -1 is automatically determined by inferring from the specified values of other dimensions. This is useful when converting a large array shape. Thus xyz.reshape(-1,1) achieves the same as above.
How to normalize-scale data in attribute in range <-1;1>
Hello i have used many options for normalize data in my dataframe attribute elnino_1["air_temp"] ,but it always shows me an error like "Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample." or "'int' object is not callable" . I try this code: (1) elnino_1["air_temp"].min=-1 elnino_1["air_temp"].max=1 elnino_1_std = (elnino_1["air_temp"] - elnino_1["air_temp"].min(axis=0)) / (elnino_1["air_temp"].max(axis=0) - elnino_1["air_temp"].min(axis=0)) elnino_1_scaled = elnino_1_std * (max - min) + min (2) XD=elnino_1["air_temp"] scaler = MinMaxScaler(feature_range=(-1, 1)) scaler = MinMaxScaler(feature_range=(-1, 1)) In both option I use libraries: from sklearn.preprocessing import scale from sklearn import preprocessing What I should to do for normalize this data please?
[ "As I do not have access to your dataset, here I'm using make_classification to generate some synthetic data. Please run through in a notebook to gain understanding. (Do note as well there may be slight differences as I'm using a numpy array as dataset, yours is a DataFrame.)\nimport pandas as pd\nimport numpy as np\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.datasets import make_classification\n\nX, y = make_classification(n_samples=100, n_features=2, n_redundant=0, n_informative=1)\n\npd.DataFrame(X).head()\n\nThereafter, we fit a MinMaxScaler to the data. MinMaxScaler accepts a 2d array as input by default. In other words, a 'table'. Throughout these, please call X.shape to understand how array shapes work. For e.g in the above, X.shape >> (100, 2) where it is (numrows, numcolumns)\nscaler = MinMaxScaler(feature_range=(-1, 1))\n\nX_norm = scaler.fit_transform(X)\n\npd.DataFrame(X_norm).head()\n\nIn your case, you are only trying to fit/scale a single column. When you only fit elnino_1[\"air_temp\"] it is a 1d array, shape is something like (100,).\nSo we have to reshape it into a 2d array.\nx1_norm = scaler.fit_transform(X[:, 1].reshape(-1,1))\npd.DataFrame(x1_norm)\n\nFor example, if xyz.shape is (100,) and I want it to be (100,1), I can use xyz.reshape(100,1) if I'm being specific.\nThe length of the dimension set to -1 is automatically determined by inferring from the specified values of other dimensions. This is useful when converting a large array shape. Thus xyz.reshape(-1,1) achieves the same as above.\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python", "scikit_learn", "sklearn_pandas" ]
stackoverflow_0074560125_dataframe_pandas_python_scikit_learn_sklearn_pandas.txt
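The answer above demonstrates MinMaxScaler on synthetic data; applied to the asker's single column, the whole thing reduces to a few lines. The small frame below is a hypothetical stand-in for the real elnino_1 data, so this is a sketch rather than the original solution.

from sklearn.preprocessing import MinMaxScaler
import pandas as pd

# Hypothetical stand-in for the real elnino_1 frame.
elnino_1 = pd.DataFrame({"air_temp": [26.1, 26.5, 27.0, 25.8]})

scaler = MinMaxScaler(feature_range=(-1, 1))
# Double brackets keep a (n, 1) frame, the 2-D input fit_transform expects;
# .ravel() flattens the (n, 1) result back into a plain column.
elnino_1["air_temp_scaled"] = scaler.fit_transform(elnino_1[["air_temp"]]).ravel()
print(elnino_1)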
Q: Python Solution for Project Euler Question 8 I am trying to solve the question about the largest product in a series from Project Euler website. https://projecteuler.net/problem=8 I basically, saved the 1000 digits as a text file converted it to string created an array called window that stores the values this array goes through the 1000 digit array and stores the adjacent digits multiplies these digits and only keeps the maximum value. The code works for the example answer given (for 4 numbers it actually calculates 5832). The problem is the code wrongly calculates the 13 digit answer and I can't seem to find the problem. Here is my code from functools import reduce import numpy as np # open the textfile with open('Euler_8.txt') as file: lines = file.readlines() # remove the \n from each line mappedLines = list(map(lambda s: s.strip(), lines)) p = [] for ls in mappedLines: p.append(list(ls)) collapse = sum(p, []) # collapsing of matrix window = np.zeros(13) size = len(window) maxValue = 0 for a in range(len(collapse) - size - 1): for b in range(size): window[b] = collapse[a + b] intWin = list(map(int, window)) mult = reduce(lambda x, y: x * y, intWin) if mult > maxValue: maxValue = mult print(maxValue) Could you help me find the problem ? Thank you for your time. A: You compute intWin and mult inside the b loop, giving you a mix of old window and new window, and as a result you get a product that's sometimes too high. Instead, you should only compute intWin and mult once you've populated the current window. But really, your code is over-complicated, and doesn't need reduce or numpy or a separate window array (especially as that array is a float array, which may or may not have enough accuracy for the calculations you perform). Your code also omits the window that includes the final digit in the input, although that happens not to matter in this case. For example, this is a simplified (and corrected) version of your code. with open('Euler_8.txt') as file: lines = file.readlines() digits = ''.join(line.strip() for line in lines) digits = [int(x) for x in digits] size = 13 maxValue = 0 for a in range(len(digits) - size): prod = 1 for b in range(size): prod *= digits[a + b] maxValue = max(maxValue, prod) print(maxValue)
Python Solution for Project Euler Question 8
I am trying to solve the question about the largest product in a series from Project Euler website. https://projecteuler.net/problem=8 I basically, saved the 1000 digits as a text file converted it to string created an array called window that stores the values this array goes through the 1000 digit array and stores the adjacent digits multiplies these digits and only keeps the maximum value. The code works for the example answer given (for 4 numbers it actually calculates 5832). The problem is the code wrongly calculates the 13 digit answer and I can't seem to find the problem. Here is my code from functools import reduce import numpy as np # open the textfile with open('Euler_8.txt') as file: lines = file.readlines() # remove the \n from each line mappedLines = list(map(lambda s: s.strip(), lines)) p = [] for ls in mappedLines: p.append(list(ls)) collapse = sum(p, []) # collapsing of matrix window = np.zeros(13) size = len(window) maxValue = 0 for a in range(len(collapse) - size - 1): for b in range(size): window[b] = collapse[a + b] intWin = list(map(int, window)) mult = reduce(lambda x, y: x * y, intWin) if mult > maxValue: maxValue = mult print(maxValue) Could you help me find the problem ? Thank you for your time.
[ "You compute intWin and mult inside the b loop, giving you a mix of old window and new window, and as a result you get a product that's sometimes too high. Instead, you should only compute intWin and mult once you've populated the current window.\nBut really, your code is over-complicated, and doesn't need reduce or numpy or a separate window array (especially as that array is a float array, which may or may not have enough accuracy for the calculations you perform). Your code also omits the window that includes the final digit in the input, although that happens not to matter in this case.\nFor example, this is a simplified (and corrected) version of your code.\nwith open('Euler_8.txt') as file:\n lines = file.readlines()\n\ndigits = ''.join(line.strip() for line in lines)\ndigits = [int(x) for x in digits]\n\nsize = 13\nmaxValue = 0\nfor a in range(len(digits) - size):\n prod = 1\n for b in range(size):\n prod *= digits[a + b]\n maxValue = max(maxValue, prod)\n\n\nprint(maxValue)\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074598986_python_python_3.x.txt
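A more compact variant of the corrected loop, offered as a sketch (math.prod needs Python 3.8+). Note that the window range has to run to len(digits) - size + 1 if the window ending on the final digit is to be considered.

from math import prod

with open('Euler_8.txt') as file:
    digits = [int(ch) for ch in file.read() if ch.isdigit()]

size = 13
# range(... - size + 1) so the window that ends on the last digit is included
best = max(prod(digits[i:i + size]) for i in range(len(digits) - size + 1))
print(best)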
Q: pandas.DataFrame.assign: how to refer to newly created columns? I'm trying to use pandas.DataFrame.assign in Pandas 1.5.2. Let's consider this code, for instance: df = pd.DataFrame({"col1":[1,2,3], "col2": [4,5,6]}) df.assign( test1="hello", test2=df.test1 + " world" ) I'm facing this error: AttributeError: 'DataFrame' object has no attribute 'test1' However, it's explicitly stated in the documentation that: Assigning multiple columns within the same assign is possible. Later items in **kwargs may refer to newly created or modified columns in df; items are computed and assigned into df in order. So I don't understand: how can I refer to newly created or modified columns in df when calling assign? A: You can pass a callable to assign. Here use a lambda to reference the DataFrame. Parameters **kwargsdict of {str: callable or Series} The column names are keywords. If the values are callable, they are computed on the DataFrame and assigned to the new columns. The callable must not change input DataFrame (though pandas doesn’t check it). If the values are not callable, (e.g. a Series, scalar, or array), they are simply assigned. df = pd.DataFrame({"col1":[1,2,3], "col2": [4,5,6]}) df.assign( test1="hello", test2=lambda d: d.test1 + " world" ) Output: col1 col2 test1 test2 0 1 4 hello hello world 1 2 5 hello hello world 2 3 6 hello hello world A: You can crete new columns like ```df['new']= df.col1 + something import pandas as pd df = pd.DataFrame({"col1":[1,2,3], "col2": [4,5,6]}) df['test1'] = "hello" df['test2']=df.test1 + " world" df
pandas.DataFrame.assign: how to refer to newly created columns?
I'm trying to use pandas.DataFrame.assign in Pandas 1.5.2. Let's consider this code, for instance: df = pd.DataFrame({"col1":[1,2,3], "col2": [4,5,6]}) df.assign( test1="hello", test2=df.test1 + " world" ) I'm facing this error: AttributeError: 'DataFrame' object has no attribute 'test1' However, it's explicitly stated in the documentation that: Assigning multiple columns within the same assign is possible. Later items in **kwargs may refer to newly created or modified columns in df; items are computed and assigned into df in order. So I don't understand: how can I refer to newly created or modified columns in df when calling assign?
[ "You can pass a callable to assign. Here use a lambda to reference the DataFrame.\n\nParameters\n**kwargsdict of {str: callable or Series}\nThe column names are keywords. If the values are callable, they are computed on the DataFrame and\nassigned to the new columns. The callable must not change input\nDataFrame (though pandas doesn’t check it). If the values are not\ncallable, (e.g. a Series, scalar, or array), they are simply assigned.\n\ndf = pd.DataFrame({\"col1\":[1,2,3], \"col2\": [4,5,6]})\n\ndf.assign(\n test1=\"hello\",\n test2=lambda d: d.test1 + \" world\"\n)\n\nOutput:\n col1 col2 test1 test2\n0 1 4 hello hello world\n1 2 5 hello hello world\n2 3 6 hello hello world\n\n", "You can crete new columns like ```df['new']= df.col1 + something\nimport pandas as pd\ndf = pd.DataFrame({\"col1\":[1,2,3], \"col2\": [4,5,6]})\ndf['test1'] = \"hello\"\ndf['test2']=df.test1 + \" world\"\ndf\n\n" ]
[ 1, 0 ]
[]
[]
[ "dataframe", "pandas", "python", "python_3.x" ]
stackoverflow_0074599116_dataframe_pandas_python_python_3.x.txt
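To make the documentation sentence quoted in the question concrete: later keyword arguments can see earlier ones only when they are callables, because each callable receives the frame with the previous assignments already applied. A small sketch building on the question's frame:

import pandas as pd

df = pd.DataFrame({"col1": [1, 2, 3], "col2": [4, 5, 6]})

out = df.assign(
    test1="hello",
    # each callable receives the frame with the earlier kwargs already applied,
    # so test1 is visible here...
    test2=lambda d: d.test1 + " world",
    # ...and test2 is visible here.
    test3=lambda d: d.test2.str.upper(),
)
print(out)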
Q: How to return data type with input? I'm brand new to python and am using the "Job Ready for Python" as a first text and ran across this chapter 4 problem that I can't get my head around: Create a program that prompts the user to a number and then displays the type of number entered(e.g., complex, integer, or a float). I'm having a hard time understanding how to classify inputs as anything other than strings -- additionally in the book I have covered basics, variables, booleans, and operators: and (per the book) should have all the tools to do this Any help would be appreciated I tried something like num = input("Type any number: ") print(num, ":", type(num)) but that kept returning string.... and then I thought maybe I have to classify values using operators such as num = input(....) if num % 2 == x IDK from hear A: In Python, whenever you take input from user. It is always a string. age = input("Enter your age :") print(type(age)) This will print str. To convert this, you can do this, age = int(input("Enter your age :")) print(type(age)) This will print int. A: This basic code seems to work for me; not sure why the previous answerer claims it does not: Mac_3.2.57$cat getType.py age = input("Enter your age :") print(type(age)) Mac_3.2.57$python getType.py Enter your age :22 <type 'int'> Mac_3.2.57$python getType.py Enter your age :22.0 <type 'float'> Mac_3.2.57$python getType.py Enter your age :22 + 23j <type 'complex'> Mac_3.2.57$
How to return data type with input?
I'm brand new to python and am using the "Job Ready for Python" as a first text and ran across this chapter 4 problem that I can't get my head around: Create a program that prompts the user to a number and then displays the type of number entered(e.g., complex, integer, or a float). I'm having a hard time understanding how to classify inputs as anything other than strings -- additionally in the book I have covered basics, variables, booleans, and operators: and (per the book) should have all the tools to do this Any help would be appreciated I tried something like num = input("Type any number: ") print(num, ":", type(num)) but that kept returning string.... and then I thought maybe I have to classify values using operators such as num = input(....) if num % 2 == x IDK from hear
[ "In Python, whenever you take input from user. It is always a string.\nage = input(\"Enter your age :\")\nprint(type(age))\n\nThis will print str.\nTo convert this, you can do this,\nage = int(input(\"Enter your age :\"))\nprint(type(age))\n\nThis will print int.\n", "This basic code seems to work for me; not sure why the previous answerer claims it does not:\nMac_3.2.57$cat getType.py\nage = input(\"Enter your age :\")\nprint(type(age))\nMac_3.2.57$python getType.py\nEnter your age :22\n<type 'int'>\nMac_3.2.57$python getType.py\nEnter your age :22.0\n<type 'float'>\nMac_3.2.57$python getType.py\nEnter your age :22 + 23j\n<type 'complex'>\nMac_3.2.57$\n\n" ]
[ 1, 0 ]
[]
[]
[ "floating_point", "input", "integer", "python" ]
stackoverflow_0074595996_floating_point_input_integer_python.txt
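The transcript in the second answer shows <type 'int'>, which is Python 2 output, where input() evaluates what is typed; in Python 3 input() always returns a str. So the exercise comes down to trying the conversions in order, roughly as sketched below.

text = input("Type any number: ")

# input() always returns a str in Python 3, so try the conversions in order.
for kind in (int, float, complex):
    try:
        value = kind(text)
        print(value, ":", type(value))
        break
    except ValueError:
        continue
else:
    print(text, "is not a number")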
Q: Get Values from CSV to pass as Arguments in Array/List (Python) I would like to find values from one CSV in another and modify/remove the rows accordingly. Removing already works quite well, but I would like to automate this process as much as possible. So my question is how can I put all values from the serachforthat.csv (column [0]) into a kind of array or list and use it to run through the all.csv. what i got so far: *args = "searchforthat.csv[0]" # These are my values import csv with open('all.csv', 'r') as inp, open('final.csv', 'w') as out: writer = csv.writer(out) for row in csv.reader(inp): if row[3] != args: # That does not work :( writer.writerow(row) I am completely new to python and a little confused as to the correct way to write it... A: import csv with open('searchforthat.csv', 'r') as inp: args = [row[0] for row in csv.reader(inp)] with open('all.csv', 'r') as inp, open('final.csv', 'w') as out: writer = csv.writer(out) for row in csv.reader(inp): if row[3] not in args: writer.writerow(row) A: You have to get the values in the first column of searchforthat.csv out of the file first. search_values = [] with open('searchforthat.csv', 'r') as sfile: lines = [line.replace("\n", "") for line in sfile.readlines()] search_values = [line.split(",")[0] for line in lines] Good Luck! Then you can use the strings in search_values to look through the other file(s). It looks like in your question you want to compare the search_values with the values in column 0 of all.csv . And if there in there, write the lines to a new file. If that is the case you can do so like this: # get the row data from searchforthat.csv search_row_data = [] with open('searchforthat.csv', 'r') as sfile: lines = [line.replace("\n", "") for line in sfile.readlines()] search_row_data = [line.split(",") for line in lines] # map search data to values search_values = [row_data[0] for row_data in search_row_data] # get the row data from all.csv all_row_data = [] with open('all.csv', 'r') as afile: lines = [line.replace("\n", "") for line in afile.readlines()] all_row_data = [line.split(",") for line in lines # go over all.csv values and if it is not it searchforthat.csv, write it. with open('out.csv', 'w') as ofile: for row_data in all_row_data: if row_data[3] not in search_values: ofile.write(','.join(row_data) + '\n')
Get Values from CSV to pass as Arguments in Array/List (Python)
I would like to find values from one CSV in another and modify/remove the rows accordingly. Removing already works quite well, but I would like to automate this process as much as possible. So my question is how can I put all values from the serachforthat.csv (column [0]) into a kind of array or list and use it to run through the all.csv. what i got so far: *args = "searchforthat.csv[0]" # These are my values import csv with open('all.csv', 'r') as inp, open('final.csv', 'w') as out: writer = csv.writer(out) for row in csv.reader(inp): if row[3] != args: # That does not work :( writer.writerow(row) I am completely new to python and a little confused as to the correct way to write it...
[ "import csv\n\nwith open('searchforthat.csv', 'r') as inp:\n args = [row[0] for row in csv.reader(inp)]\n\nwith open('all.csv', 'r') as inp, open('final.csv', 'w') as out:\n writer = csv.writer(out)\n for row in csv.reader(inp):\n if row[3] not in args:\n writer.writerow(row)\n\n", "You have to get the values in the first column of searchforthat.csv out of the file first.\nsearch_values = []\nwith open('searchforthat.csv', 'r') as sfile:\n lines = [line.replace(\"\\n\", \"\") for line in sfile.readlines()]\n search_values = [line.split(\",\")[0] for line in lines]\n\n\nGood Luck!\nThen you can use the strings in search_values to look through the other file(s).\nIt looks like in your question you want to compare the search_values with the values in column 0 of all.csv . And if there in there, write the lines to a new file. If that is the case you can do so like this:\n# get the row data from searchforthat.csv\nsearch_row_data = []\nwith open('searchforthat.csv', 'r') as sfile:\n lines = [line.replace(\"\\n\", \"\") for line in sfile.readlines()]\n search_row_data = [line.split(\",\") for line in lines]\n\n\n# map search data to values\nsearch_values = [row_data[0] for row_data in search_row_data]\n\n# get the row data from all.csv\nall_row_data = []\nwith open('all.csv', 'r') as afile:\n lines = [line.replace(\"\\n\", \"\") for line in afile.readlines()]\n all_row_data = [line.split(\",\") for line in lines\n\n# go over all.csv values and if it is not it searchforthat.csv, write it.\nwith open('out.csv', 'w') as ofile:\n for row_data in all_row_data: \n if row_data[3] not in search_values:\n ofile.write(','.join(row_data) + '\\n')\n\n\n" ]
[ 1, 0 ]
[]
[]
[ "arrays", "csv", "list", "python", "writer" ]
stackoverflow_0074599103_arrays_csv_list_python_writer.txt
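A small variation on the accepted approach, sketched here: collecting the first column into a set keeps the per-row membership test cheap even for large files, and newline='' is the csv module's recommended way to open files. File names are taken from the question; everything else is illustrative.

import csv

with open('searchforthat.csv', newline='') as f:
    # A set makes `in` an O(1) lookup instead of scanning a list for every row.
    wanted = {row[0] for row in csv.reader(f) if row}

with open('all.csv', newline='') as inp, open('final.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    for row in csv.reader(inp):
        if row[3] not in wanted:
            writer.writerow(row)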
Q: Pandas to_csv() checking for overwrite When I am analyzing data, I save my dataframes into a csv-file and use pd.to_csv() for that. However, the function (over)writes the new file, without checking whether there exists one with the same name. Is there a way to check whether the file already exists, and if so, ask for a new filename? I know I can add the system's datetime to the filename, which will prevent any overwriting, but I would like to know when I made the mistake. A: Try the following: import glob import pandas as pd # Give the filename you wish to save the file to filename = 'Your_filename.csv' # Use this function to search for any files which match your filename files_present = glob.glob(filename) # if no matching files, write to csv, if there are matching files, print statement if not files_present: pd.to_csv(filename) else: print 'WARNING: This file already exists!' I have not tested this but it has been lifted and compiled from some previous code which I have written. This will simply STOP files overwriting others. N.B. you will have to change the filename variable yourself to then save the file, or use some datetime variable as you suggested. I hope this helps in some way. A: Based on TaylorDay's suggestion I made some adjustments to the function. With the following code you are asked whether you would like to overwrite an existing file. If not, you are allowed to type in another name. Then, the same write-function is called, which will again check whether the new_filename exists. from os import path import pandas as pd def write_csv_df(path, filename, df): # Give the filename you wish to save the file to pathfile = os.path.normpath(os.path.join(path,filename)) # Use this function to search for any files which match your filename files_present = os.path.isfile(pathfile) # if no matching files, write to csv, if there are matching files, print statement if not files_present: df.to_csv(pathfile, sep=';') else: overwrite = raw_input("WARNING: " + pathfile + " already exists! Do you want to overwrite <y/n>? \n ") if overwrite == 'y': df.to_csv(pathfile, sep=';') elif overwrite == 'n': new_filename = raw_input("Type new filename: \n ") write_csv_df(path,new_filename,df) else: print "Not a valid input. Data is NOT saved!\n" A: For 3.3+ use mode='x' From the docs: open for exclusive creation, failing if the file already exists try: df.to_csv('abc.csv', mode='x') except FileExistsError: df.to_csv('unique_name.csv')
Pandas to_csv() checking for overwrite
When I am analyzing data, I save my dataframes into a csv-file and use pd.to_csv() for that. However, the function (over)writes the new file, without checking whether there exists one with the same name. Is there a way to check whether the file already exists, and if so, ask for a new filename? I know I can add the system's datetime to the filename, which will prevent any overwriting, but I would like to know when I made the mistake.
[ "Try the following:\nimport glob\nimport pandas as pd\n\n# Give the filename you wish to save the file to\nfilename = 'Your_filename.csv'\n\n# Use this function to search for any files which match your filename\nfiles_present = glob.glob(filename)\n\n\n# if no matching files, write to csv, if there are matching files, print statement\nif not files_present:\n pd.to_csv(filename)\nelse:\n print 'WARNING: This file already exists!' \n\nI have not tested this but it has been lifted and compiled from some previous code which I have written. This will simply STOP files overwriting others. N.B. you will have to change the filename variable yourself to then save the file, or use some datetime variable as you suggested. I hope this helps in some way. \n", "Based on TaylorDay's suggestion I made some adjustments to the function. With the following code you are asked whether you would like to overwrite an existing file. If not, you are allowed to type in another name. Then, the same write-function is called, which will again check whether the new_filename exists. \nfrom os import path\nimport pandas as pd\ndef write_csv_df(path, filename, df):\n # Give the filename you wish to save the file to\n pathfile = os.path.normpath(os.path.join(path,filename))\n\n # Use this function to search for any files which match your filename\n files_present = os.path.isfile(pathfile) \n # if no matching files, write to csv, if there are matching files, print statement\n if not files_present:\n df.to_csv(pathfile, sep=';')\n else:\n overwrite = raw_input(\"WARNING: \" + pathfile + \" already exists! Do you want to overwrite <y/n>? \\n \")\n if overwrite == 'y':\n df.to_csv(pathfile, sep=';')\n elif overwrite == 'n':\n new_filename = raw_input(\"Type new filename: \\n \")\n write_csv_df(path,new_filename,df)\n else:\n print \"Not a valid input. Data is NOT saved!\\n\"\n\n", "For 3.3+ use mode='x'\nFrom the docs:\n\nopen for exclusive creation, failing if the file already exists\n\ntry:\n df.to_csv('abc.csv', mode='x')\nexcept FileExistsError:\n df.to_csv('unique_name.csv')\n\n" ]
[ 13, 5, 0 ]
[ " # if you already has a file with the name \"out\"\n # nothing will happen as pass gets excuted\ntry:\n df.to_csv('out.csv')\nexcept:\n pass\n\n" ]
[ -1 ]
[ "export_to_csv", "file_management", "pandas", "python", "python_2.7" ]
stackoverflow_0040375366_export_to_csv_file_management_pandas_python_python_2.7.txt
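Putting the mode='x' idea from the last answer together with the question's wish to be asked for a new filename, a minimal sketch (the function name is illustrative):

import pandas as pd

def save_csv(df, filename):
    while True:
        try:
            # mode="x" makes to_csv fail instead of silently overwriting.
            df.to_csv(filename, mode="x")
            return filename
        except FileExistsError:
            filename = input(f"{filename} already exists, type a new filename: ")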
Q: AUTH_USER_MODEL refers to model that has not been installed I am getting an error ImproperlyConfigured at /admin/ AUTH_USER_MODEL refers to model 'ledger.User' that has not been installed I am only getting it on my production server. Not when I run things via localhost. First it was only when I was making a certain request. Then I thought my database must be out of sync so I deleted all the tables and then ran manage.py syncdb. Now, it seems to have propogated and even going to the admin throws the error. I have never seen this error before and can't figure out what the deal is. I have defined the AUTH_USER_MODEL in settings.py: ... INSTALLED_APPS = ( 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'ledger', 'extension', 'plugin', 'social.apps.django_app.default', ) AUTH_USER_MODEL = 'ledger.User' ... models.py: ... class User(AbstractUser): def __unicode__(self): return self.username balance = models.IntegerField(default=0) total_pledged = models.IntegerField(default=0) last_pledged = models.ForeignKey('Transaction', related_name='pledger', blank=True, null=True) extension_key = models.CharField(max_length=100, null=True, blank=True) plugin_key = models.CharField(max_length=100, null=True, blank=True) ghosted = models.BooleanField(default=False) def save(self, *args, **kwargs): print('saving') try: self.company.save() except: print('no company') super(User, self).save(*args, **kwargs) ... A: I had this problem and it was solved by properly understanding how the Django filed are structured. The instructions in tutorials are often different and confusing. You need to understand that when you install Django, there are two key steps: 1: creating a project 2: creating an app (application) Let's illustrate the problem by following the official Django tutorial: https://docs.djangoproject.com/en/3.1/intro/tutorial01/ Step 1: create a new project: django-admin startproject mysite You will now find there is a directory called "mysite" **Step 2: ** the tutorial says: To create your app, make sure you’re in the same directory as manage.py and type this command: which is the directory just created, so go to: cd mysite and if you ls then you will find in this directory is: a file named manage.py confusingly, another directory named mysite Step 3: Now the tutorial says to create your django application: python manage.py startapp polls So now ls shows these files: manage.py mysite polls Consufion has probably set in right now because all these files are under the mysite directory. Step 4: If you want to use a custom user model, then the official advice is to do it right at the start of the project before doing any migrations. OK, so edit mysite/settings.py and add the line: AUTH_USER_MODEL = 'polls.User' and edit polls/models and add: from django.db import models # Create your models here. from django.contrib.auth.models import AbstractUser class User(AbstractUser): pass ** Step 5:** So now it should be OK to do the first migration, right? python manage.py makemigrations But SPLAT! 
Traceback (most recent call last): File "/opt/theapp/venv3.8/lib/python3.8/site-packages/django/apps/registry.py", line 156, in get_app_config return self.app_configs[app_label] KeyError: 'polls' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/theapp/venv3.8/lib/python3.8/site-packages/django/contrib/auth/__init__.py", line 157, in get_user_model return django_apps.get_model(settings.AUTH_USER_MODEL, require_ready=False) File "/opt/theapp/venv3.8/lib/python3.8/site-packages/django/apps/registry.py", line 206, in get_model app_config = self.get_app_config(app_label) File "/opt/theapp/venv3.8/lib/python3.8/site-packages/django/apps/registry.py", line 163, in get_app_config raise LookupError(message) LookupError: No installed app with label 'polls'. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "manage.py", line 22, in <module> main() File "manage.py", line 18, in main execute_from_command_line(sys.argv) File "/opt/theapp/venv3.8/lib/python3.8/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line utility.execute() File "/opt/theapp/venv3.8/lib/python3.8/site-packages/django/core/management/__init__.py", line 377, in execute django.setup() File "/opt/theapp/venv3.8/lib/python3.8/site-packages/django/__init__.py", line 24, in setup apps.populate(settings.INSTALLED_APPS) File "/opt/theapp/venv3.8/lib/python3.8/site-packages/django/apps/registry.py", line 122, in populate app_config.ready() File "/opt/theapp/venv3.8/lib/python3.8/site-packages/django/contrib/admin/apps.py", line 24, in ready self.module.autodiscover() File "/opt/theapp/venv3.8/lib/python3.8/site-packages/django/contrib/admin/__init__.py", line 24, in autodiscover autodiscover_modules('admin', register_to=site) File "/opt/theapp/venv3.8/lib/python3.8/site-packages/django/utils/module_loading.py", line 47, in autodiscover_modules import_module('%s.%s' % (app_config.name, module_to_search)) File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/opt/theapp/venv3.8/lib/python3.8/site-packages/django/contrib/auth/admin.py", line 6, in <module> from django.contrib.auth.forms import ( File "/opt/theapp/venv3.8/lib/python3.8/site-packages/django/contrib/auth/forms.py", line 21, in <module> UserModel = get_user_model() File "/opt/theapp/venv3.8/lib/python3.8/site-packages/django/contrib/auth/__init__.py", line 161, in get_user_model raise ImproperlyConfigured( django.core.exceptions.ImproperlyConfigured: AUTH_USER_MODEL refers to model 'polls.User' that has not been installed (venv3.8) ubuntu@ip-172-26-5-79:~/mysite$ There is the error: .ImproperlyConfigured: AUTH_USER_MODEL refers to model 'polls.User' that has not been installed So we fix this by modifying mysite/settings.py from this: INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ] to this: 
INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'polls', ] Notice that the line you needed to add was "polls" which is the name of your Django application. Now try again: (venv3.8) ubuntu@ip-172-26-5-79:~/mysite$ python manage.py makemigrations Migrations for 'polls': polls/migrations/0001_initial.py - Create model User (venv3.8) ubuntu@ip-172-26-5-79:~/mysite$ Success!!! So the point of this long story is to make it clear that Django MUST know where your Django application is. And the place that you tell Django that, is in INSTALLED_APPS in the settings.py file. It's really confusing the difference between Django project and Django app and its made worse by the strange suggestion to create two directories with the same name. Instead, to make things much more clear I suggest you suffix your Django project name with "project" and suffix your Django application name with "app" and don't give the top level directory the same name as the project. Thus to set up a new project: (venv3.8) ubuntu@ip-172-26-5-79:~$ mkdir container (venv3.8) ubuntu@ip-172-26-5-79:~$ cd container/ (venv3.8) ubuntu@ip-172-26-5-79:~/container$ django-admin startproject myproject . (venv3.8) ubuntu@ip-172-26-5-79:~/container$ ls manage.py myproject (venv3.8) ubuntu@ip-172-26-5-79:~/container$ python manage.py startapp myapp (venv3.8) ubuntu@ip-172-26-5-79:~/container$ ls -lah total 20K drwxrwxr-x 4 ubuntu ubuntu 4.0K Oct 27 05:30 . drwxr-xr-x 11 ubuntu ubuntu 4.0K Oct 27 05:29 .. -rwxrwx--- 1 ubuntu ubuntu 665 Oct 27 05:30 manage.py drwxrwxr-x 3 ubuntu ubuntu 4.0K Oct 27 05:30 myapp drwxrwxr-x 3 ubuntu ubuntu 4.0K Oct 27 05:30 myproject (venv3.8) ubuntu@ip-172-26-5-79:~/container$ It's now going to be much more clear to you when you are dealing with the Django site and when you are dealing with the Django project and you will no longer be confused by there being multiple directories of the same name. A: Don't have enough rep to comment on @Duke Dougal solution so i'm creating a new comment. Had the same problem but none of the above solutions worked for me. Finally fixed it by moving my app (where I have my User model) to the very end of INSTALLED APPS (even after all of my other apps). So something like this: INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'my_other_app_1', 'my_other_app_2', 'polls', # <-- App with my custom User model ] A: In my case a circular import statement produced the error mentioned by OP. Here was my buggy code: custom_auth_app/models.py ... from custom_auth_app import views.py ... class CustomUser(AbstractUser): ... Meanwhile in custom_auth_app/views.py from custom_auth_app import models ... models.CustomUser.objects.all() And as a second proof to my claim, I did not get an error after reordering statements in custom_auth_app/models.py ... class CustomUser(AbstractUser): ... from custom_auth_app import views.py # won't cause an error ... Thus, this circular import prevented CustomUser from being loaded, and caused the error. A: I know this is a very old question but I felt like sharing my discovery (of how stupid I can be!). In my case, the issue was that my custom user model had abstract=True set in the Meta class (happens when you copy the base class code in order to override it :P). 
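To make that last pitfall concrete, here is a minimal sketch (the app and field names are illustrative, not taken from any answer above): keeping abstract = True, which is easy to copy over from a base class, means Django never creates a concrete table for the model, so AUTH_USER_MODEL has nothing to point at. Dropping that line restores a normal, installable model.

# Buggy: Meta copied from a base class, so the model stays abstract
class User(AbstractUser):
    class Meta:
        verbose_name = 'user'
        abstract = True   # <-- this line must be removed for AUTH_USER_MODEL to work

# Fixed: a concrete model that AUTH_USER_MODEL = 'myapp.User' can resolve
class User(AbstractUser):
    class Meta:
        verbose_name = 'user'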
A: I'm aware this is an old question but I struggled with this issue for two days before finding my mistake, which was failing to follow the models organization in the Django Models docs. If you have the AUTH_USER_MODEL = '<app_name>.<user_model>' (e.g. 'ledger.User') correctly written, and you have your '<app_name>', (e.g. ledger), in your INSTALLED_APPS list, but you're still getting this error, it's possible that your <custom_user> model (e.g. User) is in the wrong place. It needs to be defined in either the models.py file in the top level of your app directory: <app_name>/models.py OR in a models/ directory containing an __init__.py file that imports your model: <app_name>/models/<arbitrary_name>.py AND there is an <app_name>/models/__init__.py that contains the line from .<arbitrary_name> import <custom_user> A: This error can appear if any one of your other apps fails to load - even if it has nothing to do with the custom user model. In my case, I had the 'gcharts' app installed but needed to install the google-visualization-python library with it. Neither have anything to do with the user model, but django returned this error regardless. The following should reveal the true root cause of your issue: Comment out most of your apps in settings.py until django boots up. Add apps back one at a time until you get an error A: @Abhishek Dalvi answer also saves me. class Meta: verbose_name = _('user') verbose_name_plural = _('users') abstract = True # <-- this must be deleted if you create custom User A: Using Django 3.1 My issue was very simple due to the fact that i was using development settings. which had info was missing for the installed apps for normal settings file A: My issue was that my custom user model had an incorrect app_label manually applied to it in the Meta class. Only took me 3 hours to figure it out! # project/settings.py INSTALLED_APPS += [ "project.random_app", "project.user_app", # app containing the custom user model ] AUTH_USER_MODEL = "user_app.MyUser" And the error in the user model class app_label # project/user_app/models.py class MyUser(Model): class Meta: app_label = "WRONG" # must match an app in `installed_apps` I solved this by removing the app_label, since its not actually needed. You could also make it match the name of the app, but user_app. In my case this was set due to trying to fix the same issue a while ago, getting it working, and then doing a reoganization before pushing. A: I have tried all the answers above, made sure that everything was set up as described before, it still didn't work. I'm running django in a docker container and don't have much experience with this. The solution for me was to simply rebuild (only) the django image with docker-compose up -d --no-deps --build django. A: In my case, the problem was that I had imported something that belonged to the default User model. # my_app/models.py from django.db import models from django.contrib.auth.forms import (UserCreationForm, UserChangeForm) # Problematic line from django.contrib.auth.models import AbstractUser class CustomUser(AbstractUser): pass Both UserCreationForm and UserChangeForm actually have set model = User in their inner Meta class and can't be used with a custom user model. Once I removed that line, makemigrations (and other manage.py commands) worked with no problem. I had forgot that I can't use User forms directly with a custom user model, and should rather subclass them and set model = CustomUser in their inner Meta class. 
For example: class CustomUserCreationForm(UserCreationForm): class Meta(UserCreationForm.Meta): model = CustomUser
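As a follow-up usage sketch (not part of the original answers; it reuses the CustomUser and CustomUserCreationForm names assumed above and presumes an analogous CustomUserChangeForm exists), the subclassed forms are usually wired into a UserAdmin so the admin site also works with the custom model:

# admin.py - minimal sketch under the assumptions stated above
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from .models import CustomUser
from .forms import CustomUserCreationForm, CustomUserChangeForm

class CustomUserAdmin(UserAdmin):
    model = CustomUser
    add_form = CustomUserCreationForm   # form used on the "add user" page
    form = CustomUserChangeForm         # form used on the "change user" page

admin.site.register(CustomUser, CustomUserAdmin)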
AUTH_USER_MODEL refers to model that has not been installed
I am getting an error ImproperlyConfigured at /admin/ AUTH_USER_MODEL refers to model 'ledger.User' that has not been installed I am only getting it on my production server. Not when I run things via localhost. First it was only when I was making a certain request. Then I thought my database must be out of sync so I deleted all the tables and then ran manage.py syncdb. Now, it seems to have propagated and even going to the admin throws the error. I have never seen this error before and can't figure out what the deal is. I have defined the AUTH_USER_MODEL in settings.py: ... INSTALLED_APPS = ( 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'ledger', 'extension', 'plugin', 'social.apps.django_app.default', ) AUTH_USER_MODEL = 'ledger.User' ... models.py: ... class User(AbstractUser): def __unicode__(self): return self.username balance = models.IntegerField(default=0) total_pledged = models.IntegerField(default=0) last_pledged = models.ForeignKey('Transaction', related_name='pledger', blank=True, null=True) extension_key = models.CharField(max_length=100, null=True, blank=True) plugin_key = models.CharField(max_length=100, null=True, blank=True) ghosted = models.BooleanField(default=False) def save(self, *args, **kwargs): print('saving') try: self.company.save() except: print('no company') super(User, self).save(*args, **kwargs) ...
[ "I had this problem and it was solved by properly understanding how the Django filed are structured.\nThe instructions in tutorials are often different and confusing.\nYou need to understand that when you install Django, there are two key steps:\n1: creating a project\n2: creating an app (application)\nLet's illustrate the problem by following the official Django tutorial:\nhttps://docs.djangoproject.com/en/3.1/intro/tutorial01/\nStep 1: create a new project:\ndjango-admin startproject mysite\n\nYou will now find there is a directory called \"mysite\"\n**Step 2: **\nthe tutorial says:\nTo create your app, make sure you’re in the same directory as manage.py and type this command:\nwhich is the directory just created, so go to:\ncd mysite\n\nand if you ls then you will find in this directory is:\na file named manage.py\nconfusingly, another directory named mysite\nStep 3:\nNow the tutorial says to create your django application:\npython manage.py startapp polls\n\nSo now ls shows these files:\nmanage.py\nmysite\npolls\nConsufion has probably set in right now because all these files are under the mysite directory.\nStep 4:\nIf you want to use a custom user model, then the official advice is to do it right at the start of the project before doing any migrations.\nOK, so edit mysite/settings.py and add the line:\nAUTH_USER_MODEL = 'polls.User'\nand edit polls/models and add:\nfrom django.db import models\n\n# Create your models here.\n\nfrom django.contrib.auth.models import AbstractUser\n\nclass User(AbstractUser):\n pass\n\n** Step 5:**\nSo now it should be OK to do the first migration, right?\npython manage.py makemigrations\n\nBut SPLAT!\nTraceback (most recent call last):\n File \"/opt/theapp/venv3.8/lib/python3.8/site-packages/django/apps/registry.py\", line 156, in get_app_config\n return self.app_configs[app_label]\nKeyError: 'polls'\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/theapp/venv3.8/lib/python3.8/site-packages/django/contrib/auth/__init__.py\", line 157, in get_user_model\n return django_apps.get_model(settings.AUTH_USER_MODEL, require_ready=False)\n File \"/opt/theapp/venv3.8/lib/python3.8/site-packages/django/apps/registry.py\", line 206, in get_model\n app_config = self.get_app_config(app_label)\n File \"/opt/theapp/venv3.8/lib/python3.8/site-packages/django/apps/registry.py\", line 163, in get_app_config\n raise LookupError(message)\nLookupError: No installed app with label 'polls'.\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"manage.py\", line 22, in <module>\n main()\n File \"manage.py\", line 18, in main\n execute_from_command_line(sys.argv)\n File \"/opt/theapp/venv3.8/lib/python3.8/site-packages/django/core/management/__init__.py\", line 401, in execute_from_command_line\n utility.execute()\n File \"/opt/theapp/venv3.8/lib/python3.8/site-packages/django/core/management/__init__.py\", line 377, in execute\n django.setup()\n File \"/opt/theapp/venv3.8/lib/python3.8/site-packages/django/__init__.py\", line 24, in setup\n apps.populate(settings.INSTALLED_APPS)\n File \"/opt/theapp/venv3.8/lib/python3.8/site-packages/django/apps/registry.py\", line 122, in populate\n app_config.ready()\n File \"/opt/theapp/venv3.8/lib/python3.8/site-packages/django/contrib/admin/apps.py\", line 24, in ready\n self.module.autodiscover()\n File \"/opt/theapp/venv3.8/lib/python3.8/site-packages/django/contrib/admin/__init__.py\", line 24, in 
autodiscover\n autodiscover_modules('admin', register_to=site)\n File \"/opt/theapp/venv3.8/lib/python3.8/site-packages/django/utils/module_loading.py\", line 47, in autodiscover_modules\n import_module('%s.%s' % (app_config.name, module_to_search))\n File \"/usr/lib/python3.8/importlib/__init__.py\", line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n File \"<frozen importlib._bootstrap>\", line 1014, in _gcd_import\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\n File \"<frozen importlib._bootstrap>\", line 671, in _load_unlocked\n File \"<frozen importlib._bootstrap_external>\", line 783, in exec_module\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\n File \"/opt/theapp/venv3.8/lib/python3.8/site-packages/django/contrib/auth/admin.py\", line 6, in <module>\n from django.contrib.auth.forms import (\n File \"/opt/theapp/venv3.8/lib/python3.8/site-packages/django/contrib/auth/forms.py\", line 21, in <module>\n UserModel = get_user_model()\n File \"/opt/theapp/venv3.8/lib/python3.8/site-packages/django/contrib/auth/__init__.py\", line 161, in get_user_model\n raise ImproperlyConfigured(\ndjango.core.exceptions.ImproperlyConfigured: AUTH_USER_MODEL refers to model 'polls.User' that has not been installed\n(venv3.8) ubuntu@ip-172-26-5-79:~/mysite$\n\nThere is the error:\n.ImproperlyConfigured: AUTH_USER_MODEL refers to model 'polls.User' that has not been installed\nSo we fix this by modifying mysite/settings.py from this:\nINSTALLED_APPS = [\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n]\n\nto this:\nINSTALLED_APPS = [\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'polls',\n]\n\nNotice that the line you needed to add was \"polls\" which is the name of your Django application.\nNow try again:\n(venv3.8) ubuntu@ip-172-26-5-79:~/mysite$ python manage.py makemigrations\nMigrations for 'polls':\n polls/migrations/0001_initial.py\n - Create model User\n(venv3.8) ubuntu@ip-172-26-5-79:~/mysite$\n\nSuccess!!!\nSo the point of this long story is to make it clear that Django MUST know where your Django application is. 
And the place that you tell Django that, is in INSTALLED_APPS in the settings.py file.\nIt's really confusing the difference between Django project and Django app and its made worse by the strange suggestion to create two directories with the same name.\nInstead, to make things much more clear I suggest you suffix your Django project name with \"project\" and suffix your Django application name with \"app\" and don't give the top level directory the same name as the project.\nThus to set up a new project:\n(venv3.8) ubuntu@ip-172-26-5-79:~$ mkdir container\n(venv3.8) ubuntu@ip-172-26-5-79:~$ cd container/\n(venv3.8) ubuntu@ip-172-26-5-79:~/container$ django-admin startproject myproject .\n(venv3.8) ubuntu@ip-172-26-5-79:~/container$ ls\nmanage.py myproject\n(venv3.8) ubuntu@ip-172-26-5-79:~/container$ python manage.py startapp myapp\n(venv3.8) ubuntu@ip-172-26-5-79:~/container$ ls -lah\ntotal 20K\ndrwxrwxr-x 4 ubuntu ubuntu 4.0K Oct 27 05:30 .\ndrwxr-xr-x 11 ubuntu ubuntu 4.0K Oct 27 05:29 ..\n-rwxrwx--- 1 ubuntu ubuntu 665 Oct 27 05:30 manage.py\ndrwxrwxr-x 3 ubuntu ubuntu 4.0K Oct 27 05:30 myapp\ndrwxrwxr-x 3 ubuntu ubuntu 4.0K Oct 27 05:30 myproject\n(venv3.8) ubuntu@ip-172-26-5-79:~/container$\n\nIt's now going to be much more clear to you when you are dealing with the Django site and when you are dealing with the Django project and you will no longer be confused by there being multiple directories of the same name.\n", "Don't have enough rep to comment on @Duke Dougal solution so i'm creating a new comment.\nHad the same problem but none of the above solutions worked for me. Finally fixed it by moving my app (where I have my User model) to the very end of INSTALLED APPS (even after all of my other apps). So something like this:\nINSTALLED_APPS = [\n'django.contrib.admin',\n'django.contrib.auth',\n'django.contrib.contenttypes',\n'django.contrib.sessions',\n'django.contrib.messages',\n'django.contrib.staticfiles',\n'my_other_app_1',\n'my_other_app_2',\n'polls', # <-- App with my custom User model\n\n]\n", "In my case a circular import statement produced the error mentioned by OP.\nHere was my buggy code:\ncustom_auth_app/models.py\n...\nfrom custom_auth_app import views.py\n...\nclass CustomUser(AbstractUser):\n ...\n\nMeanwhile in custom_auth_app/views.py\nfrom custom_auth_app import models\n...\nmodels.CustomUser.objects.all()\n\nAnd as a second proof to my claim, I did not get an error after reordering statements in custom_auth_app/models.py\n...\nclass CustomUser(AbstractUser):\n ...\nfrom custom_auth_app import views.py # won't cause an error\n...\n\nThus, this circular import prevented CustomUser from being loaded, and caused the error.\n", "I know this is a very old question but I felt like sharing my discovery (of how stupid I can be!).\n\nIn my case, the issue was that my custom user model had abstract=True set in the Meta class (happens when you copy the base class code in order to override it :P).\n", "I'm aware this is an old question but I struggled with this issue for two days before finding my mistake, which was failing to follow the models organization in the Django Models docs.\nIf you have the AUTH_USER_MODEL = '<app_name>.<user_model>' (e.g. 'ledger.User') correctly written, and you have your '<app_name>', (e.g. ledger), in your INSTALLED_APPS list, but you're still getting this error, it's possible that your <custom_user> model (e.g. 
User) is in the wrong place.\nIt needs to be defined in either the models.py file in the top level of your app directory:\n\n<app_name>/models.py\n\nOR in a models/ directory containing an __init__.py file that imports your model:\n\n<app_name>/models/<arbitrary_name>.py AND there is an <app_name>/models/__init__.py that contains the line from .<arbitrary_name> import <custom_user>\n\n", "This error can appear if any one of your other apps fails to load - even if it has nothing to do with the custom user model. In my case, I had the 'gcharts' app installed but needed to install the google-visualization-python library with it. Neither have anything to do with the user model, but django returned this error regardless. \nThe following should reveal the true root cause of your issue:\n\nComment out most of your apps in settings.py until django boots up.\nAdd apps back one at a time until you get an error\n\n", "@Abhishek Dalvi answer also saves me.\n\nclass Meta:\n verbose_name = _('user')\n verbose_name_plural = _('users')\n abstract = True # <-- this must be deleted if you create custom User\n\n\n", "Using Django 3.1\nMy issue was very simple due to the fact that i was using development settings. which had info was missing for the installed apps for normal settings file\n", "My issue was that my custom user model had an incorrect app_label manually applied to it in the Meta class. Only took me 3 hours to figure it out!\n# project/settings.py\nINSTALLED_APPS += [\n \"project.random_app\",\n \"project.user_app\", # app containing the custom user model\n]\nAUTH_USER_MODEL = \"user_app.MyUser\"\n\nAnd the error in the user model class app_label\n# project/user_app/models.py\nclass MyUser(Model):\n class Meta:\n app_label = \"WRONG\" # must match an app in `installed_apps`\n\nI solved this by removing the app_label, since its not actually needed. You could also make it match the name of the app, but user_app.\nIn my case this was set due to trying to fix the same issue a while ago, getting it working, and then doing a reoganization before pushing.\n", "I have tried all the answers above, made sure that everything was set up as described before, it still didn't work.\nI'm running django in a docker container and don't have much experience with this. The solution for me was to simply rebuild (only) the django image with docker-compose up -d --no-deps --build django.\n", "In my case, the problem was that I had imported something that belonged to the default User model.\n# my_app/models.py\nfrom django.db import models\nfrom django.contrib.auth.forms import (UserCreationForm, UserChangeForm) # Problematic line\nfrom django.contrib.auth.models import AbstractUser\n\n\nclass CustomUser(AbstractUser):\n pass\n\nBoth UserCreationForm and UserChangeForm actually have set model = User in their inner Meta class and can't be used with a custom user model. Once I removed that line, makemigrations (and other manage.py commands) worked with no problem.\nI had forgot that I can't use User forms directly with a custom user model, and should rather subclass them and set model = CustomUser in their inner Meta class. For example:\nclass CustomUserCreationForm(UserCreationForm):\n class Meta(UserCreationForm.Meta):\n model = CustomUser\n\n" ]
[ 8, 3, 3, 2, 2, 1, 0, 0, 0, 0, 0 ]
[ "I've had the same problem, but no answer here worked for me.\nMy issue was that my custom UserAdmin was in models.py but not in admin.py. Hope that helps!\nI found the solution here : Django LookupError: App 'accounts' doesn't have a 'User' model\nLink to the original answer: https://groups.google.com/g/django-users/c/pciwcW84DkQ/m/Z7W6KznlBwAJ\n" ]
[ -1 ]
[ "django", "django_authentication", "django_settings", "python" ]
stackoverflow_0026914022_django_django_authentication_django_settings_python.txt
Q: how to use Wait in Print standment in for loop python without appending into LISTs? I am wondering if its possible to execute first print statement and then others. For example in below code. It can print the prod_val then c. code: l = [2,3,4] pro_val = 1 c = 0 for i in range(len(l)): pro_val = pro_val * l[c] c = c+1 print(pro_val) await #looking something here and it print c after print(c) expected: 2 6 24 1 2 3 A: You get your desired output, if you simply do two loops. l = [2, 3, 4] pro_val = 1 for num in l: pro_val *= num print(pro_val) for num in l: print(num) Output: 2 6 24 2 3 4 If you want the second to print the indices shifted by one instead, you would do this instead: ... for num in l: pro_val *= num print(pro_val) for i in range(len(l)): print(i + 1) Output: 2 6 24 1 2 3 Not sure, why you want to throw await-expressions into the mix here. PS You seem to think that there is some magical statement, with which you can defer that second print in your original loop until the end of the loop, such that all those print(c) calls are executed after the loop. I guess just writing the algorithm accordingly is too simple...
how to use Wait in Print standment in for loop python without appending into LISTs?
I am wondering if its possible to execute first print statement and then others. For example in below code. It can print the prod_val then c. code: l = [2,3,4] pro_val = 1 c = 0 for i in range(len(l)): pro_val = pro_val * l[c] c = c+1 print(pro_val) await #looking something here and it print c after print(c) expected: 2 6 24 1 2 3
[ "You get your desired output, if you simply do two loops.\nl = [2, 3, 4]\npro_val = 1\n\nfor num in l:\n pro_val *= num\n print(pro_val)\n\nfor num in l:\n print(num)\n\nOutput:\n\n2\n6\n24\n2\n3\n4\n\nIf you want the second to print the indices shifted by one instead, you would do this instead:\n...\nfor num in l:\n pro_val *= num\n print(pro_val)\n\nfor i in range(len(l)):\n print(i + 1)\n\nOutput:\n\n2\n6\n24\n1\n2\n3\n\nNot sure, why you want to throw await-expressions into the mix here.\n\nPS\nYou seem to think that there is some magical statement, with which you can defer that second print in your original loop until the end of the loop, such that all those print(c) calls are executed after the loop. I guess just writing the algorithm accordingly is too simple...\n" ]
[ 1 ]
[]
[]
[ "for_loop", "python" ]
stackoverflow_0074599097_for_loop_python.txt
Q: Import modules in package in ROS2 I have created a package for ROS2 and I have added a Python repository I downloaded. The problem I am having is that in the original repository the modules from the own repo were imported directly while in mine I have to import them adding the ROS2 package name before the module, even though I am importing a module from the same repo, like: import planner_pkg.SimpleOneTrailerSystem as SimpleOneTrailerSystem while I would like: import SimpleOneTrailerSystem My ROS2 project structure is like: ros2_ws src planner planner_pkg __init__.py SimpleOneTrailerSystem.py planner_node.py ... package.xml setup.py package.xml <?xml version="1.0"?> <package format="2"> <name>planner_pkg</name> <version>0.0.1</version> <description>This package contains algorithm for park planner</description> <maintainer email=""></maintainer> <license>Apache License 2.0</license> <exec_depend>rclpy</exec_depend> <exec_depend>std_msgs</exec_depend> <!-- These test dependencies are optional Their purpose is to make sure that the code passes the linters --> <test_depend>ament_copyright</test_depend> <test_depend>ament_flake8</test_depend> <test_depend>ament_pep257</test_depend> <test_depend>python3-pytest</test_depend> <export> <build_type>ament_python</build_type> </export> </package> setup.py: from setuptools import setup package_name = 'planner_pkg' setup( name=package_name, version='0.0.0', packages=[package_name], data_files=[ ('share/ament_index/resource_index/packages', ['resource/' + package_name]), ('share/' + package_name, ['package.xml']), ], install_requires=['setuptools'], zip_safe=True, author='', author_email='', maintainer='', maintainer_email='', keywords=['ROS'], classifiers=[ 'Intended Audience :: Developers', 'License :: OSI Approved :: Apache Software License', 'Programming Language :: Python', 'Topic :: Software Development', ], description='Package containing examples of how to use the rclpy API.', license='Apache License, Version 2.0', tests_require=['pytest'], entry_points={ 'console_scripts': [ 'planner_node = planner_pkg.planner_node:main', ], }, ) A: First, according to the Module Search Path docs, when you do import something, Python looks for that something in the following places: From the built-in modules sys.path, which is a list containing: The directory of the input script PYTHONPATH, which is an environment variable containing a list of directories The installation-dependent default Second, when you build your ROS2 Python package (by calling colcon build invoking the ament_python build type), your Python codes will be copied over to an install folder with a tree structure like this: install ... ├── planner_pkg │   ├── bin │   │   └── planner_node │   ├── lib │   │   └── python3.6 │   │   └── site-packages │   │   ├── planner_pkg │   │   │   ├── __init__.py │   │   │   ├── planner_node.py │   │   │   └── SimpleOneTrailerSystem.py ... Now, when you do import SimpleOneTrailerSystem, Python will first search for it from the built-in modules, which for sure it won't find there. Next on the list is from sys.path. You can add a print(sys.path) at the top of planner_node.py to see something like this list: ['/path/to/install/planner_pkg/bin', '/path/to/install/planner_pkg/lib/python3.6/site-packages', '/opt/ros/eloquent/lib/python3.6/site-packages', '/usr/lib/python36.zip', '/usr/lib/python3.6', ...other Python3.6 installation-dependent dirs... ] First on the sys.path list is the bin folder of the input script. 
There are only executables there, no SimpleOneTrailerSystem.py file/module, so that import will fail. The next on the list would be the planner_pkg/lib/pythonX.X/site-packages, and as you can see from the tree structure above, there is a SimpleOneTrailerSystem.py module BUT it is under a planner_pkg folder. So a direct import like this import SimpleOneTrailerSystem will not work. You need to qualify it with the package folder like this: import planner_pkg.SimpleOneTrailerSystem There are 2 ways to get around this. Modify sys.path before import-ing SimpleOneTrailerSystem import sys sys.path.append("/path/to/install/planner_pkg/lib/python3.6/site-packages/planner_pkg") import SimpleOneTrailerSystem This approach adds the path to the planner_pkg install directory to the sys.path list so that you don't need to specify it in subsequent imports. Modify PYTHONPATH before running your ROS2 node $ colcon build $ source install/setup.bash $ export PYTHONPATH=$PYTHONPATH:/path/to/install/planner_pkg/lib/python3.6/site-packages/planner_pkg $ planner_node This approach is almost the same as the first one, but there is no code change involved (and no rebuilding involved), as you only need to modify the PYTHONPATH environment variable. A: I had the same problem and fixed it by modifying setup.py. Add: ('lib/' + package_name, [package_name+'/SimpleOneTrailerSystem.py']), to "data_files" list. setup.py: from setuptools import setup package_name = 'planner_pkg' setup( name=package_name, version='0.0.0', packages=[package_name], data_files=[ ('share/ament_index/resource_index/packages', ['resource/' + package_name]), ('share/' + package_name, ['package.xml']), ('lib/' + package_name, [package_name+'/SimpleOneTrailerSystem.py']), ], install_requires=['setuptools'], zip_safe=True, author='', author_email='', maintainer='', maintainer_email='', keywords=['ROS'], classifiers=[ 'Intended Audience :: Developers', 'License :: OSI Approved :: Apache Software License', 'Programming Language :: Python', 'Topic :: Software Development', ], description='Package containing examples of how to use the rclpy API.', license='Apache License, Version 2.0', tests_require=['pytest'], entry_points={ 'console_scripts': [ 'planner_node = planner_pkg.planner_node:main', ], }, )
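A third option, not covered by the answers above but worth a short sketch: because SimpleOneTrailerSystem.py lives inside the planner_pkg package itself, a package-relative import avoids touching sys.path or PYTHONPATH entirely (this assumes planner_node.py is always run via the installed entry point, not as a standalone script):

# planner_pkg/planner_node.py
from . import SimpleOneTrailerSystem
# or import a specific name from the module, e.g.:
# from .SimpleOneTrailerSystem import SomeSystemClass   # SomeSystemClass is illustrative

def main():
    # the module is then used exactly as before, e.g. SimpleOneTrailerSystem.<something>
    pass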
Import modules in package in ROS2
I have created a package for ROS2 and I have added a Python repository I downloaded. The problem I am having is that in the original repository the modules from the own repo were imported directly while in mine I have to import them adding the ROS2 package name before the module, even though I am importing a module from the same repo, like: import planner_pkg.SimpleOneTrailerSystem as SimpleOneTrailerSystem while I would like: import SimpleOneTrailerSystem My ROS2 project structure is like: ros2_ws src planner planner_pkg __init__.py SimpleOneTrailerSystem.py planner_node.py ... package.xml setup.py package.xml <?xml version="1.0"?> <package format="2"> <name>planner_pkg</name> <version>0.0.1</version> <description>This package contains algorithm for park planner</description> <maintainer email=""></maintainer> <license>Apache License 2.0</license> <exec_depend>rclpy</exec_depend> <exec_depend>std_msgs</exec_depend> <!-- These test dependencies are optional Their purpose is to make sure that the code passes the linters --> <test_depend>ament_copyright</test_depend> <test_depend>ament_flake8</test_depend> <test_depend>ament_pep257</test_depend> <test_depend>python3-pytest</test_depend> <export> <build_type>ament_python</build_type> </export> </package> setup.py: from setuptools import setup package_name = 'planner_pkg' setup( name=package_name, version='0.0.0', packages=[package_name], data_files=[ ('share/ament_index/resource_index/packages', ['resource/' + package_name]), ('share/' + package_name, ['package.xml']), ], install_requires=['setuptools'], zip_safe=True, author='', author_email='', maintainer='', maintainer_email='', keywords=['ROS'], classifiers=[ 'Intended Audience :: Developers', 'License :: OSI Approved :: Apache Software License', 'Programming Language :: Python', 'Topic :: Software Development', ], description='Package containing examples of how to use the rclpy API.', license='Apache License, Version 2.0', tests_require=['pytest'], entry_points={ 'console_scripts': [ 'planner_node = planner_pkg.planner_node:main', ], }, )
[ "First, according to the Module Search Path docs, when you do import something, Python looks for that something in the following places:\n\nFrom the built-in modules\nsys.path, which is a list containing:\n\nThe directory of the input script\nPYTHONPATH, which is an environment variable containing a list of directories\nThe installation-dependent default\n\n\n\nSecond, when you build your ROS2 Python package (by calling colcon build invoking the ament_python build type), your Python codes will be copied over to an install folder with a tree structure like this:\ninstall\n...\n├── planner_pkg\n│   ├── bin\n│   │   └── planner_node\n│   ├── lib\n│   │   └── python3.6\n│   │   └── site-packages\n│   │   ├── planner_pkg\n│   │   │   ├── __init__.py\n│   │   │   ├── planner_node.py\n│   │   │   └── SimpleOneTrailerSystem.py\n...\n\nNow, when you do import SimpleOneTrailerSystem, Python will first search for it from the built-in modules, which for sure it won't find there. Next on the list is from sys.path. You can add a print(sys.path) at the top of planner_node.py to see something like this list:\n['/path/to/install/planner_pkg/bin', \n '/path/to/install/planner_pkg/lib/python3.6/site-packages', \n '/opt/ros/eloquent/lib/python3.6/site-packages', \n '/usr/lib/python36.zip', \n '/usr/lib/python3.6', \n ...other Python3.6 installation-dependent dirs...\n]\n\nFirst on the sys.path list is the bin folder of the input script. There are only executables there, no SimpleOneTrailerSystem.py file/module, so that import will fail.\nThe next on the list would be the planner_pkg/lib/pythonX.X/site-packages, and as you can see from the tree structure above, there is a SimpleOneTrailerSystem.py module BUT it is under a planner_pkg folder. So a direct import like this\nimport SimpleOneTrailerSystem\n\nwill not work. 
You need to qualify it with the package folder like this:\nimport planner_pkg.SimpleOneTrailerSystem\n\nThere are 2 ways to get around this.\n\nModify sys.path before import-ing SimpleOneTrailerSystem\nimport sys\nsys.path.append(\"/path/to/install/planner_pkg/lib/python3.6/site-packages/planner_pkg\")\n\nimport SimpleOneTrailerSystem\n\nThis approach adds the path to the planner_pkg install directory to the sys.path list so that you don't need to specify it in subsequent imports.\n\nModify PYTHONPATH before running your ROS2 node\n$ colcon build\n$ source install/setup.bash\n$ export PYTHONPATH=$PYTHONPATH:/path/to/install/planner_pkg/lib/python3.6/site-packages/planner_pkg\n$ planner_node\n\nThis approach is almost the same as the first one, but there is no code change involved (and no rebuilding involved), as you only need to modify the PYTHONPATH environment variable.\n\n\n", "I had the same problem and fixed it by modifying setup.py.\nAdd:\n\n('lib/' + package_name, [package_name+'/SimpleOneTrailerSystem.py']),\n\nto \"data_files\" list.\nsetup.py:\nfrom setuptools import setup\n\npackage_name = 'planner_pkg'\n\nsetup(\n name=package_name,\n version='0.0.0',\n packages=[package_name],\n data_files=[\n ('share/ament_index/resource_index/packages',\n ['resource/' + package_name]),\n ('share/' + package_name, ['package.xml']),\n ('lib/' + package_name, [package_name+'/SimpleOneTrailerSystem.py']),\n ],\n install_requires=['setuptools'],\n zip_safe=True,\n author='',\n author_email='',\n maintainer='',\n maintainer_email='',\n keywords=['ROS'],\n classifiers=[\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Topic :: Software Development',\n ],\n description='Package containing examples of how to use the rclpy API.',\n license='Apache License, Version 2.0',\n tests_require=['pytest'],\n entry_points={\n 'console_scripts': [\n 'planner_node = planner_pkg.planner_node:main',\n ],\n },\n)\n\n" ]
[ 9, 0 ]
[ "For CMAKE packages with cpp and python code you can add the following lines to your CMakeLists.txt.\nThis will copy your python_pkg folder into your install environment to \"lib/python{version}/site-packages\" which is per default included in your python path.\n# for python code\nfind_package(ament_cmake_python REQUIRED)\nfind_package(rclpy REQUIRED)\n\ninstall(DIRECTORY\n ../path_to_python_pkg\n DESTINATION lib/python3.8/site-packages\n)\n\ninstall(PROGRAMS\n ${PROJECT_NAME}/python_node.py\n DESTINATION lib/${PROJECT_NAME}\n)\n\nament_python_install_package(${PROJECT_NAME})\n\nIf you are using e.g. Dashing or Eloquent it is \"python3.6\", for Foxy or newer it is \"python3.8\"\n" ]
[ -1 ]
[ "import", "python", "python_3.x", "ros2" ]
stackoverflow_0057426715_import_python_python_3.x_ros2.txt
Q: StyleGAN image generation doesn't work, TensorFlow doesn't see GPU After reinstalling Ubuntu 18.04, I cannot generate images anymore using a StyleGAN agent. The error message I get is InvalidArgumentError: Cannot assign a device for operation Gs_1/_Run/Gs/latents_in: {{node Gs_1/_Run/Gs/latents_in}}was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:XLA_CPU:0, /job:localhost/replica:0/task:0/device:XLA_GPU:0 ]. Make sure the device specification refers to a valid device. I have CUDA 10.1 and my driver version is 418.87. The yml file for the conda environment is available here. I installed tensorflow-gpu==1.14 using pip. Here yopu can find the jupyter notebook I'm using to generate the images. If I check the available resources as recommended using the commands from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) I get the answer [name: "/device:CPU:0" device_type: "CPU" memory_limit: 268435456 locality { } incarnation: 7185754797200004029 , name: "/device:XLA_GPU:0" device_type: "XLA_GPU" memory_limit: 17179869184 locality { } incarnation: 18095173531080603805 physical_device_desc: "device: XLA_GPU device" , name: "/device:XLA_CPU:0" device_type: "XLA_CPU" memory_limit: 17179869184 locality { } incarnation: 10470458648887235209 physical_device_desc: "device: XLA_CPU device" ] Any suggestion on how to fix the issue is very welcome! A: Might be because TensorFlow is looking for GPU:0 to assign a device for operation when the name of your graphical unit is actually XLA_GPU:0. What you could try to do is using soft placement when opening your session, so that TensorFlow uses any existing GPU (or any other supported devices if unavailable) when running: # using allow_soft_placement=True se = tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) A: for running stylegan in Colab: go to dnnlib/tflib/tfutil.py and change config_proto = tf.ConfigProto() to config_proto = tf.ConfigProto(allow_soft_placement=True). this worked for me.
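Putting the two answers together, a minimal sketch of the session setup in TF 1.x (everything here is illustrative rather than taken from the StyleGAN code): allow_soft_placement lets TensorFlow fall back to whatever device it can actually use, and the two tf.test calls confirm whether a CUDA-visible GPU exists at all. If only an XLA_GPU shows up, as in the device list above, it often means the tensorflow-gpu build does not match the installed CUDA version (tensorflow-gpu 1.14 was built against CUDA 10.0, not 10.1), so a matching CUDA or TF build may be needed as well.

import tensorflow as tf

# Fall back to a supported device instead of failing on the hard-coded /device:GPU:0
config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.allow_growth = True      # optional: don't grab all GPU memory up front
sess = tf.Session(config=config)

# Sanity checks: is a plain (non-XLA) GPU device visible to this TF build?
print(tf.test.is_gpu_available())           # True only if a usable GPU is found
print(tf.test.gpu_device_name())            # e.g. '/device:GPU:0', or '' if none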
StyleGAN image generation doesn't work, TensorFlow doesn't see GPU
After reinstalling Ubuntu 18.04, I cannot generate images anymore using a StyleGAN agent. The error message I get is InvalidArgumentError: Cannot assign a device for operation Gs_1/_Run/Gs/latents_in: {{node Gs_1/_Run/Gs/latents_in}}was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:XLA_CPU:0, /job:localhost/replica:0/task:0/device:XLA_GPU:0 ]. Make sure the device specification refers to a valid device. I have CUDA 10.1 and my driver version is 418.87. The yml file for the conda environment is available here. I installed tensorflow-gpu==1.14 using pip. Here yopu can find the jupyter notebook I'm using to generate the images. If I check the available resources as recommended using the commands from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) I get the answer [name: "/device:CPU:0" device_type: "CPU" memory_limit: 268435456 locality { } incarnation: 7185754797200004029 , name: "/device:XLA_GPU:0" device_type: "XLA_GPU" memory_limit: 17179869184 locality { } incarnation: 18095173531080603805 physical_device_desc: "device: XLA_GPU device" , name: "/device:XLA_CPU:0" device_type: "XLA_CPU" memory_limit: 17179869184 locality { } incarnation: 10470458648887235209 physical_device_desc: "device: XLA_CPU device" ] Any suggestion on how to fix the issue is very welcome!
[ "Might be because TensorFlow is looking for GPU:0 to assign a device for operation when the name of your graphical unit is actually XLA_GPU:0.\nWhat you could try to do is using soft placement when opening your session, so that TensorFlow uses any existing GPU (or any other supported devices if unavailable) when running:\n# using allow_soft_placement=True\nse = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))\n\n", "for running stylegan in Colab:\ngo to dnnlib/tflib/tfutil.py and change config_proto = tf.ConfigProto() to config_proto = tf.ConfigProto(allow_soft_placement=True).\nthis worked for me.\n" ]
[ 1, 0 ]
[]
[]
[ "gpu", "machine_learning", "python", "tensorflow" ]
stackoverflow_0059199502_gpu_machine_learning_python_tensorflow.txt
Q: Typechecking with conditional parameters I'm trying to use typing with a function that has conditional parameters, that works like this: from typing import Optional, Union class Foo: some_param_to_check: str = 'foo_name' one_param_exclusive_to_foo: int class Bar: some_param_to_check: str = 'bar_name' another_param_exclusive_to_bar: str def some_process_that_returns_a_bool( f_or_b: Union[Foo, Bar], a_name: str, ) -> bool: return f_or_b.some_param_to_check == a_name def do_something_with_foo_or_bar( foo: Optional[Foo], bar: Optional[Bar], some_name: str, ) -> bool: if not foo and not bar: raise ValueError('You need to specify either "foo" or "bar".') # I added this explicit type hint after the first error, hoping it would solve the issue: foo_or_bar: Union[Foo, Bar] # later becomes Union[Foo, Bar, None] foo_or_bar = foo if foo else bar return some_process_that_returns_a_bool(foo_or_bar, some_name) foo_obj = Foo() bar_obj = Bar() # This will work: do_something_with_foo_or_bar(foo_obj, bar_obj, 'test_string') # This will also work: do_something_with_foo_or_bar(foo_obj, None, 'test_string') # This too: do_something_with_foo_or_bar(None, bar_obj, 'test_string') # But this should not: do_something_with_foo_or_bar(None, None, 'test_string') To add more context: The function works by expecting foo, or, if not available, bar. If foo is not None, bar will essentially be ignored. When checking with mypy, it complains about: Incompatible types in assignment (expression has type "Union[Foo, Bar, None]", variable has type "Union[Foo, Bar]" (I'm guessing because of the Optional in the parameter type hints.) If I then add None as the type hint for foo_or_bar then the error becomes: error: Item "None" of "Union[Foo, Bar, None]" has no attribute "some_param_to_check" How would I fix this so that mypy stops complaining (while still keeping the type hints)? A: I'm pretty sure this is just a mypy issue. Its type inference system is not clever enough to recognize that your if not foo and not bar block that raises an exception excludes the double None case later on (since it can't conclusively infer anything about either type in isolation). There doesn't seem to be a good way to directly fix the type hinting, but you could instead change the logic a bit to separate the cases more clearly, and mypy should understand it better: def do_something_with_foo_or_bar( foo: Optional[Foo], bar: Optional[Bar], some_name: str, ) -> bool: foo_or_bar: Union[Foo, Bar] if foo: foo_or_bar = foo elif bar: foo_or_bar = bar else: raise ValueError('You need to specify either "foo" or "bar".') return some_process_that_returns_a_bool(foo_or_bar, some_name) You could also get rid of the foo_or_bar variable all together and just put two different function calls in the if and elif blocks, using foo or bar as appropriate. A: You're getting the error because in some_process_that_returns_a_bool you are trying to access some_param_to_check on a value that can be None and None does not have that property. If some_process_that_returns_a_bool should accept None as a possible value for f_or_b , then you should check for None before trying to access any properties. def some_process_that_returns_a_bool( f_or_b: Union[Foo, Bar, None], a_name: str, ) -> bool: if f_or_b is None: # Handle None... return False else: return f_or_b.some_param_to_check == a_name This way you will only try to access some_param_to_check when f_or_b is not None and else you will return False when f_or_b is None. 
But please handle None in whatever way makes sense for your application. A: Adding my $0.02 to @Blckknght answer: This solves the problem with inner type checking (no errors inside function body, if you fix a missing annotation), but doesn't help external callers. To achieve typing "one or another, but never none of them", you can use overloads: from typing import Optional, Union, overload class Foo: some_param_to_check: str = 'foo_name' one_param_exclusive_to_foo: int class Bar: some_param_to_check: str = 'bar_name' another_param_exclusive_to_bar: str def some_process_that_returns_a_bool( f_or_b: Union[Foo, Bar], a_name: str, ) -> bool: return f_or_b.some_param_to_check == a_name @overload def do_something_with_foo_or_bar( foo: Foo, bar: Optional[Bar], some_name: str, ) -> bool: ... @overload def do_something_with_foo_or_bar( foo: Optional[Foo], bar: Bar, some_name: str, ) -> bool: ... def do_something_with_foo_or_bar( foo: Optional[Foo], bar: Optional[Bar], some_name: str, ) -> bool: foo_or_bar: Union[Foo, Bar] # This annotation was missing if foo: foo_or_bar = foo elif bar: foo_or_bar = bar else: raise ValueError('You need to specify either "foo" or "bar".') return some_process_that_returns_a_bool(foo_or_bar, some_name) do_something_with_foo_or_bar(Foo(), Bar(), '') do_something_with_foo_or_bar(None, Bar(), '') do_something_with_foo_or_bar(Foo(), None, '') do_something_with_foo_or_bar(None, None, '') # Line 51 And voila: main.py:51: error: No overload variant of "do_something_with_foo_or_bar" matches argument types "None", "None", "str" [call-overload] main.py:51: note: Possible overload variants: main.py:51: note: def do_something_with_foo_or_bar(foo: Foo, bar: Optional[Bar], some_name: str) -> bool main.py:51: note: def do_something_with_foo_or_bar(foo: Optional[Foo], bar: Bar, some_name: str) -> bool Here's a playground link.
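Another way to sidestep the problem entirely, sketched here as an alternative that is not part of the answers above: accept a single Union[Foo, Bar] parameter, so the "neither was given" case cannot even be expressed and no overloads or runtime ValueError are needed.

def do_something(foo_or_bar: Union[Foo, Bar], some_name: str) -> bool:
    return some_process_that_returns_a_bool(foo_or_bar, some_name)

do_something(Foo(), 'test_string')        # ok
do_something(Bar(), 'test_string')        # ok
# do_something(None, 'test_string')       # mypy error: None is neither Foo nor Bar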
Typechecking with conditional parameters
I'm trying to use typing with a function that has conditional parameters, that works like this: from typing import Optional, Union class Foo: some_param_to_check: str = 'foo_name' one_param_exclusive_to_foo: int class Bar: some_param_to_check: str = 'bar_name' another_param_exclusive_to_bar: str def some_process_that_returns_a_bool( f_or_b: Union[Foo, Bar], a_name: str, ) -> bool: return f_or_b.some_param_to_check == a_name def do_something_with_foo_or_bar( foo: Optional[Foo], bar: Optional[Bar], some_name: str, ) -> bool: if not foo and not bar: raise ValueError('You need to specify either "foo" or "bar".') # I added this explicit type hint after the first error, hoping it would solve the issue: foo_or_bar: Union[Foo, Bar] # later becomes Union[Foo, Bar, None] foo_or_bar = foo if foo else bar return some_process_that_returns_a_bool(foo_or_bar, some_name) foo_obj = Foo() bar_obj = Bar() # This will work: do_something_with_foo_or_bar(foo_obj, bar_obj, 'test_string') # This will also work: do_something_with_foo_or_bar(foo_obj, None, 'test_string') # This too: do_something_with_foo_or_bar(None, bar_obj, 'test_string') # But this should not: do_something_with_foo_or_bar(None, None, 'test_string') To add more context: The function works by expecting foo, or, if not available, bar. If foo is not None, bar will essentially be ignored. When checking with mypy, it complains about: Incompatible types in assignment (expression has type "Union[Foo, Bar, None]", variable has type "Union[Foo, Bar]" (I'm guessing because of the Optional in the parameter type hints.) If I then add None as the type hint for foo_or_bar then the error becomes: error: Item "None" of "Union[Foo, Bar, None]" has no attribute "some_param_to_check" How would I fix this so that mypy stops complaining (while still keeping the type hints)?
[ "I'm pretty sure this is just a mypy issue. Its type inference system is not clever enough to recognize that your if not foo and not bar block that raises an exception excludes the double None case later on (since it can't conclusively infer anything about either type in isolation). There doesn't seem to be a good way to directly fix the type hinting, but you could instead change the logic a bit to separate the cases more clearly, and mypy should understand it better:\ndef do_something_with_foo_or_bar(\n foo: Optional[Foo],\n bar: Optional[Bar],\n some_name: str,\n) -> bool:\n foo_or_bar: Union[Foo, Bar]\n if foo:\n foo_or_bar = foo\n elif bar:\n foo_or_bar = bar\n else:\n raise ValueError('You need to specify either \"foo\" or \"bar\".')\n\n return some_process_that_returns_a_bool(foo_or_bar, some_name)\n\nYou could also get rid of the foo_or_bar variable all together and just put two different function calls in the if and elif blocks, using foo or bar as appropriate.\n", "You're getting the error because in some_process_that_returns_a_bool you are trying to access some_param_to_check on a value that can be None and None does not have that property.\nIf some_process_that_returns_a_bool should accept None as a possible value for f_or_b , then you should check for None before trying to access any properties.\ndef some_process_that_returns_a_bool(\n f_or_b: Union[Foo, Bar, None],\n a_name: str,\n) -> bool:\n if f_or_b is None:\n # Handle None...\n return False\n else:\n return f_or_b.some_param_to_check == a_name\n\nThis way you will only try to access some_param_to_check when f_or_b is not None and else you will return False when f_or_b is None.\nBut please handle None in whatever way makes sense for your application.\n", "Adding my $0.02 to @Blckknght answer:\nThis solves the problem with inner type checking (no errors inside function body, if you fix a missing annotation), but doesn't help external callers.\nTo achieve typing \"one or another, but never none of them\", you can use overloads:\nfrom typing import Optional, Union, overload\n\n\nclass Foo:\n some_param_to_check: str = 'foo_name'\n one_param_exclusive_to_foo: int\n\n\nclass Bar:\n some_param_to_check: str = 'bar_name'\n another_param_exclusive_to_bar: str\n\n\ndef some_process_that_returns_a_bool(\n f_or_b: Union[Foo, Bar],\n a_name: str,\n) -> bool:\n return f_or_b.some_param_to_check == a_name\n\n@overload\ndef do_something_with_foo_or_bar(\n foo: Foo,\n bar: Optional[Bar],\n some_name: str,\n) -> bool: ...\n@overload\ndef do_something_with_foo_or_bar(\n foo: Optional[Foo],\n bar: Bar,\n some_name: str,\n) -> bool: ...\ndef do_something_with_foo_or_bar(\n foo: Optional[Foo],\n bar: Optional[Bar],\n some_name: str,\n) -> bool:\n foo_or_bar: Union[Foo, Bar] # This annotation was missing\n if foo:\n foo_or_bar = foo\n elif bar:\n foo_or_bar = bar\n else:\n raise ValueError('You need to specify either \"foo\" or \"bar\".')\n\n return some_process_that_returns_a_bool(foo_or_bar, some_name)\n \n\ndo_something_with_foo_or_bar(Foo(), Bar(), '')\ndo_something_with_foo_or_bar(None, Bar(), '')\ndo_something_with_foo_or_bar(Foo(), None, '')\ndo_something_with_foo_or_bar(None, None, '') # Line 51\n\nAnd voila:\nmain.py:51: error: No overload variant of \"do_something_with_foo_or_bar\" matches argument types \"None\", \"None\", \"str\" [call-overload]\nmain.py:51: note: Possible overload variants:\nmain.py:51: note: def do_something_with_foo_or_bar(foo: Foo, bar: Optional[Bar], some_name: str) -> bool\nmain.py:51: note: def 
do_something_with_foo_or_bar(foo: Optional[Foo], bar: Bar, some_name: str) -> bool\n\nHere's a playground link.\n" ]
[ 3, 1, 1 ]
[]
[]
[ "mypy", "python", "python_typing", "type_hinting" ]
stackoverflow_0074596129_mypy_python_python_typing_type_hinting.txt
Q: write the python program for this? Write a Python program that takes the user's name as input and displays a welcome message. Expected behaviour: Enter your name: Nimal Welcome Nimal The Python code for taking the input and displaying the output is already provided. Provide the answer for this code. A: name = input("Enter your name: ") print("Welcome", name)
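A slightly more defensive variant, purely as an illustration, strips stray whitespace from the input and uses an f-string:

name = input("Enter your name: ").strip()
print(f"Welcome {name}")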
write the python program for this?
Write a Python program that takes the user's name as input and displays a welcome message. Expected behaviour: Enter your name: Nimal Welcome Nimal The Python code for taking the input and displaying the output is already provided. Provide the answer for this code.
[ "name = input(\"Enter your name: \")\nprint(\"Welcome\", name)\n\n" ]
[ -2 ]
[]
[]
[ "python" ]
stackoverflow_0074599303_python.txt
Q: i want to make a loop, for every x actions do y I want to make a loop that runs x times, and when it has run x times, does y and then starts again to run x times like the first time. This is what I tried, but it ran (x) only 1 time and then did (y); I need it to run (x) 20 times, do (y) only after 20 times, and then start again with (x): for user in usernames: print('Sending Commnet!...') try: time.sleep(1) driver.find_element( By.XPATH, comment_section).send_keys(f'@{user} ') time.sleep(1) mentiune = mentiune + 1 if f'@{user}' in driver.page_source: time.sleep(0.5) print( f'{Fore.GREEN} {user} --> Added to comment Successfuly! -- {datetime.now()} -- {Fore.RESET} \n') x = random.randint(0.5, 1) print(f'{str(x)} secound sleeped!... \n') time.sleep(x) else: print(f'{username} is Reaport') continue except: pass driver.find_element(By.XPATH, send_button).click() So, I need to get "user" from "usernames" line by line and type "@" + user + " " into the comment 20 times, then click "send_button", and then type the next 20 users, not starting from the beginning of the list again. I inserted the full code here A: Try if this works: def generator(usernames): for user in usernames: yield user def my_func(usernames): generator_obj = generator(usernames) try: while True: for _ in range(20): user = next(generator_obj) # do something with user # do y except StopIteration: # do y again for last few items return None
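A simpler sketch of the same batching idea (the locator names, send_keys call and sleeps are carried over from the question; the counting logic is the illustrative part): a running counter with a modulo check clicks the send button after every 20 users and once more for a final partial batch, and because it is a single pass over usernames it naturally continues where the previous batch stopped.

batch_size = 20
typed = 0

for user in usernames:
    driver.find_element(By.XPATH, comment_section).send_keys(f'@{user} ')
    typed += 1
    if typed % batch_size == 0:                               # 20 users typed ...
        driver.find_element(By.XPATH, send_button).click()    # ... so do y
        time.sleep(1)
        # depending on the page, the comment box may need to be located again here

if typed % batch_size != 0:                                   # leftover users at the end
    driver.find_element(By.XPATH, send_button).click()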
i want to make a loop, for every x actions do y
I want to make a loop that runs x times, and when it has run x times, does y and then starts again to run x times like the first time. This is what I tried, but it ran (x) only 1 time and then did (y); I need it to run (x) 20 times, do (y) only after 20 times, and then start again with (x): for user in usernames: print('Sending Commnet!...') try: time.sleep(1) driver.find_element( By.XPATH, comment_section).send_keys(f'@{user} ') time.sleep(1) mentiune = mentiune + 1 if f'@{user}' in driver.page_source: time.sleep(0.5) print( f'{Fore.GREEN} {user} --> Added to comment Successfuly! -- {datetime.now()} -- {Fore.RESET} \n') x = random.randint(0.5, 1) print(f'{str(x)} secound sleeped!... \n') time.sleep(x) else: print(f'{username} is Reaport') continue except: pass driver.find_element(By.XPATH, send_button).click() So, I need to get "user" from "usernames" line by line and type "@" + user + " " into the comment 20 times, then click "send_button", and then type the next 20 users, not starting from the beginning of the list again. I inserted the full code here
[ "Try if this works:\ndef generator(usernames):\n for user in usernames:\n yield user\n\ndef my_func(usernames):\n generator_obj = generator(usernames)\n try:\n while True:\n for _ in range(20):\n user = next(generator_obj)\n # do something with user\n # do y\n except StopIteration:\n # do y again for last few items\n return None\n\n" ]
[ 0 ]
[]
[]
[ "loops", "python", "python_3.x", "while_loop" ]
stackoverflow_0074598809_loops_python_python_3.x_while_loop.txt
Q: How to create reports with Python SDK Api I am trying to create reports with Python on dailymotion but I have error,According to my received error, renponse is empty. I don't get it. I guess, My user coudln't login to dailymotion. Please check error. {'data': {'askPartnerReportFile': None}, 'errors': [{'message': 'Not authorized to access `askPartnerReportFile` field.', 'path': ['askPartnerReportFile'], 'locations': [{'line': 3, 'column': 9}], **'type': 'not_authorized**'}]} Traceback (most recent call last): File "get_reports.py", line 143, in <module> product='CONTENT', File "get_reports.py", line 65, in create_report_request return response.json()['data']['askPartnerReportFile']['reportFile']['reportToken']; TypeError: 'NoneType' object is not subscriptable Here is my code; ` def get_access_token(app_key, app_secret, username, password): ''' Authenticate on the API in order to get an access token ''' response = requests.post('https://graphql.api.dailymotion.com/oauth/token', data={ 'client_id': app_key, 'client_secret': app_secret, 'username': username, 'password': password, 'grant_type': 'password', 'version': '2' }) if response.status_code != 200 or not 'access_token' in response.json(): raise Exception('Invalid authentication response') return response.json()['access_token'] def create_report_request(access_token, dimensions, metrics, start_date, end_date, product, filters = None): ''' Creating a report request ''' reportRequest = """ mutation ($input: AskPartnerReportFileInput!) { askPartnerReportFile(input: $input) { reportFile { reportToken } } } """ response = requests.post( 'https://graphql.api.dailymotion.com', json={ 'query': reportRequest, 'variables': { 'input': { 'metrics': metrics, 'dimensions': dimensions, 'filters': filters, 'startDate': start_date, 'endDate': end_date, 'product': product, } } }, headers={'Authorization': 'Bearer ' + access_token} ) print(response.status_code) if response.status_code != 200 or not 'data' in response.json(): raise Exception('Invalid response') print(response.json()) return response.json()['data']['askPartnerReportFile']['reportFile']['reportToken']; def check_report_status(access_token, report_token): ''' Checking the status of the reporting request ''' report_request_status_check = """ query PartnerGetReportFile ($reportToken: String!) 
{ partner { reportFile(reportToken: $reportToken) { status downloadLinks { edges { node { link } } } } } } """ response = requests.post( 'https://graphql.api.dailymotion.com', json={ 'query': report_request_status_check, 'variables': { 'reportToken': report_token } }, headers={'Authorization': 'Bearer ' + access_token} ) if response.status_code != 200 or not 'data' in response.json(): raise Exception('Invalid response') status = response.json()['data']['partner']['reportFile']['status']; if (status == 'FINISHED'): download_links = [] for url in map(lambda edge: edge['node']['link'], response.json()['data']['partner']['reportFile']['downloadLinks']['edges']): download_links.append(url) return download_links else: return None def download_report(download_links, base_path=None): ''' Downloading the report files ''' cpt = 1 if not base_path: base_path = os.getcwd() for url in download_links: r = requests.get(url) filename = 'report_{}.csv'.format(cpt) file_path = os.path.join(base_path, filename) open(file_path, 'wb').write(r.content) print('Report file {} downloaded: {}'.format(cpt, file_path)) cpt += 1 print('Generating access token...') access_token = get_access_token( app_key='******', app_secret='*******', username='*****', password='*****' ) print('Creating report request...') report_token = create_report_request( access_token=access_token, dimensions=('DAY', 'VIDEO_TITLE'), metrics=('VIEWS'), filters={'videoOwnerChannelSlug': 'B******'}, start_date='2022-11-23', end_date='2022-11-24', product='CONTENT', ) download_links = None while not download_links: print('Checking report status...') # Checks every 15secs the report status time.sleep(15) download_links = check_report_status( access_token=access_token, report_token=report_token ) download_report(download_links=download_links) ` I tried to get data dailymotion api. Thanks A: This feature requires a specific API access, which is missing on your API Key, that's why you get the message Not authorized to access askPartnerReportFile field. As it's a feature restricted to verified-partners, you should reach out to your content manager to ask him this kind of access, or you can try to contact our support
How to create reports with Python SDK Api
I am trying to create reports with Python on dailymotion but I have error,According to my received error, renponse is empty. I don't get it. I guess, My user coudln't login to dailymotion. Please check error. {'data': {'askPartnerReportFile': None}, 'errors': [{'message': 'Not authorized to access `askPartnerReportFile` field.', 'path': ['askPartnerReportFile'], 'locations': [{'line': 3, 'column': 9}], **'type': 'not_authorized**'}]} Traceback (most recent call last): File "get_reports.py", line 143, in <module> product='CONTENT', File "get_reports.py", line 65, in create_report_request return response.json()['data']['askPartnerReportFile']['reportFile']['reportToken']; TypeError: 'NoneType' object is not subscriptable Here is my code; ` def get_access_token(app_key, app_secret, username, password): ''' Authenticate on the API in order to get an access token ''' response = requests.post('https://graphql.api.dailymotion.com/oauth/token', data={ 'client_id': app_key, 'client_secret': app_secret, 'username': username, 'password': password, 'grant_type': 'password', 'version': '2' }) if response.status_code != 200 or not 'access_token' in response.json(): raise Exception('Invalid authentication response') return response.json()['access_token'] def create_report_request(access_token, dimensions, metrics, start_date, end_date, product, filters = None): ''' Creating a report request ''' reportRequest = """ mutation ($input: AskPartnerReportFileInput!) { askPartnerReportFile(input: $input) { reportFile { reportToken } } } """ response = requests.post( 'https://graphql.api.dailymotion.com', json={ 'query': reportRequest, 'variables': { 'input': { 'metrics': metrics, 'dimensions': dimensions, 'filters': filters, 'startDate': start_date, 'endDate': end_date, 'product': product, } } }, headers={'Authorization': 'Bearer ' + access_token} ) print(response.status_code) if response.status_code != 200 or not 'data' in response.json(): raise Exception('Invalid response') print(response.json()) return response.json()['data']['askPartnerReportFile']['reportFile']['reportToken']; def check_report_status(access_token, report_token): ''' Checking the status of the reporting request ''' report_request_status_check = """ query PartnerGetReportFile ($reportToken: String!) 
{ partner { reportFile(reportToken: $reportToken) { status downloadLinks { edges { node { link } } } } } } """ response = requests.post( 'https://graphql.api.dailymotion.com', json={ 'query': report_request_status_check, 'variables': { 'reportToken': report_token } }, headers={'Authorization': 'Bearer ' + access_token} ) if response.status_code != 200 or not 'data' in response.json(): raise Exception('Invalid response') status = response.json()['data']['partner']['reportFile']['status']; if (status == 'FINISHED'): download_links = [] for url in map(lambda edge: edge['node']['link'], response.json()['data']['partner']['reportFile']['downloadLinks']['edges']): download_links.append(url) return download_links else: return None def download_report(download_links, base_path=None): ''' Downloading the report files ''' cpt = 1 if not base_path: base_path = os.getcwd() for url in download_links: r = requests.get(url) filename = 'report_{}.csv'.format(cpt) file_path = os.path.join(base_path, filename) open(file_path, 'wb').write(r.content) print('Report file {} downloaded: {}'.format(cpt, file_path)) cpt += 1 print('Generating access token...') access_token = get_access_token( app_key='******', app_secret='*******', username='*****', password='*****' ) print('Creating report request...') report_token = create_report_request( access_token=access_token, dimensions=('DAY', 'VIDEO_TITLE'), metrics=('VIEWS'), filters={'videoOwnerChannelSlug': 'B******'}, start_date='2022-11-23', end_date='2022-11-24', product='CONTENT', ) download_links = None while not download_links: print('Checking report status...') # Checks every 15secs the report status time.sleep(15) download_links = check_report_status( access_token=access_token, report_token=report_token ) download_report(download_links=download_links) ` I tried to get data dailymotion api. Thanks
[ "This feature requires a specific API access, which is missing on your API Key, that's why you get the message Not authorized to access askPartnerReportFile field.\nAs it's a feature restricted to verified-partners, you should reach out to your content manager to ask him this kind of access, or you can try to contact our support\n" ]
[ 0 ]
[]
[]
[ "dailymotion_api", "python", "report" ]
stackoverflow_0074571992_dailymotion_api_python_report.txt
Q: How to set a style for a particular cell in a multiindex dataframe I'm iterating over a multi-index dataframe, and I trying to set the color for particular cells to the style in the two variables points_color and stat_color. How to apply the style to the cells? for metric, new_df in df3.groupby(level=0): idx = pd.IndexSlice row = new_df.loc[(metric),:] for geo in ['US', 'UK']: points_color, stat_color = color(new_df.loc[metric,idx[:,:,['difference']]][geo]['']['difference'], new_df.loc[metric,idx[:,:,['stat']]][geo]['']['stat']) ##### SEE HERE ####### df3.loc[metric,idx[:,:,['points']]][geo]['GM']['points'] = # apply points_color style to this value df3.loc[metric,idx[:,:,['points']]][geo]['GM']['points'] df3.loc[metric,idx[:,:,['stat']]][geo]['']['stat'] = # apply stat_color style to this value df3.loc[metric,idx[:,:,['stat']]][geo]['']['stat'] ########### df3 Setup for the dataframe: dic = {'US':{'Quality':{'points':"-2 n", 'difference':'equal', 'stat': 'same'}, 'Prices':{'points':"-7 n", 'difference':'negative', 'stat': 'below'}, 'Satisfaction':{'points':"3 n", 'difference':'positive', 'stat': 'below'}}, 'UK': {'Quality':{'points':"3 n", 'difference':'equal', 'stat': 'above'}, 'Prices':{'points':"-13 n", 'difference':'negative', 'stat': 'below'}, 'Satisfaction':{'points':"2 n", 'difference':'negative', 'stat': 'same'}}} d1 = defaultdict(dict) for k, v in dic.items(): for k1, v1 in v.items(): for k2, v2 in v1.items(): d1[(k, k2)].update({k1: v2}) df = pd.DataFrame(d1) df.columns = df.columns.rename("Skateboard", level=0) df.columns = df.columns.rename("Metric", level=1) df3 = pd.concat([df], keys=[''], names=['Q3'], axis=1).swaplevel(0, 1, axis=1) df3.columns = df3.columns.map(lambda x: (x[0], 'GM', x[2]) if x[2] == 'points' else x) df3.insert(loc=0, column=('','', 'Mode'), value="Website") df3 Setup for the color function: It takes two cell values difference and stat and determines if the style for cells points and stats is in the dataframe. def color(difference, stat): points_color, stat_color = '', '' if stat in ('below', 'above'): stat_color = 'background-color: #f2dcdb; color: red' if difference == "negative": points_color = 'color: red' elif difference == "positive": points_color = 'color: green' return points_color, stat_color A: You can select geo columns by list, compare stat and difference and set values in slices: def color(x): idx = pd.IndexSlice geo = ['US', 'UK'] m1 = x.loc[:, idx[geo, :, 'stat']].isin(('below', 'above')) diff = x.loc[:, idx[geo, :, 'difference']] df1 = pd.DataFrame('', index=x.index, columns=x.columns) diff = (diff.rename(columns={'difference':'points'}, level=2) .rename(columns={'':'GM'}, level=1)) df1.loc[:, idx[geo, 'GM', 'points']] = np.select([diff.eq('negative'), diff.eq('positive')], ['color: red','color: green'], '') df1.loc[:, idx[geo, :, 'stat']] = np.where(m1, 'background-color: #f2dcdb; color: red', '') return df1 df.style.apply(color, axis=None)
How to set a style for a particular cell in a multiindex dataframe
I'm iterating over a multi-index dataframe, and I trying to set the color for particular cells to the style in the two variables points_color and stat_color. How to apply the style to the cells? for metric, new_df in df3.groupby(level=0): idx = pd.IndexSlice row = new_df.loc[(metric),:] for geo in ['US', 'UK']: points_color, stat_color = color(new_df.loc[metric,idx[:,:,['difference']]][geo]['']['difference'], new_df.loc[metric,idx[:,:,['stat']]][geo]['']['stat']) ##### SEE HERE ####### df3.loc[metric,idx[:,:,['points']]][geo]['GM']['points'] = # apply points_color style to this value df3.loc[metric,idx[:,:,['points']]][geo]['GM']['points'] df3.loc[metric,idx[:,:,['stat']]][geo]['']['stat'] = # apply stat_color style to this value df3.loc[metric,idx[:,:,['stat']]][geo]['']['stat'] ########### df3 Setup for the dataframe: dic = {'US':{'Quality':{'points':"-2 n", 'difference':'equal', 'stat': 'same'}, 'Prices':{'points':"-7 n", 'difference':'negative', 'stat': 'below'}, 'Satisfaction':{'points':"3 n", 'difference':'positive', 'stat': 'below'}}, 'UK': {'Quality':{'points':"3 n", 'difference':'equal', 'stat': 'above'}, 'Prices':{'points':"-13 n", 'difference':'negative', 'stat': 'below'}, 'Satisfaction':{'points':"2 n", 'difference':'negative', 'stat': 'same'}}} d1 = defaultdict(dict) for k, v in dic.items(): for k1, v1 in v.items(): for k2, v2 in v1.items(): d1[(k, k2)].update({k1: v2}) df = pd.DataFrame(d1) df.columns = df.columns.rename("Skateboard", level=0) df.columns = df.columns.rename("Metric", level=1) df3 = pd.concat([df], keys=[''], names=['Q3'], axis=1).swaplevel(0, 1, axis=1) df3.columns = df3.columns.map(lambda x: (x[0], 'GM', x[2]) if x[2] == 'points' else x) df3.insert(loc=0, column=('','', 'Mode'), value="Website") df3 Setup for the color function: It takes two cell values difference and stat and determines if the style for cells points and stats is in the dataframe. def color(difference, stat): points_color, stat_color = '', '' if stat in ('below', 'above'): stat_color = 'background-color: #f2dcdb; color: red' if difference == "negative": points_color = 'color: red' elif difference == "positive": points_color = 'color: green' return points_color, stat_color
[ "You can select geo columns by list, compare stat and difference and set values in slices:\ndef color(x):\n \n idx = pd.IndexSlice\n geo = ['US', 'UK']\n \n m1 = x.loc[:, idx[geo, :, 'stat']].isin(('below', 'above'))\n diff = x.loc[:, idx[geo, :, 'difference']]\n \n df1 = pd.DataFrame('', index=x.index, columns=x.columns)\n \n diff = (diff.rename(columns={'difference':'points'}, level=2)\n .rename(columns={'':'GM'}, level=1))\n df1.loc[:, idx[geo, 'GM', 'points']] = np.select([diff.eq('negative'), \n diff.eq('positive')], \n ['color: red','color: green'], '')\n df1.loc[:, idx[geo, :, 'stat']] = np.where(m1, \n 'background-color: #f2dcdb; color: red', '')\n \n return df1\n\ndf.style.apply(color, axis=None)\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "multi_index", "pandas", "python" ]
stackoverflow_0074599211_dataframe_multi_index_pandas_python.txt
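A short note on the styling answer in the record above: with axis=None, Styler.apply passes the whole DataFrame to the function and expects back a DataFrame of CSS strings with the same shape. The sketch below shows that mechanism on a small single-level frame; the column names and colour rules are invented for illustration and are not the question's multi-index layout.

import pandas as pd

df = pd.DataFrame({"stat": ["same", "below", "above"], "points": [3, -7, 2]})

def highlight(frame):
    # Start from an all-empty style frame of the same shape, then fill in CSS per cell.
    styles = pd.DataFrame("", index=frame.index, columns=frame.columns)
    styles.loc[frame["stat"].isin(["below", "above"]), "stat"] = "background-color: #f2dcdb; color: red"
    styles.loc[frame["points"].lt(0), "points"] = "color: red"
    styles.loc[frame["points"].gt(0), "points"] = "color: green"
    return styles

styled = df.style.apply(highlight, axis=None)  # view in a notebook, or call styled.to_html()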
Q: How to create this kind of crosstab by Python? The data looks like:

bad  score1  score2
1    80-90   70-80
0    90-100  80-90
1    70-80   90-100
1    70-80   70-80
0    70-80   70-80
1    80-90   70-80

Each cell of the result should be the total number of rows where the bad flag is 1 for the corresponding ranges of score1 and score2. For example:

         70-80  80-90  90-100  (score2)
70-80      1      0      1
80-90      2      0      0
90-100     0      1      0
(score1)

I know pd.crosstab has a similar function, but on its own it cannot solve my issue: pd.crosstab(df.score1, df.score2)

A: Use values and aggfunc, combined with fillna:

out = (pd.crosstab(df.score1, df.score2, values=df['bad'], aggfunc='sum')
       .fillna(0, downcast='infer')
)

Output:

score2  70-80  80-90  90-100
score1
70-80       1      0       1
80-90       2      0       0
90-100      0      0       0
How to create this kind of crosstab by Python?
The data looks like:

bad  score1  score2
1    80-90   70-80
0    90-100  80-90
1    70-80   90-100
1    70-80   70-80
0    70-80   70-80
1    80-90   70-80

Each cell of the result should be the total number of rows where the bad flag is 1 for the corresponding ranges of score1 and score2. For example:

         70-80  80-90  90-100  (score2)
70-80      1      0      1
80-90      2      0      0
90-100     0      1      0
(score1)

I know pd.crosstab has a similar function, but on its own it cannot solve my issue: pd.crosstab(df.score1, df.score2)
[ "Use values andaggfunc, combined with fillna:\nout = (pd.crosstab(df.score1, df.score2, values=df['bad'], aggfunc='sum')\n .fillna(0, downcast='infer')\n)\n\nOutput:\nscore2 70-80 80-90 90-100\nscore1 \n70-80 1 0 1\n80-90 2 0 0\n90-100 0 0 0\n\n" ]
[ 0 ]
[]
[]
[ "numpy", "pandas", "python", "scikit_learn" ]
stackoverflow_0074599436_numpy_pandas_python_scikit_learn.txt
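For completeness, here is a self-contained version of the crosstab approach from the record above, with the example frame rebuilt from the sample data in the question; .fillna(0).astype(int) is used as one way to get integer output without relying on the downcast argument.

import pandas as pd

df = pd.DataFrame({
    "bad":    [1, 0, 1, 1, 0, 1],
    "score1": ["80-90", "90-100", "70-80", "70-80", "70-80", "80-90"],
    "score2": ["70-80", "80-90", "90-100", "70-80", "70-80", "70-80"],
})

# Sum the `bad` flag within each (score1, score2) cell; absent combinations become 0.
out = (pd.crosstab(df.score1, df.score2, values=df["bad"], aggfunc="sum")
       .fillna(0)
       .astype(int))
print(out)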
Q: Custom button in Django Admin page, that when clicked, will change the field of the model to True I have a sample model in my Django App: class CustomerInformation(models.Model): # CustomerInformation Schema name=models.CharField(max_length=200, verbose_name="Name",default="Default Name") login_url=models.URLField(max_length=200, verbose_name="Login URL",default="") is_test=models.BooleanField(default=False,verbose_name="Test") I have to create a button on the Django Admin page for each entry of the model (which I have) that says "Test Integration", like this: [Screenshot of admin page Now once I click the button, the 'is_test' value for that specific CustomerInformation model, should be True. I haven't been able to do that so far. Here's what I have tried so far, thanks to Haki Benita's blog postBlog here: # admin.py class CustomerAdmin(admin.ModelAdmin): list_display = ( 'name', 'username', 'account_actions' ) def get_urls(self): urls = super().get_urls() custom_urls = [ url( r'^(?P<account_id>.+)/test_integration/$', self.admin_site.admin_view(self.test_integration), name='test-integration', ) ] return custom_urls + urls def test_integration(self, request, account_id, *args, **kwargs): return self.process_action( request=request, account_id=account_id ) def process_action(self,request,account_id): pass def account_actions(self, obj): return format_html( '<a class="button" href={}>Test Integration</a>', reverse('admin:test-integration', args=[obj.pk]), ) account_actions.short_description = 'Account Actions' account_actions.allow_tags = True I realize that I have to do something with the process_action and test_integration methods in this class to execute what I am wanting to do, but as a Django Beginner, I'm kinda lost. Any suggestions? Thank you so much!! A: in your process_action function do a setattr(self, 'is_test_result', True) then override get_form like so def get_form(self, *arg, **kwargs): form = super().get_form(*arg, **kwargs) if arg[1] and hasattr(self, 'is_test_result'): if self.is_test_result is not None: arg[1].is_test = self.is_test_result self.is_test_result = None return form in arg[1], is your admin template, you can do an arg[1].save() or let the user click the usual Save or Save and Continue button at the bottom right
Custom button in Django Admin page, that when clicked, will change the field of the model to True
I have a sample model in my Django App: class CustomerInformation(models.Model): # CustomerInformation Schema name=models.CharField(max_length=200, verbose_name="Name",default="Default Name") login_url=models.URLField(max_length=200, verbose_name="Login URL",default="") is_test=models.BooleanField(default=False,verbose_name="Test") I have to create a button on the Django Admin page for each entry of the model (which I have) that says "Test Integration", like this: [Screenshot of admin page Now once I click the button, the 'is_test' value for that specific CustomerInformation model, should be True. I haven't been able to do that so far. Here's what I have tried so far, thanks to Haki Benita's blog postBlog here: # admin.py class CustomerAdmin(admin.ModelAdmin): list_display = ( 'name', 'username', 'account_actions' ) def get_urls(self): urls = super().get_urls() custom_urls = [ url( r'^(?P<account_id>.+)/test_integration/$', self.admin_site.admin_view(self.test_integration), name='test-integration', ) ] return custom_urls + urls def test_integration(self, request, account_id, *args, **kwargs): return self.process_action( request=request, account_id=account_id ) def process_action(self,request,account_id): pass def account_actions(self, obj): return format_html( '<a class="button" href={}>Test Integration</a>', reverse('admin:test-integration', args=[obj.pk]), ) account_actions.short_description = 'Account Actions' account_actions.allow_tags = True I realize that I have to do something with the process_action and test_integration methods in this class to execute what I am wanting to do, but as a Django Beginner, I'm kinda lost. Any suggestions? Thank you so much!!
[ "in your process_action function do a setattr(self, 'is_test_result', True) then override get_form like so\ndef get_form(self, *arg, **kwargs):\n form = super().get_form(*arg, **kwargs)\n if arg[1] and hasattr(self, 'is_test_result'):\n if self.is_test_result is not None:\n arg[1].is_test = self.is_test_result\n\n self.is_test_result = None\n\n\n\n return form\n\nin arg[1], is your admin template, you can do an arg[1].save() or let the user click the usual Save or Save and Continue button at the bottom right\n" ]
[ 0 ]
[]
[]
[ "django", "django_admin", "django_admin_actions", "django_models", "python" ]
stackoverflow_0073040960_django_django_admin_django_admin_actions_django_models_python.txt
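The accepted answer in the record above routes the update through get_form, but a more direct reading of the question is to change the model inside process_action itself and then send the user back to the changelist. The sketch below is one way to write that method body, not the answerer's approach; "myapp" is a placeholder for the real app label and CustomerInformation is the model from the question.

from django.shortcuts import get_object_or_404, redirect

def process_action(self, request, account_id):
    # Look up the clicked row, flip the flag, save only that field, and return to the list view.
    customer = get_object_or_404(CustomerInformation, pk=account_id)
    customer.is_test = True
    customer.save(update_fields=["is_test"])
    self.message_user(request, f"Marked {customer.name} as tested.")
    return redirect("admin:myapp_customerinformation_changelist")  # "myapp" is a placeholder app label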
Q: How to use multiprocessing pool.map with multiple arguments In the Python multiprocessing library, is there a variant of pool.map which supports multiple arguments? import multiprocessing text = "test" def harvester(text, case): X = case[0] text + str(X) if __name__ == '__main__': pool = multiprocessing.Pool(processes=6) case = RAW_DATASET pool.map(harvester(text, case), case, 1) pool.close() pool.join() A: is there a variant of pool.map which support multiple arguments? Python 3.3 includes pool.starmap() method: #!/usr/bin/env python3 from functools import partial from itertools import repeat from multiprocessing import Pool, freeze_support def func(a, b): return a + b def main(): a_args = [1,2,3] second_arg = 1 with Pool() as pool: L = pool.starmap(func, [(1, 1), (2, 1), (3, 1)]) M = pool.starmap(func, zip(a_args, repeat(second_arg))) N = pool.map(partial(func, b=second_arg), a_args) assert L == M == N if __name__=="__main__": freeze_support() main() For older versions: #!/usr/bin/env python2 import itertools from multiprocessing import Pool, freeze_support def func(a, b): print a, b def func_star(a_b): """Convert `f([1,2])` to `f(1,2)` call.""" return func(*a_b) def main(): pool = Pool() a_args = [1,2,3] second_arg = 1 pool.map(func_star, itertools.izip(a_args, itertools.repeat(second_arg))) if __name__=="__main__": freeze_support() main() Output 1 1 2 1 3 1 Notice how itertools.izip() and itertools.repeat() are used here. Due to the bug mentioned by @unutbu you can't use functools.partial() or similar capabilities on Python 2.6, so the simple wrapper function func_star() should be defined explicitly. See also the workaround suggested by uptimebox. A: The answer to this is version- and situation-dependent. The most general answer for recent versions of Python (since 3.3) was first described below by J.F. Sebastian.1 It uses the Pool.starmap method, which accepts a sequence of argument tuples. It then automatically unpacks the arguments from each tuple and passes them to the given function: import multiprocessing from itertools import product def merge_names(a, b): return '{} & {}'.format(a, b) if __name__ == '__main__': names = ['Brown', 'Wilson', 'Bartlett', 'Rivera', 'Molloy', 'Opie'] with multiprocessing.Pool(processes=3) as pool: results = pool.starmap(merge_names, product(names, repeat=2)) print(results) # Output: ['Brown & Brown', 'Brown & Wilson', 'Brown & Bartlett', ... For earlier versions of Python, you'll need to write a helper function to unpack the arguments explicitly. If you want to use with, you'll also need to write a wrapper to turn Pool into a context manager. (Thanks to muon for pointing this out.) import multiprocessing from itertools import product from contextlib import contextmanager def merge_names(a, b): return '{} & {}'.format(a, b) def merge_names_unpack(args): return merge_names(*args) @contextmanager def poolcontext(*args, **kwargs): pool = multiprocessing.Pool(*args, **kwargs) yield pool pool.terminate() if __name__ == '__main__': names = ['Brown', 'Wilson', 'Bartlett', 'Rivera', 'Molloy', 'Opie'] with poolcontext(processes=3) as pool: results = pool.map(merge_names_unpack, product(names, repeat=2)) print(results) # Output: ['Brown & Brown', 'Brown & Wilson', 'Brown & Bartlett', ... In simpler cases, with a fixed second argument, you can also use partial, but only in Python 2.7+. 
import multiprocessing from functools import partial from contextlib import contextmanager @contextmanager def poolcontext(*args, **kwargs): pool = multiprocessing.Pool(*args, **kwargs) yield pool pool.terminate() def merge_names(a, b): return '{} & {}'.format(a, b) if __name__ == '__main__': names = ['Brown', 'Wilson', 'Bartlett', 'Rivera', 'Molloy', 'Opie'] with poolcontext(processes=3) as pool: results = pool.map(partial(merge_names, b='Sons'), names) print(results) # Output: ['Brown & Sons', 'Wilson & Sons', 'Bartlett & Sons', ... 1. Much of this was inspired by his answer, which should probably have been accepted instead. But since this one is stuck at the top, it seemed best to improve it for future readers. A: I think the below will be better: def multi_run_wrapper(args): return add(*args) def add(x,y): return x+y if __name__ == "__main__": from multiprocessing import Pool pool = Pool(4) results = pool.map(multi_run_wrapper,[(1,2),(2,3),(3,4)]) print results Output [3, 5, 7] A: Using Python 3.3+ with pool.starmap(): from multiprocessing.dummy import Pool as ThreadPool def write(i, x): print(i, "---", x) a = ["1","2","3"] b = ["4","5","6"] pool = ThreadPool(2) pool.starmap(write, zip(a,b)) pool.close() pool.join() Result: 1 --- 4 2 --- 5 3 --- 6 You can also zip() more arguments if you like: zip(a,b,c,d,e) In case you want to have a constant value passed as an argument: import itertools zip(itertools.repeat(constant), a) In case your function should return something: results = pool.starmap(write, zip(a,b)) This gives a List with the returned values. A: How to take multiple arguments: def f1(args): a, b, c = args[0] , args[1] , args[2] return a+b+c if __name__ == "__main__": import multiprocessing pool = multiprocessing.Pool(4) result1 = pool.map(f1, [ [1,2,3] ]) print(result1) A: Having learnt about itertools in J.F. Sebastian's answer I decided to take it a step further and write a parmap package that takes care about parallelization, offering map and starmap functions in Python 2.7 and Python 3.2 (and later also) that can take any number of positional arguments. Installation pip install parmap How to parallelize: import parmap # If you want to do: y = [myfunction(x, argument1, argument2) for x in mylist] # In parallel: y = parmap.map(myfunction, mylist, argument1, argument2) # If you want to do: z = [myfunction(x, y, argument1, argument2) for (x,y) in mylist] # In parallel: z = parmap.starmap(myfunction, mylist, argument1, argument2) # If you want to do: listx = [1, 2, 3, 4, 5, 6] listy = [2, 3, 4, 5, 6, 7] param = 3.14 param2 = 42 listz = [] for (x, y) in zip(listx, listy): listz.append(myfunction(x, y, param1, param2)) # In parallel: listz = parmap.starmap(myfunction, zip(listx, listy), param1, param2) I have uploaded parmap to PyPI and to a GitHub repository. As an example, the question can be answered as follows: import parmap def harvester(case, text): X = case[0] text+ str(X) if __name__ == "__main__": case = RAW_DATASET # assuming this is an iterable parmap.map(harvester, case, "test", chunksize=1) A: There's a fork of multiprocessing called pathos (note: use the version on GitHub) that doesn't need starmap -- the map functions mirror the API for Python's map, thus map can take multiple arguments. With pathos, you can also generally do multiprocessing in the interpreter, instead of being stuck in the __main__ block. Pathos is due for a release, after some mild updating -- mostly conversion to Python 3.x. Python 2.7.5 (default, Sep 30 2013, 20:15:49) [GCC 4.2.1 (Apple Inc. 
build 5566)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> def func(a,b): ... print a,b ... >>> >>> from pathos.multiprocessing import ProcessingPool >>> pool = ProcessingPool(nodes=4) >>> pool.map(func, [1,2,3], [1,1,1]) 1 1 2 1 3 1 [None, None, None] >>> >>> # also can pickle stuff like lambdas >>> result = pool.map(lambda x: x**2, range(10)) >>> result [0, 1, 4, 9, 16, 25, 36, 49, 64, 81] >>> >>> # also does asynchronous map >>> result = pool.amap(pow, [1,2,3], [4,5,6]) >>> result.get() [1, 32, 729] >>> >>> # or can return a map iterator >>> result = pool.imap(pow, [1,2,3], [4,5,6]) >>> result <processing.pool.IMapIterator object at 0x110c2ffd0> >>> list(result) [1, 32, 729] pathos has several ways that that you can get the exact behavior of starmap. >>> def add(*x): ... return sum(x) ... >>> x = [[1,2,3],[4,5,6]] >>> import pathos >>> import numpy as np >>> # use ProcessPool's map and transposing the inputs >>> pp = pathos.pools.ProcessPool() >>> pp.map(add, *np.array(x).T) [6, 15] >>> # use ProcessPool's map and a lambda to apply the star >>> pp.map(lambda x: add(*x), x) [6, 15] >>> # use a _ProcessPool, which has starmap >>> _pp = pathos.pools._ProcessPool() >>> _pp.starmap(add, x) [6, 15] >>> A: Another way is to pass a list of lists to a one-argument routine: import os from multiprocessing import Pool def task(args): print "PID =", os.getpid(), ", arg1 =", args[0], ", arg2 =", args[1] pool = Pool() pool.map(task, [ [1,2], [3,4], [5,6], [7,8] ]) One can then construct a list lists of arguments with one's favorite method. A: A better way is using a decorator instead of writing a wrapper function by hand. Especially when you have a lot of functions to map, a decorator will save your time by avoiding writing a wrapper for every function. Usually a decorated function is not picklable, however we may use functools to get around it. More discussions can be found here. Here is the example: def unpack_args(func): from functools import wraps @wraps(func) def wrapper(args): if isinstance(args, dict): return func(**args) else: return func(*args) return wrapper @unpack_args def func(x, y): return x + y Then you may map it with zipped arguments: np, xlist, ylist = 2, range(10), range(10) pool = Pool(np) res = pool.map(func, zip(xlist, ylist)) pool.close() pool.join() Of course, you may always use Pool.starmap in Python 3 (>=3.3) as mentioned in other answers. A: A better solution for Python 2: from multiprocessing import Pool def func((i, (a, b))): print i, a, b return a + b pool = Pool(3) pool.map(func, [(0,(1,2)), (1,(2,3)), (2,(3, 4))]) Output 2 3 4 1 2 3 0 1 2 out[]: [3, 5, 7] A: You can use the following two functions so as to avoid writing a wrapper for each new function: import itertools from multiprocessing import Pool def universal_worker(input_pair): function, args = input_pair return function(*args) def pool_args(function, *args): return zip(itertools.repeat(function), zip(*args)) Use the function function with the lists of arguments arg_0, arg_1 and arg_2 as follows: pool = Pool(n_core) list_model = pool.map(universal_worker, pool_args(function, arg_0, arg_1, arg_2) pool.close() pool.join() A: Another simple alternative is to wrap your function parameters in a tuple and then wrap the parameters that should be passed in tuples as well. This is perhaps not ideal when dealing with large pieces of data. I believe it would make copies for each tuple. 
from multiprocessing import Pool def f((a,b,c,d)): print a,b,c,d return a + b + c +d if __name__ == '__main__': p = Pool(10) data = [(i+0,i+1,i+2,i+3) for i in xrange(10)] print(p.map(f, data)) p.close() p.join() Gives the output in some random order: 0 1 2 3 1 2 3 4 2 3 4 5 3 4 5 6 4 5 6 7 5 6 7 8 7 8 9 10 6 7 8 9 8 9 10 11 9 10 11 12 [6, 10, 14, 18, 22, 26, 30, 34, 38, 42] A: Here is another way to do it that IMHO is more simple and elegant than any of the other answers provided. This program has a function that takes two parameters, prints them out and also prints the sum: import multiprocessing def main(): with multiprocessing.Pool(10) as pool: params = [ (2, 2), (3, 3), (4, 4) ] pool.starmap(printSum, params) # end with # end function def printSum(num1, num2): mySum = num1 + num2 print('num1 = ' + str(num1) + ', num2 = ' + str(num2) + ', sum = ' + str(mySum)) # end function if __name__ == '__main__': main() output is: num1 = 2, num2 = 2, sum = 4 num1 = 3, num2 = 3, sum = 6 num1 = 4, num2 = 4, sum = 8 See the python docs for more info: https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing.pool In particular be sure to check out the starmap function. I'm using Python 3.6, I'm not sure if this will work with older Python versions Why there is not a very straight-forward example like this in the docs, I'm not sure. A: From Python 3.4.4, you can use multiprocessing.get_context() to obtain a context object to use multiple start methods: import multiprocessing as mp def foo(q, h, w): q.put(h + ' ' + w) print(h + ' ' + w) if __name__ == '__main__': ctx = mp.get_context('spawn') q = ctx.Queue() p = ctx.Process(target=foo, args=(q,'hello', 'world')) p.start() print(q.get()) p.join() Or you just simply replace pool.map(harvester(text, case), case, 1) with: pool.apply_async(harvester(text, case), case, 1) A: In the official documentation states that it supports only one iterable argument. I like to use apply_async in such cases. In your case I would do: from multiprocessing import Process, Pool, Manager text = "test" def harvester(text, case, q = None): X = case[0] res = text+ str(X) if q: q.put(res) return res def block_until(q, results_queue, until_counter=0): i = 0 while i < until_counter: results_queue.put(q.get()) i+=1 if __name__ == '__main__': pool = multiprocessing.Pool(processes=6) case = RAW_DATASET m = Manager() q = m.Queue() results_queue = m.Queue() # when it completes results will reside in this queue blocking_process = Process(block_until, (q, results_queue, len(case))) blocking_process.start() for c in case: try: res = pool.apply_async(harvester, (text, case, q = None)) res.get(timeout=0.1) except: pass blocking_process.join() A: There are many answers here, but none seem to provide Python 2/3 compatible code that will work on any version. If you want your code to just work, this will work for either Python version: # For python 2/3 compatibility, define pool context manager # to support the 'with' statement in Python 2 if sys.version_info[0] == 2: from contextlib import contextmanager @contextmanager def multiprocessing_context(*args, **kwargs): pool = multiprocessing.Pool(*args, **kwargs) yield pool pool.terminate() else: multiprocessing_context = multiprocessing.Pool After that, you can use multiprocessing the regular Python 3 way, however you like. 
For example: def _function_to_run_for_each(x): return x.lower() with multiprocessing_context(processes=3) as pool: results = pool.map(_function_to_run_for_each, ['Bob', 'Sue', 'Tim']) print(results) will work in Python 2 or Python 3. A: text = "test" def unpack(args): return args[0](*args[1:]) def harvester(text, case): X = case[0] text+ str(X) if __name__ == '__main__': pool = multiprocessing.Pool(processes=6) case = RAW_DATASET # args is a list of tuples # with the function to execute as the first item in each tuple args = [(harvester, text, c) for c in case] # doing it this way, we can pass any function # and we don't need to define a wrapper for each different function # if we need to use more than one pool.map(unpack, args) pool.close() pool.join() A: This is an example of the routine I use to pass multiple arguments to a one-argument function used in a pool.imap fork: from multiprocessing import Pool # Wrapper of the function to map: class makefun: def __init__(self, var2): self.var2 = var2 def fun(self, i): var2 = self.var2 return var1[i] + var2 # Couple of variables for the example: var1 = [1, 2, 3, 5, 6, 7, 8] var2 = [9, 10, 11, 12] # Open the pool: pool = Pool(processes=2) # Wrapper loop for j in range(len(var2)): # Obtain the function to map pool_fun = makefun(var2[j]).fun # Fork loop for i, value in enumerate(pool.imap(pool_fun, range(len(var1))), 0): print(var1[i], '+' ,var2[j], '=', value) # Close the pool pool.close() A: This might be another option. The trick is in the wrapper function that returns another function which is passed in to pool.map. The code below reads an input array and for each (unique) element in it, returns how many times (ie counts) that element appears in the array, For example if the input is np.eye(3) = [ [1. 0. 0.] [0. 1. 0.] [0. 0. 1.]] then zero appears 6 times and one 3 times import numpy as np from multiprocessing.dummy import Pool as ThreadPool from multiprocessing import cpu_count def extract_counts(label_array): labels = np.unique(label_array) out = extract_counts_helper([label_array], labels) return out def extract_counts_helper(args, labels): n = max(1, cpu_count() - 1) pool = ThreadPool(n) results = {} pool.map(wrapper(args, results), labels) pool.close() pool.join() return results def wrapper(argsin, results): def inner_fun(label): label_array = argsin[0] counts = get_label_counts(label_array, label) results[label] = counts return inner_fun def get_label_counts(label_array, label): return sum(label_array.flatten() == label) if __name__ == "__main__": img = np.ones([2,2]) out = extract_counts(img) print('input array: \n', img) print('label counts: ', out) print("========") img = np.eye(3) out = extract_counts(img) print('input array: \n', img) print('label counts: ', out) print("========") img = np.random.randint(5, size=(3, 3)) out = extract_counts(img) print('input array: \n', img) print('label counts: ', out) print("========") You should get: input array: [[1. 1.] [1. 1.]] label counts: {1.0: 4} ======== input array: [[1. 0. 0.] [0. 1. 0.] [0. 0. 
1.]] label counts: {0.0: 6, 1.0: 3} ======== input array: [[4 4 0] [2 4 3] [2 3 1]] label counts: {0: 1, 1: 1, 2: 2, 3: 2, 4: 3} ======== A: import time from multiprocessing import Pool def f1(args): vfirst, vsecond, vthird = args[0] , args[1] , args[2] print(f'First Param: {vfirst}, Second value: {vsecond} and finally third value is: {vthird}') pass if __name__ == '__main__': p = Pool() result = p.map(f1, [['Dog','Cat','Mouse']]) p.close() p.join() print(result) A: Store all your arguments as an array of tuples. The example says normally you call your function as: def mainImage(fragCoord: vec2, iResolution: vec3, iTime: float) -> vec3: Instead pass one tuple and unpack the arguments: def mainImage(package_iter) -> vec3: fragCoord = package_iter[0] iResolution = package_iter[1] iTime = package_iter[2] Build up the tuple by using a loop beforehand: package_iter = [] iResolution = vec3(nx, ny, 1) for j in range((ny-1), -1, -1): for i in range(0, nx, 1): fragCoord: vec2 = vec2(i, j) time_elapsed_seconds = 10 package_iter.append((fragCoord, iResolution, time_elapsed_seconds)) Then execute all using map by passing the array of tuples: array_rgb_values = [] with concurrent.futures.ProcessPoolExecutor() as executor: for val in executor.map(mainImage, package_iter): fragColor = val ir = clip(int(255* fragColor.r), 0, 255) ig = clip(int(255* fragColor.g), 0, 255) ib = clip(int(255* fragColor.b), 0, 255) array_rgb_values.append((ir, ig, ib)) I know Python has * and ** for unpacking, but I haven't tried those yet. Also better to use the higher-level library concurrent futures than the low level multiprocessing library. A: For me,Below one was short and simple solution: from multiprocessing.pool import ThreadPool from functools import partial from time import sleep from random import randint def dosomething(var,s): sleep(randint(1,5)) print(var) return var + s array = ["a", "b", "c", "d", "e"] with ThreadPool(processes=5) as pool: resp_ = pool.map(partial(dosomething,s="2"), array) print(resp_) Output: a b d e c ['a2', 'b2', 'c2', 'd2', 'e2']
How to use multiprocessing pool.map with multiple arguments
In the Python multiprocessing library, is there a variant of pool.map which supports multiple arguments? import multiprocessing text = "test" def harvester(text, case): X = case[0] text + str(X) if __name__ == '__main__': pool = multiprocessing.Pool(processes=6) case = RAW_DATASET pool.map(harvester(text, case), case, 1) pool.close() pool.join()
[ "\nis there a variant of pool.map which support multiple arguments?\n\nPython 3.3 includes pool.starmap() method:\n#!/usr/bin/env python3\nfrom functools import partial\nfrom itertools import repeat\nfrom multiprocessing import Pool, freeze_support\n\ndef func(a, b):\n return a + b\n\ndef main():\n a_args = [1,2,3]\n second_arg = 1\n with Pool() as pool:\n L = pool.starmap(func, [(1, 1), (2, 1), (3, 1)])\n M = pool.starmap(func, zip(a_args, repeat(second_arg)))\n N = pool.map(partial(func, b=second_arg), a_args)\n assert L == M == N\n\nif __name__==\"__main__\":\n freeze_support()\n main()\n\nFor older versions:\n#!/usr/bin/env python2\nimport itertools\nfrom multiprocessing import Pool, freeze_support\n\ndef func(a, b):\n print a, b\n\ndef func_star(a_b):\n \"\"\"Convert `f([1,2])` to `f(1,2)` call.\"\"\"\n return func(*a_b)\n\ndef main():\n pool = Pool()\n a_args = [1,2,3]\n second_arg = 1\n pool.map(func_star, itertools.izip(a_args, itertools.repeat(second_arg)))\n\nif __name__==\"__main__\":\n freeze_support()\n main()\n\nOutput\n1 1\n2 1\n3 1\n\nNotice how itertools.izip() and itertools.repeat() are used here.\nDue to the bug mentioned by @unutbu you can't use functools.partial() or similar capabilities on Python 2.6, so the simple wrapper function func_star() should be defined explicitly. See also the workaround suggested by uptimebox.\n", "The answer to this is version- and situation-dependent. The most general answer for recent versions of Python (since 3.3) was first described below by J.F. Sebastian.1 It uses the Pool.starmap method, which accepts a sequence of argument tuples. It then automatically unpacks the arguments from each tuple and passes them to the given function:\nimport multiprocessing\nfrom itertools import product\n\ndef merge_names(a, b):\n return '{} & {}'.format(a, b)\n\nif __name__ == '__main__':\n names = ['Brown', 'Wilson', 'Bartlett', 'Rivera', 'Molloy', 'Opie']\n with multiprocessing.Pool(processes=3) as pool:\n results = pool.starmap(merge_names, product(names, repeat=2))\n print(results)\n\n# Output: ['Brown & Brown', 'Brown & Wilson', 'Brown & Bartlett', ...\n\nFor earlier versions of Python, you'll need to write a helper function to unpack the arguments explicitly. If you want to use with, you'll also need to write a wrapper to turn Pool into a context manager. 
(Thanks to muon for pointing this out.)\nimport multiprocessing\nfrom itertools import product\nfrom contextlib import contextmanager\n\ndef merge_names(a, b):\n return '{} & {}'.format(a, b)\n\ndef merge_names_unpack(args):\n return merge_names(*args)\n\n@contextmanager\ndef poolcontext(*args, **kwargs):\n pool = multiprocessing.Pool(*args, **kwargs)\n yield pool\n pool.terminate()\n\nif __name__ == '__main__':\n names = ['Brown', 'Wilson', 'Bartlett', 'Rivera', 'Molloy', 'Opie']\n with poolcontext(processes=3) as pool:\n results = pool.map(merge_names_unpack, product(names, repeat=2))\n print(results)\n\n# Output: ['Brown & Brown', 'Brown & Wilson', 'Brown & Bartlett', ...\n\nIn simpler cases, with a fixed second argument, you can also use partial, but only in Python 2.7+.\nimport multiprocessing\nfrom functools import partial\nfrom contextlib import contextmanager\n\n@contextmanager\ndef poolcontext(*args, **kwargs):\n pool = multiprocessing.Pool(*args, **kwargs)\n yield pool\n pool.terminate()\n\ndef merge_names(a, b):\n return '{} & {}'.format(a, b)\n\nif __name__ == '__main__':\n names = ['Brown', 'Wilson', 'Bartlett', 'Rivera', 'Molloy', 'Opie']\n with poolcontext(processes=3) as pool:\n results = pool.map(partial(merge_names, b='Sons'), names)\n print(results)\n\n# Output: ['Brown & Sons', 'Wilson & Sons', 'Bartlett & Sons', ...\n\n1. Much of this was inspired by his answer, which should probably have been accepted instead. But since this one is stuck at the top, it seemed best to improve it for future readers.\n", "I think the below will be better:\ndef multi_run_wrapper(args):\n return add(*args)\n\ndef add(x,y):\n return x+y\n\nif __name__ == \"__main__\":\n from multiprocessing import Pool\n pool = Pool(4)\n results = pool.map(multi_run_wrapper,[(1,2),(2,3),(3,4)])\n print results\n\nOutput\n[3, 5, 7]\n\n", "Using Python 3.3+ with pool.starmap():\nfrom multiprocessing.dummy import Pool as ThreadPool \n\ndef write(i, x):\n print(i, \"---\", x)\n\na = [\"1\",\"2\",\"3\"]\nb = [\"4\",\"5\",\"6\"] \n\npool = ThreadPool(2)\npool.starmap(write, zip(a,b)) \npool.close() \npool.join()\n\nResult:\n1 --- 4\n2 --- 5\n3 --- 6\n\nYou can also zip() more arguments if you like: zip(a,b,c,d,e)\nIn case you want to have a constant value passed as an argument:\nimport itertools\n\nzip(itertools.repeat(constant), a)\n\nIn case your function should return something:\nresults = pool.starmap(write, zip(a,b))\n\nThis gives a List with the returned values.\n", "How to take multiple arguments:\ndef f1(args):\n a, b, c = args[0] , args[1] , args[2]\n return a+b+c\n\nif __name__ == \"__main__\":\n import multiprocessing\n pool = multiprocessing.Pool(4) \n\n result1 = pool.map(f1, [ [1,2,3] ])\n print(result1)\n\n", "Having learnt about itertools in J.F. 
Sebastian's answer I decided to take it a step further and write a parmap package that takes care about parallelization, offering map and starmap functions in Python 2.7 and Python 3.2 (and later also) that can take any number of positional arguments.\nInstallation\npip install parmap\n\nHow to parallelize:\nimport parmap\n# If you want to do:\ny = [myfunction(x, argument1, argument2) for x in mylist]\n# In parallel:\ny = parmap.map(myfunction, mylist, argument1, argument2)\n\n# If you want to do:\nz = [myfunction(x, y, argument1, argument2) for (x,y) in mylist]\n# In parallel:\nz = parmap.starmap(myfunction, mylist, argument1, argument2)\n\n# If you want to do:\nlistx = [1, 2, 3, 4, 5, 6]\nlisty = [2, 3, 4, 5, 6, 7]\nparam = 3.14\nparam2 = 42\nlistz = []\nfor (x, y) in zip(listx, listy):\n listz.append(myfunction(x, y, param1, param2))\n# In parallel:\nlistz = parmap.starmap(myfunction, zip(listx, listy), param1, param2)\n\nI have uploaded parmap to PyPI and to a GitHub repository.\nAs an example, the question can be answered as follows:\nimport parmap\n\ndef harvester(case, text):\n X = case[0]\n text+ str(X)\n\nif __name__ == \"__main__\":\n case = RAW_DATASET # assuming this is an iterable\n parmap.map(harvester, case, \"test\", chunksize=1)\n\n", "There's a fork of multiprocessing called pathos (note: use the version on GitHub) that doesn't need starmap -- the map functions mirror the API for Python's map, thus map can take multiple arguments.\nWith pathos, you can also generally do multiprocessing in the interpreter, instead of being stuck in the __main__ block. Pathos is due for a release, after some mild updating -- mostly conversion to Python 3.x.\n Python 2.7.5 (default, Sep 30 2013, 20:15:49)\n [GCC 4.2.1 (Apple Inc. build 5566)] on darwin\n Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n >>> def func(a,b):\n ... print a,b\n ...\n >>>\n >>> from pathos.multiprocessing import ProcessingPool\n >>> pool = ProcessingPool(nodes=4)\n >>> pool.map(func, [1,2,3], [1,1,1])\n 1 1\n 2 1\n 3 1\n [None, None, None]\n >>>\n >>> # also can pickle stuff like lambdas\n >>> result = pool.map(lambda x: x**2, range(10))\n >>> result\n [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]\n >>>\n >>> # also does asynchronous map\n >>> result = pool.amap(pow, [1,2,3], [4,5,6])\n >>> result.get()\n [1, 32, 729]\n >>>\n >>> # or can return a map iterator\n >>> result = pool.imap(pow, [1,2,3], [4,5,6])\n >>> result\n <processing.pool.IMapIterator object at 0x110c2ffd0>\n >>> list(result)\n [1, 32, 729]\n\npathos has several ways that that you can get the exact behavior of starmap.\n>>> def add(*x):\n... 
return sum(x)\n...\n>>> x = [[1,2,3],[4,5,6]]\n>>> import pathos\n>>> import numpy as np\n>>> # use ProcessPool's map and transposing the inputs\n>>> pp = pathos.pools.ProcessPool()\n>>> pp.map(add, *np.array(x).T)\n[6, 15]\n>>> # use ProcessPool's map and a lambda to apply the star\n>>> pp.map(lambda x: add(*x), x)\n[6, 15]\n>>> # use a _ProcessPool, which has starmap\n>>> _pp = pathos.pools._ProcessPool()\n>>> _pp.starmap(add, x)\n[6, 15]\n>>>\n\n", "Another way is to pass a list of lists to a one-argument routine:\nimport os\nfrom multiprocessing import Pool\n\ndef task(args):\n print \"PID =\", os.getpid(), \", arg1 =\", args[0], \", arg2 =\", args[1]\n\npool = Pool()\n\npool.map(task, [\n [1,2],\n [3,4],\n [5,6],\n [7,8]\n ])\n\nOne can then construct a list lists of arguments with one's favorite method.\n", "A better way is using a decorator instead of writing a wrapper function by hand. Especially when you have a lot of functions to map, a decorator will save your time by avoiding writing a wrapper for every function. Usually a decorated function is not picklable, however we may use functools to get around it. More discussions can be found here.\nHere is the example:\ndef unpack_args(func):\n from functools import wraps\n @wraps(func)\n def wrapper(args):\n if isinstance(args, dict):\n return func(**args)\n else:\n return func(*args)\n return wrapper\n\n@unpack_args\ndef func(x, y):\n return x + y\n\nThen you may map it with zipped arguments:\nnp, xlist, ylist = 2, range(10), range(10)\npool = Pool(np)\nres = pool.map(func, zip(xlist, ylist))\npool.close()\npool.join()\n\nOf course, you may always use Pool.starmap in Python 3 (>=3.3) as mentioned in other answers.\n", "A better solution for Python 2:\nfrom multiprocessing import Pool\ndef func((i, (a, b))):\n print i, a, b\n return a + b\npool = Pool(3)\npool.map(func, [(0,(1,2)), (1,(2,3)), (2,(3, 4))])\n\nOutput\n2 3 4\n\n1 2 3\n\n0 1 2\n\nout[]:\n\n[3, 5, 7]\n\n\n", "You can use the following two functions so as to avoid writing a wrapper for each new function:\nimport itertools\nfrom multiprocessing import Pool\n\ndef universal_worker(input_pair):\n function, args = input_pair\n return function(*args)\n\ndef pool_args(function, *args):\n return zip(itertools.repeat(function), zip(*args))\n\nUse the function function with the lists of arguments arg_0, arg_1 and arg_2 as follows:\npool = Pool(n_core)\nlist_model = pool.map(universal_worker, pool_args(function, arg_0, arg_1, arg_2)\npool.close()\npool.join()\n\n", "Another simple alternative is to wrap your function parameters in a tuple and then wrap the parameters that should be passed in tuples as well. This is perhaps not ideal when dealing with large pieces of data. 
I believe it would make copies for each tuple.\nfrom multiprocessing import Pool\n\ndef f((a,b,c,d)):\n print a,b,c,d\n return a + b + c +d\n\nif __name__ == '__main__':\n p = Pool(10)\n data = [(i+0,i+1,i+2,i+3) for i in xrange(10)]\n print(p.map(f, data))\n p.close()\n p.join()\n\nGives the output in some random order:\n0 1 2 3\n1 2 3 4\n2 3 4 5\n3 4 5 6\n4 5 6 7\n5 6 7 8\n7 8 9 10\n6 7 8 9\n8 9 10 11\n9 10 11 12\n[6, 10, 14, 18, 22, 26, 30, 34, 38, 42]\n\n", "Here is another way to do it that IMHO is more simple and elegant than any of the other answers provided.\nThis program has a function that takes two parameters, prints them out and also prints the sum:\nimport multiprocessing\n\ndef main():\n\n with multiprocessing.Pool(10) as pool:\n params = [ (2, 2), (3, 3), (4, 4) ]\n pool.starmap(printSum, params)\n # end with\n\n# end function\n\ndef printSum(num1, num2):\n mySum = num1 + num2\n print('num1 = ' + str(num1) + ', num2 = ' + str(num2) + ', sum = ' + str(mySum))\n# end function\n\nif __name__ == '__main__':\n main()\n\noutput is:\nnum1 = 2, num2 = 2, sum = 4\nnum1 = 3, num2 = 3, sum = 6\nnum1 = 4, num2 = 4, sum = 8\n\nSee the python docs for more info:\nhttps://docs.python.org/3/library/multiprocessing.html#module-multiprocessing.pool\nIn particular be sure to check out the starmap function.\nI'm using Python 3.6, I'm not sure if this will work with older Python versions\nWhy there is not a very straight-forward example like this in the docs, I'm not sure.\n", "From Python 3.4.4, you can use multiprocessing.get_context() to obtain a context object to use multiple start methods:\nimport multiprocessing as mp\n\ndef foo(q, h, w):\n q.put(h + ' ' + w)\n print(h + ' ' + w)\n\nif __name__ == '__main__':\n ctx = mp.get_context('spawn')\n q = ctx.Queue()\n p = ctx.Process(target=foo, args=(q,'hello', 'world'))\n p.start()\n print(q.get())\n p.join()\n\nOr you just simply replace\npool.map(harvester(text, case), case, 1)\n\nwith:\npool.apply_async(harvester(text, case), case, 1)\n\n", "In the official documentation states that it supports only one iterable argument. I like to use apply_async in such cases. In your case I would do:\nfrom multiprocessing import Process, Pool, Manager\n\ntext = \"test\"\ndef harvester(text, case, q = None):\n X = case[0]\n res = text+ str(X)\n if q:\n q.put(res)\n return res\n\n\ndef block_until(q, results_queue, until_counter=0):\n i = 0\n while i < until_counter:\n results_queue.put(q.get())\n i+=1\n\nif __name__ == '__main__':\n pool = multiprocessing.Pool(processes=6)\n case = RAW_DATASET\n m = Manager()\n q = m.Queue()\n results_queue = m.Queue() # when it completes results will reside in this queue\n blocking_process = Process(block_until, (q, results_queue, len(case)))\n blocking_process.start()\n for c in case:\n try:\n res = pool.apply_async(harvester, (text, case, q = None))\n res.get(timeout=0.1)\n except:\n pass\n blocking_process.join()\n\n", "There are many answers here, but none seem to provide Python 2/3 compatible code that will work on any version. 
If you want your code to just work, this will work for either Python version:\n# For python 2/3 compatibility, define pool context manager\n# to support the 'with' statement in Python 2\nif sys.version_info[0] == 2:\n from contextlib import contextmanager\n @contextmanager\n def multiprocessing_context(*args, **kwargs):\n pool = multiprocessing.Pool(*args, **kwargs)\n yield pool\n pool.terminate()\nelse:\n multiprocessing_context = multiprocessing.Pool\n\nAfter that, you can use multiprocessing the regular Python 3 way, however you like. For example:\ndef _function_to_run_for_each(x):\n return x.lower()\nwith multiprocessing_context(processes=3) as pool:\n results = pool.map(_function_to_run_for_each, ['Bob', 'Sue', 'Tim']) print(results)\n\nwill work in Python 2 or Python 3.\n", "text = \"test\"\n\ndef unpack(args):\n return args[0](*args[1:])\n\ndef harvester(text, case):\n X = case[0]\n text+ str(X)\n\nif __name__ == '__main__':\n pool = multiprocessing.Pool(processes=6)\n case = RAW_DATASET\n # args is a list of tuples \n # with the function to execute as the first item in each tuple\n args = [(harvester, text, c) for c in case]\n # doing it this way, we can pass any function\n # and we don't need to define a wrapper for each different function\n # if we need to use more than one\n pool.map(unpack, args)\n pool.close()\n pool.join()\n\n", "This is an example of the routine I use to pass multiple arguments to a one-argument function used in a pool.imap fork:\nfrom multiprocessing import Pool\n\n# Wrapper of the function to map:\nclass makefun:\n def __init__(self, var2):\n self.var2 = var2\n def fun(self, i):\n var2 = self.var2\n return var1[i] + var2\n\n# Couple of variables for the example:\nvar1 = [1, 2, 3, 5, 6, 7, 8]\nvar2 = [9, 10, 11, 12]\n\n# Open the pool:\npool = Pool(processes=2)\n\n# Wrapper loop\nfor j in range(len(var2)):\n # Obtain the function to map\n pool_fun = makefun(var2[j]).fun\n\n # Fork loop\n for i, value in enumerate(pool.imap(pool_fun, range(len(var1))), 0):\n print(var1[i], '+' ,var2[j], '=', value)\n\n# Close the pool\npool.close()\n\n", "This might be another option. The trick is in the wrapper function that returns another function which is passed in to pool.map. The code below reads an input array and for each (unique) element in it, returns how many times (ie counts) that element appears in the array, For example if the input is\nnp.eye(3) = [ [1. 0. 0.]\n [0. 1. 0.]\n [0. 0. 
1.]]\n\nthen zero appears 6 times and one 3 times\nimport numpy as np\nfrom multiprocessing.dummy import Pool as ThreadPool\nfrom multiprocessing import cpu_count\n\n\ndef extract_counts(label_array):\n labels = np.unique(label_array)\n out = extract_counts_helper([label_array], labels)\n return out\n\ndef extract_counts_helper(args, labels):\n n = max(1, cpu_count() - 1)\n pool = ThreadPool(n)\n results = {}\n pool.map(wrapper(args, results), labels)\n pool.close()\n pool.join()\n return results\n\ndef wrapper(argsin, results):\n def inner_fun(label):\n label_array = argsin[0]\n counts = get_label_counts(label_array, label)\n results[label] = counts\n return inner_fun\n\ndef get_label_counts(label_array, label):\n return sum(label_array.flatten() == label)\n\nif __name__ == \"__main__\":\n img = np.ones([2,2])\n out = extract_counts(img)\n print('input array: \\n', img)\n print('label counts: ', out)\n print(\"========\")\n \n img = np.eye(3)\n out = extract_counts(img)\n print('input array: \\n', img)\n print('label counts: ', out)\n print(\"========\")\n \n img = np.random.randint(5, size=(3, 3))\n out = extract_counts(img)\n print('input array: \\n', img)\n print('label counts: ', out)\n print(\"========\")\n\nYou should get:\ninput array: \n [[1. 1.]\n [1. 1.]]\nlabel counts: {1.0: 4}\n========\ninput array: \n [[1. 0. 0.]\n [0. 1. 0.]\n [0. 0. 1.]]\nlabel counts: {0.0: 6, 1.0: 3}\n========\ninput array: \n [[4 4 0]\n [2 4 3]\n [2 3 1]]\nlabel counts: {0: 1, 1: 1, 2: 2, 3: 2, 4: 3}\n========\n\n", "import time\nfrom multiprocessing import Pool\n\n\ndef f1(args):\n vfirst, vsecond, vthird = args[0] , args[1] , args[2]\n print(f'First Param: {vfirst}, Second value: {vsecond} and finally third value is: {vthird}')\n pass\n\n\nif __name__ == '__main__':\n p = Pool()\n result = p.map(f1, [['Dog','Cat','Mouse']])\n p.close()\n p.join()\n print(result)\n\n", "Store all your arguments as an array of tuples.\nThe example says normally you call your function as:\ndef mainImage(fragCoord: vec2, iResolution: vec3, iTime: float) -> vec3:\n\nInstead pass one tuple and unpack the arguments:\ndef mainImage(package_iter) -> vec3:\n fragCoord = package_iter[0]\n iResolution = package_iter[1]\n iTime = package_iter[2]\n\nBuild up the tuple by using a loop beforehand:\npackage_iter = []\niResolution = vec3(nx, ny, 1)\nfor j in range((ny-1), -1, -1):\n for i in range(0, nx, 1):\n fragCoord: vec2 = vec2(i, j)\n time_elapsed_seconds = 10\n package_iter.append((fragCoord, iResolution, time_elapsed_seconds))\n\nThen execute all using map by passing the array of tuples:\narray_rgb_values = []\n\nwith concurrent.futures.ProcessPoolExecutor() as executor:\n for val in executor.map(mainImage, package_iter):\n fragColor = val\n ir = clip(int(255* fragColor.r), 0, 255)\n ig = clip(int(255* fragColor.g), 0, 255)\n ib = clip(int(255* fragColor.b), 0, 255)\n\n array_rgb_values.append((ir, ig, ib))\n\nI know Python has * and ** for unpacking, but I haven't tried those yet.\nAlso better to use the higher-level library concurrent futures than the low level multiprocessing library.\n", "For me,Below one was short and simple solution:\nfrom multiprocessing.pool import ThreadPool\nfrom functools import partial\nfrom time import sleep\nfrom random import randint\n\ndef dosomething(var,s):\n sleep(randint(1,5))\n print(var)\n return var + s\n\narray = [\"a\", \"b\", \"c\", \"d\", \"e\"]\nwith ThreadPool(processes=5) as pool:\n resp_ = pool.map(partial(dosomething,s=\"2\"), array)\n 
print(resp_)\n\nOutput:\na\nb\nd\ne\nc\n['a2', 'b2', 'c2', 'd2', 'e2']\n\n" ]
[ 736, 492, 177, 116, 81, 31, 19, 10, 10, 10, 9, 9, 7, 5, 3, 3, 2, 2, 2, 2, 0, 0 ]
[ "For Python 2, you can use this trick\ndef fun(a, b):\n return a + b\n\npool = multiprocessing.Pool(processes=6)\nb = 233\npool.map(lambda x:fun(x, b), range(1000))\n\n" ]
[ -2 ]
[ "multiprocessing", "python", "python_multiprocessing" ]
stackoverflow_0005442910_multiprocessing_python_python_multiprocessing.txt
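The patterns in the answers above all solve the same problem — getting more than one argument into a function called through pool.map. On Python 3 the two shortest routes are Pool.starmap, which unpacks a tuple of positional arguments per task, and functools.partial, which freezes the shared argument. A minimal sketch of both, using a toy harvester(text, case) stand-in for the real work (the function body and the sample values are illustrative, not taken from any answer above):

import multiprocessing
from functools import partial

def harvester(text, case):
    # Toy stand-in: combine the shared text with one varying case value.
    return text + str(case)

if __name__ == '__main__':
    text = "test"
    cases = [1, 2, 3, 4]

    with multiprocessing.Pool(processes=4) as pool:
        # starmap (Python 3.3+) unpacks each tuple into positional arguments.
        via_starmap = pool.starmap(harvester, [(text, c) for c in cases])
        # partial freezes the shared first argument, so map varies only the case.
        via_partial = pool.map(partial(harvester, text), cases)

    print(via_starmap)   # ['test1', 'test2', 'test3', 'test4']
    print(via_partial)   # same result

Either form keeps the worker function at module level, which is what lets the pool pickle it; the wrapper-function and wrapper-class answers above are the fallback when that is not possible.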
Q: Extracting data for a specific location from netCDF by python I am new to using Python and also new to NetCDF, so apologies if I'm unclear. I have an nc file that has several variables and I need to extract data from those nc files in a new order. My nc file has 8 variables (longitude, latitude, time, u10, v10, swh, mwd, mwp) and the logic I'm trying is "If I input longitude and latitude, my program outputs other variables (u10, v10, swh, mwd, mwp) ordered by time." Then I would put the extracted data in another database. I tested my nc file as below: import netCDF4 from netCDF4 import Dataset jan = Dataset('2016_01.nc') print jan.variables.keys() lon = jan.variables['longitude'] lat = jan.variables['latitude'] time = jan.variables['time'] for d in jan.dimensions.items(): print d lon_array = lon[:] lat_array = lat[:] time_array = time[:] print lon_array print lat_array print time_array and some of the result is below [u'longitude', u'latitude', u'time', u'u10', u'v10', u'swh', u'mwd', u'mwp'] (u'longitude', <type 'netCDF4._netCDF4.Dimension'>: name = 'longitude', size = 1440) (u'latitude', <type 'netCDF4._netCDF4.Dimension'>: name = 'latitude', size = 721) (u'time', <type 'netCDF4._netCDF4.Dimension'> (unlimited): name = 'time', size = 186) Any advice would be appreciated. Thank you. A: 2022 edit: this is now much easier with xarray, as shown in Adrian's answer: https://stackoverflow.com/a/74599597/3581217 You first need to know the order of the dimensions in the time/space varying variables like e.g. u10, which you can obtain with: u10 = jan.variables['u10'] print(u10.dimensions) Next it is a matter of slicing/indexing the array correctly. If you want data for lets say latitude=30, longitude = 10, the corresponding (closest) indexes can be found with (after importing Numpy as import numpy as np): i = np.abs(lon_array - 10).argmin() j = np.abs(lat_array - 30).argmin() Assuming that the dimensions of u10 are ordered as {time, lat, lon}, you can read the data as: u10_time = u10[:,j,i] Which gives you all (time varying) u10 values for your requested location. A: This kind of task is straightforward using xarray, for example import xarray as xr lon=30 lat=10 # open the file, select the location and write to new netcdf da=xr.open_dataset('2016_01.nc') ts=da.sel(x=lon, y=lat, method="nearest") ts.to_netcdf('timeseries.nc')
Extracting data for a specific location from netCDF by python
I am new to using Python and also new to NetCDF, so apologies if I'm unclear. I have an nc file that has several variables and I need to extract data from those nc files in a new order. My nc file has 8 variables (longitude, latitude, time, u10, v10, swh, mwd, mwp) and the logic I'm trying is "If I input longitude and latitude, my program outputs other variables (u10, v10, swh, mwd, mwp) ordered by time." Then I would put the extracted data in another database. I tested my nc file as below: import netCDF4 from netCDF4 import Dataset jan = Dataset('2016_01.nc') print jan.variables.keys() lon = jan.variables['longitude'] lat = jan.variables['latitude'] time = jan.variables['time'] for d in jan.dimensions.items(): print d lon_array = lon[:] lat_array = lat[:] time_array = time[:] print lon_array print lat_array print time_array and some of the result is below [u'longitude', u'latitude', u'time', u'u10', u'v10', u'swh', u'mwd', u'mwp'] (u'longitude', <type 'netCDF4._netCDF4.Dimension'>: name = 'longitude', size = 1440) (u'latitude', <type 'netCDF4._netCDF4.Dimension'>: name = 'latitude', size = 721) (u'time', <type 'netCDF4._netCDF4.Dimension'> (unlimited): name = 'time', size = 186) Any advice would be appreciated. Thank you.
[ "2022 edit: this is now much easier with xarray, as shown in Adrian's answer: https://stackoverflow.com/a/74599597/3581217\n\nYou first need to know the order of the dimensions in the time/space varying variables like e.g. u10, which you can obtain with:\nu10 = jan.variables['u10']\nprint(u10.dimensions)\n\nNext it is a matter of slicing/indexing the array correctly. If you want data for lets say latitude=30, longitude = 10, the corresponding (closest) indexes can be found with (after importing Numpy as import numpy as np):\ni = np.abs(lon_array - 10).argmin()\nj = np.abs(lat_array - 30).argmin()\n\nAssuming that the dimensions of u10 are ordered as {time, lat, lon}, you can read the data as:\nu10_time = u10[:,j,i]\n\nWhich gives you all (time varying) u10 values for your requested location.\n", "This kind of task is straightforward using xarray, for example\nimport xarray as xr\nlon=30\nlat=10\n\n# open the file, select the location and write to new netcdf \nda=xr.open_dataset('2016_01.nc') \nts=da.sel(x=lon, y=lat, method=\"nearest\")\nts.to_netcdf('timeseries.nc')\n\n" ]
[ 8, 1 ]
[ "I used this on netCDF files that are generated with the WRF model.\nimport numpy as np\nfrom netCDF4 import Dataset # http://code.google.com/p/netcdf4-python/\nimport pandas as pd\nimport os\n\nos.chdir('.../netcdf') # Select your dir\nf = Dataset('wrfout_d01_2007-01-01_10_00_00', 'r') #Charge your file\n\nlatbounds = [ 4.691417 ]# Latitud\nlonbounds = [ -74.209 ]# Longitud\n\ncor_lat = pd.DataFrame(f.variables['XLAT'][0][:])\ncor_lat2 = pd.DataFrame({'a':cor_lat.iloc[:,0], 'b':abs(cor_lat.iloc[:,0] - latbounds)})\n\na = cor_lat2[cor_lat2.b == min(cor_lat2.b)].index.get_values()[0]\n\ncor_lon = pd.DataFrame(f.variables['XLONG'][0][:])\ncor_lon2 = pd.DataFrame({'a':cor_lon.iloc[0,:], 'b':abs(cor_lon.iloc[0,:] - lonbounds)})\n\nb = cor_lon2[cor_lon2.b == min(cor_lon2.b)].index.get_values()[0]\n\nvlr = (f.variables['T2'][ : , a , b ] - 273.15)[0] #This change from kelvin to celsius\nvlr\n\n" ]
[ -1 ]
[ "netcdf", "netcdf4", "python" ]
stackoverflow_0045582344_netcdf_netcdf4_python.txt
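Both approaches in this thread reduce to the same two steps: find the indices of the grid point nearest the requested coordinates, then slice each variable along the time axis at those indices. A sketch that puts the netCDF4/NumPy pieces together, assuming the coordinate variables are named longitude and latitude as in the question and that u10 is dimensioned (time, latitude, longitude) — worth confirming with print(u10.dimensions) first:

import numpy as np
from netCDF4 import Dataset

target_lon, target_lat = 10.0, 30.0   # illustrative location

jan = Dataset('2016_01.nc')
lon_array = jan.variables['longitude'][:]
lat_array = jan.variables['latitude'][:]

# Indices of the grid point closest to the requested coordinates.
i = np.abs(lon_array - target_lon).argmin()
j = np.abs(lat_array - target_lat).argmin()

u10 = jan.variables['u10']
print(u10.dimensions)        # e.g. ('time', 'latitude', 'longitude')
u10_series = u10[:, j, i]    # every time step at that grid point

# The other variables named in the question follow the same pattern.
v10_series = jan.variables['v10'][:, j, i]
swh_series = jan.variables['swh'][:, j, i]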
Q: Emojis in Pycharm Windows 7 I am making a program in Python using Pycharm IDE and I need Emoji packages in it. I have seen some guys do it in Mac using Ctrl + Space, How I can do this in windows ? A: It's a new feature in Windows 10. You can access the emoji keyboard shortcut using the keys: "Windows key + ." or, "Windows key + >" The "." is the period or, the full stop key, not the point or the decimal key. A: In windows 10 you can use windows key + .(dot) or windows key + ; A: Use either of these keyboard shortcuts: Windows Key and Period / Full Stop Key ( . ) Windows Key and Semicolon Key ( ; ) An emoji picker will appear on screen over the text field: In other words, hold the Windows key down and press either the period (.) or semicolon (;) key. Your cursor must be somewhere that can accept text while pressing these keys, but you can use this shortcut in any application, from text fields in your web browser to messaging apps to Notepad to Microsoft Word. You have to just click the emoji in the window that pops up to insert it. A: Actually, there is an alternative way to use emoji in windows, it works for me, You can try this. You have to use Unicode for your specific emoji. Here I am sharing a link that contains the Unicode chart. (https://unicode.org/emoji/charts/full-emoji-list.html). At first copy Unicode. Ex:--> For ":)" Unicode: U+1F600 Add 3 (zero's) 0 in the place of + Ex:--> "U0001F600" Add a \ (backslash) in front of the code. Ex:--> "\U0001F600" emojis={ ":)": "\U0001F600", ":D": "\U0001F603" } A: I use this plugin on Pycharm : https://plugins.jetbrains.com/plugin/9174-emoji-support-plugin It does exactly what you want. A: Adding for macOS as well, its ^+⌘+Space This command is not just for PyCharm, it will work on other applications as well, in macOS. A: On ubuntu ctrl + alt + ; opens an emoji picker window on Jetbrains IDE's , check this out this may work on windows too
Emojis in Pycharm Windows 7
I am making a program in Python using Pycharm IDE and I need Emoji packages in it. I have seen some guys do it in Mac using Ctrl + Space, How I can do this in windows ?
[ "It's a new feature in Windows 10. You can access the emoji keyboard shortcut using the keys:\n\"Windows key + .\" or, \"Windows key + >\"\nThe \".\" is the period or, the full stop key, not the point or the decimal key.\n", "In windows 10 you can use windows key + .(dot) or windows key + ; \n", "Use either of these keyboard shortcuts:\n\nWindows Key and Period / Full Stop Key ( . )\nWindows Key and Semicolon Key ( ; )\n\nAn emoji picker will appear on screen over the text field:\nIn other words, hold the Windows key down and press either the period (.) or semicolon (;) key.\nYour cursor must be somewhere that can accept text while pressing these keys, but you can use this shortcut in any application, from text fields in your web browser to messaging apps to Notepad to Microsoft Word.\nYou have to just click the emoji in the window that pops up to insert it.\n", "Actually, there is an alternative way to use emoji in windows, it works for me, You \n can try this.\n\nYou have to use Unicode for your specific emoji.\n Here I am sharing a link that contains the Unicode chart. \n (https://unicode.org/emoji/charts/full-emoji-list.html). \n\nAt first copy Unicode. Ex:--> For \":)\" Unicode: U+1F600\nAdd 3 (zero's) 0 in the place of + Ex:--> \"U0001F600\" \nAdd a \\ (backslash) in front of the code. Ex:--> \"\\U0001F600\"\n emojis={\n \":)\": \"\\U0001F600\",\n\n \":D\": \"\\U0001F603\"\n }\n\n\n", "I use this plugin on Pycharm : https://plugins.jetbrains.com/plugin/9174-emoji-support-plugin\nIt does exactly what you want. \n", "Adding for macOS as well, its ^+⌘+Space\nThis command is not just for PyCharm, it will work on other applications as well, in macOS.\n", "On ubuntu ctrl + alt + ; opens an emoji picker window on Jetbrains IDE's , check this out this may work on windows too\n" ]
[ 10, 3, 3, 1, 0, 0, 0 ]
[ "It is include in Mac OS, I do not think you can do that in Windows.\n", "You can use the Windows + ; key combination to open that box, however the emojis won't appear as bright as they do in mac. \n" ]
[ -2, -3 ]
[ "pycharm", "python" ]
stackoverflow_0054909711_pycharm_python.txt
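Of the answers above, only the Unicode-escape one puts the emoji into the source code itself rather than relying on an OS picker, which is the part that matters for a Python program. A small sketch of that approach — the escape values are the ones quoted in the answer, and whether the glyph actually renders depends on the console or IDE font rather than on Python:

# Emoji as Unicode escape sequences: U+1F600 becomes \U0001F600 in Python source.
emojis = {
    ":)": "\U0001F600",   # grinning face
    ":D": "\U0001F603",   # grinning face with big eyes
}

message = "Build finished " + emojis[":)"]
print(message)
print(f"All tests passed {emojis[':D']}")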