Q: How do I target a ForeignKey attribute inside a loop?

I have a cart view. When a user is authenticated and has products in the cart, I want to check product availability: if any product is found with availability set to False, I want to render "Out of stock". The code for unauthenticated users works, but for authenticated users the traceback error is 'OrderItem' object has no attribute 'availability'.

Models:

class Product(models.Model):
    availability = models.BooleanField()

class OrderItem(models.Model):
    product = models.ForeignKey(Product)
    order = models.ForeignKey(Order)

Views:

def cart(request):
    data = cartData(request)
    items = data['items']

    orderitemlist = OrderItem.objects.all()

    if not request.user.is_authenticated:
        available = all(x['product']['availability'] for x in items)
    else:
        available = all(x.availability for x in orderitemlist)

    context = {"items": items, 'available': available}

Template:

{% if available %}
    <a href="#">Checkout</a>
{% else %}
    <p>Out of stock</p>
{% endif %}

A: You need to check the availability of the product, so:

def cart(request):
    data = cartData(request)
    items = data['items']

    orderitemlist = OrderItem.objects.select_related('product')

    if not request.user.is_authenticated:
        available = all(x['product']['availability'] for x in items)
    else:
        available = all(x.product.availability for x in orderitemlist)

    context = {'items': items, 'available': available}

The .select_related(…) clause [Django-doc] is not strictly necessary, but it fetches the related products in the same query and thus avoids an extra database query per order item.
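A possible refinement, sketched under the assumption that the order items can be filtered down to the current user's cart — the relation from Order to the user is not shown in the question, so the order__user lookup below is a hypothetical: let the database perform the availability check instead of iterating over every row in Python.

def cart(request):
    data = cartData(request)
    items = data['items']

    if not request.user.is_authenticated:
        available = all(x['product']['availability'] for x in items)
    else:
        # exists() lets the database stop at the first unavailable
        # product; 'order__user' is an assumed lookup, not from the question.
        available = not OrderItem.objects.filter(
            order__user=request.user,
            product__availability=False,
        ).exists()

    context = {'items': items, 'available': available}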
Q: How to assign points in a simple game

I am making a simple game. At the beginning you can choose the number of players, which is between 2 and 5 (shown below). I am having a problem assigning the initial amount of points, which is 100 points. I am also not sure where to place the code regarding the points in my working code below. Once the game runs, the score should increase after each dice move.

players_list= []
max_players= 5
max_players = int(input(" Please, insert the number of players? : "))
while (max_players <2) or (max_players > 5) :
    max_players = int(input(" Number of players must be between 2 and 5.Number of players ?"))
players_list = []
while len(players_list) < max_players:
    player1 = input(" Enter your first and last name? : ")
    players_list.append(player1)
    print("Players in the game : ")
    print(players_list)

Should I change the players list into a dictionary?

The code with the score system that does not work:

score=100
players_list= []
max_players= 5
max_players = int(input(" Please, insert the number of players? : "))
while (max_players <2) or (max_players > 5) :
    max_players = int(input(" Number of players must be between 2 and 5.Number of players ?"))
players_list = []
while len(players_list) < max_players:
    player1 = input(" Enter your first and last name? : ")
    players_list.append(player1)
    print("Players in the game : ")
    players_list.appened (players)= { score:100}
    print(players_list)
    print(score)

A: I'd recommend using a dictionary where the keys are player names (assuming player names are unique) and the values are the players' scores:

players_dict = {}
score = 100

max_players = -1
while not (2 <= max_players <= 5):
    max_players = int(input("Please, insert the number of players: "))

while len(players_dict) < max_players:
    player = input("Enter your first and last name: ")
    if player in players_dict:
        print(f"Player {player} already exists, choose another name")
    else:
        players_dict[player] = score

print(players_dict)

Prints (for example):

Please, insert the number of players: 1
Please, insert the number of players: 3
Enter your first and last name: John
Enter your first and last name: Adam
Enter your first and last name: John
Player John already exists, choose another name
Enter your first and last name: Lucy
{'John': 100, 'Adam': 100, 'Lucy': 100}

A: No, you don't need to change the list into a dictionary. Rather, you probably want a list of dictionaries.

TL;DR — I'm not sure I understand your problem, because the description is vague, to be honest.

There's no need to use the input before the while loop.
The user can input something that cannot be parsed into an int, so I'd wrap it in a try...except block.
players_list is redefined.
In the second loop it's not player1, it's the "next" player, I guess.
You can also keep name and surname in a single string and then skip the split.
One way or another it would make sense to keep your player as a dict.
Consider renaming players_list to players; typically you don't add the name of the data structure to the variable name.

Code:

score=100
players_list= []
max_players= 0
while (max_players <2) or (max_players > 5) :
    try:
        max_players = int(input(" Number of players must be between 2 and 5.Number of players ?"))
    except Exception as e:
        print(f"Invalid input: {e}")
while len(players_list) < max_players:
    name, surname = input(" Enter your first and last name? : ").split(" ")
    player = {"name": name, "surname": surname, "points": score}
    players_list.append(player)
    print(f"Players in the game : {players_list}")
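Building on the dictionary approach above, a minimal sketch of how the score could be updated after each dice move. The actual dice rules are not given in the question, so adding the rolled value to the roller's score is purely an assumed rule:

import random

players = {"John": 100, "Adam": 100}  # name -> score

for name in players:
    roll = random.randint(1, 6)
    # Assumed rule: the rolled value is added to the player's score.
    players[name] += roll
    print(f"{name} rolled {roll}, new score: {players[name]}")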
Q: recursive tree search create array of all paths

I have a dict of {child: parent}:

mydict = {'1': '0',
          '2': '0',
          '3': '1',
          '4': '3',
          '5': '3',
          '6': '2',
          '7': '6',
          '8': '7'
          }

I don't know how many levels of grandchildren there are. I need to end up with a structure that has all unique parent-child-grandchild paths:

path1:[0,1,3,4], path2:[0,1,3,5], path3:[0,2,6,7,8]

I have written a function that traverses the entire tree and prints each value:

def getvalues(x):
    xlist = [(i, j) for i, j in mydict.items() if j == x]
    for y in xlist:
        print(y[1], y[0])
        getvalues(y[0])

getvalues('0')

0 1
1 3
3 4
3 5
0 2
2 6
6 7
7 8

But I am at a loss as to how to store the interim values at each level and get them into the array format I need:

path1:[0,1,3,4], path2:[0,1,3,5], path3:[0,2,6,7,8]

A: Try:

mydict = {
    "1": "0",
    "2": "0",
    "3": "1",
    "4": "3",
    "5": "3",
    "6": "2",
    "7": "6",
    "8": "7",
}


def get_all_paths(dct, start="0", curr_path=None):
    if curr_path is None:
        curr_path = [start]

    next_values = dct.get(start)
    if not next_values:
        yield curr_path
        return

    for v in next_values:
        yield from get_all_paths(dct, v, curr_path + [v])


inv_dict = {}
for k, v in mydict.items():
    inv_dict.setdefault(v, []).append(k)


out = {f"path{i}": path for i, path in enumerate(get_all_paths(inv_dict), 1)}
print(out)

Prints:

{
    "path1": ["0", "1", "3", "4"],
    "path2": ["0", "1", "3", "5"],
    "path3": ["0", "2", "6", "7", "8"],
}
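For comparison, a non-recursive sketch of the same traversal using an explicit stack. It assumes the same inverted child-list dict (inv_dict) built in the answer above; the paths come out in a different order than the recursive version:

def get_all_paths_iter(dct, start="0"):
    paths = []
    stack = [[start]]          # each entry is a partial path
    while stack:
        path = stack.pop()
        children = dct.get(path[-1])
        if not children:       # leaf: the path is complete
            paths.append(path)
        else:
            for child in children:
                stack.append(path + [child])
    return paths

# e.g. get_all_paths_iter(inv_dict) -> [['0', '2', '6', '7', '8'], ...]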
Q: Iterate through a folder which contains 5 more folders each with 500 text files to match words

I have a folder which contains 5 folders, each with around 450-550 text files. Each text file has around 1-12 sentences of varying length, separated by a tab, like this:

i love burgers	i want to eat a burger	etc

I want to create a program that asks the user to input a search term, then goes inside each folder, opens and reads each text file, and counts how many times that search term appears. Then it goes back out to the next folder, and repeats until it has gone through every folder and every text file. The output should be something like this:

input search term: good
the search term appears this many times __ in the following files
file name 001.txt
file name 002.txt
file name 003.txt

Here is some of the code I have so far:

from pathlib import Path
import os
from os.path import isdir, isfile
import nltk

search_word = input("Please enter the word you want to search for: ")
punctuation = "he fold!,:;-_'.?"
location = Path(r'the folder')
os.chdir(location)
print(Path.cwd())
fileslist = os.listdir(Path.cwd())
print(fileslist)

for file in fileslist:
    if isdir(file):
        os.chdir(file)
        print(Path.cwd())
        content = os.listdir(Path.cwd())
        for document in content:
            with open(document, 'r') as infile:
                data = []
                for line in infile:
                    data += [line.strip(punctuation)]
                print(data)
        os.chdir('../')
        print(Path.cwd())
    else:
        os.chdir(location)

I have tried watching some YouTube videos on how to do it, but I haven't been able to figure it out.

A: If you just want to count the number of occurrences of a word in a set of .txt files, something like this will do it:

from pathlib import Path

word = input('Enter the word you want to search for: ')
path = Path('/some/folder')
counter = {}

for file in path.rglob('*.txt'):
    if file.is_file():
        counter[file] = file.read_text().count(word)

print(
    f'The search term "{word}" appears {sum(counter.values())}',
    'times in the following files:'
)

for file in [_ for _ in counter if counter[_]]:
    print(f'{file}: {counter[file]} times')

A: This would be a perfect use case for the walk() function in the os module.

Given a start directory, os.walk() recursively iterates through the directory structure and provides a tuple of (current_directory, directory_names, file_names). You can then iterate through the filenames to check which ones end with '.txt', open each such file, and use a generator expression to check each line of the file for the search term, summing the results of the generator with the sum() function.

import os
import os.path

STARTDIR=input("directory: ")
SEARCH=input("search term: ")
total = 0

for dirname, dirlist, filelist in os.walk(STARTDIR):
    for filename in filelist:
        if filename.endswith(".txt"):
            # get full filename to use with open() function
            fullname = os.path.join(dirname, filename)

            # use generator expression to iterate over the lines of the
            # opened file and sum up the results (True == 1 for sum())
            count = sum(SEARCH in line for line in open(fullname))

            # if non zero count then print the filename and count
            if count:
                print(f"{fullname} contains {count} lines with {SEARCH}")

            total += count

print(f"{SEARCH} occurred a total of {total} times")

SAMPLE OUTPUT:

directory: c:\downloads\test
search term: hello

c:\downloads\test\a\aa\info.txt contains 1 lines with hello
c:\downloads\test\a\aa\log.txt contains 1 lines with hello
c:\downloads\test\a\bb\greeting.txt contains 1 lines with hello
c:\downloads\test\b\cc\control.txt contains 3 lines with hello
c:\downloads\test\b\cc\dumb.txt contains 1 lines with hello
c:\downloads\test\b\cc\info.txt contains 4 lines with hello
c:\downloads\test\c\aa\dog.txt contains 2 lines with hello
c:\downloads\test\c\dd\good.txt contains 1 lines with hello
hello occurred a total of 14 times
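Both answers count substring hits, so searching for "good" would also match "goodbye". If whole-word matching is wanted, a small regex-based sketch of the first answer's approach (the \b word boundaries are the only real change; they assume the search term starts and ends with word characters):

import re
from pathlib import Path

word = input('Enter the word you want to search for: ')
pattern = re.compile(rf'\b{re.escape(word)}\b')  # match whole words only
path = Path('/some/folder')

total = 0
for file in path.rglob('*.txt'):
    count = len(pattern.findall(file.read_text()))
    if count:
        print(f'{file}: {count} times')
        total += count
print(f'"{word}" occurred a total of {total} times')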
Q: How to change instance attributes in python class

I am starting out with Python classes and I have some doubts about how to properly change the values of instance attributes. I have the following example:

import numpy as np

class myClass:
    def __init__(self):
        self.name = "none"
        self.val = np.array([])

Instances of myClass will have two attributes: name and val. After creating an object of myClass

obj1 = myClass()

I would like to change the name and val of obj1. What is the correct way of doing this? Assigning directly:

obj1.name = "name1"
obj1.val = np.zeros(3)

or creating member functions for it:

def setName(self, name):
    self.name = name

def setVal(self, arr):
    self.val = arr

obj1.setName("name2")
obj1.setVal(np.zeros(3))

If I want to loop through the val array, can I do:

for i in range(3):
    obj1.val[i] = i

Or should I create a function to assign entry i in the array:

def setValI(self, i, val):
    self.val[i] = val

and use it in the loop as:

for i in range(3):
    obj1.setValI(i, i)

Or is it all the same, and I can use whatever approach I want?

Kind regards

A: The easiest would be to change your __init__ to require two arguments:

import numpy as np

class myClass:
    def __init__(self, name, val):
        self.name = name
        self.val = val

# Create an instance of myClass
obj1 = myClass("name1", np.zeros(3))

If you want to change the values of name and/or val, use getter and setter methods, as you shouldn't access instance variables outside of the class itself.

As you don't have private instance variables, it's just a convention that you don't access the variables directly. You can also go ahead and add one underscore before your instance variables, to highlight that these should be treated as private variables:

class MyClass:
    def __init__(self, name, val):
        # Underscore to signal "private" variable
        self._name = name
        self._val = val
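For completeness, Python's idiomatic middle ground between bare attribute access and explicit setName/setVal methods is the property decorator. A sketch with the underscore convention from the answer; the validation rule is made up purely for illustration:

import numpy as np

class MyClass:
    def __init__(self, name, val):
        self._name = name
        self._val = val

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        if not value:                 # illustrative validation only
            raise ValueError("name must be non-empty")
        self._name = value

    @property
    def val(self):
        return self._val

obj1 = MyClass("name1", np.zeros(3))
obj1.name = "name2"       # routed through the setter
for i in range(3):
    obj1.val[i] = i       # element assignment works through the property,
                          # since the property returns the mutable array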
Q: Unique entries across multiple lists

I am building a fixture manager for a sports event. To simplify the program: there are four teams in a group. They play both home and away matches — in total 6 matches, happening across 6 weeks. So the full set of "possible matches" at the start would look like this (I have a similar data structure in my code):

from itertools import combinations

teams = ["Swin", "Lon", "Key", "Stran"]
dates = ["2023/05/17", "2023/05/22", "2023/05/29", "2023/05/17", "2023/05/22", "2023/05/29"]
possibilities = []
for the_date in dates:
    for match in combinations(teams, 2):
        possibilities.append({"Home": match[0], "Away": match[1], "Date": the_date})
        possibilities.append({"Home": match[1], "Away": match[0], "Date": the_date})
for i in possibilities:
    print(i)

From the possibilities, I want to get only the valid sets of possibilities, which basically means satisfying:

No two-team combination plays the same match type (Home, Away) twice.
Neither the "Home" nor the "Away" team plays on the same "Date" twice.

What is an efficient way to do this? (A validity-check sketch follows the answer below.)

A: This code gives you all 46080 possible calendars:

from itertools import combinations
from itertools import permutations

teams = ["Swin", "Lon", "Key", "Stran"]
dates = ["2023/05/17", "2023/05/22", "2023/05/29", "2023/05/17", "2023/05/22", "2023/05/29"]

possibilities = {}
mbt={}
matches=[]
for match in combinations(teams, 2):
    t1,t2 = match
    matches.append((t1,t2,0))
    matches.append((t1,t2,1))

mpd=[]
for m in [ x for x in matches if x[0]==teams[0] or x[1] ==teams[0]]:
    already=[m[0],m[1]]
    for m2 in [mx for mx in matches if mx[0] not in already and mx[1] not in already]:
        mpd.append((m,m2))


calendars=[]
c=0
for d1 in matches[0:2]:
    for d2 in matches[2:4]:
        for d3 in matches[4:6]:
            for d4 in matches[6:8]:
                for d5 in matches[8:10]:
                    for d6 in matches[10:12]:
                        for pm in permutations([d1,d2,d3,d4,d5,d6],6):
                            cal=[]
                            for i in range(6):
                                if pm[i][2]==0:  # third tuple element flags home/away
                                    t1=pm[i][0];t2=pm[i][1]
                                else:
                                    t1=pm[i][1];t2=pm[i][0]
                                cal.append({"Home": t1, "Away": t2, "Date": dates[i]})

                            calendars.append(cal)
# print any of the 46080 possible calendars
for line in calendars[0]:
    print(line)
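A different angle, sketched directly from the two stated constraints: rather than enumerating calendars, check a candidate schedule (a list of {"Home", "Away", "Date"} dicts, like entries drawn from possibilities) for validity. This could be used to filter candidate fixture sets, though pruning while building, as the answer does, scales better:

def is_valid(schedule):
    seen_pairings = set()   # ordered (home, away) pairs already played
    seen_on_date = set()    # (team, date) appearances
    for m in schedule:
        pairing = (m["Home"], m["Away"])
        if pairing in seen_pairings:
            return False    # same match type for this pairing played twice
        seen_pairings.add(pairing)
        for team in (m["Home"], m["Away"]):
            if (team, m["Date"]) in seen_on_date:
                return False  # team already plays on this date
            seen_on_date.add((team, m["Date"]))
    return True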
Q: Are there any pure Python game libraries (without C)?

Are there any pure Python game libraries (without C)? I created a game in pygame, but that's painful for releasing the game because people must install pygame to run it (especially my friends — they are not programmers). There are some pure Python libraries, like requests, but no pure Python library for game dev that I know of.

I tried to use pyinstaller, but it's not cross-platform and has a problem on macOS Big Sur: https://stackoverflow.com/a/65383402/18248235

Are there any pure Python libraries without C? (P.S.: standard-library modules like tkinter also count as pure Python for my purposes.)

A: PyOpenGL — I'm not sure if it's pure Python, but it's Python!
Q: Round decimals of numbers in Python

I have a list where each row contains 4 floats (they represent a bounding box):

[[7.426758, 47.398349, 7.850835593464796, 47.68617800490421],
 [7.850835593464796, 47.398349, 8.274913186929592, 47.68617800490421],
 [8.274913186929592, 47.398349, 8.698990780394388, 47.68617800490421]]

I would like to round each float to 6 decimal places. I tried to round the numbers using pandas, but that also didn't work:

{49: '7.850835593464796,49.12532302942521,8.274913186929592,49.413152034329414',
 17: '7.850835593464796,47.9740070098084,8.274913186929592,48.26183601471261',
 71: '10.395301154253572,49.70098103923361,10.819378747718368,49.988810044137814'}

A: Try numpy.around:

# lst is your list
np.around(lst, 6).tolist()

Output:

[[7.426758, 47.398349, 7.850836, 47.686178],
 [7.850836, 47.398349, 8.274913, 47.686178],
 [8.274913, 47.398349, 8.698991, 47.686178]]
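If numpy is not already a dependency, a plain-Python sketch of the same rounding with a nested list comprehension (note that built-in round() uses banker's rounding at exact halves, which rarely matters at 6 decimal places):

lst = [[7.426758, 47.398349, 7.850835593464796, 47.68617800490421],
       [7.850835593464796, 47.398349, 8.274913186929592, 47.68617800490421],
       [8.274913186929592, 47.398349, 8.698990780394388, 47.68617800490421]]

rounded = [[round(x, 6) for x in row] for row in lst]
# [[7.426758, 47.398349, 7.850836, 47.686178], ...]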
Q: Can you create a function with a specific signature without using eval?

I've written some code that inspects function signatures, and I would like to generate test cases for it. For this, I need to be able to construct objects that result in a given Signature object when signature is called on them. I want to avoid just eval-ing spliced-together strings for this. Is there some other method for generating functions, or objects that behave like them with signature?

Specifically, I have this code:

from inspect import signature, Parameter
from typing import Any, Callable, ValuesView


def passable(fun: Callable[..., Any], arg: str | int) -> bool:
    if not callable(fun):
        raise TypeError("Argument fun to passable() must be callable")
    if isinstance(arg, str):
        return kwarg_passable(fun, arg)
    elif isinstance(arg, int):
        return arg_passable(fun, arg)
    else:
        raise TypeError("Argument arg to passable() must be int or str")


def kwarg_passable(fun: Callable[..., Any], arg_key: str) -> bool:
    assert callable(fun)
    assert isinstance(arg_key, str)
    params: ValuesView[Parameter] = signature(fun).parameters.values()
    return any(
        param.kind is Parameter.VAR_KEYWORD
        or (
            param.kind in [Parameter.KEYWORD_ONLY, Parameter.POSITIONAL_OR_KEYWORD]
            and param.name == arg_key
        )
        for param in params
    )


def arg_passable(fun: Callable[..., Any], arg_ix: int) -> bool:
    assert callable(fun)
    assert isinstance(arg_ix, int)
    params: ValuesView[Parameter] = signature(fun).parameters.values()
    return sum(
        1
        for param in params
        if param.kind in [Parameter.POSITIONAL_ONLY, Parameter.POSITIONAL_OR_KEYWORD]
    ) > arg_ix or any(param.kind is Parameter.VAR_POSITIONAL for param in params)

I want to test passable on randomly generated dummy functions using Hypothesis.

A: You can assign a Signature object to the callable's .__signature__ attribute:

import inspect
from inspect import Signature as S, Parameter as P

def f(x: int = 2): pass
print(inspect.signature(f))

new_param = P(name="y", kind=P.KEYWORD_ONLY, default="abc", annotation=str)
f.__signature__ = S(parameters=[new_param])
print(inspect.signature(f))

(x: int = 2)
(*, y: str = 'abc')

Though note that of course this won't affect what happens if you try to call it; f(y="def") will give you a TypeError: f() got an unexpected keyword argument 'y', which you can in turn work around with *args, **kwargs.
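Combining the __signature__ trick with Hypothesis, a rough sketch of a strategy that builds dummy callables with random signatures. The parameter names, kinds, and size limits here are arbitrary illustrative choices, not something the answer prescribes; note that Signature requires parameters in kind order, hence the sort:

from inspect import Parameter as P, Signature as S
from hypothesis import strategies as st

KIND_ORDER = [P.POSITIONAL_ONLY, P.POSITIONAL_OR_KEYWORD, P.KEYWORD_ONLY]

@st.composite
def dummy_callables(draw):
    names = draw(st.lists(
        st.from_regex(r"[a-z_][a-z0-9_]{0,5}", fullmatch=True),
        unique=True, max_size=5,
    ))
    kinds = [draw(st.sampled_from(KIND_ORDER)) for _ in names]
    # Signature demands positional-only < positional-or-keyword < keyword-only.
    params = sorted(
        (P(name=n, kind=k) for n, k in zip(names, kinds)),
        key=lambda p: KIND_ORDER.index(p.kind),
    )

    def fun(*args, **kwargs):   # accepts anything; the signature is cosmetic
        pass

    fun.__signature__ = S(parameters=params)
    return fun

# usage sketch: @given(fun=dummy_callables()) on a test of passable()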
Q: Confusion about crawler settings, spider settings, project settings

I am confused about crawler settings, spider settings, settings.py, and project settings. I have read the Scrapy documentation, but I still don't understand the difference. For example, in

process = CrawlerProcess(settings={
    "FEEDS": {
        "items.json": {"format": "json"},
    },
})

what is the difference between them, and how is each used? Sorry for my bad English. I want to know the difference among them; if you have an example that demonstrates it, please attach it below. Thank you!

A: The FEEDS setting is the output settings for your spider.

If you were to run

scrapy crawl spidername -o file.json

that would be roughly the same as

process = CrawlerProcess(settings={"FEEDS": {"file.json": {"format": "json"}}})

Another example:

scrapy crawl spidername -o file2.csv

is roughly the same as

process = CrawlerProcess(settings={"FEEDS": {"file2.csv": {"format": "csv"}}})

So the value of the "FEEDS" setting is a dictionary: each key is an output location, and its value is the format/handler used to process the items generated by your spider.
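To illustrate where each settings layer lives, a small sketch (the spider below is a made-up example against the quotes.toscrape.com sandbox site): project-wide defaults go in settings.py, a spider can override them via its custom_settings attribute, and when running from a script the dict passed to CrawlerProcess plays the role of the project settings, since settings.py is not loaded automatically in that case.

import scrapy
from scrapy.crawler import CrawlerProcess

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com"]

    # Spider-level settings: apply only to this spider and take
    # precedence over the project-wide defaults.
    custom_settings = {"DOWNLOAD_DELAY": 1}

    def parse(self, response):
        for quote in response.css("span.text::text").getall():
            yield {"quote": quote}

# Script-level settings, standing in for the project settings here.
process = CrawlerProcess(settings={
    "FEEDS": {
        "items.json": {"format": "json"},
        "items.csv": {"format": "csv"},  # two feeds can run at once
    },
})
process.crawl(QuotesSpider)
process.start()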
Q: Why does PyGad fitness_function not work when inside of a class?

I am trying to train a genetic algorithm, but for some reason it does not work when it's stored inside of a class. I have two equivalent pieces of code, but the one stored inside of a class fails. It returns this:

raise ValueError("The fitness function must accept 2 parameters:\n1) A solution to calculate its fitness value.\n2) The solution's index within the population.\n\nThe passed fitness function named '{funcname}' accepts {argcount} parameter(s).".format(funcname=fitness_func.__code__.co_name, argcount=fitness_func.__code__.co_argcount))
ValueError: The fitness function must accept 2 parameters:
1) A solution to calculate its fitness value.
2) The solution's index within the population.

The passed fitness function named 'fitness_func' accepts 3 parameter(s).

Here is the simplified version of the one that doesn't work:

import torch
import torch.nn as nn
import pygad.torchga
import pygad

class NN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.linear1 = nn.Linear(input_size, hidden_size)
        self.linear2 = nn.Linear(hidden_size, hidden_size)
        self.linear3 = nn.Linear(hidden_size, hidden_size)
        self.linear4 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        x = self.linear1(x)
        x = self.linear2(x)
        x = self.linear3(x)
        x = self.linear4(x)
        return x

class Coin:
    def __init__(self):
        self.NeuralNet = NN(1440, 1440, 3)

    def fitness_func(self, solution, solution_idx):
        return 0

    def trainModel(self):
        torch_ga = pygad.torchga.TorchGA(model=self.NeuralNet, num_solutions=10)
        ga_instance = pygad.GA(num_generations=10,
                               num_parents_mating=2,
                               initial_population=torch_ga.population_weights,
                               fitness_func=self.fitness_func)
        ga_instance.run()

if __name__ == "__main__":
    coin = Coin()
    coin.trainModel()

Here is the simplified version of the one that does work:

import torch
import torch.nn as nn
import pygad.torchga
import pygad

class NN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.linear1 = nn.Linear(input_size, hidden_size)
        self.linear2 = nn.Linear(hidden_size, hidden_size)
        self.linear3 = nn.Linear(hidden_size, hidden_size)
        self.linear4 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        x = self.linear1(x)
        x = self.linear2(x)
        x = self.linear3(x)
        x = self.linear4(x)
        return x

def fitness_func(solution, solution_idx):
    return 0

def trainModel():
    NeuralNet = NN(1440, 1440, 3)
    torch_ga = pygad.torchga.TorchGA(model=NeuralNet, num_solutions=10)
    ga_instance = pygad.GA(num_generations=10,
                           num_parents_mating=2,
                           initial_population=torch_ga.population_weights,
                           fitness_func=fitness_func)
    ga_instance.run()

if __name__ == "__main__":
    trainModel()

Both of these should work the same, but they don't.

A: When you look at the pygad code you can see it's explicitly checking that the fitness function has exactly two parameters:

# Check if the fitness function accepts 2 paramaters.
if (fitness_func.__code__.co_argcount == 2):
    self.fitness_func = fitness_func
else:
    self.valid_parameters = False
    raise ValueError("The fitness function must accept 2 parameters:\n1) A solution to calculate its fitness value.\n2) The solution's index within the population.\n\nThe passed fitness function named '{funcname}' accepts {argcount} parameter(s).".format(funcname=fitness_func.__code__.co_name, argcount=fitness_func.__code__.co_argcount))

A method's self counts as a parameter in co_argcount, which is why the class version reports 3. So if you want to use it in a class, you'll need to make it a static method so you aren't required to pass in self:

@staticmethod
def fitness_func(solution, solution_idx):
    return 0
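If the fitness function genuinely needs instance state (say, self.NeuralNet), a static method won't do. One workaround sketch, for the PyGad version implied by the question's error message: a two-argument lambda wrapper satisfies the co_argcount check while self is captured by the closure.

class Coin:
    def __init__(self):
        self.NeuralNet = NN(1440, 1440, 3)

    def fitness_func(self, solution, solution_idx):
        # Free to use self.NeuralNet here.
        return 0

    def trainModel(self):
        torch_ga = pygad.torchga.TorchGA(model=self.NeuralNet, num_solutions=10)
        ga_instance = pygad.GA(
            num_generations=10,
            num_parents_mating=2,
            initial_population=torch_ga.population_weights,
            # The wrapper's own signature has exactly 2 parameters;
            # self comes from the enclosing scope.
            fitness_func=lambda solution, idx: self.fitness_func(solution, idx),
        )
        ga_instance.run()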
Q: Python object-oriented programming. I need to make my code shorter; I'm repeating myself quite a bit

I'm a few weeks into learning Python and I'm trying to improve, but my code looks very messy and way too long-winded; I'm pretty sure I could greatly reduce the code used here. My code is just a very simple banking program that can select an account and deposit, withdraw, and transfer money, etc. I repeat myself quite often when asking the user which account they want to select, and I'm wondering if I can cut down on some of this. Any tips would be greatly appreciated. My code is as follows:

class BankAcccount:
    def __init__(self, acc_number, balance):
        self.acc_number = acc_number
        self.balance = balance

    def get_balance(self):
        return self.balance

    def get_account(self):
        return self.acc_number

    def deposit(self, amount):
        self.balance += amount
        self.get_balance()

    def withdraw(self, amount):
        if self.balance >= amount:
            self.balance -= amount
            print("\n You Withdrew:", amount)
        else:
            print("\n Insufficient balance ")

    def transfer(self, BankAccount, amount):
        self.balance = self.balance - amount
        BankAccount.balance = BankAccount.balance + amount

def main():
    print("select an option:\n1-View Balance\n2-Deposit\n3-Withdrawal\n4-Transfer\n5-Exit")
    op = int(input("Enter your option: "))
    Account1 = BankAcccount(1, 1000)
    Account2 = BankAcccount(2, 3000)
    if op == 1:
        account_choice = input("select account number:")
        if account_choice == "1":
            Account = Account1
            print("Account 1 selected")
        elif account_choice == "2":
            Account = Account2
            print("Account 2 selected")
        else:
            print("no acc")
        print("Account Balance", Account.get_balance())
        print("*"*30)
    elif op == 2:
        account_choice = input("select account number:")
        if account_choice == "1":
            Account = Account1
            print("Account 1 selected")
        elif account_choice == "2":
            Account = Account2
            print("Account 2 selected")
        else:
            print("no acc")
        x = float(input("Enter a deposit amount for account: "))
        Account.deposit(x)
        print("Account new Balance", Account.get_balance())
        print("*"*30)
    elif op == 3:
        account_choice = input("select account number:")
        if account_choice == "1":
            Account = Account1
            print("Account 1 selected")
        elif account_choice == "2":
            Account = Account2
            print("Account 2 selected")
        else:
            print("no acc")
        y = float(input("Enter a Withdraw amount for account: "))
        Account.withdraw(y)
        print("Account new Balance", Account.get_balance())
        print("*"*30)
    elif op == 4:
        Account_from = input("Which account would you like to transfer from ")
        if Account_from == "1":
            acc_from = Account1
        elif Account_from == "2":
            acc_from = Account2
        else:
            print("no acc")
        Account_to = input("Which account would you like to transfer to ")
        if Account_to == "1":
            acc_to = Account1
        elif Account_to == "2":
            acc_to = Account2
        else:
            print("no acc")
        trans_amount = int(input("Enter transfer amount"))
        acc_from.transfer(acc_to, trans_amount)
        print("Account 1 new Balance", Account1.get_balance())
        print("Account 2 new Balance", Account2.get_balance())
    else:
        print("Wrong option")

main()

I tried to make a function that I could call whenever I wanted to select an account, but I couldn't figure out how to do it.

A: There are quite a few things that can be improved, but a few baby steps:

You can move the inner if-elif blocks that determine the account outside of the option branches. That way you don't have to ask for the account separately in every branch.
You should handle the case where the user inputs an account number that is not 1 or 2. Your code only prints "no acc" and will then raise an error once it tries to run Account.get_balance(), because Account is not defined in that case.
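A sketch of the helper the question asks about — one function that performs account selection so the repeated if/elif blocks collapse. Storing the accounts in a dict keyed by the entered account number is an assumption, not something the original code does:

def select_account(accounts):
    # Ask for an account number until a valid one is entered.
    while True:
        choice = input("Select account number: ")
        if choice in accounts:
            print(f"Account {choice} selected")
            return accounts[choice]
        print("No such account, try again")

accounts = {"1": BankAcccount(1, 1000), "2": BankAcccount(2, 3000)}

account = select_account(accounts)    # for balance/deposit/withdraw
acc_from = select_account(accounts)   # for transfers
acc_to = select_account(accounts)

Looping until the input is valid also fixes the second point above: the function can only ever return an existing account.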
Q: Scrapy crawled 0 pages, scraped 0 items. What should I check to troubleshoot?

I'm trying to parse the posts of this website to collect their text for sentiment analysis. Here is the code that I'm working with.

# ~/dcscraper/dcscraper/spiders/spider.py
import scrapy
import pandas as pd

class dcscraper(scrapy.Spider):
    name = "dcscraper"

    def start_requests(self):
        start_urls = ["https://gall.dcinside.com/board/lists?id=bitcoins_new1&page=1"]
        for url in start_urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        # parse the links of the posts, join the links with
        # 'https://gall.dcinside.com/', and yield requests whose results
        # will be handled by the text_parse function
        for link in response.xpath('//*[@id="container"]/section/article/div/table/tbody/tr/td/a[contains(@href, "/board/view")'):
            yield scrapy.Request(url=response.urljoin(link), callback=self.text_parse)
        # take the part of the response URL after 'page=', convert it to an
        # integer, add 1, and rebuild the URL as next_page
        next_page = response.url.split('page=')[0] + 'page=' + str(int(response.url.split('page=')[1]) + 1)
        # if next_page differs from the response URL and the request response
        # of next_page is 200, yield it back into parse; otherwise stop
        if next_page != response.url and scrapy.Request(url=next_page).status == 200:
            yield scrapy.Request(url=next_page, callback=self.parse)

    # define a function text_parse
    def text_parse(self, response):
        for text in response.xpath('//*[@id="container"]/section/article/div[contains(@class, "view_content_wrap")]'):
            # the title of the post
            title = text.xpath('span[contains(@class, "title_subject")]')
            # the elements of the body of the post, joined with ' '
            body_text = ' '.join(text.xpath('div[contains(@class, "write_div")]/br').extract())
            # the date the post was made
            post_date = text.xpath('span[contains(@class, "gall_date")]')
            # append a dataframe row to '/home/luxiant/dcscraper/result/result.csv',
            # creating the file if it does not exist
            pd.DataFrame({'title': title, 'body_text': body_text, 'post_date': post_date}).to_csv('/home/luxiant/dcscraper/result/result.csv', mode='a', header=False, index=False)

# ~/dcscraper/dcscraper/settings.py
BOT_NAME = 'dcscraper'
SPIDER_MODULES = ['dcscraper.spiders']
NEWSPIDER_MODULE = 'dcscraper.spiders'
USER_AGENT = 'Googlebot/2.1 (+http://www.google.com/bot.html)'
ROBOTSTXT_OBEY = True
REQUEST_FINGERPRINTER_IMPLEMENTATION = '2.7'
TWISTED_REACTOR = 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'

and in the terminal:

cd dcscraper
scrapy crawl dcscraper -o ~/dcscraper/result/result.csv

and here is the log:

2022-11-22 15:57:53 [scrapy.utils.log] INFO: Scrapy 2.7.0 started (bot: dcscraper)
2022-11-22 15:57:53 [scrapy.utils.log] INFO: Versions: lxml 4.9.1.0, libxml2 2.10.3, cssselect 1.2.0, parsel 1.7.0, w3lib 2.0.1, Twisted 22.10.0, Python 3.10.8 (main, Nov 1 2022, 14:18:21) [GCC 12.2.0], pyOpenSSL 22.1.0 (OpenSSL 3.0.7 1 Nov 2022), cryptography 38.0.3, Platform Linux-5.15.78-1-MANJARO-x86_64-with-glibc2.36
2022-11-22 15:57:53 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'dcscraper',
 'EDITOR': '/usr/bin/nano',
 'NEWSPIDER_MODULE': 'dcscraper.spiders',
 'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
 'ROBOTSTXT_OBEY': True,
 'SPIDER_MODULES': ['dcscraper.spiders'],
 'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor',
 'USER_AGENT': 'Googlebot/2.1 (+http://www.google.com/bot.html)'}
2022-11-22 15:57:53 [asyncio] DEBUG: Using selector: EpollSelector
2022-11-22 15:57:53 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2022-11-22 15:57:53 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.unix_events._UnixSelectorEventLoop
2022-11-22 15:57:53 [scrapy.extensions.telnet] INFO: Telnet Password: 0ceb3c2ae12e2e05
2022-11-22 15:57:53 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats']
2022-11-22 15:57:53 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2022-11-22 15:57:53 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2022-11-22 15:57:53 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2022-11-22 15:57:53 [scrapy.core.engine] INFO: Spider opened
2022-11-22 15:57:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2022-11-22 15:57:53 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2022-11-22 15:57:53 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://gall.dcinside.com/robots.txt> (referer: None)
2022-11-22 15:57:53 [filelock] DEBUG: Attempting to acquire lock 140598032389680 on /home/luxiant/.cache/python-tldextract/3.10.8.final__usr__7d8fdf__tldextract-3.4.0/publicsuffix.org-tlds/de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-11-22 15:57:53 [filelock] DEBUG: Lock 140598032389680 acquired on /home/luxiant/.cache/python-tldextract/3.10.8.final__usr__7d8fdf__tldextract-3.4.0/publicsuffix.org-tlds/de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-11-22 15:57:53 [filelock] DEBUG: Attempting to release lock 140598032389680 on /home/luxiant/.cache/python-tldextract/3.10.8.final__usr__7d8fdf__tldextract-3.4.0/publicsuffix.org-tlds/de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-11-22 15:57:53 [filelock] DEBUG: Lock 140598032389680 released on /home/luxiant/.cache/python-tldextract/3.10.8.final__usr__7d8fdf__tldextract-3.4.0/publicsuffix.org-tlds/de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-11-22 15:57:53 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://gall.dcinside.com/board/lists?id=bitcoins_new1&page=1> (referer: None)
2022-11-22 15:57:53 [scrapy.core.scraper] ERROR: Spider error processing <GET https://gall.dcinside.com/board/lists?id=bitcoins_new1&page=1> (referer: None)
Traceback (most recent call last):
  File "/usr/lib/python3.10/site-packages/parsel/selector.py", line 423, in xpath
    result = xpathev(
  File "src/lxml/etree.pyx", line 1599, in lxml.etree._Element.xpath
  File "src/lxml/xpath.pxi", line 305, in lxml.etree.XPathElementEvaluator.__call__
  File "src/lxml/xpath.pxi", line 225, in lxml.etree._XPathEvaluatorBase._handle_result
lxml.etree.XPathEvalError: Invalid predicate

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.10/site-packages/scrapy/utils/defer.py", line 240, in iter_errback
    yield next(it)
  File "/usr/lib/python3.10/site-packages/scrapy/utils/python.py", line 338, in __next__
    return next(self.data)
  File "/usr/lib/python3.10/site-packages/scrapy/utils/python.py", line 338, in __next__
    return next(self.data)
  File "/usr/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 79, in process_sync
    for r in iterable:
  File "/usr/lib/python3.10/site-packages/scrapy/spidermiddlewares/offsite.py", line 29, in <genexpr>
    return (r for r in result or () if self._filter(r, spider))
  File "/usr/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 79, in process_sync
    for r in iterable:
  File "/usr/lib/python3.10/site-packages/scrapy/spidermiddlewares/referer.py", line 336, in <genexpr>
    return (self._set_referer(r, response) for r in result or ())
  File "/usr/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 79, in process_sync
    for r in iterable:
  File "/usr/lib/python3.10/site-packages/scrapy/spidermiddlewares/urllength.py", line 28, in <genexpr>
    return (r for r in result or () if self._filter(r, spider))
  File "/usr/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 79, in process_sync
    for r in iterable:
  File "/usr/lib/python3.10/site-packages/scrapy/spidermiddlewares/depth.py", line 32, in <genexpr>
    return (r for r in result or () if self._filter(r, response, spider))
  File "/usr/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 79, in process_sync
    for r in iterable:
  File "/home/luxiant/dcscraper/dcscraper/spiders/spider.py", line 14, in parse
    for link in response.xpath('//*[@id="container"]/section/article/div/table/tbody/tr/td/a[contains(@href, "/board/view")'):
  File "/usr/lib/python3.10/site-packages/scrapy/http/response/text.py", line 138, in xpath
    return self.selector.xpath(query, **kwargs)
  File "/usr/lib/python3.10/site-packages/parsel/selector.py", line 430, in xpath
    raise ValueError(f"XPath error: {exc} in
//*[@id="container"]/section/article/div/table/tbody/tr/td/a[contains(@href, "/board/view") 2022-11-22 15:57:53 [scrapy.core.engine] INFO: Closing spider (finished) 2022-11-22 15:57:53 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 505, 'downloader/request_count': 2, 'downloader/request_method_count/GET': 2, 'downloader/response_bytes': 33699, 'downloader/response_count': 2, 'downloader/response_status_count/200': 2, 'elapsed_time_seconds': 0.36847, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2022, 11, 22, 6, 57, 53, 838005), 'httpcompression/response_bytes': 169467, 'httpcompression/response_count': 2, 'log_count/DEBUG': 9, 'log_count/ERROR': 1, 'log_count/INFO': 10, 'memusage/max': 117219328, 'memusage/startup': 117219328, 'response_received_count': 2, 'robotstxt/request_count': 1, 'robotstxt/response_count': 1, 'robotstxt/response_status_count/200': 1, 'scheduler/dequeued': 1, 'scheduler/dequeued/memory': 1, 'scheduler/enqueued': 1, 'scheduler/enqueued/memory': 1, 'spider_exceptions/ValueError': 1, 'start_time': datetime.datetime(2022, 11, 22, 6, 57, 53, 469535)} 2022-11-22 15:57:53 [scrapy.core.engine] INFO: Spider closed (finished) What things should I check for the troubleshooting? I first thought that it was a matter of the element that I put, so I put the xpath of the element that I want to collect, resulting in the code that I'm just showing right now. Checking the debug log, I found that the parser does not read the adequate element. (referer: None) I think that might be one of the cause, but yet tried to deal with this. A: It tells you what the problem is in the logs. File "/home/luxiant/dcscraper/dcscraper/spiders/spider.py", line 14, in parse for link in response.xpath('//[@id="container"]/section/article/div/table/tbody/tr/td/a[contains(@href, "/board/view")'): File "/usr/lib/python3.10/site-packages/scrapy/http/response/text.py", line 138, in xpath return self.selector.xpath(query, **kwargs) ... raise ValueError(f"XPath error: {exc} in {query}") ValueError: XPath error: Invalid predicate in //[@id="container"]/section/article/div/table/tbody/tr/td/a[contains(@href, "/board/view") 2022-11-22 15:57:53 [scrapy.core.engine] INFO: Closing spider (finished) Which means you are not using correct syntax for your xpath selector on line 14 of your parse method. the issue is that you never close the last set of [] square brackets in your expression. It should look like this: for link in response.xpath('//*[@id="container"]/section/article/div/table/tbody/tr/td/a[contains(@href, "/board/view")]'): yield scrapy.Request(url=response.urljoin(link), callback=self.text_parse)
Scrapy crawled 0 pages, scraped 0 items. What things should I check for troubleshooting?
I'm trying to parse the post of this website to collect the texts for sentiment analysis. Here is the code that I'm working with. # ~/dcscraper/dcscraper/spiders/spider.py import scrapy import pandas as pd class dcscraper(scrapy.Spider): name = "dcscraper" def start_requests(self): start_urls = ["https://gall.dcinside.com/board/lists?id=bitcoins_new1&page=1"] for url in start_urls: yield scrapy.Request(url=url, callback=self.parse) def parse(self, response): # parse the links of the posts, join the links with 'https://gall.dcinside.com/', and yield the result which will be called by the text_parse function for link in response.xpath('//*[@id="container"]/section/article/div/table/tbody/tr/td/a[contains(@href, "/board/view")'): yield scrapy.Request(url=response.urljoin(link), callback=self.text_parse) # convert the last element of url of the response which is splitted by 'page=' into integer, and add 1 to it, and convert it to string again, and then combine the splitted urls[0] with string 'page=', and name the string next_page next_page = response.url.split('page=')[0] + 'page=' + str(int(response.url.split('page=')[1]) + 1) # if the next_page is not the same as the response url, and if the request reponse of next_page is 200, yield the next_page which will be called by the parse function, or stop the spider if next_page != response.url and scrapy.Request(url=next_page).status == 200: yield scrapy.Request(url=next_page, callback=self.parse) # define a function text_parse def text_parse(self, response): for text in response.xpath('//*[@id="container"]/section/article/div[contains(@class, "view_content_wrap")]'): # find an element of the post of the title and call this title title = text.xpath('span[contains(@class, "title_subject")]') # find an elements of the body of the post, join them with ' ', and name this body_text body_text = ' '.join(text.xpath('div[contains(@class, "write_div")]/br').extract()) # find an element of date posted, and name this post_date post_date = text.xpath('span[contains(@class, "gall_date")]') # make pandas dataframe with title, body_text, post_date and append this dataframe to a csv file in the file path '/home/luxiant/dcscraper/result/result.csv', or save this dataframe as a csv file to path '/home/luxiant/dcscraper/result/result.csv' if the file does not exist pd.DataFrame({'title':title , 'body_text':body_text, 'post_date':post_date}).to_csv('/home/luxiant/dcscraper/result/result.csv', mode='a', header=False, index=False) # ~/dcscraper/dcscraper/settings.py BOT_NAME = 'dcscraper' SPIDER_MODULES = ['dcscraper.spiders'] NEWSPIDER_MODULE = 'dcscraper.spiders' USER_AGENT = 'Googlebot/2.1 (+http://www.google.com/bot.html)' ROBOTSTXT_OBEY = True REQUEST_FINGERPRINTER_IMPLEMENTATION = '2.7' TWISTED_REACTOR = 'twisted.internet.asyncioreactor.AsyncioSelectorReactor' and in terminal, cd dcscraper scrapy crawl dcscraper -o ~/dcscraper/result/result.csv and here is the log. 
2022-11-22 15:57:53 [scrapy.utils.log] INFO: Scrapy 2.7.0 started (bot: dcscraper) 2022-11-22 15:57:53 [scrapy.utils.log] INFO: Versions: lxml 4.9.1.0, libxml2 2.10.3, cssselect 1.2.0, parsel 1.7.0, w3lib 2.0.1, Twisted 22.10.0, Python 3.10.8 (main, Nov 1 2022, 14:18:21) [GCC 12.2.0], pyOpenSSL 22.1.0 (OpenSSL 3.0.7 1 Nov 2022), cryptography 38.0.3, Platform Linux-5.15.78-1-MANJARO-x86_64-with-glibc2.36 2022-11-22 15:57:53 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'dcscraper', 'EDITOR': '/usr/bin/nano', 'NEWSPIDER_MODULE': 'dcscraper.spiders', 'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['dcscraper.spiders'], 'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor', 'USER_AGENT': 'Googlebot/2.1 (+http://www.google.com/bot.html)'} 2022-11-22 15:57:53 [asyncio] DEBUG: Using selector: EpollSelector 2022-11-22 15:57:53 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor 2022-11-22 15:57:53 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.unix_events._UnixSelectorEventLoop 2022-11-22 15:57:53 [scrapy.extensions.telnet] INFO: Telnet Password: 0ceb3c2ae12e2e05 2022-11-22 15:57:53 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.memusage.MemoryUsage', 'scrapy.extensions.feedexport.FeedExporter', 'scrapy.extensions.logstats.LogStats'] 2022-11-22 15:57:53 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware', 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats'] 2022-11-22 15:57:53 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware'] 2022-11-22 15:57:53 [scrapy.middleware] INFO: Enabled item pipelines: [] 2022-11-22 15:57:53 [scrapy.core.engine] INFO: Spider opened 2022-11-22 15:57:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 2022-11-22 15:57:53 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023 2022-11-22 15:57:53 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://gall.dcinside.com/robots.txt> (referer: None) 2022-11-22 15:57:53 [filelock] DEBUG: Attempting to acquire lock 140598032389680 on /home/luxiant/.cache/python-tldextract/3.10.8.final__usr__7d8fdf__tldextract-3.4.0/publicsuffix.org-tlds/de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock 2022-11-22 15:57:53 [filelock] DEBUG: Lock 140598032389680 acquired on 
/home/luxiant/.cache/python-tldextract/3.10.8.final__usr__7d8fdf__tldextract-3.4.0/publicsuffix.org-tlds/de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock 2022-11-22 15:57:53 [filelock] DEBUG: Attempting to release lock 140598032389680 on /home/luxiant/.cache/python-tldextract/3.10.8.final__usr__7d8fdf__tldextract-3.4.0/publicsuffix.org-tlds/de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock 2022-11-22 15:57:53 [filelock] DEBUG: Lock 140598032389680 released on /home/luxiant/.cache/python-tldextract/3.10.8.final__usr__7d8fdf__tldextract-3.4.0/publicsuffix.org-tlds/de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock 2022-11-22 15:57:53 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://gall.dcinside.com/board/lists?id=bitcoins_new1&page=1> (referer: None) 2022-11-22 15:57:53 [scrapy.core.scraper] ERROR: Spider error processing <GET https://gall.dcinside.com/board/lists?id=bitcoins_new1&page=1> (referer: None) Traceback (most recent call last): File "/usr/lib/python3.10/site-packages/parsel/selector.py", line 423, in xpath result = xpathev( File "src/lxml/etree.pyx", line 1599, in lxml.etree._Element.xpath File "src/lxml/xpath.pxi", line 305, in lxml.etree.XPathElementEvaluator.__call__ File "src/lxml/xpath.pxi", line 225, in lxml.etree._XPathEvaluatorBase._handle_result lxml.etree.XPathEvalError: Invalid predicate During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib/python3.10/site-packages/scrapy/utils/defer.py", line 240, in iter_errback yield next(it) File "/usr/lib/python3.10/site-packages/scrapy/utils/python.py", line 338, in __next__ return next(self.data) File "/usr/lib/python3.10/site-packages/scrapy/utils/python.py", line 338, in __next__ return next(self.data) File "/usr/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 79, in process_sync for r in iterable: File "/usr/lib/python3.10/site-packages/scrapy/spidermiddlewares/offsite.py", line 29, in <genexpr> return (r for r in result or () if self._filter(r, spider)) File "/usr/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 79, in process_sync for r in iterable: File "/usr/lib/python3.10/site-packages/scrapy/spidermiddlewares/referer.py", line 336, in <genexpr> return (self._set_referer(r, response) for r in result or ()) File "/usr/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 79, in process_sync for r in iterable: File "/usr/lib/python3.10/site-packages/scrapy/spidermiddlewares/urllength.py", line 28, in <genexpr> return (r for r in result or () if self._filter(r, spider)) File "/usr/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 79, in process_sync for r in iterable: File "/usr/lib/python3.10/site-packages/scrapy/spidermiddlewares/depth.py", line 32, in <genexpr> return (r for r in result or () if self._filter(r, response, spider)) File "/usr/lib/python3.10/site-packages/scrapy/core/spidermw.py", line 79, in process_sync for r in iterable: File "/home/luxiant/dcscraper/dcscraper/spiders/spider.py", line 14, in parse for link in response.xpath('//*[@id="container"]/section/article/div/table/tbody/tr/td/a[contains(@href, "/board/view")'): File "/usr/lib/python3.10/site-packages/scrapy/http/response/text.py", line 138, in xpath return self.selector.xpath(query, **kwargs) File "/usr/lib/python3.10/site-packages/parsel/selector.py", line 430, in xpath raise ValueError(f"XPath error: {exc} in {query}") ValueError: XPath error: Invalid predicate in 
//*[@id="container"]/section/article/div/table/tbody/tr/td/a[contains(@href, "/board/view") 2022-11-22 15:57:53 [scrapy.core.engine] INFO: Closing spider (finished) 2022-11-22 15:57:53 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 505, 'downloader/request_count': 2, 'downloader/request_method_count/GET': 2, 'downloader/response_bytes': 33699, 'downloader/response_count': 2, 'downloader/response_status_count/200': 2, 'elapsed_time_seconds': 0.36847, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2022, 11, 22, 6, 57, 53, 838005), 'httpcompression/response_bytes': 169467, 'httpcompression/response_count': 2, 'log_count/DEBUG': 9, 'log_count/ERROR': 1, 'log_count/INFO': 10, 'memusage/max': 117219328, 'memusage/startup': 117219328, 'response_received_count': 2, 'robotstxt/request_count': 1, 'robotstxt/response_count': 1, 'robotstxt/response_status_count/200': 1, 'scheduler/dequeued': 1, 'scheduler/dequeued/memory': 1, 'scheduler/enqueued': 1, 'scheduler/enqueued/memory': 1, 'spider_exceptions/ValueError': 1, 'start_time': datetime.datetime(2022, 11, 22, 6, 57, 53, 469535)} 2022-11-22 15:57:53 [scrapy.core.engine] INFO: Spider closed (finished) What things should I check for the troubleshooting? I first thought that it was a matter of the element that I put, so I put the xpath of the element that I want to collect, resulting in the code that I'm just showing right now. Checking the debug log, I found that the parser does not read the adequate element. (referer: None) I think that might be one of the cause, but yet tried to deal with this.
[ "It tells you what the problem is in the logs.\n\nFile \"/home/luxiant/dcscraper/dcscraper/spiders/spider.py\", line 14, in parse for link in response.xpath('//[@id=\"container\"]/section/article/div/table/tbody/tr/td/a[contains(@href, \"/board/view\")'):\nFile \"/usr/lib/python3.10/site-packages/scrapy/http/response/text.py\", line 138, in xpath\nreturn self.selector.xpath(query, **kwargs)\n...\nraise ValueError(f\"XPath error: {exc} in {query}\")\nValueError: XPath error: Invalid predicate in //[@id=\"container\"]/section/article/div/table/tbody/tr/td/a[contains(@href, \"/board/view\")\n2022-11-22 15:57:53 [scrapy.core.engine] INFO: Closing spider (finished)\n\nWhich means you are not using correct syntax for your xpath selector on line 14 of your parse method.\nthe issue is that you never close the last set of [] square brackets in your expression. It should look like this:\nfor link in response.xpath('//*[@id=\"container\"]/section/article/div/table/tbody/tr/td/a[contains(@href, \"/board/view\")]'):\n yield scrapy.Request(url=response.urljoin(link), callback=self.text_parse)\n\n" ]
[ 0 ]
[]
[]
[ "parsing", "python", "scrapy", "web_crawler" ]
stackoverflow_0074528684_parsing_python_scrapy_web_crawler.txt
Q: Trying to reorganize a random string so that all the letters are grouped up in the order that a different random string dictates I am creating a matrix that will list these a random string of letters that are used for colors and they are to be organized so that, for example all of the G's will be together and then R, etc. legoString = "YRRBRBYBGRBRGRRRYRGBRBGBBRBG" shuffledColString = "YRBG" for letter in shuffledColString: for index in range(len(legoString)): if letter == legoString[index]: legoString = legoString.replace(legoString[index], "") + "R" elif letter == legoString[index]: legoString = legoString.replace(legoString[index], "") + "G" elif letter == legoString[index]: legoString = legoString.replace(legoString[index], "") + "B" elif letter == legoString[index]: legoString = legoString.replace(legoString[index], "") + "C" elif letter == legoString[index]: legoString = legoString.replace(legoString[index], "") + "Y" A: try this legoString = "YRRBRBYBGRBRGRRRYRGBRBGBBRBG" shuffledColString = "YRBG" ns="".join(sorted(legoString,key=lambda x:shuffledColString.index(x))) print(ns) #OR IF ABOVE IS NOT CLEAR ns="" for c in shuffledColString: ns=ns+ c*legoString.count(c) print(ns)
Trying to reorganize a random string so that all the letters are grouped up in the order that a different random string dictates
I am creating a matrix that will list a random string of letters that are used for colors, and they are to be organized so that, for example, all of the G's will be together, then the R's, etc. legoString = "YRRBRBYBGRBRGRRRYRGBRBGBBRBG" shuffledColString = "YRBG" for letter in shuffledColString: for index in range(len(legoString)): if letter == legoString[index]: legoString = legoString.replace(legoString[index], "") + "R" elif letter == legoString[index]: legoString = legoString.replace(legoString[index], "") + "G" elif letter == legoString[index]: legoString = legoString.replace(legoString[index], "") + "B" elif letter == legoString[index]: legoString = legoString.replace(legoString[index], "") + "C" elif letter == legoString[index]: legoString = legoString.replace(legoString[index], "") + "Y"
[ "try this\nlegoString = \"YRRBRBYBGRBRGRRRYRGBRBGBBRBG\"\nshuffledColString = \"YRBG\"\n\nns=\"\".join(sorted(legoString,key=lambda x:shuffledColString.index(x)))\nprint(ns)\n\n#OR IF ABOVE IS NOT CLEAR\nns=\"\"\nfor c in shuffledColString:\n ns=ns+ c*legoString.count(c)\n\nprint(ns)\n \n\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074578721_python.txt
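A hedged variant of the accepted idea: precomputing the ordering in a dict avoids calling shuffledColString.index() once per character. That only matters for much longer inputs, but it reads just as clearly:

legoString = "YRRBRBYBGRBRGRRRYRGBRBGBBRBG"
shuffledColString = "YRBG"

order = {c: i for i, c in enumerate(shuffledColString)}  # one lookup table
ns = "".join(sorted(legoString, key=order.__getitem__))
print(ns)  # 'Y'*3 + 'R'*11 + 'B'*9 + 'G'*5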
Q: How to index the same char in different locations correctly I was writing a program to loop through a string and index the capital letters in the string, but it keeps returning the first index of a repeated character. For example: AbcDeFgAiJ will return [0, 3, 5, 0, 9] instead of [0, 3, 5, 7, 9]. Here is the code: def capitals(word): arr = [] for i in word: if i.isupper(): arr.append(word.index(i)) return arr A: str.index returns the first index found, so when your string contains, for example, two A's, it will always return the index of the first A. Use enumerate() instead: word = "AbcDeFgAiJ" out = [idx for idx, ch in enumerate(word) if ch.isupper()] print(out) Prints: [0, 3, 5, 7, 9]
How to index the same char in different locations correctly
I was writing a program to loop through a string and index the capital letters in the string, but it keeps returning the first index of a repeated character. For example: AbcDeFgAiJ will return [0, 3, 5, 0, 9] instead of [0, 3, 5, 7, 9]. Here is the code: def capitals(word): arr = [] for i in word: if i.isupper(): arr.append(word.index(i)) return arr
[ "str.index will return first index found, so when you have for example two A in your string, it will return the index of first A.\nTry to use enumerate() instead:\nword = \"AbcDeFgAiJ\"\n\nout = [idx for idx, ch in enumerate(word) if ch.isupper()]\nprint(out)\n\nPrints:\n[0, 3, 5, 7, 9]\n\n" ]
[ 0 ]
[]
[]
[ "indexing", "list", "loops", "python", "string" ]
stackoverflow_0074578789_indexing_list_loops_python_string.txt
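For completeness, a hedged regex alternative to the enumerate() answer — re.finditer yields one match object per occurrence, so repeated letters each keep their own index. Note that [A-Z] only matches ASCII capitals, unlike str.isupper():

import re

def capitals(word):
    # One match per capital letter; .start() is its index in the string.
    return [m.start() for m in re.finditer(r"[A-Z]", word)]

print(capitals("AbcDeFgAiJ"))  # [0, 3, 5, 7, 9]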
Q: Break a loop externally with the button that started the loop I want to break a loop with the same button that I started the loop with. For all intents and purposes, it does what I need it to, up until I click the button again. If I hit the button again, it crashes. Right now with the debug visual setup, I can hit q to break the loop. But down the road I obviously want to turn that off and just have it all go through the button instead. I should also note that the button does not update unless the window is interacted with/moved. This goes for when the button is pushed. It doesn't crash until you move the window. def bttnup(): global is_on #starts autoaccept if is_on: button.config(image = selected, bg=selected_bg ) start() is_on = False else: #Stops autoaccept button.config(image = unselected, bg=unselected_bg, ) is_on = True #Object detection and keystroke def start(): while(True): screenshot = wincap.get_screenshot() points = vision_accept.find(screenshot, .325, 'rectangles') if cv.waitKey(1) == ord('q'): cv.destroyAllWindows() break #Button button = Button(window, image=unselected, command=bttnup) button.pack() I have also tried: def bttnup(): #starts autoaccept if button['bg'] == 'unselected_bg': threading.join(1) button.config(image=selected, bg=selected_bg ) start() return #Stops autoaccept if button['bg'] == 'selected_bg': threading.start() button.config(image = unselected, bg=unselected_bg, ) return #start object detection def start(): i = 1 while button['bg'] == 'selected_bg': i = i+1 # get an updated image of the game screenshot = wincap.get_screenshot() # screenshot = cv.resize(screenshot, (800, 600)) points = vision_accept.find(screenshot, .325, 'rectangles') sleep(5) #Button! button = Button(window, image=unselected, command=lambda: bttnup()) A: So I went and changed a majority of the code to use .after instead of using a while statement. Its not as clean looking, but it does what it needs to. created a function for on, and one for off and have the button cycle when the button is fixed. when the on button is pushed it calls to the start function which then cycles every second. # initialize the WindowCapture class wincap = WindowCapture() # Initialize the Vision class vision_accept = Vision('accept.jpg') #window parameters window = Tk() window.resizable(False, False) window.title('AutoAccept') unselected = PhotoImage(file='Button_unselected.png') selected = PhotoImage(file='matching.png') unselected_bg='white' selected_bg='black' is_on = True def start_on(): global is_on #starts autoaccept button.config(image = selected, bg=selected_bg, command = start_off ) start() is_on = True def start_off(): global is_on button.config(image = unselected, bg=unselected_bg, command = start_on ) is_on = False #start object detection def start(): if is_on: # get an updated image of the game screenshot = wincap.get_screenshot() points = vision_accept.find(screenshot, .32, 'rectangles') window.after(500, start) button = Button(window, image=unselected, command=start_on ) button.pack() window.mainloop()
Break a loop externally with the button that started the loop
I want to break a loop with the same button that I started the loop with. For all intents and purposes, it does what I need it to, up until I click the button again. If I hit the button again, it crashes. Right now with the debug visual setup, I can hit q to break the loop. But down the road I obviously want to turn that off and just have it all go through the button instead. I should also note that the button does not update unless the window is interacted with/moved. This goes for when the button is pushed. It doesn't crash until you move the window. def bttnup(): global is_on #starts autoaccept if is_on: button.config(image = selected, bg=selected_bg ) start() is_on = False else: #Stops autoaccept button.config(image = unselected, bg=unselected_bg, ) is_on = True #Object detection and keystroke def start(): while(True): screenshot = wincap.get_screenshot() points = vision_accept.find(screenshot, .325, 'rectangles') if cv.waitKey(1) == ord('q'): cv.destroyAllWindows() break #Button button = Button(window, image=unselected, command=bttnup) button.pack() I have also tried: def bttnup(): #starts autoaccept if button['bg'] == 'unselected_bg': threading.join(1) button.config(image=selected, bg=selected_bg ) start() return #Stops autoaccept if button['bg'] == 'selected_bg': threading.start() button.config(image = unselected, bg=unselected_bg, ) return #start object detection def start(): i = 1 while button['bg'] == 'selected_bg': i = i+1 # get an updated image of the game screenshot = wincap.get_screenshot() # screenshot = cv.resize(screenshot, (800, 600)) points = vision_accept.find(screenshot, .325, 'rectangles') sleep(5) #Button! button = Button(window, image=unselected, command=lambda: bttnup())
[ "So I went and changed a majority of the code to use .after instead of using a while statement. Its not as clean looking, but it does what it needs to.\ncreated a function for on, and one for off and have the button cycle when the button is fixed. when the on button is pushed it calls to the start function which then cycles every second.\n# initialize the WindowCapture class\nwincap = WindowCapture()\n\n# Initialize the Vision class\nvision_accept = Vision('accept.jpg')\n\n#window parameters\nwindow = Tk()\nwindow.resizable(False, False)\nwindow.title('AutoAccept')\nunselected = PhotoImage(file='Button_unselected.png')\nselected = PhotoImage(file='matching.png')\nunselected_bg='white'\nselected_bg='black'\nis_on = True\n\n\ndef start_on():\n global is_on\n \n #starts autoaccept\n button.config(image = selected,\n bg=selected_bg,\n command = start_off\n )\n start()\n is_on = True\n\ndef start_off():\n global is_on\n button.config(image = unselected,\n bg=unselected_bg,\n command = start_on\n )\n is_on = False\n\n\n#start object detection\ndef start():\n if is_on:\n\n # get an updated image of the game\n screenshot = wincap.get_screenshot()\n points = vision_accept.find(screenshot, .32, 'rectangles')\n\n window.after(500, start)\n\nbutton = Button(window,\n image=unselected,\n command=start_on\n )\n\nbutton.pack()\nwindow.mainloop()\n\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter_button", "while_loop" ]
stackoverflow_0074565110_python_tkinter_button_while_loop.txt
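The fix above, distilled to its core pattern: replace the blocking while loop with a self-rescheduling after() callback gated on a flag that the button toggles. A minimal, self-contained sketch with a print() standing in for the screenshot/detection work:

import tkinter as tk

window = tk.Tk()
running = False

def tick():
    if running:
        print("scanning...")      # screenshot + detection would go here
        window.after(500, tick)   # re-schedule instead of looping

def toggle():
    global running
    running = not running
    button.config(text="Stop" if running else "Start")
    if running:
        tick()

button = tk.Button(window, text="Start", command=toggle)
button.pack()
window.mainloop()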
Q: Transformation and structuring of Excel file in Python I am asking for help with the following problem. I recently started learning Python, and I don't have enough experience to solve it yet. I need to write a Python script that transforms an Excel spreadsheet into a flat view for further work and analytics. Source table: input ex.xlsx Example of expected result: out ex.xlsx I will be very grateful for any help! A: Consider learning more about the pandas DataFrame. You can use pd.read_excel() (or pd.read_csv() for CSV files) to read the data into memory and then process it with pandas.
Transformation and structuring of Excel file in Python
I am asking for help with the following problem. I recently started learning Python, and I don't have enough experience to solve it yet. I need to write a Python script that transforms an Excel spreadsheet into a flat view for further work and analytics. Source table: input ex.xlsx Example of expected result: out ex.xlsx I will be very grateful for any help!
[ "Consider to find out more about pandas dataframe. We can then use pd.read_excel() and pd.read_csv() to read the data into memory and process the data using pandas.\n" ]
[ 0 ]
[]
[]
[ "excel", "openpyxl", "pandas", "python", "python_3.x" ]
stackoverflow_0074569013_excel_openpyxl_pandas_python_python_3.x.txt
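Since the workbook itself is not visible in this record, here is only a hedged sketch of the usual flattening pattern: read the sheet with pandas, then unpivot it with melt(). The column layout below is a placeholder — adjust it to the real headers of input ex.xlsx:

import pandas as pd

df = pd.read_excel("input ex.xlsx")  # reading .xlsx requires the openpyxl engine

# Hypothetical layout: the first column identifies each row, the rest hold values.
flat = df.melt(id_vars=df.columns[0], var_name="field", value_name="value")

flat.to_excel("out ex.xlsx", index=False)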
Q: How to remove certain items that are not duplicate in 2 lists? For example (here's the code I'm working on): from bs4 import BeautifulSoup from string import digits import requests joke_of_the_day = [] a = [] url_joke_of_the_day = "https://www.womansday.com/life/entertainment/a38635408/corny-jokes/" page_joke_of_the_day = requests.get(url_joke_of_the_day) soup_joke_of_the_day = BeautifulSoup(page_joke_of_the_day.content, "html.parser") content_joke_of_the_day = soup_joke_of_the_day.find("div", class_="article-body-content article-body standard-body-content css-z6i669 ewisyje5") goodcontents_joke_of_the_day = content_joke_of_the_day.find_all("li") a.append(goodcontents_joke_of_the_day) #print(a) for goodcontent_joke_of_the_day in goodcontents_joke_of_the_day: joke_of_the_day1 = goodcontent_joke_of_the_day.find("strong") joke_of_the_day2 = str(joke_of_the_day1).replace("<strong>","") joke_of_the_day3 = joke_of_the_day2.replace("</strong>","") joke_of_the_day4 = joke_of_the_day3.replace("<br>","") joke_of_the_day5 = joke_of_the_day4.replace("<br/>","") joke_of_the_day.append(joke_of_the_day5) I'm trying to web scrape jokes for a project I'm working on, however the response to the jokes are outside of . An example: <li> ::marker <strong> Why did the bay strawberry cry?</strong> <br> **"His parents were in a jam."** </li> I was thinking on creating two lists and removing duplicates however that didn't work, here's the code to remove duplicates: for i in a[:]: if i in joke_of_the_day: a.remove(i) I'm open to any suggestions, I just need the bold-part of the code A: To get question + responses you can use next example: import requests from bs4 import BeautifulSoup url = "https://www.womansday.com/life/entertainment/a38635408/corny-jokes/" soup = BeautifulSoup(requests.get(url).content, "html.parser") for n, joke in enumerate(soup.select("li:has(strong)"), 1): question = joke.strong.text response = joke.br.find_next(text=True) print(f"Joke nr.{n}") print(question) print(response) print("-" * 80) Prints: ... -------------------------------------------------------------------------------- Joke nr.99 What did the tomato say to the other tomato during a race? "Ketchup." -------------------------------------------------------------------------------- Joke nr.100 What has four wheels and flies? A garbage truck. -------------------------------------------------------------------------------- Joke nr.101 Why didn't the skeleton get a prom date? He didn't have the guts to ask anyone. --------------------------------------------------------------------------------
How to remove certain items that are not duplicate in 2 lists?
For example (here's the code I'm working on): from bs4 import BeautifulSoup from string import digits import requests joke_of_the_day = [] a = [] url_joke_of_the_day = "https://www.womansday.com/life/entertainment/a38635408/corny-jokes/" page_joke_of_the_day = requests.get(url_joke_of_the_day) soup_joke_of_the_day = BeautifulSoup(page_joke_of_the_day.content, "html.parser") content_joke_of_the_day = soup_joke_of_the_day.find("div", class_="article-body-content article-body standard-body-content css-z6i669 ewisyje5") goodcontents_joke_of_the_day = content_joke_of_the_day.find_all("li") a.append(goodcontents_joke_of_the_day) #print(a) for goodcontent_joke_of_the_day in goodcontents_joke_of_the_day: joke_of_the_day1 = goodcontent_joke_of_the_day.find("strong") joke_of_the_day2 = str(joke_of_the_day1).replace("<strong>","") joke_of_the_day3 = joke_of_the_day2.replace("</strong>","") joke_of_the_day4 = joke_of_the_day3.replace("<br>","") joke_of_the_day5 = joke_of_the_day4.replace("<br/>","") joke_of_the_day.append(joke_of_the_day5) I'm trying to web scrape jokes for a project I'm working on, however the responses to the jokes sit outside of the <strong> tag. An example: <li> ::marker <strong> Why did the baby strawberry cry?</strong> <br> **"His parents were in a jam."** </li> I was thinking of creating two lists and removing duplicates, however that didn't work; here's the code to remove duplicates: for i in a[:]: if i in joke_of_the_day: a.remove(i) I'm open to any suggestions, I just need the bold part (the response text).
[ "To get question + responses you can use next example:\nimport requests\nfrom bs4 import BeautifulSoup\n\nurl = \"https://www.womansday.com/life/entertainment/a38635408/corny-jokes/\"\nsoup = BeautifulSoup(requests.get(url).content, \"html.parser\")\n\nfor n, joke in enumerate(soup.select(\"li:has(strong)\"), 1):\n question = joke.strong.text\n response = joke.br.find_next(text=True)\n\n print(f\"Joke nr.{n}\")\n print(question)\n print(response)\n print(\"-\" * 80)\n\nPrints:\n\n...\n\n--------------------------------------------------------------------------------\nJoke nr.99\nWhat did the tomato say to the other tomato during a race? \n\"Ketchup.\"\n--------------------------------------------------------------------------------\nJoke nr.100\nWhat has four wheels and flies?\nA garbage truck.\n--------------------------------------------------------------------------------\nJoke nr.101\nWhy didn't the skeleton get a prom date?\nHe didn't have the guts to ask anyone. \n--------------------------------------------------------------------------------\n\n" ]
[ 0 ]
[]
[]
[ "list", "python", "web_scraping" ]
stackoverflow_0074578792_list_python_web_scraping.txt
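A small follow-up to the answer above: recent BeautifulSoup releases deprecate the text= keyword on the find methods in favour of string=, and the scraped responses carry stray whitespace, so the extraction line can be written as below (a hedged tweak; behaviour is otherwise unchanged):

# string= replaces the deprecated text=; .strip() removes trailing whitespace
response = joke.br.find_next(string=True).strip()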
Q: Python script giving back error when attempting to load from file using JSON I'm attempting to load two dictionaries using JSON. I save them with the following function: def save_file(self): print(save_dict['filename']) with open(save_dict['filename'], 'w') as f: save_list = [save_dict, cad_dict] json.dump(save_list, f) I then attempt to load them using this function: def open_file(self): global save_dict global cad_dict filename = fd.askopenfilename( initialdir=r'C:\Users\Tim\Desktop\Code\Python\Surveying_Program', title='Browse', filetype = (('job files', '*.saj'), ("all files","*.*")) ) with open(filename, 'r', encoding='utf-8') as f: save_list = json.load(f) save_dict = save_list[0] cad_dict = save_list[1] I end up with the following error: Exception in Tkinter callback Traceback (most recent call last): File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\tkinter\__init__.py", line 1921, in __call__ return self.func(*args) File "c:\Users\Tim\Desktop\Code\Python\Surveying_Program\survey_amateur_main.py", line 187, in open_file save_list = json.load(f) File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\json\__init__.py", line 293, in load return loads(fp.read(), File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\json\__init__.py", line 346, in loads return _default_decoder.decode(s) File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\json\decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\json\decoder.py", line 355, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 64467 (char 64466) PS C:\Users\Tim\Desktop\Code\Python\Surveying_Program> For reference, here is an example of the file I'm attempting to load: [{"filename": "C:/Users/Tim/Desktop/Code/Python/Surveying_Program/dxf_practice/job_file.saj", "point_dict": {"filename": "", "1": {"northing": 780443.4568, "easting": 2039690.279, "elevation": 286.5, "description": "Walls", "time": "10:25:11", "date": "11/25/2022"}}}, {"cad_filename": "C:/Users/Tim/Desktop/Code/Python/Surveying_Program/dxf_practice/one_object_dxf.dxf", "dxf_extents": [[1125.591828718476, 5050.379053781544], [1131.50998529643, 5054.804957938229]], "dxf_lines": [["0", [1125.591828718476, 5050.379053781544, 0.0], [1131.50998529643, 5054.804957938229, 0.0]]], "dxf_plines": [], "dxf_mtext": [], "dxf_circles": [], "dxf_arcs": []}] A: I didn't look closely at my data and my ezdxf module was putting the coordinates as a an object that evidently doesn't work well with json. Converted those objects to strings and it now works.
Python script giving back error when attempting to load from file using JSON
I'm attempting to load two dictionaries using JSON. I save them with the following function: def save_file(self): print(save_dict['filename']) with open(save_dict['filename'], 'w') as f: save_list = [save_dict, cad_dict] json.dump(save_list, f) I then attempt to load them using this function: def open_file(self): global save_dict global cad_dict filename = fd.askopenfilename( initialdir=r'C:\Users\Tim\Desktop\Code\Python\Surveying_Program', title='Browse', filetype = (('job files', '*.saj'), ("all files","*.*")) ) with open(filename, 'r', encoding='utf-8') as f: save_list = json.load(f) save_dict = save_list[0] cad_dict = save_list[1] I end up with the following error: Exception in Tkinter callback Traceback (most recent call last): File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\tkinter\__init__.py", line 1921, in __call__ return self.func(*args) File "c:\Users\Tim\Desktop\Code\Python\Surveying_Program\survey_amateur_main.py", line 187, in open_file save_list = json.load(f) File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\json\__init__.py", line 293, in load return loads(fp.read(), File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\json\__init__.py", line 346, in loads return _default_decoder.decode(s) File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\json\decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\json\decoder.py", line 355, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 64467 (char 64466) PS C:\Users\Tim\Desktop\Code\Python\Surveying_Program> For reference, here is an example of the file I'm attempting to load: [{"filename": "C:/Users/Tim/Desktop/Code/Python/Surveying_Program/dxf_practice/job_file.saj", "point_dict": {"filename": "", "1": {"northing": 780443.4568, "easting": 2039690.279, "elevation": 286.5, "description": "Walls", "time": "10:25:11", "date": "11/25/2022"}}}, {"cad_filename": "C:/Users/Tim/Desktop/Code/Python/Surveying_Program/dxf_practice/one_object_dxf.dxf", "dxf_extents": [[1125.591828718476, 5050.379053781544], [1131.50998529643, 5054.804957938229]], "dxf_lines": [["0", [1125.591828718476, 5050.379053781544, 0.0], [1131.50998529643, 5054.804957938229, 0.0]]], "dxf_plines": [], "dxf_mtext": [], "dxf_circles": [], "dxf_arcs": []}]
[ "I didn't look closely at my data and my ezdxf module was putting the coordinates as a an object that evidently doesn't work well with json. Converted those objects to strings and it now works.\n" ]
[ 0 ]
[]
[]
[ "json", "python" ]
stackoverflow_0074574489_json_python.txt
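A hedged sketch of the general fix for this class of problem: pass default= to json.dump so objects json cannot encode natively (such as ezdxf point types) are stringified instead of corrupting the file. The data below is an illustrative stand-in for save_dict and cad_dict:

import json

save_list = [{"filename": "job_file.saj"}, {"dxf_lines": []}]  # stand-in data
with open("job_file.saj", "w") as f:
    # default=str is called only for values json can't serialize on its own.
    json.dump(save_list, f, default=str)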
Q: Find the closest point to a line from an array of points I have the problem of finding the point which is closest to a line from an array of x- and y-data. The line is semi-infinite originating from the origin at (0,0) and running into the direction of a given angle. The x,y data of the points are given in relation to the origin. How do I find the closest point (and its distance) to the line in line direction (not opposite)? This is an example of the data I have: import numpy as np import matplotlib.pyplot as plt def main(): depth = np.random.random((100))*20+50 angle = np.linspace(0, 2*np.pi, 100) x,y = depth2xy(depth, angle) line = np.random.random_sample()*2*np.pi # fig, ax = plt.subplots(subplot_kw={'projection': 'polar'}) plt.scatter(x, y) plt.plot([0,100*np.cos(line)], [0, 100*np.sin(line)], markersize=10, color = "r") plt.show() def depth2xy(depth, angle): x, y = np.zeros(len(depth)), np.zeros(len(depth)) for i in range(len(depth)): x[i] = depth[i]*np.cos(angle[i]) y[i] = depth[i]*np.sin(angle[i]) return x,y if __name__ == "__main__": main() I could try a brute force approach, iterating over different distances along the line to find the ultimate smallest distance. But as time efficiency is critical my case and the algorithm would not perform as well as I think it could, I would rather try an analytical approach. I also thought about scipy.spatial.distance, but I am not sure how this would work for a line. A: Let P be a point from your know data set. Let Q be the projection of this point on the line. You can use an analytic approach to determine the exact location of Q: OQ is the segment from the origin to the Q point. It is aligned to the line. PQ is the distance of the point P to the line. from geometry, the dot product between QP and OQ is zero (the two segments are orthogonal to each other). From this equation we can compute the point Q. After that, you simply compute all distances and find the shortest one. I'm going to use SymPy for the analytical part, Numpy for the numerical part and Matplotlib for plotting: from sympy import * import numpy as np import matplotlib.pyplot as plt xq, xp, yq, yp, m = symbols("x_Q, x_P, y_Q, y_P, m") A = Matrix([xq - xp, yq - yp]) B = Matrix([xq, yq]) # this equations contains two unkowns: xq, yq eq = A.dot(B) # but we know the line equation: yq = m * xq, so we substitute it into # eq and solve for xq xq_expr = solve(eq.subs(yq, m * xq), xq)[1] print(xq_expr) # (m*y_P + x_P)/(m**2 + 1) # generate data mv = -0.5 xp_vals = np.random.uniform(2, 10, 30) yp_vals = np.random.uniform(2, 10, 30) # convert the symbolic expression to a numerical function f = lambdify([m, xp, yp], xq_expr) # compute the projections on the line xq_vals = f(mv, xp_vals, yp_vals) yq_vals = mv * xq_vals # compute the distance d = np.sqrt((xp_vals - xq_vals)**2 + (yp_vals - yq_vals)**2) # find the index of the shortest distance idx = d.argmin() fig, ax = plt.subplots() xline = np.linspace(0, 10) yline = mv * xline ax.plot(xline, yline, "k:", label="line") ax.scatter(xq_vals, yq_vals, label="Q", marker=".") ax.scatter(xp_vals, yp_vals, label="P", marker="*") ax.plot([xp_vals[idx], xq_vals[idx]], [yp_vals[idx], yq_vals[idx]], "r", label="min distance") ax.set_aspect("equal") ax.legend() plt.show() A: Your assigned line passes through the origin, its parametric equation is x = u cos(a) y = u sin(a) and you can see the parameter u is simply the (oriented) distance beteween the origin and a point on the assigned line. 
Now, consider a point of coordinates X and Y, a line perpendicular to the assigned one has the parametric equation x = X - v sin(a) y = Y + v cos(a) and again, the parameter v is simply the (oriented) distance between (X, Y) and a point on a line passing per (X, Y) and perpendicular to the assigned one. The intersection is given by the equation X = u cos(a) + v sin(a) Y = u sin(a) - v cos(a) you can check by inspection that the solution of the system is u = X cos(a) + Y sin(a) v = X sin(a) - Y cos(a) The distance of the point (X, Y) from the assigned line is hence d = | X sin(a) - Y cos(a) | A Python Implementation import numpy as np import matplotlib.pyplot as plt np.random.seed(20221126) X = 2*np.random.random(32)-1 Y = 2*np.random.random(32)-1 fig, ax = plt.subplots() ax.set_xlim((-1.2, 1.2)) ax.set_ylim((-1.2, 1.2)) ax.grid(1) ax.set_aspect(1) ax.scatter(X, Y, s=80, ec='k', color='y') a = 2*np.random.random()*np.pi s, c = np.sin(a), np.cos(a) plt.plot((0, c), (0, s), color='k') plt.plot((-s, s), (c, -c), color='r') # strike out "bad" points bad = X*c+Y*s<0 plt.scatter(X[bad], Y[bad], marker='x', color='k') # consider only good (i.e., not bad) points Xg, Yg = X[~bad], Y[~bad] # compute all distances (but for good points only) d = np.abs(Xg*s-Yg*c) # find the nearest point and hilight it imin = np.argmin(d) plt.scatter(Xg[imin], Yg[imin], ec='k', color='r') plt.show() An OVERDONE Example import numpy as np import matplotlib.pyplot as plt np.random.seed(20221126) X = 2*np.random.random(32)-1 Y = 2*np.random.random(32)-1 fig, axs = plt.subplots(2, 4, figsize=(10,5), layout='constrained') for ax, a in zip(axs.flat, (2.8, 1.8, 1.4, 0.2, 3.4, 4.5, 4.9, 6.0)): ax.set_xlim((-1.2, 1.2)) ax.set_xticks((-1, -0.5, 0, 0.5, 1.0)) ax.set_ylim((-1.2, 1.2)) ax.grid(1) ax.set_aspect(1) ax.set_title('$\\alpha \\approx %d^o$'%round(np.rad2deg(a))) ax.scatter(X, Y, s=80, ec='k', color='yellow') s, c = np.sin(a), np.cos(a) ax.arrow(0, 0, 1.2*c, 1.2*s, fc='k', length_includes_head=True, head_width=0.08, head_length=0.1) # divide the drawing surface in two semiplanes if abs(c)>abs(s): if c>0: ax.plot((1.2*s, -1.2*s), (-1.2, 1.2)) else: ax.plot((-1.2*s, 1.2*s), (-1.2, 1.2)) elif abs(s)>=abs(c): if s>0: ax.plot((-1.2, 1.2), (1.2*c, -1.2*c)) else: ax.plot((-1.2, 1.2), (-1.2*c, 1.2*c)) # strike out "bad" points bad = X*c+Y*s<0 ax.scatter(X[bad], Y[bad], marker='x', color='k') # consider only good (i.e., not bad) points Xg, Yg = X[~bad], Y[~bad] # compute all distances (but for good points only) d = np.abs(Xg*s-Yg*c) # find the nearest point and hilight it imin = np.argmin(d) ax.scatter(Xg[imin], Yg[imin], s=80, ec='k', color='yellow') ax.scatter(Xg[imin], Yg[imin], s= 10, color='k', alpha=1.0) plt.show()
Find the closest point to a line from an array of points
I have the problem of finding the point which is closest to a line from an array of x- and y-data. The line is semi-infinite originating from the origin at (0,0) and running into the direction of a given angle. The x,y data of the points are given in relation to the origin. How do I find the closest point (and its distance) to the line in line direction (not opposite)? This is an example of the data I have: import numpy as np import matplotlib.pyplot as plt def main(): depth = np.random.random((100))*20+50 angle = np.linspace(0, 2*np.pi, 100) x,y = depth2xy(depth, angle) line = np.random.random_sample()*2*np.pi # fig, ax = plt.subplots(subplot_kw={'projection': 'polar'}) plt.scatter(x, y) plt.plot([0,100*np.cos(line)], [0, 100*np.sin(line)], markersize=10, color = "r") plt.show() def depth2xy(depth, angle): x, y = np.zeros(len(depth)), np.zeros(len(depth)) for i in range(len(depth)): x[i] = depth[i]*np.cos(angle[i]) y[i] = depth[i]*np.sin(angle[i]) return x,y if __name__ == "__main__": main() I could try a brute force approach, iterating over different distances along the line to find the ultimate smallest distance. But as time efficiency is critical my case and the algorithm would not perform as well as I think it could, I would rather try an analytical approach. I also thought about scipy.spatial.distance, but I am not sure how this would work for a line.
[ "Let P be a point from your know data set. Let Q be the projection of this point on the line. You can use an analytic approach to determine the exact location of Q:\n\nOQ is the segment from the origin to the Q point. It is aligned to the line.\nPQ is the distance of the point P to the line.\nfrom geometry, the dot product between QP and OQ is zero (the two segments are orthogonal to each other). From this equation we can compute the point Q.\n\nAfter that, you simply compute all distances and find the shortest one.\nI'm going to use SymPy for the analytical part, Numpy for the numerical part and Matplotlib for plotting:\nfrom sympy import *\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nxq, xp, yq, yp, m = symbols(\"x_Q, x_P, y_Q, y_P, m\")\nA = Matrix([xq - xp, yq - yp])\nB = Matrix([xq, yq])\n# this equations contains two unkowns: xq, yq\neq = A.dot(B)\n\n# but we know the line equation: yq = m * xq, so we substitute it into\n# eq and solve for xq\nxq_expr = solve(eq.subs(yq, m * xq), xq)[1]\nprint(xq_expr)\n# (m*y_P + x_P)/(m**2 + 1)\n\n# generate data\nmv = -0.5\nxp_vals = np.random.uniform(2, 10, 30)\nyp_vals = np.random.uniform(2, 10, 30)\n\n# convert the symbolic expression to a numerical function\nf = lambdify([m, xp, yp], xq_expr)\n# compute the projections on the line\nxq_vals = f(mv, xp_vals, yp_vals)\nyq_vals = mv * xq_vals\n\n# compute the distance\nd = np.sqrt((xp_vals - xq_vals)**2 + (yp_vals - yq_vals)**2)\n# find the index of the shortest distance\nidx = d.argmin()\n\nfig, ax = plt.subplots()\nxline = np.linspace(0, 10)\nyline = mv * xline\nax.plot(xline, yline, \"k:\", label=\"line\")\nax.scatter(xq_vals, yq_vals, label=\"Q\", marker=\".\")\nax.scatter(xp_vals, yp_vals, label=\"P\", marker=\"*\")\nax.plot([xp_vals[idx], xq_vals[idx]], [yp_vals[idx], yq_vals[idx]], \"r\", label=\"min distance\")\nax.set_aspect(\"equal\")\nax.legend()\nplt.show()\n\n\n", "Your assigned line passes through the origin, its parametric equation is\nx = u cos(a)\ny = u sin(a)\n\nand you can see the parameter u is simply the (oriented) distance beteween the origin and a point on the assigned line.\nNow, consider a point of coordinates X and Y, a line perpendicular to the assigned one has the parametric equation\nx = X - v sin(a)\ny = Y + v cos(a)\n\nand again, the parameter v is simply the (oriented) distance between (X, Y) and a point on a line passing per (X, Y) and perpendicular to the assigned one.\nThe intersection is given by the equation\nX = u cos(a) + v sin(a)\nY = u sin(a) - v cos(a)\n\nyou can check by inspection that the solution of the system is\nu = X cos(a) + Y sin(a)\nv = X sin(a) - Y cos(a)\n\nThe distance of the point (X, Y) from the assigned line is hence\nd = | X sin(a) - Y cos(a) |\n\n\nA Python Implementation\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nnp.random.seed(20221126)\n\nX = 2*np.random.random(32)-1\nY = 2*np.random.random(32)-1\n\nfig, ax = plt.subplots()\nax.set_xlim((-1.2, 1.2))\nax.set_ylim((-1.2, 1.2))\nax.grid(1)\nax.set_aspect(1)\n\nax.scatter(X, Y, s=80, ec='k', color='y')\n\na = 2*np.random.random()*np.pi\ns, c = np.sin(a), np.cos(a)\n\nplt.plot((0, c), (0, s), color='k')\nplt.plot((-s, s), (c, -c), color='r')\n\n# strike out \"bad\" points\nbad = X*c+Y*s<0\nplt.scatter(X[bad], Y[bad], marker='x', color='k')\n\n# consider only good (i.e., not bad) points\nXg, Yg = X[~bad], Y[~bad]\n\n# compute all distances (but for good points only)\nd = np.abs(Xg*s-Yg*c)\n# find the nearest point and hilight it\nimin = 
np.argmin(d)\nplt.scatter(Xg[imin], Yg[imin], ec='k', color='r')\nplt.show()\n\n\nAn OVERDONE Example\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nnp.random.seed(20221126)\n\nX = 2*np.random.random(32)-1\nY = 2*np.random.random(32)-1\n\nfig, axs = plt.subplots(2, 4, figsize=(10,5), layout='constrained')\nfor ax, a in zip(axs.flat,\n (2.8, 1.8, 1.4, 0.2,\n 3.4, 4.5, 4.9, 6.0)): \n ax.set_xlim((-1.2, 1.2))\n ax.set_xticks((-1, -0.5, 0, 0.5, 1.0))\n ax.set_ylim((-1.2, 1.2))\n ax.grid(1)\n ax.set_aspect(1)\n ax.set_title('$\\\\alpha \\\\approx %d^o$'%round(np.rad2deg(a)))\n\n ax.scatter(X, Y, s=80, ec='k', color='yellow')\n\n s, c = np.sin(a), np.cos(a)\n\n ax.arrow(0, 0, 1.2*c, 1.2*s, fc='k',\n length_includes_head=True,\n head_width=0.08, head_length=0.1)\n\n # divide the drawing surface in two semiplanes \n if abs(c)>abs(s):\n if c>0:\n ax.plot((1.2*s, -1.2*s), (-1.2, 1.2))\n else:\n ax.plot((-1.2*s, 1.2*s), (-1.2, 1.2))\n elif abs(s)>=abs(c):\n if s>0:\n ax.plot((-1.2, 1.2), (1.2*c, -1.2*c)) \n else:\n ax.plot((-1.2, 1.2), (-1.2*c, 1.2*c)) \n\n # strike out \"bad\" points\n bad = X*c+Y*s<0\n ax.scatter(X[bad], Y[bad], marker='x', color='k')\n\n # consider only good (i.e., not bad) points\n Xg, Yg = X[~bad], Y[~bad]\n\n # compute all distances (but for good points only)\n d = np.abs(Xg*s-Yg*c)\n # find the nearest point and hilight it\n imin = np.argmin(d)\n ax.scatter(Xg[imin], Yg[imin], s=80, ec='k', color='yellow')\n ax.scatter(Xg[imin], Yg[imin], s= 10, color='k', alpha=1.0)\nplt.show()\n\n" ]
[ 1, 1 ]
[]
[]
[ "geometry", "matplotlib", "python", "scipy" ]
stackoverflow_0074577462_geometry_matplotlib_python_scipy.txt
Q: How can I add values to the beginning and end of rows in a Pandas Dataframe? I have a .csv file that lists a couple thousand names. If I read the csv file into a pandas data frame is there an easy way to add quotes around the names with a ',' at the end of each row? Example below This is what the output of the CSV file looks like now. name1 name2 name3 name4 What I would like the output to look like with Pandas 'name1', 'name2', 'name3', 'name4', etc I would usually do this with Sublime or some other text editor however it has been freezing on me when I try to make a mass edit. One thing I do want to point out is that the names in the csv file are all unique. Also, I am only referencing pandas as it is all I am familiar with in regard to working with csv files in python. The code I have so far looks like this and I am not sure how to proceed: file = ('path/file.csv') df = pd.read_csv(file) print(df) A: Assume that the column has a name such as df['name'] Use the following to add "'" df['newName'] = df['name'].apply(lambda x : "'" + x + "'") A new column 'newName' is created where the quote is added.
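A minimal alternative sketch (my own, not from the answer; it assumes the csv is a single headerless column of names) that writes the quoted, comma-terminated lines directly:

import pandas as pd

df = pd.read_csv('path/file.csv', header=None)
with open('quoted.txt', 'w') as f:
    for name in df[0]:
        f.write(f"'{name}',\n")  # e.g. 'name1',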
How can I add values to the beginning and end of rows in a Pandas Dataframe?
I have a .csv file that lists a couple thousand names. If I read the csv file into a pandas data frame is there an easy way to add quotes around the names with a ',' at the end of each row? Example below This is what the output of the CSV file looks like now. name1 name2 name3 name4 What I would like the output to look like with Pandas 'name1', 'name2', 'name3', 'name4', etc I would usually do this with Sublime or some other text editor however it has been freezing on me when I try to make a mass edit. One thing I do want to point out is that the names in the csv file are all unique. Also, I am only referencing pandas as it is all I am familiar with in regard to working with csv files in python. The code I have so far looks like this and I am not sure how to proceed: file = ('path/file.csv') df = pd.read_csv(file) print(df)
[ "\nAssume that the column has a name such as df['name']\n\nUse the following the add \"'\" \ndf['newName'] = df['name'].apply(lambda x : \"'\" + x + \"'\")\n\nA new column 'newName' is created where the quote is added.\n\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074578183_pandas_python.txt
Q: This function does not print This function is supposed to receive a string of text and tell if it is an isogram (a word with no repeated letters) or not. I do not understand why this does not work. Here is the code. String = input("input a string "); def is_isogram(String): String = String.lower() counter = 0 while counter < 2: for i in String: if i == String: print("Not isogram") counter += 1 is_isogram(String) A: What you posted cannot work. You need a separate counter for each character to check this. But using a set seems to be a better idea imho. def is_isogram(s: str)->bool: unique_chars = set() for char in s: pre_add_len = len(unique_chars) unique_chars.add(char) post_add_len = len(unique_chars) if pre_add_len == post_add_len: return False return True Not very memory efficient since we load the whole string to the memory, but should also work: def is_isogram(s: str)->bool: return len(list(s)) == len(set(list(s)))
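For completeness, a sketch of how the original loop-based version could be repaired (my own variant; it assumes the comparison should be case-insensitive, as in the question):

def is_isogram(s: str) -> bool:
    s = s.lower()
    for i, ch in enumerate(s):
        if ch in s[i + 1:]:  # the same character appears again later
            print("Not isogram")
            return False
    return True

is_isogram(input("input a string "))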
This function does not print
This function is supposed to receive a string of text and tell if it is an isogram (a word with no repeated letters) or not. I do not understand why this does not work. Here is the code. String = input("input a string "); def is_isogram(String): String = String.lower() counter = 0 while counter < 2: for i in String: if i == String: print("Not isogram") counter += 1 is_isogram(String)
[ "What you posted cannot work. You need a separate counter for each character to check this. But using a set seems to be better idea imho.\ndef is_isogram(s: str)->bool:\n unique_chars = set()\n for char in s:\n pre_add_len = len(unique_chars)\n unique_chars.add(char)\n post_add_len = len(unique_chars)\n if prea_add_len == post_add_len:\n return False\n return True\n\nNot very memory efficient since we load the whole string to the memory, but should also work:\ndef is_isogram(s: str)->bool:\n return len(list(s)) == len(set(list(s)))\n\n" ]
[ 0 ]
[]
[]
[ "function", "python" ]
stackoverflow_0074575346_function_python.txt
Q: What is the error " TypeError: 'int' object is not callable"? Substitute three numpy arrays for the audio and combine them to get the max-min average. I am getting an error with this, what should I do? import torch import torchaudio import torchaudio.transforms as T import os import requests import librosa import matplotlib.pyplot as plt # saving the audio _SAMPLE_DIR = "_sample_data" SAMPLE_WAV_URL = "https://pytorch-tutorial-assets.s3.amazonaws.com/VOiCES_devkit/source-16k/train/sp0307/Lab41-SRI-VOiCES-src-sp0307-ch127535-sg0042.wav" SAMPLE_WAV_PATH = os.path.join(_SAMPLE_DIR, "speech.wav") def plot_spectrogram(spec, title=None, ylabel="freq_bin", aspect="auto", xmax=None): fig, axs = plt.subplots(1, 1) axs.set_title(title or "Spectrogram (db)") axs.set_ylabel(ylabel) axs.set_xlabel("frame") im = axs.imshow(librosa.power_to_db(spec), origin="lower", aspect=aspect) if xmax: axs.set_xlim((0, xmax)) fig.colorbar(im, ax=axs) plt.show(block=False) def synthesis(sigList): maxLength = 0 tmpLength = 0 tmpArray = [] # search for the longest audio signal for i, data in enumerate(sigList): if len(data) > tmpLength: maxLength = len(data) tmpLength = len(data) index = i # define a zero-filled array with the length of the longest signal sig = np.zeros(maxLength) for i in sigList: tmp = i.tolist() # numpy -> list # zero-pad every signal to match the longest one for data in range(maxLength - len(i)): tmp.append(0) tmpArray.append(tmp) # combine the three arrays sig = np.array(tmpArray[0]) + np.array(tmpArray[1]) + np.array(tmpArray[2]) return sig def min_max(x, axis=None): min = x.min(axis=axis, keepdims=True) max = x.max(axis=axis, keepdims=True) try: z = (x - min) / (max - min) except ZeroDivisionError: z = (x - min) / min return z waveform, sample_rate = torchaudio.load(filepath=SAMPLE_WAV_URL) n_fft = 1024 win_length = None hop_length = 512 window_fn = torch.hann_window waveforms = waveform.numpy() k = waveforms for i in range(2): waveforms = np.concatenate([waveforms,k],0) spectrogram = T.Spectrogram( n_fft=n_fft, win_length=win_length, hop_length=hop_length, window_fn=window_fn, power=2.0, ) sig = min_max(synthesis(waveforms)) spec = spectrogram(sig) plot_spectrogram(spec[0], title='torchaudio') spec = spectrogram(sig) This is the line where the error occurs. Detailed error is TypeError Traceback (most recent call last) <ipython-input-44-a0a6c4ba7770> in <module> 70 sig = min_max(synthesis(waveforms)) 71 ---> 72 spec = spectrogram(sig) 73 plot_spectrogram(spec[0], title='torchaudio') 2 frames /usr/local/lib/python3.7/dist-packages/torchaudio/functional/functional.py in spectrogram(waveform, pad, window, n_fft, hop_length, win_length, power, normalized, center, pad_mode, onesided, return_complex) 106 107# pack batch --> 108 shape = waveform.size() 109 waveform = waveform.reshape(-1, shape[-1]) 110 TypeError: 'int' object is not callable A: According to the docs for Torchaudio Spectrogram, the parameter that's passed to its return value (spectrogram() in your code) needs to be a PyTorch Tensor. In your code, you're giving it a Numpy array instead, because that's what your function synthesis() returns. You can convert a Numpy ndarray into a Tensor with torch.from_numpy. For example: spec = spectrogram(torch.from_numpy(sig))
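As an aside, the manual zero-padding loop in synthesis can be sketched with np.pad (a hypothetical rewrite of mine, not from the answer; it also sums however many signals are passed rather than exactly three):

import numpy as np

def synthesis(sig_list):
    max_len = max(len(s) for s in sig_list)
    # right-pad every signal with zeros up to the longest one, then sum them
    padded = [np.pad(s, (0, max_len - len(s))) for s in sig_list]
    return np.sum(padded, axis=0)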
What is the error " TypeError: 'int' object is not callable"?
Substitute three numpy arrays for the audio and combine them to get the max-min average. I am getting an error with this, what should I do? import torch import torchaudio import torchaudio.transforms as T import os import requests import librosa import matplotlib.pyplot as plt # saving the audio _SAMPLE_DIR = "_sample_data" SAMPLE_WAV_URL = "https://pytorch-tutorial-assets.s3.amazonaws.com/VOiCES_devkit/source-16k/train/sp0307/Lab41-SRI-VOiCES-src-sp0307-ch127535-sg0042.wav" SAMPLE_WAV_PATH = os.path.join(_SAMPLE_DIR, "speech.wav") def plot_spectrogram(spec, title=None, ylabel="freq_bin", aspect="auto", xmax=None): fig, axs = plt.subplots(1, 1) axs.set_title(title or "Spectrogram (db)") axs.set_ylabel(ylabel) axs.set_xlabel("frame") im = axs.imshow(librosa.power_to_db(spec), origin="lower", aspect=aspect) if xmax: axs.set_xlim((0, xmax)) fig.colorbar(im, ax=axs) plt.show(block=False) def synthesis(sigList): maxLength = 0 tmpLength = 0 tmpArray = [] # search for the longest audio signal for i, data in enumerate(sigList): if len(data) > tmpLength: maxLength = len(data) tmpLength = len(data) index = i # define a zero-filled array with the length of the longest signal sig = np.zeros(maxLength) for i in sigList: tmp = i.tolist() # numpy -> list # zero-pad every signal to match the longest one for data in range(maxLength - len(i)): tmp.append(0) tmpArray.append(tmp) # combine the three arrays sig = np.array(tmpArray[0]) + np.array(tmpArray[1]) + np.array(tmpArray[2]) return sig def min_max(x, axis=None): min = x.min(axis=axis, keepdims=True) max = x.max(axis=axis, keepdims=True) try: z = (x - min) / (max - min) except ZeroDivisionError: z = (x - min) / min return z waveform, sample_rate = torchaudio.load(filepath=SAMPLE_WAV_URL) n_fft = 1024 win_length = None hop_length = 512 window_fn = torch.hann_window waveforms = waveform.numpy() k = waveforms for i in range(2): waveforms = np.concatenate([waveforms,k],0) spectrogram = T.Spectrogram( n_fft=n_fft, win_length=win_length, hop_length=hop_length, window_fn=window_fn, power=2.0, ) sig = min_max(synthesis(waveforms)) spec = spectrogram(sig) plot_spectrogram(spec[0], title='torchaudio') spec = spectrogram(sig) This is the line where the error occurs. Detailed error is TypeError Traceback (most recent call last) <ipython-input-44-a0a6c4ba7770> in <module> 70 sig = min_max(synthesis(waveforms)) 71 ---> 72 spec = spectrogram(sig) 73 plot_spectrogram(spec[0], title='torchaudio') 2 frames /usr/local/lib/python3.7/dist-packages/torchaudio/functional/functional.py in spectrogram(waveform, pad, window, n_fft, hop_length, win_length, power, normalized, center, pad_mode, onesided, return_complex) 106 107# pack batch --> 108 shape = waveform.size() 109 waveform = waveform.reshape(-1, shape[-1]) 110 TypeError: 'int' object is not callable
[ "According to the docs for Torchaudio Spectrogram, the parameter that's passed to its return value (spectrogram() in your code) needs to be a PyTorch Tensor. In your code, you're giving it a Numpy array instead, because that's what your function synthesis() returns.\nYou can convert a Numpy ndarray into a Tensor with torch.from_numpy. For example:\nspec = spectrogram(torch.from_numpy(sig))\n\n" ]
[ 1 ]
[]
[]
[ "python", "pytorch" ]
stackoverflow_0074573351_python_pytorch.txt
Q: PyTorch and torchvision compiled with different versions. How to solve? PyTorch and Torchvision were compiled with different CUDA versions. PyTorch has CUDA version=11.6 and torchvision CUDA Version 11.3. Please reinstall the torchvision that matches your PyTorch install. I've tried to reinstall torchvision so many times from the website as well as PyTorch and python. I'm stuck I have no idea how to solve this issue. A: You need to upgrade your torchvision to one compiled with CUDA 11.6: pip install --upgrade torchvision>=0.12.0 A: This worked for me: pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
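Before reinstalling, it can help to confirm which CUDA builds are actually present; a quick check using standard attributes (the printed versions are only illustrative):

import torch, torchvision
print(torch.__version__, torch.version.cuda)  # e.g. 1.12.1 11.6
print(torchvision.__version__)                # e.g. 0.13.1+cu113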
PyTorch and torchvision compiled with different versions. How to solve?
PyTorch and Torchvision were compiled with different CUDA versions. PyTorch has CUDA version=11.6 and torchvision CUDA Version 11.3. Please reinstall the torchvision that matches your PyTorch install. I've tried to reinstall torchvision so many times from the website as well as PyTorch and python. I'm stuck I have no idea how to solve this issue.
[ "You need to upgrade your torchvision to one compiled with CUDA 11.6:\npip install --upgrade torchvision>=0.12.0\n\n", "This worked for me:\npip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116\n" ]
[ 0, 0 ]
[]
[]
[ "python", "pytorch", "torch", "torchvision" ]
stackoverflow_0074455445_python_pytorch_torch_torchvision.txt
Q: Filtering using a str array I am trying to filter an ASCII list (which contains ASCII and other characters) by using an array that I have created. I am trying to remove any integer string within the list. import pandas as pd with open('ASCII.txt') as f: data = f.read().replace('\t', ',') print(data, file=open('my_file.csv', 'w')) df = list(data) test = ['0','1','2','3','4','5','6','7','8','9'] for x in df: try: df = int(df) for i in range(0,9): while any(test) in df: df.remove('i') print(df) except: continue print(df) This is what I currently have however, it does not work and outputs: ['3', '3', ',', '0', '4', '1', ',', '2', '1', ',', '!', ',', '\n', '3', '4', ',', '0', '4', ...] A: Your if condition for numbers is broken. any checks if at least one element in the passed iterable is truthy, i.e. not an empty string in your case. test = ['0','1','2','3','4','5','6','7','8','9'] while any(test) in df: # Condition always evaluates to False df.remove('i') # Only removes the character 'i' from df So your condition any(test) evaluates to True. And now you are checking if True is in df which it isn't, so the condition evaluates to False. The next error is, that you try to remove the letter 'i' from your list with the remove call. This can be fixed by casting the integer to a string for i in range(9): # Cast integer to str while str(i) in df: # Remove str i from df df.remove(str(i)) Using a str list instead of the range function, you can directly iterate over the elements of the test list: df = list(data) test = ['0','1','2','3','4','5','6','7','8','9'] for num in test: # Loop as long as num appears in df while num in df: df.remove(num) # removes all elements with value of num By doing so you have to run a second loop to remove all appearances of the current num in df, as remove only removes the first occurrence of that value. Alternatively you can also check each element of df if it is a digit by using the str method isdigit. But as you modify the list in-place you need to iterate over a copy. Otherwise you'll encounter side-effects as you reduce the size of df: # Use slice to create a copy of df for el in df[:]: if el.isdigit(): df.remove(el) As you iterate over each element in df you don't need an inner loop to remove each occurrence of value el.
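A compact alternative to the removal loops (my own sketch, with the same effect as the last variant in the answer): build a new list instead of mutating df in place:

df = [ch for ch in df if not ch.isdigit()]  # keep only non-digit characters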
Filtering using a str array
I am trying to filter an ASCII list (which contains ASCII and other characters) by using an array that I have created. I am trying to remove any integer string within the list. import pandas as pd with open('ASCII.txt') as f: data = f.read().replace('\t', ',') print(data, file=open('my_file.csv', 'w')) df = list(data) test = ['0','1','2','3','4','5','6','7','8','9'] for x in df: try: df = int(df) for i in range(0,9): while any(test) in df: df.remove('i') print(df) except: continue print(df) This is what I currently have however, it does not work and outputs: ['3', '3', ',', '0', '4', '1', ',', '2', '1', ',', '!', ',', '\n', '3', '4', ',', '0', '4', ...]
[ "Your if condition for numbers is broken.\nany checks if at least one element in the passed iterable is truthy, i.e. not an empty string in your case.\ntest = ['0','1','2','3','4','5','6','7','8','9']\nwhile any(test) in df: # Condition always evaluates to False\n df.remove('i') # Only removes the character 'i' from df\n\nSo your condition any(test) evaluates to True. And now you are checking if True is in df which it isn't, so the condition evaluates to False.\nThe next error is, that you try to remove the letter 'i' from your list with the remove call. This can be fixed by casting the integer to a string\nfor i in range(9):\n # Cast integer to str\n while str(i) in df:\n # Remove str i from df\n df.remove(str(i))\n\nUsing a str list instead of the range function, you can directly iterate over the elements of the test list:\ndf = list(data)\ntest = ['0','1','2','3','4','5','6','7','8','9']\n\nfor num in test:\n # Loop as long as num appears in df\n while num in df:\n df.remove(num) # removes all elements with value of num\n\nBy doing so you have to run a second loop to remove all appearances of the current num in df, as remove only removes the first occurrence of that value.\nAlternatively you can also check each element of df if it is a digit by using the str method isdigit. But as you modify the list in-place you need to iterate over a copy. Otherwise you'll encounter side-effects as you reduce the size of df:\n# Use slice to create a copy of df\nfor el in df[:]:\n if el.isdigit():\n df.remove(el)\n\nAs you iterate over each element in df you don't need an inner loop to remove each occurrence of value el.\n" ]
[ 0 ]
[]
[]
[ "python", "string" ]
stackoverflow_0074578882_python_string.txt
Q: Tkinter - "can not find channel named "stdout" So I'm receiving the error, _tkinter.TclError: can not find channel named "stdout" by running this code: from tkinter import Tcl tcl = Tcl() tcl.eval(''' puts hello ''') For others it seems to work. I wonder if it is because I'm on windows and the distinction between console and gui application ? An interesting approach that I have found in a book, where they state it is possible to have a console and a window at the same time via console show, but this got me into another error: _tkinter.TclError: invalid command name "console" How to fix this ? A: I have found a sufficient solution by combining two of Bryan's outstanding answers and the tutorial that I watched. To summarize the code below: wrap a python function into a tcl proc via register overwrite the tcl puts command take a list as input and join the items with empty space between call the wrapped python function in tcl by its name returned by register from tkinter import Tcl tcl = Tcl() cmd = tcl.register(lambda inp:print(inp)) tcl.eval( 'proc puts {args} {' + f'{cmd} [join $args " "]' + '}') tcl.eval('puts "hello"') Note: Whitespace is a styling choice and has no effect in this code. The only important thing to note is the f-string (formatted string). This needs to be encapsulated due to the curly braces, otherwise tcl interprets it as a list or Python's format string protests about the curly braces. For clarification a normal function is used in the code below: from tkinter import Tcl def puts(inp): print(inp) tcl = Tcl() cmd = tcl.register(puts) tcl.eval( 'proc puts {args} {' + f'{cmd} [join $args " "]' + '}') tcl.eval('puts "hello, world"') For gets you can do the same, but at exit it might throw an error because the interpreter can't close a connection to a function, as it expects a socket/file descriptor. Anyway, for playground: def gets(): return input() cmd = tcl.register(gets) tcl.tk.eval( 'proc gets {} {' + f'{cmd}' + '}')
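Building on the register trick from the answer, a hedged sketch that captures puts output into a Python list instead of printing (the same Tcl() setup is assumed):

from tkinter import Tcl

tcl = Tcl()
captured = []
cmd = tcl.register(captured.append)  # Tcl calls back into the list's append
tcl.eval('proc puts {args} {' + f'{cmd} [join $args " "]' + '}')
tcl.eval('puts "hello"')
print(captured)  # ['hello']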
Tkinter - "can not find channel named "stdout"
So I'm receiving the error, _tkinter.TclError: can not find channel named "stdout" by running this code: from tkinter import Tcl tcl = Tcl() tcl.eval(''' puts hello ''') For others it seems to work. I wonder if it is because I'm on windows and the distinction between console and gui application ? An interesting approach that I have found in a book, where they state it is possible to have a console and a window at the same time via console show, but this got me into another error: _tkinter.TclError: invalid command name "console" How to fix this ?
[ "I have found a sufficient solutions by combining two of Bryan's outstanding answers and the tutorial that I watch. To summarize the code below:\n\nwrap a python function into a tcl proc via register\noverwrite the tcl puts command\ntake a list as input and join the items with empty space between\ncall the wrapped python function in tcl by its name returned by register\n\n\nfrom tkinter import Tcl\n\n\ntcl = Tcl()\ncmd = tcl.register(lambda inp:print(inp))\ntcl.eval(\n 'proc puts {args} {' +\n f'{cmd} [join $args \" \"]' +\n '}')\n\ntcl.eval('puts \"hello\"') \n\nNote: Whitespace is a styling choice and has no effect in this code. Only important thing to note is the f-string (formatted string). This needs to be encapsulated due the curly barces, otherwise tcl interprets it as list or python's format string protest about the curly braces.\nFor clarification a normal function is used in the code below:\nfrom tkinter import Tcl\n\ndef puts(inp):\n print(inp)\n\ntcl = Tcl()\ncmd = tcl.register(puts)\ntcl.eval(\n 'proc puts {args} {' +\n f'{cmd} [join $args \" \"]' +\n '}')\n\ntcl.eval('puts \"hello, world\"')\n\n\nFor gets you can do the same, but at exit it might throw an error because the interpreter can't close a connection to a function, as it expects a socket/file descriptor. Anyway, for playground:\ndef gets():\n return input()\n\ncmd = tcl.register(gets)\ntcl.tk.eval(\n 'proc gets {} {' +\n f'{cmd}' +\n '}')\n\n" ]
[ 0 ]
[]
[]
[ "python", "tcl", "tk_toolkit", "tkinter" ]
stackoverflow_0074571933_python_tcl_tk_toolkit_tkinter.txt
Q: 'NoneType' object has no attribute 'val' in Python Linked List I have recently started to practice using LinkedList in Python and encountered the problem below. Both code seems like they are doing the same thing but 1 got the error while the other did not. Can someone let me know why this is the case?: The ListNode class is defined as: #Python Linked List class ListNode: def __init__(self, val=0, next=None): self.val = val self.next = next Assume we have this linked list: node = ListNode{val: 2, next: ListNode{val: 4, next: ListNode{val: 3, next: None}}} Code 1 This can run fine and will print "2 4 3": while node: print(node.val) # access the values of the node by node.val node=node.next` Code 2: This gives me an error saying 'NoneType' object has no attribute 'val' while still printing "2" node = node.next print(node.val) I expect to see code 2 to print "2" and not giving me the error. Note that code 1 and code 2 are run independently with node = ListNode{val: 2, next: ListNode{val: 4, next: ListNode{val: 3, next: None}}} In fact, code 2 does print 2, it just prints 2 with the Nonetype error, which I want to avoid. A: After your while loop finished you are at the last node in your linked list. So node.next will point to None as per your definition: class ListNode: def __init__(self, val=0, next=None): self.val = val self.next = next A: According to your code, you have set the default values for the properties "val" as 0, "next" as None. If supposed that you have created an instance, None will be assigned to the "next" property of the instance. If you want to access to the "next" property, create another instance and assign it to the "next" property of the previous instance class ListNode: def __init__(self, val=0, next=None): self.val = val self.next = next node1 = ListNode() node2 = ListNode() node1.next = node2 print(node1.next)
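A small guard illustrates the fix the answers below describe (my own sketch): test the node before dereferencing it:

node = node.next
if node is not None:
    print(node.val)
else:
    print("reached the end of the list")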
'NoneType' object has no attribute 'val' in Python Linked List
I have recently started to practice using LinkedList in Python and encountered the problem below. Both code seems like they are doing the same thing but 1 got the error while the other did not. Can someone let me know why this is the case?: The ListNode class is defined as: #Python Linked List class ListNode: def __init__(self, val=0, next=None): self.val = val self.next = next Assume we have this linked list: node = ListNode{val: 2, next: ListNode{val: 4, next: ListNode{val: 3, next: None}}} Code 1 This can run fine and will print "2 4 3": while node: print(node.val) # access the values of the node by node.val node=node.next` Code 2: This gives me an error saying 'NoneType' object has no attribute 'val' while still printing "2" node = node.next print(node.val) I expect to see code 2 to print "2" and not giving me the error. Note that code 1 and code 2 are run independently with node = ListNode{val: 2, next: ListNode{val: 4, next: ListNode{val: 3, next: None}}} In fact, code 2 does print 2, it just prints 2 with the Nonetype error, which I want to avoid.
[ "After your while loop finished you are at the last node in your linked list.\nSo node.next will point to None as per your definition:\nclass ListNode:\n def __init__(self, val=0, next=None):\n self.val = val\n self.next = next\n\n", "According to your code, you have set the default values for the properties \"val\" as 0, \"next\" as None. If supposed that you have created an instance, None will be assigned to the \"next\" property of the instance. If you want to access to the \"next\" property, create another instance and assign it to the \"next\" property of the previous instance\nclass ListNode:\n def __init__(self, val=0, next=None):\n self.val = val\n self.next = next\n \nnode1 = ListNode()\nnode2 = ListNode()\n\nnode1.next = node2\nprint(node1.next)\n\n" ]
[ 0, 0 ]
[]
[]
[ "linked_list", "python" ]
stackoverflow_0074578957_linked_list_python.txt
Q: Concatenate two tensors of different shape from two different input modalities I have two tensors: a = torch.randn((1, 30, 1220)) # represents text embedding vector (30 spans, each with embedding size of 1220) b = torch.randn((1, 128, 256)) # represents image features obtained from a pretrained CNN (object detection) How do I concatenate everything in b to each one of the 30 spans of a? How to concatenate the whole b to the whole a? This is what I'm trying to do: The authors have only provided this text: I'm extracting features (outlined in red) from a 3d point cloud (similar to CNN but for 3d) as shown below: A: You're looking to combine two tensors with different shapes, there is no trivial way of concatenating them. Both tensors hold information regarding the same instance: the element you want to characterize with features embeddings through two different modalities: textual and visual. The only way that makes sense to me is to learn two separate layers to map your text embedding and your image features to a common space where you can easily fuse them. The design you adopt for this mapping is entirely up to you. Of course, this mapping layers need to be learned through training i.e. applying some kind of supervision at the other end. A: Since these representations are from two different modalities (i.e., text and image) and they contain valuable features that are of great importance to the final goal, I would suggest to fuse them in a "learnable" manner instead of a mere concatenation or addition. Furthermore, such a learnable weighting (between features) would be optimal since in some cases one representation would be far more useful than the other whereas at other instances the vice versa applies. Please note that a mere concatenation can also happen in this fusion module that you would implement. For the actual implementation, there are several types of fusion techniques. E.g. Simple Fusion, Cold Fusion etc. (cf. Fusion Models for Improved Visual Captioning, 2021) For instance, one straightforward idea would be to use a simple linear layer to project one of the features to the same dimensionality as the other and then do a simple concatenation, with some optional non-linearity if needed. A: Try concatenation as followד: concatenatedTensor = torch.cat([torch.reshape(firstTensor,[-1]),torch.reshape(secondTensor,[-1]))
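(The figures referenced above are not reproduced here.) One common pattern for the fusion step — a sketch of my own, where the layer name and the mean-pooling choice are assumptions, not taken from the paper:

import torch
import torch.nn as nn

a = torch.randn(1, 30, 1220)  # text span embeddings
b = torch.randn(1, 128, 256)  # image features

proj = nn.Linear(256, 1220)                  # map image features into the text space
b_proj = proj(b)                             # (1, 128, 1220)
b_pooled = b_proj.mean(dim=1, keepdim=True)  # (1, 1, 1220), one image summary
fused = torch.cat([a, b_pooled.expand(-1, 30, -1)], dim=-1)  # (1, 30, 2440)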
Concatenate two tensors of different shape from two different input modalities
I have two tensors: a = torch.randn((1, 30, 1220)) # represents text embedding vector (30 spans, each with embedding size of 1220) b = torch.randn((1, 128, 256)) # represents image features obtained from a pretrained CNN (object detection) How do I concatenate everything in b to each one of the 30 spans of a? How to concatenate the whole b to the whole a? This is what I'm trying to do: The authors have only provided this text: I'm extracting features (outlined in red) from a 3d point cloud (similar to CNN but for 3d) as shown below:
[ "You're looking to combine two tensors with different shapes, there is no trivial way of concatenating them. Both tensors hold information regarding the same instance: the element you want to characterize with features embeddings through two different modalities: textual and visual.\nThe only way that makes sense to me is to learn two separate layers to map your text embedding and your image features to a common space where you can easily fuse them.\nThe design you adopt for this mapping is entirely up to you. Of course, this mapping layers need to be learned through training i.e. applying some kind of supervision at the other end.\n", "Since these representations are from two different modalities (i.e., text and image) and they contain valuable features that are of great importance to the final goal, I would suggest to fuse them in a \"learnable\" manner instead of a mere concatenation or addition. Furthermore, such a learnable weighting (between features) would be optimal since in some cases one representation would be far more useful than the other whereas at other instances the vice versa applies.\nPlease note that a mere concatenation can also happen in this fusion module that you would implement. For the actual implementation, there are several types of fusion techniques. E.g. Simple Fusion, Cold Fusion etc. (cf. Fusion Models for Improved Visual Captioning, 2021)\nFor instance, one straightforward idea would be to use a simple linear layer to project one of the features to the same dimensionality as the other and then do a simple concatenation, with some optional non-linearity if needed.\n", "Try concatenation as followד:\nconcatenatedTensor = torch.cat([torch.reshape(firstTensor,[-1]),torch.reshape(secondTensor,[-1]))\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "concatenation", "deep_learning", "machine_learning", "python", "pytorch" ]
stackoverflow_0069036172_concatenation_deep_learning_machine_learning_python_pytorch.txt
Q: Getting the Output of (ERROR: sequence item 0: expected str instance, bytes found) as a result of my function I converted my code from python 2 to python 3, everything is working well except for this part of the code: from binascii import unhexlify def swap_endian_words(hex_words): '''Swaps the endianness of a hexidecimal string of words and converts to binary string.''' message = unhexlify(hex_words) if len(message) % 4 != 0: raise ValueError('Must be 4-byte word aligned') return ''.join(([ message[4 * i: 4 * i + 4][::-1] for i in range(0, len(message) // 4) ])) print(swap_endian_words('aabbccdd')) The console is giving me the output: TypeError: sequence item 0: expected str instance, bytes found I think that this is due to the fact that the program cannot iterate over bytes, which the variable message is formatted in. I tried to use .decode() on message, but it did not work. A: binascii.unhexlify returns a bytes object in Python 3, so the .join should also use a bytes string: from binascii import unhexlify def swap_endian_words(hex_words): '''Swaps the endianness of a hexidecimal string of words and converts to binary string.''' message = unhexlify(hex_words) if len(message) % 4 != 0: raise ValueError('Must be 4-byte word aligned') return b''.join(([ message[4 * i: 4 * i + 4][::-1] for i in range(0, len(message) // 4) ])) # ^ here print(swap_endian_words('aabbccdd')) Output: b'\xdd\xcc\xbb\xaa' Here's a faster version using array.array: import array def swap_endian_words(hex_words): '''Swaps the endianness of a hexadecimal string of words and converts to binary string.''' # 'L' is unsigned 32-bit integer item type arr = array.array('L', bytes.fromhex(hex_words)) arr.byteswap() # may want the final 32-bit integers return arr.tobytes() # or dump back as bytes print(swap_endian_words('00112233445566778899aabbccddeeff').hex(' ')) Output: 33 22 11 00 77 66 55 44 bb aa 99 88 ff ee dd cc
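An equivalent struct-based variant (my own sketch) that byte-swaps each 4-byte word by unpacking as big-endian and repacking as little-endian:

import struct

def swap_endian_words(hex_words):
    data = bytes.fromhex(hex_words)
    if len(data) % 4 != 0: raise ValueError('Must be 4-byte word aligned')
    n = len(data) // 4
    return struct.pack(f'<{n}I', *struct.unpack(f'>{n}I', data))

print(swap_endian_words('aabbccdd'))  # b'\xdd\xcc\xbb\xaa'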
Getting the Output of (ERROR: sequence item 0: expected str instance, bytes found) as a result of my function
I converted my code from python 2 to python 3, everything is working well except for this part of the code: from binascii import unhexlify def swap_endian_words(hex_words): '''Swaps the endianness of a hexidecimal string of words and converts to binary string.''' message = unhexlify(hex_words) if len(message) % 4 != 0: raise ValueError('Must be 4-byte word aligned') return ''.join(([ message[4 * i: 4 * i + 4][::-1] for i in range(0, len(message) // 4) ])) print(swap_endian_words('aabbccdd')) The console is giving me the output: TypeError: sequence item 0: expected str instance, bytes found I think that this is due to the fact that the program cannot iterate over bytes, which the variable message is formatted in. I tried to use .decode() on message, but it did not work.
[ "binascii.unhexlify returns a bytes object in Python 3, so the .join should also use a bytes string:\nfrom binascii import unhexlify\n\ndef swap_endian_words(hex_words):\n '''Swaps the endianness of a hexidecimal string of words and converts to binary string.'''\n message = unhexlify(hex_words)\n if len(message) % 4 != 0: raise ValueError('Must be 4-byte word aligned')\n return b''.join(([ message[4 * i: 4 * i + 4][::-1] for i in range(0, len(message) // 4) ]))\n # ^ here\n\nprint(swap_endian_words('aabbccdd'))\n\nOutput:\nb'\\xdd\\xcc\\xbb\\xaa'\n\nHere's a faster version using array.array:\nimport array\n\ndef swap_endian_words(hex_words):\n '''Swaps the endianness of a hexadecimal string of words and converts to binary string.'''\n # 'L' is unsigned 32-bit integer item type\n arr = array.array('L', bytes.fromhex(hex_words))\n arr.byteswap() # may want the final 32-bit integers\n return arr.tobytes() # or dump back as bytes\n\nprint(swap_endian_words('00112233445566778899aabbccddeeff').hex(' '))\n\nOutput:\n33 22 11 00 77 66 55 44 bb aa 99 88 ff ee dd cc\n\n" ]
[ 1 ]
[]
[]
[ "python", "python_2.7", "python_3.x" ]
stackoverflow_0074579045_python_python_2.7_python_3.x.txt
Q: 'BatchDataset' object has no attribute 'shape' Here is my code: train_images = tf.keras.utils.image_dataset_from_directory( '/content/drive/MyDrive/ArabicHandwritten2/train') train_labels = tf.keras.utils.image_dataset_from_directory( '/content/drive/MyDrive/ArabicHandwritten2/test') train_images = tf.reshape(train_images.shape[0], 256, 256, 3).astype('float32') train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1] BUFFER_SIZE = 16765 BATCH_SIZE = 32 train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE) The error is: AttributeError: 'BatchDataset' object has no attribute 'shape' How can I solve the error? Thanks. A: Because image_dataset_from_directory returns a tf.data.Dataset, which has no shape attribute. See the TensorFlow documentation for how to use the tf.reshape function.
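For the normalization step itself, a hedged sketch that stays inside the tf.data pipeline and never touches .shape (the directory path and sizes are taken from the question):

import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    '/content/drive/MyDrive/ArabicHandwritten2/train',
    image_size=(256, 256), batch_size=32)
# normalize each batch of images to [-1, 1]; labels pass through unchanged
train_ds = train_ds.map(lambda x, y: ((x - 127.5) / 127.5, y))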
'BatchDataset' object has no attribute 'shape'
Here is my code: train_images = tf.keras.utils.image_dataset_from_directory( '/content/drive/MyDrive/ArabicHandwritten2/train') train_labels = tf.keras.utils.image_dataset_from_directory( '/content/drive/MyDrive/ArabicHandwritten2/test') train_images = tf.reshape(train_images.shape[0], 256, 256, 3).astype('float32') train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1] BUFFER_SIZE = 16765 BATCH_SIZE = 32 train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE) The error is: AttributeError: 'BatchDataset' object has no attribute 'shape' How can I solve the error? Thanks.
[ "Because image_dataset_from_directory returns a tf.data.Dataset which has no shape attribute. And you can see how to use the tf.reshape function here:\n" ]
[ 0 ]
[]
[]
[ "python", "reshape", "shapes", "tensorflow" ]
stackoverflow_0074575411_python_reshape_shapes_tensorflow.txt
Q: Python typing validation I would like to implement validation for Python 3.6 type annotation within my project. I have a method that uses __annotations__ dict to check if all attributes of the class have the correct value. It works perfectly for basic types like int, str or bool, but fails for more sophisticated elements like typing.Union or typing.Optional (which is also a Union). The failure is caused by isinstance() method within Union object that throws TypeError. I even cannot find a way to ensure that the annotation is a Union so I cannot validate if a value complies with a type. The typing module does not have any solution for it. Is there a way to validate if specified variable complies with typing.Union? A: Yes. isinstance and issubclass were killed some time ago for cases like Union. The idea, as also stated in a comment on the issue by GvR is to implement your own version of issubclass/isinstance that use some of the extra metadata attached to types: >>> Union[int, str].__args__ (int, str) >>> Union[int, str].__origin__ typing.Union __args__ and __origin__ are available as of Python 3.6.3. They might not in earlier versions since typing is still provisional. Until the internal interface for introspecting types is fleshed out and typing graduates from provisional status, you should expect breakage due to changes in the API. A: You can use Unions from typing as here: pydantic A: Before using isinstance(), you need to check if the annotation is an annotation from module typing. I do it like this: def _is_instance(self, value: Any, annotation: type) -> bool: str_annotation = str(annotation) if self._is_typing_alias(str_annotation): if self._is_supported_alias(str_annotation): is_instance = self._get_alias_method(str_annotation) if is_instance is not None: return is_instance(value, annotation) exception = TypeError('Alias is not supported!') self._field_errors.append(TypingValidationError( value_repr=get_value_repr(value), value_type=type(value), annotation=annotation, exception=exception )) return False return super()._is_instance(value, annotation) More details here: https://github.com/EvgeniyBurdin/validated_dc/blob/master/validated_dc.py A: convtools models are based on typing and are validation first: docs | github. from typing import Union, List from convtools.contrib.models import DictModel, build class ItemModel(DictModel): value: Union[int, str] obj, errors = build( List[ItemModel], [{"value": 123}, {"value": "cde"}, {"value": 1.5}] ) """ >>> In [7]: errors >>> Out[7]: {2: {'value': {'__ERRORS': {'type': 'float instead of int/str'}}}} """ obj, errors = build(List[ItemModel], [{"value": 123}, {"value": "cde"}]) """ >>> In [9]: obj >>> Out[9]: [ItemModel(value=123), ItemModel(value='cde')] """ A: Maybe I misunderstood the question, but, as of 3.10, isinstance worked for me, including on 3.10 style types (using | for unions and |None for optional). 
class Check: a : int b : int|float c : str|None def __init__(self,a,b,c=None) -> None: self.a, self.b, self.c = a,b,c def validate(self): for k, constraint in self.__annotations__.items(): v = getattr(self, k, None) if not isinstance(v, constraint): print(f"\n\n❌{k} : {v} does not match {constraint}") return False return True def check(annotation, value): print(f"\n\ncheck({locals()})",end="") return isinstance(value, annotation) a = Check.__annotations__["a"] b = Check.__annotations__["b"] c = Check.__annotations__["c"] print(f"✅ {check(a,3)}") print(f"❌ {check(a,4.1)}") print(f"✅ {check(b,4.1)}") print(f"✅ {check(c,None)}") print(f"❌ {check(b,None)}") check = Check(a=1,b=2) print("\n",check.validate(), f"for {check.__dict__}",) check = Check(a=1,b="2") print("\n",check.validate(), f"for {check.__dict__}","\n\n") output: check({'annotation': <class 'int'>, 'value': 3})✅ True check({'annotation': <class 'int'>, 'value': 4.1})❌ False check({'annotation': int | float, 'value': 4.1})✅ True check({'annotation': str | None, 'value': None})✅ True check({'annotation': int | float, 'value': None})❌ False True for {'a': 1, 'b': 2, 'c': None} ❌b : 2 does not match int | float False for {'a': 1, 'b': '2', 'c': None} p.s. changed it to c : Optional[str] and confirmed that worked as well.
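On newer interpreters the introspection described in the first answer is exposed directly as typing.get_origin/get_args (Python 3.8+); a recursive sketch of mine that also covers 3.10's X | Y unions (nested generics such as List[int] would need extra cases):

import types
import typing

union_kinds = {typing.Union}
if hasattr(types, 'UnionType'):      # Python 3.10+: int | str
    union_kinds.add(types.UnionType)

def is_instance(value, tp):
    origin = typing.get_origin(tp)
    if origin in union_kinds:
        return any(is_instance(value, arg) for arg in typing.get_args(tp))
    return isinstance(value, tp)

print(is_instance(3, typing.Union[int, str]))   # True
print(is_instance(None, typing.Optional[str]))  # True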
Python typing validation
I would like to implement validation for Python 3.6 type annotation within my project. I have a method that uses __annotations__ dict to check if all attributes of the class have the correct value. It works perfectly for basic types like int, str or bool, but fails for more sophisticated elements like typing.Union or typing.Optional (which is also a Union). The failure is caused by isinstance() method within Union object that throws TypeError. I even cannot find a way to ensure that the annotation is a Union so I cannot validate if a value complies with a type. The typing module does not have any solution for it. Is there a way to validate if specified variable complies with typing.Union?
[ "Yes. isinstance and issubclass were killed some time ago for cases like Union. \nThe idea, as also stated in a comment on the issue by GvR is to implement your own version of issubclass/isinstance that use some of the extra metadata attached to types:\n>>> Union[int, str].__args__\n(int, str)\n>>> Union[int, str].__origin__\ntyping.Union\n\n__args__ and __origin__ are available as of Python 3.6.3. They might not in earlier versions since typing is still provisional. \nUntil the internal interface for introspecting types is fleshed out and typing graduates from provisional status, you should expect breakage due to changes in the API. \n", "You can use Unions from typing as here: pydantic\n", "Before using isinstance(), you need to check if the annotation is an annotation from module typing.\nI do it like this:\n def _is_instance(self, value: Any, annotation: type) -> bool:\n\n str_annotation = str(annotation)\n\n if self._is_typing_alias(str_annotation):\n\n if self._is_supported_alias(str_annotation):\n\n is_instance = self._get_alias_method(str_annotation)\n if is_instance is not None:\n return is_instance(value, annotation)\n\n exception = TypeError('Alias is not supported!')\n self._field_errors.append(TypingValidationError(\n value_repr=get_value_repr(value), value_type=type(value),\n annotation=annotation, exception=exception\n ))\n return False\n\n return super()._is_instance(value, annotation)\n\nMore details here: https://github.com/EvgeniyBurdin/validated_dc/blob/master/validated_dc.py\n", "convtools models are based on typing and are validation first: docs | github.\nfrom typing import Union, List\nfrom convtools.contrib.models import DictModel, build\n\nclass ItemModel(DictModel):\n value: Union[int, str]\n\nobj, errors = build(\n List[ItemModel], [{\"value\": 123}, {\"value\": \"cde\"}, {\"value\": 1.5}]\n)\n\"\"\"\n>>> In [7]: errors\n>>> Out[7]: {2: {'value': {'__ERRORS': {'type': 'float instead of int/str'}}}}\n\"\"\"\n\nobj, errors = build(List[ItemModel], [{\"value\": 123}, {\"value\": \"cde\"}])\n\"\"\"\n>>> In [9]: obj\n>>> Out[9]: [ItemModel(value=123), ItemModel(value='cde')]\n\"\"\"\n\n", "Maybe I misunderstood the question, but, as of 3.10, isinstance worked for me, including on 3.10 style types (using | for unions and |None for optional).\nclass Check:\n a : int\n b : int|float\n c : str|None\n\n def __init__(self,a,b,c=None) -> None:\n self.a, self.b, self.c = a,b,c\n\n def validate(self):\n for k, constraint in self.__annotations__.items():\n v = getattr(self, k, None)\n if not isinstance(v, constraint):\n print(f\"\\n\\n❌{k} : {v} does not match {constraint}\")\n return False\n return True\n\n\ndef check(annotation, value):\n print(f\"\\n\\ncheck({locals()})\",end=\"\")\n return isinstance(value, annotation)\n\na = Check.__annotations__[\"a\"]\nb = Check.__annotations__[\"b\"]\nc = Check.__annotations__[\"c\"]\n\nprint(f\"✅ {check(a,3)}\")\nprint(f\"❌ {check(a,4.1)}\")\nprint(f\"✅ {check(b,4.1)}\")\nprint(f\"✅ {check(c,None)}\")\nprint(f\"❌ {check(b,None)}\")\n\ncheck = Check(a=1,b=2)\nprint(\"\\n\",check.validate(), f\"for {check.__dict__}\",)\n\ncheck = Check(a=1,b=\"2\")\nprint(\"\\n\",check.validate(), f\"for {check.__dict__}\",\"\\n\\n\")\n\n\noutput:\ncheck({'annotation': <class 'int'>, 'value': 3})✅ True\n\n\ncheck({'annotation': <class 'int'>, 'value': 4.1})❌ False\n\n\ncheck({'annotation': int | float, 'value': 4.1})✅ True\n\n\ncheck({'annotation': str | None, 'value': None})✅ True\n\n\ncheck({'annotation': int | float, 'value': None})❌ False\n\n True 
for {'a': 1, 'b': 2, 'c': None}\n\n\n❌b : 2 does not match int | float\n\n False for {'a': 1, 'b': '2', 'c': None}\n\np.s. changed it to c : Optional[str] and confirmed that worked as well.\n" ]
[ 2, 2, 0, 0, 0 ]
[]
[]
[ "python", "python_3.6", "python_3.x", "type_hinting" ]
stackoverflow_0049067070_python_python_3.6_python_3.x_type_hinting.txt
Q: SEC_ERROR_UNKNOWN_ISSUER, playwright python inside docker My code is quite simple: from playwright.sync_api import sync_playwright pw = sync_playwright().start() firefox = pw.firefox.launch(headless=True) context=firefox.new_context() page= context.new_page() page.goto("http://www.uaf.cl/prensa/sanciones_new.aspx") Every single time I get a SEC_ERROR_UNKNOWN_ISSUER. Anyone know how I can bypass this? This is running inside a Docker container with update-ca-certificates. I've tried using the "ignore HTTPS errors" option inside the context, to no avail (I need to access the webpage in spite of the error). A: Solved with: context=firefox.new_context(ignore_https_errors=True)
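For completeness, the same fix in the more common context-manager form of the sync API (a sketch; behaviour is otherwise identical to the question's code):

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.firefox.launch(headless=True)
    context = browser.new_context(ignore_https_errors=True)
    page = context.new_page()
    page.goto("http://www.uaf.cl/prensa/sanciones_new.aspx")
    browser.close()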
SEC_ERROR_UNKNOWN_ISSUER, playwright python inside docker
My code is quite simple: from playwright.sync_api import sync_playwright pw = sync_playwright().start() firefox = pw.firefox.launch(headless=True) context=firefox.new_context() page= context.new_page() page.goto("http://www.uaf.cl/prensa/sanciones_new.aspx") Every single time I get a SEC_ERROR_UNKNOWN_ISSUER. Anyone know how I can bypass this? This is running inside a Docker container with update-ca-certificates. I've tried using the "ignore HTTPS errors" option inside the context, to no avail (I need to access the webpage in spite of the error).
[ "Solved with:\ncontext=firefox.new_context(ignore_https_errors=True)\n\n" ]
[ 1 ]
[]
[]
[ "docker", "playwright", "playwright_python", "python" ]
stackoverflow_0074551138_docker_playwright_playwright_python_python.txt
Q: fast way to find the related nets from a file (python) Trying to find a fast way to find out the related nets of a net from a file. R1 net net2 R2 net net3 R3 net2 net4 R4 net3 net5 R5 net6 net7 ... if a net is connected to another net through R, then these nets are considered as connected. In the above example, net/net2/net3/net4/net5 are connected. I have a file containing over 1 million lines, and I need to find out all the related nets of a net. Maybe 500 nets are related, and the others are all redundant. def get_nets(net, file): related_nets = set([net.lower()]) searched_nets = set() unsearched_nets = related_nets - searched_nets while len(unsearched_nets) > 0: snet = unsearched_nets.pop().lower() with open(file) as fi: for line in fi: if line[0].lower() == 'r': rname, net1, net2 = line.split() if net1.lower() == snet: related_nets.update([net2.lower()]) elif net2.lower() == snet: related_nets.update([net1.lower()]) searched_nets.update([snet]) unsearched_nets = related_nets - searched_nets return related_nets The above code costs 2s for each snet search, so if I get 500 related nets, the total time is around 1000s. Is there any faster way to do this job? A: use networkx import networkx as nx g=nx.Graph() with open(file) as fi: for line in fi: if line[0].lower() == 'r': net1,net2 = line.split()[1:3] g.add_edge(net1.lower(),net2.lower()) if net in g: related_nets = nx.descendants(g,net)
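If pulling in networkx is not an option, a union-find sketch of mine gives the same single-pass behaviour (it assumes the same three-token 'R net1 net2' lines as the question):

def get_nets(net, file):
    parent = {}
    def find(x):
        while parent.setdefault(x, x) != x:
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x
    with open(file) as fi:
        for line in fi:
            if line[:1].lower() == 'r':
                _, n1, n2 = line.split()[:3]
                parent[find(n1.lower())] = find(n2.lower())
    root = find(net.lower())
    return {n for n in parent if find(n) == root}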
fast way to find the related nets from a file (python)
Trying to find a fast way to find out the related nets of a net from a file. R1 net net2 R2 net net3 R3 net2 net4 R4 net3 net5 R5 net6 net7 ... if a net is connected to another net through R, then these nets are considered as connected. In the above example, net/net2/net3/net4/net5 are connected. I have a file containing over 1 million lines, and I need to find out all the related nets of a net. Maybe 500 nets are related, and the others are all redundant. def get_nets(net, file): related_nets = set([net.lower()]) searched_nets = set() unsearched_nets = related_nets - searched_nets while len(unsearched_nets) > 0: snet = unsearched_nets.pop().lower() with open(file) as fi: for line in fi: if line[0].lower() == 'r': rname, net1, net2 = line.split() if net1.lower() == snet: related_nets.update([net2.lower()]) elif net2.lower() == snet: related_nets.update([net1.lower()]) searched_nets.update([snet]) unsearched_nets = related_nets - searched_nets return related_nets The above code costs 2s for each snet search, so if I get 500 related nets, the total time is around 1000s. Is there any faster way to do this job?
[ "use networkx\nimport networkx as nx\n\ng=nx.Graph()\nwith open(file) as fi:\n for line in fi:\n if line[0].lower() == 'r':\n net1,net2 = line.split()[1:3]\n g.add_edge(net1.lower(),net2.lower())\n\n if net in g:\n related_nets = nx.descendants(g,net)\n\n" ]
[ 0 ]
[]
[]
[ "optimization", "performance", "python" ]
stackoverflow_0074558507_optimization_performance_python.txt
Q: Cartesian product of x and y array points into single array of 2D points I have two numpy arrays that define the x and y axes of a grid. For example: x = numpy.array([1,2,3]) y = numpy.array([4,5]) I'd like to generate the Cartesian product of these arrays to generate: array([[1,4],[2,4],[3,4],[1,5],[2,5],[3,5]]) In a way that's not terribly inefficient since I need to do this many times in a loop. I'm assuming that converting them to a Python list and using itertools.product and back to a numpy array is not the most efficient form. A: A canonical cartesian_product (almost) There are many approaches to this problem with different properties. Some are faster than others, and some are more general-purpose. After a lot of testing and tweaking, I've found that the following function, which calculates an n-dimensional cartesian_product, is faster than most others for many inputs. For a pair of approaches that are slightly more complex, but are even a bit faster in many cases, see the answer by Paul Panzer. Given that answer, this is no longer the fastest implementation of the cartesian product in numpy that I'm aware of. However, I think its simplicity will continue to make it a useful benchmark for future improvement: def cartesian_product(*arrays): la = len(arrays) dtype = numpy.result_type(*arrays) arr = numpy.empty([len(a) for a in arrays] + [la], dtype=dtype) for i, a in enumerate(numpy.ix_(*arrays)): arr[...,i] = a return arr.reshape(-1, la) It's worth mentioning that this function uses ix_ in an unusual way; whereas the documented use of ix_ is to generate indices into an array, it just so happens that arrays with the same shape can be used for broadcasted assignment. Many thanks to mgilson, who inspired me to try using ix_ this way, and to unutbu, who provided some extremely helpful feedback on this answer, including the suggestion to use numpy.result_type. Notable alternatives It's sometimes faster to write contiguous blocks of memory in Fortran order. That's the basis of this alternative, cartesian_product_transpose, which has proven faster on some hardware than cartesian_product (see below). However, Paul Panzer's answer, which uses the same principle, is even faster. Still, I include this here for interested readers: def cartesian_product_transpose(*arrays): broadcastable = numpy.ix_(*arrays) broadcasted = numpy.broadcast_arrays(*broadcastable) rows, cols = numpy.prod(broadcasted[0].shape), len(broadcasted) dtype = numpy.result_type(*arrays) out = numpy.empty(rows * cols, dtype=dtype) start, end = 0, rows for a in broadcasted: out[start:end] = a.reshape(-1) start, end = end, end + rows return out.reshape(cols, rows).T After coming to understand Panzer's approach, I wrote a new version that's almost as fast as his, and is almost as simple as cartesian_product: def cartesian_product_simple_transpose(arrays): la = len(arrays) dtype = numpy.result_type(*arrays) arr = numpy.empty([la] + [len(a) for a in arrays], dtype=dtype) for i, a in enumerate(numpy.ix_(*arrays)): arr[i, ...] = a return arr.reshape(la, -1).T This appears to have some constant-time overhead that makes it run slower than Panzer's for small inputs. But for larger inputs, in all the tests I ran, it performs just as well as his fastest implementation (cartesian_product_transpose_pp). In following sections, I include some tests of other alternatives. These are now somewhat out of date, but rather than duplicate effort, I've decided to leave them here out of historical interest. 
For up-to-date tests, see Panzer's answer, as well as Nico Schlömer's. Tests against alternatives Here is a battery of tests that show the performance boost that some of these functions provide relative to a number of alternatives. All the tests shown here were performed on a quad-core machine, running Mac OS 10.12.5, Python 3.6.1, and numpy 1.12.1. Variations on hardware and software are known to produce different results, so YMMV. Run these tests for yourself to be sure! Definitions: import numpy import itertools from functools import reduce ### Two-dimensional products ### def repeat_product(x, y): return numpy.transpose([numpy.tile(x, len(y)), numpy.repeat(y, len(x))]) def dstack_product(x, y): return numpy.dstack(numpy.meshgrid(x, y)).reshape(-1, 2) ### Generalized N-dimensional products ### def cartesian_product(*arrays): la = len(arrays) dtype = numpy.result_type(*arrays) arr = numpy.empty([len(a) for a in arrays] + [la], dtype=dtype) for i, a in enumerate(numpy.ix_(*arrays)): arr[...,i] = a return arr.reshape(-1, la) def cartesian_product_transpose(*arrays): broadcastable = numpy.ix_(*arrays) broadcasted = numpy.broadcast_arrays(*broadcastable) rows, cols = numpy.prod(broadcasted[0].shape), len(broadcasted) dtype = numpy.result_type(*arrays) out = numpy.empty(rows * cols, dtype=dtype) start, end = 0, rows for a in broadcasted: out[start:end] = a.reshape(-1) start, end = end, end + rows return out.reshape(cols, rows).T # from https://stackoverflow.com/a/1235363/577088 def cartesian_product_recursive(*arrays, out=None): arrays = [numpy.asarray(x) for x in arrays] dtype = arrays[0].dtype n = numpy.prod([x.size for x in arrays]) if out is None: out = numpy.zeros([n, len(arrays)], dtype=dtype) m = n // arrays[0].size out[:,0] = numpy.repeat(arrays[0], m) if arrays[1:]: cartesian_product_recursive(arrays[1:], out=out[0:m,1:]) for j in range(1, arrays[0].size): out[j*m:(j+1)*m,1:] = out[0:m,1:] return out def cartesian_product_itertools(*arrays): return numpy.array(list(itertools.product(*arrays))) ### Test code ### name_func = [('repeat_product', repeat_product), ('dstack_product', dstack_product), ('cartesian_product', cartesian_product), ('cartesian_product_transpose', cartesian_product_transpose), ('cartesian_product_recursive', cartesian_product_recursive), ('cartesian_product_itertools', cartesian_product_itertools)] def test(in_arrays, test_funcs): global func global arrays arrays = in_arrays for name, func in test_funcs: print('{}:'.format(name)) %timeit func(*arrays) def test_all(*in_arrays): test(in_arrays, name_func) # `cartesian_product_recursive` throws an # unexpected error when used on more than # two input arrays, so for now I've removed # it from these tests. def test_cartesian(*in_arrays): test(in_arrays, name_func[2:4] + name_func[-1:]) x10 = [numpy.arange(10)] x50 = [numpy.arange(50)] x100 = [numpy.arange(100)] x500 = [numpy.arange(500)] x1000 = [numpy.arange(1000)] Test results: In [2]: test_all(*(x100 * 2)) repeat_product: 67.5 µs ± 633 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each) dstack_product: 67.7 µs ± 1.09 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) cartesian_product: 33.4 µs ± 558 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each) cartesian_product_transpose: 67.7 µs ± 932 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each) cartesian_product_recursive: 215 µs ± 6.01 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) cartesian_product_itertools: 3.65 ms ± 38.7 µs per loop (mean ± std. dev. 
of 7 runs, 100 loops each) In [3]: test_all(*(x500 * 2)) repeat_product: 1.31 ms ± 9.28 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) dstack_product: 1.27 ms ± 7.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) cartesian_product: 375 µs ± 4.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) cartesian_product_transpose: 488 µs ± 8.88 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) cartesian_product_recursive: 2.21 ms ± 38.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) cartesian_product_itertools: 105 ms ± 1.17 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) In [4]: test_all(*(x1000 * 2)) repeat_product: 10.2 ms ± 132 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) dstack_product: 12 ms ± 120 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) cartesian_product: 4.75 ms ± 57.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) cartesian_product_transpose: 7.76 ms ± 52.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) cartesian_product_recursive: 13 ms ± 209 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) cartesian_product_itertools: 422 ms ± 7.77 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In all cases, cartesian_product as defined at the beginning of this answer is fastest. For those functions that accept an arbitrary number of input arrays, it's worth checking performance when len(arrays) > 2 as well. (Until I can determine why cartesian_product_recursive throws an error in this case, I've removed it from these tests.) In [5]: test_cartesian(*(x100 * 3)) cartesian_product: 8.8 ms ± 138 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) cartesian_product_transpose: 7.87 ms ± 91.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) cartesian_product_itertools: 518 ms ± 5.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [6]: test_cartesian(*(x50 * 4)) cartesian_product: 169 ms ± 5.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) cartesian_product_transpose: 184 ms ± 4.32 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) cartesian_product_itertools: 3.69 s ± 73.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [7]: test_cartesian(*(x10 * 6)) cartesian_product: 26.5 ms ± 449 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) cartesian_product_transpose: 16 ms ± 133 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) cartesian_product_itertools: 728 ms ± 16 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [8]: test_cartesian(*(x10 * 7)) cartesian_product: 650 ms ± 8.14 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) cartesian_product_transpose: 518 ms ± 7.09 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) cartesian_product_itertools: 8.13 s ± 122 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) As these tests show, cartesian_product remains competitive until the number of input arrays rises above (roughly) four. After that, cartesian_product_transpose does have a slight edge. It's worth reiterating that users with other hardware and operating systems may see different results. For example, unutbu reports seeing the following results for these tests using Ubuntu 14.04, Python 3.4.3, and numpy 1.14.0.dev0+b7050a9: >>> %timeit cartesian_product_transpose(x500, y500) 1000 loops, best of 3: 682 µs per loop >>> %timeit cartesian_product(x500, y500) 1000 loops, best of 3: 1.55 ms per loop Below, I go into a few details about earlier tests I've run along these lines. 
The relative performance of these approaches has changed over time, for different hardware and different versions of Python and numpy. While it's not immediately useful for people using up-to-date versions of numpy, it illustrates how things have changed since the first version of this answer. A simple alternative: meshgrid + dstack The currently accepted answer uses tile and repeat to broadcast two arrays together. But the meshgrid function does practically the same thing. Here's the output of tile and repeat before being passed to transpose: In [1]: import numpy In [2]: x = numpy.array([1,2,3]) ...: y = numpy.array([4,5]) ...: In [3]: [numpy.tile(x, len(y)), numpy.repeat(y, len(x))] Out[3]: [array([1, 2, 3, 1, 2, 3]), array([4, 4, 4, 5, 5, 5])] And here's the output of meshgrid: In [4]: numpy.meshgrid(x, y) Out[4]: [array([[1, 2, 3], [1, 2, 3]]), array([[4, 4, 4], [5, 5, 5]])] As you can see, it's almost identical. We need only reshape the result to get exactly the same result. In [5]: xt, xr = numpy.meshgrid(x, y) ...: [xt.ravel(), xr.ravel()] Out[5]: [array([1, 2, 3, 1, 2, 3]), array([4, 4, 4, 5, 5, 5])] Rather than reshaping at this point, though, we could pass the output of meshgrid to dstack and reshape afterwards, which saves some work: In [6]: numpy.dstack(numpy.meshgrid(x, y)).reshape(-1, 2) Out[6]: array([[1, 4], [2, 4], [3, 4], [1, 5], [2, 5], [3, 5]]) Contrary to the claim in this comment, I've seen no evidence that different inputs will produce differently shaped outputs, and as the above demonstrates, they do very similar things, so it would be quite strange if they did. Please let me know if you find a counterexample. Testing meshgrid + dstack vs. repeat + transpose The relative performance of these two approaches has changed over time. In an earlier version of Python (2.7), the result using meshgrid + dstack was noticeably faster for small inputs. (Note that these tests are from an old version of this answer.) Definitions: >>> def repeat_product(x, y): ... return numpy.transpose([numpy.tile(x, len(y)), numpy.repeat(y, len(x))]) ... >>> def dstack_product(x, y): ... return numpy.dstack(numpy.meshgrid(x, y)).reshape(-1, 2) ... For moderately-sized input, I saw a significant speedup. But I retried these tests with more recent versions of Python (3.6.1) and numpy (1.12.1), on a newer machine. The two approaches are almost identical now. Old Test >>> x, y = numpy.arange(500), numpy.arange(500) >>> %timeit repeat_product(x, y) 10 loops, best of 3: 62 ms per loop >>> %timeit dstack_product(x, y) 100 loops, best of 3: 12.2 ms per loop New Test In [7]: x, y = numpy.arange(500), numpy.arange(500) In [8]: %timeit repeat_product(x, y) 1.32 ms ± 24.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) In [9]: %timeit dstack_product(x, y) 1.26 ms ± 8.47 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) As always, YMMV, but this suggests that in recent versions of Python and numpy, these are interchangeable. Generalized product functions In general, we might expect that using built-in functions will be faster for small inputs, while for large inputs, a purpose-built function might be faster. Furthermore for a generalized n-dimensional product, tile and repeat won't help, because they don't have clear higher-dimensional analogues. So it's worth investigating the behavior of purpose-built functions as well. Most of the relevant tests appear at the beginning of this answer, but here are a few of the tests performed on earlier versions of Python and numpy for comparison. 
The cartesian function defined in another answer used to perform pretty well for larger inputs. (It's the same as the function called cartesian_product_recursive above.) In order to compare cartesian to dstack_product, we use just two dimensions.
Here again, the old test showed a significant difference, while the new test shows almost none.
Old Test
>>> x, y = numpy.arange(1000), numpy.arange(1000)
>>> %timeit cartesian([x, y])
10 loops, best of 3: 25.4 ms per loop
>>> %timeit dstack_product(x, y)
10 loops, best of 3: 66.6 ms per loop

New Test
In [10]: x, y = numpy.arange(1000), numpy.arange(1000)
In [11]: %timeit cartesian([x, y])
12.1 ms ± 199 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [12]: %timeit dstack_product(x, y)
12.7 ms ± 334 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

As before, dstack_product still beats cartesian at smaller scales.
New Test (redundant old test not shown)
In [13]: x, y = numpy.arange(100), numpy.arange(100)
In [14]: %timeit cartesian([x, y])
215 µs ± 4.75 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [15]: %timeit dstack_product(x, y)
65.7 µs ± 1.15 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

These distinctions are, I think, interesting and worth recording; but they are academic in the end. As the tests at the beginning of this answer showed, all of these versions are almost always slower than cartesian_product, defined at the very beginning of this answer -- which is itself a bit slower than the fastest implementations among the answers to this question.

A:
>>> numpy.transpose([numpy.tile(x, len(y)), numpy.repeat(y, len(x))])
array([[1, 4],
       [2, 4],
       [3, 4],
       [1, 5],
       [2, 5],
       [3, 5]])

See Using numpy to build an array of all combinations of two arrays for a general solution for computing the Cartesian product of N arrays.

A: You can just do a normal list comprehension in Python:
x = numpy.array([1,2,3])
y = numpy.array([4,5])
[[x0, y0] for x0 in x for y0 in y]

which should give you
[[1, 4], [1, 5], [2, 4], [2, 5], [3, 4], [3, 5]]

A: I was interested in this as well and did a little performance comparison, perhaps somewhat clearer than in @senderle's answer.
For two arrays (the classical case):
For four arrays:
(Note that the length of the arrays is only a few dozen entries here.)
Code to reproduce the plots:
from functools import reduce
import itertools
import numpy
import perfplot


def dstack_product(arrays):
    return numpy.dstack(numpy.meshgrid(*arrays, indexing="ij")).reshape(-1, len(arrays))


# Generalized N-dimensional products
def cartesian_product(arrays):
    la = len(arrays)
    dtype = numpy.find_common_type([a.dtype for a in arrays], [])
    arr = numpy.empty([len(a) for a in arrays] + [la], dtype=dtype)
    for i, a in enumerate(numpy.ix_(*arrays)):
        arr[..., i] = a
    return arr.reshape(-1, la)


def cartesian_product_transpose(arrays):
    broadcastable = numpy.ix_(*arrays)
    broadcasted = numpy.broadcast_arrays(*broadcastable)
    rows, cols = reduce(numpy.multiply, broadcasted[0].shape), len(broadcasted)
    dtype = numpy.find_common_type([a.dtype for a in arrays], [])

    out = numpy.empty(rows * cols, dtype=dtype)
    start, end = 0, rows
    for a in broadcasted:
        out[start:end] = a.reshape(-1)
        start, end = end, end + rows
    return out.reshape(cols, rows).T


# from https://stackoverflow.com/a/1235363/577088
def cartesian_product_recursive(arrays, out=None):
    arrays = [numpy.asarray(x) for x in arrays]
    dtype = arrays[0].dtype

    n = numpy.prod([x.size for x in arrays])
    if out is None:
        out = numpy.zeros([n, len(arrays)], dtype=dtype)

    m = n // arrays[0].size
    out[:, 0] = numpy.repeat(arrays[0], m)
    if arrays[1:]:
        cartesian_product_recursive(arrays[1:], out=out[0:m, 1:])
        for j in range(1, arrays[0].size):
            out[j * m : (j + 1) * m, 1:] = out[0:m, 1:]
    return out


def cartesian_product_itertools(arrays):
    return numpy.array(list(itertools.product(*arrays)))


perfplot.show(
    setup=lambda n: 2 * (numpy.arange(n, dtype=float),),
    n_range=[2 ** k for k in range(13)],
    # setup=lambda n: 4 * (numpy.arange(n, dtype=float),),
    # n_range=[2 ** k for k in range(6)],
    kernels=[
        dstack_product,
        cartesian_product,
        cartesian_product_transpose,
        cartesian_product_recursive,
        cartesian_product_itertools,
    ],
    logx=True,
    logy=True,
    xlabel="len(a), len(b)",
    equality_check=None,
)

A: As of Oct. 2017, numpy now has a generic np.stack function that takes an axis parameter. Using it, we can have a "generalized cartesian product" using the "dstack and meshgrid" technique:
import numpy as np

def cartesian_product(*arrays):
    ndim = len(arrays)
    return (np.stack(np.meshgrid(*arrays), axis=-1)
              .reshape(-1, ndim))

a = np.array([1,2])
b = np.array([10,20])
cartesian_product(a,b)

# output:
# array([[ 1, 10],
#        [ 2, 10],
#        [ 1, 20],
#        [ 2, 20]])

Note on the axis=-1 parameter: this is the last (inner-most) axis in the result, so it is equivalent to using axis=ndim.
One other comment: since Cartesian products blow up very quickly, unless we need to realize the array in memory for some reason, when the product is very large we may want to make use of itertools and use the values on-the-fly.

A: Building on @senderle's exemplary ground work I've come up with two versions - one for C and one for Fortran layouts - that are often a bit faster.

cartesian_product_transpose_pp is - unlike @senderle's cartesian_product_transpose, which uses a different strategy altogether - a version of cartesian_product that uses the more favorable transpose memory layout + some very minor optimizations.
cartesian_product_pp sticks with the original memory layout. What makes it fast is its use of contiguous copying. Contiguous copies turn out to be so much faster that copying a full block of memory, even though only part of it contains valid data, is preferable to copying only the valid bits.

Some perfplots. I made separate ones for C and Fortran layouts, because these are different tasks IMO.
Names ending in 'pp' are my approaches. 1) many tiny factors (2 elements each) 2) many small factors (4 elements each) 3) three factors of equal length 4) two factors of equal length Code (need to do separate runs for each plot b/c I couldn't figure out how to reset; also need to edit / comment in / out appropriately): import numpy import numpy as np from functools import reduce import itertools import timeit import perfplot def dstack_product(arrays): return numpy.dstack( numpy.meshgrid(*arrays, indexing='ij') ).reshape(-1, len(arrays)) def cartesian_product_transpose_pp(arrays): la = len(arrays) dtype = numpy.result_type(*arrays) arr = numpy.empty((la, *map(len, arrays)), dtype=dtype) idx = slice(None), *itertools.repeat(None, la) for i, a in enumerate(arrays): arr[i, ...] = a[idx[:la-i]] return arr.reshape(la, -1).T def cartesian_product(arrays): la = len(arrays) dtype = numpy.result_type(*arrays) arr = numpy.empty([len(a) for a in arrays] + [la], dtype=dtype) for i, a in enumerate(numpy.ix_(*arrays)): arr[...,i] = a return arr.reshape(-1, la) def cartesian_product_transpose(arrays): broadcastable = numpy.ix_(*arrays) broadcasted = numpy.broadcast_arrays(*broadcastable) rows, cols = numpy.prod(broadcasted[0].shape), len(broadcasted) dtype = numpy.result_type(*arrays) out = numpy.empty(rows * cols, dtype=dtype) start, end = 0, rows for a in broadcasted: out[start:end] = a.reshape(-1) start, end = end, end + rows return out.reshape(cols, rows).T from itertools import accumulate, repeat, chain def cartesian_product_pp(arrays, out=None): la = len(arrays) L = *map(len, arrays), la dtype = numpy.result_type(*arrays) arr = numpy.empty(L, dtype=dtype) arrs = *accumulate(chain((arr,), repeat(0, la-1)), np.ndarray.__getitem__), idx = slice(None), *itertools.repeat(None, la-1) for i in range(la-1, 0, -1): arrs[i][..., i] = arrays[i][idx[:la-i]] arrs[i-1][1:] = arrs[i] arr[..., 0] = arrays[0][idx] return arr.reshape(-1, la) def cartesian_product_itertools(arrays): return numpy.array(list(itertools.product(*arrays))) # from https://stackoverflow.com/a/1235363/577088 def cartesian_product_recursive(arrays, out=None): arrays = [numpy.asarray(x) for x in arrays] dtype = arrays[0].dtype n = numpy.prod([x.size for x in arrays]) if out is None: out = numpy.zeros([n, len(arrays)], dtype=dtype) m = n // arrays[0].size out[:, 0] = numpy.repeat(arrays[0], m) if arrays[1:]: cartesian_product_recursive(arrays[1:], out=out[0:m, 1:]) for j in range(1, arrays[0].size): out[j*m:(j+1)*m, 1:] = out[0:m, 1:] return out ### Test code ### if False: perfplot.save('cp_4el_high.png', setup=lambda n: n*(numpy.arange(4, dtype=float),), n_range=list(range(6, 11)), kernels=[ dstack_product, cartesian_product_recursive, cartesian_product, # cartesian_product_transpose, cartesian_product_pp, # cartesian_product_transpose_pp, ], logx=False, logy=True, xlabel='#factors', equality_check=None ) else: perfplot.save('cp_2f_T.png', setup=lambda n: 2*(numpy.arange(n, dtype=float),), n_range=[2**k for k in range(5, 11)], kernels=[ # dstack_product, # cartesian_product_recursive, # cartesian_product, cartesian_product_transpose, # cartesian_product_pp, cartesian_product_transpose_pp, ], logx=True, logy=True, xlabel='length of each factor', equality_check=None ) A: The Scikit-learn package has a fast implementation of exactly this: from sklearn.utils.extmath import cartesian product = cartesian((x,y)) Note that the convention of this implementation is different from what you want, if you care about the order of the output. 
For your exact ordering, you can do
product = cartesian((y,x))[:, ::-1]

A: I used @kennytm's answer for a while, but when trying to do the same in TensorFlow, I found that TensorFlow has no equivalent of numpy.repeat(). After a little experimentation, I think I found a more general solution for arbitrary vectors of points.
For numpy:
import numpy as np

def cartesian_product(*args: np.ndarray) -> np.ndarray:
    """
    Produce the cartesian product of arbitrary length vectors.

    Parameters
    ----------
    np.ndarray args
        vector of points of interest in each dimension

    Returns
    -------
    np.ndarray
        the cartesian product of size [m x n] wherein:
            m = prod([len(a) for a in args])
            n = len(args)
    """
    for i, a in enumerate(args):
        assert a.ndim == 1, "arg {:d} is not rank 1".format(i)
    return np.concatenate([np.reshape(xi, [-1, 1]) for xi in np.meshgrid(*args)], axis=1)

and for TensorFlow:
import tensorflow as tf

def cartesian_product(*args: tf.Tensor) -> tf.Tensor:
    """
    Produce the cartesian product of arbitrary length vectors.

    Parameters
    ----------
    tf.Tensor args
        vector of points of interest in each dimension

    Returns
    -------
    tf.Tensor
        the cartesian product of size [m x n] wherein:
            m = prod([len(a) for a in args])
            n = len(args)
    """
    for i, a in enumerate(args):
        tf.assert_rank(a, 1, message="arg {:d} is not rank 1".format(i))
    return tf.concat([tf.reshape(xi, [-1, 1]) for xi in tf.meshgrid(*args)], axis=1)

A: More generally, if you have two 2d numpy arrays a and b, and you want to concatenate every row of a to every row of b (a cartesian product of rows, kind of like a join in a database), you can use this method:
import numpy
def join_2d(a, b):
    assert a.dtype == b.dtype
    a_part = numpy.tile(a, (len(b), 1))
    b_part = numpy.repeat(b, len(a), axis=0)
    return numpy.hstack((a_part, b_part))

A: The fastest you can get is either by combining a generator expression with the map function:
import numpy as np
import datetime
a = np.arange(1000)
b = np.arange(200)

start = datetime.datetime.now()

foo = (item for sublist in [list(map(lambda x: (x,i),a)) for i in b] for item in sublist)

print (list(foo))

print ('execution time: {} s'.format((datetime.datetime.now() - start).total_seconds()))

Outputs (actually the whole resulting list is printed):
[(0, 0), (1, 0), ...,(998, 199), (999, 199)]
execution time: 1.253567 s

or by using a double generator expression:
a = np.arange(1000)
b = np.arange(200)

start = datetime.datetime.now()

foo = ((x,y) for x in a for y in b)

print (list(foo))

print ('execution time: {} s'.format((datetime.datetime.now() - start).total_seconds()))

Outputs (whole list printed):
[(0, 0), (1, 0), ...,(998, 199), (999, 199)]
execution time: 1.187415 s

Take into account that most of the computation time goes into the printing command. The generator calculations are otherwise decently efficient. Without printing, the calculation times are:
execution time: 0.079208 s

for generator expression + map function and:
execution time: 0.007093 s

for the double generator expression.
If what you actually want is to calculate the actual product of each of the coordinate pairs, the fastest is to solve it as a numpy matrix product:
a = np.arange(1000)
b = np.arange(200)

start = datetime.datetime.now()

foo = np.dot(np.asmatrix([[i,0] for i in a]), np.asmatrix([[i,0] for i in b]).T)

print (foo)

print ('execution time: {} s'.format((datetime.datetime.now() - start).total_seconds()))

Outputs:
[[     0      0      0 ...,      0      0      0]
 [     0      1      2 ...,    197    198    199]
 [     0      2      4 ...,    394    396    398]
 ...,
 [     0    997   1994 ..., 196409 197406 198403]
 [     0    998   1996 ..., 196606 197604 198602]
 [     0    999   1998 ..., 196803 197802 198801]]
execution time: 0.003869 s

and without printing (in this case it doesn't save much since only a tiny piece of the matrix is actually printed out):
execution time: 0.003083 s

A: This can also be easily done by using the itertools.product method:
from itertools import product
import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5])
cart_prod = np.array(list(product(*[x, y])), dtype='int32')

Result:
array([[1, 4],
       [1, 5],
       [2, 4],
       [2, 5],
       [3, 4],
       [3, 5]], dtype=int32)

Execution time: 0.000155 s

A: In the specific case that you need to perform simple operations such as addition on each pair, you can introduce an extra dimension and let broadcasting do the job:
>>> a, b = np.array([1,2,3]), np.array([10,20,30])
>>> a[None,:] + b[:,None]
array([[11, 12, 13],
       [21, 22, 23],
       [31, 32, 33]])

I'm not sure if there is any similar way to actually get the pairs themselves.

A: I'm a bit late to the party, but I encountered a tricky variant of that problem.
Let's say I want the cartesian product of several arrays, but that cartesian product ends up being much larger than the computer's memory (however, the computations done with that product are fast, or at least parallelizable).
The obvious solution is to divide this cartesian product into chunks, and treat these chunks one after the other (in sort of a "streaming" manner). You can do that easily with itertools.product, but it's horrendously slow. Also, none of the proposed solutions here (as fast as they are) give us this possibility. The solution I propose uses Numba, and is slightly faster than the "canonical" cartesian_product mentioned here. It's pretty long because I tried to optimize it everywhere I could.
import numba as nb import numpy as np from typing import List @nb.njit(nb.types.Tuple((nb.int32[:, :], nb.int32[:]))(nb.int32[:], nb.int32[:], nb.int64, nb.int64)) def cproduct(sizes: np.ndarray, current_tuple: np.ndarray, start_idx: int, end_idx: int): """Generates ids tuples from start_id to end_id""" assert len(sizes) >= 2 assert start_idx < end_idx tuples = np.zeros((end_idx - start_idx, len(sizes)), dtype=np.int32) tuple_idx = 0 # stores the current combination current_tuple = current_tuple.copy() while tuple_idx < end_idx - start_idx: tuples[tuple_idx] = current_tuple current_tuple[0] += 1 # using a condition here instead of including this in the inner loop # to gain a bit of speed: this is going to be tested each iteration, # and starting a loop to have it end right away is a bit silly if current_tuple[0] == sizes[0]: # the reset to 0 and subsequent increment amount to carrying # the number to the higher "power" current_tuple[0] = 0 current_tuple[1] += 1 for i in range(1, len(sizes) - 1): if current_tuple[i] == sizes[i]: # same as before, but in a loop, since this is going # to get called less often current_tuple[i + 1] += 1 current_tuple[i] = 0 else: break tuple_idx += 1 return tuples, current_tuple def chunked_cartesian_product_ids(sizes: List[int], chunk_size: int): """Just generates chunks of the cartesian product of the ids of each input arrays (thus, we just need their sizes here, not the actual arrays)""" prod = np.prod(sizes) # putting the largest number at the front to more efficiently make use # of the cproduct numba function sizes = np.array(sizes, dtype=np.int32) sorted_idx = np.argsort(sizes)[::-1] sizes = sizes[sorted_idx] if chunk_size > prod: chunk_bounds = (np.array([0, prod])).astype(np.int64) else: num_chunks = np.maximum(np.ceil(prod / chunk_size), 2).astype(np.int32) chunk_bounds = (np.arange(num_chunks + 1) * chunk_size).astype(np.int64) chunk_bounds[-1] = prod current_tuple = np.zeros(len(sizes), dtype=np.int32) for start_idx, end_idx in zip(chunk_bounds[:-1], chunk_bounds[1:]): tuples, current_tuple = cproduct(sizes, current_tuple, start_idx, end_idx) # re-arrange columns to match the original order of the sizes list # before yielding yield tuples[:, np.argsort(sorted_idx)] def chunked_cartesian_product(*arrays, chunk_size=2 ** 25): """Returns chunks of the full cartesian product, with arrays of shape (chunk_size, n_arrays). 
    The last chunk will obviously have the size of the remainder"""
    array_lengths = [len(array) for array in arrays]
    for array_ids_chunk in chunked_cartesian_product_ids(array_lengths, chunk_size):
        slices_lists = [arrays[i][array_ids_chunk[:, i]] for i in range(len(arrays))]
        yield np.vstack(slices_lists).swapaxes(0,1)


def cartesian_product(*arrays):
    """Actual cartesian product, not chunked, still fast"""
    total_prod = np.prod([len(array) for array in arrays])
    return next(chunked_cartesian_product(*arrays, total_prod))


a = np.arange(0, 3)
b = np.arange(8, 10)
c = np.arange(13, 16)
for cartesian_tuples in chunked_cartesian_product(*[a, b, c], chunk_size=5):
    print(cartesian_tuples)

This would output our cartesian product in chunks of 5 3-tuples:
[[ 0  8 13]
 [ 0  8 14]
 [ 0  8 15]
 [ 1  8 13]
 [ 1  8 14]]
[[ 1  8 15]
 [ 2  8 13]
 [ 2  8 14]
 [ 2  8 15]
 [ 0  9 13]]
[[ 0  9 14]
 [ 0  9 15]
 [ 1  9 13]
 [ 1  9 14]
 [ 1  9 15]]
[[ 2  9 13]
 [ 2  9 14]
 [ 2  9 15]]

If you're willing to understand what is being done here, the intuition behind the njitted function is to enumerate each "number" in a mixed numerical base whose per-digit radix is the size of the corresponding input array (instead of the same number in regular binary, decimal or hexadecimal bases).
Obviously, this solution is interesting for large products. For small ones, the overhead might be a bit costly.
NOTE: since numba is still under heavy development, I'm using numba 0.50 to run this, with Python 3.6.

A: Yet another one:
>>> x1, y1 = np.meshgrid(x, y)
>>> np.c_[x1.ravel(), y1.ravel()]
array([[1, 4],
       [2, 4],
       [3, 4],
       [1, 5],
       [2, 5],
       [3, 5]])

A: Inspired by Ashkan's answer, you can also try the following.
>>> x, y = np.meshgrid(x, y)
>>> np.concatenate([x.flatten().reshape(-1,1), y.flatten().reshape(-1,1)], axis=1)

This will give you the required cartesian product!

A: This is a generalized version of the accepted answer (Cartesian product of multiple arrays using numpy.tile and numpy.repeat functions).
from functools import reduce
from operator import mul

def cartesian_product(arrays):
    return np.vstack([
        np.tile(
            np.repeat(arrays[j], reduce(mul, map(len, arrays[j+1:]), 1)),
            reduce(mul, map(len, arrays[:j]), 1),
        )
        for j in range(len(arrays))
    ]).T

A: If you are willing to use PyTorch, I should think it is highly efficient:
>>> import torch

>>> torch.cartesian_prod(torch.as_tensor(x), torch.as_tensor(y))
tensor([[1, 4],
        [1, 5],
        [2, 4],
        [2, 5],
        [3, 4],
        [3, 5]])

and you can easily get a numpy array:
>>> torch.cartesian_prod(torch.as_tensor(x), torch.as_tensor(y)).numpy()
array([[1, 4],
       [1, 5],
       [2, 4],
       [2, 5],
       [3, 4],
       [3, 5]])
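A final sketch (mine, not from the thread above): the broadcasting answer earlier wondered whether a similar trick can produce the pairs themselves. It can, by stacking the broadcast views along a new last axis. The helper name pairs_via_broadcasting is my own invention, and the snippet assumes 1-D inputs like the x and y in the question:
import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5])

def pairs_via_broadcasting(a, b):
    # broadcast_arrays expands both views to the common (len(b), len(a)) shape
    aa, bb = np.broadcast_arrays(a[None, :], b[:, None])
    # stacking on a new last axis and flattening yields one 2-D point per row
    return np.stack([aa, bb], axis=-1).reshape(-1, 2)

print(pairs_via_broadcasting(x, y))
# [[1 4]
#  [2 4]
#  [3 4]
#  [1 5]
#  [2 5]
#  [3 5]]

This matches the ordering asked for in the question, since the first coordinate varies fastest along the broadcast rows.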
Cartesian product of x and y array points into single array of 2D points
I have two numpy arrays that define the x and y axes of a grid. For example: x = numpy.array([1,2,3]) y = numpy.array([4,5]) I'd like to generate the Cartesian product of these arrays to generate: array([[1,4],[2,4],[3,4],[1,5],[2,5],[3,5]]) In a way that's not terribly inefficient since I need to do this many times in a loop. I'm assuming that converting them to a Python list and using itertools.product and back to a numpy array is not the most efficient form.
[ "A canonical cartesian_product (almost)\nThere are many approaches to this problem with different properties. Some are faster than others, and some are more general-purpose. After a lot of testing and tweaking, I've found that the following function, which calculates an n-dimensional cartesian_product, is faster than most others for many inputs. For a pair of approaches that are slightly more complex, but are even a bit faster in many cases, see the answer by Paul Panzer.\nGiven that answer, this is no longer the fastest implementation of the cartesian product in numpy that I'm aware of. However, I think its simplicity will continue to make it a useful benchmark for future improvement:\ndef cartesian_product(*arrays):\n la = len(arrays)\n dtype = numpy.result_type(*arrays)\n arr = numpy.empty([len(a) for a in arrays] + [la], dtype=dtype)\n for i, a in enumerate(numpy.ix_(*arrays)):\n arr[...,i] = a\n return arr.reshape(-1, la)\n\nIt's worth mentioning that this function uses ix_ in an unusual way; whereas the documented use of ix_ is to generate indices into an array, it just so happens that arrays with the same shape can be used for broadcasted assignment. Many thanks to mgilson, who inspired me to try using ix_ this way, and to unutbu, who provided some extremely helpful feedback on this answer, including the suggestion to use numpy.result_type. \nNotable alternatives\nIt's sometimes faster to write contiguous blocks of memory in Fortran order. That's the basis of this alternative, cartesian_product_transpose, which has proven faster on some hardware than cartesian_product (see below). However, Paul Panzer's answer, which uses the same principle, is even faster. Still, I include this here for interested readers: \ndef cartesian_product_transpose(*arrays):\n broadcastable = numpy.ix_(*arrays)\n broadcasted = numpy.broadcast_arrays(*broadcastable)\n rows, cols = numpy.prod(broadcasted[0].shape), len(broadcasted)\n dtype = numpy.result_type(*arrays)\n\n out = numpy.empty(rows * cols, dtype=dtype)\n start, end = 0, rows\n for a in broadcasted:\n out[start:end] = a.reshape(-1)\n start, end = end, end + rows\n return out.reshape(cols, rows).T\n\nAfter coming to understand Panzer's approach, I wrote a new version that's almost as fast as his, and is almost as simple as cartesian_product:\ndef cartesian_product_simple_transpose(arrays):\n la = len(arrays)\n dtype = numpy.result_type(*arrays)\n arr = numpy.empty([la] + [len(a) for a in arrays], dtype=dtype)\n for i, a in enumerate(numpy.ix_(*arrays)):\n arr[i, ...] = a\n return arr.reshape(la, -1).T\n\nThis appears to have some constant-time overhead that makes it run slower than Panzer's for small inputs. But for larger inputs, in all the tests I ran, it performs just as well as his fastest implementation (cartesian_product_transpose_pp).\nIn following sections, I include some tests of other alternatives. These are now somewhat out of date, but rather than duplicate effort, I've decided to leave them here out of historical interest. For up-to-date tests, see Panzer's answer, as well as Nico Schlömer's.\nTests against alternatives\nHere is a battery of tests that show the performance boost that some of these functions provide relative to a number of alternatives. All the tests shown here were performed on a quad-core machine, running Mac OS 10.12.5, Python 3.6.1, and numpy 1.12.1. Variations on hardware and software are known to produce different results, so YMMV. 
Run these tests for yourself to be sure!\nDefinitions: \nimport numpy\nimport itertools\nfrom functools import reduce\n\n### Two-dimensional products ###\n\ndef repeat_product(x, y):\n return numpy.transpose([numpy.tile(x, len(y)), \n numpy.repeat(y, len(x))])\n\ndef dstack_product(x, y):\n return numpy.dstack(numpy.meshgrid(x, y)).reshape(-1, 2)\n\n### Generalized N-dimensional products ###\n\ndef cartesian_product(*arrays):\n la = len(arrays)\n dtype = numpy.result_type(*arrays)\n arr = numpy.empty([len(a) for a in arrays] + [la], dtype=dtype)\n for i, a in enumerate(numpy.ix_(*arrays)):\n arr[...,i] = a\n return arr.reshape(-1, la)\n\ndef cartesian_product_transpose(*arrays):\n broadcastable = numpy.ix_(*arrays)\n broadcasted = numpy.broadcast_arrays(*broadcastable)\n rows, cols = numpy.prod(broadcasted[0].shape), len(broadcasted)\n dtype = numpy.result_type(*arrays)\n\n out = numpy.empty(rows * cols, dtype=dtype)\n start, end = 0, rows\n for a in broadcasted:\n out[start:end] = a.reshape(-1)\n start, end = end, end + rows\n return out.reshape(cols, rows).T\n\n# from https://stackoverflow.com/a/1235363/577088\n\ndef cartesian_product_recursive(*arrays, out=None):\n arrays = [numpy.asarray(x) for x in arrays]\n dtype = arrays[0].dtype\n\n n = numpy.prod([x.size for x in arrays])\n if out is None:\n out = numpy.zeros([n, len(arrays)], dtype=dtype)\n\n m = n // arrays[0].size\n out[:,0] = numpy.repeat(arrays[0], m)\n if arrays[1:]:\n cartesian_product_recursive(arrays[1:], out=out[0:m,1:])\n for j in range(1, arrays[0].size):\n out[j*m:(j+1)*m,1:] = out[0:m,1:]\n return out\n\ndef cartesian_product_itertools(*arrays):\n return numpy.array(list(itertools.product(*arrays)))\n\n### Test code ###\n\nname_func = [('repeat_product', \n repeat_product), \n ('dstack_product', \n dstack_product), \n ('cartesian_product', \n cartesian_product), \n ('cartesian_product_transpose', \n cartesian_product_transpose), \n ('cartesian_product_recursive', \n cartesian_product_recursive), \n ('cartesian_product_itertools', \n cartesian_product_itertools)]\n\ndef test(in_arrays, test_funcs):\n global func\n global arrays\n arrays = in_arrays\n for name, func in test_funcs:\n print('{}:'.format(name))\n %timeit func(*arrays)\n\ndef test_all(*in_arrays):\n test(in_arrays, name_func)\n\n# `cartesian_product_recursive` throws an \n# unexpected error when used on more than\n# two input arrays, so for now I've removed\n# it from these tests.\n\ndef test_cartesian(*in_arrays):\n test(in_arrays, name_func[2:4] + name_func[-1:])\n\nx10 = [numpy.arange(10)]\nx50 = [numpy.arange(50)]\nx100 = [numpy.arange(100)]\nx500 = [numpy.arange(500)]\nx1000 = [numpy.arange(1000)]\n\nTest results:\nIn [2]: test_all(*(x100 * 2))\nrepeat_product:\n67.5 µs ± 633 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\ndstack_product:\n67.7 µs ± 1.09 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)\ncartesian_product:\n33.4 µs ± 558 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\ncartesian_product_transpose:\n67.7 µs ± 932 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\ncartesian_product_recursive:\n215 µs ± 6.01 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\ncartesian_product_itertools:\n3.65 ms ± 38.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\nIn [3]: test_all(*(x500 * 2))\nrepeat_product:\n1.31 ms ± 9.28 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\ndstack_product:\n1.27 ms ± 7.5 µs per loop (mean ± std. dev. 
of 7 runs, 1000 loops each)\ncartesian_product:\n375 µs ± 4.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\ncartesian_product_transpose:\n488 µs ± 8.88 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\ncartesian_product_recursive:\n2.21 ms ± 38.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\ncartesian_product_itertools:\n105 ms ± 1.17 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\n\nIn [4]: test_all(*(x1000 * 2))\nrepeat_product:\n10.2 ms ± 132 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\ndstack_product:\n12 ms ± 120 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\ncartesian_product:\n4.75 ms ± 57.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\ncartesian_product_transpose:\n7.76 ms ± 52.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\ncartesian_product_recursive:\n13 ms ± 209 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\ncartesian_product_itertools:\n422 ms ± 7.77 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nIn all cases, cartesian_product as defined at the beginning of this answer is fastest.\nFor those functions that accept an arbitrary number of input arrays, it's worth checking performance when len(arrays) > 2 as well. (Until I can determine why cartesian_product_recursive throws an error in this case, I've removed it from these tests.) \nIn [5]: test_cartesian(*(x100 * 3))\ncartesian_product:\n8.8 ms ± 138 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\ncartesian_product_transpose:\n7.87 ms ± 91.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\ncartesian_product_itertools:\n518 ms ± 5.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nIn [6]: test_cartesian(*(x50 * 4))\ncartesian_product:\n169 ms ± 5.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\ncartesian_product_transpose:\n184 ms ± 4.32 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\ncartesian_product_itertools:\n3.69 s ± 73.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nIn [7]: test_cartesian(*(x10 * 6))\ncartesian_product:\n26.5 ms ± 449 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)\ncartesian_product_transpose:\n16 ms ± 133 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\ncartesian_product_itertools:\n728 ms ± 16 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nIn [8]: test_cartesian(*(x10 * 7))\ncartesian_product:\n650 ms ± 8.14 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\ncartesian_product_transpose:\n518 ms ± 7.09 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\ncartesian_product_itertools:\n8.13 s ± 122 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nAs these tests show, cartesian_product remains competitive until the number of input arrays rises above (roughly) four. After that, cartesian_product_transpose does have a slight edge.\nIt's worth reiterating that users with other hardware and operating systems may see different results. For example, unutbu reports seeing the following results for these tests using Ubuntu 14.04, Python 3.4.3, and numpy 1.14.0.dev0+b7050a9:\n>>> %timeit cartesian_product_transpose(x500, y500) \n1000 loops, best of 3: 682 µs per loop\n>>> %timeit cartesian_product(x500, y500)\n1000 loops, best of 3: 1.55 ms per loop\n\nBelow, I go into a few details about earlier tests I've run along these lines. The relative performance of these approaches has changed over time, for different hardware and different versions of Python and numpy. 
While it's not immediately useful for people using up-to-date versions of numpy, it illustrates how things have changed since the first version of this answer.\nA simple alternative: meshgrid + dstack\nThe currently accepted answer uses tile and repeat to broadcast two arrays together. But the meshgrid function does practically the same thing. Here's the output of tile and repeat before being passed to transpose:\nIn [1]: import numpy\nIn [2]: x = numpy.array([1,2,3])\n ...: y = numpy.array([4,5])\n ...: \n\nIn [3]: [numpy.tile(x, len(y)), numpy.repeat(y, len(x))]\nOut[3]: [array([1, 2, 3, 1, 2, 3]), array([4, 4, 4, 5, 5, 5])]\n\nAnd here's the output of meshgrid:\nIn [4]: numpy.meshgrid(x, y)\nOut[4]: \n[array([[1, 2, 3],\n [1, 2, 3]]), array([[4, 4, 4],\n [5, 5, 5]])]\n\nAs you can see, it's almost identical. We need only reshape the result to get exactly the same result. \nIn [5]: xt, xr = numpy.meshgrid(x, y)\n ...: [xt.ravel(), xr.ravel()]\nOut[5]: [array([1, 2, 3, 1, 2, 3]), array([4, 4, 4, 5, 5, 5])]\n\nRather than reshaping at this point, though, we could pass the output of meshgrid to dstack and reshape afterwards, which saves some work:\nIn [6]: numpy.dstack(numpy.meshgrid(x, y)).reshape(-1, 2)\nOut[6]: \narray([[1, 4],\n [2, 4],\n [3, 4],\n [1, 5],\n [2, 5],\n [3, 5]])\n\nContrary to the claim in this comment, I've seen no evidence that different inputs will produce differently shaped outputs, and as the above demonstrates, they do very similar things, so it would be quite strange if they did. Please let me know if you find a counterexample.\nTesting meshgrid + dstack vs. repeat + transpose\nThe relative performance of these two approaches has changed over time. In an earlier version of Python (2.7), the result using meshgrid + dstack was noticeably faster for small inputs. (Note that these tests are from an old version of this answer.) Definitions:\n>>> def repeat_product(x, y):\n... return numpy.transpose([numpy.tile(x, len(y)), \n numpy.repeat(y, len(x))])\n...\n>>> def dstack_product(x, y):\n... return numpy.dstack(numpy.meshgrid(x, y)).reshape(-1, 2)\n... \n\nFor moderately-sized input, I saw a significant speedup. But I retried these tests with more recent versions of Python (3.6.1) and numpy (1.12.1), on a newer machine. The two approaches are almost identical now.\nOld Test\n>>> x, y = numpy.arange(500), numpy.arange(500)\n>>> %timeit repeat_product(x, y)\n10 loops, best of 3: 62 ms per loop\n>>> %timeit dstack_product(x, y)\n100 loops, best of 3: 12.2 ms per loop\n\nNew Test\nIn [7]: x, y = numpy.arange(500), numpy.arange(500)\nIn [8]: %timeit repeat_product(x, y)\n1.32 ms ± 24.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\nIn [9]: %timeit dstack_product(x, y)\n1.26 ms ± 8.47 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n\nAs always, YMMV, but this suggests that in recent versions of Python and numpy, these are interchangeable. \nGeneralized product functions\nIn general, we might expect that using built-in functions will be faster for small inputs, while for large inputs, a purpose-built function might be faster. Furthermore for a generalized n-dimensional product, tile and repeat won't help, because they don't have clear higher-dimensional analogues. 
So it's worth investigating the behavior of purpose-built functions as well.\nMost of the relevant tests appear at the beginning of this answer, but here are a few of the tests performed on earlier versions of Python and numpy for comparison.\nThe cartesian function defined in another answer used to perform pretty well for larger inputs. (It's the same as the function called cartesian_product_recursive above.) In order to compare cartesian to dstack_prodct, we use just two dimensions.\nHere again, the old test showed a significant difference, while the new test shows almost none.\nOld Test\n>>> x, y = numpy.arange(1000), numpy.arange(1000)\n>>> %timeit cartesian([x, y])\n10 loops, best of 3: 25.4 ms per loop\n>>> %timeit dstack_product(x, y)\n10 loops, best of 3: 66.6 ms per loop\n\nNew Test\nIn [10]: x, y = numpy.arange(1000), numpy.arange(1000)\nIn [11]: %timeit cartesian([x, y])\n12.1 ms ± 199 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\nIn [12]: %timeit dstack_product(x, y)\n12.7 ms ± 334 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\nAs before, dstack_product still beats cartesian at smaller scales.\nNew Test (redundant old test not shown)\nIn [13]: x, y = numpy.arange(100), numpy.arange(100)\nIn [14]: %timeit cartesian([x, y])\n215 µs ± 4.75 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\nIn [15]: %timeit dstack_product(x, y)\n65.7 µs ± 1.15 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)\n\nThese distinctions are, I think, interesting and worth recording; but they are academic in the end. As the tests at the beginning of this answer showed, all of these versions are almost always slower than cartesian_product, defined at the very beginning of this answer -- which is itself a bit slower than the fastest implementations among the answers to this question.\n", ">>> numpy.transpose([numpy.tile(x, len(y)), numpy.repeat(y, len(x))])\narray([[1, 4],\n [2, 4],\n [3, 4],\n [1, 5],\n [2, 5],\n [3, 5]])\n\nSee Using numpy to build an array of all combinations of two arrays for a general solution for computing the Cartesian product of N arrays.\n", "You can just do normal list comprehension in python\nx = numpy.array([1,2,3])\ny = numpy.array([4,5])\n[[x0, y0] for x0 in x for y0 in y]\n\nwhich should give you\n[[1, 4], [1, 5], [2, 4], [2, 5], [3, 4], [3, 5]]\n\n", "I was interested in this as well and did a little performance comparison, perhaps somewhat clearer than in @senderle's answer.\nFor two arrays (the classical case):\n\nFor four arrays:\n\n(Note that the length the arrays is only a few dozen entries here.)\n\nCode to reproduce the plots:\nfrom functools import reduce\nimport itertools\nimport numpy\nimport perfplot\n\n\ndef dstack_product(arrays):\n return numpy.dstack(numpy.meshgrid(*arrays, indexing=\"ij\")).reshape(-1, len(arrays))\n\n\n# Generalized N-dimensional products\ndef cartesian_product(arrays):\n la = len(arrays)\n dtype = numpy.find_common_type([a.dtype for a in arrays], [])\n arr = numpy.empty([len(a) for a in arrays] + [la], dtype=dtype)\n for i, a in enumerate(numpy.ix_(*arrays)):\n arr[..., i] = a\n return arr.reshape(-1, la)\n\n\ndef cartesian_product_transpose(arrays):\n broadcastable = numpy.ix_(*arrays)\n broadcasted = numpy.broadcast_arrays(*broadcastable)\n rows, cols = reduce(numpy.multiply, broadcasted[0].shape), len(broadcasted)\n dtype = numpy.find_common_type([a.dtype for a in arrays], [])\n\n out = numpy.empty(rows * cols, dtype=dtype)\n start, end = 0, rows\n for a in broadcasted:\n out[start:end] = 
a.reshape(-1)\n start, end = end, end + rows\n return out.reshape(cols, rows).T\n\n\n# from https://stackoverflow.com/a/1235363/577088\ndef cartesian_product_recursive(arrays, out=None):\n arrays = [numpy.asarray(x) for x in arrays]\n dtype = arrays[0].dtype\n\n n = numpy.prod([x.size for x in arrays])\n if out is None:\n out = numpy.zeros([n, len(arrays)], dtype=dtype)\n\n m = n // arrays[0].size\n out[:, 0] = numpy.repeat(arrays[0], m)\n if arrays[1:]:\n cartesian_product_recursive(arrays[1:], out=out[0:m, 1:])\n for j in range(1, arrays[0].size):\n out[j * m : (j + 1) * m, 1:] = out[0:m, 1:]\n return out\n\n\ndef cartesian_product_itertools(arrays):\n return numpy.array(list(itertools.product(*arrays)))\n\n\nperfplot.show(\n setup=lambda n: 2 * (numpy.arange(n, dtype=float),),\n n_range=[2 ** k for k in range(13)],\n # setup=lambda n: 4 * (numpy.arange(n, dtype=float),),\n # n_range=[2 ** k for k in range(6)],\n kernels=[\n dstack_product,\n cartesian_product,\n cartesian_product_transpose,\n cartesian_product_recursive,\n cartesian_product_itertools,\n ],\n logx=True,\n logy=True,\n xlabel=\"len(a), len(b)\",\n equality_check=None,\n)\n\n", "As of Oct. 2017, numpy now has a generic np.stack function that takes an axis parameter. Using it, we can have a \"generalized cartesian product\" using the \"dstack and meshgrid\" technique:\nimport numpy as np\n\ndef cartesian_product(*arrays):\n ndim = len(arrays)\n return (np.stack(np.meshgrid(*arrays), axis=-1)\n .reshape(-1, ndim))\n\na = np.array([1,2])\nb = np.array([10,20])\ncartesian_product(a,b)\n\n# output:\n# array([[ 1, 10],\n# [ 2, 10],\n# [ 1, 20],\n# [ 2, 20]]) \n\nNote on the axis=-1 parameter. This is the last (inner-most) axis in the result. It is equivalent to using axis=ndim.\nOne other comment, since Cartesian products blow up very quickly, unless we need to realize the array in memory for some reason, if the product is very large, we may want to make use of itertools and use the values on-the-fly.\n", "Building on @senderle's exemplary ground work I've come up with two versions - one for C and one for Fortran layouts - that are often a bit faster.\n\ncartesian_product_transpose_pp is - unlike @senderle's cartesian_product_transpose which uses a different strategy altogether - a version of cartesion_product that uses the more favorable transpose memory layout + some very minor optimizations.\ncartesian_product_pp sticks with the original memory layout. What makes it fast is its using contiguous copying. Contiguous copies turn out to be so much faster that copying a full block of memory even though only part of it contains valid data is preferable to only copying the valid bits.\n\nSome perfplots. I made separate ones for C and Fortran layouts, because these are different tasks IMO.\nNames ending in 'pp' are my approaches. 
\n1) many tiny factors (2 elements each)\n\n2) many small factors (4 elements each)\n\n3) three factors of equal length\n\n4) two factors of equal length\n\nCode (need to do separate runs for each plot b/c I couldn't figure out how to reset; also need to edit / comment in / out appropriately):\nimport numpy\nimport numpy as np\nfrom functools import reduce\nimport itertools\nimport timeit\nimport perfplot\n\ndef dstack_product(arrays):\n return numpy.dstack(\n numpy.meshgrid(*arrays, indexing='ij')\n ).reshape(-1, len(arrays))\n\ndef cartesian_product_transpose_pp(arrays):\n la = len(arrays)\n dtype = numpy.result_type(*arrays)\n arr = numpy.empty((la, *map(len, arrays)), dtype=dtype)\n idx = slice(None), *itertools.repeat(None, la)\n for i, a in enumerate(arrays):\n arr[i, ...] = a[idx[:la-i]]\n return arr.reshape(la, -1).T\n\ndef cartesian_product(arrays):\n la = len(arrays)\n dtype = numpy.result_type(*arrays)\n arr = numpy.empty([len(a) for a in arrays] + [la], dtype=dtype)\n for i, a in enumerate(numpy.ix_(*arrays)):\n arr[...,i] = a\n return arr.reshape(-1, la)\n\ndef cartesian_product_transpose(arrays):\n broadcastable = numpy.ix_(*arrays)\n broadcasted = numpy.broadcast_arrays(*broadcastable)\n rows, cols = numpy.prod(broadcasted[0].shape), len(broadcasted)\n dtype = numpy.result_type(*arrays)\n\n out = numpy.empty(rows * cols, dtype=dtype)\n start, end = 0, rows\n for a in broadcasted:\n out[start:end] = a.reshape(-1)\n start, end = end, end + rows\n return out.reshape(cols, rows).T\n\nfrom itertools import accumulate, repeat, chain\n\ndef cartesian_product_pp(arrays, out=None):\n la = len(arrays)\n L = *map(len, arrays), la\n dtype = numpy.result_type(*arrays)\n arr = numpy.empty(L, dtype=dtype)\n arrs = *accumulate(chain((arr,), repeat(0, la-1)), np.ndarray.__getitem__),\n idx = slice(None), *itertools.repeat(None, la-1)\n for i in range(la-1, 0, -1):\n arrs[i][..., i] = arrays[i][idx[:la-i]]\n arrs[i-1][1:] = arrs[i]\n arr[..., 0] = arrays[0][idx]\n return arr.reshape(-1, la)\n\ndef cartesian_product_itertools(arrays):\n return numpy.array(list(itertools.product(*arrays)))\n\n\n# from https://stackoverflow.com/a/1235363/577088\ndef cartesian_product_recursive(arrays, out=None):\n arrays = [numpy.asarray(x) for x in arrays]\n dtype = arrays[0].dtype\n\n n = numpy.prod([x.size for x in arrays])\n if out is None:\n out = numpy.zeros([n, len(arrays)], dtype=dtype)\n\n m = n // arrays[0].size\n out[:, 0] = numpy.repeat(arrays[0], m)\n if arrays[1:]:\n cartesian_product_recursive(arrays[1:], out=out[0:m, 1:])\n for j in range(1, arrays[0].size):\n out[j*m:(j+1)*m, 1:] = out[0:m, 1:]\n return out\n\n### Test code ###\nif False:\n perfplot.save('cp_4el_high.png',\n setup=lambda n: n*(numpy.arange(4, dtype=float),),\n n_range=list(range(6, 11)),\n kernels=[\n dstack_product,\n cartesian_product_recursive,\n cartesian_product,\n# cartesian_product_transpose,\n cartesian_product_pp,\n# cartesian_product_transpose_pp,\n ],\n logx=False,\n logy=True,\n xlabel='#factors',\n equality_check=None\n )\nelse:\n perfplot.save('cp_2f_T.png',\n setup=lambda n: 2*(numpy.arange(n, dtype=float),),\n n_range=[2**k for k in range(5, 11)],\n kernels=[\n# dstack_product,\n# cartesian_product_recursive,\n# cartesian_product,\n cartesian_product_transpose,\n# cartesian_product_pp,\n cartesian_product_transpose_pp,\n ],\n logx=True,\n logy=True,\n xlabel='length of each factor',\n equality_check=None\n )\n\n", "The Scikit-learn package has a fast implementation of exactly this:\nfrom sklearn.utils.extmath 
import cartesian\nproduct = cartesian((x,y))\n\nNote that the convention of this implementation is different from what you want, if you care about the order of the output. For your exact ordering, you can do\nproduct = cartesian((y,x))[:, ::-1]\n\n", "I used @kennytm answer for a while, but when trying to do the same in TensorFlow, but I found that TensorFlow has no equivalent of numpy.repeat(). After a little experimentation, I think I found a more general solution for arbitrary vectors of points.\nFor numpy:\nimport numpy as np\n\ndef cartesian_product(*args: np.ndarray) -> np.ndarray:\n \"\"\"\n Produce the cartesian product of arbitrary length vectors.\n\n Parameters\n ----------\n np.ndarray args\n vector of points of interest in each dimension\n\n Returns\n -------\n np.ndarray\n the cartesian product of size [m x n] wherein:\n m = prod([len(a) for a in args])\n n = len(args)\n \"\"\"\n for i, a in enumerate(args):\n assert a.ndim == 1, \"arg {:d} is not rank 1\".format(i)\n return np.concatenate([np.reshape(xi, [-1, 1]) for xi in np.meshgrid(*args)], axis=1)\n\nand for TensorFlow:\nimport tensorflow as tf\n\ndef cartesian_product(*args: tf.Tensor) -> tf.Tensor:\n \"\"\"\n Produce the cartesian product of arbitrary length vectors.\n\n Parameters\n ----------\n tf.Tensor args\n vector of points of interest in each dimension\n\n Returns\n -------\n tf.Tensor\n the cartesian product of size [m x n] wherein:\n m = prod([len(a) for a in args])\n n = len(args)\n \"\"\"\n for i, a in enumerate(args):\n tf.assert_rank(a, 1, message=\"arg {:d} is not rank 1\".format(i))\n return tf.concat([tf.reshape(xi, [-1, 1]) for xi in tf.meshgrid(*args)], axis=1)\n\n", "More generally, if you have two 2d numpy arrays a and b, and you want to concatenate every row of a to every row of b (A cartesian product of rows, kind of like a join in a database), you can use this method:\nimport numpy\ndef join_2d(a, b):\n assert a.dtype == b.dtype\n a_part = numpy.tile(a, (len(b), 1))\n b_part = numpy.repeat(b, len(a), axis=0)\n return numpy.hstack((a_part, b_part))\n\n", "The fastest you can get is either by combining a generator expression with the map function:\nimport numpy\nimport datetime\na = np.arange(1000)\nb = np.arange(200)\n\nstart = datetime.datetime.now()\n\nfoo = (item for sublist in [list(map(lambda x: (x,i),a)) for i in b] for item in sublist)\n\nprint (list(foo))\n\nprint ('execution time: {} s'.format((datetime.datetime.now() - start).total_seconds()))\n\nOutputs (actually the whole resulting list is printed):\n[(0, 0), (1, 0), ...,(998, 199), (999, 199)]\nexecution time: 1.253567 s\n\nor by using a double generator expression:\na = np.arange(1000)\nb = np.arange(200)\n\nstart = datetime.datetime.now()\n\nfoo = ((x,y) for x in a for y in b)\n\nprint (list(foo))\n\nprint ('execution time: {} s'.format((datetime.datetime.now() - start).total_seconds()))\n\nOutputs (whole list printed):\n[(0, 0), (1, 0), ...,(998, 199), (999, 199)]\nexecution time: 1.187415 s\n\nTake into account that most of the computation time goes into the printing command. The generator calculations are otherwise decently efficient. 
Without printing the calculation times are:\nexecution time: 0.079208 s\n\nfor generator expression + map function and:\nexecution time: 0.007093 s\n\nfor the double generator expression.\nIf what you actually want is to calculate the actual product of each of the coordinate pairs, the fastest is to solve it as a numpy matrix product:\na = np.arange(1000)\nb = np.arange(200)\n\nstart = datetime.datetime.now()\n\nfoo = np.dot(np.asmatrix([[i,0] for i in a]), np.asmatrix([[i,0] for i in b]).T)\n\nprint (foo)\n\nprint ('execution time: {} s'.format((datetime.datetime.now() - start).total_seconds()))\n\nOutputs:\n [[ 0 0 0 ..., 0 0 0]\n [ 0 1 2 ..., 197 198 199]\n [ 0 2 4 ..., 394 396 398]\n ..., \n [ 0 997 1994 ..., 196409 197406 198403]\n [ 0 998 1996 ..., 196606 197604 198602]\n [ 0 999 1998 ..., 196803 197802 198801]]\nexecution time: 0.003869 s\n\nand without printing (in this case it doesn't save much since only a tiny piece of the matrix is actually printed out):\nexecution time: 0.003083 s\n\n", "This can also be easily done by using itertools.product method \nfrom itertools import product\nimport numpy as np\n\nx = np.array([1, 2, 3])\ny = np.array([4, 5])\ncart_prod = np.array(list(product(*[x, y])),dtype='int32')\n\nResult:\n array([[1, 4],\n [1, 5], \n [2, 4],\n [2, 5],\n [3, 4],\n [3, 5]], dtype=int32)\nExecution time: 0.000155 s\n", "In the specific case that you need to perform simple operations such as addition on each pair, you can introduce an extra dimension and let broadcasting do the job:\n>>> a, b = np.array([1,2,3]), np.array([10,20,30])\n>>> a[None,:] + b[:,None]\narray([[11, 12, 13],\n [21, 22, 23],\n [31, 32, 33]])\n\nI'm not sure if there is any similar way to actually get the pairs themselves.\n", "I'm a bit late to the party, but I encoutered a tricky variant of that problem.\nLet's say I want the cartesian product of several arrays, but that cartesian product ends up being much larger than the computers' memory (however, the computation done with that product are fast, or at least parallelizable).\nThe obvious solution is to divide this cartesian product in chunks, and treat these chunks one after the other (in sort of a \"streaming\" manner). You can do that easily with itertools.product, but it's horrendously slow. Also, none of the proposed solutions here (as fast as they are) give us this possibility. The solution I propose uses Numba, and is slightly faster than the \"canonical\" cartesian_product mentioned here. 
It's pretty long because I tried to optimize it everywhere I could.\nimport numba as nb\nimport numpy as np\nfrom typing import List\n\n\n@nb.njit(nb.types.Tuple((nb.int32[:, :],\n nb.int32[:]))(nb.int32[:],\n nb.int32[:],\n nb.int64, nb.int64))\ndef cproduct(sizes: np.ndarray, current_tuple: np.ndarray, start_idx: int, end_idx: int):\n \"\"\"Generates ids tuples from start_id to end_id\"\"\"\n assert len(sizes) >= 2\n assert start_idx < end_idx\n\n tuples = np.zeros((end_idx - start_idx, len(sizes)), dtype=np.int32)\n tuple_idx = 0\n # stores the current combination\n current_tuple = current_tuple.copy()\n while tuple_idx < end_idx - start_idx:\n tuples[tuple_idx] = current_tuple\n current_tuple[0] += 1\n # using a condition here instead of including this in the inner loop\n # to gain a bit of speed: this is going to be tested each iteration,\n # and starting a loop to have it end right away is a bit silly\n if current_tuple[0] == sizes[0]:\n # the reset to 0 and subsequent increment amount to carrying\n # the number to the higher \"power\"\n current_tuple[0] = 0\n current_tuple[1] += 1\n for i in range(1, len(sizes) - 1):\n if current_tuple[i] == sizes[i]:\n # same as before, but in a loop, since this is going\n # to get called less often\n current_tuple[i + 1] += 1\n current_tuple[i] = 0\n else:\n break\n tuple_idx += 1\n return tuples, current_tuple\n\n\ndef chunked_cartesian_product_ids(sizes: List[int], chunk_size: int):\n \"\"\"Just generates chunks of the cartesian product of the ids of each\n input arrays (thus, we just need their sizes here, not the actual arrays)\"\"\"\n prod = np.prod(sizes)\n\n # putting the largest number at the front to more efficiently make use\n # of the cproduct numba function\n sizes = np.array(sizes, dtype=np.int32)\n sorted_idx = np.argsort(sizes)[::-1]\n sizes = sizes[sorted_idx]\n if chunk_size > prod:\n chunk_bounds = (np.array([0, prod])).astype(np.int64)\n else:\n num_chunks = np.maximum(np.ceil(prod / chunk_size), 2).astype(np.int32)\n chunk_bounds = (np.arange(num_chunks + 1) * chunk_size).astype(np.int64)\n chunk_bounds[-1] = prod\n current_tuple = np.zeros(len(sizes), dtype=np.int32)\n for start_idx, end_idx in zip(chunk_bounds[:-1], chunk_bounds[1:]):\n tuples, current_tuple = cproduct(sizes, current_tuple, start_idx, end_idx)\n # re-arrange columns to match the original order of the sizes list\n # before yielding\n yield tuples[:, np.argsort(sorted_idx)]\n\n\ndef chunked_cartesian_product(*arrays, chunk_size=2 ** 25):\n \"\"\"Returns chunks of the full cartesian product, with arrays of shape\n (chunk_size, n_arrays). 
The last chunk will obviously have the size of the\n remainder\"\"\"\n array_lengths = [len(array) for array in arrays]\n for array_ids_chunk in chunked_cartesian_product_ids(array_lengths, chunk_size):\n slices_lists = [arrays[i][array_ids_chunk[:, i]] for i in range(len(arrays))]\n yield np.vstack(slices_lists).swapaxes(0,1)\n\n\ndef cartesian_product(*arrays):\n \"\"\"Actual cartesian product, not chunked, still fast\"\"\"\n total_prod = np.prod([len(array) for array in arrays])\n return next(chunked_cartesian_product(*arrays, total_prod))\n\n\na = np.arange(0, 3)\nb = np.arange(8, 10)\nc = np.arange(13, 16)\nfor cartesian_tuples in chunked_cartesian_product(*[a, b, c], chunk_size=5):\n print(cartesian_tuples)\n\n\nThis would output our cartesian product in chunks of 5 3-uples:\n[[ 0 8 13]\n [ 0 8 14]\n [ 0 8 15]\n [ 1 8 13]\n [ 1 8 14]]\n[[ 1 8 15]\n [ 2 8 13]\n [ 2 8 14]\n [ 2 8 15]\n [ 0 9 13]]\n[[ 0 9 14]\n [ 0 9 15]\n [ 1 9 13]\n [ 1 9 14]\n [ 1 9 15]]\n[[ 2 9 13]\n [ 2 9 14]\n [ 2 9 15]]\n\nIf you're willing to understand what is being done here, the intuition behind the njitted function is to enumerate each \"number\" in a weird numerical base whose elements would be composed of the sizes of the input arrays (instead of the same number in regular binary, decimal or hexadecimal bases).\nObviously, this solution is interesting for large products. For small ones, the overhead might be a bit costly.\nNOTE: since numba is still under heavy development, i'm using numba 0.50 to run this, with python 3.6.\n", "Yet another one:\n>>>x1, y1 = np.meshgrid(x, y)\n>>>np.c_[x1.ravel(), y1.ravel()]\narray([[1, 4],\n [2, 4],\n [3, 4],\n [1, 5],\n [2, 5],\n [3, 5]])\n\n", "Inspired by Ashkan's answer, you can also try the following.\n>>> x, y = np.meshgrid(x, y)\n>>> np.concatenate([x.flatten().reshape(-1,1), y.flatten().reshape(-1,1)], axis=1)\n\nThis will give you the required cartesian product!\n", "This is a generalized version of the accepted answer (Cartesian product of multiple arrays using numpy.tile and numpy.repeat functions).\nfrom functors import reduce\nfrom operator import mul\n\ndef cartesian_product(arrays):\n return np.vstack(\n np.tile(\n np.repeat(arrays[j], reduce(mul, map(len, arrays[j+1:]), 1)),\n reduce(mul, map(len, arrays[:j]), 1),\n )\n for j in range(len(arrays))\n ).T\n\n", "If you are willing to use PyTorch, I should think it is highly efficient:\n>>> import torch\n\n>>> torch.cartesian_prod(torch.as_tensor(x), torch.as_tensor(y))\ntensor([[1, 4],\n [1, 5],\n [2, 4],\n [2, 5],\n [3, 4],\n [3, 5]])\n\n\nand you can easily get a numpy array:\n>>> torch.cartesian_prod(torch.as_tensor(x), torch.as_tensor(y)).numpy()\narray([[1, 4],\n [1, 5],\n [2, 4],\n [2, 5],\n [3, 4],\n [3, 5]])\n\n\n" ]
[ 191, 119, 63, 45, 21, 21, 12, 8, 4, 3, 3, 3, 1, 0, 0, 0, 0 ]
[]
[]
[ "cartesian_product", "numpy", "python" ]
stackoverflow_0011144513_cartesian_product_numpy_python.txt
Q: I don't know how to process Python txt file input data input.txt 3 3 1 3 3 | 3 2 7 8 | 4 1 5 1 | 5 I want to read it through Python file reading and then divide it into two matrices, like 1 3 3 2 7 8 1 5 1 3 4 5 but I don't know how. f=open("input.txt","r") # open input file line = f.readline() row = list(map(int, line.split()))[0] col = list(map(int, line.split()))[1] print("row = ",row, "col = ",col) main_matrix=[[]] sub_matrix=[] tmp_matrix=[[]] for i in range(0,row): line = f.readline() tmp_matrix = line[0:row+2] tmp_matrix = (list(map(int, tmp_matrix.split()))) main_matrix=tmp_matrix print(main_matrix) I thought this would separate the main matrix, but the result was that only the last row was separated A: Here is a better way to handle your input, using the suggestions from my comment. f=open("input.txt","r") line = f.readline().strip().split() rows = int(line[0]) cols = int(line[1]) print("row = ",rows, "col = ",cols) main_matrix=[] sub_matrix=[] for line in f: parts = line.strip().split() main_matrix.append( [int(k) for k in parts[:cols]] ) sub_matrix.append( int(parts[-1]) ) print(main_matrix) print(sub_matrix) Output: row = 3 col = 3 [[1, 3, 3], [2, 7, 8], [1, 5, 1]] [3, 4, 5]
I don't know how to process Python txt file input data
input.txt 3 3 1 3 3 | 3 2 7 8 | 4 1 5 1 | 5 I want to read it through Python file reading and then divide it into two matrices, like 1 3 3 2 7 8 1 5 1 3 4 5 but I don't know how. f=open("input.txt","r") # open input file line = f.readline() row = list(map(int, line.split()))[0] col = list(map(int, line.split()))[1] print("row = ",row, "col = ",col) main_matrix=[[]] sub_matrix=[] tmp_matrix=[[]] for i in range(0,row): line = f.readline() tmp_matrix = line[0:row+2] tmp_matrix = (list(map(int, tmp_matrix.split()))) main_matrix=tmp_matrix print(main_matrix) I thought this would separate the main matrix, but the result was that only the last row was separated
[ "Here is a better way to handle your input, using the suggestions from my comment.\nf=open(\"input.txt\",\"r\")\nline = f.readline().strip().split()\nrows = int(line[0])\ncols = int(line[1])\n\nprint(\"row = \",rows, \"col = \",cols)\nmain_matrix=[]\nsub_matrix=[]\n\nfor line in f:\n parts = line.strip().split()\n main_matrix.append( [int(k) for k in parts[:cols]] )\n sub_matrix.append( int(parts[-1]) )\n\nprint(main_matrix)\nprint(sub_matrix)\n\nOutput:\nrow = 3 col = 3\n[[1, 3, 3], [2, 7, 8], [1, 5, 1]]\n[3, 4, 5]\n\n" ]
[ 0 ]
[]
[]
[ "input", "matrix", "python" ]
stackoverflow_0074579099_input_matrix_python.txt
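A compact alternative to the loop in the answer above, under the assumption that numpy is available and that every data row has the same layout as in the question (matrix entries, a literal "|", then one value):

import numpy as np

# read everything after the header row as strings; the '|' column comes along
raw = np.loadtxt("input.txt", skiprows=1, dtype=str)
main_matrix = raw[:, :-2].astype(int)   # all columns before the '|'
sub_matrix = raw[:, -1].astype(int)     # the value after the '|'
print(main_matrix.tolist())
print(sub_matrix.tolist())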
Q: My search in python using tree view is not working/displaying the search results This is my two functions which operate my search. The problem seems to occur with my search function when I binded it to my key releases on my search entries. However when I search with my button it works with no error messages . def SearchCustomer(self): connection = sqlite3.connect("Guestrecord.db") cursor = connection.cursor() columnID = ["title","firstName","surname","dob","payment","email","address","postcode"] columnStr =["Title","FirstName","Surname","DOB","Payment","Email","Address","Postcode"] self.search_table = ttk.Treeview(self.search_frame,columns=columnID,show="headings") self.search_table.bind("<Motion>","break") for i in range(0,8): self.search_table.heading(columnID[i],text = columnStr[i]) self.search_table.column(columnID[i],minwidth = 0, width = 108) self.search_table.place(x=20,y=0) for GuestRec in cursor.execute("SELECT * FROM tb1Guest1"): self.search_table.insert("",END,values=GuestRec) connection.commit() connection.close() SearchCustomer(self) search_icon = Image.open("search icon.png") search_icon_resize = search_icon.resize((20,20)) search_icon = search_icon_resize search_icon_photo = ImageTk.PhotoImage(search_icon) self.search_firstname = Entry(self.search_frame2, width=30,bg="#e2f0d9",font=("Avenir Next",18),highlightthickness = 0,relief=FLAT) self.search_firstname.place(x = 140, y =0) self.search_firstname_label = Label(self.search_frame2,bg = "white", text = "First Name", font=("Avenir Next",20)) self.search_firstname_label.place(x= 20,y=0) self.search_Surname = Entry(self.search_frame2, width=30,bg="#e2f0d9",font=("Avenir Next",18),highlightthickness = 0,relief=FLAT) self.search_Surname.place(x = 140, y =40) self.search_Surname_label = Label(self.search_frame2,bg = "white", text = "Surname", font=("Avenir Next",20)) self.search_Surname_label.place(x= 20,y=40) searchButton = Button(self.search_frame2, image=search_icon_photo,height = 35, width =35, command=self.Search,bg ="white") searchButton.place(x= 500, y = 0) ## Binding entries self.search_firstname.bind("<KeyRelease>",self.Search) self.search_Surname.bind("<KeyRelease>",self.Search) def Search(self): sFirst_Name = self.search_firstname.get() sSurname = self.search_Surname.get() search_rec = (sFirst_Name,sSurname) search_rec_new = tuple(item for item in search_rec if item !="") search_fields = ["guestFirstname","guestFirstname"] search_SQL = "SELECT * FROM tb1Guest1 WHERE guestID LIKE '%'" for i in range(len(search_rec)): if search_rec[i] != "": search_SQL += " AND " + search_fields[i] + " LIKE '%' || ? 
|| '%'" connection = sqlite3.connect("Guestrecord.db") cursor = connection.cursor() # Clearing search results for rec in self.search_table.get_children(): self.search_table.delete(rec) #Display the records for GuestRec in cursor.execute(search_SQL,search_rec_new): self.search_table.insert("",END,values=GuestRec) connection.commit() connection.close() Then this is the message which pops up when I try to type in my search entries: It may have something to do with my .self but I don't know how I would over come this error Exception in Tkinter callback Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/tkinter/__init__.py", line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ TypeError: Main_menu.Search() takes 1 positional argument but 2 were given If someone could provide a solution to my problem it would be great as I have spend seemingly a lot of time trying to figure this error out. A: The function for a binding event expects an argument: the Event object. However you also use the same function for a button command which does not expect that extra argument. So you need to add an optional argument to Search(): # event argument is optional, if not provided, it will be None def Search(self, event=None): ...
My search in python using tree view is not working/displaying the search results
These are the two functions that operate my search. The problem seems to occur with my search function when I bound it to the key releases on my search entries. However, when I search with my button it works with no error messages. def SearchCustomer(self): connection = sqlite3.connect("Guestrecord.db") cursor = connection.cursor() columnID = ["title","firstName","surname","dob","payment","email","address","postcode"] columnStr =["Title","FirstName","Surname","DOB","Payment","Email","Address","Postcode"] self.search_table = ttk.Treeview(self.search_frame,columns=columnID,show="headings") self.search_table.bind("<Motion>","break") for i in range(0,8): self.search_table.heading(columnID[i],text = columnStr[i]) self.search_table.column(columnID[i],minwidth = 0, width = 108) self.search_table.place(x=20,y=0) for GuestRec in cursor.execute("SELECT * FROM tb1Guest1"): self.search_table.insert("",END,values=GuestRec) connection.commit() connection.close() SearchCustomer(self) search_icon = Image.open("search icon.png") search_icon_resize = search_icon.resize((20,20)) search_icon = search_icon_resize search_icon_photo = ImageTk.PhotoImage(search_icon) self.search_firstname = Entry(self.search_frame2, width=30,bg="#e2f0d9",font=("Avenir Next",18),highlightthickness = 0,relief=FLAT) self.search_firstname.place(x = 140, y =0) self.search_firstname_label = Label(self.search_frame2,bg = "white", text = "First Name", font=("Avenir Next",20)) self.search_firstname_label.place(x= 20,y=0) self.search_Surname = Entry(self.search_frame2, width=30,bg="#e2f0d9",font=("Avenir Next",18),highlightthickness = 0,relief=FLAT) self.search_Surname.place(x = 140, y =40) self.search_Surname_label = Label(self.search_frame2,bg = "white", text = "Surname", font=("Avenir Next",20)) self.search_Surname_label.place(x= 20,y=40) searchButton = Button(self.search_frame2, image=search_icon_photo,height = 35, width =35, command=self.Search,bg ="white") searchButton.place(x= 500, y = 0) ## Binding entries self.search_firstname.bind("<KeyRelease>",self.Search) self.search_Surname.bind("<KeyRelease>",self.Search) def Search(self): sFirst_Name = self.search_firstname.get() sSurname = self.search_Surname.get() search_rec = (sFirst_Name,sSurname) search_rec_new = tuple(item for item in search_rec if item !="") search_fields = ["guestFirstname","guestFirstname"] search_SQL = "SELECT * FROM tb1Guest1 WHERE guestID LIKE '%'" for i in range(len(search_rec)): if search_rec[i] != "": search_SQL += " AND " + search_fields[i] + " LIKE '%' || ? || '%'" connection = sqlite3.connect("Guestrecord.db") cursor = connection.cursor() # Clearing search results for rec in self.search_table.get_children(): self.search_table.delete(rec) #Display the records for GuestRec in cursor.execute(search_SQL,search_rec_new): self.search_table.insert("",END,values=GuestRec) connection.commit() connection.close() Then this is the message which pops up when I try to type in my search entries: It may have something to do with my .self but I don't know how I would overcome this error Exception in Tkinter callback Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/tkinter/__init__.py", line 1948, in __call__ return self.func(*args) ^^^^^^^^^^^^^^^^ TypeError: Main_menu.Search() takes 1 positional argument but 2 were given If someone could provide a solution to my problem it would be great, as I have spent a lot of time trying to figure this error out.
[ "The function for a binding event expects an argument: the Event object. However you also use the same function for a button command which does not expect that extra argument.\nSo you need to add an optional argument to Search():\n# event argument is optional, if not provided, it will be None\ndef Search(self, event=None):\n ...\n\n" ]
[ 0 ]
[]
[]
[ "python", "search", "tkinter", "treeview" ]
stackoverflow_0074579024_python_search_tkinter_treeview.txt
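A minimal, self-contained sketch of the pattern the answer above describes: one handler serving both a button command (called with no arguments) and a key binding (called with an Event):

import tkinter as tk

root = tk.Tk()
entry = tk.Entry(root)
entry.pack()

def search(event=None):
    # event is None for the button, a tkinter.Event for the key binding
    print("searching for:", entry.get())

tk.Button(root, text="Search", command=search).pack()
entry.bind("<KeyRelease>", search)
root.mainloop()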
Q: Python opens new windows to run script? print("test") m = input("Name: ") print(m) I was getting ready to start programming. I opened cmd and ran my program and it opened a new cmd and printed out my code. Why is python opening a new cmd window to run my script instead of using the cmd that was opened? Also I recently updated python to python 3.10 A: When you type the name of a file, and the program associated with the file type is a console application (like python.exe) then Windows will open a new console window to run the application. When the application closes, the command window will close automatically. You can get around this by opening Python yourself in the current console: python response.py A: I'm very slow, so yes: for whatever reason you now have to type python3 C:\Users\johm\Desktop>python3 response.py This worked
Python opens new windows to run script?
print("test") m = input("Name: ") print(m) I was getting ready to start programing. I opened cmd and ran my program and it opened a new cmd and printed out my code. Why is python opening a new cmd window to run my script unstead of using the cmd that was opened? Also I recently updated python to python 3.10
[ "When you type the name of a file, and the program associated with the file type is a console application (like python.exe) then Windows will open a new console window to run the application. When the application closes, the command window will close automatically.\nYou can get around this by opening Python yourself in the current console:\npython -m response.py\n\n", "I'm very slow, so yes for whatever reason you now have to type python3\nC:\\Users\\johm\\Desktop>python3 reponse.py\nThis worked\n" ]
[ 0, 0 ]
[]
[]
[ "cmd", "python", "python_3.10" ]
stackoverflow_0074579088_cmd_python_python_3.10.txt
Q: How to calculate churn in Pyspark Does anyone know how to apply a churn rule on the dataset below? The goal is to create a column called "churn" and use it to indicate whether the Id remains "false" for more than 30 consecutive days in the "using" column. I already tried to work with a window function but didn't have success. A: Create a window partitioned by the id and ordered by date. Set the window to be between the current row and the previous 30 rows. To create the column, take the max of the using column, which will return True if any date has Using == True in the past 30 days. Finally, negate that value with ~ because you're interested only when NO True is found within a 30-day window. from pyspark.sql import Window, functions as F w = ( Window() .partitionBy("id") .orderBy("reference_date") .rowsBetween(start=Window.currentRow - 30, end=Window.currentRow) ) df.withColumn('churn', ~F.max('using').over(w)).display()
How to calculate churn in Pyspark
Does anyone know how to apply a churn rule on the dataset below? The goal is to create a column called "churn" and use it to indicate whether the Id remains "false" for more than 30 consecutive days in the "using" column. I already tried to work with a window function but didn't have success.
[ "Create a window function groupby by the id and ordering by date. Set the window to be between the current row and the previous 30 rows. To create the column, take the max of the using column which wil return True if any date has Using == True in the past 30 days. Finally, negate that value with ~ because you're interested only when NO True is found within a 30 day window.\nfrom pyspark.sql import Window, functions as F\n\nw = (\n Window()\n .partitionBy(\"id\")\n .orderBy(\"reference_date\")\n .rowsBetween(start=Window.currentRow - 30, end=Window.currentRow)\n)\n\n\ndf.withColumn('churn', ~F.max('using').over(w)).display()\n\n" ]
[ 1 ]
[]
[]
[ "pyspark", "python" ]
stackoverflow_0074577969_pyspark_python.txt
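One caveat on the answer above: rowsBetween counts the previous 30 rows, which equals 30 days only if there is exactly one row per id per day. If the requirement is 30 calendar days, a range-based frame over a numeric day value may fit better; here is a sketch, assuming the column names from the question:

from pyspark.sql import Window, functions as F

# days since epoch gives a numeric ordering key usable with a range frame
day_number = F.datediff(F.col("reference_date"), F.lit("1970-01-01"))

w = (
    Window
    .partitionBy("id")
    .orderBy(day_number)
    .rangeBetween(-30, 0)   # the last 30 calendar days, inclusive
)

df = df.withColumn("churn", ~F.max("using").over(w))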
Q: Simulate keyboard key press in Python to play games on Linux? I'd like to simulate keyboard events, mainly WASD from Python to control games on Linux. So far, I've tried PyKey and Keyboard module but unfortunately, they are unable to simulate the keypress in a way that games detect it as continuous movement and so most games just don't work with these. Are there any alternatives to these modules? Is there something like DirectInput for Linux? A: You could look into pyautogui and achieve your task with the following sample command: pyautogui.press('w') I have not tried it for games specifically, but it may work!
Simulate keyboard key press in Python to play games on Linux?
I'd like to simulate keyboard events, mainly WASD from Python to control games on Linux. So far, I've tried PyKey and Keyboard module but unfortunately, they are unable to simulate the keypress in a way that games detect it as continuous movement and so most games just don't work with these. Are there any alternatives to these modules? Is there something like DirectInput for Linux?
[ "You could look into pyautogui and achieve your task with the following sample command:\npyautogui.press('w')\nI have not tried it for games specifically, but it may work!\n" ]
[ 1 ]
[]
[]
[ "gamecontroller", "input", "python", "simulation" ]
stackoverflow_0074579109_gamecontroller_input_python_simulation.txt
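pyautogui.press() taps the key once, but games usually need the key held down. A sketch of holding W, first with pyautogui's keyDown/keyUp, then with a kernel-level virtual device via python-evdev for games that read evdev input directly (the evdev route usually needs permission to /dev/uinput, e.g. running as root):

import time

# option 1: pyautogui hold-and-release (works for X11 sessions)
import pyautogui
pyautogui.keyDown("w")
time.sleep(2)            # walk forward for two seconds
pyautogui.keyUp("w")

# option 2: virtual input device with python-evdev
from evdev import UInput, ecodes as e
ui = UInput()
ui.write(e.EV_KEY, e.KEY_W, 1)   # key down
ui.syn()
time.sleep(2)
ui.write(e.EV_KEY, e.KEY_W, 0)   # key up
ui.syn()
ui.close()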
Q: How to find missing elements between two elements in a list in Python? I have a list as follows: ['1', '5', '6', '7', '10'] I want to find the missing element between two elements in the above list. For example, I want to get the missing elements between '1' and '5', i.e. '2', '3' and '4'. Another example, there are no elements between '5' and '6', so it doesn't need to return anything. Following the list above, I expect it to return a list like this: ['2', '3', '4', '8', '9'] My code: # Sort elements in a list input_list.sort() # Remove duplicates from a list input_list = list(dict.fromkeys(input_list)) How to return the above list? I would appreciate any help. Thank you in advance! A: If you iterate in pairs, it is easy to detect and fill in the gaps: L = ['1', '5', '6', '7', '10'] result = [] for left, right in zip(L, L[1:]): left, right = int(left), int(right) result += map(str, range(left + 1, right)) A: I would use the range() function to generate the missing numbers, and maybe use itertools.pairwise() to easily compare to the previous number. Since Python 3.10, pairwise is better than zip(arr, arr[1:]) because pairwise is implemented in the C layer, and does not make a copy of lists. import itertools arr = ['1', '5', '6', '7', '10'] new_arr = [] for p, c in itertools.pairwise(arr): prev, curr = int(p), int(c) if (prev + 1) != curr: new_arr.extend([str(i) for i in range(prev + 1, curr)]) A: If performance is not an issue, you can use a simple nested for loop: input_list = ['1', '5', '6', '7', '10'] missing_list = [] for ind in range(0, len(input_list)-1): el_0 = int(input_list[ind]) el_f = int(input_list[ind+1]) for num in range(el_0 + 1, el_f): missing_list.append(str(num)) The above assumes that the numbers are integers. The first loop is something not recommended - the loop iterates using the length of the list instead of constructs like enumerate. I used it here for its simplicity. A: You could use a combination of range and set and skip iteration altogether. #original data data = ['1', '5', '6', '7', '10'] #convert to list[int] L = list(map(int, data)) #get a from/to count R = range(min(L), max(L)+1) #remove duplicates and convert back to list[str] out = list(map(str, set(R) ^ set(L))) print(out) A: Using a classic for loop and range() method and also for one-liner fans: arr = ['1', '5', '6', '7', '10'] ans = [] for i in range(len(arr) - 1): ans += map(str, list(range(int(arr[i]) + 1, int(arr[i + 1])))) print(ans) outputs: ['2', '3', '4', '8', '9']
How to find missing elements between two elements in a list in Python?
I have a list as follows: ['1', '5', '6', '7', '10'] I want to find the missing element between two elements in the above list. For example, I want to get the missing elements between '1' and '5', i.e. '2', '3' and '4'. Another example, there are no elements between '5' and '6', so it doesn't need to return anything. Following the list above, I expect it to return a list like this: ['2', '3', '4', '8', '9'] My code: # Sort elements in a list input_list.sort() # Remove duplicates from a list input_list = list(dict.fromkeys(input_list)) How to return the above list? I would appreciate any help. Thank you in advance!
[ "If you iterate in pairs, it is easy to detect and fill in the gaps:\nL = ['1', '5', '6', '7', '10']\nresult = []\nfor left, right in zip(L, L[1:]):\n left, right = int(left), int(right)\n result += map(str, range(left + 1, right))\n\n", "I would use the range() function to generate the missing numbers, and maybe use itertools.pairwise() to easily compare to the previous number. Since Python 3.10, pairwise is better than zip(arr, arr[1:]) because pairwise is implemented in the C layer, and does not make a copy of lists.\nimport itertools\narr = ['1', '5', '6', '7', '10']\n\nnew_arr = []\n\nfor p, c in itertools.pairwise(arr):\n prev, curr = int(p), int(c)\n if (prev + 1) != curr:\n new_arr.extend([str(i) for i in range(prev + 1, curr)])\n\n", "If performance is not an issue, you can use a simple nested for loop:\ninput_list = ['1', '5', '6', '7', '10']\n\nmissing_list = []\n\nfor ind in range(0, len(input_list)-1):\n\n el_0 = int(input_list[ind])\n el_f = int(input_list[ind+1])\n\n for num in range(el_0 + 1, el_f):\n missing_list.append(str(num))\n\nThe above assumes that the numbers are integers. The first loop is something not recommended - the loop iterates using the length of the list instead of constructs like enumerate. I used it here for its simplicity.\n", "You could use a combination of range and set and skip iteration altogether.\n#original data\ndata = ['1', '5', '6', '7', '10']\n\n#convert to list[int]\nL = list(map(int, data))\n\n#get a from/to count\nR = range(min(L), max(L)+1)\n\n#remove duplicates and convert back to list[str]\nout = list(map(str, set(R) ^ set(L)))\n\nprint(out)\n\n", "Using a classic for loop and range() method and also for one-liner fans:\narr = ['1', '5', '6', '7', '10']\nans = []\n\nfor i in range(len(arr) - 1): ans += map(str, list(range(int(arr[i]) + 1, int(arr[i + 1]))))\n \nprint(ans)\n\noutputs: ['2', '3', '4', '8', '9']\n" ]
[ 2, 2, 2, 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074579119_python.txt
Q: How best to append NetworkX Degree Value to original pandas dataframe? I'm using the NetworkX package for some network analysis, and I'm stuck on how best to append the degree of each node back to the original dataframe. I have a dataframe that looks like this: Focal_Coach sibling_coach_id years_under_focal coach_name 1 2 10 Bill Belichick 1 3 4 Bill Belichick 2 4 6 Andy Reid etc. I'm using the Network package to create the chart, and I have this code: G_weighted = nx.from_pandas_edgelist( sibling_df_agg.rename( columns={"Focal Coach": "source", "sibling_coach_id": "target", "years_under_focal": "weight","coach_name":"edge_attr"} ) ) #Relabel nodes from ID to name G_weighted = nx.relabel_nodes(G_weighted,coach_dict) When I try to run the Degree function to append a column, it returns a tuple of the Coach Name and Degree. sibling_df_agg['Degree']=G_weighted.degree(sibling_df_agg['coach_name']) The new dataframe would look like this: Focal_Coach sibling_coach_id years_under_focal coach_name Degree 1 2 10 Bill Belichick (Bill Belichick ,2) 1 3 4 Bill Belichick (Bill Belichick ,2) 2 4 6 Andy Reid (Andy Reid, 1) I've tried adding a [1] to the code to return the number, but that just copied the Degree for the coach in the [1] dataframe row. sibling_df_agg['Degree']=G_weighted.degree(sibling_df_agg['coach_name'][1]) #did not work I want it to return just a number so that I can do some math manipulation on the data frame (e.g. return only coaches who have a degree greater than X). How best can I accomplish this? A: I needed to unpack the tuple first and turn it into a series. Changed the line to sibling_names,sibling_degrees = zip(*G_weighted.degree(sibling_df_agg['coach_name'])) sibling_df_agg['degree_cnt'] = pd.Series(sibling_degrees) and it worked perfectly.
How best to append NetworkX Degree Value to original pandas dataframe?
I'm using the NetworkX package for some network analysis, and I'm stuck on how best to append the degree of each node back to the original dataframe. I have a dataframe that looks like this: Focal_Coach sibling_coach_id years_under_focal coach_name 1 2 10 Bill Belichick 1 3 4 Bill Belichick 2 4 6 Andy Reid etc. I'm using the Network package to create the chart, and I have this code: G_weighted = nx.from_pandas_edgelist( sibling_df_agg.rename( columns={"Focal Coach": "source", "sibling_coach_id": "target", "years_under_focal": "weight","coach_name":"edge_attr"} ) ) #Relabel nodes from ID to name G_weighted = nx.relabel_nodes(G_weighted,coach_dict) When I try to run the Degree function to append a column, it returns a tuple of the Coach Name and Degree. sibling_df_agg['Degree']=G_weighted.degree(sibling_df_agg['coach_name']) The new dataframe would look like this: Focal_Coach sibling_coach_id years_under_focal coach_name Degree 1 2 10 Bill Belichick (Bill Belichick ,2) 1 3 4 Bill Belichick (Bill Belichick ,2) 2 4 6 Andy Reid (Andy Reid, 1) I've tried adding a [1] to the code to return the number, but that just copied the Degree for the coach in the [1] dataframe row. sibling_df_agg['Degree']=G_weighted.degree(sibling_df_agg['coach_name'][1]) #did not work I want it to return just a number so that I can do some math manipulation on the data frame (e.g. return only coaches who have a degree greater than X). How best can I accomplish this?
[ "I needed to unpack the tuple first and turn it into a series.\nChanged the line to\n sibling_names,sibling_degrees = zip(*G_weighted.degree(sibling_df_agg['coach_name']))\n sibling_df_agg['degree_cnt'] = pd.Series(sibling_degrees)\n\nand it worked perfectly.\n" ]
[ 0 ]
[]
[]
[ "networkx", "python" ]
stackoverflow_0074578575_networkx_python.txt
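An alternative to unpacking the tuples, assuming the nodes were relabeled to coach names as in the question: G_weighted.degree() converts cleanly to a dict, and Series.map does the per-row lookup:

degree_by_coach = dict(G_weighted.degree())
sibling_df_agg["degree_cnt"] = sibling_df_agg["coach_name"].map(degree_by_coach)

# plain integers now, so filtering works, e.g. coaches with degree greater than 2
busy_coaches = sibling_df_agg[sibling_df_agg["degree_cnt"] > 2]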
Q: Is it possible to find the value of x in python? i.e asking python to solve an equation such as 2x + 23 - 7x when x is not a pre-defined variable What I want is a program that can determine the value of x from an equation when x is not yet defined i.e. not a python variable. Just an example below, not the real thing. sol = eval("input please type the equation: ") #i.e sol = 32x - 40 print(sol) A: I am not aware of any built-in way to do that, but the Sympy library is built exactly for this stuff. The Solvers module in Sympy can be used to solve linear equations. (Here) is a link to its docs. A: An explicit example using sympy import sympy from sympy.abc import x print(sympy.solve(32*x-40, "x")) print(sympy.solve(2*x+23-7*x, "x")) Gives as output: [5/4] [23/5] Note that there is the separate question of parsing user input. That is, how do we take the string "32x-40" and turn it into the expression 32*x-40. This can be a non-trivial task depending on the complexity of the equations you are looking to model. If you are interested in that, I would look into pyparsing. A: You can just use sympy. Then you can do it in the print command. It looks like this. import sympy from sympy.abc import x print(sympy.solve(nub1*x + nub2 - nub3*x, x))
Is it possible to find the value of x in python? i.e asking python to solve an equation such as 2x + 23 - 7x when x is not a pre-defined variable
What I want is a program that can determine the value of x from an equation when x is not yet defined i.e. not a python variable. Just an example below, not the real thing. sol = eval("input please type the equation: ") #i.e sol = 32x - 40 print(sol)
[ "I am not aware of any built in way to do that but Sympy library is built exactly for this stuff. Solvers module in Sympy can be used to solve linear equations. (Here) is a link to its docs.\n", "An explicit example using sympy\nimport sympy\nfrom sympy.abc import x\n\nprint sympy.solve(32*x-40,\"x\")\nprint sympy.solve(2*x+23-7*x,\"x\")\n\nGives as output:\n[5/4]\n[23/5]\n\nNote that there is the separate question of parsing user input. That is, how do we take the string \"32x-40\" and turn it into the expression 32*x-40. This can be a non-trivial task depending on the complexity of the equations you are looking to model. If you are insterested in that, I would look into pyparsing.\n", "You can just use sympy. Then you can do it in the print command. It looks like this.\nimport sympy \nfrom sympy.abc import x\nprint sympy.solve(nub1*x+nub2-nub3*x,\"Whatever you want here.\")\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "equation", "equation_solving", "eval", "math", "python" ]
stackoverflow_0029092649_equation_equation_solving_eval_math_python.txt
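On the parsing point raised in the second answer: sympy itself can turn user input like "32x - 40" into an expression if implicit multiplication is enabled; a sketch, assuming the input is an expression understood to equal zero:

import sympy
from sympy.parsing.sympy_parser import (
    parse_expr, standard_transformations, implicit_multiplication_application,
)

transformations = standard_transformations + (implicit_multiplication_application,)

raw = input("please type the equation: ")        # e.g. "32x - 40"
expr = parse_expr(raw, transformations=transformations)
print(sympy.solve(expr, sympy.Symbol("x")))      # -> [5/4]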
Q: How to send keyboard input in real time to server - python I have an assignment where I need to make a TCP connection with sockets, but whatever the client types must be sent to the server. That is, what is being typed at the client's prompt needs to appear at the server's prompt at the same time; likewise, if I delete a letter from the phrase or word at the client prompt, the letter also needs to be deleted in real time at the server prompt (all this without having to press "enter"). How do I do this in Python? I would really appreciate it if anyone could help me with this. This will save my semester at college. A: You may want to learn about the getch module
How to send keyboard input in real time to server - python
I have an assignment where I need to make a TCP connection with sockets, but whatever the client types must be sent to the server. That is, what is being typed at the client's prompt needs to appear at the server's prompt at the same time; likewise, if I delete a letter from the phrase or word at the client prompt, the letter also needs to be deleted in real time at the server prompt (all this without having to press "enter"). How do I do this in Python? I would really appreciate it if anyone could help me with this. This will save my semester at college.
[ "You may want to learn about the getch module\n" ]
[ 0 ]
[]
[]
[ "python", "sockets", "tcp", "tcpclient", "tcpserver" ]
stackoverflow_0074579177_python_sockets_tcp_tcpclient_tcpserver.txt
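Expanding on the getch suggestion above: on Linux/macOS the terminal can be switched to cbreak mode so every keystroke (including backspace, '\x7f') is read and sent immediately. A minimal client sketch with a placeholder address; on Windows, msvcrt.getch would replace the termios/tty part:

import socket
import sys
import termios
import tty

HOST, PORT = "127.0.0.1", 9999    # placeholder server address

sock = socket.create_connection((HOST, PORT))
fd = sys.stdin.fileno()
saved = termios.tcgetattr(fd)
try:
    tty.setcbreak(fd)              # deliver keys immediately, no Enter needed
    while True:
        ch = sys.stdin.read(1)     # one keystroke at a time
        if ch == "\x03":           # Ctrl-C ends the session
            break
        sock.sendall(ch.encode())  # the server can echo or erase as it receives
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, saved)
    sock.close()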
Q: How do I get the value differently while looping for a common prefix? strs = ["cir","car"] #strs = ["flower","flow","flight"] def get_min_str(lst): return min(lst, key=len) str1 = get_min_str(strs) lens = len(strs) x = "" mlen = len(str1) if(lens == 1): print(strs[0]) for i in range(0, mlen): for j in range(0, lens-1): if( strs[j][i] == strs[j+1][i] ): if(j == lens-2): x = x + strs[j][i] print(strs[j][i]) else: break print(strs[j][i] == strs[j+1][i]) print(x) So, in order to find the longest common prefix, I have used two loops to loop over the values. But in the example, strs = ["cir","car"], I should get the value x = "c" but instead I get the value "cr", even though I have used the break statement. The loop should have stopped at c. Why doesn't it? Why do I get the value "cr"? A: Since you have two nested loops, the break keyword only exits the inner loop. Then, the third letter matches, so r is added to x. To fix this, set a variable when you should exit the outer loop and check it before each iteration. Edit: also, for readability's sake, you may want to explore the enumerate() function, which converts an iterable (like a string) into an iterable of tuples of the form (index, value), so your code could look like: should_break = False for letter_index, current_character in enumerate(str1): if should_break: break for str_index in range(0, lens): if strs[str_index][letter_index] != current_character: should_break = True # set break condition for outer loop break # break from inner loop else: # executed when the for loop doesn't break x += current_character
How do I get the value differently while looping for a common prefix?
strs = ["cir","car"] #strs = ["flower","flow","flight"] def get_min_str(lst): return min(lst, key=len) str1 = get_min_str(strs) lens = len(strs) x = "" mlen = len(str1) if(lens == 1): print(strs[0]) for i in range(0, mlen): for j in range(0, lens-1): if( strs[j][i] == strs[j+1][i] ): if(j == lens-2): x = x + strs[j][i] print(strs[j][i]) else: break print(strs[j][i] == strs[j+1][i]) print(x) So in order to find the longest common prefix, I have used two loops. To loop over the values. But in the example, strs = ["cir","car"]. I should the value x = "c" but I stead get the value "cr", since I have used the break function. The function should have stopped at c. Why isn't it?Why do I get the value "cr"your text
[ "Since you have two nested loops, the break keyword only exits the inner loop. Then, the third letter matches, so r is added to x. To fix this, set a variable when you should exit the outer loop and check it before each iteration.\nEdit: also, for readability's sake, you may want to explore the enumerate() function, which converts an iterable (like a string) into an iterable of tuples of the form (index, value), so your code could look like:\nshould_break = False\n\nfor letter_index, current_character in enumerate(str1):\n if should_break:\n break\n for str_index in range(0, lens):\n if strs[str_index][letter_index] != current_character:\n should_break = True # set break condition for outer loop\n break # break from inner loop\n else: # executed when the for loop doesn't break\n x += current_character\n\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "prefix", "python", "range", "string" ]
stackoverflow_0074579225_arrays_prefix_python_range_string.txt
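If the goal is the longest common prefix itself rather than the loop exercise, the standard library already provides it:

import os.path

print(os.path.commonprefix(["cir", "car"]))                # -> 'c'
print(os.path.commonprefix(["flower", "flow", "flight"]))  # -> 'fl'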
Q: Printing out filenames of txt.-files that include at least one line of words, each starting with vowels. Only 1 line of code My task is to write only one line of code in Python. The code should do the following: I need to print out filenames of txt.files, which include at least one line of words, starting with only vowels (each word needs to start with a vowel from at least one line). My code is: [filename for filename in listdir(".") if filename.endswith('.txt') and [line for line in open(filename) if ([word for word in line.split() if word[0] in ['a','e','i','o','u']])]] I tried to use os.listdir(), but I am not allowed to write more than one statement. My code also prints all 3 of my txt.files, even if one text file doesn't include a line with words that all start with vowels.
Printing out filenames of txt.-files that include at least one line of words, each starting with vowels. Only 1 line of code
My task is to write only one line of code in Python. The code should do the following: I need to print out filenames of txt.files, which include at least one line of words, starting with only vowels (each word needs to start with a vowel from at least one line). My code is: [filename for filename in listdir(".") if filename.endswith('.txt') and [line for line in open(filename) if ([word for word in line.split() if word[0] in ['a','e','i','o','u']])]] I tried to use os.listdir(), but I am not allowed to write more than one statement. My code also prints all 3 of my txt.files, even if one text file doesn't include a line with words that all start with vowels.
[ "You could try the following one-liner:\nprint([filename for filename in os.listdir(\".\") if filename.endswith('.txt') and [line for line in open(filename) if all(word.startswith(('a','e','i','o','u','A','E','I','O','U')) for word in line.split())]])\n\nOUTPUT:\n['file3.txt']\n\n\n\n\nTest files\nTested with file1.txt, file2.txt and file3.txt.\nfile1.txt:\nThe quick brown fox jumps over the lazy dog\nThe quick brown fox jumps over the lazy dog\n\nfile2.txt:\nA quick brown fox jumps over the lazy dog\nThe quick brown fox jumps over the lazy dog\n\nfile3.txt:\nThe quick brown fox jumps over the lazy dog\nOrangutangs appreciate apples over oranges and apricots\n\n\n\nNote\nYou can also use:\nall((word[0] in 'aeiouAEIOU') for word in line.split()) \n\ninstead of\n\nall(word.startswith(('a','e','i','o','u','A','E','I','O','U')) for word in line.split())\n\n" ]
[ 1 ]
[]
[]
[ "file", "listdir", "one_liner", "python" ]
stackoverflow_0074567651_file_listdir_one_liner_python.txt
Q: Linear Regression not Defined I previously had this code with no issues; now it is giving an error that "LinearRegression" is not defined. Code below, not sure what I am missing. from sklearn.preprocessing import PolynomialFeatures model2 = PolynomialFeatures(degree = 2) x_poly = model2.fit_transform(X_train) model2.fit(x_poly, y_train) poly_reg = LinearRegression() poly_reg.fit(x_poly, y_train) m2_pred = poly_reg.predict(model2.fit_transform(X_test)) I have tried Google and other sources and am not finding a resolution; the code previously worked. A: You need to import LinearRegression at the top of your code. from sklearn.linear_model import LinearRegression
Linear Regression not Defined
I previously had this code with no issues; now it is giving an error that "LinearRegression" is not defined. Code below, not sure what I am missing. from sklearn.preprocessing import PolynomialFeatures model2 = PolynomialFeatures(degree = 2) x_poly = model2.fit_transform(X_train) model2.fit(x_poly, y_train) poly_reg = LinearRegression() poly_reg.fit(x_poly, y_train) m2_pred = poly_reg.predict(model2.fit_transform(X_test)) I have tried Google and other sources and am not finding a resolution; the code previously worked.
[ "You need to import LinearRegression at the top of your code.\nfrom sklearn.linear_model import LinearRegression\n\n" ]
[ 0 ]
[]
[]
[ "knn", "linear_regression", "python" ]
stackoverflow_0074579292_knn_linear_regression_python.txt
Q: How to edit guild banner | discord.py I have a generated image, and I need to set it as the guild banner. When I try to do this, it does not work and there are no errors. My code: @commands.command() async def test(self, ctx): await ctx.message.delete() background = Editor('./assets/card_1.jpg').resize((900, 300)) background.rectangle((30,220), width=600, height=30, fill='#C0B6CC', radius=20) file = File(fp=background.image_bytes, filename='card.png') guild = self.client.get_guild(267725224789803008) await ctx.send(file = file) await guild.edit(banner = background.image_bytes) A: You can't pass raw bytes into discord.File, as it needs a file-like object to read bytes from. Instead, wrap it in a BytesIO. file = File(fp=io.BytesIO(background.image_bytes), filename='card.png')
How to edit guild banner | discord.py
I have a generated image, and I need to set it as the guild banner. When I try to do this, it does not work and there are no errors. My code: @commands.command() async def test(self, ctx): await ctx.message.delete() background = Editor('./assets/card_1.jpg').resize((900, 300)) background.rectangle((30,220), width=600, height=30, fill='#C0B6CC', radius=20) file = File(fp=background.image_bytes, filename='card.png') guild = self.client.get_guild(267725224789803008) await ctx.send(file = file) await guild.edit(banner = background.image_bytes)
[ "You can't pass raw bytes into discord.File, as it needs a file-like object to read bytes from. Instead, wrap it in a BytesIO.\nfile = File(fp=io.BytesIO(background.image_bytes), filename='card.png')\n\n" ]
[ 0 ]
[]
[]
[ "discord.py", "python" ]
stackoverflow_0074578488_discord.py_python.txt
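The question also calls guild.edit(banner=...), which the answer above does not cover. In discord.py 2.x that parameter expects raw bytes, and the guild must have the BANNER feature; a hedged sketch, assuming image_bytes may be a BytesIO rather than bytes:

data = background.image_bytes
if hasattr(data, "getvalue"):          # BytesIO -> raw bytes
    data = data.getvalue()

if "BANNER" in guild.features:         # editing silently fails without this feature
    await guild.edit(banner=data)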
Q: PyTorch RuntimeError: DataLoader worker (pid(s) 15332) exited unexpectedly I am a beginner at PyTorch and I am just trying out some examples on this webpage. But I can't seem to get the 'super_resolution' program running due to this error: RuntimeError: DataLoader worker (pid(s) 15332) exited unexpectedly I searched the Internet and found that some people suggest setting num_workers to 0. But if I do that, the program tells me that I am running out of memory (either with CPU or GPU): RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 9663676416 bytes. Buy new RAM! or RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 4.00 GiB total capacity; 2.03 GiB already allocated; 0 bytes free; 2.03 GiB reserved in total by PyTorch) How do I fix this? I am using python 3.8 on Win10(64bit) and pytorch 1.4.0. More complete error messages (--cuda means using GPU, --threads x means passing x to the num_worker parameter): with command line arguments --upscale_factor 1 --cuda File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 761, in _try_get_data data = self._data_queue.get(timeout=timeout) File "E:\Python38\lib\multiprocessing\queues.py", line 108, in get raise Empty _queue.Empty During handling of the above exception, another exception occurred: Traceback (most recent call last): File "Z:\super_resolution\main.py", line 81, in <module> train(epoch) File "Z:\super_resolution\main.py", line 48, in train for iteration, batch in enumerate(training_data_loader, 1): File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__ data = self._next_data() File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 841, in _next_data idx, data = self._get_data() File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 808, in _get_data success, data = self._try_get_data() File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 774, in _try_get_data raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) RuntimeError: DataLoader worker (pid(s) 16596, 9376, 12756, 9844) exited unexpectedly with command line arguments --upscale_factor 1 --cuda --threads 0 File "Z:\super_resolution\main.py", line 81, in <module> train(epoch) File "Z:\super_resolution\main.py", line 52, in train loss = criterion(model(input), target) File "E:\Python38\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "Z:\super_resolution\model.py", line 21, in forward x = self.relu(self.conv2(x)) File "E:\Python38\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "E:\Python38\lib\site-packages\torch\nn\modules\conv.py", line 345, in forward return self.conv2d_forward(input, self.weight) File "E:\Python38\lib\site-packages\torch\nn\modules\conv.py", line 341, in conv2d_forward return F.conv2d(input, weight, self.bias, self.stride, RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 4.00 GiB total capacity; 2.03 GiB already allocated; 954.35 MiB free; 2.03 GiB reserved in total by PyTorch) A: There is no "complete" solve for GPU out of memory errors, but there are quite a few things you can do to relieve the memory demand. Also, make sure that you are not passing the trainset and testset to the GPU at the same time! 
Decrease batch size to 1 Decrease the dimensionality of the fully-connected layers (they are the most memory-intensive) (Image data) Apply centre cropping (Image data) Transform RGB data to greyscale (Text data) Truncate input at n chars (which probably won't help that much) Alternatively, you can try running on Google Colaboratory (12 hour usage limit on K80 GPU) and Next Journal, both of which provide up to 12GB for use, free of charge. Worst case scenario, you might have to conduct training on your CPU. Hope this helps! A: This is the solution that worked for me. it may work for other Windows users. Just remove/comment the num workers to disable parallel loads A: Restart your system for the GPU to regain its memory. Save all the work and restart your System. A: I tried to fine-tuning it using different combinations. The solution for me is on batch_size = 1 and n_of_jobs=8 A: Reduce number of workers, -- threads x in your case. A: On windows Aneesh Cherian's solution works well for notebooks (IPython). But if you want to use num_workers>0 you should avoid interpreters like IPython and put the dataload in if __name__ == '__main__:. Also, with persistent_workers=True the dataload appears to be faster on windows if num_workers>0. More information can be found in this thread: https://github.com/pytorch/pytorch/issues/12831 A: As the accepted response states. There is no explicit solution. However, in my case, I had to resize all images as the images were large and model huge. You can refer to this post for resizing: https://stackoverflow.com/a/73798986/16599761
PyTorch RuntimeError: DataLoader worker (pid(s) 15332) exited unexpectedly
I am a beginner at PyTorch and I am just trying out some examples on this webpage. But I can't seem to get the 'super_resolution' program running due to this error: RuntimeError: DataLoader worker (pid(s) 15332) exited unexpectedly I searched the Internet and found that some people suggest setting num_workers to 0. But if I do that, the program tells me that I am running out of memory (either with CPU or GPU): RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 9663676416 bytes. Buy new RAM! or RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 4.00 GiB total capacity; 2.03 GiB already allocated; 0 bytes free; 2.03 GiB reserved in total by PyTorch) How do I fix this? I am using python 3.8 on Win10(64bit) and pytorch 1.4.0. More complete error messages (--cuda means using GPU, --threads x means passing x to the num_worker parameter): with command line arguments --upscale_factor 1 --cuda File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 761, in _try_get_data data = self._data_queue.get(timeout=timeout) File "E:\Python38\lib\multiprocessing\queues.py", line 108, in get raise Empty _queue.Empty During handling of the above exception, another exception occurred: Traceback (most recent call last): File "Z:\super_resolution\main.py", line 81, in <module> train(epoch) File "Z:\super_resolution\main.py", line 48, in train for iteration, batch in enumerate(training_data_loader, 1): File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__ data = self._next_data() File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 841, in _next_data idx, data = self._get_data() File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 808, in _get_data success, data = self._try_get_data() File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 774, in _try_get_data raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) RuntimeError: DataLoader worker (pid(s) 16596, 9376, 12756, 9844) exited unexpectedly with command line arguments --upscale_factor 1 --cuda --threads 0 File "Z:\super_resolution\main.py", line 81, in <module> train(epoch) File "Z:\super_resolution\main.py", line 52, in train loss = criterion(model(input), target) File "E:\Python38\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "Z:\super_resolution\model.py", line 21, in forward x = self.relu(self.conv2(x)) File "E:\Python38\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "E:\Python38\lib\site-packages\torch\nn\modules\conv.py", line 345, in forward return self.conv2d_forward(input, self.weight) File "E:\Python38\lib\site-packages\torch\nn\modules\conv.py", line 341, in conv2d_forward return F.conv2d(input, weight, self.bias, self.stride, RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 4.00 GiB total capacity; 2.03 GiB already allocated; 954.35 MiB free; 2.03 GiB reserved in total by PyTorch)
[ "There is no \"complete\" solve for GPU out of memory errors, but there are quite a few things you can do to relieve the memory demand. Also, make sure that you are not passing the trainset and testset to the GPU at the same time!\n\nDecrease batch size to 1\nDecrease the dimensionality of the fully-connected layers (they are the most memory-intensive)\n(Image data) Apply centre cropping\n(Image data) Transform RGB data to greyscale\n(Text data) Truncate input at n chars (which probably won't help that much)\n\nAlternatively, you can try running on Google Colaboratory (12 hour usage limit on K80 GPU) and Next Journal, both of which provide up to 12GB for use, free of charge. Worst case scenario, you might have to conduct training on your CPU. Hope this helps!\n", "This is the solution that worked for me. it may work for other Windows users.\nJust remove/comment the num workers to disable parallel loads\n", "Restart your system for the GPU to regain its memory. Save all the work and restart your System.\n", "I tried to fine-tuning it using different combinations. The solution for me is on batch_size = 1 and n_of_jobs=8\n", "Reduce number of workers, -- threads x in your case.\n", "On windows Aneesh Cherian's solution works well for notebooks (IPython). But if you want to use num_workers>0 you should avoid interpreters like IPython and put the dataload in if __name__ == '__main__:. Also, with persistent_workers=True the dataload appears to be faster on windows if num_workers>0.\nMore information can be found in this thread: https://github.com/pytorch/pytorch/issues/12831\n", "As the accepted response states. There is no explicit solution. However, in my case, I had to resize all images as the images were large and model huge. You can refer to this post for resizing: https://stackoverflow.com/a/73798986/16599761\n" ]
[ 13, 11, 1, 1, 0, 0, 0 ]
[]
[]
[ "python", "python_3.x", "pytorch" ]
stackoverflow_0060101168_python_python_3.x_pytorch.txt
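Putting the Windows-specific advice above into one runnable sketch (a toy dataset stands in for the real one): the __main__ guard is what lets num_workers > 0 work outside notebooks, and batch_size=1 with num_workers=0 is the low-memory fallback:

import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    ds = TensorDataset(torch.randn(64, 1, 32, 32), torch.randn(64, 1, 32, 32))
    # num_workers=0 avoids the worker-exit error; raise batch_size once it runs
    loader = DataLoader(ds, batch_size=1, shuffle=True, num_workers=0)
    for inputs, targets in loader:
        pass  # training step goes here

if __name__ == "__main__":
    main()   # required on Windows whenever num_workers > 0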
Q: How can I improve recall value in Deep learning? I'm applying vgg16 as feature extraction method, However my instructor require a high recall. (90%-95%). I will explain about my dataset my data are labeled videos of traffic sign in a foggy weather (they are labeled as visible, not visible, poor viability) I extracted frames from the video as image and randomly stored the data in training & test/val folder I'm trying to apply deep learning to classify the images. As you can see my model is doing good but not very good. I can't get more videos from my instructor. How can I possibly improve my model? can I add more layers to my model? can I feed feature extraction base model to a conv2d model that I create? can I apply feed feature extraction from vgg16 to transfer learning ? How can I feed feature extraction vgg16 to svm? BATCH = 50 IMG_WIDTH = 224 IMG_HEIGHT = 224 from keras.applications import VGG16 conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)) # This is the Size of the image conv_base.trainable= False datagen = ImageDataGenerator(rescale=1.0/255.0 # ,brightness_range=(1,1.5), # zoom_range=0.1, # rotation_range=45, # horizontal_flip=True, # vertical_flip=True, ) train = datagen.flow_from_directory(train_path ,class_mode='categorical' ,batch_size = BATCH ,target_size=(IMG_HEIGHT, IMG_WIDTH)) #test data val = datagen.flow_from_directory(val_path ,class_mode='categorical' ,batch_size = BATCH ,target_size=(IMG_HEIGHT, IMG_WIDTH)) model = tf.keras.models.Sequential() #We now add the vggModel directly to our new model model.add(conv_base) model.add(tf.keras.layers.Flatten()) model.add(tf.keras.layers.Dense(129, activation='relu')) model.add(tf.keras.layers.Dropout((0.5))) model.add(tf.keras.layers.Dense(5, activation='softmax')) model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001) ,loss='categorical_crossentropy' , metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()] ) early_stopping = EarlyStopping(monitor='val_loss' ,patience=2 ) history_1 = model.fit(train ,validation_data=val ,epochs=10 ,steps_per_epoch=len(train) ,validation_steps=len(val) ,verbose=1 ,callbacks =[early_stopping] ) Training loss : 0.5572120547294617 Training accuracy : 0.8088889122009277 Training precision: 0.9959514141082764 Training recall: 0.437333345413208 Test loss : 0.5427007079124451 Test accuracy : 0.8233333230018616 Test precision: 1.0 Test recall: 0.44333332777023315 A: recognition rates depend on many variables not only the noises of the image when our recognition is about 80 percent but we also use some information for our recognition process as well. You talked about sign traffic and foggy image when you are local and foreigners who had well recognize the image sign, you had a driving instructions guidebook. How can I possibly improve my model? The model does not specify, I use a small simple model working with camera input, I use grids and multiple outputs as a sequence to help determine the label for input as object 1 contains in { P( Object | Grid 1 ), P( Object | Grid 2 ), P( Object | Grid 3 ) ... } can I add more layers to my model? That depends on your application practice, input camera does not much information compare to the variety you may try to cascade model or resolution for extracting values, VGG16 is a series of convolution layers and a concat path is your application on it. I use only few dense layers with convolution layers that is because my input image is small. 
can I feed feature extraction base model to a conv2d model that I create? The same answer, convolution layers is the extracting functions but you can convert information by time-related functions they are not limited to use with sounds or frequency such as MFCC feature extraction as extended information for recognition rates, masking sign, or model concatenation. can I apply feed feature extraction from vgg16 to transfer learning ? The same answer is above. How can I feed feature extraction vgg16 to svm? The same answer VGG16 is convolution layers you can use concatenate model or concatenate input. Sample: Working templates, multiplot anime is one of OS response feedbacks it does not need time elapses. """"""""""""""""""""""""""""""""""""""""""""""""""""""""" Functions """"""""""""""""""""""""""""""""""""""""""""""""""""""""" def f1( picture ): return tf.constant( picture ).numpy() def animate( i ): ret0, frame0 = video_capture_0.read() if (ret0): frame0 = tf.image.resize(frame0, [29, 39]).numpy() temp = img_array = tf.keras.preprocessing.image.img_to_array(frame0[:,:,2:3]) temp2 = img_array = tf.keras.preprocessing.image.img_to_array(frame0[:,:,1:2]) temp3 = img_array = tf.keras.preprocessing.image.img_to_array(frame0[:,:,0:1]) temp = tf.keras.layers.Concatenate(axis=2)([temp, temp2]) temp = tf.keras.layers.Concatenate(axis=2)([temp, temp3]) # 480, 640 temp = tf.keras.preprocessing.image.array_to_img( temp, data_format=None, scale=True ) temp = f1( temp ) im.set_array( temp ) result = predict_action( temp ) print( result ) return im, def predict_action ( image ) : predictions = model.predict(tf.constant(image, shape=(1, 29, 39, 3) , dtype=tf.float32)) result = tf.math.argmax(predictions[0]) return result Output: Simple camera and object detection, I wrote simple codes with a small model ( No object detection API ), there are more examples A: you stated my data are labeled videos of traffic sign in a foggy weather (they are labeled as visible, not visible, poor viability) from this I assume you have 3 classes. However in your model you have the code model.add(tf.keras.layers.Dense(5, activation='softmax')) ``` which implies you have 5 classes? In model.fit you have the code ,steps_per_epoch=len(train) ,validation_steps=len(val) in your generators you set the batch size to 50. in which case you should have steps_per_epoch =int(len(train/50) and validation_steps=int(len(val/50) You are using the early stopping callback but you should add the parameter restore_best_weights=True also change patience=4 This way your model will be set to the weights for the epoch with the lowest validation loss. I also recommend you use the Keras callback ReduceLROnPlateau. Documentation is [here.][1] my recommended code for this callback is: rlronp=tf.keras.callbacks.ReduceLROnPlateau( monitor="val_loss", factor=0.4, patience=2, verbose=1) In model.fit change callbacks to callbacks=[[early_stopping, rlronp] Also run for more epochs [1]: https://keras.io/api/callbacks/reduce_lr_on_plateau/
How can I improve recall value in Deep learning?
I'm applying VGG16 as a feature extraction method; however, my instructor requires a high recall (90%-95%). I will explain my dataset: my data are labeled videos of traffic signs in foggy weather (they are labeled as visible, not visible, poor visibility). I extracted frames from the videos as images and randomly stored the data in the training and test/val folders. I'm trying to apply deep learning to classify the images. As you can see, my model is doing well but not very well. I can't get more videos from my instructor. How can I possibly improve my model? Can I add more layers to my model? Can I feed the feature extraction base model to a conv2d model that I create? Can I feed the feature extraction from VGG16 into transfer learning? How can I feed the VGG16 feature extraction to an SVM? BATCH = 50 IMG_WIDTH = 224 IMG_HEIGHT = 224 from keras.applications import VGG16 conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)) # This is the Size of the image conv_base.trainable= False datagen = ImageDataGenerator(rescale=1.0/255.0 # ,brightness_range=(1,1.5), # zoom_range=0.1, # rotation_range=45, # horizontal_flip=True, # vertical_flip=True, ) train = datagen.flow_from_directory(train_path ,class_mode='categorical' ,batch_size = BATCH ,target_size=(IMG_HEIGHT, IMG_WIDTH)) #test data val = datagen.flow_from_directory(val_path ,class_mode='categorical' ,batch_size = BATCH ,target_size=(IMG_HEIGHT, IMG_WIDTH)) model = tf.keras.models.Sequential() #We now add the vggModel directly to our new model model.add(conv_base) model.add(tf.keras.layers.Flatten()) model.add(tf.keras.layers.Dense(129, activation='relu')) model.add(tf.keras.layers.Dropout((0.5))) model.add(tf.keras.layers.Dense(5, activation='softmax')) model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001) ,loss='categorical_crossentropy' , metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()] ) early_stopping = EarlyStopping(monitor='val_loss' ,patience=2 ) history_1 = model.fit(train ,validation_data=val ,epochs=10 ,steps_per_epoch=len(train) ,validation_steps=len(val) ,verbose=1 ,callbacks =[early_stopping] ) Training loss : 0.5572120547294617 Training accuracy : 0.8088889122009277 Training precision: 0.9959514141082764 Training recall: 0.437333345413208 Test loss : 0.5427007079124451 Test accuracy : 0.8233333230018616 Test precision: 1.0 Test recall: 0.44333332777023315
[ "recognition rates depend on many variables not only the noises of the image when our recognition is about 80 percent but we also use some information for our recognition process as well. You talked about sign traffic and foggy image when you are local and foreigners who had well recognize the image sign, you had a driving instructions guidebook.\n\nHow can I possibly improve my model?\nThe model does not specify, I use a small simple model working with camera input, I use grids and multiple outputs as a sequence to help determine the label for input as object 1 contains in { P( Object | Grid 1 ), P( Object | Grid 2 ), P( Object | Grid 3 ) ... }\n\ncan I add more layers to my model?\nThat depends on your application practice, input camera does not much information compare to the variety you may try to cascade model or resolution for extracting values, VGG16 is a series of convolution layers and a concat path is your application on it. I use only few dense layers with convolution layers that is because my input image is small.\n\ncan I feed feature extraction base model to a conv2d model that I\ncreate?\nThe same answer, convolution layers is the extracting functions but you can convert information by time-related functions they are not limited to use with sounds or frequency such as MFCC feature extraction as extended information for recognition rates, masking sign, or model concatenation.\n\ncan I apply feed feature extraction from vgg16 to transfer learning ?\nThe same answer is above.\n\nHow can I feed feature extraction vgg16 to svm?\nThe same answer VGG16 is convolution layers you can use concatenate model or concatenate input.\n\n\n\nSample: Working templates, multiplot anime is one of OS response feedbacks it does not need time elapses.\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nFunctions\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\ndef f1( picture ):\n return tf.constant( picture ).numpy()\n\ndef animate( i ):\n ret0, frame0 = video_capture_0.read()\n if (ret0): \n \n frame0 = tf.image.resize(frame0, [29, 39]).numpy()\n \n temp = img_array = tf.keras.preprocessing.image.img_to_array(frame0[:,:,2:3])\n temp2 = img_array = tf.keras.preprocessing.image.img_to_array(frame0[:,:,1:2])\n temp3 = img_array = tf.keras.preprocessing.image.img_to_array(frame0[:,:,0:1])\n\n temp = tf.keras.layers.Concatenate(axis=2)([temp, temp2])\n temp = tf.keras.layers.Concatenate(axis=2)([temp, temp3])\n # 480, 640\n temp = tf.keras.preprocessing.image.array_to_img(\n temp,\n data_format=None,\n scale=True\n )\n temp = f1( temp )\n \n im.set_array( temp )\n result = predict_action( temp )\n print( result )\n return im,\n\ndef predict_action ( image ) :\n predictions = model.predict(tf.constant(image, shape=(1, 29, 39, 3) , dtype=tf.float32))\n result = tf.math.argmax(predictions[0])\n return result\n\n\nOutput: Simple camera and object detection, I wrote simple codes with a small model ( No object detection API ), there are more examples\n\n\n", "you stated\nmy data are labeled videos of traffic sign in a foggy weather (they are labeled as visible, not visible, poor viability) \n\nfrom this I assume you have 3 classes. 
However in your model you have the code\nmodel.add(tf.keras.layers.Dense(5, activation='softmax'))\n```\nwhich implies you have 5 classes?\nIn model.fit you have the code\n\n,steps_per_epoch=len(train) ,validation_steps=len(val)\nin your generators you set the batch size to 50. in which case you should have\nsteps_per_epoch =int(len(train/50) and validation_steps=int(len(val/50)\n\nYou are using the early stopping callback but you should add the parameter\n\nrestore_best_weights=True also change patience=4\nThis way your model will be set to the weights for the epoch with the lowest validation loss. I also recommend you use the Keras callback ReduceLROnPlateau. Documentation is [here.][1] my recommended code for this callback is:\n\nrlronp=tf.keras.callbacks.ReduceLROnPlateau( monitor=\"val_loss\", factor=0.4,\npatience=2, verbose=1)\nIn model.fit change callbacks to callbacks=[[early_stopping, rlronp]\n\nAlso run for more epochs\n \n\n\n\n [1]: https://keras.io/api/callbacks/reduce_lr_on_plateau/\n\n" ]
[ 0, 0 ]
[]
[]
[ "keras", "python", "tensorflow" ]
stackoverflow_0074577832_keras_python_tensorflow.txt
Q: do i need to use exactly same attribute names to properties in custom preprocessing class which inherit scikit learn BaseEstimator? When writing custom classes that inherit from scikit-learn's BaseEstimator, an AttributeError: object has no attribute ... is thrown, even though that attribute is present and has values.
class BaseNull(BaseEstimator, TransformerMixin):
    def __init__(self, variables: Union[str, list[str]], row_wise: bool = False,
                 na_kwds: Union[str, list, tuple] = None):
        self.na_kwds = na_kwds
        self.row = row_wise
        self.null_index = None
        self.null_columns = None
        self.row_count = None
        self.column_count = None

`null = BaseNull(temp_data.columns).fit(temp_data)`. It works fine until print(null) executes, or until null is echoed in the REPL; then it throws the above attribute error. The traceback shows that this error happens in getattr() in sklearn's base module.
c:\program files\python39\lib\site-packages\sklearn\utils\_pprint.py in _changed_params(estimator)
     91     estimator with non-default values."""
     92 
---> 93     params = estimator.get_params(deep=False)
     94     init_func = getattr(estimator.__init__, "deprecated_original", estimator.__init__)
     95     init_params = inspect.signature(init_func).parameters

c:\program files\python39\lib\site-packages\sklearn\base.py in get_params(self, deep)
    209         out = dict()
    210         for key in self._get_param_names():
--> 211             value = getattr(self, key)
    212             if deep and hasattr(value, "get_params") and not isinstance(value, type):
    213             deep_items = value.get_params().items()

I found that this is caused by assigning constructor arguments to differently named attributes, e.g. self.row = row_wise. What's happening here? And can I use different attribute names when storing the constructor arguments?

A: 
do i need to use exactly same attribute names to properties in custom preprocessing class which inherit scikit learn BaseEstimator?

Yes. See this part of the docs:

All scikit-learn estimators have get_params and set_params functions. The get_params function takes no arguments and returns a dict of the __init__ parameters of the estimator, together with their values.

(Source.)
In other words, if you have a parameter to __init__ called variables, then it expects self.variables to be a valid variable.
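A minimal sketch of the fix, storing every __init__ parameter under its own name (the fit/transform bodies below are illustrative stubs, not from the question):

from typing import Union
from sklearn.base import BaseEstimator, TransformerMixin

class BaseNull(BaseEstimator, TransformerMixin):
    def __init__(self, variables: Union[str, list] = None,
                 row_wise: bool = False,
                 na_kwds: Union[str, list, tuple] = None):
        # get_params() calls getattr(self, "<param name>") for every
        # __init__ parameter, so each one must be stored verbatim
        self.variables = variables
        self.row_wise = row_wise
        self.na_kwds = na_kwds

    def fit(self, X, y=None):
        # derived state belongs in trailing-underscore attributes,
        # which stay out of get_params() by convention
        self.null_index_ = None
        self.null_columns_ = None
        return self

    def transform(self, X):
        return X

With this version, repr(null) and print(null) work, because get_params() can resolve variables, row_wise, and na_kwds by name.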
do i need to use exactly same attribute names to properties in custom preprocessing class which inherit scikit learn BaseEstimator?
When writing custom classes that inherit from scikit-learn's BaseEstimator, an AttributeError: object has no attribute ... is thrown, even though that attribute is present and has values.
class BaseNull(BaseEstimator, TransformerMixin):
    def __init__(self, variables: Union[str, list[str]], row_wise: bool = False,
                 na_kwds: Union[str, list, tuple] = None):
        self.na_kwds = na_kwds
        self.row = row_wise
        self.null_index = None
        self.null_columns = None
        self.row_count = None
        self.column_count = None

`null = BaseNull(temp_data.columns).fit(temp_data)`. It works fine until print(null) executes, or until null is echoed in the REPL; then it throws the above attribute error. The traceback shows that this error happens in getattr() in sklearn's base module.
c:\program files\python39\lib\site-packages\sklearn\utils\_pprint.py in _changed_params(estimator)
     91     estimator with non-default values."""
     92 
---> 93     params = estimator.get_params(deep=False)
     94     init_func = getattr(estimator.__init__, "deprecated_original", estimator.__init__)
     95     init_params = inspect.signature(init_func).parameters

c:\program files\python39\lib\site-packages\sklearn\base.py in get_params(self, deep)
    209         out = dict()
    210         for key in self._get_param_names():
--> 211             value = getattr(self, key)
    212             if deep and hasattr(value, "get_params") and not isinstance(value, type):
    213             deep_items = value.get_params().items()

I found that this is caused by assigning constructor arguments to differently named attributes, e.g. self.row = row_wise. What's happening here? And can I use different attribute names when storing the constructor arguments?
[ "\ndo i need to use exactly same attribute names to properties in custom preprocessing class which inherit scikit learn BaseEstimator?\n\nYes. See this part of the docs:\n\nAll scikit-learn estimators have get_params and set_params functions. The get_params function takes no arguments and returns a dict of the __init__ parameters of the estimator, together with their values.\n\n(Source.)\nIn other words, if you have a parameter to __init__ called variables, then it expects self.variables to be a valid variable.\n" ]
[ 1 ]
[]
[]
[ "class", "python", "scikit_learn" ]
stackoverflow_0074579336_class_python_scikit_learn.txt
Q: Rewriting loop without breaks Below is the code that I want to replicate without using break ///
while True:
    choice = input("Is it time to choose(Y/N)? ")
    if choice.upper() == "N": 
        break
    idx = random.randint(0, len(contents))
    fnd = False
    for i in range(3):
        for j in range(3):
            if table[i][j] == contents[idx]:
                table[i][j] = "SEEN"
                fnd = True
                break
        if fnd:
            break
    won = check_win(table)
    print_table(table)
    if won:
        print("The game has been won!!")
        break

I can't figure it out; I already tried using variables but can't make it work.

A: You can use a flag instead:
w_flag = True
while w_flag:
    choice = input("Is it time to choose(Y/N)? ")
    if choice.upper() == "N": 
        w_flag = False
    
    if w_flag:
        idx = random.randint(0, len(contents))
        fnd = False
        i = 0
        i_flag = True
        while i < 3 and i_flag:
            j = 0            # reset the column index for every row
            j_flag = True
            while j < 3 and j_flag:
                if table[i][j] == contents[idx]:
                    table[i][j] = "SEEN"
                    fnd = True
                    j_flag = False
                j += 1
            if fnd:
                i_flag = False
            i += 1
        
        won = check_win(table)
        print_table(table)
        if won:
            print("The game has been won!!")
            w_flag = False

A: Let's handle this case by case. (I'm leaving initializing variables up to you, btw.)
while True:
    choice = input("Is it time to choose(Y/N)? ")
    if choice.upper() == "N": 
        break

The above break statement could be removed by having a playing variable. Change while True: to while playing, and then set playing to False if the player chooses; put the rest of the loop body in the else branch.
    idx = random.randint(0, len(contents))
    fnd = False
    for i in range(3):
        for j in range(3):
            if table[i][j] == contents[idx]:
                table[i][j] = "SEEN"
                fnd = True
                break
        if fnd:
            break

I'm taking the above two break statements to be independent of the top-level while loop, as that's the way it's currently coded. You could re-code these as two nested while loops that increment counters: while i < 3 and not fnd and while j < 3 and not fnd, though that's messy. The inner one could also be handled with index()
    won = check_win(table)
    print_table(table)
    if won:
        print("The game has been won!!")
        break

This last one is the easiest: just change the loop condition to while playing and not won
That's messy, but it should be equivalent. I might be able to make it cleaner if I knew what you were trying to do with the code as a whole.
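Pulling the second answer's suggestions together, here is a minimal break-free sketch of the whole loop, assuming contents, table, check_win, and print_table are defined as in the question:

import random

playing = True
won = False
while playing and not won:
    choice = input("Is it time to choose(Y/N)? ")
    if choice.upper() == "N":
        playing = False
    else:
        # randrange also avoids the off-by-one of randint(0, len(contents))
        idx = random.randrange(len(contents))
        fnd = False
        i = 0
        while i < 3 and not fnd:
            j = 0
            while j < 3 and not fnd:
                if table[i][j] == contents[idx]:
                    table[i][j] = "SEEN"
                    fnd = True
                j += 1
            i += 1
        won = check_win(table)
        print_table(table)

if won:
    print("The game has been won!!")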
Rewriting loop without breaks
Below is the code that I want to replicate without using break /// while True: choice = input("Is it time to choose(Y/N)? ") if choice.upper() == "N": break idx = random.randint(0, len(contents)) fnd = False for i in range(3): for j in range(3): if table[i][j] == contents[idx]: table[i][j] = "SEEN" fnd = True break if fnd: break won = check_win(table) print_table(table) if won: print("The game has been won!!") break Can't figure out, already tried using variables but can't make it work
[ "You can use a flag instead:\nw_flag = True\nwhile w_flag:\n choice = input(\"Is it time to choose(Y/N)? \")\n if choice.upper() == \"N\": \n w_flag = False\n \n if w_flag:\n idx = random.randint(0, len(contents))\n fnd = False\n i,j = 0,0\n i_flag = True\n while i < 3 and i_flag:\n j_flag = True\n while j < 3 and j_flag:\n if table[i][j] == contents[idx]:\n table[i][j] = \"SEEN\"\n fnd = True\n j_flag = False\n j += 1\n if fnd:\n i_flag = False\n i += 1\n \n won = check_win(table)\n print_table(table)\n if won:\n print(\"The game has been won!!\")\n w_flag = False\n\n", "Let's handle this case by case. (I'm leaving initializing variables up to you, btw.)\nwhile True:\n choice = input(\"Is it time to choose(Y/N)? \")\n if choice.upper() == \"N\": \n break\n\nThe above break statement could be removed by having a playing variable. Change while True: to while playing, and then set playing to false if the player chooses; put the rest of the loop body in the else branch.\n idx = random.randint(0, len(contents))\n fnd = False\n for i in range(3):\n for j in range(3):\n if table[i][j] == contents[idx]:\n table[i][j] = \"SEEN\"\n fnd = True\n break\n if fnd:\n break\n\nI'm taking the above two break statements to be independent of the top-level while loop, as that's the way it's currently coded. You could re-code these as two nested while loops that increment counters- while i < 3 && !fnd and while j < 3 && !fnd, though that's messy. The inner one could also be handled with index()\n won = check_win(table)\n print_table(table)\n if won:\n print(\"The game has been won!!\")\n break\n\nThis last one is the easiest: Just change the loop condition to while playing && !won\nThat's messy, but it should be equivalent. I might be able to make it cleaner if I knew what you were trying to do with the code as a whole.\n" ]
[ 0, 0 ]
[]
[]
[ "break", "loops", "python" ]
stackoverflow_0074579272_break_loops_python.txt
Q: How do I get rid of labels in django Form? I have dealt with only ModelForm previously, so it is my first time using Form. I'd like to get rid of labels from my form, however, how I get rid of labels in ModelForm does not seem to work with Form. Here is my code:
forms.py
class UserLoginForm(forms.Form):
    email = forms.CharField(max_length=255)
    password = forms.CharField(max_length=255)

    labels = {
        'email': '',
        'password': ''
    }
    widgets = {
        'email': forms.TextInput(attrs={'class': 'login_input', 'placeholder': 'Email'}),
        'password': forms.PasswordInput(attrs={'class': 'login_input', 'placeholder': 'Password'})
    }

It seemed like a simple problem but it turned out I could not get what I wanted from the official Django documentation or Google. I'd really appreciate it if you could help me solve it.

A: As @Carcigenicate mentioned in the comment above, you can directly use {{form.email}}, which renders only the input tag instead of the label tag.
To remove the label you should use inline labels, not a labels dict (the dict form only works when defined in a ModelForm's Meta class), so:
class UserLoginForm(forms.Form):
    email = forms.CharField(max_length=255, label="")
    password = forms.CharField(max_length=255, label="")

You can also define inline widgets.
Then you can use {{form}} and won't see labels in the template.
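A minimal sketch of the full form with inline labels and widgets combined (field names and attrs are taken from the question; the empty label should suppress the rendered label text):

from django import forms

class UserLoginForm(forms.Form):
    email = forms.CharField(
        max_length=255,
        label="",  # empty label: no label text is rendered with {{ form }}
        widget=forms.TextInput(
            attrs={"class": "login_input", "placeholder": "Email"}
        ),
    )
    password = forms.CharField(
        max_length=255,
        label="",
        widget=forms.PasswordInput(
            attrs={"class": "login_input", "placeholder": "Password"}
        ),
    )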
How do I get rid of labels in django Form?
I have dealt with only ModelForm previously, so it is my first time using Form. I'd like to get rid of labels from my form, however, how I get rid of labels in ModelForm does not seem to work with Form. Here is my code:
forms.py
class UserLoginForm(forms.Form):
    email = forms.CharField(max_length=255)
    password = forms.CharField(max_length=255)

    labels = {
        'email': '',
        'password': ''
    }
    widgets = {
        'email': forms.TextInput(attrs={'class': 'login_input', 'placeholder': 'Email'}),
        'password': forms.PasswordInput(attrs={'class': 'login_input', 'placeholder': 'Password'})
    }

It seemed like a simple problem but it turned out I could not get what I wanted from the official Django documentation or Google. I'd really appreciate it if you could help me solve it.
[ "As @Carcigenicate mentioned in the above comment that you can directly use {{form.email}} which would only render input tag instead of label tag.\nTo remove the label you should use inline labels not labels dict as they are defined in Meta class, so:\nclass UserLoginForm(forms.Form):\n email = forms.CharField(max_length=255, label=\"\")\n password = forms.CharField(max_length=255, label=\"\")\n\nYou can also define inline widegts.\nThen you can use {{form}} and won't see labels in the template.\n" ]
[ 2 ]
[]
[]
[ "django", "django_forms", "django_templates", "django_widget", "python" ]
stackoverflow_0074578967_django_django_forms_django_templates_django_widget_python.txt
Q: customize html table of pytest with docstring of the test I have a pytest test function as below.
def test_mytest():
    '''
    this is my awesome test
    '''
    assert 1==1

I'd like to print this test_mytest docstring in the html report as a column between the test and duration columns. I gather that the pytest_runtest_makereport() hook with pytest.mark.hookwrapper could help. Are there any code snippets that I can use as a reference to make the modification?

A: Fortunately the exact example is mentioned in the pytest-html user guide.
https://pytest-html.readthedocs.io/en/latest/user_guide.html#modifying-the-results-table
A mere copy-paste of the example adds the docstring to the title.
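For reference, a sketch of a conftest.py along the lines of that documented example (written against pytest-html 3.x; the hook signatures changed in later releases, so check the guide for your version):

# conftest.py
import pytest
from py.xml import html


def pytest_html_results_table_header(cells):
    # add a "Description" column between the test name and duration
    cells.insert(2, html.th("Description"))


def pytest_html_results_table_row(report, cells):
    cells.insert(2, html.td(getattr(report, "description", "")))


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # stash the test docstring on the report so the row hook can read it
    report.description = str(item.function.__doc__)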
customize html table of pytest with docstring of the test
I have a pytest test function as below.
def test_mytest():
    '''
    this is my awesome test
    '''
    assert 1==1

I'd like to print this test_mytest docstring in the html report as a column between the test and duration columns. I gather that the pytest_runtest_makereport() hook with pytest.mark.hookwrapper could help. Are there any code snippets that I can use as a reference to make the modification?
[ "Fortunately the exact example is mention in the pytest-html user guide.\nhttps://pytest-html.readthedocs.io/en/latest/user_guide.html#modifying-the-results-table\nA mere copy paste of the example adds the docstring to the title\n" ]
[ 0 ]
[]
[]
[ "pytest", "pytest_html", "python" ]
stackoverflow_0074577151_pytest_pytest_html_python.txt
Q: Cannot import name '_png' from 'matplotlib' I want to use matplotlib instead kivy_garden.graph. Actually, I tried this code to check if it works for me. I've had some problems with installing matplotlib but I have successfully(or not) done that. When I started the code I got
from matplotlib import _png ImportError: cannot import name '_png' from 'matplotlib' (D:\PyCharmProjects\kivyApp\venv\lib\site-packages\matplotlib\__init__.py)

I reinstalled matplotlib and pip, tried another version of matplotlib and I don't know why it is not working for me. I have Python 3.7.5, pip 20.2.4 and matplotlib 3.3.3

A: Reverting to matplotlib version 3.0.2 didn't work for me, but with 3.1.3 it did.
python -m pip uninstall matplotlib
pip install matplotlib==3.1.3

Python 3.8.2

A: I was having this problem in Google Colab and couldn't solve it. The simple solution I found was to install the stable version with pip install -U matplotlib, then restart the runtime; that worked.

A: It works now. I executed py -m pip uninstall matplotlib and then py -m pip install matplotlib==3.0.2 from the terminal in PyCharm. The same commands in cmd and git bash didn't work.

A: If you are using a Linux distribution, the problem is in the installation of matplotlib.
Uninstall the current version:
pip uninstall matplotlib
The correct installation can be found at:
https://matplotlib.org/stable/users/installing.html
Choose the correct command for your distribution from below:
Linux package manager
If you are using the Python version that comes with your Linux distribution, you can install Matplotlib via your package manager, e.g.:
Debian / Ubuntu: sudo apt-get install python3-matplotlib
Fedora: sudo dnf install python3-matplotlib
Red Hat: sudo yum install python3-matplotlib
Arch: sudo pacman -S python-matplotlib
After this it should work fine.

A: You can also comment out the line where this import is :)
(for me it is C:\Users\{username}\.kivy\garden\garden.matplotlib\backend_kivy.py, line 256)
But I don't know what problems it may bring in the future.

A: Since this bug has remained unfixed for so many years, I gave up waiting for the kivy-garden community to fix it. So when deploying my kivy-based software, I also distribute the following quick-and-dirty fix. It finds the file and comments out that line automatically.
from pathlib import Path
# Join user's home directory with Garden's file path
file = Path.home().joinpath('.kivy/garden/garden.matplotlib/backend_kivy.py')
# Read file
lines = []
with open(file, 'r') as f:
    for line in f:
        lines.append(line)
# Write file
with open(file, 'w') as f:
    for line in lines:
        if line.startswith('from matplotlib import _png'):
            # comment out this troublesome line
            line = '#' + line
        f.write(line)

A: I'm also facing this issue in Google Colab. Try pip install matplotlib==3.1.3; it worked for me in Google Colab.

A: Just install the matplotlib library with pip install matplotlib==3.1.3 and use this version.
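Since several answers hinge on which version is actually active, a quick hedged check to confirm what the interpreter picks up (useful when more than one environment or install is involved):

import matplotlib

print(matplotlib.__version__)   # expect e.g. 3.0.2 or 3.1.3 after pinning
print(matplotlib.__file__)      # confirms which install is on sys.path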
Cannot import name '_png' from 'matplotlib'
I want to use matplotlib instead kivy_garden.graph. Actually, I tried this code to check if it works for me. I've had some problems with installing matplotlib but I have successfully(or not) done that. When I started the code I got from matplotlib import _png ImportError: cannot import name '_png' from 'matplotlib' (D:\PyCharmProjects\kivyApp\venv\lib\site-packages\matplotlib\__init__.py) I reinstalled matplotlib and pip, tried another version of matplotlib and I don't know why it is not working for me. I have Python 3.7.5, pip 20.2.4 and matplotlib 3.3.3
[ "Reverting to matplotlib version 3.0.2 didn't work for me, but with 3.1.3 it did.\npython -m pip uninstall matplotlib\npip install matplotlib==3.1.3\n\nPython 3.8.2\n", "I was having this problem in Google Colab and couldn't solve it. The simple solution that I found was to install the stable version that is pip install -U matplotlib and restarted the runtime and it worked.\n", "It's work now. I executed py -m pip uninstall matplotlib and after py -m pip install matplotlib --version=3.0.2 from terminal in PyCharm. The same commands in cmd and git bash didn't work.\n", "If you are using a linux distribution the problem is in the installation of matplotlib.\nuninstall the current version:\npip uninstall matplotlib\nThe correct installation can be found at:\nhttps://matplotlib.org/stable/users/installing.html\nChoose the correct code for your distribution from below:\nLinux package manager\nIf you are using the Python version that comes with your Linux distribution, you can install Matplotlib via your package manager, e.g.:\nDebian / Ubuntu: sudo apt-get install python3-matplotlib\nFedora: sudo dnf install python3-matplotlib\nRed Hat: sudo yum install python3-matplotlib\nArch: sudo pacman -S python-matplotlib\nafter this it should work fine.\n", "You also can coment line where this import is :)\n(for me it is C:\\Users\\{username}\\.kivy\\garden\\garden.matplotlib\\backend_kivy.py , line 256)\nBut i dont know what problems it can bring in the future.\n", "Since this bug remains for so many years I gave up waiting for the kivy-garden community to fix it. So when deploying my kivy-based software, I also distribute the following quick-and-dirty fix. It finds the file and comments out that line automatically.\nfrom pathlib import Path\n# Join user's home directory with Garden's file path\nfile = Path.home().joinpath('.kivy/garden/garden.matplotlib/backend_kivy.py')\n# Read file\nlines = []\nwith open(file, 'r') as f:\n for line in f:\n lines.append(line)\n# Write file\nwith open(file, 'w') as f:\n for line in lines:\n if line.startswith('from matplotlib import _png'):\n # comment out this troublesome line\n line = '#' + line\n f.write(line)\n\n", "I'm also facing this issue in Google Colab\ntry this pip install matplotlib==3.1.3 work for me in Google Colab\n", "Just import matplotlib library\npip install matplotlib==3.1.3\n\nand, this version\n" ]
[ 26, 5, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "kivy", "matplotlib", "python" ]
stackoverflow_0064862818_kivy_matplotlib_python.txt
Q: Cholesky factorisation not lower triangular I am building a Cholesky factorisation algorithm as proposed in the book: Linear Algebra and Optimisation for Machine Learning
They provide the following algorithm:

I have attempted this in Python with the following algorithm:
def CF(array):
    array = np.array(array, float)
    arr = np.linalg.eigvals(array)
    for o in arr:
        if o < 0:
            return print('Cannot apply Cholesky Factorisation')
        else:
            continue
         
    L = np.zeros_like(array)
    n,_ = array.shape
    
    for j in range(n):
        sumkj = 0
        for k in range(j):
            sumkj += L[j,k]**2
        print(array[j,j], sumkj)
        L[j,j] = np.sqrt(array[j,j] - sumkj)
        
        for i in range(j+1):
            sumki = 0
            for ki in range(i):
                sumki += L[i,ki]
            L[i,j] = (array[i,j] - sumki*sumkj)/L[j,j]
            
    return L

When testing this on a matrix I get upper triangular as opposed to lower triangular. Where was I mistaken?
a = np.array([[4, 2, 1],
              [2, 6, 1],
              [2, 8, 2]])

CF(a)
>array([[2.        , 0.81649658, 0.70710678],
       [0.        , 2.44948974, 0.70710678],
       [0.        , 0.        , 1.41421356]])

A: You don't even need to look at the detail of the algorithm.
Just see that in your book algorithm you have
for j=1 to d
    L(j,j) = something
    for i=j+1 to d
        L(i,j) = something

So, necessarily, all elements whose row number i is greater than or equal to column number j are filled. The rest is 0. Hence a lower triangle.
In your algorithm, on the other hand, you have
for j in range(n):
    L[j,j] = something()
    for i in range(j+1):
        L[i,j] = something()

So all elements whose row number i is less than column number j+1 (since i is in range(j+1), that is i<j+1, that is i<=j) are filled with something. The rest is 0.
Hence an upper triangle.
You probably wanted:
 for i in range(j+1, n)
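Beyond the triangle orientation, the inner update in the question also drops the squares and cross-products. A minimal corrected sketch follows; note that Cholesky assumes a symmetric positive-definite input, which the 3x3 matrix in the question is not, so the example matrix here is adjusted:

import numpy as np

def cholesky_lower(a):
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    L = np.zeros_like(a)
    for j in range(n):
        # diagonal entry: subtract the squared entries already on row j
        L[j, j] = np.sqrt(a[j, j] - np.dot(L[j, :j], L[j, :j]))
        # fill the column below the diagonal
        for i in range(j + 1, n):
            L[i, j] = (a[i, j] - np.dot(L[i, :j], L[j, :j])) / L[j, j]
    return L

a = np.array([[4.0, 2.0, 2.0],
              [2.0, 6.0, 4.0],
              [2.0, 4.0, 8.0]])   # a symmetric positive-definite example

L = cholesky_lower(a)
print(np.allclose(L @ L.T, a))    # True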
Cholesky factorisation not lower triangular
I am building a Cholesky factorisation algorithm as proposed in the book: Linear Algebra and Optimisation for Machine Learning
They provide the following algorithm:

I have attempted this in Python with the following algorithm:
def CF(array):
    array = np.array(array, float)
    arr = np.linalg.eigvals(array)
    for o in arr:
        if o < 0:
            return print('Cannot apply Cholesky Factorisation')
        else:
            continue
         
    L = np.zeros_like(array)
    n,_ = array.shape
    
    for j in range(n):
        sumkj = 0
        for k in range(j):
            sumkj += L[j,k]**2
        print(array[j,j], sumkj)
        L[j,j] = np.sqrt(array[j,j] - sumkj)
        
        for i in range(j+1):
            sumki = 0
            for ki in range(i):
                sumki += L[i,ki]
            L[i,j] = (array[i,j] - sumki*sumkj)/L[j,j]
            
    return L

When testing this on a matrix I get upper triangular as opposed to lower triangular. Where was I mistaken?
a = np.array([[4, 2, 1],
              [2, 6, 1],
              [2, 8, 2]])

CF(a)
>array([[2.        , 0.81649658, 0.70710678],
       [0.        , 2.44948974, 0.70710678],
       [0.        , 0.        , 1.41421356]])
[ "You don't even need to look at the detail of the algorithm.\nJust see that in your book algorithm you have\nfor j=1 to d\n L(j,j) = something\n for i=j+1 to d\n L(i,j) = something\n\nSo, necessarily, all elements whose line number i is greater or equal than column number j are filled. Rest is 0. Hence a lower triangle\nIn your algorithm on another hand you have\nfor j in range(n):\n L[j,j] = something()\n for i in range(j+1):\n L[i,j] = something()\n\nSo all elements whose line number i is < to column number j, +1 (since i is in range(j+1), that is i<j+1, that is i<=j) are filled with something. Rest is 0.\nHence upper triangle\nYou probably wanted to\n for i in range(j+1, n)\n\n" ]
[ 2 ]
[]
[]
[ "linear_algebra", "numpy", "python" ]
stackoverflow_0074579332_linear_algebra_numpy_python.txt
Q: Replacing a string with NaN or 0 I have a data file that I'm cleaning, and the source uses '--' to indicate missing data. I ultimately need to have this data field be either an integer or float. But I am not sure how to remove the string.
I specified the types in a type_dict statement before importing the csv file. 6 of my 8 variables correctly came in as an integer or float. Of course, the two that are still objects are the ones I need to fix.
I've tried using the df = df.var.str.replace('--', '')
I've tried using the df.var.fillna(df.var.mode().values[0], inplace=True) (and I wonder if I need to just change the values '0' to '--')
My presumption is that if I can empty those cells in some fashion, I can define the variable as an int/float.
I'm sure I'm missing something really simple, have walked away and come back, but am just not figuring it out.

A: Try something like this, cleaning the input before entering it into pandas:
from io import StringIO
import pandas as pd

with open('data.txt', 'r') as file:
    data = StringIO(file.read().replace('--', '0'))

df = pd.read_csv(data)

A: OK, we figured out two options to make this work:
solution 1:
df = df.replace(r'^--$', np.nan, regex=True)
solution 2 (a simplified version of #1):
df = df.replace(r'--', np.nan)
Both gave the expected output of empty cells when I exported the csv into a spreadsheet. And then when I reimported that intermediate file, I had floats instead of strings as expected.
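A third option, sketched below: let pandas treat '--' as missing at read time via na_values, which also lets the dtype conversion succeed in one pass (the file and column names are placeholders):

import pandas as pd

df = pd.read_csv(
    "data.csv",
    na_values="--",             # every '--' cell becomes NaN on import
    dtype={"var": "float64"},   # the column can now be parsed as numeric
)

Because NaN is a float, a column with missing values cannot stay a plain int dtype; float64 (or pandas' nullable Int64) is the usual choice here.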
Replacing a string with NaN or 0
I have a data file that I'm cleaning, and the source uses '--' to indicate missing data. I ultimately need to have this data field be either an integer or float. But I am not sure how to remove the string. I specified the types in a type_dict statement before importing the csv file. 6 of my 8 variables correctly came in as an integer or float. Of course, the two that are still objects are the ones I need to fix. I've tried using the df = df.var.str.replace('--', '') I've tried using the df.var.fillna(df.var.mode().values[0], inplace=True) (and I wonder if I need to just change the values '0' to '--') My presumption is that if I can empty those cells in some fashion, I can define the variable as an int/float. I'm sure I'm missing something really simple, have walked away and come back, but am just not figuring it out.
[ "try something like this cleaning input before antering into pandas\nimport sys\nfrom io import StringIO\nimport pandas as pd\n\nwith open('data.txt', 'r') as file:\n data = StringIO(file.read().replace('--', '0'))\n\ndf = pd.read_csv(data)\n\n\n\n", "OK, we figured out two options to make this work:\nsolution 1:\ndf = df.replace(r'^--$', np.nan, regex=True)\nsolution 2 (a simplified version of #1):\ndf = df.replace(r'--', np.nan)\nBoth gave the expected output of empty cells when I exported the csv into a spreadsheet. And then when I reimported that intermediate file, I had floats instead of strings as expected.\n" ]
[ 0, 0 ]
[]
[]
[ "dtype", "nan", "pandas", "python", "replace" ]
stackoverflow_0074578696_dtype_nan_pandas_python_replace.txt
Q: How to decode message_summary_info in macOS Messages chat.db In the chat.db Messages database on macOS, in the message table there exist two binary blob columns: attributedBody message_summary_info The message edit history (introduced with macOS Ventura) is stored in the message_summary_info. I'm able to decode and parse the attributedBody with python-typedstream, however attempting to do so on the message_summary_info yields an error typedstream.stream.InvalidTypedStreamError: Invalid streamer version: 98. How can I decode and parse message_summary_info? (Related to this question) A: I believe I found at least a partial solution with python leveraging plistlib since the message_summary_info is a binary plist, but I'm open to better solutions if they exist. Here is some output of some test messages I sent with macOS Messages and edited. The message was originally "Test BEFORE edit", then was edited to "Test AFTER edit", then was edited once more to "Test AFTER edit 2". >>> import plistlib >>> pprint(plistlib.loads(bytes_str)) {'ec': {'0': [{'d': 690916194.2870002, 't': b'\x04\x0bstreamtyped\x81\xe8\x03\x84\x01@\x84\x84\x84\x12N' b'SAttributedString\x00\x84\x84\x08NSObject\x00\x85\x92' b'\x84\x84\x84\x08NSString\x01\x94\x84\x01+\x10Test BEFOR' b'E edit\x86\x84\x02iI\x01\x10\x92\x84\x84\x84\x0cNSDict' b'ionary\x00\x94\x84\x01i\x01\x92\x84\x96\x96\x1d__kIMMe' b'ssagePartAttributeName\x86\x92\x84\x84\x84\x08NSNumber' b'\x00\x84\x84\x07NSValue\x00\x94\x84\x01*\x84\x99\x99\x00' b'\x86\x86\x86'}, {'bcg': '5BE7ACC0-8863-4418-A966-1320577ED52F', 'd': 690916202.619842, 't': b'\x04\x0bstreamtyped\x81\xe8\x03\x84\x01@\x84\x84\x84\x12N' b'SAttributedString\x00\x84\x84\x08NSObject\x00\x85\x92' b'\x84\x84\x84\x08NSString\x01\x94\x84\x01+\x0fTest AFTER' b' edit\x86\x84\x02iI\x01\x0f\x92\x84\x84\x84\x0cNSDicti' b'onary\x00\x94\x84\x01i\x01\x92\x84\x96\x96\x1d__kIMMes' b'sagePartAttributeName\x86\x92\x84\x84\x84\x08NSNumber\x00' b'\x84\x84\x07NSValue\x00\x94\x84\x01*\x84\x99\x99\x00\x86' b'\x86\x86'}, {'bcg': '98E04B01-7490-4984-92D5-5910C24F51C1', 'd': 690916210.155562, 't': b'\x04\x0bstreamtyped\x81\xe8\x03\x84\x01@\x84\x84\x84\x12N' b'SAttributedString\x00\x84\x84\x08NSObject\x00\x85\x92' b'\x84\x84\x84\x08NSString\x01\x94\x84\x01+\x11Test AFTER' b' edit 2\x86\x84\x02iI\x01\x11\x92\x84\x84\x84\x0cNSDic' b'tionary\x00\x94\x84\x01i\x01\x92\x84\x96\x96\x1d__kIMM' b'essagePartAttributeName\x86\x92\x84\x84\x84\x08NSNumbe' b'r\x00\x84\x84\x07NSValue\x00\x94\x84\x01*\x84\x99\x99' b'\x00\x86\x86\x86'}]}, 'ep': [0], 'euh': ['RECIPIENT@icloud.com'], 'otr': {'0': {'le': 16, 'lo': 0}}, 'ust': True}
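Building on that, here is a hedged sketch combining plistlib with python-typedstream to pull the edit history out of the nested 't' blobs. The key names 'ec', '0', and 't' follow the dump above, and the unarchive call assumes the same typedstream library used for attributedBody:

import plistlib
import typedstream

def edit_history(message_summary_info: bytes):
    info = plistlib.loads(message_summary_info)
    versions = []
    # "ec" maps a message-part index to its list of edits; each edit
    # carries a typedstream-archived NSAttributedString under "t"
    for edit in info.get("ec", {}).get("0", []):
        versions.append(typedstream.unarchive_from_data(edit["t"]))
    return versions

Each returned object is the same NSAttributedString-like structure you get when decoding attributedBody, so the same text-extraction code should apply.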
How to decode message_summary_info in macOS Messages chat.db
In the chat.db Messages database on macOS, in the message table there exist two binary blob columns: attributedBody message_summary_info The message edit history (introduced with macOS Ventura) is stored in the message_summary_info. I'm able to decode and parse the attributedBody with python-typedstream, however attempting to do so on the message_summary_info yields an error typedstream.stream.InvalidTypedStreamError: Invalid streamer version: 98. How can I decode and parse message_summary_info? (Related to this question)
[ "I believe I found at least a partial solution with python leveraging plistlib since the message_summary_info is a binary plist, but I'm open to better solutions if they exist.\nHere is some output of some test messages I sent with macOS Messages and edited. The message was originally \"Test BEFORE edit\", then was edited to \"Test AFTER edit\", then was edited once more to \"Test AFTER edit 2\".\n>>> import plistlib\n>>> pprint(plistlib.loads(bytes_str))\n{'ec': {'0': [{'d': 690916194.2870002,\n 't': b'\\x04\\x0bstreamtyped\\x81\\xe8\\x03\\x84\\x01@\\x84\\x84\\x84\\x12N'\n b'SAttributedString\\x00\\x84\\x84\\x08NSObject\\x00\\x85\\x92'\n b'\\x84\\x84\\x84\\x08NSString\\x01\\x94\\x84\\x01+\\x10Test BEFOR'\n b'E edit\\x86\\x84\\x02iI\\x01\\x10\\x92\\x84\\x84\\x84\\x0cNSDict'\n b'ionary\\x00\\x94\\x84\\x01i\\x01\\x92\\x84\\x96\\x96\\x1d__kIMMe'\n b'ssagePartAttributeName\\x86\\x92\\x84\\x84\\x84\\x08NSNumber'\n b'\\x00\\x84\\x84\\x07NSValue\\x00\\x94\\x84\\x01*\\x84\\x99\\x99\\x00'\n b'\\x86\\x86\\x86'},\n {'bcg': '5BE7ACC0-8863-4418-A966-1320577ED52F',\n 'd': 690916202.619842,\n 't': b'\\x04\\x0bstreamtyped\\x81\\xe8\\x03\\x84\\x01@\\x84\\x84\\x84\\x12N'\n b'SAttributedString\\x00\\x84\\x84\\x08NSObject\\x00\\x85\\x92'\n b'\\x84\\x84\\x84\\x08NSString\\x01\\x94\\x84\\x01+\\x0fTest AFTER'\n b' edit\\x86\\x84\\x02iI\\x01\\x0f\\x92\\x84\\x84\\x84\\x0cNSDicti'\n b'onary\\x00\\x94\\x84\\x01i\\x01\\x92\\x84\\x96\\x96\\x1d__kIMMes'\n b'sagePartAttributeName\\x86\\x92\\x84\\x84\\x84\\x08NSNumber\\x00'\n b'\\x84\\x84\\x07NSValue\\x00\\x94\\x84\\x01*\\x84\\x99\\x99\\x00\\x86'\n b'\\x86\\x86'},\n {'bcg': '98E04B01-7490-4984-92D5-5910C24F51C1',\n 'd': 690916210.155562,\n 't': b'\\x04\\x0bstreamtyped\\x81\\xe8\\x03\\x84\\x01@\\x84\\x84\\x84\\x12N'\n b'SAttributedString\\x00\\x84\\x84\\x08NSObject\\x00\\x85\\x92'\n b'\\x84\\x84\\x84\\x08NSString\\x01\\x94\\x84\\x01+\\x11Test AFTER'\n b' edit 2\\x86\\x84\\x02iI\\x01\\x11\\x92\\x84\\x84\\x84\\x0cNSDic'\n b'tionary\\x00\\x94\\x84\\x01i\\x01\\x92\\x84\\x96\\x96\\x1d__kIMM'\n b'essagePartAttributeName\\x86\\x92\\x84\\x84\\x84\\x08NSNumbe'\n b'r\\x00\\x84\\x84\\x07NSValue\\x00\\x94\\x84\\x01*\\x84\\x99\\x99'\n b'\\x00\\x86\\x86\\x86'}]},\n 'ep': [0],\n 'euh': ['RECIPIENT@icloud.com'],\n 'otr': {'0': {'le': 16, 'lo': 0}},\n 'ust': True}\n\n" ]
[ 1 ]
[]
[]
[ "imessage", "macos", "python" ]
stackoverflow_0074579463_imessage_macos_python.txt
Q: I cannot pass a parameter inside a function to another function (Python) I successfully defined a parameter and passed it to a function but when I return to main menu that parameter's value is completely reset and I cannot use it anywhere. The value of the parameter stays only within the second function. Like, the parameter cannot communicate with the whole program as far as I understand. def main_menu(subtotal): while True: print("1) Appetizers") print("2) Option 2") print("3) Option 3") print("4) Option 4") print("5) Option 5") print(" ") print("Current overall subtotal: $" + str(round(subtotal,2))) while True: try: choice = int(input("What is your choice? ")) break except ValueError: print("Please enter whole numbers only.") while choice > 5 or choice < 1: print("Please choose an option from 1 to 5") try: choice = int(input("what is your choice? ")) except ValueError: print("Please enter whole numbers only.") if choice == 1: appetizers(subtotal) """ elif choice == 2: option_2(subtotal) elif choice == 3: option_3(subtotal) elif choice == 4: option_4(subtotal) elif choice == 5: end(subtotal) return subtotal """ def appetizers(subtotal): while True: print("1) Option 1") print("2) Option 2") print("3) Option 3") print("4) Return to Main Menu") print(" ") print("Current overall subtotal: $" + str(round(subtotal,2))) product_amount = 1 while True: try: choice = int(input("What is your choice? ")) break except ValueError: print("Please enter whole numbers only.") while choice > 4 or choice < 1: print("Please choose an option from 1 to 4") try: choice = int(input("what is your choice? ")) except ValueError: print("Please enter whole numbers only.") if choice == 4: return subtotal else: while True: try: product_amount = int(input("How many would you like? ")) break except ValueError: print("Please enter whole numbers only.") while product_amount > 100000 or product_amount < 1: print("Please choose an option from 1 to 100,000") product_amount = int(input("How many would you like? ")) if choice == 1: subtotal = subtotal + (product_amount * 4.99) elif choice == 2: subtotal = subtotal + (product_amount * 2.99) elif choice == 3: subtotal = subtotal + (product_amount * 8.99) For this project's sake, I don't want to use global variables. I only want to use the subtotal variable as a parameter throughout the program and continuously alter its value throughout the program and make calculations with it. I want to do this by passing it through other functions. A: Since you've written the operations into functions and you're passing in the current subtotal, you just need to be updating the subtotal by saving the return value from appetizers() in main_menu(), like here: # ... if choice == 1: subtotal = appetizers(subtotal)
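The same return-and-reassign pattern applies to every menu branch. A minimal sketch of the idea in isolation (prices here are illustrative, loosely based on the question's appetizer menu):

def add_item(subtotal, price, qty):
    return subtotal + price * qty       # hand the new total back to the caller

def main_menu(subtotal):
    subtotal = add_item(subtotal, 4.99, 2)   # capture the returned value
    subtotal = add_item(subtotal, 8.99, 1)
    return subtotal

print(main_menu(0.0))   # 18.97

Without the subtotal = ... reassignment, the returned value is discarded and the caller's subtotal never changes, which is exactly the "reset" behavior described in the question.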
I cannot pass a parameter inside a function to another function (Python)
I successfully defined a parameter and passed it to a function but when I return to main menu that parameter's value is completely reset and I cannot use it anywhere. The value of the parameter stays only within the second function. Like, the parameter cannot communicate with the whole program as far as I understand. def main_menu(subtotal): while True: print("1) Appetizers") print("2) Option 2") print("3) Option 3") print("4) Option 4") print("5) Option 5") print(" ") print("Current overall subtotal: $" + str(round(subtotal,2))) while True: try: choice = int(input("What is your choice? ")) break except ValueError: print("Please enter whole numbers only.") while choice > 5 or choice < 1: print("Please choose an option from 1 to 5") try: choice = int(input("what is your choice? ")) except ValueError: print("Please enter whole numbers only.") if choice == 1: appetizers(subtotal) """ elif choice == 2: option_2(subtotal) elif choice == 3: option_3(subtotal) elif choice == 4: option_4(subtotal) elif choice == 5: end(subtotal) return subtotal """ def appetizers(subtotal): while True: print("1) Option 1") print("2) Option 2") print("3) Option 3") print("4) Return to Main Menu") print(" ") print("Current overall subtotal: $" + str(round(subtotal,2))) product_amount = 1 while True: try: choice = int(input("What is your choice? ")) break except ValueError: print("Please enter whole numbers only.") while choice > 4 or choice < 1: print("Please choose an option from 1 to 4") try: choice = int(input("what is your choice? ")) except ValueError: print("Please enter whole numbers only.") if choice == 4: return subtotal else: while True: try: product_amount = int(input("How many would you like? ")) break except ValueError: print("Please enter whole numbers only.") while product_amount > 100000 or product_amount < 1: print("Please choose an option from 1 to 100,000") product_amount = int(input("How many would you like? ")) if choice == 1: subtotal = subtotal + (product_amount * 4.99) elif choice == 2: subtotal = subtotal + (product_amount * 2.99) elif choice == 3: subtotal = subtotal + (product_amount * 8.99) For this project's sake, I don't want to use global variables. I only want to use the subtotal variable as a parameter throughout the program and continuously alter its value throughout the program and make calculations with it. I want to do this by passing it through other functions.
[ "Since you've written the operations into functions and you're passing in the current subtotal, you just need to be updating the subtotal by saving the return value from appetizers() in main_menu(), like here:\n# ...\nif choice == 1:\n subtotal = appetizers(subtotal)\n\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074579462_python_python_3.x.txt
Q: python subprocess with gzip I am trying to stream data through a subprocess, gzip it and write to a file. The following works. I wonder if it is possible to use python's native gzip library instead.
fid = gzip.open(self.ipFile, 'rb')    # input data
oFid = open(filtSortFile, 'wb')    # output file
sort = subprocess.Popen(args="sort | gzip -c ", shell=True, stdin=subprocess.PIPE, stdout=oFid)    # set up the pipe
processlines(fid, sort.stdin, filtFid)    # pump data into the pipe

THE QUESTION: How do I do this instead .. where the gzip package of python is used? I'm mostly curious to know why the following gives me a text files (instead of a compressed binary version) ... very odd.
fid = gzip.open(self.ipFile, 'rb')
oFid = gzip.open(filtSortFile, 'wb')
sort = subprocess.Popen(args="sort ", shell=True, stdin=subprocess.PIPE, stdout=oFid)
processlines(fid, sort.stdin, filtFid)

A: subprocess writes to oFid.fileno(), but gzip's fileno() returns the fd of the underlying (uncompressed) file object:
def fileno(self):
    """Invoke the underlying file object's fileno() method."""
    return self.fileobj.fileno()

To enable compression, use the gzip methods directly:
import gzip
from subprocess import Popen, PIPE
from threading import Thread

def f(input, output):
    for line in iter(input.readline, ''):
        output.write(line)

p = Popen(["sort"], bufsize=-1, stdin=PIPE, stdout=PIPE)
Thread(target=f, args=(p.stdout, gzip.open('out.gz', 'wb'))).start()

for s in "cafebabe":
    p.stdin.write(s+"\n")
p.stdin.close()

Example
$ python gzip_subprocess.py && od -c out.gz && zcat out.gz 
0000000 037 213 \b \b 251 E t N 002 377 o u t \0 K 344
0000020 J 344 J 002 302 d 256 T L 343 002 \0 j 017 j
0000040 k 020 \0 \0 \0
0000045
a
a
b
b
c
e
e
f

A: Since you hand the process only the file descriptor of the file object, none of the file object's other methods (such as its compressing write()) are involved. To work around it, you could write your output to a pipe and read from that like so:
oFid = gzip.open(filtSortFile, 'wb')
sort = subprocess.Popen(args="sort ", shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
oFid.writelines(sort.stdout)
oFid.close()

A: Yes, it is possible to use python's native gzip library instead.
I recommend looking at this question: gzip a file in Python.
I'm now using Jace Browning's answer:
with open('path/to/file', 'rb') as src, gzip.open('path/to/file.gz', 'wb') as dst:
    dst.writelines(src)

Although one comment claims you have to convert the src content to bytes, that is not required with this code.
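For completeness, a hedged sketch of the same pipeline without threads, using subprocess.run with the gzip module at both ends (paths are placeholders):

import gzip
import subprocess

with gzip.open("input.gz", "rt") as src:
    sorted_text = subprocess.run(
        ["sort"], input=src.read(), capture_output=True, text=True, check=True
    ).stdout

with gzip.open("out.gz", "wt") as dst:
    dst.write(sorted_text)

This buffers the whole file in memory, so it suits modest inputs; the threaded version above streams line by line and scales to larger files.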
python subprocess with gzip
I am trying to stream data through a subprocess, gzip it and write to a file. The following works. I wonder if it is possible to use python's native gzip library instead. fid = gzip.open(self.ipFile, 'rb') # input data oFid = open(filtSortFile, 'wb') # output file sort = subprocess.Popen(args="sort | gzip -c ", shell=True, stdin=subprocess.PIPE, stdout=oFid) # set up the pipe processlines(fid, sort.stdin, filtFid) # pump data into the pipe THE QUESTION: How do I do this instead .. where the gzip package of python is used? I'm mostly curious to know why the following gives me a text files (instead of a compressed binary version) ... very odd. fid = gzip.open(self.ipFile, 'rb') oFid = gzip.open(filtSortFile, 'wb') sort = subprocess.Popen(args="sort ", shell=True, stdin=subprocess.PIPE, stdout=oFid) processlines(fid, sort.stdin, filtFid)
[ "subprocess writes to oFid.fileno() but gzip returns fd of underlying file object:\ndef fileno(self):\n \"\"\"Invoke the underlying file object's fileno() method.\"\"\"\n return self.fileobj.fileno()\n\nTo enable compression use gzip methods directly:\nimport gzip\nfrom subprocess import Popen, PIPE\nfrom threading import Thread\n\ndef f(input, output):\n for line in iter(input.readline, ''):\n output.write(line)\n\np = Popen([\"sort\"], bufsize=-1, stdin=PIPE, stdout=PIPE)\nThread(target=f, args=(p.stdout, gzip.open('out.gz', 'wb'))).start()\n\nfor s in \"cafebabe\":\n p.stdin.write(s+\"\\n\")\np.stdin.close()\n\nExample\n$ python gzip_subprocess.py && od -c out.gz && zcat out.gz \n0000000 037 213 \\b \\b 251 E t N 002 377 o u t \\0 K 344\n0000020 J 344 J 002 302 d 256 T L 343 002 \\0 j 017 j\n0000040 k 020 \\0 \\0 \\0\n0000045\na\na\nb\nb\nc\ne\ne\nf\n\n", "Since you just specify the file handle to give to the process you're executing, there are no further methods involved of the file object. To work around it, you could write your output to a pipe and read from that like so:\noFid = gzip.open(filtSortFile, 'wb')\nsort = subprocess.Popen(args=\"sort \", shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)\noFid.writelines(sort.stdout)\noFid.close()\n\n", "Yes, it is possible to use python's native gzip library instead.\nI recommend looking at this question: gzip a file in Python.\nI'm now using Jace Browning's answer:\nwith open('path/to/file', 'rb') as src, gzip.open('path/to/file.gz', 'wb') as dst:\n dst.writelines(src)\n\nAlthough one comments raises you have to convert the src content to bytes, it is not required with this code.\n" ]
[ 6, 2, 0 ]
[]
[]
[ "gzip", "python" ]
stackoverflow_0007452427_gzip_python.txt
Q: dictionary of key, value from two lists without repeating items from either list I am learning python and wanted to try and make a little script to handle "pulling names from a hat" to decide who has who for Christmas. I have no doubt that there is a more efficient way than this, but it works for the moment. My issue is that it's taking a very inconsistent amount of time to complete. I can run this once, and it spits out the results instantly, but then the next time or few times it will just spin and I've let it sit for 5 minutes and it's still not complete. Looking for some advice on why this is occurring and how to fix to make sure it doesn't take such a long time. To start, I have two identical lists of the same names and then I shuffle them up: fam1 = ["name1", "name2", "name3", "name4", "name5", "name6", "name7"] fam2 = ["name1", "name2", "name3", "name4", "name5", "name6", "name7"] fam1_shuffled = random.sample(fam1, len(fam1)) fam2_shuffled = random.sample(fam2, len(fam2)) I then have a dictionary of name pairs that are not allowed (so that husband: wife and wife: husband from the same house don't pull each other's names for example): not_allowed_pairs = { "name1": "name4", "name4": "name1", "name3": "name6", "name6": "name3" } Then I have the function itself: def pick_names(list1, list2): pairs = {} gifters = list1 used_names = [] while len(pairs) < len(gifters): for i in range(len(list1)): if ((gifters[i] != list2[i]) & (list2[i] not in used_names)): k = gifters[i] v = list2[i] if (k, v) not in non_allowed_pairs.items(): pairs[k] = v used_names.append(v) return pairs Finally, just to separate it out, I have the following function to print out who picked who. def print_picks(pair_dict): for k, v in pair_dict.items(): print(f"{k} picked: {v}") A: My issue is that it's taking a very inconsistent amount of time to complete. I can run this once, and it spits out the results instantly, but then the next time or few times it will just spin and I've let it sit for 5 minutes and it's still not complete. This is because you end up with nothing but "not allowed" possibilities so, the program can never end. I gave what you are trying to do a shot, and below is what I came up with. It can still hang, though. About one out of ten times the last value left from each fam results in a double. import random fam = ["name1", "name2", "name3", "name4", "name5", "name6", "name7"] illegal = { "name1": "name4", "name4": "name1", "name3": "name6", "name6": "name3"} def make_pairs(fam): #sort the illegal values to the beginning of a shuffled list to be handled first #and never shuffle this again fam1 = sorted(random.sample(fam, len(fam)), key=lambda n: n in illegal) #normal order, at first fam2 = fam while fam1: #keep shuffling fam2 til something is allowed while (fam1[0]==fam2[0]) or illegal.get(fam1[0])==fam2[0]: fam2 = random.sample(fam2, len(fam2)) #yield pair yield fam1.pop(0), fam2.pop(0) for k,v in make_pairs(fam): print(f'{k} picked {v}') If you only invite an even number of people to Christmas the doubles problem goes away. You got a weird Uncle or something that you'd have more fun without? A: The crucial part of this algorithm is how we pick the receiver. For example, if the gifter is name1 then the receivers list should not contain name1, or name4 (because of the not_allowed_pairs). 
Here is my code: import random def print_picks(pair_dict): for k, v in pair_dict.items(): print(f"{k} picked: {v}") def remove(lst, *elements): """Return a list with elements removed.""" reduced_list = [ element for element in lst if element not in elements ] return reduced_list def pick_names(list1, list2, not_allowed_pairs): # Make copy to avoid modification of the original list2 = list2.copy() pairs = dict() for gifter in list1: receivers = remove(list2, gifter, not_allowed_pairs.get(gifter)) receiver = random.choice(receivers) pairs[gifter] = receiver list2.remove(receiver) return pairs def main(): fam1 = ["name1", "name2", "name3", "name4", "name5", "name6", "name7", "name8"] fam2 = ["name1", "name2", "name3", "name4", "name5", "name6", "name7", "name8"] not_allowed_pairs = { "name1": "name4", "name4": "name1", "name3": "name6", "name6": "name3" } pairs = pick_names(fam1, fam2, not_allowed_pairs) print_picks(pairs) if __name__ == '__main__': main() Sample output: name1 picked: name7 name2 picked: name3 name3 picked: name2 name4 picked: name5 name5 picked: name4 name6 picked: name8 name7 picked: name1 name8 picked: name6 Now this algorithm is not perfect: There will be time when we run out of names. For example, given name1, name2, and name3: name1 picked name2, name2 picked name1, then name3 got left over. These are edge cases, and I just run the script again.
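Another way to sidestep the hang entirely, sketched below: shuffle whole assignments and validate each one as a unit, with a bounded number of retries, so the program can never spin forever. (Names and rules follow the question; note the question's pick_names also references non_allowed_pairs where the dict is named not_allowed_pairs, which by itself would raise a NameError.)

import random

def draw_names(names, forbidden, max_tries=10_000):
    for _ in range(max_tries):
        receivers = random.sample(names, len(names))
        ok = all(
            giver != receiver and forbidden.get(giver) != receiver
            for giver, receiver in zip(names, receivers)
        )
        if ok:
            return dict(zip(names, receivers))
    raise RuntimeError("No valid assignment found; check the rules.")

fam = ["name1", "name2", "name3", "name4", "name5", "name6", "name7"]
not_allowed_pairs = {"name1": "name4", "name4": "name1",
                     "name3": "name6", "name6": "name3"}

print(draw_names(fam, not_allowed_pairs))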
dictionary of key, value from two lists without repeating items from either list
I am learning python and wanted to try and make a little script to handle "pulling names from a hat" to decide who has who for Christmas. I have no doubt that there is a more efficient way than this, but it works for the moment. My issue is that it's taking a very inconsistent amount of time to complete. I can run this once, and it spits out the results instantly, but then the next time or few times it will just spin and I've let it sit for 5 minutes and it's still not complete. Looking for some advice on why this is occurring and how to fix to make sure it doesn't take such a long time. To start, I have two identical lists of the same names and then I shuffle them up: fam1 = ["name1", "name2", "name3", "name4", "name5", "name6", "name7"] fam2 = ["name1", "name2", "name3", "name4", "name5", "name6", "name7"] fam1_shuffled = random.sample(fam1, len(fam1)) fam2_shuffled = random.sample(fam2, len(fam2)) I then have a dictionary of name pairs that are not allowed (so that husband: wife and wife: husband from the same house don't pull each other's names for example): not_allowed_pairs = { "name1": "name4", "name4": "name1", "name3": "name6", "name6": "name3" } Then I have the function itself: def pick_names(list1, list2): pairs = {} gifters = list1 used_names = [] while len(pairs) < len(gifters): for i in range(len(list1)): if ((gifters[i] != list2[i]) & (list2[i] not in used_names)): k = gifters[i] v = list2[i] if (k, v) not in non_allowed_pairs.items(): pairs[k] = v used_names.append(v) return pairs Finally, just to separate it out, I have the following function to print out who picked who. def print_picks(pair_dict): for k, v in pair_dict.items(): print(f"{k} picked: {v}")
[ "\nMy issue is that it's taking a very inconsistent amount of time to\ncomplete. I can run this once, and it spits out the results instantly,\nbut then the next time or few times it will just spin and I've let it\nsit for 5 minutes and it's still not complete.\n\nThis is because you end up with nothing but \"not allowed\" possibilities so, the program can never end.\nI gave what you are trying to do a shot, and below is what I came up with. It can still hang, though. About one out of ten times the last value left from each fam results in a double.\nimport random\n\nfam = [\"name1\", \"name2\", \"name3\", \"name4\", \"name5\", \"name6\", \"name7\"]\n\n\nillegal = {\n \"name1\": \"name4\",\n \"name4\": \"name1\", \n \"name3\": \"name6\",\n \"name6\": \"name3\"}\n \n \ndef make_pairs(fam): \n #sort the illegal values to the beginning of a shuffled list to be handled first\n #and never shuffle this again\n fam1 = sorted(random.sample(fam, len(fam)), key=lambda n: n in illegal)\n \n #normal order, at first\n fam2 = fam\n \n while fam1:\n #keep shuffling fam2 til something is allowed\n while (fam1[0]==fam2[0]) or illegal.get(fam1[0])==fam2[0]:\n fam2 = random.sample(fam2, len(fam2))\n \n #yield pair\n yield fam1.pop(0), fam2.pop(0)\n\n\nfor k,v in make_pairs(fam):\n print(f'{k} picked {v}')\n\nIf you only invite an even number of people to Christmas the doubles problem goes away. You got a weird Uncle or something that you'd have more fun without?\n", "The crucial part of this algorithm is how we pick the receiver. For example, if the gifter is name1 then the receivers list should not contain name1, or name4 (because of the not_allowed_pairs). Here is my code:\nimport random\n\n\ndef print_picks(pair_dict):\n for k, v in pair_dict.items():\n print(f\"{k} picked: {v}\") \n\n\ndef remove(lst, *elements):\n \"\"\"Return a list with elements removed.\"\"\"\n reduced_list = [\n element\n for element in lst\n if element not in elements\n ]\n return reduced_list\n\n\ndef pick_names(list1, list2, not_allowed_pairs):\n # Make copy to avoid modification of the original\n list2 = list2.copy()\n\n pairs = dict()\n for gifter in list1:\n receivers = remove(list2, gifter, not_allowed_pairs.get(gifter))\n receiver = random.choice(receivers)\n pairs[gifter] = receiver\n list2.remove(receiver)\n return pairs\n\n\ndef main():\n fam1 = [\"name1\", \"name2\", \"name3\", \"name4\", \"name5\", \"name6\", \"name7\", \"name8\"]\n fam2 = [\"name1\", \"name2\", \"name3\", \"name4\", \"name5\", \"name6\", \"name7\", \"name8\"]\n not_allowed_pairs = {\n \"name1\": \"name4\",\n \"name4\": \"name1\", \n \"name3\": \"name6\", \n \"name6\": \"name3\"\n }\n\n pairs = pick_names(fam1, fam2, not_allowed_pairs)\n print_picks(pairs)\n\nif __name__ == '__main__':\n main()\n\nSample output:\nname1 picked: name7\nname2 picked: name3\nname3 picked: name2\nname4 picked: name5\nname5 picked: name4\nname6 picked: name8\nname7 picked: name1\nname8 picked: name6\n\nNow this algorithm is not perfect: There will be time when we run out of names. For example, given name1, name2, and name3: name1 picked name2, name2 picked name1, then name3 got left over. These are edge cases, and I just run the script again.\n" ]
[ 0, 0 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0074577651_dictionary_list_python.txt
Q: Highlight multiple characters in array with different colors I'm fairly new to python so I'm wondering if anyone could help with my issue. I have a program that is reading an array of numbers and replacing the 1's with green coloured circles and the 4's with yellow coloured circles. The issue I'm having is I would like for the array to print with the two changes applied but can only get it to print with one change applied, followed by the same array with the other change applied. How can I get the array to print with the two changes at once? Here's the code: import time from colorama import Fore def highlight(text): return text.replace("1", "{}{}".format(Fore.GREEN, Fore.RESET)) + text.replace("4", "{}{}".format(Fore.YELLOW, Fore.RESET)) i = 0 while i <= 10: time.sleep(3) with open(r'./serial_data/generations.txt', 'r') as file: data = file.read() text = highlight(data) print(text) i = i+1 I'm aware this is happening because of the plus I have in the return line but this is the closest I can get it to achieve the required result. Anything else I've tried only results in errors. Any help would be greatly appreciated. A: I've never used colorama, don't know anything about it, and don't even have it installed, but I'm going to show you a better way to do it, anyway. I think it's obvious this is for coloring the console text, and in that you have a big issue. The formats that this is going to add to your text is full of numbers. This means that as you go along replacing numbers, you are going to also start replacing chunks of the format that you injected. The only way around this is with regex. However it becomes more complicated because you need to replace a single character with many characters, which means as the regex is rolling along we need to build a separate string that will be your final data. Getting everything to line up without losing any data is cumbersome. This is well beyond an example. import time, re from colorama import Fore #for finding digits that are not marked as a format expr = re.compile(r'((?<!\[)\d+)') #append all your formats to this `dict` formats = { '1':f'{Fore.GREEN}{Fore.RESET}', '2':f'{Fore.RED}{Fore.RESET}', '3':f'{Fore.BLUE}{Fore.RESET}', '4':f'{Fore.YELLOW}{Fore.RESET}', } #engine def highlight(data:str) -> str: text, i = '', 0 #starter kit for m in expr.finditer(data): #iterate over entire string if fmt := formats.get(m.group(1)): #determine if this number is a token text += (data[i:m.span()[0]] + fmt) #concatenate "tween" + format value i = m.span()[1] #store last position for next iteration return text #shenanigans? for i in range(10): time.sleep(3) with open(r'./serial_data/generations.txt', 'r') as file: text = highlight(file.read()) print(text)
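If only the two original substitutions are needed, a simpler hedged fix is to chain the replacements on one string instead of concatenating two separately modified copies. The circle marker below is an assumption; the original replacement character did not survive the paste:

from colorama import Fore

def highlight(text: str) -> str:
    # both changes are applied to the same string, in sequence
    return (
        text.replace("1", f"{Fore.GREEN}●{Fore.RESET}")
            .replace("4", f"{Fore.YELLOW}●{Fore.RESET}")
    )

print(highlight("0 1 4 1 0"))

Chaining is safe for these two digits because the ANSI codes for green, yellow, and reset contain no '1' or '4'; for arbitrary digits, the regex-based approach in the answer above is the general solution.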
Highlight multiple characters in array with different colors
I'm fairly new to python so I'm wondering if anyone could help with my issue. I have a program that is reading an array of numbers and replacing the 1's with green coloured circles and the 4's with yellow coloured circles. The issue I'm having is I would like for the array to print with the two changes applied but can only get it to print with one change applied, followed by the same array with the other change applied. How can I get the array to print with the two changes at once? Here's the code: import time from colorama import Fore def highlight(text): return text.replace("1", "{}{}".format(Fore.GREEN, Fore.RESET)) + text.replace("4", "{}{}".format(Fore.YELLOW, Fore.RESET)) i = 0 while i <= 10: time.sleep(3) with open(r'./serial_data/generations.txt', 'r') as file: data = file.read() text = highlight(data) print(text) i = i+1 I'm aware this is happening because of the plus I have in the return line but this is the closest I can get it to achieve the required result. Anything else I've tried only results in errors. Any help would be greatly appreciated.
[ "I've never used colorama, don't know anything about it, and don't even have it installed, but I'm going to show you a better way to do it, anyway.\nI think it's obvious this is for coloring the console text, and in that you have a big issue. The formats that this is going to add to your text is full of numbers. This means that as you go along replacing numbers, you are going to also start replacing chunks of the format that you injected.\nThe only way around this is with regex. However it becomes more complicated because you need to replace a single character with many characters, which means as the regex is rolling along we need to build a separate string that will be your final data. Getting everything to line up without losing any data is cumbersome.\nThis is well beyond an example.\nimport time, re\nfrom colorama import Fore\n\n#for finding digits that are not marked as a format\nexpr = re.compile(r'((?<!\\[)\\d+)')\n\n#append all your formats to this `dict`\nformats = {\n '1':f'{Fore.GREEN}{Fore.RESET}',\n '2':f'{Fore.RED}{Fore.RESET}',\n '3':f'{Fore.BLUE}{Fore.RESET}',\n '4':f'{Fore.YELLOW}{Fore.RESET}',\n}\n\n#engine \ndef highlight(data:str) -> str:\n text, i = '', 0 #starter kit\n for m in expr.finditer(data): #iterate over entire string\n if fmt := formats.get(m.group(1)): #determine if this number is a token\n text += (data[i:m.span()[0]] + fmt) #concatenate \"tween\" + format value\n i = m.span()[1] #store last position for next iteration\n return text\n\n#shenanigans? \nfor i in range(10):\n time.sleep(3)\n \n with open(r'./serial_data/generations.txt', 'r') as file:\n text = highlight(file.read())\n print(text)\n\n" ]
[ 0 ]
[]
[]
[ "colorama", "colors", "highlight", "python" ]
stackoverflow_0074579473_colorama_colors_highlight_python.txt
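A follow-up sketch for the highlighting entry above: the root problem is that chaining text.replace() calls with + concatenates two differently-modified copies instead of applying both substitutions to one string. A single regex pass avoids that. This is a minimal illustration assuming colorama is installed; ● is an assumed marker glyph standing in for the coloured circle.

import re
from colorama import Fore

COLORS = {"1": Fore.GREEN, "4": Fore.YELLOW}

def highlight(text):
    # one pass: each mapped digit becomes a coloured circle, so the
    # inserted escape codes are never re-scanned by a second replace
    return re.sub(r"[14]", lambda m: f"{COLORS[m.group()]}●{Fore.RESET}", text)

print(highlight("0 1 4 1 0"))

Because the substitution happens in one pass, the inserted ANSI escape codes are never re-scanned, which also sidesteps the digit-collision problem the answer's lookbehind regex works around.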
Q: Moviepy doubles the speed of the video without affecting audio. I suspect a framerate issue but I havent been able to fix it Right now here is all I'm having moviepy do: full_video = VideoFileClip(input_video_path) full_video.write_videofile("output.mp4") quit() It just takes the video and writes it to another file with no changes. But when the input video looks like this the output ends up looking like this with the video speed doubled but the audio just the same. I could take the audio and video separately, halve the speed of the video then put them back together but is there a way I can correct for whatever problem is causing this? edit 2: It is the VideoFileClip method causing the speedup most likely, not the write_videofile method. When I try full_video = VideoFileClip(input_video_path) print( full_video.fps ) full_video.preview(fps = full_video.fps) quit() it is still double speed in the preview. edit 3: The problem only happens with videos captured with Windows game bar. I tried a different video and it worked just fine with no speedup. I'll probably just find a different way to capture the screen recordings to fix it but I dont know what the root problem was edit 1: the full code from moviepy.editor import * # get all dash times times_path = "times.txt" input_video_path = "input.mp4" offset_time = 0 clip_length = float( input("Enter clip length: ") ) def get_times(path, offset): dash_times_str = [] with open(path, "r") as file: dash_times_str = file.readlines() count = 0 # Strips the newline character # also add offset time temp = [] for line in dash_times_str: count += 1 temp.append ("{}".format(line.strip())) dash_times_str = temp dash_times = [] for time in dash_times_str: dash_times.append( float(time) + offset ) return dash_times dash_times = get_times(times_path, offset_time) def get_offset_time(): a = float(input("Enter time for first dash in video")) b = dash_times[0] return a-b offset_time = get_offset_time() full_video = VideoFileClip(input_video_path) counter = 0 times_count = len(dash_times) clips = [] for dash_time in dash_times: clip = full_video.subclip(dash_time,dash_time+clip_length) clips.append(clip) counter+=1 print("Clip " + str(counter) + " out of " + str(times_count)) final_clip = concatenate_videoclips(clips) final_clip.write_videofile("output.mp4") A: I haven't been able to go deep down in the source code to figure out why this is, but I could indeed duplicate your bug with videos recorded with the Windows game bar. I also agree with you that it seems to be tied directly to the VideoFileClip method. I got my code to work by writing it like this: full_video = VideoFileClip(input_video_path, fps_source="fps") with the key detail being the (fps_source = "fps") bit.
Moviepy doubles the speed of the video without affecting audio. I suspect a framerate issue but I havent been able to fix it
Right now here is all I'm having moviepy do: full_video = VideoFileClip(input_video_path) full_video.write_videofile("output.mp4") quit() It just takes the video and writes it to another file with no changes. But when the input video looks like this the output ends up looking like this with the video speed doubled but the audio just the same. I could take the audio and video separately, halve the speed of the video then put them back together but is there a way I can correct for whatever problem is causing this? edit 2: It is the VideoFileClip method causing the speedup most likely, not the write_videofile method. When I try full_video = VideoFileClip(input_video_path) print( full_video.fps ) full_video.preview(fps = full_video.fps) quit() it is still double speed in the preview. edit 3: The problem only happens with videos captured with Windows game bar. I tried a different video and it worked just fine with no speedup. I'll probably just find a different way to capture the screen recordings to fix it but I dont know what the root problem was edit 1: the full code from moviepy.editor import * # get all dash times times_path = "times.txt" input_video_path = "input.mp4" offset_time = 0 clip_length = float( input("Enter clip length: ") ) def get_times(path, offset): dash_times_str = [] with open(path, "r") as file: dash_times_str = file.readlines() count = 0 # Strips the newline character # also add offset time temp = [] for line in dash_times_str: count += 1 temp.append ("{}".format(line.strip())) dash_times_str = temp dash_times = [] for time in dash_times_str: dash_times.append( float(time) + offset ) return dash_times dash_times = get_times(times_path, offset_time) def get_offset_time(): a = float(input("Enter time for first dash in video")) b = dash_times[0] return a-b offset_time = get_offset_time() full_video = VideoFileClip(input_video_path) counter = 0 times_count = len(dash_times) clips = [] for dash_time in dash_times: clip = full_video.subclip(dash_time,dash_time+clip_length) clips.append(clip) counter+=1 print("Clip " + str(counter) + " out of " + str(times_count)) final_clip = concatenate_videoclips(clips) final_clip.write_videofile("output.mp4")
[ "I haven't been able to go deep down in the source code to figure out why this is, but I could indeed duplicate your bug with videos recorded with the Windows game bar.\nI also agree with you that it seems to be tied directly to the VideoFileClip method.\nI got my code to work by writing it like this:\nfull_video = VideoFileClip(input_video_path, fps_source=\"fps\")\n\nwith the key detail being the (fps_source = \"fps\") bit.\n" ]
[ 0 ]
[]
[]
[ "moviepy", "python" ]
stackoverflow_0073341202_moviepy_python.txt
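A quick check for the MoviePy entry above: the fix hinges on VideoFileClip's fps_source parameter, and comparing the two readings makes the mismatch visible. A minimal sketch, assuming an input.mp4 recorded with the Windows game bar:

from moviepy.editor import VideoFileClip

clip_tbr = VideoFileClip("input.mp4")                    # default fps_source="tbr"
clip_fps = VideoFileClip("input.mp4", fps_source="fps")  # trust the stream's fps tag

print(clip_tbr.fps, clip_fps.fps)  # a roughly 2x mismatch reproduces the speed-up

If the two printed values differ, writing output from the fps_source="fps" clip restores the correct speed.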
Q: Saving captured frames to separate folders Currently working on extracting frames from videos and have noticed that the images get overwritten. Would be nice to create a folder for each of the captured frames but I'm unsure how to do that. data_folder = r"C:\Users\jagac\Downloads\Data" sub_folder_vid = "hmdb51_org" path2videos = os.path.join(data_folder, sub_folder_vid) for path, subdirs, files in os.walk(path2videos): for name in files: vids = os.path.join(path, name) cap = cv2.VideoCapture(vids) i = 0 frame_skip = 10 frame_count = 0 while(cap.isOpened()): ret, frame = cap.read() if ret == False: break if i > frame_skip - 1: frame_count += 1 path2store= r"C:\Users\jagac\OneDrive\Documents\CSC578\final\HumanActionClassifier\images" os.makedirs(path2store, exist_ok= True) path2img = os.path.join(path2store, 'test_' + str(frame_count*frame_skip) + ".jpg") cv2.imwrite(path2img, frame) i = 0 continue i += 1 cap.release() cv2.destroyAllWindows() A: The problem is that you're not incorporating name into your path, so each video is overwriting the previous. You can either add the file name to path2img, or add name to the path2store variable before you make the directory.
Saving captured frames to separate folders
Currently working on extracting frames from videos and have noticed that the images get overwritten. Would be nice to create a folder for each of the captured frames but I'm unsure how to do that. data_folder = r"C:\Users\jagac\Downloads\Data" sub_folder_vid = "hmdb51_org" path2videos = os.path.join(data_folder, sub_folder_vid) for path, subdirs, files in os.walk(path2videos): for name in files: vids = os.path.join(path, name) cap = cv2.VideoCapture(vids) i = 0 frame_skip = 10 frame_count = 0 while(cap.isOpened()): ret, frame = cap.read() if ret == False: break if i > frame_skip - 1: frame_count += 1 path2store= r"C:\Users\jagac\OneDrive\Documents\CSC578\final\HumanActionClassifier\images" os.makedirs(path2store, exist_ok= True) path2img = os.path.join(path2store, 'test_' + str(frame_count*frame_skip) + ".jpg") cv2.imwrite(path2img, frame) i = 0 continue i += 1 cap.release() cv2.destroyAllWindows()
[ "The problem is that you're not incorporating name into your path, so each video is overwriting the previous. You can either add the file name to path2img, or add name to the path2store variable before you make the directory.\n" ]
[ 0 ]
[]
[]
[ "file", "path", "python" ]
stackoverflow_0074579529_file_path_python.txt
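The answer above describes the fix in prose; a minimal sketch of folding the video's file name into the output path could look like this (paths and naming are illustrative, and OpenCV is assumed installed):

import os
import cv2

def extract_frames(video_path, out_root, frame_skip=10):
    name = os.path.splitext(os.path.basename(video_path))[0]
    out_dir = os.path.join(out_root, name)  # one sub-folder per video
    os.makedirs(out_dir, exist_ok=True)

    cap = cv2.VideoCapture(video_path)
    i = saved = 0
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        if i % frame_skip == 0:  # keep every frame_skip-th frame
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.jpg"), frame)
            saved += 1
        i += 1
    cap.release()

Each video now writes into its own sub-folder, so frames from different clips can no longer overwrite one another.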
Q: Most recent previous business day in Python I need to subtract business days from the current date. I currently have some code which needs always to be running on the most recent business day. So that may be today if we're Monday thru Friday, but if it's Saturday or Sunday then I need to set it back to the Friday before the weekend. I currently have some pretty clunky code to do this: lastBusDay = datetime.datetime.today() if datetime.date.weekday(lastBusDay) == 5: #if it's Saturday lastBusDay = lastBusDay - datetime.timedelta(days = 1) #then make it Friday elif datetime.date.weekday(lastBusDay) == 6: #if it's Sunday lastBusDay = lastBusDay - datetime.timedelta(days = 2); #then make it Friday Is there a better way? Can I tell timedelta to work in weekdays rather than calendar days for example? A: Use pandas! import datetime # BDay is business day, not birthday... from pandas.tseries.offsets import BDay today = datetime.datetime.today() print(today - BDay(4)) Since today is Thursday, Sept 26, that will give you an output of: datetime.datetime(2013, 9, 20, 14, 8, 4, 89761) A: If you want to skip US holidays as well as weekends, this worked for me (using pandas 0.23.3): import pandas as pd from pandas.tseries.holiday import USFederalHolidayCalendar from pandas.tseries.offsets import CustomBusinessDay US_BUSINESS_DAY = CustomBusinessDay(calendar=USFederalHolidayCalendar()) july_5 = pd.datetime(2018, 7, 5) result = july_5 - 2 * US_BUSINESS_DAY # 2018-7-2 To convert to a python date object I did this: result.to_pydatetime().date() A: Maybe this code could help: lastBusDay = datetime.datetime.today() shift = datetime.timedelta(max(1,(lastBusDay.weekday() + 6) % 7 - 3)) lastBusDay = lastBusDay - shift The idea is that on Mondays you have to go back 3 days, on Sundays 2, and 1 on any other day. The statement (lastBusDay.weekday() + 6) % 7 just re-bases the Monday from 0 to 6. Really don't know if this will be better in terms of performance. A: There seem to be several options if you're open to installing extra libraries. This post describes a way of defining workdays with dateutil. http://coding.derkeiler.com/Archive/Python/comp.lang.python/2004-09/3758.html BusinessHours lets you custom-define your list of holidays, etc., to define when your working hours (and by extension working days) are. http://pypi.python.org/pypi/BusinessHours/ A: DISCLAIMER: I'm the author... I wrote a package that does exactly this, business dates calculations. You can use custom week specification and holidays. I had this exact problem while working with financial data and didn't find any of the available solutions particularly easy, so I wrote one. Hope this is useful for other people. https://pypi.python.org/pypi/business_calendar/ A: If somebody is looking for a solution respecting holidays (without any huge library like pandas), try this function: import holidays import datetime def previous_working_day(check_day_, holidays=holidays.US()): offset = max(1, (check_day_.weekday() + 6) % 7 - 3) most_recent = check_day_ - datetime.timedelta(offset) if most_recent not in holidays: return most_recent else: return previous_working_day(most_recent, holidays) check_day = datetime.date(2020, 12, 28) previous_working_day(check_day) which produces: datetime.date(2020, 12, 24) A: timeboard package does this. Suppose your date is 04 Sep 2017. In spite of being a Monday, it was a holiday in the US (the Labor Day). So, the most recent business day was Friday, Sep 1. >>> import timeboard.calendars.US as US >>> clnd = US.Weekly8x5() >>> clnd('04 Sep 2017').rollback().to_timestamp().date() datetime.date(2017, 9, 1) In UK, 04 Sep 2017 was the regular business day, so the most recent business day was itself. >>> import timeboard.calendars.UK as UK >>> clnd = UK.Weekly8x5() >>> clnd('04 Sep 2017').rollback().to_timestamp().date() datetime.date(2017, 9, 4) DISCLAIMER: I am the author of timeboard. A: For the pandas usecase, I found the following to be quite useful and compact, although not completely readable: Get most recent previous business day: In [2]: datetime.datetime(2019, 11, 30) + BDay(1) - BDay(1) # Saturday Out[2]: Timestamp('2019-11-29 00:00:00') In [3]: datetime.datetime(2019, 11, 29) + BDay(1) - BDay(1) # Friday Out[3]: Timestamp('2019-11-29 00:00:00') In the other direction, simply use: In [4]: datetime.datetime(2019, 11, 30) + BDay(0) # Saturday Out[4]: Timestamp('2019-12-02 00:00:00') In [5]: datetime.datetime(2019, 11, 29) + BDay(0) # Friday Out[5]: Timestamp('2019-11-29 00:00:00') A: This will give a generator of working days, of course without holidays; stop is a datetime.datetime object. If you need holidays, just add an additional argument with a list of holidays and check against it: def workingdays(stop, start=datetime.date.today()): while start != stop: if start.weekday() < 5: yield start start += datetime.timedelta(1) Later on you can count them like workdays = workingdays(datetime.datetime(2015, 8, 8)) len(list(workdays)) A: def getNthBusinessDay(startDate, businessDaysInBetween): currentDate = startDate daysToAdd = businessDaysInBetween while daysToAdd > 0: currentDate += relativedelta(days=1) day = currentDate.weekday() if day < 5: daysToAdd -= 1 return currentDate A: Why don't you try something like: lastBusDay = datetime.datetime.today() if datetime.date.weekday(lastBusDay) not in range(0,5): lastBusDay = 5 A: another simplified version lastBusDay = datetime.datetime.today() wk_day = datetime.date.weekday(lastBusDay) if wk_day > 4: #if it's Saturday or Sunday lastBusDay = lastBusDay - datetime.timedelta(days = wk_day-4) #then make it Friday A: Solution irrespective of different jurisdictions having different holidays: If you need to find the right id within a table, you can use this snippet. The Table model is a sqlalchemy model and the dates to search from are in the field day. def last_relevant_date(db: Session, given_date: date) -> int: available_days = (db.query(Table.id, Table.day) .order_by(desc(Table.day)) .limit(100).all()) close_dates = pd.DataFrame(available_days) close_dates['delta'] = close_dates['day'] - given_date past_dates = (close_dates .loc[close_dates['delta'] < pd.Timedelta(0, unit='d')]) table_id = int(past_dates.loc[past_dates['delta'].idxmax()]['id']) return table_id This is not a solution that I would recommend when you have to convert in bulk. It is rather generic and expensive as you are not using joins. Moreover, it assumes that you have a relevant day that is one of the 100 most recent days in the model Table. So it tackles data input that may have different dates. A: When I am writing this answer, today is Friday in the USA, so the next business day shall be Monday; in the meantime, yesterday was the Thanksgiving holiday, so the previous business day should be Wednesday. So today's date of Friday, November 25, 2022, is a perfect time to get the previous, current and next business days. By trial and error, I could only find the correct output by combining the methods as below: from datetime import datetime, timedelta from pandas.tseries.offsets import BDay from pandas.tseries.offsets import CustomBusinessDay from pandas.tseries.holiday import USFederalHolidayCalendar US_BUSINESS_DAY = CustomBusinessDay(calendar=USFederalHolidayCalendar()) TODAY = datetime.today() - 1 * US_BUSINESS_DAY YESTERDAY = (datetime.today() - timedelta(max(1,(TODAY.weekday() + 6) % 7 - 3))) - 1 * US_BUSINESS_DAY TOMORROW = TODAY + BDay(1) DAY_NAME = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday','Sunday'] BUSINESS_DATE = "[Previous (" + DAY_NAME[YESTERDAY.weekday()] + "):'" + YESTERDAY.strftime('%y%m%d') BUSINESS_DATE += "', Current (" + DAY_NAME[TODAY.weekday()] + "):'" + TODAY.strftime('%y%m%d') BUSINESS_DATE += "', Next (" + DAY_NAME[TOMORROW.weekday()] + "):'" + TOMORROW.strftime('%y%m%d') + "']" print("Business Date USA = ", BUSINESS_DATE) Output: Business Date USA = [Previous (Wednesday):'221123', Current (Friday):'221125', Next (Monday):'221128']
Most recent previous business day in Python
I need to subtract business days from the current date. I currently have some code which needs always to be running on the most recent business day. So that may be today if we're Monday thru Friday, but if it's Saturday or Sunday then I need to set it back to the Friday before the weekend. I currently have some pretty clunky code to do this: lastBusDay = datetime.datetime.today() if datetime.date.weekday(lastBusDay) == 5: #if it's Saturday lastBusDay = lastBusDay - datetime.timedelta(days = 1) #then make it Friday elif datetime.date.weekday(lastBusDay) == 6: #if it's Sunday lastBusDay = lastBusDay - datetime.timedelta(days = 2); #then make it Friday Is there a better way? Can I tell timedelta to work in weekdays rather than calendar days for example?
[ "Use pandas!\nimport datetime\n# BDay is business day, not birthday...\nfrom pandas.tseries.offsets import BDay\n\ntoday = datetime.datetime.today()\nprint(today - BDay(4))\n\nSince today is Thursday, Sept 26, that will give you an output of:\ndatetime.datetime(2013, 9, 20, 14, 8, 4, 89761)\n\n", "If you want to skip US holidays as well as weekends, this worked for me (using pandas 0.23.3):\nimport pandas as pd\nfrom pandas.tseries.holiday import USFederalHolidayCalendar\nfrom pandas.tseries.offsets import CustomBusinessDay\nUS_BUSINESS_DAY = CustomBusinessDay(calendar=USFederalHolidayCalendar())\njuly_5 = pd.datetime(2018, 7, 5)\nresult = july_5 - 2 * US_BUSINESS_DAY # 2018-7-2\n\nTo convert to a python date object I did this:\nresult.to_pydatetime().date()\n\n", "Maybe this code could help:\nlastBusDay = datetime.datetime.today()\nshift = datetime.timedelta(max(1,(lastBusDay.weekday() + 6) % 7 - 3))\nlastBusDay = lastBusDay - shift\n\nThe idea is that on Mondays yo have to go back 3 days, on Sundays 2, and 1 in any other day.\nThe statement (lastBusDay.weekday() + 6) % 7 just re-bases the Monday from 0 to 6.\nReally don't know if this will be better in terms of performance.\n", "There seem to be several options if you're open to installing extra libraries.\nThis post describes a way of defining workdays with dateutil.\nhttp://coding.derkeiler.com/Archive/Python/comp.lang.python/2004-09/3758.html\nBusinessHours lets you custom-define your list of holidays, etc., to define when your working hours (and by extension working days) are.\nhttp://pypi.python.org/pypi/BusinessHours/\n", "DISCLAMER: I'm the author...\nI wrote a package that does exactly this, business dates calculations. You can use custom week specification and holidays.\nI had this exact problem while working with financial data and didn't find any of the available solutions particularly easy, so I wrote one.\nHope this is useful for other people.\nhttps://pypi.python.org/pypi/business_calendar/\n", "If somebody is looking for solution respecting holidays (without any huge library like pandas), try this function:\nimport holidays\nimport datetime\n\n\ndef previous_working_day(check_day_, holidays=holidays.US()):\n offset = max(1, (check_day_.weekday() + 6) % 7 - 3)\n most_recent = check_day_ - datetime.timedelta(offset)\n if most_recent not in holidays:\n return most_recent\n else:\n return previous_working_day(most_recent, holidays)\n\ncheck_day = datetime.date(2020, 12, 28)\nprevious_working_day(check_day)\n\nwhich produces:\ndatetime.date(2020, 12, 24)\n\n", "timeboard package does this.\nSuppose your date is 04 Sep 2017. In spite of being a Monday, it was a holiday in the US (the Labor Day). 
So, the most recent business day was Friday, Sep 1.\n>>> import timeboard.calendars.US as US\n>>> clnd = US.Weekly8x5()\n>>> clnd('04 Sep 2017').rollback().to_timestamp().date()\ndatetime.date(2017, 9, 1)\n\nIn UK, 04 Sep 2017 was the regular business day, so the most recent business day was itself.\n>>> import timeboard.calendars.UK as UK\n>>> clnd = UK.Weekly8x5()\n>>> clnd('04 Sep 2017').rollback().to_timestamp().date()\ndatetime.date(2017, 9, 4)\n\nDISCLAIMER: I am the author of timeboard.\n", "For the pandas usecase, I found the following to be quite useful and compact, although not completely readable:\nGet most recent previous business day:\nIn [2]: datetime.datetime(2019, 11, 30) + BDay(1) - BDay(1) # Saturday\nOut[2]: Timestamp('2019-11-29 00:00:00')\n\n\nIn [3]: datetime.datetime(2019, 11, 29) + BDay(1) - BDay(1) # Friday\nOut[3]: Timestamp('2019-11-29 00:00:00')\n\nIn the other direction, simply use:\nIn [4]: datetime.datetime(2019, 11, 30) + BDay(0) # Saturday\nOut[4]: Timestamp('2019-12-02 00:00:00')\n\nIn [5]: datetime.datetime(2019, 11, 29) + BDay(0) # Friday\nOut[5]: Timestamp('2019-11-29 00:00:00')\n\n", "This will give a generator of working days, of course without holidays, stop is datetime.datetime object. If you need holidays just make additional argument with list of holidays and check with 'IFology' ;-)\ndef workingdays(stop, start=datetime.date.today()):\n while start != stop:\n if start.weekday() < 5:\n yield start\n start += datetime.timedelta(1)\n\nLater on you can count them like\nworkdays = workingdays(datetime.datetime(2015, 8, 8))\nlen(list(workdays))\n\n", " def getNthBusinessDay(startDate, businessDaysInBetween):\n currentDate = startDate\n daysToAdd = businessDaysInBetween\n while daysToAdd > 0:\n currentDate += relativedelta(days=1)\n day = currentDate.weekday()\n if day < 5:\n daysToAdd -= 1\n\n return currentDate \n\n", "Why don't you try something like:\nlastBusDay = datetime.datetime.today()\nif datetime.date.weekday(lastBusDay) not in range(0,5):\n lastBusDay = 5\n\n", "another simplify version\nlastBusDay = datetime.datetime.today()\nwk_day = datetime.date.weekday(lastBusDay)\nif wk_day > 4: #if it's Saturday or Sunday\n lastBusDay = lastBusDay - datetime.timedelta(days = wk_day-4) #then make it Friday\n\n", "Solution irrespective of different jurisdictions having different holidays:\nIf you need to find the right id within a table, you can use this snippet. The Table model is a sqlalchemy model and the dates to search from are in the field day.\ndef last_relevant_date(db: Session, given_date: date) -> int:\n available_days = (db.query(Table.id, Table.day)\n .order_by(desc(Table.day))\n .limit(100).all())\n close_dates = pd.DataFrame(available_days)\n close_dates['delta'] = close_dates['day'] - given_date\n past_dates = (close_dates\n .loc[close_dates['delta'] < pd.Timedelta(0, unit='d')])\n table_id = int(past_dates.loc[past_dates['delta'].idxmax()]['id'])\n return table_id\n\nThis is not a solution that I would recommend when you have to convert in bulk. It is rather generic and expensive as you are not using joins. Moreover, it assumes that you have a relevant day that is one of the 100 most recent days in the model Table. 
So it tackles data input that may have different dates.\n", "When I am writing this answer, today is Friday in USA so next business day shall be Monday, in the meantime yesterday is thanksgiving holiday so previous business day should be Wednesday\nSo today date of Friday, November 24, 2022, is a perfect time to get the previous, current and next business days.\nBy having trial and error, I could only find the correct output by combining the method as below:\nfrom datetime import datetime, timedelta\n\nfrom pandas.tseries.offsets import BDay\nfrom pandas.tseries.offsets import CustomBusinessDay\nfrom pandas.tseries.holiday import USFederalHolidayCalendar\n\n\nUS_BUSINESS_DAY = CustomBusinessDay(calendar=USFederalHolidayCalendar())\n\nTODAY = datetime.today() - 1 * US_BUSINESS_DAY\nYESTERDAY = (datetime.today() - timedelta(max(1,(TODAY.weekday() + 6) % 7 - 3))) - 1 * US_BUSINESS_DAY\nTOMORROW = TODAY + BDay(1)\n\nDAY_NAME = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday','Sunday']\n\nBUSINESS_DATE = \"[Previous (\" + DAY_NAME[YESTERDAY.weekday()] + \"):'\" + YESTERDAY.strftime('%y%m%d') \nBUSINESS_DATE += \"', Current (\" + DAY_NAME[TODAY.weekday()] + \"):'\" + TODAY.strftime('%y%m%d') \nBUSINESS_DATE += \"', Next (\" + DAY_NAME[TOMORROW.weekday()] + \"):'\" + TOMORROW.strftime('%y%m%d') + \"']\"\n\nprint_(\"Business Date USA = \", BUSINESS_DATE)\n\n\nOutput:\n\nBusiness Date USA = [Previous (Wednesday):'221123', Current (Friday):'221125', Next (Monday):'221128']\n\n" ]
[ 180, 21, 18, 13, 11, 8, 7, 3, 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "datetime", "python" ]
stackoverflow_0002224742_datetime_python.txt
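One more option beyond the pandas and holidays approaches collected in the entry above: NumPy ships a business-day routine, so no extra dependency is needed if NumPy is already present. A minimal sketch:

import numpy as np

def last_business_day(date):
    # roll='backward' returns the date itself when it already is a business day
    return np.busday_offset(np.datetime64(date), 0, roll='backward')

print(last_business_day('2013-09-21'))  # Saturday -> 2013-09-20 (a Friday)

np.busday_offset also accepts a holidays= list of dates to skip, which covers the jurisdiction-specific cases discussed in the answers.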
Q: Python how to avoid a repeated dictionary key So, I was trying to make a function to distribute tokens from a dictionary that works as a data base to other blank dictionaries. I use random.randint() to obtain random tokens out of the big dict and transfer them to the other dicts until they are filled to len(dict) == 7. The problem occurs when the generated random number repeats itself and the dictionary doesn't add some tokens, since it's repeating the same key. I want to know how you would avoid the random number repeating itself, or, instead of not adding the element, how to add the next (key, value). import random L_TOKENS = ["P.0|0","P.0|1","P.0|2", "P.0|3", "P.0|4", "P.0|5", "P.0|6", "P.1|1", "P.1|2" , "P.1|3", "P.1|4", "P.1|5", "P.1|6", "P.2|2", "P.2|3", "P.2|4", "P.2|5", "P.2|6", "P.3|3", "P.3|4", "P.3|5", "P.3|6", "P.4|4", "P.4|5", "P.4|6","P.5|5", "P.5|6","P.6|6" ] TOKENS = { "P.0|0": [0,0], "P.0|1": [0,1], "P.0|2": [0,2], "P.0|3": [0,3], "P.0|4": [0,4], "P.0|5": [0,5], "P.0|6": [0,6], "P.1|1": [1,1], "P.1|2": [1,2], "P.1|3": [1,3], "P.1|4": [1,4], "P.1|5": [1,5], "P.1|6": [1,6], "P.2|2":[2,2], "P.2|3":[2,3], "P.2|4":[2,4], "P.2|5":[2,5], "P.2|6":[2,6], "P.3|3":[3,3], "P.3|4":[3,4], "P.3|5":[3,5], "P.3|6":[3,6], "P.4|4":[4,4], "P.4|5":[4,5], "P.4|6":[4,6], "P.5|5":[5,5], "P.5|6":[5,6], "P.6|6":[6,6] } TOKENS_JP = {} #Player's tokens TOKENS_CPU = {} # CPU's tokens def distribute(): for j in range(7): #Number of times a player receives a token r = random.randint(0,27) for keys in TOKENS: if L_TOKENS[r] == keys: TOKENS_JP[keys] = TOKENS[keys] #The player receives a token for i in range(7): for keys in TOKENS: r = random.randint(0,27) if L_TOKENS[r] not in TOKENS_JP: #The CPU receives a token the player doesn't have. if L_TOKENS[r] == keys: TOKENS_CPU[keys] = TOKENS[keys] print(TOKENS_JP) print(len(TOKENS_JP)) print(TOKENS_CPU) print(len(TOKENS_CPU)) distribute() And I get the following result: {'P.0|3': [0, 3], 'P.3|5': [3, 5], 'P.2|5': [2, 5], 'P.2|6': [2, 6], 'P.2|2': [2, 2], 'P.3|6': [3, 6]} 6 {'P.1|3': [1, 3], 'P.0|6': [0, 6], 'P.0|0': [0, 0], 'P.0|1': [0, 1], 'P.0|5': [0, 5]} 5 But I wish for it to look like this: {'P.0|3': [0, 3], 'P.3|5': [3, 5], 'P.2|5': [2, 5], 'P.2|6': [2, 6], 'P.2|2': [2, 2], 'P.3|6': [3, 6], 'P.4|4':[4,4]} 7 {'P.1|3': [1, 3], 'P.0|6': [0, 6], 'P.0|0': [0, 0], 'P.0|1': [0, 1], 'P.0|5': [0, 5], 'P.6|6':[6,6], 'P.2|4':[2,4]} 7
Python how to avoid a repeated dictionary key
So, I was trying to make a function to distribute tokens from a dictionary that works as a data base to other blank dictionaries. I use random.randint() to obtain random tokens out of the big dict and transfer them to the other dicts until they are filled to len(dict) == 7. The problem occurs when the generated random number repeats itself and the dictionary doesn't add some tokens, since it's repeating the same key. I want to know how you would avoid the random number repeating itself, or, instead of not adding the element, how to add the next (key, value). import random L_TOKENS = ["P.0|0","P.0|1","P.0|2", "P.0|3", "P.0|4", "P.0|5", "P.0|6", "P.1|1", "P.1|2" , "P.1|3", "P.1|4", "P.1|5", "P.1|6", "P.2|2", "P.2|3", "P.2|4", "P.2|5", "P.2|6", "P.3|3", "P.3|4", "P.3|5", "P.3|6", "P.4|4", "P.4|5", "P.4|6","P.5|5", "P.5|6","P.6|6" ] TOKENS = { "P.0|0": [0,0], "P.0|1": [0,1], "P.0|2": [0,2], "P.0|3": [0,3], "P.0|4": [0,4], "P.0|5": [0,5], "P.0|6": [0,6], "P.1|1": [1,1], "P.1|2": [1,2], "P.1|3": [1,3], "P.1|4": [1,4], "P.1|5": [1,5], "P.1|6": [1,6], "P.2|2":[2,2], "P.2|3":[2,3], "P.2|4":[2,4], "P.2|5":[2,5], "P.2|6":[2,6], "P.3|3":[3,3], "P.3|4":[3,4], "P.3|5":[3,5], "P.3|6":[3,6], "P.4|4":[4,4], "P.4|5":[4,5], "P.4|6":[4,6], "P.5|5":[5,5], "P.5|6":[5,6], "P.6|6":[6,6] } TOKENS_JP = {} #Player's tokens TOKENS_CPU = {} # CPU's tokens def distribute(): for j in range(7): #Number of times a player receives a token r = random.randint(0,27) for keys in TOKENS: if L_TOKENS[r] == keys: TOKENS_JP[keys] = TOKENS[keys] #The player receives a token for i in range(7): for keys in TOKENS: r = random.randint(0,27) if L_TOKENS[r] not in TOKENS_JP: #The CPU receives a token the player doesn't have. if L_TOKENS[r] == keys: TOKENS_CPU[keys] = TOKENS[keys] print(TOKENS_JP) print(len(TOKENS_JP)) print(TOKENS_CPU) print(len(TOKENS_CPU)) distribute() And I get the following result: {'P.0|3': [0, 3], 'P.3|5': [3, 5], 'P.2|5': [2, 5], 'P.2|6': [2, 6], 'P.2|2': [2, 2], 'P.3|6': [3, 6]} 6 {'P.1|3': [1, 3], 'P.0|6': [0, 6], 'P.0|0': [0, 0], 'P.0|1': [0, 1], 'P.0|5': [0, 5]} 5 But I wish for it to look like this: {'P.0|3': [0, 3], 'P.3|5': [3, 5], 'P.2|5': [2, 5], 'P.2|6': [2, 6], 'P.2|2': [2, 2], 'P.3|6': [3, 6], 'P.4|4':[4,4]} 7 {'P.1|3': [1, 3], 'P.0|6': [0, 6], 'P.0|0': [0, 0], 'P.0|1': [0, 1], 'P.0|5': [0, 5], 'P.6|6':[6,6], 'P.2|4':[2,4]} 7
[ "To randomly select without repetition, you could instead use random.sample(), which returns n unique elements. We can take all the needed elements with this—7 tokens for the user, plus 7 tokens for the cpu.\nSince the results are random, it doesn't matter the order that they're selected, so we can take the first 7 tokens for the user and the next 7 tokens for the cpu:\nselected_tokens = random.sample(L_TOKENS, 14)\n\nplayer_tokens = selected_tokens[0:7]\ncpu_tokens = selected_tokens[7:14]\n\nThen, for selecting the keys, you can do:\nTOKENS_JP = {key: TOKENS[key] for key in player_tokens}\nTOKENS_CPU = {key: TOKENS[key] for key in cpu_tokens}\n\n" ]
[ 2 ]
[]
[]
[ "dictionary", "python" ]
stackoverflow_0074579558_dictionary_python.txt
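A variation on the sampling answer above: because the token keys already live in the TOKENS dict from the question, the parallel L_TOKENS list can be dropped entirely. A minimal sketch (deal and its parameters are illustrative names):

import random

def deal(tokens, players=2, hand_size=7):
    # sampling keys guarantees uniqueness, so no token is dealt twice
    keys = random.sample(list(tokens), players * hand_size)
    return [{k: tokens[k] for k in keys[i * hand_size:(i + 1) * hand_size]}
            for i in range(players)]

player_hand, cpu_hand = deal(TOKENS)

random.sample over the dict's keys guarantees uniqueness, so no key can ever be drawn twice.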
Q: Send direct message to new members from target telegram group I want to make a script that gets the user_id of newly joined members in a target group and sends each new member a direct message. How can I do that? I didn't know what to try. A: I think what you want is a welcome message for new members. You can try this code; correct me if I'm wrong. @bot.on(events.ChatAction) async def handler(event): if event.user_joined: await event.reply('Your Custom Message') print(event.sender.id) With this code, a new message will be sent when a member joins your group, and their user_id will be printed to your terminal.
Send direct message to new members from target telegram group
I want to make a script that gets the user_id of newly joined members in a target group and sends each new member a direct message. How can I do that? I didn't know what to try.
[ "I think its a welcome message from new member. You can try this code. Correct me if i wrong.\n@bot.on(events.ChatAction)\nasync def handler(event):\n\nif event.user_joined:\n await event.reply('Your Custom Message')\n print(event.sender.id)\n\nWith this code, will send a new message when nember joined to your group and print their user_id to your terminal.\n" ]
[ 0 ]
[]
[]
[ "py_telegram_bot_api", "python", "reply", "telegram", "telethon" ]
stackoverflow_0074509906_py_telegram_bot_api_python_reply_telegram_telethon.txt
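The reply-based snippet above posts into the group itself; to send an actual direct message to the new member, the event's user id can be passed to send_message. A minimal sketch assuming an already-authorized Telethon client; the api_id, api_hash and group id are placeholders:

from telethon import TelegramClient, events

client = TelegramClient('session', api_id=12345, api_hash='your_api_hash')  # placeholders
TARGET_GROUP = -1001234567890  # placeholder group id

@client.on(events.ChatAction(chats=TARGET_GROUP))
async def handler(event):
    if event.user_joined or event.user_added:
        await client.send_message(event.user_id, 'Welcome!')  # direct message

client.start()
client.run_until_disconnected()

Telegram only delivers DMs to users whose privacy settings allow messages from non-contacts, so the send can fail for some members.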
Q: display number on top or bottom of a candlestick chart with plotly or other charting libraries Assume my data looks like this: With open/close/high/low, I want to display a normal candlestick, but I also want to add the number in the annotation column to the top or bottom of each candlestick (like the chart below). Can anyone suggest ways to achieve this? Thank you! I have tried mplfinance, but the result doesn't look that great. A: There are several ways to annotate, but in the case of this question it is easiest to use the text mode for scatter plots. After creating the candlestick, add a scatter plot. To link the position to be annotated in the string to the candlestick, the opening and closing prices are compared, with the price corresponding to the annotation position as the value. In addition, since all text annotation positions are set to "top center," an offset position is set and calculated. This could be done by having a list of annotation positions for every line. However, since there is no clear rule regarding the annotation position, we have set it to the candlestick reference. offset = 0.004 df['txt_position'] = np.where(df['Open'] >= df['Close'], df['Open'], df['Low'] - offset) import pandas as pd import plotly.graph_objects as go fig = go.Figure() fig.add_trace( go.Candlestick( x=df['Date'], open=df['Open'], high=df['High'], low=df['Low'], close=df['Close'], name='candlestick' ) ) fig.add_trace(go.Scatter(x=df['Date'], y=df['txt_position'], mode='text', text=df['Annotation'], textposition='top center', name='annotation') ) fig.update_layout(xaxis_rangeslider_visible=False) fig.show()
display number on top or bottom of a candlestick chart with plotly or other charting libraries
Assume my data looks like this: With open/close/high/low, I want to display a normal candlestick, but I also want to add the number in the annotation column to the top or bottom of each candlestick (like the chart below). Can anyone suggest ways to achieve this? Thank you! I have tried mplfinance, but the result doesn't look that great.
[ "There are several ways to annotate, but in the case of this question it is easiest to use the text mode for scatter plots. After creating the candlestick, add a scatter plot. To link the position to be annotated in the string to the candlestick, the opening and closing prices are compared, with the price corresponding to the annotation position as the value. In addition, since all text annotation positions are set to \"top center,\" an offset position is set and calculated. This could be done by having a list of annotation positions for every line. However, since there is no clear rule regarding the annotation position, we have set it to the candlestick reference.\noffset = 0.004\ndf['txt_position'] = np.where(df['Open'] >= df['Close'], df['Open'], df['Low'] - offset)\n\nimport pandas as pd\nimport plotly.graph_objects as go\n\nfig = go.Figure()\nfig.add_trace(\n go.Candlestick(\n x=df['Date'],\n open=df['Open'],\n high=df['High'],\n low=df['Low'],\n close=df['Close'],\n name='candlestick'\n )\n)\nfig.add_trace(go.Scatter(x=df['Date'],\n y=df['txt_position'],\n mode='text',\n text=df['Annotation'],\n textposition='top center',\n name='annotation')\n )\n\nfig.update_layout(xaxis_rangeslider_visible=False)\nfig.show()\n\n\n" ]
[ 1 ]
[]
[]
[ "plotly", "python" ]
stackoverflow_0074572415_plotly_python.txt
Q: Can scrapy be used to scrape dynamic content from websites that are using AJAX? I have recently been learning Python and am dipping my hand into building a web-scraper. It's nothing fancy at all; its only purpose is to get the data off of a betting website and have this data put into Excel. Most of the issues are solvable and I'm having a good little mess around. However I'm hitting a massive hurdle over one issue. If a site loads a table of horses and lists current betting prices, this information is not in any source file. The clue is that this data is live sometimes, with the numbers being updated obviously from some remote server. The HTML on my PC simply has a hole where their servers are pushing through all the interesting data that I need. Now my experience with dynamic web content is low, so this thing is something I'm having trouble getting my head around. I think Java or Javascript is a key, this pops up often. The scraper is simply an odds comparison engine. Some sites have APIs but I need this for those that don't. I'm using the scrapy library with Python 2.7 I do apologize if this question is too open-ended. In short, my question is: how can scrapy be used to scrape this dynamic data so that I can use it? So that I can scrape this betting odds data in real-time? A: Here is a simple example of scrapy with an AJAX request. Let's see the site rubin-kazan.ru. All messages are loaded with an AJAX request. My goal is to fetch these messages with all their attributes (author, date, ...): When I analyze the source code of the page I can't see all these messages because the web page uses AJAX technology. But I can use Firebug from Mozilla Firefox (or an equivalent tool in other browsers) to analyze the HTTP request that generates the messages on the web page: It doesn't reload the whole page but only the parts of the page that contain messages. For this purpose I click an arbitrary page number at the bottom: And I observe the HTTP request that is responsible for the message body: After finishing, I analyze the headers of the request (note that I'll extract this URL from the var section of the source page; see the code below): And the form data content of the request (the HTTP method is "Post"): And the content of the response, which is a JSON file: Which presents all the information I'm looking for. From now on, I must implement all this knowledge in scrapy. Let's define the spider for this purpose: class spider(BaseSpider): name = 'RubiGuesst' start_urls = ['http://www.rubin-kazan.ru/guestbook.html'] def parse(self, response): url_list_gb_messages = re.search(r'url_list_gb_messages="(.*)"', response.body).group(1) yield FormRequest('http://www.rubin-kazan.ru' + url_list_gb_messages, callback=self.RubiGuessItem, formdata={'page': str(page + 1), 'uid': ''}) def RubiGuessItem(self, response): json_file = response.body In the parse function I have the response for the first request. In RubiGuessItem I have the JSON file with all the information. A: Webkit based browsers (like Google Chrome or Safari) have built-in developer tools. In Chrome you can open them via Menu->Tools->Developer Tools. The Network tab allows you to see all information about every request and response: At the bottom of the picture you can see that I've filtered requests down to XHR - these are requests made by javascript code. Tip: the log is cleared every time you load a page; at the bottom of the picture, the black dot button will preserve the log. After analyzing requests and responses you can simulate these requests from your web-crawler and extract valuable data. In many cases it will be easier to get your data than parsing HTML, because that data does not contain presentation logic and is formatted to be accessed by javascript code. Firefox has a similar extension, it is called firebug. Some will argue that firebug is even more powerful but I like the simplicity of webkit. A: Many times when crawling we run into problems where content that is rendered on the page is generated with Javascript and therefore scrapy is unable to crawl for it (eg. ajax requests, jQuery craziness). However, if you use Scrapy along with the web testing framework Selenium then we are able to crawl anything displayed in a normal web browser. Some things to note: You must have the Python version of Selenium RC installed for this to work, and you must have set up Selenium properly. Also this is just a template crawler. You could get much crazier and more advanced with things but I just wanted to show the basic idea. As the code stands now you will be doing two requests for any given url. One request is made by Scrapy and the other is made by Selenium. I am sure there are ways around this so that you could possibly just make Selenium do the one and only request but I did not bother to implement that and by doing two requests you get to crawl the page with Scrapy too. This is quite powerful because now you have the entire rendered DOM available for you to crawl and you can still use all the nice crawling features in Scrapy. This will make for slower crawling of course but depending on how much you need the rendered DOM it might be worth the wait. from scrapy.contrib.spiders import CrawlSpider, Rule from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor from scrapy.selector import HtmlXPathSelector from scrapy.http import Request from selenium import selenium class SeleniumSpider(CrawlSpider): name = "SeleniumSpider" start_urls = ["http://www.domain.com"] rules = ( Rule(SgmlLinkExtractor(allow=('\.html', )), callback='parse_page',follow=True), ) def __init__(self): CrawlSpider.__init__(self) self.verificationErrors = [] self.selenium = selenium("localhost", 4444, "*chrome", "http://www.domain.com") self.selenium.start() def __del__(self): self.selenium.stop() print self.verificationErrors CrawlSpider.__del__(self) def parse_page(self, response): item = Item() hxs = HtmlXPathSelector(response) #Do some XPath selection with Scrapy hxs.select('//div').extract() sel = self.selenium sel.open(response.url) #Wait for javascript to load in Selenium time.sleep(2.5) #Do some crawling of javascript created content with Selenium sel.get_text("//div") yield item # Snippet imported from snippets.scrapy.org (which no longer works) # author: wynbennett # date : Jun 21, 2011 Reference: http://snipplr.com/view/66998/ A: Another solution would be to implement a download handler or download handler middleware. (see scrapy docs for more information on downloader middleware) The following is an example class using selenium with headless phantomjs webdriver: 1) Define class within the middlewares.py script. from selenium import webdriver from scrapy.http import HtmlResponse class JsDownload(object): @check_spider_middleware def process_request(self, request, spider): driver = webdriver.PhantomJS(executable_path='D:\phantomjs.exe') driver.get(request.url) return HtmlResponse(request.url, encoding='utf-8', body=driver.page_source.encode('utf-8')) 2) Add JsDownload() class to variable DOWNLOADER_MIDDLEWARE within settings.py: DOWNLOADER_MIDDLEWARES = {'MyProj.middleware.MiddleWareModule.MiddleWareClass': 500} 3) Integrate the HTMLResponse within your_spider.py. Decoding the response body will get you the desired output. class Spider(CrawlSpider): # define unique name of spider name = "spider" start_urls = ["https://www.url.de"] def parse(self, response): # initialize items item = CrawlerItem() # store data as items item["js_enabled"] = response.body.decode("utf-8") Optional Addon: I wanted the ability to tell different spiders which middleware to use so I implemented this wrapper: def check_spider_middleware(method): @functools.wraps(method) def wrapper(self, request, spider): msg = '%%s %s middleware step' % (self.__class__.__name__,) if self.__class__ in spider.middleware: spider.log(msg % 'executing', level=log.DEBUG) return method(self, request, spider) else: spider.log(msg % 'skipping', level=log.DEBUG) return None return wrapper For the wrapper to work all spiders must have at minimum: middleware = set([]) to include a middleware: middleware = set([MyProj.middleware.ModuleName.ClassName]) Advantage: The main advantage to implementing it this way rather than in the spider is that you only end up making one request. In A T's solution for example: The download handler processes the request and then hands off the response to the spider. The spider then makes a brand new request in its parse_page function -- That's two requests for the same content. A: I was using a custom downloader middleware, but wasn't very happy with it, as I didn't manage to make the cache work with it. A better approach was to implement a custom download handler. There is a working example here. It looks like this: # encoding: utf-8 from __future__ import unicode_literals from scrapy import signals from scrapy.signalmanager import SignalManager from scrapy.responsetypes import responsetypes from scrapy.xlib.pydispatch import dispatcher from selenium import webdriver from six.moves import queue from twisted.internet import defer, threads from twisted.python.failure import Failure class PhantomJSDownloadHandler(object): def __init__(self, settings): self.options = settings.get('PHANTOMJS_OPTIONS', {}) max_run = settings.get('PHANTOMJS_MAXRUN', 10) self.sem = defer.DeferredSemaphore(max_run) self.queue = queue.LifoQueue(max_run) SignalManager(dispatcher.Any).connect(self._close, signal=signals.spider_closed) def download_request(self, request, spider): """use semaphore to guard a phantomjs pool""" return self.sem.run(self._wait_request, request, spider) def _wait_request(self, request, spider): try: driver = self.queue.get_nowait() except queue.Empty: driver = webdriver.PhantomJS(**self.options) driver.get(request.url) # ghostdriver won't response when switch window until page is loaded dfd = threads.deferToThread(lambda: driver.switch_to.window(driver.current_window_handle)) dfd.addCallback(self._response, driver, spider) return dfd def _response(self, _, driver, spider): body = driver.execute_script("return document.documentElement.innerHTML") if body.startswith("<head></head>"): # cannot access response header in Selenium body = driver.execute_script("return document.documentElement.textContent") url = driver.current_url respcls = responsetypes.from_args(url=url, body=body[:100].encode('utf8')) resp = respcls(url=url, body=body, encoding="utf-8") response_failed = getattr(spider, "response_failed", None) if response_failed and callable(response_failed) and response_failed(resp, driver): driver.close() return defer.fail(Failure()) else: self.queue.put(driver) return defer.succeed(resp) def _close(self): while not self.queue.empty(): driver = self.queue.get_nowait() driver.close() Suppose your scraper is called "scraper". If you put the mentioned code inside a file called handlers.py on the root of the "scraper" folder, then you could add to your settings.py: DOWNLOAD_HANDLERS = { 'http': 'scraper.handlers.PhantomJSDownloadHandler', 'https': 'scraper.handlers.PhantomJSDownloadHandler', } And voilà, the JS parsed DOM, with scrapy cache, retries, etc. A: how can scrapy be used to scrape this dynamic data so that I can use it? I wonder why no one has posted the solution using Scrapy only. Check out the blog post from the Scrapy team, SCRAPING INFINITE SCROLLING PAGES. The example scrapes the http://spidyquotes.herokuapp.com/scroll website, which uses infinite scrolling. The idea is to use the Developer Tools of your browser and notice the AJAX requests, then based on that information create the requests for Scrapy. import json import scrapy class SpidyQuotesSpider(scrapy.Spider): name = 'spidyquotes' quotes_base_url = 'http://spidyquotes.herokuapp.com/api/quotes?page=%s' start_urls = [quotes_base_url % 1] download_delay = 1.5 def parse(self, response): data = json.loads(response.body) for item in data.get('quotes', []): yield { 'text': item.get('text'), 'author': item.get('author', {}).get('name'), 'tags': item.get('tags'), } if data['has_next']: next_page = data['page'] + 1 yield scrapy.Request(self.quotes_base_url % next_page) A: The data is generated from an external URL via an API call that returns the HTML response to a POST request. import scrapy from scrapy.crawler import CrawlerProcess class TestSpider(scrapy.Spider): name = 'test' def start_requests(self): url = 'https://howlongtobeat.com/search_results?page=1' payload = "queryString=&t=games&sorthead=popular&sortd=0&plat=&length_type=main&length_min=&length_max=&v=&f=&g=&detail=&randomize=0" headers = { "content-type":"application/x-www-form-urlencoded", "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.54 Safari/537.36" } yield scrapy.Request(url,method='POST', body=payload,headers=headers,callback=self.parse) def parse(self, response): cards = response.css('div[class="search_list_details"]') for card in cards: game_name = card.css('a[class=text_white]::attr(title)').get() yield { "game_name":game_name } if __name__ == "__main__": process =CrawlerProcess() process.crawl(TestSpider) process.start() A: Yes, Scrapy can scrape dynamic websites, that is, websites that are rendered through JavaScript. There are two approaches to scrape these kinds of websites. You can use splash to render the Javascript code and then parse the rendered HTML; you can find the doc and project here: Scrapy splash, git. As previously stated, by monitoring the network calls you can find the API call that fetches the data, and mocking that call in your scrapy spider might help you to get the desired data. A: There are a few more modern alternatives in 2022 that I think should be mentioned, and I would like to list some pros and cons for the methods discussed in the more popular answers to this question. The top answer and several others discuss using the browser's dev tools or packet capturing software to try to identify patterns in response URLs, and try to re-construct them to use as scrapy.Requests. Pros: This is still the best option in my opinion, and when it is available it is quick and often times simpler than even the traditional approach, i.e. extracting content from the HTML using xpath and css selectors. Cons: Unfortunately this is only available on a fraction of dynamic sites, and frequently websites have security measures in place that make using this strategy difficult. Using Selenium Webdriver is the other approach mentioned a lot in previous answers. Pros: It's easy to implement and integrate into the scrapy workflow. Additionally there are a ton of examples, and it requires very little configuration if you use 3rd-party extensions like scrapy-selenium. Cons: It's slow! One of scrapy's key features is its asynchronous workflow that makes it easy to crawl dozens or even hundreds of pages in seconds. Using selenium cuts this down significantly. There are two newer methods that are definitely worth consideration, scrapy-splash and scrapy-playwright. scrapy-splash: A scrapy plugin that integrates splash, a javascript rendering service created and maintained by the developers of scrapy, into the scrapy workflow. The plugin can be installed from pypi with pip3 install scrapy-splash, while splash needs to run in its own process, and is easiest to run from a docker container. scrapy-playwright: Playwright is a browser automation tool kind of like selenium, but without the crippling decrease in speed that comes with using selenium. Playwright has no issues fitting into the asynchronous scrapy workflow, making sending requests just as quick as using scrapy alone. It is also much easier to install and integrate than selenium. The scrapy-playwright plugin is maintained by the developers of scrapy as well, and after installing via pypi with pip3 install scrapy-playwright, setup is as easy as running playwright install in the terminal. More details and many examples can be found at each of the plugins' GitHub pages https://github.com/scrapy-plugins/scrapy-playwright and https://github.com/scrapy-plugins/scrapy-splash. P.S. Both projects tend to work better in a Linux environment in my experience. For Windows users, I recommend using them with the Windows Subsystem for Linux (WSL).
Can scrapy be used to scrape dynamic content from websites that are using AJAX?
I have recently been learning Python and am dipping my hand into building a web-scraper. It's nothing fancy at all; its only purpose is to get the data off of a betting website and have this data put into Excel. Most of the issues are solvable and I'm having a good little mess around. However I'm hitting a massive hurdle over one issue. If a site loads a table of horses and lists current betting prices, this information is not in any source file. The clue is that this data is live sometimes, with the numbers being updated obviously from some remote server. The HTML on my PC simply has a hole where their servers are pushing through all the interesting data that I need. Now my experience with dynamic web content is low, so this thing is something I'm having trouble getting my head around. I think Java or Javascript is a key, this pops up often. The scraper is simply an odds comparison engine. Some sites have APIs but I need this for those that don't. I'm using the scrapy library with Python 2.7 I do apologize if this question is too open-ended. In short, my question is: how can scrapy be used to scrape this dynamic data so that I can use it? So that I can scrape this betting odds data in real-time?
[ "Here is a simple example of scrapy with an AJAX request. Let see the site rubin-kazan.ru.\nAll messages are loaded with an AJAX request. My goal is to fetch these messages with all their attributes (author, date, ...):\n\nWhen I analyze the source code of the page I can't see all these messages because the web page uses AJAX technology. But I can with Firebug from Mozilla Firefox (or an equivalent tool in other browsers) to analyze the HTTP request that generate the messages on the web page:\n\nIt doesn't reload the whole page but only the parts of the page that contain messages. For this purpose I click an arbitrary number of page on the bottom:\n\nAnd I observe the HTTP request that is responsible for message body:\n\nAfter finish, I analyze the headers of the request (I must quote that this URL I'll extract from source page from var section, see the code below):\n\nAnd the form data content of the request (the HTTP method is \"Post\"):\n\nAnd the content of response, which is a JSON file:\n\nWhich presents all the information I'm looking for.\nFrom now, I must implement all this knowledge in scrapy. Let's define the spider for this purpose:\nclass spider(BaseSpider):\n name = 'RubiGuesst'\n start_urls = ['http://www.rubin-kazan.ru/guestbook.html']\n\n def parse(self, response):\n url_list_gb_messages = re.search(r'url_list_gb_messages=\"(.*)\"', response.body).group(1)\n yield FormRequest('http://www.rubin-kazan.ru' + url_list_gb_messages, callback=self.RubiGuessItem,\n formdata={'page': str(page + 1), 'uid': ''})\n\n def RubiGuessItem(self, response):\n json_file = response.body\n\nIn parse function I have the response for first request.\nIn RubiGuessItem I have the JSON file with all information. \n", "Webkit based browsers (like Google Chrome or Safari) has built-in developer tools. In Chrome you can open it Menu->Tools->Developer Tools. The Network tab allows you to see all information about every request and response:\n\nIn the bottom of the picture you can see that I've filtered request down to XHR - these are requests made by javascript code.\nTip: log is cleared every time you load a page, at the bottom of the picture, the black dot button will preserve log.\nAfter analyzing requests and responses you can simulate these requests from your web-crawler and extract valuable data. In many cases it will be easier to get your data than parsing HTML, because that data does not contain presentation logic and is formatted to be accessed by javascript code.\nFirefox has similar extension, it is called firebug. Some will argue that firebug is even more powerful but I like the simplicity of webkit.\n", "Many times when crawling we run into problems where content that is rendered on the page is generated with Javascript and therefore scrapy is unable to crawl for it (eg. ajax requests, jQuery craziness).\nHowever, if you use Scrapy along with the web testing framework Selenium then we are able to crawl anything displayed in a normal web browser.\nSome things to note:\n\nYou must have the Python version of Selenium RC installed for this to work, and you must have set up Selenium properly. Also this is just a template crawler. You could get much crazier and more advanced with things but I just wanted to show the basic idea. As the code stands now you will be doing two requests for any given url. One request is made by Scrapy and the other is made by Selenium. 
I am sure there are ways around this so that you could possibly just make Selenium do the one and only request but I did not bother to implement that and by doing two requests you get to crawl the page with Scrapy too.\nThis is quite powerful because now you have the entire rendered DOM available for you to crawl and you can still use all the nice crawling features in Scrapy. This will make for slower crawling of course but depending on how much you need the rendered DOM it might be worth the wait.\nfrom scrapy.contrib.spiders import CrawlSpider, Rule\nfrom scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor\nfrom scrapy.selector import HtmlXPathSelector\nfrom scrapy.http import Request\n\nfrom selenium import selenium\n\nclass SeleniumSpider(CrawlSpider):\n name = \"SeleniumSpider\"\n start_urls = [\"http://www.domain.com\"]\n\n rules = (\n Rule(SgmlLinkExtractor(allow=('\\.html', )), callback='parse_page',follow=True),\n )\n\n def __init__(self):\n CrawlSpider.__init__(self)\n self.verificationErrors = []\n self.selenium = selenium(\"localhost\", 4444, \"*chrome\", \"http://www.domain.com\")\n self.selenium.start()\n\n def __del__(self):\n self.selenium.stop()\n print self.verificationErrors\n CrawlSpider.__del__(self)\n\n def parse_page(self, response):\n item = Item()\n\n hxs = HtmlXPathSelector(response)\n #Do some XPath selection with Scrapy\n hxs.select('//div').extract()\n\n sel = self.selenium\n sel.open(response.url)\n\n #Wait for javscript to load in Selenium\n time.sleep(2.5)\n\n #Do some crawling of javascript created content with Selenium\n sel.get_text(\"//div\")\n yield item\n\n# Snippet imported from snippets.scrapy.org (which no longer works)\n# author: wynbennett\n# date : Jun 21, 2011\n\n\nReference: http://snipplr.com/view/66998/\n", "Another solution would be to implement a download handler or download handler middleware. (see scrapy docs for more information on downloader middleware) The following is an example class using selenium with headless phantomjs webdriver: \n1) Define class within the middlewares.py script.\nfrom selenium import webdriver\nfrom scrapy.http import HtmlResponse\n\nclass JsDownload(object):\n\n @check_spider_middleware\n def process_request(self, request, spider):\n driver = webdriver.PhantomJS(executable_path='D:\\phantomjs.exe')\n driver.get(request.url)\n return HtmlResponse(request.url, encoding='utf-8', body=driver.page_source.encode('utf-8'))\n\n2) Add JsDownload() class to variable DOWNLOADER_MIDDLEWARE within settings.py:\nDOWNLOADER_MIDDLEWARES = {'MyProj.middleware.MiddleWareModule.MiddleWareClass': 500}\n\n3) Integrate the HTMLResponse within your_spider.py. 
Decoding the response body will get you the desired output.\nclass Spider(CrawlSpider):\n # define unique name of spider\n name = \"spider\"\n\n start_urls = [\"https://www.url.de\"] \n\n def parse(self, response):\n # initialize items\n item = CrawlerItem()\n\n # store data as items\n item[\"js_enabled\"] = response.body.decode(\"utf-8\") \n\nOptional Addon: \nI wanted the ability to tell different spiders which middleware to use so I implemented this wrapper:\ndef check_spider_middleware(method):\n@functools.wraps(method)\ndef wrapper(self, request, spider):\n msg = '%%s %s middleware step' % (self.__class__.__name__,)\n if self.__class__ in spider.middleware:\n spider.log(msg % 'executing', level=log.DEBUG)\n return method(self, request, spider)\n else:\n spider.log(msg % 'skipping', level=log.DEBUG)\n return None\n\nreturn wrapper\n\nfor wrapper to work all spiders must have at minimum:\nmiddleware = set([])\n\nto include a middleware:\nmiddleware = set([MyProj.middleware.ModuleName.ClassName])\n\nAdvantage: \nThe main advantage to implementing it this way rather than in the spider is that you only end up making one request. In A T's solution for example: The download handler processes the request and then hands off the response to the spider. The spider then makes a brand new request in it's parse_page function -- That's two requests for the same content.\n", "I was using a custom downloader middleware, but wasn't very happy with it, as I didn't manage to make the cache work with it.\nA better approach was to implement a custom download handler.\nThere is a working example here. It looks like this:\n# encoding: utf-8\nfrom __future__ import unicode_literals\n\nfrom scrapy import signals\nfrom scrapy.signalmanager import SignalManager\nfrom scrapy.responsetypes import responsetypes\nfrom scrapy.xlib.pydispatch import dispatcher\nfrom selenium import webdriver\nfrom six.moves import queue\nfrom twisted.internet import defer, threads\nfrom twisted.python.failure import Failure\n\n\nclass PhantomJSDownloadHandler(object):\n\n def __init__(self, settings):\n self.options = settings.get('PHANTOMJS_OPTIONS', {})\n\n max_run = settings.get('PHANTOMJS_MAXRUN', 10)\n self.sem = defer.DeferredSemaphore(max_run)\n self.queue = queue.LifoQueue(max_run)\n\n SignalManager(dispatcher.Any).connect(self._close, signal=signals.spider_closed)\n\n def download_request(self, request, spider):\n \"\"\"use semaphore to guard a phantomjs pool\"\"\"\n return self.sem.run(self._wait_request, request, spider)\n\n def _wait_request(self, request, spider):\n try:\n driver = self.queue.get_nowait()\n except queue.Empty:\n driver = webdriver.PhantomJS(**self.options)\n\n driver.get(request.url)\n # ghostdriver won't response when switch window until page is loaded\n dfd = threads.deferToThread(lambda: driver.switch_to.window(driver.current_window_handle))\n dfd.addCallback(self._response, driver, spider)\n return dfd\n\n def _response(self, _, driver, spider):\n body = driver.execute_script(\"return document.documentElement.innerHTML\")\n if body.startswith(\"<head></head>\"): # cannot access response header in Selenium\n body = driver.execute_script(\"return document.documentElement.textContent\")\n url = driver.current_url\n respcls = responsetypes.from_args(url=url, body=body[:100].encode('utf8'))\n resp = respcls(url=url, body=body, encoding=\"utf-8\")\n\n response_failed = getattr(spider, \"response_failed\", None)\n if response_failed and callable(response_failed) and response_failed(resp, driver):\n 
driver.close()\n return defer.fail(Failure())\n else:\n self.queue.put(driver)\n return defer.succeed(resp)\n\n def _close(self):\n while not self.queue.empty():\n driver = self.queue.get_nowait()\n driver.close()\n\nSuppose your scraper is called \"scraper\". If you put the mentioned code inside a file called handlers.py on the root of the \"scraper\" folder, then you could add to your settings.py:\nDOWNLOAD_HANDLERS = {\n 'http': 'scraper.handlers.PhantomJSDownloadHandler',\n 'https': 'scraper.handlers.PhantomJSDownloadHandler',\n}\n\nAnd voilà, the JS parsed DOM, with scrapy cache, retries, etc.\n", "\nhow can scrapy be used to scrape this dynamic data so that I can use\n it?\n\nI wonder why no one has posted the solution using Scrapy only. \nCheck out the blog post from Scrapy team SCRAPING INFINITE SCROLLING PAGES\n. The example scraps http://spidyquotes.herokuapp.com/scroll website which uses infinite scrolling. \nThe idea is to use Developer Tools of your browser and notice the AJAX requests, then based on that information create the requests for Scrapy.\nimport json\nimport scrapy\n\n\nclass SpidyQuotesSpider(scrapy.Spider):\n name = 'spidyquotes'\n quotes_base_url = 'http://spidyquotes.herokuapp.com/api/quotes?page=%s'\n start_urls = [quotes_base_url % 1]\n download_delay = 1.5\n\n def parse(self, response):\n data = json.loads(response.body)\n for item in data.get('quotes', []):\n yield {\n 'text': item.get('text'),\n 'author': item.get('author', {}).get('name'),\n 'tags': item.get('tags'),\n }\n if data['has_next']:\n next_page = data['page'] + 1\n yield scrapy.Request(self.quotes_base_url % next_page)\n\n", "Data that generated from external url which is API calls HTML response as POST method.\nimport scrapy\nfrom scrapy.crawler import CrawlerProcess\n\nclass TestSpider(scrapy.Spider):\n name = 'test' \n def start_requests(self):\n url = 'https://howlongtobeat.com/search_results?page=1'\n payload = \"queryString=&t=games&sorthead=popular&sortd=0&plat=&length_type=main&length_min=&length_max=&v=&f=&g=&detail=&randomize=0\"\n headers = {\n \"content-type\":\"application/x-www-form-urlencoded\",\n \"user-agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.54 Safari/537.36\"\n }\n\n yield scrapy.Request(url,method='POST', body=payload,headers=headers,callback=self.parse)\n\n def parse(self, response):\n cards = response.css('div[class=\"search_list_details\"]')\n\n for card in cards: \n game_name = card.css('a[class=text_white]::attr(title)').get()\n yield {\n \"game_name\":game_name\n }\n \n\nif __name__ == \"__main__\":\n process =CrawlerProcess()\n process.crawl(TestSpider)\n process.start()\n\n", "Yes, Scrapy can scrape dynamic websites, website that are rendered through JavaScript.\nThere are Two approaches to scrapy these kind of websites.\n\nyou can use splash to render Javascript code and then parse the rendered HTML.\nyou can find the doc and project here Scrapy splash, git\n\nas previously stated, by monitoring the network calls, yes, you can find the API call that fetch the data and mock that call in your scrapy spider might help you to get desired data.\n\n\n", "There are a few more modern alternatives in 2022 that I think should be mentioned, and I would like to list some pros and cons for the methods discussed in the more popular answers to this question.\n\nThe top answer and several others discuss using the browsers dev tools or packet capturing software to try to identify patterns in response url's, and try 
to re-construct them to use as scrapy.Requests.\n\nPros: This is still the best option in my opinion, and when it is available it is quick and often times simpler than even the traditional approach i.e. extracting content from the HTML using xpath and css selectors.\n\nCons: Unfortunately this is only available on a fraction of dynamic sites and frequently websites have security measures in place that make using this strategy difficult.\n\n\n\nUsing Selenium Webdriver is the other approach mentioned a lot in previous answers.\n\nPros: It's easy to implement, and integrate into the scrapy workflow. Additionally there are a ton of examples, and requires very little configuration if you use 3rd-party extensions like scrapy-selenium\n\nCons: It's slow! One of scrapy's key features is it's asynchronous workflow that makes it easy to crawl dozens or even hundreds of pages in seconds. Using selenium cuts this down significantly.\n\n\n\n\nThere are two new methods that defenitely worth consideration, scrapy-splash and scrapy-playwright.\nscrapy-splash:\n\nA scrapy plugin that integrates splash, a javascript rendering service created and maintained by the developers of scrapy, into the scrapy workflow. The plugin can be installed from pypi with pip3 install scrapy-splash, while splash needs to run in it's own process, and is easiest to run from a docker container.\n\nscrapy-playwright:\n\nPlaywright is a browser automation tool kind of like selenium, but without the crippling decrease in speed that comes with using selenium. Playwright has no issues fitting into the asynchronous scrapy workflow making sending requests just as quick as using scrapy alone. It is also much easier to install and integrate than selenium. The scrapy-playwright plugin is maintained by the developers of scrapy as well, and after installing via pypi with pip3 install scrapy-playwright is as easy as running playwright install in the terminal.\n\nMore details and many examples can be found at each of the plugin's github pages https://github.com/scrapy-plugins/scrapy-playwright and https://github.com/scrapy-plugins/scrapy-splash.\np.s. Both projects tend to work better in a linux environment in my experience. for windows users i recommend using it with The Windows Subsystem for Linux(wsl).\n" ]
[ 104, 80, 44, 36, 11, 3, 3, 2, 0 ]
[ "I handle the ajax request by using Selenium and the Firefox web driver. It is not that fast if you need the crawler as a daemon, but much better than any manual solution.\n" ]
[ -1 ]
[ "ajax", "javascript", "python", "scrapy", "screen_scraping" ]
stackoverflow_0008550114_ajax_javascript_python_scrapy_screen_scraping.txt
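To complement the scrapy-playwright discussion in the last answer above with code, here is a minimal sketch adapted from that plugin's README; the quotes.toscrape.com URL is only an illustrative JavaScript-rendered page, and this is a starting point under those assumptions rather than a verified crawler for any particular betting site.

import scrapy

class QuotesJsSpider(scrapy.Spider):
    name = "quotes_js"
    custom_settings = {
        # route requests through the playwright download handler
        "DOWNLOAD_HANDLERS": {
            "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
            "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
        },
        # scrapy-playwright requires the asyncio Twisted reactor
        "TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor",
    }

    def start_requests(self):
        # meta={"playwright": True} makes playwright render the page before parse() runs
        yield scrapy.Request("http://quotes.toscrape.com/js", meta={"playwright": True})

    def parse(self, response):
        # the response body is now the JavaScript-rendered DOM
        for quote in response.css("div.quote"):
            yield {"text": quote.css("span.text::text").get()}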
Q: Airflow SimpleHttpOperator is not pushing to xcom I have the following SimpleHttpOperator inside my DAG: extracting_user = SimpleHttpOperator( task_id='extracting_user', http_conn_id='user_api', endpoint='api/', # Some API already configured and checked method="GET", response_filter=lambda response: json.loads(response.text), log_response=True, do_xcom_push=True, ) followed by a PythonOperator: processing_user = PythonOperator( task_id='processing_user', python_callable=_processing_user ) The function: def _processing_user(ti): users = ti.xcom_pull(task_ids=['extracting_user']) if not len(users) or 'results' not in users[0]: raise ValueError(f'User is empty') **More function code** When I execute airflow tasks test myDag extracting_user 2022-03-02 followed by airflow tasks test myDag processing_user 2022-03-02 I get the ValueError, with the users variable equal to an empty array. I have tested the extracting_user task alone and it gets the desired data from the API. I have already queried the xcom table in SQLite and it is empty. I am using Airflow 2.3.0. A: I solved the problem by changing to version 2.0.0 of Airflow. It seems that the SimpleHttpOperator doesn't store the request response in the xcom table in version 2.3.0. A: The SimpleHttpOperator does return an XCom. However, the command airflow tasks test does NOT create XComs anymore, whe
Airflow SimpleHttpOperator is not pushing to xcom
I have the following SimpleHttpOperator inside my DAG: extracting_user = SimpleHttpOperator( task_id='extracting_user', http_conn_id='user_api', endpoint='api/', # Some API already configured and checked method="GET", response_filter=lambda response: json.loads(response.text), log_response=True, do_xcom_push=True, ) followed by a PythonOperator: processing_user = PythonOperator( task_id='processing_user', python_callable=_processing_user ) The function: def _processing_user(ti): users = ti.xcom_pull(task_ids=['extracting_user']) if not len(users) or 'results' not in users[0]: raise ValueError(f'User is empty') **More function code** When I execute airflow tasks test myDag extracting_user 2022-03-02 followed by airflow tasks test myDag processing_user 2022-03-02 I get the ValueError, with the users variable equal to an empty array. I have tested the extracting_user task alone and it gets the desired data from the API. I have already queried the xcom table in SQLite and it is empty. I am using Airflow 2.3.0.
[ "I solved the problem by changing to version 2.0.0 of Airflow. It seems that the SimpleHttpOperator doesn't store the request response in the xcom table in version 2.3.0.\n", "The SimpleHttpOperator does return an XCom. However, the command airflow tasks test does NOT create XComs anymore, whe\n" ]
[ 1, 0 ]
[]
[]
[ "airflow", "python" ]
stackoverflow_0072232029_airflow_python.txt
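A side note on the question's callable, independent of the Airflow version discussed above: ti.xcom_pull(task_ids=['extracting_user']) (a one-element list) returns a list of values, while passing the task id as a plain string returns the pulled value itself. A minimal sketch of the callable with that change, keeping the question's availability check otherwise intact:

def _processing_user(ti):
    # a plain string returns the pulled value directly instead of a one-element list
    users = ti.xcom_pull(task_ids='extracting_user')
    if not users or 'results' not in users:
        raise ValueError('User is empty')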
Q: python argument and position on how to pass file I need to pass the filename to an OpenCV image read; how can I do that? class show_image: def __init__(self, window): self.window = window frame = Frame(window) frame.pack(side=BOTTOM, padx=15, pady=15) def showimage(self): self.filename = filedialog.askopenfilename(initialdir=os.getcwd(), title="Select Image File", filetypes=( ("JPG File", "*.jpg"), ("PNG File", "*.png"), ("All file", "how are you .txt"))) self.img = Image.open(self.filename) self.img = ImageTk.PhotoImage(self.img) self.lbl.configure(image=self.img) self.lbl.image = self.img def backtovehiclecount(self): win = Toplevel() Thesis.VehicleCounting.Vehicle_Counting(win, self.filename) self.window.withdraw() def page(): window = Tk() show_image(window) window.mainloop() if __name__ == '__main__': page() How can I pass the file name to img = cv2.imread(self.filename)? The call Thesis.VehicleCounting.Vehicle_Counting(win, self.filename) raises a "TypeError: Vehicle_Counting.__init__() takes 2 positional arguments but 3 were given" error. class Vehicle_Counting: def __init__(self, window): self.window = window def Counter(self,filename): self.filename = filename vd = VehicleDetector() img = cv2.imread(self.filename) vehicle_boxes = vd.detect_vehicles(img) self.vehicle_count = len(vehicle_boxes) for box in vehicle_boxes: x, y, w, h = box cv2.rectangle(img, (x, y), (x + w, y + h), (25, 0, 180), 3) cv2.putText(img, "Vehicles:" + str(self.vehicle_count), (20, 50), 0, 2, (100, 200, 0)) cv2.imshow("Cars", img) cv2.waitKey(0)
python argument and position on how to pass file
I need to pass the filename to an OpenCV image read; how can I do that? class show_image: def __init__(self, window): self.window = window frame = Frame(window) frame.pack(side=BOTTOM, padx=15, pady=15) def showimage(self): self.filename = filedialog.askopenfilename(initialdir=os.getcwd(), title="Select Image File", filetypes=( ("JPG File", "*.jpg"), ("PNG File", "*.png"), ("All file", "how are you .txt"))) self.img = Image.open(self.filename) self.img = ImageTk.PhotoImage(self.img) self.lbl.configure(image=self.img) self.lbl.image = self.img def backtovehiclecount(self): win = Toplevel() Thesis.VehicleCounting.Vehicle_Counting(win, self.filename) self.window.withdraw() def page(): window = Tk() show_image(window) window.mainloop() if __name__ == '__main__': page() How can I pass the file name to img = cv2.imread(self.filename)? The call Thesis.VehicleCounting.Vehicle_Counting(win, self.filename) raises a "TypeError: Vehicle_Counting.__init__() takes 2 positional arguments but 3 were given" error. class Vehicle_Counting: def __init__(self, window): self.window = window def Counter(self,filename): self.filename = filename vd = VehicleDetector() img = cv2.imread(self.filename) vehicle_boxes = vd.detect_vehicles(img) self.vehicle_count = len(vehicle_boxes) for box in vehicle_boxes: x, y, w, h = box cv2.rectangle(img, (x, y), (x + w, y + h), (25, 0, 180), 3) cv2.putText(img, "Vehicles:" + str(self.vehicle_count), (20, 50), 0, 2, (100, 200, 0)) cv2.imshow("Cars", img) cv2.waitKey(0)
[]
[]
[ "Your Vehicle_Counting.__init__() only takes 2 parameters: the implicit self, and window.\nIn this line\nThesis.VehicleCounting.Vehicle_Counting(win, self.filename)\n\nYou're providing it with three parameters: the implicit self, win, and self.filename. Either add filename to the list of __init__ arguments, or don't pass it to the initialization.\n" ]
[ -1 ]
[ "arguments", "file", "python", "tkinter" ]
stackoverflow_0074579218_arguments_file_python_tkinter.txt
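To make the fix above concrete, here is a minimal sketch of the second option: accepting the filename in the constructor, so the caller's Vehicle_Counting(win, self.filename) call works unchanged. VehicleDetector is assumed to be the same class used in the question.

import cv2

class Vehicle_Counting:
    def __init__(self, window, filename):
        self.window = window
        self.filename = filename  # the third positional argument now has a home

    def Counter(self):
        vd = VehicleDetector()  # same detector class as in the question
        img = cv2.imread(self.filename)
        vehicle_boxes = vd.detect_vehicles(img)
        self.vehicle_count = len(vehicle_boxes)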
Q: Rename the first index of data frame in Pandas I created one new data frame by using one list and a column value, and I successfully renamed the index name, but I'm not able to rename the first column name. I tried all the possible methods that I know (I want to rename this column, named 0, to date; I tried all methods but it won't work, as you can see in the code snippet): datebucket=[] def Walmart(data,stateAbb): Walmart_df=pd.DataFrame(data) Walmart_df=Walmart_df[Walmart_df['STRSTATE']== stateAbb] date=Walmart_df.sort_values(by='date_super').groupby(['STRSTATE','date_super']) ['date_super'] test=date.first().index for i in test: datebucket.append(i[1]) cumsum=Walmart_df.groupby(['STRSTATE','date_super']).count()['storenum'] NewDf=pd.DataFrame(datebucket,cumsum) NewDf.index.names = ['cumsum'] NewDf.rename(columns = {'0':'date'}, inplace = True) NewDf.rename(columns={NewDf.columns[0]: 'new'}) NewDf.dropna() display(NewDf) Walmart(df,'TX') A: I tried to reproduce your case reading a .csv file, but I was able to rename the column. It does seem to me that you'll catch the problem here: NewDf.rename(columns = {'0':'date'}, inplace = True) A good trick to debug this, according to the documentation would be to add the argument errors="raise", so try to do this: NewDf.rename(columns = {'0':'date'}, inplace = True, errors= 'raise') It will return you some clearer error message that will be way easier for you to understand.
Rename the first index of data frame in Pandas
I created one new data frame by using one list and a column value, and I successfully renamed the index name, but I'm not able to rename the first column name. I tried all the possible methods that I know (I want to rename this column, named 0, to date; I tried all methods but it won't work, as you can see in the code snippet): datebucket=[] def Walmart(data,stateAbb): Walmart_df=pd.DataFrame(data) Walmart_df=Walmart_df[Walmart_df['STRSTATE']== stateAbb] date=Walmart_df.sort_values(by='date_super').groupby(['STRSTATE','date_super']) ['date_super'] test=date.first().index for i in test: datebucket.append(i[1]) cumsum=Walmart_df.groupby(['STRSTATE','date_super']).count()['storenum'] NewDf=pd.DataFrame(datebucket,cumsum) NewDf.index.names = ['cumsum'] NewDf.rename(columns = {'0':'date'}, inplace = True) NewDf.rename(columns={NewDf.columns[0]: 'new'}) NewDf.dropna() display(NewDf) Walmart(df,'TX')
[ "I tried to reproduce your case reading a .csv file, but I was able to rename the column.\nIt does seem to me that you'll catch the problem here:\nNewDf.rename(columns = {'0':'date'}, inplace = True)\nA good trick to debug this, according to the documentation would be to add the argument errors=\"raise\", so try to do this:\nNewDf.rename(columns = {'0':'date'}, inplace = True, errors= 'raise')\nIt will return you some clearer error message that will be way easier for you to understand.\n" ]
[ 0 ]
[]
[]
[ "dataframe", "matplotlib", "pandas", "python" ]
stackoverflow_0074579406_dataframe_matplotlib_pandas_python.txt
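One likely root cause worth spelling out: pd.DataFrame(datebucket, cumsum) labels its single column with the integer 0, not the string '0', so rename(columns={'0': 'date'}) matches nothing and silently does nothing; this is exactly the kind of mismatch that errors='raise' would surface. A small self-contained sketch of the distinction:

import pandas as pd

df = pd.DataFrame(['2021-01-01', '2021-02-01'], index=[3, 5])
print(df.columns.tolist())           # [0] -- an integer label, not the string '0'
df = df.rename(columns={0: 'date'})  # the integer key matches; '0' would not
print(df.columns.tolist())           # ['date']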
Q: Can't convert all dates to timestamp I'm trying to convert all the dates from a JSON URL into timestamps, but I don't know what I'm doing wrong; it raises an error. This is the code I'm using: from datetime import datetime from dateutil import relativedelta from dateutil import parser from datetime import datetime from dateutil.parser import isoparse import requests separator = '\n' url = requests.get("https://fortnite-api.com/v2/cosmetics/br/search/all?language=es&name=palito%20de%20pescado%20de%20gominola&searchLanguage=es") historialSKIN= url.json() for i in historialSKIN["data"]: fecha = isoparse(*i['shopHistory'], sep=separator).timestamp() print(fecha) I want to get all dates as timestamps, e.g. 1652777302. A: As esqew said, isoparse doesn't take a sep argument. You should loop over the dates and parse them individually, like so: for i in historialSKIN["data"]: for datestr in i['shopHistory']: fecha = isoparse(datestr).timestamp() print(fecha)
Can't convert all dates to timestamp in python
I'm trying to convert all the dates from a JSON URL into timestamps, but I don't know what I'm doing wrong; it raises an error. This is the code I'm using: from datetime import datetime from dateutil import relativedelta from dateutil import parser from datetime import datetime from dateutil.parser import isoparse import requests separator = '\n' url = requests.get("https://fortnite-api.com/v2/cosmetics/br/search/all?language=es&name=palito%20de%20pescado%20de%20gominola&searchLanguage=es") historialSKIN= url.json() for i in historialSKIN["data"]: fecha = isoparse(*i['shopHistory'], sep=separator).timestamp() print(fecha) I want to get all dates as timestamps, e.g. 1652777302.
[ "As esqew said, isoparse doesn't take a sep argument. You should loop over the dates and parse them individually, like so:\nfor i in historialSKIN[\"data\"]:\n for datestr in i['shopHistory']:\n fecha = isoparse(datestr).timestamp()\n print(fecha)\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074579684_python.txt
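For completeness, the same fix can also be written as a single list comprehension that collects every timestamp at once; this assumes the same historialSKIN payload as in the question.

from dateutil.parser import isoparse

# one flat list of POSIX timestamps across all items and all shop dates
timestamps = [
    isoparse(d).timestamp()
    for item in historialSKIN["data"]
    for d in item["shopHistory"]
]
print(timestamps)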
Q: Run Websocket in a separate Thread, updating class attributes I want to implement a class with the possibility to start various websockets in different threads to retrieve market data and update the class attributes. I am using the kucoin-python-sdk library to that purpose. The below works fine in spyder, however when I set my script to run via a conda batch it fails with the following errors over and over. Thank you. <Task finished name='Task-4' coro=<ConnectWebsocket._run() done,> defined at > path\lib\site-packages\kucoin\websocket\websocket.py:33>> exception=RuntimeError("can't register atexit after shutdown")> got an> exception can't register atexit after shutdown pending name='Task-3' coro=<ConnectWebsocket._recover_topic_req_msg() running> at> path\lib\site-packages\kucoin\websocket\websocket.py:127>> wait_for=> cancel ok.> _reconnect over. <Task finished name='Task-7' coro=<ConnectWebsocket._run() done, defined at>> path\lib\site-packages\kucoin\websocket\websocket. py:33>> exception=RuntimeError("can't register atexit after shutdown")> got an> exception can't register atexit after shutdown pending name='Task-6' coro=<ConnectWebsocket._recover_topic_req_msg() running> at path\lib\site-packages\kucoin\websocket\websocket.py:127>> wait_for=> cancel ok.> _reconnect over. Hence wondering: Does the issue come from the Kucoin package or is my implementation of threads/asyncio incorrect ? How to explain the different behavior between Spyder execution and conda on the same environment ? Python 3.9.13 | Spyder 5.3.3 | Spyder kernel 2.3.3 | websocket 0.2.1 | nest-asyncio 1.5.6 | kucoin-python 1.0.11 Class_X.py import asyncio import nest_asyncio nest_asyncio.apply() from kucoin.client import WsToken from kucoin.ws_client import KucoinWsClient from threading import Thread class class_X(): def __init__(self): self.msg= "" async def main(self): async def book_msg(msg): self.msg = msg client = WsToken() ws_client = await KucoinWsClient.create(None, client, book_msg, private=False) await ws_client.subscribe(f'/market/level2:BTC-USDT') while True: await asyncio.sleep(20) def launch(self): loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) loop.run_until_complete(self.main()) instance = class_X() t = Thread(target=instance.launch) t.start() Batch call path\anaconda3\Scripts\activate myENV python "path1\class_X.py" conda deactivate A: I want to say it's your implementation but I haven't tried using that client the way you're doing it. Here's a pared down skeleton of what I'm doing to implement that kucoin-python in async. 
import asyncio from kucoin.client import WsToken from kucoin.ws_client import KucoinWsClient from kucoin.client import Market from kucoin.client import User from kucoin.client import Trade async def main(): async def handle_event(msg): if '/market/snapshot:' in msg['topic']: snapshot = msg['data']['data'] ## trade logic here using snapshot data elif msg['topic'] == '/spotMarket/tradeOrders': print(msg['data']) else: print("Unhandled message type") print(msg) async def unsubscribeFromPublicSnapsot(symbol): ksm.unsubscribe('/market/snapshot:' + symbol) async def subscribeToPublicSnapshot(symbol): try: print("subscribing to " + symbol) await ksm.subscribe('/market/snapshot:' + symbol) except Exception as e: print("Error subscribing to snapshot for " + doc['currency']) print(e) pubClient = WsToken() print("creating websocket client") ksm = await KucoinWsClient.create(None, pubClient, handle_event, private=False) # for private topics pass private=True privateClient = WsToken(config["tradeKey"], config["tradeSecret"], config["tradePass"]) ksm_private = await KucoinWsClient.create(None, privateClient, handle_event, private=True) # Always subscribe to BTC-USDT await subscribeToPublicSnapshot('BTC-USDT') # Subscribe to the currency-BTC spot market for each available currency for doc in tradeable_holdings: if doc['currency'] != 'BTC': # Don't need to resubscribe :D await subscribeToPublicSnapshot(doc['currency'] + "-BTC") # Subscribe to spot market trade orders await ksm_private.subscribe('/spotMarket/tradeOrders') if __name__ == "__main__": print("Step 1: Kubot initialzied") print("Step 2: ???") print("Step 2: Profit") loopMain = asyncio.get_event_loop() loopMain.create_task(main()) loopMain.run_forever() loopMain.close() As you can probably guess, "tradeable_holdings" is a list of symbols I'm interested in that I already own. You'll also notice I'm using the snapshot instead of the market/ticker subscription. I think at 100ms updates on the ticker, it could quickly run into latency and race conditions - at least until I figure out how to deal with those. So I opted for the snapshot which only updates every 2 seconds and for the less active coins, not even that often. Anyway, I'm not to where it's looking to trade but I'm quickly getting to that logic. Hope this helps you figure your implementation out even though it's different.
Run Websocket in a separate Thread, updating class attributes
I want to implement a class with the possibility to start various websockets in different threads to retrieve market data and update the class attributes. I am using the kucoin-python-sdk library to that purpose. The below works fine in spyder, however when I set my script to run via a conda batch it fails with the following errors over and over. Thank you. <Task finished name='Task-4' coro=<ConnectWebsocket._run() done,> defined at > path\lib\site-packages\kucoin\websocket\websocket.py:33>> exception=RuntimeError("can't register atexit after shutdown")> got an> exception can't register atexit after shutdown pending name='Task-3' coro=<ConnectWebsocket._recover_topic_req_msg() running> at> path\lib\site-packages\kucoin\websocket\websocket.py:127>> wait_for=> cancel ok.> _reconnect over. <Task finished name='Task-7' coro=<ConnectWebsocket._run() done, defined at>> path\lib\site-packages\kucoin\websocket\websocket. py:33>> exception=RuntimeError("can't register atexit after shutdown")> got an> exception can't register atexit after shutdown pending name='Task-6' coro=<ConnectWebsocket._recover_topic_req_msg() running> at path\lib\site-packages\kucoin\websocket\websocket.py:127>> wait_for=> cancel ok.> _reconnect over. Hence wondering: Does the issue come from the Kucoin package or is my implementation of threads/asyncio incorrect ? How to explain the different behavior between Spyder execution and conda on the same environment ? Python 3.9.13 | Spyder 5.3.3 | Spyder kernel 2.3.3 | websocket 0.2.1 | nest-asyncio 1.5.6 | kucoin-python 1.0.11 Class_X.py import asyncio import nest_asyncio nest_asyncio.apply() from kucoin.client import WsToken from kucoin.ws_client import KucoinWsClient from threading import Thread class class_X(): def __init__(self): self.msg= "" async def main(self): async def book_msg(msg): self.msg = msg client = WsToken() ws_client = await KucoinWsClient.create(None, client, book_msg, private=False) await ws_client.subscribe(f'/market/level2:BTC-USDT') while True: await asyncio.sleep(20) def launch(self): loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) loop.run_until_complete(self.main()) instance = class_X() t = Thread(target=instance.launch) t.start() Batch call path\anaconda3\Scripts\activate myENV python "path1\class_X.py" conda deactivate
[ "I want to say it's your implementation but I haven't tried using that client the way you're doing it. Here's a pared down skeleton of what I'm doing to implement that kucoin-python in async.\nimport asyncio\nfrom kucoin.client import WsToken\nfrom kucoin.ws_client import KucoinWsClient\nfrom kucoin.client import Market\nfrom kucoin.client import User\nfrom kucoin.client import Trade\n\nasync def main():\n \n async def handle_event(msg):\n if '/market/snapshot:' in msg['topic']:\n snapshot = msg['data']['data']\n ## trade logic here using snapshot data\n \n elif msg['topic'] == '/spotMarket/tradeOrders':\n print(msg['data'])\n\n else:\n print(\"Unhandled message type\")\n print(msg)\n\n async def unsubscribeFromPublicSnapsot(symbol):\n ksm.unsubscribe('/market/snapshot:' + symbol)\n \n async def subscribeToPublicSnapshot(symbol):\n try:\n print(\"subscribing to \" + symbol)\n await ksm.subscribe('/market/snapshot:' + symbol)\n except Exception as e:\n print(\"Error subscribing to snapshot for \" + doc['currency'])\n print(e)\n\n pubClient = WsToken()\n print(\"creating websocket client\")\n ksm = await KucoinWsClient.create(None, pubClient, handle_event, private=False)\n\n # for private topics pass private=True\n privateClient = WsToken(config[\"tradeKey\"], config[\"tradeSecret\"], config[\"tradePass\"])\n ksm_private = await KucoinWsClient.create(None, privateClient, handle_event, private=True)\n # Always subscribe to BTC-USDT\n await subscribeToPublicSnapshot('BTC-USDT')\n # Subscribe to the currency-BTC spot market for each available currency\n for doc in tradeable_holdings:\n if doc['currency'] != 'BTC': # Don't need to resubscribe :D\n await subscribeToPublicSnapshot(doc['currency'] + \"-BTC\")\n\n # Subscribe to spot market trade orders\n await ksm_private.subscribe('/spotMarket/tradeOrders')\n\n\nif __name__ == \"__main__\":\n print(\"Step 1: Kubot initialzied\")\n print(\"Step 2: ???\")\n print(\"Step 2: Profit\")\n loopMain = asyncio.get_event_loop()\n loopMain.create_task(main())\n loopMain.run_forever()\n loopMain.close()\n\nAs you can probably guess, \"tradeable_holdings\" is a list of symbols I'm interested in that I already own. You'll also notice I'm using the snapshot instead of the market/ticker subscription. I think at 100ms updates on the ticker, it could quickly run into latency and race conditions - at least until I figure out how to deal with those. So I opted for the snapshot which only updates every 2 seconds and for the less active coins, not even that often.\nAnyway, I'm not to where it's looking to trade but I'm quickly getting to that logic.\nHope this helps you figure your implementation out even though it's different.\n" ]
[ 0 ]
[]
[]
[ "conda", "kucoin", "nest_asyncio", "python", "websocket" ]
stackoverflow_0074302351_conda_kucoin_nest_asyncio_python_websocket.txt
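One plausible explanation for the Spyder-versus-batch difference, offered as an assumption rather than a verified diagnosis: Spyder's IPython kernel keeps the interpreter alive after the script body finishes, whereas the batch run lets the main thread exit immediately, so interpreter shutdown can begin while the websocket thread is still scheduling work, at which point registering atexit hooks fails with exactly the error shown. A minimal way to test that theory is to keep the main thread alive:

instance = class_X()
t = Thread(target=instance.launch)
t.start()
t.join()  # block the main thread so interpreter shutdown cannot race the websocket loop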
Q: writing "dictionaries" to .csv file in a particular format after loading data allowing pickle I saved a python dictionary with the numpy np.save() function. I had to set allow_pickle=True to load it back, so I now have the dictionary in this format: values = {(0, 0, 0): {0: -1421.05, 1: -1578.94, 2: -1473.65, 3: -1471.21},(0, 0, 1): {0: -142, 1: -157, 2: -147, 3: -147},(0, 0, 2): {0: 19, 1: 15, 2: 10, 3: 12}}, I want to write the dictionary into csv in this format: 0 1 2 3 (0,0,0):-1421.05,-1578.94,-1473.65,-1471.21 (0,0,1):-142, -157, -147, -147 (0,0,2):19, 15, 10, 12 How do I fix this, please? I tried to use pandas: df = pd.DataFrame(policy) df.to_csv('saved-policy') But I keep getting the error below. A: How about we try this? As you represented the desired dataframe, I had to assume that the index would be a string and not a tuple. In case I assumed wrong, please feel free to remove the single quotes from my dictionary. import pandas as pd # Change the key to strings :) values = {'(0, 0, 0)': {0: -1421.05, 1: -1578.94, 2: -1473.65, 3: -1471.21}, '(0, 0, 1)': {0: -142, 1: -157, 2: -147, 3: -147}, '(0, 0, 2)': {0: 19, 1: 15, 2: 10, 3: 12}} # Read the dictionary in a pd.DataFrame df = pd.DataFrame(values) # Transpose the dataframe df = df.T # Write it as .csv df.to_csv('output.csv') Output: 0 1 2 3 (0, 0, 0) -1421.05 -1578.94 -1473.65 -1471.21 (0, 0, 1) -142.00 -157.00 -147.00 -147.00 (0, 0, 2) 19.00 15.00 10.00 12.00
writing "dictionaries" to .csv file in a particular format after loading data allowing pickle
I saved a python dictionary with the numpy np.save() function. I had to set allow_pickle=True to load it back, so I now have the dictionary in this format: values = {(0, 0, 0): {0: -1421.05, 1: -1578.94, 2: -1473.65, 3: -1471.21},(0, 0, 1): {0: -142, 1: -157, 2: -147, 3: -147},(0, 0, 2): {0: 19, 1: 15, 2: 10, 3: 12}}, I want to write the dictionary into csv in this format: 0 1 2 3 (0,0,0):-1421.05,-1578.94,-1473.65,-1471.21 (0,0,1):-142, -157, -147, -147 (0,0,2):19, 15, 10, 12 How do I fix this, please? I tried to use pandas: df = pd.DataFrame(policy) df.to_csv('saved-policy') But I keep getting the error below.
[ "How about we try this?\nAs you represented the desired dataframe, I had to assume that the index would be a string and not a tuple.\nIn case I assumed wrong, please feel free to remove the single quotes from my dictionary.\nimport pandas as pd\n\n# Change the key to strings :)\n\nvalues = {'(0, 0, 0)': {0: -1421.05, 1: -1578.94, 2: -1473.65, 3: -1471.21},\n'(0, 0, 1)': {0: -142, 1: -157, 2: -147, 3: -147},\n'(0, 0, 2)': {0: 19, 1: 15, 2: 10, 3: 12}}\n\n# Read the dictionary in a pd.DataFrame\n\ndf = pd.DataFrame(values)\n\n# Transpose the dataframe\n\ndf = df.T\n\n# Write it as .csv\n\ndf.to_csv('output.csv')\n\nOutput:\n 0 1 2 3\n(0, 0, 0) -1421.05 -1578.94 -1473.65 -1471.21\n(0, 0, 1) -142.00 -157.00 -147.00 -147.00\n(0, 0, 2) 19.00 15.00 10.00 12.00\n\n" ]
[ 0 ]
[]
[]
[ "csv", "dictionary", "numpy", "pandas", "python" ]
stackoverflow_0074578974_csv_dictionary_numpy_pandas_python.txt
Q: Python: fast aggregation of many observations to daily sum I have observations with start and end date of the following format: import pandas as pd data = pd.DataFrame({ 'start_date':pd.to_datetime(['2021-01-07','2021-01-04','2021-01-12','2021-01-03']), 'end_date':pd.to_datetime(['2021-01-16','2021-01-12','2021-01-13','2021-01-15']), 'value':[7,6,5,4] }) data start_date end_date value 0 2021-01-07 2021-01-16 7 1 2021-01-04 2021-01-12 6 2 2021-01-12 2021-01-13 5 3 2021-01-03 2021-01-15 4 The date ranges between observations overlap. I would like to compute the daily sum aggregated across all observations. My version with a loop (below) is slow and crashes for ~100k observations. What would be a way to speed things up? def turn_data_into_date_range(row): dates = pd.date_range(start=row.start_date, end=row.end_date) return pd.Series(data=row.value, index=dates) out = [] for index, row in data.iterrows(): out.append(turn_data_into_date_range(row)) result = pd.concat(out, axis=1).sum(axis=1) result 2021-01-03 4.0 2021-01-04 10.0 2021-01-05 10.0 2021-01-06 10.0 2021-01-07 17.0 2021-01-08 17.0 2021-01-09 17.0 2021-01-10 17.0 2021-01-11 17.0 2021-01-12 22.0 2021-01-13 16.0 2021-01-14 11.0 2021-01-15 11.0 2021-01-16 7.0 Freq: D, dtype: float64 PS: the answer to this related question doesn't work in my case, as they have non-overlapping observations and can use a left join: Convert Date Ranges to Time Series in Pandas A: I feel this problem comes back regularly as it’s not an easy thing to do. Some techniques would probably transform each row into a date range or otherwise iterate on rows. In this case there’s a smarter workaround, which is to use cumulative sums, then reindex. >>> starts = data.set_index('start_date')['value'].sort_index().cumsum() >>> starts start_date 2021-01-03 4 2021-01-04 10 2021-01-07 17 2021-01-12 22 Name: value, dtype: int64 >>> ends = data.set_index('end_date')['value'].sort_index().cumsum() >>> ends end_date 2021-01-12 6 2021-01-13 11 2021-01-15 15 2021-01-16 22 Name: value, dtype: int64 In case your dates are not unique, you could group by by date and sum first. Then the series definitions are as follows: >>> starts = data.groupby('start_date')['value'].sum().sort_index().cumsum() >>> ends = data.groupby('end_date')['value'].sum().sort_index().cumsum() Note that here we don’t need the set_index() anymore which is done by sum() as it is an aggregation, contrarily to .cumsum()which is a transform operation. 
Of course if the ends are inclusive you might need to add a .shift(): >>> dates = pd.date_range(starts.index.min(), ends.index.max()) >>> ends.reindex(dates).ffill().shift().fillna(0) 2021-01-03 0.0 2021-01-04 0.0 2021-01-05 0.0 2021-01-06 0.0 2021-01-07 0.0 2021-01-08 0.0 2021-01-09 0.0 2021-01-10 0.0 2021-01-11 0.0 2021-01-12 0.0 2021-01-13 6.0 2021-01-14 11.0 2021-01-15 11.0 2021-01-16 15.0 Freq: D, Name: value, dtype: float64 Then just subtract the (possibly shifted) ends from the starts: >>> starts.reindex(dates).ffill() - ends.reindex(dates).ffill().shift().fillna(0) 2021-01-03 4.0 2021-01-04 10.0 2021-01-05 10.0 2021-01-06 10.0 2021-01-07 17.0 2021-01-08 17.0 2021-01-09 17.0 2021-01-10 17.0 2021-01-11 17.0 2021-01-12 22.0 2021-01-13 16.0 2021-01-14 11.0 2021-01-15 11.0 2021-01-16 7.0 Freq: D, Name: value, dtype: float64 A: You can use explode to break each range into individual days: data['day'] = data.apply(lambda row: pd.date_range(row['start_date'], row['end_date']), axis=1) result = data[['day', 'value']].explode('day').groupby('day').sum() A: What would be a way to speed things up? You did for index, row in data.iterrows(): out.append(turn_data_into_date_range(row)) practical usage show that it is possible to get speed increase from using .itertuples() rather than .iterrows() see Why Pandas itertuples() Is Faster Than iterrows() and How To Make It Even Faster. I suggest reworking your code to use said .itertuples() method. I do not have ability to test it now, but I suspect your turn_data_into_date_range function might work without any changes, as said tuples support access via dot attribute. A: One option is to solve this with an inequality join, using the conditional_join from pyjanitor: # pip install pyjanitor import pandas as pd import janitor # Build a Series of the all dates: dates = data.filter(like='date') start = dates.min().min() end = dates.max().max() dates = pd.date_range(start, end, freq='D', name = 'dates') dates = pd.Series(dates) (data .conditional_join( dates, ('start_date', 'dates', '<='), ('end_date', 'dates', '>='), # depending on the data size, # numba offers more performance use_numba=False, df_columns='value') .groupby('dates') .sum() ) value dates 2021-01-03 4 2021-01-04 10 2021-01-05 10 2021-01-06 10 2021-01-07 17 2021-01-08 17 2021-01-09 17 2021-01-10 17 2021-01-11 17 2021-01-12 22 2021-01-13 16 2021-01-14 11 2021-01-15 11 2021-01-16 7
Python: fast aggregation of many observations to daily sum
I have observations with start and end date of the following format: import pandas as pd data = pd.DataFrame({ 'start_date':pd.to_datetime(['2021-01-07','2021-01-04','2021-01-12','2021-01-03']), 'end_date':pd.to_datetime(['2021-01-16','2021-01-12','2021-01-13','2021-01-15']), 'value':[7,6,5,4] }) data start_date end_date value 0 2021-01-07 2021-01-16 7 1 2021-01-04 2021-01-12 6 2 2021-01-12 2021-01-13 5 3 2021-01-03 2021-01-15 4 The date ranges between observations overlap. I would like to compute the daily sum aggregated across all observations. My version with a loop (below) is slow and crashes for ~100k observations. What would be a way to speed things up? def turn_data_into_date_range(row): dates = pd.date_range(start=row.start_date, end=row.end_date) return pd.Series(data=row.value, index=dates) out = [] for index, row in data.iterrows(): out.append(turn_data_into_date_range(row)) result = pd.concat(out, axis=1).sum(axis=1) result 2021-01-03 4.0 2021-01-04 10.0 2021-01-05 10.0 2021-01-06 10.0 2021-01-07 17.0 2021-01-08 17.0 2021-01-09 17.0 2021-01-10 17.0 2021-01-11 17.0 2021-01-12 22.0 2021-01-13 16.0 2021-01-14 11.0 2021-01-15 11.0 2021-01-16 7.0 Freq: D, dtype: float64 PS: the answer to this related question doesn't work in my case, as they have non-overlapping observations and can use a left join: Convert Date Ranges to Time Series in Pandas
[ "I feel this problem comes back regularly as it’s not an easy thing to do. Some techniques would probably transform each row into a date range or otherwise iterate on rows. In this case there’s a smarter workaround, which is to use cumulative sums, then reindex.\n>>> starts = data.set_index('start_date')['value'].sort_index().cumsum()\n>>> starts\nstart_date\n2021-01-03 4\n2021-01-04 10\n2021-01-07 17\n2021-01-12 22\nName: value, dtype: int64\n>>> ends = data.set_index('end_date')['value'].sort_index().cumsum()\n>>> ends\nend_date\n2021-01-12 6\n2021-01-13 11\n2021-01-15 15\n2021-01-16 22\nName: value, dtype: int64\n\nIn case your dates are not unique, you could group by by date and sum first. Then the series definitions are as follows:\n>>> starts = data.groupby('start_date')['value'].sum().sort_index().cumsum()\n>>> ends = data.groupby('end_date')['value'].sum().sort_index().cumsum()\n\nNote that here we don’t need the set_index() anymore which is done by sum() as it is an aggregation, contrarily to .cumsum()which is a transform operation.\nOf course if the ends are inclusive you might need to add a .shift():\n>>> dates = pd.date_range(starts.index.min(), ends.index.max())\n>>> ends.reindex(dates).ffill().shift().fillna(0)\n2021-01-03 0.0\n2021-01-04 0.0\n2021-01-05 0.0\n2021-01-06 0.0\n2021-01-07 0.0\n2021-01-08 0.0\n2021-01-09 0.0\n2021-01-10 0.0\n2021-01-11 0.0\n2021-01-12 0.0\n2021-01-13 6.0\n2021-01-14 11.0\n2021-01-15 11.0\n2021-01-16 15.0\nFreq: D, Name: value, dtype: float64\n\nThen just subtract the (possibly shifted) ends from the starts:\n>>> starts.reindex(dates).ffill() - ends.reindex(dates).ffill().shift().fillna(0)\n2021-01-03 4.0\n2021-01-04 10.0\n2021-01-05 10.0\n2021-01-06 10.0\n2021-01-07 17.0\n2021-01-08 17.0\n2021-01-09 17.0\n2021-01-10 17.0\n2021-01-11 17.0\n2021-01-12 22.0\n2021-01-13 16.0\n2021-01-14 11.0\n2021-01-15 11.0\n2021-01-16 7.0\nFreq: D, Name: value, dtype: float64\n\n", "You can use explode to break each range into individual days:\ndata['day'] = data.apply(lambda row: pd.date_range(row['start_date'], row['end_date']), axis=1)\nresult = data[['day', 'value']].explode('day').groupby('day').sum()\n\n", "What would be a way to speed things up?\nYou did\nfor index, row in data.iterrows():\n out.append(turn_data_into_date_range(row))\n\npractical usage show that it is possible to get speed increase from using .itertuples() rather than .iterrows() see Why Pandas itertuples() Is Faster Than iterrows() and How To Make It Even Faster. I suggest reworking your code to use said .itertuples() method. 
I do not have ability to test it now, but I suspect your turn_data_into_date_range function might work without any changes, as said tuples support access via dot attribute.\n", "One option is to solve this with an inequality join, using the conditional_join from pyjanitor:\n# pip install pyjanitor\nimport pandas as pd\nimport janitor\n\n# Build a Series of the all dates:\ndates = data.filter(like='date')\nstart = dates.min().min()\nend = dates.max().max()\ndates = pd.date_range(start, end, freq='D', name = 'dates')\ndates = pd.Series(dates)\n\n(data\n.conditional_join(\n dates, \n ('start_date', 'dates', '<='), \n ('end_date', 'dates', '>='),\n # depending on the data size, \n # numba offers more performance\n use_numba=False, \n df_columns='value')\n.groupby('dates')\n.sum()\n)\n\n value\ndates\n2021-01-03 4\n2021-01-04 10\n2021-01-05 10\n2021-01-06 10\n2021-01-07 17\n2021-01-08 17\n2021-01-09 17\n2021-01-10 17\n2021-01-11 17\n2021-01-12 22\n2021-01-13 16\n2021-01-14 11\n2021-01-15 11\n2021-01-16 7\n\n" ]
[ 3, 0, 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0069194678_pandas_python.txt
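Since the itertuples() suggestion above is given without code, here is a hedged sketch of the question's loop rewritten that way; the date_range construction is unchanged, and namedtuple rows still support dot access.

out = []
for row in data.itertuples(index=False):
    # itertuples yields namedtuples, so row.start_date etc. still work
    dates = pd.date_range(start=row.start_date, end=row.end_date)
    out.append(pd.Series(data=row.value, index=dates))
result = pd.concat(out, axis=1).sum(axis=1)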
Q: Python mypy: float and int are incompatible types with numbers.Real I am new to Python's static typing module mypy. I am trying to append ints and floats to an array, which I typed statically to be Real. But mypy says that they are incompatible types with Real. I thought ints and floats are subtypes of Real? from typing import List from numbers import Real data : List[Real] = [] with open(path, 'r') as file: for line in file: line = line.strip() if subject == 'time': data.append(float(line)) else: data.append(int(line)) Error message: graph.py:56: error: Argument 1 to "append" of "list" has incompatible type "float"; expected "Real" graph.py:58: error: Argument 1 to "append" of "list" has incompatible type "int"; expected "Real" A: As AChampion says you can just use float in place of numbers.Real. PEP 484 (specifically here) basically says that in the context of type-checking the hierarchy complex > float > int is respected and that numbers.* doesn't need to be used. This is a known 'issue' with mypy and was raised and discussed here. I haven't read the entire discussion but the TL;DR seems to be that PEP 484 is enough to make this a low priority issue (and it hasn't been resolved in 5 years). I will say that (as the issue discussion mentioned) it would be nice if mypy gave some indication of this as a known issue in its error message, which it currently (mypy 0.991) doesn't.
Python mypy: float and int are incompatible types with numbers.Real
I am new to Python's static typing module mypy. I am trying to append ints and floats to an array, which I typed statically to be Real. But mypy says that they are incompatible types with Real. I thought ints and floats are subtypes of Real? from typing import List from numbers import Real data : List[Real] = [] with open(path, 'r') as file: for line in file: line = line.strip() if subject == 'time': data.append(float(line)) else: data.append(int(line)) Error message: graph.py:56: error: Argument 1 to "append" of "list" has incompatible type "float"; expected "Real" graph.py:58: error: Argument 1 to "append" of "list" has incompatible type "int"; expected "Real"
[ "As AChampion says you can just use float in place of numbers.Real. PEP 484 (specifically here) basically says that in the context of type-checking the hierarchy complex > float > int is respected and that numbers.* doesn't need to be used.\nThis is a known 'issue' with mypy and was raised and discussed here. I haven't read the entire discussion but the TL;DR seems to be that PEP 484 is enough to make this a low priority issue (and it hasn't been resolved in 5 years).\nI will say that (as the issue discussion mentioned) it would be nice if mypy gave some indication of this as a known issue in its error message, which it currently (mypy 0.991) doesn't.\n" ]
[ 0 ]
[]
[]
[ "mypy", "python", "python_typing", "type_hinting" ]
stackoverflow_0063894187_mypy_python_python_typing_type_hinting.txt
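A minimal illustration of the accepted workaround: annotate with float, which PEP 484's numeric tower treats as accepting both ints and floats, so both appends below type-check cleanly.

from typing import List

data: List[float] = []
data.append(1.5)  # a float is accepted as float
data.append(2)    # an int is also accepted under PEP 484's int -> float promotion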
Q: AttributeError: module 'yfinance' has no attribute 'download' I'm trying to import yfinance and some stocks into pandas dataframe. Initially had major issues importing yfinance. I installed using pip but still had to manually put in the files to actually get rid of the no module error. This is my code so far: Now I'm getting attribute error when trying to download yfinance. import pandas as pd import datetime as dt import yfinance as yf # import fix_yahoo_finance as yf stocks = ["AMZN", "MSFT", "INTC", "GOOG", "INFY.NS", "3988.HK"] start = dt.datetime.today()- dt.timedelta(30) end = dt.datetime.today() cl_price = pd.DataFrame() for ticker in stocks: cl_price[ticker] = yf.download(ticker,start,end)["Adj Close"] and this is the error: AttributeError Traceback (most recent call last) <ipython-input-51-3347ed0c7f2b> in <module> 10 11 for ticker in stocks: ---> 12 cl_price[ticker] = yf.download(ticker,start,end)["Adj Close"] AttributeError: module 'yfinance' has no attribute 'download' I tried the suggestion from AttributeError: module 'yahoo_finance' has no attribute 'download' but its still not working Any solutions appreciated A: I just been having the same error but the following code worked after deleting a local file named yahoofinance !pip install yfinance import yfinance as yf import pandas as pd import datetime as dt A: I installed using pip but still had to manually put in the files to actually get rid of the no module error. This could be the likely cause of the error. Manually doing is highly discouraged and it always recommended to install with a package tool. If you are using an anaconda environment, consider installing via conda and remove your manually placed files. $ conda install -c ranaroussi yfinance And also make sure that you satisfy all the requirements Python >= 2.7, 3.4+, Pandas (tested to work with >=0.23.1), Numpy >= 1.11.1, requests >= 2.14.2 A: This will do what you want. import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import scipy.optimize as sco import datetime as dt import math from datetime import datetime, timedelta from pandas_datareader import data as wb from sklearn.cluster import KMeans np.random.seed(777) start = '2022-09-22' end = '2022-11-23' tickers = ['AXP','AMGN','AAPL','BA','CAT','CSCO','CVX','GS','HD','HON','IBM','INTC','JNJ','KO','JPM','MCD','MMM','MRK','MSFT','NKE','PG','TRV','UNH','CRM','VZ','V','WBA','WMT','DIS'] thelen = len(tickers) price_data = [] for ticker in tickers: try: prices = wb.DataReader(ticker, start = start, end = end, data_source='yahoo')[['Adj Close']] price_data.append(prices.assign(ticker=ticker)[['ticker', 'Adj Close']]) except: print(ticker) df = pd.concat(price_data) df.dtypes df.tail() df.shape pd.set_option('display.max_columns', 500) pd.set_option('display.max_rows', None) df = df.reset_index() df = df.set_index('Date') table = df.pivot(columns='ticker') # By specifying col[1] in below list comprehension # You can select the stock names under multi-level column table.columns = [col[1] for col in table.columns] table.head()
AttributeError: module 'yfinance' has no attribute 'download'
I'm trying to import yfinance and some stocks into pandas dataframe. Initially had major issues importing yfinance. I installed using pip but still had to manually put in the files to actually get rid of the no module error. This is my code so far: Now I'm getting attribute error when trying to download yfinance. import pandas as pd import datetime as dt import yfinance as yf # import fix_yahoo_finance as yf stocks = ["AMZN", "MSFT", "INTC", "GOOG", "INFY.NS", "3988.HK"] start = dt.datetime.today()- dt.timedelta(30) end = dt.datetime.today() cl_price = pd.DataFrame() for ticker in stocks: cl_price[ticker] = yf.download(ticker,start,end)["Adj Close"] and this is the error: AttributeError Traceback (most recent call last) <ipython-input-51-3347ed0c7f2b> in <module> 10 11 for ticker in stocks: ---> 12 cl_price[ticker] = yf.download(ticker,start,end)["Adj Close"] AttributeError: module 'yfinance' has no attribute 'download' I tried the suggestion from AttributeError: module 'yahoo_finance' has no attribute 'download' but its still not working Any solutions appreciated
[ "I just been having the same error but the following code worked after deleting a local file named yahoofinance\n!pip install yfinance\nimport yfinance as yf\nimport pandas as pd\nimport datetime as dt\n\n", "\nI installed using pip but still had to manually put in the files to actually get rid of the no module error.\n\nThis could be the likely cause of the error. Manually doing is highly discouraged and it always recommended to install with a package tool. If you are using an anaconda environment, consider installing via conda and remove your manually placed files.\n$ conda install -c ranaroussi yfinance\n\nAnd also make sure that you satisfy all the requirements\n\nPython >= 2.7, 3.4+,\n Pandas (tested to work with >=0.23.1),\n Numpy >= 1.11.1,\n requests >= 2.14.2\n\n", "This will do what you want.\nimport pandas as pd \nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport scipy.optimize as sco\nimport datetime as dt\nimport math\nfrom datetime import datetime, timedelta\nfrom pandas_datareader import data as wb\nfrom sklearn.cluster import KMeans\nnp.random.seed(777)\n\n\nstart = '2022-09-22'\nend = '2022-11-23'\n\n\ntickers = ['AXP','AMGN','AAPL','BA','CAT','CSCO','CVX','GS','HD','HON','IBM','INTC','JNJ','KO','JPM','MCD','MMM','MRK','MSFT','NKE','PG','TRV','UNH','CRM','VZ','V','WBA','WMT','DIS']\n\nthelen = len(tickers)\n\n\nprice_data = []\nfor ticker in tickers:\n try:\n prices = wb.DataReader(ticker, start = start, end = end, data_source='yahoo')[['Adj Close']]\n price_data.append(prices.assign(ticker=ticker)[['ticker', 'Adj Close']])\n except:\n print(ticker) \ndf = pd.concat(price_data)\ndf.dtypes\ndf.tail()\ndf.shape\n\npd.set_option('display.max_columns', 500)\npd.set_option('display.max_rows', None)\n\ndf = df.reset_index()\ndf = df.set_index('Date')\ntable = df.pivot(columns='ticker')\n\n# By specifying col[1] in below list comprehension\n# You can select the stock names under multi-level column\ntable.columns = [col[1] for col in table.columns]\ntable.head()\n\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python", "yfinance" ]
stackoverflow_0061509917_python_yfinance.txt
Q: Exception has occurred: TypeError unsupported operand type(s) for *: 'NoneType' and 'float' Code is correct, has an output, but gets the error message. import math class Prelim: def __init__(self): self.LA1 = int(input('Enter your grade in Lab Activity #1: ')) #100 self.LA2 = int(input('Enter your grade in Lab Activity #2: ')) #100 self.LA3 = int(input('Enter your grade in Lab Activity #3: ')) #100 def displayLAG(self): self.LAG = int(print((self.LA1 + self.LA2 + self.LA3) / 3) * 0.25) la1 = Prelim() la1.displayLAG() Output: Enter your grade in Lab Activity #1: 84 Enter your grade in Lab Activity #2: 98 Enter your grade in Lab Activity #3: 91 91.0 I tried to remove float and change it to int, and also to use none at all. A: This is just a wrong usage of the print command: print() returns None, so multiplying its return value by 0.25 raises this TypeError. Use the code below: import math class Prelim: def __init__(self): self.LA1 = int(input('Enter your grade in Lab Activity #1: ')) #100 self.LA2 = int(input('Enter your grade in Lab Activity #2: ')) #100 self.LA3 = int(input('Enter your grade in Lab Activity #3: ')) #100 def displayLAG(self): result= int((self.LA1 + self.LA2 + self.LA3) / 3) print(result) self.LAG=result*0.25 la1 = Prelim() la1.displayLAG() have fun :)
Exception has occurred: TypeError unsupported operand type(s) for *: 'NoneType' and 'float'
Code is correct, has an output, but gets the error message. import math class Prelim: def __init__(self): self.LA1 = int(input('Enter your grade in Lab Activity #1: ')) #100 self.LA2 = int(input('Enter your grade in Lab Activity #2: ')) #100 self.LA3 = int(input('Enter your grade in Lab Activity #3: ')) #100 def displayLAG(self): self.LAG = int(print((self.LA1 + self.LA2 + self.LA3) / 3) * 0.25) la1 = Prelim() la1.displayLAG() Output: Enter your grade in Lab Activity #1: 84 Enter your grade in Lab Activity #2: 98 Enter your grade in Lab Activity #3: 91 91.0 I tried to remove float and changed it to int and also none at all.
[ "just wrong usage of print command. use code below:\nimport math\nclass Prelim:\n def __init__(self):\n self.LA1 = int(input('Enter your grade in Lab Activity #1: ')) #100\n self.LA2 = int(input('Enter your grade in Lab Activity #2: ')) #100\n self.LA3 = int(input('Enter your grade in Lab Activity #3: ')) #100\n \n def displayLAG(self): \n result= int((self.LA1 + self.LA2 + self.LA3) / 3)\n print(result)\n self.LAG=result*0.25\n \nla1 = Prelim()\nla1.displayLAG()\n\nhave fun :)\n" ]
[ 0 ]
[]
[]
[ "class", "function", "python" ]
stackoverflow_0074579749_class_function_python.txt
Q: Python: Download video with YouTube and pytube - fix error (regex...)
When I try to run the code below, it fails with a regex-related error from pytube. Tried multiple solutions, all to no avail. Here is my code:
from pytube import YouTube

link = "https://www.youtube.com/watch?v=vEQ8CXFWLZU&t=475s&ab_channel=InternetMadeCoder"
yt = YouTube(link)
print(yt.title)

I have tried to reinstall pytube and I have tried downloading it from GitHub, but the problem still occurs.

A: I tried the following code:
from pytube import YouTube

link = "https://www.youtube.com/watch?v=vEQ8CXFWLZU&t=475s&ab_channel=InternetMadeCoder"
yt = YouTube(link)
print(yt.title)

OUTPUT:
3 PYTHON AUTOMATION PROJECTS FOR BEGINNERS

Perhaps try creating a new Python virtual environment, as you may be experiencing a conflict with another package? You can find a tutorial on how to create a virtual environment here.
I am also using Python 3 (version 3.10.8) and pytube-12.1.0:
>>> sys.version
'3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)]'

A: I ran this on my IDE:
from pytube import YouTube

video = 'https://www.youtube.com/watch?v=4-43lLKaqBQ'
yt = YouTube(video)
print(yt.title)

Response:
The Animals - House of the Rising Sun (1964) HQ/Widescreen ♫ 58 YEARS AGO

Any chance that you can show us your Python version? :)

A: Working correctly with pytube 12.1.0 & Python 3.10.8. Check your version of pytube.
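Regex errors in pytube usually mean the installed version can no longer parse YouTube's current page layout, so the first thing to try is an upgrade (python -m pip install --upgrade pytube). A sketch of a check after upgrading; the final download call goes beyond what the question asked and assumes the video is public:

from pytube import YouTube

link = "https://www.youtube.com/watch?v=vEQ8CXFWLZU"
yt = YouTube(link)
print(yt.title)
# optionally save the best progressive stream to the current directory
yt.streams.get_highest_resolution().download()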
Python: Download video with Youtube and pytube - fix error (regex...)
When I try to run the code below, it gives me the response: "Python: Download video with Youtube and pytube - fix error (regex...)" Tried multiple solutions, all to no avail. Here is my code: link = "https://www.youtube.com/watch?v=vEQ8CXFWLZU&t=475s&ab_channel=InternetMadeCoder" yt = YouTube(link) print(yt.title) I have tried to reinstall pytube and I have tried downloading it from github but the problem still occurs.
[ "I tried the following code:\nfrom pytube import YouTube\n\nlink = \"https://www.youtube.com/watch?v=vEQ8CXFWLZU&t=475s&ab_channel=InternetMadeCoder\"\nyt = YouTube(link)\nprint(yt.title)\n\n\nOUTPUT:\n3 PYTHON AUTOMATION PROJECTS FOR BEGINNERS\n\nPerhaps try creating a new python virtual environment - as you may be experiencing a conflict with another package? You can find a tutorial on how to create a virtual environment here.\n\nI am also using Python 3 (version 3.10.8) and pytube-12.1.0:\n>>> sys.version\n'3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)]'\n\n", "I ran this on my IDE:\nfrom pytube import YouTube\n\nvideo = 'https://www.youtube.com/watch?v=4-43lLKaqBQ'\n\nyt = YouTube(video)\n\nprint(yt.title)\n\nResponse:\nThe Animals - House of the Rising Sun (1964) HQ/Widescreen ♫ 58 YEARS AGO\n\nAny chance that you can show us your python version? :)\n", "Working correctly with pytube 12.1.0 & Python 3.10.8. Check version of pytube.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "python", "pytube" ]
stackoverflow_0074579512_python_pytube.txt
Q: adjacency matrix map manipulation
I'm making a simple text-based adventure game. My code uses an adjacency matrix as a map. I would like to navigate the map by direction, e.g. (N, E, S, W); my current attempt can only navigate via the name of the location.

Current output:
You are currently in Foyer.
From this location, you could go to any of the following:
Living Room
Where would you like to go? Living Room

You are currently in Living Room.
From this location, you could go to any of the following:
Foyer
Bedroom 2
Powder Room
Kitchen
Where would you like to go?

I would like an output like:
You are currently in Foyer.
From this location, you could go to any of the following:
South
Where would you like to go? South

You are currently in Living Room.
From this location, you could go to any of the following:
North
West
South
East
Where would you like to go?

import pygame

Inventory = []
names = ["Foyer", "Living Room", "Bedroom 2", "Full Bath", "Bedroom 3", "Powder Room", "Dinning", "Home Office", "Kitchen", "Walkin Closet", "Hallway", "Bedroom1", "Sitting Room", "Balcony", "Storage", "Garage", "Supply Closet", "Utility Closet", "Front Yard", "Sidewalk"]
graph = [[0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
         [1,0,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0],
         [0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
         [0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
         [0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
         [0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
         [0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,1,0,0,0,0],
         [0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0],
         [0,1,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0],
         [0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0],
         [0,0,0,0,0,0,1,0,1,0,0,0,1,0,0,0,0,0,0,0],
         [0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0],
         [0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0],
         [0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0],
         [0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0],
         [0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,1,1,0],
         [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0],
         [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0],
         [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1],
         [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0]]
directions = ["North", "East", "South", "West"]

curr_location = "Foyer"
while True:
    print("You are currently in ", curr_location, ".", sep='')
    print()
    exits = []
    print("From this location, you could go to any of the following:")
    indx_location = names.index(curr_location)
    for each in range(len(graph[indx_location])):
        if graph[indx_location][each] == 1:
            print("\t", names[each])
            exits.append(names[each])
    print()
    next_location = input("Where would you like to go? ")
    if not (next_location in exits):
        print()
        print("You cannot go this way.")
    else:
        curr_location = next_location
    print()

A: Maybe add North, South, East, West to exits conditionally if it's possible to move in that direction?
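One way to realize the answer's suggestion: the adjacency matrix only records which rooms connect, so a compass label per connection has to come from somewhere. Here a hypothetical direction_of lookup keyed by (room, neighbor) is assumed, reusing names and graph from the question:

# hypothetical labels for a few edges; a real map needs one entry per connection
direction_of = {
    ("Foyer", "Living Room"): "South",
    ("Living Room", "Foyer"): "North",
    ("Living Room", "Bedroom 2"): "East",
    ("Bedroom 2", "Living Room"): "West",
}

def exits_by_direction(room):
    i = names.index(room)
    exits = {}
    for j, connected in enumerate(graph[i]):
        if connected and (room, names[j]) in direction_of:
            exits[direction_of[(room, names[j])]] = names[j]
    return exits

# in the game loop: print exits_by_direction(curr_location).keys(), then
# curr_location = exits_by_direction(curr_location)[choice]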
adjacency matrix map manipulation
0 I'm making a simple text based adventure game. My code uses an adjacency matrix as a map. I would like to navigate the map by direction ex(N,E,S,W) my current attempt can only navigate via the name of the location current output You are currently in Foyer. From this location, you could go to any of the following: Living Room Where would you like to go? Living Room You are currently in Living Room. From this location, you could go to any of the following: Foyer Bedroom 2 Powder Room Kitchen Where would you like to go? -__________________________________________________ I would like an output like You are currently in Foyer. From this location, you could go to any of the following: South Where would you like to go? South You are currently in Living Room. From this location, you could go to any of the following: North West South East Where would you like to go? import pygame Inventory = [] names = ["Foyer", "Living Room", "Bedroom 2", "Full Bath", "Bedroom 3","Powder Room","Dinning", "Home Office", "Kitchen","Walkin Closet", "Hallway", "Bedroom1", "Sitting Room","Balcony", "Storage", "Garage", "Supply Closet", "Utility Closet", "Front Yard", "Sidewalk"] graph = [[0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [1,0,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0], [0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,1,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0],[0,1,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,1,0,1,0,0,0,1,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,1,1,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0]] directions =["North", "East", "South", "West"] curr_location = "Foyer" while True: print("You are currently in ", curr_location, ".", sep = '') print() exits = [] print("From this location, you could go to any of the following:") indx_location = names.index(curr_location) for each in range(len(graph[indx_location])): if graph[indx_location][each] == 1: print("\t", names[each]) exits.append(names[each]) print() next_location = input("Where would you like to go? ") if not (next_location in exits): print() print("You cannot go this way.") else: curr_location = next_location print()
[ "Maybe add North, South, East, West to exits conditionally if it's possible to move in that direction?\n" ]
[ 0 ]
[]
[]
[ "arrays", "matrix", "python" ]
stackoverflow_0074579808_arrays_matrix_python.txt
Q: I want to get the first string before comma on a csv file but also get the string for rows that have no commas (only one tag)
(screenshot of the original CSV omitted)
I want to make the genre column keep only the first tag. When I use
dataframe['genre'] = dataframe['genre'].str.extract('^(.+?),')
it gets the string before the first comma, but it also blanks out the rows without commas, because that pattern requires a trailing comma and extract returns NaN when there is no match.
(screenshot of the result omitted)
How can I make it keep the ones without commas as well?

A: Use a different regex:
dataframe['genre'] = dataframe['genre'].str.extract('^([^,]+)')

Regex:
^        # match start of line
([^,]+)  # capture everything but comma

A: Close, but it's easier to split the strings than develop a regex in this case, because it's so simple. You can do this instead:
dataframe['genre'] = dataframe['genre'].str.split(',').str[0]
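A quick check of both answers on made-up data (the genre values here are illustrative):

import pandas as pd

df = pd.DataFrame({"genre": ["rock,pop", "jazz", "folk,indie,lo-fi"]})
print(df["genre"].str.extract(r"^([^,]+)"))  # keeps single-tag rows intact
print(df["genre"].str.split(",").str[0])     # same result via split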
I want to get the first string before comma on a csv file but also get the string for rows that have no commas (only one tag)
This is my original CSV file enter image description here I want to make the genre column only the first tag. when I use dataframe['genre'] = dataframe['genre'].str.extract('^(.+?),') it gets the string before the first comma but it also gets rid of columns without commas enter image description here how can I make it keep the ones without commas as well?
[ "Use a different regex:\ndataframe['genre'] = dataframe['genre'].str.extract('^([^,]+)')\n\nRegex:\n^ # match start of line\n([^,]+) # capture everything but comma\n\n", "Close, but it's easier to split the strings than develop a regex in this case, because it's so simple. You can do this instead.\ndataframe['genre'] = dataframe['genre'].str.split(',').str[0]\n\n" ]
[ 2, 1 ]
[]
[]
[ "csv", "pandas", "python", "regex" ]
stackoverflow_0074579824_csv_pandas_python_regex.txt
Q: Enter the number of students – Enter the name, age, height of each student. Print list in sorted order Name(Abc)>Age(small-large)>Height (small-large)
This is my code; I don't know what I'm doing wrong. The error is TypeError: 'Students' object is not subscriptable, and I don't know how to fix it. I have tried many ways.
import operator
from operator import itemgetter, attrgetter

class Students:
    def __init__(self, name, age, height):
        self.name = name
        self.age = age
        self.height = height

def input_student_info():
    name = input("Enter student name: ")
    age = input("Enter student age: ")
    height = input("Enter student height: ")
    student = Students(name, age, height)
    return student

def print_student_info(student):
    print("Student name: " + student.name)
    print("Student age: " + student.age)
    print("Student height: " + student.height)

def input_students_info():
    std = []
    total_students = int(input("How many student: "))
    for i in range(total_students):
        print("Student " + str(i+1) + ": ")
        stu = input_student_info()
        std.append(stu)
    return std

def print_students_info(std):
    for i in range(len(std)):
        print("Student " + str(i+1) + ": ")
        print_student_info(std[i])

def sort_list(std):
    std.sort(key=operator.itemgetter(0))

def main():
    students = input_students_info()
    input("Press Enter to continue")
    students_new = sort_list(students)
    print_students_info(students_new)

main()

Please fix what I am doing wrong and explain it.

A: In sort_list, you are using operator.itemgetter(0). If you have a Students object like so:
s = Students(...)
then operator.itemgetter(0) is performing something analogous to:
s[0]
There is no value in a Students object that can be accessed with square brackets, which is why you're getting the error 'Students' object is not subscriptable.
Instead, you should be sorting with the attrgetter() method, which accesses a named attribute of the objects:
std.sort(key=operator.attrgetter("age"))
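For the full ordering the question asks for (name alphabetically, then age, then height ascending), a sketch; age and height come from input() as strings, so they are cast to numbers here to avoid lexicographic ordering, an assumption about the intended comparison. Note also that sort_list must return something: list.sort() returns None, which is why students_new ends up empty in the original main().

def sort_list(std):
    # sort by name, then numeric age, then numeric height
    return sorted(std, key=lambda s: (s.name, int(s.age), float(s.height)))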
Enter the number of students – Enter the name, age, height of each student. Print list in sorted order Name(Abc)>Age(small-large)>Height (small-large)
This is my code i don't know what i'm doing wrong. TypeError: 'Students' object is not subscriptable this is the error of this code and i don't know how to fix it i try many ways import operator from operator import itemgetter, attrgetter class Students: def __init__(self,name,age,height): self.name = name self.age = age self.height = height def input_student_info(): name = input("Enter student name: " ) age = input("Enter student age: " ) height = input("Enter student height: " ) student = Students(name,age,height) return student def print_student_info(student): print("Student name: " + student.name) print("Student age: " + student.age) print("Student height: " + student.height) def input_students_info(): std = [] total_students = int(input("How many student: ")) for i in range(total_students): print("Student " + str(i+1) + ": " ) stu = input_student_info() std.append(stu) return std def print_students_info(std): for i in range(len(std)): print("Student " + str(i+1) + ": " ) print_student_info(std[i]) def sort_list(std): std.sort(key=operator.itemgetter(0)) def main(): students = input_students_info() input("Press Enter to continue") students_new = sort_list(students) print_students_info(students_new) main() please fix the thing i doing wrong and explain what i'm doing wrong
[ "In sort_list, you are using operator.itemgetter(0).\nIf you have a Students object like so:\ns = Students(...)\n\nThen operator.itemgetter(0) is performing something analogous to:\ns[0]\n\nThere is no value in a Students object that can be accessed with square brackets, which is why you're getting the error 'Students' object is not subscriptable.\nInstead, you should be sorting with the attrgetter() method, which accesses a named attribute of the objects.\nstu.sort(key=operator.attrgetter(\"age\"))\n\n" ]
[ 0 ]
[]
[]
[ "python", "sorting" ]
stackoverflow_0074579775_python_sorting.txt
Q: Stacking a lot of SVG images as layers issue
I would like to stack a lot of SVG images on top of each other in Python. I am using this to do so:
import svgutils.transform as st

template = st.fromfile('firstLayer.svg')
second_svg = st.fromfile('secondLayer.svg')
template.append(second_svg)
template.save('merged.svg')

It technically works. The only problem is that, for example, in my first image (template) I have 9 classes (cls 1-9) and in the second I have 4 (cls 1-4). The class names don't change when stacking them, so the merged image comes out weird because the styles mix.
Does a solution exist that renames the classes with respect to the existing SVG class names? For example, if I stack the second layer on the first one, its class names would change from 1-4 to 10-13, and so on for any other SVG image that is added.

A: If anyone finds themselves facing the same issue: I didn't find a ready-made Python solution that rewrites the class name in the elements and the class attribute of each path, so I created one myself:
https://github.com/Amirh24/SVGAppender
Feel free to use it :)

A: Hey, I was running into the exact same issue.
I found an example at https://svgutils.readthedocs.io/en/latest/tutorials/publication_quality_figures.html
Try this change:
import svgutils.transform as st

fig = st.SVGFigure("100px", "100px")

template = st.fromfile('firstLayer.svg')
second_svg = st.fromfile('secondLayer.svg')

fig.append([template.getroot(), second_svg.getroot()])
fig.save('merged.svg')
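Building on the second answer, a sketch that stacks any number of layers in one pass; the file names and canvas size are placeholders:

import svgutils.transform as st

layers = ["firstLayer.svg", "secondLayer.svg", "thirdLayer.svg"]
fig = st.SVGFigure("100px", "100px")
# appending each layer's root keeps the documents separate instead of nesting them
fig.append([st.fromfile(path).getroot() for path in layers])
fig.save("merged.svg")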
Stacking a lot of SVG images as layers issue
So I would like to stack a lot of svg images on top of each other in Python. I am using this to do so: import svgutils.transform as st template = st.fromfile('firstLayer.svg') second_svg = st.fromfile('secondLayer.svg') template.append(second_svg) template.save('merged.svg') It technically works. Only problem is that for example in my first image (template) I have 9 classes (cls 1 - 9) and in the second I have 4 (cls 1 - 4). The name of the classes doesn't change when stacking them so the images comes out weird because the style is mixing. Does a solution that changes the classes name with respect to the existing SVG class name exists? for example If I stack the second layer on the first one the class names will change from 1 - 4 to 10 - 13 and so on for any other SVG image that will be added?
[ "If anyone found himself due to the same issue, I didn't find an already made Python solution that rewrites the class name in the elements and the class attribute of each path so I have created one myself:\nhttps://github.com/Amirh24/SVGAppender\nFeel free to use it :)\n", "Hey I was running into the exact same issue.\nI found an example at https://svgutils.readthedocs.io/en/latest/tutorials/publication_quality_figures.html\nTry this change:\nimport svgutils.transform as st\nfig = st.SVGFigure(\"100px\", \"100px\")\n\ntemplate = st.fromfile('firstLayer.svg')\nsecond_svg = st.fromfile('secondLayer.svg')\n\nfig.append([template.getroot(), second_svg.getroot()])\nfig.save('merged.svg')\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "svg" ]
stackoverflow_0050652568_python_svg.txt
Q: python recursion for child parent grandparent etc hierarchy
I am trying to create the full hierarchy chain for every child. I have two dicts: the first contains all child-parent pairs to be used as a lookup, and the second contains all children as keys, with values starting off as a list containing the immediate parent, to which the grandparents, great-grandparents, etc. will be appended. For simplicity's sake, a child cannot have more than one parent, but a parent can have multiple children.
c_p = {"A": "C", "B": "C", "C": "F", "D": "E", "E": "F", "F": ""}
hierarchy = {
    "A": ["C"],
    "B": ["C"],
    "C": ["F"],
    "D": ["E"],
    "E": ["F"],
    "F": [""]
}
expected_result = {
    "A": ["C", "F"],
    "B": ["C", "F"],
    "C": ["F"],
    "D": ["E", "F"],
    "E": ["F"],
    "F": [""],
}

Below is the function I have thus far, but it raises 'str' object has no attribute 'copy'. I know this is because the function expects a dict but is being passed a string, but I am not clear on how to restructure it.
def hierarchy_gen(data):
    for k, v in data.copy().items():
        last_parent = v[-1]
        if last_parent not in ['F', '', None]:
            v += hierarchy_gen(c_p[last_parent])
        else:
            v += ''
            continue
    return data

test = hierarchy_gen(hierarchy)

A: Some terminology. We're doing a transitive closure over a forest (a data structure containing the roots of many trees). By denoting the structure as a forest, I'm assuming there are no cycles in the graph.
def transitive_closure(forest):
    def recurse(root, path):
        if root in forest and forest[root]:
            for x in forest[root]:
                path.append(root)
                recurse(x, path)
                path.pop()
        elif path:
            result[path[0]] = path[1:]

    result = {}

    for root in forest:
        recurse(root, [])

    return result


if __name__ == "__main__":
    hierarchy = {
        "A": ["C"],
        "B": ["C"],
        "C": ["F"],
        "D": ["E"],
        "E": ["F"],
        "F": [""]
    }
    print(transitive_closure(hierarchy))

Now, there's some inconsistency in the expected output. Either "" counts as a node value or it doesn't. If we count it, you can use
result[path[0]] = path[1:] + [root]
but then you'll see "" at the end of every closure path because "F": [""] is the terminal point of everything in this particular forest. Or maybe you meant to use "F": [] to indicate that "F" has no children, in which case you'll still want to append the current node to the path on the base case. You could also handle this as a special value, but that strikes me as a poor design.
Additionally, c_p seems totally pointless since hierarchy contains the exact same information, with lists as values. I assume it's possible that some elements have more than a single child, otherwise it's a pretty dull data structure. As it stands, the example is basically a linked list, a degenerate tree.
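Since each child has at most one parent, expected_result can also be produced without recursion by walking the c_p chain; a sketch reusing the question's dicts:

def hierarchy_gen(c_p):
    result = {}
    for child, parent in c_p.items():
        chain = []
        while parent:                    # "" marks the root, so stop there
            chain.append(parent)
            parent = c_p.get(parent, "")
        result[child] = chain or [""]    # keep [""] for the root, as in expected_result
    return result

# hierarchy_gen(c_p) == expected_result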
python recursion for child parent grandparent etc hierarchy
I am trying to create the full hierarchy chain for every child. I have two dicts, the first contains all child-parent pairs to be used as lookup, and the second contains all children as keys, and their values starts off as a list containing the immediate parent where then the grand parents, great garnd parents etc will be appended to. For simplicity sake, a child cannot have more than one parent, but a parent can have multiple children. c_p = { "A":"C","B":"C","C":"F","D":"E","E":"F","F":""} hierarchy = { "A": ["C"], "B": ["C"], "C": ["F"], "D": ["E"], "E": ["F"], "F": [""] } expected_result = { "A": ["C", "F"], "B": ["C", "F"], "C": ["F"], "D": ["E", "F"], "E": ["F"], "F": [""], } Below is the function I have thus far, but it is returning 'str' object has no attribute 'copy'. I know this is because the function is expecting a dict but it is passing through a string but I am not clear on how to structure this function. def hierarchy_gen(data): for k, v in data.copy().items(): last_parent = v[-1] if last_parent not in ['F','',None]: v += hierarchy_gen(c_p[last_parent]) else: v += '' continue return data test = hierarchy_gen(hierarchy)
[ "Some terminology. We're doing a transitive closure over a forest (a data structure containing the roots of many trees). By denoting the structure as a forest, I'm assuming there are no cycles in the graph.\ndef transitive_closure(forest):\n def recurse(root, path):\n if root in forest and forest[root]:\n for x in forest[root]:\n path.append(root)\n recurse(x, path)\n path.pop()\n elif path:\n result[path[0]] = path[1:]\n\n result = {}\n\n for root in forest:\n recurse(root, [])\n\n return result\n\n\nif __name__ == \"__main__\":\n hierarchy = {\n \"A\": [\"C\"],\n \"B\": [\"C\"],\n \"C\": [\"F\"],\n \"D\": [\"E\"],\n \"E\": [\"F\"],\n \"F\": [\"\"]\n }\n print(transitive_closure(hierarchy))\n\nNow, there's some inconsistency in the expected output. Either \"\" counts as a node value or it doesn't. If we count it, you can use\nresult[path[0]] = path[1:] + [root]\n\nbut then you'll see \"\" at the end of every closure path because \"F\": [\"\"] is the terminal point of everything in this particular forest. Or maybe you meant to use \"F\": [] to indicate that \"F\" has no children, in which case you'll still want to append the current node to the path on the base case. You could also handle this as a special value, but that strikes me as a poor design.\nAdditionally, c_p seems totally pointless since hierarchy contains the exact same information, with lists as values. I assume it's possible that some elements have more than a single child, otherwise it's a pretty dull data structure. As it stands, the example is basically a linked list, a degenerate tree.\n" ]
[ 1 ]
[]
[]
[ "dictionary", "python", "recursion" ]
stackoverflow_0074543804_dictionary_python_recursion.txt
Q: Python Selenium Multiple Buttons with same name
For the below HTML code, I want to select the button in the DIV for 2022 November:
<div class="reportTitleNonMobile ng-binding"> 2022 November
<button class="myButton" ng-show="!gridFile.monthOpen" ng-click="toggleMonthReport(gridFile)">Open</button>
<button class="myButton ng-hide" ng-show="gridFile.monthOpen" ng-click="toggleMonthReport(gridFile)">Close</button>
</div>

However, the same markup is repeated for all the months. Below is the link to the larger HTML code for reference. Any help is appreciated:
https://workdrive.zohoexternal.com/external/ecd06139063100a07635e16e74cb43a21504e8cb0b2c3bd57d6f3d7e317724d5
I have tried the following:
driver.find_element_by_xpath('//*[@id="reportHeaderCol4NonMobile"]/div[2]/div[3]/div[3]/fieldset/div[1]/div[1]/div').click()
driver.find_element_by_link_text('Open')[1].click()
driver.find_element_by_class_name('button.myButton')[1].click()
driver.find_element_by_class_name('myButton')[1].click()
and a bunch more, but none of them seem to work and I end up with the error Message: no such element: Unable to locate element

---- Edit / Addition ----
I even tried the suggestion from @AbiSaran with the XPath (.//*[@class='reportTitleAreaNonMobile'])[1]//button[1] but got the same error. I feel the XPaths are correct, since I can find the elements in the Chrome > Inspect screen, but my Python code fails.

---- Solution ----
The solution below points to an accurate XPath but was not working because the website opens a new tab, and the code kept looking for the element in the previous tab. I had to switch the active tab using the code below, and then it worked fine:
driver.switch_to.window(driver.window_handles[1])

A: You can use the below XPath:
For the 'Open' button in '2022 November':
(.//*[@class='reportTitleAreaNonMobile'])[1]//button[1]
For the 'Close' button in '2022 November':
(.//*[@class='reportTitleAreaNonMobile'])[1]//button[2]
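A sketch of the full working sequence, combining the accepted XPath with the tab switch from the edit; the explicit wait is an addition for robustness, and driver is assumed to be an already-created WebDriver:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# the report opens in a new tab, so point the driver at it first
driver.switch_to.window(driver.window_handles[1])
open_button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable(
        (By.XPATH, "(.//*[@class='reportTitleAreaNonMobile'])[1]//button[1]")
    )
)
open_button.click()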
Python Selenium Multiple Buttons with same name
For the below HTML code, I want to select the button in the DIV for 2022 November <div class="reportTitleNonMobile ng-binding"> 2022 November <button class="myButton" ng-show="!gridFile.monthOpen" ng-click="toggleMonthReport(gridFile)">Open</button> <button class="myButton ng-hide" ng-show="gridFile.monthOpen" ng-click="toggleMonthReport(gridFile)">Close</button> </div> However the same is repeated for all the months. Below is the link to the larger HTML Code for reference. Any help is appreciated: https://workdrive.zohoexternal.com/external/ecd06139063100a07635e16e74cb43a21504e8cb0b2c3bd57d6f3d7e317724d5 I have tried the following: driver.find_element_by_xpath('//*[@id="reportHeaderCol4NonMobile"]/div[2]/div[3]/div[3]/fieldset/div1/div1/div').click() driver.find_element_by_link_text('Open')1.click() driver.find_element_by_class_name('button.myButton')1.click() driver.find_element_by_class_name('myButton')1.click() and a bunch more but none of them seem to work and I end up with error Message: no such element: Unable to locate element ---- Edit / Addition--- I even tried the suggestion from @AbiSaran with Xpath as (.//*[@class='reportTitleAreaNonMobile'])1//button1 but got the same error. I feel the Xpath's are correct since I am able to find them in Chrome > Inspect screen. But my python code fails. -----------Solution--- The solution below was pointing to accurate Xpath but was not working since the website was opening a new tab and the code continued to look for the element in the previous tab. I has the switch the active tab using the doe below and it worked fine. driver.switch_to.window(driver.window_handles[1])
[ "You can use the below XPath:\nFor the 'Open' button in '2022 November':\n(.//*[@class='reportTitleAreaNonMobile'])[1]//button[1]\n\nFor the 'Close' button in '2022 November':\n(.//*[@class='reportTitleAreaNonMobile'])[1]//button[2]\n\n" ]
[ 0 ]
[]
[]
[ "python", "selenium" ]
stackoverflow_0074579838_python_selenium.txt
Q: How can I print the number of steps forward, backward, and total? Here's my code: import math import random while True: fwd= random.randint(2,20) bkwd= random.randint(2,fwd) total=random.randint(10,85) f= 0 b = 0 t= 0 if bkwd > fwd: break while total > 0: f = 0 while fwd > f: if total > 0: print("F", end="") f=f+1 t=t+1 total=total-1 else: f = fwd b = 0 while bkwd > b: if total > 0: print("B", end="") t=t-1 b=b+1 total=total-1 else: b = bkwd if f > total: break print(" ",t, "steps from the start") #I need help here printing the right amount of total steps print("Forward:", f, "Backward:", b, "Total:", ) My instructions are: A person walks a random amount of steps forward, and then a different random number of steps backwards. The random steps are anywhere between 2 and 20 The number of steps forward is always greater than the number of steps backwards That motion of forward / backward random steps repeats itself again and again The motion is consistent (the number of forward steps stays the same throughout the motion, and the number of backwards steps stays the same throughout the motion) After making a specific amount of total steps the person is told to stop and will be a certain amount of steps forward from where they started. The total number of steps is generated randomly and will be between 10 and 85 You are writing a program to simulate the motion taken by the person. Display that motion and the number of steps he ends away from where he started. For Example: If the program generated the forward steps to be 4, and the backward steps to be 2, and the total number of steps to be 13, your program would display: FFFFBBFFFFBBF = 5 Steps from the start If the program generated the forward steps to be 5, and the backward steps to be 3, and the total steps to be 16, your program would display FFFFFBBBFFFFFBBB = 4 Steps from the start A: For one thing, bkwd = random.randint(2,fwd) will generate a number between 2<=n<=fwd, when instead you want 2<=n<=fwd-1. To answer your question, you just need to add a new variable to keep track of total steps taken. Maybe call it steps_taken? You should increment this counter once for every step taken.
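Putting the answer's two suggestions together (tighter randint bounds and a steps_taken counter), a compact sketch of the whole simulation; the lower bound of fwd is raised to 3, an assumption, so that bkwd can stay strictly smaller:

import random

fwd = random.randint(3, 20)        # forward steps per cycle
bkwd = random.randint(2, fwd - 1)  # strictly fewer backward steps
total = random.randint(10, 85)     # total steps before stopping

position = 0
steps_taken = 0
motion = []
while steps_taken < total:
    # the first fwd steps of each (fwd + bkwd)-long cycle go forward, the rest back
    step = "F" if steps_taken % (fwd + bkwd) < fwd else "B"
    motion.append(step)
    position += 1 if step == "F" else -1
    steps_taken += 1

print("".join(motion), "=", position, "Steps from the start")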
How can I print the number of steps forward, backward, and total?
Here's my code: import math import random while True: fwd= random.randint(2,20) bkwd= random.randint(2,fwd) total=random.randint(10,85) f= 0 b = 0 t= 0 if bkwd > fwd: break while total > 0: f = 0 while fwd > f: if total > 0: print("F", end="") f=f+1 t=t+1 total=total-1 else: f = fwd b = 0 while bkwd > b: if total > 0: print("B", end="") t=t-1 b=b+1 total=total-1 else: b = bkwd if f > total: break print(" ",t, "steps from the start") #I need help here printing the right amount of total steps print("Forward:", f, "Backward:", b, "Total:", ) My instructions are: A person walks a random amount of steps forward, and then a different random number of steps backwards. The random steps are anywhere between 2 and 20 The number of steps forward is always greater than the number of steps backwards That motion of forward / backward random steps repeats itself again and again The motion is consistent (the number of forward steps stays the same throughout the motion, and the number of backwards steps stays the same throughout the motion) After making a specific amount of total steps the person is told to stop and will be a certain amount of steps forward from where they started. The total number of steps is generated randomly and will be between 10 and 85 You are writing a program to simulate the motion taken by the person. Display that motion and the number of steps he ends away from where he started. For Example: If the program generated the forward steps to be 4, and the backward steps to be 2, and the total number of steps to be 13, your program would display: FFFFBBFFFFBBF = 5 Steps from the start If the program generated the forward steps to be 5, and the backward steps to be 3, and the total steps to be 16, your program would display FFFFFBBBFFFFFBBB = 4 Steps from the start
[ "For one thing, bkwd = random.randint(2,fwd) will generate a number between 2<=n<=fwd, when instead you want 2<=n<=fwd-1.\nTo answer your question, you just need to add a new variable to keep track of total steps taken. Maybe call it steps_taken? You should increment this counter once for every step taken.\n" ]
[ 0 ]
[]
[]
[ "iteration", "loops", "motion", "python" ]
stackoverflow_0074579955_iteration_loops_motion_python.txt
Q: How do I check if a string represents a number (float or int)? How do I check if a string represents a numeric value in Python? def is_number(s): try: float(s) return True except ValueError: return False The above works, but it seems clunky. If what you are testing comes from user input, it is still a string even if it represents an int or a float. See How can I read inputs as numbers? for converting the input, and Asking the user for input until they give a valid response for ensuring that the input represents an int or float (or other requirements) before proceeding. A: For non-negative (unsigned) integers only, use isdigit(): >>> a = "03523" >>> a.isdigit() True >>> b = "963spam" >>> b.isdigit() False Documentation for isdigit(): Python2, Python3 For Python 2 Unicode strings: isnumeric(). A: Which, not only is ugly and slow I'd dispute both. A regex or other string parsing method would be uglier and slower. I'm not sure that anything much could be faster than the above. It calls the function and returns. Try/Catch doesn't introduce much overhead because the most common exception is caught without an extensive search of stack frames. The issue is that any numeric conversion function has two kinds of results A number, if the number is valid A status code (e.g., via errno) or exception to show that no valid number could be parsed. C (as an example) hacks around this a number of ways. Python lays it out clearly and explicitly. I think your code for doing this is perfect. A: TL;DR The best solution is s.replace('.','',1).isdigit() I did some benchmarks comparing the different approaches def is_number_tryexcept(s): """ Returns True is string is a number. """ try: float(s) return True except ValueError: return False import re def is_number_regex(s): """ Returns True is string is a number. """ if re.match("^\d+?\.\d+?$", s) is None: return s.isdigit() return True def is_number_repl_isdigit(s): """ Returns True is string is a number. """ return s.replace('.','',1).isdigit() If the string is not a number, the except-block is quite slow. But more importantly, the try-except method is the only approach that handles scientific notations correctly. funcs = [ is_number_tryexcept, is_number_regex, is_number_repl_isdigit ] a_float = '.1234' print('Float notation ".1234" is not supported by:') for f in funcs: if not f(a_float): print('\t -', f.__name__) Float notation ".1234" is not supported by: - is_number_regex scientific1 = '1.000000e+50' scientific2 = '1e50' print('Scientific notation "1.000000e+50" is not supported by:') for f in funcs: if not f(scientific1): print('\t -', f.__name__) print('Scientific notation "1e50" is not supported by:') for f in funcs: if not f(scientific2): print('\t -', f.__name__) Scientific notation "1.000000e+50" is not supported by: - is_number_regex - is_number_repl_isdigit Scientific notation "1e50" is not supported by: - is_number_regex - is_number_repl_isdigit EDIT: The benchmark results import timeit test_cases = ['1.12345', '1.12.345', 'abc12345', '12345'] times_n = {f.__name__:[] for f in funcs} for t in test_cases: for f in funcs: f = f.__name__ times_n[f].append(min(timeit.Timer('%s(t)' %f, 'from __main__ import %s, t' %f) .repeat(repeat=3, number=1000000))) where the following functions were tested from re import match as re_match from re import compile as re_compile def is_number_tryexcept(s): """ Returns True is string is a number. """ try: float(s) return True except ValueError: return False def is_number_regex(s): """ Returns True is string is a number. 
""" if re_match("^\d+?\.\d+?$", s) is None: return s.isdigit() return True comp = re_compile("^\d+?\.\d+?$") def compiled_regex(s): """ Returns True is string is a number. """ if comp.match(s) is None: return s.isdigit() return True def is_number_repl_isdigit(s): """ Returns True is string is a number. """ return s.replace('.','',1).isdigit() A: There is one exception that you may want to take into account: the string 'NaN' If you want is_number to return FALSE for 'NaN' this code will not work as Python converts it to its representation of a number that is not a number (talk about identity issues): >>> float('NaN') nan Otherwise, I should actually thank you for the piece of code I now use extensively. :) G. A: how about this: '3.14'.replace('.','',1).isdigit() which will return true only if there is one or no '.' in the string of digits. '3.14.5'.replace('.','',1).isdigit() will return false edit: just saw another comment ... adding a .replace(badstuff,'',maxnum_badstuff) for other cases can be done. if you are passing salt and not arbitrary condiments (ref:xkcd#974) this will do fine :P A: Updated after Alfe pointed out you don't need to check for float separately as complex handles both: def is_number(s): try: complex(s) # for int, long, float and complex except ValueError: return False return True Previously said: Is some rare cases you might also need to check for complex numbers (e.g. 1+2i), which can not be represented by a float: def is_number(s): try: float(s) # for int, long and float except ValueError: try: complex(s) # for complex except ValueError: return False return True A: Which, not only is ugly and slow, seems clunky. It may take some getting used to, but this is the pythonic way of doing it. As has been already pointed out, the alternatives are worse. But there is one other advantage of doing things this way: polymorphism. The central idea behind duck typing is that "if it walks and talks like a duck, then it's a duck." What if you decide that you need to subclass string so that you can change how you determine if something can be converted into a float? Or what if you decide to test some other object entirely? You can do these things without having to change the above code. Other languages solve these problems by using interfaces. I'll save the analysis of which solution is better for another thread. The point, though, is that python is decidedly on the duck typing side of the equation, and you're probably going to have to get used to syntax like this if you plan on doing much programming in Python (but that doesn't mean you have to like it of course). One other thing you might want to take into consideration: Python is pretty fast in throwing and catching exceptions compared to a lot of other languages (30x faster than .Net for instance). Heck, the language itself even throws exceptions to communicate non-exceptional, normal program conditions (every time you use a for loop). Thus, I wouldn't worry too much about the performance aspects of this code until you notice a significant problem. A: For int use this: >>> "1221323".isdigit() True But for float we need some tricks ;-). Every float number has one point... 
>>> "12.34".isdigit() False >>> "12.34".replace('.','',1).isdigit() True >>> "12.3.4".replace('.','',1).isdigit() False Also for negative numbers just add lstrip(): >>> '-12'.lstrip('-') '12' And now we get a universal way: >>> '-12.34'.lstrip('-').replace('.','',1).isdigit() True >>> '.-234'.lstrip('-').replace('.','',1).isdigit() False A: This answer provides step by step guide having function with examples to find the string is: Positive integer Positive/negative - integer/float How to discard "NaN" (not a number) strings while checking for number? Check if string is positive integer You may use str.isdigit() to check whether given string is positive integer. Sample Results: # For digit >>> '1'.isdigit() True >>> '1'.isalpha() False Check for string as positive/negative - integer/float str.isdigit() returns False if the string is a negative number or a float number. For example: # returns `False` for float >>> '123.3'.isdigit() False # returns `False` for negative number >>> '-123'.isdigit() False If you want to also check for the negative integers and float, then you may write a custom function to check for it as: def is_number(n): try: float(n) # Type-casting the string to `float`. # If string is not a valid `float`, # it'll raise `ValueError` exception except ValueError: return False return True Sample Run: >>> is_number('123') # positive integer number True >>> is_number('123.4') # positive float number True >>> is_number('-123') # negative integer number True >>> is_number('-123.4') # negative `float` number True >>> is_number('abc') # `False` for "some random" string False Discard "NaN" (not a number) strings while checking for number The above functions will return True for the "NAN" (Not a number) string because for Python it is valid float representing it is not a number. For example: >>> is_number('NaN') True In order to check whether the number is "NaN", you may use math.isnan() as: >>> import math >>> nan_num = float('nan') >>> math.isnan(nan_num) True Or if you don't want to import additional library to check this, then you may simply check it via comparing it with itself using ==. Python returns False when nan float is compared with itself. For example: # `nan_num` variable is taken from above example >>> nan_num == nan_num False Hence, above function is_number can be updated to return False for "NaN" as: def is_number(n): is_number = True try: num = float(n) # check for "nan" floats is_number = num == num # or use `math.isnan(num)` except ValueError: is_number = False return is_number Sample Run: >>> is_number('Nan') # not a number "Nan" string False >>> is_number('nan') # not a number string "nan" with all lower cased False >>> is_number('123') # positive integer True >>> is_number('-123') # negative integer True >>> is_number('-1.12') # negative `float` True >>> is_number('abc') # "some random" string False PS: Each operation for each check depending on the type of number comes with additional overhead. Choose the version of is_number function which fits your requirement. A: For strings of non-numbers, try: except: is actually slower than regular expressions. For strings of valid numbers, regex is slower. So, the appropriate method depends on your input. If you find that you are in a performance bind, you can use a new third-party module called fastnumbers that provides a function called isfloat. Full disclosure, I am the author. I have included its results in the timings below. 
from __future__ import print_function import timeit prep_base = '''\ x = 'invalid' y = '5402' z = '4.754e3' ''' prep_try_method = '''\ def is_number_try(val): try: float(val) return True except ValueError: return False ''' prep_re_method = '''\ import re float_match = re.compile(r'[-+]?\d*\.?\d+(?:[eE][-+]?\d+)?$').match def is_number_re(val): return bool(float_match(val)) ''' fn_method = '''\ from fastnumbers import isfloat ''' print('Try with non-number strings', timeit.timeit('is_number_try(x)', prep_base + prep_try_method), 'seconds') print('Try with integer strings', timeit.timeit('is_number_try(y)', prep_base + prep_try_method), 'seconds') print('Try with float strings', timeit.timeit('is_number_try(z)', prep_base + prep_try_method), 'seconds') print() print('Regex with non-number strings', timeit.timeit('is_number_re(x)', prep_base + prep_re_method), 'seconds') print('Regex with integer strings', timeit.timeit('is_number_re(y)', prep_base + prep_re_method), 'seconds') print('Regex with float strings', timeit.timeit('is_number_re(z)', prep_base + prep_re_method), 'seconds') print() print('fastnumbers with non-number strings', timeit.timeit('isfloat(x)', prep_base + 'from fastnumbers import isfloat'), 'seconds') print('fastnumbers with integer strings', timeit.timeit('isfloat(y)', prep_base + 'from fastnumbers import isfloat'), 'seconds') print('fastnumbers with float strings', timeit.timeit('isfloat(z)', prep_base + 'from fastnumbers import isfloat'), 'seconds') print() Try with non-number strings 2.39108395576 seconds Try with integer strings 0.375686168671 seconds Try with float strings 0.369210958481 seconds Regex with non-number strings 0.748660802841 seconds Regex with integer strings 1.02021503448 seconds Regex with float strings 1.08564686775 seconds fastnumbers with non-number strings 0.174362897873 seconds fastnumbers with integer strings 0.179651021957 seconds fastnumbers with float strings 0.20222902298 seconds As you can see try: except: was fast for numeric input but very slow for an invalid input regex is very efficient when the input is invalid fastnumbers wins in both cases A: I know this is particularly old but I would add an answer I believe covers the information missing from the highest voted answer that could be very valuable to any who find this: For each of the following methods connect them with a count if you need any input to be accepted. (Assuming we are using vocal definitions of integers rather than 0-255, etc.) x.isdigit() works well for checking if x is an integer. x.replace('-','').isdigit() works well for checking if x is a negative.(Check - in first position) x.replace('.','').isdigit() works well for checking if x is a decimal. x.replace(':','').isdigit() works well for checking if x is a ratio. x.replace('/','',1).isdigit() works well for checking if x is a fraction. A: Just Mimic C# In C# there are two different functions that handle parsing of scalar values: Float.Parse() Float.TryParse() float.parse(): def parse(string): try: return float(string) except Exception: throw TypeError Note: If you're wondering why I changed the exception to a TypeError, here's the documentation. float.try_parse(): def try_parse(string, fail=None): try: return float(string) except Exception: return fail; Note: You don't want to return the boolean 'False' because that's still a value type. None is better because it indicates failure. Of course, if you want something different you can change the fail parameter to whatever you want. 
To extend float to include the 'parse()' and 'try_parse()' you'll need to monkeypatch the 'float' class to add these methods. If you want respect pre-existing functions the code should be something like: def monkey_patch(): if(!hasattr(float, 'parse')): float.parse = parse if(!hasattr(float, 'try_parse')): float.try_parse = try_parse SideNote: I personally prefer to call it Monkey Punching because it feels like I'm abusing the language when I do this but YMMV. Usage: float.parse('giggity') // throws TypeException float.parse('54.3') // returns the scalar value 54.3 float.tryParse('twank') // returns None float.tryParse('32.2') // returns the scalar value 32.2 And the great Sage Pythonas said to the Holy See Sharpisus, "Anything you can do I can do better; I can do anything better than you." A: Casting to float and catching ValueError is probably the fastest way, since float() is specifically meant for just that. Anything else that requires string parsing (regex, etc) will likely be slower due to the fact that it's not tuned for this operation. My $0.02. A: You can use Unicode strings, they have a method to do just what you want: >>> s = u"345" >>> s.isnumeric() True Or: >>> s = "345" >>> u = unicode(s) >>> u.isnumeric() True http://www.tutorialspoint.com/python/string_isnumeric.htm http://docs.python.org/2/howto/unicode.html A: So to put it all together, checking for Nan, infinity and complex numbers (it would seem they are specified with j, not i, i.e. 1+2j) it results in: def is_number(s): try: n=str(float(s)) if n == "nan" or n=="inf" or n=="-inf" : return False except ValueError: try: complex(s) # for complex except ValueError: return False return True A: I wanted to see which method is fastest. Overall the best and most consistent results were given by the check_replace function. The fastest results were given by the check_exception function, but only if there was no exception fired - meaning its code is the most efficient, but the overhead of throwing an exception is quite large. Please note that checking for a successful cast is the only method which is accurate, for example, this works with check_exception but the other two test functions will return False for a valid float: huge_number = float('1e+100') Here is the benchmark code: import time, re, random, string ITERATIONS = 10000000 class Timer: def __enter__(self): self.start = time.clock() return self def __exit__(self, *args): self.end = time.clock() self.interval = self.end - self.start def check_regexp(x): return re.compile("^\d*\.?\d*$").match(x) is not None def check_replace(x): return x.replace('.','',1).isdigit() def check_exception(s): try: float(s) return True except ValueError: return False to_check = [check_regexp, check_replace, check_exception] print('preparing data...') good_numbers = [ str(random.random() / random.random()) for x in range(ITERATIONS)] bad_numbers = ['.' 
+ x for x in good_numbers] strings = [ ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(random.randint(1,10))) for x in range(ITERATIONS)] print('running test...') for func in to_check: with Timer() as t: for x in good_numbers: res = func(x) print('%s with good floats: %s' % (func.__name__, t.interval)) with Timer() as t: for x in bad_numbers: res = func(x) print('%s with bad floats: %s' % (func.__name__, t.interval)) with Timer() as t: for x in strings: res = func(x) print('%s with strings: %s' % (func.__name__, t.interval)) Here are the results with Python 2.7.10 on a 2017 MacBook Pro 13: check_regexp with good floats: 12.688639 check_regexp with bad floats: 11.624862 check_regexp with strings: 11.349414 check_replace with good floats: 4.419841 check_replace with bad floats: 4.294909 check_replace with strings: 4.086358 check_exception with good floats: 3.276668 check_exception with bad floats: 13.843092 check_exception with strings: 15.786169 Here are the results with Python 3.6.5 on a 2017 MacBook Pro 13: check_regexp with good floats: 13.472906000000009 check_regexp with bad floats: 12.977665000000016 check_regexp with strings: 12.417542999999995 check_replace with good floats: 6.011045999999993 check_replace with bad floats: 4.849356 check_replace with strings: 4.282754000000011 check_exception with good floats: 6.039081999999979 check_exception with bad floats: 9.322753000000006 check_exception with strings: 9.952595000000002 Here are the results with PyPy 2.7.13 on a 2017 MacBook Pro 13: check_regexp with good floats: 2.693217 check_regexp with bad floats: 2.744819 check_regexp with strings: 2.532414 check_replace with good floats: 0.604367 check_replace with bad floats: 0.538169 check_replace with strings: 0.598664 check_exception with good floats: 1.944103 check_exception with bad floats: 2.449182 check_exception with strings: 2.200056 A: The input may be as follows: a="50" b=50 c=50.1 d="50.1" 1-General input: The input of this function can be everything! Finds whether the given variable is numeric. Numeric strings consist of optional sign, any number of digits, optional decimal part and optional exponential part. Thus +0123.45e6 is a valid numeric value. Hexadecimal (e.g. 0xf4c3b00c) and binary (e.g. 0b10100111001) notation is not allowed. is_numeric function import ast import numbers def is_numeric(obj): if isinstance(obj, numbers.Number): return True elif isinstance(obj, str): nodes = list(ast.walk(ast.parse(obj)))[1:] if not isinstance(nodes[0], ast.Expr): return False if not isinstance(nodes[-1], ast.Num): return False nodes = nodes[1:-1] for i in range(len(nodes)): #if used + or - in digit : if i % 2 == 0: if not isinstance(nodes[i], ast.UnaryOp): return False else: if not isinstance(nodes[i], (ast.USub, ast.UAdd)): return False return True else: return False test: >>> is_numeric("54") True >>> is_numeric("54.545") True >>> is_numeric("0x45") True is_float function Finds whether the given variable is float. float strings consist of optional sign, any number of digits, ... 
import ast def is_float(obj): if isinstance(obj, float): return True if isinstance(obj, int): return False elif isinstance(obj, str): nodes = list(ast.walk(ast.parse(obj)))[1:] if not isinstance(nodes[0], ast.Expr): return False if not isinstance(nodes[-1], ast.Num): return False if not isinstance(nodes[-1].n, float): return False nodes = nodes[1:-1] for i in range(len(nodes)): if i % 2 == 0: if not isinstance(nodes[i], ast.UnaryOp): return False else: if not isinstance(nodes[i], (ast.USub, ast.UAdd)): return False return True else: return False test: >>> is_float("5.4") True >>> is_float("5") False >>> is_float(5) False >>> is_float("5") False >>> is_float("+5.4") True what is ast? 2- If you are confident that the variable content is String: use str.isdigit() method >>> a=454 >>> a.isdigit() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'int' object has no attribute 'isdigit' >>> a="454" >>> a.isdigit() True 3-Numerical input: detect int value: >>> isinstance("54", int) False >>> isinstance(54, int) True >>> detect float: >>> isinstance("45.1", float) False >>> isinstance(45.1, float) True A: In a most general case for a float, one would like to take care of integers and decimals. Let's take the string "1.1" as an example. I would try one of the following: 1.> isnumeric() word = "1.1" "".join(word.split(".")).isnumeric() >>> True 2.> isdigit() word = "1.1" "".join(word.split(".")).isdigit() >>> True 3.> isdecimal() word = "1.1" "".join(word.split(".")).isdecimal() >>> True Speed: ► All the aforementioned methods have similar speeds. %timeit "".join(word.split(".")).isnumeric() >>> 257 ns ± 12 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) %timeit "".join(word.split(".")).isdigit() >>> 252 ns ± 11 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) %timeit "".join(word.split(".")).isdecimal() >>> 244 ns ± 7.17 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) A: str.isnumeric() Return True if all characters in the string are numeric characters, and there is at least one character, False otherwise. Numeric characters include digit characters, and all characters that have the Unicode numeric value property, e.g. U+2155, VULGAR FRACTION ONE FIFTH. Formally, numeric characters are those with the property value Numeric_Type=Digit, Numeric_Type=Decimal or Numeric_Type=Numeric. str.isdecimal() Return True if all characters in the string are decimal characters and there is at least one character, False otherwise. Decimal characters are those that can be used to form numbers in base 10, e.g. U+0660, ARABIC-INDIC DIGIT ZERO. Formally a decimal character is a character in the Unicode General Category “Nd”. Both available for string types from Python 3.0. A: I needed to determine if a string cast into basic types (float,int,str,bool). After not finding anything on the internet I created this: def str_to_type (s): """ Get possible cast type for a string Parameters ---------- s : string Returns ------- float,int,str,bool : type Depending on what it can be cast to """ try: f = float(s) if "." 
not in s: return int return float except ValueError: value = s.upper() if value == "TRUE" or value == "FALSE": return bool return type(s) Example str_to_type("true") # bool str_to_type("6.0") # float str_to_type("6") # int str_to_type("6abc") # str str_to_type(u"6abc") # unicode You can capture the type and use it s = "6.0" type_ = str_to_type(s) # float f = type_(s) A: I think your solution is fine, but there is a correct regexp implementation. There does seem to be a lot of regexp hate towards these answers which I think is unjustified, regexps can be reasonably clean and correct and fast. It really depends on what you're trying to do. The original question was how can you "check if a string can be represented as a number (float)" (as per your title). Presumably you would want to use the numeric/float value once you've checked that it's valid, in which case your try/except makes a lot of sense. But if, for some reason, you just want to validate that a string is a number then a regex also works fine, but it's hard to get correct. I think most of the regex answers so far, for example, do not properly parse strings without an integer part (such as ".7") which is a float as far as python is concerned. And that's slightly tricky to check for in a single regex where the fractional portion is not required. I've included two regex to show this. It does raise the interesting question as to what a "number" is. Do you include "inf" which is valid as a float in python? Or do you include numbers that are "numbers" but maybe can't be represented in python (such as numbers that are larger than the float max). There's also ambiguities in how you parse numbers. For example, what about "--20"? Is this a "number"? Is this a legal way to represent "20"? Python will let you do "var = --20" and set it to 20 (though really this is because it treats it as an expression), but float("--20") does not work. Anyways, without more info, here's a regex that I believe covers all the ints and floats as python parses them. # Doesn't properly handle floats missing the integer part, such as ".7" SIMPLE_FLOAT_REGEXP = re.compile(r'^[-+]?[0-9]+\.?[0-9]+([eE][-+]?[0-9]+)?$') # Example "-12.34E+56" # sign (-) # integer (12) # mantissa (34) # exponent (E+56) # Should handle all floats FLOAT_REGEXP = re.compile(r'^[-+]?([0-9]+|[0-9]*\.[0-9]+)([eE][-+]?[0-9]+)?$') # Example "-12.34E+56" # sign (-) # integer (12) # OR # int/mantissa (12.34) # exponent (E+56) def is_float(str): return True if FLOAT_REGEXP.match(str) else False Some example test values: True <- +42 True <- +42.42 False <- +42.42.22 True <- +42.42e22 True <- +42.42E-22 False <- +42.42e-22.8 True <- .42 False <- 42nope Running the benchmarking code in @ron-reiter's answer shows that this regex is actually faster than the normal regex and is much faster at handling bad values than the exception, which makes some sense. Results: check_regexp with good floats: 18.001921 check_regexp with bad floats: 17.861423 check_regexp with strings: 17.558862 check_correct_regexp with good floats: 11.04428 check_correct_regexp with bad floats: 8.71211 check_correct_regexp with strings: 8.144161 check_replace with good floats: 6.020597 check_replace with bad floats: 5.343049 check_replace with strings: 5.091642 check_exception with good floats: 5.201605 check_exception with bad floats: 23.921864 check_exception with strings: 23.755481 A: I did some speed test. 
A: I did some speed test. Let's say that if the string is likely to be a number, the try/except strategy is the fastest possible. If the string is not likely to be a number and you are interested in an integer check, it is worth doing some tests (isdigit plus a heading '-'). If you are interested in checking float numbers, you have to use the try/except code without escape.

A: RyanN suggests

If you want to return False for a NaN and Inf, change line to x = float(s); return (x == x) and (x - 1 != x). This should return True for all floats except Inf and NaN

But this doesn't quite work, because for sufficiently large floats, x-1 == x returns true. For example, 2.0**54 - 1 == 2.0**54

A: I was working on a problem that led me to this thread, namely how to convert a collection of data to strings and numbers in the most intuitive way. I realized after reading the original code that what I needed was different in two ways:
1 - I wanted an integer result if the string represented an integer
2 - I wanted a number or a string result to stick into a data structure
so I adapted the original code to produce this derivative:
def string_or_number(s):
    try:
        z = int(s)
        return z
    except ValueError:
        try:
            z = float(s)
            return z
        except ValueError:
            return s

A: import re
def is_number(num):
    pattern = re.compile(r'^[-+]?[-0-9]\d*\.\d*|[-+]?\.?[0-9]\d*$')
    result = pattern.match(num)
    if result:
        return True
    else:
        return False

>>>: is_number('1')
True
>>>: is_number('111')
True
>>>: is_number('11.1')
True
>>>: is_number('-11.1')
True
>>>: is_number('inf')
False
>>>: is_number('-inf')
False

A: This code handles the exponents, floats, and integers, without using regex.
return True if str1.lstrip('-').replace('.','',1).isdigit() or float(str1) else False

A: Here's my simple way of doing it. Let's say that I'm looping through some strings and I want to add them to an array if they turn out to be numbers.
try:
    myvar.append( float(string_to_check) )
except:
    continue

Replace the myvar.append with whatever operation you want to do with the string if it turns out to be a number. The idea is to try to use a float() operation and use the returned error to determine whether or not the string is a number.

A: I also used the function you mentioned, but I soon noticed that strings such as "Nan" and "Inf" and their variations are considered numbers. So I propose an improved version of your function, which will return False on that type of input and will not fail on "1e3" variants:
def is_float(text):
    try:
        float(text)
        # check for nan/infinity etc.
        if text.isalpha():
            return False
        return True
    except ValueError:
        return False

A: User helper function:
def if_ok(fn, string):
    try:
        return fn(string)
    except Exception as e:
        return None

then
if_ok(int, my_str) or if_ok(float, my_str) or if_ok(complex, my_str)
is_number = lambda s: any([if_ok(fn, s) for fn in (int, float, complex)])

A: def is_float(s):
    if s is None:
        return False
    if len(s) == 0:
        return False
    digits_count = 0
    dots_count = 0
    signs_count = 0
    for c in s:
        if '0' <= c <= '9':
            digits_count += 1
        elif c == '.':
            dots_count += 1
        elif c == '-' or c == '+':
            signs_count += 1
        else:
            return False
    if digits_count == 0:
        return False
    if dots_count > 1:
        return False
    if signs_count > 1:
        return False
    return True

A: I know I'm late to the party, but figured out a solution which wasn't here:
This solution follows the EAFP principle in Python
def get_number_from_string(value):
    try:
        int_value = int(value)
        return int_value
    except ValueError:
        return float(value)

Explanation:
If the value in the string is a float and I first try to parse it as an int, it will throw a ValueError. So, I catch that error, parse the value as a float and return it.

A: You can generalize the exception technique in a useful way by returning more useful values than True and False. For example this function puts quotes round strings but leaves numbers alone. Which is just what I needed for a quick and dirty filter to make some variable definitions for R.
import sys

def fix_quotes(s):
    try:
        float(s)
        return s
    except ValueError:
        return '"{0}"'.format(s)

for line in sys.stdin:
    input = line.split()
    print input[0], '<- c(', ','.join(fix_quotes(c) for c in input[1:]), ')'

A: Try this.
def is_number(var):
    try:
        if var == int(var):
            return True
    except Exception:
        return False

A: Sorry for the Zombie thread post - just wanted to round out the code for completeness...
# is_number() function - Uses re = regex library
# Should handle all normal and complex numbers
# Does not accept trailing spaces.
# Note: accepts both engineering "j" and math "i" but only the imaginary part "+bi" of a complex number a+bi
# Also accepts inf or NaN
# Thanks to the earlier responders for most of the regex fu

import re

ISNUM_REGEXP = re.compile(r'^[-+]?([0-9]+|[0-9]*\.[0-9]+)([eE][-+]?[0-9]+)?[ij]?$')

def is_number(str):
    # change order if you have a lot of NaN or inf to parse
    if ISNUM_REGEXP.match(str) or str == "NaN" or str == "inf":
        return True
    else:
        return False

# A couple test numbers
# +42.42e-42j
# -42.42E+42i

print('Is it a number?', is_number(input('Gimme any number: ')))

Gimme any number: +42.42e-42j
Is it a number? True

A: For my very simple and very common use-case: is this human-written string typed on a keyboard a number?
I read through most answers, and ended up with:
def isNumeric(string):
    result = True
    try:
        x = float(string)
        result = (x == x) and (x - 1 != x)
    except ValueError:
        result = False
    return result

It will return False for (+-)NaN and (+-)inf.
You can check it out here: https://trinket.io/python/ce32c0e54e

A: One fast and simple option is to check the data type:
def is_number(value):
    return type(value) in [int, float]

Or if you want to test whether the value of a string is numeric:
def isNumber (value):
    return True if type(value) in [int, float] else str(value).replace('.','',1).isdigit()

tests:
>>> isNumber(1)
True
>>> isNumber(1/3)
True
>>> isNumber(1.3)
True
>>> isNumber('1.3')
True
>>> isNumber('s1.3')
False

A: There are already good answers in this post. I wanted to give a slightly different perspective.
Instead of searching for a digit, number or float we could do a negative search for an alphabet, i.e. we could ask the program to check that the string is not alphabetic.
## Check whether it is not alpha rather than checking if it is digit
print(not "-1.2345".isalpha())
print(not "-1.2345e-10".isalpha())

It will work well if you are sure that your string is a well formed number (Condition 1 and Condition 2 below). However it will fail if the string is not a well formed number by mistake. In such a case it will return a number match even if the string was not a valid number. To take care of this situation, many rule-based methods could be applied. However at this moment, regex comes to my mind. Please note the regex can be much better since I am not a regex expert. Below there are two lists: one for valid numbers and one for invalid numbers. Valid numbers must be picked up while the invalid numbers must not be.

== Condition 1: String is guaranteed to be a valid number but 'inf' is not picked ==
Valid_Numbers = ["1","-1","+1","0.0",".1","1.2345","-1.2345","+1.2345","1.2345e10","1.2345e-10","-1.2345e10","-1.2345E10","-inf"]
Invalid_Numbers = ["1.1.1","++1","--1","-1-1","1.23e10e5","--inf"]

################################ Condition 1: Valid number excludes 'inf' ####################################

Case_1_Positive_Result = list(map(lambda x: not x.isalpha(),Valid_Numbers))
print("The below must all be True")
print(Case_1_Positive_Result)

## This check assumes a valid number. So it fails for the negative cases and wrongly detects the string as a number
Case_1_Negative_Result = list(map(lambda x: not x.isalpha(),Invalid_Numbers))
print("The below must all be False")
print(Case_1_Negative_Result)

The below must all be True
[True, True, True, True, True, True, True, True, True, True, True, True, True]
The below must all be False
[True, True, True, True, True, True]

== Condition 2: String is guaranteed to be a valid number and 'inf' is picked ==
################################ Condition 2: Valid number includes 'inf' ###################################
Case_2_Positive_Result = list(map(lambda x: x=="inf" or not x.isalpha(),Valid_Numbers+["inf"]))
print("The below must all be True")
print(Case_2_Positive_Result)

## This check assumes a valid number. So it fails for the negative cases and wrongly detects the string as a number
Case_2_Negative_Result = list(map(lambda x: x=="inf" or not x.isalpha(),Invalid_Numbers+["++inf"]))
print("The below must all be False")
print(Case_2_Negative_Result)

The below must all be True
[True, True, True, True, True, True, True, True, True, True, True, True, True, True]
The below must all be False
[True, True, True, True, True, True, True]

== Condition 3: String is not guaranteed to be a valid number ==
import re
CompiledPattern = re.compile(r"([+-]?(inf){1}$)|([+-]?[0-9]*\.?[0-9]*$)|([+-]?[0-9]*\.?[0-9]*[eE]{1}[+-]?[0-9]*$)")
Case_3_Positive_Result = list(map(lambda x: True if CompiledPattern.match(x) else False,Valid_Numbers+["inf"]))
print("The below must all be True")
print(Case_3_Positive_Result)

## This check does not assume a valid number, so it handles the negative cases correctly
Case_3_Negative_Result = list(map(lambda x: True if CompiledPattern.match(x) else False,Invalid_Numbers+["++inf"]))
print("The below must all be False")
print(Case_3_Negative_Result)

The below must all be True
[True, True, True, True, True, True, True, True, True, True, True, True, True, True]
The below must all be False
[False, False, False, False, False, False, False]
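[Editor's note: to close the thread out — a minimal sketch of the consensus approach, under the assumption that you want int/float strings accepted but "nan"/"inf" rejected; the function name is illustrative, not from any answer above.]
import math

def is_real_number(s):
    try:
        x = float(s)
    except ValueError:
        return False
    # float() happily parses "nan" and "inf", so reject them explicitly.
    return not (math.isnan(x) or math.isinf(x))

Usage: is_real_number("1e3") and is_real_number("-0.5") return True, while is_real_number("inf") and is_real_number("abc") return False.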
How do I check if a string represents a number (float or int)?
How do I check if a string represents a numeric value in Python?
def is_number(s):
    try:
        float(s)
        return True
    except ValueError:
        return False

The above works, but it seems clunky.
If what you are testing comes from user input, it is still a string even if it represents an int or a float. See How can I read inputs as numbers? for converting the input, and Asking the user for input until they give a valid response for ensuring that the input represents an int or float (or other requirements) before proceeding.
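[Editor's note: an illustrative sketch of the input-validation loop the question links to, reusing the question's own is_number helper; everything beyond that helper is an assumption.]
while True:
    s = input("Enter a number: ")
    if is_number(s):
        value = float(s)  # safe to convert now
        break
    print("That does not look like a number, try again.")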
[ "For non-negative (unsigned) integers only, use isdigit():\n>>> a = \"03523\"\n>>> a.isdigit()\nTrue\n>>> b = \"963spam\"\n>>> b.isdigit()\nFalse\n\n\nDocumentation for isdigit(): Python2, Python3\nFor Python 2 Unicode strings:\nisnumeric().\n", "\nWhich, not only is ugly and slow\n\nI'd dispute both.\nA regex or other string parsing method would be uglier and slower. \nI'm not sure that anything much could be faster than the above. It calls the function and returns. Try/Catch doesn't introduce much overhead because the most common exception is caught without an extensive search of stack frames.\nThe issue is that any numeric conversion function has two kinds of results\n\nA number, if the number is valid\nA status code (e.g., via errno) or exception to show that no valid number could be parsed.\n\nC (as an example) hacks around this a number of ways. Python lays it out clearly and explicitly.\nI think your code for doing this is perfect.\n", "TL;DR The best solution is s.replace('.','',1).isdigit()\nI did some benchmarks comparing the different approaches\ndef is_number_tryexcept(s):\n \"\"\" Returns True is string is a number. \"\"\"\n try:\n float(s)\n return True\n except ValueError:\n return False\n\nimport re \ndef is_number_regex(s):\n \"\"\" Returns True is string is a number. \"\"\"\n if re.match(\"^\\d+?\\.\\d+?$\", s) is None:\n return s.isdigit()\n return True\n\n\ndef is_number_repl_isdigit(s):\n \"\"\" Returns True is string is a number. \"\"\"\n return s.replace('.','',1).isdigit()\n\nIf the string is not a number, the except-block is quite slow. But more importantly, the try-except method is the only approach that handles scientific notations correctly.\nfuncs = [\n is_number_tryexcept, \n is_number_regex,\n is_number_repl_isdigit\n ]\n\na_float = '.1234'\n\nprint('Float notation \".1234\" is not supported by:')\nfor f in funcs:\n if not f(a_float):\n print('\\t -', f.__name__)\n\nFloat notation \".1234\" is not supported by:\n- is_number_regex \nscientific1 = '1.000000e+50'\nscientific2 = '1e50'\n\n\nprint('Scientific notation \"1.000000e+50\" is not supported by:')\nfor f in funcs:\n if not f(scientific1):\n print('\\t -', f.__name__)\n\n\n\n\nprint('Scientific notation \"1e50\" is not supported by:')\nfor f in funcs:\n if not f(scientific2):\n print('\\t -', f.__name__)\n\nScientific notation \"1.000000e+50\" is not supported by:\n- is_number_regex\n- is_number_repl_isdigit\nScientific notation \"1e50\" is not supported by:\n- is_number_regex\n- is_number_repl_isdigit \nEDIT: The benchmark results\nimport timeit\n\ntest_cases = ['1.12345', '1.12.345', 'abc12345', '12345']\ntimes_n = {f.__name__:[] for f in funcs}\n\nfor t in test_cases:\n for f in funcs:\n f = f.__name__\n times_n[f].append(min(timeit.Timer('%s(t)' %f, \n 'from __main__ import %s, t' %f)\n .repeat(repeat=3, number=1000000)))\n\nwhere the following functions were tested\nfrom re import match as re_match\nfrom re import compile as re_compile\n\ndef is_number_tryexcept(s):\n \"\"\" Returns True is string is a number. \"\"\"\n try:\n float(s)\n return True\n except ValueError:\n return False\n\ndef is_number_regex(s):\n \"\"\" Returns True is string is a number. \"\"\"\n if re_match(\"^\\d+?\\.\\d+?$\", s) is None:\n return s.isdigit()\n return True\n\n\ncomp = re_compile(\"^\\d+?\\.\\d+?$\") \n\ndef compiled_regex(s):\n \"\"\" Returns True is string is a number. 
\"\"\"\n if comp.match(s) is None:\n return s.isdigit()\n return True\n\n\ndef is_number_repl_isdigit(s):\n \"\"\" Returns True is string is a number. \"\"\"\n return s.replace('.','',1).isdigit()\n\n\n", "There is one exception that you may want to take into account: the string 'NaN'\nIf you want is_number to return FALSE for 'NaN' this code will not work as Python converts it to its representation of a number that is not a number (talk about identity issues):\n>>> float('NaN')\nnan\n\nOtherwise, I should actually thank you for the piece of code I now use extensively. :)\nG.\n", "how about this:\n'3.14'.replace('.','',1).isdigit()\n\nwhich will return true only if there is one or no '.' in the string of digits.\n'3.14.5'.replace('.','',1).isdigit()\n\nwill return false\nedit: just saw another comment ...\nadding a .replace(badstuff,'',maxnum_badstuff) for other cases can be done. if you are passing salt and not arbitrary condiments (ref:xkcd#974) this will do fine :P\n", "Updated after Alfe pointed out you don't need to check for float separately as complex handles both:\ndef is_number(s):\n try:\n complex(s) # for int, long, float and complex\n except ValueError:\n return False\n\n return True\n\n\nPreviously said: Is some rare cases you might also need to check for complex numbers (e.g. 1+2i), which can not be represented by a float:\ndef is_number(s):\n try:\n float(s) # for int, long and float\n except ValueError:\n try:\n complex(s) # for complex\n except ValueError:\n return False\n\n return True\n\n", "\nWhich, not only is ugly and slow, seems clunky.\n\nIt may take some getting used to, but this is the pythonic way of doing it. As has been already pointed out, the alternatives are worse. But there is one other advantage of doing things this way: polymorphism.\nThe central idea behind duck typing is that \"if it walks and talks like a duck, then it's a duck.\" What if you decide that you need to subclass string so that you can change how you determine if something can be converted into a float? Or what if you decide to test some other object entirely? You can do these things without having to change the above code.\nOther languages solve these problems by using interfaces. I'll save the analysis of which solution is better for another thread. The point, though, is that python is decidedly on the duck typing side of the equation, and you're probably going to have to get used to syntax like this if you plan on doing much programming in Python (but that doesn't mean you have to like it of course).\nOne other thing you might want to take into consideration: Python is pretty fast in throwing and catching exceptions compared to a lot of other languages (30x faster than .Net for instance). Heck, the language itself even throws exceptions to communicate non-exceptional, normal program conditions (every time you use a for loop). Thus, I wouldn't worry too much about the performance aspects of this code until you notice a significant problem.\n", "For int use this:\n>>> \"1221323\".isdigit()\nTrue\n\nBut for float we need some tricks ;-). 
Every float number has one point...\n>>> \"12.34\".isdigit()\nFalse\n>>> \"12.34\".replace('.','',1).isdigit()\nTrue\n>>> \"12.3.4\".replace('.','',1).isdigit()\nFalse\n\nAlso for negative numbers just add lstrip():\n>>> '-12'.lstrip('-')\n'12'\n\nAnd now we get a universal way:\n>>> '-12.34'.lstrip('-').replace('.','',1).isdigit()\nTrue\n>>> '.-234'.lstrip('-').replace('.','',1).isdigit()\nFalse\n\n", "This answer provides step by step guide having function with examples to find the string is:\n\nPositive integer\nPositive/negative - integer/float\nHow to discard \"NaN\" (not a number) strings while checking for number?\n\nCheck if string is positive integer\nYou may use str.isdigit() to check whether given string is positive integer.\nSample Results:\n# For digit\n>>> '1'.isdigit()\nTrue\n>>> '1'.isalpha()\nFalse\n\nCheck for string as positive/negative - integer/float\nstr.isdigit() returns False if the string is a negative number or a float number. For example:\n# returns `False` for float\n>>> '123.3'.isdigit()\nFalse\n# returns `False` for negative number\n>>> '-123'.isdigit()\nFalse\n\nIf you want to also check for the negative integers and float, then you may write a custom function to check for it as:\ndef is_number(n):\n try:\n float(n) # Type-casting the string to `float`.\n # If string is not a valid `float`, \n # it'll raise `ValueError` exception\n except ValueError:\n return False\n return True\n\nSample Run:\n>>> is_number('123') # positive integer number\nTrue\n\n>>> is_number('123.4') # positive float number\nTrue\n \n>>> is_number('-123') # negative integer number\nTrue\n\n>>> is_number('-123.4') # negative `float` number\nTrue\n\n>>> is_number('abc') # `False` for \"some random\" string\nFalse\n\nDiscard \"NaN\" (not a number) strings while checking for number\nThe above functions will return True for the \"NAN\" (Not a number) string because for Python it is valid float representing it is not a number. For example:\n>>> is_number('NaN')\nTrue\n\nIn order to check whether the number is \"NaN\", you may use math.isnan() as:\n>>> import math\n>>> nan_num = float('nan')\n\n>>> math.isnan(nan_num)\nTrue\n\nOr if you don't want to import additional library to check this, then you may simply check it via comparing it with itself using ==. Python returns False when nan float is compared with itself. For example:\n# `nan_num` variable is taken from above example\n>>> nan_num == nan_num\nFalse\n\nHence, above function is_number can be updated to return False for \"NaN\" as:\ndef is_number(n):\n is_number = True\n try:\n num = float(n)\n # check for \"nan\" floats\n is_number = num == num # or use `math.isnan(num)`\n except ValueError:\n is_number = False\n return is_number\n\nSample Run:\n>>> is_number('Nan') # not a number \"Nan\" string\nFalse\n\n>>> is_number('nan') # not a number string \"nan\" with all lower cased\nFalse\n\n>>> is_number('123') # positive integer\nTrue\n\n>>> is_number('-123') # negative integer\nTrue\n\n>>> is_number('-1.12') # negative `float`\nTrue\n\n>>> is_number('abc') # \"some random\" string\nFalse\n\nPS: Each operation for each check depending on the type of number comes with additional overhead. Choose the version of is_number function which fits your requirement.\n", "For strings of non-numbers, try: except: is actually slower than regular expressions. For strings of valid numbers, regex is slower. So, the appropriate method depends on your input. 
\nIf you find that you are in a performance bind, you can use a new third-party module called fastnumbers that provides a function called isfloat. Full disclosure, I am the author. I have included its results in the timings below.\n\nfrom __future__ import print_function\nimport timeit\n\nprep_base = '''\\\nx = 'invalid'\ny = '5402'\nz = '4.754e3'\n'''\n\nprep_try_method = '''\\\ndef is_number_try(val):\n try:\n float(val)\n return True\n except ValueError:\n return False\n\n'''\n\nprep_re_method = '''\\\nimport re\nfloat_match = re.compile(r'[-+]?\\d*\\.?\\d+(?:[eE][-+]?\\d+)?$').match\ndef is_number_re(val):\n return bool(float_match(val))\n\n'''\n\nfn_method = '''\\\nfrom fastnumbers import isfloat\n\n'''\n\nprint('Try with non-number strings', timeit.timeit('is_number_try(x)',\n prep_base + prep_try_method), 'seconds')\nprint('Try with integer strings', timeit.timeit('is_number_try(y)',\n prep_base + prep_try_method), 'seconds')\nprint('Try with float strings', timeit.timeit('is_number_try(z)',\n prep_base + prep_try_method), 'seconds')\nprint()\nprint('Regex with non-number strings', timeit.timeit('is_number_re(x)',\n prep_base + prep_re_method), 'seconds')\nprint('Regex with integer strings', timeit.timeit('is_number_re(y)',\n prep_base + prep_re_method), 'seconds')\nprint('Regex with float strings', timeit.timeit('is_number_re(z)',\n prep_base + prep_re_method), 'seconds')\nprint()\nprint('fastnumbers with non-number strings', timeit.timeit('isfloat(x)',\n prep_base + 'from fastnumbers import isfloat'), 'seconds')\nprint('fastnumbers with integer strings', timeit.timeit('isfloat(y)',\n prep_base + 'from fastnumbers import isfloat'), 'seconds')\nprint('fastnumbers with float strings', timeit.timeit('isfloat(z)',\n prep_base + 'from fastnumbers import isfloat'), 'seconds')\nprint()\n\n\nTry with non-number strings 2.39108395576 seconds\nTry with integer strings 0.375686168671 seconds\nTry with float strings 0.369210958481 seconds\n\nRegex with non-number strings 0.748660802841 seconds\nRegex with integer strings 1.02021503448 seconds\nRegex with float strings 1.08564686775 seconds\n\nfastnumbers with non-number strings 0.174362897873 seconds\nfastnumbers with integer strings 0.179651021957 seconds\nfastnumbers with float strings 0.20222902298 seconds\n\nAs you can see\n\ntry: except: was fast for numeric input but very slow for an invalid input\nregex is very efficient when the input is invalid\nfastnumbers wins in both cases\n\n", "I know this is particularly old but I would add an answer I believe covers the information missing from the highest voted answer that could be very valuable to any who find this:\nFor each of the following methods connect them with a count if you need any input to be accepted. 
(Assuming we are using vocal definitions of integers rather than 0-255, etc.)\nx.isdigit()\nworks well for checking if x is an integer.\nx.replace('-','').isdigit()\nworks well for checking if x is a negative.(Check - in first position)\nx.replace('.','').isdigit()\nworks well for checking if x is a decimal.\nx.replace(':','').isdigit()\nworks well for checking if x is a ratio.\nx.replace('/','',1).isdigit()\nworks well for checking if x is a fraction.\n", "Just Mimic C#\nIn C# there are two different functions that handle parsing of scalar values:\n\nFloat.Parse()\nFloat.TryParse()\n\nfloat.parse():\ndef parse(string):\n try:\n return float(string)\n except Exception:\n throw TypeError\n\nNote: If you're wondering why I changed the exception to a TypeError, here's the documentation.\nfloat.try_parse():\ndef try_parse(string, fail=None):\n try:\n return float(string)\n except Exception:\n return fail;\n\nNote: You don't want to return the boolean 'False' because that's still a value type. None is better because it indicates failure. Of course, if you want something different you can change the fail parameter to whatever you want.\nTo extend float to include the 'parse()' and 'try_parse()' you'll need to monkeypatch the 'float' class to add these methods.\nIf you want respect pre-existing functions the code should be something like:\ndef monkey_patch():\n if(!hasattr(float, 'parse')):\n float.parse = parse\n if(!hasattr(float, 'try_parse')):\n float.try_parse = try_parse\n\nSideNote: I personally prefer to call it Monkey Punching because it feels like I'm abusing the language when I do this but YMMV.\nUsage:\nfloat.parse('giggity') // throws TypeException\nfloat.parse('54.3') // returns the scalar value 54.3\nfloat.tryParse('twank') // returns None\nfloat.tryParse('32.2') // returns the scalar value 32.2\n\nAnd the great Sage Pythonas said to the Holy See Sharpisus, \"Anything you can do I can do better; I can do anything better than you.\"\n", "Casting to float and catching ValueError is probably the fastest way, since float() is specifically meant for just that. Anything else that requires string parsing (regex, etc) will likely be slower due to the fact that it's not tuned for this operation. My $0.02.\n", "You can use Unicode strings, they have a method to do just what you want:\n>>> s = u\"345\"\n>>> s.isnumeric()\nTrue\n\nOr:\n>>> s = \"345\"\n>>> u = unicode(s)\n>>> u.isnumeric()\nTrue\n\nhttp://www.tutorialspoint.com/python/string_isnumeric.htm\nhttp://docs.python.org/2/howto/unicode.html\n", "So to put it all together, checking for Nan, infinity and complex numbers (it would seem they are specified with j, not i, i.e. 1+2j) it results in:\ndef is_number(s):\n try:\n n=str(float(s))\n if n == \"nan\" or n==\"inf\" or n==\"-inf\" : return False\n except ValueError:\n try:\n complex(s) # for complex\n except ValueError:\n return False\n return True\n\n", "I wanted to see which method is fastest. Overall the best and most consistent results were given by the check_replace function. 
The fastest results were given by the check_exception function, but only if there was no exception fired - meaning its code is the most efficient, but the overhead of throwing an exception is quite large.\nPlease note that checking for a successful cast is the only method which is accurate, for example, this works with check_exception but the other two test functions will return False for a valid float:\nhuge_number = float('1e+100')\n\nHere is the benchmark code:\nimport time, re, random, string\n\nITERATIONS = 10000000\n\nclass Timer: \n def __enter__(self):\n self.start = time.clock()\n return self\n def __exit__(self, *args):\n self.end = time.clock()\n self.interval = self.end - self.start\n\ndef check_regexp(x):\n return re.compile(\"^\\d*\\.?\\d*$\").match(x) is not None\n\ndef check_replace(x):\n return x.replace('.','',1).isdigit()\n\ndef check_exception(s):\n try:\n float(s)\n return True\n except ValueError:\n return False\n\nto_check = [check_regexp, check_replace, check_exception]\n\nprint('preparing data...')\ngood_numbers = [\n str(random.random() / random.random()) \n for x in range(ITERATIONS)]\n\nbad_numbers = ['.' + x for x in good_numbers]\n\nstrings = [\n ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(random.randint(1,10)))\n for x in range(ITERATIONS)]\n\nprint('running test...')\nfor func in to_check:\n with Timer() as t:\n for x in good_numbers:\n res = func(x)\n print('%s with good floats: %s' % (func.__name__, t.interval))\n with Timer() as t:\n for x in bad_numbers:\n res = func(x)\n print('%s with bad floats: %s' % (func.__name__, t.interval))\n with Timer() as t:\n for x in strings:\n res = func(x)\n print('%s with strings: %s' % (func.__name__, t.interval))\n\nHere are the results with Python 2.7.10 on a 2017 MacBook Pro 13:\ncheck_regexp with good floats: 12.688639\ncheck_regexp with bad floats: 11.624862\ncheck_regexp with strings: 11.349414\ncheck_replace with good floats: 4.419841\ncheck_replace with bad floats: 4.294909\ncheck_replace with strings: 4.086358\ncheck_exception with good floats: 3.276668\ncheck_exception with bad floats: 13.843092\ncheck_exception with strings: 15.786169\n\nHere are the results with Python 3.6.5 on a 2017 MacBook Pro 13:\ncheck_regexp with good floats: 13.472906000000009\ncheck_regexp with bad floats: 12.977665000000016\ncheck_regexp with strings: 12.417542999999995\ncheck_replace with good floats: 6.011045999999993\ncheck_replace with bad floats: 4.849356\ncheck_replace with strings: 4.282754000000011\ncheck_exception with good floats: 6.039081999999979\ncheck_exception with bad floats: 9.322753000000006\ncheck_exception with strings: 9.952595000000002\n\nHere are the results with PyPy 2.7.13 on a 2017 MacBook Pro 13:\ncheck_regexp with good floats: 2.693217\ncheck_regexp with bad floats: 2.744819\ncheck_regexp with strings: 2.532414\ncheck_replace with good floats: 0.604367\ncheck_replace with bad floats: 0.538169\ncheck_replace with strings: 0.598664\ncheck_exception with good floats: 1.944103\ncheck_exception with bad floats: 2.449182\ncheck_exception with strings: 2.200056\n\n", "The input may be as follows:\na=\"50\"\nb=50\nc=50.1\nd=\"50.1\"\n\n1-General input:\nThe input of this function can be everything!\nFinds whether the given variable is numeric. Numeric strings consist of optional sign, any number of digits, optional decimal part and optional exponential part. Thus +0123.45e6 is a valid numeric value. Hexadecimal (e.g. 0xf4c3b00c) and binary (e.g. 
0b10100111001) notation is not allowed.\nis_numeric function\nimport ast\nimport numbers \ndef is_numeric(obj):\n if isinstance(obj, numbers.Number):\n return True\n elif isinstance(obj, str):\n nodes = list(ast.walk(ast.parse(obj)))[1:]\n if not isinstance(nodes[0], ast.Expr):\n return False\n if not isinstance(nodes[-1], ast.Num):\n return False\n nodes = nodes[1:-1]\n for i in range(len(nodes)):\n #if used + or - in digit :\n if i % 2 == 0:\n if not isinstance(nodes[i], ast.UnaryOp):\n return False\n else:\n if not isinstance(nodes[i], (ast.USub, ast.UAdd)):\n return False\n return True\n else:\n return False\n\ntest:\n>>> is_numeric(\"54\")\nTrue\n>>> is_numeric(\"54.545\")\nTrue\n>>> is_numeric(\"0x45\")\nTrue\n\nis_float function\nFinds whether the given variable is float. float strings consist of optional sign, any number of digits, ...\nimport ast\n\ndef is_float(obj):\n if isinstance(obj, float):\n return True\n if isinstance(obj, int):\n return False\n elif isinstance(obj, str):\n nodes = list(ast.walk(ast.parse(obj)))[1:]\n if not isinstance(nodes[0], ast.Expr):\n return False\n if not isinstance(nodes[-1], ast.Num):\n return False\n if not isinstance(nodes[-1].n, float):\n return False\n nodes = nodes[1:-1]\n for i in range(len(nodes)):\n if i % 2 == 0:\n if not isinstance(nodes[i], ast.UnaryOp):\n return False\n else:\n if not isinstance(nodes[i], (ast.USub, ast.UAdd)):\n return False\n return True\n else:\n return False\n\ntest:\n>>> is_float(\"5.4\")\nTrue\n>>> is_float(\"5\")\nFalse\n>>> is_float(5)\nFalse\n>>> is_float(\"5\")\nFalse\n>>> is_float(\"+5.4\")\nTrue\n\nwhat is ast?\n\n2- If you are confident that the variable content is String:\nuse str.isdigit() method\n>>> a=454\n>>> a.isdigit()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: 'int' object has no attribute 'isdigit'\n>>> a=\"454\"\n>>> a.isdigit()\nTrue\n\n\n3-Numerical input:\ndetect int value:\n>>> isinstance(\"54\", int)\nFalse\n>>> isinstance(54, int)\nTrue\n>>> \n\ndetect float:\n>>> isinstance(\"45.1\", float)\nFalse\n>>> isinstance(45.1, float)\nTrue\n\n", "In a most general case for a float, one would like to take care of integers and decimals. Let's take the string \"1.1\" as an example.\nI would try one of the following:\n1.> isnumeric()\nword = \"1.1\"\n\n\"\".join(word.split(\".\")).isnumeric()\n>>> True\n\n2.> isdigit()\nword = \"1.1\"\n\n\"\".join(word.split(\".\")).isdigit()\n>>> True\n\n3.> isdecimal()\nword = \"1.1\"\n\n\"\".join(word.split(\".\")).isdecimal()\n>>> True\n\nSpeed:\n► All the aforementioned methods have similar speeds.\n%timeit \"\".join(word.split(\".\")).isnumeric()\n>>> 257 ns ± 12 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)\n\n%timeit \"\".join(word.split(\".\")).isdigit()\n>>> 252 ns ± 11 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)\n\n%timeit \"\".join(word.split(\".\")).isdecimal()\n>>> 244 ns ± 7.17 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)\n\n", "str.isnumeric()\n\nReturn True if all characters in the string are numeric characters,\n and there is at least one character, False otherwise. Numeric\n characters include digit characters, and all characters that have the\n Unicode numeric value property, e.g. U+2155, VULGAR FRACTION ONE\n FIFTH. 
Formally, numeric characters are those with the property value\n Numeric_Type=Digit, Numeric_Type=Decimal or Numeric_Type=Numeric.\n\nstr.isdecimal()\n\nReturn True if all characters in the string are decimal characters and\n there is at least one character, False otherwise. Decimal characters\n are those that can be used to form numbers in base 10, e.g. U+0660,\n ARABIC-INDIC DIGIT ZERO. Formally a decimal character is a character\n in the Unicode General Category “Nd”.\n\nBoth available for string types from Python 3.0. \n", "I needed to determine if a string cast into basic types (float,int,str,bool). After not finding anything on the internet I created this:\ndef str_to_type (s):\n \"\"\" Get possible cast type for a string\n\n Parameters\n ----------\n s : string\n\n Returns\n -------\n float,int,str,bool : type\n Depending on what it can be cast to\n\n \"\"\" \n try: \n f = float(s) \n if \".\" not in s:\n return int\n return float\n except ValueError:\n value = s.upper()\n if value == \"TRUE\" or value == \"FALSE\":\n return bool\n return type(s)\n\nExample\nstr_to_type(\"true\") # bool\nstr_to_type(\"6.0\") # float\nstr_to_type(\"6\") # int\nstr_to_type(\"6abc\") # str\nstr_to_type(u\"6abc\") # unicode \n\nYou can capture the type and use it \ns = \"6.0\"\ntype_ = str_to_type(s) # float\nf = type_(s) \n\n", "I think your solution is fine, but there is a correct regexp implementation.\nThere does seem to be a lot of regexp hate towards these answers which I think is unjustified, regexps can be reasonably clean and correct and fast. It really depends on what you're trying to do. The original question was how can you \"check if a string can be represented as a number (float)\" (as per your title). Presumably you would want to use the numeric/float value once you've checked that it's valid, in which case your try/except makes a lot of sense. But if, for some reason, you just want to validate that a string is a number then a regex also works fine, but it's hard to get correct. I think most of the regex answers so far, for example, do not properly parse strings without an integer part (such as \".7\") which is a float as far as python is concerned. And that's slightly tricky to check for in a single regex where the fractional portion is not required. I've included two regex to show this.\nIt does raise the interesting question as to what a \"number\" is. Do you include \"inf\" which is valid as a float in python? Or do you include numbers that are \"numbers\" but maybe can't be represented in python (such as numbers that are larger than the float max).\nThere's also ambiguities in how you parse numbers. For example, what about \"--20\"? Is this a \"number\"? Is this a legal way to represent \"20\"? 
Python will let you do \"var = --20\" and set it to 20 (though really this is because it treats it as an expression), but float(\"--20\") does not work.\nAnyways, without more info, here's a regex that I believe covers all the ints and floats as python parses them.\n# Doesn't properly handle floats missing the integer part, such as \".7\"\nSIMPLE_FLOAT_REGEXP = re.compile(r'^[-+]?[0-9]+\\.?[0-9]+([eE][-+]?[0-9]+)?$')\n# Example \"-12.34E+56\" # sign (-)\n # integer (12)\n # mantissa (34)\n # exponent (E+56)\n\n# Should handle all floats\nFLOAT_REGEXP = re.compile(r'^[-+]?([0-9]+|[0-9]*\\.[0-9]+)([eE][-+]?[0-9]+)?$')\n# Example \"-12.34E+56\" # sign (-)\n # integer (12)\n # OR\n # int/mantissa (12.34)\n # exponent (E+56)\n\ndef is_float(str):\n return True if FLOAT_REGEXP.match(str) else False\n\nSome example test values:\nTrue <- +42\nTrue <- +42.42\nFalse <- +42.42.22\nTrue <- +42.42e22\nTrue <- +42.42E-22\nFalse <- +42.42e-22.8\nTrue <- .42\nFalse <- 42nope\n\nRunning the benchmarking code in @ron-reiter's answer shows that this regex is actually faster than the normal regex and is much faster at handling bad values than the exception, which makes some sense. Results:\ncheck_regexp with good floats: 18.001921\ncheck_regexp with bad floats: 17.861423\ncheck_regexp with strings: 17.558862\ncheck_correct_regexp with good floats: 11.04428\ncheck_correct_regexp with bad floats: 8.71211\ncheck_correct_regexp with strings: 8.144161\ncheck_replace with good floats: 6.020597\ncheck_replace with bad floats: 5.343049\ncheck_replace with strings: 5.091642\ncheck_exception with good floats: 5.201605\ncheck_exception with bad floats: 23.921864\ncheck_exception with strings: 23.755481\n\n", "I did some speed test. Lets say that if the string is likely to be a number the try/except strategy is the fastest possible.If the string is not likely to be a number and you are interested in Integer check, it worths to do some test (isdigit plus heading '-'). \nIf you are interested to check float number, you have to use the try/except code whitout escape.\n", "RyanN suggests\n\nIf you want to return False for a NaN and Inf, change line to x = float(s); return (x == x) and (x - 1 != x). This should return True for all floats except Inf and NaN\n\nBut this doesn't quite work, because for sufficiently large floats, x-1 == x returns true. For example, 2.0**54 - 1 == 2.0**54\n", "I was working on a problem that led me to this thread, namely how to convert a collection of data to strings and numbers in the most intuitive way. 
I realized after reading the original code that what I needed was different in two ways:\n1 - I wanted an integer result if the string represented an integer\n2 - I wanted a number or a string result to stick into a data structure\nso I adapted the original code to produce this derivative:\ndef string_or_number(s):\n try:\n z = int(s)\n return z\n except ValueError:\n try:\n z = float(s)\n return z\n except ValueError:\n return s\n\n", "import re\ndef is_number(num):\n pattern = re.compile(r'^[-+]?[-0-9]\\d*\\.\\d*|[-+]?\\.?[0-9]\\d*$')\n result = pattern.match(num)\n if result:\n return True\n else:\n return False\n\n\n​>>>: is_number('1')\nTrue\n\n>>>: is_number('111')\nTrue\n\n>>>: is_number('11.1')\nTrue\n\n>>>: is_number('-11.1')\nTrue\n\n>>>: is_number('inf')\nFalse\n\n>>>: is_number('-inf')\nFalse\n\n", "This code handles the exponents, floats, and integers, wihtout using regex.\nreturn True if str1.lstrip('-').replace('.','',1).isdigit() or float(str1) else False\n\n", "Here's my simple way of doing it. Let's say that I'm looping through some strings and I want to add them to an array if they turn out to be numbers.\ntry:\n myvar.append( float(string_to_check) )\nexcept:\n continue\n\nReplace the myvar.apppend with whatever operation you want to do with the string if it turns out to be a number. The idea is to try to use a float() operation and use the returned error to determine whether or not the string is a number.\n", "I also used the function you mentioned, but soon I notice that strings as \"Nan\", \"Inf\" and it's variation are considered as number. So I propose you improved version of your function, that will return false on those type of input and will not fail \"1e3\" variants:\ndef is_float(text):\n try:\n float(text)\n # check for nan/infinity etc.\n if text.isalpha():\n return False\n return True\n except ValueError:\n return False\n\n", "User helper function:\ndef if_ok(fn, string):\n try:\n return fn(string)\n except Exception as e:\n return None\n\nthen \nif_ok(int, my_str) or if_ok(float, my_str) or if_ok(complex, my_str)\nis_number = lambda s: any([if_ok(fn, s) for fn in (int, float, complex)])\n\n", "def is_float(s):\n if s is None:\n return False\n\n if len(s) == 0:\n return False\n\n digits_count = 0\n dots_count = 0\n signs_count = 0\n\n for c in s:\n if '0' <= c <= '9':\n digits_count += 1\n elif c == '.':\n dots_count += 1\n elif c == '-' or c == '+':\n signs_count += 1\n else:\n return False\n\n if digits_count == 0:\n return False\n\n if dots_count > 1:\n return False\n\n if signs_count > 1:\n return False\n\n return True\n\n", "I know I'm late to the party, but figured out a solution which wasn't here:\nThis solution follows the EAFP principle in Python\ndef get_number_from_string(value):\n try:\n int_value = int(value)\n return int_value\n\n except ValueError:\n return float(value)\n\nExplanation:\nIf the value in the string is a float and I first try to parse it as an int, it will throw a ValueError. So, I catch that error and parse the value as float and return.\n", "You can generalize the exception technique in a useful way by returning more useful values than True and False. For example this function puts quotes round strings but leaves numbers alone. Which is just what I needed for a quick and dirty filter to make some variable definitions for R. 
\nimport sys\n\ndef fix_quotes(s):\n try:\n float(s)\n return s\n except ValueError:\n return '\"{0}\"'.format(s)\n\nfor line in sys.stdin:\n input = line.split()\n print input[0], '<- c(', ','.join(fix_quotes(c) for c in input[1:]), ')'\n\n", "Try this.\n def is_number(var):\n try:\n if var == int(var):\n return True\n except Exception:\n return False\n\n", "Sorry for the Zombie thread post - just wanted to round out the code for completeness...\n# is_number() function - Uses re = regex library\n# Should handle all normal and complex numbers\n# Does not accept trailing spaces. \n# Note: accepts both engineering \"j\" and math \"i\" but only the imaginary part \"+bi\" of a complex number a+bi\n# Also accepts inf or NaN\n# Thanks to the earlier responders for most the regex fu\n\nimport re\n\nISNUM_REGEXP = re.compile(r'^[-+]?([0-9]+|[0-9]*\\.[0-9]+)([eE][-+]?[0-9]+)?[ij]?$')\n\ndef is_number(str):\n#change order if you have a lot of NaN or inf to parse\n if ISNUM_REGEXP.match(str) or str == \"NaN\" or str == \"inf\": \n return True \n else:\n return False\n# A couple test numbers\n# +42.42e-42j\n# -42.42E+42i\n\nprint('Is it a number?', is_number(input('Gimme any number: ')))\n\nGimme any number: +42.42e-42j\nIs it a number? True\n", "For my very simple and very common use-case: is this human written string with keyboard a number?\nI read through most answers, and ended up with:\ndef isNumeric(string):\n result = True\n try:\n x = float(string)\n result = (x == x) and (x - 1 != x)\n except ValueError:\n result = False\n return result\n\nIt will return False for (+-)NaN and (+-)inf.\nYou can check it out here: https://trinket.io/python/ce32c0e54e\n", "One fast and simple option is to check the data type:\ndef is_number(value):\n return type(value) in [int, float]\n\nOr if you want to test if the values os a string are numeric:\ndef isNumber (value):\n return True if type(value) in [int, float] else str(value).replace('.','',1).isdigit()\n\ntests:\n>>> isNumber(1)\nTrue\n\n>>> isNumber(1/3)\nTrue\n\n>>> isNumber(1.3)\nTrue\n\n>>> isNumber('1.3')\nTrue\n\n>>> isNumber('s1.3')\nFalse\n\n", "There are already good answers in this post. I wanted to give a slightly different perspective.\nInstead of searching for a digit, number or float we could do a negative search for an alphabet. i.e. we could ask the program to look if it is not alphabet.\n## Check whether it is not alpha rather than checking if it is digit\nprint(not \"-1.2345\".isalpha())\nprint(not \"-1.2345e-10\".isalpha())\n\nIt will work well if you are sure that your string is a well formed number (Condition 1 and Condition 2 below). However it will fail if the string is not a well formed number by mistake. In such a case it will return a number match even if the string was not a valid number. To take care of this situation, there are many rule based methods must be there. However at this moment, regex comes to my mind. Below are three cases. Please note regex can be much better since I am not a regex expert. Below there are two lists: one for valid numbers and one for invalid numbers. 
Valid numbers must be picked up while the invalid numbers must not be.\n== Condition 1: String is guranteed to be a valid number but 'inf' is not picked ==\nValid_Numbers = [\"1\",\"-1\",\"+1\",\"0.0\",\".1\",\"1.2345\",\"-1.2345\",\"+1.2345\",\"1.2345e10\",\"1.2345e-10\",\"-1.2345e10\",\"-1.2345E10\",\"-inf\"]\nInvalid_Numbers = [\"1.1.1\",\"++1\",\"--1\",\"-1-1\",\"1.23e10e5\",\"--inf\"]\n\n################################ Condition 1: Valid number excludes 'inf' ####################################\n\nCase_1_Positive_Result = list(map(lambda x: not x.isalpha(),Valid_Numbers))\nprint(\"The below must all be True\")\nprint(Case_1_Positive_Result)\n\n## This check assumes a valid number. So it fails for the negative cases and wrongly detects string as number\nCase_1_Negative_Result = list(map(lambda x: not x.isalpha(),Invalid_Numbers))\nprint(\"The below must all be False\")\nprint(Case_1_Negative_Result)\n\nThe below must all be True\n[True, True, True, True, True, True, True, True, True, True, True, True, True]\nThe below must all be False\n[True, True, True, True, True, True]\n\n== Condition 2: String is guranteed to be a valid number and 'inf' is picked ==\n################################ Condition 2: Valid number includes 'inf' ###################################\nCase_2_Positive_Result = list(map(lambda x: x==\"inf\" or not x.isalpha(),Valid_Numbers+[\"inf\"]))\nprint(\"The below must all be True\")\nprint(Case_2_Positive_Result)\n\n## This check assumes a valid number. So it fails for the negative cases and wrongly detects string as number\nCase_2_Negative_Result = list(map(lambda x: x==\"inf\" or not x.isalpha(),Invalid_Numbers+[\"++inf\"]))\nprint(\"The below must all be False\")\nprint(Case_2_Negative_Result)\n\nThe below must all be True\n[True, True, True, True, True, True, True, True, True, True, True, True, True, True]\nThe below must all be False\n[True, True, True, True, True, True, True]\n\n== Condition 3: String is not guranteed to be a valid number ==\nimport re\nCompiledPattern = re.compile(r\"([+-]?(inf){1}$)|([+-]?[0-9]*\\.?[0-9]*$)|([+-]?[0-9]*\\.?[0-9]*[eE]{1}[+-]?[0-9]*$)\")\nCase_3_Positive_Result = list(map(lambda x: True if CompiledPattern.match(x) else False,Valid_Numbers+[\"inf\"]))\nprint(\"The below must all be True\")\nprint(Case_3_Positive_Result)\n\n## This check assumes a valid number. So it fails for the negative cases and wrongly detects string as number\nCase_3_Negative_Result = list(map(lambda x: True if CompiledPattern.match(x) else False,Invalid_Numbers+[\"++inf\"]))\nprint(\"The below must all be False\")\nprint(Case_3_Negative_Result)\n\nThe below must all be True\n[True, True, True, True, True, True, True, True, True, True, True, True, True, True]\nThe below must all be False\n[False, False, False, False, False, False, False]\n\n" ]
[ 1733, 770, 285, 79, 67, 48, 45, 31, 29, 17, 15, 14, 12, 11, 9, 9, 9, 8, 7, 5, 5, 4, 3, 3, 2, 2, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0 ]
[ "I have a similar problem. Instead of defining a isNumber function, I want to convert a list of strings to floats, something that in high-level terms would be:\n[ float(s) for s in list if isFloat(s)]\n\nIt is a given we can not really separate the float(s) from the isFloat(s) functions: these two results should be returned by the same function. \nAlso, if float(s) fails, the whole process fails, instead of just ignoring the faulty element. Plus, \"0\" is a valid number and should be included in the list. When filtering out bad elements, be certain not to exclude 0.\nTherefore, the above comprehension must be modified somehow to:\n\nif any element in the list cannot be converted, ignore it and don't throw an exception\navoid calling float(s) more than once for each element (one for the conversion, the other for the test)\nif the converted value is 0, it should still be present in the final list \n\nI propose a solution inspired in the Nullable numerical types of C#. These types are internally represented by a struct that has the numerical value and adds a boolean indicating if the value is valid:\ndef tryParseFloat(s):\n try:\n return(float(s), True)\n except:\n return(None, False)\n\ntupleList = [tryParseFloat(x) for x in list]\nfloats = [v for v,b in tupleList if b]\n\n", "use following it handles all cases:-\nimport re\na=re.match('((\\d+[\\.]\\d*$)|(\\.)\\d+$)' , '2.3') \na=re.match('((\\d+[\\.]\\d*$)|(\\.)\\d+$)' , '2.')\na=re.match('((\\d+[\\.]\\d*$)|(\\.)\\d+$)' , '.3')\na=re.match('((\\d+[\\.]\\d*$)|(\\.)\\d+$)' , '2.3sd')\na=re.match('((\\d+[\\.]\\d*$)|(\\.)\\d+$)' , '2.3')\n\n" ]
[ -2, -3 ]
[ "casting", "floating_point", "python", "type_conversion" ]
stackoverflow_0000354038_casting_floating_point_python_type_conversion.txt
Q: Compare 2 lists consisting of dictionaries by key python
there are 2 lists of dictionaries with the same keys, for example:
old = [{'key1': 'AAA', 'key2': 'value2', 'key3': 'value3'},{'key1': 'BBB', 'key2': 'value4', 'key3': 'value5'},{'key1': 'CCC', 'key2': 'value4', 'key3': 'value5'}]
new = [{'key1': 'BBB', 'key2': 'value2', 'key3': 'value3'},{'key1': 'CCC', 'key2': 'value4', 'key3': 'value1'}]

The task is to get old[0] ({'key1': 'AAA', ...}) via the key 'key1'. Tried the method below, but if the lengths of the lists are different, it does not work:
for x in old:
    for y in new:
        if x['key1'] in y['key1']:
            old.remove(x)

A: Use the nested loop the other way round:
for y in new:
    for x in old:
        if x['key1'] == y['key1']:
            old.remove(x)
print(old)

which produces:
[{'key1': 'AAA', 'key2': 'value2', 'key3': 'value3'}]
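[Editor's note: an editorial alternative, not from the thread — collecting the 'key1' values from new into a set avoids both the nested loop and the pitfall of removing items from a list while iterating over it; it assumes the 'key1' values are hashable, as the string values in the example are.]
new_keys = {d['key1'] for d in new}          # {'BBB', 'CCC'}
old = [d for d in old if d['key1'] not in new_keys]
print(old)  # [{'key1': 'AAA', 'key2': 'value2', 'key3': 'value3'}]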
Compare 2 lists consisting of dictionaries by key python
there are 2 lists of dictionaries with the same keys, for example:
old = [{'key1': 'AAA', 'key2': 'value2', 'key3': 'value3'},{'key1': 'BBB', 'key2': 'value4', 'key3': 'value5'},{'key1': 'CCC', 'key2': 'value4', 'key3': 'value5'}]
new = [{'key1': 'BBB', 'key2': 'value2', 'key3': 'value3'},{'key1': 'CCC', 'key2': 'value4', 'key3': 'value1'}]

The task is to get old[0] ({'key1': 'AAA', ...}) via the key 'key1'. Tried the method below, but if the lengths of the lists are different, it does not work:
for x in old:
    for y in new:
        if x['key1'] in y['key1']:
            old.remove(x)
[ "Use the nested loop the other way round:\nfor y in new:\n for x in old:\n if x['key1'] == y['key1']:\n old.remove(x)\nprint(old)\n\nwhich produces:\n[{'key1': 'AAA', 'key2': 'value2', 'key3': 'value3'}]\n\n" ]
[ 0 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0074579817_dictionary_list_python.txt
Q: Why can I multiply string by int, but not variable with int value?
I am making a text-based geometry calculator, and I'm working on a perimeter calculator feature. Using a for loop, it will ask you for a side length x amount of times, x being the side count or sideCnt. In order for the for loop to work, I believe I need to set up a string that has characters equal to the amount of cycles; to do this I'm using a string and multiplying it by sideCnt. When I use an integer it works, but with the variable integer it gives me an error.
This is the code with a normal int:
sideStr = "z" * 3
print(sideStr)

which returns:
zzz

However when I use a variable as int
sideCnt = 3
sideStr = "z" * sideCnt
print(sideStr)

it returns:
TypeError: can't multiply sequence by non-int of type 'str'

does anyone know what I am doing wrong? I only started messing with for loops yesterday so sorry if the solution is obvious

A: Are you sure you don't have:
sideCnt = "3"

Somewhere in your code, or something similar?
Maybe you are reading 3 in from user input and not casting it to an int?
There is definitely something else happening in the code you are not showing; what you have shown will work fine.
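[Editor's note: a hedged guess at a reproduction, assuming sideCnt originally came from input() — input() returns a str, and "z" * "3" raises exactly this TypeError, so the fix is to cast before multiplying.]
sideCnt = input("Number of sides: ")   # user types 3, so sideCnt == "3"
# sideStr = "z" * sideCnt              # TypeError: can't multiply sequence by non-int of type 'str'
sideCnt = int(sideCnt)                 # cast the string to an int first
sideStr = "z" * sideCnt
print(sideStr)                         # zzz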
Why can I multiply string by int, but not variable with int value?
I am making a text-based geometry calculator, and I'm working on a perimeter calculator feature. Using a for loop, it will ask you for a side length x amount of times, x being the side count or sideCnt. In order for the for loop to work, I believe I need to set up a string that has characters equal to the amount of cycles; to do this I'm using a string and multiplying it by sideCnt. When I use an integer it works, but with the variable integer it gives me an error.
This is the code with a normal int:
sideStr = "z" * 3
print(sideStr)

which returns:
zzz

However when I use a variable as int
sideCnt = 3
sideStr = "z" * sideCnt
print(sideStr)

it returns:
TypeError: can't multiply sequence by non-int of type 'str'

does anyone know what I am doing wrong? I only started messing with for loops yesterday so sorry if the solution is obvious
[ "Are you sure you don't have:\nsideCnt = \"3\"\n\nSomewhere in your code, or something similar?\nMaybe you are reading 3 in from user input and not casting it to an int?\nThere is definitely something else happening in your code, you are not showing, what you have shown will work fine.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074580047_python.txt
Q: Should you decorate dataclass subclasses if not making additional fields
If you don't add any more fields to your subclass, is there a need to add the @dataclass decorator to it, and would it do anything? If there is no difference, which is the usual convention?
from dataclasses import dataclass

@dataclass
class AAA:
    x: str
    y: str
    ...

# decorate?
class BBB(AAA):
    ...

A: If you don't decorate your class, the variables defined in the root of the class will be class attributes which would be shared between all instances of the class. The dataclass lets you define the variables there, but they would be instance attributes, so each instance can hold its own value. If you instantiate only a single object, you won't see the difference.
UPDATE
Misunderstood your question. The decorator "adds" a few methods to your class, such as __init__ and __repr__. If you're not adding any attributes to your child class, then adding the decorator just instructs the interpreter to go through the MRO to update the mapping, which hasn't changed. It will not cause any trouble, it's just unnecessary. I haven't seen any well-defined convention for it but I wouldn't do it.

A: From documentation, dataclasses source code and experimenting with class instances, I don't see any difference, even if new fields are added to a subclass.
If we are talking conventions, then I would advise against decorating a subclass. Class BBB's definition is supposed to say "this class behaves just like AAA, but with these changes" (likely a couple of new methods in your case).
Re-decorating BBB:

serves no purpose
violates the DRY principle: although it's just one line, there's no particular difference between re-decorating and copy-pasting a short method from the superclass
could (potentially) get in the way of changing AAA: you might theoretically switch to another library for dataclassing or decide to use @dataclass with non-default parameters, and that'd require maintaining the subclass as well (for no good reason).
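[Editor's note: a quick editorial check, not from either answer — it demonstrates that the undecorated subclass inherits the generated methods and fields; the class names follow the question.]
from dataclasses import dataclass, fields, is_dataclass

@dataclass
class AAA:
    x: str
    y: str

class BBB(AAA):  # no decorator
    ...

print(is_dataclass(BBB))               # True, via the inherited __dataclass_fields__
print([f.name for f in fields(BBB)])   # ['x', 'y']
print(BBB('a', 'b') == BBB('a', 'b'))  # True — __init__ and __eq__ come from AAA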
Should you decorate dataclass subclasses if not making additional fields
If you don't add any more fields to your subclass, is there a need to add the @dataclass decorator to it, and would it do anything? If there is no difference, which is the usual convention?
from dataclasses import dataclass

@dataclass
class AAA:
    x: str
    y: str
    ...

# decorate?
class BBB(AAA):
    ...
[ "If you don't decorate your class, the variables defined in the root of the class will be class attributes which would be shared between all instances of the class. The dataclass let's you define the variables there but they would be instance attributes so each instance can hold its own value. If you instantiate only a single object, you won't see the difference.\nUPDATE\nMisunderstood your question. The decorate \"adds\" a few methods to your class, such as __init__ and __repr__. If you're not adding any attributes to your child class, then adding the decorator just instructs the interpreter to go through the MRO to update the mapping, which hasn't changed. It will not cause any trouble, it's just unnecessary. I haven't seen any well-defined convention for it but I wouldn't do it.\n", "From documentation, dataclasses source code and experimenting with class instances, I don't see any difference, even if new fields are added to a subclass.\nIf we are talking conventions, then I would advise against decorating a subclass. Class BBB's definition is supposed to say \"this class behaves just like AAA, but with these changes\" (likely a couple of new methods in your case).\nRe-decorating BBB:\n\nserves no purpose\nviolates DRY principle: although it's just one line, but still there's no particular difference between re-decorating and copy-pasting a short method from superclass\ncould (potentially) get in the way of changing AAA: you might theoretically switch to another library for dataclassing or decide to use @dataclass with non-default parameters, and that'd require maintaining subclass as well (for no good reason).\n\n" ]
[ 1, 1 ]
[]
[]
[ "conventions", "python", "python_dataclasses" ]
stackoverflow_0074563511_conventions_python_python_dataclasses.txt
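A minimal sketch (not part of the answers above) confirming that the undecorated subclass simply inherits the generated machinery; the printed values are what the standard dataclasses module produces:

from dataclasses import dataclass, fields, is_dataclass

@dataclass
class AAA:
    x: str
    y: str

class BBB(AAA):  # no decorator, no new fields
    ...

print(is_dataclass(BBB))              # True - __dataclass_fields__ is inherited
print([f.name for f in fields(BBB)])  # ['x', 'y']
print(BBB("a", "b"))                  # BBB(x='a', y='b') - inherited __init__/__repr__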
Q: Libssl and libcrypto causing dyld: Library not loaded: /usr/lib/libpq.5.dylib I recently uninstalled postgresql and installed pyscopg2 via pip. I know there's some trickery involved with libcrypto and libssl Currently i have them symlinked to: $ ls -lah libssl.* -rwxr-xr-x 1 root wheel 402K Aug 28 11:06 libssl.0.9.7.dylib -rwxr-xr-x 1 root wheel 589K Aug 28 11:06 libssl.0.9.8.dylib lrwxr-xr-x 1 root wheel 55B Nov 29 23:38 libssl.1.0.0.dylib -> /usr/local/Cellar/openssl/1.0.1c/lib/libssl.1.0.0.dylib lrwxr-xr-x 1 root wheel 55B Nov 30 02:25 libssl.dylib -> /usr/local/Cellar/openssl/1.0.1c/lib/libssl.1.0.0.dylib /usr/lib $ ls -lah libcrypto.* -rwxr-xr-x 1 root wheel 2.1M Aug 28 11:06 libcrypto.0.9.7.dylib -rwxr-xr-x 1 root wheel 2.6M Aug 28 11:06 libcrypto.0.9.8.dylib -r-xr-xr-x 1 root wheel 1.6M Oct 31 22:12 libcrypto.1.0.0.dylib lrwxr-xr-x 1 root wheel 58B Nov 30 02:27 libcrypto.dylib -> /usr/local/Cellar/openssl/1.0.1c/lib/libcrypto.1.0.0.dylib whereby I installed openssl via ports Now when I run arc diff, I am getting the infamous $ arc diff dyld: Library not loaded: /usr/lib/libpq.5.dylib Referenced from: /usr/bin/php Reason: image not found Trace/BPT trap: 5 There are a few answers here in SO which talks about symlinking these libs to the postgresql install directory. Obviously, this won't work for me. What should I do? A: Turns out /usr/lib/libpq.5.dylib was absent but /usr/lib/libpq.5.4.dylib was not. sudo ln -s /usr/lib/libpq.5.4.dylib /usr/lib/libpq.5.dylib fixed the issue. A: just use the below commands in your terminal (use the proper postgresql version) $ brew unlink postgresql@14 $ brew link libpq --force https://github.com/opentable/otj-pg-embedded/issues/152#issuecomment-954348544 A: Not unlike @Pablo Marambio, I fixed this issue by adding the following line to ~/.profile: export DYLD_LIBRARY_PATH=/Library/PostgreSQL/9.3/lib:$DYLD_LIBRARY_PATH For Postgres.app v9.3.5.0 (presumably others too) I added the following line instead: export DYLD_LIBRARY_PATH=/Applications/Postgres.app/Contents/Versions/9.3/lib:$DYLD_LIBRARY_PATH Then, of course, run source ~/.profile A: To resolve this, I had to uninstall postgresql and then install again. $ brew uninstall postgresql $ brew update $ brew install postgres A: I got the error Library not loaded: '/usr/local/opt/postgresql/lib/libpq.5.dylib' Reason: tried: '/usr/local/opt/postgresql/lib/libpq.5.dylib' (no such file), '/usr/local/lib/libpq.5.dylib' (no such file), '/usr/lib/libpq.5.dylib' (no such file) when running a Django project and to fix it I had to uninstall the pip packages: pip uninstall psycopg2 pip uninstall psycopg2-binary and then install them again: pip install psycopg2 pip install psycopg2-binary And this made the project run without the error. A: I had to do this for postgresql 14 + brew sudo ln -s /opt/homebrew/opt/postgresql@14/lib/postgresql@14/libpq.5.dylib /opt/homebrew/opt/postgresql/lib/libpq.5.dylib
Libssl and libcrypto causing dyld: Library not loaded: /usr/lib/libpq.5.dylib
I recently uninstalled postgresql and installed psycopg2 via pip. I know there's some trickery involved with libcrypto and libssl. Currently I have them symlinked to: $ ls -lah libssl.* -rwxr-xr-x 1 root wheel 402K Aug 28 11:06 libssl.0.9.7.dylib -rwxr-xr-x 1 root wheel 589K Aug 28 11:06 libssl.0.9.8.dylib lrwxr-xr-x 1 root wheel 55B Nov 29 23:38 libssl.1.0.0.dylib -> /usr/local/Cellar/openssl/1.0.1c/lib/libssl.1.0.0.dylib lrwxr-xr-x 1 root wheel 55B Nov 30 02:25 libssl.dylib -> /usr/local/Cellar/openssl/1.0.1c/lib/libssl.1.0.0.dylib /usr/lib $ ls -lah libcrypto.* -rwxr-xr-x 1 root wheel 2.1M Aug 28 11:06 libcrypto.0.9.7.dylib -rwxr-xr-x 1 root wheel 2.6M Aug 28 11:06 libcrypto.0.9.8.dylib -r-xr-xr-x 1 root wheel 1.6M Oct 31 22:12 libcrypto.1.0.0.dylib lrwxr-xr-x 1 root wheel 58B Nov 30 02:27 libcrypto.dylib -> /usr/local/Cellar/openssl/1.0.1c/lib/libcrypto.1.0.0.dylib whereby I installed openssl via ports. Now when I run arc diff, I am getting the infamous $ arc diff dyld: Library not loaded: /usr/lib/libpq.5.dylib Referenced from: /usr/bin/php Reason: image not found Trace/BPT trap: 5 There are a few answers here on SO which talk about symlinking these libs to the postgresql install directory. Obviously, this won't work for me. What should I do?
[ "Turns out /usr/lib/libpq.5.dylib was absent but /usr/lib/libpq.5.4.dylib was not. \nsudo ln -s /usr/lib/libpq.5.4.dylib /usr/lib/libpq.5.dylib\n\nfixed the issue. \n", "just use the below commands in your terminal\n(use the proper postgresql version)\n$ brew unlink postgresql@14\n$ brew link libpq --force\nhttps://github.com/opentable/otj-pg-embedded/issues/152#issuecomment-954348544\n", "Not unlike @Pablo Marambio, I fixed this issue by adding the following line to ~/.profile:\nexport DYLD_LIBRARY_PATH=/Library/PostgreSQL/9.3/lib:$DYLD_LIBRARY_PATH\n\nFor Postgres.app v9.3.5.0 (presumably others too) I added the following line instead:\nexport DYLD_LIBRARY_PATH=/Applications/Postgres.app/Contents/Versions/9.3/lib:$DYLD_LIBRARY_PATH\n\nThen, of course, run source ~/.profile\n", "To resolve this, I had to uninstall postgresql and then install again. \n$ brew uninstall postgresql\n\n$ brew update\n\n$ brew install postgres\n\n", "I got the error\n\nLibrary not loaded: '/usr/local/opt/postgresql/lib/libpq.5.dylib'\n\n\nReason: tried: '/usr/local/opt/postgresql/lib/libpq.5.dylib' (no such file), '/usr/local/lib/libpq.5.dylib' (no such file), '/usr/lib/libpq.5.dylib' (no such file)\n\nwhen running a Django project and to fix it I had to uninstall the pip packages:\npip uninstall psycopg2 \npip uninstall psycopg2-binary \n\nand then install them again:\npip install psycopg2 \npip install psycopg2-binary \n\nAnd this made the project run without the error.\n", "I had to do this for postgresql 14 + brew\nsudo ln -s /opt/homebrew/opt/postgresql@14/lib/postgresql@14/libpq.5.dylib /opt/homebrew/opt/postgresql/lib/libpq.5.dylib\n\n" ]
[ 16, 8, 3, 2, 2, 0 ]
[]
[]
[ "libssl", "pip", "postgresql", "psycopg2", "python" ]
stackoverflow_0013643452_libssl_pip_postgresql_psycopg2_python.txt
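As a quick extra diagnostic for errors like the above (an addition, not from the answers; behavior varies by platform and macOS version), you can ask Python's ctypes whether the dynamic loader can locate libpq at all:

import ctypes.util

# Prints a resolvable path/name for libpq if the loader can see it, otherwise None
print(ctypes.util.find_library("pq"))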
Q: construct dataset for ner train i have in input : text = "Apple est une entreprise, James Alfred travaille ici" spans = [ { "start":0, "end":5, "label":"ORG" }, { "start":26, "end":38, "label":"PER" } ] correspondance_dict = {"PER":2, "ORG": 4 , "O" : 0} i want to tokenize the text and construct label according to spans list i.e : i want to have in output : tokenized_text = ["Apple", "est", "une", "entreprise", "," , "James","Alfred", "travaille", "ici"] labels = [4,0,0,0,0,2,2,0,0] #this list constructed with correspondance_dict and spans (4 because Apple is ORG and the "2,2" because "James,Alfred" is person A: If you're trying to use a huggingface's pipeline in other parts of your program, it's easy to aggregate output text chunks using an appropriate strategy. The documentation for a thorough explanation is available here! from transformers import pipeline # Initialize the NER pipeline ner = pipeline("ner", aggregation_strategy="simple") # Phrase phrase = "David helped Peter enter the building, where his house is located." # NER task ner_result = ner(phrase) # Print result print(ner_result) output: [{'entity_group': 'PER', 'score': 0.99642086, 'word': 'David', 'start': 0, 'end': 5}, {'entity_group': 'PER', 'score': 0.99559766, 'word': 'Peter', 'start': 13, 'end': 18}]
construct dataset for ner train
I have as input: text = "Apple est une entreprise, James Alfred travaille ici" spans = [ { "start":0, "end":5, "label":"ORG" }, { "start":26, "end":38, "label":"PER" } ] correspondance_dict = {"PER":2, "ORG": 4 , "O" : 0} I want to tokenize the text and construct labels according to the spans list, i.e. I want as output: tokenized_text = ["Apple", "est", "une", "entreprise", "," , "James","Alfred", "travaille", "ici"] labels = [4,0,0,0,0,2,2,0,0] # this list is constructed with correspondance_dict and spans (4 because Apple is ORG, and the "2,2" because "James Alfred" is a person)
[ "If you're trying to use a huggingface's pipeline in other parts of your program, it's easy to aggregate output text chunks using an appropriate strategy.\nThe documentation for a thorough explanation is available here!\nfrom transformers import pipeline\n\n# Initialize the NER pipeline\nner = pipeline(\"ner\", aggregation_strategy=\"simple\")\n\n# Phrase\nphrase = \"David helped Peter enter the building, where his house is located.\"\n\n# NER task\nner_result = ner(phrase)\n\n# Print result\nprint(ner_result)\n\noutput:\n[{'entity_group': 'PER', 'score': 0.99642086, 'word': 'David', 'start': 0, 'end': 5}, {'entity_group': 'PER', 'score': 0.99559766, 'word': 'Peter', 'start': 13, 'end': 18}]\n\n" ]
[ 0 ]
[]
[]
[ "nlp", "python", "python_3.x", "spacy" ]
stackoverflow_0074577951_nlp_python_python_3.x_spacy.txt
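The answer above reaches for a pretrained pipeline rather than the alignment that was asked for; a plain-Python sketch of the span-to-token labelling itself (whitespace tokenization with the comma split off, matching the expected output) could look like this:

text = "Apple est une entreprise, James Alfred travaille ici"
spans = [{"start": 0, "end": 5, "label": "ORG"},
         {"start": 26, "end": 38, "label": "PER"}]
correspondance_dict = {"PER": 2, "ORG": 4, "O": 0}

tokenized_text, labels = [], []
pos = 0
for token in text.replace(",", " ,").split():
    start = text.index(token, pos)  # character offset of this token in the original text
    end = start + len(token)
    pos = end
    label = "O"
    for span in spans:
        if start >= span["start"] and end <= span["end"]:
            label = span["label"]
            break
    tokenized_text.append(token)
    labels.append(correspondance_dict[label])

print(tokenized_text)  # ['Apple', 'est', 'une', 'entreprise', ',', 'James', 'Alfred', 'travaille', 'ici']
print(labels)          # [4, 0, 0, 0, 0, 2, 2, 0, 0]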
Q: List Comprehension instead of a for loop not working Udemy course: Loop over the items of the passwords list and in each iteration print out the item if the item contains the strings 'ba' or 'ab' inside. passwords = ['ccavfb', 'baaded', 'bbaa', 'aaeed', 'vbb', 'aadeba', 'aba', 'dee', 'dade', 'abc', 'aae', 'dded', 'abb', 'aaf', 'ffaec'] I know i could create the following for loop for this and it would work for x in passwords: if 'ab' in x or 'ba' in x: print(x) but I just learned about list comprehension so i tried making the following function to go over the loop instead. def checker(passes): return (x for x in passes if 'ab' in x or 'ba' in x) print(checker(passwords)) this doesn't work however and gives me this error : <generator object checker.. at 0x00000212414B4110> I have no clue what this means even after looking at my old friend google for help. \I dont understand why this function isn't working please help explain to me what I'm missing. I'm completely lost. this is the expected outcome according to the answer hint baaded bbaa aadeba aba abc abb A: Even if that did work it wouldn't give you the results you desire. If you want to use that style you will have to join it. def checker(passes): return '\n'.join(x for x in passes if 'ab' in x or 'ba' in x)
List Comprehension instead of a for loop not working
Udemy course: Loop over the items of the passwords list and in each iteration print out the item if the item contains the strings 'ba' or 'ab' inside. passwords = ['ccavfb', 'baaded', 'bbaa', 'aaeed', 'vbb', 'aadeba', 'aba', 'dee', 'dade', 'abc', 'aae', 'dded', 'abb', 'aaf', 'ffaec'] I know I could create the following for loop for this and it would work for x in passwords: if 'ab' in x or 'ba' in x: print(x) but I just learned about list comprehension so I tried making the following function to go over the loop instead. def checker(passes): return (x for x in passes if 'ab' in x or 'ba' in x) print(checker(passwords)) This doesn't work, however, and instead prints this: <generator object checker.. at 0x00000212414B4110> I have no clue what this means even after looking at my old friend Google for help. I don't understand why this function isn't working; please help explain what I'm missing. I'm completely lost. This is the expected outcome according to the answer hint: baaded bbaa aadeba aba abc abb
[ "Even if that did work it wouldn't give you the results you desire. If you want to use that style you will have to join it.\ndef checker(passes):\n return '\\n'.join(x for x in passes if 'ab' in x or 'ba' in x)\n\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074580155_python_python_3.x.txt
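An alternative to joining into one string: make checker return an actual list (square brackets instead of parentheses) and print the items yourself, which makes the generator-vs-list distinction explicit:

passwords = ['ccavfb', 'baaded', 'bbaa', 'aaeed', 'vbb', 'aadeba', 'aba',
             'dee', 'dade', 'abc', 'aae', 'dded', 'abb', 'aaf', 'ffaec']

def checker(passes):
    # [...] builds a list eagerly; (...) builds a lazy generator, which is what printed before
    return [x for x in passes if 'ab' in x or 'ba' in x]

for pw in checker(passwords):
    print(pw)  # baaded, bbaa, aadeba, aba, abc, abb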
Q: How to convert JSON data into a tree image? I'm using treelib to generate trees, now I need easy-to-read version of trees, so I want to convert them into images. For example: The sample JSON data, for the following tree: With data: >>> print(tree.to_json(with_data=True)) {"Harry": {"data": null, "children": [{"Bill": {"data": null}}, {"Jane": {"data": null, "children": [{"Diane": {"data": null}}, {"Mark": {"data": null}}]}}, {"Mary": {"data": null}}]}} Without data: >>> print(tree.to_json(with_data=False)) {"Harry": {"children": ["Bill", {"Jane": {"children": [{"Diane": {"children": ["Mary"]}}, "Mark"]}}]}} Is there anyway to use graphviz or d3.js or some other python library to generate tree using this JSON data? A: For a tree like this there's no need to use a library: you can generate the Graphviz DOT language statements directly. The only tricky part is extracting the tree edges from the JSON data. To do that, we first convert the JSON string back into a Python dict, and then parse that dict recursively. If a name in the tree dict has no children it's a simple string, otherwise, it's a dict and we need to scan the items in its "children" list. Each (parent, child) pair we find gets appended to a global list edges. This somewhat cryptic line: name = next(iter(treedict.keys())) gets a single key from treedict. This gives us the person's name, since that's the only key in treedict. In Python 2 we could do name = treedict.keys()[0] but the previous code works in both Python 2 and Python 3. from __future__ import print_function import json import sys # Tree in JSON format s = '{"Harry": {"children": ["Bill", {"Jane": {"children": [{"Diane": {"children": ["Mary"]}}, "Mark"]}}]}}' # Convert JSON tree to a Python dict data = json.loads(s) # Convert back to JSON & print to stderr so we can verify that the tree is correct. print(json.dumps(data, indent=4), file=sys.stderr) # Extract tree edges from the dict edges = [] def get_edges(treedict, parent=None): name = next(iter(treedict.keys())) if parent is not None: edges.append((parent, name)) for item in treedict[name]["children"]: if isinstance(item, dict): get_edges(item, parent=name) else: edges.append((name, item)) get_edges(data) # Dump edge list in Graphviz DOT format print('strict digraph tree {') for row in edges: print(' {0} -> {1};'.format(*row)) print('}') stderr output { "Harry": { "children": [ "Bill", { "Jane": { "children": [ { "Diane": { "children": [ "Mary" ] } }, "Mark" ] } } ] } } stdout output strict digraph tree { Harry -> Bill; Harry -> Jane; Jane -> Diane; Diane -> Mary; Jane -> Mark; } The code above runs on Python 2 & Python 3. It prints the JSON data to stderr so we can verify that it's correct. It then prints the Graphviz data to stdout so we can capture it to a file or pipe it directly to a Graphviz program. Eg, if the script is name "tree_to_graph.py", then you can do this in the command line to save the graph as a PNG file named "tree.png": python tree_to_graph.py | dot -Tpng -otree.png And here's the PNG output: A: Based on the answer of PM 2Ring I create a script which can be used via command line: #!/usr/bin/env python # -*- coding: utf-8 -*- """Convert a JSON to a graph.""" from __future__ import print_function import json import sys def tree2graph(data, verbose=True): """ Convert a JSON to a graph. 
Run `dot -Tpng -otree.png` Parameters ---------- json_filepath : str Path to a JSON file out_dot_path : str Path where the output dot file will be stored Examples -------- >>> s = {"Harry": [ "Bill", \ {"Jane": [{"Diane": ["Mary", "Mark"]}]}]} >>> tree2graph(s) [('Harry', 'Bill'), ('Harry', 'Jane'), ('Jane', 'Diane'), ('Diane', 'Mary'), ('Diane', 'Mark')] """ # Extract tree edges from the dict edges = [] def get_edges(treedict, parent=None): name = next(iter(treedict.keys())) if parent is not None: edges.append((parent, name)) for item in treedict[name]: if isinstance(item, dict): get_edges(item, parent=name) elif isinstance(item, list): for el in item: if isinstance(item, dict): edges.append((parent, item.keys()[0])) get_edges(item[item.keys()[0]]) else: edges.append((parent, el)) else: edges.append((name, item)) get_edges(data) return edges def main(json_filepath, out_dot_path, lr=False, verbose=True): """IO.""" # Read JSON with open(json_filepath) as data_file: data = json.load(data_file) if verbose: # Convert back to JSON & print to stderr so we can verfiy that the tree # is correct. print(json.dumps(data, indent=4), file=sys.stderr) # Get edges edges = tree2graph(data, verbose) # Dump edge list in Graphviz DOT format with open(out_dot_path, 'w') as f: f.write('strict digraph tree {\n') if lr: f.write('rankdir="LR";\n') for row in edges: f.write(' "{0}" -> "{1}";\n'.format(*row)) f.write('}\n') def get_parser(): """Get parser object for tree2graph.py.""" from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter parser = ArgumentParser(description=__doc__, formatter_class=ArgumentDefaultsHelpFormatter) parser.add_argument("-i", "--input", dest="json_filepath", help="JSON FILE to read", metavar="FILE", required=True) parser.add_argument("-o", "--output", dest="out_dot_path", help="DOT FILE to write", metavar="FILE", required=True) return parser if __name__ == "__main__": import doctest doctest.testmod() args = get_parser().parse_args() main(args.json_filepath, args.out_dot_path, verbose=False) A: Here is the solution to convert snwflk's json data directly to tree using graphviz: Your input data: json_data = {"Harry": {"data": None, "children": [{"Bill": {"data": None}}, {"Jane": {"data": None, "children": [{"Diane": {"data": None}}, {"Mark": {"data": None}}]}}, {"Mary": {"data": None}}]}} We can traverse the tree using Breadth First Search so we can ensure all edges are gone through, which is the requirement to construct the tree with graphviz: import graphviz def get_node_info(node): node_name = list(node.keys())[0] node_data = node[node_name]['data'] node_children = node[node_name].get('children', None) return node_name, node_data, node_children traversed_nodes = [json_data] # start with root node # initialize the graph f = graphviz.Digraph('finite_state_machine', filename='fsm.gv') f.attr(rankdir='LR', size='8,5') f.attr('node', shape='rectangle') while (len(traversed_nodes) > 0): cur_node = traversed_nodes.pop(0) cur_node_name, cur_node_data, cur_node_children = get_node_info(cur_node) if (cur_node_children is not None): # check if the cur_node has a child for next_node in cur_node_children: traversed_nodes.append(next_node) next_node_name = get_node_info(next_node)[0] f.edge(cur_node_name, next_node_name, label='') # add edge to the graph f.view() The output graph looks as follows (I think your example tree image has Marie's node in the wrong place)
How to convert JSON data into a tree image?
I'm using treelib to generate trees, now I need easy-to-read version of trees, so I want to convert them into images. For example: The sample JSON data, for the following tree: With data: >>> print(tree.to_json(with_data=True)) {"Harry": {"data": null, "children": [{"Bill": {"data": null}}, {"Jane": {"data": null, "children": [{"Diane": {"data": null}}, {"Mark": {"data": null}}]}}, {"Mary": {"data": null}}]}} Without data: >>> print(tree.to_json(with_data=False)) {"Harry": {"children": ["Bill", {"Jane": {"children": [{"Diane": {"children": ["Mary"]}}, "Mark"]}}]}} Is there anyway to use graphviz or d3.js or some other python library to generate tree using this JSON data?
[ "For a tree like this there's no need to use a library: you can generate the Graphviz DOT language statements directly. The only tricky part is extracting the tree edges from the JSON data. To do that, we first convert the JSON string back into a Python dict, and then parse that dict recursively.\nIf a name in the tree dict has no children it's a simple string, otherwise, it's a dict and we need to scan the items in its \"children\" list. Each (parent, child) pair we find gets appended to a global list edges.\nThis somewhat cryptic line:\nname = next(iter(treedict.keys()))\n\ngets a single key from treedict. This gives us the person's name, since that's the only key in treedict. In Python 2 we could do\nname = treedict.keys()[0]\n\nbut the previous code works in both Python 2 and Python 3.\nfrom __future__ import print_function\nimport json\nimport sys\n\n# Tree in JSON format\ns = '{\"Harry\": {\"children\": [\"Bill\", {\"Jane\": {\"children\": [{\"Diane\": {\"children\": [\"Mary\"]}}, \"Mark\"]}}]}}'\n\n# Convert JSON tree to a Python dict\ndata = json.loads(s)\n\n# Convert back to JSON & print to stderr so we can verify that the tree is correct.\nprint(json.dumps(data, indent=4), file=sys.stderr)\n\n# Extract tree edges from the dict\nedges = []\n\ndef get_edges(treedict, parent=None):\n name = next(iter(treedict.keys()))\n if parent is not None:\n edges.append((parent, name))\n for item in treedict[name][\"children\"]:\n if isinstance(item, dict):\n get_edges(item, parent=name)\n else:\n edges.append((name, item))\n\nget_edges(data)\n\n# Dump edge list in Graphviz DOT format\nprint('strict digraph tree {')\nfor row in edges:\n print(' {0} -> {1};'.format(*row))\nprint('}')\n\nstderr output\n{\n \"Harry\": {\n \"children\": [\n \"Bill\",\n {\n \"Jane\": {\n \"children\": [\n {\n \"Diane\": {\n \"children\": [\n \"Mary\"\n ]\n }\n },\n \"Mark\"\n ]\n }\n }\n ]\n }\n}\n\nstdout output\nstrict digraph tree {\n Harry -> Bill;\n Harry -> Jane;\n Jane -> Diane;\n Diane -> Mary;\n Jane -> Mark;\n}\n\nThe code above runs on Python 2 & Python 3. It prints the JSON data to stderr so we can verify that it's correct. It then prints the Graphviz data to stdout so we can capture it to a file or pipe it directly to a Graphviz program. 
Eg, if the script is name \"tree_to_graph.py\", then you can do this in the command line to save the graph as a PNG file named \"tree.png\":\npython tree_to_graph.py | dot -Tpng -otree.png\n\nAnd here's the PNG output:\n\n", "Based on the answer of PM 2Ring I create a script which can be used via command line:\n#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"Convert a JSON to a graph.\"\"\"\n\nfrom __future__ import print_function\nimport json\nimport sys\n\n\ndef tree2graph(data, verbose=True):\n \"\"\"\n Convert a JSON to a graph.\n\n Run `dot -Tpng -otree.png`\n\n Parameters\n ----------\n json_filepath : str\n Path to a JSON file\n out_dot_path : str\n Path where the output dot file will be stored\n\n Examples\n --------\n >>> s = {\"Harry\": [ \"Bill\", \\\n {\"Jane\": [{\"Diane\": [\"Mary\", \"Mark\"]}]}]}\n >>> tree2graph(s)\n [('Harry', 'Bill'), ('Harry', 'Jane'), ('Jane', 'Diane'), ('Diane', 'Mary'), ('Diane', 'Mark')]\n \"\"\"\n # Extract tree edges from the dict\n edges = []\n\n def get_edges(treedict, parent=None):\n name = next(iter(treedict.keys()))\n if parent is not None:\n edges.append((parent, name))\n for item in treedict[name]:\n if isinstance(item, dict):\n get_edges(item, parent=name)\n elif isinstance(item, list):\n for el in item:\n if isinstance(item, dict):\n edges.append((parent, item.keys()[0]))\n get_edges(item[item.keys()[0]])\n else:\n edges.append((parent, el))\n else:\n edges.append((name, item))\n get_edges(data)\n return edges\n\n\ndef main(json_filepath, out_dot_path, lr=False, verbose=True):\n \"\"\"IO.\"\"\"\n # Read JSON\n with open(json_filepath) as data_file:\n data = json.load(data_file)\n\n if verbose:\n # Convert back to JSON & print to stderr so we can verfiy that the tree\n # is correct.\n print(json.dumps(data, indent=4), file=sys.stderr)\n\n # Get edges\n edges = tree2graph(data, verbose)\n\n # Dump edge list in Graphviz DOT format\n with open(out_dot_path, 'w') as f:\n f.write('strict digraph tree {\\n')\n if lr:\n f.write('rankdir=\"LR\";\\n')\n for row in edges:\n f.write(' \"{0}\" -> \"{1}\";\\n'.format(*row))\n f.write('}\\n')\n\n\ndef get_parser():\n \"\"\"Get parser object for tree2graph.py.\"\"\"\n from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter\n parser = ArgumentParser(description=__doc__,\n formatter_class=ArgumentDefaultsHelpFormatter)\n parser.add_argument(\"-i\", \"--input\",\n dest=\"json_filepath\",\n help=\"JSON FILE to read\",\n metavar=\"FILE\",\n required=True)\n parser.add_argument(\"-o\", \"--output\",\n dest=\"out_dot_path\",\n help=\"DOT FILE to write\",\n metavar=\"FILE\",\n required=True)\n return parser\n\n\nif __name__ == \"__main__\":\n import doctest\n doctest.testmod()\n args = get_parser().parse_args()\n main(args.json_filepath, args.out_dot_path, verbose=False)\n\n", "Here is the solution to convert snwflk's json data directly to tree using graphviz:\nYour input data:\njson_data = {\"Harry\": {\"data\": None, \"children\": [{\"Bill\": {\"data\": None}}, {\"Jane\": {\"data\": None, \"children\": [{\"Diane\": {\"data\": None}}, {\"Mark\": {\"data\": None}}]}}, {\"Mary\": {\"data\": None}}]}}\n\nWe can traverse the tree using Breadth First Search so we can ensure all edges are gone through, which is the requirement to construct the tree with graphviz:\nimport graphviz\n\ndef get_node_info(node):\n node_name = list(node.keys())[0]\n node_data = node[node_name]['data']\n node_children = node[node_name].get('children', None)\n return node_name, node_data, node_children\n\ntraversed_nodes = 
[json_data] # start with root node\n\n# initialize the graph \nf = graphviz.Digraph('finite_state_machine', filename='fsm.gv')\nf.attr(rankdir='LR', size='8,5')\nf.attr('node', shape='rectangle')\n\nwhile (len(traversed_nodes) > 0):\n cur_node = traversed_nodes.pop(0)\n cur_node_name, cur_node_data, cur_node_children = get_node_info(cur_node)\n if (cur_node_children is not None): # check if the cur_node has a child\n for next_node in cur_node_children: \n traversed_nodes.append(next_node)\n next_node_name = get_node_info(next_node)[0]\n f.edge(cur_node_name, next_node_name, label='') # add edge to the graph\n\nf.view()\n\n\nThe output graph looks as follows (I think your example tree image has Mary's node in the wrong place)\n" ]
[ 16, 4, 0 ]
[]
[]
[ "json", "python", "tree" ]
stackoverflow_0040118113_json_python_tree.txt
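If you would rather stay inside Python than pipe DOT text to the command line, a short sketch using the graphviz package (assuming it and the Graphviz binaries are installed) renders the same edge list:

import graphviz

dot = graphviz.Digraph(strict=True)
for parent, child in [('Harry', 'Bill'), ('Harry', 'Jane'), ('Jane', 'Diane'),
                      ('Diane', 'Mary'), ('Jane', 'Mark')]:
    dot.edge(parent, child)
dot.render('tree', format='png')  # writes tree.png next to the script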
Q: I can't show the dates in the same embed with python from discord BOT I have created a bot for discord that shows all the dates of any fortnite skin through the api, but my problem is that it does not show me all the dates inside the embed with the !skin command, it only shows me a date in timestamp format this is the code: @bot.command() async def skin(ctx): url = requests.get("https://fortnite-api.com/v2/cosmetics/br/search/all?language=es&name=palito%20de%20pescado%20de%20gominola&searchLanguage=es") historialSKIN= url.json() for i in historialSKIN["data"]: for datestr in i['shopHistory']: fecha = int(isoparse(datestr).timestamp()) embed=discord.Embed(title="Titulo", description=f"{fecha}") embed.add_field(name="Skin", value="skin", inline=False) await ctx.send(embed=embed) What should I do to show all the dates in the same embed? Thank you very much in advance! I should stay like this: can someone help me, thank you very much! A: Your variable "fecha" must be array or string to contain all your dates. Here's the code, it should work. I think I've shown you the problem and then you'll figure it out for yourself. @bot.command() async def skin(ctx): url = requests.get("https://fortnite-api.com/v2/cosmetics/br/search/all?language=es&name=palito%20de%20pescado%20de%20gominola&searchLanguage=es") historialSKIN= url.json() fecha = "" for i in historialSKIN["data"]: for datestr in i['shopHistory']: fecha += str(isoparse(datestr).timestamp()) +"\n" embed=discord.Embed(title="Titulo", description=fecha) embed.add_field(name="Skin", value="skin", inline=False) await ctx.send(embed=embed)
I can't show the dates in the same embed with python from discord BOT
I have created a bot for discord that shows all the dates of any fortnite skin through the api, but my problem is that it does not show me all the dates inside the embed with the !skin command; it only shows me one date, in timestamp format. This is the code: @bot.command() async def skin(ctx): url = requests.get("https://fortnite-api.com/v2/cosmetics/br/search/all?language=es&name=palito%20de%20pescado%20de%20gominola&searchLanguage=es") historialSKIN= url.json() for i in historialSKIN["data"]: for datestr in i['shopHistory']: fecha = int(isoparse(datestr).timestamp()) embed=discord.Embed(title="Titulo", description=f"{fecha}") embed.add_field(name="Skin", value="skin", inline=False) await ctx.send(embed=embed) What should I do to show all the dates in the same embed? Thank you very much in advance! It should look like this: can someone help me, thank you very much!
[ "Your variable \"fecha\" must be array or string to contain all your dates.\nHere's the code, it should work. I think I've shown you the problem and then you'll figure it out for yourself.\n@bot.command()\nasync def skin(ctx):\n url = requests.get(\"https://fortnite-api.com/v2/cosmetics/br/search/all?language=es&name=palito%20de%20pescado%20de%20gominola&searchLanguage=es\")\n historialSKIN= url.json()\n fecha = \"\"\n for i in historialSKIN[\"data\"]:\n for datestr in i['shopHistory']:\n fecha += str(isoparse(datestr).timestamp()) +\"\\n\"\n\n embed=discord.Embed(title=\"Titulo\", description=fecha)\n embed.add_field(name=\"Skin\", value=\"skin\", inline=False)\n await ctx.send(embed=embed)\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074580125_python.txt
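A related note (an assumption about the desired display, not from the answer): Discord can render raw Unix timestamps as human-readable dates if you wrap them in its <t:...> markup, so the accumulation line could instead be:

# inside the inner loop of the answer above
fecha += f"<t:{int(isoparse(datestr).timestamp())}:d>\n"  # Discord shows this as a short date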
Q: simple flask application form, form page not functioning form page not displaying response, I left off some tags that were supposed to be closed. I changed that already. Basically, once i fill out the form and click on the submit button it should take me to the display page where it displays a string such as " Hello" + string + "thank you for submitting!" main.py code #import the flask module from flask import Flask, render_template, request,url_for ap Flask(__name__) @app.route("/") def home(): return render_template('home.html') @app.route("/teams") def teams(): return render_template('teams.html') @app.route("/form", methods = ['GET','POST']) def form(): #get the method of the post and the method of the get if request.method == "POST" and request.form.get('submit'): string = request.form.get('name') feedback = "Hello" + string + "\n Thank you for submiting!!" // else: return render_template('form.html').format(feedback = "") A: If everything is ok with you except that feedback is not displaying then do this: def form(): #get the method of the post and the method of the get if request.method == "POST" and request.form.get('submit'): string = request.form.get('name') feedback = "Hello" + string + "\n Thank you for submiting!!" return render_template('display.html', feedback=feedback) else: feedback = ... # define feedback here return render_template('form.html', feedback=feedback) A: As mentioned in the Comments with the Code you provide the if Block never gets true. Because inside the Form Data there is no thing called submit. To solve this your form should look like this: <!DOCTYPE html> <html> <head> <title>World Cup</title> </head> <body> <h1> What team are you going to support? </h1> <h2> Please fill in the information in the bottom</h2> <br> <br> <nav> <a href = "{{url_for('home')}}">Home</a> &nbsp; &nbsp; &nbsp; <a href = "{{url_for('teams')}}">Teams</a> &nbsp; &nbsp; &nbsp; <a href = "{{url_for('form')}}">Form</a> &nbsp; &nbsp; &nbsp; </nav> <br> <br> <form method = "post"> <label> Name: <input type ="text" name = "name"></label> <br> <br> <label> Comments: <input type = "text" name = "Comments"> </label> <br> <br> <input type = "submit" name="submit" value = "Submit"> <input type = "reset"> </form> </body> </html> So i added the name Attribute to the Submit Button. I also changed the name Attribute of the Comment button because it is identically with the name of the Name Field. The second Problem is the call of render_template. To pass Variables to the Form you have to put them into the render_template call. This is how your route should look like: @app.route("/form", methods=["GET", "POST"]) def form(): # get the method of the post and the method of the get if request.method == "POST" and request.form.get("submit"): string = request.form.get("name") feedback = "Hello" + string + "\n Thank you for submiting!!" return render_template("display.html", feedback=feedback) else: return render_template("form.html")
simple flask application form, form page not functioning
form page not displaying response; I left off some tags that were supposed to be closed. I changed that already. Basically, once I fill out the form and click on the submit button it should take me to the display page where it displays a string such as " Hello" + string + "thank you for submitting!" main.py code #import the flask module from flask import Flask, render_template, request,url_for app = Flask(__name__) @app.route("/") def home(): return render_template('home.html') @app.route("/teams") def teams(): return render_template('teams.html') @app.route("/form", methods = ['GET','POST']) def form(): #get the method of the post and the method of the get if request.method == "POST" and request.form.get('submit'): string = request.form.get('name') feedback = "Hello" + string + "\n Thank you for submitting!!" else: return render_template('form.html').format(feedback = "")
[ "If everything is ok with you except that feedback is not displaying then do this:\ndef form():\n#get the method of the post and the method of the get \n if request.method == \"POST\" and request.form.get('submit'):\n string = request.form.get('name')\n feedback = \"Hello\" + string + \"\\n Thank you for submiting!!\"\n return render_template('display.html', feedback=feedback)\n else:\n feedback = ... # define feedback here\n return render_template('form.html', feedback=feedback)\n\n", "As mentioned in the Comments with the Code you provide the if Block never gets true. Because inside the Form Data there is no thing called submit.\nTo solve this your form should look like this:\n<!DOCTYPE html>\n<html>\n<head> \n <title>World Cup</title>\n</head>\n<body>\n <h1> What team are you going to support? </h1>\n <h2> Please fill in the information in the bottom</h2>\n <br> <br>\n\n <nav> <a href = \"{{url_for('home')}}\">Home</a> &nbsp; &nbsp; &nbsp; \n <a href = \"{{url_for('teams')}}\">Teams</a> &nbsp; &nbsp; &nbsp; \n <a href = \"{{url_for('form')}}\">Form</a> &nbsp; &nbsp; &nbsp; \n </nav>\n <br> <br>\n <form method = \"post\">\n <label> Name:\n <input type =\"text\" name = \"name\"></label>\n <br> <br>\n\n <label> Comments:\n <input type = \"text\" name = \"Comments\"> </label>\n <br> <br>\n\n <input type = \"submit\" name=\"submit\" value = \"Submit\">\n <input type = \"reset\">\n </form>\n</body>\n</html>\n\nSo i added the name Attribute to the Submit Button. I also changed the name Attribute of the Comment button because it is identically with the name of the Name Field.\nThe second Problem is the call of render_template. To pass Variables to the Form you have to put them into the render_template call.\nThis is how your route should look like:\n@app.route(\"/form\", methods=[\"GET\", \"POST\"])\ndef form():\n # get the method of the post and the method of the get\n if request.method == \"POST\" and request.form.get(\"submit\"):\n string = request.form.get(\"name\")\n feedback = \"Hello\" + string + \"\\n Thank you for submiting!!\"\n return render_template(\"display.html\", feedback=feedback)\n else:\n return render_template(\"form.html\")\n\n" ]
[ 0, 0 ]
[]
[]
[ "flask", "forms", "python" ]
stackoverflow_0074576479_flask_forms_python.txt
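Neither answer shows what display.html does with feedback; a minimal sketch to see how Jinja renders it (the template body here is a hypothetical stand-in for display.html):

from flask import Flask, render_template_string

app = Flask(__name__)
with app.app_context():
    # in the real app, put something like <p>{{ feedback }}</p> in display.html
    print(render_template_string("<p>{{ feedback }}</p>", feedback="Hello Ada\n Thank you for submitting!!"))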
Q: Form unicode character from label I have a simple syntax related question that I would be grateful if someone could answer. So I currently have character labels in a string format: '0941'. To print out unicode characters in Python, I can just use the command: print(u'\u0941') Now, my question is how can I convert the label I have ('0941') into the unicode readable format (u'\u0941')? Thank you so much! A: >>> chr(int('0941',16)) == '\u0941' True
Form unicode character from label
I have a simple syntax related question that I would be grateful if someone could answer. So I currently have character labels in a string format: '0941'. To print out unicode characters in Python, I can just use the command: print(u'\u0941') Now, my question is how can I convert the label I have ('0941') into the unicode readable format (u'\u0941')? Thank you so much!
[ ">>> chr(int('0941',16)) == '\\u0941'\nTrue\n\n" ]
[ 0 ]
[ "One way to accomplish this without fussing with your numeric keypad is to simply print the character and then copy/paste it as a label.\n >>> print(\"lower case delta: \\u03B4\")\n lower case delta: δ\n >>> δ = 42 # copy the lower case delta symbol and paste it to use it as a label\n >>> δδ = δ ** 2 # paste it twice to define another label. \n >>> δ # at this point, they are just normal labels...\n 42\n >>> δδ\n 1764\n >>> δabc = 737 # using paste, it's just another character in a label\n >>> δ123 = 456\n >>> δabc, δ123 # exactly like any other alpha character.\n (737, 456)\n\n" ]
[ -1 ]
[ "python", "unicode", "utf_8" ]
stackoverflow_0062114695_python_unicode_utf_8.txt
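A slightly fuller sketch of the round trip, using only built-ins:

label = '0941'
char = chr(int(label, 16))  # parse the hex label, then look up the code point
print(char)                 # prints the character U+0941
print(f"{ord(char):04x}")   # '0941' - back to the original label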
Q: What is the path that Django uses for locating and loading templates? I'm following this tutorial on a Windows 7 environment. My settings file has this definition: TEMPLATE_DIRS = ( 'C:/django-project/myapp/mytemplates/admin' ) I got the base_template from the template admin/base_site.html from within the default Django admin template directory in the source code of Django itself (django/contrib/admin/templates) into an admin subdirectory of myapp directory as the tutorial instructed, but it doesn't seem to take affect for some reason. Any clue of what might be the problem? A: I know this isn't in the Django tutorial, and shame on them, but it's better to set up relative paths for your path variables. You can set it up like so: import os.path PROJECT_PATH = os.path.realpath(os.path.dirname(__file__)) ... MEDIA_ROOT = os.path.join(PROJECT_PATH, 'media/') TEMPLATE_DIRS = [ os.path.join(PROJECT_PATH, 'templates/'), ] This way you can move your Django project and your path roots will update automatically. This is useful when you're setting up your production server. Second, there's something suspect to your TEMPLATE_DIRS path. It should point to the root of your template directory. Also, it should also end in a trailing /. I'm just going to guess here that the .../admin/ directory is not your template root. If you still want to write absolute paths you should take out the reference to the admin template directory. TEMPLATE_DIRS = [ 'C:/django-project/myapp/mytemplates/', ] With that being said, the template loaders by default should be set up to recursively traverse into your app directories to locate template files. TEMPLATE_LOADERS = [ 'django.template.loaders.filesystem.load_template_source', 'django.template.loaders.app_directories.load_template_source', # 'django.template.loaders.eggs.load_template_source', ] You shouldn't need to copy over the admin templates unless if you specifically want to overwrite something. You will have to run a syncdb if you haven't run it yet. You'll also need to statically server your media files if you're hosting django through runserver. A: If using Django settings as installed, then why not just use its baked-in, predefined BASE_DIR and TEMPLATES? In the pip installed Django(v1.8), I get: BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [ ### ADD YOUR DIRECTORY HERE LIKE SO: BASE_DIR + '/templates/', ], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] A: Smart solution in Django 2.0.3 for keeping templates in project directory (/root/templates/app_name): settings.py BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) TEMP_DIR = os.path.join(BASE_DIR, 'templates') ... TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [TEMP_DIR], ... 
in views.py just add such template path: app_name/html_name A: For Django 1.6.6: BASE_DIR = os.path.dirname(os.path.dirname(__file__)) TEMPLATE_DIRS = os.path.join(BASE_DIR, 'templates') Also static and media for debug and production mode: STATIC_URL = '/static/' MEDIA_URL = '/media/' if DEBUG: STATIC_ROOT = os.path.join(BASE_DIR, 'static') MEDIA_ROOT = os.path.join(BASE_DIR, 'media') else: STATIC_ROOT = %REAL_PATH_TO_PRODUCTION_STATIC_FOLDER% MEDIA_ROOT = %REAL_PATH_TO_PRODUCTION_MEDIA_FOLDER% Into urls.py you must add: from django.conf.urls import patterns, include, url from django.contrib import admin from django.conf.urls.static import static from django.conf import settings from news.views import Index admin.autodiscover() urlpatterns = patterns('', url(r'^admin/', include(admin.site.urls)), ... ) urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT) In Django 1.8 you can set template paths, backend and other parameters for templates in one dictionary (settings.py): TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [ path.join(BASE_DIR, 'templates') ], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] Official docs. A: I also had issues with this part of the tutorial (used tutorial for version 1.7). My mistake was that I only edited the 'Django administration' string, and did not pay enough attention to the manual. This is the line from django/contrib/admin/templates/admin/base_site.html: <h1 id="site-name"><a href="{% url 'admin:index' %}">{{ site_header|default:_('Django administration') }}</a></h1> But after some time and frustration it became clear that there was the 'site_header or default:_' statement, which should be removed. So after removing the statement (like the example in the manual everything worked like expected). Example manual: <h1 id="site-name"><a href="{% url 'admin:index' %}">Polls Administration</a></h1> A: Alright Let's say you have a brand new project, if so you would go to settings.py file and search for TEMPLATES once you found it you just paste this line os.path.join(BASE_DIR, 'template') in 'DIRS' At the end, you should get somethings like this : TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [ os.path.join(BASE_DIR, 'template') ], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] If you want to know where your BASE_DIR directory is located type these 3 simple commands: python3 manage.py shell Once you're in the shell : >>> from django.conf import settings >>> settings.BASE_DIR PS: If you named your template folder with another name, you would change it here too. 
A: In django 3.1, go to setting of your project and import os TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [os.path.join(BASE_DIR, "templates")], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] A: Contrary to some answers posted in this thread, adding 'DIRS': ['templates'] has no effect - it's redundant - since templates is the default path where Django looks for templates. If you are attempting to reference an app's template, ensure that your app is in the list of INSTALLED_APPS in the main project settings.py. INSTALLED_APPS': [ # ... 'my_app', ] Quoting Django's Templates documentation: class DjangoTemplates¶ Set BACKEND to 'django.template.backends.django.DjangoTemplates' to configure a Django template engine. When APP_DIRS is True, DjangoTemplates engines look for templates in the templates subdirectory of installed applications. This generic name was kept for backwards-compatibility. When you create an application for your project, there's no templates directory inside the application directory. Django admin doesn't create the directory for you by default. Below's another paragraph from Django Tutorial documentation, which is even clearer: Your project’s TEMPLATES setting describes how Django will load and render templates. The default settings file configures a DjangoTemplates backend whose APP_DIRS option is set to True. By convention DjangoTemplates looks for a “templates” subdirectory in each of the INSTALLED_APPS. A: In django 2.2 this is explained here https://docs.djangoproject.com/en/2.2/howto/overriding-templates/ import os BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) INSTALLED_APPS = [ ..., 'blog', ..., ] TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [os.path.join(BASE_DIR, 'templates')], 'APP_DIRS': True, ... }, ] A: basically BASE_DIR is your django project directory, same dir where manage.py is. TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [os.path.join(BASE_DIR, 'templates')], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] A: By default django looks for the template folder in apps. But if you want to use template folder from root of project, please create a template folder on root of project and do the followings in settings.py: import os TEMPLATE_DIR = os.path.join(BASE_DIR, "templates") TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [TEMPLATE_DIR], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] A: You can easily add template folder in settings.py folder, os.path is deprecated in django 3.1, so you can use path instead of os.path. 
You just have to import path in settings.py, you have to specify the base directory, then you have to specify template path, and last but not the least, you have to add template folder path in TEMPLATES = [{}], for example: from pathlib import Path BASE_DIR = Path(__file__).resolve().parent.parent TEMPLATE_DIR = Path(BASE_DIR, 'templates') (you can name TEMPLATE_DIR to any name) TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [TEMPLATE_DIR], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] A: One interesting thing I noted for templates searching TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', #'DIRS': [os.path.join(BASE_DIR,"templates")], 'DIRS': [], 'APP_DIRS': True, if the app folder have templates sub-folder then only it is searched and listed under Template-loader postmortem If app/templates do not exist, it is not listed in error messages. Understanding this will prevent newbee to add template folders via DIRS directive
What is the path that Django uses for locating and loading templates?
I'm following this tutorial on a Windows 7 environment. My settings file has this definition: TEMPLATE_DIRS = ( 'C:/django-project/myapp/mytemplates/admin' ) I copied the base_template from the template admin/base_site.html from within the default Django admin template directory in the source code of Django itself (django/contrib/admin/templates) into an admin subdirectory of the myapp directory as the tutorial instructed, but it doesn't seem to take effect for some reason. Any clue what the problem might be?
[ "I know this isn't in the Django tutorial, and shame on them, but it's better to set up relative paths for your path variables. You can set it up like so:\nimport os.path\n\nPROJECT_PATH = os.path.realpath(os.path.dirname(__file__))\n\n...\n\nMEDIA_ROOT = os.path.join(PROJECT_PATH, 'media/')\n\nTEMPLATE_DIRS = [\n os.path.join(PROJECT_PATH, 'templates/'),\n]\n\nThis way you can move your Django project and your path roots will update automatically. This is useful when you're setting up your production server.\nSecond, there's something suspect to your TEMPLATE_DIRS path. It should point to the root of your template directory. Also, it should also end in a trailing /.\nI'm just going to guess here that the .../admin/ directory is not your template root. If you still want to write absolute paths you should take out the reference to the admin template directory.\nTEMPLATE_DIRS = [\n 'C:/django-project/myapp/mytemplates/',\n]\n\nWith that being said, the template loaders by default should be set up to recursively traverse into your app directories to locate template files.\nTEMPLATE_LOADERS = [\n 'django.template.loaders.filesystem.load_template_source',\n 'django.template.loaders.app_directories.load_template_source',\n # 'django.template.loaders.eggs.load_template_source',\n]\n\nYou shouldn't need to copy over the admin templates unless if you specifically want to overwrite something.\nYou will have to run a syncdb if you haven't run it yet. You'll also need to statically server your media files if you're hosting django through runserver.\n", "If using Django settings as installed, then why not just use its baked-in, predefined BASE_DIR and TEMPLATES? In the pip installed Django(v1.8), I get: \nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) \n\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [\n ### ADD YOUR DIRECTORY HERE LIKE SO:\n BASE_DIR + '/templates/',\n ],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\n", "Smart solution in Django 2.0.3 for keeping templates in project directory (/root/templates/app_name):\nsettings.py\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nTEMP_DIR = os.path.join(BASE_DIR, 'templates')\n...\nTEMPLATES = [\n{\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [TEMP_DIR],\n...\n\nin views.py just add such template path:\napp_name/html_name\n\n", "For Django 1.6.6:\nBASE_DIR = os.path.dirname(os.path.dirname(__file__))\nTEMPLATE_DIRS = os.path.join(BASE_DIR, 'templates')\n\nAlso static and media for debug and production mode:\nSTATIC_URL = '/static/'\nMEDIA_URL = '/media/'\nif DEBUG:\n STATIC_ROOT = os.path.join(BASE_DIR, 'static')\n MEDIA_ROOT = os.path.join(BASE_DIR, 'media')\nelse:\n STATIC_ROOT = %REAL_PATH_TO_PRODUCTION_STATIC_FOLDER%\n MEDIA_ROOT = %REAL_PATH_TO_PRODUCTION_MEDIA_FOLDER%\n\nInto urls.py you must add:\nfrom django.conf.urls import patterns, include, url\nfrom django.contrib import admin\nfrom django.conf.urls.static import static\nfrom django.conf import settings\n\nfrom news.views import Index\n\nadmin.autodiscover()\n\nurlpatterns = patterns('',\n url(r'^admin/', include(admin.site.urls)),\n ...\n )\n\nurlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)\n\nIn 
Django 1.8 you can set template paths, backend and other parameters for templates in one dictionary (settings.py):\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [\n path.join(BASE_DIR, 'templates')\n ],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\nOfficial docs.\n", "I also had issues with this part of the tutorial (used tutorial for version 1.7).\nMy mistake was that I only edited the 'Django administration' string, and did not pay enough attention to the manual.\nThis is the line from django/contrib/admin/templates/admin/base_site.html:\n<h1 id=\"site-name\"><a href=\"{% url 'admin:index' %}\">{{ site_header|default:_('Django administration') }}</a></h1>\n\nBut after some time and frustration it became clear that there was the 'site_header or default:_' statement, which should be removed. So after removing the statement (like the example in the manual everything worked like expected).\nExample manual:\n<h1 id=\"site-name\"><a href=\"{% url 'admin:index' %}\">Polls Administration</a></h1>\n\n", "Alright Let's say you have a brand new project, if so you would go to settings.py file and search for TEMPLATES once you found it you just paste this line os.path.join(BASE_DIR, 'template') in 'DIRS' At the end, you should get somethings like this : \nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [\n os.path.join(BASE_DIR, 'template')\n ],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\nIf you want to know where your BASE_DIR directory is located type these 3 simple commands: \npython3 manage.py shell\n\nOnce you're in the shell :\n>>> from django.conf import settings\n>>> settings.BASE_DIR\n\nPS: If you named your template folder with another name, you would change it here too. \n", "In django 3.1, go to setting of your project and import os\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [os.path.join(BASE_DIR, \"templates\")],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\n", "Contrary to some answers posted in this thread, adding 'DIRS': ['templates'] has no effect - it's redundant - since templates is the default path where Django looks for templates.\nIf you are attempting to reference an app's template, ensure that your app is in the list of INSTALLED_APPS in the main project settings.py.\nINSTALLED_APPS': [\n # ...\n 'my_app',\n]\n\nQuoting Django's Templates documentation:\n\nclass DjangoTemplates¶\nSet BACKEND to 'django.template.backends.django.DjangoTemplates' to configure a Django template engine.\nWhen APP_DIRS is True, DjangoTemplates engines look for templates\nin the templates subdirectory of installed applications. 
This generic name was kept for backwards-compatibility.\n\nWhen you create an application for your project, there's no templates directory inside the application directory. Django admin doesn't create the directory for you by default.\nBelow's another paragraph from Django Tutorial documentation, which is even clearer:\n\nYour project’s TEMPLATES setting describes how Django will load and render templates. The default settings file configures a DjangoTemplates backend whose APP_DIRS option is set to True. By convention DjangoTemplates looks for a “templates” subdirectory in each of the INSTALLED_APPS.\n\n", "In django 2.2 this is explained here \nhttps://docs.djangoproject.com/en/2.2/howto/overriding-templates/\nimport os\n\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\nINSTALLED_APPS = [\n ...,\n 'blog',\n ...,\n]\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [os.path.join(BASE_DIR, 'templates')],\n 'APP_DIRS': True,\n ...\n },\n]\n\n", "basically BASE_DIR is your django project directory, same dir where manage.py is.\n TEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [os.path.join(BASE_DIR, 'templates')],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n ]\n\n\n", "By default django looks for the template folder in apps. But if you want to use template folder from root of project, please create a template folder on root of project and do the followings in settings.py:\nimport os\n\nTEMPLATE_DIR = os.path.join(BASE_DIR, \"templates\")\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [TEMPLATE_DIR],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\n", "You can easily add template folder in settings.py folder, os.path is deprecated in django 3.1, so you can use path instead of os.path. You just have to import path in settings.py, you have to specify the base directory, then you have to specify template path, and last but not the least, you have to add template folder path in TEMPLATES = [{}], for example:\nfrom pathlib import Path\n\nBASE_DIR = Path(__file__).resolve().parent.parent\n\nTEMPLATE_DIR = Path(BASE_DIR, 'templates') (you can name TEMPLATE_DIR to any name)\n\nTEMPLATES = [\n{\n\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n\n 'DIRS': [TEMPLATE_DIR],\n\n 'APP_DIRS': True,\n\n 'OPTIONS': {\n\n 'context_processors': [\n\n 'django.template.context_processors.debug',\n\n 'django.template.context_processors.request',\n\n 'django.contrib.auth.context_processors.auth',\n\n 'django.contrib.messages.context_processors.messages',\n\n ],\n },\n},\n\n]\n", "One interesting thing I noted for templates searching\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n #'DIRS': [os.path.join(BASE_DIR,\"templates\")],\n 'DIRS': [],\n 'APP_DIRS': True,\n\nif the app folder have templates sub-folder then only it is searched and listed under Template-loader postmortem\nIf app/templates do not exist, it is not listed in error messages. 
Understanding this will prevent newbies from adding template folders via the DIRS directive unnecessarily. (A quick way to check where a template name resolves is sketched below.)\n" ]
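A quick way to verify which file a template name actually resolves to, whether it comes from DIRS or from an app's templates folder, is to ask the loader directly inside python manage.py shell. A minimal sketch, assuming a reasonably recent Django version ('admin/base_site.html' is only an example name):

>>> from django.template.loader import get_template
>>> t = get_template('admin/base_site.html')  # any template you expect to exist
>>> t.origin.name                             # absolute path of the file Django picked
>>> from django.conf import settings
>>> settings.TEMPLATES[0]['DIRS']             # the extra directories being searched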
[ 203, 41, 15, 9, 6, 4, 3, 3, 1, 1, 1, 0, 0 ]
[]
[]
[ "django", "django_templates", "python" ]
stackoverflow_0003038459_django_django_templates_python.txt
Q: Python, Only allow a certain IP address to access get-request

I'm fairly new to Python and this may be a stupid question, but I have been working on a small project that listens for a GET request, strips off the useful information, and disregards things like the sender's IP or port number. I had my server up for less than a day and noticed a few failed attempts to access it. Is it possible for me to code something along these lines?
An example of what my server does:
192.168.1.12:1000/{Server uses info after the slash}
Is it possible to make something along the lines of:
IP = Requesters IP
if IP != My_IP:
    deny connection

A: It's been two years now, but I'm putting the answer here for anyone who needs it.
You have to make your own handler, and only give access to the allowed IPs:
from http.server import HTTPServer, SimpleHTTPRequestHandler

class MyHttpHandler(SimpleHTTPRequestHandler):
    allowed_addresses = [
        '192.168.1.50',
    ]

    def __init__(self, *args, directory=None, **kwargs) -> None:
        super().__init__(*args, directory=directory, **kwargs)

    def do_GET(self):
        """Serve a GET request."""
        address = self.address_string()  # access the client's IP with this function
        if address in self.allowed_addresses:
            super().do_GET()
        else:
            # answer explicitly instead of leaving the client hanging
            self.send_error(403)

HTTPServer(('', 8080), MyHttpHandler).serve_forever()

The same goes for do_POST if you want to implement it.
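A related variant worth noting: instead of repeating the IP check in every do_* method, the filtering can be done once at the server level by overriding verify_request from socketserver.BaseServer, which every incoming request passes through before a handler is created. A minimal sketch under the same assumptions (the allow-list address and port 8080 are placeholders):
from http.server import HTTPServer, SimpleHTTPRequestHandler

ALLOWED_IPS = {'192.168.1.50'}  # placeholder allow-list

class FilteredHTTPServer(HTTPServer):
    def verify_request(self, request, client_address):
        # client_address is an (ip, port) tuple; returning False makes the
        # server drop the request before any do_GET/do_POST handler runs
        return client_address[0] in ALLOWED_IPS

FilteredHTTPServer(('', 8080), SimpleHTTPRequestHandler).serve_forever()

Note that with this approach a rejected client simply has its connection closed rather than receiving a 403 response, which may or may not be what you want.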
Python, Only allow a certain IP address to access get-request
I'm fairly new to Python and this may be a stupid question, but I have been working on a small project that listens for a GET request, strips off the useful information, and disregards things like the sender's IP or port number. I had my server up for less than a day and noticed a few failed attempts to access it. Is it possible for me to code something along these lines? An example of what my server does: 192.168.1.12:1000/{Server uses info after the slash} Is it possible to make something along the lines of: IP = Requesters IP if IP != My_IP: deny connection
[ "It's been two years now, but I'm putting the answer here for anyone who needs it.\nYou have to make you own handler, and only give access to the IPs:\nfrom http.server import HTTPServer, SimpleHTTPRequestHandler\n\nclass MyHttpHandler(SimpleHTTPRequestHandler):\n allowed_addresses = [\n '192.168.1.50',\n ]\n\n def __init__(self, *args, directory=None, **kwargs) -> None:\n super().__init__(*args, directory=directory, **kwargs)\n\n def do_GET(self):\n \"\"\"Serve a GET request.\"\"\"\n address = self.address_string() # access the IP with this function\n if address in self.allowed_addresses:\n super().do_GET()\n\nHTTPServer(('', 8080), MyHttpHandler).serve_forever()\n\nThe same goes for do_POST if you want to implement it.\n" ]
[ 0 ]
[]
[]
[ "get_request", "python" ]
stackoverflow_0060689548_get_request_python.txt
Q: keras , val_accuracy, val_loss is loss: 0.0000e+00 val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 problem
First of all, I am using 100 classes with 150 videos per class, and I divide them into an 80% training set and a 20% validation set.
Below is my code:
def generator(filePath, labelList):
    tmp = [[x, y] for x, y in zip(filePath, labelList)]
    np.random.shuffle(tmp)
    Files = [n[0] for n in tmp]
    Labels = [n[1] for n in tmp]
    for File, Label in zip(Files, Labels):
        File = np.load(File)
        #x = tf.squeeze(File,1)
        #x = tf.squeeze(x,2)
        #PoolingOutput = tf.keras.layers.AveragePooling1D()(x)
        #PoolingOutput = tf.squeeze(PoolingOutput)
        #x = tf.squeeze(PoolingOutput)
        #---------------------------------------------------------
        x = tf.squeeze(File)
        transformed_label = encoder.transform([Label])
        yield x, transformed_label[0]

train_dataset = tf.data.Dataset.from_generator(
    generator,
    args = (TrainFilePath, TrainLabelList),
    output_types=(tf.float64, tf.int16),
    output_shapes=((20, 2048), len(EncoderOnlyList)))

train_dataset = train_dataset.batch(8).prefetch(tf.data.experimental.AUTOTUNE)
#train_dataset = train_dataset.batch(16)

valid_dataset = tf.data.Dataset.from_generator(
    generator,
    args = (ValiFilePath, VailLabelPath),
    output_types=(tf.float64, tf.int16),
    output_shapes=((20, 2048), len(EncoderOnlyList)))

valid_dataset = valid_dataset.batch(8).prefetch(tf.data.experimental.AUTOTUNE)
#valid_dataset = valid_dataset.batch(16)

with tf.device(device_name):
    model = Sequential()
    model.add(keras.layers.Input(shape=(20, 2048),))
    model.add(tf.keras.layers.Masking(mask_value=0.))
    model.add(tf.keras.layers.LSTM(256))
    model.add(tf.keras.layers.Dropout(0.5))
    model.add(tf.keras.layers.Dense(128, activation='relu'))
    model.add(tf.keras.layers.Dropout(0.5))
    model.add(tf.keras.layers.Dense(100, activation='softmax'))

    model.compile(optimizer=rmsprop, loss='categorical_crossentropy', metrics=['accuracy'])
    model.fit(train_dataset, epochs=20, validation_data=valid_dataset)

    model.save_weights('/content/drive/MyDrive/Resnet50BaseWeight_3.h5', overwrite=True)
    model.save("/content/drive/MyDrive/Resnet50Base_3.h5")

And the result is like this:
Epoch 1/20
1500/1500 [==============================] - 97s 61ms/step - loss: 0.0000e+00 - accuracy: 0.0012 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 2/20
1500/1500 [==============================] - 102s 68ms/step - loss: 0.0000e+00 - accuracy: 0.0086 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 3/20
1500/1500 [==============================] - 91s 60ms/step - loss: 0.0000e+00 - accuracy: 0.0103 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 4/20
1500/1500 [==============================] - 95s 63ms/step - loss: 0.0000e+00 - accuracy: 0.0113 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 5/20
1500/1500 [==============================] - 93s 62ms/step - loss: 0.0000e+00 - accuracy: 0.0103 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 6/20
1500/1500 [==============================] - 92s 61ms/step - loss: 0.0000e+00 - accuracy: 0.0098 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Even as the epochs increase, the accuracy no longer improves, and most of the results come out as 0.0000e+00. I don't know what is wrong; please help.
A: The problem is a mismatch between the logits shape and the label encoding: you need to map your targets onto the 100 classes, shaped as (100,) one-hot vectors or the equivalent sparse integer form.
Sample: the loss function and the metrics expect inputs of matching types, compared step by step over the same or different inputs during estimation and evaluation. RMSprop can be used with almost anything and still be correct, but you need to scope the estimators.
import tensorflow as tf

import os
from os.path import exists

"""""""""""""""""""""
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
None
"""""""""""""""""""""
physical_devices = tf.config.experimental.list_physical_devices('GPU')
assert len(physical_devices) > 0, "Not enough GPU hardware devices available"
config = tf.config.experimental.set_memory_growth(physical_devices[0], True)
print(physical_devices)
print(config)

"""""""""""""""""""""
: Variables
"""""""""""""""""""""
class_100_names = ['apple', 'aquarium_fish', 'baby', 'bear', 'beaver',
'bed', 'bee', 'beetle', 'bicycle', 'bottle',
'bowl', 'boy', 'bridge', 'bus', 'butterfly',
'camel', 'can', 'castle', 'caterpillar',
'cattle', 'chair', 'chimpanzee', 'clock', 'cloud',
'cockroach', 'couch', 'crab', 'crocodile', 'cup',
'dinosaur', 'dolphin', 'elephant', 'flatfish', 'forest',
'fox', 'girl', 'hamster', 'house', 'kangaroo',
'keyboard', 'lamp', 'lawn_mower', 'leopard', 'lion',
'lizard', 'lobster', 'man', 'maple_tree', 'motorcycle',
'mountain', 'mouse', 'mushroom', 'oak_tree', 'orange',
'orchid', 'otter', 'palm_tree', 'pear', 'pickup_truck',
'pine_tree', 'plain', 'plate', 'poppy', 'porcupine',
'possum', 'rabbit', 'raccoon', 'ray', 'road',
'rocket', 'rose', 'sea', 'seal', 'shark',
'shrew', 'skunk', 'skyscraper', 'snail', 'snake',
'spider', 'squirrel', 'streetcar', 'sunflower', 'sweet_pepper',
'table', 'tank', 'telephone', 'television', 'tiger',
'tractor', 'train', 'trout', 'tulip', 'turtle',
'wardrobe', 'whale', 'willow_tree', 'wolf', 'woman', 'worm']

checkpoint_path = "F:\\models\\checkpoint\\" + os.path.basename(__file__).split('.')[0] + "\\TF_DataSets_01.h5"
checkpoint_dir = os.path.dirname(checkpoint_path)

if not exists(checkpoint_dir):
    os.mkdir(checkpoint_dir)
    print("Create directory: " + checkpoint_dir)

"""""""""""""""""""""
: Dataset
"""""""""""""""""""""
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.cifar100.load_data(label_mode='fine')

"""""""""""""""""""""
: Model Initialize
"""""""""""""""""""""
model = tf.keras.models.Sequential([
    tf.keras.layers.InputLayer(input_shape=( 32, 32, 3 )),
    tf.keras.layers.Normalization(mean=3., variance=2.),
    tf.keras.layers.Normalization(mean=4., variance=6.),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Reshape((128, 225)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(96, return_sequences=True, return_state=False)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(96)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(192, activation='relu'),
    tf.keras.layers.Dense(100),
])

"""""""""""""""""""""
: Callback
"""""""""""""""""""""
class custom_callback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs['accuracy'] >= 0.95:
            self.model.stop_training = True

custom_callback = custom_callback()

"""""""""""""""""""""
: Optimizer
"""""""""""""""""""""
optimizer = tf.keras.optimizers.RMSprop(
    learning_rate=0.001,
    rho=0.9,
    momentum=0.0,
    epsilon=1e-07,
    centered=False,
    # decay=None, # {'lr', 'global_clipnorm', 'clipnorm', 'decay', 'clipvalue'}
    # clipnorm=None,
    # clipvalue=None,
    # global_clipnorm=None,
    # use_ema=False,
    # ema_momentum=0.99,
    # ema_overwrite_frequency=100,
    # jit_compile=True,
    name='RMSprop',
)

"""""""""""""""""""""
: Loss Fn
"""""""""""""""""""""
lossfn = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=False,  # the Dense(100) head above emits raw logits, so from_logits=True would match it
    reduction=tf.keras.losses.Reduction.AUTO,
    name='sparse_categorical_crossentropy'
)

"""""""""""""""""""""
: Model Summary
"""""""""""""""""""""
# model.compile(optimizer=optimizer, loss=lossfn, metrics=['accuracy'])
model.compile(optimizer=optimizer,
              loss=lossfn,
              metrics=['accuracy'])

"""""""""""""""""""""
: FileWriter
"""""""""""""""""""""
if exists(checkpoint_path):
    model.load_weights(checkpoint_path)
    print("model load: " + checkpoint_path)
    input("Press Any Key!")

"""""""""""""""""""""
: Training
"""""""""""""""""""""
history = model.fit( train_images, train_labels, batch_size=100, epochs=10000, callbacks=[custom_callback] )
model.save_weights(checkpoint_path)

Output: comparing across 100 classes is not the problem; with new or non-identical inputs the model should still pick out the favored classes.
2022-11-26 12:02:17.507553: I tensorflow/stream_executor/cuda/cuda_dnn.cc:368] Loaded cuDNN version 8100
500/500 [==============================] - 34s 54ms/step - loss: 10.1518 - accuracy: 0.0104
Epoch 2/10000
500/500 [==============================] - 27s 53ms/step - loss: 9.5093 - accuracy: 0.0122
Epoch 3/10000
500/500 [==============================] - 26s 53ms/step - loss: 9.2861 - accuracy: 0.0127
Epoch 4/10000
462/500 [==========================>...] - ETA: 2s - loss: 9.1570 - accuracy: 0.0126

Output: images categorized; the number of classes does not affect much, but how identical they are does, while waiting for the CIFAR100 training ...
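Relating this back to the original model: with loss='categorical_crossentropy' the labels passed to fit must be one-hot vectors of length 100, matching the Dense(100, activation='softmax') output, while plain integer class ids call for 'sparse_categorical_crossentropy' instead. A minimal sketch of the two consistent pairings (the label values and shapes are assumptions, since the encoder code is not shown):
import numpy as np
import tensorflow as tf

num_classes = 100
int_labels = np.array([3, 42, 7])  # integer class ids (assumed)
onehot_labels = tf.keras.utils.to_categorical(int_labels, num_classes)

# Pairing 1: one-hot labels of shape (batch, 100) with categorical_crossentropy
#   model.compile(loss='categorical_crossentropy', ...); model.fit(x, onehot_labels)

# Pairing 2: integer labels of shape (batch,) with sparse_categorical_crossentropy
#   model.compile(loss='sparse_categorical_crossentropy', ...); model.fit(x, int_labels)

A categorical cross-entropy that is exactly 0.0000e+00 from the very first step usually means the label vectors are all zeros, so there is nothing to penalize; checking transformed_label[0].shape and its contents against a proper one-hot vector of length 100 is a quick sanity test.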
keras , val_accuracy, val_loss is loss: 0.0000e+00 val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00 problem
First of all, I am using 100 classes with 150 videos per class, and I divide them into an 80% training set and a 20% validation set.
Below is my code:
def generator(filePath, labelList):
    tmp = [[x, y] for x, y in zip(filePath, labelList)]
    np.random.shuffle(tmp)
    Files = [n[0] for n in tmp]
    Labels = [n[1] for n in tmp]
    for File, Label in zip(Files, Labels):
        File = np.load(File)
        #x = tf.squeeze(File,1)
        #x = tf.squeeze(x,2)
        #PoolingOutput = tf.keras.layers.AveragePooling1D()(x)
        #PoolingOutput = tf.squeeze(PoolingOutput)
        #x = tf.squeeze(PoolingOutput)
        #---------------------------------------------------------
        x = tf.squeeze(File)
        transformed_label = encoder.transform([Label])
        yield x, transformed_label[0]

train_dataset = tf.data.Dataset.from_generator(
    generator,
    args = (TrainFilePath, TrainLabelList),
    output_types=(tf.float64, tf.int16),
    output_shapes=((20, 2048), len(EncoderOnlyList)))

train_dataset = train_dataset.batch(8).prefetch(tf.data.experimental.AUTOTUNE)
#train_dataset = train_dataset.batch(16)

valid_dataset = tf.data.Dataset.from_generator(
    generator,
    args = (ValiFilePath, VailLabelPath),
    output_types=(tf.float64, tf.int16),
    output_shapes=((20, 2048), len(EncoderOnlyList)))

valid_dataset = valid_dataset.batch(8).prefetch(tf.data.experimental.AUTOTUNE)
#valid_dataset = valid_dataset.batch(16)

with tf.device(device_name):
    model = Sequential()
    model.add(keras.layers.Input(shape=(20, 2048),))
    model.add(tf.keras.layers.Masking(mask_value=0.))
    model.add(tf.keras.layers.LSTM(256))
    model.add(tf.keras.layers.Dropout(0.5))
    model.add(tf.keras.layers.Dense(128, activation='relu'))
    model.add(tf.keras.layers.Dropout(0.5))
    model.add(tf.keras.layers.Dense(100, activation='softmax'))

    model.compile(optimizer=rmsprop, loss='categorical_crossentropy', metrics=['accuracy'])
    model.fit(train_dataset, epochs=20, validation_data=valid_dataset)

    model.save_weights('/content/drive/MyDrive/Resnet50BaseWeight_3.h5', overwrite=True)
    model.save("/content/drive/MyDrive/Resnet50Base_3.h5")

And the result is like this:
Epoch 1/20
1500/1500 [==============================] - 97s 61ms/step - loss: 0.0000e+00 - accuracy: 0.0012 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 2/20
1500/1500 [==============================] - 102s 68ms/step - loss: 0.0000e+00 - accuracy: 0.0086 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 3/20
1500/1500 [==============================] - 91s 60ms/step - loss: 0.0000e+00 - accuracy: 0.0103 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 4/20
1500/1500 [==============================] - 95s 63ms/step - loss: 0.0000e+00 - accuracy: 0.0113 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 5/20
1500/1500 [==============================] - 93s 62ms/step - loss: 0.0000e+00 - accuracy: 0.0103 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 6/20
1500/1500 [==============================] - 92s 61ms/step - loss: 0.0000e+00 - accuracy: 0.0098 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Even as the epochs increase, the accuracy no longer improves, and most of the results come out as 0.0000e+00. I don't know what is wrong; please help.
[ "it is the logits shape and categorizes mismatches you need to select 100 target classes with different shape, and forms it into ( 100, 0 ) or equivalent.\n\nSample: Identical is expecting of input types or compared among them, loss function and statistics matrix calculation from estimating and evaluation by step of the same input or different input. RMS can use with anything still correct but you need to make scopes the estimators.\n\nimport tensorflow as tf\n\nimport os\nfrom os.path import exists\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]\nNone\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nphysical_devices = tf.config.experimental.list_physical_devices('GPU')\nassert len(physical_devices) > 0, \"Not enough GPU hardware devices available\"\nconfig = tf.config.experimental.set_memory_growth(physical_devices[0], True)\nprint(physical_devices)\nprint(config)\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Variables\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nclass_100_names = ['apple', 'aquarium_fish', 'baby', 'bear', 'beaver', \n'bed', 'bee', 'beetle', 'bicycle', 'bottle', \n'bowl', 'boy', 'bridge', 'bus', 'butterfly', \n'camel', 'can', 'castle', 'caterpillar', \n'cattle', 'chair', 'chimpanzee', 'clock', 'cloud', \n'cockroach', 'couch', 'crab', 'crocodile', 'cup', \n'dinosaur', 'dolphin', 'elephant', 'flatfish', 'forest', \n'fox', 'girl', 'hamster', 'house', 'kangaroo', \n'keyboard', 'lamp', 'lawn_mower', 'leopard', 'lion', \n'lizard', 'lobster', 'man', 'maple_tree', 'motorcycle', \n'mountain', 'mouse', 'mushroom', 'oak_tree', 'orange', \n'orchid', 'otter', 'palm_tree', 'pear', 'pickup_truck', \n'pine_tree', 'plain', 'plate', 'poppy', 'porcupine', \n'possum', 'rabbit', 'raccoon', 'ray', 'road', \n'rocket', 'rose', 'sea', 'seal', 'shark', \n'shrew', 'skunk', 'skyscraper', 'snail', 'snake', \n'spider', 'squirrel', 'streetcar', 'sunflower', 'sweet_pepper', \n'table', 'tank', 'telephone', 'television', 'tiger', \n'tractor', 'train', 'trout', 'tulip', 'turtle', \n'wardrobe', 'whale', 'willow_tree', 'wolf', 'woman', 'worm'] \n\ncheckpoint_path = \"F:\\\\models\\\\checkpoint\\\\\" + os.path.basename(__file__).split('.')[0] + \"\\\\TF_DataSets_01.h5\"\ncheckpoint_dir = os.path.dirname(checkpoint_path)\n\nif not exists(checkpoint_dir) : \n os.mkdir(checkpoint_dir)\n print(\"Create directory: \" + checkpoint_dir)\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Dataset\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.cifar100.load_data(label_mode='fine')\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Model Initialize\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nmodel = tf.keras.models.Sequential([\n tf.keras.layers.InputLayer(input_shape=( 32, 32, 3 )),\n tf.keras.layers.Normalization(mean=3., variance=2.),\n tf.keras.layers.Normalization(mean=4., variance=6.),\n 
tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),\n tf.keras.layers.MaxPooling2D((2, 2)),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Reshape((128, 225)),\n tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(96, return_sequences=True, return_state=False)),\n tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(96)),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(192, activation='relu'),\n tf.keras.layers.Dense(100),\n])\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Callback\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nclass custom_callback(tf.keras.callbacks.Callback):\n def on_epoch_end(self, epoch, logs={}):\n if( logs['accuracy'] >= 0.95 ):\n self.model.stop_training = True\n \ncustom_callback = custom_callback()\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Optimizer\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\noptimizer = tf.keras.optimizers.RMSprop(\n learning_rate=0.001,\n rho=0.9,\n momentum=0.0,\n epsilon=1e-07,\n centered=False,\n # decay=None, # {'lr', 'global_clipnorm', 'clipnorm', 'decay', 'clipvalue'}\n # clipnorm=None,\n # clipvalue=None,\n # global_clipnorm=None,\n # use_ema=False,\n # ema_momentum=0.99,\n # ema_overwrite_frequency=100,\n # jit_compile=True,\n name='RMSprop',\n)\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Loss Fn\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\" \nlossfn = tf.keras.losses.SparseCategoricalCrossentropy(\n from_logits=False,\n reduction=tf.keras.losses.Reduction.AUTO,\n name='sparse_categorical_crossentropy'\n)\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Model Summary\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n# model.compile(optimizer=optimizer, loss=lossfn, metrics=['accuracy'])\nmodel.compile(optimizer=optimizer,\n loss=lossfn,\n metrics=['accuracy'])\n \n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: FileWriter\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nif exists(checkpoint_path) :\n model.load_weights(checkpoint_path)\n print(\"model load: \" + checkpoint_path)\n input(\"Press Any Key!\")\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Training\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nhistory = model.fit( train_images, train_labels, batch_size=100, epochs=10000, callbacks=[custom_callback] )\nmodel.save_weights(checkpoint_path)\n\n\nOutput: Compares 100 classes it is not the problem, updates new or none identical input should select favorites classes.\n\n2022-11-26 12:02:17.507553: I tensorflow/stream_executor/cuda/cuda_dnn.cc:368] Loaded cuDNN version 8100\n500/500 [==============================] - 34s 54ms/step - loss: 10.1518 - accuracy: 0.0104\nEpoch 2/10000\n500/500 [==============================] - 27s 53ms/step - loss: 9.5093 - 
accuracy: 0.0122\nEpoch 3/10000\n500/500 [==============================] - 26s 53ms/step - loss: 9.2861 - accuracy: 0.0127\nEpoch 4/10000\n462/500 [==========================>...] - ETA: 2s - loss: 9.1570 - accuracy: 0.0126\n\n\nOutput: Image categorized, number of types is not effects much but how identical they are, when waiting for the CIFAR100 training ...\n\n\n" ]
[ 0 ]
[]
[]
[ "keras", "lstm", "python", "tensorflow" ]
stackoverflow_0074580072_keras_lstm_python_tensorflow.txt