Dataset columns:

content             stringlengths  85 – 101k
title               stringlengths  0 – 150
question            stringlengths  15 – 48k
answers             list
answers_scores      list
non_answers         list
non_answers_scores  list
tags                list
name                stringlengths  35 – 137
SyntaxError: invalid non-printable character U+0016. What is the cause?
import discord
from discord.ext import commands
import secrets
from secrets import TOKEN

client = discord.Client()

@client.event
async def on_ready():
    print("Bot is ready.")

client.run(f"{TOKEN}")

I'm using VSC and I just wrote this code into my main file. I'm not sure why, but it keeps saying:

File "<stdin>", line 1
    ▬▬▬& C:/Users/username/AppData/Local/Programs/Python/Python311/python.exe c:/Users/username/OneDrive/Desktop/PythonCode/main.py
    ^
SyntaxError: invalid non-printable character U+0016

Anyone know what would be the cause of it?
[ "Your Python file seems perfectly fine from a syntax point of view. This WebRepl has no problem parsing it: https://www.online-python.com/LE9maKwF8z . (It can't find discord, but that's fine and the syntax is ok).\nIt looks like a VSC problem. Try saving the file and running it with\npython3 main.py\n\nto validate that it is a VSC problem.\nEdit:\nMaybe one of the quotes is off, but it isn't in the Stack Overflow paste.\n" ]
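Two hedged pointers beyond the answer above: the File "<stdin>" in the traceback suggests the VSC run command was pasted into an open interactive Python prompt rather than a shell, and a minimal sketch (assuming the script is saved as main.py; adjust the path as needed) can locate any stray control character such as U+0016 in the file itself:

import unicodedata

# Report any control characters (Unicode category "Cc") hiding in the file.
with open("main.py", encoding="utf-8") as f:
    for lineno, line in enumerate(f, 1):
        for col, ch in enumerate(line.rstrip("\n"), 1):
            if unicodedata.category(ch) == "Cc" and ch != "\t":
                print(f"line {lineno}, col {col}: U+{ord(ch):04X}")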
[ 0 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0074586820_discord_discord.py_python.txt
webbrowser.open(site) doesn't process korean characters
I'm quite an infrequent coder, so I hope my question isn't too obvious. I have this very simple code to open some websites based on a string (open a website for a specific word), which works on Windows but somehow doesn't on my new computer with macOS. The tricky part is that I'm using the Korean alphabet (I learn the language and therefore research websites to create flashcards), which somehow doesn't land properly in the website's URL when opened by this simple script.

Example: if I run python3 flashcard.py 가다 in my terminal, I would expect it to open (among others):

https://en.dict.naver.com/#/search?query=가다

But unfortunately it opens:

https://en.dict.naver.com/#/search?query=???

which means the Korean characters are somehow not recognised and are changed to question marks. I tested different parts of the code with print statements, and everything down to the for loop works fine, so the culprit is webbrowser.open(). I tried encoding the strings, but then I usually get errors, so apparently I'm not doing it right. I have Korean installed as a language in both the system and the browser. Has anyone experienced a similar issue and resolved it?

import sys
import webbrowser
import pyperclip

# Get search word from command line
search_word = sys.argv[1]

# Sites to search for search word
sites = [
    f'https://en.dict.naver.com/#/search?query={search_word}',
    f'https://search.naver.com/search.naver?where=image&sm=tab_jum&query={search_word}',
    # f'https://ko.dict.naver.com/#/search?query={search_word}',
    f'https://forvo.com/word/{search_word}/#ko',
    # f'https://translate.google.com/#view=home&op=translate&sl=ko&tl=en&text={search_word}',
    # f'https://papago.naver.com/?sk=ko&tk=en&st={search_word}',
    # f'https://ko.wiktionary.org/wiki/{search_word}#%ED%95%9C%EA%B5%AD%EC%96%B4'
]

# Search for search word in each site
for site in sites:
    webbrowser.open(site)

# Copy search word to clipboard
pyperclip.copy(search_word)
[ "I think the problem is that the word isn't getting properly URL-encoded (i.e. '가다' needs to be converted to '%EA%B0%80%EB%8B%A4' for use in a URL). Some browsers deal with this differently than others, and I think you're seeing a difference between the browser you use on Windows vs. on macOS. To encode it, you can use:\nimport urllib.parse\nurl_search_word = urllib.parse.quote(search_word)\n\n...and then use {url_search_word} instead of {search_word} in the sites strings.\n", "In Google Colab, this code works:\nfrom IPython.display import Javascript\nkorean_url = 'https://en.dict.naver.com/#/search?query=가다'\ndef open_website(url):\n    display(Javascript('window.open(\"{url}\");'.format(url=url)))\n\nopen_website(korean_url)\n\nAs you can see, it also works from inside a function.\n" ]
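A minimal sketch of the quoting fix from the first answer applied to the original script (the site list is shortened here for brevity):

import sys
import urllib.parse
import webbrowser

search_word = sys.argv[1]
# '가다' becomes '%EA%B0%80%EB%8B%A4', which every browser accepts.
url_search_word = urllib.parse.quote(search_word)

sites = [
    f'https://en.dict.naver.com/#/search?query={url_search_word}',
    f'https://forvo.com/word/{url_search_word}/#ko',
]
for site in sites:
    webbrowser.open(site)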
[ 0, 0 ]
[]
[]
[ "macos", "python", "python_webbrowser" ]
stackoverflow_0074586320_macos_python_python_webbrowser.txt
How to swap two Elasticsearch indexes
I want to implement a cache for a highly loaded Elasticsearch-based search system, storing the cache in a special Elastic index. The problem is cache warm-up: once an hour my system needs to replace the cached results with fresh ones. So I create a new empty index and fill it with updated results, and then I need to swap the old index and the new index so users can use the fresh cached results. The question is: how do I swap two Elasticsearch indexes efficiently?
[ "For this kind of scenario you use something that is called \"index alias swapping\".\nYou have an alias that points to your current index, you fill a new index with the fresh records, and then you point this alias to the new index.\nSomething like this:\n\nCurrent index name is items-2022-11-26-001\nCreate alias items pointing to items-2022-11-26-001\n\nPOST _aliases\n{\n \"actions\": [\n {\n \"add\": {\n \"index\": \"items-2022-11-26-001\",\n \"alias\": \"items\"\n }\n }\n ]\n}\n\n\nCreate new index with fresh data items-2022-11-26-002\nWhen it finishes, now point the items alias to items-2022-11-26-002\n\nPOST _aliases\n{\n \"actions\": [\n {\n \"remove\": {\n \"index\": \"items-2022-11-26-001\",\n \"alias\": \"items\"\n }\n },\n {\n \"add\": {\n \"index\": \"items-2022-11-26-002\",\n \"alias\": \"items\"\n }\n }\n ]\n}\n\n\nDelete items-2022-11-26-001\n\nYou run all your queries against \"items\" alias that will act as an index.\nReferences:\nhttps://www.elastic.co/guide/en/elasticsearch/reference/current/indices-aliases.html\n" ]
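The same swap can be scripted; below is a hedged sketch using the official elasticsearch-py client. The host URL and index names are placeholders, and the exact keyword for passing the actions varies between client versions (older clients take a body= dict instead):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Both actions are applied atomically, so queries against "items" never see a gap.
es.indices.update_aliases(actions=[
    {"remove": {"index": "items-2022-11-26-001", "alias": "items"}},
    {"add": {"index": "items-2022-11-26-002", "alias": "items"}},
])

# Drop the stale index once the alias points at the new one.
es.indices.delete(index="items-2022-11-26-001")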
[ 1 ]
[]
[]
[ "elasticsearch", "full_text_search", "python" ]
stackoverflow_0074581027_elasticsearch_full_text_search_python.txt
How can I iterate a list with repeated values?
I'm having a problem in my code where I'm trying to check if (in my case) there is already a review created with that title by that reviewer. For that I'm doing:

def review_result(self):
    print("Complete your review")
    title = input("Title of the paper: ")
    reviewer = input("Reviewer's name: ")
    for x in self.__review:
        if x == title:
            index = self.__review.index(x)
            if self.__review[index + 1] == reviewer:

But in my self.__review list I can have the same title repeated multiple times, all with different reviewers, for example:

['Book1', 'Rev1', 'Book1', 'Rev2', 'Book1', 'Rev3']

When I have 2 reviews of the same paper I can't access the 2nd review, because for x in self.__review only finds the 1st value that appears. Is there any way I can see the next 'x' in that for x in self.__review loop?
[ "You can use enumerate:\nfor index, x in enumerate(self.__review):\n current_review = x\n next_review = self.__review[index + 1] # note that this will error when you reach the last item on the list, make sure you have handling for it!\n # more code\n\n" ]
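Since the list stores flat (title, reviewer) pairs, another option besides enumerate is to walk it two elements at a time; a minimal sketch assuming the layout from the question:

review = ['Book1', 'Rev1', 'Book1', 'Rev2', 'Book1', 'Rev3']

def review_exists(review, title, reviewer):
    # Pair every even-indexed title with the reviewer that follows it.
    for t, r in zip(review[::2], review[1::2]):
        if t == title and r == reviewer:
            return True
    return False

print(review_exists(review, 'Book1', 'Rev2'))  # True
print(review_exists(review, 'Book1', 'Rev9'))  # False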
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074586958_python.txt
How do I filter a dataframe based on complicated conditions?
Right now my dataframes look like this (I simplified them because the original has hundreds of rows):

import pandas as pd

Winner = [[1938, "Italy"], [1950, "Uruguay"], [2014, "Germany"]]
df = pd.DataFrame(Winner, columns=['Year', 'Winner'])
print(df)

MatchB = [[1938, "Germany", 1.0], [1938, "Germany", 2.0], [1938, "Brazil", 1.0],
          [1950, "Italy", 2.0], [1950, "Spain", 2.0], [1950, "Spain", 1.0],
          [1950, "Spain", 1.0], [1950, "Brazil", 1.0], [2014, "Italy", 2.0],
          [2014, "Spain", 3.0], [2014, "Germany", 1.0]]
df2B = pd.DataFrame(MatchB, columns=['Year', 'Away Team Name', 'Away Team Goals'])
df2B

I would like to filter df2B so that I keep the rows where the "Year" and "Away Team Name" match df:

Filtered List (Simplified)

I checked Google but can't find anything useful.
[ "You can merge.\ndf = pd.merge(left=df, right=df2B, left_on=[\"Year\", \"Winner\"], right_on=[\"Year\", \"Away Team Name\"])\nprint(df)\n\nOutput:\n Year Winner Away Team Name Away Team Goals\n0 2014 Germany Germany 1.0\n\n" ]
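If only df2B's own rows and columns are wanted, without df's columns attached, a hedged isin-based alternative using the same frames as in the question:

# Build (Year, team) keys for both frames and keep the df2B rows whose
# key appears among the winners.
mask = pd.MultiIndex.from_frame(df2B[["Year", "Away Team Name"]]).isin(
    pd.MultiIndex.from_frame(df[["Year", "Winner"]])
)
print(df2B[mask])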
[ 0 ]
[]
[]
[ "dataframe", "filter", "pandas", "python" ]
stackoverflow_0074586976_dataframe_filter_pandas_python.txt
How would I get multiple inputs from Entry widgets on a button press in Tkinter?
I'm working on a project for a friend where I need to make 12 entries and have them saved as an XML file when I press the button, but the input gets duplicated into the other boxes and only prints once.

import tkinter as tk
from tkinter import *
from tkinter.ttk import *

# GUI
#---------------------------------------------------------------
#

# Root Setup and size
root = tk.Tk()
root.geometry('900x500')
root.title('Box Designer')

# Top Label
label = tk.Label(root, text = 'Please Enter Parameters', font = ('Arial', 18))
label.pack(padx= 20, pady= 20)

# Submit Button
sv = StringVar()

def callback():
    print(sv.get())
    return True

button_border = tk.Frame(root, highlightbackground = "black", highlightthickness = 2, bd=0,)
donebtn = tk.Button(button_border, text = 'Submit', fg = 'black', font = (15), default="active", command=callback)
root.bind('<Return>', lambda e: donebtn.invoke())

# Entry Boxes for X1-12

# setup columns
entryframe = tk.Frame(root)
entryframe.columnconfigure(0)
entryframe.columnconfigure(1)

# X1
entry1 = tk.Entry(entryframe, textvariable= sv)
entry1.grid(row=0, column = 1, sticky = tk.W+tk.E)
entry1Label = tk.Label(entryframe, text= 'X1:')
entry1Label.grid(row=0, column = 0, sticky = tk.W+tk.E)

# X2
entry2 = tk.Entry(entryframe, textvariable= sv)
entry2.grid(row=0, column = 3, sticky = tk.W+tk.E)
entry2Label = tk.Label(entryframe, text= 'X2:')
entry2Label.grid(row=0, column = 2, sticky = tk.W+tk.E)

# Packs
entryframe.pack(side= 'bottom', fill= 'x', pady= 60)
donebtn.pack()
button_border.pack()
root.mainloop()

I'm trying to use a StringVar to record the inputs and then print them with a callback on my button.
[ "As mentioned in the comment in your question, your code with added stringvars and modified callback:\nimport tkinter as tk\nfrom tkinter import *\nfrom tkinter.ttk import *\n\n# GUI\n# ---------------------------------------------------------------\n#\n\n# Root Setup and size\nroot = tk.Tk()\nroot.geometry('900x500')\nroot.title('Box Designer')\n\n# Top Label\nlabel = tk.Label(root, text='Please Enter Parameters', font=('Arial', 18))\nlabel.pack(padx=20, pady=20)\n\n\n# Submit Button\n\ndef callback():\n for idx, field in enumerate((field1, field2), 1):\n print(f'field{idx}: {field.get()}')\n return True\n\n\nbutton_border = tk.Frame(root, highlightbackground=\"black\", highlightthickness=2, bd=0, )\ndonebtn = tk.Button(button_border, text='Submit', fg='black', font=(15), default=\"active\", command=callback)\nroot.bind('<Return>', lambda e: donebtn.invoke())\n\n# Entry Boxes for X1-12\n\n# setup columns\nentryframe = tk.Frame(root)\nentryframe.columnconfigure(0)\nentryframe.columnconfigure(1)\n\n# X1\nfield1 = StringVar(root)\nentry1 = tk.Entry(entryframe, textvariable=field1)\nentry1.grid(row=0, column=1, sticky=tk.W + tk.E)\nentry1Label = tk.Label(entryframe, text='X1:')\nentry1Label.grid(row=0, column=0, sticky=tk.W + tk.E)\n\n# X2\nfield2 = StringVar(root)\nentry2 = tk.Entry(entryframe, textvariable=field2)\nentry2.grid(row=0, column=3, sticky=tk.W + tk.E)\nentry2Label = tk.Label(entryframe, text='X2:')\nentry2Label.grid(row=0, column=2, sticky=tk.W + tk.E)\n\n# Packs\nentryframe.pack(side='bottom', fill='x', pady=60)\ndonebtn.pack()\nbutton_border.pack()\nroot.mainloop()\n\n" ]
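Since the project ultimately needs 12 entries, here is a hedged sketch that generates the fields in a loop instead of hand-writing entry1 through entry12; it plugs into the root/entryframe setup from the answer above:

fields = []
for i in range(12):
    var = tk.StringVar(root)
    # Two label/entry pairs per grid row: columns 0-1 and 2-3.
    tk.Label(entryframe, text=f'X{i + 1}:').grid(row=i // 2, column=(i % 2) * 2, sticky=tk.W + tk.E)
    tk.Entry(entryframe, textvariable=var).grid(row=i // 2, column=(i % 2) * 2 + 1, sticky=tk.W + tk.E)
    fields.append(var)

def callback():
    # One StringVar per entry, so every box reports its own value.
    for idx, var in enumerate(fields, 1):
        print(f'X{idx}: {var.get()}')
    return True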
[ 0 ]
[]
[]
[ "python", "python_3.x", "tkinter" ]
stackoverflow_0074586821_python_python_3.x_tkinter.txt
Why is this command not working? Is it me or the code?
When I try to use the command !addrole it is supposed to give me the role, but the bot won't.

import discord
from discord.ext import commands

client = commands.Bot(command_prefix='!', intents = intents)

@client.command(pass_context=True)
@commands.has_role("ADMIN")
async def addrole(ctx):
    member = ctx.message.author
    role = get(member.server.roles, name="Test")
    await client.add_roles(member, role)

The things I have tried are making it so the bot has the highest rank on the server, making sure the bot has administrator permissions, and changing the command I used to !addrole [member] [role], but none of that worked. I'm also not getting any errors, and yes, I do have the ADMIN role. Is it the code, or am I using the command wrong? If so, what command should I use?
[ "Seems like you are missing message intents.\nintents.messages = True\nclient = commands.Bot(command_prefix=\"!\", intents = intents)\n# ...\n\nFollow the instructions here to enable message intents like any other intent on the bot's developer portal:\nhttps://discordpy.readthedocs.io/en/stable/intents.html\n" ]
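The snippet above assumes an intents object already exists; a hedged sketch of the full setup for discord.py 2.x (where prefix commands need the message content intent, get must be imported from discord.utils, and the pre-1.0 member.server/client.add_roles became member.guild and member.add_roles):

import discord
from discord.ext import commands
from discord.utils import get

intents = discord.Intents.default()
intents.message_content = True  # must also be enabled in the developer portal

client = commands.Bot(command_prefix="!", intents=intents)

@client.command()
@commands.has_role("ADMIN")
async def addrole(ctx):
    # Look up the role on the guild and attach it to the command's author.
    role = get(ctx.guild.roles, name="Test")
    await ctx.author.add_roles(role)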
[ 1 ]
[]
[]
[ "discord.py", "python" ]
stackoverflow_0074586797_discord.py_python.txt
NetworkX edge_betweenness_centrality function does not take weight into account
I am trying to randomize the link weights of a NetworkX graph to change the betweenness centrality of that network. I have verified that the weights do change, but it does not change the output of the edge_betweenness_centrality function. I suspect it has something to do with adding objects to the nodes and edges, as I was able to get it working as long as I didn't have these. However, I need the objects for other parts of my code, so deleting them isn't an option. I have tried load_centrality and betweenness_centrality, and both give me the same issue. I have also verified my inputs are integers or floats. I understand the functions interpret the values as distances, so I have tried both the normal weight and the reciprocal of the weight. I've been stuck on this issue for weeks and would appreciate any help or ideas. Thank you. The following is my code.

import networkx as nx
import numpy as np
from itertools import combinations, groupby
import random

class Link(object):
    def __init__(self, link_id, rate, distance, network):
        self.link_id = link_id
        self.rate = rate
        self.distance = distance
        self.network = network
        self.buffer = []

class Node:
    def __init__(self, node_id, network):
        self.node_id = node_id
        self.len = 0
        self.queue = []
        self.network = network
        self.packet_delay = []
        self.packets = []

def gnp_random_connected_graph(n, p):
    graph_seed = np.random.default_rng(2021)
    network = nx.Graph()
    node_dict = {}
    for node in range(n):
        new_node = Node(node, network)
        node_dict[node] = new_node
    network.add_nodes_from(node_dict.items())
    edges = list(combinations(range(n), 2))
    link_dict = {}
    for id, node_edges in groupby(edges, key=lambda x: x[0]):
        rate = random.randint(8000000, 40000000)  # bps
        distance = random.randint(10, 185)  # meters
        node_edges = list(node_edges)
        random_edge = tuple(graph_seed.choice(node_edges))
        link_dict[random_edge] = Link(random_edge, rate, distance, network)
        for e in node_edges:
            if graph_seed.random() < p:
                link_dict[e] = Link(e, rate, distance, network)
    for key, value in link_dict.items():
        nodes = list(network.nodes)
        network.add_edge(nodes[key[0]], nodes[key[1]], obj=value, weight=int(value.distance), rp=float(1 / (value.distance)))
    return network

def randomize_weights(network):
    for edge in network.edges():
        rand_num = random.randint(10, 100)
        network[edge[0]][edge[1]]['weight'] = rand_num
        network[edge[0]][edge[1]]['rp'] = 1 / rand_num
    return network

org_net = gnp_random_connected_graph(10, .1)
#print("edges weight", nx.get_edge_attributes(org_net, 'weight'))
bc_dict = nx.edge_betweenness_centrality(org_net, weight='rp')
print("OLD", bc_dict)

new_net = randomize_weights(org_net.copy())
#print("edges weight", nx.get_edge_attributes(new_net, 'weight'))
new_bc_dict = nx.edge_betweenness_centrality(new_net, weight='rp')
print("NEW", new_bc_dict)
[ "I am relatively inexperienced with network algorithms in general, so take my answer and terminology here with a boulder of salt.\nFirst off, I think your code actually does work. Try, for example, redefining org_net as:\norg_net = nx.random_geometric_graph(10, 3, seed=42)\n\nThis does indeed produce different betweenness centrality values for the edges. From my reading of the definition of betweenness centrality, I think your problem is that the structure of the graph you've chosen actually means that the betweenness centrality values of your edges are essentially independent of weights. So randomizing the weights results in the same values, regardless of how they're assigned. Betweenness centrality only depends on the number of shortest paths, and if you draw your example network I think you'll see that randomizing the weights won't affect your result:\norg_net = gnp_random_connected_graph(10,.1)\npos = nx.spring_layout(org_net, seed=42)\nnx.draw(org_net)\n\ngives\n\nOne other potentially conflicting problem that may have tripped up your troubleshooting is that networkx (as is often the case, unfortunately) silently ignores an issue that one might expect to throw an error. For example, consider\nbc_dict = nx.edge_betweenness_centrality(org_net,weight='banana')\n\nThis doesn't throw an error despite there being no edge attribute 'banana'.\nEDIT:\nIf you simply add a few edges to your 10-node org_net, you'll find that the betweenness centralities of your nodes do indeed change with the rest of your current code untouched:\na = [node for node in org_net.nodes()]\n\norg_net.add_edge(a[4], a[9])\norg_net.add_edge(a[6], a[9])\norg_net.add_edge(a[7], a[9])\n\nbc_dict = nx.edge_betweenness_centrality(org_net, weight='rp')\nnew_net = randomize_weights(org_net.copy())\nnew_bc_dict = nx.edge_betweenness_centrality(new_net, weight='rp')\n\nbc_dict == new_bc_dict\n\nreturns False.\n" ]
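A small guard against the silently-ignored-attribute pitfall mentioned in the answer; a minimal sketch that verifies every edge carries the weight key before computing centrality:

weight_key = "rp"
missing = [e for e in org_net.edges() if weight_key not in org_net.edges[e]]
assert not missing, f"edges missing '{weight_key}': {missing}"

# Safe to rely on the weighted shortest paths now.
bc_dict = nx.edge_betweenness_centrality(org_net, weight=weight_key)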
[ 0 ]
[]
[]
[ "networkx", "python" ]
stackoverflow_0074584989_networkx_python.txt
How to sample `n=1000` vectors from a multivariate normal distribution?
I want to sample n=1000 vectors of dimension m=10 from a multivariate normal distribution with mean vector (0, 0, ..., 0) and identity covariance matrix I_m, and then divide each vector by its l2 norm. Based on the answer, I tried the following code:

import random
import numpy as np

m = 2
n = 5
random.seed(1000001)
x = np.random.multivariate_normal(np.zeros(m), np.eye(m), size=n)
print(x)

[[ 0.93503543 -0.00605634]
 [-0.42033252  0.08350352]
 [ 0.58507136 -0.07849799]
 [ 0.79762498  0.26868063]
 [ 1.31544479  0.79820179]]

Normalized:

# Calculate the norms on axis zero
axis_0_norms = np.linalg.norm(x, axis=0)
#print(f"Norms on axis 0 = {axis_0_norms}\n")

# Normalise the arrays
normalized_x = x / axis_0_norms
print("Normalized data:\n", normalized_x)

Normalized data:
 [[ 0.48221541 -0.00712517]
 [-0.21677341  0.09824033]
 [ 0.30173234 -0.09235142]
 [ 0.41135025  0.31609774]
 [ 0.6783997   0.93906949]]

But 0.48221541**2 + (-0.00712517)**2 is not 1.
[ "Use np.zeros(), and np.eye(), and size, to provide the parameters for the multivariate_normal function in order to create the array. Then normalize the data using the l2 norm parameter of the normalize function from sklearn. We can then validate this l2 normalization by checking the sum of the squared values in each row of the data.\n\n\nSo firstly, let us create the array:\nimport numpy as np\nimport pandas as pd\nfrom sklearn import preprocessing\n\n# Set the seed for reproducibility\nrng = np.random.default_rng(42)\n\n# Create the array\nm = 10\nn = 1000\nX = rng.multivariate_normal(np.zeros(m), np.eye(m), size=n)\n\n# Display the data within a dataframe\ndf_X = pd.DataFrame(X)\nprint(\"Original X:\\n\", df_X.head(5))\n\n\nOUTPUT: \nShowing the first 5/1000 rows of the Original array (X)\nOriginal X:\n\n\n\n\nNow let us normalize the array using the preprocessing.normalize() function from sklearn.\n# Normalize X using l2 norms\nX_normalized = preprocessing.normalize(X, norm='l2')\n\n# Display the normalized array within a dataframe\ndf_norm = pd.DataFrame(X_normalized)\nprint(\"X_normalized:\\n\", df_norm.head(5))\n\n\nOUTPUT: \nShowing the first 5/1000 rows of the normalized array.\nX_normalized:\n\n\n\n\n\nAnd finally, we can now check the validity of this normalized array by checking that the sum of the squared values in each row is equal to 1.\n# Confirm l2 normalization by checking the sum of the squared values in each row. \n# Should equal 1 in each row\nX_normalized_squared = X_normalized ** 2\nX_sum_squared = np.sum(X_normalized_squared, axis=1)\n\n# Display the sum of the squared values for each row within a dataframe\ndf_sum = pd.DataFrame(X_sum_squared, columns=[\"Sum\"])\nprint(\"X_sum_squared:\\n\", df_sum.head(5))\n\n\nOUTPUT: \nShowing the first 5/1000 rows.\nSum of the squared values for each row.\nX_sum_squared:\n\n\n" ]
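For reference, the bug in the question's own attempt is the axis: np.linalg.norm(x, axis=0) computes one norm per column, while row-wise l2 normalization needs one norm per row. A pure-NumPy sketch of the fix:

# One norm per row; keepdims=True makes the division broadcast row-wise.
row_norms = np.linalg.norm(x, axis=1, keepdims=True)
normalized_x = x / row_norms
print(np.sum(normalized_x ** 2, axis=1))  # every entry should be 1.0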
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074587012_python.txt
How to count the extracted elements?
I need to know the number of links returned by the extraction below:

for produtos in classeprodutos:
    link = produtos.find_element(By.TAG_NAME, "a")
    lista_link.append(print(link.get_attribute("href")))
[ "You have to pass an iterable, meaning a list, to the len() function to count the total number, like:\nprint(len(classeprodutos))\n\n#OR\nlista_link = []\nfor produtos in classeprodutos:\n    link = produtos.find_element(By.TAG_NAME, \"a\")\n    lista_link.append(link.get_attribute(\"href\"))\n\nprint(len(lista_link))\n\n" ]
[ 1 ]
[]
[]
[ "python", "selenium" ]
stackoverflow_0074586395_python_selenium.txt
How to filter data from a dataframe so that only values linked to a specific string remain (Python)
So I got this dataframe showing the leading causes of death for each year in Chile.

Original Dataframe

What I want to do is to make something like this:

What I want to make

I want that so I can see how a specific cause of death varies across the years shown. I made the dataframe so that "Causas 2 de año 2016" is a different column from "% 1" (% 2016). Later I want to try plotting these variations. I'm new to Python; right now I'm using it in a Jupyter Notebook. Thanks in advance.

I tried using .loc but absolutely failed. I really don't know how to approach the problem.
[ "You should use df.query. With that function one can build a filter on the dataframe.\n", "There is probably a better way, but in the meantime this gives exactly what you asked for:\nimport pandas as pd\n\ndf = pd.read_csv(\"causas.csv\")\n\ncausa = \"Enfermedades cerebrovasculares\"\n\ni = df.columns\ndf3 = pd.DataFrame()\nfor j in range(0, 14, 2):\n    df2 = df[df[i[j]] == causa][[i[j], i[j+1]]]\n    df3[i[j]] = df2[i[j]].to_list()\n    df3[i[j+1]] = df2[i[j+1]].to_list()\n\nprint(df3)\n\n" ]
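A minimal sketch of what the df.query suggestion from the first answer could look like; "Causas 2016" is a hypothetical column name standing in for whichever column holds the causes in the real dataframe:

causa = "Enfermedades cerebrovasculares"
# Backticks allow column names with spaces; @causa references the Python variable.
subset = df.query("`Causas 2016` == @causa")
print(subset)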
[ 0, 0 ]
[]
[]
[ "dataframe", "jupyter_notebook", "numpy", "pandas", "python" ]
stackoverflow_0074578711_dataframe_jupyter_notebook_numpy_pandas_python.txt
Connecting Django to MySQL on an M1 Mac
I want to connect Django to MySQL, so I tried pip install mysqlclient, but I get the following error.

[running environment]
machine: M1 Mac, macOS Ventura
mysql 8.0.30: installed by brew

[what I tried]
installing CMake: did not solve it

Please let me know the latest solution for an M1 Mac.

[error logs]
$ pip install mysqlclient
Collecting mysqlclient
  Using cached mysqlclient-2.1.1.tar.gz (88 kB)
  Preparing metadata (setup.py) ... done
Building wheels for collected packages: mysqlclient
  Building wheel for mysqlclient (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py bdist_wheel did not run successfully.
  │ exit code: 1
  ╰─> [41 lines of output]
      mysql_config --version
      ['8.0.30']
      mysql_config --libs
      ['-L/opt/homebrew/Cellar/mysql/8.0.30/lib', '-lmysqlclient', '-lz', '-L/opt/homebrew/lib', '-lzstd', '-L/opt/homebrew/opt/openssl@1.1/lib', '-lssl', '-lcrypto', '-lresolv']
      mysql_config --cflags
      ['-I/opt/homebrew/Cellar/mysql/8.0.30/include/mysql']
      ext_options:
        library_dirs: ['/opt/homebrew/Cellar/mysql/8.0.30/lib', '/opt/homebrew/lib', '/opt/homebrew/opt/openssl@1.1/lib']
        libraries: ['mysqlclient', 'resolv']
        extra_compile_args: ['-std=c99']
        extra_link_args: []
        include_dirs: ['/opt/homebrew/Cellar/mysql/8.0.30/include/mysql']
        extra_objects: []
        define_macros: [('version_info', "(2,1,1,'final',0)"), ('__version__', '2.1.1')]
      running bdist_wheel
      running build
      running build_py
      creating build
      creating build/lib.macosx-10.9-x86_64-cpython-39
      creating build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb
      copying MySQLdb/__init__.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb
      copying MySQLdb/_exceptions.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb
      copying MySQLdb/connections.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb
      copying MySQLdb/converters.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb
      copying MySQLdb/cursors.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb
      copying MySQLdb/release.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb
      copying MySQLdb/times.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb
      creating build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb/constants
      copying MySQLdb/constants/__init__.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb/constants
      copying MySQLdb/constants/CLIENT.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb/constants
      copying MySQLdb/constants/CR.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb/constants
      copying MySQLdb/constants/ER.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb/constants
      copying MySQLdb/constants/FIELD_TYPE.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb/constants
      copying MySQLdb/constants/FLAG.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb/constants
      running build_ext
      building 'MySQLdb._mysql' extension
      creating build/temp.macosx-10.9-x86_64-cpython-39
      creating build/temp.macosx-10.9-x86_64-cpython-39/MySQLdb
      clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /opt/anaconda3/envs/django/include -arch x86_64 -I/opt/anaconda3/envs/django/include -fPIC -O2 -isystem /opt/anaconda3/envs/django/include -arch x86_64 -Dversion_info=(2,1,1,'final',0) -D__version__=2.1.1 -I/opt/homebrew/Cellar/mysql/8.0.30/include/mysql -I/opt/anaconda3/envs/django/include/python3.9 -c MySQLdb/_mysql.c -o build/temp.macosx-10.9-x86_64-cpython-39/MySQLdb/_mysql.o -std=c99
      xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun
      error: command '/usr/bin/clang' failed with exit code 1
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for mysqlclient
  Running setup.py clean for mysqlclient
Failed to build mysqlclient
Installing collected packages: mysqlclient
  Running setup.py install for mysqlclient ... error
  error: subprocess-exited-with-error

  × Running setup.py install for mysqlclient did not run successfully.
  │ exit code: 1
  ╰─> [43 lines of output]
      mysql_config --version
      ['8.0.30']
      mysql_config --libs
      ['-L/opt/homebrew/Cellar/mysql/8.0.30/lib', '-lmysqlclient', '-lz', '-L/opt/homebrew/lib', '-lzstd', '-L/opt/homebrew/opt/openssl@1.1/lib', '-lssl', '-lcrypto', '-lresolv']
      mysql_config --cflags
      ['-I/opt/homebrew/Cellar/mysql/8.0.30/include/mysql']
      ext_options:
        library_dirs: ['/opt/homebrew/Cellar/mysql/8.0.30/lib', '/opt/homebrew/lib', '/opt/homebrew/opt/openssl@1.1/lib']
        libraries: ['mysqlclient', 'resolv']
        extra_compile_args: ['-std=c99']
        extra_link_args: []
        include_dirs: ['/opt/homebrew/Cellar/mysql/8.0.30/include/mysql']
        extra_objects: []
        define_macros: [('version_info', "(2,1,1,'final',0)"), ('__version__', '2.1.1')]
      running install
      /opt/anaconda3/envs/django/lib/python3.9/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
        warnings.warn(
      running build
      running build_py
      creating build
      creating build/lib.macosx-10.9-x86_64-cpython-39
      creating build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb
      copying MySQLdb/__init__.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb
      copying MySQLdb/_exceptions.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb
      copying MySQLdb/connections.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb
      copying MySQLdb/converters.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb
      copying MySQLdb/cursors.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb
      copying MySQLdb/release.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb
      copying MySQLdb/times.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb
      creating build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb/constants
      copying MySQLdb/constants/__init__.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb/constants
      copying MySQLdb/constants/CLIENT.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb/constants
      copying MySQLdb/constants/CR.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb/constants
      copying MySQLdb/constants/ER.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb/constants
      copying MySQLdb/constants/FIELD_TYPE.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb/constants
      copying MySQLdb/constants/FLAG.py -> build/lib.macosx-10.9-x86_64-cpython-39/MySQLdb/constants
      running build_ext
      building 'MySQLdb._mysql' extension
      creating build/temp.macosx-10.9-x86_64-cpython-39
      creating build/temp.macosx-10.9-x86_64-cpython-39/MySQLdb
      clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /opt/anaconda3/envs/django/include -arch x86_64 -I/opt/anaconda3/envs/django/include -fPIC -O2 -isystem /opt/anaconda3/envs/django/include -arch x86_64 -Dversion_info=(2,1,1,'final',0) -D__version__=2.1.1 -I/opt/homebrew/Cellar/mysql/8.0.30/include/mysql -I/opt/anaconda3/envs/django/include/python3.9 -c MySQLdb/_mysql.c -o build/temp.macosx-10.9-x86_64-cpython-39/MySQLdb/_mysql.o -std=c99
      xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun
      error: command '/usr/bin/clang' failed with exit code 1
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure

× Encountered error while trying to install package.
╰─> mysqlclient

note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
[ "Use:\npip install wheel\npip install --upgrade setuptools\n\n" ]
[ 0 ]
[]
[]
[ "macos", "mysql", "pip", "python" ]
stackoverflow_0074587075_macos_mysql_pip_python.txt
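The build log above fails at xcrun: the Command Line Tools path is missing, which on macOS usually means they must be (re)installed before a C extension such as mysqlclient can compile. A minimal recovery sketch, assuming a standard macOS/Homebrew setup; xcode-select --install is standard Apple tooling and is not part of the answer above:

xcode-select --install        # restore the Command Line Tools that xcrun reports missing
pip install wheel             # then the commands suggested in the answer
pip install --upgrade setuptools
pip install mysqlclient       # retry the build once clang/xcrun resolve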
Q: Separating 2 bar groups in Plotly - Python I have two groups of data I'm working with, which I'd like to show in a bar plot using plotly (example for data is shown below). import numpy as np import plotly.express as px import plotly.graph_objects as go values1 = abs(np.random.normal(0.5, 0.3, 13)) # random data and names values2 = abs(np.random.normal(0.5, 0.3, 13)) values3 = abs(np.random.standard_normal(13)) values4 = abs(np.random.standard_normal(13)) names = ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13'] fig = go.Figure() fig.add_trace(go.Bar( x = names, y = values1, legendgroup="group", legendgrouptitle_text="method one", name="first" )) fig.add_trace(go.Bar( x=names, y=values2, legendgroup="group", name="second" )) fig.add_trace(go.Bar( x=names, y=values3, legendgroup="group2", legendgrouptitle_text="method two", name="first" )) fig.add_trace(go.Bar( x=names, y=values4, legendgroup="group2", name="second" )) fig.update_layout(barmode='group') fig.update_traces(texttemplate='%{y:.2}', textposition='inside') fig.show() This code produces the following graph: I would like to add a space between the two methods for each name (adding space between method one and method two for the two values in each value). I tried using offsetgroup but doesn't seem to work. Any help on the matter would be appreciated. A: Plotly only exposes two spacing settings for bars: the gap between bars and the gap within a group. You can force an extra gap by inserting a null-valued trace in between. fig.update_layout(barmode='group', bargroupgap=0.2) fig = go.Figure() fig.add_trace(go.Bar( x = names, y = values1, legendgroup="group", legendgrouptitle_text="method one", name="first" )) fig.add_trace(go.Bar( x=names, y=values2, legendgroup="group", name="second" )) # add a null-data bar trace to force a gap fig.add_trace(go.Bar( x=names, y=np.full(len(names), np.nan), showlegend=False, )) fig.add_trace(go.Bar( x=names, y=values3, legendgroup="group2", legendgrouptitle_text="method two", name="first" )) fig.add_trace(go.Bar( x=names, y=values4, legendgroup="group2", name="second" )) fig.update_layout(barmode='group')#, bargroupgap=0.2 fig.update_traces(texttemplate='%{y:.2}', textposition='inside') fig.show() A: I think you should use subplots: check this link and see the examples. Subplots
Separating 2 bar groups in Plotly - Python
I have two groups of data I'm working with, which I'd like to show in a bar plot using plotly (example for data is shown below). import numpy as np import plotly.express as px import plotly.graph_objects as go values1 = abs(np.random.normal(0.5, 0.3, 13)) # random data and names values2 = abs(np.random.normal(0.5, 0.3, 13)) values3 = abs(np.random.standard_normal(13)) values4 = abs(np.random.standard_normal(13)) names = ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13'] fig = go.Figure() fig.add_trace(go.Bar( x = names, y = values1, legendgroup="group", legendgrouptitle_text="method one", name="first" )) fig.add_trace(go.Bar( x=names, y=values2, legendgroup="group", name="second" )) fig.add_trace(go.Bar( x=names, y=values3, legendgroup="group2", legendgrouptitle_text="method two", name="first" )) fig.add_trace(go.Bar( x=names, y=values4, legendgroup="group2", name="second" )) fig.update_layout(barmode='group') fig.update_traces(texttemplate='%{y:.2}', textposition='inside') fig.show() This code produces the following graph: I would like to add a space between the two methods for each name (adding space between method one and method two for the two values in each value). I tried using offsetgroup but doesn't seem to work. Any help on the matter would be appreciated.
[ "The only functions to adjust the spacing of bars in plotly are the spacing of bars and the type of spacing within a group. So you can force spacing by inserting a null-valued graph in between.\nfig.update_layout(barmode='group', bargroupgap=0.2)\n\n\nfig = go.Figure()\n\nfig.add_trace(go.Bar(\n x = names,\n y = values1,\n legendgroup=\"group\", \n legendgrouptitle_text=\"method one\",\n name=\"first\"\n))\n\nfig.add_trace(go.Bar(\n x=names,\n y=values2,\n legendgroup=\"group\",\n name=\"second\"\n))\n# add bar plot(null data) \nfig.add_trace(go.Bar(\n x=names,\n y=np.full((1,51),np.NaN),\n showlegend=False,\n))\n \nfig.add_trace(go.Bar(\n x=names,\n y=values3,\n legendgroup=\"group2\",\n legendgrouptitle_text=\"method two\",\n name=\"first\"\n))\n\nfig.add_trace(go.Bar(\n x=names,\n y=values4,\n legendgroup=\"group2\",\n name=\"second\"\n))\n\nfig.update_layout(barmode='group')#, bargroupgap=0.2\nfig.update_traces(texttemplate='%{y:.2}', textposition='inside')\nfig.show()\n\n\n", "I think you should use a subplots: Check this link, and see the examples.\nSubplots\n" ]
[ 1, 0 ]
[]
[]
[ "bar_chart", "plotly_python", "python" ]
stackoverflow_0074585842_bar_chart_plotly_python_python.txt
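The second answer's subplot suggestion can be sketched with plotly's make_subplots; a minimal sketch that reuses the names/values arrays from the question:

from plotly.subplots import make_subplots
import plotly.graph_objects as go

fig = make_subplots(rows=1, cols=2, shared_yaxes=True,
                    subplot_titles=("method one", "method two"))
fig.add_trace(go.Bar(x=names, y=values1, name="first"), row=1, col=1)   # method one pair
fig.add_trace(go.Bar(x=names, y=values2, name="second"), row=1, col=1)
fig.add_trace(go.Bar(x=names, y=values3, name="first"), row=1, col=2)   # method two pair
fig.add_trace(go.Bar(x=names, y=values4, name="second"), row=1, col=2)
fig.update_layout(barmode="group")
fig.show()

The gap between the two subplot panes then plays the role of the spacing the question asks for.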
Q: Web scraping specific td tag from a table with python I am trying to extract the text from the first <td> tag but there are multiple identical class tags in a row which I am having trouble extracting a single one (the final golf score from the golfer, -19 in the example below). I cannot get python to pick it up at all. I have it picking up the golfers name, but that's it. Only want their final score, and bypass the rest, then on to the next golfer. I am new to python and I understand my variables in code below are not the best practice. Trying to scrape: <tr class="PlayerRow__Overview PlayerRow__Overview--expandable Table__TR Table__even"> <td class="Table__TD"> <svg aria-hidden="true" class="PlayerRow__caret__down icon__svg" viewBox="0 0 24 24"><use xlink:href="#icon__caret__down"></use></svg></td> <td class="tl Table__TD">1</td> <td class="tl plyr Table__TD"> <img src="https://a.espncdn.com/combiner/i?img=/i/teamlogos/countries/500/can.png&amp;w=40&amp;h=40&amp;scale=crop" alt="Canada" class="flag mr2"> <a class="AnchorLink leaderboard_player_name" tabindex="0" href="http://www.espn.com/golf/player/_/id/9127/adam-svensson">A. Svensson</a> </td> <td class="Table__TD">-19</td> <td class="Table__TD">73</td> <td class="Table__TD">64</td> <td class="Table__TD">62</td> <td class="Table__TD">64</td> <td class="Table__TD">263</td> <td class="Table__TD">$1,458,000</td> <td class="tc Table__TD">500</td></tr> Tried: from bs4 import BeautifulSoup import requests html_text = requests.get('https://www.espn.com/golf/leaderboard/_/tournamentId/401465506').text soup = BeautifulSoup(html_text, 'lxml') golfers = soup.find_all('tr', class_ = 'PlayerRow__Overview PlayerRow__Overview--expandable Table__TR Table__even') for golfer in golfers: golfer_name = golfer.a.text golfer_score = golfer.td.text print(f'{golfer_name} final score {golfer_score}') Result: Traceback (most recent call last): File "C:\Users\mikef\AppData\Local\Temp\tempCodeRunnerFile.python", line 10, in <module> golfer_score = golfer.td = 'Table__TD'.text AttributeError: 'str' object has no attribute 'text' [Done] exited with code=1 in 0.559 seconds If I comment out the golfer_score then I get a list of all the golfers like I'm intending. Source code can be found at https://www.espn.com/golf/leaderboard/_/tournamentId/401465506 A: You can try the next example using CSS selectors correctly from bs4 import BeautifulSoup import requests html_text = requests.get('https://www.espn.com/golf/leaderboard/_/tournamentId/401465506').text soup = BeautifulSoup(html_text, 'lxml') golfers = soup.find_all('tr', class_ = 'PlayerRow__Overview PlayerRow__Overview--expandable Table__TR Table__even') for golfer in golfers: golfer_name = golfer.select_one('a[class="AnchorLink leaderboard_player_name"]').get_text(strip=True) golfer_score = golfer.select_one('td.Table__TD:nth-child(4)').get_text(strip=True) print(f'{golfer_name} final score {golfer_score}') Output: Adam Svensson final score -19 Callum Tarren final score -17 Brian Harman final score -17 Sahith Theegala final score -17 Joel Dahmen final score -15 Cole Hammer final score -15 Chris Stroud final score -15 Seamus Power final score -15 Alex Smalley final score -15 Robby Shelton final score -14 David Lingmerth final score -14 Erik Barnes final score -14 Wyndham Clark final score -14 Patrick Rodgers final score -14 J.J. Spaun final score -13 Greyson Sigg final score -13 Seung-Yul Noh final score -13 Will Gordon final score -13 Taylor Pendrith final score -13 Taylor Montgomery final score -13 J.T. 
Poston final score -12 Russell Knox final score -12 Danny Lee final score -12 Ben Taylor final score -12 Beau Hossler final score -12 Harry Higgs final score -12 Andrew Putnam final score -12 Ben Martin final score -12 Kevin Kisner final score -11 Zac Blair final score -11 Ben Griffin final score -11 Harris English final score -11 Justin Rose final score -11 Paul Haley II final score -11 Chris Gotterup final score -10 Michael Kim final score -10 Patton Kizzire final score -10 Kevin Streelman final score -10 Hayden Buckley final score -9 Keith Mitchell final score -9 Aaron Baddeley final score -9 Henrik Norlander final score -9 Eric Cole final score -9 Carl Yuan final score -9 Akshay Bhatia final score -8 Denny McCarthy final score -7 Kevin Roy final score -7 Brice Garnett final score -7 Davis Riley final score -7 Jim Herman final score -7 Stephan Jaeger final score -7 Dylan Wu final score -7 Ryan Armour final score -7 Martin Trainer final score -6 Trevor Cone final score -6 Scott Stallings final score -6 Dean Burmester final score -6 Brandon Wu final score -6 Kevin Yu final score -6 Brent Grant final score -6 Jacob Bridgeman final score -6 Matthias Schwab final score -5 Tyson Alexander final score -5 Joseph Bramlett final score -4 Doc Redman final score -4 Justin Suh final score -3 Zecheng Dou final score -1 Andrew Landry final score -1 MJ Daffue final score +1 Kevin Tway final score CUT Sam Ryder final score CUT Stewart Cink final score CUT Brendon Todd final score CUT Vincent Norrman final score CUT Dylan Frittelli final score CUT Adam Long final score CUT Jason Dufner final score CUT Chesson Hadley final score CUT Nick Hardy final score CUT Rory Sabbatini final score CUT Seonghyeon Kim final score CUT Sam Stevens final score CUT Matt Kuchar final score CUT Robert Streb final score CUT Mackenzie Hughes final score CUT Nicolas Echavarria final score CUT Spencer Ralston final score CUT Nate Lashley final score CUT Tyler Duncan final score CUT Peter Malnati final score CUT Kyle Westmoreland final score CUT Adam Schenk final score CUT Brian Gay final score CUT Chris Kirk final score CUT Sung Kang final score CUT Matthew NeSmith final score CUT Zach Johnson final score CUT Webb Simpson final score CUT Cameron Percy final score CUT Conner Godsey final score CUT Chad Ramey final score CUT Scott Piercy final score CUT Sean O'Hair final score CUT John Huh final score CUT Francesco Molinari final score CUT Lee Hodges final score CUT Jason Day final score CUT Brandon Matthews final score CUT Harry Hall final score CUT Kelly Kraft final score CUT Charley Hoffman final score CUT Nick Watney final score CUT Brett Drewitt final score CUT Byeong Hun An final score CUT Camilo Villegas final score CUT Justin Lower final score CUT Harrison Endycott final score CUT Carson Young final score CUT Troy Merritt final score CUT Garrick Higgo final score CUT Aaron Rai final score CUT Tom Hoge final score CUT Luke List final score CUT Andrew Novak final score CUT Matt Wallace final score CUT Ryan Brehm final score CUT Sepp Straka final score CUT Michael Thompson final score CUT Austin Smotherman final score CUT Bill Haas final score CUT Austin Cook final score CUT Jonathan Byrd final score CUT Michael Gligic final score CUT Cameron Champ final score CUT Matti Schmid final score CUT Hank Lebioda final score CUT Davis Thompson final score CUT Taylor Moore final score CUT Tim Weinhart final score CUT Augusto Núñez final score CUT Kevin Chappell final score CUT Palmer Jackson (a) final score CUT Vaughn Taylor final 
score CUT Scott Harrington final score CUT Trevor Werbylo final score CUT Max McGreevy final score CUT Scott Brown final score CUT Philip Knowles final score CUT Brian Stuard final score CUT Tano Goya final score CUT Richy Werenski final score CUT Bryson Nimmer final score CUT Danny Willett final score WD Trey Mullinax final score WD David Lipsky final score WD
Web scraping specific td tag from a table with python
I am trying to extract the text from the first <td> tag but there are multiple identical class tags in a row which I am having trouble extracting a single one (the final golf score from the golfer, -19 in the example below). I cannot get python to pick it up at all. I have it picking up the golfers name, but that's it. Only want their final score, and bypass the rest, then on to the next golfer. I am new to python and I understand my variables in code below are not the best practice. Trying to scrape: <tr class="PlayerRow__Overview PlayerRow__Overview--expandable Table__TR Table__even"> <td class="Table__TD"> <svg aria-hidden="true" class="PlayerRow__caret__down icon__svg" viewBox="0 0 24 24"><use xlink:href="#icon__caret__down"></use></svg></td> <td class="tl Table__TD">1</td> <td class="tl plyr Table__TD"> <img src="https://a.espncdn.com/combiner/i?img=/i/teamlogos/countries/500/can.png&amp;w=40&amp;h=40&amp;scale=crop" alt="Canada" class="flag mr2"> <a class="AnchorLink leaderboard_player_name" tabindex="0" href="http://www.espn.com/golf/player/_/id/9127/adam-svensson">A. Svensson</a> </td> <td class="Table__TD">-19</td> <td class="Table__TD">73</td> <td class="Table__TD">64</td> <td class="Table__TD">62</td> <td class="Table__TD">64</td> <td class="Table__TD">263</td> <td class="Table__TD">$1,458,000</td> <td class="tc Table__TD">500</td></tr> Tried: from bs4 import BeautifulSoup import requests html_text = requests.get('https://www.espn.com/golf/leaderboard/_/tournamentId/401465506').text soup = BeautifulSoup(html_text, 'lxml') golfers = soup.find_all('tr', class_ = 'PlayerRow__Overview PlayerRow__Overview--expandable Table__TR Table__even') for golfer in golfers: golfer_name = golfer.a.text golfer_score = golfer.td.text print(f'{golfer_name} final score {golfer_score}') Result: Traceback (most recent call last): File "C:\Users\mikef\AppData\Local\Temp\tempCodeRunnerFile.python", line 10, in <module> golfer_score = golfer.td = 'Table__TD'.text AttributeError: 'str' object has no attribute 'text' [Done] exited with code=1 in 0.559 seconds If I comment out the golfer_score then I get a list of all the golfers like I'm intending. Source code can be found at https://www.espn.com/golf/leaderboard/_/tournamentId/401465506
[ "You can try the next example using CSS selectors correctly\nfrom bs4 import BeautifulSoup\nimport requests\n\nhtml_text = requests.get('https://www.espn.com/golf/leaderboard/_/tournamentId/401465506').text\n\nsoup = BeautifulSoup(html_text, 'lxml')\ngolfers = soup.find_all('tr', class_ = 'PlayerRow__Overview PlayerRow__Overview--expandable Table__TR Table__even')\nfor golfer in golfers:\n golfer_name = golfer.select_one('a[class=\"AnchorLink leaderboard_player_name\"]').get_text(strip=True)\n golfer_score = golfer.select_one('td.Table__TD:nth-child(4)').get_text(strip=True)\n print(f'{golfer_name} final score {golfer_score}')\n\nOutput:\nAdam Svensson final score -19\nCallum Tarren final score -17\nBrian Harman final score -17\nSahith Theegala final score -17\nJoel Dahmen final score -15\nCole Hammer final score -15\nChris Stroud final score -15\nSeamus Power final score -15\nAlex Smalley final score -15\nRobby Shelton final score -14\nDavid Lingmerth final score -14\nErik Barnes final score -14\nWyndham Clark final score -14\nPatrick Rodgers final score -14\nJ.J. Spaun final score -13\nGreyson Sigg final score -13\nSeung-Yul Noh final score -13\nWill Gordon final score -13\nTaylor Pendrith final score -13\nTaylor Montgomery final score -13\nJ.T. Poston final score -12\nRussell Knox final score -12\nDanny Lee final score -12\nBen Taylor final score -12\nBeau Hossler final score -12\nHarry Higgs final score -12\nAndrew Putnam final score -12\nBen Martin final score -12\nKevin Kisner final score -11\nZac Blair final score -11\nBen Griffin final score -11\nHarris English final score -11\nJustin Rose final score -11\nPaul Haley II final score -11\nChris Gotterup final score -10\nMichael Kim final score -10\nPatton Kizzire final score -10\nKevin Streelman final score -10\nHayden Buckley final score -9\nKeith Mitchell final score -9\nAaron Baddeley final score -9\nHenrik Norlander final score -9\nEric Cole final score -9\nCarl Yuan final score -9\nAkshay Bhatia final score -8\nDenny McCarthy final score -7\nKevin Roy final score -7\nBrice Garnett final score -7\nDavis Riley final score -7\nJim Herman final score -7\nStephan Jaeger final score -7\nDylan Wu final score -7\nRyan Armour final score -7\nMartin Trainer final score -6\nTrevor Cone final score -6\nScott Stallings final score -6\nDean Burmester final score -6\nBrandon Wu final score -6\nKevin Yu final score -6\nBrent Grant final score -6\nJacob Bridgeman final score -6\nMatthias Schwab final score -5\nTyson Alexander final score -5\nJoseph Bramlett final score -4\nDoc Redman final score -4\nJustin Suh final score -3\nZecheng Dou final score -1\nAndrew Landry final score -1\nMJ Daffue final score +1\nKevin Tway final score CUT\nSam Ryder final score CUT\nStewart Cink final score CUT\nBrendon Todd final score CUT\nVincent Norrman final score CUT\nDylan Frittelli final score CUT\nAdam Long final score CUT\nJason Dufner final score CUT\nChesson Hadley final score CUT\nNick Hardy final score CUT\nRory Sabbatini final score CUT\nSeonghyeon Kim final score CUT\nSam Stevens final score CUT\nMatt Kuchar final score CUT\nRobert Streb final score CUT\nMackenzie Hughes final score CUT\nNicolas Echavarria final score CUT\nSpencer Ralston final score CUT\nNate Lashley final score CUT\nTyler Duncan final score CUT\nPeter Malnati final score CUT\nKyle Westmoreland final score CUT\nAdam Schenk final score CUT\nBrian Gay final score CUT\nChris Kirk final score CUT\nSung Kang final score CUT\nMatthew NeSmith final score CUT\nZach Johnson final score 
CUT\nWebb Simpson final score CUT\nCameron Percy final score CUT\nConner Godsey final score CUT\nChad Ramey final score CUT\nScott Piercy final score CUT\nSean O'Hair final score CUT\nJohn Huh final score CUT\nFrancesco Molinari final score CUT\nLee Hodges final score CUT\nJason Day final score CUT\nBrandon Matthews final score CUT\nHarry Hall final score CUT\nKelly Kraft final score CUT\nCharley Hoffman final score CUT\nNick Watney final score CUT\nBrett Drewitt final score CUT\nByeong Hun An final score CUT\nCamilo Villegas final score CUT\nJustin Lower final score CUT\nHarrison Endycott final score CUT\nCarson Young final score CUT\nTroy Merritt final score CUT\nGarrick Higgo final score CUT\nAaron Rai final score CUT\nTom Hoge final score CUT\nLuke List final score CUT\nAndrew Novak final score CUT\nMatt Wallace final score CUT\nRyan Brehm final score CUT\nSepp Straka final score CUT\nMichael Thompson final score CUT\nAustin Smotherman final score CUT\nBill Haas final score CUT\nAustin Cook final score CUT\nJonathan Byrd final score CUT\nMichael Gligic final score CUT\nCameron Champ final score CUT\nMatti Schmid final score CUT\nHank Lebioda final score CUT\nDavis Thompson final score CUT\nTaylor Moore final score CUT\nTim Weinhart final score CUT\nAugusto Núñez final score CUT\nKevin Chappell final score CUT\nPalmer Jackson (a) final score CUT\nVaughn Taylor final score CUT\nScott Harrington final score CUT\nTrevor Werbylo final score CUT\nMax McGreevy final score CUT\nScott Brown final score CUT\nPhilip Knowles final score CUT\nBrian Stuard final score CUT\nTano Goya final score CUT\nRichy Werenski final score CUT\nBryson Nimmer final score CUT\nDanny Willett final score WD\nTrey Mullinax final score WD\nDavid Lipsky final score WD\n\n" ]
[ 0 ]
[]
[]
[ "html", "python", "web_scraping" ]
stackoverflow_0074587155_html_python_web_scraping.txt
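If the score column ever moves, positional indexing of each row's cells is an alternative to the nth-child selector; a sketch assuming the same soup and golfers objects built in the answer:

for golfer in golfers:
    cells = golfer.find_all("td")    # all cells of the row
    name = golfer.find("a", class_="leaderboard_player_name").get_text(strip=True)
    score = cells[3].get_text(strip=True)    # the 4th cell holds the final score
    print(f"{name} final score {score}")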
Q: Renaming a header to a csv file using python and adding numeric value on that Column I'm pretty new to Python programming and would like to seek your expertise/help on how to achieve my goal. So far, what I have done is delete unnecessary columns from a CSV file using Python. Now, I want to rename a specific header "Tags" to "Quantity" in the edited CSV file. I also want to populate that column, since it's blank, and set every cell to "1". Below is the Python script I have so far. Looking forward to your suggestions. Thank you very much! import os import pandas as pd directory = 'path/' ext = ('.csv') for filename in os.listdir(directory): f = os.path.join(directory, filename) if f.endswith(ext): head_tail = os.path.split(f) head_tail1 = 'path/Output' k =head_tail[1] r=k.split(".")[0] p=head_tail1 + "/" + r + " - Output.csv" mydata = pd.read_csv(f) new =mydata[["Part ID","Serial ID","Bin","Cluster","Site","Room","Model MPN","Vendor","Type","State","Tags"]] new.to_csv(p ,index=False) I have Googled and YouTubed possible solutions, but they didn't work and I ran into errors in PyCharm. A: new = mydata[["Part ID","Serial ID","Bin","Cluster","Site","Room","Model MPN","Vendor","Type","State","Tags"]] # added lines new = new.rename(columns={'Tags': 'Quantity'}) new['Quantity'] = 1 new.to_csv(p ,index=False) This should work.
Renaming a header to a csv file using python and adding numeric value on that Column
I'm pretty new to Python programming and would like to seek your expertise/help on how to achieve my goal. So far, what I have done is delete unnecessary columns from a CSV file using Python. Now, I want to rename a specific header "Tags" to "Quantity" in the edited CSV file. I also want to populate that column, since it's blank, and set every cell to "1". Below is the Python script I have so far. Looking forward to your suggestions. Thank you very much! import os import pandas as pd directory = 'path/' ext = ('.csv') for filename in os.listdir(directory): f = os.path.join(directory, filename) if f.endswith(ext): head_tail = os.path.split(f) head_tail1 = 'path/Output' k =head_tail[1] r=k.split(".")[0] p=head_tail1 + "/" + r + " - Output.csv" mydata = pd.read_csv(f) new =mydata[["Part ID","Serial ID","Bin","Cluster","Site","Room","Model MPN","Vendor","Type","State","Tags"]] new.to_csv(p ,index=False) I have Googled and YouTubed possible solutions, but they didn't work and I ran into errors in PyCharm.
[ "new = mydata[[\"Part ID\",\"Serial ID\",\"Bin\",\"Cluster\",\"Site\",\"Room\",\"Model MPN\",\"Vendor\",\"Type\",\"State\",\"Tags\"]]\n\n# added lines\nnew = new.rename(columns={'Tags': 'Quantity'})\nnew['Quantity'] = 1\n\nnew.to_csv(p ,index=False)\n\nThis should work.\n" ]
[ 0 ]
[]
[]
[ "csv", "header", "pandas", "python" ]
stackoverflow_0074587190_csv_header_pandas_python.txt
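Placed back into the question's loop, the answer's two added lines look like this sketch (the variable names f, p, and mydata come from the question's script):

mydata = pd.read_csv(f)
new = mydata[["Part ID", "Serial ID", "Bin", "Cluster", "Site", "Room",
              "Model MPN", "Vendor", "Type", "State", "Tags"]]
new = new.rename(columns={"Tags": "Quantity"})  # rename the header
new["Quantity"] = 1                             # fill every cell of the column with 1
new.to_csv(p, index=False)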
Q: PyTorch backpropagation is too slow by custom loss I'm currently implementing a custom contrastive loss for the network but the training process is very slow. I investigated this problem and finally found that the backpropagation of the custom loss makes the main contribution to the problem. Here is the simplified code. n = 2000 neighbors = 2 x = torch.randn((n, n), requires_grad=True) start = time.time() row_sum = torch.sum(torch.exp(x), dim=1) - torch.exp(torch.diagonal(x, 0)) y = torch.zeros((n, neighbors)) for i in range(n): for j in range(neighbors): y[i][j] = x[i][random.randrange(0, n)] loss = torch.sum(-torch.log(torch.exp(y) / row_sum.view(-1, 1))) # Using broadcasting print('Forward time: {:.3f}'.format(time.time() - start)) start = time.time() loss.backward() print('Backward time: {:.3f}'.format(time.time() - start)) The results are below. Forward time: 0.255 Backward time: 37.502 The backward time depends on n and neighbors. When n = 1000 and neighbors = 2. Forward time: 0.241 Backward time: 9.094 When n = 3000 and neighbors = 2. Forward time: 0.250 Backward time: 136.768 When n = 2000 and neighbors = 1. Forward time: 0.124 Backward time: 22.925 When n = 2000 and neighbors = 3. Forward time: 0.210 Backward time: 58.341 Is there a way to speed up the performance? A: As @Flavia Giammarino mentions, avoiding the loops can speed up the backward time. The torch.gather function does the correct sampling for the problem. >>> t = torch.tensor([[1, 2], [3, 4]]) >>> torch.gather(t, 1, torch.tensor([[0, 0], [1, 0]])) tensor([[ 1, 1], [ 4, 3]]) The final code is below. import time import torch n = 2000 neighbors = 2 x = torch.randn((n, n), requires_grad=True) random_nbrs = torch.randint(low=0, high=n, size=(n, neighbors)) start = time.time() row_sum = torch.sum(torch.exp(x), dim=1) - torch.exp(torch.diagonal(x, 0)) y = torch.gather(x, 1, random_nbrs) loss = torch.sum(-torch.log(torch.exp(y) / row_sum.view(-1, 1))) print('Forward time: {:.3f}'.format(time.time() - start)) start = time.time() loss.backward() print('Backward time: {:.3f}'.format(time.time() - start)) # Forward time: 0.010 # Backward time: 0.037
PyTorch backpropagation is too slow by custom loss
I'm currently implementing a custom contrastive loss for the network but the training process is very slow. I investigated this problem and finally found that the backpropagation of the custom loss makes the main contribution to the problem. Here is the simplified code. n = 2000 neighbors = 2 x = torch.randn((n, n), requires_grad=True) start = time.time() row_sum = torch.sum(torch.exp(x), dim=1) - torch.exp(torch.diagonal(x, 0)) y = torch.zeros((n, neighbors)) for i in range(n): for j in range(neighbors): y[i][j] = x[i][random.randrange(0, n)] loss = torch.sum(-torch.log(torch.exp(y) / row_sum.view(-1, 1))) # Using broadcasting print('Forward time: {:.3f}'.format(time.time() - start)) start = time.time() loss.backward() print('Backward time: {:.3f}'.format(time.time() - start)) The results are below. Forward time: 0.255 Backward time: 37.502 The backward time depends on n and neighbors. When n = 1000 and neighbors = 2. Forward time: 0.241 Backward time: 9.094 When n = 3000 and neighbors = 2. Forward time: 0.250 Backward time: 136.768 When n = 2000 and neighbors = 1. Forward time: 0.124 Backward time: 22.925 When n = 2000 and neighbors = 3. Forward time: 0.210 Backward time: 58.341 Is there a way to speed up the performance?
[ "As @Flavia Giammarino mentions, avoiding the loops can speed up the backward time. torch.gather function do the correct sampling for the problem.\n>>> t = torch.tensor([[1, 2], [3, 4]])\n>>> torch.gather(t, 1, torch.tensor([[0, 0], [1, 0]]))\ntensor([[ 1, 1],\n [ 4, 3]])\n\nThe final code is below.\nimport time\nimport torch\n\nn = 2000\nneighbors = 2\nx = torch.randn((n, n), requires_grad=True)\nrandom_nbrs = torch.randint(low=0, high=n, size=(n, neighbors))\n\nstart = time.time()\nrow_sum = torch.sum(torch.exp(x), dim=1) - torch.exp(torch.diagonal(x, 0))\ny = torch.gather(x, 1, random_nbrs)\nloss = torch.sum(-torch.log(torch.exp(y) / row_sum.view(-1, 1)))\nprint('Forward time: {:.3f}'.format(time.time() - start))\n\nstart = time.time()\nloss.backward()\nprint('Backward time: {:.3f}'.format(time.time() - start))\n\n# Forward time: 0.010\n# Backward time: 0.037\n\n" ]
[ 0 ]
[]
[]
[ "loss_function", "python", "pytorch" ]
stackoverflow_0074580950_loss_function_python_pytorch.txt
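For reference, the torch.gather call in the answer is equivalent to advanced indexing with a broadcast row index; a small sketch assuming the n, x, and random_nbrs tensors defined in the answer's final code:

rows = torch.arange(n).unsqueeze(1)   # shape (n, 1), broadcasts against (n, neighbors)
y_alt = x[rows, random_nbrs]          # picks x[i, random_nbrs[i, j]] for every i, j
assert torch.equal(y_alt, torch.gather(x, 1, random_nbrs))

Both forms stay inside vectorized tensor operations, which is what removes the slow Python-level loop from the backward graph.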
Q: Messed up JSON return? - nested nonsense I have a return from a JSON call via curl that returns this { 'result': { 'today_runtime': 830, 'month_runtime': 39991, 'today_energy': 1293, 'month_energy': 55326, 'local_time': '2022-11-27 13:50:54', 'electricity_charge': \[0, 0, 0\], 'current_power': 93860 }, 'error_code': 0 } Now I'm about 5 minutes into Python, actually more like 45 minutes, because this is super frustrating and in any other language I'd be sorted by now. I need values from the key-value pairs in the inner bracketed section, and I know how to extract them without the {'result': and , 'error_code': 0}, as I have manually chopped them off and get the expected result when extracting values. Someone please put me out of what is suddenly becoming a misery; thanks in advance. I spent 45 minutes googling how to trim substrings, access nested values, etc., none of which seems to work. A: { 'result': { 'today_runtime': 830, 'month_runtime': 39991, 'today_energy': 1293, 'month_energy': 55326, 'local_time': '2022-11-27 13:50:54', 'electricity_charge': \[0, 0, 0\], 'current_power': 93860 }, 'error_code': 0 } That is a dictionary. It has two keys: "result" and "error_code". The value of the "result" key is also a dictionary, with seven keys. (The "electricity_charge" key is odd, though. Why are the backslashes \[ and \] there? I assume that's a formatting quirk of yours.) Presuming you assigned this dictionary to a variable named data, you would access the information like this: data['result']['today_runtime'] data['result']['month_runtime'] ... data['result']['current_power'] data['error_code'] This is all very straightforward dictionary syntax. It is not "nonsense" at all.
Messed up JSON return? - nested nonsense
I have a return from a JSON call via curl that returns this { 'result': { 'today_runtime': 830, 'month_runtime': 39991, 'today_energy': 1293, 'month_energy': 55326, 'local_time': '2022-11-27 13:50:54', 'electricity_charge': \[0, 0, 0\], 'current_power': 93860 }, 'error_code': 0 } Now I'm about 5 minutes into Python, actually more like 45 minutes, because this is super frustrating and in any other language I'd be sorted by now. I need values from the key-value pairs in the inner bracketed section, and I know how to extract them without the {'result': and , 'error_code': 0}, as I have manually chopped them off and get the expected result when extracting values. Someone please put me out of what is suddenly becoming a misery; thanks in advance. I spent 45 minutes googling how to trim substrings, access nested values, etc., none of which seems to work.
[ "{\n 'result': {\n 'today_runtime': 830,\n 'month_runtime': 39991,\n 'today_energy': 1293,\n 'month_energy': 55326,\n 'local_time': '2022-11-27 13:50:54',\n 'electricity_charge': \\[0, 0, 0\\],\n 'current_power': 93860\n },\n 'error_code': 0\n}\n\nThat is a dictionary. It has two keys: \"result\" and \"error_code\".\nThe value of the \"result\" key is also a dictionary, with seven keys.\n(The \"electricity_charge\" key is odd, though. Why are the backslashes \\[ and \\] there? I assume that's a formatting quirk of yours.)\nPresuming you assigned this dictionary to a variable named data, you would access the information like this:\ndata['result']['today_runtime']\ndata['result']['month_runtime']\n...\ndata['result']['current_power']\ndata['error_code']\n\nThis is all very straightforward dictionary syntax. It is not \"nonsense\" at all.\n" ]
[ 2 ]
[]
[]
[ "json", "python" ]
stackoverflow_0074587184_json_python.txt
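Because the snippet uses single quotes it is a Python dict literal rather than valid JSON, so json.loads would reject it; ast.literal_eval parses it directly. A sketch with a shortened copy of the data, assuming the \[ \] backslashes are formatting residue around a plain list:

import ast

raw = "{'result': {'today_runtime': 830, 'electricity_charge': [0, 0, 0], 'current_power': 93860}, 'error_code': 0}"
data = ast.literal_eval(raw)             # safe parser for Python literals
print(data['result']['current_power'])   # 93860
print(data['error_code'])                # 0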
Q: name "variable" is not define in python I have a question, here I have a problem with my code for my last one semester in college, I made a menu for food n when I input order with [1/2/3/4] and input how much for the order, I got the error like in for i in range(order) h.append(price) ah, I don't know where the problem bro can your guys help me? how to explain the error? I already add price = 0 as default price but still same A: It says in your screenshot that price is not defined. You need to set price to a value to use it.
name "variable" is not define in python
I have a question: I have a problem with my code for my last semester in college. I made a menu for food, and when I input an order with [1/2/3/4] and then input how much to order, I get an error at: for i in range(order): h.append(price). I don't know where the problem is; can you guys help me and explain the error? I already added price = 0 as a default price, but the result is still the same.
[ "It says in your screenshot that price is not defined. You need to set price to a value to use it.\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074587227_python.txt
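Since the question's code is only visible in a screenshot, here is a hypothetical sketch of the fix the answer describes: price must be assigned from the chosen menu item before the loop appends it (the menu numbers and prices below are made up):

menu_prices = {1: 10000, 2: 12000, 3: 8000, 4: 15000}  # hypothetical prices per item

choice = int(input("Order [1/2/3/4]: "))
order = int(input("How many: "))

price = menu_prices.get(choice, 0)  # price is defined before the loop runs
h = []
for i in range(order):
    h.append(price)
print("Total:", sum(h))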
Q: Enumerating the Writing of Different Lines in Txt file in Python It seems that every time the string should add up the 4, 1, and 4, for column 1, the total result is just 4*3. Could you help me put an enumeration-like function in here? (I am I very new beginner) Thank you for anything! import os import platform pathwindows = os.environ['USERPROFILE'] + r"\Documents\Your_Wordle_Results.txt" pathmac = r'/Mac/Users/%USEPROFILE%/Documents/Your_Wordle_Results.txt' isFileWindows = os.path.exists(pathwindows) isFileMac = os.path.isfile(pathmac) if isFileWindows == True: outfile = open(pathwindows, 'r') if isFileMac == True: outfile = open(pathmac, 'r') totalpoints1 = 0 totalpoints2 = 0 totalpoints3 = 0 totalpoints4 = 0 totalpoints5 = 0 with open(pathwindows, 'r') as fp: lineofinterest = fp.readlines()[2:100] stringlineofinterest = str(lineofinterest) print(*lineofinterest) for line in lineofinterest: print(line.strip()) startline = 22 separation = 4 value1 = (stringlineofinterest[startline + separation * 0]) value2 = (stringlineofinterest[startline + separation * 1]) value3 = (stringlineofinterest[startline + separation * 2]) value4 = (stringlineofinterest[startline + separation * 3]) value5 = (stringlineofinterest[startline + separation * 4]) outfile.close print(value1) print(totalpoints1) The text file is Ben Jackson 1pt 2pt 3pt 4pt 5pt Total Results Will Be Shown Below 4 3 0 1 0 LOSS for audio in 7.28s 1 2 0 2 0 LOSS for audit in 6.18s 4 5 0 1 0 LOSS for audio in 7.28s I expected for the 4 + 1 +4 to add up in the 1 pt column but rather the first "4" was multiplied 3 times meaning that the cycle that beings with "with open" did not enumerate through. A: I'm going to answer this the best I can according to the post, there was problems with indentation, use of the correct variable to fetch the values (stringlineofinteres instead of line which is the one in the loop), your code, and finally no line to add vaalues to the totals: import os import platform pathwindows = os.environ['USERPROFILE'] + r"\Documents\Your_Wordle_Results.txt" pathmac = r'/Mac/Users/%USEPROFILE%/Documents/Your_Wordle_Results.txt' pathwindows="enum.txt" isFileWindows = os.path.exists(pathwindows) isFileMac = os.path.isfile(pathmac) if isFileWindows == True: filepath=pathwindows if isFileMac == True: filepath=pathmac totalpoints1 = 0 totalpoints2 = 0 totalpoints3 = 0 totalpoints4 = 0 totalpoints5 = 0 with open(filepath, 'r') as fp: lineofinterest = fp.readlines()[2:100] stringlineofinterest = str(lineofinterest) print(*lineofinterest) for line in lineofinterest: print(line) startline = 22 separation = 4 value1 = (line[startline + separation * 0]) totalpoints1 += int(value1) value2 = (line[startline + separation * 1]) totalpoints2 += int(value2) value3 = (line[startline + separation * 2]) totalpoints3 += int(value3) value4 = (line[startline + separation * 3]) totalpoints4 += int(value4) value5 = (line[startline + separation * 4]) totalpoints5 += int(value5) # write the totals line here with open(filepath,'a') as outfile: outfile.write("totals xxxx") print(totalpoints1, totalpoints2,totalpoints3,totalpoints4,totalpoints5)
Enumerating the Writing of Different Lines in Txt file in Python
It seems that every time the string should add up the 4, 1, and 4, for column 1, the total result is just 4*3. Could you help me put an enumeration-like function in here? (I am I very new beginner) Thank you for anything! import os import platform pathwindows = os.environ['USERPROFILE'] + r"\Documents\Your_Wordle_Results.txt" pathmac = r'/Mac/Users/%USEPROFILE%/Documents/Your_Wordle_Results.txt' isFileWindows = os.path.exists(pathwindows) isFileMac = os.path.isfile(pathmac) if isFileWindows == True: outfile = open(pathwindows, 'r') if isFileMac == True: outfile = open(pathmac, 'r') totalpoints1 = 0 totalpoints2 = 0 totalpoints3 = 0 totalpoints4 = 0 totalpoints5 = 0 with open(pathwindows, 'r') as fp: lineofinterest = fp.readlines()[2:100] stringlineofinterest = str(lineofinterest) print(*lineofinterest) for line in lineofinterest: print(line.strip()) startline = 22 separation = 4 value1 = (stringlineofinterest[startline + separation * 0]) value2 = (stringlineofinterest[startline + separation * 1]) value3 = (stringlineofinterest[startline + separation * 2]) value4 = (stringlineofinterest[startline + separation * 3]) value5 = (stringlineofinterest[startline + separation * 4]) outfile.close print(value1) print(totalpoints1) The text file is Ben Jackson 1pt 2pt 3pt 4pt 5pt Total Results Will Be Shown Below 4 3 0 1 0 LOSS for audio in 7.28s 1 2 0 2 0 LOSS for audit in 6.18s 4 5 0 1 0 LOSS for audio in 7.28s I expected for the 4 + 1 +4 to add up in the 1 pt column but rather the first "4" was multiplied 3 times meaning that the cycle that beings with "with open" did not enumerate through.
[ "I'm going to answer this the best I can according to the post, there was problems with indentation, use of the correct variable to fetch the values (stringlineofinteres instead of line which is the one in the loop), your code, and finally no line to add vaalues to the totals:\nimport os\nimport platform\n\npathwindows = os.environ['USERPROFILE'] + r\"\\Documents\\Your_Wordle_Results.txt\"\npathmac = r'/Mac/Users/%USEPROFILE%/Documents/Your_Wordle_Results.txt'\n\npathwindows=\"enum.txt\"\n\nisFileWindows = os.path.exists(pathwindows)\nisFileMac = os.path.isfile(pathmac)\n\nif isFileWindows == True:\n filepath=pathwindows\n \nif isFileMac == True:\n filepath=pathmac\n \n\ntotalpoints1 = 0\ntotalpoints2 = 0\ntotalpoints3 = 0\ntotalpoints4 = 0\ntotalpoints5 = 0\n\nwith open(filepath, 'r') as fp:\n\n lineofinterest = fp.readlines()[2:100]\n stringlineofinterest = str(lineofinterest)\n print(*lineofinterest)\n for line in lineofinterest:\n print(line)\n\n startline = 22\n separation = 4\n value1 = (line[startline + separation * 0])\n totalpoints1 += int(value1)\n value2 = (line[startline + separation * 1])\n totalpoints2 += int(value2)\n value3 = (line[startline + separation * 2])\n totalpoints3 += int(value3)\n value4 = (line[startline + separation * 3])\n totalpoints4 += int(value4)\n value5 = (line[startline + separation * 4])\n totalpoints5 += int(value5)\n\n\n# write the totals line here\nwith open(filepath,'a') as outfile:\n outfile.write(\"totals xxxx\")\n\n\nprint(totalpoints1, totalpoints2,totalpoints3,totalpoints4,totalpoints5)\n\n" ]
[ 0 ]
[]
[]
[ "enumerate", "python" ]
stackoverflow_0074578921_enumerate_python.txt
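A more robust variant of the answer's fix is to split each line into whitespace-separated fields instead of relying on fixed character offsets (startline = 22, separation = 4); a sketch reusing the filepath variable from the answer and assuming the file layout shown in the question:

totals = [0, 0, 0, 0, 0]
with open(filepath, "r") as fp:
    for line in fp:
        fields = line.split()
        if len(fields) < 5 or not all(f.isdigit() for f in fields[:5]):
            continue  # skip the name, header, and blank lines
        for col in range(5):
            totals[col] += int(fields[col])
print(totals)  # [9, 10, 0, 4, 0] for the sample file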
Q: How do I access JSON elements and put the contents into lists? I have a JSON list of objects like this: "server-1": { "username": "admin542", "on_break": false, "scheduling_type": 1, "schedule": { "Monday": [ "11:00" ], "Tuesday": [ "12:00", "13:00" ], }, "com_type": 2 }, "server-2": { "username": "admin543", "on_break": false, "scheduling_type": 2, "schedule": { "Monday": [ "10:00" ], "Wednesday": [ "13:00", "14:00" ] }, "com_type": 2 }, I want to access the schedule for a specific server and put it into lists. Let's say I want to access the schedule for server-2. How can I access it and put it in a list like this? schedule_days = ["Monday", "Wednesday"] schedule_times = [["10:00"], ["13:00", "14:00"]] I tried with: import json with open("systemfile.json", "r") as f: data = json.load(f) schedule_days = [] schedule_days.append(data[str("server-2")]["schedule"]) ### Which gave me [{'Monday': ['11:00'], 'Wednesday': ['13:00', '14:00']}] Is there any way I can access that to make a list like the example above? Or what would you do? Thanks beforehand. A: data['server-2']['schedule'] is the sub-dictionary you want to access. You can use the .keys() method of this dictionary to get the keys, and the .values() method to get the values. schedule_days = list(data['server-2']['schedule'].keys()) schedule_times = list(data['server-2']['schedule'].values())
How do I access JSON elements and put the contents into lists?
I have a JSON list of objects like this: "server-1": { "username": "admin542", "on_break": false, "scheduling_type": 1, "schedule": { "Monday": [ "11:00" ], "Tuesday": [ "12:00", "13:00" ], }, "com_type": 2 }, "server-2": { "username": "admin543", "on_break": false, "scheduling_type": 2, "schedule": { "Monday": [ "10:00" ], "Wednesday": [ "13:00", "14:00" ] }, "com_type": 2 }, I want to access the schedule for a specific server and put it into lists. Let's say I want to access the schedule for server-2. How can I access it and put it in a list like this? schedule_days = ["Monday", "Wednesday"] schedule_times = [["10:00"], ["13:00", "14:00"]] I tried with: import json with open("systemfile.json", "r") as f: data = json.load(f) schedule_days = [] schedule_days.append(data[str("server-2")]["schedule"]) ### Which gave me [{'Monday': ['11:00'], 'Wednesday': ['13:00', '14:00']}] Is there any way I can access that to make a list like the example above? Or what would you do? Thanks beforehand.
[ "data['server-2']['schedule'] is the sub-dictionary you want to access. You can use the .keys() method of this dictionary to get the keys, and the .values() method to get the values.\nschedule_days = list(data['server-2']['schedule'].keys())\nschedule_times = list(data['server-2']['schedule'].values())\n\n" ]
[ 1 ]
[]
[]
[ "json", "python" ]
stackoverflow_0074587147_json_python.txt
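Both lists can also be built in one pass with items() and zip; a sketch assuming data was loaded with json.load as in the question:

schedule = data["server-2"]["schedule"]
schedule_days, schedule_times = map(list, zip(*schedule.items()))
print(schedule_days)   # ['Monday', 'Wednesday']
print(schedule_times)  # [['10:00'], ['13:00', '14:00']]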
Q: Python3: Remove \n from middle of the string while keeping the one in the end I am new to Python and stuck with this problem. I have a multiline string: My name is ABCD \n I am 20 years old \n I like to travel his name is XYZ \n he is 20 years old \n he likes to eat your name is ABC \n you are 20 years old \n you like to play I want to replace all \n with space but keep the sentence as it is. The desired output is: My name is ABCD I am 20 years old I like to travel his name is XYZ he is 20 years old he likes to eat your name is ABC you are 20 years old you like to play I tried: str.replace('\n', ' ') but it gives: My name is ABCD I am 20 years old I like to travel his name is XYZ he is 20 years old he likes to eat your name is ABC you are 20 years old you like to play Which is not what I want. Can you please guide me? Thank you in advance. A: You can use regex to find the newlines (\n) that are surrounded by a space \s and replace each match with a single space. The regex pattern looks like r"(\s\n\s)" Here is the example code: import re test_string = """ My name is ABCD \n I am 20 years old \n I like to travel his name is XYZ \n he is 20 years old \n he likes to eat your name is ABC \n you are 20 years old \n you like to play """ pattern = r"(\s\n\s)" new_text = re.sub(pattern, " ", test_string) print(new_text) OUTPUT: My name is ABCD I am 20 years old I like to travel his name is XYZ he is 20 years old he likes to eat your name is ABC you are 20 years old you like to play
Python3: Remove \n from middle of the string while keeping the one in the end
I am new to Python and stuck with this problem. I have a multiline string: My name is ABCD \n I am 20 years old \n I like to travel his name is XYZ \n he is 20 years old \n he likes to eat your name is ABC \n you are 20 years old \n you like to play I want to replace all \n with space but keep the sentence as it is. The desired output is: My name is ABCD I am 20 years old I like to travel his name is XYZ he is 20 years old he likes to eat your name is ABC you are 20 years old you like to play I tried: str.replace('\n', ' ') but it gives: My name is ABCD I am 20 years old I like to travel his name is XYZ he is 20 years old he likes to eat your name is ABC you are 20 years old you like to play Which is not what I want. Can you please guide me? Thank you in advance.
[ "You can use regex to find the newlines (\\n) that are surrounded by a space \\s.\n\nThe regex pattern looks like r\"(\\s\\n\\s)\"\n\nHere is the example code:\nimport re\n\ntest_string = \"\"\"\nMy name is ABCD \\n I am 20 years old \\n I like to travel\nhis name is XYZ \\n he is 20 years old \\n he likes to eat\nyour name is ABC \\n you are 20 years old \\n you like to play\n\"\"\"\n\npattern = r\"(\\s\\n\\s)\"\nnew_text = re.sub(pattern, \"\", test_string)\n\nprint(new_text)\n\nOUTPUT:\nMy name is ABCD I am 20 years old I like to travel\nhis name is XYZ he is 20 years old he likes to eat\nyour name is ABC you are 20 years old you like to play\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074587200_python_python_3.x.txt
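The same result is possible without regex by replacing the literal space-newline-space sequence, which leaves the line-ending newlines untouched; a small sketch:

text = "My name is ABCD \n I am 20 years old \n I like to travel\n"
print(text.replace(" \n ", " "))
# My name is ABCD I am 20 years old I like to travel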
Q: Django - How to Filter when getting values_list? I have an application that has multiple objects related to one Model. When I try and get data to display in a form (in order to update) it is either giving an error or not displaying any of the data. To illustrate the layout we have OBJECT(ID): Project(1): Issue(1) Issue(42) Issue(66) Issue(97) What is happening above is I have multiple issues related to the project. I am getting the IDs from the Issues within the Project by using the following queryset. get_issue_id = get_object_or_404(DevProjects, pk=pk) issue_id = DevIssues.objects.filter(project=get_issue_id).values_list('id', flat=True) Which returns: <QuerySet [1, 42, 66, 97]> I am trying to use the following queryset to filter the Issue ID from the values_list, in order to set the Instance= (for forms) to the queryset to only get the data and display that data in the form for the issues ID from the Project PK that I am requesting. update_issue = DevIssues.objects.filter(id=issue_id) Below is my current view.py get_issue_id = get_object_or_404(DevProjects, pk=pk) issue_id = DevIssues.objects.filter(project=get_issue_id).values_list('id', flat=True) update_issue = DevIssues.objects.filter(id=issue_id) update_issue_form = UpdateProjectIssues(instance=update_issue) if request.method == 'POST' and 'updateissue' in request.POST: update_issue_form = UpdateProjectIssues(request.POST, instance=update_issue) if update_issue_form.is_valid(): update_issue_form.save() Here is models.py for both DevProjects and DevIssues: class DevProjects(models.Model): PROJECT_TYPE = [ ('NEW_APP', 'New Application'), ('UPDATE_APP', 'Update Application'), ('BUG_FIX', 'Bug Fixes') ] PROJECT_STATUS = [ ('New', 'New'), ('In Progress', 'In Progress'), ('Complete', 'Complete'), ] project_id = models.CharField(max_length=15, editable=False) project_title = models.CharField(max_length=100) project_desc = models.CharField(max_length=500) project_category = models.CharField(max_length=25, choices=PROJECT_TYPE, null=True, blank=True) project_status = models.CharField(max_length=25, choices=PROJECT_STATUS, default='New') created_at = models.DateTimeField(auto_now_add=True) updated_at = models.DateField(auto_now=True) created_by = models.ForeignKey(User, on_delete=models.CASCADE) def save(self, *args, **kwargs): super(DevProjects, self).save(**kwargs) self.project_id = 'PROJ-' + str(self.id) super(DevProjects, self).save(**kwargs) def __str__(self): return self.project_title class DevIssues(models.Model): ISSUE_CODE = [ ('BUG', 'Bug'), ('BACKLOG', 'Backlog'), ('REQUEST', 'Request'), ('TODO', 'To-Do'), ] ISSUE_STATUS = [ ('New', 'New'), ('In Progress', 'In Progress'), ('Complete', 'Complete'), ] issue_id = models.CharField(max_length=15, editable=False) project = models.ForeignKey(DevProjects, on_delete=models.CASCADE, related_name='issue') issue = models.CharField(max_length=100) issue_desc = models.CharField(max_length=500) issue_code = models.CharField(max_length=9, choices=ISSUE_CODE, null=True, blank=True) issue_status = models.CharField(max_length=15, choices=ISSUE_STATUS, default='New') issue_resolution = models.CharField(max_length=500, null=True) created_at = models.DateTimeField(auto_now_add=True) updated_at = models.DateField(auto_now=True) created_by = models.ForeignKey(User, on_delete=models.CASCADE) def save(self, *args, **kwargs): self.issue_id = 'ISSUE-' + str(self.id) super(DevIssues, self).save(**kwargs) Here is my forms.py: class UpdateProjectIssues(forms.ModelForm): class Meta: model = DevIssues fields= 
["issue", "issue_desc", "issue_code", "issue_status"] labels = { 'issue_status': 'Update Status' } And this is the current error I am facing: 'QuerySet' object has no attribute '_meta' When I use get(), this is the error I get: The QuerySet value for an exact lookup must be limited to one result using slicing. Another error I get when I use something like id__in: get() returned more than one DevIssues -- it returned 2! How do I filter 'issue_id', the values_list queryset, in order to get the data and display the correct data within the form? A: To get project instance project_id = get_object_or_404(DevProjects, pk=pk) To get issue_ids related to that project instance issue_ids = DevIssues.objects.filter(project=project_id).values_list('id', flat=True) To get update issue objects update_issue = DevIssues.objects.filter(id__in=issue_ids) but you already get the same result in this line update_issue = DevIssues.objects.filter(project=project_id) so, do not need IN qyery. You got error because you pass queryset instead of object in this line update_issue_form = UpdateProjectIssues(instance=update_issue) You can iterate on queryset and pass object in ModelForm class.
Django - How to Filter when getting values_list?
I have an application that has multiple objects related to one Model. When I try and get data to display in a form (in order to update) it is either giving an error or not displaying any of the data. To illustrate the layout we have OBJECT(ID): Project(1): Issue(1) Issue(42) Issue(66) Issue(97) What is happening above is I have multiple issues related to the project. I am getting the IDs from the Issues within the Project by using the following queryset. get_issue_id = get_object_or_404(DevProjects, pk=pk) issue_id = DevIssues.objects.filter(project=get_issue_id).values_list('id', flat=True) Which returns: <QuerySet [1, 42, 66, 97]> I am trying to use the following queryset to filter the Issue ID from the values_list, in order to set the Instance= (for forms) to the queryset to only get the data and display that data in the form for the issues ID from the Project PK that I am requesting. update_issue = DevIssues.objects.filter(id=issue_id) Below is my current view.py get_issue_id = get_object_or_404(DevProjects, pk=pk) issue_id = DevIssues.objects.filter(project=get_issue_id).values_list('id', flat=True) update_issue = DevIssues.objects.filter(id=issue_id) update_issue_form = UpdateProjectIssues(instance=update_issue) if request.method == 'POST' and 'updateissue' in request.POST: update_issue_form = UpdateProjectIssues(request.POST, instance=update_issue) if update_issue_form.is_valid(): update_issue_form.save() Here is models.py for both DevProjects and DevIssues: class DevProjects(models.Model): PROJECT_TYPE = [ ('NEW_APP', 'New Application'), ('UPDATE_APP', 'Update Application'), ('BUG_FIX', 'Bug Fixes') ] PROJECT_STATUS = [ ('New', 'New'), ('In Progress', 'In Progress'), ('Complete', 'Complete'), ] project_id = models.CharField(max_length=15, editable=False) project_title = models.CharField(max_length=100) project_desc = models.CharField(max_length=500) project_category = models.CharField(max_length=25, choices=PROJECT_TYPE, null=True, blank=True) project_status = models.CharField(max_length=25, choices=PROJECT_STATUS, default='New') created_at = models.DateTimeField(auto_now_add=True) updated_at = models.DateField(auto_now=True) created_by = models.ForeignKey(User, on_delete=models.CASCADE) def save(self, *args, **kwargs): super(DevProjects, self).save(**kwargs) self.project_id = 'PROJ-' + str(self.id) super(DevProjects, self).save(**kwargs) def __str__(self): return self.project_title class DevIssues(models.Model): ISSUE_CODE = [ ('BUG', 'Bug'), ('BACKLOG', 'Backlog'), ('REQUEST', 'Request'), ('TODO', 'To-Do'), ] ISSUE_STATUS = [ ('New', 'New'), ('In Progress', 'In Progress'), ('Complete', 'Complete'), ] issue_id = models.CharField(max_length=15, editable=False) project = models.ForeignKey(DevProjects, on_delete=models.CASCADE, related_name='issue') issue = models.CharField(max_length=100) issue_desc = models.CharField(max_length=500) issue_code = models.CharField(max_length=9, choices=ISSUE_CODE, null=True, blank=True) issue_status = models.CharField(max_length=15, choices=ISSUE_STATUS, default='New') issue_resolution = models.CharField(max_length=500, null=True) created_at = models.DateTimeField(auto_now_add=True) updated_at = models.DateField(auto_now=True) created_by = models.ForeignKey(User, on_delete=models.CASCADE) def save(self, *args, **kwargs): self.issue_id = 'ISSUE-' + str(self.id) super(DevIssues, self).save(**kwargs) Here is my forms.py: class UpdateProjectIssues(forms.ModelForm): class Meta: model = DevIssues fields= ["issue", "issue_desc", "issue_code", 
"issue_status"] labels = { 'issue_status': 'Update Status' } And this is the current error I am facing: 'QuerySet' object has no attribute '_meta' When I use get(), this is the error I get: The QuerySet value for an exact lookup must be limited to one result using slicing. Another error I get when I use something like id__in: get() returned more than one DevIssues -- it returned 2! How do I filter 'issue_id', the values_list queryset, in order to get the data and display the correct data within the form?
[ "To get project instance\nproject_id = get_object_or_404(DevProjects, pk=pk)\n\nTo get issue_ids related to that project instance\nissue_ids = DevIssues.objects.filter(project=project_id).values_list('id', flat=True)\n\nTo get update issue objects\nupdate_issue = DevIssues.objects.filter(id__in=issue_ids)\n\nbut you already get the same result in this line\nupdate_issue = DevIssues.objects.filter(project=project_id)\n\nso, do not need IN qyery.\nYou got error because you pass queryset instead of object in this line\nupdate_issue_form = UpdateProjectIssues(instance=update_issue)\n\nYou can iterate on queryset and pass object in ModelForm class.\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074587096_django_python.txt
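To edit all DevIssues rows of one project through forms, Django's modelformset_factory is the usual tool; a sketch using the question's models and field list (modelformset_factory is standard Django, everything else follows the question's names):

from django.forms import modelformset_factory

IssueFormSet = modelformset_factory(
    DevIssues,
    fields=["issue", "issue_desc", "issue_code", "issue_status"],
    extra=0,  # only edit existing issues, no blank rows
)

project = get_object_or_404(DevProjects, pk=pk)
formset = IssueFormSet(request.POST or None,
                       queryset=DevIssues.objects.filter(project=project))
if request.method == "POST" and formset.is_valid():
    formset.save()

Each form in the formset is bound to one object, which avoids the 'QuerySet' object has no attribute '_meta' error caused by passing a whole queryset as instance.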
Q: Add Attribute to Existing Object in Python Dictionary I was attempting to add an attribute to a pre-existing object in a dictionary: key = 'key1' dictObj = {} dictObj[key] = "hello world!" #attempt 236 (j/k) dictObj[key]["property2"] = "value2" ###'str' object does not support item assignment #another attempt setattr(dictObj[key], 'property2', 'value2') ###'dict' object has no attribute 'property2' #successful attempt that I did not like dictObj[key] = {'property':'value', 'property2':''} ###instantiating the dict object with all properties defined seemed wrong... #this did allow for the following to work dictObj[key]["property2"] = "value2" I tried various combinations (including setattr, etc.) and was not having much luck. Once I have added an item to a Dictionary, how can I add additional key/value pairs to that item (not add another item to the dictionary). A: As I was writing up this question, I realized my mistake. key = 'key1' dictObj = {} dictObj[key] = {} #here is where the mistake was dictObj[key]["property2"] = "value2" The problem appears to be that I was instantiating the object with key 'key1' as a string instead of a dictionary. As such, I was not able to add a key to a string. This was one of many issues I encountered while trying to figure out this simple problem. I encountered KeyErrors as well when I varied the code a bit. A: Strictly reading the question, we are considering adding an attribute to the object. This can look like this: class DictObj(dict): pass dictObj = DictObj() dictObj.key = {'property2': 'value2'} And then, we can use dictObj.key == {'property2': 'value2'} Given the context of the question, we are dealing with adding a property to the dictionary. This can be done (in addition to @John Bartels's approach) in the following ways: 1st option - add the "full" content in one line: dictObj = {'key': {'property2': 'value2'}} 2nd option for the case of dictionary creation with initial values: dictObj = dict(key = dict(property2 = 'value2')) 3rd option (Python 3.5 and higher): dictObj = {} dictObj2 = {'key': {'property2': 'value2'}} dictObj = {**dictObj, **dictObj2} 4th option (Python 3.9 and higher): dictObj = {} dictObj |= {'key': {'property2': 'value2'}} In all cases the result will be: dictObj == {'key': {'property2': 'value2'}}
Add Attribute to Existing Object in Python Dictionary
I was attempting to add an attribute to a pre-existing object in a dictionary: key = 'key1' dictObj = {} dictObj[key] = "hello world!" #attempt 236 (j/k) dictObj[key]["property2"] = "value2" ###'str' object does not support item assignment #another attempt setattr(dictObj[key], 'property2', 'value2') ###'dict' object has no attribute 'property2' #successful attempt that I did not like dictObj[key] = {'property':'value', 'property2':''} ###instantiating the dict object with all properties defined seemed wrong... #this did allow for the following to work dictObj[key]["property2"] = "value2" I tried various combinations (including setattr, etc.) and was not having much luck. Once I have added an item to a Dictionary, how can I add additional key/value pairs to that item (not add another item to the dictionary).
[ "As I was writing up this question, I realized my mistake.\nkey = 'key1'\ndictObj = {}\ndictObj[key] = {} #here is where the mistake was\n\ndictObj[key][\"property2\"] = \"value2\"\n\nThe problem appears to be that I was instantiating the object with key 'key1' as a string instead of a dictionary. As such, I was not able to add a key to a string. This was one of many issues I encountered while trying to figure out this simple problem. I encountered KeyErrors as well when I varied the code a bit.\n", "Strictly reading the question, we are considering adding an attribute to the object. This can look like this:\nclass DictObj(dict):\n pass\n\ndictObj = DictObj(dict)\n\ndictObj.key = {'property2': 'value2'}\n\nAnd then, we can use dictObj.key == {'property2': 'value2'}\nGiven the context of the question, we are dealing with adding a property to the dictionary. This can be done (in addition to @John Bartels's approach) in the following ways:\n1st option - add the \"full\" content in one line:\ndictObj = {'key': {'property2': 'value2'}}\n\n2nd option for the case of dictionary creation with initial values:\ndictObj = dict(key = dict(property2 = 'value2'))\n\n3rd option (Python 3.5 and higher):\ndictObj = {}\ndictObj2 = {'key': {'property2': 'value2'}}\ndictObj = {**dictObj, **dictObj2}\n\n4th option (Python 3.9 and higher):\ndictObj = {}\ndictObj |= {'key': {'property2': 'value2'}}\n\nIn all cases the result will be: dictObj == {'key': {'property2': 'value2'}}\n" ]
[ 21, 0 ]
[]
[]
[ "attributes", "dictionary", "python" ]
stackoverflow_0034046609_attributes_dictionary_python.txt
Q: Beautifulsoup error: "openpyxl.utils.exceptions.illegalcharactererror" I'm trying to extract text from html files saved locally on my hard drive, and then paste it into each row in an excel file. Doing this on Mac, here is the full code: # install/import all prerequisites first # from cgitb import text from openpyxl import Workbook, load_workbook from bs4 import BeautifulSoup # create a question that asks how many files you have i = 1 n = int(input("How many files ? ")) # final_n = n - 1 # the list of files files = [] # the list of files only has 1 file contained by default # while loop will create multiple files in the list so that I don't have to do the tedious work while i <= n: files.append("folder/SplashBidNoticeAbstractUI (" + str(i) +").html") i = i+1 # load an existing Libreoffice Calc file wb = Workbook() ws = wb.active ws.title = "Data" # add the titles on the first row, each column with the respective title ws.append(["DatePublished", "Closing Date", "Category", "Procuring Entity", "Approved Budget for the Contract", "Name", "Delivery Period", "Reference Number", "Title", "Area of Delivery", "Solicitation Number", "Contact"]) # the actual magic. # extract desired data from the html files and then # paste in the active Libreoffice Calc file for i in files: with open(i, "r", errors="ignore") as html_file: content = html_file.read() # does something soup = BeautifulSoup(content, "html.parser") # does something # extracts data from the webpages if soup.find("span", id="lblDisplayDatePublish") != None: datePublished = soup.find("span", id="lblDisplayDatePublish").text else: datePublished = "" if soup.find("span", id="lblDisplayCloseDateTime") != None: cd = soup.find("span", id="lblDisplayCloseDateTime").text else: cd = "" if soup.find("span", id="lblDisplayCategory") != None: cat = soup.find("span", id="lblDisplayCategory").text else: cat = "" if soup.find("span", id="lblDisplayProcuringEntity") != None: pro_id = soup.find("span", id="lblDisplayProcuringEntity").text.replace("", "") else: pro_id = "" if soup.find("span", id="lblDisplayBudget") != None: abc = soup.find("span", id="lblDisplayBudget").text else: abc = "" if soup.find("span", id="lblHeader") != None: name = soup.find("span", id="lblHeader").text.replace(" ", "_").replace("\n", "_") else: name = "" if soup.find("span", id="lblDisplayPeriod") != None: delp = soup.find("span", id="lblDisplayPeriod").text else: delp = "" if soup.find("span", id="lblDisplayReferenceNo")!= None: ref_num = soup.find("span", id="lblDisplayReferenceNo").text else: ref_num = "" if soup.find("span", id="lblDisplayTitle")!= None: title = soup.find("span", id="lblDisplayTitle").text.replace(" ", "_").replace("\n", "_") else: title = "" if soup.find("span", id="lblDisplayAOD") != None: aod = soup.find("span", id="lblDisplayAOD").text.replace("\n", "_") else: aod = "" if soup.find("span", id="lblDisplaySolNumber") != None: solNr = soup.find("span", id="lblDisplaySolNumber").text else: solNr = "" if soup.find("span", id="lblDisplayContactPerson")!= None: contact = soup.find("span", id="lblDisplayContactPerson").text else: contact = "" # just an assurance that the code worked and nothing screwed up print("\nBid" + i) print("Date Published: " + datePublished) print("Closing Date: " + cd) print("Category : " + cat) print("Procurement Entity : " + pro_id) print("Name: " + name) print("Delivery Period: " + delp) print("ABC: " + abc) print("Reference Number : " + ref_num) print("Title : " + title) print("Area of Delivery : " + aod) print("Solicitation Number: "+ solNr) print("Contact: "+ contact) # pastes the data inside the calc file under the titles ws.append([datePublished, cd, cat, pro_id, abc, name, delp, ref_num, title, aod, solNr, contact]) # saves file so work is safe and sound filename = input("filename: ") wb.save(filename + ".xlsx") print("Saved into '" + filename + ".xlsx'.") Running this code for all 37,820 html files. I've tried updating my Python version from 3.9 to 3.10 and 3.11. I've tried running both python3 phase3.py and just python phase3.py. I've also reinstalled bs4 and openpyxl. Still, the problem isn't fixed. Here is the error openpyxl.utils.exceptions.illegalcharactererror A: The openpyxl module raises this exception when you try to assign an ASCII control character (e.g. "\x00", "\x01", ...) to a cell's value. It means that at least one of your html files holds this kind of character. Note that encoding to ASCII is not enough, because the offending control characters are themselves valid ASCII; instead, strip the characters openpyxl rejects, using openpyxl's own pattern for them. Replace this: ws.append([datePublished, cd, cat, pro_id, abc, name, delp, ref_num, title, aod, solNr, contact]) By this: from openpyxl.cell.cell import ILLEGAL_CHARACTERS_RE from openpyxl.utils.exceptions import IllegalCharacterError try: list_of_vals = [datePublished, cd, cat, pro_id, abc, name, delp, ref_num, title, aod, solNr, contact] ws.append(list_of_vals) except IllegalCharacterError: ws.append([ILLEGAL_CHARACTERS_RE.sub("", cell_val) for cell_val in list_of_vals]) NB: Be aware of the indentation here; the try/except must sit inside the for loop, and the two imports go at the top of the file.
Beautifulsoup error: "openpyxl.utils.exceptions.illegalcharactererror"
I'm trying to extract text from html files saved locally on my hard drive, and then paste it into each row in an excel file. Doing this on Mac, here is the full code: # install/import all prerequisites first # from cgitb import text from openpyxl import Workbook, load_workbook from bs4 import BeautifulSoup # create a question that asks how many files you have i = 1 n = int(input("How many files ? ")) # final_n = n - 1 # the list of files files = [] # the list of files only has 1 file contained by default # while loop will create multiple files in the list so that I don't have to do the tedious work while i <= n: files.append("folder/SplashBidNoticeAbstractUI (" + str(i) +").html") i = i+1 # load an existing Libreoffice Calc file wb = Workbook() ws = wb.active ws.title = "Data" # add the titles on the first row, each column with the respective title ws.append(["DatePublished", "Closing Date", "Category", "Procuring Entity", "Approved Budget for the Contract", "Name", "Delivery Period", "Reference Number", "Title", "Area of Delivery", "Solicitation Number", "Contact"]) # the actual magic. # extract desired data from the html files and then # paste in the active Libreoffice Calc file for i in files: with open(i, "r", errors="ignore") as html_file: content = html_file.read() # does something soup = BeautifulSoup(content, "html.parser") # does something # extracts data from the webpages if soup.find("span", id="lblDisplayDatePublish") != None: datePublished = soup.find("span", id="lblDisplayDatePublish").text else: datePublished = "" if soup.find("span", id="lblDisplayCloseDateTime") != None: cd = soup.find("span", id="lblDisplayCloseDateTime").text else: cd = "" if soup.find("span", id="lblDisplayCategory") != None: cat = soup.find("span", id="lblDisplayCategory").text else: cat = "" if soup.find("span", id="lblDisplayProcuringEntity") != None: pro_id = soup.find("span", id="lblDisplayProcuringEntity").text.replace("", "") else: pro_id = "" if soup.find("span", id="lblDisplayBudget") != None: abc = soup.find("span", id="lblDisplayBudget").text else: abc = "" if soup.find("span", id="lblHeader") != None: name = soup.find("span", id="lblHeader").text.replace(" ", "_").replace("\n", "_") else: name = "" if soup.find("span", id="lblDisplayPeriod") != None: delp = soup.find("span", id="lblDisplayPeriod").text else: delp = "" if soup.find("span", id="lblDisplayReferenceNo")!= None: ref_num = soup.find("span", id="lblDisplayReferenceNo").text else: ref_num = "" if soup.find("span", id="lblDisplayTitle")!= None: title = soup.find("span", id="lblDisplayTitle").text.replace(" ", "_").replace("\n", "_") else: title = "" if soup.find("span", id="lblDisplayAOD") != None: aod = soup.find("span", id="lblDisplayAOD").text.replace("\n", "_") else: aod = "" if soup.find("span", id="lblDisplaySolNumber") != None: solNr = soup.find("span", id="lblDisplaySolNumber").text else: solNr = "" if soup.find("span", id="lblDisplayContactPerson")!= None: contact = soup.find("span", id="lblDisplayContactPerson").text else: contact = "" # just an assurance that the code worked and nothing screwed up print("\nBid" + i) print("Date Published: " + datePublished) print("Closing Date: " + cd) print("Category : " + cat) print("Procurement Entity : " + pro_id) print("Name: " + name) print("Delivery Period: " + delp) print("ABC: " + abc) print("Reference Number : " + ref_num) print("Title : " + title) print("Area of Delivery : " + aod) print("Solicitation Number: "+ solNr) print("Contact: "+ contact) # pastes the data inside the calc file under the titles ws.append([datePublished, cd, cat, pro_id, abc, name, delp, ref_num, title, aod, solNr, contact]) # saves file so work is safe and sound filename = input("filename: ") wb.save(filename + ".xlsx") print("Saved into '" + filename + ".xlsx'.") Running this code for all 37,820 html files. I've tried updating my Python version from 3.9 to 3.10 and 3.11. I've tried running both python3 phase3.py and just python phase3.py. I've also reinstalled bs4 and openpyxl. Still, the problem isn't fixed. Here is the error openpyxl.utils.exceptions.illegalcharactererror
[ "Openpyxl module raises this exception when you try to assign an ASCII control character (e.g, \"\\x00\", \"\\x01\", ..) to a cell's value. It means that at least one of yourhtml files holds this kind of characters. So, you need to use str.encode to escape those.\nReplace this :\nws.append([datePublished, cd, cat, pro_id, abc, name, delp, ref_num, title, aod, solNr, contact])\n\nBy this :\n try:\n list_of_vals= [datePublished, cd, cat, pro_id, abc, name, delp, ref_num, title, aod, solNr, contact]\n ws.append(list_of_valls)\n except openpyxl.utils.exceptions.illegalcharactererror:\n ws.append([cell_val.encode(\"ascii\") for cell_val in list_of_vals]\n \n\nNB: Be aware of the indentation here.\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "excel", "openpyxl", "python", "python_3.x" ]
stackoverflow_0074587240_beautifulsoup_excel_openpyxl_python_python_3.x.txt
Q: (Pandas/Dataframe) pandas.json_normalize on nested JSON data without uniform record_path I'm attempting to convert a large JSON file to a CSV, but the field that I need to be able to sort data on in the Spreadsheet is all in one cell whenever I convert it to CSV/Normalize the JSON. The main thing I need is the hits list of dictionaries not all be in the same cell when I convert it to a csv. (Structure is: a Dictionary of Dictionaries which contains a List of Dictionaries) Here's an example of what the JSON would look like: https://pastebin.com/VA5mfhfB Here's how I've tried doing it (and what gives somewhat of an output): df = pd.json_normalize(boss_dictionary) df.to_csv(r'data.csv', index=None) I've tried putting a record_path parameter, but because there isn't a "uniform" boss_id (the slew of numbers beforehand), I can't figure out how to normalize the hits list of dictionaries. Another thing that I've tried: df = pd.read_json('data.json') df.to_csv(r'data.csv', index=None) Which does something similar to what I need, but not what I actually need. The hit list is just in one cell instead of being normalized out. What I've tried to fix it: I've tried to normalize it with the dictionary itself, and read it from JSON. I've read the documentation on json_normalize, but no parameters of meta or record_path netted me any result that didn't raise an exception. A: Using json_normalize with in a list comp based off keys. Finally merge and explode. from ast import literal_eval import pandas as pd data = literal_eval(open("/path/to/file/data.txt").read()) df_meta = ( pd .concat([pd.json_normalize(data=data[x]) for x in data], keys=data.keys()) .droplevel(level=1) .reset_index(names="id") ) df_records = ( pd .concat([pd.json_normalize(data=data[x], record_path=["hits"]) for x in data], keys=data.keys()) .droplevel(level=1) .reset_index(names="id") ) df_final = pd.merge(left=df_meta, right=df_records).drop(columns="hits") df_final = df_final.explode("hp_list").reset_index(drop=True)
(Pandas/Dataframe) pandas.json_normalize on nested JSON data without uniform record_path
I'm attempting to convert a large JSON file to a CSV, but the field that I need to be able to sort data on in the Spreadsheet is all in one cell whenever I convert it to CSV/Normalize the JSON. The main thing I need is the hits list of dictionaries not all be in the same cell when I convert it to a csv. (Structure is: a Dictionary of Dictionaries which contains a List of Dictionaries) Here's an example of what the JSON would look like: https://pastebin.com/VA5mfhfB Here's how I've tried doing it (and what gives somewhat of an output): df = pd.json_normalize(boss_dictionary) df.to_csv(r'data.csv', index=None) I've tried putting a record_path parameter, but because there isn't a "uniform" boss_id (the slew of numbers beforehand), I can't figure out how to normalize the hits list of dictionaries. Another thing that I've tried: df = pd.read_json('data.json') df.to_csv(r'data.csv', index=None) Which does something similar to what I need, but not what I actually need. The hit list is just in one cell instead of being normalized out. What I've tried to fix it: I've tried to normalize it with the dictionary itself, and read it from JSON. I've read the documentation on json_normalize, but no parameters of meta or record_path netted me any result that didn't raise an exception.
[ "Using json_normalize with in a list comp based off keys.\nFinally merge and explode.\nfrom ast import literal_eval\n\nimport pandas as pd\n\n\ndata = literal_eval(open(\"/path/to/file/data.txt\").read())\n\ndf_meta = (\n pd\n .concat([pd.json_normalize(data=data[x]) for x in data], keys=data.keys())\n .droplevel(level=1)\n .reset_index(names=\"id\")\n)\n\ndf_records = (\n pd\n .concat([pd.json_normalize(data=data[x], record_path=[\"hits\"]) for x in data], keys=data.keys())\n .droplevel(level=1)\n .reset_index(names=\"id\")\n)\n\ndf_final = pd.merge(left=df_meta, right=df_records).drop(columns=\"hits\")\ndf_final = df_final.explode(\"hp_list\").reset_index(drop=True)\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "json", "json_normalize", "pandas", "python" ]
stackoverflow_0074587293_dataframe_json_json_normalize_pandas_python.txt
Q: It keeps on saying IndexError: String Index out of range I'm coding a discord bot and it works fine until I try and use one of the ! commands (Like !hello) and then It comes up with this ERROR discord.client Ignoring exception in on_message Traceback (most recent call last): File "C:\Users\vanti\PycharmProjects\discordbot4thtry\venv\Lib\site-packages\discord\client.py", line 409, in _run_event await coro(*args, **kwargs) File "C:\Users\vanti\PycharmProjects\discordbot4thtry\bot.py", line 33, in on_message if user_message[0] == '%': ~~~~~~~~~~~~^^^ IndexError: string index out of range The % is supposed to make the bot send you the response in a DM e.g. if I do !hello it would reply in the channel with "Hello there!" but if I put %hello it would send "Hello There!" as a DM import discord import responses async def send_message(message, user_message, is_private): try: response = responses.handle_response(user_message) await message.author.send(response) if is_private else await message.channel.send(response) except Exception as e: print(e) def run_discord_bot(): TOKEN = 'This is where the bots token would go' client = discord.Client(intents=discord.Intents.default()) @client.event async def on_ready(): print(f'{client.user} is now running!') @client.event async def on_message(message): if message.author == client.user: return username = str(message.author) user_message = str(message.content) channel = str(message.channel) print(f"{username} said: '{user_message}' ({channel})") if user_message[0] == '%': user_message = user_message[1:] await send_message(message, user_message, is_private=True) else: await send_message(message, user_message, is_private=False) client.run(TOKEN) A: You must add the message_content intent. intents.message_content = True The class definition will look like def run_discord_bot(): TOKEN = 'This is where the bots token would go' intents = discord.Intents.default() intents.message_content = True client = discord.Client(intents=intents)
It keeps on saying IndexError: String Index out of range
I'm coding a discord bot and it works fine until I try and use one of the ! commands (Like !hello) and then It comes up with this ERROR discord.client Ignoring exception in on_message Traceback (most recent call last): File "C:\Users\vanti\PycharmProjects\discordbot4thtry\venv\Lib\site-packages\discord\client.py", line 409, in _run_event await coro(*args, **kwargs) File "C:\Users\vanti\PycharmProjects\discordbot4thtry\bot.py", line 33, in on_message if user_message[0] == '%': ~~~~~~~~~~~~^^^ IndexError: string index out of range The % is supposed to make the bot send you the response in a DM e.g. if I do !hello it would reply in the channel with "Hello there!" but if I put %hello it would send "Hello There!" as a DM import discord import responses async def send_message(message, user_message, is_private): try: response = responses.handle_response(user_message) await message.author.send(response) if is_private else await message.channel.send(response) except Exception as e: print(e) def run_discord_bot(): TOKEN = 'This is where the bots token would go' client = discord.Client(intents=discord.Intents.default()) @client.event async def on_ready(): print(f'{client.user} is now running!') @client.event async def on_message(message): if message.author == client.user: return username = str(message.author) user_message = str(message.content) channel = str(message.channel) print(f"{username} said: '{user_message}' ({channel})") if user_message[0] == '%': user_message = user_message[1:] await send_message(message, user_message, is_private=True) else: await send_message(message, user_message, is_private=False) client.run(TOKEN)
[ "You must add the message_content intent.\nintents.message_content = True\n\nThe class definition will look like\ndef run_discord_bot():\n TOKEN = 'This is where the bots token would go'\n intents = discord.Intents.default()\n intents.message_content = True\n client = discord.Client(intents=intents)\n\n" ]
[ 1 ]
[ "You may want to add the following line to your code :\nif len(user_message) > 0:\n\nLike this:\n @client.event\n async def on_message(message):\n if message.author == client.user:\n return\n\n username = str(message.author)\n user_message = str(message.content)\n channel = str(message.channel)\n\n print(f\"{username} said: '{user_message}' ({channel})\")\n\n if len(user_message) > 0:\n if user_message[0] == '%':\n user_message = user_message[1:]\n await send_message(message, user_message, is_private=True)\n else:\n await send_message(message, user_message, is_private=False)\n\n client.run(TOKEN)\n\n" ]
[ -1 ]
[ "discord", "discord.py", "python" ]
stackoverflow_0074587344_discord_discord.py_python.txt
Q: Best way to process the update of dictionary in python I wish to update my dictionary based on my values in a dictionary by looping over it, but my method is quite naive, so I wish to seek help here to see whether there is a better way, with a single line or maybe a few lines, to process it and have the same output. My code: g_keypoints = {"test1": (14,145), "test2": (15, 151)} d = {} for k, v in g_keypoints.items(): d.update({k+"_x":v[0]}) d.update({k+"_y":v[1]}) My output: {'test1_x': 14, 'test1_y': 145, 'test2_x': 15, 'test2_y': 151} My expectation: a single line or a more Pythonic way A: I think you basically want a solution using list comprehension. g_keypoints = {"test1": (14,145), "test2": (15, 151)} d = {} [(d.update({k+"_x":v[0]}), d.update({k+"_y":v[1]})) for k,v in g_keypoints.items()] print(d) This seems to work and produces the same output you have, although I feel like the way you have it is more readable than using a single line.
Best way to process the update of dictionary in python
I wish to update my dictionary based on my values in a dictionary by looping over it, but my method is quite naive, so I wish to seek help here to see whether there is a better way, with a single line or maybe a few lines, to process it and have the same output. My code: g_keypoints = {"test1": (14,145), "test2": (15, 151)} d = {} for k, v in g_keypoints.items(): d.update({k+"_x":v[0]}) d.update({k+"_y":v[1]}) My output: {'test1_x': 14, 'test1_y': 145, 'test2_x': 15, 'test2_y': 151} My expectation: a single line or a more Pythonic way
[ "I think you basically want a solution using list comprehension.\ng_keypoints = {\"test1\": (14,145), \"test2\": (15, 151)}\nd = {}\n[(d.update({k+\"_x\":v[0]}), d.update({k+\"_y\":v[1]})) for k,v in g_keypoints.items()]\nprint(d)\n\nThis seems to work and produces the same output you have, although I feel like the way you have it makes it is more readable than using a single line.\n" ]
[ 0 ]
[]
[]
[ "dictionary", "python" ]
stackoverflow_0074581798_dictionary_python.txt
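A more idiomatic alternative to the side-effect list comprehension above is a plain dict comprehension; a minimal sketch, assuming every value is an (x, y) pair as in the question:

g_keypoints = {"test1": (14, 145), "test2": (15, 151)}
# one entry per coordinate, built in a single expression
d = {f"{k}_{axis}": coord
     for k, (x, y) in g_keypoints.items()
     for axis, coord in (("x", x), ("y", y))}
print(d)  # {'test1_x': 14, 'test1_y': 145, 'test2_x': 15, 'test2_y': 151}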
Q: PIP Install web3 I am having trouble installing web3.py on macOS with pip. The error I am getting is xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun I have followed the docs and made a venv the way they said, and still no luck. I am using Python 3.6.6 https://web3py.readthedocs.io/en/stable/troubleshooting.html#setup-environment Has anyone else had this problem? A: I just did it and didn't have a problem. Try upgrading your Python version to the latest one available. Download the newest version of it from www.python.org or from your package manager of choice. If you are using Homebrew, just type: brew update brew upgrade python3 Then simply use pip to install the required packages. pip install web3 A: You also need to run another command; it installs the Xcode command line tools and clears the missing CommandLineTools error on macOS. xcode-select --install
PIP Install web3
I am having trouble installing web3.py on macOS with pip. The error I am getting is xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun I have followed the docs and made a venv the way they said, and still no luck. I am using Python 3.6.6 https://web3py.readthedocs.io/en/stable/troubleshooting.html#setup-environment Has anyone else had this problem?
[ "I just did it and didn't have a problem. \nTry upgrading your Python version to the latest one available. Download the newest version of it from www.python.org or from your package manager of choice.\nIf you are using home brew just type :\nbrew update \n\nbrew upgrade python3\n\nThen just simple use pip to install the required packages.\npip install web3\n\n", "you need to use another command also, it'll help to clear the user error on your os.\nxcode-select --install\n\n" ]
[ 1, 0 ]
[]
[]
[ "ethereum", "macos", "python", "python_3.x", "web3" ]
stackoverflow_0054024875_ethereum_macos_python_python_3.x_web3.txt
Q: Dataframe reindexing in order I have a dataframe like this datasource datavalue 0 aaaa.pdf 5 0 bbbbb.pdf 5 0 cccc.pdf 9 I don't know if this is the reason, but this seems to be messing up a dash display, so I would like to reindex it like datasource datavalue 0 aaaa.pdf 5 1 bbbbb.pdf 5 2 cccc.pdf 9 I used data_all.reset_index() but it is not working; the index values are still 0. How should it be done? EDIT1: Thanks to the two participants who made me notice my mistake. I should have put data_all=data_all.reset_index() Unfortunately it did not go as expected. Before: datasource datavalue 0 aaaa.pdf 5 0 bbbbb.pdf 5 0 cccc.pdf 9 Then data_all.keys() Index(['datasource','datavalue'],dtype='object') So data_all.reset_index() After index datasource datavalue 0 0 aaaa.pdf 5 1 0 bbbbb.pdf 5 2 0 cccc.pdf 9 data_all.keys() Index(['index','datasource','datavalue'],dtype='object') As you see, one column "index" was added. I suppose I can drop that column, but I was expecting something that reindexes the df in one step without adding anything. EDIT2: Turns out drop=True was necessary! Thanks everybody! A: I think this is what you are looking for. df.reset_index(drop=True, inplace=True) #drop: Do not try to insert index into dataframe columns. This resets the index to the default integer index. # inplace: Whether to modify the DataFrame rather than creating a new one. A: Try: data_all = data_all.reset_index(drop=True)
Dataframe reindexing in order
I have a dataframe like this datasource datavalue 0 aaaa.pdf 5 0 bbbbb.pdf 5 0 cccc.pdf 9 I don't know if this is the reason, but this seems to be messing up a dash display, so I would like to reindex it like datasource datavalue 0 aaaa.pdf 5 1 bbbbb.pdf 5 2 cccc.pdf 9 I used data_all.reset_index() but it is not working; the index values are still 0. How should it be done? EDIT1: Thanks to the two participants who made me notice my mistake. I should have put data_all=data_all.reset_index() Unfortunately it did not go as expected. Before: datasource datavalue 0 aaaa.pdf 5 0 bbbbb.pdf 5 0 cccc.pdf 9 Then data_all.keys() Index(['datasource','datavalue'],dtype='object') So data_all.reset_index() After index datasource datavalue 0 0 aaaa.pdf 5 1 0 bbbbb.pdf 5 2 0 cccc.pdf 9 data_all.keys() Index(['index','datasource','datavalue'],dtype='object') As you see, one column "index" was added. I suppose I can drop that column, but I was expecting something that reindexes the df in one step without adding anything. EDIT2: Turns out drop=True was necessary! Thanks everybody!
[ "I think this is what you are looking for.\ndf.reset_index(drop=True, inplace=True)\n#drop: Do not try to insert index into dataframe columns. This resets the index to the default integer index.\n# inplace: Whether to modify the DataFrame rather than creating a new one.\n\n", "Try:\ndata_all = data_all.reset_index(drop=True)\n\n" ]
[ 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074587386_pandas_python.txt
Q: Generate all possible combination of numbers, letters, symbols python I only managed to generate all possible combination of letters in python. from string import ascii_lowercase from itertools import product minimum_length = 0 maximum_length = 1 for length in range(minimum_length, maximum_length + 1): for combo in product(ascii_lowercase, repeat=length): print(''.join(combo)) I only managed to do lowercase Please help thanks A: According to the official documentation you can leverage the following: string.ascii_lowercase string.ascii_uppercase string.punctuation string.digits A: You can import uppercase letters, numbers, and symbols from the same module (or just define a list containing them yourself). Then combine them into one list and create combinations out of that. Modifying your code, it would look like this: from string import ascii_lowercase, ascii_uppercase, punctuation, digits from itertools import product minimum_length = 0 maximum_length = 3 ALLOWED_CHARACTERS = ascii_lowercase + ascii_uppercase + punctuation + digits for length in range(minimum_length, maximum_length + 1): for combo in product(ALLOWED_CHARACTERS, repeat=length): print(''.join(combo))
Generate all possible combination of numbers, letters, symbols python
I only managed to generate all possible combination of letters in python. from string import ascii_lowercase from itertools import product minimum_length = 0 maximum_length = 1 for length in range(minimum_length, maximum_length + 1): for combo in product(ascii_lowercase, repeat=length): print(''.join(combo)) I only managed to do lowercase Please help thanks
[ "According to the official documentation you can leverage the following:\n\nstring.ascii_lowercase\nstring.ascii_uppercase\nstring.punctuation\nstring.digits\n\n", "You can import uppercase letters, numbers, and symbols from the same module (or just define a list containing them yourself).\nThen combine them into one list and create combinations out of that.\nModifying your code, it would look like this:\nfrom string import ascii_lowercase, ascii_uppercase, punctuation, digits\nfrom itertools import product\n\nminimum_length = 0\nmaximum_length = 3\n\nALLOWED_CHARACTERS = ascii_lowercase + ascii_uppercase + punctuation + digits\n\nfor length in range(minimum_length, maximum_length + 1):\n for combo in product(ALLOWED_CHARACTERS, repeat=length):\n print(''.join(combo))\n\n" ]
[ 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0074587286_python.txt
Q: AttributeError: 'CustomerHelper' object has no attribute 'requests_utility' src\helpers\customers_helper.py:23: AttributeError from ssqaapitest.src.utilities.genericUtilities import generate_random_email_and_password from ssqaapitest.src.utilities.requestsUtility import RequestUtility class CustomerHelper(object): def __int__(self): self.requests_utility = RequestUtility() def create_customer(self, email=None, password=None, **kwargs): if not email: ep = generate_random_email_and_password() email = ep['email'] if not password: password = 'Password1' payload = dict() payload['email'] = email payload['password'] = password payload.update(kwargs) create_user_json = self.requests_utility.post('customers', payload=payload, expected_status_code=201) return create_user_json Expected result: no error. A: You have a typo in your init function in your class CustomerHelper. You have __int__(self) it should be __init__(self). This is what is creating the attribute error as that variable never gets initialized.
AttributeError: 'CustomerHelper' object has no attribute 'requests_utility' src\helpers\customers_helper.py:23: AttributeError
from ssqaapitest.src.utilities.genericUtilities import generate_random_email_and_password from ssqaapitest.src.utilities.requestsUtility import RequestUtility class CustomerHelper(object): def __int__(self): self.requests_utility = RequestUtility() def create_customer(self, email=None, password=None, **kwargs): if not email: ep = generate_random_email_and_password() email = ep['email'] if not password: password = 'Password1' payload = dict() payload['email'] = email payload['password'] = password payload.update(kwargs) create_user_json = self.requests_utility.post('customers', payload=payload, expected_status_code=201) return create_user_json Expected result: no error.
[ "You have a typo in your init function in your class CustomerHelper. You have __int__(self) it should be __init__(self). This is what is creating the attribute error as that variable never gets initialized.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074580631_python.txt
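For reference, a minimal sketch of the corrected class; the import path and method body are taken from the question:

from ssqaapitest.src.utilities.requestsUtility import RequestUtility

class CustomerHelper(object):
    def __init__(self):  # double underscores on both sides, not __int__
        self.requests_utility = RequestUtility()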
Q: How to add timestamp in python? I want to set a timestamp the way I want, but there was an error when doing it this way: dir.set({ u'done':False, u'from':datetime.date(2022, 10, 20) }) ('Cannot convert to a Firestore Value', datetime.date(2022, 10, 20), 'Invalid type', <class 'datetime.date'>) A: Your issue is that the Firestore client does not accept a plain datetime.date as a value type (it does accept datetime.datetime, which it stores as a timestamp). To keep it simple, cast your date object to a string before sending it over and you should be good. Something like this should work: dir.set({ 'done':False, 'from':datetime.date(2022, 10, 20).strftime('%Y/%m/%d') })
How to add timestamp in python?
I want to set a timestamp the way I want, but there was an error when doing it this way: dir.set({ u'done':False, u'from':datetime.date(2022, 10, 20) }) ('Cannot convert to a Firestore Value', datetime.date(2022, 10, 20), 'Invalid type', <class 'datetime.date'>)
[ "Your issue is that Firestore does not accept the python Datetime object as a type.\nInstead, cast your datetime object to a string before sending it over and you should be good.\nSomething like this should work\ndir.set({\n 'done':False,\n 'from':datetime.date(2022, 10, 20).strftime('%Y/%m/%d')\n })\n\n" ]
[ 0 ]
[]
[]
[ "google_cloud_firestore", "python" ]
stackoverflow_0074587407_google_cloud_firestore_python.txt
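Since the question is about a timestamp, a hedged alternative to the string cast is to pass a datetime.datetime, which the google-cloud-firestore client stores as a native timestamp; a minimal sketch, assuming dir is a document reference as in the question:

import datetime

dir.set({
    'done': False,
    # datetime.datetime (unlike datetime.date) is accepted and stored as a timestamp
    'from': datetime.datetime(2022, 10, 20),
})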
Q: Comparing previous rows in two columns of a DataFrame I have a dataframe of transactions with the unique ID of a product, seller and buyer. I want to keep a record whether someone has bought a product and later resold it. Here's a simplified view of my dataset: prod_id seller buyer 0 cc_123 x y 1 cc_111 d y 2 cc_025 y x 3 cc_806 d m 4 cc_963 a b 5 cc_235 o h 6 cc_806 m t 7 cc_555 z w 8 cc_444 s q My initial idea was to group by id and compare consecutive rows in the grouped dataframe, by checking if the seller of the current row is the same person as the buyer of the previous row for a given product, e.g., transaction 6 is a resale because the same person "m" previously bought and is now selling product cc_806: prod_id seller buyer 3 cc_806 d m 6 cc_806 m t So the final dataset would look like this: prod_id seller buyer resale 0 cc_123 x y 0 1 cc_111 d y 0 2 cc_025 y x 0 3 cc_806 d m 0 4 cc_963 a b 0 5 cc_235 o h 0 6 cc_806 m t 1 7 cc_555 z w 0 8 cc_444 s q 0 Where 1 means yes/true and 0 means no/false. My attempt is not working: df['resale'] = df.groupby('prod_id')['seller'] == df.groupby('prod_id')['buyer'].shift(1) Is there an efficient solution for this? A: Your attempt almost got there. For sellers, it has no need to groupby. df['resale'] = df.groupby('prod_id')['buyer'].shift(1) == df['seller'] df['resale'] = df['resale'].astype(int) df output: prod_id seller buyer resale 0 cc_123 x y 0 1 cc_111 d y 0 2 cc_025 y x 0 3 cc_806 d m 0 4 cc_963 a b 0 5 cc_235 o h 0 6 cc_806 m t 1 7 cc_555 z w 0 8 cc_444 s q 0 A: use this df["resale"]=df.apply(lambda x: len(df[(df["prod_id"]==x["prod_id"])& (df["buyer"]==x["seller"])]),axis=1) output prod_id seller buyer resale 0 cc_123 x y 0 1 cc_111 d y 0 2 cc_025 y x 0 3 cc_806 d m 0 4 cc_963 a b 0 5 cc_235 o h 0 6 cc_806 m t 1 7 cc_555 z w 0 8 cc_444 s q 0
Comparing previous rows in two columns of a DataFrame
I have a dataframe of transactions with the unique ID of a product, seller and buyer. I want to keep a record whether someone has bought a product and later resold it. Here's a simplified view of my dataset: prod_id seller buyer 0 cc_123 x y 1 cc_111 d y 2 cc_025 y x 3 cc_806 d m 4 cc_963 a b 5 cc_235 o h 6 cc_806 m t 7 cc_555 z w 8 cc_444 s q My initial idea was to group by id and compare consecutive rows in the grouped dataframe, by checking if the seller of the current row is the same person as the buyer of the previous row for a given product, e.g., transaction 6 is a resale because the same person "m" previously bought and is now selling product cc_806: prod_id seller buyer 3 cc_806 d m 6 cc_806 m t So the final dataset would look like this: prod_id seller buyer resale 0 cc_123 x y 0 1 cc_111 d y 0 2 cc_025 y x 0 3 cc_806 d m 0 4 cc_963 a b 0 5 cc_235 o h 0 6 cc_806 m t 1 7 cc_555 z w 0 8 cc_444 s q 0 Where 1 means yes/true and 0 means no/false. My attempt is not working: df['resale'] = df.groupby('prod_id')['seller'] == df.groupby('prod_id')['buyer'].shift(1) Is there an efficient solution for this?
[ "Your attempt almost got there. For sellers, it has no need to groupby.\ndf['resale'] = df.groupby('prod_id')['buyer'].shift(1) == df['seller']\ndf['resale'] = df['resale'].astype(int)\ndf\n\noutput:\n prod_id seller buyer resale\n0 cc_123 x y 0\n1 cc_111 d y 0\n2 cc_025 y x 0\n3 cc_806 d m 0\n4 cc_963 a b 0\n5 cc_235 o h 0\n6 cc_806 m t 1\n7 cc_555 z w 0\n8 cc_444 s q 0\n\n", "use this\ndf[\"resale\"]=df.apply(lambda x: len(df[(df[\"prod_id\"]==x[\"prod_id\"])& (df[\"buyer\"]==x[\"seller\"])]),axis=1)\n\noutput\n prod_id seller buyer resale\n0 cc_123 x y 0\n1 cc_111 d y 0\n2 cc_025 y x 0\n3 cc_806 d m 0\n4 cc_963 a b 0\n5 cc_235 o h 0\n6 cc_806 m t 1\n7 cc_555 z w 0\n8 cc_444 s q 0\n\n\n" ]
[ 2, 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074587238_pandas_python.txt
Q: Don't allow duplicate value records in a list of dictionaries When the user is adding details to a dictionary, I want to check whether those details are already there or not. When name, age, team and car have the same values as another record, ignore those inputs and tell the user "It's already there"; otherwise "add" the details to the dictionary. Also, this duplication check should happen before appending to the List. I don't know how to do this; I tried but it doesn't work. driver_list = [] name = str(input("Enter player name : ")) try: age = int(input("Enter the age : ")) except ValueError: print("Input an integer",) age = int(input("Enter the age : ")) team = str(input("Enter team name : ")) car = str(input("Enter name of the car : ")) try: current_points = int(input("Enter the current points of the player : ")) except ValueError: print("Input an integer") current_points = int(input("Enter the current points of the player : ")) driver_details={"Name":name ,"Age": age,"Team": team ,"Car": car, "Current_points": current_points} driver_list.append(driver_details) A: # check whether all values in driver_det are found in any of the # dictionaries in the list driver_list def checkAllInList(driver_det): # define the keys we are interested in (capitalized to match driver_details) Interest = ['Name', 'Age', 'Team', 'Car'] # get corresponding values from the driver_det b = set([value for key, value in driver_det.items() if key in Interest]) # iterate over all driver_list dictionaries for d in driver_list: # get all values corresponding to the interest keys for the d driver a = set([value for key, value in d.items() if key in Interest]) # if both have the same values, return true if a == b: return True; # if found equal, return true return False; #otherwise, return false if none of the dictionaries have these details # you can use it like that if checkAllInList(driver_details): # check a duplicate is found print("this is a duplicate value") else: # no duplicate found driver_list.append(driver_details)
Don't allow duplicate value records in a list of dictionaries
When the user is adding details to a dictionary, I want to check whether those details are already there or not. When name, age, team and car have the same values as another record, ignore those inputs and tell the user "It's already there"; otherwise "add" the details to the dictionary. Also, this duplication check should happen before appending to the List. I don't know how to do this; I tried but it doesn't work. driver_list = [] name = str(input("Enter player name : ")) try: age = int(input("Enter the age : ")) except ValueError: print("Input an integer",) age = int(input("Enter the age : ")) team = str(input("Enter team name : ")) car = str(input("Enter name of the car : ")) try: current_points = int(input("Enter the current points of the player : ")) except ValueError: print("Input an integer") current_points = int(input("Enter the current points of the player : ")) driver_details={"Name":name ,"Age": age,"Team": team ,"Car": car, "Current_points": current_points} driver_list.append(driver_details)
[ "# check all values in the driver_det is found in any of the \n# dictionaries in the list driver_list\ndef checkAllInList(driver_det): \n # define the keys we are interested in\n Interest = ['name', 'age', 'team', 'car']\n # get corresponding values from the driver_det\n b = set([value for key, value in driver_det.items() if key in Interest])\n \n # iterate over all driver_list dictionaries\n for d in driver_list:\n # get all values corresponding to the interest keys for the d driver\n a = set([value for key, value in d.items() if key in Interest])\n # if both have the same values, return true\n if a == b:\n return True; # if found equal, return true\n return False; #otherwise, return false if non of the dictionaries have these details\n\n# you can use it like that\nif checkAllInList(driver_details): # check a duplicate is found\n print(\"this is a duplicate value\")\nelse: # no duplicate found\n driver_list.append(driver_details)\n\n" ]
[ 2 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0074587455_dictionary_list_python.txt
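Note that comparing sets of values can still misfire when two different fields happen to hold the same value, since the key is ignored; a key-aware sketch, using the capitalized keys from the question:

def is_duplicate(details, existing):
    keys = ("Name", "Age", "Team", "Car")
    # a record is a duplicate only if every one of these fields matches
    return any(all(d.get(k) == details.get(k) for k in keys) for d in existing)

if is_duplicate(driver_details, driver_list):
    print("It's already there")
else:
    driver_list.append(driver_details)
    print("add")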
Q: How can I specify a file without listing the entire file path? I am currently using pandas to read an "output.csv" file by specifying the filepath as : df = pd.read_csv(r'C:\users\user1\desktop\project\output.csv') While this works perfectly fine on my local machine, is there a way I can code this so anyone who runs the script can use it? I want to be able to hand this script to coworkers who have no knowledge of Python and have them be able to run it without manually changing the username in the path. I have tried using os.path to no avail: df = pd.read_csv(os.path.dirname('output.csv')) SOLUTION: df = pd.read_csv('output.csv'). Simple, embarrassing, and a wonderful building block to learn from. Thank you all. A: If you're shipping out the output.csv in the same directory as the python script, you should be able to reference it directly pd.read_csv('output.csv'). If your need to get the full path + filename for the file, you should use os.path.abspath(__file__). Finally, if your output.csv is in a static location in all your coworkers computers and you need to get the username, you can use os.getlogin() and add it to the path. So there's a bunch of solutions here depending on your exact problem. A: Use this line to get an absolute directory path of your application, then contact your csv file to it. import os abspath = os.path.dirname(os.path.realpath(__file__)) print(abspath) csvpath = abspath + '/my_csv_file.csv' print(csvpath)
How can I specify a file without listing the entire file path?
I am currently using pandas to read an "output.csv" file by specifying the filepath as : df = pd.read_csv(r'C:\users\user1\desktop\project\output.csv') While this works perfectly fine on my local machine, is there a way I can code this so anyone who runs the script can use it? I want to be able to hand this script to coworkers who have no knowledge of Python and have them be able to run it without manually changing the username in the path. I have tried using os.path to no avail: df = pd.read_csv(os.path.dirname('output.csv')) SOLUTION: df = pd.read_csv('output.csv'). Simple, embarrassing, and a wonderful building block to learn from. Thank you all.
[ "If you're shipping out the output.csv in the same directory as the python script, you should be able to reference it directly pd.read_csv('output.csv').\nIf your need to get the full path + filename for the file, you should use os.path.abspath(__file__).\nFinally, if your output.csv is in a static location in all your coworkers computers and you need to get the username, you can use os.getlogin() and add it to the path.\nSo there's a bunch of solutions here depending on your exact problem.\n", "Use this line to get an absolute directory path of your application, then contact your csv file to it.\nimport os\nabspath = os.path.dirname(os.path.realpath(__file__))\nprint(abspath)\ncsvpath = abspath + '/my_csv_file.csv'\nprint(csvpath)\n\n" ]
[ 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074587277_pandas_python.txt
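A pathlib-based alternative sketch for resolving the CSV next to the script, regardless of where it is run from (the file name is taken from the question):

from pathlib import Path
import pandas as pd

csv_path = Path(__file__).resolve().parent / "output.csv"
df = pd.read_csv(csv_path)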
Q: Scraping game name containing the "@": scraper recognizes such name as email address I want to scrape games' information. However, some games' name contains "@", such as the game "Ampers@t". When I try to scrape such games' title, the code will return me "[email protected]". Apparently, my code does not recognize that this is a game's name, not an email. Here is my code. import requests from requests.adapters import HTTPAdapter from urllib3.util.retry import Retry from bs4 import BeautifulSoup session = requests.Session() retry = Retry(connect=3, backoff_factor=0.5) adapter = HTTPAdapter(max_retries=retry) session.mount("http://", adapter) session.mount("https://", adapter) def grab_soup(url): """Takes a url and returns a BeautifulSoup object""" response = session.get(url, headers={"User-Agent": "Mozilla/5.0"}) assert response.status_code == 200, "Problem with url request! %s throws %s" % ( url, response.status_code, ) # checking that it worked page = response.text soup = BeautifulSoup(page, "lxml") return soup soup = grab_soup("https://www.mobygames.com/game/amperst") header = soup.find(class_="niceHeaderTitle")("a") What I expect the output of the header is <a href="https://www.mobygames.com/game/amperst">Ampers@t</a>, ... However, the output is: <a href="https://www.mobygames.com/game/amperst"><span class="__cf_email__" data-cfemail="ce8fa3beabbcbd8eba">[email protected]</span></a>, ... And I try to check the page source of the game. Indeed, the page source is recorded like this: <div class="rightPanelHeader"> <h1 class="niceHeaderTitle"> <a href="https://www.mobygames.com/game/amperst"> <span class="__cf_email__" data-cfemail="cd8ca0bda8bfbe8db9">[email&#160;protected] </span> </a>" Therefore, my code probably gives me this content because it is what is recorded in the page source. But IS there a way that I can get rid of this issue? I found this solution. But it is undesirable for me as I have lots of games to scrape and cannot run the code for each game. A: [Expanded from my comment] If you tweak the function from this answer a bit to def deCFEmail(encTag): if not (encTag.get('data-cfemail') or encTag.select('*[data-cfemail]')): encTag.append(f'[! no "data-cfemail" attribute !]') else: fp = encTag.get('data-cfemail', None) if fp is None: fp = encTag.select_one('*[data-cfemail]').get('data-cfemail') try: r = int(fp[:2],16) encTag.string = ''.join([chr(int(fp[i:i+2], 16) ^ r) for i in range(2, len(fp), 2)]) except Exception as e: encTag.append(f'! failed to decode "{e}"') return encTag then you can use it conditionally: for i, h in enumerate(header): if h.get_text()=='[email\xa0protected]': header[i] = deCFEmail(h) So, header can go from [<a href="https://www.mobygames.com/game/amperst"><span class="__cf_email__" data-cfemail="8ecfe3feebfcfdcefa">[email protected]</span></a>, <a class="btn btn-xs btn-clear" href="https://www.mobygames.com/game/amperst/forums">Discuss</a>, <a class="btn btn-xs btn-clear" href="https://www.mobygames.com/game/sheet/review_game/amperst/">Review</a>, <a class="btn btn-xs btn-clear" href="https://www.mobygames.com/game/amperst/add-to-want-list">+ Want</a>, <a class="btn btn-xs btn-clear" href="https://www.mobygames.com/game/amperst/add-to-have-list">+ Have</a>, <a class="btn btn-xs btn-mobysuccess" href="https://www.mobygames.com/game/amperst/contribute">Contribute</a>] to [<a href="https://www.mobygames.com/game/amperst">Ampers@t</a>, <a class="btn btn-xs btn-clear" href="https://www.mobygames.com/game/amperst/forums">Discuss</a>, <a class="btn btn-xs btn-clear" href="https://www.mobygames.com/game/sheet/review_game/amperst/">Review</a>, <a class="btn btn-xs btn-clear" href="https://www.mobygames.com/game/amperst/add-to-want-list">+ Want</a>, <a class="btn btn-xs btn-clear" href="https://www.mobygames.com/game/amperst/add-to-have-list">+ Have</a>, <a class="btn btn-xs btn-mobysuccess" href="https://www.mobygames.com/game/amperst/contribute">Contribute</a>] it is kind of a speed requirement The additional time would not be unlike adding an extra find statement; and anyways, it would be insignificant compared to the parsing time (for soup = BeautifulSoup(page, "lxml")).
Scraping game name containing the "@": scraper recognizes such name as email address
I want to scrape games' information. However, some games' name contains "@", such as the game "Ampers@t". When I try to scrape such games' title, the code will return me "[email protected]". Apparently, my code does not recognize that this is a game's name, not an email. Here is my code. import requests from requests.adapters import HTTPAdapter from urllib3.util.retry import Retry from bs4 import BeautifulSoup session = requests.Session() retry = Retry(connect=3, backoff_factor=0.5) adapter = HTTPAdapter(max_retries=retry) session.mount("http://", adapter) session.mount("https://", adapter) def grab_soup(url): """Takes a url and returns a BeautifulSoup object""" response = session.get(url, headers={"User-Agent": "Mozilla/5.0"}) assert response.status_code == 200, "Problem with url request! %s throws %s" % ( url, response.status_code, ) # checking that it worked page = response.text soup = BeautifulSoup(page, "lxml") return soup soup = grab_soup("https://www.mobygames.com/game/amperst") header = soup.find(class_="niceHeaderTitle")("a") What I expect the output of the header is <a href="https://www.mobygames.com/game/amperst">Ampers@t</a>, ... However, the output is: <a href="https://www.mobygames.com/game/amperst"><span class="__cf_email__" data-cfemail="ce8fa3beabbcbd8eba">[email protected]</span></a>, ... And I try to check the page source of the game. Indeed, the page source is recorded like this: <div class="rightPanelHeader"> <h1 class="niceHeaderTitle"> <a href="https://www.mobygames.com/game/amperst"> <span class="__cf_email__" data-cfemail="cd8ca0bda8bfbe8db9">[email&#160;protected] </span> </a>" Therefore, my code probably gives me this content because it is what is recorded in the page source. But IS there a way that I can get rid of this issue? I found this solution. But it is undesirable for me as I have lots of games to scrape and cannot run the code for each game.
[ "[Expanded from my comment] If you tweak the function from this answer a bit to\ndef deCFEmail(encTag):\n if not (encTag.get('data-cfemail') or encTag.select('*[data-cfemail]')):\n encTag.append(f'[! no \"data-cfemail\" attribute !]')\n else:\n fp = encTag.get('data-cfemail', None)\n if fp is None:\n fp = encTag.select_one('*[data-cfemail]').get('data-cfemail')\n try:\n r = int(fp[:2],16)\n encTag.string = ''.join([chr(int(fp[i:i+2], 16) ^ r) for i in range(2, len(fp), 2)]) \n except Exception as e:\n encTag.append(f'! failed to decode \"{e}\"')\n return encTag\n\n\nthen you can use it conditionally:\nfor i, h in enumerate(header): \n if h.get_text()=='[email\\xa0protected]': \n header[i] = deCFEmail(h)\n\n\nSo, header can go from\n[<a href=\"https://www.mobygames.com/game/amperst\"><span class=\"__cf_email__\" data-cfemail=\"8ecfe3feebfcfdcefa\">[email protected]</span></a>,\n <a class=\"btn btn-xs btn-clear\" href=\"https://www.mobygames.com/game/amperst/forums\">Discuss</a>,\n <a class=\"btn btn-xs btn-clear\" href=\"https://www.mobygames.com/game/sheet/review_game/amperst/\">Review</a>,\n <a class=\"btn btn-xs btn-clear\" href=\"https://www.mobygames.com/game/amperst/add-to-want-list\">+ Want</a>,\n <a class=\"btn btn-xs btn-clear\" href=\"https://www.mobygames.com/game/amperst/add-to-have-list\">+ Have</a>,\n <a class=\"btn btn-xs btn-mobysuccess\" href=\"https://www.mobygames.com/game/amperst/contribute\">Contribute</a>]\n\nto\n[<a href=\"https://www.mobygames.com/game/amperst\">Ampers@t</a>,\n <a class=\"btn btn-xs btn-clear\" href=\"https://www.mobygames.com/game/amperst/forums\">Discuss</a>,\n <a class=\"btn btn-xs btn-clear\" href=\"https://www.mobygames.com/game/sheet/review_game/amperst/\">Review</a>,\n <a class=\"btn btn-xs btn-clear\" href=\"https://www.mobygames.com/game/amperst/add-to-want-list\">+ Want</a>,\n <a class=\"btn btn-xs btn-clear\" href=\"https://www.mobygames.com/game/amperst/add-to-have-list\">+ Have</a>,\n <a class=\"btn btn-xs btn-mobysuccess\" href=\"https://www.mobygames.com/game/amperst/contribute\">Contribute</a>]\n\n\n\nit is kind of a speed requirement\n\nThe additional time would not be unlike adding an extra find statement; and anyways, it would be insignificant compared to the parsing time (for soup = BeautifulSoup(page, \"lxml\")).\n" ]
[ 1 ]
[]
[]
[ "python", "web_scraping" ]
stackoverflow_0074577001_python_web_scraping.txt
Q: TypeError: 'float' object is not callable even after multiple tries enter image description here I cannot figure out how I can get this formula to work. Any help is very appreciated! :) I tried applying everything that is in the picture, but I barely have any knowledge of coding. A: An operator is missing, as shown in the image; that's why it raises a 'not callable' error. Having (1)(2) (this is seen as a function call by the interpreter) doesn't mean (1) * (2) in Python.
TypeError: 'float' object is not callable even after multiple tries
enter image description here I cannot figure out how I can get this formula to work. Any help is very appreciated! :) I tried applying everything that is in the picture, but I barely have any knowledge of coding.
[ "An operator is missing as shown in the image that's why it raisea s not callable error. having (1)(2)(this is seen as a function call in the interpreter) doesn't mean (1) * (2) in python,\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074587459_python.txt
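A minimal reproduction of the error and the fix; the actual formula is only shown in the image, so the numbers here are illustrative:

x = (1.5)(2.0)      # TypeError: 'float' object is not callable
x = (1.5) * (2.0)   # write the multiplication operator explicitly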
Q: How to tell AWS Lambda how to access relative files Python I'm trying to get some Python code to run on AWS Lambda. This is my file structure. I'm trying to run the lambda_handler function in the aws_lambda_function module. The code in aws_lambda_function is: import json from server.server_code import process_request def lambda_handler(event, context): response = process_request(event) return { 'statusCode': 200, 'body': json.dumps(response) } I am telling the lambda to look for the code to run here: I have found that when I comment out line 2 from aws_lambda_function, I get the following error instead: This suggests to me that it's having a hard time with how I'm trying to import the server_code module. I've tried each of the following: from .server_code import process_request (this produces the same error about relative imports beyond top-level packages) from server_code import process_request (this produces the error Unable to import module 'server.aws_lambda_function': No module named 'server_code') I've read a lot of articles and Stack exchange threads about how to tackle this issue of relative imports in Python, but following their instructions haven't made a difference so far. (Note: I have been clicking the "Deploy" button each time I make a change.) Any thoughts on how I can make it so that my handler can reference this function from the server_code.py file? It works fine when I run the code locally on my machine. A: Okay, so it turns out that these AWS messages aren't the most descriptive or helpful for finding where the actual bugs are. It turns out that what I had to do was to go recursively through every folder in this directory, add an __init__.py file to make the folder a package, and then remove all relative imports and replace them with really long, absolute imports starting with the parent package. That did the trick!
How to tell AWS Lambda how to access relative files Python
I'm trying to get some Python code to run on AWS Lambda. This is my file structure. I'm trying to run the lambda_handler function in the aws_lambda_function module. The code in aws_lambda_function is: import json from server.server_code import process_request def lambda_handler(event, context): response = process_request(event) return { 'statusCode': 200, 'body': json.dumps(response) } I am telling the lambda to look for the code to run here: I have found that when I comment out line 2 from aws_lambda_function, I get the following error instead: This suggests to me that it's having a hard time with how I'm trying to import the server_code module. I've tried each of the following: from .server_code import process_request (this produces the same error about relative imports beyond top-level packages) from server_code import process_request (this produces the error Unable to import module 'server.aws_lambda_function': No module named 'server_code') I've read a lot of articles and Stack exchange threads about how to tackle this issue of relative imports in Python, but following their instructions haven't made a difference so far. (Note: I have been clicking the "Deploy" button each time I make a change.) Any thoughts on how I can make it so that my handler can reference this function from the server_code.py file? It works fine when I run the code locally on my machine.
[ "Okay, so it turns out that these AWS messages aren't the most descriptive or helpful for finding where the actual bugs are. It turns out that what I had to do was to go through recursively through every folder in this directory, add an __init__.py file to make the folder a package, and then remove all relative imports and replace them with really long, absolute imports starting with the parent package. That did the trick!\n" ]
[ 0 ]
[]
[]
[ "amazon_web_services", "aws_lambda", "python" ]
stackoverflow_0074579538_amazon_web_services_aws_lambda_python.txt
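A minimal sketch of the layout the accepted answer describes; the names server, server_code and process_request come from the question, everything else is illustrative:

# Deployment package layout (every folder gets an __init__.py):
# server/
#     __init__.py
#     aws_lambda_function.py
#     server_code.py

# server/aws_lambda_function.py
import json
from server.server_code import process_request  # absolute import rooted at the parent package

def lambda_handler(event, context):
    response = process_request(event)
    return {'statusCode': 200, 'body': json.dumps(response)}

With this layout, the Lambda handler setting would be server.aws_lambda_function.lambda_handler.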
Q: python - Opening another tkinter window with opencv camera doesn't show live feed I have 2 windows: the first is the main window (window1) and the other is a window with an opencv camera (window2). I have a button on window1 that opens window2. Whenever I open window2 from window1, the camera feed won't show on the GUI. But if I open window2 individually, which is in a different file, the camera is showing. I tried to put it all in a single python file, and it still doesn't work; the camera is still not showing. A: I found the answer. I fixed it by removing the mainloop() call in window2, since window2 already runs inside the tkinter event loop started by window1.
python - Opening another tkinter window with opencv camera doesn't show live feed
I have 2 windows: the first is the main window (window1) and the other is a window with an opencv camera (window2). I have a button on window1 that opens window2. Whenever I open window2 from window1, the camera feed won't show on the GUI. But if I open window2 individually, which is in a different file, the camera is showing. I tried to put it all in a single python file, and it still doesn't work; the camera is still not showing.
[ "I found the answer.\nI fixed it by removing the main loop on window2 since camera window2 also runs in tkinter.\n" ]
[ 0 ]
[]
[]
[ "camera", "python", "tkinter", "window" ]
stackoverflow_0074573165_camera_python_tkinter_window.txt
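A minimal sketch of the pattern the answer implies: the second window is a Toplevel that reuses window1's event loop instead of starting its own. The camera-update code is omitted and all widget names are illustrative:

import tkinter as tk

window1 = tk.Tk()

def open_window2():
    # Child window; note there is no second mainloop() call here.
    window2 = tk.Toplevel(window1)
    tk.Label(window2, text='camera frames would be rendered here').pack()

tk.Button(window1, text='Open camera window', command=open_window2).pack()
window1.mainloop()  # the single event loop drives both windows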
Q: How to compile a Python script into an executable program that can be used by others My Python script is finished and working and I want to compile it so other users can enjoy/benefit from it. The users shouldn't need to install PyCharm or Visual Studio Code; I want something like an executable file they can run from a command prompt on their local machine, or is there a way to convert it to a Tampermonkey script? How do I achieve this? Thank you very much in advance! I Googled and YouTubed but it's not what I'm looking for. A: This question is probably answered multiple times, but the PyInstaller module is a great way to generate an executable that will run on Windows, and an app that will run on macOS. Check out PyInstaller on PyPI.org: https://pypi.org/project/pyinstaller/ Project description PyInstaller bundles a Python application and all its dependencies into a single package. The user can run the packaged app without installing a Python interpreter or any modules. Documentation: https://pyinstaller.org/ Code: https://github.com/pyinstaller/pyinstaller PyInstaller reads a Python script written by you. It analyzes your code to discover every other module and library your script needs in order to execute. Then it collects copies of all those files – including the active Python interpreter! – and puts them with your script in a single folder, or optionally in a single executable file. PyInstaller is tested against Windows, macOS, and GNU/Linux. However, it is not a cross-compiler: to make a Windows app you run PyInstaller in Windows; to make a GNU/Linux app you run it in GNU/Linux, etc. PyInstaller has been used successfully with AIX, Solaris, FreeBSD and OpenBSD, but is not tested against them as part of the continuous integration tests. Main Advantages Works out-of-the-box with any Python version 3.7-3.11. Fully multi-platform, and uses the OS support to load the dynamic libraries, thus ensuring full compatibility. Correctly bundles the major Python packages such as numpy, PyQt5, PySide2, PyQt6, PySide6, wxPython, matplotlib and others out-of-the-box. Compatible with many 3rd-party packages out-of-the-box. (All the required tricks to make external packages work are already integrated.) Works with code signing on macOS. Bundles MS Visual C++ DLLs on Windows. Installation PyInstaller is available on PyPI. You can install it through pip: pip install pyinstaller Requirements and Tested Platforms Python: 3.7-3.11. Note that Python 3.10.0 contains a bug making it unsupportable by PyInstaller. PyInstaller will also not work with beta releases of Python 3.12. tinyaes 1.0+ (only if using bytecode encryption). Instead of installing tinyaes, you can pip install pyinstaller[encryption]. Windows (32bit/64bit): PyInstaller should work on Windows 7 or newer, but we only officially support Windows 8+. Support for Python installed from the Windows store without using virtual environments requires PyInstaller 4.4 or later. Note that Windows on arm64 is not yet supported. If you have such a device and want to help us add arm64 support then please let us know on our issue tracker. Linux: GNU libc based distributions on architectures x86_64, aarch64, i686, ppc64le, s390x. musl libc based distributions on architectures x86_64, aarch64. ldd: Console application to print the shared libraries required by each program or shared library. This typically can be found in the distribution-package glibc or libc-bin.
objdump: Console application to display information from object files. This typically can be found in the distribution-package binutils. objcopy: Console application to copy and translate object files. This typically can be found in the distribution-package binutils, too. Raspberry Pi users on armv5-armv7 should add piwheels as an extra index url then pip install pyinstaller as usual. macOS (x86_64 or arm64): macOS 10.15 (Catalina) or newer. Supports building universal2 applications provided that your installation of Python and all your dependencies are also compiled universal2. Usage Basic usage is very simple, just run it against your main script: pyinstaller /path/to/yourscript.py For more details, see the manual. Untested Platforms The following platforms have been contributed and any feedback or enhancements on these are welcome. FreeBSD ldd Solaris ldd objdump AIX AIX 6.1 or newer. PyInstaller will not work with statically linked Python libraries. ldd Linux on any other libc implementation/architecture combination not listed above. Before using any contributed platform, you need to build the PyInstaller bootloader. This will happen automatically when you pip install pyinstaller provided that you have an appropriate C compiler (typically either gcc or clang) and zlib’s development headers already installed. Support Official debugging guide: https://pyinstaller.org/en/v5.6.2/when-things-go-wrong.html Assorted user contributed help topics: https://github.com/pyinstaller/pyinstaller/wiki Web based Q&A forums: https://github.com/pyinstaller/pyinstaller/discussions Email based Q&A forums: https://groups.google.com/g/pyinstaller Changes in this Release You can find a detailed list of changes in this release in the Changelog section of the manual.
How to compile a Python script into an executable program that can be used by others
My Python script is finished and working and I want to compile it so other users can enjoy/benefit from it. The users shouldn't need to install PyCharm or Visual Studio Code; I want something like an executable file they can run from a command prompt on their local machine, or is there a way to convert it to a Tampermonkey script? How do I achieve this? Thank you very much in advance! I Googled and YouTubed but it's not what I'm looking for.
[ "This question is probably answered multiple times, but the PyInstaller module is a great way to generate an executable that will run on Windows, and an app that will run on macOS.\nCheck out PyInstaller on PyPI.org: https://pypi.org/project/pyinstaller/\nProject description\nPyPI PyPI - Python Version Read the Docs (version) PyPI - Downloads\nPyInstaller bundles a Python application and all its dependencies into a single package. The user can run the packaged app without installing a Python interpreter or any modules.\nDocumentation:\nhttps://pyinstaller.org/\nCode:\nhttps://github.com/pyinstaller/pyinstaller\nPyInstaller reads a Python script written by you. It analyzes your code to discover every other module and library your script needs in order to execute. Then it collects copies of all those files – including the active Python interpreter! – and puts them with your script in a single folder, or optionally in a single executable file.\nPyInstaller is tested against Windows, macOS, and GNU/Linux. However, it is not a cross-compiler: to make a Windows app you run PyInstaller in Windows; to make a GNU/Linux app you run it in GNU/Linux, etc. PyInstaller has been used successfully with AIX, Solaris, FreeBSD and OpenBSD, but is not tested against them as part of the continuous integration tests.\nMain Advantages\nWorks out-of-the-box with any Python version 3.7-3.11.\nFully multi-platform, and uses the OS support to load the dynamic libraries, thus ensuring full compatibility.\nCorrectly bundles the major Python packages such as numpy, PyQt5, PySide2, PyQt6, PySide6, wxPython, matplotlib and others out-of-the-box.\nCompatible with many 3rd-party packages out-of-the-box. (All the required tricks to make external packages work are already integrated.)\nWorks with code signing on macOS.\nBundles MS Visual C++ DLLs on Windows.\nInstallation\nPyInstaller is available on PyPI. You can install it through pip:\npip install pyinstaller\nRequirements and Tested Platforms\nPython:\n3.7-3.11. Note that Python 3.10.0 contains a bug making it unsupportable by PyInstaller. PyInstaller will also not work with beta releases of Python 3.12.\ntinyaes 1.0+ (only if using bytecode encryption). Instead of installing tinyaes, pip install pyinstaller[encryption] instead.\nWindows (32bit/64bit):\nPyInstaller should work on Windows 7 or newer, but we only officially support Windows 8+.\nSupport for Python installed from the Windows store without using virtual environments requires PyInstaller 4.4 or later.\nNote that Windows on arm64 is not yet supported. If you have such a device and want to help us add arm64 support then please let us know on our issue tracker.\nLinux:\nGNU libc based distributions on architectures x86_64, aarch64, i686, ppc64le, s390x.\nmusl libc based distributions on architectures x86_64, aarch64.\nldd: Console application to print the shared libraries required by each program or shared library. This typically can be found in the distribution-package glibc or libc-bin.\nobjdump: Console application to display information from object files. This typically can be found in the distribution-package binutils.\nobjcopy: Console application to copy and translate object files. 
This typically can be found in the distribution-package binutils, too.\nRaspberry Pi users on armv5-armv7 should add piwheels as an extra index url then pip install pyinstaller as usual.\nmacOS (x86_64 or arm64):\nmacOS 10.15 (Catalina) or newer.\nSupports building universal2 applications provided that your installation of Python and all your dependencies are also compiled universal2.\nUsage\nBasic usage is very simple, just run it against your main script:\npyinstaller /path/to/yourscript.py\nFor more details, see the manual.\nUntested Platforms\nThe following platforms have been contributed and any feedback or enhancements on these are welcome.\nFreeBSD\nldd\nSolaris\nldd\nobjdump\nAIX\nAIX 6.1 or newer. PyInstaller will not work with statically linked Python libraries.\nldd\nLinux on any other libc implementation/architecture combination not listed above.\nBefore using any contributed platform, you need to build the PyInstaller bootloader. This will happen automatically when you pip install pyinstaller provided that you have an appropriate C compiler (typically either gcc or clang) and zlib’s development headers already installed.\nSupport\nOfficial debugging guide: https://pyinstaller.org/en/v5.6.2/when-things-go-wrong.html\nAssorted user contributed help topics: https://github.com/pyinstaller/pyinstaller/wiki\nWeb based Q&A forums: https://github.com/pyinstaller/pyinstaller/discussions\nEmail based Q&A forums: https://groups.google.com/g/pyinstaller\nChanges in this Release\nYou can find a detailed list of changes in this release in the Changelog section of the manual.\n" ]
[ 0 ]
[]
[]
[ "command_line", "compiler_construction", "csv", "python" ]
stackoverflow_0074587560_command_line_compiler_construction_csv_python.txt
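As a brief supplement to the answer above, PyInstaller can also be driven from Python itself; this is a hedged sketch, and the script name and options are placeholders:

import PyInstaller.__main__

# Roughly equivalent to: pyinstaller --onefile --name myapp yourscript.py
PyInstaller.__main__.run([
    '--onefile',        # bundle everything into one executable
    '--name', 'myapp',  # name of the produced executable (placeholder)
    'yourscript.py',    # entry-point script (placeholder)
])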
Q: Django forms: cannot access local variable 'form' where it is not associated with a value Condition: I have a model, created an empty table in the database, and I'm trying to create an HTML form that will fill in the fields of the corresponding columns of the table. And here's what my app looks like: models.py from django.db import models class Cities(models.Model): city = models.CharField(max_length=100) def __str__(self): return self.state class Routes(models.Model): route_name = models.CharField(max_length=50, default='Route') lvl = models.IntegerField(default=0) about = models.TextField(max_length=1500) total_distance = models.IntegerField(default=0) city = models.ForeignKey(Cities, on_delete=models.CASCADE) forms.py from django.forms import ModelForm from .models import Routes class RouteForm(ModelForm): class Meta: model = Routes fields = '__all__' views.py from django.shortcuts import get_object_or_404, render from django.http import HttpResponse from routes_form.forms import RouteForm def getAbout(request): if request.method == 'POST': form = RouteForm(request.POST) if form.is_valid(): form.save() return render(request, 'routes_form/form_page.html', {'form': form}) form.html <form method="post"> {% csrf_token %} <legend> <h2>About</h2> </legend> {{ form }} <input type="text" placeholder="Write more about the route: about waypoints, points of interest and warnings."> <input type="submit" value="Send route"> </form> I have already tried to do everything as indicated in the Django Forms documentation. But still something is wrong. Even at the moment of starting the server, it writes an error: cannot access local variable 'form' where it is not associated with a value A: It is because you haven't defined a form for the GET method, so: def getAbout(request): if request.method == 'POST': form = RouteForm(request.POST) if form.is_valid(): form.save() return redirect('some_view_name_to_redirect') else: form=RouteForm() return render(request, 'routes_form/form_page.html', {'form': form}) Note: Models in Django are named in singular form, as Django itself adds an s suffix, so it is better to name the models City and Route. A: Here you create the form = RouteForm(request.POST) object only for POST requests; you also need to create one for GET requests, so that when the getAbout(request) function is called with a GET request it can render the form, like this ... def getAbout(request): form=RouteForm() # <---- called at GET request if request.method == 'POST': form = RouteForm(request.POST) # <---- called at POST request if form.is_valid(): form.save() return redirect("/") return render(request, 'routes_form/form_page.html', {'form': form})
Django forms: cannot access local variable 'form' where it is not associated with a value
Condition: I have a model, created an empty table in the database, and I'm trying to create an html form that will fill in the fields of the corresponding columns of the table. And here's what my app looks like: models.py from django.db import models class Cities(models.Model): city = models.CharField(max_length=100) def __str__(self): return self.state class Routes(models.Model): route_name = models.CharField(max_length=50, default='Route') lvl = models.IntegerField(default=0) about = models.TextField(max_length=1500) total_distance = models.IntegerField(default=0) city = models.ForeignKey(Cities, on_delete=models.CASCADE) forms.py from django.forms import ModelForm from .models import Routes class RouteForm(ModelForm): class Meta: model = Routes fields = '__all__' views.py from django.shortcuts import get_object_or_404, render from django.http import HttpResponse from routes_form.forms import RouteForm def getAbout(request): if request.method == 'POST': form = RouteForm(request.POST) if form.is_valid(): form.save() return render(request, 'routes_form/form_page.html', {'form': form}) form.html <form method="post"> {% csrf_token %} <legend> <h2>About</h2> </legend> {{ form }} <input type="text" placeholder="Write more about the route: about waypoints, points of interest and warnings."> <input type="submit" value="Send route"> </form> I have already tried to do everything as indicated in the Django Forms documentation. But still something is wrong. Even at the moment of starting the server, it writes an error: cannot access local variable 'form' where it is not associated with a value
[ "It is because you haven't defined form for GET method so:\ndef getAbout(request):\n if request.method == 'POST':\n form = RouteForm(request.POST)\n if form.is_valid():\n form.save()\n return redirect('some_view_name_to_redirect')\n else:\n form=RouteForm()\n return render(request, 'routes_form/form_page.html', {'form': form})\n\n\nNote: Models in Django are written in singular form, as Django itself add s as the suffix, so it is better to name the models as City and Route.\n\n", "Here you passed form = RouteForm(request.POST) object for POST request you need to pass for GET request so, when def getAbout(request) function called with GET request then renders it like this ...\ndef getAbout(request):\n form=RouteForm() # <---- called at GET request\n if request.method == 'POST':\n form = RouteForm(request.POST) # <---- called at POST request\n if form.is_valid():\n form.save()\n return redirect(\"/\")\n return render(request, 'routes_form/form_page.html', {'form': form})\n\n" ]
[ 1, 1 ]
[]
[]
[ "django", "django_forms", "django_models", "django_templates", "python" ]
stackoverflow_0074586586_django_django_forms_django_models_django_templates_python.txt
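One small, hedged addition to both answers above: redirect is not imported in the question's views.py, so the snippets assume this import as well; without it the views would raise a NameError:

from django.shortcuts import render, redirect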
Q: Python Pandas Import Excel sheet cell as object without quotes at the end My example excel sheet looks like this: Excel sheet data: customer1_data.xlsx = parameter customer1 analysis 1 analysis_name 1month_services analysis_duration [2022-08-23, 2022-11-02] analysis_numcheck 1 analysis_dupcolumns 1 Import excel sheet data as dataframe It looks normal but when I query individual rows or cells, some cell values have quotes at the end. I don't want any quotes at the end. c1df = pd.read_excel('customer1_data.xlsx') c1df.set_index('parameter',inplace=True) print(c1df) parameter customer1 analysis 1 analysis_name 1month_services analysis_duration [2022-08-23, 2022-11-02] analysis_numcheck 1 analysis_dupcolumns 1 Present output When I print individual cell values print(c1df.loc['analysis']) 1 print(c1df.loc['analysis_duration']) '[2022-08-23, 2022-11-02]' print(c1df.loc['analysis_name']) '1month_services' Expected output: print(c1df.loc['analysis']) 1 print(c1df.loc['analysis_duration']) # I don't want any quotes at the end for the list here [2022-08-23, 2022-11-02] print(c1df.loc['analysis_name']) # ' ' quote is expected for the string, no issues here '1month_services' A: You can use pandas.Series.str.split to convert delimited strings to lists: c1df["customer1"]= ( c1df["customer1"].str.strip("[]") .str.split(",") .where(c1df["customer1"].str.contains("[\[\]]", regex=True, na=False)) .fillna(c1df["customer1"]) ) # Output: print(c1df) parameter customer1 0 analysis 1 1 analysis_name 1month_services 2 analysis_duration [2022-08-23, 2022-11-02] 3 analysis_numcheck 1 4 analysis_dupcolumns 1 print(c1df.iloc[2,1]) ['2022-08-23', ' 2022-11-02']
Python Pandas Import Excel sheet cell as object without quotes at the end
My example excel sheet looks like this: Excel sheet data: customer1_data.xlsx = parameter customer1 analysis 1 analysis_name 1month_services analysis_duration [2022-08-23, 2022-11-02] analysis_numcheck 1 analysis_dupcolumns 1 Import excel sheet data as dataframe It looks normal but when I query individual rows or cells, some cell values have quotes at the end. I don't want any quotes at the end. c1df = pd.read_excel('customer1_data.xlsx') c1df.set_index('parameter',inplace=True) print(c1df) parameter customer1 analysis 1 analysis_name 1month_services analysis_duration [2022-08-23, 2022-11-02] analysis_numcheck 1 analysis_dupcolumns 1 Present output When I print individual cell values print(c1df.loc['analysis']) 1 print(c1df.loc['analysis_duration']) '[2022-08-23, 2022-11-02]' print(c1df.loc['analysis_name']) '1month_services' Expected output: print(c1df.loc['analysis']) 1 print(c1df.loc['analysis_duration']) # I don't want any quotes at the end for the list here [2022-08-23, 2022-11-02] print(c1df.loc['analysis_name']) # ' ' quote is expected for the string, no issues here '1month_services'
[ "You can use pandas.Series.split to convert string delimited to lists :\nc1df[\"customer1\"]= (\n c1df[\"customer1\"].str.strip(\"[]\")\n .str.split(\",\")\n .where(c1df[\"customer1\"].str.contains(\"[\\[\\]]\", regex=True, na=False))\n .fillna(c1df[\"customer1\"])\n )\n​\n\n# Output :\nprint(c1df)\n\n parameter customer1\n0 analysis 1\n1 analysis_name 1month_services\n2 analysis_duration [2022-08-23, 2022-11-02]\n3 analysis_numcheck 1\n4 analysis_dupcolumns 1\n\n\nprint(c1df.iloc[2,1])\n['2022-08-23', ' 2022-11-02']\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "excel", "pandas", "python" ]
stackoverflow_0074587564_dataframe_excel_pandas_python.txt
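As a hedged alternative to post-processing, the parsing can happen at load time via read_excel's converters argument; parse_list_cell is a hypothetical helper written for this example:

import pandas as pd

def parse_list_cell(value):
    # Turn a string like '[2022-08-23, 2022-11-02]' into a Python list;
    # leave every other value unchanged.
    if isinstance(value, str) and value.startswith('[') and value.endswith(']'):
        return [item.strip() for item in value[1:-1].split(',')]
    return value

c1df = pd.read_excel('customer1_data.xlsx', converters={'customer1': parse_list_cell})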
Q: How to group multiple columns while replacing zero with values (pandas)? Input:
Name Cat Dog Frog Pig
Ana 0 1 0 0
Ana 1 0 1 0
Desired output:
Name Cat Dog Frog Pig
Ana 1 1 1 0
I'd like to group these two rows by Name and replace the zeros with ones wherever either row is filled. The output should be like the second table above. A: Use groupby with max() df = df.groupby('Name').max().reset_index() output: > df Name Cat Dog Frog Pig 0 Ana 1 1 1 0 A: What you might want to do here is an aggregation. One way to obtain your desired output is to use the pandas dataframe methods groupby() and sum(). Here is how I would do it. import pandas as pd data = [ ('Ana', 0, 1, 0, 0) , ('Ana', 1, 0, 1, 0) ] df = pd.DataFrame(data, columns=['Name', 'Cat', 'Dog', 'Frog', 'Pig']) print(df.groupby(['Name']).sum()) Then the output would be: Cat Dog Frog Pig Name Ana 1 1 1 0 If you want to know more about these methods, you can follow the links below: groupby(): https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html sum(): https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sum.html
How to group multiple columns while replacing zero with values (pandas)?
Input:
Name Cat Dog Frog Pig
Ana 0 1 0 0
Ana 1 0 1 0
Desired output:
Name Cat Dog Frog Pig
Ana 1 1 1 0
I'd like to group these two rows by Name and replace the zeros with ones wherever either row is filled. The output should be like the second table above.
[ "Use groupby with max()\ndf = df.groupby('Name').max().reset_index()\n\noutput:\n> df\n\n Name Cat Dog Frog Pig\n0 Ana 1 1 1 0\n\n", "what you might want to do here is an aggregation. One way to obtain your desired output is to use the pandas dataframe methods grouby() and sum()\nHere is how I would do it.\nimport pandas as pd\n\ndata = [\n ('Ana', 0, 1, 0, 0)\n, ('Ana', 1, 0, 1, 0) \n]\n\ndf = pd.DataFrame(data, columns=['Name', 'Cat', 'Dog', 'Frog', 'Pig'])\n\nprint(df.groupby(['Name']).sum())\n\nThen the output would be:\n Cat Dog Frog Pig\nName\nAna 1 1 1 0\n\nIf you want to know more about these methods, you can follow the links below:\ngroupby(): https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html\nsum(): https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sum.html\n" ]
[ 2, 1 ]
[]
[]
[ "group_by", "pandas", "python", "replace", "sql" ]
stackoverflow_0074587565_group_by_pandas_python_replace_sql.txt
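A hedged third option that matches the "any non-zero value wins" reading of the question; this assumes the cells only hold 0 or 1:

import pandas as pd

df = pd.DataFrame(
    [('Ana', 0, 1, 0, 0), ('Ana', 1, 0, 1, 0)],
    columns=['Name', 'Cat', 'Dog', 'Frog', 'Pig'],
)

# Logical OR per group: 1 if any row in the group is non-zero, else 0.
result = df.groupby('Name').any().astype(int).reset_index()
print(result)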
Q: Django Rest: how to show all related foreign key objects? I have a blog website and my visitors can also comment on my blog posts. Each blog post has multiple comments and I want to show those comments under each single blog post. Assume Blog1 has 10 comments, so all 10 comments will be shown under Blog1. Here is my code: models.py class Blog(models.Model): blog_title = models.CharField(max_length=200, unique=True) class Comment(models.Model): name = models.CharField(max_length=100) email = models.EmailField(max_length=100) comment = models.TextField() blog = models.ForeignKey(Blog, on_delete=models.CASCADE) Serializer.py class CommentSerializer(serializers.ModelSerializer): class Meta: model = Comment fields = '__all__' class BlogSerializer(serializers.ModelSerializer): class Meta: model = Blog exclude = ("author", "blog_is_published") lookup_field = 'blog_slug' extra_kwargs = { 'url': {'lookup_field': 'blog_slug'} } views.py: class BlogViewSet(viewsets.ModelViewSet): queryset = Blog.objects.all().order_by('-id') serializer_class = BlogSerializer pagination_class = BlogPagination lookup_field = 'blog_slug' A: You can access the list of comments from a blog object using the comment_set attribute, so add a comment_set field to your serializer: class BlogSerializer(serializers.ModelSerializer): comment_set = CommentSerializer(many=True) class Meta: model = Blog exclude = ("author", "blog_is_published") lookup_field = 'blog_slug' extra_kwargs = { 'url': {'lookup_field': 'blog_slug'} }
Django Rest: how to show all related foreign key objects?
I have a blog website and my visitors can also comment on my blog posts. Each blog post has multiple comments and I want to show those comments under each single blog post. Assume Blog1 has 10 comments, so all 10 comments will be shown under Blog1. Here is my code: models.py class Blog(models.Model): blog_title = models.CharField(max_length=200, unique=True) class Comment(models.Model): name = models.CharField(max_length=100) email = models.EmailField(max_length=100) comment = models.TextField() blog = models.ForeignKey(Blog, on_delete=models.CASCADE) Serializer.py class CommentSerializer(serializers.ModelSerializer): class Meta: model = Comment fields = '__all__' class BlogSerializer(serializers.ModelSerializer): class Meta: model = Blog exclude = ("author", "blog_is_published") lookup_field = 'blog_slug' extra_kwargs = { 'url': {'lookup_field': 'blog_slug'} } views.py: class BlogViewSet(viewsets.ModelViewSet): queryset = Blog.objects.all().order_by('-id') serializer_class = BlogSerializer pagination_class = BlogPagination lookup_field = 'blog_slug'
[ "You can access comments list from blog object using comment_set attribute, so add comment_set field to your serializer:\nclass BlogSerializer(serializers.ModelSerializer): \n comment_set = CommentSerializer(many=True)\n\n class Meta:\n model = Blog\n exclude = (\"author\", \"blog_is_published\")\n lookup_field = 'blog_slug'\n extra_kwargs = {\n 'url': {'lookup_field': 'blog_slug'}\n }\n\n" ]
[ 2 ]
[]
[]
[ "django", "django_rest_framework", "python", "python_3.x" ]
stackoverflow_0074587635_django_django_rest_framework_python_python_3.x.txt
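A hedged refinement of the serializer above: marking the nested field read-only avoids write errors on the ModelViewSet's create/update actions, and a related_name on the ForeignKey gives the field a friendlier name; the related_name value 'comments' is an illustrative choice, not from the question:

# models.py (assumed change):
# blog = models.ForeignKey(Blog, on_delete=models.CASCADE, related_name='comments')

class BlogSerializer(serializers.ModelSerializer):
    comments = CommentSerializer(many=True, read_only=True)

    class Meta:
        model = Blog
        exclude = ("author", "blog_is_published")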
Q: Converting pdf files to txt files but only getting last page of pdf file I'm trying to convert a list of PDF files in a directory to txt. At the moment, however, I'm only getting the last page of the pdf files in the newly created txt files. The code: import os, PyPDF2 import re for file in os.listdir("Documents/Python/"): if file.endswith(".pdf"): fpath=os.path.join("Documents/Python/", file) pdffileobj=open(fpath,'rb') pdfreader=PyPDF2.PdfFileReader(pdffileobj) x=pdfreader.numPages pageobj=pdfreader.getPage(x-1) text=pageobj.extractText() newfpath=re.sub(".pdf","txt",fpath) file1=open(newfpath,"a") file1.writelines(text) Txt files with all the pages A: You are only getting the text of the last page because you are only ever reading the text of the last page of each pdf pageobj=pdfreader.getPage(x-1) Although it works, it looks like pdfreader.numPages is deprecated now. The way to do it is len(reader.pages) if you want the number of pages. You could also just loop through each page object without getting the number of pages for page in pdfreader.pages: The main thing you are missing is a second loop to go through each page of the pdf and extract the text. for file in os.listdir("/your/path"): if file.endswith(".pdf"): fpath=os.path.join("/your/path", file) pdffileobj=open(fpath,'rb') pdfreader=PyPDF2.PdfFileReader(pdffileobj) newfpath=re.sub(r"\.pdf$",".txt",fpath) with open(newfpath,"w") as out_file: #loop through each page and write to file for page in pdfreader.pages: text=page.extractText() out_file.write(text)
Converting pdf files to txt files but only getting last page of pdf file
I'm trying to convert a list of PDF files in a directory to txt. At the moment, however, I'm only getting the last page of the pdf files in the newly created txt files. The code: import os, PyPDF2 import re for file in os.listdir("Documents/Python/"): if file.endswith(".pdf"): fpath=os.path.join("Documents/Python/", file) pdffileobj=open(fpath,'rb') pdfreader=PyPDF2.PdfFileReader(pdffileobj) x=pdfreader.numPages pageobj=pdfreader.getPage(x-1) text=pageobj.extractText() newfpath=re.sub(".pdf","txt",fpath) file1=open(newfpath,"a") file1.writelines(text) Txt files with all the pages
[ "You are only getting the text of the last page because you are only ever reading the text of the last page of each pdf pageobj=pdfreader.getPage(x-1)\nAlthough it works, it looks like pdfreader.numPages is deprecated now. The way to do it is len(reader.pages) if you wanted the number of pages. You could also just loop through each page object without getting the number of pages for page in pdfreader.pages:\nThe main thing you are missing is a second loop to go through each page of the pdf and extract the text.\nfor file in os.listdir(\"/your/path\"):\n if file.endswith(\".pdf\"):\n fpath=os.path.join(\"/your/path\", file)\n pdffileobj=open(fpath,'rb')\n pdfreader=PyPDF2.PdfFileReader(pdffileobj)\n newfpath=re.sub(\".pdf\",\".txt\",fpath)\n with open(newfpath,\"w\") as file:\n #loop through each page and write to file\n for page in pdfreader.pages:\n text=page.extractText()\n file.write(text)\n\n" ]
[ 0 ]
[]
[]
[ "for_loop", "pdf", "pypdf2", "python", "txt" ]
stackoverflow_0074586987_for_loop_pdf_pypdf2_python_txt.txt
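For completeness, a hedged sketch using the newer pypdf API (PdfReader / extract_text), which replaces the deprecated PyPDF2 names used above; the directory path is a placeholder:

import os
from pypdf import PdfReader

src_dir = '/your/path'  # placeholder directory
for name in os.listdir(src_dir):
    if name.endswith('.pdf'):
        reader = PdfReader(os.path.join(src_dir, name))
        txt_path = os.path.join(src_dir, name[:-4] + '.txt')
        with open(txt_path, 'w', encoding='utf-8') as out_file:
            # write the text of every page, not just the last one
            for page in reader.pages:
                out_file.write(page.extract_text() or '')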
Q: How can I get file directories from several opened folders in Python I'm trying to gather files from several File Explorer windows with Python. I want to move many files to one folder from folders that are already open in File Explorer. How can I access an opened directory? Manually typing each directory path is not an option, so what I want is to get the directory paths from the open File Explorer windows. A: Here is the code I use to get a list of file names that have been copied to the clipboard from Windows Explorer: import ctypes import struct from ctypes.wintypes import BOOL, HWND, HANDLE, HGLOBAL, UINT, LPVOID from ctypes import c_size_t as SIZE_T OpenClipboard = ctypes.windll.user32.OpenClipboard OpenClipboard.argtypes = HWND, OpenClipboard.restype = BOOL EmptyClipboard = ctypes.windll.user32.EmptyClipboard EmptyClipboard.restype = BOOL GetClipboardData = ctypes.windll.user32.GetClipboardData GetClipboardData.argtypes = UINT, GetClipboardData.restype = HANDLE CloseClipboard = ctypes.windll.user32.CloseClipboard CloseClipboard.restype = BOOL IsClipboardFormatAvailable = ctypes.windll.user32.IsClipboardFormatAvailable IsClipboardFormatAvailable.argtypes = UINT, IsClipboardFormatAvailable.restype = BOOL CF_HDROP = 15 GlobalAlloc = ctypes.windll.kernel32.GlobalAlloc GlobalAlloc.argtypes = UINT, SIZE_T GlobalAlloc.restype = HGLOBAL GlobalLock = ctypes.windll.kernel32.GlobalLock GlobalLock.argtypes = HGLOBAL, GlobalLock.restype = LPVOID GlobalUnlock = ctypes.windll.kernel32.GlobalUnlock GlobalUnlock.argtypes = HGLOBAL, GlobalSize = ctypes.windll.kernel32.GlobalSize GlobalSize.argtypes = HGLOBAL, GlobalSize.restype = SIZE_T GMEM_MOVEABLE = 0x0002 GMEM_ZEROINIT = 0x0040 GMEM_SHARE = 0x2000 GHND = GMEM_MOVEABLE | GMEM_ZEROINIT def read_raw(fmt): handle = GetClipboardData(fmt) pcontents = GlobalLock(handle) size = GlobalSize(handle) raw_string = None if pcontents and size: raw_data = ctypes.create_string_buffer(size) ctypes.memmove(raw_data, pcontents, size) raw_string = raw_data.raw GlobalUnlock(handle) return raw_string def get(): OpenClipboard(None) if IsClipboardFormatAvailable(CF_HDROP): raw_string = read_raw(CF_HDROP) CloseClipboard() pFiles, pt_x, pt_y, fNC, fWide = struct.unpack('IIIII', raw_string[:20]) cooked = raw_string[pFiles:].decode('utf-16' if fWide else 'mbcs') return [name for name in cooked.split(u'\0') if name]
How can I get file directories from several opened folders in Python
I'm trying to gather files from several File Explorer windows with Python. I want to move many files to one folder from folders that are already open in File Explorer. How can I access an opened directory? Manually typing each directory path is not an option, so what I want is to get the directory paths from the open File Explorer windows.
[ "Here is the code I use to get a list of file names that have been copied to the clipboard from Windows Explorer:\nimport ctypes\nimport struct\n\nfrom ctypes.wintypes import BOOL, HWND, HANDLE, HGLOBAL, UINT, LPVOID\nfrom ctypes import c_size_t as SIZE_T\n\nOpenClipboard = ctypes.windll.user32.OpenClipboard\nOpenClipboard.argtypes = HWND,\nOpenClipboard.restype = BOOL\nEmptyClipboard = ctypes.windll.user32.EmptyClipboard\nEmptyClipboard.restype = BOOL\nGetClipboardData = ctypes.windll.user32.GetClipboardData\nGetClipboardData.argtypes = UINT,\nGetClipboardData.restype = HANDLE\nCloseClipboard = ctypes.windll.user32.CloseClipboard\nCloseClipboard.restype = BOOL\nIsClipboardFormatAvailable = ctypes.windll.user32.IsClipboardFormatAvailable\nIsClipboardFormatAvailable.argtypes = UINT,\nIsClipboardFormatAvailable.restype = BOOL\nCF_HDROP = 15\n\nGlobalAlloc = ctypes.windll.kernel32.GlobalAlloc\nGlobalAlloc.argtypes = UINT, SIZE_T\nGlobalAlloc.restype = HGLOBAL\nGlobalLock = ctypes.windll.kernel32.GlobalLock\nGlobalLock.argtypes = HGLOBAL,\nGlobalLock.restype = LPVOID\nGlobalUnlock = ctypes.windll.kernel32.GlobalUnlock\nGlobalUnlock.argtypes = HGLOBAL,\nGlobalSize = ctypes.windll.kernel32.GlobalSize\nGlobalSize.argtypes = HGLOBAL,\nGlobalSize.restype = SIZE_T\nGMEM_MOVEABLE = 0x0002\nGMEM_ZEROINIT = 0x0040\nGMEM_SHARE = 0x2000\nGHND = GMEM_MOVEABLE | GMEM_ZEROINIT\n\ndef read_raw(fmt):\n handle = GetClipboardData(fmt)\n pcontents = GlobalLock(handle)\n size = GlobalSize(handle)\n raw_string = None\n if pcontents and size:\n raw_data = ctypes.create_string_buffer(size)\n ctypes.memmove(raw_data, pcontents, size)\n raw_string = raw_data.raw\n GlobalUnlock(handle)\n return raw_string\n\ndef get():\n OpenClipboard(None)\n if IsClipboardFormatAvailable(CF_HDROP):\n raw_string = read_raw(CF_HDROP)\n CloseClipboard()\n pFiles, pt_x, pt_y, fNC, fWide = struct.unpack('IIIII', raw_string[:20])\n cooked = raw_string[pFiles:].decode('utf-16' if fWide else 'mbcs')\n return [name for name in cooked.split(u'\\0') if name]\n\n" ]
[ 1 ]
[]
[]
[ "file_move", "python", "visual_studio_code" ]
stackoverflow_0074587650_file_move_python_visual_studio_code.txt
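A brief, hedged usage note for the snippet above: after selecting files in Windows Explorer and copying them (Ctrl+C), calling get() should return their full paths; the printed paths below are illustrative:

paths = get()
print(paths)  # e.g. ['C:\\Users\\me\\a.txt', 'C:\\Users\\me\\b.txt']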
Q: How to solve the coding error with python? Topic: Check adjacent odd numbers Problem description: Enter 5 numbers Input description: Whether the output has adjacent odd numbers. Output description: If there are adjacent odd numbers, output the first group of adjacent odd numbers; otherwise, output NO. Sample Input: Sample Output:
5 6 7 8 9 NO⏎
8 9 11 13 15 9,11⏎
Writing condition: Input: Use the list(), map(), int() functions to convert the input string to a sequence of integers Process: Use a loop to check for adjacent odd numbers Output: apply f-string num1, num2, num3, num4, num5 = map(int,input().split()) list1= [num1, num2, num3, num4, num5] for i in range(list1[0], list1[4]): if i%2 != 0: for j in range(list1[1], list1[4]): if j%2 != 0 : break print(f'{i, j}') else : print('NO') I used a for-loop to write the code but the result is wrong. I expected that when I input 5 6 7 8 9 the result would be NO. However, my wrong result is (5, 7) NO (7, 7) NO A: When you use range you are creating an entirely new set of numbers. You need to use the list directly. An easy way to solve this problem is by using zip on the list and an offset of the list. This way you can compare the current number with the next number without creating more loops. #you can apply your split results directly to the list #no need to unpack and repack the data data = list(map(int,input().split())) for i,j in zip(data, data[1:]): #you don't need explicit equality (ie. i%2!=0) #both of these either "are" or "aren't" if i%2 and j%2: #the way you wrote your fstring #you were actually printing a tuple print(f'{i}, {j}') break else: #if `break` was never reached print('NO')
How to solve the coding error with python?
Topic: Check adjacent odd numbers Problem description: Enter 5 numbers Input description: Whether the output has adjacent odd numbers. Output description: If there are adjacent odd numbers, output the first group of adjacent odd numbers; otherwise, output NO. Sample Input: Sample Output:
5 6 7 8 9 NO⏎
8 9 11 13 15 9,11⏎
Writing condition: Input: Use the list(), map(), int() functions to convert the input string to a sequence of integers Process: Use a loop to check for adjacent odd numbers Output: apply f-string num1, num2, num3, num4, num5 = map(int,input().split()) list1= [num1, num2, num3, num4, num5] for i in range(list1[0], list1[4]): if i%2 != 0: for j in range(list1[1], list1[4]): if j%2 != 0 : break print(f'{i, j}') else : print('NO') I used a for-loop to write the code but the result is wrong. I expected that when I input 5 6 7 8 9 the result would be NO. However, my wrong result is (5, 7) NO (7, 7) NO
[ "When you use range you are creating an entirely new set of numbers. You need to use the list directly. An easy way to solve this problem is by using zip on the list and an offset of the list\nThis way you can compare the current number with the next number without creating more loops.\n#you can apply your split results directly to the list\n#no need to unpack and repack the data\ndata = list(map(int,input().split()))\n\nfor i,j in zip(data, data[1:]):\n\n #you don't need explicit equality (ie. i%2!=0)\n #both of these either \"are\" or \"aren't\"\n if i%2 and j%2:\n #the way you wrote your fstring \n #you were actually printing a tuple\n print(f'{i}, {j}')\n break\nelse: \n #if `break` was never reached\n print('NO')\n\n" ]
[ 2 ]
[]
[]
[ "python" ]
stackoverflow_0074587683_python.txt
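A hedged variant of the same idea using itertools.pairwise (available from Python 3.10); the behavior matches the answer above, and the comma-only print follows the sample output format:

from itertools import pairwise

data = list(map(int, input().split()))
for i, j in pairwise(data):
    if i % 2 and j % 2:
        print(f'{i},{j}')
        break
else:
    print('NO')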
Q: Key error on pandas merge, but both dataframes contain the key I'm trying to work with US Census data using the census package. I'm doing API requests through the census package like this: req = c.acs.state_county_tract(tuple(allowed_vars), states.CA.fips, '013', '313102') Then converting to pandas data frames and trying to merge the frames: row_df = pd.DataFrame.from_dict(req) These are the column names for two of the data frames I'm trying to merge: ['B01001_001E', 'B01002_001E', 'B01003_001E', 'B02001_001E', 'B02008_001E', 'B02009_001E', 'B02010_001E', 'B02011_001E', 'B02012_001E', 'B02013_001E', 'B02014_001E', 'B02015_001E', 'B02016_001E', 'B02017_001E', 'B02018_001E', 'B02019_001E', 'B03001_001E', 'B03002_001E', 'B03003_001E', 'B04004_001E', 'state', 'county', 'tract'] ['B04005_001E', 'B04006_001E', 'B04007_001E', 'B05001_001E', 'B05002_001E', 'B05003_001E', 'B05004_001E', 'B05005_001E', 'B05006_001E', 'B05007_001E', 'B05008_001E', 'B05009_001E', 'B05010_001E', 'B05011_001E', 'B05012_001E', 'B05013_001E', 'B05014_001E', 'B05015_001E', 'B06001_001E', 'B06002_001E', 'state', 'county', 'tract'] You can see that both frames contain the 'state', 'county', and 'tract' keys. When I try to merge the frames: row_df = row_df.merge(second_df, on = ["state","county","tract"], how = 'left') I get the following error: KeyError: 'state' To give you more info, here is an example of one of the data frames and the dtype of its columns: B07203_001E B07204_001E B07401_001E B07402_001E B07403_001E B07407_001E B07408_001E B07409_001E B07410_001E B07411_001E ... B08007_001E B08008_001E B08009_001E B08011_001E B08012_001E B08013_001E B08014_001E state county tract 0 0.0 4280.0 None None None None None None None None ... 2077.0 2077.0 2077.0 1980.0 1980.0 99815.0 2077.0 06 013 313102 1 rows × 23 columns B07203_001E float64 B07204_001E float64 B07401_001E object B07402_001E object B07403_001E object B07407_001E object B07408_001E object B07409_001E object B07410_001E object B07411_001E object B07412_001E object B07413_001E object B08006_001E float64 B08007_001E float64 B08008_001E float64 B08009_001E float64 B08011_001E float64 B08012_001E float64 B08013_001E float64 B08014_001E float64 state object county object tract object Why is this happening? I'm really pulling out my hair here. Thanks for your help. 
A: I tried to reproduce the error: KeyError 'state' It happens only if I have the following situation: The columns of the dataframe that you are reporting are actually the values of the rows of the dataframe. The merge fails because the on key argument refers to a column that is not there. More information can be found in the documentation: merge(): https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge.html However, here is what worked for me: When I run the code this way, by assigning a list of values to a dataframe, the merge works as expected: import pandas as pd import numpy as np # Generate 23 random values for the df data = [np.random.rand(23)] # first df column names row_df_cols = ['B01001_001E', 'B01002_001E', 'B01003_001E', 'B02001_001E', 'B02008_001E', 'B02009_001E', 'B02010_001E', 'B02011_001E', 'B02012_001E', 'B02013_001E', 'B02014_001E', 'B02015_001E', 'B02016_001E', 'B02017_001E', 'B02018_001E', 'B02019_001E', 'B03001_001E', 'B03002_001E', 'B03003_001E', 'B04004_001E', 'state', 'county', 'tract'] # create row_df row_df = pd.DataFrame(data, columns=row_df_cols) # second df column names second_df_cols = ['B04005_001E', 'B04006_001E', 'B04007_001E', 'B05001_001E', 'B05002_001E', 'B05003_001E', 'B05004_001E', 'B05005_001E', 'B05006_001E', 'B05007_001E', 'B05008_001E', 'B05009_001E', 'B05010_001E', 'B05011_001E', 'B05012_001E', 'B05013_001E', 'B05014_001E', 'B05015_001E', 'B06001_001E', 'B06002_001E', 'state', 'county', 'tract'] # create second_df second_df = pd.DataFrame(data, columns=second_df_cols) # merging row_df = row_df.merge(second_df, on = ["state","county","tract"], how = 'left') print(row_df) Here is the output: B01001_001E B01002_001E B01003_001E B02001_001E B02008_001E B02009_001E ... B05012_001E B05013_001E B05014_001E B05015_001E B06001_001E B06002_001E 0 0.483652 0.867077 0.868199 0.026156 0.486646 0.674776 ... 0.194051 0.979108 0.309233 0.44728 0.322487 0.049854
Key error on pandas merge, but both dataframes contain the key
I'm trying to work with US Census data using the census package. I'm doing API requests through the census package like this: req = c.acs.state_county_tract(tuple(allowed_vars), states.CA.fips, '013', '313102') Then converting to pandas data frames and trying to merge the frames: row_df = pd.DataFrame.from_dict(req) These are the column names for two of the data frames I'm trying to merge: ['B01001_001E', 'B01002_001E', 'B01003_001E', 'B02001_001E', 'B02008_001E', 'B02009_001E', 'B02010_001E', 'B02011_001E', 'B02012_001E', 'B02013_001E', 'B02014_001E', 'B02015_001E', 'B02016_001E', 'B02017_001E', 'B02018_001E', 'B02019_001E', 'B03001_001E', 'B03002_001E', 'B03003_001E', 'B04004_001E', 'state', 'county', 'tract'] ['B04005_001E', 'B04006_001E', 'B04007_001E', 'B05001_001E', 'B05002_001E', 'B05003_001E', 'B05004_001E', 'B05005_001E', 'B05006_001E', 'B05007_001E', 'B05008_001E', 'B05009_001E', 'B05010_001E', 'B05011_001E', 'B05012_001E', 'B05013_001E', 'B05014_001E', 'B05015_001E', 'B06001_001E', 'B06002_001E', 'state', 'county', 'tract'] You can see that both frames contain the 'state', 'county', and 'tract' keys. When I try to merge the frames: row_df = row_df.merge(second_df, on = ["state","county","tract"], how = 'left') I get the following error: KeyError: 'state' To give you more info, here is an example of one of the data frames and the dtype of its columns: B07203_001E B07204_001E B07401_001E B07402_001E B07403_001E B07407_001E B07408_001E B07409_001E B07410_001E B07411_001E ... B08007_001E B08008_001E B08009_001E B08011_001E B08012_001E B08013_001E B08014_001E state county tract 0 0.0 4280.0 None None None None None None None None ... 2077.0 2077.0 2077.0 1980.0 1980.0 99815.0 2077.0 06 013 313102 1 rows × 23 columns B07203_001E float64 B07204_001E float64 B07401_001E object B07402_001E object B07403_001E object B07407_001E object B07408_001E object B07409_001E object B07410_001E object B07411_001E object B07412_001E object B07413_001E object B08006_001E float64 B08007_001E float64 B08008_001E float64 B08009_001E float64 B08011_001E float64 B08012_001E float64 B08013_001E float64 B08014_001E float64 state object county object tract object Why is this happening? I'm really pulling out my hair here. Thanks for your help.
[ "I tried to reproduce the error:\n\nKeyError 'state'\n\nIt happens only if I have the following situation:\n\nThe columns of the dataframe that you are reporting, are actually the values of the rows of the dataframe\nThe merge fails because the on key argument is referring to a column that is not there\n\nMore information can be found in the documentation:\nmerge(): https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge.html\nHowever, here is what worked for me:\nWhen I run the code this way, by assigning a list of values to a dataframe, the merge works as expected:\nimport pandas as pd\nimport numpy as np\n\n# Generate random 23 values for the df\ndata = [np.random.rand(23)]\n\n# first df column names\nrow_df_cols = ['B01001_001E', 'B01002_001E', 'B01003_001E', 'B02001_001E', 'B02008_001E', 'B02009_001E', 'B02010_001E', 'B02011_001E', 'B02012_001E', 'B02013_001E', 'B02014_001E', 'B02015_001E', 'B02016_001E', 'B02017_001E', 'B02018_001E', 'B02019_001E', 'B03001_001E', 'B03002_001E', 'B03003_001E', 'B04004_001E', 'state', 'county', 'tract']\n\n# create row_df\nrow_df = pd.DataFrame(data, columns=row_df_cols)\n\n# second df column names\nsecond_df_cols = ['B04005_001E', 'B04006_001E', 'B04007_001E', 'B05001_001E', 'B05002_001E', 'B05003_001E', 'B05004_001E', 'B05005_001E', 'B05006_001E', 'B05007_001E', 'B05008_001E', 'B05009_001E', 'B05010_001E', 'B05011_001E', 'B05012_001E', 'B05013_001E', 'B05014_001E', 'B05015_001E', 'B06001_001E', 'B06002_001E', 'state', 'county', 'tract'] \n\n# create second_df\nsecond_df = pd.DataFrame(data, columns=second_df_cols)\n\n# merging\nrow_df = row_df.merge(second_df, on = [\"state\",\"county\",\"tract\"], how = 'left')\n\nprint(row_df)\n\nHere is the output:\n B01001_001E B01002_001E B01003_001E B02001_001E B02008_001E B02009_001E ... B05012_001E B05013_001E B05014_001E B05015_001E B06001_001E B06002_001E\n0 0.483652 0.867077 0.868199 0.026156 0.486646 0.674776 ... 0.194051 0.979108 0.309233 0.44728 0.322487 0.049854\n\n" ]
[ 0 ]
[]
[]
[ "census", "jupyter_notebook", "pandas", "python" ]
stackoverflow_0074587675_census_jupyter_notebook_pandas_python.txt
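A small, hedged diagnostic that often resolves this KeyError in practice: confirm the key columns are really columns (not index levels) and carry no stray whitespace. The dataframe names match the question; the fixes shown are optional:

# Inspect what pandas actually sees as columns.
print(row_df.columns.tolist())
print(second_df.columns.tolist())

# Strip stray whitespace from column names.
row_df.columns = row_df.columns.str.strip()
second_df.columns = second_df.columns.str.strip()

# If a key ended up in the index (e.g. after set_index), move it back:
# row_df = row_df.reset_index()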
Q: Not sure if I'm answering the wrong question or wrongly answering the right question. Need suggestions please Thank you for taking the time to read my post. I'm in a bit of a limbo here and really need some intervention. I have been working on an individual project. It is a supervised learning regression problem. After cleaning, initial analysis, EDA, and feature selection, the chosen dataset now has a total of 8 attributes and 7 of them are numerical. There are around 1000 observations of top global companies ranked as per the highest revenues. The attributes are as follows: Names Revenues 2022 Rev_Percentage_Change Profits 2022 Pro_Percentage_Change Assets Market_Value Employees I made the first blunder when finalizing the research/problem question. Though I still think the question is pretty good, now I feel it cannot be answered with this dataset. The question is, "Predict profit 2023 for the global 1000 companies and rank them as per highest profit earned". The main idea behind this is that profit is a better measure than revenue, and therefore companies should be ranked according to the most profit made for the year. As anyone would understand, revenue is money earned and profit is the money saved. So, to answer this question I did my research on Google Scholar to find similar works but could not find any material on the subject matter. Out of all the materials I could find, I shortlisted 10 research papers where there was some sort of prediction involved. Still, I could not find anything on exactly what I was doing. Except that I realize now that most of the search results were projects on time series analysis. I did explore time series but it was like I was blindfolded or something and I did not realize that the question I'm trying to answer is basically a time series problem, and it can easily be solved given the right dataset. For example, if we were to predict the profit of these 1000 companies and we had a dataset which contained profits for the last 10 years, then we would've been in business! But the data we have is profit for a single year; even though I was able to generate last year's profit figures with the help of the "Profit_Percentage_Change" column, I still felt that it could not get the job done. At this point I just went with the flow and wanted to apply the regression models to get somewhere at least. Therefore I applied four regression models: Multiple Linear Regression, Random Forest Regressor, Decision Tree Regressor, and Support Vector Machine Regressor. These models were applied properly with a 70/30 split, generated the predicted values, and were evaluated with RMSE, MSE, and R Squared scores, and also with cross-validation (5-fold), with the Random Forest Regressor outperforming the other three. Even though I predicted the values and compared the actual and predicted figures, of course they were for the present year. Something I gained from this exercise was to find out which one of these models performed the best. But it still did not answer my question; in fact I was nowhere near it. I could've predicted the profit for next year with the help of time series and an appropriate dataset, but I was late for it. I still tried to find the appropriate dataset but had no luck, so the only choice I am left with now is to change my research question. I have thought it through but I am not able to come up with the right question.
The only possible question I can think of is "which supervised machine learning model best performs in predicting the profit of global 1000 companies". But why am I doing this? What problem does it solve? Is it a good, solid question, do you think? I'm doubtful. Guys, I know that Stack Overflow is not a personal coding service, nor should it be. All I'm asking is some sort of direction so that I can row the boat and get to the right destination. Any help from you would be greatly appreciated. Thank you for taking the time to read and respond to this. Included all the details in the first section. Thank you A: If I get what you have done in your experiments, you are passing to the regression models all the data except the profit, which is the output that you're trying to predict, but you get the current year's profit and not the future one, as expected due to the dataset structure. To do what you want (predict the next year's profit), you need a time series with the "history" of that data, so each company should have these data for each year from the start; even the data of only 2021 and 2022 could do the job, but I don't think you can trust the results with such a short series. If you can create such a dataset you can predict the full record, so revenues, profits, etc. of one year, passing one or two or more records of the past years to the model. For this type of application, I would do that with a neural network; a simple MLP with a few layers should give you good results, but you need more data... A base structure could be a model that takes all the data that you have in one record for one company for 3 years and returns one record for the next year, or you can return only one value, the profit, if you only need that. For the last question, I don't know if it's worth predicting the revenue of one year by the other values of the same year; it's not my field, but I guess that it's a much easier task. Here is an article where you can find some info on how to set up a simple neural network for time series predictions: time series prediction with keras Hope this helped, good luck with your project.
Not sure if I'm answering the wrong question or wrongly answering the right question. Need suggestions please
Thank you for taking the time to read my post. I'm in a bit of a limbo here and really need some intervention. I have been working on an individual project. It is a supervised learning regression problem. After cleaning, initial analysis, EDA, and feature selection, the chosen dataset now has a total of 8 attributes and 7 of them are numerical. There are around 1000 observations of top global companies ranked as per the highest revenues. The attributes are as follows: Names Revenues 2022 Rev_Percentage_Change Profits 2022 Pro_Percentage_Change Assets Market_Value Employees I made the first blunder when finalizing the research/problem question. Though I still think the question is pretty good, now I feel it cannot be answered with this dataset. The question is, "Predict profit 2023 for the global 1000 companies and rank them as per highest profit earned". The main idea behind this is that profit is a better measure than revenue, and therefore companies should be ranked according to the most profit made for the year. As anyone would understand, revenue is money earned and profit is the money saved. So, to answer this question I did my research on Google Scholar to find similar works but could not find any material on the subject matter. Out of all the materials I could find, I shortlisted 10 research papers where there was some sort of prediction involved. Still, I could not find anything on exactly what I was doing. Except that I realize now that most of the search results were projects on time series analysis. I did explore time series but it was like I was blindfolded or something and I did not realize that the question I'm trying to answer is basically a time series problem, and it can easily be solved given the right dataset. For example, if we were to predict the profit of these 1000 companies and we had a dataset which contained profits for the last 10 years, then we would've been in business! But the data we have is profit for a single year; even though I was able to generate last year's profit figures with the help of the "Profit_Percentage_Change" column, I still felt that it could not get the job done. At this point I just went with the flow and wanted to apply the regression models to get somewhere at least. Therefore I applied four regression models: Multiple Linear Regression, Random Forest Regressor, Decision Tree Regressor, and Support Vector Machine Regressor. These models were applied properly with a 70/30 split, generated the predicted values, and were evaluated with RMSE, MSE, and R Squared scores, and also with cross-validation (5-fold), with the Random Forest Regressor outperforming the other three. Even though I predicted the values and compared the actual and predicted figures, of course they were for the present year. Something I gained from this exercise was to find out which one of these models performed the best. But it still did not answer my question; in fact I was nowhere near it. I could've predicted the profit for next year with the help of time series and an appropriate dataset, but I was late for it. I still tried to find the appropriate dataset but had no luck, so the only choice I am left with now is to change my research question. I have thought it through but I am not able to come up with the right question. The only possible question I can think of is "which supervised machine learning model best performs in predicting the profit of global 1000 companies". But why am I doing this? What problem does it solve?
Is it a good, solid question, do you think? I'm doubtful. Guys, I know that Stack Overflow is not a personal coding service, nor should it be. All I'm asking is some sort of direction so that I can row the boat and get to the right destination. Any help from you would be greatly appreciated. Thank you for taking the time to read and respond to this. Included all the details in the first section. Thank you
[ "If I get what you have done in your experiments, you are passing to the regression models all the data except the profit which is the output that you're trying to predict, but you get the current year's profit and not the future one, as expected due to the dataset structure.\nFor doing what you want, predict the next year's profit, you need a time series with the \"history\" of that data, so each company should have these data for each year from the start, even the data of only 2021 and 2022 could do the job, anyway I don't think you can trust the results with such a short series.\nIf you can create such a dataset you can predict the full record so Revenues, profits, etc.. of one year, passing one or two or more records of the past years to the model.\nFor such type of application, I would do that with a neural network, a simple mlp with a few layers should give you good results but you need more data...\nA base structure could be a model that takes all the data that you have in one record for one company for 3 years and return one record for the next year, or you can return only one value, the profit if you only need that.\nFor the last question, I don't know if it's worth predicting the revenue of one year by the other values of the same year, it's not my field, but I guess that it's a much easier task.\nHere is an article where you can find some info on how to set up a simple neural network for time series predictions: time series prediction with keras\nHope this helped, good luck with your project.\n" ]
[ 1 ]
[]
[]
[ "machine_learning", "prediction", "python", "regression", "supervised_learning" ]
stackoverflow_0074587602_machine_learning_prediction_python_regression_supervised_learning.txt
Q: Jupyter kernel dies while running this neural networks code import numpy as np import cv2 import os import matplotlib.pyplot as plt from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.preprocessing import image from tensorflow.keras.optimizers import RMSprop img = image.load_img("image_location_here") train = ImageDataGenerator(rescale=1/255) validation = ImageDataGenerator(rescale=1/255) train_dataset = train.flow_from_directory('image_location_here', target_size = (50,50), batch_size = 3, class_mode = 'binary') validation_dataset = train.flow_from_directory('image_location_here', target_size = (50,50),batch_size = 3, class_mode = 'binary') train_dataset.class_indices train_dataset.classes model = tf.keras.models.Sequential tf.keras.layers.Conv2D(16,(3,3),activation = 'relu', input_shape = (50,50,3)) tf.keras.layers.MaxPool2D(2,2) tf.keras.layers.Conv2D(32,(3,3),activation = 'relu'),tf.keras.layers.MaxPool2D(2,2) tf.keras.layers.Conv2D(64,(3,3),activation = 'relu') tf.keras.layers.MaxPool2D(2,2) tf.keras.layers.Flatten() tf.keras.layers.Dense(128,activation = 'relu') tf.keras.layers.Dense(1, activation = 'sigmoid') The code above runs fine, but when I run the line given below, the kernel dies. model().compile(loss = 'binary_crossentropy', optimizer = 'adam', metrices = ['accuracy']) I have updated Anaconda to the latest version and restarted it a few times, but the kernel still dies on this specific line. Sorry for the poor editing of the lines. A: I recently had a similar issue. The issue is being caused by CUDA/cuDNN, probably because you are using a version that is incompatible with Tensorflow. There are two solutions for this: Uninstall CUDA and reinstall the Tensorflow library Uninstall CUDA and set up a compatible CUDA/cuDNN version (check https://tensorflow.org/install/source_windows for the version you need for your tensorflow version)
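For reference, a corrected version of the model definition, offered as a sketch separate from the CUDA/cuDNN mismatch addressed in the answer: the original assigns the Sequential class without calling it and never passes the layers into it, and compile() is given 'metrices' instead of 'metrics'.
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(50, 50, 3)),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])  # keyword is 'metrics', not 'metrices'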
Jupyter kernel dies while running this neural networks code
import numpy as np import cv2 import os import matplotlib.pyplot as plt from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.preprocessing import image from tensorflow.keras.optimizers import RMSprop img = image.load_img("image_location_here") train = ImageDataGenerator(rescale=1/255) validation = ImageDataGenerator(rescale=1/255) train_dataset = train.flow_from_directory('image_location_here', target_size = (50,50), batch_size = 3, class_mode = 'binary') validation_dataset = train.flow_from_directory('image_location_here', target_size = (50,50),batch_size = 3, class_mode = 'binary') train_dataset.class_indices train_dataset.classes model = tf.keras.models.Sequential tf.keras.layers.Conv2D(16,(3,3),activation = 'relu', input_shape = (50,50,3)) tf.keras.layers.MaxPool2D(2,2) tf.keras.layers.Conv2D(32,(3,3),activation = 'relu'),tf.keras.layers.MaxPool2D(2,2) tf.keras.layers.Conv2D(64,(3,3),activation = 'relu') tf.keras.layers.MaxPool2D(2,2) tf.keras.layers.Flatten() tf.keras.layers.Dense(128,activation = 'relu') tf.keras.layers.Dense(1, activation = 'sigmoid') The code above runs fine, but when I run the line given below, the kernel dies. model().compile(loss = 'binary_crossentropy', optimizer = 'adam', metrices = ['accuracy']) I have updated Anaconda to the latest version and restarted it a few times, but the kernel still dies on this specific line. Sorry for the poor editing of the lines.
[ "I recently had a similar issue. The issue is being caused by CUDA/cudNN, probably because you are using an incompatible version with Tensorflow. There are two solutions for the same:\n\nUninstall CUDA and reinstall Tensorflow library\nUninstall CUDA and setup the compatible CUDA/cudNN version ( Check https://tensorflow.org/install/source_windows for the version you need for your tensorflow version)\n\n" ]
[ 0 ]
[]
[]
[ "conv_neural_network", "jupyter_notebook", "python" ]
stackoverflow_0074587606_conv_neural_network_jupyter_notebook_python.txt
Q: Trying to input a blank for an input that requires 2 values separated by ", " So I am trying to have the while loop end when inputting a blank for the input, but the problem is that the input takes 2 values separated by ", ". It is necessary for me to keep the input like that rather than separating them, so how do I fix this? print(" Input the productIDs and quantities (input blank to complete transaction)") productID, quantity = input().split(", ") quantity = int(quantity) while quantity >= 1: self.addProductToTransaction(productID, quantity) print("why u here bro u ain't buyin nothin") When input is blank: ValueError: not enough values to unpack (expected 2, got 1) A: The while loop should be the outer construct if you want to receive the input iteratively until a bad format is fed (handled by try-except). while True: try: productID, quantity = input("Input the productIDs and quantities (input blank to complete transaction)").split(", ") quantity = int(quantity) except ValueError: print("why u here bro u ain't buyin nothin") break if quantity >= 1: self.addProductToTransaction(productID, quantity) A: You don't even need exception handling. Simply take the input as a string, then split it by ','. print(" Input the productIDs and quantities (input blank to complete transaction)") user_in = input() if user_in != '': productID, quantity = user_in.split(',') print(quantity) quantity = int(quantity) while quantity >= 1: self.addProductToTransaction(productID, quantity) else: print("why u here bro u ain't buyin nothin") Sample output: Input the productIDs and quantities (input blank to complete transaction) why u here bro u ain't buyin nothin A: There are three ways to do this: with a plain if check, with len(), or with try...except. if while True: var = input("Input the productIDs and quantities (input blank to complete transaction)") if var == '': print("why u here bro u ain't buyin nothin") break productID, quantity = var.split(", ") quantity = int(quantity) if quantity >= 1: self.addProductToTransaction(productID, quantity) len() while True: var = input("Input the productIDs and quantities (input blank to complete transaction)") if len(var) == 0: print("why u here bro u ain't buyin nothin") break productID, quantity = var.split(", ") quantity = int(quantity) if quantity >= 1: self.addProductToTransaction(productID, quantity) try…except while True: try: productID, quantity = input("Input the productIDs and quantities (input blank to complete transaction)").split(", ") quantity = int(quantity) if quantity >= 1: self.addProductToTransaction(productID, quantity) except ValueError: print("why u here bro u ain't buyin nothin") break Three notes: If you are printing a message for the input, pass the string to the input function. And if you want the user to type on the next line, add a newline escape char to the string: "Foo..string\n" I think you are trying to build a loop that repeatedly asks the user for input and passes it to another class method, addProductToTransaction(), unless nothing is entered, in which case it exits the loop. For that you can put the loop before the input. The solutions above are ordered by preference and performance (high to low)
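For contrast with the loop restructurings in the answers, the whole thing can also be written as a sentinel check on the raw line, using an assignment expression (Python 3.8+); this is a sketch assuming it runs inside the same class as the original snippet:
while (line := input("Input the productIDs and quantities (input blank to complete transaction)\n")) != "":
    productID, quantity = line.split(", ")
    self.addProductToTransaction(productID, int(quantity))
print("why u here bro u ain't buyin nothin")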
Trying to input a blank for an input that requires 2 values separated by ", "
So I am trying to have the while loop end when inputting a blank for the input, but the problem is that the input takes 2 values separated by ", ". It is necessary for me to keep the input like that rather than separating them so how to fix this? print(" Input the productIDs and quantities (input blank to complete transaction)") productID, quantity = input().split(", ") quantity = int(quantity) while quantity >= 1: self.addProductToTransaction(productID, quantity) print("why u here bro u ain't buyin nothin") When input is blank: ValueError: not enough values to unpack (expected 2, got 1)
[ "while loop should be outer, if you want to iteratively receive the input until a bad format is fed (handled by try-except).\nwhile True:\n try:\n productID, quantity = input(\"Input the productIDs and quantities (input blank to complete transaction)\").split(\", \")\n quantity = int(quantity)\n except ValueError:\n print(\"why u here bro u ain't buyin nothin\")\n break\n if quantity >= 1:\n self.addProductToTransaction(productID, quantity) \n\n", "You don't even need a execption handelling. Simply take as a string then split by '\nprint(\" Input the productIDs and quantities (input blank to complete transaction)\")\nuser_in = input()\nif user_in !='':\n\n productID, quantity = user_in.split(',')\n print(quantity)\n quantity = int(quantity)\n while quantity >= 1:\n self.addProductToTransaction(productID, quantity) \n \nelse:\n \n print(\"why u here bro u ain't buyin nothin\")\n\nSample outs#\n Input the productIDs and quantities (input blank to complete transaction)\n\nwhy u here bro u ain't buyin nothin\n\n", "There are three ways to do this. Using len() or with try..except or with if\n\nif\n\nwhile True:\n\n var = input(\"Input the productIDs and quantities (input blank to complete transaction)\")\n \n if len(var) == ‘’: \n\n print(\"why u here bro u ain't buyin nothin\")\n break\n\n productID, quantity = Var.split(\", \")\n\n\n\n quantity = int(quantity)\n if quantity >= 1:\n self.addProductToTransaction(productID, quantity) \n\n\n\nlen()\n\n\nwhile True:\n\n var = input(\"Input the productIDs and quantities (input blank to complete transaction)\")\n \n if len(var) == 0: \n\n print(\"why u here bro u ain't buyin nothin\")\n break\n\n productID, quantity = Var.split(\", \")\n\n\n\n quantity = int(quantity)\n if quantity >= 1:\n self.addProductToTransaction(productID, quantity) \n\n\n\ntry…except\n\n\nwhile True:\n\n try:\n\n\n productID, quantity = input(\"Input the productIDs and quantities (input blank to complete transaction)\").split(\", \")\n\n quantity = int(quantity)\n if quantity >= 1:\n self.addProductToTransaction(productID, quantity) \n\n\n except ValueError:\n print(\"why u here bro u ain't buyin nothin\")\n break\n\n\nThree notes:\n\nIf you are printing a message for input the pass the string in the input function. And if you want user to input on next line, pass new line escape chat in the string \"Foo..string\\n\"\nI think you are trying to have the loop that repeatedly ask the user the input then pass it to another class function addProductToTransaction(). Unless nothing is passed, in that case exit the loop. For that you can put the loop before input.\nThe solutions above are ordered per preference and performance (high to low)\n\n" ]
[ 1, 1, 0 ]
[ "I managed to fix it by using try/except.\nprint(\" Input the productIDs and quantities (input blank to complete transaction)\")\n try:\n productID, quantity = input().split(\", \")\n quantity = int(quantity)\n while quantity >= 1:\n self.addProductToTransaction(productID, quantity) \n except:\n print(\"why u here bro u ain't buyin nothin\")\n\n" ]
[ -1 ]
[ "input", "python", "python_3.x" ]
stackoverflow_0074587685_input_python_python_3.x.txt
Q: Get first element of sublist as dictionary key in python I looked but I didn't find the answer (and I'm pretty new to Python). The question is pretty simple. I have a list made of sublists: ll = [[1,2,3], [4,5,6], [7,8,9]] What I'm trying to do is to create a dictionary that has the first element of each sublist as its key and the remaining elements of the corresponding sublist as its value, like: d = {1:[2,3], 4:[5,6], 7:[8,9]} How can I do that? A: Using dict comprehension : {words[0]:words[1:] for words in lst} output: {1: [2, 3], 4: [5, 6], 7: [8, 9]} A: Using dictionary comprehension (For Python 2.7 +) and slicing - d = {e[0] : e[1:] for e in ll} Demo - >>> ll = [[1,2,3], [4,5,6], [7,8,9]] >>> d = {e[0] : e[1:] for e in ll} >>> d {1: [2, 3], 4: [5, 6], 7: [8, 9]} A: you could do it this way: ll = [[1,2,3], [4,5,6], [7,8,9]] dct = dict( (item[0], item[1:]) for item in ll) # or even: dct = { item[0]: item[1:] for item in ll } print(dct) # {1: [2, 3], 4: [5, 6], 7: [8, 9]} A: Another variation on the theme: d = {e.pop(0): e for e in ll}
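One caveat worth adding to the comprehension answers: a dict comprehension keeps only the last row per key, so if first elements can repeat, a grouping variant (a small illustrative sketch) preserves them all:
ll = [[1, 2, 3], [4, 5, 6], [1, 8, 9]]
grouped = {}
for first, *rest in ll:
    grouped.setdefault(first, []).append(rest)
# {1: [[2, 3], [8, 9]], 4: [[5, 6]]}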
Get first element of sublist as dictionary key in python
I looked but I didn't find the answer (and I'm pretty new to Python). The question is pretty simple. I have a list made of sublists: ll = [[1,2,3], [4,5,6], [7,8,9]] What I'm trying to do is to create a dictionary that has the first element of each sublist as its key and the remaining elements of the corresponding sublist as its value, like: d = {1:[2,3], 4:[5,6], 7:[8,9]} How can I do that?
[ "Using dict comprehension :\n{words[0]:words[1:] for words in lst}\n\noutput:\n{1: [2, 3], 4: [5, 6], 7: [8, 9]}\n\n", "Using dictionary comprehension (For Python 2.7 +) and slicing -\nd = {e[0] : e[1:] for e in ll}\n\nDemo -\n>>> ll = [[1,2,3], [4,5,6], [7,8,9]]\n>>> d = {e[0] : e[1:] for e in ll}\n>>> d\n{1: [2, 3], 4: [5, 6], 7: [8, 9]}\n\n", "you could do it this way:\nll = [[1,2,3], [4,5,6], [7,8,9]]\ndct = dict( (item[0], item[1:]) for item in ll)\n# or even: dct = { item[0]: item[1:] for item in ll }\nprint(dct)\n# {1: [2, 3], 4: [5, 6], 7: [8, 9]}\n\n", "Another variation on the theme:\nd = {e.pop(0): e for e in ll}\n\n" ]
[ 11, 7, 2, 2 ]
[ "Another:\nd = {k: v for k, *v in ll}\n\n" ]
[ -1 ]
[ "dictionary", "list", "python", "sublist" ]
stackoverflow_0032604558_dictionary_list_python_sublist.txt
Q: DRF pagination issue in APIView I want to implement LimitOffsetPagination in my APIView, which was successful, but the URL has to be appended with ?limit=<int>. Something like this: But I do not want to add that query parameter manually. So I tried creating a new pagination.py file: But now I am not getting the pagination prompt for navigating to the next post like before. I think the return needs to be altered? A: You just need to change your return. Instead of return Response(serializer.data) use return paginator.get_paginated_response(serializer.data)
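A sketch of what the missing pagination.py and view likely look like: subclassing LimitOffsetPagination so a default limit applies even when the client sends no ?limit= parameter (the Post model, serializer, and class names here are assumptions, not from the original post):
from rest_framework.pagination import LimitOffsetPagination
from rest_framework.views import APIView

class DefaultLimitOffsetPagination(LimitOffsetPagination):
    default_limit = 10  # applied when no ?limit= is given

class PostListAV(APIView):
    def get(self, request):
        posts = Post.objects.all()
        paginator = DefaultLimitOffsetPagination()
        page = paginator.paginate_queryset(posts, request, view=self)
        serializer = PostSerializer(page, many=True)
        # returns next/previous links instead of a bare Response
        return paginator.get_paginated_response(serializer.data)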
DRF pagination issue in APIView
I want to implement LimitOffsetPagination in my APIView, which was successful, but the URL has to be appended with ?limit=<int>. Something like this: But I do not want to add that query parameter manually. So I tried creating a new pagination.py file: But now I am not getting the pagination prompt for navigating to the next post like before. I think the return needs to be altered?
[ "You just need to change your return. Instead of return Response(serializer.data) use return paginator.get_paginated_response(serializer.data)\n" ]
[ 1 ]
[]
[]
[ "backend", "django", "django_rest_framework", "python" ]
stackoverflow_0074587600_backend_django_django_rest_framework_python.txt
Q: VS Code: "The isort server crashed 5 times in the last 3 minutes..." I may have messed up some environmental path variables. I was tinkering around VS Code while learning about Django and virtual environments, and changing the directory path of my Python install. While figuring out how to point VS Code's default Python path, I deleted some User path variables. Then, isort began to refuse to run. I've tried uninstalling the extension(s), deleting the ms-python.'s, and uninstalling VS Code itself, clearing the Python Workspace Interpreter Settings, and restarting my computer. Even if it's not my path variables, anyone know the defaults that should be in the "user" paths variables? A: You need to find the location of the python.exe file. Usually it is C:\Users\Admin\AppData\Local\Programs\Python\Python310\ You can also automatically add python to the system environment by deleting and reinstalling it. During installation, a small box is automatically checked to add environment variables. A: Another reason maybe that you are using a python version older than 3.7, which isort is not supporting anymore. Here is a reference link. A: I ended up refreshing my Windows install. Was for the best because I'm repurposing an older machine anyway.
VS Code: "The isort server crashed 5 times in the last 3 minutes..."
I may have messed up some environment path variables. I was tinkering around in VS Code while learning about Django and virtual environments, and changing the directory path of my Python install. While figuring out how to point VS Code at the default Python path, I deleted some User path variables. Then, isort began to refuse to run. I've tried uninstalling the extension(s), deleting the ms-python.* extension folders, and uninstalling VS Code itself, clearing the Python Workspace Interpreter Settings, and restarting my computer. Even if it's not my path variables, does anyone know the defaults that should be in the "user" PATH variables?
[ "You need to find the location of the python.exe file.\nUsually it is C:\\Users\\Admin\\AppData\\Local\\Programs\\Python\\Python310\\\nYou can also automatically add python to the system environment by deleting and reinstalling it. During installation, a small box is automatically checked to add environment variables.\n", "Another reason maybe that you are using a python version older than 3.7, which isort is not supporting anymore. Here is a reference link.\n", "I ended up refreshing my Windows install. Was for the best because I'm repurposing an older machine anyway.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "isort", "python", "python_extensions", "visual_studio_code" ]
stackoverflow_0074467875_isort_python_python_extensions_visual_studio_code.txt
Q: Tensorflow 2.0 list_physical_devices doesn't detect my GPU I recently installed tensorflow 2.0 on my computer, but when I try to run it on my GPU, the function tf.config.experimental.list_physical_devices('GPU') on Jupyter or Visual Studio Code returns an empty list. Do you know why ? My set-up : Computer : MSI Processor : Intel(R) Core(TM) i7-8750H CPU @ 2.220GHz GPU 0 : Intel(R) UHD Graphics 630 GPU : NVIDIA GeForce GTX 1060 Python : Anaconda 3 with Python 3.7 Tensorflow 2.0 installed with pip install tensorflow My test code : physical_devices = tf.config.experimental.list_physical_devices('GPU') print(physical_devices) if physical_devices: tf.config.experimental.set_memory_growth(physical_devices[0], True) Thanks in advance ! :)
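A slightly fuller diagnostic sketch than the test code above: if the CPU-only wheel is what pip installed, the device list comes back empty and the CUDA check returns False, which points at the packaging fix described in the answers:
import tensorflow as tf
print(tf.__version__)
print(tf.config.experimental.list_physical_devices('GPU'))
print(tf.test.is_built_with_cuda())  # False means a CPU-only build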
Tensorflow 2.0 list_physical_devices doesn't detect my GPU
I recently installed tensorflow 2.0 on my computer, but when I try to run it on my GPU, the function tf.config.experimental.list_physical_devices('GPU') on Jupyter or Visual Studio Code returns an empty list. Do you know why ? My set-up : Computer : MSI Processor : Intel(R) Core(TM) i7-8750H CPU @ 2.220GHz GPU 0 : Intel(R) UHD Graphics 630 GPU : NVIDIA GeForce GTX 1060 Python : Anaconda 3 with Python 3.7 Tensorflow 2.0 installed with pip install tensorflow My test code : physical_devices = tf.config.experimental.list_physical_devices('GPU') print(physical_devices) if physical_devices: tf.config.experimental.set_memory_growth(physical_devices[0], True) Thanks in advance ! :)
[ "Providing the solution here (Answer Section), even though it is present in the Comment Section for the benefit of the community.\nInstead of pip install tensorflow, you can try pip3 install --upgrade tensorflow-gpu or just remove tensorflow and then installing \"tensorflow-gpu will resolves your issue. \nAfter installation of Tensorflow GPU, you can check GPU as below\nphysical_devices = tf.config.experimental.list_physical_devices('GPU')\nprint(physical_devices)\nif physical_devices:\n tf.config.experimental.set_memory_growth(physical_devices[0], True)\n\nOutput:\n[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]\n\n", "Upgrading simply worked for me.:\npip3 install --upgrade tensorflow-gpu\n\nand the device name searched has to be 'XLA_GPU', and does not respond with solo 'GPU' search term. But it also raised another error when setting the memory growth which is not supported by 'XLA' gpus.\n", "Installing tensorflow with gpu using Conda\nFirst install anaconda or mini-anaconda on your system and make sure you have set the environment path for conda command.\n\nIn below command replace tensor with a environment name of your choice:\n\nconda create -n tensor tensorflow-gpu cudatoolkit=9.0\nconda activate tensor\n\n\nThen update your tensorflow-gup package using pip and make sure\nyour environment is activated in conda\n\npip3 install --upgrade tensorflow-gpu\n\n\nVerify weather tensorflow with gpu support is installed\nsuccessfully\n\n(tensor) C:\\> python\n\nPython 3.8.15 (default, Nov 24 2022, 14:38:14) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n\n>>> import tensorflow\n>>> tensorflow.config.list_physical_devices(\"GPU\")\n[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]\n\nThe output[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')] in terminal means we have successfully installed tensorflow with gpu.\nThis answer was taken from anaconda tensorflow installation link.\n" ]
[ 7, 1, 0 ]
[]
[]
[ "gpu", "python", "tensorflow2.0" ]
stackoverflow_0058956619_gpu_python_tensorflow2.0.txt
Q: How can I add Buttons to my bot page in discord.py? So how do I add this button? Image Thanks in advance. I tried it with Client.change_presence(activity=discord... , buttons=["Website"]) A: Unfortunately, Discord (and subsequently the discord.py lib) does not have ways for us developers to modify those buttons. Both of those buttons are fairly new to Discord anyways. The button you were seeing is related to the bot "streaming on Twitch" (i.e. Twitch integration). This is autogenerated for anyone who streams on Twitch with their account linked, and as such pops up for bots as well. Your best bet would be to read update patch notes and keep an eye out for if they make that available to us devs at some point. If you're looking for a subtle way to link a website to your bot, I would personally suggest using what Discord's API does offer: embeds. You can always include a link inside of the embed, or even put the website in the footer. Embeds are very versatile, and hopefully, you can use them to solve the issue you're running into.
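Since those profile buttons are not exposed to bot developers, here is a sketch of the embed workaround the answer suggests, linking a website from the bot's messages instead; the command text and URL are placeholders, and the usual client setup is assumed:
@client.event
async def on_message(message):
    if message.content == "!website":
        embed = discord.Embed(
            title="My Bot",
            url="https://example.com",  # placeholder URL
            description="Visit the website for docs and support.",
        )
        embed.set_footer(text="https://example.com")
        await message.channel.send(embed=embed)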
How can I add Buttons to my bot page in discord.py?
So how do I add this button? Image Thanks in advance. I tried it with Client.change_presence(activity=discord... , buttons=["Website"])
[ "Unfortunately, Discord (and subsequently the discord.py lib) does not have ways for us developers to modify those buttons. Both of those buttons are fairly new to Discord anyways. The button you were seeing is related to the bot \"streaming on Twitch\" (ie Twitch integration) This is autogenerated for anyone who streams on Twitch with their account linked, and as such pops up for bots as well.\nYour best bet would be to read update patch notes and keep an eye out for if they make that available to us devs at some point.\nIf you're looking for a subtle way to link a website to your bot, I would personally suggest using what Discord's API does offer: embeds. You can always include a link inside of the embed, or even put the website in the footer. Embeds are very versatile, and hopefully, you can use them to solve the issue you're running into.\n" ]
[ 0 ]
[]
[]
[ "discord.py", "python" ]
stackoverflow_0074577024_discord.py_python.txt
Q: How to link 2 items from the same array? First of all, I'm sorry for my bad English. English isn't my main language. So, I have this data, datamhs = np.array([["x", 85, "22222221"], ["y", 85, "22222222"], ["z", 70, "22222223"], ["a", 90, "22222224"], ["b", 60, "22222225"], ["c", 90, "22222226"]]) Is there a way to reference each row with itself? For example, I want x to have a value of 85 and a uid of 22222221. I want to make a function that checks if x has a value of 85 and/or a uid of 22222221. I'm sorry if it's hard to understand; I really don't know how to write it. Thanks for answering. I'm trying def name(): if x in datamhs[:,0]: if y in datamhs[:,1]: print(x) print(y) It prints out an error saying y is not defined. When I try to define it with, y = datamhs[:,1] then it just shows a list of the values. I want to input x, then check if x has the value y. A: Where do x and y come from? I can't tell. Maybe you should try something like this: def name(): if 'x' in datamhs[:,0]: if 'y' in datamhs[:,1]: print(datamhs[:,0]) print(datamhs[:,1]) The Python interpreter knows nothing about x and y if you have not yet defined them.
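One way to express the lookup being described, sketched with a boolean row mask; note that np.array stores this mixed data as strings, so the comparisons are against string values:
import numpy as np

datamhs = np.array([["x", 85, "22222221"], ["y", 85, "22222222"]])

def check(name, value, uid):
    row = datamhs[datamhs[:, 0] == name]  # rows whose first column matches
    return row.size > 0 and row[0, 1] == str(value) and row[0, 2] == uid

print(check("x", 85, "22222221"))  # True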
How to link 2 items from the same array?
First of all, I'm sorry for my bad English. English isn't my main language. So, I have this data, datamhs = np.array([["x", 85, "22222221"], ["y", 85, "22222222"], ["z", 70, "22222223"], ["a", 90, "22222224"], ["b", 60, "22222225"], ["c", 90, "22222226"]]) Is there a way to reference each row with itself? For example, I want x to have a value of 85 and a uid of 22222221. I want to make a function that checks if x has a value of 85 and/or a uid of 22222221. I'm sorry if it's hard to understand; I really don't know how to write it. Thanks for answering. I'm trying def name(): if x in datamhs[:,0]: if y in datamhs[:,1]: print(x) print(y) It prints out an error saying y is not defined. When I try to define it with, y = datamhs[:,1] then it just shows a list of the values. I want to input x, then check if x has the value y.
[ "Where do x and y come from? I can't understand\nMaybe you should try something like that\ndef name():\n if 'x' in datamhs[:,0]:\n if 'y' in datamhs[:,1]:\n print(datamhs[:,0])\n print(datamhs[:,1])\n\nThe python interpreter nothing knows about x and y if you have not yet defined them.\n" ]
[ 0 ]
[]
[]
[ "arrays", "numpy", "python" ]
stackoverflow_0074587908_arrays_numpy_python.txt
Q: How to convert result of np.where to array? I have an array from which I want to get the indices of the elements of interest by the condition in np.where: diff = [19, 403472, 403491, 403491, 403491, 403491, 13, 403478, 13] np.where(diff > np.average(diff)) As a result I have a tuple: (array([1, 2, 3, 4, 5, 7], dtype=int64),) But I want only the array: array([1, 2, 3, 4, 5, 7]) And list() doesn't help. Can you please help? Thanks. A: np.where(diff > np.average(diff))[0]? output: array([1, 2, 3, 4, 5, 7])
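An equivalent one-step alternative worth knowing: np.flatnonzero returns the index array directly, with no tuple to unpack (a small sketch on the same data):
import numpy as np
diff = np.asarray([19, 403472, 403491, 403491, 403491, 403491, 13, 403478, 13])
idx = np.flatnonzero(diff > diff.mean())
# array([1, 2, 3, 4, 5, 7])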
How to convert result of np.where to array?
I have an array from which I want to get the indices of the elements of interest by the condition in np.where: diff = [19, 403472, 403491, 403491, 403491, 403491, 13, 403478, 13] np.where(diff > np.average(diff)) As a result I have a tuple: (array([1, 2, 3, 4, 5, 7], dtype=int64),) But I want only the array: array([1, 2, 3, 4, 5, 7]) And list() doesn't help. Can you please help? Thanks.
[ "np.where(diff > np.average(diff))[0]?\noutput:\narray([1, 2, 3, 4, 5, 7])\n" ]
[ 2 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074587951_numpy_python.txt
Q: Storing CSV file in dictionary list of tuples. Each key is a date, each tuple contains 3 corresponding fields, multiple entries per key(date) possible An example to demonstrate my problem, suppose the csv file is formatted like: 2022-11-05,Female,30-39,City of London 2022-11-05,Male,60-69,City of London 2022-11-04,Female,70-79,City of London 2022-11-04,Female,60-69,City of London Should be read into a dictionary like: {'2022-11-05': [(Female,30-39, City of London), (Male,60-69,City of London)], '2022-11-04': [(Female, 70-79, City of London), (Female, 60-69, City of London)]} When I attempted to read it like: vaccine_data_reader = csv.reader(vaccine_date_file) mydict = {rows[0]: [(rows[1],rows[2],rows[3])] for rows in vaccine_data_reader} I only got one value per key, not multiple lists of tuples for each unique entry. A: A more pythonic way to express the same solution is: for row in vaccine_data_reader: try: mydict[row[0]].append(tuple(row[1:])) except KeyError: mydict[row[0]] = [tuple(row[1:])] A: I don't think a dictionary comprehension would be the way here, as dictionaries do not allow duplicate keys, so it will just replace the existing one. for rows in vaccine_data_reader: if rows[0] in mydict.keys(): #Checks if the key already exists mydict[rows[0]].append(tuple(rows[1:])) #Appends the value for the duplicate key continue mydict[rows[0]] = [tuple(rows[1:])] #Creates a new key and adds the value
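The standard-library idiom for exactly this kind of grouping is collections.defaultdict, which removes the key-existence check the answers work around; a sketch, with the file name assumed:
import csv
from collections import defaultdict

mydict = defaultdict(list)
with open("vaccine_data.csv", newline="") as vaccine_date_file:
    for row in csv.reader(vaccine_date_file):
        mydict[row[0]].append(tuple(row[1:]))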
Storing CSV file in dictionary list of tuples. Each key is a date, each tuple contains 3 corresponding fields, multiple entries per key(date) possible
An example to demonstrate my problem, suppose the csv file is formatted like: 2022-11-05,Female,30-39,City of London 2022-11-05,Male,60-69,City of London 2022-11-04,Female,70-79,City of London 2022-11-04,Female,60-69,City of London Should be read into a dictionary like: {'2022-11-05': [(Female,30-39, City of London), (Male,60-69,City of London)], '2022-11-04': [(Female, 70-79, City of London), (Female, 60-69, City of London)]} When I attempted to read it like: vaccine_data_reader = csv.reader(vaccine_date_file) mydict = {rows[0]: [(rows[1],rows[2],rows[3])] for rows in vaccine_data_reader} I only got one value per key, not multiple lists of tuples for each unique entry.
[ "A more pythonic way to express the same solution is:\nfor row in vaccine_data_reader:\n try:\n mydict[row[0]].append(tuple(row[1:]))\n except KeyError:\n mydict[row[0]] = [tuple(row[1:])]\n\n", "I don't think Dictionary Comprehension would be the way here, as dictionary does not allow duplicate keys, so it will just replace the existing one.\nfor rows in vaccine_data_reader:\n if rows[0] in mydict.keys(): #Checks if the keys already exists\n mydict[rows[0]].append(tuple(rows[1:])) #Appends the value of duplicate\n continue\n mydict[rows[0]] = [tuple(rows[1:])] #Creates a new keys and adds the value\n\n" ]
[ 1, 0 ]
[]
[]
[ "csv", "file_io", "python" ]
stackoverflow_0074587824_csv_file_io_python.txt
Q: Issue With Nested Comments Using MPTT in Django and Django Rest Framework API - Result In Detail Not Found I'm trying to create a nested comment system using MPTT but using Django Rest Framework to serialize MPTT tree. I got the nested comments to work - and these comments are added, edited, and deleted by calling Django Rest Framework API endpoints only - not using Django ORM DB calls at all. Unfortunately, there is a bug I couldn't figure out! Although the comments are added, edited, and deleted fine - but when a seventh or eighth comment is nested - suddenly the first-in comment or first-in nested comments would become [detail: Not found.] - meaning it will return an empty result or throw an unknown validation error somewhere which I couldn't figure out why. This results in when clicking on edit or delete the buggy comments becoming impossible - but the GET part is fine since these buggy comments do show up in the comment section (or should I say the list part returns fine). The image I'll attach will show that when I entered comment ggggg, the comment aaaa and bbbb will throw errors when trying to edit or delete them. If I delete comment gggg, comment hhhh will also be deleted (as CASCADE was enabled) - and suddenly comment aaaa and bbbb will work again for deletion and editing. My comment model (models.py): from django.db import models from django.template.defaultfilters import truncatechars from mptt.managers import TreeManager from post.models import Post from account.models import Account from mptt.models import MPTTModel, TreeForeignKey # Create your models here. # With MPTT class CommentManager(TreeManager): def viewable(self): queryset = self.get_queryset().filter(level=0) return queryset class Comment(MPTTModel): parent = TreeForeignKey('self', on_delete=models.CASCADE, null=True, blank=True, related_name='comment_children') post = models.ForeignKey(Post, on_delete=models.CASCADE, related_name='comment_post') user = models.ForeignKey(Account, on_delete=models.CASCADE, related_name='comment_account') content = models.TextField(max_length=9000) created_date = models.DateTimeField(auto_now_add=True) updated_date = models.DateTimeField(auto_now=True) status = models.BooleanField(default=True) objects = CommentManager() def __str__(self): return f'Comment by {str(self.pk)}-{self.user.full_name.__self__}' @property def short_content(self): return truncatechars(self.content, 99) class MPTTMeta: # If changing the order - MPTT needs the programmer to go into console and do Comment.objects.rebuild() order_insertion_by = ['-created_date'] My serializers.py (Showing only comment serializer portion). 
class RecursiveField(serializers.Serializer): def to_representation(self, value): serializer = self.parent.parent.__class__(value, context=self.context) return serializer.data class CommentSerializer(serializers.ModelSerializer): post_slug = serializers.SerializerMethodField() user = serializers.StringRelatedField(read_only=True) user_name = serializers.SerializerMethodField() user_id = serializers.PrimaryKeyRelatedField(read_only=True) comment_children = RecursiveField(many=True) class Meta: model = Comment fields = '__all__' # noinspection PyMethodMayBeStatic # noinspection PyBroadException def get_post_slug(self, instance): try: slug = instance.post.slug return slug except Exception: pass # noinspection PyMethodMayBeStatic # noinspection PyBroadException def get_user_name(self, instance): try: full_name = f'{instance.user.first_name} {instance.user.last_name}' return full_name except Exception: pass # noinspection PyMethodMayBeStatic def validate_content(self, value): if len(value) < COM_MIN_LEN: raise serializers.ValidationError('The comment is too short.') elif len(value) > COM_MAX_LEN: raise serializers.ValidationError('The comment is too long.') else: return value def get_fields(self): fields = super(CommentSerializer, self).get_fields() fields['comment_children'] = CommentSerializer(many=True, required=False) return fields The API views for comments would look like this: class CommentAV(mixins.CreateModelMixin, generics.GenericAPIView): # This class only allows users to create comments but not list all comments. List all comments would # be too taxing for the server if the website got tons of comments. queryset = Comment.objects.viewable().filter(status=True) serializer_class = CommentSerializer def post(self, request, *args, **kwargs): return self.create(request, *args, **kwargs) def perform_create(self, serializer): # Overriding perform_create. Can create comment using the authenticated account. # Cannot pretend to be someone else to create comment on his or her behalf. commenter = self.request.user now = timezone.now() before_now = now - timezone.timedelta(seconds=COM_WAIT_TIME) # Make sure user can only create comment again after waiting for wait_time. this_user_comments = Comment.objects.filter(user=commenter, created_date__lt=now, created_date__gte=before_now) if this_user_comments: raise ValidationError(f'You have to wait for {COM_WAIT_TIME} seconds before you can post another comment.') elif Comment.objects.filter(user=commenter, level__gt=COMMENT_LEVEL_DEPTH): raise ValidationError(f'You cannot make another level-deep reply.') else: serializer.save(user=commenter) # By combining perform_create method to filter out only the owner of the comment can edit his or her own # comment -- and the permission_classes of IsAuthenticated -- allowing only authenticated user to create # comments. When doing custome permission - such as redefinte BasePermission's has_object_permission, # it doesn't work with ListCreateAPIView - because has_object_permission is meant to be used on single instance # such as object detail. 
permission_classes = [IsAuthenticated] class CommentAVAdmin(generics.ListCreateAPIView): queryset = Comment.objects.viewable() serializer_class = CommentSerializer permission_classes = [IsAdminUser] class CommentDetailAV(generics.RetrieveUpdateDestroyAPIView): queryset = Comment.objects.viewable().filter(status=True) serializer_class = CommentSerializer permission_classes = [CustomAuthenticatedOrReadOnly] def destroy(self, request, *args, **kwargs): instance = self.get_object() if not instance.user.id == self.request.user.id: return Response({ 'Error': 'Comment isn\'t deleted! Please log into the owner account of this comment to delete this comment.'}, status=status.HTTP_400_BAD_REQUEST) self.perform_destroy(instance) return Response({'Success': 'Comment deleted!'}, status=status.HTTP_204_NO_CONTENT) class CommentDetailAVAdmin(generics.RetrieveUpdateDestroyAPIView): queryset = Comment.objects.viewable() serializer_class = CommentSerializer permission_classes = [IsAdminUser] class CommentDetailChildrenAV(generics.RetrieveUpdateDestroyAPIView): queryset = Comment.objects.viewable().get_descendants().filter(status=True) serializer_class = CommentSerializer permission_classes = [CustomAuthenticatedOrReadOnly] def destroy(self, request, *args, **kwargs): instance = self.get_object() if not instance.user.id == self.request.user.id: return Response({ 'Error': 'Reply isn\'t deleted! Please log into the owner account of this reply to delete this reply.'}, status=status.HTTP_400_BAD_REQUEST) self.perform_destroy(instance) return Response({'Success': 'Comment deleted!'}, status=status.HTTP_204_NO_CONTENT) The API calls would look like this in blog_post app views: add_comment = requests.post(BLOG_BASE_URL + f'api/post-list/comments/create-comments/', headers=headers, data=user_comment) add_reply = requests.post(BLOG_BASE_URL + f'api/post-list/comments/create-comments/', headers=headers, data=user_reply) requests.request('PUT', BLOG_BASE_URL + f'api/post-list/comments/{pk}/', headers=headers, data=user_comment) response = requests.request('PUT', BLOG_BASE_URL + f'api/post-list/comments/children/{pk}/', headers=headers, data=user_comment) response = requests.request("DELETE", BLOG_BASE_URL + f'api/post-list/comments/{pk}/', headers=headers) These calls in the blog post app views would allow me to allow authenticated users to create, edit, and delete comments. Does anyone know why my application got this bug? Any help would be appreciated! I read somewhere about getting a node refresh_from_db() - but how would I do that in the serialization? Also, Comment.objects.rebuild() doesn't help! I also noticed that when I stopped the development server and restarted it, the whole comment tree worked normally again - and I could now edit and delete the non-working comments earlier. Update: I also opened up python shell (by doing Python manage.py shell) and tried this for the specific affected comment that when doing API call for edit or delete and got error of Not Found: from comment.models import Comment reply = Comment.objects.get(pk=113) print(reply.content) I did get the proper output of the comment's content. Then I also tried to get_ancestors(include_self=True) (using MPTT instance methods) - and I got proper output that when using include_self=True does show the affected comment's node in the output - but calling API endpoint results in Not Found (for GET) still. I'm super confused now! Why? 
If I restart the development server by doing Ctrl-C and python manage.py runserver - and revisit the same affected API GET endpoint - this case is comment (child node) with 113 primary key(id) - the endpoint would show proper output and details as if nothing had gone wrong. Update 2: Found an interesting Github post: https://github.com/django-mptt/django-mptt/issues/789 This sounds like what I'm experiencing but I'm not using Apache - and this is Django's default development server. A: Okie, I figured it out! I think when calling the same object in the Tree of MPTT for GET and PUT somehow spits out a weird bug that prevents me from editing the affected replies. So, my solution now is just creating an endpoint with API view below: class CommentChildrenAV(mixins.CreateModelMixin, generics.GenericAPIView): # This class only allows users to create comments but not list all comments. List all comments would # be too taxing for the server if the website got tons of comments. queryset = Comment.objects.viewable().get_descendants().filter(status=True) serializer_class = CommentSerializer def get(self, request, pk): replies = Comment.objects.viewable().get_descendants().filter(status=True, pk=pk) serializer = CommentSerializer(replies, many=True) return Response(serializer.data, status=status.HTTP_200_OK) def post(self, request, *args, **kwargs): return self.create(request, *args, **kwargs) def perform_create(self, serializer): # Overriding perform_create. Can create comment using the authenticated account. # Cannot pretend to be someone else to create comment on his or her behalf. commenter = self.request.user now = timezone.now() before_now = now - timezone.timedelta(seconds=COM_WAIT_TIME) # Make sure user can only create comment again after waiting for wait_time. this_user_comments = Comment.objects.filter(user=commenter, created_date__lt=now, created_date__gte=before_now) if this_user_comments: raise ValidationError(f'You have to wait for {COM_WAIT_TIME} seconds before you can post another comment.') elif Comment.objects.filter(user=commenter, level__gt=COMMENT_LEVEL_DEPTH): raise ValidationError(f'You cannot make another level-deep reply.') else: serializer.save(user=commenter) # By combining perform_create method to filter out only the owner of the comment can edit his or her own # comment -- and the permission_classes of IsAuthenticated -- allowing only authenticated user to create # comments. When doing custome permission - such as redefinte BasePermission's has_object_permission, # it doesn't work with ListCreateAPIView - because has_object_permission is meant to be used on single instance # such as object detail. 
permission_classes = [IsAuthenticated] This API view would allow me to pass in the pk of the reply - get JSON response like so: response = requests.request("GET", BLOG_BASE_URL + f'api/post-list/children/get-child/{pk}/', headers=headers) Once I have the response in JSON - I could get the original reply content - input this reply content into reply-form's initial data like so: edit_form = CommentForm(initial=original_comment_data) Then I'm getting the POST's new content that the user wants to replace the original reply's content with - the gist is the solution I'm now going with is - if the user is authenticated and if the original's JSON content's user_id (meaning the original's commenter of the reply text) is the same as the request.user.id - then I just do: if request.method == 'POST': # I can't use API endpoint here to edit reply because some weird bug won't allow me to do so. # Instead of calling the endpoint api for edit reply - I just update the database with # using ORM (Object Relational Manager) method. if request.user.is_authenticated: print(content[0]['user_id'], os.getcwd()) if request.user.id == content[0]['user_id']: Comment.objects.filter(status=True, pk=pk).update(content=request.POST.get(strip_invalid_html('content'))) post_slug = content[0]['post_slug'] return redirect('single_post', post_slug) This now solves my problem for real! I was just hoping that I don't have to cheat by going the route of ORM for editing a reply. I would prefer 100% API calls for all actions in this app. Sigh... but now my app is fully functioning in terms of having a comment system that is nested using MPTT package.
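A note on the underlying bug, offered as a sketch rather than a confirmed fix: the class-level attribute queryset = Comment.objects.viewable().get_descendants().filter(status=True) is evaluated once at import time, so the MPTT lft/rght bounds it captures are frozen; inserting new replies shifts those bounds, and older nodes stop matching until the process restarts, which matches the symptoms described above. Computing the descendants per request avoids the stale snapshot:
class CommentDetailChildrenAV(generics.RetrieveUpdateDestroyAPIView):
    serializer_class = CommentSerializer
    permission_classes = [CustomAuthenticatedOrReadOnly]

    def get_queryset(self):
        # re-evaluated on every request, so the tree bounds stay current
        return Comment.objects.viewable().get_descendants().filter(status=True)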
Issue With Nested Comments Using MPTT in Django and Django Rest Framework API - Result In Detail Not Found
I'm trying to create a nested comment system using MPTT but using Django Rest Framework to serialize MPTT tree. I got the nested comments to work - and these comments are added, edited, and deleted by calling Django Rest Framework API endpoints only - not using Django ORM DB calls at all. Unfortunately, there is a bug I couldn't figure out! Although the comments are added, edited, and deleted fine - but when a seventh or eighth comment is nested - suddenly the first-in comment or first-in nested comments would become [detail: Not found.] - meaning it will return an empty result or throw an unknown validation error somewhere which I couldn't figure out why. This results in when clicking on edit or delete the buggy comments becoming impossible - but the GET part is fine since these buggy comments do show up in the comment section (or should I say the list part returns fine). The image I'll attach will show that when I entered comment ggggg, the comment aaaa and bbbb will throw errors when trying to edit or delete them. If I delete comment gggg, comment hhhh will also be deleted (as CASCADE was enabled) - and suddenly comment aaaa and bbbb will work again for deletion and editing. My comment model (models.py): from django.db import models from django.template.defaultfilters import truncatechars from mptt.managers import TreeManager from post.models import Post from account.models import Account from mptt.models import MPTTModel, TreeForeignKey # Create your models here. # With MPTT class CommentManager(TreeManager): def viewable(self): queryset = self.get_queryset().filter(level=0) return queryset class Comment(MPTTModel): parent = TreeForeignKey('self', on_delete=models.CASCADE, null=True, blank=True, related_name='comment_children') post = models.ForeignKey(Post, on_delete=models.CASCADE, related_name='comment_post') user = models.ForeignKey(Account, on_delete=models.CASCADE, related_name='comment_account') content = models.TextField(max_length=9000) created_date = models.DateTimeField(auto_now_add=True) updated_date = models.DateTimeField(auto_now=True) status = models.BooleanField(default=True) objects = CommentManager() def __str__(self): return f'Comment by {str(self.pk)}-{self.user.full_name.__self__}' @property def short_content(self): return truncatechars(self.content, 99) class MPTTMeta: # If changing the order - MPTT needs the programmer to go into console and do Comment.objects.rebuild() order_insertion_by = ['-created_date'] My serializers.py (Showing only comment serializer portion). 
class RecursiveField(serializers.Serializer): def to_representation(self, value): serializer = self.parent.parent.__class__(value, context=self.context) return serializer.data class CommentSerializer(serializers.ModelSerializer): post_slug = serializers.SerializerMethodField() user = serializers.StringRelatedField(read_only=True) user_name = serializers.SerializerMethodField() user_id = serializers.PrimaryKeyRelatedField(read_only=True) comment_children = RecursiveField(many=True) class Meta: model = Comment fields = '__all__' # noinspection PyMethodMayBeStatic # noinspection PyBroadException def get_post_slug(self, instance): try: slug = instance.post.slug return slug except Exception: pass # noinspection PyMethodMayBeStatic # noinspection PyBroadException def get_user_name(self, instance): try: full_name = f'{instance.user.first_name} {instance.user.last_name}' return full_name except Exception: pass # noinspection PyMethodMayBeStatic def validate_content(self, value): if len(value) < COM_MIN_LEN: raise serializers.ValidationError('The comment is too short.') elif len(value) > COM_MAX_LEN: raise serializers.ValidationError('The comment is too long.') else: return value def get_fields(self): fields = super(CommentSerializer, self).get_fields() fields['comment_children'] = CommentSerializer(many=True, required=False) return fields The API views for comments would look like this: class CommentAV(mixins.CreateModelMixin, generics.GenericAPIView): # This class only allows users to create comments but not list all comments. List all comments would # be too taxing for the server if the website got tons of comments. queryset = Comment.objects.viewable().filter(status=True) serializer_class = CommentSerializer def post(self, request, *args, **kwargs): return self.create(request, *args, **kwargs) def perform_create(self, serializer): # Overriding perform_create. Can create comment using the authenticated account. # Cannot pretend to be someone else to create comment on his or her behalf. commenter = self.request.user now = timezone.now() before_now = now - timezone.timedelta(seconds=COM_WAIT_TIME) # Make sure user can only create comment again after waiting for wait_time. this_user_comments = Comment.objects.filter(user=commenter, created_date__lt=now, created_date__gte=before_now) if this_user_comments: raise ValidationError(f'You have to wait for {COM_WAIT_TIME} seconds before you can post another comment.') elif Comment.objects.filter(user=commenter, level__gt=COMMENT_LEVEL_DEPTH): raise ValidationError(f'You cannot make another level-deep reply.') else: serializer.save(user=commenter) # By combining perform_create method to filter out only the owner of the comment can edit his or her own # comment -- and the permission_classes of IsAuthenticated -- allowing only authenticated user to create # comments. When doing custome permission - such as redefinte BasePermission's has_object_permission, # it doesn't work with ListCreateAPIView - because has_object_permission is meant to be used on single instance # such as object detail. 
permission_classes = [IsAuthenticated] class CommentAVAdmin(generics.ListCreateAPIView): queryset = Comment.objects.viewable() serializer_class = CommentSerializer permission_classes = [IsAdminUser] class CommentDetailAV(generics.RetrieveUpdateDestroyAPIView): queryset = Comment.objects.viewable().filter(status=True) serializer_class = CommentSerializer permission_classes = [CustomAuthenticatedOrReadOnly] def destroy(self, request, *args, **kwargs): instance = self.get_object() if not instance.user.id == self.request.user.id: return Response({ 'Error': 'Comment isn\'t deleted! Please log into the owner account of this comment to delete this comment.'}, status=status.HTTP_400_BAD_REQUEST) self.perform_destroy(instance) return Response({'Success': 'Comment deleted!'}, status=status.HTTP_204_NO_CONTENT) class CommentDetailAVAdmin(generics.RetrieveUpdateDestroyAPIView): queryset = Comment.objects.viewable() serializer_class = CommentSerializer permission_classes = [IsAdminUser] class CommentDetailChildrenAV(generics.RetrieveUpdateDestroyAPIView): queryset = Comment.objects.viewable().get_descendants().filter(status=True) serializer_class = CommentSerializer permission_classes = [CustomAuthenticatedOrReadOnly] def destroy(self, request, *args, **kwargs): instance = self.get_object() if not instance.user.id == self.request.user.id: return Response({ 'Error': 'Reply isn\'t deleted! Please log into the owner account of this reply to delete this reply.'}, status=status.HTTP_400_BAD_REQUEST) self.perform_destroy(instance) return Response({'Success': 'Comment deleted!'}, status=status.HTTP_204_NO_CONTENT) The API calls would look like this in blog_post app views: add_comment = requests.post(BLOG_BASE_URL + f'api/post-list/comments/create-comments/', headers=headers, data=user_comment) add_reply = requests.post(BLOG_BASE_URL + f'api/post-list/comments/create-comments/', headers=headers, data=user_reply) requests.request('PUT', BLOG_BASE_URL + f'api/post-list/comments/{pk}/', headers=headers, data=user_comment) response = requests.request('PUT', BLOG_BASE_URL + f'api/post-list/comments/children/{pk}/', headers=headers, data=user_comment) response = requests.request("DELETE", BLOG_BASE_URL + f'api/post-list/comments/{pk}/', headers=headers) These calls in the blog post app views would allow me to allow authenticated users to create, edit, and delete comments. Does anyone know why my application got this bug? Any help would be appreciated! I read somewhere about getting a node refresh_from_db() - but how would I do that in the serialization? Also, Comment.objects.rebuild() doesn't help! I also noticed that when I stopped the development server and restarted it, the whole comment tree worked normally again - and I could now edit and delete the non-working comments earlier. Update: I also opened up python shell (by doing Python manage.py shell) and tried this for the specific affected comment that when doing API call for edit or delete and got error of Not Found: from comment.models import Comment reply = Comment.objects.get(pk=113) print(reply.content) I did get the proper output of the comment's content. Then I also tried to get_ancestors(include_self=True) (using MPTT instance methods) - and I got proper output that when using include_self=True does show the affected comment's node in the output - but calling API endpoint results in Not Found (for GET) still. I'm super confused now! Why? 
If I restart the development server by doing Ctrl-C and python manage.py runserver - and revisit the same affected API GET endpoint - this case is comment (child node) with 113 primary key(id) - the endpoint would show proper output and details as if nothing had gone wrong. Update 2: Found an interesting Github post: https://github.com/django-mptt/django-mptt/issues/789 This sounds like what I'm experiencing but I'm not using Apache - and this is Django's default development server.
[ "Okie, I figured it out!\nI think when calling the same object in the Tree of MPTT for GET and PUT somehow spits out a weird bug that prevents me from editing the affected replies. So, my solution now is just creating an endpoint with API view below:\nclass CommentChildrenAV(mixins.CreateModelMixin, generics.GenericAPIView):\n # This class only allows users to create comments but not list all comments. List all comments would\n # be too taxing for the server if the website got tons of comments.\n queryset = Comment.objects.viewable().get_descendants().filter(status=True)\n serializer_class = CommentSerializer\n\n def get(self, request, pk):\n replies = Comment.objects.viewable().get_descendants().filter(status=True, pk=pk)\n serializer = CommentSerializer(replies, many=True)\n return Response(serializer.data, status=status.HTTP_200_OK)\n\n def post(self, request, *args, **kwargs):\n return self.create(request, *args, **kwargs)\n\n def perform_create(self, serializer):\n # Overriding perform_create. Can create comment using the authenticated account.\n # Cannot pretend to be someone else to create comment on his or her behalf.\n commenter = self.request.user\n now = timezone.now()\n before_now = now - timezone.timedelta(seconds=COM_WAIT_TIME)\n # Make sure user can only create comment again after waiting for wait_time.\n this_user_comments = Comment.objects.filter(user=commenter, created_date__lt=now, created_date__gte=before_now)\n if this_user_comments:\n raise ValidationError(f'You have to wait for {COM_WAIT_TIME} seconds before you can post another comment.')\n elif Comment.objects.filter(user=commenter, level__gt=COMMENT_LEVEL_DEPTH):\n raise ValidationError(f'You cannot make another level-deep reply.')\n else:\n serializer.save(user=commenter)\n\n # By combining perform_create method to filter out only the owner of the comment can edit his or her own\n # comment -- and the permission_classes of IsAuthenticated -- allowing only authenticated user to create\n # comments. 
When doing custom permission - such as redefining BasePermission's has_object_permission,\n # it doesn't work with ListCreateAPIView - because has_object_permission is meant to be used on a single instance\n # such as object detail.\n permission_classes = [IsAuthenticated]\n\nThis API view would allow me to pass in the pk of the reply - and get a JSON response like so:\nresponse = requests.request(\"GET\", BLOG_BASE_URL + f'api/post-list/children/get-child/{pk}/', headers=headers)\n\nOnce I have the response in JSON - I can get the original reply content - and put this reply content into the reply form's initial data like so:\nedit_form = CommentForm(initial=original_comment_data)\n\nThen I get the POSTed new content that the user wants to replace the original reply's content with - the gist of the solution I'm now going with is: if the user is authenticated and if the original JSON content's user_id (meaning the original commenter of the reply text) is the same as request.user.id - then I just do:\nif request.method == 'POST':\n # I can't use the API endpoint here to edit the reply because some weird bug won't allow me to do so.\n # Instead of calling the API endpoint to edit the reply - I just update the database\n # using the ORM (Object-Relational Mapper).\n if request.user.is_authenticated:\n print(content[0]['user_id'], os.getcwd())\n if request.user.id == content[0]['user_id']:\n Comment.objects.filter(status=True, pk=pk).update(content=request.POST.get(strip_invalid_html('content')))\n post_slug = content[0]['post_slug']\n return redirect('single_post', post_slug)\n\nThis now solves my problem for real! I was just hoping that I wouldn't have to cheat by going the ORM route for editing a reply. I would prefer 100% API calls for all actions in this app. Sigh... but now my app is fully functional in terms of having a nested comment system using the MPTT package.\n" ]
[ 0 ]
[]
[]
[ "api", "django", "django_rest_framework", "mptt", "python" ]
stackoverflow_0074585468_api_django_django_rest_framework_mptt_python.txt
Q: python/pandas/one dimensional dataframe Creating a 2 dimensional dataframe works fine: y = np.array([[1,2],[3,4]]) df = pd.DataFrame( y, index=[1,2], columns=["a","b"] ) print (df) But if I try to create a one dimensional dataframe I get an error message: z = np.array([5,6]) df2 = pd.DataFrame( z, index=[3], columns=["a","b"]) print (df2) Error message: Shape of passed values is (1, 2), indices imply (2, 1) I tried: z = np.array([[5],[6]]) But I get the same error message. The reason I might want to create a one dimensional dataframe is so I can append a single row to an existing dataframe. It won't let me append a list or an array, so I have to turn it into a dataframe first. But I can't do that either. I am using anaconda A: Just adding [] z = np.array([5,6]) df2 = pd.DataFrame( [z], index=[3], columns=["a","b"]) df2 Out[67]: a b 3 5 6 A: A 1D array is interpreted as a single column, so it cannot fill a one-row, two-column frame directly. Add another dimension to the array before passing it to the constructor: pd.DataFrame(z[np.newaxis,:], index=[3], columns=["a","b"]) # a b #3 5 6 A: One alternative solution would be switching the index and columns and applying a transpose. z = np.array([5,6]) df2 = pd.DataFrame(z, columns=[3], index=["a","b"]).T df2 A: BENCHMARK!!! TLDR: use np.atleast_2d() z = np.array([5,6]) %timeit pd.DataFrame([z], index=[3], columns=["a","b"]) %timeit pd.DataFrame(np.atleast_2d(z), index=[3], columns=["a","b"]) %timeit pd.DataFrame(z[np.newaxis,:], index=[3], columns=["a","b"]) # 1.17 ms ± 107 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) # 754 µs ± 49.2 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) # 878 µs ± 183 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
python/pandas/one dimensional dataframe
Creating a 2 dimensional dataframe works fine:

y = np.array([[1,2],[3,4]])
df = pd.DataFrame( y, index=[1,2], columns=["a","b"] )
print (df)

But if I try to create a one dimensional dataframe I get an error message:

z = np.array([5,6])
df2 = pd.DataFrame( z, index=[3], columns=["a","b"])
print (df2)

Error message: Shape of passed values is (1, 2), indices imply (2, 1)

I tried:

z = np.array([[5],[6]])

But I get the same error message. The reason I might want to create a one dimensional dataframe is so I can append a single row to an existing dataframe. It won't let me append a list or an array, so I have to turn it into a dataframe first. But I can't do that either. I am using anaconda.
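For context, here is a minimal sketch of the append I am ultimately trying to achieve (the column names are just examples) - I assume the row has to be wrapped in an extra pair of brackets so it is 2 dimensional, and pd.concat used to attach it:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.array([[1, 2], [3, 4]]), index=[1, 2], columns=["a", "b"])
new_row = pd.DataFrame([[5, 6]], index=[3], columns=["a", "b"])  # 2D: one row, two columns
df = pd.concat([df, new_row])  # append the single row
print(df)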
[ "Just adding []\nz = np.array([5,6])\ndf2 = pd.DataFrame( [z], index=[3], columns=[\"a\",\"b\"])\ndf2\nOut[67]: \n a b\n3 5 6\n\n", "You cannot create a dataframe from a 1D array. Add another dimension to the array before passing it to the constructor:\npd.DataFrame(z[np.newaxis,:], index=[3], columns=[\"a\",\"b\"])\n# a b\n#3 5 6\n\n", "One alternative solution would be switching index and column and applying transpose.\nz = np.array([5,6])\ndf2 = pd.DataFrame(z, columns=[3], index=[\"a\",\"b\"]).T\ndf2\n\n\n", "BENCHMARK!!!\nTLDR: use np.atleast_2d()\nz = np.array([5,6])\n%timeit pd.DataFrame([z], index=[3], columns=[\"a\",\"b\"])\n%timeit pd.DataFrame(np.atleast_2d(z), index=[3], columns=[\"a\",\"b\"])\n%timeit pd.DataFrame(z[np.newaxis,:], index=[3], columns=[\"a\",\"b\"])\n\n# 1.17 ms ± 107 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n# 754 µs ± 49.2 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)\n# 878 µs ± 183 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)\n\n" ]
[ 4, 2, 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0052340276_pandas_python.txt
Q: VSCode Jupyter loads incorrect version of python VSCode's Jupyter isn't actually running the version of python that it displays in the lower left of the screen. Below, it purports to be running 3.9.1, but the output of the cell shows that it is indeed running 3.7.9. I selected the displayed rl environment via: Select environment to start Jupyter Server. What doesn't work: Restarting the Jupyter kernel Selecting a different environment (they all actually run 3.7.9 regardless of the env's python) Extra info: Python Output: > conda --version > pyenv root > python3.7 ~/.vscode/extensions/ms-python.python-2020.12.424452561/pythonFiles/pyvsc-run-isolated.py -c "import sys;print(sys.executable)" > python3.6 ~/.vscode/extensions/ms-python.python-2020.12.424452561/pythonFiles/pyvsc-run-isolated.py -c "import sys;print(sys.executable)" > python3 ~/.vscode/extensions/ms-python.python-2020.12.424452561/pythonFiles/pyvsc-run-isolated.py -c "import sys;print(sys.executable)" > python2 ~/.vscode/extensions/ms-python.python-2020.12.424452561/pythonFiles/pyvsc-run-isolated.py -c "import sys;print(sys.executable)" > python ~/.vscode/extensions/ms-python.python-2020.12.424452561/pythonFiles/pyvsc-run-isolated.py -c "import sys;print(sys.executable)" > ~/.local/share/miniconda3/envs/rl/bin/python ~/.vscode/extensions/ms-python.python-2020.12.424452561/pythonFiles/pyvsc-run-isolated.py -c "import sys;print(sys.executable)" > conda info --json Starting Pylance language server. Python interpreter path: ~/.local/share/miniconda3/envs/rl/bin/python > conda env list > conda env list Yes, that last listed interpreter really is v3.9.1: % ~/.local/share/miniconda3/envs/rl/bin/python --version Python 3.9.1 Jupyter Output: User belongs to experiment group 'jupyterTest' > ~/.local/share/miniconda3/envs/rl/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py -c "import ipykernel" > ~/.local/share/miniconda3/envs/nndl/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py -c "import notebook" > ~/.local/share/miniconda3/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py -c "import ipykernel" > ~/.local/share/miniconda3/envs/nndl/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py -c "import jupyter" > ~/.local/share/miniconda3/envs/nndl/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py jupyter kernelspec --version > ~/.local/share/miniconda3/envs/nndl/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py vscode_datascience_helpers.daemon --daemon-module=vscode_datascience_helpers.jupyter_daemon -v > ~/.local/share/miniconda3/envs/nndl/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py vscode_datascience_helpers.daemon --daemon-module=vscode_datascience_helpers.jupyter_daemon -v > ~/.local/share/miniconda3/envs/nndl/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py vscode_datascience_helpers.daemon --daemon-module=vscode_datascience_helpers.jupyter_daemon -v > ~/.local/share/miniconda3/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py vscode_datascience_helpers.daemon --daemon-module=vscode_datascience_helpers.kernel_launcher_daemon -v > 
~/.local/share/miniconda3/envs/rl/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py vscode_datascience_helpers.daemon --daemon-module=vscode_datascience_helpers.kernel_launcher_daemon -v Started kernel Python 3 > ~/.local/share/miniconda3/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py vscode_datascience_helpers.daemon --daemon-module=vscode_datascience_helpers.kernel_launcher_daemon -v This final listed python is the incorrect, non-selected version: % ~/.local/share/miniconda3/bin/python --version Python 3.7.9 Why is this version being used when the correct ~/.local/share/miniconda3/envs/rl/bin/python is listed immediately above? A: What eventually worked for me was: Close VSCode In .ipynb pane's top right, change version of Python to the desired conda environment Start the kernel Changing the kernel and then restarting it didn't seem to work. A: In VSCode, the Python environment of Jupyter notebook is independent, it uses the Python environment we chose last time by default. We can click on "Python3: Idle" in the upper right corner and switch to the Python3.9 environment. Please reload VSCode after switching the Python environment to make Jupyter reload the corresponding kernel. A: https://www.it-swarm-vi.com/vi/python/ipynb-nhap-tap-tin-ipynb-khac/1042793988/ !pip install import_ipynb`enter code here` !pip install ipynb import import_ipynb import ipynb from ipynb.fs.full.tim_folder import tim_folder A: I've been in situations where I've tried to specify the python version when creating the environment for conda. conda env create -f environment.yml python={version} It would usually install the latest version and ignore my command. To resolve this I activate my environment: conda activate ${env} Then I install the right version: conda install python=3.9.12 -y The key or big take away that I gathered is in the VSCode UI it gives me the option to select my kernel environment & it also gives me the path to that environment. When I went in to that directory the python version was not there. So...install it, reboot VScode.
VSCode Jupyter loads incorrect version of python
VSCode's Jupyter isn't actually running the version of python that it displays in the lower left of the screen. Below, it purports to be running 3.9.1, but the output of the cell shows that it is actually running 3.7.9. I selected the displayed rl environment via: Select environment to start Jupyter Server.

What doesn't work:

Restarting the Jupyter kernel
Selecting a different environment (they all actually run 3.7.9 regardless of the env's python)

Extra info:

Python Output:

> conda --version
> pyenv root
> python3.7 ~/.vscode/extensions/ms-python.python-2020.12.424452561/pythonFiles/pyvsc-run-isolated.py -c "import sys;print(sys.executable)"
> python3.6 ~/.vscode/extensions/ms-python.python-2020.12.424452561/pythonFiles/pyvsc-run-isolated.py -c "import sys;print(sys.executable)"
> python3 ~/.vscode/extensions/ms-python.python-2020.12.424452561/pythonFiles/pyvsc-run-isolated.py -c "import sys;print(sys.executable)"
> python2 ~/.vscode/extensions/ms-python.python-2020.12.424452561/pythonFiles/pyvsc-run-isolated.py -c "import sys;print(sys.executable)"
> python ~/.vscode/extensions/ms-python.python-2020.12.424452561/pythonFiles/pyvsc-run-isolated.py -c "import sys;print(sys.executable)"
> ~/.local/share/miniconda3/envs/rl/bin/python ~/.vscode/extensions/ms-python.python-2020.12.424452561/pythonFiles/pyvsc-run-isolated.py -c "import sys;print(sys.executable)"
> conda info --json
Starting Pylance language server.
Python interpreter path: ~/.local/share/miniconda3/envs/rl/bin/python
> conda env list
> conda env list

Yes, that last listed interpreter really is v3.9.1:

% ~/.local/share/miniconda3/envs/rl/bin/python --version
Python 3.9.1

Jupyter Output:

User belongs to experiment group 'jupyterTest'
> ~/.local/share/miniconda3/envs/rl/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py -c "import ipykernel"
> ~/.local/share/miniconda3/envs/nndl/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py -c "import notebook"
> ~/.local/share/miniconda3/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py -c "import ipykernel"
> ~/.local/share/miniconda3/envs/nndl/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py -c "import jupyter"
> ~/.local/share/miniconda3/envs/nndl/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py jupyter kernelspec --version
> ~/.local/share/miniconda3/envs/nndl/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py vscode_datascience_helpers.daemon --daemon-module=vscode_datascience_helpers.jupyter_daemon -v
> ~/.local/share/miniconda3/envs/nndl/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py vscode_datascience_helpers.daemon --daemon-module=vscode_datascience_helpers.jupyter_daemon -v
> ~/.local/share/miniconda3/envs/nndl/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py vscode_datascience_helpers.daemon --daemon-module=vscode_datascience_helpers.jupyter_daemon -v
> ~/.local/share/miniconda3/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py vscode_datascience_helpers.daemon --daemon-module=vscode_datascience_helpers.kernel_launcher_daemon -v
> ~/.local/share/miniconda3/envs/rl/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py vscode_datascience_helpers.daemon --daemon-module=vscode_datascience_helpers.kernel_launcher_daemon -v
Started kernel Python 3
> ~/.local/share/miniconda3/bin/python ~/.vscode/extensions/ms-toolsai.jupyter-2020.12.414227025/pythonFiles/pyvsc-run-isolated.py vscode_datascience_helpers.daemon --daemon-module=vscode_datascience_helpers.kernel_launcher_daemon -v

This final listed python is the incorrect, non-selected version:

% ~/.local/share/miniconda3/bin/python --version
Python 3.7.9

Why is this version being used when the correct ~/.local/share/miniconda3/envs/rl/bin/python is listed immediately above?
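For completeness, this is the cell I ran to confirm which interpreter the kernel is actually using (a minimal check, nothing project-specific):

import sys
print(sys.executable)  # prints the path of the interpreter backing the kernel
print(sys.version)     # reports 3.7.9 here, even though the UI shows 3.9.1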
[ "What eventually worked for me was:\n\nClose VSCode\nIn .ipynb pane's top right, change version of Python to the desired conda environment\nStart the kernel\n\nChanging the kernel and then restarting it didn't seem to work.\n", "In VSCode, the Python environment of Jupyter notebook is independent, it uses the Python environment we chose last time by default. We can click on \"Python3: Idle\" in the upper right corner and switch to the Python3.9 environment.\nPlease reload VSCode after switching the Python environment to make Jupyter reload the corresponding kernel.\n\n", "https://www.it-swarm-vi.com/vi/python/ipynb-nhap-tap-tin-ipynb-khac/1042793988/\n!pip install import_ipynb`enter code here`\n!pip install ipynb\n\nimport import_ipynb\nimport ipynb\n\nfrom ipynb.fs.full.tim_folder import tim_folder\n\n", "I've been in situations where I've tried to specify the python version when creating the environment for conda.\nconda env create -f environment.yml python={version}\n\nIt would usually install the latest version and ignore my command.\n\nTo resolve this I activate my environment:\nconda activate ${env}\nThen I install the right version:\nconda install python=3.9.12 -y\nThe key or big take away that I gathered is in the VSCode UI it gives me the option to select my kernel environment & it also gives me the path to that environment. When I went in to that directory the python version was not there. So...install it, reboot VScode.\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "jupyter_notebook", "python", "python_3.x", "visual_studio_code" ]
stackoverflow_0065503907_jupyter_notebook_python_python_3.x_visual_studio_code.txt
Q: pip connection failure: cannot fetch index base URL http://pypi.python.org/simple/ I run sudo pip install git-review, and get the following messages: Downloading/unpacking git-review Cannot fetch index base URL http://pypi.python.org/simple/ Could not find any downloads that satisfy the requirement git-review No distributions at all found for git-review Storing complete log in /home/sai/.pip/pip.log Does anyone has any idea about this? A: I know this is an old thread, but I encountered this issue today and wanted to share my solution to the problem because I haven't seen this solution elsewhere on SO. My environment: Python 2.7.12/2.7.14 on Ubuntu 12.04.5 LTS in a virtualenv, pip version 1.1. My Errors: pip install nose in console: Cannot fetch index base URL http://pypi.python.org/simple/ in ~/.pip/pip.log: Could not fetch URL http://pypi.python.org/simple/: HTTP Error 403: SSL is required Curious for me because I had been running these same commands in a script without issue for about a year. this fixed it: pip install --index-url=https://pypi.python.org/simple/ nose (note the https) Hope this helps someone! A: You need to upgrade your pip installation because it is still using http instead of https. The --index-url (short version: -i) option allows you to specify an index-url in the call to pip itself, there you can use the https-variant. Then you can instruct pip to upgrade itself. sudo pip install --index-url https://pypi.python.org/simple/ --upgrade pip Afterwards you should be able to use pip without the --index-url option. I believe that the release 7.0.0 (2015-05-21) triggered this issue. The release note for that version states the following: BACKWARD INCOMPATIBLE No longer implicitly support an insecure origin origin, and instead require insecure origins be explicitly trusted with the --trusted-host option. You can check your pip version with pip --version. This would mean that issuing sudo pip install --trusted-host --upgrade pip once would also solve this issue, albeit download pip over insecure http. This might also not work at all, because it is possible that the insecure endpoint is no longer accessible on the server (I have not tested this). A: EDIT: The current version of PIP no longer has this issue. As of right now, version: 7.1.2 is the current version. Here is the PIP link: https://pypi.python.org/pypi/pip ORIGINAL FIX: I got this issue when trying to use pip==1.5.4 This is an issue related to PIP and Python's PYPI trusting SSL certificates. If you look in the PIP log in Mac OS X at: /Users/username/.pip/pip.log it will give you more detail. My workaround to get PIP back up and running after hours of trying different stuff was to go into my site-packages in Python whether it is in a virtualenv or in your normal site-packages, and get rid of the current PIP version. For me I had pip==1.5.4 I deleted the PIP directory and the PIP egg file. Then I ran easy_install pip==1.2.1 This version of PIP doesn't have the SSL issue, and then I was able to go and run my normal pip install -r requirements.txt within my virtualenv to set up all packages that I wanted that were listed in my requirements.txt file. This is also the recommended hack to get passed the issue by several people on this Google Group that I found: https://groups.google.com/forum/#!topic/beagleboard/aSlPCNYcVjw A: I added --proxy command line option to point to the proxy and it's working (pip version is 1.5.4 and python 2.7). 
for some reason it was not taking the shell env variables HTTPS_PROXY, HTTP_PROXY, https_proxy, http_proxy. sudo pip --proxy [user:passwd@]proxy.server:port install git-review A: Check your proxy connection, I had a similar issue, then I changed my connection which wasn't proxied and boom, of it started downloading and setting up the library A: I had the same issue with pip 1.5.6. I just deleted the ~/.pip folder and it worked like a charm. rm -r ~/.pip/ A: I had the same problem with pip==1.5.6. I had to correct my system time. # date -s "2014-12-09 10:09:50" A: This worked for me on Ubuntu 12.04. pip install --index-url=https://pypi.python.org/simple/ -U scikit-learn A: If that's not a proxy/network problem you should try to create/edit config file .pip/pip.conf or if you are running pip as root /root/.pip/pip.conf. Check and change index-url from http to https. It should be like this: [global] index-url=https://pypi.python.org/simple/ Worked for me with Ubuntu 12 and pip 9.0.1 A: it works! sudo pip --proxy=http://202.194.64.89:8000 install elasticsearch ; 202.194.64.89:8000 is my PROXY, A: In my case (Python 3.4, in a virtual environment, running under macOS 10.10.6) I could not even upgrade pip itself. Help came from this SO answer in the form of the following one-liner: curl https://bootstrap.pypa.io/get-pip.py | python (If you do not use a virtual environment, you may need sudo python.) With this I managed to upgrade pip from Version 1.5.6 to Version 10.0.0 (quite a jump!). This version does not use TLS 1.0 or 1.1 which are not supported any more by the Python.org site(s), and can install PyPI packages nicely. No need to specify --index-url=https://pypi.python.org/simple/. A: I was able to fix this by upgrading my python, which had previously been attached to an outdated version of OpenSSL. Now it is using 1.0.1h-1 and my package will pip install. FYI, my log and commands, using anaconda and installing the pytest-ipynb package [1] : $ conda update python Fetching package metadata: .... Solving package specifications: . Package plan for installation in environment /Users/me/anaconda/envs/py27: The following NEW packages will be INSTALLED: openssl: 1.0.1h-1 The following packages will be UPDATED: python: 2.7.5-3 --> 2.7.8-1 readline: 6.2-1 --> 6.2-2 sqlite: 3.7.13-1 --> 3.8.4.1-0 tk: 8.5.13-1 --> 8.5.15-0 Proceed ([y]/n)? y Unlinking packages ... [ COMPLETE ] |#############################################################| 100% Linking packages ... [ COMPLETE ] |#############################################################| 100% $ pip install pytest-ipynb Downloading/unpacking pytest-ipynb Downloading pytest-ipynb-0.1.1.tar.gz Running setup.py (path:/private/var/folders/4f/b8gwyhg905x94twqw2pbklyw0000gn/T/pip_build_me/pytest-ipynb/setup.py) egg_info for package pytest-ipynb Requirement already satisfied (use --upgrade to upgrade): pytest in /Users/me/anaconda/envs/py27/lib/python2.7/site-packages (from pytest-ipynb) Installing collected packages: pytest-ipynb Running setup.py install for pytest-ipynb Successfully installed pytest-ipynb Cleaning up... [1] My ticket about this issue; https://github.com/zonca/pytest-ipynb/issues/1 A: I faced same problem but that was related proxy. it was resolved by setting proxy. Set http_proxy=http://myuserid:mypassword@myproxyname:myproxyport Set https_proxy=http://myuserid:mypassword@myproxyname:myproxyport This might help someone. A: If your proxy is configured correctly, then pip version 1.5.6 will handle this correctly. The bug was resolved. 
You can upgrade pip with easy_install pip==1.5.6 A: Extra answer: if you are doing this from chroot. You need source of random numbers to be able to establish secure connection to pypi. On linux, you can bind-mount host dev to chroot dev: mount --bind /dev /path-to-chroot/dev A: I also got this error while installing pyinstaller in a proxied connection. I just connect direct Internet connection(Using my dongle) and did that again. sudo pip install pyinstaller This worked for me. A: You might be missing a DNS server conf in /etc/resolv.conf make sure u can ping to: ping pypi.python.org if you're not getting a ping try to add a DNS server to file...something like: nameserver xxx.xxx.xxx.xxx A: My explanation/enquiry is for windows environment. I am pretty new to python, and this is for someone still novice than me. I installed the latest pip(python installer package) and downloaded 32 bit/64 bit (open source) compatible binaries from http://www.lfd.uci.edu/~gohlke/pythonlibs/, and it worked. Steps followed to install pip, though usually pip is installed by default during python installation from www.python.org/downloads/ - Download pip-7.1.0.tar.gz from https://pypi.python.org/pypi/pip. - Unzip and un-tar the above file. - In the pip-7.1.0 folder, run: python setup.py install. This installed pip latest version. Use pip to install(any feasible operation) binary package. Run the pip app to do the work(install file), as below: \python27\scripts\pip2.7.exe install file_path\file_name --proxy If you face, wheel(i.e egg) issue, use the compatible binary package file. Hope this helps. A: in my case I would install django ( pip install django ) and it has a same problem with ssl certificate (Cannot fetch index base URL http://pypi.python.org/simple/ ) it's from virtualenv so DO : FIRST: delete your virtualenv deactivate rm -rf env SECOND: check have pip pip3 -V if you don't have sudo apt-get install python3-pip FINALLY: install virtualenv with nosite-packages and make your virenviroment sudo pip3 install virtualenv virtualenv --no-site-packages -p /usr/bin/python3.6 . env/bin/activate A: Check ~/.pip/pip.log It could contain the error message Could not fetch URL https://pypi.python.org/simple/pip/: 403 Client Error: [[[!!! BREAKING CHANGE !!!]]] Support for clients that do not support Server Name Indication is temporarily disabled and will be permanently deprecated soon. See https://status.python.org/incidents/hzmjhqsdjqgb and https://github.com/pypa/pypi-support/issues/978 [[[!!! END BREAKING CHANGE !!!]]] If so, the fix is to upgrade to that last version of Python 2.7. See https://github.com/pypa/pypi-support/issues/978 In my case I could do that with add-apt-repository ppa:fkrull/deadsnakes-python2.7 && apt-get update && apt-get upgrade but YMMV may vary depending on distribution. A: I had a similar problem, but in my case I was getting the error: Downloading/unpacking bencode Cannot fetch index base URL http://c.pypi.python.org/simple/ Could not find any downloads that satisfy the requirement bencode No distributions at all found for bencode Storing complete log in /home/andrew/.pip/pip.log In my case I was able to fix the error by editing ~/.pip/pip.conf and changing http://c.pypi.python.org/simple/ to http://pypi.python.org/simple and then pip worked fine again. 
A: I got this error message in ~/.pip/pip.log Could not fetch URL https://pypi.python.org/simple/: connection error: [Errno 185090050] _ssl.c:344: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib Will skip URL https://pypi.python.org/simple/ when looking for download links for regulargrid I fixed it by updating my ~/.pip/pip.conf. It accidentally pointed to cacert.pem file that did not exist and looked like this [global] cert = /some/path/.pip/cacert.pem A: I used to use the easy_install pip==1.2.1 workaround but I randomly found that if you're having this bug, you probably installed a 32bit version of python. If you install a 64bit version of it by installing it from the source and then build you virtualenv upon it, you wont have that pip bug anymore. A: I too used the chosen solution (downgrading pip) to work around this issue until I ran into another seemingly unrelated issue caused by the same underlying problem. Python's version of OpenSSL was out of date. Check your OpenSSL version: python -c 'import ssl; print(ssl.OPENSSL_VERSION)' If the version is 0.9.7, that should verify that OpenSSL needs to be updated. If you know how to do that directly, great (but please let me know in a comment). If not, you can follow the advice in this answer, and reinstall python from the 64 bit/32 bit installer instead of the 32 bit only installer from python.org (I'm using python 3.4.2). I now have OpenSSL version 0.9.8, and none of these issues. A: Try doing reinstallation of pip : curl -O https://pypi.python.org/packages/source/p/pip/pip-1.2.1.tar.gz tar xvfz pip-1.2.1.tar.gz cd pip-1.2.1 python setup.py install If curl doesnot work , you will have proxy issues , Please fix that it should work fine. Check after opening google.com in your browser in linux. The try installing pip install virtualenv A: In case you use a firewall, make sure outbound connections to port 443 are not blocked, e.g. run: sudo iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT A: I have met the same questions with you. When I realize it may be caused by unmatched version of numpy or pip, I uninstalled numpy and pip, then continue as this 'https://radimrehurek.com/gensim/install.html', at last I succeed! A: C:\Users\Asus>pip install matplotlib Downloading/unpacking matplotlib Cannot fetch index base URL https://pypi.python.org/simple/ Could not find any downloads that satisfy the requirement matplotlib Cleaning up... No distributions at all found for matplotlib Storing debug log for failure in C:\Users\Asus\pip\pip.log I used 'easy_install pip==1.2.1' and it worked fine. C:\Users\Asus>easy_install pip==1.2.1 Searching for pip==1.2.1 Reading https://pypi.python.org/simple/pip/ Best match: pip 1.2.1 Downloading ... Then on using this command 'pip install matplotlib' C:\Users\Asus>pip install matplotlib Downloading/unpacking matplotlib Downloading matplotlib-2.0.0b4.tar.gz (unknown size): A: If you're running these commands in a Docker container on Windows, it may mean that your docker machine's network connection is stale and needs to be rebuilt. To fix it, run these commands: docker-machine stop docker-machine start @FOR /f "tokens=*" %i IN ('docker-machine env') DO @%i A: I'm now getting this in $HOME/.pip/pip.log: Could not fetch URL https://pypi.python.org/simple/: HTTP Error 403: TLSv1.2+ is required I don't have a straightforward solution for this, but I'm mentioning it as something to watch out for before you waste time on trying some of the other solutions here. 
I'm obviously already using a https URL There is no proxy or firewall issue Using trusted-host didn't change anything (dunno where I picked this up) For what it's worth my openssl is too old to even have ssl.OPENSSL_VERSION so maybe that's really the explanation here. In the end, wiping my virtual environment and recreating it with virtualenv --setuptools env seems to have fixed at least the major blockers. This is on a really old Debian box, Python 2.6.6. A: My problem was the system virtualenv version. When I created an env with python3 venv everything worked. But when I used virtualenv (by default with python2.7) to create an env I receive those error messages. In the virtualenv created the pip version was 1.5.6, but my system pip version was 10.0.1 Then I ran (outside any env): pip install virtualenv --upgrade It upgraded virtualenv to version 16.0.0 and now my pip install in the envs created with virtualenv and python2.7 work flawlessly. Also, the pip version inside the env is now 10.0.1. Before upgrade: A: I tried almost all answers and nothing fix my error, so I just reinstall python (in my case I have version 2.7.9 and I install 2.7.15) and the error finally fixed. No need to uninstall python first, the installer do it for you. A: You can try with this below command: python -m pip install --trusted-host https://pypi.python.org deepdiff it will work. A: I found the solutions from Daniel F and mattdedek very helpful: use an https URL, and upgrade pip in order to make that the default. But that still didn't fix the problem for me. I had this error, under Mac OS X / Python 3.4: $ pip3 install Flask Downloading/unpacking Flask Cannot fetch index base URL https://pypi.python.org/simple/ Could not find any downloads that satisfy the requirement Flask Cleaning up... No distributions at all found for Flask Storing debug log for failure in /Users/huttarl/.pip/pip.log and pip.log basically showed the same thing: File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/pip/index.py", line 277, in find_requirement raise DistributionNotFound('No distributions at all found for %s' % req) pip.exceptions.DistributionNotFound: No distributions at all found for Flask With help from a friend, I learned that upgrading from Python 3.4 to 3.8 fixed the problem. Apparently this is because one of the newer versions of Python 3 included updated certificates: Certificate verification and OpenSSL This package includes its own private copy of OpenSSL 1.1.1. The trust certificates in system and user keychains managed by the Keychain Access application and the security command line utility are not used as defaults by the Python ssl module. A sample command script is included in /Applications/Python 3.8 to install a curated bundle of default root certificates from the third-party certifi package (https://pypi.org/project/certifi/). Double-click on Install Certificates to run it. The bundled pip has its own default certificate store for verifying download connections. After this upgrade, and running Install Certificates.command, the pip3 install Flask runs successfully. You may have already upgraded to Python 3.6+ in Mac OS, and just overlooked this certificate installation command. In that case, browse to Applications/Python 3.x and double-click Install Certificates.command. (See Mac OSX python ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749)) A: Late 2022 answer after working on a legacy system: Check a younger ubuntu version. 
By raising the ubuntu version more and more, testing this with a Dockerfile, I could fix a legacy setup by taking Ubuntu 18, see Docker build error "Cannot fetch index base URL http://pypi.python.org/simple/".
pip connection failure: cannot fetch index base URL http://pypi.python.org/simple/
I run sudo pip install git-review, and get the following messages:

Downloading/unpacking git-review
Cannot fetch index base URL http://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement git-review
No distributions at all found for git-review
Storing complete log in /home/sai/.pip/pip.log

Does anyone have any idea about this?
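For reference, these are the commands I can run to dig further (the log path comes from the message above; -v just makes pip verbose):

cat /home/sai/.pip/pip.log
sudo pip install -v git-review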
[ "I know this is an old thread, but I encountered this issue today and wanted to share my solution to the problem because I haven't seen this solution elsewhere on SO.\nMy environment: Python 2.7.12/2.7.14 on Ubuntu 12.04.5 LTS in a virtualenv, pip version 1.1.\nMy Errors:\npip install nose\n\nin console:\nCannot fetch index base URL http://pypi.python.org/simple/\n\nin ~/.pip/pip.log:\nCould not fetch URL http://pypi.python.org/simple/: HTTP Error 403: SSL is required\n\nCurious for me because I had been running these same commands in a script without issue for about a year.\nthis fixed it:\npip install --index-url=https://pypi.python.org/simple/ nose\n\n(note the https)\nHope this helps someone!\n", "You need to upgrade your pip installation because it is still using http instead of https.\nThe --index-url (short version: -i) option allows you to specify an index-url in the call to pip itself, there you can use the https-variant. Then you can instruct pip to upgrade itself.\nsudo pip install --index-url https://pypi.python.org/simple/ --upgrade pip\nAfterwards you should be able to use pip without the --index-url option.\n\nI believe that the release 7.0.0 (2015-05-21) triggered this issue. The release note for that version states the following:\n\nBACKWARD INCOMPATIBLE No longer implicitly support an insecure origin\norigin, and instead require insecure origins be explicitly trusted\nwith the --trusted-host option.\n\nYou can check your pip version with pip --version.\nThis would mean that issuing sudo pip install --trusted-host --upgrade pip once would also solve this issue, albeit download pip over insecure http. This might also not work at all, because it is possible that the insecure endpoint is no longer accessible on the server (I have not tested this).\n", "EDIT:\nThe current version of PIP no longer has this issue. As of right now, version: 7.1.2 is the current version. Here is the PIP link:\nhttps://pypi.python.org/pypi/pip\nORIGINAL FIX:\nI got this issue when trying to use pip==1.5.4\nThis is an issue related to PIP and Python's PYPI trusting SSL certificates. If you look in the PIP log in Mac OS X at: /Users/username/.pip/pip.log it will give you more detail. \nMy workaround to get PIP back up and running after hours of trying different stuff was to go into my site-packages in Python whether it is in a virtualenv or in your normal site-packages, and get rid of the current PIP version. For me I had pip==1.5.4 \nI deleted the PIP directory and the PIP egg file. Then I ran\neasy_install pip==1.2.1 \n\nThis version of PIP doesn't have the SSL issue, and then I was able to go and run my normal pip install -r requirements.txt within my virtualenv to set up all packages that I wanted that were listed in my requirements.txt file.\nThis is also the recommended hack to get passed the issue by several people on this Google Group that I found:\nhttps://groups.google.com/forum/#!topic/beagleboard/aSlPCNYcVjw\n", "I added --proxy command line option to point to the proxy and it's working (pip version is 1.5.4 and python 2.7). 
for some reason it was not taking the shell env variables HTTPS_PROXY, HTTP_PROXY, https_proxy, http_proxy.\nsudo pip --proxy [user:passwd@]proxy.server:port install git-review\n\n", "Check your proxy connection, I had a similar issue, then I changed my connection which wasn't proxied and boom, of it started downloading and setting up the library\n", "I had the same issue with pip 1.5.6.\nI just deleted the ~/.pip folder and it worked like a charm.\nrm -r ~/.pip/\n\n", "I had the same problem with pip==1.5.6. I had to correct my system time.\n# date -s \"2014-12-09 10:09:50\"\n\n", "This worked for me on Ubuntu 12.04.\npip install --index-url=https://pypi.python.org/simple/ -U scikit-learn\n\n", "If that's not a proxy/network problem you should try to create/edit config file .pip/pip.conf or if you are running pip as root /root/.pip/pip.conf. Check and change index-url from http to https.\nIt should be like this:\n[global] \nindex-url=https://pypi.python.org/simple/\n\nWorked for me with Ubuntu 12 and pip 9.0.1\n", "it works!\nsudo pip --proxy=http://202.194.64.89:8000 install elasticsearch ;\n202.194.64.89:8000 is my PROXY,\n", "In my case (Python 3.4, in a virtual environment, running under macOS 10.10.6) I could not even upgrade pip itself. Help came from this SO answer in the form of the following one-liner:\ncurl https://bootstrap.pypa.io/get-pip.py | python\n(If you do not use a virtual environment, you may need sudo python.)\nWith this I managed to upgrade pip from Version 1.5.6 to Version 10.0.0 (quite a jump!). This version does not use TLS 1.0 or 1.1 which are not supported any more by the Python.org site(s), and can install PyPI packages nicely. No need to specify --index-url=https://pypi.python.org/simple/.\n", "I was able to fix this by upgrading my python, which had previously been attached to an outdated version of OpenSSL. Now it is using 1.0.1h-1 and my package will pip install.\nFYI, my log and commands, using anaconda and installing the pytest-ipynb package [1] : \n\n$ conda update python\nFetching package metadata: ....\nSolving package specifications: .\nPackage plan for installation in environment /Users/me/anaconda/envs/py27:\nThe following NEW packages will be INSTALLED:\n openssl: 1.0.1h-1\nThe following packages will be UPDATED:\n python: 2.7.5-3 --> 2.7.8-1\n readline: 6.2-1 --> 6.2-2\n sqlite: 3.7.13-1 --> 3.8.4.1-0\n tk: 8.5.13-1 --> 8.5.15-0\nProceed ([y]/n)? y\nUnlinking packages ...\n[ COMPLETE ] |#############################################################| 100%\nLinking packages ...\n[ COMPLETE ] |#############################################################| 100%\n$ pip install pytest-ipynb\nDownloading/unpacking pytest-ipynb\n Downloading pytest-ipynb-0.1.1.tar.gz\n Running setup.py (path:/private/var/folders/4f/b8gwyhg905x94twqw2pbklyw0000gn/T/pip_build_me/pytest-ipynb/setup.py) egg_info for package pytest-ipynb\nRequirement already satisfied (use --upgrade to upgrade): pytest in /Users/me/anaconda/envs/py27/lib/python2.7/site-packages (from pytest-ipynb)\nInstalling collected packages: pytest-ipynb\n Running setup.py install for pytest-ipynb\nSuccessfully installed pytest-ipynb\nCleaning up...\n\n[1] My ticket about this issue; https://github.com/zonca/pytest-ipynb/issues/1\n", "I faced same problem but that was related proxy. 
it was resolved by setting proxy.\nSet http_proxy=http://myuserid:mypassword@myproxyname:myproxyport\nSet https_proxy=http://myuserid:mypassword@myproxyname:myproxyport\n\nThis might help someone.\n", "If your proxy is configured correctly, then pip version 1.5.6 will handle this correctly. The bug was resolved.\nYou can upgrade pip with easy_install pip==1.5.6\n", "Extra answer: if you are doing this from chroot.\nYou need source of random numbers to be able to establish secure connection to pypi.\nOn linux, you can bind-mount host dev to chroot dev:\nmount --bind /dev /path-to-chroot/dev\n\n", "I also got this error while installing pyinstaller in a proxied connection. I just connect direct Internet connection(Using my dongle) and did that again.\n sudo pip install pyinstaller\n\nThis worked for me.\n", "You might be missing a DNS server conf in /etc/resolv.conf\nmake sure u can ping to:\nping pypi.python.org\nif you're not getting a ping try to add a DNS server to file...something like:\nnameserver xxx.xxx.xxx.xxx\n", "My explanation/enquiry is for windows environment.\nI am pretty new to python, and this is for someone still novice than me. \nI installed the latest pip(python installer package) and downloaded 32 bit/64 bit (open source) compatible binaries from http://www.lfd.uci.edu/~gohlke/pythonlibs/, and it worked.\n\nSteps followed to install pip, though usually pip is installed by default during python installation from www.python.org/downloads/ \n- Download pip-7.1.0.tar.gz from https://pypi.python.org/pypi/pip. \n- Unzip and un-tar the above file. \n- In the pip-7.1.0 folder, run: python setup.py install. This installed pip latest version.\n\nUse pip to install(any feasible operation) binary package.\nRun the pip app to do the work(install file), as below:\n\\python27\\scripts\\pip2.7.exe install file_path\\file_name --proxy \nIf you face, wheel(i.e egg) issue, use the compatible binary package file.\nHope this helps.\n", "in my case I would install django (\n\npip install django\n\n)\nand it has a same problem with ssl certificate (Cannot fetch index base URL http://pypi.python.org/simple/ )\nit's from virtualenv so DO :\nFIRST:\ndelete your virtualenv\n\ndeactivate\nrm -rf env\n\nSECOND:\ncheck have pip\n\npip3 -V\n\nif you don't have\n\nsudo apt-get install python3-pip\n\nFINALLY:\ninstall virtualenv with nosite-packages\nand make your virenviroment\n\nsudo pip3 install virtualenv\nvirtualenv --no-site-packages -p /usr/bin/python3.6\n. env/bin/activate\n\n", "Check ~/.pip/pip.log\nIt could contain the error message\nCould not fetch URL https://pypi.python.org/simple/pip/: 403 Client Error: [[[!!! BREAKING CHANGE !!!]]] Support for clients that do not support Server Name Indication is temporarily disabled and will be permanently deprecated soon. See https://status.python.org/incidents/hzmjhqsdjqgb and https://github.com/pypa/pypi-support/issues/978 [[[!!! END BREAKING CHANGE !!!]]]\nIf so, the fix is to upgrade to that last version of Python 2.7. 
See https://github.com/pypa/pypi-support/issues/978\nIn my case I could do that with add-apt-repository ppa:fkrull/deadsnakes-python2.7 && apt-get update && apt-get upgrade but YMMV may vary depending on distribution.\n", "I had a similar problem, but in my case I was getting the error:\nDownloading/unpacking bencode\n Cannot fetch index base URL http://c.pypi.python.org/simple/\n Could not find any downloads that satisfy the requirement bencode\nNo distributions at all found for bencode\nStoring complete log in /home/andrew/.pip/pip.log\n\nIn my case I was able to fix the error by editing ~/.pip/pip.conf and changing http://c.pypi.python.org/simple/ to http://pypi.python.org/simple and then pip worked fine again.\n", "I got this error message in ~/.pip/pip.log\nCould not fetch URL https://pypi.python.org/simple/: connection error: [Errno 185090050] _ssl.c:344: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib\n Will skip URL https://pypi.python.org/simple/ when looking for download links for regulargrid\n\nI fixed it by updating my ~/.pip/pip.conf. It accidentally pointed to cacert.pem file that did not exist and looked like this\n[global]\ncert = /some/path/.pip/cacert.pem\n\n", "I used to use the easy_install pip==1.2.1 workaround but I randomly found that if you're having this bug, you probably installed a 32bit version of python.\nIf you install a 64bit version of it by installing it from the source and then build you virtualenv upon it, you wont have that pip bug anymore.\n", "I too used the chosen solution (downgrading pip) to work around this issue until I ran into another seemingly unrelated issue caused by the same underlying problem. Python's version of OpenSSL was out of date. Check your OpenSSL version:\npython -c 'import ssl; print(ssl.OPENSSL_VERSION)'\n\nIf the version is 0.9.7, that should verify that OpenSSL needs to be updated. If you know how to do that directly, great (but please let me know in a comment). If not, you can follow the advice in this answer, and reinstall python from the 64 bit/32 bit installer instead of the 32 bit only installer from python.org (I'm using python 3.4.2). I now have OpenSSL version 0.9.8, and none of these issues.\n", "Try doing reinstallation of pip : \ncurl -O https://pypi.python.org/packages/source/p/pip/pip-1.2.1.tar.gz\ntar xvfz pip-1.2.1.tar.gz\ncd pip-1.2.1\npython setup.py install\n\nIf curl doesnot work , you will have proxy issues , Please fix that it should work fine. Check after opening google.com in your browser in linux.\nThe try installing \npip install virtualenv\n\n", "In case you use a firewall, make sure outbound connections to port 443 are not blocked, e.g. run:\nsudo iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT\n\n", "I have met the same questions with you. When I realize it may be caused by unmatched version of numpy or pip, I uninstalled numpy and pip, then continue as this 'https://radimrehurek.com/gensim/install.html', at last I succeed! 
\n", "C:\\Users\\Asus>pip install matplotlib\nDownloading/unpacking matplotlib\n Cannot fetch index base URL https://pypi.python.org/simple/\n Could not find any downloads that satisfy the requirement matplotlib\nCleaning up...\nNo distributions at all found for matplotlib\nStoring debug log for failure in C:\\Users\\Asus\\pip\\pip.log\n\nI used 'easy_install pip==1.2.1' and it worked fine.\nC:\\Users\\Asus>easy_install pip==1.2.1\nSearching for pip==1.2.1\nReading https://pypi.python.org/simple/pip/\nBest match: pip 1.2.1\nDownloading ...\n\nThen on using this command 'pip install matplotlib'\nC:\\Users\\Asus>pip install matplotlib\nDownloading/unpacking matplotlib\n Downloading matplotlib-2.0.0b4.tar.gz (unknown size):\n\n", "If you're running these commands in a Docker container on Windows, it may mean that your docker machine's network connection is stale and needs to be rebuilt. To fix it, run these commands:\ndocker-machine stop\ndocker-machine start\n@FOR /f \"tokens=*\" %i IN ('docker-machine env') DO @%i\n\n", "I'm now getting this in $HOME/.pip/pip.log:\nCould not fetch URL https://pypi.python.org/simple/: HTTP Error 403: TLSv1.2+ is required\n\nI don't have a straightforward solution for this, but I'm mentioning it as something to watch out for before you waste time on trying some of the other solutions here.\n\nI'm obviously already using a https URL\nThere is no proxy or firewall issue\nUsing trusted-host didn't change anything (dunno where I picked this up)\n\nFor what it's worth my openssl is too old to even have ssl.OPENSSL_VERSION so maybe that's really the explanation here.\nIn the end, wiping my virtual environment and recreating it with virtualenv --setuptools env seems to have fixed at least the major blockers.\nThis is on a really old Debian box, Python 2.6.6.\n", "My problem was the system virtualenv version.\nWhen I created an env with python3 venv everything worked. But when I used virtualenv (by default with python2.7) to create an env I receive those error messages. \nIn the virtualenv created the pip version was 1.5.6, but my system pip version was 10.0.1\nThen I ran (outside any env):\npip install virtualenv --upgrade\nIt upgraded virtualenv to version 16.0.0 and now my pip install in the envs created with virtualenv and python2.7 work flawlessly. Also, the pip version inside the env is now 10.0.1.\nBefore upgrade:\n", "I tried almost all answers and nothing fix my error, so I just reinstall python (in my case I have version 2.7.9 and I install 2.7.15) and the error finally fixed.\nNo need to uninstall python first, the installer do it for you.\n", "You can try with this below command:\n\n\n\npython -m pip install --trusted-host https://pypi.python.org deepdiff\n\n\n\nit will work.\n", "I found the solutions from Daniel F and mattdedek very helpful: use an https URL, and upgrade pip in order to make that the default. But that still didn't fix the problem for me. 
I had this error, under Mac OS X / Python 3.4:\n$ pip3 install Flask\nDownloading/unpacking Flask\n Cannot fetch index base URL https://pypi.python.org/simple/\n Could not find any downloads that satisfy the requirement Flask\nCleaning up...\nNo distributions at all found for Flask\nStoring debug log for failure in /Users/huttarl/.pip/pip.log\n\nand pip.log basically showed the same thing:\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/pip/index.py\", line 277, in find_requirement\n raise DistributionNotFound('No distributions at all found for %s' % req) \npip.exceptions.DistributionNotFound: No distributions at all found for Flask\n\nWith help from a friend, I learned that upgrading from Python 3.4 to 3.8 fixed the problem. Apparently this is because one of the newer versions of Python 3 included updated certificates:\n\nCertificate verification and OpenSSL\nThis package includes its own private copy of OpenSSL 1.1.1. The\n trust certificates in system and user keychains managed by the\n Keychain Access application and the security command line utility are\n not used as defaults by the Python ssl module. A sample command\n script is included in /Applications/Python 3.8 to install a curated\n bundle of default root certificates from the third-party certifi\n package (https://pypi.org/project/certifi/). Double-click on Install\n Certificates to run it.\nThe bundled pip has its own default certificate store for verifying\n download connections.\n\nAfter this upgrade, and running Install Certificates.command, the pip3 install Flask runs successfully.\nYou may have already upgraded to Python 3.6+ in Mac OS, and just overlooked this certificate installation command. In that case, browse to Applications/Python 3.x and double-click Install Certificates.command. (See Mac OSX python ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749))\n", "Late 2022 answer after working on a legacy system:\nCheck a younger ubuntu version. By raising the ubuntu version more and more, testing this with a Dockerfile, I could fix a legacy setup by taking Ubuntu 18, see Docker build error \"Cannot fetch index base URL http://pypi.python.org/simple/\".\n" ]
[ 152, 61, 42, 13, 11, 8, 6, 6, 4, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "For my case, I fix it by:\nI copied libcrypto-1_1-x64.dll and libssl-1_1-x64.dll from Anaconda3\\Library\\bin to \\Anaconda3\\DLLs.\n" ]
[ -1 ]
[ "git", "git_review", "pip", "python", "ubuntu" ]
stackoverflow_0021294997_git_git_review_pip_python_ubuntu.txt
Q: How can I add a restriction so that the # of forward steps won't overshadow the # of backward steps? I have to create a program that tracks a person's motion. F represents forward B represents backward. The values are to be randomly generated between 2-20 as well as the total # of steps being randomly generated from 10-85 (total will decide when the steps will stop). The # of forward steps has to be greater than the # of backwards steps (always). My problem is that if my total is a number that's not so far from the # of steps forward, my # of backwards steps aren't even fully generated once. For example, I generated my program and it gave me an output of this: FFFFFFFFFFFFFFFFFFBBBBB 13 steps from start Forward: 18 Backward: 14 Total: 23 But the backwards steps weren't even able to be completed. How can I make it so this won't occur? Do I have to add a restriction? Here's my code: import random while True: fwd= random.randint(2,20) bkwd= random.randint(2,fwd-1) total=random.randint(10,85) f= 0 b = 0 t= 0 steps_taken= 0 if bkwd > fwd: break while total > 0: f = 0 while fwd > f: if total > 0: print("F", end="") f=f+1 t=t+1 total=total-1 steps_taken= steps_taken+1 else: f = fwd b = 0 while bkwd > b: if total > 0: print("B", end="") t=t-1 b=b+1 total=total-1 steps_taken= steps_taken+1 else: b = bkwd if f > total: break print(" ",t, "steps from the start") #I need help here printing the right amount of total steps print("Forward:", f, "Backward:", b, "Total:", steps_taken ) Here are my instructions: A person walks a random amount of steps forward, and then a different random number of steps backwards. The random steps are anywhere between 2 and 20 The number of steps forward is always greater than the number of steps backwards That motion of forward / backward random steps repeats itself again and again The motion is consistent (the number of forward steps stays the same throughout the motion, and the number of backwards steps stays the same throughout the motion) After making a specific amount of total steps the person is told to stop and will be a certain amount of steps forward from where they started. The total number of steps is generated randomly and will be between 10 and 85 You are writing a program to simulate the motion taken by the person. Display that motion and the number of steps he ends away from where he started. For Example: If the program generated the forward steps to be 4, and the backward steps to be 2, and the total number of steps to be 13, your program would display: FFFFBBFFFFBBF = 5 Steps from the start If the program generated the forward steps to be 5, and the backward steps to be 3, and the total steps to be 16, your program would display FFFFFBBBFFFFFBBB = 4 Steps from the start A: I would tackle it like this: import random total_steps = random.randint(10, 85) fwd = random.randint(3,(20, total_steps-1)[total_steps<21]) bkwd= random.randint(2,fwd-1) if (fwd+bkwd) > total_steps: bkwd = total_steps-fwd print("Total_steps=", total_steps, ", fwd=", fwd, ", bkwd=", bkwd) # Initialise step pattern to a blank string, and steps to zero. 
step_pattern = "" steps = 0 while total_steps > 0: for i in range(fwd): step_pattern += "F" steps += 1 total_steps -= 1 if total_steps > 0: for j in range(bkwd): step_pattern += "B" steps -= 1 total_steps -= 1 # Use f-strings to insert (step_pattern) and (steps) into string print(f"{step_pattern} = {steps} steps from the start") Example OUTPUTs: Total_steps= 45 , fwd= 5 , bkwd= 2 FFFFFBBFFFFFBBFFFFFBBFFFFFBBFFFFFBBFFFFFBBFFF = 21 steps from the start Total_steps= 14 , fwd= 6 , bkwd= 5 FFFFFFBBBBBFFF = 4 steps from the start A: If I would not have to complicate things, I would just set the total steps taken random generation from the sum of fwd and bkwd. And as per the requirements mentioned, the sum will not be greater or equal to 85. total=random.randint(fwd+bkwd,85)
How can I add a restriction so that the # of forward steps won't overshadow the # of backward steps?
I have to create a program that tracks a person's motion. F represents forward B represents backward. The values are to be randomly generated between 2-20 as well as the total # of steps being randomly generated from 10-85 (total will decide when the steps will stop). The # of forward steps has to be greater than the # of backwards steps (always). My problem is that if my total is a number that's not so far from the # of steps forward, my # of backwards steps aren't even fully generated once. For example, I generated my program and it gave me an output of this: FFFFFFFFFFFFFFFFFFBBBBB 13 steps from start Forward: 18 Backward: 14 Total: 23 But the backwards steps weren't even able to be completed. How can I make it so this won't occur? Do I have to add a restriction? Here's my code: import random while True: fwd= random.randint(2,20) bkwd= random.randint(2,fwd-1) total=random.randint(10,85) f= 0 b = 0 t= 0 steps_taken= 0 if bkwd > fwd: break while total > 0: f = 0 while fwd > f: if total > 0: print("F", end="") f=f+1 t=t+1 total=total-1 steps_taken= steps_taken+1 else: f = fwd b = 0 while bkwd > b: if total > 0: print("B", end="") t=t-1 b=b+1 total=total-1 steps_taken= steps_taken+1 else: b = bkwd if f > total: break print(" ",t, "steps from the start") #I need help here printing the right amount of total steps print("Forward:", f, "Backward:", b, "Total:", steps_taken ) Here are my instructions: A person walks a random amount of steps forward, and then a different random number of steps backwards. The random steps are anywhere between 2 and 20 The number of steps forward is always greater than the number of steps backwards That motion of forward / backward random steps repeats itself again and again The motion is consistent (the number of forward steps stays the same throughout the motion, and the number of backwards steps stays the same throughout the motion) After making a specific amount of total steps the person is told to stop and will be a certain amount of steps forward from where they started. The total number of steps is generated randomly and will be between 10 and 85 You are writing a program to simulate the motion taken by the person. Display that motion and the number of steps he ends away from where he started. For Example: If the program generated the forward steps to be 4, and the backward steps to be 2, and the total number of steps to be 13, your program would display: FFFFBBFFFFBBF = 5 Steps from the start If the program generated the forward steps to be 5, and the backward steps to be 3, and the total steps to be 16, your program would display FFFFFBBBFFFFFBBB = 4 Steps from the start
[ "I would tackle it like this:\nimport random\n\ntotal_steps = random.randint(10, 85)\nfwd = random.randint(3,(20, total_steps-1)[total_steps<21])\nbkwd= random.randint(2,fwd-1)\n\nif (fwd+bkwd) > total_steps: \n bkwd = total_steps-fwd\n\nprint(\"Total_steps=\", total_steps, \", fwd=\", fwd, \", bkwd=\", bkwd)\n\n# Initialise step pattern to a blank string, and steps to zero.\nstep_pattern = \"\"\nsteps = 0\nwhile total_steps > 0:\n for i in range(fwd):\n step_pattern += \"F\"\n steps += 1\n total_steps -= 1\n \n if total_steps > 0:\n for j in range(bkwd):\n step_pattern += \"B\"\n steps -= 1\n total_steps -= 1\n\n# Use f-strings to insert (step_pattern) and (steps) into string\nprint(f\"{step_pattern} = {steps} steps from the start\")\n\n\n\nExample OUTPUTs:\nTotal_steps= 45 , fwd= 5 , bkwd= 2\nFFFFFBBFFFFFBBFFFFFBBFFFFFBBFFFFFBBFFFFFBBFFF = 21 steps from the start\n\nTotal_steps= 14 , fwd= 6 , bkwd= 5\nFFFFFFBBBBBFFF = 4 steps from the start\n\n", "If I would not have to complicate things, I would just set the total steps taken random generation from the sum of fwd and bkwd. And as per the requirements mentioned, the sum will not be greater or equal to 85.\ntotal=random.randint(fwd+bkwd,85)\n\n" ]
[ 1, 0 ]
[]
[]
[ "iteration", "loops", "motion", "python", "while_loop" ]
stackoverflow_0074587741_iteration_loops_motion_python_while_loop.txt
Q: Pyautogui error about non-internable object The error is shown in the attached screenshot. The image exists in the folder, but I don't know why the program gives an error. I don't know what to do; please help. A: pyautogui.click("otvet.png") will take x,y value and you're passing a .png file, i think you wanted to do this: # Locate the image on screen, this will return x,y value if image is found and none if it is not otvet = pyautogui.locateOnScreen("otvet.png") # Click on the x,y value pyautogui.click(otvet)
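A slightly fuller sketch of the accepted fix; locateOnScreen() returns None when the image is not found, so guarding before the click avoids a second TypeError:

import pyautogui

# Assumes "otvet.png" is a screenshot fragment in the working directory.
location = pyautogui.locateOnScreen("otvet.png")
if location is not None:
    pyautogui.click(pyautogui.center(location))  # click the center of the match
else:
    print("otvet.png was not found on screen")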
Pyautogui error about non-internable object
The error is shown in the attached screenshot. The image exists in the folder, but I don't know why the program gives an error. I don't know what to do; please help.
[ "pyautogui.click(\"otvet.png\") will take x,y value and you're passing a .png file,\ni think you wanted to do this:\n# Locate the image on screen, this will return x,y value if image is found and none if it is not\notvet = pyautogui.locateOnScreen(\"otvet.png\")\n\n# Click on the x,y value\npyautogui.click(otvet)\n\n" ]
[ 0 ]
[]
[]
[ "image", "object", "pyautogui", "python", "python_3.x" ]
stackoverflow_0074577263_image_object_pyautogui_python_python_3.x.txt
Q: How do I save scraped data to a MySQL database? I have a python script that scrapes data from a job website. I want to save these scraped data to MySQL database but after writing the code, it connects to the database. Now after connecting, it doesn't create table and as result couldn't insert those data into the table. Please i need my code to store these scraped data to a table in the MYSQL database. Here's my code import requests from bs4 import BeautifulSoup import mysql.connector for x in range(1, 210): html_text = requests.get(f'https://www.timesjobs.com/candidate/job-search.html? from=submit&actualTxtKeywords=Python&searchBy=0&rdoOperator=OR&searchType=personalizedSearch&luceneResultSize=25&postWeek=60&txtKeywords=Python&pDate=I&sequence={x}&startPage=1').text soup = BeautifulSoup(html_text, 'lxml') jobs = soup.find_all('li', class_ = 'clearfix job-bx wht-shd-bx') job_list = [] for job in jobs: company_name = job.find('h3', class_ = 'joblist-comp-name').text.strip().replace(' ','') keyskill = job.find('span', class_ = 'srp-skills').text.strip().replace(' ','') all_detail = {company_name, keyskill} job_list.append(all_detail) db = mysql.connector.connect(host= 'localhost', user= 'root', password= 'Maxesafrica2') cursor = db.cursor() cursor.execute("CREATE DATABASE first_db") print("Connection to MYSQL Established!") db = mysql.connector.connect(host= 'localhost', user= 'root', password= 'Maxesafrica2', database = 'first_db' ) print("Connected to Database!") cursor = db.cursor() mysql_create_table_query = """CREATE TABLE first_tbl (Company Name Varchar(300) NOT NULL, Keyskill Varchar(400) NOT NULL)""" result = cursor.execute(mysql_create_table_query) insert_query = """INSERT INTO first_tbl (Company Name, Keyskill) VALUES (%s, %s)""" records_to_insert = job_list cursor = db.cursor() cursor.executemany(mysql_create_table_query, records_to_insert) db.commit() cursor.close() db.close() print('Done!') Here's the error I get Connection to MYSQL Established! Connected to Database! Traceback (most recent call last): File "C:\Users\LP\AppData\Local\Programs\Python\Python310\lib\site-packages\mysql\connector\connection_cext.py", line 565, in cmd_query self._cmysql.query(mysql_connector.MySQLInterfaceError: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'Name Varchar(300) NOT NULL, ' at line 1 A: Slight issue with the column name. Instead of 'Company Name' it needs to be 'Company_Name'. SQL doesn't like spaces in column names. Updated queries that you should run: CREATE TABLE first_tbl ( Company_Name Varchar(300) NOT NULL, Keyskill Varchar(400) NOT NULL ) INSERT INTO first_tbl (Company_Name, Keyskill) VALUES (%s, %s) A: I think the issue is related to the field name 'Company Name' you put with a blank in between. Try the name with and _ like this mysql_create_table_query = """CREATE TABLE first_tbl (Company_Name Varchar(300) NOT NULL, Keyskill Varchar(400) NOT NULL)"""
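Beyond the column-name fix, the original script also passes the CREATE statement to executemany() and collects each row as a set (whose element order is undefined). A hedged end-to-end sketch with those parts corrected, reusing the question's connection details and a stand-in row list:

import mysql.connector

db = mysql.connector.connect(host="localhost", user="root",
                             password="Maxesafrica2", database="first_db")
cursor = db.cursor()
cursor.execute("""CREATE TABLE IF NOT EXISTS first_tbl (
    Company_Name VARCHAR(300) NOT NULL,
    Keyskill VARCHAR(400) NOT NULL)""")

# Rows as (company, skills) tuples keep the column order stable;
# this small list stands in for the scraped job_list.
job_list = [("ExampleCorp", "Python,SQL"), ("DemoLtd", "Django,MySQL")]

cursor.executemany(
    "INSERT INTO first_tbl (Company_Name, Keyskill) VALUES (%s, %s)",
    job_list)
db.commit()
cursor.close()
db.close()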
How do I save scraped data to a MySQL database?
I have a python script that scrapes data from a job website. I want to save these scraped data to MySQL database but after writing the code, it connects to the database. Now after connecting, it doesn't create table and as result couldn't insert those data into the table. Please i need my code to store these scraped data to a table in the MYSQL database. Here's my code import requests from bs4 import BeautifulSoup import mysql.connector for x in range(1, 210): html_text = requests.get(f'https://www.timesjobs.com/candidate/job-search.html? from=submit&actualTxtKeywords=Python&searchBy=0&rdoOperator=OR&searchType=personalizedSearch&luceneResultSize=25&postWeek=60&txtKeywords=Python&pDate=I&sequence={x}&startPage=1').text soup = BeautifulSoup(html_text, 'lxml') jobs = soup.find_all('li', class_ = 'clearfix job-bx wht-shd-bx') job_list = [] for job in jobs: company_name = job.find('h3', class_ = 'joblist-comp-name').text.strip().replace(' ','') keyskill = job.find('span', class_ = 'srp-skills').text.strip().replace(' ','') all_detail = {company_name, keyskill} job_list.append(all_detail) db = mysql.connector.connect(host= 'localhost', user= 'root', password= 'Maxesafrica2') cursor = db.cursor() cursor.execute("CREATE DATABASE first_db") print("Connection to MYSQL Established!") db = mysql.connector.connect(host= 'localhost', user= 'root', password= 'Maxesafrica2', database = 'first_db' ) print("Connected to Database!") cursor = db.cursor() mysql_create_table_query = """CREATE TABLE first_tbl (Company Name Varchar(300) NOT NULL, Keyskill Varchar(400) NOT NULL)""" result = cursor.execute(mysql_create_table_query) insert_query = """INSERT INTO first_tbl (Company Name, Keyskill) VALUES (%s, %s)""" records_to_insert = job_list cursor = db.cursor() cursor.executemany(mysql_create_table_query, records_to_insert) db.commit() cursor.close() db.close() print('Done!') Here's the error I get Connection to MYSQL Established! Connected to Database! Traceback (most recent call last): File "C:\Users\LP\AppData\Local\Programs\Python\Python310\lib\site-packages\mysql\connector\connection_cext.py", line 565, in cmd_query self._cmysql.query(mysql_connector.MySQLInterfaceError: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'Name Varchar(300) NOT NULL, ' at line 1
[ "Slight issue with the column name. Instead of 'Company Name' it needs to be 'Company_Name'. SQL doesn't like spaces in column names.\nUpdated queries that you should run:\nCREATE TABLE first_tbl \n(\n Company_Name Varchar(300) NOT NULL,\n Keyskill Varchar(400) NOT NULL\n)\n\nINSERT INTO first_tbl \n (Company_Name, Keyskill)\nVALUES (%s, %s)\n\n\n", "I think the issue is related to the field name 'Company Name' you put with a blank in between. Try the name with and _ like this\nmysql_create_table_query = \"\"\"CREATE TABLE first_tbl (Company_Name Varchar(300) NOT NULL,\nKeyskill Varchar(400) NOT NULL)\"\"\"\n\n" ]
[ 1, 0 ]
[]
[]
[ "mysql", "python" ]
stackoverflow_0074588033_mysql_python.txt
Q: No module named 'keras.saving.hdf5_format' After pip3 installing tensorflow and the transformers library, I'm receiving the titular error when I try loading this from transformers import pipeline classifier = pipeline("text-classification",model='bhadresh-savani/distilbert-base-uncased-emotion') The error traceback looks like: RuntimeError: Failed to import transformers.models.distilbert.modeling_tf_distilbert because of the following error (look up to see its traceback): No module named 'keras.saving.hdf5_format' I have ensured keras got installed with transformers, so I'm not sure why it isn't working A: If you are using the latest version of TensorFlow and Keras then you have to try this code and you have got this error as shown below RuntimeError: Failed to import transformers.models.distilbert.modeling_tf_distilbert because of the following error (look up to see its traceback): No module named 'keras.saving.hdf5_format' Now, expand this error traces as I have shown below Now click on the 14 frames and select as shown below Now comment this line as shown in the picture below Now, try this and your error will gone. The problem is that this is in the older version of keras and you are using the latest version of keras. So, you can skip all these steps and go back to the older version and it will work eventually.
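Editing installed library sources is fragile; an alternative sketch is to align package versions instead, since the keras.saving layout changed across releases. The exact direction below (upgrading transformers, which stopped importing keras.saving.hdf5_format directly in later releases) is an assumption to illustrate the idea, not a tested combination:

import subprocess
import sys

# Run pip from the same interpreter that raised the ImportError.
subprocess.check_call([sys.executable, "-m", "pip", "install",
                       "--upgrade", "transformers"])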
No module named 'keras.saving.hdf5_format'
After pip3 installing tensorflow and the transformers library, I'm receiving the titular error when I try loading this from transformers import pipeline classifier = pipeline("text-classification",model='bhadresh-savani/distilbert-base-uncased-emotion') The error traceback looks like: RuntimeError: Failed to import transformers.models.distilbert.modeling_tf_distilbert because of the following error (look up to see its traceback): No module named 'keras.saving.hdf5_format' I have ensured keras got installed with transformers, so I'm not sure why it isn't working
[ "If you are using the latest version of TensorFlow and Keras then you have to try this code and you have got this error as shown below\nRuntimeError: Failed to import transformers.models.distilbert.modeling_tf_distilbert because of the following error (look up to see its traceback):\nNo module named 'keras.saving.hdf5_format'\n\nNow, expand this error traces as I have shown below\n\nNow click on the 14 frames and select as shown below\n\nNow comment this line as shown in the picture below\n\nNow, try this and your error will gone.\nThe problem is that this is in the older version of keras and you are using the latest version of keras. So, you can skip all these steps and go back to the older version and it will work eventually.\n" ]
[ 2 ]
[]
[]
[ "keras", "python", "tensorflow" ]
stackoverflow_0074586892_keras_python_tensorflow.txt
Q: How to delete a document in MongoDB I am trying to create a delete method in order to delete a document that has the key:"name" and the value:"Rhonda". Whenever I execute my current code, I get an AttributeError saying:"'AnimalShelter' object has no attribute 'delete'". How do I get the method to return the deleted document's JSON contents? Here is my code: testing_script.ipynb from animal_shelter import AnimalShelter # now need to create the object from the class shelter = AnimalShelter("aacuser","Superman") data = {"age_upon_outcome":"2 years","animal_type":"Dog","breed":"Dachshund","color":"Black and tan","name":"Rhonda","outcome_subtype":"Partner","outcome_type":"Adopt","sex_upon_outcome":"Female"} new_values = {"$set": {"age_upon_outcome":"3 years"}} # if shelter.create(data): # print("Animal added") # else: # print("Failed to add animal") # Calls the read function # shelter.read(data) # Calls the update function # shelter.update(data, new_values) # Calls the delete function shelter.delete(data) output AttributeError Traceback (most recent call last) <ipython-input-5-60b1d887dfb8> in <module> 17 18 # Calls the delete function ---> 19 shelter.delete(data) 20 AttributeError: 'AnimalShelter' object has no attribute 'delete' animal_shelter.py from pymongo import MongoClient from bson.objectid import ObjectId class AnimalShelter(object): """ CRUD operations for Animal collection in MongoDB """ def __init__(self,username,password): # Initializing the MongoClient. This helps to # access the MongoDB databases and collections. # init to connect to mongodb without authentication self.client = MongoClient('mongodb://localhost:55996') # init connect to mongodb with authentication # self.client = MongoClient('mongodb://%s:%s@localhost:55996/?authMechanism=DEFAULT&authSource=AAC'%(username, password)) self.database = self.client['AAC'] # Complete this create method to implement the C in CRUD. def create(self, data): if data is not None: self.database.animals.insert(data) # data should be dictionary return True # Tells whether the create function ran successfully else: raise Exception("Nothing to save ...") # Create method to implement the R in CRUD. def read(self, data): return self.database.animals.find_one(data) #returns only one # Update method to implement the U in CRUD. def update(self, data, new_values): if self.database.animals.count(data): self.database.animals.update(data, new_values) return self.database.animals.find({"age_upon_outcome":"3 years"}) else: raise Exception("Nothing to update ...") # Delete method to implement the D in CRUD def delete(self, data) result = self.database.animals.find_one_and_delete(data) # print the _id key only if the result is not None if("_id" in result): print("find_one_and_delete ID:",result["_id"]) else: print("Nothing to delete") A: Problem is that functions that you are defining are outside the class. You have to put indentation on functions in class AnimalShelter Also as pointed out in comment you are missing : in delete Updated animal_sheltor.py from pymongo import MongoClient from bson.objectid import ObjectId class AnimalShelter(object): """ CRUD operations for Animal collection in MongoDB """ def __init__(self,username,password): # Initializing the MongoClient. This helps to # access the MongoDB databases and collections. 
# init to connect to mongodb without authentication self.client = MongoClient('mongodb://localhost:55996') # init connect to mongodb with authentication # self.client = MongoClient('mongodb://%s:%s@localhost:55996/?authMechanism=DEFAULT&authSource=AAC'%(username, password)) self.database = self.client['AAC'] # Complete this create method to implement the C in CRUD. def create(self, data): if data is not None: self.database.animals.insert(data) # data should be dictionary return True # Tells whether the create function ran successfully else: raise Exception("Nothing to save ...") # Create method to implement the R in CRUD. def read(self, data): return self.database.animals.find_one(data) #returns only one # Update method to implement the U in CRUD. def update(self, data, new_values): if self.database.animals.count(data): self.database.animals.update(data, new_values) return self.database.animals.find({"age_upon_outcome":"3 years"}) else: raise Exception("Nothing to update ...") # Delete method to implement the D in CRUD def delete(self, data): result = self.database.animals.find_one_and_delete(data) # print the _id key only if the result is not None if("_id" in result): print("find_one_and_delete ID:",result["_id"]) else: print("Nothing to delete")
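One more detail worth noting: find_one_and_delete() returns None when nothing matches, so the membership test "_id" in result raises a TypeError for a missing document. A sketch of the method body with that guard, assuming it sits inside the AnimalShelter class above:

    def delete(self, data):
        """Remove one matching document and return its contents, or None."""
        if data is None:
            raise Exception("Nothing to delete ...")
        result = self.database.animals.find_one_and_delete(data)
        if result is not None:
            print("find_one_and_delete ID:", result["_id"])
            return result   # dict holding the deleted document's fields
        print("Nothing to delete")
        return None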
How to delete a document in MongoDB
I am trying to create a delete method in order to delete a document that has the key:"name" and the value:"Rhonda". Whenever I execute my current code, I get an AttributeError saying:"'AnimalShelter' object has no attribute 'delete'". How do I get the method to return the deleted document's JSON contents? Here is my code: testing_script.ipynb from animal_shelter import AnimalShelter # now need to create the object from the class shelter = AnimalShelter("aacuser","Superman") data = {"age_upon_outcome":"2 years","animal_type":"Dog","breed":"Dachshund","color":"Black and tan","name":"Rhonda","outcome_subtype":"Partner","outcome_type":"Adopt","sex_upon_outcome":"Female"} new_values = {"$set": {"age_upon_outcome":"3 years"}} # if shelter.create(data): # print("Animal added") # else: # print("Failed to add animal") # Calls the read function # shelter.read(data) # Calls the update function # shelter.update(data, new_values) # Calls the delete function shelter.delete(data) output AttributeError Traceback (most recent call last) <ipython-input-5-60b1d887dfb8> in <module> 17 18 # Calls the delete function ---> 19 shelter.delete(data) 20 AttributeError: 'AnimalShelter' object has no attribute 'delete' animal_shelter.py from pymongo import MongoClient from bson.objectid import ObjectId class AnimalShelter(object): """ CRUD operations for Animal collection in MongoDB """ def __init__(self,username,password): # Initializing the MongoClient. This helps to # access the MongoDB databases and collections. # init to connect to mongodb without authentication self.client = MongoClient('mongodb://localhost:55996') # init connect to mongodb with authentication # self.client = MongoClient('mongodb://%s:%s@localhost:55996/?authMechanism=DEFAULT&authSource=AAC'%(username, password)) self.database = self.client['AAC'] # Complete this create method to implement the C in CRUD. def create(self, data): if data is not None: self.database.animals.insert(data) # data should be dictionary return True # Tells whether the create function ran successfully else: raise Exception("Nothing to save ...") # Create method to implement the R in CRUD. def read(self, data): return self.database.animals.find_one(data) #returns only one # Update method to implement the U in CRUD. def update(self, data, new_values): if self.database.animals.count(data): self.database.animals.update(data, new_values) return self.database.animals.find({"age_upon_outcome":"3 years"}) else: raise Exception("Nothing to update ...") # Delete method to implement the D in CRUD def delete(self, data) result = self.database.animals.find_one_and_delete(data) # print the _id key only if the result is not None if("_id" in result): print("find_one_and_delete ID:",result["_id"]) else: print("Nothing to delete")
[ "Problem is that functions that you are defining are outside the class. You have to put indentation on functions in class AnimalShelter\nAlso as pointed out in comment you are missing : in delete\nUpdated animal_sheltor.py\nfrom pymongo import MongoClient\nfrom bson.objectid import ObjectId\n\nclass AnimalShelter(object):\n\"\"\" CRUD operations for Animal collection in MongoDB \"\"\"\n\n def __init__(self,username,password):\n # Initializing the MongoClient. This helps to \n # access the MongoDB databases and collections. \n # init to connect to mongodb without authentication\n self.client = MongoClient('mongodb://localhost:55996')\n # init connect to mongodb with authentication\n # self.client = MongoClient('mongodb://%s:%s@localhost:55996/?authMechanism=DEFAULT&authSource=AAC'%(username, password))\n self.database = self.client['AAC']\n\n# Complete this create method to implement the C in CRUD.\n def create(self, data):\n if data is not None:\n self.database.animals.insert(data) # data should be dictionary \n return True # Tells whether the create function ran successfully\n else:\n raise Exception(\"Nothing to save ...\")\n\n# Create method to implement the R in CRUD. \n def read(self, data):\n return self.database.animals.find_one(data) #returns only one\n\n# Update method to implement the U in CRUD.\n def update(self, data, new_values):\n if self.database.animals.count(data):\n self.database.animals.update(data, new_values)\n return self.database.animals.find({\"age_upon_outcome\":\"3 years\"})\n else:\n raise Exception(\"Nothing to update ...\") \n \n# Delete method to implement the D in CRUD\n def delete(self, data):\n result = self.database.animals.find_one_and_delete(data)\n # print the _id key only if the result is not None\n if(\"_id\" in result):\n print(\"find_one_and_delete ID:\",result[\"_id\"])\n else:\n print(\"Nothing to delete\")\n\n\n" ]
[ 0 ]
[]
[]
[ "mongodb", "python" ]
stackoverflow_0074588047_mongodb_python.txt
Q: What is the python equivalent of setting instances of a class within the __init__() method? I'd like to send in a list of dependencies as part of creating a DAGNode: what is the supported way to achieve similar behavior in Python, given that this exact syntax is not supported? from typing import TypeVar, Generic T = TypeVar('T') class DAGNode(Generic[T]): # Apparently the `DAGNode` type does not exist yet so this fails def __init__(self, id: T, dependencies: set[DAGNode[T]]): self.id = id self.dependencies = dependencies A: Since the class does not exist yet you have to reference it including its name between single quote like this from typing import TypeVar, Generic, Set T = TypeVar('T') class DAGNode(Generic[T]): # Apparently the DAGNode type does not exist yet so this fails def __init__(self, type_id: T, dependencies: Set['DAGNode[T]']): self.id = type_id self.dependencies = dependencies Notice I used type_id instead of id to not shadow the builtin function id A: In Python 3.7 and later, you can add the line from __future__ import annotations near the beginning of the code. Annotations aren't evaluated then anymore automatically (only valid expression syntax is checked). For more details see https://peps.python.org/pep-0563/
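A runnable sketch of the postponed-evaluation variant mentioned in the second answer; with the __future__ import, the class can name itself in annotations without quotes (Python 3.7+):

from __future__ import annotations

from typing import Generic, Set, TypeVar

T = TypeVar("T")

class DAGNode(Generic[T]):
    def __init__(self, node_id: T, dependencies: Set[DAGNode[T]]):
        self.id = node_id          # renamed from id to avoid shadowing the builtin
        self.dependencies = dependencies

root = DAGNode("a", set())
child = DAGNode("b", {root})       # child depends on root
print(len(child.dependencies))     # 1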
What is the python equivalent of setting instances of a class within the __init__() method?
I'd like to send in a list of dependencies as part of creating a DAGNode: what is the supported way to achieve similar behavior in Python, given that this exact syntax is not supported? from typing import TypeVar, Generic T = TypeVar('T') class DAGNode(Generic[T]): # Apparently the `DAGNode` type does not exist yet so this fails def __init__(self, id: T, dependencies: set[DAGNode[T]]): self.id = id self.dependencies = dependencies
[ "Since the class does not exist yet you have to reference it including its name between single quote like this\nfrom typing import TypeVar, Generic, Set\n\nT = TypeVar('T')\nclass DAGNode(Generic[T]):\n\n # Apparently the DAGNode type does not exist yet so this fails\n def __init__(self, type_id: T, dependencies: Set['DAGNode[T]']): \n self.id = type_id\n self.dependencies = dependencies\n\nNotice I used type_id instead of id to not shadow the builtin function id\n", "In Python 3.7 and later, you can add the line\nfrom __future__ import annotations\n\nnear the beginning of the code. Annotations aren't evaluated then anymore automatically (only valid expression syntax is checked).\nFor more details see https://peps.python.org/pep-0563/\n" ]
[ 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0074588016_python.txt
Q: TypeError: list indices must be integers or slices, not Symbol from sympy import * t6,a,b,c = symbols ('t6,a,b,c') result=solve([(a*cos(t6))+(b*sin(t6))+c],[t6]) cs=[(a,-26.468147779101194),(b,4.395890741437306),(c,19.920476269921963)] t6 = result[t6].subs(cs) I am trying to solve an equation. I guess the error occurs because the equation has two results, since the code works fine on simpler equations. A: There is no need for list (i.e., []) inside solve() t6,a,b,c = symbols('t6,a,b,c') result=solve((a*cos(t6))+(b*sin(t6))+c, t6) cs=[(a,-26.468147779101194),(b,4.395890741437306),(c,19.920476269921963)] # solve for t6 for i in range(len(result)): t6 = result[i].subs(cs) print(t6) output: 0.569494943945226 -0.898655036471938
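For completeness, a sketch showing why the original indexing failed: solving a single equation for one symbol returns a plain list of roots, so the results are indexed by position (and a dict works for subs() in place of the tuple list):

from sympy import symbols, sin, cos, solve

t6, a, b, c = symbols("t6 a b c")
roots = solve(a*cos(t6) + b*sin(t6) + c, t6)   # a list of two symbolic roots

values = {a: -26.468147779101194,
          b: 4.395890741437306,
          c: 19.920476269921963}
for root in roots:
    print(root.subs(values).evalf())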
TypeError: list indices must be integers or slices, not Symbol
from sympy import * t6,a,b,c = symbols ('t6,a,b,c') result=solve([(a*cos(t6))+(b*sin(t6))+c],[t6]) cs=[(a,-26.468147779101194),(b,4.395890741437306),(c,19.920476269921963)] t6 = result[t6].subs(cs) I am trying to solve an equation. I guess the error occurs because the equation has two results, since the code works fine on simpler equations.
[ "There is no need for list (i.e., []) inside solve()\nt6,a,b,c = symbols('t6,a,b,c')\nresult=solve((a*cos(t6))+(b*sin(t6))+c, t6)\ncs=[(a,-26.468147779101194),(b,4.395890741437306),(c,19.920476269921963)]\n\n# solve for t6\nfor i in range(len(result)):\n t6 = result[i].subs(cs)\n print(t6)\n\noutput:\n0.569494943945226\n-0.898655036471938\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074588086_python.txt
Q: Pandas version of rbind In R, you can combine two dataframes by sticking the columns of one onto the bottom of the columns of the other using rbind. In pandas, how do you accomplish the same thing? It seems bizarrely difficult. Using append results in a horrible mess including NaNs and things for reasons I don't understand. I'm just trying to "rbind" two identical frames that look like this: EDIT: I was creating the DataFrames in a stupid way, which was causing issues. Append=rbind to all intents and purposes. See answer below. 0 1 2 3 4 5 6 7 0 ADN.L 20130220 437.4 442.37 436.5000 441.9000 2775364 2013-02-20 18:47:42 1 ADM.L 20130220 1279.0 1300.00 1272.0000 1285.0000 967730 2013-02-20 18:47:42 2 AGK.L 20130220 1717.0 1749.00 1709.0000 1739.0000 834534 2013-02-20 18:47:43 3 AMEC.L 20130220 1030.0 1040.00 1024.0000 1035.0000 1972517 2013-02-20 18:47:43 4 AAL.L 20130220 1998.0 2014.50 1942.4999 1951.0000 3666033 2013-02-20 18:47:44 5 ANTO.L 20130220 1093.0 1097.00 1064.7899 1068.0000 2183931 2013-02-20 18:47:44 6 ARM.L 20130220 941.5 965.10 939.4250 951.5001 2994652 2013-02-20 18:47:45 But I'm getting something horrible a la this: 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 NaN NaN NaN NaN NaN NaN NaN NaN ADN.L 20130220 437.4 442.37 436.5000 441.9000 2775364 2013-02-20 18:47:42 1 NaN NaN NaN NaN NaN NaN NaN NaN ADM.L 20130220 1279.0 1300.00 1272.0000 1285.0000 967730 2013-02-20 18:47:42 2 NaN NaN NaN NaN NaN NaN NaN NaN AGK.L 20130220 1717.0 1749.00 1709.0000 1739.0000 834534 2013-02-20 18:47:43 3 NaN NaN NaN NaN NaN NaN NaN NaN AMEC.L 20130220 1030.0 1040.00 1024.0000 1035.0000 1972517 2013-02-20 18:47:43 4 NaN NaN NaN NaN NaN NaN NaN NaN AAL.L 20130220 1998.0 2014.50 1942.4999 1951.0000 3666033 2013-02-20 18:47:44 5 NaN NaN NaN NaN NaN NaN NaN NaN ANTO.L 20130220 1093.0 1097.00 1064.7899 1068.0000 2183931 2013-02-20 18:47:44 6 NaN NaN NaN NaN NaN NaN NaN NaN ARM.L 20130220 941.5 965.10 939.4250 951.5001 2994652 2013-02-20 18:47:45 0 NaN NaN NaN NaN NaN NaN NaN NaN ADN.L 20130220 437.4 442.37 436.5000 441.9000 2775364 2013-02-20 18:47:42 1 NaN NaN NaN NaN NaN NaN NaN NaN ADM.L 20130220 1279.0 1300.00 1272.0000 1285.0000 967730 2013-02-20 18:47:42 2 NaN NaN NaN NaN NaN NaN NaN NaN AGK.L 20130220 1717.0 1749.00 1709.0000 1739.0000 834534 2013-02-20 18:47:43 3 NaN NaN NaN NaN NaN NaN NaN NaN And I don't understand why. I'm starting to miss R :( A: Ah, this is to do with how I created the DataFrame, not with how I was combining them. The long and the short of it is, if you are creating a frame using a loop and a statement that looks like this: Frame = Frame.append(pandas.DataFrame(data = SomeNewLineOfData)) You must ignore the index Frame = Frame.append(pandas.DataFrame(data = SomeNewLineOfData), ignore_index=True) Or you will have issues later when combining data. A: pd.concat will serve the purpose of rbind in R. import pandas as pd df1 = pd.DataFrame({'col1': [1,2], 'col2':[3,4]}) df2 = pd.DataFrame({'col1': [5,6], 'col2':[7,8]}) print(df1) print(df2) print(pd.concat([df1, df2])) The outcome will looks like: col1 col2 0 1 3 1 2 4 col1 col2 0 5 7 1 6 8 col1 col2 0 1 3 1 2 4 0 5 7 1 6 8 If you read the documentation careful enough, it will also explain other operations like cbind, ..etc. 
A: [EDIT] append() is deprecated since 1.4.0 - use concat() instead - https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.append.html This worked for me: import numpy as np import pandas as pd dates = np.asarray(pd.date_range('1/1/2000', periods=8)) df1 = pd.DataFrame(np.random.randn(8, 4), index=dates, columns=['A', 'B', 'C', 'D']) df2 = df1.copy() df = df1.append(df2) Yields: A B C D 2000-01-01 -0.327208 0.552500 0.862529 0.493109 2000-01-02 1.039844 -2.141089 -0.781609 1.307600 2000-01-03 -0.462831 0.066505 -1.698346 1.123174 2000-01-04 -0.321971 -0.544599 -0.486099 -0.283791 2000-01-05 0.693749 0.544329 -1.606851 0.527733 2000-01-06 -2.461177 -0.339378 -0.236275 0.155569 2000-01-07 -0.597156 0.904511 0.369865 0.862504 2000-01-08 -0.958300 -0.583621 -2.068273 0.539434 2000-01-01 -0.327208 0.552500 0.862529 0.493109 2000-01-02 1.039844 -2.141089 -0.781609 1.307600 2000-01-03 -0.462831 0.066505 -1.698346 1.123174 2000-01-04 -0.321971 -0.544599 -0.486099 -0.283791 2000-01-05 0.693749 0.544329 -1.606851 0.527733 2000-01-06 -2.461177 -0.339378 -0.236275 0.155569 2000-01-07 -0.597156 0.904511 0.369865 0.862504 2000-01-08 -0.958300 -0.583621 -2.068273 0.539434 If you don't already use the latest version of pandas I highly recommend upgrading. It is now possible to operate with DataFrames which contain duplicate indices. A: import pandas as pd import numpy as np If you have a DataFrame like this: array = np.random.randint( 0,10, size = (2,4) ) df = pd.DataFrame(array, columns = ['A','B', 'C', 'D'], \ index = ['10aa', '20bb'] ) ### some crazy indexes df A B C D 10aa 4 2 4 6 20bb 5 1 0 2 And you want add some NEW ROW which is a list (or another iterable object): List = [i**3 for i in range(df.shape[1]) ] List [0, 1, 8, 27] You should transform list to dictionary with keys equals columns in DataFrame with zip() function: Dict = dict( zip(df.columns, List) ) Dict {'A': 0, 'B': 1, 'C': 8, 'D': 27} Than you can use append() method to add new dictionary: df = df.append(Dict, ignore_index=True) df A B C D 0 7 5 5 4 1 5 8 4 1 2 0 1 8 27 N.B. the indexes are dropped. And yeah, it's not as simple as cbind() in R :( A: dplyr's bind_rows does the same thing. In python, you can do it the same way: >>> from datar.all import bind_rows, head, tail >>> from datar.datasets import iris >>> >>> iris >> head(3) >> bind_rows(iris >> tail(3)) Sepal_Length Sepal_Width Petal_Length Petal_Width Species <float64> <float64> <float64> <float64> <object> 0 5.1 3.5 1.4 0.2 setosa 1 4.9 3.0 1.4 0.2 setosa 2 4.7 3.2 1.3 0.2 setosa 3 6.5 3.0 5.2 2.0 virginica 4 6.2 3.4 5.4 2.3 virginica 5 5.9 3.0 5.1 1.8 virginica I am the author of the datar package. Feel free to submit issues if you have any questions. A: Yes, rbind() (row bind dataframes) and cbind() (column bind dataframes) in R are very simple and intuitive. You can use the "concat()" function from the pandas library for both of them to achieve the same thing. The rbind(df1,df2) equivalent in pandas will be the following: pd.concat([df1, df2], ignore_index = True) However, I have written rbind() and cbind() functions below using pandas for ease of use. def rbind(df1, df2): import pandas as pd return pd.concat([df1, df2], ignore_index = True) def cbind(df1, df2): import pandas as pd # Note this does not keep the original indexes of the df's and resets them to 0,1,... 
return pd.concat([df1.reset_index(drop=True), df2.reset_index(drop=True)], axis = 1) If you copy, paste, and run the above functions you can use these functions in python the same as you would use them in R. Also, they have the same assumptions as their R counterparts such as for rbind(df1, df2): df1 and df2 need to have the same column names. Below is an example of the rbind() function: import pandas as pd dict1 = {'Name': ['Ali', 'Craig', 'Shaz', 'Maheen'], 'Age': [36, 38, 33, 34]} dict2 = {'Name': ['Fahad', 'Tyler', 'Thai-Son', 'Shazmeen', 'Uruj', 'Tatyana'], 'Age': [42, 27, 29, 60, 42, 31]} data1 = pd.DataFrame(dict1) data2 = pd.DataFrame(dict2) # We now row-bind the two dataframes and save it as df_final. df_final = rbind(data1, data2) print(df_final) Here is an open public GitHub repo file I created for writing and consolidating python equivalent R functions in one central place: https://github.com/CubeStatistica/Learning-Data-Science-Properly-for-Work-and-Production-Using-Python/blob/main/Writing-R-Functions-in-Python.ipynb Feel free to contribute. Happy coding!
Pandas version of rbind
In R, you can combine two dataframes by sticking the columns of one onto the bottom of the columns of the other using rbind. In pandas, how do you accomplish the same thing? It seems bizarrely difficult. Using append results in a horrible mess including NaNs and things for reasons I don't understand. I'm just trying to "rbind" two identical frames that look like this: EDIT: I was creating the DataFrames in a stupid way, which was causing issues. Append=rbind to all intents and purposes. See answer below. 0 1 2 3 4 5 6 7 0 ADN.L 20130220 437.4 442.37 436.5000 441.9000 2775364 2013-02-20 18:47:42 1 ADM.L 20130220 1279.0 1300.00 1272.0000 1285.0000 967730 2013-02-20 18:47:42 2 AGK.L 20130220 1717.0 1749.00 1709.0000 1739.0000 834534 2013-02-20 18:47:43 3 AMEC.L 20130220 1030.0 1040.00 1024.0000 1035.0000 1972517 2013-02-20 18:47:43 4 AAL.L 20130220 1998.0 2014.50 1942.4999 1951.0000 3666033 2013-02-20 18:47:44 5 ANTO.L 20130220 1093.0 1097.00 1064.7899 1068.0000 2183931 2013-02-20 18:47:44 6 ARM.L 20130220 941.5 965.10 939.4250 951.5001 2994652 2013-02-20 18:47:45 But I'm getting something horrible a la this: 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 NaN NaN NaN NaN NaN NaN NaN NaN ADN.L 20130220 437.4 442.37 436.5000 441.9000 2775364 2013-02-20 18:47:42 1 NaN NaN NaN NaN NaN NaN NaN NaN ADM.L 20130220 1279.0 1300.00 1272.0000 1285.0000 967730 2013-02-20 18:47:42 2 NaN NaN NaN NaN NaN NaN NaN NaN AGK.L 20130220 1717.0 1749.00 1709.0000 1739.0000 834534 2013-02-20 18:47:43 3 NaN NaN NaN NaN NaN NaN NaN NaN AMEC.L 20130220 1030.0 1040.00 1024.0000 1035.0000 1972517 2013-02-20 18:47:43 4 NaN NaN NaN NaN NaN NaN NaN NaN AAL.L 20130220 1998.0 2014.50 1942.4999 1951.0000 3666033 2013-02-20 18:47:44 5 NaN NaN NaN NaN NaN NaN NaN NaN ANTO.L 20130220 1093.0 1097.00 1064.7899 1068.0000 2183931 2013-02-20 18:47:44 6 NaN NaN NaN NaN NaN NaN NaN NaN ARM.L 20130220 941.5 965.10 939.4250 951.5001 2994652 2013-02-20 18:47:45 0 NaN NaN NaN NaN NaN NaN NaN NaN ADN.L 20130220 437.4 442.37 436.5000 441.9000 2775364 2013-02-20 18:47:42 1 NaN NaN NaN NaN NaN NaN NaN NaN ADM.L 20130220 1279.0 1300.00 1272.0000 1285.0000 967730 2013-02-20 18:47:42 2 NaN NaN NaN NaN NaN NaN NaN NaN AGK.L 20130220 1717.0 1749.00 1709.0000 1739.0000 834534 2013-02-20 18:47:43 3 NaN NaN NaN NaN NaN NaN NaN NaN And I don't understand why. I'm starting to miss R :(
[ "Ah, this is to do with how I created the DataFrame, not with how I was combining them. The long and the short of it is, if you are creating a frame using a loop and a statement that looks like this:\nFrame = Frame.append(pandas.DataFrame(data = SomeNewLineOfData))\n\nYou must ignore the index\nFrame = Frame.append(pandas.DataFrame(data = SomeNewLineOfData), ignore_index=True)\n\nOr you will have issues later when combining data.\n", "pd.concat will serve the purpose of rbind in R. \nimport pandas as pd\ndf1 = pd.DataFrame({'col1': [1,2], 'col2':[3,4]})\ndf2 = pd.DataFrame({'col1': [5,6], 'col2':[7,8]})\nprint(df1)\nprint(df2)\nprint(pd.concat([df1, df2]))\n\nThe outcome will looks like: \n col1 col2\n0 1 3\n1 2 4\n col1 col2\n0 5 7\n1 6 8\n col1 col2\n0 1 3\n1 2 4\n0 5 7\n1 6 8\n\nIf you read the documentation careful enough, it will also explain other operations like cbind, ..etc. \n", "[EDIT] append() is deprecated since 1.4.0 - use concat() instead - https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.append.html\nThis worked for me:\nimport numpy as np\nimport pandas as pd\n\ndates = np.asarray(pd.date_range('1/1/2000', periods=8))\ndf1 = pd.DataFrame(np.random.randn(8, 4), index=dates, columns=['A', 'B', 'C', 'D'])\ndf2 = df1.copy()\ndf = df1.append(df2)\n\nYields:\n A B C D\n2000-01-01 -0.327208 0.552500 0.862529 0.493109\n2000-01-02 1.039844 -2.141089 -0.781609 1.307600\n2000-01-03 -0.462831 0.066505 -1.698346 1.123174\n2000-01-04 -0.321971 -0.544599 -0.486099 -0.283791\n2000-01-05 0.693749 0.544329 -1.606851 0.527733\n2000-01-06 -2.461177 -0.339378 -0.236275 0.155569\n2000-01-07 -0.597156 0.904511 0.369865 0.862504\n2000-01-08 -0.958300 -0.583621 -2.068273 0.539434\n2000-01-01 -0.327208 0.552500 0.862529 0.493109\n2000-01-02 1.039844 -2.141089 -0.781609 1.307600\n2000-01-03 -0.462831 0.066505 -1.698346 1.123174\n2000-01-04 -0.321971 -0.544599 -0.486099 -0.283791\n2000-01-05 0.693749 0.544329 -1.606851 0.527733\n2000-01-06 -2.461177 -0.339378 -0.236275 0.155569\n2000-01-07 -0.597156 0.904511 0.369865 0.862504\n2000-01-08 -0.958300 -0.583621 -2.068273 0.539434\n\nIf you don't already use the latest version of pandas I highly recommend upgrading. It is now possible to operate with DataFrames which contain duplicate indices.\n", "import pandas as pd \nimport numpy as np\n\nIf you have a DataFrame like this:\narray = np.random.randint( 0,10, size = (2,4) )\ndf = pd.DataFrame(array, columns = ['A','B', 'C', 'D'], \\ \n index = ['10aa', '20bb'] ) ### some crazy indexes\ndf\n\n A B C D\n10aa 4 2 4 6\n20bb 5 1 0 2\n\nAnd you want add some NEW ROW which is a list (or another iterable object):\nList = [i**3 for i in range(df.shape[1]) ]\nList\n[0, 1, 8, 27]\n\nYou should transform list to dictionary with keys equals columns in DataFrame with zip() function:\nDict = dict( zip(df.columns, List) )\nDict\n{'A': 0, 'B': 1, 'C': 8, 'D': 27}\n\nThan you can use append() method to add new dictionary:\ndf = df.append(Dict, ignore_index=True)\ndf\n A B C D\n0 7 5 5 4\n1 5 8 4 1\n2 0 1 8 27\n\nN.B. 
the indexes are dropped.\nAnd yeah, it's not as simple as cbind() in R :(\n", "dplyr's bind_rows does the same thing.\nIn python, you can do it the same way:\n>>> from datar.all import bind_rows, head, tail\n>>> from datar.datasets import iris\n>>> \n>>> iris >> head(3) >> bind_rows(iris >> tail(3))\n Sepal_Length Sepal_Width Petal_Length Petal_Width Species\n <float64> <float64> <float64> <float64> <object>\n0 5.1 3.5 1.4 0.2 setosa\n1 4.9 3.0 1.4 0.2 setosa\n2 4.7 3.2 1.3 0.2 setosa\n3 6.5 3.0 5.2 2.0 virginica\n4 6.2 3.4 5.4 2.3 virginica\n5 5.9 3.0 5.1 1.8 virginica\n\nI am the author of the datar package. Feel free to submit issues if you have any questions.\n", "Yes, rbind() (row bind dataframes) and cbind() (column bind dataframes) in R are very simple and intuitive.\nYou can use the \"concat()\" function from the pandas library for both of them to achieve the same thing. The rbind(df1,df2) equivalent in pandas will be the following:\npd.concat([df1, df2], ignore_index = True)\n\nHowever, I have written rbind() and cbind() functions below using pandas for ease of use.\n\n def rbind(df1, df2):\n import pandas as pd\n return pd.concat([df1, df2], ignore_index = True)\n\n def cbind(df1, df2):\n import pandas as pd\n # Note this does not keep the original indexes of the df's and resets them to 0,1,...\n return pd.concat([df1.reset_index(drop=True), df2.reset_index(drop=True)], axis = 1)\n\n\nIf you copy, paste, and run the above functions you can use these functions in python the same as you would use them in R. Also, they have the same assumptions as their R counterparts such as for rbind(df1, df2): df1 and df2 need to have the same column names.\nBelow is an example of the rbind() function:\nimport pandas as pd\n\ndict1 = {'Name': ['Ali', 'Craig', 'Shaz', 'Maheen'], 'Age': [36, 38, 33, 34]} \ndict2 = {'Name': ['Fahad', 'Tyler', 'Thai-Son', 'Shazmeen', 'Uruj', 'Tatyana'], 'Age': [42, 27, 29, 60, 42, 31]}\n\ndata1 = pd.DataFrame(dict1)\ndata2 = pd.DataFrame(dict2) \n\n# We now row-bind the two dataframes and save it as df_final.\n\ndf_final = rbind(data1, data2)\n\nprint(df_final)\n\n\nHere is an open public GitHub repo file I created for writing and consolidating python equivalent R functions in one central place:\nhttps://github.com/CubeStatistica/Learning-Data-Science-Properly-for-Work-and-Production-Using-Python/blob/main/Writing-R-Functions-in-Python.ipynb\nFeel free to contribute.\nHappy coding!\n" ]
[ 56, 56, 35, 4, 3, 0 ]
[]
[]
[ "dataframe", "pandas", "python", "r" ]
stackoverflow_0014988480_dataframe_pandas_python_r.txt
Q: Filtering out SQLAlchemy query with group_by based on column value being > 0 I am trying to filter out this SQLAlchemy query to only return records where avg_3d_perf > 0 Here is a query I am using: query = query.with_entities(func.max(Signals_light.id).label('id'),Signals_light.symbol,func.max(Signals_light.close).label('close'),func.max(Signals_light.volume).label('volume'),func.max(Signals_light .total_change).label('total_change'),func.max(Signals_light.strength).label('strength'),func.max(Signals_light.created_at).label('created_at'),func.max(Signals_light.window_mins).label('window_mins'), func.max(Signals_light.pattern).label('pattern'),func.max(Signals_light.pattern_type).label('pattern_type'),func.max(Signals_light.sentiment).label('sentiment'),func.avg(func.nullif(Signals_light.avg_3d_perf,0)).label('avg_3d_perf'), func.count(Signals_light.symbol).label('signals_count')).group_by(Signals_light.symbol) All I need to do is filter the results from this query above to only keep records where avg_3d_Perf > 0 I tried using statements like query = query.filter(avg_3d_Perf > 0).all() on top of this large query but I think I need to apply this filter differently somehow. I also tried to add .having(avg_3d_perf > 0) to the end of the query but it does not work either. A: Adding query = query.having(func.avg(func.nullif(Signals_light.avg_3d_perf,0)) > 0) after the main query solved my issue.
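A self-contained sketch of the accepted fix against an in-memory SQLite database; the model is a stand-in for Signals_light with only the relevant columns, and the point is that a condition on an aggregate belongs in HAVING because the average only exists after grouping:

from sqlalchemy import Column, Float, Integer, String, create_engine, func
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Signal(Base):           # stand-in for Signals_light
    __tablename__ = "signals"
    id = Column(Integer, primary_key=True)
    symbol = Column(String)
    avg_3d_perf = Column(Float)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([
        Signal(symbol="AAA", avg_3d_perf=1.5),
        Signal(symbol="AAA", avg_3d_perf=0.0),   # nullif(…, 0) drops this one
        Signal(symbol="BBB", avg_3d_perf=-2.0),
    ])
    session.commit()

    avg_perf = func.avg(func.nullif(Signal.avg_3d_perf, 0)).label("avg_3d_perf")
    rows = (
        session.query(Signal.symbol, avg_perf)
        .group_by(Signal.symbol)
        .having(avg_perf > 0)
        .all()
    )
    print(rows)   # [('AAA', 1.5)]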
Filtering out SQLAlchemy query with group_by based on column value being > 0
I am trying to filter out this SQLAlchemy query to only return records where avg_3d_perf > 0 Here is a query I am using: query = query.with_entities(func.max(Signals_light.id).label('id'),Signals_light.symbol,func.max(Signals_light.close).label('close'),func.max(Signals_light.volume).label('volume'),func.max(Signals_light .total_change).label('total_change'),func.max(Signals_light.strength).label('strength'),func.max(Signals_light.created_at).label('created_at'),func.max(Signals_light.window_mins).label('window_mins'), func.max(Signals_light.pattern).label('pattern'),func.max(Signals_light.pattern_type).label('pattern_type'),func.max(Signals_light.sentiment).label('sentiment'),func.avg(func.nullif(Signals_light.avg_3d_perf,0)).label('avg_3d_perf'), func.count(Signals_light.symbol).label('signals_count')).group_by(Signals_light.symbol) All I need to do is filter the results from this query above to only keep records where avg_3d_Perf > 0 I tried using statements like query = query.filter(avg_3d_Perf > 0).all() on top of this large query but I think I need to apply this filter differently somehow. I also tried to add .having(avg_3d_perf > 0) to the end of the query but it does not work either.
[ "Adding query = query.having(func.avg(func.nullif(Signals_light.avg_3d_perf,0)) > 0) after the main query solved my issue.\n" ]
[ 0 ]
[]
[]
[ "filter", "python", "sqlalchemy", "where_clause" ]
stackoverflow_0074203940_filter_python_sqlalchemy_where_clause.txt
Q: AttributeError: module 'collections' has no attribute 'Iterable' I am using the "pdftables" library to extract tables from a pdf. This is my code: import pdftables pg = pdftables.get_pdf_page(open("filename.pdf","rb"),253) print(pg) table = pdftables.page_to_tables(pg) print(table) I am getting this error and I am not sure what's causing it. Traceback (most recent call last): File "c:\Users\gayak\OneDrive\Documents\PDF to Database\PDF_to_Tables_3.py", line 9, in <module> table = pdftables.page_to_tables(pg) File "C:\Users\gayak\AppData\Local\Programs\Python\Python310\lib\site-packages\pdftables\pdftables.py", line 485, in page_to_tables box_list = LeafList().populate(page, flt).purge_empty_text() File "C:\Users\gayak\AppData\Local\Programs\Python\Python310\lib\site-packages\pdftables\tree.py", line 98, in populate for obj in children(pdfpage): File "C:\Users\gayak\AppData\Local\Programs\Python\Python310\lib\site-packages\pdftables\tree.py", line 75, in children if isinstance(obj, collections.Iterable): AttributeError: module 'collections' has no attribute 'Iterable' The version of python I am using is python 3.10.4 I used pip install pdftables.six to get the library A: If you don't want to change the source code, there is an easier way. Just use this in your script after importing. import collections collections.Iterable = collections.abc.Iterable A: As the Error says, the attribute isn't valid. When using collection.Iterable then it not finds the Iterable attribute. This can have different reasons but in this case, after looking on the package files on Github, i noticed that you are missing the abc keyword. I not tried this solution, but I'm 95% sure that using collections.abc.Iterable will do the thing.
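A defensive version of the shim from the first answer; guarding with hasattr() keeps the patch harmless on older Pythons where the alias still exists, and it must run before importing the library that needs it:

import collections
import collections.abc

# On Python 3.10+ the ABC aliases were removed from collections.
if not hasattr(collections, "Iterable"):
    collections.Iterable = collections.abc.Iterable

print(isinstance([1, 2, 3], collections.Iterable))   # True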
AttributeError: module 'collections' has no attribute 'Iterable'
I am using the "pdftables" library to extract tables from a pdf. This is my code: import pdftables pg = pdftables.get_pdf_page(open("filename.pdf","rb"),253) print(pg) table = pdftables.page_to_tables(pg) print(table) I am getting this error and I am not sure what's causing it. Traceback (most recent call last): File "c:\Users\gayak\OneDrive\Documents\PDF to Database\PDF_to_Tables_3.py", line 9, in <module> table = pdftables.page_to_tables(pg) File "C:\Users\gayak\AppData\Local\Programs\Python\Python310\lib\site-packages\pdftables\pdftables.py", line 485, in page_to_tables box_list = LeafList().populate(page, flt).purge_empty_text() File "C:\Users\gayak\AppData\Local\Programs\Python\Python310\lib\site-packages\pdftables\tree.py", line 98, in populate for obj in children(pdfpage): File "C:\Users\gayak\AppData\Local\Programs\Python\Python310\lib\site-packages\pdftables\tree.py", line 75, in children if isinstance(obj, collections.Iterable): AttributeError: module 'collections' has no attribute 'Iterable' The version of python I am using is python 3.10.4 I used pip install pdftables.six to get the library
[ "If you don't want to change the source code, there is an easier way. Just use this in your script after importing.\nimport collections\ncollections.Iterable = collections.abc.Iterable\n\n", "As the Error says, the attribute isn't valid. When using collection.Iterable then it not finds the Iterable attribute. This can have different reasons but in this case, after looking on the package files on Github, i noticed that you are missing the abc keyword. I not tried this solution, but I'm 95% sure that using collections.abc.Iterable will do the thing.\n" ]
[ 3, 0 ]
[ "A simple fix that works for python3.10:\nUnder directory\n/usr/lib/python3.10/collections/init.py\nNote: The path might change depending\nAdd this line of code:\nfrom _collections_abc import Iterable\n", "import collections \nfrom _collections_abc import Iterable \ncollection.Iterable = Iterable\n\nThis should work \n" ]
[ -1, -1 ]
[ "attributeerror", "pdf", "pdftables", "python" ]
stackoverflow_0072371859_attributeerror_pdf_pdftables_python.txt
Q: How to filter the dates from datetime field in django views.py import datetime from django.shortcuts import render import pymysql from django.http import HttpResponseRedirect from facligoapp.models import Scrapper from django.utils import timezone import pytz roles = "" get_records_by_date = "" def index(request): if request.method == "POST": from_date = request.POST.get("from_date") f_date = datetime.datetime.strptime(from_date,'%Y-%m-%d') print(f_date) to_date = request.POST.get("to_date") t_date = datetime.datetime.strptime(to_date, '%Y-%m-%d') print(t_date) global get_records_by_date get_records_by_date = Scrapper.objects.all().filter(start_time=f_date,end_time=t_date) print(get_records_by_date) else: global roles roles = Scrapper.objects.all() return render(request, "home.html",{"scrappers": roles}) return render(request, "home.html", {"scrappers": get_records_by_date}) models.py from django.db import models from django.db import connections # Create your models here. from django.utils import timezone class Scrapper(models.Model): scrapper_id = models.IntegerField(primary_key=True) scrapper_jobs_log_id = models.IntegerField() external_job_source_id = models.IntegerField() start_time = models.DateField(default=True) end_time = models.DateField(default=True) scrapper_status = models.IntegerField() processed_records = models.IntegerField() new_records = models.IntegerField() skipped_records = models.IntegerField() error_records = models.IntegerField() class Meta: db_table = "scrapper_job_logs" Database structure I post f_date = 2022-11-24 00:00:00 , t_date = 2022-11-24 00:00:00 . I need to get the row start_time and end_time which has dates 2022-11-24. Is there any solution how to filter datas from datetime field. If I pass f_date and t_date my <QuerySet []> is empty. A: I need to get the row start_time and end_time which has dates 2022-11-24. It is a DateTimeField so compare its date using __date lookup so use this Queryset: Scrapper.objects.filter(start_time__date=f_date,end_time__date=t_date)
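A sketch of the view's queryset using the __date lookup; f_date and t_date are the datetimes parsed in the view above, and the gte/lte pair is an assumption about the intent (rows whose window falls inside the posted range) rather than the strict equality of the accepted answer:

# Date-part comparison against the posted range.
records = Scrapper.objects.filter(
    start_time__date__gte=f_date.date(),
    end_time__date__lte=t_date.date(),
)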
How to filter the dates from datetime field in django
views.py import datetime from django.shortcuts import render import pymysql from django.http import HttpResponseRedirect from facligoapp.models import Scrapper from django.utils import timezone import pytz roles = "" get_records_by_date = "" def index(request): if request.method == "POST": from_date = request.POST.get("from_date") f_date = datetime.datetime.strptime(from_date,'%Y-%m-%d') print(f_date) to_date = request.POST.get("to_date") t_date = datetime.datetime.strptime(to_date, '%Y-%m-%d') print(t_date) global get_records_by_date get_records_by_date = Scrapper.objects.all().filter(start_time=f_date,end_time=t_date) print(get_records_by_date) else: global roles roles = Scrapper.objects.all() return render(request, "home.html",{"scrappers": roles}) return render(request, "home.html", {"scrappers": get_records_by_date}) models.py from django.db import models from django.db import connections # Create your models here. from django.utils import timezone class Scrapper(models.Model): scrapper_id = models.IntegerField(primary_key=True) scrapper_jobs_log_id = models.IntegerField() external_job_source_id = models.IntegerField() start_time = models.DateField(default=True) end_time = models.DateField(default=True) scrapper_status = models.IntegerField() processed_records = models.IntegerField() new_records = models.IntegerField() skipped_records = models.IntegerField() error_records = models.IntegerField() class Meta: db_table = "scrapper_job_logs" Database structure I post f_date = 2022-11-24 00:00:00 , t_date = 2022-11-24 00:00:00 . I need to get the row start_time and end_time which has dates 2022-11-24. Is there any solution how to filter datas from datetime field. If I pass f_date and t_date my <QuerySet []> is empty.
[ "\nI need to get the row start_time and end_time which has dates 2022-11-24.\n\nIt is a DateTimeField so compare its date using __date lookup so use this Queryset:\nScrapper.objects.filter(start_time__date=f_date,end_time__date=t_date)\n\n" ]
[ 3 ]
[]
[]
[ "django", "django_models", "django_queryset", "django_views", "python" ]
stackoverflow_0074588141_django_django_models_django_queryset_django_views_python.txt
Q: How to remove keys-values from dictionary 1 which are not in dictionary 2 based on common keys? I have two large dictionaries and both dictionaries have same keys, (name of images) and have different values. 1st dict named train_descriptions which looks like this: {'15970.jpg': 'Turtle Check Men Navy Blue Shirt', '39386.jpg': 'Peter England Men Party Blue Jeans', '59263.jpg': 'Titan Women Silver Watch', .... .... '1855.jpg': 'Inkfruit Mens Chain Reaction T-shirt'} and a 2nd dict named train_features {'31973.jpg': array([[0.00125694, 0. , 0.03409385, ..., 0.00434341, 0.00728011, 0.01451511]], dtype=float32), '30778.jpg': array([[0.0174035 , 0.04345186, 0.00772929, ..., 0.02230316, 0. , 0.03104496]], dtype=float32), ..., ..., '38246.jpg': array([[0.00403965, 0.03701203, 0.02616892, ..., 0.02296285, 0.00930257, 0.04575242]], dtype=float32)} The length of both dictionaries are as follows: len(train_descriptions) is 44424 and len(train_features) is 44441 As you can see length of train_description dict is less than length of train_features. train_features dictionary has more keys-values than train_descriptions. How do I remove the keys from train_features dictionary which are not in train_description? To make their length same. A: Use xor to get the difference between the dictionaries diff = train_features.keys() ^ train_descriptions.keys() for k in diff: del train_features[k] A: Using for loop feat = train_features.keys() desc = train_description.keys() common = list(i for i in feat if i not in decc) for i in common: del train_features[i] Edit: See below Above code works. But we can do this more efficiently by not converting dict_keys to list as follows: for i in train_features.keys() - train_description.keys(): del train_features[i] When python dict_keys are subtracted then it gives dict_keys of uncommon keys. First code was first converting into list which was neither efficient nor required. A: just pop() if not exist in another dict. for key in train_descriptions.keys(): if key not in train_features.keys(): train_features.pop(key)
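An alternative sketch that builds a filtered copy instead of deleting in place, which sidesteps mutating a dict while iterating over it; the tiny inline dicts stand in for the real 44k-entry ones:

train_descriptions = {"15970.jpg": "shirt", "39386.jpg": "jeans"}
train_features = {"15970.jpg": [0.1], "39386.jpg": [0.2], "59263.jpg": [0.3]}

# Keep only features whose key also has a description.
train_features = {k: v for k, v in train_features.items()
                  if k in train_descriptions}
print(sorted(train_features))   # ['15970.jpg', '39386.jpg']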
How to remove keys-values from dictionary 1 which are not in dictionary 2 based on common keys?
I have two large dictionaries and both dictionaries have same keys, (name of images) and have different values. 1st dict named train_descriptions which looks like this: {'15970.jpg': 'Turtle Check Men Navy Blue Shirt', '39386.jpg': 'Peter England Men Party Blue Jeans', '59263.jpg': 'Titan Women Silver Watch', .... .... '1855.jpg': 'Inkfruit Mens Chain Reaction T-shirt'} and a 2nd dict named train_features {'31973.jpg': array([[0.00125694, 0. , 0.03409385, ..., 0.00434341, 0.00728011, 0.01451511]], dtype=float32), '30778.jpg': array([[0.0174035 , 0.04345186, 0.00772929, ..., 0.02230316, 0. , 0.03104496]], dtype=float32), ..., ..., '38246.jpg': array([[0.00403965, 0.03701203, 0.02616892, ..., 0.02296285, 0.00930257, 0.04575242]], dtype=float32)} The length of both dictionaries are as follows: len(train_descriptions) is 44424 and len(train_features) is 44441 As you can see length of train_description dict is less than length of train_features. train_features dictionary has more keys-values than train_descriptions. How do I remove the keys from train_features dictionary which are not in train_description? To make their length same.
[ "Use xor to get the difference between the dictionaries\ndiff = train_features.keys() ^ train_descriptions.keys()\nfor k in diff:\n del train_features[k]\n\n", "Using for loop\nfeat = train_features.keys()\ndesc = train_description.keys()\ncommon = list(i for i in feat if i not in decc)\n\nfor i in common: del train_features[i]\n\nEdit: See below\nAbove code works. But we can do this more efficiently by not converting dict_keys to list as follows:\nfor i in train_features.keys() - train_description.keys(): del train_features[i]\n\nWhen python dict_keys are subtracted then it gives dict_keys of uncommon keys. First code was first converting into list which was neither efficient nor required.\n", "just pop() if not exist in another dict.\nfor key in train_descriptions.keys():\n if key not in train_features.keys():\n train_features.pop(key)\n\n" ]
[ 2, 1, 1 ]
[]
[]
[ "dictionary", "python" ]
stackoverflow_0074588103_dictionary_python.txt
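A small runnable demo of the set algebra the answers above rely on; dict key views support set operations directly, so no list conversion is needed (the toy dicts are illustrative):

train_descriptions = {"a.jpg": "desc A", "b.jpg": "desc B"}
train_features = {"a.jpg": [0.1], "b.jpg": [0.2], "c.jpg": [0.3]}

# Keys present in train_features but missing from train_descriptions.
extra = train_features.keys() - train_descriptions.keys()
for k in extra:  # extra is a plain set, so deleting from the dict here is safe
    del train_features[k]

assert train_features.keys() == train_descriptions.keys()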
Q: Numpy - Circular indexing by skip Given a np.arange() array of arbitrary length, x = np.arange(10) for example, what is an efficient way to generate an output array that skips every n values starting from 0 up to n? Sample code with current method and output: def circ(arr, n): y = np.array([]) for i in range(n): y = np.concatenate((y, arr[i::n])) return y.astype(int) for i in range(1,6): circ(x, i) """ array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) array([0, 2, 4, 6, 8, 1, 3, 5, 7, 9]) array([0, 3, 6, 9, 1, 4, 7, 2, 5, 8]) array([0, 4, 8, 1, 5, 9, 2, 6, 3, 7]) array([0, 5, 1, 6, 2, 7, 3, 8, 4, 9]) """ It would be easy to just use mod, but note in the n=4 case that once the output exhausts all the ways to accumulate 3 elements by skipping every 4, it accumulates 2 elements. I considered a pad, resize, flatten, and discard approach, but I'm not sure of a good way to discard the padded values in the end: def circ2(arr, n): arr2 = np.zeros(n*math.ceil(len(arr)/n)) arr2[:len(arr)] = arr arr3 = np.resize(arr2, (len(arr2)//n, n)) return arr3.T.ravel() for i in range(1,5): circ2(x, i) """ array([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]) array([0., 2., 4., 6., 8., 1., 3., 5., 7., 9.]) array([0., 3., 6., 9., 1., 4., 7., 0., 2., 5., 8., 0.]) array([0., 4., 8., 1., 5., 9., 2., 6., 0., 3., 7., 0.]) """ The for loop method works, but it will be really inefficient for larger arrays (up to 5000 elements). Does anyone know of a function, or have insights on making this faster, or a different method? A: Repeated concatenation is inefficient as you have to create a new array over and over. A loop with a single concatenation should be more efficient: x = np.arange(10) n = 4 out = np.concatenate([x[i::n] for i in range(n)]) Output: array([0, 4, 8, 1, 5, 9, 2, 6, 3, 7]) As a function: def circ2(arr, n): return np.concatenate([arr[i::n] for i in range(n)]) Timings for a loop of n in range(1, 6) on an input array of 1M items: # your approach 110 ms ± 4.42 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) # the single concatenate 17.7 ms ± 540 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Numpy - Circular indexing by skip
Given a np.arange() array of arbitrary length, x = np.arange(10) for example, what is an efficient way to generate an output array that skips every n values starting from 0 up to n? Sample code with current method and output: def circ(arr, n): y = np.array([]) for i in range(n): y = np.concatenate((y, arr[i::n])) return y.astype(int) for i in range(1,6): circ(x, i) """ array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) array([0, 2, 4, 6, 8, 1, 3, 5, 7, 9]) array([0, 3, 6, 9, 1, 4, 7, 2, 5, 8]) array([0, 4, 8, 1, 5, 9, 2, 6, 3, 7]) array([0, 5, 1, 6, 2, 7, 3, 8, 4, 9]) """ It would be easy to just use mod, but note in the n=4 case that once the output exhausts all the ways to accumulate 3 elements by skipping every 4, it accumulates 2 elements. I considered a pad, resize, flatten, and discard approach, but I'm not sure of a good way to discard the padded values in the end: def circ2(arr, n): arr2 = np.zeros(n*math.ceil(len(arr)/n)) arr2[:len(arr)] = arr arr3 = np.resize(arr2, (len(arr2)//n, n)) return arr3.T.ravel() for i in range(1,5): circ2(x, i) """ array([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]) array([0., 2., 4., 6., 8., 1., 3., 5., 7., 9.]) array([0., 3., 6., 9., 1., 4., 7., 0., 2., 5., 8., 0.]) array([0., 4., 8., 1., 5., 9., 2., 6., 0., 3., 7., 0.]) """ The for loop method works, but it will be really inefficient for larger arrays (up to 5000 elements). Does anyone know of a function, or have insights on making this faster, or a different method?
[ "Repeated concatenation is inefficient as you have to create a new array over and over.\nA loop with a single concatenation should be more efficient:\nx = np.arange(10)\nn = 4\n\nout = np.concatenate([x[i::n] for i in range(n)])\n\nOutput:\narray([0, 4, 8, 1, 5, 9, 2, 6, 3, 7])\n\nAs a function:\ndef circ2(arr, n):\n return np.concatenate([arr[i::n] for i in range(n)])\n\nTimings for a loop of n in range(1, 6) on an input array of 1M items:\n# your approach\n110 ms ± 4.42 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\n\n# the single concatenate\n17.7 ms ± 540 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\n" ]
[ 1 ]
[]
[]
[ "arrays", "numpy", "python" ]
stackoverflow_0074588090_arrays_numpy_python.txt
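A quick sanity check of the accepted function against the n=4 case called out in the question; the stride-n slices naturally have unequal lengths (3, 3, 2, 2 here), so no padding ever has to be discarded:

import numpy as np

def circ2(arr, n):
    # Build all stride-n slices first, then concatenate exactly once.
    return np.concatenate([arr[i::n] for i in range(n)])

x = np.arange(10)
assert circ2(x, 4).tolist() == [0, 4, 8, 1, 5, 9, 2, 6, 3, 7]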
Q: How to hide or disable google chrome maximize and minimize option using selenium python Can anyone tell me how to hide or disable the Google Chrome maximize and minimize options using Selenium Python automation? Refer to the image link below. While the automation is running, no one should be able to minimize or maximize the Chrome browser. A: Use the Selenium engine; it should do the trick.
How to hide or disable google chrome maximize and minimize option using selenium python
Can anyone tell me how to hide or disable the Google Chrome maximize and minimize options using Selenium Python automation? Refer to the image link below. While the automation is running, no one should be able to minimize or maximize the Chrome browser.
[ "Use selenium engine it should do the trick\n" ]
[ 1 ]
[]
[]
[ "python", "selenium" ]
stackoverflow_0074588233_python_selenium.txt
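The answer above is terse, so here is one concrete way to approximate the goal — a sketch, assuming Chrome and a matching chromedriver are installed. Chrome's kiosk mode draws no title bar at all, so the minimize/maximize buttons simply do not exist; note that the user can usually still close the window, so this hides the controls rather than truly locking the browser:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--kiosk")  # full-screen mode with no title bar or window buttons

driver = webdriver.Chrome(options=options)
driver.get("https://example.com")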
Q: Empty list as json response although code is running I am trying to run the following python script to extract data from google scholar.However, when I run the code,I am getting an empty list as a json response.Note that all necessary libraries are installed. headers = { 'User-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36' } params = { 'q': 'Machine learning', 'hl': 'en' } html = requests.get('https://scholar.google.com/scholar', headers=headers, params=params).text soup = BeautifulSoup(html, 'lxml') # JSON data will be collected here data = [] # Container where all needed data is located for result in soup.select('.gs_r.gs_or.gs_scl'): title = result.select_one('.gs_rt').text title_link = result.select_one('.gs_rt a')['href'] publication_info = result.select_one('.gs_a').text snippet = result.select_one('.gs_rs').text cited_by = result.select_one('#gs_res_ccl_mid .gs_nph+ a')['href'] related_articles = result.select_one('a:nth-child(4)')['href'] try: all_article_versions = result.select_one('a~ a+ .gs_nph')['href'] except: all_article_versions = None try: pdf_link = result.select_one('.gs_or_ggsm a:nth-child(1)')['href'] except: pdf_link = None data.append({ 'title': title, 'title_link': title_link, 'publication_info': publication_info, 'snippet': snippet, 'cited_by': f'https://scholar.google.com{cited_by}', 'related_articles': f'https://scholar.google.com{related_articles}', 'all_article_versions': f'https://scholar.google.com{all_article_versions}', "pdf_link": pdf_link }) print(json.dumps(data, indent = 2, ensure_ascii = False)) Output: [] A: Your code is working fine but the problem was to save the scraped data correctly in json format. So you can use super powerful and easy tool which is pandas DataFrasme to store data in json format from bs4 import BeautifulSoup import requests #import json import pandas as pd headers = { 'User-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36' } params = { 'q': 'Machine learning', 'hl': 'en' } html = requests.get('https://scholar.google.com/scholar', headers=headers, params=params).text soup = BeautifulSoup(html, 'lxml') #print(soup.prettify()) # JSON data will be collected here data = [] # Container where all needed data is located for result in soup.select('.gs_r.gs_or.gs_scl'): title = result.select_one('.gs_rt').text title_link = result.select_one('.gs_rt a')['href'] publication_info = result.select_one('.gs_a').text snippet = result.select_one('.gs_rs').text cited_by = result.select_one('#gs_res_ccl_mid .gs_nph+ a')['href'] related_articles = result.select_one('a:nth-child(4)')['href'] try: all_article_versions = result.select_one('a~ a+ .gs_nph')['href'] except: all_article_versions = None try: pdf_link = result.select_one('.gs_or_ggsm a:nth-child(1)')['href'] except: pdf_link = None data.append({ 'title': title, 'title_link': title_link, 'publication_info': publication_info, 'snippet': snippet, 'cited_by': f'https://scholar.google.com{cited_by}', 'related_articles': f'https://scholar.google.com{related_articles}', 'all_article_versions': f'https://scholar.google.com{all_article_versions}', "pdf_link": pdf_link }) #print(json.dumps(data, indent = 2, ensure_ascii = False)) df = pd.DataFrame(data).to_json('out.json',indent=4) Output: { "title": { "0": "[BOOK][B] Machine learning", "1": "[BOOK][B] Machine learning", "2": "Machine learning", "3": "Machine learning: Trends, perspectives, and 
prospects", "4": "[PDF][PDF] Machine learning algorithms-a review", "5": "What is machine learning?", "6": "[PDF][PDF] Machine learning basics", "7": "What is machine learning? A primer for the epidemiologist", "8": "[BOOK][B] Readings in machine learning", "9": "[BOOK][B] Encyclopedia of machine learning" }, "title_link": { "0": "https:\/\/books.google.com\/books?hl=en&lr=&id=ctM-EAAAQBAJ&oi=fnd&pg=PR6&dq=Machine+learning&ots=oZOqY0Vw_r&sig=Ide7KdAOWXxQwQKPxJKaps4Ag0g", "1": "https:\/\/profs.info.uaic.ro\/~ciortuz\/SLIDES\/2017s\/ml0.pdf", "2": "https:\/\/www.annualreviews.org\/doi\/pdf\/10.1146\/annurev.cs.04.060190.001351", "3": "https:\/\/www.science.org\/doi\/abs\/10.1126\/science.aaa8415", "4": "https:\/\/www.researchgate.net\/profile\/Batta-Mahesh\/publication\/344717762_Machine_Learning_Algorithms_-A_Review\/links\/5f8b2365299bf1b53e2d243a\/Machine-Learning-Algorithms-A-Review.pdf?eid=5082902844932096", "5": "https:\/\/link.springer.com\/chapter\/10.1007\/978-3-319-18305-3_1", "6": "http:\/\/whdeng.cn\/Teaching\/PPT_01_Machine%20learning%20Basics.pdf", "7": "https:\/\/academic.oup.com\/aje\/article-abstract\/188\/12\/2222\/5567515", "8": "https:\/\/books.google.com\/books?hl=en&lr=&id=UgC33U2KMCsC&oi=fnd&pg=PA1&dq=Machine+learning&ots=Thlmkd7Io7&sig=8wkVF31S9nKRAOY8a-OOF8DWRGI", "9": "https:\/\/books.google.com\/books?hl=en&lr=&id=i8hQhp1a62UC&oi=fnd&pg=PT29&dq=Machine+learning&ots=91ogCqhE8N&sig=7yz-s1SuD_e6HZe_-_5jF8lbld8" }, "publication_info": { "0": "ZH Zhou - 2021 - books.google.com", "1": "TM Mitchell, TM Mitchell - 1997 - profs.info.uaic.ro", "2": "TG Dietterich\u00a0- Annual review of computer science, 1990 - annualreviews.org", "3": "MI Jordan, TM Mitchell\u00a0- Science, 2015 - science.org", "4": "B Mahesh\u00a0- International Journal of Science and Research (IJSR)\u00a0\u2026, 2020 - researchgate.net", "5": "I El Naqa, MJ Murphy\u00a0- machine learning in radiation oncology, 2015 - Springer", "6": "H Wang, Z Lei, X Zhang, B Zhou, J Peng\u00a0- Deep Learn, 2016 - whdeng.cn", "7": "Q Bi, KE Goodman, J Kaminsky\u2026\u00a0- American journal of\u00a0\u2026, 2019 - academic.oup.com", "8": "JW Shavlik, T Dietterich, TG Dietterich - 1990 - books.google.com", "9": "C Sammut, GI Webb - 2011 - books.google.com" }, "snippet": { "0": "\u2026 machine learning. The second part includes Chapters 4\u201310, which presents some classic and \npopular machine learning \u2026 cover the core topics of machine learning in one semester, and \u2026", "1": "\u2026 Tom Mitchell (Definition of the [general] learning problem): \u201cA computer program is said \nto learn from experience E with respect to some class of tasks T and performance measure P\u00a0\u2026", "2": "Recent progress in the study of machine learning methods has taken many directions. First, \nin the area of inductive learning, a new formal definition of learning introduced by Leslie \u2026", "3": "\u2026 Machine learning addresses the question of how to build computers that improve \u2026 Recent \nprogress in machine learning has been driven both by the development of new learning \u2026", "4": "\u2026 Here\u201fsa quick look at some of the commonly used algorithms in machine learning (ML) \nSupervised Learning Supervised learning is the machine learning task of learning a function \u2026", "5": "\u2026 A machine learning algorithm is a computational process that \u2026 This training is the \u201clearning\u201d \npart of machine learning. 
The \u2026 can practice \u201clifelong\u201d learning as it processes new data and \u2026", "6": "\u2026 To obtain theoretical guarantees about generalization of a machine learning algorithm, we \n\u2026 Why does deep learning have different behavior than other machine learning methods for \u2026", "7": "\u2026 We provide a brief introduction to 5 common machine learning \u2026 of machine learning \ntechniques in the published literature. We recommend approaches to incorporate machine learning \u2026", "8": "\u2026 in machine learning. We have taught from these readings in our own machine learning \u2026 \nFurthermore, we in machine learning believe that learning techniques provide important con\u2026", "9": "\u2026 Machine Learning came to be identified as a research field in \u2026 machine learning appeared. \nAlthough the field coalesced in the \uf6dc\uf641\uf640\uf639s, research on what we now call machine learning \u2026" }, "cited_by": { "0": "https:\/\/scholar.google.com\/scholar?cites=3387547533016043281&as_sdt=2005&sciodt=0,5&hl=en", "1": "https:\/\/scholar.google.com\/scholar?cites=5160851211484945804&as_sdt=2005&sciodt=0,5&hl=en", "2": "https:\/\/scholar.google.com\/scholar?cites=7073378272324684978&as_sdt=2005&sciodt=0,5&hl=en", "3": "https:\/\/scholar.google.com\/scholar?cites=10883068066968164261&as_sdt=2005&sciodt=0,5&hl=en", "4": "https:\/\/scholar.google.com\/scholar?cites=15194857180303073201&as_sdt=2005&sciodt=0,5&hl=en", "5": "https:\/\/scholar.google.com\/scholar?cites=13248080025875046634&as_sdt=2005&sciodt=0,5&hl=en", "6": "https:\/\/scholar.google.com\/scholar?cites=2537307997858018983&as_sdt=2005&sciodt=0,5&hl=en", "7": "https:\/\/scholar.google.com\/scholar?cites=16719333272424362284&as_sdt=2005&sciodt=0,5&hl=en", "8": "https:\/\/scholar.google.com\/scholar?cites=2031020440241972606&as_sdt=2005&sciodt=0,5&hl=en", "9": "https:\/\/scholar.google.com\/scholar?cites=16791323098365028130&as_sdt=2005&sciodt=0,5&hl=en" }, "related_articles": { "0": "https:\/\/scholar.google.com\/scholar?q=related:EQ8shYj8Ai8J:scholar.google.com\/&scioq=Machine+learning&hl=en&as_sdt=0,5", "1": "https:\/\/scholar.google.com\/scholar?q=related:jF00X9UGn0cJ:scholar.google.com\/&scioq=Machine+learning&hl=en&as_sdt=0,5", "2": "https:\/\/scholar.google.com\/scholar?q=related:sgzh8w-wKWIJ:scholar.google.com\/&scioq=Machine+learning&hl=en&as_sdt=0,5", "3": "https:\/\/scholar.google.com\/scholar?q=related:pdcI9r5sCJcJ:scholar.google.com\/&scioq=Machine+learning&hl=en&as_sdt=0,5", "4": "https:\/\/scholar.google.com\/scholar?q=related:sR_ChBn63tIJ:scholar.google.com\/&scioq=Machine+learning&hl=en&as_sdt=0,5", "5": "https:\/\/scholar.google.com\/scholar?q=related:6uA6mpei2rcJ:scholar.google.com\/&scioq=Machine+learning&hl=en&as_sdt=0,5", "6": "https:\/\/scholar.google.com\/scholar?q=related:p7YVSi5UNiMJ:scholar.google.com\/&scioq=Machine+learning&hl=en&as_sdt=0,5", "7": "https:\/\/scholar.google.com\/scholar?q=related:LDE5SAcBB-gJ:scholar.google.com\/&scioq=Machine+learning&hl=en&as_sdt=0,5", "8": "https:\/\/scholar.google.com\/scholar?q=related:fiUuYFSiLxwJ:scholar.google.com\/&scioq=Machine+learning&hl=en&as_sdt=0,5", "9": "https:\/\/scholar.google.com\/scholar?q=related:IufbymTDBukJ:scholar.google.com\/&scioq=Machine+learning&hl=en&as_sdt=0,5" }, "all_article_versions": { "0": "https:\/\/scholar.google.comNone", "1": "https:\/\/scholar.google.com\/scholar?cluster=5160851211484945804&hl=en&as_sdt=0,5", "2": "https:\/\/scholar.google.com\/scholar?cluster=7073378272324684978&hl=en&as_sdt=0,5", "3": 
"https:\/\/scholar.google.com\/scholar?cluster=10883068066968164261&hl=en&as_sdt=0,5", "4": "https:\/\/scholar.google.com\/scholar?cluster=15194857180303073201&hl=en&as_sdt=0,5", "5": "https:\/\/scholar.google.com\/scholar?cluster=13248080025875046634&hl=en&as_sdt=0,5", "6": "https:\/\/scholar.google.com\/scholar?cluster=2537307997858018983&hl=en&as_sdt=0,5", "7": "https:\/\/scholar.google.com\/scholar?cluster=16719333272424362284&hl=en&as_sdt=0,5", "8": "https:\/\/scholar.google.com\/scholar?cluster=2031020440241972606&hl=en&as_sdt=0,5", "9": "https:\/\/scholar.google.com\/scholar?cluster=16791323098365028130&hl=en&as_sdt=0,5" }, "pdf_link": { "0": null, "1": "https:\/\/profs.info.uaic.ro\/~ciortuz\/SLIDES\/2017s\/ml0.pdf", "2": "https:\/\/web.engr.oregonstate.edu\/~tgd\/publications\/arcs.ps.gz", "3": "http:\/\/www.cs.cmu.edu\/~tom\/pubs\/Science-ML-2015.pdf", "4": "https:\/\/www.researchgate.net\/profile\/Batta-Mahesh\/publication\/344717762_Machine_Learning_Algorithms_-A_Review\/links\/5f8b2365299bf1b53e2d243a\/Machine-Learning-Algorithms-A-Review.pdf?eid=5082902844932096", "5": null, "6": "http:\/\/whdeng.cn\/Teaching\/PPT_01_Machine%20learning%20Basics.pdf", "7": null, "8": null, "9": null } } A: This will let you write results to a JSON file. import requests from bs4 import BeautifulSoup import json start_url = 'https://scholar.google.com/scholar' headers = { 'User-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36' } params = { 'q': 'Machine learning', 'hl': 'en' } res = requests.get(start_url, headers=headers, params=params) soup = BeautifulSoup(res.text, 'lxml') data = [] for result in soup.select('#gs_res_ccl_mid > [data-lid]'): item_dict = {} item_dict['title'] = result.select_one('h3 > a[href]').text item_dict['title_link'] = result.select_one('h3 > a[href]')['href'] item_dict['publication_info'] = result.select_one('.gs_a').text item_dict['snippet'] = result.select_one('.gs_rs').text item_dict['cited_by'] = result.select_one("a:-soup-contains('Cited by')")['href'] item_dict['related_articles'] = result.select_one("a:-soup-contains('Related articles')")['href'] try: item_dict['all_article_versions'] = result.select_one("a.gs_nph:-soup-contains('versions')")['href'] except TypeError: item_dict['all_article_versions'] = "" try: item_dict['pdf_link'] = result.select_one('.gs_or_ggsm > a[href]')['href'] except TypeError: item_dict['pdf_link'] = "" data.append(item_dict) print(json.dumps(data, indent=4)) with open('output.json', 'w', encoding='utf-8') as f: json.dump(data, f, ensure_ascii=False, indent=4)
Empty list as json response although code is running
I am trying to run the following python script to extract data from google scholar. However, when I run the code, I am getting an empty list as a json response. Note that all necessary libraries are installed. headers = { 'User-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36' } params = { 'q': 'Machine learning', 'hl': 'en' } html = requests.get('https://scholar.google.com/scholar', headers=headers, params=params).text soup = BeautifulSoup(html, 'lxml') # JSON data will be collected here data = [] # Container where all needed data is located for result in soup.select('.gs_r.gs_or.gs_scl'): title = result.select_one('.gs_rt').text title_link = result.select_one('.gs_rt a')['href'] publication_info = result.select_one('.gs_a').text snippet = result.select_one('.gs_rs').text cited_by = result.select_one('#gs_res_ccl_mid .gs_nph+ a')['href'] related_articles = result.select_one('a:nth-child(4)')['href'] try: all_article_versions = result.select_one('a~ a+ .gs_nph')['href'] except: all_article_versions = None try: pdf_link = result.select_one('.gs_or_ggsm a:nth-child(1)')['href'] except: pdf_link = None data.append({ 'title': title, 'title_link': title_link, 'publication_info': publication_info, 'snippet': snippet, 'cited_by': f'https://scholar.google.com{cited_by}', 'related_articles': f'https://scholar.google.com{related_articles}', 'all_article_versions': f'https://scholar.google.com{all_article_versions}', "pdf_link": pdf_link }) print(json.dumps(data, indent = 2, ensure_ascii = False)) Output: []
[ "Your code is working fine but the problem was to save the scraped data correctly in json format. So you can use super powerful and easy tool which is pandas DataFrasme to store data in json format\nfrom bs4 import BeautifulSoup\nimport requests\n#import json\nimport pandas as pd\n\nheaders = {\n 'User-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'\n}\n\nparams = {\n 'q': 'Machine learning',\n 'hl': 'en'\n}\n\nhtml = requests.get('https://scholar.google.com/scholar', headers=headers, params=params).text\nsoup = BeautifulSoup(html, 'lxml')\n#print(soup.prettify())\n\n# JSON data will be collected here\ndata = []\n\n# Container where all needed data is located\nfor result in soup.select('.gs_r.gs_or.gs_scl'):\n title = result.select_one('.gs_rt').text\n title_link = result.select_one('.gs_rt a')['href']\n publication_info = result.select_one('.gs_a').text\n snippet = result.select_one('.gs_rs').text\n cited_by = result.select_one('#gs_res_ccl_mid .gs_nph+ a')['href']\n related_articles = result.select_one('a:nth-child(4)')['href']\n try:\n all_article_versions = result.select_one('a~ a+ .gs_nph')['href']\n except:\n all_article_versions = None\n \n try:\n pdf_link = result.select_one('.gs_or_ggsm a:nth-child(1)')['href']\n except: \n pdf_link = None\n\n data.append({\n 'title': title,\n 'title_link': title_link,\n 'publication_info': publication_info,\n 'snippet': snippet,\n 'cited_by': f'https://scholar.google.com{cited_by}',\n 'related_articles': f'https://scholar.google.com{related_articles}',\n 'all_article_versions': f'https://scholar.google.com{all_article_versions}',\n \"pdf_link\": pdf_link\n })\n\n#print(json.dumps(data, indent = 2, ensure_ascii = False))\n\ndf = pd.DataFrame(data).to_json('out.json',indent=4)\n\nOutput:\n{\n \"title\": {\n \"0\": \"[BOOK][B] Machine learning\",\n \"1\": \"[BOOK][B] Machine learning\",\n \"2\": \"Machine learning\",\n \"3\": \"Machine learning: Trends, perspectives, and prospects\",\n \"4\": \"[PDF][PDF] Machine learning algorithms-a review\",\n \"5\": \"What is machine learning?\",\n \"6\": \"[PDF][PDF] Machine learning basics\",\n \"7\": \"What is machine learning? 
A primer for the epidemiologist\",\n \"8\": \"[BOOK][B] Readings in machine learning\",\n \"9\": \"[BOOK][B] Encyclopedia of machine learning\"\n },\n \"title_link\": {\n \"0\": \"https:\\/\\/books.google.com\\/books?hl=en&lr=&id=ctM-EAAAQBAJ&oi=fnd&pg=PR6&dq=Machine+learning&ots=oZOqY0Vw_r&sig=Ide7KdAOWXxQwQKPxJKaps4Ag0g\",\n \"1\": \"https:\\/\\/profs.info.uaic.ro\\/~ciortuz\\/SLIDES\\/2017s\\/ml0.pdf\",\n \"2\": \"https:\\/\\/www.annualreviews.org\\/doi\\/pdf\\/10.1146\\/annurev.cs.04.060190.001351\",\n \"3\": \"https:\\/\\/www.science.org\\/doi\\/abs\\/10.1126\\/science.aaa8415\",\n \"4\": \"https:\\/\\/www.researchgate.net\\/profile\\/Batta-Mahesh\\/publication\\/344717762_Machine_Learning_Algorithms_-A_Review\\/links\\/5f8b2365299bf1b53e2d243a\\/Machine-Learning-Algorithms-A-Review.pdf?eid=5082902844932096\",\n \"5\": \"https:\\/\\/link.springer.com\\/chapter\\/10.1007\\/978-3-319-18305-3_1\",\n \"6\": \"http:\\/\\/whdeng.cn\\/Teaching\\/PPT_01_Machine%20learning%20Basics.pdf\",\n \"7\": \"https:\\/\\/academic.oup.com\\/aje\\/article-abstract\\/188\\/12\\/2222\\/5567515\",\n \"8\": \"https:\\/\\/books.google.com\\/books?hl=en&lr=&id=UgC33U2KMCsC&oi=fnd&pg=PA1&dq=Machine+learning&ots=Thlmkd7Io7&sig=8wkVF31S9nKRAOY8a-OOF8DWRGI\",\n \"9\": \"https:\\/\\/books.google.com\\/books?hl=en&lr=&id=i8hQhp1a62UC&oi=fnd&pg=PT29&dq=Machine+learning&ots=91ogCqhE8N&sig=7yz-s1SuD_e6HZe_-_5jF8lbld8\"\n },\n \"publication_info\": {\n \"0\": \"ZH Zhou - 2021 - books.google.com\",\n \"1\": \"TM Mitchell, TM Mitchell - 1997 - profs.info.uaic.ro\",\n \"2\": \"TG Dietterich\\u00a0- Annual review of computer science, 1990 - annualreviews.org\",\n \"3\": \"MI Jordan, TM Mitchell\\u00a0- Science, 2015 - science.org\",\n \"4\": \"B Mahesh\\u00a0- International Journal of Science and Research (IJSR)\\u00a0\\u2026, 2020 - researchgate.net\",\n \"5\": \"I El Naqa, MJ Murphy\\u00a0- machine learning in radiation oncology, 2015 - Springer\",\n \"6\": \"H Wang, Z Lei, X Zhang, B Zhou, J Peng\\u00a0- Deep Learn, 2016 - whdeng.cn\",\n \"7\": \"Q Bi, KE Goodman, J Kaminsky\\u2026\\u00a0- American journal of\\u00a0\\u2026, 2019 - academic.oup.com\",\n \"8\": \"JW Shavlik, T Dietterich, TG Dietterich - 1990 - books.google.com\",\n \"9\": \"C Sammut, GI Webb - 2011 - books.google.com\"\n },\n \"snippet\": {\n \"0\": \"\\u2026 machine learning. The second part includes Chapters 4\\u201310, which presents some classic and \\npopular machine learning \\u2026 cover the core topics of machine learning in one semester, and \\u2026\",\n \"1\": \"\\u2026 Tom Mitchell (Definition of the [general] learning problem): \\u201cA computer program is said \\nto learn from experience E with respect to some class of tasks T and performance measure P\\u00a0\\u2026\",\n \"2\": \"Recent progress in the study of machine learning methods has taken many directions. 
First, \\nin the area of inductive learning, a new formal definition of learning introduced by Leslie \\u2026\",\n \"3\": \"\\u2026 Machine learning addresses the question of how to build computers that improve \\u2026 Recent \\nprogress in machine learning has been driven both by the development of new learning \\u2026\",\n \"4\": \"\\u2026 Here\\u201fsa quick look at some of the commonly used algorithms in machine learning (ML) \\nSupervised Learning Supervised learning is the machine learning task of learning a function \\u2026\",\n \"5\": \"\\u2026 A machine learning algorithm is a computational process that \\u2026 This training is the \\u201clearning\\u201d \\npart of machine learning. The \\u2026 can practice \\u201clifelong\\u201d learning as it processes new data and \\u2026\",\n \"6\": \"\\u2026 To obtain theoretical guarantees about generalization of a machine learning algorithm, we \\n\\u2026 Why does deep learning have different behavior than other machine learning methods for \\u2026\",\n \"7\": \"\\u2026 We provide a brief introduction to 5 common machine learning \\u2026 of machine learning \\ntechniques in the published literature. We recommend approaches to incorporate machine learning \\u2026\",\n \"8\": \"\\u2026 in machine learning. We have taught from these readings in our own machine learning \\u2026 \\nFurthermore, we in machine learning believe that learning techniques provide important con\\u2026\",\n \"9\": \"\\u2026 Machine Learning came to be identified as a research field in \\u2026 machine learning appeared. \\nAlthough the field coalesced in the \\uf6dc\\uf641\\uf640\\uf639s, research on what we now call machine learning \\u2026\"\n },\n \"cited_by\": {\n \"0\": \"https:\\/\\/scholar.google.com\\/scholar?cites=3387547533016043281&as_sdt=2005&sciodt=0,5&hl=en\",\n \"1\": \"https:\\/\\/scholar.google.com\\/scholar?cites=5160851211484945804&as_sdt=2005&sciodt=0,5&hl=en\",\n \"2\": \"https:\\/\\/scholar.google.com\\/scholar?cites=7073378272324684978&as_sdt=2005&sciodt=0,5&hl=en\",\n \"3\": \"https:\\/\\/scholar.google.com\\/scholar?cites=10883068066968164261&as_sdt=2005&sciodt=0,5&hl=en\",\n \"4\": \"https:\\/\\/scholar.google.com\\/scholar?cites=15194857180303073201&as_sdt=2005&sciodt=0,5&hl=en\",\n \"5\": \"https:\\/\\/scholar.google.com\\/scholar?cites=13248080025875046634&as_sdt=2005&sciodt=0,5&hl=en\",\n \"6\": \"https:\\/\\/scholar.google.com\\/scholar?cites=2537307997858018983&as_sdt=2005&sciodt=0,5&hl=en\",\n \"7\": \"https:\\/\\/scholar.google.com\\/scholar?cites=16719333272424362284&as_sdt=2005&sciodt=0,5&hl=en\",\n \"8\": \"https:\\/\\/scholar.google.com\\/scholar?cites=2031020440241972606&as_sdt=2005&sciodt=0,5&hl=en\",\n \"9\": \"https:\\/\\/scholar.google.com\\/scholar?cites=16791323098365028130&as_sdt=2005&sciodt=0,5&hl=en\"\n },\n \"related_articles\": {\n \"0\": \"https:\\/\\/scholar.google.com\\/scholar?q=related:EQ8shYj8Ai8J:scholar.google.com\\/&scioq=Machine+learning&hl=en&as_sdt=0,5\",\n \"1\": \"https:\\/\\/scholar.google.com\\/scholar?q=related:jF00X9UGn0cJ:scholar.google.com\\/&scioq=Machine+learning&hl=en&as_sdt=0,5\",\n \"2\": \"https:\\/\\/scholar.google.com\\/scholar?q=related:sgzh8w-wKWIJ:scholar.google.com\\/&scioq=Machine+learning&hl=en&as_sdt=0,5\",\n \"3\": \"https:\\/\\/scholar.google.com\\/scholar?q=related:pdcI9r5sCJcJ:scholar.google.com\\/&scioq=Machine+learning&hl=en&as_sdt=0,5\",\n \"4\": \"https:\\/\\/scholar.google.com\\/scholar?q=related:sR_ChBn63tIJ:scholar.google.com\\/&scioq=Machine+learning&hl=en&as_sdt=0,5\",\n 
\"5\": \"https:\\/\\/scholar.google.com\\/scholar?q=related:6uA6mpei2rcJ:scholar.google.com\\/&scioq=Machine+learning&hl=en&as_sdt=0,5\",\n \"6\": \"https:\\/\\/scholar.google.com\\/scholar?q=related:p7YVSi5UNiMJ:scholar.google.com\\/&scioq=Machine+learning&hl=en&as_sdt=0,5\",\n \"7\": \"https:\\/\\/scholar.google.com\\/scholar?q=related:LDE5SAcBB-gJ:scholar.google.com\\/&scioq=Machine+learning&hl=en&as_sdt=0,5\",\n \"8\": \"https:\\/\\/scholar.google.com\\/scholar?q=related:fiUuYFSiLxwJ:scholar.google.com\\/&scioq=Machine+learning&hl=en&as_sdt=0,5\",\n \"9\": \"https:\\/\\/scholar.google.com\\/scholar?q=related:IufbymTDBukJ:scholar.google.com\\/&scioq=Machine+learning&hl=en&as_sdt=0,5\"\n },\n \"all_article_versions\": {\n \"0\": \"https:\\/\\/scholar.google.comNone\",\n \"1\": \"https:\\/\\/scholar.google.com\\/scholar?cluster=5160851211484945804&hl=en&as_sdt=0,5\",\n \"2\": \"https:\\/\\/scholar.google.com\\/scholar?cluster=7073378272324684978&hl=en&as_sdt=0,5\",\n \"3\": \"https:\\/\\/scholar.google.com\\/scholar?cluster=10883068066968164261&hl=en&as_sdt=0,5\",\n \"4\": \"https:\\/\\/scholar.google.com\\/scholar?cluster=15194857180303073201&hl=en&as_sdt=0,5\",\n \"5\": \"https:\\/\\/scholar.google.com\\/scholar?cluster=13248080025875046634&hl=en&as_sdt=0,5\",\n \"6\": \"https:\\/\\/scholar.google.com\\/scholar?cluster=2537307997858018983&hl=en&as_sdt=0,5\",\n \"7\": \"https:\\/\\/scholar.google.com\\/scholar?cluster=16719333272424362284&hl=en&as_sdt=0,5\",\n \"8\": \"https:\\/\\/scholar.google.com\\/scholar?cluster=2031020440241972606&hl=en&as_sdt=0,5\",\n \"9\": \"https:\\/\\/scholar.google.com\\/scholar?cluster=16791323098365028130&hl=en&as_sdt=0,5\"\n },\n \"pdf_link\": {\n \"0\": null,\n \"1\": \"https:\\/\\/profs.info.uaic.ro\\/~ciortuz\\/SLIDES\\/2017s\\/ml0.pdf\",\n \"2\": \"https:\\/\\/web.engr.oregonstate.edu\\/~tgd\\/publications\\/arcs.ps.gz\",\n \"3\": \"http:\\/\\/www.cs.cmu.edu\\/~tom\\/pubs\\/Science-ML-2015.pdf\",\n \"4\": \"https:\\/\\/www.researchgate.net\\/profile\\/Batta-Mahesh\\/publication\\/344717762_Machine_Learning_Algorithms_-A_Review\\/links\\/5f8b2365299bf1b53e2d243a\\/Machine-Learning-Algorithms-A-Review.pdf?eid=5082902844932096\",\n \"5\": null,\n \"6\": \"http:\\/\\/whdeng.cn\\/Teaching\\/PPT_01_Machine%20learning%20Basics.pdf\",\n \"7\": null,\n \"8\": null,\n \"9\": null\n }\n}\n\n", "This will let you write results to a JSON file.\nimport requests\nfrom bs4 import BeautifulSoup\n\nimport json\n\nstart_url = 'https://scholar.google.com/scholar'\n\nheaders = {\n 'User-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'\n}\n\nparams = {\n 'q': 'Machine learning',\n 'hl': 'en'\n}\n\nres = requests.get(start_url, headers=headers, params=params)\nsoup = BeautifulSoup(res.text, 'lxml')\ndata = []\nfor result in soup.select('#gs_res_ccl_mid > [data-lid]'):\n item_dict = {}\n item_dict['title'] = result.select_one('h3 > a[href]').text\n item_dict['title_link'] = result.select_one('h3 > a[href]')['href']\n item_dict['publication_info'] = result.select_one('.gs_a').text\n item_dict['snippet'] = result.select_one('.gs_rs').text\n item_dict['cited_by'] = result.select_one(\"a:-soup-contains('Cited by')\")['href']\n item_dict['related_articles'] = result.select_one(\"a:-soup-contains('Related articles')\")['href']\n try:\n item_dict['all_article_versions'] = result.select_one(\"a.gs_nph:-soup-contains('versions')\")['href']\n except TypeError:\n item_dict['all_article_versions'] = \"\"\n \n try:\n 
item_dict['pdf_link'] = result.select_one('.gs_or_ggsm > a[href]')['href']\n except TypeError: \n item_dict['pdf_link'] = \"\"\n\n data.append(item_dict)\n\n\nprint(json.dumps(data, indent=4))\n\nwith open('output.json', 'w', encoding='utf-8') as f:\n json.dump(data, f, ensure_ascii=False, indent=4)\n\n" ]
[ 0, 0 ]
[]
[]
[ "beautifulsoup", "json", "python", "web_scraping" ]
stackoverflow_0074587071_beautifulsoup_json_python_web_scraping.txt
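If soup.select() matches nothing, the JSON output will be empty no matter how it is serialized; Google often serves a CAPTCHA/consent page or unexpected markup, in which case the selectors find no results. A small hedged debugging sketch, reusing the request from the question:

import requests
from bs4 import BeautifulSoup

headers = {'User-agent': 'Mozilla/5.0'}
params = {'q': 'Machine learning', 'hl': 'en'}

res = requests.get('https://scholar.google.com/scholar', headers=headers, params=params)
print(res.status_code)  # expect 200; 403/429 usually mean you were rate limited
soup = BeautifulSoup(res.text, 'lxml')
results = soup.select('.gs_r.gs_or.gs_scl')
if not results:
    print(res.text[:500])  # inspect what Google actually served (CAPTCHA page?)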
Q: Python Tkinter f string to variable Please help. How do I put an end user's email content into an f-string in Python? What should we insert here: st.insert(INSERT, "...") so that mail.Body # (1) gets the st variable and works like mail.Body # (2)? I'm sorry if I don't explain my question clearly; this is my first question in Python. Please let me know if you need more information. st = ScrolledText(window) st.insert(INSERT, "...") st.grid(row=6, column=1, columnspan=2, pady=5, padx=20, ipadx=50) def SendEmail(): # Load email info into dataframe try: email_info = pd.read_excel(email_file_path) outlook = win32.Dispatch('outlook.application') for index, row in email_info.iterrows(): mail = outlook.CreateItem(0) mail.To = row["Email"] mail.CC = row["CC"] mail.Subject = f"OOReport for: {row['Vendor']}" mail.Body = st.get(1.0, END) # (1) mail.Body = f"""Hi {row['First Name']} # (2) Please find the attached report for {row['Vendor']}. Thanks, xxxxxxx xxxxxxxx yuyyyyyyy zzzzzz """ except Exception as e: messagebox.showerror('Python Error', e) A: I found my solution. st.insert(INSERT, """f\"""Hi {row['First Name']} Please find the attached report for {row['Vendor']}. Best regards, xxxxxxx yuyyyyyyy zzzzzz \""" """) mail.Body = eval(st.get(1.0, END))
Python Tkinter f string to variable
Please help. How do I put an end user's email content into an f-string in Python? What should we insert here: st.insert(INSERT, "...") so that mail.Body # (1) gets the st variable and works like mail.Body # (2)? I'm sorry if I don't explain my question clearly; this is my first question in Python. Please let me know if you need more information. st = ScrolledText(window) st.insert(INSERT, "...") st.grid(row=6, column=1, columnspan=2, pady=5, padx=20, ipadx=50) def SendEmail(): # Load email info into dataframe try: email_info = pd.read_excel(email_file_path) outlook = win32.Dispatch('outlook.application') for index, row in email_info.iterrows(): mail = outlook.CreateItem(0) mail.To = row["Email"] mail.CC = row["CC"] mail.Subject = f"OOReport for: {row['Vendor']}" mail.Body = st.get(1.0, END) # (1) mail.Body = f"""Hi {row['First Name']} # (2) Please find the attached report for {row['Vendor']}. Thanks, xxxxxxx xxxxxxxx yuyyyyyyy zzzzzz """ except Exception as e: messagebox.showerror('Python Error', e)
[ "I found my solution.\nst.insert(INSERT, \"\"\"f\\\"\"\"Hi {row['First Name']}\n\nPlease find the attached report for {row['Vendor']}.\n\nBest regards,\nxxxxxxx\nyuyyyyyyy\nzzzzzz\n\\\"\"\"\n \n\"\"\")\n\nmail.Body = eval(st.get(1.0, END))\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "tkinter", "tkinter_entry" ]
stackoverflow_0074588123_python_python_3.x_tkinter_tkinter_entry.txt
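The accepted solution works, but eval() on user-editable widget text will execute arbitrary Python. A safer sketch of the same idea uses string.Template, with $-placeholders in the widget instead of f-string braces (the row dict below stands in for the DataFrame row; the placeholder names are illustrative):

from string import Template

row = {'First Name': 'Ana', 'Vendor': 'Acme'}  # stands in for the pandas row

# Text the user would edit in the ScrolledText widget:
body_template = Template("""Hi $first_name

Please find the attached report for $vendor.

Best regards,
xxxxxxx""")

mail_body = body_template.substitute(first_name=row['First Name'], vendor=row['Vendor'])
print(mail_body)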
Q: Word search puzzle generator in Python I am creating a word search puzzle generator, but I am a beginner in programming, so I'm having some trouble: some words overlap, and I don't really know where the problem is. I need a program that asks for the rows and columns, creates the grid from that information, and places each given word (key) in the grid in a vertical or horizontal direction. In addition, the grid should be at most 25x25. Any help would be appreciated. (Also, I'm not a native English speaker.) import string import random col = int(input("Indique cuantas columnas desea para la sopa de letras: "))#here I ask for the columns (aka width) fil = int(input("Indique cuantas filas desea para la sopa de letras: "))#here is the rows (aka height ) def posicionPalabra(palabra,cuadricula, fil, col): palabra = random.choice([palabra, palabra[::-1]]) direccion = random.choice([[1,0], [0,1]]) print(f'La posicion de {palabra} en el orden {direccion}...') xComienzo = col if direccion[0] == 0 else col - len(palabra) - 1 yComienzo = fil if direccion[1] == 0 else fil - len(palabra) - 1 x = random.randrange(0, xComienzo) y = random.randrange(0, yComienzo) print([x, y]) for i in range(len(palabra)): cuadricula[x + direccion[0]*i][y + direccion[1]*i] = palabra[i] return cuadricula cuadricula = [[random.choice(string.ascii_uppercase) for a in range(col)] for h in range(fil)] for palabra in ["Alemania","Belice","Cuba","Finlandia","Guatemala","Mexico"]: posicionPalabra(palabra,cuadricula,25,25) print ("\n".join(map(lambda row: " ".join(row), cuadricula))) I have tried other grid sizes but it gives me this: Indique cuantas columnas desea para la sopa de letras: 15 Indique cuantas filas desea para la sopa de letras: 18 La posicion de Alemania en el orden [1, 0]... [3, 17] Traceback (most recent call last): File "C:/Users/VictoriaBC/Desktop/de py/sopapaises.py", line 28, in <module> posicionPalabra(palabra,cuadricula,25,25) File "C:/Users/VictoriaBC/Desktop/de py/sopapaises.py", line 22, in posicionPalabra cuadricula[x + direccion[0]*i][y + direccion[1]*i] = palabra[i] IndexError: list assignment index out of range *I also need to ask the player for the words to find, but that is already done A: You index cuadricula with [x + direccion[0]*i][y + direccion[1]*i], but on some iterations of for i in range(len(palabra)): those indices fall outside the grid. The grid was built with fil rows and col columns (18 rows by 15 columns in your run), yet posicionPalabra is called with the hard-coded size 25, 25, so the random start positions can land beyond the real edges. Pass the actual fil and col instead of 25, 25, and keep the first (row) index bounded by fil and the second (column) index by col.
Word search puzzle generator in Python
I am creating a word search puzzle generator, but I am a beginner in programming, so I'm having some trouble: some words overlap, and I don't really know where the problem is. I need a program that asks for the rows and columns, creates the grid from that information, and places each given word (key) in the grid in a vertical or horizontal direction. In addition, the grid should be at most 25x25. Any help would be appreciated. (Also, I'm not a native English speaker.) import string import random col = int(input("Indique cuantas columnas desea para la sopa de letras: "))#here I ask for the columns (aka width) fil = int(input("Indique cuantas filas desea para la sopa de letras: "))#here is the rows (aka height ) def posicionPalabra(palabra,cuadricula, fil, col): palabra = random.choice([palabra, palabra[::-1]]) direccion = random.choice([[1,0], [0,1]]) print(f'La posicion de {palabra} en el orden {direccion}...') xComienzo = col if direccion[0] == 0 else col - len(palabra) - 1 yComienzo = fil if direccion[1] == 0 else fil - len(palabra) - 1 x = random.randrange(0, xComienzo) y = random.randrange(0, yComienzo) print([x, y]) for i in range(len(palabra)): cuadricula[x + direccion[0]*i][y + direccion[1]*i] = palabra[i] return cuadricula cuadricula = [[random.choice(string.ascii_uppercase) for a in range(col)] for h in range(fil)] for palabra in ["Alemania","Belice","Cuba","Finlandia","Guatemala","Mexico"]: posicionPalabra(palabra,cuadricula,25,25) print ("\n".join(map(lambda row: " ".join(row), cuadricula))) I have tried other grid sizes but it gives me this: Indique cuantas columnas desea para la sopa de letras: 15 Indique cuantas filas desea para la sopa de letras: 18 La posicion de Alemania en el orden [1, 0]... [3, 17] Traceback (most recent call last): File "C:/Users/VictoriaBC/Desktop/de py/sopapaises.py", line 28, in <module> posicionPalabra(palabra,cuadricula,25,25) File "C:/Users/VictoriaBC/Desktop/de py/sopapaises.py", line 22, in posicionPalabra cuadricula[x + direccion[0]*i][y + direccion[1]*i] = palabra[i] IndexError: list assignment index out of range *I also need to ask the player for the words to find, but that is already done
[ "you try to get this [x + direccion[0]*i][y + direccion[1]*i] from this cuadricula 2D list, but you cannot do this because in cuadricula there is not such indices that are given by [x + direccion[0]*i][y + direccion[1]*i] on some lvl of iteration of for i in range(len(palabra)):\n" ]
[ 0 ]
[]
[]
[ "python", "wordsearch" ]
stackoverflow_0074588261_python_wordsearch.txt
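Building on the answer above, a hedged sketch of a corrected placement function: it uses the real grid size instead of the hard-coded 25, 25, and keeps the first index (the row) bounded by fil and the second (the column) by col. It assumes every word is no longer than either dimension, and overlap checking remains a separate step:

import random

def posicion_palabra(palabra, cuadricula, fil, col):
    palabra = palabra.upper()
    direccion = random.choice([(0, 1), (1, 0)])  # (delta_fila, delta_columna)
    # Leave room for the whole word along the chosen direction.
    max_f = fil - 1 if direccion[0] == 0 else fil - len(palabra)
    max_c = col - 1 if direccion[1] == 0 else col - len(palabra)
    f = random.randint(0, max_f)
    c = random.randint(0, max_c)
    for i, letra in enumerate(palabra):
        cuadricula[f + direccion[0] * i][c + direccion[1] * i] = letra

# Called with the actual dimensions, not 25, 25:
# posicion_palabra(palabra, cuadricula, fil, col)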
Q: Markov Chain: Finding terminal state calculation I'm trying to figure out this problem. Hopefully someone can tell me how to complete this. I consulted the following pages, but I was unable to write a code in java/python that produces the correct output and passes all test cases. I'd appreciate any and all help. Markov chain probability calculation - Python Calculating Markov chain probabilities with values too large to exponentiate Write a function answer(m) that takes an array of array of nonnegative ints representing how many times that state has gone to the next state and return an array of ints for each terminal state giving the exact probabilities of each terminal state, represented as the numerator for each state, then the denominator for all of them at the end and in simplest form. The matrix is at most 10 by 10. It is guaranteed that no matter which state the ore is in, there is a path from that state to a terminal state. That is, the processing will always eventually end in a stable state. The ore starts in state 0. The denominator will fit within a signed 32-bit integer during the calculation, as long as the fraction is simplified regularly. For example, consider the matrix m: [ [0,1,0,0,0,1], # s0, the initial state, goes to s1 and s5 with equal probability [4,0,0,3,2,0], # s1 can become s0, s3, or s4, but with different probabilities [0,0,0,0,0,0], # s2 is terminal, and unreachable (never observed in practice) [0,0,0,0,0,0], # s3 is terminal [0,0,0,0,0,0], # s4 is terminal [0,0,0,0,0,0], # s5 is terminal ] So, we can consider different paths to terminal states, such as: s0 -> s1 -> s3 s0 -> s1 -> s0 -> s1 -> s0 -> s1 -> s4 s0 -> s1 -> s0 -> s5 Tracing the probabilities of each, we find that s2 has probability 0 s3 has probability 3/14 s4 has probability 1/7 s5 has probability 9/14 A: I'm not sure what the results for the edge cases should be, but what I did for this problem is: Created a second matrix that held all of the denominators for each probability by adding up all of the numerators in each row. Find the first terminal state in the matrix to use as the bound of the non-terminal states. Subtract the matrix bounded by the first terminal from the identity matrix of the same size. Find the inverse of the difference. There's a couple ways to do this, I decided to augment the matching identity matrix with the difference. Multiply the inverse by the matrix bounded from the first terminal to the end of the matrix. Then, find the resulting denominator and return the first row of numerators of the matrix bounded from the first terminal to the end of the matrix. Side notes: You'll need to write a simplify function that simplifies fractions (you may also need to write gcf and lcm functions to help you simplify). You may also need to sort the matrix so the terminals are at the end of the matrix so it is in proper form. edge cases: 1x1 matrix, matrix that only has 1 nonterminal state, 10x10 matrix, matrix that only has 1 terminal state A: I know it's a bit old topic but maybe someone will be interested. In my case, this PDF helped me a lot: https://math.dartmouth.edu/archive/m20x06/public_html/Lecture14.pdf The algorithm is easy to implement. As Ana said you need to sort matrix, remember to sort rows and columns at the same time to get proper results. Regarding edge cases: 1x1 is always 100% if you start from the only one state and it must terminal as there is no other state. if there is only one nonterminal state, then the result will be the same as this row. 
No calculation needed. The last two edge cases from Ana's answer (which should be accepted in my opinion) are not the edge cases, to be frank, they are regular cases so you need to calculate the answer normally. A: As an alternative approach, one may consider Engel's algorithm for absorbing Markov chains to compute absorption probabilities. This requires no matrix inversion / linear system solution, and hence there is no need in rational number arithmetic. A: The matrix given in the question represents a transition matrix for an Absorbing Markov chain. Hints: What is a Markov chain Absorbing Markov chain Probabilities The above two links should be enough to come up with a solution for the problem. I chose to solve system of linear equations instead of matrix inversion, to avoid float inaccuracy. fractions module in python can be used for easier calculation. Remember to take care of edge test cases, like 1X1 matrix.
Markov Chain: Finding terminal state calculation
I'm trying to figure out this problem. Hopefully someone can tell me how to complete this. I consulted the following pages, but I was unable to write a code in java/python that produces the correct output and passes all test cases. I'd appreciate any and all help. Markov chain probability calculation - Python Calculating Markov chain probabilities with values too large to exponentiate Write a function answer(m) that takes an array of array of nonnegative ints representing how many times that state has gone to the next state and return an array of ints for each terminal state giving the exact probabilities of each terminal state, represented as the numerator for each state, then the denominator for all of them at the end and in simplest form. The matrix is at most 10 by 10. It is guaranteed that no matter which state the ore is in, there is a path from that state to a terminal state. That is, the processing will always eventually end in a stable state. The ore starts in state 0. The denominator will fit within a signed 32-bit integer during the calculation, as long as the fraction is simplified regularly. For example, consider the matrix m: [ [0,1,0,0,0,1], # s0, the initial state, goes to s1 and s5 with equal probability [4,0,0,3,2,0], # s1 can become s0, s3, or s4, but with different probabilities [0,0,0,0,0,0], # s2 is terminal, and unreachable (never observed in practice) [0,0,0,0,0,0], # s3 is terminal [0,0,0,0,0,0], # s4 is terminal [0,0,0,0,0,0], # s5 is terminal ] So, we can consider different paths to terminal states, such as: s0 -> s1 -> s3 s0 -> s1 -> s0 -> s1 -> s0 -> s1 -> s4 s0 -> s1 -> s0 -> s5 Tracing the probabilities of each, we find that s2 has probability 0 s3 has probability 3/14 s4 has probability 1/7 s5 has probability 9/14
[ "I'm not sure what the results for the edge cases should be, but what I did for this problem is:\n\nCreated a second matrix that held all of the denominators for each probability by adding up all of the numerators in each row.\nFind the first terminal state in the matrix to use as the bound of the non-terminal states.\nSubtract the matrix bounded by the first terminal from the identity matrix of the same size. \nFind the inverse of the difference. There's a couple ways to do this, I decided to augment the matching identity matrix with the difference. \nMultiply the inverse by the matrix bounded from the first terminal to the end of the matrix. \nThen, find the resulting denominator and return the first row of numerators of the matrix bounded from the first terminal to the end of the matrix. \n\nSide notes:\n\nYou'll need to write a simplify function that simplifies fractions (you may also need to write gcf and lcm functions to help you simplify). \nYou may also need to sort the matrix so the terminals are at the end of the matrix so it is in proper form. \nedge cases: 1x1 matrix, matrix that only has 1 nonterminal state, 10x10 matrix, matrix that only has 1 terminal state \n\n", "I know it's a bit old topic but maybe someone will be interested.\nIn my case, this PDF helped me a lot: https://math.dartmouth.edu/archive/m20x06/public_html/Lecture14.pdf\nThe algorithm is easy to implement.\nAs Ana said you need to sort matrix, remember to sort rows and columns at the same time to get proper results.\nRegarding edge cases:\n\n1x1 is always 100% if you start from the only one state and it must terminal as there is no other state.\nif there is only one nonterminal state, then the result will be the same as this row. No calculation needed.\n\nThe last two edge cases from Ana's answer (which should be accepted in my opinion) are not the edge cases, to be frank, they are regular cases so you need to calculate the answer normally. \n", "As an alternative approach, one may consider Engel's algorithm for absorbing Markov chains to compute absorption probabilities. This requires no matrix inversion / linear system solution, and hence there is no need in rational number arithmetic.\n", "The matrix given in the question represents a transition matrix for an Absorbing Markov chain.\nHints:\nWhat is a Markov chain\nAbsorbing Markov chain Probabilities\nThe above two links should be enough to come up with a solution for the problem.\nI chose to solve system of linear equations instead of matrix inversion, to avoid float inaccuracy. fractions module in python can be used for easier calculation.\nRemember to take care of edge test cases, like 1X1 matrix.\n" ]
[ 5, 2, 0, 0 ]
[]
[]
[ "java", "python" ]
stackoverflow_0040433526_java_python.txt
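A hedged sketch along the lines of the last two answers: solve the absorbing-chain system (I - Q) B = R with exact fractions via Gauss-Jordan elimination, which sidesteps float inaccuracy entirely. It assumes, per the problem statement, that every non-terminal state can reach a terminal one, and it handles the edge case where state 0 is itself terminal (including the 1x1 matrix):

from fractions import Fraction
from math import gcd

def answer(m):
    terminal = [i for i, row in enumerate(m) if sum(row) == 0]
    nonterm = [i for i in range(len(m)) if sum(m[i]) != 0]
    if sum(m[0]) == 0:  # state 0 is itself terminal
        return [1 if t == 0 else 0 for t in terminal] + [1]
    prob = {i: [Fraction(x, sum(m[i])) for x in m[i]] for i in nonterm}
    k = len(nonterm)
    # Augmented matrix [I - Q | R]; rows/columns follow the nonterminal order.
    A = [[(1 if i == j else 0) - prob[nonterm[i]][nonterm[j]] for j in range(k)]
         + [prob[nonterm[i]][t] for t in terminal] for i in range(k)]
    for c in range(k):
        p = next(r for r in range(c, k) if A[r][c] != 0)  # find a usable pivot
        A[c], A[p] = A[p], A[c]
        A[c] = [v / A[c][c] for v in A[c]]
        for r in range(k):
            if r != c and A[r][c] != 0:
                factor = A[r][c]
                A[r] = [a - factor * b for a, b in zip(A[r], A[c])]
    b = A[0][k:]  # nonterm[0] == 0, so row 0 holds the start state's answers
    denom = 1
    for frac in b:
        denom = denom * frac.denominator // gcd(denom, frac.denominator)
    return [frac.numerator * (denom // frac.denominator) for frac in b] + [denom]

For the example matrix in the question this returns [0, 3, 2, 9, 14], i.e. probabilities 0, 3/14, 1/7 and 9/14 over the common denominator 14.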
Q: Python Class instance object has no attribute 'undefined_method' Class definition class Car: amount_cars = 0 def __init__(self, manufacturer, model, hp): self.manufacturer = manufacturer self.model = model self.hp = hp Car.amount_cars += 1 def print_car_amount(self): print("Amount: {}".format(Car.amount_cars)) Creating instance myCar1 = Car("Tesla", "Model X", 525) Printing instance myCar1.print_info() Output: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Input In [37], in <cell line: 1>() ----> 1 myCar1.print_info() AttributeError: 'Car' object has no attribute 'print_info I need help finding the error. A: As stated in the error message, you have no method by the name print_info. You probably meant to call: myCar1.print_car_amount()
Python Class instance object has no attribute 'undefined_method'
Class definition class Car: amount_cars = 0 def __init__(self, manufacturer, model, hp): self.manufacturer = manufacturer self.model = model self.hp = hp Car.amount_cars += 1 def print_car_amount(self): print("Amount: {}".format(Car.amount_cars)) Creating instance myCar1 = Car("Tesla", "Model X", 525) Printing instance myCar1.print_info() Output: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Input In [37], in <cell line: 1>() ----> 1 myCar1.print_info() AttributeError: 'Car' object has no attribute 'print_info I need help finding the error.
[ "As it is stated in the error message, tou have no method by the name print_info. Probably, you're trying to do:\nmyCar1.print_car_amount()\n\n" ]
[ 4 ]
[ "class Car:\n\namount_cars = 0\n\ndef __init__(self, manufacturer, model, hp):\n self.manufacturer = manufacturer\n self.model = model\n self.hp = hp\n Car.amount_cars += 1\n\ndef print_info(self): # Changed\n print(\"Amount: {}\".format(Car.amount_cars))\n\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0074588076_python.txt
Q: Circle collision, pop-up error "CreaCir' object has no attribute 'radio'" I make a program where I generate circles of random size and position, by means of classes in python, I have managed to generate the circles as I wish but all collide with each other, so I have created a method so that this does not happen but it generates an error that I can not identify "'CreaCir' object has no attribute 'radio'", so I do not know what I'm doing wrong, the terminal tells me that the error is in the circle class in the collision method in the return, if someone could mark my error and give me advice to fix it I would appreciate it. my code is as follows. first my circle class where I contain the collision method: class Circulo(PosGeo): def __init__(self, x, y, r): self.x = x self.y = y self.radio = r self.cx = x+r self.cy = y+r def dibujar(self, ventana): pg.draw.circle(ventana, "white", (self.x, self.y), self.radio, 1) def update(self): pass def colisionC(self, c2): return self.radio + c2.radio > sqrt(pow(self.cx - c2.cx, 2) + pow(self.cy - c2.cy, 2))#where I get the error now a part of my main code: class CreaCir: def __init__(self, figs): self.i = 1 self.r = randint(5, 104) self.x = randint(0, 500 + self.r) self.y = randint(0, 300 + self.r) self.creOne = Circulo(self.x, self.y, self.r) self.figs.append(self.creOne) def update(self): if self.i <100: choca = False self.r = randint(5, 104) self.x = randint(0, 500 + self.r) self.y = randint(0, 300 + self.r) self.creOne = Circulo(self.x, self.y, self.r) for j in range (self.i): choca = self.creOne.colisionC(self.figs[j])#here is where I use the collision method to my object if choca == True: break if choca == False: self.figs.append(self.creOne) self.i+=1 def dibujar(self, ventana): pass called: class Ventana: def __init__(self, Ven_Tam= (700, 500)): pg.init() self.ven_tam = Ven_Tam self.ven = pg.display.set_caption("Linea") self.ven = pg.display.set_mode(self.ven_tam) self.ven.fill(pg.Color('#404040')) self.figs = [] self.figs.append(CreaCir(self.figs)) self.reloj = pg.time.Clock() def check_events(self): for event in pg.event.get(): if event.type == pg.QUIT or (event.type == pg.KEYDOWN and event.key == pg.K_ESCAPE): quit() pg.display.flip() def run(self): while True: self.check_events() for fig in self.figs: fig.update() for fig in self.figs: fig.dibujar(self.ven) self.reloj.tick(30) if __name__ == '__main__': ven = Ventana() ven.run() A: The cause of the error is that you have put a CreaCir object in the figs list. So the first item in figs (self.creOne.colisionC(self.figs[j])) is a CreaCir object and this object has no radio. Just remove that line of code, it is absolutely unnecessary. self.figs.append(CreaCir(self.figs)) Create the CreaCir in run, but don't put it in the list: class Ventana: def __init__(self, Ven_Tam= (700, 500)): pg.init() self.ven_tam = Ven_Tam self.ven = pg.display.set_caption("Linea") self.ven = pg.display.set_mode(self.ven_tam) self.ven.fill(pg.Color('#404040')) self.figs = [] self.reloj = pg.time.Clock() def check_events(self): # [...] def run(self): cirCreater = CreaCir(self.figs) while True: self.check_events() cirCreater.update() for fig in self.figs: fig.dibujar(self.ven) self.reloj.tick(30) To detect the collision of 2 circles, you need to calculate the distance between the centers of the circles and check if the distance is greater than the sum of the radii of the circles. 
The 3rd argument of pygame.draw.circle is the center of the circle, but not the top left of the bounding box of the circle: pg.draw.circle(ventana, "white", (self.x, self.y), self.radio, 1) pg.draw.circle(ventana, "white", (self.cx, self.cy), self.radio, 1) Complete example: import pygame as pg import math, random class Circulo: def __init__(self, x, y, r): self.x, self.y = x, y self.radio = r self.cx, self.cy = x+r, y+r def dibujar(self, ventana): pg.draw.circle(ventana, "white", (self.cx, self.cy), self.radio, 1) def colisionC(self, c2): dx = self.cx - c2.cx dy = self.cy - c2.cy return self.radio + c2.radio > math.sqrt(dx*dx + dy*dy) class CreaCir: def __init__(self, figs): self.figs = figs def update(self): if len(self.figs) < 100: choca = False r = random.randint(5, 104) x = random.randint(0, 700 - r*2) y = random.randint(0, 500 - r*2) creOne = Circulo(x, y, r) for fig in self.figs: choca = creOne.colisionC(fig) if choca == True: break if choca == False: self.figs.append(creOne) class Ventana: def __init__(self, Ven_Tam= (700, 500)): pg.init() self.ven_tam = Ven_Tam self.ven = pg.display.set_caption("Linea") self.ven = pg.display.set_mode(self.ven_tam) self.figs = [] self.reloj = pg.time.Clock() def check_events(self): for event in pg.event.get(): if event.type == pg.QUIT or (event.type == pg.KEYDOWN and event.key == pg.K_ESCAPE): quit() def run(self): cirCreater = CreaCir(self.figs) while True: self.check_events() cirCreater.update() self.ven.fill(pg.Color('#404040')) for fig in self.figs: fig.dibujar(self.ven) pg.display.flip() self.reloj.tick(30) if __name__ == '__main__': ven = Ventana() ven.run()
Circle collision, pop-up error "'CreaCir' object has no attribute 'radio'"
I am making a program that generates circles of random size and position, using classes in Python. I have managed to generate the circles as I wish, but they all collide with each other, so I created a method to prevent that; however, it raises an error that I cannot identify: "'CreaCir' object has no attribute 'radio'". I do not know what I am doing wrong. The terminal tells me that the error is in the circle class, in the collision method, in the return statement. If someone could point out my error and give me advice on fixing it, I would appreciate it. My code is as follows. First, my circle class, which contains the collision method: class Circulo(PosGeo): def __init__(self, x, y, r): self.x = x self.y = y self.radio = r self.cx = x+r self.cy = y+r def dibujar(self, ventana): pg.draw.circle(ventana, "white", (self.x, self.y), self.radio, 1) def update(self): pass def colisionC(self, c2): return self.radio + c2.radio > sqrt(pow(self.cx - c2.cx, 2) + pow(self.cy - c2.cy, 2))#where I get the error Now a part of my main code: class CreaCir: def __init__(self, figs): self.i = 1 self.r = randint(5, 104) self.x = randint(0, 500 + self.r) self.y = randint(0, 300 + self.r) self.creOne = Circulo(self.x, self.y, self.r) self.figs.append(self.creOne) def update(self): if self.i <100: choca = False self.r = randint(5, 104) self.x = randint(0, 500 + self.r) self.y = randint(0, 300 + self.r) self.creOne = Circulo(self.x, self.y, self.r) for j in range (self.i): choca = self.creOne.colisionC(self.figs[j])#here is where I use the collision method on my object if choca == True: break if choca == False: self.figs.append(self.creOne) self.i+=1 def dibujar(self, ventana): pass And the class where it is called: class Ventana: def __init__(self, Ven_Tam= (700, 500)): pg.init() self.ven_tam = Ven_Tam self.ven = pg.display.set_caption("Linea") self.ven = pg.display.set_mode(self.ven_tam) self.ven.fill(pg.Color('#404040')) self.figs = [] self.figs.append(CreaCir(self.figs)) self.reloj = pg.time.Clock() def check_events(self): for event in pg.event.get(): if event.type == pg.QUIT or (event.type == pg.KEYDOWN and event.key == pg.K_ESCAPE): quit() pg.display.flip() def run(self): while True: self.check_events() for fig in self.figs: fig.update() for fig in self.figs: fig.dibujar(self.ven) self.reloj.tick(30) if __name__ == '__main__': ven = Ventana() ven.run()
[ "The cause of the error is that you have put a CreaCir object in the figs list. So the first item in figs (self.creOne.colisionC(self.figs[j])) is a CreaCir object and this object has no radio.\nJust remove that line of code, it is absolutely unnecessary.\nself.figs.append(CreaCir(self.figs))\nCreate the CreaCir in run, but don't put it in the list:\nclass Ventana:\n def __init__(self, Ven_Tam= (700, 500)):\n \n pg.init()\n self.ven_tam = Ven_Tam\n\n self.ven = pg.display.set_caption(\"Linea\")\n self.ven = pg.display.set_mode(self.ven_tam)\n self.ven.fill(pg.Color('#404040'))\n self.figs = []\n self.reloj = pg.time.Clock()\n \n def check_events(self):\n # [...]\n\n def run(self):\n\n cirCreater = CreaCir(self.figs)\n while True:\n self.check_events()\n\n cirCreater.update()\n for fig in self.figs:\n fig.dibujar(self.ven)\n\n self.reloj.tick(30)\n\nTo detect the collision of 2 circles, you need to calculate the distance between the centers of the circles and check if the distance is greater than the sum of the radii of the circles. The 3rd argument of pygame.draw.circle is the center of the circle, but not the top left of the bounding box of the circle:\npg.draw.circle(ventana, \"white\", (self.x, self.y), self.radio, 1)\npg.draw.circle(ventana, \"white\", (self.cx, self.cy), self.radio, 1)\n\n\nComplete example:\n\nimport pygame as pg\nimport math, random\n\nclass Circulo:\n def __init__(self, x, y, r):\n self.x, self.y = x, y\n self.radio = r\n self.cx, self.cy = x+r, y+r\n\n def dibujar(self, ventana):\n pg.draw.circle(ventana, \"white\", (self.cx, self.cy), self.radio, 1)\n\n def colisionC(self, c2):\n dx = self.cx - c2.cx\n dy = self.cy - c2.cy\n return self.radio + c2.radio > math.sqrt(dx*dx + dy*dy)\n\nclass CreaCir:\n def __init__(self, figs):\n self.figs = figs\n \n def update(self):\n if len(self.figs) < 100:\n choca = False\n r = random.randint(5, 104)\n x = random.randint(0, 700 - r*2)\n y = random.randint(0, 500 - r*2) \n creOne = Circulo(x, y, r)\n for fig in self.figs:\n choca = creOne.colisionC(fig)\n if choca == True:\n break\n if choca == False:\n self.figs.append(creOne)\n\nclass Ventana:\n def __init__(self, Ven_Tam= (700, 500)):\n \n pg.init()\n self.ven_tam = Ven_Tam\n\n self.ven = pg.display.set_caption(\"Linea\")\n self.ven = pg.display.set_mode(self.ven_tam)\n self.figs = []\n self.reloj = pg.time.Clock()\n \n def check_events(self):\n for event in pg.event.get():\n if event.type == pg.QUIT or (event.type == pg.KEYDOWN and event.key == pg.K_ESCAPE):\n quit()\n\n def run(self):\n\n cirCreater = CreaCir(self.figs)\n while True:\n self.check_events()\n cirCreater.update()\n\n self.ven.fill(pg.Color('#404040'))\n for fig in self.figs:\n fig.dibujar(self.ven)\n pg.display.flip()\n self.reloj.tick(30)\n \nif __name__ == '__main__':\n ven = Ventana()\n ven.run()\n\n" ]
[ 0 ]
[]
[]
[ "class", "pygame", "python" ]
stackoverflow_0074587586_class_pygame_python.txt
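A note on the collision test above: the predicate can be checked without pygame at all. The following is a minimal sketch (the function name and the test coordinates are illustrative assumptions, not taken from the post):
import math

def circles_collide(x1, y1, r1, x2, y2, r2):
    # Two circles overlap when the distance between their centers
    # is strictly less than the sum of their radii.
    return math.hypot(x2 - x1, y2 - y1) < r1 + r2

assert circles_collide(0, 0, 5, 3, 0, 5)        # overlapping
assert not circles_collide(0, 0, 5, 10, 0, 5)   # exactly touching
assert not circles_collide(0, 0, 5, 20, 0, 5)   # far apart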
Q: Pandas dataframe column name is oriented incorrectly I'm pulling data from Sqlite3 and moving it into a dataframe to work with it. However, I get this weird output where it places the first column name into the second row while the other column names are unaffected in the first row. This creates problems as pandas won't recognize the first row's column name (it only sees column names 2-4). What's the best way to get the column names on the first row? Example output: (x$1000) PRN AMT TICKER CUSIP 594918104 3034828140 1612323669 MSFT 037833100 2463247977 2628732382 AAPL 02079K305 2096049986 93429916 GOOGL Wanted output: CUSIP (x$1000) PRN AMT TICKER 594918104 3034828140 1612323669 MSFT 037833100 2463247977 2628732382 AAPL 02079K305 2096049986 93429916 GOOGL A: Right now you have CUSIP as an index, so it is showing up in this orientation. To add a new index column and remove this column's index attribute, use: df.reset_index(drop=True) The drop=True is to avoid the old index being added as a column.
Pandas dataframe column name is oriented incorrectly
I'm pulling data from Sqlite3 and moving it into a dataframe to work with it. However, I get this weird output where it places the first column name into the second row while the other column names are unaffected in the first row. This creates problems as pandas won't recognize the first row's column name (it only sees column names 2-4). What's the best way to get the column names on the first row? Example output: (x$1000) PRN AMT TICKER CUSIP 594918104 3034828140 1612323669 MSFT 037833100 2463247977 2628732382 AAPL 02079K305 2096049986 93429916 GOOGL Wanted output: CUSIP (x$1000) PRN AMT TICKER 594918104 3034828140 1612323669 MSFT 037833100 2463247977 2628732382 AAPL 02079K305 2096049986 93429916 GOOGL
[ "Right now you have CUSIP as an index, so it is showing up in this orientation. To add a new index column and remove this columns index attribute, use:\ndf.reset_index(drop=TRUE)\n\nThe drop=TRUE is to avoid the old index being added as a column.\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074578718_pandas_python.txt
Q: Python dataclass prints default values when calling fields function I just started to use dataclasses. I have created a python dataclass: from dataclasses import dataclass, fields from typing import Optional @dataclass class CSVData: SUPPLIER_AID: str = "" EAN: Optional[str] = None DESCRIPTION_SHORT: str = "" DESCRIPTION_LONG: str = "Article long description" After creating an instance of a this dataclass and printing it out: data = CSVData(SUPPLIER_AID='1234-568', EAN='EAN number', DESCRIPTION_SHORT='Article Name') print(data) output is: CSVData(SUPPLIER_AID='1234-568', EAN='EAN number', DESCRIPTION_SHORT='Article name', DESCRIPTION_LONG='Article long description') When i call the fields function: for field in fields(data): print(field.default) output is: "", None, "", Article long description I would expect to print: 1234-568, EAN number, Article Name, Article long description A: This is how the dataclass.fields method works (see documentation). If you want to iterate over the values, you can use asdict or astuple instead: from dataclasses import dataclass, asdict from typing import Optional @dataclass class CSVData: SUPPLIER_AID: str = "" EAN: Optional[str] = None DESCRIPTION_SHORT: str = "" DESCRIPTION_LONG: str = "Article long description" data = CSVData(SUPPLIER_AID='1234-568', EAN='EAN number', DESCRIPTION_SHORT='Article Name') for field in asdict(data).values(): print(field) Which prints: 1234-568 EAN number Article Name Article long description
Python dataclass prints default values when calling fields function
I just started to use dataclasses. I have created a python dataclass: from dataclasses import dataclass, fields from typing import Optional @dataclass class CSVData: SUPPLIER_AID: str = "" EAN: Optional[str] = None DESCRIPTION_SHORT: str = "" DESCRIPTION_LONG: str = "Article long description" After creating an instance of a this dataclass and printing it out: data = CSVData(SUPPLIER_AID='1234-568', EAN='EAN number', DESCRIPTION_SHORT='Article Name') print(data) output is: CSVData(SUPPLIER_AID='1234-568', EAN='EAN number', DESCRIPTION_SHORT='Article name', DESCRIPTION_LONG='Article long description') When i call the fields function: for field in fields(data): print(field.default) output is: "", None, "", Article long description I would expect to print: 1234-568, EAN number, Article Name, Article long description
[ "This is how the dataclass.fields method works (see documentation). If you want to iterate over the values, you can use asdict or astuple instead:\nfrom dataclasses import dataclass, asdict\nfrom typing import Optional\n\n\n@dataclass\nclass CSVData:\n SUPPLIER_AID: str = \"\"\n EAN: Optional[str] = None\n DESCRIPTION_SHORT: str = \"\"\n DESCRIPTION_LONG: str = \"Article long description\"\n\n\ndata = CSVData(SUPPLIER_AID='1234-568',\n EAN='EAN number',\n DESCRIPTION_SHORT='Article Name')\n\nfor field in asdict(data).values():\n print(field)\n\nWhich prints:\n1234-568\nEAN number\nArticle Name\nArticle long description\n\n" ]
[ 2 ]
[]
[]
[ "python", "python_dataclasses" ]
stackoverflow_0074588291_python_python_dataclasses.txt
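A compact sketch of the distinction discussed above: fields() describes the class (its declared defaults), while asdict()/astuple() read the instance's current values (the Item class here is an illustrative assumption):
from dataclasses import dataclass, fields, asdict, astuple

@dataclass
class Item:
    name: str = "unnamed"
    price: float = 0.0

obj = Item(name="widget", price=9.5)

print([f.default for f in fields(obj)])  # ['unnamed', 0.0] -- class defaults
print(asdict(obj))    # {'name': 'widget', 'price': 9.5} -- instance values
print(astuple(obj))   # ('widget', 9.5)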
Q: Merge rows in pandas with parent/child relationship Consider this example dataframe: case_number parent_case_number name role paid notes 0 NYC-22-1234 None Bob Cratchit Accountant 50000 Scrooge's favorite accountant. 1 LON-22-1446 None Ebenezer Scrooge Partner 950000 Charles Dickens would be proud. 2 CHI-21-0115 None Bob Marley Partner (Deceased) 425000 Shackled. 3 NYC-22-1235 NYC-22-1234 Bob Cratchit Accountant 30000 One of Scrooge's accountants. This can be constructed as follows: import pandas as pd sample_data = [ { "case_number": "NYC-22-1234", "parent_case_number": None, "name": "Bob Cratchit", "role": "Accountant", "paid": 50000, "notes": "Scrooge's favorite accountant.", }, { "case_number": "LON-22-1446", "parent_case_number": None, "name": "Ebenezer Scrooge", "role": "Partner", "paid": 950000, "notes": "Charles Dickens would be proud.", }, { "case_number": "CHI-21-0115", "parent_case_number": None, "name": "Bob Marley", "role": "Partner (Deceased)", "paid": 425000, "notes": "Shackled.", }, { "case_number": "NYC-22-1235", "parent_case_number": "NYC-22-1234", "name": "Bob Cratchit", "role": "Accountant", "paid": 30000, "notes": "One of Scrooge's accountants.", }, ] df = pd.DataFrame(sample_data) I want to merge children into their parents, where parent_case_number refers to a parent case_number. If a field is empty, it should take the value that is not empty (regardless of whether that value comes from child or parent). If a field has the same value in both rows, it should retain one of them. For conflicting values (e.g., paid), it should take the highest value. For notes, it should append the child to the parent's notes. In the event of conflicting values, it should capture the case_number of the removed row(s) in a new column, child_case_numbers. In this example, the expected output is: case_number parent_case_number name role paid notes child_case_numbers 0 NYC-22-1234 None Bob Cratchit Accountant 50000 Scrooge's favorite accountant. One of Scrooge's accountants. NYC-22-1235 1 LON-22-1446 None Ebenezer Scrooge Partner 950000 Charles Dickens would be proud. NaN 2 CHI-21-0115 None Bob Marley Partner (Deceased) 425000 Shackled. NaN I originally tried grouping, but this assumes certain columns have identical values rather than a parent/child relationship. I also thought about removing the child elements into a separate dataframe and then merging back in on case_number = parent_case_number, but wasn't sure how to use the logic to impact specific fields for the merge. I also thought perhaps an approach by data type could also work: Any If parent or child has a value that is empty, use the populated value Strings: If same, no change If different, append child to parent Number (float64) Use the biggest number (and a value is always "bigger" than NaN) How do I go about doing this? 
A: Here is one way to do it: # Merge relevant subsets on case_number/parent_case_number new_df = pd.concat( [ df[df["parent_case_number"].isna()].set_index("case_number"), df.dropna(subset="parent_case_number") .set_index("parent_case_number") .pipe( lambda df_: df_.rename(columns={col: f"child_{col}" for col in df_.columns}) ), ], axis=1, ) # Set new values new_df["paid"] = new_df[["paid", "child_paid"]].max(axis=1) new_df["name"] = new_df.apply(lambda x: x["name"] or x["child_name"], axis=1) new_df["role"] = new_df.apply(lambda x: x["role"] or x["child_role"], axis=1) new_df["notes"] = new_df["notes"] + " " + new_df["child_notes"].fillna("") # Cleanup new_df = ( new_df[ [col for col in new_df.columns if not col.startswith("child_")] + ["child_case_number"] ] .reset_index() .rename(columns={"index": "case_number"}) ) Then: print(new_df) # Output case_number parent_case_number name role \ 0 NYC-22-1234 None Bob Cratchit Accountant 1 LON-22-1446 None Ebenezer Scrooge Partner 2 CHI-21-0115 None Bob Marley Partner (Deceased) paid notes \ 0 50000.0 Scrooge's favorite accountant. One of Scrooge's accountants. 1 950000.0 Charles Dickens would be proud. 2 425000.0 Shackled. child_case_number 0 NYC-22-1235 1 NaN 2 NaN
Merge rows in pandas with parent/child relationship
Consider this example dataframe: case_number parent_case_number name role paid notes 0 NYC-22-1234 None Bob Cratchit Accountant 50000 Scrooge's favorite accountant. 1 LON-22-1446 None Ebenezer Scrooge Partner 950000 Charles Dickens would be proud. 2 CHI-21-0115 None Bob Marley Partner (Deceased) 425000 Shackled. 3 NYC-22-1235 NYC-22-1234 Bob Cratchit Accountant 30000 One of Scrooge's accountants. This can be constructed as follows: import pandas as pd sample_data = [ { "case_number": "NYC-22-1234", "parent_case_number": None, "name": "Bob Cratchit", "role": "Accountant", "paid": 50000, "notes": "Scrooge's favorite accountant.", }, { "case_number": "LON-22-1446", "parent_case_number": None, "name": "Ebenezer Scrooge", "role": "Partner", "paid": 950000, "notes": "Charles Dickens would be proud.", }, { "case_number": "CHI-21-0115", "parent_case_number": None, "name": "Bob Marley", "role": "Partner (Deceased)", "paid": 425000, "notes": "Shackled.", }, { "case_number": "NYC-22-1235", "parent_case_number": "NYC-22-1234", "name": "Bob Cratchit", "role": "Accountant", "paid": 30000, "notes": "One of Scrooge's accountants.", }, ] df = pd.DataFrame(sample_data) I want to merge children into their parents, where parent_case_number refers to a parent case_number. If a field is empty, it should take the value that is not empty (regardless of whether that value comes from child or parent). If a field has the same value in both rows, it should retain one of them. For conflicting values (e.g., paid), it should take the highest value. For notes, it should append the child to the parent's notes. In the event of conflicting values, it should capture the case_number of the removed row(s) in a new column, child_case_numbers. In this example, the expected output is: case_number parent_case_number name role paid notes child_case_numbers 0 NYC-22-1234 None Bob Cratchit Accountant 50000 Scrooge's favorite accountant. One of Scrooge's accountants. NYC-22-1235 1 LON-22-1446 None Ebenezer Scrooge Partner 950000 Charles Dickens would be proud. NaN 2 CHI-21-0115 None Bob Marley Partner (Deceased) 425000 Shackled. NaN I originally tried grouping, but this assumes certain columns have identical values rather than a parent/child relationship. I also thought about removing the child elements into a separate dataframe and then merging back in on case_number = parent_case_number, but wasn't sure how to use the logic to impact specific fields for the merge. I also thought perhaps an approach by data type could also work: Any If parent or child has a value that is empty, use the populated value Strings: If same, no change If different, append child to parent Number (float64) Use the biggest number (and a value is always "bigger" than NaN) How do I go about doing this?
[ "Here is one way to do it:\n# Merge relevant subsets on case_number/parent_case_number\nnew_df = pd.concat(\n [\n df[df[\"parent_case_number\"].isna()].set_index(\"case_number\"),\n df.dropna(subset=\"parent_case_number\")\n .set_index(\"parent_case_number\")\n .pipe(\n lambda df_: df_.rename(columns={col: f\"child_{col}\" for col in df_.columns})\n ),\n ],\n axis=1,\n)\n\n# Set new values\nnew_df[\"paid\"] = new_df[[\"paid\", \"child_paid\"]].max(axis=1)\nnew_df[\"name\"] = new_df.apply(lambda x: x[\"name\"] or x[\"child_name\"], axis=1)\nnew_df[\"role\"] = new_df.apply(lambda x: x[\"role\"] or x[\"child_role\"], axis=1)\nnew_df[\"notes\"] = new_df[\"notes\"] + \" \" + new_df[\"child_notes\"].fillna(\"\")\n\n# Cleanup\nnew_df = (\n new_df[\n [col for col in new_df.columns if not col.startswith(\"child_\")]\n + [\"child_case_number\"]\n ]\n .reset_index()\n .rename(columns={\"index\": \"case_number\"})\n)\n\nThen:\nprint(new_df)\n# Output\n case_number parent_case_number name role \\\n0 NYC-22-1234 None Bob Cratchit Accountant \n1 LON-22-1446 None Ebenezer Scrooge Partner \n2 CHI-21-0115 None Bob Marley Partner (Deceased) \n\n paid notes \\\n0 50000.0 Scrooge's favorite accountant. One of Scrooge's accountants. \n1 950000.0 Charles Dickens would be proud. \n2 425000.0 Shackled. \n\n child_case_number \n0 NYC-22-1235 \n1 NaN \n2 NaN \n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074554320_pandas_python.txt
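One building block worth knowing for the "prefer the non-empty value" rule above is Series.combine_first; a minimal sketch with made-up values (not the asker's data):
import pandas as pd

parent = pd.Series({"name": "Bob Cratchit", "role": None})
child = pd.Series({"name": None, "role": "Accountant"})

# combine_first fills missing entries in `parent` from `child`,
# which is exactly the "take whichever side is populated" rule.
print(parent.combine_first(child))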
Q: How to upload a csv file using Jinja2 Templates and FastAPI, and return it after modifications? I am using FastAPI to upload a csv file, perform some modifications on it and then return it to the HTML page. I am using Jinja2 as the template engine and HTML in the frontend. How can I upload the csv file using a Jinja2 template, modify it and then return it to the client? Python code from fastapi.templating import Jinja2Templates from fastapi import FastAPI, File, UploadFile, Request from io import BytesIO import pandas as pd import uvicorn app = FastAPI() templates = Jinja2Templates(directory="templates") @app.get("/") def form_post(request: Request): result = "upload file" return templates.TemplateResponse('home.html', context={'request': request, 'result': result}) @app.post("/") def upload(request: Request, file: UploadFile = File(...)): contents1 = file.file.read() buffer1 = BytesIO(contents1) test1 = pd.read_csv(buffer1) buffer1.close() file.file.close() test1 = dict(test1.values) return templates.TemplateResponse('home.html', context={'request': request, 'result': test1}) if __name__ == "__main__": uvicorn.run(app) HTML code <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>RUL_PREDICTION</title> </head> <body> <h1>RUL PREDICTION</h1> <form method="post"> <input type="file" name="file" id="file"/> <button type="submit">upload</button> </form> <p>{{ result }}</p> </body> </html> A: The working example below is derived from the answers here, here, as well as here, here and here, at which I would suggest you have a look for more details and explanation. Sample data data.csv Id,name,age,height,weight 1,Alice,20,62,120.6 2,Freddie,21,74,190.6 3,Bob,17,68,120.0 Option 1 - Return modified data in a new CSV file app.py from fastapi import FastAPI, File, UploadFile, Request, Response, HTTPException from fastapi.templating import Jinja2Templates from io import BytesIO import pandas as pd app = FastAPI() templates = Jinja2Templates(directory='templates') @app.post('/upload') def upload(file: UploadFile = File(...)): try: contents = file.file.read() buffer = BytesIO(contents) df = pd.read_csv(buffer) except: raise HTTPException(status_code=500, detail='Something went wrong') finally: buffer.close() file.file.close() # remove a column from the DataFrame df.drop('age', axis=1, inplace=True) headers = {'Content-Disposition': 'attachment; filename="modified_data.csv"'} return Response(df.to_csv(), headers=headers, media_type='text/csv') @app.get('/') def main(request: Request): return templates.TemplateResponse('index.html', {'request': request}) templates/index.html <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> </head> <body> <form method="post" action="/upload" enctype="multipart/form-data"> <label for="csvFile">Choose a CSV file</label> <input type="file" id="csvFile" name="file" onchange="enableSubmitBtn();"><br><br> <input type="submit" id="submitBtn" value="submit" disabled> </form> <script> function enableSubmitBtn() { document.getElementById('submitBtn').removeAttribute("disabled"); } </script> </body> </html> Option 2 - Return modified data in a new Jinja2 Template If you would rather like to return a new Jinja2 template with the modified data instead of a csv file as demonstrated above, you could use the below. Method 1 Use pandas.DataFrame.to_html() to render the DataFrame as an HTML table. 
You could optionally use the classes parameter in to_html() function to pass a class name, or a list of names, that will be used in a style sheet in your frontend to style the table. Additionally, you could remove the border by specifying border=0 in to_html(). app.py # ... (rest of code is same as in Option 1) @app.post('/upload') def upload(request: Request, file: UploadFile = File(...)): # ... (rest of code is same as in Option 1) context = {'request': request, 'table': df.to_html()} return templates.TemplateResponse('results.html', context) templates/results.html <!DOCTYPE html> <html> <body>{{ table | safe }}</body> </html> Method 2 Use pandas.DataFrame.to_dict() to convert the DataFrame to a dictionary and return it. app.py # ... (rest of code is same as in Option 1) @app.post('/upload') def upload(request: Request, file: UploadFile = File(...)): # ... (rest of code is same as in Option 1) context = {'request': request, 'data': df.to_dict(orient='records'), 'columns': df.columns.values} return templates.TemplateResponse('results.html', context) templates/results.html <!DOCTYPE html> <html> <body> <table style="width:50%"> <tr> {% for c in columns %}<td>{{ c }}</td>{% endfor %} </tr> {% for d in data %} <tr> {% for v in d.values() %} <td>{{ v }}</td> {% endfor %} <br> </tr> {% endfor %} </table> </body> </html> A: This might work: @app.post("/") def upload(file: UploadFile): with open("temp.csv", "wb") as f: for row in file.file: f.write(row) with open("temp.csv", "r", encoding="utf-8") as csv: # modifications return FileResponse(path="temp.csv", filename="new.csv", media_type="application/octet-stream")
How to upload a csv file using Jinja2 Templates and FastAPI, and return it after modifications?
I am using FastAPI to upload a csv file, perform some modifications on it and then return it to the HTML page. I am using Jinja2 as the template engine and HTML in the frontend. How can I upload the csv file using a Jinja2 template, modify it and then return it to the client? Python code from fastapi.templating import Jinja2Templates from fastapi import FastAPI, File, UploadFile, Request from io import BytesIO import pandas as pd import uvicorn app = FastAPI() templates = Jinja2Templates(directory="templates") @app.get("/") def form_post(request: Request): result = "upload file" return templates.TemplateResponse('home.html', context={'request': request, 'result': result}) @app.post("/") def upload(request: Request, file: UploadFile = File(...)): contents1 = file.file.read() buffer1 = BytesIO(contents1) test1 = pd.read_csv(buffer1) buffer1.close() file.file.close() test1 = dict(test1.values) return templates.TemplateResponse('home.html', context={'request': request, 'result': test1}) if __name__ == "__main__": uvicorn.run(app) HTML code <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>RUL_PREDICTION</title> </head> <body> <h1>RUL PREDICTION</h1> <form method="post"> <input type="file" name="file" id="file"/> <button type="submit">upload</button> </form> <p>{{ result }}</p> </body> </html>
[ "The working example below is derived from the answers here, here, as well as here, here and here, at which I would suggest you have a look for more details and explanation.\nSample data\ndata.csv\nId,name,age,height,weight\n1,Alice,20,62,120.6\n2,Freddie,21,74,190.6\n3,Bob,17,68,120.0\n\nOption 1 - Return modified data in a new CSV file\napp.py\nfrom fastapi import FastAPI, File, UploadFile, Request, Response, HTTPException\nfrom fastapi.templating import Jinja2Templates\nfrom io import BytesIO\nimport pandas as pd\n\napp = FastAPI()\ntemplates = Jinja2Templates(directory='templates')\n\n@app.post('/upload')\ndef upload(file: UploadFile = File(...)):\n try:\n contents = file.file.read()\n buffer = BytesIO(contents) \n df = pd.read_csv(buffer)\n except:\n raise HTTPException(status_code=500, detail='Something went wrong')\n finally:\n buffer.close()\n file.file.close()\n\n # remove a column from the DataFrame\n df.drop('age', axis=1, inplace=True)\n \n headers = {'Content-Disposition': 'attachment; filename=\"modified_data.csv\"'}\n return Response(df.to_csv(), headers=headers, media_type='text/csv')\n \n\n@app.get('/')\ndef main(request: Request):\n return templates.TemplateResponse('index.html', {'request': request})\n\ntemplates/index.html\n<!DOCTYPE html>\n<html>\n <head>\n <meta charset=\"utf-8\">\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n </head>\n <body>\n <form method=\"post\" action=\"/upload\" enctype=\"multipart/form-data\">\n <label for=\"csvFile\">Choose a CSV file</label>\n <input type=\"file\" id=\"csvFile\" name=\"file\" onchange=\"enableSubmitBtn();\"><br><br>\n <input type=\"submit\" id=\"submitBtn\" value=\"submit\" disabled>\n </form>\n <script>\n function enableSubmitBtn() {\n document.getElementById('submitBtn').removeAttribute(\"disabled\");\n }\n </script>\n </body>\n</html>\n\nOption 2 - Return modified data in a new Jinja2 Template\nIf you would rather like to return a new Jinja2 template with the modified data instead of a csv file as demonstrated above, you could use the below.\nMethod 1\nUse pandas.DataFrame.to_html() to render the DataFrame as an HTML table. You could optionally use the classes parameter in to_html() function to pass a class name, or a list of names, that will be used in a style sheet in your frontend to style the table. Additionally, you could remove the border by specifying border=0 in to_html().\napp.py\n# ... (rest of code is same as in Option 1)\n\n@app.post('/upload')\ndef upload(request: Request, file: UploadFile = File(...)):\n # ... (rest of code is same as in Option 1)\n\n context = {'request': request, 'table': df.to_html()}\n return templates.TemplateResponse('results.html', context)\n\n\ntemplates/results.html\n<!DOCTYPE html>\n<html>\n <body>{{ table | safe }}</body>\n</html>\n\nMethod 2\nUse pandas.DataFrame.to_dict() to convert the DataFrame to a dictionary and return it.\napp.py\n# ... (rest of code is same as in Option 1)\n\n@app.post('/upload')\ndef upload(request: Request, file: UploadFile = File(...)):\n # ... 
(rest of code is same as in Option 1)\n\n context = {'request': request, 'data': df.to_dict(orient='records'), 'columns': df.columns.values}\n return templates.TemplateResponse('results.html', context)\n\n\ntemplates/results.html\n<!DOCTYPE html>\n<html>\n <body>\n <table style=\"width:50%\">\n <tr>\n {% for c in columns %}<td>{{ c }}</td>{% endfor %}\n </tr>\n {% for d in data %}\n <tr>\n {% for v in d.values() %}\n <td>{{ v }}</td>\n {% endfor %}\n <br>\n </tr>\n {% endfor %}\n </table>\n </body>\n</html>\n\n", "This might work:\n@app.post(\"/\")\ndef upload(file: UploadFile):\n\n with open(\"temp.csv\", \"wb\") as f:\n for row in file.file:\n f.write(row)\n \n with open(\"temp.csv\", \"r\", encoding=\"utf-8\") as csv:\n # modifications\n \n\n return FileResponse(path=\"temp.csv\", filename=\"new.csv\", media_type=\"application/octet-stream\")\n\n" ]
[ 1, 0 ]
[]
[]
[ "csv", "fastapi", "html", "jinja2", "python" ]
stackoverflow_0074573656_csv_fastapi_html_jinja2_python.txt
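For completeness, a hedged client-side sketch for exercising the /upload endpoint shown above; it assumes the app is running locally on port 8000, that data.csv exists in the working directory, and that the requests library is installed:
import requests

with open("data.csv", "rb") as f:
    r = requests.post(
        "http://127.0.0.1:8000/upload",
        files={"file": ("data.csv", f, "text/csv")},
    )
r.raise_for_status()
with open("modified_data.csv", "wb") as out:
    out.write(r.content)  # save the CSV returned by the endpoint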
Q: "ModuleNotFoundError: No module named 'skpy' " happen even though I already installed Though I installed skpy by pip and pip3, the error happened when I command jupyter execute on the terminal. Python 3.9.13 pip 22.2.2 from /Users/username/opt/anaconda3/lib/python3.9/site-packages/pip (python 3.9) Proof I installed skpy (base) username@MacBook-Pro-3 test-directory % pip list Package Version ----------------------------- -------------------- ・・・ SkPy 0.10.4 ・・・ ・・・ (base) username@MacBook-Pro-3 test-directory % pip3 install skpy Requirement already satisfied: skpy in /Users/username/opt/anaconda3/lib/python3.9/site-packages (0.10.4) Requirement already satisfied: beautifulsoup4 in /Users/username/opt/anaconda3/lib/python3.9/site-packages (from skpy) (4.11.1) Requirement already satisfied: requests in /Users/username/opt/anaconda3/lib/python3.9/site-packages (from skpy) (2.28.1) Requirement already satisfied: soupsieve>1.2 in /Users/username/opt/anaconda3/lib/python3.9/site-packages (from beautifulsoup4->skpy) (2.3.1) Requirement already satisfied: certifi>=2017.4.17 in /Users/username/opt/anaconda3/lib/python3.9/site-packages (from requests->skpy) (2022.9.24) Requirement already satisfied: charset-normalizer<3,>=2 in /Users/username/opt/anaconda3/lib/python3.9/site-packages (from requests->skpy) (2.0.4) Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/username/opt/anaconda3/lib/python3.9/site-packages (from requests->skpy) (1.26.11) Requirement already satisfied: idna<4,>=2.5 in /Users/username/opt/anaconda3/lib/python3.9/site-packages (from requests->skpy) (3.3) Execution command (base) username@MacBook-Pro-3 test-directory % /Library/Frameworks/Python.framework/Versions/3.10/bin/jupyter execute /Users/username/Desktop/job/test-directory/createNewShiftTab.ipynb The error after run above command [NbClientApp] Executing /Users/username/Desktop/job/test-directory/createNewShiftTab.ipynb [NbClientApp] Executing notebook with kernel: python3 Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.10/bin/jupyter-execute", line 8, in <module> sys.exit(main()) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/jupyter_core/application.py", line 276, in launch_instance return super().launch_instance(argv=argv, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/traitlets/config/application.py", line 981, in launch_instance app.initialize(argv) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/traitlets/config/application.py", line 110, in inner return method(app, *args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/nbclient/cli.py", line 113, in initialize [self.run_notebook(path) for path in self.notebooks] File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/nbclient/cli.py", line 113, in <listcomp> [self.run_notebook(path) for path in self.notebooks] File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/nbclient/cli.py", line 154, in run_notebook client.execute() File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/nbclient/util.py", line 85, in wrapped return just_run(coro(*args, **kwargs)) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/nbclient/util.py", line 60, in just_run return loop.run_until_complete(coro) File 
"/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete return future.result() File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/nbclient/client.py", line 701, in async_execute await self.async_execute_cell( File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/nbclient/client.py", line 1019, in async_execute_cell await self._check_raise_for_error(cell, cell_index, exec_reply) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/nbclient/client.py", line 913, in _check_raise_for_error raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content) nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell: --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) Cell In [1], line 9 6 from oauth2client.service_account import ServiceAccountCredentials 7 # print(sys.path) importしたモジュールの探索先ディレクトリを一覧表示 8 #Skype操作 ----> 9 from skpy import Skype ModuleNotFoundError: No module named 'skpy' ModuleNotFoundError: No module named 'skpy' But when I executed on the JupyterLab browser directly, it didn’t happen as below, so I’m confusing. enter image description here And this is the output of sys.path ['/Users/username/Desktop/job/automation', '/Users/username/opt/anaconda3/lib/python39.zip', '/Users/username/opt/anaconda3/lib/python3.9', '/Users/username/opt/anaconda3/lib/python3.9/lib-dynload', '', '/Users/username/opt/anaconda3/lib/python3.9/site-packages', '/Users/username/opt/anaconda3/lib/python3.9/site-packages/aeosa', '/Users/username/opt/anaconda3/lib/python3.9/site-packages/IPython/extensions', '/Users/username/.ipython'] Run print(sys.path) then confirmed the module was in the directory where python searched to use module Restart mac, terminal, jupyter fix text to "SkPy" from "skpy" But all of them didn’t work A: As a play off Rubizzo's answer: A very simple fix here is just installing using the same Python version as your juypter notebook is using. /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/python -m pip install skpy Should do it. For further clarification: You seem to have multiple versions of Python installed. If you run where python3 You will see every Python 3.xx.xx installation you have. The very top one is the version you use if you simply run a command like pip install <package> or python3 <file> But you can select any of the python versions using their full paths to run a file or to use pip with. A: You have two different versions of python3. You installed your package for python 3.9 but Jupyter uses python3.10. Or you force Jupyter to use python3.9 or you install your package for python3.10 too.
"ModuleNotFoundError: No module named 'skpy' " happen even though I already installed
Though I installed skpy with pip and pip3, the error happens when I run jupyter execute in the terminal. Python 3.9.13 pip 22.2.2 from /Users/username/opt/anaconda3/lib/python3.9/site-packages/pip (python 3.9) Proof I installed skpy: (base) username@MacBook-Pro-3 test-directory % pip list Package Version ----------------------------- -------------------- ・・・ SkPy 0.10.4 ・・・ ・・・ (base) username@MacBook-Pro-3 test-directory % pip3 install skpy Requirement already satisfied: skpy in /Users/username/opt/anaconda3/lib/python3.9/site-packages (0.10.4) Requirement already satisfied: beautifulsoup4 in /Users/username/opt/anaconda3/lib/python3.9/site-packages (from skpy) (4.11.1) Requirement already satisfied: requests in /Users/username/opt/anaconda3/lib/python3.9/site-packages (from skpy) (2.28.1) Requirement already satisfied: soupsieve>1.2 in /Users/username/opt/anaconda3/lib/python3.9/site-packages (from beautifulsoup4->skpy) (2.3.1) Requirement already satisfied: certifi>=2017.4.17 in /Users/username/opt/anaconda3/lib/python3.9/site-packages (from requests->skpy) (2022.9.24) Requirement already satisfied: charset-normalizer<3,>=2 in /Users/username/opt/anaconda3/lib/python3.9/site-packages (from requests->skpy) (2.0.4) Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/username/opt/anaconda3/lib/python3.9/site-packages (from requests->skpy) (1.26.11) Requirement already satisfied: idna<4,>=2.5 in /Users/username/opt/anaconda3/lib/python3.9/site-packages (from requests->skpy) (3.3) Execution command (base) username@MacBook-Pro-3 test-directory % /Library/Frameworks/Python.framework/Versions/3.10/bin/jupyter execute /Users/username/Desktop/job/test-directory/createNewShiftTab.ipynb The error after running the above command: [NbClientApp] Executing /Users/username/Desktop/job/test-directory/createNewShiftTab.ipynb [NbClientApp] Executing notebook with kernel: python3 Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.10/bin/jupyter-execute", line 8, in <module> sys.exit(main()) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/jupyter_core/application.py", line 276, in launch_instance return super().launch_instance(argv=argv, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/traitlets/config/application.py", line 981, in launch_instance app.initialize(argv) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/traitlets/config/application.py", line 110, in inner return method(app, *args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/nbclient/cli.py", line 113, in initialize [self.run_notebook(path) for path in self.notebooks] File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/nbclient/cli.py", line 113, in <listcomp> [self.run_notebook(path) for path in self.notebooks] File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/nbclient/cli.py", line 154, in run_notebook client.execute() File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/nbclient/util.py", line 85, in wrapped return just_run(coro(*args, **kwargs)) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/nbclient/util.py", line 60, in just_run return loop.run_until_complete(coro) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/base_events.py", line 646, in
run_until_complete return future.result() File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/nbclient/client.py", line 701, in async_execute await self.async_execute_cell( File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/nbclient/client.py", line 1019, in async_execute_cell await self._check_raise_for_error(cell, cell_index, exec_reply) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/nbclient/client.py", line 913, in _check_raise_for_error raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content) nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell: --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) Cell In [1], line 9 6 from oauth2client.service_account import ServiceAccountCredentials 7 # print(sys.path) list the directories searched for imported modules 8 # Skype operations ----> 9 from skpy import Skype ModuleNotFoundError: No module named 'skpy' ModuleNotFoundError: No module named 'skpy' But when I executed it in the JupyterLab browser directly, it didn’t happen, as below, so I’m confused. (screenshot of the working JupyterLab run) And this is the output of sys.path ['/Users/username/Desktop/job/automation', '/Users/username/opt/anaconda3/lib/python39.zip', '/Users/username/opt/anaconda3/lib/python3.9', '/Users/username/opt/anaconda3/lib/python3.9/lib-dynload', '', '/Users/username/opt/anaconda3/lib/python3.9/site-packages', '/Users/username/opt/anaconda3/lib/python3.9/site-packages/aeosa', '/Users/username/opt/anaconda3/lib/python3.9/site-packages/IPython/extensions', '/Users/username/.ipython'] I ran print(sys.path) and confirmed the module was in a directory that Python searches for modules. I restarted the Mac, the terminal, and Jupyter. I changed the text from "skpy" to "SkPy". But none of them worked.
[ "As a play off Rubizzo's answer:\nA very simple fix here is just installing using the same Python version as your juypter notebook is using.\n/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/python -m pip install skpy\n\nShould do it.\nFor further clarification: You seem to have multiple versions of Python installed. If you run\nwhere python3\n\nYou will see every Python 3.xx.xx installation you have. The very top one is the version you use if you simply run a command like\npip install <package>\nor\npython3 <file>\n\nBut you can select any of the python versions using their full paths to run a file or to use pip with.\n", "You have two different versions of python3. You installed your package for python 3.9 but Jupyter uses python3.10.\nOr you force Jupyter to use python3.9 or you install your package for python3.10 too.\n" ]
[ 1, 0 ]
[]
[]
[ "jupyter_notebook", "python" ]
stackoverflow_0074588148_jupyter_notebook_python.txt
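A quick sketch of the interpreter-mismatch check suggested above: print the interpreter the kernel is actually running, then install into exactly that interpreter (invoking pip via subprocess here is an illustrative choice; inside a notebook the same idea works with !{sys.executable} -m pip install skpy):
import subprocess, sys

print(sys.executable)  # the Python binary running this kernel

# Installing with the same binary guarantees the package lands
# where this interpreter can import it.
subprocess.check_call([sys.executable, "-m", "pip", "install", "skpy"])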
Q: Extract int from objects in column that has multiple metrics that need scaling My column has over 9000 rows of Mb and Kb objects appearing as numbers, which all need to be converted into Kb values (by multiplying Mb values by 1000). This is a snippet of the column values: array(['5.3M', '47M', '556k', '526k', '76M', '7.6M', '59M', '9.7M', '78M', '72M', '43M', '7.7M', '6.3M', '334k', '93M', '65M', '23k', '7.3M', '6.5M', '1.5M', '7.5M']) I used str.extract below, and it works in converting the objects into int; however, I lose the M character and I cannot track which numbers to multiply by 1000 df.Size = df['Size'].str.extract('(\d+)').astype(int) I've also tried creating a list of the Mb values to then use a custom function with .apply() to multiply, but I believe there is something wrong with my code: mblist = df['Size'].str.extract('(\d+[M$])').astype(int) I need the column to all become int and all in the Kb metric, so I can conduct analysis on it. Is there a better pathway to get there? A: We can use np.where() here along with str.extract: df["Size"] = np.where(df["Size"].str.contains(r'M$', regex=True), 1000.0*df["Size"].str.extract(r'(\d+(?:\.\d+)?)', expand=False).astype(float), df["Size"].str.extract(r'(\d+(?:\.\d+)?)', expand=False).astype(float)) The above logic extracts the float (or integer) number from the size field (with expand=False so a Series rather than a DataFrame is returned), and multiplies by 1000 in the case that the unit is M for megabytes. Data will then be reported everywhere as KB.
Extract int from objects in column that has multiple metrics that need scaling
My column has over 9000 rows of Mb and Kb objects appearing as numbers, which all need to be converted into Kb values (by multiplying Mb values by 1000). This is a snippet of the column values: array(['5.3M', '47M', '556k', '526k', '76M', '7.6M', '59M', '9.7M', '78M', '72M', '43M', '7.7M', '6.3M', '334k', '93M', '65M', '23k', '7.3M', '6.5M', '1.5M', '7.5M']) I used str.extract below, and it works in converting the objects into int; however, I lose the M character and I cannot track which numbers to multiply by 1000 df.Size = df['Size'].str.extract('(\d+)').astype(int) I've also tried creating a list of the Mb values to then use a custom function with .apply() to multiply, but I believe there is something wrong with my code: mblist = df['Size'].str.extract('(\d+[M$])').astype(int) I need the column to all become int and all in the Kb metric, so I can conduct analysis on it. Is there a better pathway to get there?
[ "We can use np.where() here along with str.extract:\ndf[\"Size\"] = np.where(df[\"Size\"].str.contains(r'M$', regex=True),\n 1000.0*df[\"Size\"].str.extract('(\\d+(?:\\.\\d+)?)').astype(float),\n df[\"Size\"].str.extract('(\\d+(?:\\.\\d+)?)').astype(float))\n\nThe above logic extracts the float (or integer) number from the size field, and multiplies by 1000, in the case that the unit be M for megabytes. Data will then be reported everywhere as KB.\n" ]
[ 0 ]
[]
[]
[ "if_statement", "python", "regex", "string", "type_conversion" ]
stackoverflow_0074588355_if_statement_python_regex_string_type_conversion.txt
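An alternative sketch that makes the unit handling explicit by mapping each suffix to a multiplier (the column name follows the question; the two-group regex and the Size_kb output column are illustrative assumptions):
import pandas as pd

df = pd.DataFrame({"Size": ["5.3M", "556k", "23k", "7.5M"]})

# Split the number from its trailing unit, then scale M values to kB.
parts = df["Size"].str.extract(r"(?P<num>\d+(?:\.\d+)?)(?P<unit>[Mk])")
df["Size_kb"] = parts["num"].astype(float) * parts["unit"].map({"M": 1000.0, "k": 1.0})
print(df)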
Q: Check if object attributes are non-empty python I can check if python list or dictionary are empty or not like this lis1, dict1 = [], {} # similar thing can be done for dict1 if lis1: # Do stuff else: print "List is empty" If I try to do this with my class object, i.e checking if my object attributes are non-empty by typing if my_object: this always evaluate to True >>> class my_class(object): ... def __init__(self): ... self.lis1 = [] ... self.dict1 = {} ... >>> obj1 = my_class() >>> obj1 <__main__.my_class object at 0x10c793250> >>> if obj1: ... print "yes" ... yes I can write a function specifically to check if my object attributes are non-empty and then call if obj1.is_attributes_empty():, but I am more interested in knowing how if evaluates the standard data-types like list and dict to True or False depending on the items they contain or are empty. If I want to achieve this functionality with my class object, what methods do I need to override or make changes to? A: You need to implement the __nonzero__ method (or __bool__ for Python3) https://docs.python.org/2/reference/datamodel.html#object.nonzero class my_class(object): def __init__(self): self.lis1 = [] self.dict1 = {} def __nonzero__(self): return bool(self.lis1 or self.dict1) obj = my_class() if obj: print "Available" else: print "Not available" Python also checks the __len__ method for truthiness, but that doesn't seem to make sense for your example. If you have a lot of attributes to check you may prefer to return any((self.lis1, self.dict1, ...)) A: It is given in the documentation of Truth value testing for Python 2.x - instances of user-defined classes, if the class defines a __nonzero__() or __len__() method, when that method returns the integer zero or bool value False. For Python 3.x - instances of user-defined classes, if the class defines a __bool__() or __len__() method, when that method returns the integer zero or bool value False. According to the definition of your class, if maybe meaningful to define __len__() method, which returns the sum of length of the list as well as the dict.Then this method would be called to determine whether to interpret the object as True or False in boolean context. Example - class my_class(object): def __init__(self): self.lis1 = [] self.dict1 = {} def __len__(self): print("In len") return len(self.lis1) + len(self.dict1) Demo - >>> class my_class(object): ... def __init__(self): ... self.lis1 = [] ... self.dict1 = {} ... def __len__(self): ... print("In len") ... return len(self.lis1) + len(self.dict1) ... >>> obj = my_class() >>> if obj: ... print("yes") ... In len >>> obj.lis1.append(1) >>> >>> if obj: ... print("yes") ... In len yes A: The build in vars() function makes a useful one-liner for checking if an object has any non-empty attributes. Combine it with __nonzero__ and you get the following: def __nonzero__(self): return any(vars(self).values()) A: If your class defines (on Py2) __nonzero__, (on Py3) __bool__ or (on either) __len__, then that will be used to evaluate the "truthiness" of objects of that class (if only __len__ is defined, an instance is truthy when it returns non-zero, and falsy when it returns zero). 
So, for example, to make your class simply report if its attributes are non-empty in either Py2 or Py3, you'd add: def __bool__(self): return bool(self.lis1 or self.dict1) __nonzero__ = __bool__ # To make it work on Py2 too Alternatively, if your class instances have meaningful lengths, you define: def __len__(self): return len(self.lis1) + len(self.dict1) # For example; I doubt the length is meaningful in terms of both and get boolean behavior by side-effect of supporting len(myobject). A: Combining the answers for using any() and __bool__(self), the following code will allow you to check for all of the attributes using a list comprehension. class my_class(object): def __init__(self): self.list1 = [] self.dict1 = {} def __bool__(self): check = any([self.__dict__[attr] for attr in self.__dict__.keys()]) return check obj1 = my_class() if obj1: print('yes') This code snippet will print nothing, as expected. A: As many answers and duplicate votes suggest, you need to override the __nonzero__ method. However, from your comment, you also want to avoid enumerating the attributes explicitly. This can be done with a trick like this: class Example(object): def __init__(self): self._values = {} self.attr1 = [] self.attr2 = {} def __setattr__(self, name, value): self.__dict__[name] = value if not name.startswith('_'): self._values[name] = value # keep track of the assigned attributes def __nonzero__(self): return any(self._values.itervalues()) This handles all public attributes that are assigned or modified later on: >>> ex = Example() >>> bool(ex) False >>> ex.attr1.append('data') >>> bool(ex) True >>> ex.attr1.pop() >>> ex.attr3 = 42 bool(ex) >>> False Attribute deletion is not handled properly; for that you need to override __delattr__. A: In Python you wouldn't know which attributes to expect if you do not declare them. A standard way to declare attributes is supplied in the dataclass; see https://docs.python.org/3/library/dataclasses.html from dataclasses import dataclass, field @dataclass class my_class: lis1: list = field(default_factory=list) dict1: dict = field(default_factory=dict) def isEmpty(self) -> bool: return len(self.lis1)+len(self.dict1)==0 To implement a more generic solution you might want to inspect the dataclass source code at https://github.com/python/cpython/blob/3.11/Lib/dataclasses.py especially the fields accessor
Check if object attributes are non-empty python
I can check if python list or dictionary are empty or not like this lis1, dict1 = [], {} # similar thing can be done for dict1 if lis1: # Do stuff else: print "List is empty" If I try to do this with my class object, i.e checking if my object attributes are non-empty by typing if my_object: this always evaluate to True >>> class my_class(object): ... def __init__(self): ... self.lis1 = [] ... self.dict1 = {} ... >>> obj1 = my_class() >>> obj1 <__main__.my_class object at 0x10c793250> >>> if obj1: ... print "yes" ... yes I can write a function specifically to check if my object attributes are non-empty and then call if obj1.is_attributes_empty():, but I am more interested in knowing how if evaluates the standard data-types like list and dict to True or False depending on the items they contain or are empty. If I want to achieve this functionality with my class object, what methods do I need to override or make changes to?
[ "You need to implement the __nonzero__ method (or __bool__ for Python3)\nhttps://docs.python.org/2/reference/datamodel.html#object.nonzero\nclass my_class(object):\n def __init__(self):\n self.lis1 = []\n self.dict1 = {}\n\n def __nonzero__(self):\n return bool(self.lis1 or self.dict1)\n\nobj = my_class()\nif obj:\n print \"Available\"\nelse:\n print \"Not available\"\n\nPython also checks the __len__ method for truthiness, but that doesn't seem to make sense for your example.\nIf you have a lot of attributes to check you may prefer to\nreturn any((self.lis1, self.dict1, ...))\n\n", "It is given in the documentation of Truth value testing for Python 2.x - \n\ninstances of user-defined classes, if the class defines a __nonzero__() or __len__() method, when that method returns the integer zero or bool value False.\n\nFor Python 3.x -\n\ninstances of user-defined classes, if the class defines a __bool__() or __len__() method, when that method returns the integer zero or bool value False.\n\nAccording to the definition of your class, if maybe meaningful to define __len__() method, which returns the sum of length of the list as well as the dict.Then this method would be called to determine whether to interpret the object as True or False in boolean context. Example -\nclass my_class(object):\n def __init__(self):\n self.lis1 = []\n self.dict1 = {}\n def __len__(self):\n print(\"In len\")\n return len(self.lis1) + len(self.dict1)\n\nDemo -\n>>> class my_class(object):\n... def __init__(self):\n... self.lis1 = []\n... self.dict1 = {}\n... def __len__(self):\n... print(\"In len\")\n... return len(self.lis1) + len(self.dict1)\n...\n>>> obj = my_class()\n>>> if obj:\n... print(\"yes\")\n...\nIn len\n>>> obj.lis1.append(1)\n>>>\n>>> if obj:\n... print(\"yes\")\n...\nIn len\nyes\n\n", "The build in vars() function makes a useful one-liner for checking if an object has any non-empty attributes. Combine it with __nonzero__ and you get the following:\ndef __nonzero__(self):\n return any(vars(self).values())\n\n", "If your class defines (on Py2) __nonzero__, (on Py3) __bool__ or (on either) __len__, then that will be used to evaluate the \"truthiness\" of objects of that class (if only __len__ is defined, an instance is truthy when it returns non-zero, and falsy when it returns zero). So, for example, to make your class simply report if it's attributes are non-empty in either Py2 or Py3, you'd add:\n def __bool__(self):\n return bool(self.lis1 or self.dict1)\n __nonzero__ = __bool__ # To make it work on Py2 too\n\nAlternatively, if your class instances have meaningful lengths, you define:\n def __len__(self):\n return len(self.lis1) + len(self.dict1) # For example; I doubt the length is meaningful in terms of both\n\nand get boolean behavior by side-effect of supporting len(myobject).\n", "Combining the answers for using any() and __bool__(self), the following code will allow you to check for all of the attributes using list comprehension.\nclass my_class(object):\n def __init__(self):\n self.list1 = []\n self.dict1 = {}\n\n def __bool__(self):\n check = any([self.__dict__[attr] for attr in self.__dict__.keys()])\n\n return check\n\nobj1 = my_class()\n\nif obj1:\n print('yes')\n\nThis code snippet will print nothing as expected.\n", "As many answers and duplicate votes suggest, you need to override the __nonzero__ method. However, from your comment, you also want to avoid enumerating the attributes explicitly. 
This can be done with a trick like this:\nclass Example(object):\n def __init__(self):\n self._values = {}\n self.attr1 = []\n self.attr2 = {}\n\n def __setattr__(self, name, value):\n self.__dict__[name] = value\n if not name.startswith('_'):\n self._values[name] = value # keep track of the assigned attributes\n\n def __nonzero__(self):\n return any(self._values.itervalues())\n\nThis handles all public attributes that are assigned or modified later on:\n>>> ex = Example()\n>>> bool(ex)\nFalse\n>>> ex.attr1.append('data')\n>>> bool(ex)\nTrue\n>>> ex.attr1.pop()\n>>> ex.attr3 = 42\nbool(ex)\n>>> False\n\nAttribute deletion is not handled properly, for that you need to override __delattr__.\n", "In Python you wouldn't know which attributes to expect if you do not declare them. A standard way to declare attributes is supplied in the dataclass\nsee https://docs.python.org/3/library/dataclasses.html\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass my_class:\n lis1: list = field(default_factory=list)\n dict1: dict = field(default_factory=dict)\n\n def isEmpty(self)->bool:\n return len(self.lis1)+len(self.dict1)==0\n\nTo implement a more generic solution you might want to inspect the dataclass source code at https://github.com/python/cpython/blob/3.11/Lib/dataclasses.py especially the fields accessor\n" ]
[ 7, 2, 2, 0, 0, 0, 0 ]
[]
[]
[ "if_statement", "python" ]
stackoverflow_0032836291_if_statement_python.txt
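A two-class demonstration of the default truthiness rules described above (class names are illustrative):
class Plain:
    pass

class Sized:
    def __len__(self):
        return 0

print(bool(Plain()))  # True: instances are truthy by default
print(bool(Sized()))  # False: __len__() returning 0 makes the instance falsy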
Q: How to pass arguments to a Python script after virtual env activation in a batch file I want to execute a Python script from a batch file. When I do that, I want to pass an argument to the Python script. But after activating the venv, the argument is not passed through. The batch file I made is the following; it does not work. SET file=%~nx1 rem activate virtual env CALL ..\Script\activate.bat rem execute python in virtual env with argument python train.py --data .\data\%file% A: Try a specific file name instead of %~nx1. The '%' may be the problem.
How to take over arguments after virtual env activation in batch and Python
I want to execute a Python script from a batch file. When I do that, I want to pass an argument through to the Python script. But after activating the venv, the argument is not passed through. The batch file I made is the following: It does not work. SET file=%~nx1 rem activate virtual env CALL ..\Script\activate.bat rem execute python in virtual env with argument python train.py --data .\data\%file%
[ "Try specific file instead of %~nx1.\n'%' may be the problem.\n" ]
[ 0 ]
[]
[]
[ "batch_file", "python" ]
stackoverflow_0074588324_batch_file_python.txt
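Since activating a venv mainly selects the interpreter, a hedged alternative is to call the venv's interpreter directly and skip activate.bat entirely; the venv path below is an assumption about a standard Windows layout.

    import subprocess
    import sys
    from pathlib import Path

    file_arg = Path(sys.argv[1]).name                     # same idea as %~nx1
    venv_python = Path("..") / "Scripts" / "python.exe"   # assumed venv location

    # The venv interpreter is self-contained, so no activation step is needed
    subprocess.run(
        [str(venv_python), "train.py", "--data", str(Path("data") / file_arg)],
        check=True,
    )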
Q: parsing xml with namespace from request with lxml in python I am trying to get some text out of a table from an online xml file. I can find the tables: from lxml import etree import requests main_file = requests.get('https://training.gov.au/TrainingComponentFiles/CUA/CUAWRT601_R1.xml') main_file.encoding = 'utf-8-sig' root = etree.fromstring(main_file.content) tables = root.xpath('//foo:table', namespaces={"foo": "http://www.authorit.com/xml/authorit"}) print(tables) But I can't get any further than that. The text that I am looking for is: Prepare to write scripts Write draft scripts Produce final scripts When I paste the xml in here: http://xpather.com/ I can get it using the following expression: //table[1]/tr/td[@width="2700"]/p[@id="4"][not(*)]/text() but that doesn't work here and I'm out of ideas. How can I get that text? A: Use the namespace prefix you declared (with namespaces={"foo": "http://www.authorit.com/xml/authorit"}) e.g. instead of //table[1]/tr/td[@width="2700"]/p[@id="4"][not(*)]/text() use //foo:table[1]/foo:tr/foo:td[@width="2700"]/foo:p[@id="4"][not(*)]/text().
parsing xml with namespace from request with lxml in python
I am trying to get some text out of a table from an online xml file. I can find the tables: from lxml import etree import requests main_file = requests.get('https://training.gov.au/TrainingComponentFiles/CUA/CUAWRT601_R1.xml') main_file.encoding = 'utf-8-sig' root = etree.fromstring(main_file.content) tables = root.xpath('//foo:table', namespaces={"foo": "http://www.authorit.com/xml/authorit"}) print(tables) But I can't get any further than that. The text that I am looking for is: Prepare to write scripts Write draft scripts Produce final scripts When I paste the xml in here: http://xpather.com/ I can get it using the following expression: //table[1]/tr/td[@width="2700"]/p[@id="4"][not(*)]/text() but that doesn't work here and I'm out of ideas. How can I get that text?
[ "Use the namespace prefix you declared (with namespaces={\"foo\": \"http://www.authorit.com/xml/authorit\"}) e.g. instead of //table[1]/tr/td[@width=\"2700\"]/p[@id=\"4\"][not(*)]/text() use //foo:table[1]/foo:tr/foo:td[@width=\"2700\"]/foo:p[@id=\"4\"][not(*)]/text().\n" ]
[ 0 ]
[]
[]
[ "lxml", "python", "xml", "xpath" ]
stackoverflow_0074587533_lxml_python_xml_xpath.txt
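A short sketch putting the namespaced XPath together end to end; the prefix name "a" is arbitrary, only the namespace URI has to match the document.

    from lxml import etree
    import requests

    NS = {"a": "http://www.authorit.com/xml/authorit"}
    url = "https://training.gov.au/TrainingComponentFiles/CUA/CUAWRT601_R1.xml"

    root = etree.fromstring(requests.get(url).content)
    texts = root.xpath(
        '//a:table[1]/a:tr/a:td[@width="2700"]/a:p[@id="4"][not(*)]/text()',
        namespaces=NS,
    )
    print(texts)  # expected: the three 'scripts' headings from the question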
Q: I am trying to print a statement that will show any number that is greater than 5 I am trying to print any number that is greater than n, which is 5 in this case. It is only printing 6 and 7. I am not sure what I am doing wrong. This is my code. I am looping through the array and testing if i is greater than n (5) list = [2, 3, 4, 5, 6, 7, 8, 9] n = 5 filter_list (list, n) def filter_list (list, n): ` `for i in range(len(list)): ` `if list[i] > n: ` `print (list[i]) the outcome is only 6, 7. It's not 6, 7, 8, 9 which is what I would like. It doesn't print the desired outcome A: For me your code is working fine; just fix the indent. Adding end=' ' to print makes it print on the same line. list = [2, 3, 4, 5, 6, 7, 8, 9] n = 5 def filter_list (list, n): for i in range(len(list)): if list[i] > n: print (list[i],end =' ') filter_list (list, n) Gives # 6 7 8 9 A: Your code is completely fine if you fix your indent. list = [2, 3, 4, 5, 6, 7, 8, 9] n = 5 def filter_list (list, n): for i in range(len(list)): if list[i] > n: print (list[i],end =' ') filter_list (list, n) Output: 6 7 8 9 And you don’t need a function at all. If we use a list comprehension we can increase speed and code readability: nums = [2, 3, 4, 5, 6, 7, 8, 9] n = 5 print([i for i in nums if i > n]) A: Your code is working if you indent before the if statement list = [2, 3, 4, 5, 6, 7, 8, 9] n = 5 def filter_list (list, n): for i in range(len(list)): if list[i] > n: print (list[i]) filter_list (list, n) Or you can try this code : list1 = [2, 3, 4, 5, 6, 7, 8, 9] n = 5 for i in list1 : if i > n: print(i) Hope this will help you :)
I am trying to print a statement that will show any number that is greater than 5
I am trying to print any number that is greater than n, which is 5 in this case. It is only printing 6 and 7. I am not sure what I am doing wrong. This is my code. I am looping through the array and testing if i is greater than n (5) list = [2, 3, 4, 5, 6, 7, 8, 9] n = 5 filter_list (list, n) def filter_list (list, n): ` `for i in range(len(list)): ` `if list[i] > n: ` `print (list[i]) the outcome is only 6, 7. It's not 6, 7, 8, 9 which is what I would like. It doesn't print the desired outcome
[ "For me your code working fine just fix the indent. Just adding end '' at print to print on same line.\nlist = [2, 3, 4, 5, 6, 7, 8, 9]\nn = 5\n\n\ndef filter_list (list, n):\n for i in range(len(list)):\n if list[i] > n:\n print (list[i],end =' ')\n \nfilter_list (list, n)\n\nGives #\n6 7 8 9 \n\n", "Your code is completely fine, if you fix your intent.\nlist = [2, 3, 4, 5, 6, 7, 8, 9]\nn = 5\n\n\ndef filter_list (list, n):\n for i in range(len(list)):\n if list[i] > n:\n print (list[i],end =' ')\n\nprint(filter_list (list, n))\n\nOutput: 6 7 8 9\nAnd you don’t need to call a function. If we use list compression we can increase speed and code readability ‍\nlist = [2, 3, 4, 5, 6, 7, 8, 9]\n\nn = 5\n\nprint(list(i for i in List if i > n))\n\n", "Your code is working if you give indent before if statement\nlist = [2, 3, 4, 5, 6, 7, 8, 9]\nn = 5\n\ndef filter_list (list, n):\n for i in range(len(list)):\n if list[i] > n:\n print (list[i])\n\nfilter_list (list, n)\n\nOr you can try this code :\nlist1 = [2, 3, 4, 5, 6, 7, 8, 9]\nn = 5\n\nfor i in list1 :\n if i > n:\n print(I)\n\nHope this will help you :)\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074587535_python.txt
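The same filtering written two idiomatic ways, as a sketch; the data list is renamed here to avoid shadowing the built-in name list.

    nums = [2, 3, 4, 5, 6, 7, 8, 9]
    n = 5

    print([x for x in nums if x > n])           # [6, 7, 8, 9]
    print(list(filter(lambda x: x > n, nums)))  # [6, 7, 8, 9]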
Q: How to set the input channel size? It is an assignment using pytorch for hand gesture recognition. Code: D = np.array(Images).astype('float32') y = np.array(Labels).astype(int) for i in tqdm(range(X.shape[0])): train_data.append(X[i]) # original image train_data.append(rotate(X[i], angle = 45, mode = 'wrap')) train_data.append(np.fliplr(X[i])) train_data.append(np.flipud(X[i])) train_data.append(random_noise(X[i], var = 0.2 ** 2)) for j in range(5): target_train.append(Y[i]) class Net(nn.Module): def __init__(self, num_classes = 5): super(Net, self).__init__() self.conv1 = nn.Conv2d(in_channels = 1, out_channels = 12, kernel_size = 3, stride = 1, padding = 1) self.conv2 = nn.Conv2d(in_channels = 12, out_channels = 24, kernel_size = 3, stride = 1, padding = 1) self.pool = nn.MaxPool2d(kernel_size = 2) self.drop = nn.Dropout2d(p = 0.2) self.fc1 = nn.Linear(in_features = 19.5 * 19.5 * 24, out_features = 120) self.fc2 = nn.Linear(in_features = 120, out_features = num_classes) def forward(self, x): x = F.relu(self.pool(self.conv1(x))) x = F.relu(self.pool(self.conv2(x))) x = F.dropout(self.drop(x), training = self.training) x = x.view(-1, 19.5 * 19.5 * 24) x = F.relu(self.fc1(x)) x = self.fc2(x) Error: RuntimeError: Given groups=1, weight of size [8, 1, 3, 3], expected input[1, 32, 78, 78] to have 1 channels, but got 32 channels instead Size of X: (2080, 300, 300, 3) Size of y: (2080,) How do I set the input channel size for the fc1(fully connected layer 1)? A: The input should be of the format [batch_size, channels, height, width] in PyTorch, so you have to change your input to (2080, 1, 300, 300) instead of (2080, 300, 300, 3). As per your NN architecture, the input should be single channel and not 3 channel. Also, x = x.view(-1, 19.5 * 19.5 * 24) will throw an error: view() needs integer sizes (19.5 * 19.5 * 24 is the float 9126.0), and the total number of elements must be divisible by that size. Fixing these 2 things should solve the problem.
How to set the input channel size?
It is an assignment using pytorch for hand gesture recognition. Code: D = np.array(Images).astype('float32') y = np.array(Labels).astype(int) for i in tqdm(range(X.shape[0])): train_data.append(X[i]) # original image train_data.append(rotate(X[i], angle = 45, mode = 'wrap')) train_data.append(np.fliplr(X[i])) train_data.append(np.flipud(X[i])) train_data.append(random_noise(X[i], var = 0.2 ** 2)) for j in range(5): target_train.append(Y[i]) class Net(nn.Module): def __init__(self, num_classes = 5): super(Net, self).__init__() self.conv1 = nn.Conv2d(in_channels = 1, out_channels = 12, kernel_size = 3, stride = 1, padding = 1) self.conv2 = nn.Conv2d(in_channels = 12, out_channels = 24, kernel_size = 3, stride = 1, padding = 1) self.pool = nn.MaxPool2d(kernel_size = 2) self.drop = nn.Dropout2d(p = 0.2) self.fc1 = nn.Linear(in_features = 19.5 * 19.5 * 24, out_features = 120) self.fc2 = nn.Linear(in_features = 120, out_features = num_classes) def forward(self, x): x = F.relu(self.pool(self.conv1(x))) x = F.relu(self.pool(self.conv2(x))) x = F.dropout(self.drop(x), training = self.training) x = x.view(-1, 19.5 * 19.5 * 24) x = F.relu(self.fc1(x)) x = self.fc2(x) Error: RuntimeError: Given groups=1, weight of size [8, 1, 3, 3], expected input[1, 32, 78, 78] to have 1 channels, but got 32 channels instead Size of X: (2080, 300, 300, 3) Size of y: (2080,) How do I set the input channel size for the fc1(fully connected layer 1)?
[ "The input should be of the format [batch_size, channels, height, width] in PyTorch, so you have to change your input to (2080, 1, 300, 300) instead of (2080, 300, 300, 3). As per your NN architecture, the input should be single channel and not 3 channel.\nAlso,\nx = x.view(-1, 19.5 * 19.5 * 24) \nwill throw an error if the input size and 19.5 * 19.5 * 24(i.e. 9126) are not divisible.\nFixing these 2 things should solve the problem.\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "python", "pytorch" ]
stackoverflow_0074582503_deep_learning_python_pytorch.txt
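A hedged sketch of the reshaping the answer describes; averaging the RGB channels down to one channel is an assumption here, and the right reduction depends on the assignment.

    import numpy as np
    import torch

    X = np.random.rand(8, 300, 300, 3).astype("float32")  # stand-in for the real images

    t = torch.from_numpy(X).permute(0, 3, 1, 2)  # (N, H, W, C) -> (N, C, H, W)
    gray = t.mean(dim=1, keepdim=True)           # collapse RGB to a single channel
    print(gray.shape)                            # torch.Size([8, 1, 300, 300])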
Q: For some odd reason I can't seem to make my code write to the txt file So I have written this code where I want the computer to open a file and write in it what the user has answered to the questions I asked, but whenever I open the txt file it's empty. import os Welcome = input("Hi my name is Steve. Do you have an account at Steve? ANSWER WITH JUST A YES OR NO ") def register(): name = input("First name: ") last_name = input("Last name: ") Email = input("Email: ") ussername = input("Username: ") password = input("Password: ") def login(): ussername = input("Username: ") password = input("Password: ") if Welcome == "yes": login() else: register() if Welcome == "no" or "No": with open("userinfo.txt", "w") as file: file.write(register()) A: You are not writing anything to the file. I have modified the code to add the response to the file and also changed the code to be more accurate. welcome = input("Hi my name is Steve. Do you have an account at Steve? ANSWER WITH JUST A YES OR NO ") def register(): first_name = input("First name: ") last_name = input("Last name: ") email = input("Email: ") username = input("Username: ") password = input("Password: ") with open("userinfo.txt", "w") as file: file.write(f"{first_name}\n{last_name}\n{email}\n{username}\n{password}") def login(): username = input("Username: ") password = input("Password: ") if welcome.upper() == "YES": login() print("LOGGED IN!") elif welcome.upper() == "NO": register() print("REGISTRATION SUCCESSFUL!") else: print("WRONG INPUT!") A: Your file is empty because you're not writing anything into it. Your register() function does not return anything, so nothing gets written into the file. Maybe you want to add something like return f"{name} {last_name}" to the end of your register() func? At least then something should get written to your output file. Also, you have a logic error in if Welcome == "no" or "No": I would change that to: if Welcome.lower() == "no": That fixes your logic error. The line you wrote could have been written as: if Welcome == "no" or Welcome == "No":
For some odd reason I can't seem to make my code write to the txt file
So I have written this code where I want the computer to open a file and write in it what the user has answered to the questions I asked, but whenever I open the txt file it's empty. import os Welcome = input("Hi my name is Steve. Do you have an account at Steve? ANSWER WITH JUST A YES OR NO ") def register(): name = input("First name: ") last_name = input("Last name: ") Email = input("Email: ") ussername = input("Username: ") password = input("Password: ") def login(): ussername = input("Username: ") password = input("Password: ") if Welcome == "yes": login() else: register() if Welcome == "no" or "No": with open("userinfo.txt", "w") as file: file.write(register())
[ "You are not writing anything to the file. I have modified the code to add the response to the file and also changed the code to be more accurate.\nwelcome = input(\"Hi my name is Steve. Do you have an account at Steve? ANSWER WITH JUST A YES OR NO \")\n\n\ndef register():\n first_name = input(\"First name: \")\n last_name = input(\"Last name: \")\n email = input(\"Email: \")\n username = input(\"Username: \")\n password = input(\"Password: \")\n\n with open(\"userinfo.txt\", \"w\") as file:\n file.write(f\"{first_name}\\n{last_name}\\n{email}\\n{username}\\n{password}\")\n\n\ndef login():\n username = input(\"Username: \")\n password = input(\"Password: \")\n\n\nif welcome.upper() == \"YES\":\n login()\n print(\"LOGGED IN!\")\nelif welcome.upper() == \"NO\":\n register()\n print(\"REGISTRATION SUCCESFULL!\")\nelse:\n print(\"WRONG INPUT!\")\n\n", "Your file is empty because you're not writing anything into it. Your register() function does not return anything, so nothing gets written into the file.\nMaybe you want to add something like\nreturn f\"{name} {last_name}\"\n\nto the end of your register() func? At least then something should get written to your output file.\nAlso, you have a logic error in if Welcome == \"no\" or \"No\":\nI would change that to:\nif Welcome.lower() == \"no\":\n\nThat fixes your logic error.\nThe line you wrote could have been written as:\nif Welcome == \"no\" or Welcome == \"No\":\n\n" ]
[ 1, 0 ]
[]
[]
[ "file", "module", "python" ]
stackoverflow_0074588520_file_module_python.txt
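A tiny demo of the x == "no" or "No" pitfall the second answer points out:

    welcome = "maybe"
    print(welcome == "no" or "No")   # 'No': a non-empty string is truthy, so the branch always runs
    print(welcome.lower() == "no")   # False: the intended comparison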
Q: Extracting 2-digit numbers from a string I have a file which contains strings; from every string I need to append to my list every 2-digit number. Here's the file content: https://pastebin.com/N6gHRaVA I need to iterate every string and check if the characters at index [i] and index [i+1] are digits; if yes, append those digits to the list and slice the string after that 2-digit number. For example the string: string = '7469NMPLWX8384RXXOORHKLYBTVVXKKSRWEITLOCWNHNOAQIXO' should work in this way: Okay, I have found digit 74, add 74 to my list and slice the string from 74 to the end. My string is now 69NMPLWX8384RXXOORHKLYBTVVXKKSRWEITLOCWNHNOAQIXO, I have found digit 69, add 69 to the list and slice the string until I find a new 2-digit number. The problem is I always get this error: if string[i].isdigit() and string[i+1].isdigit(): ~~~~~~^^^^^ IndexError: string index out of range f = open("file.txt") read = f.read().split() f.close() for string in read: l = list() i = 0 print(string) while i<len(string): if string[i].isdigit() and string[i+1].isdigit(): l.append(string[i] + string[i+1]) string = string[i+2:] i = 0 else: i+=1 My program stops at the string in line 31, which is the string: 'REDOHGMDPOXKFMHUDDOMLDYFAFYDLMODDUHMFKXOPDMGHODER5' I have no idea how to do this slice iteration, and please, don't use regex. A: You're going off the end of the string... Change: while i<len(string): to: while i<len(string)-1: And you should be fine. If you were just looking at one character at a time, you could use your original while. The trick here is that you're always looking at a char and also "one ahead" of the char. So you have to shorten your check by one iteration to prevent going past the last char to check. A: Your loop condition is i < len(string), but the body also checks string[i+1]; when i reaches the last index, i+1 is past the end of the string. Try this: while i < len(string) -1: EDITED: Apparently, I didn't notice at first which string gave you the error. As you check for the (i+1)-th element of the string, when we start checking the last character, reaching for the next one gives an obvious error. So, there should be -1 in the condition. A: You could use recursion. Here is what it would look like to deal with one of the strings. Part of the code: my_string = '7469NMPLWX8384RXXOORHKLYBTVVXKKSRWEITLOCWNHNOAQIXO' result_list = [] def read_string(s): result = "" for i,j in enumerate(s): if i>0 and s[i-1].isdigit() and s[i].isdigit(): result = s[i-1] + s[i] result_list.append(result) read_string(s[i+1:]) break return (result_list) # Call the read_string function x = read_string(my_string) print(x) OUTPUT: ['74', '69', '83', '84'] A: You are not stopping at the correct spot. You can just change your while loop to while i < len(string) - 1: If I may suggest a slightly cleaner approach, see below. f = open("file.txt") read = f.read().split() f.close() for string in read: l = list() i = 0 print(string) while i < len(string) - 1: numCheck = i + 1 # You call it more than once. Set to var ltr = string[i] + string[numCheck] # no need to call this multiple times, just set to a var if ltr.isdigit(): l.append(ltr) string = string[numCheck + 1:] i = 0 else: i += 1 print(l) I changed your while loop to the above and then put the calls you make more than once into a variable. Also, since your list is initialized within the for loop, you only keep the numbers from your last string; if you want a list with all the numbers, simply move it out. 
Like so, f = open("file.txt") read = f.read().split() f.close() l = list() for string in read: i = 0 print(string) while i < len(string) - 1: numCheck = i + 1 # You call it more than once. Set to var ltr = string[i] + string[numCheck] # no need to call this multiple times, just set to a var if ltr.isdigit(): l.append(ltr) string = string[numCheck + 1:] i = 0 else: i += 1 print(l)
Extracting 2-digit numbers from a string
I have a file which contains strings; from every string I need to append to my list every 2-digit number. Here's the file content: https://pastebin.com/N6gHRaVA I need to iterate every string and check if the characters at index [i] and index [i+1] are digits; if yes, append those digits to the list and slice the string after that 2-digit number. For example the string: string = '7469NMPLWX8384RXXOORHKLYBTVVXKKSRWEITLOCWNHNOAQIXO' should work in this way: Okay, I have found digit 74, add 74 to my list and slice the string from 74 to the end. My string is now 69NMPLWX8384RXXOORHKLYBTVVXKKSRWEITLOCWNHNOAQIXO, I have found digit 69, add 69 to the list and slice the string until I find a new 2-digit number. The problem is I always get this error: if string[i].isdigit() and string[i+1].isdigit(): ~~~~~~^^^^^ IndexError: string index out of range f = open("file.txt") read = f.read().split() f.close() for string in read: l = list() i = 0 print(string) while i<len(string): if string[i].isdigit() and string[i+1].isdigit(): l.append(string[i] + string[i+1]) string = string[i+2:] i = 0 else: i+=1 My program stops at the string in line 31, which is the string: 'REDOHGMDPOXKFMHUDDOMLDYFAFYDLMODDUHMFKXOPDMGHODER5' I have no idea how to do this slice iteration, and please, don't use regex.
[ "You're going off the end of the string... Change:\n while i<len(string):\n\nto:\n while i<len(string)-1:\n\nAnd you should be fine.\nIf you were just looking at one character at a time, you could use your original while. The trick here is that you're always looking at a char and also \"one ahead\" of the char. So you have to shorten your check by one iteration to prevent going past the last char to check.\n", "Your loop condition i len(string). If string is not empty, this equals a positive intiger, which is evaluated as True. Hence, you created an endless loop, that meets it's end, when i gets greater then string length. Try this:\n while i < len(string) -1:\nEDITED:\nApparently, i didn't notice which string gave you the error. As you check for i+1th element of string, when we star checking the last character, reaching for the next one gives an obvious error. So, there should be -1 in the condition.\n", "You could use recursion.\nHere is what it would look like to deal with one of the strings.\nPart of the code:\nmy_string = '7469NMPLWX8384RXXOORHKLYBTVVXKKSRWEITLOCWNHNOAQIXO'\nresult_list = []\n\ndef read_string(s):\n result = \"\"\n for i,j in enumerate(s):\n if i>0 and s[i-1].isdigit() and s[i].isdigit():\n result = s[i-1] + s[i]\n result_list.append(result)\n read_string(s[i+1:])\n break;\n \n return (result_list) \n \n# Call the read_string function\nx = read_string(my_string) \nprint(x) \n\nOUTPUT:\n['74', '69', '83', '84']\n\n", "You are not stopping at the correct spot. You can just change your while loop to loop to\nwhile I < len(string) - 1:\n\nIf I may suggest a slightly cleaner approach, see below.\nf = open(\"file.txt\")\nread = f.read().split()\nf.close()\nfor string in read:\n l = list()\n i = 0\n print(string)\n while i < len(string) - 1:\n numCheck = i + 1 # You call it more than once. Set to var\n ltr = string[i] + string[numCheck] # no need to call this multiple times, just set to a var\n if ltr.isdigit():\n l.append(ltr)\n string = string[numCheck:]\n i = 0\n else:\n i += 1\n \nprint(l)\n\nI changed your while loop to above and then put the calls you make more than once into a variable. Also since your list is initialized within the for loop you only keep the number from your last string if you want a list with all the numbers, simply move it out. Like so,\nf = open(\"file.txt\")\nread = f.read().split()\nf.close()\nl = list()\nfor string in read:\n i = 0\n print(string)\n while i < len(string) - 1:\n numCheck = i + 1 # You call it more than once. Set to var\n ltr = string[i] + string[numCheck] # no need to call this multiple times, just set to a var\n if ltr.isdigit():\n l.append(ltr)\n string = string[numCheck:]\n i = 0\n else:\n i += 1\n \nprint(l)\n\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "list", "loops", "python", "slice", "string" ]
stackoverflow_0074588308_list_loops_python_slice_string.txt
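A linear-time sketch of the same scan; it advances the index instead of slicing and resetting, keeps the question's pairing rule (both digits are consumed), and uses no regex.

    def two_digit_numbers(s):
        out = []
        i = 0
        while i < len(s) - 1:          # stop one short: the body peeks at s[i + 1]
            if s[i].isdigit() and s[i + 1].isdigit():
                out.append(s[i:i + 2])
                i += 2                 # consume both digits, like the slice did
            else:
                i += 1
        return out

    print(two_digit_numbers('7469NMPLWX8384RXX'))  # ['74', '69', '83', '84']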
Q: how to get spacing between grouped bar plot in python I have plotted the grouped bar plot and I want to have spacing between orange and blue bar. I am not sure how to. It is the sample image - I want little space between blue and orange bar. import numpy as np import matplotlib.pyplot as plt N=4 a = [63,13,12,45] b = [22,6,9,9] ind = np.arange(N) width=0.35 fig, ax = plt.subplots() b1 = ax.bar(ind, a, width) b2 = ax.bar(ind+width, b, width) ax.set_xticks(ind+width/2) plt.show() A: Just do this: b2 = ax.bar(ind+ 1.2 * width, b, width) A: I am unaware of a dedicated option for such a behavior. The reason is that it would indicate inaccurate measures. You would no longer be sure if the blue/orange bars belong to the same value on the x-axis. Therefore, you need to come up with a small workaround by shifting the data (or rather the two data arrays) around the index on the x-axis. For this, I introduced the variable dist in the code below. Note that it should be larger than half of the width of a bar. import numpy as np import matplotlib.pyplot as plt N=4 a = [63,13,12,45] b = [22,6,9,9] ind = np.arange(N) width = 0.1 dist = 0.08 # should be larger than width/2 fig, ax = plt.subplots() b1 = ax.bar(ind-dist, a, width) b2 = ax.bar(ind+dist, b, width) plt.show() generic solution For a somewhat more generic solution, we first need to calculate the width of the grouped bars and then shift the group around the index: import numpy as np import matplotlib.pyplot as plt N=4 a = [63,13,12,45] b = [22,6,9,9] ind = np.arange(N) # index / x-axis value width = 0.1 # width of each bar DistBetweenBars = 0.01 # distance between bars Num = 5 # number of bars in a group # calculate the width of the grouped bars (including the distance between the individual bars) WithGroupedBars = Num*width + (Num-1)*DistBetweenBars fig, ax = plt.subplots() for i in range(Num): data = np.random.rand(N) ax.bar(ind-WithGroupedBars/2 + (width+DistBetweenBars)*i,data, width) plt.show() A: Drawing white edges on the bars in ax.bar() creates space. b1 = ax.bar(ind, a, width, edgecolor="w", linewidth=3) Here's the full code of the modified sample. import numpy as np import matplotlib.pyplot as plt N=4 a = [63,13,12,45] b = [22,6,9,9] ind = np.arange(N) width=0.4 fig, ax = plt.subplots() b1 = ax.bar(ind, a, width, edgecolor="w", linewidth=3) b2 = ax.bar(ind+width, b, width, edgecolor="w",linewidth=3) ax.set_xticks(ind+width/2) plt.show()
how to get spacing between grouped bar plot in python
I have plotted the grouped bar plot and I want to have spacing between orange and blue bar. I am not sure how to. It is the sample image - I want little space between blue and orange bar. import numpy as np import matplotlib.pyplot as plt N=4 a = [63,13,12,45] b = [22,6,9,9] ind = np.arange(N) width=0.35 fig, ax = plt.subplots() b1 = ax.bar(ind, a, width) b2 = ax.bar(ind+width, b, width) ax.set_xticks(ind+width/2) plt.show()
[ "Just do this:\nb2 = ax.bar(ind+ 1.2 * width, b, width)\n\n\n", "I am unaware of an dedicated option for such a behavior. The reason is that it would indicate inaccurate measures. You would be no longer sure if the blue/orange bars belong to the same value on the x-axis.\nTherefore, you need to come up with a small workaround by shifting the data (or rather the two data arrays) around the index on the x-axis. For this, I introduced the variable dist in the code below. Note that it should be larger than half of the width of a bar.\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nN=4\na = [63,13,12,45]\nb = [22,6,9,9]\n\nind = np.arange(N)\nwidth = 0.1\ndist = 0.08 # should be larger than width/2\n\nfig, ax = plt.subplots()\nb1 = ax.bar(ind-dist, a, width)\nb2 = ax.bar(ind+dist, b, width)\n\nplt.show()\n\n\ngeneric solution\nFor a somewhat more generic solution, we need first to calculate the width of the grouped bars and than shift the group around the index:\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nN=4\na = [63,13,12,45]\nb = [22,6,9,9]\n\nind = np.arange(N) # index / x-axis value\nwidth = 0.1 # width of each bar\n\nDistBetweenBars = 0.01 # distance between bars\nNum = 5 # number of bars in a group\n# calculate the width of the grouped bars (including the distance between the individual bars)\nWithGroupedBars = Num*width + (Num-1)*DistBetweenBars\n\nfig, ax = plt.subplots()\nfor i in range(Num):\n data = np.random.rand(N)\n ax.bar(ind-WithGroupedBars/2 + (width+DistBetweenBars)*i,data, width)\n\nplt.show()\n\n\n", "Just edge processing in ax.bar() creates space.\nb1 = ax.bar(ind, a, width, edgecolor=\"w\", linewidth=3)\n\n\nIt's the full code of the modified sample.\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nN=4\na = [63,13,12,45]\nb = [22,6,9,9]\n\nind = np.arange(N)\nwidth=0.4\n\nfig, ax = plt.subplots()\nb1 = ax.bar(ind, a, width, edgecolor=\"w\", linewidth=3)\nb2 = ax.bar(ind+width, b, width, edgecolor=\"w\",linewidth=3) \n\nax.set_xticks(ind+width/2)\nplt.show()\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0066913686_matplotlib_python.txt
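A sketch distilled from the answers above: centre the two bars on each tick and expose the extra space as an explicit gap parameter.

    import numpy as np
    import matplotlib.pyplot as plt

    a = [63, 13, 12, 45]
    b = [22, 6, 9, 9]
    ind = np.arange(len(a))
    width, gap = 0.35, 0.05            # gap is the extra space between the two bars

    fig, ax = plt.subplots()
    ax.bar(ind - (width + gap) / 2, a, width)
    ax.bar(ind + (width + gap) / 2, b, width)
    ax.set_xticks(ind)                 # ticks sit centred between the paired bars
    plt.show()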
Q: I want to convert string 1F to hex 1F in Python, what should I do? num="1F" nm="1" nm1="2" hex(num)^hex(nm)^hex(nm1) I wrote it like the code above, but hex doesn't work properly. I want to convert the string to hexadecimal, and I want an xor operation of the converted value. What should I do? A: The variable num can be converted to int using int(num, 16). Other variables nm, nm1 are just integers in the form of strings. To convert them, use int(nm), int(nm1) num = "1F" nm = "1" nm1 = "2" result = int(num, 16) ^ int(nm) ^ int(nm1) print(result) > 28
I want to convert string 1F to hex 1F in Python, what should I do?
num="1F" nm="1" nm1="2" hex(num)^hex(nm)^hex(nm1) I wrote it like the code above, but hex doesn't work properly. I want to convert the string to hexadecimal, and I want an xor operation of the converted value. What should I do?
[ "The variable num can be converted to int using int(num, 16). Other variables nm, nm1 are just integers in form of strings. to convert them use int(nm), int(nm1)\nnum = \"1F\"\nnm = \"1\"\nnm1 = \"2\"\n\nresult = int(num, 16) ^ int(nm) ^ int(nm1)\nprint(result)\n\n> 28\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074588543_python.txt
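Going the other way, in case the XOR result should also be shown as hex; a short sketch:

    result = int("1F", 16) ^ int("1") ^ int("2")
    print(result)            # 28
    print(hex(result))       # 0x1c
    print(f"{result:02X}")   # 1C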
Q: I can't display all texts in the same embed with python from discord bot OK, I'm trying to add all the texts to the same embed with a for loop in my Discord bot, but I can't do it; this is what happens to me... It should be all in the same embed. What should I do to correct this error? Thank you very much! This is the code I am using... import requests import discord from discord.ext import commands bot = commands.Bot(command_prefix='!', description="ayuda bot") bot.remove_command("help") @bot.command() async def habbo(ctx): response = requests.get("https://images.habbo.com/habbo-web-leaderboards/hhes/visited-rooms/daily/latest.json") data= response.json() for habbo in data: embed=discord.Embed(title=f"", description=f"{habbo['name']}", color=discord.Colour.random()) await ctx.send(embed=embed) @bot.event async def on_ready(): print("BOT listo!") bot.run('') Someone give me a hand, thank you very much! A: If you don't want to send multiple embeds, don't create one per item; build the content, then send a single embed @bot.command() async def habbo(ctx): response = requests.get("https://images.habbo.com/habbo-web-leaderboards/hhes/visited-rooms/daily/latest.json") data = response.json() content = '\n'.join(item['name'] for item in data) embed = discord.Embed(title=f"", description=f"{content}", color=discord.Colour.random()) await ctx.send(embed=embed)
I can't display all texts in the same embed with python from discord bot
OK, I'm trying to add all the texts to the same embed with a for loop in my Discord bot, but I can't do it; this is what happens to me... It should be all in the same embed. What should I do to correct this error? Thank you very much! This is the code I am using... import requests import discord from discord.ext import commands bot = commands.Bot(command_prefix='!', description="ayuda bot") bot.remove_command("help") @bot.command() async def habbo(ctx): response = requests.get("https://images.habbo.com/habbo-web-leaderboards/hhes/visited-rooms/daily/latest.json") data= response.json() for habbo in data: embed=discord.Embed(title=f"", description=f"{habbo['name']}", color=discord.Colour.random()) await ctx.send(embed=embed) @bot.event async def on_ready(): print("BOT listo!") bot.run('') Someone give me a hand, thank you very much!
[ "If you want to send multiple embed, don't create multiple, build the content, then send one\n@bot.command()\nasync def habbo(ctx):\n response = requests.get(\"https://images.habbo.com/habbo-web-leaderboards/hhes/visited-rooms/daily/latest.json\")\n data = response.json()\n content = '\\n'.join(item['name'] for item in data)\n embed = discord.Embed(title=f\"\", description=f\"{content}\", color=discord.Colour.random())\n await ctx.send(embed=embed)\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074588577_python.txt
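A hedged alternative sketch using one embed field per entry instead of a joined description; it mirrors the question's bot setup, and the slice reflects Discord's limit of 25 fields per embed.

    import requests
    import discord
    from discord.ext import commands

    bot = commands.Bot(command_prefix='!', description="ayuda bot")

    @bot.command()
    async def habbo(ctx):
        response = requests.get("https://images.habbo.com/habbo-web-leaderboards/hhes/visited-rooms/daily/latest.json")
        data = response.json()
        embed = discord.Embed(title="Visited rooms", color=discord.Colour.random())
        for item in data[:25]:  # Discord allows at most 25 fields per embed
            embed.add_field(name=item['name'], value='\u200b', inline=False)
        await ctx.send(embed=embed)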
Q: Cannot install deepspeech for Python I want to use Mozilla's DeepSpeech on my Linux 22.04 system, following this website: https://deepspeech.readthedocs.io/en/r0.9/?badge=latest At the very first step, at pip3 install deepspeech I got this error: ERROR: Could not find a version that satisfies the requirement deepspeech (from versions: none) ERROR: No matching distribution found for deepspeech I searched the internet and followed all the suggested methods, such as upgrading pip3 and using pip instead of pip3, but I could not solve the problem. This website: https://github.com/mozilla/DeepSpeech/issues/3693 suggests using an archive. I did not understand which repository I should archive at this step. It would be very nice of you if you could help me. A: The pip command you mentioned above worked for me. Try updating your Linux packages: sudo apt update sudo apt upgrade Then try again; if it does not work, try invoking pip through Python: python -m pip install deepspeech
Cannot install deepspeech for Python
I want to use Mozilla's DeepSpeech on my Linux 22.04 system, following this website: https://deepspeech.readthedocs.io/en/r0.9/?badge=latest At the very first step, at pip3 install deepspeech I got this error: ERROR: Could not find a version that satisfies the requirement deepspeech (from versions: none) ERROR: No matching distribution found for deepspeech I searched the internet and followed all the suggested methods, such as upgrading pip3 and using pip instead of pip3, but I could not solve the problem. This website: https://github.com/mozilla/DeepSpeech/issues/3693 suggests using an archive. I did not understand which repository I should archive at this step. It would be very nice of you if you could help me.
[ "The pip command you mentioned above worked for me:\nTry updating your linux packages\nsudo apt update\nsudo apt upgrade\n\nThen trying again if it does not work trying using python\npython -m pip install deepspeech\n\n" ]
[ 0 ]
[]
[]
[ "mozilla_deepspeech", "pip", "python" ]
stackoverflow_0074588517_mozilla_deepspeech_pip_python.txt
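"No matching distribution found" from pip usually means no wheel exists for the interpreter or platform in use, so checking the Python version is a reasonable first step; the pinned version below is an assumption (0.9.3 was the last DeepSpeech release), shown as a hedged sketch.

    import subprocess
    import sys

    print(sys.version)  # DeepSpeech wheels were only published for older Python versions
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "deepspeech==0.9.3"],
        check=False,
    )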