Dataset columns:
content: string (85 to 101k characters)
title: string (0 to 150 characters)
question: string (15 to 48k characters)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (35 to 137 characters)
Q: 'pyglet.graphics' has no attribute 'vertex_list' I'm trying to make a Triangle with pyglet but I keep seeing this error. AttributeError: module 'pyglet.graphics' has no attribute 'vertex_list' I not sure what is the problem exactly. by the way this the code I'm trying to run ` from pyglet.gl import * class Triangle: def __init__(self) -> None: self.vertices = pyglet.graphics.vertex_list(3, ('v3f', [-0.5,-0.5,0.0, 0.5,-0.5,0.0, 0.0,0.5,0.0]), ('c3B', [100,200,220, 200,110,100, 100,250,100])) class MyWindow(pyglet.window.Window): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.set_minimum_size(400,300) self.triangle = Triangle() def on_draw(self): self.triangle.vertices.draw(GL_TRIANGLES) def on_resize(self, width, height): glViewport(0, 0, width, height) if __name__ =="__main__": window = MyWindow(1280, 720, "Hello", resizable=True) window.on_draw() pyglet.app.run() ` A: Please check which version of pyglet you are using. Recently (on 01.11.2022) pyglet version 2.0 got released which changed how to use vertex based drawing majorly. pyglet.graphics.vertex_list was used in earlier versions of pyglet (for information how to use this refer to https://pyglet.readthedocs.io/en/pyglet-1.5-maintenance/modules/graphics/index.html which is the documentation for pyglet version 1.5-maintenance). If you want to work with this older version make sure to use pyglet version 1.5.27 or lower In pyglet version 2.0 you have to create a Shader Program and use this to draw vertex based graphics. For more information about this refer to pyglets documentation regarding Shaders and Rendering: https://pyglet.readthedocs.io/en/latest/programming_guide/graphics.html
'pyglet.graphics' has no attribute 'vertex_list'
I'm trying to make a Triangle with pyglet but I keep seeing this error. AttributeError: module 'pyglet.graphics' has no attribute 'vertex_list' I not sure what is the problem exactly. by the way this the code I'm trying to run ` from pyglet.gl import * class Triangle: def __init__(self) -> None: self.vertices = pyglet.graphics.vertex_list(3, ('v3f', [-0.5,-0.5,0.0, 0.5,-0.5,0.0, 0.0,0.5,0.0]), ('c3B', [100,200,220, 200,110,100, 100,250,100])) class MyWindow(pyglet.window.Window): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.set_minimum_size(400,300) self.triangle = Triangle() def on_draw(self): self.triangle.vertices.draw(GL_TRIANGLES) def on_resize(self, width, height): glViewport(0, 0, width, height) if __name__ =="__main__": window = MyWindow(1280, 720, "Hello", resizable=True) window.on_draw() pyglet.app.run() `
[ "Please check which version of pyglet you are using. Recently (on 01.11.2022) pyglet version 2.0 got released which changed how to use vertex based drawing majorly.\npyglet.graphics.vertex_list was used in earlier versions of pyglet (for information how to use this refer to https://pyglet.readthedocs.io/en/pyglet-1.5-maintenance/modules/graphics/index.html which is the documentation for pyglet version 1.5-maintenance). If you want to work with this older version make sure to use pyglet version 1.5.27 or lower\nIn pyglet version 2.0 you have to create a Shader Program and use this to draw vertex based graphics. For more information about this refer to pyglets documentation regarding Shaders and Rendering: https://pyglet.readthedocs.io/en/latest/programming_guide/graphics.html\n" ]
[ 0 ]
[]
[]
[ "pyglet", "python" ]
stackoverflow_0074446973_pyglet_python.txt
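The answer above describes the pyglet 2.0 change only in prose. A minimal sketch of drawing a comparable triangle under pyglet >= 2.0 (an assumption: it uses the pyglet.shapes module, which drives pyglet's built-in default ShaderProgram internally, and pixel coordinates instead of the question's normalized ones):

import pyglet
from pyglet import shapes

window = pyglet.window.Window(1280, 720, "Hello", resizable=True)
batch = pyglet.graphics.Batch()

# Vertices are window pixels here, not the -0.5..0.5 range used in the question;
# the color is a 0-255 RGB tuple.
triangle = shapes.Triangle(320, 180, 960, 180, 640, 540,
                           color=(100, 200, 220), batch=batch)

@window.event
def on_draw():
    window.clear()
    batch.draw()

pyglet.app.run()

Writing your own ShaderProgram and calling its vertex_list factory directly also works, as the linked Shaders and Rendering guide shows, but the shapes module is the shorter route for a single colored triangle.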
Q: Creating function that makes a dictionary from a list The goal -> For each word in the text except the last one, a key should appear in the resulting dictionary, and the corresponding value should be a list of every word that occurs immediately after the key word in the text. Repeated words should have multiple values: example: fun(["ONE", "two", "one", "three"]) == {"one": ["two", "three"],"two": ["one] }) what I have so far: def build_predictions(words: list) -> dict: dictionary = {} for word in words: if word.index() != words.len(): if word not in dictionary: dictionary.update({word : words(words.index(word)+1)}) else: dictionary[word] = dictionary[word] + [words(words.index(word)+1)] Im getting an EOF error ;[ -> not sure if this is right anyways. A: Your code can't give you an EOF error, since you don't do any file reading in the code you've shown. Since you haven't shown any of that code, I can't help you with the EOF error. However, there are a bunch of things wrong with your approach to make your dictionary of predictions: word.index() is not a thing. If you want the index of word in words, iterate using for index, word in enumerate(words) words.len() is not a thing. You can get the length of words by len(words) The current word's index is never equal to the length of the list, since list indices start at 0 and go to len(lst) - 1. Your if condition should have been if index < len(words) - 1 1 .You don't need to do this check at all if you simply change your loop to for word in words[:-1], which will skip the last word. if word not in dictionary, you want to create a new list that contains the next word. If word is in dictionary, you want to append that word to the list instead of creating a new list by concatenating the two. You need to return dictionary from your function. Incorporating all these suggestions, your code would look like so: def build_predictions(words: list) -> dict: dictionary = {} for index, word in enumerate(words[:-1]): next_word = words[index + 1] if word not in dictionary: dictionary[word] = [next_word] else: dictionary[word].append(next_word) return dictionary Now, if you only want unique words, you can just create a dictionary containing sets instead of lists. That way, when you .add() to the set, it won't have any effect if the set already contains the word you want to add. def build_predictions(words: list) -> dict: dictionary = {} for index, word in enumerate(words[:-1]): next_word = words[index + 1] if word not in dictionary: dictionary[word] = {next_word} # Creates a set containing next_word else: dictionary[word].add(next_word) return dictionary At the end of this, if you want to convert the sets back to lists, it's easy enough to do so. Instead of return dictionary, do: return {k: list(v) for k, v in dictionary.items()} We can remove the need to check if word in dictionary by using a collections.defaultdict We can zip two slices of the list of words: one that goes from the start to the second-to-last item, and one that goes from the second item to the last item. Iterating over the zip of the two slices will give us the current word and the next word in each iteration. Then, we can just collect these in a defaultdict(list) or a defaultdict(set). 
from collections import defaultdict def build_predictions(words: list) -> dict: predictions = defaultdict(list) # or = defaultdict(set) for word, next_word in zip(words[:-1], words[1:]): predictions[word].append(next_word) # or .add(next_word) return predictions # or {k: list(v) for k, v in predictions.items()} A: First of all, you should not use index because it return only the index of the first occurence. This way it should work better for i in range(len(words)-1): word = world[i]
Creating function that makes a dictionary from a list
The goal -> For each word in the text except the last one, a key should appear in the resulting dictionary, and the corresponding value should be a list of every word that occurs immediately after the key word in the text. Repeated words should have multiple values: example: fun(["ONE", "two", "one", "three"]) == {"one": ["two", "three"],"two": ["one] }) what I have so far: def build_predictions(words: list) -> dict: dictionary = {} for word in words: if word.index() != words.len(): if word not in dictionary: dictionary.update({word : words(words.index(word)+1)}) else: dictionary[word] = dictionary[word] + [words(words.index(word)+1)] Im getting an EOF error ;[ -> not sure if this is right anyways.
[ "Your code can't give you an EOF error, since you don't do any file reading in the code you've shown. Since you haven't shown any of that code, I can't help you with the EOF error. However, there are a bunch of things wrong with your approach to make your dictionary of predictions:\n\nword.index() is not a thing. If you want the index of word in words, iterate using for index, word in enumerate(words)\nwords.len() is not a thing. You can get the length of words by len(words)\nThe current word's index is never equal to the length of the list, since list indices start at 0 and go to len(lst) - 1. Your if condition should have been if index < len(words) - 1\n1 .You don't need to do this check at all if you simply change your loop to for word in words[:-1], which will skip the last word.\nif word not in dictionary, you want to create a new list that contains the next word.\nIf word is in dictionary, you want to append that word to the list instead of creating a new list by concatenating the two.\nYou need to return dictionary from your function.\n\nIncorporating all these suggestions, your code would look like so:\ndef build_predictions(words: list) -> dict:\n dictionary = {}\n for index, word in enumerate(words[:-1]):\n next_word = words[index + 1]\n if word not in dictionary:\n dictionary[word] = [next_word]\n else:\n dictionary[word].append(next_word)\n return dictionary\n\n\nNow, if you only want unique words, you can just create a dictionary containing sets instead of lists. That way, when you .add() to the set, it won't have any effect if the set already contains the word you want to add.\ndef build_predictions(words: list) -> dict:\n dictionary = {}\n for index, word in enumerate(words[:-1]):\n next_word = words[index + 1]\n if word not in dictionary:\n dictionary[word] = {next_word} # Creates a set containing next_word\n else:\n dictionary[word].add(next_word)\n return dictionary\n\nAt the end of this, if you want to convert the sets back to lists, it's easy enough to do so. Instead of return dictionary, do:\n return {k: list(v) for k, v in dictionary.items()}\n\n\nWe can remove the need to check if word in dictionary by using a collections.defaultdict\nWe can zip two slices of the list of words: one that goes from the start to the second-to-last item, and one that goes from the second item to the last item. Iterating over the zip of the two slices will give us the current word and the next word in each iteration.\nThen, we can just collect these in a defaultdict(list) or a defaultdict(set).\nfrom collections import defaultdict\n\ndef build_predictions(words: list) -> dict:\n predictions = defaultdict(list)\n # or = defaultdict(set)\n for word, next_word in zip(words[:-1], words[1:]):\n predictions[word].append(next_word)\n # or .add(next_word)\n\n return predictions\n # or {k: list(v) for k, v in predictions.items()}\n\n\n", "First of all, you should not use index because it return only the index of the first occurence. This way it should work better\n for i in range(len(words)-1):\n word = world[i]\n\n" ]
[ 0, 0 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0074603960_dictionary_list_python.txt
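A small usage check against the example in the question. One detail none of the answers touches: the expected output treats "ONE" and "one" as the same key, so the input is lower-cased first here (that normalisation is an assumption, not part of the accepted answer):

from collections import defaultdict

def build_predictions(words: list) -> dict:
    predictions = defaultdict(list)
    for word, next_word in zip(words[:-1], words[1:]):
        predictions[word].append(next_word)
    return dict(predictions)

words = [w.lower() for w in ["ONE", "two", "one", "three"]]
print(build_predictions(words))   # {'one': ['two', 'three'], 'two': ['one']}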
Q: How do I convert date (YYYY-MM-DD) to Month-YY and groupby on some other column to get minimum and maximum month? I have created a data frame which has rolling quarter mapping using the code abcd = pd.DataFrame() abcd['Month'] = np.nan abcd['Month'] = pd.date_range(start='2020-04-01', end='2022-04-01', freq = 'MS') abcd['Time_1'] = np.arange(1, abcd.shape[0]+1) abcd['Time_2'] = np.arange(0, abcd.shape[0]) abcd['Time_3'] = np.arange(-1, abcd.shape[0]-1) db_nd_ad_unpivot = pd.melt(abcd, id_vars=['Month'], value_vars=['Time_1', 'Time_2', 'Time_3',], var_name='Time_name', value_name='Time') abcd_map = db_nd_ad_unpivot[(db_nd_ad_unpivot['Time']>0)&(db_nd_ad_unpivot['Time']< abcd.shape[0]+1)] abcd_map = abcd_map[['Month','Time']] The output of the code looks like this: Now, I have created an additional column name that gives me the name of the month and year in format Mon-YY using the code abcd_map['Month'] = pd.to_datetime(abcd_map.Month) # abcd_map['Month'] = abcd_map['Month'].astype(str) abcd_map['Time_Period'] = abcd_map['Month'].apply(lambda x: x.strftime("%b'%y")) Now I want to see for a specific time, what is the minimum and maximum in the month column. For eg. for time instance 17 ,The simple groupby results as: Time Period 17 Aug'21-Sept'21 The desired output is Time Time_Period 17 Aug'21-Oct'21. I think it is based on min and max of the column Month as by using the strftime function the column is getting converted in String/object type. A: Do this: abcd_map['Month_'] = pd.to_datetime(abcd_map['Month']).dt.strftime('%Y-%m') abcd_map['Time_Period'] = abcd_map['Month_'] = pd.to_datetime(abcd_map['Month']).dt.strftime('%Y-%m') abcd_map['Time_Period'] = abcd_map['Month'].apply(lambda x: x.strftime("%b'%y")) df = abcd_map.groupby(['Time']).agg( sum_col=('Time', np.sum), first_date=('Time_Period', np.min), last_date=('Time_Period', np.max) ).reset_index() df['TimePeriod'] = df['first_date']+'-'+df['last_date'] df = df.drop(['first_date','last_date'], axis = 1) df which returns Time sum_col TimePeriod 0 1 3 Apr'20-May'20 1 2 6 Jul'20-May'20 2 3 9 Aug'20-Jun'20 3 4 12 Aug'20-Sep'20 4 5 15 Aug'20-Sep'20 5 6 18 Nov'20-Sep'20 6 7 21 Dec'20-Oct'20 7 8 24 Dec'20-Nov'20 8 9 27 Dec'20-Jan'21 9 10 30 Feb'21-Mar'21 10 11 33 Apr'21-Mar'21 11 12 36 Apr'21-May'21 12 13 39 Apr'21-May'21 13 14 42 Jul'21-May'21 14 15 45 Aug'21-Jun'21 15 16 48 Aug'21-Sep'21 16 17 51 Aug'21-Sep'21 17 18 54 Nov'21-Sep'21 18 19 57 Dec'21-Oct'21 19 20 60 Dec'21-Nov'21 20 21 63 Dec'21-Jan'22 21 22 66 Feb'22-Mar'22 22 23 69 Apr'22-Mar'22 23 24 48 Apr'22-Mar'22 24 25 25 Apr'22-Apr'22 A: How about converting to string after finding the min and max New_df = abcd_map.groupby('Time')['Month'].agg(['min', 'max']).apply(lambda x: x.dt.strftime("%b'%y")).agg(' '.join, axis=1).reset_index()
How do I convert date (YYYY-MM-DD) to Month-YY and groupby on some other column to get minimum and maximum month?
I have created a data frame which has rolling quarter mapping using the code abcd = pd.DataFrame() abcd['Month'] = np.nan abcd['Month'] = pd.date_range(start='2020-04-01', end='2022-04-01', freq = 'MS') abcd['Time_1'] = np.arange(1, abcd.shape[0]+1) abcd['Time_2'] = np.arange(0, abcd.shape[0]) abcd['Time_3'] = np.arange(-1, abcd.shape[0]-1) db_nd_ad_unpivot = pd.melt(abcd, id_vars=['Month'], value_vars=['Time_1', 'Time_2', 'Time_3',], var_name='Time_name', value_name='Time') abcd_map = db_nd_ad_unpivot[(db_nd_ad_unpivot['Time']>0)&(db_nd_ad_unpivot['Time']< abcd.shape[0]+1)] abcd_map = abcd_map[['Month','Time']] The output of the code looks like this: Now, I have created an additional column name that gives me the name of the month and year in format Mon-YY using the code abcd_map['Month'] = pd.to_datetime(abcd_map.Month) # abcd_map['Month'] = abcd_map['Month'].astype(str) abcd_map['Time_Period'] = abcd_map['Month'].apply(lambda x: x.strftime("%b'%y")) Now I want to see for a specific time, what is the minimum and maximum in the month column. For eg. for time instance 17 ,The simple groupby results as: Time Period 17 Aug'21-Sept'21 The desired output is Time Time_Period 17 Aug'21-Oct'21. I think it is based on min and max of the column Month as by using the strftime function the column is getting converted in String/object type.
[ "Do this:\nabcd_map['Month_'] = pd.to_datetime(abcd_map['Month']).dt.strftime('%Y-%m')\nabcd_map['Time_Period'] = abcd_map['Month_'] = pd.to_datetime(abcd_map['Month']).dt.strftime('%Y-%m')\nabcd_map['Time_Period'] = abcd_map['Month'].apply(lambda x: x.strftime(\"%b'%y\"))\ndf = abcd_map.groupby(['Time']).agg(\n sum_col=('Time', np.sum),\n first_date=('Time_Period', np.min),\n last_date=('Time_Period', np.max)\n).reset_index()\n\ndf['TimePeriod'] = df['first_date']+'-'+df['last_date']\ndf = df.drop(['first_date','last_date'], axis = 1)\ndf\n\nwhich returns\n\n Time sum_col TimePeriod\n0 1 3 Apr'20-May'20\n1 2 6 Jul'20-May'20\n2 3 9 Aug'20-Jun'20\n3 4 12 Aug'20-Sep'20\n4 5 15 Aug'20-Sep'20\n5 6 18 Nov'20-Sep'20\n6 7 21 Dec'20-Oct'20\n7 8 24 Dec'20-Nov'20\n8 9 27 Dec'20-Jan'21\n9 10 30 Feb'21-Mar'21\n10 11 33 Apr'21-Mar'21\n11 12 36 Apr'21-May'21\n12 13 39 Apr'21-May'21\n13 14 42 Jul'21-May'21\n14 15 45 Aug'21-Jun'21\n15 16 48 Aug'21-Sep'21\n16 17 51 Aug'21-Sep'21\n17 18 54 Nov'21-Sep'21\n18 19 57 Dec'21-Oct'21\n19 20 60 Dec'21-Nov'21\n20 21 63 Dec'21-Jan'22\n21 22 66 Feb'22-Mar'22\n22 23 69 Apr'22-Mar'22\n23 24 48 Apr'22-Mar'22\n24 25 25 Apr'22-Apr'22\n\n", "How about converting to string after finding the min and max\nNew_df = abcd_map.groupby('Time')['Month'].agg(['min', 'max']).apply(lambda x: x.dt.strftime(\"%b'%y\")).agg(' '.join, axis=1).reset_index()\n\n" ]
[ 1, 1 ]
[]
[]
[ "datetime", "group_by", "pandas", "python" ]
stackoverflow_0074603351_datetime_group_by_pandas_python.txt
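For Time 17 the question asks for Aug'21-Oct'21. The first answer takes min/max of the already-formatted Mon'YY strings, which sort alphabetically rather than chronologically; the second aggregates the datetimes but joins with a space. A sketch that reuses the question's own setup, takes min/max on the datetimes, and joins with a hyphen:

import numpy as np
import pandas as pd

# Rebuild the rolling-quarter mapping from the question.
abcd = pd.DataFrame()
abcd['Month'] = pd.date_range(start='2020-04-01', end='2022-04-01', freq='MS')
abcd['Time_1'] = np.arange(1, abcd.shape[0] + 1)
abcd['Time_2'] = np.arange(0, abcd.shape[0])
abcd['Time_3'] = np.arange(-1, abcd.shape[0] - 1)
unpivot = pd.melt(abcd, id_vars=['Month'],
                  value_vars=['Time_1', 'Time_2', 'Time_3'],
                  var_name='Time_name', value_name='Time')
abcd_map = unpivot[(unpivot['Time'] > 0) & (unpivot['Time'] < abcd.shape[0] + 1)][['Month', 'Time']]

# min/max on the real datetimes, then format and join with a hyphen.
out = (abcd_map.groupby('Time')['Month']
       .agg(['min', 'max'])
       .apply(lambda col: col.dt.strftime("%b'%y"))
       .agg('-'.join, axis=1)
       .reset_index(name='Time_Period'))
print(out[out['Time'] == 17])   # Time 17 -> Aug'21-Oct'21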
Q: How to validate an integer with python that needs to be used in calculations I'm trying to validate that a user input int is numbers only. This has been my most recent try: while True: NumCars = int(input("Enter the number of cars on the policy: ")) NumCarsStr = str(NumCars) if NumCarsStr == "": print("Number of cars cannot be blank. Please re-enter.") elif NumCarsStr.isalpha == True: print("Number of cars must be numbers only. Please re-enter.") else: break With this, I get the following error: line 91, in <module> NumCars = int(input("Enter the number of cars on the policy: ")) ValueError: invalid literal for int() with base 10: 'g' (I entered a "g" to test if it was working) Thanks in advance! A: Use try / except, and then break the loop if no exception is raised, otherwise capture the exception, print an error message and let the loop iterate again while True: try: NumCars = int(input("Enter the number of cars on the policy: ")) break except ValueError: print("You must enter an integer number") A: Thank you everyone for your suggestions! Unfortunately none of these answers did quite what I wanted to do, but they did lead me down a thought process that DID work. while True: NumCars = (input("Enter the number of cars on the policy: ")) if NumCars == "": print("Number of cars cannot be blank. Please re-enter.") elif NumCars.isdigit() == False: print("Number of cars must be numbers only. Please re-enter.") else: break After changing the NumCars input to a string, I validated with isdigit, and then when I needed the variable for the calculations, I converted it to integers at the instances where the variable would be used for calculations. I don't know if it was the easiest or fastest way of doing this, but it worked! A: if type(input) == int: do_code_if_is_numbersonly() else: print("Numbers Only") This checks the type of the input, and if it is integer (number) it does example code, otherwise it prints Numbers Only. This is a quick fix but in my opinion it should work well.
How to validate an integer with python that needs to be used in calculations
I'm trying to validate that a user input int is numbers only. This has been my most recent try: while True: NumCars = int(input("Enter the number of cars on the policy: ")) NumCarsStr = str(NumCars) if NumCarsStr == "": print("Number of cars cannot be blank. Please re-enter.") elif NumCarsStr.isalpha == True: print("Number of cars must be numbers only. Please re-enter.") else: break With this, I get the following error: line 91, in <module> NumCars = int(input("Enter the number of cars on the policy: ")) ValueError: invalid literal for int() with base 10: 'g' (I entered a "g" to test if it was working) Thanks in advance!
[ "Use try / except, and then break the loop if no exception is raised, otherwise capture the exception, print an error message and let the loop iterate again\nwhile True:\n try:\n NumCars = int(input(\"Enter the number of cars on the policy: \"))\n break\n except ValueError:\n print(\"You must enter an integer number\")\n\n", "Thank you everyone for your suggestions! Unfortunately none of these answers did quite what I wanted to do, but they did lead me down a thought process that DID work.\n while True:\n\n NumCars = (input(\"Enter the number of cars on the policy: \"))\n if NumCars == \"\":\n print(\"Number of cars cannot be blank. Please re-enter.\")\n elif NumCars.isdigit() == False:\n print(\"Number of cars must be numbers only. Please re-enter.\")\n else:\n break\n\nAfter changing the NumCars input to a string, I validated with isdigit, and then when I needed the variable for the calculations, I converted it to integers at the instances where the variable would be used for calculations.\nI don't know if it was the easiest or fastest way of doing this, but it worked!\n", "if type(input) == int: do_code_if_is_numbersonly() else: print(\"Numbers Only\")\nThis checks the type of the input, and if it is integer (number) it does example code, otherwise it prints Numbers Only. This is a quick fix but in my opinion it should work well.\n" ]
[ 1, 1, 0 ]
[ "You can utilize the .isnumeric() function to determine if the string represents an integer.\ne.g;\n NumCarsStr = str(input(\"Enter the number of cars on the policy: \"))\n ...\n ...\n elif not NumCarsStr.isnumeric():\n print(\"Number of cars must be numbers only. Please re-enter.\")\n\n" ]
[ -1 ]
[ "python", "validation" ]
stackoverflow_0074603758_python_validation.txt
Q: Connecting Rows of Array to Date and Time formate Im New to Python, it sounds easy- but i cant solve it, nether find a resolution with similar questions. I have an Array where every row has one value ((...), Hour, Min., Sec., (...) ,D, M, Y) - Example: arr = np.array([x, x,0, 0, 3, x, x,10, 8, 2022]) How can I conect the rows by ID to create an Data.frame of Date and Time in the formate: Date: Time: YY/DD/MM HH:MM:SS So in my example i want to create an dataframe like: Date: Time: 2022/08/10 00:00:03 I was trying to use merge or concat, but only reach to the output: [2022.10.8],[0.0.3] High Thanks for any ideas! : ) A: Try: arr = np.array([-1, -1, 0, 0, 3, -1, -1, 10, 8, 2022]) df = pd.DataFrame( [ pd.to_datetime( f"{arr[-1]}/{arr[-2]}/{arr[-3]} {arr[-8]}:{arr[-7]}:{arr[-6]}" ) ], columns=["DateTime"], ) df["Date"] = df["DateTime"].dt.strftime("%Y/%m/%d") df["Time"] = df["DateTime"].dt.time df = df[["Date", "Time"]] print(df) Prints: Date Time 0 2022/08/10 00:00:03
Connecting Rows of Array to Date and Time formate
Im New to Python, it sounds easy- but i cant solve it, nether find a resolution with similar questions. I have an Array where every row has one value ((...), Hour, Min., Sec., (...) ,D, M, Y) - Example: arr = np.array([x, x,0, 0, 3, x, x,10, 8, 2022]) How can I conect the rows by ID to create an Data.frame of Date and Time in the formate: Date: Time: YY/DD/MM HH:MM:SS So in my example i want to create an dataframe like: Date: Time: 2022/08/10 00:00:03 I was trying to use merge or concat, but only reach to the output: [2022.10.8],[0.0.3] High Thanks for any ideas! : )
[ "Try:\narr = np.array([-1, -1, 0, 0, 3, -1, -1, 10, 8, 2022])\n\ndf = pd.DataFrame(\n [\n pd.to_datetime(\n f\"{arr[-1]}/{arr[-2]}/{arr[-3]} {arr[-8]}:{arr[-7]}:{arr[-6]}\"\n )\n ],\n columns=[\"DateTime\"],\n)\n\ndf[\"Date\"] = df[\"DateTime\"].dt.strftime(\"%Y/%m/%d\")\ndf[\"Time\"] = df[\"DateTime\"].dt.time\ndf = df[[\"Date\", \"Time\"]]\n\nprint(df)\n\nPrints:\n Date Time\n0 2022/08/10 00:00:03\n\n" ]
[ 1 ]
[]
[]
[ "concatenation", "dataframe", "datetime", "extract", "python" ]
stackoverflow_0074601701_concatenation_dataframe_datetime_extract_python.txt
Q: Django/Python "django.core.exceptions.ImproperlyConfigured: Cannot import 'contact'. Check that '...apps.contact.apps.ContactConfig.name' is correct" Hopefully you all can give me a hand with this... my work flow: |.vscode: |capstone_project_website: | -_pycache_: | -apps: | -_pycache_ | -accounts: | -contact: # app that is throwing errors | -_pycache_: | -migrations: | -_init_.py | -admin.py | -apps.py | -forms.py | -models.py | -test.py | -urls.py | -views.py | -public: | -_init_.py | -templates: # all my .html | -_init_.py | -asgi.py | -settings.py | -urls.py | -views.py | -wsgi.py |requirements: |scripts: |static: |.gitignore |.python-version |db-sqlite3 |docker-compose.yml |Dockerfile |Makefile #command I am running |manage.py |setup.cfg my Installed apps in capstone_project_website/settings.py: INSTALLED_APPS = [ "django.contrib.admin", "django.contrib.auth", "django.contrib.contenttypes", "django.contrib.sessions", "django.contrib.messages", "django.contrib.staticfiles", "capstone_project_website.apps.accounts", "capstone_project_website.apps.contact", ] my capstone_project_website/apps/contact/apps.py: from django.apps import AppConfig class ContactConfig(AppConfig): name = "contact" the command I am running is in my Makefile: compose-start: docker-compose up --remove-orphans $(options) When I run make compose-start I get this message from my Terminal: make compose-start docker-compose up --remove-orphans Docker Compose is now in the Docker CLI, try `docker compose up` Starting django-website_postgres_1 ... done Starting django-website_db_migrate_1 ... done Starting django-website_website_1 ... done Attaching to django-website_postgres_1, django-website_db_migrate_1, django-website_website_1 db_migrate_1 | Traceback (most recent call last): db_migrate_1 | File "/usr/local/lib/python3.7/site-packages/django/apps/config.py", line 244, in create db_migrate_1 | app_module = import_module(app_name) db_migrate_1 | File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module db_migrate_1 | return _bootstrap._gcd_import(name[level:], package, level) db_migrate_1 | File "<frozen importlib._bootstrap>", line 1006, in _gcd_import db_migrate_1 | File "<frozen importlib._bootstrap>", line 983, in _find_and_load db_migrate_1 | File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked db_migrate_1 | ModuleNotFoundError: No module named 'contact' db_migrate_1 | db_migrate_1 | During handling of the above exception, another exception occurred: db_migrate_1 | db_migrate_1 | Traceback (most recent call last): db_migrate_1 | File "manage.py", line 22, in <module> db_migrate_1 | main() db_migrate_1 | File "manage.py", line 18, in main db_migrate_1 | execute_from_command_line(sys.argv) db_migrate_1 | File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line db_migrate_1 | utility.execute() db_migrate_1 | File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 395, in execute db_migrate_1 | django.setup() db_migrate_1 | File "/usr/local/lib/python3.7/site-packages/django/__init__.py", line 24, in setup db_migrate_1 | apps.populate(settings.INSTALLED_APPS) db_migrate_1 | File "/usr/local/lib/python3.7/site-packages/django/apps/registry.py", line 91, in populate db_migrate_1 | app_config = AppConfig.create(entry) db_migrate_1 | File "/usr/local/lib/python3.7/site-packages/django/apps/config.py", line 250, in create db_migrate_1 | app_config_class.__qualname__, db_migrate_1 
| django.core.exceptions.ImproperlyConfigured: Cannot import 'contact'. Check that 'capstone_project_website.apps.contact.apps.ContactConfig.name' is correct. postgres_1 | postgres_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization postgres_1 | postgres_1 | 2021-05-02 14:17:44.105 UTC [1] LOG: starting PostgreSQL 13.2 (Debian 13.2-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit postgres_1 | 2021-05-02 14:17:44.106 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres_1 | 2021-05-02 14:17:44.106 UTC [1] LOG: listening on IPv6 address "::", port 5432 postgres_1 | 2021-05-02 14:17:44.113 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres_1 | 2021-05-02 14:17:44.121 UTC [28] LOG: database system was shut down at 2021-05-02 14:17:17 UTC django-website_db_migrate_1 exited with code 1 postgres_1 | 2021-05-02 14:17:44.129 UTC [1] LOG: database system is ready to accept connections website_1 | Watching for file changes with StatReloader ^CGracefully stopping... (press Ctrl+C again to force) Stopping django-website_website_1 ... done Stopping django-website_postgres_1 ... done I know I put a lot of my work flow in, but I am wondering if I am setting it up incorrectly. I have run into this problem before when trying to makemigrations on my accounts app. the answer to that one seemed to work. the answer was to change the name in capstone_project_website/apps/accounts/apps.py to class AccountsConfig(AppConfig): name = "capstone_project_website.apps.accounts" and in installed apps you can see the path is: "capstone_project_website.apps.accounts", I have tried changing the name of... capstone_project_website/apps/contact/apps.py class ContactConfig(AppConfig): name = "contact" ... to name='capstone_project_website.apps.contact' ...and my installed app name to: INSTALLED_APPS = [ ... "capstone_project_website.apps.ContactConfig", ] Point being I am unsure what is going on, if it is a directory issue, a name issue or if there is some step I am missing before I make compose-start. If you could help me with this, and explain what I am doing wrong, I would greatly appreciate it! Thank you A: try changing this: class ContactConfig(AppConfig): name = "contact" to this: class ContactConfig(AppConfig): name = "apps.contact" A: Check your django version. If you update your django version to 3.2, try to switch with the earliest one. django==3.1.8 A: This happens due to python version, so I was having similar issue with Python version 3.7.x. But when I switched to 3.9.x, it got resolved. Here is what I was trying to do: (base) Imrans-MacBook-Pro:django-vue imran$ source ./venv/bin/activate (venv) (base) Imrans-MacBook-Pro:django-vue imran$ python manage.py makemigrations Traceback (most recent call last): File "manage.py", line 22, in <module> main() File "manage.py", line 18, in main execute_from_command_line(sys.argv) File "/Users/imran/Workspace/django-vue/venv/lib/python3.7/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line utility.execute() Then I ssh into the docker setup with 3.9 and here are the results: (venv) (base) Imrans-MacBook-Pro:django-vue imran$ docker-compose exec backend sh # python --version Python 3.9.15 # python manage.py makemigrations Migrations for 'core': core/migrations/0003_product.py - Create model Product
Django/Python "django.core.exceptions.ImproperlyConfigured: Cannot import 'contact'. Check that '...apps.contact.apps.ContactConfig.name' is correct"
Hopefully you all can give me a hand with this... my work flow: |.vscode: |capstone_project_website: | -_pycache_: | -apps: | -_pycache_ | -accounts: | -contact: # app that is throwing errors | -_pycache_: | -migrations: | -_init_.py | -admin.py | -apps.py | -forms.py | -models.py | -test.py | -urls.py | -views.py | -public: | -_init_.py | -templates: # all my .html | -_init_.py | -asgi.py | -settings.py | -urls.py | -views.py | -wsgi.py |requirements: |scripts: |static: |.gitignore |.python-version |db-sqlite3 |docker-compose.yml |Dockerfile |Makefile #command I am running |manage.py |setup.cfg my Installed apps in capstone_project_website/settings.py: INSTALLED_APPS = [ "django.contrib.admin", "django.contrib.auth", "django.contrib.contenttypes", "django.contrib.sessions", "django.contrib.messages", "django.contrib.staticfiles", "capstone_project_website.apps.accounts", "capstone_project_website.apps.contact", ] my capstone_project_website/apps/contact/apps.py: from django.apps import AppConfig class ContactConfig(AppConfig): name = "contact" the command I am running is in my Makefile: compose-start: docker-compose up --remove-orphans $(options) When I run make compose-start I get this message from my Terminal: make compose-start docker-compose up --remove-orphans Docker Compose is now in the Docker CLI, try `docker compose up` Starting django-website_postgres_1 ... done Starting django-website_db_migrate_1 ... done Starting django-website_website_1 ... done Attaching to django-website_postgres_1, django-website_db_migrate_1, django-website_website_1 db_migrate_1 | Traceback (most recent call last): db_migrate_1 | File "/usr/local/lib/python3.7/site-packages/django/apps/config.py", line 244, in create db_migrate_1 | app_module = import_module(app_name) db_migrate_1 | File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module db_migrate_1 | return _bootstrap._gcd_import(name[level:], package, level) db_migrate_1 | File "<frozen importlib._bootstrap>", line 1006, in _gcd_import db_migrate_1 | File "<frozen importlib._bootstrap>", line 983, in _find_and_load db_migrate_1 | File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked db_migrate_1 | ModuleNotFoundError: No module named 'contact' db_migrate_1 | db_migrate_1 | During handling of the above exception, another exception occurred: db_migrate_1 | db_migrate_1 | Traceback (most recent call last): db_migrate_1 | File "manage.py", line 22, in <module> db_migrate_1 | main() db_migrate_1 | File "manage.py", line 18, in main db_migrate_1 | execute_from_command_line(sys.argv) db_migrate_1 | File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line db_migrate_1 | utility.execute() db_migrate_1 | File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 395, in execute db_migrate_1 | django.setup() db_migrate_1 | File "/usr/local/lib/python3.7/site-packages/django/__init__.py", line 24, in setup db_migrate_1 | apps.populate(settings.INSTALLED_APPS) db_migrate_1 | File "/usr/local/lib/python3.7/site-packages/django/apps/registry.py", line 91, in populate db_migrate_1 | app_config = AppConfig.create(entry) db_migrate_1 | File "/usr/local/lib/python3.7/site-packages/django/apps/config.py", line 250, in create db_migrate_1 | app_config_class.__qualname__, db_migrate_1 | django.core.exceptions.ImproperlyConfigured: Cannot import 'contact'. 
Check that 'capstone_project_website.apps.contact.apps.ContactConfig.name' is correct. postgres_1 | postgres_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization postgres_1 | postgres_1 | 2021-05-02 14:17:44.105 UTC [1] LOG: starting PostgreSQL 13.2 (Debian 13.2-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit postgres_1 | 2021-05-02 14:17:44.106 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 postgres_1 | 2021-05-02 14:17:44.106 UTC [1] LOG: listening on IPv6 address "::", port 5432 postgres_1 | 2021-05-02 14:17:44.113 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" postgres_1 | 2021-05-02 14:17:44.121 UTC [28] LOG: database system was shut down at 2021-05-02 14:17:17 UTC django-website_db_migrate_1 exited with code 1 postgres_1 | 2021-05-02 14:17:44.129 UTC [1] LOG: database system is ready to accept connections website_1 | Watching for file changes with StatReloader ^CGracefully stopping... (press Ctrl+C again to force) Stopping django-website_website_1 ... done Stopping django-website_postgres_1 ... done I know I put a lot of my work flow in, but I am wondering if I am setting it up incorrectly. I have run into this problem before when trying to makemigrations on my accounts app. the answer to that one seemed to work. the answer was to change the name in capstone_project_website/apps/accounts/apps.py to class AccountsConfig(AppConfig): name = "capstone_project_website.apps.accounts" and in installed apps you can see the path is: "capstone_project_website.apps.accounts", I have tried changing the name of... capstone_project_website/apps/contact/apps.py class ContactConfig(AppConfig): name = "contact" ... to name='capstone_project_website.apps.contact' ...and my installed app name to: INSTALLED_APPS = [ ... "capstone_project_website.apps.ContactConfig", ] Point being I am unsure what is going on, if it is a directory issue, a name issue or if there is some step I am missing before I make compose-start. If you could help me with this, and explain what I am doing wrong, I would greatly appreciate it! Thank you
[ "try changing this:\n\nclass ContactConfig(AppConfig):\nname = \"contact\"\n\nto this:\n\nclass ContactConfig(AppConfig):\nname = \"apps.contact\"\n\n", "Check your django version. If you update your django version to 3.2, try to switch with the earliest one.\n\ndjango==3.1.8\n\n", "This happens due to python version, so I was having similar issue with Python version 3.7.x. But when I switched to 3.9.x, it got resolved.\nHere is what I was trying to do:\n(base) Imrans-MacBook-Pro:django-vue imran$ source ./venv/bin/activate\n(venv) (base) Imrans-MacBook-Pro:django-vue imran$ python manage.py makemigrations\nTraceback (most recent call last):\n File \"manage.py\", line 22, in <module>\n main()\n File \"manage.py\", line 18, in main\n execute_from_command_line(sys.argv)\n File \"/Users/imran/Workspace/django-vue/venv/lib/python3.7/site-packages/django/core/management/__init__.py\", line 419, in execute_from_command_line\n utility.execute()\n\nThen I ssh into the docker setup with 3.9 and here are the results:\n(venv) (base) Imrans-MacBook-Pro:django-vue imran$ docker-compose exec backend sh\n# python --version\nPython 3.9.15\n# python manage.py makemigrations\nMigrations for 'core':\n core/migrations/0003_product.py\n - Create model Product\n\n" ]
[ 13, 2, 0 ]
[]
[]
[ "configuration", "django", "docker_compose", "modulenotfounderror", "python" ]
stackoverflow_0067358268_configuration_django_docker_compose_modulenotfounderror_python.txt
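The question itself notes that the accounts app only started working once its AppConfig.name was the full dotted path. A sketch of the matching fix for the contact app (assuming the INSTALLED_APPS entry stays "capstone_project_website.apps.contact"):

# capstone_project_website/apps/contact/apps.py
from django.apps import AppConfig


class ContactConfig(AppConfig):
    # Must match the dotted path used in INSTALLED_APPS, exactly as was
    # already done for capstone_project_website.apps.accounts.
    name = "capstone_project_website.apps.contact"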
Q: How to sum all values with equal dates so that I don't have duplicate date values I would like to know how I can sum up the values for the same dates only. (See picture.) For example, I would only like to have every date once, with the corresponding values. Since some dates are repeated, I would like to know how I can sum up the values with the same dates. Above you see the column name. A: You can use Groupby.sum with numeric_only=True. Assuming df is your dataframe, try this : df.groupby("dt_COMP", as_index=False).sum(numeric_only=True)
How to sum all values with equal dates so that I don't have duplicate date values
I would like to know how I can sum up the values for the same dates only. (See picture.) For example, I would only like to have every date once, with the corresponding values. Since some dates are repeated, I would like to know how I can sum up the values with the same dates. Above you see the column name.
[ "You can use Groupby.sum with numeric_only=True.\nAssuming df is your dataframe, try this :\ndf.groupby(\"dt_COMP\", as_index=False).sum(numeric_only=True)\n\n" ]
[ 2 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074604110_pandas_python.txt
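The question's data is only shown as a picture, so here is a tiny stand-in frame (column names are assumptions) demonstrating the suggested call:

import pandas as pd

df = pd.DataFrame({
    "dt_COMP": ["2022-01-01", "2022-01-01", "2022-01-02"],
    "value":   [10, 5, 7],
})
print(df.groupby("dt_COMP", as_index=False).sum(numeric_only=True))
#       dt_COMP  value
# 0  2022-01-01     15
# 1  2022-01-02      7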
Q: Calculating embedding overload problems with BERT I'm trying to calculate the embedding of a sentence using BERT. After I input the sentence into BERT, I calculate the Mean-pooling, which is used as the embedding of the sentence. Problem My code can calculate the embedding of sentences, but the computational cost is very high. I don't know what's wrong and I hope someone can help me. Install BERT import torch from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") model = AutoModel.from_pretrained("bert-base-uncased") Get Embedding Function # get the word embedding from BERT def get_word_embedding(text:str): input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0) # Batch size 1 outputs = model(input_ids) last_hidden_states = outputs[1] # The last hidden-state is the first element of the output tuple return last_hidden_states[0] Data The maximum number of words in the text is 50. I calculate the entity+text embedding enter image description here Run code entity_desc is my data. It's this step that overloads my computer every time I run it. Please help me!!! I was use RAM 80GB machine in Colab. entity_embedding = {} for i in range(len(entity_desc)): entity = entity_desc['entity'][i] text = entity_desc['text'][i] entity += ' ' + text entity_embedding[entity_desc['entity_id'][i]] = get_word_embedding(entity) A: You might be storing sentence embedding in the GPU. Try to move it to cpu before returning it. # get the word embedding from BERT def get_word_embedding(text:str): input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0) # Batch size 1 outputs = model(input_ids) last_hidden_states = outputs[1] # The last hidden-state is the first element of the output tuple return last_hidden_states[0].detach().cpu()
Calculating embedding overload problems with BERT
I'm trying to calculate the embedding of a sentence using BERT. After I input the sentence into BERT, I calculate the Mean-pooling, which is used as the embedding of the sentence. Problem My code can calculate the embedding of sentences, but the computational cost is very high. I don't know what's wrong and I hope someone can help me. Install BERT import torch from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") model = AutoModel.from_pretrained("bert-base-uncased") Get Embedding Function # get the word embedding from BERT def get_word_embedding(text:str): input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0) # Batch size 1 outputs = model(input_ids) last_hidden_states = outputs[1] # The last hidden-state is the first element of the output tuple return last_hidden_states[0] Data The maximum number of words in the text is 50. I calculate the entity+text embedding enter image description here Run code entity_desc is my data. It's this step that overloads my computer every time I run it. Please help me!!! I was use RAM 80GB machine in Colab. entity_embedding = {} for i in range(len(entity_desc)): entity = entity_desc['entity'][i] text = entity_desc['text'][i] entity += ' ' + text entity_embedding[entity_desc['entity_id'][i]] = get_word_embedding(entity)
[ "You might be storing sentence embedding in the GPU. Try to move it to cpu before returning it.\n# get the word embedding from BERT\ndef get_word_embedding(text:str):\n input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0) # Batch size 1\n outputs = model(input_ids)\n last_hidden_states = outputs[1] \n # The last hidden-state is the first element of the output tuple\n return last_hidden_states[0].detach().cpu()\n\n" ]
[ 0 ]
[]
[]
[ "bert_language_model", "embedding", "nlp", "python", "pytorch" ]
stackoverflow_0074595449_bert_language_model_embedding_nlp_python_pytorch.txt
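Moving the result to the CPU helps, but the loop in the question also runs every forward pass with gradient tracking enabled, and the stored tensors then keep the whole autograd graph alive; that is usually what exhausts memory here. A sketch combining the two (same model and tokenizer as the question):

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def get_word_embedding(text: str):
    input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)  # batch size 1
    with torch.no_grad():            # no autograd graph is built or stored
        outputs = model(input_ids)
    pooled = outputs[1]              # pooler output, shape (1, hidden_size)
    return pooled[0].detach().cpu()  # plain CPU tensor, safe to keep in a dict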
Q: create a dictionary showing connectivity between items of list python I have a list as: ['Title', 'Text', 'Title', 'Title', 'Text', 'Title', 'Text', 'List', 'Text', 'Title', 'Text', 'Text'] I want every element to be connected to element 'Title" before the element. For example, Text at index 1 is connected to Title at index 0, Title at index 2 would not be connected to any element, because it has another title after it. Text at index 4 is connected to title 3, similarly Text at position 10,11 will be connected to Title at index 9. This is the expected output: {1:0,4:3,6:5,7:5,8:5,10:9,11:9} How can I do that? A: You can use a loop: l = ['Title', 'Text', 'Title', 'Title', 'Text', 'Title', 'Text', 'List', 'Text', 'Title', 'Text', 'Text'] last = -1 out = {} for i, v in enumerate(l): if v == 'Title': last = i else: out[i] = last print(out) Output: {1: 0, 4: 3, 6: 5, 7: 5, 8: 5, 10: 9, 11: 9} A: Don't take this as good practice, but I was trying to have fun with the := walrus operator and do it all in one comprehension. prev = -1 li = ['Title', 'Text', 'Title', 'Title', 'Text', 'Title', 'Text', 'List', 'Text', 'Title', 'Text', 'Text'] li2 = {ix:prev for ix, v in enumerate(li) if #this first part is always True because of the +77 # why 77 rather than say 1? to avoid -1+1 => 0 if first is Text #but it only stores prev on Title values using the walrus `:=` ((prev:= (ix if v == "Title" else prev))+77) #we only want the Text Values and v == "Text" } print(f"{li2=}") #output: li2={1: 0, 4: 3, 6: 5, 8: 5, 10: 9, 11: 9} In some defense of this unintuitive code list comprehensions are usually faster than equivalent for loop constructs. Usually => not in this case, where mozway's answer was faster.
create a dictionary showing connectivity between items of list python
I have a list as: ['Title', 'Text', 'Title', 'Title', 'Text', 'Title', 'Text', 'List', 'Text', 'Title', 'Text', 'Text'] I want every element to be connected to element 'Title" before the element. For example, Text at index 1 is connected to Title at index 0, Title at index 2 would not be connected to any element, because it has another title after it. Text at index 4 is connected to title 3, similarly Text at position 10,11 will be connected to Title at index 9. This is the expected output: {1:0,4:3,6:5,7:5,8:5,10:9,11:9} How can I do that?
[ "You can use a loop:\nl = ['Title', 'Text', 'Title', 'Title', 'Text', 'Title', 'Text', 'List', 'Text', 'Title', 'Text', 'Text']\n\nlast = -1\nout = {}\nfor i, v in enumerate(l):\n if v == 'Title':\n last = i\n else:\n out[i] = last\nprint(out)\n\nOutput: {1: 0, 4: 3, 6: 5, 7: 5, 8: 5, 10: 9, 11: 9}\n", "Don't take this as good practice, but I was trying to have fun with the := walrus operator and do it all in one comprehension.\nprev = -1\nli = ['Title', 'Text', 'Title', 'Title', 'Text', 'Title', 'Text', 'List', 'Text', 'Title', 'Text', 'Text']\nli2 = {ix:prev\n for ix, v \n in enumerate(li) \n\n if\n\n #this first part is always True because of the +77\n # why 77 rather than say 1? to avoid -1+1 => 0 if first is Text\n #but it only stores prev on Title values using the walrus `:=`\n ((prev:= (ix if v == \"Title\" else prev))+77) \n\n #we only want the Text Values\n and v == \"Text\" \n}\n\nprint(f\"{li2=}\")\n\n\n\n#output:\nli2={1: 0, 4: 3, 6: 5, 8: 5, 10: 9, 11: 9}\n\nIn some defense of this unintuitive code list comprehensions are usually faster than equivalent for loop constructs. Usually => not in this case, where mozway's answer was faster.\n" ]
[ 6, 0 ]
[ "One way you can do this (not very efficient but I do not have much time at them moment):\nx = ['Title', 'Text', 'Title', 'Title', 'Text', 'Title', 'Text', 'List', 'Text', 'Title', 'Text', 'Text']\nd = {}\nfor i, ele in enumerate(x):\n if ele != 'Title':\n d[i] = i - x[:i+1][::-1].index('Title')\n\n>>> print(d)\n>>> {1: 0, 4: 3, 6: 5, 7: 5, 8: 5, 10: 9, 11: 9}\n\n#oneliner:\n{i: i-x[:i+1][::-1].index('Title') for i, ele in enumerate(x) if ele != 'Title'}\n\n\nBut this does not check if there is a 'Title' in your list, i.e. the list should always start with 'Title'.\n", "Basic logic is to find all postions of root Title and just place the last smaller than current position to current none root element.\nCode:\nx= ['Title', 'Text', 'Title', 'Title', 'Text', 'Title', 'Text', 'List', 'Text', 'Title', 'Text', 'Text']\n\n{i:[j for j, v in enumerate(x[:i]) if v==x[0]][-1] for i,v in enumerate(x) if v!=x[0]}\n\nOutput:\n{1: 0, 4: 3, 6: 5, 7: 5, 8: 5, 10: 9, 11: 9}\n\n" ]
[ -1, -1 ]
[ "for_loop", "list", "python" ]
stackoverflow_0074598300_for_loop_list_python.txt
Q: python: regex to match pattern on multiple lines (apply to start of string) I'm trying to create a pattern to match two lines in the following multi-line text: Text of no interest. TextOfInterest1, foobar of no interest another foobar of no interest AnotherText-of_Interest2 some foobar don't care I need to match exactly TextOfInterest1 and AnotherText-of_Interest2, note that there might be multiple lines between TextOfInterest1 and AnotherText-of_Interest2. I don't know how I can apply ^ symbol more than once in a single pattern string? A: Try this: pat = r"^.*\n([-_\w]+).*\n.*\n([-_\w]+).*$" string="""Text of no interest. TextOfInterest1, foobar of no interest another foobar of no interest AnotherText-of_Interest2 some foobar don't care""" re.search(pat,string).groups() ('TextOfInterest1', 'AnotherText-of_Interest2') Check pattern at regex101.
python: regex to match pattern on multiple lines (apply to start of string)
I'm trying to create a pattern to match two lines in the following multi-line text: Text of no interest. TextOfInterest1, foobar of no interest another foobar of no interest AnotherText-of_Interest2 some foobar don't care I need to match exactly TextOfInterest1 and AnotherText-of_Interest2, note that there might be multiple lines between TextOfInterest1 and AnotherText-of_Interest2. I don't know how I can apply ^ symbol more than once in a single pattern string?
[ "Try this:\npat = r\"^.*\\n([-_\\w]+).*\\n.*\\n([-_\\w]+).*$\"\nstring=\"\"\"Text of no interest.\nTextOfInterest1, foobar of no interest\nanother foobar of no interest\nAnotherText-of_Interest2 some foobar don't care\"\"\"\n\nre.search(pat,string).groups()\n\n('TextOfInterest1', 'AnotherText-of_Interest2')\n\nCheck pattern at regex101.\n" ]
[ 0 ]
[]
[]
[ "python", "string_matching" ]
stackoverflow_0074604099_python_string_matching.txt
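On the literal question of applying ^ more than once: with the re.MULTILINE flag, ^ matches at the start of every line, so one pattern can anchor on several lines at once. The token pattern below is an assumption (it treats the interesting lines as the ones whose first word contains a digit, which is true for the sample but may not generalise):

import re

text = """Text of no interest.
TextOfInterest1, foobar of no interest
another foobar of no interest
AnotherText-of_Interest2 some foobar don't care"""

pat = re.compile(r"^([\w-]*\d[\w-]*)", re.MULTILINE)
print(pat.findall(text))   # ['TextOfInterest1', 'AnotherText-of_Interest2']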
Q: Are all JSON strings also syntactically valid Python string literals? I am writing a JavaScript function that generates a Python program. Can I use JavaScript's JSON.stringify() on a string and expect a valid Python string every time or are there edge cases that mean I have to write my own toPythonString() function? A: The short answer is no. The long answer is in practice, sort of yes. Almost. The difference is that JSON strings can have the forward slash backslash-escaped. So the string "\/" is interpreted by JSON (and JavaScript) as a single forward slash '/' but as a backslash followed by a forward slash by Python '\\/'. However, it's not mandatory to serialize forward slashes this way in JSON. If you have control over the JSON serializer and can be sure that it doesn't backslash escape forward slashes then your JSON string should always be a valid Python string. Looking at the source code of the JSON serializer in V8 (Google Chrome's and Node.js's JavaScript engine), it always serializes '/' as "/", as you'd expect. JSON string syntax is documented on json.org and Python string syntax is documented in the Lexical analisys section of its docs. We can see that a regular JSON string character is "Any codepoint except ", \ or control characters" whereas Python string characters are "Any source character except \ or newline or the quote". Since newline is a control character, this means JSON is more restrictive than Python, which is good. Then the question is what is the overlap between "Any codepoint" and "Any source character"? I don't know how to answer that, but I'd guess they're probably the same, assuming both the JSON and Python are encoded in UTF-8. (If you're interested in JavaScript strings instead of JSON strings, then JavaScript is generally encoded in UTF-16, so there could be some incompatibilities arising from that here.) We also see that JSON has some backslash escapes, which are all supported by Python except one, the escaped forward slash, \/. This is the one part of JSON strings that isn't shared by Python. Finally, in JSON we can use the \uXXXX syntax to escape any character, which is also supported in Python. A: If you want to serialize a string as Python source code in JavaScript, do it like this: const regexSingleEscape = /'|\\|\p{C}|\p{Z}/gu; const regexDoubleEscape = /"|\\|\p{C}|\p{Z}/gu; function asPythonStr(s) { let quote = "'"; if (s.includes("'") && !s.includes('"')) { quote = '"'; } const regex = quote === "'" ? regexSingleEscape : regexDoubleEscape; return ( quote + s.replace(regex, (c: string): string => { switch (c) { case " ": return " "; case "\x07": return "\\a"; case "\b": return "\\b"; case "\f": return "\\f"; case "\n": return "\\n"; case "\r": return "\\r"; case "\t": return "\\t"; case "\v": return "\\v"; case "\\": return "\\\\"; case "'": case '"': return "\\" + c; } const hex = (c.codePointAt(0) as number).toString(16); if (hex.length <= 2) { return "\\x" + hex.padStart(2, "0"); } if (hex.length <= 4) { return "\\u" + hex.padStart(4, "0"); } return "\\U" + hex.padStart(8, "0"); }) + quote ); }
Are all JSON strings also syntactically valid Python string literals?
I am writing a JavaScript function that generates a Python program. Can I use JavaScript's JSON.stringify() on a string and expect a valid Python string every time or are there edge cases that mean I have to write my own toPythonString() function?
[ "The short answer is no.\nThe long answer is in practice, sort of yes. Almost. The difference is that JSON strings can have the forward slash backslash-escaped. So the string \"\\/\" is interpreted by JSON (and JavaScript) as a single forward slash '/' but as a backslash followed by a forward slash by Python '\\\\/'.\nHowever, it's not mandatory to serialize forward slashes this way in JSON. If you have control over the JSON serializer and can be sure that it doesn't backslash escape forward slashes then your JSON string should always be a valid Python string.\nLooking at the source code of the JSON serializer in V8 (Google Chrome's and Node.js's JavaScript engine), it always serializes '/' as \"/\", as you'd expect.\n\nJSON string syntax is documented on json.org and Python string syntax is documented in the Lexical analisys section of its docs.\nWe can see that a regular JSON string character is \"Any codepoint except \", \\ or control characters\" whereas Python string characters are \"Any source character except \\ or newline or the quote\". Since newline is a control character, this means JSON is more restrictive than Python, which is good. Then the question is what is the overlap between \"Any codepoint\" and \"Any source character\"? I don't know how to answer that, but I'd guess they're probably the same, assuming both the JSON and Python are encoded in UTF-8. (If you're interested in JavaScript strings instead of JSON strings, then JavaScript is generally encoded in UTF-16, so there could be some incompatibilities arising from that here.)\nWe also see that JSON has some backslash escapes, which are all supported by Python except one, the escaped forward slash, \\/. This is the one part of JSON strings that isn't shared by Python.\nFinally, in JSON we can use the \\uXXXX syntax to escape any character, which is also supported in Python.\n", "If you want to serialize a string as Python source code in JavaScript, do it like this:\nconst regexSingleEscape = /'|\\\\|\\p{C}|\\p{Z}/gu;\nconst regexDoubleEscape = /\"|\\\\|\\p{C}|\\p{Z}/gu;\n\nfunction asPythonStr(s) {\n let quote = \"'\";\n if (s.includes(\"'\") && !s.includes('\"')) {\n quote = '\"';\n }\n const regex = quote === \"'\" ? regexSingleEscape : regexDoubleEscape;\n return (\n quote +\n s.replace(regex, (c: string): string => {\n switch (c) {\n case \" \":\n return \" \";\n case \"\\x07\":\n return \"\\\\a\";\n case \"\\b\":\n return \"\\\\b\";\n case \"\\f\":\n return \"\\\\f\";\n case \"\\n\":\n return \"\\\\n\";\n case \"\\r\":\n return \"\\\\r\";\n case \"\\t\":\n return \"\\\\t\";\n case \"\\v\":\n return \"\\\\v\";\n case \"\\\\\":\n return \"\\\\\\\\\";\n case \"'\":\n case '\"':\n return \"\\\\\" + c;\n }\n const hex = (c.codePointAt(0) as number).toString(16);\n if (hex.length <= 2) {\n return \"\\\\x\" + hex.padStart(2, \"0\");\n }\n if (hex.length <= 4) {\n return \"\\\\u\" + hex.padStart(4, \"0\");\n }\n return \"\\\\U\" + hex.padStart(8, \"0\");\n }) +\n quote\n );\n}\n\n" ]
[ 2, 0 ]
[]
[]
[ "code_generation", "json", "python", "syntax", "transpiler" ]
stackoverflow_0070499499_code_generation_json_python_syntax_transpiler.txt
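The forward-slash edge case from the first answer is easy to check directly; a small sketch using only the standard library:

import ast
import json

s = '"\\/"'                  # the four characters " \ / " : a JSON string with an escaped slash
print(json.loads(s))         # /   (JSON collapses \/ to a single forward slash)
print(ast.literal_eval(s))   # \/  (Python keeps the backslash, and newer versions
                             #      warn about the unrecognised escape sequence)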
Q: Launchd 'Invalid Property List' I was hoping someone could point out where I might be going wrong with a launchctl script I'm trying to write and launch. The intention is to run a python script I've managed to get working, everyday at 3.15am. (the computer it's going to run on is on 24/7 but if there is a way to incorporate this into it's everyday running that would be great in the event the computer shuts down or needs to be interrupted.) When I attempt to load the file I get the return message below: jace@Jaces-Mac-Pro ~ % launchctl load /Users/jace/Library/LaunchAgents/com.launchd.phoneconfig.plist /Users/jace/Library/LaunchAgents/com.launchd.phoneconfig.plist: Invalid property list jace@Jaces-Mac-Pro ~ % According to the tutorial I'm following and a few pages online I've reviewed I should have laid everything out correctly but feel I'm missing something. If someone could point out my mistake and offer a solution I'd really appreciate it! Am totally new to this file type and the writing of, but feel it'll be what i need to get the job done. The file I'm trying to run is the below - <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>Label</key> <string>com.launch.phoneconfig</string> <key>ProgramArgument</key> <array> <string>/usr/local/bin/python3</string> <string>/Users/jace/Desktop/Filing_Cabinet/Python_Folder/Selenium_Projects/my_phone_config01.py</string> </array> <key>RunAtLoad</key> <true/> <key>StartCalendarInterval</key> <dict> <key>Hour</key <integer>3</integer> <key>Minute</key> <integer>15</integer> </dict> </dict> </plist>1 P.s I tried putting the file in my mac's launch Daemon folder but can't seem to locate it in the Library, only 'LaunchAgents' folder can be found. Don't know if this effects anything but let me know if there's more information I could provide to get this running. Thank you for your time. I was hoping the code to load and run at the designated time and be returned with the other .plist files when I enter 'launchctl list' into terminal. Update from comment (Thanks for pointing this out @Chepner) Running 'putil -lint' allowed me to see an error in script. Example of result. Before, w/error: jace@Jaces-Mac-Pro LaunchAgents % plutil /Users/jace/Library/LaunchAgents/com.launchd.phoneconfig.plist /Users/jace/Library/LaunchAgents/com.launchd.phoneconfig.plist: Encountered unexpected character < on line 17 while looking for close tag After, once fixed: jace@Jaces-Mac-Pro LaunchAgents % plutil -lint /Users/jace/Desktop/Filing_Cabinet/LaunchD_Folder/launch_phoneconfig/com.launchd.phoneconfig.plist /Users/jace/Desktop/Filing_Cabinet/LaunchD_Folder/launch_phoneconfig/com.launchd.phoneconfig.plist: OK Tried running code again but results are as shown - jace@Jaces-Mac-Pro LaunchAgents % launchctl load /Users/jace/Desktop/Filing_Cabinet/LaunchD_Folder/launch_phoneconfig/com.launchd.phoneconfig.plist /Users/jace/Desktop/Filing_Cabinet/LaunchD_Folder/launch_phoneconfig/com.launchd.phoneconfig.plist: Invalid or missing Program/ProgramArguments jace@Jaces-Mac-Pro LaunchAgents % Update from Answer Thank you for the suggestion @Chepner. Adjusting the line to separate the program and program argument seems to have gotten everything to say 'ok'. Technically this answers my question! Only problem now is that it doesn't actually run, I can run the python script it's meant to run just fine but can't seem to get it to run via this method. 
Any suggestions or links I can review to help figure this out? Thank you for your time in helping me! :) Last login: Wed Nov 30 14:44:29 on console jace@Jaces-Mac-Pro ~ % plutil /Users/jace/Library/LaunchAgents/com.launchd.phoneconfig.plist /Users/jace/Library/LaunchAgents/com.launchd.phoneconfig.plist: OK jace@Jaces-Mac-Pro ~ % launchctl list PID Status Label - 0 com.launch.phoneconfig (removed the other items that would be returned to show clearly it loaded) jace@Jaces-Mac-Pro ~ % A: I think the problem is that you are trying to specify an entire command line as program arguments, rather than specifying the program name separately from its arguments. <dict> <key>Label</key> <string>com.launch.phoneconfig</string> <key>Program</key> <string>/usr/local/bin/python3</string> <key>ProgramArgument</key> <array> <string>/Users/jace/Desktop/Filing_Cabinet/Python_Folder/Selenium_Projects/my_phone_config01.py</string> </array> <key>RunAtLoad</key> <true/> <key>StartCalendarInterval</key> <dict> <key>Hour</key <integer>3</integer> <key>Minute</key> <integer>15</integer> </dict> </dict> If this is the case, the runtime error could be more specific (Program is missing, so it could just say so instead of implying that there is a problem with ProgramArguments as well).
Launchd 'Invalid Property List'
I was hoping someone could point out where I might be going wrong with a launchctl script I'm trying to write and launch. The intention is to run a python script I've managed to get working, everyday at 3.15am. (the computer it's going to run on is on 24/7 but if there is a way to incorporate this into it's everyday running that would be great in the event the computer shuts down or needs to be interrupted.) When I attempt to load the file I get the return message below: jace@Jaces-Mac-Pro ~ % launchctl load /Users/jace/Library/LaunchAgents/com.launchd.phoneconfig.plist /Users/jace/Library/LaunchAgents/com.launchd.phoneconfig.plist: Invalid property list jace@Jaces-Mac-Pro ~ % According to the tutorial I'm following and a few pages online I've reviewed I should have laid everything out correctly but feel I'm missing something. If someone could point out my mistake and offer a solution I'd really appreciate it! Am totally new to this file type and the writing of, but feel it'll be what i need to get the job done. The file I'm trying to run is the below - <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>Label</key> <string>com.launch.phoneconfig</string> <key>ProgramArgument</key> <array> <string>/usr/local/bin/python3</string> <string>/Users/jace/Desktop/Filing_Cabinet/Python_Folder/Selenium_Projects/my_phone_config01.py</string> </array> <key>RunAtLoad</key> <true/> <key>StartCalendarInterval</key> <dict> <key>Hour</key <integer>3</integer> <key>Minute</key> <integer>15</integer> </dict> </dict> </plist>1 P.s I tried putting the file in my mac's launch Daemon folder but can't seem to locate it in the Library, only 'LaunchAgents' folder can be found. Don't know if this effects anything but let me know if there's more information I could provide to get this running. Thank you for your time. I was hoping the code to load and run at the designated time and be returned with the other .plist files when I enter 'launchctl list' into terminal. Update from comment (Thanks for pointing this out @Chepner) Running 'putil -lint' allowed me to see an error in script. Example of result. Before, w/error: jace@Jaces-Mac-Pro LaunchAgents % plutil /Users/jace/Library/LaunchAgents/com.launchd.phoneconfig.plist /Users/jace/Library/LaunchAgents/com.launchd.phoneconfig.plist: Encountered unexpected character < on line 17 while looking for close tag After, once fixed: jace@Jaces-Mac-Pro LaunchAgents % plutil -lint /Users/jace/Desktop/Filing_Cabinet/LaunchD_Folder/launch_phoneconfig/com.launchd.phoneconfig.plist /Users/jace/Desktop/Filing_Cabinet/LaunchD_Folder/launch_phoneconfig/com.launchd.phoneconfig.plist: OK Tried running code again but results are as shown - jace@Jaces-Mac-Pro LaunchAgents % launchctl load /Users/jace/Desktop/Filing_Cabinet/LaunchD_Folder/launch_phoneconfig/com.launchd.phoneconfig.plist /Users/jace/Desktop/Filing_Cabinet/LaunchD_Folder/launch_phoneconfig/com.launchd.phoneconfig.plist: Invalid or missing Program/ProgramArguments jace@Jaces-Mac-Pro LaunchAgents % Update from Answer Thank you for the suggestion @Chepner. Adjusting the line to separate the program and program argument seems to have gotten everything to say 'ok'. Technically this answers my question! Only problem now is that it doesn't actually run, I can run the python script it's meant to run just fine but can't seem to get it to run via this method. 
Any suggestions or links I can review to help figure this out? Thank you for your time in helping me! :) Last login: Wed Nov 30 14:44:29 on console jace@Jaces-Mac-Pro ~ % plutil /Users/jace/Library/LaunchAgents/com.launchd.phoneconfig.plist /Users/jace/Library/LaunchAgents/com.launchd.phoneconfig.plist: OK jace@Jaces-Mac-Pro ~ % launchctl list PID Status Label - 0 com.launch.phoneconfig (removed the other items that would be returned to show clearly it loaded) jace@Jaces-Mac-Pro ~ %
[ "I think the problem is that you are trying to specify an entire command line as program arguments, rather than specifying the program name separately from its arguments.\n<dict>\n <key>Label</key>\n <string>com.launch.phoneconfig</string>\n <key>Program</key>\n <string>/usr/local/bin/python3</string>\n <key>ProgramArgument</key>\n <array>\n <string>/Users/jace/Desktop/Filing_Cabinet/Python_Folder/Selenium_Projects/my_phone_config01.py</string>\n </array>\n <key>RunAtLoad</key>\n <true/>\n <key>StartCalendarInterval</key>\n <dict>\n <key>Hour</key\n <integer>3</integer>\n <key>Minute</key>\n <integer>15</integer>\n </dict>\n</dict>\n\nIf this is the case, the runtime error could be more specific (Program is missing, so it could just say so instead of implying that there is a problem with ProgramArguments as well).\n" ]
[ 1 ]
[]
[]
[ "daemon", "launchctl", "launchd", "plist", "python" ]
stackoverflow_0074602252_daemon_launchctl_launchd_plist_python.txt
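A few hedged guesses on the remaining "loads but never runs" issue, beyond what the thread covers: the launchd key is ProgramArguments (plural); the ProgramArgument key used in both snippets above is simply ignored once Program is present, so launchd ends up starting /usr/local/bin/python3 with no script path (which would exit immediately with status 0, matching the launchctl list output). The answer's plist also still carries the original missing > in <key>Hour</key. A corrected fragment might look like this (the StandardOutPath/StandardErrorPath paths are placeholders, added only so the script's output and tracebacks land somewhere visible):

<key>Program</key>
<string>/usr/local/bin/python3</string>
<key>ProgramArguments</key>
<array>
    <string>/usr/local/bin/python3</string>
    <string>/Users/jace/Desktop/Filing_Cabinet/Python_Folder/Selenium_Projects/my_phone_config01.py</string>
</array>
<key>StandardOutPath</key>
<string>/tmp/phoneconfig.out</string>
<key>StandardErrorPath</key>
<string>/tmp/phoneconfig.err</string>
<key>StartCalendarInterval</key>
<dict>
    <key>Hour</key>
    <integer>3</integer>
    <key>Minute</key>
    <integer>15</integer>
</dict>

After launchctl unload/load, the job can be kicked off immediately with launchctl start com.launch.phoneconfig rather than waiting for 3:15 am. Note also that a user LaunchAgent only runs while that user is logged in; for a truly unattended 3:15 am run, the plist would belong in /Library/LaunchDaemons with root-appropriate paths.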
Q: Callable modules Why doesn't Python allow modules to have a __call__ method? (Beyond the obvious that it wouldn't be easy to import directly.) Specifically, why doesn't using a(b) syntax find the __call__ attribute like it does for functions, classes, and objects? (Is lookup just incompatibly different for modules?) >>> print(open("mod_call.py").read()) def __call__(): return 42 >>> import mod_call >>> mod_call() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'module' object is not callable >>> mod_call.__call__() 42 A: Python doesn't allow modules to override or add any magic method, because keeping module objects simple, regular and lightweight is just too advantageous considering how rarely strong use cases appear where you could use magic methods there. When such use cases do appear, the solution is to make a class instance masquerade as a module. Specifically, code your mod_call.py as follows: import sys class mod_call: def __call__(self): return 42 sys.modules[__name__] = mod_call() Now your code importing and calling mod_call works fine. A: Special methods are only guaranteed to be called implicitly when they are defined on the type, not on the instance. (__call__ is an attribute of the module instance mod_call, not of <type 'module'>.) You can't add methods to built-in types. https://docs.python.org/reference/datamodel.html#special-lookup A: As Miles says, you need to define the call on class level. So an alternative to Alex post is to change the class of sys.modules[__name__] to a subclass of the type of sys.modules[__name__] (It should be types.ModuleType). This has the advantage that the module is callable while keeping all other properties of the module (like accessing functions, variables, ...). import sys class MyModule(sys.modules[__name__].__class__): def __call__(self): # module callable return 42 sys.modules[__name__].__class__ = MyModule Note: Tested with python3.6. A: Christoph Böddeker's answer seems to be the best way to create a callable module, but as a comment says, it only works in Python 3.5 and up. The benefit is that you can write your module like normal, and just add the class reassignment at the very end, i.e. # coolmodule.py import stuff var = 33 class MyClass: ... def function(x, y): ... class CoolModule(types.ModuleType): def __call__(self): return 42 sys.modules[__name__].__class__ = CoolModule and everything works, including all expected module attributes like __file__ being defined. (This is because you're actually not changing the module object resulting from the import at all, just "casting" it to a subclass with a __call__ method, which is exactly what we want.) To get this to work similarly in Python versions below 3.5, you can adapt Alex Martelli's answer to make your new class a subclass of ModuleType, and copy all the module's attributes into your new module instance: #(all your module stuff here) class CoolModule(types.ModuleType): def __init__(self): types.ModuleType.__init__(self, __name__) # or super().__init__(__name__) for Python 3 self.__dict__.update(sys.modules[__name__].__dict__) def __call__(self): return 42 sys.modules[__name__] = CoolModule() Now __file__, __name__ and other module attributes are defined (which aren't present if just following Alex's answer), and your imported module object still "is a" module. A: All answers work only for import mod_call. 
To get it working simultaneously for from mod_call import *, the solution of @Alex Martelli can be enhanced as follow import sys class mod_call: def __call__(self): return 42 mod_call = __call__ __all__ = list(set(vars().keys()) - {'__qualname__'}) # for python 2 and 3 sys.modules[__name__] = mod_call() This solution was derived with the discussion of an answer of a similar problem. A: To turn the solution into a convenient reusable function: def set_module(cls, __name__): import sys class cls2(sys.modules[__name__].__class__, cls): pass sys.modules[__name__].__class__ = cls2 save it to, say, util.py. Then in your module, import util class MyModule: def __call__(self): # module callable return 42 util.set_module(MyModule, __name__) Hurray! I wrote this because I need to enhance a lot of modules with this trick. P.S. Few days after I wrote this answer, I removed this trick from my code, since it is so tricky for tools like Pylint to understand. A: Using Python version 3.10.8, formatting my code as below allowed me to: Make my module callable (thanks Alex) Keep all properties of the module accessible (e.g. methods) (thanks Nick) Allow the module to work when used as from CallableModule import x, y, z, although not as from CallableModule import * (I tried to employ Friedrich's solution) from types import ModuleType class CallableModule(ModuleType): def __init__(self): ModuleType.__init__(self, __name__) self.__dict__.update(modules[__name__].__dict__) def __call__(self): print("You just called a module UwU") mod_call= __call__ __all__= list(set(vars().keys()) - {'__qualname__'}) modules[__name__]= CallableModule()
Callable modules
Why doesn't Python allow modules to have a __call__ method? (Beyond the obvious that it wouldn't be easy to import directly.) Specifically, why doesn't using a(b) syntax find the __call__ attribute like it does for functions, classes, and objects? (Is lookup just incompatibly different for modules?) >>> print(open("mod_call.py").read()) def __call__(): return 42 >>> import mod_call >>> mod_call() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'module' object is not callable >>> mod_call.__call__() 42
[ "Python doesn't allow modules to override or add any magic method, because keeping module objects simple, regular and lightweight is just too advantageous considering how rarely strong use cases appear where you could use magic methods there.\nWhen such use cases do appear, the solution is to make a class instance masquerade as a module. Specifically, code your mod_call.py as follows:\nimport sys\n\nclass mod_call:\n def __call__(self):\n return 42\n\nsys.modules[__name__] = mod_call()\n\nNow your code importing and calling mod_call works fine.\n", "Special methods are only guaranteed to be called implicitly when they are defined on the type, not on the instance. (__call__ is an attribute of the module instance mod_call, not of <type 'module'>.) You can't add methods to built-in types.\nhttps://docs.python.org/reference/datamodel.html#special-lookup\n", "As Miles says, you need to define the call on class level.\nSo an alternative to Alex post is to change the class of sys.modules[__name__] to a subclass of the type of sys.modules[__name__] (It should be types.ModuleType).\nThis has the advantage that the module is callable while keeping all other properties of the module (like accessing functions, variables, ...).\nimport sys\n\nclass MyModule(sys.modules[__name__].__class__):\n def __call__(self): # module callable\n return 42\n\nsys.modules[__name__].__class__ = MyModule\n\nNote: Tested with python3.6.\n", "Christoph Böddeker's answer seems to be the best way to create a callable module, but as a comment says, it only works in Python 3.5 and up.\nThe benefit is that you can write your module like normal, and just add the class reassignment at the very end, i.e.\n# coolmodule.py\nimport stuff\n\nvar = 33\nclass MyClass:\n ...\ndef function(x, y):\n ...\n\nclass CoolModule(types.ModuleType):\n def __call__(self):\n return 42\nsys.modules[__name__].__class__ = CoolModule\n\nand everything works, including all expected module attributes like __file__ being defined. (This is because you're actually not changing the module object resulting from the import at all, just \"casting\" it to a subclass with a __call__ method, which is exactly what we want.)\nTo get this to work similarly in Python versions below 3.5, you can adapt Alex Martelli's answer to make your new class a subclass of ModuleType, and copy all the module's attributes into your new module instance:\n#(all your module stuff here)\n\nclass CoolModule(types.ModuleType):\n def __init__(self):\n types.ModuleType.__init__(self, __name__)\n # or super().__init__(__name__) for Python 3\n self.__dict__.update(sys.modules[__name__].__dict__)\n def __call__(self):\n return 42\n\nsys.modules[__name__] = CoolModule()\n\nNow __file__, __name__ and other module attributes are defined (which aren't present if just following Alex's answer), and your imported module object still \"is a\" module.\n", "All answers work only for import mod_call. 
To get it working simultaneously for from mod_call import *, the solution of @Alex Martelli can be enhanced as follow\nimport sys\n\nclass mod_call:\n def __call__(self):\n return 42\n mod_call = __call__\n __all__ = list(set(vars().keys()) - {'__qualname__'}) # for python 2 and 3\n\nsys.modules[__name__] = mod_call()\n\nThis solution was derived with the discussion of an answer of a similar problem.\n", "To turn the solution into a convenient reusable function:\ndef set_module(cls, __name__):\n import sys\n\n class cls2(sys.modules[__name__].__class__, cls):\n pass\n\n sys.modules[__name__].__class__ = cls2\n\nsave it to, say, util.py. Then in your module,\nimport util\n\nclass MyModule:\n def __call__(self): # module callable\n return 42\n\nutil.set_module(MyModule, __name__)\n\nHurray!\nI wrote this because I need to enhance a lot of modules with this trick.\n\nP.S. Few days after I wrote this answer, I removed this trick from my code, since it is so tricky for tools like Pylint to understand.\n", "Using Python version 3.10.8, formatting my code as below allowed me to:\n\nMake my module callable (thanks Alex)\nKeep all properties of the module accessible (e.g. methods) (thanks Nick)\nAllow the module to work when used as from CallableModule import x, y, z, although not as from CallableModule import * (I tried to employ Friedrich's solution)\n\n from types import ModuleType\n \n class CallableModule(ModuleType):\n def __init__(self):\n ModuleType.__init__(self, __name__)\n self.__dict__.update(modules[__name__].__dict__)\n \n def __call__(self):\n print(\"You just called a module UwU\")\n \n mod_call= __call__\n __all__= list(set(vars().keys()) - {'__qualname__'})\n modules[__name__]= CallableModule()\n\n" ]
[ 110, 46, 22, 10, 5, 2, 0 ]
[]
[]
[ "module", "python", "python_import" ]
stackoverflow_0001060796_module_python_python_import.txt
Q: Running Jupyter Notebook in GCP on a Schedule What is the best way to migrate a jupyter notebook in to Google Cloud Platform? Requirements I don't want to do a lot of changes to the notebook to get it to run I want it to be scheduleable, preferably through the UI I want it to be able to run a ipynb file, not a py file In AWS it seems like sagemaker is the no brainer solution for this. I want the tool in GCP that gets as close to the specific task without a lot of extras I've tried the following, Cloud Function: it seems like it's best for running python scripts, not a notebook, requires you to run a main.py file by default Dataproc: seems like you can add a notebook to a running instance but it cannot be scheduled Dataflow: sort of seemed like overkill, like it wasn't the best tool and that it was better suited apache based tools I feel like this question should be easier, I found this article on the subject: How to Deploy and Schedule Jupyter Notebook on Google Cloud Platform He actually doesn't do what the title says, he moves a lot of GCP code in to a main.py to create an instance and he has the instance execute the notebook. Feel free to correct my perspective on any of this A: I use Vertex AI Workbench to run notebooks on GCP. It provides two variants: Managed Notebooks User-managed Notebooks User-managed notebooks creates compute instances at the background and it comes with pre-built packages such as Jupyter Lab, Python, etc and allows customisation. I mainly use for developing Dataflow pipelines. Other requirement of scheduling - Managed Notebooks supports this feature, refer this documentation (I am yet to try Managed Notebooks): Use the executor to run a notebook file as a one-time execution or on a schedule. Choose the specific environment and hardware that you want your execution to run on. Your notebook's code will run on Vertex AI custom training, which can make it easier to do distributed training, optimize hyperparameters, or schedule continuous training jobs. See Run notebook files with the executor. You can use parameters in your execution to make specific changes to each run. For example, you might specify a different dataset to use, change the learning rate on your model, or change the version of the model. You can also set a notebook to run on a recurring schedule. Even while your instance is shut down, Vertex AI Workbench will run your notebook file and save the results for you to look at and share with others.
Running Jupyter Notebook in GCP on a Schedule
What is the best way to migrate a Jupyter notebook into Google Cloud Platform? Requirements: I don't want to make a lot of changes to the notebook to get it to run; I want it to be schedulable, preferably through the UI; I want it to be able to run an ipynb file, not a py file. In AWS it seems like SageMaker is the no-brainer solution for this. I want the tool in GCP that gets as close to the specific task as possible, without a lot of extras. I've tried the following. Cloud Function: it seems like it's best for running Python scripts, not a notebook, and requires you to run a main.py file by default. Dataproc: seems like you can add a notebook to a running instance, but it cannot be scheduled. Dataflow: sort of seemed like overkill, like it wasn't the best tool and was better suited to Apache-based tooling. I feel like this question should be easier. I found this article on the subject: How to Deploy and Schedule Jupyter Notebook on Google Cloud Platform. He doesn't actually do what the title says; he moves a lot of GCP code into a main.py to create an instance and has the instance execute the notebook. Feel free to correct my perspective on any of this.

[ "I use Vertex AI Workbench to run notebooks on GCP. It provides two variants:\n\nManaged Notebooks\nUser-managed Notebooks\n\nUser-managed notebooks creates compute instances at the background and it comes with pre-built packages such as Jupyter Lab, Python, etc and allows customisation. I mainly use for developing Dataflow pipelines.\nOther requirement of scheduling - Managed Notebooks supports this feature, refer this documentation (I am yet to try Managed Notebooks):\n\nUse the executor to run a notebook file as a one-time execution or on\na schedule. Choose the specific environment and hardware that you want\nyour execution to run on. Your notebook's code will run on Vertex AI\ncustom training, which can make it easier to do distributed training,\noptimize hyperparameters, or schedule continuous training jobs. See\nRun notebook files with the executor.\nYou can use parameters in your execution to make specific changes to\neach run. For example, you might specify a different dataset to use,\nchange the learning rate on your model, or change the version of the\nmodel.\nYou can also set a notebook to run on a recurring schedule. Even while\nyour instance is shut down, Vertex AI Workbench will run your notebook\nfile and save the results for you to look at and share with others.\n\n" ]
[ 1 ]
[]
[]
[ "google_cloud_platform", "jupyter", "jupyter_notebook", "python" ]
stackoverflow_0074603301_google_cloud_platform_jupyter_jupyter_notebook_python.txt
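As a hedged illustration beyond the answer above (papermill is not mentioned in the thread): if the Workbench executor ends up being driven programmatically rather than through the UI, or if a plain always-on VM turns out to be the pragmatic choice, papermill is the usual way to execute an .ipynb end to end from Python without converting it to a .py file. File names and the parameter below are placeholders; it assumes papermill is installed (pip install papermill) and that the notebook has a cell tagged "parameters" if parameters are passed.

import papermill as pm

pm.execute_notebook(
    "analysis.ipynb",         # input notebook (placeholder path)
    "analysis_output.ipynb",  # executed copy, with cell outputs saved
    parameters={"run_date": "2022-11-28"},  # optional; requires a "parameters"-tagged cell
)

Scheduling the call itself (cron on the VM, Cloud Scheduler triggering a small job, or the executor's recurring schedule) is a separate choice; the answer above covers the UI-based route.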
Q: [Azure SDK Python]How to check if a subnet has available IPs? is there a way in Azure SDK Python to check if a subnet has still available IPs? We need this info because we dinamically deploy VMs in different subnets and we have to know if there is still network availabiliy before provisioning in that subnet. I tried to search on SO and on Azure docs but with no success. I've just found commands in Azure CLI/Rest APIs but I don't know if there is something equivalent also in python. If no, how could I proceed with REST APIs? Thank you a lot for your support. A: I do not think Azure Python SDK offers such functionality out of the box; you can create a feedback item for this on the Azure Python SDK repository. If it helps Azure Python SDK does offer a check_ip_address_availability module but it only validates if a given single private IP address is available for use. You can use this code as reference. Other option here will be to use the REST API mentioned by you above to get the required details.
[Azure SDK Python] How to check if a subnet has available IPs?
Is there a way in Azure SDK Python to check if a subnet still has available IPs? We need this info because we dynamically deploy VMs in different subnets and we have to know if there is still network availability before provisioning in that subnet. I tried to search on SO and in the Azure docs but with no success. I've only found commands in the Azure CLI/REST APIs, but I don't know if there is something equivalent in Python. If not, how could I proceed with the REST APIs? Thank you a lot for your support.
[ "I do not think Azure Python SDK offers such functionality out of the box; you can create a feedback item for this on the Azure Python SDK repository.\nIf it helps Azure Python SDK does offer a check_ip_address_availability module but it only validates if a given single private IP address is available for use. You can use this code as reference.\nOther option here will be to use the REST API mentioned by you above to get the required details.\n" ]
[ 1 ]
[]
[]
[ "azure", "azure_sdk_python", "azure_virtual_machine", "azure_virtual_network", "python" ]
stackoverflow_0074573188_azure_azure_sdk_python_azure_virtual_machine_azure_virtual_network_python.txt
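A hedged sketch, not from the thread, with the assumptions flagged inline: one way to approximate "how many IPs are left" without the REST API is to read the subnet with azure-mgmt-network and compare its prefix size against the IP configurations already attached to it. Resource names below are placeholders; the arithmetic assumes the subnet uses a single address_prefix and subtracts the five addresses Azure reserves per subnet.

import ipaddress
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# placeholders: subscription id, resource group, vnet and subnet names
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
subnet = client.subnets.get("my-rg", "my-vnet", "my-subnet")

usable = ipaddress.ip_network(subnet.address_prefix).num_addresses - 5  # Azure reserves 5 per subnet
used = len(subnet.ip_configurations or [])  # NICs/ILB configs currently holding an address
print(f"approximately {usable - used} addresses still free in my-subnet")

This is only an estimate (delegated services can consume addresses differently), so the check_ip_address_availability call or the REST API mentioned in the answer may still be the safer route for a go/no-go decision.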
Q: Extract the extra fields in logging call in log formatter so I can add additional fields to my log message like so logging.info("My log Message", extra={"someContext":1, "someOtherContext":2}) which is nice but unclear how to extract all the extra fields in my log formatter def format(self, record): record_dict = record.__dict__.copy() print(record_dict) in the above i can see all my extra fields in the output dict but they are flattened in to a dict with load of other junk i dont want {'name': 'root', 'msg': 'My log Message', 'args': (), 'levelname': 'INFO', 'levelno': 20, 'pathname': '.\\handler.py', 'filename': 'handler.py', 'module': 'handler', 'exc_info': None, 'exc_text': None, 'stack_info': None, 'lineno': 27, 'funcName': 'getPlan', 'created': 1575461352.0664868, 'msecs': 66.48683547973633, 'relativeCreated': 1253.0038356781006, 'thread': 15096, 'threadName': 'MainThread', 'processName': 'MainProcess', 'process': 23740, 'someContext': 1, 'someOtherContext':2} is there any way of getting all my extra keys without having to know them all upfront, i am writing a json formatter and want to create a dict a la justMyExtra = ????? to_log = { "message" record_dict["message"], **justMyExtra } A: Two approaches come to mind: Dump all of the extra fields into a dictionary within your main dictionary. Call the key "additionalContext" and get all the extra entries. Create a copy of the original dictionary and delete all of your known keys: 'name','msg','args', etc. until you only have justYourExtra A: If you read the source code of the logging.Logger.makeRecord method, which returns a LogRecord object with the given logging information, you'll find that it merges the extra dict with the __dict__ attribute of the returing LogRecord object, so you cannot retrieve the original extra dict afterwards in the formatter. Instead, you can patch the logging.Logger.makeRecord method with a wrapper function that stores the given extra dict as the _extra attribute of the returning LogRecord object: def make_record_with_extra(self, name, level, fn, lno, msg, args, exc_info, func=None, extra=None, sinfo=None): record = original_makeRecord(self, name, level, fn, lno, msg, args, exc_info, func, extra, sinfo) record._extra = extra return record original_makeRecord = logging.Logger.makeRecord logging.Logger.makeRecord = make_record_with_extra so that: class myFormatter(logging.Formatter): def format(self, record): print('Got extra:', record._extra) # or do whatever you want with _extra return super().format(record) logger = logging.getLogger(__name__) handler = logging.StreamHandler() handler.setFormatter(myFormatter('%(name)s - %(levelname)s - %(message)s - %(foo)s')) logger.addHandler(handler) logger.warning('test', extra={'foo': 'bar'}) outputs: Got extra: {'foo': 'bar'} __main__ - WARNING - test - bar Demo: https://repl.it/@blhsing/WorthyTotalLivedistro A: Two approaches similar to those from @james-hendricks, but a little less fragile: Use the existing args dict on the LogRecord instead of coming up with your own dict. Create a dummy LogRecord and inspect its keys to get a non-fragile str list of LogRecord special keys: dummy = logging.LogRecord('dummy', 0, 'dummy', 0, None, None, None, None, None) reserved_keys = dummy.__dict__.keys()
Extract the extra fields in logging call in log formatter
so I can add additional fields to my log message like so logging.info("My log Message", extra={"someContext":1, "someOtherContext":2}) which is nice but unclear how to extract all the extra fields in my log formatter def format(self, record): record_dict = record.__dict__.copy() print(record_dict) in the above i can see all my extra fields in the output dict but they are flattened in to a dict with load of other junk i dont want {'name': 'root', 'msg': 'My log Message', 'args': (), 'levelname': 'INFO', 'levelno': 20, 'pathname': '.\\handler.py', 'filename': 'handler.py', 'module': 'handler', 'exc_info': None, 'exc_text': None, 'stack_info': None, 'lineno': 27, 'funcName': 'getPlan', 'created': 1575461352.0664868, 'msecs': 66.48683547973633, 'relativeCreated': 1253.0038356781006, 'thread': 15096, 'threadName': 'MainThread', 'processName': 'MainProcess', 'process': 23740, 'someContext': 1, 'someOtherContext':2} is there any way of getting all my extra keys without having to know them all upfront, i am writing a json formatter and want to create a dict a la justMyExtra = ????? to_log = { "message" record_dict["message"], **justMyExtra }
[ "Two approaches come to mind:\n\nDump all of the extra fields into a dictionary within your main dictionary. Call the key \"additionalContext\" and get all the extra entries.\nCreate a copy of the original dictionary and delete all of your known keys: 'name','msg','args', etc. until you only have justYourExtra \n\n", "If you read the source code of the logging.Logger.makeRecord method, which returns a LogRecord object with the given logging information, you'll find that it merges the extra dict with the __dict__ attribute of the returing LogRecord object, so you cannot retrieve the original extra dict afterwards in the formatter.\nInstead, you can patch the logging.Logger.makeRecord method with a wrapper function that stores the given extra dict as the _extra attribute of the returning LogRecord object:\ndef make_record_with_extra(self, name, level, fn, lno, msg, args, exc_info, func=None, extra=None, sinfo=None):\n record = original_makeRecord(self, name, level, fn, lno, msg, args, exc_info, func, extra, sinfo)\n record._extra = extra\n return record\n\noriginal_makeRecord = logging.Logger.makeRecord\nlogging.Logger.makeRecord = make_record_with_extra\n\nso that:\nclass myFormatter(logging.Formatter):\n def format(self, record):\n print('Got extra:', record._extra) # or do whatever you want with _extra\n return super().format(record)\n\nlogger = logging.getLogger(__name__)\nhandler = logging.StreamHandler()\nhandler.setFormatter(myFormatter('%(name)s - %(levelname)s - %(message)s - %(foo)s'))\nlogger.addHandler(handler)\nlogger.warning('test', extra={'foo': 'bar'})\n\noutputs:\nGot extra: {'foo': 'bar'}\n__main__ - WARNING - test - bar\n\nDemo: https://repl.it/@blhsing/WorthyTotalLivedistro\n", "Two approaches similar to those from @james-hendricks, but a little less fragile:\n\nUse the existing args dict on the LogRecord instead of coming up with your own dict.\nCreate a dummy LogRecord and inspect its keys to get a non-fragile str list of LogRecord special keys:\n\n dummy = logging.LogRecord('dummy', 0, 'dummy', 0, None, None, None, None, None)\n reserved_keys = dummy.__dict__.keys()\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "logging", "python", "python_3.x" ]
stackoverflow_0059176101_logging_python_python_3.x.txt
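Pulling the last two ideas together, a minimal JSON formatter that emits only the caller-supplied extra keys might look like the sketch below (standard library only; the dummy-record trick supplies the reserved attribute names so nothing has to be hard-coded):

import json
import logging

# attributes every LogRecord carries, plus the two added during formatting
_RESERVED = set(logging.LogRecord("dummy", 0, "dummy", 0, None, None, None).__dict__) | {"message", "asctime"}

class JsonFormatter(logging.Formatter):
    def format(self, record):
        just_my_extra = {k: v for k, v in record.__dict__.items() if k not in _RESERVED}
        return json.dumps({"message": record.getMessage(), **just_my_extra})

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.getLogger().addHandler(handler)
logging.getLogger().setLevel(logging.INFO)
logging.info("My log Message", extra={"someContext": 1, "someOtherContext": 2})
# -> {"message": "My log Message", "someContext": 1, "someOtherContext": 2}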
Q: How to solve a Knapsack problem with extra constraints? (Or alternative algorithms) Expanding upon a common dynamic programming solution for the knapsack problem: def knapSack(W, wt, val, n): results = [] K = [[0 for x in range(W + 1)] for x in range(n + 1)] # Build tаble K[][] in bоttоm uр mаnner for i in range(n + 1): for w in range(W + 1): if (i == 0) or (w == 0): K[i][w] = 0 elif wt[i-1] <= w: K[i][w] = max(val[i-1] + K[i-1][w-wt[i-1]], K[i-1][w]) else: K[i][w] = K[i-1][w] print(K[n][W]) return(set(results)) # Driver code val = [60, 100, 120] wt = [10, 20, 30] W = 50 n = len(val) knapSack(W, wt, val, n) = 220 Now perhaps I add extra constraints: # Driver code val_all = [60, 100, 120, 50, 80, 10] wt_all = [10, 20, 30, 15, 20, 5] W_all = 50 n_all = len(val_all) val_1 = [60, 100, 120] wt_1 = [10, 20, 30] W_1 = 30 n_1 = len(val_1) val_2 = [50, 80, 10] wt_2 = [15, 20, 5] W_2 = 25 n_2 = len(val_2) I want to maximise all 3, using the same values. val_all has a solution of 240 [60, 100, 80]. val_1 is maxed at 160 [60, 100] and val_2 would max at 90 [80, 10] but given I want the same values and 10 does not sit in the other two sets the max solution would be 80. I am also wondering if you can add to the function to give you the values chosen as well as the maximum value. And is this approach feasible for large lists as I have a list of 150,000 different values each with different weights. There may be a better algorithm, my problem is I have 150,000 values each with a weight and need to select any number of those values such that we get a close to a ceiling W value. However, the data is actually a mixture of two different types of values and the sum of weights of each type also have a W1 and W2 ceiling value. I'd like to maximise all three equations but using the same set of values. Any value chosen in W1 or W2 must exist in W. This knapsack code won't be very useful as I have 150k values with an average weight of 50 and a weight ceiling of 6mil. The time complexity given such large loops will be huge. A: Based on the top comment I found a package that is very fast and allows for this: from mip import Model, xsum, maximize, BINARY all = pd.read_csv('df_all.csv') X = pd.read_csv('df_x_only.csv') Y = pd.read_csv('df_y_only.csv') p = all.id.values # as we arent optimizing the value of the index, p is irrelevant and we replace xsum(p[i] with xsum(w[i] in the objective function w = all.new_weights.values w1 = X.new_weights.values w2 = Y.new_weights.values c = 5876834 Cx = 4902953 Cy = 719051.4 I = range(len(w)) I1 = range(len(w1)) I2 = range(len(w2)) m = Model("knapsack") x = [m.add_var(var_type=BINARY) for i in I] m.objective = maximize(xsum(w[i] * x[i] for i in I)) + maximize(xsum(w1[i] * x[i] for i in I1)) + maximize(xsum(w2[i] * x[i] for i in I2)) m += xsum(w[i] * x[i] for i in I) <= c*1.05 m += xsum(w[i] * x[i] for i in I) >= c*0.95 m += xsum(w1[i] * x[i] for i in I1) <= Cx*1.05 m += xsum(w1[i] * x[i] for i in I1) >= Cx*0.95 m += xsum(w2[i] * x[i] for i in I2) <= Cy*1.05 m += xsum(w2[i] * x[i] for i in I2) >= Cy*0.95 m.optimize() selected = [i for i in I if x[i].x >= 0.99]
How to solve a Knapsack problem with extra constraints? (Or alternative algorithms)
Expanding upon a common dynamic programming solution for the knapsack problem: def knapSack(W, wt, val, n): results = [] K = [[0 for x in range(W + 1)] for x in range(n + 1)] # Build tаble K[][] in bоttоm uр mаnner for i in range(n + 1): for w in range(W + 1): if (i == 0) or (w == 0): K[i][w] = 0 elif wt[i-1] <= w: K[i][w] = max(val[i-1] + K[i-1][w-wt[i-1]], K[i-1][w]) else: K[i][w] = K[i-1][w] print(K[n][W]) return(set(results)) # Driver code val = [60, 100, 120] wt = [10, 20, 30] W = 50 n = len(val) knapSack(W, wt, val, n) = 220 Now perhaps I add extra constraints: # Driver code val_all = [60, 100, 120, 50, 80, 10] wt_all = [10, 20, 30, 15, 20, 5] W_all = 50 n_all = len(val_all) val_1 = [60, 100, 120] wt_1 = [10, 20, 30] W_1 = 30 n_1 = len(val_1) val_2 = [50, 80, 10] wt_2 = [15, 20, 5] W_2 = 25 n_2 = len(val_2) I want to maximise all 3, using the same values. val_all has a solution of 240 [60, 100, 80]. val_1 is maxed at 160 [60, 100] and val_2 would max at 90 [80, 10] but given I want the same values and 10 does not sit in the other two sets the max solution would be 80. I am also wondering if you can add to the function to give you the values chosen as well as the maximum value. And is this approach feasible for large lists as I have a list of 150,000 different values each with different weights. There may be a better algorithm, my problem is I have 150,000 values each with a weight and need to select any number of those values such that we get a close to a ceiling W value. However, the data is actually a mixture of two different types of values and the sum of weights of each type also have a W1 and W2 ceiling value. I'd like to maximise all three equations but using the same set of values. Any value chosen in W1 or W2 must exist in W. This knapsack code won't be very useful as I have 150k values with an average weight of 50 and a weight ceiling of 6mil. The time complexity given such large loops will be huge.
[ "Based on the top comment I found a package that is very fast and allows for this:\nfrom mip import Model, xsum, maximize, BINARY\n\nall = pd.read_csv('df_all.csv')\nX = pd.read_csv('df_x_only.csv')\nY = pd.read_csv('df_y_only.csv')\n\np = all.id.values # as we arent optimizing the value of the index, p is irrelevant and we replace xsum(p[i] with xsum(w[i] in the objective function\nw = all.new_weights.values\nw1 = X.new_weights.values\nw2 = Y.new_weights.values\n\nc = 5876834\nCx = 4902953\nCy = 719051.4\n\nI = range(len(w))\nI1 = range(len(w1))\nI2 = range(len(w2))\n\nm = Model(\"knapsack\")\n\nx = [m.add_var(var_type=BINARY) for i in I]\n\nm.objective = maximize(xsum(w[i] * x[i] for i in I)) + maximize(xsum(w1[i] * x[i] for i in I1)) + maximize(xsum(w2[i] * x[i] for i in I2))\n\nm += xsum(w[i] * x[i] for i in I) <= c*1.05\nm += xsum(w[i] * x[i] for i in I) >= c*0.95\nm += xsum(w1[i] * x[i] for i in I1) <= Cx*1.05\nm += xsum(w1[i] * x[i] for i in I1) >= Cx*0.95\nm += xsum(w2[i] * x[i] for i in I2) <= Cy*1.05\nm += xsum(w2[i] * x[i] for i in I2) >= Cy*0.95\n\nm.optimize()\n\n\nselected = [i for i in I if x[i].x >= 0.99]\n\n" ]
[ 2 ]
[]
[]
[ "knapsack_problem", "matching", "mathematical_optimization", "optimization", "python" ]
stackoverflow_0074602059_knapsack_problem_matching_mathematical_optimization_optimization_python.txt
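On the side question of recovering which values were chosen: once the K table from the DP above is filled, the picks can be read back by walking from K[n][W] and checking where the value changed. A small sketch follows, using the same variable names as the question's code; it is still O(n·W), so it does not help at the 150k-item / 6M-capacity scale, which is exactly why the MIP formulation above is the practical route.

def knapsack_with_items(W, wt, val, n):
    K = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if wt[i - 1] <= w:
                K[i][w] = max(val[i - 1] + K[i - 1][w - wt[i - 1]], K[i - 1][w])
            else:
                K[i][w] = K[i - 1][w]
    chosen, w = [], W
    for i in range(n, 0, -1):          # backtrack: item i-1 was taken iff the value changed
        if K[i][w] != K[i - 1][w]:
            chosen.append(val[i - 1])
            w -= wt[i - 1]
    return K[n][W], chosen[::-1]

print(knapsack_with_items(50, [10, 20, 30], [60, 100, 120], 3))   # (220, [100, 120])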
Q: Python-check if the input which is a two lists of numbers from the user contains only numbers (all types, including fractions and negative numbers) (in Python 3) I neet to check if the input which is two lists of numbers from the user contains only numbers and than calculate pearson corelation. requierments: -All types of numbers, including fractions and negative numbers. -The user can type extra spaces or no spaces at all, it shouldn't metter. -There must be a comma, between the numbers. -It needs to be simple, no libaries or advanced functions. -If an element in a list from the user is not a number, he need to see the message: "was not able to calculate the correlation" and than the code stops. the clue i got: two ways were offer to solve it: use:starts with,strip,split - also possible to divide by commas. It is possible if you want to break it down before and after the point.,all and list comprehension - requires creativity. 2.Run over the characters of the number (with for) and make sure everything is digit and also allow one dot in the middle. An example of an output: Enter list1: 1,2,3, 5, 8 Enter list2: 0.11, 0.1, 0.13,0.2, 0.18 The correlation between the lists is: 0.83 Thanks I've tried to convert all to float but if it contains something not a number i get an error. and i have to be able to get fragments like 0.18 ect. list1=input("Enter list1:") #Creates a list from string input list1=list1.split(",") #Check if all elements in list1 are numbers check_list1_num = all(ele1.isdigit() for ele1 in list1) #Same process for list2 list2=input("Enter list2:") list2=list2.split(",") check_list2_num = all(ele2.isdigit() for ele2 in list2) if check_list1_num==True and check_list2_num==True: list1 = [ float(i) for i in list1 ] list2 = [ float(k) for k in list2] A: Try and except statements must be used if any unexpected input is given by user try: list1=input("Enter list 1") list2=input("Enter list 2") lst1=list(map(float,[x.strip() for x in list1.split(',')])) lst2=list(map(float,[x.strip() for x in list2.split(',')])) corelation_function(list1,list2) except Exception as e: print("was not able to calculate the correlation") Here we split the input and strip of white spaces from both side and then covert it to float. Any error will raise exception and code will stop.
Python - check if the input, which is two lists of numbers from the user, contains only numbers (all types, including fractions and negative numbers)
(in Python 3) I need to check that the input, which is two lists of numbers from the user, contains only numbers and then calculate the Pearson correlation. Requirements: - All types of numbers, including fractions and negative numbers. - The user can type extra spaces or no spaces at all; it shouldn't matter. - There must be a comma between the numbers. - It needs to be simple, no libraries or advanced functions. - If an element in a list from the user is not a number, they need to see the message "was not able to calculate the correlation" and then the code stops. The clue I got: two ways were offered to solve it: 1. Use startswith, strip, split - it is also possible to divide by commas and break the number down before and after the point, plus all and a list comprehension - requires creativity. 2. Run over the characters of the number (with for) and make sure everything is a digit, and also allow one dot in the middle. An example of an output: Enter list1: 1,2,3, 5, 8 Enter list2: 0.11, 0.1, 0.13,0.2, 0.18 The correlation between the lists is: 0.83 Thanks. I've tried to convert everything to float, but if it contains something that is not a number I get an error, and I have to be able to accept fractions like 0.18 etc. list1=input("Enter list1:") #Creates a list from string input list1=list1.split(",") #Check if all elements in list1 are numbers check_list1_num = all(ele1.isdigit() for ele1 in list1) #Same process for list2 list2=input("Enter list2:") list2=list2.split(",") check_list2_num = all(ele2.isdigit() for ele2 in list2) if check_list1_num==True and check_list2_num==True: list1 = [ float(i) for i in list1 ] list2 = [ float(k) for k in list2]
[ "Try and except statements must be used if any unexpected input is given by the user\ntry:\n    list1=input(\"Enter list 1\")\n    list2=input(\"Enter list 2\")\n    lst1=list(map(float,[x.strip() for x in list1.split(',')]))\n    lst2=list(map(float,[x.strip() for x in list2.split(',')]))\n    corelation_function(lst1,lst2)\nexcept Exception as e:\n    print(\"was not able to calculate the correlation\")\n\nHere we split the input, strip whitespace from both sides and convert each element to float, passing the converted lists (lst1, lst2) to the correlation function.\nAny error will raise an exception and the code will stop.\n" ]
[ 0 ]
[]
[]
[ "input", "python", "type_conversion" ]
stackoverflow_0074603729_input_python_type_conversion.txt
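A sketch of the character-level route hinted at in the question's clue (no libraries; the is_number and pearson helper names are made up here for illustration): allow one optional leading minus and at most one dot, then compute Pearson by hand once both lists validate.

def is_number(s):
    s = s.strip()
    if s.startswith('-'):
        s = s[1:]
    return s.count('.') <= 1 and s.replace('.', '', 1).isdigit()

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)          # assumes neither list is constant

raw1 = input("Enter list1:").split(',')
raw2 = input("Enter list2:").split(',')
if len(raw1) == len(raw2) and all(is_number(t) for t in raw1 + raw2):
    l1 = [float(t) for t in raw1]
    l2 = [float(t) for t in raw2]
    print("The correlation between the lists is:", round(pearson(l1, l2), 2))
else:
    print("was not able to calculate the correlation")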
Q: How to use pandas list as variable in SQL query using python? I am using the Pandas library on Python and was wondering if there is a way to use a list variable (or perhaps series is better?), let's say uID_list, in an SQL query that is also executed within the same Python code. For example: dict = {'a': 1, 'b': 2, 'c':3} uID_series = pd.Series(data=dict, index=['a','b','c']) uID_list = uID_series.toList() and let's assume this uID_list can be changed down the road, so it does not always consist of 1, 2, 3. How can I "plug in" this uID_list variable in the following SQL query? sql = f''' CREATE PROCEDURE SearchForUsers(@uIDinput AS INT(11)) AS BEGIN SELECT username FROM users_table WHERE uID in @uIDinput END EXECUTE SearchForUsers {uID_list} ''' Note that creating a new table in the database is not an option for me, as it is work related and my supervisor is strict on keeping the database clean. What I'm essentially trying to do is see a selection of usernames in users_table table, where the uIDs in users_table match a varying collection of uIDs stored, which is given by uID_list. And uID_list depends on the rest of the python code where it can get changed and manipulated. A: Are you just trying to pull the data? Will be easier outside of a stored procedure l1 = ['ad', 'dfgdf', 'htbgf', 'dtghyt'] l1_str = "('" + "', '".join([str(item) for item in l1]) + "')" sql = f''' SELECT username FROM users_table WHERE uID in {l1_str} ''' A: You can start by converting your list to a tuple then pass it as a parameter of pandas.read_sql_query. Assuming cnx is the name of your connection, try this : uID_tuple = tuple(uID_list) sql = ''' CREATE PROCEDURE SearchForUsers(@uIDinput AS INT(11)) AS BEGIN SELECT username FROM users_table WHERE uID in @uIDinput END EXECUTE SearchForUsers ? ''' pd.read_sql_query(sql, cnx, params=[uID_tuple])
How to use pandas list as variable in SQL query using python?
I am using the Pandas library in Python and was wondering if there is a way to use a list variable (or perhaps a Series is better?), let's say uID_list, in an SQL query that is also executed within the same Python code. For example: dict = {'a': 1, 'b': 2, 'c':3} uID_series = pd.Series(data=dict, index=['a','b','c']) uID_list = uID_series.tolist() and let's assume this uID_list can be changed down the road, so it does not always consist of 1, 2, 3. How can I "plug in" this uID_list variable in the following SQL query? sql = f''' CREATE PROCEDURE SearchForUsers(@uIDinput AS INT(11)) AS BEGIN SELECT username FROM users_table WHERE uID in @uIDinput END EXECUTE SearchForUsers {uID_list} ''' Note that creating a new table in the database is not an option for me, as it is work related and my supervisor is strict on keeping the database clean. What I'm essentially trying to do is see a selection of usernames in the users_table table, where the uIDs in users_table match a varying collection of uIDs, which is given by uID_list. And uID_list depends on the rest of the Python code, where it can get changed and manipulated.
[ "Are you just trying to pull the data? Will be easier outside of a stored procedure\nl1 = ['ad', 'dfgdf', 'htbgf', 'dtghyt']\n\nl1_str = \"('\" + \"', '\".join([str(item) for item in l1]) + \"')\"\n\n\nsql = \nf'''\n SELECT username\n FROM users_table\n WHERE uID in {l1_str}\n'''\n\n\n", "You can start by converting your list to a tuple then pass it as a parameter of pandas.read_sql_query.\nAssuming cnx is the name of your connection, try this :\nuID_tuple = tuple(uID_list)\n\nsql = \n'''\nCREATE PROCEDURE SearchForUsers(@uIDinput AS INT(11))\nAS\nBEGIN\n\n SELECT username\n FROM users_table\n WHERE uID in @uIDinput\n\nEND\n\nEXECUTE SearchForUsers ?\n'''\n\npd.read_sql_query(sql, cnx, params=[uID_tuple])\n\n" ]
[ 0, 0 ]
[]
[]
[ "pandas", "python", "sql" ]
stackoverflow_0074603789_pandas_python_sql.txt
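A middle ground between the two answers above is to keep the query parameterised but build one placeholder per element, so the list can change length freely. A sketch follows; the placeholder token (%s here) depends on the DB driver (use ? for pyodbc/sqlite3), and cnx is assumed to be an existing connection.

import pandas as pd

uID_list = [1, 2, 3]                                   # produced elsewhere in the code, length may vary
placeholders = ", ".join(["%s"] * len(uID_list))
sql = f"SELECT username FROM users_table WHERE uID IN ({placeholders})"

df = pd.read_sql_query(sql, cnx, params=uID_list)      # only the placeholders are formatted in; the values stay parameterised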
Q: Need update value passed by render_template with flask in html template td tag every 10 seconds I have this: @views.route('/') def home(): while True: try: token=getToken() if(token!='null' or token!=''): plazas=getInfo(token,id) except: print('Conection failed') time.sleep(secs) return render_template("home.html", plazas=plazas) Need update "plazas" variable value which is constantly refreshing with "while True" loop in my html template on td tag: {% for parking in parkings %} <tr> <td class="par"><img src={{parking.image}} alt="img"></td> <td class="nombre">{{parking.nombre}}</td> {% if plazas|int >= (totalplazas*30)/100 %} <td class="num" style="color:#39FF00"> {{plazas}}</td> {% elif plazas|int < 1%} <td class="num" style="color:red"><p class="an">COMPLETO</p></td> {% elif plazas|int <= (totalplazas*10)/100%} <td class="num" style="color:red"> {{plazas}}</td> {% else %} <td class="num" style="color:yellow"> {{plazas}}</td> {% endif %} <td class="dir"><img src={{parking.direccion}} alt="img"></td> </tr> {% endfor %} I have tried to use javascript, but when the 10 seconds pass it tells me that the result of {{plazas}} is undefined. Any help? <script type="text/javascript"> window.onload = setInterval(refresh, 10000); function refresh(places) { var elements = document.getElementsByClassName("num"); for(let i = 0; i < elements.length; i++) { elements[i].innerHTML = places; } return elements[i].innerHTML = places; } </script> A: To refresh it with new text, you can fetch to your own flask route and update the information using setInterval HTML <td id="num" style="color:#39FF00">{{plazas}}</td> <script> var element = document.findElementById("num") async function reload() { const promise = await fetch('/myroute') const data = await promise.text() element.innerHTML = data } window.onload = setInterval(reload, 1000) </script> Flask @app.route('/myroute') def myroute(): # time is just an example return time.time()
Need to update a value passed by render_template with Flask in an HTML template td tag every 10 seconds
I have this: @views.route('/') def home(): while True: try: token=getToken() if(token!='null' or token!=''): plazas=getInfo(token,id) except: print('Conection failed') time.sleep(secs) return render_template("home.html", plazas=plazas) Need update "plazas" variable value which is constantly refreshing with "while True" loop in my html template on td tag: {% for parking in parkings %} <tr> <td class="par"><img src={{parking.image}} alt="img"></td> <td class="nombre">{{parking.nombre}}</td> {% if plazas|int >= (totalplazas*30)/100 %} <td class="num" style="color:#39FF00"> {{plazas}}</td> {% elif plazas|int < 1%} <td class="num" style="color:red"><p class="an">COMPLETO</p></td> {% elif plazas|int <= (totalplazas*10)/100%} <td class="num" style="color:red"> {{plazas}}</td> {% else %} <td class="num" style="color:yellow"> {{plazas}}</td> {% endif %} <td class="dir"><img src={{parking.direccion}} alt="img"></td> </tr> {% endfor %} I have tried to use javascript, but when the 10 seconds pass it tells me that the result of {{plazas}} is undefined. Any help? <script type="text/javascript"> window.onload = setInterval(refresh, 10000); function refresh(places) { var elements = document.getElementsByClassName("num"); for(let i = 0; i < elements.length; i++) { elements[i].innerHTML = places; } return elements[i].innerHTML = places; } </script>
[ "To refresh it with new text, you can fetch from your own Flask route and update the information using setInterval\nHTML\n<td id=\"num\" style=\"color:#39FF00\">{{plazas}}</td>\n<script>\nvar element = document.getElementById(\"num\")\nasync function reload() {\n    const promise = await fetch('/myroute')\n    const data = await promise.text()\n    element.innerHTML = data\n}\nwindow.onload = setInterval(reload, 1000)\n</script>\n\nFlask\n@app.route('/myroute')\ndef myroute():\n    # time is just an example; Flask views must return a str/Response, not a float\n    return str(time.time())\n\n" ]
[ 2 ]
[]
[]
[ "flask", "jinja2", "python" ]
stackoverflow_0074600858_flask_jinja2_python.txt
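Building on the answer, a hedged sketch that returns the live plazas value instead of the time.time() stand-in, reusing the question's getToken/getInfo helpers and its views blueprint:

from flask import jsonify

@views.route('/plazas')
def plazas_live():
    try:
        token = getToken()
        plazas = getInfo(token, id)        # same helpers/variables as in the question
    except Exception:
        return jsonify(error="connection failed"), 503
    return jsonify(plazas=plazas)

On the JavaScript side, reload() stays as in the answer except it parses JSON, e.g. const data = (await promise.json()).plazas, and writes data into every .num cell rather than a single element. Note that the Jinja {% if %} colour thresholds run only once at render time, so any colour logic would also need to be re-applied in JavaScript after each poll.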
Q: How can adjust the size of doughnut chart using python's pptx module I want to have multiple doughnut charts (max 3) using python's pptx module. As of now I'm able to add only one chart, how can I reduce the size of the doughnuts accordingly so that I can adjust 2 or 3 charts in one slide. Also, how can I auto adjust the size of doughnuts depending on number of graphs with different set of values. python code: from pptx import Presentation from pptx.util import Pt, Cm, Inches from pptx.chart.data import CategoryChartData, ChartData from pptx.enum.chart import XL_CHART_TYPE def create_ppt(): pr = Presentation() slide1_register = pr.slide_layouts[0] #add initial slide slide1 = pr.slides.add_slide(slide1_register) #top placeholder title1 = slide1.shapes.title #subtitle sub_title = slide1.placeholders[1] # insert text title1.text = "Data" title_para = title1.text_frame.paragraphs[0] title_para.font.name = "Arial (Headings)" title_para.font.size = Pt(34) title1.top = Cm(1) title1.left = Cm(0) title1.width = Cm(15) title1.height = Cm(3) chart_data = ChartData() chart_data.categories = ['X1', 'X2', 'X3', 'X4'] chart_data.add_series('Data 1', (75, 10, 5, 4)) x, y, cx, cy = Inches(2), Inches(2), Inches(6), Inches(4.5) chart = slide1.shapes.add_chart( XL_CHART_TYPE.DOUGHNUT, x, y, cx, cy, chart_data ).chart chart.has_legend = True chart.legend.include_in_layout = False apply_data_labels(chart) pr.save("test222222222.pptx") def apply_data_labels(chart): plot = chart.plots[0] plot.has_data_labels = True for series in plot.series: values = series.values counter = 0 for point in series.points: data_label = point.data_label data_label.has_text_frame = True data_label.text_frame.text = str(values[counter]) counter = counter + 1 create_ppt() current slide: expected output: A: As you insert the chart, you can specify its position and size, as you already do: x, y, cx, cy = Inches(2), Inches(2), Inches(6), Inches(4.5) slide1.shapes.add_chart(XL_CHART_TYPE.DOUGHNUT, x, y, cx, cy, chart_data) In the code above x and y are the coordinates of the top left corner of the chart, while cx and cy are the width and height, respectively. You can repeat the add_chart call as many times as many chart you want to insert with proper position and size values. See the documentation for shapes.add_chart function here.
How can I adjust the size of a doughnut chart using python's pptx module
I want to have multiple doughnut charts (max 3) using python's pptx module. As of now I'm able to add only one chart, how can I reduce the size of the doughnuts accordingly so that I can adjust 2 or 3 charts in one slide. Also, how can I auto adjust the size of doughnuts depending on number of graphs with different set of values. python code: from pptx import Presentation from pptx.util import Pt, Cm, Inches from pptx.chart.data import CategoryChartData, ChartData from pptx.enum.chart import XL_CHART_TYPE def create_ppt(): pr = Presentation() slide1_register = pr.slide_layouts[0] #add initial slide slide1 = pr.slides.add_slide(slide1_register) #top placeholder title1 = slide1.shapes.title #subtitle sub_title = slide1.placeholders[1] # insert text title1.text = "Data" title_para = title1.text_frame.paragraphs[0] title_para.font.name = "Arial (Headings)" title_para.font.size = Pt(34) title1.top = Cm(1) title1.left = Cm(0) title1.width = Cm(15) title1.height = Cm(3) chart_data = ChartData() chart_data.categories = ['X1', 'X2', 'X3', 'X4'] chart_data.add_series('Data 1', (75, 10, 5, 4)) x, y, cx, cy = Inches(2), Inches(2), Inches(6), Inches(4.5) chart = slide1.shapes.add_chart( XL_CHART_TYPE.DOUGHNUT, x, y, cx, cy, chart_data ).chart chart.has_legend = True chart.legend.include_in_layout = False apply_data_labels(chart) pr.save("test222222222.pptx") def apply_data_labels(chart): plot = chart.plots[0] plot.has_data_labels = True for series in plot.series: values = series.values counter = 0 for point in series.points: data_label = point.data_label data_label.has_text_frame = True data_label.text_frame.text = str(values[counter]) counter = counter + 1 create_ppt() current slide: expected output:
[ "As you insert the chart, you can specify its position and size, as you already do:\nx, y, cx, cy = Inches(2), Inches(2), Inches(6), Inches(4.5)\nslide1.shapes.add_chart(XL_CHART_TYPE.DOUGHNUT, x, y, cx, cy, chart_data)\n\nIn the code above x and y are the coordinates of the top left corner of the chart, while cx and cy are the width and height, respectively.\nYou can repeat the add_chart call as many times as many chart you want to insert with proper position and size values.\nSee the documentation for shapes.add_chart function here.\n" ]
[ 1 ]
[]
[]
[ "aspose_slides", "powerpoint", "python", "python_3.x", "python_pptx" ]
stackoverflow_0074559304_aspose_slides_powerpoint_python_python_3.x_python_pptx.txt
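Following the answer, a sketch that divides the default 10-inch slide width among however many doughnuts are needed (1-3) and adds them in a loop; it reuses the imports from the question, and the second data set below is dummy data.

def add_doughnuts(slide, datasets, top=Inches(2), height=Inches(4)):
    """datasets: list of (categories, values) tuples, at most 3."""
    n = len(datasets)
    slot = 10 / n                               # inches of horizontal space per chart
    for i, (cats, vals) in enumerate(datasets):
        data = ChartData()
        data.categories = cats
        data.add_series('Data %d' % (i + 1), vals)
        chart = slide.shapes.add_chart(
            XL_CHART_TYPE.DOUGHNUT,
            Inches(0.2 + i * slot), top,        # left edge shifts one slot per chart
            Inches(slot - 0.4), height,         # width shrinks automatically as n grows
            data,
        ).chart
        chart.has_legend = True
        chart.legend.include_in_layout = False

add_doughnuts(slide1, [(['X1', 'X2', 'X3', 'X4'], (75, 10, 5, 4)),
                       (['X1', 'X2', 'X3', 'X4'], (60, 25, 10, 5))])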
Q: Pivot a dataframe keeping all the columns and assigning suffixes and values to each column based on another column I have searched across SO and the internet but the closest I have gotten to my answer is that I may need to implement df.pivot(). However I can't seem to figure out what should I pass in the values and columns parameters in order to achieve the expected result. Initial dataframe: pd.DataFrame( {'Date': ['8-Sep-22', '8-Sep-22', '8-Sep-22', '8-Sep-22', '16-Jun-22', '16-Jun-22', '27-Apr-22', '27-Apr-22'], 'CLASS': ['A', 'B', 'C', 'D', 'A', 'B', 'A', 'B'], 'CCY': ['USD', 'USD', 'USD', 'USD', 'USD', 'USD', 'USD', 'USD'], 'SZE(M)': [202.81, 13.78, 13.39, 9.1, 356.0, 15.45, 405.62, 27.56], 'WAL': ['1.91', '-', '-', '2.25', '1.05', '3.19', '2.28', '2.58'], 'DR': ['AAA', 'AA', 'A', 'BBB', 'AAA', 'BBB', 'AAA', 'AA'], 'TYPE': ['Fixed', 'Fixed', 'Fixed', 'Fixed', 'Fixed', 'Fixed', 'Fixed', 'Fixed'], 'BNCH': ['I-Curve', 'I-Curve', 'I-Curve', 'I-Curve', 'I-Curve', 'I-Curve', 'I-Curve', 'I-Curve'], 'GDNC': ['225-235', '275a', '325a', '450a', '-', '-', '175a', '200a'], 'SPRD': ['215', '290', '330', '450', '275', '-', '170', '200'], 'CPN': ['4.30%', '4.64%', '4.89%', '5.53%', '4.55%', '-', '4.30%', '4.64%'], 'YLD': ['5.65%', '6.40%', '6.80%', '8.00%', '5.65%', '-', '4.34%', '4.69%'], 'PRICE': ['97.6802', '96.5669', '96.21589', '95.18761', '98.96079', '-', '99.9888', '99.97876']} ) Expected final dataframe: pd.DataFrame( {'Date': ['Sep 8, 2022', 'Jun 16, 2022', 'Apr 27, 2022'], 'Sum SZE(M)': [239.08, 371.45, 433.18], 'CLASS-A': ['A', 'A', 'A'], 'CCY-A': ['USD', 'USD', 'USD'], 'SZE(M)-A': [202.81, 356.0, 405.62], 'WAL-A': [1.91, 1.05, 2.28], 'DR-A': ['AAA', 'AAA', 'AAA'], 'PRICE-A': [97.6802, 98.96079, 99.9888], 'CLASS-B': ['B', 'B', 'B'], 'CCY-B': ['USD', 'USD', 'USD'], 'SZE(M)-B': [13.78, 15.45, 27.56], 'WAL-B': ['-', '3.19', '2.58'], 'DR-B': ['AA', 'BBB', 'AA'], 'PRICE-B': ['96.5669', '-', '99.97876'], 'CLASS-C': ['C', nan, nan], 'CCY-C': ['USD', nan, nan], 'SZE(M)-C': [13.39, nan, nan], 'WAL-C': ['-', nan, nan], 'DR-C': ['A', nan, nan], 'PRICE-C': [96.21589, nan, nan], 'CLASS-D': ['D', nan, nan], 'CCY-D': ['USD', nan, nan], 'SZE(M)-D': [9.1, nan, nan], 'WAL-D': [2.25, nan, nan], 'DR-D': ['BBB', nan, nan], 'PRICE-D': [95.18761, nan, nan]}) The suffix will be from class columns (A,B,C,D etc) and this will be for all the columns(not all are shown in the example). Also attaching the images of csvs for clarification: Initial: Final: Any guidance in the right direction is appreciated. Thanks. I tried pivoting the dataframe but I am struggling to find out what to put as columns and values. I am guessing it is either pivot, transpose or melt that is I am looking for, but am a bit confused. A: Here is what you want to do to get the desired output: Pivot the df, then sort the columns by level 1, which is A,B,C,.... Then join the multicolumn index to one level in the format you want. 
out = ( df .pivot(index='Date', columns='CLASS', values=['CCY','SZE(M)','WAL','DR','TYPE', 'BNCH','GDNC', 'SPRD', 'CPN', 'YLD', 'PRICE']) .sort_index(axis=1, level=1)) out.columns = out.columns.map('-'.join) #prefer that one over the f-string # out.columns = out.columns.map(lambda x: f"{x[0]}-{x[1]}") # create new column for sum of all SZE(M) columns out.insert(0, 'Sum SZE(M)', out.filter(like='SZE(M)').sum(axis=1)) #additional, if needed out.index = pd.to_datetime(out.index, format="%d-%b-%y") out = out.sort_index() print(out) Sum SZE(M) BNCH-A CCY-A CPN-A DR-A GDNC-A PRICE-A SPRD-A SZE(M)-A TYPE-A WAL-A YLD-A BNCH-B CCY-B CPN-B DR-B GDNC-B PRICE-B SPRD-B SZE(M)-B TYPE-B WAL-B YLD-B BNCH-C CCY-C CPN-C DR-C GDNC-C PRICE-C SPRD-C SZE(M)-C TYPE-C WAL-C YLD-C BNCH-D CCY-D CPN-D DR-D GDNC-D PRICE-D SPRD-D SZE(M)-D TYPE-D WAL-D YLD-D Date 2022-04-27 433.18 I-Curve USD 4.30% AAA 175a 99.9888 170 405.62 Fixed 2.28 4.34% I-Curve USD 4.64% AA 200a 99.97876 200 27.56 Fixed 2.58 4.69% NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2022-06-16 371.45 I-Curve USD 4.55% AAA - 98.96079 275 356.0 Fixed 1.05 5.65% I-Curve USD - BBB - - - 15.45 Fixed 3.19 - NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2022-09-08 239.08 I-Curve USD 4.30% AAA 225-235 97.6802 215 202.81 Fixed 1.91 5.65% I-Curve USD 4.64% AA 275a 96.5669 290 13.78 Fixed - 6.40% I-Curve USD 4.89% A 325a 96.21589 330 13.39 Fixed - 6.80% I-Curve USD 5.53% BBB 450a 95.18761 450 9.1 Fixed 2.25 8.00% Because of question in the comments. This is what out looks like before the line out.columns = ... level=0 is the first row of columns, level=1 ('CLASS') is the second row. BNCH CCY CPN DR GDNC PRICE SPRD SZE(M) TYPE WAL YLD BNCH CCY CPN DR GDNC PRICE SPRD SZE(M) TYPE WAL YLD BNCH CCY CPN DR GDNC PRICE SPRD SZE(M) TYPE WAL YLD BNCH CCY CPN DR GDNC PRICE SPRD SZE(M) TYPE WAL YLD CLASS A A A A A A A A A A A B B B B B B B B B B B C C C C C C C C C C C D D D D D D D D D D D Date 16-Jun-22 I-Curve USD 4.55% AAA - 98.96079 275 356.0 Fixed 1.05 5.65% I-Curve USD - BBB - - - 15.45 Fixed 3.19 - NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 27-Apr-22 I-Curve USD 4.30% AAA 175a 99.9888 170 405.62 Fixed 2.28 4.34% I-Curve USD 4.64% AA 200a 99.97876 200 27.56 Fixed 2.58 4.69% NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 8-Sep-22 I-Curve USD 4.30% AAA 225-235 97.6802 215 202.81 Fixed 1.91 5.65% I-Curve USD 4.64% AA 275a 96.5669 290 13.78 Fixed - 6.40% I-Curve USD 4.89% A 325a 96.21589 330 13.39 Fixed - 6.80% I-Curve USD 5.53% BBB 450a 95.18761 450 9.1 Fixed 2.25 8.00%
Pivot a dataframe keeping all the columns and assigning suffixes and values to each column based on another column
I have searched across SO and the internet but the closest I have gotten to my answer is that I may need to implement df.pivot(). However I can't seem to figure out what should I pass in the values and columns parameters in order to achieve the expected result. Initial dataframe: pd.DataFrame( {'Date': ['8-Sep-22', '8-Sep-22', '8-Sep-22', '8-Sep-22', '16-Jun-22', '16-Jun-22', '27-Apr-22', '27-Apr-22'], 'CLASS': ['A', 'B', 'C', 'D', 'A', 'B', 'A', 'B'], 'CCY': ['USD', 'USD', 'USD', 'USD', 'USD', 'USD', 'USD', 'USD'], 'SZE(M)': [202.81, 13.78, 13.39, 9.1, 356.0, 15.45, 405.62, 27.56], 'WAL': ['1.91', '-', '-', '2.25', '1.05', '3.19', '2.28', '2.58'], 'DR': ['AAA', 'AA', 'A', 'BBB', 'AAA', 'BBB', 'AAA', 'AA'], 'TYPE': ['Fixed', 'Fixed', 'Fixed', 'Fixed', 'Fixed', 'Fixed', 'Fixed', 'Fixed'], 'BNCH': ['I-Curve', 'I-Curve', 'I-Curve', 'I-Curve', 'I-Curve', 'I-Curve', 'I-Curve', 'I-Curve'], 'GDNC': ['225-235', '275a', '325a', '450a', '-', '-', '175a', '200a'], 'SPRD': ['215', '290', '330', '450', '275', '-', '170', '200'], 'CPN': ['4.30%', '4.64%', '4.89%', '5.53%', '4.55%', '-', '4.30%', '4.64%'], 'YLD': ['5.65%', '6.40%', '6.80%', '8.00%', '5.65%', '-', '4.34%', '4.69%'], 'PRICE': ['97.6802', '96.5669', '96.21589', '95.18761', '98.96079', '-', '99.9888', '99.97876']} ) Expected final dataframe: pd.DataFrame( {'Date': ['Sep 8, 2022', 'Jun 16, 2022', 'Apr 27, 2022'], 'Sum SZE(M)': [239.08, 371.45, 433.18], 'CLASS-A': ['A', 'A', 'A'], 'CCY-A': ['USD', 'USD', 'USD'], 'SZE(M)-A': [202.81, 356.0, 405.62], 'WAL-A': [1.91, 1.05, 2.28], 'DR-A': ['AAA', 'AAA', 'AAA'], 'PRICE-A': [97.6802, 98.96079, 99.9888], 'CLASS-B': ['B', 'B', 'B'], 'CCY-B': ['USD', 'USD', 'USD'], 'SZE(M)-B': [13.78, 15.45, 27.56], 'WAL-B': ['-', '3.19', '2.58'], 'DR-B': ['AA', 'BBB', 'AA'], 'PRICE-B': ['96.5669', '-', '99.97876'], 'CLASS-C': ['C', nan, nan], 'CCY-C': ['USD', nan, nan], 'SZE(M)-C': [13.39, nan, nan], 'WAL-C': ['-', nan, nan], 'DR-C': ['A', nan, nan], 'PRICE-C': [96.21589, nan, nan], 'CLASS-D': ['D', nan, nan], 'CCY-D': ['USD', nan, nan], 'SZE(M)-D': [9.1, nan, nan], 'WAL-D': [2.25, nan, nan], 'DR-D': ['BBB', nan, nan], 'PRICE-D': [95.18761, nan, nan]}) The suffix will be from class columns (A,B,C,D etc) and this will be for all the columns(not all are shown in the example). Also attaching the images of csvs for clarification: Initial: Final: Any guidance in the right direction is appreciated. Thanks. I tried pivoting the dataframe but I am struggling to find out what to put as columns and values. I am guessing it is either pivot, transpose or melt that is I am looking for, but am a bit confused.
[ "Here is what you want to do to get the desired output:\nPivot the df, then sort the columns by level 1, which is A,B,C,.... Then join the multicolumn index to one level in the format you want.\nout = (\n df\n .pivot(index='Date', \n columns='CLASS', \n values=['CCY','SZE(M)','WAL','DR','TYPE', 'BNCH','GDNC', 'SPRD', 'CPN', 'YLD', 'PRICE'])\n .sort_index(axis=1, level=1))\n\nout.columns = out.columns.map('-'.join) #prefer that one over the f-string\n# out.columns = out.columns.map(lambda x: f\"{x[0]}-{x[1]}\")\n\n# create new column for sum of all SZE(M) columns\nout.insert(0, 'Sum SZE(M)', out.filter(like='SZE(M)').sum(axis=1))\n\n\n#additional, if needed\nout.index = pd.to_datetime(out.index, format=\"%d-%b-%y\")\nout = out.sort_index()\n\nprint(out)\n\n Sum SZE(M) BNCH-A CCY-A CPN-A DR-A GDNC-A PRICE-A SPRD-A SZE(M)-A TYPE-A WAL-A YLD-A BNCH-B CCY-B CPN-B DR-B GDNC-B PRICE-B SPRD-B SZE(M)-B TYPE-B WAL-B YLD-B BNCH-C CCY-C CPN-C DR-C GDNC-C PRICE-C SPRD-C SZE(M)-C TYPE-C WAL-C YLD-C BNCH-D CCY-D CPN-D DR-D GDNC-D PRICE-D SPRD-D SZE(M)-D TYPE-D WAL-D YLD-D\nDate \n2022-04-27 433.18 I-Curve USD 4.30% AAA 175a 99.9888 170 405.62 Fixed 2.28 4.34% I-Curve USD 4.64% AA 200a 99.97876 200 27.56 Fixed 2.58 4.69% NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN\n2022-06-16 371.45 I-Curve USD 4.55% AAA - 98.96079 275 356.0 Fixed 1.05 5.65% I-Curve USD - BBB - - - 15.45 Fixed 3.19 - NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN\n2022-09-08 239.08 I-Curve USD 4.30% AAA 225-235 97.6802 215 202.81 Fixed 1.91 5.65% I-Curve USD 4.64% AA 275a 96.5669 290 13.78 Fixed - 6.40% I-Curve USD 4.89% A 325a 96.21589 330 13.39 Fixed - 6.80% I-Curve USD 5.53% BBB 450a 95.18761 450 9.1 Fixed 2.25 8.00%\n\nBecause of question in the comments. This is what out looks like before the line out.columns = ...\nlevel=0 is the first row of columns, level=1 ('CLASS') is the second row.\n BNCH CCY CPN DR GDNC PRICE SPRD SZE(M) TYPE WAL YLD BNCH CCY CPN DR GDNC PRICE SPRD SZE(M) TYPE WAL YLD BNCH CCY CPN DR GDNC PRICE SPRD SZE(M) TYPE WAL YLD BNCH CCY CPN DR GDNC PRICE SPRD SZE(M) TYPE WAL YLD\nCLASS A A A A A A A A A A A B B B B B B B B B B B C C C C C C C C C C C D D D D D D D D D D D\nDate \n16-Jun-22 I-Curve USD 4.55% AAA - 98.96079 275 356.0 Fixed 1.05 5.65% I-Curve USD - BBB - - - 15.45 Fixed 3.19 - NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN\n27-Apr-22 I-Curve USD 4.30% AAA 175a 99.9888 170 405.62 Fixed 2.28 4.34% I-Curve USD 4.64% AA 200a 99.97876 200 27.56 Fixed 2.58 4.69% NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN\n8-Sep-22 I-Curve USD 4.30% AAA 225-235 97.6802 215 202.81 Fixed 1.91 5.65% I-Curve USD 4.64% AA 275a 96.5669 290 13.78 Fixed - 6.40% I-Curve USD 4.89% A 325a 96.21589 330 13.39 Fixed - 6.80% I-Curve USD 5.53% BBB 450a 95.18761 450 9.1 Fixed 2.25 8.00%\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074604240_dataframe_pandas_python.txt
Q: Return the name of the maximum value for every column using Pandas I am new to using Python and I am trying to use pandas to return the value of a name column for the name which has the maximum average grouped value for every numeric column. Using the Pokemon dataset as an example, the below code loads the data. import pandas as pd url = "https://raw.githubusercontent.com/UofGAnalyticsData/DPIP/main/assesment_datasets/assessment3/Pokemon.csv" df4 = pd.read_csv(url) Then these next lines of code group by Type 1 and return the mean average outputs for every numeric attribute once grouped. df4.groupby("Type 1")[["Total", "HP", "Attack", "Defense", "Sp. Atk", "Sp. Def", "Speed"]].agg("mean") I want to modify this code so that from the resulting table, it outlines the name of the "Type 1" which has the highest average Total, HP, Attack and so on... The below code gives me the numeric maximums, but I also want to return the name of Type 1 for which each maximum belongs to. df4.groupby("Type 1")[["Total", "HP", "Attack", "Defense", "Sp. Atk", "Sp. Def", "Speed"]].agg("mean").agg("max") How would I do this succinctly using pandas? Thanks. A: You can just add idxmax in the agg() method : df4.groupby("Type 1")[["Total", "HP", "Attack", "Defense", "Sp. Atk", "Sp. Def", "Speed"]].agg("mean").agg(["max", "idxmax"])
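A small supplementary sketch, assuming the same df4 as above: calling idxmax and max separately on the grouped means makes it easy to read off, for every stat column, which Type 1 has the highest average and what that average is.
num_cols = ["Total", "HP", "Attack", "Defense", "Sp. Atk", "Sp. Def", "Speed"]
means = df4.groupby("Type 1")[num_cols].mean()
print(means.idxmax())  # the Type 1 with the highest average, per column
print(means.max())     # the corresponding maximum averages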
Return the name of the maximum value for every column using Pandas
I am new to using Python and I am trying to use pandas to return the value of a name column for the name which has the maximum average grouped value for every numeric column. Using the Pokemon dataset as an example, the below code loads the data. import pandas as pd url = "https://raw.githubusercontent.com/UofGAnalyticsData/DPIP/main/assesment_datasets/assessment3/Pokemon.csv" df4 = pd.read_csv(url) Then these next lines of code group by Type 1 and return the mean average outputs for every numeric attribute once grouped. df4.groupby("Type 1")[["Total", "HP", "Attack", "Defense", "Sp. Atk", "Sp. Def", "Speed"]].agg("mean") I want to modify this code so that from the resulting table, it outlines the name of the "Type 1" which has the highest average Total, HP, Attack and so on... The below code gives me the numeric maximums, but I also want to return the name of Type 1 for which each maximum belongs to. df4.groupby("Type 1")[["Total", "HP", "Attack", "Defense", "Sp. Atk", "Sp. Def", "Speed"]].agg("mean").agg("max") How would I do this succinctly using pandas? Thanks.
[ "You can just add idxmax in the agg() method :\ndf4.groupby(\"Type 1\")[[\"Total\", \"HP\", \"Attack\", \"Defense\", \"Sp. Atk\", \"Sp. Def\", \"Speed\"]].agg(\"mean\").agg([\"max\", \"idxmax\"])\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074604222_dataframe_pandas_python.txt
Q: Alphabetically sort with Python? I am aware of the sorted() function but I am having a little trouble using it/implementing it in my code. I have a database containing student records such as Name, address, age etc. When the user selects "4" the program runs the function to Display all records saved in the database and I desire it to be sorted alphabetically. My function works and displays the records, just not alphabetically. How could I take advantage of the sorted() function to make my code display the records alphabetically? Any help would be greatly appreciated. rom ShowAllRecords import show_record from deleteRecord import delete_student from showRecord import view_records from createRecord import add_record global student_info global database """ Fields :- ['Student ID', 'First name', 'Last Name', 'age', 'address', 'phone number'] 1. Create a new Record 2. Show a record 3. Delete a record 4. Display All Records. 5. Exit """ student_info = ['Student ID', 'First name', 'last name', 'age', 'address', 'phone number'] database = 'file_records.txt' def display_menu(): print("**********************************************") print(" RECORDS MANAGER ") print("**********************************************") print("1. Create a new record. ") print("2. Show a record. ") print("3. Delete a record. ") print("4. Display All Records. ") print("5. Exit") while True: display_menu() choice = input("Enter your choice: ") if choice == '1': print('You have chosen "Create a new record."') add_record() elif choice == '2': print('You have chosen "Show a record"') show_record() elif choice == '3': delete_student() elif choice == '4': print('You have chosen "Display ALL records"') view_records() else: break print("**********************************************") print(" RECORDS MANAGER ") print("**********************************************") ViewRecords function- import csv student_info = ['Student ID', 'First name', 'last name', 'age', 'address', 'phone number'] database = 'file_records.txt' def view_records(): global student_info global database print("--- Student Records ---") with open(database, "r", encoding="utf-8") as f: reader = csv.reader(f) for x in student_info: print(x, end='\t |') print("\n-----------------------------------------------------------------") for row in reader: for item in row: print(item, end="\t |") print("\n") input("Press any key to continue") I know I should use the sorted function, just not sure where/how to properly implement it within my code Sample Run: Blockquote RECORDS MANAGER 1. Create a new record. 2. Show a record. 3. Delete a record. 4. Display All Records. 5. Exit. Enter your option [1 - 5]: 4 You have chosen "Display ALL records in alphabetical order by last name." Would you like the registry sorted alphabetically in Ascending or Descending order? (A or D): D Last Name: Hunt First Name: Alan Student ID: 875653 Age: 23 Address: 345 Ocean Way Phone number: 3334445454 Last Name: Farrow First Name: Mia Student ID: 86756475 Age: 22 Address: 34 Lotus Ct Phone number: 9994448585 Done! Press enter to continue. returning to Main Menu. A: You already know you need the sorted function. Think about what you need to sort: all the records in your csv file, and the key to use to sort: Let's say by last name and then first name. See the documentation for more detail on the key argument to sorted. Since you want to sort by two items, you can create a tuple containing these two items as your key. 
In this case, for any student record r, the last name is the third element (r[2]) and the first name is the second element (r[1]), so our key needs to be the tuple (r[2], r[1]). In the function where you read the records, instead of printing them immediately, read all the records first, then sort them, and then print them: def view_records(): # These are not necessary since you never set these variables in the function # You can access those variables without global # global student_info # global database print("--- Student Records ---") with open(database, "r", encoding="utf-8") as f: for x in student_info: print(x, end='\t |') print("\n-----------------------------------------------------------------") reader = csv.reader(f) student_records = sorted(reader, key=lambda r: (r[2], r[1])) for row in student_records: print(*row, sep="\t |") input("Press any key to continue") A: By default the sorted() function will sort a list alphabetically, so you could read the lines in the file into a list, call sorted on the list, the print that list line by line with something like this: with open(database, "r", encoding="utf-8") as f: reader = csv.reader(f) # make a list of all lines in the database line_list = [] for line in reader: line_list.append(line) # print the list line_list = sorted(line_list) for line in line_list: print(line) I'm not exactly sure that's the correct way to go line by line for a database file but you get the idea. Another thing you can do with sorted is pass it a function that determines exactly what you sort, so say if you have 3 columns in your database and for each line you split the row into a list of 3 items you could do this: def func(x): return x[2] data = [[a,b,c],[d,e,f],[g,h,i]] sorted_data = sorted(data, key=func) to sort the data by the third column. This is also used when you want to sort by some manner that's not alphanumeric. Finally, for questions on basic functions like this I highly recommend geeksforgeeks as it usually has good basic examples and descriptions of things like this. A: To sort the database, you will need to read the whole thing into memory, so I would put that logic in a separate function, returning a list of dicts. def load_records(path, fieldnames): with open(database, "r", encoding="utf-8") as f: return list(csv.DictReader(f, fieldnames)) Then call view_records with this data as a parameter, and sort the records using a lambda: records = load_records(database, student_info) view_records(records) def view_records(records): print("--- Student Records ---") print(*student_info, sep="\t |") print("-----------------------------------------------------------------") for record in sorted(records, key=lambda record: (record["Last Name"], record["First name"])): print(*record.values(), sep="\t |") input("Press any key to continue") Alternatively, use operator.itemgetter as the key: from operator import itemgetter ... for record in sorted(records, key=itemgetter(record, "Last Name", "First name")):
Alphabetically sort with Python?
I am aware of the sorted() function but I am having a little trouble using it/implementing it in my code. I have a database containing student records such as Name, address, age etc. When the user selects "4" the program runs the function to Display all records saved in the database and I desire it to be sorted alphabetically. My function works and displays the records, just not alphabetically. How could I take advantage of the sorted() function to make my code display the records alphabetically? Any help would be greatly appreciated. rom ShowAllRecords import show_record from deleteRecord import delete_student from showRecord import view_records from createRecord import add_record global student_info global database """ Fields :- ['Student ID', 'First name', 'Last Name', 'age', 'address', 'phone number'] 1. Create a new Record 2. Show a record 3. Delete a record 4. Display All Records. 5. Exit """ student_info = ['Student ID', 'First name', 'last name', 'age', 'address', 'phone number'] database = 'file_records.txt' def display_menu(): print("**********************************************") print(" RECORDS MANAGER ") print("**********************************************") print("1. Create a new record. ") print("2. Show a record. ") print("3. Delete a record. ") print("4. Display All Records. ") print("5. Exit") while True: display_menu() choice = input("Enter your choice: ") if choice == '1': print('You have chosen "Create a new record."') add_record() elif choice == '2': print('You have chosen "Show a record"') show_record() elif choice == '3': delete_student() elif choice == '4': print('You have chosen "Display ALL records"') view_records() else: break print("**********************************************") print(" RECORDS MANAGER ") print("**********************************************") ViewRecords function- import csv student_info = ['Student ID', 'First name', 'last name', 'age', 'address', 'phone number'] database = 'file_records.txt' def view_records(): global student_info global database print("--- Student Records ---") with open(database, "r", encoding="utf-8") as f: reader = csv.reader(f) for x in student_info: print(x, end='\t |') print("\n-----------------------------------------------------------------") for row in reader: for item in row: print(item, end="\t |") print("\n") input("Press any key to continue") I know I should use the sorted function, just not sure where/how to properly implement it within my code Sample Run: Blockquote RECORDS MANAGER 1. Create a new record. 2. Show a record. 3. Delete a record. 4. Display All Records. 5. Exit. Enter your option [1 - 5]: 4 You have chosen "Display ALL records in alphabetical order by last name." Would you like the registry sorted alphabetically in Ascending or Descending order? (A or D): D Last Name: Hunt First Name: Alan Student ID: 875653 Age: 23 Address: 345 Ocean Way Phone number: 3334445454 Last Name: Farrow First Name: Mia Student ID: 86756475 Age: 22 Address: 34 Lotus Ct Phone number: 9994448585 Done! Press enter to continue. returning to Main Menu.
[ "You already know you need the sorted function. Think about what you need to sort: all the records in your csv file, and the key to use to sort: Let's say by last name and then first name. See the documentation for more detail on the key argument to sorted. Since you want to sort by two items, you can create a tuple containing these two items as your key. In this case, for any student record r, the last name is the third element (r[2]) and the first name is the second element (r[1]), so our key needs to be the tuple (r[2], r[1]).\nIn the function where you read the records, instead of printing them immediately, read all the records first, then sort them, and then print them:\ndef view_records():\n # These are not necessary since you never set these variables in the function\n # You can access those variables without global\n # global student_info\n # global database\n\n print(\"--- Student Records ---\")\n\n with open(database, \"r\", encoding=\"utf-8\") as f:\n for x in student_info:\n print(x, end='\\t |')\n print(\"\\n-----------------------------------------------------------------\")\n\n reader = csv.reader(f)\n student_records = sorted(reader, key=lambda r: (r[2], r[1]))\n for row in student_records:\n print(*row, sep=\"\\t |\")\n\n input(\"Press any key to continue\")\n\n", "By default the sorted() function will sort a list alphabetically, so you could read the lines in the file into a list, call sorted on the list, the print that list line by line with something like this:\nwith open(database, \"r\", encoding=\"utf-8\") as f:\n reader = csv.reader(f)\n\n # make a list of all lines in the database\n line_list = []\n for line in reader:\n line_list.append(line)\n\n # print the list\n line_list = sorted(line_list)\n for line in line_list:\n print(line)\n\nI'm not exactly sure that's the correct way to go line by line for a database file but you get the idea. Another thing you can do with sorted is pass it a function that determines exactly what you sort, so say if you have 3 columns in your database and for each line you split the row into a list of 3 items you could do this:\ndef func(x):\n return x[2]\n\ndata = [[a,b,c],[d,e,f],[g,h,i]]\nsorted_data = sorted(data, key=func)\n\nto sort the data by the third column. This is also used when you want to sort by some manner that's not alphanumeric.\nFinally, for questions on basic functions like this I highly recommend geeksforgeeks as it usually has good basic examples and descriptions of things like this.\n", "To sort the database, you will need to read the whole thing into memory, so I would put that logic in a separate function, returning a list of dicts.\ndef load_records(path, fieldnames):\n with open(database, \"r\", encoding=\"utf-8\") as f:\n return list(csv.DictReader(f, fieldnames))\n\nThen call view_records with this data as a parameter, and sort the records using a lambda:\nrecords = load_records(database, student_info)\nview_records(records)\n\ndef view_records(records):\n print(\"--- Student Records ---\")\n print(*student_info, sep=\"\\t |\")\n print(\"-----------------------------------------------------------------\")\n for record in sorted(records, key=lambda record: (record[\"Last Name\"], record[\"First name\"])):\n print(*record.values(), sep=\"\\t |\")\n input(\"Press any key to continue\")\n\nAlternatively, use operator.itemgetter as the key:\nfrom operator import itemgetter\n...\n for record in sorted(records, key=itemgetter(record, \"Last Name\", \"First name\")):\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "alphabetical", "python", "sorting" ]
stackoverflow_0074604275_alphabetical_python_sorting.txt
Q: A way to detect empty cells in CSV and replace them with NULL in sql query I have a dynamic csv file with cells that start out empty but over time will get filled with values. To get this csv file into a database I convert it on the fly to sql and upload to my database, however, empty cells in the CSV file are showing up with empty values in the database but are not set to NULL. Is there a way to detect empty cells and set them to NULL before the sql is uploaded? Here's my code: import MySQLdb import csv import sys db = MySQLdb.connect("localhost", "root", "", "point") cur = db.cursor() csv_data = csv.reader(open("/home/dboo/data1/data/Fiverphy/tablecells222.csv")) header = next(csv_data) cur.execute('DROP TABLE IF EXISTS tablecells222') cur.execute('''CREATE TABLE tablecells222( id INTEGER NOT NULL PRIMARY KEY ,less_id INTEGER NOT NULL ,datetime VARCHAR(25) NOT NULL ,status VARCHAR(4) NOT NULL ,less VARCHAR(30) ,team1 VARCHAR(15) NOT NULL ,team2 VARCHAR(15) NOT NULL ,team1 INTEGER NOT NULL ,team2 INTEGER NOT NULL ,team1_code VARCHAR(3) NOT NULL ,team2_code VARCHAR(3) NOT NULL ,prob1 VARCHAR(19) NOT NULL ,prob2 VARCHAR(20) NOT NULL ,prob3 VARCHAR(19) NOT NULL ,round VARCHAR(30) ,day VARCHAR(30) ,score1 NUMERIC(3) ,score2 NUMERIC(3) ,adj_score1 VARCHAR(18) ,adj_score2 VARCHAR(18) ,chances1 VARCHAR(18) ,chances2 VARCHAR(19) ,moves1 VARCHAR(19) ,moves2 VARCHAR(19) ,aggregate VARCHAR(30) ,shootout VARCHAR(30) ); ''') for row in csv_data: print(row) cur.execute("INSERT IGNORE INTO tablecells222(id,less_id,datetime,status,less,team1,team2,team1_id,team2_id,team1_code,team2_code,prob1,prob2,prob3,round,day,score1,score2,adj_score1,adj_score2,chances1,chances2,moves1,moves2,aggregate,shootout) VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s);", row) db.commit() cur.close() print("Done") A: You could do the following: for row in csv_data: if row['data_value'] == "" or row['data_value'] == " ": row['data_value'] == None # row['data_value'] == "Null" else: cur.execute()
A way to detect empty cells in CSV and replace them with NULL in sql query
I have a dynamic csv file with cells that start out empty but over time will get filled with values. To get this csv file into a database I convert it on the fly to sql and upload to my database, however, empty cells in the CSV file are showing up with empty values in the database but are not set to NULL. Is there a way to detect empty cells and set them to NULL before the sql is uploaded? Here's my code: import MySQLdb import csv import sys db = MySQLdb.connect("localhost", "root", "", "point") cur = db.cursor() csv_data = csv.reader(open("/home/dboo/data1/data/Fiverphy/tablecells222.csv")) header = next(csv_data) cur.execute('DROP TABLE IF EXISTS tablecells222') cur.execute('''CREATE TABLE tablecells222( id INTEGER NOT NULL PRIMARY KEY ,less_id INTEGER NOT NULL ,datetime VARCHAR(25) NOT NULL ,status VARCHAR(4) NOT NULL ,less VARCHAR(30) ,team1 VARCHAR(15) NOT NULL ,team2 VARCHAR(15) NOT NULL ,team1 INTEGER NOT NULL ,team2 INTEGER NOT NULL ,team1_code VARCHAR(3) NOT NULL ,team2_code VARCHAR(3) NOT NULL ,prob1 VARCHAR(19) NOT NULL ,prob2 VARCHAR(20) NOT NULL ,prob3 VARCHAR(19) NOT NULL ,round VARCHAR(30) ,day VARCHAR(30) ,score1 NUMERIC(3) ,score2 NUMERIC(3) ,adj_score1 VARCHAR(18) ,adj_score2 VARCHAR(18) ,chances1 VARCHAR(18) ,chances2 VARCHAR(19) ,moves1 VARCHAR(19) ,moves2 VARCHAR(19) ,aggregate VARCHAR(30) ,shootout VARCHAR(30) ); ''') for row in csv_data: print(row) cur.execute("INSERT IGNORE INTO tablecells222(id,less_id,datetime,status,less,team1,team2,team1_id,team2_id,team1_code,team2_code,prob1,prob2,prob3,round,day,score1,score2,adj_score1,adj_score2,chances1,chances2,moves1,moves2,aggregate,shootout) VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s);", row) db.commit() cur.close() print("Done")
[ "You could do the following:\nfor row in csv_data: \n if row['data_value'] == \"\" or row['data_value'] == \" \":\n row['data_value'] == None\n # row['data_value'] == \"Null\"\n else:\n cur.execute()\n\n" ]
[ 0 ]
[]
[]
[ "csv", "database", "mysql_python", "python" ]
stackoverflow_0074595240_csv_database_mysql_python_python.txt
Q: How to permanently save a variable (login information) I am trying to code a username and password system and was wondering if there was any way to save the variable so even when the code stops the username will work the next time. I have not yet started coding it and was just wondering if this was possible. I saw a few things on saving it as a file but with no luck. Thanks! I tried saving it as a file but, I don't want to manually add every username and password. A: You can try appending them to a file instead of writing a new one for each user. So each time a user logs in the creds will be saved to that file. This is what I have done in the past. A: You can automaticly save everything in a simple text file. file = open("Python.txt", "w") make sure the .txt exists and check before running that you are actually running your python file in the right directory. Then write the variable to the file and make a newline by appending \n. file.write(str(yourVariable), "\n") Dont forget to close your file afterwards! file.close() To acces this varibale afterwards open it again by doing: f = open('Python.txt', 'r') contents= f.read() and convert it back to your old type of variable by doing something like: int(contents) #or list(contents) A: You are look for persistence, there are different ways to do so, python-specific option is shelve which allows you to seamlessly store dict inside file, consider following simple phone-book example, let addphone.py content be import shelve name = input('What is your name?') phone = input('What is your phone number?') with shelve.open('phones') as db: db[name] = phone and listphones.py content be import shelve with shelve.open('phones') as db: for name, phone in db.items(): print(name,'uses phone',phone) If your requirements stipulates interoperatibility with other software, you might consider using json or configparser, you might also consider other PersistenceTools
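As a sketch of the json option mentioned at the end of the last answer (the file name and credentials here are made up for illustration; real passwords should be hashed rather than stored as plain text):
import json

credentials = {"alice": "hunter2"}  # username -> password, plain text only for the example

# save the dict so it survives after the program stops
with open("users.json", "w") as f:
    json.dump(credentials, f)

# on the next run, load it back
with open("users.json") as f:
    credentials = json.load(f)
print(credentials)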
How to permanently save a variable (login information)
I am trying to code a username and password system and was wondering if there was any way to save the variable so even when the code stops the username will work the next time. I have not yet started coding it and was just wondering if this was possible. I saw a few things on saving it as a file but with no luck. Thanks! I tried saving it as a file but, I don't want to manually add every username and password.
[ "You can try appending them to a file instead of writing a new one for each user. So each time a user logs in the creds will be saved to that file. This is what I have done in the past.\n", "You can automaticly save everything in a simple text file.\nfile = open(\"Python.txt\", \"w\")\n\nmake sure the .txt exists and check before running that you are actually running your python file in the right directory.\nThen write the variable to the file and make a newline by appending \\n.\nfile.write(str(yourVariable), \"\\n\")\n\nDont forget to close your file afterwards!\nfile.close()\n\nTo acces this varibale afterwards open it again by doing:\nf = open('Python.txt', 'r')\ncontents= f.read()\n\nand convert it back to your old type of variable by doing something like:\nint(contents)\n#or\nlist(contents)\n\n", "You are look for persistence, there are different ways to do so, python-specific option is shelve which allows you to seamlessly store dict inside file, consider following simple phone-book example, let\naddphone.py content be\nimport shelve\nname = input('What is your name?')\nphone = input('What is your phone number?')\nwith shelve.open('phones') as db:\n db[name] = phone\n\nand listphones.py content be\nimport shelve\nwith shelve.open('phones') as db:\n for name, phone in db.items():\n print(name,'uses phone',phone)\n\nIf your requirements stipulates interoperatibility with other software, you might consider using json or configparser, you might also consider other PersistenceTools\n" ]
[ 0, 0, 0 ]
[]
[]
[ "passwords", "python", "save", "variables" ]
stackoverflow_0074604400_passwords_python_save_variables.txt
Q: Python) How to copy a row and paste it to all rows in another dataframe How can I extract a specific row and paste it to all rows in another dataframe? For example, when I have two dataframes as below: df1={'category': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']} df1=pd.DataFrame(df1) df2={'value 1': [1, 1, 2, 5, 3, 4, 4, 8, 7], 'value 2': [4, 2, 8, 5, 7, 9, 3, 4, 2]} df2=pd.DataFrame(df2) df1 # category #0 A #1 B #2 C #3 D #4 E #5 F #6 G #7 H #8 I df2 # value 1 value 2 #0 1 4 #1 1 2 #2 2 8 I'd like to copy the forth row to all rows in df1 df3 # category value 1 value 2 #0 A 1 2 #1 B 1 2 #2 C 1 2 #3 D 1 2 #4 E 1 2 #5 F 1 2 #6 G 1 2 #7 H 1 2 #8 I 1 2 Thanks to Axe319, I change the code as in the comment. But I check that there was unexpected result. df3 = pd.concat([df1, df2.apply(lambda _: df2.iloc[1], axis=1)], axis=1) df3 # category value 1 value 2 #0 A 1.0 2.0 #1 B 1.0 2.0 #2 C 1.0 2.0 #3 D NaN NaN #4 E NaN NaN #5 F NaN NaN #6 G NaN NaN #7 H NaN NaN #8 I NaN NaN I guess that it only fills according to the number of rows in df2, but I cannot understand this situation. A: Example #1: Create two data frames and append the second to the first one. # Importing pandas as pd import pandas as pd # Creating the first Dataframe using dictionary df1 = df = pd.DataFrame({"a":[1, 2, 3, 4], "b":[5, 6, 7, 8]}) # Creating the Second Dataframe using dictionary df2 = pd.DataFrame({"a":[1, 2, 3], "b":[5, 6, 7]}) # Print df1 print(df1, "\n") # Print df2 df2 Output: enter link description here enter link description here Now append df2 at the end of df1. # to append df2 at the end of df1 dataframe df1.append(df2) Output:enter link description here Notice the index value of the second data frame is maintained in the appended data frame. If we do not want it to happen then we can set ignore_index=True. # A continuous index value will be maintained # across the rows in the new appended data frame. df1.append(df2, ignore_index = True) Output:enter link description here
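For what the question literally asks, broadcasting a single row of df2 onto every row of df1, a simple sketch that avoids the NaN problem is to assign each value of the chosen row as a scalar, which pandas fills down every row; iloc[1] is used here because that is the row the expected output corresponds to:
row = df2.iloc[1]      # the row to copy ('value 1' = 1, 'value 2' = 2)
df3 = df1.copy()
for col, val in row.items():
    df3[col] = val     # scalar assignment broadcasts to every row of df3
print(df3)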
Python) How to copy a row and paste it to all rows in another dataframe
How can I extract a specific row and paste it to all rows in another dataframe? For example, when I have two dataframes as below: df1={'category': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']} df1=pd.DataFrame(df1) df2={'value 1': [1, 1, 2, 5, 3, 4, 4, 8, 7], 'value 2': [4, 2, 8, 5, 7, 9, 3, 4, 2]} df2=pd.DataFrame(df2) df1 # category #0 A #1 B #2 C #3 D #4 E #5 F #6 G #7 H #8 I df2 # value 1 value 2 #0 1 4 #1 1 2 #2 2 8 I'd like to copy the forth row to all rows in df1 df3 # category value 1 value 2 #0 A 1 2 #1 B 1 2 #2 C 1 2 #3 D 1 2 #4 E 1 2 #5 F 1 2 #6 G 1 2 #7 H 1 2 #8 I 1 2 Thanks to Axe319, I change the code as in the comment. But I check that there was unexpected result. df3 = pd.concat([df1, df2.apply(lambda _: df2.iloc[1], axis=1)], axis=1) df3 # category value 1 value 2 #0 A 1.0 2.0 #1 B 1.0 2.0 #2 C 1.0 2.0 #3 D NaN NaN #4 E NaN NaN #5 F NaN NaN #6 G NaN NaN #7 H NaN NaN #8 I NaN NaN I guess that it only fills according to the number of rows in df2, but I cannot understand this situation.
[ "Example #1: Create two data frames and append the second to the first one.\n# Importing pandas as pd\nimport pandas as pd\n\n# Creating the first Dataframe using dictionary\ndf1 = df = pd.DataFrame({\"a\":[1, 2, 3, 4],\n \"b\":[5, 6, 7, 8]})\n\n# Creating the Second Dataframe using dictionary\ndf2 = pd.DataFrame({\"a\":[1, 2, 3],\n \"b\":[5, 6, 7]})\n\n# Print df1\nprint(df1, \"\\n\")\n\n# Print df2\ndf2\n\nOutput:\nenter link description here\nenter link description here\nNow append df2 at the end of df1.\n# to append df2 at the end of df1 dataframe\ndf1.append(df2)\n\nOutput:enter link description here\nNotice the index value of the second data frame is maintained in the appended data frame. If we do not want it to happen then we can set ignore_index=True.\n# A continuous index value will be maintained\n# across the rows in the new appended data frame.\ndf1.append(df2, ignore_index = True)\n\nOutput:enter link description here\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074604338_dataframe_pandas_python.txt
Q: Python Dataframes merge multi match I'm new with Dataframe. I would like to kwon how (if possible) can I merge 2 Dataframes with multiple match For example [df1] date ZipCode Weather 2022-11-25 00:00:00 123456 34 2022-11-25 00:00:15 123456 35 2022-11-25 00:00:30 123456 36 [df2] date ZipCode host 2022-11-25 00:00:00 123456 host1 2022-11-25 00:00:00 123456 host2 2022-11-25 00:00:00 123456 host3 2022-11-25 00:00:15 123456 host1 2022-11-25 00:00:30 123456 host2 2022-11-25 00:00:30 123456 host3 Expected results: date ZipCode host Weather 2022-11-25 00:00:00 123456 host1 34 2022-11-25 00:00:00 123456 host2 34 2022-11-25 00:00:00 123456 host3 34 2022-11-25 00:00:15 123456 host1 35 2022-11-25 00:00:30 123456 host2 36 2022-11-25 00:00:30 123456 host3 36 My objetive is assign weather measures to each host. I have weather measurements every 15 minutes for one ZipCode (One line) By the other hand, I have several host KPIs for one time and one ZipCode (multiples lines) Can I perfomr this activity with Dataframes? Thanks in advance! A: You could use the join function in pandas which joins one dataframe's index to the index of the other. Try something like import pandas as pd data1 = \ [['2022-11-25 00:00:00', 123456, 34], ['2022-11-25 00:00:15', 123456, 35], ['2022-11-25 00:00:30', 123456, 36]] columns1 =['date', 'ZipCode', 'Weather'] data2 = \ [['2022-11-25 00:00:00', 123456, 'host1'], ['2022-11-25 00:00:00', 123456, 'host2'], ['2022-11-25 00:00:00', 123456, 'host3'], ['2022-11-25 00:00:15', 123456, 'host1'], ['2022-11-25 00:00:30', 123456, 'host2'], ['2022-11-25 00:00:30', 123456, 'host3']] columns2 =['date', 'ZipCode', 'host'] df1 = pd.DataFrame(data=data1, columns=columns1) df1.date = pd.to_datetime(df1.date) df1.set_index('date', inplace=True) df2 = pd.DataFrame(data=data2, columns=columns2) df2.date = pd.to_datetime(df2.date) df2.set_index('date', inplace=True) df3 = df1.join(df2['host'], on='date') df3 A: We do that by using merge and setting the argument on to ['date', 'ZipCode']: new_df = pd.merge(df2, df1, on=['date', 'ZipCode']) Output >>> new_df ... date ZipCode host Weather 0 2022-11-25 00:00:00 123456 host1 34 1 2022-11-25 00:00:00 123456 host2 34 2 2022-11-25 00:00:00 123456 host3 34 3 2022-11-25 00:00:15 123456 host1 35 4 2022-11-25 00:00:30 123456 host2 36 5 2022-11-25 00:00:30 123456 host3 36 A: After several test I got the expected result new_df2 = pd.merge(df1, df2, on=['date', 'ZipCode'], how='right') Thanks for your orientation!!
Python Dataframes merge multi match
I'm new with Dataframe. I would like to kwon how (if possible) can I merge 2 Dataframes with multiple match For example [df1] date ZipCode Weather 2022-11-25 00:00:00 123456 34 2022-11-25 00:00:15 123456 35 2022-11-25 00:00:30 123456 36 [df2] date ZipCode host 2022-11-25 00:00:00 123456 host1 2022-11-25 00:00:00 123456 host2 2022-11-25 00:00:00 123456 host3 2022-11-25 00:00:15 123456 host1 2022-11-25 00:00:30 123456 host2 2022-11-25 00:00:30 123456 host3 Expected results: date ZipCode host Weather 2022-11-25 00:00:00 123456 host1 34 2022-11-25 00:00:00 123456 host2 34 2022-11-25 00:00:00 123456 host3 34 2022-11-25 00:00:15 123456 host1 35 2022-11-25 00:00:30 123456 host2 36 2022-11-25 00:00:30 123456 host3 36 My objetive is assign weather measures to each host. I have weather measurements every 15 minutes for one ZipCode (One line) By the other hand, I have several host KPIs for one time and one ZipCode (multiples lines) Can I perfomr this activity with Dataframes? Thanks in advance!
[ "You could use the join function in pandas which joins one dataframe's index to the index of the other. Try something like\nimport pandas as pd\n\ndata1 = \\\n[['2022-11-25 00:00:00', 123456, 34],\n['2022-11-25 00:00:15', 123456, 35],\n['2022-11-25 00:00:30', 123456, 36]]\n\ncolumns1 =['date', 'ZipCode', 'Weather']\n\ndata2 = \\\n[['2022-11-25 00:00:00', 123456, 'host1'],\n['2022-11-25 00:00:00', 123456, 'host2'],\n['2022-11-25 00:00:00', 123456, 'host3'],\n['2022-11-25 00:00:15', 123456, 'host1'],\n['2022-11-25 00:00:30', 123456, 'host2'],\n['2022-11-25 00:00:30', 123456, 'host3']]\n\ncolumns2 =['date', 'ZipCode', 'host']\n\ndf1 = pd.DataFrame(data=data1, columns=columns1)\ndf1.date = pd.to_datetime(df1.date)\ndf1.set_index('date', inplace=True)\ndf2 = pd.DataFrame(data=data2, columns=columns2)\ndf2.date = pd.to_datetime(df2.date)\ndf2.set_index('date', inplace=True)\ndf3 = df1.join(df2['host'], on='date')\ndf3\n\n", "We do that by using merge and setting the argument on to ['date', 'ZipCode']:\nnew_df = pd.merge(df2, df1, on=['date', 'ZipCode'])\n\nOutput\n>>> new_df\n... date ZipCode host Weather\n0 2022-11-25 00:00:00 123456 host1 34\n1 2022-11-25 00:00:00 123456 host2 34\n2 2022-11-25 00:00:00 123456 host3 34\n3 2022-11-25 00:00:15 123456 host1 35\n4 2022-11-25 00:00:30 123456 host2 36\n5 2022-11-25 00:00:30 123456 host3 36\n\n", "After several test I got the expected result\nnew_df2 = pd.merge(df1, df2, on=['date', 'ZipCode'], how='right')\n\nThanks for your orientation!!\n" ]
[ 0, 0, 0 ]
[]
[]
[ "dataframe", "merge", "multiple_columns", "python" ]
stackoverflow_0074598005_dataframe_merge_multiple_columns_python.txt
Q: How to download Instagram reels, image, dp and stories using python without session? I am trying to make instagram content downloader like this this website. But the problem is python wants cookies sessions id. I want to download instagram reels, image, dp, stories, without session id. A: This is not really a Python-specific problem. If you want to scrape data from Instagram you have to pass session keys and simulate a real client connection (try https://github.com/chris-greening/instascrape or similar libraries). Alternatively, you can use the official Instagram Graph API (https://developers.facebook.com/docs/instagram-api/), which is the better option if its permission scopes cover what you need.
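For completeness, a hedged sketch with the instaloader package, which is a different library from the ones named in the answer; the username is a placeholder. Public posts and profile pictures can usually be fetched anonymously, while stories still require calling login() first:
import instaloader

L = instaloader.Instaloader()
L.download_profile("some_public_account", profile_pic_only=True)   # just the profile picture (dp)

profile = instaloader.Profile.from_username(L.context, "some_public_account")
for post in profile.get_posts():            # images and reels show up as posts
    L.download_post(post, target="some_public_account")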
How to download Instagram reels, image, dp and stories using python without session?
I am trying to make instagram content downloader like this this website. But the problem is python wants cookies sessions id. I want to download instagram reels, image, dp, stories, without session id.
[ "The question is not related to Python generally.\nIf you want to scrape data from Instagram you have to pass session keys and simulate real client connection (try https://github.com/chris-greening/instascrape or similar libraries).\nOr you can use official Instagram Graph API (https://developers.facebook.com/docs/instagram-api/) which is better option if those rights scope meets your requirements.\n" ]
[ 0 ]
[]
[]
[ "instagram", "python", "python_3.x" ]
stackoverflow_0074604541_instagram_python_python_3.x.txt
Q: Is there a way to enumerate rows in temporal pandas data frame according to the date of an action? I have a data frame with temporal data. Each row represents a purchase of a customer. It looks similar to this: cli_id date item_purchased 1 2017-01-01 A 2 2017-01-04 C 3 2017-01-03 B 1 2017-02-01 B 2 2017-01-31 B 3 2017-02-02 A 1 2017-02-15 A 2 2017-02-10 A 3 2017-02-16 C 2 2017-02-20 B I would like to be able to make another column that represents the order of the purchase. This way I will be able to group by the first purchase and have the mode of first purchase, then have the mode of the second purchase, and so on. The output would look like this: cli_id date item_purchased purchase_order 1 2017-01-01 A 1 2 2017-01-04 C 1 3 2017-01-03 B 1 1 2017-02-01 B 2 2 2017-01-31 B 2 3 2017-02-02 A 2 1 2017-02-15 A 3 2 2017-02-10 A 3 3 2017-02-16 C 3 2 2017-02-20 B 4 Important: Not all customers have the same amount of purchases (i.e., not the same amount of rows). So far, I have tried with for loops, but it is slow: list_dates = ['2017-01-01', '2017-01-04', '2017-01-03', '2017-02-01', '2017-01-31', '2017-02-02', '2017-02-15', '2017-02-10', '2017-02-16', '2017-02-20'] list_item_purchased = ['A','C','B','B','B','A','A','A','C','B'] df = pd.DataFrame({'cli_id': list_cli_id, 'date': list_dates, 'item_purchased': list_item_purchased}) df['date'] = pd.to_datetime(df['date']) first = True for id in df['cli_id'].unique(): if first: df_order = df[df['cli_id'] == id].sort_values(by='date').reset_index().reset_index() else: df_order = df_order.append(df[df['cli_id'] == id].sort_values(by='date').reset_index().reset_index(), ignore_index=True) first = False df_order = df_order.rename(columns={"level_0": "purchase_order"}).drop('index', axis=1) Is there a better (faster / more elegant) way to do that? My real data frame is much bigger than this. A: You can use GroupBy.cumcount. df['date'] = pd.to_datetime(df['date']) df= ( df .join((df.sort_values(by=["cli_id", "date"]) .groupby("cli_id").cumcount()+1) .to_frame("purchase_order")) ) # Output : print(df) cli_id date item_purchased purchase_order 0 1 2017-01-01 A 1 1 2 2017-01-04 C 1 2 3 2017-01-03 B 1 3 1 2017-02-01 B 2 4 2 2017-01-31 B 2 5 3 2017-02-02 A 2 6 1 2017-02-15 A 3 7 2 2017-02-10 A 3 8 3 2017-02-16 C 3 9 2 2017-02-20 B 4
Is there a way to enumerate rows in temporal pandas data frame according to the date of an action?
I have a data frame with temporal data. Each row represents a purchase of a customer. It looks similar to this: cli_id date item_purchased 1 2017-01-01 A 2 2017-01-04 C 3 2017-01-03 B 1 2017-02-01 B 2 2017-01-31 B 3 2017-02-02 A 1 2017-02-15 A 2 2017-02-10 A 3 2017-02-16 C 2 2017-02-20 B I would like to be able to make another column that represents the order of the purchase. This way I will be able to group by the first purchase and have the mode of first purchase, then have the mode of the second purchase, and so on. The output would look like this: cli_id date item_purchased purchase_order 1 2017-01-01 A 1 2 2017-01-04 C 1 3 2017-01-03 B 1 1 2017-02-01 B 2 2 2017-01-31 B 2 3 2017-02-02 A 2 1 2017-02-15 A 3 2 2017-02-10 A 3 3 2017-02-16 C 3 2 2017-02-20 B 4 Important: Not all customers have the same amount of purchases (i.e., not the same amount of rows). So far, I have tried with for loops, but it is slow: list_dates = ['2017-01-01', '2017-01-04', '2017-01-03', '2017-02-01', '2017-01-31', '2017-02-02', '2017-02-15', '2017-02-10', '2017-02-16', '2017-02-20'] list_item_purchased = ['A','C','B','B','B','A','A','A','C','B'] df = pd.DataFrame({'cli_id': list_cli_id, 'date': list_dates, 'item_purchased': list_item_purchased}) df['date'] = pd.to_datetime(df['date']) first = True for id in df['cli_id'].unique(): if first: df_order = df[df['cli_id'] == id].sort_values(by='date').reset_index().reset_index() else: df_order = df_order.append(df[df['cli_id'] == id].sort_values(by='date').reset_index().reset_index(), ignore_index=True) first = False df_order = df_order.rename(columns={"level_0": "purchase_order"}).drop('index', axis=1) Is there a better (faster / more elegant) way to do that? My real data frame is much bigger than this.
[ "You can use GroupBy.cumcount.\ndf['date'] = pd.to_datetime(df['date'])\n\ndf= (\n df\n .join((df.sort_values(by=[\"cli_id\", \"date\"])\n .groupby(\"cli_id\").cumcount()+1)\n .to_frame(\"purchase_order\")) \n )\n\n# Output :\nprint(df)\n\n cli_id date item_purchased purchase_order\n0 1 2017-01-01 A 1\n1 2 2017-01-04 C 1\n2 3 2017-01-03 B 1\n3 1 2017-02-01 B 2\n4 2 2017-01-31 B 2\n5 3 2017-02-02 A 2\n6 1 2017-02-15 A 3\n7 2 2017-02-10 A 3\n8 3 2017-02-16 C 3\n9 2 2017-02-20 B 4\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "date", "pandas", "python" ]
stackoverflow_0074604399_dataframe_date_pandas_python.txt
Q: regex for repeating word to repeating two words I am trying to write some regex pattern that will look through a sentence and remove any one or two sequentially repeated words for example: # R code below string_a = "hello hello, how are you you?" string_b = "goodbye world goodbye world, I am flying to the the moon!" gsub(pattern, "", string_a) gsub(pattern, "", string_b) Desired outputs are [1] "hello, how are you?" [2] "goodbye world, I am flying to the moon!" A: Try gsub("(\\S+(\\s+\\S+)?)\\s+\\1+", "\\1", c(string_a, string_b)) -output [1] "hello, how are you?" [2] "goodbye world, I am flying to the moon!"
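Since the question is also tagged python, here is the same idea as a Python sketch with re.sub; the pattern mirrors the gsub answer, with the inner group made non-capturing so the backreference stays \1:
import re

pattern = r"(\S+(?:\s+\S+)?)(?:\s+\1)+"
print(re.sub(pattern, r"\1", "hello hello, how are you you?"))
# hello, how are you?
print(re.sub(pattern, r"\1", "goodbye world goodbye world, I am flying to the the moon!"))
# goodbye world, I am flying to the moon!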
regex for repeating word to repeating two words
I am trying to write some regex pattern that will look through a sentence and remove any one or two sequentially repeated words for example: # R code below string_a = "hello hello, how are you you?" string_b = "goodbye world goodbye world, I am flying to the the moon!" gsub(pattern, "", string_a) gsub(pattern, "", string_b) Desired outputs are [1] "hello, how are you?" [2] "goodbye world, I am flying to the moon!"
[ "Try\n gsub(\"(\\\\S+(\\\\s+\\\\S+)?)\\\\s+\\\\1+\", \"\\\\1\", c(string_a, string_b))\n\n-output\n[1] \"hello, how are you?\" \n[2] \"goodbye world, I am flying to the moon!\"\n\n" ]
[ 3 ]
[]
[]
[ "javascript", "python", "r", "regex" ]
stackoverflow_0074604589_javascript_python_r_regex.txt
Q: Is it possible to create a Rest API using Jupyter notebook?,if yes How to create Rest API for the following code interms of json format I have built a model for the time series analysis which is going to predict the sail for the next days,the model is working fine,but i want to convert that into Rest API in JSON format using the Anaconda jupyter notebook,Please let me know the way for that .Thanks in advance. Here is the code: from pandas import Series from statsmodels.tsa.arima_model import ARIMA import numpy # create a differenced series def difference(dataset, interval=1): diff = list() for i in range(interval, len(dataset)): value = dataset[i] - dataset[i - interval] diff.append(value) return numpy.array(diff) # invert differenced value def inverse_difference(history, yhat, interval=1): return yhat + history[-interval] # load dataset series = Series.from_csv('mkr.csv', header=None) # seasonal difference X = series.values X = X.astype('float32') days_in_year = 365 differenced = difference(X, days_in_year) # fit model model = ARIMA(differenced, order=(0,0,1)) model_fit = model.fit(disp=0) # multi-step out-of-sample forecast forecast = model_fit.forecast(steps=7)[0] # invert the differenced forecast to something usable history = [x for x in X] day = 1 for yhat in forecast: inverted = inverse_difference(history, yhat,days_in_year) print('Day %d sail:= %.3f' % (month, inverted)) history.append(inverted) day += 1 A: There are hopeful google search results for this problem found by 'jupyter notebook rest api', e.g. https://blog.ouseful.info/2017/09/06/building-a-json-api-using-jupyer-notebooks-in-under-5-minutes/ Have you tried using kernelgateway? A: If you can install jupyter server proxy, this project allows to implement a REST API in the notebook https://github.com/gbrault/jpjwidgets A: An addition to @Pangur answer, for a complete documentation from jupyter: https://jupyter-kernel-gateway.readthedocs.io/en/latest/http-mode.html You can generate even swagger as in below: Once cell # POST /person req = json.loads(REQUEST) row_id = person_table.insert(req['body']) res = {'id' : row_id} print(json.dumps(res)) Next cell (companion cell): # ResponseInfo POST /person print(json.dumps({ "headers" : { "Content-Type" : "application/json" }, "status" : 201 })) Swagger: https://jupyter-kernel-gateway.readthedocs.io/en/latest/http-mode.html#swagger-spec Full demo: https://github.com/jupyter/kernel_gateway_demos/tree/master/scotch_demo
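Outside of Jupyter-specific tooling, another common pattern is to wrap the finished model in a small Flask app; this is only a sketch (not taken from the answers) and it assumes model_fit, history, days_in_year and inverse_difference from the question's code are already defined:
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/forecast")
def forecast_endpoint():
    preds = model_fit.forecast(steps=7)[0]
    hist = list(history)
    result = {}
    for day, yhat in enumerate(preds, start=1):
        value = inverse_difference(hist, yhat, days_in_year)
        result["day_%d" % day] = float(value)
        hist.append(value)
    return jsonify(result)

app.run(port=5000)  # note: this call blocks the notebook cell while the API is being served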
Is it possible to create a Rest API using Jupyter notebook?,if yes How to create Rest API for the following code interms of json format
I have built a model for the time series analysis which is going to predict the sail for the next days,the model is working fine,but i want to convert that into Rest API in JSON format using the Anaconda jupyter notebook,Please let me know the way for that .Thanks in advance. Here is the code: from pandas import Series from statsmodels.tsa.arima_model import ARIMA import numpy # create a differenced series def difference(dataset, interval=1): diff = list() for i in range(interval, len(dataset)): value = dataset[i] - dataset[i - interval] diff.append(value) return numpy.array(diff) # invert differenced value def inverse_difference(history, yhat, interval=1): return yhat + history[-interval] # load dataset series = Series.from_csv('mkr.csv', header=None) # seasonal difference X = series.values X = X.astype('float32') days_in_year = 365 differenced = difference(X, days_in_year) # fit model model = ARIMA(differenced, order=(0,0,1)) model_fit = model.fit(disp=0) # multi-step out-of-sample forecast forecast = model_fit.forecast(steps=7)[0] # invert the differenced forecast to something usable history = [x for x in X] day = 1 for yhat in forecast: inverted = inverse_difference(history, yhat,days_in_year) print('Day %d sail:= %.3f' % (month, inverted)) history.append(inverted) day += 1
[ "There are hopeful google search results for this problem found by 'jupyter notebook rest api', e.g. https://blog.ouseful.info/2017/09/06/building-a-json-api-using-jupyer-notebooks-in-under-5-minutes/\nHave you tried using kernelgateway?\n", "If you can install jupyter server proxy, this project allows to implement a REST API in the notebook\nhttps://github.com/gbrault/jpjwidgets\n", "An addition to @Pangur answer, for a complete documentation from jupyter:\nhttps://jupyter-kernel-gateway.readthedocs.io/en/latest/http-mode.html\nYou can generate even swagger as in below:\nOnce cell\n# POST /person\nreq = json.loads(REQUEST)\nrow_id = person_table.insert(req['body'])\nres = {'id' : row_id}\nprint(json.dumps(res))\n\nNext cell (companion cell):\n# ResponseInfo POST /person\nprint(json.dumps({\n \"headers\" : {\n \"Content-Type\" : \"application/json\"\n },\n \"status\" : 201\n}))\n\nSwagger:\nhttps://jupyter-kernel-gateway.readthedocs.io/en/latest/http-mode.html#swagger-spec\nFull demo:\nhttps://github.com/jupyter/kernel_gateway_demos/tree/master/scotch_demo\n" ]
[ 3, 0, 0 ]
[]
[]
[ "jupyter_notebook", "mysql", "python", "time_series" ]
stackoverflow_0051018536_jupyter_notebook_mysql_python_time_series.txt
Q: Is one way of returning a user-input value, using a try-except clause, better than the other? For context, I am new to Python, and somewhat new to programming in general. In CS50's "Little Professor" problem (details here, but not needed: https://cs50.harvard.edu/python/2022/psets/4/professor/) my program passes all correctness checks; but, unfortunately, programs aren't checked for efficiency, style or "cleanliness", making those details harder to learn... Therefore, using the function below as an example, I am trying to grasp how to think about choosing an implementation when there are multiple options. In the code below, I have a function that prompts the user to input an int(). If the user inputs 1, 2 or 3, return that value. Otherwise, if the user does not input 1, 2 or 3, or the input is not even an int(), re-prompt the user. The first includes the conditional within the try block, and breaks if the condition is met, returning the value once out of the loop. def get_level(): while True: try: level = int(input("Level: ")) if 0 < level <= 3: break except ValueError: pass return level In the second, once the input has met the int() condition, if the value is 1, 2 or 3, it breaks out of the loop by returning the value of level, similarly re-prompting if not. (Note: I noticed the below also works without the "else:" statement, which is a little confusing to me as well, why isn't it needed?) def get_level(): while True: try: level = int(input("Level: ")) except ValueError: pass else: if 0 < level <= 3: return level Is one of these examples better to use than the other, and if so, why? Any help is greatly appreciated, but if there is not a specific answer here, thoughts on the overall concept would be incredibly helpful as well! A: Generally, you should put as little code as possible inside a try, so that you don't misconstrue an unrelated error as the one you're expecting. It also makes sense to separate the code that obtains the level from the code that acts on it, so that level has known validity for as much code as possible. Both of these points favor the second approach (the second moreso in a larger example), although in this case you could also write it as def get_level(): while True: try: level = int(input("Level: ")) except ValueError: continue if 0 < level <= 3: return level which avoids the else indentation. (You do need that else as given, since otherwise if the first input attempt doesn't parse you'll read level before assigning to it.) Meanwhile, the question of break vs. return is really just one of style or conceptualization: do you see the value being in the desired range as meaning "we should stop asking" (break) or "we have the answer" (return)?
Is one way of returning a user-input value, using a try-except clause, better than the other?
For context, I am new to Python, and somewhat new to programming in general. In CS50's "Little Professor" problem (details here, but not needed: https://cs50.harvard.edu/python/2022/psets/4/professor/) my program passes all correctness checks; but, unfortunately, programs aren't checked for efficiency, style or "cleanliness", making those details harder to learn... Therefore, using the function below as an example, I am trying to grasp how to think about choosing an implementation when there are multiple options. In the code below, I have a function that prompts the user to input an int(). If the user inputs 1, 2 or 3, return that value. Otherwise, if the user does not input 1, 2 or 3, or the input is not even an int(), re-prompt the user. The first includes the conditional within the try block, and breaks if the condition is met, returning the value once out of the loop. def get_level(): while True: try: level = int(input("Level: ")) if 0 < level <= 3: break except ValueError: pass return level In the second, once the input has met the int() condition, if the value is 1, 2 or 3, it breaks out of the loop by returning the value of level, similarly re-prompting if not. (Note: I noticed the below also works without the "else:" statement, which is a little confusing to me as well, why isn't it needed?) def get_level(): while True: try: level = int(input("Level: ")) except ValueError: pass else: if 0 < level <= 3: return level Is one of these examples better to use than the other, and if so, why? Any help is greatly appreciated, but if there is not a specific answer here, thoughts on the overall concept would be incredibly helpful as well!
[ "Generally, you should put as little code as possible inside a try, so that you don't misconstrue an unrelated error as the one you're expecting. It also makes sense to separate the code that obtains the level from the code that acts on it, so that level has known validity for as much code as possible. Both of these points favor the second approach (the second moreso in a larger example), although in this case you could also write it as\ndef get_level():\n while True:\n try:\n level = int(input(\"Level: \"))\n except ValueError:\n continue\n if 0 < level <= 3:\n return level\n\nwhich avoids the else indentation. (You do need that else as given, since otherwise if the first input attempt doesn't parse you'll read level before assigning to it.)\nMeanwhile, the question of break vs. return is really just one of style or conceptualization: do you see the value being in the desired range as meaning \"we should stop asking\" (break) or \"we have the answer\" (return)?\n" ]
[ 1 ]
[]
[]
[ "python", "try_except" ]
stackoverflow_0073992584_python_try_except.txt
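A small generalization of the prompt-until-valid pattern discussed in the entry above, for the case where several questions need the same treatment; the function name and the valid set are illustrative, not part of the original problem.

def prompt_int(prompt, valid):
    # Keep asking until the input parses as an int and is one of the valid values.
    while True:
        try:
            value = int(input(prompt))
        except ValueError:
            continue        # not an integer at all: ask again
        if value in valid:
            return value    # we have the answer

level = prompt_int("Level: ", {1, 2, 3})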
Q: How to connect with oracle database? I am using this code to connect with oracle database: import cx_Oracle conn_str = u"jbdc:oracle:thin:@****_***.**.com" conn = cx_Oracle.connect(conn_str) c = conn.cursor() However, I am getting this error: ORA-12560: TNS:protocol adapter error How can I resolve it? Thank you A: You cannot use a JDBC thin connect string to connect with cx_Oracle (or the new python-oracledb). You must use either an alias found in a tnsnames.ora file, or the full connect descriptor (such as that found in a tnsnames.ora) file or an EZ+ Connect string. An example follows: conn_str = "user/password@host:port/service_name" With the new python-oracledb driver you can also do this: import oracledb conn = oracledb.connect(user="USER", password="PASSWORD", host="my_host", port=1521, service_name="my_service") See the documentation for more details.
How to connect with oracle database?
I am using this code to connect with oracle database: import cx_Oracle conn_str = u"jbdc:oracle:thin:@****_***.**.com" conn = cx_Oracle.connect(conn_str) c = conn.cursor() However, I am getting this error: ORA-12560: TNS:protocol adapter error How can I resolve it? Thank you
[ "You cannot use a JDBC thin connect string to connect with cx_Oracle (or the new python-oracledb). You must use either an alias found in a tnsnames.ora file, or the full connect descriptor (such as that found in a tnsnames.ora) file or an EZ+ Connect string. An example follows:\nconn_str = \"user/password@host:port/service_name\"\n\nWith the new python-oracledb driver you can also do this:\nimport oracledb\nconn = oracledb.connect(user=\"USER\", password=\"PASSWORD\", host=\"my_host\",\n port=1521, service_name=\"my_service\")\n\nSee the documentation for more details.\n" ]
[ 1 ]
[]
[]
[ "cx_oracle", "python" ]
stackoverflow_0074604565_cx_oracle_python.txt
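To make the connect-string advice in the entry above concrete, here is a hedged sketch that builds the DSN explicitly with cx_Oracle.makedsn; the host, port, service name and credentials are placeholders, not values taken from the question.

import cx_Oracle

# All of these values are assumptions -- substitute your own.
dsn = cx_Oracle.makedsn("dbhost.example.com", 1521, service_name="my_service")
conn = cx_Oracle.connect(user="my_user", password="my_password", dsn=dsn)

cur = conn.cursor()
cur.execute("SELECT sysdate FROM dual")   # trivial query to prove the connection works
print(cur.fetchone())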
Q: Is it possible to convert a sympy expression to a pyomo expression? I'm currently using sympy to parse a string equation and replace the variables with either values or Pyomo variables: model = pe.ConcreteModel() model.flow = pe.Var() DEMAND = 100 equation = sympify('10 * DEMAND * FLOW', evaluate=False) updated_equation = equation.subs('DEMAND', DEMAND).subs('FLOW', model.flow) Does anyone know if it is then possible to convert this to a linear expression I can use in a Pyomo model? A: Yes: if you look in pyomo.core.expr.sympy_tools, there are two methods: sympyify_expression(expr) will take a Pyomo expression and return a sympy expression, with all Pyomo Var objects replaced by sympy real Symbols, along with a PyomoSympyBimap object that maps the Pyomo Var objects to the corresponding sympy Symbol objects sympy2pyomo_expression(expr, object_map) will take a sympy expression (containing sympy Symbols), along with the PyomoSymbolBimap that relates the Symbols to Pyomo components and will return a native Pyomo expression. What you are asking for is something slightly different: You are starting with a sympy expression that already contains Pyomo objects. You should be able to adapt the Sympy2PyomoVisitor class to work for your specific use case (you should only need to reimplement the beforeChild method to allow for the presence of Pyomo components)
Is it possible to convert a sympy expression to a pyomo expression?
I'm currently using sympy to parse a string equation and replace the variables with either values or Pyomo variables: model = pe.ConcreteModel() model.flow = pe.Var() DEMAND = 100 equation = sympify('10 * DEMAND * FLOW', evaluate=False) updated_equation = equation.subs('DEMAND', DEMAND).subs('FLOW', model.flow) Does anyone know if it is then possible to convert this to a linear expression I can use in a Pyomo model?
[ "Yes: if you look in pyomo.core.expr.sympy_tools, there are two methods:\n\nsympyify_expression(expr) will take a Pyomo expression and return a sympy expression, with all Pyomo Var objects replaced by sympy real Symbols, along with a PyomoSympyBimap object that maps the Pyomo Var objects to the corresponding sympy Symbol objects\nsympy2pyomo_expression(expr, object_map) will take a sympy expression (containing sympy Symbols), along with the PyomoSymbolBimap that relates the Symbols to Pyomo components and will return a native Pyomo expression.\n\nWhat you are asking for is something slightly different: You are starting with a sympy expression that already contains Pyomo objects. You should be able to adapt the Sympy2PyomoVisitor class to work for your specific use case (you should only need to reimplement the beforeChild method to allow for the presence of Pyomo components)\n" ]
[ 3 ]
[]
[]
[ "pyomo", "python", "sympy" ]
stackoverflow_0074601162_pyomo_python_sympy.txt
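A minimal sketch of the round trip described in the answer above, applied to the question's expression. The two helpers live in pyomo.core.expr.sympy_tools; the return order shown here (bimap first, sympy expression second) is an assumption to verify against your Pyomo version.

import pyomo.environ as pe
from pyomo.core.expr.sympy_tools import sympyify_expression, sympy2pyomo_expression

model = pe.ConcreteModel()
model.flow = pe.Var()
DEMAND = 100

# Pyomo -> sympy: the bimap records which sympy Symbol stands for model.flow
object_map, sym_expr = sympyify_expression(10 * DEMAND * model.flow)

# ... any sympy manipulation of sym_expr would go here ...

# sympy -> Pyomo: the same bimap restores the original Var objects
pyomo_expr = sympy2pyomo_expression(sym_expr, object_map)
print(pyomo_expr)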
Q: How to split at uppercase and brackets I'm trying to parse lyrics site and I need to collect song's lyrics. I have issues with my output I need to have lyrics displayed as below enter image description here I've figured out how to split text at uppercase, but there is one thing remains: the brackets are splitted unproperly, here's my code: import re import requests from bs4 import BeautifulSoup r = requests.get('https://genius.com/Taylor-swift-lavender-haze-lyrics') #print(r.status_code) if r.status_code != 200: print('Error') soup = BeautifulSoup(r.content, 'lxml') titles = soup.find_all('title') titles = titles[0].text titlist = titles.split('Lyrics | ') titlist.pop(1) titlist = titlist[0].replace("\xa0", " ") print(titlist) divs = soup.find_all('div', {'class' : 'Lyrics__Container-sc-1ynbvzw-6 YYrds'}) #print(divs[0].text) lyrics = (divs[0].text) res = re.findall(r'[A-Z][^A-Z]*', lyrics) res_l = [] for el in res: res_l.append(el + '\n') print(el) and output is snown on a screenshot. How do I fix it?enter image description here for those, who asked, added a full code A: As brackets have a meaning in regex you'll need to escape them. In python you should be able to use \[ to get what you want. A: You can .unwrap unnecessary tags (<a>, <span>), replace <br> with newlines and then get text: import requests from bs4 import BeautifulSoup url = "https://genius.com/Taylor-swift-lavender-haze-lyrics" soup = BeautifulSoup(requests.get(url).content, "html.parser") for t in soup.select("#lyrics-root [data-lyrics-container]"): for tag in t.select("span, a"): tag.unwrap() for br in t.select("br"): br.replace_with("\n") print(t.text) Prints: [Intro] Meet me at midnight [Verse 1] Staring at the ceiling with you Oh, you don't ever say too much And you don't really read into My melancholia [Pre-Chorus] I been under scrutiny (Yeah, oh, yeah) You handle it beautifully (Yeah, oh, yeah) All this shit is new to me (Yeah, oh, yeah) ...and so on.
How to split at uppercase and brackets
I'm trying to parse lyrics site and I need to collect song's lyrics. I have issues with my output I need to have lyrics displayed as below enter image description here I've figured out how to split text at uppercase, but there is one thing remains: the brackets are splitted unproperly, here's my code: import re import requests from bs4 import BeautifulSoup r = requests.get('https://genius.com/Taylor-swift-lavender-haze-lyrics') #print(r.status_code) if r.status_code != 200: print('Error') soup = BeautifulSoup(r.content, 'lxml') titles = soup.find_all('title') titles = titles[0].text titlist = titles.split('Lyrics | ') titlist.pop(1) titlist = titlist[0].replace("\xa0", " ") print(titlist) divs = soup.find_all('div', {'class' : 'Lyrics__Container-sc-1ynbvzw-6 YYrds'}) #print(divs[0].text) lyrics = (divs[0].text) res = re.findall(r'[A-Z][^A-Z]*', lyrics) res_l = [] for el in res: res_l.append(el + '\n') print(el) and output is snown on a screenshot. How do I fix it?enter image description here for those, who asked, added a full code
[ "As brackets have a meaning in regex you'll need to escape them. In python you should be able to use \\[ to get what you want.\n", "You can .unwrap unnecessary tags (<a>, <span>), replace <br> with newlines and then get text:\nimport requests\nfrom bs4 import BeautifulSoup\n\nurl = \"https://genius.com/Taylor-swift-lavender-haze-lyrics\"\nsoup = BeautifulSoup(requests.get(url).content, \"html.parser\")\n\nfor t in soup.select(\"#lyrics-root [data-lyrics-container]\"):\n\n for tag in t.select(\"span, a\"):\n tag.unwrap()\n\n for br in t.select(\"br\"):\n br.replace_with(\"\\n\")\n\n print(t.text)\n\nPrints:\n[Intro]\nMeet me at midnight\n\n[Verse 1]\nStaring at the ceiling with you\nOh, you don't ever say too much\nAnd you don't really read into\nMy melancholia\n\n[Pre-Chorus]\nI been under scrutiny (Yeah, oh, yeah)\nYou handle it beautifully (Yeah, oh, yeah)\nAll this shit is new to me (Yeah, oh, yeah)\n\n...and so on.\n\n" ]
[ 0, 0 ]
[]
[]
[ "parsing", "python", "python_3.x", "split" ]
stackoverflow_0074603507_parsing_python_python_3.x_split.txt
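To illustrate the escaping point from the first answer above, here is a self-contained sketch that keeps bracketed section headers such as [Verse 1] intact while still splitting the remaining text at capital letters; the sample string is made up.

import re

lyrics = "[Intro]Meet me at midnight[Verse 1]Staring at the ceiling with you"

# \[[^\]]*\]     -> a whole bracketed header, with the brackets escaped
# [A-Z][^A-Z\[]* -> a run starting at a capital letter, stopping before the
#                   next capital or the next opening bracket
for part in re.findall(r'\[[^\]]*\]|[A-Z][^A-Z\[]*', lyrics):
    print(part)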
Q: Largest Perimeter Triangle Leetcode Problem Python I have given the mentioned problem quite a thought but was not able to come up with a working solution on my own. So I found the following solution, but I want to understand why does it work. Here it is: class Solution: def largestPerimeter(self, nums: List[int]) -> int: # triange in-equality a+b > c # sum of 2 smallest > largest nums.sort(reverse=True) a,b,c = inf,inf,inf for n in nums: a, b, c = n, a, b if a + b > c: return a+b+c return 0 Now I will write how I understand this code works and what I do not understand. So. The list is sorted in a descending way with nums.sort(reverse=True). Then a, b, c are given values of infinity each. Then on the first iteration of the for cycle a, b, c are set equal to: a - the highest value from nums; b, c - infinity. Then the program checks if the highest value from nums + infinity is higher than infinity. But isn`t this condition true for any value of a? And even if I understood the condition, why is the output equal to a+b+c = highest value from nums + infinity + infinity? How can this be a valid perimeter for a triangle? A: But isn`t this condition true for any value of a? No, that condition is false for any value of a. In the first iteration, a + b > c will resolve to n0 + inf > inf. Now, n0 + inf returns inf for any integer n0, so inf > inf is false. Also, in the second iteration, you will have n1 + n0 > inf, which is also always false. Only in the third iteration, all the infs will be gone and you will start having n2 + n1 > n0, then actual comparissons will happen. Maybe it would be more clear if the implementation was something like this equivalent code: nums.sort(reverse=True) # extract the first 2 elements in a and b, # and reassign all the remaining back to nums b, a, *nums = nums for n in nums: a, b, c = n, a, b if a + b > c: return a+b+c Please note that, by doing this, you would need to treat the special cases where len(nums) < 3.
Largest Perimeter Triangle Leetcode Problem Python
I have given the mentioned problem quite a thought but was not able to come up with a working solution on my own. So I found the following solution, but I want to understand why does it work. Here it is: class Solution: def largestPerimeter(self, nums: List[int]) -> int: # triange in-equality a+b > c # sum of 2 smallest > largest nums.sort(reverse=True) a,b,c = inf,inf,inf for n in nums: a, b, c = n, a, b if a + b > c: return a+b+c return 0 Now I will write how I understand this code works and what I do not understand. So. The list is sorted in a descending way with nums.sort(reverse=True). Then a, b, c are given values of infinity each. Then on the first iteration of the for cycle a, b, c are set equal to: a - the highest value from nums; b, c - infinity. Then the program checks if the highest value from nums + infinity is higher than infinity. But isn`t this condition true for any value of a? And even if I understood the condition, why is the output equal to a+b+c = highest value from nums + infinity + infinity? How can this be a valid perimeter for a triangle?
[ "\nBut isn`t this condition true for any value of a?\n\nNo, that condition is false for any value of a.\nIn the first iteration, a + b > c will resolve to n0 + inf > inf. Now, n0 + inf returns inf for any integer n0, so inf > inf is false.\nAlso, in the second iteration, you will have n1 + n0 > inf, which is also always false.\nOnly in the third iteration, all the infs will be gone and you will start having n2 + n1 > n0, then actual comparissons will happen.\nMaybe it would be more clear if the implementation was something like this equivalent code:\nnums.sort(reverse=True)\n# extract the first 2 elements in a and b, \n# and reassign all the remaining back to nums\nb, a, *nums = nums \nfor n in nums:\n a, b, c = n, a, b\n if a + b > c:\n return a+b+c \n\nPlease note that, by doing this, you would need to treat the special cases where len(nums) < 3.\n" ]
[ 0 ]
[]
[]
[ "greedy", "python" ]
stackoverflow_0074604424_greedy_python.txt
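A quick demonstration of why the infinity sentinels in the answer above never satisfy the triangle check until three real values have been loaded; the sample list is the usual LeetCode example.

from math import inf

print(5 + inf > inf)   # False: 5 + inf is still inf, and inf > inf is False
print(7 + 5 > inf)     # False: any finite sum stays below inf
print(2 + 2 > 1)       # True: once the sentinels are gone, real checks begin

nums = sorted([2, 1, 2], reverse=True)
a, b, c = inf, inf, inf
for n in nums:
    a, b, c = n, a, b
    if a + b > c:
        print(a + b + c)   # 5, the largest valid perimeter
        break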
Q: Why a norm distribution does not plot a line on stats.probplot()? The problem is with the resultant graph of function scipy.stats.probplot(). Samples from a normal distribution doesn't produce a line as expected. I am trying to normalize some data using graphs as guidance. However, after some strange results showing that zscore and log transformations were having no effect, I started looking for something wrong. So, I built a graph using synthetic values that has a norm distribution and the resultant graph seems very awkward. Here is the steps to reproduce the array and the graph: import math import matplotlib.pyplot as plt import numpy as np from scipy import stats mu = 0 variance = 1 sigma = math.sqrt(variance) x = np.linspace(mu - 3*sigma, mu + 3*sigma, 100) norm = stats.norm.pdf(x, mu, sigma) plt.plot(x, norm) plt.show() _ = stats.probplot(norm, plot=plt, sparams=(0, 1)) plt.show() Distribution curve: Probability plot: A: Your synthesized data aren't normally distributed, they are uniformly distributed, this is what numpy.linspace() does. You can visualize this by adding seaborn.distplot(x, fit=scipy.stats.norm). import math import matplotlib.pyplot as plt import numpy as np from scipy import stats import seaborn as sns mu = 0 variance = 1 sigma = math.sqrt(variance) x = np.linspace(mu - 3*sigma, mu + 3*sigma, 100) y = stats.norm.pdf(x, mu, sigma) sns.distplot(y, fit=stats.norm) fig = plt.figure() res = stats.probplot(y, plot=plt, sparams=(0, 1)) plt.show() Try synthesizing your data with numpy.random.normal(). This will give you normally distributed data. import math import matplotlib.pyplot as plt import numpy as np from scipy import stats import seaborn as sns mu = 0 variance = 1 sigma = math.sqrt(variance) x = np.random.normal(loc=mu, scale=sigma, size=100) sns.distplot(x, fit=stats.norm) fig = plt.figure() res = stats.probplot(x, plot=plt, sparams=(0, 1)) plt.show()
Why a norm distribution does not plot a line on stats.probplot()?
The problem is with the resultant graph of function scipy.stats.probplot(). Samples from a normal distribution doesn't produce a line as expected. I am trying to normalize some data using graphs as guidance. However, after some strange results showing that zscore and log transformations were having no effect, I started looking for something wrong. So, I built a graph using synthetic values that has a norm distribution and the resultant graph seems very awkward. Here is the steps to reproduce the array and the graph: import math import matplotlib.pyplot as plt import numpy as np from scipy import stats mu = 0 variance = 1 sigma = math.sqrt(variance) x = np.linspace(mu - 3*sigma, mu + 3*sigma, 100) norm = stats.norm.pdf(x, mu, sigma) plt.plot(x, norm) plt.show() _ = stats.probplot(norm, plot=plt, sparams=(0, 1)) plt.show() Distribution curve: Probability plot:
[ "Your synthesized data aren't normally distributed, they are uniformly distributed, this is what numpy.linspace() does. You can visualize this by adding seaborn.distplot(x, fit=scipy.stats.norm).\nimport math\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy import stats\nimport seaborn as sns\n\n\nmu = 0\nvariance = 1\nsigma = math.sqrt(variance)\nx = np.linspace(mu - 3*sigma, mu + 3*sigma, 100)\ny = stats.norm.pdf(x, mu, sigma)\n\nsns.distplot(y, fit=stats.norm)\nfig = plt.figure()\nres = stats.probplot(y, plot=plt, sparams=(0, 1))\nplt.show()\n\nTry synthesizing your data with numpy.random.normal(). This will give you normally distributed data.\nimport math\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy import stats\nimport seaborn as sns\n\n\nmu = 0\nvariance = 1\nsigma = math.sqrt(variance)\nx = np.random.normal(loc=mu, scale=sigma, size=100)\n\nsns.distplot(x, fit=stats.norm)\nfig = plt.figure()\nres = stats.probplot(x, plot=plt, sparams=(0, 1))\nplt.show()\n\n" ]
[ 0 ]
[]
[]
[ "data_cleaning", "feature_engineering", "plot", "python", "scipy.stats" ]
stackoverflow_0074501950_data_cleaning_feature_engineering_plot_python_scipy.stats.txt
Q: Cost efficient way to test scalability of RS485 Modbus device read operation We are fairly new to Modbus and RS485 communication and are currently in the process of writing a Python application to read specified registers from multiple smart meters. Our final python script shall be able to read registers from up to 50-200 smart meters at a time via RS485 using Modbus. For testing, performance and scalability purposes I would like to have a cost efficient physical simulation environment in our lab in order to understand how many devices we will be able to read in practice. The goal is to understand how many devices, with which length of bus wiring, repeaters, etc. our Python script can handle, what the response time is, and whether other parts of the script need to be adjusted. We understand that we can make use of multiple serial interfaces but we would like to discover how we can minimize the requirement of multiple serial interfaces with the use of RS485 repeaters as well. We were asking ourselves if there might be some physical Modbus RTU sensors with RS485 to get to a close setup as if they were smart meters. Given the size of 50-200 devices are there any similar, simple devices in range of $5 where we could do similar practical testings? So far, we only have access to small lab of a few smart meters where we can do our tests. Our current script runs on a Raspberry Pi 4 4GB. A: In my opinion, there is no real need for testing. But you need to take into account several factors that you seem not to be aware of. Consider the following: How many registers are you going to read from each meter? You can read a couple of registers (for instance, just total power) or many (50-100 or more if you read all the data the meters are collecting). How frequently will you take readings from each meter? If you are fine reading once every minute you will be able to read from many devices, but what if you need readings every second? Most meters will probably update once or twice a second; if you need 100 registers from 200 devices twice a second you might have a problem (more on that later). How far apart are your devices? Are they all in the same room within 10 meters from your computer/PLC or 1 km apart each? As you probably know, the maximum speed on any RS-485 link depends on the maximum length of the bus. And obviously, that's also related to my two previous questions: number of registers (amount of data) and its rate (once a second or once a minute). Once you answer those questions you would be able to do a back-of-the-envelope calculation, you can take a look here for a sample calculation. Elaborating a bit more on your questions: The goal is to understand how many devices, with which length of bus wiring, repeaters, etc. our Python script can handle, what the response time is, and whether other parts of the script need to be adjusted. This is, in my opinion, the wrong way of looking at the problem. Your software (Python script) should never be your limiting factor. Are you sure you would be able to decide the location of your meters based on what your software is able to handle? I think it should be the other way around. We understand that we can make use of multiple serial interfaces but we would like to discover how we can minimize the requirement of multiple serial interfaces with the use of RS485 repeaters as well. If you split your network into two or more chunks you might reach a point where either your Python script or the computing power of your RPi does become your limiting factor. 
I don't know what your setup would look like but I would think about it this way: every time you split your network you will need two more wires (and one more serial port), if your distances are long, after a very short while splitting you would be better off going to Modbus over TCP. We were asking ourselves if there might be some physical Modbus RTU sensors with RS485 to get to a close setup as if they were smart meters. Given the size of 50-200 devices are there any similar, simple devices in range of $5 where we could do similar practical testings? Again, this is in my opinion, the wrong approach. Instead of looking for lookalike devices why don't you look at the documentation of the real device you will be using? That should give you, at the very least, the refresh rate at which you will be reading data. Obviously, your bus will not be sending data faster than that. You would be able to emulate any sensor with a computer (RPi or otherwise) and a serial port. With most computers and most serial ports you should be able to send data (random, or fake, or whatever) as fast as possible for each baud rate you can imagine from 150 bps to at least 1 Mbps, but that would be pointless. What you want to know is the data rate of your real sensors/meters, and that should become clear from reading their documentation (or otherwise asking their manufacturer). At the very bottom of every measuring instrument, there is an AD (analog to digital) converter that's taking samples at a fixed rate. You can ask the instrument for faster readings but if you do that what you will get is the same values until a new reading comes in. If for instance, you get readings from the instrument once a second and you ask for values twice a second you will get the same value twice each second. As simple as that. I hope you find some useful ideas in my long and theoretical post.
Cost efficient way to test scalability of RS485 Modbus device read operation
We are fairly new to Modbus and RS485 communication and are currently in the process of writing a Python application to read specified registers from multiple smart meters. Our final python script shall be able to read registers from up to 50-200 smart meters at a time via RS485 using Modbus. For testing, performance and scalability purposes I would like to have a cost efficient physical simulation environment in our lab in order to understand how many devices we will be able to read in practice. The goal is to understand how many devices, with which length of bus wiring, repeaters, etc. our Python script can handle, what the response time is, and whether other parts of the script need to be adjusted. We understand that we can make use of multiple serial interfaces but we would like to discover how we can minimize the requirement of multiple serial interfaces with the use of RS485 repeaters as well. We were asking ourselves if there might be some physical Modbus RTU sensors with RS485 to get to a close setup as if they were smart meters. Given the size of 50-200 devices are there any similar, simple devices in range of $5 where we could do similar practical testings? So far, we only have access to small lab of a few smart meters where we can do our tests. Our current script runs on a Raspberry Pi 4 4GB.
[ "In my opinion, there is no real need for testing. But you need to take into account several factors that you seem not to be aware of.\nConsider the following:\n\nHow many registers are you going to read from each meter? You can read a couple of registers (for instance, just total power) or many (50-100 or more if you read all the data the meters are collecting).\nHow frequently will you take readings from each meter? If you are fine reading once every minute you will be able to read from many devices, but what if you need readings every second? Most meters will probably update once or twice a second; if you need 100 registers from 200 devices twice a second you might have a problem (more on that later).\nHow far apart are your devices? Are they all in the same room within 10 meters from your computer/PLC or 1 km apart each? As you probably know, the maximum speed on any RS-485 link depends on the maximum length of the bus. And obviously, that's also related to my two previous questions: number of registers (amount of data) and its rate (once a second or once a minute).\n\nOnce you answer those questions you would be able to do a back-of-the-envelope calculation, you can take a look here for a sample calculation.\nElaborating a bit more on your questions:\n\nThe goal is to understand how many devices, with which length of bus wiring, repeaters, etc. our Python script can handle, what the response time is, and whether other parts of the script need to be adjusted.\n\nThis is, in my opinion, the wrong way of looking at the problem. Your software (Python script) should never be your limiting factor. Are you sure you would be able to decide the location of your meters based on what your software is able to handle? I think it should be the other way around.\n\nWe understand that we can make use of multiple serial interfaces but we would like to discover how we can minimize the requirement of multiple serial interfaces with the use of RS485 repeaters as well.\n\nIf you split your network into two or more chunks you might reach a point where either your Python script or the computing power of your RPi does become your limiting factor. I don't know what your setup would look like but I would think about it this way: every time you split your network you will need two more wires (and one more serial port), if your distances are long, after a very short while splitting you would be better off going to Modbus over TCP.\n\nWe were asking ourselves if there might be some physical Modbus RTU sensors with RS485 to get to a close setup as if they were smart meters. Given the size of 50-200 devices are there any similar, simple devices in range of $5 where we could do similar practical testings?\n\nAgain, this is in my opinion, the wrong approach. Instead of looking for lookalike devices why don't you look at the documentation of the real device you will be using? That should give you, at the very least, the refresh rate at which you will be reading data. Obviously, your bus will not be sending data faster than that.\nYou would be able to emulate any sensor with a computer (RPi or otherwise) and a serial port. With most computers and most serial ports you should be able to send data (random, or fake, or whatever) as fast as possible for each baud rate you can imagine from 150 bps to at least 1 Mbps, but that would be pointless. 
What you want to know is the data rate of your real sensors/meters, and that should become clear from reading their documentation (or otherwise asking their manufacturer).\nAt the very bottom of every measuring instrument, there is an AD (analog to digital) converter that's taking samples at a fixed rate. You can ask the instrument for faster readings but if you do that what you will get is the same values until a new reading comes in. If for instance, you get readings from the instrument once a second and you ask for values twice a second you will get the same value twice each second. As simple as that.\nI hope you find some useful ideas in my long and theoretical post.\n" ]
[ 0 ]
[]
[]
[ "modbus", "python", "raspberry_pi", "rs485", "smartmeter" ]
stackoverflow_0074521448_modbus_python_raspberry_pi_rs485_smartmeter.txt
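The back-of-the-envelope calculation the answer above points to can be written in a few lines. Every number below (baud rate, framing, register count, turnaround time, meter count) is an assumption to replace with figures from the meter's datasheet.

baud = 9600
bits_per_char = 11                    # start + 8 data + parity + stop, typical RTU framing
char_time = bits_per_char / baud

registers_per_meter = 20              # assumed read size per poll
request_bytes = 8                     # Modbus function 0x03 request frame
response_bytes = 5 + 2 * registers_per_meter   # header + CRC plus 2 bytes per register

turnaround = 0.005                    # assumed slave processing time and inter-frame silence
meters = 200

scan_time = meters * ((request_bytes + response_bytes) * char_time + turnaround)
print(f"one full scan of {meters} meters: ~{scan_time:.1f} s")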
Q: Anaconda numpy: how to enable multiprocessing? I am trying to enable multithreading/multiprocessing in an Anaconda installation of Numpy. My test program is the following: import os import numpy as np from timeit import timeit size = 1024 A = np.random.random((size, size)), B = np.random.random((size, size)) print 'Time with %s threads: %f s' \ %(os.environ.get('OMP_NUM_THREADS'), timeit(lambda: np.dot(A, B), number=4)) I change the environmental variable OMP_NUM_THREADS, but regardless its value, it always takes the same amount of time to run the code and always a single core is being used. It appears that my Numpy is linked against OpenBlas: numpy.__config__.show() lapack_opt_info: libraries = ['openblas', 'openblas'] library_dirs = ['/home/myuser/anaconda3/envs/py2env/lib'] define_macros = [('HAVE_CBLAS', None)] language = c blas_opt_info: libraries = ['openblas', 'openblas'] library_dirs = ['/home/myuser/anaconda3/envs/py2env/lib'] define_macros = [('HAVE_CBLAS', None)] language = c openblas_info: libraries = ['openblas', 'openblas'] library_dirs = ['/home/myuser/anaconda3/envs/py2env/lib'] define_macros = [('HAVE_CBLAS', None)] language = c blis_info: NOT AVAILABLE openblas_lapack_info: libraries = ['openblas', 'openblas'] library_dirs = ['/home/myuser/anaconda3/envs/py2env/lib'] define_macros = [('HAVE_CBLAS', None)] language = c lapack_mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE and this is my relevant part of conda list: conda list | grep blas blas 1.1 openblas conda-forge libblas 3.9.0 1_h6e990d7_netlib conda-forge libcblas 3.9.0 3_h893e4fe_netlib conda-forge numpy 1.14.6 py27_blas_openblashd3ea46f_200 [blas_openblas] conda-forge openblas 0.2.20 8 conda-forge scikit-learn 0.19.2 py27_blas_openblasha84fab4_201 [blas_openblas] conda-forge I also tried setting OPENBLAS_NUM_THREADS but did make any difference. I use a Python2.7 environment in conda 4.12.0. A: If you want to build parallel processing, you will have to break down the problem and use python's multi-threading or multi-processing tools to implement it. Here is a scipy doc on how to get started. If you need to do more sophisticated calculations you can also consider using mpi4py. If most of your calculations are just numpy calculations, I would also consider using dask. It helps chunk and parallelizes your code and lots of the numpy functions are already supported.
Anaconda numpy: how to enable multiprocessing?
I am trying to enable multithreading/multiprocessing in an Anaconda installation of Numpy. My test program is the following: import os import numpy as np from timeit import timeit size = 1024 A = np.random.random((size, size)), B = np.random.random((size, size)) print 'Time with %s threads: %f s' \ %(os.environ.get('OMP_NUM_THREADS'), timeit(lambda: np.dot(A, B), number=4)) I change the environmental variable OMP_NUM_THREADS, but regardless its value, it always takes the same amount of time to run the code and always a single core is being used. It appears that my Numpy is linked against OpenBlas: numpy.__config__.show() lapack_opt_info: libraries = ['openblas', 'openblas'] library_dirs = ['/home/myuser/anaconda3/envs/py2env/lib'] define_macros = [('HAVE_CBLAS', None)] language = c blas_opt_info: libraries = ['openblas', 'openblas'] library_dirs = ['/home/myuser/anaconda3/envs/py2env/lib'] define_macros = [('HAVE_CBLAS', None)] language = c openblas_info: libraries = ['openblas', 'openblas'] library_dirs = ['/home/myuser/anaconda3/envs/py2env/lib'] define_macros = [('HAVE_CBLAS', None)] language = c blis_info: NOT AVAILABLE openblas_lapack_info: libraries = ['openblas', 'openblas'] library_dirs = ['/home/myuser/anaconda3/envs/py2env/lib'] define_macros = [('HAVE_CBLAS', None)] language = c lapack_mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE and this is my relevant part of conda list: conda list | grep blas blas 1.1 openblas conda-forge libblas 3.9.0 1_h6e990d7_netlib conda-forge libcblas 3.9.0 3_h893e4fe_netlib conda-forge numpy 1.14.6 py27_blas_openblashd3ea46f_200 [blas_openblas] conda-forge openblas 0.2.20 8 conda-forge scikit-learn 0.19.2 py27_blas_openblasha84fab4_201 [blas_openblas] conda-forge I also tried setting OPENBLAS_NUM_THREADS but did make any difference. I use a Python2.7 environment in conda 4.12.0.
[ "If you want to build parallel processing, you will have to break down the problem and use python's multi-threading or multi-processing tools to implement it. Here is a scipy doc on how to get started.\nIf you need to do more sophisticated calculations you can also consider using mpi4py. If most of your calculations are just numpy calculations, I would also consider using dask. It helps chunk and parallelizes your code and lots of the numpy functions are already supported.\n" ]
[ 0 ]
[]
[]
[ "anaconda", "anaconda3", "numpy", "openblas", "python" ]
stackoverflow_0074603345_anaconda_anaconda3_numpy_openblas_python.txt
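A minimal sketch of the dask suggestion from the answer above: chunk the operands and let the scheduler run the blocked matrix product on several cores. The sizes and chunk shapes are arbitrary, and this assumes dask (with its array extra) is installed.

import dask.array as da

size, chunk = 4096, 1024
A = da.random.random((size, size), chunks=(chunk, chunk))
B = da.random.random((size, size), chunks=(chunk, chunk))

C = A.dot(B)           # builds a lazy graph of blocked multiplications
result = C.compute()   # executes the graph on a local thread pool
print(result.shape)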
Q: Python storing output as variable I'm working on some data parsing from text files, by running a for-loop/condition across the files and writing the output to a result file. I then run another for-loop on that result file to parse it further. Is there a way to store this result in a variable rather than a file?
Python storing output as variable
I'm working on some data parsing from text files, by running a for-loop/condition across the files and writing the output to a result file. I then run another for-loop on that result file to parse it further. Is there a way to store this result in a variable rather than a file?
[]
[]
[ "Yes but more context on what you want to accomplish would be helpful.\nSee below for what I think you want. This loop iterates over each item and then appends each one to a list which contains all outputs\nlst = []\nfor x in range(0,5):\n y = x\n lst.append(y)\n\n" ]
[ -1 ]
[ "parsing", "python", "python_3.x" ]
stackoverflow_0074604797_parsing_python_python_3.x.txt
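A slightly fuller sketch of the idea behind the entry above: collect the first pass's results in a list and run the second pass over that list instead of an intermediate file. The file names and the filter condition are placeholders.

results = []                                  # in-memory replacement for the result file

for path in ["input_a.txt", "input_b.txt"]:   # hypothetical input files
    with open(path) as fh:
        for line in fh:
            if "ERROR" in line:               # stand-in for the real condition
                results.append(line.strip())

# the second pass now reads the list instead of re-opening a file
summary = [item.upper() for item in results]
print(summary)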
Q: how to ask a function to do something specific in its last call without using the help of the items in a for loop? I have a complicated code of several classes and functions. one of the functions is called n times and I need to do something specific in the last call of the function. It is too complicated to use the iteratble of the for loop. I need something that knows that this is the last call of the function, then it does what I want something like that def myfunc(y): x = y*2 if last_call: x = y*3 is it possible? I tried to use the items in the for loop that calls the function but because the code has several classes with several functions, it did not work as there are detailed requirements to call the functions and these requirements contradict what I tried to do. Thanks A: The function itself does not know and should not care about the context in which it is called. If you want the behavior to change, you do that by passing an appropriate argument. In this case, perhaps the other multiplicand should be a second argument; it can default to 2, but the caller should be responsible for passing 3 when appropriate. def myfunc(y, z=2): x = y * z ... Then you can determine the appropriate 2nd argument in the loop for i in range(10): myfunc(i, 2 if range < 9 else 3) or refactor the loop like for i in range(9): myfunc(i) myfunc(10, 3) The exact refactoring depends on your use case; these are just suggestions. In more extreme cases, there should probably be two different functions, with the caller deciding which function to call rather than simply deciding what argument(s) to pass. Here, maybe there is one function to call inside the (suitably shortened) loop, and another to call after the loop completes. A: Use flag. Iterate the calls to that fun and get the last unchanged flag values.
how to ask a function to do something specific in its last call without using the help of the items in a for loop?
I have complicated code with several classes and functions. One of the functions is called n times and I need to do something specific in the last call of the function. It is too complicated to use the iterable of the for loop. I need something that knows that this is the last call of the function, then it does what I want, something like this: def myfunc(y): x = y*2 if last_call: x = y*3 Is it possible? I tried to use the items in the for loop that calls the function, but because the code has several classes with several functions, it did not work, as there are detailed requirements to call the functions and these requirements contradict what I tried to do. Thanks
[ "The function itself does not know and should not care about the context in which it is called. If you want the behavior to change, you do that by passing an appropriate argument.\nIn this case, perhaps the other multiplicand should be a second argument; it can default to 2, but the caller should be responsible for passing 3 when appropriate.\ndef myfunc(y, z=2):\n x = y * z\n ...\n\nThen you can determine the appropriate 2nd argument in the loop\nfor i in range(10):\n myfunc(i, 2 if range < 9 else 3)\n\nor refactor the loop like\nfor i in range(9):\n myfunc(i)\nmyfunc(10, 3)\n\nThe exact refactoring depends on your use case; these are just suggestions.\nIn more extreme cases, there should probably be two different functions, with the caller deciding which function to call rather than simply deciding what argument(s) to pass. Here, maybe there is one function to call inside the (suitably shortened) loop, and another to call after the loop completes.\n", "Use flag. Iterate the calls to that fun and get the last unchanged flag values.\n" ]
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074604805_python.txt
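A caller-side sketch of the "pass an argument" idea from the answer above, for the common case where the iterable's length is known up front; the item list is made up.

def myfunc(y, z=2):
    return y * z

items = [4, 7, 9]                     # hypothetical values the loop runs over
for idx, y in enumerate(items):
    is_last = idx == len(items) - 1
    print(myfunc(y, 3 if is_last else 2))   # prints 8, 14, 27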
Q: Pairing mutual values based on different column in Pandas Suppose I have the next Data Set NAME FRIEND -------------- John Ella John Ben Ella John Ella Ben Dave Ben ... More Values I want to get a list of the mutual friends of John, Ella and Dave. In this example the output should be ['Ben']. I've tried achieving this with loc but I wouldn't get the expected output, and would get 'friend's that aren't mutual. ['Ella', 'Ben', 'John', 'Ben'] Have searched for an answer for some time, couldn't find one that I might be duplicating. A: You can use a crosstab: ct = pd.crosstab(df['NAME'], df['FRIEND']) out = ct.columns[ct.all()].to_list() Or with set operations: s = df.groupby('FRIEND')['NAME'].agg(set) out = s.index[s.eq(set(df['NAME']))].to_list() Output: ['Ben'] Intermediate crosstab: FRIEND Ben Ella John NAME Dave 1 0 0 Ella 1 0 1 John 1 1 0 Intermediate s: FRIEND Ben {Ella, Dave, John} Ella {John} John {Ella} Name: NAME, dtype: object if you want to specifically match {'Ella', 'Dave', 'John'}, even if the are other names in NAME: target = ['Ella', 'Dave', 'John'] ct = pd.crosstab(df['NAME'], df['FRIEND']) out = ct.columns[ct.reindex(target).all()].to_list() Or; target = {'Ella', 'Dave', 'John'} s = df[df['NAME'].isin(target)].groupby(df['FRIEND'])['NAME'].agg(set) out = s.index[s.eq(target)].to_list()
Pairing mutual values based on different column in Pandas
Suppose I have the following data set: NAME FRIEND -------------- John Ella John Ben Ella John Ella Ben Dave Ben ... More Values I want to get a list of the mutual friends of John, Ella and Dave. In this example the output should be ['Ben']. I've tried achieving this with loc, but I didn't get the expected output; I would get friends that aren't mutual: ['Ella', 'Ben', 'John', 'Ben'] I have searched for an answer for some time and couldn't find one that I might be duplicating.
[ "You can use a crosstab:\nct = pd.crosstab(df['NAME'], df['FRIEND'])\n\nout = ct.columns[ct.all()].to_list()\n\nOr with set operations:\ns = df.groupby('FRIEND')['NAME'].agg(set)\nout = s.index[s.eq(set(df['NAME']))].to_list()\n\nOutput: ['Ben']\nIntermediate crosstab:\nFRIEND Ben Ella John\nNAME \nDave 1 0 0\nElla 1 0 1\nJohn 1 1 0\n\nIntermediate s:\nFRIEND\nBen {Ella, Dave, John}\nElla {John}\nJohn {Ella}\nName: NAME, dtype: object\n\nif you want to specifically match {'Ella', 'Dave', 'John'}, even if the are other names in NAME:\ntarget = ['Ella', 'Dave', 'John']\n\nct = pd.crosstab(df['NAME'], df['FRIEND'])\n\nout = ct.columns[ct.reindex(target).all()].to_list()\n\nOr;\ntarget = {'Ella', 'Dave', 'John'}\ns = df[df['NAME'].isin(target)].groupby(df['FRIEND'])['NAME'].agg(set)\nout = s.index[s.eq(target)].to_list()\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074604816_dataframe_pandas_python.txt
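For completeness, the crosstab approach from the answer above run end to end on the question's sample data; only the handful of rows shown in the question are reproduced here.

import pandas as pd

df = pd.DataFrame({
    "NAME":   ["John", "John", "Ella", "Ella", "Dave"],
    "FRIEND": ["Ella", "Ben",  "John", "Ben",  "Ben"],
})

ct = pd.crosstab(df["NAME"], df["FRIEND"])
print(ct.columns[ct.all()].to_list())   # ['Ben']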
Q: Selenium Data scraping issue, unproper data scrapped I am trying to scrape data from:- https://www.canadapharmacy.com/ below are a few pages that I need to scrape:- https://www.canadapharmacy.com/products/abilify-tablet, https://www.canadapharmacy.com/products/accolate, https://www.canadapharmacy.com/products/abilify-mt I need all the information from the page. I wrote the below code:- Using Soup:- base_url = 'https://www.canadapharmacy.com' data = [] for i in tqdm(range(len(test))): r = requests.get(base_url+test[i]) soup = BeautifulSoup(r.text,'lxml') # Scraping medicine Name try: main_name = (soup.find('h1',{"class":"mn"}).text.lstrip()).rstrip() except: main_name = None try: sec_name = ((soup.find('div',{"class":"product-name"}).find('h3').text.lstrip()).rstrip()).replace('\n','') except: sec_name = None try: generic_name = (soup.find('div',{"class":"card product generic strength equal"}).find('div').find('h3').text.lstrip()).rstrip() except: generic_name = None # Description card = ''.join([x.get_text(' ',strip=True) for x in soup.select('div.answer.expanded')]) try: des = card.split('Directions')[0].replace('Description','') except: des = None try: drc = card.split('Directions')[1].split('Ingredients')[0] except: drc = None try: ingre= card.split('Directions')[1].split('Ingredients')[1].split('Cautions')[0] except: ingre = None try: cau=card.split('Directions')[1].split('Ingredients')[1].split('Cautions')[1].split('Side Effects')[0] except: cau = None try: se= card.split('Directions')[1].split('Ingredients')[1].split('Cautions')[1].split('Side Effects')[1] except: se = None for j in soup.find('div',{"class":"answer expanded"}).find_all('h4'): if 'Product Code' in j.text: prod_code = j.text #prod_code = soup.find('div',{"class":"answer expanded"}).find_all('h4')[5].text #//div[@class='answer expanded']//h4 pharma = {"primary_name":main_name, "secondary_name":sec_name, "Generic_Name":generic_name, 'Description':des, 'Directions':drc, 'Ingredients':ingre, 'Cautions':cau, 'Side Effects':se, "Product_Code":prod_code} data.append(pharma) Using Selenium:- main_name = [] sec_name = [] generic_name = [] strength = [] quantity = [] desc = [] direc = [] ingre = [] cau = [] side_effect = [] prod_code = [] for i in tqdm(range(len(test_url))): card = [] driver.get(base_url+test_url[i]) time.sleep(1) try: main_name.append(driver.find_element(By.XPATH,"//div[@class='card product brand strength equal']//h3").text) except: main_name.append(None) try: sec_name.append(driver.find_element(By.XPATH,"//div[@class='card product generic strength equal']//h3").text) except: sec_name.append(None) try: generic_name.append(driver.find_element(By.XPATH,"//div[@class='card product generic strength equal']//h3").text) except: generic_name.append(None) try: for i in driver.find_elements(By.XPATH,"//div[@class='product-content']//div[@class='product-select']//form"): strength.append(i.text) except: strength.append(None) # try: # for i in driver.find_elements(By.XPATH,"//div[@class='product-select']//form//div[@class='product-select-options'][2]"): # quantity.append(i.text) # except: # quantity.append(None) card.append(driver.find_element(By.XPATH,"//div[@class='answer expanded']").text) try: desc.append(card[0].split('Directions')[0].replace('Description','')) except: desc.append(None) try: direc.append(card[0].split('Directions')[1].split('Ingredients')[0]) except: direc.append(None) try: ingre.append(card[0].split('Directions')[1].split('Ingredients')[1].split('Cautions')[0]) except: ingre.append(None) try: 
cau.append(card[0].split('Directions')[1].split('Ingredients')[1].split('Cautions')[1].split('Side Effects')[0]) except: cau.append(None) try: #side_effect.append(card.split('Directions')[1].split('Ingredients')[1].split('Cautions')[1].split('Side Effects')[1]) side_effect.append(card[0].split('Cautions')[1].split('Side Effects')[1]) except: side_effect.append(None) for j in driver.find_elements(By.XPATH,"//div[@class='answer expanded']//h4"): if 'Product Code' in j.text: prod_code.append(j.text) I am able to scrap the data from the pages but facing an issue while scraping the Strength and quantity box. I want to write the code in such a manner so that I could scrape the data from every medicine separately and convert it data frame with columns like 2mg, 5mg, 10mg , 30 tablets, 90 tablets and shows prices. I tried this code:- medicine_name1 = [] medicine_name2 = [] strength = [] quantity = [] for i in tqdm(range(len(test_url))): driver.get(base_url+test_url[i]) time.sleep(1) try: name1 = driver.find_element(By.XPATH,"//div[@class='card product brand strength equal']//h3").text except: name1 = None try: name2 = driver.find_element(By.XPATH,"//div[@class='card product generic strength equal']//h3").text except: name2 = None try: for i in driver.find_elements(By.XPATH,"//div[@class='product-select']//form//div[@class='product-select-options'][1]"): strength.append(i.text) medicine_name1.append(name1) medicine_name2.append(name2) except: strength.append(None) try: for i in driver.find_elements(By.XPATH,"//div[@class='product-select']//form//div[@class='product-select-options'][2]"): quantity.append(i.text) except: quantity.append(None) It works fine but still, here I am getting repeated values for the medicine. Could anyone please check? A: Note: it's usually more reliable to build a list of dictionaries [rather than separate lists like you are in the selenium version.] 
Without a sample/mockup of your desired output, I can't be sure this is the exact format you'd want it in, but I'd suggest something like this solution using requests+bs4 [on the 3 links you includes as example] # import requests # from bs4 import BeautifulSoup rootUrl = 'https://www.canadapharmacy.com' prodList = ['abilify-tablet', 'accolate', 'abilify-mt'] priceList = [] for prod in prodList: prodUrl = f'{rootUrl}/products/{prod}' print('', end=f'Scraping {prodUrl} ') resp = requests.get(prodUrl) if resp.status_code != 200: print(f'{resp.raise_for_status()} - failed to get {prodUrl}') continue pSoup = BeautifulSoup(resp.content) pNameSel = 'div.product-name > h3' for pv in pSoup.select(f'div > div.card.product:has({pNameSel})'): pName = pv.select_one(pNameSel).get_text('\n').strip().split('\n')[0] pDet = {'product_endpt': prod, 'product_name': pName.strip()} brgen = pv.select_one('div.badge-container > div.badge') if brgen: pDet['brand_or_generic'] = brgen.get_text(' ').strip() rxReq = pv.select_one(f'{pNameSel} p.mn') if rxReq: pDet['rx_requirement'] = rxReq.get_text(' ').strip() mgSel = 'div.product-select-options' opSel = 'option[value]:not([value=""])' opSel = f'{mgSel} + {mgSel} select[name="productsizeId"] {opSel}' for pvRow in pv.select(f'div.product-select-options-row:has({opSel})'): pvrStrength = pvRow.select_one(mgSel).get_text(' ').strip() pDet[pvrStrength] = ', '.join([ pvOp.get_text(' ').strip() for pvOp in pvRow.select(opSel) ]) pDet['source_url'] = prodUrl priceList.append(pDet) print(f' [total {len(priceList)} product prices]') and then to display as table: # import pandas pricesDf = pandas.DataFrame(priceList).set_index('product_name') colOrder = sorted(pricesDf.columns, key=lambda c: c == 'source_url') pricesDf = pricesDf[colOrder] # (just to push 'source_url' to the end) You could also get separate columns for each tablet-count-option, if you remove pDet[pvrStrength] = ', '.join([ pvOp.get_text(' ').strip() for pvOp in pvRow.select(opSel) ]) and replace it with this loop: for pvoi, pvOp in enumerate(pvRow.select(opSel)): pvoTxt = pvOp.get_text(' ').strip() tabletCt = pvoTxt.split(' - ')[0] pvoPrice = pvoTxt.split(' - ')[-1] if not tabletCt.endswith(' tablets'): tabletCt = f'[option {pvoi + 1}]' pvoPrice = pvoTxt pDet[f'{pvrStrength} - {tabletCt}'] = pvoPrice index Abilify (Aripiprazole) Generic Equivalent - Abilify (Aripiprazole) Generic Equivalent - Accolate (Zafirlukast) Abilify ODT (Aripiprazole) Generic Equivalent - Abilify ODT (Aripiprazole) product_endpt abilify-tablet abilify-tablet accolate abilify-mt abilify-mt brand_or_generic Brand Generic Generic Brand Generic rx_requirement Prescription Required NaN NaN Prescription Required NaN 2mg - 30 tablets $219.99 NaN NaN NaN NaN 2mg - 90 tablets $526.99 NaN NaN NaN NaN 5mg - 28 tablets $160.99 NaN NaN NaN NaN 5mg - 84 tablets $459.99 NaN NaN NaN NaN 10mg - 28 tablets $116.99 NaN NaN NaN NaN 10mg - 84 tablets $162.99 NaN NaN NaN NaN 15mg - 28 tablets $159.99 NaN NaN NaN NaN 15mg - 84 tablets $198.99 NaN NaN NaN NaN 20mg - 90 tablets $745.99 $67.99 NaN NaN NaN 30mg - 28 tablets $104.99 NaN NaN NaN NaN 30mg - 84 tablets $289.99 $75.99 NaN NaN NaN 1mg/ml Solution - [option 1] 150 ml - $239.99 NaN NaN NaN NaN 2mg - 100 tablets NaN $98.99 NaN NaN NaN 5mg - 100 tablets NaN $43.99 NaN NaN NaN 10mg - 90 tablets NaN $38.59 NaN NaN NaN 15mg - 90 tablets NaN $56.59 NaN NaN NaN 10mg - 60 tablets NaN NaN $109.00 NaN NaN 20mg - 60 tablets NaN NaN $109.00 NaN NaN 10mg ODT - 84 tablets NaN NaN NaN $499.99 NaN 15mg ODT - 84 tablets 
NaN NaN NaN $499.99 NaN 5mg ODT - 90 tablets NaN NaN NaN NaN $59.00 20mg ODT - 90 tablets NaN NaN NaN NaN $89.00 30mg ODT - 150 tablets NaN NaN NaN NaN $129.99 source_url https://www.canadapharmacy.com/products/abilify-tablet https://www.canadapharmacy.com/products/abilify-tablet https://www.canadapharmacy.com/products/accolate https://www.canadapharmacy.com/products/abilify-mt https://www.canadapharmacy.com/products/abilify-mt (I transposed the table since there were so many columns and so few rows. Table markdown can be copied from output of print(pricesDf.T.to_markdown()))
Selenium data scraping issue, improper data scraped
I am trying to scrape data from:- https://www.canadapharmacy.com/ below are a few pages that I need to scrape:- https://www.canadapharmacy.com/products/abilify-tablet, https://www.canadapharmacy.com/products/accolate, https://www.canadapharmacy.com/products/abilify-mt I need all the information from the page. I wrote the below code:- Using Soup:- base_url = 'https://www.canadapharmacy.com' data = [] for i in tqdm(range(len(test))): r = requests.get(base_url+test[i]) soup = BeautifulSoup(r.text,'lxml') # Scraping medicine Name try: main_name = (soup.find('h1',{"class":"mn"}).text.lstrip()).rstrip() except: main_name = None try: sec_name = ((soup.find('div',{"class":"product-name"}).find('h3').text.lstrip()).rstrip()).replace('\n','') except: sec_name = None try: generic_name = (soup.find('div',{"class":"card product generic strength equal"}).find('div').find('h3').text.lstrip()).rstrip() except: generic_name = None # Description card = ''.join([x.get_text(' ',strip=True) for x in soup.select('div.answer.expanded')]) try: des = card.split('Directions')[0].replace('Description','') except: des = None try: drc = card.split('Directions')[1].split('Ingredients')[0] except: drc = None try: ingre= card.split('Directions')[1].split('Ingredients')[1].split('Cautions')[0] except: ingre = None try: cau=card.split('Directions')[1].split('Ingredients')[1].split('Cautions')[1].split('Side Effects')[0] except: cau = None try: se= card.split('Directions')[1].split('Ingredients')[1].split('Cautions')[1].split('Side Effects')[1] except: se = None for j in soup.find('div',{"class":"answer expanded"}).find_all('h4'): if 'Product Code' in j.text: prod_code = j.text #prod_code = soup.find('div',{"class":"answer expanded"}).find_all('h4')[5].text #//div[@class='answer expanded']//h4 pharma = {"primary_name":main_name, "secondary_name":sec_name, "Generic_Name":generic_name, 'Description':des, 'Directions':drc, 'Ingredients':ingre, 'Cautions':cau, 'Side Effects':se, "Product_Code":prod_code} data.append(pharma) Using Selenium:- main_name = [] sec_name = [] generic_name = [] strength = [] quantity = [] desc = [] direc = [] ingre = [] cau = [] side_effect = [] prod_code = [] for i in tqdm(range(len(test_url))): card = [] driver.get(base_url+test_url[i]) time.sleep(1) try: main_name.append(driver.find_element(By.XPATH,"//div[@class='card product brand strength equal']//h3").text) except: main_name.append(None) try: sec_name.append(driver.find_element(By.XPATH,"//div[@class='card product generic strength equal']//h3").text) except: sec_name.append(None) try: generic_name.append(driver.find_element(By.XPATH,"//div[@class='card product generic strength equal']//h3").text) except: generic_name.append(None) try: for i in driver.find_elements(By.XPATH,"//div[@class='product-content']//div[@class='product-select']//form"): strength.append(i.text) except: strength.append(None) # try: # for i in driver.find_elements(By.XPATH,"//div[@class='product-select']//form//div[@class='product-select-options'][2]"): # quantity.append(i.text) # except: # quantity.append(None) card.append(driver.find_element(By.XPATH,"//div[@class='answer expanded']").text) try: desc.append(card[0].split('Directions')[0].replace('Description','')) except: desc.append(None) try: direc.append(card[0].split('Directions')[1].split('Ingredients')[0]) except: direc.append(None) try: ingre.append(card[0].split('Directions')[1].split('Ingredients')[1].split('Cautions')[0]) except: ingre.append(None) try: 
cau.append(card[0].split('Directions')[1].split('Ingredients')[1].split('Cautions')[1].split('Side Effects')[0]) except: cau.append(None) try: #side_effect.append(card.split('Directions')[1].split('Ingredients')[1].split('Cautions')[1].split('Side Effects')[1]) side_effect.append(card[0].split('Cautions')[1].split('Side Effects')[1]) except: side_effect.append(None) for j in driver.find_elements(By.XPATH,"//div[@class='answer expanded']//h4"): if 'Product Code' in j.text: prod_code.append(j.text) I am able to scrap the data from the pages but facing an issue while scraping the Strength and quantity box. I want to write the code in such a manner so that I could scrape the data from every medicine separately and convert it data frame with columns like 2mg, 5mg, 10mg , 30 tablets, 90 tablets and shows prices. I tried this code:- medicine_name1 = [] medicine_name2 = [] strength = [] quantity = [] for i in tqdm(range(len(test_url))): driver.get(base_url+test_url[i]) time.sleep(1) try: name1 = driver.find_element(By.XPATH,"//div[@class='card product brand strength equal']//h3").text except: name1 = None try: name2 = driver.find_element(By.XPATH,"//div[@class='card product generic strength equal']//h3").text except: name2 = None try: for i in driver.find_elements(By.XPATH,"//div[@class='product-select']//form//div[@class='product-select-options'][1]"): strength.append(i.text) medicine_name1.append(name1) medicine_name2.append(name2) except: strength.append(None) try: for i in driver.find_elements(By.XPATH,"//div[@class='product-select']//form//div[@class='product-select-options'][2]"): quantity.append(i.text) except: quantity.append(None) It works fine but still, here I am getting repeated values for the medicine. Could anyone please check?
[ "Note: it's usually more reliable to build a list of dictionaries [rather than separate lists like you are in the selenium version.]\n\nWithout a sample/mockup of your desired output, I can't be sure this is the exact format you'd want it in, but I'd suggest something like this solution using requests+bs4 [on the 3 links you includes as example]\n# import requests\n# from bs4 import BeautifulSoup\n\nrootUrl = 'https://www.canadapharmacy.com'\nprodList = ['abilify-tablet', 'accolate', 'abilify-mt']\npriceList = []\nfor prod in prodList:\n prodUrl = f'{rootUrl}/products/{prod}'\n print('', end=f'Scraping {prodUrl} ')\n resp = requests.get(prodUrl)\n if resp.status_code != 200:\n print(f'{resp.raise_for_status()} - failed to get {prodUrl}')\n continue\n pSoup = BeautifulSoup(resp.content)\n\n pNameSel = 'div.product-name > h3'\n for pv in pSoup.select(f'div > div.card.product:has({pNameSel})'):\n pName = pv.select_one(pNameSel).get_text('\\n').strip().split('\\n')[0] \n pDet = {'product_endpt': prod, 'product_name': pName.strip()}\n\n brgen = pv.select_one('div.badge-container > div.badge')\n if brgen: pDet['brand_or_generic'] = brgen.get_text(' ').strip()\n rxReq = pv.select_one(f'{pNameSel} p.mn')\n if rxReq: pDet['rx_requirement'] = rxReq.get_text(' ').strip()\n\n mgSel = 'div.product-select-options'\n opSel = 'option[value]:not([value=\"\"])'\n opSel = f'{mgSel} + {mgSel} select[name=\"productsizeId\"] {opSel}'\n for pvRow in pv.select(f'div.product-select-options-row:has({opSel})'):\n pvrStrength = pvRow.select_one(mgSel).get_text(' ').strip()\n\n pDet[pvrStrength] = ', '.join([\n pvOp.get_text(' ').strip() for pvOp in pvRow.select(opSel)\n ]) \n\n pDet['source_url'] = prodUrl\n priceList.append(pDet)\n print(f' [total {len(priceList)} product prices]')\n\nand then to display as table:\n# import pandas\n\npricesDf = pandas.DataFrame(priceList).set_index('product_name')\ncolOrder = sorted(pricesDf.columns, key=lambda c: c == 'source_url')\npricesDf = pricesDf[colOrder] # (just to push 'source_url' to the end)\n\n\n\nYou could also get separate columns for each tablet-count-option, if you remove\n pDet[pvrStrength] = ', '.join([\n pvOp.get_text(' ').strip() for pvOp in pvRow.select(opSel)\n ]) \n\nand replace it with this loop:\n for pvoi, pvOp in enumerate(pvRow.select(opSel)): \n pvoTxt = pvOp.get_text(' ').strip()\n tabletCt = pvoTxt.split(' - ')[0]\n pvoPrice = pvoTxt.split(' - ')[-1]\n if not tabletCt.endswith(' tablets'): \n tabletCt = f'[option {pvoi + 1}]' \n pvoPrice = pvoTxt\n \n pDet[f'{pvrStrength} - {tabletCt}'] = pvoPrice \n\n\n\n\n\nindex\nAbilify (Aripiprazole)\nGeneric Equivalent - Abilify (Aripiprazole)\nGeneric Equivalent - Accolate (Zafirlukast)\nAbilify ODT (Aripiprazole)\nGeneric Equivalent - Abilify ODT (Aripiprazole)\n\n\n\n\nproduct_endpt\nabilify-tablet\nabilify-tablet\naccolate\nabilify-mt\nabilify-mt\n\n\nbrand_or_generic\nBrand\nGeneric\nGeneric\nBrand\nGeneric\n\n\nrx_requirement\nPrescription Required\nNaN\nNaN\nPrescription Required\nNaN\n\n\n2mg - 30 tablets\n$219.99\nNaN\nNaN\nNaN\nNaN\n\n\n2mg - 90 tablets\n$526.99\nNaN\nNaN\nNaN\nNaN\n\n\n5mg - 28 tablets\n$160.99\nNaN\nNaN\nNaN\nNaN\n\n\n5mg - 84 tablets\n$459.99\nNaN\nNaN\nNaN\nNaN\n\n\n10mg - 28 tablets\n$116.99\nNaN\nNaN\nNaN\nNaN\n\n\n10mg - 84 tablets\n$162.99\nNaN\nNaN\nNaN\nNaN\n\n\n15mg - 28 tablets\n$159.99\nNaN\nNaN\nNaN\nNaN\n\n\n15mg - 84 tablets\n$198.99\nNaN\nNaN\nNaN\nNaN\n\n\n20mg - 90 tablets\n$745.99\n$67.99\nNaN\nNaN\nNaN\n\n\n30mg - 28 tablets\n$104.99\nNaN\nNaN\nNaN\nNaN\n\n\n30mg 
- 84 tablets\n$289.99\n$75.99\nNaN\nNaN\nNaN\n\n\n1mg/ml Solution - [option 1]\n150 ml - $239.99\nNaN\nNaN\nNaN\nNaN\n\n\n2mg - 100 tablets\nNaN\n$98.99\nNaN\nNaN\nNaN\n\n\n5mg - 100 tablets\nNaN\n$43.99\nNaN\nNaN\nNaN\n\n\n10mg - 90 tablets\nNaN\n$38.59\nNaN\nNaN\nNaN\n\n\n15mg - 90 tablets\nNaN\n$56.59\nNaN\nNaN\nNaN\n\n\n10mg - 60 tablets\nNaN\nNaN\n$109.00\nNaN\nNaN\n\n\n20mg - 60 tablets\nNaN\nNaN\n$109.00\nNaN\nNaN\n\n\n10mg ODT - 84 tablets\nNaN\nNaN\nNaN\n$499.99\nNaN\n\n\n15mg ODT - 84 tablets\nNaN\nNaN\nNaN\n$499.99\nNaN\n\n\n5mg ODT - 90 tablets\nNaN\nNaN\nNaN\nNaN\n$59.00\n\n\n20mg ODT - 90 tablets\nNaN\nNaN\nNaN\nNaN\n$89.00\n\n\n30mg ODT - 150 tablets\nNaN\nNaN\nNaN\nNaN\n$129.99\n\n\nsource_url\nhttps://www.canadapharmacy.com/products/abilify-tablet\nhttps://www.canadapharmacy.com/products/abilify-tablet\nhttps://www.canadapharmacy.com/products/accolate\nhttps://www.canadapharmacy.com/products/abilify-mt\nhttps://www.canadapharmacy.com/products/abilify-mt\n\n\n\n\n(I transposed the table since there were so many columns and so few rows. Table markdown can be copied from output of print(pricesDf.T.to_markdown()))\n" ]
[ 1 ]
[]
[]
[ "beautifulsoup", "pandas", "python", "selenium", "web_scraping" ]
stackoverflow_0074601118_beautifulsoup_pandas_python_selenium_web_scraping.txt
Q: How to write Python if else in single line? category.is_parent = True if self.request.get('parentKey') is not None else category.is_parent = False Above is the code in which I am trying to write a if else in a single line and it is giving me this syntax error SyntaxError: can't assign to conditional expression" But if I write it in following way it works fine if self.request.get('parentKey') is not None: category.is_parent = True else: category.is_parent = False A: Try this: category.is_parent = True if self.request.get('parentKey') else False To check only against None: category.is_parent = True if self.request.get('parentKey') is not None else False A: You can write: category.is_parent = True if self.request.get('parentKey') is not None else False Or even simpler in this case: category.is_parent = self.request.get('parentKey') is not None A: You can write this with a one liner but using the [][] python conditional syntax. Like so: category.is_parent = [False,True][self.request.get('parentKey') is not None The first element in the first array is the result for the else part of the condition and the second element is the result when the condition in the second array evaluates to True.
How to write Python if else in single line?
category.is_parent = True if self.request.get('parentKey') is not None else category.is_parent = False Above is the code in which I am trying to write an if-else in a single line, and it gives me this syntax error: SyntaxError: can't assign to conditional expression But if I write it in the following way it works fine: if self.request.get('parentKey') is not None: category.is_parent = True else: category.is_parent = False
[ "Try this:\ncategory.is_parent = True if self.request.get('parentKey') else False\n\nTo check only against None:\ncategory.is_parent = True if self.request.get('parentKey') is not None else False\n\n", "You can write:\ncategory.is_parent = True if self.request.get('parentKey') is not None else False\nOr even simpler in this case:\ncategory.is_parent = self.request.get('parentKey') is not None\n", "You can write this with a one liner but using the [][] python conditional syntax. Like so:\ncategory.is_parent = [False,True][self.request.get('parentKey') is not None\n\nThe first element in the first array is the result for the else part of the condition and the second element is the result when the condition in the second array evaluates to True.\n" ]
[ 3, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0040810139_python.txt
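A minimal runnable sketch of the conditional-expression forms discussed in the answers above; the value variable stands in for whatever self.request.get('parentKey') would return:

value = None  # pretend this came from self.request.get('parentKey')

is_parent = True if value is not None else False  # ternary conditional expression
is_parent = value is not None                     # simpler: the comparison is already a bool

# The indexing trick from the last answer also works, but reads less clearly:
is_parent = [False, True][value is not None]

print(is_parent)  # False while value is None, True otherwise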
Q: How to list down all available python versions in my windows When I type py --list in my cmd, it shows C:\Users\Administrator>py --list Installed Pythons found by py Launcher for Windows -3.9-64 * -3.8-64 But when I use the command python it shows Python 3.10.6 C:\Users\Administrator>python Python 3.10.6 (main, Aug 12 2022, 18:00:29) [GCC 12.1.0 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> I am confused about exactly how many pythons are available in my system. How to list them ALL? And how it is decided that which one is default? ( or is there any default way at all? ) A: Use py -0 to find all installed versions of python on your PC. Wallbloggerbeing explained how to change your default. A: Advanced System Settings > Advance (tab) . On the bottom you'll find 'Environment Variables' Double-click on the Path . You'll see path to one of the python installations, change that to path of your desired version.
How to list down all available python versions in my windows
When I type py --list in my cmd, it shows C:\Users\Administrator>py --list Installed Pythons found by py Launcher for Windows -3.9-64 * -3.8-64 But when I use the command python it shows Python 3.10.6 C:\Users\Administrator>python Python 3.10.6 (main, Aug 12 2022, 18:00:29) [GCC 12.1.0 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> I am confused about exactly how many Pythons are available on my system. How do I list them ALL? And how is it decided which one is the default? (Or is there any notion of a default at all?)
[ "Use py -0 to find all installed versions of python on your PC.\nWallbloggerbeing explained how to change your default.\n", "Advanced System Settings > Advance (tab) . On the bottom you'll find 'Environment Variables'\nDouble-click on the Path . You'll see path to one of the python installations, change that to path of your desired version.\n" ]
[ 2, 1 ]
[]
[]
[ "python", "version" ]
stackoverflow_0074604557_python_version.txt
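If it helps to confirm which interpreter actually runs when you type python, here is a small sketch using only the standard library (run it with each command you are unsure about):

import sys

print(sys.version)     # version string of the interpreter running this script
print(sys.executable)  # full path to the python.exe actually being used

Recent versions of the Windows launcher should also accept py -0p to print the installation paths alongside the versions, but check py --help on your machine to confirm.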
Q: Accessing training data during tensorflow graph execution I'd like to use pre-trained sentence embeddings in my tensorflow graph execution model. The embeddings are available dynamically from a function call, which takes in an array of sentences and outputs an array of sentence embeddings. This function uses a pre-trained pytorch model so has to remain separate from the tensorflow model I'm training: def get_pretrained_embeddings(sentences): return pretrained_pytorch_model.encode(sentences) My tensorflow model looks like this: class SentenceModel(tf.keras.Model): def __init__(self): super().__init__() def call(self, sentences): embedding_layer = tf.keras.layers.Embedding( 10_000, 256, embeddings_initializer=tf.keras.initializers.Constant(get_pretrained_embeddings(sentences)), trainable=False, ) sentence_text_embedding = tf.keras.Sequential([ embedding_layer, tf.keras.layers.GlobalAveragePooling1D(), ]) return sentence_text_embedding, But when I try to train this model using cached_train = train.shuffle(100_000).batch(1024) model.fit(cached_train) my embeddings_initializer call gets the error: OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature. I assume this is because tensorflow is trying to compile the graph using symbolic data. How can I get my external function, which relies on the current training data batch, to work with tensorflow's graph training? A: Tensorflow compiles models to an execution graph before performing the actual training process. The obvious side-effect that clues us into this is if we have a regular Python print() statement in e.g. our call() method, it will only get executed once as Tensorflow runs through your code to construct the execution graph, which it will later convert to native code. The other side effect of this is that cannot use anything that isn't a tensor of some description when training. By 'tensor' here, all of the following can be considered a tensor: The input value of your call() method (obviously) A tf.Sequential A tf.keras.Model/tf.keras.layers.Layer subclass A SparseTensor A tf.constant() ....probably more I haven't listed here. To this end, you would need to convert your PyTorch model to a Tensorflow one to be able to reference it in a subclass of tf.keras.Model/tf.keras.layers.Layer. As a side note, if you do find you need to iterate a tensor, you should just be able to iterate it on the 1st dimension (i.e. the batch size) like so: for part in some_tensor: pass If you want to iterate on some other dimension, I recommend doing a tf.unstack(some_tensor, axis=AXIS_NUMBER_HERE) first and iterate over the result thereof.
Accessing training data during tensorflow graph execution
I'd like to use pre-trained sentence embeddings in my tensorflow graph execution model. The embeddings are available dynamically from a function call, which takes in an array of sentences and outputs an array of sentence embeddings. This function uses a pre-trained pytorch model so has to remain separate from the tensorflow model I'm training: def get_pretrained_embeddings(sentences): return pretrained_pytorch_model.encode(sentences) My tensorflow model looks like this: class SentenceModel(tf.keras.Model): def __init__(self): super().__init__() def call(self, sentences): embedding_layer = tf.keras.layers.Embedding( 10_000, 256, embeddings_initializer=tf.keras.initializers.Constant(get_pretrained_embeddings(sentences)), trainable=False, ) sentence_text_embedding = tf.keras.Sequential([ embedding_layer, tf.keras.layers.GlobalAveragePooling1D(), ]) return sentence_text_embedding, But when I try to train this model using cached_train = train.shuffle(100_000).batch(1024) model.fit(cached_train) my embeddings_initializer call gets the error: OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature. I assume this is because tensorflow is trying to compile the graph using symbolic data. How can I get my external function, which relies on the current training data batch, to work with tensorflow's graph training?
[ "Tensorflow compiles models to an execution graph before performing the actual training process. The obvious side-effect that clues us into this is if we have a regular Python print() statement in e.g. our call() method, it will only get executed once as Tensorflow runs through your code to construct the execution graph, which it will later convert to native code.\nThe other side effect of this is that cannot use anything that isn't a tensor of some description when training. By 'tensor' here, all of the following can be considered a tensor:\n\nThe input value of your call() method (obviously)\nA tf.Sequential\nA tf.keras.Model/tf.keras.layers.Layer subclass\nA SparseTensor\nA tf.constant()\n....probably more I haven't listed here.\n\nTo this end, you would need to convert your PyTorch model to a Tensorflow one to be able to reference it in a subclass of tf.keras.Model/tf.keras.layers.Layer.\n\nAs a side note, if you do find you need to iterate a tensor, you should just be able to iterate it on the 1st dimension (i.e. the batch size) like so:\nfor part in some_tensor:\n pass\n\nIf you want to iterate on some other dimension, I recommend doing a tf.unstack(some_tensor, axis=AXIS_NUMBER_HERE) first and iterate over the result thereof.\n" ]
[ 0 ]
[]
[]
[ "keras", "python", "tensorflow" ]
stackoverflow_0066681053_keras_python_tensorflow.txt
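One workaround consistent with the answer above is to run the external encoder eagerly, outside the traced call(), for example inside the tf.data pipeline via tf.py_function. The get_pretrained_embeddings name comes from the question; the dataset layout (sentences, labels) and everything else here is an untested sketch:

import tensorflow as tf

def encode_batch(sentences):
    # Executed eagerly, so calling the PyTorch encoder is allowed here.
    texts = [s.decode("utf-8") for s in sentences.numpy()]
    return tf.constant(get_pretrained_embeddings(texts), dtype=tf.float32)

def add_embeddings(sentences, labels):
    # tf.py_function bridges graph mode and the eager encoder above.
    embeddings = tf.py_function(encode_batch, inp=[sentences], Tout=tf.float32)
    return embeddings, labels

cached_train = train.shuffle(100_000).batch(1024).map(add_embeddings)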
Q: in nested dictionary dynamically test if the value is a dictionary or a dictionary list I iterate through a nested dictionary taken from a json including one of the keys ("price") and sometimes a list sometimes a dictionary. Data={"main": {"sub_main": [ {"id": "995", "item": "850", "price": {"ref": "razorback", "value": "250"}}, {"id": "953", "item": "763", "price": [{"ref": "razorback", "value": "450"},{"ref": "sumatra", "value": "370"},{"ref": "ligea", "value": "320"} ]}, ]}} I filter the result according to the value of another key ("id"). # here, the value of "price" key is a dict result1 = [item["price"] for item in Data["main"]["sub_main"] if item["id"]=="995"] # here, the value of "price" key is a list of dict result2 = [item["price"] for item in Data["main"]["sub_main"] if item["id"]=="953"] then I convert the result into a dictionary #here I have what I wanted, because the "price" key is a dict dresult={k:v for e in result1 for (k,v) in e.items()} but when the "price" key has a dictionary list as its value, it doesn't work, because of course I get an error "'list' object has no attribute 'items' #it cannot loop on the value and key because the "price" key is a list dresult={k:v for e in result2 for (k,v) in e.items()} how to make it convert the result to dictionary in both cases (because I'm iterating through thousands of data). how to dynamically do a type test and change to finally have a dictionary.I would like to use the result to display in a view in Django. I need it to be a dictionary thank you. A: I hope I've understood your question right. In this code I distinguish if item['price'] is dict/list and create a new dict from ref/value keys: Data = { "main": { "sub_main": [ { "id": "995", "item": "850", "price": {"ref": "razorback", "value": "250"}, }, { "id": "953", "item": "763", "price": [ {"ref": "razorback", "value": "450"}, {"ref": "sumatra", "value": "370"}, {"ref": "ligea", "value": "320"}, ], }, ] } } ids = "995", "953" for id_ in ids: out = { d["ref"]: d["value"] for item in Data["main"]["sub_main"] for d in ( [item["price"]] if isinstance(item["price"], dict) else item["price"] ) if item["id"] == id_ } print(id_, out) Prints: 995 {'razorback': '250'} 953 {'razorback': '450', 'sumatra': '370', 'ligea': '320'} A: so I made a function that you will pass id of the sub_main dict you want and will give you price as a dictionary def get_result(id_): price= [item["price"] for item in Data["main"]["sub_main"] if item["id"]==id_][0] match type(price).__name__: case 'dict':return [price] case 'list':return price then to get result you pass the id as string dresult=get_result('953')#[{'ref': 'razorback', 'value': '450'}, {'ref': 'sumatra', 'value': '370'}, {'ref': 'ligea', 'value': '320'}] dresult=get_result('995')#[{'ref': 'razorback', 'value': '250'}] how to render in view in your view you can render the context as return render(request, 'app_name/template_name.html',{'dictionaries': dresult}) reason behind you can not a get many dictionaries unwrapped ie.{'ref': 'razorback', 'value': '450'}, {'ref': 'sumatra', 'value': '370'}, {'ref': 'ligea', 'value': '320'}.and since your result seem to have a variable number of dicts so its better to wrap the single dict in a list so that in your template you can use {% for dictionary in dictionaries %}: {{ dictionary.ref}}#to get ref {{ dictionary.value}}#to get value ... 
{%endfor%} edit: override get_context_data if its a detail or update view where you pass your pk you can pass the pk to the get_result function like below def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) context['dictionaries'] = get_result(self.kwargs.get('pk')) return context
in nested dictionary dynamically test if the value is a dictionary or a dictionary list
I iterate through a nested dictionary taken from a json including one of the keys ("price") and sometimes a list sometimes a dictionary. Data={"main": {"sub_main": [ {"id": "995", "item": "850", "price": {"ref": "razorback", "value": "250"}}, {"id": "953", "item": "763", "price": [{"ref": "razorback", "value": "450"},{"ref": "sumatra", "value": "370"},{"ref": "ligea", "value": "320"} ]}, ]}} I filter the result according to the value of another key ("id"). # here, the value of "price" key is a dict result1 = [item["price"] for item in Data["main"]["sub_main"] if item["id"]=="995"] # here, the value of "price" key is a list of dict result2 = [item["price"] for item in Data["main"]["sub_main"] if item["id"]=="953"] then I convert the result into a dictionary #here I have what I wanted, because the "price" key is a dict dresult={k:v for e in result1 for (k,v) in e.items()} but when the "price" key has a dictionary list as its value, it doesn't work, because of course I get an error "'list' object has no attribute 'items' #it cannot loop on the value and key because the "price" key is a list dresult={k:v for e in result2 for (k,v) in e.items()} how to make it convert the result to dictionary in both cases (because I'm iterating through thousands of data). how to dynamically do a type test and change to finally have a dictionary.I would like to use the result to display in a view in Django. I need it to be a dictionary thank you.
[ "I hope I've understood your question right. In this code I distinguish if item['price'] is dict/list and create a new dict from ref/value keys:\nData = {\n \"main\": {\n \"sub_main\": [\n {\n \"id\": \"995\",\n \"item\": \"850\",\n \"price\": {\"ref\": \"razorback\", \"value\": \"250\"},\n },\n {\n \"id\": \"953\",\n \"item\": \"763\",\n \"price\": [\n {\"ref\": \"razorback\", \"value\": \"450\"},\n {\"ref\": \"sumatra\", \"value\": \"370\"},\n {\"ref\": \"ligea\", \"value\": \"320\"},\n ],\n },\n ]\n }\n}\n\n\nids = \"995\", \"953\"\n\nfor id_ in ids:\n out = {\n d[\"ref\"]: d[\"value\"]\n for item in Data[\"main\"][\"sub_main\"]\n for d in (\n [item[\"price\"]]\n if isinstance(item[\"price\"], dict)\n else item[\"price\"]\n )\n if item[\"id\"] == id_\n }\n print(id_, out)\n\nPrints:\n995 {'razorback': '250'}\n953 {'razorback': '450', 'sumatra': '370', 'ligea': '320'}\n\n", "so I made a function that you will pass id of the sub_main dict you want and will give you price as a dictionary\ndef get_result(id_):\n price= [item[\"price\"] for item in Data[\"main\"][\"sub_main\"] if item[\"id\"]==id_][0]\n match type(price).__name__:\n case 'dict':return [price]\n case 'list':return price\n\nthen to get result you pass the id as string\ndresult=get_result('953')#[{'ref': 'razorback', 'value': '450'}, {'ref': 'sumatra', 'value': '370'}, {'ref': 'ligea', 'value': '320'}]\n\ndresult=get_result('995')#[{'ref': 'razorback', 'value': '250'}]\n\n\nhow to render in view\n\nin your view you can render the context as\nreturn render(request, 'app_name/template_name.html',{'dictionaries': dresult})\n\n\nreason behind\n\nyou can not a get many dictionaries unwrapped ie.{'ref': 'razorback', 'value': '450'}, {'ref': 'sumatra', 'value': '370'}, {'ref': 'ligea', 'value': '320'}.and since your result seem to have a variable number of dicts so its better to wrap the single dict in a list so that in your template you can use\n{% for dictionary in dictionaries %}:\n {{ dictionary.ref}}#to get ref\n{{ dictionary.value}}#to get value\n ...\n{%endfor%}\n\nedit:\n\noverride get_context_data\n\nif its a detail or update view where you pass your pk you can pass the pk to the get_result function like below\ndef get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n \n context['dictionaries'] = get_result(self.kwargs.get('pk'))\n return context\n\n" ]
[ 1, 0 ]
[]
[]
[ "dictionary", "list_comprehension", "nested", "python" ]
stackoverflow_0074604238_dictionary_list_comprehension_nested_python.txt
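The same normalisation idea as both answers above, pulled into a small reusable helper (names are illustrative); it works with the Data structure from the question:

def as_list_of_dicts(value):
    # Wrap a lone dict in a list so callers can always iterate uniformly.
    return [value] if isinstance(value, dict) else value

def prices_for(data, wanted_id):
    return {
        d["ref"]: d["value"]
        for item in data["main"]["sub_main"]
        if item["id"] == wanted_id
        for d in as_list_of_dicts(item["price"])
    }

print(prices_for(Data, "995"))  # {'razorback': '250'}
print(prices_for(Data, "953"))  # {'razorback': '450', 'sumatra': '370', 'ligea': '320'}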
Q: Python turtle module I'm currently new to python programming. Nowadays I'm building a snake game using the turtle module. I want to refresh the screen after every piece of snake object parts has moved. So I turned off the tracer and use the update function after the for loop. But to do that I must import the time module and use the time.sleep() function.If I don't use it the python turtle module begins to not respond. I want to know what is the reason why I must use the time function and why I can't simply use sc.update directly without time function. here is my code from turtle import * from snake import * import time sc = Screen() sc.bgcolor('black') sc.setup(width=600, height=600) sc.tracer(0) # diyego is our snake name diyego = Snake(10) run = 1 while run: #here is the problem sc.update() time.sleep(1) #used time.sleep for i in range(len(diyego.snake_object_list)-1, 0, -1): infront_item_position = diyego.snake_object_list[i - 1].pos() diyego.snake_object_list[i].goto(infront_item_position) diyego.snake_head.forward(10) sc.exitonclick() #Snake module from turtle import * class Snake(): def __init__(self, number_of_parts): """Should pass the lenght of snake""" self.snake_object_list = [] self.create_snake_parts(number_of_parts) self.snake_head = self.snake_object_list[0] def create_snake_parts(self, number_of_parts): """ Get number of parts which snake shuld have and create snake it""" x_cor = 0 for i in range(number_of_parts): snake = Turtle() snake.speed(0) snake.shape("circle") snake.color('white') snake.penup() snake.setx(x=x_cor) self.snake_object_list.append(snake) x_cor += -20 I just want to know why the turtle gets not respond when I remove the time.sleep() A: What you describe is possible, but the problem isn't lack of use of the sleep() function but rather your use of (effectively) while True: which has no place in an event-driven world like turtle. Let's rework your code using ontimer() events and make the snake's basic movement a method of the snake itself: from turtle import Screen, Turtle CURSOR_SIZE = 20 class Snake(): def __init__(self, number_of_parts): """ Should pass the length of snake """ self.snake_parts = [] self.create_snake_parts(number_of_parts) self.snake_head = self.snake_parts[0] def create_snake_parts(self, number_of_parts): """ Get number of parts which snake should have and create snake """ x_coordinate = 0 for _ in range(number_of_parts): part = Turtle() part.shape('circle') part.color('white') part.penup() part.setx(x_coordinate) self.snake_parts.append(part) x_coordinate -= CURSOR_SIZE def move(self): for i in range(len(self.snake_parts) - 1, 0, -1): infront_item_position = self.snake_parts[i - 1].position() self.snake_parts[i].setposition(infront_item_position) self.snake_head.forward(CURSOR_SIZE) def slither(): diyego.move() screen.update() screen.ontimer(slither, 100) # milliseconds screen = Screen() screen.bgcolor('black') screen.setup(width=600, height=600) screen.tracer(0) diyego = Snake(10) slither() screen.exitonclick()
Python turtle module
I'm currently new to python programming. Nowadays I'm building a snake game using the turtle module. I want to refresh the screen after every piece of snake object parts has moved. So I turned off the tracer and use the update function after the for loop. But to do that I must import the time module and use the time.sleep() function.If I don't use it the python turtle module begins to not respond. I want to know what is the reason why I must use the time function and why I can't simply use sc.update directly without time function. here is my code from turtle import * from snake import * import time sc = Screen() sc.bgcolor('black') sc.setup(width=600, height=600) sc.tracer(0) # diyego is our snake name diyego = Snake(10) run = 1 while run: #here is the problem sc.update() time.sleep(1) #used time.sleep for i in range(len(diyego.snake_object_list)-1, 0, -1): infront_item_position = diyego.snake_object_list[i - 1].pos() diyego.snake_object_list[i].goto(infront_item_position) diyego.snake_head.forward(10) sc.exitonclick() #Snake module from turtle import * class Snake(): def __init__(self, number_of_parts): """Should pass the lenght of snake""" self.snake_object_list = [] self.create_snake_parts(number_of_parts) self.snake_head = self.snake_object_list[0] def create_snake_parts(self, number_of_parts): """ Get number of parts which snake shuld have and create snake it""" x_cor = 0 for i in range(number_of_parts): snake = Turtle() snake.speed(0) snake.shape("circle") snake.color('white') snake.penup() snake.setx(x=x_cor) self.snake_object_list.append(snake) x_cor += -20 I just want to know why the turtle gets not respond when I remove the time.sleep()
[ "What you describe is possible, but the problem isn't lack of use of the sleep() function but rather your use of (effectively) while True: which has no place in an event-driven world like turtle. Let's rework your code using ontimer() events and make the snake's basic movement a method of the snake itself:\nfrom turtle import Screen, Turtle\n\nCURSOR_SIZE = 20\n\nclass Snake():\n def __init__(self, number_of_parts):\n \"\"\" Should pass the length of snake \"\"\"\n self.snake_parts = []\n self.create_snake_parts(number_of_parts)\n self.snake_head = self.snake_parts[0]\n\n def create_snake_parts(self, number_of_parts):\n \"\"\" Get number of parts which snake should have and create snake \"\"\"\n x_coordinate = 0\n\n for _ in range(number_of_parts):\n part = Turtle()\n part.shape('circle')\n part.color('white')\n part.penup()\n part.setx(x_coordinate)\n\n self.snake_parts.append(part)\n x_coordinate -= CURSOR_SIZE\n\n def move(self):\n for i in range(len(self.snake_parts) - 1, 0, -1):\n infront_item_position = self.snake_parts[i - 1].position()\n self.snake_parts[i].setposition(infront_item_position)\n\n self.snake_head.forward(CURSOR_SIZE)\n\ndef slither():\n diyego.move()\n screen.update()\n screen.ontimer(slither, 100) # milliseconds\n\nscreen = Screen()\nscreen.bgcolor('black')\nscreen.setup(width=600, height=600)\nscreen.tracer(0)\n\ndiyego = Snake(10)\n\nslither()\n\nscreen.exitonclick()\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "python_turtle" ]
stackoverflow_0074580169_python_python_3.x_python_turtle.txt
Q: Python Need to build models for Linear regression and Decision Tree I am trying to train the model TreeReg = DecisionTreeRegressor() TreeReg.fit(X_train, y_train) y_pred_Train = TreeReg.predict(X_train) #predictions on Training set y_pred_Test = TreeReg.predict(X_test) #predictions on testing set It's giving me an error message: ValueError: could not convert string to float: '3/1/2019' A: Linear models need continuous values, (example 4 or 3.5) You're passing a datetime/string value into it and model can't use this. The model you used converts each line to a float value. Since that datetime thing can't converted to a float it raises an error. If your all column Is like that value, a datetime column, then try this: df['ThisCol'] = pd.to_datetime(df['ThisCol']) df['Year'] = df['ThisCol'].dt.year df['Month'] = df['ThisCol'].dt.month You can use month and year values in your model. you can check the docs for further information about datetime!
Python Need to build models for Linear regression and Decision Tree
I am trying to train the model TreeReg = DecisionTreeRegressor() TreeReg.fit(X_train, y_train) y_pred_Train = TreeReg.predict(X_train) #predictions on Training set y_pred_Test = TreeReg.predict(X_test) #predictions on testing set It's giving me an error message: ValueError: could not convert string to float: '3/1/2019'
[ "Linear models need continuous values, (example 4 or 3.5)\nYou're passing a datetime/string value into it and model can't use this.\nThe model you used converts each line to a float value. Since that datetime thing can't converted to a float it raises an error.\nIf your all column Is like that value, a datetime column, then try this:\ndf['ThisCol'] = pd.to_datetime(df['ThisCol'])\ndf['Year'] = df['ThisCol'].dt.year\ndf['Month'] = df['ThisCol'].dt.month\n\nYou can use month and year values in your model.\nyou can check the docs for further information about datetime!\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074604988_python.txt
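A small end-to-end sketch of the fix described above; the Date and Target column names are placeholders for whatever the real dataframe uses:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

df = pd.DataFrame({"Date": ["3/1/2019", "4/1/2019", "5/1/2019", "6/1/2019"],
                   "Target": [10.0, 12.5, 11.0, 13.0]})

# Convert the date strings into numeric features the regressor can accept.
df["Date"] = pd.to_datetime(df["Date"])
df["Year"] = df["Date"].dt.year
df["Month"] = df["Date"].dt.month

X = df[["Year", "Month"]]
y = df["Target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

TreeReg = DecisionTreeRegressor()
TreeReg.fit(X_train, y_train)
print(TreeReg.predict(X_test))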
Q: Jinja elif not working even though condition is true The fourth elif statement is the one causing me the issue. I have swapped the third elif statement with the fourth and every time the fourth is in third place it works. {% block content%} {% load static %} <link rel="stylesheet" href="{% static 'css/home_page.css' %}"> <link rel="stylesheet" href="{% static 'css/home_w_d_cs.css' %}"> {% if first_hour_d == 'clear sky' and time_of_day == True %} <!-- day == True means day --> <div class="side-hour-icon"> <img src="{% static 'images/sunny-black.png' %}" alt="" width="55" height="50"> </div> {% elif first_hour_d == 'clear sky' and time_of_day == False %} <!-- day == False means night --> <div class="side-hour-icon"> <img src="{% static 'images/clear-night-black.png' %}" alt="" width="55" height="50"> </div> {% elif first_hour_d == 'overcast clouds' or 'broken clouds' %} <div class="side-hour-icon"> <img src="{% static 'images/cloudy2.png' %}" alt="" width="55" height="50"> </div> {% elif first_hour_d == 'few clouds' or 'scattered clouds' %} <div class="side-hour-icon"> <img src="{% static 'images/few-clouds-black.png' %}" alt="" width="55" height="50"> </div> {% endif %} {% endblock %} I want to have a few elif statements, maybe 10 or 12. Is this possible? A: You can't do: {% elif first_hour_d == 'overcast clouds' or 'broken clouds' %} because the second string will always evaluate to True. You must do: {% elif first_hour_d == 'overcast clouds' or first_hour_d == 'broken clouds' %}
Jinja elif not working even though condition is true
The fourth elif statement is the one causing me the issue. I have swapped the third elif statement with the fourth and every time the fourth is in third place it works. {% block content%} {% load static %} <link rel="stylesheet" href="{% static 'css/home_page.css' %}"> <link rel="stylesheet" href="{% static 'css/home_w_d_cs.css' %}"> {% if first_hour_d == 'clear sky' and time_of_day == True %} <!-- day == True means day --> <div class="side-hour-icon"> <img src="{% static 'images/sunny-black.png' %}" alt="" width="55" height="50"> </div> {% elif first_hour_d == 'clear sky' and time_of_day == False %} <!-- day == False means night --> <div class="side-hour-icon"> <img src="{% static 'images/clear-night-black.png' %}" alt="" width="55" height="50"> </div> {% elif first_hour_d == 'overcast clouds' or 'broken clouds' %} <div class="side-hour-icon"> <img src="{% static 'images/cloudy2.png' %}" alt="" width="55" height="50"> </div> {% elif first_hour_d == 'few clouds' or 'scattered clouds' %} <div class="side-hour-icon"> <img src="{% static 'images/few-clouds-black.png' %}" alt="" width="55" height="50"> </div> {% endif %} {% endblock %} I want to have a few elif statements, maybe 10 or 12. Is this possible?
[ "You can't do:\n{% elif first_hour_d == 'overcast clouds' or 'broken clouds' %}\n\nbecause the second string will always evaluate to True.\nYou must do:\n{% elif first_hour_d == 'overcast clouds' or first_hour_d == 'broken clouds' %}\n\n" ]
[ 1 ]
[]
[]
[ "django", "jinja2", "python" ]
stackoverflow_0074605002_django_jinja2_python.txt
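A quick way to see why the original condition always matched, runnable in a plain Python shell (template conditions evaluate the same way):

first_hour_d = "few clouds"

# The right-hand operand is a non-empty string, so the whole expression is truthy
# no matter what first_hour_d is:
print(first_hour_d == "overcast clouds" or "broken clouds")  # -> 'broken clouds'

# Spelling out both comparisons gives the intended result:
print(first_hour_d == "overcast clouds" or first_hour_d == "broken clouds")  # -> False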
Q: how to make one dictionary from several arrays. PYTHON I want to make one dictionary out of several arrays in a loop a = ["hello","hi"] b = ["day","night"] #And these arrays were transformed into a dictionary c = {"a": "hello, hi", "b": "day, night"} dictt = dict.fromkeys(a, b) print(dictt) A: What you want is a dictionary with the variable name as key, and the contents of the lists joined together as the values. It is unusual to retrieve the variable names and use them as keys, so you could do it manually like this: a = ["hello","hi"] b = ["day","night"] c = { 'a': ', '.join(a), 'b': ', '.join(b) } print(c) # {'a': 'hello, hi', 'b': 'day, night'} If you want to loop over allot of a's and b's, I would organize the a and b differently. So you can loop over them. Like this: # here are your many lists data = [ ["hello","hi"], ["day","night"] ] # here you go over all the lists and use the index as key c = {} for i, lst in enumerate(data): c[i] = ', '.join(lst) print(c) # {0: 'hello, hi', 1: 'day, night'} If you need the dictionary keys to be some string, you could use a list of keys alongside the data: # define keys keys = ['a', 'b'] # here are your many lists data = [ ["hello","hi"], ["day","night"] ] # here you go over all the lists and use the key for keys c = {} for i, lst in enumerate(data): c[keys[i]] = ', '.join(lst) print(c) # {'a': 'hello, hi', 'b': 'day, night'} If you want to avoid a list of keys but do want some letter as key. You could run through the alphabet for the keys. This is limited to the number of letters you can go over: # here are your many lists data = [ ["hello","hi"], ["day","night"] ] # here you go over all the lists and use the alphabet letters in order as keys c = {} for i, lst in enumerate(data): c[chr(i+97)] = ', '.join(lst) print(c) # {'a': 'hello, hi', 'b': 'day, night'}
how to make one dictionary from several arrays. PYTHON
I want to make one dictionary out of several arrays in a loop a = ["hello","hi"] b = ["day","night"] #And these arrays were transformed into a dictionary c = {"a": "hello, hi", "b": "day, night"} dictt = dict.fromkeys(a, b) print(dictt)
[ "What you want is a dictionary with the variable name as key, and the contents of the lists joined together as the values.\nIt is unusual to retrieve the variable names and use them as keys, so you could do it manually like this:\na = [\"hello\",\"hi\"]\nb = [\"day\",\"night\"]\n\nc = {\n 'a': ', '.join(a),\n 'b': ', '.join(b)\n}\n\nprint(c) # {'a': 'hello, hi', 'b': 'day, night'}\n\nIf you want to loop over allot of a's and b's, I would organize the a and b differently. So you can loop over them. Like this:\n# here are your many lists\ndata = [\n [\"hello\",\"hi\"],\n [\"day\",\"night\"]\n]\n\n# here you go over all the lists and use the index as key\nc = {}\nfor i, lst in enumerate(data):\n c[i] = ', '.join(lst)\n\nprint(c) # {0: 'hello, hi', 1: 'day, night'}\n\nIf you need the dictionary keys to be some string, you could use a list of keys alongside the data:\n# define keys\nkeys = ['a', 'b']\n\n# here are your many lists\ndata = [\n [\"hello\",\"hi\"],\n [\"day\",\"night\"]\n]\n\n# here you go over all the lists and use the key for keys\nc = {}\nfor i, lst in enumerate(data):\n c[keys[i]] = ', '.join(lst)\n\nprint(c) # {'a': 'hello, hi', 'b': 'day, night'}\n\nIf you want to avoid a list of keys but do want some letter as key. You could run through the alphabet for the keys. This is limited to the number of letters you can go over:\n# here are your many lists\ndata = [\n [\"hello\",\"hi\"],\n [\"day\",\"night\"]\n]\n\n# here you go over all the lists and use the alphabet letters in order as keys\nc = {}\nfor i, lst in enumerate(data):\n c[chr(i+97)] = ', '.join(lst)\n\nprint(c) # {'a': 'hello, hi', 'b': 'day, night'}\n\n" ]
[ 0 ]
[]
[]
[ "arraylist", "dictionary", "python" ]
stackoverflow_0074605006_arraylist_dictionary_python.txt
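A condensed sketch of the same idea as the answer above, assuming you can list the name/list pairs explicitly:

a = ["hello", "hi"]
b = ["day", "night"]

c = {name: ", ".join(values) for name, values in (("a", a), ("b", b))}
print(c)  # {'a': 'hello, hi', 'b': 'day, night'}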
Q: S3 Upload Invoke Lambda Fails - Cross Account Access I have a lambda that triggers off an S3 bucket upload (it basically converts a PDF to a dataframe and writes it to a different s3 bucket). Both of these belong to AWS account A. I would like to allow cross-account s3 access to trigger this lambda from another IAM user from account B (Administrator), however I am having issues with the GetObject operation. Here is how my lambda in account A looks: LOGGER = logging.getLogger(__name__) logging.basicConfig(level=logging.ERROR) logging.getLogger(__name__).setLevel(logging.DEBUG) session = boto3.Session( aws_access_key_id="XXXX", aws_secret_access_key="XXXX", ) s3 = session.resource('s3') dest_bucket = 'bucket-output' csv_buffer = StringIO() def lambda_handler(event,context): source_bucket = event['Records'][0]['s3']['bucket']['name'] pdf_name = event['Records'][0]['s3']['object']['key'] LOGGER.info('Reading {} from {}'.format(pdf_name, source_bucket)) pdf_obj = s3.Object(source_bucket,pdf_name) fs = pdf_obj.get()['Body'].read() #### code is failing here df = convert_bytes_to_df(BytesIO(fs)) df.to_csv(csv_buffer,index=False) s3.Object(dest_bucket,str(pdf_name.split('.')[0])+".csv").put(Body=csv_buffer.getvalue()) LOGGER.info('Successfully converted {} from {} to {}'.format(pdf_name,source_bucket,dest_bucket)) The lambda is failing with this error: ClientError: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied I'm aware it's bad practice to have keys in the lambda file but I can't change things at the moment. The process works fine if I am uploading to the S3 bucket from within an IAM User in account A itself, but when I expose the S3 buckets to an IAM user from a separate account, the issues above start happening. This is the S3 bucket policy (terraform) allowing cross-account access to an IAM user from account B: resource "aws_s3_bucket_policy" "cross_account_input_access" { bucket = aws_s3_bucket.statements_input.id policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::XXXXXXXXX:user/Administrator" }, "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::capsphere-input" ] }, { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::XXXXXXXXX:user/Administrator" }, "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject" ], "Resource": [ "arn:aws:s3:::bucket-name", "arn:aws:s3:::bucket-name/*" ] } ] } And here is the policy attached to an IAM user from another AWS account B which enables Administrator from account B to write a pdf to account A's s3 bucket programmatically: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::bucket-name" ] }, { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject" ], "Resource": [ "arn:aws:s3:::bucket-name", "arn:aws:s3:::bucket-name/*" ] } ] } I write the file to the bucket from Administrator using aws cli like this: aws s3 cp filename.pdf s3://bucket-name I can't figure out what else needs to change. A: It appears that your situation is: Account A contains: An AWS Lambda function A 'source' bucket used to trigger the Lambda function A 'destination' bucket used by the Lambda function to store output You want to allow the Administrator IAM User in Account B to upload a file to the source bucket in Account A. This user should be able to retrieve the output from the destination bucket in Account A. 
The following would achieve these goals: Create an IAM Role in Account A and associate it with the Lambda function. Assign permissions to allow GetObject from the source bucket and PutObject to the destination bucket. There should be no need to reference any credentials within the Lambda function itself, since any necessary permissions will be provided via this IAM Role. The policy would be: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:GetObject" "Resource": "arn:aws:s3:::source-bucket/*" }, { "Effect": "Allow", "Action": "s3:PutObject" "Resource": "arn:aws:s3:::destination-bucket/*" } ] } Add a Bucket Policy on the source bucket that permits the Administrator user in Account B to PutObject into the bucket: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::Account-B:user/Administrator" }, "Action": "s3:PutObject", "Resource": "arn:aws:s3:::source-bucket/*" } ] } Add a Bucket policy on the destination bucket that permits the Administrator user in Account B to GetObject from the bucket and, I presume, list the bucket and delete objects that have been processed: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::Account-B:user/Administrator" }, "Action": [ "s3:ListBucket" ], "Resource": "arn:aws:s3:::destination-bucket" }, { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::Account-B:user/Administrator" }, "Action": [ "s3:GetObject", "s3:DeleteObject" ], "Resource": "arn:aws:s3:::destination-bucket/*" } ] } Since this is cross-account access, permission must also be granted to the Administrator IAM User in Account B to let them access the source and destination buckets. This policy is not required if they already have a lot of S3 permissions, such as s3:*, since it would work on any buckets including buckets in different AWS Accounts. This policy would go on the IAM User: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::destination-bucket" ] }, { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject" ], "Resource": [ "arn:aws:s3:::source-bucket/*", "arn:aws:s3:::destination-bucket/*" ] } ] } A: Ah! I have it! While your Lambda function has permission to access the object, the fact that the object was uploaded using credentials from another AWS Account means that the object is still 'owned' by the other account. There are two ways to fix this: Turn off ACLs on the receiving bucket From Disabling ACLs for all new buckets and enforcing Object Ownership - Amazon Simple Storage Service: We recommend that you disable ACLs on your Amazon S3 buckets. You can do this by applying the bucket owner enforced setting for S3 Object Ownership. When you apply this setting, ACLs are disabled and you automatically own and have full control over all objects in your bucket. Set ownership when uploading When uploading the object when using credentials from another, you can specify ACL=bucket-owner-full-control (depending on how you perform the upload). This will grant ownership to the AWS Account that owns the receiving bucket. A: What if you add Region information to the session: session = boto3.Session( aws_access_key_id="XXXX", aws_secret_access_key="XXXX", region_name="<REGION>" )
S3 Upload Invoke Lambda Fails - Cross Account Access
I have a lambda that triggers off an S3 bucket upload (it basically converts a PDF to a dataframe and writes it to a different s3 bucket). Both of these belong to AWS account A. I would like to allow cross-account s3 access to trigger this lambda from another IAM user from account B (Administrator), however I am having issues with the GetObject operation. Here is how my lambda in account A looks: LOGGER = logging.getLogger(__name__) logging.basicConfig(level=logging.ERROR) logging.getLogger(__name__).setLevel(logging.DEBUG) session = boto3.Session( aws_access_key_id="XXXX", aws_secret_access_key="XXXX", ) s3 = session.resource('s3') dest_bucket = 'bucket-output' csv_buffer = StringIO() def lambda_handler(event,context): source_bucket = event['Records'][0]['s3']['bucket']['name'] pdf_name = event['Records'][0]['s3']['object']['key'] LOGGER.info('Reading {} from {}'.format(pdf_name, source_bucket)) pdf_obj = s3.Object(source_bucket,pdf_name) fs = pdf_obj.get()['Body'].read() #### code is failing here df = convert_bytes_to_df(BytesIO(fs)) df.to_csv(csv_buffer,index=False) s3.Object(dest_bucket,str(pdf_name.split('.')[0])+".csv").put(Body=csv_buffer.getvalue()) LOGGER.info('Successfully converted {} from {} to {}'.format(pdf_name,source_bucket,dest_bucket)) The lambda is failing with this error: ClientError: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied I'm aware it's bad practice to have keys in the lambda file but I can't change things at the moment. The process works fine if I am uploading to the S3 bucket from within an IAM User in account A itself, but when I expose the S3 buckets to an IAM user from a separate account, the issues above start happening. This is the S3 bucket policy (terraform) allowing cross-account access to an IAM user from account B: resource "aws_s3_bucket_policy" "cross_account_input_access" { bucket = aws_s3_bucket.statements_input.id policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::XXXXXXXXX:user/Administrator" }, "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::capsphere-input" ] }, { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::XXXXXXXXX:user/Administrator" }, "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject" ], "Resource": [ "arn:aws:s3:::bucket-name", "arn:aws:s3:::bucket-name/*" ] } ] } And here is the policy attached to an IAM user from another AWS account B which enables Administrator from account B to write a pdf to account A's s3 bucket programmatically: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::bucket-name" ] }, { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject" ], "Resource": [ "arn:aws:s3:::bucket-name", "arn:aws:s3:::bucket-name/*" ] } ] } I write the file to the bucket from Administrator using aws cli like this: aws s3 cp filename.pdf s3://bucket-name I can't figure out what else needs to change.
[ "It appears that your situation is:\nAccount A contains:\n\nAn AWS Lambda function\nA 'source' bucket used to trigger the Lambda function\nA 'destination' bucket used by the Lambda function to store output\n\nYou want to allow the Administrator IAM User in Account B to upload a file to the source bucket in Account A. This user should be able to retrieve the output from the destination bucket in Account A.\nThe following would achieve these goals:\n\nCreate an IAM Role in Account A and associate it with the Lambda function. Assign permissions to allow GetObject from the source bucket and PutObject to the destination bucket. There should be no need to reference any credentials within the Lambda function itself, since any necessary permissions will be provided via this IAM Role. The policy would be:\n\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Action\": \"s3:GetObject\"\n \"Resource\": \"arn:aws:s3:::source-bucket/*\"\n },\n {\n \"Effect\": \"Allow\",\n \"Action\": \"s3:PutObject\"\n \"Resource\": \"arn:aws:s3:::destination-bucket/*\"\n }\n ]\n}\n\n\nAdd a Bucket Policy on the source bucket that permits the Administrator user in Account B to PutObject into the bucket:\n\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"AWS\": \"arn:aws:iam::Account-B:user/Administrator\"\n },\n \"Action\": \"s3:PutObject\",\n \"Resource\": \"arn:aws:s3:::source-bucket/*\"\n }\n ]\n}\n\n\nAdd a Bucket policy on the destination bucket that permits the Administrator user in Account B to GetObject from the bucket and, I presume, list the bucket and delete objects that have been processed:\n\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"AWS\": \"arn:aws:iam::Account-B:user/Administrator\"\n },\n \"Action\": [\n \"s3:ListBucket\"\n ],\n \"Resource\": \"arn:aws:s3:::destination-bucket\"\n }, \n {\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"AWS\": \"arn:aws:iam::Account-B:user/Administrator\"\n },\n \"Action\": [\n \"s3:GetObject\",\n \"s3:DeleteObject\"\n ],\n \"Resource\": \"arn:aws:s3:::destination-bucket/*\"\n }\n ]\n}\n\n\nSince this is cross-account access, permission must also be granted to the Administrator IAM User in Account B to let them access the source and destination buckets. This policy is not required if they already have a lot of S3 permissions, such as s3:*, since it would work on any buckets including buckets in different AWS Accounts. This policy would go on the IAM User:\n\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Action\": [\n \"s3:ListBucket\"\n ],\n \"Resource\": [\n \"arn:aws:s3:::destination-bucket\"\n ]\n },\n {\n \"Effect\": \"Allow\",\n \"Action\": [\n \"s3:PutObject\",\n \"s3:GetObject\",\n \"s3:DeleteObject\"\n ],\n \"Resource\": [\n \"arn:aws:s3:::source-bucket/*\",\n \"arn:aws:s3:::destination-bucket/*\"\n ]\n }\n ]\n}\n\n", "Ah! I have it!\nWhile your Lambda function has permission to access the object, the fact that the object was uploaded using credentials from another AWS Account means that the object is still 'owned' by the other account.\nThere are two ways to fix this:\nTurn off ACLs on the receiving bucket\nFrom Disabling ACLs for all new buckets and enforcing Object Ownership - Amazon Simple Storage Service:\n\nWe recommend that you disable ACLs on your Amazon S3 buckets. You can do this by applying the bucket owner enforced setting for S3 Object Ownership. 
When you apply this setting, ACLs are disabled and you automatically own and have full control over all objects in your bucket.\n\nSet ownership when uploading\nWhen uploading the object when using credentials from another, you can specify ACL=bucket-owner-full-control (depending on how you perform the upload). This will grant ownership to the AWS Account that owns the receiving bucket.\n", "What if you add Region information to the session:\nsession = boto3.Session(\n aws_access_key_id=\"XXXX\",\n aws_secret_access_key=\"XXXX\",\n region_name=\"<REGION>\"\n)\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "amazon_s3", "amazon_web_services", "aws_lambda", "python", "terraform" ]
stackoverflow_0074593918_amazon_s3_amazon_web_services_aws_lambda_python_terraform.txt
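If ACLs stay enabled and you take the ownership-on-upload route described above, the cross-account upload has to grant ownership explicitly. A hedged sketch (bucket and file names are placeholders, and the bucket policy may also need to allow s3:PutObjectAcl):

# CLI form:  aws s3 cp filename.pdf s3://source-bucket/ --acl bucket-owner-full-control

import boto3

s3 = boto3.client("s3")
s3.upload_file(
    "filename.pdf", "source-bucket", "filename.pdf",
    ExtraArgs={"ACL": "bucket-owner-full-control"},
)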
Q: Upload multiple images to a post in Django View Error(Cannot resolve keyword 'post' into field.) My Model : class Gallary (models.Model): ProgramTitle = models.CharField(max_length=200, blank = False) Thum = models.ImageField(upload_to='Gallary/Thumb/',default = "", blank = False, null=False) VideoLink = models.CharField(max_length=200, blank = True,default = "") updated_on = models.DateTimeField(auto_now = True) created_on = models.DateTimeField(auto_now_add =True) status = models.IntegerField(choices=STATUS, default = 1) total_views=models.IntegerField(default=0) class Meta: verbose_name = 'Add Gallary Content' verbose_name_plural = 'Add Gallary Content' #for compress images if Thum.blank == False : def save(self, *args, **kwargs): # call the compress function new_image = compress(self.Thum) # set self.image to new_image self.Thum = new_image # save super().save(*args, **kwargs) def __str__(self): return self.ProgramTitle class GallaryDetails (models.Model): Gallary = models.ForeignKey(Gallary, default = None, on_delete = models.CASCADE) G_Images = models.ImageField(upload_to='Gallary/Images/',default = "", blank = False, null=False) #for compress images if G_Images.blank == False : def save(self, *args, **kwargs): # call the compress function new_image = compress(self.G_Images) # set self.image to new_image self.G_Images = new_image # save super().save(*args, **kwargs) def __str__(self): return self.Gallary.ProgramTitle My View def gallery(request): gallary = Gallary.objects.all() context = { 'gallary': gallary.order_by('-created_on') } return render(request, 'gallery.html',context) def album_details(request, post_id): post = get_object_or_404(Gallary,pk=post_id) photos = GallaryDetails.objects.filter(post=post) context = { 'post':post, 'photos': photos } return render(request, 'album_details.html',context) Gallary View <div class="our_gallery pt-60 ptb-80"> <div class="container"> <div class="row"> {% for post in gallary %} <div class="col-lg-4 col-md-6"> <div class="singleAlbum"> <div class="albumThumb"> <a href="{% url 'album_details' post.pk %}"> {% if post.Thum %} <img src="{{ post.Thum.url }}" alt=""> {% endif %} </a> </div> <h5 class="albumTitle"><a href="{% url 'album_details' post.pk %}">{{post.ProgramTitle}}</a></h5> </div> </div> {% endfor %} </div> </div> Gallary Details View <div id="gallery" class="container-fluid"> {% for p in photos.all %} <div class="popup-gallery"> {% if p.G_Images %} <a href="{{ p.G_Images.url }}" title="The Cleaner"><img src="{{ p.G_Images.url }}" width="75" height="75"></a> {% endif %} </div> {% endfor %} error ===enter image description here Renter image description hereesult: I'm working on a portfolio project and I want to add multiple images on the Django admin site then displaying one of the header_image and title of a project on the home/list page (like card class functionality in bootstrap) and other images on the detail page. Is it possible? A: Change photos = GallaryDetails.objects.filter(post=post) to photos = GallaryDetails.objects.filter(Gallary=post) GallaryDetails has no field called post. It's called Gallary. Also it's a good practive to only name Class names with uppercase letters. Field names should start with lowercase letters.
Upload multiple images to a post in Django View Error(Cannot resolve keyword 'post' into field.)
My Model : class Gallary (models.Model): ProgramTitle = models.CharField(max_length=200, blank = False) Thum = models.ImageField(upload_to='Gallary/Thumb/',default = "", blank = False, null=False) VideoLink = models.CharField(max_length=200, blank = True,default = "") updated_on = models.DateTimeField(auto_now = True) created_on = models.DateTimeField(auto_now_add =True) status = models.IntegerField(choices=STATUS, default = 1) total_views=models.IntegerField(default=0) class Meta: verbose_name = 'Add Gallary Content' verbose_name_plural = 'Add Gallary Content' #for compress images if Thum.blank == False : def save(self, *args, **kwargs): # call the compress function new_image = compress(self.Thum) # set self.image to new_image self.Thum = new_image # save super().save(*args, **kwargs) def __str__(self): return self.ProgramTitle class GallaryDetails (models.Model): Gallary = models.ForeignKey(Gallary, default = None, on_delete = models.CASCADE) G_Images = models.ImageField(upload_to='Gallary/Images/',default = "", blank = False, null=False) #for compress images if G_Images.blank == False : def save(self, *args, **kwargs): # call the compress function new_image = compress(self.G_Images) # set self.image to new_image self.G_Images = new_image # save super().save(*args, **kwargs) def __str__(self): return self.Gallary.ProgramTitle My View def gallery(request): gallary = Gallary.objects.all() context = { 'gallary': gallary.order_by('-created_on') } return render(request, 'gallery.html',context) def album_details(request, post_id): post = get_object_or_404(Gallary,pk=post_id) photos = GallaryDetails.objects.filter(post=post) context = { 'post':post, 'photos': photos } return render(request, 'album_details.html',context) Gallary View <div class="our_gallery pt-60 ptb-80"> <div class="container"> <div class="row"> {% for post in gallary %} <div class="col-lg-4 col-md-6"> <div class="singleAlbum"> <div class="albumThumb"> <a href="{% url 'album_details' post.pk %}"> {% if post.Thum %} <img src="{{ post.Thum.url }}" alt=""> {% endif %} </a> </div> <h5 class="albumTitle"><a href="{% url 'album_details' post.pk %}">{{post.ProgramTitle}}</a></h5> </div> </div> {% endfor %} </div> </div> Gallary Details View <div id="gallery" class="container-fluid"> {% for p in photos.all %} <div class="popup-gallery"> {% if p.G_Images %} <a href="{{ p.G_Images.url }}" title="The Cleaner"><img src="{{ p.G_Images.url }}" width="75" height="75"></a> {% endif %} </div> {% endfor %} error ===enter image description here Renter image description hereesult: I'm working on a portfolio project and I want to add multiple images on the Django admin site then displaying one of the header_image and title of a project on the home/list page (like card class functionality in bootstrap) and other images on the detail page. Is it possible?
[ "Change\nphotos = GallaryDetails.objects.filter(post=post)\nto\nphotos = GallaryDetails.objects.filter(Gallary=post)\nGallaryDetails has no field called post. It's called Gallary.\nAlso it's a good practive to only name Class names with uppercase letters. Field names should start with lowercase letters.\n" ]
[ 0 ]
[]
[]
[ "django", "django_templates", "django_views", "list", "python" ]
stackoverflow_0074605064_django_django_templates_django_views_list_python.txt
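A short sketch for the "is it possible?" part of the question above: attaching several images to one post in the Django admin is usually done by registering the child model as an inline of the parent. The model names come from the question; the import path and the extra count are illustrative assumptions, not part of the original post.

from django.contrib import admin
from .models import Gallary, GallaryDetails  # assumed app-local import

class GallaryDetailsInline(admin.TabularInline):
    model = GallaryDetails  # the child model holding the extra images
    extra = 3               # empty upload rows shown in the admin form

@admin.register(Gallary)
class GallaryAdmin(admin.ModelAdmin):
    inlines = [GallaryDetailsInline]

On the detail page the related images can then be fetched with GallaryDetails.objects.filter(Gallary=post), exactly as the accepted fix shows.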
Q: why others recipients are not able to see the image from my html? using win32 python Hello guys i'm tryng to send an image in my html, when im testing sending it to myself is working fine But, if try to send to other recepients they don't receive it how can i fix that: any suggestion, thank you my code is this: import win32com.client as win32 olApp = win32.Dispatch('Outlook.Application') olNS = olApp.GetNameSpace('MAPI') mailItem = olApp.CreateItem(0) mailItem.Subject = 'test daily Email' mailItem.BodyFormat = 1 mailItem.To = 'testing it with my myself@outlook.com' mailItem.Cc = 'others recepient@outlook.com' mailItem.htmlBody = '''\ <html> <body> <h2>This is just a test</h2> <p>Using it for test before production</p> <img src="E:\\html5.gif" alt="HTML5 Icon" style="width:128px;height:128px;"> </body> </html> ''' mailItem.Save() mailItem.Send() A: They do see the image because they cannot possibly have access to your local file (E:\html5.gif). You need to add the image as an attachment and set its content-id appropriately, then reference the image by that content id in your HTML. See https://stackoverflow.com/a/28272165/332059
why other recipients are not able to see the image from my html? using win32 python
Hello guys, I'm trying to send an image in my HTML email. When I test it by sending it to myself it works fine, but if I try to send it to other recipients they don't receive it. How can I fix that? Any suggestion is appreciated. My code is this:
import win32com.client as win32

olApp = win32.Dispatch('Outlook.Application')
olNS = olApp.GetNameSpace('MAPI')

mailItem = olApp.CreateItem(0)
mailItem.Subject = 'test daily Email'
mailItem.BodyFormat = 1
mailItem.To = 'testing it with my myself@outlook.com'
mailItem.Cc = 'others recepient@outlook.com'

mailItem.htmlBody = '''\
<html>
  <body>
    <h2>This is just a test</h2>
    <p>Using it for test before production</p>
    <img src="E:\\html5.gif" alt="HTML5 Icon" style="width:128px;height:128px;">
  </body>
</html>
'''

mailItem.Save()
mailItem.Send()
[ "They do see the image because they cannot possibly have access to your local file (E:\\html5.gif).\nYou need to add the image as an attachment and set its content-id appropriately, then reference the image by that content id in your HTML.\nSee https://stackoverflow.com/a/28272165/332059\n" ]
[ 0 ]
[]
[]
[ "html_email", "image", "outlook", "python", "win32com" ]
stackoverflow_0074604833_html_email_image_outlook_python_win32com.txt
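The answer above describes the content-id approach without code; a minimal hedged sketch is below. The cid value and the recipient address are placeholders, and the MAPI property tag used here is the commonly documented one for the attachment content-id, so treat this as an illustration rather than the exact code from the linked answer.

import win32com.client as win32

olApp = win32.Dispatch('Outlook.Application')
mail = olApp.CreateItem(0)
mail.Subject = 'test daily Email'
mail.To = 'recipient@example.com'

attachment = mail.Attachments.Add(r'E:\html5.gif')
# Give the attachment a content-id so the HTML body can reference it.
attachment.PropertyAccessor.SetProperty(
    'http://schemas.microsoft.com/mapi/proptag/0x3712001F', 'html5gif')

mail.HTMLBody = '<html><body><img src="cid:html5gif" width="128" height="128"></body></html>'
mail.Send()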
Q: How to calculate R squared on monthly frequency for daily data? Currently I calculate the R squared for the whole dataset and for monthly R squared I slice the dataframe into smaller dataframes with the corresponding month and this is really unwieldy for a large dataset. Is there a way to easy calculate R squared for each month? I use this for the whole dataset: from sklearn.metrics import r2_score y_true = df['no2'] y_pred = df['temp'] r2_score(y_true, y_pred) Dataset: date,no2,temp 2022-03-27,22.0,12.0 2022-03-28,21.0,11.0 2022-03-29,25.0,15.0 2022-03-30,29.0,12.0 2022-03-31,24.0,17.0 2022-04-21,34.0,16.0 2022-04-22,32.0,19.0 2022-04-23,38.0,18.0 2022-04-24,37.0,19.0 2022-04-25,32.0,20.0 2022-05-25,36.0,21.0 2022-05-26,34.0,23.0 2022-05-27,39.0,21.0 2022-05-28,33.0,24.0 2022-05-29,31.0,22.0 2022-05-30,30.0,26.0 What I want: date,r_squared 2022-03,45 2022-04,42 2022-05,56 A: First, I did a pre-processing step which may or may not be necessary for you. I converted the date column from object to datetime. (You can check df.dtypes to see if this step is necessary.) df['date'] = pd.to_datetime(df['date']) Next, I group the dataframe by month, using pd.Grouper() to select the grouping duration. For each group, I apply the correlation function you mention. correlation = df.set_index('date') \ .groupby(pd.Grouper(freq='MS')) \ .apply(lambda month_df: r2_score(month_df['no2'], month_df['temp'])) The previous step resulted in a Series, and you want a DataFrame. The next step is to convert that Series into a DataFrame with the desired column name. correlation = pd.DataFrame(correlation, columns=['r_squared']) Output: r_squared date 2022-03-01 -15.443299 2022-04-01 -42.621795 2022-05-01 -14.465046
How to calculate R squared on monthly frequency for daily data?
Currently I calculate the R squared for the whole dataset and for monthly R squared I slice the dataframe into smaller dataframes with the corresponding month and this is really unwieldy for a large dataset. Is there a way to easy calculate R squared for each month? I use this for the whole dataset: from sklearn.metrics import r2_score y_true = df['no2'] y_pred = df['temp'] r2_score(y_true, y_pred) Dataset: date,no2,temp 2022-03-27,22.0,12.0 2022-03-28,21.0,11.0 2022-03-29,25.0,15.0 2022-03-30,29.0,12.0 2022-03-31,24.0,17.0 2022-04-21,34.0,16.0 2022-04-22,32.0,19.0 2022-04-23,38.0,18.0 2022-04-24,37.0,19.0 2022-04-25,32.0,20.0 2022-05-25,36.0,21.0 2022-05-26,34.0,23.0 2022-05-27,39.0,21.0 2022-05-28,33.0,24.0 2022-05-29,31.0,22.0 2022-05-30,30.0,26.0 What I want: date,r_squared 2022-03,45 2022-04,42 2022-05,56
[ "First, I did a pre-processing step which may or may not be necessary for you. I converted the date column from object to datetime. (You can check df.dtypes to see if this step is necessary.)\ndf['date'] = pd.to_datetime(df['date'])\n\nNext, I group the dataframe by month, using pd.Grouper() to select the grouping duration. For each group, I apply the correlation function you mention.\ncorrelation = df.set_index('date') \\\n .groupby(pd.Grouper(freq='MS')) \\\n .apply(lambda month_df: r2_score(month_df['no2'], month_df['temp']))\n\nThe previous step resulted in a Series, and you want a DataFrame. The next step is to convert that Series into a DataFrame with the desired column name.\ncorrelation = pd.DataFrame(correlation, columns=['r_squared'])\n\nOutput:\n r_squared\ndate \n2022-03-01 -15.443299\n2022-04-01 -42.621795\n2022-05-01 -14.465046\n\n" ]
[ 1 ]
[]
[]
[ "group_by", "pandas", "python" ]
stackoverflow_0074604920_group_by_pandas_python.txt
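An equivalent hedged variant of the accepted approach above, keyed by month period so the index prints as "2022-03" the way the question asked. It assumes r2_score is imported from sklearn.metrics as in the question and that df['date'] has already been converted to datetime.

monthly_r2 = (df.groupby(df['date'].dt.to_period('M'))
                .apply(lambda g: r2_score(g['no2'], g['temp']))
                .rename('r_squared')
                .reset_index())

print(monthly_r2)  # columns: date (e.g. 2022-03) and r_squared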
Q: Access Azure EventHub with WebSocket and proxy I'm trying to access Azure EvenHub but my network makes me use proxy and allows connection only over https (port 443) Based on https://learn.microsoft.com/en-us/python/api/azure-eventhub/azure.eventhub.aio.eventhubproducerclient?view=azure-python I added proxy configuration and TransportType.AmqpOverWebsocket parametr and my Producer looks like this: async def run(): producer = EventHubProducerClient.from_connection_string( "Endpoint=sb://my_eh.servicebus.windows.net/;SharedAccessKeyName=eh-sender;SharedAccessKey=MFGf5MX6Mdummykey=", eventhub_name="my_eh", auth_timeout=180, http_proxy=HTTP_PROXY, transport_type=TransportType.AmqpOverWebsocket, ) and I get an error: File "/usr/local/lib64/python3.9/site-packages/uamqp/authentication/cbs_auth_async.py", line 74, in create_authenticator_async raise errors.AMQPConnectionError( uamqp.errors.AMQPConnectionError: Unable to open authentication session on connection b'EHProducer-a1cc5f12-96a1-4c29-ae54-70aafacd3097'. Please confirm target hostname exists: b'my_eh.servicebus.windows.net' I don't know what might be the issue. Might it be related to this one ? https://github.com/Azure/azure-event-hubs-c/issues/50#issuecomment-501437753 A: you should be able to set up a proxy that the SDK uses to access EventHub. Here is a sample that shows you how to set the HTTP_PROXY dictionary with the proxy information. Behind the scenes when proxy is passed in, it automatically goes over websockets. As @BrunoLucasAzure suggested checking the ports on the proxy itself will be good to check, because based on the error message it looks like it made it past the proxy and cant resolve the endpoint.
Access Azure EventHub with WebSocket and proxy
I'm trying to access Azure EvenHub but my network makes me use proxy and allows connection only over https (port 443) Based on https://learn.microsoft.com/en-us/python/api/azure-eventhub/azure.eventhub.aio.eventhubproducerclient?view=azure-python I added proxy configuration and TransportType.AmqpOverWebsocket parametr and my Producer looks like this: async def run(): producer = EventHubProducerClient.from_connection_string( "Endpoint=sb://my_eh.servicebus.windows.net/;SharedAccessKeyName=eh-sender;SharedAccessKey=MFGf5MX6Mdummykey=", eventhub_name="my_eh", auth_timeout=180, http_proxy=HTTP_PROXY, transport_type=TransportType.AmqpOverWebsocket, ) and I get an error: File "/usr/local/lib64/python3.9/site-packages/uamqp/authentication/cbs_auth_async.py", line 74, in create_authenticator_async raise errors.AMQPConnectionError( uamqp.errors.AMQPConnectionError: Unable to open authentication session on connection b'EHProducer-a1cc5f12-96a1-4c29-ae54-70aafacd3097'. Please confirm target hostname exists: b'my_eh.servicebus.windows.net' I don't know what might be the issue. Might it be related to this one ? https://github.com/Azure/azure-event-hubs-c/issues/50#issuecomment-501437753
[ "you should be able to set up a proxy that the SDK uses to access EventHub. Here is a sample that shows you how to set the HTTP_PROXY dictionary with the proxy information. Behind the scenes when proxy is passed in, it automatically goes over websockets.\nAs @BrunoLucasAzure suggested checking the ports on the proxy itself will be good to check, because based on the error message it looks like it made it past the proxy and cant resolve the endpoint.\n" ]
[ 1 ]
[]
[]
[ "azure_eventhub", "python", "websocket" ]
stackoverflow_0074563693_azure_eventhub_python_websocket.txt
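The question and answer above both reference an HTTP_PROXY dictionary without showing it; for the azure-eventhub SDK it is a dictionary with proxy_hostname/proxy_port keys (plus optional credentials). Host, port, and credentials below are placeholders, and conn_str stands in for the connection string from the question.

HTTP_PROXY = {
    'proxy_hostname': 'proxy.example.com',  # placeholder proxy host
    'proxy_port': 3128,                     # placeholder proxy port
    'username': 'proxyuser',                # optional, only for authenticated proxies
    'password': 'proxypass',                # optional
}

producer = EventHubProducerClient.from_connection_string(
    conn_str,
    eventhub_name="my_eh",
    http_proxy=HTTP_PROXY,
    transport_type=TransportType.AmqpOverWebsocket,
)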
Q: Hide all channels except one for everyone except some roles discord.py How can I hide all channels except one (the one can be excluded with role id or name) except for admin with discord.py. hide all channels except one (by making it private) with a command Return all channels back to normal with a command A: I'm assuming you meant channel id and not "role id", given such you can make every channel hidden by setting the permissions for @everyone "read messages" to False in every channel, except for the excluded channel. I'm personally using discord.app_commands.CommandTree so it looks like this: @tree.command(name='hideall') async def hideall(interaction: discord.Interaction, exception: discord.TextChannel): for chan in interaction.guild.channels: #for every channel in server if chan.id != exception.id: #excludes excepted channel await chan.set_permissions(interaction.guild.default_role, read_messages=False) #overwrites read_messages permission to False for @everyone i.e set to private If you want the excluded channel to shown to everyone (if it was not already), you can change the permission for that specific channel: @tree.command(name='hideall') async def hideall(interaction: discord.Interaction, exception: discord.TextChannel): for chan in interaction.guild.channels: if chan.id == exception.id: #if it is the channel await chan.set_permissions(interaction.guild.default_role, read_messages=True) #set to unprivate else: await chan.set_permissions(interaction.guild.default_role, read_messages=False) #if other channel, private If you want it for other roles (instead of @everyone), you can add role: discord.Role in the command (asking command user for role input) and change interaction.guild.default_role to role or whatever you define discord.Role as. However, this is very hard to reverse. You either have to change the permissions of the channels manually or if you want to make it a command, you might need to store the channels that already had the read_messages permissions for @everyone as False and exclude those from set_permissions in the reverse command. 
(Command1: Store channel id of channel that already had read_messages permission set to False in a file Command2: If channel does not match any channel id(s) in file, set permission, read_messages back to True) Here's how I did the second command: @tree.command(name='returnall') async def returnall(interaction: discord.Interaction, role: discord.Role=None): if role == None: role = interaction.guild.default_role for chan in interaction.guild.channels: #for every channel in server for line in open('filename.txt', 'r'): if chan.id == int(line): #check if channel id is in the file await chan.set_permissions(role, view_channel=False) break else: #if channel id is not in file await chan.set_permissions(role, view_channel=True) Editing first command: with open("filename.txt",'r+') as file: file.truncate(0) #clears file for chan in interaction.guild.channels: #for every channel in server if chan.permissions_for(role).view_channel == False: #if channel is originally private with open('filename.txt', 'a') as f: categorycheck = discord.utils.get(interaction.guild.categories, id=chan.id) if categorycheck == None: #if it is not a category f.write(str(chan.id) + '\n') #write channel id in file for chan in interaction.guild.channels: #for every channel in server if chan.id != exception.id: #excludes excepted channel await chan.set_permissions(interaction.guild.default_role, read_messages=False) #overwrites read_messages permission to False for @everyone i.e set to private Things to note: My command returns all of the channels to unprivate before putting the channels that are to be privated to private (Showing privated channels to everyone before putting them back) This is probably not the best way to do this but it is one that works. Reference for overwriting the permissions of all channels Reading text files and Writing text files
Hide all channels except one for everyone except some roles discord.py
How can I hide all channels except one (the channel to exclude can be given by id or name) from everyone except admins with discord.py?
Hide all channels except one (by making them private) with a command.
Return all channels back to normal with another command.
[ "I'm assuming you meant channel id and not \"role id\", given such you can make every channel hidden by setting the permissions for @everyone \"read messages\" to False in every channel, except for the excluded channel.\nI'm personally using discord.app_commands.CommandTree so it looks like this:\n@tree.command(name='hideall')\nasync def hideall(interaction: discord.Interaction, exception: discord.TextChannel):\n for chan in interaction.guild.channels: #for every channel in server\n if chan.id != exception.id: #excludes excepted channel\n await chan.set_permissions(interaction.guild.default_role, read_messages=False)\n #overwrites read_messages permission to False for @everyone i.e set to private\n\nIf you want the excluded channel to shown to everyone (if it was not already), you can change the permission for that specific channel:\n@tree.command(name='hideall')\nasync def hideall(interaction: discord.Interaction, exception: discord.TextChannel):\n for chan in interaction.guild.channels:\n if chan.id == exception.id: #if it is the channel\n await chan.set_permissions(interaction.guild.default_role, read_messages=True) #set to unprivate\n else:\n await chan.set_permissions(interaction.guild.default_role, read_messages=False) #if other channel, private\n\nIf you want it for other roles (instead of @everyone), you can add role: discord.Role in the command (asking command user for role input) and change interaction.guild.default_role to role or whatever you define discord.Role as.\nHowever, this is very hard to reverse. You either have to change the permissions of the channels manually or if you want to make it a command, you might need to store the channels that already had the read_messages permissions for @everyone as False and exclude those from set_permissions in the reverse command.\n(Command1: Store channel id of channel that already had read_messages permission set to False in a file\nCommand2: If channel does not match any channel id(s) in file, set permission, read_messages back to True)\nHere's how I did the second command:\n@tree.command(name='returnall')\nasync def returnall(interaction: discord.Interaction, role: discord.Role=None):\n if role == None:\n role = interaction.guild.default_role\n for chan in interaction.guild.channels: #for every channel in server\n for line in open('filename.txt', 'r'):\n if chan.id == int(line): #check if channel id is in the file\n await chan.set_permissions(role, view_channel=False)\n break\n else: #if channel id is not in file\n await chan.set_permissions(role, view_channel=True)\n\nEditing first command:\nwith open(\"filename.txt\",'r+') as file:\n file.truncate(0) #clears file\nfor chan in interaction.guild.channels: #for every channel in server\n if chan.permissions_for(role).view_channel == False: #if channel is originally private\n with open('filename.txt', 'a') as f:\n categorycheck = discord.utils.get(interaction.guild.categories, id=chan.id)\n if categorycheck == None: #if it is not a category\n f.write(str(chan.id) + '\\n') #write channel id in file\nfor chan in interaction.guild.channels: #for every channel in server\n if chan.id != exception.id: #excludes excepted channel\n await chan.set_permissions(interaction.guild.default_role, read_messages=False)\n #overwrites read_messages permission to False for @everyone i.e set to private\n\nThings to note:\nMy command returns all of the channels to unprivate before putting the channels that are to be privated to private (Showing privated channels to everyone before putting them 
back)\nThis is probably not the best way to do this but it is one that works.\nReference for overwriting the permissions of all channels\nReading text files and Writing text files\n" ]
[ 0 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0074483979_discord_discord.py_python.txt
Q: Wrong output in function Hi I'am totally new to programmering and i have just jumped into it. The problem i am trying to solve is to make a function that standardized an adress as input. example: def standardize_address(a): numbers =[] letters = [] a.replace('_', ' ') for word in a.split(): if word. isdigit(): numbers. append(int(word)) elif word.isalpha(): letters.append(word) s = f"{numbers} {letters}" return s Can someone help me explain my error and give me a "pro" programmers solution and "noob" (myself) solution? This is what i should print: a = 'New_York 10001' s = standardize_address(a) print(s) and the output should be: 10001 New York Right now my output is: [10001] ['New', 'York'] A: Issues strings are immutable so you need to keep the replace result, so do a = a.replace('_', ' ') or chain it before the split call You need to concatenate the lists into one numbers + letters then join the elements with " ".join() don't convert the numeric to int, that's useless and would force you to convert them back to str in the " ".join def standardize_address(a): numbers = [] letters = [] for word in a.replace('_', ' ').split(): if word.isdigit(): numbers.append(word) elif word.isalpha(): letters.append(word) return ' '.join(numbers + letters) Improve In fact you want to sort the words regarding the isdigit condition, so you can express that with a sort and the appropriate sorted def standardize_address(value): return ' '.join(sorted(value.replace('_', ' ').split(), key=str.isdigit, reverse=True)) A: numbers and letters are both lists of strings, and if you format them they'll be rendered with []s and ''s appropriately. What you want to do is to replace this: s = f"{numbers} {letters}" return s with this: return ' '.join(numbers + letters) numbers + letters is the combined list of number-strings and letter-strings, and ' '.join() takes that list and turns it into a string by putting ' ' between each item.
Wrong output in function
Hi, I'm totally new to programming and I have just jumped into it. The problem I am trying to solve is to make a function that standardizes an address given as input.
Example:
def standardize_address(a):
    numbers = []
    letters = []
    a.replace('_', ' ')
    for word in a.split():
        if word.isdigit():
            numbers.append(int(word))
        elif word.isalpha():
            letters.append(word)
    s = f"{numbers} {letters}"
    return s

Can someone explain my error and give me a "pro" programmer's solution and a "noob" (myself) solution?
This is what I run:
a = 'New_York 10001'
s = standardize_address(a)
print(s)

and the output should be:
10001 New York

Right now my output is:
[10001] ['New', 'York']
[ "Issues\n\nstrings are immutable so you need to keep the replace result, so do a = a.replace('_', ' ') or chain it before the split call\n\nYou need to concatenate the lists into one numbers + letters then join the elements with \" \".join()\n\ndon't convert the numeric to int, that's useless and would force you to convert them back to str in the \" \".join\n\n\ndef standardize_address(a):\n numbers = []\n letters = []\n for word in a.replace('_', ' ').split():\n if word.isdigit():\n numbers.append(word)\n elif word.isalpha():\n letters.append(word)\n return ' '.join(numbers + letters)\n\n\nImprove\nIn fact you want to sort the words regarding the isdigit condition, so you can express that with a sort and the appropriate sorted\ndef standardize_address(value):\n return ' '.join(sorted(value.replace('_', ' ').split(),\n key=str.isdigit, reverse=True))\n\n", "numbers and letters are both lists of strings, and if you format them they'll be rendered with []s and ''s appropriately. What you want to do is to replace this:\n s = f\"{numbers} {letters}\"\n \n \n return s\n\nwith this:\nreturn ' '.join(numbers + letters)\n\nnumbers + letters is the combined list of number-strings and letter-strings, and ' '.join() takes that list and turns it into a string by putting ' ' between each item.\n" ]
[ 2, 0 ]
[]
[]
[ "append", "function", "loops", "python" ]
stackoverflow_0074605167_append_function_loops_python.txt
Q: Pandas column split ValueError: Columns must be same length as key I have dataframe structured like: Location_Identifier Location_Name Location_Type Observed_Property 5728 place 1 Groundwater 39398 - ETHION IN WHOLE WATER SAMPLE (UG/L) 535 place 2 Groundwater 946 - SULFATE, DISSOLVED (MG/L AS SO4) 1003 place 3 Groundwater 1145 - SELENIUM, DISSOLVED (UG/L AS SE) 12151 place 4 Surface Water 94 - SPECIFIC CONDUCTANCE, FIELD (UMHOS/CM @ 25C) 1571 place 5 Groundwater 82078 - TURBIDITY, FIELD NEPHELOMETRIC TURBIDITY UNITS (NTU) 8094 place 6 Spring 90068 - SAMPLE DEPTH FROM SURFACE (METERS) 2778 place 7 Groundwater 1044 - IRON, SUSPENDED (UG/L AS FE) When I attempt to split the "Observed Property" field, I receive the following error: df[["pcode","pname"]] = df["Observed_Property"].str.split('-',expand=True) ValueError: Columns must be same length as key A: I ran your code and I am fairly certain you have some values in "Observed_Property" That have more than one '-' so when you split the values, you get more than 2 columns. from io import StringIO import pandas as pd dfstr = """Location_Identifier Location_Name Location_Type Observed_Property 5728 place 1 Groundwater 39398 - ETHION IN WHOLE WATER SAMPLE (UG/L) 535 place 2 Groundwater 946 - SULFATE, DISSOLVED (MG/L AS SO4) 1003 place 3 Groundwater 1145 - SELENIUM, DISSOLVED (UG/L AS SE) 12151 place 4 Surface Water 94 - SPECIFIC CONDUCTANCE, FIELD (UMHOS/CM @ 25C) 1571 place 5 Groundwater 82078 - TURBIDITY, FIELD NEPHELOMETRIC TURBIDITY UNITS (NTU) 8094 place 6 Spring 90068 - SAMPLE DEPTH FROM SURFACE (METERS) 2778 place 7 Groundwater 1044 - IRON, SUSPENDED (UG/L AS FE)""" df = pd.read_csv(StringIO(dfstr), sep='\t') df[["pcode","pname"]] = df["Observed_Property"].str.split('-',expand=True) If I just use your example df and run your split code, it works as expected. But, I can break it by adding a value to 'Oberserved_Property' that has two '-'. df.loc[6] = [1234, 'place 8', 'Groundwater', '12345 - Name-of-place'] Location_Identifier Location_Name Location_Type Observed_Property 0 5728 place 1 Groundwater 39398 - ETHION IN WHOLE WATER SAMPLE (UG/L) 1 535 place 2 Groundwater 946 - SULFATE, DISSOLVED (MG/L AS SO4) 2 1003 place 3 Groundwater 1145 - SELENIUM, DISSOLVED (UG/L AS SE) 3 12151 place 4 Surface Water 94 - SPECIFIC CONDUCTANCE, FIELD (UMHOS/CM @ 25C) 4 1571 place 5 Groundwater 82078 - TURBIDITY, FIELD NEPHELOMETRIC TURBIDI... 5 8094 place 6 Spring 90068 - SAMPLE DEPTH FROM SURFACE (METERS) 6 1234 place 8 Groundwater 12345 - Name-of-place Now if I run the same code I get same error as you. df[["pcode","pname"]] = df["Observed_Property"].str.split('-',expand=True) ValueError: Columns must be same length as key One way you can get around this is by passing in a more stringent split argument. df[["pcode","pname"]] = df["Observed_Property"].str.split('[0-9] -',expand=True) This tells pandas to split on a digit ([0-9]) followed by a space and a '-'. This will then prevent it from splitting on other '-' that is not preceded by a digit. Based on what the rest of your data looks like you can modify the regex and get the right split.
Pandas column split ValueError: Columns must be same length as key
I have dataframe structured like: Location_Identifier Location_Name Location_Type Observed_Property 5728 place 1 Groundwater 39398 - ETHION IN WHOLE WATER SAMPLE (UG/L) 535 place 2 Groundwater 946 - SULFATE, DISSOLVED (MG/L AS SO4) 1003 place 3 Groundwater 1145 - SELENIUM, DISSOLVED (UG/L AS SE) 12151 place 4 Surface Water 94 - SPECIFIC CONDUCTANCE, FIELD (UMHOS/CM @ 25C) 1571 place 5 Groundwater 82078 - TURBIDITY, FIELD NEPHELOMETRIC TURBIDITY UNITS (NTU) 8094 place 6 Spring 90068 - SAMPLE DEPTH FROM SURFACE (METERS) 2778 place 7 Groundwater 1044 - IRON, SUSPENDED (UG/L AS FE) When I attempt to split the "Observed Property" field, I receive the following error: df[["pcode","pname"]] = df["Observed_Property"].str.split('-',expand=True) ValueError: Columns must be same length as key
[ "I ran your code and I am fairly certain you have some values in \"Observed_Property\" That have more than one '-' so when you split the values, you get more than 2 columns.\nfrom io import StringIO\nimport pandas as pd\n\n\ndfstr = \"\"\"Location_Identifier Location_Name Location_Type Observed_Property\n5728 place 1 Groundwater 39398 - ETHION IN WHOLE WATER SAMPLE (UG/L)\n535 place 2 Groundwater 946 - SULFATE, DISSOLVED (MG/L AS SO4)\n1003 place 3 Groundwater 1145 - SELENIUM, DISSOLVED (UG/L AS SE)\n12151 place 4 Surface Water 94 - SPECIFIC CONDUCTANCE, FIELD (UMHOS/CM @ 25C)\n1571 place 5 Groundwater 82078 - TURBIDITY, FIELD NEPHELOMETRIC TURBIDITY UNITS (NTU)\n8094 place 6 Spring 90068 - SAMPLE DEPTH FROM SURFACE (METERS)\n2778 place 7 Groundwater 1044 - IRON, SUSPENDED (UG/L AS FE)\"\"\"\n\ndf = pd.read_csv(StringIO(dfstr), sep='\\t')\n\ndf[[\"pcode\",\"pname\"]] = df[\"Observed_Property\"].str.split('-',expand=True)\n\n\nIf I just use your example df and run your split code, it works as expected. But, I can break it by adding a value to 'Oberserved_Property' that has two '-'.\ndf.loc[6] = [1234, 'place 8', 'Groundwater', '12345 - Name-of-place']\n\n\n\n\n\n\nLocation_Identifier\nLocation_Name\nLocation_Type\nObserved_Property\n\n\n\n\n0\n5728\nplace 1\nGroundwater\n39398 - ETHION IN WHOLE WATER SAMPLE (UG/L)\n\n\n1\n535\nplace 2\nGroundwater\n946 - SULFATE, DISSOLVED (MG/L AS SO4)\n\n\n2\n1003\nplace 3\nGroundwater\n1145 - SELENIUM, DISSOLVED (UG/L AS SE)\n\n\n3\n12151\nplace 4\nSurface Water\n94 - SPECIFIC CONDUCTANCE, FIELD (UMHOS/CM @ 25C)\n\n\n4\n1571\nplace 5\nGroundwater\n82078 - TURBIDITY, FIELD NEPHELOMETRIC TURBIDI...\n\n\n5\n8094\nplace 6\nSpring\n90068 - SAMPLE DEPTH FROM SURFACE (METERS)\n\n\n6\n1234\nplace 8\nGroundwater\n12345 - Name-of-place\n\n\n\n\nNow if I run the same code I get same error as you.\ndf[[\"pcode\",\"pname\"]] = df[\"Observed_Property\"].str.split('-',expand=True)\nValueError: Columns must be same length as key\n\nOne way you can get around this is by passing in a more stringent split argument.\ndf[[\"pcode\",\"pname\"]] = df[\"Observed_Property\"].str.split('[0-9] -',expand=True)\n\nThis tells pandas to split on a digit ([0-9]) followed by a space and a '-'. This will then prevent it from splitting on other '-' that is not preceded by a digit. Based on what the rest of your data looks like you can modify the regex and get the right split.\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074604902_dataframe_pandas_python.txt
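One more hedged variant for the question above, assuming every value has the form "<code> - <name>": splitting once on the literal " - " keeps the whole numeric code (a character-class pattern such as '[0-9] -' also consumes the code's final digit).

df[["pcode", "pname"]] = df["Observed_Property"].str.split(" - ", n=1, expand=True)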
Q: How to add Thread Name to each request context to Flask In creating a python thread with name, we can do it by t = threading.Thread(target=func,name="my_thread") In flask, each request spawns its own thread holding the context until the process is completed. How do i assign dynamic name to these threads being created by flask? A: can try this one chain-logging from flask import Flask from chain_logging.flask import setup_chained_logger, logger app = Flask(__name__) setup_chained_logger(app) @app.get("/home") def home(): logger.info("this is a trial 1") logger.error("this is a trial 2") logger.warning("this is a trial 3") logger.critical("this is a trial 3") return {"status": "welcome home!"} every time you make request, you;ll have different id A: Just found the quick and easy solution to this. Within the flask's api context somewhere you can get the current thread and rename it: t = threading.current_thread() t.name = "renamed_thread" This way, i can better see in vscode debugging window the threads spawn for each API calls without inspecting them individually...
How to add Thread Name to each request context to Flask
When creating a Python thread with a name, we can do it like this:
t = threading.Thread(target=func, name="my_thread")

In Flask, each request spawns its own thread holding the context until the process is completed. How do I assign a dynamic name to the threads being created by Flask?
[ "can try this one chain-logging\nfrom flask import Flask\nfrom chain_logging.flask import setup_chained_logger, logger\n\napp = Flask(__name__)\n\nsetup_chained_logger(app)\n\n@app.get(\"/home\")\ndef home():\n logger.info(\"this is a trial 1\")\n logger.error(\"this is a trial 2\")\n logger.warning(\"this is a trial 3\")\n logger.critical(\"this is a trial 3\")\n return {\"status\": \"welcome home!\"}\n\nevery time you make request, you;ll have different id\n", "Just found the quick and easy solution to this.\nWithin the flask's api context somewhere you can get the current thread and rename it:\nt = threading.current_thread()\nt.name = \"renamed_thread\"\n\nThis way, i can better see in vscode debugging window the threads spawn for each API calls without inspecting them individually...\n" ]
[ 0, 0 ]
[]
[]
[ "flask", "multithreading", "python" ]
stackoverflow_0074308792_flask_multithreading_python.txt
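A small sketch building on the second answer above: renaming the handling thread once per request with a before_request hook, so every request thread gets a dynamic name automatically. The name format here is an arbitrary choice for illustration.

import threading
from flask import Flask, request

app = Flask(__name__)

@app.before_request
def name_worker_thread():
    # Rename the thread that is about to handle this request.
    threading.current_thread().name = f"req-{request.method}-{request.path}"

@app.get("/home")
def home():
    return {"thread": threading.current_thread().name}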
Q: How to get output layer values during training tensorflow Is it possible to get the output layer values during the training in order to build a custom loss function. to be more specific i want to get the output value and compute the loss using external method my problem is I can't pass tf.eval() before initialize variables using tf.global_variables_initializer() def run_command(im_path,p): s = 'cmd'+p.eval() os.system(s) im = imread(im_path) return im def cross_corr(y_true, y_pred): path1 = 'path_to_input_image' true_image = run_command(path1, y_pred) path2 = 'path_to_predicted_image' predicted_image = run_command(path2,y_true) pearson_r, update_op = tf.contrib.metrics.streaming_pearson_correlation(predicted_image, true_image, name='pearson_r') loss = 1-(tf.math.square(pearson_r)) return loss *** *** # Create the network *** tf.global_variables_initializer() *** # run training with tf.Session() as sess: *** A: If your model has a single output value, you can subclass tf.keras.losses.Loss. For example, a trivial custom loss function (which wouldn't be very good in training) implementation: import tensorflow as tf class LossCustom(tf.keras.losses.Loss): def __init__(self, some_arg): super(LossCustom, self).__init__() self.param_some_arg = some_arg def get_config(self): config = super(LossCustom, self).get_config() config.update({ "some_arg": self.param_some_arg, }) return config def call(self, y_true, y_pred): result = y_pred - y_true return tf.math.reduce_sum(result) However, if you have multiple outputs and want to run them through the same loss function, you need to artificially combine them into a single value. I do this in one of my models with an a dummy layer at the end of my model like this: import tensorflow as tf class LayerCheeseMultipleOut(tf.keras.layers.Layer): def __init__(self, **kwargs): super(LayerCheeseMultipleOut, self).__init__(**kwargs) def call(self, inputs): return tf.stack(inputs, axis=1) # [batch_size, OUTPUTS, .... ] Then, in your custom loss function, unstack again like so: output_a, output_b = tf.unstack(y_pred, axis=1)
How to get output layer values during training tensorflow
Is it possible to get the output layer values during the training in order to build a custom loss function. to be more specific i want to get the output value and compute the loss using external method my problem is I can't pass tf.eval() before initialize variables using tf.global_variables_initializer() def run_command(im_path,p): s = 'cmd'+p.eval() os.system(s) im = imread(im_path) return im def cross_corr(y_true, y_pred): path1 = 'path_to_input_image' true_image = run_command(path1, y_pred) path2 = 'path_to_predicted_image' predicted_image = run_command(path2,y_true) pearson_r, update_op = tf.contrib.metrics.streaming_pearson_correlation(predicted_image, true_image, name='pearson_r') loss = 1-(tf.math.square(pearson_r)) return loss *** *** # Create the network *** tf.global_variables_initializer() *** # run training with tf.Session() as sess: ***
[ "If your model has a single output value, you can subclass tf.keras.losses.Loss. For example, a trivial custom loss function (which wouldn't be very good in training) implementation:\nimport tensorflow as tf\n\nclass LossCustom(tf.keras.losses.Loss):\n def __init__(self, some_arg):\n super(LossCustom, self).__init__()\n self.param_some_arg = some_arg\n \n def get_config(self):\n config = super(LossCustom, self).get_config()\n config.update({\n \"some_arg\": self.param_some_arg,\n })\n return config\n \n def call(self, y_true, y_pred):\n result = y_pred - y_true\n return tf.math.reduce_sum(result)\n\nHowever, if you have multiple outputs and want to run them through the same loss function, you need to artificially combine them into a single value.\nI do this in one of my models with an a dummy layer at the end of my model like this:\nimport tensorflow as tf\n\nclass LayerCheeseMultipleOut(tf.keras.layers.Layer):\n def __init__(self, **kwargs):\n super(LayerCheeseMultipleOut, self).__init__(**kwargs)\n \n def call(self, inputs):\n return tf.stack(inputs, axis=1) # [batch_size, OUTPUTS, .... ]\n\nThen, in your custom loss function, unstack again like so:\noutput_a, output_b = tf.unstack(y_pred, axis=1)\n\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "machine_learning", "python", "tensorflow" ]
stackoverflow_0060041546_deep_learning_machine_learning_python_tensorflow.txt
Q: Are line breaks possible in Python match case patterns? I want a match pattern with a rather long OR-pattern something like: match item: case Really.Long.Qualified.Name.ONE | Really.Long.Qualified.Name.TWO | Really.Long.Qualified.Name.THREE | Some.Other.Patterns.Here: pass This is obviously very annoying to have on a single line. However, PyCharm doesn't seem to warn about longline as per usual and reports syntax errors if I use a line-break (even if it's escaped). Is there any way to format this code more nicely, or must the entire pattern be on a single line? Is there a definitive source that establishes this - I couldn't find it in the PEP for match/case OR in particular. If the latter, why would that language design decision be made? It seems... not good.... A: You can wrap such chain expressions within a pair of parenthesis. match item: case ( Really.Long.Qualified.Name.ONE | Really.Long.Qualified.Name.TWO | Really.Long.Qualified.Name.THREE | Some.Other.Patterns.Here ): pass
Are line breaks possible in Python match case patterns?
I want a match pattern with a rather long OR-pattern something like: match item: case Really.Long.Qualified.Name.ONE | Really.Long.Qualified.Name.TWO | Really.Long.Qualified.Name.THREE | Some.Other.Patterns.Here: pass This is obviously very annoying to have on a single line. However, PyCharm doesn't seem to warn about longline as per usual and reports syntax errors if I use a line-break (even if it's escaped). Is there any way to format this code more nicely, or must the entire pattern be on a single line? Is there a definitive source that establishes this - I couldn't find it in the PEP for match/case OR in particular. If the latter, why would that language design decision be made? It seems... not good....
[ "You can wrap such chain expressions within a pair of parenthesis.\nmatch item:\n case (\n Really.Long.Qualified.Name.ONE |\n Really.Long.Qualified.Name.TWO |\n Really.Long.Qualified.Name.THREE |\n Some.Other.Patterns.Here\n ):\n pass\n\n" ]
[ 1 ]
[]
[]
[ "code_formatting", "match", "python" ]
stackoverflow_0074605197_code_formatting_match_python.txt
Q: How can I include the relative path to a module in a Python logging statement? My project has a subpackage nested under the root package like so: mypackage/ __init__.py topmodule.py subpackage/ __init__.py nested.py My goal is to get logging records formatted like: mypackage/topmodule.py:123: First log message mypackage/subpackage/nested.py:456: Second log message so that the paths become clickable in my terminal. I've tried the following formats. '%(modulename).pys:%(lineno): %(message)s' isn't clickable (the dots need to be slashes): mypackage.topmodule.py:123: First log message mypackage.subpackage.nested.py:456: Second log message 'mypackage/%(filename)s:%(lineno): %(message)s' doesn't work for subpackages: mypackage/topmodule.py:123: First log message mypackage/nested.py:456: Second log message '%(pathname)s:%(lineno): %(message)s' produces clickable paths, but they're so long that they cut off the rest of my logging : /Users/jacebrowning/Documents/mypackage/topmodule.py:123: First log message /Users/jacebrowning/Documents/mypackage/subpackage/nested.py:456: Second log message Is there a logging pattern I can pass to logging.basicConfig(format='???') that wil produce the logging records I desire? A: You'd have to do additional processing to get the path that you want here. You can do such processing and add additional information to log records, including the 'local' path for your own package, by creating a custom filter. Filters don't actually have to do filtering, but they do get access to all log records, so they are a great way of updating records with missing information. Just make sure you return True when done: import logging import os import sys class PackagePathFilter(logging.Filter): def filter(self, record): pathname = record.pathname record.relativepath = None abs_sys_paths = map(os.path.abspath, sys.path) for path in sorted(abs_sys_paths, key=len, reverse=True): # longer paths first if not path.endswith(os.sep): path += os.sep if pathname.startswith(path): record.relativepath = os.path.relpath(pathname, path) break return True This finds the sys.path entry that's the parent dir for the pathname on the logrecord and adds a new relativepath entry on the log record. You can then use %(relativepath)s to include it in your log. Add the filter to any handler that you have configured with your custom formatter: handler.addFilter(PackagePathFilter()) and together with '%(relativepath)s:%(lineno)s: %(message)s' as the format your log messages will come out like: mypackage/topmodule.py:123: First log message mypackage/subpackage/nested.py:456: Second log message (actual output, except I altered the line numbers on that). A: This produces the same result. import os import logging class PackagePathFilter(logging.Filter): def filter(self, record): record.pathname = record.pathname.replace(os.getcwd(),"") return True add handler handler.addFilter(PackagePathFilter()) and together with '%(pathname)s:%(lineno)s: %(message)s' as the format your log.
How can I include the relative path to a module in a Python logging statement?
My project has a subpackage nested under the root package like so: mypackage/ __init__.py topmodule.py subpackage/ __init__.py nested.py My goal is to get logging records formatted like: mypackage/topmodule.py:123: First log message mypackage/subpackage/nested.py:456: Second log message so that the paths become clickable in my terminal. I've tried the following formats. '%(modulename).pys:%(lineno): %(message)s' isn't clickable (the dots need to be slashes): mypackage.topmodule.py:123: First log message mypackage.subpackage.nested.py:456: Second log message 'mypackage/%(filename)s:%(lineno): %(message)s' doesn't work for subpackages: mypackage/topmodule.py:123: First log message mypackage/nested.py:456: Second log message '%(pathname)s:%(lineno): %(message)s' produces clickable paths, but they're so long that they cut off the rest of my logging : /Users/jacebrowning/Documents/mypackage/topmodule.py:123: First log message /Users/jacebrowning/Documents/mypackage/subpackage/nested.py:456: Second log message Is there a logging pattern I can pass to logging.basicConfig(format='???') that wil produce the logging records I desire?
[ "You'd have to do additional processing to get the path that you want here.\nYou can do such processing and add additional information to log records, including the 'local' path for your own package, by creating a custom filter.\nFilters don't actually have to do filtering, but they do get access to all log records, so they are a great way of updating records with missing information. Just make sure you return True when done:\nimport logging\nimport os\nimport sys\n\n\nclass PackagePathFilter(logging.Filter):\n def filter(self, record):\n pathname = record.pathname\n record.relativepath = None\n abs_sys_paths = map(os.path.abspath, sys.path)\n for path in sorted(abs_sys_paths, key=len, reverse=True): # longer paths first\n if not path.endswith(os.sep):\n path += os.sep\n if pathname.startswith(path):\n record.relativepath = os.path.relpath(pathname, path)\n break\n return True\n\nThis finds the sys.path entry that's the parent dir for the pathname on the logrecord and adds a new relativepath entry on the log record. You can then use %(relativepath)s to include it in your log.\nAdd the filter to any handler that you have configured with your custom formatter:\nhandler.addFilter(PackagePathFilter())\n\nand together with '%(relativepath)s:%(lineno)s: %(message)s' as the format your log messages will come out like:\nmypackage/topmodule.py:123: First log message\nmypackage/subpackage/nested.py:456: Second log message\n\n(actual output, except I altered the line numbers on that).\n", "This produces the same result.\nimport os\nimport logging\n\n class PackagePathFilter(logging.Filter):\n def filter(self, record):\n record.pathname = record.pathname.replace(os.getcwd(),\"\")\n return True\n\nadd handler\nhandler.addFilter(PackagePathFilter())\n\nand together with '%(pathname)s:%(lineno)s: %(message)s' as the format your log.\n" ]
[ 8, 0 ]
[]
[]
[ "logging", "python" ]
stackoverflow_0052582458_logging_python.txt
Q: FutureWarning in using iteritems() in use .iloc() pandas When I use s.iteritems() in using .iloc I see the below warning: FutureWarning: iteritems is deprecated and will be removed in a future version. Use .items instead. for item in s.iteritems() While I'm using this function: temp3 = temp2.iloc[:, 0] I am using python 3.8 and don't know why I'm getting this warning. I also tried the following: temp3 = temp2.iloc[:, 0].copy() temp3 = temp2.loc[:, 0].copy() temp3 = temp2[0].copy() But it's the same. A: By the way, I solved my problem with: temp3 = temp2[0].values But I don't have any idea why I'm getting this warning! A: You could edit the module directly, since it gives the line number as 606 in this file (and wait for a fix later from JetBrains): C:\Program Files\JetBrains\PyCharm 2022.2.4\plugins\python\helpers\pydev_pydevd_bundle\pydevd_utils.py Change the iteritems to item: def _series_to_str(s, max_items): res = [] s = s[:max_items] for item in s.items(): #for item in s.iteritems(): #bugged on panda 1.5 FutureWarning # item: (index, value) res.append(str(item)) return ' '.join(res)
FutureWarning in using iteritems() in use .iloc() pandas
When I use s.iteritems() in using .iloc I see the below warning: FutureWarning: iteritems is deprecated and will be removed in a future version. Use .items instead. for item in s.iteritems() While I'm using this function: temp3 = temp2.iloc[:, 0] I am using python 3.8 and don't know why I'm getting this warning. I also tried the following: temp3 = temp2.iloc[:, 0].copy() temp3 = temp2.loc[:, 0].copy() temp3 = temp2[0].copy() But it's the same.
[ "By the way, I solved my problem with:\ntemp3 = temp2[0].values\n\nBut I don't have any idea why I'm getting this warning!\n", "You could edit the module directly, since it gives the line number as 606 in this file (and wait for a fix later from JetBrains):\nC:\\Program Files\\JetBrains\\PyCharm 2022.2.4\\plugins\\python\\helpers\\pydev_pydevd_bundle\\pydevd_utils.py\nChange the iteritems to item:\ndef _series_to_str(s, max_items):\n res = []\n s = s[:max_items]\n for item in s.items():\n #for item in s.iteritems(): #bugged on panda 1.5 FutureWarning \n # item: (index, value)\n res.append(str(item))\n return ' '.join(res)\n\n" ]
[ 1, 0 ]
[]
[]
[ "pandas", "python", "python_3.x" ]
stackoverflow_0074442073_pandas_python_python_3.x.txt
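If editing the bundled PyCharm helper (second answer above) is not an option, the warning can also be silenced from user code; this is a hedged workaround rather than a fix, since the deprecated iteritems() call still happens inside the debugger helper.

import warnings

warnings.filterwarnings(
    "ignore",
    message="iteritems is deprecated",
    category=FutureWarning,
)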
Q: Confusion about multithreading in Python sockets and FastAPI Problem: I am trying to figure out if I really need to implement any thread safe mechanism when dealing with a client with multiple threads accessing the same (client) socket, and the information I find seems contradictory. Every implementation I find consist basically in a server spawning new threads to hanlde each client's connection. For example: How to make a simple multithreaded socket server in Python that remembers clients And as you can see, there is no explicity thread-safety mechanism. However, according to this answer, sockets are not thread safe (and I have read that under some conditions there can be problems when using the same socket although nothing clear to me): Python: Socket and threads? Context The setup I need to deploy to production is a FastAPI server that accepts http requests, and then, as a client, makes requests to a server socket maintaining a sinlge socket regardless of http clients. My routes are not async, and that means parallel http requests from the frontend (or frontends) will be handled as different threads by FastAPI and then those threads will use the same socket to send data to the socket. The problem is that I have experienced "broken pipe" errors some times and I don't really know where the weak point of the whole setup might be. These errors occur in the client socket end of the FastAPI appplication, when handling more than one request at the same time. In short, I would like to know if someone has suggestions in terms of thread safety for the whole setup. A: If I understand correctly you have a single FastAPI server. This server can handle multiple client connections simultaneously. Each connection has its own socket and thread. To handle requests, the FastAPI server communicates with another, back-end server. You use a single socket to communicate with the back-end server. When listing for new client connections, there are no multi-threading issues. Only the main thread listens on the socket. When a new client connects, you get a new socket and you create a new thread. When communicating with the client, there are no multi-threading issues. Every connection has its own socket and thread. This thread is the only thread that uses this socket. When communicating with the back-end server, there are multi-threading issues. All the threads use the same socket. As sockets are not thread-safe, this can and will cause issues. You need to come up with a different setup. You can have each thread create its own socket to communicate with the back-end server. This is a simple setup but uses more resources (on both servers). You can synchronize access to the one back-end socket so that only one thread can use the socket at any given time. This means that clients will block when another client is communicating with the back-end server.
Confusion about multithreading in Python sockets and FastAPI
Problem: I am trying to figure out if I really need to implement any thread-safety mechanism when dealing with a client that has multiple threads accessing the same (client) socket, and the information I find seems contradictory.
Every implementation I find consists basically of a server spawning new threads to handle each client's connection. For example:
How to make a simple multithreaded socket server in Python that remembers clients
And as you can see, there is no explicit thread-safety mechanism. However, according to this answer, sockets are not thread safe (and I have read that under some conditions there can be problems when using the same socket, although nothing clear to me):
Python: Socket and threads?
Context
The setup I need to deploy to production is a FastAPI server that accepts HTTP requests and then, as a client, makes requests to a server socket, maintaining a single socket regardless of HTTP clients.
My routes are not async, which means parallel HTTP requests from the frontend (or frontends) will be handled as different threads by FastAPI, and those threads will use the same socket to send data.
The problem is that I have experienced "broken pipe" errors sometimes and I don't really know where the weak point of the whole setup might be. These errors occur on the client socket end of the FastAPI application, when handling more than one request at the same time.
In short, I would like to know if someone has suggestions in terms of thread safety for the whole setup.
[ "If I understand correctly you have a single FastAPI server. This server can handle multiple client connections simultaneously. Each connection has its own socket and thread. To handle requests, the FastAPI server communicates with another, back-end server. You use a single socket to communicate with the back-end server.\nWhen listing for new client connections, there are no multi-threading issues. Only the main thread listens on the socket. When a new client connects, you get a new socket and you create a new thread.\nWhen communicating with the client, there are no multi-threading issues. Every connection has its own socket and thread. This thread is the only thread that uses this socket.\nWhen communicating with the back-end server, there are multi-threading issues. All the threads use the same socket. As sockets are not thread-safe, this can and will cause issues.\nYou need to come up with a different setup.\nYou can have each thread create its own socket to communicate with the back-end server. This is a simple setup but uses more resources (on both servers).\nYou can synchronize access to the one back-end socket so that only one thread can use the socket at any given time. This means that clients will block when another client is communicating with the back-end server.\n" ]
[ 1 ]
[]
[]
[ "fastapi", "python", "python_multithreading", "sockets" ]
stackoverflow_0074599213_fastapi_python_python_multithreading_sockets.txt
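A minimal sketch of the second option in the answer above (serializing access to the single back-end socket with a lock). The host, port, and framing are placeholders, and a real implementation would also need reconnect logic for the broken-pipe case described in the question.

import socket
import threading

backend_sock = socket.create_connection(("backend.example.com", 9000))
backend_lock = threading.Lock()

def query_backend(payload: bytes) -> bytes:
    # Only one FastAPI worker thread talks to the back-end socket at a time.
    with backend_lock:
        backend_sock.sendall(payload)
        return backend_sock.recv(4096)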
Q: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized Operating System: Window10 I use spyder (python3.8) in anaconda and after run the code, I get the following error: [SpyderKernelApp] WARNING | No such comm: df7601e106dd11eba18accf9e4a3c0ef OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized. OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/. How do I fix this? A: import os os.environ['KMP_DUPLICATE_LIB_OK']='True' A: This error occurs when there are multiple "libiomp5.dll" files within a python interpreter. I fixed it by deleting all of the versions of the file that were not within the module I was using (PyTorch). I would like to point out that this could cause a lot of multiprocessing issues down the road if you don't know what you are doing but I am sure it is nothing that can't be solved by looking it up on StackOverflow. For more details visit: https://www.programmersought.com/article/53286415201/ A: I'm not sure if this has been solved and you've moved on, but I just encountered this as well and was able to fix it with reinstalling a package with pip. My machine has an AMD Ryzen 5 3400G CPU, and I frequently do machine learning and deep learning for university research. I first had this problem yesterday when I created a Tensorflow anaconda environment when I already had a separate PyTorch environment. I was also incorporating a colleague's code into mine, so I believe what Aenaon commented is valid, that what is imported matters. Anyway, my solution - after researching where mkl is used - was to repeatedly pip uninstall and pip install one mkl-dependent package at a time until the problem went away when running my program. Somehow it was resolved after doing this the first time for numpy. I think it's coincidence, but my desired program works for now without any problems. A: I solved this by upgrading NumPy to version 1.23.4. pip install numpy --upgrade This works fine for me :)
Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized
Operating System: Window10 I use spyder (python3.8) in anaconda and after run the code, I get the following error: [SpyderKernelApp] WARNING | No such comm: df7601e106dd11eba18accf9e4a3c0ef OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized. OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/. How do I fix this?
[ "import os\nos.environ['KMP_DUPLICATE_LIB_OK']='True'\n\n", "This error occurs when there are multiple \"libiomp5.dll\" files within a python interpreter. I fixed it by deleting all of the versions of the file that were not within the module I was using (PyTorch). I would like to point out that this could cause a lot of multiprocessing issues down the road if you don't know what you are doing but I am sure it is nothing that can't be solved by looking it up on StackOverflow.\nFor more details visit: https://www.programmersought.com/article/53286415201/\n", "I'm not sure if this has been solved and you've moved on, but I just encountered this as well and was able to fix it with reinstalling a package with pip.\nMy machine has an AMD Ryzen 5 3400G CPU, and I frequently do machine learning and deep learning for university research. I first had this problem yesterday when I created a Tensorflow anaconda environment when I already had a separate PyTorch environment. I was also incorporating a colleague's code into mine, so I believe what Aenaon commented is valid, that what is imported matters.\nAnyway, my solution - after researching where mkl is used - was to repeatedly pip uninstall and pip install one mkl-dependent package at a time until the problem went away when running my program. Somehow it was resolved after doing this the first time for numpy. I think it's coincidence, but my desired program works for now without any problems.\n", "I solved this by upgrading NumPy to version 1.23.4.\npip install numpy --upgrade\n\nThis works fine for me :)\n" ]
[ 8, 7, 3, 2 ]
[ "Had the same problem with\n\nAnaconda Navigator 2.3.2\nSpyder version: 5.3.3 (conda)\nPython version: 3.9.15 64-bit\nQt version: 5.15.2\nPyQt5 version: 5.15.7\nOperating System: Windows 10\n\nI detected libiomp5md.dll in:\n\n..\\anaconda3\\envs\\My_Env\\Library\\bin\n..\\anaconda3\\pkgs\\tensorflow-base-2.9.1-mkl_py39h6a7f48e_1\\Lib\\site-packages\\tensorflow\\python\n\nI renamed the one in ..\\envs\\My_Env\\Library\\bin to libiomp5_save.dll.\nand everything was fine.\nDon't know if there are any side effects.\n", "I was able to fix it by moving import HELPERS late and plus adding the import OS too.\n\n" ]
[ -1, -1 ]
[ "python" ]
stackoverflow_0064209238_python.txt
Q: Convert '00:00' to '00:00:00' pandas I have a DataFrame that has a column with time data like ['25:45','12:34:34'], the initial idea is: first convert this column to a list called "time_list". Then with a for, iterate and convert it to minutes time_list = df7[' Chip Time'].tolist() time_mins = [] for i in time_list: h, m, s = i.split(':') math = (int(h) * 3600 + int(m) * 60 + int(s))/60 time_mins.append(math) But I got this error: ValueError: too many values ​​to unpack... . It is because within the data there are values ​​of only minutes and seconds but also others with hours, minutes and seconds. My idea is to convert this column into a single format, that of 'hh:mm:ss' but I can't find the way. Thanks for your reply A: Instead of using h, m, s = i.split(':') you should be using t = i.split(':') because the first expects the i.split to always return 3 values, but in cases where i = [aa:bb] it will only return two. From there you use the length of t to decide if you need to calculate the seconds or not. Your code is converting everything to seconds and could be replaced with something like this: t = i.split(':') for i in range(len(t)): math += t[i] * 3600 ** (1./(60 ** i)) # 3600 ** (1./(60 ** i)) returns 3600, 60, 1 for i = 0, 1, 2 time_mins.append(math) but if your goal is just to append :00 to any entry that dosen't have seconds then you could just do this: t = i.split(':') if len(t) == 2: time_mins.append(str(i)+":00") else: time_mins.append(i) A: I found a solution: I reformatted that column with a function: Creating function def format_date(n): if len(n) == 5: return "{}:{}:{}".format('00', n[0:2], n[3:5]) else: return n** Applying in that column with labda: df7['ChipTime'] = df7.apply(lambda x: format_date(x.ChipTime), axis=1) df7 and done. Thank you.
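As a hedged alternative to the per-string fixes above, both the padding and the conversion to minutes can be done with vectorized pandas operations, assuming the column only ever contains 'mm:ss' or 'hh:mm:ss' strings (the column name and sample values below are just for illustration):

import pandas as pd

df7 = pd.DataFrame({'ChipTime': ['25:45', '12:34:34']})   # sample data

# prepend '00:' to entries that only have minutes and seconds (a single ':')
mask = df7['ChipTime'].str.count(':') == 1
df7.loc[mask, 'ChipTime'] = '00:' + df7.loc[mask, 'ChipTime']

# pd.to_timedelta parses 'hh:mm:ss', so minutes fall out directly
df7['Minutes'] = pd.to_timedelta(df7['ChipTime']).dt.total_seconds() / 60
print(df7)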
Convert '00:00' to '00:00:00' pandas
I have a DataFrame that has a column with time data like ['25:45','12:34:34'], the initial idea is: first convert this column to a list called "time_list". Then with a for, iterate and convert it to minutes time_list = df7[' Chip Time'].tolist() time_mins = [] for i in time_list: h, m, s = i.split(':') math = (int(h) * 3600 + int(m) * 60 + int(s))/60 time_mins.append(math) But I got this error: ValueError: too many values ​​to unpack... . It is because within the data there are values ​​of only minutes and seconds but also others with hours, minutes and seconds. My idea is to convert this column into a single format, that of 'hh:mm:ss' but I can't find the way. Thanks for your reply
[ "Instead of using h, m, s = i.split(':') you should be using t = i.split(':') because the first expects the i.split to always return 3 values, but in cases where i = [aa:bb] it will only return two. From there you use the length of t to decide if you need to calculate the seconds or not.\nYour code is converting everything to seconds and could be replaced with something like this:\nt = i.split(':')\nfor i in range(len(t)):\n math += t[i] * 3600 ** (1./(60 ** i))\n # 3600 ** (1./(60 ** i)) returns 3600, 60, 1 for i = 0, 1, 2\ntime_mins.append(math)\n\nbut if your goal is just to append :00 to any entry that dosen't have seconds then you could just do this:\nt = i.split(':')\nif len(t) == 2:\n time_mins.append(str(i)+\":00\")\nelse:\n time_mins.append(i)\n\n", "I found a solution:\nI reformatted that column with a function:\n\nCreating function\n\ndef format_date(n):\n if len(n) == 5:\n return \"{}:{}:{}\".format('00', n[0:2], n[3:5])\n else:\n return n**\n\n\n\nApplying in that column with labda:\n\ndf7['ChipTime'] = df7.apply(lambda x: format_date(x.ChipTime), axis=1)\ndf7\n\nand done. Thank you.\n" ]
[ 1, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074604221_dataframe_pandas_python.txt
Q: How to add data from tk.Entry to f'string in Python 3? I need to make a text generator in which user input is added to f'string. What I get in result is that data typed in entry can be printed in PyCharm console, but doesn't show up in generated strings in tk.Text. Thank you in advance! Here is the code of my randomizer: Simple GUI interface: import tkinter as tk import random win = tk.Tk() win.title('TEXT MACHINE') win.geometry('1020x600+400+250') win.resizable(False, False) Here I want a variable anchor to assign a data from entry anchor = tk.StringVar() entry_1 = tk.Entry(win, textvariable=anchor) entry_1.grid(row=1, column=1, padx=20, sticky='wn') def add_text(): text_generated = (random.choice(first_text) + random.choice(second_text) ) text.delete('1.0', tk.END) text.insert('insert', text_generated) btn_1 = tk.Button(win, text='GENERATE TEXT', command=add_text, height=5, width=50) btn_1.grid(row=1, column=0, pady=10, padx=20, sticky='w') lbl_1 = tk.Label(win, text='Place Your Anchor Here') lbl_1.grid(row=0, column=1, padx=20, sticky='ws') text = tk.Text(win, width=120) text.grid(row=2, column=0, columnspan=2, padx=(20, 120)) win.grid_rowconfigure(0, pad=50) win.grid_columnconfigure(0, pad=50) first_text = ['First text 1', 'First text 2', 'First text 3' ] second_text = [f'Second text 1{anchor.get()}', f'Second text 2{anchor.get()}', f'Second text 3{anchor.get()}' ] When I generate text I get empty space instead of anchor, but when I do print(anchor.get()) it shows up in my console correctly. A: The issue is that you're essentially calling anchor.get() immediately after declaring it with anchor = tk.StringVar(). It seems like what you really want is entry_1.get(), in which case you can do away with the anchor variable entirely. entry_1 = tk.Entry(win) You'll also probably want to move the definition of second_text (and maybe also first_text, for consistency) to inside the add_text() function. That way the call to entry_1.get() is performed on each button press instead of immediately (and only once) def add_text(): first_text = ['First text 1', 'First text 2', 'First text 3'] second_text = [ f'Second text 1{entry_1.get()}', f'Second text 2{entry_1.get()}', f'Second text 3{entry_1.get()}' ] text_generated = (random.choice(first_text) + random.choice(second_text)) text.delete('1.0', tk.END) text.insert('insert', text_generated)
How to add data from tk.Entry to f'string in Python 3?
I need to make a text generator in which user input is added to f'string. What I get in result is that data typed in entry can be printed in PyCharm console, but doesn't show up in generated strings in tk.Text. Thank you in advance! Here is the code of my randomizer: Simple GUI interface: import tkinter as tk import random win = tk.Tk() win.title('TEXT MACHINE') win.geometry('1020x600+400+250') win.resizable(False, False) Here I want a variable anchor to assign a data from entry anchor = tk.StringVar() entry_1 = tk.Entry(win, textvariable=anchor) entry_1.grid(row=1, column=1, padx=20, sticky='wn') def add_text(): text_generated = (random.choice(first_text) + random.choice(second_text) ) text.delete('1.0', tk.END) text.insert('insert', text_generated) btn_1 = tk.Button(win, text='GENERATE TEXT', command=add_text, height=5, width=50) btn_1.grid(row=1, column=0, pady=10, padx=20, sticky='w') lbl_1 = tk.Label(win, text='Place Your Anchor Here') lbl_1.grid(row=0, column=1, padx=20, sticky='ws') text = tk.Text(win, width=120) text.grid(row=2, column=0, columnspan=2, padx=(20, 120)) win.grid_rowconfigure(0, pad=50) win.grid_columnconfigure(0, pad=50) first_text = ['First text 1', 'First text 2', 'First text 3' ] second_text = [f'Second text 1{anchor.get()}', f'Second text 2{anchor.get()}', f'Second text 3{anchor.get()}' ] When I generate text I get empty space instead of anchor, but when I do print(anchor.get()) it shows up in my console correctly.
[ "The issue is that you're essentially calling anchor.get() immediately after declaring it with anchor = tk.StringVar(). It seems like what you really want is entry_1.get(), in which case you can do away with the anchor variable entirely.\nentry_1 = tk.Entry(win)\n\nYou'll also probably want to move the definition of second_text (and maybe also first_text, for consistency) to inside the add_text() function. That way the call to entry_1.get() is performed on each button press instead of immediately (and only once)\ndef add_text():\n first_text = ['First text 1', 'First text 2', 'First text 3']\n second_text = [\n f'Second text 1{entry_1.get()}',\n f'Second text 2{entry_1.get()}',\n f'Second text 3{entry_1.get()}'\n ]\n text_generated = (random.choice(first_text) + random.choice(second_text))\n text.delete('1.0', tk.END)\n text.insert('insert', text_generated)\n\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.8", "tkinter" ]
stackoverflow_0074605365_python_python_3.8_tkinter.txt
Q: having trouble using functions to break up my code I have a monthly budget code that shows the user if they are over/under the budget for a certain month. I am having trouble breaking the code up into def functions. here is what I have print("""\ This program uses a for loop to monitor your budget. The program will prompt you to enter your budget, and amount spent for a certain month and calculate if your were under or over budget. You will have the option of choosing how many months you would like to monitor.\n""") AmountSpent = 0 Budget = 0 numMonths = int(input("Enter the number of months you would like to monitor:")) while numMonths<0: print("\nNegative value detected!") numMonths = int(input("Enter the number of months you would like to monitor")) for month in range(1,numMonths+1): print("\n=====================================") AmountBudgeted = float(input(f"Enter amount budgeted for month {month}:")) while AmountBudgeted<0: print("Negative value detected!") AmountBudgeted = float(input(f"Enter amount budgeted for month {month}:")) AmountSpent = float(input(f"Enter amount spent for month {month}:")) while AmountSpent<0: print("Negative value detected!") AmountSpent = float(input(f"Enter amount spent for month {month}:")) if AmountSpent <= AmountBudgeted: underB = AmountBudgeted - AmountSpent print(f"Good Job! You are under budget by {underB}") else: overB = AmountSpent - AmountBudgeted print(f"Oops! You're over budget by {overB}") if month == "1": print(f'your budget is {AmountBudgeted}.') Can anyone help me break this code up into functions using "def" and other functions like "Describeprogram()" and "GetMonths()" ? A: You could extract the user interaction like def get_nb_months(): value = int(input("Enter the number of months you would like to monitor:")) while value < 0: print("Negative value detected!") value = int(input("Enter the number of months you would like to monitor")) return value But then you notice that they're quite the same methods, so you can generalize: def get_amount(msg, numeric_type): value = numeric_type(input(msg)) while value < 0: print("Negative value detected!") value = numeric_type(input(msg)) return value def summary(spent, budget): diff = abs(budget - spent) if spent <= budget: print(f"Good Job! You are under budget by {diff}") else: print(f"Oops! You're over budget by {diff}") if __name__ == "__main__": numMonths = get_amount("Enter the number of months you would like to monitor:", int) for month in range(1, numMonths + 1): print("\n=====================================") amount_budgeted = get_amount(f"Enter amount budgeted for month {month}:", float) amount_spent = get_amount(f"Enter amount spent for month {month}:", float) summary(amount_spent, amount_budgeted)
having trouble using functions to break up my code
I have a monthly budget code that shows the user if they are over/under the budget for a certain month. I am having trouble breaking the code up into def functions. here is what I have print("""\ This program uses a for loop to monitor your budget. The program will prompt you to enter your budget, and amount spent for a certain month and calculate if your were under or over budget. You will have the option of choosing how many months you would like to monitor.\n""") AmountSpent = 0 Budget = 0 numMonths = int(input("Enter the number of months you would like to monitor:")) while numMonths<0: print("\nNegative value detected!") numMonths = int(input("Enter the number of months you would like to monitor")) for month in range(1,numMonths+1): print("\n=====================================") AmountBudgeted = float(input(f"Enter amount budgeted for month {month}:")) while AmountBudgeted<0: print("Negative value detected!") AmountBudgeted = float(input(f"Enter amount budgeted for month {month}:")) AmountSpent = float(input(f"Enter amount spent for month {month}:")) while AmountSpent<0: print("Negative value detected!") AmountSpent = float(input(f"Enter amount spent for month {month}:")) if AmountSpent <= AmountBudgeted: underB = AmountBudgeted - AmountSpent print(f"Good Job! You are under budget by {underB}") else: overB = AmountSpent - AmountBudgeted print(f"Oops! You're over budget by {overB}") if month == "1": print(f'your budget is {AmountBudgeted}.') Can anyone help me break this code up into functions using "def" and other functions like "Describeprogram()" and "GetMonths()" ?
[ "You could extract the user interaction like\ndef get_nb_months():\n value = int(input(\"Enter the number of months you would like to monitor:\"))\n while value < 0:\n print(\"Negative value detected!\")\n value = int(input(\"Enter the number of months you would like to monitor\"))\n return value\n\nBut then you notice that they're quite the same methods, so you can generalize:\ndef get_amount(msg, numeric_type):\n value = numeric_type(input(msg))\n while value < 0:\n print(\"Negative value detected!\")\n value = numeric_type(input(msg))\n return value\n\ndef summary(spent, budget):\n diff = abs(budget - spent)\n if spent <= budget:\n print(f\"Good Job! You are under budget by {diff}\")\n else:\n print(f\"Oops! You're over budget by {diff}\")\n\nif __name__ == \"__main__\":\n numMonths = get_amount(\"Enter the number of months you would like to monitor:\", int)\n for month in range(1, numMonths + 1):\n print(\"\\n=====================================\")\n amount_budgeted = get_amount(f\"Enter amount budgeted for month {month}:\", float)\n amount_spent = get_amount(f\"Enter amount spent for month {month}:\", float)\n summary(amount_spent, amount_budgeted)\n\n" ]
[ 0 ]
[]
[]
[ "function", "python", "void" ]
stackoverflow_0074605296_function_python_void.txt
Q: How to query multiple tables using join in sqlalchemy select count(DISTINCT(a.cust_id)) as count ,b.code, b.name from table1 as a inner join table2 as b on a.par_id = b.id where a.data = "present" group by a.par_id order by b.name asc; How do I write this in SQLAlchemy to get the expected results? The above query is written in SQL; I need the equivalent SQLAlchemy query. Thanks for any input. A: Hope this works... session.query( func.count(distinct(table1.cust_id)).label('count'), table2.code, table2.name ).join( table2, table1.par_id == table2.id ).filter( table1.data == "present" ).group_by( table1.par_id ).order_by( table2.name.asc() ).all()
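The answer above uses the legacy session.query style and assumes table1/table2 are mapped ORM classes. For what it's worth, here is a rough sketch of the same query in the newer select() style (SQLAlchemy 1.4/2.0); the model definitions and column types are illustrative guesses, not the asker's real schema:

from sqlalchemy import (create_engine, select, func, distinct,
                        Column, Integer, String, ForeignKey)
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Table2(Base):                      # stands in for "table2" (b)
    __tablename__ = "table2"
    id = Column(Integer, primary_key=True)
    code = Column(String)
    name = Column(String)

class Table1(Base):                      # stands in for "table1" (a)
    __tablename__ = "table1"
    id = Column(Integer, primary_key=True)
    cust_id = Column(Integer)
    data = Column(String)
    par_id = Column(Integer, ForeignKey("table2.id"))

engine = create_engine("sqlite://")      # in-memory DB just to make the sketch runnable
Base.metadata.create_all(engine)

stmt = (
    select(func.count(distinct(Table1.cust_id)).label("count"),
           Table2.code, Table2.name)
    .join(Table2, Table1.par_id == Table2.id)
    .where(Table1.data == "present")
    .group_by(Table1.par_id)             # mirrors GROUP BY a.par_id from the SQL
    .order_by(Table2.name.asc())
)

with Session(engine) as session:
    rows = session.execute(stmt).all()
    print(rows)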
How to query multiple tables using join in sqlalchemy
select count(DISTINCT(a.cust_id)) as count ,b.code, b.name from table1 as a inner join table2 as b on a.par_id = b.id where a.data = "present" group by a.par_id order by b.name asc; How do I write this in SQLAlchemy to get the expected results? The above query is written in SQL; I need the equivalent SQLAlchemy query. Thanks for any input.
[ "Hope this works...\nsession.query(\n func.count(distinct(table1.cust_id)).label('count'),\n table2.code,\n table2.name\n).join(\n table2,\n table1.par_id == table2.id\n).filter(\n table1.data == \"present\"\n).group_by(\n table1.par_id\n).order_by(\n table2.name.asc()\n).all()\n\n" ]
[ 0 ]
[]
[]
[ "distinct", "fastapi", "join", "python", "sqlalchemy" ]
stackoverflow_0074568646_distinct_fastapi_join_python_sqlalchemy.txt
Q: How to clean a textfile to export like JSON - Python I have the following textfile from an LFT command. 2 [14080] [100.0.0.0 - 100.255.255.255] 100.5.254.150 6.3ms 3 [14080] [100.0.0.0 - 100.255.255.255] 100.8.254.149 5.7ms 4 [15169] [GOOGLE] 142.250.164.139 17.5ms 5 [15169] [GOOGLE] 142.250.164.138 10.9ms 6 [15169] [GOOGLE] 72.14.233.63 12.8ms 7 [15169] [GOOGLE] 142.250.210.131 9.6ms 8 [15169] [GOOGLE] 142.250.78.78 11.9ms Where each space could be understood like a field. I tried convert this textfile in a JSON file but I have that: { "emp1": { "Jumps": "2", "System": "[14080]", "Adress": "[100.0.0.0", "IP": "-", "Delay": "100.255.255.255] 100.5.254.150 6.3ms" }, "emp2": { "Jumps": "3", "System": "[14080]", "Adress": "[100.0.0.0", "IP": "-", "Delay": "100.255.255.255] 100.5.254.150 5.7ms" }, "emp3": { "Jumps": "4", "System": "[15169]", "Adress": "[GOOGLE]", "IP": "142.250.164.139", "Delay": "17.5ms" }, "emp4": { "Jumps": "5", "System": "[15169]", "Adress": "[GOOGLE]", "IP": "142.250.164.138", "Delay": "10.9ms" }, "emp5": { "Jumps": "6", "System": "[15169]", "Adress": "[GOOGLE]", "IP": "72.14.233.63", "Delay": "12.8ms" }, "emp6": { "Jumps": "7", "System": "[15169]", "Adress": "[GOOGLE]", "IP": "142.250.210.131", "Delay": "9.6ms" }, "emp7": { "Jumps": "8", "System": "[15169]", "Adress": "[GOOGLE]", "IP": "142.250.78.78", "Delay": "11.9ms" } } As you can see, the first two fields in the "Delay" section are worng. How I can fix it? What can I do for that? I tried to use pandas too but what I get is the same answer: data = pd.read_csv("file.txt", sep=r'\s+') A: You can try to parse the text with re module: text = """\ 2 [14080] [100.0.0.0 - 100.255.255.255] 100.5.254.150 6.3ms 3 [14080] [100.0.0.0 - 100.255.255.255] 100.8.254.149 5.7ms 4 [15169] [GOOGLE] 142.250.164.139 17.5ms 5 [15169] [GOOGLE] 142.250.164.138 10.9ms 6 [15169] [GOOGLE] 72.14.233.63 12.8ms 7 [15169] [GOOGLE] 142.250.210.131 9.6ms 8 [15169] [GOOGLE] 142.250.78.78 11.9ms""" import re pat = re.compile(r"(?m)^\s*(\d+)\s*\[(.*?)\]\s*\[(.*?)\]\s*(\S+)\s*(\S+)") out = {} for i, t in enumerate(pat.findall(text), 1): out[f"emp{i}"] = { "Jumps": t[0], "System": t[1], "Adress": t[2], "IP": t[3], "Delay": t[4], } print(out) Prints: { "emp1": { "Jumps": "2", "System": "14080", "Adress": "100.0.0.0 - 100.255.255.255", "IP": "100.5.254.150", "Delay": "6.3ms", }, "emp2": { "Jumps": "3", "System": "14080", "Adress": "100.0.0.0 - 100.255.255.255", "IP": "100.8.254.149", "Delay": "5.7ms", }, "emp3": { "Jumps": "4", "System": "15169", "Adress": "GOOGLE", "IP": "142.250.164.139", "Delay": "17.5ms", }, "emp4": { "Jumps": "5", "System": "15169", "Adress": "GOOGLE", "IP": "142.250.164.138", "Delay": "10.9ms", }, "emp5": { "Jumps": "6", "System": "15169", "Adress": "GOOGLE", "IP": "72.14.233.63", "Delay": "12.8ms", }, "emp6": { "Jumps": "7", "System": "15169", "Adress": "GOOGLE", "IP": "142.250.210.131", "Delay": "9.6ms", }, "emp7": { "Jumps": "8", "System": "15169", "Adress": "GOOGLE", "IP": "142.250.78.78", "Delay": "11.9ms", }, } A: Andrej's answer is already perfect, just wanted to add another solution: with open("textfile.txt", 'r') as f: s = f.readlines() data = {} for i, value in enumerate(s, 1): t = value.split('\n')[0].split() data[f"emp{i}"] = { "Jumps": t[0], "System": t[1], "Adress": t[2] if len(t)==5 else ''.join(t[2:5]), "IP": t[-2], "Delay": t[-1]} This prints: { 'emp1':{ 'Jumps': '2', 'System': '[14080]', 'Adress': '[100.0.0.0-100.255.255.255]', 'IP': '100.5.254.150', 'Delay': '6.3ms'}, 'emp2': { 'Jumps': '3', 'System': '[14080]', 
'Adress': '[100.0.0.0-100.255.255.255]', 'IP': '100.8.254.149', 'Delay': '5.7ms'}, 'emp3': { 'Jumps': '4', 'System': '[15169]', 'Adress': '[GOOGLE]', 'IP': '142.250.164.139', 'Delay': '17.5ms'}, 'emp4': { 'Jumps': '5', 'System': '[15169]', 'Adress': '[GOOGLE]', 'IP': '142.250.164.138', 'Delay': '10.9ms'}, 'emp5': { 'Jumps': '6', 'System': '[15169]', 'Adress': '[GOOGLE]', 'IP': '72.14.233.63', 'Delay': '12.8ms'}, 'emp6': { 'Jumps': '7', 'System': '[15169]', 'Adress': '[GOOGLE]', 'IP': '142.250.210.131', 'Delay': '9.6ms'}, 'emp7': { 'Jumps': '8', 'System': '[15169]', 'Adress': '[GOOGLE]', 'IP': '142.250.78.78', 'Delay': '11.9ms'} }
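Both answers stop at building the Python dict; since the title asks for a JSON export, a last step with the standard json module finishes the job (the output filename and the one-entry sample dict below are just placeholders for the real parsed result):

import json

out = {"emp1": {"Jumps": "2", "System": "14080",
                "Adress": "100.0.0.0 - 100.255.255.255",
                "IP": "100.5.254.150", "Delay": "6.3ms"}}   # sample parsed entry

with open("lft_hops.json", "w", encoding="utf-8") as f:
    json.dump(out, f, indent=4)

print(json.dumps(out, indent=4))   # same data as a JSON string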
How to clean a textfile to export like JSON - Python
I have the following textfile from an LFT command. 2 [14080] [100.0.0.0 - 100.255.255.255] 100.5.254.150 6.3ms 3 [14080] [100.0.0.0 - 100.255.255.255] 100.8.254.149 5.7ms 4 [15169] [GOOGLE] 142.250.164.139 17.5ms 5 [15169] [GOOGLE] 142.250.164.138 10.9ms 6 [15169] [GOOGLE] 72.14.233.63 12.8ms 7 [15169] [GOOGLE] 142.250.210.131 9.6ms 8 [15169] [GOOGLE] 142.250.78.78 11.9ms Where each space could be understood like a field. I tried convert this textfile in a JSON file but I have that: { "emp1": { "Jumps": "2", "System": "[14080]", "Adress": "[100.0.0.0", "IP": "-", "Delay": "100.255.255.255] 100.5.254.150 6.3ms" }, "emp2": { "Jumps": "3", "System": "[14080]", "Adress": "[100.0.0.0", "IP": "-", "Delay": "100.255.255.255] 100.5.254.150 5.7ms" }, "emp3": { "Jumps": "4", "System": "[15169]", "Adress": "[GOOGLE]", "IP": "142.250.164.139", "Delay": "17.5ms" }, "emp4": { "Jumps": "5", "System": "[15169]", "Adress": "[GOOGLE]", "IP": "142.250.164.138", "Delay": "10.9ms" }, "emp5": { "Jumps": "6", "System": "[15169]", "Adress": "[GOOGLE]", "IP": "72.14.233.63", "Delay": "12.8ms" }, "emp6": { "Jumps": "7", "System": "[15169]", "Adress": "[GOOGLE]", "IP": "142.250.210.131", "Delay": "9.6ms" }, "emp7": { "Jumps": "8", "System": "[15169]", "Adress": "[GOOGLE]", "IP": "142.250.78.78", "Delay": "11.9ms" } } As you can see, the first two fields in the "Delay" section are worng. How I can fix it? What can I do for that? I tried to use pandas too but what I get is the same answer: data = pd.read_csv("file.txt", sep=r'\s+')
[ "You can try to parse the text with re module:\ntext = \"\"\"\\\n2 [14080] [100.0.0.0 - 100.255.255.255] 100.5.254.150 6.3ms\n3 [14080] [100.0.0.0 - 100.255.255.255] 100.8.254.149 5.7ms\n4 [15169] [GOOGLE] 142.250.164.139 17.5ms\n5 [15169] [GOOGLE] 142.250.164.138 10.9ms\n6 [15169] [GOOGLE] 72.14.233.63 12.8ms\n7 [15169] [GOOGLE] 142.250.210.131 9.6ms\n8 [15169] [GOOGLE] 142.250.78.78 11.9ms\"\"\"\n\nimport re\n\npat = re.compile(r\"(?m)^\\s*(\\d+)\\s*\\[(.*?)\\]\\s*\\[(.*?)\\]\\s*(\\S+)\\s*(\\S+)\")\n\nout = {}\nfor i, t in enumerate(pat.findall(text), 1):\n out[f\"emp{i}\"] = {\n \"Jumps\": t[0],\n \"System\": t[1],\n \"Adress\": t[2],\n \"IP\": t[3],\n \"Delay\": t[4],\n }\n\nprint(out)\n\nPrints:\n{\n \"emp1\": {\n \"Jumps\": \"2\",\n \"System\": \"14080\",\n \"Adress\": \"100.0.0.0 - 100.255.255.255\",\n \"IP\": \"100.5.254.150\",\n \"Delay\": \"6.3ms\",\n },\n \"emp2\": {\n \"Jumps\": \"3\",\n \"System\": \"14080\",\n \"Adress\": \"100.0.0.0 - 100.255.255.255\",\n \"IP\": \"100.8.254.149\",\n \"Delay\": \"5.7ms\",\n },\n \"emp3\": {\n \"Jumps\": \"4\",\n \"System\": \"15169\",\n \"Adress\": \"GOOGLE\",\n \"IP\": \"142.250.164.139\",\n \"Delay\": \"17.5ms\",\n },\n \"emp4\": {\n \"Jumps\": \"5\",\n \"System\": \"15169\",\n \"Adress\": \"GOOGLE\",\n \"IP\": \"142.250.164.138\",\n \"Delay\": \"10.9ms\",\n },\n \"emp5\": {\n \"Jumps\": \"6\",\n \"System\": \"15169\",\n \"Adress\": \"GOOGLE\",\n \"IP\": \"72.14.233.63\",\n \"Delay\": \"12.8ms\",\n },\n \"emp6\": {\n \"Jumps\": \"7\",\n \"System\": \"15169\",\n \"Adress\": \"GOOGLE\",\n \"IP\": \"142.250.210.131\",\n \"Delay\": \"9.6ms\",\n },\n \"emp7\": {\n \"Jumps\": \"8\",\n \"System\": \"15169\",\n \"Adress\": \"GOOGLE\",\n \"IP\": \"142.250.78.78\",\n \"Delay\": \"11.9ms\",\n },\n}\n\n", "Andrej's answer is already perfect, just wanted to add another solution:\nwith open(\"textfile.txt\", 'r') as f:\ns = f.readlines()\n\ndata = {}\nfor i, value in enumerate(s, 1):\n t = value.split('\\n')[0].split()\n data[f\"emp{i}\"] = {\n \"Jumps\": t[0],\n \"System\": t[1],\n \"Adress\": t[2] if len(t)==5 else ''.join(t[2:5]),\n \"IP\": t[-2],\n \"Delay\": t[-1]}\n\nThis prints:\n{\n 'emp1':{ \n 'Jumps': '2',\n 'System': '[14080]',\n 'Adress': '[100.0.0.0-100.255.255.255]',\n 'IP': '100.5.254.150', 'Delay': '6.3ms'},\n 'emp2': {\n 'Jumps': '3',\n 'System': '[14080]',\n 'Adress': '[100.0.0.0-100.255.255.255]',\n 'IP': '100.8.254.149',\n 'Delay': '5.7ms'},\n 'emp3': {\n 'Jumps': '4',\n 'System': '[15169]',\n 'Adress': '[GOOGLE]',\n 'IP': '142.250.164.139',\n 'Delay': '17.5ms'},\n 'emp4': {\n 'Jumps': '5',\n 'System': '[15169]',\n 'Adress': '[GOOGLE]',\n 'IP': '142.250.164.138',\n 'Delay': '10.9ms'},\n 'emp5': {\n 'Jumps': '6',\n 'System': '[15169]',\n 'Adress': '[GOOGLE]',\n 'IP': '72.14.233.63',\n 'Delay': '12.8ms'},\n 'emp6': {\n 'Jumps': '7',\n 'System': '[15169]',\n 'Adress': '[GOOGLE]',\n 'IP': '142.250.210.131',\n 'Delay': '9.6ms'},\n 'emp7': {\n 'Jumps': '8',\n 'System': '[15169]',\n 'Adress': '[GOOGLE]',\n 'IP': '142.250.78.78',\n 'Delay': '11.9ms'}\n}\n\n" ]
[ 1, 1 ]
[]
[]
[ "json", "python" ]
stackoverflow_0074604906_json_python.txt
Q: Find_element_by_name for multiple names I want to find right element name by calling more element names at a time. Is that possible? try: G= driver.find_element_by_name("contact[Name]") G.send_keys("name") except: pass try: H= driver.find_element_by_name("contact[name]") H.send_keys("name") elem = driver.find_element_by_name(("contact[Name]")("contact[name]")) elem.send_keys("name") A: I don't think there's a way to pass multiple names to find_element_by_name(). You'll have to call it with the first name, and if that raises an exception, call it with the second name. elem = None try: elem = driver.find_element_by_name("contact[Name]") except selenium.common.exceptions.NoSuchElementException: pass if elem is None: # no element was found, so try again with the other name elem = driver.find_element_by_name("contact[name]") elem.send_keys("whatever") Or, a cleaner way to do the same thing in a loop: elem = None for element_name in ["some_name_1", "some_name_2"]: try: elem = driver.find_element_by_name(element_name) break except selenium.common.exceptions.NoSuchElementException: pass elem.send_keys("whatever")
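One caveat: the find_element_by_name helpers used above were removed in Selenium 4, so on a current install the same retry idea would presumably look like the sketch below (it assumes driver is an existing WebDriver instance, as in the question):

from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

elem = None
for element_name in ("contact[Name]", "contact[name]"):
    try:
        elem = driver.find_element(By.NAME, element_name)   # Selenium 4 style lookup
        break
    except NoSuchElementException:
        continue

if elem is not None:
    elem.send_keys("name")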
Find_element_by_name for multiple names
I want to find right element name by calling more element names at a time. Is that possible? try: G= driver.find_element_by_name("contact[Name]") G.send_keys("name") except: pass try: H= driver.find_element_by_name("contact[name]") H.send_keys("name") elem = driver.find_element_by_name(("contact[Name]")("contact[name]")) elem.send_keys("name")
[ "I don't think there's a way to pass multiple names to find_element_by_name(). You'll have to call it with the first name, and if that raises an exception, call it with the second name.\nelem = None\n\ntry:\n elem = driver.find_element_by_name(\"contact[Name]\")\nexcept selenium.common.exceptions.NoSuchElementException:\n pass\n\nif elem is None:\n # no element was found, so try again with the other name\n elem = driver.find_element_by_name(\"contact[name]\")\n\nelem.send_keys(\"whatever\")\n\nOr, a cleaner way to do the same thing in a loop:\nelem = None\n\nfor element_name in [\"some_name_1\", \"some_name_2\"]:\n try:\n elem = driver.find_element_by_name(element_name)\n break\n except selenium.common.exceptions.NoSuchElementException:\n pass\n\nelem.send_keys(\"whatever\")\n\n" ]
[ 0 ]
[]
[]
[ "python", "selenium" ]
stackoverflow_0074605370_python_selenium.txt
Q: How is argument unpacking working in this function? I'm still a beginner in learning python but I came across this function : def show_skills (name , *skills ,**skillswithprogress) : print (f'hello {name} \n skills without progress is :') for skill in skills : print (f'-{skills}') print ('skills with progress is :') for skill_key , skills_value in skillswithprogress.items(): print(f'-{skill_key} =>{skills_value}') show_skills('H', python = '95%',css = '95%') and the output for it is as follows: hello H skills without progress is : skills with progress is : -python =>95% -css =>95% Can anyone explain explain why did [python = '95%',css = '95%'] get treated like that, not as *skills ? A: It is the concept of *args and **kwargs *args is the list or tuple **kwargs is the dictionary you put the parameter for the function as like show_skills('H', python = '95%',css = '95%') would be python='95%', css='95%' => {python:'95%', css:'95%'}
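To spell the answer out a little: anything passed as name=value is always collected by **skillswithprogress, while *skills only receives extra positional arguments, so python='95%' and css='95%' can never land in skills. A small self-contained demo:

def show_skills(name, *skills, **skillswithprogress):
    print('positional extras ->', skills)
    print('keyword extras    ->', skillswithprogress)

show_skills('H', python='95%', css='95%')
# positional extras -> ()
# keyword extras    -> {'python': '95%', 'css': '95%'}

show_skills('H', 'python', 'css', js='40%')
# positional extras -> ('python', 'css')
# keyword extras    -> {'js': '40%'}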
How is argument unpacking working in this function?
I'm still a beginner in learning python but I came across this function : def show_skills (name , *skills ,**skillswithprogress) : print (f'hello {name} \n skills without progress is :') for skill in skills : print (f'-{skills}') print ('skills with progress is :') for skill_key , skills_value in skillswithprogress.items(): print(f'-{skill_key} =>{skills_value}') show_skills('H', python = '95%',css = '95%') and the output for it is as follows: hello H skills without progress is : skills with progress is : -python =>95% -css =>95% Can anyone explain explain why did [python = '95%',css = '95%'] get treated like that, not as *skills ?
[ "It is the concept of *args and **kwargs\n*args is the list or tuple\n**kwargs is the dictionary\nyou put the parameter for the function as like\nshow_skills('H', python = '95%',css = '95%')\nwould be python='95%', css='95%' => {python:'95%', css:'95%'}\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074605400_python.txt
Q: How can I stop iterating once a certain number of SMS messages have been sent? I'm making a telegram bot that will send SMS through a parser and I need to loop until about 20 SMS are sent. I am using the telebot library to create a bot and for the parser I have requests and BeautifulSoup. import telebot import requests from bs4 import BeautifulSoup from telebot import types bot = telebot.TeleBot('my token') @bot.message_handler(commands=['start']) def start(message): mess = f'Привет, сегодня у меня для тебя 100 Анг слов! Удачи <b>{message.from_user.first_name}</b>' bot.send_message(message.chat.id, mess, parse_mode='html') @bot.message_handler(commands=['words']) def website(message): markup = types.ReplyKeyboardMarkup(resize_keyboard=True, row_width=1) word = types.KeyboardButton('100 Слов') markup.add(word)#создание самой кнопки. bot.send_message(message.chat.id, 'Подевиться слова', reply_markup=markup) @bot.message_handler(content_types=['text']) def get_user_commands(message): if message.text == '100 Слов': url = 'https://www.kreekly.com/lists/100-samyh-populyarnyh-angliyskih-slov/' response = requests.get(url) soup = BeautifulSoup(response.text, 'lxml') data = soup.find_all("div", class_="dict-word") for i in data: ENG = i.find("span", class_="eng").text rU = i.find("span", class_="rus").text bot.send_message(message.chat.id, ENG, parse_mode='html') bot.send_message(message.chat.id, rU, parse_mode='html') bot.polling(none_stop=True) I tried to do it this way: if i >= 20: break but this does not work. I also tried to do it like this: if data.index(i) >= 20 break A: Don't bother with the stopping the loop or slicing the array, tell bs4 to limit the result: data = soup.find_all("div", class_="dict-word", limit=20) A: you can try for indx,i in enumerate(data): if index == 19: # including zero, 20th will be the 19th. break -code here- enumerate function keeps track of current step, so at first iteration indx = 0 , second indx=1 and so on. for indx,i in enumerate(data): if index < 20: # apply the code if it's under the 20th itereation. -code here- I suggest the upper solution.
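Plain list slicing is another option that avoids any loop-counter bookkeeping; the snippet below is only an excerpt of the handler above, reusing its data, bot and message variables rather than being a standalone script:

for i in data[:20]:                                   # keep only the first 20 entries
    eng = i.find("span", class_="eng").text
    rus = i.find("span", class_="rus").text
    bot.send_message(message.chat.id, eng, parse_mode='html')
    bot.send_message(message.chat.id, rus, parse_mode='html')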
How can I stop iterating once a certain number of SMS messages have been sent?
I'm making a telegram bot that will send SMS through a parser and I need to loop until about 20 SMS are sent. I am using the telebot library to create a bot and for the parser I have requests and BeautifulSoup. import telebot import requests from bs4 import BeautifulSoup from telebot import types bot = telebot.TeleBot('my token') @bot.message_handler(commands=['start']) def start(message): mess = f'Привет, сегодня у меня для тебя 100 Анг слов! Удачи <b>{message.from_user.first_name}</b>' bot.send_message(message.chat.id, mess, parse_mode='html') @bot.message_handler(commands=['words']) def website(message): markup = types.ReplyKeyboardMarkup(resize_keyboard=True, row_width=1) word = types.KeyboardButton('100 Слов') markup.add(word)#создание самой кнопки. bot.send_message(message.chat.id, 'Подевиться слова', reply_markup=markup) @bot.message_handler(content_types=['text']) def get_user_commands(message): if message.text == '100 Слов': url = 'https://www.kreekly.com/lists/100-samyh-populyarnyh-angliyskih-slov/' response = requests.get(url) soup = BeautifulSoup(response.text, 'lxml') data = soup.find_all("div", class_="dict-word") for i in data: ENG = i.find("span", class_="eng").text rU = i.find("span", class_="rus").text bot.send_message(message.chat.id, ENG, parse_mode='html') bot.send_message(message.chat.id, rU, parse_mode='html') bot.polling(none_stop=True) I tried to do it this way: if i >= 20: break but this does not work. I also tried to do it like this: if data.index(i) >= 20 break
[ "Don't bother with the stopping the loop or slicing the array, tell bs4 to limit the result:\ndata = soup.find_all(\"div\", class_=\"dict-word\", limit=20)\n\n", "you can try\nfor indx,i in enumerate(data):\n if index == 19: # including zero, 20th will be the 19th.\n break\n -code here-\n\nenumerate function keeps track of current step, so at first iteration indx = 0 , second indx=1 and so on.\nfor indx,i in enumerate(data):\n if index < 20: # apply the code if it's under the 20th itereation.\n -code here-\n\nI suggest the upper solution.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "telebot" ]
stackoverflow_0074605337_python_telebot.txt
Q: How to get pairwise combinations of words? Given a string I want pair wise combinations of the words present in the string with blank spaces. For instance for the following string: string = "This is cat" I want the following output: str1 = "This is cat" str2 = "Thisis cat" str3 = "This iscat" str4 = "Thisiscat" I tried something with itertools. Basically getting all pairwise combinations of True and False: permutations = list(itertools.product([False, True], repeat=2) and add the blank spaces based on True. This is what I'm trying: args = string.split() permutations = list(itertools.product([False, True], repeat=len(args)-1)) strings = [] for i in range(len(args)-1): string = "" for permutation in permutations: if permutation[i] is True: string = string + " " + args[i] if permutation[i] is False: string = string + args[i] strings.append(string) A: i like the idea using combinations of True and False, and it actualy works (if we may use '_' as a replacement): from itertools import product string = "This is a cat" strings = [] permutations = list(product([False, True], repeat=string.count(' '))) for permutation in permutations: s = string for p in permutation: s = s.replace(' ','' if p else '_',1) strings.append(s.replace('_',' ')) >>> strings ''' ['This is a cat', 'This is acat', 'This isa cat', 'This isacat', 'Thisis a cat', 'Thisis acat', 'Thisisa cat', 'Thisisacat']
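A variant of the same idea that skips the '_' placeholder: pick a separator ('' or ' ') for each gap between words and join directly. This is only a sketch of an alternative, not a correction of the answer above:

from itertools import product

string = "This is cat"
words = string.split()

strings = []
for seps in product([' ', ''], repeat=len(words) - 1):
    s = words[0]
    for sep, word in zip(seps, words[1:]):   # interleave words with the chosen separators
        s += sep + word
    strings.append(s)

print(strings)
# ['This is cat', 'This iscat', 'Thisis cat', 'Thisiscat']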
How to get pairwise combinations of words?
Given a string I want pair wise combinations of the words present in the string with blank spaces. For instance for the following string: string = "This is cat" I want the following output: str1 = "This is cat" str2 = "Thisis cat" str3 = "This iscat" str4 = "Thisiscat" I tried something with itertools. Basically getting all pairwise combinations of True and False: permutations = list(itertools.product([False, True], repeat=2) and add the blank spaces based on True. This is what I'm trying: args = string.split() permutations = list(itertools.product([False, True], repeat=len(args)-1)) strings = [] for i in range(len(args)-1): string = "" for permutation in permutations: if permutation[i] is True: string = string + " " + args[i] if permutation[i] is False: string = string + args[i] strings.append(string)
[ "i like the idea using combinations of True and False, and it actualy works (if we may use '_' as a replacement):\nfrom itertools import product\n\nstring = \"This is a cat\"\nstrings = []\n\npermutations = list(product([False, True], repeat=string.count(' ')))\nfor permutation in permutations:\n s = string\n for p in permutation:\n s = s.replace(' ','' if p else '_',1)\n strings.append(s.replace('_',' ')) \n\n>>> strings\n'''\n['This is a cat',\n 'This is acat',\n 'This isa cat',\n 'This isacat',\n 'Thisis a cat',\n 'Thisis acat',\n 'Thisisa cat',\n 'Thisisacat']\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_2.7", "python_3.x" ]
stackoverflow_0074604864_python_python_2.7_python_3.x.txt
Q: 404 Error with mybinder.org when I try to deploy a Jupyter Notebook with voila from GitHub I have a problem deploying my Jupyter Notebook from Github to mybinder.org. I configured everything according to several tutorials I found, but everytime I try to build the page mybinder.org gives me this error afterwards: 404 : Not Found, You are requesting a page that does not exist! The building process seems to work, as there are no error messages despite the 404 error in the end. This is my Github page I try to build from: https://github.com/MkengineTA/VoilaDemo And here you can see the input I use on mybinder.org: Does anyone have an idea what I am doing wrong? A: You didn't have the correct name of the notebook in the 'URL to open' portion of the form. The resulting correct address should be: https://mybinder.org/v2/gh/MkengineTA/VoilaDemo/main?urlpath=voila%2Frender%2FTestVoila.ipynb , that's https://mybinder.org/v2/gh/MkengineTA/VoilaDemo/main?urlpath=voila%2Frender%2FTestVoila.ipynb. You had the name of the notebook wrong in what the form was generating because you input the name of the repo and not the notebook there in front of .ipynb.
404 Error with mybinder.org when I try to deploy a Jupyter Notebook with voila from GitHub
I have a problem deploying my Jupyter Notebook from Github to mybinder.org. I configured everything according to several tutorials I found, but everytime I try to build the page mybinder.org gives me this error afterwards: 404 : Not Found, You are requesting a page that does not exist! The building process seems to work, as there are no error messages despite the 404 error in the end. This is my Github page I try to build from: https://github.com/MkengineTA/VoilaDemo And here you can see the input I use on mybinder.org: Does anyone have an idea what I am doing wrong?
[ "You didn't have the correct name of the notebook in the 'URL to open' portion of the form.\nThe resulting correct address should be: https://mybinder.org/v2/gh/MkengineTA/VoilaDemo/main?urlpath=voila%2Frender%2FTestVoila.ipynb , that's https://mybinder.org/v2/gh/MkengineTA/VoilaDemo/main?urlpath=voila%2Frender%2FTestVoila.ipynb.\nYou had the name of the notebook wrong in what the form was generating because you input the name of the repo and not the notebook there in front of .ipynb.\n" ]
[ 0 ]
[]
[]
[ "github", "jupyter_notebook", "mybinder", "python", "voila" ]
stackoverflow_0074604898_github_jupyter_notebook_mybinder_python_voila.txt
Q: Failing to install Sparkmagic using pip I am trying to install sparkmaic in jupyter notebook through pip but getting below error: Command: pip install sparkmagic error: Collecting gssapi>=1.6.0 Using cached gssapi-1.8.1.tar.gz (94 kB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> [21 lines of output] /bin/sh: 1: krb5-config: not found Traceback (most recent call last): File "/home/e004255/.local/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 363, in <module> main() File "/home/e004255/.local/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 345, in main json_out['return_val'] = hook(**hook_input['kwargs']) File "/home/e004255/.local/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 130, in get_requires_for_build_wheel return hook(config_settings) File "/tmp/pip-build-env-guoi_e0p/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 338, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) File "/tmp/pip-build-env-guoi_e0p/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 320, in _get_build_requires self.run_setup() File "/tmp/pip-build-env-guoi_e0p/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 335, in run_setup exec(code, locals()) File "<string>", line 109, in <module> File "<string>", line 22, in get_output File "/usr/lib/python3.8/subprocess.py", line 415, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, File "/usr/lib/python3.8/subprocess.py", line 516, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command 'krb5-config --libs gssapi' returned non-zero exit status 127. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. A: It looks like your system is missing a tool named krb5-config. You might be able to install it with: sudo apt install krb5-config A: Thanks @bernhard but, I had to install dev tools too (ubuntu 20.04). sudo apt-get install libkrb5-dev
Failing to install Sparkmagic using pip
I am trying to install sparkmaic in jupyter notebook through pip but getting below error: Command: pip install sparkmagic error: Collecting gssapi>=1.6.0 Using cached gssapi-1.8.1.tar.gz (94 kB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> [21 lines of output] /bin/sh: 1: krb5-config: not found Traceback (most recent call last): File "/home/e004255/.local/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 363, in <module> main() File "/home/e004255/.local/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 345, in main json_out['return_val'] = hook(**hook_input['kwargs']) File "/home/e004255/.local/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 130, in get_requires_for_build_wheel return hook(config_settings) File "/tmp/pip-build-env-guoi_e0p/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 338, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) File "/tmp/pip-build-env-guoi_e0p/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 320, in _get_build_requires self.run_setup() File "/tmp/pip-build-env-guoi_e0p/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 335, in run_setup exec(code, locals()) File "<string>", line 109, in <module> File "<string>", line 22, in get_output File "/usr/lib/python3.8/subprocess.py", line 415, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, File "/usr/lib/python3.8/subprocess.py", line 516, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command 'krb5-config --libs gssapi' returned non-zero exit status 127. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip.
[ "It looks like your system is missing a tool named krb5-config. You might be able to install it with:\nsudo apt install krb5-config\n\n", "Thanks @bernhard\nbut, I had to install dev tools too (ubuntu 20.04).\nsudo apt-get install libkrb5-dev\n\n" ]
[ 0, 0 ]
[]
[]
[ "pip", "pyspark", "python", "ubuntu", "unix" ]
stackoverflow_0074031328_pip_pyspark_python_ubuntu_unix.txt
Q: first argument must be an iterable of pandas objects, you passed an object of type "Series" I have the following dataframe: s = df.head().to_dict() print(s) {'BoP transfers': {1998: 12.346282212735618, 1999: 19.06438060024298, 2000: 18.24888031473687, 2001: 24.860019912667006, 2002: 32.38242225822908}, 'Current balance': {1998: -6.7953, 1999: -2.9895, 2000: -3.9694, 2001: 1.1716, 2002: 5.7433}, 'Domestic demand': {1998: 106.8610389799729, 1999: 104.70302507466538, 2000: 104.59254229534136, 2001: 103.83532232336977, 2002: 102.81709401489702}, 'Effective exchange rate': {1998: 88.134, 1999: 95.6425, 2000: 99.927725, 2001: 101.92745, 2002: 107.85565}, 'RoR (foreign liabilities)': {1998: 0.0433, 1999: 0.0437, 2000: 0.0542, 2001: 0.0539, 2002: 0.0474}} which can be transformed back to its original form using df = pd.DataFrame.from_dict(s) I want to slice this dataframe in the following manner: df_1 = df.iloc[:,0:2] df_2 = pd.concat(df.iloc[:,0], df.iloc[:,3:]) when I get the titled error. I know there are some questions regarding this already, but I am unable to put the pieces together. Specifically, in my case, the dataframe is not this small (it has 100 columns). I want something along the lines of df_1 = df.iloc[:,0:10] df_2 = pd.concat(df.iloc[:,0], df.iloc[:,11:20]) df_3 = pd.concat(df.iloc[:,0], df.iloc[:,21:30]) and so on. How can this be accomplished? Thank you. A: You need to use a list of the DataFrames to merge and to concat on axis=1: df_2 = pd.concat([df.iloc[:,0], df.iloc[:,3:]], axis=1) Or, better, use slicing: df_2 = df.iloc[:, [0,3,4]] # or df_2 = df.iloc[:, np.r_[0,3:df.shape[1]]] Output: BoP transfers Effective exchange rate RoR (foreign liabilities) 1998 12.346282 88.134000 0.0433 1999 19.064381 95.642500 0.0437 2000 18.248880 99.927725 0.0542 2001 24.860020 101.927450 0.0539 2002 32.382422 107.855650 0.0474
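Since the question says "and so on" for a roughly 100-column frame, the slicing can also be wrapped in a loop that always keeps column 0 plus the next block of columns; the frame and the block size of 10 below are only placeholders:

import numpy as np
import pandas as pd

# hypothetical wide frame standing in for the real 100-column df
df = pd.DataFrame(np.random.rand(5, 100),
                  columns=[f"col_{i}" for i in range(100)])

chunk = 10
frames = []
for start in range(1, df.shape[1], chunk):
    stop = min(start + chunk, df.shape[1])
    frames.append(df.iloc[:, np.r_[0, start:stop]])   # column 0 + the next block

# frames[0] plays the role of df_1, frames[1] of df_2, and so on
print(len(frames), frames[0].shape)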
first argument must be an iterable of pandas objects, you passed an object of type "Series"
I have the following dataframe: s = df.head().to_dict() print(s) {'BoP transfers': {1998: 12.346282212735618, 1999: 19.06438060024298, 2000: 18.24888031473687, 2001: 24.860019912667006, 2002: 32.38242225822908}, 'Current balance': {1998: -6.7953, 1999: -2.9895, 2000: -3.9694, 2001: 1.1716, 2002: 5.7433}, 'Domestic demand': {1998: 106.8610389799729, 1999: 104.70302507466538, 2000: 104.59254229534136, 2001: 103.83532232336977, 2002: 102.81709401489702}, 'Effective exchange rate': {1998: 88.134, 1999: 95.6425, 2000: 99.927725, 2001: 101.92745, 2002: 107.85565}, 'RoR (foreign liabilities)': {1998: 0.0433, 1999: 0.0437, 2000: 0.0542, 2001: 0.0539, 2002: 0.0474}} which can be transformed back to its original form using df = pd.DataFrame.from_dict(s) I want to slice this dataframe in the following manner: df_1 = df.iloc[:,0:2] df_2 = pd.concat(df.iloc[:,0], df.iloc[:,3:]) when I get the titled error. I know there are some questions regarding this already, but I am unable to put the pieces together. Specifically, in my case, the dataframe is not this small (it has 100 columns). I want something along the lines of df_1 = df.iloc[:,0:10] df_2 = pd.concat(df.iloc[:,0], df.iloc[:,11:20]) df_3 = pd.concat(df.iloc[:,0], df.iloc[:,21:30]) and so on. How can this be accomplished? Thank you.
[ "You need to use a list of the DataFrames to merge and to concat on axis=1:\ndf_2 = pd.concat([df.iloc[:,0], df.iloc[:,3:]], axis=1)\n\nOr, better, use slicing:\ndf_2 = df.iloc[:, [0,3,4]]\n# or\ndf_2 = df.iloc[:, np.r_[0,3:df.shape[1]]]\n\nOutput:\n BoP transfers Effective exchange rate RoR (foreign liabilities)\n1998 12.346282 88.134000 0.0433\n1999 19.064381 95.642500 0.0437\n2000 18.248880 99.927725 0.0542\n2001 24.860020 101.927450 0.0539\n2002 32.382422 107.855650 0.0474\n\n" ]
[ 0 ]
[]
[]
[ "concatenation", "pandas", "python", "slice" ]
stackoverflow_0074605469_concatenation_pandas_python_slice.txt
Q: How to print unicode character from a string variable? I am new in programming world, and I am a bit confused. I expecting that both print result the same graphical unicode exclamation mark symbol: My experiment: number = 10071 byteStr = number.to_bytes(4, byteorder='big') hexStr = hex(number) uniChar = byteStr.decode('utf-32be') uniStr = '\\u' + hexStr[2:6] print(f'{number} - {hexStr[2:6]} - {byteStr} - {uniChar}') print(f'{uniStr}') # Not working print(f'\u2757') # Working Output: 10071 - 2757 - b"\x00\x00'W" - ❗ \u2757 ❗ What are the difference in the last two lines? Please, help me to understand it! My environment is JupyterHub and v3.9 python. A: An escape code evaluated by the Python parser when constructing literal strings. For example, the literal string '马' and '\u9a6c' are evaluated by the parser as the same, length 1, string. You can (and did) build a string with the 6 characters \u9a6c by using an escape code for the backslash (\\) to prevent the parser from evaluating those 6 characters as an escape code, which is why it prints as the 6-character \u2757. If you build a byte string with those 6 characters, you can decode it with .decode('unicode-escape') to get the character: >>> b'\\u2757'.decode('unicode_escape') '❗' But it is easier to use the chr() function on the number itself: >>> chr(0x2757) '❗' >>> chr(10071) '❗'
How to print unicode character from a string variable?
I am new in programming world, and I am a bit confused. I expecting that both print result the same graphical unicode exclamation mark symbol: My experiment: number = 10071 byteStr = number.to_bytes(4, byteorder='big') hexStr = hex(number) uniChar = byteStr.decode('utf-32be') uniStr = '\\u' + hexStr[2:6] print(f'{number} - {hexStr[2:6]} - {byteStr} - {uniChar}') print(f'{uniStr}') # Not working print(f'\u2757') # Working Output: 10071 - 2757 - b"\x00\x00'W" - ❗ \u2757 ❗ What are the difference in the last two lines? Please, help me to understand it! My environment is JupyterHub and v3.9 python.
[ "An escape code evaluated by the Python parser when constructing literal strings. For example, the literal string '马' and '\\u9a6c' are evaluated by the parser as the same, length 1, string.\nYou can (and did) build a string with the 6 characters \\u9a6c by using an escape code for the backslash (\\\\) to prevent the parser from evaluating those 6 characters as an escape code, which is why it prints as the 6-character \\u2757.\nIf you build a byte string with those 6 characters, you can decode it with .decode('unicode-escape') to get the character:\n>>> b'\\\\u2757'.decode('unicode_escape')\n'❗'\n\nBut it is easier to use the chr() function on the number itself:\n>>> chr(0x2757)\n'❗'\n>>> chr(10071)\n'❗'\n\n" ]
[ 0 ]
[]
[]
[ "printf", "python", "string", "unicode" ]
stackoverflow_0074599920_printf_python_string_unicode.txt
Q: Unique Variable Constraint in Gurobi I am using Gurobi in Jupyter notebook with 9 variables. I want to add constraints so that each variable has a unique value. How do I code this in Gurobi Linear programming solver? m.addConstr(i1 - i2 + y*a >= 1) Based on my online research, the above is what I am currently trying to enforce but it is not working. A: This is called an all-different constraint. It is built into many Constraint Programming (CP) solvers. Implementing this constraint in a Mixed-Integer Programming model is not totally trivial or cheap. For some formulations see for instance: https://pubsonline.informs.org/doi/abs/10.1287/ijoc.13.2.96.10515.
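For completeness, one standard MIP encoding of the all-different constraint in gurobipy is sketched below. It assumes the 9 variables each take an integer value from 1 to 9, which is a guess about the actual domain, and it only checks feasibility (no objective):

import gurobipy as gp
from gurobipy import GRB

n = 9
values = range(1, n + 1)                 # assumed domain 1..9

m = gp.Model("alldifferent")
x = m.addVars(n, values, vtype=GRB.BINARY, name="x")   # x[i, v] = 1 if variable i takes value v

m.addConstrs((x.sum(i, '*') == 1 for i in range(n)), name="one_value")   # one value per variable
m.addConstrs((x.sum('*', v) <= 1 for v in values), name="unique")        # each value used at most once

y = m.addVars(n, vtype=GRB.INTEGER, lb=1, ub=n, name="y")                # the actual integer variables
m.addConstrs((y[i] == gp.quicksum(v * x[i, v] for v in values) for i in range(n)), name="link")

m.optimize()
if m.Status == GRB.OPTIMAL:
    print([int(y[i].X) for i in range(n)])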
Unique Variable Constraint in Gurobi
I am using Gurobi in Jupyter notebook with 9 variables. I want to add constraints so that each variable has a unique value. How do I code this in Gurobi Linear programming solver? m.addConstr(i1 - i2 + y*a >= 1) Based on my online research, the above is what I am currently trying to enforce but it is not working.
[ "To be a bit more constructive, this is called an all-different constraint. This is part of many Constraint Programming (CP) solvers. Implementing this constraint in a Mixed-Integer Programming model is not totally trivial or cheap. For some formulations see for instance: https://pubsonline.informs.org/doi/abs/10.1287/ijoc.13.2.96.10515.\n" ]
[ 0 ]
[]
[]
[ "gurobi", "jupyter_notebook", "linear_programming", "optimization", "python" ]
stackoverflow_0074605285_gurobi_jupyter_notebook_linear_programming_optimization_python.txt
Q: Rock, Paper, Scissors in Functions I need to write rock, paper, scissors only using functions. I have not done this before so my try at it is below. I am not able to get it to run fully through. Any pointers in the right direction would be very helpful! Code below: import random def user_choice(): user_choice = input("Choose rock, paper, or scissors: ") if user_choice in ["rock", "Rock"]: user_choice = "r" elif user_choice in ["paper", "Paper"]: user_choice = "p" elif user_choice in ["scissors", "Scissors"]: user_choice = "s" else: ("Try again.") user_choice return user_choice def computer_choice(): computer_choice = random.randint(1,3) if computer_choice == 1: computer_choice = "r" if computer_choice == 2: computer_choice = "p" if computer_choice == 3: computer_choice = "s" return computer_choice def get_winner(): #User choice = rock if user_choice == "r": if computer_choice == "r": print ("You and computer chose rock. It's a tie!") elif user_choice == "r": if computer_choice == "p": print ("You chose rock and computer chose paper. Computer wins!") elif user_choice == "r": if computer_choice == "s": print ("You chose rock and computer chose scissors. You win!") #User choice = scissors if user_choice == "s": if computer_choice == "s": print ("You and computer chose scissors. It's a tie!") elif user_choice == "s": if computer_choice == "p": print ("You chose scissors and computer chose paper. You win!") elif user_choice == "s": if computer_choice == "r": print ("You chose scissors and computer chose rock. Computer wins!") #User choice = paper if user_choice == "p": if computer_choice == "p": print ("You and computer chose paper. It's a tie!") elif user_choice == "p": if computer_choice == "r": print ("You chose paper and computer chose rock. You win!") elif user_choice == "p": if computer_choice == "s": print ("You chose paper and computer chose scissors. Computer wins!") else: print("Error") user_choice() computer_choice() get_winner() I tried writing a function for the user input, a random choice for the computer, and one that compares the user and computer choice to get the winner. I have tried writing and calling the functions but it is not working. A: You need to pass results of "input" functions to "get_winner" function as arguments. It should be defined like that: def get_winner(user_choice, computer_choice): Then in your code: uc = user_choice() cc = computer_choice() get_winner(uc, cc)
Rock, Paper, Scissors in Functions
I need to write rock, paper, scissors only using functions. I have not done this before so my try at it is below. I am not able to get it to run fully through. Any pointers in the right direction would be very helpful! Code below: import random def user_choice(): user_choice = input("Choose rock, paper, or scissors: ") if user_choice in ["rock", "Rock"]: user_choice = "r" elif user_choice in ["paper", "Paper"]: user_choice = "p" elif user_choice in ["scissors", "Scissors"]: user_choice = "s" else: ("Try again.") user_choice return user_choice def computer_choice(): computer_choice = random.randint(1,3) if computer_choice == 1: computer_choice = "r" if computer_choice == 2: computer_choice = "p" if computer_choice == 3: computer_choice = "s" return computer_choice def get_winner(): #User choice = rock if user_choice == "r": if computer_choice == "r": print ("You and computer chose rock. It's a tie!") elif user_choice == "r": if computer_choice == "p": print ("You chose rock and computer chose paper. Computer wins!") elif user_choice == "r": if computer_choice == "s": print ("You chose rock and computer chose scissors. You win!") #User choice = scissors if user_choice == "s": if computer_choice == "s": print ("You and computer chose scissors. It's a tie!") elif user_choice == "s": if computer_choice == "p": print ("You chose scissors and computer chose paper. You win!") elif user_choice == "s": if computer_choice == "r": print ("You chose scissors and computer chose rock. Computer wins!") #User choice = paper if user_choice == "p": if computer_choice == "p": print ("You and computer chose paper. It's a tie!") elif user_choice == "p": if computer_choice == "r": print ("You chose paper and computer chose rock. You win!") elif user_choice == "p": if computer_choice == "s": print ("You chose paper and computer chose scissors. Computer wins!") else: print("Error") user_choice() computer_choice() get_winner() I tried writing a function for the user input, a random choice for the computer, and one that compares the user and computer choice to get the winner. I have tried writing and calling the functions but it is not working.
[ "You need to pass results of \"input\" functions to \"get_winner\" function as arguments.\nIt should be defined like that:\ndef get_winner(user_choice, computer_choice):\n\n\nThen in your code:\nuc = user_choice()\ncc = computer_choice()\nget_winner(uc, cc)\n\n" ]
[ 1 ]
[]
[]
[ "function", "python", "python_3.x" ]
stackoverflow_0074605505_function_python_python_3.x.txt
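A minimal runnable sketch of the structure the rock-paper-scissors answer above describes, with the two choices passed into get_winner as arguments; the beats dictionary and the retry loop are illustrative additions, not code from the original post.

import random

def user_choice():
    # keep asking until the input maps onto one of the three known moves
    moves = {"rock": "r", "paper": "p", "scissors": "s"}
    while True:
        text = input("Choose rock, paper, or scissors: ").strip().lower()
        if text in moves:
            return moves[text]
        print("Try again.")

def computer_choice():
    # pick one of the encoded moves directly instead of mapping 1/2/3
    return random.choice(["r", "p", "s"])

def get_winner(user, computer):
    # each move beats exactly one other move
    beats = {"r": "s", "p": "r", "s": "p"}
    if user == computer:
        return "It's a tie!"
    return "You win!" if beats[user] == computer else "Computer wins!"

uc = user_choice()
cc = computer_choice()
print(get_winner(uc, cc))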
Q: How to continue a while loop in python I am building a python dice game where two players are supposed to play a vs. The code runs correctly and the first player is able to play his/her round however am having a bit of a problem looping through the while statement so as to make the second player play his/her round. I thought I would add a continue at if statement(at the end of the code )but it says "continue" can be used only within a loop here is my code for the game import random import array as arr # enter player details: player = ["", ""] player[0] = input("Player one name :") player[1] = input("Player two name :") print("xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx") print(player[0], "vs", player[1]) # waiting for user to press enter key input("Press enter key to continue...") # clearing screen # creating an array of 5 dice with integer type dice = arr.array('i', [1, 2, 3, 4, 5]) k = 0 busted = 0 round = 1 score = 0 roll = 0 possible_output = [1, 2, 3, 4, 5, 6] print("round : ", round) while busted == 0 or score < 10000 or round < 6: # filling each value of dice array with random number from 1 to 6 for j in range(0, 5): # rolling dice dice[j] = random.choice(possible_output) break print(dice) roll += 1 # calculating score # condition when there is a series of 1 through 5 if dice[0] == 1 and dice[1] == 2 and dice[2] == 3 and dice[3] == 4 and dice[4] == 5: score = score+1250 # condition when there is a series of 2 through 6 elif dice[0] == 2 and dice[1] == 3 and dice[2] == 4 and dice[3] == 5 and dice[4] == 6: score = score+1250 # if value is a 1 or 5; for j in range(0, 5): if dice[j] == 1: score = score+100 elif dice[j] == 5: score = score+50 # printing the results print("Result of dice rolling is :", dice) print("Roll total : ", score) roll_dice = input("[s]ave or [R]oll") if roll_dice == "s": print("saved!") else: if k == 0: k = 1 continue else: k = 0 round = round+1 if round == 6: print("Thankyou, the game is over") A: Your if block is outside the while block (indentation). You are not executing this if within the while block, hence the error.
How to continue a while loop in python
I am building a python dice game where two players are supposed to play a vs. The code runs correctly and the first player is able to play his/her round however am having a bit of a problem looping through the while statement so as to make the second player play his/her round. I thought I would add a continue at if statement(at the end of the code )but it says "continue" can be used only within a loop here is my code for the game import random import array as arr # enter player details: player = ["", ""] player[0] = input("Player one name :") player[1] = input("Player two name :") print("xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx") print(player[0], "vs", player[1]) # waiting for user to press enter key input("Press enter key to continue...") # clearing screen # creating an array of 5 dice with integer type dice = arr.array('i', [1, 2, 3, 4, 5]) k = 0 busted = 0 round = 1 score = 0 roll = 0 possible_output = [1, 2, 3, 4, 5, 6] print("round : ", round) while busted == 0 or score < 10000 or round < 6: # filling each value of dice array with random number from 1 to 6 for j in range(0, 5): # rolling dice dice[j] = random.choice(possible_output) break print(dice) roll += 1 # calculating score # condition when there is a series of 1 through 5 if dice[0] == 1 and dice[1] == 2 and dice[2] == 3 and dice[3] == 4 and dice[4] == 5: score = score+1250 # condition when there is a series of 2 through 6 elif dice[0] == 2 and dice[1] == 3 and dice[2] == 4 and dice[3] == 5 and dice[4] == 6: score = score+1250 # if value is a 1 or 5; for j in range(0, 5): if dice[j] == 1: score = score+100 elif dice[j] == 5: score = score+50 # printing the results print("Result of dice rolling is :", dice) print("Roll total : ", score) roll_dice = input("[s]ave or [R]oll") if roll_dice == "s": print("saved!") else: if k == 0: k = 1 continue else: k = 0 round = round+1 if round == 6: print("Thankyou, the game is over")
[ "Your if block is outside the while block (indentation). You are not executing this if within the while block, hence the error.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074605535_python.txt
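A stripped-down sketch of the loop layout the dice-game answer above points at: the player-switching if/else has to sit inside the while block for continue to be legal. The scoring logic from the original code is deliberately left out so the indentation is visible.

k = 0            # 0 -> first player's turn, 1 -> second player's turn
round_no = 1     # renamed from round to avoid shadowing the built-in

while round_no < 6:
    roll_dice = input("[s]ave or [R]oll ")
    if roll_dice == "s":
        print("saved!")
    else:
        # indented inside the while loop, so continue is allowed here
        if k == 0:
            k = 1
            continue
        k = 0
        round_no = round_no + 1

print("Thank you, the game is over")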
Q: How to avoid "AttributeError: cls not available in session-scoped context" using request.cls and scope="session"? I have next construction: 1. BaseClass with fixture class BaseClass: instance = None @pytest.fixture(scope="session", autouse=True) def setup_and_teardown(self, request): request.cls.instance = AnotherClass() request.cls.instance.open_session() yield self.instance.close_session() 2. TestClass with a test class TestClass(BaseClass): def awesome_test(self): value = self.instance.get_value("parameter") assert value == "correct result" Question: if I want to save this kind of construction with reusing an instance of the class in the actual test, parameter scope="session" and yield in the fixtures, how can I avoid/solve an error "AttributeError: cls not available in session-scoped context" (if possible)? I tried: Creating 2 fixtures: one for setup and creating instances of class and second for yield and close connection A: It is not a good idea to have session-scoped fixture as a method in a class - this creates the impression that it have access to the class or a class instance, which it has not. Instead, write the session-scoped fixture outside the class and yield the created object. Pytest will cache the object and pass it to any test that requires it: @pytest.fixture(scope="session", autouse=True) def instance(): inst = AnotherClass() inst.open_session() yield inst inst.close_session() class TestClass: def awesome_test(self, instance): value = instance.get_value("parameter") assert value == "correct result" This also makes the base class obsolete.
How to avoid "AttributeError: cls not available in session-scoped context" using request.cls and scope="session"?
I have next construction: 1. BaseClass with fixture class BaseClass: instance = None @pytest.fixture(scope="session", autouse=True) def setup_and_teardown(self, request): request.cls.instance = AnotherClass() request.cls.instance.open_session() yield self.instance.close_session() 2. TestClass with a test class TestClass(BaseClass): def awesome_test(self): value = self.instance.get_value("parameter") assert value == "correct result" Question: if I want to save this kind of construction with reusing an instance of the class in the actual test, parameter scope="session" and yield in the fixtures, how can I avoid/solve an error "AttributeError: cls not available in session-scoped context" (if possible)? I tried: Creating 2 fixtures: one for setup and creating instances of class and second for yield and close connection
[ "It is not a good idea to have session-scoped fixture as a method in a class - this creates the impression that it have access to the class or a class instance, which it has not.\nInstead, write the session-scoped fixture outside the class and yield the created object. Pytest will cache the object and pass it to any test that requires it:\n@pytest.fixture(scope=\"session\", autouse=True)\ndef instance():\n inst = AnotherClass()\n inst.open_session()\n \n yield inst\n\n inst.close_session()\n\nclass TestClass:\n def awesome_test(self, instance):\n value = instance.get_value(\"parameter\")\n assert value == \"correct result\"\n\nThis also makes the base class obsolete.\n" ]
[ 0 ]
[]
[]
[ "automated_tests", "fixtures", "pytest", "python" ]
stackoverflow_0074573662_automated_tests_fixtures_pytest_python.txt
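If several test modules need the same session object, the usual place for a session-scoped fixture like the one above is a conftest.py next to the tests; a minimal sketch, where AnotherClass and its import path stand in for whatever the real session class is.

# conftest.py
import pytest
from mypackage.client import AnotherClass   # illustrative import path

@pytest.fixture(scope="session", autouse=True)
def instance():
    inst = AnotherClass()
    inst.open_session()
    yield inst            # pytest caches this object for the whole session
    inst.close_session()

# test_values.py
def test_get_value(instance):
    assert instance.get_value("parameter") == "correct result"

Note that pytest only collects functions whose names start with test_ by default, so a method named awesome_test would be skipped unless python_functions is changed in the configuration.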
Q: Pip can't build wheels for pyproject.toml projects because it can't find "--plat-name." I'm trying to install moderngl using pip. Of course, python -m pip install moderngl It failed to build wheels for moderngl and one of its dependencies, glcontext. In both cases, it gave me the same error message: error: --plat-name must be one of ('win32', 'win-amd64', 'win-arm32', 'win-arm64') I am certainly, certainly running a 64-bit AMD processor on a Windows machine. How do I make sure pip/pyproject.toml/whatever knows this? With the double-hyphen, I'd assume this is a CLI argument python -m pip install moderngl --plat-name 'win-amd64' but I want to double check and not break things. Is this the safe choice? Or am I way off base? Is there some boilerplate thing I needed to do between installing pip and using it? Either way, what happens if I change the --plat-name argument? A: Still don't know what the problem was precisely, but now I know vaguely, and the solution. I had installed Python 3.10.8, pip, pipenv, etc. through mingw64. This seems to lag behind on providing the latest versions of things. It couldn't give me the newest pip or pipenv at the time of this question. And although Python 3.10.8 was exactly the version I wanted, something in there was just messed. Using the recommended installer from python.org and installing pipenv with pip cleared it up (except that pipenv is still trying to use my bad install by default). Still curious if anyone has insight into what happened. Was --plat-name untraceable for my original python install because of the container it was in?
Pip can't build wheels for pyproject.toml projects because it can't find "--plat-name."
I'm trying to install moderngl using pip. Of course, python -m pip install moderngl It failed to build wheels for moderngl and one of its dependencies, glcontext. In both cases, it gave me the same error message: error: --plat-name must be one of ('win32', 'win-amd64', 'win-arm32', 'win-arm64') I am certainly, certainly running a 64-bit AMD processor on a Windows machine. How do I make sure pip/pyproject.toml/whatever knows this? With the double-hyphen, I'd assume this is a CLI argument python -m pip install moderngl --plat-name 'win-amd64' but I want to double check and not break things. Is this the safe choice? Or am I way off base? Is there some boilerplate thing I needed to do between installing pip and using it? Either way, what happens if I change the --plat-name argument?
[ "Still don't know what the problem was precisely, but now I know vaguely, and the solution.\nI had installed Python 3.10.8, pip, pipenv, etc. through mingw64. This seems to lag behind on providing the latest versions of things. It couldn't give me the newest pip or pipenv at the time of this question. And although Python 3.10.8 was exactly the version I wanted, something in there was just messed. Using the recommended installer from python.org and installing pipenv with pip cleared it up (except that pipenv is still trying to use my bad install by default).\nStill curious if anyone has insight into what happened. Was --plat-name untraceable for my original python install because of the container it was in?\n" ]
[ 0 ]
[]
[]
[ "pip", "python", "python_moderngl" ]
stackoverflow_0074596436_pip_python_python_moderngl.txt
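One low-risk way to see what a given interpreter believes its platform is (and therefore what --plat-name wheel builds will default to) is to ask sysconfig; running this under the MSYS2/mingw64 Python and under the python.org build should show different values, which is consistent with the error above. The snippet is purely diagnostic and changes nothing.

import sys
import sysconfig

print(sys.version)                 # which interpreter is actually running
print(sysconfig.get_platform())    # e.g. win-amd64 for the python.org build

For what it's worth, pip install itself does not appear to expose a --plat-name option, so passing it on the install command line as in the question would most likely be rejected rather than change how the wheel is built.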
Q: How to collect terms of given powers of multivariable polynomials using sympy? I have a polynomial like this: 3*D*c1*cos_psi**2*p**2*u/(d*k**4*kappa**2) + 3*D*c1*cos_psi*p*q*u/(2*k**4*kappa**2) - 3*D*c1*cos_psi*p*q*u/(d*k**4*kappa**2) - 3*D*c1*u/(2*k**2*kappa**2) - 3*D*c1*p**2*u/(2*k**4*kappa**2) - 3*D*c1*q**2*u/(4*k**4*kappa**2) + 3*D*c1*p**2*u*(1 - cos_psi**2)/(d*k**4*kappa**2) + 3*D*c1*q**2*u/(2*d*k**4*kappa**2) - 6*D*c3*cos_psi**2*p**2*u/(d*k**4*kappa**2) - 6*D*c3*cos_psi*p*q*u/(k**4*kappa**2) + 6*D*c3*cos_psi*p*q*u/(d*k**4*kappa**2) + 6*D*c3*p**2*u/(k**4*kappa**2) + 3*D*c3*q**2*u/(k**4*kappa**2) - 6*D*c3*p**2*u*(1 - cos_psi**2)/(d*k**4*kappa**2) - 3*D*c3*q**2*u/(d*k**4*kappa**2) I want to collect the terms like a multivariable polynomial of powers of q and p. I found the Poly(expr,q,p) does exactly what I want. But the outcome is Poly((-3*D*c1*d*u + 6*D*c1*u + 12*D*c3*d*u - 12*D*c3*u)/(4*d*k**4*kappa**2)*q**2 + (3*D*c1*cos_psi*d*u - 6*D*c1*cos_psi*u - 12*D*c3*cos_psi*d*u + 12*D*c3*cos_psi*u)/(2*d*k**4*kappa**2)*q*p + (-3*D*c1*d*u + 6*D*c1*u + 12*D*c3*d*u - 12*D*c3*u)/(2*d*k**4*kappa**2)*p**2 - 3*D*c1*u/(2*k**2*kappa**2), q, p, domain='ZZ(u,c1,c3,d,k,D,cos_psi,kappa)'). I just want the final expression without the 'Poly(__,q,p,domain=....)'. I only want the ____ . A: If Poly does what you want then you can get the expression part of a Poly as follows: >>> Poly(3*x+y*x+4+z,x).as_expr() x*(y + 3) + z + 4 A: It seems like this solved it: expr = collect(simplify(expr),(q,p,p*q))
How to collect terms of given powers of multivariable polynomials using sympy?
I have a polynomial like this: 3*D*c1*cos_psi**2*p**2*u/(d*k**4*kappa**2) + 3*D*c1*cos_psi*p*q*u/(2*k**4*kappa**2) - 3*D*c1*cos_psi*p*q*u/(d*k**4*kappa**2) - 3*D*c1*u/(2*k**2*kappa**2) - 3*D*c1*p**2*u/(2*k**4*kappa**2) - 3*D*c1*q**2*u/(4*k**4*kappa**2) + 3*D*c1*p**2*u*(1 - cos_psi**2)/(d*k**4*kappa**2) + 3*D*c1*q**2*u/(2*d*k**4*kappa**2) - 6*D*c3*cos_psi**2*p**2*u/(d*k**4*kappa**2) - 6*D*c3*cos_psi*p*q*u/(k**4*kappa**2) + 6*D*c3*cos_psi*p*q*u/(d*k**4*kappa**2) + 6*D*c3*p**2*u/(k**4*kappa**2) + 3*D*c3*q**2*u/(k**4*kappa**2) - 6*D*c3*p**2*u*(1 - cos_psi**2)/(d*k**4*kappa**2) - 3*D*c3*q**2*u/(d*k**4*kappa**2) I want to collect the terms like a multivariable polynomial of powers of q and p. I found the Poly(expr,q,p) does exactly what I want. But the outcome is Poly((-3*D*c1*d*u + 6*D*c1*u + 12*D*c3*d*u - 12*D*c3*u)/(4*d*k**4*kappa**2)*q**2 + (3*D*c1*cos_psi*d*u - 6*D*c1*cos_psi*u - 12*D*c3*cos_psi*d*u + 12*D*c3*cos_psi*u)/(2*d*k**4*kappa**2)*q*p + (-3*D*c1*d*u + 6*D*c1*u + 12*D*c3*d*u - 12*D*c3*u)/(2*d*k**4*kappa**2)*p**2 - 3*D*c1*u/(2*k**2*kappa**2), q, p, domain='ZZ(u,c1,c3,d,k,D,cos_psi,kappa)'). I just want the final expression without the 'Poly(__,q,p,domain=....)'. I only want the ____ .
[ "If Poly does what you want then you can get the expression part of a Poly as follows:\n>>> Poly(3*x+y*x+4+z,x).as_expr()\nx*(y + 3) + z + 4\n\n", "It seems like this solved it:\n expr = collect(simplify(expr),(q,p,p*q))\n\n" ]
[ 1, 0 ]
[]
[]
[ "poly", "python", "sympy" ]
stackoverflow_0074599601_poly_python_sympy.txt
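A small self-contained illustration of both answers on a toy polynomial in q and p; the coefficient symbols a and b are placeholders, not the constants from the question.

from sympy import symbols, Poly, collect

q, p, a, b = symbols("q p a b")
expr = a*q**2 + 2*a*q*p + b*q*p + 3*b*p**2 + a

# Poly(...) groups the terms by powers of q and p,
# and .as_expr() drops the Poly(..., domain=...) wrapper again
print(Poly(expr, q, p).as_expr())

# collect() gives a similar grouping without constructing a Poly at all
print(collect(expr, [q, p]))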
Q: I got the "AttributeError: 'OutStream' object has no attribute 'buffer'" when i run the below python code that are from w3school in the google colab Here I mention the code that I saw in the w3school. # w3school code import sys import matplotlib matplotlib.use('Agg') import pandas as pd import matplotlib.pyplot as plt health_data = pd.read_csv("data.csv", header=0, sep=",") health_data.plot(x ='Average_Pulse', y='Calorie_Burnage', kind='line'), plt.ylim(ymin=0, ymax=400) plt.xlim(xmin=0, xmax=150) plt.show() #Two lines to make our compiler able to draw: plt.savefig(sys.stdout.buffer) sys.stdout.flush() And I got the error (AttributeError: 'OutStream' object has no attribute 'buffer') if I performed the above operation on Kaggle dataset in google colab via using the below code. #Three lines to make our compiler able to draw: import sys import matplotlib matplotlib.use('Agg') import pandas as pd import matplotlib.pyplot as plt health_data = pd.read_csv("/content/drive/MyDrive/India_GDP_Data.csv", header=0, sep=",") health_data.plot(x ='Year', y='GDP_In_Billion_USD', kind='line'), plt.ylim(ymin=0, ymax=400) plt.xlim(xmin=0, xmax=150) plt.show() #Two lines to make our compiler able to draw: plt.savefig(sys.stdout.buffer) sys.stdout.flush() A: I had this same issue on W3schools as well! Use matplotlib inline in you notebook like this: %matplotlib inline import pandas as pd import matplotlib.pyplot as plt df = pd.read_csv('data.csv') df.plot() plt.show() sys.stdout.flush()
I got the "AttributeError: 'OutStream' object has no attribute 'buffer'" when i run the below python code that are from w3school in the google colab
Here I mention the code that I saw in the w3school. # w3school code import sys import matplotlib matplotlib.use('Agg') import pandas as pd import matplotlib.pyplot as plt health_data = pd.read_csv("data.csv", header=0, sep=",") health_data.plot(x ='Average_Pulse', y='Calorie_Burnage', kind='line'), plt.ylim(ymin=0, ymax=400) plt.xlim(xmin=0, xmax=150) plt.show() #Two lines to make our compiler able to draw: plt.savefig(sys.stdout.buffer) sys.stdout.flush() And I got the error (AttributeError: 'OutStream' object has no attribute 'buffer') if I performed the above operation on Kaggle dataset in google colab via using the below code. #Three lines to make our compiler able to draw: import sys import matplotlib matplotlib.use('Agg') import pandas as pd import matplotlib.pyplot as plt health_data = pd.read_csv("/content/drive/MyDrive/India_GDP_Data.csv", header=0, sep=",") health_data.plot(x ='Year', y='GDP_In_Billion_USD', kind='line'), plt.ylim(ymin=0, ymax=400) plt.xlim(xmin=0, xmax=150) plt.show() #Two lines to make our compiler able to draw: plt.savefig(sys.stdout.buffer) sys.stdout.flush()
[ "I had this same issue on W3schools as well!\nUse matplotlib inline in you notebook like this:\n%matplotlib inline\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\ndf = pd.read_csv('data.csv')\n\ndf.plot()\n\nplt.show()\n\nsys.stdout.flush()\n\n" ]
[ 0 ]
[]
[]
[ "attributes", "buffer", "object", "python" ]
stackoverflow_0073956763_attributes_buffer_object_python.txt
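The error itself comes from the last two lines of the snippet: in Colab/Jupyter, sys.stdout is an IPython OutStream that has no .buffer attribute, and those two lines only exist so the W3Schools online runner can emit the image. In a notebook they can simply be dropped, along with matplotlib.use('Agg'); a minimal sketch using the same CSV path as the question.

import pandas as pd
import matplotlib.pyplot as plt

health_data = pd.read_csv("/content/drive/MyDrive/India_GDP_Data.csv", header=0, sep=",")

# letting the cell render the figure (or calling plt.show()) is enough in a notebook
health_data.plot(x="Year", y="GDP_In_Billion_USD", kind="line")
plt.show()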
Q: How to store image input given by Pi Camera directly into a variable rather than to a file? I want to store the input given by my pi camera directly into a variable, rather than storing it in a file. I want to do this so that it takes less processing power of the pi, as I am working on a autonomus car project and it takes alot of processing. When I try to store the image to a variable it gives me the following error - AttributeError: 'int' object has no attribute 'name' During handling of the above exception, another exception occurred: 'Format must be specified when output has no filename') picamera.exc.PiCameraValueError: Format must be specified when output has no filename My code - img = 1 camera = picamera.PiCamera() camera.capture(img) time.sleep(0.0001) img = cv2.imread(img) cv2.imshow('img', img) cv2.waitKey(1) I have made img as a variable to store the captured image, but it is not working. If there is any library that can do this for me please do let me know. Thanks in advance for your kind response. A: Came across the same issue but the answers proposed around the web were not clear enough for my issue. Also, this is the first result that came up during my research, and yet there are no answers to it. Hopefully my summary helps answer this problem properly. Here is how I figured it out: camera.capture(output, 'jpeg', use_video_port=True) this statement takes an output, an optional output type (must be specified if the format cannot be figured out from the output name), and whether or not you will use the video port for the output. output must be a bytes-like object. So this means that we should feed it a BytesIO object. By doing this, we get a stream of the desired output. However, if you try to process this stream immediately, you will notice that there seems to be no data. This is because the internal pointer is pointing at the end of the stream. We must first move it to the start with myStream.seek(0). After doing this, we have a file on which we can do proper work. You can send it to a server, use PIL features on it or whatever you want. Summary of the above: my_stream = BytesIO() #create a bytes object so the capture can output to it camera.capture(my_stream, 'jpeg', use_video_port=True) #output the camera feed to my_stream my_stream.seek(0) #go to the start of the stream in order to do work on it for example, in my case, I could just send this to my websocket server directly with await websocket.send(my_stream)
How to store image input given by Pi Camera directly into a variable rather than to a file?
I want to store the input given by my pi camera directly into a variable, rather than storing it in a file. I want to do this so that it takes less processing power of the pi, as I am working on a autonomus car project and it takes alot of processing. When I try to store the image to a variable it gives me the following error - AttributeError: 'int' object has no attribute 'name' During handling of the above exception, another exception occurred: 'Format must be specified when output has no filename') picamera.exc.PiCameraValueError: Format must be specified when output has no filename My code - img = 1 camera = picamera.PiCamera() camera.capture(img) time.sleep(0.0001) img = cv2.imread(img) cv2.imshow('img', img) cv2.waitKey(1) I have made img as a variable to store the captured image, but it is not working. If there is any library that can do this for me please do let me know. Thanks in advance for your kind response.
[ "Came across the same issue but the answers proposed around the web were not clear enough for my issue. Also, this is the first result that came up during my research, and yet there are no answers to it. Hopefully my summary helps answer this problem properly. Here is how I figured it out:\n\ncamera.capture(output, 'jpeg', use_video_port=True)\n\nthis statement takes an output, an optional output type (must be specified if the format cannot be figured out from the output name), and whether or not you will use the video port for the output.\noutput must be a bytes-like object. So this means that we should feed it a BytesIO object. By doing this, we get a stream of the desired output.\nHowever, if you try to process this stream immediately, you will notice that there seems to be no data. This is because the internal pointer is pointing at the end of the stream. We must first move it to the start with myStream.seek(0).\nAfter doing this, we have a file on which we can do proper work. You can send it to a server, use PIL features on it or whatever you want.\nSummary of the above:\nmy_stream = BytesIO() #create a bytes object so the capture can output to it\ncamera.capture(my_stream, 'jpeg', use_video_port=True) #output the camera feed to my_stream\nmy_stream.seek(0) #go to the start of the stream in order to do work on it\n\nfor example, in my case, I could just send this to my websocket server directly with await websocket.send(my_stream)\n" ]
[ 1 ]
[]
[]
[ "python", "raspberry_pi" ]
stackoverflow_0054869329_python_raspberry_pi.txt
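Since the question ultimately wants the frame in OpenCV, the JPEG bytes captured into the BytesIO stream can be decoded in memory with cv2.imdecode, skipping the file entirely; a sketch along those lines, only runnable on a Pi with the legacy picamera stack installed.

from io import BytesIO

import cv2
import numpy as np
import picamera

camera = picamera.PiCamera()
stream = BytesIO()
camera.capture(stream, "jpeg", use_video_port=True)   # capture straight into memory

data = np.frombuffer(stream.getvalue(), dtype=np.uint8)
img = cv2.imdecode(data, cv2.IMREAD_COLOR)            # img is now an ordinary BGR array

cv2.imshow("img", img)
cv2.waitKey(1)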
Q: Python: Why is this recursion failing? Why am I getting maximum recursion results of [] in this simple recursion example? # generate data df = pd.DataFrame({'id': [1, 2, 2, 3, 4, 5, 6, 7], 'parent': [np.nan, 1, 2, 2, np.nan, 1, 1, 5]}) parents = df.parent.dropna().unique().astype(int) def find_parent(init_parent): init_parent = [init_parent] if isinstance(init_parent, int) else [init_parent] if len(init_parent) == 0: return init_parent else: return find_parent(df.loc[df['parent'].isin(init_parent)]['id'].tolist()) # max recursion of [] results find_parent(parents[1]) A: def find_parent(init_parent): init_parent = [init_parent] if isinstance(init_parent, int) else [init_parent] if len(init_parent) == 0: # this only returns true on an empty array return init_parent # you're getting [] because this return else: return find_parent(df.loc[df['parent'].isin(init_parent)]['id'].tolist()) # run ops find_parent(df.loc[df['parent'].isin(init_parent)]['id'].tolist()) your maximum return is reached when there is no parent. You overwrite the init parent in the line above, check if it's an empty array, and then return that empty array.
Python: Why is this recursion failing?
Why am I getting maximum recursion results of [] in this simple recursion example? # generate data df = pd.DataFrame({'id': [1, 2, 2, 3, 4, 5, 6, 7], 'parent': [np.nan, 1, 2, 2, np.nan, 1, 1, 5]}) parents = df.parent.dropna().unique().astype(int) def find_parent(init_parent): init_parent = [init_parent] if isinstance(init_parent, int) else [init_parent] if len(init_parent) == 0: return init_parent else: return find_parent(df.loc[df['parent'].isin(init_parent)]['id'].tolist()) # max recursion of [] results find_parent(parents[1])
[ "def find_parent(init_parent):\n init_parent = [init_parent] if isinstance(init_parent, int) else [init_parent]\n if len(init_parent) == 0: # this only returns true on an empty array\n return init_parent # you're getting [] because this return\n else:\n return find_parent(df.loc[df['parent'].isin(init_parent)]['id'].tolist())\n\n# run ops\nfind_parent(df.loc[df['parent'].isin(init_parent)]['id'].tolist())\n\nyour maximum return is reached when there is no parent. You overwrite the init parent in the line above, check if it's an empty array, and then return that empty array.\n" ]
[ 1 ]
[]
[]
[ "python", "recursion" ]
stackoverflow_0074605789_python_recursion.txt
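If the intent behind find_parent is to collect every id reachable below a starting parent (the original version always ends by returning the final empty list), one way to restructure it is to accumulate the ids before recursing and to guard against rows that point at themselves, such as id 2 with parent 2 in this data. A sketch under that assumption:

import numpy as np
import pandas as pd

df = pd.DataFrame({'id': [1, 2, 2, 3, 4, 5, 6, 7],
                   'parent': [np.nan, 1, 2, 2, np.nan, 1, 1, 5]})

def find_descendants(parent_ids, seen=None):
    seen = set() if seen is None else seen
    if isinstance(parent_ids, (int, np.integer)):
        parent_ids = [parent_ids]
    children = df.loc[df['parent'].isin(parent_ids), 'id'].tolist()
    new = [c for c in children if c not in seen]   # drop ids we already visited
    if not new:                                    # base case: nothing new below this level
        return []
    seen.update(new)
    return new + find_descendants(new, seen)

print(find_descendants(1))   # walks 1 -> [2, 5, 6] -> [3, 7]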
Q: operation parameter must be str I need to insert with using only SQLAlchemy. Stack of technologies is only sqlite, sqlachemy for this. ` # запись работника в таблицу users(ФИО, рандом др, роль(рандом из таблицы roles) # вывод последних добавленных 5 работников import json import sqlalchemy as sa import sqlite3 from sqlalchemy import Table, MetaData metadata = sa.MetaData() roles = sa.Table('roles', metadata, sa.Column('id', sa.Integer, primary_key=True), sa.Column('name', sa.String(50))) users = sa.Table('users', metadata, sa.Column('id', sa.BIGINT, primary_key=True), sa.Column('fio', sa.String(255)), sa.Column('datar', sa.String(255)), sa.Column('id_role', sa.Integer)) with open("C:\\Users\\kiril\\PycharmProjects\\testbotjob_project\\config.json", 'r', encoding='utf-8') as f: # открыли файл options = json.load(f) # загнали все из файла в переменную db = options["data_base"] # путь к бд con = sqlite3.connect(db) con.execute(roles.insert().values(name='Грузчик')) con.close() # with con: # con.execute(""" # CREATE TABLE roles ( # id INT PRIMARY KEY, # name VARCHAR(50) # ); # """) # con.execute(""" # CREATE TABLE users ( # id BIGINT PRIMARY KEY, # fio VARCHAR(30), # datar DATE, # id_role INT, # FOREIGN KEY (id_role) REFERENCES roles (id) # ); # """) ` I get this error: Traceback (most recent call last): File "C:/Users/kiril/PycharmProjects/testbotjob_project/database/db_con.py", line 26, in <module> con.execute(roles.insert().values(name='Грузчик')) ValueError: operation parameter must be str I was googled, but after 40 minutes i dont know.. Help please. Hope only for you. A: Thanks https://stackoverflow.com/users/5320906/snakecharmerb answer: https://docs.sqlalchemy.org/en/14/tutorial/data_insert.html#inserting-rows-with-core engine = create_engine("sqlite:///database.db", echo=True, future=True) ... with engine.connect() as conn: result = conn.execute(stmt) conn.commit()
operation parameter must be str
I need to insert with using only SQLAlchemy. Stack of technologies is only sqlite, sqlachemy for this. ` # запись работника в таблицу users(ФИО, рандом др, роль(рандом из таблицы roles) # вывод последних добавленных 5 работников import json import sqlalchemy as sa import sqlite3 from sqlalchemy import Table, MetaData metadata = sa.MetaData() roles = sa.Table('roles', metadata, sa.Column('id', sa.Integer, primary_key=True), sa.Column('name', sa.String(50))) users = sa.Table('users', metadata, sa.Column('id', sa.BIGINT, primary_key=True), sa.Column('fio', sa.String(255)), sa.Column('datar', sa.String(255)), sa.Column('id_role', sa.Integer)) with open("C:\\Users\\kiril\\PycharmProjects\\testbotjob_project\\config.json", 'r', encoding='utf-8') as f: # открыли файл options = json.load(f) # загнали все из файла в переменную db = options["data_base"] # путь к бд con = sqlite3.connect(db) con.execute(roles.insert().values(name='Грузчик')) con.close() # with con: # con.execute(""" # CREATE TABLE roles ( # id INT PRIMARY KEY, # name VARCHAR(50) # ); # """) # con.execute(""" # CREATE TABLE users ( # id BIGINT PRIMARY KEY, # fio VARCHAR(30), # datar DATE, # id_role INT, # FOREIGN KEY (id_role) REFERENCES roles (id) # ); # """) ` I get this error: Traceback (most recent call last): File "C:/Users/kiril/PycharmProjects/testbotjob_project/database/db_con.py", line 26, in <module> con.execute(roles.insert().values(name='Грузчик')) ValueError: operation parameter must be str I was googled, but after 40 minutes i dont know.. Help please. Hope only for you.
[ "Thanks https://stackoverflow.com/users/5320906/snakecharmerb\nanswer: https://docs.sqlalchemy.org/en/14/tutorial/data_insert.html#inserting-rows-with-core\nengine = create_engine(\"sqlite:///database.db\", echo=True, future=True)\n\n...\nwith engine.connect() as conn:\nresult = conn.execute(stmt)\nconn.commit()\n\n" ]
[ 0 ]
[]
[]
[ "python", "sqlalchemy", "sqlite" ]
stackoverflow_0074604509_python_sqlalchemy_sqlite.txt
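Putting that together with the roles table from the question, a minimal end-to-end sketch that stays entirely inside SQLAlchemy; the sqlite file name is illustrative.

import sqlalchemy as sa

metadata = sa.MetaData()
roles = sa.Table(
    "roles", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("name", sa.String(50)),
)

engine = sa.create_engine("sqlite:///guestrecord.db", future=True)
metadata.create_all(engine)            # creates the table if it does not exist yet

with engine.connect() as conn:
    conn.execute(roles.insert().values(name="Грузчик"))   # same value the question inserts
    conn.commit()                      # required with the 2.0-style connection
    for row in conn.execute(sa.select(roles)):
        print(row)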
Q: How to open a window by clicking on a column in a tree view table? How would you double click on a column in a tree view table to then display the specific records on entry fields on a new window. def SearchCustomer(self): connection = sqlite3.connect("Guestrecord.db") cursor = connection.cursor() columnID = ["GuestID","title","firstName","surname","dob","payment","email","phoneno","address","postcode"] columnStr =["GuestID","Title","FirstName","Surname","DOB","Payment","Email","PhoneNo","Address","Postcode"] self.search_table = ttk.Treeview(self.search_frame,columns=columnID,show="headings") ## self.search_table.bind("<Motion>","break") for i in range(0,10): self.search_table.heading(columnID[i],text = columnStr[i]) self.search_table.column(columnID[i],minwidth = 0, width = 90) self.search_table.place(x=0,y=0) for GuestRec in cursor.execute("SELECT * FROM tb1Guest1"): self.search_table.insert("",END,values=GuestRec) self.search_table.bind("<Double-1>", self.OnDoubleClick) connection.commit() connection.close() SearchCustomer(self) sqlCommand = """ CREATE TABLE IF NOT EXISTS tb1Guest1 ( guestID INTEGER NOT NULL, guestTitle TEXT, guestFirstname TEXT, guestSurname TEXT, guestDOB DATE, guestPaymentType TEXT, guestEmail TEXT, guestPhoneNumber INTEGER, guestAddress TEXT, guestPostcode TEXT, primary key (guestID) ) """ self.search_firstname = Entry(self.search_frame2, width=25,bg="#e2f0d9",font=("Avenir Next",18),highlightthickness = 0,relief=FLAT) self.search_firstname.place(x = 140, y =0) self.search_firstname_label = Label(self.search_frame2,bg = "white", text = "First Name", font=("Avenir Next",20)) self.search_firstname_label.place(x= 30,y=0) self.search_Surname = Entry(self.search_frame2, width=25,bg="#e2f0d9",font=("Avenir Next",18),highlightthickness = 0,relief=FLAT) self.search_Surname.place(x = 540, y =0) self.search_Surname_label = Label(self.search_frame2,bg = "white", text = "Surname", font=("Avenir Next",20)) self.search_Surname_label.place(x= 450,y=0) ## Binding entries self.search_firstname.bind("<KeyRelease>",self.Search) self.search_Surname.bind("<KeyRelease>",self.Search) def OnDoubleClick(self, event): self.gf_window.destroy() self.search_results() This is the my code for displaying my records in my search tree view table. I am able to double click on a column to then bring up a new window but I am unsure how I would display the specific records on my entry fields I am making on the new window? If anyone could suggest a solution to this torment it would be much amazing. Thanks in advance. A: You should be able to grab the selected Treeview item from the mouse event being passed to your OnDoubleClick() function. If you want to open a new window that's a child of your root window, you'll want a Toplevel widget. You can treat that Toplevel window pretty much just like you would a root Tk window. # side note: this should really be named "on_double_click" by convention def OnDoubleClick(self, event): tree = event.widget # get the treeview widget region = tree.identify_region(event.x, event.y) # get click location iid = tree.identify('item', event.x, event.y) # get item ID if region == 'cell': # i.e., if the click wasn't on the header row... data = tree.item(iid)['values'] # get the item data window = Toplevel(self) window.transient(self) # optional: hide window controls except [x], pull focus # whatever you want here... window.geometry = '400x600' window.title('Results') label = ttk.Label(window, text=data) label.pack() # yadda yadda...
How to open a window by clicking on a column in a tree view table?
How would you double click on a column in a tree view table to then display the specific records on entry fields on a new window. def SearchCustomer(self): connection = sqlite3.connect("Guestrecord.db") cursor = connection.cursor() columnID = ["GuestID","title","firstName","surname","dob","payment","email","phoneno","address","postcode"] columnStr =["GuestID","Title","FirstName","Surname","DOB","Payment","Email","PhoneNo","Address","Postcode"] self.search_table = ttk.Treeview(self.search_frame,columns=columnID,show="headings") ## self.search_table.bind("<Motion>","break") for i in range(0,10): self.search_table.heading(columnID[i],text = columnStr[i]) self.search_table.column(columnID[i],minwidth = 0, width = 90) self.search_table.place(x=0,y=0) for GuestRec in cursor.execute("SELECT * FROM tb1Guest1"): self.search_table.insert("",END,values=GuestRec) self.search_table.bind("<Double-1>", self.OnDoubleClick) connection.commit() connection.close() SearchCustomer(self) sqlCommand = """ CREATE TABLE IF NOT EXISTS tb1Guest1 ( guestID INTEGER NOT NULL, guestTitle TEXT, guestFirstname TEXT, guestSurname TEXT, guestDOB DATE, guestPaymentType TEXT, guestEmail TEXT, guestPhoneNumber INTEGER, guestAddress TEXT, guestPostcode TEXT, primary key (guestID) ) """ self.search_firstname = Entry(self.search_frame2, width=25,bg="#e2f0d9",font=("Avenir Next",18),highlightthickness = 0,relief=FLAT) self.search_firstname.place(x = 140, y =0) self.search_firstname_label = Label(self.search_frame2,bg = "white", text = "First Name", font=("Avenir Next",20)) self.search_firstname_label.place(x= 30,y=0) self.search_Surname = Entry(self.search_frame2, width=25,bg="#e2f0d9",font=("Avenir Next",18),highlightthickness = 0,relief=FLAT) self.search_Surname.place(x = 540, y =0) self.search_Surname_label = Label(self.search_frame2,bg = "white", text = "Surname", font=("Avenir Next",20)) self.search_Surname_label.place(x= 450,y=0) ## Binding entries self.search_firstname.bind("<KeyRelease>",self.Search) self.search_Surname.bind("<KeyRelease>",self.Search) def OnDoubleClick(self, event): self.gf_window.destroy() self.search_results() This is the my code for displaying my records in my search tree view table. I am able to double click on a column to then bring up a new window but I am unsure how I would display the specific records on my entry fields I am making on the new window? If anyone could suggest a solution to this torment it would be much amazing. Thanks in advance.
[ "You should be able to grab the selected Treeview item from the mouse event being passed to your OnDoubleClick() function. If you want to open a new window that's a child of your root window, you'll want a Toplevel widget. You can treat that Toplevel window pretty much just like you would a root Tk window.\n# side note: this should really be named \"on_double_click\" by convention\ndef OnDoubleClick(self, event): \n tree = event.widget # get the treeview widget\n region = tree.identify_region(event.x, event.y) # get click location\n iid = tree.identify('item', event.x, event.y) # get item ID\n\n if region == 'cell': # i.e., if the click wasn't on the header row...\n data = tree.item(iid)['values'] # get the item data\n window = Toplevel(self)\n window.transient(self) # optional: hide window controls except [x], pull focus\n # whatever you want here...\n window.geometry = '400x600'\n window.title('Results')\n label = ttk.Label(window, text=data)\n label.pack()\n # yadda yadda...\n\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter", "treeview" ]
stackoverflow_0074605379_python_tkinter_treeview.txt
Q: How to merge pandas dataframes after renaming columns? I have a code that merged 2 dataframes that had all columns in uppercase. I need to adjust the code to merge the dataframes but now 1 comes with columns in lowercase and the other doesn't. I wrote the following code to change the columns names to lowercase, and then changed the merge to lowercase also, but no i get a Key error: 'code' df_small = df_small.rename(columns=str.lower) common = df_small.merge(df_big,on=['code']) A: First to 'lower' the column names you can use: df_small.columns = df_small.columns.str.lower() I think that your code for lowercasing the column names is doing something wrong. Try mine above
How to merge pandas dataframes after renaming columns?
I have code that merged 2 dataframes that had all columns in uppercase. I need to adjust the code to merge the dataframes, but now one comes with columns in lowercase and the other doesn't. I wrote the following code to change the column names to lowercase, and then changed the merge to lowercase as well, but now I get a KeyError: 'code'
df_small = df_small.rename(columns=str.lower)
common = df_small.merge(df_big,on=['code'])
[ "First to 'lower' the column names you can use:\ndf_small.columns = df_small.columns.str.lower()\n\nI think that your code for lowercasing the column names is doing something wrong. Try mine above\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074605887_pandas_python.txt
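A tiny end-to-end version of that suggestion, with toy frames standing in for df_small and df_big.

import pandas as pd

df_small = pd.DataFrame({"CODE": [1, 2, 3], "NAME": ["a", "b", "c"]})
df_big = pd.DataFrame({"code": [2, 3, 4], "value": [20, 30, 40]})

df_small.columns = df_small.columns.str.lower()   # CODE -> code, NAME -> name

common = df_small.merge(df_big, on="code")
print(common)   # rows with code 2 and 3 survive the inner join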
Q: Snowflake UDF returns "Unknown user-defined function" For Existing UDF I have a UDF that I can call within my snowflakecomputing.com console. SELECT DECODE_UTF8('some string') Works great, until I try to call it programmatically from a Python script. I receive this... snowflake.connector.errors.ProgrammingError: 002141 (42601): or: Unknown user-defined function CS_QA.CS_ANALYTICS.DECODE_UTF8 I am even fully qualifying it (i.e., db.schema.function) Can anyone suggest a fix? Thank you. A: Most likely the user(and role assigned) used to connect from Python does not have access to that UDF. This hypothesis could be validated by using INFORMATION_SCHEMA.FUNCTIONS: The view only displays objects for which the current role for the session has been granted access privileges. SELECT * FROM CS_QA.INFORMATION_SCHEMA.FUNCTIONS; Another possibility is that part of the fully-qualified name is case-sensitive and requires wrapping with " SELECT "CS_QA"."CS_ANALYTICS".DECODE_UTF8('some string'); A: I believe, You might have to first switch to database where the function has defined. USE DATBASE USER_DEF; SELECT DECODE_UTF8('some string') That should work.
Snowflake UDF returns "Unknown user-defined function" For Existing UDF
I have a UDF that I can call within my snowflakecomputing.com console. SELECT DECODE_UTF8('some string') Works great, until I try to call it programmatically from a Python script. I receive this... snowflake.connector.errors.ProgrammingError: 002141 (42601): or: Unknown user-defined function CS_QA.CS_ANALYTICS.DECODE_UTF8 I am even fully qualifying it (i.e., db.schema.function) Can anyone suggest a fix? Thank you.
[ "Most likely the user(and role assigned) used to connect from Python does not have access to that UDF. This hypothesis could be validated by using INFORMATION_SCHEMA.FUNCTIONS:\n\nThe view only displays objects for which the current role for the session has been granted access privileges.\n\nSELECT *\nFROM CS_QA.INFORMATION_SCHEMA.FUNCTIONS;\n\n\nAnother possibility is that part of the fully-qualified name is case-sensitive and requires wrapping with \"\nSELECT \"CS_QA\".\"CS_ANALYTICS\".DECODE_UTF8('some string');\n\n", "I believe, You might have to first switch to database where the function has defined.\nUSE DATBASE USER_DEF;\nSELECT DECODE_UTF8('some string')\nThat should work.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "snowflake_cloud_data_platform", "user_defined_functions" ]
stackoverflow_0072504545_python_snowflake_cloud_data_platform_user_defined_functions.txt
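A common reason the same statement works in the web console but not from a script is that the connector session starts without the right role, database, or schema. A sketch of pinning them explicitly when connecting; every credential value here is a placeholder.

import snowflake.connector

conn = snowflake.connector.connect(
    user="MY_USER",
    password="***",
    account="my_account",
    role="MY_ROLE",          # the role needs USAGE on the database, schema and function
    warehouse="MY_WH",
    database="CS_QA",
    schema="CS_ANALYTICS",
)

cur = conn.cursor()
cur.execute("SELECT CS_QA.CS_ANALYTICS.DECODE_UTF8('some string')")
print(cur.fetchone())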
Q: How to use ArrayFire batched 2D convolution Reading through ArrayFire documentation, I noticed that the library supports batched operations when using 2D convolution. Therefore, I need to apply N filters to an image using the C++ API. For easy testing, I decided to create a simple Python script to assert the convolution results. However, I couldn't get proper results when using >1 filters and comparing them to OpenCV's 2D convolution separately. Following is my Python script: import arrayfire as af import cv2 import numpy as np np.random.seed(1) np.set_printoptions(precision=3) af.set_backend('cuda') n_kernels = 2 image = np.random.randn(512,512).astype(np.float32) kernels_list = [np.random.randn(7,7).astype(np.float32) for _ in range(n_kernels)] conv_cv_list = [cv2.filter2D(image, -1, cv2.flip(kernel,-1), borderType=cv2.BORDER_CONSTANT) for kernel in kernels_list] image_gpu = af.array.Array(image.ctypes.data, image.shape, image.dtype.char) kernels = np.stack(kernels_list, axis=-1) if n_kernels > 1 else kernels_list[0] kernels_gpu = af.array.Array(kernels.ctypes.data, kernels.shape, kernels.dtype.char) conv_af_gpu = af.convolve2(image_gpu, kernels_gpu) conv_af = conv_af_gpu.to_ndarray() if n_kernels == 1: conv_af = conv_af[..., None] for kernel_idx in range(n_kernels): print("CV conv:", conv_cv_list[kernel_idx][0, 0]) print("AF conv", conv_af[0, 0, kernel_idx]) That said, I would like to know how properly use ArrayFire batched support. A: ArrayFire is column-major, whereas OpenCV and NumPy are row-major. This can cause issues. We provide an interop function to handle this for you, as follows. import arrayfire as af import cv2 import numpy as np np.random.seed(1) np.set_printoptions(precision=3) af.set_backend('cuda') n_kernels = 2 image = np.random.randn(512,512).astype(np.float32) kernels_list = [np.random.randn(7,7).astype(np.float32) for _ in range(n_kernels)] conv_cv_list = [cv2.filter2D(image, -1, cv2.flip(kernel,-1), borderType=cv2.BORDER_CONSTANT) for kernel in kernels_list] image_gpu = af.interop.from_ndarray(image) # CHECK OUT THIS kernels = np.stack(kernels_list, axis=-1) if n_kernels > 1 else kernels_list[0] kernels_gpu = af.interop.from_ndarray(kernels) # CHECK OUT THIS conv_af_gpu = af.convolve2(image_gpu, kernels_gpu) conv_af = conv_af_gpu.to_ndarray() if n_kernels == 1: conv_af = conv_af[..., None] for kernel_idx in range(n_kernels): print("CV conv:", conv_cv_list[kernel_idx][0, 0]) print("AF conv", conv_af[0, 0, kernel_idx]) We are working on a newer version of ArrayFire Python which will address this issue. Good luck!
How to use ArrayFire batched 2D convolution
Reading through ArrayFire documentation, I noticed that the library supports batched operations when using 2D convolution. Therefore, I need to apply N filters to an image using the C++ API. For easy testing, I decided to create a simple Python script to assert the convolution results. However, I couldn't get proper results when using >1 filters and comparing them to OpenCV's 2D convolution separately. Following is my Python script: import arrayfire as af import cv2 import numpy as np np.random.seed(1) np.set_printoptions(precision=3) af.set_backend('cuda') n_kernels = 2 image = np.random.randn(512,512).astype(np.float32) kernels_list = [np.random.randn(7,7).astype(np.float32) for _ in range(n_kernels)] conv_cv_list = [cv2.filter2D(image, -1, cv2.flip(kernel,-1), borderType=cv2.BORDER_CONSTANT) for kernel in kernels_list] image_gpu = af.array.Array(image.ctypes.data, image.shape, image.dtype.char) kernels = np.stack(kernels_list, axis=-1) if n_kernels > 1 else kernels_list[0] kernels_gpu = af.array.Array(kernels.ctypes.data, kernels.shape, kernels.dtype.char) conv_af_gpu = af.convolve2(image_gpu, kernels_gpu) conv_af = conv_af_gpu.to_ndarray() if n_kernels == 1: conv_af = conv_af[..., None] for kernel_idx in range(n_kernels): print("CV conv:", conv_cv_list[kernel_idx][0, 0]) print("AF conv", conv_af[0, 0, kernel_idx]) That said, I would like to know how properly use ArrayFire batched support.
[ "ArrayFire is column-major, whereas OpenCV and NumPy are row-major. This can cause issues. We provide an interop function to handle this for you, as follows.\nimport arrayfire as af\nimport cv2\nimport numpy as np\n \nnp.random.seed(1)\n \nnp.set_printoptions(precision=3)\naf.set_backend('cuda')\n \nn_kernels = 2\n \nimage = np.random.randn(512,512).astype(np.float32)\n \nkernels_list = [np.random.randn(7,7).astype(np.float32) for _ in range(n_kernels)]\n \nconv_cv_list = [cv2.filter2D(image, -1, cv2.flip(kernel,-1), borderType=cv2.BORDER_CONSTANT) for kernel in kernels_list]\n \nimage_gpu = af.interop.from_ndarray(image) # CHECK OUT THIS\n \nkernels = np.stack(kernels_list, axis=-1) if n_kernels > 1 else kernels_list[0]\nkernels_gpu = af.interop.from_ndarray(kernels) # CHECK OUT THIS\n \nconv_af_gpu = af.convolve2(image_gpu, kernels_gpu)\nconv_af = conv_af_gpu.to_ndarray()\n \nif n_kernels == 1:\n conv_af = conv_af[..., None]\n \nfor kernel_idx in range(n_kernels):\n print(\"CV conv:\", conv_cv_list[kernel_idx][0, 0])\n print(\"AF conv\", conv_af[0, 0, kernel_idx])\n\nWe are working on a newer version of ArrayFire Python which will address this issue.\nGood luck!\n" ]
[ 1 ]
[]
[]
[ "arrayfire", "arrays", "gpu", "python" ]
stackoverflow_0074605090_arrayfire_arrays_gpu_python.txt
Q: turtle.textinput() is not working in one of my codes but it works in the other I am currently working on a game in python using the turtle library import. Before I add anything to the main game always test it in another python file to make sure it works. I came across the turtle.textinput() It worked in my test code but not in my actual game. When I tried to put it in the actual game it says AttributeError: type object 'Turtle' has no attribute textinput This makes no sense. I use VS code so I don't understand what is going on please help P.S. : I couldn't figure out the code highlight thing so I put it in brackets. A: textinput() is a method in Screen class A: Note that textinput() is a method in the Screen instance, and not in the turtle. from turtle import Screen screen = Screen() screen.textinput("Title Example", "Prompt example")
turtle.textinput() is not working in one of my codes but it works in the other
I am currently working on a game in Python using the turtle library. Before I add anything to the main game, I always test it in another Python file to make sure it works. I came across turtle.textinput(). It worked in my test code but not in my actual game. When I tried to put it in the actual game it says
AttributeError: type object 'Turtle' has no attribute textinput
This makes no sense. I use VS Code, so I don't understand what is going on; please help.
P.S.: I couldn't figure out the code highlight thing, so I put it in brackets.
[ "textinput() is a method in Screen class\n", "Note that textinput() is a method in the Screen instance, and not in the turtle.\nfrom turtle import Screen\n\nscreen = Screen()\n\nscreen.textinput(\"Title Example\", \"Prompt example\")\n\n" ]
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0067451474_python.txt
Q: A faster solution for Project Euler question 10 I have solved the question 10 regarding sum of all primes under 2 million, however my code takes over a few minutes to calculate the result. I was just wondering if there is any way to optimise it to make it run faster ? The code takes an upper limit Generates an array Iterates through it and removes multiples of a number, replacing it with 0 Takes that filtered array and loops through the next non zero number Increases this number till it is sqrt of the limit. Prints out what is left. import numpy as np def sievePrime(n): array = np.arange(2, n) tempSieve = [2] for value in range(2, int(np.floor(np.sqrt(n)))): if tempSieve[value - 2] != value: continue else: for x in range(len(array)): if array[x] % value == 0 and array[x] != value: array[x] = 0 tempSieve = array return sum(array) print(sievePrime(2000000)) Thank you for your time. A: Thanks for the input. I was able to improve the code by changing how a number is checked for prime. The code finishes under 2 seconds instead of minutes. Instead of counting from beginning to end for array, it counts from the prime number to the end of the array with increments of the prime number. Thanks again for the suggestions. import numpy as np import time start_time = time.time() def sievePrime(n): array = np.arange(2, n) tempsieve = [2] for value in range(2, int(np.floor(np.sqrt(n)))): if tempsieve[value - 2] != value: continue else: for x in range(value, len(range(n)), value): if array[x - 2] % value == 0 and array[x - 2] != value: array[x - 2] = 0 tempsieve = array else: continue return sum(array) print(sievePrime(2000000)) print("--- %s seconds ---" % (time.time() - start_time))
A faster solution for Project Euler question 10
I have solved the question 10 regarding sum of all primes under 2 million, however my code takes over a few minutes to calculate the result. I was just wondering if there is any way to optimise it to make it run faster ? The code takes an upper limit Generates an array Iterates through it and removes multiples of a number, replacing it with 0 Takes that filtered array and loops through the next non zero number Increases this number till it is sqrt of the limit. Prints out what is left. import numpy as np def sievePrime(n): array = np.arange(2, n) tempSieve = [2] for value in range(2, int(np.floor(np.sqrt(n)))): if tempSieve[value - 2] != value: continue else: for x in range(len(array)): if array[x] % value == 0 and array[x] != value: array[x] = 0 tempSieve = array return sum(array) print(sievePrime(2000000)) Thank you for your time.
[ "Thanks for the input. I was able to improve the code by changing how a number is checked for prime.\nThe code finishes under 2 seconds instead of minutes.\nInstead of counting from beginning to end for array, it counts from the prime number to the end of the array with increments of the prime number.\nThanks again for the suggestions.\nimport numpy as np\nimport time\n\nstart_time = time.time()\n\n\ndef sievePrime(n):\n array = np.arange(2, n)\n\n tempsieve = [2]\n\n for value in range(2, int(np.floor(np.sqrt(n)))):\n if tempsieve[value - 2] != value:\n continue\n else:\n for x in range(value, len(range(n)), value):\n if array[x - 2] % value == 0 and array[x - 2] != value:\n array[x - 2] = 0\n tempsieve = array\n else:\n continue\n\n return sum(array)\n\n\nprint(sievePrime(2000000))\nprint(\"--- %s seconds ---\" % (time.time() - start_time))\n\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074603473_python_python_3.x.txt
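For comparison, the same sieve idea written with numpy slice assignment removes the remaining Python-level inner loop and typically finishes in a small fraction of a second; a sketch:

import numpy as np

def sum_primes_below(n):
    is_prime = np.ones(n, dtype=bool)
    is_prime[:2] = False                           # 0 and 1 are not prime
    for value in range(2, int(n ** 0.5) + 1):
        if is_prime[value]:
            is_prime[value * value::value] = False # cross off all multiples in one slice
    return int(np.flatnonzero(is_prime).sum())

print(sum_primes_below(2_000_000))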
Q: Argument of type "datetime" cannot be assigned to parameter "value" of type "str" I am currently working on a Python and SQL project. There I am building a GUI that takes information from user input and stores them in a MySQL database locally. There are a few warnings/errors that I am trying to resolve. This is my code Annotated the error-raising lines by comments. elif (x == "Learn Python The Hard Way"): self.book_id_var.set("Book Id: 2") self.book_title_var.set("Learn Python The Hard Way") self.book_author_var.set("Zde A. Sham") d1 = datetime.datetime.today() d2 = datetime.timedelta(days = 15) d3 = d1 + d2 self.date_borrowed_var.set(d1) # Argument of type "datetime" cannot be assigned to parameter "value" of type "str" in function "set", "datetime" is incompatible with "str" self.date_due_var.set(d3) # Argument of type "datetime" cannot be assigned to parameter "value" of type "str" in function "set", "datetime" is incompatible with "str" self.days_on_book_var.set("15") self.late_return_fine_var.set("Rs.25") self.date_over_due_var.set("NO") self.final_price_var.set("Rs.725") In line 10 and 13, d1 and d3 are throwing the error commented. Unfortunately, I am not able to find solution to it. Can I ignore "datetime" is incompatible with "str", and if not, what would be a workaround? A: In order to cast your datetime object to string you can call the .strftime method builtin to the datetime package e.g. use: d3.strftime("%d.%m.%y") instead of just d3 same goes for d1 - ("%d.%m.%y") represents the format in which your datetime is being represented. Results would look like this: self.date_borrowed_var.set(d1.strftime("%d.%m.%y")) self.date_due_var.set(d3.strftime("%d.%m.%y"))
Argument of type "datetime" cannot be assigned to parameter "value" of type "str"
I am currently working on a Python and SQL project. There I am building a GUI that takes information from user input and stores them in a MySQL database locally. There are a few warnings/errors that I am trying to resolve. This is my code Annotated the error-raising lines by comments. elif (x == "Learn Python The Hard Way"): self.book_id_var.set("Book Id: 2") self.book_title_var.set("Learn Python The Hard Way") self.book_author_var.set("Zde A. Sham") d1 = datetime.datetime.today() d2 = datetime.timedelta(days = 15) d3 = d1 + d2 self.date_borrowed_var.set(d1) # Argument of type "datetime" cannot be assigned to parameter "value" of type "str" in function "set", "datetime" is incompatible with "str" self.date_due_var.set(d3) # Argument of type "datetime" cannot be assigned to parameter "value" of type "str" in function "set", "datetime" is incompatible with "str" self.days_on_book_var.set("15") self.late_return_fine_var.set("Rs.25") self.date_over_due_var.set("NO") self.final_price_var.set("Rs.725") In line 10 and 13, d1 and d3 are throwing the error commented. Unfortunately, I am not able to find solution to it. Can I ignore "datetime" is incompatible with "str", and if not, what would be a workaround?
[ "In order to cast your datetime object to string you can call the .strftime method builtin to the datetime package\ne.g. use:\nd3.strftime(\"%d.%m.%y\")\n\ninstead of just\nd3\n\nsame goes for d1 - (\"%d.%m.%y\") represents the format in which your datetime is being represented.\nResults would look like this:\nself.date_borrowed_var.set(d1.strftime(\"%d.%m.%y\"))\nself.date_due_var.set(d3.strftime(\"%d.%m.%y\"))\n\n" ]
[ 0 ]
[]
[]
[ "datetime", "mysql", "python" ]
stackoverflow_0074605853_datetime_mysql_python.txt
Q: Store password in Ansbile Vault and retrieve that key from Python script using API I have a requirement where I should not store any passwords in the script files in plain text. So I have created an Ansible vault file called "vault.yml" which contains username and password. Is there some kind of API that I can use to look up this value from python script called for example "test.py"? What I would like in test.py is something like this: username = ansible_api_get(key=username) password = ansible_api_get(key=password) P.S. - I don't have to use Ansible Vault, but that is preferred option as we would like to use all sensitive info with Vault and we want to integrate our scripts as much as possible. A: Yes, ansible-vault is the Python library that you can use for this purpose. vault.py #!/usr/bin/env python3 ''' get secrets from ansible-vault file with gpg-encrypted password ''' import os import sys from subprocess import check_output import yaml from ansible_vault import Vault vault_file = sys.argv[1] if os.path.exists(vault_file): get_vault_password = os.environ['ANSIBLE_VAULT_PASSWORD_FILE'] if os.path.exists(get_vault_password): PASSWORD = check_output(get_vault_password).strip().decode("utf-8") secrets = yaml.safe_load(Vault(PASSWORD).load_raw(open(vault_file, encoding='utf-8').read())) print(secrets['username']) print(secrets['password']) else: raise FileNotFoundError As @nwinkler wisely says, you'll still have to have the password for the Ansible vault. As a developer you're probably familiar with signing your commits in Git, the good news is that you can use the same GPG keys to encrypt/decrypt the file that stores the password, and this can work transparently. If the environment variable ANSIBLE_VAULT_PASSWORD_FILE points to an executable, then Ansible will run that executable to get to the password required to decrypt a vault file. If it's not executable, then someone can store plain-text secrets there. The executable in this example needs just one line of shell to decrypt a file named vault_pw.gpg. Create this file with the vault-password on one line and encrypt it with your GPG key, remove the plain-text file. ~/.bash_profile I setup my shell to do this, and also launch the gpg-agent (for caching). export ANSIBLE_VAULT_PASSWORD_FILE=~/.ansible_vault_password_exe gpg-agent --daemon --write-env-file "${HOME}/.gpg-agent-info"' export GPG_TTY=$(tty) ~/.bashrc This will ensure that only one gpg-agent is running: if [ -f "${HOME}/.gpg-agent-info" ] then . "${HOME}/.gpg-agent-info" fi ~/.ansible_vault_password_exe #!/bin/sh exec gpg -q -d ${HOME}/vault_pw.gpg
Store password in Ansible Vault and retrieve that key from a Python script using an API
I have a requirement where I should not store any passwords in the script files in plain text. So I have created an Ansible vault file called "vault.yml" which contains a username and password. Is there some kind of API that I can use to look up these values from a Python script called, for example, "test.py"? What I would like in test.py is something like this:
username = ansible_api_get(key=username)
password = ansible_api_get(key=password)

P.S. - I don't have to use Ansible Vault, but it is the preferred option, as we would like to keep all sensitive info in Vault and we want to integrate our scripts as much as possible.
[ "Yes, ansible-vault is the Python library that you can use for this purpose.\nvault.py\n#!/usr/bin/env python3\n''' get secrets from ansible-vault file with gpg-encrypted password '''\nimport os\nimport sys\nfrom subprocess import check_output\nimport yaml\nfrom ansible_vault import Vault\n\nvault_file = sys.argv[1]\nif os.path.exists(vault_file):\n get_vault_password = os.environ['ANSIBLE_VAULT_PASSWORD_FILE']\n if os.path.exists(get_vault_password):\n PASSWORD = check_output(get_vault_password).strip().decode(\"utf-8\")\n secrets = yaml.safe_load(Vault(PASSWORD).load_raw(open(vault_file, encoding='utf-8').read()))\n print(secrets['username'])\n print(secrets['password'])\nelse:\n raise FileNotFoundError\n\nAs @nwinkler wisely says, you'll still have to have the password for the Ansible vault. As a developer you're probably familiar with signing your commits in Git, the good news is that you can use the same GPG keys to encrypt/decrypt the file that stores the password, and this can work transparently.\nIf the environment variable ANSIBLE_VAULT_PASSWORD_FILE points to an executable, then Ansible will run that executable to get to the password required to decrypt a vault file. If it's not executable, then someone can store plain-text secrets there. The executable in this example needs just one line of shell to decrypt a file named vault_pw.gpg. Create this file with the vault-password on one line and encrypt it with your GPG key, remove the plain-text file.\n~/.bash_profile\nI setup my shell to do this, and also launch the gpg-agent (for caching).\nexport ANSIBLE_VAULT_PASSWORD_FILE=~/.ansible_vault_password_exe\ngpg-agent --daemon --write-env-file \"${HOME}/.gpg-agent-info\"'\nexport GPG_TTY=$(tty)\n\n~/.bashrc\nThis will ensure that only one gpg-agent is running:\nif [ -f \"${HOME}/.gpg-agent-info\" ]\nthen\n . \"${HOME}/.gpg-agent-info\"\nfi\n\n~/.ansible_vault_password_exe\n#!/bin/sh\nexec gpg -q -d ${HOME}/vault_pw.gpg\n\n" ]
[ 0 ]
[]
[]
[ "ansible", "ansible_vault", "python", "python_2.7" ]
stackoverflow_0053601810_ansible_ansible_vault_python_python_2.7.txt
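For readers who want the vault lookup without the GPG layer, here is a condensed sketch of the same approach; the file names vault.yml and vault_password.txt are assumptions, and it uses the third-party ansible-vault package exactly as the answer above does:
import yaml
from ansible_vault import Vault  # third-party package: pip install ansible-vault

# Assumed file names -- adjust to your layout; keep the password file out of version control.
with open("vault_password.txt", encoding="utf-8") as pw_file:
    vault_password = pw_file.read().strip()

# Decrypt the vault and parse its YAML contents into a dict of secrets.
with open("vault.yml", encoding="utf-8") as vault_file:
    secrets = yaml.safe_load(Vault(vault_password).load_raw(vault_file.read()))

username = secrets["username"]
password = secrets["password"]
print(username)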
Q: Finding when a value in a pandas Series crosses multiple threshold values from another Series I have two Pandas Series, say sensor_values and thresholds. The thresholds are defined in increasing order. How do I identify at which instances the values in the sensor_values Series cross any of the thresholds defined in the other Series, thresholds?
What I'm trying to do is basically what has been done in this answer, but here the threshold (called line in that answer) would itself be a Series of multiple values. So I want to check not against one single threshold value but against multiple of them, since the data is time-varying, and for my use case different thresholds may apply depending on the range the values in sensor_values fall in.
I tried to come up with this, but it's not detecting all of the crossings correctly; only some are detected.
threshold_cross = (((sensor_values.rta >= thresholds.any()) & (sensor_values.next_rta < thresholds.any())) |
                   ((sensor_values.next_rta > thresholds.any()) & (sensor_values.rta <= thresholds.any())) |
                   (sensor_values.rta == thresholds.any()))

The sensor_values Series may be increasing or decreasing in value. The thresholds above only apply to the increasing case. For decreasing values a different set of thresholds would apply, but that is something I should be able to do myself once I figure it out for one set of threshold values. So for now, one can assume that both sensor_values and thresholds are monotonically increasing.
Edit: sensor_values is a DataFrame containing two fields, rta and next_rta, with the latter being the next instance of rta (time-shifted by 1 position, as done in the linked answer).

A: I'm thinking you could pd.cut the sensor values into intervals between thresholds and then compare consecutive intervals:
import numpy as np
import pandas as pd

sensors = pd.Series([0.1, 1.3, 2.1, 1.7, 2.6, 3.8, 4.1, 5.1, 4.4, 3.2, 1.6, 7.2])
thresholds = pd.Series([2, 5, 7])

bins = pd.concat([pd.Series(-np.inf), thresholds, pd.Series(np.inf)])
binned = pd.cut(sensors, bins)

crossings = binned != binned.shift()
crossings[0] = False  # Don't count the first element

Output:
>>> crossings.tolist()
[False, False, True, True, True, False, False, True, True, False, True, True]

Important Note:
The above becomes problematic if your sensors take the exact value of one of the thresholds (i.e., given that the bins are treated right-closed, depending on the previous interval it may get counted as a crossing or not). In that case, you'll have to write additional code to handle that.
Finding when a value in a pandas Series crosses multiple threshold values from another Series
I have two Pandas Series, say sensor_values and thresholds. The thresholds are defined in increasing order. How do I identify at which instances the values in the sensor_values Series cross any of the thresholds defined in the other Series, thresholds?
What I'm trying to do is basically what has been done in this answer, but here the threshold (called line in that answer) would itself be a Series of multiple values. So I want to check not against one single threshold value but against multiple of them, since the data is time-varying, and for my use case different thresholds may apply depending on the range the values in sensor_values fall in.
I tried to come up with this, but it's not detecting all of the crossings correctly; only some are detected.
threshold_cross = (((sensor_values.rta >= thresholds.any()) & (sensor_values.next_rta < thresholds.any())) |
                   ((sensor_values.next_rta > thresholds.any()) & (sensor_values.rta <= thresholds.any())) |
                   (sensor_values.rta == thresholds.any()))

The sensor_values Series may be increasing or decreasing in value. The thresholds above only apply to the increasing case. For decreasing values a different set of thresholds would apply, but that is something I should be able to do myself once I figure it out for one set of threshold values. So for now, one can assume that both sensor_values and thresholds are monotonically increasing.
Edit: sensor_values is a DataFrame containing two fields, rta and next_rta, with the latter being the next instance of rta (time-shifted by 1 position, as done in the linked answer).
[ "I'm thinking you could pd.cut the sensors values into intervals between thresholds and then compare those thresholds:\nimport numpy as np\nimport pandas as pd\n\nsensors = pd.Series([0.1, 1.3, 2.1, 1.7, 2.6, 3.8, 4.1, 5.1, 4.4, 3.2, 1.6, 7.2])\nthresholds = pd.Series([2, 5, 7])\n\nbins = pd.concat([pd.Series(-np.inf), thresholds, pd.Series(np.inf)])\nbinned = pd.cut(sensors, bins)\n\ncrossings = binned != binned.shift()\ncrossings[0] = False # Don't count the first element\n\nOutput:\n>>> crossings.tolist()\n[False, False, True, True, True, False, False, True, True, False, True, True]\n\nImportant Note:\nThe above becomes problematic if your sensors take the exact value of one of the thresholds (i.e., given that the bins are treated right-closed, depending on the previous interval it may get counted as crossing or not). In that case, you'll have to write additional code to handle that.\n" ]
[ 1 ]
[]
[]
[ "numpy", "pandas", "python", "python_3.x", "time_series" ]
stackoverflow_0074605380_numpy_pandas_python_python_3.x_time_series.txt
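To connect the answer back to the question's rta/next_rta layout, here is a small sketch; the sample readings and thresholds are made up, and only the rta column is needed because the shift plays the role of next_rta:
import numpy as np
import pandas as pd

# Illustrative data; the column name 'rta' comes from the question.
sensor_values = pd.DataFrame({"rta": [0.1, 1.3, 2.1, 1.7, 2.6, 5.1, 4.4, 7.2]})
thresholds = pd.Series([2.0, 5.0, 7.0])

# Bin every reading into the interval between consecutive thresholds,
# then flag rows whose bin differs from the previous row's bin.
bins = np.concatenate(([-np.inf], thresholds.to_numpy(), [np.inf]))
binned = pd.cut(sensor_values["rta"], bins)
threshold_cross = binned != binned.shift()
threshold_cross.iloc[0] = False  # the first reading cannot be a crossing

print(sensor_values.assign(crossed=threshold_cross))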
Q: Why is pip install missing source of my package? I have a private package which I have uploaded to my private devpi server. When I use pip to install it, only the egg folder is installed. The source is missing and hence I am unable to use any code or libraries in my package.
My setup.py:
from setuptools import setup, find_packages

setup(
    name='my-package',
    version=1.0,
    packages=find_packages(),
    install_requires=[
        'requests>=2.21.0',
    ]
)

I am using a venv within PyCharm to do all of this. Why is this happening? How can I force pip to download and install the source distribution?
[EDIT]
When I download the tarball from my devpi server UI, it does NOT contain the source. This means that when I upload the package with devpi upload, it is not uploading the sdist. I could not find anything on how to force devpi to upload an sdist.
Here is the build log:
running sdist
running egg_info
writing ****.egg-info/PKG-INFO
writing dependency_links to ****.egg-info/dependency_links.txt
writing requirements to ****.egg-info/requires.txt
writing top-level names to ****.egg-info/top_level.txt
reading manifest file '****.egg-info/SOURCES.txt'
writing manifest file '****.egg-info/SOURCES.txt'
running check
warning: Check: missing required meta-data: url
warning: Check: missing meta-data: either (author and author_email) or (maintainer and maintainer_email) must be supplied
creating ...
creating ***.egg-info
creating ***-1.0/client
creating ***-1.0/client/model
copying files to ***-1.0...
copying README.md -> ****-1.0
copying setup.py -> ****-1.0
copying ****.egg-info/PKG-INFO -> ****-1.0/****.egg-info
copying ****.egg-info/SOURCES.txt -> ****-1.0/****.egg-info
copying ****.egg-info/dependency_links.txt -> ****-1.0/****.egg-info
copying ****.egg-info/requires.txt -> ****-1.0/****.egg-info
copying ****.egg-info/top_level.txt -> ****-1.0/****.egg-info
copying the actual source here
Writing ****-1.0/setup.cfg
Creating tar archive
removing '****-1.0' (and everything under it)

A: This fixed it:
devpi upload dist/pkg-ver.tar.gz
Basically, run it from the root of my project.
Why is pip install missing source of my package?
I have a private package which I have uploaded to my private devpi server. When I use pip to install it, only the egg folder is installed. The source is missing and hence I am unable to use any code or libraries in my package.
My setup.py:
from setuptools import setup, find_packages

setup(
    name='my-package',
    version=1.0,
    packages=find_packages(),
    install_requires=[
        'requests>=2.21.0',
    ]
)

I am using a venv within PyCharm to do all of this. Why is this happening? How can I force pip to download and install the source distribution?
[EDIT]
When I download the tarball from my devpi server UI, it does NOT contain the source. This means that when I upload the package with devpi upload, it is not uploading the sdist. I could not find anything on how to force devpi to upload an sdist.
Here is the build log:
running sdist
running egg_info
writing ****.egg-info/PKG-INFO
writing dependency_links to ****.egg-info/dependency_links.txt
writing requirements to ****.egg-info/requires.txt
writing top-level names to ****.egg-info/top_level.txt
reading manifest file '****.egg-info/SOURCES.txt'
writing manifest file '****.egg-info/SOURCES.txt'
running check
warning: Check: missing required meta-data: url
warning: Check: missing meta-data: either (author and author_email) or (maintainer and maintainer_email) must be supplied
creating ...
creating ***.egg-info
creating ***-1.0/client
creating ***-1.0/client/model
copying files to ***-1.0...
copying README.md -> ****-1.0
copying setup.py -> ****-1.0
copying ****.egg-info/PKG-INFO -> ****-1.0/****.egg-info
copying ****.egg-info/SOURCES.txt -> ****-1.0/****.egg-info
copying ****.egg-info/dependency_links.txt -> ****-1.0/****.egg-info
copying ****.egg-info/requires.txt -> ****-1.0/****.egg-info
copying ****.egg-info/top_level.txt -> ****-1.0/****.egg-info
copying the actual source here
Writing ****-1.0/setup.cfg
Creating tar archive
removing '****-1.0' (and everything under it)
[ "This fixed it:\ndevpi upload dist/pkg-ver.tar.gz\nBasically, run it from root of my project.\n" ]
[ -1 ]
[]
[]
[ "devpi", "pip", "pypi", "python", "setuptools" ]
stackoverflow_0055525800_devpi_pip_pypi_python_setuptools.txt
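A small sketch of the full workflow implied by the answer, run from the project root; the archive name is derived from the name and version in the question's setup.py and is an assumption about what the sdist step actually produces on your setuptools version:
import subprocess
import sys

# Build the source distribution first (expected to create dist/my-package-1.0.tar.gz
# for the setup.py shown in the question -- adjust the name/version as needed).
subprocess.run([sys.executable, "setup.py", "sdist"], check=True)

# Then upload that exact sdist to the devpi index, as the answer suggests.
subprocess.run(["devpi", "upload", "dist/my-package-1.0.tar.gz"], check=True)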