Columns: Unnamed: 0 (int64, 0 to 1.91M), id (int64, 337 to 73.8M), title (string, 10 to 150 chars), question (string, 21 to 64.2k chars), answer (string, 19 to 59.4k chars), tags (string, 5 to 112 chars), score (int64, -10 to 17.3k).
1,905,800
69,860,825
What is the working combination of the s3fs and fsspec version? ImportError: cannot import name 'maybe_sync' from 'fsspec.asyn'
<p>I am using the latest versions, s3fs 0.5.2 and fsspec 0.9.0; when importing s3fs, I encountered the following error:</p> <pre><code>File &quot;/User/.conda/envs/py376/lib/python3.7/site-packages/s3fs/__init__.py&quot;, line 1, in &lt;module&gt; from .core import S3FileSystem, S3File File &quot;/User/.conda/envs/py376/lib/python3.7/site-packages/s3fs/core.py&quot;, line 11, in &lt;module&gt; from fsspec.asyn import AsyncFileSystem, sync, sync_wrapper, maybe_sync ImportError: cannot import name 'maybe_sync' from 'fsspec.asyn' (/User/.conda/envs/py376/lib/python3.7/site-packages/fsspec/asyn.py) </code></pre> <p>What is a working version combination of s3fs and fsspec?</p>
<p>The latest version of s3fs and fsspec as of today is 2021.11.0. The latest version on conda-forge is 2021.10.1. Since the change to calendar versioning this year, the two are always released together with the dependency pinned, so this kind of problem won't occur in the future.</p> <p>I believe for fsspec 0.9.0, you need s3fs 0.6.0.</p>
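To check which versions are actually installed in an environment (and therefore whether they form one of the matched pairs described above), a small standard-library sketch like this can help; it only assumes Python 3.8+ for `importlib.metadata`:

```python
# Report the installed versions of s3fs and fsspec, if present.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("s3fs", "fsspec"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "is not installed")
```

If the pair turns out to be mismatched, reinstalling both together at matching versions (e.g. `pip install "s3fs==0.6.0" "fsspec==0.9.0"`, the pairing suggested in the answer) restores a pinned combination.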
amazon-s3|python-s3fs|fsspec
0
1,905,801
66,620,255
Extract information to work with pandas
<p>I have this dataframe:</p> <pre><code> # Column Non-Null Count Dtype 0 nombre 74 non-null object 1 fabricante 74 non-null object 2 calorias 74 non-null int64 3 proteina 74 non-null int64 4 grasa 74 non-null int64 5 sodio 74 non-null int64 6 fibra dietaria 74 non-null float64 7 carbohidratos 74 non-null float64 8 azúcar 74 non-null int64 9 potasio 74 non-null int64 10 vitaminas y minerales 74 non-null int64 </code></pre> <p>I am trying to extract information like this:</p> <pre><code>cereal_df.loc[cereal_df['fabricante'] == 'Kelloggs', 'sodio'] </code></pre> <p>The output is (good, that is what I want to extract in this case, right?):</p> <pre><code>2 260 3 140 6 125 16 290 17 90 19 140 21 220 24 125 25 200 26 0 27 240 37 170 38 170 43 150 45 190 46 220 47 170 50 320 55 210 57 0 59 290 63 70 64 230 Name: sodio, dtype: int64 </code></pre> <p>That is what I need so far, but when I try to write a function like this (in order to get the confidence interval):</p> <pre><code>def valor_medio_intervalo(fabricante, variable, confianza): subconjunto = cereal_df.loc[cereal_df['fabricante'] == fabricante, cereal_df[variable]] inicio, final = sm.stats.DescrStatsW(subconjunto[variable]).zconfint_mean(alpha = 1 - confianza) return inicio, final </code></pre> <p>Then I run the function:</p> <pre><code>valor_medio_intervalo('Kelloggs', 'azúcar', 0.95) </code></pre> <p>And the output is:</p> <pre><code> KeyError Traceback (most recent call last) &lt;ipython-input-57-11420ac4d15f&gt; in &lt;module&gt;() 1 #TEST_CELL ----&gt; 2 valor_medio_intervalo('Kelloggs', 'azúcar', 0.95) 7 frames /usr/local/lib/python3.7/dist-packages/pandas/core/indexing.py in _validate_read_indexer(self, key, indexer, axis, raise_missing) 1296 if missing == len(indexer): 1297 axis_name = self.obj._get_axis_name(axis) -&gt; 1298 raise KeyError(f&quot;None of [{key}] are in the [{axis_name}]&quot;) 1299 1300 # We (temporarily) allow for some missing keys with .loc, except in KeyError: &quot;None of [Int64Index([ 6, 8, 5, 0, 8, 10, 14, 8, 6, 5, 12, 1, 9, 7, 13, 3, 2,\n 12, 13, 7, 0, 3, 10, 5, 13, 11, 7, 12, 12, 15, 9, 5, 3, 4,\n 11, 10, 11, 6, 9, 3, 6, 12, 3, 13, 6, 9, 7, 2, 10, 14, 3,\n 0, 0, 6, -1, 12, 8, 6, 2, 3, 0, 0, 0, 15, 3, 5, 3, 14,\n 3, 3, 12, 3, 3, 8],\n dtype='int64')] are in the [columns]&quot; </code></pre> <p>I do not understand what is going on. I appreciate your help or any hint. Thanks in advance.</p>
<p>Just got the answer by examining the code. In the original function</p> <pre><code>def valor_medio_intervalo(fabricante, variable, confianza): subconjunto = cereal_df.loc[cereal_df['fabricante'] == fabricante, cereal_df[variable]] inicio, final = sm.stats.DescrStatsW(subconjunto[variable]).zconfint_mean(alpha = 1 - confianza) return inicio, final </code></pre> <p>there are two problems: <code>.loc</code> should receive the column <em>name</em>, not <code>cereal_df[variable]</code> (a Series of values, which pandas then tries to match against the column labels, hence the <code>KeyError</code>), and <code>subconjunto</code> is already a Series, so <code>subconjunto[variable]</code> must be just <code>subconjunto</code>:</p> <pre><code>def valor_medio_intervalo(fabricante, variable, confianza): subconjunto = cereal_df.loc[cereal_df['fabricante'] == fabricante, variable] inicio, final = sm.stats.DescrStatsW(subconjunto).zconfint_mean(alpha = 1 - confianza) return inicio, final </code></pre>
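The difference is easy to see on a tiny, self-contained stand-in for the cereal data (the values below are made up for illustration): passing the column name as the second argument to `.loc` returns the matching Series directly.

```python
import pandas as pd

# Toy stand-in for the cereal dataset (hypothetical values)
cereal_df = pd.DataFrame({
    'fabricante': ['Kelloggs', 'Nestle', 'Kelloggs'],
    'sodio': [260, 140, 125],
})

# Column *name* as the second argument to .loc -> a plain Series
subconjunto = cereal_df.loc[cereal_df['fabricante'] == 'Kelloggs', 'sodio']
print(list(subconjunto))  # [260, 125]
```

Passing `cereal_df['sodio']` there instead would hand `.loc` a Series of sodium values to look up as column labels, which is exactly what triggered the `KeyError` in the question.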
pandas|dataframe|.loc
1
1,905,802
64,843,043
Extract info from JSON when key changes in Python
<p>I'm quite new to Python so please excuse me if my terminology is not 100% accurate or if this is quite simple</p> <p>I currently have the code:</p> <pre><code> import requests POINTS = requests.get('https://api.torn.com/market/?selections=pointsmarket&amp;key=jtauOcpEwz0W0V5M').json() POINTSA = POINTS['pointsmarket'] print (POINTSA) </code></pre> <p>I want to print the cost of the very first key. However, the number for the first key will always change.</p> <p>At the time of posting the first key is 9765126 as seen here:</p> <p><a href="https://imgur.com/VRi8Owe" rel="nofollow noreferrer">https://imgur.com/VRi8Owe</a></p> <p>So the line would be (I think):</p> <pre><code> POINTSA = POINTS['pointsmarket'][9765126]['cost'] </code></pre> <p>However in 5 minutes time, the 9765126 key will change to something else. How do I get it to print the 1st cost entry regardless of the first key?</p>
<p>You can get the keys as a list and then take the first one (in Python 3.7+, dicts preserve insertion order, so this is the first key of the JSON object):</p> <pre class="lang-py prettyprint-override"><code>keys = list(POINTS['pointsmarket'].keys()) # Get all keys and convert to a list POINTS['pointsmarket'][keys[0]]['cost'] </code></pre>
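Since the dict preserves insertion order, the first entry can also be taken without building the whole key list — a sketch with a made-up response shape (the keys and costs below are hypothetical, standing in for `POINTS['pointsmarket']`):

```python
# Hypothetical stand-in for POINTS['pointsmarket'] (keys change between calls)
pointsmarket = {
    "9765126": {"cost": 45000, "quantity": 25},
    "9765127": {"cost": 45100, "quantity": 10},
}

first_entry = next(iter(pointsmarket.values()))  # first value, whatever its key is
print(first_entry["cost"])  # 45000
```

`next(iter(...))` avoids copying every key into a list just to read the first one, which matters when the market listing is large.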
python|json|python-3.x
1
1,905,803
71,830,887
Show aggregated data in admin changelist_view
<p>I have an app where I am tracking user searches. The table currently looks like this:</p> <p><a href="https://i.stack.imgur.com/wPvTf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wPvTf.png" alt="enter image description here" /></a></p> <p>What I want to change is the view. I'd like to group/aggregate/annotate the view based on the <code>Search Term</code> column and keep the latest date of each search term. The <code>User searches</code> column is updated when a user enters the same term in the search bar, so there is no need for me to keep all records. I just need the latest one.</p> <p>I tried to update the queryset using <code>distinct()</code> and <code>annotate()</code> in the admin model, but with no success.</p> <p>In other words, I want to have 1 unique term in the table with the latest date (<code>Searched on</code> column).</p> <p>Below is my code:</p> <p>models.py</p> <pre><code>class SearchQueryTracker(models.Model): id = models.AutoField(primary_key=True) author = models.ForeignKey(User, blank=True, null=True, on_delete=models.SET_NULL, unique=False) total_results = models.IntegerField('Search results', default=0) total_searches = models.IntegerField('User searches', default=0) search_term = models.CharField(max_length=1000, blank=True, null=True) searched_on = models.DateTimeField('searched on', default=timezone.now) </code></pre> <p>views.py</p> <pre><code>@login_required def SearchViewFBV(request): query = request.GET.get('q') query_list = request.GET.get('q', None).split() query_list_count = len(query_list) user = User.objects.get(email=request.user.email) find_duplicate_results = SearchQueryTracker.objects.filter(search_term=query).update(total_searches=F('total_searches') + 1) .... 
search_tracker = SearchQueryTracker() search_tracker.total_searches = find_duplicate_results search_tracker.author = user search_tracker.search_term = query search_tracker.save() </code></pre> <p>admin.py</p> <pre><code>class SearchQueryTrackerAdmin(ImportExportModelAdmin): list_display = ('search_term', 'author', 'total_results', 'total_searches', 'searched_on') readonly_fields = list_display list_display_links = ('search_term', ) search_fields = ('search_term', ) def queryset(self, request): qs = super(SearchQueryTrackerAdmin, self).queryset(request) qs = qs.order_by('-searched_on',).distinct('search_term') # not working # qs = qs.annotate(Count('searched_on')) # not working return qs admin.site.register(SearchQueryTracker, SearchQueryTrackerAdmin) </code></pre> <p>Any help or advice will be appreciated.</p>
<p>Instead of creating a new <code>SearchQueryTracker</code> each time on <code>SearchViewFBV</code>, you can use the <code>get_or_create</code> method to retrieve the <code>SearchQueryTracker</code> if it exists, or create it otherwise.</p> <pre class="lang-py prettyprint-override"><code>@login_required def SearchViewFBV(request): query = request.GET.get('q') query_list = request.GET.get('q', None).split() query_list_count = len(query_list) user = User.objects.get(email=request.user.email) search_tracker, created = SearchQueryTracker.objects.get_or_create( search_term=query, author=user, defaults={'total_searches': 1} # If it is created, assign the total_searches so you don't have to save afterwards ) if not created: search_tracker.total_searches += 1 search_tracker.save() </code></pre> <p>To avoid <code>MultipleObjectsReturned</code> exception you will need to clean your database of all duplicates first.</p>
python|django|django-admin
1
1,905,804
72,133,601
How can I make the list update itself?
<p>In every &quot;for&quot; loop, two items (user and likes) are taken from the &quot;Data&quot; list and are added to a separate list (scoreboard). I am trying to detect if &quot;user&quot; already exists in the scoreboard list. If it does exist, then I try to take the item that comes after &quot;user&quot; in the scoreboard list (which would be the previous &quot;likes&quot; item) and add the new &quot;likes&quot; item to it, after that I try to set the new &quot;likes&quot; item to the new value. but i couldn't make it work.</p> <pre><code> def likecount(): scoreboard = [] for t in range(0,len(Data),3): user = Data[t] likes = Data[t + 2] if user not in scoreboard: scoreboard.append(user), scoreboard.append(likes) else: scoreboard[(scoreboard.index(user) + 1)] + likes == scoreboard[(scoreboard.index(user) + 1)] for i in range(0,len(scoreboard),2): print (scoreboard[i],scoreboard[i+1], end= &quot;\n&quot;) </code></pre> <p>#Data list:</p> <pre><code>['Rhett_andEmma77', 'Los Angeles is soooo hot', '0', 'Tato', 'Glitch', '1', 'Aurio', 'April fools tomorrow', '4', 'nap', 'The bully**', '3', 'NICKSION', 'Oops', '6', 'stupidfuck', 'hes a little guy', '5', 'database', 'Ty', '1', 'stupidfuck', 'possible object show (objects needed)', '3', 'NightSkyMusic', 'nicotine takes 10 seconds to reach your brain', '27', 'stupidfuck', '@BFBleafyfan99 be anii e', '4', 'Odminey', 'Aliveness', '26', 'stupidfuck', '@techness2011 the', '5', 'techness2011', 'Boomerang', '1', 'saltyipaint', 'April is r slur month', '5', 'HENRYFOLIO', 'flip and dunk', '4', 'SpenceAnimation', 'Got Any grapes ', '2', 'RainyFox2', 'Draw me in your style****', '1', 'hecksacontagon', 'funky cat guy impresses you with his fish corpse', '11', 'HENRYFOLIO', 'flip and dunk @bruhkeko version', '4', 'nairb', 'Spoderman turns green', '5', 'SpenceAnimation', 'Jellybean', '1', 'SpenceAnimation', '@FussiArt', '3'] </code></pre>
<p>A dictionary is precisely what you are looking for here, instead of keeping a flat list of items. Whenever a user is considered, check whether they already exist:</p> <ol> <li>If they exist, add the new likes to the old like count.</li> <li>If they don't, create an entry with the like count.</li> </ol> <p>That is the natural solution.</p> <pre><code>data = ['Rhett_andEmma77', 'Los Angeles is soooo hot', '0', 'Tato', 'Glitch', '1', 'Aurio', 'April fools tomorrow', '4', 'nap', 'The bully**', '3', 'NICKSION', 'Oops', '6', 'stupidfuck', 'hes a little guy', '5', 'database', 'Ty', '1', 'stupidfuck', 'possible object show (objects needed)', '3', 'NightSkyMusic', 'nicotine takes 10 seconds to reach your brain', '27', 'stupidfuck', '@BFBleafyfan99 be anii e', '4', 'Odminey', 'Aliveness', '26', 'stupidfuck', '@techness2011 the', '5', 'techness2011', 'Boomerang', '1', 'saltyipaint', 'April is r slur month', '5', 'HENRYFOLIO', 'flip and dunk', '4', 'SpenceAnimation', 'Got Any grapes ', '2', 'RainyFox2', 'Draw me in your style****', '1', 'hecksacontagon', 'funky cat guy impresses you with his fish corpse', '11', 'HENRYFOLIO', 'flip and dunk @bruhkeko version', '4', 'nairb', 'Spoderman turns green', '5', 'SpenceAnimation', 'Jellybean', '1', 'SpenceAnimation', '@FussiArt', '3'] assert(len(data)%3 == 0) scoreboard = {} for i in range(0, len(data), 3): if data[i] in scoreboard.keys(): scoreboard[data[i]]+= int(data[i+2]) else: scoreboard[data[i]] = int(data[i+2]) print(scoreboard) </code></pre> <h4>Output</h4> <pre><code> {'Rhett_andEmma77': 0, 'Tato': 1, 'Aurio': 4, 'nap': 3, 'NICKSION': 6, 'stupidfuck': 17, 'database': 1, 'NightSkyMusic': 27, 'Odminey': 26, 'techness2011': 1, 'saltyipaint': 5, 'HENRYFOLIO': 8, 'SpenceAnimation': 6, 'RainyFox2': 1, 'hecksacontagon': 11, 'nairb': 5} </code></pre>
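The same tally can also be written with `collections.Counter`, which handles the "create or add" branch automatically; a sketch on toy data with the same user/title/likes triple layout as the question (the usernames below are made up):

```python
from collections import Counter

# Triples of (user, post title, likes) flattened into one list, as in the question
data = ['alice', 'post one', '2', 'bob', 'post two', '3', 'alice', 'post three', '5']

scoreboard = Counter()
for user, _title, likes in zip(data[0::3], data[1::3], data[2::3]):
    scoreboard[user] += int(likes)  # missing keys start at 0 automatically

print(dict(scoreboard))  # {'alice': 7, 'bob': 3}
```

Slicing with `data[0::3]`, `data[1::3]`, `data[2::3]` walks the three interleaved fields in lockstep, so no index arithmetic is needed inside the loop.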
python|web-scraping
1
1,905,805
61,883,783
Take column value into a column MultiIndex
<p>Working with Python 3 and Pandas 1, I have a data frame like this:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame([['a', 1, 2, 3, 4, 5, 6, 'param1'], ['a', 7, 8, 9, 10, 11, 12, 'param2'], ['b', 13, 14, 15, 16, 17, 18, 'param1'], ['b', 19, 20, 21, 22, 23, 24, 'param2'], ['c', 25, 26, 27, 28, 29, 30, 'param1'], ['c', 31, 32, 33, 34, 35, 36, 'param2']], columns=['object', 2001, 2002, 2003, 2004, 2005, 2006, 'parameter']).set_index('object') </code></pre> <pre><code> 2001 2002 2003 2004 2005 2006 parameter object a 1 2 3 4 5 6 param1 a 7 8 9 10 11 12 param2 b 13 14 15 16 17 18 param1 b 19 20 21 22 23 24 param2 c 25 26 27 28 29 30 param1 c 31 32 33 34 35 36 param2 </code></pre> <p>and would like it to look like this:</p> <pre><code>year 2001 2002 2003 2004 2005 2006 parameter param1 param2 param1 param2 param1 param2 param1 param2 param1 param2 param1 param2 object a 1 7 2 8 3 9 4 10 5 11 6 12 b 13 19 14 20 15 21 16 22 17 23 18 24 c 25 31 26 32 27 33 28 34 29 35 30 36 </code></pre> <p>where <code>year</code> and <code>parameter</code> are levels of a MultiIndex in the columns, such that and element's 'co-ordinate' is</p> <pre><code>(object, (year, parameter)) </code></pre> <p>note that the values of the <code>parameter</code> column are used to create the MultiIndex level <code>parameter</code>.</p> <p>Any help is much appreciated, thanks</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a> with <code>append=True</code> for <code>MultiIndex in index</code>, set columns name by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rename_axis.html" rel="nofollow noreferrer"><code>DataFrame.rename_axis</code></a> and then <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>DataFrame.unstack</code></a>:</p> <pre><code>df = df.set_index('parameter', append=True).rename_axis('year', axis=1).unstack() print (df) year 2001 2002 2003 2004 2005 \ parameter param1 param2 param1 param2 param1 param2 param1 param2 param1 object a 1 7 2 8 3 9 4 10 5 b 13 19 14 20 15 21 16 22 17 c 25 31 26 32 27 33 28 34 29 year 2006 parameter param2 param1 param2 object a 11 6 12 b 23 18 24 c 35 30 36 </code></pre>
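The chain can be checked end to end on (a subset of) the question's own sample frame — after `unstack()`, each element is addressable by `(object, (year, parameter))`:

```python
import pandas as pd

# First four rows of the question's sample frame
df = pd.DataFrame([['a', 1, 2, 3, 4, 5, 6, 'param1'],
                   ['a', 7, 8, 9, 10, 11, 12, 'param2'],
                   ['b', 13, 14, 15, 16, 17, 18, 'param1'],
                   ['b', 19, 20, 21, 22, 23, 24, 'param2']],
                  columns=['object', 2001, 2002, 2003, 2004, 2005, 2006,
                           'parameter']).set_index('object')

# parameter into the row index, then pivoted out into the columns
out = df.set_index('parameter', append=True).rename_axis('year', axis=1).unstack()

print(out.loc['a', (2001, 'param1')], out.loc['b', (2006, 'param2')])  # 1 24
```

`out.columns` is now a two-level MultiIndex named `('year', 'parameter')`, matching the target layout in the question.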
python|pandas
3
1,905,806
63,648,983
How to convert the below R code to Python
<p>Objective: To get the class/dtype of features from a dataframe and incorporate the data into a new data frame containing features names as rows. For clarity I am pasting the R code for it which I was able to crack</p> <pre><code>Variables&lt;-as.data.frame(names(telecom)) ##Telecom original dataset-Incorporating columns names as dataframes in variable object ##If class integer it goes as CONT and if categorical then &quot;CAT&quot; into a new column &quot;cont_cat&quot; in Variable dataframe for(i in 1:ncol(telecom)) { Variable$cont_cat[i]&lt;-ifelse(class(telecom[,i])==&quot;integer&quot;|class(adult[,i])==&quot;numeric&quot;,&quot;Cont&quot;,&quot;Cat&quot;) } </code></pre> <p>The first part of it, I was able to crack in Python</p> <pre><code>Variables=pd.DataFrame(credit_data.columns, columns=[&quot;Features&quot;]) </code></pre> <p>However, I need help with second part of it.</p>
<p>Here is a sample solution; adjust the wanted dtypes to your scenario. Note that, to mirror the R logic, numeric columns should get <code>&quot;Cont&quot;</code> and everything else <code>&quot;Cat&quot;</code>:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np Variables['cont_cat'] = np.nan numeric_dtypes = [&quot;int64&quot;, &quot;float64&quot;] # change the wanted dtypes to your scenario for i in range(len(telecom.columns)): if telecom.iloc[:, i].dtypes in numeric_dtypes: Variables.iloc[i, 1] = &quot;Cont&quot; else: Variables.iloc[i, 1] = &quot;Cat&quot; # pandas iloc[row, column] </code></pre>
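A more direct translation of the R loop uses pandas' dtype introspection instead of comparing dtype names as strings — a sketch on a small made-up frame (`telecom` here is hypothetical, standing in for the original dataset):

```python
import pandas as pd

# Hypothetical stand-in for the telecom dataset: two numeric columns, one object
telecom = pd.DataFrame({'age': [25, 30], 'plan': ['a', 'b'], 'charge': [9.5, 7.0]})

variables = pd.DataFrame({'Features': telecom.columns})
# is_numeric_dtype covers int64, float64, etc., like the R integer|numeric check
variables['cont_cat'] = ['Cont' if pd.api.types.is_numeric_dtype(dt) else 'Cat'
                         for dt in telecom.dtypes]

print(variables['cont_cat'].tolist())  # ['Cont', 'Cat', 'Cont']
```

This avoids the per-cell `iloc` writes entirely: the whole label column is built in one pass over `telecom.dtypes`.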
python|r
1
1,905,807
62,104,850
Why did an empty query-set appear in my Django program?
<pre><code>def Cart(request): if request.user.is_authenticated: customer=request.user.customer order,created=Order.objects.get_or_create(customer=customer,complete=True) items=order.orderitem_set.all() print(items) else: items=[] context={"items":items} return render(request,'store/Cart.html',context) </code></pre> <p>I'm trying to show some order items in the Cart template, but nothing appears, so I tried to print the items and I get an empty queryset, despite having assigned some order items to an order in the admin panel.</p>
<p>Try this (note that <code>get_or_create</code> returns an <code>(object, created)</code> tuple, so it has to be unpacked):</p> <pre><code>def Cart(request): if request.user.is_authenticated: customer=request.user.customer order,created=Order.objects.get_or_create(customer__user=customer,complete=True) items=order.orderitem_set.all() print(items) else: items=[] context={"items":items} return render(request,'store/Cart.html',context) </code></pre>
python|django|django-views
0
1,905,808
31,225,132
Python 3.3.2 - trying to read url from wunderground
<p>I have been struggling for a while trying to convert some code from an older version of Python. I'm simply trying to run an API lookup from Wunderground and I can't get past my errors in Python. Here is the error: <code>f = urllib.request.urlopen(fileName) AttributeError: 'module' object has no attribute 'request'</code></p> <p>The code is pretty straightforward; I know I'm missing something simple, thanks for any help. </p> <pre><code>import urllib import json key = "xxxxxxxxxxxxxxxxx" zip = input('For which ZIP code would you like to see the weather? ') fileName = "http://api.wunderground.com/api/" + key + "/geolookup/conditions/q/PA/" + zip + ".json" f = urllib.request.urlopen(fileName) json_string = f.read() parsed_json = json.loads(json_string) location = parsed_json['location']['city'] temp_f = parsed_json['current_observation']['temp_f'] print ("Current temperature in %s is: %s % (location, temp_f)") close() </code></pre>
<p>Sometimes importing a package (e.g. <code>numpy</code>) automatically imports submodules (e.g. <code>numpy.linalg</code>) into its namespace. But that is not the case for <code>urllib</code>. So you need to use</p> <pre><code>import urllib.request </code></pre> <p>instead of </p> <pre><code>import urllib </code></pre> <p>in order to access the <code>urllib.request</code> module. Alternatively, you could use</p> <pre><code>import urllib.request as request </code></pre> <p>in order to access the module as <code>request</code>. </p> <p>Looking at the <a href="https://docs.python.org/3/library/urllib.request.html#examples" rel="nofollow">examples in the docs</a> is a good way to avoid problems like this in the future.</p> <hr> <p>Since <code>f.read()</code> returns a <code>bytes</code> object, and <code>json.loads</code> expects a <code>str</code>, you'll also need to decode the bytes. The particular encoding depends on what the server decides to send you; in this case the bytes are <code>utf-8</code> encoded. So use</p> <pre><code>json_string = f.read().decode('utf-8') parsed_json = json.loads(json_string) </code></pre> <p>to decode the bytes.</p> <hr> <p>There is a small typo on the last line. Use </p> <pre><code>print ("Current temperature in %s is: %s" % (location, temp_f)) </code></pre> <p>to interpolate the string <code>"Current temperature in %s is: %s"</code> with the values <code>(location, temp_f)</code>. Note the placement of the quotation mark.</p> <hr> <p>Tip: Since <code>zip</code> is a builtin-function, it is a good practice not to name a variable <code>zip</code> since this changes the usual meaning of <code>zip</code> making it harder for others and perhaps future-you to understand your code. The fix is easy: change <code>zip</code> to something else like <code>zip_code</code>.</p> <hr> <pre><code>import urllib.request as request import json key = ... zip_code = input('For which ZIP code would you like to see the weather? 
') fileName = "http://api.wunderground.com/api/" + key + "/geolookup/conditions/q/PA/" + zip_code + ".json" f = request.urlopen(fileName) json_string = f.read().decode('utf-8') parsed_json = json.loads(json_string) location = parsed_json['location']['city'] temp_f = parsed_json['current_observation']['temp_f'] print ("Current temperature in %s is: %s" % (location, temp_f)) </code></pre>
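The bytes-versus-str point is easy to reproduce without hitting the network; decoding explicitly makes the encoding assumption visible (the JSON payload below is a made-up stand-in for the API response):

```python
import json

# Hypothetical bytes payload, shaped like the wunderground response fields used above
raw = b'{"location": {"city": "Philadelphia"}, "current_observation": {"temp_f": 71.3}}'

parsed = json.loads(raw.decode('utf-8'))  # bytes -> str -> dict
print(parsed['location']['city'], parsed['current_observation']['temp_f'])
```

(As an aside, on Python 3.6+ `json.loads` also accepts `bytes` directly, but the explicit `.decode('utf-8')` keeps the code working on the 3.3-era interpreter named in the question's title.)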
python|json
3
1,905,809
35,157,373
Is it possible to treat dictionary values as objects?
<p>Here is a class that analyses data: </p> <pre><code>class TopFive: def __init__(self, catalog_data, sales_data, query, **kwargs): self.catalog_data = catalog_data self.sales_data = sales_data self.query = query def analyse(self): CATALOG_DATA = self.catalog_data SALES_DATA = self.sales_data query = self.query products = {} # Creating a dict with ID, city or hour ( depending on query ) as keys and their income as values. for row in SALES_DATA: QUERIES = { 'category': row[0], 'city': row[2], 'hour': row[3] } if QUERIES[query] in products: products[QUERIES[query]] += float(row[4]) products[QUERIES[query]] = round(products[QUERIES[query]], 2) else: products[QUERIES[query]] = float(row[4]) if query == 'category': top_five = {} top_five_items = sorted(products, key=products.get, reverse=True)[:5] # Getting top 5 categories. for key in top_five_items: for row in CATALOG_DATA: if key == row[0]: key_string = row[5] + ', ' + row[4] top_five[key_string] = products[key] return top_five else: return products </code></pre> <p>It is being called like so:</p> <pre><code> holder = TopFive(catalog_data=catalog_data, sales_data=sales_data, query='hour') top_hour = holder.analyse() </code></pre> <p>What I want to do now is work with the dates. They come in from an input csv file looking like this: </p> <pre><code>2015-12-11T17:14:05+01:00 </code></pre> <p>Now I need to change to UTC time zone. I thought of using:</p> <pre><code>.astimezone(pytz.utc) </code></pre> <p>And now to my question: Can I somehow do so in my QUERIES dictionary, so that when the 'hour' argument is passed to the class I can then execute the program, without changing the following code's structure:</p> <pre><code>if QUERIES[query] in products: products[QUERIES[query]] += float(row[4]) products[QUERIES[query]] = round(products[QUERIES[query]], 2) else: products[QUERIES[query]] = float(row[4]) </code></pre> <p>and without adding more conditions. 
I am thinking of something like:</p> <pre><code>'hour': row[3].astimezone(pytz.utc) </code></pre> <p>But this is not working. I can understand why, I am just wondering if there is a similar approach that works. Otherwise I would have to add yet another condition with separate return value and work there.</p>
<p>Got it! The answer to my question is yes: you can call methods inside a dictionary literal, just as I tried:</p> <pre><code>QUERIES = { 'category': row[0], 'city': row[2], 'hour': hour.astimezone(pytz.utc) } </code></pre> <p>What I just realized was that I forgot to parse the CSV input into datetime format, so obviously calling <code>.astimezone</code> on a string raises an error. Sorry for the long post, but I'm still very new to OOP and it's quite difficult keeping track of all the files, instances and so on ;D Thanks</p>
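The missing parsing step can be done with the standard library alone: on Python 3.7+, `datetime.fromisoformat` understands the `+01:00` offset in the question's timestamps, so `pytz` isn't strictly needed for the UTC conversion:

```python
from datetime import datetime, timezone

# The timestamp format from the question's CSV
ts = datetime.fromisoformat('2015-12-11T17:14:05+01:00')  # str -> aware datetime
utc = ts.astimezone(timezone.utc)                         # shift to UTC

print(utc.isoformat())  # 2015-12-11T16:14:05+00:00
```

Once the string is parsed into an aware `datetime`, the `'hour': row[3].astimezone(...)` entry in the `QUERIES` dict works exactly as intended.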
python|dictionary|timezone
1
1,905,810
69,595,725
Numpy list of arrays to concatenated ordered matrix of arrays
<p>I have a list of numpy arrays, where each array is of the same rank, for example each array has the same shape of <code>(H, W, C)</code>.</p> <p>Assume the list I have has 12 such arrays, e.g.</p> <pre class="lang-py prettyprint-override"><code>my_list = [A, B, C, D, E, F, G, H, I, J, K, L] </code></pre> <p>What I want is given a grid size (in the sample below the grid is 3x4), create a single matrix <strong>with the same rank of each array</strong> that places the first array in the top left and the last array in the bottom right in an ordered manner, e.g.</p> <pre><code>[A, B, C, D, E, F, G, H, I, J, K, L] </code></pre> <p>This is only a pseudo result, as the result should be in this case a matrix with the shape of <code>(H*3, W*4, C)</code>. The example above is only for placement clarification.</p> <p>How can I achieve that using numpy?</p>
<pre><code>import numpy as np h = 7 w = 5 c = 10 grid = 3*4 ##Creating sample data for list a = np.random.rand(grid,h,w,c) my_list = list(a) #### my_array = np.array(my_list) ## shape (grid,h,w,c) my_array = my_array.reshape(3,4,h,w,c) my_array = my_array.transpose(0,2,1,3,4) your_req_array = my_array.reshape(3*h,4*w,c) </code></pre>
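The reshape/transpose chain can be verified with tiles that are constant per index, confirming row-major placement (tile `r*4 + c` lands in grid cell `(r, c)`) — a small self-check with toy shapes:

```python
import numpy as np

h, w, c = 2, 2, 1
tiles = [np.full((h, w, c), i) for i in range(12)]  # tile i is filled with the value i

grid = (np.array(tiles)              # (12, h, w, c)
        .reshape(3, 4, h, w, c)      # (rows, cols, h, w, c)
        .transpose(0, 2, 1, 3, 4)    # (rows, h, cols, w, c)
        .reshape(3 * h, 4 * w, c))   # merge (rows, h) and (cols, w)

print(grid.shape)  # (6, 8, 1)
```

So `grid[0, 0]` comes from tile 0 (top-left), `grid[0, 2]` from tile 1, and `grid[2, 0]` from tile 4 — the first tile of the second row, as required.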
python|numpy
1
1,905,811
55,449,114
How do I assign an element from one list as the value of an element from another list and print the result?
<p>I need to assign an element from one list as the value of an element in another list. How do I do it? </p> <p>I had a string whose characters I stored as a list-A and I have another list-B whose elements will act like variables. I tried using a for loop where each iteration in the list-B will contain elements from list-A as their value. I know it is picking the value from the last iteration of the List-A. I tried using a nested for loop; the First one for the List-B and the second one for the List-A. All it does is print all the elements of list-A for every iteration of List-B.</p> <pre><code>item = "Cheese Burger" ends = [] end_num = [] for i in range(1,13): end_num.append(i) print(f"end_num = {end_num} \n") for num in end_num: end = f"end{num}" #print(end) ends.append(end) print(f"end list = {ends} \n") characters = [] for char in item: characters.append(char) characters.pop(6) print("Character list = ", characters, "\n") #print(len(characters)) for iteration in ends: result = '{0} = "{1}"'.format(iteration,char) print(result) </code></pre> <p>For a string "Cheese Burger" as input, the expected output is: </p> <pre><code>end1 = "C" end2 = "h" end3 = "e" end4 = "e" end5 = "s" end6 = "e" end7 = "B" end8 = "u" end9 = "r" end10 = "g" end11 = "e" end12 = "r" </code></pre> <p>Actual output :</p> <pre><code>end1 = "r" end2 = "r" end3 = "r" end4 = "r" end5 = "r" end6 = "r" end7 = "r" end8 = "r" end9 = "r" end10 = "r" end11 = "r" end12 = "r" </code></pre>
<p>Replace the last <code>for</code> loop with:</p> <pre><code>for i in range(len(ends)): result = '{0} = "{1}"'.format(ends[i],characters[i]) print(result) </code></pre> <p>Demo: <a href="https://repl.it/@glhr/55449114" rel="nofollow noreferrer">https://repl.it/@glhr/55449114</a></p> <p>Also, in the first <code>for</code> loop, consider using <code>for i in range(1,len(item))</code> instead of hard-coding the number of characters with <code>range(1,13)</code> </p> <p>Edit: to make a general purpose solution, instead of using <code>characters.pop(6)</code> which only works if there's a single space to remove at the 6th position, you can remove all spaces from the input string with <code>item.replace(" ","")</code>:</p> <pre><code>item = "Che ese Bur ger" item = item.replace(" ","") print(item) </code></pre> <p>which will output <code>CheeseBurger</code>.</p> <p>You'll then need to update the range of the first <code>for</code> loop:</p> <pre><code>for i in range(1,len(item)+1): </code></pre> <p>so that the sizes of <code>end_num</code> and <code>item</code> match.</p> <p>2nd demo: <a href="https://repl.it/@glhr/55449114-2" rel="nofollow noreferrer">https://repl.it/@glhr/55449114-2</a> you can change the input string to anything you want. </p> <p>However, note that there's a much simpler and shorter solution for outputting the same thing as your code:</p> <pre><code>item = "Che ese Bur ger" item = item.replace(" ","") for i in range(1,len(item)+1): result = 'ends{0} = "{1}"'.format(i,item[i-1]) print(result) </code></pre> <p>3rd demo: <a href="https://repl.it/@glhr/55449228-3" rel="nofollow noreferrer">https://repl.it/@glhr/55449228-3</a></p>
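If the goal is just to pair `end1…endN` labels with characters, a dictionary (rather than two parallel lists) keeps each label attached to its value — a sketch of that alternative:

```python
item = "Cheese Burger".replace(" ", "")  # drop all spaces, not just position 6

# end1 -> 'C', end2 -> 'h', ... built in one comprehension
ends = {f"end{i}": ch for i, ch in enumerate(item, start=1)}

for name, ch in ends.items():
    print(f'{name} = "{ch}"')
```

`enumerate(item, start=1)` supplies the 1-based numbering, so the loop bounds can never drift out of sync with the string length.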
python-3.x|loops
0
1,905,812
57,448,184
Python version of an Excel Formula
<p>I'm currently creating a Python 3 program that will pick out the most frequent numbers from a six-column CSV. So far I have the code that will pick the most frequent from each column, but I also want code that can pick the six most frequent (from the 1st most frequent, to the 6th) number from all the columns and rows combined.</p> <p>I have an Excel spreadsheet that does it, using the formula:</p> <pre><code>=MODE(IF(1 - ISNUMBER(MATCH(B2:G402,$L$24:L24,0)),B2:G402)) </code></pre> <p>Then dragging the calculation down to display six numbers (as far as I can tell this is a working formula!)</p> <p>Is there a way I can get that formula, or something better, in Python 3? So the code will display the top six most frequent numbers from six columns and 400+ rows?</p> <p>Here's my code so far:</p> <pre><code>import csv import os import random from collections import Counter filename='lotto.csv' os.system('cls' if os.name == 'nt' else 'clear') print("\n\n********** Lottery Number Generator **********\n\n") print("Based on all previous Lotto numbers from CSV.\n") x = 1 while x &lt; 8: with open(filename, 'r') as f: column = (row[x] for row in csv.reader(f)) print("Lotto Number", x, ": {0}".format(Counter(column).most_common()[0][0])) x = x + 1 </code></pre> <p>Any ideas, guys?</p> <p>Thanks in advance!</p> <p>Dave </p>
<p>Here is, for me, the simplest way to do it.</p> <p>I will be using pandas (install it with <code>pip install pandas</code>):</p> <pre><code>import pandas as pd df = pd.read_csv('filename.csv') freq = df.stack().value_counts() </code></pre> <p>It will get you a Series with the frequency of every element across the whole frame, sorted from most to least common, so the first six entries of its index are your six most frequent numbers.</p> <p>However, if you want the frequencies of only one column, you can do this:</p> <pre><code>freq = df['column_name'].value_counts() </code></pre>
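On a small frame the behaviour is easy to see — `stack()` flattens all the columns into one Series, and `value_counts()` sorts by frequency, so the leading index entries are the most frequent numbers (the toy frame below stands in for the lotto CSV):

```python
import pandas as pd

# Toy stand-in for the lotto CSV: two columns, six draws in total
df = pd.DataFrame({'n1': [7, 7, 3], 'n2': [7, 3, 9]})

freq = df.stack().value_counts()  # counts across every column and row
top = freq.index[:2].tolist()     # the 2 most frequent values here (6 in the real case)

print(top)  # [7, 3]
```

For the real file, `freq.index[:6].tolist()` gives the six most frequent numbers, from 1st to 6th, replicating the dragged-down Excel MODE formula.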
python|python-3.x
1
1,905,813
54,019,931
Getting value in a dataframe in PySpark
<p>I have the below dataframe and I'm trying to get the value <strong>3097</strong> as an int, e.g. storing it in a Python variable to manipulate it, multiply it by another int etc.</p> <p><a href="https://i.stack.imgur.com/RZqyx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RZqyx.png" alt="enter image description here"></a></p> <p>I've managed to get the row, but I don't even know if it's a good way to do it, and I still can't get the value as an int.</p> <pre><code>data.groupBy("card_bank", "failed").count().filter(data["failed"] == "true").collect() </code></pre> <p><a href="https://i.stack.imgur.com/hKU19.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hKU19.png" alt="enter image description here"></a></p>
<p>You need to get a <code>row</code> from the collected sequence (either with a for loop or a map function) and then read the count field — in PySpark that is <code>row[2]</code> or <code>row['count']</code>; the equivalent in the Scala/Java API is <code>row.getInt(2)</code>, per <a href="https://spark.apache.org/docs/1.4.0/api/java/org/apache/spark/sql/Row.html" rel="nofollow noreferrer">https://spark.apache.org/docs/1.4.0/api/java/org/apache/spark/sql/Row.html</a>.</p>
python|apache-spark|pyspark
2
1,905,814
41,428,110
Plot mouse clicks over an image
<p>I'm writing code in Python 3 to plot some markers over a DICOM image. For this, I wrote a very short program:</p> <p>In the main program, I read the DICOM filename from the terminal and plot the image.</p> <p>main_prog.py:</p> <pre><code>import sys import dicom as dcm import numpy as np from matplotlib import pyplot as plt from dicomplot import dicomplot as dcmplot filename = sys.argv[1] dicomfile = dcm.read_file(filename) dicomimg = dicomfile.pixel_array fig = plt.figure(dpi = 300) ax = fig.add_subplot(1, 1, 1) plt.set_cmap(plt.gray()) plt.pcolormesh(np.flipud(dicomimg)) dcm = dcmplot(ax) plt.show() </code></pre> <p>Then, I define a class to store the coordinates clicked by the user and plot each of them, one at a time, over the image:</p> <p>dicomplot.py</p> <pre><code>from matplotlib import pyplot as plt class dicomplot(): def __init__(self, img): self.img = img self.fig = plt.figure(dpi = 300) self.xcoord = list() self.ycoord = list() self.cid = img.figure.canvas.mpl_connect('button_press_event', self) def __call__(self, event): if event.button == 1: self.xcoord.append(event.x) self.ycoord.append(event.y) self.img.plot(self.ycoord, self.xcoord, 'r*') self.img.figure.canvas.draw() elif event.button == 2: self.img.figure.canvas.mpl_disconnect(self.cid) elif event.button == 3: self.xcoord.append(-1) self.ycoord.append(-1) </code></pre> <p>The problem is that when I click over the image, the markers appear at a different scale, and not over the image as they are supposed to.</p> <p>How can I modify my code so that when I click on the image, all the mouse clicks are stored and plotted in the desired position?</p>
<p>The <code>MouseEvent</code> objects carry both <code>x</code>/<code>y</code> and <code>xdata</code>/<code>ydata</code> attributes (<a href="http://matplotlib.org/api/backend_bases_api.html#matplotlib.backend_bases.MouseEvent.ydata" rel="nofollow noreferrer">docs</a>). The first pair is in screen coordinates (e.g. pixels from the lower left) and the second pair (<code>*data</code>) is in data coordinates.</p> <p>You might also be interested in <a href="https://github.com/joferkington/mpldatacursor" rel="nofollow noreferrer"><code>mpldatacursor</code></a>.</p>
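A minimal sketch of a click handler that stores data coordinates; the event here is a hypothetical stand-in for a matplotlib `MouseEvent` so the idea can be shown without opening a figure:

```python
from types import SimpleNamespace

coords = []

def on_click(event):
    # xdata/ydata are in data coordinates; x/y would be screen pixels
    if event.xdata is not None:
        coords.append((event.xdata, event.ydata))

# Hypothetical event object; the attribute names match MouseEvent's
fake_event = SimpleNamespace(x=150, y=220, xdata=42.5, ydata=17.0)
on_click(fake_event)
```

In the question's class, appending `event.xdata`/`event.ydata` instead of `event.x`/`event.y` keeps the stored clicks in the same coordinate system as the image.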
python-3.x|matplotlib|plot
1
1,905,815
51,865,887
Amazon Web Services P3 slower than local GPU with Keras, TensorFlow and MobileNet
<p>I'm currently training (fine-tuning) a pretrained MobileNet model with keras and tensorflow. The training is done on my local computer with a GTX980.</p> <p>To speed things up I created a p3.2xlarge instance on AWS with an Amazon Deep Learning AMI based on Ubuntu (<a href="https://aws.amazon.com/marketplace/pp/B077GCH38C?qid=1534363940598&amp;sr=0-2&amp;ref_=srh_res_product_title" rel="nofollow noreferrer">aws marketplace</a>). </p> <p>When running with some test data (~ 300 images) I noticed that my local computer needs around 10 seconds per epoch while aws needs 26 seconds. I even tested it with a p3.16xlarge instance, but there was no big difference. When watching the GPU(s) with </p> <pre><code>watch -n 1 nvidia-smi </code></pre> <p>all the memory (16GB per GPU) was filled. I tried different amounts of data, keras implementations, batch sizes and raising the GPU speed. When listing the devices, the GPU is shown as used. What could be making it run this slowly? I am using jupyter notebook. Below is my test code:</p> <pre><code>from keras.applications import MobileNet mobile_model = MobileNet() for layer in mobile_model.layers[:-4]: layer.trainable = False from keras import models from keras import layers from keras import optimizers # Create the model model = models.Sequential() # Add the vgg convolutional base model model.add(mobile_model) # Add new layers #model.add(layers.Flatten(return_sequences=True)) model.add(layers.Dense(1024, activation='relu')) model.add(layers.Dropout(0.5)) model.add(layers.Dense(2, activation='softmax')) from keras.preprocessing.image import ImageDataGenerator train_dir = "./painOrNoPain/train/" validation_dir = "./painOrNoPain/valid/" image_size = 224 train_datagen = ImageDataGenerator( rescale=1./255, rotation_range=20, width_shift_range=0.2, height_shift_range=0.2, horizontal_flip=True, fill_mode='nearest') validation_datagen = ImageDataGenerator(rescale=1./255) # Change the batchsize according to your system RAM train_batchsize = 128 
val_batchsize = 128 train_generator = train_datagen.flow_from_directory( train_dir, target_size=(image_size, image_size), batch_size=train_batchsize, class_mode='categorical') validation_generator = validation_datagen.flow_from_directory( validation_dir, target_size=(image_size, image_size), batch_size=val_batchsize, class_mode='categorical', shuffle=False) try: model = multi_gpu_model(model) except: pass from keras.optimizers import Adam # Compile the model model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=1e-4), metrics=['acc']) # Train the model history = model.fit_generator( train_generator, steps_per_epoch=train_generator.samples/train_generator.batch_size , epochs=3, validation_data=validation_generator, validation_steps=validation_generator.samples/validation_generator.batch_size, verbose=2) # Save the model model.save('small_last4.h5') </code></pre>
<p>I'm having the same problem. Running with Amazon AWS GPU is even slower than running on CPU on my own laptop. A possible explanation is due to the large amount of time spent on data transferring between CPU and GPU as said in <a href="https://stackoverflow.com/questions/49138324/5-layer-dnn-in-keras-trains-slower-using-gpu">this post</a></p>
python|amazon-web-services|tensorflow|amazon-ec2|keras
2
1,905,816
26,951,792
How to get needed datetime from tuple list?
<p>I have the variable:</p> <pre><code>results3 = [ ( 'CP - 2615', 23652, datetime.datetime(2014, 10, 31, 19, 21, 56), 'custom-simulation:pre-processing-cleanup', 5, datetime.datetime(2014, 10, 31, 19, 21, 59), datetime.datetime(2014, 10, 31, 19, 22, 4), 259, 262 ), ( 'CP - 2615', 23652, datetime.datetime(2014, 10, 31, 19, 21, 56), 'custom-cleanup:pre-processing-cleanup', 1, datetime.datetime(2014, 10, 31, 19, 22, 5), datetime.datetime(2014, 10, 31, 19, 22, 6), 259, 262 ) ] </code></pre> <p>How can I get the needed datetime (datetime.datetime(2014, 10, 31, 19, 22, 5)) from the list of tuples? When I try:</p> <pre><code>actualEndTime = [] for i in range(len(results3)): actualEndTime.append(results3[i][2][1]) </code></pre> <p>I get:</p> <pre><code>TypeError: 'datetime.datetime' object has no attribute '__getitem__' </code></pre>
<p>You went one level too deep with your indexing, the <code>[1]</code> is unneeded. Also, if all you want to do is pull the <code>datetime</code> out into a list, your code will be more readable if you use a list comprehension.</p> <pre><code>actualEndTime = [x[2] for x in results3] </code></pre>
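Using the first tuple from the question's data, the comprehension works like this (use a different index, e.g. `x[5]`, if one of the later timestamps is the one you need):

```python
import datetime

results3 = [
    ('CP - 2615', 23652, datetime.datetime(2014, 10, 31, 19, 21, 56),
     'custom-simulation:pre-processing-cleanup', 5,
     datetime.datetime(2014, 10, 31, 19, 21, 59),
     datetime.datetime(2014, 10, 31, 19, 22, 4), 259, 262),
]
# One index only: x[2] is already the datetime, so no second subscript
actualEndTime = [x[2] for x in results3]
```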
python|python-2.7|datetime|tuples
1
1,905,817
64,557,706
Create new pandas data frame based on 2 conditions in an existing column
<p>Easy question here: I am creating a new data frame based on a report of contracts where I only want 2 statuses. I have the first one, but am having trouble adding the second condition.</p> <pre><code>df2 = df[(df['[PCW] Contract Status'] == &quot;Draft&quot;)] </code></pre> <p>The other status is &quot;Draft Amendment&quot;. So I basically want it to read like</p> <pre><code>df2 = df[(df['[PCW] Contract Status'] == &quot;Draft&quot;, &quot;Draft Amendment&quot;)] </code></pre>
<p>You can use <code>isin()</code>.</p> <pre><code>df2 = df[df['[PCW] Contract Status'].isin([&quot;Draft&quot;, &quot;Draft Amendment&quot;])] </code></pre> <p>Alternatively, you can build the list of required statuses first and then pass that list to <code>isin()</code>.</p>
python|pandas
2
1,905,818
55,850,851
Can't read avro in Jupyter notebook
<p>I already have a SparkContext created and a Spark global variable. When I read ORC files, I can read them as simply as <code>spark.read.format("orc").load("filepath")</code>; however, for avro I can't seem to do the same even though I try to import the jar like so:</p> <pre><code> spark.conf.set("spark.jars.packages", "file:///projects/apps/lib/spark-avro_2.11-3.2.0.jar") </code></pre> <p>I then try to read the avro file and get an error like so:</p> <pre><code>Py4JJavaError: An error occurred while calling o65.load. : org.apache.spark.sql.AnalysisException: Failed to find data source: avro. Please find an Avro package at http://spark.apache.org/third-party-projects.html; </code></pre>
<p><code>spark.jars.packages</code> takes Maven-style coordinates (<code>group:artifact:version</code>), not a path to a jar file:</p> <pre><code>spark.jars.packages org.apache.spark:spark-avro_2.12:2.4.2 </code></pre> <p>Additionally, as explained in <a href="https://stackoverflow.com/q/33908156/">How to load jar dependencies in IPython Notebook</a>, it has to be set before the JVM and <code>SparkSession</code> / <code>SparkContext</code> are initialized.</p> <p>So you have to:</p> <ul> <li>Fix the settings.</li> <li>Provide these as a configuration or environment variable, before the JVM is initialized.</li> </ul>
python|apache-spark|pyspark|jupyter-notebook
2
1,905,819
49,895,956
Incomplete HTML Content Using Python request.get
<p>I am trying to get HTML content from a URL using requests.get in Python, but I am getting an incomplete response.</p> <pre><code>import requests from lxml import html url = "https://www.expedia.com/Hotel-Search?destination=Maldives&amp;latLong=3.480528%2C73.192127&amp;regionId=109&amp;startDate=04%2F20%2F2018&amp;endDate=04%2F21%2F2018&amp;rooms=1&amp;_xpid=11905%7C1&amp;adults=2" headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.80 Safari/537.36', 'Content-Type': 'text/html', } response = requests.get(url, headers=headers) print response.content </code></pre> <p>Can anyone suggest what changes are needed to get the complete response?</p> <p>NB: using Selenium I am able to get the complete response, but that is not the recommended way.</p>
<p>If you need to get content generated dynamically by JavaScript and you don't want to use Selenium, you can try <a href="https://html.python-requests.org/" rel="noreferrer">requests-html</a> tool that supports JavaScript:</p> <pre><code>from requests_html import HTMLSession session = HTMLSession() url = "https://www.expedia.com/Hotel-Search?destination=Maldives&amp;latLong=3.480528%2C73.192127&amp;regionId=109&amp;startDate=04%2F20%2F2018&amp;endDate=04%2F21%2F2018&amp;rooms=1&amp;_xpid=11905%7C1&amp;adults=2" r = session.get(url) r.html.render() print(r.content) </code></pre>
python|web-scraping
6
1,905,820
66,540,704
How to run Python file in Heroku from termux(Andriod)?
<p>I am getting an error when trying to run my Python file from Termux. Can anyone help?</p> <pre><code>heroku run --app python popfinal.py Running popfinal.py on ⬢ python... done › Error: You do not have permission to manage dynos › on python. You need to have the deploy or operate › permission on this app. › › Error ID: forbidden </code></pre>
<pre><code>heroku run --app python popfinal.py Running popfinal.py on ⬢ python... done › Error: You do not have permission to manage dynos › on python. You need to have the deploy or operate › permission on this app. › › Error ID: forbidden </code></pre> <p>You need to authenticate first before being able to run that command. You can authenticate with <code>heroku login</code>.</p> <p><strong>If</strong> that still does not work then I'm afraid that the issue runs much deeper and is something you will have to ask the maintainers of the termux project to fix. <a href="https://wiki.termux.com/wiki/Differences_from_Linux" rel="nofollow noreferrer">Termux is not a true Linux environment</a>. Some stuff may be broken. So the heroku CLI functionalities might not entirely work on Termux.</p>
python|selenium|selenium-webdriver|heroku
0
1,905,821
64,725,347
matplotlib minus sign in tick labels has bad formatting
<p>Matplotlib appears to use the &quot;correct&quot; unicode character for a minus sign in tick labels if they are set automatically. However, if I try to set them manually, it appears that it instead uses a hyphen (which looks way too small and generally bad). How can I manually change the tick labels while retaining the correct minus sign? Here is an example of a comparison between the automatic and the manual setting of the labels. Automatic:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np plt.plot(np.arange(-50,51,10), np.arange(0, 101, 10)) plt.xticks(np.arange(-50,51,10)) </code></pre> <p>Manual:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np plt.plot(np.arange(-50,51,10), np.arange(0, 101, 10)) plt.xticks(np.arange(-50,51,10), np.arange(-50,51,10)) </code></pre> <p>Here is the comparison of the output [1]: <a href="https://i.stack.imgur.com/euLmy.png" rel="nofollow noreferrer">https://i.stack.imgur.com/euLmy.png</a></p> <p>The real reason that I care about this is because I want to set the labels to values that are <strong>different</strong> from the default ones. I am running matplotlib version 3.2.2.</p>
<p>When using the standard tick labels, <a href="https://matplotlib.org/devdocs/gallery/text_labels_and_annotations/unicode_minus.html" rel="nofollow noreferrer">the minus signs are converted automatically to unicode minus signs</a>. When changing the tick labels, the numbers are converted to strings. Such strings don't get the automatic replacement to unicode minus signs. A solution is to do it explicitly:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import numpy as np plt.plot(np.arange(-50, 51, 10), np.arange(11)) plt.xticks(np.arange(-50, 51, 10), [f'{x}'.replace('-', '\N{MINUS SIGN}') for x in np.arange(-50, 51, 10)]) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/Kivp7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Kivp7.png" alt="comparison plot" /></a></p>
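The replacement itself can be checked without drawing anything; `\N{MINUS SIGN}` is U+2212, the character matplotlib's default formatter uses:

```python
# Build label strings the way the answer does, swapping the ASCII hyphen
# for the typographic minus sign (U+2212)
labels = [f'{x}'.replace('-', '\N{MINUS SIGN}') for x in (-50, 0, 50)]
```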
python|matplotlib|formatting|xticks
0
1,905,822
65,005,966
Extract details from a filename in a folder
<p>My filename looks like <code>23_10_2020_15_47_06_1550_5.png</code>. I want to extract the last <code>_5</code> part of it so that I can compare it to another file ending in <code>23_10_2020_15_47_06_1050_R_5</code> which is in another folder and use these files for my further operations.</p> <p>Can someone help me with how to do it?</p>
<p>Here is how you can use the <a href="https://python-reference.readthedocs.io/en/latest/docs/str/rsplit.html" rel="nofollow noreferrer"><code>str.rsplit()</code></a> method:</p> <pre><code>from glob import glob files1 = glob(&quot;Folder1\\*.png&quot;) # List of all png files in Folder1 files2 = glob(&quot;Folder2\\*.png&quot;) # List of all png files in Folder2 lst = [f.rsplit('_', 1)[-1][:-4] for f in files1] # List of all the last numbers in the files1 list </code></pre>
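With the two filenames from the question, the `rsplit` expression isolates the trailing number so the files can be compared (the `[:-4]` slice strips the `.png` extension):

```python
# The two filenames from the question, one with an extension and one without
name1 = "23_10_2020_15_47_06_1550_5.png"
name2 = "23_10_2020_15_47_06_1050_R_5"

suffix1 = name1.rsplit('_', 1)[-1][:-4]  # "5.png" -> "5"
suffix2 = name2.rsplit('_', 1)[-1]       # already bare: "5"
```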
python
0
1,905,823
64,109,177
Python script - launch nmap with parameters
<p>The script launches the application nmap, but it doesn't pass the variables on to nmap; they print fine when printed separately.</p> <p>Looking for some suggestions, many thanks.</p> <p><a href="https://i.stack.imgur.com/hyA6U.png" rel="nofollow noreferrer">Image script</a> - <a href="https://i.stack.imgur.com/9Io8h.png" rel="nofollow noreferrer">Image output</a></p> <p><strong>Script:</strong></p> <pre><code>import os import subprocess xIP=input(&quot;Please enter IP-address to use on NMAP port-scan: &quot;) xOP=input(&quot;Please apply options to add on port-scan: &quot;) print(&quot;Entered values: &quot;, xIP + xOP) command = &quot;nmap print xIP xOP&quot; #os.system(command) subprocess.Popen(command) input(&quot;Press ENTER to exit application...&quot;) </code></pre>
<pre><code>command = &quot;nmap print xIP xOP&quot; </code></pre> <p>This line is the issue. First, as <a href="https://stackoverflow.com/a/64109246/1426065">Uriya Harpeness mentioned</a>, <a href="https://docs.python.org/3/library/subprocess.html#subprocess.Popen" rel="nofollow noreferrer"><code>subprocess.Popen()</code></a> works best when the command is split into pieces in a collection like a list. Instead of using <code>str.split()</code> as they suggest, <a href="https://docs.python.org/3/library/shlex.html#shlex.split" rel="nofollow noreferrer"><code>shlex.split()</code></a> is designed for this, as the example in the <code>Popen</code> docs shows.</p> <p>However, the other issue you're having is that you've placed your variable names into a string literal, so they're just being treated as part of the string instead of being evaluated for their contents. (The word <code>print</code> also doesn't belong in the command; nmap expects its options followed by the target.) To get the behavior you want, just reference the variables directly:</p> <pre><code>command = [&quot;nmap&quot;, xOP, xIP] subprocess.Popen(command)</code></pre>
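The list form and the `shlex.split()` form produce the same argument vector, which can be checked without actually running nmap (the IP and option below are made-up examples):

```python
import shlex

xIP, xOP = "192.0.2.10", "-sV"   # hypothetical user input
command = ["nmap", xOP, xIP]

# shlex.split builds the same token list from a single command string
tokens = shlex.split(f"nmap {xOP} {xIP}")
```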
python|nmap
1
1,905,824
65,191,094
Create a list with text from another file
<p>I'm trying to add items to a list that are from another file.</p> <p>This is what I have so far:</p> <pre><code>book_list = open(&quot;books.txt&quot;) book_titles = [] </code></pre> <p>I have no clue where to go from here. The book titles in the text file each have their own line. Can someone please resolve me of my ignorance? Thank you</p>
<p>Read the whole file and split it on line breaks. <code>splitlines()</code> is a bit safer than <code>split(&quot;\n&quot;)</code> because it doesn't leave a trailing empty entry when the file ends with a newline:</p> <pre class="lang-py prettyprint-override"><code>book_list = open(&quot;books.txt&quot;) book_titles = book_list.read().splitlines() book_list.close() </code></pre>
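A self-contained check of the idea, writing a small stand-in for `books.txt` to a temporary file first:

```python
import os
import tempfile

# Create a small stand-in for books.txt, one title per line
fd, path = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w") as f:
    f.write("Dune\nHyperion\nIlium\n")

with open(path) as book_list:
    book_titles = book_list.read().splitlines()

os.remove(path)
```

Using `with open(...)` as shown also closes the file automatically, which is preferable to a bare `open()`.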
python|python-3.x|list|file
1
1,905,825
65,407,538
Regex: marking a pattern
<p>I'm trying to mark a sentence contains &quot;manu&quot; from it's nearest <code>\n\n</code> to it's nearest <code>\n\n</code>, this is the text</p> <pre><code>\n\nHolds Certificate No: EMS 96453\nand operates an Environmental Management System which complies with the requirements of ISO for\n\nthe following scope:The Environmental Management System of Dow Corning, for management of environmental\nrisks associated with all global business processes for the marketing, developing,\n manufacturing, and supply of silicon-based and complementary products and services.\n\n/ tou\n\nFor and on behalf\n\n </code></pre> <p>I wanted to mark just this</p> <pre><code>the following scope:The Environmental Management System of Dow Corning, for management of environmental\nrisks associated with all global business processes for the marketing, developing,\n manufacturing, and supply of silicon-based and complementary products and services. </code></pre> <p>I tried this regex</p> <pre><code>\\n\\n(.+manu.+?)\\n\\n </code></pre> <p>but it's ignoring the nearest <code>\n\n</code> to my pattern and marks much more text than I want</p> <pre><code>Holds Certificate No: EMS 96453\nand operates an Environmental Management System which complies with the requirements of ISO for\n\nthe following scope:The Environmental Management System of Dow Corning, for management of environmental\nrisks associated with all global business processes for the marketing, developing,\n manufacturing, and supply of silicon-based and complementary products and services. </code></pre> <p>what am I missing?</p>
<p>The pattern starts at the left by matching <code>\\n\\n</code> followed by making use of the dot that matches any character. So it will match in this case <code>manu</code> without considering any characters in between.</p> <p>You can use a pattern to match <code>\\n\\n</code> and make sure to not match it again before encountering <code>manu</code></p> <p>Then match until the first occurrence of <code>\\n\\n</code> after it, and capture the part that you want in a capture group.</p> <pre><code>\\n\\n((?:(?!\\n\\n).)+manu.+?)\\n\\n </code></pre> <p><strong>Explanation</strong></p> <ul> <li><code>\\n\\n</code> Match literally</li> <li><code>(</code> Capture group 1 <ul> <li><code>(?:(?!\\n\\n).)+</code> Match any char asserting what is at the right is not <code>\\n\\n</code></li> <li><code>manu.+?</code> Match <code>manu</code> followed by as least chars as possible</li> </ul> </li> <li><code>)</code> Close group 1</li> <li><code>\\n\\n</code> Match literally</li> </ul> <p><a href="https://regex101.com/r/qh6lwd/1" rel="nofollow noreferrer">Regex demo</a></p> <p>If you also want the match when it is either followed by <code>\\n\\n</code> or the end of the string:</p> <pre><code>\\n\\n((?:(?!\\n\\n).)+manu.+?)(?:\n\\n|$) </code></pre> <p><a href="https://regex101.com/r/5UjqTO/1" rel="nofollow noreferrer">Regex demo</a></p>
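Since the `\n` sequences in the question's text are literal backslash-n characters, the pattern can be exercised directly in Python with raw strings (the text below is a shortened stand-in for the question's document):

```python
import re

# Shortened stand-in for the question's text; the \n pairs are literal
# backslash-n characters, not real newlines
text = (r"\n\nHolds Certificate No: EMS 96453\nand operates a system"
        r"\n\nthe following scope: manufacturing and supply of products."
        r"\n\n/ tou\n\nFor and on behalf\n\n")

# Tempered dot keeps the match inside a single \n\n-delimited block
pattern = r'\\n\\n((?:(?!\\n\\n).)+manu.+?)\\n\\n'
match = re.search(pattern, text)
```

The capture group holds only the block that contains `manu`, not the earlier certificate text.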
python|regex
1
1,905,826
71,503,782
Python heapq providing unexpected output
<p>I have a simple python script that aims to read all rows in a csv file and perform a heap sort based on the second element of each row. Here is my function to read the file:</p> <pre><code>def readProcesses(): file = open('Project1/Processes/sample.csv', 'r') reader = csv.reader(file, delimiter=',') processes = [] for row in reader: heapq.heappush(processes,(row[1],row)) return processes </code></pre> <p>And here is how I am printing the data:</p> <pre><code>processess = readProcesses() while processess: print(heapq.heappop(processess)) </code></pre> <p>The initial output for the data is:</p> <pre><code>('10016', ['90', '10016', '8070']) ('10136', ['0', '10136', '11315']) ('10461', ['83', '10461', '79576']) ('10969', ['206', '10969', '52071']) ('2997', ['58', '2997', '12935']) ('3666', ['108', '3666', '98952']) ('3946', ['109', '3946', '22268']) ('4083', ['236', '4083', '81516']) ('4233', ['182', '4233', '28817']) ('4395', ['64', '4395', '94292']) ('4420', ['51', '4420', '52133']) </code></pre> <p>Essentially, all values over 10,000 appear at the top of the heap. The rest of the values are in the appropriate order. I've tried calling heapify after adding all of the processes but it has no effect.</p>
<p>The first element of each tuple in your heap is a string, <em>not</em> an integer. Thus, lexicographical comparison is used, rather than a comparison of the underlying values.</p> <p>If you want to compare based on the numerical values that the strings represent, change</p> <pre class="lang-py prettyprint-override"><code>heapq.heappush(processes,(row[1],row)) </code></pre> <p>to</p> <pre class="lang-py prettyprint-override"><code>heapq.heappush(processes,(int(row[1]),row)) </code></pre>
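The difference is easy to see with a few of the question's values: as strings, `"10016"` sorts before `"2997"` (character by character), while converting to int restores numeric order:

```python
import heapq

string_heap, int_heap = [], []
for value in ("10016", "2997", "4420"):
    heapq.heappush(string_heap, value)
    heapq.heappush(int_heap, int(value))

first_string = heapq.heappop(string_heap)  # lexicographic ordering
first_int = heapq.heappop(int_heap)        # numeric ordering
```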
python|sorting|heapsort|heapq
0
1,905,827
10,711,847
Python interpreter as shared library
<p>I want to embed Python into my application. How can I compile Python from source as a shared library (.dll or .so)? The default configuration only produces the python executable.</p>
<blockquote> <p>the default configuration only produces python executable.</p> </blockquote> <p>On Windows the standard build already produces the interpreter DLL; the "executable" is a very thin wrapper around the shared library that sets up the interpreter object and runs it. On Unix, configure the build with <code>./configure --enable-shared</code> so that <code>libpythonX.Y.so</code> is built alongside the interpreter.</p>
python|embed|shared-libraries
2
1,905,828
62,538,682
Resizing a NumPy array that's a DICOM image - Resize or Rescale?
<p>I'm trying to make my image dimensions smaller to process in a model.</p> <p>I'm using <code>skimage.transform.resize()</code> and <code>skimage.transform.rescale()</code> and can't decide between the two.</p> <p>I definitely need to maintain the data in the image. I'm afraid that I'm either deleting or adding data.</p> <p>This is my code:</p> <p><code>resized_img = resize(image, (200, 200), anti_aliasing=True)</code></p> <p><code>resized_img = rescale(image, 0.39, anti_aliasing=True)</code></p> <p>Please let me know what I can do.</p>
<p>Looking at the <a href="https://scikit-image.org/docs/stable/auto_examples/transform/plot_rescale.html" rel="nofollow noreferrer">Rescale, resize and downscale documentation</a>, rescale and resize do almost the same thing.</p> <p>The only question is do you want to size of your new image to be a factor of the original size? Or do you want the new image to be of a fixed size? It just depends on your particular application.</p>
python|image-processing|scikit-image
0
1,905,829
62,522,558
How to fix pycharm not recognising templates folder
<p>I have an html template named &quot;homepage.html&quot; and my project is set up like this:</p> <p><a href="https://i.stack.imgur.com/dlxu3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dlxu3.png" alt="" /></a></p> <p>When running it, however, I get this error:</p> <pre><code>django.template.loaders.filesystem.Loader: D:\lirdi\python\django_blog_project\djangonautic\home\homepage.html (Source does not exist) django.template.loaders.app_directories.Loader: D:\lirdi\python\django_blog_project\interpreter\lib\site-packages\django\contrib\admin\templates\home\homepage.html (Source does not exist) django.template.loaders.app_directories.Loader: D:\lirdi\python\django_blog_project\interpreter\lib\site-packages\django\contrib\auth\templates\home\homepage.html (Source does not exist) </code></pre> <p>Here is my template code:</p> <pre><code>&lt;html&gt; &lt;head&gt; &lt;title&gt;Homepage&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;This is the homepage&lt;/h1&gt; &lt;p&gt;Welcome to Djangonautic&lt;/p&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>And for my view:</p> <pre><code>def homepage(reqeust): return render(reqeust,'home/homepage.html') </code></pre> <p>And in my settings file:</p> <pre><code>'DIRS': ['templates'], </code></pre>
<p>The template <code>home/homepage.html</code> does not exist in the location you specified (there is no folder I can see named '<code>home</code>'). Ideally, the templates folder should be in a higher-level directory, directly under the first <code>djangonautic</code> folder.</p> <p>Try instead moving the templates folder to <code>django_blog_project/djangonautic/templates</code> and simply using:</p> <pre><code>def homepage(request): return render(request,'homepage.html') </code></pre> <p>Make sure the main template directory is specified in your <code>settings.py</code> file properly such as:</p> <pre><code>import os BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [os.path.join(BASE_DIR, 'templates')], 'APP_DIRS': True, 'OPTIONS': { # ... some options here ... }, }, ] </code></pre>
python|django|web|django-templates
0
1,905,830
62,658,214
How to merge/shift rows/columns up or down basis 2048 game
<p>I need some help creating 2 functions <code>push_up</code> and <code>push_down</code> (think 2048 game), I've managed to create functions that would push the values left and right and then sum them together if they are of equal value. For example, <code>[2,2,0,0]</code> would return as <code>[4,0,0,0]</code>. I have a separate function that adds random numbers.</p> <p>I need to do the same with up and down, but I'm not sure how to go about it. Here is my starting array:</p> <pre class="lang-py prettyprint-override"><code>grid = [ [2, 2, 0, 2], [0, 2, 0, 0], [0, 2, 0, 4], [4, 0, 4, 0] ] </code></pre> <p>And here are my functions:</p> <pre class="lang-py prettyprint-override"><code>def push_right(grid): for row in grid: for i, number in reversed(list(enumerate(row))): if number == row[i-1]: row[i] = number + row[i-1] row[i-1] = 0 row.sort(key=lambda v: v != 0) return grid def push_left(grid): for row in grid: for i, number in reversed(list(enumerate(row))): if number == row[i-1]: row[i] = number + row[i-1] row[i-1] = 0 row.sort(key=lambda v: v != 0, reverse=True) return grid print(push_right(grid)) print(push_left(grid)) </code></pre> <p>Which outputs:</p> <pre class="lang-py prettyprint-override"><code>[ [0, 0, 4, 4], [0, 0, 0, 2], [0, 0, 2, 4], [0, 0, 4, 4] ] </code></pre> <p>And:</p> <pre class="lang-py prettyprint-override"><code>[ [8, 0, 0, 0], [2, 0, 0, 0], [2, 4, 0, 0], [8, 0, 0, 0]] </code></pre> <p>I have a function which prints this out on a grid, but I need some help to code functions which will shift and merge up and down. 
For example, for the array:</p> <pre class="lang-py prettyprint-override"><code>[ [0, 0, 4, 4], [0, 0, 0, 2], [0, 0, 2, 4], [0, 0, 4, 4] ] </code></pre> <p>Down would be:</p> <pre class="lang-py prettyprint-override"><code>[ [0, 0, 0, 0], [0, 0, 4, 4], [0, 0, 2, 2], [0, 0, 4, 8] ] </code></pre> <p>And up would be:</p> <pre class="lang-py prettyprint-override"><code>[ [0, 0, 4, 4], [0, 0, 2, 2], [0, 0, 4, 8], [0, 0, 0, 0] ] </code></pre> <p>Any help would be much appreciated. Thank you in advance.</p>
<p>Your code looks fine syntax wise. There are some logic errors you should first fix. Push left and right is not working yet. Take this example</p> <pre><code>[ [0, 4, 0, 4], [0, 0, 0, 2], [0, 0, 2, 4], [4, 2, 2, 4] ] </code></pre> <p>For right this returns:</p> <pre><code>[ [0, 0, 4, 4], [0, 0, 0, 2], [0, 0, 2, 4], [0, 0, 8, 4] ] </code></pre> <p>For left the same is returned but reversed. Can you see the mistake on the first and last line? You should rethink the logic..</p> <p>When that is done, you can create columns with a loop. (First take all first numbers of each column, then all second,...)</p> <p>If you have a list of columns, you can actually use the same logic of pushing left or right.</p> <p>I will provide more code when needed, I just wanted to give you some advice first :)</p>
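Once the row logic is fixed, the usual trick for up/down is to transpose the grid with `zip(*grid)`, reuse the horizontal slide, and transpose back. A minimal sketch (the `slide_left` here is a simplified merge, not the question's exact function):

```python
def slide_left(row):
    # Drop zeros, merge equal neighbours once (left to right), pad with zeros
    vals = [v for v in row if v != 0]
    merged = []
    i = 0
    while i < len(vals):
        if i + 1 < len(vals) and vals[i] == vals[i + 1]:
            merged.append(vals[i] * 2)
            i += 2
        else:
            merged.append(vals[i])
            i += 1
    return merged + [0] * (len(row) - len(merged))

def push_up(grid):
    # Columns become rows, slide them "left" (towards the top), transpose back
    columns = [slide_left(list(col)) for col in zip(*grid)]
    return [list(row) for row in zip(*columns)]

def push_down(grid):
    # Same idea, but reverse each column so merging favours the bottom
    columns = [slide_left(list(col)[::-1])[::-1] for col in zip(*grid)]
    return [list(row) for row in zip(*columns)]

grid = [
    [0, 0, 4, 4],
    [0, 0, 0, 2],
    [0, 0, 2, 4],
    [0, 0, 4, 4],
]
up = push_up(grid)
down = push_down(grid)
```

With the question's sample grid this reproduces the expected "up" and "down" results.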
python|arrays|merge|2048
0
1,905,831
60,533,691
How to get unbounded classmethod
<p>I'm trying to "wrap" an existing classmethod, i.e.,</p> <pre class="lang-py prettyprint-override"><code>class Foo: @classmethod def bar(cls, x): return x + 2 old_bar = Foo.bar def wrapped_bar(cls, x): result = old_bar(cls, x) # Results in an error return result Foo.bar = wrapped_bar </code></pre> <p>It seems that <code>Foo.bar</code> is already bound with <code>cls = Foo</code>, how do I get the unbound version of the function <code>bar</code>?</p> <p>[I'm not allowed to modify <code>Foo</code>, it exists in another codebase that I'm patching]</p>
<p>Suppose, you have:</p> <pre><code>&gt;&gt;&gt; class Foo: ... @classmethod ... def bar(cls, x): ... return x*42 ... &gt;&gt;&gt; Foo.bar(2) 84 </code></pre> <p>Then one way is to access the name-space of your class directly. Then you should be able to access the <code>classmethod</code> object and obtain the decorated function available at the <code>__func__</code> attribute:</p> <pre><code>&gt;&gt;&gt; vars(Foo)['bar'] &lt;classmethod object at 0x103eec520&gt; &gt;&gt;&gt; vars(Foo)['bar'].__func__ &lt;function Foo.bar at 0x1043e49d0&gt; </code></pre> <p>Alternatively, it is accessible on the bound-method object itself:</p> <pre><code>&gt;&gt;&gt; bound = Foo.bar &gt;&gt;&gt; bound &lt;bound method Foo.bar of &lt;class '__main__.Foo'&gt;&gt; &gt;&gt;&gt; bound.__func__ &lt;function Foo.bar at 0x1043e49d0&gt; </code></pre>
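Putting this together with the question's patching goal, here is a sketch in which the wrapper's extra behaviour (multiplying by 10) is invented purely for illustration:

```python
class Foo:
    @classmethod
    def bar(cls, x):
        return x + 2

# Unwrap the original function object, then install a wrapped classmethod
old_bar = vars(Foo)['bar'].__func__

def wrapped_bar(cls, x):
    result = old_bar(cls, x)   # now callable with an explicit cls
    return result * 10         # hypothetical extra behaviour

Foo.bar = classmethod(wrapped_bar)
```

Re-wrapping `wrapped_bar` in `classmethod(...)` keeps `Foo.bar` callable the same way as before.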
python|python-3.x
3
1,905,832
56,863,116
How can I groupby an ID and tag the first row with a non-null value?
<p>Within an <code>ID</code>, I need to remove the first row with a <code>value &gt; 0</code> and all rows before it in a dataframe with an ordered date column. I think the easiest way to do that would be creating a new <code>flag</code> column to mark those rows for removal.</p> <p>I figured out the below to tag the first date row within each <code>ID</code> (after sorting), but I'm having trouble figuring out how to continue my flag up to and including the first row where the <code>value &gt; 0</code>:</p> <pre><code>df['flag'] = np.where((df.date == df.groupby('id')['date'].transform('first')),1,0) </code></pre> <p>Which gets me:</p> <pre><code>id date value flag 114 2016-01-01 0 1 114 2016-02-01 0 0 114 2016-03-01 200 0 114 2016-04-01 300 0 114 2016-05-01 100 0 220 2016-01-01 0 1 220 2016-02-01 0 0 220 2016-03-01 0 0 220 2016-04-01 0 0 220 2016-05-01 400 0 220 2016-06-01 200 0 </code></pre> <p>but the end result should be:</p> <pre><code>id date value flag 114 2016-01-01 0 1 114 2016-02-01 0 1 114 2016-03-01 200 1 114 2016-04-01 300 0 114 2016-05-01 100 0 220 2016-01-01 0 1 220 2016-02-01 0 1 220 2016-03-01 0 1 220 2016-04-01 0 1 220 2016-05-01 400 1 220 2016-06-01 200 0 </code></pre>
<ol> <li>First sort by id and date in ascending order<br></li> <li>Then set flag to 1 on the first non-zero value within each id<br></li> <li>Replace 0 with NaN in flag<br></li> <li>Backfill within each group using groupby and transform<br></li> <li>Finally replace the remaining NaN with 0</li> </ol> <pre><code>df = pd.DataFrame(data={"id": [114, 114, 114, 114, 114, 220, 220, 220, 220, 220, 220], "date": ['2016-01-01', '2016-02-01', '2016-03-01', '2016-04-01', '2016-05-01', '2016-01-01', '2016-02-01', '2016-03-01', '2016-04-01', '2016-05-01', '2016-06-01'], 'value': [0, 0, 200, 300, 100, 0, 0, 0, 0, 400, 200]}) df.sort_values(by=['id', 'date'], ascending=[True, True], inplace=True) df['flag'] = 0 df.loc[df['value'].ne(0).groupby(df['id']).idxmax(),'flag']=1 df['flag'].replace({0:np.nan},inplace=True) df['flag'] = df.groupby(['id'],as_index=False)['flag'].transform(pd.Series.bfill) df['flag'].fillna(0,inplace=True) print(df) </code></pre> <pre><code> id date value flag 0 114 2016-01-01 0 1.0 1 114 2016-02-01 0 1.0 2 114 2016-03-01 200 1.0 3 114 2016-04-01 300 0.0 4 114 2016-05-01 100 0.0 5 220 2016-01-01 0 1.0 6 220 2016-02-01 0 1.0 7 220 2016-03-01 0 1.0 8 220 2016-04-01 0 1.0 9 220 2016-05-01 400 1.0 10 220 2016-06-01 200 0.0 </code></pre> <p>I hope this solves your problem</p>
python|pandas
2
1,905,833
56,704,858
How to select columns based on value they contain pandas
<p>I am working in pandas with a certain dataset that describes the population of a certain country per year. The dataset is construed in a weird way wherein the years aren't the columns themselves but rather the years are a value within the first row of the set. The dataset describes every year from 1960 up til now but I only need 1970, 1980, 1990 etc. For this purpose I've created a list with all those years and tried to make a new dataset which is equivalent to the old one but only has the columns that contain a value from said list so I don't have all this extra info I'm not using. Online I can only find instructions for removing rows or selecting by column name, since both these criteria don't apply in this situation I thought i should ask here. The dataset is a csv file which I've downloaded off some world population site. <a href="https://i.stack.imgur.com/zc1Oc.png" rel="nofollow noreferrer">here a link to a screenshot of the data</a></p> <p>As you can see the years are given in scientific notation for some years, which is also how I've added them to my list.</p> <pre><code>pop = pd.read_csv('./maps/API_SP.POP.TOTL_DS2_en_csv_v2_10576638.csv', header=None, engine='python', skiprows=4) display(pop) years = ['1.970000e+03','1.980000e+03','1.990000e+03','2.000000e+03','2.010000e+03','2.015000e+03', 'Country Name'] pop[pop.columns[pop.isin(years).any()]] </code></pre> <p>This is one of the things I've tried so far which I thought made the most sense, but I am still very new to pandas so any help would be greatly appreciated.</p>
<p>Using the data at <a href="https://data.worldbank.org/indicator/sp.pop.totl" rel="nofollow noreferrer">https://data.worldbank.org/indicator/sp.pop.totl</a>, copied into pastebin (first time using the service, so apologies if it doesn't work for some reason):</p> <pre><code># actual code using CSV file saved to desktop #df = pd.read_csv(&lt;path to CSV&gt;, skiprows=4) # pastebin for reproducibility df = pd.read_csv(r'https://pastebin.com/raw/LmdGySCf',sep='\t') # manually select years and other columns of interest colsX = ['Country Name', 'Country Code', 'Indicator Name', 'Indicator Code', '1990', '1995', '2000'] dfX = df[colsX] # select every fifth year colsY = df.filter(regex='19|20', axis=1).columns[[int(col) % 5 == 0 for col in df.filter(regex='19|20', axis=1).columns]] dfY = df[colsY] </code></pre> <p>As a general comment: </p> <blockquote> <p>The dataset is construed in a weird way wherein the years aren't the columns themselves but rather the years are a value within the first row of the set.</p> </blockquote> <p>This is not correct. Viewing the CSV file, it is quite clear that row 5 (<em>Country Name, Country Code, Indicator Name, Indicator Code, 1960, 1961, ...</em>) <strong>are indeed column names</strong>. You have read the data into pandas in such a way that those values are not column years, but your first step, before trying to subset your data, should be to ensure you have read in the data properly -- which, in this case, would give you column headers named for each year.</p>
pandas|dataframe
1
1,905,834
17,797,829
How to edit existing rrule?
<p>I want to edit an existing rrule that was parsed from a string to set an UNTIL date. How can I do that? Theoretically, I could modify the rule string and re-parse it, but then it gets complicated. I want to make it simple: Whatever the rule says about how many occurrences or until what date it goes to, I want to override it with a new UNTIL date.</p> <p>Thanks.</p>
<p>I am not aware of public interface for this, but if you really need to then setting <code>_until</code> property directly seems to work. I should warn you that it is a bad practice to use it and this code could be broken by future releases of <code>dateutil</code>.</p> <pre><code>&gt;&gt;&gt; r = rrule(DAILY,dtstart=datetime(2013,7,15,0,0,0), until=datetime.now()) &gt;&gt;&gt; list(r) [datetime.datetime(2013, 7, 15, 0, 0), datetime.datetime(2013, 7, 16, 0, 0), datetime.datetime(2013, 7, 17, 0, 0), datetime.datetime(2013, 7, 18, 0, 0), datetime.datetime(2013, 7, 19, 0, 0), datetime.datetime(2013, 7, 20, 0, 0), datetime.datetime(2013, 7, 21, 0, 0), datetime.datetime(2013, 7, 22, 0, 0), datetime.datetime(2013, 7, 23, 0, 0)] &gt;&gt;&gt; r._until = datetime(2013, 7, 20, 0, 0) &gt;&gt;&gt; list(r) [datetime.datetime(2013, 7, 15, 0, 0), datetime.datetime(2013, 7, 16, 0, 0), datetime.datetime(2013, 7, 17, 0, 0), datetime.datetime(2013, 7, 18, 0, 0), datetime.datetime(2013, 7, 19, 0, 0), datetime.datetime(2013, 7, 20, 0, 0)] </code></pre>
python|python-dateutil
2
1,905,835
69,085,607
Python throws a keyword error when I try to get a json metadata from a website
<p>I want to iterate over and fetch part of some JSON metadata. The code below throws a <strong>KeyError:</strong> <em>'metadata'</em> when I run it. I don't know if the error is caused by the function printResult or only by the key &quot;metadata&quot; itself.</p> <pre><code>import urllib.request import json def printResult(data): theJSON = json.loads(data) if &quot;title&quot; in theJSON[&quot;metadata&quot;]: print(theJSON[&quot;metadata&quot;][&quot;title&quot;]) count = theJSON [&quot;metadata&quot;][&quot;count&quot;] print(str(count)+ &quot;events recorded&quot;) def main(): webUrldata = ( &quot;https://www.sciencebase.gov/catalog/item/5d88ea50e4b0c4f70d0ab3c0?format=json&quot;) webUrl = urllib.request.urlopen(webUrldata) print('Get request:'+ str(webUrl.getcode())) if (webUrl.getcode() == 200): data = webUrl.read() printResult(data) else: print(&quot;recieved error&quot;) if __name__ == &quot;__main__&quot;: main() </code></pre> <p><strong>Error</strong></p> <pre><code>Get request:200 Traceback (most recent call last): File /xample 3.py&quot;, line 27, in &lt;module&gt; main() File &quot;/xample 3.py&quot;, line 22, in main printResult(data) File &quot;/xample 3.py&quot;, line 7, in printResult if &quot;title&quot; in theJSON[&quot;metadata&quot;]: KeyError: 'metadata' </code></pre>
<p>There is no <code>metadata</code> key present in the response.</p> <pre><code>theJSON = json.loads(data).get(&quot;metadata&quot;) print(theJSON) </code></pre> <p>gives:</p> <pre><code>None </code></pre> <p>and</p> <pre><code>theJSONkeys = json.loads(data).keys() print(theJSONkeys) </code></pre> <p>gives:</p> <pre><code>link relatedItems id identifiers title summary body citation purpose provenance maintenanceUpdateFrequency hasChildren parentId contacts webLinks systemTypes tags dates spatial files distributionLinks previewImage </code></pre>
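A minimal, self-contained sketch of the safer lookup, using made-up data (the payload here is invented for illustration; the real response's keys are listed above):

```python
import json

# Made-up payload: note there is no top-level "metadata" key, just as
# in the actual sciencebase.gov response.
data = '{"title": "Example item", "id": "5d88ea50"}'
theJSON = json.loads(data)

metadata = theJSON.get("metadata")  # .get() returns None instead of raising
print(metadata)                     # None

if "metadata" in theJSON and "title" in theJSON["metadata"]:
    print(theJSON["metadata"]["title"])
else:
    print("no metadata; top-level keys:", sorted(theJSON))
```

Guarding with `.get()` or an `in` test avoids the `KeyError` entirely and lets the script report what keys actually exist.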
python|json|urllib
0
1,905,836
62,144,988
What is the efficient way to read big tables in pandas Python?
<p>I have a table in MySQL of shape 40 million rows x 54 columns. I tried reading the table in chunks using read_sql, but it runs out of memory (I am working on a 32 GB, 8-core EC2 instance). Then I tried the LIMIT/OFFSET method, but it is really slow.</p> <p>Is there any efficient way to read the table faster without running out of memory?</p> <p>I looked into some Big Data techniques, but since I am not familiar with Big Data I am not able to decide which one to go for.</p> <p>Currently I am using this to read tables, but it is really slow and certainly not very efficient:</p> <pre><code>def read_sql_chunked(query, con, nrows, chunksize=10000): start = 1 dfs = [] while start &lt; nrows: df = pd.read_sql("%s LIMIT %s OFFSET %s" % (query, chunksize, start), con) dfs.append(df) print(start, chunksize) start += chunksize return pd.concat(dfs, ignore_index=True) dt = read_sql_chunked(query=query, con=conn, nrows=40000000) </code></pre>
<p>How many rows does your table have? It sounds like your table is just too big to fit in the amount of memory that you have. If you want to summarize the data, you are probably better off doing that in SQL first, so that only the (much smaller) result set has to travel to pandas.</p>
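A stdlib-only sketch of the chunked-read idea the question is after, using sqlite3 purely for illustration (table and column names are made up; with MySQL you would apply the same `fetchmany` pattern to a server-side/unbuffered cursor so the driver does not buffer all rows client-side):

```python
import sqlite3

# Stand-in for the real MySQL table: a tiny in-memory sqlite table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val REAL)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, i * 0.5) for i in range(25)])

cur = conn.execute("SELECT id, val FROM t")
total_rows = 0
while True:
    chunk = cur.fetchmany(10)   # at most 10 rows in memory per step
    if not chunk:
        break
    total_rows += len(chunk)    # aggregate/process each chunk here
print("rows processed:", total_rows)  # rows processed: 25
```

Streaming with `fetchmany` avoids both the repeated-query cost of LIMIT/OFFSET and the memory cost of materializing the whole table at once; only the running aggregates need to stay in memory.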
python|mysql|pandas
0
1,905,837
35,771,508
Run a process in python, writing the process output to stdout?
<p>I want to run a process from a python script, blocking while it runs, and writing the output of the process to stdout.</p> <p>How would I do this?</p> <p>I looked at the documentation for 'subprocess' but couldn't work it out.</p> <p><em>Editing this question to explain how it's different, as requested:</em> See existing text above: <strong>and writing the output of the process to stdout</strong></p>
<p>You can use the <code>wait()</code> method for that. By default the child process inherits your process's stdout, so its output is written straight to your stdout while you block.</p> <p>For example:</p> <pre><code># main_script.py import subprocess # some code p1 = subprocess.Popen('/path/to/process.sh', shell=True) # pass a string when using shell=True p1.wait() # rest of code runs when the process is done </code></pre>
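An alternative sketch with <code>subprocess.run</code> (Python 3.5+), which blocks until the child exits while the child's output goes straight to the inherited stdout (the child command here is a made-up example):

```python
import subprocess
import sys

# subprocess.run blocks until the child finishes; by default the child
# inherits our stdout/stderr, so its output appears directly.
result = subprocess.run(
    [sys.executable, "-c", "print('hello from child')"],
    check=True,  # raise CalledProcessError on a non-zero exit code
)
print("exit code:", result.returncode)  # exit code: 0
```

With `check=True` a failing child raises immediately, which is usually preferable to silently continuing after `wait()`.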
python
0
1,905,838
59,576,518
Replacing chunks of elements in numpy array
<p>I have an <code>np.array</code> like this one: <code>x = [1,2,3,4,5,6,7,8,9,10 ... N]</code>. I need to replace every element of each of the first few chunks with that chunk's first element, like so:</p> <pre><code>for i in np.arange(0,125): x[i] = x[0] for i in np.arange(125,250): x[i] = x[125] for i in np.arange(250,375): x[i] = x[250] </code></pre> <p>This is obviously not the way to go, but I wrote it like this so I can show you what I need to achieve.</p>
<p>One way would be -</p> <pre><code>In [47]: x Out[47]: array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]) In [49]: n = 5 In [50]: x[::n][np.arange(len(x))//n] Out[50]: array([10, 10, 10, 10, 10, 15, 15, 15, 15, 15, 20, 20]) </code></pre> <p>Another with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html" rel="nofollow noreferrer"><code>np.repeat</code></a> -</p> <pre><code>In [67]: np.repeat(x[::n], n)[:len(x)] Out[67]: array([10, 10, 10, 10, 10, 15, 15, 15, 15, 15, 20, 20]) </code></pre> <p>For in-situ edit, we can reshape and assign in a broadcasted-manner, like so -</p> <pre><code>m = (len(x)-1)//n x[:n*m].reshape(-1,n)[:] = x[:n*m:n,None] x[n*m:] = x[n*m] </code></pre>
python|numpy|list-comprehension
4
1,905,839
49,008,718
Can mypy handle list comprehensions?
<pre><code>from typing import Tuple def test_1(inp1: Tuple[int, int, int]) -&gt; None: pass def test_2(inp2: Tuple[int, int, int]) -&gt; None: test_tuple = tuple(e for e in inp2) reveal_type(test_tuple) test_1(test_tuple) </code></pre> <p>While running <code>mypy</code> on the above code, I get:</p> <pre><code>error: Argument 1 to "test_1" has incompatible type "Tuple[int, ...]"; expected "Tuple[int, int, int]" </code></pre> <p>Is <code>test_tuple</code> not guaranteed to have 3 <code>int</code> elements? Does <code>mypy</code> not handle such list comprehensions or is there another way of defining the type here?</p>
<p>As of version 0.600, <code>mypy</code> does not infer types in such cases. It would be hard to implement, as suggested on <a href="https://github.com/python/mypy/issues/5068#issuecomment-389882867" rel="noreferrer">GitHub</a>.</p> <p>Instead, we can use <code>cast</code> (see <a href="http://mypy.readthedocs.io/en/latest/casts.html" rel="noreferrer">mypy docs</a>):</p> <pre><code>from typing import cast, Tuple def test_1(inp1: Tuple[int, int, int]) -&gt; None: pass def test_2(inp2: Tuple[int, int, int]) -&gt; None: test_tuple = cast(Tuple[int, int, int], tuple(e for e in inp2)) reveal_type(test_tuple) test_1(test_tuple) </code></pre>
python|python-3.x|type-hinting|mypy|python-typing
6
1,905,840
49,209,155
pandas - drop row with list of values, if contains from list
<p>I have a huge set of data. Something like 100k lines and I am trying to drop a row from a dataframe if the row, which contains a list, contains a value from another dataframe. Here's a small time example.</p> <pre><code>has = [['@a'], ['@b'], ['#c, #d, #e, #f'], ['@g']] use = [1,2,3,5] z = ['#d','@a'] df = pd.DataFrame({'user': use, 'tweet': has}) df2 = pd.DataFrame({'z': z}) tweet user 0 [@a] 1 1 [@b] 2 2 [#c, #d, #e, #f] 3 3 [@g] 5 z 0 #d 1 @a </code></pre> <p>The desired outcome would be</p> <pre><code> tweet user 0 [@b] 2 1 [@g] 5 </code></pre> <p>Things i've tried</p> <pre><code>#this seems to work for dropping @a but not #d for a in range(df.tweet.size): for search in df2.z: if search in df.loc[a].tweet: df.drop(a) #this works for my small scale example but throws an error on my big data df['tweet'] = df.tweet.apply(', '.join) test = df[~df.tweet.str.contains('|'.join(df2['z'].astype(str)))] #the error being "unterminated character set at position 1343770" #i went to check what was on that line and it returned this basket.iloc[1343770] user_id 17060480 tweet [#IfTheyWereBlackOrBrownPeople, #WTF] Name: 4612505, dtype: object </code></pre> <p>Any help would be greatly appreciated.</p>
<p>Is <code>['#c, #d, #e, #f']</code> one string, or a list like this: <code>['#c', '#d', '#e', '#f']</code>?</p> <pre><code>has = [['@a'], ['@b'], ['#c', '#d', '#e', '#f'], ['@g']] use = [1,2,3,5] z = ['#d','@a'] df = pd.DataFrame({'user': use, 'tweet': has}) df2 = pd.DataFrame({'z': z}) </code></pre> <p>A simple solution would be:</p> <pre><code>screen = set(df2.z.tolist()) to_delete = list() # this will speed things up by doing only 1 delete for id, row in df.iterrows(): if set(row.tweet).intersection(screen): to_delete.append(id) df.drop(to_delete, inplace=True) </code></pre> <p>Speed comparison (for 10,000 rows):</p> <pre><code>st = time.time() screen = set(df2.z.tolist()) to_delete = list() for id, row in df.iterrows(): if set(row.tweet).intersection(screen): to_delete.append(id) df.drop(to_delete, inplace=True) print(time.time()-st) 2.142000198364258 st = time.time() for a in df.tweet.index: for search in df2.z: if search in df.loc[a].tweet: df.drop(a, inplace=True) break print(time.time()-st) 43.99799990653992 </code></pre>
python|pandas
2
1,905,841
49,182,881
OpenCV how to detect a specific color in a frame (inRange function)
<p>I am able to use the code below to find anything blue within a frame:</p> <p><a href="https://stackoverflow.com/questions/38877102/how-to-detect-red-color-in-opencv-python">How To Detect Red Color In OpenCV Python?</a></p> <p>However I want to modify the code to look for a very specific color within a video that I have. I read in a frame from the video file, and convert the frame to HSV, and then print the HSV values at a particular element (that contains the color I want to detect):</p> <pre><code>print("hsv[x][y]: {}".format(hsv[x][y])) </code></pre> <p>"hsv" is what I get after I convert the frame from BGR to HSV using cvtColor(). The print command above gives me: hsv[x][y]: [108 27 207]</p> <p>I then define my lower and upper HSV values, and pass that to the inRange() function:</p> <pre><code>lowerHSV = np.array([107,26,206]) upperHSV = np.array([109,28,208]) maskHSV = cv2.inRange(hsv, lowerHSV, upperHSV) </code></pre> <p>I display maskHSV, but it doesn't seem to identify the item that contains that color. I try to expand the lowerHSV and upperHSV bounds, but that doesn't seem to work either.
I try something similar using BGR but that doesn't appear to work.</p> <p>The thing I'm trying to identify can best be described as a lemon-lime sports drink bottle...</p> <p>Any suggestions would be appreciated.</p> <p>=====================================================</p> <h1>The complete python code I am running is shown below, along with some relevant images...</h1> <pre><code>import cv2 import numpy as np import time video_capture = cv2.VideoCapture("conveyorBeltShort.wmv") xloc = 460 yloc = 60 dCounter = 0 while(1): dCounter += 1 grabbed, frame = video_capture.read() if grabbed == False: break time.sleep(2) hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV) lowerBGR = np.array([200,190,180]) upperBGR = np.array([210,200,190]) # HSV [108 27 209] observation lowerHSV = np.array([105,24,206]) upperHSV = np.array([111,30,212]) maskBGR = cv2.inRange(frame, lowerBGR, upperBGR) maskHSV = cv2.inRange(hsv, lowerHSV, upperHSV) cv2.putText(hsv, "HSV:" + str(hsv[xloc][yloc]),(100,280), cv2.FONT_HERSHEY_SIMPLEX, 3,(0,0,0),8,cv2.LINE_AA) cv2.putText(frame, "BGR:" + str(frame[xloc][yloc]),(100,480), cv2.FONT_HERSHEY_SIMPLEX, 3,(0,0,0),8,cv2.LINE_AA) cv2.rectangle(frame, (0, 0), (xloc-1, yloc-1), (255, 0, 0), 2) cv2.rectangle(hsv, (0, 0), (xloc-1, yloc-1), (255, 0, 0), 2) cv2.imwrite("maskHSV-%d.jpg" % dCounter, maskHSV) cv2.imwrite("maskBGR-%d.jpg" % dCounter, maskBGR) cv2.imwrite("hsv-%d.jpg" % dCounter, hsv) cv2.imwrite("frame-%d.jpg" % dCounter, frame) cv2.imshow('frame',frame) cv2.imshow('hsv',hsv) cv2.imshow('maskHSV',maskHSV) cv2.imshow('maskBGR',maskBGR) if cv2.waitKey(1) &amp; 0xFF == ord('q'): break cv2.destroyAllWindows() video_capture.release() </code></pre> <p>=====================================================</p> <p>first image is "frame-6.jpg"</p> <p>second image is "hsv-6.jpg"</p> <p>third image is "maskHSV-6.jpg"</p> <p>fourth image is "maskBGR-6.jpg"</p> <h1>maskHSV-6.jpg and maskBGR-6.jpg do not appear to show the lemon-lime bottle on the
conveyor belt. I believe I have the lower and upper HSV/BGR limits set correctly...</h1> <p><a href="https://i.stack.imgur.com/ftJqH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ftJqH.jpg" alt="frame-6.jpg"></a></p> <p><a href="https://i.stack.imgur.com/CBDbu.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CBDbu.jpg" alt="hsv-6.jpg"></a></p> <p><a href="https://i.stack.imgur.com/lXaGr.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lXaGr.jpg" alt="maskHSV-6.jpg"></a></p> <p><a href="https://i.stack.imgur.com/REFuo.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/REFuo.jpg" alt="maskBGR-6.jpg"></a></p>
<p>I know only the C++ side of OpenCV, but according to <a href="https://stackoverflow.com/questions/28717271/opencv-get-the-l-a-b-values-of-a-pixel">this post</a> you should use <code>img[y][x]</code> or <code>img[y, x]</code> to access a pixel value; the row index (y) comes first.</p> <p>Your RGB values are not correct. They should be something like <code>[96,160,165]</code>.</p>
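The index-order point can be sketched without OpenCV at all: a NumPy image behaves like a nested row-major list, so the first index selects the row (y) and the second the column (x). A pure-Python stand-in:

```python
# Pure-Python stand-in for an image array: row-major, so the first
# index is the row (y) and the second is the column (x), i.e. img[y][x].
height, width = 3, 5
img = [[(y, x) for x in range(width)] for y in range(height)]

y, x = 2, 4
print(img[y][x])    # (2, 4)
print(len(img))     # 3, the number of rows (height)
print(len(img[0]))  # 5, the number of columns (width)
```

In the question's code, `hsv[xloc][yloc]` with `xloc = 460` indexes row 460, which is near the bottom of (or outside) a typical 480-pixel-tall frame, so the sampled color is not the pixel the author intended.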
python|opencv|machine-learning|computer-vision|opencv3.1
0
1,905,842
49,140,540
Python nested loop miscounting instances of integers in list
<p>I'm banging my head against the wall trying to figure out why this nested loop is miscounting the number of times an integer occurs in a list. I've set up a function to take two lines of input, <code>n</code> and <code>ar</code>, where <code>n</code> is the number of integers in <code>ar</code> and <code>ar</code> is the list of integers. My code is below:</p> <pre><code>import sys n = sys.stdin.readline() n = int(n) ar = sys.stdin.readline() ar = ar.split(' ') ar = [int(i) for i in ar] def find_mode(n,ar): # create an empty dict and initially set count of all integers to 1 d = {} for i in range(n): d[ar[i]] = 1 for i in range(n): # hold integer i constant and check subsequent integers against it # increase count if match x = ar[i] for k in range(i+1,n): if ar[k] == x: d[ar[k]] += 1 print(d) </code></pre> <p>The counter seems to be increasing the count by 1 every time, which leads me to believe it's a problem with the nested loop.</p> <pre><code>&gt;&gt;&gt; 9 &gt;&gt;&gt; 1 2 3 4 4 9 9 0 0 {0: 2, 1: 1, 2: 1, 3: 1, 4: 2, 9: 2} </code></pre> <p>OK</p> <pre><code>&gt;&gt;&gt; 10 &gt;&gt;&gt; 1 2 3 4 4 9 9 0 0 0 {0: 4, 1: 1, 2: 1, 3: 1, 4: 2, 9: 2} </code></pre> <p>Count of 0 increased by +2</p> <pre><code>&gt;&gt;&gt; 11 &gt;&gt;&gt; 1 2 3 4 4 9 9 0 0 0 0 {0: 7, 1: 1, 2: 1, 3: 1, 4: 2, 9: 2} </code></pre> <p>Count of 0 increased by +3</p> <p>I understand there might be more efficient or "pythonic" ways to count the amount of times a number occurs in a list but this was the solution I came up with and as someone still learning Python, it would help to understand why this exact solution is failing. Many thanks in advance.</p>
<p>This is because for each distinct number in the list (call it <code>x</code>) you count the number of subsequent appearances. This is fine if a number only occurs twice but if it occurs multiple times you will over-count for each additional appearance.</p> <p>For example: <code>[0, 0, 0, 0]</code>. You iterate over the list and then for each item you iterate over the list that follows that item. So for the first <code>0</code> you count a total of <code>3</code> subsequent <code>0</code>s. For the second however you will count a total of <code>2</code> and for the third a total of <code>1</code> which makes <code>6</code>. This is why you have <code>3</code> too much in the end.</p> <p>You can achieve this task by using <a href="https://docs.python.org/3/library/collections.html#collections.Counter" rel="nofollow noreferrer"><code>collections.Counter</code></a>:</p> <pre><code>&gt;&gt;&gt; from collections import Counter &gt;&gt;&gt; d = Counter(ar) </code></pre>
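For reference, running `Counter` on the question's third input (assuming the same space-separated parsing the question uses) gives the counts the nested loop should have produced:

```python
from collections import Counter

# The question's third example: eleven integers, four of them zero.
ar = [int(i) for i in "1 2 3 4 4 9 9 0 0 0 0".split(" ")]
d = Counter(ar)
print(d[0], d[4], d[9])  # 4 2 2
```

This is a single pass over the list, so there is no over-counting of repeated subsequent appearances.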
python|python-3.x|nested-loops
2
1,905,843
25,296,245
How to access C++ baseclass functions using a derived Python swigged object?
<p>I have a C++ class, say B, that is publicly derived from another class, A.</p> <p>The B class is swigged and made accessible in Python. This works great, and all public functions defined in class B are readily available in Python without almost any work.</p> <p>However, when creating a B object in Python the public functions that are defined in class A seem not to be exposed, available automatically. </p> <p>Question is, how do one get access to functions up a C++ class hierarchy for a swigged Python object?</p>
<p>It seemed the answer was in the actual ordering of include files in the SWIG interface file.</p> <p>By placing the baseclass headers BEFORE the derived classes, everything started working as expected.</p> <p>For anyone "swigging", seeing the warning:</p> <pre><code>warning 401: 'AnObjectXXX' must be defined before it is used as a base class. </code></pre> <p>will give you a Python object without the expected baseclass functionality.</p>
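A minimal interface-file sketch of that ordering (the module and header names here are hypothetical, just to show where the base class goes):

```
%module example

%{
#include "A.h"  /* base class */
#include "B.h"  /* derived: class B : public A */
%}

/* The base class header must be %include'd BEFORE the derived one;
   otherwise SWIG emits warning 401 and the wrapped B loses A's methods. */
%include "A.h"
%include "B.h"
```

The `%{ ... %}` block only affects the generated wrapper's compilation; it is the order of the `%include` directives that determines whether SWIG knows about `A` when it wraps `B`.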
python|c++|swig
3
1,905,844
71,035,034
Debug window toolbar console missing arrows
<p>I am trying to debug a simple program in Python with PyCharm debugger but when I put a break point and run debugger the arrows such as &quot;Step Into&quot; or &quot;Step Over&quot; doesn't show so I can't move on with debugging. The last time I used PyCharm I didn't have this problem and I don't believe I did anything that can change the debugger.</p> <p><img src="https://i.stack.imgur.com/GaQQb.png" alt="screenshot of debug window" /></p>
<p>The part of the toolbar you're refering to with the <em>&quot;Step Over&quot;</em> <kbd>F8</kbd>, <em>&quot;Step Into&quot;</em> <kbd>F7</kbd> icons is called the <em>&quot;Debug Tool Window Top Toolbar&quot;</em> or the <a href="https://www.jetbrains.com/help/pycharm/debug-tool-window.html#steptoolbar" rel="nofollow noreferrer">Stepping Toolbar</a>. It can be enabled/disabled cliking on the cog on the right side of the toolbar and choosing <code>Show Toolbar</code>.</p> <p><a href="https://i.stack.imgur.com/im77t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/im77t.png" alt="screenshot of debug window" /></a></p> <p>You can also configure the icons individually by going to <code>Settings</code> <code>&gt;</code> <code>Apperance and Behavior</code> <code>&gt;</code> <code>Menus and Toolbars</code> <code>&gt;</code> <code>Debug Tool Window Top Toolbar</code> <code>&gt;</code> <code>Add Action</code> and using <code>Choose Actions to Add</code>.</p> <p><a href="https://i.stack.imgur.com/pkEwq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pkEwq.png" alt="screenshot of settings menus and toolbars" /></a></p>
python|debugging|pycharm
2
1,905,845
70,962,833
I can't get data from a website using beautiful soup(python)
<p>I have a problem. I am trying to create a web scraping script using Python that gets the titles and the links from the articles. The link I want to get all the data from is <a href="https://ec.europa.eu/commission/presscorner/home/en" rel="nofollow noreferrer">https://ec.europa.eu/commission/presscorner/home/en</a>. The problem is that when I run the code, I don't get anything. Why is that? Here is the code:</p> <pre><code>from bs4 import BeautifulSoup as bs import requests #url_1 = &quot;https://ec.europa.eu/info/news_en?pages=159399#news-block&quot; url_2 = &quot;https://ec.europa.eu/commission/presscorner/home/en&quot; links = [url_2] for i in links: site = i page = requests.get(site).text doc = bs(page, &quot;html.parser&quot;) # if site == url_1: # h3 = doc.find_all(&quot;h3&quot;, class_=&quot;listing__title&quot;) # for b in h3: # title = b.text # link = b.find_all(&quot;a&quot;)[0][&quot;href&quot;] # if(link[0:5] != &quot;https&quot;): # link = &quot;https://ec.europa.eu&quot; + link # print(title) # print(link) # print() if site == url_2: ul = doc.find_all(&quot;li&quot;, class_=&quot;ecl-list-item&quot;) for d in ul: title_2 = d.text link_2 = d.find_all(&quot;a&quot;)[0][&quot;href&quot;] if(link_2[0:5] != &quot;https&quot;): link_2 = &quot;https://ec.europa.eu&quot; + link_2 print(title_2) print(link_2) print() </code></pre> <p>(I also want to get data from another URL, the one commented out in the script, and from that link I do get all the data I want.)</p>
<p>Set a breakpoint after the line <code>page = requests...</code> and inspect the data you pulled: the webpage loads most of its contents via JavaScript, which is why you're not able to scrape the articles with requests + BeautifulSoup alone.</p> <p>You can either use Selenium (which drives a real browser that executes the JavaScript) or a rendering proxy service; the latter are typically paid services.</p>
python-3.x
0
1,905,846
60,068,543
Python script problems with Type Error 'Type' object is not iterable
<p>I have this email_check class that I use for part of a script I have and lately it has been throwing errors. I had to make a change to the code because it at one point was using google plus and its been throwing errors due to that, I removed google plus from the for statement in the code below and now I get the Type Error 'Type' object is not iterable. Here is the code:</p> <pre><code>from scraper.config import Config # from scraper.google_plus import GooglePlus from scraper.scraper import Scraper from scraper.spokeo import Spokeo class EmailChecker: def __init__(self): config = Config() # Open instance to chromedriver self.__scraper = Scraper() def check_email(self, email): config = Config() results = {} # for _ in (GooglePlus, Spokeo): for _ in (Spokeo): site = _(self.__scraper) try: result = site.search_for_email(email) except Exception: if config.debug: raise result = None try: site.logout() except Exception: if config.debug: raise pass results[_.__name__] = result try: self.__scraper.driver.close() except Exception: pass try: self.__scraper.driver.quit() except Exception: pass return results </code></pre>
<p><code>(GooglePlus, Spokeo)</code> is a tuple, which can be iterated in a <code>for</code> loop. <code>(Spokeo)</code> is an expression inside a parenthesis, which is only used to denote precedence. For a more concrete example, consider <code>(2 + 3, 1)</code> (which evaluates to <code>(5, 1)</code>) versus <code>(2 + 3)</code> (which evaluates to 5).</p> <p>To change the code the least, you could just write <code>(Spokeo,)</code> instead of <code>(Spokeo)</code> to have a one-tuple, though that is a bit weird syntax. Since you're not iterating over anything anymore, you could just drop the for loop:</p> <pre><code>results = {} _ = Spokeo # the old for was here site = _(self.__scraper) ... </code></pre> <p>But consider having a better name than <code>_</code>. Or just drop that variable and use <code>Spokeo</code> explicitly in its place: <code>site = Spokeo(self.__scraper)</code> and so on.</p>
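The distinction can be seen directly, using a hypothetical stand-in class:

```python
class Spokeo:  # stand-in for the real class
    pass

sites = (Spokeo,)        # one-element tuple: note the trailing comma
print(type((Spokeo,)))   # <class 'tuple'>
print(type((Spokeo)))    # <class 'type'>, the parentheses do nothing

for site_cls in sites:   # iterates once, yielding the class itself
    print(site_cls.__name__)  # Spokeo

try:
    for _ in (Spokeo):   # same as `for _ in Spokeo:`
        pass
except TypeError as exc:
    print(exc)           # 'type' object is not iterable
```

The trailing comma, not the parentheses, is what makes a tuple.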
python|python-3.x
1
1,905,847
2,850,823
Multiple XML Namespaces in tag with LXML
<p>I am trying to use Pythons LXML library to create a GPX file that can be read by Garmin's Mapsource Product. The header on their GPX files looks like this</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8" standalone="no" ?&gt; &lt;gpx xmlns="http://www.topografix.com/GPX/1/1" creator="MapSource 6.15.5" version="1.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.topografix.com/GPX/1/1 http://www.topografix.com/GPX/1/1/gpx.xsd"&gt; </code></pre> <p>When I use the following code:</p> <pre><code>xmlns = "http://www.topografix.com/GPX/1/1" xsi = "http://www.w3.org/2001/XMLSchema-instance" schemaLocation = "http://www.topografix.com/GPX/1/1 http://www.topografix.com/GPX/1/1/gpx.xsd" version = "1.1" ns = "{xsi}" getXML = etree.Element("{" + xmlns + "}gpx", version=version, attrib={"{xsi}schemaLocation": schemaLocation}, creator='My Product', nsmap={'xsi': xsi, None: xmlns}) print(etree.tostring(getXML, xml_declaration=True, standalone='Yes', encoding="UTF-8", pretty_print=True)) </code></pre> <p>I get:</p> <pre><code>&lt;?xml version=\'1.0\' encoding=\'UTF-8\' standalone=\'yes\'?&gt; &lt;gpx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.topografix.com/GPX/1/1" xmlns:ns0="xsi" ns0:schemaLocation="http://www.topografix.com/GPX/1/1 http://www.topografix.com/GPX/1/1/gpx.xsd" version="1.1" creator="My Product"/&gt; </code></pre> <p>Which has the annoying <code>ns0</code> tag. This might be perfectly valid XML but Mapsource does not appreciate it. </p> <p>Any idea how to get this to not have the <code>ns0</code> tag? </p>
<p>The problem is with your attribute name.</p> <pre><code>attrib={"{xsi}schemaLocation" : schemaLocation}, </code></pre> <p>puts schemaLocation in the xsi namespace. </p> <p>I think you meant</p> <pre><code>attrib={"{" + xsi + "}schemaLocation" : schemaLocation} </code></pre> <p>to use the URL for xsi. This matches your uses of namespace variables in the element name. It puts the attribute in the <code>http://www.w3.org/2001/XMLSchema-instance</code> namespace </p> <p>That gives the result of</p> <pre><code>&lt;?xml version='1.0' encoding='UTF-8' standalone='yes'?&gt; &lt;gpx xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.topografix.com/GPX/1/1" xsi:schemaLocation="http://www.topografix.com/GPX/1/1 http://www.topografix.com/GPX/1/1/gpx.xsd" version="1.1" creator="My Product"/&gt; </code></pre>
python|xml|lxml|gpx
16
1,905,848
2,904,847
What the heck kind of timestamp is this: 1267488000000
<p>And how do I convert it to a datetime.datetime instance in python?</p> <p>It's the output from the New York State Senate's API: <a href="http://open.nysenate.gov/legislation/" rel="nofollow noreferrer">http://open.nysenate.gov/legislation/</a>.</p>
<p>It looks like Unix time, but with milliseconds instead of seconds?</p> <pre><code>&gt;&gt;&gt; import time &gt;&gt;&gt; time.gmtime(1267488000000 / 1000) time.struct_time(tm_year=2010, tm_mon=3, tm_mday=2, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=1, tm_yday=61, tm_isdst=0) </code></pre> <p>March 2nd, 2010?</p> <p>And if you want a <code>datetime</code> object:</p> <pre><code>&gt;&gt;&gt; import datetime &gt;&gt;&gt; datetime.datetime.fromtimestamp(1267488000000 / 1000) datetime.datetime(2010, 3, 1, 19, 0) </code></pre> <p>Note that the <code>datetime</code> is using the local timezone while the <code>time.struct_time</code> is using UTC.</p>
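To sidestep the local-vs-UTC discrepancy entirely, you can ask `fromtimestamp` for an aware UTC datetime explicitly:

```python
import datetime

ms = 1267488000000  # milliseconds since the Unix epoch
dt_utc = datetime.datetime.fromtimestamp(ms / 1000, tz=datetime.timezone.utc)
print(dt_utc)         # 2010-03-02 00:00:00+00:00
print(dt_utc.date())  # 2010-03-02
```

This matches the `time.gmtime` result above regardless of the machine's local timezone.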
python|datetime
15
1,905,849
67,783,983
Efficiently evaluate an entire BSpline basis in Python
<p>I have a sequence of knots of a cubic spline in the NumPy array <code>knots</code>, and I would like to efficiently evaluate an entire cubic BSpline basis which is represented by the array of knots at a certain point <code>x</code>. What I am currently doing is constructing the basis using the SciPy <code>scipy.interpolate.BSpline</code> class:</p> <pre class="lang-py prettyprint-override"><code>from scipy.interpolate import BSpline def bspline_basis(knots): return [ BSpline.basis_element(knots[i:(i+5)], extrapolate=False) for i in range(len(knots) - 4) ] </code></pre> <p>and then using the returned basis for evaluation:</p> <pre class="lang-py prettyprint-override"><code>def eval_basis(basis, x): return [elem(val).item() for elem in basis] </code></pre> <p>However, since the <code>eval_basis</code> function is repeatedly called millions of times, the above code is slow! The <code>BSpline</code> object is optimized for operating on arrays, and I am feeding it with individual scalars <code>x</code> and extracting the scalars from the resulting arrays.</p> <p>Due to the fact that I operate in an existing codebase where I cannot change the call protocol to <code>eval_basis</code>, it has to be called on individual scalars <code>x</code>.</p> <p>The code can clearly be accelerated if I could somehow efficiently evaluate an entire BSpline basis at a point <code>x</code> and obtain an NumPy array of the basis function values. Is there such a way using SciPy, or another Python library?</p>
<p><code>scipy.interpolate._bspl.evaluate_all_bspl</code> is undocumented but gets it done. Note the leading underscore: it lives in a private module, so the interface may change between SciPy releases.</p>
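Since that helper is private, here is a pure-Python sketch of what it is computing: the Cox-de Boor recursion evaluated over the whole basis at a single point (the knot vectors below are illustrative, not from the question):

```python
def bspline_basis_all(knots, degree, x):
    """Evaluate every degree-`degree` B-spline basis function on `knots` at x."""
    t, k = knots, degree
    n = len(t) - k - 1  # number of functions in the full basis
    # Degree-0 basis: indicator functions of the half-open knot spans.
    N = [1.0 if t[i] <= x < t[i + 1] else 0.0 for i in range(len(t) - 1)]
    # Cox-de Boor recursion, raising the degree one step at a time;
    # empty spans contribute 0 (the usual 0/0 := 0 convention).
    for d in range(1, k + 1):
        for i in range(len(t) - d - 1):
            left = 0.0
            if t[i + d] != t[i]:
                left = (x - t[i]) / (t[i + d] - t[i]) * N[i]
            right = 0.0
            if t[i + d + 1] != t[i + 1]:
                right = (t[i + d + 1] - x) / (t[i + d + 1] - t[i + 1]) * N[i + 1]
            N[i] = left + right
    return N[:n]

# Linear "hat" basis on a clamped knot vector:
print(bspline_basis_all([0, 0, 1, 2, 2], 1, 0.5))  # [0.5, 0.5, 0.0]
```

A quick sanity check against <code>BSpline.basis_element</code>: for a clamped basis the returned values sum to 1 at any interior point (partition of unity).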
python|numpy|scipy
1
1,905,850
68,002,194
How can I fix this pytube regex error python3 pytube3
<p>I'm having issues with pytube on raspberry pi 4b running python 3.7. Im getting this error code:</p> <blockquote> <p>%Run Pyoutube_downloader<br /> <a href="https://www.youtube.com/watch?v=-QLVxOvESf4" rel="nofollow noreferrer">https://www.youtube.com/watch?v=-QLVxOvESf4</a></p> </blockquote> <pre class="lang-py prettyprint-override"><code>Traceback (most recent call last): File &quot;/home/pi/Documents/My_Scripts/Pyoutube_downloader&quot;, line 5, in &lt;module&gt; yt = YouTube(link) File &quot;/home/pi/.local/lib/python3.7/site-packages/pytube/__main__.py&quot;, line 71, in __init__ self.video_id = extract.video_id(url) File &quot;/home/pi/.local/lib/python3.7/site-packages/pytube/extract.py&quot;, line 162, in video_id return regex_search(r&quot;(?:v=|\/)([0-9A-Za-z_-]{11}).*&quot;, url, group=1) File &quot;/home/pi/.local/lib/python3.7/site-packages/pytube/helpers.py&quot;, line 129, in regex_search raise RegexMatchError(caller=&quot;regex_search&quot;, pattern=pattern) pytube.exceptions.RegexMatchError: regex_search: could not find match for (?:v=|\/)([0-9A-Za-z_-]{11}).* </code></pre> <p>I have tried all the changes and updates described on github, I've tried updating everything and ive tried copying and pasting the error code specifically into Google to no avail.</p> <p>Has anyone had this and fixed it? Any help is greatly appreciated.</p>
<blockquote> <p>Credits: Modification of the regex expression created by Stack Overflow user <a href="https://stackoverflow.com/users/681044/kwarunek">kwarunek</a> in his <a href="https://stackoverflow.com/a/34833500/12511801">answer</a></p> </blockquote> <hr /> <p>I've modified the regex expression created by Stack Overflow user <a href="https://stackoverflow.com/users/681044/kwarunek">kwarunek</a> in his <a href="https://stackoverflow.com/a/34833500/12511801">answer</a> and this is the result:</p> <pre><code>(?:https?:\/\/)?(?:www.)?(?:[0-9A-Z-]+\.)?(?:youtube|youtu|youtube-nocookie)\.(?:com|be)\/(?:watch\?v=|watch\?.+&amp;v=|embed\/|v\/|.+\?v=)?([^&amp;=\n%\?]{11}) </code></pre> <p><img src="https://www.debuggex.com/i/rqeC0NDbijFM61V6.png" alt="Regular expression visualization" /></p> <p>You can test this regex expression in this <a href="https://www.debuggex.com/r/rqeC0NDbijFM61V6" rel="nofollow noreferrer">demo</a> available at <a href="https://www.debuggex.com/" rel="nofollow noreferrer">Debuggex.com</a></p>
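As a quick sanity check (using the URL from the question), the pattern can be exercised directly with Python's <code>re</code> module; the HTML-escaped ampersand in the pattern above is written literally here:

```python
import re

# The modified pattern from the answer, split across raw-string pieces.
YOUTUBE_ID = re.compile(
    r"(?:https?:\/\/)?(?:www.)?(?:[0-9A-Z-]+\.)?"
    r"(?:youtube|youtu|youtube-nocookie)\.(?:com|be)\/"
    r"(?:watch\?v=|watch\?.+&v=|embed\/|v\/|.+\?v=)?([^&=\n%\?]{11})"
)

url = "https://www.youtube.com/watch?v=-QLVxOvESf4"
match = YOUTUBE_ID.search(url)
print(match.group(1))  # -QLVxOvESf4
```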
python|raspberry-pi|youtube|pytube
0
1,905,851
67,932,495
How to solve No module named 'selenium' in VS code?
<p>How to solve No module named 'selenium' in VS code?</p> <p><strong>Module to be used</strong></p> <pre><code>from selenium import webdriver import time,sys,csv,pyautogui </code></pre> <p>However, the following error occurred.</p> <p><code>ModuleNotFoundError: No module named 'selenium'</code></p> <p><strong>I tried in VS code to solve this error.</strong></p> <p>A similar situation with me, stack overflow the resolution, but failed.</p> <p>-&gt; <a href="https://stackoverflow.com/questions/31147660/importerror-no-module-named-selenium">ImportError: No module named &#39;selenium&#39;</a></p> <pre><code>1.pip install selenium 2.conda install selenium </code></pre> <p><a href="https://i.stack.imgur.com/rgU08.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rgU08.png" alt="enter image description here" /></a> -&gt; select Python 3.8.5 -64bit(conda)</p> <blockquote> <p>how to solved ModuleNotFoundError? help me..</p> </blockquote> <p><strong>comment update(programandoconro)</strong> <a href="https://i.stack.imgur.com/JRhl4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JRhl4.png" alt="enter image description here" /></a></p> <p>-&gt; An error still occurs.</p> <p><strong>comment update(Prophet)</strong></p> <p><a href="https://i.stack.imgur.com/bhowJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bhowJ.png" alt="enter image description here" /></a></p> <p>-&gt; An error still occurs.</p>
<p>Both answers above might work but it is better to create a <a href="https://realpython.com/python-virtual-environments-a-primer/#:%7E:text=At%20its%20core%2C%20the%20main,dependencies%20every%20other%20project%20has." rel="nofollow noreferrer">virtual environment</a> and install your dependencies in there. Since you're already using Anaconda, I'll list the steps below.</p> <p>You can create a virtual environment with Anaconda:</p> <ol> <li><code>conda create -n yourenvname python=x.x anaconda</code></li> <li><code>source activate yourenvname</code></li> </ol> <p>You can find more info how to setup your <code>environment</code> <a href="https://uoa-eresearch.github.io/eresearch-cookbook/recipe/2014/11/20/conda/" rel="nofollow noreferrer">here</a></p>
python|selenium|visual-studio-code|path
3
1,905,852
30,727,456
How to configure python tornado application to give back a crossdomain.xml?
<p>So as far as I know, the first step should be to create a response class in the main application which is linked with some URL pattern. For example:</p> <pre><code>(r'/crossdomain.xml', GrahhHandler) </code></pre> <p>Then, as a second step, we should somehow return the crossdomain.xml document when the GrahhHandler get function is called:</p> <pre><code>class GrahhHandler (web.RequestHandler): def get(): return self.render('crossdomain.xml') </code></pre> <p>But I get a 500: Internal Server Error after configuring GrahhHandler this way:</p> <p>TypeError: get() takes 0 positional arguments but 1 was given ERROR:tornado.access:500 GET /crossdomain.xml (127.0.0.1) 25.00ms</p> <p>Please help me configure GrahhHandler to return a real crossdomain.xml.</p>
<p>The <code>TypeError: get() takes 0 positional arguments but 1 was given</code> comes from defining the handler method as <code>def get():</code>; like any instance method it must accept <code>self</code>, i.e. <code>def get(self):</code>. With that fixed, also set the proper <code>content-type</code> header for <code>xml</code> just before the <code>render()</code> call:</p> <pre><code>self.set_header("Content-Type", "text/xml") </code></pre>
python|tornado
0
1,905,853
30,538,673
Python: List Comprehension with a Nested Loop
<p>I have a situation where I am using list comprehension to scan one list and return items that match a certain criteria.</p> <p><code>[item for item in some_list_of_objects if 'thisstring' in item.id]</code></p> <p>I want to expand this and have a list of things that can be in the item, the list being of unknown length. Something like this:</p> <p><code>string_list = ['somestring', 'another_string', 'etc']</code></p> <p><code>[item for item in some_list_of_objects if one of string_list in item.id]</code></p> <p>What is a pythonic way to accomplish this? I know I could easily rewrite it to use the standard loop structure, but I would like to keep the list comprehension if I can do so without producing very ugly code.</p> <p>Thanks in advance.</p>
<p>Use <a href="https://docs.python.org/2/library/functions.html#any" rel="nofollow">any</a>:</p> <pre><code>string_list = ['somestring', 'another_string', 'etc'] [item for item in some_list if any(s in item.id for s in string_list)] </code></pre> <p>any will <em>lazily evaluate</em> breaking on the first match or checking all if we don't get a match.</p>
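A self-contained sketch (with a stand-in <code>Item</code> class and sample ids, since the original objects are not shown):

```python
class Item:
    def __init__(self, id):
        self.id = id

string_list = ['somestring', 'another_string', 'etc']
some_list = [Item('a-somestring-1'), Item('no-match-here'), Item('ends with etc')]

# Keep items whose id contains at least one of the target strings.
matches = [item for item in some_list if any(s in item.id for s in string_list)]
print([m.id for m in matches])  # ['a-somestring-1', 'ends with etc']
```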
python|list|python-2.7|list-comprehension
2
1,905,854
67,025,013
search string in pandas column
<p>I am trying to find a substring in below hard_skills_name column, like i want all rows which has 'Apple Products' as hard skill.</p> <p><a href="https://i.stack.imgur.com/NAeMb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NAeMb.png" alt="enter image description here" /></a></p> <p>I tried below code:</p> <pre><code>df.loc[df['hard_skills_name'].str.contains(&quot;Apple Products&quot;, case=False)] </code></pre> <p>but getting this error:</p> <pre><code>KeyError Traceback (most recent call last) &lt;ipython-input-49-acdcdfbdfd3d&gt; in &lt;module&gt; ----&gt; 1 df.loc[df['hard_skills_name'].str.contains(&quot;Apple Products&quot;, case=False)] ~/anaconda3/envs/python3/lib/python3.6/site-packages/pandas/core/indexing.py in __getitem__(self, key) 877 878 maybe_callable = com.apply_if_callable(key, self.obj) --&gt; 879 return self._getitem_axis(maybe_callable, axis=axis) 880 881 def _is_scalar_access(self, key: Tuple): ~/anaconda3/envs/python3/lib/python3.6/site-packages/pandas/core/indexing.py in _getitem_axis(self, key, axis) 1097 raise ValueError(&quot;Cannot index with multidimensional key&quot;) 1098 -&gt; 1099 return self._getitem_iterable(key, axis=axis) 1100 1101 # nested tuple slicing ~/anaconda3/envs/python3/lib/python3.6/site-packages/pandas/core/indexing.py in _getitem_iterable(self, key, axis) 1035 1036 # A collection of keys -&gt; 1037 keyarr, indexer = self._get_listlike_indexer(key, axis, raise_missing=False) 1038 return self.obj._reindex_with_indexers( 1039 {axis: [keyarr, indexer]}, copy=True, allow_dups=True ~/anaconda3/envs/python3/lib/python3.6/site-packages/pandas/core/indexing.py in _get_listlike_indexer(self, key, axis, raise_missing) 1252 keyarr, indexer, new_indexer = ax._reindex_non_unique(keyarr) 1253 -&gt; 1254 self._validate_read_indexer(keyarr, indexer, axis, raise_missing=raise_missing) 1255 return keyarr, indexer 1256 ~/anaconda3/envs/python3/lib/python3.6/site-packages/pandas/core/indexing.py in 
_validate_read_indexer(self, key, indexer, axis, raise_missing) 1296 if missing == len(indexer): 1297 axis_name = self.obj._get_axis_name(axis) -&gt; 1298 raise KeyError(f&quot;None of [{key}] are in the [{axis_name}]&quot;) 1299 1300 # We (temporarily) allow for some missing keys with .loc, except in KeyError: &quot;None of [Float64Index([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,\n nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,\n nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,\n nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,\n nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,\n nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,\n nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,\n nan, nan, nan, nan, nan, nan, nan, nan, nan],\n dtype='float64')] are in the [index]&quot; </code></pre>
<p>Try to chain (temporarily) conversion of the list of strings to comma separated strings by <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.join.html" rel="nofollow noreferrer"><code>str.join()</code></a> before string search:</p> <pre><code>df[df['hard_skills_name'].str.join(', ').str.contains(&quot;Apple Products&quot;, case=False)] </code></pre> <p>The problem was owing to the string you are going to search is contained within a list. You cannot search the string in list directly with <code>.str.contains()</code>. To solve it, you can convert the list of strings to a long string first (e.g. with commas separating the substrings) by <code>.str.join()</code> before doing your string search.</p>
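A minimal reproduction (the skill lists below are made up) showing why the chained <code>str.join</code> works on a column that holds lists:

```python
import pandas as pd

df = pd.DataFrame({
    "hard_skills_name": [
        ["Apple Products", "Cash Register"],   # row 0: contains the target
        ["Python", "SQL"],                     # row 1: does not
        [],                                    # row 2: empty skill list
    ]
})

# Join each list into one comma-separated string, then search that string.
mask = df["hard_skills_name"].str.join(", ").str.contains("Apple Products", case=False)
print(df[mask].index.tolist())  # [0]
```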
python|pandas|string|dataframe|data-wrangling
2
1,905,855
66,946,179
Reading a csv file header which contains a utf-8 character
<p>Is there a way by which I can check if a csv file contains a header without importing any modules like the csv or panda module? I keep getting an error when I try to use readlines as the header contains a utf-8 character (a degree symbol).</p>
<p>You know that the header of a csv file is its first line separated by commas.</p> <p>By opening the file and reading its first line you can obtain the information that you need:</p> <pre><code>f = open('file.csv', 'r', encoding='utf-8') #open file headers = f.readline().split(',') #get headers list f.close() #close file </code></pre>
python|python-3.x
0
1,905,856
66,885,177
how to select by a partial text in Python - selenium
<p>How can I select a dropdown element by a part of its name? I want to select an option based on a DB values, but this values don't have the complete name of the dropdown element, is there any way to make selenium look for the option with my database value as a partial text?</p> <pre><code> modelo = googleSheet.modelo.upper().strip() select = Select(WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '/html/body/div/div/div/div[1]/form/fieldset[6]/div/ul/fieldset[3]/div/ul/fieldset[3]/div/ul/fieldset/div/ul/li/label')))) select.select_by_visible_text(modelo) </code></pre> <p>I the dropdown option I want to select is &quot;Terrano II 2.7 xpto ol&quot;, but my data base value is just Terrano II 2.7</p> <p>Thanks for the help</p>
<p><code>Select.select_by_visible_text()</code> already does <code>strip()</code>. You do not need it. Also, from this method definition:</p> <pre><code>Select all options that display text matching the argument. That is, when given &quot;Bar&quot; this would select an option like: &lt;option value=&quot;foo&quot;&gt;Bar&lt;/option&gt; :Args: - text - The visible text to match against </code></pre> <p>So it matches the full visible text of an option. Since your database only holds part of that text, loop over the options and select the one that contains it:</p> <pre><code>dropdown_option = &quot;Terrano II 2.7&quot; # the partial text from your database element = driver.find_element_by_id(&quot;id&quot;) # or any other locator select = Select(element) for option in select.options: if dropdown_option in option.text: select.select_by_visible_text(option.text) break </code></pre> <p>This implementation also makes debugging easier. For example, you can print all the values from your dropdown before selecting the option you need.</p> <p>If the dropdown takes a long time to open, or other elements make it temporarily invisible, add a separate wait before selecting:</p> <pre><code>from selenium.webdriver.support.select import Select from selenium.webdriver.support.wait import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC wait = WebDriverWait(driver, 10) wait.until(EC.visibility_of_element_located( (By.CSS_SELECTOR, &quot;Unique css selector of the first option in dropdown&quot;))) </code></pre>
python|selenium|drop-down-menu|automation
0
1,905,857
67,104,160
Advanced ticket system discord.py
<p>I am creating a ticket system for my bot so that members can tell their issue to mods and I can create multiple ticket channels without any limit by reacting again and again. But I want that there should be a limit so that when there is an ongoing ticket so that a specific user cannot create one more ticket and the bot should yell at him.</p> <p>Code:</p> <pre class="lang-py prettyprint-override"><code> @bot.event async def on_raw_reaction_add(payload): if payload.member.id != bot.user.id and str(payload.emoji)== u&quot;\U0001F3AB&quot;: msg_id, channel_id, category_id= bot.ticket_configs[payload.guild_id] if payload.message_id == msg_id: guild= bot.get_guild(payload.guild_id) for category in guild.categories: if category.id == category_id: break channel = guild.get_channel(channel_id) ticket_num= 1 if len(category.channels) == 0 else int(category.channels[-1].name.split(&quot;-&quot;)[1]) + 1 ticket_channel= await category.create_text_channel(f&quot;Ticket {ticket_num}&quot;, topic= f&quot;Channel for ticket number {ticket_num}.&quot;, permission_synced= True) await ticket_channel.set_permissions(payload.member, read_messages= True, send_messages= True) message= await channel.fetch_message(msg_id) await message.remove_reaction(payload.emoji, payload.member) await ticket_channel.send(f&quot;{payload.member.mention} Thank You! for creating this ticket staff will contact you soon. Type **-close** to close the ticket.&quot;) try: await bot.wait_for(&quot;message&quot;, check=lambda m: m.channel== ticket_channel and m.author== payload.member and m.content == &quot;-close&quot;, timeout= 3600) except asyncio.TimeoutError: await ticket_channel.delete() else: await ticket_channel.delete() </code></pre> <hr /> <p>TL;DR: My bot can currently create multiple ticket channels without any limits. However, I want to enforce a limit where if a user had an existing ticket but started a new channel, the bot would remind him that he already has an existing ticket.</p>
<p>Try this:</p> <pre><code>if discord.utils.get(guild.channels, name=f&quot;Ticket {ticket_num}&quot;) is not None: await payload.member.send(content=f&quot;Ticket {ticket_num} is already an existing channel!&quot;) </code></pre>
python|discord.py|ticket-system
0
1,905,858
42,684,078
During the recent Python Build's
<p>Which Python build was up to date during 1999? I'm trying to run module files through the python interpreter, and the example in the book doesn't read the same syntax as the print command works for Python 3.5. What syntax should I use running module files during the Python 3.5 Build.</p>
<p>According to Wikipedia (<a href="https://en.wikipedia.org/wiki/History_of_Python" rel="nofollow noreferrer">History of Python</a>), Python 1.5 or 1.6 would be your best bet; though I would probably recommend finding a more current book (or online resource) if possible, since learning an obsolete version of the language won't be very useful (you'll have to find old versions of every library you want to use) and may cause you headaches in the future when you want to learn the new syntax.</p>
python
1
1,905,859
42,718,547
How to connect remote mongodb with pymongo
<p>When I use MongoChef to connect remote mongo database, I use next parameters:</p> <hr> <p><strong>Server</strong></p> <ul> <li><em>Server:</em> localhost</li> <li><em>Port:</em> 27017</li> </ul> <p><strong>SSH Tunnel</strong></p> <ul> <li><p><em>SSH address:</em> 10.1.0.90</p></li> <li><p><em>Port:</em> 25</p></li> <li><p><em>SSH Username:</em> username</p></li> <li><p><em>SSH Password:</em> password</p></li> </ul> <hr> <p>When I connect with Pymongo, I have the next code:</p> <pre><code>import pymongo MONGO_HOST = "10.1.0.90" MONGO_PORT = 25 MONGO_DB = "db_name" MONGO_USER = "username" MONGO_PASS = "password" con = pymongo.MongoClient(MONGO_HOST, MONGO_PORT) db = con[MONGO_DB] db.authenticate(MONGO_USER, MONGO_PASS) print(db) </code></pre> <p>But I have the next error:</p> <pre><code>pymongo.errors.ServerSelectionTimeoutError: 10.1.2.84:27017: [Errno 111] Connection refused </code></pre> <p>Please, could you help me with this problem? What did I do wrong?</p>
<p>The solution which works for me.</p> <pre><code>from sshtunnel import SSHTunnelForwarder import pymongo import pprint MONGO_HOST = "REMOTE_IP_ADDRESS" MONGO_DB = "DATABASE_NAME" MONGO_USER = "LOGIN" MONGO_PASS = "PASSWORD" server = SSHTunnelForwarder( MONGO_HOST, ssh_username=MONGO_USER, ssh_password=MONGO_PASS, remote_bind_address=('127.0.0.1', 27017) ) server.start() client = pymongo.MongoClient('127.0.0.1', server.local_bind_port) # server.local_bind_port is assigned local port db = client[MONGO_DB] pprint.pprint(db.collection_names()) server.stop() </code></pre>
python|mongodb|python-3.x|ssh|pymongo
41
1,905,860
66,667,078
Regex to find numerical data from a group of JSON
<p>I am trying to build a regex to find the numerical data in the series of a output which combines many small JSON data. This is just like to find the numerical data after <code>:</code> but I am not able to get through it I tried many regex and used the hint from many websites but am not able to solve it.</p> <p>The data is somewhat like</p> <pre><code>{&quot;orders&quot;:[{&quot;id&quot;:1},{&quot;id&quot;:2},{&quot;id&quot;:3},{&quot;id&quot;:4},{&quot;id&quot;:5},{&quot;id&quot;:6},{&quot;id&quot;:7},{&quot;id&quot;:8},{&quot;id&quot;:9},{&quot;id&quot;:10},{&quot;id&quot;:11},{&quot;i d&quot;:648},{&quot;id&quot;:649},{&quot;id&quot;:650},{&quot;id&quot;:651},{&quot;id&quot;:652},{&quot;id&quot;:653}],&quot;errors&quot;:[{&quot;code&quot;:3,&quot;message&quot;:&quot;[PHP Warning #2] count(): Parameter must be an array or an object that implements Countable (153)&quot;}]} </code></pre> <p>Here I want all the numerical values except <code>#2</code>and <code>(153)</code>.</p> <p>Please Help</p>
<pre><code>(?&lt;=:)\d+ </code></pre> <p>A positive lookbehind works here: it matches a run of digits only when the character right before it is a colon, which keeps the JSON values while skipping <code>#2</code> and <code>(153)</code>.</p>
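Applied to a shortened version of the sample data (with standard quotes), the lookbehind keeps exactly the values that follow a colon:

```python
import re

data = ('{"orders":[{"id":1},{"id":2}],"errors":[{"code":3,'
        '"message":"[PHP Warning #2] count(): must be an array (153)"}]}')

# Digits are kept only when preceded directly by a colon.
print(re.findall(r"(?<=:)\d+", data))  # ['1', '2', '3']
```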
python|regex
1
1,905,861
72,307,750
Executing CMD in Python with os
<p>My code:</p> <pre><code>import os os.system(&quot;pip install colorama&quot;) </code></pre> <p>This starts to print the actual command and such, is there a way to execute this without anything being printed. Or even like for example a variable with the response and I can handle it from there? I don't want a massive text, just a simple one saying it was installed etc.</p>
<p>You could probably redirect the output into <code>/dev/null</code>, so you could do something like the following (note the straight quotes; curly quotes are a syntax error):</p> <pre class="lang-py prettyprint-override"><code>import os os.system("pip install colorama &gt; /dev/null 2&gt;&amp;1") </code></pre> <p>Note that this would also redirect the errors (because of the <code>2&gt;&amp;1</code>). If you want to keep errors, you should remove that.</p>
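An alternative (an illustration, not part of the original answer) is <code>subprocess.run</code>, which captures the output into a variable so you can decide what to show:

```python
import subprocess
import sys

# Run a command quietly, keeping its output in a variable instead of
# printing it. The command here is a harmless stand-in; in real use you
# would pass [sys.executable, "-m", "pip", "install", "colorama"].
result = subprocess.run(
    [sys.executable, "-c", "print('installed')"],
    capture_output=True,
    text=True,
)
if result.returncode == 0:
    print("ok:", result.stdout.strip())  # ok: installed
else:
    print("failed:", result.stderr.strip())
```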
python
2
1,905,862
65,624,011
coursera(deep specialization course): CNN week-3 assignment car detection with YOLO
<p>I have been struggling with <strong>Attribute Error: 'list' object has no attribute 'dtype'.</strong> This is from the coursera assignment, everything works fine in coursera jupyter notebook but when i run it in in my local machine then i started getting this error dont know why? could anyone help me with this out.</p> <p>here in below code we are Converting output of the model to usable bounding box tensors.</p> <pre class="lang-py prettyprint-override"><code>yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names)) </code></pre> <pre><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) &lt;ipython-input-90-d69bb71a2d56&gt; in &lt;module&gt; ----&gt; 1 yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names)) ~\object detection\yad2k\models\keras_yolo.py in yolo_head(feats, anchors, num_classes) 107 conv_index = K.transpose(K.stack([conv_height_index, conv_width_index])) 108 conv_index = K.reshape(conv_index, [1, conv_dims[0], conv_dims[1], 1, 2]) --&gt; 109 conv_index = K.cast(conv_index, K.dtype(feats)) 110 111 feats = K.reshape(feats, [-1, conv_dims[0], conv_dims[1], num_anchors, num_classes + 5]) ~\anaconda3\envs\tensorflow_env\lib\site-packages\tensorflow\python\util\dispatch.py in wrapper(*args, **kwargs) 199 &quot;&quot;&quot;Call target, and fall back on dispatchers if there is a TypeError.&quot;&quot;&quot; 200 try: --&gt; 201 return target(*args, **kwargs) 202 except (TypeError, ValueError): 203 # Note: convert_to_eager_tensor currently raises a ValueError, not a ~\anaconda3\envs\tensorflow_env\lib\site-packages\tensorflow\python\keras\backend.py in dtype(x) 1369 1370 &quot;&quot;&quot; -&gt; 1371 return x.dtype.base_dtype.name 1372 1373 AttributeError: 'list' object has no attribute 'dtype' </code></pre> <pre class="lang-py prettyprint-override"><code>def yolo_head(feats, anchors, num_classes): &quot;&quot;&quot;Convert final layer features to 
bounding box parameters. Parameters ---------- feats : tensor Final convolutional layer features. anchors : array-like Anchor box widths and heights. num_classes : int Number of target classes. Returns ------- box_xy : tensor x, y box predictions adjusted by spatial location in conv layer. box_wh : tensor w, h box predictions adjusted by anchors and conv spatial resolution. box_conf : tensor Probability estimate for whether each box contains any object. box_class_pred : tensor Probability distribution estimate for each box over class labels. &quot;&quot;&quot; num_anchors = len(anchors) # Reshape to batch, height, width, num_anchors, box_params. anchors_tensor = K.reshape(K.variable(anchors), [1, 1, 1, num_anchors, 2]) # Static implementation for fixed models. # TODO: Remove or add option for static implementation. # _, conv_height, conv_width, _ = K.int_shape(feats) # conv_dims = K.variable([conv_width, conv_height]) # Dynamic implementation of conv dims for fully convolutional model. conv_dims = K.shape(feats)[1:3] # assuming channels last # In YOLO the height index is the inner most iteration. conv_height_index = K.arange(0, stop=conv_dims[0]) conv_width_index = K.arange(0, stop=conv_dims[1]) conv_height_index = K.tile(conv_height_index, [conv_dims[1]]) # TODO: Repeat_elements and tf.split doesn't support dynamic splits. 
# conv_width_index = K.repeat_elements(conv_width_index, conv_dims[1], axis=0) conv_width_index = K.tile(K.expand_dims(conv_width_index, 0), [conv_dims[0], 1]) conv_width_index = K.flatten(K.transpose(conv_width_index)) conv_index = K.transpose(K.stack([conv_height_index, conv_width_index])) conv_index = K.reshape(conv_index, [1, conv_dims[0], conv_dims[1], 1, 2]) conv_index = K.cast(conv_index, K.dtype(feats)) feats = K.reshape(feats, [-1, conv_dims[0], conv_dims[1], num_anchors, num_classes + 5]) conv_dims = K.cast(K.reshape(conv_dims, [1, 1, 1, 1, 2]), K.dtype(feats)) # Static generation of conv_index: # conv_index = np.array([_ for _ in np.ndindex(conv_width, conv_height)]) # conv_index = conv_index[:, [1, 0]] # swap columns for YOLO ordering. # conv_index = K.variable( # conv_index.reshape(1, conv_height, conv_width, 1, 2)) # feats = Reshape( # (conv_dims[0], conv_dims[1], num_anchors, num_classes + 5))(feats) box_confidence = K.sigmoid(feats[..., 4:5]) box_xy = K.sigmoid(feats[..., :2]) box_wh = K.exp(feats[..., 2:4]) box_class_probs = K.softmax(feats[..., 5:]) # Adjust preditions to each spatial grid point and anchor size. # Note: YOLO iterates over height index before width index. box_xy = (box_xy + conv_index) / conv_dims box_wh = box_wh * anchors_tensor / conv_dims return box_confidence, box_xy, box_wh, box_class_probs``` </code></pre>
<p>What is the type of your input to the model? You should cast it into a NumPy array first.</p> <pre class="lang-py prettyprint-override"><code>a = [1, 2, 3] a.dtype </code></pre> <p>This will raise your error:</p> <pre><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) &lt;ipython-input-2-b3cdf8323901&gt; in &lt;module&gt; ----&gt; 1 a.dtype AttributeError: 'list' object has no attribute 'dtype' </code></pre> <p>After casting to a NumPy array, it'll work correctly:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np a = np.array(a) a.dtype # dtype('int64') </code></pre>
python|tensorflow|keras|jupyter-notebook|anaconda
0
1,905,863
65,572,338
Python Linked List - insert method
<p>I am kinda new to DataStructure and I trying to write LinkedList, but I cannot figure out why my insert method does not work. If you have any suggestions to improve, please write to me</p> <pre><code>class Node: def __init__(self, data): self.item = data self.ref = None class LinkedList: def __init__(self): self.start_node = None self.index = 0 def append(self, val): new = Node(val) if not self.start_node: self.start_node = new return last = self.start_node while last.ref: last = last.ref last.ref = new self.index += 1 def insert(self, index, value): start_counting = 0 start = self.start_node while start_counting &lt; index: start = start.ref start_counting += 1 NewNode = Node(value) tmp = start.ref start = NewNode start.ref = tmp def display(self): ls = [] last = self.start_node while last: ls.append(last.item) last = last.ref return ls </code></pre>
<p>The insert function is replacing the current node instead of linking it. I'll illustrate this with an example:</p> <p>I have this linked list: 1 -&gt; 2 -&gt; 3 And I want to insert an element &quot;4&quot; in position 1 (the current number in position 1 is 2). The iterations will be:</p> <pre><code>start_counting = 0 start = self.start_node (node with value 1) </code></pre> <hr /> <p><strong>1st Iteration:</strong></p> <pre><code>start_counting &lt; 1 -&gt; true start = start.ref (node with value 2) start_counting += 1 (actual count 1) </code></pre> <p><strong>2nd Iteration:</strong></p> <pre><code>start_counting &lt; 1 -&gt; false start = (node with value 2) </code></pre> <hr /> <p><strong>After that the code continues as follows:</strong></p> <pre><code>We create the new Node (4 in my example) and we do: tmp = start.ref (which is 3) start = NewNode (we are replacing the node completely, we are not linking the node with another) &lt;- here is the error start.ref = tmp (which in this case is 3) </code></pre> <hr /> <p>To fix the error you should take in consideration two things:</p> <ol> <li>Iterate until the previous node instead until the next one.</li> <li>Handle the case where you want to insert a node as head of the linked list, in other words, in position 0.</li> </ol> <p>The code would be something like:</p> <pre><code> def insert(self, index, value): start_counting = 0 start = self.start_node NewNode = Node(value) if index == 0: # Handling the position 0 case. NewNode.ref = start self.start_node = NewNode else: while start_counting &lt; index - 1: # Iterating until the previous node. start_counting += 1 start = start.ref NewNode.ref = start.ref start.ref = NewNode </code></pre> <p>Tested with the following code and it is working:</p> <pre><code>a = LinkedList() a.append(1) a.append(2) a.append(3) a.insert(0, 4) a.insert(1, 5) print(a.display()) </code></pre>
python|data-structures|linked-list
1
1,905,864
3,952,978
How to write a port scanner listening for 'ACK' in python?
<p>Please, can anyone help me with a port scanner program to scan ports on the IP address provided, for ACK? I want to know the technique used to scan for ACK and how to use multi-threading, so please help me in that perspective.</p> <p>Thank you</p>
<p>Just a heads-up - Windows XP SP2 and later disable raw sockets, so you won't be able to scan for TCP ACK messages specifically on Windows. Since an ACK message is the last message in establishing a TCP connection, you can implicitly detect the ACK message by attempting to establish a connection with a simple <code>socket.connect</code> call (if it connects, you've sent your ACK).</p> <p>If you want to see an example of a multithreaded port scanner that I wrote, see <code>inet.py</code> and <code>scanner.py</code> in <a href="http://bitbucket.org/jaraco/jaraco.net/src/tip/jaraco/net/" rel="nofollow">jaraco.net</a>.</p>
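To make the <code>socket.connect</code> idea concrete, here is a sketch of a connect scan (the ACK is implicit in a completed handshake) plus a thread pool for the multi-threading part; the host and ports are illustrative:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def scan_port(host, port, timeout=0.5):
    """True if the full TCP handshake (SYN, SYN-ACK, our ACK) completes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a listener we control, so the result is deterministic.
server = socket.socket()
server.bind(("127.0.0.1", 0))          # let the OS pick a free port
server.listen(1)
open_port = server.getsockname()[1]

ports = [open_port]
with ThreadPoolExecutor(max_workers=8) as pool:
    results = dict(zip(ports, pool.map(lambda p: scan_port("127.0.0.1", p), ports)))
print(results[open_port])  # True

server.close()
```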
python|multithreading|sockets|port-scanning
3
1,905,865
3,586,003
unsigned char* image to Python
<p>I was able to generate python bindings for a camera library using SWIG and I am able to capture and save image using the library's inbuilt functions. I am trying to obtain data from the camera into Python Image Library format, the library provides functions to return camera data as unsigned char* . Does anyone know how to convert unsigned char* image data into some data format that I can use in Python? Basically am trying to convert unsigned char* image data to Python Image Library format.</p> <p>Thank you.</p>
<p>Okay Guys, so finally after a long fight (maybe because am a newbie in python), I solved it.</p> <p>I wrote a data structure that could be understood by python and converted the unsigned char* image to that structure. After writing the interface for the custom data structure, I was able to get the image into Python Image Library image format. I wanted to paste the code here but it wont allow more tha 500 chars. Here's a link to my code</p> <p><a href="http://www.optionsbender.com/technologybending/python/unsignedcharimagedatatopilimage" rel="nofollow noreferrer">http://www.optionsbender.com/technologybending/python/unsignedcharimagedatatopilimage</a></p> <p>I've also attached files so that you can use it.</p>
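For reference, the modern route with Pillow (the successor to PIL) is <code>Image.frombytes</code>; the buffer below is a stand-in for the camera's <code>unsigned char*</code> data, not real camera output:

```python
from PIL import Image

width, height = 4, 2
raw = bytes(range(width * height * 3))      # fake RGB buffer, 3 bytes per pixel
img = Image.frombytes("RGB", (width, height), raw)
print(img.size, img.getpixel((0, 0)))  # (4, 2) (0, 1, 2)
```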
python|c++|python-imaging-library|unsigned-char|pep3118
1
1,905,866
3,761,473
Python not recognising directories os.path.isdir()
<p>I have the following Python code to remove files in a directory. For some reason my .svn directories are not being recognised as directories.</p> <p>And I get the following output:</p> <blockquote> <p>.svn not a dir</p> </blockquote> <p>Any ideas would be appreciated.</p> <pre><code>def rmfiles(path, pattern): pattern = re.compile(pattern) for each in os.listdir(path): if os.path.isdir(each) != True: print(each + " not a dir") if pattern.search(each): name = os.path.join(path, each) os.remove(name) </code></pre>
<p>You need to create the full path name before checking:</p> <pre><code>if not os.path.isdir(os.path.join(path, each)):
    ...
</code></pre>
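Applying that fix to the asker's function, a corrected sketch (joining the path before both the directory check and the removal) could look like:

```python
import os
import re

def rmfiles(path, pattern):
    """Remove non-directory entries directly under `path` whose names match `pattern`."""
    pattern = re.compile(pattern)
    for each in os.listdir(path):
        full = os.path.join(path, each)  # build the full path once, up front
        if not os.path.isdir(full) and pattern.search(each):
            os.remove(full)
```

`os.listdir` returns bare names relative to `path`, which is why the original `os.path.isdir(each)` check was evaluated against the current working directory instead.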
python|file-io|path|directory
59
1,905,867
50,247,981
How to Insert A Non-String Object as a Key in Database
<p>Right now I am using <strong>mongoDB</strong> and face a key error based on the datatype restriction to &quot;String&quot; only.</p> <blockquote> <p>InvalidDocument: documents must have only string keys, key was ('good', 'beneficial')</p> </blockquote> <p>The actions I tried are the following:</p> <h3>1) Create a database named 'glot'</h3> <h3>2) Make a collection with name &quot;usr_history&quot;</h3> <h3>3) Try to insert a usr_history document into the collection</h3> <p>with the following commands in jupyter-notebook</p> <pre><code>import pymongo
from pymongo import MongoClient

client = MongoClient()  # to run the mongod instance we need a client to run it

db = client['glot']  # we connect to 'test-database' with the pre-defined client instance
collection = db['usr_history']

edge_attr = {('good','beneficial'): &quot;good is beneficial&quot;,
             ('beneficial','supportive'): &quot;beneficial means something supportive&quot;}  # create this with glat_algo

usr_history = {&quot;nodes&quot;: ['good', 'beneficial', 'supportive'],
               &quot;edges&quot;: [('good', 'beneficial'), ('beneficial', 'supportive')],
               &quot;rephrase&quot;: edge_attr}

collection.insert_one(usr_history)
</code></pre> <p>and the last command returns the error mentioned above.</p> <p>Basically what I am trying to do is to store the (vertex, edge, edge_attr) data into <strong>any</strong> DB so that I can keep tracking a large amount of usr_history data for analyzing it using python.</p> <p>I am not confined to mongoDB, so you can give me a guideline or solution not just with mongoDB, but with other alternatives.</p> <p>Thank you very much.</p>
<p>You can't use a non-string value as a key; only strings can be keys.</p> <p>It will work better if you store <code>('good','beneficial')</code> as the value of a separate attribute, perhaps like this:</p> <pre class="lang-js prettyprint-override"><code>edge_attr = [ {"attr" : ('good','beneficial'), "val" : "good is beneficial"}, {"attr" : ('beneficial','supportive'), "val" : "beneficial means something supportive"} ]; </code></pre>
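On the Python side, a tuple-keyed dict can be converted into that MongoDB-safe shape with a small helper before inserting (a sketch; the field names `attr` and `val` are illustrative):

```python
def tuple_keys_to_docs(edge_attr):
    """Turn {('a', 'b'): value} into documents with string keys only."""
    return [{"attr": list(key), "val": val} for key, val in edge_attr.items()]

docs = tuple_keys_to_docs({('good', 'beneficial'): "good is beneficial"})
# docs can now be passed to collection.insert_many(docs)
```

MongoDB stores the tuple as an array, and the original pairs can be reconstructed with `tuple(doc["attr"])` when reading back.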
python|mongodb|key
0
1,905,868
35,146,733
Is urllib2 slower than requests in python3
<p>I use python to simply call api.github.gist. I tried urllib2 first, which cost me about 10 seconds! requests takes less than 1 second.</p> <p>I am behind a corporate network, using a proxy. Do these two libs have different default behavior under a proxy?</p> <p>I also used Fiddler to check the network. In both cases the HTTP request finished in about 40ms. So where does urllib2 spend the time?</p>
<p>It's most likely that DNS caching sped up the <code>requests</code>. DNS queries might take a lot of time in corporate networks, don't know why but I experience the same. The first time you sent the request with <code>urllib2</code> DNS queried, slow, and cached. The second time you sent the request with <code>requests</code>, DNS needed not to be queried just retrieved from the cache.</p> <p>Clear up the DNS cache and change the order, i.e. request with <code>requests</code> first, see if there is any difference.</p>
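One rough way to check the DNS hypothesis is to time name resolution directly and compare a cold lookup with a repeated one (a diagnostic sketch, not part of the original answer):

```python
import socket
import time

def timed_resolve(host):
    """Time a DNS lookup; on corporate networks the first lookup is often the slow part."""
    start = time.perf_counter()
    infos = socket.getaddrinfo(host, 80)  # same resolution path urllib2/requests go through
    return time.perf_counter() - start, infos

first, _ = timed_resolve("localhost")
second, _ = timed_resolve("localhost")  # usually faster once the resolver has cached the name
```

If the first call dominates the 10 seconds you observed, the difference between the libraries was ordering, not implementation.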
python|python-requests|urllib
0
1,905,869
34,934,396
Python Scrapy - dynamic HTML, div and span content needed
<p>So I'm new to Scrapy and am looking to do something which is proving a little too ambitious. I'm hoping somebody out there can help guide me on how to gather and parse the info I'm after from this website. </p> <p>I need to obtain the following: label1 4810 (this is generated dynamically) Business name Name Address1 Address2 Address3 Address4 Postcode 0800 111111 me@domain.com</p> <p>Is this even possible using scrapy?</p> <p>Many thanks in advance.</p> <p><div class="snippet" data-lang="js" data-hide="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;div class="mbg"&gt; &lt;a href="http://www.domain.com" aria-label="label1"&gt; &lt;span class="nw1"&gt;Label13345&lt;/span&gt; &lt;/a&gt; &lt;span class="mbg-l"&gt; &lt;a href="http://www.domain.com/1" title="FBS"&gt;4810&lt;/a&gt; &lt;img alt="4810" title="4810" src="http://www.domain.com/image1"&gt;&lt;/span&gt; &lt;/div&gt; &lt;div id="bsi-c" class=" bsi-c-uk-bislr"&gt; &lt;div class="bsi-cnt"&gt; &lt;div class="bsi-ttl section-ttl"&gt; &lt;h2&gt;Info&lt;/h2&gt; &lt;div class="rd-sep"&gt;&lt;/div&gt; &lt;/div&gt; &lt;div class="bsi-bn"&gt;Business name&lt;/div&gt; &lt;div class="bsi-cic"&gt; &lt;div id="bsi-ec" class="u-flL"&gt; &lt;span class="bsi-arw"&gt;&lt;a href="javascript:;"&gt;&lt;/a&gt;&lt;/span&gt; &lt;span class="bsi-cdt"&gt;&lt;a href="javascript:;"&gt;Contact details&lt;/a&gt;&lt;/span&gt; &lt;/div&gt; &lt;div id="e8" class="u-flL bsi-ci"&gt; &lt;div class="bsi-c1"&gt; &lt;div&gt;Name&lt;/div&gt; &lt;div&gt;Address1&lt;/div&gt; &lt;div&gt;Address2&lt;/div&gt; &lt;div&gt;Address3&lt;/div&gt; &lt;div&gt;Address4&lt;/div&gt; &lt;div&gt;Postcode&lt;/div&gt; &lt;/div&gt; &lt;div class="bsi-c2"&gt; &lt;br&gt;&lt;/br&gt; &lt;div&gt; &lt;span class="bsi-lbl"&gt;Phone:&lt;/span&gt; &lt;span&gt;0800 111111&lt;/span&gt; &lt;/div&gt; &lt;div&gt; &lt;span class="bsi-lbl"&gt;Email:&lt;/span&gt; &lt;span&gt;me@domain.com&lt;/span&gt; &lt;/div&gt; 
&lt;/div&gt; &lt;/div&gt; &lt;/div&gt;</code></pre> </div> </div> </p>
<p>An example of parsing the already received page might look something like this:</p> <pre><code>import lxml.html

page="""&lt;div&gt;&lt;span&gt; . . .&lt;/span&gt;&lt;/div&gt;
"""
doc = lxml.html.document_fromstring(page)

# get label1 4810
label = doc.cssselect('.mbg .mbg-l a')[0].text_content()

# get address
addres = doc.cssselect('.u-flL .bsi-c1')[0].text_content()

# get phone
phone = doc.cssselect('.bsi-c2 .bsi-lbl')[0].text_content()

# get mail
mail = doc.cssselect('.bsi-c2 .bsi-lbl')[1].text_content()
</code></pre> <p>If the page must first be retrieved from the network, you can do it like so:</p> <pre><code>import requests, lxml.html

page = requests.get('site_.com')
doc = lxml.html.document_fromstring(page.text)
phone = doc.cssselect('.bsi-c2 .bsi-lbl')[0].text_content()
</code></pre>
python|html|scrapy|scrapy-spider
1
1,905,870
56,658,930
Use PostgreSQL plpython3u function to return a table
<p>I want to return a table. The function gets an array (the query is 'select function_name(array_agg(column_name)) from table_name').</p> <p>I coded below:</p> <pre><code>create type pddesctype as(
    count float,
    mean float,
    std float,
    min float
);

create function pddesc(x numeric[])
returns pddesctype
as $$
import pandas as pd
data=pd.Series(x)
count=data.describe()[0]
mean=data.describe()[1]
std=data.describe()[2]
min=data.describe()[3]
return count, mean, std, min
$$ language plpython3u;
</code></pre> <p>This code results in only an array in one column. (float, float, float...)</p> <p>I tried</p> <pre><code>create function pddesc(x numeric[])
returns table(count float, mean float, std float, min float)
as $$
import pandas as pd
data=pd.Series(x)
count=data.describe()[0]
mean=data.describe()[1]
std=data.describe()[2]
min=data.describe()[3]
return count, mean, std, min
$$ language plpython3u;
</code></pre> <p>But there is an error:</p> <pre><code>ERROR: key &quot;count&quot; not found in mapping
HINT: To return null in a column, add the value None to the mapping with the key named after the column.
CONTEXT: while creating return value.
</code></pre> <p>I want to show the result in columns (like a table) without creating a type in advance.</p> <p>How to change the RETURN / RETURNS syntax?</p>
<p>Here are the steps I tried to get a table of one row with four columns as the Output. The last step has the solution, the first is another way to reproduce your error.</p> <h4>Check np.array</h4> <pre><code>create or replace function pddesc(x numeric[])
returns table(count float, mean float, std float, min float)
as $$
import pandas as pd
import numpy as np
data=pd.Series(x)
count=data.describe()[0]
mean=data.describe()[1]
std=data.describe()[2]
min=data.describe()[3]
## print an INFO of the output:
plpy.info(np.array([count, mean, std, min]))
return np.array([count, mean, std, min])
$$ language plpython3u;
</code></pre> <p>Test fails (with the error of the question reproduced):</p> <pre><code>postgres=# SELECT * FROM pddesc(ARRAY[1,2,3]);
INFO: [3 3 Decimal('1') 1]
ERROR: key &quot;count&quot; not found in mapping
HINT: To return null in a column, add the value None to the mapping with the key named after the column.
CONTEXT: while creating return value
PL/Python function &quot;pddesc&quot;
</code></pre> <h4>Working solution: <code>np.array([...]).reshape(1,-1)</code></h4> <p>You need to reshape the array so that it is of the dimension that you want to get. In this case, it is of dim (1 row x 4 cols), and the <code>.reshape(1,-1)</code> means 1 row and -1 (= whatever needed) cols</p> <pre><code>create or replace function pddesc(x numeric[])
returns table(count float, mean float, std float, min float)
as $$
import pandas as pd
import numpy as np
data=pd.Series(x)
count=data.describe()[0]
mean=data.describe()[1]
std=data.describe()[2]
min=data.describe()[3]
## print an INFO of the output:
plpy.info(np.array([count, mean, std, min]).reshape(1,-1))
return np.array([count, mean, std, min]).reshape(1,-1)
## or with the same result:
# return np.hstack((count, mean, std, min)).reshape(1,-1)
$$ language plpython3u;
</code></pre> <p>Test:</p> <pre><code>postgres=# SELECT * FROM pddesc(ARRAY[1,2,3]);
INFO: [[3 3 Decimal('1') 1]]
 count | mean | std | min
-------+------+-----+-----
     3 |    3 |   1 |   1
(1 row)
</code></pre>
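Outside of PostgreSQL, the reshape trick this answer relies on can be seen in plain NumPy (a minimal sketch with made-up values):

```python
import numpy as np

stats = np.array([3.0, 3.0, 1.0, 1.0])  # shape (4,): PL/Python reads this as 4 rows of 1 value
table_row = stats.reshape(1, -1)        # shape (1, 4): one row with four columns
```

The `-1` lets NumPy infer the column count, so the same call works regardless of how many statistics are returned.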
postgresql|plpython|postgres-plpython
1
1,905,871
56,790,461
What is the time complexity of this agorithm (that solves leetcode question 650) (question 2)?
<p>Hello I have been working on <a href="https://leetcode.com/problems/2-keys-keyboard/" rel="nofollow noreferrer">https://leetcode.com/problems/2-keys-keyboard/</a> and came upon this dynamic programming question.</p> <p>You start with an 'A' on a blank page and you get a number n when you are done you should have n times 'A' on the page. The catch is you are allowed only 2 operations copy (and you can only copy the total amount of A's currently on the page) and paste --> find the minimum number of operations to get n 'A' on the page.</p> <p>I solved this problem but then found a better solution in the discussion section of leetcode --> and I can't figure out its time complexity.</p> <pre><code>def minSteps(self, n):
    factors = 0
    i=2
    while i &lt;= n:
        while n % i == 0:
            factors += i
            n /= i
        i+=1
    return factors
</code></pre> <p>The way this works is <code>i</code> is never gonna be bigger than the biggest prime factor <code>p</code> of <code>n</code> so the outer loop is <code>O(p)</code> and the inner while loop is basically <code>O(logn)</code> since we are dividing <code>n /= i</code> at each iteration.</p> <p>But the way I look at it we are doing <code>O(logn)</code> divisions in total for the inner loop while the outer loop is <code>O(p)</code> so using aggregate analysis this function is basically <code>O(max(p, logn))</code> is this correct ?</p> <p>Any help is welcome.</p>
<p>Your reasoning is correct: <em>O(max(p, logn))</em> gives the time complexity, assuming that arithmetic operations take constant time. This assumption is not true for arbitrary large <em>n</em>, that would not fit in the machine's fixed-size number storage, and where you would need Big-Integer operations that have <a href="https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Arithmetic_functions" rel="nofollow noreferrer">non-constant</a> time complexity. But I will ignore that.</p> <p>It is still odd to express the complexity in terms of <em>p</em> when that is not the input (but derived from it). Your input is only <em>n</em>, so it makes sense to express the complexity in terms of <em>n</em> alone.</p> <h2>Worst Case</h2> <p>Clearly, when <em>n</em> is prime, the algorithm is <em>O(n)</em> -- the inner loop never iterates.</p> <p>For a prime <em>n</em>, the algorithm will take more time than for <em>n+1</em>, as even the smallest factor of <em>n+1</em> (i.e. 2), will halve the number of iterations of the outer loop, and yet only add 1 block of constant work in the inner loop. </p> <p>So <em>O(n)</em> is the worst case.</p> <h2>Average Case</h2> <p>For the average case, we note that the division of <em>n</em> happens just as many times as <em>n</em> has prime factors (counting duplicates). For example, for <em>n = 12</em>, we have 3 divisions, as <em>n = 2·2·3</em></p> <p>The <a href="http://mathworld.wolfram.com/PrimeFactor.html" rel="nofollow noreferrer">average number of prime factors</a> for <em>1 &lt; n &lt; x</em> approaches <em>loglogn + B</em>, where <em>B</em> is some constant. So we could say the <em>average</em> time complexity for the total execution of the inner loop is <em>O(loglogn)</em>.</p> <p>We need to add to that the execution of the outer loop. This corresponds to the average <em>greatest</em> prime factor. 
For <em>1 &lt; n &lt; x</em> <a href="http://www.kurims.kyoto-u.ac.jp/EMIS/journals/INTEGERS/papers/n81/n81.pdf" rel="nofollow noreferrer">this average approaches <em>C.n/logn</em></a>, and so we have:</p> <p>&nbsp; &nbsp; &nbsp; &nbsp; <em>O(n/logn + loglogn)</em></p> <p>Now <em>n/logn</em> is the more important term here, so this simplifies to:</p> <p>&nbsp; &nbsp; &nbsp; &nbsp; <em>O(n/logn)</em></p>
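For reference, the algorithm under discussion restated as a runnable standalone function (with integer division so `n` stays an int, unlike the original's `n /= i`):

```python
def min_steps(n):
    """Minimum copy/paste operations = sum of n's prime factors (with multiplicity)."""
    factors = 0
    i = 2
    while i <= n:
        while n % i == 0:
            factors += i
            n //= i  # integer division instead of the original float division
        i += 1
    return factors

# min_steps(9) == 6: copy, paste, paste (3 'A's), then copy, paste, paste again
```

The prime case (e.g. `n = 13`) shows the O(n) worst case directly: the inner loop never fires and the outer loop runs all the way to `n`.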
python-3.x|algorithm|time-complexity
1
1,905,872
44,912,213
python datetime incorrect sorted
<p>I'm getting some database dates and they can get mixed up. I use the sorted() method to print these dates on a regular basis</p> <pre><code>datelist = ["1 2017", "2 2017", "6 2017", "7 2017", "8 2017",
            "3 2017", "4 2017", "5 2017", "9 2017", "10 2017",
            "11 2017", "12 2017"]

for i in sorted(datelist):
    print(i)
</code></pre> <p>But the result is terrible ..!</p> <pre><code>1 2017
10 2017
11 2017
12 2017
2 2017
3 2017
4 2017
5 2017
6 2017
7 2017
8 2017
9 2017
</code></pre> <p>How can I get these values regularly? Thanks in advance for help</p>
<p>Use the <code>key</code> parameter of <code>sorted</code> to parse the string and sort by a <code>datetime</code>:</p> <pre><code>from datetime import datetime

datelist = ["1 2017", "2 2017", "6 2017", "7 2017", "8 2017",
            "3 2017", "4 2017", "5 2017", "9 2017", "10 2017",
            "11 2017", "12 2017"]

for i in sorted(datelist, key=lambda s: datetime.strptime(s, '%m %Y')):
    print(i)
</code></pre> <p>Output:</p> <pre><code>1 2017
2 2017
3 2017
4 2017
5 2017
6 2017
7 2017
8 2017
9 2017
10 2017
11 2017
12 2017
</code></pre>
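An equivalent without datetime parsing is to sort on a `(year, month)` tuple — a sketch that also handles dates spanning multiple years:

```python
datelist = ["1 2017", "10 2017", "2 2016", "2 2017"]

def month_year_key(s):
    """Key for 'M YYYY' strings: year first, then month, both as integers."""
    month, year = s.split()
    return (int(year), int(month))

ordered = sorted(datelist, key=month_year_key)
```

Tuples compare element by element, so the year dominates and the month breaks ties, which is exactly the chronological order.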
python|sorting|tkinter
2
1,905,873
60,512,511
Get string from numpy array
<p>Im looking for an easy way to extract <code>filename1.npy</code> from following numpy array <code>test</code>:</p> <pre><code>array([['filename1.npy'],
       ['filename2.npy'],
       ['filename3.npy']], dtype=object)
</code></pre> <p>I can easily do: <code>str(test[1])</code> but then I still have those brackets around it <code>['filename1.npy']</code>. Though, I just want the name to insert it into <code>np.load(path+'filename1.npy')</code></p>
<p>Thanks for the quick response:</p> <p><code>test[0][0]</code></p> <p>works fine to get filename1.npy.</p> <p>To loop through all names, I now use:</p> <pre><code>for index in range(len(test)):
    name = test[index]
    name = name[0]
    # load single file
    single_file = np.load(os.path.join(path, name))
</code></pre>
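Alternatively, the (n, 1) object array can be flattened once so the loop iterates over plain strings — a sketch using the array from the question:

```python
import numpy as np

test = np.array([['filename1.npy'], ['filename2.npy'], ['filename3.npy']], dtype=object)
names = [str(name) for name in test.ravel()]  # ravel drops the inner-list nesting
# each entry can then be passed to np.load(os.path.join(path, name))
```

This avoids both the manual indexing and the hard-coded length.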
python|arrays|numpy
0
1,905,874
58,061,200
How to make a high frequency thread talk to a class that updates at a low frequency?
<h3>Summary</h3> <p>I am making a real-time physics simulation that needs a low delta_t. I have connected this simulation to a python-arcade game window to display the information in real-time.</p> <p>I made a separate thread for the physics because there are some expensive matrix multiplications in the physics thread. Then, when an update is done I set the resulting states of the game window class that the game window can display whenever it draws a new frame. </p> <p>Therefore, my thought process would be that the game window class only has to worry about drawing on the screen, while the physics thread takes care of all of the computations.</p> <p>However, there is a bottleneck in communicating between the game window and the thread and I don't have the under-the-hood insight.</p> <h3>Minimal Representation of what I want to do:</h3> <pre class="lang-py prettyprint-override"><code>import threading
import time
import math

import arcade


class DisplayWindow(arcade.Window):
    def __init__(self):
        super().__init__(width=400, height=400)
        self.state = 0
        self.FPS = 0

    def set_state(self, state):
        self.state = state

    def on_update(self, delta_time: float):
        self.FPS = 1. / delta_time

    def on_draw(self):
        arcade.start_render()
        arcade.draw_text(f'FPS: {self.FPS:0.2f}', 20, 20, arcade.color.WHITE)
        arcade.draw_rectangle_filled(center_x=self.state * self.width,
                                     center_y=self.height/2,
                                     color=arcade.color.WHITE,
                                     tilt_angle=0,
                                     width=10,
                                     height=10)


# Thread to simulate physics.
def simulation(display):
    t_0 = time.time()
    while True:
        # Expensive calculation that needs high frequency:
        t = time.time() - t_0
        x = math.sin(t) / 2 + 0.5  # sinusoid for demonstration

        # Send it to the display window
        display.set_state(state=x)
        # time.sleep(0.1)  # runs smoother with this


def main():
    display_window = DisplayWindow()
    physics_thread = threading.Thread(target=simulation, args=(display_window,), daemon=True)
    physics_thread.start()
    arcade.run()
    return 0


if __name__ == '__main__':
    main()
</code></pre> <p><strong><em>Expected result:</em></strong> Smooth simulation with high frame-rates. The arcade window only has to run the on_draw at 30 or 60 fps. It only has to draw a few things.</p> <p><strong><em>Actual result:</em></strong> The physics loop runs super fast and the FPS drops.</p> <p>When I add a time.sleep(0.1) to the physics thread, the whole thing becomes much smoother, I guess for some reason <code>set_state( _ )</code> slows down the draw loop.</p>
<p>Python threads might not be the ideal tool for the job you are trying to do.</p> <p>Although it may be tempting to think Python threads as running concurrently, they are not : the Global Interpreter Lock (GIL) only allows one thread to control the Python interpreter. <a href="https://realpython.com/python-gil/" rel="nofollow noreferrer">More info</a></p> <p>Because of that, the <code>arcade.Window</code> object does not get an early chance to control the Python interpreter and run all its update functions because the GIL stays "focused" on the infinite loop in the <code>simulation</code> function of your <code>physics_thread</code>. </p> <p>The GIL will only release the focus on the <code>physics_thread</code> and look for something else to do on other threads after a certain number of instructions were run or the <code>physics_thread</code> is set to sleep using <code>time.sleep()</code> <a href="https://stackoverflow.com/questions/92928/time-sleep-sleeps-thread-or-process">which performs on threads</a>. Which is exactly what you empirically found to restore the expected behavior of the program.</p> <p>This is an example of a typical problem called thread starvation, which can be solved by using the <a href="https://docs.python.org/2/library/multiprocessing.html" rel="nofollow noreferrer">multiprocessing library</a>. This comes with a bit more complexity, but will separate your CPU-intensive computation and your lightweight event-based interface in separate processes, thereby solving your problem.</p>
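A minimal sketch of the cooperative-thread pattern the asker stumbled on empirically: sleeping briefly each tick yields the GIL so the UI thread is no longer starved, and an `Event` gives a clean shutdown. The computation here is a placeholder standing in for the physics step:

```python
import threading
import time

def simulation(state, stop, period=0.001):
    """Physics loop that sleeps each tick so the GIL can service the UI thread."""
    t0 = time.time()
    while not stop.is_set():
        state["x"] = (time.time() - t0) % 1.0  # stand-in for the real computation
        time.sleep(period)  # releases the GIL; prevents starving other threads

state, stop = {}, threading.Event()
worker = threading.Thread(target=simulation, args=(state, stop), daemon=True)
worker.start()
time.sleep(0.05)   # the arcade draw loop would run here
stop.set()
worker.join(timeout=1)
```

For truly CPU-bound physics this only trades throughput for responsiveness; the multiprocessing route described above sidesteps the GIL entirely.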
python|python-multithreading|arcade
1
1,905,875
58,123,196
Make lists from elements of JSON file
<p>I would like to make three different lists out of a JSON file (<a href="https://www.dropbox.com/s/woxtqy634g89f0r/tags.json?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/woxtqy634g89f0r/tags.json?dl=0</a>) say, A, B, and C:</p> <pre><code>A: (13,13,...,37)
B: (12,14,...,32)
C: ([-1.3875, 1.3875, -1.95 ], [[-2.775, -0., -0., ], .., [-0., 2.775, -0. ]])
</code></pre> <p>Is there any quick and smart way to do it?</p> <p>I tried in jupyter-notebook:</p> <pre><code>with open('tags.json', 'r') as f:
    tags = json.load(f)

tags['neb.00']
</code></pre> <p>the output would give me </p> <blockquote> <p>'13-&gt;12 0.2906332796567428 [-1.3875 1.3875 -1.95 ]'</p> </blockquote> <p>And then I can do something like <code>tags['neb.03'][0:2]</code> to get <code>13</code> and so on. But this is not a good way of doing and I hope there is a proper way to do it. Could anyone help me, please?</p>
<pre><code>for k, v in data.items():
    A = v[:v.index('-')]
    B = v[v.index('&gt;')+1:v.index('[')]
    C = v[v.index('['):]
    print(k, A, B, C)
</code></pre> <p>.items() iterates through dict keys(k) and values(v)</p> <p>i don't know if that's the answer you were looking for but looking at your data, it seems like it has a stable structure. i guess you can append each value to a list.</p>
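A regex-based alternative that names the three parts explicitly — this assumes the `'A->B score [vector]'` layout shown in the question holds for every value:

```python
import re

def parse_tag(value):
    """Split strings like '13->12 0.29 [-1.3875 1.3875 -1.95 ]' into (A, B, C)."""
    m = re.match(r"(\d+)->(\d+)\s+(\S+)\s+(\[.*\])", value)
    a, b, _score, c = m.groups()
    return a, b, c
```

Unlike the index-slicing version, this keeps working if a value contains extra `-` characters inside the bracketed vector.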
python|json
2
1,905,876
58,114,082
Why is my plot updated by panel (twice) when I change a button setting that shouldn't trigger any updates? (Panel Holoviz)
<p>I made a class to explore and train models.<br></p> <p>When I change the value of dropdown 'choose_model_type' in the code example below, I would expect nothing to change in my dashboard, since there are no <code>@param.depends('choose_model_type', watch=True)</code> in my class. However, my dashboard gets updated, when I change the value of dropdown 'choose_model_type'. In this case function plot_y() gets triggered twice if I look at the logs:</p> <blockquote> <p>2019-09-26 11:24:42,802 starting plot_y<br> 2019-09-26 11:24:42,825 starting plot_y</p> </blockquote> <p>This is for me unexpected behavior. I don't want plot_y() to be triggered when I change 'choose_model_type'.<br> How do i make sure that plot_y gets triggered only when 'y' changes (and my plot is updated in the dashboard) and not when other parameters such as dropdown change? <br> I want to control what gets triggered when, but for me there seems to be some magic going on. <br></p> <p>Other related question is:<br> Why does plot_y() get triggered twice? If I change 'pred_target' it also triggers plot_y() twice. 
Same happens when I change the value of 'choose_model_type': plot_y() gets triggered twice.</p> <pre><code># library imports import logging import numpy as np import pandas as pd import hvplot import hvplot.pandas import holoviews as hv from holoviews.operation.datashader import datashade, dynspread hv.extension('bokeh', logo=False) import panel as pn import param # create some sample data df = pd.DataFrame(np.random.choice(100, size=[50, 2]), columns=['TARGET1', 'TARGET2']) # class to train my models with some settings class ModelTrainer(param.Parameterized): logging.info('initializing class') pred_target = param.Selector( default='TARGET1', objects=['TARGET1', 'TARGET2'], label='Choose prediction target' ) choose_model_type = param.Selector( default='LINEAR', objects=['LINEAR', 'LGBM', 'RANDOM_FOREST'], label='Choose type of model', ) y = df[pred_target.default] # i expect this function only to be triggered when pred_target changes @param.depends('pred_target', watch=True) def _reset_variables(self): logging.info('starting reset variables') self.y = df[self.pred_target] # i expect plot_y() only to be triggered when y changes @param.depends('y', watch=True) def plot_y(self): logging.info('starting plot_y') self.y_plot = dynspread(datashade(self.y.hvplot.scatter())) return self.y_plot model_trainer = ModelTrainer() # show model dashboard pn.Column( pn.Row(model_trainer.param['pred_target']), pn.Row(model_trainer.param['choose_model_type']), pn.Row(model_trainer.plot_y) ).servable() </code></pre> <p><a href="https://i.stack.imgur.com/jOLw4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jOLw4.png" alt="dropdown_changes_dashboard_shouldnt_happen"></a></p>
<p>The problem here is one of validation, specifically the issue is here: <code>@param.depends('y', watch=True)</code>. <code>y</code> is not a parameter in your example, therefore param.depends can't resolve it and ends up falling back to depending on all parameters. I've filed an issue to resolve this <a href="https://github.com/pyviz/param/issues/357" rel="nofollow noreferrer">here</a>. If you change your example to:</p> <pre><code>y = param.Series(default=df[pred_target.default]) </code></pre> <p>It will work, however you will still have the issue with the callback being called twice. This is because you have set <code>watch=True</code> in the depends declaration. Setting <code>watch=True</code> only makes sense for methods that have side-effects, if your method is something that returns a value then it will rarely make sense to set it. To expand on that, when you pass the method to panel, e.g. <code>pn.Row(model_trainer.plot_y)</code>, it will automatically watch the parameters and call the method to update the plot when the parameters change.</p>
python|holoviews|holoviz|panel-pyviz
2
1,905,877
58,088,435
Error when using Pandas.remove_duplicates()
<p>I am attempting to use Pandas.drop_duplicates() by considering only a certain subset but am getting an error <code>KeyError: Index(['days'], dtype='object')</code></p> <p>The Index is as follows: <code>id, event_description, attribute1, attribute 2, attribute 3, days, days_supply, days_equivalent</code></p> <p>I want to ignore attribute 2 and attribute 3 so I have run the following</p> <pre class="lang-py prettyprint-override"><code>df = df.drop_duplicates(subset=['id', 'event_description', 'attribute1', 'days',
                                'days_supply', 'days_equivalent'])
</code></pre> <p>Which returns: </p> <pre class="lang-py prettyprint-override"><code>KeyError                                  Traceback (most recent call last)
&lt;ipython-input-4-3f7da32b380f&gt; in &lt;module&gt;
      7
      8 df = df.drop_duplicates(subset=['id', 'event_description', 'attribute1', 'days',
----&gt; 9                                 'days_supply', 'days_equivalent'])
     10
     11 print(df)

/anaconda3/lib/python3.6/site-packages/pandas/core/frame.py in drop_duplicates(self, subset, keep, inplace)
   4892
   4893         inplace = validate_bool_kwarg(inplace, "inplace")
-&gt; 4894         duplicated = self.duplicated(subset, keep=keep)
   4895
   4896         if inplace:

/anaconda3/lib/python3.6/site-packages/pandas/core/frame.py in duplicated(self, subset, keep)
   4949             diff = Index(subset).difference(self.columns)
   4950             if not diff.empty:
-&gt; 4951                 raise KeyError(diff)
   4952
   4953         vals = (col.values for name, col in self.items() if name in subset)

KeyError: Index(['days'], dtype='object')
</code></pre> <p>Once I remove <code>days</code>, the remove duplicates runs without flaw, but I do need to make sure I consider <code>days</code>. What does the error require that I fix?</p>
<p>Had to re-check column names: the DataFrame column was <code>Days</code> (capitalized), not <code>days</code>, so the <code>KeyError</code> simply meant the subset named a column that doesn't exist.</p>
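A small defensive wrapper can surface this kind of misspelling with a clearer message — a sketch, not part of the original answer:

```python
import pandas as pd

def drop_duplicates_checked(df, subset):
    """drop_duplicates, but with an explicit error for misspelled column names."""
    missing = [col for col in subset if col not in df.columns]
    if missing:
        raise KeyError(f"columns not in DataFrame (check capitalization): {missing}")
    return df.drop_duplicates(subset=subset)
```

Comparing `subset` against `df.columns` up front turns pandas' generic `KeyError: Index([...])` into a message that points straight at the offending names.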
python|python-3.x|pandas
4
1,905,878
56,269,638
Advanced List Coding using multiple lists
<p>So we are given two lists.</p> <pre><code>groups = [[0,1],[2],[3,4,5],[6,7,8,9]]

A = [[[0, 1, 6, 7, 8, 9], [0, 1, 6, 7, 8, 9]],
     [[2]],
     [[3, 4, 5, 6, 7, 8, 9], [3, 4, 5, 6, 7, 8, 9], [3, 4, 5, 6, 7, 8, 9]],
     [[0, 1, 3, 4, 5, 6, 8, 9], [0, 1, 3, 4, 5, 7, 8, 9],
      [0, 1, 3, 4, 5, 6, 7, 8, 9], [0, 1, 3, 4, 5, 6, 7, 8, 9]]]
</code></pre> <p>How do we replace the elements in <code>A</code> with their corresponding indexes in groups: i.e., replace the <code>0</code> and <code>1</code> in <code>A</code> with <code>0</code>, the <code>2</code> in <code>A</code> with <code>1</code>, the <code>3</code>, <code>4</code> and <code>5</code> with <code>2</code> and so on.</p> <p>Output:</p> <pre><code>A = [[[0, 0, 3, 3, 3, 3], [0, 0, 3, 3, 3, 3]],
     [[1]],
     [[2, 2, 2, 3, 3, 3, 3], [2, 2, 2, 3, 3, 3, 3], [2, 2, 2, 3, 3, 3, 3]],
     [[0, 0, 2, 2, 2, 3, 3, 3], [0, 0, 2, 2, 2, 3, 3, 3],
      [0, 0, 2, 2, 2, 3, 3, 3, 3], [0, 0, 2, 2, 2, 3, 3, 3, 3]]]
</code></pre>
<p>Even though there is no attempt from your side, here you go :</p> <pre><code>def f(l,i):
    for k in l:
        if i in k:
            return l.index(k)

output_ = [[[f(groups,n) for n in a0] for a0 in a] for a in A]
</code></pre> <p><strong>Output</strong> : </p> <pre><code>[[[0, 0, 3, 3, 3, 3], [0, 0, 3, 3, 3, 3]],
 [[1]],
 [[2, 2, 2, 3, 3, 3, 3], [2, 2, 2, 3, 3, 3, 3], [2, 2, 2, 3, 3, 3, 3]],
 [[0, 0, 2, 2, 2, 3, 3, 3], [0, 0, 2, 2, 2, 3, 3, 3],
  [0, 0, 2, 2, 2, 3, 3, 3, 3], [0, 0, 2, 2, 2, 3, 3, 3, 3]]]
</code></pre>
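The per-element scan of `groups` can be avoided by building an element-to-group-index mapping once — a sketch of that variant:

```python
groups = [[0, 1], [2], [3, 4, 5], [6, 7, 8, 9]]

# Precompute element -> group index once, instead of rescanning `groups` per element.
index_of = {n: gi for gi, grp in enumerate(groups) for n in grp}

def remap(A):
    """Replace every element of the nested lists with its group index."""
    return [[[index_of[n] for n in row] for row in block] for block in A]
```

With the dict, each lookup is O(1), which matters once `groups` and `A` grow large.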
python
0
1,905,879
18,687,015
RegEx in Python not returning matches
<p>I am trying to extract certain lines out of a file if they match certain criteria. Specifically, column [3] needs to start with Chr3:, and column [13] needs to be "yes".</p> <p>Here are examples of lines that match and do not match the criteria:</p> <blockquote> <pre><code>XLOC_004170 XLOC_004170 - Ch3:14770-25031 SC_JR32_Female SC_JR32_Male OK 55.8796 9.2575 -2.59363 -0.980118 0.49115 0.897554 no
XLOC_004387 XLOC_004387 - Ch3:3072455-3073591 SC_JR32_Female SC_JR32_Male OK 0 35.4535 inf -nan 5e-05 0.0149954 yes
</code></pre> </blockquote> <p>The python script I am using is:</p> <pre><code>with open(input_file) as fp:  # fp is the file handle
    for line in fp:  # line is the iterator
        line=line.split("\t")
        locus = str(line[3])
        significance = str(line[13])
        print(locus)
        print(significance)
        if (re.match('Chr3:[0-9]+-[0-9]+',locus,flags=0) and re.match('yes',significance,flags=0)):
            output.write(("%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n")%(line[0],line[1],line[2],line[3],line[4],line[5],line[6],line[7],line[8],line[9],line[10],line[11],line[12],line[13]))
</code></pre> <p>I would really be grateful if anyone could explain why this script returns no outputs. </p>
<p>There's really no reason to use regex here. Note that your pattern looks for <code>Chr3:</code> while the example data contains <code>Ch3:</code> — that mismatch is why nothing matched. Comparing plain strings makes the check easier to get right:</p> <pre><code>with open(input_file) as handle:
    for line in handle:
        cells = line.split('\t')
        locus = cells[3]
        significance = cells[13].strip()  # strip the trailing newline on the last field
        if locus.startswith('Ch3:') and significance == 'yes':
            output.write('\t'.join(cells))
</code></pre>
python|regex
3
1,905,880
71,685,255
Parse list of dicts from a pandas Series into columns
<p>I have a pandas series containing a list of dictionaries. I'd like to parse the contents of the dicts with some condition and store the results into new columns.</p> <p>Here's some data to work with:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'d': [[{'br': 1, 'ba': 1, 'r': 100},
                          {'ba': 1, 'r': 80},
                          {'br': 2, 'ba': 1, 'r': 150},
                          {'br': 1, 'ba': 1, 'r': 90}],
                         [{'br': 1, 'ba': 1, 'r': 100},
                          {'ba': 1, 'r': 80},
                          {'br': 2, 'ba': 1, 'r': 150}]],
                   'id': ['xxas', 'yxas'],
                   'name': ['A', 'B']
                   })
</code></pre> <p>I'd like to parse the contents of each dictionary with some conditional logic. Check each dict in the list and make columns named after the keys <code>br</code> and <code>ba</code>. Take the <code>value</code> of the <code>r</code> <code>key</code> and assign it as a column value. If a key is not found (<code>br</code> in this example), assign <code>0</code> as the value. Expected output:</p> <pre><code>id    br  ba  r    name
xxas  1   1   100  A
xxas  0   1   80   A
xxas  2   1   150  A
xxas  1   1   90   A
yxas  1   1   100  B
yxas  0   1   80   B
yxas  2   1   150  B
</code></pre>
<p>From your <code>DataFrame</code>, we first set the columns <code>id</code> and <code>name</code> as index like so :</p> <pre class="lang-py prettyprint-override"><code>df = df.set_index(['id', 'name'])
</code></pre> <p>Then, we can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.concat.html" rel="nofollow noreferrer">concat</a> to get the expected result :</p> <pre class="lang-py prettyprint-override"><code>(pd.concat(df.d.apply(pd.DataFrame).tolist(), keys=df.index)
   .reset_index(level=[0, 1])
   .reset_index(drop=True)
   .fillna(0)
   .rename(columns={'level_0': 'id', 'level_1': 'name'}))
</code></pre> <p>Output :</p> <pre><code>     id name   br  ba    r
0  xxas    A  1.0   1  100
1  xxas    A  0.0   1   80
2  xxas    A  2.0   1  150
3  xxas    A  1.0   1   90
4  yxas    B  1.0   1  100
5  yxas    B  0.0   1   80
6  yxas    B  2.0   1  150
</code></pre>
python|pandas|dictionary
2
1,905,881
71,641,519
What is the best way to find all pairs in a list with a given condition?
<p>Say I have a list <code>['ab','cd','ac','de']</code> and I want to find all possible pairs where both elements do not have any repeating letter.</p> <p>The answer would be <code>['ab','cd'],['ab','de'],['ac','de']</code>.</p> <p>But I do not know the best possible way/algorithm to solve this. Currently what I am doing is using a nested <code>for</code> loop to check with each element and then removing it to shorten the list, but when my list is 1000s of lines long and the elements more complicated, it starts to take a noticeable amount of time.</p> <pre><code>arr4 = ['ab','cd','ac','de'] for i in arr4: for x in arr4: if not bool(set(i)&amp;set(x)): print('['+i+','+x+']') print('----') arr4.remove(i) </code></pre>
<p>Your solution isn't bad, as far as the basic algorithm goes, but you shouldn't modify the list you're iterating over like you do. Also, there's some optimisation possible:</p> <pre><code>from random import sample from timeit import timeit from itertools import combinations size = 3 # length of the strings chars = [chr(n) for n in range(97, 123)] strings = {''.join(sample(chars, size)) for _ in range(1000)} # your solution as a one-liner def pair_strs1(ps): return [(p, q) for p in ps for q in ps if not (set(p) &amp; set(q))] # this doesn't work, unless there's no initial pairs with double letters like 'aa' def pair_strs2(ps): return [(p, q) for p in ps for q in ps if len(set(p+q)) == 4] # your solution, but compute the sets only once, then look them up before combining def pair_strs3(ps): ps = {p: set(p) for p in ps} return [(p, q) for p in ps for q in ps if not (ps[p] &amp; ps[q])] # same, but avoiding mirrored pairs, only compute above the diagonal of the product def pair_strs4(ps): ps = {p: set(p) for p in ps} keys = list(ps.keys()) return [(p, q) for i, p in enumerate(ps) for q in keys[i+1:] if not (ps[p] &amp; ps[q])] # the same again, but now avoiding the lookup in the dictionaries def pair_strs5(ps): ps = {p: set(p) for p in ps} items = list(ps.items()) return [(p, q) for i, (p, p_set) in enumerate(ps.items()) for q, q_set in items[i+1:] if not (p_set &amp; q_set)] # different approach, creating the product (upper half above diagonal) first def pair_strs6(ps): ps = {p: set(p) for p in ps} ps = combinations(ps.items(), 2) return [(p, q) for (p, p_set), (q, q_set) in ps if not (p_set &amp; q_set)] # the same as 5, but with integer bitmasks instead of sets, as per @kellybundy's suggestion def pair_strs7(ps): ps = {p: sum(1 &lt;&lt; (ord(c) - 97) for c in p) for p in ps} items = list(ps.items()) return [(p, q) for i, (p, p_mask) in enumerate(ps.items()) for q, q_mask in items[i+1:] if not (p_mask &amp; q_mask)] # run some tests n = 10 print(timeit(lambda: 
pair_strs1(strings), number=n)) print(timeit(lambda: pair_strs3(strings), number=n)) print(timeit(lambda: pair_strs4(strings), number=n)) print(timeit(lambda: pair_strs5(strings), number=n)) print(timeit(lambda: pair_strs6(strings), number=n)) print(timeit(lambda: pair_strs7(strings), number=n)) p1 = pair_strs1(strings) p3 = pair_strs3(strings) p4 = pair_strs4(strings) p4 += [tuple(reversed(p)) for p in p4] p5 = pair_strs5(strings) p5 += [tuple(reversed(p)) for p in p5] p6 = pair_strs6(strings) p6 += [tuple(reversed(p)) for p in p6] p7 = pair_strs7(strings) p7 += [tuple(reversed(p)) for p in p7] print(set(p1) == set(p3) == set(p4) == set(p5) == set(p6) == set(p7) and len(p1) == len(p3) == len(p4) == len(p5) == len(p6) == len(p7)) </code></pre> <p>Result for me:</p> <pre class="lang-none prettyprint-override"><code>3.4115294000002905 1.7107413000012457 0.8492871999987983 0.7485178000024462 0.7776397999987239 0.37149049999788986 True </code></pre> <p>So, the final change doesn't really help - computing the product beforehand doesn't save enough. <code>pair_strs5</code> appears to be preferable.</p> <p>Edit: changed the generation of strings to include a size (and set the default to 3 instead of 2), adjusted the naming accordingly, and avoided duplicate characters in strings (as per the OP's description).</p> <p>This also makes user @KellyBundy's solution the fastest, since the lack of duplicates allows for a very clean integer-based solution, #7. This is the clear winner under these conditions (about a 9x speed-up).</p>
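<p>To see the bitmask trick from <code>pair_strs7</code> in isolation (lowercase a-z assumed):</p>

```python
def mask(s):
    # one bit per letter: 'a' -> bit 0, 'b' -> bit 1, ..., 'z' -> bit 25
    m = 0
    for c in s:
        m |= 1 << (ord(c) - 97)
    return m

def disjoint(p, q):
    # the strings share no letter iff their masks share no set bit
    return mask(p) & mask(q) == 0

print(disjoint('ab', 'cd'), disjoint('ab', 'ac'))  # True False
```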
python|algorithm
2
1,905,882
69,635,716
Why does the Range() method in Python have an optional parameter yet its second parameter is not optional?
<p>Newbie here. As I understand it, you cannot have non-optional arguments after optional ones, so when I saw the documentation for the range() function I was quite confused. What is going on here, am I missing something?</p> <p>It says the start parameter is optional and defaults to 0, but the second parameter stop is mandatory.</p> <p>Link to range() function: <a href="https://docs.python.org/3/library/stdtypes.html" rel="nofollow noreferrer">https://docs.python.org/3/library/stdtypes.html</a></p>
<p>Depending on the number of arguments range is passed, the behavior changes.</p> <ul> <li>If 1 argument is passed, it treats the start as implicit and the only argument as the stop value.</li> <li>If 2 arguments are passed, it treats the first argument as the start value and the second as the stop value.</li> <li>If 3 arguments are passed, it treats the first argument as the start value, the second as the stop value, and the third as the step size.</li> </ul> <p>Below is a contrived example of how such a function could be implemented (the arguments are compared against <code>None</code> rather than tested for truthiness, so that <code>0</code> is still a valid value):</p> <pre class="lang-py prettyprint-override"><code>def myrange(arg1, arg2=None, arg3=None): if arg2 is None: return range(arg1) # arg1 is the stop value if arg3 is None: return range(arg1, arg2) # start, stop return range(arg1, arg2, arg3) # start, stop, step </code></pre>
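<p>A quick check of the three call shapes:</p>

```python
assert list(range(5)) == [0, 1, 2, 3, 4]   # stop only; start defaults to 0
assert list(range(2, 5)) == [2, 3, 4]      # start, stop
assert list(range(1, 10, 3)) == [1, 4, 7]  # start, stop, step
print("ok")
```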
python|range
2
1,905,883
57,616,108
Pagination loop with offset 100
<p>I am working on code where I am fetching records from an API, and this API has pagination implemented on it that allows a maximum of 100 records per request. So I have to loop in multiples of 100. Currently my code compares the total records and loops from <em>offset</em> 100 and then 101, 102, 103, etc. I want it to loop in 100's (like 100, 200, 300) and stop as soon as the offset is greater than the total records. I am not sure how to do this; I have partial code which increments by 1 instead of 100 and won't stop when needed. Could anyone please help me with this issue?</p> <pre><code>import pandas as pd from pandas.io.json import json_normalize #Token for Authorization API_ACCESS_KEY = 'Token' Accept='application/xml' #Query Details that is passed in the URL since = '2018-01-01' until = '2018-02-01' limit = '100' offset = '0' total = 'true' def get(): url_address = "https://mywebsite/web?offset="+str('0') headers = { 'Authorization': 'token={0}'.format(API_ACCESS_KEY), 'Accept': Accept, } querystring = {"since":since,"until":until, "limit":limit, "total":total} # find out total number of pages r = requests.get(url=url_address, headers=headers, params=querystring).json() total_record = int(r['total']) print("Total record: " +str(total_record)) # results will be appended to this list all_items = [] # loop through all offset and return JSON object for offset in range(0, total_record): url = "https://mywebsite/web?offset="+str(offset) response = requests.get(url=url, headers=headers, params=querystring).json() all_items.append(response) offset = offset + 100 print(offset) # prettify JSON data = json.dumps(all_items, sort_keys=True, indent=4) return data print(get()) </code></pre> <p>Currently when I print the offset I see<br> Total Records: 345<br> 100,<br> 101,<br> 102, </p> <p>Expected:<br> Total Records: 345<br> 100,<br> 200,<br> 300<br> Stop the loop! </p>
<p>One way you could do it is change</p> <pre><code>for offset in range(0, total_record): url = "https://mywebsite/web?offset="+str(offset) response = requests.get(url=url, headers=headers, params=querystring).json() all_items.append(response) offset = offset + 100 print(offset) </code></pre> <p>to</p> <pre><code>for offset in range(0, total_record, 100): url = "https://mywebsite/web?offset="+str(offset) response = requests.get(url=url, headers=headers, params=querystring).json() all_items.append(response) print(offset) </code></pre> <p>as you cannot change offset inside the loop</p>
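<p>The offset arithmetic, checked in isolation (the total is taken from the question):</p>

```python
import math

total_record = 345
limit = 100

# range's step argument produces exactly the page offsets and stops
# once the next offset would be >= total_record
offsets = list(range(0, total_record, limit))
pages = math.ceil(total_record / limit)

print(offsets, pages)  # [0, 100, 200, 300] 4
```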
python|api|pagination
5
1,905,884
57,693,109
python: make nested list based on elements on single list
<p>I have a list of strings:</p> <pre><code>ls = ['elev', 'solRd'] </code></pre> <p>I want to create a new list, with nested lists of two elements, where the second element better explains the meaning of the first one. </p> <pre><code>ls.out = [["elev", "elevation"], ["solRd", "solRadiation"]] </code></pre> <p>I have only a few strings, which will repeat, so I would like to specify it manually. </p> <p>I.e. if an element is <code>'elev'</code> -> the new pair item will be <code>'elevation'</code>; if an element is <code>'solRd'</code> -> the new element is <code>'solarRadiation'</code>, etc.</p> <p>This seems pretty easy but I am relatively new to python and I cannot figure it out. </p> <p>I have tried to subset my element by name <code>ls['a']</code> and include it in the new list, but even the subsetting by name did not work. I don't want to subset it by index, in case my string order changes. </p>
<pre><code>meanings = { "elev": "elevation", "solRd": "solRadiation" } ls = ["elev", "solRd"] lists = [[item, meanings.get(item, "")] for item in ls] </code></pre>
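<p>Items missing from the mapping fall back to the <code>.get</code> default instead of raising <code>KeyError</code> (<code>"tempC"</code> below is a made-up unmapped key):</p>

```python
meanings = {"elev": "elevation", "solRd": "solRadiation"}
ls = ["elev", "solRd", "tempC"]

# .get(item, "") returns "" for any key absent from the dict
lists = [[item, meanings.get(item, "")] for item in ls]
print(lists)  # [['elev', 'elevation'], ['solRd', 'solRadiation'], ['tempC', '']]
```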
python
2
1,905,885
57,396,744
How to detect increases and decreases of values per column by condition?
<p>I have the following DataFrame <code>df</code>:</p> <pre><code>id col1 col2 col3 111 22 3 10 222 21 4 11 333 22 5 5 444 5 3 4 555 6 3 4 666 4 4 3 777 7 2 8 </code></pre> <p>I need to solve a tricky task. I want to find all columns that have increases in values when <code>col1</code> values are higher than <code>20</code>. By "increase in value" I mean a value greater than a column median by at least 30% for at least 65% of rows of <code>col1</code> when <code>col1</code> values are higher than <code>20</code>.</p> <p>In my example, there are 3 rows when <code>col1</code> values are higher than <code>20</code>:</p> <pre><code>id col1 col2 col3 111 22 3 10 222 21 4 11 333 22 5 5 </code></pre> <p>Among these rows, the 1st and 2nd rows of <code>col3</code> have increases in values by at least 30% with respect to median (the median of <code>col3</code> is equal to 5). This condition does not apply to 3rd row of <code>col3</code>, but it's fine since it should work for at least 65% of rows, i.e. 65% of 3 rows is 1.95 ~2 rows. </p> <p>The expected output is (a different output format is also fine, but it should be clear that <code>col3</code> was identified):</p> <pre><code>col3 </code></pre>
<p>IIUC, in your example, you should output <code>col2</code> and <code>col3</code></p> <pre><code>medians = df.median() s = df[df.col1.gt(20)] base = s.gt(medians + 0.3 * medians.abs()) (base.sum()/base.count()).gt(0.65) </code></pre> <hr> <pre><code>col2 True col3 True dtype: bool </code></pre>
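<p>End-to-end on the sample frame (with <code>id</code> moved into the index so it is not scored; note that <code>col1</code> trivially passes its own threshold here, so in practice you may want to drop it before reporting):</p>

```python
import pandas as pd

df = pd.DataFrame({'id': [111, 222, 333, 444, 555, 666, 777],
                   'col1': [22, 21, 22, 5, 6, 4, 7],
                   'col2': [3, 4, 5, 3, 3, 4, 2],
                   'col3': [10, 11, 5, 4, 4, 3, 8]}).set_index('id')

medians = df.median()                       # col1: 7, col2: 3, col3: 5
s = df[df.col1.gt(20)]                      # the three rows with col1 > 20
base = s.gt(medians + 0.3 * medians.abs())  # True where a value exceeds its median by 30%
result = (base.sum() / base.count()).gt(0.65)
print(result)
```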
python|pandas
1
1,905,886
42,142,046
Extending query arguments in graphene/graphene_django
<p>How do I add non-field arguments to a GraphQL query in graphene? Here's an example of a use case. I'd like to be able to do:</p> <pre><code>{ hsv(h: 40, s: 128, v: 54) { r g b name } </code></pre> <p>with this Django model:</p> <pre><code>from django.db import models from django.core.validators import MinValueValidator, MaxValueValidator, class Color(models.Model): name = models.CharField( "name", max_length=24, null=False, blank=False) r = models.IntegerField( "red", null=False, blank=False, validators=[MinValueValidator(0), MinValueValidator(255)] ) g = models.IntegerField( "green", null=False, blank=False, validators=[MinValueValidator(0), MinValueValidator(255)] ) b = models.IntegerField( "blue", null=False, blank=False, validators=[MinValueValidator(0), MinValueValidator(255)] ) </code></pre> <p>and this GraphQL object type and Query based on it:</p> <pre><code>from graphene import ObjectType, IntegerField, Field, relay from graphene_django import DjangoObjectType from .django import Color from colorsys import hsv_to_rgb class ColorNode(DjangoObjectType): r = IntegerField() g = IntegerField() b = IntegerField() class Meta: model = Color class Query(ObjectType): rgb = relay.node.Field(ColorNode) hsv = relay.node.Field(ColorNode) named = relay.node.Field(ColorNode) def resolve_rgb(self, args, context, info): if not all(map(lambda x: x in args, ['r', 'g', 'b'])): # Arguments missing return None return Color.objects.get(**args) def resolve_hsv(self, args, context, info): if not all(map(lambda x: x in args, ['h', 's', 'v'])): # Arguments missing return None r, g, b = hsv_to_rgb(args['h'], args['s'], args['v']) return Color.objects.get(r=r, g=g, b=b) def resolve_named(self, args, context, info): if not 'name' in args: # Arguments missing return None return Color.objects.get(name=args['name']) </code></pre> <p>It fails because the arguments aren't accepted. What am I missing?</p>
<p>The answer turns out to be simple. To add arguments to the resolver, declare the arguments in the constructor of the field, like this:</p> <pre><code>rgb = relay.node.Field(ColorNode, r=graphene.String(), g=graphene.String(), b=graphene.String()) hsv = relay.node.Field(ColorNode, h=graphene.String(), s=graphene.String(), v=graphene.String())) named = relay.node.Field(ColorNode, name=graphene.String()) </code></pre> <p>The arguments may then be included in the query, as shown above.</p>
django-models|graphql|graphene-python
5
1,905,887
54,008,000
pxssh - Output not returning correctly with certain length command
<p>When using pxssh from pexpect (Python 3-v4.6.0, Python 2-v4.2.1) to execute a command, the output of the command is not returned, only the command itself along with a control character (example below). This only occurs when using a command with a particular length, for example when setting the window size to 200, a command of length 189 character will trigger this behaviour, for a window size of 300, a 246 character command etc.</p> <p>Example code with respective outputs:</p> <p>Setup:</p> <pre><code>from pexpect import pxssh conn = pxssh.pxssh() conn.login(host, user, password) conn.setwinsize(500, 200) conn.setecho(False) conn.sendline('') conn.prompt(1) conn.prompt(1) </code></pre> <p>Correct Expected Output: </p> <pre><code>conn.sendline('l'*188) conn.prompt(1) conn.before b'llll**snip**lllll\x1b[Kl\r\n-sh: lllll*snip*lllll: command not found\r\n' </code></pre> <p>Incorrect Output:</p> <pre><code>conn.sendline('l'*189) conn.prompt(1) conn.before b'lllll**snip**lllll\r\x1b[A' </code></pre> <p>Correct Expected Output: </p> <pre><code>conn.sendline('l'*190) conn.prompt(1) conn.before b'lllll**snip**llllll\x1b[Kl\r\n-sh: lllll**snip**llllll: command not found\r\n' </code></pre> <p>Does anyone know what might be causing this?</p>
<p>In case anyone comes across this problem in the future: the issue occurs when the command length equals the window width minus the length of the prompt, which causes an additional prompt to be incorrectly inserted into the incoming data, in turn causing the output to be returned incorrectly. </p> <p>For more information see: <a href="https://github.com/pexpect/pexpect/issues/552" rel="nofollow noreferrer">https://github.com/pexpect/pexpect/issues/552</a></p>
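<p>The arithmetic from the question's numbers, assuming pxssh's default prompt <code>[PEXPECT]$ </code> (11 characters; the prompt string is an assumption, not taken from the question):</p>

```python
window_cols = 200
prompt = "[PEXPECT]$ "  # assumed pxssh default prompt, 11 chars including the space
trigger_len = window_cols - len(prompt)
print(trigger_len)  # 189 -- matches the failing command length reported for width 200
```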
python|pxssh
0
1,905,888
58,365,337
If instances are really just pointers to objects in Python, what happens when I change the value of a data attribute in a particular instance?
<p>Suppose I define "Class original:" and create a class attribute "one = 4." Then I create an instance of the class "First = original()." My understanding is that First now contains a pointer to original and "First.one" will return "4." However, suppose I create "Second = original()" and then set "Second.one = 5." What exactly happens in memory? Does a new copy of Class original get created with a class attribute of 5? </p> <p>I've created a Class original with class attribute one. I then created two instances of this class (First and Second) and verified that id(First.one) and id(Second.one) are pointing to the same place. They both return the same address. However, when I created Third=original() and set Third.one = 5 and then check id(Third.one) it appears to be pointing somewhere else. Where is it pointing and what happened? When I check original.one it still returns "4" so obviously the original object is not being modified. Thanks.</p>
<p>It appears you are asking about a piece of code similar to this:</p> <pre class="lang-py prettyprint-override"><code>class Original: def __init__(self, n): self.one = n first = Original(4) second = Original(4) third = Original(5) print(id(first.one)) # 140570468047360 print(id(second.one)) # 140570468047360 print(id(third.one)) # 140570468047336 </code></pre> <blockquote> <p>Suppose I define "Class original:" and create a class attribute "one = 4." Then I create an instance of the class "First = original()." My understanding is that First now contains a pointer to original</p> </blockquote> <p>No. The variable references the instance you created, not the class. If it referenced the class, there would be <em>no way</em> for you to get at the instance you just created.</p> <p>The <em>instance</em> will, somewhere in its object header, of course contain a pointer to its class. Without that pointer, method lookup wouldn't be possible, since you wouldn't be able to find the class from the instance.</p> <blockquote> <p>and "First.one" will return "4."</p> </blockquote> <p>Yes. The attribute <code>one</code> of <code>first</code> contains a pointer to the object <code>4</code> (which is an instance of the class <code>int</code>).</p> <p>[Note that technically, some Python implementations will perform an optimization and actually store the object <code>4</code> directly in the attribute instead of a pointer to the object. But that is an internal implementation detail.]</p> <blockquote> <p>However, suppose I create "Second = original()" and then set "Second.one = 5." What exactly happens in memory? Does a new copy of Class original get created with a class attribute of 5?</p> </blockquote> <p>No. Why would you need a separate copy of the class? The methods are still the same for both instances. In fact, that is <em>precisely</em> the reason why methods in Python take the instance as their first argument! That way, there need only be one method. 
(This is actually the same in every OO language, except that in most other languages, this argument is "invisible" and only accessible using a special keyword like <code>self</code> in Ruby, Smalltalk, Self, and Newspeak or <code>this</code> in Java, C#, and Scala.)</p> <blockquote> <p>I've created a Class original with class attribute one. I then created two instances of this class (First and Second) and verified that id(First.one) and id(Second.one) are pointing to the same place. They both return the same address. However, when I created Third=original() and set Third.one = 5 and then check id(Thrid.one) it appears to be pointing somewhere else.</p> </blockquote> <p>It is not quite clear to me what your question is here. <code>first.one</code> and <code>second.one</code> both point to <code>4</code>, so they both point to the same ID since they both point to the same object. <code>third.one</code> points to <code>5</code>, which is obviously a different object from <code>4</code>, so naturally, it has a different ID.</p> <p>It is, in fact, one of the requirement of IDs that different objects that exist at the same time <em>must</em> have different IDs.</p> <blockquote> <p>Where is it pointing and what happened?</p> </blockquote> <p>Again, it is not quite clear what you are asking.</p> <p>It is pointing at <code>5</code>, and nothing happened.</p> <blockquote> <p>When I check original.one it still returns "4" so obviously the original object is not being modified.</p> </blockquote> <p>Indeed, it isn't. Why would it be?</p>
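<p>The points above, condensed into runnable asserts (CPython caches small integers, which is why the two 4s share an id):</p>

```python
class Original:
    def __init__(self, n):
        self.one = n

first = Original(4)
second = Original(4)
third = Original(5)

assert first.one is second.one     # both attributes reference the same int object 4
assert third.one is not first.one  # 5 is a different object, so a different id
assert type(first) is type(third)  # one shared class object; no copy is ever made
print("ok")
```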
python-3.x
0
1,905,889
65,395,378
How to update a plot in pyqtgraph?
<p>I am trying to have a user interface using PyQt5 and pyqtgraph. I made two checkboxes and whenever I select them I want to plot one of the two data sets available in the code and whenever I deselect a button I want it to clear the corresponding curve. There are two checkboxes with texts <code>A1</code> and <code>A2</code> and each of them plot one set of data.</p> <p>I have two issues:</p> <p>1- If I select <code>A1</code> it plots the data associated with <code>A1</code> and as long as I do not select <code>A2</code>, by deselecting <code>A1</code> I can clear the data associated with <code>A1</code>. However, If I check <code>A1</code> box and then I check <code>A2</code> box, then deselecting <code>A1</code> does not clear the associated plot. In this situation, if I choose to plot random data, instead of a deterministic curve such as <code>sin</code>, I see that by selecting either button new data is added but it cannot be removed.</p> <p>2- The real application have 96 buttons each of which should be associated to one data set. I think the way I wrote the code is inefficient because I need to copy the same code for one button and data set 96 times. Is there a way to generalize the toy code I presented below to arbitrary number of checkboxes? 
Or perhaps, using/copying almost the same code for every button is the usual and correct way to do this?</p> <p>The code is:</p> <pre><code>from PyQt5 import QtWidgets, uic, QtGui import matplotlib.pyplot as plt from matplotlib.widgets import SpanSelector import numpy as np import sys import string import pyqtgraph as pg from pyqtgraph.Qt import QtGui, QtCore app = QtWidgets.QApplication(sys.argv) x = np.linspace(0, 3.14, 100) y1 = np.sin(x)#Data number 1 associated to checkbox A1 y2 = np.cos(x)#Data number 2 associated to checkbox A2 #This function is called whenever the state of checkboxes changes def todo(): if cbx1.isChecked(): global curve1 curve1 = plot.plot(x, y1, pen = 'r') else: try: plot.removeItem(curve1) except NameError: pass if cbx2.isChecked(): global curve2 curve2 = plot.plot(x, y2, pen = 'y') else: try: plot.removeItem(curve2) except NameError: pass #A widget to hold all of my future widgets widget_holder = QtGui.QWidget() #Checkboxes named A1 and A2 cbx1 = QtWidgets.QCheckBox() cbx1.setText('A1') cbx1.stateChanged.connect(todo) cbx2 = QtWidgets.QCheckBox() cbx2.setText('A2') cbx2.stateChanged.connect(todo) #Making a pyqtgraph plot widget plot = pg.PlotWidget() #Setting the layout layout = QtGui.QGridLayout() widget_holder.setLayout(layout) #Adding the widgets to the layout layout.addWidget(cbx1, 0,0) layout.addWidget(cbx2, 0, 1) layout.addWidget(plot, 1,0, 3,1) widget_holder.adjustSize() widget_holder.show() sys.exit(app.exec_()) </code></pre>
<p>Below is an example I made that works fine. It can be reused to do more plots without increasing the code, just changing the value of <code>self.num</code> and adding the corresponding data using the function <code>add_data(x,y,ind)</code>, where <code>x</code> and <code>y</code> are the values of the data and <code>ind</code> is the index of the box (from <code>0</code> to <code>n-1</code>).</p> <pre><code>import sys import numpy as np import pyqtgraph as pg from pyqtgraph.Qt import QtCore, QtGui class MyApp(QtGui.QWidget): def __init__(self): QtGui.QWidget.__init__(self) self.central_layout = QtGui.QVBoxLayout() self.plot_boxes_layout = QtGui.QHBoxLayout() self.boxes_layout = QtGui.QVBoxLayout() self.setLayout(self.central_layout) # Lets create some widgets inside self.label = QtGui.QLabel('Plots and Checkbox bellow:') # Here is the plot widget from pyqtgraph self.plot_widget = pg.PlotWidget() # Now the Check Boxes (lets make 3 of them) self.num = 6 self.check_boxes = [QtGui.QCheckBox(f&quot;Box {i+1}&quot;) for i in range(self.num)] # Here will be the data of the plot self.plot_data = [None for _ in range(self.num)] # Now we build the entire GUI self.central_layout.addWidget(self.label) self.central_layout.addLayout(self.plot_boxes_layout) self.plot_boxes_layout.addWidget(self.plot_widget) self.plot_boxes_layout.addLayout(self.boxes_layout) for i in range(self.num): self.boxes_layout.addWidget(self.check_boxes[i]) # This will conect each box to the same action self.check_boxes[i].stateChanged.connect(self.box_changed) # For optimization let's create a list with the states of the boxes self.state = [False for _ in range(self.num)] # Make a list to save the data of each box self.box_data = [[[0], [0]] for _ in range(self.num)] x = np.linspace(0, 3.14, 100) self.add_data(x, np.sin(x), 0) self.add_data(x, np.cos(x), 1) self.add_data(x, np.sin(x)+np.cos(x), 2) self.add_data(x, np.sin(x)**2, 3) self.add_data(x, np.cos(x)**2, 4) self.add_data(x, x*0.2, 5) def 
add_data(self, x, y, ind): self.box_data[ind] = [x, y] if self.plot_data[ind] is not None: self.plot_data[ind].setData(x, y) def box_changed(self): for i in range(self.num): if self.check_boxes[i].isChecked() != self.state[i]: self.state[i] = self.check_boxes[i].isChecked() if self.state[i]: if self.plot_data[i] is not None: self.plot_widget.addItem(self.plot_data[i]) else: self.plot_data[i] = self.plot_widget.plot(*self.box_data[i]) else: self.plot_widget.removeItem(self.plot_data[i]) break if __name__ == &quot;__main__&quot;: app = QtGui.QApplication(sys.argv) window = MyApp() window.show() sys.exit(app.exec_()) </code></pre> <p>Note that inside de <code>PlotWidget</code> I add the plot using the <code>plot()</code> method, it returns a <code>PlotDataItem</code> that is saved in the list created before called <code>self.plot_data</code>. With this, you can easily remove it from the <code>Plot Widget</code> and add it again. Also if you are aiming for a more complex program, for example, one that you can change the data of each box on the run, the plot will update without major issues if you use the <code>setData()</code> method on the <code>PlotDataItem</code></p> <p>As I said at the beginning, this should work fine with a lot of checkboxes, because the function that is called when a checkbox is Checked/Unchecked, first compare the actual state of each box with the previous one (stored in <code>self.state</code>) and only do the changes on the plot corresponding to that specific box. With this, you avoid doing one function for each checkbox and the replot of all de boxes every time you check/uncheck a box (like <a href="https://stackoverflow.com/users/8408080/user8408080">user8408080</a> did). 
I don't say it is bad, but if you increase the number of checkboxes and/or the complexity of the data, the workload of replotting all of the data will increase drastically.</p> <p>The only problem will be when the window is too small to support a crazy amount of checkboxes (96 for example), then you will have to organize the checkboxes in another widget instead of a layout.</p> <p>Now some screenshots of the code from above: <a href="https://i.stack.imgur.com/q87Wh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q87Wh.png" alt="enter image description here" /></a></p> <p>And then changing the value of <code>self.num</code> to <code>6</code> and adding some random data to them:</p> <pre><code>self.add_data(x, np.sin(x)**2, 3) self.add_data(x, np.cos(x)**2, 4) self.add_data(x, x*0.2, 5) </code></pre> <p><a href="https://i.stack.imgur.com/H3hP7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H3hP7.png" alt="enter image description here" /></a></p>
python|pyqt5|pyqtgraph
5
1,905,890
45,471,831
Ipywidgets Jupyter Notebook Interact Ignore Argument
<p>Is there a way to have <code>interact(f)</code> ignore certain arguments in <code>f</code>? I believe it is getting confused by the fact that I have a default argument that I use to pass in a dataframe. Here is my function:</p> <pre><code>def show_stats(start,end,df_pnl=df_pnl): mask = df_pnl['Fulldate'] &gt;= start &amp; df_pnl['FullDate'] &lt;= end df_pnl = df_pnl[mask] #do some more transformations here display(df_pnl) </code></pre> <p>Here is what I'm trying to do:</p> <pre><code>interact(show_stats,start=start_str,end=today_str) </code></pre> <p>And here is the error I'm getting:</p> <p><a href="https://i.stack.imgur.com/D3hAx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D3hAx.png" alt="enter image description here"></a></p> <p>I hypothesize that <code>interact</code> somehow changes <code>df_pnl</code> into a string (since it gives a dropdown of the column headers in the interact output), and fails because it then tries to do <code>df_pnl['Fulldate'].....</code> on a string, which causes the error shown.</p> <p>How do I get around this? Could I exclude that argument from my function while still having it work on the correct dataframe? Is there an option within interact to ignore certain arguments in the function?</p> <p>Thanks </p>
<p>So it is a little difficult to test this solution without a sample DataFrame but I think that <a href="https://docs.python.org/3.6/library/functools.html#functools.partial" rel="noreferrer"><code>functools.partial</code></a> may be what you are looking for. Essentially <code>partial</code> allows you to define a new function with one of the keyword arguments or positional arguments loaded beforehand. Try the code below and see if works;</p> <pre class="lang-py prettyprint-override"><code>from functools import partial def show_stats(start, end, df_pnl): mask = df_pnl['Fulldate'] &gt;= start &amp; df_pnl['FullDate'] &lt;= end df_pnl = df_pnl[mask] #do some more transformations here display(df_pnl) # Define the new partial function with df_pnl loaded. show_stats_plus_df = partial(show_stats, df_pnl=YOUR_DATAFRAME) interact(show_stats_plus_df, start=start_str, end=today_str) </code></pre> <p>Update:</p> <p>You could also try ipywidgets the <a href="http://ipywidgets.readthedocs.io/en/stable/examples/Using%20Interact.html#Fixing-arguments-using-fixed" rel="noreferrer"><code>fixed</code></a> function.</p> <pre class="lang-py prettyprint-override"><code>from ipywidgets import fixed def show_stats(start, end, df_pnl): mask = df_pnl['Fulldate'] &gt;= start &amp; df_pnl['FullDate'] &lt;= end df_pnl = df_pnl[mask] #do some more transformations here display(df_pnl) interact(show_stats, start=start_str, end=today_str, df_pnl=fixed(df_pnl)) </code></pre> <p>Please comment below if this doesn't solve the problem.</p>
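<p>How <code>partial</code> pre-binds the keyword argument, separate from any widgets (the names and return value here are purely illustrative):</p>

```python
from functools import partial

def show_stats(start, end, df_pnl):
    # stand-in for the real function; just report what it received
    return f"{df_pnl}: {start}..{end}"

# df_pnl is fixed now; interact would only ever vary start and end
show_stats_plus_df = partial(show_stats, df_pnl="pnl-frame")
print(show_stats_plus_df("2017-01-01", "2017-08-01"))  # pnl-frame: 2017-01-01..2017-08-01
```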
python-3.x|jupyter-notebook|ipython-notebook|jupyter|ipywidgets
5
1,905,891
28,493,106
Tkinter Frame Doesn't Fill Remaining Space
<p>This is my first encounter with Tkinter, and I'm beating my head against a wall over this.</p> <p>Ultimately, I am wanting a layout that uses a Frame along the left edge that contains a stack of buttons (navigation). This should be hard-wired to 100 px wide and take up all vertical space. The rest of the window should be filled with a content Frame, the contents of which I'll change dynamically.</p> <p>So, should be simple. Here's the simplified version of what I have (color-coded for reference):</p> <pre><code>from Tkinter import * root = Tk() root.geometry("300x200") navbar = Frame(root, bg="green", width=100) navbar.pack(anchor=W, fill=Y, expand=True, side=LEFT) content_frame = Frame(root, bg="orange") content_frame.pack(anchor=N, fill=BOTH, expand=True, side=LEFT ) root.mainloop() </code></pre> <p>What I would expect to see is a green bar (100 px wide) on the left side of the window (full height), and the rest of the window orange. However, what I get is the Irish flag (green, white, orange).</p> <p>No matter what I do, the orange content_frame just refuses to fill horizontally. It seems as though Tk has divided the window exactly in half, placing the green navbar on the left side (as expected), but giving the orange content_frame only half of the remaining space.</p> <p>I can force the orange to fill the space by coding it, in this case to width=200, but this does not flex when the window resizes. Also, if I don't add the green navbar, the orange content_frame happily fills the whole window.</p> <p>This example code shows the content_frame with anchor=N, but it doesn't matter where I anchor it. It also doesn't matter whether I side=LEFT or side=RIGHT.</p> <p>I also discovered that no matter how I size the width of the green navbar, Tk always divides the remaining horizontal space in half, and places the orange content_frame in the right half.</p> <p>This has to be simple. Any ideas?</p>
<p>specify <code>expand=False</code> for the left frame:</p> <pre><code>from Tkinter import * root = Tk() root.geometry("300x200") navbar = Frame(root, bg="green", width=100) navbar.pack(anchor=W, fill=Y, expand=False, side=LEFT) # &lt;---- content_frame = Frame(root, bg="orange") content_frame.pack(anchor=N, fill=BOTH, expand=True, side=LEFT ) root.mainloop() </code></pre>
python|tkinter|tk
7
1,905,892
14,838,635
quit mainloop in python
<p>Although I am an experienced programmer in other languages, I am very new to Python. I have been trying to do a very simple thing: quit the mainloop after starting it. It seems that it is a big deal. The program below only makes a sequence of events. Everything seems to be working, but I am not able to close the final window... What should I do?</p> <pre><code>from Tkinter import * root=Tk() theMainFrame=Frame(root) theMainFrame.pack() class CloseAfterFinishFrame1(Frame): # Says it inherits Frame's parameters def __init__(self): Frame.__init__(self,theMainFrame) # Initializes with the parameters above!! Label(self,text="Hi",font=("Arial", 16)).pack() button = Button (self, text = "I am ready", command=self.CloseWindow,font=("Arial", 12)) button.pack() self.pack() def CloseWindow(self): self.forget() CloseAfterFinishFrame2() class CloseAfterFinishFrame2(Frame): # Says it inherits Frame's parameters def __init__(self): Frame.__init__(self,theMainFrame) # Initializes with the parameters above!! Label(self,text="Hey",font=("Arial", 16)).pack() button = Button (self, text = "the End", command=self.CloseWindow,font=("Arial", 12)) button.pack() self.pack() def CloseWindow(self): self.forget() CloseEnd() class CloseEnd(): theMainFrame.quit() CloseAfterFinishFrame1() theMainFrame.mainloop() </code></pre>
<p>Call <code>root.quit()</code>, not <code>theMainFrame.quit</code>:</p> <pre><code>import Tkinter as tk class CloseAfterFinishFrame1(tk.Frame): # Says it inherits Frame's parameters def __init__(self, master): self.master = master tk.Frame.__init__(self, master) # Initializes with the parameters above!! tk.Label(self, text="Hi", font=("Arial", 16)).pack() self.button = tk.Button(self, text="I am ready", command=self.CloseWindow, font=("Arial", 12)) self.button.pack() self.pack() def CloseWindow(self): # disable the button so pressing &lt;SPACE&gt; does not call CloseWindow again self.button.config(state=tk.DISABLED) self.forget() CloseAfterFinishFrame2(self.master) class CloseAfterFinishFrame2(tk.Frame): # Says it inherits Frame's parameters def __init__(self, master): tk.Frame.__init__(self, master) # Initializes with the parameters above!! tk.Label(self, text="Hey", font=("Arial", 16)).pack() button = tk.Button(self, text="the End", command=self.CloseWindow, font=("Arial", 12)) button.pack() self.pack() def CloseWindow(self): root.quit() root = tk.Tk() CloseAfterFinishFrame1(root) root.mainloop() </code></pre> <p>Also, there is no need to make a class <code>CloseEnd</code> if all you want to do is call the function <code>root.quit</code>.</p>
python|tkinter
8
1,905,893
41,675,178
Differentiate GET and POST in a ModelViewSet in Django Rest Framework
<p>I would like to know what I have to do to differentiate GET and POST in a ModelViewSet in Django Rest Framework, because it mixes both and I have no idea how to do it.</p> <hr> <p>Basically I want to make an API that allows uploading two images, where the response of a POST call is a number reflecting the degree of similarity of the uploaded images. For this, I intend to use the POST call to get the path where the images are stored, so I can work with them in OpenCV in another script. Below is the code I have, which allows you to upload the two images.</p> <pre><code>## Models.py ## class Task(models.Model): task_name = models.CharField(max_length=20) image1 = models.ImageField(upload_to='Images/',default='Images/None/No-img.jpg') image2 = models.ImageField(upload_to='Images/', default='Images/None/No-img.jpg') def __str__(self): return "%s" % self.task_name ## Serializers.py ## class TaskSerializer(serializers.ModelSerializer): image1 = serializers.ImageField(max_length=None,use_url=True) image2 = serializers.ImageField(max_length=None, use_url=True) class Meta: model = Task fields = ('id','task_name','image1','image2') ## Views.py ## class TaskViewSet(viewsets.ModelViewSet): queryset = Task.objects.all() serializer_class = TaskSerializer ## Urls.py ## router = routers.DefaultRouter() router.register(r'task', views.TaskViewSet) urlpatterns = [ url(r'^',include(router.urls)), url(r'^admin/', include(admin.site.urls)), ] </code></pre>
<p>If I got it right, you need to compare your images after you create a Task. You can add a custom POST-only action to the viewset, so it is routed separately from the default GET list/retrieve handlers:</p> <pre><code>from rest_framework import status from rest_framework.decorators import detail_route from rest_framework.response import Response class TaskViewSet(viewsets.ModelViewSet): queryset = Task.objects.all() serializer_class = TaskSerializer @detail_route(methods=['post']) def perform_task(self, request, pk=None): task = self.get_object() serializer = TaskSerializer(task, data=request.data, partial=True) if serializer.is_valid(): serializer.save() # here you run your OpenCV code on task.image1.path and task.image2.path to compute similarity return Response({'similarity': similarity}) else: return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST) </code></pre>
python|django|web-services|rest
0
1,905,894
41,483,357
open() function not accepting single argument or string type argument
<p>I'm learning Python and I'm trying to simply read a file into my program, but the <code>open()</code> function is not accepting <code>"r"</code> or any string argument; I don't know how that is possible.</p>
<p><code>os.open</code> isn't the function you should be using. Just call <code>open</code> directly which uses the built-in. </p> <p><a href="https://docs.python.org/3/library/os.html#os.open" rel="nofollow noreferrer">From the docs on <code>os.open</code></a>:</p> <blockquote> <p>Note: This function is intended for low-level I/O. For normal usage, use the built-in function <code>open()</code>, which returns a file object with <code>read()</code> and <code>write()</code> methods (and many more). To wrap a file descriptor in a file object, use <code>fdopen()</code>.</p> </blockquote>
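To make the difference concrete, here is a small sketch (the file path is just an illustration) contrasting the two interfaces:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("hello")

# The built-in open() accepts mode strings such as "r" and returns a file object.
with open(path, "r") as f:
    data = f.read()

# os.open() is the low-level interface: it expects integer flag constants,
# not mode strings, and returns a raw file descriptor.
fd = os.open(path, os.O_RDONLY)
raw = os.read(fd, 100)
os.close(fd)
```

Passing `"r"` to `os.open` raises a `TypeError`, which is likely what the question ran into.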
python|python-3.x
2
1,905,895
56,963,520
Creating a sequential ID in a pandas dataframe for given column values
<p>I would like to create a unique sequential ID for each of the given values in the "Type" column but can't seem to make it work. </p> <p>Current data frame:</p> <pre><code> Item Type ID 0 Apple Fruit 0 1 Orange Fruit 1 2 Banana Fruit 2 3 Peach Fruit 3 4 Cheese Dairy 0 5 Milk Dairy 1 6 Chicken Meat 0 7 Pork Meat 1 8 Beef Meat 2 </code></pre> <p>Desired data frame:</p> <pre><code> Item Type ID 0 Apple Fruit 0 1 Orange Fruit 1 2 Banana Fruit 2 3 Peach Fruit 3 4 Cheese Dairy 0 5 Milk Dairy 1 6 Chicken Meat 0 7 Pork Meat 1 8 Beef Meat 2 </code></pre> <p>I've tried <code>set_index</code> and creating a separate column indicating "Type" value changes, but was not able to create the desired format. Any help is appreciated. </p>
<p>Try <code>groupby</code> with <code>cumcount()</code>:</p> <pre><code>df = pd.DataFrame(data={"Type":["Fruit","Fruit","Dairy","Meat","Meat"], "Item":["Apple","Orange","Cheese","Pork","Beef"]}) df["ID"] = df.groupby(['Type']).cumcount() print(df) </code></pre> <pre><code> Type Item ID 0 Fruit Apple 0 1 Fruit Orange 1 2 Dairy Cheese 0 3 Meat Pork 0 4 Meat Beef 1 </code></pre> <p>I hope this solves your problem.</p>
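Applied to the full frame from the question (the data is rebuilt here for illustration), the same one-liner produces the desired IDs:

```python
import pandas as pd

df = pd.DataFrame({
    "Item": ["Apple", "Orange", "Banana", "Peach", "Cheese", "Milk", "Chicken", "Pork", "Beef"],
    "Type": ["Fruit", "Fruit", "Fruit", "Fruit", "Dairy", "Dairy", "Meat", "Meat", "Meat"],
})

# cumcount() numbers the rows within each "Type" group, in order of appearance
df["ID"] = df.groupby("Type").cumcount()
ids = df["ID"].tolist()
```

Because `cumcount` counts in order of appearance, the original row order is preserved and no sorting is needed.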
python|pandas
3
1,905,896
24,054,808
How to overwrite a file opened in append mode on subsequent runs
<p>I have opened my file in append mode and it works fine, but when I run the program again it doesn't overwrite the previous content. I am not able to figure out where I went wrong. This may be a very basic question, but I am new to python. Please help.</p> <pre><code>with open(Result,"a") as f: csv_writer = csv.writer(f, delimiter=',') csv_writer.writerow((fn1,fn2)) </code></pre> <p>I need to append the contents, but also need the file to be overwritten when the program is run the next time.</p>
<p>To overwrite the previous contents, open the file in <code>"w"</code> mode. (Write Mode)</p> <pre><code>with open(Result,"w") as f: csv_writer = csv.writer(f, delimiter=',') csv_writer.writerow((fn1,fn2)) </code></pre> <p><a href="https://docs.python.org/2/library/functions.html#open" rel="nofollow">Read more about <code>open</code></a></p>
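To see the two modes side by side, here is a small sketch (the file name and field values are illustrative): "w" starts fresh on every run, whereas "a" would keep accumulating rows:

```python
import csv
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "result.csv")

# first "run" of the program
with open(path, "w") as f:
    csv.writer(f, delimiter=',').writerow(["fn1_first", "fn2_first"])

# second "run": mode "w" truncates the file, discarding the first row
with open(path, "w") as f:
    csv.writer(f, delimiter=',').writerow(["fn1_second", "fn2_second"])

with open(path) as f:
    contents = f.read()
```

Within a single run you can still call `writerow` as many times as you like on the same open file; only reopening with `"w"` discards what was there before.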
python
1
1,905,897
20,768,675
Can't wrap my brain around Towers of Hanoi recursion in Python
<p>I understood recursion for factorials and other math cases, but this one is eluding me.</p> <p>So, we have to break it into n-1 down to the base case, but I don't see how the code breaks down to output all the steps.</p> <pre><code>def printMove(fr, to): print('move from ' + str(fr) + ' to ' + str(to)) def Towers(n, fr, to, spare): if n == 1: printMove(fr, to) else: Towers(n-1, fr, spare, to) Towers(1, fr, to, spare) Towers(n-1, spare, to, fr) Towers(3,'t1','t3','t2') move from t1 to t3 move from t1 to t2 move from t3 to t2 move from t1 to t3 move from t2 to t1 move from t2 to t3 move from t1 to t3 </code></pre> <p>How does this work? Do I need to be able to grok this, or is it enough to write the code at a high level, coding the general idea and trusting that the details will work out?</p>
<p>Reducing the boilerplate might help with grasping the recursion:</p> <pre><code>def hanoi(n, src, hlp, dst): if n &gt; 0: hanoi(n-1, src, dst, hlp) print("moving disc " + str(n) + " from " + src + " to " + dst) hanoi(n-1, hlp, src, dst) # call with: hanoi(3, 'src', 'hlp', 'dst') </code></pre> <p><strong>Explanation:</strong><br> The recursion is based on moving the first n-1 discs from the "source" to "helper" (middle tower) - which is done by using the "destination" tower as the "helper". This is done in the line: <br><code>hanoi(n-1, src, dst, hlp)</code></p> <p>Then move the biggest ring (n) from the "source" to the "destination" (which is done in the "print").</p> <p>And then again, recursively, move the n-1 rings from the "helper" to the "destination" while using the "source" pole as the "helper": <code>hanoi(n-1, hlp, src, dst)</code></p>
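One way to convince yourself that the recursion unwinds correctly is to record the moves instead of printing them. This small sketch (not from either post above) also shows the classic move count of 2**n - 1:

```python
def hanoi(n, src, hlp, dst, moves):
    if n > 0:
        hanoi(n - 1, src, dst, hlp, moves)   # park the top n-1 discs on the helper
        moves.append((n, src, dst))          # move the biggest remaining disc
        hanoi(n - 1, hlp, src, dst, moves)   # stack the n-1 discs back on top of it

moves = []
hanoi(3, 't1', 't2', 't3', moves)
# seven moves in total, starting with disc 1 going from t1 to t3,
# matching the output shown in the question
```

Each level of the recursion contributes exactly one "real" move (the `append`) plus two half-size subproblems, which is where the 2**n - 1 total comes from.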
python|recursion
2
1,905,898
35,858,230
Python - obtain the 4 least significant bytes from a SHA1 hash
<p>I am trying to derive a function that will return the least significant 4 bytes of a SHA1 hash.</p> <p>For example, given the following SHA1 hash:</p> <pre><code>f552bfdca00b431eb0362ee5f638f0572c97475f (hex digest) \xf5R\xbf\xdc\xa0\x0bC\x1e\xb06.\xe5\xf68\xf0W,\x97G_ (digest) </code></pre> <p>I need to derive a function that extracts the least significant 4 bytes from the hash and produces:</p> <pre><code>E456406 (hex) 239428614 (dec) </code></pre> <p>So far I have tried solutions as described in <a href="https://stackoverflow.com/questions/9903053/get-the-x-least-significant-bits-from-a-string-in-python">Get the x Least Significant Bits from a String in Python</a> and <a href="https://stackoverflow.com/questions/4822130/reading-least-significant-bits-in-python">Reading least significant bits in Python</a>.</p> <p>However, the resulting value is not the same.</p>
<p>If you want the least significant 4 bytes (8 hex characters) from the SHA1 value <code>f552bfdca00b431eb0362ee5f638f0572c97475f</code>, use a binary mask:</p> <pre><code>sha1_value = 0xf552bfdca00b431eb0362ee5f638f0572c97475f mask = 0xffffffff least_four = sha1_value &amp; mask # 0x2c97475f </code></pre>
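If you start from hashlib rather than a hex string, you can slice the raw digest directly. This sketch assumes Python 3 (the question predates it, but the idea is the same) and uses an arbitrary input:

```python
import hashlib

digest = hashlib.sha1(b"some input").digest()        # 20 raw bytes
hexdigest = hashlib.sha1(b"some input").hexdigest()  # 40 hex characters

# the last 4 bytes of the digest, read as a big-endian integer
least4 = int.from_bytes(digest[-4:], "big")
```

The same value falls out of the last 8 hex characters (`int(hexdigest[-8:], 16)`) or of masking the full integer with `0xffffffff`, as in the answer above.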
python|string|hash|byte
0
1,905,899
15,414,033
How can I stop Python's gzip from making sub directories?
<p>This is probably a simple mistake on my part, but I can't quite figure out how to compress a file without making lots of sub-directories.</p> <p>Here is how I am doing it:</p> <pre><code>f_in = open(r'C:\cygwin\home\User\Stuff\MoreStuff\file.csv', 'r') gzip_file_name = r'C:\cygwin\home\User\Stuff\MoreStuff\file.csv.gz' f_out = gzip.open(gzip_file_name, 'w') f_out.writelines(f_in) f_out.close() </code></pre> <p>The problem is, when I decompress that <code>.gz</code> file, I don't get just the <code>csv</code> file, but rather a long chain of directories that finally end with the csv file. e.g. <code>cygwin\home\User\Stuff\MoreStuff\file.csv</code></p> <p>My workaround looks a bit like this:</p> <pre><code>current_dir = os.getcwd() os.chdir(r'C:\cygwin\home\User\Stuff\MoreStuff') f_in = open('file.csv', 'r') gzip_file_name = 'file.csv.gz' f_out = gzip.open(gzip_file_name, 'w') f_out.writelines(f_in) f_out.close() os.chdir(current_dir) </code></pre> <p>I don't know if it is a good idea to keep changing the current directory (especially since I might have multiple files to compress).</p> <p>So, is there a way to not make those sub directories? (I couldn't find anything that discussed this in the official <a href="http://docs.python.org/2.6/library/gzip.html" rel="nofollow">docs</a>).</p> <p>Note: I am using Windows, but I do need this to be portable. I am also using Python 2.4.</p> <p>Thanks for your time.</p> <p>edit: I see the sub directories when I open the compressed file in WinRar or even in 7zip. If I do it with <code>chdir</code>, then I no longer see those sub directories.</p>
<p>The <a href="https://stackoverflow.com/questions/1466287/python-gzip-folder-structure-when-zipping-single-file/1466443#1466443">link</a> to the previous question that crayzeewulf provided worked just fine.</p> <p>This is likely only a problem in older distributions of Python. According to that <a href="http://svn.python.org/view/python/trunk/Lib/gzip.py?r1=72458&amp;r2=75935" rel="nofollow noreferrer">diff</a> (also provided by crayzeewulf), this was changed in newer versions, so you likely won't be able to reproduce this problem in Python 2.7. </p> <p>Thanks for your help everyone. </p>
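On a modern Python the bug is gone, but the underlying mechanism is that the gzip header stores whatever filename string it is given. A chdir-free workaround can be sketched like this (paths are illustrative, and the syntax is modern rather than Python 2.4): pass only the base name through GzipFile's filename argument while writing to an explicit file object:

```python
import gzip
import os
import tempfile

src = os.path.join(tempfile.mkdtemp(), "file.csv")
with open(src, "w") as f:
    f.write("a,b\n1,2\n")

gz_path = src + ".gz"
with open(src, "rb") as f_in, open(gz_path, "wb") as f_raw:
    # filename= controls the name stored in the gzip header,
    # independently of where the source file actually lives
    f_out = gzip.GzipFile(filename=os.path.basename(src), mode="wb", fileobj=f_raw)
    f_out.writelines(f_in)
    f_out.close()

# round-trip to confirm the compressed contents are intact
with gzip.open(gz_path, "rb") as f:
    roundtrip = f.read()
```

Archive tools like WinRAR and 7-Zip display the header's stored name, so keeping it to a bare base name avoids the phantom directory chain.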
python|gzip
1