1,907,800
59,206,983
how to map a column that contains multiple strings according to a dictionary in pandas
<p>I have a dataframe in which one column contains comma-separated strings. I want to map the column according to a dictionary. </p> <p>For example: </p> <pre><code>dfm = pd.DataFrame({'Idx': np.arange(4), 'Names': ['John,Mary', 'Mike', 'Mike,Joe,Mary', 'John']}) mask = {'John':'1', 'Mary':'2','Joe':'3','Mike':'4'} </code></pre> <p>Desired Output:</p> <pre><code> Idx Names 0 0 1,2 1 1 4 2 2 4,3,2 3 3 1 </code></pre> <p>What's the best way to achieve that? Thanks.</p>
<p>You can try this:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; dfm.Names.apply(lambda x: ','.join([mask[i] for i in x.split(',')])) 0 1,2 1 4 2 4,3,2 3 1 Name: Names, dtype: object </code></pre>
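The per-row transformation the answer applies can be sketched in plain Python (no pandas needed) to make the split, map, and join steps explicit; names are taken from the question:

```python
# The mapping dictionary from the question.
mask = {'John': '1', 'Mary': '2', 'Joe': '3', 'Mike': '4'}

def map_names(cell, mapping):
    """Split a comma-separated cell, map each token, and re-join."""
    return ','.join(mapping[name] for name in cell.split(','))

print(map_names('John,Mary', mask))      # 1,2
print(map_names('Mike,Joe,Mary', mask))  # 4,3,2
```

This is exactly what the `apply` lambda in the answer does for each cell of the `Names` column.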
python|pandas
1
1,907,801
72,837,686
Python Warning Panda Dataframe "Simple Issue!" - "A value is trying to be set on a copy of a slice from a DataFrame"
<p>first post / total Python novice so be patient with my slow understanding!</p> <p>I have a dataframe containing a list of transactions by order of transaction <strong>date</strong>.</p> <p>I've appended an additional new field/column called <strong>[&quot;DB/CR&quot;]</strong>, that dependant on the presence of &quot;-&quot; in the [&quot;Amount&quot;] field populates 'Debit', else 'Credit' in the absence of &quot;-&quot;.</p> <p>Noting the transactions are in date order, I've included another new field/column called [Top x]. The output of which is I want to populate and incremental independent number (starting at 1) for both debits and credits on a segregated basis.</p> <p>As such, I have created a simple loop with a associated 'if' / 'elif' (prob could use else as it's binary) statement that loops through the data sent row 0 to the last row in the df and using an if statement 1) &quot;Debit&quot; or 2) &quot;Credit&quot; increments the number for each independently by &quot;Debit&quot; 'i' integer, and &quot;Credit&quot; 'ii' integer.</p> <p><strong>The code works as expected in terms of output</strong> of the 'Top x'; however, I always receive a warning &quot;<em>A value is trying to be set on a copy of a slice from a DataFrame</em>&quot;.</p> <p>Trying to perfect my script, <strong>without any warnings</strong> I've been trying to understand what I'm doing incorrect but not getting it in terms of my use case scenario.</p> <p>Appreciate if someone can kindly shed light on / propose how the code needs to be refactored to avoid receiving this error.</p> <p>Code (the df source data is an imported csv):</p> <pre><code>#top x debits/credits i = 0 ii = 0 for ind in df.index: if df[&quot;DB/CR&quot;][ind] == &quot;Debit&quot;: i = i+1 df[&quot;Top x&quot;][ind] = i elif df[&quot;DB/CR&quot;][ind] == &quot;Credit&quot;: ii = ii+1 df[&quot;Top x&quot;][ind] = ii </code></pre> <p>Interpreter</p> <pre><code> df[&quot;Top x&quot;][ind] = i G:\Finances 
Backup\venv\Statementsv.03.py:173: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df[&quot;Top x&quot;][ind] = ii </code></pre> <p>Many thanks :)</p>
<p>Use <code>iterrows()</code> to iterate over the DataFrame. However, updating a DataFrame while iterating over it is not advisable.</p> <p>Refer to the documentation here: <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.iterrows.html?highlight=iterrows#pandas.DataFrame.iterrows" rel="nofollow noreferrer">Iterrows()</a></p> <blockquote> <p>You should never modify something you are iterating over. This is not guaranteed to work in all cases. Depending on the data types, the iterator returns a copy and not a view, and writing to it will have no effect.</p> </blockquote>
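As a sketch of an alternative that sidesteps both the loop and the chained indexing that triggers the warning, `groupby().cumcount()` can number debits and credits independently (column names are taken from the question; the small frame here is only illustrative):

```python
import pandas as pd

df = pd.DataFrame({"DB/CR": ["Debit", "Credit", "Debit", "Debit", "Credit"]})

# cumcount() numbers rows within each group starting at 0, so add 1
# to get an independent 1-based counter per "Debit"/"Credit" group.
df["Top x"] = df.groupby("DB/CR").cumcount() + 1
print(df["Top x"].tolist())  # [1, 1, 2, 3, 2]
```

If you do keep a loop, assigning through a single `df.loc[ind, "Top x"] = ...` call instead of `df["Top x"][ind] = ...` also avoids the SettingWithCopyWarning.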
python-3.x|pandas|dataframe
0
1,907,802
63,174,331
Python requests security protocol
<p>I apologize if this question is real entry level type programmer..</p> <p>But if I am posting data with the requests package, is the data secure? OR while the http message is 'in the air' between my PC and http bin; could someone intercept/replicate what I am doing?... Basically corrupt my data and create havoc for what I am trying to do...</p> <pre><code>import time, requests stuff = {} stamp = time.time() data = 120.2 stuff['Date'] = stamp stuff['meter_reading'] = data print(&quot;sending this dict&quot;,stuff) r = requests.post('https://httpbin.org/post', data=stuff) print(&quot;Status code: &quot;, r.status_code) print(&quot;Printing Entire Post Request&quot;) print(r.text) </code></pre> <p>With the script above on the level of security would it matter if I am posting to a server that is running http or https? The code above is similar to my real world example (that I run on a rasp pi scheduled task) where I am posting data with a time stamp to an http (Not https) server (flask app on pythonanywhere cloud site) which then saves the data to sql. This data can then be rendered thru typical javacript front end web development...</p> <p>Thanks for any advice I am still learning how to make this 'secure' on the data transfer from the rasp to to cloud server.. Asking about client side web browsing security to view the data that has already been transferred maybe a totally different question/topic..</p>
<p>This is mainly a question about protocols. HTTP is less secure because someone can 'listen' to what you are sending over it. That's why you should always use the newer HTTPS protocol, since it uses a TLS (encrypted) connection. You can read more about it, e.g., <a href="https://www.cloudflare.com/learning/ssl/why-is-http-not-secure/" rel="nofollow noreferrer">here</a>.</p>
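If tampering in transit is the specific worry, one application-level sketch (an addition to HTTPS, not a replacement for it) is to sign the payload with an HMAC over a shared secret, so the server can detect modified data. The secret and field names below are hypothetical:

```python
import hashlib
import hmac
import json

SECRET = b"shared-secret-key"  # hypothetical key, known to sender and server

def sign(payload: dict) -> str:
    """Hex HMAC-SHA256 signature over the canonical JSON-encoded payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def verify(payload: dict, signature: str) -> bool:
    """Constant-time comparison of expected vs. received signature."""
    return hmac.compare_digest(sign(payload), signature)

stuff = {"Date": 1650000000.0, "meter_reading": 120.2}
sig = sign(stuff)
print(verify(stuff, sig))                              # True
print(verify({**stuff, "meter_reading": 999.9}, sig))  # False: tampered
```

The sender would post the signature alongside the data; the server recomputes it and rejects any mismatch.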
python|http|https|python-requests|data-transfer
1
1,907,803
62,444,412
Django form data into multiple dictionaries
<p>In my HTML form i have multiple form fields:</p> <pre><code> &lt;div class="row form-group"&gt; &lt;div class="col-md-6"&gt; &lt;label for="label1"&gt;Label1&lt;/label&gt; &lt;input type="text" value="Card Number", name="card_number" class="form-control" id="label1"&gt; &lt;/div&gt; &lt;div class="col-md-6"&gt; &lt;label for="placeholder1"&gt;Placeholder&lt;/label&gt; &lt;input type="text" value="Enter Card Number" name="card_number_placeholder" class="form-control" id="placeholder1"&gt; &lt;/div&gt; &lt;/div&gt; &lt;div class="row form-group"&gt; &lt;div class="col-md-4 mb-3"&gt; &lt;label for="label2"&gt;Label2&lt;/label&gt; &lt;input type="text" value="Expiration" name="expiration" class="form-control" id="label2"&gt; &lt;/div&gt; &lt;div class="col-md-3 mb-3"&gt; &lt;label for="placeholder2"&gt;Placeholder&lt;/label&gt; &lt;input type="text" value="MM/YY" name="expiration_placeholder" class="form-control" id="placeholder2"&gt; &lt;/div&gt; &lt;/div&gt; </code></pre> <p>In my Django View when i convert the form data into dictionary using</p> <pre><code>data = request.POST.dict() </code></pre> <p>i get all the form field values as one single dictionary:</p> <pre><code>{'expiration_placeholder': ['MM/YY'], 'card_number_placeholder': ['Enter Card Number'], 'expiration': ['Expiration'], 'card_number': ['Card Number']} </code></pre> <p>How can i get these as multiple dictionaries on form submit as something below:</p> <pre><code>{{'expiration': ['Expiration'],'expiration_placeholder': ['MM/YY']}, {'card_number': ['Card Number'] 'card_number_placeholder': ['Enter Card Number']}} </code></pre> <p>If this structure is maintained it will be easier for me to parse and store as multiple rows in my database table</p>
<p>If all you want is a dict of your request.POST data, you can use this:</p> <pre><code>data = {} for p in request.POST: data[p] = request.POST[p] </code></pre>
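To get the per-card grouping the question actually asks for, one sketch is to pair each base field with its `_placeholder` twin by key suffix; plain dicts stand in for `request.POST` here:

```python
# Stand-in for request.POST.dict() output, copied from the question.
data = {
    'card_number': ['Card Number'],
    'card_number_placeholder': ['Enter Card Number'],
    'expiration': ['Expiration'],
    'expiration_placeholder': ['MM/YY'],
}

# Pair each base field with its "<field>_placeholder" counterpart,
# producing one small dict per form "card".
groups = []
for key in data:
    if key.endswith('_placeholder'):
        continue
    groups.append({key: data[key], f'{key}_placeholder': data[f'{key}_placeholder']})

print(groups)
```

Each element of `groups` can then be stored as one database row.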
python|html|django
0
1,907,804
58,742,576
How to execute a python script after clicking a button in HTML (Django)?
<p>I want to execute a python script after a user (registered user) clicks on a button in HTML using Django. How can this be achieved? Can someone give me a hint on how to create the logic in the views.py along with the Html code in the template? Before executing the button code, I want to check whether if the user.id exists in the model. If not, the button doesn't do anything. </p> <p>Currently, this is basic understanding:</p> <p>views.py:</p> <pre><code>def button(request): #python script return render(request, 'users/home.html') </code></pre> <p>home.html:</p> <pre><code>{% if user.id %} &lt;button class="btn btn-outline-info" onclick="location.href={% url 'button' %}"&gt; Manual Index &lt;/button&gt; </code></pre> <p>FYI, I am new to Django and am still exploring its potential.</p>
<p>Add your function in urls.py; example code:</p> <pre><code>from django.contrib import admin from django.urls import path from .views import button urlpatterns = [ ..... path("button/", button, name="button_call"), ..... ] </code></pre> <p>and then you can simply link your button to the URL; example code:</p> <pre><code>{% if user.id %} &lt;button class="btn btn-outline-info" onclick="location.href={% url 'button_call' %}"&gt; Manual Index &lt;/button&gt; </code></pre> <p>A more flexible way to do this is:</p> <pre><code>&lt;a href="{% if user.id %}{% url "button_call" %}{% else %}#{% endif %}"&gt;&lt;button class="btn btn-outline-info"&gt; Manual Index &lt;/button&gt;&lt;/a&gt; </code></pre> <p>Happy coding!</p>
python|django
0
1,907,805
15,869,230
How to get file names from Web directory - Python code
<p>I would like to know how to get a list of filenames (.jpg image files) that are stored on a server.</p> <p>I am looking for code which stores all the filenames (with their extension) in an Excel table or in the CSV format.</p> <p>Any tips will be very helpful.</p>
<p>It's pretty easy to get the filenames of all files stored in a certain folder, and you can easily write them to a file (in whatever format you want):</p> <pre><code>import os filenames = os.listdir('path to directory') logFile = open('filenames.txt', 'w') for name in filenames: logFile.write(name + '\n') logFile.close() </code></pre> <p>You must execute this code on your server, so make sure you can execute Python code on it.</p> <p>Once that's done, you can retrieve it from your server using the <code>urllib2</code> module : <a href="https://stackoverflow.com/questions/22676/how-do-i-download-a-file-over-http-using-python">How do I download a file over HTTP using Python?</a></p>
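Since the question asks for CSV output specifically, here is a sketch using the stdlib `csv` module; the directory and `.jpg` filenames below are hypothetical stand-ins for the server folder:

```python
import csv
import os
import tempfile
from pathlib import Path

# Hypothetical directory standing in for the folder on the server.
directory = tempfile.mkdtemp()
Path(directory, 'a.jpg').touch()
Path(directory, 'b.jpg').touch()
Path(directory, 'notes.txt').touch()

# Keep only the .jpg image files, as the question asks.
names = sorted(n for n in os.listdir(directory) if n.endswith('.jpg'))

# Write one filename per row, with a header, in CSV format.
csv_path = os.path.join(directory, 'filenames.csv')
with open(csv_path, 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['filename'])
    writer.writerows([n] for n in names)

print(names)  # ['a.jpg', 'b.jpg']
```

The resulting `filenames.csv` opens directly in Excel.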
python|web|filenames
-2
1,907,806
59,702,013
can not insert an embedded document into a model using django
<p>I can't create an embedded document into a model using django, i'm using djongo as my database.It keeps telling me that my value must be an instance of Model:<code>&lt;class 'django.db.models.base.Model'&gt;</code> even though I have created all the fields in the model. I really need some help....</p> <p>my model:</p> <pre><code>class SMSHistory(models.Model): ThoiGian = models.DateField(default=datetime.date.today,null=True,blank=True) SoDienThoai = models.CharField(max_length=100,null=True,blank=True) SeriNo = models.CharField(max_length=100,null=True,blank=True) Count = models.IntegerField(default=0,null=True,blank=True) class WebHistory(models.Model): ThoiGian = models.DateField(default=datetime.date.today,null=True,blank=True) DiaChiIP = models.CharField(max_length=100,null=True,blank=True) SoDienThoai = models.CharField(max_length=100,null=True,blank=True) SeriNo = models.CharField(max_length=100,null=True,blank=True) Count = models.IntegerField(default=0,null=True,blank=True) class AppHistory(models.Model): ThoiGian = models.DateField(default=datetime.date.today,null=True,blank=True) DiaChiIP = models.CharField(max_length=100,null=True,blank=True) SoDienThoai = models.CharField(max_length=100,null=True,blank=True) SeriNo = models.CharField(max_length=100,null=True,blank=True) Count = models.IntegerField(default=0,null=True,blank=True) class CallHistory(models.Model): ThoiGian = models.DateField(default=datetime.date.today,null=True,blank=True) SoDienThoai = models.CharField(max_length=100,null=True,blank=True) SeriNo = models.CharField(max_length=100,null=True,blank=True) Count = models.IntegerField(default=0,null=True,blank=True) class History(models.Model): MaTem = models.CharField(max_length=100,null=True,blank=True) MaSP = models.CharField(max_length=100,null=True,blank=True) SMS = models.EmbeddedModelField( model_container = SMSHistory ) App = models.EmbeddedModelField( model_container = AppHistory ) Web = models.EmbeddedModelField( model_container = 
WebHistory ) Call = models.EmbeddedModelField( model_container = CallHistory ) </code></pre> <p>my views</p> <pre><code> class check(View): def get(self,request): return render(request,'website/main.html') def post(self,request): matem=request.POST.get('txtCheck') print(matem) temp=khotemact.objects.filter(MaTem=matem) print(temp[0]) tim=History.objects.filter(MaTem=temp[0].MaTem) if len(tim)==0: print('khong co') them=History.objects.create(MaTem=temp[0].MaTem,MaSP='123', SMS={'ThoiGian':'2010-1-1','SoDienThoai':'12324','SeriNo':'12343','Count':0}, App={'ThoiGian':'2010-1-1','DiaChiIP':'1','SoDienThoai':'12324','SeriNo':'1236','Count':0}, Web={'ThoiGian':'2010-1-1','DiaChiIP':'1','SoDienThoai':'12324','SeriNo':'1236','Count':0}, Call={'ThoiGian':'2010-1-1','SoDienThoai':'1233','SeriNo':'123','Count':0} ) them.save() else: print('co') # History.objects.filter(MaTem=temp[0].MaTem).update(Web={'Count':Count+1}) return HttpResponse('oke') </code></pre> <p>i received an error like this </p> <pre><code> ValueError at /website/check/ Value: {'ThoiGian': '2010-1-1', 'SoDienThoai': '12324', 'SeriNo': '12343', 'Count': 0} must be instance of Model: &lt;class 'django.db.models.base.Model'&gt; </code></pre> <p>thank you</p>
<p>As the error says, you should use a model instance, but you are passing a dict.</p> <h1>wrong</h1> <pre><code>SMS={'ThoiGian':'2010-1-1','SoDienThoai':'12324','SeriNo':'12343','Count':0} </code></pre> <h1>right</h1> <pre><code>SMS = SMSHistory.objects.create(ThoiGian='2010-1-1', SoDienThoai='12324',SeriNo='12343', Count=0) </code></pre>
python|django|web
0
1,907,807
59,543,979
How to configure AWS4 S3 Authorization in Django?
<p>I configurred <code>S3</code> bucket for <code>Frankfurt</code> region. Though my <code>Django</code>-based service is able to write files to the bucket, whenever it tried to read them there's <code>InvalidRequest</code> error telling to upgrate authorization mechanizm:</p> <pre><code>&lt;Error&gt; &lt;Code&gt;InvalidRequest&lt;/Code&gt; &lt;Message&gt; The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. &lt;/Message&gt; &lt;RequestId&gt;17E9629D33BF1E24&lt;/RequestId&gt; &lt;HostId&gt; ... &lt;/HostId&gt; &lt;/Error&gt; </code></pre> <p>Is the cause for this error burried in my incorrect implementation of storage backend or is this caused by bucket not supporting older <code>AWS3</code> method of authorization?</p> <p>How to configure <code>S3Boto3Storage</code> in Django to use <code>AWS4</code> authorization? I can't find any definitive documentation on this topic.</p>
<p>Since you mentioned the <code>S3Boto3Storage</code> backend, I'm assuming you're using django-storages rather than writing your own implementation. There's a setting that lets you specify which signature version to use, <code>AWS_S3_SIGNATURE_VERSION = 's3v4'</code> you can find the full list of settings for S3 here <a href="https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html#settings" rel="nofollow noreferrer">https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html#settings</a></p>
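A minimal settings sketch, assuming django-storages with the `S3Boto3Storage` backend (Frankfurt's region name is `eu-central-1`):

```python
# settings.py -- sketch assuming django-storages' S3Boto3Storage backend
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_S3_REGION_NAME = 'eu-central-1'   # Frankfurt
AWS_S3_SIGNATURE_VERSION = 's3v4'     # force AWS4-HMAC-SHA256 signing
```

Newer regions such as Frankfurt only accept Signature Version 4, which is why reads fail without this setting.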
python|django|amazon-s3|boto3
1
1,907,808
48,940,049
Python Loop Capability
<p>I am attempting to write a program for my school project, on a raspberry pi. The program concept is fairly simple. The pi is looking at 4 GPIO pin inputs, and will output a different basic animation to the LED matrix depending on the input. </p> <p>The main loop is a simple while(1) loop that always runs. Within this loop, the program is constantly checking to see what the input is from the 4 GPIO input pins on the pi. When the input is matched to one of the if statements, then it runs a short 'animation' in which is displays an image on the LED matrix, waits, clears the matrix, displays another image, waits, and clears again. For example, this could be a 'blinking smiley face' animation where the first image displayed is a smiley face with its eyes open, and the second image is a smiley face with its eyes closed. With the pausing in between the pictures getting displayed, it appears the image on the screen is actually blinking. </p> <p>This setup is shown below for clarity (not actual code, gets the idea across though):</p> <pre><code>while(1) { currentState = [GPIO.input(pin1), GPIO.input(pin2), GPIO.input(pin3), GPIO.input(pin4)] if((currentState[0] == 1) and (currentState[1] == 0) and (currentState[2] == 1) and (currentState[3] == 0)) { matrix.SetImage(open_eyes) time.sleep(.3) matrix.Clear() matrix.SetImage(closed_eyes) time.sleep(.3) matrix.Clear() } if((currentState[0] == 0) and (currentState[1] == 1) and (currentState[2] == 0) and (currentState[3] == 1)) { matrix.SetImage(closed_mouth) time.sleep(.3) matrix.Clear() matrix.SetImage(open_mouth) time.sleep(.3) matrix.Clear() } } </code></pre> <p>The issue I am having with this setup is if the input changes <strong>during</strong> an animation, then it will not cut off the animation it is currently on to start the next. This is obvious the way the code is structured since the currentState variable is only being set at the beginning of the while loop. 
</p> <p>To accomplish this <strong>immediate</strong> switch, I attempted to make each animation a function, and just run the function within the if statements. However, then the program never would break out of those functions to check to see what the input is. I am now stuck, and if anyone has any ideas on how to accomplish this, I would love to hear them. </p>
<p>The issue here is time.sleep. During the sleep, the program won't detect changes to the currentState - as you correctly recognise. One way to handle this (I'm not going to write the code for you, but explain the general idea), is to keep your animation state in an object which gets updated on every while loop cycle. Things become a bit more complex than what you have because you will need to track the .3 seconds using some kind of time monitoring. So you'd set your object state to something like: </p> <pre><code>animation.state = 'blinking' animation.started = datetime.now() </code></pre> <p>Then once you detect .3 seconds have passed, you can change your animation state. If the currentState changes, you can reset the animation state.</p> <p>Alternatively you could spawn a thread for the animation. That would be natural in some ways (and easy in Go for example) but a bit trickier if you're not used to Python threading.</p>
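A minimal sketch of that state-tracking idea, assuming two named frames and a 0.3 s delay. The timestamp is passed in explicitly so the logic stays testable; on the Pi you would call it with `time.monotonic()` on every pass through the while loop, right after reading the GPIO pins:

```python
class Animation:
    """Cycle through frames every `delay` seconds without ever sleeping."""

    def __init__(self, frames, delay=0.3):
        self.frames = frames
        self.delay = delay
        self.index = 0
        self.started = None  # timestamp when the current frame was shown

    def current_frame(self, now):
        """Return the frame to display at time `now`, advancing if due."""
        if self.started is None:
            self.started = now
        elif now - self.started >= self.delay:
            self.index = (self.index + 1) % len(self.frames)
            self.started = now
        return self.frames[self.index]

anim = Animation(['open_eyes', 'closed_eyes'], delay=0.3)
print(anim.current_frame(0.0))   # open_eyes
print(anim.current_frame(0.1))   # open_eyes (only 0.1 s elapsed)
print(anim.current_frame(0.35))  # closed_eyes
```

Because `current_frame` returns immediately instead of sleeping, the main loop can re-read the GPIO input on every cycle and swap to a different `Animation` object the instant the input changes.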
python|while-loop|nested
0
1,907,809
49,320,399
Calling a REST API in Python
<p>I want to call a REST api and get some json data in response in python.</p> <pre><code>curl https://analysis.lastline.com/analysis/get_completed -X POST -F “key=2AAAD5A21DN0TBDFZZ66” -F “api_token=IwoAGFa344c277Z2” -F “after=2016-03-11 20:00:00” </code></pre> <p>I know of python <a href="http://docs.python-requests.org/en/latest/" rel="nofollow noreferrer">request</a>, but how can I pass <code>key</code>, <code>api_token</code> and <code>after</code>? What is <code>-F</code> flag and how to use it in python requests?</p>
<p>Just include the parameter <code>data</code> in the <code>.post</code> call. (curl's <code>-F</code> flag sends the fields as <code>multipart/form-data</code>; the <code>data</code> argument in <code>requests</code> sends them form-urlencoded instead, which most form-handling endpoints accept just as well.)</p> <pre><code>requests.post('https://analysis.lastline.com/analysis/get_completed', data = {'key':'2AAAD5A21DN0TBDFZZ66', 'api_token':'IwoAGFa344c277Z2', 'after':'2016-03-11 20:00:00'}) </code></pre>
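For illustration, the stdlib can show the exact request body `requests` builds from the `data=` argument (values copied from the question):

```python
from urllib.parse import urlencode

payload = {
    'key': '2AAAD5A21DN0TBDFZZ66',
    'api_token': 'IwoAGFa344c277Z2',
    'after': '2016-03-11 20:00:00',
}

# This is the application/x-www-form-urlencoded body that requests
# sends when the dict is passed via the `data=` argument.
body = urlencode(payload)
print(body)
```

Note how the space and colons in the timestamp get percent-encoded; `requests` does this for you automatically.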
python|rest|api|curl
2
1,907,810
71,033,139
Half transparent image Pillow
<p>How do i make an image half transparent like the dank memer command. Ive been using the putalpha since i found it on the Internet when i was searching for a solution but it didnt work. I am using Pillow.</p> <p>The code:</p> <pre><code>class Image_Manipulation(commands.Cog, name=&quot;Image Manipulation&quot;): def __init__(self, bot:commands.Bot): self.bot = bot @commands.command(aliases=[&quot;Clown&quot;,&quot;CLOWN&quot;]) async def clown(self, ctx, user: nextcord.Member = None): if user == None: user = ctx.author clown = Image.open(os.path.join(&quot;Modules&quot;, &quot;Images&quot;, &quot;clown.jpg&quot;)) useravatar = user.display_avatar.with_size(128) avatar_data = BytesIO(await useravatar.read()) pfp = Image.open(avatar_data) pfp = pfp.resize((192,192)) pfp.putalpha(300) if pfp.mode != &quot;RGBA&quot;: pfp = pfp.convert(&quot;RGBA&quot;) clown.paste(pfp, (0,0)) clown.save(&quot;clown_done.png&quot;) await ctx.send(file = nextcord.File(&quot;clown_done.png&quot;)) def setup(bot: commands.Bot): bot.add_cog(Image_Manipulation(bot)) </code></pre> <p>The dank memer command example: <a href="https://imgur.com/a/ZDNyKsG" rel="nofollow noreferrer">https://imgur.com/a/ZDNyKsG</a></p>
<p>The effect you want to achieve is called an <em>average</em> of two images. <a href="https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.blend" rel="nofollow noreferrer"><code>PIL.Image.blend</code></a> allows you to get a <em>weighted average</em> of two images. Example usage:</p> <pre><code>from PIL import Image im1 = Image.open('path/to/image1') im2 = Image.open('path/to/image2') im3 = Image.blend(im1, im2, 0.5) im3.save('path/to/outputimage') </code></pre> <p>where <code>im1</code> and <code>im2</code> are images of the same size (width x height). You might elect to use a different 3rd parameter for <code>Image.blend</code> if you wish one image to have more impact on the result (<code>0.0</code> results in a copy of <code>im1</code>, <code>1.0</code> in a copy of <code>im2</code>).</p>
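Under the hood the blend is just a per-channel weighted average, out = a*(1 - alpha) + b*alpha; a tiny sketch of that math on raw RGB tuples:

```python
def blend_pixel(p1, p2, alpha):
    """Weighted per-channel average of two RGB pixels (alpha in [0, 1])."""
    return tuple(round(a * (1 - alpha) + b * alpha) for a, b in zip(p1, p2))

# 50/50 average of pure red and pure blue gives a mid purple.
print(blend_pixel((255, 0, 0), (0, 0, 255), 0.5))  # (128, 0, 128)
```

`Image.blend` applies exactly this formula to every pixel, which is why `0.5` produces the half-transparent overlay look.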
python|discord|python-imaging-library|nextcord
2
1,907,811
70,909,531
How to disable and modify password validation in Django
<p>I want the end-user to set the password like a numeric pin of min &amp; max length to 6 characters while registration. e,g: <code>234123</code> Yes, this can be insecure but the project needs to do it with 6 digit pin. As <code>AUTH_PASSWORD_VALIDATORS</code> doesn't allow to do it.</p> <pre><code>AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', 'OPTIONS': { 'min_length': 9, } }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] </code></pre> <p>So how to disable and modify <a href="https://docs.djangoproject.com/en/4.0/topics/auth/passwords/#module-django.contrib.auth.password_validation" rel="nofollow noreferrer">password validation</a> in Django. Without changing the <a href="https://docs.djangoproject.com/en/4.0/ref/settings/#std:setting-AUTHENTICATION_BACKENDS" rel="nofollow noreferrer">AUTHENTICATION_BACKENDS </a> If we have to change the <code>AUTHENTICATION_BACKENDS</code>, then how to achieve it.</p>
<p>You can work with a simple validator for the pin:</p> <pre><code># <em>app_name</em>/password_validation.py from re import compile as recompile from django.core.exceptions import ValidationError from django.utils.translation import gettext_lazy as _ class PinValidator: <strong>pin_regex = recompile(r'\d{6}')</strong> def validate(self, password, user=None): if not <strong>self.pin_regex.fullmatch(password)</strong>: raise ValidationError( _('The password must contain exactly six digits'), code='pin_6_digits', ) def get_help_text(self): return _('Your password must be a six-digit PIN.')</code></pre> <p>and then use this as the only validator:</p> <pre><code># settings.py AUTH_PASSWORD_VALIDATORS = [ {'NAME': <strong>'<em>app_name</em>.password_validation.PinValidator'</strong>, } ]</code></pre> <p>That being said, a six-digit PIN is usually not a good password: there are only 1,000,000 possibilities. If the database is later stolen, it is also very easy to recover the PINs by brute force.</p>
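Outside Django, the core check is just a full match of six digits (note `fullmatch`, not `search`, so longer or shorter strings are rejected):

```python
import re

pin_regex = re.compile(r'\d{6}')

def is_valid_pin(password: str) -> bool:
    """True only for strings that are exactly six digits."""
    return pin_regex.fullmatch(password) is not None

print(is_valid_pin('234123'))   # True
print(is_valid_pin('23412'))    # False (five digits)
print(is_valid_pin('2341234'))  # False (seven digits)
```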
python|django|django-authentication
1
1,907,812
60,299,299
Grid dashboard with Plotly dash
<p>I'm trying to build a dashboard with Dash from Plotly composed of a series of tiles (Text) like in the picture below. </p> <p>I'm trying to build a component to reuse it an build the layout below. Each box will contain a Title, a value and a description as shown below. </p> <p>Is there a component available? Can someone help me with any basic idea/code ?</p> <p>Thanks in advance!</p> <p><a href="https://i.stack.imgur.com/oYlTy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/oYlTy.png" alt="enter image description here"></a></p>
<p>I would recommend checking out <a href="https://dash-bootstrap-components.opensource.faculty.ai/" rel="noreferrer">Dash Bootstrap Components</a> (dbc).</p> <p>You can use <code>dbc.Col</code> (columns) components nested into <code>dbc.Row</code> (rows) components to produce your layout. You can check them out <a href="https://dash-bootstrap-components.opensource.faculty.ai/l/components/layout" rel="noreferrer">here</a>.</p> <p>Then for the actual 'cards' as I'll call them, you can use the <code>dbc.Card</code> component. Here's the <a href="https://dash-bootstrap-components.opensource.faculty.ai/l/components/card" rel="noreferrer">link</a>.</p> <p>Here's some example code replicating your layout:</p> <pre><code>import dash_bootstrap_components as dbc import dash_html_components as html card = dbc.Card( dbc.CardBody( [ html.H4("Title", id="card-title"), html.H2("100", id="card-value"), html.P("Description", id="card-description") ] ) ) layout = html.Div([ dbc.Row([ dbc.Col([card]), dbc.Col([card]), dbc.Col([card]), dbc.Col([card]), dbc.Col([card]) ]), dbc.Row([ dbc.Col([card]), dbc.Col([card]), dbc.Col([card]), dbc.Col([card]) ]), dbc.Row([ dbc.Col([card]), dbc.Col([card]) ]) ]) </code></pre> <p>Best thing would probably be to have a function which creates those cards with parameters for ids, titles and descriptions to save the hassle of creating different cards:</p> <pre><code>def create_card(card_id, title, description): return dbc.Card( dbc.CardBody( [ html.H4(title, id=f"{card_id}-title"), html.H2("100", id=f"{card_id}-value"), html.P(description, id=f"{card_id}-description") ] ) ) </code></pre> <p>You can then just replace each <code>card</code> with <code>create_card('id', 'Title', 'Description')</code> as you please.</p> <p>Another quick tip is that the <code>col</code> component has a parameter, <code>width</code>. You can give each column in a row a different value to adjust the relative widths. 
You can read more about that in the docs I linked above.</p> <p>Hope this helps,</p> <p>Ollie</p>
python|html|css|plotly|plotly-dash
12
1,907,813
67,008,815
Unable to fetch timestamp correctly in beautiful soup
<p><a href="https://i.stack.imgur.com/imP5x.png" rel="nofollow noreferrer">enter image description here</a> please refer the attached picture image. I am trying fetch the timestamp and the below 10 #content as shown in the image and in expected output in below code, However i am not able to fetch &quot;40 minutes ago&quot; type text. instead i am getting &quot;08-04-2021 16:48:34&quot; in this format.</p> <pre><code> from bs4 import BeautifulSoup import requests URL=&quot;https://trends24.in/india/&quot; html_text=requests.get(URL) soup= BeautifulSoup(html_text.content,'lxml') results = [] job_elem=soup.findAll(attrs={'class': 'trend-card'}) for j in job_elem: print(j.find('h5').get_text()) for i in soup.select('#trend-list li'): d = dict() d[i.a.text] = '' try: val = i.select_one('.tweet-count').text except: val = &quot;NA&quot; finally: d[i.a.text] = val results.append(d) print(d) **Output:** 08-04-2021 16:48:34 08-04-2021 15:54:30 08-04-2021 15:01:07 ... {'#AskNivetha': 'NA'} {'#TikaUtsav': 'NA'} {'#VakeelSaabFestivalBegins': '62K'} ... **expected output :** 40 minutes ago {'#AskNivetha': 'NA'} {'#TikaUtsav': 'NA'} {'#VakeelSaabFestivalBegins': '62K'} {'ANMOL SUSHANT': '33K'} {'#TheBigBull': 'NA'} {'#IPL2021': '73K'} {'nidra ley uv creations': '64K'} {'Chief Ministers': 'NA'} {'B. True 48MP Camera': 'NA'} {'conan': '51K'} 1 hour ago {'#AskNivetha': 'NA'} {'#VakeelSaabFestivalBegins': '50K'} {'NIDRA LEY UV CREATIONS': '59K'} {'#SecretOfHappyLiving': 'NA'} {'#MeditateToRaiseWillpower': 'NA'} {'#HappinessMantra': 'NA'} {'ANMOL SUSHANT': 'NA'} {'Tika Utsav': 'NA'} {'Chief Ministers': 'NA'} {'conan': '46K'} </code></pre> <p>Also i am trying to fetch the timestamp and then 10 #content titles. as shown in the screenshot attached.</p>
<p>That is the format the datetime info is stored in. Disable JavaScript and you will see:</p> <p><a href="https://i.stack.imgur.com/TCrx7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TCrx7.png" alt="enter image description here" /></a></p> <p>What you see in the webpage is the <code>data-timestamp</code> attribute value that gets prettified when JavaScript runs in the webpage. More specifically, when the following is called:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>T24.prettyDate = function(t) { var e = new Date(1e3 * t), n = ((new Date).getTime() - e.getTime()) / 1e3, a = Math.floor(n / 86400); return isNaN(a) || a &lt; 0 ? "" : 0 === a &amp;&amp; ((n &lt; 900 ? "just now" : n &lt; 1800 &amp;&amp; "few minutes ago") || n &lt; 3600 &amp;&amp; Math.floor(n / 60) + " minutes ago" || n &lt; 7200 &amp;&amp; "1 hour ago" || n &lt; 86400 &amp;&amp; Math.floor(n / 3600) + " hours ago") || 1 === a &amp;&amp; "Yesterday" || a &lt; 7 &amp;&amp; a + " days ago" || a &lt; 31 &amp;&amp; Math.ceil(a / 7) + " weeks ago" || 31 &lt; a &amp;&amp; Math.ceil(a / 30) + " months ago" }</code></pre> </div> </div> </p> <p>You could write your own function, with the above as a logic guide, and use that, or use selenium to automate a browser.</p>
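A rough Python port of that logic, with the thresholds copied from the JavaScript above (a sketch, not the site's official behavior):

```python
import math

def pretty_date(seconds_ago):
    """Relative-time string; thresholds mirror the site's prettyDate JS."""
    days = math.floor(seconds_ago / 86400)
    if days < 0:
        return ''
    if days == 0:
        if seconds_ago < 900:
            return 'just now'
        if seconds_ago < 1800:
            return 'few minutes ago'
        if seconds_ago < 3600:
            return f'{math.floor(seconds_ago / 60)} minutes ago'
        if seconds_ago < 7200:
            return '1 hour ago'
        return f'{math.floor(seconds_ago / 3600)} hours ago'
    if days == 1:
        return 'Yesterday'
    if days < 7:
        return f'{days} days ago'
    if days < 31:
        return f'{math.ceil(days / 7)} weeks ago'
    return f'{math.ceil(days / 30)} months ago'

print(pretty_date(2400))  # 40 minutes ago
print(pretty_date(5000))  # 1 hour ago
```

Applying this to the difference between now and each parsed `data-timestamp` reproduces the "40 minutes ago" style labels from the expected output.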
python|web-scraping|beautifulsoup|timestamp|web-scraping-language
0
1,907,814
63,908,463
What format code should I use for this timestamp with strptime in Python?
<p>I have a .txt file that contains the string <code>&quot;2020-08-13T20:41:15.4227628Z&quot;</code> What format code should I use in <code>strptime</code> function in Python 3.7? I tried the following but the <code>'8'</code> at end just before <code>'Z'</code> is not a valid weekday</p> <pre><code>from datetime import datetime timestamp_str = &quot;2020-08-13T20:41:15.4227628Z&quot; timestamp = datetime.strptime(timestamp_str, '%Y-%m-%dT%H:%M:%S.%f%uZ') ValueError: time data '2020-08-13T20:41:15.4227628Z' does not match format '%Y-%m-%dT%H:%M:%S.%f%uZ' </code></pre>
<p>Your timestamp's format is mostly in accordance with <a href="https://en.wikipedia.org/wiki/ISO_8601" rel="nofollow noreferrer">ISO 8601</a>, except for the 7-digit fractional seconds.</p> <ul> <li>The 7th digit would be 1/10th of a microsecond; normally you'd have 3, 6 or 9 digits resolution (milli-, micro- or nanoseconds respectively).</li> <li>The <strong>Z</strong> denotes UTC.</li> </ul> <p>In Python, you can parse this format conveniently <a href="https://stackoverflow.com/a/63447899/10197418">as I show here</a>.</p>
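One stdlib-only sketch: trim the fraction to the six digits `%f` accepts, and convert the trailing `Z` into a numeric offset so `%z` can parse it:

```python
import re
from datetime import datetime

def parse_iso_z(ts: str) -> datetime:
    """Parse ISO 8601 with up to 7 fractional digits and a trailing Z."""
    # Keep at most 6 fractional digits (microseconds), dropping the rest.
    ts = re.sub(r'(\.\d{6})\d+', r'\1', ts)
    # strptime's %z accepts a numeric offset, so map Z -> +0000 (UTC).
    ts = ts.replace('Z', '+0000')
    return datetime.strptime(ts, '%Y-%m-%dT%H:%M:%S.%f%z')

dt = parse_iso_z('2020-08-13T20:41:15.4227628Z')
print(dt.isoformat())  # 2020-08-13T20:41:15.422762+00:00
```

The truncation loses the 1/10-microsecond digit, since `datetime` only resolves microseconds.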
python|datetime|strptime
1
1,907,815
66,505,179
How can I count the number of rows returned?
<pre><code>myDB = pymysql.connect( host=&quot;localhost&quot;, user=&quot;exampleuser&quot;, password=&quot;examplepassword&quot;, database=&quot;exampledb&quot; ) myCursor = myDB.cursor() msg = &quot;something&quot; SQL = &quot;SELECT * FROM `example` WHERE `example`='&quot; + msg + &quot;'&quot; myCursor.execute(SQL) myResult = myCursor.fetchall() </code></pre> <p>Some times it can return multiple results. So, I do not think <code>fetchone</code> will work.</p> <p>How can I count the number of rows in Python?, like in PHP, I could use <code>mysqli_num_rows()</code>.</p> <p>I am unable to find any way to do this. I am using Python 3.</p> <p>I am absolutely new to python MySQL. I am really familiar with MySQL in PHP.</p> <p>You can do it with <code>len()</code>. But, for my curiosity: is there way to do it using pymysql directly?</p>
<p>If you want to get the number of rows returned without calling <code>len</code> on the resultset, you can inspect the cursor's <a href="https://www.python.org/dev/peps/pep-0249/#rowcount" rel="nofollow noreferrer">rowcount</a> attribute.</p> <pre class="lang-py prettyprint-override"><code>myCursor = myDB.cursor()
msg = &quot;something&quot;
SQL = &quot;SELECT * FROM `example` WHERE `example`= %s&quot;
myCursor.execute(SQL, (msg,))
print('Number of rows is', myCursor.rowcount)
</code></pre>
python|mysql|python-3.x|pymysql
3
1,907,816
50,705,901
Python to Open html files in Excel
<p>I have a bunch of purchase orders in .html formats that I need to extract data and put in one simple excel sheet. While I could use beutifulsoup to do it I would rather just use excel's in built converter which already does a much better job. Then just work with excel files directly. Is there a way to use python to open html documents, then save it again in .xlsx. I tried using openpyxl but it does not take html files.</p>
<p>You could use Python to automate an instance of the Excel application, opening each file and saving it as <code>.xlsx</code>:</p> <pre><code>import win32com.client

excelApp = win32com.client.Dispatch('Excel.Application')
book = excelApp.Workbooks.Open(path_to_html_file)  # Open is a method of the Workbooks collection
book.SaveAs(path_to_html_file + '.xlsx', 51)       # 51 = xlOpenXMLWorkbook (.xlsx)
book.Close()
excelApp.Quit()
</code></pre>
python|excel|openpyxl
1
1,907,817
50,453,457
knapsack with no values
<p>I am facing the following problem: I have a Python dictionary like this:</p> <pre><code>total = 30

companies = {
    'a': 30,
    'b': 7,
    'c': 21,
    'd': 5,
    'e': 5,
    etc
}
</code></pre> <p>What I am trying to do is group companies in a way that the numbers add up to a total. In this example the output I want would be:</p> <pre><code>group1 = {
    'a': 30
}

group2 = {
    'c': 21,
    'b': 7
}

group3 = {
    'd': 5,
    'e': 5
}
</code></pre> <p>If a value of a key in the dictionary is &gt; total, then a group will be created containing only that key:value. For example, if we would have:</p> <pre><code>companies = {
    'a': 30,
    'b': 7,
    'c': 21,
    'd': 5,
    'e': 5,
    'f': 32,
    etc
}
</code></pre> <pre><code>group1 = {
    'f': 32
}
etc
</code></pre> <p>I have searched for various ways of implementing this; the best I have found would be Knapsack, but this algorithm would take as input weight and value only as int. Also I found this interesting module:</p> <p><a href="https://developers.google.com/optimization/bin/knapsack" rel="nofollow noreferrer">https://developers.google.com/optimization/bin/knapsack</a></p> <pre><code>from __future__ import print_function
from ortools.algorithms import pywrapknapsack_solver


def main():
    # Create the solver.
    solver = pywrapknapsack_solver.KnapsackSolver(
        pywrapknapsack_solver.KnapsackSolver.
        KNAPSACK_DYNAMIC_PROGRAMMING_SOLVER, 'test')

    weights = [[565, 406, 194, 130, 435, 367, 230, 315, 393, 125,
                670, 892, 600, 293, 712, 147, 421, 255]]
    capacities = [850]
    values = weights[0]
    solver.Init(values, weights, capacities)
    computed_value = solver.Solve()

    packed_items = [x for x in range(0, len(weights[0]))
                    if solver.BestSolutionContains(x)]
    packed_weights = [weights[0][i] for i in packed_items]
    print("Packed items: ", packed_items)
    print("Packed weights: ", packed_weights)
    print("Total weight (same as total value): ", computed_value)

if __name__ == '__main__':
    main()
</code></pre> <p>I have tried to modify this algorithm to work with a dictionary (especially with a string) but with no success.</p> <p>Is there a better way to achieve these results?</p> <p>Thank you,</p>
<p>If I understand your question correctly, you are looking to minimize the overall number of groups that you are creating. This can be solved by iteratively creating a group with as much weight as possible in each iteration and then solving for the remaining items.</p> <p>Note that I used a simple recursive implementation to solve the subset sum problem. You might want to use a more optimized framework if you operate on a larger data set.</p> <pre><code># Group items into groups with max_weight
# Items with weight &gt; max_weight get their own group
def create_groups(max_weight, items):
    # Will hold the assignment of each company to a group
    cur_group = 1
    groups = {}

    # Assign all companies with weight &gt; max_weight
    for item, weight in items.items():
        if weight &gt; max_weight:
            groups[item] = cur_group
            cur_group += 1

    # Cluster remaining items
    while 1:
        rem_items = {k: v for k, v in items.items() if not k in groups.keys()}
        if len(rem_items) == 0:
            break
        solution = solve_subset_sum(max_weight, rem_items)
        groups.update({k: cur_group for k in solution})
        cur_group += 1

    return groups

# Solve a subset sum problem
def solve_subset_sum(max_weight, items):
    best_solution = []
    best_sum = 0
    for item, weight in items.items():
        if weight &gt; max_weight:
            continue
        items_reduced = {k: v for k, v in items.items() if k != item}
        solution = solve_subset_sum(max_weight - weight, items_reduced)
        solution.append(item)
        solution_sum = sum([v for k, v in items.items() if k in solution])
        if solution_sum &gt; best_sum:
            best_solution = solution
            best_sum = solution_sum
    return best_solution

if __name__ == '__main__':
    # Input as specified by you
    total = 30
    companies = {
        'a': 30,
        'b': 7,
        'c': 21,
        'd': 5,
        'e': 5,
        'f': 32
    }

    # Test it
    groups = create_groups(total, companies)
    print(groups)
</code></pre> <p>The code yields <code>{'f': 1, 'a': 2, 'c': 3, 'b': 3, 'e': 4, 'd': 4}</code> as a result. The format is <code>item_name: group_number</code>.</p>
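As one sketch of the "more optimized framework" hinted at above (my own substitution, not part of the original answer): the recursive subset-sum can be replaced by a reachable-sums dynamic program, which runs in roughly O(n · max_weight) instead of exponential time, assuming integer weights.

```python
def solve_subset_sum_dp(max_weight, items):
    # best[w] maps an achievable total weight w to one item list reaching it.
    best = {0: []}
    for item, weight in items.items():
        if weight > max_weight:
            continue
        # Iterate over a snapshot of current sums so each item is used at most once.
        for w, chosen in sorted(best.items(), reverse=True):
            new_w = w + weight
            if new_w <= max_weight and new_w not in best:
                best[new_w] = chosen + [item]
    # Return the subset with the largest achievable total weight.
    return best[max(best)]

print(solve_subset_sum_dp(30, {'b': 7, 'c': 21, 'd': 5, 'e': 5}))  # ['b', 'c']
```

Dropping this in for `solve_subset_sum` leaves `create_groups` unchanged, since both return a list of item names.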
python-3.x|dynamic-programming|knapsack-problem
1
1,907,818
45,230,156
Configure proxy settings Selenium Firefox
<p>I've been trying for about 2 days to use selenium with a proxy server but have still not been able to do it. I have tried the method from the selenium website, copy and pasting, but it didn't work. I've tried all of the answers from Stackoverflow but none of them worked; the closest I got was loading the page but my IP stayed the same.</p> <p>Here is the code I tried most recently:</p> <pre><code>#!/usr/bin/env python
from selenium import webdriver
from selenium.webdriver.common.proxy import Proxy, ProxyType
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

HOST = "144.217.31.225"
PORT = "3128"

def my_proxy(PROXY_HOST, PROXY_PORT):
    fp = webdriver.FirefoxProfile()
    # Direct = 0, Manual = 1, PAC = 2, AUTODETECT = 4, SYSTEM = 5
    print PROXY_PORT
    print PROXY_HOST
    fp.set_preference("network.proxy.type", 1)
    fp.set_preference("network.proxy.http", PROXY_HOST)
    fp.set_preference("network.proxy.http_port", int(PROXY_PORT))
    fp.set_preference("general.useragent.override", "whater_useragent")
    fp.update_preferences()
    return webdriver.Firefox(firefox_profile=fp)

browser = my_proxy(HOST, PORT)
browser.get('https://www.google.com')
search = browser.find_element_by_xpath("//input[@title='Search']")
search.send_keys('my ip')
</code></pre> <p>This is the error I received:</p> <pre><code>3128
144.217.31.225
Traceback (most recent call last):
  File "./script.py", line 32, in &lt;module&gt;
    search=browser.find_element_by_xpath("//input[@title='Search']")
  File "/home/matt/.local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 313, in find_element_by_xpath
    return self.find_element(by=By.XPATH, value=xpath)
  File "/home/matt/.local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 791, in find_element
    'value': value})['value']
  File "/home/matt/.local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 256, in execute
    self.error_handler.check_response(response)
  File "/home/matt/.local/lib/python2.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 194, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: //input[@title='Search']
</code></pre> <p>It manages to load the page but fails to enter the text in the search bar. I manually take over and type 'my IP' in the search bar but the IP hasn't changed.</p>
<p>The following code worked, although Google didn't like the fact I was using a proxy with selenium, which is why I changed the address to <code>whatismyipaddress.com</code>.</p> <pre><code>from selenium import webdriver
from selenium.webdriver.common.proxy import Proxy, ProxyType
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from time import sleep

HOST = "144.217.31.225"
PORT = "3128"

def my_proxy(PROXY_HOST, PROXY_PORT):
    fp = webdriver.FirefoxProfile()
    # Direct = 0, Manual = 1, PAC = 2, AUTODETECT = 4, SYSTEM = 5
    print PROXY_PORT
    print PROXY_HOST
    fp.set_preference("network.proxy.type", 1)
    fp.set_preference("network.proxy.http", PROXY_HOST)
    fp.set_preference("network.proxy.http_port", int(PROXY_PORT))
    fp.set_preference("network.proxy.ssl", PROXY_HOST)
    fp.set_preference("network.proxy.ssl_port", int(PROXY_PORT))
    fp.set_preference("general.useragent.override", "whater_useragent")
    fp.update_preferences()
    return webdriver.Firefox(firefox_profile=fp)

browser = my_proxy(HOST, PORT)
browser.get('http://whatismyipaddress.com/')
sleep(5)
search = browser.find_element_by_xpath("//input[@title='Search']")
search.send_keys('my ip')
</code></pre> <p>Thanks to @SAZ for the solution.</p>
python|selenium|firefox|proxy
2
1,907,819
61,553,913
How to establish connection to use dbpediaSpotlight in python?
<p>I'm programming Python in Jupyter Notebook, but when I run this code:</p> <pre><code>import requests
import re
import spotlight

document = 'First documented in the 13th century, Berlin was the capital of the Kingdom of Prussia (1701–1918), the German Empire (1871–1918), the Weimar Republic (1919–33) and the Third Reich (1933–45). Berlin in the 1920s was the third largest municipality in the world. After World War II, the city became divided into East Berlin -- the capital of East Germany -- and West Berlin, a West German exclave surrounded by the Berlin Wall from 1961–89. Following German reunification in 1990, the city regained its status as the capital of Germany, hosting 147 foreign embassies.'

annotations = spotlight.annotate('http://spotlight.dbpedia.org/rest', document, confidence=0.5, support=20)
</code></pre> <p>It shows this error. How can I run it?</p> <pre><code>ConnectionError: HTTPSConnectionPool(host='www.dbpedia-spotlight.orgrest', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('&lt;urllib3.connection.VerifiedHTTPSConnection object at 0x000000070169B1C8&gt;: Failed to establish a new connection: [Errno 11001] getaddrinfo failed')
</code></pre> <p>Thank you.</p>
<p>The library had an update and can no longer be consulted that way. I recommend you check the following <a href="https://stackoverflow.com/questions/61758873/how-to-get-dbpedia-spotlight-annotations-and-candidates-with-filter-sparql-in-py">post</a>.</p> <p>It does the same thing, only implemented another way, using <code>requests</code> against the REST endpoint directly.</p> <p>The SPARQL query mentioned there can be skipped and it still works.</p> <p>(I'm writing through a translator, so apologies for any errors.)</p>
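For illustration, a minimal standalone call to the public Spotlight service can be built with the standard library alone. The endpoint URL below is an assumption on my part (it is the one the hosted service has used in recent years); verify it against the current DBpedia Spotlight documentation before relying on it.

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

document = 'First documented in the 13th century, Berlin was the capital of Prussia.'

# Assumed public endpoint -- check the current Spotlight docs.
base = 'https://api.dbpedia-spotlight.org/en/annotate'
url = base + '?' + urlencode({'text': document, 'confidence': 0.5, 'support': 20})
req = Request(url, headers={'Accept': 'application/json'})
print(url)

# When online, send the request and read the annotations:
# annotations = json.loads(urlopen(req).read().decode())['Resources']
```

The `Accept: application/json` header matters: without it the service answers with HTML.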
python|jupyter-notebook|spotlight|spotlight-dbpedia
0
1,907,820
61,318,455
Executing multiple pytest test files by using one "master" file in Python
<p>I have multiple test files in the test folder. My directory structure is like:</p> <pre><code>Tests/
    run-tests.py
    pytest.ini
    /TestCases
        TestCase1.py
        TestCase2.py
</code></pre> <p>My <em>run-tests.py</em> file contains:</p> <pre><code>from __future__ import absolute_import

import pytest

if __name__ == '__main__':
    pytest.main(args=['TestsCases'])
</code></pre> <p>So when I run <em>run-tests.py</em> from the console:</p> <pre><code>python3.8 -m pytest Tests/run-tests.py
</code></pre> <p>tests are not executed. Of course I could create a shell script and call them inside the script by explicitly calling them, but this is what I do not want to do. TestCase1.py and the other files contain tests which use the <em>pytest</em> framework and are defined as functions. I am not using classes inside.</p> <p><strong>So, question: is it possible to execute TestCase*.py files by using the run-tests.py file?</strong></p> <p>Also I wonder how the pytest.xml files will be generated. My pytest.ini file contains:</p> <pre><code>addopts = -v -s -ra --junitxml=Tests/test-reports/pytest.xml
</code></pre> <p>It would be very nice to be able to merge all TestCase*.py exports into a single report xml file. I am afraid that every TestCase*.py file will overwrite the previous one.</p>
<p>I found the solution. Inside my <em>run-tests.py</em> file I have a loop from which I call every test file:</p> <pre><code>test_cases_folder = os.path.dirname(os.path.realpath(__file__)) + '/TestCases/'
for name in sorted(os.listdir(test_cases_folder)):
    if fnmatch.fnmatch(name, 'Test*.py'):
        pytest.main(
            [test_cases_folder + name,
             '--url=' + args.server,
             '--user=' + args.user,
             '--headless=' + args.headless]
        )
</code></pre> <p>Also I am executing the script as a module, not by calling the file directly:</p> <pre><code>python3.8 -m Tests.run-tests --server=&lt;hostname&gt; --user=&lt;username&gt; --headless=False|True
</code></pre> <p>Note that each separate <code>pytest.main</code> call rewrites the junit xml configured in <code>pytest.ini</code>; passing the whole <code>TestCases</code> folder to a single <code>pytest.main</code> call is the simpler way to get one merged report. Hopefully this will help somebody.</p>
python-3.x|pytest
0
1,907,821
58,084,072
Python module not found but exists in folder
<p>I'm trying to execute a simple PY file and I'm getting the following error:</p> <pre><code>Traceback (most recent call last): File "docker_pull.py", line 8, in &lt;module&gt; import requests ModuleNotFoundError: No module named 'requests' </code></pre> <p>Looking at Python install folder I found requests module under: <code>C:\Program Files (x86)\Python 3.7.1\Lib\site-packages\pip\_vendor\requests</code>. How can I force Python to use the module already installed ?</p> <p>P.S: I don't have internet connection in this machine.</p>
<p>This is a very typical issue, so I will leave this here for future reference.</p> <p>If I have a project called: my_project</p> <pre><code>.
└── my_project
    ├── first_folder
    ├── second_folder
    └── third_folder
</code></pre> <p>What you want to do is one of these two things:</p> <ol> <li>PYTHONPATH (only Python)</li> </ol> <p><code>cd ~/my_project &amp;&amp; export PYTHONPATH=$(pwd)</code></p> <p>Change dir to the root dir, and export that dir to PYTHONPATH, so when Python runs it looks in the PYTHONPATH.</p> <ol start="2"> <li>PATH (everything)</li> </ol> <p><code>cd ~/my_project &amp;&amp; export PATH=$PATH:$(pwd)</code></p> <p>Same as above!</p> <p>This needed to live somewhere since it took me a while to figure out. So for anyone in the future who needs help with this!</p>
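The same effect as the <code>PYTHONPATH</code> export can be had from inside a script, which is handy on machines where you cannot touch the environment (a sketch of mine, not part of the original answer):

```python
import os
import sys

# Prepend the project root (the directory containing this script, falling
# back to the current working directory) to Python's import search path.
if '__file__' in globals():
    project_root = os.path.abspath(os.path.dirname(__file__))
else:
    project_root = os.getcwd()

sys.path.insert(0, project_root)
print(sys.path[0])
```

After this runs, `import first_folder.some_module` resolves relative to the project root regardless of where the script was launched from.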
python
5
1,907,822
18,644,857
Exception handling in Python (webapp2, Google App Engine)
<p>I tried to use the functions suggested on how to handle exceptions:</p> <p><a href="http://webapp-improved.appspot.com/guide/exceptions.html" rel="nofollow">http://webapp-improved.appspot.com/guide/exceptions.html</a></p> <p>in main.py:</p> <pre><code>def handle_404(request, response, exception):
    logging.exception(exception)
    response.write('404 Error')
    response.set_status(404)

def handle_500(request, response, exception):
    logging.exception(exception)
    response.write('A server error occurred!')
    response.set_status(500)

class AdminPage(webapp2.RequestHandler):
    def get(self):
        ...
        admin_id = admin.user_id()
        queues = httpRequests.get_queues(admin_id)

app = webapp2.WSGIApplication(...)
app.error_handlers[404] = handle_404
app.error_handlers[500] = handle_500
</code></pre> <p>The function in httpRequests.py:</p> <pre><code>def get_queues(admin_id):
    url = "http://localhost:8080/api/" + admin_id + "/queues"
    result = urlfetch.fetch(url)
    if (result.status_code == 200):
        received_data = json.loads(result.content)
        return received_data
</code></pre> <p>The function which is being called in the API:</p> <pre><code>class Queues(webapp2.RequestHandler):
    def get(self, admin_id):
        queues = queues(admin_id)
        if queues == []:
            self.abort(404)
        else:
            self.response.write(json.dumps(queues))
</code></pre> <p>I'm stuck at get_queues in httpRequests.py. How to handle the HTTP exceptions with urlfetch?</p>
<p>Another approach for handling errors is to make a <code>BaseHandler</code> with <code>handle_exception</code> and have all your other handlers extend this one. A fully working example will look like this:</p> <pre><code>import webapp2
from google.appengine.api import urlfetch

class BaseHandler(webapp2.RequestHandler):
    def handle_exception(self, exception, debug_mode):
        if isinstance(exception, urlfetch.DownloadError):
            self.response.out.write('Oups...!')
        else:
            # Display a generic 500 error page.
            pass

class MainHandler(BaseHandler):
    def get(self):
        url = "http://www.google.commm/"
        result = urlfetch.fetch(url)
        self.response.write('Hello world!')

app = webapp2.WSGIApplication([
    ('/', MainHandler)
], debug=True)
</code></pre> <p>An even better solution would be to throw the exceptions as they are when running in debug mode and handle them in a more friendly way when running in production. Taken from <a href="https://developers.google.com/appengine/articles/handling_datastore_errors" rel="nofollow">another example</a>, you could do something like this for your <code>BaseHandler</code> and expand it as you wish:</p> <pre><code>class BaseHandler(webapp2.RequestHandler):
    def handle_exception(self, exception, debug_mode):
        if not debug_mode:
            super(BaseHandler, self).handle_exception(exception, debug_mode)
        else:
            if isinstance(exception, urlfetch.DownloadError):
                # Display a download-specific error page
                pass
            else:
                # Display a generic 500 error page.
                pass
</code></pre>
python|google-app-engine|webapp2|urlfetch
4
1,907,823
18,307,968
How to save session data (Python)
<p>Say there is some code I wrote, and someone else uses, and inputs their name, and does other stuff the code has.</p> <pre><code>name = input('What is your name? ')
money = 5
#There is more stuff, but this is just an example.
</code></pre> <p>How would I be able to save that information, say, to a text file to be recalled at a later time to continue in that session. Like a save point in a video game.</p>
<p>You can write the information to a text file. For example:</p> <h2>mytext.txt</h2> <pre><code>(empty)
</code></pre> <h2>myfile.py</h2> <pre><code>name = input('What is your name? ')
...
with open('mytext.txt', 'w') as mytextfile:
    mytextfile.write(name)
# Now mytext.txt will contain the name.
</code></pre> <p>Then to access it again:</p> <pre><code>with open('mytext.txt') as mytextfile:
    the_original_name = mytextfile.read()

print(the_original_name)  # Whatever name was inputted will be printed.
</code></pre>
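If more than one value needs to survive between runs (the name <em>and</em> the money, say), serialising a dict with the <code>json</code> module keeps everything in one save file. A small sketch; the file name and fields are made up for illustration:

```python
import json

state = {'name': 'Alice', 'money': 5}

# Save the whole session state in one shot.
with open('savegame.json', 'w') as f:
    json.dump(state, f)

# Later (even in a fresh run of the program), restore it.
with open('savegame.json') as f:
    restored = json.load(f)

print(restored['name'], restored['money'])  # Alice 5
```

Unlike the plain-text approach, numbers come back as numbers, so `restored['money'] + 1` works without any conversion.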
python
1
1,907,824
18,670,725
'function' object has no attribute 'pprint'
<p>I got this error when trying to print an object using <code>pprint.pprint(object)</code></p> <p>I imported <code>from pprint import pprint</code>....</p> <pre><code>'function' object has no attribute 'pprint'
</code></pre> <p>My code:</p> <pre><code>output = ', '.join([pprint.pprint(p) for p in people_list])
return HttpResponse(output)
</code></pre> <p>What am I doing wrong?</p>
<p>You already imported the <em>function</em> object; leave off the <code>pprint.</code> reference:</p> <pre><code>output = ', '.join([pprint(p) for p in people_list])
return HttpResponse(output)
</code></pre> <p>This will still not do what you want, as it'll print to <code>sys.stdout</code>, <strong>not</strong> return a pretty-printed value (so the <code>join</code> will fail on the <code>None</code> results). Using the <code>pprint</code> function is not really suitable for a web server environment.</p> <p>I'd create a <code>PrettyPrinter</code> instance and use its <a href="http://docs.python.org/2/library/pprint.html#pprint.pformat" rel="nofollow noreferrer"><code>PrettyPrinter.pformat()</code> method</a> to generate output instead:</p> <pre><code>from pprint import PrettyPrinter

pprinter = PrettyPrinter()
output = ', '.join([pprinter.pformat(p) for p in people_list])
</code></pre> <p>You can use <a href="https://docs.python.org/2/library/pprint.html#pprint.pformat" rel="nofollow noreferrer"><code>pprint.pformat()</code></a> too, but it's more efficient to just re-use a single <code>PrettyPrinter()</code> object.</p>
python|django
4
1,907,825
71,525,755
Why is my code not incrementing the count?
<p>I have a function (from a part of my code) which is supposed to increment the count whenever I run the function; however, the count only stays at 1. What am I doing wrong? Thanks in advance.</p> <pre><code>def add_a_team():
    csv_team_count = 0
    team_adder = []
    csv_team_count += 1
    print(csv_team_count)
    if csv_team_count &lt; 4:
        for i in range(0, 6):
            name = str(input(&quot;You are now entering the data for Team 1.\nYou will see this 6 times, enter the team's name first then the members, all individually.&quot;))
            team_adder.append(name)
    else:
        print(&quot;Team count of 4 has been filled in! You cannot add another team!&quot;)
        print(&quot;\n&quot;)

    with open('teams.csv', 'a', newline = '', encoding='UTF8') as f:
        writer = csv.writer(f)
        #writing the data for each team.
        writer.writerow(team_adder)
    f.close()
</code></pre>
<p>Every time you call the function, the value is reset to 0 by this line:</p> <pre><code>csv_team_count = 0
</code></pre> <p>After that you increment it to 1, but the next call to the function sets it back to 0 again. This value needs to live outside the function — for example, be passed in as an argument and returned.</p>
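A minimal sketch of the argument-and-return approach (the team-entry details are stripped out for brevity):

```python
def add_a_team(csv_team_count):
    # ... collect and write the team data here ...
    csv_team_count += 1
    return csv_team_count  # hand the updated count back to the caller

count = 0
count = add_a_team(count)
count = add_a_team(count)
print(count)  # 2
```

The caller owns the counter, so the function can no longer wipe it out on entry; the `if csv_team_count < 4` check then works against the real running total.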
python|increment
2
1,907,826
69,590,124
Extract all instances of a substring in an array column and move to a new column
<p>I am fairly new to python so please bare with me. I am attempting to extract all instances of a specific substring between two markers and export it to a new column. The file contains thousands of columns, sometimes with hundreds of rows for the column (['INFO']) that I'm interested in. Specifically, I want to extract the string that always follows &quot;|HIGH|&quot; as shown in the example below.</p> <pre><code>**#CHROM POS ALT INFO 217 1 21351411 &lt;DEL&gt; SVTYPE=DEL;STRANDS=+-:25;SVLEN=-18597;END=21370008;CIPOS=-10,652;CIEND=-236,9;CIPOS95=-1,104;CIEND95=-48,2;IMPRECISE;SU=25;PE=25;SR=0;ANN=&lt;DEL&gt;|transcript_ablation|HIGH|Khdc1c|Khdc1c|transcript|NM_001033904.1|protein_coding|1/3|c.-17246_*281del|p.0?|||||,&lt;DEL&gt;|splice_region_variant&amp;downstream_gene_variant|LOW|Khdc1a|Khdc1a|transcript|NM_183322.2|protein_coding|3/3|c.*319_*18915del|||||0|,&lt;DEL&gt;|3_prime_UTR_variant|MODIFIER|Khdc1a|Khdc1a|transcript|NM_183322.2|protein_coding|3/3|c.*319_*18915del|||||319|,&lt;DEL&gt;|upstream_gene_variant|MODIFIER|Khdc1c|Khdc1c|transcript|NM_001033904.1|protein_coding|1/3|c.-17246_*281del|||||16919|,&lt;DEL&gt;|downstream_gene_variant|MODIFIER|Khdc1c|Khdc1c|transcript|NM_001033904.1|protein_coding|1/3|c.-17246_*281del|||||0|,&lt;DEL&gt;|intergenic_region|MODIFIER|Khdc1a-Khdc1c|Khdc1a-Khdc1c|intergenic_region|Khdc1a-Khdc1c|||n.21351412_21370008del||||||,&lt;DEL&gt;|intergenic_region|MODIFIER|Khdc1c-Khdc1b|Khdc1c-Khdc1b|intergenic_region|Khdc1c-Khdc1b|||n.21351412_21370008del||||||,** </code></pre> <p>Each of the lines I'm interested in for the ['INFO'] column contains &quot;...|HIGH|Gene_here|&quot;. There can be hundreds of these lines in a given row and I want all of the substring instances for a given line to be exported to one new column but kept in the same row. 
I just want the gene name that follows &quot;|HIGH|&quot; but ends before the next &quot;|&quot; to be extracted to the new column in all instances.</p> <p>I have attempted to do this with a for loop using regex but can't quite figure it out.</p> <pre><code>import re

for substring in df['INFO']:
    df = df['INFO']
    pattern = &quot;HIGH\|(\w+)\|&quot;
    substring = re.search(pattern, df).group(1)
    df['GENE'] = susbtring
</code></pre>
<p>I am not sure why you were using the <code>for</code> loop there, but to perform the operation you stated I would iterate over the rows and collect <em>every</em> <code>|HIGH|</code> occurrence in each row, not just the first:</p> <pre><code>keyWord = '|HIGH|'
genes = []
for tempString in df['INFO'].astype(str):
    found = []
    index = tempString.find(keyWord)
    while index != -1:
        rIndex = tempString.find('|', index + len(keyWord))
        if rIndex == -1:
            break
        found.append(tempString[index + len(keyWord):rIndex])
        index = tempString.find(keyWord, rIndex)
    genes.append(', '.join(found))
df['GENE'] = genes
</code></pre>
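An alternative (my suggestion, not from the answer above): a regular expression with <code>findall</code> grabs every gene after <code>|HIGH|</code> in one pass, which can then be applied per row with something like <code>df['INFO'].apply(...)</code>. A sketch on a shortened <code>INFO</code> string:

```python
import re

info = "<DEL>|transcript_ablation|HIGH|Khdc1c|Khdc1c|transcript|...,<DEL>|splice_region_variant|LOW|Khdc1a|..."

# Capture everything between '|HIGH|' and the next '|', for every occurrence.
genes = re.findall(r'\|HIGH\|([^|]+)\|', info)
print(genes)  # ['Khdc1c']
```

The `[^|]+` class stops the match at the next pipe, which is what keeps each capture to a single gene name.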
python|arrays|pandas
0
1,907,827
57,486,558
How to process a sequence of multiple temporary files in Python?
<p>I want to go through the following steps:</p> <p>1) tempfileA (Note: this is a blob downloaded from Google Cloud Storage)</p> <p>2) tempfileB = function(tempfileA)</p> <p>3) tempfileC = function(tempfileB)</p> <p>This should be pretty straightforward; however, I am not sure about the best way to access different temporary files created in sequence based on the previous one.</p> <p>So far I have found the example below from the <a href="https://docs.python.org/3/library/tempfile.html" rel="nofollow noreferrer">docs</a>, but the <code>Temporaryfile</code> is closed at the exit of the <code>with</code> clause, so it should not be possible to access the temp file in the next step.</p> <pre><code># create a temporary file using a context manager
with tempfile.TemporaryFile() as fp:
    fp.write(b'Hello world!')
    fp.seek(0)
    fp.read()
</code></pre> <p>Could you please suggest what would be a good way to achieve what is described above? Please note that at each step a method from an external library is called to process the current temp file, and the result should be the next temp file.</p>
<p>You can open multiple files in the same <code>with</code> block.</p> <pre><code>with TemporaryFile() as fp0, TemporaryFile() as fp1, TemporaryFile() as fp2:
    fp0.write(b'foo')
    fp0.seek(0)
    fp1.write(fp0.read())
    ...
</code></pre>
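If the external library expects file <em>paths</em> rather than open file objects (common when the processing steps reopen files themselves), a single <code>tempfile.TemporaryDirectory</code> that outlives all three steps is a simple alternative. A sketch where the upper-casing stands in for the real library calls:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as workdir:
    file_a = os.path.join(workdir, 'a.dat')  # step 1: the downloaded blob
    file_b = os.path.join(workdir, 'b.dat')  # step 2: derived from file_a

    with open(file_a, 'wb') as f:
        f.write(b'hello world')

    # file_b = function(file_a) -- stand-in transformation
    with open(file_a, 'rb') as src, open(file_b, 'wb') as dst:
        dst.write(src.read().upper())

    with open(file_b, 'rb') as f:
        result = f.read()

print(result)  # b'HELLO WORLD'; the directory and files are gone now
```

Everything under `workdir` is deleted automatically when the `with` block exits, so there is no per-file cleanup to manage.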
python|python-3.x|google-cloud-platform|google-cloud-storage|temporary-files
6
1,907,828
54,116,080
Having issues with neural network training. Loss not decreasing
<p>I'm largely following <a href="https://github.com/jeffwen/road_building_extraction" rel="noreferrer">this</a> project but am doing a pixel-wise classification. I have 8 classes and 9 band imagery. My images are gridded into 9x128x128. My loss is not reducing and training accuracy doesn't fluctuate much. I'm guessing I have something wrong with the model. Any advice is much appreciated! I get at least 91% accuracy using random forest.</p> <p>My classes are extremely unbalanced so I attempted to adjust training weights based on the proportion of classes within the training data.</p> <pre><code># get model learning_rate = 0.0001 model = unet.UNetSmall(8) optimizer = optim.Adam(model.parameters(), lr=learning_rate) # set up weights based on data proportion weights = np.array([0.79594768, 0.07181202, 0.02347426, 0.0042031, 0.00366211, 0.00764327, 0.07003923, 0.02321833]) weights = (1 - weights)/7 print('Weights of training data based on proportion of the training labels. Not compted here') print(weights) print(sum(weights)) criterion = nn.CrossEntropyLoss(weight = weight) lr_scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1) </code></pre> <blockquote> <p>Weights of training data based on proportion of the training labels. Not compted here [0.02915033 0.13259828 0.13950368 0.1422567 0.14233398 0.14176525 0.13285154 0.13954024] 1.0000000000000002</p> </blockquote> <p>I've normalized the data using the transforms.functional.normalize function. I calculated the mean and standard deviation of the training data and added this augmentation to my data loader.</p> <pre><code>dataset_train = data_utils.SatIn(data_path, 'TrainValTest.csv', 'train', transform=transforms.Compose([aug.ToTensorTarget(), aug.NormalizeTarget(mean=popmean, std=popstd)])) </code></pre> <p>I augmented my training data in preprocessing by rotating and flipping the imagery. 
1 image grid then became 8.</p> <p>I checked that my training data matched my classes and everything checked out. Since I'm using 8 classes I chose to use CrossEntropyLoss since it has Softmax built in.</p> <p>Current model</p> <pre><code>class UNetSmall(nn.Module): """ Main UNet architecture """ def __init__(self, num_classes=1): super().__init__() # encoding self.conv1 = encoding_block(9, 32) self.maxpool1 = nn.MaxPool2d(kernel_size=2) self.conv2 = encoding_block(32, 64) self.maxpool2 = nn.MaxPool2d(kernel_size=2) self.conv3 = encoding_block(64, 128) self.maxpool3 = nn.MaxPool2d(kernel_size=2) self.conv4 = encoding_block(128, 256) self.maxpool4 = nn.MaxPool2d(kernel_size=2) # center self.center = encoding_block(256, 512) # decoding self.decode4 = decoding_block(512, 256) self.decode3 = decoding_block(256, 128) self.decode2 = decoding_block(128, 64) self.decode1 = decoding_block(64, 32) # final self.final = nn.Conv2d(32, num_classes, kernel_size=1) def forward(self, input): # encoding conv1 = self.conv1(input) maxpool1 = self.maxpool1(conv1) conv2 = self.conv2(maxpool1) maxpool2 = self.maxpool2(conv2) conv3 = self.conv3(maxpool2) maxpool3 = self.maxpool3(conv3) conv4 = self.conv4(maxpool3) maxpool4 = self.maxpool4(conv4) # center center = self.center(maxpool4) # decoding decode4 = self.decode4(conv4, center) decode3 = self.decode3(conv3, decode4) decode2 = self.decode2(conv2, decode3) decode1 = self.decode1(conv1, decode2) # final final = nn.functional.upsample(self.final(decode1), input.size()[2:], mode='bilinear') return final </code></pre> <p>Training method</p> <pre><code>def train(train_loader, model, criterion, optimizer, scheduler, epoch_num): correct = 0 totalcount = 0 scheduler.step() # iterate over data for idx, data in enumerate(tqdm(train_loader, desc="training")): # get the inputs and wrap in Variable if torch.cuda.is_available(): inputs = Variable(data['sat_img'].cuda()) labels = Variable(data['map_img'].cuda()) else: inputs = 
Variable(data['sat_img']) labels = Variable(data['map_img']) optimizer.zero_grad() outputs = model(inputs) loss = criterion(outputs, labels.long()) loss.backward() optimizer.step() test = torch.max(outputs.data, 1)[1] == labels.long() correct += test.sum().item() totalcount += test.size()[0] * test.size()[1] * test.size()[2] print('Training Loss: {:.4f}, Accuracy: {:.2f}'.format(loss.data[0], correct/totalcount)) return {'train_loss': loss.data[0], 'train_acc' : correct/totalcount} </code></pre> <p>Training call in epoch loop</p> <pre><code>lr_scheduler.step() train_metrics = train(train_dataloader, model, criterion, optimizer, lr_scheduler, epoch) </code></pre> <p>Some epoch iteration output</p> <blockquote> #### Epoch 0/19 <p>---------- training: 100%|████████████████████████████████████████████████████████████████████████| 84/84 [00:17&lt;00:00, 5.77it/s] Training Loss: 0.8901, Accuracy: 0.83 Current elapsed time 2m 6s</p> #### Epoch 1/19 <p>---------- training: 100%|████████████████████████████████████████████████████████████████████████| 84/84 [00:17&lt;00:00, 5.72it/s] Training Loss: 0.7922, Accuracy: 0.83 Current elapsed time 2m 24s</p> #### Epoch 2/19 <p>---------- training: 100%|████████████████████████████████████████████████████████████████████████| 84/84 [00:18&lt;00:00, 5.44it/s] Training Loss: 0.8753, Accuracy: 0.84 Current elapsed time 2m 42s</p> #### Epoch 3/19 <p>---------- training: 100%|████████████████████████████████████████████████████████████████████████| 84/84 [00:18&lt;00:00, 5.53it/s] Training Loss: 0.7741, Accuracy: 0.84 Current elapsed time 3m 1s</p> </blockquote>
<p>It's hard to debug your model with this information, but maybe some of these ideas will help you in some way:</p> <ol> <li>Try to overfit your network on much smaller data and for many epochs without augmenting first, say one-two batches for many epochs. If this doesn't work, then your model is not capable of modeling the relation between the data and the desired target, or you have an error somewhere. Furthermore, it's easier to debug it that way.</li> <li>I'm not sure about the weights idea; maybe try to upsample underrepresented classes in order to make the data more balanced (repeat some underrepresented examples in your dataset). Curious where this idea is from — I've never heard of it.</li> <li>Have you tried to run the model from the repo you provided before applying your own customisations? How well does it perform; were you able to replicate their findings? Why do you think this architecture would be a good fit for your, from what I understand, different case? The loss function in the link you provided is different, while the architecture is the same. I haven't read this paper, neither have I tried your model, but it seems a little strange.</li> <li>The link inside the GitHub repo points to a blog post where bigger batches are advised, as they stabilize the training; what is your batch size?</li> <li>Maybe start with a smaller and easier model and work your way up from there?</li> </ol> <p>And the most important point comes last: I don't think SO is the best place for such a question (especially as it is research oriented). I see you have already asked it on GitHub issues though; maybe try to contact the author directly?</p> <p>If I were you I would start with the last point and a thorough understanding of the operations and their effect on your goal. Good luck.</p>
python|machine-learning|pytorch
13
1,907,829
54,056,565
AttributeError: module 'skimage.measure' has no attribute 'marching_cubes'
<p>When I'm executing some code that I found on the web, it gives me "AttributeError: module 'skimage.measure' has no attribute 'marching_cubes'". Do you have any idea how to fix this?</p> <p>Executed code segment: </p> <pre><code>from skimage import measure

def make_mesh(image, threshold=+30, step_size=1):

    print "Transposing surface"
    p = image.transpose(2, 1, 0)

    print "Calculating surface"
    verts, faces, norm, val = measure.marching_cubes(p, threshold, step_size=step_size, allow_degenerate=True)
    return verts, faces
</code></pre>
<p>In the new version, there are two methods <a href="https://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.marching_cubes_lewiner" rel="nofollow noreferrer"><code>marching_cubes_lewiner</code></a> and <a href="https://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.marching_cubes_classic" rel="nofollow noreferrer"><code>marching_cubes_classic</code></a>. But classic doesn't take <code>step_size</code> parameter. You can try this:</p> <pre class="lang-py prettyprint-override"><code>measure.marching_cubes_lewiner(p, threshold, step_size=step_size, allow_degenerate=True) </code></pre>
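Since the function has been renamed across scikit-image releases (and recent versions expose a single `marching_cubes` entry point again), a version-tolerant lookup avoids pinning to one name. The resolver below is a generic sketch, demonstrated against a stand-in namespace rather than a real skimage install:

```python
from types import SimpleNamespace

def resolve_marching_cubes(measure_module):
    """Return whichever marching-cubes function this scikit-image version exposes."""
    for name in ("marching_cubes", "marching_cubes_lewiner"):
        fn = getattr(measure_module, name, None)
        if fn is not None:
            return fn
    raise AttributeError("no marching cubes implementation found")

# Stand-in for skimage.measure, just to show the lookup order.
fake_measure = SimpleNamespace(marching_cubes_lewiner=lambda *a, **k: "lewiner")
print(resolve_marching_cubes(fake_measure)())  # "lewiner"
```

In real use you would call `resolve_marching_cubes(measure)(p, threshold, step_size=step_size, allow_degenerate=True)`.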
python-3.x|image-processing|scikit-image
6
1,907,830
58,536,846
Efficient operation on numpy arrays containing rows of different sizes
<p>I want to ask something related to this question posted some time ago: <a href="https://stackoverflow.com/questions/38894291/operation-on-numpy-arrays-contain-rows-with-different-size">Operation on numpy arrays contain rows with different size</a>. The point is that I need to do some operations with numpy arrays that contain rows of different sizes.</p> <p>The standard way like "list2*list3*np.exp(list1)" doesn't work since the rows are of different sizes, and the option that works is using zip. See the code below.</p> <pre><code>import numpy as np
import time

list1 = np.array([[2.,0.,3.5,3],[3.,4.2,5.,7.1,5.]])
list2 = np.array([[2,3,3,0],[3,8,5.1,7.6,1.2]])
list3 = np.array([[1,3,8,3],[3,4,9,0,0]])

start_time = time.time()
c = []
for i in range(len(list1)):
    c.append([list2*list3*np.exp(list1) for list1, list2, list3 in zip(list1[i], list2[i], list3[i])])
print("--- %s seconds ---" % (time.time()-start_time))
</code></pre> <p>I want to ask if there is a much more efficient way to perform these operations, avoiding the loop and doing it in a more numpy-like way. Thanks!</p>
<p>This should do it:</p> <pre><code>f = np.vectorize(lambda x, y, z: y * z * np.exp(x))

result = [f(*i) for i in np.column_stack((list1, list2, list3))]
result
#[array([  14.7781122 ,    9.        ,  794.77084701,    0.        ]),
# array([  180.76983231,  2133.96259331,  6812.16400281,     0.        ,     0.        ])]
</code></pre>
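If the arrays stay ragged anyway (so there is no contiguous block to vectorize over), a plain comprehension with `math.exp` computes the same element-wise `b * c * exp(a)` without any numpy object-dtype machinery — a dependency-free sketch using the question's data:

```python
import math

list1 = [[2.0, 0.0, 3.5, 3], [3.0, 4.2, 5.0, 7.1, 5.0]]
list2 = [[2, 3, 3, 0], [3, 8, 5.1, 7.6, 1.2]]
list3 = [[1, 3, 8, 3], [3, 4, 9, 0, 0]]

result = [
    [b * c * math.exp(a) for a, b, c in zip(r1, r2, r3)]
    for r1, r2, r3 in zip(list1, list2, list3)
]
print(result[0][1])  # 3 * 3 * e^0 = 9.0
```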
python|arrays|numpy
1
1,907,831
58,368,839
How to remove channels in Conda?
<p>How do you remove a channel, or at least instruct Conda to not abort if one of the channel URLs is down?</p> <p>I was testing a buggy tool, which provided a Conda repo at <code>https://www.idiap.ch/software/bob/conda/linux-64</code>, but now that repo is offline, and now all calls to <code>conda create --prefix=./cenv -y python=3.7</code> fail with the message:</p> <pre><code>Collecting package metadata (current_repodata.json): failed CondaHTTPError: HTTP 503 SERVICE UNAVAILABLE for url &lt;https://www.idiap.ch/software/bob/conda/linux-64/current_repodata.json&gt; Elapsed: 00:02.470723 A remote server error occurred when trying to retrieve this URL. A 500-type error (e.g. 500, 501, 502, 503, etc.) indicates the server failed to fulfill a valid request. The problem may be spurious, and will resolve itself if you try your request again. If the problem persists, consider notifying the maintainer of the remote server. </code></pre> <p>I'm not using this channel in my create command, so I don't know why Conda fails rather than ignore it. 
How do I allow Conda to continue or purge the channel?</p> <p>I tried removing the channel by running:</p> <pre><code>conda config --remove channels https://www.idiap.ch/software/bob/conda </code></pre> <p>outputs the error:</p> <pre><code>CondaKeyError: 'channels': 'https://www.idiap.ch/software/bob/conda' is not in the 'channels' key of the config file </code></pre> <p>Edit: My <code>conda info</code> is:</p> <pre><code> active environment : None shell level : 0 user config file : /home/myuser/.condarc populated config files : /home/myuser/miniconda3/.condarc /home/myuser/.condarc conda version : 4.7.12 conda-build version : not installed python version : 3.7.3.final.0 virtual packages : base environment : /home/myuser/miniconda3 (writable) channel URLs : https://repo.anaconda.com/pkgs/main/linux-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/r/linux-64 https://repo.anaconda.com/pkgs/r/noarch https://www.idiap.ch/software/bob/conda/linux-64 https://www.idiap.ch/software/bob/conda/noarch package cache : /home/myuser/miniconda3/pkgs /home/myuser/.conda/pkgs envs directories : /home/myuser/miniconda3/envs /home/myuser/.conda/envs platform : linux-64 user-agent : conda/4.7.12 requests/2.22.0 CPython/3.7.3 Linux/4.15.0-64-generic ubuntu/18.04.3 glibc/2.27 UID:GID : 1000:1000 netrc file : None offline mode : False </code></pre> <p>Running either:</p> <pre><code>conda config --remove channels https://www.idiap.ch/software/bob/conda/linux-64 conda config --remove channels https://www.idiap.ch/software/bob/conda/noarch </code></pre> <p>outputs the error:</p> <pre><code>CondaKeyError: 'channels': 'https://www.idiap.ch/software/bob/conda/&lt;name&gt;' is not in the 'channels' key of the config file </code></pre> <p>These ire the contents of my <code>.condarc</code>:</p> <pre><code>$ cat ~/.condarc show_channel_urls: true auto_activate_base: false channels: - defaults </code></pre>
<pre class="lang-sh prettyprint-override"><code>conda config --remove channels https://www.idiap.ch/software/bob/conda </code></pre> <p>works in my test.</p> <p>Another way is to open <code>.condarc</code> files under your <code>$HOME</code> directory, and comment out the corresponding channel line.</p>
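Note that the question's <code>conda info</code> output lists a second populated config file at <code>miniconda3/.condarc</code>, so the stray channel most likely lives there rather than in <code>$HOME/.condarc</code>. What removing the channel line from a config file amounts to can be sketched with a few lines of text filtering — an illustration, not conda's actual implementation:

```python
def drop_channel(condarc_text, url_fragment):
    """Return .condarc contents with the matching channel list entry removed."""
    kept = [
        line for line in condarc_text.splitlines()
        if not (line.strip().startswith("- ") and url_fragment in line)
    ]
    return "\n".join(kept) + "\n"

rc = """channels:
  - defaults
  - https://www.idiap.ch/software/bob/conda
show_channel_urls: true
"""
print(drop_channel(rc, "idiap.ch"))
```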
python|anaconda|conda
4
1,907,832
58,473,908
Method-chaining without permanently mutating the object
<p>I am learning how to write a python class and method chaining at the moment. Basically, I want a python (2.7) class that keeps my data and has (chain-able) methods that allow me to filter the data without mutating my original data. I have done some Googling and it seems like my answer might have something to do with <code>return self</code>, but I am not sure how to implement it such that the methods will not mutate my original data.</p> <p>Let's say I have a data stored in an excel file called <code>file</code> as follows:</p> <pre class="lang-none prettyprint-override"><code>+--------+-----+-------+ | Person | Sex | Score | +--------+-----+-------+ | A | M | 10 | | B | F | 9 | | C | M | 8 | | D | F | 7 | +--------+-----+-------+ </code></pre> <p>I would like to write a class called <code>MyData</code> such that I can do some basic data calling and filtering.</p> <p>This is what I got so far</p> <pre><code>class MyData: def __init__ (self, file): import pandas as pd self.data = pd.read_excel (file) self.Person = self.data['Person'] self.Sex = self.data['Sex'] self.Score = self.data['Score'] def male_only(self): self.data = self.data[self.Sex=="M"] self.Person = self.Person[self.Sex=="M"] self.Score = self.Score[self.Sex=="M"] self.Sex = self.Sex[self.Sex=="M"] return self def female_only(self): self.data = self.data[self.Sex=="F"] self.Person = self.Person[self.Sex=="F"] self.Score = self.Score[self.Sex=="F"] self.Sex = self.Sex[self.Sex=="F"] return self </code></pre> <p>This seems to work, but sadly my original data is permanently mutated with this code. 
For example:</p> <pre class="lang-none prettyprint-override"><code>Data = MyData(file) Data.data &gt;&gt;&gt; Data.data Person Sex Score 0 A M 10 1 B F 9 2 C M 8 3 D F 7 Data.male_only().data &gt;&gt;&gt; Data.male_only().data Person Sex Score 0 A M 10 2 C M 8 Data.data &gt;&gt;&gt; Data.data Person Sex Score 0 A M 10 2 C M 8 </code></pre> <p>I would like a class that returns the same answers for <code>Data.male_only().Person</code> and <code>Data.Person.male_only()</code> or for <code>Data.male_only().data</code> and <code>Data.data.male_only()</code> without permanently mutating <code>Data.data</code> or <code>Data.Person</code>.</p>
<p>You explicitly modify <code>self.data</code> in the first line, when you write <code>self.data = ...</code>. You could return a new instance of <code>MyData</code> instead:</p> <pre><code>def male_only(self):
    # Note: as written, __init__ requires a file argument, so you would
    # need a way to build an empty instance (e.g. an optional argument
    # or a classmethod) before this works as-is.
    newdata = MyData()
    newdata.data = self.data[self.Sex=="M"]
    newdata.Person = self.Person[self.Sex=="M"]
    newdata.Score = self.Score[self.Sex=="M"]
    newdata.Sex = self.Sex[self.Sex=="M"]
    return newdata
</code></pre> <p>Following your comments, here's a suggestion for a filter-based solution: there would be functions that activate some flags/filters, and then you'd write functions to get the attributes:</p> <pre><code># self.filters should be initialized to [] in __init__
def male_only(self):
    self.filters.append('male_only')

def person(self):
    if "male_only" in self.filters:
        return self.Person[self.Sex=="M"]
    else:
        return self.Person
</code></pre> <p>To see if this can go somewhere, you should really complete your test case(s) to help you fix your ideas (it's always good practice to write the test case first, then the classes).</p>
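The same idea — each filter hands back a fresh object and never touches the original's data — can be shown framework-free. `MyData` below is a simplified stand-in for the pandas-backed class in the question, using a list of dicts instead of a DataFrame:

```python
class MyData:
    def __init__(self, rows):
        self.rows = rows  # list of dicts, standing in for the DataFrame

    def _filtered(self, pred):
        # Build and return a NEW object; self.rows is never reassigned.
        return MyData([r for r in self.rows if pred(r)])

    def male_only(self):
        return self._filtered(lambda r: r["Sex"] == "M")

    def female_only(self):
        return self._filtered(lambda r: r["Sex"] == "F")

data = MyData([
    {"Person": "A", "Sex": "M", "Score": 10},
    {"Person": "B", "Sex": "F", "Score": 9},
    {"Person": "C", "Sex": "M", "Score": 8},
    {"Person": "D", "Sex": "F", "Score": 7},
])

males = data.male_only()
print([r["Person"] for r in males.rows])  # ['A', 'C']
print(len(data.rows))                     # 4 -- original unchanged
```

Because every filter returns a new instance, calls chain freely (`data.male_only().male_only()`) without mutating `data`.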
python|pandas|python-2.x|method-chaining
0
1,907,833
58,398,673
Returning indices for the values producing the max difference
<p>I have a recursive function that finds the maximum difference between any two integers, provided that the value at the second index is <em>higher</em> than the value at the first index:</p> <pre><code>def func(arr):
    if len(arr) &lt;= 1:
        return 0

    left = arr[ : (len(arr) // 2)]
    right = arr[(len(arr) // 2) : ]

    leftBest = func(left)
    rightBest = func(right)

    crossBest = max(right) - min(left)

    return max(leftBest, rightBest, crossBest)
</code></pre> <p>If I am given the list <code>[12, 9, 18, 3]</code>, then I would compute the maximum difference between any two elements <code>i</code>, <code>j</code> such that <code>j &gt; i</code> and the element at <code>j</code> minus the element at <code>i</code> is of maximum difference.</p> <p>In this case we have <code>j = 2, i = 1</code>, representing <code>18 - 9</code> for the biggest difference of <code>9</code>.</p> <p>My current algorithm returns <code>9</code>, but I would like it to return the indices <code>(1, 2)</code>. How can I modify this divide and conquer approach to do this?</p>
<p>I don't know if you are nailed to a recursive divide'n'conquer implementation. If you are not, here is an iterative, linear solution:</p> <pre><code>def func(arr):
    min_i = lower_i = 0
    max_dif = upper_i = None
    for i in range(1, len(arr)):
        if max_dif is None or max_dif &lt; arr[i] - arr[min_i]:
            max_dif = arr[i] - arr[min_i]
            lower_i = min_i
            upper_i = i
        if arr[i] &lt; arr[min_i]:
            min_i = i
    return lower_i, upper_i
</code></pre>
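If the divide-and-conquer shape must stay, the usual trick is to return the winning pair from each half and combine. The sketch below threads original-list offsets through the recursion (it reports the best — possibly negative — pair, rather than clamping to 0 like the question's version); it is one way to do the bookkeeping, not the only one:

```python
def best_pair(arr, lo=0, hi=None):
    """Return (difference, i, j) with i < j maximizing arr[j] - arr[i]
    over the slice arr[lo:hi], via divide and conquer."""
    if hi is None:
        hi = len(arr)
    if hi - lo <= 1:
        return (float("-inf"), lo, lo)  # no valid pair in a 1-element slice
    mid = (lo + hi) // 2
    left = best_pair(arr, lo, mid)
    right = best_pair(arr, mid, hi)
    # Best pair straddling the split: min of the left half, max of the right.
    i = min(range(lo, mid), key=arr.__getitem__)
    j = max(range(mid, hi), key=arr.__getitem__)
    cross = (arr[j] - arr[i], i, j)
    return max(left, right, cross, key=lambda t: t[0])

print(best_pair([12, 9, 18, 3]))  # (9, 1, 2)
```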
python|algorithm|list|divide-and-conquer
2
1,907,834
45,647,813
Create 2D array from point x,y using numpy
<p>With a position x,y of 30,40 as the top-left corner of a 30 by 30 pixel (900 pixels total) grid, I want to create an array of these x,y points using numpy. I've tried with a list and a for loop, but it seems slow; I hope numpy will be faster.</p>
<h1>source: <a href="https://stackoverflow.com/questions/24436063/numpy-matrix-of-coordinates">Numpy matrix of coordinates</a></h1> <p>Something like this?</p> <pre><code>import numpy as np

rows = np.arange(30,60)
cols = np.arange(40,70)

coords = np.empty((len(rows), len(cols), 2), dtype=np.intp)
coords[..., 0] = rows[:, None]
coords[..., 1] = cols
</code></pre>
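For comparison, here is the same 30×30 grid of (row, col) points built as plain nested lists — slower than a numpy version for large grids, but dependency-free and easy to sanity-check:

```python
rows, cols = range(30, 60), range(40, 70)

# One (row, col) tuple per pixel of the 30x30 grid, top-left at (30, 40).
coords = [[(r, c) for c in cols] for r in rows]

print(coords[0][0])    # (30, 40) -- top-left corner
print(coords[-1][-1])  # (59, 69) -- bottom-right corner
```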
numpy
1
1,907,835
45,387,069
How to convert .rda to .pmml and use it in Python
<p>I've got myself a neural network model in .rda format that is already trained, and I'm not sure how to convert it to .pmml so that I can use it as a predictive engine in Python. Once this is done, which libraries should I install to allow the pmml file to be used in Python? Are there any special interactions I should be aware of?</p> <p>-UPDATE- I installed r2pmml into my RStudio, and I was wondering if its possible to load a model from .rda format and instantly export it without having to train it. Can this be done?</p> <p>-UPDATE 2- Managed to convert .Rda to .pmml successfully. I have a list of 0/1 vectors to use with the pmml file, (53,850 1's and 0's); how do I run the list through the predictive model in Python? One of the suggestions was to use Augustus.</p>
<p>The .rda file contains the model object in an R-specific serialization data format. You should be able to de-serialize it using the <a href="https://stat.ethz.ch/R-manual/R-devel/library/base/html/readRDS.html" rel="nofollow noreferrer"><code>readRDS(rds_path)</code></a> method, and then invoke the <code>r2pmml(model, pmml_path)</code> method.</p> <p>Training a model, and serializing it into a RDS file:</p> <pre><code>library("randomForest")

rf = randomForest(Species ~ ., data = iris)

saveRDS(rf, "rf.rds")
</code></pre> <p>De-serializing the model from the RDS file, and exporting it into a PMML file:</p> <pre><code>library("r2pmml")

rf_clone = readRDS("rf.rds")

r2pmml(rf_clone, "rf.pmml")
</code></pre>
python|r|pmml
0
1,907,836
45,526,328
Fetch many image files from the web and save them asynchronously
<p>I have a list of Web URLs to image files. I wish to fetch all the image files and write them each to the appropriate directory. The images are all PNGs. In a test program I am able to successfully fetch a single image synchronously:</p> <pre><code>import urllib.request import shutil # This example will download a single traffic image. # Spoof a well-known browser so servers won't think I am a bot. class AppURLopener(urllib.request.FancyURLopener): version = "Mozilla/5.0" def getTrafficImage(fromUrl, toPath): baseUrl = "https://mass511.com/map/Cctv/" url = f"{baseUrl}{fromUrl}" opener = AppURLopener() # Request image file from remote server and save to disk locally. with opener.open(url) as response, open(toPath, 'wb') as out_file: shutil.copyfileobj(response, out_file) # Camera label on MASS511: # I-93-SB-Somerville-Exit 26 Storrow Dr url = "406443--1" # Path to store the file file_name = "C:/Users/pchernoch/projects/HackWeek/traffic-feeds/I-93-SB-Somerville-Exit.png" getTrafficImage(url, file_name) </code></pre> <p>How can I repeat this for many URLs and have each fetch performed asynchronously?</p> <p>If any image cannot be fetched or has an error (like a timeout), I wish to log that error to the console but not stop processing the other files.</p> <p>I am using <strong>Python 3.6.2</strong>. My preference is to use the new async/await approach and the <strong>aiohttp</strong> and <strong>asyncio</strong> libraries. However, any popular async library (.e.g. curio) will do. I have only been programming in Python for one week, so much is confusing. 
This answer looks useful, but I do not know how to make use of it: <a href="https://stackoverflow.com/questions/35926917/asyncio-web-scraping-101-fetching-multiple-urls-with-aiohttp">asyncio web scraping 101: fetching multiple urls with aiohttp</a></p> <p><strong>Goal: The task to be accomplished is capturing traffic camera images from many Boston cameras every few seconds for a set period of time.</strong> </p> <p>The following is the program I am trying to write, with TODO: marks at the places I am stumped. The program runs on a timer. Every few seconds it will capture another set of images from each of the traffic cameras. The timer loop is not asynchronous, but I want the image capture of many URLs to be async.</p> <pre><code>import sys, os, datetime, threading, time import urllib.request import shutil # ================== # Configuration # ================== # Identify the name of the camera with its URL on Mass511 web site CAMERA_URLS = { "I-93-SB-Somerville-Exit 26 Storrow Dr": "406443--1", "Road STWB-WB-TNL-Storrow WB": "1407--1", "I-93-NB-Dorchester-between x14 &amp; x15 Savin": "406557" } # All cameras have URLs that begin with this prefix BASE_URL = "https://mass511.com/map/Cctv/" # Store photos in subdirectories under this directory PHOTO_STORAGE_DIR = "C:/Users/pchernoch/projects/HackWeek/traffic-feeds" # Take this many pictures from each camera SNAP_COUNT = 5 # Capture new set of pictures after this many seconds POLLING_INTERVAL_SECONDS = 2 # ================== # Classes # ================== def logMessage(msg): sys.stdout.write(msg + '\n') sys.stdout.flush() # Change the presumed name of the browser to fool robot detectors class AppURLopener(urllib.request.FancyURLopener): version = "Mozilla/5.0" # Can Read file from one camera and save to a file class Camera(object): def __init__(self, sourceUrl, targetDirectory, name, extension): self.SourceUrl = sourceUrl self.TargetDirectory = targetDirectory self.Name = name self.Extension = extension def 
TargetFile(self, time): timeStamp = time.strftime("%Y-%m-%d-%H-%M-%S") return f"{self.TargetDirectory}/{timeStamp}.{self.Extension}" def Get(self): fileName = self.TargetFile(datetime.datetime.now()) logMessage(f" - For camera {self.Name}, get {self.SourceUrl} and save as {fileName}") # TODO: GET IMAGE FILE FROM WEB AND SAVE IN FILE HERE # Can poll multiple cameras once class CameraPoller(object): def __init__(self, urlMap, baseUrl, rootDir): self.CamerasToRead = [] for cameraName, urlSuffix in urlMap.items(): url = f"{baseUrl}{urlSuffix}" targetDir = f"{rootDir}/{cameraName}" if not os.path.exists(targetDir): os.makedirs(targetDir) camera = Camera(url, targetDir, cameraName, "png") self.CamerasToRead.append(camera) def Snap(self): # TODO: MAKE THIS LOOP ASYNC for camera in self.CamerasToRead: camera.Get() # Repeatedly poll all cameras, then sleep def get_images(poller, pollingInterval, snapCount): next_call = time.time() for i in range(0, snapCount): now = datetime.datetime.now() timeString = now.strftime("%Y-%m-%d-%H-%M-%S") logMessage(f"\nPoll at {timeString}") poller.Snap() next_call = next_call + pollingInterval time.sleep(next_call - time.time()) # ================== # Application # ================== if __name__ == "__main__": cameraPoller = CameraPoller(CAMERA_URLS, BASE_URL, PHOTO_STORAGE_DIR) # Poll cameras i na separate thread. It is a daemon, so when the main thread exits, it will stop. timerThread = threading.Thread(target=get_images, args=([cameraPoller, POLLING_INTERVAL_SECONDS, SNAP_COUNT])) timerThread.daemon = False timerThread.start() timerThread.join() endTime = datetime.datetime.now() endTimeString = endTime.strftime("%Y-%m-%d-%H-%M-%S") logMessage(f"Exiting Poller at {endTimeString}") </code></pre>
<p>Here is the same code, with the URL grabbing done using ThreadPoolExecutor. It required the fewest changes to my code. Thanks to @larsks for pointing me in the right direction.</p> <pre><code>import sys, os, datetime, threading, time import urllib.request from concurrent.futures import ThreadPoolExecutor import shutil # ================== # Configuration # ================== # Identify the name of the camera with its URL on Mass511 web site CAMERA_URLS = { "I-93-SB-Somerville-Exit 26 Storrow Dr": "406443--1", "Road STWB-WB-TNL-Storrow WB": "1407--1", "I-93-NB-Dorchester-between x14 &amp; x15 Savin": "406557" } # All cameras have URLs that begin with this prefix BASE_URL = "https://mass511.com/map/Cctv/" # Store photos in subdirectories under this directory PHOTO_STORAGE_DIR = "C:/Users/pchernoch/projects/HackWeek/traffic-feeds" # Take this many pictures from each camera SNAP_COUNT = 5 # Capture new set of pictures after this many seconds POLLING_INTERVAL_SECONDS = 2 # ================== # Classes # ================== def logMessage(msg): sys.stdout.write(msg + '\n') sys.stdout.flush() # Change the presumed name of the browser to fool robot detectors class AppURLopener(urllib.request.FancyURLopener): version = "Mozilla/5.0" # Can Read file from one camera and save to a file class Camera(object): def __init__(self, sourceUrl, targetDirectory, name, extension): self.SourceUrl = sourceUrl self.TargetDirectory = targetDirectory self.Name = name self.Extension = extension def TargetFile(self, time): timeStamp = time.strftime("%Y-%m-%d-%H-%M-%S") return f"{self.TargetDirectory}/{timeStamp}.{self.Extension}" def Get(self): fileName = self.TargetFile(datetime.datetime.now()) message = f" - For camera {self.Name}, get {self.SourceUrl} and save as {fileName}" # Request image file from remote server and save to disk locally. 
opener = AppURLopener() with opener.open(self.SourceUrl) as response, open(fileName, 'wb') as out_file: shutil.copyfileobj(response, out_file) logMessage(message) return message def snap_picture(camera): return camera.Get() # Can poll multiple cameras once class CameraPoller(object): def __init__(self, urlMap, baseUrl, rootDir): self.CamerasToRead = [] for cameraName, urlSuffix in urlMap.items(): url = f"{baseUrl}{urlSuffix}" targetDir = f"{rootDir}/{cameraName}" if not os.path.exists(targetDir): os.makedirs(targetDir) camera = Camera(url, targetDir, cameraName, "png") self.CamerasToRead.append(camera) def Snap(self): with ThreadPoolExecutor(max_workers=10) as executor: results = executor.map(snap_picture, self.CamerasToRead) # Repeatedly poll all cameras, then sleep def get_images(poller, pollingInterval, snapCount): next_call = time.time() for i in range(0, snapCount): now = datetime.datetime.now() timeString = now.strftime("%Y-%m-%d-%H-%M-%S") logMessage(f"\nPoll at {timeString}") poller.Snap() next_call = next_call + pollingInterval time.sleep(next_call - time.time()) # ================== # Application # ================== if __name__ == "__main__": cameraPoller = CameraPoller(CAMERA_URLS, BASE_URL, PHOTO_STORAGE_DIR) # Poll cameras i na separate thread. It is a daemon, so when the main thread exits, it will stop. timerThread = threading.Thread(target=get_images, args=([cameraPoller, POLLING_INTERVAL_SECONDS, SNAP_COUNT])) timerThread.daemon = False timerThread.start() timerThread.join() endTime = datetime.datetime.now() endTimeString = endTime.strftime("%Y-%m-%d-%H-%M-%S") logMessage(f"Exiting Poller at {endTimeString}") </code></pre>
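The fan-out/collect pattern at the heart of the self-answer can be isolated in a few lines. The fake fetcher below stands in for the real `camera.Get()` HTTP call so the shape is checkable without a network, and a failure on one URL is logged without stopping the others — matching the original requirement:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(urls, fetch, max_workers=10):
    """Run fetch(url) concurrently; log failures but keep going."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = {executor.submit(fetch, u): u for u in urls}
        for fut, url in futures.items():
            try:
                results[url] = fut.result()
            except Exception as exc:  # one bad camera shouldn't stop the rest
                print(f"error fetching {url}: {exc}")
    return results

def fake_fetch(url):  # stand-in for the real image download
    if "bad" in url:
        raise IOError("503")
    return f"image-bytes-from-{url}"

got = fetch_all(["cam1", "bad-cam", "cam2"], fake_fetch)
print(sorted(got))  # ['cam1', 'cam2']
```

Swapping `fake_fetch` for a function that opens the camera URL and writes the bytes to disk recovers the full program.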
python|python-asyncio
1
1,907,837
14,586,764
Is matplotlib savefig threadsafe?
<p>I have an in-house distributed computing library that we use all the time for parallel computing jobs. After the processes are partitioned, they run their data loading and computation steps and then finish with a "save" step. Usually this involved writing data to database tables.</p> <p>But for a specific task, I need the output of each process to be a .png file with some data plots. There are 95 processes in total, so 95 .pngs.</p> <p>Inside of my "save" step (executed on each process), I have some very simple code that makes a boxplot with matplotlib's <code>boxplot</code> function and some code that uses <code>savefig</code> to write it to a .png file that has a unique name based on the specific data used in that process.</p> <p>However, I occasionally see output where it appears that two or more sets of data were written into the same output file, despite the unique names.</p> <p>Does matplotlib use temporary file saves when making boxplots or saving figures? If so, does it always use the same temp file names (thus leading to over-write conflicts)? I have run my process using <code>strace</code> and cannot see anything that obviously looks like temp file writing from matplotlib.</p> <p>How can I ensure that this will be threadsafe? I definitely want to conduct the file saving in parallel, as I am looking to expand the number of output .pngs considerably, so the option of first storing all the data and then just serially executing the plot/save portion is very undesirable.</p> <p>It's impossible for me to reproduce the full parallel infrastructure we are using, but below is the function that gets called to create the plot handle, and then the function that gets called to save the plot. You should assume for the sake of the question that the thread safety has nothing to do with our distributed library. 
We know it's not coming from our code, which has been used for years for our multiprocessing jobs without threading issues like this (especially not for something we don't directly control, like any temp files from matplotlib).</p> <pre><code>import pandas
import numpy as np
import matplotlib.pyplot as plt

def plot_category_data(betas, category_name):
    """
    Function to organize beta data by date into vectors and pass to box plot
    code for producing a single chart of multi-period box plots.
    """
    beta_vector_list = []
    yms = np.sort(betas.yearmonth.unique())
    for ym in yms:
        beta_vector_list.append(betas[betas.yearmonth==ym].Beta.values.flatten().tolist())
    ###
    plot_output = plt.boxplot(beta_vector_list)
    axs = plt.gcf().gca()
    axs.set_xticklabels(betas.FactorDate.unique(), rotation=40, horizontalalignment='right')
    axs.set_xlabel("Date")
    axs.set_ylabel("Beta")
    axs.set_title("%s Beta to BMI Global"%(category_name))
    axs.set_ylim((-1.0, 3.0))
    return plot_output
### End plot_category_data

def save(self):
    """
    Make calls to store the plot to the desired output file.
    """
    out_file = self.output_path + "%s.png"%(self.category_name)
    fig = plt.gcf()
    fig.set_figheight(6.5)
    fig.set_figwidth(10)
    fig.savefig(out_file, bbox_inches='tight', dpi=150)
    print "Finished and stored output file %s"%(out_file)
    return None
### End save
</code></pre>
<p>In your two functions, you're calling <code>plt.gcf()</code>. I would try generating a new figure every time you plot with <code>plt.figure()</code> and referencing that one explicitly so you skirt the whole issue entirely.</p>
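A sketch of what the explicit-figure version of the save step could look like: the object-oriented API keeps each worker on its own `Figure` object instead of the process-wide pyplot "current figure" that `plt.gcf()` returns. The `Agg` backend line assumes a headless worker; the data and filename are placeholders:

```python
import matplotlib
matplotlib.use("Agg")  # headless raster backend, safe for worker processes
import matplotlib.pyplot as plt

def save_boxplot(vectors, out_file):
    fig, ax = plt.subplots(figsize=(10, 6.5))  # fresh figure, not plt.gcf()
    ax.boxplot(vectors)
    ax.set_xlabel("Date")
    ax.set_ylabel("Beta")
    fig.savefig(out_file, bbox_inches="tight", dpi=150)
    plt.close(fig)  # release this figure explicitly

save_boxplot([[1, 2, 3], [2, 4, 6]], "example.png")
```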
python|multithreading|matplotlib|plot
1
1,907,838
68,652,078
Process finished with exit code 0 for Telegram Bot
<p>I'm new to PyCharm, but as a beginner I know that &quot;Process finished with exit code 0&quot; means my code doesn't have any errors. However, the code only runs for about 2 seconds. Could you guys help me out, please? Much appreciated!</p> <p>D:\TelegramBots\venv\Scripts\python.exe D:/TelegramBots/main.py</p> <p>Bot started...</p> <p>Process finished with exit code 0</p> <p><a href="https://i.stack.imgur.com/xq1HT.png" rel="nofollow noreferrer">Image sample</a></p> <pre><code>import Constants as Keys
from telegram.ext import *
import Responses as R

print(&quot;Bot started...&quot;)

def start_command(update, context):
    update.message.reply_text('Type something to get started!')

def help_command(update, context):
    update.message.reply_text('If you need help you should ask Google!')

def handle_command(update, context):
    text = str(update.message.text).lower()
    response = R.sample_responses(text)
    update.message.reply_text(response)

def error(update, context):
    print(f&quot;Update {update} caused error {context.error}&quot;)

def main():
    updater = Updater(Keys.API_KEY, use_context=True)
    dp = Updater.dispatcher

    dp.add_handler(CommandHandler(&quot;start&quot;, start_command))
    dp.add_handler(CommandHandler(&quot;help&quot;, help_command))
    dp.add_handler(CommandHandler(Filters.text, start_command))

    dp.add_error_handler(error)

    updater.start_polling(10)
    updater.idle()

    main()
</code></pre>
<p>I removed the space in front of main() in the last line, now it's working</p>
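Why a stray indent produces a clean exit code 0: the indented `main()` call becomes part of `main`'s own body, so nothing at module level ever calls it — the script just defines its functions, prints the startup line, and ends. (Separately, `dp = Updater.dispatcher` in the question refers to the class; it should presumably be `updater.dispatcher`, the instance.) A minimal demonstration of the indentation effect:

```python
flag = {"ran": False}

bad = """
def main():
    flag["ran"] = True
    main()   # indented: part of main's body, never reached at import
"""

good = """
def main():
    flag["ran"] = True
main()       # top level: runs when the module is executed
"""

ns = {"flag": flag}
exec(bad, ns)
print(flag["ran"])  # False -- nothing called main(), script exits with code 0

exec(good, ns)
print(flag["ran"])  # True
```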
python|telegram-bot|python-telegram-bot
0
1,907,839
68,774,840
pexpect behaves differently under cron
<p>I have a script that behaves differently under cron than it does from the terminal. In the terminal, it does what's expected and everything is sequential, as it is written in the script. Under cron, I have to add sleep calls to prevent pexpect from sending SIGINT to the child and killing it prematurely (yes, I spent an hour digging through strace output to figure this out).</p> <pre><code>def getLftpData(patterns):
    try:
        print(f&quot;debug: getLftpData() called&quot;)
        child = pexpect.spawn(&quot;lftp&quot;, timeout=900)
        child.expect(&quot;lftp :~&gt;&quot;)
        child.sendline(&quot;open ftp://ftp.somecompany.com&quot;)
        child.expect(&quot;ftp.somecompany.com:.&gt;&quot;)
        child.sendline(&quot;login &lt;username&gt; &lt;password&gt;&quot;)
        child.expect(&quot;ftp.somecompany.com:.&gt;&quot;)
        for pat in patterns:
            child.sendline(f&quot;mirror --include-glob={pat}&quot;)
            child.expect(&quot;ftp.somecompany.com:.&gt;&quot;)
        # sleep(60)  # This shouldn't be necessary
        print(f&quot;debug: {str(child)}&quot;)
        child.sendline(&quot;QUIT&quot;)
        # sleep(30)  # This shouldn't be necessary
        child.close()
    except:
        print(&quot;Error retrieving data:\n&quot;)
        print(str(child))
</code></pre> <p>Clearly, I still have some debug statements left in there and the code might not be perfect. What I need to understand is why this works from the terminal but fails under cron. strace shows that the quit command is given to the client before the mirror command has finished processing. Then pexpect sends SIGHUP followed by SIGINT a very short time later. The sleep() calls got around the issue; I just need to know why.</p> <p>Edit: clarified the example by commenting out the workaround.</p>
<p>Well, there hasn't been any additional information posted, so I decided to approach the problem in a different way. I instead modified the program to write out a script for lftp to execute and then ran it under the subprocess module:</p> <pre><code>lftp -f lftpCmds.txt </code></pre> <p>Then, I remove/unlink the lftpCmds.txt file after it's done. While this works, it is, in my opinion, not a lot better than the sleep() solution I was using before. This is only better because it doesn't impose a time delay that really doesn't need to be there so the execution is a little quicker.</p>
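The write-a-script-and-hand-it-to-the-client approach from the answer, reduced to its moving parts. To keep the round trip checkable anywhere, a small `python -c` command stands in for `lftp -f`; in real use the subprocess call would be `["lftp", "-f", path]`:

```python
import os
import subprocess
import sys
import tempfile

commands = "open ftp://ftp.somecompany.com\nmirror --include-glob=*.csv\nquit\n"

# Write the command script, run a client on it, then clean up --
# the same lifecycle as `lftp -f lftpCmds.txt` in the answer.
fd, path = tempfile.mkstemp(suffix=".txt")
try:
    with os.fdopen(fd, "w") as f:
        f.write(commands)
    # Stand-in "client" that just echoes the script back.
    proc = subprocess.run(
        [sys.executable, "-c", "import sys; print(open(sys.argv[1]).read())", path],
        capture_output=True, text=True, check=True,
    )
finally:
    os.unlink(path)

print("mirror" in proc.stdout)  # True
```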
python|cron|pexpect
0
1,907,840
41,610,543
Corpora/stopwords not found when import nltk library
<p>I'm trying to import the nltk package in Python 2.7:</p> <pre><code>import nltk
stopwords = nltk.corpus.stopwords.words('english')
print(stopwords[:10])
</code></pre> <p>Running this gives me the following error:</p> <pre><code>LookupError:
**********************************************************************
Resource 'corpora/stopwords' not found.  Please use the NLTK
Downloader to obtain the resource:  &gt;&gt;&gt; nltk.download()
</code></pre> <p>So therefore I opened my Python terminal and did the following:</p> <pre><code>import nltk
nltk.download()
</code></pre> <p>Which gives me:</p> <pre><code>showing info https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml
</code></pre> <p>However, this does not seem to stop, and running the original code again still gives me the same error. Any thoughts on where this goes wrong?</p>
<p>You are currently trying to download every item in nltk data, so this can take long. You can try downloading only the stopwords that you need:</p> <pre class="lang-py prettyprint-override"><code>import nltk
nltk.download('stopwords')
</code></pre> <p>Or from command line (thanks to <a href="https://stackoverflow.com/a/49048876/1916588">Rafael Valero's answer</a>):</p> <pre class="lang-sh prettyprint-override"><code>python -m nltk.downloader stopwords
</code></pre> <hr> <h2>Reference:</h2> <ul> <li><a href="https://www.nltk.org/data.html#command-line-installation" rel="noreferrer">Installing NLTK Data - Command line installation</a></li> </ul>
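If the script runs repeatedly, a small guard avoids re-downloading on every start. The pattern below is generic — shown with a fake checker/fetcher so it runs anywhere — but with nltk the checker would be `nltk.data.find("corpora/stopwords")` (which raises `LookupError` when the corpus is missing) and the fetcher `nltk.download("stopwords")`:

```python
def ensure_resource(check, fetch):
    """Call fetch() only when check() says the resource is missing.
    Returns True if a download was triggered."""
    try:
        check()           # e.g. nltk.data.find("corpora/stopwords")
        return False
    except LookupError:
        fetch()           # e.g. nltk.download("stopwords")
        return True

store = set()             # stand-in for the on-disk nltk_data directory

def check():
    if "stopwords" not in store:
        raise LookupError("missing")

fetched_first = ensure_resource(check, lambda: store.add("stopwords"))
fetched_again = ensure_resource(check, lambda: store.add("stopwords"))
print(fetched_first, fetched_again)  # True False
```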
python|nltk
148
1,907,841
41,271,146
How to remove the innermost level of nesting in a list of lists of varying lengths
<p>I'm trying to remove the innermost nesting in a list of lists of single element length lists. Do you know a relatively easy way (converting to NumPy arrays is fine) to get from:</p> <pre><code>[[[1], [2], [3], [4], [5]], [[6], [7], [8]], [[11], [12]]] </code></pre> <p>to this?:</p> <pre><code>[[1, 2, 3, 4, 5], [6, 7, 8], [11, 12]] </code></pre> <p>Also, the real lists I'm trying to do this for contain datetime objects rather than ints in the example. And the initial collection of lists will be of varying lengths.</p> <p>Alternatively, it would be fine if there were nans in the original list so that the length of each list is identical as long as the nans aren't present in the output list. i.e.</p> <pre><code>[[[1], [2], [3], [4], [5]], [[6], [7], [8], [nan], [nan]], [[11], [12], [nan], [nan], [nan]]] </code></pre> <p>to this:</p> <pre><code>[[1, 2, 3, 4, 5], [6, 7, 8], [11, 12]] </code></pre>
<p>If the nesting is always consistent, then this is trivial:</p> <pre><code>In [2]: import itertools In [3]: nested = [ [ [1],[2],[3],[4], [5] ], [ [6],[7],[8] ] , [ [11],[12] ] ] In [4]: unested = [list(itertools.chain(*sub)) for sub in nested] In [5]: unested Out[5]: [[1, 2, 3, 4, 5], [6, 7, 8], [11, 12]] </code></pre> <p>Note, the solutions that leverage <code>add</code> with lists are going to give you O(n^2) performance where n is the number of sub-sublists that are being merged within each sublist. </p>
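For the NaN-padded variant in the question, the padding can be dropped inside the same comprehension. A sketch using <code>math.isnan</code> (NaN cannot be tested with <code>==</code>, since <code>nan != nan</code>):

```python
import itertools
import math

nan = float('nan')
nested = [[[1], [2], [3], [4], [5]],
          [[6], [7], [8], [nan], [nan]],
          [[11], [12], [nan], [nan], [nan]]]

def is_nan(x):
    # Only floats can be NaN; ints (or datetime objects) pass straight through.
    return isinstance(x, float) and math.isnan(x)

unnested = [[v for v in itertools.chain(*sub) if not is_nan(v)]
            for sub in nested]
```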
python|numpy|nested-lists|flatten
6
1,907,842
57,215,631
Cannot connect to cleardb from flask app running on heroku
<p>I'm learning flask etc. and created a simple app locally. Everything worked fine with a local mysql database in xammp. I exported this and imported into clearDB on heroku and deployed my app. The app runs, but whenever I try to access a page relying on the database in the logs I see this error.</p> <p>pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on 'localhost' ([Errno 111] Connection refused)")</p> <p>I ran the app locally, but using the clearDB database, and it was able to retrieve the data no problem. Everything is configured in the file to access based on the URL details from heroku.</p> <p>Code is below. I'm sure there are a ton of other issues, but as I say I'm learning and got all this from a tutorial.</p> <pre class="lang-python prettyprint-override"><code>#instaniation app = Flask(__name__) app.secret_key= 'secret123' #config app.config['MYSQL_HOST'] = 'us-cdbr-iron-east-02.cleardb.net' app.config['MYSQL_USER'] = 'redacted' app.config['MYSQL_PASSWORD'] = 'redacted' app.config['MYSQL_DB'] = 'heroku_0c6326c59d8b6e9' app.config['MYSQL_CURSORCLASS'] = 'DictCursor' #a cursor is a connection to let us run methods for queries. We set is as a dictionary #init MySQL mysql = MySQL(app,cursorclass=DictCursor) #url endpoint @app.route('/') #function for starting app def index(): #can directly return HTML return render_template('index.html') @app.route('/about') def about(): return render_template('about.html') #many articles @app.route('/articles') def articles(): #create Cursor cur = mysql.get_db().cursor() #to execute commands cur.execute("USE myflaskapp") #get articles result = cur.execute("SELECT * FROM articles") #should retrieve all as a dictionary articles = cur.fetchall() if result &gt; 0: return render_template('articles.HTML', articles=articles) else: msg= 'No Articles Found' return render_template('articles.HTML', msg=msg) cur.close() </code></pre>
<p>As you can see from the error: <code>pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on 'localhost' ([Errno 111] Connection refused)")</code> MySQL tried to connect to the <code>localhost</code> server, which means it isn't loading your config variables.</p> <p>If you are using the flask extension from <a href="https://flask-mysql.readthedocs.io/en/latest/#" rel="nofollow noreferrer">https://flask-mysql.readthedocs.io/en/latest/#</a> to load MySQL (<code>pip install flask-mysql</code>), then you probably forgot to init your application:</p> <pre><code>mysql.init_app(app)
</code></pre> <p>The head of your code should now look like this.</p> <p><strong>And watch the config variable names</strong>; the extension uses a slightly different convention:</p> <pre><code>from flask import Flask
from flaskext.mysql import MySQL

# instantiation
app = Flask(__name__)
app.secret_key= 'secret'

# config
app.config['MYSQL_DATABASE_USER'] = 'user'
app.config['MYSQL_DATABASE_PASSWORD'] = 'password'
app.config['MYSQL_DATABASE_DB'] = 'db'
app.config['MYSQL_DATABASE_HOST'] = 'host'

# init MySQL
mysql = MySQL()
mysql.init_app(app)
</code></pre>
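On Heroku, the ClearDB add-on exposes the whole connection string in the <code>CLEARDB_DATABASE_URL</code> environment variable, so the four settings can be parsed out of it instead of hardcoded. A sketch (the fallback URL below is a made-up example so the snippet runs outside Heroku):

```python
import os
try:
    from urllib.parse import urlparse   # Python 3
except ImportError:
    from urlparse import urlparse       # Python 2

# Parse mysql://user:password@host/dbname?reconnect=true into its parts.
url = urlparse(os.environ.get(
    'CLEARDB_DATABASE_URL',
    'mysql://user:secret@us-cdbr-iron-east-02.cleardb.net/heroku_0c6326c59d8b6e9?reconnect=true'))

settings = {
    'MYSQL_DATABASE_USER': url.username,
    'MYSQL_DATABASE_PASSWORD': url.password,
    'MYSQL_DATABASE_HOST': url.hostname,
    'MYSQL_DATABASE_DB': url.path.lstrip('/'),
}
```

These values would then feed the <code>app.config[...]</code> assignments shown above, keeping credentials out of the source.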
python|mysql|heroku|flask|cleardb
0
1,907,843
57,287,808
splitting the line into multiple variables
<p>below is the text that i want to split and store it in the variables.</p> <pre><code>Pppp CCCC TTTT MMMMM SSSSSS Oono. 1 NIL fL-E 10UU (SPD+), 1000XXXXX (SPD) WEEEEEEEEEEEEE CATTTTTTTTT 44 10/100/1000BBBBB Ppppppp OOo E SSSSSS WS-XXXXX-RRRRR+I CATTTTTTTTT 44 10/100/1000BBBBB Ppppppp OOo E SSSSSS WS-XXXXX-RRRRR+I CATTTTTTTTT 44 10/100/1000BBBBB Ppppppp OOo E SSSSSS WS-XXXXX-RRRRR+I CATTTTTTTTT 44 10/100/1000BBBBB Ppppppp OOo E SSSSSS WS-XXXXX-RRRRR+I CATTTTTTTTT 44 10/100/1000BBBBB Ppppppp OOo E SSSSSS WS-XXXXX-RRRRR+I CATTTTTTTTT </code></pre> <p>i want to split it so that variable 1 :-</p> <pre><code> Pppp 1 44 44 44 44 44 </code></pre> <p>Variable 2 :- </p> <pre><code>CCCC TTTT NIL fL-E 10UU (SPD+), 1000XXXXX (SPD) 10/100/1000BBBBB Ppppppp OOo E SSSSSS 10/100/1000BBBBB Ppppppp OOo E SSSSSS 10/100/1000BBBBB Ppppppp OOo E SSSSSS 10/100/1000BBBBB Ppppppp OOo E SSSSSS 10/100/1000BBBBB Ppppppp OOo E SSSSSS </code></pre> <p>Variable 3:-</p> <pre><code>MMMMM WEEEEEEEEEEEEE WS-XXXXX-RRRRR+I WS-XXXXX-RRRRR+I WS-XXXXX-RRRRR+I WS-XXXXX-RRRRR+I WS-XXXXX-RRRRR+I </code></pre> <p>Variable 4:- </p> <pre><code>SSSSSS Oono. CATTTTTTTTT CATTTTTTTTT CATTTTTTTTT CATTTTTTTTT CATTTTTTTTT CATTTTTTTTT </code></pre> <p>Each variable should store specified value</p> <p>code i have tried:-</p> <pre><code>with open ('sh_module.txt', 'r') as module_info: lines = module_info.read().splitlines()[6:] for l in lines: if not l.isspace(): storeSplit = (" ".join(l.split()[1:10])) A_of_splitOfstoreSplit , B_of_splitOfstoreSplit = storeSplit.split('W') print (storeSplit) </code></pre> <p>code doesn't works. :-(</p> <p>Note:- the text so written is as it is in the text file. do consider the spaces.!</p> <p>thx for the help.! :-)</p>
<p>Edit: found <a href="https://stackoverflow.com/questions/4914008/how-to-efficiently-parse-fixed-width-files">How to efficiently parse fixed width files?</a> after answering. This answer is specific to your question, the dupe shows other ways to deal with fixed length file parsing using structs etc.</p> <hr> <p>You seem to have a fixed-width format - you can simply split each line into a list and then transpose it into colums using <a href="https://docs.python.org/3/library/functions.html#zip" rel="nofollow noreferrer">zip</a>.</p> <p><strong>Create file:</strong></p> <pre><code># 3456789012345678901234567890123456789012345678901234567890123456789 t = """ Pppp CCCC TTTT MMMMM SSSSSS Oono. 1 NIL fL-E 10UU (SPD+), 1000XXXXX (SPD) WEEEEEEEEEEEEE CATTTTTTTTT 44 10/100/1000BBBBB Ppppppp OOo E SSSSSS WS-XXXXX-RRRRR+I CATTTTTTTTT 44 10/100/1000BBBBB Ppppppp OOo E SSSSSS WS-XXXXX-RRRRR+I CATTTTTTTTT 44 10/100/1000BBBBB Ppppppp OOo E SSSSSS WS-XXXXX-RRRRR+I CATTTTTTTTT 44 10/100/1000BBBBB Ppppppp OOo E SSSSSS WS-XXXXX-RRRRR+I CATTTTTTTTT 44 10/100/1000BBBBB Ppppppp OOo E SSSSSS WS-XXXXX-RRRRR+I CATTTTTTTTT """ with open ('sh_module.txt', 'w') as module_info: module_info.write("header\nheader\nheader\nheader\nheader\nheader\n") module_info.write(t) </code></pre> <p><strong>Process file:</strong></p> <pre><code>with open ('sh_module.txt', 'r') as module_info: lines = [n.strip() for n in module_info.read().splitlines()[6:]] data = [] # split file-lines into data - special case for line starting with Pppp as its 4 long for line in lines: # ignore empty lines if line.strip(): if line.startswith("Pppp"): # slightly different fixed width data.append( [line[:4].strip(), line[4:41].strip(), line[41:58].strip(),line[58:].strip()] ) continue linedata = [] linedata.extend( (line[:3].strip(), line[3:41].strip(), line[41:58].strip(),line[58:].strip()) ) data.append(linedata) # create a dict with variables from the splitted line-list variabs = {a[0]:[i for i in a[1:]] for a in 
zip(*data)}

print(variabs) </code></pre> <p>Output:</p> <pre><code>{'Pppp': ['1', '44', '44', '44', '44', '44'],
 'CCCC TTTT': ['NIL fL-E 10UU (SPD+), 1000XXXXX (SPD)', '10/100/1000BBBBB Ppppppp OOo E SSSSSS', '10/100/1000BBBBB Ppppppp OOo E SSSSSS', '10/100/1000BBBBB Ppppppp OOo E SSSSSS', '10/100/1000BBBBB Ppppppp OOo E SSSSSS', '10/100/1000BBBBB Ppppppp OOo E SSSSSS'],
 'MMMMM': ['WEEEEEEEEEEEEE', 'WS-XXXXX-RRRRR+I', 'WS-XXXXX-RRRRR+I', 'WS-XXXXX-RRRRR+I', 'WS-XXXXX-RRRRR+I', 'WS-XXXXX-RRRRR+I'],
 'SSSSSS Oono.': ['CATTTTTTTTT', 'CATTTTTTTTT', 'CATTTTTTTTT', 'CATTTTTTTTT', 'CATTTTTTTTT', 'CATTTTTTTTT']}
</code></pre> <p>You can access the columns by <code>variabs["Pppp"]</code>, <code>variabs["SSSSSS Oono."]</code>, etc.</p> <hr> <p>There are other ways to handle this, see <a href="https://stackoverflow.com/questions/4914008/how-to-efficiently-parse-fixed-width-files">How to efficiently parse fixed width files?</a> for more</p> <hr> <p>Edit: using enumerate:</p> <pre><code># split file-lines into data - special case for line on idx 0
for idx, line in enumerate(x.strip() for x in lines if x.strip()):
    if idx == 0:
        # slightly different fixed width
        data.append( [line[:4].strip(), line[4:41].strip(),
                      line[41:58].strip(), line[58:].strip()] )
        continue
    linedata = []
    linedata.extend( (line[:3].strip(), line[3:41].strip(),
                      line[41:58].strip(), line[58:].strip()) )
    data.append(linedata)
</code></pre>
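The repeated hard-coded slice indices can also be packaged as a small reusable slicer, so the column boundaries live in one place. A sketch with made-up boundaries (not the question's exact layout):

```python
def make_slicer(boundaries):
    # boundaries: list of (start, end) character positions, one per column.
    def parse(line):
        return [line[a:b].strip() for a, b in boundaries]
    return parse

# Hypothetical 3-column fixed-width layout: cols 0-3, 4-11, and 12 onward.
parse_row = make_slicer([(0, 4), (4, 12), (12, 20)])
row = parse_row('12  alpha   beta')
```

For the question's data, one slicer with <code>[(0, 4), (4, 41), (41, 58), (58, None)]</code> and another with a first boundary of <code>(0, 3)</code> would replace the two branches.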
python|python-3.x
1
1,907,844
57,091,946
Conditionally delete rows by ID in Pandas
<p>I have a pandas df:</p> <pre><code>ID Score C D
1 2 x y
1 nan x y
1 2 x y
2 3 x y
2 2 x y
3 2 x y
3 4 x y
3 3 x y
</code></pre> <p>For each ID, I'd like to remove rows where df.Score = 2, but only when there is a 3 or 4 present for that ID. I'd like to keep <code>nans</code> and <code>2</code> when the only score per ID = 2.</p> <p>So I get:</p> <pre><code>ID Score C D
1 2 x y
1 nan x y
1 2 x y
2 3 x y
3 4 x y
3 3 x y
</code></pre> <p>Any help, much appreciated.</p>
<p>Use:</p> <pre><code>df[~df.groupby('ID')['Score'].apply(lambda x:x.eq(2)&amp;x.isin([3,4]).any())] </code></pre> <hr> <pre><code> ID Score C D 0 1 2.0 x y 1 1 NaN x y 2 1 2.0 x y 3 2 3.0 x y 6 3 4.0 x y 7 3 3.0 x y </code></pre>
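The same condition can be written with <code>groupby(...).transform</code>, which broadcasts the per-ID test back onto the rows and keeps the index aligned. A sketch rebuilding the example frame (assuming pandas is installed):

```python
import pandas as pd

df = pd.DataFrame({'ID': [1, 1, 1, 2, 2, 3, 3, 3],
                   'Score': [2, None, 2, 3, 2, 2, 4, 3]})

# For every row, flag whether its ID group contains a 3 or 4 anywhere,
# then drop the rows scoring 2 only inside those groups. NaN rows survive
# because NaN == 2 is False.
has_high = df.groupby('ID')['Score'].transform(lambda s: s.isin([3, 4]).any())
result = df[~(df['Score'].eq(2) & has_high)]
```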
python|pandas
2
1,907,845
25,745,825
Python/Beautiful Soup find particular heading output full div
<p>I'm attempting to parse a very extensive HTML document that looks something like:</p> <pre class="lang-html prettyprint-override"><code>&lt;div class="reportsubsection n" &gt;&lt;br&gt;
  &lt;h2&gt; part 1 &lt;/h2&gt;&lt;br&gt;
  &lt;p&gt; insert text here &lt;/p&gt;&lt;br&gt;
  &lt;table&gt; crazy table thing here &lt;/table&gt;&lt;br&gt;
&lt;/div&gt;
&lt;div class="reportsubsection n"&gt;&lt;br&gt;
  &lt;h2&gt; part 2 &lt;/h2&gt;&lt;br&gt;
  &lt;p&gt; insert text here &lt;/p&gt;&lt;br&gt;
  &lt;table&gt; crazy table thing here &lt;/table&gt;&lt;br&gt;
&lt;/div&gt;
</code></pre> <p>I need to parse out the second div based on the h2 having text "Part 2". I was able to break out all divs with:</p> <pre><code>divTag = soup.find("div", {"id": "reportsubsection"})
</code></pre> <p>but didn't know how to narrow it down from there. In other posts I found how to locate the specific text "part 2", but I need to be able to output the whole DIV section it is contained in.</p> <p>EDIT/UPDATE</p> <p>Ok sorry, but I'm still a little lost. Here is what I've got now. I feel like this should be so much simpler than I'm making it. Thanks again for all the help<br></p> <pre><code>divTag = soup.find("div", {"id": "reportsubsection"})&lt;br&gt;
for reportsubsection in soup.select('div#reportsubsection #reportsubsection'):&lt;br&gt;
    if not reportsubsection.findAll('h2', text=re.compile('Finding')):&lt;br&gt;
        continue&lt;br&gt;
print divTag
</code></pre>
<p>You can always go back <em>up</em> after finding the right <code>h2</code>, or you can test all subsections:</p> <pre><code>for subsection in soup.select('div#reportsubsection #subsection'): if not subsection.find('h2', text=re.compile('part 2')): continue # do something with this subsection </code></pre> <p>This uses a <a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/#css-selectors" rel="nofollow">CSS selector</a> to locate all <code>subsection</code>s.</p> <p>Or, going back up with the <a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/#parent" rel="nofollow"><code>.parent</code> attribute</a>:</p> <pre><code>for header in soup.find_all('h2', text=re.compile('part 2')): section = header.parent </code></pre> <p>The trick is to narrow down your search as early as possible; the second option has to find all <code>h2</code> elements in the whole document, while the former narrows the search down quicker.</p>
python|html-parsing|beautifulsoup
2
1,907,846
25,740,018
How to switch pid between python script and subprocess?
<p>I am launching an application using subprocess and then I analyze the output using a Python script. This Python script is executed by a third-party software which I have no control over. The third-party software measures CPU usage and it measures the CPU usage from the Python script, not the application I am launching via subprocess, which is what I actually need.</p> <p>It turns out that if I would have launched the application using e.g. bash's <code>exec</code> instead of <code>subprocess</code>, the PID of the Python script would be the same as of the application, and the application's CPU usage will be measured (which is what I want). However, then I won't be able to analyze the output.</p> <p>How can I be able to analyze the application's output AND make the third party software measure the CPU usage of the application (and not the Python script)? I am thinking I have to somehow switch the PID between the Python script and the subprocess. Can this be done? Any other ideas to solve this problem?</p> <p>This is on both CentOS 6 and Windows 7 with Python 2.7.</p> <p>This is what I have so far:</p> <pre><code>import sys, os, subprocess # Command command = [Render, '/path/to/file'] # Launch subprocess p = subprocess.Popen( command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT ) # PIDs script_pid = os.getpid() app_pid = p.pid # Analyze the app's output for line in iter(p.stdout.readline, b''): sys.stdout.flush() print("&gt;&gt;&gt; " + line.rstrip()) </code></pre>
<p>Simply redirect <code>stdout</code> to something when <code>exec</code>ing the other process:</p> <pre><code>import io import sys sys.stdout = io.StringIO() # or BytesIO exec the_command output = sys.stdout.getvalue() sys.stdout = sys.__stdout__ </code></pre> <hr> <p>By the way, your usage of <code>iter</code> is useless. Doing:</p> <pre><code>for line in iter(file_object.readline, ''): </code></pre> <p>is <em>exactly</em> the same as:</p> <pre><code>for line in file_object: </code></pre> <p>except that:</p> <ul> <li>It is less readable</li> <li>It is longer</li> <li>It adds overhead because you are performing a comparison for every line.</li> </ul> <p>You should use <code>iter(callable, sentinel)</code> when you want a different kind of iteration. For example reading the file in chunks of <code>n</code> bytes:</p> <pre><code>for chunk in iter(lambda: file_object.read(n), ''): </code></pre> <p>I believe the documentation uses the crappy <code>iter(f.readline, '')</code> example because it predates the iterable support for files, so, <em>at the time</em>, that was the best way to read a file line by line.</p>
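A self-contained sketch of the read-while-running pattern, using <code>sys.executable</code> as the child so it runs anywhere; iterating over the pipe yields lines as the child produces them:

```python
import subprocess
import sys

# Child process that prints three lines, flushing after each one.
child_code = (
    "import sys\n"
    "for i in range(3):\n"
    "    print('line %d' % i)\n"
    "    sys.stdout.flush()\n"
)

p = subprocess.Popen([sys.executable, '-c', child_code],
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                     universal_newlines=True)

seen = []
for line in p.stdout:       # blocks until the next line is available
    seen.append(line.rstrip())
p.wait()
```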
python|python-2.7
0
1,907,847
25,695,216
If I run the following script, will my ssh session not be terminated?
<p>So I wrote this script called py_script.py that I ran over an ssh session on a school machine:</p> <pre><code>import time import os while True: os.system("echo still_alive") time.sleep(60) </code></pre> <p>... by doing:</p> <p><code>bash $ python py_script.py &amp;</code>.</p> <p>Is this going to prevent the dreaded <code>broken pipe</code> message from happening?</p> <p>The problem is, after a period of inactivity when I am over an ssh connection, my connection will be dropped. To prevent this, I wrote the above script that automatically writes a message to the console to count for an "action" so that I don't have to press enter every 5 minutes. (I'm idle on a machine and need to run a process for a good amount of time.)</p>
<p>If your connection is timing out then it is more advisable to look at SSH configuration options which can keep your connection alive.</p> <p>As a starter example, put the following in a file called <code>~/.ssh/config</code>:</p> <pre><code>Host * ServerAliveInterval 20 TCPKeepAlive=yes </code></pre> <p>You can read more <a href="http://en.wikibooks.org/wiki/OpenSSH/Client_Configuration_Files" rel="nofollow">here</a>.</p>
python|bash|ssh
4
1,907,848
61,761,085
Python locale not working on alpine linux
<p>The code is simple:</p> <pre><code>import locale locale.setlocale(locale.LC_ALL, 'de_DE.UTF-8') # I tried de_DE and de_DE.utf8 too locale.currency(0) Traceback (most recent call last): File "&lt;console&gt;", line 1, in &lt;module&gt; File "/usr/local/lib/python3.7/locale.py", line 267, in currency raise ValueError("Currency formatting is not possible using " ValueError: Currency formatting is not possible using the 'C' locale. </code></pre> <p>It works when I run it on ubuntu. On alpine, however, this error pops up. I tried the workaround from <a href="https://github.com/gliderlabs/docker-alpine/issues/144#issuecomment-436455850" rel="noreferrer">this</a> comment without success. I also added <code>/usr/glibc-compat/bin</code> to <code>PATH</code> on top of that script, didn't help.</p> <p>Is there any way to make locales work on alpine?</p> <p>Try it out yourself:</p> <pre><code>docker run --rm alpine sh -c "apk add python3; python3 -c 'import locale; locale.setlocale(locale.LC_ALL, \"de_DE.UTF-8\"); locale.currency(0)'" </code></pre> <p>Update: <a href="https://github.com/Auswaschbar/alpine-localized-docker" rel="noreferrer">this</a> repo also doesn't work.</p> <p>Update: I tried <a href="https://grrr.tech/posts/2020/add-locales-to-alpine-linux-docker-image/" rel="noreferrer">this</a> guide too, but it seems like it's not compatible with python? Even though the locale does show up, I still get this:</p> <pre><code>/app # locale -a C C.UTF-8 sv_SE.UTF-8 en_GB.UTF-8 ch_DE.UTF-8 pt_BR.UTF-8 ru_RU.UTF-8 it_IT.UTF-8 de_CH.UTF-8 en_US.UTF-8 fr_FR.UTF-8 nb_NO.UTF-8 de_DE.UTF-8 &lt;-- nl_NL.UTF-8 es_ES.UTF-8 /app # python Python 3.7.7 (default, Apr 24 2020, 22:09:29) [GCC 9.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. 
&gt;&gt;&gt; import locale &gt;&gt;&gt; locale.setlocale(locale.LC_ALL, 'de_DE.UTF-8') 'de_DE.UTF-8' &gt;&gt;&gt; locale.currency(0) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/usr/local/lib/python3.7/locale.py", line 267, in currency raise ValueError("Currency formatting is not possible using " ValueError: Currency formatting is not possible using the 'C' locale. </code></pre>
<p>This minimum Dockerfile</p> <pre><code>FROM alpine:3.12 ENV MUSL_LOCPATH=&quot;/usr/share/i18n/locales/musl&quot; RUN apk --no-cache add \ musl-locales \ musl-locales-lang \ python3 </code></pre> <p>by using the above mentioned musl-locales packages seem to be partly working only at the moment for Alpine Linux with Python:</p> <ol> <li>LC_TIME: Succeeds</li> </ol> <pre><code>import locale from time import gmtime, strftime locale.setlocale(locale.LC_ALL, &quot;en_US.UTF-8&quot;) 'en_US.UTF-8' strftime(&quot;%a, %d %b %Y %H:%M:%S +0000&quot;, gmtime()) 'Wed, 26 Aug 2020 16:50:17 +0000' locale.setlocale(locale.LC_ALL, &quot;de_DE.UTF-8&quot;) 'de_DE.UTF-8' strftime(&quot;%a, %d %b %Y %H:%M:%S +0000&quot;, gmtime()) 'Mi, 26 Aug 2020 16:50:31 +0000' </code></pre> <ol start="2"> <li>LC_MONETARY: Fails</li> </ol> <pre><code>import locale locale.getlocale() ('en_US', 'UTF-8') locale.currency(0) Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/usr/lib/python3.8/locale.py&quot;, line 267, in currency raise ValueError(&quot;Currency formatting is not possible using &quot; ValueError: Currency formatting is not possible using the 'C' locale. locale.setlocale(locale.LC_ALL, &quot;de_DE.UTF-8&quot;) 'de_DE.UTF-8' locale.currency(0) Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/usr/lib/python3.8/locale.py&quot;, line 267, in currency raise ValueError(&quot;Currency formatting is not possible using &quot; ValueError: Currency formatting is not possible using the 'C' locale. 
</code></pre> <ol start="3"> <li>LC_NUMERIC: Fails</li> </ol> <pre><code>locale.setlocale(locale.LC_ALL, &quot;en_US.UTF-8&quot;) 'en_US.UTF-8' locale.format_string(&quot;%.1f&quot;, 1.4) '1.4' locale.setlocale(locale.LC_ALL, &quot;de_DE.UTF-8&quot;) 'de_DE.UTF-8' locale.format_string(&quot;%.1f&quot;, 1.4) '1.4' </code></pre> <ol start="4"> <li>This also looks bad:</li> </ol> <pre><code>locale.setlocale(locale.LC_ALL, &quot;en_US.UTF-8&quot;) 'en_US.UTF-8' locale.localeconv() {'int_curr_symbol': '', 'currency_symbol': '', 'mon_decimal_point': '', 'mon_thousands_sep': '', 'mon_grouping': [], 'positive_sign': '', 'negative_sign': '', 'int_frac_digits': 127, 'frac_digits': 127, 'p_cs_precedes': 127, 'p_sep_by_space': 127, 'n_cs_precedes': 127, 'n_sep_by_space': 127, 'p_sign_posn': 127, 'n_sign_posn': 127, 'decimal_point': '.', 'thousands_sep': '', 'grouping': []} locale.setlocale(locale.LC_ALL, &quot;de_DE.UTF-8&quot;) 'de_DE.UTF-8' locale.localeconv() {'int_curr_symbol': '', 'currency_symbol': '', 'mon_decimal_point': '', 'mon_thousands_sep': '', 'mon_grouping': [], 'positive_sign': '', 'negative_sign': '', 'int_frac_digits': 127, 'frac_digits': 127, 'p_cs_precedes': 127, 'p_sep_by_space': 127, 'n_cs_precedes': 127, 'n_sep_by_space': 127, 'p_sign_posn': 127, 'n_sign_posn': 127, 'decimal_point': '.', 'thousands_sep': '', 'grouping': []} </code></pre> <p>The failing localisations might be due to incomplete PO files in <a href="https://gitlab.com/rilian-la-te/musl-locales" rel="nofollow noreferrer">https://gitlab.com/rilian-la-te/musl-locales</a> or due to specific Python expectations which have not been met.</p> <p>Next, someone could check with another programming language like PHP if functions using locales work as expected.</p>
python|docker|locale|python-3.7|alpine-linux
2
1,907,849
23,918,527
Can't get OneDrive authorization code from GET request
<p>I have problems with retrieving the auth code from the URL from OneDrive. Here is the code I use:</p> <pre><code>url = 'https://login.live.com/oauth20_authorize.srf'
payload = {'client_id': client_id, 'scope': scope, 'response_type': response_type, 'redirect_uri': redirect_uri}
r = requests.get(url, params=payload)
</code></pre> <p>How can I get the code which comes with the redirection to my page? <code>r.text</code>, for instance, gives me the HTML code of the Outlook page. Here is the result: <a href="https://dl.dropboxusercontent.com/u/18661124/python_onedrive.txt" rel="nofollow">https://dl.dropboxusercontent.com/u/18661124/python_onedrive.txt</a></p>
<p>Posting my comment from before as the answer, for others to reference:</p> <p>If you're doing OAuth from a desktop application, the redirect URL should be <a href="https://login.live.com/oauth20_desktop.srf" rel="nofollow">https://login.live.com/oauth20_desktop.srf</a> and you need to enable "Mobile or Desktop client app" in your App Configuration at account.live.com/developers/applications. If you have it configured this way, you'll be able to access the authentication key from your app.</p>
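With the desktop redirect, the authorization code comes back as a query parameter on the <code>oauth20_desktop.srf</code> URL, so it can be pulled out with the standard library. A sketch (the sample code value below is made up):

```python
try:
    from urllib.parse import urlparse, parse_qs   # Python 3
except ImportError:
    from urlparse import urlparse, parse_qs       # Python 2

def extract_auth_code(redirected_url):
    # Returns the value of the ?code=... parameter, or None if absent.
    query = parse_qs(urlparse(redirected_url).query)
    codes = query.get('code')
    return codes[0] if codes else None

url = 'https://login.live.com/oauth20_desktop.srf?code=M1a2b3c4&lc=1033'
```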
python|get|onedrive
2
1,907,850
36,087,005
python pandas overflow error dataFrame
<p>I am rather new to Python and I am using the pandas library to work with data frames. I created a function "elevate_power" that reads in a data frame with one column of floating point values (example: x), and a degree (example: lambda), and outputs a dataframe where each column contains a power of the original column (example: output is x, x^2, x^3).</p> <p>The problem is that when I have a degree that is above 30, I get an overflow error. Is there a way around this problem?</p> <p>I am not particularly worried about the precision, so I would not mind losing some precision.</p> <p>However, (and this is important), I need the output to be type float because I then call some numpy subroutines that give me errors if I change the type.</p> <p>I have tried several tricks: for example I tried using decimal inside the function, but then I cannot get the format back to floats, which is a problem because then I get errors when I call dot product and linear algebra solvers from numpy.</p> <p>Any suggestion will be greatly appreciated.</p> <p>This is the test code (which I ran with a low degree value so it won't crash):</p> <pre><code>def elevate_power(column, degree):
    df = pd.DataFrame(column)
    dfbase=df
    if degree &gt; 0:
        for power in range(2, degree+1):
            # first we'll give the column a name:
            name = 'power_' + str(power)
            df[name]= 0
            df[name] = dfbase.apply(lambda x: x**power , axis=1)
    return(df)

import pandas as pd
import numpy as np

test= pd.Series([1., 2., 3.])
test2=pd.DataFrame(test)
degree=5
print elevate_power(test2, degree )
np.dot(test2['power_2'],test2['power_3'])
</code></pre> <p>The printout is:</p> <pre><code>   0  power_2  power_3  power_4  power_5
0  1        1        1        1        1
1  2        4        8       16       32
2  3        9       27       81      243

276.0
</code></pre>
<p>How about</p> <pre><code>import pandas as pd import numpy as np series = [1., 2., 3.] degree = 5 a = pd.DataFrame({"power_" + str(power): np.power(series, power) for power in range(1, degree+1)}) print(a) print(a.dtypes) </code></pre> <p>results in floats for me</p>
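If the overflow comes from float exponentiation (as the apply over a float column suggests), one workaround is that Python integers are arbitrary precision: build the table with exact int math and convert to float only at the end. A sketch, not using pandas:

```python
def power_table(values, degree):
    # Build {column_name: [v ** p for v in values]} with exact int math;
    # Python ints never overflow, only the final float() conversion can.
    table = {}
    for p in range(1, degree + 1):
        table['power_%d' % p] = [v ** p for v in values]
    return table

t = power_table([1, 2, 3], 40)
as_float = [float(v) for v in t['power_40']]   # safe while values < ~1.8e308
```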
python|pandas|overflow|stack-overflow
1
1,907,851
35,856,818
How to edit console output
<p>I have the following very simple code:</p> <pre><code>stdout.write("Hello World")
stdout.write("\rBye world")
stdout.write("\rActually hello back")
</code></pre> <p>which prints, as expected, 'Actually hello back'. However, if I were to add a newline</p> <pre><code>stdout.write("\n")
</code></pre> <p>how can I go back up a line and then to the beginning of that line, so I can actually just output "Hi" instead of</p> <pre><code>Actually hello back
Hi
</code></pre> <p>I tried</p> <pre><code>stdout.write("\r")
stdout.write("\b")
</code></pre> <p>However, none of them seem to do the trick. The end goal is to display a big chunk of text and then update the output in real time without having to write it again. How can I achieve this in Python?</p>
<p>Well, if you want to gain full control over the terminal, I would suggest using the <a href="https://docs.python.org/2/library/curses.html" rel="nofollow">curses library</a>.</p> <blockquote> <p>The curses module provides an interface to the curses library, the de-facto standard for portable advanced terminal handling.</p> </blockquote> <p>Using it, you can edit multiple lines in the terminal like this:</p> <pre><code>import curses
import time

stdscr = curses.initscr()

stdscr.addstr("line 1\n")
stdscr.addstr("line 2\n")
stdscr.refresh()
time.sleep(3)

stdscr.erase()
stdscr.addstr("edited line 1\n")
stdscr.addstr("edited line 2\n")
stdscr.refresh()
time.sleep(3)

curses.endwin()
</code></pre> <p>The capabilities of this library are much greater, though. There is a full tutorial <a href="https://docs.python.org/2/howto/curses.html" rel="nofollow">here</a>.</p>
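If pulling in curses feels heavy, a lighter option on ANSI-capable terminals is to move the cursor back up with escape sequences and rewrite the block in place. A sketch, writing to an explicit stream so the output can be captured (this assumes the terminal honors these sequences):

```python
import io

CURSOR_UP = '\x1b[1A'    # move the cursor up one line
ERASE_LINE = '\x1b[2K'   # erase the whole current line

def rewrite_block(stream, old_line_count, new_lines):
    # Walk back up over the previously printed block, erasing each line,
    # then print the replacement block.
    for _ in range(old_line_count):
        stream.write(CURSOR_UP + ERASE_LINE)
    stream.write('\r')
    stream.write('\n'.join(new_lines) + '\n')

buf = io.StringIO()          # stand-in for sys.stdout
buf.write('line 1\nline 2\n')
rewrite_block(buf, 2, ['edited line 1', 'edited line 2'])
out = buf.getvalue()
```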
python|console
0
1,907,852
35,961,018
Accessing attributes in superclass
<p>I am building up a tree and a dictionary mapping nodes in the tree to unique ids. When trying to access objects in the dictionary I am experiencing some unexpected behaviour. I am able to access some inherited attributes of the object, but not other. I've extracted part of the project and modified it such that it is hopefully understandable:</p> <pre><code>#!/usr/bin/env python IMPORTS = vars() class Node(object): SYMTAB = {} def __init__(self, kwargs={}): self.ati = kwargs.get(u'@i') self._add_symbol(self.ati, self) self.atr = kwargs.get(u'@r') def _add_symbol(self, k, v): self.SYMTAB[k] = v class CompilationUnit(Node): def __init__(self, kwargs={}): super(CompilationUnit, self).__init__(kwargs) self.types = map(lambda x: IMPORTS[x['@t']](x), kwargs.get('types').get('@e', [])) class BodyDeclaration(Node): def __init__(self, kwargs={}): super(BodyDeclaration, self).__init__(kwargs) class TypeDeclaration(BodyDeclaration): def __init__(self, kwargs={}): super(TypeDeclaration, self).__init__(kwargs) self.members = map(lambda x: IMPORTS[x[u'@t']](x), kwargs.get(u'members').get(u'@e', [])) class ClassOrInterfaceDeclaration(TypeDeclaration): def __init__(self, kwargs={}): super(ClassOrInterfaceDeclaration, self).__init__(kwargs) class FieldDeclaration(BodyDeclaration): def __init__(self, kwargs={}): super(FieldDeclaration, self).__init__(kwargs) print '*'*10, 'SYMTAB:' for k,v in self.SYMTAB.items(): print k,v print '*'*10 print 'SYMTAB[self.atr]:',self.SYMTAB[self.atr] print self.SYMTAB[self.atr].atr print self.SYMTAB[self.atr].members d = {u'@i': 0, u'@r': None, u'@t': u'CompilationUnit', 'types': {u'@e': [{u'@t': u'ClassOrInterfaceDeclaration', u'@i': 1, u'@r': 0, u'members': {u'@e': [{u'@t': 'FieldDeclaration', u'@i': 2, u'@r': 1}]}}]}} c = CompilationUnit(d) print c </code></pre> <p>This will produce the following output:</p> <pre><code>********** SYMTAB: 0 &lt;__main__.CompilationUnit object at 0x105466f10&gt; 1 &lt;__main__.ClassOrInterfaceDeclaration object at 
0x10547c050&gt; 2 &lt;__main__.FieldDeclaration object at 0x10547c150&gt; ********** SYMTAB[self.atr]: &lt;__main__.ClassOrInterfaceDeclaration object at 0x10547c050&gt; 0 Traceback (most recent call last): File "class_error.py", line 74, in &lt;module&gt; c = CompilationUnit(d) File "class_error.py", line 30, in __init__ kwargs.get('types').get('@e', [])) File "class_error.py", line 29, in &lt;lambda&gt; self._types = map(lambda x: IMPORTS[x['@t']](x), File "class_error.py", line 54, in __init__ super(ClassOrInterfaceDeclaration, self).__init__(kwargs) File "class_error.py", line 45, in __init__ kwargs.get(u'members').get(u'@e', [])) File "class_error.py", line 44, in &lt;lambda&gt; self._members = map(lambda x: IMPORTS[x[u'@t']](x), File "class_error.py", line 65, in __init__ print self.SYMTAB[self.atr].members AttributeError: 'ClassOrInterfaceDeclaration' object has no attribute 'members' </code></pre> <p>I'm not even sure where to begin trying to fix this. Recently I added the <code>print self.SYMTAB[self.atr].atr</code> line and saw that this actually worked. The only thing I can think of is that <code>FieldDeclaration</code> doesn't inherit from <code>TypeDeclaration</code>, which is where the <code>members</code> attribute is actually defined. But why should this matter? I am accessing the <code>ClassOrInterfaceDeclaration</code> node, which does inherit from <code>TypeDeclaration</code>? This object should have a <code>members</code> attribute. What am I missing about how to access this attribute?</p>
<h2>Problem</h2> <p>There seem to be multiple other problems in your program, but here:</p> <pre><code>class TypeDeclaration(BodyDeclaration):
    def __init__(self, kwargs={}):
        super(TypeDeclaration, self).__init__(kwargs)
        self.members = map(lambda x: IMPORTS[x[u'@t']](x),
                           kwargs.get(u'members').get(u'@e', []))
</code></pre> <p>the part:</p> <pre><code>kwargs.get(u'members')
</code></pre> <p>tries to access <code>self.members</code>, which you are just about to create.</p> <p>You put <code>u'@t': u'ClassOrInterfaceDeclaration'</code> into <code>d</code>. Then in here:</p> <pre><code>        self.members = map(lambda x: IMPORTS[x[u'@t']](x),
                           kwargs.get(u'members').get(u'@e', []))
</code></pre> <p>you try to access <code>ClassOrInterfaceDeclaration().members</code>, which you are just about to create. Instantiation of <code>ClassOrInterfaceDeclaration</code> calls the <code>__init__()</code> of <code>TypeDeclaration</code>, which contains the code above.</p> <h2>Solution</h2> <p>You need to let all <code>__init__()</code> functions finish before trying to access members. 
For example, you can move all your printing into a own method:</p> <pre><code>class FieldDeclaration(BodyDeclaration): def __init__(self, kwargs={}): super(FieldDeclaration, self).__init__(kwargs) def meth(self): print '*'*10, 'SYMTAB:' for k,v in self.SYMTAB.items(): print k,v print '*'*10 print 'SYMTAB[self.atr]:',self.SYMTAB[self.atr] print self.SYMTAB[self.atr].atr print self.SYMTAB[self.atr].members d = {u'@i': 0, u'@r': None, u'@t': u'CompilationUnit', 'types': {u'@e': [{u'@t': u'ClassOrInterfaceDeclaration', u'@i': 1, u'@r': 0, u'members': {u'@e': [{u'@t': 'FieldDeclaration', u'@i': 2, u'@r': 1}]}}]}} c = CompilationUnit(d) print c c.SYMTAB[2].meth() </code></pre> <p>Output:</p> <pre><code>&lt;__main__.CompilationUnit object at 0x104ef9510&gt; ********** SYMTAB: 0 &lt;__main__.CompilationUnit object at 0x104ef9510&gt; 1 &lt;__main__.ClassOrInterfaceDeclaration object at 0x104ef9610&gt; 2 &lt;__main__.FieldDeclaration object at 0x104ef9710&gt; ********** SYMTAB[self.atr]: &lt;__main__.ClassOrInterfaceDeclaration object at 0x104ef9610&gt; 0 [&lt;__main__.FieldDeclaration object at 0x104ef9710&gt;] </code></pre>
python|python-2.7|inheritance
-1
1,907,853
29,597,666
How do I get python to only accept positive integer as raw_input?
<p>I'm trying to ask the user to submit a positive integer because my question is "How old are you?"</p>
<p>Use <a href="https://docs.python.org/2/library/stdtypes.html#str.isdigit" rel="nofollow noreferrer"><code>str.isdigit()</code></a>.</p> <blockquote> <p>str.isdigit()</p> <p>Return true if all characters in the string are digits and there is at least one character, false otherwise.</p> <p>For 8-bit strings, this method is locale-dependent.</p> </blockquote> <pre><code>&gt;&gt;&gt; '-23'.isdigit()
False
&gt;&gt;&gt; '23'.isdigit()
True
&gt;&gt;&gt; '23.45'.isdigit()
False
</code></pre> <p>So it would be like (with <code>raw_input</code>, since you are on Python 2):</p> <pre><code>&gt;&gt;&gt; while True:
        s = raw_input('How old are you: ')
        if s.isdigit():
            break

How old are you: y
How old are you: -7
How old are you: 8.9
How old are you: 8
&gt;&gt;&gt;
</code></pre>
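Note one edge case that the check above lets through: <code>'0'.isdigit()</code> is true, but zero is not a positive integer. If you need strictly positive input, convert and compare as well. A small sketch (the <code>read</code> parameter is only there so the loop can be driven without a live user; pass <code>raw_input</code> on Python 2):

```python
def ask_positive_int(prompt, read=input):
    """Keep asking until the reply is a strictly positive integer."""
    while True:
        s = read(prompt)
        if s.isdigit() and int(s) > 0:
            return int(s)

# simulate a user typing three bad answers and then a good one
replies = iter(['y', '-7', '0', '23'])
age = ask_positive_int('How old are you: ', read=lambda _: next(replies))
print(age)  # 23
```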
python|raw-input
2
1,907,854
49,451,366
Q Learning Applied To a Two Player Game
<p>I am trying to implement a Q Learning agent to learn an optimal policy for playing against a random agent in a game of Tic Tac Toe.</p> <p>I have created a plan that I believe will work. There is just one part that I cannot get my head around. And this comes from the fact that there are two players within the environment. </p> <p>Now, a Q Learning agent should act upon the current state, <code>s</code>, the action taken given some policy, <code>a</code>, the successive state given the action, <code>s'</code>, and any reward received from that successive state, <code>r</code>. </p> <p>Lets put this into a tuple <code>(s, a, r, s')</code></p> <p>Now usually an agent will act upon every state it finds itself encountered in given an action, and use the Q Learning equation to update the value of the previous state. </p> <p>However, as Tic Tac Toe has two players, we can partition the set of states into two. One set of states can be those where it is the learning agents turn to act. The other set of states can be where it is the opponents turn to act. </p> <p>So, do we need to partition the states into two? Or does the learning agent need to update every single state that is accessed within the game?</p> <p>I feel as though it should probably be the latter, as this might affect updating Q Values for when the opponent wins the game. </p> <p>Any help with this would be great, as there does not seem to be anything online that helps with my predicament. </p>
<p>In general, directly applying Q-learning to a two-player game (or other kind of multi-agent environment) isn't likely to lead to very good results if you assume that the opponent can also learn. However, you specifically mentioned</p> <blockquote> <p>for playing against a random agent</p> </blockquote> <p>and that means it actually can work, because this means the opponent isn't learning / changing its behaviour, so you can reliably <strong>treat the opponent as ''a part of the environment''</strong>.</p> <p>Doing exactly that will also likely be the best approach you can take. Treating the opponent (and his actions) as a part of the environment means that you should basically just completely ignore all of the states in which the opponent is to move. Whenever your agent takes an action, you should also immediately generate an action for the opponent, and only then take the resulting state as the next state. </p> <p>So, in the tuple <code>(s, a, r, s')</code>, we have:</p> <ul> <li><code>s</code> = state in which your agent is to move</li> <li><code>a</code> = action executed by your agent</li> <li><code>r</code> = one-step reward</li> <li><code>s'</code> = <strong>next state in which your agent is to move again</strong></li> </ul> <p>The state in which the opponent is to move, and the action they took, do not appear at all. They should simply be treated as unobservable, nondeterministic parts of the environment. From the point of view of your algorithm, there are no other states in between <code>s</code> and <code>s'</code>, in which there is an opponent that can take actions. 
From the point of view of your algorithm, the environment is simply nondeterministic, which means that taking action <code>a</code> in state <code>s</code> will sometimes randomly lead to <code>s'</code>, but maybe also sometimes randomly to a different state <code>s''</code>.</p> <hr> <p>Note that this will only work precisely because you wrote that the opponent is a random agent (or, more importantly, a <strong>non-learning agent with a fixed policy</strong>). As soon as the opponent also gains the ability to learn, this will break down completely, and you'd have to move on to proper multi-agent versions of Reinforcement Learning algorithms.</p>
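To make that concrete, here is a minimal sketch (not from the question; the board encoding, rewards, and hyperparameters are my own assumptions) of tabular Q-learning where the random opponent's reply is generated inside <code>step()</code>, so the learner only ever observes states in which it is its own turn to move:

```python
import random
from collections import defaultdict

random.seed(0)

# Board: tuple of 9 cells, each ' ', 'X' (learner) or 'O' (random opponent).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def moves(b):
    return [i for i, c in enumerate(b) if c == ' ']

def step(board, action):
    """Learner plays `action`, then the random opponent moves immediately.
    Returns (next_state, reward, done); opponent-turn states never surface."""
    b = list(board)
    b[action] = 'X'
    if winner(b) == 'X':
        return tuple(b), 1.0, True
    if not moves(b):
        return tuple(b), 0.0, True
    b[random.choice(moves(b))] = 'O'   # opponent folded into the environment
    if winner(b) == 'O':
        return tuple(b), -1.0, True
    return tuple(b), 0.0, not moves(b)

Q = defaultdict(float)
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def run_episode():
    s = (' ',) * 9
    done = False
    while not done:
        acts = moves(s)
        if random.random() < epsilon:
            a = random.choice(acts)                      # explore
        else:
            a = max(acts, key=lambda x: Q[(s, x)])       # exploit
        s2, r, done = step(s, a)
        best_next = 0.0 if done else max(Q[(s2, x)] for x in moves(s2))
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

for _ in range(2000):
    run_episode()
```

Because the opponent acts inside <code>step()</code>, every state key stored in <code>Q</code> has an equal number of X's and O's, i.e. it is always the learner's turn.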
python|tic-tac-toe|reinforcement-learning|q-learning
7
1,907,855
21,231,187
How to call method save() in a Django model class using attr string name?
<p>For example in my code:</p> <pre><code>class ClassName(): [...] image_bigger = models.ImageField(upload_to='dir', max_length=500, blank=True, null=True) image_big = models.ImageField(upload_to='dir', max_length=500, blank=True, null=True) image_medium = models.ImageField(upload_to='dir', max_length=500, blank=True, null=True) image_small = models.ImageField(upload_to='dir', max_length=500, blank=True, null=True) def create_resized(self, attr_name, resized_size): [...] if attr_name == "bigger": self.bigger.save(filename, suf, save=False) elif attr_name == "big": self.big.save(filename, suf, save=False) elif attr_name == "medium": self.medium.save(filename, suf, save=False) elif attr_name == "small": self.small.save(filename, suf, save=False) </code></pre> <p>I'm wondering if there is something like getattr to avoid that ugly if elif block of code...</p>
<p>Why not use <code>getattr</code>/<code>hasattr</code> calls? You can write something like this:</p> <pre><code>full_attr_name = 'image_' + attr_name
if hasattr(self, full_attr_name):
    getattr(self, full_attr_name).save(filename, suf, save=False)
</code></pre> <p>Hope you have caught the idea.</p>
python|django
1
1,907,856
70,240,166
Create Individual DataFrames From a List of csv Files
<p>I have a folder of csv files that I'd like to loop over to create individual DataFrames named after the file itself. So if I have <code>file_1.csv</code>, <code>file_2.csv</code>, <code>file_3.csv</code> ... I'd like DataFrames created for each file and have the df named after the file of the data it contains.</p> <p>Here is what I've tried so far:</p> <pre><code># get list of all files all_files = os.listdir(&quot;./Data/&quot;) # get list of only csv files csv_files = list(filter(lambda f: f.endswith('.csv'), all_files)) # remove file extension to get name only file_names = [] for i in csv_files: file = i[:-4] file_names.append(file) # create DataFrames from each file named after the corresonding file dfs = [] def make_files_dfs(): for a,b in zip(file_names, csv_files): if a == b[:-4]: a = pd.read_csv(eval(f&quot;'Data/{b}'&quot;)) dfs.append(a) </code></pre> <p>error code:</p> <pre><code>ParserError: Error tokenizing data. C error: Expected 70 fields in line 7728, saw 74 </code></pre> <hr /> <h3><strong>Update 1</strong></h3> <ul> <li>using <code>chdir</code> instead of <code>listdir</code></li> <li>replacing the <code>lambda</code> with <code>glob</code></li> <li>two new attempts using response suggestions</li> </ul> <p>new attempt 1:</p> <pre><code>path = &quot;./Data/&quot; os.chdir(path) csv_files = glob.glob(&quot;*.csv&quot;) dataFrameDict = {} def make_files_dfs(): for a in csv_files: dataFrameDict[a[:-4] , pd.read_csv(a)] </code></pre> <p>Error Code:</p> <blockquote> <p>TypeError: unhashable type: 'DataFrame'</p> </blockquote> <p>I feel like this needs a line to append the dicts to a list; will mess with it.</p> <p>new attempt 2:</p> <pre><code>path = &quot;./Data/&quot; os.chdir(path) csv_files = glob.glob(&quot;*.csv&quot;) for i in range(len(csv_files)): globals()[f&quot;df_{i}&quot;] = pd.read_csv(csv_files[i]) </code></pre> <p>Error code:</p> <blockquote> <p>ParserError: Error tokenizing data. 
C error: Expected 70 fields in line 7728, saw 74</p> </blockquote> <hr /> <h3><strong>Update 2</strong></h3> <ul> <li>instead of trying to create a list of DataFrames, attempting to create a dictionary of DataFrames. Turns out the error code was from one file having extra columns of data in one record, as @Jon Clements pointed out.</li> </ul> <pre><code>path = &quot;./Data/&quot; os.chdir(path) csv_files = glob.glob(&quot;*.csv&quot;) csv_names = [] for i in csv_files: name = i[:-4] csv_names.append(name) zip_object = zip(csv_names, csv_files) df_collection = {} for name, file in zip_object: df_collection[name] = pd.read_csv(file, low_memory=False) </code></pre>
<p>Your code is a bit difficult to understand. You have some unnecessary functions. First of all, it is easier to change the working directory path (by <code>os.chdir(path)</code>). Secondly, you can get rid of your lambda function and use <code>glob.glob</code>. Lastly, you cannot make a DataFrame named after a variable. Your <code>dfs</code> list will hold some class names that won't give you much insight into the DataFrame. It is much better to use a dictionary. Overall, this is how your code can look:</p> <pre><code>import glob
import os

import pandas as pd

path = &quot;the path to your data&quot;
os.chdir(path)

# get list of only csv files (relative to the new working directory)
csv_files = glob.glob(&quot;*.csv&quot;)

# create a dictionary with key as the DF name and values as DataFrames
dataFrameDictionary = {}

def make_files_dfs():
    for a in csv_files:
        dataFrameDictionary[a[:-4]] = pd.read_csv(a)
</code></pre>
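For a self-contained illustration of the dictionary approach (the throwaway files and <code>Path.stem</code> are my additions, not part of the question):

```python
import glob
import os
import tempfile
from pathlib import Path

import pandas as pd

# build a throwaway data directory with two csv files so the snippet runs anywhere
data_dir = tempfile.mkdtemp()
for name in ('file_1', 'file_2'):
    Path(data_dir, name + '.csv').write_text('a,b\n1,2\n3,4\n')

# Path.stem strips the extension more robustly than slicing with [:-4]
dfs = {Path(f).stem: pd.read_csv(f)
       for f in glob.glob(os.path.join(data_dir, '*.csv'))}

print(sorted(dfs))          # ['file_1', 'file_2']
print(dfs['file_1'].shape)  # (2, 2)
```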
python|pandas|dataframe|csv
1
1,907,857
70,220,684
Recovering human-readable Python 3.10 source from cached .pyc bytecode
<p>After manually clearing a corrupt recycle bin on my removable USB drive, I found a couple of my most recently executed Python files to be corrupt as well; opening them with an editor shows their entire contents filled with empty bytes (all <code>00</code>s). I have no idea how this happened, but in any case, my last backup unfortunately dates to several weeks ago, so I'd like to try to recover the lost source files if at all possible.</p> <p>I found the relevant <code>.pyc</code> (dated to the day before the corruption) file in <code>.\__pycache__\</code> and I am attempting to reconstruct a human-readable, ready-to-execute <code>.py</code> file from the binary, but I haven't had much success so far.</p> <p>Many searches in this vein turn up tools such as <a href="https://github.com/rocky/python-uncompyle6/" rel="nofollow noreferrer">uncompyle6</a> or <a href="https://github.com/rocky/python-decompile3" rel="nofollow noreferrer">decompyle3</a>, but <strong>neither of these support Python 3.10</strong> and their developer has stated that they are not planning on maintaining either.</p> <p>Seemingly the only tool/package that does anything remotely related to bytecode decompilation that also supports Python 3.10 is <a href="https://github.com/greyblue9/unpyc37-3.10" rel="nofollow noreferrer">this fork of unpyc3</a>, however it seems to operate on actual code (or code objects; I'm not entirely sure).</p> <p>Hoping that this tool held the key to my code recovery, this is how far I got on my own:</p> <pre><code>from unpyc3 import decompile import dis, marshal with open(&quot;thermo.cpython-310.pyc&quot;, &quot;rb&quot;) as f: f.seek(16) # By all accounts this should be 8 bytes, but 16 is the only way I have successfully been able to read the bytecode raw = f.read() code = marshal.loads(raw) with open(&quot;disassembly.txt&quot;, &quot;w&quot;, encoding=&quot;utf-8&quot;) as out: dis.dis(code, file=out) </code></pre> <p><code>encoding=&quot;utf-8&quot;</code> 
<em>is</em> needed somewhere in the process because some of my variables are Unicode characters (e.g. α, λ, φ, etc.).</p> <p>This writes what I believe to be a series of CPython <code>Instruction</code> instances to <code>disassembly.txt</code>, a snippet of which I have reproduced below:</p> <pre><code> 2 0 LOAD_CONST 0 (0) 2 LOAD_CONST 1 (None) 4 IMPORT_NAME 0 (Constants) 6 STORE_NAME 0 (Constants) 3 8 LOAD_CONST 0 (0) 10 LOAD_CONST 1 (None) 12 IMPORT_NAME 1 (EOS) 14 STORE_NAME 1 (EOS) 4 16 LOAD_CONST 0 (0) 18 LOAD_CONST 1 (None) 20 IMPORT_NAME 2 (ACM) 22 STORE_NAME 2 (ACM) 7 24 LOAD_CONST 0 (0) 26 LOAD_CONST 2 (('sqrt', 'exp', 'log')) 28 IMPORT_NAME 3 (math) 30 IMPORT_FROM 4 (sqrt) 32 STORE_NAME 4 (sqrt) 34 IMPORT_FROM 5 (exp) 36 STORE_NAME 5 (exp) 38 IMPORT_FROM 6 (log) 40 STORE_NAME 6 (log) 42 POP_TOP </code></pre> <p>The actual source file <code>thermo.py</code> which I'm trying to recover is nearly 3000 lines long, so I'm not going to reproduce the whole output here (nor do I think I can anywhere since it exceeds Pastebin's free limit of 512 kB).</p> <p>This seems like the correct information, but my programming experience completely dries up once we reach this assembly-adjacent code, and I'm honestly at a loss as to the next step. It appears that <code>unpyc3.decompile()</code> accepts a Python <code>module</code>, a Python <code>function</code> or a CPython <code>PyCodeObject</code> as input, but unpyc3's documentation is not very detailed.</p> <p>So my issue now is this:</p> <ul> <li><p>If my above marshal/disassembly approach is correct, I don't know how to process the disassembled <code>Instruction</code>s to feed to <code>unpyc3.decompile()</code>.</p> </li> <li><p>If the above approach is incorrect, I have no idea where to go with this.</p> </li> </ul> <p>If anyone knows how to progress with this problem (or whether or not my goal is actually achievable), I'd appreciate any advice.</p>
<p>Try <code>pycdc</code> (Decompyle++), a cross-platform C++ tool that can translate compiled <code>.pyc</code> bytecode, including recent versions such as Python 3.10+, back into readable Python source. Build it from source and run it against your cached <code>.pyc</code> file.</p>
python|bytecode|cpython|decompiling|python-3.10
0
1,907,858
53,676,600
String formatting of utcnow
<p>Using <code>utcnow()</code> I get the following:</p> <pre><code>&gt;&gt;&gt; str(datetime.datetime.utcnow()) '2018-12-07 20:44:11.158487' </code></pre> <p>How would I format this as the following string:</p> <pre><code>YYYY-MM-DDTHH:MM:SS 2018-12-07T20:44:11 </code></pre>
<p>Use <code>isoformat()</code>:</p> <pre><code>t = datetime.datetime.utcnow()

t.isoformat()
&gt; '2018-12-07T20:47:31.645578'

t.isoformat(timespec='seconds')
&gt; '2018-12-07T20:47:31'
</code></pre>
python
2
1,907,859
45,797,843
Create custom Database Connection in Django
<p>I am creating a project in DJango where I want to use a mixture of <code>MySQL</code> and <code>ArangoDB</code>. I am creating an API based on <code>DJango REST Framework</code>.</p> <h2>The Process for ArangoDB</h2> <ol> <li>User calls an API and request enters in the VIEW</li> <li>View validates the request.data via <code>serializers</code></li> <li>If data is valid, <code>.save()</code> is called which saves the data in <code>ArangoDB</code>.</li> <li>I won't use Models as there is no schema (that's why using NoSQL).</li> </ol> <h2>The Problem</h2> <ol> <li>How can I create a global connection with ArangoDB that I can use everywhere else?</li> <li>Will the connection with ArangoDB be active or I need to re-connect?</li> </ol> <p>What should I do? </p>
<p>Even though there is no schema, you <strong>should</strong> create models to work with. Take a look at <a href="https://github.com/pablotcarreira/django-arangodb/blob/master/sample_app/models.py" rel="nofollow noreferrer">this example</a>.</p> <p>Also you have to set similar DATABASES settings:</p> <pre><code>DATABASES = {
    'default': {
        'ENGINE': 'arangodb_driver',
        'HOST': 'localhost',
        'PORT': '8529',
        'NAME': 'some_user',
        'USER': 'root',
        'PASSWORD': 'some_password',
    }
}
</code></pre> <p>And install the driver via:</p> <pre><code>pip install git+git://github.com/pablotcarreira/django-arangodb
</code></pre>
python|django|arangodb
0
1,907,860
73,534,883
Python filtering pandas
<p>Question: Output the ID of the user and a total number of actions they performed for tasks they completed(action_name column=&quot;CompleteTask&quot;). If a user from this company(ClassPass) did not complete any tasks in the given period of time, you should still output their ID and the number 0 in the second column.</p> <p>dataset:</p> <p><a href="https://i.stack.imgur.com/X5cVc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X5cVc.png" alt="enter image description here" /></a></p> <p>expected result:</p> <p><a href="https://i.stack.imgur.com/6fCPR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6fCPR.png" alt="enter image description here" /></a></p>
<p>Considering your initial dataframe is named <code>df</code>, you can try this:</p> <pre><code>out = (df.groupby(['user_id'], as_index=False)
         .apply(lambda x: x[x['action_name'] == 'CompleteTask']['num_actions'].sum())
         .rename(columns={None: 'total_actions'})
      )
</code></pre> <h4><code>&gt;&gt;&gt; print(out)</code></h4> <p><a href="https://i.stack.imgur.com/tBr8u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tBr8u.png" alt="enter image description here" /></a></p>
python|pandas|filtering|aggregate-functions
0
1,907,861
12,894,841
Python ctypes data-type for passing to F90 real(8) data-type
<p>I'm using ctypes (Pthyon3.2) to call F90 routines from a shared library on a 64-bit Debian Linux machine. If the F90 routine takes and argument of type,</p> <pre><code>! We're in fortran (F90) subroutine MyFunc(FooF90) real(8),intent(in) :: FooF90 ! do some stuff... return </code></pre> <p>what should I pass from Python? My guess is,</p> <pre><code># We're in Python3.2 F90func(ctypes.byref(ctypes.c_double(FooPy))) </code></pre> <p>Is this correct? I couldn't easily find a reference for passing fortran arguments of type real(n) from Python.</p> <p>Thanks in advance.</p>
<p><code>real(8)</code> is not defined by the standard, so anything can happen (you should get the kind number by <code>selected_real_kind</code> or <code>iso_c_binding</code> or <code>iso_fortran_env</code>). In practice it will be equivalent to <code>c_double</code> for most compilers. You are probably using gfortran, for which this holds by default.</p> <p>There might be a problem if the Python side passes the argument by value, because Fortran expects its arguments by reference. Search for the <code>byref()</code> function in <a href="http://docs.python.org/library/ctypes.html" rel="nofollow">ctypes</a>.</p>
python|python-3.x|fortran|ctypes
4
1,907,862
12,926,660
Convert integer index from Fama-French factors to datetime index in pandas
<p>I get the Fama-French factors from Ken French's data library using <code>pandas.io.data</code>, but I can't figure out how to convert the integer year-month date index (e.g., <code>200105</code>) to a <code>datetime</code> index so that I can take advantage of more <code>pandas</code> features.</p> <p>The following code runs, but my index attempt in the last un-commented line drops all data in DataFrame <code>ff</code>. I also tried <code>.reindex()</code>, but this doesn't change the index to <code>range</code>. What is the <code>pandas</code> way? Thanks!</p> <pre><code>import pandas as pd from pandas.io.data import DataReader import datetime as dt ff = pd.DataFrame(DataReader("F-F_Research_Data_Factors", "famafrench")[0]) ff.columns = ['Mkt_rf', 'SMB', 'HML', 'rf'] start = ff.index[0] start = dt.datetime(year=start//100, month=start%100, day=1) end = ff.index[-1] end = dt.datetime(year=end//100, month=end%100, day=1) range = pd.DateRange(start, end, offset=pd.datetools.MonthEnd()) ff = pd.DataFrame(ff, index=range) #ff.reindex(range) </code></pre>
<p><code>reindex</code> realigns the existing index to the given index rather than changing the index. You can just do <code>ff.index = range</code> if you've made sure the lengths and the alignment match.</p> <p>Parsing each original index value is much safer. The easy approach is to do this by converting to a string:</p> <pre><code>In [132]: ints
Out[132]: Int64Index([201201, 201201, 201201, ..., 203905, 203905, 203905])

In [133]: conv = lambda x: datetime.strptime(str(x), '%Y%m')

In [134]: dates = [conv(x) for x in ints]

In [135]: %timeit [conv(x) for x in ints]
1 loops, best of 3: 222 ms per loop
</code></pre> <p>This is kind of slow, so if you have a lot of observations you might want to use an optimized cython function in pandas:</p> <pre><code>In [144]: years = (ints // 100).astype(object)

In [145]: months = (ints % 100).astype(object)

In [146]: days = np.ones(len(years), dtype=object)

In [147]: import pandas.lib as lib

In [148]: %timeit Index(lib.try_parse_year_month_day(years, months, days))
100 loops, best of 3: 5.47 ms per loop
</code></pre> <p>Here <code>ints</code> has 10000 entries.</p>
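On more recent pandas versions there is also a vectorized shortcut, shown here on a tiny stand-in frame with the same YYYYMM integer index (the column values are invented):

```python
import pandas as pd

# tiny stand-in for the Fama-French frame with its YYYYMM integer index
ff = pd.DataFrame({'Mkt_rf': [0.5, -0.2, 1.1]}, index=[200105, 200106, 200107])

# render the integers as strings and let to_datetime parse them in one call
ff.index = pd.to_datetime(ff.index.astype(str), format='%Y%m')
print(ff.index[0])  # 2001-05-01 00:00:00
```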
python|datetime|pandas
4
1,907,863
21,566,437
Iterating through DictReader
<p>I have read a csv file using,</p> <pre><code>with open('test.csv', newline='') as csv_file: #restval = blank columns = - /// restkey = extra columns + d = csv.DictReader(csv_file, fieldnames=None, restkey='+', restval='-', delimiter=',', quotechar='"') </code></pre> <p>I would like to iterate through the created dictionary to find blank values within the csv. I have tried:</p> <pre><code>for k, v in d.items() #Do stuff </code></pre> <p>However I get the error: <strong>AttributeError: 'DictReader' object has no attribute 'items'</strong></p> <p>Is it right in saying that the values stored in <strong>d</strong> is a dictionary of dictionaries?</p> <p>Coming from C# I would have stored the csv in a multidimensional array with a nested for loop to iterate through the values. Unfortunately I'm still new to Python - any help + explanations will be appreciated!</p>
<p><code>DictReader()</code> produces a <em>sequence</em> of dictionaries, not just one dictionary.</p> <pre><code>for row in d:
    for k, v in row.items():
        ...
</code></pre>
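Putting that together with the blank-value check from the question; the in-memory <code>StringIO</code> stands in for <code>test.csv</code> so the sketch is runnable as-is:

```python
import csv
import io

# hypothetical stand-in for open('test.csv', newline='')
csv_file = io.StringIO("name,age,city\nalice,,NY\nbob,30\n")

d = csv.DictReader(csv_file, restkey='+', restval='-')

blanks = []
for row_num, row in enumerate(d):
    for k, v in row.items():
        if v in ('', '-'):   # '' = empty field, '-' = restval for a missing column
            blanks.append((row_num, k))

print(blanks)  # [(0, 'age'), (1, 'city')]
```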
python|loops|csv|dictionary
11
1,907,864
40,833,241
Why the loop to search for a string that is present in a file returns True for first iteration and False for the rest?
<p>I have a file named text.txt which contains the following data:</p> <pre><code>My Name Is Lorem Ipsum </code></pre> <p>My python code:</p> <pre><code>with open("text.txt") as f: for i in xrange(5): print "Is\n" in f </code></pre> <p>Output:</p> <pre><code>True False False False False </code></pre> <p>Why the output is True only when i=0?</p> <p>What to do to get True for all the iterations? I do not want to store the contents of the file anywhere!</p>
<p>You're consuming your file at the first test, so you're at the end of the file for the other iterations.</p> <p>You could read the contents into a string, but since you don't want to store the file, I propose to <code>seek</code> to the beginning of the file instead:</p> <pre><code>with open("text.txt") as f:
    for i in range(5):
        f.seek(0)
        print ("Is\n" in f)
</code></pre>
python|string|file|search
0
1,907,865
30,772,516
How to stream requests to another webserver?
<p>I have a url:</p> <pre><code>myhost.com/largejsondata </code></pre> <p>In python flask-restful, I want to serve that same largejsondata. How do I stream it? I know within the get function for flask-restful I could do:</p> <pre><code>class LargeJSON(Resource): def get(self, todo_id): #openup a URL that contains a large jsonfile #stream output as it appears from previous line to the end-users browser return jsonfile api.add_resource(LargeJSON, '/largefile') </code></pre> <p>but how do I properly get it into a response such that it would "stream" output to the browser as output is processed by the <code>"requests.get"</code>?</p>
<p>With flask you can stream data by passing a generator to <code>Response</code>. Since you want to forward the output of <code>requests.get</code> to the browser as it arrives, fetch the upstream URL with <code>stream=True</code> and yield it chunk by chunk:</p> <pre><code>from flask import Response
import requests

class LargeJSON(Resource):
    def get(self):
        def generate():
            # pull the upstream JSON in chunks and forward each one
            r = requests.get('https://myhost.com/largejsondata', stream=True)
            for chunk in r.iter_content(chunk_size=8192):
                yield chunk
        return Response(generate(), mimetype='application/json')

api.add_resource(LargeJSON, '/largefile')
</code></pre> <p>from <a href="http://flask.pocoo.org/docs/0.10/patterns/streaming/" rel="nofollow">http://flask.pocoo.org/docs/0.10/patterns/streaming/</a></p> <p>here you have some doc for the Response object <a href="http://flask.pocoo.org/docs/0.10/api/#response-objects" rel="nofollow">http://flask.pocoo.org/docs/0.10/api/#response-objects</a></p>
python|flask|flask-restful
1
1,907,866
31,136,860
Reverse not found for my custom model admin urls after reload(sys.modules['urls.py'])
<p>After doing following when I call reverse for my custom model admin url its giving me Reverse not found and before reloading urls.py reverse working fine.</p> <pre><code>def _reset_urls(self, urlconf_modules): """Reset `urls.py` for a set of Django apps.""" for urlconf in urlconf_modules: if urlconf in sys.modules: reload(sys.modules[urlconf]) clear_url_caches() resolve('/') </code></pre> <p>I debugged this and find out that <code>admin.site._registry</code> is empty when I call <code>reload(sys.modules[urlconf])</code> because it creates new AdminSite object.</p> <p>I tried preserving <code>admin.site</code> in a variable before <code>reload(sys.modules[urlconf])</code> and assigning it back to <code>admin.site</code> after reload but it didn't work.</p> <p>Need help.</p> <p>Thanks in advance.</p>
<p>I was running into the same issue running Django 1.7, this seems to fix it for me:</p> <pre><code>import sys
from importlib import reload  # Python 3

from django.conf import settings
from django.core.urlresolvers import clear_url_caches
from django.utils.importlib import import_module


def reload_urlconf(urlconf=None):
    clear_url_caches()
    if urlconf is None:
        urlconf = settings.ROOT_URLCONF
    if urlconf in sys.modules:
        reload(sys.modules[urlconf])
    import_module(urlconf)
</code></pre>
python|django|django-models|django-admin|django-1.4
1
1,907,867
39,988,480
Converting url encode data from curl to json object in python using requests
<p>What is the best way to convert the below curl post into python request using the requests module:</p> <pre><code>curl -X POST https://api.google.com/gmail --data-urlencode json='{"user": [{"message":"abc123", "subject":"helloworld"}]}' </code></pre> <p>I tried using python requests as below, but it didn't work: </p> <pre><code>payload = {"user": [{"message":"abc123", "subject":"helloworld"}]} url = https://api.google.com/gmail requests.post(url, data=json.dumps(payload),auth=(user, password)) </code></pre> <p>Can anybody help. </p>
<p>As the comment mentioned, you should put your <code>url</code> variable string in quotes <code>&quot;&quot;</code> first.</p> <p>Otherwise, your question is not clear. What errors are being thrown and/or what behavior is happening?</p> <p><a href="https://stackoverflow.com/questions/17936555/how-to-construct-the-curl-command-from-python-requests-module"> New cURL method in Python </a></p>
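One concrete point worth adding, hedged since the endpoint in the question looks fictional: curl's <code>--data-urlencode json='...'</code> sends a form-encoded body with a single field named <code>json</code>, whereas <code>data=json.dumps(payload)</code> sends a raw JSON body. The closest requests equivalent form-encodes that field; building a <code>PreparedRequest</code> shows the body without hitting the network:

```python
import json
import requests

payload = {"user": [{"message": "abc123", "subject": "helloworld"}]}
url = "https://api.google.com/gmail"   # note the quotes, missing in the question

# curl's --data-urlencode json='...' means one form field named "json",
# so pass it through data= rather than as a raw JSON body
req = requests.Request("POST", url,
                       data={"json": json.dumps(payload)},
                       auth=("user", "password"))
prepared = req.prepare()
print(prepared.body)  # json=%7B%22user%22%3A+... (urlencoded form field)
```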
python|http-post|python-requests
0
1,907,868
8,496,779
Using mimetypes.guess_type with email.mime.base.MIMEBase
<p>I'm building SMTP ASCII emails to include attachments and wish to automate the extraction of the "content type" from the proposed attachment using mimetypes.guess_type, then using the result to add it to the mail body by using email.mime.base.MIMEBase.</p> <p>The problem I'm encountering is that mimetypes.guess_type produces a single string containing the complete content type e.g. image/jpeg. But email.mime.base.MIMEBase expectes 2 seperate variables e.g image and jpeg. </p> <p>Is there a clean way to do this (i.e. a different def than email.mime.base.MIMEBase) so that I don't have to parse the result from mimetypes.guess_type before being able to use it?</p> <p>It seems tedious that this would have to be done for these two libraries to work together, so I'm assuming I'm missing another easier way to do it.</p> <p>Thanks.</p>
<p>Well, this kind of "parsing" is very easy with python. You may try the code below:</p> <pre><code>format, enc = mimetypes.guess_type(filename)
main, sub = format.split('/')
mb = MIMEBase(main, sub)
</code></pre> <p>or even more compact:</p> <pre><code>format, enc = mimetypes.guess_type(filename)
mb = MIMEBase(*format.split('/'))
</code></pre>
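One caveat worth guarding against: <code>guess_type</code> returns <code>None</code> for unknown file types (and a non-<code>None</code> encoding for compressed files such as <code>.gz</code>), so the bare <code>split('/')</code> would raise. The usual pattern falls back to a generic type; the filename here is just an example:

```python
import mimetypes
from email.mime.base import MIMEBase

filename = 'photo.jpg'   # hypothetical attachment name
ctype, encoding = mimetypes.guess_type(filename)
if ctype is None or encoding is not None:
    # guess failed, or the file is compressed: use a generic binary type
    ctype = 'application/octet-stream'
maintype, subtype = ctype.split('/', 1)
msg = MIMEBase(maintype, subtype)
print(msg.get_content_type())  # image/jpeg
```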
python|smtp|mime-types
2
1,907,869
52,313,519
VIX-Model in Python
<p>I read a paper about a model that uses signals of the Standard Deviation of the VIX Index. I first tested the model in Excel and now I want to transform the model in Python code. I'm not that advanced yet in Python and got stuck. The model is simple. Calculate the rolling 20-day standard deviation of the VIX closing prices. Signals are generated if the Standard Deviation is below 0.86 AND the prior 10 days did not generate a signal. So, calculating the 0.86 threshold is easy but how do I include the piece that no signal occurred the prior 10 days.</p> <pre><code>vix['std_dev'] = vix['CLOSE'].rolling(window=20).std() vix['signal'] = np.where(vix['std_dev'] &lt;= 0.86,1,0) </code></pre> <p>vix is just the data of OHLC prices for the VIX index. I would suggest working with <code>.shift()</code>?</p>
<p>Just create another rolling 10-day indicator from the signal column and use its result as a condition for the actual signal.</p>
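Since each signal depends on whether earlier signals fired, a plain loop is the simplest faithful implementation of "below 0.86 AND no signal in the prior 10 days". This sketch uses a synthetic CLOSE series in place of your VIX data; the threshold and window sizes are from the question:

```python
import numpy as np
import pandas as pd

# synthetic stand-in for the asker's vix['CLOSE'] column
rng = np.random.default_rng(0)
vix = pd.DataFrame({'CLOSE': 15 + 0.1 * rng.standard_normal(300).cumsum()})

vix['std_dev'] = vix['CLOSE'].rolling(window=20).std()

# NaN warm-up rows compare False, so they can never trigger
below = (vix['std_dev'] <= 0.86).to_numpy()

signal = np.zeros(len(vix), dtype=int)
last = -10**9                      # index of the most recent signal
for i, b in enumerate(below):
    if b and i - last > 10:        # gap of at least 11 rows between signals
        signal[i] = 1
        last = i
vix['signal'] = signal
```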
python|quantitative-finance|algorithmic-trading
0
1,907,870
52,011,375
Error in numerical derivative of 4th order near the endpoints of an interval
<p>I am integrating a partial differential equation in which I have a fourth-order partial derivative in $x$ and the numerical integration in time is giving me absurd errors. The cause of the problem, I believe, is that I get large errors in the fourth-order derivative. To illustrate that I take numerical derivatives of the function $y(x)=1-\cos(2\pi x)$. Below I plot the derivatives $y_{xx}(x)$ and $y_{xxxx}(x)$ in the domain $(0, 1.0)$. See the figures below: <a href="https://i.stack.imgur.com/npbx7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/npbx7.png" alt="Plot of $y_{xx}"></a></p> <p><a href="https://i.stack.imgur.com/drpDF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/drpDF.png" alt="Plot of $y_{xxxx}$"></a></p> <p>As one can see the errors occur mostly near the boundaries.</p> <p>The numerical derivatives were performed using the numpy gradient method. The python code is here:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt N = 64 X = np.linspace(0.0, 1.0, N, endpoint = True) dx = 1.0/(N-1) Y= 1.0-np.cos(2*np.pi*X) Y_x = np.gradient(Y, X, edge_order = 2) Y_xx = np.gradient(Y_x, X, edge_order = 2) plt.figure() plt.title("$y(x)=1-\cos(2\pi x)$") plt.xlabel("$x$") plt.ylabel("$y_{xx}(x)$") plt.plot(X, ((2*np.pi)**2)*np.cos(2*np.pi*X), 'r-', label="analytics") plt.plot(X, Y_xx, 'b-', label="numerics") plt.legend() plt.grid() plt.savefig("y_xx.png") Y_xxx = np.gradient(Y_xx, X, edge_order = 2) Y_xxxx = np.gradient(Y_xxx, X, edge_order = 2) plt.figure() plt.title("$y(x)=1-\cos(2\pi x)$") plt.xlabel("$x$") plt.ylabel("$y_{xxxx}(x)$") plt.plot(X, -((2*np.pi)**4)*np.cos(2*np.pi*X), 'r-', label="analytics") plt.plot(X, Y_xxxx, 'b-', label="numerics") plt.legend() plt.grid() plt.savefig("y_xxxx.png") plt.show() </code></pre> <p>My question is how do I reduce algorithmically this large error at boundaries beyond the obvious increasing of N? 
As it is the convergence of the fourth-order derivative is not uniform. Is it possible to make that uniform? Perhaps, using extrapolation at the boundaries will do.</p>
<h3>Underlying mathematical issue</h3> <p>The formula for the 1st derivative that <code>np.gradient</code> uses has error <code>O(h**2)</code> (where h is your dx). As you keep on taking derivatives, this has potential to be bad because changing a function by an amount <code>z</code> can change its derivative by <code>z/h</code>. Usually this does not happen because the error comes from higher order derivatives of the function, which themselves vary smoothly; so, subsequent differentiation "differentiates away" most of the error of the preceding step, rather than magnifying it by <code>1/h</code>.</p> <p>However, the story is different near the boundary, where we must switch from one finite-difference formula (centered difference) to another. The boundary formula also has <code>O(h**2)</code> error but it's a <strong>different</strong> <code>O(h**2)</code>. And now we do have a problem with subsequent differentiation steps, each of which can contribute a factor of <code>1/h</code>. The worst-case scenario for the 4th derivative is <code>O(h**2) * (1/h**3)</code> which fortunately does not materialize here, but you still get a pretty bad boundary effect. I offer two different solutions which perform about the same (the second is marginally better but more expensive). But first, let's emphasize an implication of what Warren Weckesser said in a comment:</p> <p>If the function is <strong>periodic</strong>, you extend it by periodicity, e.g., <code>np.tile(Y, 3)</code>, compute the derivative of <em>that</em>, and take the middle part, truncating away any boundary effects. </p> <h3>Direct computation of the 4th derivative</h3> <p>Instead of applying a finite-difference formula for the 1st derivative four times, apply it for the 4th derivative once. There is still going to be the issue of the boundary values, for which my best idea is the simplest: constant extrapolation. 
That is:</p> <pre><code>Y_xxxx = (Y[4:] - 4*Y[3:-1] + 6*Y[2:-2] - 4*Y[1:-3] + Y[:-4])/(dx**4) Y_xxxx = np.concatenate((2*[Y_xxxx[0]], Y_xxxx, 2*[Y_xxxx[-1]])) </code></pre> <p>with the result</p> <p><a href="https://i.stack.imgur.com/FAkrD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FAkrD.png" alt="4th derivative"></a></p> <p>If you don't like the little flat bits on the side, take smaller <code>dx</code> (they are of size <code>2*dx</code>). </p> <h3>Differentiate an interpolating spline</h3> <p>Fit a 5th degree spline to the data, then take its 4th derivative <em>analytically</em>. This requires SciPy and is probably a lot slower than the direct approach.</p> <pre><code>from scipy.interpolate import InterpolatedUnivariateSpline spl = InterpolatedUnivariateSpline(X, Y, k=5) Y_xxxx = spl.derivative(4)(X) </code></pre> <p>Result:</p> <p><a href="https://i.stack.imgur.com/L1gN6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L1gN6.png" alt="spline derivative"></a></p> <h3>Nonuniform grid</h3> <p>A loss of precision at the boundaries is typical for interpolation on a uniform grid, so we should expect an improvement using Chebyshev nodes instead. Here it is: </p> <pre><code>def cheb_nodes(a, b, N): jj = 2.*np.arange(N) + 1 x = np.cos(np.pi * jj / 2 / N)[::-1] x = (a + b + (b - a)*x)/2 return x N = 64 X = cheb_nodes(0, 1, N) </code></pre> <p>The rest proceeds as before, using <code>InterpolatedUnivariateSpline</code> of degree 5. Plotting the 4th derivative itself now shows no visible difference between numerics and analytics, so here is the plot of <code>Y_xxxx - analytics</code> instead:</p> <p><img src="https://i.stack.imgur.com/de8b0.png" alt="difference"></p> <p>The error is at most 3, and it's no longer maximal at the boundary. The maximal error with uniform grid was about 33.</p> <p>I also explored the possibility of imposing the clamped condition on the interpolating spline to further improve its accuracy. 
Conceivably, <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.make_interp_spline.html" rel="nofollow noreferrer"><code>make_interp_spline</code></a> could do this with</p> <pre><code>l, r = [(0, 0), (1, 0)], [(0, 0), (1, 0)] spl = make_interp_spline(X, Y, k=5, bc_type=(l, r)) </code></pre> <p>but there are errors with either kind of nodes: "collocation matrix is singular". I think its handling of boundary conditions is targeted toward cubic splines. </p>
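As a concrete illustration of the periodic route mentioned above, here is a minimal sketch (assuming the sampled function really is periodic over the grid, hence the `endpoint=False` spacing) that tiles three periods, applies the direct 4th-derivative stencil once, and keeps only the middle copy:

```python
import numpy as np

# Uniform grid over one full period; endpoint=False keeps the tiling seamless
N = 256
X = np.linspace(0, 2 * np.pi, N, endpoint=False)
dx = X[1] - X[0]
Y = np.sin(X)

# Tile three periods and apply the direct 4th-derivative stencil once...
Yt = np.tile(Y, 3)
D4 = (Yt[4:] - 4 * Yt[3:-1] + 6 * Yt[2:-2] - 4 * Yt[1:-3] + Yt[:-4]) / dx**4

# ...then keep only the middle copy (stencil centre k+2 of Yt maps to D4[k])
Y_xxxx = D4[N - 2 : 2 * N - 2]

# d^4/dx^4 sin(x) = sin(x), so only the O(dx**2) stencil error remains
err = np.max(np.abs(Y_xxxx - Y))
```

For sin(x) the 4th derivative is sin(x) again, so the maximum error here is just the O(dx**2) stencil error, with no boundary artifacts at all.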
numpy|numeric|numerical-methods
3
1,907,871
52,435,224
json.dump not converting python list to JS array
<p>When I attempt to pass a python list through to JavaScript in the template it doesn't parse the list into a JS array as expected but instead returns this <code>[&amp;quot;Groceries&amp;quot;, &amp;quot;Clothing&amp;quot;, &amp;quot;Takeaways&amp;quot;, &amp;quot;Alcohol&amp;quot;]</code> causing the page to break.</p> <p>view.py</p> <pre><code>def labels(): category_labels = [] for item in Purchase.objects.order_by().values_list('type', flat=True).distinct(): category_labels.append(item) return category_labels def index(request): try: purchases = Purchase.objects.all().order_by('-time') total_spending = round(Purchase.objects.aggregate(Sum('amount'))['amount__sum'], 2) except Purchase.DoesNotExist: raise Http404("Could not find any purchases.") context = { 'purchases': purchases, 'total_spending': total_spending, 'spending_by_category': prepare_total_spending(), 'total_spending_all_categories': total_spending_all_categories(), 'labels': json.dumps(labels()), } return render(request, 'main/index.html', context) </code></pre> <p>index.html</p> <pre><code>&lt;script type="text/javascript"&gt; console.log(JSON.parse("{{labels}}")) # =&gt; converts this to console.log([&amp;quot;Groceries&amp;quot;, &amp;quot;Clothing&amp;quot;, &amp;quot;Takeaways&amp;quot;, &amp;quot;Alcohol&amp;quot;]) in JS and breaks. &lt;/script&gt; </code></pre>
<pre><code>{{labels | safe}} </code></pre> <p>Solved by @Klaus D.</p>
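Why the <code>safe</code> filter fixes it: Django autoescapes every template variable by default, so the quotes inside the JSON string were being rendered as <code>&amp;quot;</code> entities before <code>JSON.parse</code> ever saw them. A standard-library sketch of the two renderings (no Django required — `html.escape` stands in for Django's autoescaping, which escapes the same characters):

```python
import json
import html

labels = ["Groceries", "Clothing", "Takeaways", "Alcohol"]
dumped = json.dumps(labels)      # what json.dumps(labels()) produced in the view

# What {{ labels }} renders: autoescaping mangles the quotes
escaped = html.escape(dumped)    # '[&quot;Groceries&quot;, ...]' -- breaks JSON.parse

# What {{ labels|safe }} renders: the JSON untouched, so it parses again
round_trip = json.loads(dumped)
```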
javascript|python|django|python-3.x|django-2.0
1
1,907,872
63,515,734
struggling to understand what is causing this key error exception
<p>For my learning and development I have decided to pick up programming, and I decided to start with Python. I have started with a colour check application using the following <a href="https://data-flair.training/blogs/project-in-python-colour-detection/comment-page-1/" rel="nofollow noreferrer">tutorial</a></p> <p>However, whenever I run the activation command to open my image I get the following error</p> <pre><code>Traceback (most recent call last): File &quot;main.py&quot;, line 14, in &lt;module&gt; PathToImage = args['colour_select'] KeyError: 'colour_select' </code></pre> <p>I am relatively new to Python so this may not be the only error in my code. I am mostly confused as to how this is happening.</p> <p>My full code:</p> <pre><code># imports import cv2 import numpy as np import pandas as pd import argparse # variable declaration clicked = False red = blue = green = xpos = ypos = 0 ap = argparse.ArgumentParser() ap.add_argument('-i', required=True, help=&quot;PathToImage&quot;) args = vars(ap.parse_args()) PathToImage = args['colour_select'] img = cv2.imread(PathToImage) # read csv file index = [&quot;colour&quot;, &quot;colour_name&quot;, &quot;hex&quot;, &quot;R&quot;, &quot;G&quot;, &quot;B&quot;] csv = pd.read('colours.csv', names=index, header=None) cv2.namedWindow('colour_select') cv2.setMouseCallback('colour_select', draw_function) def draw_function(event, x, y, flags, parameters): if event == cv2.EVENT_LBUTTONDBLCLK: global blue, green, red, xpos, ypos, clicked clicked = True xpos = x ypos = y blue, green, red = img[x, y] blue = int(blue) green = int(green) red = int(red) def getColourName(red, green, blue): minimum = 10000 for i in range(len(csv)): distance = abs(red - int(csv.loc[i, &quot;Red&quot;])) + abs(green - int(csv.loc[i, &quot;Green&quot;])) + abs( blue - int(csv.loc[i, &quot;Blue&quot;])) if (distance &lt;= minimum): minimum = distance ColourName = csv.loc[i, &quot;colour_name&quot;] return ColourName while (1): 
cv2.imshow(&quot;colour_select&quot;, img) if (clicked): cv2.rectangle(img, (20, 20), (750, 60), (blue, green, red), -1) text = getColourName(red, green, blue) + ' R=' + str(red) + 'G=' + str(green) + ' B+' + str(blue) cv2.putText(img, text, (50, 50), 2, 0.8, (0, 0, 0), 2, cv2.LINE_AA) if (red, blue, green &gt;= 600): cv2.putText(img, text, (50, 50), 2, 0.8, (0, 0, 0), 2, cv2.LINE_AA) clicked = False if cv2.waitKey(20) &amp; 0xFF == 27: break cv2.destroyAllWindows() </code></pre> <p>As always any and all help is appreciated.</p>
<p>I started by isolating the cause of the issue to</p> <pre><code>import argparse ap = argparse.ArgumentParser() ap.add_argument('-i', required=True, help=&quot;PathToImage&quot;) args = vars(ap.parse_args()) PathToImage = args['colour_select'] </code></pre> <p>I put that code in &quot;myscript.py&quot; and ran it; specifically I tried</p> <pre><code>python myscript.py -i &quot;hello&quot; </code></pre> <p>I was able to reproduce the issue. To figure out the cause, I added a print statement</p> <pre><code>import argparse ap = argparse.ArgumentParser() ap.add_argument('-i', required=True, help=&quot;PathToImage&quot;) args = vars(ap.parse_args()) print(args) PathToImage = args['colour_select'] </code></pre> <p>What that showed on the command line was <code>{'i': 'hello'}</code></p> <p>If you want the key to be <code>colour_select</code>, then you should use that as the argument string</p> <pre><code>import argparse ap = argparse.ArgumentParser() ap.add_argument('-colour_select', required=True, help=&quot;PathToImage&quot;) args = vars(ap.parse_args()) if 'colour_select' not in args.keys(): print(&quot;ERROR!&quot;) PathToImage = args['colour_select'] print('PathToImage =',PathToImage) </code></pre>
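An alternative worth noting (not in the original answer): argparse's <code>dest</code> parameter controls the key under which a value is stored, so you can keep the short <code>-i</code> flag on the command line while the rest of the code keeps reading <code>args['colour_select']</code> unchanged. A sketch, parsing a hard-coded argument list purely for demonstration:

```python
import argparse

ap = argparse.ArgumentParser()
# dest sets the key in the parsed-arguments namespace, independent of the flag name
ap.add_argument('-i', dest='colour_select', required=True, help='PathToImage')

# normally parse_args() reads sys.argv; here we pass a list to demonstrate
args = vars(ap.parse_args(['-i', 'image.jpg']))
path = args['colour_select']
```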
python
0
1,907,873
63,510,814
Python - List conversion of sliced iterable increases with every slice
<p>I have a generator from read_sql and convert this generator to an iterator using itertools.islice. So I convert this generator to an iterator in slices using start and stop arguments. This process runs in a loop to convert the generator to an iterator in 3 slices and convert the iterator to a list.</p> <p>First time when it runs -&gt; <code>iterable_slice = list(it.islice(generator_df, 0 , 3))</code> takes 2.99 seconds. Second time when it runs -&gt; <code>iterable_slice = list(it.islice(generator_df, 4 , 6))</code> takes 5.3 seconds, and with every new loop or next set of slices, the list conversion takes more time.</p> <p>Why does this happen, and where am I making a mistake? Thoughts please. Thank you.</p> <pre><code>#function to convert generator to slices def gen_to_itr(generator_df,slice_start,slice_end): iterable_slice = list(it.islice(generator_df, slice_start,slice_end)) #main function slices = 3 slice_start = 0 slice_end = slices flg_cnt = 0 while slice_end &lt;= bcnt and flg_cnt &lt;= 1: generator_df = pd.read_sql(query2, test_connection_forbankcv_connection, chunksize = 1800) first = time.perf_counter() iterable_slice = gen_to_itr(generator_df,slice_start,slice_end) end = time.perf_counter() print(f'Chunk list created in {round(end-first, 2)} second(s)') slice_start = slice_start+slices ..... </code></pre>
<p><code>it.islice()</code> has to skip over the first <code>slice_start</code> elements of the generator when creating the new iterator. This takes time proportional to <code>slice_start</code>.</p> <p>However, I find it hard to believe that skipping each element of a pandas series would take about 1 second. If the chunksize were smaller than the slice sizes, it might need to do another fetch from the database to get the next chunk. But as long as you're in the same chunk, I think it should have the same speed as iterating through a static series.</p>
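To make the skipping cost visible, here is a small self-contained sketch with a counting generator standing in for the <code>read_sql</code> chunk generator — note that <code>islice(g, 4, 6)</code> has to pull (and throw away) the first four items before yielding anything:

```python
import itertools as it

def counting_gen(pulled):
    # stands in for the pd.read_sql chunk generator; records every item it yields
    for i in range(100):
        pulled.append(i)
        yield i

pulled_a = []
first = list(it.islice(counting_gen(pulled_a), 0, 3))   # consumes items 0-2

pulled_b = []
second = list(it.islice(counting_gen(pulled_b), 4, 6))  # consumes items 0-5, discards 0-3
```

Because the question's loop also re-creates the generator with a fresh <code>pd.read_sql</code> call on every iteration, each slice repeats the skip from the very beginning; reusing one live generator and calling <code>it.islice(generator_df, slices)</code> each time would read every chunk only once.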
python|iterator|generator|itertools|chunking
1
1,907,874
36,437,138
Inverse Match Help in Python
<p>Hello I am looking to trim a McAfee log file and remove all of the "is OK" and other reported instances that I am not interested in seeing. Before we used a shell script that took advantage of the -v option for grep, but now we are looking to write a python script that will work on both linux and windows. After a couple of attempts I was able to get a regex to work in an online regex builder, but I am having a difficult time implementing it into my script. <a href="http://i.stack.imgur.com/DKtTS.png" rel="nofollow">Online REGEX Builder</a></p> <p>Edit: I want to remove the "is OK", "is a broken", "is a block", and "file could not be opened" lines so that I am left with a file of just the problems that I am interested in. Sort of like this in shell:</p> <pre><code>grep -v "is OK" ${OUTDIR}/${OUTFILE} | grep -v "is a broken" | grep -v "file could not be opened" | grep -v "is a block" &gt; ${OUTDIR}/${OUTFILE}.trimmed 2&gt;&amp;1 </code></pre> <p>I read in and search through the file here:</p> <pre><code>import re f2 = open(outFilePath) contents = f2.read() print contents p = re.compile("^((?!(is OK)|(file could not be opened)| (is a broken)|(is a block)))*$", re.MULTILINE | re.DOTALL) m = p.findall(contents) print len(m) for iter in m: print iter f2.close() </code></pre> <p>A sample of the file I am trying to search:</p> <pre><code>eth0 10.0.11.196 00:0C:29:AF:6A:A7 parameters passed to uvscan: --DRIVER /opt/McAfee/uvscan/datfiles/current -- ANALYZE --AFC=32 ATIME-PRESERVE --PLAD --RPTALL RPTOBJECTS SUMMARY --UNZIP -- RECURSIVE --SHOWCOMP --MIME --THREADS=4 /tmp temp XML output is: /tmp/HIQZRq7t2R McAfee VirusScan Command Line for Linux64 Version: 6.0.5.614 Copyright (C) 2014 McAfee, Inc. (408) 988-3832 LICENSED COPY - April 03 2016 AV Engine version: 5700.7163 for Linux64. Dat set version: 8124 created Apr 3 2016 Scanning for 670707 viruses, trojans and variants. 
No file or directory found matching /root/SVN/swd-lhn-build/trunk/utils/ATIME-PRESERVE No file or directory found matching /root/SVN/swd-lhn-build/trunk/utils/RPTOBJECTS No file or directory found matching /root/SVN/swd-lhn-build/trunk/utils/SUMMARY /tmp/tmp.BQshVRSiBo ... is OK. /tmp/keyring-F6vVGf/socket ... file could not be opened. /tmp/keyring-F6vVGf/socket.ssh ... file could not be opened. /tmp/keyring-F6vVGf/socket.pkcs11 ... file could not be opened. /tmp/yum.log ... is OK. /tmp/tmp.oW75zGUh4S ... is OK. /tmp/.X11-unix/X0 ... file could not be opened. /tmp/tmp.LCZ9Ji6OLs ... is OK. /tmp/tmp.QdAt1TNQSH ... is OK. /tmp/ks-script-MqIN9F ... is OK. /tmp/tmp.mHXPvYeKjb/mcupgrade.conf ... is OK. /tmp/tmp.mHXPvYeKjb/uvscan/uninstall-uvscan ... is OK. /tmp/tmp.mHXPvYeKjb/mcscan ... is OK. /tmp/tmp.mHXPvYeKjb/uvscan/install-uvscan ... is OK. /tmp/tmp.mHXPvYeKjb/uvscan/readme.txt ... is OK. /tmp/tmp.mHXPvYeKjb/uvscan/uvscan_secure ... is OK. /tmp/tmp.mHXPvYeKjb/uvscan/signlic.txt ... is OK. /tmp/tmp.mHXPvYeKjb/uvscan/uvscan ... is OK. /tmp/tmp.mHXPvYeKjb/uvscan/liblnxfv.so.4 ... is OK. </code></pre> <p>But am not getting the correct output. I have tried removing both the MULTILINE and DOTALL options as well and still do not get the correct response. Below is the output when running with DOTALL and MULTILINE. </p> <pre><code>9 ('', '', '', '', '') ('', '', '', '', '') ('', '', '', '', '') ('', '', '', '', '') ('', '', '', '', '') ('', '', '', '', '') ('', '', '', '', '') ('', '', '', '', '') ('', '', '', '', '') </code></pre> <p>Any help would be much appreciated!! Thanks!!</p>
<p>Perhaps think simpler, line by line:</p> <pre><code>import re import sys pattern = re.compile(r"(is OK)|(file could not be opened)|(is a broken)|(is a block)") with open(sys.argv[1]) as handle: for line in handle: if not pattern.search(line): sys.stdout.write(line) </code></pre> <p>Outputs:</p> <pre><code>eth0 10.0.11.196 00:0C:29:AF:6A:A7 parameters passed to uvscan: --DRIVER /opt/McAfee/uvscan/datfiles/current -- ANALYZE --AFC=32 ATIME-PRESERVE --PLAD --RPTALL RPTOBJECTS SUMMARY --UNZIP -- RECURSIVE --SHOWCOMP --MIME --THREADS=4 /tmp temp XML output is: /tmp/HIQZRq7t2R McAfee VirusScan Command Line for Linux64 Version: 6.0.5.614 Copyright (C) 2014 McAfee, Inc. (408) 988-3832 LICENSED COPY - April 03 2016 AV Engine version: 5700.7163 for Linux64. Dat set version: 8124 created Apr 3 2016 Scanning for 670707 viruses, trojans and variants. No file or directory found matching /root/SVN/swd-lhn-build/trunk/utils/ATIME-PRESERVE No file or directory found matching /root/SVN/swd-lhn-build/trunk/utils/RPTOBJECTS No file or directory found matching /root/SVN/swd-lhn-build/trunk/utils/SUMMARY </code></pre>
python|regex|inverse-match
2
1,907,875
19,329,539
Python: REGEX works in one place, but not another with same input
<p>I will preface this with the fact that I am not a Python coder, I am just hacking this together for a quick check of the DB for my Android app. That and I'd like to turn it into a script for my webpage. That aside, I can't figure out why the code outside of the for-statement works while inside doesn't. It appears that row and row2 are identical. It pulls out the "beginner" table name fine in the first, but won't in the loop. Thanks for the help. </p> <pre><code>row = "(u'beginner',)" print row temp = re.search('\(u\'(.*)\',\)', row).group(1) print temp #getRows is a cursor fetch on a sqlite DB if that matters for row2 in getRows: print row2 temp = re.search('\(u\'(.*)\',\)', row2).group(1) print temp </code></pre> <p>OUTPUT: </p> <pre><code>Finding files... done. Importing test modules ... (u'beginner',) beginner (u'beginner',) done. </code></pre> <hr> <p>Ran 0 tests in 0.000s</p> <p>OK</p>
<p>I know why it's not working: because while the string representations of row and row2 are the same, they aren't really the same at all. Try printing <code>type(row2)</code> and you'll see it's a tuple. I know this because the DB API would return a tuple, not a string that looks like a tuple.</p> <p>So when you have this:</p> <pre><code>row = "(u'beginner',)" </code></pre> <p>It's a string that looks like a tuple, and you can re.search it. But you shouldn't do any of this--in the DB row, you should just get the string content by <code>row2[0]</code> and not use re at all.</p>
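A quick sketch of the distinction — the two objects print identically under Python 2, but only one of them is a string you can run a regex over; the DB row is a tuple, and plain indexing is all that's needed:

```python
row = "(u'beginner',)"    # a string that merely looks like a tuple
row2 = (u'beginner',)     # what the DB cursor actually returns

kind_of_row = type(row).__name__    # 'str'
kind_of_row2 = type(row2).__name__  # 'tuple'

value = row2[0]           # plain indexing, no regex required
```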
python|regex
2
1,907,876
19,450,380
Strange literal when using non-ascii characters in string
<p>I have the following test:</p> <pre><code># -*- coding: utf-8 -*- def test_literals(): test_cases = [ 'aaaaa', 'ááááá', u'aaaaa', u'ááááá', ] FORMAT = '%-20s -&gt; %2d %s' for data in test_cases : print FORMAT % (data, len(data), type(data)) test_literals() </code></pre> <p>Which gives:</p> <pre><code>aaaaa -&gt; 5 &lt;type 'str'&gt; ááááá -&gt; 10 &lt;type 'str'&gt; aaaaa -&gt; 5 &lt;type 'unicode'&gt; ááááá -&gt; 5 &lt;type 'unicode'&gt; </code></pre> <p>I am surprised about <code>'ááááá'</code>. What kind of literal is that? It is not unicode, since it has no <code>u</code> prefix (and the type says <code>str</code>), but it is also not a normal ascii string. I would like to know:</p> <ul> <li>What is it?</li> <li>Is it possible to get more information about a <code>basestring</code> object (<code>unicode / str</code>), apart from its type?</li> <li>How can I dump the bytes in hex?</li> </ul>
<ol> <li>It's a UTF-8-encoded string. The fact that it's printing correctly shows that your terminal happens to be using UTF-8, too. Lucky you. That script would fail on a Windows box.<br> How do I know this? You declared the script as being UTF-8 encoded, so Python will interpret the source code accordingly. When printing it, Python will send the raw bytes of the encoded string to the console. If that's set to the same encoding, you get the correct output. If not, you don't.</li> <li>No. You need to know the encoding that is being used. Ideally, you control that yourself. In the real world, it's possible to make an inspired guess sometimes (see <a href="https://pypi.python.org/pypi/chardet" rel="nofollow">chardet</a>), but you can't rely on this.</li> <li>See @falsetru's comment.</li> </ol> <p>All in all, if that confuses you, it might motivate you to switch to Python 3 where all this is much easier.</p>
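To make the byte-level picture concrete (shown here in Python 3 syntax, where the str/bytes split is explicit): each 'á' is a single code point, U+00E1, but two UTF-8 bytes, 0xC3 0xA1 — which is exactly why the plain-str version reported length 10. In the question's Python 2, the analogous hex dump of the plain <code>str</code> would be <code>data.encode('hex')</code>.

```python
s = u'ááááá'                # text: 5 code points
b = s.encode('utf-8')       # bytes: 10 bytes, 2 per 'á'

n_chars = len(s)            # 5
n_bytes = len(b)            # 10
hex_dump = b.hex()          # 'c3a1' repeated five times
```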
python|unicode
1
1,907,877
22,276,034
Underlining text in matplotlib legend
<p>I need to have a bit of text underlined in my legend. I found a question answered <a href="https://stackoverflow.com/questions/10727368/underlining-text-in-python-matplotlib">here</a> but I do not understand LaTeX at all. I need to underline "Content determined from gamma spectroscopy" in the legend (line 53 of the code). I tried to do the following from the link:</p> <pre><code>r'\underline{Content determined from gamma spectroscopy} ', </code></pre> <p>but that exact text just shows up in the legend. How exactly can I underline the text?</p> <pre><code>import matplotlib.pyplot as plt from matplotlib.patches import Rectangle eu_cl = 0.1 co_cl = 2.0 ca_eu_gs = 0.05 / eu_cl ca_co_gs = 0.46 / co_cl fa_eu_gs = 0.11 / eu_cl fa_co_gs = 0.76 / co_cl ce_eu_gs = 0.03 / eu_cl ce_co_gs = 0.26 / co_cl ca_eu_ms = 0.04 / eu_cl ca_co_ms = 1.05 / co_cl fa_eu_ms = 0.01 / eu_cl fa_co_ms = 1.85 / co_cl ce_eu_ms = 0.08 / eu_cl ce_co_ms = 1.44 / co_cl y_co = [1,1,1,1,1,1,1e-1,1e-2,1e-3,0] x_co = [0,1e-4,1e-3,1e-2,1e-1,1e0,1e0,1e0,1e0,1e0] #y_eu = [0, 1e-3, 1e-2, 1e-1, 1e0] #x_eu = [1,1,1,1,1] plt.rcParams['legend.loc'] = 'best' plt.figure(1) plt.ylim(1e-3, 1e4) plt.xlim(1e-4, 1e3) #plt.autoscale(enable=True, axis='y', tight=None) #plt.autoscale(enable=True, axis='x', tight=None) ca_gs = plt.scatter(ca_eu_gs, ca_co_gs, color='b', marker='o') fa_gs = plt.scatter(fa_eu_gs, fa_co_gs, color='r', marker='o') ce_gs = plt.scatter(ce_eu_gs, ce_co_gs, color='m', marker='o') ca_ms = plt.scatter(ca_eu_ms, ca_co_ms, color='b', marker='^') fa_ms = plt.scatter(fa_eu_ms, fa_co_ms, color='r', marker='^') ce_ms = plt.scatter(ce_eu_ms, ce_co_ms, color='m', marker='^') extra = Rectangle((0, 0), 1, 1, fc="w", fill=False, edgecolor='none', linewidth=0) extra1 = Rectangle((0, 0), 1, 1, fc="w", fill=False, edgecolor='none', linewidth=0) clearance, = plt.plot(x_co, y_co, color='g') #plt.plot(x_eu, y_eu, color='g') plt.loglog() plt.xlabel('Europium (ppm)') plt.ylabel('Cobalt (ppm)') plt.legend([extra, 
ca_gs, fa_gs, ce_gs, extra1, ca_ms, fa_ms, ce_ms , clearance], ("Content determined from gamma spectroscopy" , "Coarse aggregate","Fine aggregate","Cement","Content determined from ICP-MS","Coarse aggregate","Fine aggregate","Cement","D/C = 1") , scatterpoints = 1) print('plots created') plt.show() </code></pre> <p>EDIT:</p> <p>I added the following as mentioned in the comments to turn on LaTeX</p> <pre><code>rc('text', usetex=True) </code></pre> <p>But that results in a whole string of errors ending with the following error now:</p> <pre><code>RuntimeError: LaTeX was not able to process the following string: '$10^{-4}$' Here is the full report generated by LaTeX: </code></pre> <p>I am guessing this is because I have to format all of my text using LaTeX now. Is there a way to format only some of it using LaTeX? I just have no experience with it, and now really isn't the best time to learn it (I will someday though).</p>
<p>It didn't work because you left out the </p> <pre><code> from matplotlib import rc rc('text', usetex=True) </code></pre> <p>portion of the answer. LaTeX is a markup language, a very advanced one actually.</p> <p>If you don't want to dive into that, you can probably just set the text style for the legend using the matplotlib parameter, but it will apply it to everything in the legend.</p>
python|matplotlib|legend|underline
2
1,907,878
16,929,717
Work-around for module that performs (bad) initialization
<p>We have a central module that calls an init() function within the module on loading:</p> <pre><code>import x import y import z def init(): .... init() if __name__ == '__main__': ... </code></pre> <p>This gets pulled into every one of our application modules with a statement like:</p> <pre><code>if __name__ == '__main__': import central_module as b b.do_this() b.do_that() </code></pre> <p>init() does a number of bad things, notably establishing connections to databases. Because of this, it breaks any unit tests, and the modules I write expect the usual behavior where you import the module and explicitly invoke any initialization.</p> <p>I've implemented a work-around by adding an INITIALIZE variable up top:</p> <pre><code>#INITIALIZE = True INITIALIZE = False # for DEV/test if INITIALIZE: init() </code></pre> <p>But this requires me to edit that file in order to run my tests or do development, then revert the change when I'm ready to commit &amp; push.</p> <p>For political reasons, I haven't gotten any traction on just fixing it, with something like:</p> <pre><code>import central_module as b ... b.init() b.do_this() b.do_that() </code></pre> <p>Is there some way that I can more transparently disable that call when the module loads? The problem is that by the time the module is imported, it's already tried to connect to the databases (and failed).</p> <p>Right now my best idea is: I could move the INITIALIZE variable into a previous import, and in my tests import that, set initialize to FALSE, then import central_module.</p> <p>I'll keep working on the political side (arg), but was wondering if there's a better work-around I could drop in place to disable that init call without disrupting all the existing scripts.</p>
<p>Here's my idea, evil as it may be:</p> <ol> <li>Open the module as a file and read in the source.</li> <li>Use <code>ast.parse()</code> to parse it into a AST.</li> <li>Walk the AST until you find the offending function call, and prune it.</li> <li>Evaluate the modified AST and inject it into a new module created by <code>imp.new_module()</code>, and stuff it into <code>sys.modules</code>.</li> <li>Commit the crazy hack with a commit message saying that it shouldn't be necessary at all except for the fact that some twit wouldn't know proper initialization if it bit them in the ass.</li> </ol>
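A hedged sketch of steps 1–4, using a toy module source in place of the real central module (you would read the real file instead) and <code>types.ModuleType</code> in place of the older <code>imp.new_module</code> — the only assumption is that the offending call is a bare top-level <code>init()</code> statement:

```python
import ast
import sys
import types

# A toy stand-in for the central module's source
source = '''
CONNECTED = False

def init():
    global CONNECTED
    CONNECTED = True   # stands in for the database connections

init()                 # the module-level call we want to prune
'''

tree = ast.parse(source)
# Step 3: drop every top-level statement that is a bare call to init()
tree.body = [
    node for node in tree.body
    if not (isinstance(node, ast.Expr)
            and isinstance(node.value, ast.Call)
            and isinstance(node.value.func, ast.Name)
            and node.value.func.id == 'init')
]

# Step 4: evaluate the pruned module and register it under a new name
mod = types.ModuleType('central_module_noinit')
exec(compile(tree, '<pruned>', 'exec'), mod.__dict__)
sys.modules['central_module_noinit'] = mod
```

After this, <code>mod.CONNECTED</code> is still <code>False</code> — the connection code never ran — but <code>mod.init()</code> remains available to call explicitly when you actually want it.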
python
3
1,907,879
58,064,174
How to compare two dimensional list that is not in another two dimensional list?
<p>I have two lists that are two dimensional. I would like to compare them and get the parts of df1 that are not in df0. This is the code I have below, but it is only giving me the matching values.</p> <p>For example:</p> <pre><code>df0=[[2, 4, 7, 13, 14], [3, 5, 8, 13, 14], [6, 9, 10, 13, 14]] df1=[[4, 7, 9, 12], [12, 15, 17, 18, 19], [13, 22, 23, 24, 30], [2, 5, 7, 8, 9], [6, 7, 12, 14, 15]] df3= list(enumerate([[list(set(x) &amp; set(y)) for x in df0 if x not in df1] for y in df1])) </code></pre> <p>I would like my results to be:</p> <pre><code>[[[9, 12], [], [22,23,24,30], [5,8,9], [6,12,15]], [[], [], [22,23,24,30], [2,7,9], [6,7,12,15]], [[], [], [22,23,24,30], [], [7,12,15]]] </code></pre>
<p>Your desired result can be achieved by applying this algorithm:</p> <pre><code>[ [ list(sorted(set(x) - set(y))) if set(y) &amp; set(x) else [] for x in df1 ] for y in df0 ] </code></pre> <p>But if this makes any sense, that's up to you :)</p>
python|python-3.x|list|multidimensional-array
0
1,907,880
47,979,049
Combine MultiIndex columns to a single index in a pandas dataframe
<p>With my code I integrate 2 databases in 1. The problem is when I add one more column to my databases, the result is not as expected. Use Python 2.7</p> <p>code:</p> <pre><code>import pandas as pd import pandas.io.formats.excel import numpy as np # Leemos ambos archivos y los cargamos en DataFrames df1 = pd.read_excel("archivo1.xlsx") df2 = pd.read_excel("archivo2.xlsx") df = (pd.concat([df1,df2]) .set_index(["Cliente",'Fecha']) .stack() .unstack(1) .sort_index(ascending=(True, False))) m = df.index.get_level_values(1) == 'Impresiones' df.index = np.where(m, 'Impresiones', df.index.get_level_values(0)) # Creamos el xlsx de salida pandas.io.formats.excel.header_style = None with pd.ExcelWriter("Data.xlsx", engine='xlsxwriter', date_format='dd/mm/yyyy', datetime_format='dd/mm/yyyy') as writer: df.to_excel(writer, sheet_name='Sheet1') </code></pre> <p>archivo1:</p> <pre><code>Fecha Cliente Impresiones Impresiones 2 Revenue 20/12/17 Jose 1312 35 $12 20/12/17 Martin 12 56 $146 20/12/17 Pedro 5443 124 $1,256 20/12/17 Esteban 667 1235 $1 </code></pre> <p>archivo2:</p> <pre><code>Fecha Cliente Impresiones Impresiones 2 Revenue 21/12/17 Jose 25 5 $2 21/12/17 Martin 6347 523 $123 21/12/17 Pedro 2368 898 $22 21/12/17 Esteban 235 99 $7,890 </code></pre> <p>Hope Results:</p> <p><a href="https://i.stack.imgur.com/LOSz1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LOSz1.png" alt=""></a></p> <p>I tried with <code>m1 = df.index.get_level_values(1) == 'Impresiones 2' df.index = np.where(m1, 'Impresiones 2', df.index.get_level_values(0))</code> but I have this error: <code>IndexError: Too many levels: Index has only 1 level, not 2</code></p>
<p>The first bit of the solution is similar to <a href="https://stackoverflow.com/a/47958919/4909087">jezrael's answer</a> to your previous question, using <code>concat</code> + <code>set_index</code> + <code>stack</code> + <code>unstack</code> + <code>sort_index</code>.</p> <pre><code>df = pd.concat([df1, df2])\ .set_index(['Cliente', 'Fecha'])\ .stack()\ .unstack(-2)\ .sort_index(ascending=[True, False]) </code></pre> <p>Now comes the challenging part, we have to incorporate the Names in the 0<sup>th</sup> level, into the 1<sup>st</sup> level, and then reset the index. </p> <p>I use <code>np.insert</code> to insert names above the revenue entry in the index.</p> <pre><code>i, j = df.index.get_level_values(0), df.index.get_level_values(1) k = np.insert(j.values, np.flatnonzero(j == 'Revenue'), i.unique()) </code></pre> <p>Now, I create a new <code>MultiIndex</code> which I then use to <code>reindex</code> <code>df</code> -</p> <pre><code>idx = pd.MultiIndex.from_arrays([i.unique().repeat(len(df.index.levels[1]) + 1), k]) df = df.reindex(idx).fillna('') </code></pre> <p>Now, drop the extra level - </p> <pre><code>df.index = df.index.droplevel() df Fecha 20/12/17 21/12/17 Esteban Revenue $1 $7,890 Impresiones2 1235 99 Impresiones 667 235 Jose Revenue $12 $2 Impresiones2 35 5 Impresiones 1312 25 Martin Revenue $146 $123 Impresiones2 56 523 Impresiones 12 6347 Pedro Revenue $1,256 $22 Impresiones2 124 898 Impresiones 5443 2368 </code></pre>
python|excel|pandas
3
1,907,881
37,261,716
How to date-stamp a pickled object
<p>I have a script that obtains certain information online and stores it in a list. I'd like to keep the list for future executions of the script, and then retrieve the information online again after a certain period of time (let's say 3 days), just in case it's gone stale. My understanding is that I can pickle the list (but please let me know if there's a more advisable way to store it -- EDIT: should I use shelve or json instead?).</p> <p>My main question is this: what's the best and most idiomatic way to store the date and time of the pickle and then evaluate if 3 days have passed?</p>
<p>An approach that completely avoids having to store and manage a separate file object for tracking the timestamp is to use <code>os.path.getmtime()</code> to get the date modified timestamp that linux records for the list/file that you are refreshing. </p> <p>For example:</p> <pre><code>import os import datetime import time threshold = datetime.timedelta(days=3) # can also be minutes, seconds, etc. filetime = os.path.getmtime(filename) # filename is the path to the local file you are refreshing now = time.time() delta = datetime.timedelta(seconds=now-filetime) if delta &gt; threshold: # do something </code></pre>
python|pickle
4
1,907,882
66,042,805
Flask CORS stopped allowing access to resources
<p>Only a few days ago I didn't have to make any modifications to the Flask-CORS extension I'm using. Apparently I do now.</p> <p>Console is giving me this error on a POST request:</p> <p><code>Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:5000/blog/posts. (Reason: CORS preflight response did not succeed)</code></p> <p>one of my configs:</p> <pre><code>class BaseConfig: CORS_HEADERS = 'Content-Type' ... ... </code></pre> <p><code>__init__.py</code>:</p> <pre><code>... ... from flask_cors import CORS cors = CORS() def create_app(script_info=None): app = Flask(__name__) app.config.from_object(BaseConfig) cors.init_app(app, resources={r&quot;/*&quot;: {&quot;origins&quot;: &quot;*&quot;}}) ... ... return app </code></pre> <p>my blueprint:</p> <pre><code>from flask_cors import cross_origin ... ... class MyClass(Resource): def get(self): ... ... @cross_origin() def post(self): ... ... </code></pre> <p>A few days back all I had to do was initialize the extension and it did the rest. Now I'm getting the error even when explicitly specifying that I'm allowing access from everywhere. Once this works I'll narrow it down to only one domain, but this still throws an error.</p> <p>I've tried to get some debug output with <code>logging.getLogger('flask_cors').level = logging.DEBUG</code></p> <p><code>127.0.0.1 - - [04/Feb/2021 11:02:31] &quot;OPTIONS /blog/posts HTTP/1.1&quot; 404 -</code></p> <p>Any advice?</p>
<p>I suggest you use the following snippet; with it you don't need to use a decorator for CORS anymore. But be careful if you are using Nginx as a reverse proxy: you have to add the proper headers in Nginx too.</p> <pre><code>cors = CORS(flask_app, resources={r&quot;/api/*&quot;: {&quot;origins&quot;: &quot;*&quot;, &quot;allow_headers&quot;: &quot;*&quot;, &quot;expose_headers&quot;: &quot;*&quot;}}) </code></pre>
python-3.x|flask|cors|flask-cors
0
1,907,883
66,120,838
Unexpected argument passing dictionary to function with **kwargs python3.9
<p>I'm trying to pass a dictionary to a function called solve_slopeint() using **kwargs because the values in the dictionary could sometimes be None depending on the user input. When I try to do that, I get a TypeError saying:</p> <pre><code>solve_slopeint() takes 0 positional arguments but one was given </code></pre> <p>Here's the whole process of what's happening:</p> <ol> <li>arg_dict values set to None and program asks a question</li> <li>User says yes. A function is then called and starts with another question. The user inputs 2</li> <li>User is asked to enter point 1 and then asked to enter point 2</li> <li>Those 2 points are returned to the parent function as a list of lists.</li> <li>arg_dict accesses those values and sets them to its corresponding keys.</li> <li>solve_slopeint() is called with arg_dict as the parameter.</li> </ol> <p>Here is the code that you need to see:</p> <p><strong>main.py</strong></p> <pre><code>def slope_intercept(): arg_dict = { &quot;point1&quot;: None, &quot;point2&quot;: None, &quot;slope&quot;: None, &quot;y-intercept&quot;: None } while True: question1 = input(&quot;Were you given any points that the line passes through? 
(y/n): &quot;) if question1 == 'y': point_list = passing_points() if len(point_list) == 2: arg_dict[&quot;point1&quot;] = point_list[0] arg_dict[&quot;point2&quot;] = point_list[1] solve_slopeint(arg_dict) </code></pre> <p><strong>functions.py</strong></p> <pre><code>def passing_points(): while True: num_points = input(&quot;How many points were you given?: &quot;) try: num_points = int(num_points) elif num_points == 2: point1_list = [] point2_list = [] while True: point1 = input(&quot;Enter point 1 in the format x,y: &quot;) while True: point2 = input(&quot;Enter point 2 in the format x,y: &quot;) return [point1_list, point2_list] </code></pre> <p><strong>solve_functions.py</strong></p> <pre><code>def solve_slopeint(**kwargs): print(&quot;Equation solved!&quot;) </code></pre> <p><a href="https://i.stack.imgur.com/OoZMB.png" rel="nofollow noreferrer">Click Here </a>to see the output of the debugger in PyCharm.</p> <p>Just so people know, I left out a lot of error checking making sure that the user doesn't intentionally or accidentally input something wrong. If I left out some code that makes this code here not understandable, please tell me in the comments.</p> <p>Does anyone know how to fix this?</p>
<p>You're calling the <code>solve_slopeint</code> function incorrectly.</p> <p>Do it like this:</p> <pre><code>def slope_intercept(): arg_dict = { &quot;point1&quot;: None, &quot;point2&quot;: None, &quot;slope&quot;: None, &quot;y-intercept&quot;: None } while True: question1 = input(&quot;Were you given any points that the line passes through? (y/n): &quot;) if question1 == 'y': point_list = passing_points() if len(point_list) == 2: arg_dict[&quot;point1&quot;] = point_list[0] arg_dict[&quot;point2&quot;] = point_list[1] solve_slopeint(**arg_dict) # Or: # solve_slopeint(point1=point_list[0], point2=point_list[1]) </code></pre>
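A minimal sketch of the difference (with a hypothetical `solve` function standing in for `solve_slopeint`): passing the dict positionally hands the whole dict as one positional argument, while `**` unpacks each key/value pair into a keyword argument.

```python
def solve(**kwargs):
    # kwargs collects all keyword arguments into a dict
    return sorted(kwargs)

d = {"point1": (1, 2), "point2": (3, 4)}

# solve(d) passes the dict as ONE positional argument -> TypeError
try:
    solve(d)
except TypeError as e:
    err = str(e)

# solve(**d) turns each key/value pair into a keyword argument
keys = solve(**d)
```

Here `err` contains the same "takes 0 positional arguments" complaint from the question, and `keys` is `['point1', 'point2']`.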
python|python-3.x|dictionary|keyword-argument
1
1,907,884
66,146,007
upgrade pytesseract in python
<p>I want to upgrade the pytesseract package and I have used this line <code>pip install pytesseract==0.3.7</code> and I got this as successful</p> <pre><code>Collecting pytesseract==0.3.7 Using cached pytesseract-0.3.7.tar.gz (13 kB) Requirement already satisfied: Pillow in c:\users\future\appdata\local\programs\python\python39\lib\site-packages (from pytesseract==0.3.7) (8.0.1) Using legacy 'setup.py install' for pytesseract, since package 'wheel' is not installed. Installing collected packages: pytesseract Attempting uninstall: pytesseract Found existing installation: pytesseract 0.3.6 Uninstalling pytesseract-0.3.6: Successfully uninstalled pytesseract-0.3.6 Running setup.py install for pytesseract ... done Successfully installed pytesseract-0.3.7 </code></pre> <p>When trying the following lines in the cmd</p> <pre><code>python </code></pre> <p>then typed <code>import pytesseract</code>, I got the following error messages</p> <pre><code>Python 3.9.1 (tags/v3.9.1:1e5d33e, Dec 7 2020, 17:08:21) [MSC v.1927 64 bit (AMD64)] on win32 Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. 
&gt;&gt;&gt; import pytesseract ** On entry to DGEBAL parameter number 3 had an illegal value ** On entry to DGEHRD parameter number 2 had an illegal value ** On entry to DORGHR DORGQR parameter number 2 had an illegal value ** On entry to DHSEQR parameter number 4 had an illegal value Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;C:\Users\Future\AppData\Local\Programs\Python\Python39\lib\site-packages\pytesseract\__init__.py&quot;, line 2, in &lt;module&gt; from .pytesseract import ALTONotSupported File &quot;C:\Users\Future\AppData\Local\Programs\Python\Python39\lib\site-packages\pytesseract\pytesseract.py&quot;, line 36, in &lt;module&gt; from numpy import ndarray File &quot;C:\Users\Future\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\__init__.py&quot;, line 305, in &lt;module&gt; _win_os_check() File &quot;C:\Users\Future\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\__init__.py&quot;, line 302, in _win_os_check raise RuntimeError(msg.format(__file__)) from None RuntimeError: The current Numpy installation ('C:\\Users\\Future\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\numpy\\__init__.py') fails to pass a sanity check due to a bug in the windows runtime. See this issue for more information: </code></pre> <p>Any ideas of how to fix such a problem?</p>
<p>I was able to pip install the 0.3.6 version, upgrade to the 0.3.7 version, and then import and use the package no problem in a similar environment using python 3.9.0.</p> <p>I would suggest making a virtual environment: <code>python -m venv &lt;path-to-directory&gt;</code></p> <p>And then run: <code>&lt;path-to-directory&gt;/Scripts/Activate</code></p> <p>Once you do that, try to pip install the package again. If it works, there is something messed up in the python installation at &quot;C:\Users\Future\AppData\Local\Programs\Python\Python39&quot;.</p> <p>I would suggest always installing packages in a virtual environment this way. It will save you a lot of headaches if you are changing configurations or trying new packages often. If it stops working, just make a new environment!</p> <p>Link: <a href="https://docs.python.org/3/library/venv.html" rel="nofollow noreferrer">3.9.1 - venv Docs</a></p>
python|python-tesseract
1
1,907,885
7,023,369
get the modules locally for django project
<p>Is there a way to import all the modules in the Django project itself instead of setting them up again and again on every system? I would have used gem freeze or something like that in a Rails project.</p>
<p>There's a bit of a terminology confusion here: "modules" refers to individual <code>.py</code> files within a Python package. And importing is what you do within code, to bring modules into the current namespace.</p> <p>I <em>think</em> what you're asking is how to <strong>install</strong> Python <strong>packages</strong> on deployment. The answer to that is, use <a href="http://www.pip-installer.org/en/latest/index.html" rel="nofollow">pip</a> with the <code>freeze</code> command, in conjunction with <a href="http://www.virtualenv.org/en/latest/index.html" rel="nofollow">virtualenv</a>.</p>
python|django|python-module
3
1,907,886
7,293,341
Python/Perl: timed loop implementation (also with microseconds)?
<p>I would like to use Perl and/or Python to implement the following JavaScript pseudocode: </p> <pre><code>var c=0; function timedCount() { c=c+1; print("c=" + c); if (c&lt;10) { var t; t=window.setTimeout("timedCount()",100); } } // main: timedCount(); print("after timedCount()"); var i=0; for (i=0; i&lt;5; i++) { print("i=" + i); wait(500); //wait 500 ms } </code></pre> <p>&nbsp;</p> <p>Now, this is a particularly unlucky example to choose as a basis - but I simply couldn't think of any other language to provide it in :) Basically, there is a 'main loop' and an auxiliary 'loop' (<code>timedCount</code>), which both count at different rates: main with 500 ms period (implemented through a <code>wait</code>), <code>timedCount</code> with 100 ms period (implemented via <code>setInterval</code>). However, JavaScript is essentially single-threaded, not multi-threaded - and so, there is no real <code>sleep</code>/<code>wait</code>/<code>pause</code> or similar (<em>see <a href="http://www.ozzu.com/programming-forum/javascript-sleep-function-t66049.html" rel="nofollow noreferrer">JavaScript Sleep Function - ozzu.com</a></em>), which is why the above is, well, pseudocode ;) </p> <p>By moving the main part to yet another <code>setInterval</code> function, however, we can get a version of the code which can be pasted and ran in a browser shell like <a href="http://www.squarefree.com/shell/shell.html" rel="nofollow noreferrer">JavaScript Shell 1.4</a> (<em>but not in a terminal shell like <a href="https://stackoverflow.com/questions/6170676/envjs-rhino-settimeout-not-working/7292343#7292343">EnvJS/Rhino</a></em>):</p> <pre><code>var c=0; var i=0; function timedCount() { c=c+1; print("c=" + c); if (c&lt;10) { var t; t=window.setTimeout("timedCount()",100); } } function mainCount() // 'main' loop { i=i+1; print("i=" + i); if (i&lt;5) { var t; t=window.setTimeout("mainCount()",500); } } // main: mainCount(); timedCount(); print("after timedCount()"); </code></pre> <p>... 
which results with something like this output: </p> <pre><code>i=1 c=1 after timedCount() c=2 c=3 c=4 c=5 c=6 i=2 c=7 c=8 c=9 c=10 i=3 i=4 i=5 </code></pre> <p>... that is, the main counts and auxiliary counts are 'interleaved'/'threaded'/'interspersed', with a main count on approx every five auxiliary counts, as anticipated. </p> <p>&nbsp; </p> <p>And now the main question - what is the recommended way of doing this in Perl and Python, respectively? </p> <ul> <li>Additionally, do either Python or Perl offer facilities to implement the above with microsecond timing resolution in cross-platform manner? </li> </ul> <p>&nbsp;</p> <p>Many thanks for any answers,<br> Cheers! </p>
<p>The simplest and most general way I can think of doing this in Python is to use <a href="http://twistedmatrix.com/" rel="nofollow">Twisted</a> (an event-based networking engine) to do this.</p> <pre><code>from twisted.internet import reactor from twisted.internet import task c, i = 0, 0 def timedCount(): global c c += 1 print 'c =', c def mainCount(): global i i += 1 print 'i =', i c_loop = task.LoopingCall(timedCount) i_loop = task.LoopingCall(mainCount) c_loop.start(0.1) i_loop.start(0.5) reactor.run() </code></pre> <p>Twisted has a highly efficient and stable event-loop implementation called the reactor. This makes it single-threaded and essentially a close analogue to Javascript in your example above. The reason I'd use it to do something like your periodic tasks above is that it gives tools to make it easy to add as many complicated periods as you like.</p> <p>It also offers <a href="http://twistedmatrix.com/documents/current/core/howto/time.html" rel="nofollow">more tools for scheduling task calls</a> you might find interesting.</p>
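If pulling in Twisted feels heavy, the standard library's `sched` module can express the same two interleaved periodic loops in a single thread (a Python 3 sketch of the idea, not the answer's Twisted code; the intervals are shortened to 10 ms / 50 ms so it finishes quickly):

```python
import sched
import time

s = sched.scheduler(time.time, time.sleep)
events = []
c = 0
i = 0

def timed_count():
    global c
    c += 1
    events.append(('c', c))
    if c < 10:
        s.enter(0.01, 1, timed_count)   # re-arm with a 10 ms period

def main_count():
    global i
    i += 1
    events.append(('i', i))
    if i < 5:
        s.enter(0.05, 1, main_count)    # re-arm with a 50 ms period

s.enter(0, 1, main_count)
s.enter(0, 1, timed_count)
s.run()   # blocks until both chains finish; single-threaded, like the JS version
```

On the microsecond-resolution part of the question: neither this nor the Twisted reactor guarantees microsecond accuracy, since both ultimately depend on the OS scheduler's sleep granularity (typically on the order of milliseconds).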
python|perl|loops|timing
4
1,907,887
72,706,888
"Can not merge type" error when merging pandas dataframes
<pre><code>import pandas as pd import numpy as np pdf1 = pd.DataFrame({ 'id': np.array([1, 2, 3, 4, 6, 7], dtype=int), 'name': np.array(['a', 'b', 'c', 'd', None, 'e'], dtype=str) }) print(pdf1.dtypes) pdf2 = pd.DataFrame({ 'id': np.array([1, 2, 3, 5, 6, 7], dtype=int), 'name': np.array(['k', 'l', 'm', 'm', 'o', None], dtype=str) }) print(pdf2.dtypes) res_pdf = pdf1.join(pdf2, on = ['id'], how = 'outer', lsuffix=&quot;_x&quot;, rsuffix=&quot;_y&quot;,) print(res_pdf) </code></pre> <p>results in</p> <pre><code>TypeError Traceback (most recent call last) /tmp/ipykernel_8156/3197760477.py in &lt;module&gt; 17 18 res_pdf = pdf1.join(pdf2, on = ['id'], how = 'outer', lsuffix=&quot;_x&quot;, rsuffix=&quot;_y&quot;,) ---&gt; 19 spark.createDataFrame(res_pdf).show() /opt/spark/python/lib/pyspark.zip/pyspark/sql/session.py in createDataFrame(self, data, schema, samplingRatio, verifySchema) 671 if has_pandas and isinstance(data, pandas.DataFrame): 672 # Create a DataFrame from pandas DataFrame. --&gt; 673 return super(SparkSession, self).createDataFrame( 674 data, schema, samplingRatio, verifySchema) 675 return self._create_dataframe(data, schema, samplingRatio, verifySchema) /opt/spark/python/lib/pyspark.zip/pyspark/sql/pandas/conversion.py in createDataFrame(self, data, schema, samplingRatio, verifySchema) 298 raise 299 data = self._convert_from_pandas(data, schema, timezone) --&gt; 300 return self._create_dataframe(data, schema, samplingRatio, verifySchema) 301 302 def _convert_from_pandas(self, pdf, schema, timezone): /opt/spark/python/lib/pyspark.zip/pyspark/sql/session.py in _create_dataframe(self, data, schema, samplingRatio, verifySchema) 698 rdd, schema = self._createFromRDD(data.map(prepare), schema, samplingRatio) 699 else: --&gt; 700 rdd, schema = self._createFromLocal(map(prepare, data), schema) 701 jrdd = self._jvm.SerDeUtil.toJavaArray(rdd._to_java_object_rdd()) 702 jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json()) 
/opt/spark/python/lib/pyspark.zip/pyspark/sql/session.py in _createFromLocal(self, data, schema) 510 511 if schema is None or isinstance(schema, (list, tuple)): --&gt; 512 struct = self._inferSchemaFromList(data, names=schema) 513 converter = _create_converter(struct) 514 data = map(converter, data) /opt/spark/python/lib/pyspark.zip/pyspark/sql/session.py in _inferSchemaFromList(self, data, names) 437 if not data: 438 raise ValueError(&quot;can not infer schema from empty dataset&quot;) --&gt; 439 schema = reduce(_merge_type, (_infer_schema(row, names) for row in data)) 440 if _has_nulltype(schema): 441 raise ValueError(&quot;Some of types cannot be determined after inferring&quot;) /opt/spark/python/lib/pyspark.zip/pyspark/sql/types.py in _merge_type(a, b, name) 1105 if isinstance(a, StructType): 1106 nfs = dict((f.name, f.dataType) for f in b.fields) -&gt; 1107 fields = [StructField(f.name, _merge_type(f.dataType, nfs.get(f.name, NullType()), 1108 name=new_name(f.name))) 1109 for f in a.fields] /opt/spark/python/lib/pyspark.zip/pyspark/sql/types.py in &lt;listcomp&gt;(.0) 1105 if isinstance(a, StructType): 1106 nfs = dict((f.name, f.dataType) for f in b.fields) -&gt; 1107 fields = [StructField(f.name, _merge_type(f.dataType, nfs.get(f.name, NullType()), 1108 name=new_name(f.name))) 1109 for f in a.fields] /opt/spark/python/lib/pyspark.zip/pyspark/sql/types.py in _merge_type(a, b, name) 1100 elif type(a) is not type(b): 1101 # TODO: type cast (such as int -&gt; long) -&gt; 1102 raise TypeError(new_msg(&quot;Can not merge type %s and %s&quot; % (type(a), type(b)))) 1103 1104 # same type TypeError: field name_y: Can not merge type &lt;class 'pyspark.sql.types.StringType'&gt; and &lt;class 'pyspark.sql.types.DoubleType'&gt; </code></pre> <p>I already specified the type explicitly, but Pandas seems to ignore it (and why it thinks that it's DoubleType?).<br /> Any ideas what's the reason and how to fix this?</p>
<p>When you pre-define the datatype like you have, then <code>None</code> gets interpreted as <code>'None'</code>. So it's not a proper null value.</p> <p>We can fix that by doing:</p> <pre><code>pdf1.replace('None', np.nan, inplace=True) pdf2.replace('None', np.nan, inplace=True) </code></pre> <p>Then we can merge them:</p> <pre><code>df = pdf1.merge(pdf2, on='id', how='outer') </code></pre> <p>It appears that pyspark doesn't like <code>np.nan</code>, since it identifies it as a <code>DoubleType</code>.</p> <p>You can get around this by forcing <code>np.nan</code> to <code>None</code></p> <pre><code>df.replace(np.nan, None, inplace=True) </code></pre> <hr /> <p>This is the exact code and output I ran for testing:</p> <pre><code>import pandas as pd import numpy as np pdf1 = pd.DataFrame({ 'id': np.array([1, 2, 3, 4, 6, 7], dtype=int), 'name': np.array(['a', 'b', 'c', 'd', None, 'e'], dtype=str) }) pdf2 = pd.DataFrame({ 'id': np.array([1, 2, 3, 5, 6, 7], dtype=int), 'name': np.array(['k', 'l', 'm', 'm', 'o', None], dtype=str) }) pdf1.replace('None', np.nan, inplace=True) pdf2.replace('None', np.nan, inplace=True) df = pdf1.merge(pdf2, on='id', how='outer') df.replace(np.nan, None, inplace=True) spark.createDataFrame(df).show() </code></pre> <p>Output:</p> <pre><code>+---+------+------+ | id|name_x|name_y| +---+------+------+ | 1| a| k| | 2| b| l| | 3| c| m| | 4| d| null| | 6| null| o| | 7| e| null| | 5| null| m| +---+------+------+ </code></pre> <p>You're doing something else wrong if this still fails.</p>
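The root cause in one isolated snippet: `dtype=str` pushes every element through `str()`, so `None` becomes the four-character text `'None'` instead of a real null (which is why pyspark later sees a mix of strings and nulls across columns):

```python
import numpy as np

a = np.array(['a', 'b', None], dtype=str)
# dtype=str stringifies every element, so None becomes the text 'None'
last = a[-1]            # the string 'None', not a missing value

b = np.array(['a', 'b', None], dtype=object)
# dtype=object keeps None as an actual null, which pandas treats as missing
last_obj = b[-1]
```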
python|pandas|apache-spark
0
1,907,888
32,060,281
pandas data frame - reduce with initial value
<p>I'm moving some of my <code>R</code> stuff to <code>Python</code>, hence I have to use <code>pandas.DataFrame</code>s. There are several things I'd like to optimise.</p> <p>Suppose we've got a table</p> <pre><code>key value abc 1 abc 2 abd 1 </code></pre> <p>and we want to get a dictionary of form <code>{key -&gt; list[values]}</code>. Here is how I get this done right now. </p> <pre><code>from pandas import DataFrame from StringIO import StringIO def get_dict(df): """ :param df: :type df: DataFrame """ def f(accum, row): """ :param accum: :type accum: dict """ key, value = row[1] return accum.setdefault(key, []).append(value) or accum return reduce(f, df.iterrows(), {}) table = StringIO("key\tvalue\nabc\t1\nabc\t2\nabd\t1") parsed_table = [row.rstrip().split("\t") for row in table] df = DataFrame(parsed_table[1:], columns=parsed_table[0]) result = get_dict(df) # -&gt; {'abc': ['1', '2'], 'abd': ['1']} </code></pre> <p>Two things I don't like about it:</p> <ol> <li>The fact that built-in <code>reduce</code> uses standard Python iteration protocol that kills the speed of NumPy-based data structures like <code>DataFrame</code>. I know that <code>DataFrame.apply</code> has a <code>reduce</code> mode, but it doesn't take a starting value like <code>dict</code>. </li> <li><em>(a minor drawback)</em> The fact that I have to use indexing to get specific values from rows. I wish I could access specific fields in a row by name like in <code>R</code>, i.e. <code>row$key</code> instead of <code>row[1][0]</code></li> </ol> <p>Thank you in advance</p>
<p>Instead of <code>get_dict</code> you could use a dict comprehension:</p> <pre><code>In [100]: {key:grp['value'].tolist() for key, grp in df.groupby('key')} Out[100]: {'abc': ['1', '2'], 'abd': ['1']} </code></pre> <hr> <p>Producing a dict with lists as values automatically means you are leaving the realm of fast NumPy arrays and forcing Python to generate objects which would require Python loops to iterated over the data. When the data set is large, those Python loops can be much slower than equivalent NumPy/Pandas function calls. So your end goal may not be ideal if you are concerned about speed. </p> <p>If you want to take advantage of NumPy/Pandas to perform fast(er) calculations you must keep the data in a NumPy array or Pandas NDFrame.</p>
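If you do end up needing the dict-of-lists as the final product, plain Python's `collections.defaultdict` does the same grouping without any pandas machinery (a sketch over the question's sample rows):

```python
from collections import defaultdict

rows = [('abc', '1'), ('abc', '2'), ('abd', '1')]

grouped = defaultdict(list)
for key, value in rows:
    # missing keys are created as empty lists automatically
    grouped[key].append(value)

result = dict(grouped)   # {'abc': ['1', '2'], 'abd': ['1']}
```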
python|r|pandas|dataframe|reduce
1
1,907,889
38,549,807
Using seek() to create a mean pixel value comparing software in .tiff format?
<p>How does PIL's seek() function operate within multiframe .tiff files? I'm trying to extract a piece of information (greyscale pixel values) from various frames in the file, but no matter what I set the seek to, an EOFError is raised. Example code:</p> <pre><code>from PIL import Image im = Image.open('example_recording.tif').convert('LA') width,height = im.size image_lookup = 0 total=0 for i in range(0,width): for j in range(0,height): total += im.getpixel((i,j))[0] total2=0 im.seek(1) for i in range(0,width): for j in range(0,height): total += im.getpixel((i,j))[0] print total print total2 </code></pre> <p>The error log looks like this:</p> <p>File "C:\Users\ltopuser\Anaconda2\lib\site-packages\PIL\Image.py", line 1712, in seek raise EOFError</p> <p>EOFError</p> <p>Cheers, JJ</p>
<p>This was caused by PIL getting to the end of the file; it can be fixed by wrapping the seek in a sequence class that raises IndexError at the end:</p> <pre><code>class ImageSequence: def __init__(self, im): self.im = im def __getitem__(self, ix): try: if ix: self.im.seek(ix) return self.im except EOFError: raise IndexError # this is the end of the sequence total = 0 for frame in ImageSequence(im): for i in range(0,width): for j in range(0,height): total += frame.getpixel((i,j))[0] </code></pre>
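Newer Pillow versions also ship an official `ImageSequence.Iterator` that does the same loop for you, stopping cleanly at the `EOFError`. A self-contained sketch using a 3-frame greyscale TIFF built in memory (assumes a Pillow build with TIFF support):

```python
import io
from PIL import Image, ImageSequence

# build a 3-frame greyscale TIFF in memory, one flat value per frame
frames = [Image.new('L', (4, 4), color=v) for v in (10, 20, 30)]
buf = io.BytesIO()
frames[0].save(buf, format='TIFF', save_all=True, append_images=frames[1:])
buf.seek(0)

im = Image.open(buf)
means = []
for frame in ImageSequence.Iterator(im):   # seeks through frames for you
    pixels = list(frame.getdata())
    means.append(sum(pixels) / len(pixels))
```

`means` ends up as one mean pixel value per frame, which is exactly the per-frame comparison the question is after.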
python|image|tiff|seek
1
1,907,890
59,912,130
Branchless method to convert false/true to -1/+1?
<p>What is a branchless way to do the following mapping?</p> <pre><code>true -&gt; +1 false -&gt; -1 </code></pre> <p>An easy way would be <code>if b then 1 else -1</code> but I'm looking for a method to avoid the branch, i.e. if.</p> <p>If it is relevant, I'm using Python.</p>
<p>Here's a comparison of the solutions posted in comments and answers so far.</p> <p>We can use the <a href="https://docs.python.org/3/library/dis.html" rel="nofollow noreferrer"><code>dis</code></a> module to see the generated bytecode in each case; this confirms that there are no conditional jump instructions (in the Python code itself, at least), and also tells us something about the expected performance, since the number of opcodes executed has a direct impact on that (though they are not perfectly correlated). The number of function calls is also relevant for performance, since these have a particularly high overhead.</p> <h3>@Glannis Clipper and @kaya3: <code>(-1, 1)[b]</code> (3 opcodes)</h3> <pre class="lang-none prettyprint-override"><code> 1 0 LOAD_CONST 2 ((-1, 1)) 3 LOAD_NAME 0 (b) 6 BINARY_SUBSCR </code></pre> <h3>@HeapOverflow: <code>-(-1)**b</code> (4 opcodes)</h3> <pre class="lang-none prettyprint-override"><code> 1 0 LOAD_CONST 0 (-1) 2 LOAD_NAME 0 (b) 4 BINARY_POWER 6 UNARY_NEGATIVE </code></pre> <h3>@HeapOverflow: <code>b - (not b)</code> (4 opcodes)</h3> <pre class="lang-none prettyprint-override"><code> 1 0 LOAD_NAME 0 (b) 2 LOAD_NAME 0 (b) 4 UNARY_NOT 6 BINARY_SUBTRACT </code></pre> <h3>@kaya3: <code>2 * b - 1</code> (5 opcodes)</h3> <pre class="lang-none prettyprint-override"><code> 1 0 LOAD_CONST 0 (2) 3 LOAD_NAME 0 (b) 6 BINARY_MULTIPLY 7 LOAD_CONST 1 (1) 10 BINARY_SUBTRACT </code></pre> <h3>@HeapOverflow: <code>~b ^ -b</code> (5 opcodes)</h3> <pre class="lang-none prettyprint-override"><code> 1 0 LOAD_NAME 0 (b) 2 UNARY_INVERT 4 LOAD_NAME 0 (b) 6 UNARY_NEGATIVE 8 BINARY_XOR </code></pre> <h3>@Mark Meyer: <code>b - (b - 1) * -1</code> (7 opcodes)</h3> <pre class="lang-none prettyprint-override"><code> 1 0 LOAD_NAME 0 (b) 3 LOAD_NAME 0 (b) 6 LOAD_CONST 0 (1) 9 BINARY_SUBTRACT 10 LOAD_CONST 1 (-1) 13 BINARY_MULTIPLY 14 BINARY_SUBTRACT </code></pre> <h3>@Sayse: <code>{True: 1, False: -1}[b]</code> (7 opcodes)</h3> <pre class="lang-none 
prettyprint-override"><code> 1 0 LOAD_CONST 0 (True) 3 LOAD_CONST 1 (1) 6 LOAD_CONST 2 (False) 9 LOAD_CONST 3 (-1) 12 BUILD_MAP 2 15 LOAD_NAME 0 (b) 18 BINARY_SUBSCR </code></pre> <h3>@deceze: <code>{True: 1}.get(b, -1)</code> (7 opcodes, 1 function call)</h3> <pre class="lang-none prettyprint-override"><code> 1 0 LOAD_CONST 0 (True) 3 LOAD_CONST 1 (1) 6 BUILD_MAP 1 9 LOAD_ATTR 0 (get) 12 LOAD_NAME 1 (b) 15 LOAD_CONST 2 (-1) 18 CALL_FUNCTION 2 (2 positional, 0 keyword pair) </code></pre> <h3>@Glannis Clipper: <code>[-1, 1][int(b)]</code> (7 opcodes, 1 function call)</h3> <pre class="lang-none prettyprint-override"><code> 1 0 LOAD_CONST 1 (-1) 3 LOAD_CONST 0 (1) 6 BUILD_LIST 2 9 LOAD_NAME 0 (int) 12 LOAD_NAME 1 (b) 15 CALL_FUNCTION 1 (1 positional, 0 keyword pair) 18 BINARY_SUBSCR </code></pre> <h3>@divyang4481: <code>2 * int(b) - 1</code> (7 opcodes, 1 function call)</h3> <pre class="lang-none prettyprint-override"><code> 1 0 LOAD_CONST 0 (2) 3 LOAD_NAME 0 (int) 6 LOAD_NAME 1 (b) 9 CALL_FUNCTION 1 (1 positional, 0 keyword pair) 12 BINARY_MULTIPLY 13 LOAD_CONST 1 (1) 16 BINARY_SUBTRACT </code></pre>
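A quick sanity check that the shortest candidates above really agree on both inputs, mapping `False -> -1` and `True -> +1`:

```python
exprs = {
    '(-1, 1)[b]':  lambda b: (-1, 1)[b],
    '-(-1)**b':    lambda b: -(-1) ** b,
    'b - (not b)': lambda b: b - (not b),
    '2*b - 1':     lambda b: 2 * b - 1,
    '~b ^ -b':     lambda b: ~b ^ -b,
}

# evaluate each expression on both booleans
results = {name: (f(False), f(True)) for name, f in exprs.items()}
```

Every entry in `results` should equal `(-1, 1)`.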
python|data-conversion
6
1,907,891
2,036,378
Using savepoints in python sqlite3
<p>I'm attempting to use savepoints with the sqlite3 module built into python 2.6. Every time I try to release or rollback a savepoint, I always recieve an <code>OperationalError: no such savepoint</code>. What am I missing?</p> <pre><code>python version: 2.6.4 (r264:75821M, Oct 27 2009, 19:48:32) [GCC 4.0.1 (Apple Inc. build 5493)] PySQLite version: 2.4.1 sqlite3 version: 3.6.11 Traceback (most recent call last): File "spDemo.py", line 21, in &lt;module&gt; conn.execute("release savepoint spTest;") sqlite3.OperationalError: no such savepoint: spTest </code></pre> <p>from this code:</p> <pre><code>import sys import sqlite3 print 'python version:', sys.version print 'PySQLite version:', sqlite3.version print 'sqlite3 version:', sqlite3.sqlite_version print conn = sqlite3.connect('db_spDemo.db') conn.isolation_level = "DEFERRED" with conn: conn.execute("create table example (A, B);") with conn: conn.execute("insert into example values (?, ?);", (0,200)) conn.execute("savepoint spTest;") conn.execute("insert into example values (?, ?);", (1,201)) conn.execute("insert into example values (?, ?);", (2,202)) conn.execute("release savepoint spTest;") conn.execute("insert into example values (?, ?);", (5,205)) </code></pre>
<p>This appears to be a result of how the sqlite3 module behaves with that isolation level.</p> <p>This works, notice the two changes:</p> <pre><code>import sys import sqlite3 print 'python version:', sys.version print 'PySQLite version:', sqlite3.version print 'sqlite3 version:', sqlite3.sqlite_version print conn = sqlite3.connect('shane.sqlite') conn.isolation_level = None # CHANGED with conn: conn.execute("create table example (A, B);") with conn: conn.execute("insert into example values (?, ?);", (0,200)) conn.execute("savepoint spTest;") conn.execute("insert into example values (?, ?);", (1,201)) conn.execute("insert into example values (?, ?);", (2,202)) conn.execute("rollback to savepoint spTest;") # CHANGED conn.execute("insert into example values (?, ?);", (5,205)) </code></pre> <p>Output:</p> <pre> $ python shane-sqlite3.py && sqlite3 shane.sqlite 'select * from example;' python version: 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) [GCC 4.3.3] PySQLite version: 2.4.1 sqlite3 version: 3.6.10 0|200 5|205 </pre> <p>This is an unsatisfactory answer, and I didn't see anything relevant in the sqlite3 module docs (nor did I try to take a look at the source). But I hope it helps you find the right direction.</p>
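The two changes can be verified end to end with an in-memory database: with `isolation_level = None` (autocommit, so the sqlite3 module stops issuing its own implicit transaction statements around the savepoint), the rollback works and only the surviving rows are committed.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.isolation_level = None          # autocommit: no implicit BEGINs from the module

conn.execute("create table example (A, B);")
conn.execute("insert into example values (0, 200);")

conn.execute("savepoint spTest;")
conn.execute("insert into example values (1, 201);")
conn.execute("insert into example values (2, 202);")
conn.execute("rollback to savepoint spTest;")   # undoes the two inserts above
conn.execute("release savepoint spTest;")       # savepoint is still live, so this works

conn.execute("insert into example values (5, 205);")
rows = conn.execute("select * from example order by A;").fetchall()
```

`rows` comes back as `[(0, 200), (5, 205)]`, matching the output shown above.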
python|sqlite|savepoints
3
1,907,892
28,247,249
IndexError: list index out of range error python
<p>Here is my simple Python code. I could not figure out why it shows an index-out-of-range error. Any help?</p> <pre><code>distances = [] i=0 for i in range(len(points)): point = points[i] next_point = points[i+1] x0 = point[0] y0 = point[1] x1 = next_point[0] y1 = next_point[1] point_distance = get_distance(x0, y0, x1, y1) distances.append(point_distance) </code></pre>
<p>It's the fact that you're using <code>points[i+1]</code>. When you reach the last position of <code>points</code> (<code>points[i]</code>), <code>i+1</code> will try to access a position that does not exist. You need to check whether you're at the last position before getting the next point, or limit your <code>for</code> loop to <code>range(len(points) - 1)</code>.</p>
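One idiomatic fix is to pair each point with its successor via `zip`, which naturally stops before running off the end. A sketch assuming the points are `(x, y)` tuples and a plain Euclidean `get_distance` (a stand-in for the question's helper):

```python
import math

def get_distance(x0, y0, x1, y1):
    # plain Euclidean distance; stand-in for the OP's helper
    return math.hypot(x1 - x0, y1 - y0)

points = [(0, 0), (3, 4), (3, 0)]

distances = []
for (x0, y0), (x1, y1) in zip(points, points[1:]):
    # zip truncates at the shorter sequence, so no index ever goes out of range
    distances.append(get_distance(x0, y0, x1, y1))
```

For the sample points this yields `[5.0, 4.0]`: one distance per consecutive pair, with no `i+1` to overflow.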
python
3
1,907,893
28,301,892
NameError: name '' is not defined?
<p>I've searched and this has been answered a lot already, but it mostly had to do with version errors. I checked my Python version (it's 3.4.2) but it still returns the error.</p> <p>Code w/ error:</p> <pre><code>import random def start(): rannum = random.randint(0, 20) inpnum = int(input('Please enter a number between 0 and 20: ')) calc() def loop(): inpnum = int(input('Please enter a number between 0 and 20: ')) calc() def calc(): if inpnum &gt; rannum: print('The number you entered is too high.') loop() if inpnum &lt; rannum: print('The number you entered is too low.') loop() if inpnum == rannum: print('Congratulations! You guessed the number!') input('') start() </code></pre> <p>This returns the error code:</p> <pre><code>Traceback (most recent call last): File "guessthenumber.py", line 26, in &lt;module&gt; start() File "guessthenumber.py", line 6, in start calc() File "guessthenumber.py", line 14, in calc if inpnum &gt; rannum: NameError: name 'inpnum' is not defined </code></pre> <p>I tried to fix it myself and changed it to look like this:</p> <pre><code>import random def loop(): inpnum = int(input('Please enter a number between 0 and 20: ')) calc() def calc(): rannum = random.randint(0, 20) inpnum = int(input('Please enter a number between 0 and 20: ')) if inpnum &gt; rannum: print('The number you entered is too high.') loop() if inpnum &lt; rannum: print('The number you entered is too low.') loop() if inpnum == rannum: print('Congratulations! You guessed the number!') input('') calc() </code></pre> <p>It does work but it keeps making a new number (I think I know why, the rannum = random.randint(0, 20) keeps making up a new number every time) and it asks you for an input number twice (because it already asks for one at loop() but does so again in calc(), I think) so you have to randomly guess until you have it right because it keeps changing every guess.</p> <p>To me it looks like complete Python code. Does anyone else know what to do here?</p>
<p>One solution would be to change your calc function to accept an argument:</p> <pre><code>def calc(num): # do stuff with num </code></pre> <p>and pass <code>inpnum</code> into it:</p> <pre><code>def loop(): inpnum = int(input('Please enter a number between 0 and 20: ')) calc(inpnum) </code></pre> <p>This is how you make a local variable from one function available in another function. Note that <code>rannum</code> has the same problem: generate it once and pass it to <code>calc</code> as a second argument, so the target number doesn't change between guesses.</p>
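A minimal sketch of the comparison logic restructured so both values travel through parameters and a return value instead of leaking between functions (a hypothetical restructuring, not the asker's exact code; input/printing left out so the logic is testable):

```python
def calc(inpnum, rannum):
    # both values arrive as parameters, so no NameError is possible
    if inpnum > rannum:
        return 'too high'
    if inpnum < rannum:
        return 'too low'
    return 'correct'
```

The caller generates `rannum` once, reads `inpnum` in its loop, and calls `calc(inpnum, rannum)` until it returns `'correct'`.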
python
2
1,907,894
43,964,157
ValueError: invalid literal for float(): when adding annotation in pandas
<p>I get this error when I try to add an annotation to my plot - <code>ValueError: invalid literal for float(): 10_May</code>. </p> <p>my dataframe:</p> <p><a href="https://i.stack.imgur.com/OtNFh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OtNFh.png" alt="enter image description here"></a></p> <p>my code (I use <code>to_datetime</code> and <code>strftime</code> before plotting as I needed to sort dates which were stored as strings):</p> <pre><code># dealing with dates as strings grouped.index = pd.to_datetime(grouped.index, format='%d_%b') grouped = grouped.sort_index() grouped.index = grouped.index.strftime('%d_%b') plt.annotate('Peak', (grouped.index[9], grouped['L'][9]), xytext=(15, 15), textcoords='offset points', arrowprops=dict(arrowstyle='-|&gt;')) grouped.plot() </code></pre> <p><code>grouped.index[9]</code> returns <code>u'10_May'</code> while <code>grouped['L'][9]</code> returns <code>10.0</code>. I know that pandas expects the index to be a float, but I thought I could access it via df.index[]. I will appreciate your suggestions.</p>
<p>For me works first plot and then get index position by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_loc.html" rel="nofollow noreferrer"><code>Index.get_loc</code></a>:</p> <pre><code>ax = df.plot() ax.annotate('Peak', (df.index.get_loc(df.index[9]), df['L'][9]), xytext=(15, 15), textcoords='offset points', arrowprops=dict(arrowstyle='-|&gt;')) </code></pre> <p>Sample:</p> <pre><code>np.random.seed(10) df = pd.DataFrame({'L':[3,5,0,1]}, index=['4_May','3_May','1_May', '2_May']) #print (df) df.index = pd.to_datetime(df.index, format='%d_%b') df = df.sort_index() df.index = df.index.strftime('%d_%b') df.plot() plt.annotate('Peak', (df.index.get_loc(df.index[2]), df['L'][2]), xytext=(15, 15), textcoords='offset points', arrowprops=dict(arrowstyle='-|&gt;')) </code></pre> <p><a href="https://i.stack.imgur.com/XH8Z9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XH8Z9.png" alt="graph"></a></p> <p>EDIT:</p> <p>More general solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_loc.html" rel="nofollow noreferrer"><code>get_loc</code></a> + <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.idxmax.html" rel="nofollow noreferrer"><code>idxmax</code></a> + <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.max.html" rel="nofollow noreferrer"><code>max</code></a>:</p> <pre><code>ax = df.plot() ax.annotate('Peak', (df.index.get_loc(df['L'].idxmax()), df['L'].max()), xytext=(15, 15), textcoords='offset points', arrowprops=dict(arrowstyle='-|&gt;')) </code></pre>
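The key piece here is `Index.get_loc`, which turns a string label back into the positional integer that `annotate` needs as an x coordinate - a minimal check without any plotting:

```python
import pandas as pd

idx = pd.Index(['01_May', '02_May', '03_May'])
pos = idx.get_loc('03_May')   # positional integer usable as a plot coordinate
```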
python-2.7|pandas|matplotlib|dataframe|jupyter
2
1,907,895
43,952,300
Repeated ERROR: NoneType in Sanic App
<p>I keep getting <code>Error: Nonetype</code> from a Sanic app and I cannot identify the reason.</p> <p>My code:</p> <pre><code>from sanic import Sanic from sanic.response import json, text from sanic.request import RequestParameters from parse_data import ParseData pdata = ParseData() app = Sanic('sanic_sms') app = Sanic(__name__) @app.route("/", methods=["POST",]) async def test(request): data = {} if request.form: data = request.form elif request.json: data = request.json result = pdata.prepare_output(data.get('text')) resp = json(result) resp.headers["Access-Control-Allow-Origin"] = "*" resp.headers["Access-Control-Allow-Headers"] = "Content-Type,Authorization" return resp app.run(host="0.0.0.0", port=8002) </code></pre> <blockquote> <p>2017-05-13 15:10:03,454: INFO: Goin' Fast @ <a href="http://0.0.0.0:8002" rel="nofollow noreferrer">http://0.0.0.0:8002</a><br> 2017-05-13 15:10:03,456: INFO: Starting worker [30662] 2017-05-13<br> 15:11:19,579: ERROR: NoneType</p> </blockquote> <p>How can I solve this?</p>
<p>It was probably a bug in a previous version of Sanic (I got that behavior on version 0.4.0). </p> <p>The app threw <code>NoneType 408</code> at a constant rate.<br> I solved it by disabling the <code>KEEP_ALIVE</code> setting on the <code>Config</code> object:</p> <pre><code>from sanic.config import Config Config.KEEP_ALIVE = False app = Sanic(__name__) </code></pre> <p>In the current version (0.7.0) this seems to have been fixed.</p>
python|sanic
0
1,907,896
43,969,717
Python getopt behavior
<pre><code>#!/usr/bin/python import sys import getopt o, a = getopt.getopt(sys.argv[1:], 'ab:c:') print "options: %s" % (o) print "arguments: %s" % (a) </code></pre> <p>running python:</p> <pre><code>python TestOpt.py a b 10 c 20 </code></pre> <p>it prints out like this:</p> <pre><code>options: [] arguments: ['a', 'b', '10', 'c', '20'] </code></pre> <p>I don't understand why options is an empty list while arguments looks like options?</p>
<p><code>getopt</code> only treats tokens that start with a dash as options; everything else is left in the positional-argument list, which is why your options list came back empty. Try it with:</p> <pre><code>python TestOpt.py -a -b 10 -c 20 foobar </code></pre>
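With the dashes in place, the split between options and positional arguments looks like this (argv is simulated here rather than taken from <code>sys.argv</code>):

```python
import getopt

# Simulated command line; in 'ab:c:' the trailing ":" means
# -b and -c each consume a value, while -a is a bare flag.
argv = ['-a', '-b', '10', '-c', '20', 'foobar']
opts, args = getopt.getopt(argv, 'ab:c:')
print(opts)  # [('-a', ''), ('-b', '10'), ('-c', '20')]
print(args)  # ['foobar']
```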
python|getopt
0
1,907,897
32,887,127
How can I get the unicode values of Bullets in a word document
<p>I am working on a web application and don't want users to input invalid characters which create problems.</p> <p>One such character causing problems is the <strong>diamond bullet</strong> from MS Word, but to remove that character I need to know its Unicode code point so that I can include it in the Python regular expression of invalid characters <a href="https://stackoverflow.com/questions/5698267/efficient-way-to-search-for-invalid-characters-in-python">as suggested here</a>.</p> <pre><code>textString = some value which needs to be checked for invalid characters pattern = some regular expression for invalid characters if pattern.search(textString): print 'Invalid characters found' else: print 'Valid string' </code></pre> <p>I found a similar solution <a href="https://stackoverflow.com/questions/19399244/view-the-character-unicode-values-of-a-word-document#">here</a> but it is not working for bullets.</p> <p>Please help me resolve this issue.</p>
<p>Create a Word document with your invalid characters. (Don't use the bullet maker icon, use the Insert-&gt;symbol-&gt;symbol browser and pick it from the map).</p> <p>Unzip it.</p> <pre><code>unzip myDoc.docx </code></pre> <p>and open the word/document.xml file in an editor capable of reading the Unicode characters. Here I am using <strong>xmllint</strong> and <strong>more</strong> as a quick and dirty example. I don't know which bullet you are talking about, but the one I tried here shows U+F075:</p> <pre><code>xmllint --format word/document.xml | more &lt;w:r w:rsidR="00A50B17" w:rsidRPr="00E62AD7"&gt; &lt;w:rPr&gt; &lt;w:rFonts w:ascii="Wingdings" w:hAnsi="Wingdings"/&gt; &lt;w:color w:val="000000"/&gt; &lt;/w:rPr&gt; &lt;w:t&gt;&lt;U+F075&gt;&lt;/w:t&gt; &lt;/w:r&gt; </code></pre> <p>Repeat this for each problem character, then add the code points you find to the invalid-character pattern in your script.</p>
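Once a code point is known it can go straight into the rejection pattern. A sketch, assuming the bullet really is U+F075 as in the document above (substitute whatever code points you find in your own document.xml; the extra bullets in the class are illustrative):

```python
import re

# U+F075 is the Wingdings bullet found above; U+2022 and U+25C6 are
# ordinary bullet/diamond characters added here as examples.
invalid = re.compile(u'[\uf075\u2022\u25c6]')

def is_valid(text):
    return invalid.search(text) is None

print(is_valid(u'normal text'))        # True
print(is_valid(u'bad \uf075 bullet'))  # False
```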
python|regex|ms-word
1
1,907,898
14,029,672
Faster way of getting files with certain text in them in Python?
<p>I have lots and lots of files in a list called <code>files</code>, which I am looping through and saving all the files which have <code>//StackOverflow</code> on the very first line. It might have some additional text after it, but the line should begin with such text.</p> <p>Currently I am doing it simply like so:</p> <pre><code>matches = [] for file in files: with open(file, "r") as inf: line = inf.readline() if line.strip().startswith("//StackOverflow"): matches.append([line] + inf.readlines()) </code></pre> <p>However, I was wondering if there's a better (faster?) way of doing this, since now I have to open every single file one by one and always read the first line.</p>
<p>You will have to open all the files if you need to look at their contents. What you have is already pretty much the best you can do in Python.</p> <p>In theory, you could read only the first 15 bytes of the file and check if they are equal to <code>//StackOverflow</code>, but I doubt that that will change much.</p> <pre><code>with open(file) as inf: if inf.read(15) == "//StackOverflow": inf.seek(0) matches.append(inf.readlines()) </code></pre>
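A self-contained sketch of the question's approach against a throwaway directory (file names and contents are made up for the demo), showing that extra text after the marker is still matched:

```python
import os
import tempfile

# Build two demo files: only a.txt starts with the marker.
tmp = tempfile.mkdtemp()
contents = {
    'a.txt': '//StackOverflow some extra text\nbody line\n',
    'b.txt': 'no marker here\nbody line\n',
}
for name, text in contents.items():
    with open(os.path.join(tmp, name), 'w') as f:
        f.write(text)

matches = []
for name in sorted(contents):
    with open(os.path.join(tmp, name)) as inf:
        line = inf.readline()  # only the first line decides
        if line.strip().startswith('//StackOverflow'):
            matches.append([line] + inf.readlines())

print(len(matches))  # 1
```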
python|file|file-io|python-3.x
2
1,907,899
54,483,461
DRF Read only field is still validated
<p>I have a field that I want always to be the user. My serializer is like this:</p> <pre><code>class MySerializer(serializers.ModelSerializer): class Meta: model = MyModel fields = '__all__' read_only_fields = ('user',) def perform_save(self, serializer): serializer.save(user=self.request.user) class MyModel(models.Model): user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) ... </code></pre> <p>But it gives me the error <code>NOT NULL constraint failed: app_my_model.user_id</code> but the field is read_only... I don't get this.</p>
<p>First of all, a serializer has no <strong><code>perform_save()</code></strong> method; the similarly named hooks (<code>perform_create()</code>, <code>perform_update()</code>) belong to the <strong><code>viewset</code></strong> and generic-view classes, so your override is never called and <code>user</code> is never set. That is likely the problem.<br> Override the serializer's <strong><code>save()</code></strong> method instead:<br></p> <pre><code>class MySerializer(serializers.ModelSerializer): class Meta: model = MyModel fields = '__all__' read_only_fields = ('user',) <b>def save(self, **kwargs): kwargs['user'] = self.context['request'].user return super().save(**kwargs)</b></code></pre>
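An alternative worth knowing, not from the answer above and only a sketch (it needs a real Django/DRF project, so it is untested here): DRF's documented idiom for "always the current user" is a <code>HiddenField</code> with <code>CurrentUserDefault</code>, which avoids the read-only/NOT NULL clash entirely.

```python
from rest_framework import serializers

class MySerializer(serializers.ModelSerializer):
    # HiddenField never accepts client input; CurrentUserDefault pulls
    # request.user from the serializer context supplied by the view.
    user = serializers.HiddenField(default=serializers.CurrentUserDefault())

    class Meta:
        model = MyModel  # assumes MyModel is importable in this module
        fields = '__all__'
```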
python|django|django-rest-framework
0