1,901,500
36,841,334
Quit LLDB session after a defined amount of time
<p>I have a program written in Python for automated testing on mobile devices (iOS &amp; Android). The proper workflow of this program is as follows (for smoke tests):</p> <ol> <li><p>Deploy the executable to a USB-connected device (.ipa or .app) using ios-deploy</p></li> <li><p>Start the application (debugging process) --> writes to stdout.</p></li> <li><p>Write the output into a pipe --> this way it is possible to read the output of the debugging process in parallel with it.</p></li> <li><p>If the searched needle is detected in the output, the device is restarted (this is quite a dirty workaround; I am going to insert a force-stop method or something similar)</p></li> </ol> <p>My problem is: when the needle is detected in the output of the debug process, the lldb session is interrupted, but not exited. To exit the lldb session, I have to reconnect the device or quit the terminal and open it again. </p> <p>Is there a possibility to append something like a "time-to-live" flag to the lldb call to determine how long the lldb session should run until it exits automatically? Another way I can imagine exiting the lldb session is to join the session again after the device is restarted and then exit it, but it seems that lldb is just a subprocess of ios-deploy. Therefore I have not found any way to get access to the lldb process.</p>
<p>There isn't such a thing built into lldb, but presumably you could set a timer in Python and have it kill the debug session if that's appropriate.</p> <p>Note, when you restart the device, the connection from lldb to the remote debug server should close, and lldb should detect that it closed and quit the process. It won't exit when that happens by default, but presumably whatever you have waiting on debugger events can detect the debuggee's exit and exit or whatever you need it to do.</p> <p>Note, if lldb is waiting on input from debugserver (if the program is running) then it should notice this automatically, since the select call will return with EOF. But if the process is stopped when you close the connection, lldb probably won't notice that till it goes to read something. </p> <p>In the latter case, you should be able to have lldb react to the stop that indicates the "needle" is found, and kill the debug session by hand.</p>
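The timer idea from the answer can be sketched with the standard library alone. This is a minimal sketch, not the real setup: the `sleep 60` placeholder stands in for the ios-deploy invocation that spawns lldb as a child process.

```python
import subprocess
import threading

# Placeholder command; in the real workflow this would be the ios-deploy
# call whose lldb child you want to limit.
proc = subprocess.Popen(["sleep", "60"])

ttl_seconds = 1.0                         # the "time-to-live" for the session
timer = threading.Timer(ttl_seconds, proc.kill)
timer.start()

proc.wait()                               # returns once the process is killed
timer.cancel()                            # no-op here, but tidy if proc exited early
print(proc.returncode)                    # negative on POSIX => killed by a signal
```

If ios-deploy spawns lldb as its own child, killing the parent may not be enough; in that case `os.killpg` on the process group (started with `start_new_session=True`) is the usual follow-up.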
python|ios|lldb
1
1,901,501
37,114,714
Call different interpreter for Tcl in python
<p>I wish to run a Tcl script with a different interpreter (<a href="http://opensees.berkeley.edu/wiki/index.php/OpenSees_User" rel="nofollow noreferrer">OpenSees</a>) from Python itself, similar to <a href="https://stackoverflow.com/questions/23907698/how-to-run-a-tcl-script-in-a-folder-in-python?rq=1">this question</a>. What would be the most practical way? I've tried tkinter and subprocess routines, but as far as I understand I'm running the script in pure Tcl and nothing happens (the functions are defined in the OpenSees environment). I've tried calling my Tcl script via tkinter, but I can't for the life of me figure out how to run Tcl with another interpreter. What I've tried is:</p> <p>for-test.tcl</p> <pre><code>proc fractional_while {j float_increment upper_limit} { while {$j &lt; $upper_limit} { set j [expr $j + $float_increment] } } puts "time it took: [time {fractional_while 1 0.001 500} 1]" </code></pre> <p>python file </p> <pre><code>import tkinter r = tkinter.Tk() r.eval('source {for-test.tcl}') </code></pre> <p>What I want to do is call OpenSees inside Python and run the following routine:</p> <p>elastic-1dof-spectrum.tcl</p> <pre><code>model BasicBuilder -ndm 2 -ndf 3 set time_zero [clock clicks -millisec] set node_restraint 1 set node_spring 2 ... 
set load_dir $x_direction set x_mass 1 set elastic_modulus 2e6 set rotation_z 6 node $node_restraint 0 0 -mass 0 0 0 node $node_spring 0 0 -mass 0 0 0 fix $node_restraint 1 1 1 equalDOF $master_node $slave_node 1 2 geomTransf Linear $transf_tag uniaxialMaterial Elastic $linear_elastic_mat_tag $elastic_modulus element zeroLength 0 $node_restraint $node_spring -mat $linear_elastic_mat_tag -dir $rotation_z set accel_series "Path -filePath acelerograma_85_ii.191 -dt $ground_motion_time_step -factor 1" pattern UniformExcitation $load_tag $load_dir -accel $accel_series set oscillator_length 1 set node_mass 3 set counter 1 while {$oscillator_length &lt; 1000} { set oscillator_length [expr $counter*0.1] set node_mass [expr $counter + 2] node $node_mass 0 $oscillator_length mass $node_mass $x_mass 0 0 element elasticBeamColumn $node_mass $node_spring $node_mass $area_column $elastic_modulus $inertia_column $transf_tag set eigenvalue [eigen -fullGenLapack 1] set period [expr 2*$pi/sqrt($eigenvalue)] recorder EnvelopeNode -file results/acceleration-envelope-period-$period.out -time -node $node_mass -dof $x_direction accel ... rayleigh [expr 2*$damping_ratio*$x_mass*$eigenvalue] 0 0 0 constraints Transformation # determines how dof constraits are treated and assigned numberer Plain # numbering schemes are tied directly to the efficiency of numerical solvers system BandGeneral # defines the matricial numerical solving method based on the form of the stiffness matrix test NormDispIncr 1e-5 10 0 integrator Newmark $gamma $beta # defines the integrator for the differential movement equation algorithm Newton # solve nonlinear residual equation analysis Transient # for uniform timesteps in excitations analyze [expr int($duration_motion/$time_step)] $time_step incr counter } puts stderr "[expr {([clock clicks -millisec]-$time_zero)/1000.}] sec" ;# RS wipe </code></pre>
<p>It depends on how OpenSees is built and what options it offers.</p> <p>Typically, programs that embed Tcl have two main options for how to do it, quite similar to Python.</p> <p>Variant one is to have a normal C/C++ main program and link to the Tcl library, in effect the same thing <code>tclsh</code> does: a shell that can execute Tcl commands while providing extra commands statically.</p> <p>Variant two is using a normal <code>tclsh</code> and just loading some extension modules to add the functionality. If that is the case, you can often simply load the package in the <code>tkinter</code> shell, if they are similar enough, and you are done.</p> <p>OpenSees seems to be a program that implements variant one, a bigwish that includes some extra commands not available outside. So you cannot load the code directly in a <code>tkinter</code> shell.</p> <p>You have three options:</p> <ol> <li>Use something like the Tcllib <code>comm</code> package to communicate between Tkinter and the OpenSees shell (see <a href="https://stackoverflow.com/questions/32549839/running-tcl-code-on-an-existing-tcl-shell-from-python">Running TCL code (on an existing TCL shell) from Python</a> for an example)</li> <li>Run OpenSees via <code>subprocess</code> and implement some kind of communication protocol to send your commands.</li> <li>Hack at the OpenSees code to build it as a loadable package for Tcl and load it into your tkinter process (might be hard).</li> </ol>
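Option 2 from the answer can be sketched in a few lines. The `opensees` binary name is an assumption (point it at whatever the OpenSees shell is called on your PATH); the helper just hands a script to an arbitrary Tcl-capable interpreter and returns its output.

```python
import subprocess

def run_tcl(script_path, interpreter="opensees"):
    """Run a Tcl script under the given interpreter binary and return stdout.

    The default interpreter name is an assumption; replace it with the
    actual OpenSees executable on your system.
    """
    result = subprocess.run(
        [interpreter, script_path],
        capture_output=True, text=True, check=False,
    )
    return result.stdout
```

From there you can parse the captured output (e.g. the `puts` lines of the spectrum script), or switch to `subprocess.Popen` with pipes if you need an interactive back-and-forth with the interpreter.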
python|tcl|interpreter
1
1,901,502
37,091,953
array manipulation from MATLAB to Python
<p>from MATLAB code:</p> <pre><code>a = rand(1,120); d=zeros(1,124); state=[1:120]; fibre = [1 5 9 13 17 2 6 10 14 18 3 7 11 15 19 4 8 12 16 20 21 69 65 61 57 22 68 64 60 56 71 67 63 59 55 70 66 62 58 54 53 49 45 41 37 52 48 44 40 36 51 47 43 39 35 50 46 42 38 34 121 117 113 109 105 120 116 112 108 104 119 115 111 107 103 118 114 110 106 102 101 97 93 89 85 100 96 92 88 84 99 95 91 87 83 98 94 90 86 82 81 79 77 75 33 80 27 29 31 122 25 78 76 74 123 26 28 30 32 124]; d(fibre)=a(state); </code></pre> <p>to Python code:</p> <pre><code>import numpy as np a = np.arange(120,219,1) d=np.zeros([124]) state = np.arange(0,120,1) fibre = np.array([1,5,9,13,17,2,6,10,14,18,3,7,11,15,19,4,8,12,16,20,21,69,65,61,57,22,68,64,60,56,71,67,63,59,55,70,66,62,58,54,53,49,45,41,37,52,48,44,40,36,51,47,43,39,35,50,46,42,38,34,121,117,113,109,105,120,116,112,108,104,119,115,111,107,103,118,114,110,106,102,101,97,93,89,85,100,96,92,88,84,99,95,91,87,83,98,94,90,86,82,81,79,77,75,33,80,27,29,31,122,25,78,76,74,123,26,28,30,32,124,72,73,23,24]) d[fibre]=a[state] </code></pre> <p>The python code throw an exception in regards to array size, any recommendations on how to fix this?</p>
<p>Your Python script has two problems relative to the Matlab code. In the second line you should generate random numbers, such as:</p> <pre><code>a = np.random.rand(120) </code></pre> <p>and in the last line, as said in the comments, you should know that indexing in Matlab starts at 1 while Python starts at 0, so your last line will be:</p> <pre><code>d[fibre-1]=a[state] </code></pre>
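The index shift is easier to see in a small self-contained example (a short index vector stands in for the full 120-element `fibre` from the question):

```python
import numpy as np

a = np.random.rand(5)              # MATLAB: a = rand(1,5)
d = np.zeros(6)                    # MATLAB: d = zeros(1,6)
state = np.arange(5)               # 0..4, already 0-based in Python
fibre = np.array([1, 5, 2, 6, 3])  # MATLAB-style 1-based target positions

d[fibre - 1] = a[state]            # shift the 1-based indices down by one
```

After this, `d[0] == a[0]` and `d[4] == a[1]`, matching MATLAB's `d(1)=a(1)` and `d(5)=a(2)`.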
python|arrays|matlab|numpy
1
1,901,503
48,862,350
How to read 'H3' as 3 H values?
<p>For example, if I enter CH3CH3, my program (currently using Counter) reads it as just 2 'H' in total, unlike the 6 that I want. How do I work around this? Thanks</p>
<p>I have tried it using <code>regex</code>:</p> <pre><code>import re line = "CH3CH3" match = re.findall('H(\d+)', line) print(sum(map(int, match)),"H") </code></pre> <p>It gives output like:</p> <pre><code>6 H </code></pre>
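Note that the pattern above counts only `H` followed by digits, so a bare `H` (as in `HCl`) contributes nothing, and the `H` in a two-letter symbol like `He` would be miscounted. A slightly more defensive sketch treats a missing digit as 1 and skips symbols that merely start with `H`:

```python
import re

def count_h(formula):
    # 'H' not followed by a lowercase letter (so 'He', 'Hg' are not hydrogen),
    # with an optional digit group; a bare 'H' counts as 1
    return sum(int(n) if n else 1
               for n in re.findall(r'H(?![a-z])(\d*)', formula))

print(count_h("CH3CH3"))  # 6
```

This is still a sketch, not a full formula parser (it ignores parentheses such as `(NH4)2`).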
python
0
1,901,504
20,357,347
Filtering by id of ToManyField
<p>I have a resource that looks like this: </p> <pre><code>class CustomerResource(ModelResource): locations = fields.ToManyField('device.resources.LocationResource', 'location_set', null=True) current_location = fields.ToOneField('device.resources.LocationResource', 'current_location', null=True) default_location = fields.ToOneField('device.resources.LocationResource', 'default_location', null=True) class Meta: queryset = Customer.objects.all() resource_name = 'customers' validation = CleanedDataFormValidation(form_class=RegistrationForm) list_allowed_methods = ['get', 'post'] detail_allowed_methods = ['get', 'put', 'patch', 'delete'] authorization = Authorization() excludes =['is_superuser', 'is_active', 'is_staff', 'password', 'last_login',] filtering = { 'location': ('exact'), } </code></pre> <p>I want to query the API for a list of customers filtered by whether they have a certain location in their location field. An example url would look like this: </p> <pre><code>localhost/v1/customers/?location=1&amp;format=json </code></pre> <p>Unfortunately, while Tastypie recognizes my current filtering scheme as valid, when I pass in the location parameter to the URL, it appears to do nothing. The endpoint returns a list of all customers. Am I doing something wrong, or is there a way to extend filtering to get what I want? </p>
<p>Your ToManyField is called 'locations', not 'location'</p>
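Besides matching the field's actual name (`locations`), the `filtering` entry must also allow relation lookups. A sketch of the matching Meta config, assuming stock Tastypie (this is a config fragment, not a runnable program):

```python
from tastypie.constants import ALL_WITH_RELATIONS

class Meta:
    # ...everything else as in the question...
    filtering = {
        'locations': ALL_WITH_RELATIONS,  # key must match the resource field name
    }
```

The query URL then becomes `localhost/v1/customers/?locations=1&format=json`.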
python|django|tastypie
0
1,901,505
4,271,939
Is it possible to implement a Ruby-like internal DSL in Python?
<p>Is it possible to implement an internal DSL in a language without macros? Has anyone succeeded in implementing a Ruby-like internal DSL in python?</p> <p>I am trying to develop a simple state machine with a more intuitive syntax like:</p> <pre><code>start -&gt; Event -&gt; Next -&gt;Action </code></pre>
<p>I am having a bit of trouble grokking your question.</p> <p>AFAIU, you are asking</p> <blockquote> <p>Can you implement a Ruby-like internal DSL in a language without macros?</p> </blockquote> <p>And the answer to that is obviously "Yes", since Ruby doesn't have macros.</p>
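As a concrete illustration that this is doable without macros: Python cannot overload `->`, but `>>` comes close to the arrow syntax in the question. A minimal sketch (all names here are invented for illustration):

```python
class Node:
    """A chainable state-machine element; '>>' plays the role of '->'."""
    def __init__(self, name):
        self.name = name
        self.transitions = []

    def __rshift__(self, other):
        self.transitions.append(other)
        return other            # return the target so the chain keeps extending


start, event, nxt, action = (Node(n) for n in ("start", "Event", "Next", "Action"))
start >> event >> nxt >> action

print(start.transitions[0].name)  # Event
```

Each `>>` records a transition and hands back its right operand, so the whole chain reads left to right like the desired `start -> Event -> Next -> Action`.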
python|ruby|dsl
1
1,901,506
69,384,704
Filtering large data set by year
<p>Working with a very large dataset that I need to be able to filter by year. I read the text file as a csv:</p> <pre><code>df1=pd.read_csv(filename, sep=&quot;\t&quot;, error_bad_lines=False, usecols=['ID','Date', 'Value1', 'Value2']) </code></pre> <p>And convert the Date column to a date:</p> <pre><code>df1['Date'] = pd.to_datetime(df1['Date'], errors='coerce') </code></pre> <p>I also convert all nulls to zeroes:</p> <pre><code>df2=df1.fillna(0) </code></pre> <p>At this point, my 'Date' field is listed as dtype &quot;Object&quot;, and the dates are formatted like this:</p> <pre><code>2018-02-09 00:00:00 </code></pre> <p>However, I'm not sure how to filter by year. When I try this code:</p> <pre><code>df3 = df2[df2['Date'].dt.year == 2018] </code></pre> <p>I get this error:</p> <pre><code>AttributeError: Can only use .dt accessor with datetimelike values </code></pre> <p>I think what is happening is some dates have been read in as null values, but I'm not sure if that's the case, and I'm not sure how to convert them to dates (a zero date is fine).</p> <p>Is my code to filter the data set correct? How can I get around this attribute error?</p> <p>Thanks!</p>
<p>You could also specify to parse <code>Date</code> when reading it. As @ALollz mentioned, you have some NaN values in <code>Date</code>, and when you replace them with 0 this changes the type of the column. If you just want to filter by the year then the code below should work. If you wanted to filter by year/month then use <code>'%Y-%m'</code>, and for year/month/date use <code>'%Y-%m-%d'</code>.</p> <pre><code>df1=pd.read_csv(filename, sep=&quot;\t&quot;, error_bad_lines=False, usecols=['ID','Date', 'Value1', 'Value2'], parse_dates=['Date']) df_filtered = df1[df1['Date'].dt.strftime('%Y') == '2018'] </code></pre>
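The underlying problem — `fillna(0)` writing integers into a datetime column and demoting it to `object` — can be reproduced and avoided in a few lines. A minimal sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({"Date": ["2018-02-09", "not a date", "2019-03-01"],
                   "Value1": [1.0, None, 3.0]})
df["Date"] = pd.to_datetime(df["Date"], errors="coerce")  # bad rows become NaT

# fill only the value columns, so 'Date' stays datetime64 and .dt keeps working
df["Value1"] = df["Value1"].fillna(0)

filtered = df[df["Date"].dt.year == 2018]
print(len(filtered))  # 1
```

NaT rows simply compare as False in the year test, so they drop out of the filter without any special handling.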
python|pandas|datetime|large-data
0
1,901,507
47,996,766
Importing to a list from a txt file
<p>numbers.txt:</p> <pre><code>5 6 3 8 9 10 </code></pre> <p>I want to create a list from numbers.txt; my list should be [[5,6],[3,8],[9,10]]</p> <p>But I keep getting \t (between numbers) and \n (at the end, e.g. 10\n) in the output. How can I fix that? I should be getting [[5,6], [3,8], [9,10]]</p> <p>My code is:</p> <pre><code>file=open("numbers.txt","r") l=[] for line in file: l.append(line.split(',')) print(l) </code></pre> <p>Thanks. (Note: a tab is used between the two numbers in every line.)</p>
<p>Looks like you got tab <code>\t</code> separated values. This is my fav:</p> <pre><code>with open('numbers.txt') as f: # split the file by rows and rows by separator l = [i.split('\t') for i in f.read().split('\n')] print(l) # [['5', '6'], ['3', '8'], ['9', '10']] </code></pre> <p><a href="https://docs.python.org/3/tutorial/inputoutput.html#reading-and-writing-files" rel="nofollow noreferrer">With open...</a> makes sure that the file is closed after.</p> <hr> <p>Other alternatives would include importing libraries. Numpy and pandas are the two leading packages to deal with arrays/data/tables/matrixes. Both of them will evaluate your input (and in this case it would be stored as integers). Numpy was suggested by <a href="https://stackoverflow.com/a/47996874/7386332">Joran Beasley</a>.</p> <pre><code>import numpy as np l = np.genfromtxt('numbers.txt').tolist() import pandas as pd l = pd.read_csv('numbers.txt', sep='\t', header=None).values.tolist() </code></pre> <p>If you need it to be integers or float, change code below to one of these:</p> <pre><code>l = [list(map(int,i.split('\t'))) for i in f.read().split('\n')] # [[5, 6], [3, 8], [9, 10]] l = [list(map(float,i.split('\t'))) for i in f.read().split('\n')] # [[5.0, 6.0], [3.0, 8.0], [9.0, 10.0]] </code></pre>
python|python-3.x
2
1,901,508
51,292,190
Python Tweepy Encode(utf-8)
<p>While using tweepy I came to know about encode(utf-8). I believe encode('utf-8') is used to display tweets only in English. Am I right in this regard? Because I want to make datasets of tweets that are written only in English, so I can process those tweets for NLP.</p>
<p>You're not right.</p> <p><a href="https://en.wikipedia.org/wiki/Unicode" rel="nofollow noreferrer">Unicode</a> is a set of characters intended to cover everything needed for every language and writing system in the world<sup>1</sup> (plus technical stuff like math symbols).</p> <p>It's not used only for English. In fact, it's the exact opposite: before Unicode, handling non-English text was hugely painful, and Unicode is the solution everyone came up with for that problem.</p> <p><a href="https://en.wikipedia.org/wiki/UTF-8" rel="nofollow noreferrer">UTF-8</a> is a way of encoding Unicode characters in a binary stream. It's nothing specific to Tweepy; it's almost universal nowadays, as the default way to encode text (in any language) to disk, network, etc.</p> <p>In Python, <code>s.encode('utf-8')</code> takes a Unicode string <code>s</code>, encodes it using UTF-8, and returns the raw bytes. You only need to call <code>encode</code> if you're working with binary files, network protocols, or APIs somewhere. Normally, you just open text files in text mode and read and write Unicode strings, and your <code>print</code>s and <code>input</code>s and <code>sys.argv</code> and so on are also Unicode strings, and when you get some JSON data off the network you just <code>json.loads</code> it and all of the strings are Unicode, and so on.</p> <p>The official <a href="https://docs.python.org/3/howto/unicode.html" rel="nofollow noreferrer">Python Unicode HOWTO</a> explains a lot more of the history, background, and under-the-covers detail. If you're using Python 3.4 or 2.7 or something, you definitely need to read it. If you're using current Python, it's not as essential, but it's still a useful education.</p> <hr> <p><sub>1. There are a few groups who aren't happy with parts of Unicode, mainly to do with the fact that it forces all of the CJK languages to share the same notion of alternate characters. 
So, if you have an unusual Japanese surname, you might insist that Unicode doesn't really handle every language and writing system. But it's still clearly <em>intended</em> to do so—and definitely not intended to be English-only.</sub></p>
python|utf-8|internationalization|tweepy
1
1,901,509
51,296,770
How to update pip3 to its latest version in Ubuntu 18.04?
<p>I get this error when running </p> <pre><code>$ pip3 install -U pip Requirement already up-to-date: pip in ./dlenv/lib/python3.6/site-packages (10.0.1) launchpadlib 1.10.6 requires testresources, which is not installed. </code></pre> <p>I have searched in apt and <code>testresources</code> seems to be installed already.</p> <pre><code>apt search testresources Sorting... Done Full Text Search... Done python-testresources/bionic,bionic 2.0.0-2 all PyUnit extension for managing expensive test fixtures - Python 2.x python3-testresources/bionic,bionic 2.0.0-2 all PyUnit extension for managing expensive test fixtures - Python 3. </code></pre> <p>I've gone through this github <a href="https://github.com/pypa/pip/issues/5372" rel="noreferrer">issue</a>, which was not clear with a solution. </p>
<p>Try this:</p> <pre><code>sudo apt install python3-testresources </code></pre>
python|python-3.x|pip|ubuntu-18.04
15
1,901,510
73,538,666
Print a placeholder in bold within a sentence, using tags. AttributeError: 'str' object has no attribute 'tag_config'
<p>If I select City in the combobox, I would like to print London placeholder in bold. London only in bold, the rest of the sentence not in bold. I've never used tags, so I'm having a hard time, sorry for my difficulty. I would like to obtain:</p> <p><strong>LONDON</strong> Phrase1, Phrase2, Phrase3.</p> <p>I get the error: <code>AttributeError: 'str' object has no attribute 'tag_config'</code></p> <p>How to solve? I would like to try to maintain this code structure, or at least a similar structure.</p> <pre><code>from tkinter import ttk import tkinter as tk from tkinter import * root = tk.Tk() root.geometry(&quot;300x200&quot;) combobox=ttk.Combobox(root, width = 16) combobox.place(x=15, y=10) combobox['value'] = [&quot;City&quot;, &quot;Other&quot;] textbox = tk.Text(root,width=20,height=4) textbox.place(x=15, y=50) def function1(): if combobox.get() == &quot;City&quot;: city = &quot;London&quot; #city = diz[sel_city][&quot;City&quot;] city_start = city.upper() city_start.tag_config(font=(&quot;Verdana&quot;, 14, 'bold')) def function2(): text = f&quot;{city_start} Phrase1, Phrase2, Phrase3&quot; textbox.delete(1.0,END) textbox.insert(tk.END, text) #.format(allenat_random=allenat_random, variable_random=variable_random)) function2() Button = Button(root, text=&quot;Print&quot;, command=function1) Button.pack() Button.place(x=15, y=130) root.mainloop() </code></pre>
<p>Note that <code>.tag_config()</code> should be called on instance of <code>Text</code> widget, i.e. <code>textbox</code> in your case. Also the first argument of <code>.tag_config()</code> should be a tag name.</p> <p>Then <code>.tag_add()</code> should be used to apply the configured effect on the text.</p> <p>Below is the modified <code>function1()</code>:</p> <pre class="lang-py prettyprint-override"><code>def function1(): if combobox.get() == &quot;City&quot;: city = &quot;London&quot; #city = diz[sel_city][&quot;City&quot;] city_start = city.upper() textbox.tag_config('city', font=(&quot;Verdana&quot;, 14, 'bold')) ### added tag argument def function2(): text = f&quot;{city_start} Phrase1, Phrase2, Phrase3&quot; textbox.delete(1.0,END) textbox.insert(tk.END, text) #.format(allenat_random=allenat_random, variable_random=variable_random)) ### search city_start idx = textbox.search(city_start, 1.0) ### apply effect on city_start textbox.tag_add('city', idx, f'{idx}+{len(city_start)}c') function2() </code></pre> <p>And the result:</p> <p><a href="https://i.stack.imgur.com/20HQd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/20HQd.png" alt="enter image description here" /></a></p>
python|python-3.x|string|tkinter
1
1,901,511
17,368,987
Mongoengine queryset only + to_json/as_pymongo missing id
<p>Taking this example:</p> <pre><code>&gt;&gt;&gt; class Doc(Document): ... foo = StringField() ... bar = StringField() </code></pre> <p>If I want the "bar" field:</p> <pre><code>&gt;&gt;&gt; Doc(foo='foo', bar='bar').save() &gt;&gt;&gt; Doc.objects.only('bar').to_json() '[{"bar": "bar"}]' </code></pre> <p>If I want the "id" field and "bar":</p> <pre><code>&gt;&gt;&gt; Doc.objects.only('id', 'bar').to_json() '[{"bar": "bar"}]' </code></pre> <p>Is this intentional or a bug?</p> <p>BTW, I mentioned <code>as_pymongo</code> because to_json uses it.</p> <p>EDIT: removed a useless question.</p>
<p>Try using '_id' instead of 'id'.</p> <p>Background: MongoDB calls its internal "primary key" "_id" to avoid namespace conflicts (so that you can have a field called "id", for instance) and to denote that it is a MongoDB internal; some ORMs use mongo_id for direct access to that "_id" item. But, yeah, it's to avoid namespace issues with the very common field name "id". In the context of your query you are doing a literal MongoDB call, so it needs to be '_id', as that is its real name.</p>
python|mongodb|mongoengine
0
1,901,512
17,462,994
Python getting a string (key + value) from Python Dictionary
<p>I have dictionary structure. For example:</p> <pre><code>dict = {key1 : value1 ,key2 : value2} </code></pre> <p>What I want is the string which combines the key and the value</p> <p>Needed string -->> key1_value1 , key2_value2</p> <p>Any Pythonic way to get this will help.</p> <p>Thanks </p> <pre><code>def checkCommonNodes( id , rs): for r in rs: for key , value in r.iteritems(): kv = key+"_"+value if kv == id: print "".join('{}_{}'.format(k,v) for k,v in r.iteritems()) </code></pre>
<p>A <code>list</code> of key-value <code>str</code>s,</p> <pre><code>&gt;&gt;&gt; d = {'key1': 'value1', 'key2': 'value2'} &gt;&gt;&gt; ['{}_{}'.format(k,v) for k,v in d.iteritems()] ['key2_value2', 'key1_value1'] </code></pre> <p>Or if you want a single string of all key-value pairs,</p> <pre><code>&gt;&gt;&gt; ', '.join(['{}_{}'.format(k,v) for k,v in d.iteritems()]) 'key2_value2, key1_value1' </code></pre> <p><strong>EDIT:</strong></p> <p>Maybe you are looking for something like this,</p> <pre><code>def checkCommonNodes(id, rs): id_key, id_value = id.split('_') for r in rs: try: if r[id_key] == id_value: print "".join('{}_{}'.format(k,v) for k,v in r.iteritems()) except KeyError: continue </code></pre> <p>You may also be wanting to <code>break</code> after <code>print</code>ing - hard to know exactly what this is for.</p>
python|string|dictionary
27
1,901,513
73,154,668
unexpected keyword argument trying to instantiate a class inheriting from torch.nn.Module
<p>I have seen similar questions, but most seem a little more involved. My problem seems to me to be very straightforward, yet I cannot figure it out. I am simply trying to define a class and then instantiate it, but the arguments passed to the constructor are not recognized.</p> <pre><code>import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from torch.utils.data import DataLoader import torchvision.transforms as transforms # fully connected network class NN(nn.Module): def __int__(self, in_size, num_class): super(NN, self).__init__() self.fc1 = nn.Linear(in_size, 50) self.fc2 = nn.Linear(50, num_class) def forward(self, x): x = F.relu(self.fc1(x)) x = self.fc2(x) return x # initialize network model = NN(in_size=input_size, num_class=num_classes) </code></pre> <p>I get the error: <code>__init__() got an unexpected keyword argument 'in_size'</code> I am using Python 3.1, PyTorch 1.7.1, using PyCharm on macOS Monterey. Thank you!</p>
<p>You have a typo: it should be <code>__init__</code> instead of <code>__int__</code>.</p>
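The error can be reproduced without PyTorch at all, which shows it has nothing to do with `nn.Module`: a misspelled `__init__` simply never runs as the constructor, so the class falls back to the default one, which accepts no keyword arguments.

```python
class NN:
    def __int__(self, in_size, num_class):   # typo: '__int__' is not the constructor
        self.in_size = in_size
        self.num_class = num_class

try:
    NN(in_size=784, num_class=10)
except TypeError as e:
    print("TypeError:", e)   # same kind of message as in the question
```

The exact wording of the message varies by Python version, but it is always a `TypeError` about unexpected arguments.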
python|pytorch
0
1,901,514
55,893,511
"Dependency was not found" for tfjs in Vue/Webpack project with yarn
<p>I'm trying to use TensorFlow.js for array operations in a JavaScript project. I'm importing it in my Vue component with <code>import * as tf from '@tensorflow/tfjs';</code></p> <p>It appears <code>yarn install tensorflow</code> requires Python 2.7, so I instead used <code>yarn add tensorflow/tfjs</code> to install the subset of Tensorflow.js I needed. This appeared to work, but when I did <code>yarn run serve</code> I got this message:</p> <pre><code> ERROR Failed to compile with 1 errors This dependency was not found: * @tensorflow/tfjs in ./node_modules/cache-loader/dist/cjs.js??ref--12-0!./node_modules/babel-loader/lib!./node_modules/cache-loader/dist/cjs.js??ref--0-0!./node_modules/vue-loader/lib??vue-loader-options!./src/components/DetectorPlot.vue?vue&amp;type=script&amp;lang=js&amp; To install it, you can run: npm install --save @tensorflow/tfjs </code></pre> <p>In <code>yarn.lock</code> after <code>yarn add tensorflow/tfjs</code> I see:</p> <pre><code> "@tensorflow/tfjs@github:tensorflow/tfjs": version "1.1.0" resolved "https://codeload.github.com/tensorflow/tfjs/tar.gz/74b4edef368aa39decc6073af735f81d112bafd8" dependencies: "@tensorflow/tfjs-converter" "1.1.0" "@tensorflow/tfjs-core" "1.1.0" "@tensorflow/tfjs-data" "1.1.0" "@tensorflow/tfjs-layers" "1.1.0" </code></pre>
<p>Thanks to a friend, I have a solution:</p> <p><code>yarn remove @tensorflow/tfjs</code></p> <p>and</p> <p><code>yarn add @tensorflow/tfjs</code></p> <p>I guess it got in a weird state when the first attempt to install tensorflow failed. (It's depressing how often the answer is "turn it off and on again"...)</p>
javascript|tensorflow|vue.js|webpack
1
1,901,515
66,680,988
Cumulative Sum of Grouped Strings in Pandas
<p>I have a pandas data frame that I want to group by two columns and then return the cumulative sum of a third column of strings as a list within one of these groups.</p> <p>Example:</p> <pre><code>Year Bucket Name 2000 1 A 2001 1 B 2003 1 C 2000 2 B 2002 2 C </code></pre> <p>The output I want is:</p> <pre><code>Year Bucket Cum_Sum 2000 1 [A] 2001 1 [A,B] 2002 1 [A,B] 2003 1 [A,B,C] 2000 2 [B] 2001 2 [B] 2002 2 [B,C] 2003 2 [B,C] </code></pre> <p>I tried to piece together an answer from two responses: <a href="https://stackoverflow.com/a/39623235/5143841">https://stackoverflow.com/a/39623235/5143841</a> <a href="https://stackoverflow.com/a/22651188/5143841">https://stackoverflow.com/a/22651188/5143841</a></p> <p>But I can't quite get there.</p>
<p>My Dr. Frankenstein Answer</p> <pre><code>dat = [] rng = range(df.Year.min(), df.Year.max() + 1) for b, d in df.groupby('Bucket'): for y in rng: dat.append([y, b, [*d.Name[d.Year &lt;= y]]]) pd.DataFrame(dat, columns=[*df]) Year Bucket Name 0 2000 1 [A] 1 2001 1 [A, B] 2 2002 1 [A, B] 3 2003 1 [A, B, C] 4 2000 2 [B] 5 2001 2 [B] 6 2002 2 [B, C] 7 2003 2 [B, C] </code></pre> <p>Another freaky answer</p> <pre><code>rng = range(df.Year.min(), df.Year.max() + 1) i = [(y, b) for b, d in df.groupby('Bucket') for y in rng] s = df.set_index(['Year', 'Bucket']).Name.map(lambda x: [x]) s.reindex(i, fill_value=[]).groupby(level=1).apply(pd.Series.cumsum).reset_index() Year Bucket Name 0 2000 1 [A] 1 2001 1 [A, B] 2 2002 1 [A, B] 3 2003 1 [A, B, C] 4 2000 2 [B] 5 2001 2 [B] 6 2002 2 [B, C] 7 2003 2 [B, C] </code></pre>
python|pandas|cumsum
3
1,901,516
66,432,499
making a list from nlp object is not working while the spacy lecture goes with that approach
<p>I'm trying to follow along with the lecture from spacy.io.</p> <p>However, I ran into a strange problem.</p> <p>Firstly, I share the link for the code from the official spacy webpage.</p> <p><a href="https://course.spacy.io/en/chapter3" rel="nofollow noreferrer">https://course.spacy.io/en/chapter3</a></p> <p>In the example code they provide:</p> <pre><code>import spacy from spacy.matcher import PhraseMatcher from spacy.tokens import Span nlp = spacy.load(&quot;en_core_web_sm&quot;) animals = [&quot;Golden Retriever&quot;, &quot;cat&quot;, &quot;turtle&quot;, &quot;Rattus norvegicus&quot;] animal_patterns = list(nlp.pipe(animals)) print(&quot;animal_patterns:&quot;, animal_patterns) matcher = PhraseMatcher(nlp.vocab) matcher.add(&quot;ANIMAL&quot;, None, *animal_patterns) </code></pre> <p>When I run this on my Jupyter,</p> <p>the error message is like below.</p> <blockquote> <p>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) in 5 nlp = spacy.load(&quot;en_core_web_sm&quot;) 6 animals = [&quot;Golden Retriever&quot;, &quot;cat&quot;, &quot;turtle&quot;, &quot;Rattus norvegicus&quot;] ----&gt; 7 animal_patterns = list(nlp.pipe(animals))</p> <p>TypeError: 'list' object is not callable</p> </blockquote> <p>To me it seems like, since nlp is not an iterable object, it's not possible to create a list from it.</p> <p>How do I fix this? And why do they show this as an example if it's not working code at all? Is this code based on Python 2.x maybe?</p> <p>Thank you</p>
<p>Somehow your <code>list</code> constructor seems to be gone.<br /> This could have been because of operations like <code>list = something</code><br /> Anyways, this should fix it.</p> <pre><code>import spacy from spacy.matcher import PhraseMatcher from spacy.tokens import Span nlp = spacy.load(&quot;en_core_web_sm&quot;) animals = [&quot;Golden Retriever&quot;, &quot;cat&quot;, &quot;turtle&quot;, &quot;Rattus norvegicus&quot;] list = [].__class__ animal_patterns = list(nlp.pipe(animals)) print(&quot;animal_patterns:&quot;, animal_patterns) matcher = PhraseMatcher(nlp.vocab) matcher.add(&quot;ANIMAL&quot;, None, *animal_patterns) </code></pre>
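The `list = [].__class__` line in the answer works, but the more direct fix for an accidentally shadowed builtin is `del`, which removes the shadowing name so the builtin becomes visible again. A quick demonstration of the failure and the fix:

```python
list = [1, 2, 3]          # oops: the builtin 'list' is now shadowed

try:
    list("abc")           # calling a list object raises TypeError
except TypeError:
    pass

del list                  # remove the shadowing name; the builtin is back
print(list("ab"))         # ['a', 'b']
```

In a notebook, restarting the kernel has the same effect, since the shadowing assignment lives only in the session namespace.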
python|nlp|spacy
0
1,901,517
66,413,040
Converting a naive datetime column to a new timezone Pandas Dataframe
<p>I have the following dataframe, named 'ORDdataM', with a DateTimeIndex column 'date', and a price point column 'ORDprice'. The date column has no timezone associated with it (and is naive) but is actually in 'Australia/ACT'. I want to convert it into 'America/New_York' time.</p> <pre><code> ORDprice date 2021-02-23 18:09:00 24.01 2021-02-23 18:14:00 23.91 2021-02-23 18:19:00 23.98 2021-02-23 18:24:00 24.00 2021-02-23 18:29:00 24.04 ... ... 2021-02-25 23:44:00 23.92 2021-02-25 23:49:00 23.88 2021-02-25 23:54:00 23.92 2021-02-25 23:59:00 23.91 2021-02-26 00:09:00 23.82 </code></pre> <p>The line below is one that I have played around with quite a bit, but I cannot figure out what is erroneous. The only error message is: KeyError: 'date'</p> <p><code>ORDdataM['date'] = ORDdataM['date'].dt.tz_localize('Australia/ACT').dt.tz_convert('America/New_York')</code></p> <p>I have also tried</p> <p><code>ORDdataM.date = ORDdataM.date.dt.tz_localize('Australia/ACT').dt.tz_convert('America/New_York')</code></p> <p>What is the issue here?</p>
<p>Your <code>date</code> is the index, not a column; try:</p> <pre><code>df.index = df.index.tz_localize('Australia/ACT').tz_convert('America/New_York')

df
#                           ORDprice
#date
#2021-02-23 02:09:00-05:00     24.01
#2021-02-23 02:14:00-05:00     23.91
#2021-02-23 02:19:00-05:00     23.98
#2021-02-23 02:24:00-05:00     24.00
#2021-02-23 02:29:00-05:00     24.04
#2021-02-25 07:44:00-05:00     23.92
#2021-02-25 07:49:00-05:00     23.88
#2021-02-25 07:54:00-05:00     23.92
#2021-02-25 07:59:00-05:00     23.91
#2021-02-25 08:09:00-05:00     23.82
</code></pre>
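For reference, here is a runnable sketch of the same fix on a tiny stand-in frame (the prices are made up; the timestamps match the question's first rows):

```python
import pandas as pd

idx = pd.DatetimeIndex(["2021-02-23 18:09:00", "2021-02-23 18:14:00"], name="date")
df = pd.DataFrame({"ORDprice": [24.01, 23.91]}, index=idx)

# tz_localize stamps the naive index with a zone; tz_convert then shifts it.
df.index = df.index.tz_localize("Australia/ACT").tz_convert("America/New_York")
print(df.index[0])  # 2021-02-23 02:09:00-05:00
```

In February, Australia/ACT is on daylight saving (UTC+11) while New York is on EST (UTC-5), hence the 16-hour shift.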
python|pandas|datetime|timezone|datetimeindex
4
1,901,518
64,871,055
Importing all the sql tables into python using pandas dataframe
<p>I have a requirement wherein I would like to import all the tables stored in the SQL database into Python. I have successfully created Python code for it as follows:</p> <pre><code>import pandas as pd
import mysql.connector
from pandas import DataFrame

db = mysql.connector.connect(host=&quot;localhost&quot;, user=&quot;root&quot;, passwd=&quot;********&quot;)
pointer = db.cursor()
pointer.execute(&quot;use stock&quot;)
pointer.execute(&quot;SELECT * FROM 3iinfotech&quot;)
data = pointer.fetchall()
data = DataFrame(data, columns=['date','open','high','low','close','volume'])
</code></pre> <p>Using this I am able to successfully import the table's data into a pandas DataFrame.</p> <p>But as can be seen from the database schema there are multiple tables, and they will increase in the future.</p> <p><a href="https://i.stack.imgur.com/VHnif.png" rel="nofollow noreferrer">Full database schema along with output of the table</a></p> <p>The dataframe looks like:</p> <p><a href="https://i.stack.imgur.com/pM1gn.png" rel="nofollow noreferrer">Imported table from sql converted to DataFrame</a></p> <p>Is there any way, using loops or other methods, to automate this script for all the tables in a given database?</p> <p>I referred to the following:</p> <p><a href="https://stackoverflow.com/questions/32912373/importing-multiple-sql-tables-using-pandas">Importing multiple SQL tables using pandas</a></p> <p>But this does not work in my case.</p> <p>Thanks.....</p>
<p>The SQL error comes from the <code>'</code> quotes that parameter substitution puts around the table name. It's not best practice, but if you are working on a local application you can work around it with f-strings:</p> <pre><code>with connection.cursor() as pointer:
    pointer.execute(&quot;USE stock&quot;)
    pointer.execute(&quot;SHOW TABLES&quot;)
    # SHOW TABLES returns one row per table; the name is the first column
    table_names = [row[0] for row in pointer.fetchall()]

for table_name in table_names:
    with connection.cursor() as pointer:
        # Table names cannot be bound as %s parameters, hence the f-string
        pointer.execute(f&quot;SELECT * FROM `{table_name}`&quot;)
        df = pd.DataFrame(pointer.fetchall())
        print(df)
</code></pre>
python|mysql|pandas
1
1,901,519
63,930,594
Creating a model ChoiceField that relies on another model in Django
<p>I'm new to Django, but I'm trying to replace the categories constant with the types of videos from the other model, VideoType. How would I do this?</p> <pre><code>from django.db import models
# from django.contrib.auth import get_user_model
# User = get_user_model()

CATEGORIES = (
    ('Executive Speaker Series', 'Executive Speaker Series'),
    ('College 101', 'College 101'),
    ('Fireside Chat', 'Fireside Chat'),
    ('Other', 'Other')
)

# Create your models here.
class VideoType(models.Model):
    type = models.CharField(max_length=255)

    def __str__(self):
        return self.type

class Video(models.Model):
    title = models.CharField(max_length=255)
    link = models.CharField(max_length=1000)
    category = models.CharField(choices=CATEGORIES, max_length=255)
    timestamp = models.DateTimeField(auto_now_add=True)
    # owner = models.ForeignKey(User, on_delete=models.CASCADE)

    def __str__(self):
        return self.title
</code></pre>
<p>You're looking for a <a href="https://docs.djangoproject.com/en/3.1/ref/models/fields/#django.db.models.ForeignKey" rel="nofollow noreferrer">ForeignKey</a> field. It also needs the <a href="https://docs.djangoproject.com/en/3.1/ref/models/fields/#django.db.models.ForeignKey.on_delete" rel="nofollow noreferrer">on_delete</a> argument, which defines how the field should behave when the related object is deleted.</p> <pre><code>class VideoType(models.Model):
    type = models.CharField(max_length=255)

    def __str__(self):
        return self.type


class Video(models.Model):
    title = models.CharField(max_length=255)
    link = models.CharField(max_length=1000)
    # category now refers to the VideoType model;
    # note that on_delete=models.SET_NULL requires null=True
    category = models.ForeignKey(VideoType, on_delete=models.SET_NULL, null=True)
    timestamp = models.DateTimeField(auto_now_add=True)
</code></pre>
python|django
1
1,901,520
52,970,280
How to Trap Error and Re-Launch Python from Shell
<p>I have a large Python program running on a Raspberry Pi, and every week or two it will get overloaded and throw an out of memory error. I want to trap those errors and call a shell script "kill-and-relaunch.sh" (code below) that will kill the running Python processes and re-launch the program...so it needs to run the shell command as an entirely separate process. Two questions: (1) what is the best method to call the shell that will survive killing the original Python process; and (2) where would I put the error trapping code in a Python program that is already running in multiple processes...do I need to have the error trapping in each process?</p> <p>Here is the shell command I want to call:</p> <pre><code>kill $(ps aux | grep '[p]ython -u home_security.py' | awk '{print $2}')
cd ~/raspsecurity
source ~/.profile
workon py3cv34
nohup python -u home_security.py &amp;
</code></pre> <p>Thank you for any suggestions.</p>
<p>You could fire your shell script from a cronjob and append the error (or all) output to a file (as described here <a href="https://stackoverflow.com/a/7526988/7727137">https://stackoverflow.com/a/7526988/7727137</a>).</p>
python|linux|shell
0
1,901,521
53,312,057
Changing `self` for another instance of same object?
<p>I want to create a class where all objects have a unique identifier <code>key</code>, and if I attempt to create a new instance with a previously existing key, I should get back the same instance that already exists.</p> <p>Similar to a singleton class, but in this case instead of one instance there are many, all different.</p> <p>My first approach was this:</p> <pre><code>class Master:
    existent = {}

    def __init__(self, key, value=None):
        try:
            self = Master.existent[key]
            return
        except KeyError:
            Master.existent[key] = self

        # Rest of the __init__ method
</code></pre> <p>But when I compare two objects, something like <code>A = Master('A', 0)</code> and <code>B = Master('B', 0)</code>, <code>B</code> doesn't share any attributes that it should have, and if the Master class has any <code>_method</code> (single underscore), it also doesn't appear.</p> <p>Any idea how I could do this?</p> <p>I think this is similar to the Factory Method pattern, but I'm still having trouble finding the parallels, or how to implement it in an elegant form.</p> <p>EDIT:</p> <p>The class basically has two properties and that's it, but many things would inherit from and/or contain instances of this type. The <em>easy</em> way I thought I could do it was extracting the properties from the existing instance corresponding to said <code>key</code>, assigning them to the new instance, and exploiting the fact that both will have the same <code>hash</code> output, so the equality operator will behave according to hashes and I can use <code>==</code> and <code>is</code> with no problem.</p> <p>This idea solves my problem, but overall I think this could be a common or interesting enough scenario to tackle.</p>
<p>I don't think you can do that using the <code>__init__()</code> method, because a new instance of the class has already been created by the time that method is called. You probably need to create a factory-type method, something like:</p> <pre><code>class Master:
    existent = {}
    init_OK = False

    def __init__(self, key, value=None):
        if not Master.init_OK:
            raise Exception('Direct call to Master() is not allowed')
        Master.init_OK = False
        self.key = key
        self.value = value
        Master.existent[key] = self

    @staticmethod
    def make(key, value=None):
        try:
            inst = Master.existent[key]
        except KeyError:
            Master.init_OK = True
            inst = Master(key, value=value)
        return inst
</code></pre>
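As a variation (my own sketch, not part of the answer above): if raising on direct construction is not required, the guard flag can be dropped entirely and the cache checked with a plain membership test:

```python
class Master:
    existent = {}

    def __init__(self, key, value=None):
        self.key = key
        self.value = value

    @classmethod
    def make(cls, key, value=None):
        # Return the cached instance for an existing key, else create one.
        if key not in cls.existent:
            cls.existent[key] = cls(key, value)
        return cls.existent[key]

a = Master.make('A', 0)
b = Master.make('A', 99)   # second call with the same key
print(a is b)              # True -- same object, value stays 0
print(b.value)             # 0
```

The trade-off is that nothing stops a caller from bypassing `make` with `Master('A', 0)` directly, which is exactly what the `init_OK` flag in the answer guards against.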
python|design-patterns|singleton|factory-pattern
1
1,901,522
71,770,921
Collate all files in directory from lowest integer to highest, then start from lowest
<p>In a directory I will have any number of files. All of the file names will contain 'R1', 'R2', 'R3', etc. within, for example thisfile_R1.csv. The 'R' numbers in this directory will not always be consistent, so sometimes the lowest may be 'R3', running sequentially to 'R15'.</p> <p>I need a way for Python to start at the lowest integer of the R* files in the directory, then loop through to the last 'R' integer, as a way to edit files from lowest to highest, sequentially.</p> <pre><code>procdir = r&quot;C:\Users\processed&quot;
collected = os.listdir(procdir)
for f in collected:
    #if fnmatch.fnmatch(f, '*R*.csv'):
    if &quot;R*.csv&quot; in collected:
</code></pre>
<p>You can use the <code>glob</code> module to select only the file names that match your pattern:</p> <pre><code>import glob
import re

procdir = r&quot;C:\Users\processed&quot;
files = glob.glob(rf&quot;{procdir}\*R*.csv&quot;)
</code></pre> <p>Then, use <code>sorted()</code> with a <code>key</code> argument to capture the number and sort on that number:</p> <pre><code>files_sorted = sorted(files, key=lambda x: int(re.findall(r&quot;R(\d+)&quot;, x)[-1]))
</code></pre> <p>The lambda expression takes the file path, finds the pattern <code>R</code> followed by any number of digits and captures only the digits, and then converts the last entry in that list to an integer. The sorting is done based on this integer, so you end up with the correct order.</p> <p>If your directory had the files:</p> <pre><code>files = [r&quot;C:\Users\processed\file_R1.csv&quot;,
         r&quot;C:\Users\processed\file_R100.csv&quot;,
         r&quot;C:\Users\processed\file_R13.csv&quot;,
         r&quot;C:\Users\processed\file_R3.csv&quot;,
         r&quot;C:\Users\processed\file_R30.csv&quot;]
</code></pre> <p>you'd get the sorted files like so:</p> <pre><code>['C:\\Users\\processed\\file_R1.csv',
 'C:\\Users\\processed\\file_R3.csv',
 'C:\\Users\\processed\\file_R13.csv',
 'C:\\Users\\processed\\file_R30.csv',
 'C:\\Users\\processed\\file_R100.csv']
</code></pre>
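The sort key can be sanity-checked on its own with plain strings (same file names as above, minus the directory prefix):

```python
import re

files = ["file_R1.csv", "file_R100.csv", "file_R13.csv", "file_R3.csv", "file_R30.csv"]

# Lexicographic sort would put R100 before R13; sorting on the captured
# integer after the last "R" gives the natural numeric order instead.
files_sorted = sorted(files, key=lambda x: int(re.findall(r"R(\d+)", x)[-1]))
print(files_sorted)
# ['file_R1.csv', 'file_R3.csv', 'file_R13.csv', 'file_R30.csv', 'file_R100.csv']
```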
python
1
1,901,523
71,468,051
How to use attributeselectedclassifier on pyweka?
<p>I'm translating a model built in Weka to python-weka-wrapper3, and I don't know how to set the evaluator and search options on AttributeSelectedClassifier.</p> <p>This is the model in Weka:</p> <pre><code>weka.classifiers.meta.AttributeSelectedClassifier -E &quot;weka.attributeSelection.CfsSubsetEval -P 1 -E 1&quot; -S &quot;weka.attributeSelection.GreedyStepwise -B -T -1.7976931348623157E308 -N -1 -num-slots 1&quot; -W weka.classifiers.meta.MultiSearch -- -E FM -search &quot;weka.core.setupgenerator.MathParameter -property classifier.classifier.classifier.numOfBoostingIterations -min 5.0 -max 50.0 -step 1.0 -base 10.0 -expression I&quot; -class-label 1 -algorithm &quot;weka.classifiers.meta.multisearch.DefaultSearch -sample-size 100.0 -initial-folds 2 -subsequent-folds 10 -initial-test-set . -subsequent-test-set . -num-slots 1&quot; -log-file /Applications/weka-3-8-3 -S 1 -W weka.classifiers.meta.Bagging -- -P 100 -S 1 -num-slots 1 -I 100 -W weka.classifiers.meta.FilteredClassifier -- -F &quot;weka.filters.supervised.instance.SMOTE -C 0 -K 3 -P 250.0 -S 1&quot; -S 1 -W weka.classifiers.meta.CostSensitiveClassifier -- -cost-matrix &quot;[0.0 1.0; 1.0 0.0]&quot; -S 1 -W weka.classifiers.trees.ADTree -- -B 10 -E -3 -S 1
</code></pre> <p>and I have this right now:</p> <pre><code>base = Classifier(classname=&quot;weka.classifiers.trees.ADTree&quot;, options=[&quot;-B&quot;, &quot;10&quot;, &quot;-E&quot;, &quot;-3&quot;, &quot;-S&quot;, &quot;1&quot;])
cls = SingleClassifierEnhancer(classname=&quot;weka.classifiers.meta.CostSensitiveClassifier&quot;, options=[&quot;-cost-matrix&quot;, &quot;[0.0 1.0; 1.0 0.0]&quot;, &quot;-S&quot;, &quot;1&quot;])
cls.classifier = base
smote = Filter(classname=&quot;weka.filters.supervised.instance.SMOTE&quot;, options=[&quot;-C&quot;, &quot;0&quot;, &quot;-K&quot;, &quot;3&quot;, &quot;-P&quot;, &quot;250.0&quot;, &quot;-S&quot;, &quot;1&quot;])
fc = FilteredClassifier()
fc.filter = smote
fc.classifier = cls
bagging_cls = Classifier(classname=&quot;weka.classifiers.meta.Bagging&quot;, options=[&quot;-P&quot;, &quot;100&quot;, &quot;-S&quot;, &quot;1&quot;, &quot;-num-slots&quot;, &quot;1&quot;, &quot;-I&quot;, &quot;100&quot;])
bagging_cls.classifier = fc
multisearch_cls = MultiSearch(options=[&quot;-S&quot;, &quot;1&quot;])
multisearch_cls.evaluation = &quot;FM&quot;
multisearch_cls.log_file = &quot;/home/pablo/Escritorio/TFG/OUTPUT.txt&quot;
multisearch_cls.search = [&quot;-sample-size&quot;, &quot;100&quot;, &quot;-initial-folds&quot;, &quot;2&quot;, &quot;-subsequent-folds&quot;, &quot;10&quot;, &quot;-initial-test-set&quot;, &quot;.&quot;, &quot;-subsequent-test-set&quot;, &quot;.&quot;, &quot;-num-slots&quot;, &quot;1&quot;]
mparam = MathParameter()
mparam.prop = &quot;numOfBoostingIterations&quot;
mparam.minimum = 5.0
mparam.maximum = 50.0
mparam.step = 1.0
mparam.base = 10.0
mparam.expression = &quot;I&quot;
multisearch_cls.parameters = [mparam]
multisearch_cls.classifier = bagging_cls

AttS_cls = AttributeSelectedClassifier()
AttS_cls.evaluator = &quot;weka.attributeSelection.CfsSubsetEval -P 1 -E 1&quot;
AttS_cls.search = &quot;weka.attributeSelection.GreedyStepwise -B -T -1.7976931348623157E308 -N -1 -num-slots 1&quot;
AttS_cls.classifier = multisearch_cls

train, test = data_modelos_1_2.train_test_split(70.0, Random(1))
AttS_cls.build_classifier(train)
evl = Evaluation(train)
evl.crossvalidate_model(AttS_cls, test, 10, Random(1))

print(AttS_cls)
#graph.plot_dot_graph(AttS_cls.graph)
print(&quot;&quot;)
print(&quot;=== Setup ===&quot;)
print(&quot;Classifier: &quot; + AttS_cls.to_commandline())
print(&quot;Dataset: &quot;)
print(test.relationname)
print(&quot;&quot;)
print(evl.summary(&quot;=== &quot; + str(10) + &quot; -fold Cross-Validation ===&quot;))
print(evl.class_details())
plcls.plot_roc(evl, class_index=[0, 1], wait=True)
</code></pre> <p>but when I do</p> <pre><code>AttS_cls.evaluator = &quot;weka.attributeSelection.CfsSubsetEval -P 1 -E 1&quot;
AttS_cls.search = &quot;weka.attributeSelection.GreedyStepwise -B -T -1.7976931348623157E308 -N -1 -num-slots 1&quot;
</code></pre> <p>it raises this error:</p> <pre><code>---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/tmp/ipykernel_40724/2750622902.py in &lt;module&gt;
     30
     31 AttS_cls = AttributeSelectedClassifier()
---&gt; 32 AttS_cls.search = &quot;weka.attributeSelection.GreedyStepwise&quot;
     33 AttS_cls.classifier = multisearch_cls
     34

/usr/local/lib/python3.8/dist-packages/weka/classifiers.py in search(self, search)
    435         :type search: ASSearch
    436         &quot;&quot;&quot;
--&gt; 437         javabridge.call(self.jobject, &quot;setSearch&quot;, &quot;(Lweka/attributeSelection/ASSearch;)V&quot;, search.jobject)
    438
    439

AttributeError: 'str' object has no attribute 'jobject'
</code></pre> <p>I understand that I have to pass objects rather than strings (that's why the error is raised), but I don't know how to create them.</p>
<p>You need to instantiate <code>ASSearch</code> and <code>ASEvaluation</code> objects. If you have command-lines, you can use the <code>from_commandline</code> helper method like this:</p> <pre class="lang-py prettyprint-override"><code>from weka.core.classes import from_commandline, get_classname
from weka.attribute_selection import ASSearch
from weka.attribute_selection import ASEvaluation

search = from_commandline('weka.attributeSelection.GreedyStepwise -B -T -1.7976931348623157E308 -N -1 -num-slots 1', classname=get_classname(ASSearch))
evaluation = from_commandline('weka.attributeSelection.CfsSubsetEval -P 1 -E 1', classname=get_classname(ASEvaluation))
</code></pre> <p>The second argument of the <code>from_commandline</code> method is the <code>classname</code> of the wrapper that you want to use instead of <code>OptionHandler</code>. For simplicity, I import the correct wrappers and then use the <code>get_classname</code> method to return the dot notation of the wrapper's class. That way I can avoid accidental typos in the classname strings. Also, by using single quotes, you won't have to worry about Weka's quotes in the command-lines and you can just use the Weka command-line string verbatim.</p> <p>You can also use the same approach for instantiating the <code>AttributeSelectedClassifier</code> wrapper itself, without having to go through instantiating search and evaluation separately:</p> <pre class="lang-py prettyprint-override"><code>from weka.core.classes import from_commandline, get_classname
from weka.classifiers import AttributeSelectedClassifier

cls = from_commandline('weka.classifiers.meta.AttributeSelectedClassifier -E &quot;weka.attributeSelection.CfsSubsetEval -P 1 -E 1&quot; -S &quot;weka.attributeSelection.GreedyStepwise -B -T -1.7976931348623157E308 -N -1 -num-slots 1&quot; -W weka.classifiers.meta.MultiSearch -- -E FM -search &quot;weka.core.setupgenerator.MathParameter -property classifier.classifier.classifier.numOfBoostingIterations -min 5.0 -max 50.0 -step 1.0 -base 10.0 -expression I&quot; -class-label 1 -algorithm &quot;weka.classifiers.meta.multisearch.DefaultSearch -sample-size 100.0 -initial-folds 2 -subsequent-folds 10 -initial-test-set . -subsequent-test-set . -num-slots 1&quot; -log-file /Applications/weka-3-8-3 -S 1 -W weka.classifiers.meta.Bagging -- -P 100 -S 1 -num-slots 1 -I 100 -W weka.classifiers.meta.FilteredClassifier -- -F &quot;weka.filters.supervised.instance.SMOTE -C 0 -K 3 -P 250.0 -S 1&quot; -S 1 -W weka.classifiers.meta.CostSensitiveClassifier -- -cost-matrix &quot;[0.0 1.0; 1.0 0.0]&quot; -S 1 -W weka.classifiers.trees.ADTree -- -B 10 -E -3 -S 1', get_classname(AttributeSelectedClassifier))
</code></pre>
python|classification|weka
1
1,901,524
71,625,334
Convert symbolic math variables to programming language variables
<p>Suppose I have a program which asks me to input an equation to calculate something numerically.</p> <p>I input <code>2*x+y</code> as a string, and it instantiates variable <code>x</code> and assigns a numerical value to it, then it instantiates variable <code>y</code> and assigns a value to it.</p> <p>The result is calculated based on the equation that I input (here it is <code>2*x+y</code>):</p> <pre><code>x = 2
y = 3
result = 2*x + y
print(result)
</code></pre> <p>I want to enter the equation symbolically and carry out the rest of the calculation numerically.</p> <p>What is the best way to do so? (Looking for best practices to implement this idea so the program is able to scale for further development.)</p>
<h2>You can use the <a href="https://www.w3schools.com/python/ref_func_eval.asp" rel="nofollow noreferrer"><code>eval</code></a> function:</h2> <pre><code>x = 2
y = 3
result = eval('2 * x + y')
print(result)
</code></pre> <p>Output:</p> <pre><code>7
</code></pre> <h2>You can evaluate input algebraic equations as well:</h2> <pre><code>x = 2
y = 3
equation = input(&quot;Enter the algebraic equation with x and y variables:&quot;)
print(f&quot;The result is {eval(equation)}&quot;)
</code></pre> <p>Output:</p> <pre><code>Enter the algebraic equation with x and y variables:x*5 - y*2
The result is 4
</code></pre> <h2>For more complex equations I would suggest <a href="https://docs.sympy.org/latest/index.html" rel="nofollow noreferrer"><code>sympy</code></a>:</h2> <pre><code>&gt;&gt;&gt; from sympy.abc import x,y
&gt;&gt;&gt; expr = 2*x + y
&gt;&gt;&gt; expr.evalf(subs = {x:2, y:3})
7.00000000000000
</code></pre> <p>You can check out <a href="https://stackoverflow.com/questions/38176865/implicit-differentiation-with-python-3">this</a> and <a href="https://stackoverflow.com/questions/44269943/how-do-you-evaluate-a-derivative-in-python">this</a> for derivatives.</p>
python|numerical-methods|symbolic-math
0
1,901,525
61,777,398
Offset problem when using ctypes in Python 2
<p>I am trying to read the headers of a small bitmap ("test1.bmp"). I quickly found the structure. But when I try to implement it in Python 2.7 using Structure from ctypes, something strange happens: the offset of the &quot;size&quot; ulong is moved forward by 2 bytes. (see below)</p> <pre><code>&gt;&gt;&gt; BMPHeader.size
&lt;Field type=c_ulong, ofs=4, size=4&gt;
</code></pre> <p>&quot;ofs&quot; should be 2, because &quot;size&quot; comes after the &quot;id&quot; char*2. This generates an error:</p> <pre><code>ValueError: Buffer size too small (14 instead of at least 16 bytes)
</code></pre> <p>What shifts the &quot;size&quot; offset by 2 bytes?</p> <p>Here's my code:</p> <pre><code>from ctypes import *

filename = "test1.bmp"
data = None
with open(filename, "rb") as file:
    data = file.read()

class BMPHeader(Structure):
    _fields_ = [
        ("id", c_char * 2),
        ("size", c_ulong),
        ("reserved", c_ulong),
        ("offset", c_ulong)]

    def __new__(self, data_buffer=None):
        return self.from_buffer_copy(data_buffer)

    def __init__(self, data_buffer):
        pass

header = BMPHeader(data[:14])
</code></pre> <p>P.S.: Please excuse my English (not native). I'm also just a beginner when working with headers etc., so it's quite possible it's just my bad code.</p>
<p>Structures have padding by default for alignment purposes. In your case, it's adding 2 bytes of padding between <code>id</code> and <code>size</code>. Since you're trying to read a file in that doesn't have any padding, you need to turn it off in your structure. Do this by adding <code>_pack_ = 1</code> under <code>class BMPHeader(Structure):</code>.</p>
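A minimal sketch of the packed structure. One assumption on my part: I've swapped `c_ulong` for the fixed-width `c_uint32`, because on some 64-bit platforms `c_ulong` is 8 bytes and the header size would come out wrong even with packing disabled:

```python
from ctypes import Structure, c_char, c_uint32, sizeof

class BMPHeader(Structure):
    _pack_ = 1  # no alignment padding: fields map 1:1 onto the file bytes
    _fields_ = [("id", c_char * 2),
                ("size", c_uint32),
                ("reserved", c_uint32),
                ("offset", c_uint32)]

print(sizeof(BMPHeader))  # 14 -- matches the 14-byte on-disk BMP file header

# Parsing 14 bytes of a fabricated header (not a real file):
header = BMPHeader.from_buffer_copy(b"BM" + (70).to_bytes(4, "little") * 3)
print(header.id, header.size)  # b'BM' 70
```

Without `_pack_ = 1`, ctypes would insert 2 padding bytes after `id` so that `size` starts on a 4-byte boundary, which is exactly the `ofs=4` the question observed.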
python|bitmap|header|ctypes|offset
1
1,901,526
17,721,226
How to debug into subprocess.call() in Python?
<p>I am trying to understand a project's source code these days. I run the project line by line, and everything works fine until this line:</p> <pre><code>res = subprocess.call(command, env=os.environ)
</code></pre> <p>I checked the variable "command" and realized that this function just throws a command to another Python script and tries to execute it in a subprocess. So I jumped out of Eclipse and tried to execute the command through the Terminal while in the same directory.</p> <p>Now this is what I got:</p> <pre><code>Traceback (most recent call last):
  File "/home/elderry/Projects/git/tahoe-lafs/support/bin/tahoe", line 6, in &lt;module&gt;
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2850, in &lt;module&gt;
    working_set.require(__requires__)
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 696, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 594, in resolve
    raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: allmydata-tahoe==1.10.0.post27
</code></pre> <p>Then I completely lost my direction: where did the subprocess continue to run? Why did the script work well in the program but not in the Terminal? Since that script is also included in the project, with some hope I set some break points in it in Eclipse, which didn't catch anything. Is there any way to debug into the subprocess, without diving into the code of the subprocess module itself?</p>
<p>I guess your main project alters <code>PYTHONPATH</code> (<code>sys.path</code>). Look in <code>os.environ</code> of your project and try to run the second script with this environment.</p>
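One way to test that hypothesis, sketched below (the command is a placeholder, not from the project): copy the parent's environment, mirror its effective `sys.path` into `PYTHONPATH`, and hand that environment to the subprocess:

```python
import os
import subprocess
import sys

env = os.environ.copy()
# Mirror the parent's effective module search path into the child's environment.
env["PYTHONPATH"] = os.pathsep.join(p for p in sys.path if p)

# Hypothetical target -- replace with the real command the project runs:
# subprocess.call([sys.executable, "support/bin/tahoe", "--version"], env=env)
print("PYTHONPATH" in env)  # True
```

If the script then works in the subprocess, the missing piece was indeed a path entry that the main project adds at runtime.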
python|debugging|subprocess|pydev|archlinux
2
1,901,527
65,925,182
Python Mutilevel Import
<p>I'm having trouble with relative imports. The directory structure is:</p> <pre><code>folder
    app.py
    src_1
        __init__.py
        database
            db_declare.py
            __init__.py
        pages
            page_1.py
            df_prep.py
            __init__.py
</code></pre> <p>Okay, now I have:</p> <pre><code>#On app.py
from src_1.pages import page_1

#On page_1.py
from df_prep import df

#On df_prep.py
from database.db_declare import *
</code></pre> <p>But I still get</p> <pre><code>&quot;*\folder\src_1\pages\page_1.py&quot;, line 9, in &lt;module&gt;
    from df_prep import df
ModuleNotFoundError: No module named 'df_prep'
</code></pre> <p>when I run <code>app.py</code>. I've tried adding &quot;..&quot; to <code>sys.path</code>, but it ends up adding too many &quot;..&quot; entries. I wanted to keep the imports inside the scripts unchanged, meaning if two scripts are in the same folder there should be no reason to write <code>from pages.df_prep import df</code> inside <code>page_1.py</code>. I'm open to suggestions, but I would really prefer not to change too much about the file structure. Thank you.</p>
<p><code>src_1</code> is a package. <code>folder</code> is not, so <code>app.py</code> is not in a package (but everything else is).</p> <p>Relative imports in packages require <code>.</code>:</p> <pre class="lang-py prettyprint-override"><code># in page_1.py
from .df_prep import df
</code></pre> <pre class="lang-py prettyprint-override"><code># in df_prep.py
from ..database.db_declare import *
</code></pre> <p>See detailed answers here: <a href="https://stackoverflow.com/questions/14132789/relative-imports-for-the-billionth-time">Relative imports for the billionth time</a></p>
python|module|relative-import
1
1,901,528
66,257,466
How to serve a 404.html page using FastAPI in case a user goes to the wrong route?
<p>I want to render the index.html file. Otherwise, return a 404 "page not found" error.</p> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, Request, HTTPException
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
from fastapi.responses import HTMLResponse

app = FastAPI()
app.mount(&quot;/static&quot;, StaticFiles(directory=&quot;static&quot;), name=&quot;static&quot;)
templates = Jinja2Templates(directory=&quot;templates&quot;)

@app.get(&quot;/&quot;, response_class=HTMLResponse)
async def Index(request: Request):
    if status_code=404:
        return templates.TemplateResponse(&quot;404.html&quot;, {&quot;request&quot;: request})
    return templates.TemplateResponse(&quot;index.html&quot;, {&quot;request&quot;: request})

if __name__ == '__main__':
    uvicorn.run(app)
</code></pre>
<p>Your <code>index</code> endpoint is <em>only</em> called when the client accesses the <code>/</code> route. So, you can't handle 404 errors in that same endpoint because a 404 means the client accessed some other undefined route.</p> <p>A 404 is an HTTPException and FastAPI has a built-in handler for HTTPExceptions. But you can override it to provide your own handler. See the <a href="https://fastapi.tiangolo.com/tutorial/handling-errors/?h=exceptions#override-the-httpexception-error-handler" rel="nofollow noreferrer">Handling Errors &gt; Override the HTTPException error handler</a> section of the FastAPI docs.</p> <p>Basically, you define a method that handles <em>all</em> the <code>HTTPException</code>s:</p> <pre class="lang-py prettyprint-override"><code>from starlette.exceptions import HTTPException as StarletteHTTPException

@app.exception_handler(StarletteHTTPException)
async def my_custom_exception_handler(request: Request, exc: StarletteHTTPException):
    # Handle HTTPExceptions here
</code></pre> <p>The function receives as its 2nd argument an <a href="https://www.starlette.io/exceptions/#httpexception" rel="nofollow noreferrer">HTTPException</a>, which has a <code>status_code</code> and a <code>detail</code> attribute. So you can check the <code>status_code</code>, and render appropriate HTML templates. You can even use the passed-in <code>Request</code> and <code>HTTPException</code> objects for your templates:</p> <pre><code>from starlette.exceptions import HTTPException as StarletteHTTPException

@app.exception_handler(StarletteHTTPException)
async def my_custom_exception_handler(request: Request, exc: StarletteHTTPException):
    # print(exc.status_code, exc.detail)
    if exc.status_code == 404:
        return templates.TemplateResponse('404.html', {'request': request})
    elif exc.status_code == 500:
        return templates.TemplateResponse('500.html', {
            'request': request,
            'detail': exc.detail
        })
    else:
        # Generic error page
        return templates.TemplateResponse('error.html', {
            'request': request,
            'detail': exc.detail
        })
</code></pre> <p>Take note again that the function handles <em>all</em> HTTPException types, not just your 404. If you just care about overriding the 404 error, you can pass the other errors to FastAPI's built-in handler:</p> <pre><code>from starlette.exceptions import HTTPException as StarletteHTTPException
from fastapi.exception_handlers import http_exception_handler

@app.exception_handler(StarletteHTTPException)
async def my_custom_exception_handler(request: Request, exc: StarletteHTTPException):
    if exc.status_code == 404:
        return templates.TemplateResponse('404.html', {'request': request})
    else:
        # Just use FastAPI's built-in handler for other errors
        return await http_exception_handler(request, exc)
</code></pre>
python|fastapi
2
1,901,529
68,112,182
Replace the string in pandas dataframe
<p>I have the following dataframe (df):</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>shape</th> <th>data</th> </tr> </thead> <tbody> <tr> <td>POINT</td> <td>POINT (4495 33442)</td> </tr> <tr> <td>POLYGON</td> <td>POLYGON ((6324 32691, 6326 32691, 6330 32691, 6333 32693, 6332 32696, 6329 32700, 6328 32704, 6327 32707, 6325 32710, 6322 32713, 6319 32716, 6316 32719, 6313 32722, 6310 32725, 6307 32728, 6303 32728, 6299 32727, 6295 32727, 6291 32730, 6288 32733, 6285 32735, 6281 32735, 6277 32735, 6275 32732, 6274 32729, 6274 32725, 6272 32722, 6269 32720, 6265 32719, 6261 32719, 6258 32716, 6257 32712, 6259 32708, 6262 32705, 6265 32702, 6268 32701, 6272 32701, 6276 32701, 6279 32702, 6283 32702, 6287 32702, 6291 32699, 6294 32696, 6297 32693, 6300 32692, 6304 32692, 6308 32692, 6312 32692, 6316 32692, 6320 32693, 6324 32691))</td> </tr> <tr> <td>POINT</td> <td>POINT (4673 33465)</td> </tr> <tr> <td>POLYGON</td> <td>POLYGON ((5810 33296, 5813 33297, 5816 33299, 5819 33301, 5822 33303, 5826 33306, 5829 33307, 5833 33307, 5836 33308, 5837 33312, 5837 33316, 5836 33319, 5834 33323, 5832 33327, 5830 33330, 5828 33333, 5826 33336, 5824 33339, 5821 33342, 5817 33342, 5813 33341, 5808 33340, 5803 33339, 5800 33338))</td> </tr> </tbody> </table> </div> <p>I would like to convert it into the following format: if POINT then (4495, 33442) if POLYGON then [(5810, 33296), (5813, 33297), (5816, 33299), (5819, 33301), (5822, 33303), (5826, 33306), (5829, 33307), (5833, 33307), (5836, 33308), (5837, 33312), (5837, 33316), (5836, 33319), (5834, 33323), (5832, 33327), (5830, 33330), (5828, 33333), (5826, 33336), (5824, 33339), (5821, 33342), (5817, 33342), (5813, 33341), (5808, 33340), (5803, 33339), (5800, 33338)]. 
How do I do that?</p> <p>What I tried so far?</p> <pre><code>op2=[] for st, shape in zip(df['data'],df['shape']): if 'POINT' in shape: val=re.findall('\([0-9., ]+\)', st)[-1] op2.append(&quot;({})&quot;.format(&quot;, &quot;.join(re.findall(r&quot;\d+&quot;, val)))) #op2_list = [ast.literal_eval(l) for l in op2] #poi = [Point(i).wkt for i in op2_list] else: # Polygon val=re.findall('\([0-9., ]+\)', st)[-1] paran=val.replace(', ', '),(') fin=paran.replace(' ', ',') op2.append(fin) data['converted']=pd.DataFrame(op2) </code></pre> <p>Desired output:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>shape</th> <th>data</th> <th>converted</th> </tr> </thead> <tbody> <tr> <td>POINT</td> <td>POINT (4495 33442)</td> <td>(4495, 33442)</td> </tr> <tr> <td>POLYGON</td> <td>POLYGON ((6324 32691, 6326 32691, 6330 32691, 6333 32693, 6332 32696, 6329 32700, 6328 32704, 6327 32707, 6325 32710, 6322 32713, 6319 32716, 6316 32719, 6313 32722, 6310 32725, 6307 32728, 6303 32728, 6299 32727, 6295 32727, 6291 32730, 6288 32733, 6285 32735, 6281 32735, 6277 32735, 6275 32732, 6274 32729, 6274 32725, 6272 32722, 6269 32720, 6265 32719, 6261 32719, 6258 32716, 6257 32712, 6259 32708, 6262 32705, 6265 32702, 6268 32701, 6272 32701, 6276 32701, 6279 32702, 6283 32702, 6287 32702, 6291 32699, 6294 32696, 6297 32693, 6300 32692, 6304 32692, 6308 32692, 6312 32692, 6316 32692, 6320 32693, 6324 32691))</td> <td>[(6324, 32691), (6326, 32691), (6330, 32691), (6333, 32693), (6332, 32696), (6329, 32700), (6328, 32704), (6327, 32707), (6325, 32710), (6322, 32713), (6319, 32716), (6316, 32719), (6313, 32722), (6310, 32725), (6307, 32728), (6303, 32728), (6299, 32727), (6295, 32727), (6291, 32730), (6288 ,32733), (6285, 32735), (6281, 32735), (6277, 32735), (6275, 32732), (6274, 32729), (6274, 32725), (6272, 32722), (6269, 32720), (6265, 32719), (6261, 32719), (6258, 32716), (6257, 32712), (6259, 32708), (6262, 32705), (6265, 32702), (6268, 32701), (6272, 32701), (6276, 
32701), (6279, 32702), (6283, 32702), (6287, 32702), (6291, 32699), (6294, 32696), (6297, 32693), (6300, 32692), (6304, 32692), (6308, 32692), (6312, 32692), (6316, 32692), (6320, 32693), (6324, 32691)]</td> </tr> <tr> <td>POINT</td> <td>POINT (4673 33465)</td> <td>(4673, 33465)</td> </tr> <tr> <td>POLYGON</td> <td>POLYGON ((5810 33296, 5813 33297, 5816 33299, 5819 33301, 5822 33303, 5826 33306, 5829 33307, 5833 33307, 5836 33308, 5837 33312, 5837 33316, 5836 33319, 5834 33323, 5832 33327, 5830 33330, 5828 33333, 5826 33336, 5824 33339, 5821 33342, 5817 33342, 5813 33341, 5808 33340, 5803 33339, 5800 33338))</td> <td>[(5810, 33296), (5813, 33297), (5816, 33299), (5819, 33301), (5822, 33303), (5826, 33306), (5829, 33307), (5833, 33307), (5836, 33308), (5837, 33312), (5837, 33316), (5836, 33319), (5834, 33323), (5832, 33327), (5830, 33330), (5828, 33333), (5826, 33336), (5824, 33339), (5821, 33342), (5817, 33342), (5813, 33341), (5808, 33340), (5803, 33339), (5800, 33338)]</td> </tr> </tbody> </table> </div> <p>This does not convert the polygons. How do I do that?</p>
<p>This function will format the polygon strings correctly:</p> <pre><code>def format_polygon(s): return [tuple([float(i) for i in x.split(&quot; &quot;)]) for x in s[10:-2].split(&quot;, &quot;)] </code></pre> <p>and this code will format the point strings correctly:</p> <pre><code>def format_point(s): return tuple([float(i) for i in s[7:-1].split(&quot; &quot;)]) </code></pre> <p>They can then be applied to your dataframe with <code>.loc</code>; note that <code>df[mask][&quot;data&quot;] = ...</code> is chained indexing, which assigns to a temporary copy and leaves <code>df</code> unchanged:</p> <pre><code>point_mask = df[&quot;shape&quot;] == &quot;POINT&quot; df.loc[point_mask, &quot;converted&quot;] = df.loc[point_mask, &quot;data&quot;].apply(format_point) df.loc[~point_mask, &quot;converted&quot;] = df.loc[~point_mask, &quot;data&quot;].apply(format_polygon) </code></pre>
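<p>As a quick sanity check, the two helpers above can be exercised directly on sample strings from the question (a minimal sketch; values come back as floats):</p>

```python
def format_polygon(s):
    # Strip the leading "POLYGON ((" and trailing "))", then split the pairs.
    return [tuple(float(i) for i in x.split(" ")) for x in s[10:-2].split(", ")]

def format_point(s):
    # Strip the leading "POINT (" and trailing ")".
    return tuple(float(i) for i in s[7:-1].split(" "))

print(format_point("POINT (4495 33442)"))
# (4495.0, 33442.0)
print(format_polygon("POLYGON ((5810 33296, 5813 33297))"))
# [(5810.0, 33296.0), (5813.0, 33297.0)]
```

<p>Swap <code>float</code> for <code>int</code> if you need the integer tuples shown in the desired output.</p>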
python|pandas|replace|python-re|parentheses
0
1,901,530
59,417,307
Django doesn't read form
<p>I have form:</p> <pre><code>&lt;form id="contact-form" class="contact__form" method="POST" action="{% url 'backcall' %}"&gt; {% csrf_token %} &lt;span class="text-color"&gt;Send letter&lt;/span&gt; &lt;div class="form-group"&gt; &lt;input name="name" type="text" class="form-control" placeholder="Name"&gt; &lt;/div&gt; &lt;div class="form-group"&gt; &lt;input type="tel" name="phone" class="form-control" placeholder="Phone (+380)" pattern="[\+][3][8][0]\d{9}" minlength="13" maxlength="13" /&gt; &lt;/div&gt; &lt;div class="form-group"&gt; &lt;input name="email" type="email" class="form-control" placeholder="Email"&gt; &lt;/div&gt; &lt;div class="form-group-2 mb-4"&gt; &lt;textarea name="message" class="form-control" rows="4" placeholder="Your letter"&gt;&lt;/textarea&gt; &lt;/div&gt; &lt;button class="btn btn-main" name="submit" type="submit"&gt;Send&lt;/button&gt; &lt;/form&gt; </code></pre> <p>views.py: </p> <pre><code>def backcall(request): backcall = BackCall(name = request.POST['name'], phone = request.POST['phone'], email=request.POST['email'] , message = request.POST['message']) backcall.save() return redirect('thanks') </code></pre> <p>models.py </p> <pre><code>class BackCall(models.Model): name = models.CharField(max_length=50) phone = models.CharField(max_length=13) email = models.EmailField() message = models.TextField(default=None) datetime = models.DateTimeField(auto_now_add=True) </code></pre> <p>When I fill out the form and submit nothing happens. When I follow the link 'backcall/' I get an <a href="https://i.stack.imgur.com/sIDOH.png" rel="nofollow noreferrer">error</a>.</p> <p>What the problem can be connected with and how to fix it? </p>
<p>Please search before asking; this is a well-known problem with an existing <a href="https://stackoverflow.com/questions/5895588/django-multivaluedictkeyerror-error-how-do-i-deal-with-it">solution</a>:</p> <blockquote> <p>Use the MultiValueDict's <code>get</code> method. This is also present on standard dicts and is a way to fetch a value while providing a default if it does not exist.</p> </blockquote> <pre><code>def backcall(request): backcall = BackCall(name = request.POST.get('name'), phone = request.POST.get('phone'), email=request.POST.get('email') , message = request.POST.get('message')) backcall.save() return redirect('thanks') </code></pre> <p>You should also check whether or not the record already exists in your database; a quick example would be:</p> <pre><code> def backcall(request): obj, created = BackCall.objects.get_or_create(email=request.POST.get('email'), defaults={'phone': request.POST.get('phone'), 'name': request.POST.get('name'), 'message': request.POST.get('message')}) if created: return redirect('thanks') return ... </code></pre>
python|django
0
1,901,531
58,901,094
Select corresponding values to a instant t in columns
<p>I work in Python and i have a pandas Dataframe with an evolution of steps at differents months :</p> <pre><code>+----+-------+------+ | Id | Month | Step | +----+-------+------+ | a | 1 | a_1 | | a | 4 | a_2 | | a | 6 | a_3 | | b | 1 | a_1 | | b | 2 | a_4 | +----+-------+------+ </code></pre> <p>I want to have the evolution of steps corresponding to each month in columns like this table :</p> <pre><code>+----+---------+----------+---------+---------+---------+---------+ | Id | Month_1 | Month_2 | Month_3 | Month_4 | Month_5 | Month_6 | +----+---------+----------+---------+---------+---------+---------+ | a | a_1 | a_1 | a_1 | a_2 | a_2 | a_3 | | b | a_1 | a_4 | a_4 | a_4 | a_4 | a_4 | +----+---------+----------+---------+---------+---------+---------+ </code></pre> <p>I don't find simple solution, so if someone have a solution, i take !</p>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.pivot_table.html" rel="nofollow noreferrer"><code>DataFrame.pivot_table</code></a>:</p> <pre><code>new_df=df.pivot_table(index='Id',columns='Month',values='Step',aggfunc=''.join).add_prefix('Month_').rename_axis(columns=None) print(new_df) Month_1 Month_2 Month_4 Month_6 Id a a_1 NaN a_2 a_3 b a_1 a_4 NaN NaN </code></pre> <hr> <p>If you want the whole range of months to appear, forward-filling the gaps, use:</p> <pre><code>new_df=( df.pivot_table(index='Id',columns='Month',values='Step',aggfunc=''.join) .reindex(columns=range(df['Month'].min(),df['Month'].max()+1)) .ffill(axis=1) .add_prefix('Month_') .rename_axis(columns=None) .reset_index()) print(new_df) Id Month_1 Month_2 Month_3 Month_4 Month_5 Month_6 0 a a_1 a_1 a_1 a_2 a_2 a_3 1 b a_1 a_4 a_4 a_4 a_4 a_4 </code></pre> <hr> <p>If you don't want the forward fill, remove <code>ffill</code>.</p>
python|pandas
1
1,901,532
31,280,228
Send myself an email using win32com.client
<p>I have a template from code I've gathered from different places to send emails in various of my scripts:</p> <pre><code>import win32com.client ################################################## ################################################## ################################################## ################################################## ################################################## #this is a mutipurpose email template def email_template(recipient, email_subject, mail_body, attachment1, attachment2): Format = { 'UNSPECIFIED' : 0, 'PLAIN' : 1, 'HTML' : 2, 'RTF' : 3} profile = "Outlook" #session = win32com.client.Dispatch("Mapi.Session") outlook = win32com.client.Dispatch("Outlook.Application") #session.Logon(profile) mainMsg = outlook.CreateItem(0) mainMsg.To = recipient mainMsg.BodyFormat = Format['RTF'] ######################### #check if there is a mail body try: mainMsg.Subject = email_subject except: mainMsg.Subject = 'No subject' ######################### #check if there is a mail body try: mainMsg.HTMLBody = mail_body except: mainMsg.HTMLBody = 'No email body defined' ######################### #add first attachement if available try: mainMsg.Attachments.Add(attachment1) except: pass ######################### #add second attachement if available try: mainMsg.Attachments.Add(attachment2) except: pass mainMsg.Send() #this line actually sends the email </code></pre> <p>Works perfectly. Simple. However I have a slight problem, I'm building a script that needs to send the user an email. Using this template, how do I get the users outlook email? I mean like something like just using <code>"me"</code> and it will get my address.</p> <p>Thanks!!</p>
<p>The <a href="https://msdn.microsoft.com/en-us/library/office/ff869341(v=office.15).aspx" rel="nofollow">CurrentUser</a> property of the Namespace or Account class allows to get the display name of the currently logged-on user as a Recipient object. The Recipient class provides the <a href="https://msdn.microsoft.com/en-us/library/office/ff867602(v=office.15).aspx" rel="nofollow">Address</a> property which returns a string representing the e-mail address of the Recipient. </p> <p>In case of Exchange server you may need to call more properties and methods:</p> <ol> <li>Use the <a href="https://msdn.microsoft.com/en-us/library/office/ff863677(v=office.15).aspx" rel="nofollow">AddressEntry</a> property of the Recipient class.</li> <li>Call the <a href="https://msdn.microsoft.com/en-us/library/office/ff869721(v=office.15).aspx" rel="nofollow">GetExchangeUser</a> method of the AddressEntry class which returns an ExchangeUser object that represents the AddressEntry if the AddressEntry belongs to an Exchange AddressList object such as the Global Address List (GAL) and corresponds to an Exchange user.</li> <li>Get the <a href="https://msdn.microsoft.com/en-us/library/office/ff862991(v=office.15).aspx" rel="nofollow">PrimarySmtpAddress</a> property value. Returns a string representing the primary Simple Mail Transfer Protocol (SMTP) address for the ExchangeUser.</li> </ol> <p>Finally, I'd recommend using the <a href="https://msdn.microsoft.com/en-us/library/office/ff865320(v=office.15).aspx" rel="nofollow">Recipients</a> property of the MailItem class which returns a Recipients collection that represents all the recipients for the Outlook item. 
The <a href="https://msdn.microsoft.com/en-us/library/office/ff866951.aspx" rel="nofollow">Add</a> method creates a new recipient in the Recipients collection.</p> <pre><code> Sub CreateStatusReportToBoss() Dim myItem As Outlook.MailItem Dim myRecipient As Outlook.Recipient Set myItem = Application.CreateItem(olMailItem) Set myRecipient = myItem.Recipients.Add("Eugene Astafiev") myItem.Subject = "Status Report" myItem.Display End Sub </code></pre> <p>Don't forget to call the Resolve or ResolveAll methods of the Recipient(s) class to get your recipients resolved against the address book.</p>
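<p>For completeness, a Python sketch of the same lookup chain via win32com (requires Windows with Outlook configured; this follows the Outlook object model described above but is an untested illustration, not a drop-in snippet):</p>

```python
import win32com.client

outlook = win32com.client.Dispatch("Outlook.Application")
namespace = outlook.GetNamespace("MAPI")

me = namespace.CurrentUser        # Recipient for the logged-on user
address = me.Address              # SMTP address, or an Exchange DN

# On Exchange, resolve the X.500 DN to the primary SMTP address.
entry = me.AddressEntry
if entry.Type == "EX":
    exchange_user = entry.GetExchangeUser()
    if exchange_user is not None:
        address = exchange_user.PrimarySmtpAddress

print(address)
```

<p>The resulting <code>address</code> string can then be assigned to <code>mainMsg.To</code> in the template from the question.</p>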
python|outlook|win32com
0
1,901,533
48,892,117
Python 2.7: Downloading PDFs with Selenium
<p>I am trying to click certain links on a webpage to download PDF files automatically using selenium. My problem is that i want to click on 3 different links that are named the same: "Lap Times". i have tried using their xpaths in <code>find_element_by_xpath</code> but the download does not take place. </p> <p>If i use <code>find_element_by_text("Lap Times")</code>, only the last link named "Lap Times" is clicked.</p> <p>Here is the webpage: <a href="https://www.fia.com/events/fia-formula-one-world-championship/season-2016/event-timing-information-5" rel="nofollow noreferrer">https://www.fia.com/events/fia-formula-one-world-championship/season-2016/event-timing-information-5</a> </p> <p>Here is my code:</p> <pre><code>years = ["2016", "2017"] races = [] for year in years: for i in range(20): page = "https://www.fia.com/events/fia-formula-one-world-championship/season-" + year + "/event-timing-information-" + str(i) races.append(page) def download_pdfs(races, xpath): profile = webdriver.FirefoxProfile() profile.set_preference("browser.helperApps.neverAsk.saveToDisk", "text/plain,application/octet-stream,application/pdf,application/x-pdf,application/vnd.pdf") profile.set_preference("browser.download.manager.showWhenStarting",False) profile.set_preference("browser.helperApps.neverAsk.openFile","text/plain,application/octet-stream,application/pdf,application/x-pdf,application/vnd.pdf") profile.set_preference("browser.helperApps.alwaysAsk.force", False) profile.set_preference("browser.download.manager.useWindow", False) profile.set_preference("browser.download.manager.focusWhenStarting", False) profile.set_preference("browser.helperApps.neverAsk.openFile", ""); profile.set_preference("browser.download.manager.alertOnEXEOpen", False); profile.set_preference("browser.download.manager.showAlertOnComplete", False); profile.set_preference("browser.download.manager.closeWhenDone", True); profile.set_preference("pdfjs.disabled", True); browser = webdriver.Firefox(profile) for 
race in races: browser.get(race) try: browser.find_element_by_xpath(xpath).click() except: "cannot download pdf:" + str(race) </code></pre> <p><strong>Download FP3 Lap Times PDFs by clicking on link "Lap Times" below the header "Third Practice"</strong></p> <pre><code>download_pdfs(races, "//*[@id='page-wrapper']/div/div[3]/div[2]/div/div[3]/div/ul[5]/li[1]/p/span/a") </code></pre>
<p>Instead of passing in an XPath, try this specifically:</p> <pre><code>browser.find_elements_by_xpath("//a[contains(text(), 'Lap Times')]")[1].click() </code></pre> <p>Note the plural version of the find-by method here; we're asking for all of the links with the text "Lap Times", then choosing the second one in the list (which is the Third Practice) to click on.</p> <p>I also recommend that you do not use a bare <code>except</code>. Either specify an exception, like <code>TimeoutException</code>, or don't use the try/except block at all; otherwise you're missing out on details of what problem is being reported.</p>
python|selenium|webdriver
0
1,901,534
49,178,794
Queue object shared between threads and objects in separate modules
<p>I am trying to build a system that will crawl a remote server continuously and download new files locally. I want crawling and downloading to be split into separate objects, and separate threads. To keep track of files found on the server and which still needs to be downloaded, I will use a PriorityQueue. </p> <p>Because the system will be larger later with more tasks added, I will need a main module that sits on top and I need to initiate the PriorityQueue in the main module. But I have a problem on how to share this PriorityQueue between the main module and the crawler.</p> <p>Here is the code below, ignoring the download part for now as it doesn't play into the problem yet, as I can't figure out how to make the Crawler object "see" the queue object created in main.py.</p> <p>main.py</p> <pre><code>import crawler import threading from queue import PriorityQueue class Watchdog(object): def __init__(self): self.queue = PriorityQueue def setup(self): self.crawler = crawler.Crawler() def run(self): t1 = threading.Thread(target=self.crawler.crawl(), args=(&lt;pysftp connection&gt;,)) t1.daemon=True t2.daemon=True t1.start() t2.start() t1.join() t2.join() </code></pre> <p>crawler.py</p> <pre><code>import pysftp class Crawler(object): def __init__(self, connection): self.connection = connection def crawl(self): callbacks = pysftp.WTCallbacks() self.connection.walktree(rootdir, fcallback=callbacks.file_cb, dcallback=callbacks.dir_cb, ucallback=callbacks.unk_cb) for fpath in callbacks.flist: with queue.mutex: if not fpath in queue.queue: queue.put(os.path.getmtime(fpath), fpath) </code></pre> <p>The problem is that I can not figure out how I can make the queue object I create in main.py to be reachable and shared in crawler.py. When I add the download task, it should also be able to see the queue object, and I need the queue to be synced over all modules, so that when a new file is added by crawler, the downloader will immediately see that.</p>
<p>If the crawler and downloader become separate processes, you need the <code>multiprocessing</code> module. Check the doc at <a href="https://docs.python.org/3.6/library/multiprocessing.html#exchanging-objects-between-processes" rel="nofollow noreferrer">https://docs.python.org/3.6/library/multiprocessing.html#exchanging-objects-between-processes</a></p> <p>For the plain threads shown in the question, a single <code>queue.PriorityQueue</code> created in <code>main.py</code> and passed into the <code>Crawler</code> constructor is enough; threads share memory, so both modules see the same object.</p> <p>You could also use <a href="http://www.celeryproject.org/" rel="nofollow noreferrer">celery</a>.</p>
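<p>A minimal runnable sketch of the thread-based variant (hypothetical file contents collapsed into one snippet): <code>main.py</code> creates the queue and injects it into the crawler, so <code>crawler.py</code> never has to reach back into the main module:</p>

```python
import threading
from queue import PriorityQueue

# --- crawler.py (stand-in) ---
class Crawler:
    def __init__(self, queue):
        self.queue = queue  # injected; shared with every other worker

    def crawl(self):
        # Pretend these (mtime, path) pairs were found on the server.
        for item in [(200, "b.txt"), (100, "a.txt")]:
            self.queue.put(item)

# --- main.py (stand-in) ---
queue = PriorityQueue()
crawler = Crawler(queue)

# Pass the method itself: target=crawler.crawl, not crawler.crawl()
t = threading.Thread(target=crawler.crawl, daemon=True)
t.start()
t.join()

# The downloader would consume in mtime (priority) order.
ordered = [queue.get() for _ in range(2)]
print(ordered)
# [(100, 'a.txt'), (200, 'b.txt')]
```

<p>The same hand-off works with <code>multiprocessing.Queue</code> and <code>multiprocessing.Process</code> if the workers later move out of the main process.</p>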
python|multithreading|concurrency|queue|priority-queue
0
1,901,535
70,933,730
AWS CDK Python - SubnetSelection and ISubnet objects
<h2>Background</h2> <p>I am attempting to create an EKS Cluster with the <a href="https://docs.aws.amazon.com/cdk/api/v1/python/aws_cdk.aws_eks/Cluster.html" rel="nofollow noreferrer">Cluster</a> object in Python using the AWS CDK.</p> <p>I have a Stack that constructs networking objects such as VPCs and Subnets. That Stack is defining three &quot;types&quot; of subnets:</p> <ol> <li>A control subnet group - contains EKS ENIs</li> <li>A worker subnet group - contains Worker node groups</li> <li>A public subnet group - uses public route tables and will be responsible for ALBs, etc.</li> </ol> <p>The code defining that information is below. This is coming from my Networking Stack:</p> <pre><code># Define the number of subnets to create in a for loop later on. This will be a shared value between the worker, control, and public subnets. subnet_count = range(1,3) # Create empty lists for each of our subnet types. These will hold our SubnetConfigurations that are passed to VPC creation self.public_subnets = [] self.worker_subnets = [] self.control_subnets = [] # Loop through our defined range above, creating the appropriate control, worker, and public subnets, aligning to CIDRs above for x in subnet_count: x = str(x) self.control_subnets.append(ec2.SubnetConfiguration( name = 'Control-0{}'.format(x), cidr_mask=28, subnet_type = ec2.SubnetType.PRIVATE_WITH_NAT, reserved = False )) self.worker_subnets.append(ec2.SubnetConfiguration( name = 'Worker-0{}'.format(x), cidr_mask=24, subnet_type = ec2.SubnetType.PRIVATE_WITH_NAT, reserved = False )) self.public_subnets.append(ec2.SubnetConfiguration( name = 'Public-0{}'.format(x), cidr_mask=27, map_public_ip_on_launch=True, subnet_type = ec2.SubnetType.PUBLIC, reserved = False )) </code></pre> <p>and then I create a VPC for use with EKS, by unpacking those SubnetConfiguration lists:</p> <pre><code> self.kubernetes_vpc = ec2.Vpc(self, &quot;Kubernetes&quot;, cidr=my_cidr, 
default_instance_tenancy=ec2.DefaultInstanceTenancy.DEFAULT, enable_dns_hostnames=True, enable_dns_support=True, flow_logs=None, gateway_endpoints=None, max_azs=2, nat_gateway_provider=ec2.NatProvider.gateway(), nat_gateways=1, # this is 1 PER AZ subnet_configuration=[*self.public_subnets,*self.control_subnets,*self.worker_subnets], vpc_name=&quot;Kubernetes&quot;, vpn_connections=None ) </code></pre> <p>I pass this stack to the EKS Cluster Stack, referenced as <code>my_network_stack</code></p> <p>So what I'm trying to do now is specifically call out, using <code>subnet_group_name</code> parameter of <code>ec2.SubnetSelection</code> the names of the Control Subnets that are created in the other Stack, and hand those over to the <code>Cluster</code>'s <code>vpc_subnets</code> parameter in the EKS Stack.</p> <pre><code> self.control_subnets = [] for subnet_config in my_network_stack.control_subnets: self.selected_subnets = my_network_stack.kubernetes_vpc.select_subnets(subnet_group_name=subnet_config.name).subnets </code></pre> <p>The <code>my_network_stack.control_subnets</code> is the <code>self.control_subnets</code> defined earlier in the Networking Stack.</p> <p>This should be giving me a list of ISubnet objects that were selected properly based on the Control Subnet group names, I set up a simple test here:</p> <pre><code>for item in self.selected_subnets: logging.debug(type(item)) </code></pre> <p>which returns</p> <pre><code>DEBUG:root:&lt;class 'aws_cdk.aws_ec2.PrivateSubnet'&gt; DEBUG:root:&lt;class 'aws_cdk.aws_ec2.PrivateSubnet'&gt; </code></pre> <p>Those are ISubnet objects, correct?</p> <h2>Method 1</h2> <p>My first attempt to try to get this to work is to provide an unpacker for the specific list, which should be a group of ISubnet objects (truncated Cluster parameters):</p> <pre><code>self.Cluster = eks.Cluster( vpc_subnets = [ ec2.SubnetSelection(subnets=[*self.selected_subnets]) ] </code></pre> <p>which gives me the error:</p> 
<pre><code>jsii.errors.JSIIError: Expected object reference, got &quot;latest&quot; </code></pre> <p>Not entirely sure what I'm doing wrong. I've tried some variations on passing in the correct list of ISubnet objects, even when I specifically call out the array index:</p> <pre><code>vpc_subnets = [ ec2.SubnetSelection(subnets=[self.selected_subnets[0], self.selected_subnets[1]]) ] </code></pre> <p>but the same error occurs.</p> <h2>Method 2</h2> <p>Use the actual <code>SubnetSelection</code> function to get a list of ISubnet Objects:</p> <pre><code>vpc_subnets = [ ec2.SubnetSelection(subnets= [my_network_stack.kubernetes_vpc.select_subnets(subnet_group_name=self.control_subnet_names[0]).subnets, my_network_stack.kubernetes_vpc.select_subnets(subnet_group_name=self.control_subnet_names[1])]) ] </code></pre> <p>which gives me the error:</p> <pre><code>jsii.errors.JSIIError: Expected object reference, got [{&quot;$jsii.byref&quot;:&quot;aws-cdk-lib.aws_ec2.PrivateSubnet@10011&quot;},{&quot;$jsii.byref&quot;:&quot;aws-cdk-lib.aws_ec2.PrivateSubnet@10012&quot;}] </code></pre> <p>This looks like it could potentially be list of dictionary references with the actual ISubnet Object, in that case, not sure how it's better than Method 1, where the actual object is referenced.</p> <p>Output of <code>pip freeze</code>:</p> <pre><code>$ pip freeze attrs==21.4.0 aws-cdk-lib==2.8.0 cattrs==1.10.0 constructs==10.0.37 jsii==1.52.1 publication==0.0.3 python-dateutil==2.8.2 six==1.16.0 typing-extensions==4.0.1 </code></pre> <h2>Update: Solution</h2> <p>As the answerer pointed out, this error <code>expected object, got 'latest'</code> was related to the ALB version in the cluster creation statement, not the subnets being passed in. That was the problem all along. 
I've included that (broken) code below:</p> <pre><code> self.cluster = eks.Cluster( self, &quot;InfrastructureCluster&quot;, default_capacity_type=eks.DefaultCapacityType.NODEGROUP, alb_controller=eks.AlbControllerOptions(version='latest'), endpoint_access=eks.EndpointAccess.PUBLIC_AND_PRIVATE, version=eks.KubernetesVersion.V1_21, cluster_name=&quot;InfrastructureCluster&quot;, security_group=my_network_stack.controlplane_security_group, vpc=my_network_stack.kubernetes_vpc, vpc_subnets= [ ec2.SubnetSelection(subnets=self.selected_subnets) ], ) </code></pre> <p>Additionally, this doesn't fix the problem with passing in the subnets, but I was able to finally get this code to work. The key here is that <code>select_subnets(subnet_group_name=subnet_config</code> returns a list of ISubnet objects, so you have separate that into objects, then unpack that into the cluster's <code>vpc_subnets</code>:</p> <pre><code>for subnet_config in my_network_stack.control_subnets: for item in my_network_stack.kubernetes_vpc.select_subnets(subnet_group_name=subnet_config.name).subnets: self.selected_subnets.append(item) # Later, during cluster creation: vpc_subnets= [ ec2.SubnetSelection(subnets=[*self.selected_subnets]) ] </code></pre> <p>I was only able to get the above to work, combinations of passing in the list object (SubnetSelection(subnets=[]) requires a list) would lead to syntax errors.</p>
<p>The following works fine and is not the cause of the issue:</p> <pre class="lang-python prettyprint-override"><code>self.Cluster = eks.Cluster( vpc_subnets = [SubnetSelection(subnets=selected_subnets)] </code></pre> <p>The cause of the issue is the following line in the initialization of the <code>Cluster</code>, which you provided in chat:</p> <pre class="lang-python prettyprint-override"><code> alb_controller=eks.AlbControllerOptions(version=&quot;latest&quot;) </code></pre> <p>The <a href="https://docs.aws.amazon.com/cdk/api/v1/docs/@aws-cdk_aws-eks.AlbControllerOptions.html#version" rel="nofollow noreferrer"><code>version</code> parameter of the <code>AlbControllerOptions</code></a> expects an instance of <a href="https://docs.aws.amazon.com/cdk/api/v1/docs/@aws-cdk_aws-eks.AlbControllerVersion.html" rel="nofollow noreferrer"><code>AlbControllerVersion</code></a>, not a string.</p> <p>The correct code would be:</p> <pre class="lang-python prettyprint-override"><code> alb_controller=eks.AlbControllerOptions(version=eks.AlbControllerVersion.V2_3_1), </code></pre>
python|aws-cdk
2
1,901,536
60,226,735
How to count overlapping datetime intervals in Pandas?
<p>I have a following DataFrame with two datetime columns:</p> <pre><code> start end 0 01.01.2018 00:47 01.01.2018 00:54 1 01.01.2018 00:52 01.01.2018 01:03 2 01.01.2018 00:55 01.01.2018 00:59 3 01.01.2018 00:57 01.01.2018 01:16 4 01.01.2018 01:00 01.01.2018 01:12 5 01.01.2018 01:07 01.01.2018 01:24 6 01.01.2018 01:33 01.01.2018 01:38 7 01.01.2018 01:34 01.01.2018 01:47 8 01.01.2018 01:37 01.01.2018 01:41 9 01.01.2018 01:38 01.01.2018 01:41 10 01.01.2018 01:39 01.01.2018 01:55 </code></pre> <p>I would like to count how many <em>starts</em> (intervals) are active at the same time before they end at given time (in other words: <strong>how many times each row overlaps with the rest of the rows</strong>). </p> <p>E.g. from 00:47 to 00:52 only one is active, from 00:52 to 00:54 two, from 00:54 to 00:55 only one again, and so on.</p> <p>I tried to stack columns onto each other, sort by date and by iterrating through whole dataframe give each "start" +1 to counter and -1 to each "end". It works but on my original data frame, where I have few millions of rows, <strong>iteration takes forever</strong> - I need to find a quicker way.</p> <p>My original <em>basic-and-not-very-good</em> code:</p> <pre><code>import pandas as pd import numpy as np df = pd.read_csv('something.csv', sep=';') df = df.stack().to_frame() df = df.reset_index(level=1) df.columns = ['status', 'time'] df = df.sort_values('time') df['counter'] = np.nan df = df.reset_index().drop('index', axis=1) print(df.head(10)) </code></pre> <p>gives:</p> <pre><code> status time counter 0 start 01.01.2018 00:47 NaN 1 start 01.01.2018 00:52 NaN 2 stop 01.01.2018 00:54 NaN 3 start 01.01.2018 00:55 NaN 4 start 01.01.2018 00:57 NaN 5 stop 01.01.2018 00:59 NaN 6 start 01.01.2018 01:00 NaN 7 stop 01.01.2018 01:03 NaN 8 start 01.01.2018 01:07 NaN 9 stop 01.01.2018 01:12 NaN </code></pre> <p>and:</p> <pre><code>counter = 0 for index, row in df.iterrows(): if row['status'] == 'start': counter += 1 else: counter -= 1 
df.loc[index, 'counter'] = counter </code></pre> <p>Final output:</p> <pre><code> status time counter 0 start 01.01.2018 00:47 1.0 1 start 01.01.2018 00:52 2.0 2 stop 01.01.2018 00:54 1.0 3 start 01.01.2018 00:55 2.0 4 start 01.01.2018 00:57 3.0 5 stop 01.01.2018 00:59 2.0 6 start 01.01.2018 01:00 3.0 7 stop 01.01.2018 01:03 2.0 8 start 01.01.2018 01:07 3.0 9 stop 01.01.2018 01:12 2.0 </code></pre> <p>Is there any way I can do this by <strong>NOT</strong> using iterrows()?</p> <p>Thanks in advance!</p>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.cumsum.html" rel="noreferrer"><code>Series.cumsum</code></a> with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="noreferrer"><code>Series.map</code></a> (or <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.replace.html" rel="noreferrer"><code>Series.replace</code></a>):</p> <pre><code>new_df = df.melt(var_name = 'status',value_name = 'time').sort_values('time') new_df['counter'] = new_df['status'].map({'start':1,'end':-1}).cumsum() print(new_df) status time counter 0 start 2018-01-01 00:47:00 1 1 start 2018-01-01 00:52:00 2 11 end 2018-01-01 00:54:00 1 2 start 2018-01-01 00:55:00 2 3 start 2018-01-01 00:57:00 3 13 end 2018-01-01 00:59:00 2 4 start 2018-01-01 01:00:00 3 12 end 2018-01-01 01:03:00 2 5 start 2018-01-01 01:07:00 3 15 end 2018-01-01 01:12:00 2 14 end 2018-01-01 01:16:00 1 16 end 2018-01-01 01:24:00 0 6 start 2018-01-01 01:33:00 1 7 start 2018-01-01 01:34:00 2 8 start 2018-01-01 01:37:00 3 9 start 2018-01-01 01:38:00 4 17 end 2018-01-01 01:38:00 3 10 start 2018-01-01 01:39:00 4 19 end 2018-01-01 01:41:00 3 20 end 2018-01-01 01:41:00 2 18 end 2018-01-01 01:47:00 1 21 end 2018-01-01 01:55:00 0 </code></pre> <hr> <p>We could also use <a href="https://numpy.org/doc/1.18/reference/generated/numpy.cumsum.html" rel="noreferrer"><code>numpy.cumsum</code></a>:</p> <pre><code>new_df['counter'] = np.where(new_df['status'].eq('start'),1,-1).cumsum() </code></pre>
python|pandas|datetime|count
6
1,901,537
72,173,566
Python Question about Files with lists and functions. Please Help. Thank you
<p>1.Define a function that:</p> <p>-receives the name of the data file as a parameter</p> <p>-reads the data from the file into 2 lists, a list of names and a list of attendance</p> <p>-returns the lists.</p> <p>2.Define another function which takes both lists as parameters and return the names of students whose attendance was less than 50 percent.</p> <p>This is what the file contains:</p> <pre><code>Jack,10 Charlie,6 James,9 Daniel,4 Jake,2 </code></pre> <p>I am confused how to go about doing the second part of the question as it prints out only the last name Jake and not Daniel. I know its something to do with how I have the return statement positioned. Does anyone know how to fix it? This is what I have so far:</p> <pre><code>def file(filename): names = [] attendance = [] with open(filename) as data_file: for line in data_file: line_data = line.split(',') names.append(line_data[0].strip()) attendance.append(int(line_data[1].strip())) return names, attendance def poor_attendance(name, attendance): for x in attendance: if x &lt; 5: w = name[x] return w def main(): names, attendance = file(&quot;school.txt&quot;) school = poor_attendance(names, attendance) print(f&quot;The following students attendance is less than 50 %:\n{school}&quot;) if __name__ == '__main__': main() </code></pre>
<p>Use <code>zip</code> to read the two lists pairwise so you can return the names paired with attendance below the 50% cutoff. With the sample data's 10 sessions, 50% corresponds to a count of 5:</p> <pre><code>def poor_attendance(names, attendance): return [n for n, a in zip(names, attendance) if a &lt; 5] </code></pre>
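<p>Running it on the sample file's rows returns both low-attendance students, not just the last one (a quick sketch; the 50% line is assumed to be 5 of 10 sessions):</p>

```python
def poor_attendance(names, attendance):
    # Pair each name with its attendance count, keep those below 50%.
    return [n for n, a in zip(names, attendance) if a < 5]

names = ["Jack", "Charlie", "James", "Daniel", "Jake"]
attendance = [10, 6, 9, 4, 2]

print(poor_attendance(names, attendance))
# ['Daniel', 'Jake']
```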
python|list|function|file
1
1,901,538
65,500,221
dict(sorted(dictionary.items(), key=operator.itemgetter(1)) does not always return an ordered dict when the value is a list
<p>I have a dict:</p> <pre><code>count2:defaultdict(&lt;class 'list'&gt;, {'i': [3, 2, 2, 1], 'w': [2, 2], 'p': [2, 2], 'd': [2, 2], 'm': [2, 2], 'y': [2, 2, 2, 1], 'x': [2, 2, 4, 1], 'j': [2, 2], 'o': [2, 1], 'r': [2, 1]}) </code></pre> <p>when I try to sort it by using</p> <pre><code>ordered = dict(sorted(count2.items(), key=operator.itemgetter(1), reverse=True)) </code></pre> <p>It not always sorts it like I want it to sort. (the value must have the biggest number and then descend) so it returns this:</p> <pre><code>orderedStart:{'i': [3, 2, 2, 1], 'x': [2, 2, 4, 1], 'y': [2, 2, 2, 1], 'w': [2, 2], 'p': [2, 2], 'd': [2, 2], 'm': [2, 2], 'j': [2, 2], 'o': [2, 1], 'r': [2, 1]} </code></pre> <p>everything is right except for that <code>x</code> should be in front of <code>i</code> since <code>4 &gt; 3</code>. Are some indexes more prioritized?</p> <p>To facilitate the users , here is an Example of a well sorted list using the same code.</p> <p>Before:</p> <pre><code>count2:defaultdict(&lt;class 'list'&gt;, {'r': [2, 2, 2, 1], 'g': [2, 1], 'e': [3, 1], 'n': [5, 1], 't': [4, 1], 'i': [2, 1], 'o': [5, 1], 'm': [2, 1]}) </code></pre> <p>After:</p> <pre><code>{'n': [5, 1], 'o': [5, 1], 't': [4, 1], 'e': [3, 1], 'r': [2, 2, 2, 1], 'g': [2, 1], 'i': [2, 1], 'm': [2, 1]} </code></pre>
<p>You misunderstand how Python compares sequences. You have asked it to compare <code>[3, 2, 2, 1]</code> and <code>[2, 2, 4, 1]</code>. Since 3 &gt; 2, the first list sorts in front.</p> <p>Python uses &quot;lexicographic comparison&quot;, which is identical to the way you look up words in a dictionary. First you compare the first letters. If they are different, you're done; if they are the same, you look at the second letter. And so forth.</p>
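<p>If the intended order is by the largest number anywhere in each list (so <code>x</code> with its 4 should come before <code>i</code> with its 3), sort on <code>max</code> of each value instead of the value itself; a small sketch:</p>

```python
count2 = {'i': [3, 2, 2, 1], 'w': [2, 2], 'x': [2, 2, 4, 1],
          'y': [2, 2, 2, 1], 'o': [2, 1]}

# Rank by each list's maximum; ties keep their original order (stable sort).
ordered = dict(sorted(count2.items(), key=lambda kv: max(kv[1]), reverse=True))
print(list(ordered))
# ['x', 'i', 'w', 'y', 'o']
```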
python|python-3.x|list|sorting|dictionary
1
1,901,539
35,212,271
Numpy create two arrays using fromiter simultaneously
<p>I have an iterator that looks something like the following</p> <pre><code>it = ((x, x**2) for x in range(20)) </code></pre> <p>and what I want is two arrays. one of the <code>x</code>s and the other of the <code>x**2</code>s but I don't actually know the number of elements, and I can't convert from one entry to the other, so I couldn't build the first, and then build the second from the first.</p> <p>If I had only one outcome with unknown size, I could use <code>np.fromiter</code> to have it dynamically allocate efficiently, e.g.</p> <pre><code>y = np.fromiter((x[0] for x in it), float) </code></pre> <p>with two I would hope I could do something like</p> <pre><code>ita, itb = itertools.tee(it) y = np.fromiter((x[0] for x in ita), float) y2 = np.fromiter((x[1] for x in itb), float) </code></pre> <p>but because the first call exhausts the iterator, I'd be better off doing</p> <pre><code>lst = list(it) y = np.fromiter((x[0] for x in lst), float, len(lst)) y2 = np.fromiter((x[1] for x in lst), float, len(lst)) </code></pre> <p>Because tee will be filling a deque the size of the whole list anyways. I'd love to avoid copying the iterator into a list before then copying it again into an array, but I can't think of a way to incrementally build an array without doing it entirely manually. In addition, <code>fromiter</code> seems to be written in c, so writing it in python would probably end up with no negligible difference over making a list first.</p>
<p>You could use <code>np.fromiter</code> to build one array with all the values, and then slice the array:</p> <pre><code>In [103]: it = ((x, x**2) for x in range(20)) In [104]: import itertools In [105]: y = np.fromiter(itertools.chain.from_iterable(it), dtype=float) In [106]: y Out[106]: array([ 0., 0., 1., 1., 2., 4., 3., 9., 4., 16., 5., 25., 6., 36., 7., 49., 8., 64., 9., 81., 10., 100., 11., 121., 12., 144., 13., 169., 14., 196., 15., 225., 16., 256., 17., 289., 18., 324., 19., 361.]) In [107]: y, y2 = y[::2], y[1::2] In [108]: y Out[108]: array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19.]) In [109]: y2 Out[109]: array([ 0., 1., 4., 9., 16., 25., 36., 49., 64., 81., 100., 121., 144., 169., 196., 225., 256., 289., 324., 361.]) </code></pre> <p>The above manages to load the data from the iterator into arrays without the use of intermediate Python lists. The underlying data in the arrays is not contiguous, however. Many operations are faster on contiguous arrays:</p> <pre><code>In [19]: a = np.arange(10**6) In [20]: y1 = a[::2] In [21]: z1 = np.ascontiguousarray(y1) In [24]: %timeit y1.sum() 1000 loops, best of 3: 975 µs per loop In [25]: %timeit z1.sum() 1000 loops, best of 3: 464 µs per loop </code></pre> <p>So you may wish to make <code>y</code> and <code>y2</code> contiguous:</p> <pre><code>y = np.ascontiguousarray(y) y2 = np.ascontiguousarray(y2) </code></pre> <p>Calling <code>np.ascontiguousarray</code> requires copying the non-contiguous data in <code>y</code> and <code>y2</code> into new arrays. 
Unfortunately, I do not see a way to create <code>y</code> and <code>y2</code> as contiguous arrays without copying.</p> <hr> <p>Here is a benchmark comparing the use of an intermediate Python list to NumPy slices (with and without <code>ascontiguousarray</code>):</p> <pre><code>import numpy as np import itertools as IT def using_intermediate_list(g): lst = list(g) y = np.fromiter((x[0] for x in lst), float, len(lst)) y2 = np.fromiter((x[1] for x in lst), float, len(lst)) return y, y2 def using_slices(g): y = np.fromiter(IT.chain.from_iterable(g), dtype=float) y, y2 = y[::2], y[1::2] return y, y2 def using_slices_contiguous(g): y = np.fromiter(IT.chain.from_iterable(g), dtype=float) y, y2 = y[::2], y[1::2] y = np.ascontiguousarray(y) y2 = np.ascontiguousarray(y2) return y, y2 def using_array(g): y = np.array(list(g)) y, y2 = y[:, 0], y[:, 1] return y, y2 </code></pre> <hr> <pre><code>In [27]: %timeit using_intermediate_list(((x, x**2) for x in range(10**6))) 1 loops, best of 3: 376 ms per loop In [28]: %timeit using_slices(((x, x**2) for x in range(10**6))) 1 loops, best of 3: 220 ms per loop In [29]: %timeit using_slices_contiguous(((x, x**2) for x in range(10**6))) 1 loops, best of 3: 237 ms per loop In [34]: %timeit using_array(((x, x**2) for x in range(10**6))) 1 loops, best of 3: 707 ms per loop </code></pre>
python|arrays|numpy
2
1,901,540
35,228,779
How to send AT commands to a USB modem on Ubuntu
<p>like this command: <code>*342*55*225*5#</code></p> <p>I installed Ubuntu and minicom on it. I tried to follow the guide written in <a href="http://www.thegeekstuff.com/2013/05/modem-at-command/" rel="nofollow">this article</a>, but I was blocked because when I entered AT and clicked enter, it did not respond... </p> <p>On python I was following up a method but when I entered this command </p> <pre><code>comport = serial.Serial('ttyUSB4') </code></pre> <p>I got this error :</p> <pre><code>self.fd = os.open(self.portstr, os.O_RDWR|os.O_NOCTTY|os.O_NONBLOCK) OSError: [Errno 2] No such file or directory: 'ttyUSB4' </code></pre>
<p>Try providing the full path to the device:</p> <pre><code>comport = serial.Serial('/dev/ttyUSB4') </code></pre> <p>But prefer the udev paths; they will not change if you plug in devices in a different order:</p> <pre><code>comport = serial.Serial('/dev/serial/by-id/&lt;nameofyourdevice&gt;') </code></pre>
python|ubuntu
0
1,901,541
45,161,029
logger doesn't output info level even though it is enabled
<p>I'm creating a logger object. Set logging level to <code>logging.INFO</code>. After this I do the following:</p> <pre><code>&gt;&gt;&gt; import logging &gt;&gt;&gt; logger = logging.getLogger('mylogger') &gt;&gt;&gt; logger.setLevel(logging.INFO) &gt;&gt;&gt; logger.isEnabledFor(logging.INFO) True </code></pre> <p>Ok, now try this:</p> <pre><code>logger.info('123') </code></pre> <p>The last line prints nothing. However:</p> <pre><code>&gt;&gt;&gt; logger.critical(123) 123 </code></pre> <p>I know it has to be something very simple that I'm missing. What would that be?</p> <p>Thanks</p>
<p>You need to add a handler. By default there is none, and the &quot;last resort&quot; fallback handler only emits messages at <code>WARNING</code> level or above, which is why <code>critical</code> printed while <code>info</code> did not.</p> <p>Add this code after the line with <code>getLogger</code>:</p> <pre><code>console_handler = logging.StreamHandler() console_handler.setLevel(logging.INFO) logger.addHandler(console_handler) </code></pre> <p>Also, look at the examples in the <a href="https://docs.python.org/3.6/howto/logging.html#logging-basic-tutorial" rel="nofollow noreferrer">official tutorial</a>.</p>
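To verify the fix end to end, the same handler can point at an in-memory stream instead of the console (the StringIO target here is only for demonstration):

```python
import io
import logging

logger = logging.getLogger('mylogger')
logger.setLevel(logging.INFO)

# Handler writing to an in-memory buffer so the output can be inspected
buf = io.StringIO()
console_handler = logging.StreamHandler(buf)
console_handler.setLevel(logging.INFO)
logger.addHandler(console_handler)

logger.info('123')
print(buf.getvalue())  # 123
```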
python-3.x|logging
1
1,901,542
45,259,307
Improving performance of a Python function that outputs the pixels that are different between two images
<p>I'm working on a computer vision project and am looking to build a fast function that compares two images and outputs only the pixels where the differences between the pixels of the two images are sufficiently different. Other pixels get set to (0,0,0). In practice, I want the camera to detect objects and ignore the background. </p> <p>My issue is the function doesn't run fast enough. What are some ways to speed things up?</p> <pre><code>def get_diff_image(fixed_image): #get new image new_image = current_image() #get diff diff = abs(fixed_image-new_image) #creating a filter array filter_array = np.empty(shape = (fixed_image.shape[0], fixed_image.shape[1])) for idx, row in enumerate(diff): for idx2, pixel in enumerate(row): mag = np.linalg.norm(pixel) if mag &gt; 40: filter_array[idx][idx2] = True else: filter_array[idx][idx2] = False #applying filter filter_image = np.copy(new_image) filter_image[filter_array == False] = [0, 0, 0] return filter_image </code></pre>
<p>As others have mentioned, your biggest slow down in this code is iterating over every pixel in Python. Since Python is an interpreted language, these iterations take much longer than their equivalents in C/C++, which numpy uses under the hood.</p> <p>Conveniently, you can specify an axis for <code>numpy.linalg.norm</code>, so you can get all the magnitudes in one numpy command. In this case, your pixels are on axis 2, so we'll take the norm on that axis, like this:</p> <pre><code>mags = np.linalg.norm(diff, axis=2) </code></pre> <p>Here, <code>mags</code> will have the same shape as <code>filter_array</code>, and each location will hold the magnitude of the corresponding pixel.</p> <p>Using a boolean operator on a numpy array returns an array of bools, so:</p> <pre><code>filter_array = mags &gt; 40 </code></pre> <p>With the loops removed, the whole thing looks like this:</p> <pre><code>def get_diff_image(fixed_image): #get new image new_image = current_image() #get diff diff = abs(fixed_image-new_image) #creating a filter array mags = np.linalg.norm(diff, axis=2) filter_array = mags &gt; 40 #applying filter filter_image = np.copy(new_image) filter_image[filter_array == False] = [0, 0, 0] return filter_image </code></pre> <p>But there is still more efficiency to be gained. </p> <p>As noted by pete2fiddy, the magnitude of a vector doesn't depend on its direction. The absolute value operator only changes direction, not magnitude, so we just don't need it here. Sweet!</p> <p>The biggest remaining performance gain is to avoid copying the image. If you need to preserve the original image, start by allocating zeros for the output array since zeroing memory is often hardware accelerated. Then, copy only the required pixels. 
If you don't need the original and only plan to use the filtered one, then modifying the image in-place will give much better performance.</p> <p>Here's an updated function with those changes in place:</p> <pre><code>def get_diff_image(fixed_image): #get new image new_image = current_image() # Compute difference magnitudes mags = np.linalg.norm(fixed_image - new_image, axis=2) mask = mags &gt; 40 # Preserve original image filter_image = np.zeros_like(new_image) filter_image[mask] = new_image[mask] return filter_image # Avoid copy entirely (overwrites original!) # new_image[~mask] = [0, 0, 0] # return new_image </code></pre>
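A quick sanity check of the vectorized filter on a tiny synthetic pair of frames (threshold 40, as in the question):

```python
import numpy as np

# Two 2x2 RGB frames: one pixel changes a lot, one changes a little
fixed_image = np.zeros((2, 2, 3))
new_image = np.zeros((2, 2, 3))
new_image[0, 0] = [30, 30, 30]   # |diff| = 30*sqrt(3) ~ 52, above 40
new_image[1, 1] = [10, 0, 0]     # |diff| = 10, below 40

mags = np.linalg.norm(new_image - fixed_image, axis=2)
filter_image = np.zeros_like(new_image)
filter_image[mags > 40] = new_image[mags > 40]
print(filter_image[0, 0], filter_image[1, 1])  # [30. 30. 30.] [0. 0. 0.]
```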
python|opencv|computer-vision
2
1,901,543
55,508,210
Grouping and Summing in Pandas
<p>I have a dataframe with two columns. The first column contains <code>years</code> and the second column contain <code>value</code>. I want to group a certain year and change it to one name for that group and add all the corresponding values.</p> <p>For example, below is the small dataset</p> <pre><code>years value 1950 3 1951 1 1952 2 1961 4 1964 10 1970 34 </code></pre> <p>The output should look like</p> <pre><code>years value 1950's 6 1960's 14 1970's 34 </code></pre> <p>I am trying this in Python using <code>pandas</code> and tried a lot many ways, converting to dict or for loop and but every time I was not able to achieve as desired. Can someone please help?</p>
<p>Use integer division, multiply by <code>10</code>, cast to string and add <code>s</code>, then use this Series for aggregating <code>sum</code>:</p> <pre><code>y = ((df['years'] // 10) * 10).astype(str) + 's' df = df.groupby(y)['value'].sum().reset_index() print (df) years value 0 1950s 6 1 1960s 14 2 1970s 34 </code></pre> <p><strong>Detail</strong>:</p> <pre><code>print (y) 0 1950s 1 1950s 2 1950s 3 1960s 4 1960s 5 1970s Name: years, dtype: object </code></pre>
python|pandas|dataframe
3
1,901,544
53,997,102
How to "reconnect" an ORM object or list of such objects with the database?
<p>My understanding is that the following code will return an error:</p> <pre><code>from src.mysqlClient import db_session from src.mysqlClient.models import AdvertDom with db_session() as session: advert_doms = session.query(AdvertDom).all() for advert_dom in advert_doms: print(advert_dom.HTMLContent) </code></pre> <p>My understanding is that the error is caused by the session ending, which disconnects the <code>advert_doms</code> list from the database.</p> <hr> <p>If I have a function that returns an ORM object or list of ORM objects, how can I have the objects later "reconnect" with the database so that the code above would work?</p> <p>Here's an example of what I mean:</p> <pre><code>from src.mysqlClient import db_session from src.mysqlClient.models import AdvertDom def function_one(): with db_session() as session: advert_doms = session.query(AdvertDom).all() return advert_doms def function_two() advert_doms = function_one() # TODO: Do something here so that the code below will work. for advert_dom in advert_doms: print(advert_dom.HTMLContent) </code></pre>
<p>The answer is to create a new session and do <code>session.add(orm_object)</code>:</p> <pre><code>from src.mysqlClient import db_session from src.mysqlClient.models import AdvertDom def function_one(): with db_session() as session: advert_doms = session.query(AdvertDom).all() return advert_doms def function_two() advert_doms = function_one() with db_session() as session: # &lt;-- New line of code for advert_dom in advert_doms: session.add(advert_dom) # &lt;-- New line of code print(advert_dom.HTMLContent) </code></pre> <hr> <p>The other way to avoid this problem is to just have a single global session object that doesn't get closed until both functions have run.</p>
python|sqlalchemy
0
1,901,545
58,367,576
How to iterate through svg elements in Python
<p>I have been trying to iterate across all svg elements on page with defined characterictics to click on them and parse then, but found it out only how to open the first element Do you know how to open all of them? Any feedback is appreciated :)</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;tr class="MuiTableRow-root"&gt; &lt;td class="MuiTableCell-root jss346 MuiTableCell-alignLeft MuiTableCell-sizeSmall MuiTableCell-body"&gt;АМАРИЛ М ТАБЛ. П/ПЛЕН/ОБ. 2МГ+500МГ №30&lt;/td&gt; &lt;td class="MuiTableCell-root jss346 MuiTableCell-alignRight MuiTableCell-sizeSmall MuiTableCell-body"&gt;1&lt;/td&gt; &lt;td class="MuiTableCell-root jss346 MuiTableCell-alignLeft MuiTableCell-sizeSmall MuiTableCell-body"&gt;&lt;a class="MuiTypography-root MuiLink-root MuiLink-underlineHover jss348 MuiTypography-colorPrimary"&gt;&lt;svg class="MuiSvgIcon-root jss350" focusable="false" viewBox="0 0 24 24" aria-hidden="true" role="presentation"&gt;&lt;path d="M18 17H6v-2h12v2zm0-4H6v-2h12v2zm0-4H6V7h12v2zM3 22l1.5-1.5L6 22l1.5-1.5L9 22l1.5-1.5L12 22l1.5-1.5L15 22l1.5-1.5L18 22l1.5-1.5L21 22V2l-1.5 1.5L18 2l-1.5 1.5L15 2l-1.5 1.5L12 2l-1.5 1.5L9 2 7.5 3.5 6 2 4.5 3.5 3 2v20z"&gt;&lt;/path&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/td&gt; &lt;/tr&gt; &lt;tr class="MuiTableRow-root"&gt; &lt;td class="MuiTableCell-root jss346 MuiTableCell-alignLeft MuiTableCell-sizeSmall MuiTableCell-body"&gt;АМАРИЛ М ТАБЛ. П/ПЛЕН/ОБ. 
2МГ+500МГ №30&lt;/td&gt; &lt;td class="MuiTableCell-root jss346 MuiTableCell-alignRight MuiTableCell-sizeSmall MuiTableCell-body"&gt;1&lt;/td&gt; &lt;td class="MuiTableCell-root jss346 MuiTableCell-alignLeft MuiTableCell-sizeSmall MuiTableCell-body"&gt;&lt;a class="MuiTypography-root MuiLink-root MuiLink-underlineHover jss348 MuiTypography-colorPrimary"&gt;&lt;svg class="MuiSvgIcon-root jss350" focusable="false" viewBox="0 0 24 24" aria-hidden="true" role="presentation"&gt;&lt;path d="M18 17H6v-2h12v2zm0-4H6v-2h12v2zm0-4H6V7h12v2zM3 22l1.5-1.5L6 22l1.5-1.5L9 22l1.5-1.5L12 22l1.5-1.5L15 22l1.5-1.5L18 22l1.5-1.5L21 22V2l-1.5 1.5L18 2l-1.5 1.5L15 2l-1.5 1.5L12 2l-1.5 1.5L9 2 7.5 3.5 6 2 4.5 3.5 3 2v20z"&gt;&lt;/path&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/td&gt; &lt;/tr&gt;</code></pre> </div> </div> </p> <p>and so on...</p> <p>My code:</p> <pre><code>chrome_options = webdriver.ChromeOptions() chrome_options.add_argument("start-maximized") driver = webdriver.Chrome(options=chrome_options) driver.get(url) XPATH = "//*[name()='svg' and contains(@class, 'jss350')]" WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//*[name()='svg' and contains(@class, 'jss350')]"))).click() driver.quit() </code></pre>
<p>Your <code>XPath</code> is actually the right approach for <code>svg</code> nodes: they live in an XML namespace, so a plain <code>//svg</code> will not match them, and <code>//*[name()='svg']</code> is the usual workaround (note this is the <code>name()</code> function, not a <code>name</code> attribute). The real problem is that waiting on <code>element_to_be_clickable</code> and calling <code>.click()</code> on the result only ever acts on the first match.</p> <p>If you want to wait on all <code>svg</code> elements to exist, you can do this:</p> <pre><code>WebDriverWait(driver, 20).until(EC.presence_of_all_elements_located((By.XPATH, "//*[name()='svg' and contains(@class, 'jss350')]"))) </code></pre> <p>However, you also want to iterate through the elements in a list. You will need to find all elements first, then iterate them. You can try the following code:</p> <pre><code># Wait for all svg elements to be present WebDriverWait(driver, 20).until(EC.presence_of_all_elements_located((By.XPATH, "//*[name()='svg' and contains(@class, 'jss350')]"))) # find the list of svg elements svg_elements = driver.find_elements_by_xpath("//*[name()='svg' and contains(@class, 'jss350')]") # iterate the list and click each one for element in svg_elements: element.click() </code></pre> <p>This will wait on all <code>svg</code> elements to exist, grab the list of elements from the page, iterate each element, and click it.</p>
python|selenium|svg
0
1,901,546
28,645,597
How to draw a transparent image in pygame?
<p>Consider a chess board, i have a transparent image of queen(queen.png) of size 70x70 and i want to display it over a black rectangle. Code:</p> <pre><code>BLACK=(0,0,0) queen = pygame.image.load('queen.png') pygame.draw.rect(DISPLAYSURF, BLACK, (10, 10, 70, 70)) DISPLAYSURF.blit(queen, (10, 10)) </code></pre> <p>Error: i am not getting transparent image ie black rectangle is not visible at all, only queen with white background. Please suggest</p>
<p>Try changing the line where you load in the queen to:</p> <pre><code>queen = pygame.image.load('queen.png').convert_alpha() </code></pre>
python|pygame|pygame-surface
8
1,901,547
23,862,724
Accessing alternate attributes in a node from ElementTree in Python
<p>I have the following XML file:</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;!DOCTYPE MedlineCitationSet PUBLIC "-//NLM//DTD Medline Citation, 1st January, 2014//EN" "http://www.nlm.nih.gov/databases/dtd/nlmmedlinecitationset_140101.dtd"&gt; &lt;MedlineCitationSet&gt; &lt;MedlineCitation Owner="NLM" Status="MEDLINE"&gt; &lt;PMID Version="1"&gt;15326085&lt;/PMID&gt; &lt;Article PubModel="Print-Electronic"&gt; &lt;Journal&gt; &lt;JournalIssue CitedMedium="Internet"&gt; &lt;Volume&gt;44&lt;/Volume&gt; &lt;Issue&gt;4&lt;/Issue&gt; &lt;PubDate&gt; &lt;Year&gt;2004&lt;/Year&gt; &lt;Month&gt;Oct&lt;/Month&gt; &lt;/PubDate&gt; &lt;/JournalIssue&gt; &lt;Title&gt;Hypertension&lt;/Title&gt; &lt;ISOAbbreviation&gt;Hypertension&lt;/ISOAbbreviation&gt; &lt;/Journal&gt; &lt;ArticleTitle&gt;Arterial pressure lowering effect of chronic atenolol therapy in hypertension and vasoconstrictor sympathetic drive.&lt;/ArticleTitle&gt; &lt;Pagination&gt; &lt;MedlinePgn&gt;454-8&lt;/MedlinePgn&gt; &lt;/Pagination&gt; &lt;AuthorList CompleteYN="Y"&gt; &lt;Author ValidYN="Y"&gt; &lt;LastName&gt;Burns&lt;/LastName&gt; &lt;ForeName&gt;Joanna&lt;/ForeName&gt; &lt;Initials&gt;J&lt;/Initials&gt; &lt;Affiliation&gt;Department of Cardiology, Leeds Teaching Hospitals NHS Trust, Leeds, UK. 
burnsjoanna1@hotmail.com&lt;/Affiliation&gt; &lt;/Author&gt; &lt;Author ValidYN="Y"&gt; &lt;LastName&gt;Mary&lt;/LastName&gt; &lt;ForeName&gt;David A S G&lt;/ForeName&gt; &lt;Initials&gt;DA&lt;/Initials&gt; &lt;/Author&gt; &lt;Author ValidYN="Y"&gt; &lt;LastName&gt;Mackintosh&lt;/LastName&gt; &lt;ForeName&gt;Alan F&lt;/ForeName&gt; &lt;Initials&gt;AF&lt;/Initials&gt; &lt;/Author&gt; &lt;Author ValidYN="Y"&gt; &lt;LastName&gt;Ball&lt;/LastName&gt; &lt;ForeName&gt;Stephen G&lt;/ForeName&gt; &lt;Initials&gt;SG&lt;/Initials&gt; &lt;/Author&gt; &lt;Author ValidYN="Y"&gt; &lt;LastName&gt;Greenwood&lt;/LastName&gt; &lt;ForeName&gt;John P&lt;/ForeName&gt; &lt;Initials&gt;JP&lt;/Initials&gt; &lt;/Author&gt; &lt;/AuthorList&gt; &lt;Language&gt;eng&lt;/Language&gt; &lt;ArticleDate DateType="Electronic"&gt; &lt;Year&gt;2004&lt;/Year&gt; &lt;Month&gt;08&lt;/Month&gt; &lt;Day&gt;23&lt;/Day&gt; &lt;/ArticleDate&gt; &lt;/Article&gt; &lt;/MedlineCitation&gt; &lt;MedlineCitation Owner="NLM" Status="In-Data-Review"&gt; &lt;PMID Version="1"&gt;24096967&lt;/PMID&gt; &lt;Article PubModel="Print-Electronic"&gt; &lt;Journal&gt; &lt;JournalIssue CitedMedium="Internet"&gt; &lt;Volume&gt;31&lt;/Volume&gt; &lt;Issue&gt;3&lt;/Issue&gt; &lt;PubDate&gt; &lt;Year&gt;2014&lt;/Year&gt; &lt;Month&gt;Mar&lt;/Month&gt; &lt;/PubDate&gt; &lt;/JournalIssue&gt; &lt;Title&gt;Pharmaceutical research&lt;/Title&gt; &lt;ISOAbbreviation&gt;Pharm. 
Res.&lt;/ISOAbbreviation&gt; &lt;/Journal&gt; &lt;ArticleTitle&gt;Semi-mechanistic Modelling of the Analgesic Effect of Gabapentin in the Formalin-Induced Rat Model of Experimental Pain.&lt;/ArticleTitle&gt; &lt;Pagination&gt; &lt;MedlinePgn&gt;593-606&lt;/MedlinePgn&gt; &lt;/Pagination&gt; &lt;AuthorList CompleteYN="Y"&gt; &lt;Author ValidYN="Y"&gt; &lt;LastName&gt;Taneja&lt;/LastName&gt; &lt;ForeName&gt;A&lt;/ForeName&gt; &lt;Initials&gt;A&lt;/Initials&gt; &lt;Affiliation&gt;Division of Pharmacology, Leiden Academic Centre for Drug Research, POBox 9502, 2300 RA, Leiden, The Netherlands.&lt;/Affiliation&gt; &lt;/Author&gt; &lt;Author ValidYN="Y"&gt; &lt;LastName&gt;Troconiz&lt;/LastName&gt; &lt;ForeName&gt;I F&lt;/ForeName&gt; &lt;Initials&gt;IF&lt;/Initials&gt; &lt;/Author&gt; &lt;Author ValidYN="Y"&gt; &lt;LastName&gt;Danhof&lt;/LastName&gt; &lt;ForeName&gt;M&lt;/ForeName&gt; &lt;Initials&gt;M&lt;/Initials&gt; &lt;/Author&gt; &lt;Author ValidYN="Y"&gt; &lt;LastName&gt;Della Pasqua&lt;/LastName&gt; &lt;ForeName&gt;O&lt;/ForeName&gt; &lt;Initials&gt;O&lt;/Initials&gt; &lt;/Author&gt; &lt;Author ValidYN="Y"&gt; &lt;CollectiveName&gt;neuropathic pain project of the PKPD modelling platform&lt;/CollectiveName&gt; &lt;/Author&gt; &lt;/AuthorList&gt; &lt;Language&gt;eng&lt;/Language&gt; &lt;PublicationTypeList&gt; &lt;PublicationType&gt;Journal Article&lt;/PublicationType&gt; &lt;/PublicationTypeList&gt; &lt;ArticleDate DateType="Electronic"&gt; &lt;Year&gt;2013&lt;/Year&gt; &lt;Month&gt;10&lt;/Month&gt; &lt;Day&gt;05&lt;/Day&gt; &lt;/ArticleDate&gt; &lt;/Article&gt; &lt;/MedlineCitation&gt; &lt;/MedlineCitationSet&gt; </code></pre> <p>Note that it contains two chunks of entries PMID 15326085 and 24096967. What I want to do is to parse the XML file and extract the authors last name or collective names. 
Resulting this:</p> <pre><code>15326085 Burns,Mary,Mackintosh,Ball,Greenwood 24096967 Taneja,Troconiz,Danhof,Della Pasqua, neuropathic pain project of the PKPD modelling platform </code></pre> <p>But why this code failed to capture the 'collective name' in 2nd entry?</p> <pre><code>#!/usr/bin/env python import xml.etree.ElementTree as ET def parse_xml(xmlfile): """docstring for parse_xml""" tree = ET.parse(xmlfile) root = tree.getroot() for medcit in root.findall('MedlineCitation'): pmid = medcit.find('PMID').text authors = medcit.find('Article/AuthorList/') lnlist = [] for auth in authors: lastname = auth.find('LastName').text.encode('utf8') colcname = auth.find('CollectiveName').text if lastname is not None: lnlist.append(lastname) elif colcname is not None: lnlist.append(colcname) print pmid, ",".join(lnlist) parse_xml('myfile.xml') </code></pre> <p>The output of the above code is this:</p> <pre><code>Traceback (most recent call last): File "test.py", line 70, in &lt;module&gt; parse_xml(fvar) File "test.py", line 49, in parse_xml colcname = auth.find('CollectiveName').text AttributeError: 'NoneType' object has no attribute 'text' </code></pre>
<p>Grab the <code>text</code> only if the node is found:</p> <pre><code>for auth in authors: lastname = auth.find('LastName') if lastname is not None: lnlist.append(lastname.text.encode('utf8')) else: colcname = auth.find('CollectiveName') if colcname is not None: lnlist.append(colcname.text) </code></pre>
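The same pattern, run on a trimmed-down fragment of the question's XML, shows both branches firing:

```python
import xml.etree.ElementTree as ET

# Trimmed fragment: one personal author, one collective author
xml = """<AuthorList>
  <Author><LastName>Burns</LastName><ForeName>Joanna</ForeName></Author>
  <Author><CollectiveName>neuropathic pain project</CollectiveName></Author>
</AuthorList>"""

lnlist = []
for auth in ET.fromstring(xml):
    lastname = auth.find('LastName')
    if lastname is not None:
        lnlist.append(lastname.text)
    else:
        colcname = auth.find('CollectiveName')
        if colcname is not None:
            lnlist.append(colcname.text)

print(lnlist)  # ['Burns', 'neuropathic pain project']
```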
python|xml|parsing
1
1,901,548
46,588,392
Paramiko server port forward with openssh client -N option
<p>I am attempting to build a Paramiko server that just forwards ports. I adapted the code from the <a href="https://github.com/paramiko/paramiko/blob/master/demos/demo_server.py" rel="nofollow noreferrer">demo server</a> code</p> <pre><code>#!/usr/bin/env python import base64 from binascii import hexlify import os import socket import sys import threading import traceback import paramiko from paramiko.py3compat import b, u, decodebytes import logging logging.basicConfig(level=logging.INFO) logger = logging.getLogger(__name__) host_key = paramiko.RSAKey(filename="test_rsa.key") logger.info("Read key: " + u(hexlify(host_key.get_fingerprint()))) class Server(paramiko.ServerInterface): def __init__(self): self.event = threading.Event() def check_auth_publickey(self, username, key): logger.info("Auth attempt with key: " + u(hexlify(key.get_fingerprint()))) try: with open("client_rsa.pub.stripped", "rb") as f: good_key = f.read() good_pub_key = paramiko.RSAKey(data=decodebytes(good_key)) except: logger.exception("failed to read public key") return paramiko.AUTH_FAILED if (username == "robey") and (key == good_pub_key): return paramiko.AUTH_SUCCESSFUL return paramiko.AUTH_FAILED def get_allowed_auths(self, username): return "publickey" def check_channel_request(self, kind, chanid): logger.info("inside channel request") return paramiko.OPEN_SUCCEEDED def check_channel_direct_tcpip_request(self, chanid, origin, destination): return paramiko.OPEN_SUCCEEDED def check_channel_shell_request(self, channel): self.event.set() return True if __name__ == "__main__": sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) sock.bind(("", 2200)) sock.listen(100) logger.info("Listening for connection ...") client, addr = sock.accept() logger.info("Got a connection!") with paramiko.Transport(client) as t: t.load_server_moduli() t.add_server_key(host_key) server = Server() t.start_server(server=server) # wait for auth chan = 
t.accept(20) if chan is None: logger.info("*** No channel.") sys.exit(1) logger.info("Authenticated!") # prompt for more information chan.send("Username: ") f = chan.makefile("rU") username = f.readline().strip("\r\n") logger.info("received username: " + username) chan.close() </code></pre> <p>And I am using this command to connect successfully:</p> <pre><code>ssh -i client_rsa.key -p 2200 -L 9999:localhost:4000 -T robey@localhost </code></pre> <p>However, when I attempt to use the -N option for the ssh client, ie:</p> <pre><code>ssh -i client_rsa.key -p 2200 -L 9999:localhost:4000 -T -N robey@localhost </code></pre> <p>the Paramiko server hangs after authenticating the client, never reaching the <code>check_channel_request</code> function. Here are the logs from the run:</p> <pre><code>INFO:__main__:Read key: 689f8799e649f931b116b19227dbb2a3 INFO:__main__:Listening for connection ... INFO:__main__:Got a connection! INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_7.2p2) INFO:paramiko.transport:Auth rejected (none). INFO:__main__:Auth attempt with key: cdbb2439816b22a59ee036be3a953e51 INFO:paramiko.transport:Auth rejected (publickey). INFO:__main__:Auth attempt with key: 11c470c88233719a2499f03336589618 INFO:paramiko.transport:Auth granted (publickey). </code></pre> <p>Is there anyway to get the Paramiko server to be able to handle this situation?</p>
<p>Figured this out. The reason nothing was happening is that the tunnel forwarding is not opened until you try to use it. It turns out my tunnel wasn't being created even without the -N option. So the answer is to make sure to use the local port after creating the SSH connection.</p>
python|paramiko|openssh
0
1,901,549
49,569,708
How to determine highest occurrence of categorical labels across multiple columns per row
<p>I am trying to determine the label name with the highest occurrence across multiple columns and set the another pandas columns with that label.</p> <p>For examples, given this dataframe:</p> <pre><code> Class_1 Class_2 Class_3 0 versicolor setosa setosa 1 virginica versicolor virginica 2 virginica setosa setosa 3 versicolor setosa setosa 4 versicolor versicolor virginica </code></pre> <p>I want to add a column called Predictions per the reasoning above:</p> <pre><code> Class_1 Class_2 Class_3 Predictions 0 versicolor setosa setosa setosa 1 virginica versicolor virginica virginica 2 virginica setosa setosa setosa 3 versicolor setosa setosa setosa 4 versicolor versicolor virginica versicolor </code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>value_counts</code></a> to return the index of the most common value per row, with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>apply</code></a> and <code>axis=1</code>:</p> <pre><code>df['Predictions'] = df.apply(lambda x: x.value_counts().index[0], axis=1) print (df) Class_1 Class_2 Class_3 Predictions 0 versicolor setosa setosa setosa 1 virginica versicolor virginica virginica 2 virginica setosa setosa setosa 3 versicolor setosa setosa setosa 4 versicolor versicolor virginica versicolor </code></pre> <p>Alternative with <a href="https://docs.python.org/2/library/collections.html#collections.Counter.most_common" rel="nofollow noreferrer"><code>Counter.most_common</code></a> (note <code>index=False</code>, so the row index is not counted along with the labels):</p> <pre><code>from collections import Counter df['Predictions'] = [Counter(x).most_common(1)[0][0] for x in df.itertuples(index=False)] print (df) Class_1 Class_2 Class_3 Predictions 0 versicolor setosa setosa setosa 1 virginica versicolor virginica virginica 2 virginica setosa setosa setosa 3 versicolor setosa setosa setosa 4 versicolor versicolor virginica versicolor </code></pre>
python|pandas
2
1,901,550
54,955,859
Pandas/Dataframe: How to assign default value when condition fails while taking single cell value from data frame using python?
<p>Let consider the below code:</p> <pre><code>import pandas as pd df = pd.DataFrame([[1, 2], [3, 4], [5, 6], [7, 8]], columns=["A", "B"]) x=0 print(df) x=df.loc[df['A'] == 3, 'B', ''].iloc[0] print(x) </code></pre> <p>while printing the x I get 4 as the output.Its fine. If the condition get fails as per the below code</p> <pre><code>x=df.loc[df['A'] == 33, 'B', ''].iloc[0] </code></pre> <p>I want to print the x's initial value 0 and I want avoid the below error:</p> <blockquote> <p>IndexError: single positional indexer is out-of-bounds</p> </blockquote> <p>Guide me to avoid the error and display the initial value of x. Thanks in advance. </p>
<p>You can use <code>try</code>/<code>except</code> for exception handling:</p> <pre><code>import pandas as pd df = pd.DataFrame([[1, 2], [3, 4], [5, 6], [7, 8]], columns=["A", "B"]) x=0 print(df) try: x=df.loc[df['A'] == 3, 'B', ''].iloc[0] print(x) except Exception as e: print(e) print(x) </code></pre> <p>Output:</p> <pre><code> A B 0 1 2 1 3 4 2 5 6 3 7 8 Too many indexers #the exception 0 #the initial value </code></pre>
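As an alternative to catching the exception, you can drop the stray <code>''</code> indexer, make the selection first, and only index into it when it is non-empty (the <code>lookup</code> helper name here is mine):

```python
import pandas as pd

df = pd.DataFrame([[1, 2], [3, 4], [5, 6], [7, 8]], columns=["A", "B"])

def lookup(frame, key, default=0):
    # Select first; fall back to the default when nothing matched
    sel = frame.loc[frame['A'] == key, 'B']
    return sel.iloc[0] if not sel.empty else default

hit = lookup(df, 3)    # 4
miss = lookup(df, 33)  # 0
print(hit, miss)
```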
python-3.x|pandas|dataframe
2
1,901,551
33,153,965
How to set default value using SqlAlchemy_Utils ChoiceType
<p>I just migrated from NoSQL to SQL, and I'm rather new to the SqlAlchemy ORM. </p> <p>In my use case, I need a field in models to be able to store a given choice set: </p> <pre><code># models.py from sqlalchemy.ext.declarative import declarative_base from sqlalchemy_utils.types.choice import ChoiceType Base = declarative_base() class User(Base): __tablename__ = 'user' USER_TYPES = [ ('user', 'User'), ('admin', 'Admin User') ] id = Column(Integer(), primary_key=True) type = Column(ChoiceType(USER_TYPES), default='user') </code></pre> <p>But when I run my script, I get: </p> <pre><code>&gt; SAWarning: Unicode column 'None' has non-unicode default value 'user' specified. self.default </code></pre> <p>And there are no errors on other fields where I set default values and a type other than "ChoiceType". </p> <p>Does anyone know what I did wrong?</p> <p>Thanks!</p>
<p><code>ChoiceType</code> column is <code>Unicode(255)</code> by default. The <code>Unicode</code> type is a <code>String</code> subclass that assumes input and output as Python unicode data. You should set default as unicode string</p> <pre><code>type = Column(ChoiceType(USER_TYPES), default=u'user') </code></pre> <p>Or you can set <code>impl</code> parameter</p> <pre><code>type = Column(ChoiceType(USER_TYPES, impl=String()), default='user') </code></pre>
python|flask|sqlalchemy|flask-sqlalchemy
9
1,901,552
24,922,315
Merging pandas dataframe based on relationship in multiple columns
<p>Lets say you have a DataFrame of regions (start, end) coordinates and another DataFrame of positions which may or may not fall within a given region. For example:</p> <pre><code>region = pd.DataFrame({'chromosome': [1, 1, 1, 1, 2, 2, 2, 2], 'start': [1000, 2000, 3000, 4000, 1000, 2000, 3000, 4000], 'end': [2000, 3000, 4000, 5000, 2000, 3000, 4000, 5000]}) position = pd.DataFrame({'chromosome': [1, 2, 1, 3, 2, 1, 1], 'BP': [1500, 1100, 10000, 2200, 3300, 400, 5000]}) print region print position chromosome end start 0 1 2000 1000 1 1 3000 2000 2 1 4000 3000 3 1 5000 4000 4 2 2000 1000 5 2 3000 2000 6 2 4000 3000 7 2 5000 4000 BP chromosome 0 1500 1 1 1100 2 2 10000 1 3 2200 3 4 3300 2 5 400 1 6 5000 1 </code></pre> <p>A position falls within a region if:</p> <pre><code>position['BP'] &gt;= region['start'] &amp; position['BP'] &lt;= region['end'] &amp; position['chromosome'] == region['chromosome'] </code></pre> <p>Each position is guaranteed to fall within a maximum of one region although it might not fall in any.</p> <p>What is the best way to merge these two dataframe such that it appends additional columns to position with the region it falls in if it falls in any region. 
Giving in this case roughly the following output:</p> <pre><code> BP chromosome start end 0 1500 1 1000 2000 1 1100 2 1000 2000 2 10000 1 NA NA 3 2200 3 NA NA 4 3300 2 3000 4000 5 400 1 NA NA 6 5000 1 4000 5000 </code></pre> <p>One approach is to write a function to compute the relationship I want and then to use the DataFrame.apply method as follows:</p> <pre><code>def within(pos, regs): istrue = (pos.loc['chromosome'] == regs['chromosome']) &amp; (pos.loc['BP'] &gt;= regs['start']) &amp; (pos.loc['BP'] &lt;= regs['end']) if istrue.any(): ind = regs.index[istrue].values[0] return(regs.loc[ind ,['start', 'end']]) else: return(pd.Series([None, None], index=['start', 'end'])) position[['start', 'end']] = position.apply(lambda x: within(x, region), axis=1) print position BP chromosome start end 0 1500 1 1000 2000 1 1100 2 1000 2000 2 10000 1 NaN NaN 3 2200 3 NaN NaN 4 3300 2 3000 4000 5 400 1 NaN NaN 6 5000 1 4000 5000 </code></pre> <p>But I'm hoping that there is a more optimized way than doing each comparison in O(N) time. Thanks!</p>
<p>One solution would be to do an inner-join on <code>chromosome</code>, exclude the violating rows, and then do left-join with <code>position</code>:</p> <pre><code>&gt;&gt;&gt; df = pd.merge(position, region, on='chromosome', how='inner') &gt;&gt;&gt; idx = (df['BP'] &lt; df['start']) | (df['end'] &lt; df['BP']) # violating rows &gt;&gt;&gt; pd.merge(position, df[~idx], on=['BP', 'chromosome'], how='left') BP chromosome end start 0 1500 1 2000 1000 1 1100 2 2000 1000 2 10000 1 NaN NaN 3 2200 3 NaN NaN 4 3300 2 4000 3000 5 400 1 NaN NaN 6 5000 1 5000 4000 </code></pre>
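Here is that approach end to end as a self-contained sketch on a trimmed version of the data, so the intermediate steps are visible. One caveat worth noting: a <code>BP</code> sitting exactly on a boundary shared by two regions would match both and duplicate the row; the sample data below avoids that case.

```python
import pandas as pd

region = pd.DataFrame({'chromosome': [1, 1, 2],
                       'start': [1000, 2000, 1000],
                       'end': [2000, 3000, 2000]})
position = pd.DataFrame({'chromosome': [1, 2, 3],
                         'BP': [1500, 1100, 2200]})

# Inner-join on chromosome, keep only rows where BP falls inside the region,
# then left-join back onto position so unmatched positions keep NaN.
joined = pd.merge(position, region, on='chromosome', how='inner')
joined = joined[(joined['BP'] >= joined['start']) & (joined['BP'] <= joined['end'])]
result = pd.merge(position, joined, on=['BP', 'chromosome'], how='left')
print(result)
```

Position 1500 on chromosome 1 picks up the (1000, 2000) region, 1100 on chromosome 2 picks up (1000, 2000), and 2200 on chromosome 3 gets NaN because no region exists for that chromosome.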
python|pandas|merge
5
1,901,553
40,922,320
How to pass options and parameters to eclat algorithm using pyfim?
<p>I am new to python and I am trying to generate frequent item sets from a log file using eclat. I am directly calling the eclat function from fim and passing the whole log file as a nested list. I want to use various options while calling eclat, like passing a file directly as input, passing output file name to write the results, min support, max item set size etc. Can someone tell how to pass the arguments to eclat that is being called as a function from fim? <a href="https://i.stack.imgur.com/p16zU.png" rel="nofollow noreferrer">Attached image of the code</a></p>
<p>You can try calling it with keyword arguments:</p> <pre><code>eclat(tracts, param_name=value, param_name=value,...) </code></pre> <p>eclat supports several parameters (the values displayed are the defaults):</p> <pre><code>eclat (tracts, target='s', supp=10, zmin=1, zmax=None, report='a', eval='x', agg='x', thresh=10, prune=None, algo='a', mode='', border=None) </code></pre> <p>For more information, run in Python:</p> <pre><code>help(fim.eclat) </code></pre>
python|parameter-passing|apriori|word-frequency
1
1,901,554
39,261,395
idle-python for RHEL 7
<p>Python community. I am looking for a Red Hat Enterprise Linux 7 version of IDLE - Python GUI. The only versions I have found are for Windows and Mac. I will be using it to test and build an API that ties in with HTTP.</p>
<p>IDLE is in the <code>python-tools</code> package.</p>
mariadb|python-idle
0
1,901,555
52,656,884
ImportError: No module named pandas_datareader
<p>I am trying to use the pandas data reader library. I initially tried <code>import pandas.io.data</code> but this threw up an import error, stating I should be using </p> <blockquote> <p>from pandas_datareader import data, wb</p> </blockquote> <p>instead. Upon trying this I was greeted with </p> <blockquote> <p>ImportError: No module named pandas_datareader</p> </blockquote> <p>I have had a look around and have tried...</p> <ul> <li><p>"pip install pandas_datareader"</p></li> <li><p>"pip install python_datareader"</p></li> <li><p>"pip install pandas-datareader"</p></li> </ul> <p>Any help would be greatly appreciated </p>
<p>Run this command: <code>pip install pandas-datareader</code>. For more info, the documentation is <a href="https://pandas-datareader.readthedocs.io/en/latest/" rel="nofollow noreferrer">here</a>.</p>
python|pandas|importerror
0
1,901,556
47,634,574
The reason 'subprocess.run' is better than 'os.system' for beginner
<p>I read many answers on this topic.</p> <p>It seems that they either explain it with much more difficult illustrations or just say it's deprecated, pointing to the official documentation.</p> <p><code>os.system</code> is handy for a beginner.</p> <p>Could the reason be explained with an easy example or a metaphor?</p>
<p>One example of many is that <code>subprocess.run()</code> can capture the output, while <code>os.system()</code> only captures the return code.</p> <p><code>subprocess.run()</code> is simply way more flexible. It can do everything that <code>os.system()</code> can but also way more. If you KNOW that you never will use any of the benefits in <code>subprocess.run()</code>, then by all means, use <code>os.system()</code>, but most people would say that it's a bit of a waste of time to learn two different tools for the same thing.</p> <p><code>os.system()</code> is pretty much a copy of <code>system()</code> in C.</p>
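A minimal illustration of that difference (using <code>sys.executable</code> to launch a child Python process, so the snippet is self-contained):

```python
import os
import subprocess
import sys

# os.system only hands back the exit status; the child's stdout
# goes straight to the terminal, out of the program's reach.
status = os.system(f'"{sys.executable}" -c "print(42)"')

# subprocess.run can capture stdout as a string (capture_output needs 3.7+).
proc = subprocess.run([sys.executable, "-c", "print(42)"],
                      capture_output=True, text=True)
print(proc.returncode)      # 0
print(proc.stdout.strip())  # "42" -- available to the program itself
```

With <code>os.system</code> the "42" is printed but lost to the program; with <code>subprocess.run</code> it is right there in <code>proc.stdout</code>.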
python|os.system
3
1,901,557
28,058,207
Implement infinite scrolling in Androidviewclient
<p>I am trying to use AndroidViewClient 5.1.1 for an app which implements scrolling of list items. I want to scroll up to the point where all the entries of the list have been exhausted, that is, until scrollable becomes FALSE.</p> <p>How should I proceed to do this?<br> Which property of ANDROIDVIEWCLIENT should I check for?</p>
<p>I used <a href="https://github.com/dtmilano/AndroidViewClient/wiki/Culebra-GUI" rel="nofollow">culebra GUI</a> (from <a href="https://github.com/dtmilano/AndroidViewClient" rel="nofollow">AndroidViewClient 9.2.1</a>) to generate this script <code>scrollable.py</code> while running the <strong>Contacts</strong> app on the device.</p> <pre><code>culebra -pVCLGo scrollable.py </code></pre> <p>To scroll the list I used the drag dialog as described in <a href="http://dtmilano.blogspot.ca/2014/11/culebra-magical-drag.html" rel="nofollow">culebra: the magical drag</a>.</p> <p>Once I exited the GUI and the script was generated, I remove all the other Views but the <code>ListView</code> and added the <code>while</code> loop. So besides the <code>while</code> loop, pretty much everything was automatically generated.</p> <p>The final script is now:</p> <pre><code>#! /usr/bin/env python # -*- coding: utf-8 -*- ''' Copyright (C) 2013-2014 Diego Torres Milano Created on 2015-01-21 by Culebra v9.2.1 __ __ __ __ / \ / \ / \ / \ ____________________/ __\/ __\/ __\/ __\_____________________________ ___________________/ /__/ /__/ /__/ /________________________________ | / \ / \ / \ / \ \___ |/ \_/ \_/ \_/ \ o \ \_____/--&lt; @author: Diego Torres Milano @author: Jennifer E. 
Swofford (ascii art snake) ''' import re import sys import os try: sys.path.insert(0, os.path.join(os.environ['ANDROID_VIEW_CLIENT_HOME'], 'src')) except: pass from com.dtmilano.android.viewclient import ViewClient TAG = 'CULEBRA' _s = 5 _v = '--verbose' in sys.argv kwargs1 = {'ignoreversioncheck': False, 'verbose': True, 'ignoresecuredevice': False} device, serialno = ViewClient.connectToDeviceOrExit(**kwargs1) kwargs2 = {'startviewserver': True, 'forceviewserveruse': False, 'autodump': False, 'ignoreuiautomatorkilled': True} vc = ViewClient(device, serialno, **kwargs2) device.Log.d(TAG, "dumping content of window=-1", _v) vc.dump(window=-1) def doSomething(view): if view.getClass() == 'android.widget.TextView': print view.getText() while True: device.Log.d(TAG, "finding view with id=android:id/list", _v) android___id_list = vc.findViewByIdOrRaise("android:id/list") # check if scrollable if not android___id_list.isScrollable(): break vc.traverse(root=android___id_list, transform=doSomething) device.Log.d(TAG, "Scrolling", _v) device.dragDip((185.0, 499.0), (191.0, 175.5), 200, 20, 0) vc.sleep(1) device.Log.d(TAG, "dumping content of window=-1", _v) vc.dump(window=-1) </code></pre> <blockquote> <p><strong>NOTE</strong>: in this case, the script never exits because the ListView in Contacts never changes its scrollable property, which I hope your app does as you mentioned in your question.</p> </blockquote> <h2>EDIT 1</h2> <p>Added tree traversal for ListView children as requested in one of the comments</p> <h2>EDIT 2</h2> <p>Added doSomething() transform method</p> <h2>EDIT 3</h2> <p>Now check the class</p>
android|python|androidviewclient
1
1,901,558
34,643,903
Odd Behavior C# vs Python byte ops
<p>Why does C#:</p> <pre><code>byte[] vals = new byte[] {223, 30, 244, 156}; int result = 0; for(int i = 0; i &lt;= 3; ++i) { result &lt;&lt;= 8; result |= vals[i]; } print("RESULT: " + result); </code></pre> <p>Yield: </p> <pre><code>RESULT: -551619428 </code></pre> <p>While Python:</p> <pre><code>vals = array.array('B', [223, 30, 244, 156]) result = 0 for val in vals: result &lt;&lt;= 8 result |= val print 'RESULT: %s' % result </code></pre> <p>Yields:</p> <pre><code>RESULT: 3743347868 </code></pre> <p>While... throwing array values of:</p> <pre><code>[37, 120, 244, 167] </code></pre> <p>at both languages yields:</p> <pre><code>RESULT: 628683943 </code></pre> <p><strong>EDIT: I didn't include this in the original question, but my actual goal was to make Python behave like C# in this case. Per the answers below I see I need to force the int overflow on the Python side.</strong></p> <p><strong>This seems to work:</strong></p> <pre><code>import numpy result = numpy.int32(result) </code></pre>
<p>In C# <code>int</code> is a signed 32-bit integer. The maximum value of <code>int</code> is <code>2147483647</code> - that's lower than <code>3743347868</code>. The operations you perform cause an overflow, resulting in a negative value.</p> <p>The C# code will give the same results as Python if you change the type of <code>result</code> to unsigned int (<code>uint</code>) or a 64-bit integer (<code>long</code>):</p> <pre><code>byte[] vals = new byte[] {223, 30, 244, 156}; uint result = 0; for(int i = 0; i &lt;= 3; ++i) { result &lt;&lt;= 8; result |= vals[i]; } </code></pre>
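Going the other way — making Python reproduce the C# overflow, as the question's edit asks — does not require numpy; masking down to 32 bits works too. A small sketch:

```python
def to_int32(n):
    """Truncate an arbitrary-precision int to a signed 32-bit value."""
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

vals = [223, 30, 244, 156]
result = 0
for v in vals:
    result = (result << 8) | v

print(result)            # 3743347868  (Python's unbounded int)
print(to_int32(result))  # -551619428  (matches the C# int result)
```

For values that fit in 31 bits, such as the second array's 628683943, <code>to_int32</code> leaves the number unchanged, which is why both languages agreed there.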
c#|python
6
1,901,559
46,879,764
Saving "The Economist" Style in matplotlib as default in mplstyle
<p>I created "The Economist" style by modifying the example given <a href="https://stackoverflow.com/questions/29859565/create-the-economist-style-graphs-from-python">here</a>. However, I would like to make this style appear under plt.style.use(your_style). I am having trouble converting in the format it is required. For example, here's my code that creates "The Economist" style: </p> <pre><code>import matplotlib.pyplot as plt import numpy as np x = np.random.randn(1000) y = np.sin(x) fig, ax = plt.subplots(facecolor='#CAD9E1', figsize=(12, 10)) ax.set_facecolor('#CAD9E1') ax.yaxis.grid(color='#ffffff', linewidth=2) ax.spines['left'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax.tick_params(axis='y', length=0) ax.xaxis.set_ticks_position('bottom') # Lengthen the bottom x-ticks and set them to dark gray ax.tick_params(direction='in', axis='x', length=7, color='0.1') plt.scatter(x, y, color='#006767') </code></pre> <p>The output is the following: </p> <p><a href="https://i.stack.imgur.com/HjeG1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HjeG1.png" alt="enter image description here"></a></p> <p>I opened the default mplstyles that are available and found that I can change the face and grid lines color using the following: </p> <pre><code>axes.facecolor: cad9e1 grid.color: ffffff </code></pre> <p>However, I do not know how to implement the rest, for example: </p> <pre><code>ax.yaxis.grid(color='#ffffff', linewidth=2) ax.spines['left'].set_visible(False) ax.tick_params(axis='y', length=0) ax.xaxis.set_ticks_position('bottom') ax.tick_params(direction='in', axis='x', length=7, color='0.1') </code></pre>
<p>Most, but not all, settings have an equivalent matplotlib rc parameter. I think here you are lucky: the following would be the rc parameters for the "economist" style in question. </p> <p>To put them in a file, see the <a href="https://matplotlib.org/users/customizing.html" rel="nofollow noreferrer">matplotlib customize guide</a>.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt params = {"figure.facecolor": "#cad9e1", "axes.facecolor": "#cad9e1", "axes.grid" : True, "axes.grid.axis" : "y", "grid.color" : "#ffffff", "grid.linewidth": 2, "axes.spines.left" : False, "axes.spines.right" : False, "axes.spines.top" : False, "ytick.major.size": 0, "ytick.minor.size": 0, "xtick.direction" : "in", "xtick.major.size" : 7, "xtick.color" : "#191919", "axes.edgecolor" :"#191919", "axes.prop_cycle" : plt.cycler('color', ['#006767', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf'])} plt.rcParams.update(params) x = np.random.randn(1000) y = np.sin(x) fig, ax = plt.subplots(figsize=(12, 10)) ax.scatter(x, y) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/ABlX7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ABlX7.png" alt="enter image description here"></a></p>
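To answer the original question of making this usable via <code>plt.style.use</code>: the same settings can be written into an <code>.mplstyle</code> file — the name <code>economist.mplstyle</code> below is an assumption, and it would go in matplotlib's <code>stylelib</code> config directory so that <code>plt.style.use('economist')</code> finds it. Note that rc-file syntax drops the Python quoting, braces, and the leading <code>#</code> on hex colors:

```ini
# economist.mplstyle (assumed filename; load with plt.style.use('economist'))
figure.facecolor : cad9e1
axes.facecolor   : cad9e1
axes.grid        : True
axes.grid.axis   : y
grid.color       : ffffff
grid.linewidth   : 2
axes.spines.left  : False
axes.spines.right : False
axes.spines.top   : False
ytick.major.size : 0
ytick.minor.size : 0
xtick.direction  : in
xtick.major.size : 7
xtick.color      : 191919
axes.edgecolor   : 191919
axes.prop_cycle  : cycler('color', ['006767', 'ff7f0e', '2ca02c', 'd62728', '9467bd', '8c564b', 'e377c2', '7f7f7f', 'bcbd22', '17becf'])
```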
python|matplotlib
1
1,901,560
37,793,296
pass form name variable into url in django
<p>I would like to pass the <code>name</code> variable of a selected inputbox in my django template into a url without going through the django views or url.py. Is there an easy way to do this.</p> <pre><code>&lt;div class="form-group"&gt; &lt;label class="col-xs-3 control-label"&gt;Chapter&lt;/label&gt; &lt;div class="col-xs-5 selectContainer"&gt; &lt;input class="form-control" type="text" name="chapter_text" value="1"&gt;&lt;/input&gt; &lt;/div&gt; &lt;/div&gt; </code></pre> <p>I want the <code>name</code> value of the <code>input class</code> to be passed into the <code>name</code> argument of the url in the div button below</p> <pre><code>&lt;div class="form-group"&gt; &lt;div class="col-xs-5 col-xs-offset-3"&gt; &lt;div class="btn btn-default"&gt;&lt;a href="{% url 'api:by_book' 'name' %}"&gt;Download&lt;/a&gt;&lt;/div&gt; &lt;/div&gt; &lt;/div&gt; </code></pre>
<p>You should do it through javascript. Store your url value as a JS variable:</p> <pre><code>&lt;script&gt; my_url = "{% url 'api:by_book' 'name' %}"; &lt;/script&gt; </code></pre> <p>Change the anchor to a button with a onClick event that calls a function. In that function you can build the URL with my_url and recovering the name of the input you want with JS. After that, make a redirection:</p> <pre><code>window.location = "http://www.yoururl.com"; </code></pre>
python|django
1
1,901,561
64,336,251
How to stop alpha-beta with a timer in iterative deepening
<p>I have created a minimax function with alpha-beta pruning that I call with iterative deepening. The problem is that when the timer is done, the function keeps running until it finishes on the depth it started with before the timer ran out.</p> <p><strong>What I want:</strong> When the timer runs out, the minimax function should quit and either return None (I keep the best move outside of minimax, see the code for the minimax call below), or return the previously calculated best move. I just can't seem to figure out how to implement that in the minimax function; everything I tried results in it still completing the depth it is currently on.</p> <p><strong>Minimax function:</strong></p> <pre><code>def minimax(gamestate, depth, alpha, beta, maximizing_player): if depth == 0 or gamestate.is_check_mate or gamestate.is_stale_mate: return None, evaluate(gamestate) gamestate.is_white_turn = not maximizing_player children = gamestate.get_valid_moves() best_move = children[0] if maximizing_player: max_eval = -math.inf for child in children: board_copy = copy.deepcopy(gamestate) board_copy.make_move(child) current_eval = minimax(board_copy, depth - 1, alpha, beta, False)[1] if current_eval &gt; max_eval: max_eval = current_eval best_move = child alpha = max(alpha, current_eval) if beta &lt;= alpha: break return best_move, max_eval else: min_eval = math.inf for child in children: board_copy = copy.deepcopy(gamestate) board_copy.make_move(child) current_eval = minimax(board_copy, depth - 1, alpha, beta, True)[1] if current_eval &lt; min_eval: min_eval = current_eval best_move = child beta = min(beta, current_eval) if beta &lt;= alpha: break return best_move, min_eval </code></pre> <p>Function call with iterative deepening:</p> <pre><code>for depth in range(1, max_search_depth): time_start = time.time() move, evaluation = minimax(gamestate, depth, alpha, beta, maximizing_player) time_end = time.time() timer = time_end - time_start if timer &gt; max_search_time: break </code></pre>
<p>I often solve this kind of problem using a custom <code>Timeout</code> class.</p> <pre><code>import signal class TimeoutError(Exception): &quot;&quot;&quot; Custom error for Timeout class. &quot;&quot;&quot; pass class Timeout: &quot;&quot;&quot; A timeout handler with context manager. Based on UNIX signals. &quot;&quot;&quot; def __init__(self, seconds=1, error_message=&quot;Timeout&quot;): self.seconds = seconds self.error_message = error_message def handle_timeout(self, signum, frame): raise TimeoutError(self.error_message) def __enter__(self): signal.signal(signal.SIGALRM, self.handle_timeout) signal.alarm(self.seconds) def __exit__(self, type, value, traceback): signal.alarm(0) </code></pre> <p>You can run your recursive function inside a <code>with statement</code> like this:</p> <pre><code>with Timeout(5): try: result = minimax(gamestate, depth, alpha, beta, maximizing_player) except TimeoutError: result = None </code></pre>
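One caveat with the class above: <code>signal.SIGALRM</code> is UNIX-only, so it won't work on Windows. A portable alternative is to pass a deadline into the recursive function and check the clock at every node, so the exception unwinds the half-finished depth immediately. Below is a sketch with a toy recursion standing in for the minimax function:

```python
import time

class SearchTimeout(Exception):
    """Raised inside the search when the deadline passes."""
    pass

def search(depth, deadline):
    # Checking the clock at every node means the timeout is noticed
    # mid-depth, not only after the current depth finishes.
    if time.monotonic() > deadline:
        raise SearchTimeout
    if depth == 0:
        return 1
    return sum(search(depth - 1, deadline) for _ in range(3))

best = None
deadline = time.monotonic() + 0.1
try:
    for depth in range(1, 100):
        best = search(depth, deadline)  # keep the deepest completed result
except SearchTimeout:
    pass
print(best)  # value from the deepest depth that completed in time
```

In the real minimax, the iterative-deepening loop would keep the move from the last fully searched depth, exactly as the question's outer loop intends.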
python|minimax|alpha-beta-pruning|iterative-deepening
1
1,901,562
64,359,277
Converting a CTE query to SQLAlchemy ORM
<p>(This is a rewritten version of a deleted question from earlier today.)</p> <p>I'm using the SQLAlchemy ORM as part of a Flask app, with MySQL as a backend, and I'm trying to write a query to return a list of entries surrounding a particular entry. While I have a working query in SQL, I'm unsure how to code it in SQLA. The <a href="https://docs.sqlalchemy.org/en/13/orm/query.html#sqlalchemy.orm.query.Query.cte" rel="nofollow noreferrer">docs for CTE in the ORM</a> show a very complicated example, and there aren't many other examples I can find.</p> <p>For now assume a very simple table that only contains words:</p> <pre><code>class Word(db.Model): __tablename__ = 'word' id = db.Column(db.Integer, primary_key=True) word = db.Column(db.String(100)) </code></pre> <p>If I want the 10 words before and after a given word (with an id of 73), an SQL query that does what I need is:</p> <pre><code>WITH cte AS (SELECT id, word, ROW_NUMBER() OVER (ORDER BY word) AS rownumber FROM word) SELECT * FROM cte WHERE rownumber &gt; (SELECT rownumber FROM cte WHERE cte.id = 73) - 10 AND rownumber &lt; (SELECT rownumber FROM cte WHERE cte.id = 73) + 10 ORDER BY rownumber; </code></pre> <p>I can't figure out the next step, however. I want to get a list of Word objects. I'd imagine that the first part of it could be something like</p> <pre><code>id = 73 rowlist = db.session.query(Word.id, db.func.row_number()).filter(Word.id == id).order_by(Word.word).cte() </code></pre> <p>but even if this is right, I don't know how to get this into the next part; I got bogged down in the <code>aliased</code> bits in the examples. Could someone give me a push in the right direction?</p>
<p>This may not be the most elegant solution but it seems to be working for me:</p> <pre class="lang-py prettyprint-override"><code>engine = db.create_engine(sqlalchemy_uri) Base = declarative_base() class Word(Base): __tablename__ = &quot;so64359277&quot; id = db.Column(db.Integer, primary_key=True) word = db.Column(db.String(100)) def __repr__(self): return f&quot;&lt;Word(id={self.id}, word='{self.word}')&gt;&quot; Base.metadata.drop_all(engine, checkfirst=True) Base.metadata.create_all(engine) Session = sessionmaker(bind=engine) session = Session() # test data word_objects = [] for x in [ &quot;Hotel&quot;, &quot;Charlie&quot;, &quot;Alfa&quot;, &quot;India&quot;, &quot;Foxtrot&quot;, &quot;Echo&quot;, &quot;Bravo&quot;, &quot;Golf&quot;, &quot;Delta&quot;, ]: word_objects.append(Word(word=x)) session.add_all(word_objects) session.commit() # show test data with id values pprint(session.query(Word).all()) &quot;&quot;&quot;console output: [&lt;Word(id=1, word='Hotel')&gt;, &lt;Word(id=2, word='Charlie')&gt;, &lt;Word(id=3, word='Alfa')&gt;, &lt;Word(id=4, word='India')&gt;, &lt;Word(id=5, word='Foxtrot')&gt;, &lt;Word(id=6, word='Echo')&gt;, &lt;Word(id=7, word='Bravo')&gt;, &lt;Word(id=8, word='Golf')&gt;, &lt;Word(id=9, word='Delta')&gt;] &quot;&quot;&quot; target_word = &quot;Echo&quot; num_context_rows = 3 rowlist = session.query( Word.id, Word.word, db.func.row_number().over(order_by=Word.word).label(&quot;rownum&quot;), ).cte(&quot;rowlist&quot;) target_rownum = session.query(rowlist.c.rownum).filter( rowlist.c.word == target_word ) select_subset = session.query(rowlist.c.rownum, rowlist.c.id).filter( db.and_( (rowlist.c.rownum &gt;= target_rownum.scalar() - num_context_rows), (rowlist.c.rownum &lt;= target_rownum.scalar() + num_context_rows), ) ) rownum_id_map = {x[0]: x[1] for x in select_subset.all()} min_rownum = min(rownum_id_map) max_rownum = max(rownum_id_map) result = [] for rownum in range(min_rownum, max_rownum + 1): 
result.append(session.query(Word).get(rownum_id_map[rownum])) pprint(result) &quot;&quot;&quot;console output: [&lt;Word(id=7, word='Bravo')&gt;, &lt;Word(id=2, word='Charlie')&gt;, &lt;Word(id=9, word='Delta')&gt;, &lt;Word(id=6, word='Echo')&gt;, &lt;Word(id=5, word='Foxtrot')&gt;, &lt;Word(id=8, word='Golf')&gt;] &lt;Word(id=1, word='Hotel')&gt;] &quot;&quot;&quot; </code></pre>
python|sql|sqlalchemy
0
1,901,563
55,868,956
How to connect to a server with a certificate and rsa key
<p>I am very new to Python programming. I have a certificate and an RSA key, and I want to write a Python SSL socket client program to send a message to a server. I know the server hostname and port. I have been trying the code below.</p> <p>Please guide me.</p> <p>The code I have so far is:</p> <pre class="lang-py prettyprint-override"><code>import socket import ssl context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH) context.load_cert_chain(certfile="D:/mycerti.crt", keyfile="D:/mykey.key") soc = socket.socket() soc.bind(('xxx.xxx.xxx.xxx', 3335)) # i am giving actual ip address not xx.xx.xx. soc.listen(5) </code></pre> <p>but I keep getting the following error:</p> <pre class="lang-py prettyprint-override"><code>Traceback (most recent call last): File "\test.py", line 7, in &lt;module&gt; soc.bind(('xxx.xxx.xxx.xxx', 3335)) OSError: [WinError 10049] The requested address is not valid in its context </code></pre> <p>There was a mistake in the way I was using the OpenSSL socket, and I corrected it.
</p> <pre class="lang-py prettyprint-override"><code>context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH) # not CLIENT_AUTH context.load_cert_chain(certfile="D:/mycerti.crt", keyfile="D:/mykey.key") soc = socket.socket() conn = context.wrap_socket(soc, server_hostname='your_server_hostname') # name, not ip address conn.connect(('xxx.xxx.xxx.xxx', 3335)) </code></pre> <p>Now I don't get the above error, but now I get an error saying:</p>
I also tried to create a context from scratch.</p> <pre class="lang-py prettyprint-override"><code>context = ssl.SSLContext() context.verify_mode = ssl.CERT_REQUIRED context.check_hostname = True context.load_verify_locations("C:/mycerti.crt") soc = socket.socket() conn = context.wrap_socket(soc, server_hostname='x.x.x.x') conn.connect(('x.x.x.x.', 3335)) </code></pre> <p>But I get error saying.</p> <pre class="lang-py prettyprint-override"><code> [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: IP address mismatch, certificate is not valid for 'x.x.x.x. (_ssl.c:1056) </code></pre> <p>And I am sure the certificate is valid or that Ip. </p>
<p>You're close, but missing a couple of concepts.</p> <p>First, if you're a client, you "connect" your socket to the server. Servers, on the other hand, "bind" to a port and "listen" for incoming connection requests. (Similarly, you need to use ssl.Purpose.SERVER_AUTH rather than ssl.Purpose.CLIENT_AUTH: the latter is used if you are a server, looking to authenticate clients.)</p> <p>Second, you do create the context for SSL, but you still need to associate the SSLContext with the socket itself. For that, you use <code>wrap_socket</code>. As you have it, you have a context and you have a socket, but they don't know about each other. Once wrapped, you use that connection, rather than your original soc.</p> <p>Note that <code>wrap_socket</code> takes a server name, not an IP address. This is because it will check to make sure your server's certificate matches the certificate being provided by the server (the same IP can host many named servers). If it doesn't match, the connection will fail.</p> <p>Try something like this:</p> <pre><code>import socket import ssl context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH) # not CLIENT_AUTH context.load_cert_chain(certfile="D:/mycerti.crt", keyfile="D:/mykey.key") soc = socket.socket() conn = context.wrap_socket(soc, server_hostname='your_server_hostname') # name, not ip address conn.connect(('xxx.xxx.xxx.xxx', 3335)) # Connection made, so do something, like get the server's certificate # and see when it expires... ssl_info = conn.getpeercert() print(ssl_info['notAfter']) </code></pre>
python|openssl
0
1,901,564
50,006,152
How to restart app when source code modified in pyramid python framework?
<p>Is there any way to restart the app when a code file is modified in the Pyramid framework?</p> <pre><code>pyramid.reload_templates </code></pre> <p>This variable works for templates, but not for source code. I want the project to restart as soon as code is modified, like the nodemon npm package does. Thanks</p>
<p>From the Pyramid documentation on <a href="https://docs.pylonsproject.org/projects/pyramid/en/latest/narr/project.html#reloading-code" rel="nofollow noreferrer">Reloading Code</a> (pruned for relevance):</p> <blockquote> <p>During development, it's often useful to run <code>pserve</code> using its <code>--reload</code> option. When <code>--reload</code> is passed to <code>pserve</code>, changes to any Python module your project uses will cause the server to restart. This typically makes development easier, as changes to Python code made within a Pyramid application is not put into effect until the server restarts.</p> </blockquote> <pre><code>$VENV/bin/pserve development.ini --reload </code></pre> <blockquote> <p>Changes to template files (such as <code>.pt</code> or <code>.mak</code> files) won't cause the server to restart. Changes to template files don't require a server restart as long as the <code>pyramid.reload_templates</code> setting in the <code>development.ini</code> file is true. Changes made to template files when this setting is true will take effect immediately without a server restart.</p> </blockquote>
python|pyramid|autoload|restart
3
1,901,565
50,113,359
python selenium not running with chrome driver & chrome version
<p>I am trying to run a project in Selenium with ChromeDriver after not using it for a month (there was an update to Chrome in the meantime). When I run the project it opens the Chrome browser and then immediately closes it.</p> <p>I am receiving the following error:</p> <blockquote> <p>Traceback (most recent call last): File "C:\Users\maorb\OneDrive\Desktop\Maor\python\serethd\tvil_arthur.py", line 27, in driver = webdriver.Chrome() File "C:\Program Files (x86)\Python36-32\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 67, in __init__ desired_capabilities=desired_capabilities) File "C:\Program Files (x86)\Python36-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 91, in __init__ self.start_session(desired_capabilities, browser_profile) File "C:\Program Files (x86)\Python36-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 173, in start_session 'desiredCapabilities': desired_capabilities, File "C:\Program Files (x86)\Python36-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 233, in execute self.error_handler.check_response(response) File "C:\Program Files (x86)\Python36-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 194, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.WebDriverException: Message: session not created exception from disconnected: Unable to receive message from renderer (Session info: chrome=63.0.3239.108) (Driver info: chromedriver=2.36.540470 (e522d04694c7ebea4ba8821272dbef4f9b818c91),platform=Windows NT 10.0.16299 x86_64)</p> </blockquote> <p>I am using ChromeDriver version <code>2.36</code> &amp; Google Chrome version <code>63.0.3239.10</code></p> <p>I tried using the latest Chrome &amp; ChromeDriver versions, but Chrome just opens and does not execute any of the code.</p>
<p>People usually get this error when the script cannot find chromedriver. Re-check where you specified the path to be, and pass an explicit executable path when creating the driver.</p>
python|selenium|selenium-webdriver|selenium-chromedriver
0
1,901,566
66,405,910
Python zmq connections
<p>I tried to use python's zmq lib. And now I have two questions:</p> <ol> <li><p>Is there a way to check socket connection state? I'd like to know if connection is established after call <code>connect</code></p> </li> <li><p>I want to one-to-one communication model. I tried to use <code>PAIR</code> zmq socket type. In that case if one client is already connected, server will not receive any messages from secondary connected client. But I'd like to get info in the second client that there is another client and server is busy.</p> </li> </ol>
<ol> <li><p>You'd get an error if <code>connect</code> fails. But I guess the real question is how often you want to check this: once at startup, before each message, or periodically, using some <code>heartbeat</code>?</p> </li> <li><p>That does not make sense, as you cannot send info without connecting first. However, some socket types might give some more info.</p> </li> </ol> <p>But the best way would be to use multiple sockets: one for such status information, and another one for sending data. ZMQ is made to use multiple sockets.</p>
python|zeromq
-1
1,901,567
65,000,775
How to return a dataframe column values from a For loop
<p>I'm trying to calculate the value of one column based on the entries of another column, I'm trying to achieve this using for loop, I don't know how to return the values for entire column, rather just the last iteration, below is an example of my code, please help me find where I'm doing wrong.</p> <pre><code>def marks_calculation(Master_Sheet): for a in Master_Sheet.Marks Condition: if a == &quot;Yes&quot;: Master_Sheet['Total'] = Master_Sheet['Total Marks']*0.85 Master_Sheet.loc[(Master_Sheet['Total Marks'] &gt; Master_Sheet['Basic1']) &amp; (Master_Sheet['Total Marks'] &lt;= Master_Sheet['Basic3']),'Total'] = (Master_Sheet['Basic1'] * Master_Sheet['Slab1']) + ((Master_Sheet['Total Marks'] - Master_Sheet['Basic1']) * Master_Sheet['Slab2']) Master_Sheet.loc[(Master_Sheet['Total Marks'] &gt; Master_Sheet['Basic2']),'Total'] = (Master_Sheet['Basic1'] * Master_Sheet['Slab1']) + ((Master_Sheet['Basic2'] - Master_Sheet['Basic1']) * Master_Sheet['Slab2']) + ((Master_Sheet['Total Marks'] - Master_Sheet['Basic2']) * Master_Sheet['Slab3']) return Master_Sheet['Total'] else : Master_Sheet['Total'] = Master_Sheet['Aggregated Marks'] return Master_Sheet['Total'] Master_Sheet['Total'] = Master_Sheet['Total'].replace(&quot;&quot;,0, regex=True) Master_Sheet[['Students','Total Marks','Aggregated Marks','Marks Condition','Total']] </code></pre> <hr /> <p>e.g. Data</p> <pre><code>Student Condition Aggregated TotalMarks Basic1 Basic2 Slab1 Slab2 Slab3 Total A Yes 65.34 54.29 45 55 49% 64% 82% 28.00 B No 75.65 94.32 23 54 73% 81% 33% 75.65 C No 87.9 82.9 67 78 85% 54% 46% 87.90 D Yes 59.4 92.02 75 83 53% 71% 47% 47.59 E No 83.05 62.45 23 35 70% 35% 23% 83.05 </code></pre> <hr /> <p>Here the Total has only to be calculated if the Condition is Yes, if it is NO, we need to use data in Aggregated. 
If the Condition is Yes, the calculation for Total goes as below:</p> <p>If(TotalMarks&lt;=Basic1, TotalMarks<em>Slab1, If(AND(TotalMarks&gt;Basic1, TotalMarks&lt;=Basic2),(Basic1</em>Slab1)+((TotalMarks-Basic1)<em>Slab2),(Basic1</em>Slab1)+((TotalMarks-Basic1)*Slab2)+((Basic2-TotalMarks)*Slab3)))</p>
<p>I couldn't understand the calculation example, so I wasn't able to produce something for the Yes condition. The following line updates every 'Total' entry whose Condition is 'No' with the value from the Aggregated column:</p> <pre><code>Master_Sheet.loc[Master_Sheet['Condition']=='No', 'Total'] = Master_Sheet['Aggregated'] </code></pre>
python-3.x|pandas|dataframe|for-loop|if-statement
0
1,901,568
64,001,903
Is there any way to get torch.mode over multidimensional tensor
<p>is there any way torch.mode can be applied over multiple dimensions</p> <p>for example</p> <pre><code>import numpy as np import torch x = np.random.randint(10, size=(3, 5)) y = torch.tensor(x) </code></pre> <p>lets say y has</p> <pre><code>[[6 3 7 3 0] [2 5 7 9 7] [6 1 4 6 3]] </code></pre> <p><code>torch.mode</code> should return a size 3 tensor <code>[3,7,6]</code></p> <p>without using a loop</p>
<p>Use the <code>dim</code> argument of <code>torch.mode</code> to select which dimension should be reduced by the mode operator.</p> <pre><code>torch.mode(y, dim = 1)[0] </code></pre> <p>This will give you the desired answer.</p>
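For intuition, the per-row mode that `torch.mode(y, dim=1)` computes can be sketched with only the standard library. This is a semantic illustration on the question's sample, not a replacement for the PyTorch call (ties are broken by first occurrence here, which may differ from PyTorch's tie-breaking):

```python
from collections import Counter

# the sample matrix from the question
rows = [[6, 3, 7, 3, 0],
        [2, 5, 7, 9, 7],
        [6, 1, 4, 6, 3]]

# most common value of each row, analogous to torch.mode(y, dim=1)[0]
modes = [Counter(row).most_common(1)[0][0] for row in rows]
print(modes)  # [3, 7, 6]
```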
python|numpy|machine-learning|pytorch
1
1,901,569
52,931,750
How to merge 2 dataframe base on a third dataframe containing multiindex
<p>I have 2 dataframe and i want to merge them. my join keys are index and they are third dataframe.</p> <p>the two dataframe :</p> <pre><code>df1 = pd.DataFrame({'A' : [1,2,3] }) df2 = pd.DataFrame({'B' : ['A','B','C']}) </code></pre> <p>the dataframe containing index list:</p> <pre><code>df3 = pd.DataFrame({'C' :[1], 'D' :[2]}) df3 = df3.set_index(['C','D']) </code></pre> <p>desired output :</p> <pre><code> A B 0 2 C </code></pre>
<p>With tuples in a list:</p> <h3>Setup</h3> <pre><code>import pandas as pd df1 = pd.DataFrame({'A' : [1,2,3] }) df2 = pd.DataFrame({'B' : ['A','B','C']}) t = [(1,2), (1,0)] </code></pre> <p>Make df3, with the correct column names:</p> <pre><code>df3 = pd.DataFrame(t, columns=list(df1.columns)+list(df2.columns)) # A B #0 1 2 #1 1 0 </code></pre> <p>Then you can use <code>lookup</code> + <code>pivot</code> to get the correct value from each frame, and return to the original shape.</p> <pre><code>df3 = df3.stack().reset_index() df3['vals'] = pd.concat([df1, df2], axis=1).lookup(df3[0], df3.level_1) df3 = df3.pivot(index='level_0', columns='level_1', values='vals').rename_axis(None, axis=1).rename_axis(None, axis=0) # A B #0 2 C #1 2 A </code></pre>
pandas
1
1,901,570
65,363,613
Scraping the data out of the text
<p>I am working on a price checker app for the Steam Community market. I have used the following code to extract the source code of the website, which includes all of the sales that have been made until today. Can you please help me to get the data, which is between &quot;[[]]&quot; signs?</p> <pre><code>import requests sites = [ &quot;https://steamcommunity.com/market/listings/730/AK-47%20%7C%20Redline%20%28Field-Tested%29&quot; ] for url in sites: r = requests.get(url) page_source = r.text page_source = page_source.split('\n') print(&quot;\nURL:&quot;, url) for row in page_source[:]: print(row) </code></pre>
<p>I used a regex to extract the data:</p> <pre><code>import requests import re sites = [ &quot;https://steamcommunity.com/market/listings/730/AK-47%20%7C%20Redline%20%28Field-Tested%29&quot; ] for url in sites: r = requests.get(url) page_source = r.text # print(page_source) results = re.search(r'var line1=\[.*\]',page_source).group() print(results[10:]) # slice off the leading 'var line1=' </code></pre>
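The same idea can be tried offline on a hard-coded sample string, so the regex and the parsing step are easy to verify without a network call (the sample below only mimics the shape of Steam's embedded `line1` array; the real page contains far more entries):

```python
import re
import json

# stand-in for r.text -- a tiny excerpt shaped like the embedded data
page_source = ('var line1=[["Dec 01 2020 01: +0",6.489,"1273"],'
               '["Dec 02 2020 01: +0",6.512,"988"]];')

# capture just the [...] payload, then parse it as JSON
match = re.search(r'var line1=(\[.*\])', page_source)
data = json.loads(match.group(1))
print(len(data), data[0][1])  # 2 6.489
```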
python-3.x|web-scraping|text|steamworks-api|steambot
0
1,901,571
72,016,351
Group periodic data in pandas dataframe
<p>I have a pandas dataframe that looks like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>idx</th> <th>A</th> <th>B</th> </tr> </thead> <tbody> <tr> <td>01/01/01 00:00:01</td> <td>5</td> <td>2</td> </tr> <tr> <td>01/01/01 00:00:02</td> <td>4</td> <td>5</td> </tr> <tr> <td>01/01/01 00:00:03</td> <td>5</td> <td>4</td> </tr> <tr> <td>02/01/01 00:00:01</td> <td>3</td> <td>8</td> </tr> <tr> <td>02/01/01 00:00:02</td> <td>7</td> <td>4</td> </tr> <tr> <td>02/01/01 00:00:03</td> <td>1</td> <td>3</td> </tr> </tbody> </table> </div> <p>I would like to group data based on its periodicity such that the final dataframe is:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>new_idx</th> <th>01/01/01</th> <th>02/01/01</th> <th>old_column</th> </tr> </thead> <tbody> <tr> <td>00:00:01</td> <td>5</td> <td>3</td> <td>A</td> </tr> <tr> <td>00:00:02</td> <td>4</td> <td>7</td> <td>A</td> </tr> <tr> <td>00:00:03</td> <td>5</td> <td>1</td> <td>A</td> </tr> <tr> <td>00:00:01</td> <td>2</td> <td>8</td> <td>B</td> </tr> <tr> <td>00:00:02</td> <td>5</td> <td>4</td> <td>B</td> </tr> <tr> <td>00:00:03</td> <td>4</td> <td>3</td> <td>B</td> </tr> </tbody> </table> </div> <p>Is there a way to this that holds when the first dataframe gets big (more columns, more periods and more samples)?</p>
<p>One way is to <code>melt</code> the DataFrame, then split the datetime to dates and times; finally <code>pivot</code> the resulting DataFrame for the final output:</p> <pre class="lang-py prettyprint-override"><code>df = df.melt('idx', var_name='old_column') df[['date','new_idx']] = df['idx'].str.split(expand=True) out = df.pivot(index=['new_idx','old_column'], columns='date', values='value').reset_index().rename_axis(columns=[None]).sort_values(by='old_column') </code></pre> <p>Output</p> <pre class="lang-py prettyprint-override"><code> new_idx old_column 01/01/01 02/01/01 0 00:00:01 A 5 3 2 00:00:02 A 4 7 4 00:00:03 A 5 1 1 00:00:01 B 2 8 3 00:00:02 B 5 4 5 00:00:03 B 4 3 </code></pre>
python|python-3.x|pandas|dataframe|pivot-table
3
1,901,572
68,518,750
Kivy - rebuild class/ Boxlayout with updated content
<p>In my Kivy-App, i generate Buttons via a python-class based on a dictionary (in the following example i use a list but that's just an example for the underlying problem). Within the App, the dictionary gets changed and i want to display that change (obviously) in my App (by adding/ removing/ rearranging the Buttons). To achieve this, my approach is to either restart the entire App or only reload that particular BoxLayout. Unfortunately, non of my attempts worked out so far and i could not find any (working) solution on the internet.</p> <p>This is my code example:</p> <p>Python Code:</p> <pre><code>from kivy.app import App from kivy.uix.boxlayout import BoxLayout from kivy.uix.button import Button buttonlist = [] counter = 0 class MainWindow(BoxLayout): def addbutton(self): global buttonlist global counter buttonlist.append(counter) counter += 1 class ButtonBox(BoxLayout): def __init__(self, **kwargs): super().__init__(**kwargs) self.orientation = &quot;vertical&quot; global buttonlist for button in buttonlist: b = Button(text=str(button)) self.add_widget(b) class KivyApp(App): def build(self): return MainWindow() KivyApp().run() </code></pre> <p>KV Code:</p> <pre><code>&lt;MainWindow&gt;: BoxLayout: ButtonBox: Button: text: &quot;add Button&quot; on_press: root.addbutton() </code></pre> <p>My closest attempt was something containing a restart-Method like:</p> <pre><code>def restart(self): self.stop() return KivyApp().run() </code></pre> <p>and calling:</p> <pre><code>App.get_running_app().restart() </code></pre> <p>But for some reason, this does not stop the App but opens a second instance of the App within the first one (resulting in App in App in App in App if pressed often)</p>
<p>You can rebuild the <code>ButtonBox</code> by first calling <code>clear_widgets()</code> on the <code>ButtonBox</code> instance. Here is a modified version of your code that does that:</p> <pre><code>from kivy.app import App from kivy.lang import Builder from kivy.uix.boxlayout import BoxLayout from kivy.uix.button import Button kv = ''' &lt;MainWindow&gt;: BoxLayout: ButtonBox: id: box Button: text: &quot;add Button&quot; on_press: root.addbutton() ''' buttonlist = ['Abba', 'Dabba', 'Doo'] counter = 3 class MainWindow(BoxLayout): def addbutton(self): global buttonlist global counter buttonlist.append(str(counter)) counter += 1 self.ids.box.reload() class ButtonBox(BoxLayout): def __init__(self, **kwargs): super().__init__(**kwargs) self.orientation = &quot;vertical&quot; self.reload() def reload(self): # method to rebuild the ButtonBox contents global buttonlist self.clear_widgets() for button in buttonlist: b = Button(text=str(button)) self.add_widget(b) class KivyApp(App): def build(self): Builder.load_string(kv) return MainWindow() KivyApp().run() </code></pre> <p>I used your <code>kv</code> as a string, just for my own convenience.</p>
python|kivy
0
1,901,573
68,464,024
Python Flask-Mysql KeyError: 'MYSQL_HOST'
<p>I am building a Python Flask-MySQL app using AWS Cloud9. But when I run the code I am getting a MYSQL_HOST KeyError. I am attaching the code below. Is it an installation fault or a code error?</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>from flask import Flask, request, render_template from flask_mysqldb import MySQL application = Flask(__name__) application.config['MYSQL_HOST'] = 'localhost' application.config['MYSQL_USER'] = 'nfhfjfn' application.config['MYSQL_PASSWORD'] = 'fsfc' application.config['MYSQL_DB'] = 'fsvf' application.config['MYSQL_CURSORCLASS'] = 'DictCursor' mysql = MySQL(application) # mysql.init_app(application) application = Flask(__name__) @application.route("/") def hello(): cursor = mysql.connect().cursor() cursor.execute("SELECT * from LANGUAGES;") mysql.connection.commit() languages = cursor.fetchall() languages = [list(l) for l in languages] return render_template('index.html', languages=languages) if __name__ == "__main__": application.run(host='0.0.0.0',port=8080, debug=True)</code></pre> </div> </div> </p>
<p>You are calling <code>application = Flask(__name__)</code> <strong>twice</strong>. So the second time you are overwriting the first <code>application</code>. It should be:</p> <pre><code>from flask import Flask, request, render_template from flask_mysqldb import MySQL application = Flask(__name__) application.config['MYSQL_HOST'] = 'localhost' application.config['MYSQL_USER'] = 'nfhfjfn' application.config['MYSQL_PASSWORD'] = 'fsfc' application.config['MYSQL_DB'] = 'fsvf' application.config['MYSQL_CURSORCLASS'] = 'DictCursor' mysql = MySQL(application) # mysql.init_app(application) #application = Flask(__name__) &lt;--- remove that @application.route(&quot;/&quot;) def hello(): cursor = mysql.connect().cursor() cursor.execute(&quot;SELECT * from LANGUAGES;&quot;) mysql.connection.commit() languages = cursor.fetchall() languages = [list(l) for l in languages] return render_template('index.html', languages=languages) if __name__ == &quot;__main__&quot;: application.run(host='0.0.0.0',port=8080, debug=True) </code></pre>
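The underlying pitfall is plain Python name rebinding, not anything Flask-specific. This toy registry (a hypothetical stand-in for a Flask app, so it runs without Flask installed) shows why a route registered before the rebinding is lost:

```python
class Registry:
    """Toy stand-in for Flask: remembers which function handles which path."""
    def __init__(self):
        self.routes = {}

    def route(self, path):
        def decorator(fn):
            self.routes[path] = fn
            return fn
        return decorator

app = Registry()

@app.route("/")
def hello():
    return "hello"

app = Registry()  # rebinds the name: a brand-new object with an empty routes dict

print("/" in app.routes)  # False -- the earlier registration lives on the old object
```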
python|amazon-web-services|amazon-ec2|flask-mysql
1
1,901,574
68,478,311
The script needs to delete the unused volumes unless they have some certain values
<p>The script needs to delete the unused volumes. If the volume has the</p> <pre><code>Tag Value Key : clean </code></pre> <p>and</p> <pre><code>Value : DND </code></pre> <p>then it shouldn't delete it. While executing the script, it deletes the volumes which have the</p> <pre><code>Tag Value Key : clean Value : DND </code></pre> <pre><code>import boto3 ec2 = boto3.client('ec2', region_name='ap-southeast-2') def lambda_handler(event, context): try: for vol in ec2.describe_volumes()['Volumes']: if vol['State'] == 'available': if vol.get('Tags') == None: ec2.delete_volume(VolumeId = vol['VolumeId']) print(&quot;deleted:&quot;, vol['VolumeId']) if not vol.get('Tags') == None: if not ((vol.get('Tags')[0]['Key'] == 'clean') and (vol.get('Tags')[0]['Value'] == 'DND')): ec2.delete_volume(VolumeId = vol['VolumeId']) print(&quot;deleted:&quot;, vol['VolumeId']) except botocore.exceptions.ClientError as e: error_code = int(e.response['Error']['Code']) if error_code == 404: exists = False </code></pre>
<p>You are checking only the first tag [0]. You should iterate over all of them to check:</p> <pre><code>for vol in ec2.describe_volumes()['Volumes']: if vol['State'] == 'available': if 'Tags' in vol: protected = False # iterate over all tags, not only the first one, to see # whether the volume carries the clean/DND marker for tag in vol['Tags']: if tag['Key'] == 'clean' and tag['Value'] == 'DND': protected = True break if protected: print(&quot;kept:&quot;, vol['VolumeId']) else: #ec2.delete_volume(VolumeId = vol['VolumeId']) print(&quot;deleted:&quot;, vol['VolumeId']) </code></pre> <p>Edit: as John mentioned, it can be simplified, because <code>in</code> compares dicts by value:</p> <pre><code>for vol in ec2.describe_volumes()['Volumes']: if vol['State'] == 'available': if 'Tags' in vol: if {'Key': 'clean', 'Value': 'DND'} in vol['Tags']: print(&quot;kept:&quot;, vol['VolumeId']) else: #ec2.delete_volume(VolumeId = vol['VolumeId']) print(&quot;deleted:&quot;, vol['VolumeId']) </code></pre>
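The simplification John mentioned works because `describe_volumes` returns tags as a list of `{'Key': ..., 'Value': ...}` dicts, and Python's `in` compares dicts by value. A standalone check (the sample tag lists below are made up):

```python
# shape of the Tags list as returned by describe_volumes
tags = [{'Key': 'Name', 'Value': 'data-disk'},
        {'Key': 'clean', 'Value': 'DND'}]

protected = {'Key': 'clean', 'Value': 'DND'} in tags
print(protected)  # True

tags_without = [{'Key': 'Name', 'Value': 'scratch'}]
print({'Key': 'clean', 'Value': 'DND'} in tags_without)  # False
```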
python|amazon-web-services|aws-lambda
2
1,901,575
71,585,339
How do i create a heatmap in sns
<p>I want to make a seaborn heatmap from this data. I have tried but am still a bit stuck.</p> <pre><code>Unnamed: 0 Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec 0 Touba 24 26 27 29 30 30 29 28 28 29 27 23 </code></pre> <p>I have tried various commands (index, cols).</p>
<p>You simply need to set the first column as index.</p> <p>You can use <code>set_index</code>:</p> <pre><code>import seaborn as sns sns.heatmap(df.set_index('Unnamed: 0')) </code></pre> <p>But the best would be to correctly read the csv in the first place:</p> <pre><code>df = pd.read_csv(..., index_col=0) sns.heatmap(df) </code></pre> <p>output:</p> <p><a href="https://i.stack.imgur.com/LdPBj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LdPBj.png" alt="heatmap" /></a></p>
python|pandas|seaborn
2
1,901,576
71,734,686
Issue with replacing NULL sqlite3 database column values with other types in Python 3?
<p>I've run into a problem with the sqlite3 module in Python 3, where I can't seem to figure out how to replace <code>NULL</code> values from the database with other ones, mainly strings and integers.</p> <p>This command doesn't do the job, but also raises no exceptions: <code>UPDATE table SET animal='cat' WHERE animal=NULL AND id=32</code></p> <p>The database table column &quot;animal&quot; is of type <code>TEXT</code> and gets filled with <code>NULL</code>s where no other value has been specified. The column &quot;id&quot; is primary keyed and thus features only unique integer row indices.</p> <p>If the column &quot;animal&quot; is defined, not <code>NULL</code>, the above command works flawlessly. I can replace existing strings, integers, and floats with it.</p> <p>What am I overlooking here?</p> <p>Thanks.</p>
<p>The <code>NULL</code> value in SQL is special, and to compare values against it you need to use the <code>IS</code> and <code>IS NOT</code> operators. So your query should be this:</p> <pre class="lang-sql prettyprint-override"><code>UPDATE table SET animal = 'cat' WHERE animal IS NULL AND id = 32; </code></pre> <p><code>NULL</code> by definition means &quot;unknown&quot; in SQL, and so comparing a column directly against it with <code>=</code> also produces an unknown result.</p>
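Because `sqlite3` ships with Python, the difference is easy to demonstrate end-to-end on an in-memory database:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pets (id INTEGER PRIMARY KEY, animal TEXT)")
con.execute("INSERT INTO pets (id, animal) VALUES (32, NULL)")

# '=' never matches NULL, so this updates nothing
con.execute("UPDATE pets SET animal = 'cat' WHERE animal = NULL AND id = 32")
print(con.execute("SELECT animal FROM pets WHERE id = 32").fetchone())  # (None,)

# 'IS NULL' is the correct comparison
con.execute("UPDATE pets SET animal = 'cat' WHERE animal IS NULL AND id = 32")
print(con.execute("SELECT animal FROM pets WHERE id = 32").fetchone())  # ('cat',)
```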
sqlite|python-3.9
2
1,901,577
71,479,406
How to change format of a numpy array of date from YYYY-MM-DD to MM - YY
<p>I have the following date list:</p> <pre><code>[datetime.date(2021, 9, 30), datetime.date(2021, 10, 31), datetime.date(2021, 11, 30), datetime.date(2021, 12, 31), datetime.date(2022, 1, 31), datetime.date(2022, 2, 28)] </code></pre> <p>Which I store in an array:</p> <pre><code>NDateArray = np.array(DateList,dtype=np.datetime64) ['2021-09-30' '2021-10-31' '2021-11-30' '2021-12-31' '2022-01-31' '2022-02-28'] </code></pre> <p>Ho do I convert these to</p> <pre><code>'Sept 21' 'Oct 21' 'Nov 2021' .... </code></pre>
<p>You can do it like this:</p> <pre><code>import numpy as np import datetime DateList = [datetime.date(2021, 9, 30), datetime.date(2021, 10, 31), datetime.date(2021, 11, 30), datetime.date(2021, 12, 31), datetime.date(2022, 1, 31), datetime.date(2022, 2, 28)] NDateList = [np.datetime64(x) for x in DateList] for i in range(0,len(NDateList)): mydate = datetime.datetime.strptime(str(NDateList[i]), '%Y-%m-%d') print(mydate.strftime('%b, %y')) </code></pre> <p><strong>Output :</strong></p> <pre><code>Sep, 21 Oct, 21 Nov, 21 Dec, 21 Jan, 22 Feb, 22 </code></pre> <p>If you want the full month name and a four-digit year, you can do <code>print(mydate.strftime('%B, %Y'))</code></p> <p><strong>Output :</strong></p> <pre><code>September, 2021 October, 2021 November, 2021 December, 2021 January, 2022 February, 2022 </code></pre> <p><strong>Or</strong></p> <p>if you want to keep the array from your code, you can continue like this:</p> <pre><code>import numpy as np import datetime DateList = [datetime.date(2021, 9, 30), datetime.date(2021, 10, 31), datetime.date(2021, 11, 30), datetime.date(2021, 12, 31), datetime.date(2022, 1, 31), datetime.date(2022, 2, 28)] NDateArray = np.array(DateList,dtype=np.datetime64) NDateList = [str(x.astype('datetime64[D]')) for x in NDateArray] NewArrayDate = [] for i in range(0,len(NDateList)): mydate = datetime.datetime.strptime(str(NDateList[i]), '%Y-%m-%d') # append result to new array NewArrayDate.append(mydate.strftime('%b, %y')) print(NewArrayDate) </code></pre>
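If the goal is only the short month/year labels, note that the NumPy round-trip is unnecessary: `datetime.date` objects already support `strftime` directly, so the whole conversion is a one-liner (month names here follow the default C locale, so September prints as 'Sep'):

```python
import datetime

dates = [datetime.date(2021, 9, 30), datetime.date(2021, 10, 31),
         datetime.date(2021, 11, 30), datetime.date(2021, 12, 31),
         datetime.date(2022, 1, 31), datetime.date(2022, 2, 28)]

labels = [d.strftime('%b %y') for d in dates]
print(labels)  # ['Sep 21', 'Oct 21', 'Nov 21', 'Dec 21', 'Jan 22', 'Feb 22']
```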
python
0
1,901,578
71,621,141
Save .vtk voxelized file in Python
<p>I have a 3D matrix data from a txt file representing voxels. I already represented them in vtk as a structured grid:</p> <pre><code>grid = vtk.vtkExplicitStructuredGrid() </code></pre> <p>Everything working fine. Adding color functions, actors, renderer, window... But now I want to save it as a .vtk file to read it with other applications and I don't know how. I tried:</p> <pre><code>exporter = vtk.vtkVRMLExporter() exporter.SetRenderWindow(window) exporter.SetFileName(&quot;sample.vtk&quot;) exporter.Write() exporter.Update() </code></pre> <p>But it creates a file almost empty, just with metadata and not voxels data. I tried change it for: vtkVTKExporter and didn't work too.</p> <p>I also tried to use other functions calling the &quot;grid&quot; but didn't even created the file:</p> <pre><code>vtkXMLUnstructuredGridWriter vtkXMLPolyDataWriter vtkXMLUnstructuredGridWriter </code></pre> <p>And finally I tried to use:</p> <pre><code>writer = vtk.vtkStructuredPointsWriter() windowto_image_filter = vtk.vtkWindowToImageFilter() windowto_image_filter.SetInput(window) windowto_image_filter.SetScale(1) # image quality windowto_image_filter.SetInputBufferTypeToRGBA() writer.SetFileName('sample2.vtk') writer.SetInputConnection(windowto_image_filter.GetOutputPort()) writer.Write() </code></pre> <p>But leads to an error.</p> <p>Is there a way to save it as a .vtk file in binary with all the represented information? And also, is there a way to save the image in the vtk window created as a .jpg or .png like a screenshot. Thanks!</p>
<p>To save the grid with its data, use a writer (exporters are meant to save &quot;what is visible now&quot;, not the underlying data).</p> <p>With <code>vtkStructuredGrid</code>, use <code>vtkXMLStructuredGridWriter</code> to get a <code>.vts</code> file, or <code>vtkDataSetWriter</code> to produce a legacy <code>.vtk</code> file. Then call <code>writer.SetInputData(grid)</code>.</p> <p>To save a screenshot, you can use <code>vtkPNGWriter</code> on the output of your <code>vtkWindowToImageFilter</code>, as described in this <a href="https://kitware.github.io/vtk-examples/site/Python/Utilities/Screenshot/" rel="nofollow noreferrer">example</a></p>
python|vtk
0
1,901,579
10,501,114
Enabling validation for my SelectField
<p>I have a SelectField that I want to add validation to with WTForms. The fields gets its values from a dynamic dropdown since it is the city field of a pair of region/city choice where the user first select region and then the city options switches to display the cities of the selected region:</p> <p><img src="https://i.stack.imgur.com/gNHMZ.png" alt="enter image description here"></p> <p>If I set it to the same name as in the form class then I still can perform validation on the input:</p> <pre><code>&lt;div id="cities"&gt; {{form.area}} &lt;/div&gt;{% if form.area.errors %} &lt;div class="maintext"&gt; &lt;ul class="errors"&gt;{% for error in form.area.errors %}&lt;li&gt;{{ error }}&lt;/li&gt;{% endfor %}&lt;/ul&gt;&lt;/div&gt; {% endif %} </code></pre> <p>I want a red frame around the fields which do not validate and this works for other fields than the SelectField (see attached image). I wonder how to enable validation for my SelectField? It works for all other fields. To enable the red frame around the field I tried to subclass the Select class and add this as a widget since this works for other fields but not for the SelectField:</p> <pre><code>from wtforms.widgets import Select class SelectWithRedFrame(Select): def __init__(self, error_class=u'has_errors'): super(SelectWithRedFrame, self).__init__() self.error_class = error_class def __call__(self, field, **kwargs): if field.errors: c = kwargs.pop('class', '') or kwargs.pop('class_', '') kwargs['class'] = u'%s %s' % (self.error_class, c) return super(SelectWithRedFrame, self).__call__(field, **kwargs) class AdForm(Form): my_choices = [ ('1', _('All categories')), ('disabled', _('VEHICLES')), ('2010', _('Cars')), ('3', _('Motorcycles')), ('4', _('Accessories &amp; Parts')), ('disabled', _('PROPERTIES')), ('7', _('Apartments')), ('8', _('Houses')), ('9', _('Commercial properties')), ('10', _('Land')), ('disabled', _('ELECTRONICS')), ('12', _('Mobile phones &amp; Gadgets')), ('13', 
_('TV/Audio/Video/Cameras')), ('14', _('Computers')), ('disabled', _('HOME &amp; PERSONAL ITEMS')), ('16', _('Home &amp; Garden')), ('17', _('Clothes/Watches/Accessories')), ('18', _('For Children')), ('disabled', _('LEISURE/SPORTS/HOBBIES')), ('20', _('Sports &amp; Outdoors')), ('21', _('Hobby &amp; Collectables')), ('22', _('Music/Movies/Books')), ('23', _('Pets')), ('20', _('BUSINESS TO BUSINESS')), ('24', _('Hobby &amp; Collectables')), ('25', _('Professional/Office equipment')), ('26', _('Business for sale')), ('disabled', _('JOBS &amp; SERVICES')), ('28', _('Jobs')), ('29', _('Services')), ('30', _('Events &amp; Catering')), ('31', _('Others')), ] regions = [ ('', _('Choose')), ('3', _('Delhi')), ('4', _('Maharasta')), ('7', _('Gujarat')), ] cities = [ ('', _('Choose')), ('3', _('Mumbai')), ('4', _('Delhi')), ] nouser = HiddenField(_('No user')) # dummy variable to know whether user is logged in name = TextField(_('Name'), [validators.Required(message=_('Name is required' ))], widget=MyTextInput()) title = TextField(_('Subject'), [validators.Required(message=_('Subject is required' ))], widget=MyTextInput()) text = TextAreaField(_('Ad text'), [validators.Required(message=_('Text is required' ))], widget=MyTextArea()) phonenumber = TextField(_('Phone'), [validators.Optional()]) phoneview = BooleanField(_('Display phone number on site')) price = TextField(_('Price'), [validators.Regexp('^[0-9]+$', message=_('This is not an integer number, please see the example and try again' )), validators.Optional()], widget=MyTextInput()) email = TextField(_('Email'), [validators.Required(message=_('Email is required' )), validators.Email(message=_('Your email is invalid' ))], widget=MyTextInput()) region = SelectField(_('Region'),choices=regions,validators=[validators.Required(message=_('Region is required'))],option_widget=SelectWithRedFrame()) area = SelectField(_('City'),choices=cities,validators=[validators.Required(message=_('City is required' 
))],option_widget=SelectWithRedFrame()) def validate_name(form, field): if len(field.data) &gt; 50: raise ValidationError(_('Name must be less than 50 characters' )) def validate_email(form, field): if len(field.data) &gt; 60: raise ValidationError(_('Email must be less than 60 characters' )) def validate_price(form, field): if len(field.data) &gt; 8: raise ValidationError(_('Price must be less than 9 integers' )) </code></pre> <p>Can you tell me what I'm doing wrong and how I can enable my SelectField for validation?</p> <p>Thanks</p>
<p>Have you tried the validator <code>AnyOf</code> yet? See <a href="http://wtforms.simplecodes.com/docs/0.6/validators.html" rel="nofollow">http://wtforms.simplecodes.com/docs/0.6/validators.html</a></p> <p><code>class wtforms.validators.AnyOf(values, message=u'Invalid value, must be one of: %(values)s', values_formatter=None)</code><br/>Compares the incoming data to a sequence of valid inputs.</p>
python|google-app-engine|python-2.7|wtforms
2
1,901,580
5,228,718
Convolution along one axis only
<p>I have two 2-D arrays with the same first axis dimensions. In python, I would like to convolve the two matrices along the second axis only. I would like to get <code>C</code> below without computing the convolution along the first axis as well.</p> <pre><code>import numpy as np import scipy.signal as sg M, N, P = 4, 10, 20 A = np.random.randn(M, N) B = np.random.randn(M, P) C = sg.convolve(A, B, 'full')[(2*M-1)/2] </code></pre> <p>Is there a fast way?</p>
<p>You can use <code>np.apply_along_axis</code> to apply <code>np.convolve</code> along the desired axis. Here is an example of applying a boxcar filter to a 2d array:</p> <pre><code>import numpy as np a = np.arange(10) a = np.vstack((a,a)).T filt = np.ones(3) np.apply_along_axis(lambda m: np.convolve(m, filt, mode='full'), axis=0, arr=a) </code></pre> <p>This is an easy way to generalize many functions that don't have an <code>axis</code> argument.</p>
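To pin down what "convolving along one axis" computes, here is a pure-Python rendering of `np.convolve(row, kernel, 'full')` applied row by row — a semantic sketch only; for real arrays the vectorized NumPy/SciPy calls above are the way to go:

```python
def convolve_full(seq, kernel):
    # 1-D 'full' convolution: output length is len(seq) + len(kernel) - 1
    out = [0] * (len(seq) + len(kernel) - 1)
    for i, s in enumerate(seq):
        for j, w in enumerate(kernel):
            out[i + j] += s * w
    return out

rows = [[0, 1, 2, 3],
        [4, 5, 6, 7]]
boxcar = [1, 1, 1]

result = [convolve_full(row, boxcar) for row in rows]
print(result)  # [[0, 1, 3, 6, 5, 3], [4, 9, 15, 18, 13, 7]]
```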
numpy|signal-processing|scipy|linear-algebra|convolution
22
1,901,581
4,926,721
KeyError while building multidimensional dictionary in Python
<p>I am trying to build a dictionary with two keys but am getting a KeyError when assigning items. I don't get the error when using each of the keys separately, and the syntax seems pretty straightforward so I'm stumped.</p> <pre><code>searchIndices = ['Books', 'DVD'] allProducts = {} for index in searchIndices: res = amazon.ItemSearch(Keywords = entity, SearchIndex = index, ResponseGroup = 'Large', ItemPage = 1, Sort = "salesrank", Version = '2010-11-01') products = feedparser.parse(res) for x in range(10): allProducts[index][x] = { 'price' : products['entries'][x]['formattedprice'], 'url' : products['entries'][x]['detailpageurl'], 'title' : products['entries'][x]['title'], 'img' : products['entries'][x]['href'], 'rank' : products['entries'][x]['salesrank'] } </code></pre> <p>I don't believe the issue lies with feedparser (which converts xml to dict) or with the results I'm getting from amazon, as I have no issues building a dict when either using 'allProducts[x]' or 'allProducts[index]', but not both.</p> <p>What am I missing?</p>
<p>In order to assign to <code>allProducts[index][x]</code>, first a lookup is done on <code>allProducts[index]</code> to get a dict, then the value you're assigning is stored at index <code>x</code> in that dict.</p> <p>However, the first time through the loop, <code>allProducts[index]</code> doesn't exist yet. Try this:</p> <pre><code>for x in range(10): if index not in allProducts: allProducts[index] = { } # or dict() if you prefer allProducts[index][x] = ... </code></pre> <p>Since you know all the indices that are supposed to be in <code>allProducts</code> in advance, you can initialize it beforehand like this instead (note that an assignment is not allowed inside a <code>lambda</code>, so a dict comprehension is used):</p> <pre><code>allProducts = {i: { } for i in searchIndices} for index in searchIndices: # ... rest of loop does not need to be modified </code></pre>
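An alternative that removes the existence check entirely is `collections.defaultdict`, which creates the inner dict automatically on first access:

```python
from collections import defaultdict

allProducts = defaultdict(dict)  # missing keys auto-create an empty inner dict

for index in ['Books', 'DVD']:
    for x in range(3):
        allProducts[index][x] = {'rank': x}  # no KeyError on the first assignment

print(allProducts['Books'][2]['rank'])  # 2
```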
python|dictionary|multidimensional-array
6
1,901,582
62,563,525
Maximum weight edge sum from root node in a binary weighted tree
<p>You are given a binary weighted tree; find the maximum weight edge sum from the root node.</p> <p><img src="https://i.stack.imgur.com/UperF.jpg" alt="Click here to see the tree" /></p> <p>The tree is shown above. The maximum weight starting from the root node is 9. Explanation: Node 1-&gt;Node 3 Weight = 6 and Node 3-&gt;Node-6 Weight = 3. Total Weight = 6+3 = 9</p>
<p>You can use any traversal method. A* would find the path with the least number of node visits, but it requires extra memory. I would go for a simple depth-first traversal (using recursion) and just keep track of the maximum found.</p> <p>Let's assume your tree is defined like this:</p> <pre><code>from collections import namedtuple Edge = namedtuple(&quot;Edge&quot;, &quot;weight,node&quot;) class Node: def __init__(self, value, left=None, right=None): self.value = value self.left = left self.right = right tree = Node(1, Edge(4, Node(2, Edge(1, Node(4)), Edge(1, Node(5)))), Edge(6, Node(3, Edge(3, Node(6)), Edge(0, Node(7))))) </code></pre> <p>...then you can add this method to your <code>Node</code> class:</p> <pre><code> def maxweight(self): return max(self.left.weight + self.left.node.maxweight() if self.left else 0, self.right.weight + self.right.node.maxweight() if self.right else 0) </code></pre> <p>and call it as:</p> <pre><code>print(tree.maxweight()) </code></pre>
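The same depth-first idea also works as a free function, runnable as one snippet (the tree literal copies the shape from the question's figure):

```python
from collections import namedtuple

Edge = namedtuple("Edge", "weight,node")

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def maxweight(node):
    # best edge-weight sum from this node down to any leaf
    return max(node.left.weight + maxweight(node.left.node) if node.left else 0,
               node.right.weight + maxweight(node.right.node) if node.right else 0)

tree = Node(1,
            Edge(4, Node(2, Edge(1, Node(4)), Edge(1, Node(5)))),
            Edge(6, Node(3, Edge(3, Node(6)), Edge(0, Node(7)))))

print(maxweight(tree))  # 9  (1 -> 3 costs 6, then 3 -> 6 costs 3)
```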
python|python-3.x|algorithm|data-structures|graph-theory
0
1,901,583
60,419,015
Can a pywebview window be kept always on top?
<p>When using the Python package <code>pywebview</code>, is it possible to create a window and keep it on top, like an always-showing control panel in the corner?</p> <p>The application will mostly be run on Windows, so if there isn't a platform independent solution, I am also curious if a Windows-specific solution exists.</p>
<p>This feature is currently not implemented in the released version, but one possible implementation is currently being considered for inclusion in the project.</p> <p>See the <a href="https://github.com/r0x0r/pywebview/issues/476" rel="nofollow noreferrer">issue</a> where the feature was requested, as well as the <a href="https://github.com/r0x0r/pywebview/pull/478" rel="nofollow noreferrer">pull request</a> for the potential implementation.</p>
python|pywebview
1
1,901,584
60,339,724
discord.py listen for and send private message
<p>I am trying to make a discord bot in Python that listens for private messages and then replies.</p> <p>The way I want it to be designed is that the user sends the command "!token" to the bot. The bot then iterates over an array, and if the discordID of the message sender is in the list, the bot returns a token related to that discordID. If the discordID is not there, then it replies with "No token". </p> <p>Quite new to Python. I have looked through the documentation and can't seem to find what I am looking for. </p> <p>Thanks in advance! </p>
<p>As you said that you're new to discord.py and Python in general, I've written out a bit more than I generally would without an example of what you've tried yourself.</p> <p>The way this is set up now, it'll only listen to "!token". You can take this code and add your own commands to it by following the same schematic. I've also left out the part where you store the tokens because you weren't clear on how you wanted that. I'd read up on <a href="https://docs.python.org/3/tutorial/datastructures.html" rel="nofollow noreferrer">python dictionaries</a> to store those, though.</p> <p>As Patrick Haughs mentioned, you can use the check <a href="https://discordpy.readthedocs.io/en/latest/ext/commands/api.html#discord.ext.commands.dm_only" rel="nofollow noreferrer">commands.dm_only</a> to make sure this will only run in private messages and not in any servers your bot might connect to.</p> <pre><code>from discord.ext import commands idList = [IDS] bot = commands.Bot(command_prefix = "!") runtoken = TOKEN @bot.command() async def token(ctx): if ctx.author.id in idList: await ctx.author.send("Token") else: await ctx.author.send("No token") bot.run(runtoken) </code></pre>
python|discord.py
0
1,901,585
71,216,958
Colouring edge of network based on their direction - NetworkX
<p>In R ggplot2, there is a simple way to color the edge of a network based on its direction between nodes. So if a line (edge) is directed from the nodes A to B, we can use the code:</p> <p><code>geom_edge(aes(colour=stat(index)))</code></p> <p>Then I tried to redo this in Python using NetworkX - to no avail. Here is an example network:</p> <pre><code>F.add_nodes_from([0,1]) F.add_nodes_from([0,2]) F.add_nodes_from([2,1]) post = nx.circular_layout(F) nx.draw_networkx_nodes(F, post, node_color = 'r', node_size = 100, alpha = 1) nx.draw_networkx_edges(F, post, edgelist= [(1,0)], width = 1, alpha = 1) nx.draw_networkx_edges(F, post, edgelist= [(0,2)], width = 1, alpha = 1) nx.draw_networkx_edges(F, post, edgelist= [(2,1)], width = 1, alpha = 1) plt.axis('off') plt.show() </code></pre> <p>And I have so far found no way to change the colour according to the direction of the edge. Ideally, I would like to achieve something like this:</p> <p><a href="https://i.stack.imgur.com/oASwu.png" rel="nofollow noreferrer">Please see photo here</a></p>
<p>This isn't implemented in networkx, but it is possible in matplotlib, albeit with a substantial amount of extra computation, depending on what exactly you would like to do:</p> <p>If you want to plot your edges as simple lines, you could precompute the node layout, and then follow <a href="https://matplotlib.org/stable/gallery/lines_bars_and_markers/multicolored_line.html" rel="nofollow noreferrer">this guide</a> to plot multicolored lines between the node positions using matplotlib's <code>LineCollection</code>.</p> <p>If you wanted to draw arrows, you would have to a) compute the path of the corresponding <code>FancyArrowPatch</code>, and then b) use that path to clip an appropriately oriented color-gradient mesh. For a simple triangle, this is demonstrated <a href="https://stackoverflow.com/questions/42063542/mathplotlib-draw-triangle-with-gradient-fill">here</a>.</p>
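A minimal sketch of the <code>LineCollection</code> approach for one edge (the node positions <code>p0</code> and <code>p1</code> are placeholders, not from the question): the edge is split into many short segments, and each segment is colored by its position along the edge, which encodes the direction.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

# Example node positions; in practice take these from the layout, e.g. post[1], post[0]
p0, p1 = np.array([0.0, 0.0]), np.array([1.0, 1.0])

t = np.linspace(0, 1, 100)
points = p0 + t[:, None] * (p1 - p0)                    # 100 points along the edge
segments = np.stack([points[:-1], points[1:]], axis=1)  # shape (99, 2, 2): consecutive pairs

lc = LineCollection(segments, cmap="viridis")
lc.set_array(t[:-1])  # color each segment by its parameter value along the edge

fig, ax = plt.subplots()
ax.add_collection(lc)
ax.autoscale()
```

Repeating this per edge (with the gradient always running from source to target node) reproduces the direction-encoded coloring of `stat(index)` in ggraph.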
python|matplotlib|plot|networkx
0
1,901,586
71,160,423
how to sample points in 3D in python with origin and normal vector
<p>I have two points p1(x1, y1, z1) and p2(x2, y2, z2) in 3D, and I want to sample points on a circle with radius r that is centered at p1 and lies in the plane perpendicular to the vector p2-p1 (so p2-p1 would be the normal vector of that plane). I have the code for sampling in the XOY plane using the polar system, but I am struggling with how to generalize it to a normal other than (0, 0, 1):</p> <pre><code>rho = np.linspace(0, 2*np.pi, 50) r = 1 x = np.cos(rho) * r y = np.sin(rho) * r z = np.zeros(rho.shape) </code></pre> <p><a href="https://i.stack.imgur.com/Ssx33.png" rel="nofollow noreferrer">Sampled points</a></p>
<p>At first you need to define two base vectors in the circle's plane.</p> <p>The first one is an arbitrary vector <code>v</code> orthogonal to the normal <code>n = p2-p1</code>.</p> <p>Choose the component of the normal with the largest magnitude and the component with the second-largest magnitude.</p> <p>Exchange their values, negate the largest, and make the third component zero (note that the dot product of the result with the normal is zero, so they are orthogonal).</p> <p>For example, if <code>n.y</code> is the largest and <code>n.z</code> is the second, make</p> <pre><code>v = (0, n.z, -n.y) </code></pre> <p>Then calculate the second base vector using the vector (cross) product</p> <pre><code>u = n x v </code></pre> <p>Normalize vectors <code>v</code> and <code>u</code>. The circle points, using center point <code>p1</code>, in vector form:</p> <pre><code> f(rho) = p1 + r * v * cos(rho) + r * u * sin(rho) </code></pre> <p>or in components:</p> <pre><code> f.x = p1.x + r * v.x * cos(rho) + r * u.x * sin(rho) and so on </code></pre>
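The recipe above can be sketched in numpy (the function name and signature here are illustrative, not from the question; the sign convention for <code>v</code> differs from the example above but is equally orthogonal to the normal):

```python
import numpy as np

def circle_points(p1, p2, r, n=50):
    """Sample n points on the circle of radius r centered at p1,
    lying in the plane whose normal is p2 - p1."""
    p1 = np.asarray(p1, dtype=float)
    p2 = np.asarray(p2, dtype=float)
    normal = p2 - p1
    # First base vector v: exchange the two largest-magnitude components
    # of the normal (negating one) and zero the smallest, so dot(v, normal) == 0.
    order = np.argsort(np.abs(normal))  # component indices, smallest magnitude first
    v = np.zeros(3)
    v[order[2]] = -normal[order[1]]
    v[order[1]] = normal[order[2]]
    # Second base vector u via the cross product, then normalize both.
    u = np.cross(normal, v)
    v /= np.linalg.norm(v)
    u /= np.linalg.norm(u)
    rho = np.linspace(0, 2 * np.pi, n)
    # f(rho) = p1 + r*v*cos(rho) + r*u*sin(rho), vectorized over rho
    return p1 + r * np.outer(np.cos(rho), v) + r * np.outer(np.sin(rho), u)

# Normal along z reproduces the original XOY-plane sampling:
pts = circle_points([0, 0, 0], [0, 0, 1], r=1)
```

Every returned point is at distance `r` from `p1`, and the offset from `p1` is orthogonal to `p2 - p1`, for any (nonzero) normal direction.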
python|numpy|geometry|linear-algebra
2
1,901,587
70,330,634
Bad credentials from Astra DB connect
<p>Currently I am learning about Astra DB from this youtube link <a href="https://www.youtube.com/watch?v=NyDT3KkscSk&amp;t=2439s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=NyDT3KkscSk&amp;t=2439s</a></p> <p>I manage to download the connection.zip file from Astra DB and generated admin token keys. But when I try to do connection such as:</p> <pre><code> from app.crud import create_entry </code></pre> <p>I will get this error:</p> <pre><code>raise NoHostAvailable(&quot;Unable to connect to any servers&quot;, errors) cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers', {'98db9cb2-907a-4f9d-a935-69b69fb2157f-asia-south1.db.astra.datastax.com:29042:611a8370-f129-4099-84e2-c3b2f426ebdc': AuthenticationFailed('Failed to authenticate to 98db9cb2-907a-4f9d-a935-69b69fb2157f-asia-south1.db.astra.datastax.com:29042:611a8370-f129-4099-84e2-c3b2f426ebdc: Error from server: code=0100 [Bad credentials] message=&quot;We recently improved your database security. To find out more and reconnect, see https://docs.datastax.com/en/astra/docs/manage-application-tokens.html&quot;'), '98db9cb2-907a-4f9d-a935-69b69fb2157f-asia-south1.db.astra.datastax.com:29042:040ab116-8c77-4eb4-a357-c9bdcbb637d4': AuthenticationFailed('Failed to authenticate to 98db9cb2-907a-4f9d-a935-69b69fb2157f-asia-south1.db.astra.datastax.com:29042:040ab116-8c77-4eb4-a357-c9bdcbb637d4: Error from server: code=0100 [Bad credentials] message=&quot;We recently improved your database security. To find out more and reconnect, see https://docs.datastax.com/en/astra/docs/manage-application-tokens.html&quot;'), '98db9cb2-907a-4f9d-a935-69b69fb2157f-asia-south1.db.astra.datastax.com:29042:536e6e99-ef4e-47d0-9308-b0c6cdf4aa37': AuthenticationFailed('Failed to authenticate to 98db9cb2-907a-4f9d-a935-69b69fb2157f-asia-south1.db.astra.datastax.com:29042:536e6e99-ef4e-47d0-9308-b0c6cdf4aa37: Error from server: code=0100 [Bad credentials] message=&quot;We recently improved your database security. 
To find out more and reconnect, see https://docs.datastax.com/en/astra/docs/manage-application-tokens.html&quot;')}) </code></pre> <p>Here is my db.py:</p> <pre><code>import os import pathlib from dotenv import load_dotenv from cassandra.cluster import Cluster from cassandra.auth import PlainTextAuthProvider from cassandra.cqlengine.connection import register_connection, set_default_connection BASE_DIR = pathlib.Path(__file__).parent CLUSTER_BUNDLE = BASE_DIR / 'ignored'/ 'connect.zip' load_dotenv() astra_db_client_id = os.environ.get('ASTRA_DB_CLIENT_ID') astra_db_client_secret = os.environ.get('ASTRA_DB_CLIENT_SECRET') def get_cluster(): cloud_config= { 'secure_connect_bundle': CLUSTER_BUNDLE } auth_provider = PlainTextAuthProvider(astra_db_client_id, astra_db_client_secret) cluster = Cluster(cloud=cloud_config, auth_provider=auth_provider, control_connection_timeout=30, connect_timeout=30) return cluster def get_session(): cluster = get_cluster() session = cluster.connect() register_connection(str(session), session=session) set_default_connection(str(session)) return session # session = get_session() # row = session.execute(&quot;select release_version from system.local&quot;).one() # if row: # print(row[0]) # else: # print(&quot;An error occurred.&quot;) </code></pre> <p>I tried to recreate the token key many times and re download the drivers as well but I still have no luck in passing the bad credential errors here:</p> <p>my crud.py</p> <pre><code>from .db import get_session from .models import Product from cassandra.cqlengine.management import sync_table session = get_session() sync_table(Product) def create_entry(data:dict): return Product.create(**data) </code></pre> <p>models.py</p> <pre><code>from cassandra.cqlengine import columns from cassandra.cqlengine.models import Model class Product(Model): # -&gt; table __keyspace__ = &quot;testing&quot; # asin = columns.Text(primary_key=True, required=True) title = columns.Text() </code></pre>
<p>You might want to take a look at this</p> <p><a href="https://docs.datastax.com/en/astra/docs/docs/using-the-datastax-python-driver-to-connect-to-your-database.html" rel="nofollow noreferrer">https://docs.datastax.com/en/astra/docs/docs/using-the-datastax-python-driver-to-connect-to-your-database.html</a></p> <p>Specifically the section here:</p> <pre class="lang-py prettyprint-override"><code>from cassandra.cluster import Cluster from cassandra.auth import PlainTextAuthProvider cloud_config= { 'secure_connect_bundle': '/path/to/secure-connect-database_name.zip' } auth_provider = PlainTextAuthProvider('username', 'password') cluster = Cluster(cloud=cloud_config, auth_provider=auth_provider) session = cluster.connect() </code></pre> <p>When creating the connection you’ll want to pass the <a href="https://docs.datastax.com/en/astra/docs/docs/obtaining-database-credentials.html" rel="nofollow noreferrer">secure connect bundle zip</a>. You’ll then provide the clientId and clientSecret as the username and password from the connections file you downloaded.</p>
python|cassandra|datastax-astra
1
1,901,588
11,279,674
Restricting length of readline on socket in Python
<p>I'm working on a server, and all of the data is line based. I want to be able to raise an exception when a line exceeds a given length without reading any more data than I have to. For example, client <em>X</em> sends a line that's 16KB long even though the line-length limit is 1024 bytes. After reading more than 1024 bytes, I want to stop reading additional data, close the socket and raise an exception. I've looked through the docs and some of the source code, and I don't see a way to do this without rewriting the _readline method. Is there an easier way that I'm overlooking?</p> <p>EDIT: Comments made me realize I need to add more information. I know I could write the logic to do this without much work, but I was hoping to use builtins to take advantage of efficient buffering with memoryview rather than implementing it myself again or going with the naive approach of reading chunks, joing and splitting as needed without a memoryview.</p>
<p>I don't really like accepting answers that don't really answer the question, so here's the approach I actually ended up taking, and I'll just mark it community wiki or unanswered later if no one has a better solution:</p> <pre><code>#!/usr/bin/env python3 class TheThing(object): def __init__(self, connection, maxlinelen=8192): self.connection = connection self.lines = self._iterlines() self.maxlinelen = maxlinelen def _iterlines(self): """ Yield lines from class member socket object. """ buffered = b'' while True: received = self.connection.recv(4096) if not received: if buffered: raise Exception("Unexpected EOF.") yield received continue elif buffered: received = buffered + received if b'\n' in received: for line in received.splitlines(True): if line.endswith(b'\n'): if len(line) &gt; self.maxlinelen: raise LineTooLong("Line size: %i" % len(line)) yield line else: buffered = line else: buffered += received if len(buffered) &gt; self.maxlinelen: raise LineTooLong("Too much data in internal buffer.") def _readline(self): """ Return next available line from member socket object. """ return next(self.lines) </code></pre> <p>I haven't bothered comparing the code to be certain, but I'm doing fewer concatenations and splits, so I think mine may be more efficient.</p>
python|sockets|readline
2
1,901,589
56,447,616
when will the payment complete using the paypal python sdk?
<p>i followed some of the code listed in <a href="https://github.com/paypal/PayPal-Python-SDK" rel="nofollow noreferrer">https://github.com/paypal/PayPal-Python-SDK</a> to build the payment. Question is at which stage the payment is completed? My understanding is that it is completed after "Authorize Payment" and before redirect user to "return_url", cause I think the "return_url" should do something like "tell user has completed the payment" not doing the payment. I am not sure if my thought right. Below is the payment flow. </p> <p>Create Payment in python app</p> <pre><code>... skipped "return_url": "xxx.com/payment/execute", ... </code></pre> <p>Authorize Payment in python app</p> <pre><code>... Redirect user to approval_url (paypal page) ... </code></pre> <p>After user fill all the info in the paypal, it redirect user to return_url, which is below.</p> <p>Execute Payment in xxx.com/payment/execute</p> <pre><code>... curl 'https://api.sandbox.paypal.com/v1/payments/payment/'+paymentId+'/execute'; ... 
</code></pre> <p>after that I get JSON</p> <pre><code> { "id": "PAYID-LT3I25Y9M527750ER784091X", "intent": "sale", "state": "approved", "cart": "1L498981HN342484R", "payer": { "payment_method": "paypal", "status": "UNVERIFIED", "payer_info": { "email": "ji@gmail.com", "first_name": "ukf", "last_name": "tfutf", "payer_id": "ZD7ELNRCHVLPY", "shipping_address": { "recipient_name": "ukf tfutf", "line1": "ktf", "city": "ktfu", "state": "", "postal_code": "", "country_code": "HK" }, "country_code": "HK" } }, "transactions": [ { "amount": { "total": "5.00", "currency": "USD", "details": {} }, "payee": { "merchant_id": "4J8HJBF56QT24", "email": "facilitator@gmail.com" }, "description": "This is the payment transaction description.", "item_list": { "items": [ { "name": "item", "sku": "item", "price": "5.00", "currency": "USD", "quantity": 1 } ], "shipping_address": { "recipient_name": "ukf tfutf", "line1": "ktf", "city": "ktfu", "state": "", "postal_code": "", "country_code": "HK" }, "shipping_options": [ null ] }, "related_resources": [ { "sale": { "id": "96712163V8788712D", "state": "completed", "amount": { "total": "5.00", "currency": "USD", "details": { "subtotal": "5.00" } }, "payment_mode": "INSTANT_TRANSFER", "protection_eligibility": "ELIGIBLE", "protection_eligibility_type": "ITEM_NOT_RECEIVED_ELIGIBLE, UNAUTHORIZED_PAYMENT_ELIGIBLE", "transaction_fee": { "value": "0.47", "currency": "USD" }, "receipt_id": "2545046194101961", "parent_payment": "PAYID-LT3I25Y9M527750ER784091X", "create_time": "2019-06-04T15:29:48 Z", "update_time": "2019-06-04T15:29:48 Z", "links": [ { "href": "https://api.sandbox.paypal.com/v1/payments/sale/96712163V8788712D", "rel": "self", "method": "GET" }, { "href": "https://api.sandbox.paypal.com/v1/payments/sale/96712163V8788712D/refund", "rel": "refund", "method": "POST" }, { "href": "https://api.sandbox.paypal.com/v1/payments/payment/PAYID-LT3I25Y9M527750ER784091X", "rel": "parent_payment", "method": "GET" } ], "soft_descriptor": "PAYPAL 
*TESTFACILIT" } } ] } ], "create_time": "2019-06-04T15:29:49 Z", "links": [ { "href": "https://api.sandbox.paypal.com/v1/payments/payment/PAYID-LT3I25Y9M527750ER784091X", "rel": "self", "method": "GET" } ] } </code></pre>
<p>The payment is completed on PayPal's side when the user approves the payment on the PayPal site. After that it redirects the user to the redirect_url. This web page should indeed inform the user that their payment is completed. It is likely that you can inspect this transaction/payment on the paypal sandbox so long as it did not error. </p> <p>Also note that the SDK you are using is actually going deprecated soon. On the github you linked there it reads "Please note that if you are integrating with PayPal Checkout, this SDK and corresponding API v1/payments are in the process of being deprecated." I instead would recommend that you try to integrate with this SDK instead: <a href="https://github.com/paypal/Checkout-Python-SDK" rel="nofollow noreferrer">https://github.com/paypal/Checkout-Python-SDK</a> which is PayPal's newest release. In addition to using the newer SDK I would recommend trying to use PayPal's Smart Checkout Buttons. They should make your life a little bit easier.</p> <p>I have been using these SDK's in various projects for about the past two months and have not personally found it a straightforward process, but hopefully this helps!</p>
python|paypal|sdk
0
1,901,590
56,472,295
Can you export a created python conda environment for others to activate on their machines?
<p>I do the following:</p> <pre><code>conda create -n myenv -c conda-forge jupyter xarray cmocean numpy matplotlib netCDF4 cartopy pandas conda activate myenv jupyter notebook </code></pre> <p>Is there a way that I can export this environment to another computer to be activated by another user? </p> <p>I want other users to run my jupyter notebook script without having to install python packages.</p>
<p>See <a href="https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#sharing-an-environment" rel="noreferrer">https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#sharing-an-environment</a>.</p> <ol> <li>Activate the environment to export: <code>conda activate myenv</code></li> <li>Export your active environment to a new file: <code>conda env export &gt; environment.yml</code></li> <li>Email or copy the exported <code>environment.yml</code> file to the other person.</li> </ol> <p>To create an environment from the <code>.yml</code> file: <code>conda env create -f environment.yml</code><br> The first line of the yml file sets the new environment's name.</p>
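For reference, an exported <code>environment.yml</code> looks roughly like this (the name, channels, and versions below are illustrative; a real export pins exact versions and builds):

```yaml
name: myenv          # the first line sets the environment's name on the other machine
channels:
  - conda-forge
dependencies:
  - python=3.8
  - numpy
  - matplotlib
  - jupyter
```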
python|linux|anaconda|environment-variables|conda
7
1,901,591
56,576,228
Python code regarding palindrome sequence
<p>There is a sequence of letters which is a palindrome. Now, exactly one pair of randomly picked letters is swapped with each other. Write a Python program to determine the original palindrome sequence of letters, if possible.</p> <pre><code>def get_original_palindrome(edited_sequence): </code></pre>
<pre><code>def get_misplaced_indexes(w): misplaced_indexes = [] for i in range(len(w)//2): if w[i] != w[len(w)-1-i]: misplaced_indexes.append(i) if len(misplaced_indexes) == 1: misplaced_indexes.append(len(w)//2) return misplaced_indexes def get_one_of_original_palindrome(misplaced_indexes, w): ans = "" index1, index2 = misplaced_indexes for i in range(len(w)): if i == index1: ans += w[index2] elif i == index2: ans += w[index1] else: ans += w[i] return ans def get_original_palindrome(w): if w == w[::-1]: return(w) else: misplaced_indexes = get_misplaced_indexes(w) return get_one_of_original_palindrome(misplaced_indexes, w) print(get_original_palindrome("radar")) print(get_original_palindrome("ardar")) print(get_original_palindrome("rdaar")) print(get_original_palindrome("helelh")) </code></pre>
python-3.x
-1
1,901,592
69,694,115
Can I trust the results from `nloptr` in R?
<p>Trying to solve a nonlinear program with inequality constraints using sequential quadratic programming. I've solved it with Python but I get inconsistent results in R.</p> <p>The objective function takes a vector <code>y</code> and a matrix <code>X</code> and looks for weights <code>W</code> that minimize the L2 norm. There are two constraints:</p> <ol> <li>each weight in the vector <code>W</code> is between 0 and 1</li> <li><code>W</code> sums to 1</li> </ol> <p>In Python I use <code>scipy.optimize.fmin_slsqp</code>, which &quot;<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin_slsqp.html" rel="nofollow noreferrer">implements the SLSQP Optimization subroutine originally implemented by Dieter Kraft</a>&quot;:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from scipy.optimize import fmin_slsqp np.random.seed(42) def loss(W): return np.sqrt(np.mean((y - X @ W)**2)) N = X.shape[1] w_start = [1/N]*N w = fmin_slsqp(loss, np.array(w_start), f_eqcons=lambda x: np.sum(x) - 1, bounds=[(0.0, 1.0)]*len(w_start), disp=True) w.round(3) </code></pre> <p>which gives me</p> <pre class="lang-py prettyprint-override"><code>Optimization terminated successfully (Exit mode 0) Current function value: 2.3149922441277146 Iterations: 13 Function evaluations: 514 Gradient evaluations: 13 array([0. , 0. , 0. , 0.085, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.113, 0.105, 0.457, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.24 , 0. , 0. , 0. , 0. , 0. 
]) </code></pre> <p>and I obtain the same results regardless of the starting values <code>w_start</code>.</p> <p>In R I try to solve the problem with <code>nloptr::slsqp</code>, which to my understanding <a href="https://www.rdocumentation.org/packages/nloptr/versions/1.2.2.2/topics/slsqp" rel="nofollow noreferrer">implements the same algorithm</a>:</p> <pre><code>set.seed(42) loss = function(W){ return(sqrt(mean(y - X %*% W)^2)) } N = nrow(X) w_start = rep(0, N) m = nloptr::slsqp(x0 = w_start, fn = loss, lower = rep(0, N), upper = rep(1, N), heq = function(x) sum(x) - 1) </code></pre> <p>However, I get different results:</p> <pre><code>&gt; m$value [1] 0.000407041 &gt; round(m$par,3) [1] 0.027 0.000 0.000 0.000 0.000 0.000 0.062 0.000 0.000 0.007 0.000 0.000 0.000 0.000 0.002 0.010 0.000 [18] 0.035 0.053 0.000 0.171 0.152 0.000 0.049 0.000 0.000 0.000 0.000 0.000 0.080 0.000 0.000 0.337 0.000 [35] 0.000 0.000 0.017 0.000 </code></pre> <p>and the results change with different starting values.</p> <p>Am I implementing this comparison between the Python and R solvers correctly? 
If so, is this an issue with <code>nloptr</code>?</p> <p>Data to recreate the problem:</p> <pre><code>&gt; dput(y) c(123, 121, 123.5, 124.400001525879, 126.699996948242, 127.099998474121, 128, 126.400001525879, 126.099998474121, 121.900001525879, 120.199996948242, 118.599998474121, 115.400001525879, 110.800003051758, 104.800003051758, 102.800003051758, 99.6999969482422, 97.5, 90.0999984741211, 38.7999992370605, 39.7000007629395, 39.9000015258789, 39.9000015258789, 41.9000015258789, 45, 48.2999992370605, 49, 58.7000007629395, 60.0999984741211, 62.0999984741211, 66.4000015258789, 72.8000030517578, 84.9000015258789, 94.9000015258789, 98, 104.400001525879, 103.900001525879, 117.400001525879 ) </code></pre> <pre><code>&gt; dput(X) structure(c(89.8000030517578, 95.4000015258789, 101.099998474121, 102.900001525879, 108.199996948242, 111.699996948242, 116.199996948242, 117.099998474121, 123, 121.400001525879, 123.199996948242, 119.599998474121, 119.099998474121, 116.300003051758, 113, 114.5, 116.300003051758, 114, 112.099998474121, 39.5999984741211, 42.7000007629395, 42.2999992370605, 42.0999984741211, 43.0999984741211, 46.5999984741211, 50.4000015258789, 50.0999984741211, 55.0999984741211, 56.7999992370605, 60.5999984741211, 68.8000030517578, 73.0999984741211, 84.4000015258789, 90.8000030517578, 99, 103, 110, 114.400001525879, 100.300003051758, 104.099998474121, 103.900001525879, 108, 109.699996948242, 114.800003051758, 119.099998474121, 122.599998474121, 127.300003051758, 126.5, 131.800003051758, 128.699996948242, 127.400001525879, 128, 123.099998474121, 125.800003051758, 126, 122.300003051758, 121.5, 36.7000007629395, 38.7999992370605, 44.0999984741211, 45.0999984741211, 45.5, 48.5999984741211, 50.9000015258789, 52.5999984741211, 56.5, 58.4000015258789, 61.5, 64.6999969482422, 72.0999984741211, 82, 93.5999984741211, 98.5, 103.599998474121, 113, 119.900001525879, 124.800003051758, 125.5, 134.300003051758, 137.899993896484, 132.800003051758, 131, 134.199996948242, 132, 
129.199996948242, 131.5, 131, 133.800003051758, 130.5, 125.300003051758, 119.699996948242, 112.400001525879, 109.900001525879, 102.400001525879, 94.5999984741211, 29.3999996185303, 31.1000003814697, 31.2000007629395, 32.7000007629395, 38.0999984741211, 41.7000007629395, 44.7999992370605, 44.7000007629395, 57.4000015258789, 52.7999992370605, 54.5999984741211, 58.0999984741211, 61.4000015258789, 73.3000030517578, 83.4000015258789, 93.0999984741211, 95.0999984741211, 108.599998474121, 116, 120, 117.599998474121, 110.800003051758, 109.300003051758, 112.400001525879, 110.199996948242, 113.400001525879, 117.300003051758, 117.5, 117.400001525879, 118, 116.400001525879, 114.699996948242, 114.099998474121, 112.5, 111, 108.5, 109, 104.800003051758, 42.2000007629395, 45.5, 51.2999992370605, 50.5999984741211, 52.5, 54.5, 57.5999984741211, 58.4000015258789, 61.7000007629395, 64.4000015258789, 67, 80.0999984741211, 85.5999984741211, 95.5999984741211, 113.5, 118.599998474121, 118.5, 122.699996948242, 129.699996948242, 155, 161.100006103516, 156.300003051758, 154.699996948242, 151.300003051758, 147.600006103516, 153, 153.300003051758, 155.5, 150.199996948242, 150.5, 152.600006103516, 154.100006103516, 149.600006103516, 144, 144.5, 142.399993896484, 141, 137.100006103516, 39, 41.2999992370605, 44.7000007629395, 44, 44.2000007629395, 45.9000015258789, 50.0999984741211, 51.7000007629395, 58.7000007629395, 60, 62.7000007629395, 66, 74.0999984741211, 82, 91.0999984741211, 98.6999969482422, 105.199996948242, 111.400001525879, 119.300003051758, 109.900001525879, 115.699996948242, 117, 119.800003051758, 123.699996948242, 122.900001525879, 125.900001525879, 127.900001525879, 130.600006103516, 131, 134, 131.699996948242, 131.199996948242, 128.600006103516, 126.300003051758, 128.800003051758, 129, 129.300003051758, 124.099998474121, 34.2999992370605, 35.7999992370605, 40.9000015258789, 42.4000015258789, 42.4000015258789, 44.5, 47.9000015258789, 49.5, 54.7000007629395, 56.5999984741211, 
59.2999992370605, 62.5999984741211, 67.8000030517578, 78.9000015258789, 86.8000030517578, 90.6999969482422, 100.099998474121, 103.900001525879, 109.199996948242, 102.400001525879, 108.5, 126.099998474121, 121.800003051758, 125.599998474121, 123.300003051758, 125.099998474121, 125, 122.800003051758, 117.5, 115.199996948242, 114.099998474121, 111.5, 111.300003051758, 103.599998474121, 100.699996948242, 96.6999969482422, 95, 84.5, 33.7999992370605, 33.5999984741211, 33.7000007629395, 36.2999992370605, 38, 40.2999992370605, 42.5, 45.5999984741211, 51.5, 55.4000015258789, 56.4000015258789, 59.2000007629395, 67.5999984741211, 76.5, 88.5, 97.5999984741211, 99.9000015258789, 107.099998474121, 121.900001525879, 124.800003051758, 125.599998474121, 126.599998474121, 124.400001525879, 131.899993896484, 131.800003051758, 134.399993896484, 134, 136.699996948242, 135.300003051758, 135.199996948242, 133, 130.699996948242, 127.900001525879, 124, 121.599998474121, 118.199996948242, 109.5, 107.599998474121, 41.4000015258789, 41.4000015258789, 41.9000015258789, 41, 41.9000015258789, 45.2000007629395, 48.4000015258789, 49.4000015258789, 54.5999984741211, 56.7999992370605, 60, 63.0999984741211, 69.5999984741211, 80.8000030517578, 89.5999984741211, 96.6999969482422, 108.400001525879, 116.199996948242, 124.099998474121, 134.600006103516, 139.300003051758, 149.199996948242, 156, 159.600006103516, 162.399993896484, 166.600006103516, 173, 150.899993896484, 148.899993896484, 146.899993896484, 148.5, 147.699996948242, 143, 137.800003051758, 135.300003051758, 137.600006103516, 134, 134, 30.6000003814697, 32.2000007629395, 32.5, 32.9000015258789, 34.5, 36.7000007629395, 38.7000007629395, 40.5999984741211, 50, 52.5, 53.7000007629395, 58.2999992370605, 65.0999984741211, 75.6999969482422, 85.1999969482422, 88.8000030517578, 93.5999984741211, 100.099998474121, 109.300003051758, 108.5, 108.400001525879, 109.400001525879, 110.599998474121, 116.099998474121, 120.5, 124.400001525879, 125.5, 
127.099998474121, 124.199996948242, 124.599998474121, 132.899993896484, 116.199996948242, 115.599998474121, 111.199996948242, 109.400001525879, 104.099998474121, 101.099998474121, 100.199996948242, 37.7000007629395, 38.5, 41.9000015258789, 41.9000015258789, 43.2000007629395, 45.4000015258789, 47.7999992370605, 49.4000015258789, 54.5999984741211, 56.4000015258789, 58.7999992370605, 61.4000015258789, 72.8000030517578, 84, 93.3000030517578, 99.5, 104.800003051758, 117.099998474121, 124.199996948242, 114, 102.800003051758, 111, 115.199996948242, 118.599998474121, 123.400001525879, 127.699996948242, 127.900001525879, 127.099998474121, 126.400001525879, 127.099998474121, 132, 130.899993896484, 127.599998474121, 121.699996948242, 115.699996948242, 109.400001525879, 105.199996948242, 103.199996948242, 34.2000007629395, 38.9000015258789, 38.7999992370605, 39.2999992370605, 40.2000007629395, 42.7000007629395, 46.5999984741211, 48.0999984741211, 52.5999984741211, 54.7999992370605, 58.2999992370605, 59.7999992370605, 65.0999984741211, 77, 91, 97.5, 103, 114.5, 123.900001525879, 155.800003051758, 163.5, 179.399993896484, 201.899993896484, 212.399993896484, 223, 230.899993896484, 229.399993896484, 224.699996948242, 214.899993896484, 215.300003051758, 209.699996948242, 210.600006103516, 201.100006103516, 183.199996948242, 182.399993896484, 179.800003051758, 171.199996948242, 173.199996948242, 28.2999992370605, 30.1000003814697, 30.6000003814697, 30.6000003814697, 31.5, 33.2999992370605, 36, 36.9000015258789, 41.4000015258789, 43.4000015258789, 46.2999992370605, 49.4000015258789, 56.2999992370605, 66.4000015258789, 75.4000015258789, 79.3000030517578, 85.4000015258789, 90.5, 94.4000015258789, 115.900001525879, 119.800003051758, 125.300003051758, 126.699996948242, 129.899993896484, 133.600006103516, 139.600006103516, 140, 142.699996948242, 140.100006103516, 143.800003051758, 144, 143.899993896484, 133.699996948242, 128.899993896484, 125, 121.199996948242, 116.5, 110.900001525879, 
34.2999992370605, 39.2999992370605, 40, 39.9000015258789, 41.5999984741211, 44.2999992370605, 48.0999984741211, 48.9000015258789, 54.2000007629395, 57.0999984741211, 60, 62.5999984741211, 70.3000030517578, 80.6999969482422, 90.6999969482422, 103, 105.099998474121, 117.800003051758, 120.400001525879, 128.5, 133.199996948242, 136.5, 138, 142.100006103516, 140.699996948242, 144.899993896484, 145.600006103516, 143.899993896484, 138.5, 141.199996948242, 138.899993896484, 139.5, 135.399993896484, 135.5, 127.900001525879, 119, 125, 125, 38, 38.7999992370605, 41.5, 41, 41.7999992370605, 46.7000007629395, 49.9000015258789, 50.9000015258789, 55, 54.5, 59, 62.9000015258789, 69.6999969482422, 80.8000030517578, 93.6999969482422, 98.0999984741211, 112.699996948242, 121.199996948242, 129, 104.300003051758, 116.400001525879, 96.8000030517578, 106.800003051758, 110.599998474121, 111.5, 116.699996948242, 117.199996948242, 118.900001525879, 118.300003051758, 117.699996948242, 120.800003051758, 119.400001525879, 113.199996948242, 110.800003051758, 113, 104.300003051758, 108.800003051758, 94.0999984741211, 39.0999984741211, 40.0999984741211, 45.2000007629395, 45.5999984741211, 47, 49.4000015258789, 52.0999984741211, 53.0999984741211, 57.9000015258789, 60.9000015258789, 63, 65.8000030517578, 71.6999969482422, 87.3000030517578, 99.1999969482422, 101.5, 116.300003051758, 120.099998474121, 141.699996948242, 93.4000015258789, 105.400001525879, 112.099998474121, 115, 117.099998474121, 116.800003051758, 120.900001525879, 122.099998474121, 124.900001525879, 123.900001525879, 127, 125.300003051758, 125.800003051758, 122.300003051758, 116.400001525879, 115.300003051758, 113.199996948242, 110, 109, 36.2000007629395, 37.5, 37.4000015258789, 37.2999992370605, 41.4000015258789, 43, 46.4000015258789, 48.7999992370605, 53.5999984741211, 56.5, 59.7000007629395, 63, 69.1999969482422, 78.5999984741211, 89, 96.4000015258789, 106, 115.800003051758, 122.599998474121, 121.300003051758, 127.599998474121, 130, 
132.100006103516, 135.399993896484, 135.600006103516, 139.5, 140.800003051758, 141.800003051758, 140.199996948242, 142.100006103516, 140.5, 139.699996948242, 134.100006103516, 130, 129.199996948242, 128.800003051758, 128.699996948242, 127.400001525879, 36, 36.7999992370605, 37.7000007629395, 37.7000007629395, 38, 43.5, 44.7000007629395, 45.9000015258789, 49.9000015258789, 52.2000007629395, 57.2999992370605, 59.9000015258789, 64.6999969482422, 74.8000030517578, 84.8000030517578, 93.6999969482422, 101.900001525879, 108.5, 114.599998474121, 111.199996948242, 115.599998474121, 122.199996948242, 119.900001525879, 121.900001525879, 123.699996948242, 124.900001525879, 127, 127.199996948242, 120.300003051758, 122, 121.099998474121, 122.400001525879, 113.699996948242, 110.099998474121, 103.599998474121, 97.8000030517578, 91.6999969482422, 87.0999984741211, 34, 34.7000007629395, 40.0999984741211, 40.9000015258789, 41.7999992370605, 43.7000007629395, 45.2999992370605, 47.5999984741211, 51.9000015258789, 53.7000007629395, 56.7000007629395, 60.4000015258789, 65.6999969482422, 77.1999969482422, 91.3000030517578, 95.5, 102, 106.199996948242, 115.300003051758, 108.099998474121, 108.599998474121, 104.900001525879, 106.599998474121, 110.5, 114.099998474121, 118.099998474121, 117.699996948242, 117.400001525879, 116.099998474121, 116.300003051758, 117, 117.099998474121, 110.800003051758, 107.699996948242, 105.099998474121, 103.099998474121, 101.300003051758, 92.9000015258789, 33.9000015258789, 34.7000007629395, 41.0999984741211, 41.2000007629395, 42, 44.5999984741211, 46.7999992370605, 48.0999984741211, 53.5999984741211, 55.4000015258789, 59.5, 60.9000015258789, 69.6999969482422, 83.6999969482422, 94.8000030517578, 95.8000030517578, 104, 113.699996948242, 123.300003051758, 189.5, 190.5, 198.600006103516, 201.5, 204.699996948242, 205.199996948242, 201.399993896484, 190.800003051758, 187, 183.300003051758, 177.699996948242, 171.899993896484, 165.100006103516, 159.199996948242, 
136.600006103516, 146.699996948242, 142.600006103516, 147.699996948242, 141.899993896484, 38.9000015258789, 44, 40.5999984741211, 40.2999992370605, 41.9000015258789, 44.5, 44.9000015258789, 49.2999992370605, 54.2999992370605, 57.0999984741211, 63.0999984741211, 63.2999992370605, 71.5999984741211, 81.9000015258789, 99.8000030517578, 109.300003051758, 106.599998474121, 114, 129.600006103516, 265.700012207031, 278, 296.200012207031, 279, 269.799987792969, 269.100006103516, 290.5, 278.799987792969, 269.600006103516, 254.600006103516, 247.800003051758, 245.399993896484, 239.800003051758, 232.899993896484, 215.100006103516, 201.100006103516, 195.899993896484, 195.100006103516, 180.399993896484, 31.3999996185303, 34.0999984741211, 36.0999984741211, 36.9000015258789, 37.9000015258789, 40.7999992370605, 43.9000015258789, 45, 49.7000007629395, 53.2000007629395, 55.2999992370605, 58.4000015258789, 67, 74.6999969482422, 90.5, 89.1999969482422, 100, 102, 113.5, 90, 92.5999984741211, 99.3000030517578, 98.9000015258789, 100.300003051758, 103.099998474121, 102.400001525879, 102.400001525879, 103.099998474121, 101, 102.699996948242, 103, 97.5, 96.3000030517578, 88.9000015258789, 88, 88.1999969482422, 82.3000030517578, 77.6999969482422, 39.7000007629395, 41.7000007629395, 41.0999984741211, 41.7999992370605, 43.7000007629395, 46.2999992370605, 49.5, 51.5999984741211, 56, 57.5999984741211, 62.5999984741211, 63, 69.4000015258789, 79.5999984741211, 90.1999969482422, 97.5, 101.199996948242, 110.199996948242, 113.699996948242, 172.399993896484, 187.600006103516, 214.100006103516, 226.5, 227.300003051758, 226, 230.199996948242, 217, 205.5, 197.300003051758, 187.800003051758, 179.300003051758, 179, 169.800003051758, 160.600006103516, 156.300003051758, 154.399993896484, 150.5, 146, 27.2999992370605, 29.3999996185303, 28.7000007629395, 28.8999996185303, 30.1000003814697, 32.9000015258789, 35.7999992370605, 36.5999984741211, 41.7999992370605, 43.7000007629395, 47.2999992370605, 50, 55.5, 66, 
75, 78.9000015258789, 83.6999969482422, 90.5999984741211, 96, 93.8000030517578, 98.5, 103.800003051758, 108.699996948242, 110.5, 117.900001525879, 125.400001525879, 122.199996948242, 121.900001525879, 121.300003051758, 123.699996948242, 125.699996948242, 126.800003051758, 119.599998474121, 109.400001525879, 103.199996948242, 99.8000030517578, 92.3000030517578, 87.0999984741211, 37.2999992370605, 38.9000015258789, 38.9000015258789, 39.4000015258789, 39.9000015258789, 42.5999984741211, 45.9000015258789, 47.4000015258789, 53.2000007629395, 55, 59.5999984741211, 62, 67.8000030517578, 77.9000015258789, 94.4000015258789, 100.599998474121, 104.199996948242, 110.300003051758, 123.300003051758, 121.599998474121, 124.599998474121, 124.400001525879, 120.5, 122.099998474121, 122.5, 124.599998474121, 127.300003051758, 131.300003051758, 130.899993896484, 133.5, 132.800003051758, 134, 130, 127.099998474121, 126.699996948242, 126.300003051758, 124.599998474121, 122.400001525879, 36.5999984741211, 38.0999984741211, 38.4000015258789, 42, 42.9000015258789, 46, 48.5, 49.7999992370605, 53.9000015258789, 56.2999992370605, 58.7000007629395, 61.4000015258789, 68.3000030517578, 82.5, 89.1999969482422, 92.1999969482422, 98.0999984741211, 102.199996948242, 108.400001525879, 108.400001525879, 115.400001525879, 121.699996948242, 124.099998474121, 130.5, 132.899993896484, 138.600006103516, 140.399993896484, 143.600006103516, 141.600006103516, 141.600006103516, 143.699996948242, 147, 140, 128.100006103516, 124.199996948242, 119.900001525879, 113.099998474121, 103.599998474121, 38.4000015258789, 39.7999992370605, 39.7999992370605, 40.4000015258789, 41, 43.5999984741211, 46.4000015258789, 47.9000015258789, 53.0999984741211, 55.5, 62.9000015258789, 65.8000030517578, 71.6999969482422, 83.9000015258789, 93.3000030517578, 95.0999984741211, 104.599998474121, 114.400001525879, 122.599998474121, 107.300003051758, 106.300003051758, 109, 110.699996948242, 114.199996948242, 114.599998474121, 
118.800003051758, 120.099998474121, 122.300003051758, 122.599998474121, 124, 125.199996948242, 123.300003051758, 125.300003051758, 115.300003051758, 115.800003051758, 113.900001525879, 110.599998474121, 107.599998474121, 38.4000015258789, 44.7000007629395, 44.7000007629395, 44.9000015258789, 46.5999984741211, 49.7999992370605, 52.2999992370605, 53.2999992370605, 57.4000015258789, 60.5999984741211, 61.2999992370605, 64.8000030517578, 69.8000030517578, 81.6999969482422, 97.6999969482422, 100.099998474121, 104.900001525879, 110, 112.300003051758, 123.900001525879, 123.199996948242, 134.399993896484, 142, 146.100006103516, 154.699996948242, 150.199996948242, 148.800003051758, 146.800003051758, 145.800003051758, 149.300003051758, 151.199996948242, 146.300003051758, 135.800003051758, 136.899993896484, 133.399993896484, 136.300003051758, 124.400001525879, 138, 39.2999992370605, 40.2000007629395, 41.5999984741211, 40.5999984741211, 41.2999992370605, 44.2999992370605, 52.2000007629395, 52.2999992370605, 56.2999992370605, 58.7000007629395, 60, 64.5, 71.5999984741211, 84, 94.8000030517578, 100.300003051758, 101.800003051758, 113.5, 121.5, 103.599998474121, 115, 118.699996948242, 125.5, 129.699996948242, 130.5, 136.800003051758, 137.199996948242, 140.399993896484, 135.699996948242, 138.300003051758, 136.100006103516, 136, 131.100006103516, 127, 125.400001525879, 126.599998474121, 126.599998474121, 124.400001525879, 32.5, 34.2999992370605, 34.0999984741211, 33.5, 35.2000007629395, 38.0999984741211, 41, 42.2000007629395, 49.2000007629395, 50.2000007629395, 52.2999992370605, 54.7000007629395, 61.9000015258789, 72.4000015258789, 81.3000030517578, 83, 88.6999969482422, 95.3000030517578, 99.9000015258789, 92.6999969482422, 96.6999969482422, 103, 103.5, 108.400001525879, 113.5, 116.699996948242, 115.599998474121, 116.900001525879, 117.400001525879, 114.699996948242, 115.699996948242, 113, 109.800003051758, 105.699996948242, 104.400001525879, 97, 95.8000030517578, 91.9000015258789, 
38.5, 38.5, 39.0999984741211, 39.5999984741211, 40.4000015258789, 42.7999992370605, 45, 46.4000015258789, 53.2000007629395, 54.0999984741211, 58.7999992370605, 62.2999992370605, 68, 78.8000030517578, 89.8000030517578, 92.3000030517578, 108.5, 113.699996948242, 124.699996948242, 99.8000030517578, 106.300003051758, 111.5, 109.699996948242, 114.800003051758, 117.400001525879, 121.699996948242, 124.599998474121, 127.300003051758, 127.199996948242, 130.399993896484, 129.100006103516, 131.399993896484, 129, 125.099998474121, 128.699996948242, 129, 130.600006103516, 125.300003051758, 39.9000015258789, 41.5999984741211, 41.5999984741211, 40.7999992370605, 42.5, 45.2999992370605, 48.2999992370605, 49.5999984741211, 54.7999992370605, 57.2999992370605, 60.2999992370605, 63.7000007629395, 68.3000030517578, 79.0999984741211, 88.3000030517578, 92.5, 98.8000030517578, 103.5, 112.099998474121, 106.400001525879, 108.900001525879, 108.599998474121, 110.400001525879, 114.699996948242, 116, 121.400001525879, 124.199996948242, 126.599998474121, 126.400001525879, 129.699996948242, 129, 131.199996948242, 126.400001525879, 117.199996948242, 115.900001525879, 113.699996948242, 105.800003051758, 96.5, 40.4000015258789, 42, 46.9000015258789, 46.4000015258789, 47.5, 50.5999984741211, 53.2999992370605, 53.2999992370605, 59.0999984741211, 62.2000007629395, 63.7000007629395, 66.9000015258789, 73.8000030517578, 84.0999984741211, 93.8000030517578, 102.099998474121, 105.5, 114.400001525879, 128, 65.5, 67.6999969482422, 71.3000030517578, 72.6999969482422, 75.5999984741211, 75.8000030517578, 77.9000015258789, 78, 79.5999984741211, 79.0999984741211, 74.8000030517578, 77.5999984741211, 73.5999984741211, 69, 66.3000030517578, 66.5, 64.4000015258789, 67.6999969482422, 55, 34.5999984741211, 36.5999984741211, 37.2000007629395, 36.5, 37.7999992370605, 40.5, 43.4000015258789, 44.7000007629395, 49.5, 53.7000007629395, 57.2000007629395, 62.7000007629395, 68.0999984741211, 82, 95.3000030517578, 
104.599998474121, 103.5, 108.599998474121, 122.900001525879, 122.599998474121, 124.400001525879, 138, 146.800003051758, 151.800003051758, 155.5, 171.100006103516, 169.399993896484, 162.399993896484, 160.899993896484, 161.600006103516, 163.800003051758, 162.300003051758, 153.800003051758, 144.300003051758, 144.5, 131.199996948242, 128.300003051758, 128.699996948242, 37.7000007629395, 39.5, 40, 39.7999992370605, 41.2999992370605, 41.7999992370605, 47.0999984741211, 47, 52.5, 54.7999992370605, 58.9000015258789, 61, 66.8000030517578, 77, 90.5999984741211, 95.5, 104.900001525879, 113.800003051758, 123.699996948242, 124.300003051758, 128.399993896484, 137, 143.100006103516, 149.600006103516, 152.699996948242, 158.100006103516, 157.699996948242, 155.899993896484, 151.800003051758, 148.899993896484, 149.899993896484, 147.399993896484, 144.699996948242, 136.800003051758, 134.600006103516, 135.800003051758, 133, 129.5, 28.7999992370605, 30.2000007629395, 29.8999996185303, 30.1000003814697, 31.2999992370605, 33.5999984741211, 37.9000015258789, 38.4000015258789, 42.7999992370605, 45.7999992370605, 48.5, 51.7999992370605, 56.4000015258789, 68.8000030517578, 76, 83.5999984741211, 91.3000030517578, 94.5999984741211, 102.099998474121, 114.5, 111.5, 117.5, 116.599998474121, 119.900001525879, 123.199996948242, 129.699996948242, 133.899993896484, 131.600006103516, 122.099998474121, 122.300003051758, 120.5, 119.800003051758, 115.699996948242, 111.900001525879, 109.099998474121, 112.099998474121, 107.5, 109.099998474121, 33.7000007629395, 41.5999984741211, 41.2999992370605, 39.9000015258789, 42, 45.2000007629395, 48.4000015258789, 48.9000015258789, 53.9000015258789, 62.4000015258789, 64.3000030517578, 66.1999969482422, 75.0999984741211, 88.1999969482422, 97.1999969482422, 103.199996948242, 104.099998474121, 112.800003051758, 122.199996948242, 106.400001525879, 105.400001525879, 108.800003051758, 109.5, 111.800003051758, 113.5, 115.400001525879, 117.199996948242, 116.699996948242, 
117.099998474121, 117.599998474121, 119.900001525879, 115.599998474121, 106.300003051758, 105.599998474121, 107, 105.400001525879, 106, 102.599998474121, 38.5, 40.2000007629395, 40.2999992370605, 42.5999984741211, 43.9000015258789, 46.5999984741211, 51.2999992370605, 52.0999984741211, 57.0999984741211, 58.7000007629395, 61.2000007629395, 64.9000015258789, 75, 92, 100.800003051758, 106.800003051758, 110.800003051758, 116.300003051758, 128.600006103516, 132.199996948242, 131.699996948242, 140, 141.199996948242, 145.800003051758, 160.699996948242, 161.5, 160.399993896484, 160.300003051758, 168.600006103516, 158.100006103516, 163.100006103516, 157.699996948242, 141.199996948242, 128.899993896484, 125.699996948242, 124.800003051758, 110.400001525879, 114.300003051758, 34.0999984741211, 34.4000015258789, 34.4000015258789, 34.4000015258789, 35.7999992370605, 38.5999984741211, 42.5999984741211, 43.4000015258789, 49.7999992370605, 51.7000007629395, 55.2999992370605, 55.9000015258789, 64.3000030517578, 71, 81.6999969482422, 87.4000015258789, 97.8000030517578, 102.699996948242, 112.900001525879), .Dim = c(38L, 38L), .Dimnames = list( NULL, c(&quot;1&quot;, &quot;2&quot;, &quot;4&quot;, &quot;5&quot;, &quot;6&quot;, &quot;7&quot;, &quot;8&quot;, &quot;9&quot;, &quot;10&quot;, &quot;11&quot;, &quot;12&quot;, &quot;13&quot;, &quot;14&quot;, &quot;15&quot;, &quot;16&quot;, &quot;17&quot;, &quot;18&quot;, &quot;19&quot;, &quot;20&quot;, &quot;21&quot;, &quot;22&quot;, &quot;23&quot;, &quot;24&quot;, &quot;25&quot;, &quot;26&quot;, &quot;27&quot;, &quot;28&quot;, &quot;29&quot;, &quot;30&quot;, &quot;31&quot;, &quot;32&quot;, &quot;33&quot;, &quot;34&quot;, &quot;35&quot;, &quot;36&quot;, &quot;37&quot;, &quot;38&quot;, &quot;39&quot;))) </code></pre>
<p>This may be a rounding/precision issue. In the R implementation, if you add a rounding option to the loss function (note the closing parenthesis: <code>digits</code> must be inside the <code>round()</code> call):</p> <pre><code>loss = function(W, digits = 20){ return(round(sqrt(mean(y - X %*% W)^2), digits)) } </code></pre> <p>and leave everything else the same, you get the same results as the Python implementation:</p> <pre><code>&gt; m$value [1] 2.314992 &gt; round(m$par,3) [1] 0.000 0.000 0.000 0.085 0.000 0.000 0.000 0.000 [9] 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 [17] 0.000 0.000 0.000 0.113 0.105 0.457 0.000 0.000 [25] 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 [33] 0.240 0.000 0.000 0.000 0.000 0.000 </code></pre>
python|r|optimization|scipy
3
1,901,593
17,981,547
Python UI to enter 2 parameters
<p>I want to create a simple UI so the user can enter an integer and a string parameter. I was hoping to use something as simple as QInputDialog(), but as far as I can tell that only works with 1 parameter.</p> <pre><code>inputDialog = QtGui.QInputDialog() myInt, ok = inputDialog.getInt(inputDialog, 'Enter Params', 'Int', 1) </code></pre> <p>Is there a simple way to do something like:</p> <pre><code>inputDialog = QtGui.QInputDialog() myInt, myString, ok = inputDialog.getInt(inputDialog, 'Enter Params', 'Int', 1), inputDialog.getText(inputDialog, 'Enter Params', 'Text', 1) </code></pre> <p>I started looking at PyQt, but that just seems like such an overkill for something so basic.</p> <p>Any help would be appreciated.</p>
<p>There's no standard Qt dialog with multiple fields. What you can do is either use two dialogs in sequence or else have the user enter both values in a single field, then split them in your code. E.g.:</p> <pre><code>inputDialog = QtGui.QInputDialog() myText, ok = inputDialog.getText(inputDialog, 'Enter number, a space, then text', 'Data', '') if ok: myInt, _, myText = myText.partition(" ") try: myInt = int(myInt) except ValueError: myInt = 0 </code></pre> <p>Neither is very good from a user experience perspective, though.</p>
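The split-then-parse step from the single-field workaround above can be exercised without any Qt at all; the helper name below is illustrative, not part of PyQt:

```python
def split_int_and_text(raw, default=0):
    """Split 'number text...' into (int, str); fall back to a default int."""
    head, _, tail = raw.partition(" ")
    try:
        return int(head), tail
    except ValueError:
        return default, tail

# split_int_and_text("8 hello world") -> (8, "hello world")
```

The string returned by `getText` can be fed straight into a helper like this, keeping the parsing logic separate from (and testable without) the dialog code.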
python|user-interface|pyqt
0
1,901,594
17,693,318
Removing digits from a file
<p>I have a file of the form:</p> <pre><code>car1 auto1 automobile1 machine4 motorcar1 bridge1 span5 road1 route2 </code></pre> <p>But I want to remove the integers so that my file looks like:</p> <pre><code>car auto automobile machine motorcar bridge span road route </code></pre> <p>I am trying to read the file character by character, and if a character is a digit, skip it. But I am printing them in a new file. How can I make changes in the input file itself?</p>
<p>Using <a href="http://www.regular-expressions.info/tutorial.html" rel="nofollow">regular expressions</a>:</p> <pre><code>import re import fileinput for line in fileinput.input("your_file.txt", inplace=True): print re.sub(r"\d+", "", line), </code></pre> <p>Note: fileinput is a nice module for working with files.</p> <p>Edit: For better performance/less flexibility you can use:</p> <pre><code>import fileinput import string for line in fileinput.input("your_file.txt", inplace=True): print line.translate(None, string.digits), </code></pre> <p>For multiple edits/replaces:</p> <pre><code>import fileinput import re for line in fileinput.input("your_file.txt", inplace=True): #remove digits result = ''.join(i for i in line if not i.isdigit()) #remove dollar signs result = result.replace("$","") #some other regex, removes all y's result = re.sub("[Yy]+", "", result) print result, </code></pre>
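The `translate` snippet above is Python 2 (`str.translate` with a `None` table). In Python 3 the equivalent goes through `str.maketrans`; a sketch of that variant:

```python
import string

def strip_digits(text):
    """Python 3 equivalent of Python 2's text.translate(None, string.digits)."""
    return text.translate(str.maketrans("", "", string.digits))

# strip_digits("car1") -> "car"
```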
python
9
1,901,595
72,605,349
Pandas pad dataframe groups
<p>I have a dataframe e.g.:</p> <pre><code> my_label value 0 A 1 1 A 85 2 B 65 3 B 41 4 B 21 5 C 3 </code></pre> <p>I want to group by my_label and to pad groups to a certain length modulo and filling by last value. For example if I want to have multiple of 4, it would give :</p> <pre><code> my_label value 0 A 1 1 A 85 2 A 85 3 A 85 4 B 65 5 B 41 6 B 21 7 B 21 8 C 3 9 C 3 10 C 3 11 C 3 </code></pre> <p>I managed to get a solution that should be working, but for some reason the reindex isn't done at the end of the groups.</p> <pre><code>def _pad(group, seq_len): pad_number = seq_len - (len(group) % seq_len) if pad_number != seq_len: group = group.reindex(range(len(group)+pad_number)).ffill() return group df = (df.groupby('my_label') .apply(_pad, (4)) .reset_index(drop = True)) </code></pre> <p>Here is the code to the above DF for testing :</p> <pre><code>import pandas as pd df = pd.DataFrame({&quot;my_label&quot;:[&quot;A&quot;,&quot;A&quot;,&quot;B&quot;,&quot;B&quot;,&quot;B&quot;,&quot;C&quot;], &quot;value&quot;:[1,85,65,41,21,3]}) </code></pre>
<p>You can concatenate, per group, a dummy DataFrame with the number of missing rows (computed modulo <code>N</code>, so groups longer than <code>N</code> are also padded up to the next multiple), then <code>ffill</code>:</p> <pre><code>N = 4 out = (df .groupby('my_label', group_keys=False) .apply(lambda d: pd.concat([d, pd.DataFrame(columns=d.columns, index=range((N - len(d)) % N))])) .ffill() .reset_index(drop=True) ) </code></pre> <p>or, directly concatenating the last row as many times as needed:</p> <pre><code>(df .groupby('my_label', group_keys=False) .apply(lambda d: pd.concat([d, d.loc[[d.index[-1]]*((N - len(d)) % N)]])) .reset_index(drop=True) ) </code></pre> <p>output:</p> <pre><code> my_label value 0 A 1 1 A 85 2 A 85 3 A 85 4 B 65 5 B 41 6 B 21 7 B 21 8 C 3 9 C 3 10 C 3 11 C 3 </code></pre>
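The padding arithmetic itself (round each group's length up to the next multiple of N by repeating the last element) can be sketched without pandas; the helper below is illustrative:

```python
def pad_group(values, n):
    """Pad a list to the next multiple of n by repeating its last element."""
    if not values:
        return []
    missing = (-len(values)) % n  # 0 when len is already a multiple of n
    return values + [values[-1]] * missing

# pad_group([1, 85], 4) -> [1, 85, 85, 85]
```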
python|pandas
2
1,901,596
72,758,570
Can not print an assigned variable, the console prints the string
<p>I am trying to automate a process. I need to take a string and use it as the name of a previously assigned variable. For example:</p> <pre><code>H=8 Hello= &quot;Hello&quot; Hi=(Hello[0]) print(H) print(Hi) </code></pre> <p>Console prints:</p> <pre><code>8 H </code></pre> <p>and I need the console to print:</p> <pre><code>8 8 </code></pre>
<p>Use a dictionary instead.</p> <p>Any method that solves this the way you describe (looking a variable up by its name at runtime, e.g. with <code>eval</code>) will be unsafe and bad practice.</p> <pre><code>data = {'H': 8, 'B': 3} Hello = &quot;Hello&quot; Hi = data[Hello[0]] print(data['H']) print(Hi) </code></pre> <p>Output:</p> <pre><code>8 8 </code></pre> <p>See: <a href="https://stackoverflow.com/questions/1832940/why-is-using-eval-a-bad-practice">Why is using 'eval' a bad practice?</a></p>
python|string|variables|printing
0
1,901,597
68,258,766
Inspect (duck) type of python function arguments
<p>Given the following <code>display</code> function,</p> <pre><code>def display(some_object): print(some_object.value) </code></pre> <p>is there a way to programmatically determine that the attributes of <code>some_object</code> must include <code>value</code>?</p> <p>Modern IDEs (like PyCharm) yield a syntax error if I try to pass an <code>int</code> to the <code>display</code> function, so they are obviously doing this kind of analysis behind the scenes. I am aware how to get the <a href="https://stackoverflow.com/questions/582056/getting-list-of-parameter-names-inside-python-function">function signature</a>, this question is only about how to get the (duck) type information, i.e. which attributes are expected for each function argument.</p> <p>EDIT: In my specific use case, I have access to the source code (non obfuscated), but I am not in control of adding the type hints as the functions are user defined.</p> <p><strong>Toy example</strong></p> <p>For the simple <code>display</code> function, the following inspection code would do,</p> <pre><code>class DuckTypeInspector: def __init__(self): self.attrs = [] def __getattr__(self, attr): return self.attrs.append(attr) dti = DuckTypeInspector() display(dti) print(dti.attrs) </code></pre> <p>which outputs</p> <pre><code>None # from the print in display ['value'] # from the last print statement, this is what i am after </code></pre> <p>However, as the <code>DuckTypeInspector</code> always returns <code>None</code>, this approach won't work in general. A simple <code>add</code> function for example,</p> <pre><code>def add(a, b): return a + b dti1 = DuckTypeInspector() dti2 = DuckTypeInspector() add(dti1, dti2) </code></pre> <p>would yield the following error,</p> <pre><code>TypeError: unsupported operand type(s) for +: 'DuckTypeInspector' and 'DuckTypeInspector' </code></pre>
<p>The way to do this with static analysis is to declare the parameters as adhering to a protocol and then use <code>mypy</code> to validate that the actual parameters implement that protocol:</p> <pre><code>from typing import Protocol class ValueProtocol(Protocol): value: str class ValueThing: def __init__(self): self.value = &quot;foo&quot; def display(some_object: ValueProtocol): print(some_object.value) display(ValueThing()) # no errors, because ValueThing implements ValueProtocol display(&quot;foo&quot;) # mypy error: Argument 1 to &quot;display&quot; has incompatible type &quot;str&quot;; expected &quot;ValueProtocol&quot; </code></pre> <p>Doing this at runtime with mock objects is impossible to do in a generic way, because you can't be certain that the function will go through every possible code path; you would need to write a unit test with carefully constructed mock objects for each function and make sure that you maintain 100% code coverage.</p> <p>Using type annotations and static analysis is <em>much</em> easier, because <code>mypy</code> (or similar tools) can check each branch of the function to make sure that the code is compatible with the declared type of the parameter, without having to generate fake values and actually execute the function against them.</p> <p>If you want to programmatically inspect the annotations from someone else's module, you can use the magic <code>__annotations__</code> attribute:</p> <pre><code>&gt;&gt;&gt; display.__annotations__ {'some_object': &lt;class '__main__.ValueProtocol'&gt;, 'return': None} </code></pre>
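Beyond static checking, the same protocol can be made checkable at runtime with `typing.runtime_checkable` (Python 3.8+); note that `isinstance` then only verifies that the attribute exists, not that it has the annotated type:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class HasValue(Protocol):
    value: str

class Thing:
    def __init__(self):
        self.value = "foo"

# isinstance on a runtime-checkable protocol only checks attribute presence
assert isinstance(Thing(), HasValue)
assert not isinstance(42, HasValue)
```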
python|static-analysis|type-hinting
1
1,901,598
59,090,187
How to use append and extend in a function within a class without explicitly defining them in the same class?
<p>Can i use append and extend in a class without defining them beforehand?</p> <p>Example:</p> <pre><code>class Blabla(): def append(self, x): self.row += [x] def extend(self, row): for i in row: self.row += [i] def combine(self, rows): row_1 = Blabla([]) [...] row_1.append(self.row[i]) </code></pre>
<p>Your class <code>Blabla</code> will not just have <code>append</code> and <code>extend</code> methods automatically, so the short answer to your question of whether you can call them without having defined them is no.</p> <p>If your class had a member of type <code>list</code> (or of some other type with append and extend functions) then you could call append and extend on that member.</p> <p>For instance, if the <code>Blabla.row</code> member is a list type then doing the following would be legal...</p> <pre><code>... row_1.row.append(...) row_1.row.extend(...) ... # or in one of the class's methods... def foo(self, ...): self.row.append(...) self.row.extend(...) </code></pre> <p>Otherwise your class needs to define them first (either explicitly, by inheritance, or by some other means).</p>
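A minimal runnable version of that pattern, with the question's `Blabla` delegating to an internal list, might look like this (a sketch, not the asker's full class):

```python
class Blabla:
    def __init__(self, row=None):
        self.row = list(row) if row is not None else []

    def append(self, x):
        self.row.append(x)  # delegate to the internal list's own append

    def extend(self, items):
        self.row.extend(items)  # delegate to the internal list's own extend

b = Blabla([1, 2])
b.append(3)
b.extend([4, 5])
# b.row is now [1, 2, 3, 4, 5]
```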
python|python-2.7
0
1,901,599
59,183,331
Multiple python append functions in while loop crashing
<p>Attempting to build a composite list by appending items from three smaller lists in sequence:</p> <pre><code>def final_xyz_lister(): global final_xyz_list final_xyz_list = [] step=0 while step==0: final_xyz_list.append(carbon_final_list[step]) final_xyz_list.append(oxygen_final_list[step]) final_xyz_list.append(hydrogen_final_list[step]) step=+1 while 0 &lt; step &lt; 50: final_xyz_list.append(carbon_final_list[step]) final_xyz_list.append(oxygen_final_list[step]) final_xyz_list.append(hydrogen_final_list[step]) step=+1 else: pass </code></pre> <p>If I comment out the second while loop, the first element of the list is printed as expected, but introducing the second while loop results in a MemoryError.</p>
<p>There is no need to append the three items in two different while loops. It would also be simpler if you used for loops; in this case:</p> <pre><code>for step in range(0, 50): final_xyz_list.append(carbon_final_list[step]) final_xyz_list.append(oxygen_final_list[step]) final_xyz_list.append(hydrogen_final_list[step]) </code></pre> <p>Edit: Also, I just noticed the error: you use <code>step =+ 1</code>, which is the same as saying <code>step = +1</code> or <code>step = 1</code>. This is why you are getting a memory error: you keep setting step to 1, which is between 0 and 50, so the while loop never ends. What you probably wanted to write was <code>step += 1</code>, which increases step by 1 instead of setting it to 1.</p>
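The `=+` versus `+=` difference is easy to demonstrate in isolation:

```python
step = 5
step =+ 1   # parsed as step = (+1): step is reset to 1
assert step == 1

step = 5
step += 1   # augmented assignment: step becomes 6
assert step == 6
```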
python|list|while-loop|append
0