Dataset columns (name, dtype, min to max): Unnamed: 0 (int64, 0 to 1.91M); id (int64, 337 to 73.8M); title (string, 10 to 150 chars); question (string, 21 to 64.2k chars); answer (string, 19 to 59.4k chars); tags (string, 5 to 112 chars); score (int64, -10 to 17.3k).
1,908,000
57,609,542
How to create a random sequence excepting a set of given values
<p>I am using numpy and I want to generate an array of size <code>n</code> with random integers from <code>a</code> to <code>b</code> [upper bound exclusive] that are not in the array <code>arr</code> (if it helps, all values in <code>arr</code> are unique). I want the probability to be distributed uniformly among the other possible values. I am aware I can do it in this way:</p> <pre class="lang-py prettyprint-override"><code>randlist = np.random.randint(a, b, n) while np.intersect1d(randlist, arr).size &gt; 0: randlist = np.random.randint(a, b, n) </code></pre> <p>But this seems really inefficient. What would be the fastest way to do this?</p>
<p>Simplest vectorized way would be with <code>np.setdiff1d</code> + <code>np.random.choice</code> -</p> <pre><code>c = np.setdiff1d(np.arange(a,b),arr) out = np.random.choice(c,n) </code></pre> <p>Another way with <code>masking</code> -</p> <pre><code>mask = np.ones(b-a,dtype=bool) mask[arr-a] = 0 idx = np.flatnonzero(mask)+a out = idx[np.random.randint(0,len(idx),n)] </code></pre>
python|python-3.x|numpy|random
3
1,908,001
57,584,229
How to Assign new value to a list that is in another list?
<p>I tried the following python code to assign a new value to a list.</p> <pre><code>a,b,c=[],[],[] def change(): return [1] for i in [a,b,c]: i=change() print(a) </code></pre> <p>The output is <code>[]</code>, but what I need is <code>[1]</code></p>
<p>What you're doing here is you're re-assigning the variable <code>i</code> within the loop, but you're not actually changing <code>a</code>, <code>b</code>, or <code>c</code>. </p> <p>As an example, what would you expect the following source code to output?</p> <pre class="lang-py prettyprint-override"><code>a = [] i = a i = [1] print(a) </code></pre> <p>Here, you are reassigning <code>i</code> after you've assigned <code>a</code> to it. <code>a</code> itself is not changing when you perform the <code>i = [1]</code> operation, and thus <code>[]</code> will output. This problem is the same one as what you're seeing in your loop.</p> <p>You likely want something like this.</p> <p><code>a, b, c = (change() for i in (a, b, c))</code></p>
python|list|function|for-loop|variable-assignment
1
1,908,002
57,372,873
Delete option not available in tree view odoo
<p>My Odoo tree view doesn't seem to have a delete option<a href="https://i.stack.imgur.com/7mz2i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7mz2i.png" alt="enter image description here"></a></p> <p>As seen in the image, I only have the export option! Did I miss something in my view?</p> <pre><code> &lt;record id="physical_tree_view" model="ir.ui.view"&gt; &lt;field name="name"&gt;physical.tree.view&lt;/field&gt; &lt;field name="model"&gt;arrivals.physical&lt;/field&gt; &lt;field name= "arch" type = "xml"&gt; &lt;tree string="Physical"&gt; &lt;field name="lotno"/&gt; &lt;/tree&gt; &lt;/field&gt; &lt;/record&gt; </code></pre> <p>The ir.model.access.csv has the line:</p> <pre><code>access_arrivals_physical,access.arrivals.physical,model_arrivals_physical,base.group_user,1,1,1,0 </code></pre>
<p>There are no delete/unlink rights on that model. The last column in the csv is for <code>perm_unlink</code> field in <code>ir.model.access</code> and you've set that to <code>0</code>. So globally no one except Superuser/Odoobot has the right to delete records of that model.</p>
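<p>For illustration, a minimal sketch of the fix implied above (assuming you do want <code>base.group_user</code> members to be able to delete these records): change the last field of that csv line from <code>0</code> to <code>1</code>, then upgrade the module so the access rules are reloaded.</p> <pre><code>access_arrivals_physical,access.arrivals.physical,model_arrivals_physical,base.group_user,1,1,1,1
</code></pre>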
python|odoo|odoo-12
1
1,908,003
54,010,306
Ignoring unused futures with ThreadPoolExecutor
<p>I'm running two functions, <code>fast()</code> and <code>slow()</code>, in parallel using a <code>ThreadPoolExecutor</code>. If <code>fast()</code> returns a non-<code>None</code> result, I'd like to use it, otherwise, use the <code>slow()</code> result. Here is an example:</p> <pre><code>from concurrent.futures import ThreadPoolExecutor from time import sleep def fast(): sleep(2) return 'fast' def slow(): sleep(4) return 'slow' def run_parallel(): with ThreadPoolExecutor() as executor: fast_future = executor.submit(fast) slow_future = executor.submit(slow) fast_result = fast_future.result() if fast_result is not None: slow_future.cancel() return fast_result return slow_future.result() print(run_parallel()) </code></pre> <p>Running that outputs:</p> <pre><code>$ time python example.py fast real 0m4.058s user 0m0.041s sys 0m0.011s </code></pre> <p>Since <code>fast()</code> returned a non-<code>None</code> value, I expected this to take 2s instead of 4s, especially since I have that line to <code>slow_future.cancel()</code>.</p> <p>My ideal syntax for this would be something like:</p> <pre><code>combined_future = fast_future.orElse(slow_future) return combined_future.result() </code></pre> <p>What can I do to get this expected behavior?</p>
<p>This is due to the slow future not being cancellable (<code>slow_future.cancel()</code> returns False), so the threadpool executor waits for the thread. Try something like:</p> <pre><code>from concurrent.futures import ThreadPoolExecutor, as_completed from time import sleep abort_flag = False def fast(): for i in range(20): if abort_flag: return None sleep(.1) return 'fast' def slow(): for i in range(40): if abort_flag: return None sleep(.1) return 'slow' def run_parallel(): global abort_flag with ThreadPoolExecutor() as executor: abort_flag = False fast_future = executor.submit(fast) slow_future = executor.submit(slow) for f in as_completed((fast_future, slow_future)): result = f.result() if result is not None: abort_flag = True return result print(run_parallel()) </code></pre>
python|future|threadpoolexecutor
0
1,908,004
65,195,915
Optimizing loop for millions of entry selections
<p>I have a python anonymisation mechanism that rely on generating fake data from existing attributes.</p> <p>Those attributes are accessible in the domain D which is an array of 16 sets, each set representing values possible for each attributes. the attributes are <code>['uid', 'trans_id', 'trans_date', 'trans_type', 'operation', 'amount', 'balance', 'k_symbol', 'bank', 'acct_district_id', 'frequency', 'acct_date', 'disp_type', 'cli_district_id', 'gender', 'zip']</code></p> <p>Some attributes have very few values (gener is M or F), some are unique (uid) and can have 1260000 different values.</p> <p>The fake data is generated as tuples of randomly selected attributes inside the domain.</p> <p>I have to generate nearly <strong>2 million</strong> tuples.</p> <p>The first implementation of this was:</p> <pre><code>def beta_step(I, V, beta, n, m, D): r = approx_binomial(m - n, beta) print(&quot;r = &quot; + str(r)) i = 0 while i &lt; r: t = [] for attribute in D: a_j = choice(list(attribute)) t.append(a_j) if t not in V + I: V.append(t) i += 1 </code></pre> <p>This took around 0,5s for each tuple. Note that I and V are existing lists (with initialy respectively 1200000 and 800000 tuples)</p> <p>I already found out that I could speed-up things by converting D to a 2D array once and for all, in order not to convert sets in list on each run</p> <pre><code>for attribute in D: a_j = choice(attribute) t.append(a_j) </code></pre> <p>This gets me down to 0.2s by tuple.</p> <p>I also tried looping fewer times and generating multiple tuples at a time like so:</p> <pre><code>def beta_step(I, V, beta, n, m, D): D = [list(attr) for attr in D ] #Convert D in 2D list r = approx_binomial(m - n, beta) print(&quot;r = &quot; + str(r)) i = 0 NT = 1000 #Number of tuples generated at a time while i &lt; r: T = [[] for j in range(NT)] for attribute in D: a_j = choices(attribute,k=min(NT,r-i)) for j in range(len(a_j)): T[j].append(a_j[j]) for t in T: if t not in V + I: V.append(t) i += 1 </code></pre> <p>But this takes around 220s for 1000 tuples so it is not faster than before.</p> <p>I have timed the different parts and it seems that it is the last for loop that takes most of the time (Around 217s).</p> <p>Is there any way I could speed things up in order not to run it for 50 hours?</p> <p>======================= EDIT : I implemented @Larri suggestion like that :</p> <pre><code>def beta_step(I, V, beta, n, m, D): D = [list(attr) for attr in D ] #Convert D in list of lists I = set(tuple(t) for t in I) V = set(tuple(t) for t in V) r = approx_binomial(m - n, beta) print(&quot;r = &quot; + str(r)) i = 0 print('SIZE I', len(I)) print('SIZE V', len(V)) NT = 1000 #Number of tuples to generate at each pass while i &lt; r: T = [[] for j in range(min(NT,r-i))] for attribute in D: a_j = choices(attribute,k=min(NT,r-i)) for j in range(len(a_j)): T[j].append(a_j[j]) new_T = set(tuple(t) for t in T) - I size_V_before = len(V) V.update(new_T) size_V_after = len(V) delta_V = size_V_after-size_V_before i += delta_V return [list(t) for t in V] </code></pre> <p>it now takes about 0s to add elements to V</p> <p>In total, adding the 1680000 tuples took 91s However, converting back to a 2d array takes 200s, is there a way to make it faster that doesn't involve rewritting the whole program to work on sets ?</p>
<p>For the last for loop at least, consider converting to sets instead of using arrays. That allows you to use <code>set.update()</code> method without having to check if <code>t</code> is already included in <code>V</code>. This is assuming that you can incorporate the <code>A</code> in the logic somehow. From the given code I can't see any reference to <code>A</code>.</p> <p>So you can change it to something like <code>V.update(T)</code>. The <code>i</code> would then be the delta of <code>len(V)</code> before and after the operation.</p>
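<p>A minimal sketch of that idea, assuming <code>I</code> and <code>V</code> have been converted to sets of tuples once up front (lists are not hashable, so each generated row has to become a tuple before it can go into a set):</p> <pre><code># one-time conversion before the loop
I = set(map(tuple, I))
V = set(map(tuple, V))

# inside the generation loop, replace the per-row membership test with:
new_rows = set(map(tuple, T)) - I - V   # drop rows already seen
before = len(V)
V.update(new_rows)                      # set lookup/insert is O(1) per row on average
i += len(V) - before                    # count only rows actually added
</code></pre> <p>This is essentially what the edit in the question ends up doing; the remaining cost is the final conversion back to lists, which is hard to avoid without keeping the data as tuples throughout.</p>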
python|python-3.x|optimization
1
1,908,005
22,592,092
Python Multithreading (while and apscheduler)
<p>I am trying to call two functions simultaneously in Python. One is an infinite loop and the other one is started using apscheduler. Like this:</p> <p><strong>Thread.py</strong></p> <pre><code>from multiprocessing import Process import _While import _Scheduler if __name__ == '__main__': p1 = Process(target=_While.main()) p1.start() p2 = Process(target=_Scheduler.main()) p2.start() </code></pre> <p><strong>_While.py</strong></p> <pre><code>import time def main(): while True: print "while" time.sleep(0.5) </code></pre> <p><strong>_Scheduler.py</strong></p> <pre><code>import logging from apscheduler.scheduler import Scheduler def _scheduler(): print "scheduler" if __name__ == '__main__': logging.basicConfig() scheduler = Scheduler(standalone=True) scheduler.add_interval_job(lambda: _scheduler(), seconds=2) scheduler.start() </code></pre> <p>Since only while is printed it seems that _Scheduler isn’t starting. Can somone help me?</p>
<p>You've got at least a couple problems here. First, the <code>target</code> keyword should be a <em>function</em>, not the result of a function. e.g.:</p> <pre><code>p1 = Process(target=_While.main) # Note the lack of function call </code></pre> <p>Second, I don't see any <code>_Scheduler.main</code> function. Maybe you meant to do something like:</p> <pre><code>import logging from apscheduler.scheduler import Scheduler def _scheduler(): print "scheduler" def main(): logging.basicConfig() scheduler = Scheduler(standalone=True) scheduler.add_interval_job(_scheduler, seconds=2) # I doubt that `lambda` is necessary here ... scheduler.start() if __name__ == "__main__": main() </code></pre>
python|multithreading
2
1,908,006
45,411,840
qPython bytes type exception in sync query
<p>I'm trying to query KDB with the following select statement: <code>{select from order where OrderID = x}</code>. When passing in the parameter, it keeps throwing b'length exceptions. I've tried <code>numpy.string_</code>, <code>numpy.bytes_</code> and regular <code>bytes</code> using the <code>.encode()</code> method (latin-1 and utf-8).</p> <p>When I query one record to investigate the type of the OrderID column, it tells me the column type is <code>bytes</code>.</p> <p>What am I doing wrong? Not sure what the dash in the <a href="http://qpython.readthedocs.io/en/latest/type-conversion.html" rel="nofollow noreferrer">docs</a> is supposed to mean. Thanks!</p>
<p>It sounds like the type of <code>OrderID</code> on the kdb side is a character list. In which case you need to use <code>like</code> to do the comparison in your query:</p> <pre><code>{select from order where OrderID like x} </code></pre> <p>And then you should be able to use a regular Python string for the parameter, .e.g.</p> <pre><code>q.sync("{select from order where OrderID like x}", "my_order_id") </code></pre> <p>As long as you don't use any wildcard characters in parameter <code>x</code> then this will only match on the exact string. i.e.</p> <pre><code> q)"one" like "one" 1b q)"ones" like "one" 0b q)"ones" like "one*" 1b </code></pre>
python|kdb|qpython
0
1,908,007
45,533,770
How can I use Python built-in functions like isinstance() in Julia using PyCall
<p>PyCall document says: Important: The biggest difference from Python is that object attributes/members are accessed with o[:attribute] rather than o.attribute, so that o.method(...) in Python is replaced by o:method in Julia. Also, you use get(o, key) rather than o[key]. (However, you can access integer indices via o[i] as in Python, albeit with 1-based Julian indices rather than 0-based Python indices.)</p> <p>But i have no idea about which module or object to import</p>
<p>Here's a simple example to get you started</p> <pre class="lang-julia prettyprint-override"><code>using PyCall @pyimport numpy as np # 'np' becomes a julia module a = np.array([[1, 2], [3, 4]]) # access objects directly under a module # (in this case the 'array' function) # using a dot operator directly on the module #&gt; 2×2 Array{Int64,2}: #&gt; 1 2 #&gt; 3 4 a = PyObject(a) # dear Julia, we appreciate the automatic # convertion back to a julia native type, # but let's get 'a' back in PyObject form # here so we can use one of its methods: #&gt; PyObject array([[1, 2], #&gt; [3, 4]]) b = a[:mean](axis=1) # 'a' here is a python Object (not a python # module), so the way to access a method # or object that belongs to it is via the # pythonobject[:method] syntax. # Here we're calling the 'mean' function, # with the appropriate keyword argument #&gt; 2-element Array{Float64,1}: #&gt; 1.5 #&gt; 3.5 pybuiltin(:type)(b) # Use 'pybuiltin' to use built-in python # commands (i.e. commands that are not # under a module) #&gt; PyObject &lt;type 'numpy.ndarray'&gt; pybuiltin(:isinstance)(b, np.ndarray) #&gt; true </code></pre>
python|julia|built-in
3
1,908,008
45,608,091
Exchange data between interactive_mode and script_mode?
<p>Suppose I run a block of code in script_mode and it produces this data:</p> <pre><code>my_data = [1, 2, 3, 4] #please note this is output after running, not data in the script </code></pre> <p>Now I switch to the console to debug the code. I need to use the data that was just produced, but I cannot copy it directly without introducing gibberish. My solution is to pickle it first in script_mode and unpickle it in interactive_mode:</p> <p>Code with 5 commands:</p> <p>Script_mode:</p> <pre><code>import pickle with open('my_data','wb') as file: pickle.dump(my_data, file) </code></pre> <p>Interactive_mode:</p> <pre><code>import os, pickle # change to the working directory os.chdir('~\..\') with open('my_data', 'rb') as file: my_data = pickle.load(file) # my_data is finally loaded in console # then manipulate it on the console. </code></pre> <p>How can I do this in fewer steps?</p>
<p>You can run the file with the <code>-i</code> option, like <code>python -i your_file_name.py</code>.</p> <p>This will run your file first, then open an interactive shell with all of the variables present and ready for use.</p>
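<p>For example (file name assumed), after the script finishes you land in a prompt where the variables it created are still alive:</p> <pre><code>$ python -i your_file_name.py
&gt;&gt;&gt; my_data
[1, 2, 3, 4]
</code></pre>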
python
2
1,908,009
14,703,515
Django Admin: edit items on the main admin page, without going inside each item
<p>In my Admin page, I want to edit items by selecting from choices under each column, without clicking and going into each item and editing it inside each item's page. I've tried various functions and forms, but still can't find a way to do this.</p> <p>For example: I have to assign an age range to each item, 0-5, 5-15, 15-25, 25-35 (some items will have multiple choices checked), and I want to assign these values from the main admin page, without going inside each item.</p> <p>Any help would be appreciated.</p>
<p>If the field has choices, it is easy: use <a href="https://docs.djangoproject.com/en/dev/ref/contrib/admin/#django.contrib.admin.ModelAdmin.list_editable" rel="nofollow">list_editable</a> in the ModelAdmin class.</p> <p>If you want something more custom, you should provide a customized form for the rows in the main admin page (by the way, this is called the "Change List" in Django terms).</p> <p>Inside that form, supply choices to the field and even a custom widget.</p>
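<p>A minimal sketch of the <code>list_editable</code> route (model and field names are assumed, since they are not given in the question; the field must also appear in <code>list_display</code> and cannot be the one that links to the change page):</p> <pre><code>from django.contrib import admin
from .models import Item  # hypothetical model

class ItemAdmin(admin.ModelAdmin):
    list_display = ('name', 'age_range')   # 'age_range' is a field with choices
    list_editable = ('age_range',)         # editable directly on the change list

admin.site.register(Item, ItemAdmin)
</code></pre>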
django|python-2.7|django-admin
0
1,908,010
14,565,762
Django DeleteView redirect to variable location
<p>I want to be able to use the DeleteView CBV in Django 1.5c1 (including the confirmation page), but have the user be redirected to where he/she clicked the object. </p> <p>For example, here is a rough outline of my site's structure based around Events:</p> <pre><code>/events/week/2013/03/ - ListView, shows 3rd week of 2013's Events /events/month/2013/01/ - ListView, shows January of 2013's Events /events/year/2013/ - ListView, show 2013's Events /events/53/ - DetailView, shows a specific Event </code></pre> <p>On any of these Events listings, I could have an Event that appears on them all. Rather than having an Event's URL depend on the list that the user has navigated from (e.g. /events/year/2013/53/), I've chosen to have the Event be served on an independent URL (e.g. /events/53/). </p> <p><strong>With that context, I want to be able to have a delete button on my Event's DetailView that redirects back to the ListView that the user navigated from.</strong></p> <p>I've considered:</p> <ul> <li>Middleware that will look at the previous URL and add it to the session if it's a ListView in my URLconf. This has several disadvantages, one of which being the need to whitelist every possible location that an Event's DetailView can be clicked from.</li> <li>On the delete button on DetailView, append <code>?next={{ request.META.HTTP_REFER }}</code> to the DeleteView's URL and adding it to the delete form somehow, but the whole referrer's URL is passed (e.g. /events/53/delete/?next=www.site.com/events/year/2013/).</li> </ul>
<p>Try something like this as a mixin:</p> <pre><code>class RedirectURLView(View): def get_success_url(self): next_url = self.request.GET.get('next') if next_url: return next_url else: return super(RedirectURLView, self).get_success_url() </code></pre> <p>then append <code>?next={{ request.path }}</code> to the urls</p>
python|django
1
1,908,011
14,420,023
How to output lines (of characters) from a text file into a single line
<p>I need help in how to connect multiple lines from a txt file into one single line without <strong>white spaces</strong></p> <p>The text file is consisting of 8 lines and each single line has 80 characters, as showing: </p> <p><a href="https://i.stack.imgur.com/dfoDU.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dfoDU.jpg" alt="a busy cat"></a><br> <sub>(source: <a href="http://im33.gulfup.com/HMGw2.jpg" rel="nofollow noreferrer">gulfup.com</a>)</sub> </p> <p>Here is the code that I used, but my problem is that I am not able have all the lines connected with NO white spaces between them: </p> <pre><code>inFile = open ("text.txt","r") # open the text file line1 = inFile.readline() # read the first line from the text.txt file line2 = inFile.readline() # read the second line from the text.txt file line3 = inFile.readline() line4 = inFile.readline() line5 = inFile.readline() line6 = inFile.readline() line7 = inFile.readline() line8 = inFile.readline() print (line1.split("\n")[0], # split each line and print it --- My proplem in this code! line2.split("\n")[0], line3.split("\n")[0], line4.split("\n")[0], line5.split("\n")[0], line6.split("\n")[0], line7.split("\n")[0], line8.split("\n")[0]) </code></pre> <p><a href="https://i.stack.imgur.com/XimDU.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XimDU.jpg" alt="a busy cat"></a><br> <sub>(source: <a href="http://im33.gulfup.com/XfXJ1.jpg" rel="nofollow noreferrer">gulfup.com</a>)</sub> </p>
<p>Just read the lines of the file into a list and use <code>''.join()</code>:</p> <pre><code>with open ("text.txt","r") as inFile: lines = [l.strip() for l in inFile] print ''.join(lines) </code></pre> <p>The <code>.strip()</code> call removes all whitespace from the start and end of the line, in this case the newline.</p> <p>Using a comma with the <code>print</code> statement does more than just omit the newline, it also prints a space between the arguments.</p>
python|text|printing
1
1,908,012
14,391,504
Using Array to Store Pixels of Window
<p>Is it possible (in terms of performance) to have a single multi-dimensional array that contains one 8-bit integer per pixel, for each pixel in the game window? I need to update the game window in a timely manner based on this array. </p> <p>I'm aiming for something like the following:</p> <pre><code>import numpy window_array = numpy.zeros((600, 600), dtype=numpy.int8) #draw the screen for (y, x), value in numpy.ndenumerate(window_array): if value == 1: rgb = (0, 0, 0) elif value == 2: rgb = (50, 50, 50) blit_pixel(x, y, rgb) </code></pre> <p>I'd like to be going 30-60 FPS, but so far my tests have yielded results that were much too slow to run at even a bad framerate. Is it possible to do, and if so, how?</p>
<p>I have never used pygame, so take my anwser with a grain of salt...</p> <p>That said, it seems very unlikely that you are going to get any decent frame rate if you are iterating over 360,000 pixels with a python loop and doing a python function call at each one.</p> <p>I learned from <a href="https://stackoverflow.com/questions/14206938/combine-numpy-arrays-by-reference">this other question</a> (read the comment thread) that the <a href="http://www.pygame.org/docs/ref/surfarray.html" rel="nofollow noreferrer"><code>pygame.surfarray</code></a> module will give you a reference to the array holding the actual screen data. <code>pygame.surfarray.pixels3d</code> should return a reference to an array of shape <code>(rows, cols, 3)</code> where the last dimension holds the RGB values on screen. With that reference, you can change the pixels on screen directly, without needing a python loop doing something like:</p> <pre><code>import numpy surf_array = pygame.surfarray.pixels3d(surface) window_array = numpy.zeros(surf_array.shape[:2], dtype=numpy.int8) ... surf_array[numpy.nonzero(window_array == 1)] = np.array([0, 0, 0]) surf_array[numpy.nonzero(window_array == 2)] = np.array([50, 50, 50]) </code></pre> <p>Not sure if a call to <code>pygame.display.update</code> is needed to actually show your changes, but this approach will sustain a much, much higher frame rate than what you had in mind.</p>
python|arrays|numpy|pixel
0
1,908,013
68,729,349
getting mean() used in groupby to use the right grouped values for calculation
<p>Data import from csv:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Date</th> <th>Item_1</th> <th>Item 2</th> </tr> </thead> <tbody> <tr> <td>1990-01-01</td> <td>34</td> <td>78</td> </tr> <tr> <td>1990-01-02</td> <td>42</td> <td>19</td> </tr> <tr> <td>.</td> <td>.</td> <td>.</td> </tr> <tr> <td>.</td> <td>.</td> <td>.</td> </tr> <tr> <td>2020-12-31</td> <td>41</td> <td>23</td> </tr> </tbody> </table> </div> <pre><code>df = pd.read_csv(r'Insert file directory') df.index = pd.to_datetime(df.index) gb= df.groupby([(df.index.year),(df.index.month)]).mean() </code></pre> <p>Issue: So basically, the requirement is to group the data according to year and month before processing and I thought that the groupby function would have grouped the data so that the mean() calculate the averages of all values grouped under Jan-1990, Feb-1990 and so on. However, I was wrong. The output result in the average of all values under Item_1 <br> <br> My example is similar to the below post but in my case, it is calculating the mean. I am guessing that it has to do with the way the data is arranged after groupby or some parameters in mean() have to be specified but I have no idea which is the cause. Can someone enlighten me on how to correct the code?</p> <p><a href="https://stackoverflow.com/questions/26646191/pandas-groupby-month-and-year">Pandas groupby month and year</a></p> <p>Update: Hi all, I have created the sample data file .csv with 3 items and 3 months of data. I am wondering if the cause has to do with the conversion of data into df when it is imported from .csv because I have noticed some weird time data on the leftmost as shown below:</p> <p><a href="https://i.stack.imgur.com/7S1b7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7S1b7.png" alt="enter image description here" /></a></p> <p>Link to sample file is: <a href="https://www.mediafire.com/file/t81wh3zem6vf4c2/test.csv/file" rel="nofollow noreferrer">https://www.mediafire.com/file/t81wh3zem6vf4c2/test.csv/file</a></p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.read_csv( 'test.csv', index_col = 'date' ) df.index = pd.to_datetime( df.index ) df.groupby([(df.index.year),(df.index.month)]).mean() </code></pre> <p>Seems to do the trick from the provided data.</p>
python|pandas
1
1,908,014
6,556,581
Convert date to months and year
<p>From the following dates, how do I get the month and year using Python?</p> <p>The dates are: </p> <pre><code> "2011-07-01 09:26:11" //This should display as "This month" "2011-06-07 09:26:11" //This should display as June "2011-03-12 09:26:11" //This should display as March "2010-07-25 09:26:11" // Should display as last year </code></pre> <p>Please let me know how to get these formats using Python 2.4.</p> <p>When I have a date variable [with the same format] in a pandas dataframe, how do I perform this?</p>
<pre><code>import time import calendar current = time.localtime() dates = ["2011-07-01 09:26:11", "2011-06-07 09:26:11", "2011-03-12 09:26:11", "2010-07-25 09:26:11"] for date in dates: print "%s" % date date = time.strptime(date, "%Y-%m-%d %H:%M:%S") if date.tm_year == current.tm_year - 1: print "Last year" elif date.tm_mon == current.tm_mon: print "This month" else: print calendar.month_name[date.tm_mon] </code></pre> <p>Should do roughly what you ask for.</p>
python
5
1,908,015
57,188,657
Creating new sheet in excel and writing data with openpyxl
<p>I have already existing excel file and at the beginning in my code i import data from first sheet.</p> <p>Now i cant write points from two variables <code>(xw, yw)</code> into new sheet in excel to two colmuns (A1-A[Q] and B1-B[Q]).</p> <p><code>xw</code> and <code>yw</code> are arrays consisting of float numbers (example: 1.223, 2.434 etc.).</p> <pre class="lang-py prettyprint-override"><code>Q = len(xw) wb.create_sheet('Points') sheet2 = wb.get_sheet_by_name('Points') for q in range(1,Q): sheet2["A[q]"]=xw[q] sheet2["B[q]"]=yw[q] wb.save('PARAMS.xlsx') </code></pre> <p><strong>EDIT:</strong></p> <p>I want to fill third column with zeros (C1-CQ). My code is below, but its start from C2, not C1. I did <code>sheet2[f"C{1}"]=0</code> but it looks bad. What is the solution?</p> <pre class="lang-py prettyprint-override"><code>Q = len(xw) zw= np.zeros(Q) sheet2 = wb.create_sheet('Points') sheet2[f"A{1}"]=0 sheet2[f"B{1}"]=T_r sheet2[f"C{1}"]=0 for i, row in enumerate(sheet2.iter_rows(min_row=2, max_col=3, max_row=Q+1)): row[0].value = xw[i] row[1].value = yw[i] row[2].value = zw[i] wb.save('PARAMS.xlsx') </code></pre>
<p>You are trying to access, literally, the cell <code>"A[q]"</code>, which of course does not exist. Change it to <code>f"A{q}"</code> (the same goes for <code>B</code>, of course).</p> <p>Also, <code>openpyxl</code> uses 1-based indexing, which means you will skip the first elements of your lists. Therefore you should do:</p> <pre class="lang-py prettyprint-override"><code>for q in range(Q): sheet2[f"A{q+1}"] = xw[q] sheet2[f"B{q+1}"] = yw[q] </code></pre> <p>Alternatively, you could use the API of <code>openpyxl</code> to access the cells using <a href="https://openpyxl.readthedocs.io/en/stable/tutorial.html#accessing-many-cells" rel="nofollow noreferrer"><code>iter_rows</code></a> and <a href="https://docs.python.org/3/library/functions.html#enumerate" rel="nofollow noreferrer"><code>enumerate</code></a>:</p> <pre class="lang-py prettyprint-override"><code>for i, row in enumerate(sheet2.iter_rows(min_row=1, max_col=2, max_row=Q)): row[0].value = xw[i] row[1].value = yw[i] </code></pre> <hr> <p>Notice that <code>create_sheet</code> also <strong>returns</strong> the sheet created, so you can simply do: </p> <pre><code>sheet2 = wb.create_sheet('Points') </code></pre>
python|excel|python-3.x|openpyxl
1
1,908,016
56,904,783
How to install a python library to a pulled docker image?
<p>I have pulled a docker image to run airflow (pucker/airflow) and it is running well. However, I can't manage to install a new python library on this image. I have read that you have to add the package in the docker file. However, I don't know where it is stored. I work on MacOSX.</p> <p>Thanks for your help</p>
<p>As I understand it, you only pulled a <code>puckel/docker-airflow</code> image from dockerhub, and you're simply running that image.</p> <p>If you need to add extra libraries, and if you want to include the install of these libraries in a build process, you probably need a <code>Dockerfile</code>. For instance, if you want to install <code>requests</code>, a minimalist Dockerfile could be as follows:</p> <pre><code>FROM puckel/docker-airflow RUN pip install requests </code></pre> <p>Create such a file in <code>myproject/</code>, then <code>cd</code> in <code>myproject/</code> and simply run <code>docker build .</code> This will output a simple log such as:</p> <pre><code>Step 1/2 : FROM puckel/docker-airflow ---&gt; 12753a529f9f Step 2/2 : RUN python3 -m pip install requests ---&gt; Running in 66860c8ca099 Requirement already satisfied: requests in /usr/local/lib/python3.6/site-packages (2.22.0) Requirement already satisfied: certifi&gt;=2017.4.17 in /usr/local/lib/python3.6/site-packages (from requests) (2019.3.9) Requirement already satisfied: chardet&lt;3.1.0,&gt;=3.0.2 in /usr/local/lib/python3.6/site-packages (from requests) (3.0.4) Requirement already satisfied: idna&lt;2.9,&gt;=2.5 in /usr/local/lib/python3.6/site-packages (from requests) (2.8) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,&lt;1.26,&gt;=1.21.1 in /usr/local/lib/python3.6/site-packages (from requests) (1.25.3) Removing intermediate container 66860c8ca099 ---&gt; 66b9d91c4c95 Successfully built 66b9d91c4c95 </code></pre> <p>Then run <code>docker run 66b9d91c4c95</code> to instantiate the image you just created, or <code>docker run -it 66b9d91c4c95 bash</code> to open <code>bash</code> in it.</p> <p>You can read on <a href="https://docs.docker.com/engine/reference/commandline/tag/" rel="nofollow noreferrer">docker tags</a> to replace <code>66b9d91c4c95</code> by a meaningful name.</p>
python|docker|airflow
3
1,908,017
23,844,476
How to convert this string to datetime and date alone
<p>This question is far from unique, but i cannot find a way to convert the strings that are contained in this df column to datetime and date alone objects in order to use them as the index of my dataframe.</p> <p>How can i convert this string to datetime or date format to use it as an index on my df? </p> <p>The format of this column in particular is as follows: </p> <pre><code>&gt;&gt;&gt; data['DateTime'] 0 20140101 00:00:00 1 20140101 00:00:00 3 20140101 00:00:00 4 20140101 00:00:00 5 20140101 00:00:00 6 20140101 00:00:00 7 20140101 00:00:00 8 20140101 00:00:00 9 20140101 00:00:00 10 20140101 00:00:00 Name: DateTime, Length: 3779, dtype: object </code></pre>
<p>Use <code>to_datetime</code> to convert to a string to a datetime, you can pass a formatting string but in this case it seems to handle it fine, then if you wanted a date then call <code>apply</code> and use a lambda to call <code>.date()</code> on each datetime entry:</p> <pre><code>In [59]: df = pd.DataFrame({'DateTime':['20140101 00:00:00']*10}) df Out[59]: DateTime 0 20140101 00:00:00 1 20140101 00:00:00 2 20140101 00:00:00 3 20140101 00:00:00 4 20140101 00:00:00 5 20140101 00:00:00 6 20140101 00:00:00 7 20140101 00:00:00 8 20140101 00:00:00 9 20140101 00:00:00 In [60]: df['DateTime'] = pd.to_datetime(df['DateTime']) df.dtypes Out[60]: DateTime datetime64[ns] dtype: object In [61]: df['DateTime'] = df['DateTime'].apply(lambda x:x.date()) print(df) df.dtypes DateTime 0 2014-01-01 1 2014-01-01 2 2014-01-01 3 2014-01-01 4 2014-01-01 5 2014-01-01 6 2014-01-01 7 2014-01-01 8 2014-01-01 9 2014-01-01 Out[61]: DateTime object dtype: object </code></pre>
date|datetime|pandas
2
1,908,018
72,051,642
How to rename columns of list of dataframes in pandas?
<p>I have list of dataframes where each has different columns, and I want to assign unique column names to all and combine it but it is not working. Is there any quick way to do this in pandas?</p> <p><strong>my attempt</strong></p> <pre><code>!pip install wget import wget import pandas as pd url = 'https://github.com/adamFlyn/test_rl/blob/main/test_data.xlsx' data= wget.download(url) xls = pd.ExcelFile('~/test_data.xlsx') names = xls.sheet_names[1:] # iterate to find sheet name that matches data_dict = pd.read_excel(xls, sheet_name = [name for name in xls.sheet_names if name in names]) dfs=[] for key, val in data_dict.items(): val['state_abbr'] = key dfs.append(val) for df in dfs: st=df.columns[0] df['state']=st df.reset_index() for df in dfs: lst=df.columns.tolist() lst=['county','orientation','state_abbr','state'] df.columns=lst final_df=pd.concat(dfs, axis=1, inplace=True) </code></pre> <p>but I am not able to rename coumns of each dataframe like this and have this error:</p> <pre><code>for df in dfs: lst=df.columns.tolist() lst=['county','orientation','state_abbr','state'] df.columns=lst </code></pre> <blockquote> <p>ValueError: Length mismatch: Expected axis has 5 elements, new values have 4 elements</p> </blockquote> <p>how should I do this in pandas? any quick thoughts or trick to do this? thanks</p>
<p>The error was coming from the data. Almost all DataFrames sheets had 3 columns but only &quot;NC&quot; had a redundant column that starts as &quot;Unnamed&quot;, which is almost all NaN except for one row which has <code>&quot;`&quot;</code> as value. If we remove that column from that sheet, the rest of the code works as expected.</p> <p>You can assign new columns using <code>assign</code> and change column names using <code>set_axis</code> in a dict comprehension. Also, instead of a list comprehension to get the sheet names, you can use <code>names</code> itself. Finally, simply concatenate all with <code>concat</code>.</p> <pre class="lang-py prettyprint-override"><code>out = pd.concat([df.loc[:, ~df.columns.str.startswith('Unnamed')] .set_axis(['county','orientation'], axis=1) .assign(state=df.columns[0], state_abbr=k) for k, df in pd.read_excel(xls, sheet_name = names).items()]) </code></pre> <p>Output:</p> <pre class="lang-py prettyprint-override"><code> county orientation state state_abbr 0 Aleutians East Plaintiff Alaska AK 1 Aleutians West Plaintiff Alaska AK 2 Anchorage Neutral Alaska AK 3 Bethel Plaintiff Alaska AK 4 Bristol Bay Plaintiff Alaska AK .. ... ... ... ... 18 Sweetwater Neutral Wyoming WY 19 Teton Neutral Wyoming WY 20 Uinta Defense Wyoming WY 21 Washakie Defense Wyoming WY 22 Weston Defense Wyoming WY [3117 rows x 4 columns] </code></pre>
python|python-3.x|excel|pandas|dataframe
3
1,908,019
46,448,861
can't extract any data with this code using scrapy
<p>I'm just learning how to use scrapy but I'm having trouble running my first spider. This is my code but it doesn't extract any data! Could you please help me :) </p> <pre><code> import scrapy class Housin(scrapy.Spider): name ='housin' star_urls = ['http://www.metrocuadrado.com/apartamento/venta/bogota/usado/'] def parse (self,response): for href in response.css('a.data-details-id::attr(href)'): yield response.follow(href, self.parse_livi) def parse_livi(self,response): yield { 'latitude': response.xpath('//input[@id="latitude"]/@value').extract_first(), 'longitud': response.xpath('//input[@id="longitude"]/@value').extract_first(), 'price': response.xpath('//dd[@class="important"]/text()').extract_first(), 'Barrio_com': response.xpath('.//dl/dt[h3/text()="Nombre com&amp;uacute;n del barrio "]/following-sibling::dd[1]/h4/text()').extract_first(), 'Barrio_cat': response.xpath('.//dl/dt[h3/text()="Nombre del barrio catastral"]/following-sibling::dd[1]/h4/text()').extract_first(), 'Estrato': response.xpath('.//dl/dt[h3/text()="Estrato"]/following-sibling::dd[1]/h4/text()').extract_first(), 'id': response.xpath('//input[@id="propertyId"]/@value').extract_first() } </code></pre>
<p>Your issue is that your scraper doesn't start at all. Below </p> <pre><code>star_urls = ['http://www.metrocuadrado.com/apartamento/venta/bogota/usado/'] </code></pre> <p>should be</p> <pre><code>start_urls = ['http://www.metrocuadrado.com/apartamento/venta/bogota/usado/'] </code></pre> <p>That typo (missing <code>t</code>) causes scrapy to not find any starting url and hence the scraping doesn't start at all</p>
python|web-scraping|scrapy
1
1,908,020
46,350,799
How to format triple quoted text into a triple quoted print statement
<p>So currently I am working on a text based project the includes a lotto. The trouble is that I want to use triple quoted text numbers inside a print statement that already has triple quotes. basically I want to do something like this:</p> <pre><code>num1 = """ ______ / ___ \ \/ \ \ ___) / (___ ( ) \ /\___/ / \______/ """ print(""" ______________ | | | | | | | {a} | | | | | | | """.format(a=num1)) </code></pre> <p>This is just a concept, I know this does not actually work. If I try to do anything like this It just pushes the non-formated text to the next line. Should I print it line by line? Do I need to use <code>%s</code> rather than <code>.format()</code>? Anyway I hope I communicated my question clearly. </p> <p>-Zeus</p>
<p>I don't think there isn't any python builtin which can help you out with that. I'd recommend you just look for a proper ascii text library to build effortlesly that type of ascii-text scripts, <a href="http://www.figlet.org/" rel="nofollow noreferrer">figlet</a> is one of the hundreds available out there.</p> <p>If you want to code something by yourself, i'd just install <code>pillow</code> and i'd use it to fill a custom buffer with the shapes i wanted, look below a 5-min example which "renders" text and rectangles to such a buffer. Be aware it's not doing out-of-bounds checks and probably is quite slow... but i guess you can start working on top of that:</p> <pre><code>from PIL import Image, ImageFont, ImageDraw class AsciiBuffer(): def __init__(self, width, height): self.font_size = 15 self.font = ImageFont.truetype('arialbd.ttf', self.font_size) self.width = width self.height = height self.clear() def clear(self): self.buffer = [['*' for x in range(self.width)] for y in range(self.height)] def draw_text(self, x0, y0, text): size = self.font.getsize(text) image = Image.new('1', size, 1) draw = ImageDraw.Draw(image) draw.text((0, 0), text, font=self.font) for y in range(size[1]): line = [] for x in range(size[0]): if image.getpixel((x, y)): self.buffer[y0 + y][x0 + x] = ' ' else: self.buffer[y0 + y][x0 + x] = '#' def draw_rectangle(self, x0, y0, w, h, fill=' '): for y in range(h): for x in range(w): self.buffer[y0 + y][x0 + x] = fill def render(self): for y in range(self.height): print(''.join(self.buffer[y])) if __name__ == "__main__": k = 20 ab = AsciiBuffer(k * 3 + 4, k * 3 + 4) n = 1 for i in range(3): for j in range(3): x = 1 + (k + 1) * i y = 1 + (k + 1) * j ab.draw_rectangle(x, y, k, k) ab.draw_text(x + int(k / 4), y, str(n)) n += 1 ab.render() </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/w4S95.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w4S95.png" alt="enter image description here"></a></p> <p><strong>Refactoring:</strong></p> <p>In any case, the above code has multiple problems, being one of them the fact the class AsciiBuffer is coupled with PIL, when this class should be as dummy as possible, for instance, just drawing "text sprites" should be fine, so here's a little refactoring where i'm showing you how to generate "text sprites" either from multiline strings (like the one you posted in your question) or from certain system fonts rendered by PIL:</p> <pre><code>import sys from PIL import Image, ImageFont, ImageDraw class SpriteGenerator(): def __init__(self): pass def from_multiline_string(self, text, length): buf = [] for l in text.splitlines(): buf.append(list(l) + [' '] * (length - len(l))) return buf def from_pil(self, text, font_name='arialbd.ttf', font_size=15): font = ImageFont.truetype(font_name, font_size) size = font.getsize(text) image = Image.new('1', size, 1) draw = ImageDraw.Draw(image) draw.text((0, 0), text, font=font) buf = [] for y in range(size[1]): line = [] for x in range(size[0]): if image.getpixel((x, y)): line.append(' ') else: line.append('#') buf.append(line) return buf class AsciiBuffer(): def __init__(self, width, height): self.width = width self.height = height self.clear() def clear(self): self.buffer = [['*' for x in range(self.width)] for y in range(self.height)] def draw_sprite(self, x0, y0, sprite): for y, row in enumerate((sprite)): for x, pixel in enumerate(row): self.buffer[y0 + y][x0 + x] = pixel def draw_rectangle(self, x0, y0, w, h, fill=' '): for y in range(h): for x in range(w): 
self.buffer[y0 + y][x0 + x] = fill def render(self): for y in range(self.height): print(''.join(self.buffer[y])) if __name__ == "__main__": num = """ ______ / ___ \\ \\/ \\ \\ ___) / (___ ( ) \\ /\\___/ / \\______/ """ k = 15 ab = AsciiBuffer(k * 3 + 4, k * 3 + 4) sg = SpriteGenerator() n = 1 for i in range(3): for j in range(3): x = 1 + (k + 1) * i y = 1 + (k + 1) * j ab.draw_rectangle(x, y, k, k) if n == 3: ab.draw_sprite( x + int(k / 4), y, sg.from_multiline_string(num, 11)) else: ab.draw_sprite( x + int(k / 4), y, sg.from_pil(str(n), font_size=k)) n += 1 ab.render() </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/sQgeN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sQgeN.png" alt="enter image description here"></a></p>
python|python-3.x|formatting
1
1,908,021
70,052,580
'float' object is not subscriptable when testing average in google colab
<p>I want to test whether the average house price is greater than 100000</p> <p>Using Z score because the data sample is large &gt;= 30</p> <pre><code>import math from statsmodels.stats.weightstats import ztest stdev = 16518 alpha = 0.05 null_mean = 100000 Z_Score,p_value = ztest(sample['SalePrice'],value=null_mean,alternative='larger') </code></pre> <p>when i run it using google colab the result is an error but if i use jupiter notebook there is no error. can you guys find the problem?</p>
<p>If <code>sample</code> is a <code>pandas.Series</code> (as opposed to a <code>pandas.DataFrame</code>) then <code>sample['SalesPrice']</code> can be an arbitrary object rather than a collection of arbitrary objects. In your case it seems to be a <code>float</code> so it is not <code>value</code> but <code>x1</code> that your error is likely about.</p> <p>I just tried your code with <code>[1,2,3,4,5]</code> instead of <code>sample['SalePrice']</code> and it worked fine for me.</p>
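<p>A quick way to check this (a sketch, assuming <code>sample</code> is the object you pass in): print the types before calling <code>ztest</code>, and make sure the first argument is a column of values rather than a single number.</p> <pre><code>print(type(sample))                 # pandas.DataFrame expected
print(type(sample['SalePrice']))    # pandas.Series expected; a plain float means
                                    # `sample` is itself a Series/row, not a table
</code></pre>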
python|data-science|google-colaboratory
0
1,908,022
53,615,478
What is more "Pythonic" - dict comprehsion with list generator, or an explicit for loop?
<p>Suppose I have a function with input 'raw_data' raw_data consists of lines such as:</p> <pre><code>key1: str1 key2: str2 ... </code></pre> <p>Where strx is of the form aa:bb:cc:dd... - that is a ':' separated string There is a helper function which does something to the strings, converting them to values, let's call it <code>get_value()</code></p> <p>What would be the most pythonic way to return a dict?</p> <pre><code>to_dict(raw_data): list_data = raw_data.splitlines() return { key.replace(':',''): get_value(str) for key,str in (line.split() for line in list_data)} </code></pre> <p>or </p> <pre><code>to_dict(raw_data): list_data = raw_data.splitlines() mydict = {} for line in list_data: key,str = line.split() key = key.replace(':', '') mydict.update({ key: get_value(str)}) return mydict </code></pre> <p>Or is there some much more pythonic way of doing this?<br> I'm aware this might seem opinion based question, but there seems to be a consensus about what is 'more pythonic' or 'less pythonic' way of doing things, I just don't know what the consensus is in this case.</p>
<p>How about:</p> <pre><code>{k: get_value(v.strip()) for l in raw_data.splitlines() for (k, v) in [l.split(":", 1)]} </code></pre> <p>The <code>1</code> limits the split to the first colon, so values that themselves contain colons (like <code>aa:bb:cc</code>) stay intact.</p>
python
2
1,908,023
46,024,536
Cannot get table data - HTML
<p>I am trying to get the 'Earnings Announcements table' from: <a href="https://www.zacks.com/stock/research/amzn/earnings-announcements" rel="nofollow noreferrer">https://www.zacks.com/stock/research/amzn/earnings-announcements</a></p> <p>I am using different beautifulsoup options but none get the table.</p> <pre><code>table = soup.find('table', attrs={'class': 'earnings_announcements_earnings_table'}) table = soup.find_all('table') </code></pre> <p>When I inspect the table, the elements of the table are there.</p> <p>I am pasting a portion of the code I am getting for the table (js, json?).</p> <pre><code>document.obj_data = { "earnings_announcements_earnings_table" : [ [ "10/26/2017", "9/2017", "$0.06", "--", "--", "--", "--" ] , [ "7/27/2017", "6/2017", "$1.40", "$0.40", "&lt;div class=\"right neg negative neg_icon showinline down\"&gt;-1.00&lt;/div&gt;", "&lt;div class=\"right neg negative neg_icon showinline down\"&gt;-71.43%&lt;/div&gt;", "After Close" ] , [ "4/27/2017", "3/2017", "$1.03", "$1.48", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+0.45&lt;/div&gt;", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+43.69%&lt;/div&gt;", "After Close" ] , [ "2/2/2017", "12/2016", "$1.40", "$1.54", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+0.14&lt;/div&gt;", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+10.00%&lt;/div&gt;", "After Close" ] , [ "10/27/2016", "9/2016", "$0.85", "$0.52", "&lt;div class=\"right neg negative neg_icon showinline down\"&gt;-0.33&lt;/div&gt;", "&lt;div class=\"right neg negative neg_icon showinline down\"&gt;-38.82%&lt;/div&gt;", "After Close" ] , [ "7/28/2016", "6/2016", "$1.14", "$1.78", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+0.64&lt;/div&gt;", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+56.14%&lt;/div&gt;", "After Close" ] , [ "4/28/2016", "3/2016", "$0.61", "$1.07", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+0.46&lt;/div&gt;", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+75.41%&lt;/div&gt;", "After Close" ] , [ "1/28/2016", "12/2015", "$1.61", "$1.00", "&lt;div class=\"right neg negative neg_icon showinline down\"&gt;-0.61&lt;/div&gt;", "&lt;div class=\"right neg negative neg_icon showinline down\"&gt;-37.89%&lt;/div&gt;", "After Close" ] , [ "10/22/2015", "9/2015", "-$0.1", "$0.17", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+0.27&lt;/div&gt;", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+270.00%&lt;/div&gt;", "After Close" ] , [ "7/23/2015", "6/2015", "-$0.15", "$0.19", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+0.34&lt;/div&gt;", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+226.67%&lt;/div&gt;", "After Close" ] , [ "4/23/2015", "3/2015", "-$0.13", "-$0.12", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+0.01&lt;/div&gt;", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+7.69%&lt;/div&gt;", "After Close" ] , [ "1/29/2015", "12/2014", "$0.24", "$0.45", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+0.21&lt;/div&gt;", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+87.50%&lt;/div&gt;", "After Close" ] , [ "10/23/2014", "9/2014", "-$0.73", "-$0.95", "&lt;div class=\"right neg negative neg_icon showinline down\"&gt;-0.22&lt;/div&gt;", "&lt;div class=\"right neg negative neg_icon showinline down\"&gt;-30.14%&lt;/div&gt;", "After Close" ] , [ 
"7/24/2014", "6/2014", "-$0.13", "-$0.27", "&lt;div class=\"right neg negative neg_icon showinline down\"&gt;-0.14&lt;/div&gt;", "&lt;div class=\"right neg negative neg_icon showinline down\"&gt;-107.69%&lt;/div&gt;", "After Close" ] , [ "4/24/2014", "3/2014", "$0.22", "$0.23", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+0.01&lt;/div&gt;", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+4.55%&lt;/div&gt;", "After Close" ] , [ "1/30/2014", "12/2013", "$0.68", "$0.51", "&lt;div class=\"right neg negative neg_icon showinline down\"&gt;-0.17&lt;/div&gt;", "&lt;div class=\"right neg negative neg_icon showinline down\"&gt;-25.00%&lt;/div&gt;", "After Close" ] , [ "10/24/2013", "9/2013", "-$0.09", "-$0.09", "&lt;div class=\"right pos_na showinline\"&gt;0.00&lt;/div&gt;", "&lt;div class=\"right pos_na showinline\"&gt;0.00%&lt;/div&gt;", "After Close" ] , [ "7/25/2013", "6/2013", "$0.04", "-$0.02", "&lt;div class=\"right neg negative neg_icon showinline down\"&gt;-0.06&lt;/div&gt;", "&lt;div class=\"right neg negative neg_icon showinline down\"&gt;-150.00%&lt;/div&gt;", "After Close" ] , [ "4/25/2013", "3/2013", "$0.10", "$0.18", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+0.08&lt;/div&gt;", "&lt;div class=\"right pos positive pos_icon showinline up\"&gt;+80.00%&lt;/div&gt;", "After Close" ] , [ "1/29/2013", "12/2012", "$0.28", "$0.21", "&lt;div class=\"right neg negative neg_icon showinline down\"&gt;-0.07&lt;/div&gt;", "&lt;div class=\"right neg negative neg_icon showinline down\"&gt;-25.00%&lt;/div&gt;", "After Close" ] , [ "10/25/2012", "9/2012", "-$0.08", "-$0.23", "&lt;div class=\"right neg negative neg_icon showinline down\"&gt;-0.15&lt;/div&gt;", "&lt;div class=\"right neg negative neg_icon showinline down\"&gt;-187.50%&lt;/div&gt;", "After Close" ] , [ "7/26/2012", "6/2012", "--", "--", "--", "--", "After Close" ] , [ "4/26/2012", "3/2012", "--", "--", "--", "--", "After Close" ] , [ "1/31/2012", "12/2011", "--", "--", "--", "--", "After Close" ] , [ "10/25/2011", "9/2011", "--", "--", "--", "--", "After Close" ] , [ "7/26/2011", "6/2011", "--", "--", "--", "--", "After Close" ] , [ "4/26/2011", "3/2011", "--", "--", "--", "--", "--" ] , [ "1/27/2011", "12/2010", "--", "--", "--", "--", "After Close" ] , [ "10/21/2010", "9/2010", "--", "--", "--", "--", "After Close" ] , [ "7/22/2010", "6/2010", "--", "--", "--", "--", "After Close" ] , [ "4/22/2010", "3/2010", "--", "--", "--", "--", "After Close" ] , [ "1/28/2010", "12/2009", "--", "--", "--", "--", "After Close" ] , [ "10/22/2009", "9/2009", "--", "--", "--", "--", "After Close" ] , [ "7/23/2009", "6/2009", "--", "--", "--", "--", "After Close" ] ] </code></pre> <p>How could I get this table? Thanks!</p>
<p>So the solution is to parse the whole HTML document using Python's string and RegExp functions instead of BeautifulSoup because we are not trying to get the data from HTML tags but instead we want to get them inside a JS code.</p> <p>So this code basically, get the JS array inside "earnings_announcements_earnings_table" and since the JS Array is the same as Python's list structure, I just parse it using ast. The result is a list were you can loop into and it shows all data from all the pages of the table.</p> <pre><code>import urllib2 import re import ast user_agent = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:12.0) Gecko/20100101 Firefox/12.0'} req = urllib2.Request('https://www.zacks.com/stock/research/amzn/earnings-announcements', None, user_agent) source = urllib2.urlopen(req).read() compiled = re.compile('"earnings_announcements_earnings_table"\s+\:', flags=re.IGNORECASE | re.DOTALL) match = re.search(compiled, source) if match: source = source[match.end(): len(source)] compiled = re.compile('"earnings_announcements_webcasts_table"', flags=re.IGNORECASE | re.DOTALL) match = re.search(compiled, source) if match: source = source[0: match.start()] result = ast.literal_eval(str(source).strip('\r\n\t, ')) print result </code></pre> <p>Let me know if you need clarifications.</p>
html|python-2.7|beautifulsoup
0
1,908,024
54,890,923
super() has not attribute parent method
<p>EDIT: This error is specific to AWS Lambda</p> <p>Hello I cannot figure out why I am not able to call a parent's method from a child's method.</p> <p>I have a parent class defined in <code>classes/IDKKKK.py</code></p> <pre><code>class IDKKKK: def foobar(self, foo, bar): return { 'foo': foo, 'bar': bar} def foobar2(self, fo, ob, ar): return {'foobar': fo+ob+ar} </code></pre> <p>And I have a child class defined in <code>classes/OMGGG.py</code></p> <pre><code>from classes.IDKKKK import IDKKKK class OMGGG(IDKKKK): def childFoo(self): idc = {} return super().foobar(idc, super().foobar2('idk', ' what is ', 'going on')) </code></pre> <p>I create an instance of <code>OMGGG</code> and call <code>childFoo()</code> and I receive <code>super() has no attribute 'foobar'</code> in my <code>main.py</code></p> <pre><code>from classes.OMGGG import OMGGG omg = OMG() print(omg.childfoo()) </code></pre> <p>I am using python 3.7 so <code>super()</code> should work, however I tried</p> <p><code>super(OMGGG, self).foobar(...</code></p> <p>to no avail.</p> <p>Not quite sure what I am doing wrong. I am thinking I might be importing it incorrectly?</p> <p>Edit: it appears I forgot to add self. This was an error in translation.</p>
<p>Your <code>childFoo</code> method needs to take <code>self</code> as a parameter:</p> <pre><code>def childFoo(self): ... </code></pre>
python|oop|aws-lambda
3
1,908,025
33,480,260
pandas drop row below each row containing an 'na'
<p>i have a dataframe with, say, 4 columns <code>[['a','b','c','d']]</code>, to which I add another column <code>['total']</code> containing the sum of all the other columns for each row. I then add another column <code>['growth of total']</code> with the growth rate of the total.</p> <p>some of the values in <code>[['a','b','c','d']]</code> are blank, rendering the <code>['total']</code> column invalid for these rows. I can easily get rid of these rows with df.dropna(how='any').</p> <p>However, my growth rate will be invalid not only for rows with missing values in <code>[['a','b','c','d']]</code>, but also for the following row. How do I drop all these rows?</p>
<p>Here's one option that I think does what you're looking for:</p> <pre><code>In [76]: df = pd.DataFrame(np.arange(40).reshape(10,4)) In [77]: df.ix[1,2] = np.nan In [78]: df.ix[6,1] = np.nan In [79]: df['total'] = df.sum(axis=1, skipna=False) In [80]: df Out[80]: 0 1 2 3 total 0 0 1 2 3 6 1 4 5 NaN 7 NaN 2 8 9 10 11 38 3 12 13 14 15 54 4 16 17 18 19 70 5 20 21 22 23 86 6 24 NaN 26 27 NaN 7 28 29 30 31 118 8 32 33 34 35 134 9 36 37 38 39 150 In [81]: df['growth'] = df['total'].iloc[1:] - df['total'].values[:-1] In [82]: df Out[82]: 0 1 2 3 total growth 0 0 1 2 3 6 NaN 1 4 5 NaN 7 NaN NaN 2 8 9 10 11 38 NaN 3 12 13 14 15 54 16 4 16 17 18 19 70 16 5 20 21 22 23 86 16 6 24 NaN 26 27 NaN NaN 7 28 29 30 31 118 NaN 8 32 33 34 35 134 16 9 36 37 38 39 150 16 </code></pre>
python|pandas
1
1,908,026
73,826,504
Two constraints setting together in optimization problem
<p>I am working on an optimization problem, and facing difficulty setting up two constraints together in Python. Hereunder, I am simplifying my problem by calculation of area and volume. Only length can be changed, other parameters should remain the same.</p> <p>Constraint 1: Maximum area should be 40000m2 Constraint 2: Minimum volume should be 50000m3</p> <p>Here, I can set values in dataframe by following both constraints one-by-one, how to modify code so that both constraints (1 &amp; 2) should meet given requirements? Many Thanks for your time and support!</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'Name': ['A', 'B', 'C', 'D'], 'Length': [1000, 2000, 3000, 5000], 'Width': [5, 12, 14, 16], 'Depth': [15, 10, 15, 18]}) area = (df['Length'])*(df['Width']) volume = (df['Length'])*(df['Width'])*(df['Depth']) print(area) print(volume) #Width and Depth are constants, only Length can be change #Constraint 1: Maximum area should be 40000m2 #Calculation of length parameter by using maximum area, with other given parameters Constraint_length_a = 40000/ df['Width'] #Constraint 2: Minimum volume should be 50000m3 #Calculation of length parameter by using minimum area, with other given parameters Constraint_length_v = 50000/ ((df['Width'])*(df['Depth'])) #Setting Length values considering constraint 1 df.at[0, 'Length']=Constraint_length_a[0] df.at[1, 'Length']=Constraint_length_a[1] df.at[2, 'Length']=Constraint_length_a[2] df.at[2, 'Length']=Constraint_length_a[3] #Setting Length values considering constraint 2 df.at[0, 'Length']=Constraint_length_v[0] df.at[1, 'Length']=Constraint_length_v[1] df.at[2, 'Length']=Constraint_length_v[2] df.at[2, 'Length']=Constraint_length_v[3] </code></pre>
<p>I believe the code below solves the current problem you are facing. If I can help any further let me know.</p> <pre><code>import pandas as pd
import numpy as np

df = pd.DataFrame({'Name': ['A', 'B', 'C', 'D'],
                   'Length': [1000, 2000, 3000, 5000],
                   'Width': [5, 12, 14, 16],
                   'Depth': [15, 10, 15, 18]})

area = (df['Length'])*(df['Width'])
volume = (df['Length'])*(df['Width'])*(df['Depth'])

def constraint1(df, col, n):
    df.loc[:n,'lenght'] = 40000 / df.loc[:n, col]
    df.drop('Length', axis=1, inplace=True)
    return df

def constraint2(df, col, col1, n):
    df.loc[:n, 'lenght'] = 50000/ ((df.loc[:n,col])*(df.loc[:n,col1]))
    df.drop('Length', axis=1, inplace=True)
    return df
</code></pre> <p>If you want to apply it across the whole column, then:</p> <pre><code>def constraint1a(df, col):
    df['lenght'] = 40000 / df[col]
    df.drop('Length', axis=1, inplace=True)
    return df

def constraint2a(df, col, col1):
    df['lenght'] = 50000/ ((df[col])*(df[col1]))
    df.drop('Length', axis=1, inplace=True)
    return df

df = constraint1(df, 'Width', 3)
df1 = constraint2(df, 'Width','Depth', 3)
df2 = constraint1a(df, 'Width')
df3 = constraint2a(df, 'Width','Depth')
</code></pre> <p>Adding the conditions I left out the first time:</p> <pre><code>def constraint1(df, col, col1):
    l = []
    for x, w in zip(df[col], df[col1]):
        if x &gt; 40000:
            l.append(40000 / w)
        else:
            l.append(x)
    df[col] = l
    return df

def constraint2(df, col, col1, col2):
    l = []
    for x, w, d in zip(df[col], df[col1], df[col2]):
        if x &lt;= 50000:
            l.append(50000 / (w*d))
        else:
            l.append(x)
    df[col] = l
    return df

df1 = constraint1(df, 'Length', 'Width')
df2 = constraint2(df, 'Length', 'Width', 'Depth')
</code></pre>
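<p>For what it's worth, both limits can also be applied together in one vectorized step with <code>clip</code> — a sketch, assuming the lower bound implied by the volume requirement never exceeds the upper bound implied by the area limit for any row:</p> <pre><code>lower = 50000 / (df['Width'] * df['Depth'])  # minimum length so that volume reaches 50000
upper = 40000 / df['Width']                  # maximum length so that area stays within 40000
df['Length'] = df['Length'].clip(lower=lower, upper=upper)
</code></pre>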
python|pandas|dataframe|optimization|constraints
0
1,908,027
73,534,082
it keeps saying my variable inside the function is not defined can someone help me with this
<pre><code>def draw_card(): randomrd_drawn_int = random.randint(1,20) if card_drawn_int == [1,2,3,4,5,6,7,8,9,10,11,12]: card_drawn = ['Mechanized Infantry'] elif card_drawn_int == [13,14,15,16,17]: card_drawn = ['STRV 122'] elif card_drawn_int == [18,19,20]: card_drawn = ['JAS 37'] return card_drawn print(card_drawn) </code></pre> <p>here is the syntax error by the way</p> <pre><code>NameError: Traceback (most recent call last) Input In [15], in &lt;cell line: 1&gt;() ----&gt; 1 print(card_drawn) NameError: name 'card_drawn' is not defined </code></pre>
<p><code>card_drawn</code> is only defined inside the function <code>draw_card</code>, within the <code>if</code> branches, so it does not exist at the top level where you call <code>print(card_drawn)</code>.</p> <pre class="lang-py prettyprint-override"><code>import random

def draw_card():
    card_drawn_int = random.randint(1,20)
    card_drawn = None
    if card_drawn_int in [1,2,3,4,5,6,7,8,9,10,11,12]:
        card_drawn = ['Mechanized Infantry']
    elif card_drawn_int in [13,14,15,16,17]:
        card_drawn = ['STRV 122']
    elif card_drawn_int in [18,19,20]:
        card_drawn = ['JAS 37']
    print(card_drawn)
    return card_drawn
</code></pre> <p>I have rewritten the function so it works. Note that the random value has to be stored in <code>card_drawn_int</code> (your code assigns it to <code>randomrd_drawn_int</code> and then tests a name that was never defined), and the comparisons should use <code>in</code> rather than <code>==</code>, because the drawn value is a single integer, not a list.</p> <p>You need to learn about scope in Python: <a href="https://www.w3schools.com/PYTHON/python_scope.asp" rel="nofollow noreferrer">https://www.w3schools.com/PYTHON/python_scope.asp</a></p>
python|function|variables|return
2
1,908,028
12,798,095
Getting 405 Method Not Allowed while using POST method in bottle
<p>I am developing one simple code for force download now problem is that i'm not getting any error in GET method but getting error "405 Method Not Allowed" in post method request. My code for GET method.</p> <pre><code>@route('/down/&lt;filename:path&gt;',method=['GET', 'POST']) def home(filename): key = request.get.GET('key') if key == "tCJVNTh21nEJSekuQesM2A": return static_file(filename, root='/home/azoi/tmp/bottle/down/', download=filename) else: return "File Not Found" </code></pre> <p>When i request with key it is returning me file for download when it is get method <a href="http://mydomain.com/down/xyz.pdf?key=tCJVNTh21nEJSekuQesM2A" rel="noreferrer">http://mydomain.com/down/xyz.pdf?key=tCJVNTh21nEJSekuQesM2A</a></p> <p>Now i used another code for handling POST methods</p> <pre><code>@route('/down/&lt;filename:path&gt;',method=['GET', 'POST']) def home(filename): key = request.body.readline() if key == "tCJVNTh21nEJSekuQesM2A": return static_file(filename, root='/home/azoi/tmp/bottle/down/', download=filename) else: return "File Not Found" </code></pre> <p>Now by using this code i cannot handle post method i.e. i am getting 405 Method Not Allowed error from server.</p> <p>Any solution for this ?</p>
<p>The router takes only one method in the <code>method</code> parameter, not a list of methods. Use several <code>@route</code> decorators instead:</p> <pre><code>@route('/down/&lt;filename:path&gt;', method='GET')
@route('/down/&lt;filename:path&gt;', method='POST')
def home(filename):
    pass
</code></pre> <p>Check the documentation for more information: <a href="http://bottlepy.org/docs/dev/routing.html#routing-order" rel="noreferrer">http://bottlepy.org/docs/dev/routing.html#routing-order</a></p> <p><strong>UPDATE</strong></p> <p>Recent Bottle versions allow specifying a list of methods: <a href="http://bottlepy.org/docs/dev/api.html#bottle.Bottle.route" rel="noreferrer">http://bottlepy.org/docs/dev/api.html#bottle.Bottle.route</a></p>
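<p>Note also that <code>request.body.readline()</code> gives you the raw request body, not the value of a field. If the key is sent either as a query parameter or as a form field named <code>key</code>, a sketch of the handler could look like this (assuming that is how the client sends it):</p> <pre><code>@route('/down/&lt;filename:path&gt;', method=['GET', 'POST'])
def home(filename):
    key = request.GET.get('key') or request.forms.get('key')
    if key == "tCJVNTh21nEJSekuQesM2A":
        return static_file(filename, root='/home/azoi/tmp/bottle/down/', download=filename)
    return "File Not Found"
</code></pre>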
python|bottle
12
1,908,029
13,126,910
How can I run Instruments from Python?
<p>Anyone know how to run Instruments from Python? I tried to use os.system and it didn't work. </p> <p>If I run Instruments from a command line, I only need to run: </p> <pre><code>instruments -w id -t xxxxxxxxxxxxxx xx.js </code></pre> <p>I will need to run the above in python. I suppose the following will work</p> <pre><code>import os
os.system('instruments -w id -t xxxxx xx.js')
</code></pre> <p>I also tried with os.system ('open -a instruments xxxxxx') </p> <p>I expected it to run instruments just like running it from command line. And start to run JavaScript scripts using Instruments. It didn't happen. What happened was just a 256 printed out. </p>
<p>It's hard to tell from your code snippet because you might have cut a lot out to be brief, but it looks like you are invoking the command for instruments incorrectly. Here's a line-broken example:</p> <pre><code>instruments \ -D [trace document to write] \ -t [Automation Trace Template] \ [Your App Bundle] \ -e UIARESULTSPATH [where results should be written] \ -e UIASCRIPT [your actual script file </code></pre> <p>For a full example of how to run Instruments from the command line, check out my <a href="https://github.com/jonathanpenn/AutomationExample/blob/master/run_automation.sh" rel="nofollow">demo repo</a>.</p> <p>That has a shell script that walks through how it works to invoke Instruments from the command line. You can use that as the basis for launching from Python.</p> <p>Also, I include a copy of my <code>unix_instruments</code> wrapper script. Instruments doesn't return a non-zero status code if automation scripts log failures, so this wrapper script keeps an eye on all the log output and returns a non-zero status code for you. How to use it is all in the repo, too.</p>
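<p>To drive that same command line from Python, a sketch using <code>subprocess</code> — every path and the device id below are placeholders to replace with your own values:</p> <pre><code>import subprocess

cmd = [
    "instruments",
    "-w", "YOUR_DEVICE_ID",                     # placeholder device identifier
    "-D", "output.trace",                       # trace document to write
    "-t", "/path/to/Automation.tracetemplate",  # placeholder template
    "/path/to/YourApp.app",                     # placeholder app bundle
    "-e", "UIASCRIPT", "xx.js",
    "-e", "UIARESULTSPATH", "results/",
]
status = subprocess.call(cmd)
print(status)
</code></pre>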
python|instruments|ios-ui-automation|xcode-instruments
0
1,908,030
24,563,475
Why does python multiprocessing pickle objects to pass objects between processes?
<p>Why does the <code>multiprocessing</code> package for python <code>pickle</code> objects to pass them between processes, i.e. to return results from different processes to the main interpreter process? This may be an incredibly naive question, but why can't process A say to process B "object x is at point y in memory, it's yours now" without having to perform the operation necessary to represent the object as a string. </p>
<p><code>multiprocessing</code> runs jobs in different processes. Processes have their own independent memory spaces, and in general cannot share data through memory.</p> <p>To make processes communicate, you need some sort of channel. One possible channel would be a "shared memory segment", which pretty much is what it sounds like. But it's more common to use "serialization". I haven't studied this issue extensively but my guess is that the shared memory solution is too tightly coupled; serialization lets processes communicate without letting one process cause a fault in the other.</p> <p>When data sets are really large, and speed is critical, shared memory segments may be the best way to go. The main example I can think of is video frame buffer image data (for example, passed from a user-mode driver to the kernel or vice versa).</p> <p><a href="http://en.wikipedia.org/wiki/Shared_memory" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Shared_memory</a></p> <p><a href="http://en.wikipedia.org/wiki/Serialization" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Serialization</a></p> <p>Linux, and other *NIX operating systems, provide a built-in mechanism for sharing data via serialization: "domain sockets" This should be quite fast.</p> <p><a href="http://en.wikipedia.org/wiki/Unix_domain_socket" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Unix_domain_socket</a></p> <p>Since Python has <code>pickle</code> that works well for serialization, <code>multiprocessing</code> uses that. <code>pickle</code> is a fast, binary format; it should be more efficient in general than a serialization format like XML or JSON. There are other binary serialization formats such as Google Protocol Buffers.</p> <p>One good thing about using serialization: it's about the same to share the work within one computer (to use additional cores) or to share the work between multiple computers (to use multiple computers in a cluster). The serialization work is identical, and network sockets work about like domain sockets.</p> <p>EDIT: @Mike McKerns said, in a comment below, that <code>multiprocessing</code> can use shared memory sometimes. I did a Google search and found this great discussion of it: <a href="https://stackoverflow.com/questions/14124588/python-multiprocessing-shared-memory">Python multiprocessing shared memory</a></p>
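<p>To make the shared-memory point concrete, here is a minimal sketch using the <code>multiprocessing</code> primitives that really do share memory (<code>Value</code> and <code>Array</code>), so the payload itself is not pickled:</p> <pre><code>from multiprocessing import Process, Value, Array

def work(counter, data):
    # both objects live in a shared memory segment visible to parent and child
    with counter.get_lock():
        counter.value += 1
    for i in range(len(data)):
        data[i] *= 2

if __name__ == "__main__":
    counter = Value("i", 0)              # shared int
    data = Array("d", [1.0, 2.0, 3.0])   # shared array of doubles
    p = Process(target=work, args=(counter, data))
    p.start()
    p.join()
    print(counter.value, list(data))     # 1 [2.0, 4.0, 6.0]
</code></pre>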
python|multiprocessing|pickle
7
1,908,031
41,077,784
AttributeError: 'tuple' object has no attribute 'collidepoint'
<p>I recently started with python and pygame. my first project is to create a game where you have to click on blocks that randomly appear on the screen. After hours of research and problem solving i haven't come up with a solution. I guess I'm not that good at customizing other answers to my code. Here's the code:</p> <pre><code>import pygame import time import random pygame.init() game_width = 800 game_height = 600 black = (0,0,0) white = (255,255,255) red = (255,0,0) green = (0,255,0) blue = (0,0,255) gameDisplay = pygame.display.set_mode((game_width, game_height)) pygame.display.set_caption('Click It') clock = pygame.time.Clock() def text_objects(text, font): textSurface = font.render(text, True, red) return textSurface, textSurface.get_rect() def message_display(text): largeText = pygame.font.Font('freesansbold.ttf',115) TextSurf, TextRect = text_objects(text, largeText) TextRect.center = ((display_width/2),(display_height/2)) gameDisplay.blit(TextSurf, TextRect) pygame.display.update() time.sleep(2) game_loop() def failed(): message_diplay('You Failed') def game_loop(): thing_height = 100 thing_width = 100 thing_startx = random.randrange(0, game_width-thing_width) thing_starty = random.randrange(0, game_height-thing_height) rectangle = (thing_startx, thing_starty, thing_width, thing_height) def things(thingx, thingy, thingw, thingh, color): pygame.draw.rect(gameDisplay, color, [thingx, thingy, thingw, thingh]) failed = False while not failed: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() quit() gameDisplay.fill(white) things(thing_startx, thing_starty, thing_width, thing_height, blue ) for event in pygame.event.get(): if event.type == pygame.MOUSEBUTTONDOWN: click = rectangle.collidepoint(pygame.mouse.get_pos()) if click == 1: print ('Clicked!!') pygame.display.update() clock.tick(60) game_loop() pygame.quit() quit() </code></pre>
<p>You're creating a tuple and are trying to use the method <code>collidepoint</code> (which tuples don't have). You're probably intending to use a <a href="https://www.pygame.org/docs/ref/rect.html#pygame.Rect" rel="nofollow noreferrer"><code>Rect</code></a> object instead, so change the line</p> <pre><code>rectangle = (thing_startx, thing_starty, thing_width, thing_height) </code></pre> <p>to</p> <pre><code>rectangle = pygame.Rect(thing_startx, thing_starty, thing_width, thing_height) </code></pre>
python|python-3.x|pygame
2
1,908,032
30,841,416
Python text mining
<p>I have a function get_numbers('X') that runs a Bing search to find a contact number for 'X', i.e. get_numbers('Google') would return a customer service contact number. I want to extend the search by running Bing search on different forms of the company name. And then run a for loop to run get_numbers on all the versions of the name.</p> <pre><code>def company_names(company): etc =['','ltd','plc', 'inc'] names = [ '{} {}'.format(company,i) for i in etc ] return names def get_more_numbers(company): company = company_names(company) for i in company: name = company[i] get_numbers(name) </code></pre> <p>I'm getting the error: </p> <pre><code> File "&lt;ipython-input-22-716ce1744cc0&gt;", line 5, in get_more_numbers name = company[i] TypeError: list indices must be integers, not str </code></pre>
<p>You cannot use strings as list indices. Your loop is a for-each over the list, so <code>i</code> already contains the name, not the index. You can remove this line.</p> <pre><code>name = company[i] </code></pre> <p>And replace the next line with <code>get_numbers(i)</code>.</p>
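<p>Putting it together, the loop can simply iterate over the generated names (<code>get_numbers</code> being your existing function):</p> <pre><code>def get_more_numbers(company):
    for name in company_names(company):
        get_numbers(name)
</code></pre>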
python
1
1,908,033
40,283,796
TypeError: unsupported operand type(s) for -: 'int' and 'str' (python)
<p>I really need help here. I am very new to python (in fact I started yesterday) and I keep getting this message:</p> <blockquote> <p><strong>TypeError: unsupported operand type(s) for -: 'int' and 'str'</strong></p> </blockquote> <p>when trying this:</p> <pre><code>age = input() year = (2016-age) print (year) </code></pre> <p>Please make your answers simple because I'm new.</p>
<p>The <code>input</code> function returns a string, so you have a string in your <code>age</code> variable. You cannot subtract a string from an integer, so you have to convert your string into an integer with the <code>int</code> function.</p> <pre><code>age = int(input()) </code></pre>
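<p>The complete corrected snippet then reads:</p> <pre><code>age = int(input())
year = 2016 - age
print(year)
</code></pre>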
python
4
1,908,034
40,195,275
How to perform html markups on site in Django
<p>Recently I've started to learn Django, I've decided to make a sample blog website. I've made Post model which creates and publish post. But there's a problem, I've no idea how to attach html markups to my Post object's attribute for instance "text" e.g I want to bold my text, but instead <strong>text</strong>, I see "<code>&lt;b&gt;text&lt;/b&gt;</code>". Here is how I've made Post model:</p> <pre><code>from django.db import models from django.utils import timezone class Post(models.Model): author = models.ForeignKey('auth.User') title = models.CharField(max_length=200) introduction = models.TextField() text = models.TextField() created_date = models.DateTimeField( default=timezone.now) published_date = models.DateTimeField( blank=True, null=True) def publish(self): self.published_date = timezone.now() self.save() def __str__(self): return self.title </code></pre>
<p>On your template file use the <a href="https://docs.djangoproject.com/en/dev/ref/templates/builtins/#safe" rel="nofollow"><code>safe</code></a> filter like this:</p> <pre><code>&lt;h1&gt;{{post.title | safe}}&lt;/h1&gt; </code></pre>
python|html|django
2
1,908,035
8,402,838
Jumping to end of python output buffer in emacs
<p>When using python-mode, py-execute-buffer puts the output in a <em>Python Output</em> buffer. I'm nearly always interested in seeing the end of that output, not the beginning. How can I configure emacs so that it automatically jumps to the bottom of the buffer, instead of starting at the top, when it first appears?</p>
<p>I don't see any hooks for that, but it can be done with some advising. This code attaches an idle timer with a 0 timeout to <code>py-postprocess-output-buffer</code>, so that it is executed after output postprocessing is done and control is given back to the user:</p> <pre><code>(defadvice py-postprocess-output-buffer (after my-py-postprocess-output-buffer activate)
  (run-with-idle-timer 0 nil
                       (lambda ()
                         (let ((output-win (get-buffer-window py-output-buffer))
                               (orig-win (selected-window)))
                           (when output-win
                             (select-window output-win)
                             (end-of-buffer)
                             (select-window orig-win))))))
</code></pre>
python|emacs
1
1,908,036
58,675,618
No monad in Python?
<p>Please help understand why set(first).update(second) does not work in Python.</p> <pre><code>&gt;&gt;&gt; names1 = ["Ava", "Emma", "Olivia"] &gt;&gt;&gt; names2 = ["Olivia", "Sophia", "Emma"] &gt;&gt;&gt; &gt;&gt;&gt; sn1=set(names1) &gt;&gt;&gt; sn1.update(names2) &gt;&gt;&gt; sn1 {'Sophia', 'Emma', 'Ava', 'Olivia'} &gt;&gt;&gt; sn1=set(names1).update(names2) &gt;&gt;&gt; sn1 (Nothing displayed) </code></pre> <h1>Update</h1> <p>As in the comment, it had nothing to do with monad. The question was if there was a way to get the result of chained transformations on a object in one line.</p>
<p>In the second example, <code>sn1</code> is set to the return value of the <code>update</code> method (which is <code>None</code>), not the set returned by <code>set(names1)</code>. </p> <p>Starting in Python 3.8, you can do something like you are trying using assignment expressions.</p> <pre><code>&gt;&gt;&gt; names1 = ["Ava", "Emma", "Olivia"] &gt;&gt;&gt; names2 = ["Olivia", "Sophia", "Emma"] &gt;&gt;&gt; (sn1 := set(names1)).update(names2) &gt;&gt;&gt; sn1 {'Sophia', 'Olivia', 'Emma', 'Ava'} </code></pre>
python
2
1,908,037
52,321,121
Failed to filter rows containing a specific value in the index column after resetting index
<p>I'm organizing data of a number of plans, which contains the information of the phase of the plan, P(Preliminary) or F(Final). I'm using the methods shown in the <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.filter.html" rel="nofollow noreferrer">examples</a> in the pandas documentation.</p> <pre><code>df1 = pd.read_excel('FilePath', sheetname = 'ForFilter') df1 landuse_SUB_ID TYPE RECD_DATE PHASE LAND_USE CPACTIONDA 0 24 1 2000-04-07 P ROW 2000-05-04 1 24 1 2000-04-07 P NONE 2000-05-04 2 25 1 2000-08-10 P COMM 2000-09-08 3 34 1 2000-04-14 F REC 2000-04-14 4 34 1 2000-04-14 F SFD 2000-04-14 5 35 1 2000-01-20 P NONE 2000-02-02 6 42 1 2000-04-04 P SFD 2000-05-01 7 42 1 2000-12-06 P SFD 2001-01-03 8 43 1 2000-09-07 P NONE 2000-09-21 9 51 1 2000-11-10 P NONE 2000-11-28 10 53 1 2000-02-22 F SFD 2000-02-22 </code></pre> <p>After playing with the methods in the example (using <code>like</code> and <code>regex</code>), it seems to me that these methods can only filter the values in the index column. Therefore I changed the index:</p> <pre><code>df1_filter1 = df1.set_index('PHASE') landuse_SUB_ID TYPE RECD_DATE LAND_USE CPACTIONDA PHASE P 24 1 2000-04-07 ROW 2000-05-04 P 24 1 2000-04-07 NONE 2000-05-04 P 25 1 2000-08-10 COMM 2000-09-08 F 34 1 2000-04-14 REC 2000-04-14 F 34 1 2000-04-14 SFD 2000-04-14 P 35 1 2000-01-20 NONE 2000-02-02 P 42 1 2000-04-04 SFD 2000-05-01 P 42 1 2000-12-06 SFD 2001-01-03 P 43 1 2000-09-07 NONE 2000-09-21 P 51 1 2000-11-10 NONE 2000-11-28 F 53 1 2000-02-22 SFD 2000-02-22 </code></pre> <p>Now the data frame is using <code>Phase</code> as index, I used the <code>like</code> method to filter <code>df1_filter1</code>:</p> <pre><code>df1_filter1.filter(like = 'F', axis = 0) </code></pre> <p>I get the error </p> <blockquote> <p>"ValueError: cannot reindex from a duplicate axis"</p> </blockquote> <p>This seems like a really simple operation to me, so I'm just wondering what I did wrong to have caused this error. And what shall be the best method (fewest steps and cleanest code) for my question.</p>
<p><code>filter</code> may intuitively feel like the right function, but you almost certainly should use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>loc</code></a> to filter your data (on your examples link above, it says "See also: loc" in a big yellow box). For this simple example, you could also use boolean indexing:</p> <pre><code>&gt;&gt;&gt; df1.loc[df1['PHASE'] == 'F'] # or boolean indexing via df1[df1['PHASE'] == 'F'] landuse_SUB_ID TYPE RECD_DATE PHASE LAND_USE CPACTIONDA 3 34 1 2000-04-14 F REC 2000-04-14 4 34 1 2000-04-14 F SFD 2000-04-14 10 53 1 2000-02-22 F SFD 2000-02-22 </code></pre>
python|pandas|filter
4
1,908,038
52,410,962
New to Python, wondering if creating a specific graphic is possible
<p>I'm brand new to Python (as in I just started looking at it today). My only other coding experience is in Matlab and a little bit in R. I can't do what I want to in Matlab, so I'm wondering if Python is the tool I need. I want to make a graphic similar to what is seen here: <a href="https://www.cbc.ca/news/technology/charts-climate-change-bar-codes-1.4802293" rel="nofollow noreferrer">https://www.cbc.ca/news/technology/charts-climate-change-bar-codes-1.4802293</a></p> <p>I have a matrix of weather data which I would use to create the colour values. Is something like this possible in Python, and if so could someone help me with finding some resources to learn how to do so?</p> <p>Thanks!</p>
<p>It seems you're looking to build this kind of visualization, and you could start with the plotting packages in Python. Analogous to ggplot2 in R, Python has the matplotlib and seaborn libraries, which can greatly help you achieve this. Below are some resources you can look at: for matplotlib see <a href="https://matplotlib.org/gallery/index.html" rel="nofollow noreferrer">https://matplotlib.org/gallery/index.html</a> and for seaborn <a href="https://seaborn.pydata.org/examples/index.html" rel="nofollow noreferrer">https://seaborn.pydata.org/examples/index.html</a>.</p> <p>Hope this helps!</p>
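<p>As a concrete starting point, here is a small matplotlib sketch of that "warming stripes" style of chart — the anomaly values are made up and stand in for one series from your weather matrix:</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1900, 2020)
anomalies = np.linspace(-0.5, 1.0, years.size) + np.random.normal(0, 0.2, years.size)

# map each anomaly to a colour and draw one full-height bar per year
norm = (anomalies - anomalies.min()) / (anomalies.max() - anomalies.min())
colors = plt.cm.RdBu_r(norm)

fig, ax = plt.subplots(figsize=(10, 2))
ax.bar(years, height=1, width=1, color=colors)
ax.set_xlim(years[0], years[-1])
ax.set_yticks([])
plt.show()
</code></pre>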
python|graphics
1
1,908,039
63,526,517
how to specify database name in sqlalchemy Teradata connection string?
<p>I am using SQLAlchemy in a Python script. I can run a select query but cannot insert because it doesn't recognize the database name. It seems Teradata doesn't have a schema concept so instead you would say &quot;database some_db&quot;.</p> <pre><code>import sqlalchemy conn_string = 'teradata://' + user + ':' + passw + '@' + host + '/?authentication=LDAP' eng = sqlalchemy.create_engine(conn_string) df.to_sql('db_name.my_table', con = eng, if_exists = 'replace', index=False) </code></pre> <p>This gives me an error</p> <pre><code>The user does not have CREATE TABLE access to database (the default DB) </code></pre> <p>But that's because the DB is not specified, which I'm trying to figure out how to do. I'm able to do a select query like this:</p> <pre><code># execute sql sql = 'select top 10 * from db_name.some_table' result = eng.execute(sql) for x in result: print(x) </code></pre> <p>I tried to do this but it didn't help:</p> <pre><code>eng.execute('database db_name') </code></pre> <p>FYI, this script is using sqlalchemy-teradata. I checked the documentation but it doesn't really say much about this:</p> <p><a href="https://downloads.teradata.com/tools/articles/teradata-sqlalchemy-introduction" rel="nofollow noreferrer">https://downloads.teradata.com/tools/articles/teradata-sqlalchemy-introduction</a></p> <p>I also tried to put the DB into the connection string but it seems to be ignored:</p> <pre><code>conn_string = 'teradata://' + user + ':' + passw + '@' + host + ':1025' + '/database=db_name' + '/?authentication=LDAP' </code></pre>
<p>The <em>tablename</em> in <code>df.to_sql</code> method is always considered an unqualified name. The &quot;schema&quot; concept is mapped to the Teradata &quot;database&quot; object by the sqlalchemy dialect. So just use the <code>schema=</code> parameter:</p> <pre><code>df.to_sql('my_table', schema ='db_name', con = eng, if_exists = 'replace', index=False) </code></pre>
sql|python-3.x|sqlalchemy|teradata
0
1,908,040
39,176,864
Dataframe for HiveContext in spark is not callable
<p>I have the below spark script:</p> <pre><code>from pyspark import SparkContext, SparkConf from pyspark.sql import SQLContext, HiveContext spark_context = SparkContext(conf=SparkConf()) sqlContext = HiveContext(spark_context) outputPartition=sqlContext.sql("select * from dm_mmx_merge.PLAN_PARTITION ORDER BY PARTITION,ROW_NUM") outputPartition.printSchema() outputPartition.filter(outputPartition("partition")==3).show() </code></pre> <p>`</p> <p>I get the output of schema as"</p> <pre><code>root |-- seq: integer (nullable = true) |-- cpo_cpo_id: long (nullable = true) |-- mo_sesn_yr_cd: string (nullable = true) |-- prod_prod_cd: string (nullable = true) |-- cmo_ctry_nm: string (nullable = true) |-- cmo_cmo_stat_ind: string (nullable = true) |-- row_num: integer (nullable = true) |-- partition: long (nullable = true) </code></pre> <p>But i also get the error: <code>Traceback (most recent call last): File "hiveSparkTest.py", line 18, in &lt;module&gt; outputPartition.filter(outputPartition(partition)==3).show() TypeError: 'DataFrame' object is not callable</code></p> <p>I need the get the output for each partition value and do transformation. Any help would be highly appreciable.</p>
<p>In line </p> <pre><code> outputPartition.filter(outputPartition(partition)==3).show() </code></pre> <p>you are trying to use outputPartition as a method. Use</p> <pre><code> outputPartition['partition'] </code></pre> <p>instead of</p> <pre><code> outputPartition(partition) </code></pre>
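<p>With that change, the filtering line from your script becomes:</p> <pre><code>outputPartition.filter(outputPartition['partition'] == 3).show()
</code></pre>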
python|apache-spark|pyspark
2
1,908,041
55,243,614
Return sunrise and sunset time based on a datetime using Python
<p>My <code>df</code> is like this. </p> <pre><code>DateTime Pdc 01/04/2016 10:00 1 01/04/2016 10:05 2 02/04/2016 10:10 3 02/04/2016 10:15 4 03/04/2016 10:20 5 03/04/2016 10:25 6 03/04/2016 10:30 7 </code></pre> <p>I want to add two columns of sunrise time and sunset time based on the <strong>date</strong> on the <code>df</code> like below as an example: </p> <pre><code>DateTime Pdc Sunrise Sunset 01/04/2016 10:00 1 time time 01/04/2016 10:05 2 time time 02/04/2016 10:10 3 time time 02/04/2016 10:15 4 time time 03/04/2016 10:20 5 time time 03/04/2016 10:25 6 time time 03/04/2016 10:30 7 time time </code></pre> <p>I have no idea how to do that. Trying <a href="https://michelanders.blogspot.com/2010/12/calulating-sunrise-and-sunset-in-python.html" rel="nofollow noreferrer">this</a>, I don't know how to implement it without package <code>timezone</code>. <a href="https://rhodesmill.org/skyfield/almanac.html#sunrise-and-sunset" rel="nofollow noreferrer">This</a> return error that I cannot solve. <a href="https://rhodesmill.org/pyephem/" rel="nofollow noreferrer">This</a> works with Python 2, I didn't check it. <a href="https://pypi.org/project/astral/" rel="nofollow noreferrer">This</a> works for cities on the list, but not my cities.</p>
<p>You can retrieve sunset and sunrise time from <a href="https://sunrise-sunset.org/api" rel="nofollow noreferrer">https://sunrise-sunset.org/api</a>. All you need is to know your latitude and longitude. If you don't know them you can retrieve them from <code>geopy</code> lib:</p> <pre><code>import requests from geopy.geocoders import Nominatim location = Nominatim().geocode('Moscow') r = requests.get('https://api.sunrise-sunset.org/json', params={'lat': location.latitude, 'lng': location.longitude}).json()['results'] print('Sunrise:', r['sunrise']) print('Sunset:', r['sunset']) </code></pre> <p>Output:</p> <pre><code>Sunrise: 3:30:56 AM Sunset: 3:42:58 PM </code></pre>
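<p>To attach those times to every row of your frame, one approach is to look the value up once per distinct date — a sketch, assuming day-first dates as in your sample and reusing <code>location</code> from the geopy lookup above:</p> <pre><code>import pandas as pd
import requests

def sun_times(day, lat, lng):
    r = requests.get('https://api.sunrise-sunset.org/json',
                     params={'lat': lat, 'lng': lng, 'date': day.isoformat()}).json()['results']
    return r['sunrise'], r['sunset']

df['DateTime'] = pd.to_datetime(df['DateTime'], dayfirst=True)
times = {d: sun_times(d, location.latitude, location.longitude)
         for d in df['DateTime'].dt.date.unique()}   # one HTTP request per distinct date
df['Sunrise'] = df['DateTime'].dt.date.map(lambda d: times[d][0])
df['Sunset'] = df['DateTime'].dt.date.map(lambda d: times[d][1])
</code></pre>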
python|pandas|datetime|timezone
1
1,908,042
37,348,988
How to get Pysqlcipher to detect openssl during installation
<p>I am trying to Install pysqlcipher to enable me use sqlcipher function in python. While installing pysqlcipher I got an error saying </p> <blockquote> <p>Fatal error: OpenSSL could not be detected!</p> </blockquote> <p>I then tried to install openssl</p> <pre><code>C:\Python27\Scripts&gt;pip install pyopenssl </code></pre> <p>I got a successful message</p> <blockquote> <p>Successfully installed cffi-1.6.0 cryptography-1.3.2 enum34-1.1.6 idna-2.1 ipadd ress-1.0.16 pyasn1-0.1.9 pycparser-2.14 pyopenssl-16.0.0 six-1.10.0</p> </blockquote> <p>I then tried reinstalling pysqlcipher</p> <pre><code>C:\Python27\Scripts&gt;pip install pysqlcipher </code></pre> <p>The same error occurred in the process of installation.</p> <blockquote> <p>Fatal error: OpenSSL could not be detected!</p> </blockquote> <p>How do I get it to detect openssl?</p> <p>Pls note that I'm installing it on Windows 7</p>
<p>1)Installed a prebuilt OpenSSL binary (Win32 OpenSSL v1.0.2d or later) from <a href="https://slproweb.com/products/Win32OpenSSL.html" rel="nofollow">https://slproweb.com/products/Win32OpenSSL.html</a></p> <p>2) Confirm that the OPENSSL_CONF environment variable is set properly in environment variables. See <a href="http://www.computerhope.com/issues/ch000549.htm" rel="nofollow">http://www.computerhope.com/issues/ch000549.htm</a></p>
python-2.7|openssl|sqlcipher|pysqlcipher
2
1,908,043
37,242,259
Cannot configure Python 3.5 to use Visual C++ compiler on Windows
<p>When I was using Python 3.4 I used MinGW to compile modules. Unfortunately in 3.5 MinGW support no longer works. I've installed the correct Visual C++ stuff, but <code>pip</code> still tries to use the MinGW compiler and fails.</p> <p>How do I tell it to use the correct compiler?</p>
<p>Try following:</p> <ul> <li>Install <a href="https://www.microsoft.com/en-us/download/details.aspx?id=48146" rel="nofollow">Visual Studio Community 2015</a> with C++ checked</li> <li>Make sure <a href="http://www.devdungeon.com/content/fix-pip-install-unable-find-vcvarsallbat" rel="nofollow">environment variable</a> for VS is set</li> <li>In <code>PYTHONPATH\Lib\distutils</code> dir create (or edit) file <code>distutils.cfg</code> with following lines:</li> </ul> <pre><code>[build] compiler=msvc </code></pre>
python|windows|configuration|python-3.5
1
1,908,044
31,977,245
How can I efficiently implement multithreading/multiprocessing in a Python web bot?
<p>Let's say I have a web bot written in python that sends data via POST request to a web site. The data is pulled from a text file line by line and passed into an array. Currently, I'm testing each element in the array through a simple for-loop. How can I effectively implement multi-threading to iterate through the data quicker. Let's say the text file is fairly large. Would attaching a thread to each request be smart? What do you think the best approach to this would be?</p> <pre><code>with open("c:\file.txt") as file: dataArr = file.read().splitlines() dataLen = len(open("c:\file.txt").readlines())-1 def test(data): #This next part is pseudo code result = testData('www.example.com', data) if result == 'whatever': print 'success' for i in range(0, dataLen): test(dataArr[i]) </code></pre> <p>I was thinking of something along the lines of this, but I feel it would cause issues depending on the size of the text file. I know there is software that exists which allows the end-user to specify the amount of the threads when working with large amounts of data. I'm not entirely sure of how that works, but that's something I'd like to implement.</p> <pre><code>import threading with open("c:\file.txt") as file: dataArr = file.read().splitlines() dataLen = len(open("c:\file.txt").readlines())-1 def test(data): #This next part is pseudo code result = testData('www.example.com', data) if result == 'whatever': print 'success' jobs = [] for x in range(0, dataLen): thread = threading.Thread(target=test, args=(dataArr[x])) jobs.append(thread) for j in jobs: j.start() for j in jobs: j.join() </code></pre>
<p>This sounds like a recipe for <code>multiprocessing.Pool</code></p> <p>See here: <a href="https://docs.python.org/2/library/multiprocessing.html#introduction" rel="nofollow">https://docs.python.org/2/library/multiprocessing.html#introduction</a></p> <pre><code>from multiprocessing import Pool

def test(num):
    if num%2 == 0:
        return True
    else:
        return False

if __name__ == "__main__":
    list_of_datas_to_test = [0, 1, 2, 3, 4, 5, 6, 7, 8]
    p = Pool(4) # create 4 processes to do our work
    print(p.map(test, list_of_datas_to_test)) # distribute our work
</code></pre> <p>Output looks like:</p> <pre><code>[True, False, True, False, True, False, True, False, True]
</code></pre>
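<p>Since the work in your case is network-bound (each item triggers a POST request), the same <code>Pool</code> interface is also available backed by threads via <code>multiprocessing.dummy</code> — a sketch reusing the placeholder <code>testData</code> call from your question:</p> <pre><code>from multiprocessing.dummy import Pool as ThreadPool  # thread-backed pool, same API

def test(data):
    result = testData('www.example.com', data)  # placeholder function from the question
    return result == 'whatever'

if __name__ == "__main__":
    with open("c:\\file.txt") as f:
        data_arr = f.read().splitlines()

    pool = ThreadPool(8)   # 8 worker threads; tune to the workload
    results = pool.map(test, data_arr)
    pool.close()
    pool.join()
    print(results)
</code></pre>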
python|multithreading|multiprocessing|python-multithreading|python-multiprocessing
2
1,908,045
40,731,313
Adding elements to list in python in proper order
<p>I want to add element to list using specific index. Using insert it is not working for me as it should in all cases. Please take a look:</p> <pre><code>rest = [666, 555, 222] s= sorted(rest) list1 = [] list1.insert(rest.index(s[0]),3) list1.insert(rest.index(s[1]),2) list1.insert(rest.index(s[2]),1) print list1 </code></pre> <p>So what I wanted to achieve - highest value mapped to lowest from rest list. But what I get is: 1, 3, 2 and the goal is to be 1, 2, 3 (in this case).</p> <p>I understand that that is how insert function works, but is there any other way to acheive what I want?</p>
<p>Is this what you are looking for?</p> <pre><code>rest = [666, 555, 222]
s = sorted(rest)
list1 = [0] * len(rest)

list1[rest.index(s[0])] = 3
list1[rest.index(s[1])] = 2
list1[rest.index(s[2])] = 1
print list1
</code></pre> <p>The above code gives <code>[1, 2, 3]</code> as output (as you expect).</p>
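<p>The same idea without writing one assignment per element — a small sketch that works for any length of <code>rest</code>:</p> <pre><code>rest = [666, 555, 222]
s = sorted(rest)
list1 = [0] * len(rest)
for rank, value in enumerate(s):
    list1[rest.index(value)] = len(rest) - rank
print(list1)   # [1, 2, 3]
</code></pre>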
python
2
1,908,046
62,957,266
How to locate the last web element using classname attribute through Selenium and Python
<pre><code>getphone = driver.find_element_by_class_name('_3ko75')[-1] phone = getphone.get_attribute(&quot;title&quot;) </code></pre> <p>Not working I need to get the title on string format.</p> <pre><code>Exception has occurred: TypeError 'WebElement' object is not subscriptable File &quot;C:\Users\vmaiha\Documents\Python Projects\Project 01\WP_Answer.py&quot;, line 43, in check getphone = driver.find_element_by_class_name('_3ko75')[-1] </code></pre>
<p>Based on your code trials, to get the title of the last <a href="https://stackoverflow.com/questions/52782684/what-is-the-difference-between-webdriver-and-webelement-in-selenium/52805139#52805139">WebElement</a> based on the value of the <em>classname</em> attribute you can use either of the following <a href="https://stackoverflow.com/questions/48369043/official-locator-strategies-for-the-webdriver/48376890#48376890">Locator Strategies</a>:</p> <ul> <li><p>Using <code>XPATH</code>, <code>find_element*</code> and <code>last()</code>:</p> <pre><code>print(driver.find_element_by_xpath(&quot;//*[@class='_3ko75'][last()]&quot;).get_attribute(&quot;title&quot;)) </code></pre> </li> <li><p>Using <code>XPATH</code>, <code>find_elements*</code> and <code>[-1]</code>:</p> <pre><code>print(driver.find_elements_by_xpath(&quot;//*[@class='_3ko75']&quot;)[-1].get_attribute(&quot;title&quot;)) </code></pre> </li> </ul> <p>Preferably using <a href="https://stackoverflow.com/questions/59130200/selenium-wait-until-element-is-present-visible-and-interactable/59130336#59130336">WebDriverWait</a>:</p> <pre><code>print(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, &quot;//*[@class='_3ko75'][last()]&quot;))).get_attribute(&quot;title&quot;)) </code></pre> <p>or</p> <pre><code>print(WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, &quot;//*[@class='_3ko75']&quot;)))[-1].get_attribute(&quot;title&quot;)) </code></pre>
python|selenium|selenium-webdriver|xpath|getattribute
0
1,908,047
32,425,145
scrapy on ubuntu server
<p>I have a problem I just cannot resolve. After installing scrapy (with pip) I get and error when trying to make startup project:</p> <pre><code> File "/usr/local/bin/scrapy", line 5, in &lt;module&gt; from pkg_resources import load_entry_point File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2749, in &lt;module&gt; working_set = WorkingSet._build_master() File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 446, in _build_master return cls._build_from_requirements(__requires__) File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 459, in _build_from_requirements dists = ws.resolve(reqs, Environment()) File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 628, in resolve raise DistributionNotFound(req) pkg_resources.DistributionNotFound: Scrapy==1.0.3.post1-g83a06ed </code></pre> <p>Does anyone familiar with this? I tried a lot of things including reinstalling packages.</p> <p>I`m using DigitalOcean server with ubuntu 14.04 and python 2.7.9</p> <p>Thanks, Aviad</p>
<p>Stumbled across this the other day:</p> <p><a href="https://stackoverflow.com/questions/27614629/scrapy-failing-in-terminal">Scrapy failing in terminal</a></p> <p>You may need to upgrade Scrapy.</p> <p><code>easy_install --upgrade scrapy</code></p> <p>or</p> <p><code>pip install --upgrade scrapy</code></p> <p>If you don't have easy_install it can be installed with</p> <p><code>sudo apt-get install python-setuptools</code></p>
python|python-2.7|ubuntu
0
1,908,048
43,989,235
Accurate timing for imports in Python
<p>The <a href="https://docs.python.org/library/timeit.html" rel="nofollow noreferrer"><code>timeit</code></a> module is great for measuring the execution time of small code snippets but when the code changes global state (like <code>timeit</code>) it's really hard to get accurate timings.</p> <p>For example if I want to time it takes to import a module then the first import will take much longer than subsequent imports, because the submodules and dependencies are already imported and the files are already cached. So using a bigger <code>number</code> of repeats, like in:</p> <pre><code>&gt;&gt;&gt; import timeit &gt;&gt;&gt; timeit.timeit('import numpy', number=1) 0.2819331711316805 &gt;&gt;&gt; # Start a new Python session: &gt;&gt;&gt; timeit.timeit('import numpy', number=1000) 0.3035142574359181 </code></pre> <p>doesn't really work, because the time for one execution is almost the same as for 1000 rounds. I could execute the command to "reload" the package:</p> <pre><code>&gt;&gt;&gt; timeit.timeit('imp.reload(numpy)', 'import importlib as imp; import numpy', number=1000) 3.6543283935557156 </code></pre> <p>But that it's only 10 times slower than the first <code>import</code> seems to suggest it's not accurate either. </p> <p>It also seems impossible to unload a module entirely (<a href="https://stackoverflow.com/q/3105801/5393381">"Unload a module in Python"</a>).</p> <p>So the question is: What would be an appropriate way to accuratly measure the <code>import</code> time?</p>
<p>Since it's nearly impossible to fully unload a module, maybe the inspiration behind this answer is <a href="https://www.youtube.com/watch?v=nn2FB1P_Mn8" rel="nofollow noreferrer">this</a>...</p> <p>You could run a loop in a Python script that launches, x times, a Python command importing <code>numpy</code> and another one doing nothing, then subtract the two totals and average:</p> <pre><code>import subprocess,time

n=100
python_load_time = 0
numpy_load_time = 0
for i in range(n):
    s = time.time()
    subprocess.call(["python","-c","import numpy"])
    numpy_load_time += time.time()-s

    s = time.time()
    subprocess.call(["python","-c","pass"])
    python_load_time += time.time()-s

print("average numpy load time = {}".format((numpy_load_time-python_load_time)/n))
</code></pre>
python|import|module|timing
3
1,908,049
26,983,026
Adding up all columns with multiindex on axis 0
<p>I have a multiIndexed pandas dataframe called <code>df</code> looking like this: </p> <pre><code> Rh 0.1 0.2 0.3 x y z 0 0 0 0.125 0.126 0.127 1 0.134 0.135 0.137 1 0 ... </code></pre> <p>I want to keep the x,y,z multiIndex and sum up all columns. The call <code>df.sum(axis=1)</code> does not give the right result. The desired output should look like: </p> <pre><code> Rh sum x y z 0 0 0 0.378 1 0.406 1 0 ... </code></pre>
<p>Works on my end:</p> <pre><code>from io import StringIO import pandas datafile = StringIO("""\ x y z A B C 0 0 0 0.125 0.126 0.127 0 0 1 0.134 0.135 0.137 """) df = pandas.read_table(datafile, index_col=['x', 'y', 'z'], sep='\s+') df.sum(axis=1) x y z 0 0 0 0.378 1 0.406 dtype: float64 </code></pre>
python|pandas|dataframe
1
1,908,050
26,984,189
Miliseconds added between the two delayed datetime outputs
<p>Code: </p> <pre><code>while True: print(datetime.datetime.now()) time.sleep(5) print(datetime.datetime.now()) </code></pre> <p><strong>Result:</strong></p> <p>00:06:53.728000</p> <p>00:06:58.763000</p> <p>00:06:58.833000</p> <p>00:07:03.838000</p> <p>So why doesn't it come out as: 06:58.728000, 06:58.728000, 07:03:728000? </p>
<p>The <a href="https://docs.python.org/2/library/time.html" rel="nofollow"><code>time.sleep</code></a> documentation explains it best:</p> <blockquote> <p>The actual suspension time may be less than that requested because any caught signal will terminate the sleep() following execution of that signal’s catching routine.</p> </blockquote> <p>But also:</p> <blockquote> <p>Also, the suspension time may be longer than requested by an arbitrary amount because of the scheduling of other activity in the system.</p> </blockquote> <p>So, there are no guarantees (this tripped me up as well in the past). You're getting a five second delay (roughly) but the actual delay can be a bit less and/or a bit more than the requested delay.</p>
python|datetime|python-3.x
0
1,908,051
12,308,046
How to launch your already made Django App on Heroku
<p>I have already made a django app that runs on my server. I now want to launch it on the web using heroku but all the tutorials that I find make you start a whole new project. I dont know what to do to simply update my already existing django project to work with heroku. </p> <p>My files now are organized like so:</p> <pre><code>in hcp: crunchWeb: crunchWeb: files = _init_.py ; settings.py ; urls.py ; wsgi.py crunchApp: files = _init_.py ; admin.py ; models.py ; views.py etc... manage.py sqlite3.db env: folders= bin ; helloflask ; include ; lib #all of these were created automatically templates: all my .html files </code></pre> <p>I would like to know what commands from the heroku tutorial (https://devcenter.heroku.com/articles/django#using-a-different-wsgi-server) I still need to do and which ones I can skip.</p> <p>I would also like to know in what folder I need to be in when executing all of my commands</p> <p>Thanks!</p> <pre><code>2012-09-06T21:44:52+00:00 app[web.1]: File "/app/.heroku/venv/lib/python2.7/site-packages/django/db/__init__.py", line 34, in __getattr__ 2012-09-06T21:44:52+00:00 app[web.1]: File "/app/.heroku/venv/lib/python2.7/site-packages/django/db/utils.py", line 92, in __getitem__ 2012-09-06T21:44:52+00:00 app[web.1]: return getattr(connections[DEFAULT_DB_ALIAS], item) 2012-09-06T21:44:52+00:00 app[web.1]: backend = load_backend(db['ENGINE']) 2012-09-06T21:44:52+00:00 app[web.1]: File "/app/.heroku/venv/lib/python2.7/site-packages/django/db/utils.py", line 24, in load_backend 2012-09-06T21:44:52+00:00 app[web.1]: return import_module('.base', backend_name) 2012-09-06T21:44:52+00:00 app[web.1]: File "/app/.heroku/venv/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module 2012-09-06T21:44:52+00:00 app[web.1]: __import__(name) 2012-09-06T21:44:52+00:00 app[web.1]: File "/app/.heroku/venv/lib/python2.7/site-packages/django/db/backends/sqlite3/base.py", line 31, in &lt;module&gt; 2012-09-06T21:44:52+00:00 app[web.1]: raise ImproperlyConfigured("Error loading either pysqlite2 or sqlite3 modules (tried in that order): %s" % exc) 2012-09-06T21:44:52+00:00 app[web.1]: django.core.exceptions.ImproperlyConfigured: Error loading either pysqlite2 or sqlite3 modules (tried in that order): No module named _sqlite3 2012-09-06T21:44:54+00:00 heroku[web.1]: Process exited with status 1 2012-09-06T21:44:54+00:00 heroku[web.1]: State changed from starting to crashed 2012-09-06T21:44:54+00:00 heroku[web.1]: State changed from crashed to starting 2012-09-06T21:44:58+00:00 heroku[web.1]: Starting process with command `python ./manage.py runserver 0.0.0.0:57395 --noreload` 2012-09-06T21:44:59+00:00 app[web.1]: File "./manage.py", line 10, in &lt;module&gt; </code></pre> <p>settings.py </p> <pre><code> # Django settings for crunchWeb project. import dj_database_url DEBUG = True TEMPLATE_DEBUG = DEBUG ADMINS = ( # ('Your Name', 'your_email@example.com'), ) MANAGERS = ADMINS DATABASES = {'default': dj_database_url.config(default='postgres://localhost')} # { # 'default': { # 'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'. # 'NAME': '/Users/Santi/hcp/crunchWeb/sqlite3.db', # Or path to database file if using sqlite3. # 'USER': '', # Not used with sqlite3. # 'PASSWORD': '', # Not used with sqlite3. # 'HOST': '', # Set to empty string for localhost. Not used with sqlite3. # 'PORT': '', # Set to empty string for default. Not used with sqlite3. # } # } </code></pre>
<p>Directly from the directions at <a href="https://devcenter.heroku.com/articles/django" rel="nofollow">https://devcenter.heroku.com/articles/django</a>: if you read the directions and follow them you will have executed:</p> <pre><code>$ pip install Django psycopg2 dj-database-url </code></pre> <h2>Database settings</h2> <p>Next, configure the application to use Heroku’s Postgres database. The installed dj-database-url module will do everything automatically from the env.</p> <p>Add the following to your settings.py:</p> <pre><code>import dj_database_url
DATABASES = {'default': dj_database_url.config(default='postgres://localhost')}
</code></pre> <p>You can add these lines at the end of your settings.py to continue to use SQLite locally and Postgres only on Heroku.</p> <pre><code># settings.py
import dj_database_url
import os

if os.getcwd() == "/app":
    DATABASES = {'default': dj_database_url.config(default='postgres://localhost')}
</code></pre>
python|django|heroku
3
1,908,052
1,030,522
Unescape _xHHHH_ XML escape sequences using Python
<p>I'm using Python 2.x [not negotiable] to read XML documents [created by others] that allow the content of many elements to contain characters that are not valid XML characters by escaping them using the <code>_xHHHH_</code> convention e.g. ASCII BEL aka U+0007 is represented by the 7-character sequence <code>u"_x0007_"</code>. Neither the functionality that allows representation of any old character in the document nor the manner of escaping is negotiable. I'm parsing the documents using cElementTree or lxml [semi-negotiable].</p> <p>Here is my best attempt at unescapeing the parser output as efficiently as possible:</p> <pre><code>import re def unescape(s, subber=re.compile(r'_x[0-9A-Fa-f]{4,4}_').sub, repl=lambda mobj: unichr(int(mobj.group(0)[2:6], 16)), ): if "_" in s: return subber(repl, s) return s </code></pre> <p>The above is biassed by observing a very low frequency of "_" in typical text and a better-than-doubling of speed by avoiding the regex apparatus where possible.</p> <p>The question: Any better ideas out there?</p>
<p>You might as well check for <code>'_x'</code> rather than just <code>_</code>, that won't matter much but surely the two-character sequence's even rarer than the single underscore. Apart from such details, you do seem to be making the best of a bad situation!</p>
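<p>For what it's worth, a sketch of the same function with that <code>'_x'</code> check in place (still Python 2, as in the question):</p> <pre><code>import re

_subber = re.compile(r'_x[0-9A-Fa-f]{4}_').sub
_repl = lambda mobj: unichr(int(mobj.group(0)[2:6], 16))

def unescape(s):
    # only invoke the regex machinery when the two-character marker is present
    if "_x" in s:
        return _subber(_repl, s)
    return s
</code></pre>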
python|xml|escaping
1
1,908,053
866,000
Using BeautifulSoup to find a HTML tag that contains certain text
<p>I'm trying to get the elements in an HTML doc that contain the following pattern of text: #\S{11}</p> <pre><code>&lt;h2&gt; this is cool #12345678901 &lt;/h2&gt; </code></pre> <p>So, the previous would match by using:</p> <pre><code>soup('h2',text=re.compile(r' #\S{11}')) </code></pre> <p>And the results would be something like:</p> <pre><code>[u'blahblah #223409823523', u'thisisinteresting #293845023984'] </code></pre> <p>I'm able to get all the text that matches (see line above). But I want the parent element of the text to match, so I can use that as a starting point for traversing the document tree. In this case, I'd want all the h2 elements to return, not the text matches.</p> <p>Ideas?</p>
<pre><code>from BeautifulSoup import BeautifulSoup import re html_text = """ &lt;h2&gt;this is cool #12345678901&lt;/h2&gt; &lt;h2&gt;this is nothing&lt;/h2&gt; &lt;h1&gt;foo #126666678901&lt;/h1&gt; &lt;h2&gt;this is interesting #126666678901&lt;/h2&gt; &lt;h2&gt;this is blah #124445678901&lt;/h2&gt; """ soup = BeautifulSoup(html_text) for elem in soup(text=re.compile(r' #\S{11}')): print elem.parent </code></pre> <p>Prints:</p> <pre><code>&lt;h2&gt;this is cool #12345678901&lt;/h2&gt; &lt;h2&gt;this is interesting #126666678901&lt;/h2&gt; &lt;h2&gt;this is blah #124445678901&lt;/h2&gt; </code></pre>
python|regex|beautifulsoup|html-content-extraction
85
1,908,054
57,363,811
How to plot legends using loop in Axes3D in python?
<p>I have the following array in Python:</p> <pre><code>numbers=[75, 100, 680, 123, 4, 4, 8, 15] </code></pre> <p>These numbers are corresponding to the amount of points, which are being associated to a certain cluster.</p> <p>So together with plotting the cluster points I want to have a legend, which is giving following information:</p> <pre><code>"Cluster 1 has 75 points" "Cluster 2 has 100 points" </code></pre> <p>and so on.</p> <p>I have difficulties coding the loops, so the help will be highly appreciated.</p>
<h2><strong>UPDATE:</strong></h2> <p>After providing more information I think you need this:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D # random data: 800 samples, 8 coordinates X_scaled = np.random.rand(800,8) # some labels e.g. output of clustering algorithm labels = np.concatenate([np.ones(400),np.zeros(400)]) # unique classes/groups in the data number_of_classes = np.unique(labels).shape[0] # the desired legends legends = ['cluster 1', 'cluster 2'] # colors for the groups colors = ["r","b"] fig1 = plt.figure() ax = Axes3D(fig1) for i in range(number_of_classes): ax.scatter(X_scaled[:,0][labels==i], X_scaled[:,1][labels==i],X_scaled[:,7][labels==i], c = colors[i] ,s=50, label= legends[i] + " has {} points".format(X_scaled[:,0][labels==i].shape[0])) plt.legend() plt.savefig("test.png", dpi = 300) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/tVed6.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tVed6.jpg" alt="enter image description here"></a></p> <p>EDIT 1: How to use a for loop to create the legends.</p> <pre><code>legends = [] for i in range(5): legends.append('cluster{}'.format(i)) print(legends) ['cluster0', 'cluster1', 'cluster2', 'cluster3', 'cluster4'] </code></pre>
python|matplotlib|plot
1
1,908,055
70,976,489
pytube: 'NoneType' object has no attribute 'span'
<p>I try to follow pytube example for downloading video from YouTube:</p> <pre><code>from pytube import YouTube video = YouTube('https://www.youtube.com/watch?v=BATOxzbVNno') video.streams.all() </code></pre> <p>and immediately get this error:</p> <pre><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) &lt;ipython-input-4-2556eb2eb903&gt; in &lt;module&gt;() 1 from pytube import YouTube 2 video = YouTube('https://www.youtube.com/watch?v=BATOxzbVNno') ----&gt; 3 video.streams.all() 5 frames /usr/local/lib/python3.7/dist-packages/pytube/cipher.py in get_throttling_function_code(js) 301 # Extract the code within curly braces for the function itself, and merge any split lines 302 code_lines_list = find_object_from_startpoint(js, match.span()[1]).split('\n') --&gt; 303 joined_lines = &quot;&quot;.join(code_lines_list) 304 305 # Prepend function definition (e.g. `Dea=function(a)`) AttributeError: 'NoneType' object has no attribute 'span' </code></pre> <p>Please help me. It worked fine just yesterday! Thanks a lot!</p>
<p>Just ran into that error myself, seems it occurs quite frequently regardless of it getting temporary fixes.</p> <p>Found a fix on github: <a href="https://github.com/pytube/pytube/issues/1243" rel="nofollow noreferrer">NoneType object has no attribute 'span'</a></p> <p>Just replace the function <em><strong>get_throttling_function_name</strong></em> with:</p> <pre><code>def get_throttling_function_name(js: str) -&gt; str: &quot;&quot;&quot;Extract the name of the function that computes the throttling parameter. :param str js: The contents of the base.js asset file. :rtype: str :returns: The name of the function used to compute the throttling parameter. &quot;&quot;&quot; function_patterns = [ # https://github.com/ytdl-org/youtube-dl/issues/29326#issuecomment-865985377 # a.C&amp;&amp;(b=a.get(&quot;n&quot;))&amp;&amp;(b=Dea(b),a.set(&quot;n&quot;,b))}}; # In above case, `Dea` is the relevant function name r'a\.[A-Z]&amp;&amp;\(b=a\.get\(&quot;n&quot;\)\)&amp;&amp;\(b=([^(]+)\(b\)', ] logger.debug('Finding throttling function name') for pattern in function_patterns: regex = re.compile(pattern) function_match = regex.search(js) if function_match: logger.debug(&quot;finished regex search, matched: %s&quot;, pattern) function_name = function_match.group(1) is_Array = True if '[' or ']' in function_name else False if is_Array: index = int(re.findall(r'\d+', function_name)[0]) name = function_name.split('[')[0] pattern = r&quot;var %s=\[(.*?)\];&quot; % name regex = re.compile(pattern) return regex.search(js).group(1).split(',')[index] else: return function_name raise RegexMatchError( caller=&quot;get_throttling_function_name&quot;, pattern=&quot;multiple&quot; ) </code></pre>
python|python-3.x|pytube
4
1,908,056
70,886,317
How to reverse the legends of stacked barplot in pandas
<p>I have a dataset with a few records about some crop production by year. So I am visualizing the top produced crop by each year in a stacked bar chart. Dataset I have used can be found in <a href="https://www.kaggle.com/pyatakov/india-pmfby-statistics" rel="nofollow noreferrer">kaggle PMFBY Coverage.csv</a>.</p> <p>Here is my code.</p> <pre class="lang-py prettyprint-override"><code># Top Crop by year
plt.figure(figsize=(12, 6))
df_crg_[df_crg_.year==2018].groupby('cropName').size().nlargest(5).plot(kind='barh', color='red', label='2018')
df_crg_[df_crg_.year==2019].groupby('cropName').size().nlargest(5).plot(kind='barh', color='green', label='2019')
df_crg_[df_crg_.year==2020].groupby('cropName').size().nlargest(5).plot(kind='barh', color='blue', label='2020')
df_crg_[df_crg_.year==2021].groupby('cropName').size().nlargest(5).plot(kind='barh', color='maroon', label='2021')
plt.legend(loc=&quot;upper right&quot;)
plt.xlabel('Total Production Time')
plt.title('Top Crop by year')
plt.show()
</code></pre> <p>And this was the output <a href="https://i.stack.imgur.com/MJPXG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MJPXG.png" alt="enter image description here" /></a></p> <p>Now if you look at the graph you would notice the stacked bar chart legends are reversed, it is showing 2021 status first instead of 2018. So I want to reverse this order of representation.</p> <p>I found one <a href="https://stackoverflow.com/questions/54874269/ordering-of-elements-in-pandas-stacked-bar-chart/54875576">solution</a> for this question but I don't know how to apply it, as it is assigning plotting commands to one variable but in my case, there are four plotting commands.</p> <hr /> <p>Only this answer would do, but if you know any other method of extracting the top produced crop by year then that would be great. If you notice here I am manually going through each year then extracting that year's top crop. I tried doing it with groupby but I wasn't able to get the answer.</p> <p>Thanks</p>
<p>First off, the same 5 crops need to be selected each year. Otherwise, you can't have a fixed ordering on the y-axis.</p> <p>The easiest way to get a plot with the <em>overall</em> 5 most-frequent crops, is seaborn's <code>sns.countplot</code> and limiting to the 5 largest. Note that seaborn is strongly objected to stacked bar plots, so you'll get &quot;dodged&quot; bars (which are easier to compare, year by year, and crop by crop):</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import numpy as np df = pd.read_csv('PMFBY coverage.csv') sns.set_style('white') order = df.groupby('cropName').size().sort_values(ascending=False)[:5].index plt.figure(figsize=(12, 5)) ax = sns.countplot(data=df, y='cropName', order=order, hue='year') for bars in ax.containers: ax.bar_label(bars, fmt='%.0f', label_type='edge', padding=2) sns.despine() plt.tight_layout() plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/rACnk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rACnk.png" alt="sns.countplot example" /></a></p> <p>With pandas, you can get stacked bars, but you need a bit more manipulation:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import numpy as np df = pd.read_csv('PMFBY coverage.csv') sns.set_style('white') order = df.groupby('cropName').size().sort_values(ascending=False)[:5].index df_5_largest = df[df['cropName'].isin(order)] df_5_largest_year_count = df_5_largest.groupby(['cropName', 'year']).size().unstack('year').reindex(order) ax = df_5_largest_year_count.plot.barh(stacked=True, figsize=(12, 5)) ax.invert_yaxis() for bars in ax.containers: ax.bar_label(bars, fmt='%.0f', label_type='center', color='white', fontsize=16) sns.despine() plt.tight_layout() plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/Vgtcy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vgtcy.png" alt="pandas stacked bars" /></a></p> <p>Now, compare this with how the bars would look like if you'd consider the 5 largest crops of each individual year. Notice how the crops and their order is different each year. How would you combine such information to a single plot?</p> <pre class="lang-py prettyprint-override"><code>sns.set_style('white') fig, axs = plt.subplots(2, 2, figsize=(14, 8)) df[df.year == 2018].groupby('cropName').size().nlargest(5).plot(kind='barh', color='C0', title='2018', ax=axs[0, 0]) df[df.year == 2019].groupby('cropName').size().nlargest(5).plot(kind='barh', color='C1', title='2019', ax=axs[0, 1]) df[df.year == 2020].groupby('cropName').size().nlargest(5).plot(kind='barh', color='C2', title='2020', ax=axs[1, 0]) df[df.year == 2021].groupby('cropName').size().nlargest(5).plot(kind='barh', color='C3', title='2021', ax=axs[1, 1]) for ax in axs.flat: ax.bar_label(ax.containers[0], fmt='%.0f', label_type='edge', padding=2) ax.margins(x=0.1) sns.despine() plt.tight_layout() plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/dCe5W.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dCe5W.png" alt="bars for 5 largest crops for each year" /></a></p>
python|pandas|matplotlib|seaborn|data-visualization
2
1,908,057
70,744,911
Python - pandas converts MS SQL date to nvarchar
<p>My python code runs <code>read_sql...</code> method on a sample MS SQL Server query.</p> <p>One of the columns - <code>system_type_name</code> - indicates type <code>date</code> while running in SSMS.</p> <p><a href="https://i.stack.imgur.com/YIbyM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YIbyM.png" alt="enter image description here" /></a></p> <p>Query executed with below code gives me <code>nvarchar(10)</code> type:</p> <p>Config.json:</p> <pre><code>{&quot;Drive&quot;: &quot;SQL Server&quot;, &quot;Server&quot;: &quot;server_name&quot;, &quot;Database&quot;:...... &quot;UID&quot;:........ &quot;PWD&quot;:......} </code></pre> <p>Code:</p> <pre><code>import pandas as pd import pyodbc query = ''' DECLARE @dt DECLARE @sql nvarchar(max) = N'procedure_name 0, ''1900-01-01'', @dt'; SELECT system_type_name FROM sys.dm_exec_describe_first_result_set(@sql, NULL, 0) ''' cnxn = connect_to_db(&quot;config.json&quot;, False) src = pd.read_sql(query, cnxn) print(src) </code></pre> <p>df result:</p> <pre><code> num_legs system_type_name nvarchar(10) </code></pre> <p>Is this some conversion bug/issue pandas is having with <code>date</code> type?</p>
<p>The ancient &quot;SQL Server&quot; driver returns a string representation for several T-SQL types. Newer ODBC drivers return more specific types. For example:</p> <pre class="lang-py prettyprint-override"><code># with DRIVER=SQL Server # print(type(crsr.execute(&quot;SELECT CAST('2022-01-17' AS DATE) AS d&quot;).fetchval())) # &lt;class 'str'&gt; # with DRIVER=ODBC Driver 17 for SQL Server # print(type(crsr.execute(&quot;SELECT CAST('2022-01-17' AS DATE) AS d&quot;).fetchval())) # &lt;class 'datetime.date'&gt; </code></pre>
python|sql-server|pandas|pyodbc
2
1,908,058
33,741,146
comparing two sorted lists in python
<p>I need to compare 2 lists of countries that I have organised, one by population and one by area, so that it prints out any countries that are in the same position in both lists. So far everything I have tried has resulted in it only returning a single country that has the same position in both lists, when there should be a total of 6.</p> <pre><code>def coincidingCountries(): countries = readCountries() for i in range(0,len(countries)): swap = False for j in range(0,len(countries)-(i+1)): if countries[j][1]&gt;countries[j+1][1]: temp = countries[j+1] countries[j+1] = countries[j] countries[j] = temp swap = True for i in range(0,len(countries)): smallest = i for j in range(i,len(countries)): if countries[j][2]&lt; countries[smallest][2]: smallest = j temp = countries[i] countries[i] = countries[smallest] countries[smallest] = temp </code></pre>
<p>Try <a href="http://pandas.pydata.org/" rel="nofollow">pandas</a>:</p> <pre><code>import pandas as pd pop = ['a', 'b', 'c', 'd', 'e', 'i'] area = ['a', 'c', 'b', 'd', 'e', 'f'] countries = pd.DataFrame(data = {'pop': pop, 'area': area}) print countries[countries['area']==countries['pop']] area pop 0 a a 3 d d 4 e e </code></pre> <p>This assumes your two lists are already sorted and will print rows of the table where the values match.</p>
python|python-2.7
2
1,908,059
46,962,702
python 3 Call child attribute from parent method
<p>Can't call child attribute in parent method, here's the test:</p> <pre><code>#!/usr/bin/env python3 class A(): def getPath(self): return self.__path class B(A): def __init__( self, name, path): self.__name = name self.__path = path instance = B('test', '/home/test/Projects') print(instance.getPath()) </code></pre> <p>Running python test file <code>$ ./test.py</code> returns</p> <pre><code>./test.py Traceback (most recent call last): File "./test.py", line 17, in &lt;module&gt; print(instance.getPath()) File "./test.py", line 6, in getPath return self.__path AttributeError: 'B' object has no attribute '_A__path' </code></pre>
<p>You are getting this because you are using a private attribute. If you do it using a non-private attribute, it will succeed.</p> <p>Private attributes in Python are designed to allow each class to have its own private copy of a variable without that variable being overridden by subclasses. So in B, __path means _B__path, and in A, __path means _A__path. This is exactly how Python is intended to work. <a href="https://docs.python.org/3/tutorial/classes.html#tut-private" rel="noreferrer">https://docs.python.org/3/tutorial/classes.html#tut-private</a></p> <p>Since you want A to be able to access __path, you should not use double underscores. Instead, you can use a single underscore, which is a convention to indicate that a variable is private, without actually enforcing it.</p> <pre><code>#!/usr/bin/env python3

class A():
    def getPath(self):
        return self._path

class B(A):
    def __init__( self, name, path):
        self.__name = name
        self._path = path

instance = B('test', '/home/test/Projects')
print(instance.getPath())

$ ./test.py
/home/test/Projects
</code></pre>
python-3.x|inheritance|methods|attributes|parent
5
1,908,060
46,951,015
saving large streaming data in python
<p>I have a large amount of data coming in every second in the form of Python dictionaries. Right now I am saving them to a MySQL server as they come in, but that creates a backlog that's more than a few hours long. What is the best way to save the data locally and move it to a MySQL server every hour or so as a chunk to save time? I have tried Redis, but it can't save a list of these dictionaries which I can later move to MySQL.</p>
<p>A little-known fact about the Python native <code>pickle</code> format is that you can happily concatenate multiple pickles into a single file.</p> <p>That is, simply open a file in <code>append</code> mode and <code>pickle.dump()</code> your dictionary into that file. If you want to be extra fancy, you could do something like timestamped files:</p> <pre><code>import pickle
from datetime import datetime

def ingest_data(data_dict):
    # one file per hour, e.g. 2017-10-26_14.pickles
    filename = '%s.pickles' % datetime.now().strftime('%Y-%m-%d_%H')
    with open(filename, 'ab') as outf:
        pickle.dump(data_dict, outf, pickle.HIGHEST_PROTOCOL)

def read_data(filename):
    # yield the dictionaries back one by one until the end of the file
    with open(filename, 'rb') as inf:
        while True:
            try:
                yield pickle.load(inf)
            except EOFError:
                return
</code></pre>
python|sql
1
1,908,061
29,928,607
ruby tags for Sphinx/rst
<p>I create HTML documents from a rst-formated text, with the help of <a href="http://sphinx-doc.org/rest.html" rel="nofollow noreferrer">Sphinx</a>. I need to display some Japanese words with <a href="http://en.wikipedia.org/wiki/Furigana" rel="nofollow noreferrer">furiganas</a> (=small characters above the words), something like that : <img src="https://i.stack.imgur.com/eWwzh.jpg" alt="Japanese text with furiganas over some words"></p> <p>I'd like to produce HTML displaying furiganas thanks to <a href="http://www.w3schools.com/tags/tag_ruby.asp" rel="nofollow noreferrer">the &lt; ruby > tag</a>.</p> <p>I can't figure out how to get this result. I tried to:</p> <ul> <li>insert raw HTML code with the <a href="http://docutils.sourceforge.net/docs/ref/rst/directives.html#raw-data-pass-through" rel="nofollow noreferrer">.. raw:: html directive</a> but it breaks my line into several paragraphs.</li> <li>use the <a href="http://docutils.sourceforge.net/docs/ref/rst/roles.html#subscript" rel="nofollow noreferrer">:superscript: directive</a> but the text in furigana is written <em>beside</em> the text, not <em>above</em>.</li> <li>use the <a href="http://docutils.sourceforge.net/docs/ref/rst/directives.html#custom-interpreted-text-roles" rel="nofollow noreferrer">:role: directive</a> to create a link between the text and a CSS class of my own. But the :role: directive can only be applied to a segment of text, not to TWO segments as required by the furiganas (=text + text above it).</li> </ul> <p>Any idea to help me ?</p>
<p>As long as I know, there's no simple way to get the expected result.</p> <p><a href="https://github.com/suizokukan/logotheras" rel="nofollow">For a specific project</a>, I choosed not to generate the furiganas with the help of Sphinx but to modify the .html files afterwards. See the <code>add_ons/add_furiganas.py</code> script and the result <a href="http://94.23.197.37/hologenri" rel="nofollow">here</a>. Yes, it's a quick-and-dirty trick :( </p>
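<p>For illustration only, here is a minimal sketch of what such an HTML post-processing step could look like in Python; the word/reading mapping, the file name and the exact markup are assumptions, not the contents of the actual <code>add_ons/add_furiganas.py</code> script:</p> <pre><code># hypothetical mapping: word -&gt; furigana reading
FURIGANAS = {'日本語': 'にほんご'}

def add_furiganas(html):
    # wrap every known word in a &lt;ruby&gt; tag with its reading in &lt;rt&gt;
    for word, reading in FURIGANAS.items():
        ruby = '&lt;ruby&gt;{}&lt;rt&gt;{}&lt;/rt&gt;&lt;/ruby&gt;'.format(word, reading)
        html = html.replace(word, ruby)
    return html

with open('page.html', encoding='utf-8') as f:
    html = add_furiganas(f.read())
with open('page.html', 'w', encoding='utf-8') as f:
    f.write(html)
</code></pre>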
html|css|python-sphinx|restructuredtext|ruby-characters
0
1,908,062
29,894,039
How to use a zip file which has all the packages of nltk?
<p>We need to import all the packages from the zip file using Python code.</p> <pre><code>import zipfile
a = zipfile.ZipFile(&quot;Path&quot;, &quot;r&quot;)
</code></pre> <p>After this we have the zip file of <code>nltk</code> stored locally. So how do we use it and run sample code like the one below?</p> <pre><code>from nltk.corpus import wordnet
from nltk.corpus import wordnet as wn
wn.synsets('dog')
</code></pre>
<p>Use the <a href="https://docs.python.org/2/library/zipimport.html" rel="nofollow">zipimport</a> module, or add the zip file to the import path directly:</p> <pre><code>import sys
sys.path.insert(0, '/my/path/file.zip')

import my_module
my_module.call_something()
</code></pre>
python
1
1,908,063
65,516,926
Reshaping before as_strided for optimisation
<pre class="lang-py prettyprint-override"><code>def forward(x, f, s): B, H, W, C = x.shape # e.g. 64, 16, 16, 3 Fh, Fw, C, _ = f.shape # e.g. 4, 4, 3, 3 # C is redeclared to emphasise that the dimension is the same Sh, Sw = s # e.g. 2, 2 strided_shape = B, 1 + (H - Fh) // Sh, 1 + (W - Fw) // Sw, Fh, Fw, C x = as_strided(x, strided_shape, strides=( x.strides[0], Sh * x.strides[1], Sw * x.strides[2], x.strides[1], x.strides[2], x.strides[3]), ) # print(x.flags, f.flags) # The reshaping changes the einsum from 'wxyijk,ijkd' to 'wxyz,zd-&gt;wxyd' f = f.reshape(-1, f.shape[-1]) x = x.reshape(*x.shape[:3], -1) # Bottleneck! return np.einsum('wxyz,zd-&gt;wxyd', x, f, optimize='optimal') </code></pre> <p>(On the contrary, the variant <em>without</em> the reshapes uses <code>return np.einsum('wxyijk,ijkd-&gt;wxyd', x, f)</code>)</p> <p>For reference, here are the flags for <code>x</code> and <code>f</code> before reshaping:</p> <pre><code>x.flags: C_CONTIGUOUS : False F_CONTIGUOUS : False OWNDATA : False WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False UPDATEIFCOPY : False f.flags: C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False UPDATEIFCOPY : False </code></pre> <p>Interestingly the major bottleneck in the routine is <em>not</em> the <code>einsum</code>, but rather the reshaping (flattening) of <code>x</code>. I understand that <code>f</code> does not suffer from such problems since its memory is C-contiguous, so the reshape amounts to a quick internal modification without changing the data - but since <code>x</code> is not C-contiguous (and does not own its data, for that matter), the reshape is far more expensive since it involves changing the data/fetching non-cache-aligned data often. This, in turn, results from the <code>as_strided</code> function performed on <code>x</code> - the modification of the strides must be in such a manner as to disturb the natural ordering. (FYI, the <code>as_strided</code> is incredibly fast, and should be fast no matter what strides are passed to it)</p> <p>Is there a way to achieve the same result without incurring the bottleneck? Perhaps by reshaping <code>x</code> before using <code>as_strided</code>?</p> <hr> Also note, for almost 100% of applications: B: [1-64], H, W: [1-60], C: [1-8] Fh, Fw: [1-12] <hr> <p>I'm also including some graphs here, for variation of timing with a variation in the tensor dimensions <code>B</code> (batch size), as well as <code>H, W</code> (image size) on my device (as you can see, the one involving reshape is already reasonably competitive with Tensorflow):</p> <p><a href="https://i.stack.imgur.com/lA3Tf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lA3Tf.png" alt="Variation with batch size" /></a> <a href="https://i.stack.imgur.com/ISrwr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ISrwr.png" alt="Variation with image size" /></a></p> <hr> <p>EDIT: An interesting find - the reshape-algorithm beats the non-reshape-algorithm by a factor of 5 on the CPU, but when I use the GPU (i.e. using CuPy instead of NumPy), both algorithms are equally fast (around twice as fast as TensorFlow's forward pass)</p>
<p>Reshaping of the strided array is a little bit costly, for the reasons you've mentioned (copy on non-contiguous array), but not as costly as you think. <code>np.einsum</code> can actually be a bottleneck in your application, depending on tensor sizes. As mentioned in <a href="https://stackoverflow.com/questions/56085669/convolutional-layer-in-python-using-numpy">Convolutional layer in Python using Numpy</a>, <code>np.tensordot</code> can be a good candidate to replace <code>np.einsum</code>.</p> <p>Just to give you a quick example:</p> <pre class="lang-py prettyprint-override"><code>x = np.arange(64*221*221*3).reshape((64, 221, 221, 3)) f = np.arange(4*4*3*5).reshape((4, 4, 3, 5)) s = (2, 2) B, H, W, C = x.shape # e.g. 64, 16, 16, 3 Fh, Fw, C, _ = f.shape # e.g. 4, 4, 3, 3 Sh, Sw = s # e.g. 2, 2 strided_shape = B, 1 + (H - Fh) // Sh, 1 + (W - Fw) // Sw, Fh, Fw, C print(strided_shape) # (64, 109, 109, 4, 4, 3) </code></pre> <p>after initializing the variables, we can test timings of the code parts</p> <pre class="lang-py prettyprint-override"><code>%timeit x_strided = as_strided(x, strided_shape, strides=(x.strides[0], Sh * x.strides[1], Sw * x.strides[2], x.strides[1], x.strides[2], x.strides[3]), ) &gt;&gt;&gt; 7.11 µs ± 118 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) %timeit f_reshaped = f.reshape(-1, f.shape[-1]) &gt;&gt;&gt; 450 ns ± 7.43 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) %timeit x_reshaped = x_strided.reshape(*x_strided.shape[:3], -1) # Bottleneck! &gt;&gt;&gt; 94.6 ms ± 896 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) # einsum without reshape %timeit np.einsum('wxy...,...d-&gt;wxyd', x_strided, f, optimize='optimal') &gt;&gt;&gt; 809 ms ± 1.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # einsum with reshape %%timeit f_reshaped = f.reshape(-1, f.shape[-1]) x_reshaped = x_strided.reshape(*x_strided.shape[:3], -1) # Bottleneck! k = np.einsum('wxyz,zd-&gt;wxyd', x_reshaped, f_reshaped, optimize='optimal') &gt;&gt;&gt; 549 ms ± 3.05 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # tensordot without reshape %timeit k = np.tensordot(x_strided, f, axes=3) &gt;&gt;&gt; 271 ms ± 4.89 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # tensordot with reshape %%timeit f_reshaped = f.reshape(-1, f.shape[-1]) x_reshaped = x_strided.reshape(*x_strided.shape[:3], -1) # Bottleneck! k = np.tensordot(x_reshaped, f_reshaped, axes=(3, 0)) &gt;&gt;&gt; 266 ms ± 3.15 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) </code></pre> <p>I got similar results with the tensor sizes in your code (i.e. 64, 16, 16, 3 and 4, 4, 3, 3).</p> <p>As you can see, there is an overhead with resize operation, but it makes matrix operations faster because of continuous data. Please, be aware that results would vary depending on cpu speed, cpu architecture/generation etc.</p>
python|numpy|conv-neural-network|tensor|numpy-einsum
1
1,908,064
43,213,880
Using 'datetime64[ns]' format for extraction from pandas dataframe
<p>I have a dataframe which has elements as:</p> <pre><code>df1[1:4] Sims 2014-01-02 [51, 53, 51, 3... 2014-01-03 [56, 48, 64, ... 2014-01-04 [57, 45, 47, ... </code></pre> <p>The sims are list of 500 elements each.</p> <p>I have another dataframe as:</p> <pre><code>df2[1:4] Date Month Day HE Year DateTime 2012-01-01 02:00:00 2012-01-01 1.0 1.0 2.0 2012.0 2012-01-01 03:00:00 2012-01-01 1.0 1.0 3.0 2012.0 2012-01-01 04:00:00 2012-01-01 1.0 1.0 4.0 2012.0 </code></pre> <p>I am trying the following in various configurations:</p> <pre><code>df1[df2['Date']] </code></pre> <p>But it errors out complaining about time format difference between <code>df1</code> index and <code>df2['Date']</code>. However, both have same time format as shown below.</p> <pre><code>df1.index[1:4] DatetimeIndex(['2014-01-02', '2014-01-03', '2014-01-04'], dtype='datetime64[ns]', freq=None) df2['Date'][1:4].values array(['2012-01-01T00:00:00.000000000', '2012-01-01T00:00:00.000000000', '2012-01-01T00:00:00.000000000'], dtype='datetime64[ns]') </code></pre> <p>How do I make the following work:</p> <pre><code>df1[df2['Date']] </code></pre> <p>Edit: Error message:</p> <pre><code>KeyError: "['2012-01-01T00:00:00.000000000' '2012-01-01T00:00:00.000000000'\n '2012-01-01T00:00:00.000000000' ..., '2016-12-31T00:00:00.000000000'\n '2016-12-31T00:00:00.000000000' '2016-12-31T00:00:00.000000000'] not in index" </code></pre>
<p><code>df1[df2['Date']]</code>-type indexing tends to error in my experience if you are trying to index on rows instead of columns. The problem is presumably that you let <code>pandas</code> guess over which axis you wish to slice, and this doesn't always pan out as desired.</p> <p>You could try using a more explicit indexing method such as <code>df1.loc[df2['Date'], :]</code> or <code>df1.xs(df2['Date'], 0)</code>.</p>
python|pandas
2
1,908,065
36,869,917
python 2.7 cPickle append List
<p>I am trying to write to append to a list using cPickle in <strong>python 2.7</strong> but it does not append.</p> <p><strong>Code:</strong></p> <pre><code>import cPickle import numpy a = numpy.array([[1, 2],[3, 4]]); output = open("1.pkl",'wb'); cPickle.dump(a,output); a = numpy.array([[4, 5],[6, 7]]); output = open("1.pkl",'ab'); cPickle.dump(a,output); print(cPickle.load(open("1.pkl",'rb'))); </code></pre> <p><strong>Output:</strong></p> <pre><code>[[1 2] [3 4]] </code></pre> <p>I was using this method to append the arrays in text files before</p> <p><strong>Code:</strong></p> <pre><code>a = numpy.array([[1, 2],[3, 4]]); text_file = open("1.txt", "w"); numpy.savetxt(text_file, a); text_file.close(); a = numpy.array([[4, 5],[6, 7]]); text_file = open("1.txt", "a"); numpy.savetxt(text_file, a); text_file.close(); text_file = open("1.txt", "r"); print(text_file.read()); </code></pre> <p><strong>Output:</strong></p> <pre><code>1.000000000000000000e+00 2.000000000000000000e+00 3.000000000000000000e+00 4.000000000000000000e+00 4.000000000000000000e+00 5.000000000000000000e+00 6.000000000000000000e+00 7.000000000000000000e+00 </code></pre> <p>I Was using this to write the data of a python simulation I setup for Power Systems. The output data is huge around 7GB. And the writing process was slowing down the simulation a lot. I read that cPickle can make writing process faster. </p> <p><strong>How do I append to the cPickle output file without having to read the whole data?</strong> </p> <p><strong>Or is there a better alternative to cPickle to make writing faster?</strong> </p>
<p>I don't believe you can just append to a pickle, or in a way that makes sense anyway.</p> <p>If you just get the current serialized version of an object and add another serialized object at the end of the file, it wouldn't just magically append the second object to the original list.</p> <p>You would need to read in the original object, append to it in Python, and then dump it back.</p> <pre><code>import cPickle as pickle
import numpy as np

filename = '1.pkl'
a = np.array([[1, 2],[3, 4]])
b = np.array([[4, 5],[6, 7]])

# dump `a`
with open(filename,'wb') as output_file:
    pickle.dump(a, output_file, -1)

# load `a` and append `b` to it
with open(filename, 'rb') as output_file:
    old_data = pickle.load(output_file)
    new_data = np.vstack([old_data, b])

# dump `new_data`
with open(filename, 'wb') as output_file:
    pickle.dump(new_data, output_file, -1)

# test
with open(filename, 'rb') as output_file:
    print(pickle.load(output_file))
</code></pre> <p>After reading your question a second time, you state that you don't want to read in the whole data again. I suppose this doesn't answer your question then, does it?</p>
python-2.7|pickle
0
1,908,066
48,518,434
Keras - Negative dimension size caused by subtracting 5 from 4 for 'conv2d_5/convolution' (op: 'Conv2D') with input shapes: [?,4,80,64], [5,5,64,64]
<p>I have a similar model to the one below, but after modifying the architecture, I keep getting the following error: </p> <blockquote> <p>Negative dimension size caused by subtracting 5 from 4 for 'conv2d_5/convolution' (op: 'Conv2D') with input shapes: [?,4,80,64], [5,5,64,64].</p> </blockquote> <p>I am still new to machine learning so I couldn't make much sense of the parameters. Any help?</p> <pre><code>model_img = Sequential(name="img") # Cropping model_img.add(Cropping2D(cropping=((124,126),(0,0)), input_shape=(376,1344,3))) # Normalization model_img.add(Lambda(lambda x: (2*x / 255.0) - 1.0)) model_img.add(Conv2D(16, (7, 7), activation="relu", strides=(2, 2))) model_img.add(Conv2D(32, (7, 7), activation="relu", strides=(2, 2))) model_img.add(Conv2D(32, (5, 5), activation="relu", strides=(2, 2))) model_img.add(Conv2D(64, (5, 5), activation="relu", strides=(2, 2))) model_img.add(Conv2D(64, (5, 5), activation="relu", strides=(2, 2))) model_img.add(Conv2D(128, (3, 3), activation="relu")) model_img.add(Conv2D(128, (3, 3), activation="relu")) model_img.add(Flatten()) model_img.add(Dense(100)) model_img.add(Dense(50)) model_img.add(Dense(10)) model_lidar = Sequential(name="lidar") model_lidar.add(Dense(32, input_shape=(360,))) model_lidar.add(Dropout(0.1)) model_lidar.add(Dense(10)) model_imu = Sequential(name='imu') model_imu.add(Dense(32, input_shape=(10, ))) model_imu.add(Dropout(0.1)) model_imu.add(Dense(10)) merged = Merge([model_img, model_lidar, model_imu], mode="concat") model = Sequential() model.add(merged) model.add(Dense(16)) model.add(Dropout(0.2)) model.add(Dense(1)) </code></pre> <p>Answer: I couldn't complete the training because of issues with sensor but the model works fine now thanks to the 2 answers below</p>
<p>Here are the output shapes of each layer in your model:</p> <pre><code>(?, 376, 1344, 3) - Input
(?, 126, 1344, 3) - Cropping2D
(?, 126, 1344, 3) - Lambda
(?, 60, 669, 16)  - Conv2D 1
(?, 27, 332, 32)  - Conv2D 2
(?, 12, 164, 32)  - Conv2D 3
(?, 4, 80, 64)    - Conv2D 4
</code></pre> <p>By the time the inputs have passed through the 4th Conv2D layer, the output shape is already <code>(4, 80)</code>. You cannot apply another Conv2D layer with filter size (5, 5) since the first dimension of your output is less than the filter size.</p>
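<p>How to fix it depends on what you want the network to learn, but one possible sketch (an assumption, not necessarily the right architecture for your data) is to stop the spatial size from collapsing by using <code>padding='same'</code> on the convolutions, keeping everything else as in your model:</p> <pre><code>model_img.add(Conv2D(16, (7, 7), activation='relu', strides=(2, 2), padding='same'))
model_img.add(Conv2D(32, (7, 7), activation='relu', strides=(2, 2), padding='same'))
model_img.add(Conv2D(32, (5, 5), activation='relu', strides=(2, 2), padding='same'))
model_img.add(Conv2D(64, (5, 5), activation='relu', strides=(2, 2), padding='same'))
model_img.add(Conv2D(64, (5, 5), activation='relu', strides=(2, 2), padding='same'))
model_img.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model_img.add(Conv2D(128, (3, 3), activation='relu', padding='same'))

model_img.summary()  # check that no dimension ever drops below the next filter size
</code></pre> <p>With <code>padding='same'</code> each strided layer only halves the height and width instead of also losing the filter margin, so the height stays large enough for the remaining layers.</p>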
python|neural-network|keras
4
1,908,067
20,296,902
Converting a list of numbers into a binary number base 10
<p>How do you convert a list of 0 and 1's into a binary number with base 10? For example: <code>([0,0,1,0,0,1])</code> will give me 9</p>
<p>try this:</p> <pre><code>int("".join(str(x) for x in a),2) </code></pre> <p>Convert the list into a string. And then make the binary to decimal conversion</p>
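<p>A quick check with the list from the question:</p> <pre><code>a = [0, 0, 1, 0, 0, 1]
print(int(''.join(str(x) for x in a), 2))  # '001001' in base 2 -&gt; 9
</code></pre>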
python|list|binary
8
1,908,068
48,056,977
Pandas merge TypeError: object of type 'NoneType' has no len()
<p>I'm experimenting with pandas merge left_on and right_on params. According to <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="noreferrer">Documentation 1</a> and <a href="https://pandas.pydata.org/pandas-docs/stable/merging.html" rel="noreferrer">Documentation 2</a></p> <p>Documentation 1: states that left_on and right_on are field names to join on in left/right DataFrame. Documentation 2: Columns from the left DataFrame to use as <strong>keys</strong></p> <p>What does <strong>keys</strong> means?</p> <p>Following documentation 1:</p> <pre><code>left_frame = pd.DataFrame({'key': range(5), 'left_value': ['a', 'b', 'c', 'd', 'e']}) right_frame = pd.DataFrame({'key': range(2,7), 'right_value': ['f', 'g', 'h', 'i', 'j']}) </code></pre> <p>I did this:</p> <pre><code>df = pd.merge(left_frame,right_frame,how='right',right_on='key') </code></pre> <p>left_frame has 'key' as field name, but yet it returns </p> <pre><code>TypeError: object of type 'NoneType' has no len() </code></pre>
<p>It seems you need:</p> <pre><code>df = pd.merge(left_frame, right_frame, how='right', on='key') </code></pre> <p>because same left and right column names.</p> <p>If columns names are different:</p> <pre><code>df = pd.merge(left_frame, right_frame, how='right', right_on='key1', left_on='key2') </code></pre> <blockquote> <p>What does <strong>keys</strong> means?</p> </blockquote> <p>If check <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="noreferrer"><code>merge</code></a>:</p> <blockquote> <p><strong>on</strong> : label or list</p> <p>Field names to join on. Must be found in both DataFrames. If on is None and not merging on indexes, then it merges on the intersection of the columns by default.</p> </blockquote> <p>And it means values in columns to join on.</p>
python|pandas
12
1,908,069
48,012,894
How do I find the number of Fridays between two dates (including both dates)?
<p>How can I find the <em>number of days</em> and also the <em>number of Fridays</em> between two dates in Python?</p>
<p>Number of days between two dates:</p> <pre><code>import datetime

start = datetime.datetime.strptime(input('Enter date in format yyyy,mm,dd : '), '%Y,%m,%d')
end = datetime.datetime.strptime(input('Enter date in format yyyy,mm,dd:'), '%Y,%m,%d')

diff = end - start
print(diff.days)
&gt;&gt; 361
</code></pre> <p>Getting the number of Fridays:</p> <pre><code># keys 0-6 are the weekdays, 0 being Monday
days = {
    0: 0,
    1: 0,
    2: 0,
    3: 0,
    4: 0,
    5: 0,
    6: 0,
}

full_weeks = diff.days // 7
remainder = diff.days % 7
first_day = start.weekday()  # weekday of the start date (0 = Monday)

for day in days.keys():
    days[day] = full_weeks

for i in range(0, remainder):
    days[(first_day + i) % 7] += 1

print(days[4])  # gives the number of Fridays in the date range
&gt;&gt; 2
</code></pre> <p>Python docs - Datetime: <a href="https://docs.python.org/2/library/datetime.html" rel="nofollow noreferrer">https://docs.python.org/2/library/datetime.html</a></p>
python-3.x
0
1,908,070
51,116,473
Expand a list from one dataframe to another dataframe pandas
<p>I was hoping to get help with the following:</p> <p>I have a given df below of:</p> <pre><code>df fruit State Count apples CA 45 apples VT 54 apples MI 18 pears TX 20 pears AZ 89 plums NV 62 plums ID 10 </code></pre> <p>I took all the highest counts for each fruit per state, and was able to get something back like:</p> <pre><code>df2 fruit State Count apples VT 54 pears AZ 89 plums NV 62 </code></pre> <p>Now I am trying to figure out how to get the 'State' values from df2 as a new column in df to look like something like this:</p> <pre><code>df fruit State Count Main apples CA 45 VT apples VT 54 VT apples MI 18 VT pears TX 20 AZ pears AZ 89 AZ plums NV 62 NV plums ID 10 NV </code></pre> <p>I can do something similar with the .transform() function, but i only know how to do that while calling the max function. Could i run transform on a df['list']? Or am i missing something else here?</p>
<p>Two step :-) without <code>groupby</code> </p> <pre><code>df2=df.sort_values('Count').drop_duplicates('fruit',keep='last') df['new']=df.fruit.map(df2.set_index('fruit').State) df Out[240]: fruit State Count new 0 apples CA 45 VT 1 apples VT 54 VT 2 apples MI 18 VT 3 pears TX 20 AZ 4 pears AZ 89 AZ 5 plums NV 62 NV 6 plums ID 10 NV </code></pre>
python|list|pandas|dataframe|expand
2
1,908,071
73,604,662
How to sample random datapoints from a dataframe
<p>I have a dataset X in panda dataframe with about 48000 datapoints. In the dataset here is a feature called gender, 1 representing male and 0 representing female. How do I sample entries from my original dataset? Say I want a new dataset Y with 1000 random datapoint samples from X with 700 males and 300 females? I came up with this simple algorithm but cant figure out why it isn't working</p> <pre><code>def Sample(X,maleSize,femalesize): DD=X for i in range(len(DD)): if (DD.race[i]==1.0)&amp;(DD.gender.sum()==maleSize): DD=DD.drop(i) if (DD.race[i]==0.0) &amp; ((len(DD)-DD.gender.sum())&gt;femalesize): DD=DD.drop(i) return DD </code></pre>
<p>Use:</p> <pre><code>males = X[X['gender']==1].sample(n=700) females = X[X['gender']==0].sample(n=300) ndf = males.append(females).sample(frac=1) </code></pre> <p>Or:</p> <pre><code>weights = [.7 if x==1 else .3 for x in X['gender']] X.sample(n=1000, weights = weights) </code></pre>
python|pandas|dataframe|dataset
1
1,908,072
17,413,175
How can I identify the minimum value in a numpy array, excluding the diagonal zeros?
<pre><code>&gt;&gt;&gt; x
array([[ 0,  2,  3,  4],
       [ 6,  0,  8,  9],
       [ 1,  2,  0, -9],
       [-9,  4,  3,  0]])
</code></pre> <p>I want to be able to identify -9 as the min(x), not 0.</p> <p>Thanks</p>
<ol> <li><p>Set the diagonal to the maximum value of its <code>dtype</code>. For floating point types, that would be <code>np.inf</code>, but for integers, you have to work a little harder.</p> <pre><code>x[np.diag_indices_from(x)] = np.iinfo(x.dtype).max </code></pre></li> <li><p>Take the <code>min</code>.</p></li> <li><p>Set the diagonal back to zero.</p></li> </ol>
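<p>Putting the three steps together, a small self-contained sketch using the array from the question:</p> <pre><code>import numpy as np

x = np.array([[ 0,  2,  3,  4],
              [ 6,  0,  8,  9],
              [ 1,  2,  0, -9],
              [-9,  4,  3,  0]])

diag = np.diag_indices_from(x)
saved = x[diag].copy()               # remember the original diagonal (zeros here)
x[diag] = np.iinfo(x.dtype).max      # 1. mask the diagonal with the dtype's maximum
print(x.min())                       # 2. take the min -&gt; -9
x[diag] = saved                      # 3. restore the diagonal
</code></pre>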
python|arrays|numpy
4
1,908,073
64,362,032
How to melt a dataframe while doing some operation?
<p>Let's say that I have the following dataframe:</p> <pre><code>index K1 K2 D1 D2 D3 N1 0 1 12 4 6 N2 1 1 10 2 7 N3 0 0 3 5 8 </code></pre> <p>Basically, I want to transform this dataframe into the following:</p> <pre><code>index COL1 COL2 K1 D1 = 0*12+1*10+0*3 K1 D2 = 0*4+1*2+0*5 K1 D3 = 0*6+1*7+0*8 K2 D1 = 1*12+1*10+0*3 K2 D2 = 1*4+1*2+0*5 K2 D3 = 1*6+1*7+0*8 </code></pre> <p>The content of <code>COL2</code> is basically the dot product (aka the scalar product) between the vector in <code>index</code> and the one in <code>COL1</code>. For example, let's take the first line of the resulting df. Under <code>index</code>, we have <code>K1</code> and, under <code>COL1</code> we have <code>D1</code>. Looking at the first table, we know that <code>K1 = [0,1,0]</code> and <code>D1 = [12,10,3]</code>. The scalar product of these two &quot;vectors&quot; is the value inside <code>COL2</code> (first line).</p> <p>I'm trying to find a way of doing this without using a nested loop (because the idea is to make something efficient), however, I don't exactly know how. I tried using the <code>pd.melt()</code> function and, although it gets me closer to what I want, it doesn't exactly get me to where I want. Could you give me a hint?</p>
<p>This is matrix multiplication:</p> <pre><code>(df[['D1','D2','D3']].T@df[['K1','K2']]).unstack().reset_index() </code></pre> <p>Output:</p> <pre><code> level_0 level_1 0 0 K1 D1 10 1 K1 D2 2 2 K1 D3 7 3 K2 D1 22 4 K2 D2 6 5 K2 D3 13 </code></pre>
python|pandas
7
1,908,074
73,094,287
Create folder by files year
<p>I have a lot of pictures in a folder, following a pattern for the file name; they differ only in the file type, which may be .jpg or .jpeg. For instance:</p> <pre><code>IMG-20211127-WA0027.jpg
IMG-20211127-WA0028.jpeg
IMG-20211127-WA0029.jpg
</code></pre> <p>I'm trying to find a way to create a folder for each year and move the pictures to the respective folder, given that the file name already has its year. How can I create a folder for each year and move the files to the right folder?</p> <p>I tried to adapt code from a tutorial, but I'm not getting what I need. Please see my code below:</p> <pre><code>from distutils import extension
import os
import shutil

path = &quot;D:\WhatsApp Images&quot;
files = os.listdir(path)
year = os.path.getmtime(path)

for file in files:
    filename, extension = os.path.splitext(file)
    extension = extension[1:]
    if os.path.exists(path+'/'+extension):
        shutil.move(path+'/'+file, path+'/'+extension+'/'+file)
    else:
        os.makedirs(path+'/'+extension)
        shutil.move(path+'/'+file,path+'/'+extension+'/'+file)
</code></pre>
<p>I like @alexpdev 's answer, but you can do this all within <code>pathlib</code> alone:</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path path_to_your_images = &quot;D:\\WhatsApp Images&quot; img_types = [&quot;.jpg&quot;, &quot;.jpeg&quot;] # I'm assuming that all your images are jpegs. Extend this list if not. for f in Path(path_to_your_images).iterdir(): if not f.suffix.lower() in img_types: # only deal with image files continue year = f.stem.split(&quot;-&quot;)[1][:4] yearpath = Path(path_to_your_images) / year # create intended path yearpath.mkdir(exist_ok = True) # make sure the dir exists; create it if it doesn't f.rename(yearpath / f.name) # move the file to the new location </code></pre>
python|python-3.x|operating-system|shutil
0
1,908,075
73,363,025
How to map nested dictionaries to dataframe columns in python?
<p>I have nested dictionaries like this:</p> <pre><code>X = {A:{col1:12,col-2:13},B:{col1:12,col-2:13},C:{col1:12,col-2:13},D:{col1:12,col-2:13}} Y = {A:{col1:3,col-2:5},B:{col1:1,col-2:2},C:{col1:4,col-2:7},D:{col1:8,col-2:7}} Z = {A:{col1:6,col-2:7},B:{col1:4,col-2:7},C:{col1:5,col-2:7},D:{col1:4,col-2:9}} </code></pre> <p>I also have a data frame with a single column like this:</p> <pre><code>Df: data frame([A,B,C,D],columns = ['Names']) </code></pre> <p>For every row in the data frame, I want to map the given dictionaries in this format:</p> <pre><code>Names Col-1_X Col-1_Y Col-1_Z Col-2_X Col-2_Y Col-2_Z A 12 3 6 13 5 7 B 12 1 4 13 2 7 C 12 4 5 13 7 7 D 12 8 4 13 7 9 </code></pre> <p>Can anyone help me get the data in this format?</p>
<p>Transpose and concat them:</p> <pre><code>dfX = pd.DataFrame(X).T.add_suffix('_X')
dfY = pd.DataFrame(Y).T.add_suffix('_Y')
dfZ = pd.DataFrame(Z).T.add_suffix('_Z')
output = pd.concat([dfX, dfY, dfZ], axis=1)
</code></pre> <p>output:</p> <pre><code>   col1_X  col-2_X  col1_Y  col-2_Y  col1_Z  col-2_Z
A      12       13       3        5       6        7
B      12       13       1        2       4        7
C      12       13       4        7       5        7
D      12       13       8        7       4        9
</code></pre>
python|python-3.x|pandas|dataframe|dictionary
1
1,908,076
73,181,851
How to read the docstring of test functions from a fixture?
<p>I was trying to get the docstrings of all the test functions from a fixture defined in conftest.py, as shown in the code below, so that they can be analyzed later.</p> <p>But, from here, how can I access the <code>__doc__</code> attribute of that function when the function is only available as a string (<code>request.node.name</code>)?</p> <p>Is there a way to read docstrings through <code>request</code> OR from other default pytest fixtures?</p> <p>Contents of conftest.py:</p> <pre><code>import pytest

@pytest.fixture(scope='function', autouse=True)
def publish_to_pubsub(request):
    print(&quot;\n\nSTARTED Test '{}'&quot;.format(request.node.name))
    test_name = request.node.name  # Here - need to get the docstring of this function

    def fin():
        print(&quot;COMPLETED Test '{}'\n&quot;.format(request.node.name))

    request.addfinalizer(fin)
</code></pre>
<p>Figured it out.</p> <p>Reference: <a href="https://docs.pytest.org/en/6.2.x/reference.html#pytest.FixtureRequest.function" rel="nofollow noreferrer">docs.pytest.org/en/6.2.x/reference.html#request</a></p> <pre><code>print(request.function.__doc__) </code></pre>
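<p>In the fixture from the question that would look roughly like this (a sketch; <code>request.function</code> is the test function object itself, and <code>__doc__</code> is <code>None</code> when a test has no docstring):</p> <pre><code>import pytest

@pytest.fixture(scope='function', autouse=True)
def publish_to_pubsub(request):
    docstring = request.function.__doc__  # the test's docstring, or None
    print(&quot;\n\nSTARTED Test '{}': {}&quot;.format(request.node.name, docstring))
    yield
    print(&quot;COMPLETED Test '{}'\n&quot;.format(request.node.name))
</code></pre>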
python|pytest|docstring|conftest
0
1,908,077
49,990,515
Value error with numpy arrays (shapes)
<p>I keep getting a ValueError when working with numpy arrays and I can't figure out what's causing it, as it seems to be working correctly outside of my for loop. Here is my code:</p> <pre><code>import numpy as np def x(t, x_0, w): return x_0*np.cos(w*t) def x_prime(t, x_0, w): return -x_0*w*np.sin(w*t) w = 1 x_0 = 1 h = 0.001 t = np.arange(0, 10, h) y = np.array([[0, 0]]*len(t)) y[0] = [x_0, 0] # The line below works correctly, but not inside my loop print np.array([x_prime(1, x_0, w), -w**2 * x(1, x_0, w)])*h + y[0] for i in range(1, len(t)): # Euler's method y[i] = y[i-1] + np.array([x_prime(t, x_0, w), -w**2 * x(t, x_0, w)]) * h </code></pre> <p>From the <code>print</code> line I get this output: <code>[ 9.99158529e-01 -5.40302306e-04]</code>, so that seems to be working correctly. However, I'm getting this error at the <code>y[i]</code> line:</p> <pre><code>ValueError: operands could not be broadcast together with shapes (2,) (2,10000) </code></pre> <p>I'm not sure why, since my print statement earlier is basically doing the same thing, and <code>y[i]</code> should be the same shape. Does anyone know what the problem is?</p>
<p>In the <code>print</code> line the first argument of <code>x()</code>/<code>x_prime()</code> is a scalar (<code>1</code>).</p> <p>In the <code>y[i]</code> line you pass <code>t</code> instead, which is a 10000-elements array, resulting in <code>np.array([x_prime(t, x_0, w), -w**2 * x(t, x_0, w)])</code> being a (2,10000) matrix, hence the <code>ValueError</code>.</p> <p>Perhaps what you want to do is:</p> <pre><code>y[i] = y[i-1] + np.array([x_prime(t[i], x_0, w), -w**2 * x(t[i], x_0, w)]) * h </code></pre>
python|arrays|python-2.7|numpy|valueerror
1
1,908,078
64,166,966
Why does the value of request.user appear to be inconsistent?
<p>I am wanting an application to be able to check whether the user that is currently logged in is &quot;test2&quot;. I'm using Django and running the following code:</p> <pre><code>&lt;script&gt; console.log('{{ request.user }}') {% if request.user == &quot;test2&quot; %} console.log('The user that is currently logged in is test2.') {% else %} console.log('There was an error.') {% endif %} &lt;/script&gt; </code></pre> <p>And my console is logging the following:</p> <pre><code>test2 There was an error. </code></pre> <p>Why is this? How do I get the application to recognise that &quot;test2&quot; is logged in?</p>
<p>This may be because <code>request.user</code> is actually a <code>User</code> object, but its <code>__str__</code> returns the <code>username</code> attribute of the <code>User</code>. So:</p> <pre><code>{% if request.user.username == &quot;test2&quot; %}
    console.log('The user that is currently logged in is test2.')
{% else %}
    console.log('There was an error.')
{% endif %}
</code></pre> <p>*Note: I am assuming <code>test2</code> is the <em>username</em> and that it is <em>unique</em>.</p>
javascript|python|django
2
1,908,079
64,009,481
How can I use MQTT long term in IoT Core?
<p>So first of all, what I really want to achieve: I want to know when an IoT device has stopped working (i.e. lost connection, shut down, basically it's not longer talking to IoT Core). I can't seem to find an implementation for this on GCP.</p> <p>I have a raspberry pi as my IoT device, I have configured it on IoT core and somewhere I read that since this is not implemented a way to solve it is to create a logging sink which activates a cloud function whenever there is a CONNECT/DISCONNECT log. This would serve my purpose and I have implemented this sink and cloud function to alert me.</p> <p>I have been following <a href="https://cloud.google.com/iot/docs/how-tos/mqtt-bridge" rel="nofollow noreferrer">this guide</a> on connecting to MQTT. However, the way the explain it, they set it up such that whenever the expiration time on the JWT is exceeded, they disconnect the client and create a new one to re-new the JWT. This would make it such that I am going to be alerted of connection/disconnection whenever this client needs to be renewed. So I won't be able to differentiate of a real issue from renewals of the MQTT client.</p> <p>In the same guide, I see that they mention MQTT long term or LTS, and they claim that this way you can set up the client once and communicate continuously through it for the supported time which it says its until 2030. This seems to be what I really want, but I have not been able to connect this way and they don't explain it other than saying the hostname should be <code>mqtt.2030.ltsapis.goog</code> and to use a primary and backup certificates which are different from the complete root CA from the first method.</p> <p>I tried using basically the same process for setting up the client:</p> <pre><code> client = mqtt.Client(client_id=client_id) # With Google Cloud IoT Core, the username field is ignored, and the # password field is used to transmit a JWT to authorize the device. client.username_pw_set( username='unused', password=create_jwt(project_id, private_key_file, algorithm)) # Enable SSL/TLS support. client.tls_set(ca_certs=ca_certs, tls_version=ssl.PROTOCOL_TLSv1_2) </code></pre> <p>but changing the hostname and giving it the primary cert where I would give it the complete ca_certs, but it won't accept it and I am not sure how to do it otherwise with primary and backup certifications. I am looking at the documentation on tls_set, but I don't see where these would go or how they differ from the complete ca certs. I haven't seen any other examples outside of this guide.</p> <p>I am hoping to be able to connect to this MQTT LTS so that I can maintain the connection without having to constantly renew the client.</p>
<p>The long term MQTT domain lets you use the LTS <em>configuration</em> for a long period of time, not the connection.</p> <p>As you mention, for your use case the solution would be to activate and use <a href="https://cloud.google.com/iot/docs/how-tos/device-logs" rel="nofollow noreferrer">device logs</a>. One of the events is triggered when a <a href="https://cloud.google.com/iot/docs/how-tos/device-logs#list_of_logged_device_events" rel="nofollow noreferrer">device disconnects</a> from IoT Core, and you can use that event to trigger an alert.</p> <p>Keep in mind that the <a href="https://cloud.google.com/iot/quotas#time_limits" rel="nofollow noreferrer">time limits</a> for the connection are set for security purposes, and the client should renew the connection.</p>
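<p>For illustration only, a Cloud Function subscribed to the Pub/Sub topic of such a logging sink could look roughly like the sketch below; the exact field names inside the log entry depend on how the sink is configured, and <code>send_alert</code> is a placeholder for whatever notification mechanism you use:</p> <pre><code>import base64
import json

def check_device_event(event, context):
    # a logging sink publishes each LogEntry as base64-encoded JSON in the message data
    entry = json.loads(base64.b64decode(event['data']).decode('utf-8'))
    event_type = entry.get('jsonPayload', {}).get('eventType')
    device_id = entry.get('labels', {}).get('device_id')
    if event_type == 'DISCONNECT':
        send_alert(device_id)  # placeholder for your own alerting logic
</code></pre>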
python|google-cloud-platform|mqtt|iot
2
1,908,080
53,320,728
Extract one hot encoding from a file into a dataset
<p>I have a dataset images and corresponding labels, where to each image file there is a .txt file which contains the one hot encoding:</p> <pre><code>0 0 0 0 1 0 </code></pre> <p>My code looks something like this:</p> <pre><code>imageString = tf.read_file('image.jpg') imageDecoded = tf.image.decode_jpeg(imageString) labelString = tf.read_file(labelPath) # decode csv string </code></pre> <p>but labelString looks like this:</p> <pre><code>tf.Tensor(b'0\n0\n0\n0\n1\n', shape=(), dtype=string) </code></pre> <p>Is there a way to transform this into an array of numbers inside tensorflow?</p>
<p>Here is a function to do that.</p> <pre><code>import tensorflow as tf def read_label_file(labelPath): # Read file labelStr = tf.io.read_file(labelPath) # Split string (returns sparse tensor) labelStrSplit = tf.strings.split([labelStr]) # Convert sparse tensor to dense labelStrSplitDense = tf.sparse.to_dense(labelStrSplit, default_value='')[0] # Convert to numbers labelNum = tf.strings.to_number(labelStrSplitDense) return labelNum </code></pre> <p>A test case:</p> <pre><code>import tensorflow as tf # Write file for test labelPath = 'labelData.txt' labelTxt = '0\n0\n0\n0\n1\n0' with open(labelPath, 'w') as f: f.write(labelTxt) # Test the function with tf.Session() as sess: label_data = read_label_file(labelPath) print(sess.run(label_data)) </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code>[0. 0. 0. 0. 1. 0.] </code></pre> <p>Note the function, as I wrote it, uses some of the new-ish API endpoints, you can also write it as below for more backwards compatibility, with almost the same meaning (there are slight differences between <a href="https://www.tensorflow.org/api_docs/python/tf/strings/split" rel="nofollow noreferrer"><code>tf.strings.split</code></a> and <a href="https://www.tensorflow.org/api_docs/python/tf/string_split" rel="nofollow noreferrer"><code>tf.string_split</code></a>):</p> <pre><code>import tensorflow as tf def read_label_file(labelPath): labelStr = tf.read_file(labelPath) labelStrSplit = tf.string_split([labelStr], delimiter='\n') labelStrSplitDense = tf.sparse_to_dense(labelStrSplit.indices, labelStrSplit.dense_shape, labelStrSplit.values, default_value='')[0] labelNum = tf.string_to_number(labelStrSplitDense) return labelNum </code></pre>
python|tensorflow|tensorflow-datasets
1
1,908,081
53,333,644
How to use dask to populate DataFrame in parallelized task?
<p>I would like to use dask to parallelize a number-crunching task.</p> <p>This task utilizes only one of the cores in my computer.</p> <p>As a result of that task I would like to add an entry to a DataFrame via <code>shared_df.loc[len(shared_df)] = [x, 'y']</code>. This DataFrame should be populated by all the (four) parallel workers / threads in my computer.</p> <p>How do I have to set up dask to perform this?</p>
<p>The right way to do something like this, in rough outline:</p> <ul> <li><p>make a function that, for a given argument, returns a data-frame of some part of the total data</p></li> <li><p>wrap this function in <code>dask.delayed</code>, make a list of calls for each input argument, and make a dask-dataframe with <code>dd.from_delayed</code></p></li> <li><p>if you really need the index to be sorted and the index to partition along different lines than the chunking you applied in the previous step, you may want to do <code>set_index</code> </p></li> </ul> <p>Please read the docstrings and examples for each of these steps!</p>
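<p>A minimal sketch of that outline, where <code>crunch</code> stands in for your own single-core number-crunching task (names and arguments are placeholders):</p> <pre><code>import dask
import dask.dataframe as dd
import pandas as pd

@dask.delayed
def crunch(arg):
    # placeholder for the real computation; returns one part of the total data
    return pd.DataFrame({'x': [arg * 2], 'label': ['y']})

parts = [crunch(arg) for arg in range(4)]                  # one delayed call per task
ddf = dd.from_delayed(parts, meta={'x': 'int64', 'label': 'object'})
result = ddf.compute(scheduler='processes')                # runs the four tasks in parallel
print(result)
</code></pre>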
python|pandas|python-multiprocessing|python-multithreading|dask
0
1,908,082
65,146,786
How to create Pandas Dataframe from lists?
<p>I have four lists like this:</p> <pre><code>A = ['column_1', 'column_2', 'column_3']
B = ['string_1', 'string_2', 'string_3']
numA = [1,2,3]
numB = [4,5,6]
</code></pre> <p>Is there a way to make a DataFrame which takes both lists <code>A</code> and <code>B</code> as column names and both lists <code>numA</code> and <code>numB</code> as a row? So it will look like this:</p> <pre><code>column_1 column_2 column_3 string_1 string_2 string_3
1        2        3        4        5        6
</code></pre>
<p>Try:</p> <pre><code>pd.DataFrame([numA+numB], columns=A+B) </code></pre>
python|pandas
3
1,908,083
72,039,582
Terminate called after throwing an instance of 'std::bad_alloc' from importing torch_geometric
<p>I am writing in python and getting the error:</p> <p>&quot;terminate called after throwing an instance of 'std::bad_alloc'.<br /> what(): std::bad_alloc.<br /> Aborted (core dumped)&quot;</p> <p>After lots of debugging, I found out the source of the issue is:</p> <pre><code>import torch_geometric </code></pre> <p>I even created a file with just this line of code, and I still get the error.<br /> I am running in a conda environment (4.10.3) I made sure that I installed torch_geometric while I was in the conda environment. I tried deleting and reinstalling, but this did not work.<br /> I also tried deleting and reinstalling torch/cuda.<br /> I googled the error, but only seemed to come up with issues in data allocation, but I'm not sure how this would be an issue, since I am just importing torch_geometric.</p> <p>Any ideas?</p>
<p>This problem is because of mismatched versions of pytorch. The current pytorch being used is 1.11.0, but when torch-scatter and torch-sparse were installed, 1.10.1 was used:</p> <ul> <li>pip install torch-scatter -f <a href="https://data.pyg.org/whl/torch-1.10.1+cu113.html" rel="nofollow noreferrer">https://data.pyg.org/whl/torch-1.10.1+cu113.html</a>.</li> <li>pip install torch-sparse -f <a href="https://data.pyg.org/whl/torch-1.10.1+cu113.html" rel="nofollow noreferrer">https://data.pyg.org/whl/torch-1.10.1+cu113.html</a></li> </ul> <p>So, torch-1.10.1 was used to install scatter and sparse, but torch-1.11.0 was the true version.</p> <p>Simply doing:</p> <ul> <li>pip uninstall torch-scatter</li> <li>pip uninstall torch-sparse</li> <li>pip install torch-scatter -f <a href="https://data.pyg.org/whl/torch-1.11.0+cu113.html" rel="nofollow noreferrer">https://data.pyg.org/whl/torch-1.11.0+cu113.html</a>.</li> <li>pip install torch-sparse -f <a href="https://data.pyg.org/whl/torch-1.11.0+cu113.html" rel="nofollow noreferrer">https://data.pyg.org/whl/torch-1.11.0+cu113.html</a></li> </ul> <p>Resolves the issue.</p>
pytorch|python-import|importerror|bad-alloc|pytorch-geometric
1
1,908,084
62,516,020
How to import external python modules to your AWS Lambda functions, in serverless framework?
<pre><code>root - module_1 - utils.py - module_2 - handler.py (Lambda Function) (Requires functions from utils.py) - serverless.yml - module_3 - handler.py (Lambda Function) - serverless.yml </code></pre> <p>How to import the classes and the methods in the utils.py, loacted in a completely different directory?</p>
<p>As you know, the serverless framework zips all the contents of the directory in which it is present and deploys it to the cloud. But our Lambda uses functions and classes from a completely different directory, so when we deploy the function, it doesn't include those files.</p> <p>How can we accomplish that?</p> <p>Well, we can copy that module and paste it into the Lambda function directory so that it is included while deploying the Lambda.</p> <p>That is not feasible, though, when the module is needed by, say, 10 different Lambda functions:</p> <pre><code>root
 - module_1
   - utils.py
 - module_2
   - handler.py (Lambda Function) (Requires functions from utils.py)
   - serverless.yml
 .
 .
 .
 - module_10
   - handler.py (Lambda Function) (Requires functions from utils.py)
   - serverless.yml
</code></pre> <p>A single change in utils.py has to be made in 10 different places, ugh....</p> <p>No worries, serverless has got a plugin which comes to your rescue:</p> <pre><code>serverless-package-external </code></pre> <p>This plugin will help you to solve your issue. Have a good day!</p>
python|amazon-web-services|module|aws-lambda|package
2
1,908,085
61,701,611
How to source additional environment in pycharm?
<p>I have a ROS application which has a work space with a setup.bash file and another python script with its own virtual environment. </p> <p>So far this is what I do in my terminal:</p> <pre><code>1_ pipenv shell (to activate my python virtual environment). 2_ source ../ros_workspace/devel/setup.bash 3_ python some_python_script.py </code></pre> <p>This code works as I expect. </p> <p>However, I want to do the same and run this script in pycharm, where my virtual environment is already activated. But how do I source the setup bash additionaly? My setup.bash file also looks like the following: <a href="https://i.stack.imgur.com/7mldy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7mldy.png" alt="enter image description here"></a></p> <p>What I have tried also is making a "before launch" as follows: <a href="https://i.stack.imgur.com/AOqIC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AOqIC.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/CUnDB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CUnDB.png" alt="enter image description here"></a></p>
<p>If you set your virtual environment as your interpreter of choice in PyCharm, it will use that particular virtual environment to run its scripts. However, you can also take advantage of some of the functionality that our run configurations provide.</p> <p><a href="https://i.stack.imgur.com/XXRMg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XXRMg.png" alt="enter image description here"></a></p> <p>You can check out the "Before Launch" part of the whole configuration window to enter scripts that you want executed.</p> <p>Once you've set your configurations, you can then go on to run or debug the configuration. Furthermore, if it is just environment variables that you want to source, you can just put in the environment variables in the "Environment Variables" box.</p> <p>In case you want to run a shellscript, you will need to create a new shell configuration like so:</p> <p><a href="https://i.stack.imgur.com/CXJdi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CXJdi.png" alt="enter image description here"></a></p> <p>Once you've added that configuration, you can then go on to reference it later.</p> <p>You will now see that you can reference that configuration in question:</p> <p><a href="https://i.stack.imgur.com/K9a1j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K9a1j.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/nmw1b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nmw1b.png" alt="enter image description here"></a></p>
python|pycharm|virtualenv|ros
0
1,908,086
60,616,004
glue etl jobs - get s3 subfolders using create_dynamic_frame.from_options
<p>I am creating an AWS Glue ETL job, but I'm running into some roadblocks with file retrieval.</p> <p>It seems that the following code only gets the files at the root folder 2017 and not any further. Is there any way to include all subfolders and files within them?</p> <pre><code>dyf = glueContext.create_dynamic_frame.from_options( 's3', {"paths": [ 's3://bucket/2017/' ]}, "json", transformation_ctx = "dyf") </code></pre>
<p>Found a solution for this problem, looks like the dictionary accepts more parameters, the one I needed was "recurse". You can also exclude certain patterns with "exclusions".</p> <p>Source <a href="https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-connect.html#aws-glue-programming-etl-connect-s3" rel="nofollow noreferrer">https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-connect.html#aws-glue-programming-etl-connect-s3</a></p> <pre><code>dyf = glueContext.create_dynamic_frame.from_options( 's3', { "paths": [ 's3://bucket/2017/' ], "recurse" : True }, "json", transformation_ctx = "dyf") </code></pre>
python|amazon-web-services|apache-spark|pyspark|aws-glue
4
1,908,087
70,040,973
Find how many grid cells contain x,y points
<p>So, I have written the code shown below:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt x = np.random.randint(-960,960,15) y = np.random.randint(-540,540,15) fig, ax = plt.subplots(figsize=(20, 11)) ax.scatter(x, y, marker='o', color='red', alpha=0.8) img = plt.imread(scene_folder) plt.imshow(img, extent = [-960, 960, -540, 540], aspect='auto') plt.grid(color='white', linestyle='--', linewidth=1) plt.show() plt.close() </code></pre> <p>This code generates this image:</p> <p><a href="https://i.stack.imgur.com/4jwPD.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4jwPD.jpg" alt="enter image description here" /></a></p> <p>So far, so good. What I want to achieve next is to check which grids contain points (red dots) and get an image like this one (grids that contain red dots are now white):</p> <p><a href="https://i.stack.imgur.com/FZl5Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FZl5Q.png" alt="enter image description here" /></a></p> <p>Then, I want to get the number of grids that are white (or whatever color) and divide it by the total number of grids in that image. The picture above has 48 grids, 13 of which are white (13/48=27,1%).</p> <p>Any ideas on how to approach this? Thanks in advance.</p>
<p>It looks like you need to list coordinates of all the pixels:</p> <pre><code>extent = [-960, 960, -540, 540] p1, p2 = 8, 6 x1 = p1*(x-extent[0])//(extent[1]-extent[0]) y1 = p2*(y-extent[2])//(extent[3]-extent[2]) &gt;&gt;&gt; np.transpose([x1, y1]) array([[4, 2], [6, 0], [2, 2], [7, 0], [6, 5], [1, 1], [5, 2], [1, 0], [4, 1], [3, 0], [7, 5], [3, 1], [1, 5], [4, 1], [0, 1]], dtype=int32) </code></pre> <p>And then you can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.unique.html" rel="nofollow noreferrer"><code>np.unique</code></a> to find a number of different ones:</p> <pre><code>&gt;&gt;&gt; len(np.unique(np.transpose([x1, y1]), axis=0)) 14 </code></pre>
python|arrays|numpy|2d
0
1,908,088
63,470,659
Transforming Wide dataset to long format with multiple columns
<p>I have a dataset that looks like the following:</p> <pre><code>Name County Industry Jobs.2019 Jobs.2018 Establish.2019 Establish.2018 EPW.2019 EPW.2018 rows_0 Adams, OH Auto 1 2 3 4 5 6 row_1 Allen, OH Mfg 2 3 5 7 9 10 ... row_100 Adams,OH IT 5 32 1 87 8 9 </code></pre> <p>Ultimately, I would like to transform in a long format such as:</p> <pre><code>Name County Industry Jobs Establish EPW Year rows_0 Adams, OH Auto 1 3 5 2019 rows_1 Adams, OH Auto 2 4 6 2018 rows_2 Allen, OH Mfg 1 5 9 2019 </code></pre> <p>I was able to get it into long format with melt:</p> <pre><code>data_df_unpivot = data_df.melt(id_vars=['County', 'Industry'], var_name=['metric'], value_name='value') </code></pre> <p>but that really only gets me the format:</p> <pre><code>County Industry metric value Adams, OH Auto Jobs.2019 1 Adams, OH Auto Jobs.2018 2 Adams, OH Auto EPW.2019 5 Adams, OH Auto EPW.2018 6 </code></pre> <p>I know I need to do a split on Jobs.2019, etc. but not sure what to do after the fact to get it into the appropriate format.</p> <p>All the data is coming from an API and is nested JSON that I had to flatten. The end goal is to load into SQL so I'm wondering if I do the ETL in Python or let Snowflake handle, either way I'm faced with the same issue with elongating the table.</p> <p>This will also be a living table as data comes out i.e Jobs.2020, Jobs.2021</p>
<p>The answer is in your title: use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.wide_to_long.html" rel="nofollow noreferrer"><code>pd.wide_to_long</code></a>.</p> <pre><code>print (pd.wide_to_long(df, stubnames=[&quot;Jobs&quot;,&quot;Establish&quot;,&quot;EPW&quot;], i=[&quot;Name&quot;,&quot;County&quot;,&quot;Industry&quot;], j=&quot;Year&quot;, sep=&quot;.&quot;, suffix=&quot;\d+&quot;) .reset_index()) Name County Industry Year Jobs Establish EPW 0 row_0 Adams, OH Auto 2019 1 3 5 1 row_0 Adams, OH Auto 2018 2 4 6 2 row_1 Allen, OH Mfg 2019 2 5 9 3 row_1 Allen, OH Mfg 2018 3 7 10 4 row_100 Adams, OH IT 2019 5 1 8 5 row_100 Adams, OH IT 2018 32 87 9 </code></pre>
python|pandas|snowflake-cloud-data-platform
0
1,908,089
61,067,147
Is there anyway to change the color of the highlighted text in a tkinter listbox?
<p>I am trying to change the text color of a highlighted item in a Tkinter listbox, but I can't find anything in the documentation for changing it. The selected text seems to default to white, and I would like it to stay the same color as depicted in my picture below. Also, bonus question: is it possible to not have the highlighted text underlined?</p> <p><img src="https://i.stack.imgur.com/oGVN5.png" alt="enter image description here"></p> <p>Thanks in advance!!!</p>
<p>Both is possible. You can set the <code>selectforeground</code> of the listbox widget to change its color and set the value of <code>activestyle</code> to remove the underline. </p> <p>Here is an example from <a href="http://effbot.org/tkinterbook/listbox.htm" rel="nofollow noreferrer">effbot page</a>, augmented with foreground color definition and without item underline:</p> <pre><code>from tkinter import * master = Tk() listbox = Listbox(master, selectforeground='Black', activestyle='none') listbox.pack() listbox.insert(END, "a list entry") for item in ["one", "two", "three", "four"]: listbox.insert(END, item) mainloop() </code></pre>
python|user-interface|tkinter
2
1,908,090
61,116,067
distribution of times grouped by weeks
<p>I want to find the distribution of times grouped by weeks for timeseries data. For example, the timeseries is: </p> <pre><code>2019-04-01 02:00:00 0.6 2019-04-02 10:45:00 2.0 2019-04-03 02:00:00 3.0 2019-04-10 00:00:00 0.6 2019-04-11 10:45:00 2.0 2019-04-13 10:45:00 6.0 2019-04-17 11:45:00 2.5 2019-04-18 11:45:00 3.0 2019-04-19 11:45:00 6.0 dtype: float64 </code></pre> <p>I want to know that in week 14 (week of <code>2019-04-01</code>) there were two records at <code>02:00:00</code>, one record at <code>10:45:00</code>, and no records for other times. In week 15, there was one record at <code>00:00:00</code>, two records at <code>10:45:00</code>, and no records for other times.</p> <p>This is currently my solution for finding the distribution over 15min increments of time:</p> <pre><code>import pandas as pd import numpy as np import datetime as dt def dist(series, bins): h = np.histogram(series, bins) return dict(zip(h[1][:-1], h[0])) # creating bins, i.e. 15min increments throughout the day times = pd.Series(index = pd.date_range(start='2019-01-01', end='2019-01-02', freq='15min')) times = set(times.index.time) times = list(times) times.sort() dummy = (dt.datetime.combine(dt.date.today(), max(times))+dt.timedelta(seconds = 10)).time() times = times + [dummy] # finding distribution each week df = pd.DataFrame({'week': list(timeseries.index.week), 'time': list(timeseries.index.time)}) df = df.groupby(by=['week'])['time'].apply(lambda x: dist(x, times)) df.index.names = ['week', 'time'] df.name = 'counts' df = df.reset_index().pivot(index='time', columns='week', values='counts') </code></pre> <p>Are there better ways to do this?</p>
<p>What about something very simple like that?</p> <pre class="lang-py prettyprint-override"><code># I'm starting with a Series here s.head(2) # time # 2019-04-01 02:00:00 0.6 # 2019-04-02 10:45:00 2.0 # Name: value, dtype: float64 # Resampling the series to the expected bin, say 15 min # filling with NaN undefined values s = s.resample('15min').asfreq() s.head(3) # time # 2019-04-01 02:00:00 0.6 # 2019-04-01 02:15:00 NaN # 2019-04-01 02:30:00 NaN # Freq: 15T, Name: value, dtype: float64 # Performing the summary to get how many times are defined by week / time # sampled by 15 min (NaN are not counted) result = s.groupby([s.index.week, s.index.time]).count() result.head() # time # 14 00:00:00 0 # 00:15:00 0 # 00:30:00 0 # 00:45:00 0 # 01:00:00 0 # Name: value, dtype: int64 # Getting only the hours with values result[result != 0] # time # 14 02:00:00 2 # 10:45:00 1 # 15 00:00:00 1 # 10:45:00 2 # 16 11:45:00 3 # Name: value, dtype: int64 </code></pre> <p>I think it could give you the answers you want.</p> <blockquote> <p>want to know that in week 14 (week of <code>2019-04-01</code>) there were two records at <code>02:00:00</code>, one record at <code>10:45:00</code>, and no records for other times. In week 15, there was one record at <code>00:00:00</code>, two records at <code>10:45:00</code>, and no records for other times.</p> </blockquote> <h1>Notes</h1> <p>This is how to generate the example <code>DataFrame</code>.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import io zz = """ time, value 2019-04-01 02:00:00, 0.6 2019-04-02 10:45:00, 2.0 2019-04-03 02:00:00, 3.0 2019-04-10 00:00:00, 0.6 2019-04-11 10:45:00, 2.0 2019-04-13 10:45:00, 6.0 2019-04-17 11:45:00, 2.5 2019-04-18 11:45:00, 3.0 2019-04-19 11:45:00, 6.0""" df = pd.read_table(io.StringIO(zz), sep=',') df = df.set_index(pd.DatetimeIndex(df['time'])) df = df.drop('time', axis=1) s = df.iloc[:,0] </code></pre>
python|pandas|dataframe|datetime|time-series
0
1,908,091
66,323,149
Load Text File into DataFrame with Specific Format
<p>I am trying to load a text file with the following format:</p> <pre><code>PR Maybe IMPACT TASK FIST 12 SA 1450 1 12 RE 0 </code></pre> <p>I tried something like this but the formatting of the text file is weird.</p> <pre><code>df = pd.read_csv(r&quot;file.TXT&quot;,sep = &quot; &quot;, delimiter = &quot;\t&quot;) </code></pre>
<p>Use <code>pd.read_fwf</code> instead if you are using a fixed-width file. <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_fwf.html" rel="nofollow noreferrer">Reference</a></p>
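<p>For example, a minimal sketch (this assumes the columns in <code>file.TXT</code> really are aligned at fixed positions; by default <code>read_fwf</code> tries to infer the column boundaries from the first rows of the file):</p> <pre><code># Read a fixed-width text file; column boundaries are inferred from the data
import pandas as pd

df = pd.read_fwf(r&quot;file.TXT&quot;)
print(df.head())

# If the inference guesses wrong, the widths can be given explicitly instead,
# e.g. three columns of 8 characters each (hypothetical widths):
# df = pd.read_fwf(r&quot;file.TXT&quot;, widths=[8, 8, 8])
</code></pre>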
python|pandas
0
1,908,092
63,229,894
How can I embed Python Codes into Excel(Xlwings) and how can everyone use it?
<p>There are a few issues I am wonder about xlwings module and need your help.</p> <p>The code I am working on, is briefly working on a database creation by entering the data I enter on one sheet row by row in the date_range specified on the other sheet.</p> <p>My Input sheet pic.:<a href="https://i.stack.imgur.com/DzAAb.png" rel="nofollow noreferrer">enter image description here</a></p> <p>My Output Sheet pic.: <a href="https://i.stack.imgur.com/bIitz.png" rel="nofollow noreferrer">enter image description here</a></p> <pre><code>import pandas as pd import xlwings as xw def world(): import os script_dir = os.path.dirname(__file__) rel_path = &quot;database.xlsm&quot; file_path = os.path.join(script_dir, rel_path) wb = xw.Book(file_path) wb.save() ws = wb.sheets[&quot;Sheet1&quot;] ws2 = wb.sheets[&quot;Sheet2&quot;] df = pd.read_excel(file_path) area=df[&quot;B&quot;].loc[0] country=df[&quot;B&quot;].loc[1] city=df[&quot;B&quot;].loc[2] firstdate=df[&quot;B&quot;].loc[3] lastdate=df[&quot;B&quot;].loc[4] daterange=pd.date_range(firstdate,lastdate) a = ws2.range('A' + str(ws2.cells.last_cell.row)).end('up').row + 1 for i in daterange: ws2.cells(a,1).value=area.upper() ws2.cells(a,2).value=country.upper() ws2.cells(a,3).value=city.upper() ws2.cells(a,4).value=i a+=1 wb.sheets['Sheet2'].autofit() </code></pre> <p>With the code I wrote by using pandas, I can run the program on my own computer.When I click &quot;RUN&quot; button on Excel File, it fills 'Sheet2' as I want(shown above Output pic.).</p> <p>My questions are how can I embed this code that I wrote using pandas, into Excel? And how do I convert it to a format that everyone can use (including other computers without pandas or xlwings)?</p> <p>Thanks in advance.</p>
<p>I don't know if it is possible to do exactly what you want. I know that LibreOffice Calc can call Python scripts, but I don't know about their limitations or how you would import pandas there.</p> <p>I suggest you recreate your code in VBA, considering that your code is simple.</p>
python|excel|vba|xlwings
-1
1,908,093
62,202,317
How to count repeated label between given RFMin and RFMax
<p>I am reading the following CSV file that has three columns and multiple rows: </p> <pre><code>Notation RFMin RFMax AA100 1000 3333 BB200 3300 4500 </code></pre> <p>Currently my output file looks like this:</p> <pre><code> Notation RFRange Label AA100 1000 AG, IF AA100 1259 AG, IF AA100 1518 AG, TE, WW AA100 1777 AG, TE, WW AA100 2037 Unknown AA100 2296 Unknown AA100 2555 MH, WE AA100 2814 MH, WE AA100 3074 DT, MH, WE AA100 3333 DT, MH, WE BB200 3300 DT, MH, WE BB200 3433 DT, MH, WE BB200 3567 DT, MH, WE BB200 3700 DT, MH, WE BB200 3833 DT, MH, WE BB200 3967 DT, MH, WE BB200 4100 Unknown BB200 4233 Unknown BB200 4366 Unknown BB200 4500 Unknown </code></pre> <ol> <li>I am printing 10 numbers between <code>RFMin</code> and <code>RFMax</code> using linspace </li> <li>I am printing Notations based on the <code>N</code> number of samples between <code>RFMin</code> and <code>RFMax</code>, </li> <li>I am labeling those 10 numbers from #1 based on which condition applies </li> </ol> <p>How should I do #4?</p> <ol start="4"> <li>I want to find out how many times each label repeats between each <code>RFMin</code> and <code>RFMax</code>. For example, between <code>1000-3333</code>, <code>'AG'</code> repeats 4 times in total, <code>'MH'</code> repeats 5 times, <code>'IF'</code> repeats 2 times, <code>'WW'</code> 2 times, and so on... , in <code>3300-4500</code> - <code>'AG'</code> repeats 0 times, <code>'MH'</code> repeats 6 times, <code>'IF'</code> repeats 0 times, and so on… </li> </ol> <p>Here is the code:</p> <pre><code>import pandas as pd import numpy as np df = pd.read_csv(filePath) N = 10 RFarray = [] Notation=[] c = np.zeros((df.shape[0], N)) for index, col in df.iterrows(): RFMin = col['RFMin'] RFMax = col['RFMax'] c[index] = np.linspace(RFMin, RFMax, N) for ir, r1 in enumerate(c): for b in r1: RFarray.append(b) Notation.append(df.loc[ir, 'Notation']) dict = {'Notation': Notation,'RFRange': RFarray} data = pd.DataFrame(dict) data['Label'] = 'Unknown' data.loc[(data['RFRange'] &lt; 1300), 'Label'] = 'AG, IF' data.loc[(data['RFRange'] &gt;=1300) &amp; (data['RFRange'] &lt;=2000), 'Label'] = 'AG, TE, WW' data.loc[(data['RFRange'] &gt;=2500) &amp; (data['RFRange'] &lt;=2900), 'Label'] = 'MH, WE' data.loc[(data['RFRange'] &gt;=3000) &amp; (data['RFRange'] &lt;=4000), 'Label'] = 'DT, MH, WE' data.to_csv('Output.csv', header=True, index=None, float_format='%.2f') </code></pre>
<p>Try this: merge the two dataframes (<code>df1</code> being your input CSV with <code>RFMin</code>/<code>RFMax</code>, and <code>df2</code> the labeled table you built, i.e. <code>data</code> in your code), convert the label column to a list, apply the range filter, group by notation and concatenate all the Labels into one list per notation, then use <code>Counter</code> from <code>collections</code> to count each element in the list:</p> <pre><code>from collections import Counter df2['Label'] = df2['Label'].str.split(',') df = pd.merge(df1, df2, on=['Notation']) df = df[(df['RFRange']&gt;df['RFMin']) &amp; (df['RFRange']&lt;df['RFMax'])] df = df.groupby(by='Notation', as_index=False).agg({'Label': 'sum'}) df['counts'] = df['Label'].apply(lambda x: Counter(x)) print(df) Notation Label counts 0 AA100 [AG, IF, AG, TE, WW, AG, TE, WW, Unknown, Unkn... {'AG': 3, 'IF': 1, 'TE': 2, 'WW': 2, 'Unknown'... 1 BB200 [DT, MH, WE, DT, MH, WE, DT, MH, WE, DT, MH, W... {'DT': 5, 'MH': 5, 'WE': 5, 'Unknown': 3} </code></pre>
python|pandas
0
1,908,094
31,226,059
Why is my implementation of Iterative Deepening Depth-First Search taking as much memory as BFS?
<p>BFS requires <code>O(b^d)</code> memory, whereas IDDFS is known to run in only <code>O(bd)</code> memory. However, when I profile these two implementations they turn out to use exactly the same amount of RAM - what am I missing?</p> <p>I'm using a <code>Tree</code> class with a branching factor of 10 to run the tests:</p> <pre><code>class Tree(object): def __init__(self, value): self.key = value self.children = [ ] def insert(self, value): if len(self.children) == 0: self.children = [ Tree(value) for x in range(10) ] else: for ch in self.children: ch.insert(value) </code></pre> <p>My implementation of <code>iddfs</code>:</p> <pre><code>def iddfs(t): for i in range(0,8): printGivenLevel(t, i) def printGivenLevel(t, level): if not t: return if level == 1: pass elif level &gt; 1: for ch in t.children: printGivenLevel(ch, level - 1) </code></pre> <p><code>BFS</code> is</p> <pre><code>def bfs(t): currLevel = [t] nextLevel = [] while currLevel: for node in currLevel: if node: nextLevel.extend([ x for x in node.children ]) currLevel = nextLevel nextLevel = [] </code></pre> <p>The code is not really doing anything, just looping through the whole tree. I'm using <a href="https://github.com/fabianp/memory_profiler" rel="nofollow">https://github.com/fabianp/memory_profiler</a> to profile the code.</p>
<p>IDDFS's memory benefits only apply to an implicit tree, where nodes are generated as they're reached and discarded soon after. With a tree represented completely in memory, the tree itself already takes <code>O(b^d)</code> memory, and the memory required for either IDDFS or BFS is minor in comparison.</p>
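<p>For contrast, a rough sketch of what an implicit-tree IDDFS might look like (the names here, e.g. <code>children_of</code> and the branching factor, are made up for illustration and are not taken from the question's code). Because children are generated on demand and only the current recursion path stays alive, memory use stays around O(bd) rather than O(b^d):</p> <pre><code>def children_of(node, branching=10):
    # generate children lazily instead of storing them on a node object
    return [node * branching + i for i in range(1, branching + 1)]

def depth_limited(node, limit):
    if limit == 0:
        return
    for child in children_of(node):  # the children list only exists inside this loop
        depth_limited(child, limit - 1)

def iddfs(root, max_depth=8):
    # iterative deepening: repeat a depth-limited DFS with a growing limit
    for depth in range(max_depth):
        depth_limited(root, depth)

iddfs(0, max_depth=5)
</code></pre>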
python|algorithm|depth-first-search|breadth-first-search|iterative-deepening
4
1,908,095
66,952,724
How to make list including certain multi-bit letters by using Python?
<p>I am trying to list up certain words by using Python.</p> <p>Here's the text file(input file): (the 'random_text' are random texts that exist in text file)</p> <pre><code>random_text reg A1_M0; reg A1_M1; reg A1_M10; reg A1_M11; reg A1_M2; reg [3:0] B1_M0; reg [3:0] B1_M1; reg [3:0] B1_M10; reg [3:0] B1_M11; reg [3:0] B1_M2; random_text </code></pre> <p>I want to make below two lists by extracting the lines and spliting the data.</p> <pre><code>list1 = [A1_M0, A1_M1, A1_M10, A1_M11, A1_M2] list2 = [B1_M0[3], B1_M0[2], B1_M0[1], B1_M0[0], B1_M1[3], B1_M1[2], ... , B1_M2[1], B1_M2[0]] </code></pre> <p>I thought about 3 steps:</p> <ol> <li><p>Extract target data (by using '.findall') -&gt; A1_M0, A1_M1, A1_M10, A1_M11, A1_M2, B1_M0[3:0], B1_M1[3:0], B1_M10[3:0], B1_M11[3:0], B1_M2[3:0]</p> </li> <li><p>In case of 4-bit-data, split all of those into 1-bit data.</p> </li> <li><p>Make these into 2 types of lists (list1, list2)</p> </li> </ol> <p>I've tried '.readlines' method, but It's hard for me to achieve. How can I deal with this?</p>
<p>I made some assumptions that I hope were correct.</p> <ol> <li>I group all bits together by their prefix, as that seems to be the commonality between the bits in <code>list1</code> and <code>list2</code></li> <li>The difference between your random text and meaningful data was the presence of the prefix &quot;reg &quot;</li> <li>There are only 2 types of &quot;reg &quot; prefixed lines, and the difference is the presence of the &quot;[&quot; character, you can modify the conditions if there is a more nuanced definition than this.</li> </ol> <pre class="lang-py prettyprint-override"><code>def get_reg_lines(f): for line in f.readlines(): if &quot;reg &quot; in line: yield line.removeprefix('reg ').removesuffix(';\n') def handle_single(d, line): k, _ = line.split(&quot;_&quot;, 1) d.setdefault(k, []).append(line) def handle_multi(d, line): r, b = line.split(&quot; &quot;, 1) k, _ = b.split(&quot;_&quot;, 1) start, end = [int(v) for v in r[1:-1].split(&quot;:&quot;)] d.setdefault(k, []).extend([f'{b}[{i}]' for i in range(start, end - 1, -1)]) def main(): d = {} with open('input.txt', 'r') as f: for line in get_reg_lines(f): if &quot;[&quot; in line: handle_multi(d, line) else: handle_single(d, line) print(d) if __name__ == '__main__': main() </code></pre> <p>I put them in a <code>dict</code> d keyed with the prefixes <code>A1</code> and <code>B1</code>. You can unpack these into whatever lists you need if that is your preferred way to interact with the data.</p> <p>Also note: I used <code>removeprefix</code> and <code>removesuffix</code> which requires Python3.9 or newer, replace with slice notation for prior versions.</p> <pre class="lang-py prettyprint-override"><code>yield line[4:-2] </code></pre> <p>*Code was tested on input.txt with the following contents:</p> <pre><code>random_text reg A1_M0; reg A1_M1; reg A1_M10; reg A1_M11; reg A1_M2; reg [3:0] B1_M0; reg [3:0] B1_M1; reg [3:0] B1_M10; reg [3:0] B1_M11; reg [3:0] B1_M2; random_text </code></pre>
python|arrays|python-3.x|list|readline
1
1,908,096
42,973,764
Error in shape of logits in TensorFlow
<p>I am building an LSTM with TensorFlow and I think I am mis-defining my outputs because I am getting the following error: </p> <pre><code>InvalidArgumentError (see above for traceback): logits and labels must have the same first dimension, got logits shape [160,14313] and labels shape [10] [[Node: SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits = SparseSoftmaxCrossEntropyWithLogits[T=DT_FLOAT, Tlabels=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](add, Reshape_1)]] </code></pre> <p>Key here being: "<strong>got logits shape [160,14313] and labels shape [10]</strong>". Is the <code>num_steps</code> still being taken into account for the shape of the output? </p> <p>The input is <code>num_steps</code> (16) wide and the output is just size 1, both with <code>batch_size</code> 10.</p> <p>I have defined the network like this: </p> <pre><code>x = tf.placeholder(tf.int32, [None, num_steps], name='input_placeholder') y = tf.placeholder(tf.int32, [None, 1], name='labels_placeholder') x_one_hot = tf.one_hot(x, num_classes) rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)] # still a list of tensors (batch_size, num_classes) tmp = tf.stack(rnn_inputs) print(tmp.get_shape()) tmp2 = tf.transpose(tmp, perm=[1, 0, 2]) print(tmp2.get_shape()) rnn_inputs = tmp2 cell = tf.contrib.rnn.LSTMCell(state_size, state_is_tuple=True) cell = tf.contrib.rnn.MultiRNNCell([cell] * num_layers, state_is_tuple=True) init_state = cell.zero_state(batch_size, tf.float32) print(init_state) rnn_outputs, final_state = tf.nn.dynamic_rnn(cell, rnn_inputs, initial_state=init_state) with tf.variable_scope('softmax'): W = tf.get_variable('W', [state_size, num_classes]) b = tf.get_variable('b', [num_classes], initializer=tf.constant_initializer(0.0)) #reshape rnn_outputs and y rnn_outputs = tf.reshape(rnn_outputs, [-1, state_size]) y_reshaped = tf.reshape(y, [-1]) logits = tf.matmul(rnn_outputs, W) + b </code></pre>
<p>Fixed by adding this line: </p> <pre><code>rnn_outputs = rnn_outputs[:, num_steps-1, :] </code></pre>
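<p>Presumably this works because <code>tf.nn.dynamic_rnn</code> returns outputs of shape <code>[batch_size, num_steps, state_size]</code>: with a batch of 10 and 16 steps, flattening everything produces the 160 logit rows from the error message, while there are only 10 labels. Keeping just the last time step leaves one output row per example. A sketch of where the added line would go (assuming it is inserted right after <code>dynamic_rnn</code> in the question's script; the existing reshape then becomes a no-op):</p> <pre><code>rnn_outputs, final_state = tf.nn.dynamic_rnn(cell, rnn_inputs, initial_state=init_state)

# keep only the last time step:
# shape goes from [batch_size, num_steps, state_size] to [batch_size, state_size]
rnn_outputs = rnn_outputs[:, num_steps - 1, :]

rnn_outputs = tf.reshape(rnn_outputs, [-1, state_size])  # now a no-op
y_reshaped = tf.reshape(y, [-1])                          # [batch_size]
logits = tf.matmul(rnn_outputs, W) + b                    # [batch_size, num_classes]
</code></pre>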
tensorflow|neural-network|deep-learning|recurrent-neural-network
1
1,908,097
50,859,183
Python plot after string
<p>I want to plot data from a .dat file, but the file starts with strings. For example, my file has the columns col 1, col 2, col 3 and I want to read the data under col 3. I want to skip the first two rows because they contain strings, and read only the values under col 3. How can I skip the strings? If we say the data is 5x3, then I only want to plot rows 3:5 of column 3. How can I do this? I am sharing my code below; it only works if I remove the strings. </p> <pre><code> #-------input.dat--------- # x y z # col 1 col 2 col 3 # 3 5 5 # 5 6 4 # 7 7 3 import matplotlib.pyplot as plt import numpy as np import pylab as pl data = open('input.dat') lines = data.readlines() data.close() x1=[] for line in lines: p= line.split() x1.append(float(p[3])) xv=np.array(x1) plt.plot(xv) plt.show() </code></pre>
<p>Since you are already importing <code>numpy</code>, you could use <a href="https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.genfromtxt.html" rel="nofollow noreferrer"><code>np.genfromtxt</code></a> here to make things a lot simpler, since it has the option <code>skip_header</code> which tells it how many header rows to skip.</p> <pre><code>import matplotlib.pyplot as plt import numpy as np data = np.genfromtxt('input.dat', skip_header=2) xv = data[:, 2] plt.plot(xv) plt.show() </code></pre> <p>Or, if you only need to read in column 3:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np xv = np.genfromtxt('input.dat', skip_header=2, usecols=(2,)) plt.plot(xv) plt.show() </code></pre>
python|python-3.x|numpy|matplotlib|plot
3
1,908,098
35,005,820
sFrame into scipy.sparse csr_matrix
<p>I have a sframe like:</p> <pre><code>x = sf.SFrame({'users': [{'123': 1.0, '122': 5}, {'134': 3.0, '123': 10}]}) </code></pre> <p>I want to convert into scipy.sparse csr_matrix without invoking graphlab create, but only using sframe and Python.</p> <p>How to do it?</p>
<p>Assuming you want the row number to be the row index in the output sparse matrix, the only tricky step is using <code>SFrame.stack</code> - from there you should be able to construct a <code>csr_matrix</code> directly.</p> <pre><code>import sframe as sf from scipy.sparse import csr_matrix x = sf.SFrame({'users': [{'123': 1.0, '122': 5}, {'134': 3.0, '123': 10}]}) x = x.add_row_number('row_id') x = x.stack('users') A = csr_matrix((x['X3'], (x['row_id'], x['X2'])), shape=(2, 135)) </code></pre> <p>I'm also hard-coding the dimension of the matrix here, but that's probably something you'd want to figure out programmatically.</p>
python|sframe
0
1,908,099
45,263,361
Attribute name with symbols in XML with Python 3.6.2
<p>I'm creating an XML file with ElementTree. According to the receiving server's specification, the attribute of the node should be "name-1", but the IDE gives me an error if I try to use it. Please advise on the correct syntax for this attribute.</p> <pre><code>import xml.etree.cElementTree as ET Query = ET.Element("Query") Command = ET.SubElement(Query, "Command") Items = ET.SubElement(Command, "Items") ET.SubElement(Items, "field2", name-1="Name").text </code></pre>
<p>Use a dictionary to create the attribute. This prevents <code>name-1</code> from being interpreted as "<code>name</code> minus one".</p> <pre><code>ET.SubElement(Items, "field2", {"name-1": "Name"}) </code></pre>
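<p>For context, the question's snippet with that one change applied might look like this (a sketch; the element names are the ones from the question, and the text value is just a placeholder):</p> <pre><code>import xml.etree.cElementTree as ET

Query = ET.Element("Query")
Command = ET.SubElement(Query, "Command")
Items = ET.SubElement(Command, "Items")

# "name-1" is not a valid Python keyword argument, so pass it via the
# attribute dictionary instead
field2 = ET.SubElement(Items, "field2", {"name-1": "Name"})
field2.text = "placeholder value"  # hypothetical content

print(ET.tostring(Query))
</code></pre>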
xml|elementtree|python-3.6
0