Unnamed: 0 (int64): 0 to 1.91M
id (int64): 337 to 73.8M
title (string lengths): 10 to 150
question (string lengths): 21 to 64.2k
answer (string lengths): 19 to 59.4k
tags (string lengths): 5 to 112
score (int64): -10 to 17.3k
1,906,300
4,988,625
Python: How do I install packages within my package or repository?
<p>My program requires specific versions of several python packages. I don't want to have to require the user to specifically install the specific version, so I feel that the best solution is to simply install the package within the source repository, and to distribute it along with my package.</p> <p>What is the simplest way to do this?</p> <p>(Please be detailed - I'm familiar with pip and easy_install, but they don't seem to do this, at least not by default).</p>
<p>Go for <a href="http://pypi.python.org/pypi/virtualenv" rel="nofollow">virtualenv</a>. Life will be much easier. MUCH easier. Basically, it allows you to create specific python environments as needed.</p>
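<p>For context, a minimal sketch of the virtualenv workflow being suggested (the package name, version and paths below are placeholders, not taken from the question):</p>
<pre><code># create and activate an isolated environment
python -m virtualenv env
source env/bin/activate          # on Windows: env\Scripts\activate

# install the exact versions the program needs
pip install "somepackage==1.2.3"

# record them so a user can recreate the same environment
pip freeze > requirements.txt
# the user later reproduces it with: pip install -r requirements.txt
</code></pre>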
python|version-control|package
1
1,906,301
61,987,921
RabbitMQ cleanup after consumers
<p>I am having a problem handling the following scenario:</p> <p>I have one publisher which wants to upload a lot of binary information (like images), so instead I want it to save each image and upload a path or some reference to that file.</p> <p>I have multiple <strong>different</strong> consumers which read from this MQ and do different things. To do that, I simply send the information fan-out to an exchange and define a separate queue for each consumer.</p> <p>This could work just fine, except for the trashing of the FS, since no one is responsible for deleting the saved images. I need some way of defining a hook for the moment <strong>every</strong> consumer is done consuming a message from an exchange. Maybe setting some callback for the cleanup of the message in the exchange?</p> <p>A few notes:</p> <ol> <li><p>Everything happens locally; we can assume that everything is on the same FS for simplicity.</p></li> <li><p>I know that I can simply let the publisher save the image and give FS links to the different consumers, but this solution is problematic, since I want the publisher to be oblivious to the consumers. I don't want to update the publisher's code every time a new consumer is added (or one is removed).</p></li> <li><p>I am working with Python (the pika module).</p></li> <li><p>I am new to message queues, so if you have a better suggestion for getting this done, I would love to learn about it.</p></li> </ol>
<p>Once the image is processed by a consumer, publish a <code>FileProcessed</code> message with the information related to the file. That message can be picked up by another consumer which is in charge of the cleanup, so that consumer will remove the file.</p> <p>Additionally, make sure that your messages are re-queued in case of failure, so they will be picked up later and their processing will be retried. Make sure the retry count is limited; when the limit is reached, route the message to a dead letter exchange.</p> <p>Some useful links:</p> <ul> <li><a href="https://pika.readthedocs.io/en/stable/modules/spec.html#pika.spec.BasicProperties" rel="nofollow noreferrer">pika.BasicProperties</a> for handling retries.</li> <li><a href="https://www.rabbitmq.com/tutorials/tutorial-one-python.html" rel="nofollow noreferrer">RabbitMQ tutorial</a></li> <li><a href="https://medium.com/@deepakjoseph08/re-enqueuing-with-delay-in-rabbitmq-with-pika-d30797056468" rel="nofollow noreferrer">Pika DLX Implementation</a></li> </ul>
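<p>A rough pika sketch of the cleanup-consumer idea (the queue name, payload fields and expected-consumer count are invented for illustration; the retry and dead-letter wiring from the links above is omitted):</p>
<pre><code>import json
import os
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='file_processed')

# each worker publishes this once it is done with a file
def report_processed(file_path, consumer_name):
    body = json.dumps({'path': file_path, 'consumer': consumer_name})
    channel.basic_publish(exchange='', routing_key='file_processed', body=body)

# a dedicated cleanup consumer deletes the file once every known
# consumer has reported in (the expected count is application knowledge)
EXPECTED_CONSUMERS = 3
seen = {}

def on_processed(ch, method, properties, message_body):
    info = json.loads(message_body)
    seen.setdefault(info['path'], set()).add(info['consumer'])
    if len(seen[info['path']]) == EXPECTED_CONSUMERS:
        os.remove(info['path'])
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue='file_processed', on_message_callback=on_processed)
channel.start_consuming()
</code></pre>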
python|rabbitmq
0
1,906,302
70,284,243
Python : How to populate a column value to new column in another dataframe on comparing other column?
<p>Have got two dataframe</p> <p><strong>df</strong></p> <pre><code>Store Sku Fixture 11 AA Product 12 BB Tier </code></pre> <p><strong>df1</strong></p> <pre><code>Store Sku Bit 11 AA 1 11 AA 2 12 CC 1 12 CC 2 12 CC 3 </code></pre> <p>So, comparing the 'Store' column from dataframes, need to populate 'Fixture' column in df1 from df. <strong>Expected Output:</strong></p> <pre><code>Store Sku Bit Fixture 11 AA 1 Product 11 AA 2 Product 12 CC 1 Tier 12 CC 2 Tier 12 CC 3 Tier </code></pre> <p>Thanks in Advance!</p>
<p>You are using dataframes as a relational database. If you are familiar with SQL, you will find <a href="https://pandas.pydata.org/docs/getting_started/comparison/comparison_with_sql.html" rel="nofollow noreferrer">this documentation</a> useful.</p> <p>Joining can be achieved either using <code>Series.map</code> or <code>merge</code>.</p> <p><strong>Option 1: map</strong></p> <p>First, build a dictionary holding the relation used for the mapping:</p> <pre><code>map_dict = df.set_index('Store')['Fixture'].to_dict() </code></pre> <p><code>set_index</code> uses a column as row labels. By default, it does not modify the original <code>DataFrame</code>.</p> <p>Then, we use this dict for a mapping of the 'Store' <code>Series</code>, and append the output to <code>df1</code>:</p> <pre><code>df1['Fixture'] = df1['Store'].map(map_dict) </code></pre> <p><strong>Option 2: <code>merge</code></strong></p> <p>You can perform a join of the two <code>DataFrame</code>s using merge. First, we get rid of the 'Sku' column of <code>df</code>, so we do not have issues with column names, and then perform the join:</p> <pre><code>joined = pd.merge(df1, df[['Store', 'Fixture']], on='Store') </code></pre> <p>If you don't get rid of <code>df</code>'s 'Sku' column, you will end up having an 'Sku_x' column from <code>df1</code> and a 'Sku_y' column from <code>df</code>.</p>
python|pandas|dataframe|indexing|merge
0
1,906,303
70,096,645
Problems installing python packages on Mac M1
<p>I want to install python packages listed in the requirements file of my github repo. However, I have problems installing those python packages into my conda environment.</p> <p>First of all, I installed conda with Miniforge3-MacOSX-arm64 which supports the M1 with arm64 architecture. However, some specific python packages like onnxruntime I wasn't able to install, because I encountered error messages like that:</p> <pre><code>ERROR: Could not find a version that satisfies the requirement onnxruntime ERROR: No matching distribution found for onnxruntime </code></pre> <p>I assumed that for those specific python packages there is no support yet for the M1.</p> <p>Therefore, I pursued another approach. I set the settings of Terminal to &quot;Open with Rosetta&quot;. The plan is to install the applications of the intel x86_64 architecture and let Rossetta create the binaries to let run on arm64. Then I uninstalled miniforge for arm64 and installed miniforge for x86_64 named Miniforge3-MacOSX-x86_64. With that setup I was able to install all listed python packages of the requirement file and with <code>pip freeze</code> I can also confirm that they have been installed. However, I am somehow not able to use those python packages. For instance if I want to run pytest I get the following error:</p> <pre><code>zsh: illegal hardware instruction pytest </code></pre> <p>I assumed Rossetta takes care of that, that I can use applications for x86_64 also on arm64. But somehow it doesn't work. I tried a lot of different things and am out of ideas.</p> <p>Does anyone know what the problem is? I would be also thankful for advice and suggestions how to properly set up a python environment on Mac M1.</p>
<p>I had the same problem back in 2days ago, I'm using <code>m1 pro</code>. I was trying to install the python packages only using <code>pip</code> but I got a numbers of errors, then I decided to install with <code>conda</code>.</p> <p>In my case it worked, here is what I've done so far is:</p> <p>First Enable the <code>open with rosetta</code> in your zsh. And then,</p> <pre class="lang-sh prettyprint-override"><code># create environment in conda conda create -n venv python=3.8 # with your python version # activate conda activate venv </code></pre> <p>and visit the conda website to look for the packages: <a href="https://anaconda.org/" rel="nofollow noreferrer">check packages</a></p> <p>For suppose if you are looking for <code>pytest</code> packages then you can search it, and you'll get a result like this, with the available package and channel.</p> <p><a href="https://i.stack.imgur.com/uqBFM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uqBFM.png" alt="enter image description here" /></a></p> <p>You need to enable that specific channel to get that package with this command:</p> <pre class="lang-sh prettyprint-override"><code># config channel conda config --append channels conda-forge # available channel name # then install conda install --yes --file requirements.txt </code></pre> <p>Make sure, your have the same version of <code>pytest</code> in your <code>requirements.txt</code> file. <code>(eg:pytest==6.2.5)</code></p> <p>Hope this should work, if not try to install it with <code>pip</code> like: <code>pip install -r requirements.txt</code> after environment enable.</p>
python|macos
4
1,906,304
11,450,646
Class argument test in __init__
<p>I have a class <code>Point</code> that accepts position, value and flag as arguments. This class should accept only integers as position and value arguments. I tried the code below, but it doesn't work properly.</p> <pre><code>class PointException(Exception): pass class Point(): def __init__(self, position, value, flag=False): try: if all([isinstance(x, int) for x in position, value]): self.position = position self.value = value self.point = (position, value) self.flag = flag except: raise PointException("Foo value and position must be integers.") def __repr__(self): return "&lt; {0}, {1}, {2} &gt;".format(self.position, self.value, self.flag) def __eq__(self, other): if not isinstance(other, Point): return False try: return all([self.point == other.point, self.flag == other.flag]) except AttributeError: return False def __ne__(self, other): return not self.__eq__(other) </code></pre> <p><strong>Update</strong></p> <p>I get an <code>AttributError</code> when I try <code>Point(1, 1.2)</code>, for instance.</p> <pre><code>AttributeError: Point instance has no attribute 'position' </code></pre>
<pre><code>if all([isinstance(x, int) for x in position, value]) </code></pre> <p>should be </p> <pre><code>if all(isinstance(x, int) for x in (position, value)) </code></pre> <p>And more generally you have to <code>raise</code> the exception in <code>__init__</code>, not catch it with <code>except</code>:</p> <pre><code>def __init__(self, position, value, flag=False): if not all(isinstance(x, int) for x in (position, value)): raise PointException("Foo value and position must be integers.") self.position = position self.value = value self.point = (position, value) self.flag = flag </code></pre> <p>There are other areas of improvement that you can read about in the other answers</p>
python|exception
3
1,906,305
70,426,470
Pythonic method to pull the start / end of specific intervals within a list?
<p>I have an ordered list, for example:</p> <pre><code> my_list = [0.1, 0.2, 0.3, 0.4, 2.6, 2.7, 2.8, 2.9, 5.1, 5.2, 6.1, 6.2, 6.3, 7.1, 7.2, 7.3, 7.4, 7.5, 10.1, 10.2, 10.3, 10.4, 10.5] </code></pre> <p>I need intervals of numbers &lt; 1s apart, where there are at least 3 numbers. I only want the start and end of the intervals. For example:</p> <p><code> Output: [[0.1, 0.4], [2.6, 2.9], [5.1, 7.5], [10.1, 10.5]]</code></p> <p>or</p> <pre><code>In[0]: print(start) Output: [0.1, 2.6, 5.1, 10.1] In[1]: print(end) Output: [0.4, 2.9, 7.5, 10.5] </code></pre> <p>I've tried a variety of loops, but I'm having trouble getting <em>only</em> the start times appended to a new list, and also having trouble avoiding &quot;Index out of range&quot; when getting to the end of the list. Here is where I'm at currently:</p> <pre><code> for i in range(0, (len(my_list)-2)): second = i + 1 third = i + 2 if ((my_list[third] - my_list[second])) &lt; 1 and ((my_list[second] - my_list[i]) &lt; 1): temp.append(my_list[i]) else: end.append(my_list[second]) start.append(temp[0]) temp.clear() </code></pre> <p>My solution to get only the start of the interval is to append the items to a temporary list, append the first and last element and clear that list. I'm sure there is a more elegant way to do this, and the list can be thousands of rows so I don't think this is a very efficient method.</p> <p>Any help would be much appreciated.</p>
<p>Using Pandas:</p> <pre><code>arr = pd.Series(my_list) arr = arr.groupby(arr.astype(int)).nth([0,-1]) result = arr.values.reshape([arr.size//2,2]) </code></pre> <p>Or without pandas you can use <code>itertools.groupby</code> using <code>int</code> as your key: <em>(Note: this assumes the list is sorted)</em></p> <pre><code>from itertools import groupby result = [] for k, g in groupby(my_list, int): group = list(g) result.append([group[0], group[-1]]) </code></pre> <p>Or as a comprehension:</p> <pre><code>result = [[f:=next(g), [f, *g][-1]] for k, g in groupby(my_list, int)] </code></pre>
python|list|loops
3
1,906,306
63,359,856
how to target nested element in selenium
<p>i am trying to target and get the text in element with class b ,how do I get it</p> <pre><code> &lt;span class=&quot;a &quot;&gt; &lt;span class=&quot;b &quot;&gt;&lt;!-- some random number --&gt;&lt;/span&gt; posts &lt;/span&gt; </code></pre> <p>I have tried this but this throws an error ,I want to make it clear</p> <blockquote> <p>I dont want to target the element using class</p> </blockquote> <p>and</p> <blockquote> <p>I want to target it using xpath</p> </blockquote> <pre><code>post = self.driver.find_element_by_xpath('//span[contains(text,&quot;posts&quot;)]').text </code></pre>
<p>Try this one:</p> <pre><code>from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC post = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, '//span[contains(.,&quot;posts&quot;)]/span'))).text </code></pre>
python|selenium|selenium-webdriver|nested
1
1,906,307
56,590,796
How do I create a colormap plot that can make a sphere in matplotlib 3d look like half is bright (like the earth illuminated on one side)
<p>I am trying to create a 3D sphere in matplotlib and have it color like one side of the sphere is illuminated by the sun.</p> <p>I have tried using matplotlib colormaps.</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import numpy as np from matplotlib import cm from mpl_toolkits.mplot3d import Axes3D fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.set_axis_off() phi = np.linspace(0,2*np.pi, 50) theta = np.linspace(0, np.pi, 25) x=np.outer(np.cos(phi), np.sin(theta)) y=np.outer(np.sin(phi), np.sin(theta)) z=np.outer(np.ones(np.size(phi)), np.cos(theta)) PHI=np.outer(phi,np.ones(np.size(theta))) THETA=np.outer(np.ones(np.size(phi)),theta) data = PHI/np.pi norm = plt.Normalize(vmin=data.min(), vmax=data.max()) surface=ax.plot_surface(x, y, z, cstride=1, rstride=1, facecolors=cm.jet(norm(data))) surface=ax.plot_surface(x, y, z, cstride=1, rstride=1, facecolors=cm.binary(norm(data)),cmap=plt.get_cmap('jet')) plt.show() </code></pre> <p>I am expecting a sphere that looks something like this: <a href="https://i.stack.imgur.com/u7ABk.png" rel="nofollow noreferrer">Or basically something that looks like the earth with the day side and the night side</a></p> <p>But instead my results are something like this: <a href="https://i.stack.imgur.com/iPQDv.png" rel="nofollow noreferrer">current plot from the above code</a></p>
<p>You need to use LightSource package:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np from matplotlib import cm from mpl_toolkits.mplot3d import Axes3D from matplotlib.colors import LightSource fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.set_axis_off() phi = np.linspace(0,2*np.pi, 100) theta = np.linspace(0, np.pi, 50) x=np.outer(np.cos(phi), np.sin(theta)) y=np.outer(np.sin(phi), np.sin(theta)) z=np.outer(np.ones(np.size(phi)), np.cos(theta)) PHI=np.outer(phi,np.ones(np.size(theta))) THETA=np.outer(np.ones(np.size(phi)),theta) data = PHI/np.pi norm = plt.Normalize(vmin=data.min(), vmax=data.max()) # use Light Source ls = LightSource(0, 0) # create rgb shade rgb = ls.shade(x, cmap=cm.Wistia, vert_exag=0.1, blend_mode='soft') # blend shade bsl = ls.blend_hsv(rgb, np.expand_dims(x*0.8, 2)) # plot surface surface = ax.plot_surface(x, y, z, cstride=1, rstride=1, facecolors=bsl, linewidth=0, antialiased=False, shade=False) plt.show() </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/6Ml2K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6Ml2K.png" alt="enter image description here"></a></p>
python|numpy|matplotlib|plot|3d
0
1,906,308
69,869,102
Python 3.10: ModuleNotFoundError: No module named 'statsmodels'
<p>I am trying to use <strong>statsmodels</strong>. I installed <strong>statsmodels</strong> using <code>pip install statsmodels</code> in the CMD terminal.</p> <p>However, every time I try to run <code>import statsmodels.api as sm</code> in Spyder I get the following error:</p> <pre><code>ModuleNotFoundError: No module named 'statsmodels' </code></pre> <p>Do I have to add Spyder to the Windows Path to get access to the statsmodels library? Why is statsmodels not loading? Here is proof that I have it installed:</p> <pre><code>C:\Users\Jessica&gt;pip list Package Version --------------- ------- numpy 1.21.4 pandas 1.3.4 patsy 0.5.2 pip 21.3.1 python-dateutil 2.8.2 pytz 2021.3 scipy 1.7.2 setuptools 57.4.0 six 1.16.0 statsmodels 0.13.0 </code></pre>
<p>Simply run:</p> <p><code>python -m pip install statsmodels</code></p>
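<p>If that still fails, the usual cause is that Spyder runs a different Python interpreter than the one pip installed into. A quick check to run in the Spyder console (a sketch; nothing here is specific to the asker's machine):</p>
<pre><code>import sys
print(sys.executable)    # the interpreter Spyder is actually running
print(sys.path)          # where that interpreter looks for packages

# compare with the interpreter pip used, e.g. in the CMD terminal:
#   where python
#   python -m pip show statsmodels
</code></pre>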
python|importerror
0
1,906,309
17,915,290
What's the right form of python interpreter?
<p>I just began to learn Python for not long and I tried to run a very simple python CGI script.</p> <p>The html code is</p> <pre><code>&lt;form action='cgi-bin/hello_get.py' method = 'post'&gt; Name: &lt;input type = 'text' name = 'name'&gt; &lt;br/&gt; &lt;input type = 'submit' value='Submit'/&gt; &lt;/form&gt; </code></pre> <p>and the 'hello_get.py' file is</p> <pre><code>#!c:/Python27/python.exe import cig, cgitb form = cgi.FieldStorage() name = form.getvalue('name') print "Content-type:text/html\r\n\r\n" print "&lt;html&gt;" print "&lt;head&gt;" print "&lt;title&gt;Hello - Second CGI Program&lt;/title&gt;" print "&lt;/head&gt;" print "&lt;body&gt;" print "&lt;h2&gt;Hello %s &lt;/h2&gt; % (name)" print "&lt;/body&gt;" print "&lt;/html&gt;" </code></pre> <p>However, everytime I tried it on the browser, after I press Submit, the reply was the whole 'hello_get.py' file. The page just show the whole content of 'hello_get.py' file. Like this <a href="http://i.imgur.com/ogUcE0l.jpg" rel="nofollow">http://i.imgur.com/ogUcE0l.jpg</a> So where did I go wrong? It should be very simple.I thought the form of the path of python interpreter was wrong, but I have tried several ways and nothing worked.</p> <p>Thanks!!!</p>
<p>Your first print line ends with a double quote instead of a single quote. Change it to this:</p> <pre><code>print 'Content-type:text/html\r\n\r\n' </code></pre> <p>EDIT After looking at an example online, that first line should be surrounded by double quotes, so</p> <pre><code>print "Content-type:text/html\r\n\r\n" </code></pre>
python|cgi
0
1,906,310
18,044,860
How to change the background color of a Treeview
<p>I'm here to ask you how to change background of a treeview, I tried that</p> <pre><code>ttk.Style().configure("Treeview", background="#383838") </code></pre> <p>It's work perfectly just for the cell, but the rest of the Treeview stay white.</p> <p>I tried to change the background of the window, the frame too, but it does not work.</p> <p>So, how to do that, i'm sure that you know.</p> <p>Bye and thanks in advance :)</p> <p>The code</p> <pre><code>from tkinter import * from tkinter import ttk p=Tk() separator = PanedWindow(p,bd=0,bg="#202322",sashwidth=2) separator.pack(fill=BOTH, expand=1) _frame = Frame(p,bg="#383838") t=ttk.Treeview(_frame) t["columns"]=("first","second") t.column("first",anchor="center" ) t.column("second") t.heading("first",text="first column") t.heading("second",text="second column") t.insert("",0,"dir1",text="directory 1") t.insert("dir1","end","dir 1",text="file 1 1",values=("file 1 A","file 1 B")) id=t.insert("","end","dir2",text="directory 2") t.insert("dir2","end",text="dir 2",values=("file 2 A","file 2 B")) t.insert(id,"end",text="dir 3",values=("val 1 ","val 2")) t.insert("",0,text="first line",values=("first line 1","first line 2")) t.tag_configure("ttk",foreground="black") ysb = ttk.Scrollbar(orient=VERTICAL, command= t.yview) xsb = ttk.Scrollbar(orient=HORIZONTAL, command= t.xview) t['yscroll'] = ysb.set t['xscroll'] = xsb.set ttk.Style().configure("Treeview", background="#383838",foreground="white") p.configure(background='black') t.grid(in_=_frame, row=0, column=0, sticky=NSEW) ysb.grid(in_=_frame, row=0, column=1, sticky=NS) xsb.grid(in_=_frame, row=1, column=0, sticky=EW) _frame.rowconfigure(0, weight=1) _frame.columnconfigure(0, weight=1) separator.add(_frame) w = Text(separator) separator.add(w) p.mainloop() </code></pre>
<p>The missing option is <code>fieldbackground</code> which I only found by accident <a href="http://docs.python.org/dev/library/tkinter.ttk.html#tkinter.ttk.Style.theme_settings" rel="nofollow">in an example</a>. So if you add it to the style declaration</p> <pre><code>ttk.Style().configure("Treeview", background="#383838", foreground="white", fieldbackground="red") </code></pre> <p>it works as you'd like. I used <code>red</code> to make the change very visible; obviously you'll want to change that for greater color harmony.</p>
python|python-3.x|tkinter
3
1,906,311
69,122,337
Download a file from Telegram using aiogram bot-framework in Python
<p>How can I download a file which has been sent by a user in chat?</p> <h3>For example</h3> <p>I need to download the file <code>moonloader.log</code> from Telegram to my local path <code>C:\text-folder\moonloader.log</code> and read it.</p> <h3>Code so far</h3> <pre class="lang-py prettyprint-override"><code>def checkFile(path): if os.path.isfile(path): f = open(path, 'r') log = f.read() print('начинаю проверку...') # check log result = re.search('MoonLoader v.(.+) loaded.', log) if result: moonlog_version = result.group(1) print('• Версия moonloader: ' + moonlog_version) for err in range(0, len(errors)): for i in errors[err]: print(' • Ошибка: ' + errors[err][i]) # ON RECEIVE FILE @dp.message_handler(content_types=types.ContentType.DOCUMENT) async def fileHandle(message: types.File): await message.reply(text='файл получен, начинаю поиск ошибок...') ## LOAD FILE CODE checkFile(LOADED FILE PATH) </code></pre> <h3>Updated Code</h3> <p>I tried to follow the <a href="https://stackoverflow.com/a/69122846/5730279">answer of hc_dev</a> and added the download method. But not sure how to get the <code>File</code> or <code>file_path</code> from <code>message</code>. I tried this:</p> <pre class="lang-py prettyprint-override"><code>def download_file(file: types.File): file_path = file.file_path destination = r'C:\\Users\\admin\\Desktop\\moonlog inspector\\download' destination_file = bot.download_file(file_path, destination) # ON RECEIVE FILE @dp.message_handler(content_types=types.ContentType.DOCUMENT) async def fileHandle(message: types.Document): await message.reply(text='файл получен, начинаю поиск ошибок...') ## LOAD FILE CODE download_file(message.file_id) </code></pre> <p>But when running it raises the error:</p> <blockquote> <p>'Message' object has no attribute 'file_id'</p> </blockquote>
<p>Following question welcomed and explained the new <a href="https://core.telegram.org/bots/api#getfile" rel="nofollow noreferrer"><code>getFile</code></a> operation in <em>Telegram Bot API</em>: <a href="https://stackoverflow.com/questions/31096358/how-do-i-download-a-file-or-photo-that-was-sent-to-my-telegram-bot/32679930#32679930">How do I download a file or photo that was sent to my Telegram bot?</a></p> <p>In <em>aiogram</em> you would use <a href="https://docs.aiogram.dev/en/latest/telegram/bot.html?highlight=file#aiogram.bot.base.BaseBot.download_file" rel="nofollow noreferrer"><code>download_file</code></a> on you <code>Bot</code> object: As parameters you can pass the string <code>file_path</code> (path identifying the file on the telegram server) and a <code>destination</code> on your local volume.</p> <p>The <code>file_path</code> is an attribute of an <a href="https://docs.aiogram.dev/en/latest/telegram/types/file.html" rel="nofollow noreferrer"><code>types.File</code></a> object.</p> <pre class="lang-py prettyprint-override"><code>bot = # your_bot accessing Telegram's API def download_file(file: types.File): file_path = file.file_path destination = r&quot;C:\folder\file.txt&quot; destination_file = bot.download_file(file_path, destination) </code></pre>
python|file|download|telegram-bot|aiogram
0
1,906,312
69,276,472
Speed up the List appending process
<p>I would like to speed up the process of the below code</p> <pre><code>x = [] y = [] z = [] for i in range(0, 1000000): if 0 &lt; u[i] &lt; 1920 and 0 &lt; v[i] &lt; 1080: x.append(u[i]) y.append(v[i]) z.append([x_ind[i], y_ind[i]]) </code></pre> <p>Any ideas would be really appreciated. Thanks</p>
<p>Typically, you can optimize cases like this by replacing loops over indices and indexing with loops over the raw values. So replacement code here would be:</p> <pre><code>x = [] y = [] z = [] for a, b, c in zip(u, v, ind): if 0 &lt; a &lt; 1920 and 0 &lt; b &lt; 1080: x.append(a) y.append(b) z.append([c, c]) </code></pre> <p>If <code>u</code>, <code>v</code> and <code>ind</code> might be longer than <code>1000000</code> and you <em>must</em> stop at <code>1000000</code> items checked, you'd just add an import to the top of the file, <code>from itertools import islice</code>, and change the <code>for</code> loop itself to:</p> <pre><code>for a, b, c in islice(zip(u, v, ind), 1000000): </code></pre> <p>Either way, you remove all indexing from the code (indexing has one of the worst ratios of overhead to useful work accomplished in the CPython reference interpreter, though other interpreters and tools like Cython will behave differently) and, if you use nicer names than <code>a</code>, <code>b</code> and <code>c</code>, more self-documenting code.</p> <p>There are minor benefits (decreasing in the most recent versions of Python) to pre-binding copies of <code>append</code> instead of dynamic binding, so if you're <em>really</em> hurting for speed, especially on older versions of Python that didn't optimize away the creation of bound methods, you can try:</p> <pre><code>x = [] y = [] z = [] xapp, yapp, zapp = x.append, y.append, z.append for a, b, c in zip(u, v, ind): if 0 &lt; a &lt; 1920 and 0 &lt; b &lt; 1080: xapp(a) yapp(b) zapp([c, c]) </code></pre> <p>(adding <code>islice</code> if needed) to reduce method call overhead a bit at the expense of uglier code. Definitely don't do this unless profiling has shown this is the hot code path and you <em>really</em> need it faster.</p> <p>Lastly, a note: If this code is being run at top-level (outside of any function) it will run significantly slower (variable lookup for locally scoped names in a function is a C array lookup; looking up globally scoped names, which all lookups outside a function involve, involves at least one <code>dict</code> key lookup, which is substantially more expensive). Put it in a function (along with the definitions of <code>x</code>, <code>y</code> and <code>z</code>; <code>u</code>, <code>v</code> and <code>ind</code> don't matter so much if you're <code>zip</code>ping rather than indexing them) and call that function instead of running at global scope and it should run a lot faster.</p> <p>Improvements beyond this might be possible using <code>numpy</code> arrays instead of <code>list</code>s, but you'd need to be much more specific about your problem to hazard a guess on the utility of such a change.</p>
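<p>For completeness, a sketch of the numpy variant hinted at in the last paragraph, assuming u, v, x_ind and y_ind can be converted to equal-length arrays (names follow the question's code):</p>
<pre><code>import numpy as np

u_arr = np.asarray(u)
v_arr = np.asarray(v)
ind = np.column_stack([np.asarray(x_ind), np.asarray(y_ind)])

# a boolean mask replaces the per-element if test
mask = (u_arr > 0) & (u_arr < 1920) & (v_arr > 0) & (v_arr < 1080)
x = u_arr[mask]
y = v_arr[mask]
z = ind[mask]            # a 2-column array instead of a list of pairs
</code></pre>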
python|list|for-loop
2
1,906,313
69,144,374
How to get the count of a value repeated in a list at index position 1?
<p>How to count number of values repeated in list at first position and do sum based on that</p> <p><strong>My input :</strong></p> <pre><code>[[2, 1], [2, 1], [2, 1], [1, 2]] </code></pre> <p>Note : my list will contain 2 OR 1 in first index[0]</p> <p>In above the 2 is repeated the maximum number of times so my sum should be like get its second value of all and do sum and display</p> <pre><code> 2 , 1 -&gt; 1 + 2 , 1 -&gt; 1 + 2 , 1 -&gt; 1 ---------------------- 2 , 3 </code></pre> <p>So output would be : <code>2 , 3</code></p> <p>How to achieve above output from given Input</p> <p>In my code not able to implement such logic</p> <pre><code>cnt=0 m[0]=0 for m in my_list: if m[0] == 2 cnt+=1 v=m[1] print(m[0],v[1]) </code></pre>
<p>Try:</p> <pre><code>#create a list with just the 0th elements keys = [i[0] for i in l] #get the key that has the maximum count max_key = max(keys, key=keys.count) #sum the 1st element for all sublists where the 0th element is the same as the max_key &gt;&gt;&gt; max_key, sum(i[1] for i in l if i[0]==max_key) (2, 3) </code></pre> <p>In one line (not recommended as it's not very readable):</p> <pre><code>&gt;&gt;&gt; max([i[0] for i in l], key=[i[0] for i in l].count), sum(i[1] for i in l if i[0]==max([i[0] for i in l], key=[i[0] for i in l].count)) (2, 3) </code></pre>
python
1
1,906,314
69,266,686
Find if theme is dark or light in windows 10 using python
<p>Is there any way to find if system has dark or light theme in windows 10? Has windows provided any api which will detect the theme using win32api that is usable from python 2.7?</p>
<p>There is already an answer for this on <a href="https://stackoverflow.com/questions/65156048/how-to-get-system-color-in-python">How to get system color in python?</a></p> <p>And perhaps <a href="https://stackoverflow.com/questions/65294987/detect-os-dark-mode-in-python">Detect OS dark mode in Python</a> might be useful too.</p>
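<p>One common approach on Windows 10, and roughly what the linked answers describe, is reading a registry value; a minimal sketch (the module is _winreg on Python 2.7 and winreg on Python 3, and the key only exists on builds that expose the light/dark setting):</p>
<pre><code>try:
    import winreg                 # Python 3
except ImportError:
    import _winreg as winreg      # Python 2.7

key_path = r'Software\Microsoft\Windows\CurrentVersion\Themes\Personalize'
key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, key_path)
value, _ = winreg.QueryValueEx(key, 'AppsUseLightTheme')
print('light theme' if value else 'dark theme')   # 1 means light, 0 means dark
</code></pre>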
python|windows|python-2.7|darkmode
2
1,906,315
68,144,123
Create a Python executable with Selenium webdriver manager
<p>I want to convert my Selenium code to an executable, I used auto-py-to-exe, it always worked for non Selenium codes, so I don't know what to do.</p> <p>I was looking at the answer that was provided here : <a href="https://stackoverflow.com/questions/49853252/create-a-python-executable-with-chromedriver-selenium">Create a Python executable with chromedriver &amp; Selenium</a> but it doesn't exactly address my problem because you need to change the .spec file and add your Chromedriver path, but I don't use chromdriver, I use webdriver manager.</p> <p>so I am kinda lost here any help</p>
<p>I was stuck with same problem. I thought to assume that selenium script as normal python file. And it worked. Steps to make it executable file:</p> <pre><code>pip install pyinstaller pyinstaller --onefile pythonScriptName.py </code></pre>
python|selenium|selenium-webdriver|executable
0
1,906,316
68,113,735
communication between two bots? (discord.py)
<p>I'm a beginner-intermediate level programmer working with discord.py for the first time. I want to create two bots that, when one is prompted, both send messages one after another as if in a conversation.</p> <p>Is that even possible in discord.py? I considered making two different bots in two different .py files, creating variables for each line of the conversation for both bots, and then having them each prompt if the message content matched the variable. However, I don't want the bots to prompt if the line is said by someone other than the other bot.</p> <p>Any advice? Thanks so much!</p>
<p>You can use the <a href="https://github.com/Ext-Creators/discord-ext-ipc" rel="nofollow noreferrer">discord-ext-ipc</a> lib. You can setup a server on both bots and exchange Http Messages whenever an specific Event triggered on one bot.</p>
python|discord|discord.py|bots|chatbot
1
1,906,317
59,418,634
Reading Hong Kong Supplementary Character Set in python 3
<p>I have a hkscs dataset that I am trying to read in python 3. Below code</p> <pre><code>encoding = 'big5hkscs' lines = [] num_errors = 0 for line in open('file.txt'): try: lines.append(line.decode(encoding)) except UnicodeDecodeError as e: num_errors += 1 </code></pre> <p>It throws me error <code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xae in position 0: invalid start byte</code>. Seems like there is a non utf-8 character in the dataset that the code is not able to decode. </p> <p>I tried adding <code>errors = ignore</code> in this line <code>lines.append(line.decode(encoding, errors='ignore'))</code></p> <p>But that does not solve the problem.</p> <p>Can anyone please suggest?</p>
<p>If a text file contains text encoded with an encoding that is not the default encoding, the encoding must be specified when opening the file to avoid decoding errors:</p> <pre><code>encoding = 'big5hkscs' path = 'file.txt' with open(path, 'r', encoding=encoding,) as f: for line in f: # do something with line </code></pre> <p>Alternatively, the file may be opened in binary mode, and text decoded afterwards:</p> <pre><code>encoding = 'big5hkscs' path = 'file.txt' with open(path, 'rb') as f: for line in f: decoded = line.decode(encoding) # do something with decoded text </code></pre> <p>In the question, the file is opened without specifying an encoding, so its contents are automatically decoded with the default encoding - apparently UTF-8 in the is case.</p>
python|utf-8|character-encoding|nlp
2
1,906,318
63,032,622
How to access a widget from one screen inside another screen in kivy
<p>How do I access a widget from screen1 and make changes to it in screen2 and also return screen1 back to it initial state(like I just run the code). I have commented out some code that is not working.</p> <pre><code>from kivy.app import App from kivy.uix.label import Label from kivy.uix.button import Button from kivy.uix.screenmanager import ScreenManager, Screen, FadeTransition class ScreenManagement(ScreenManager): def __init__(self, **kwargs): super(ScreenManagement, self).__init__(**kwargs) class Screen2(Screen): def __init__(self, **kwargs): super(Screen2, self).__init__(**kwargs) self.retry = Button(text='retry', font_size=15, size_hint=(.26, .26), pos_hint={'center_x': .5, 'center_y': .32}, on_press=self.retrying, background_color=(0, 0, 1, 1)) self.add_widget(self.retry) def retrying(self, *args): self.manager.current = 'screen1' # it should change the text in screen1 to &quot;i am back to screen1, thanks you&quot; #self.welc.text=&quot; i am back to screen1, thank you&quot; # it should change the button color back to it normal state #self.goto.background_color='normal state' class Screen1(Screen): def __init__(self, **kwargs): super(Screen1, self).__init__(**kwargs) self.welc = Label(text='hi there welcome to my first screen', font_size=15, size_hint=(.26, .26), pos_hint={'center_x': .5, 'center_y': .7}) self.add_widget(self.welc) self.goto = Button(text='next screen', font_size=15, size_hint=(.2, .2), pos_hint={'center_x': .5, 'center_y': .32}, on_press=self.going, background_color=(0, 0, 1, 1)) self.add_widget(self.goto) def going(self, *args): self.goto.background_color=(1,0,0,1) self.manager.current = 'screen2' class Application(App): def build(self): sm = ScreenManagement(transition=FadeTransition()) sm.add_widget(Screen1(name='screen1')) sm.add_widget(Screen2(name='screen2')) return sm if __name__ == &quot;__main__&quot;: Application().run() </code></pre> <p>My question is this:</p> <ol> <li>how do i change the text in screen1 when the retry button is pressed.</li> <li>How do i return screen1 back to it initial state after the retry button is pressed so that the &quot;next screen&quot; button color changes back to blue</li> </ol>
<p>Once you have changed the current screen to <code>screen1</code>, then you can access that <code>Screen</code> as <code>self.manager.current_screen</code>, so your <code>retrying()</code> method can be:</p> <pre><code>def retrying(self, *args): self.manager.current = 'screen1' self.manager.current_screen.welc.text = &quot;i am back to screen1, thanks you&quot; self.manager.current_screen.goto.background_color = background_color=(0, 0, 1, 1) </code></pre> <p>To set the <code>screen1</code> back to its original state, you could write another method that just sets all the values back to the original value one by one. Or you could recreate <code>screen1</code> by doing something like this in a method of your <code>Application</code>.:</p> <pre><code>def reset_screen1(self): sm = self.root scr1 = sm.get_screen('screen1`) sm.remove_widget(scr1) sm.add_widget(Screen1(name='screen1')) </code></pre>
python|kivy
0
1,906,319
62,918,455
Python NetworkX: edges color in a weighted graph
<p>I am trying to plot a fully connected graph with edge weights given by the Gaussian similarity function using the <code>networkx</code> library in Python. When I plot the graph the color intensity of the edges seems to be very mild, which I guess is due to the small connectivity weights (<a href="https://i.stack.imgur.com/Exou6.png" rel="nofollow noreferrer">Half-moons fully connected graph</a> ). However, I was wondering if there is a way to make the color intensity stronger.</p> <p>The code I used:</p> <pre class="lang-python prettyprint-override"><code>import numpy as np import matplotlib from matplotlib import pyplot as plt from sklearn import cluster, datasets import networkx as nx def eucledian_dist(x_i, x_j): coord = x_i.shape[0] d=[] if coord == x_j.shape[0]: for i in range(coord): d.append((x_i[i] - x_j[i])**2) return (np.sqrt(sum(d),dtype=np.float64)) def distance_matrix(data, distance_measure): Npts= data.shape[0] distance_matrix=np.zeros((Npts,Npts)) for xi in range(Npts): for xj in range(Npts): distance_matrix[xi,xj] = distance_measure(data[xi],data[xj]) return(distance_matrix) def adjacency_matrix(data, sigma): dist_matrix = distance_matrix(data, eucledian_dist) adjacency_matrix= np.exp(-(dist_matrix)**2 /sigma) adjacency_matrix[adjacency_matrix==1] = 0 return(adjacency_matrix) #Generate data Npts = 35 half_moons_data = datasets.make_moons(n_samples=Npts, noise=.040, random_state=1991) nodes_coord = dict() for key in [i for i in range(Npts)]: nodes_coord[key] = list(half_moons_data[0][key]) #Compute adjancency matrix W = adjacency_matrix(half_moons_data[0], sigma=0.05) #Create graph: nodes_idx = [i for i in range(Npts)] graph = nx.Graph() graph.add_nodes_from(nodes_idx) graph.add_weighted_edges_from([(i,j, W[i][j]) for i in range(Npts) for j in range(Npts)]) #Plot graph: nx.draw_networkx_nodes(graph, nodes_coord, node_size=5, node_color=&quot;red&quot;) nx.draw_networkx_edges(graph, nodes_coord, edge_cmap= plt.cm.Blues, width=1.5, edge_color=[graph[u][v]['weight'] for u, v in graph.edges], alpha=0.2) plt.show() </code></pre> <p>I would really appreciate any advice/feedback.</p>
<p>Let's add a cap on the maximum value for edge color using the <code>edge_vmax</code> paramter for your data:</p> <pre><code>nx.draw_networkx_edges(graph, nodes_coord, edge_cmap= plt.cm.Blues, width=1.5, edge_color=[graph[u][v]['weight'] for u, v in graph.edges], alpha=.2, edge_vmax=10e-30) </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/pqIgT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pqIgT.png" alt="enter image description here" /></a></p> <p>From <a href="https://networkx.github.io/documentation/stable/reference/generated/networkx.drawing.nx_pylab.draw_networkx_edges.html" rel="nofollow noreferrer">docs</a>:</p> <blockquote> <p>edge_vmin,edge_vmax (floats) – Minimum and maximum for edge colormap scaling (default=None)</p> </blockquote> <blockquote> <p>edge_color : color string, or array of floats Edge color. Can be a single color format string (default='r'), or a sequence of colors with the same length as edgelist. If numeric values are specified they will be mapped to colors using the edge_cmap and edge_vmin,edge_vmax parameters.</p> </blockquote>
python|graph|networkx
2
1,906,320
35,438,730
Adding custom module via RPyC
<p>I'm trying to add a new module to a connection.</p> <p>I have the following files: main.py and UpdateDB.py.</p> <p>In UpdateDB:</p> <pre><code>def UpdateDB(): ... </code></pre> <p>In main.py:</p> <pre><code>import UpdateDB import rpyc conn = rpyc.classic.connect(...) rpyc.utils.classic.upload_package(conn, UpdateDB) conn.modules.UpdateDB.UpdateDB() </code></pre> <p>And I can't figure out how to invoke the UpdateDB() function. I get:</p> <pre><code>AttributeError: 'module' object has no attribute 'UpdateDB' </code></pre> <p>Perhaps I'm trying to do it wrong. So let me explain what I'm trying to do: I want to create a connection to the server and run a function from the UpdateDB.py file on it.</p>
<p>Not sure how to do that in classic mode (not sure why you'd use it), but here is how to accomplish the task in the newer RPyC service mode.</p> <p>Script ran as Server:</p> <pre><code>import rpyc from rpyc.utils.server import ThreadedServer class MyService(rpyc.Service): def exposed_printSomething(self, a): print a print "printed on server!" return 'printed on client!' if __name__ == '__main__': server = ThreadedServer(MyService, port=18812) server.start() </code></pre> <p>Script ran as Client:</p> <pre><code>import rpyc if __name__ == '__main__': conn = rpyc.connect("127.0.0.1", port=18812) print conn.root.printSomething("passed to server!") </code></pre> <p>Result on Server:</p> <pre><code>passed to server! printed on server! </code></pre> <p>Result on Client:</p> <pre><code>printed on client! </code></pre>
python|python-module|rpyc
1
1,906,321
35,539,067
How can keep a timed count of user input (from a button)?
<p>I'm using a Raspberry Pi 2 and a breadboard to make a morse code interpreter. The circuit is a simple button that lights up an LED when pressed. How could I use functions to count how long the button is held down?</p>
<p>You haven't given sufficient details of your circuit to pin this down to your particular use case, so I'll suggest the following (on which I'll base my code):</p> <ul> <li>Connect the LED (with series resistor) to GPIO 12</li> <li>Connect a 10k resistor between GPIO 11 and the 3.3V rail (acts as a pull-up)</li> <li>Connect the push-button between GPIO 11 and Ground (0V).</li> </ul> <p>I'd then write the code to monitor the pin and log the time for each press into a list. You can read out the values in the same order which allows you to process the times and interpret the code from them.</p> <pre><code>import RPi.GPIO as GPIO import time # set up an empty list for the key presses presses = [] # set up the GPIO pins to register the presses and control the LED GPIO.setmode(GPIO.BOARD) GPIO.setup(12, GPIO.OUT) GPIO.setup(11, GPIO.IN) # turn LED off initially GPIO.output(12, False) while True: # wait for input to go 'low' input_value = GPIO.input(11) if not input_value: # log start time and light the LED start_time = time.time() GPIO.output(12, True) # wait for input to go high again while not input_value: input_value = GPIO.input(11) else: # log the time the button is un-pressed, and extinguish LED end_time = time.time() GPIO.output(12, False) # store the press times in the list, as a dictionary object presses.append({'start':start_time, 'end':end_time}) </code></pre> <p>You'll finish up with a list that looks something like this:</p> <blockquote> <p>[{start:123234234.34534, end:123234240.23482}, {start:123234289.96841, end:123234333.12345}]</p> </blockquote> <p>The numbers you see here are in units of seconds (this is your system time, which is measured in seconds since the epoch).</p> <p>You can then use a for loop to run through each item in the list, subtract each end time from the start time, and of course subtract the start of the next press from the end of the previous one, so that you can assemble characters and words based on the gaps between presses. Have fun :-)</p>
python|raspberry-pi
0
1,906,322
58,725,588
Method chaining in python results in an error
<p>I thought in Python I am allowed to perform method chaining.</p> <pre><code>basket = [1,3,2,4,6,8] basket.append(7) basket.sort() basket.reverse() </code></pre> <p>This works.</p> <pre><code>basket.append(7).sort().reverse() </code></pre> <p>This does not. </p> <pre><code>AttributeError: 'NoneType' object has no attribute 'sort' </code></pre> <p>I am not sure what is going on here, but I assume that happens because in place methods result in "NoneType" <code>result = basket.sort()</code> and therefore the second method will be performed on the result and not the original object. </p> <p>Can anyone help me how to do these operations without writing a new line for each method?</p>
<p>Because <code>append</code> <code>sort</code> and <code>reverse</code> are all "in-place" methods, so they don't return anything, instead they update the original list, the best way would be:</p> <pre><code>print(sorted(basket + [7], reverse=True)) </code></pre>
python|list|sorting|methods
6
1,906,323
31,275,282
theano function not updating parameters during gradient optimization in feed forward neural net
<p>Trying to get my hands wet with theano and deep nets by starting with a very simple implementation of a three layer feed forward neural network and testing it on the mnist data set. </p> <p>I am using a rudimentary implementation of stochastic gradient descent to start out with, and the network is not training properly. The parameters of the network are not being updated. </p> <p>Was wondering if anyone could could point out what I'm doing wrong. </p> <p>The following code is my lstm module. I've called it that because I planned on implementing lstm networks in the future. </p> <pre><code>import theano, theano.tensor as T import numpy as np from collections import OrderedDict np_rng = np.random.RandomState(1234) class FeedForwardLayer(object): def __init__(self, input_size, hidden_size, activation): self.input_size = input_size self.hidden_size = hidden_size self.activation = activation self.create_layer() def create_layer(self): self.W = create_shared(self.hidden_size, self.input_size, "weight") self.b = create_shared(self.hidden_size, name="bias") def activate(self, x): if x.ndim &gt; 1: return self.activation(T.dot(self.W, x.T) + self.b[:, None]).T else: return self.activation(T.dot(self.W, x) + self.b) @property def params(self): return [self.W, self.b] @params.setter def params(self, param_list): self.W.set_value(param_list[0]) self.b.set_value(param_list[1]) class Network(object): def __init__(self, input_size, celltype=FeedForwardLayer, layer_sizes=None): self.input_size = input_size self.celltype = celltype self.layer_sizes = layer_sizes self.create_layers() def create_layers(self): self.layers = [] input_size = self.input_size for layer_size in self.layer_sizes: self.layers.append(self.celltype(input_size, layer_size, activation=T.nnet.sigmoid)) input_size = layer_size def forward(self, x): out = [] layer_input = x for layer in self.layers: layer_out = layer.activate(layer_input) out.append(layer_out) layer_input = layer_out return out @property def params(self): return [param for layer in self.layers for param in layer.params] @params.setter def params(self, param_list): start = 0 for layer in self.layers: end = start + len(layer.params) layer.params = param_list[start:end] start = end def create_shared(m, n=None, name=None): if n is None: return theano.shared(np_rng.standard_normal((m, )), name=name) else: return theano.shared(np_rng.standard_normal((m, n)), name=name) def optimization_updates(cost, params, lr=.01): """ implements stochastic gradient descent Inputs --------------- cost -- theano variable to minimize params -- network weights to take gradient with respect to lr -- learning rate """ lr = theano.shared(np.float64(lr).astype(theano.config.floatX)) gparams = T.grad(cost, params) updates = OrderedDict() for gparam, param in zip(gparams, params): updates[param] = param - lr * gparam return updates </code></pre> <p>The following code is where I create, train, and test a simple three-layer feed forward network on the mnist data set. 
</p> <pre><code>from lstm import Network import theano, theano.tensor as T import numpy as np import lstm as L from sklearn.datasets import load_digits from sklearn.cross_validation import train_test_split from sklearn.metrics import confusion_matrix, classification_report from sklearn.preprocessing import LabelBinarizer # load and normalize dataset digits = load_digits() X = digits.data y = digits.target X -= X.min() X /= X.max() # create network model = Network(64, layer_sizes=[100, 10]) # prepare training and test data X_train, X_test, y_train, y_test = train_test_split(X, y) labels_train = LabelBinarizer().fit_transform(y_train) labels_test = LabelBinarizer().fit_transform(y_test) data = T.vector() result = model.forward(data)[-1] label = T.vector() cost = (result - label).norm(L=2) updates = L.optimization_updates(cost, model.params) update = theano.function([data, label], cost, updates=updates, allow_input_downcast=True) predict = theano.function([data], result, allow_input_downcast=True) for X, y in zip(X_train, labels_train): c = update(X, y) predictions = [] for X in X_test: prediction = predict(X) predictions.append(np.argmax(prediction)) print(confusion_matrix(y_test, predictions)) print(classification_report(y_test, predictions)) </code></pre> <p>The problem I'm facing is that the parameters are not being updated properly. I'm not sure if that's because I'm not calculating the gradient properly, or If I'm not using the theano function correctly. </p>
<p>You have to make more than one pass on the dataset when using stochastic gradient descent. It is not unusual that the classification error and the confusion matrix do not change much during the first epoch, especially if the dataset is small.</p> <p>I made the following change in your code to train for 100 epochs</p> <pre><code>for i in xrange(100): for X, y in zip(X_train, labels_train): c = update(X, y) </code></pre> <p>The confusion matrix seems to have started improving:</p> <pre><code>[[ 0 0 18 0 13 4 5 0 5 0] [ 0 42 0 2 0 0 0 0 2 0] [ 0 0 51 0 0 0 0 1 0 0] [ 0 0 0 45 0 1 0 1 2 0] [ 0 0 0 0 33 0 0 0 0 0] [ 0 0 0 0 0 47 0 0 0 0] [ 0 0 0 0 0 0 45 0 0 0] [ 0 0 0 0 1 0 0 48 0 0] [ 0 2 1 0 0 0 0 0 34 0] [ 0 1 0 25 0 3 0 2 16 0]] </code></pre>
python-3.x|neural-network|theano
2
1,906,324
31,565,948
Sphinx copy html file
<p>I use Sphinx for generate docs and some of docs are pure html files. I would like sphinx copy such files to build directory to corresponding path as is. Can I do it ? </p>
<p>Don't know if I got what you are trying to do.</p> <p>What I needed was: after creating the html content with sphinx</p> <pre><code>$ make html </code></pre> <p>I needed to copy those html to the doc folder, so github pages would see it.</p> <p>That said, what I am doing is: I've created a python script to call sphinx and then to copy the files.</p> <p>The script and the docs are on the folder doc_src. The structure is:</p> <pre><code>Project/ doc/ (where I need my html) doc_src/ (where I build them) </code></pre> <p>The script inside doc_src is something like:</p> <pre><code>import os import sys import shutil def make_doc(): """Run Sphinx to build the doc.""" try: # removing previous build print('removing previous build') cmd = 'make clean' os.system(cmd) shutil.rmtree('../doc/', ignore_errors=True) # new build cmd = 'make html' os.system(cmd) # copy files - windows cmd here print('copy files') cmd = r'xcopy _build\html\. ..\doc\ /e /Y' os.system(cmd) except Exception as error: print(error) if __name__ == '__main__': make_doc() </code></pre>
html|copy|python-sphinx
0
1,906,325
59,565,815
Early stages of text-based rpg - classes and methods
<p>I am in the very early stages of making my own text-based Rpg with python 3. I decided to create classes (still not finished with that) to maintain the different characters. I just ran into a problem regarding the attacking part of the game.</p> <p>I am trying to make a method for my class, that attacks something. My problem is then to have it attack an enemy. I tried combining it with the class for the enemy, so the character attacks the enemy's hp. This, however, creates an error, saying I did not define the hp for the enemy.</p> <p>This is my classes:</p> <pre><code>class Mage(): def __init__(self,name): self.name = name self.hp = 100 self.mana = 200 self.spellpower = 1 def fireball(self, enemy): dmg = 10 * self.spellpower self.mana -= 20 enemy.hp -= dmg class Orc(): def __init__(self): self.name = "Orc" self.hp = 20 self.dmg = 5 def hit(self): dmg = self.dmg </code></pre> <p>and this is the rest of the code (where I use the fireball method in the end):</p> <pre><code>print ("Welcome to the game") name = input("What is your name? ") class1 = input("Choose your class: (Fighter, Mage) ") if class1 == "Fighter" or class1 == "fighter": print (f"welcome Fighter,{name}. Grab your sword and come with me") class1 = Fighter(name) if class1 == "Mage" or class1 == "mage": print (f"Come with us Mage {name}, we need your help") class1 = Mage(name) class1.fireball(Orc) </code></pre>
<p>You are calling fireball on <code>Orc</code>, which is the class, not an instance. First instantiate an orc and then call fireball on it. What you wrote is equivalent to trying to cast fireball on the entire Orc race instead of on a particular orc.</p>
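<p>Concretely, using the classes from the question, the last line would become something like:</p>
<pre><code>enemy = Orc()            # an actual orc instance, not the Orc class itself
class1.fireball(enemy)   # fireball can now subtract from enemy.hp
print(enemy.hp)          # 20 - 10 * spellpower = 10
</code></pre>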
python|class|oop
1
1,906,326
59,742,298
Subprocess does not execute Rscript
<p>I need to run Rscript from Python and wait until it finishes. As far as I know, <code>subprocess</code> is more preferable than <code>os.system(command)</code>.</p> <p>This is my current code:</p> <pre><code>import subprocess command = "Rscript myscript.r -f 1" subprocess.call([command]) # run some Python code </code></pre> <p>It gives me the following error message:</p> <blockquote> <p>FileNotFoundError: [Errno 2] No such file or directory: 'Rscript myscript.r -f 1'</p> </blockquote> <p>Just to mention that <code>os.system(command)</code> worked well, but it was not waiting until the script <code>myscript.r</code> finishes.</p>
<p><code>command</code> is the command <em>line</em>, not the command. Passing <code>command</code> in a list of 1 element tries to run <code>"Rscript myscript.r -f 1"</code> as a command.</p> <p>Splitting into arguments (manually, which allows to pass parameters containing spaces too) is the best way:</p> <pre><code>subprocess.call(["Rscript","myscript.r","-f","1"]) </code></pre> <p>Note that it does roughly the same as <code>os.system</code> (except that <code>os.system</code> 1) is unsafe 2) doesn't handle quoting)</p> <p>So it's not going to wait for command to complete. On Windows, we could try to prefix with <code>cmd /c</code> to avoid that command detaches:</p> <pre><code>subprocess.call(["cmd","/c","Rscript","myscript.r","-f","1"]) </code></pre>
python|subprocess|python-os
1
1,906,327
59,621,664
python & c-c++ extended module case segmentfault
<p>c++ code</p> <pre><code>extern "C" PyObject * test(){ PyObject *oplist = PyList_New(10000); for(uint32_t j = 0; j &lt; 10000; j++){ PyObject* pTuple = PyTuple_New(3); assert(PyTuple_Check(pTuple)); assert(PyTuple_Size(pTuple) == 3); PyTuple_SetItem(pTuple, 0, Py_BuildValue("s", "b")); PyTuple_SetItem(pTuple, 1, Py_BuildValue("i", 1)); PyTuple_SetItem(pTuple, 2, Py_BuildValue("s", "a")); PyList_SetItem(oplist, j, pTuple); } return oplist; } </code></pre> <p>python code</p> <pre><code>LID = ctypes.CDLL('%s/token2map_lib.so' % '.') LID.test.restype = py_object LID.test() </code></pre> <p>build cmd <code> g++ -fPIC token2map.cpp -I/usr/local/app/service/virtualenvs/NLP/include/python2.7 -shared -o token2map_lib.so </code> i just show a part of code,forgive me and total code is to long</p> <p>problem: in c++ code,this function return res_list is smallar everything is ok. forever, result set is more than 215(j = 215) case segmentfalut. i can't find problem, hope present friend can give me some advise,i will very grateful. </p> <p>i got a way to solution this problem</p> <pre><code>extern "C" PyObject * test(){ PyGILState_STATE gstate = PyGILState_Ensure(); PyObject *oplist = PyTuple_New(10000); for(int32_t j = 0; j &lt; 10000; j++){ PyObject * pTuple = PyTuple_New(3); assert(PyTuple_Check(pTuple)); assert(PyTuple_Size(pTuple) == 3); PyTuple_SetItem(pTuple, 0, Py_BuildValue("s", "b")); PyTuple_SetItem(pTuple, 1, Py_BuildValue("i", 1)); PyTuple_SetItem(pTuple, 2, Py_BuildValue("s", "a")); PyTuple_SetItem(oplist, j, pTuple); } PyGILState_Release(gstate); return oplist; } </code></pre> <p>but is there have anthor way to solution this problem? i don't think get GIL locak is a gool way</p>
<p>I can't give you an explanation as to why your code gives a seg fault. I do find it interesting that you were able to build your extension with a function that takes no parameters. I can, however provide code that does build and work:</p> <pre><code>#define Py_SSIZE_T_CLEAN #include &lt;Python.h&gt; #include &lt;inttypes.h&gt; #ifdef __cplusplus extern "C" { #endif static PyObject *test(PyObject *self, PyObject *ignorethis) { PyObject *oplist = PyList_New(10000); for(uint32_t j = 0; j &lt; 10000; ++j){ PyObject *pTuple = PyTuple_New(3); PyTuple_SetItem(pTuple, 0, Py_BuildValue("s", "b")); PyTuple_SetItem(pTuple, 1, Py_BuildValue("i", 1)); PyTuple_SetItem(pTuple, 2, Py_BuildValue("s", "a")); PyList_SetItem(oplist, j, pTuple); } return oplist; } static PyMethodDef methods[] = { {"test", test, METH_NOARGS, "function given by so"} }; static PyModuleDef foobar = { PyModuleDef_HEAD_INIT, "foobar", "so question module", -1, methods }; PyMODINIT_FUNC PyInit_foobar(void){ PyObject *module; module = PyModule_Create(&amp;foobar); return module; } #ifdef __cplusplus } #endif </code></pre> <p>you can load this module with <code>import foobar</code> and then run with <code>foobar.test()</code></p>
python|c++|c
1
1,906,328
70,928,182
modeling reinforcement learning environment with Ray
<p>I have been playing around with the idea of using reinforcement learning on a particular problem in which I am optimizing a raw materials purchasing strategy for a particular commodity. I have created a simple gym environment to show a simplified version of what I'd like to accomplish. The goal is to take in multiple items (in this case 2) and optimize a purchasing strategy for each item so that the sum of days on hand of all items are minimized without running out of either item.</p> <pre><code>from gym import Env from gym.spaces import Discrete, Box, Tuple from gym import spaces import numpy as np import random import pandas as pd from random import randint #define our variable starting points #array of the start quantity for 2 seperate items start_qty = np.array([10000, 200]) #create the number of simulation weeks sim_weeks = 1 #set a starting safety stock level------INGORE FOR NOW #safety_stock = 4003 #create simple demand profile for each item #demand = np.array([301, 1549, 3315, 0, 1549, 0, 0, 1549, 1549, 1549]) demand = np.array([1800, 45]) #create minimum order and max order quantities for each item min_ord = np.array([26400, 250]) max_ord = np.array([100000, 100000]) prev_30_usage = np.array([1600, 28]) #how this works is it in the numpy arrays- the stuff in index 0 is the first item's info # and the stuff in index 1 is the second item's info class ResinEnv(Env): def __init__(self): self.action_space = Tuple([Discrete(2), Discrete(2)]) self.observation_space = Box(low= np.array([-10000000]), high = np.array([10000000])) #set the start qty self.state = np.array([10000, 200]) #self.start = start_qty #set the purchase length self.purchase_length = sim_weeks self.min_order = min_ord def step(self, action): self.purchase_length -= 1 #apply action self.state[0] -=demand[0] self.state[1] -= demand[1] #see if we need to buy #action is between 0 and 1- round the action to the nearest tenth action = np.around(action, decimals = 0) #self.state +=action*self.min_order np.add(self.state, action* self.min_order, out=self.state, casting=&quot;unsafe&quot;) #self.state += (action*100) + 26400 #calculate the days on hand from this days = self.state/prev_30_usage/7 #item_reward1 = action[0] #item_reward2 = action[1] #calculate reward: right now reward is negative of days_on_hand #GOING TO NEED TO CHANGE THIS REWARD AT SOME POINT MOVING FORWARD AS IT #NEEDS TO TREAT HIGH VOLUME ITEMS AND LOW VOLUME ITEMS THE SAME- THIS IS BIASED AGAINST LOW VOLUME if self.state[0] &lt; 0: item_reward1 = -10000 else: item_reward1 = days[0] if self.state[1]&lt; 0: item_reward2 = -10000 else: item_reward2 = days[1] reward = item_reward1 + item_reward2 #check if we are out of weeks if self.purchase_length&lt;=0: done = True else: done = False #reduce the weeks left to purchase by 1 week #done = True #set placeholder for info info = {} #return step information return self.state, reward, done, info def render(self): pass def reset(self): self.state = np.array([10000, 200]) self.purchase_length = sim_weeks self.demand = demand self.action_space = Tuple([Discrete(2), Discrete(2)]) self.min_order= min_ord return self.state #, self.purchase_length, self.demand, self.action_space, self.min_order </code></pre> <p>The environment seems to be functioning just fine as seen with this code:</p> <pre><code>episodes = 100 for episode in range(1, episodes+1): state = env.reset() done = False score = 0 while not done: #env.render() action = env.action_space.sample() n_state, reward, done, info = env.step(action) score+=reward 
print('Episode:{} Score:{} Action:{}'.format(episode, score, action)) </code></pre> <p>I have attempted to run this through various ways of modeling with no luck and have discovered Ray but can't seem to get that to work either. I was wondering if someone could walk me through the process of modeling in Ray- or help identify any issues with the environment itself that would cause Ray to not work. Any help is greatly appreciated, as I am new to RL and completely stumped.</p>
<p>I'm new to RL and was searching for some code and found yours.</p> <p>It seems you only need to define the env; I added this line and it worked:</p> <pre><code>.... episodes = 100 env = ResinEnv() for episode in range(1, episodes+1): state = env.reset() done = False score = 0 .... </code></pre> <p>Hope it is useful.</p>
python|reinforcement-learning|openai-gym
0
1,906,329
70,882,559
pydantic's `Field`'s default value ignores constraint checks
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel class User(BaseModel): age: int = Field('foo', ge=0) User() # doesn't raise an error # User(age='foo') </code></pre> <p>Why doesn't this raise an error since a string <code>foo</code> is passed even though an <code>int</code> is expected?</p> <p><code>User(age='foo')</code> however raises the <code>ValidationError</code> as expected.</p>
<p>This is connected to the config that you can add to all of your models. By default, the default values of Fields are excluded from validation, on the assumption that the programmer puts in a proper default value. However, if you want to validate defaults too, you can enforce it by adding a Config to your model:</p> <pre><code>class User(BaseModel): age: int = Field('foo', ge=0) class Config: validate_all = True if __name__ == &quot;__main__&quot;: User() # Now raises an error </code></pre> <p>Also have a look at the other options for configs in the docs: <a href="https://pydantic-docs.helpmanual.io/usage/model_config/" rel="nofollow noreferrer">https://pydantic-docs.helpmanual.io/usage/model_config/</a></p>
python|pydantic
2
1,906,330
70,795,108
Networkx - create a multilayer network from two adjacent matrices
<p>I have two adjacent matrices that represent two brain structures (cerebellum and cortex):</p> <p>Dataset:</p> <pre><code>import networkx as nx from astropy.io import fits # Cerebellum with fits.open('matrix_CEREBELLUM_large.fits') as data: matrix_cerebellum = pd.DataFrame(data[0].data.byteswap().newbyteorder()) # 1858 rows × 1858 columns # Cortex with fits.open('matrix_CORTEX_large.fits') as data: matrix_cortex = pd.DataFrame(data[0].data.byteswap().newbyteorder()) #1464 rows × 1464 columns </code></pre> <p>Note: datasets can be downloaded here: <a href="https://cosmosimfrazza.myfreesites.net/cosmic-web-and-brain-network-datasets" rel="nofollow noreferrer">brain datasets</a></p> <h2>Adjacent matrices</h2> <p>Adjacent matrices here are not weighted, and have the usual binary representation, with 1 value for connected nodes and 0 otherwise, like so:</p> <pre><code>0 1 0 1 0 0 0 0 0 0 0 ... 0 0 0 1 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 1 ... </code></pre> <p>I'm using the library <code>Networkx</code> to look for <strong>community detection</strong> in the networks. I could try to do that for each network, individually.</p> <h2>Simulation</h2> <p>Let's say I need to simulate the real world networks, where a fraction of cortex nodes ( say, 0.01%) projects edges into cerebellum.</p> <p>I'm wondering how I could implement this simulation considering my community detection goal.</p> <h2>Approaches</h2> <p>I initially though about creating a bipartite network, but decided instead to use a multilayer network (2 layers, actually) approach.</p> <p>In this approach, cortex would be network layer 1, cerebellum would be network layer 2, each one with <em>intra</em>-connections already represented in each adjacent matrix.</p> <p>Now I would add the cortex projections as <em>inter</em>-connections between the two layers.</p> <h2>Question</h2> <p>How do I create these projections and represent the new matrix, knowing that I need to:</p> <ol> <li>start from my adjacent matrices</li> <li>keep their intra-connectivity mappings</li> <li>add a new mapping for the intermediate layer</li> </ol>
<p>Here's a way to do what you want:</p> <ol> <li>First after loading your adjacency matrices to pandas you can convert them to two different graphs with <code>nx.convert_matrix.from_pandas_adjacency</code></li> <li>You can then join the two graph into a single graph by using <code>nx.disjoint_union</code>. The nodes of both graphs are basically concatenated onto a single graph (see more <a href="https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.operators.binary.disjoint_union.html#networkx.algorithms.operators.binary.disjoint_union" rel="nofollow noreferrer">here</a>).</li> <li>Once you have the full graph, you can randomly draw nodes from the cortex part of the full graph with a 0.01 probability.</li> <li>Similarly you can draw the same number of nodes on the cerebellum part of the graph to act as recipients of the connection.</li> <li>Finally you can create the edges between the chosen nodes on both sides.</li> <li>And you can get your adjacency matrix from the final graph by using <code>adj_matrix_full=nx.linalg.graphmatrix.adjacency_matrix(full_g,weight=None)</code></li> </ol> <p>See full code below for more details:</p> <pre class="lang-py prettyprint-override"><code>import networkx as nx from astropy.io import fits import pandas as pd import numpy as np import matplotlib.pyplot as plt fig=plt.figure(figsize=(20,20)) # Cerebellum with fits.open('matrix_CEREBELLUM_large.fits') as data: matrix_cerebellum = pd.DataFrame(data[0].data.byteswap().newbyteorder()) # 1858 rows × 1858 columns # Cortex with fits.open('matrix_CORTEX_large.fits') as data: matrix_cortex = pd.DataFrame(data[0].data.byteswap().newbyteorder()) #1464 rows × 1464 columns cerebellum_g=nx.convert_matrix.from_pandas_adjacency(matrix_cerebellum) #convert cerebellum adj matrix to nx graph N_nodes_cer=cerebellum_g.number_of_nodes() cortex_g=nx.convert_matrix.from_pandas_adjacency(matrix_cortex) #convert matrix adj matrix to nx graph N_nodes_cort=cortex_g.number_of_nodes() full_g=nx.algorithms.operators.binary.disjoint_union(cortex_g,cerebellum_g) #concatenate the two graphs #choose randomly 0.01 cortex nodes to project to cerebellum p=0.01 N_projs=int(cortex_g.number_of_nodes()*p) cortex_proj_nodes=np.random.choice(cortex_g.number_of_nodes(), size=N_projs,replace=False) cerebellum_recipient=np.random.choice(cerebellum_g.number_of_nodes(), size=N_projs,replace=False) #Add edges for i in range(N_projs): full_g.add_edge(list(full_g.nodes)[cortex_proj_nodes[i]],list(full_g.nodes)[N_nodes_cort+cerebellum_recipient[i]]) #Color the nodes based on brain region color_map = [] for node in full_g: if node &lt; N_nodes_cort: color_map.append('tab:blue') else: color_map.append('tab:orange') adj_matrix_full=nx.linalg.graphmatrix.adjacency_matrix(full_g,weight=None) #Compute adj matrix for full graph pos = nx.circular_layout(full_g) #Setting up a legend plt.plot([],[],color='tab:blue',label='Cortex') plt.plot([],[],color='tab:orange',label='Cerebellum') #plotting graph nx.draw(full_g,pos=pos,node_color=color_map) plt.legend() </code></pre> <p>And the output gives: <a href="https://i.stack.imgur.com/xwoNY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xwoNY.png" alt="enter image description here" /></a></p>
python|networkx|adjacency-matrix
1
1,906,331
60,320,697
Reportlab 3.5 diacritic's in text are mispositioned for certain fonts (Junicode)
<p>I'm creating a parser in python3.7 that takes xml files as input and creates PDF files as output. I am using reportlab 3.5 and everything is working except one thing. The text I am parsing uses a certain font called "Junicode". The font is used properly except that the diacritics (the letters that should go above another letter, like "´" goes over "e" like this é) are shifted to the right. One example here:</p> <p><a href="https://i.stack.imgur.com/uUOsX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uUOsX.png" alt="Word with the diacritic shifted."></a></p> <p>I am using SimpleDocTemplate and the text goes into a table. I simplified the code a little:</p> <pre><code>pdfmetrics.registerFont(TTFont('Junicode', './fonts/Junicode.ttf')) pdfmetrics.registerFont(TTFont('JunicodeBd', './fonts/Junicode-Bold.ttf')) pdfmetrics.registerFont(TTFont('JunicodeBI', './fonts/Junicode-BoldItalic.ttf')) pdfmetrics.registerFont(TTFont('JunicodeIt', './fonts/Junicode-Italic.ttf')) document = SimpleDocTemplate("output_pdf/" + os.path.splitext(os.path.basename(imgfile))[0] + ".pdf", pagesize=self.canvas_size, rightMargin=Helpers.mm_to_pts(self.margin_right), leftMargin=Helpers.mm_to_pts(self.margin_left), topMargin=Helpers.mm_to_pts(self.margin_top), bottomMargin=Helpers.mm_to_pts(self.margin_bottom)) frame = Frame(document.leftMargin, document.bottomMargin, document.width, document.height) text_template = PageTemplate(id='textpage', frames=[frame], onPage=self.__draw_text_page) document.addPageTemplates([text_template]) page_flow = [some_other_stuff , NextPageTemplate('textpage'), PageBreak()] [... code to get line from xml] table_data.append(['', line]) [...] table_styles = [('ALIGN', (0, 0), (0, -1), 'RIGHT'), ('ALIGN', (2, 0), (2, -1), 'RIGHT'), ('SIZE', (0, 0), (-1, -1), self.font_size), ('FONT', (0, 0), (-1, -1), 'Junicode'), ('VALIGN', (0, 0), (-1, -1), 'MIDDLE'), ] table = Table(table_data, rowHeights=self.table_row_height) table.setStyle(TableStyle(table_styles)) page_flow.append(table) document.build(page_flow) </code></pre> <p>I am trying to get the diacritics above the letters, f.e. in the image above, i'd like the 90-degree tilted ":" to be on top of the y.</p> <p>Does anyone know where this is coming from and if there is a solution for it?</p> <p>Thanks, Paul</p>
<p>Workaround:</p> <p>I managed to solve the issue by editing the font with FontForge. I shifted the x position of the diacritics to a negative value, thus positioning them properly. For capital letters I had to increase the y value too, so I created additional diacritics (in places in the font where no letter was set) with increased y values and replaced the old ones with the new ones whenever a capital letter precedes the diacritic.</p>
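<p>For anyone who wants to script the same fix instead of using the FontForge GUI, here is a rough sketch with FontForge's Python bindings (the glyph name and offsets are placeholders; the actual combining-mark glyph names and shift values depend on the font):</p> <pre><code>import fontforge
import psMat

font = fontforge.open("Junicode.ttf")
glyph = font["acutecomb"]                  # hypothetical glyph name of the combining acute
glyph.transform(psMat.translate(-450, 0))  # shift the mark left; the value is font-specific
font.generate("Junicode-fixed.ttf")
</code></pre>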
python|python-3.x|reportlab
0
1,906,332
3,217,768
How to add __iter__ to dynamic type?
<h2>Source</h2> <pre><code>def flags(*opts): keys = [t[0] for t in opts] words = [t[1] for t in opts] nums = [2**i for i in range(len(opts))] attrs = dict(zip(keys,nums)) choices = iter(zip(nums,words)) return type('Flags', (), dict(attrs)) Abilities = flags( ('FLY', 'Can fly'), ('FIREBALL', 'Can shoot fireballs'), ('INVISIBLE', 'Can turn invisible'), ) </code></pre> <h2>Question</h2> <p>How can I add an <code>__iter__</code> method to <code>Abilities</code> so that I can iterate over <code>choices</code>?</p> <h2>Why?</h2> <p>This way I can use things like </p> <pre><code>hero.abilities = Abilities.FLY | Abilities.FIREBALL if hero.abilities &amp; Abilities.FIREBALL: for k, v in Abilities: print k, v </code></pre> <p>in my code without having to use any magic numbers or strings, and I can also save the set of flags to the DB as a single int, or display the list in a readable format.</p> <p>Other improvements are welcome.</p>
<p>There's no need to use a dynamic type here; I'd restructure this as a simple class, for example:</p> <pre><code>class flags(object): def __init__(self, *opts): keys = [t[0] for t in opts] words = [t[1] for t in opts] nums = [2**i for i in range(len(opts))] self.attrs = dict(zip(keys,nums)) self.choices = zip(nums,words) def __getattr__(self, a): return self.attrs[a] def __iter__(self): return iter(self.choices) Abilities = flags( ('FLY', 'Can fly'), ('FIREBALL', 'Can shoot fireballs'), ('INVISIBLE', 'Can turn invisible'), ) print Abilities.FLY for k, v in Abilities: print k, v </code></pre>
python
4
1,906,333
2,636,656
How to do this in a pythonic way?
<p>Consider this Python snippet:</p> <pre><code>for a in range(10): if a == 7: pass if a == 8: pass if a == 9: pass else: print "yes" </code></pre> <p>How can it be written shorter?</p> <pre><code>#Like this or... if a ?????[7,8,9]: pass </code></pre>
<p>Use the <code>in</code> operator:</p> <pre><code>if a in (7,8,9): pass </code></pre>
python|syntax
17
1,906,334
5,859,144
Shift all indices in NumPy array
<p>I have a numpy array like this: </p> <pre><code>x=np.array([0,1,2,3,4]) </code></pre> <p>and want to create an array where the value in index 0 is in index 1, index 1 is in index 2, etc.</p> <p>The output I want is:</p> <pre><code>y=np.array([0,0,1,2,3]). </code></pre> <p>I'm guessing there's an easy way to do this without iterating through the full array. How can I do this in a numPythonic way?</p>
<p>You can use</p> <pre><code>y = numpy.roll(x, 1) y[0] = 0 </code></pre> <p>or</p> <pre><code>y = numpy.r_[0, x[:-1]] </code></pre>
python|numpy
17
1,906,335
30,452,201
Plot NumPy ndarray into a 3D surface
<p>I have a <code>numpy.ndarray</code> of size 200x200. I want to plot it as a 3D surface where x and y are indexes of the array and z is the value of that array element. Is there any easy way to do it or do I have to transform my array into a long list?</p>
<p>If what you want is to plot a 3D surface on top of a 2D grid what you could do is something similar to this:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm # create some fake data array_distribution3d = np.ones((200, 200)) array_distribution3d[0:25, 0:25] = -1 # create the meshgrid to plot on x = np.arange(0, array_distribution3d.shape[0]) y = np.arange(0, array_distribution3d.shape[1]) # here are the x,y and respective z values X, Y = np.meshgrid(x, y) z = [] for i in range(0, array_distribution3d.shape[0]): z_y = [] for j in range(0, array_distribution3d.shape[1]): z_y.append(array_distribution3d[i, j]) z.append(z_y) Z = np.array(z) # create the figure, add a 3d axis, set the viewing angle fig = plt.figure(figsize=(12, 9)) ax = fig.add_subplot(111, projection='3d') ax.view_init(45, 60) # here we create the surface plot ax.plot_surface(X, Y, Z) </code></pre> <p>However, to the best of my knowledge, this kind of data can be plotted as a colourmap. This can be plotted as follows:</p> <pre><code>import numpy as np import os.path import matplotlib.pyplot as plt array_distribution = np.ones((200, 200)) array_distribution[0:25, 0:25] = -1 fig = plt.imshow(array_distribution) plt.colorbar(fraction=0.035, pad=0.035, ticks=[-1., 0., 1.]) axes = plt.gca() axes.set_ylim([0, 200]) figure = plt.gcf() file = os.path.join('demo1.png') figure.savefig(file, dpi=250) plt.close('all') print('done') </code></pre>
python|numpy|matplotlib|plot
0
1,906,336
66,874,582
Python subprocess.Popen giving error with unix 'find' command
<p>I am trying to get the largest 20 files in a directory that are older than 60 days by passing the unix 'find' command to Python's subprocess.Popen. This is what I have tried:</p> <pre><code># Get largest files older than 60 days cmd_string = 'find {0} -type f -mtime +60 -size +300M -exec du -sh {{}} \; | sort -rh | head -n20'.format(rmnt) print(cmd_string) cmd_args = shlex.split(cmd_string, posix=False) print(cmd_args[12]) find_out = subprocess.Popen(cmd_args, stdout=subprocess.PIPE).stdout </code></pre> <p>where rmnt is a directory name (eg. '/mnt/active'). The print statements return the command correctly:</p> <pre><code>find /mnt/active -type f -mtime +60 -size +300M -exec du -sh {} \; | sort -rh | head -n20 \; </code></pre> <p>But I am getting this error:</p> <pre><code>find: missing argument to `-exec' </code></pre> <p>I thought that the problem was due to the special character &quot;\&quot; but it is being printed as expected.</p> <p><strong>Update 1.</strong></p> <p>I found this similar question: <a href="https://stackoverflow.com/questions/45051547/find-missing-argument-to-exec-when-using-subprocess">find: missing argument to `-exec&#39; when using subprocess</a></p> <p>It deals with subprocess.call rather than subprocess.Popen but the answer that &quot;there is no shell to interpret and remove that backslash&quot; seems to apply here also. However, with the backslash removed I still get an error:</p> <pre><code>find: paths must precede expression: | </code></pre> <p>This error suggests that the backslash is not being passed to the find command</p>
<p>For anyone that has a similar issue, there are 2 problems in my code.</p> <p><strong>1.</strong></p> <p>By default, subprocess.Popen uses <code>shell=False</code> and therefore there is no shell to interpret and remove the backslash as explained here: <a href="https://stackoverflow.com/questions/45051547/find-missing-argument-to-exec-when-using-subprocess">find: missing argument to `-exec&#39; when using subprocess</a></p> <p><strong>2.</strong></p> <p>To use a pipe with the subprocess module, you have to pass <code>shell=True</code> which is a security hazard. Therefore, the commands should be separated into 2 or more commands as explained here: <a href="https://stackoverflow.com/questions/13332268/how-to-use-subprocess-command-with-pipes">How to use `subprocess` command with pipes</a></p> <p>The code that worked for me is:</p> <pre><code># Get largest files older than 60 days find_cmd = 'find {0} -type f -mtime +60 -size +300M -exec du -sh {{}} ;'.format(rmnt) find_args = shlex.split(find_cmd) sort_cmd = 'sort -rh' sort_args = shlex.split(sort_cmd) find_process = subprocess.Popen(find_args, stdout=subprocess.PIPE) sort_process = subprocess.Popen(sort_args, stdin=find_process.stdout,stdout=subprocess.PIPE) find_process.stdout.close() sort_out = sort_process.stdout </code></pre>
python|subprocess|find|popen|shlex
0
1,906,337
42,891,954
Pygame - "error: display Surface quit"
<p>I've made a menu screen where clicking on a button leads to a different <em>screen</em> in the same window.</p> <pre><code>def main(): import pygame, random, time pygame.init() size=[800, 600] screen=pygame.display.set_mode(size) pygame.display.set_caption("Game") done=False clock=pygame.time.Clock() while done==False: for event in pygame.event.get(): pos = pygame.mouse.get_pos() if event.type == pygame.QUIT: done=True break if button_VIEW.collidepoint(pos): if event.type == pygame.MOUSEBUTTONDOWN: print("VIEW.") view() break screen.fill(black) ... def view(): done=False clock=pygame.time.Clock() while done==False: for event in pygame.event.get(): pos = pygame.mouse.get_pos() if event.type == pygame.QUIT: done=True break ... </code></pre> <p>If possible, I'd like to know how I can avoid the error:</p> <pre><code> screen.fill(black) error: display Surface quit &gt;&gt;&gt; </code></pre> <p>After looking at other questions on here, I tried adding <code>break</code>s to the exits of any loops, but still the error occurs.</p> <p>I understand the issue is that the program is trying to execute <code>screen.fill(black)</code> after the window has been closed, but I have no further ideas on how to prevent the error.</p> <p>I appreciate any help. Sorry if it seems simple.</p>
<p>Several possibilities:</p> <ul> <li>end the process (with e.g. <code>sys.exit()</code>) in the <code>view</code> function. Not ideal.</li> <li>return a value from the <code>view</code> function to indicate that the application shoud end (e.g. <code>return done</code>), and check for that return value in the <code>main</code> function (<code>if done: return</code>). Better.</li> <li>make <code>done</code> global and check for its value in the <code>main</code> function. I really would not like this solution.</li> <li>my favourite: avoid multiple event loops altogether, so the problem solves itself (so you could just e.g. exit the <code>main</code> function with return).</li> </ul>
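<p>A minimal sketch of the second option (returning a value from <code>view</code>), based on the structure in the question:</p> <pre><code>def view():
    while True:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                return True          # tell the caller the window was closed
        # ... drawing code for the view screen ...
        # return False here if view() ends for any other reason

# inside main()'s event loop:
#     if button_VIEW.collidepoint(pos) and event.type == pygame.MOUSEBUTTONDOWN:
#         done = view()              # propagate the quit so main() stops too
#     if done:
#         break                      # skip screen.fill(black) once we are done
</code></pre>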
python-2.7|pygame
1
1,906,338
66,426,177
Display custom entities using diSplacy
<p>I have a text string with a set of <em>fixed</em> named entities (person, location, ...) as shown in the example below</p> <pre class="lang-py prettyprint-override"><code>text = &quot;My name is John Smith and I live in Paris&quot; entities = [ (&quot;Person&quot;, 11, 21), # John Smith (&quot;Location&quot;, 36, 41), # Paris ] </code></pre> <p>and I would like to display them using the very nice renderer from Spacy called DiSplacy [1]. If I understand well, the best way for me is to create a custom <code>Doc</code> object in Spacy using my custom entities but I didn't find the right way to do this.</p> <p>[1] <a href="https://spacy.io/usage/visualizers#ent" rel="nofollow noreferrer">https://spacy.io/usage/visualizers#ent</a></p>
<p>You can use the manual option in <code>displacy.serve()</code> or <code>displacy.render()</code>.</p> <pre class="lang-py prettyprint-override"><code>dic_ents = { &quot;text&quot;: &quot;My name is John Smith and I live in Paris&quot;, &quot;ents&quot;: [ {&quot;start&quot;: 11, &quot;end&quot;: 21, &quot;label&quot;: &quot;Person&quot;}, {&quot;start&quot;: 36, &quot;end&quot;: 41, &quot;label&quot;: &quot;Location&quot;}, ], &quot;title&quot;: None } displacy.render(dic_ents, manual=True, style=&quot;ent&quot;) </code></pre>
python|spacy|named-entity-recognition
0
1,906,339
66,655,949
i cant replace ' , ' to ' ' in string type to convert it to integer
<p>I thought I can use this code to convert a string:</p> <pre><code>str = &quot;3,443&quot; str.replace(&quot;,&quot;, &quot;&quot;) int_num = int(str) </code></pre> <p>But it doesn't work and raises a <code>ValueError</code>. How can I convert this string?</p>
<p><code>str.replace</code> doesn't change the string <em>in place</em>; it returns a new string, so you need to reassign it. Also, you shouldn't use <code>str</code> to name your variables, because it shadows the built-in type.</p> <pre class="lang-py prettyprint-override"><code>string = &quot;3,443&quot; string = string.replace(&quot;,&quot;, &quot;&quot;) int_num = int(string) </code></pre>
python
10
1,906,340
3,427,505
django shell triggering Postgres idle transaction problems
<p>It's not the fault of the django (iPython) shell, actually. The problem is developers who open the django shell <code>./manage.py shell</code> run through some queries (it often only generates selects), and then either leave the shell running or somehow kill their (ssh) session (actually, I'm not sure if the latter case leaves the transaction open - I haven't tested it)</p> <p>In any case, nagios regularly alerts over these idle transactions. We could, of course call <code>developer.stop_doing_that_dammit()</code> but it's unreliable.</p> <p>I'm looking for thoughts on resolving this in a way that allows developers to use the django shell, but closes transactions should they forget to close their session out.</p>
<p>You can always run a cron job that calls pg_cancel_backend() within the database for the backends that have been idle for longer than e.g. 1 day (of course the threshold depends on the nagios settings).</p>
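<p>A rough sketch of such a cron job in Python with psycopg2 (this assumes PostgreSQL 9.2+ column names in <code>pg_stat_activity</code>; the DSN and threshold are placeholders, and whether to cancel or terminate the backend is up to you):</p> <pre><code>import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")  # placeholder DSN
conn.autocommit = True
cur = conn.cursor()
cur.execute("""
    SELECT pg_terminate_backend(pid)
    FROM pg_stat_activity
    WHERE state = 'idle in transaction'
      AND state_change &lt; now() - interval '1 day'
""")
cur.close()
conn.close()
</code></pre>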
python|django|postgresql|ipython
1
1,906,341
50,278,016
Python view vs copy error wants me to use .loc in script only
<p>I'm running a long script which has a dataframe <code>df</code>. as the script runs, building up and modifying <code>df</code> column by column I get this error over and over again in the command line: </p> <blockquote> <p>A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy" rel="nofollow noreferrer">http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy</a></p> </blockquote> <p>But then it will print out the line that is causing the warning and it wont look like a problem. Lines such as the following will trigger it (each line triggered it separately):</p> <pre><code>df['ZIP_DENS'] = df['ZIP_DENS'].astype(str) df['AVG_WAGE'] = df['AVG_WAGE'].astype(str).apply(lambda x:x if x != 'nan' else 'unknown') df['TERM_BIN'] = df['TERMS'].map(terms_dict) df['LOSS_ONE'] = 'T_'+ df['TERM'].astype(str) +'_C_'+ df['COMP'].astype(str) + df['SIZE'] # this one's inside a loop: df[i + '_BIN'] = df[i + '_BIN'].apply(lambda x:x if x != 'nan' else 'unknown') </code></pre> <p>There are some examples of the mutations I'm making on the dataframe. Now, this warning just started showing up but I can't recreate this problem in the interpreter. When I open a terminal I try things like this and it gives me no warnings:</p> <pre><code>import pandas as pd df = pd.DataFrame([list('ab'),list('ef')],columns=['first','second']) df['third'] = df[['first','second']].astype('str') </code></pre> <p>Is there something I'm missing, something I don't understand about the nature of DataFrames that this warning is trying to tell me? Do you think perhaps I did something to this dataframe at the beginning of the script and then all subsequent mutations on the object are mutations on a view or a copy of it or something weird like that is going on? </p>
<p>As I mentioned in my comment, the likely issue is that somewhere upstream in your code, you assigned a slice of some other <code>pd.DataFrame</code> to <code>df</code>. This is a common cause of confusion and is also explained under <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#why-does-assignment-fail-when-using-chained-indexing" rel="nofollow noreferrer">why-does-assignment-fail-when-using-chained-indexing</a> in the link that the <code>Warning</code> mentions. </p> <p>A minimal example:</p> <pre><code>data = pd.DataFrame({'a':range(7), 'b':list('abcccdb')}) df = data[data.a % 2 == 0] #making a subselection of the DataFrame df['b'] = 'b' </code></pre> <blockquote> <p>/home/user/miniconda3/envs/myenv/lib/python3.6/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead</p> <p>See the caveats in the documentation: <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy" rel="nofollow noreferrer">http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy</a> """Entry point for launching an IPython kernel.</p> </blockquote> <p>Notice that this section:</p> <pre><code>df = data[data.a % 2 == 0] #making a subselection of the DataFrame df['b'] = 'b' </code></pre> <p>could just as well be rewritten this way:</p> <pre><code>data[data.a % 2 == 0]['b'] = 'b' #obvious chained indexing df = data[data.a % 2 == 0] </code></pre> <p>The correct way of writing this bit is the following way:</p> <pre><code>data = pd.DataFrame({'a':range(7), 'b':list('abcccdb')}) df = data.loc[data.a % 2 == 0].copy() #making a copy of the subselection df.loc[:,'b'] = 'b' </code></pre>
python|pandas
1
1,906,342
26,695,380
Writing generated numbers in text file in Python
<p>So I tried browsing other posts, and even asked a friend before resorting to actually asking here. I have a homework assignment which calls for me to create a program in Python, that generates random numbers based on how many numbers a user inputs. For example, if they input that they want to generate 5 numbers... the program will do just that. Now my issue is that I created the following:</p> <pre><code>import random def main(): howMany = 0 numbers = 0 howMany = int(input('How many numbers would you like to generate?: ')) infile = open ('rand_write.txt', 'w') for n in range(1,howMany): numbers = random.randint(1,115) infile.write(str(numbers)) infile.close() main() </code></pre> <p>Everything works alright, until it's time to actually get the 5 numbers onto a text file. I can't... for the life of me.. figure out just what I'm doing wrong. The program writes to the text file, but it only writes a random number, not 5. I would greatly appreciate any pointers and guidance in figuring out what I should do to solve this issue. Thank you very much!</p>
<p>Your indentation is wrong. You need to put a tab in front of <code>infile.write(str(numbers))</code> to make sure that it gets executed on each loop iteration. Otherwise you end up writing just the last number.</p> <p>You may also want to write some separator between the numbers.</p> <p>Finally, you may want to make just one call to generate all the random numbers, as follows:</p> <pre><code>numpy.random.random_integers(1,115, howMany) </code></pre>
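<p>A corrected version of your loop for reference, using the same names as in your code (note also that <code>range(1, howMany)</code> yields <code>howMany - 1</code> values, while <code>range(howMany)</code> gives exactly <code>howMany</code>):</p> <pre><code>with open('rand_write.txt', 'w') as infile:
    for n in range(howMany):
        numbers = random.randint(1, 115)
        infile.write(str(numbers) + '\n')   # indented so it runs every iteration, one number per line
</code></pre>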
python|random|numbers
1
1,906,343
26,764,094
Python Multiply two arrays of unequal size
<p>I have an amplitude curve from x = 2000 to 5000 in 3000 steps and a data curve from x = 0 to 10000 in 50000 steps. Now I want to normalize the data (multiply with the amplitude curve), but as you can see the two arrays are of unequal length and have different start points. </p> <p>Is there any way of doing this without resizing one of the two? (all values outside the amplitude range can be zero)</p>
<p>You said you don't want to resize the lists, so you'll probably just have to iterate over both lists using a while loop, keeping track of the indices for each array. Stop looping when you reach the end of one of the ranges.</p> <p>You could also use the zip and map functions to do something like</p> <pre><code>&gt;&gt;&gt; b = [2, 4, 6, 8] &gt;&gt;&gt; c = [1, 3, 5, 7, 9] &gt;&gt;&gt; map( lambda x : x[0]*x[1], zip(b, c[1:])) &gt;&gt;&gt; [6, 20, 42, 72] </code></pre> <p>but I am not sure if that's something you "can" do.</p>
python|arrays|numpy
0
1,906,344
26,615,113
Python fabric How to send password and yes/no for user prompt
<p>I have created a fabfile with multiple hosts.</p> <p>I am automating my experiment. When I run the command "sudo adduser --ingroup hadoop hduser" it will ask for the following.</p> <ol> <li>New unix password</li> <li>confirm Password.</li> <li>Full Name</li> <li>Room,Ph,etc</li> <li>is this information Correct? Y/N</li> </ol> <p>I would like to pass all this information as part of the script without prompting the user. How can I do this?</p> <p>Thanks</p> <p>Navaz</p>
<p>Another option is to use <code>fexpect</code>, an extension of fabric that enables you to respond to prompts:</p> <pre><code>from ilogue.fexpect import expect, expecting, run prompts = [] prompts += expect('What is your full name?','John Doe') prompts += expect('is this information Correct? Y/N','Y') with expecting(prompts): sudo('adduser --ingroup hadoop hduser') </code></pre>
python|shell|hadoop|fabric
1
1,906,345
26,619,381
Clean nested lists in Python
<p>Say I have a list in Python:</p> <pre><code>l = [1, 2, 3, [a, [[9, 10, 11]], c, d], 5, [e, f, [6, 7, [8]]]] </code></pre> <p>I would like to clean all the nested lists so that, if they are of length 1 (just one item), the are "pulled up" out of the list, such that the revised list would look like:</p> <pre><code>l = [1, 2, 3, [a, [9, 10, 11], c, d], 5, [e, f, [6, 7, 8]]] </code></pre> <p>Naturally I can just check if <code>len == 1</code> and replace that index with its contents, but... Is there a built-in way to do this?</p>
<p>You could use a recursive function:</p> <pre><code>def expand_singles(item): if isinstance(item, list): if len(item) == 1: return expand_singles(item[0]) return [expand_singles(i) for i in item] return item </code></pre> <p>Demo:</p> <pre><code>&gt;&gt;&gt; def expand_singles(item): ... if isinstance(item, list): ... if len(item) == 1: ... return expand_singles(item[0]) ... return [expand_singles(i) for i in item] ... return item ... &gt;&gt;&gt; l = [1, 2, 3, ['a', [[9, 10, 11]], 'c', 'd'], 5, ['e', 'f', [6, 7, [8]]]] &gt;&gt;&gt; expand_singles(l) [1, 2, 3, ['a', [9, 10, 11], 'c', 'd'], 5, ['e', 'f', [6, 7, 8]]] </code></pre>
python|list|data-cleaning
2
1,906,346
64,843,239
Python - Weighted Average from a single list
<p>I have a single list as this:</p> <pre><code>prices = [10, 10, 10, 40, 40, 50] </code></pre> <p>and I would like to calculate the weighted average from this list, so the weight of the number 10 would be 3, weight of 40 would be 2 and weight of 50 would be 1. How do I do this without having 2 separated lists?</p>
<p>You need some form of lookup for your weights; either a list of the same length or a dictionary seems prudent.</p> <p>Compute it using a dictionary of weights:</p> <pre><code>l = [10, 10, 10, 40, 40, 50] w = {10:3,40:2,50:1} avg_w = sum(i*w[i] for i in l)/len(l) print(avg_w) </code></pre> <p>or use a weight list:</p> <pre><code>w2= [3,3,3,2,2,1] avg_w = sum(v*w2[i] for i,v in enumerate(l))/len(l) print(avg_w) </code></pre> <p>Output:</p> <pre><code>50 </code></pre>
python|numpy
0
1,906,347
61,266,211
Simplify nested loop python
<p>I have two big lists called a and b. Both of them have a size of 2000 values. Each value is DataFrame with 35136 values. </p> <pre><code>a = [Dataframe, Dataframe....] --&gt; Size:2000 a[0] = [0, 0, 0, 2, 0, 0, 3, 0....] --&gt;Name: colA, Size:35136 . . a[8] = [] . . b = [Dataframe, Dataframe....] b[0] = [11, 0, 0, 0, 50, 0, 0, 11.....] --&gt;Name: colB, Size:35136 </code></pre> <p>I need to change the DataFrame a iterating in both lists and in each DataFrame. How to do it faster?</p> <pre><code>for j in range(0, 2000): for i in range(0, 35136): if len(a[j]) == 0: b[j] = [] else: if b[j]['colA'][i] != 0: tmp = b[j]['colA'][i] if (b[j]['colA'][i] == 0) &amp; (a[j]['colB'][i] == 0): b[j]['colA'][i] = tmp </code></pre> <p>Desired output is for this input:</p> <pre><code>b[0] = [11, 11, 11, 0, 50, 50, 0, 11.....] --&gt;Name: colB, Size:35136 </code></pre> <p>Thank you. </p>
<p>If I understand your example, I think this should be equivalent (and significantly faster):</p> <pre><code>for idx, (df_a, df_b) in enumerate(zip(a, b)): if len(df_a) == 0: b[idx] = [] else: df_b['colA'].mask(cond=(df_b['colA'] == 0) &amp; (df_a['colB'] == 0), other=df_b['colA'].where(df_b['colA'] != 0).ffill(), inplace=True) </code></pre> <p>This relies on all of the DataFrames having the same index (not just the same size), which will be the default unless you've set your own indices.</p>
python|pandas|performance
0
1,906,348
57,938,424
can't get a simple counter function to work
<p>I have looked at this piece of code over and over again and can't figure out what I am doing wrong!</p> <pre><code>word='banana' def counting(x): count=0 for i in word: if i=='a': count=count+1 return count print (counting(word)) </code></pre> <p>The result should be <code>3</code> (<code>3</code> instances of <code>'a'</code> in <code>'banana'</code>). But the actual output is <code>1</code>. How should I fix my code?</p>
<p>Your return statement appears to be indented so as to be within the if statement within the loop. Make sure you are not returning the count until the loop fully completes.</p> <pre class="lang-py prettyprint-override"><code>word='banana' def counting(x): count=0 for i in x: if i=='a': count=count+1 return count print (counting(word)) </code></pre>
python|function
2
1,906,349
56,393,293
Azure function Python - error "binding type(s) 'blobTrigger' are not registered"
<p>I am trying to run an Azure function locally on my Mac and getting the following error: <code>The binding type(s) 'blobTrigger' are not registered. Please ensure the type is correct and the binding extension is installed.</code></p> <p>I'm working with <code>Python 3.6.8</code> and have installed <code>azure-functions-core-tools</code> using homebrew (<code>brew tap azure/functions; brew install azure-functions-core-tools</code>). </p> <p>Setup my <code>local.settings.json</code> file with the expected configuration, so function should be listening to the correct storage container hosted in azure.</p> <p>Im certain I have not changed any code or configuration files since it was working last week.</p> <p>host.json file contains:</p> <pre><code>{ "version": "2.0", "extensionBundle": { "id": "Microsoft.Azure.Functions.ExtensionBundle", "version": "[1.*, 2.0.0)" } } </code></pre> <p>function.json file contains:</p> <pre><code>{ "scriptFile": "__init__.py", "bindings": [ { "name": "xmlblob", "type": "blobTrigger", "direction": "in", "path": "&lt;directory&gt;/{name}", "connection": "AzureStorageAccountConnectionString" } ] } </code></pre> <p>requirements.txt file contains:</p> <pre><code>azure-cosmos==3.1.0 azure-functions-worker==1.0.0b6 azure-storage==0.36.0 azure-storage-blob==2.0.1 xmljson==0.2.0 xmlschema==1.0.11 </code></pre> <p>Then I run the following commands in my terminal:</p> <pre><code>1) pip install -r requirements.txt 2) source .env/bin/activate 3) func host start </code></pre> <p>I then get the following error:</p> <pre><code>&lt;Application name&gt;: The binding type(s) 'blobTrigger' are not registered. Please ensure the type is correct and the binding extension is installed. </code></pre>
<p>You have done everything correctly by the looks of it, but you need to have the dotnet core framework and runtime installed locally in order to execute the trigger.</p> <p>For me on Ubuntu, I followed <a href="https://docs.microsoft.com/en-us/dotnet/core/install/linux-package-manager-ubuntu-1804" rel="nofollow noreferrer">this guide</a>. Once installed I was able to trigger a blob function locally.</p> <p>For Mac, I would <a href="https://dotnet.microsoft.com/download" rel="nofollow noreferrer">take a look here</a> about installing dotnet core.</p>
python|azure-functions|azure-blob-storage
1
1,906,350
18,368,086
FInd a US street address in text (preferably using Python regex)
<p>Disclaimer: I read very carefully this thread: <a href="https://stackoverflow.com/questions/4542941/street-address-search-in-a-string-python-or-ruby">Street Address search in a string - Python or Ruby</a> and many other resources.</p> <p>Nothing works for me so far.</p> <p>In some more details here is what I am looking for is:</p> <p>The rules are relaxed and I definitely am not asking for a perfect code that covers all cases; just a few simple basic ones with assumptions that the address should be in the format:</p> <blockquote> <p>a) Street number (1...N digits);</p> <p>b) Street name : one or more words capitalized;</p> <p>b-2) (optional) would be best if it could be prefixed with abbrev. "S.", "N.", "E.", "W."</p> <p>c) (optional) unit/apartment/etc can be any (incl. empty) number of arbitrary characters</p> <p>d) Street "type": one of ("st.", "ave.", "way");</p> <p>e) City name : 1 or more Capitalized words;</p> <p>f) (optional) state abbreviation (2 letters)</p> <p>g) (optional) zip which is any 5 digits.</p> </blockquote> <p>None of the above needs to be a valid thing (e.g. an existing city or zip).</p> <p>I am trying expressions like these so far:</p> <blockquote> <blockquote> <blockquote> <p>pat = re.compile(r'\d{1,4}( \w+){1,5}, (.*), ( \w+){1,5}, (AZ|CA|CO|NH), [0-9]{5}(-[0-9]{4})?', re.IGNORECASE)</p> </blockquote> </blockquote> </blockquote> <pre><code>&gt;&gt;&gt; pat.search("123 East Virginia avenue, unit 123, San Ramondo, CA, 94444") </code></pre> <p>Don't work, and for me it's not easy to understand why. Specifically: how do I separate in my pattern a group of any words from one of specific words that should follow, like state abbrev. or street "type ("st., ave.)?</p> <p>Anyhow: here is an example of what I am hoping to get: Given def ex_addr(text): # does the re magic # returns 1st address (all addresses?) or None if nothing found</p> <pre><code>for t in [ 'The meeting will be held at 22 West Westin st., South Carolina, 12345 on Nov.-18', 'The meeting will be held at 22 West Westin street, SC, 12345 on Nov.-18', 'Hi there,\n How about meeting tomorr. @10am-sh in Chadds @ 123 S. Vancouver ave. in Ottawa? \nThanks!!!', 'Hi there,\n How about meeting tomorr. @10am-sh in Chadds @ 123 S. Vancouver avenue in Ottawa? \nThanks!!!', 'This was written in 1999 in Montreal', "Cool cafe at 420 Funny Lane, Cupertino CA is way too cool", "We're at a party at 12321 Mammoth Lane, Lexington MA 77777; Come have a beer!" ] print ex_addr(t) </code></pre> <p>I would like to get:</p> <blockquote> <pre><code>'22 West Westin st., South Carolina, 12345' '22 West Westin street, SC, 12345' '123 S. Vancouver ave. in Ottawa' '123 S. Vancouver avenue in Ottawa' None # for 'This was written in 1999 in Montreal', "420 Funny Lane, Cupertino CA", "12321 Mammoth Lane, Lexington MA 77777" </code></pre> </blockquote> <p>Could you please help?</p>
<p>I just ran across this in GitHub as I am having a similar problem. Appears to work and be more robust than your current solution.</p> <p><a href="https://github.com/madisonmay/CommonRegex" rel="noreferrer">https://github.com/madisonmay/CommonRegex</a></p> <p>Looking at the code, the regex for street address accounts for many more scenarios. '\d{1,4} [\w\s]{1,20}(?:street|st|avenue|ave|road|rd|highway|hwy|square|sq|trail|trl|drive|dr|court|ct|parkway|pkwy|circle|cir|boulevard|blvd)\W?(?=\s|$)'</p>
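<p>Usage looks roughly like this (a short sketch; the attribute name is taken from the CommonRegex README, so double-check it against the version you install):</p> <pre><code>from commonregex import CommonRegex

parsed = CommonRegex("We're at a party at 12321 Mammoth Lane, Lexington MA 77777; Come have a beer!")
print(parsed.street_addresses)   # e.g. ['12321 Mammoth Lane']
</code></pre>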
python|regex|postal-code
10
1,906,351
18,582,236
Python; How to turn each line of a text file into a separate string element in a list:
<p>I am very new to python. What I would like to do is to enter a list of items separated by line, like so:</p> <blockquote> <p><strong>item number one</strong></p> <p><strong>item number two</strong></p> <p><strong>item number three</strong></p> </blockquote> <p>and have them added to a list like:</p> <blockquote> <p><strong>['item number one', 'item number two', 'item number three']</strong></p> </blockquote> <p>Thanks!</p>
<p>I'm assuming, from the title of your post, that you've been given a text file, which I'll call <code>file.txt</code>.</p> <pre><code>with open('file.txt') as rd: items = rd.readlines() </code></pre> <p><code>readlines()</code> automatically breaks up the file by newline characters and returns the contents of a file as a list of strings, one string for each line. To get rid of the newlines, use the <code>strip()</code> function. For example, you can replace <code>items=rd.readlines()</code> with <code>items = [x.strip() for x in rd.readlines()]</code>.</p>
python|list
4
1,906,352
71,569,859
How to do while loop until my output has reached
<p>I have a password verifier, and I have made it so that when there are fewer than 6 digits a variable called &quot;invalid&quot; is set to 1, and if there are more than 16 digits &quot;invalid&quot; is also set to 1. How do I make it run the code again when the variable is 1, and stop when it is 0? Is there a way to start the code again when the invalid variable is 1?</p>
<p>Initialize a variable to check whether the conditions are met. The while loop will run until the password is the right size. (Note: don't store the result in a variable called <code>input</code>, because that shadows the built-in <code>input()</code> function and the second loop iteration would fail.)</p> <pre><code>check = 1 while check == 1: password = input('Give your password') if 6 &lt;= len(password) &lt;= 16: check = 0 </code></pre>
python|loops
2
1,906,353
55,528,627
Sending Json data to server, from a file, periodically using python
<p>Is there any $.get() equivalent in python2.7 through which I could send data, from a file (upon being modified), to a server?<br> I have used cronjobs to perform these kinds of things, but here I do not want to use it.</p>
<p>First you need to read a string from your file:</p> <pre><code>with open("your file", "r") as f: data = f.read() </code></pre> <p>Then convert the string <code>data</code> to a Python object with the <code>json</code> module (use a name other than <code>json</code> for the result so the module isn't shadowed):</p> <pre><code>import json payload = json.loads(data) </code></pre> <p>Then send it to your server with the <code>requests</code> module:</p> <pre><code>import requests resp = requests.get(url="your url", params=payload) #you can use post instead of get </code></pre>
python|python-2.7
1
1,906,354
42,523,903
Django can't find my models in Linux but can in Windows
<p>I am somewhat still unfamiliar with Django so please excuse the possible elementary question. </p> <p>am trying to deploy a Django project, but I am having some problems with it. </p> <p>The project works exactly as it should on Windows, however I am getting this error on my Ubuntu VPS:</p> <pre><code>Unhandled exception in thread started by &lt;function wrapper at [...]&gt; Traceback (most recent call last): [...] Stack trace from system/django files File "/home/django/[Project Name]/apps/sale/admin.py", line 8, in &lt;module&gt; from .forms import SaleRequestFormAdmin File "/home/django/[Project Name]/apps/sale/forms.py", line 6, in &lt;module&gt; from apps.listing.models import Listing ImportError: No module named listing.models </code></pre> <p>My structure for the project is like this (with irrelevant components removed or shortened):</p> <pre><code>. ├── apps │ ├── [...] │ ├── listing │ │ ├── admin.py │ │ ├── api.py │ │ ├── apps.py │ │ ├── forms.py │ │ ├── __init__.py │ │ ├── migrations │ │ │ ├── [...] │ │ ├── models.py │ │ ├── tests.py │ │ └── views.py │ ├── sale │ │ ├── admin.py │ │ ├── api.py │ │ ├── apps.py │ │ ├── forms.py │ │ ├── __init__.py │ │ ├── migrations │ │ │ ├── [...] │ │ ├── models.py │ │ ├── tests.py │ │ └── views.py ├── __init__.py ├── [Project Name] │ ├── __init__.py │ ├── settings.py │ ├── urls.py │ └── wsgi.py ├── manage.py </code></pre> <p>The <code>apps</code> folder contains all of my Django apps, and is in the same folder as <code>manage.py</code> and my project folder.</p> <p>You can see that there <em>is</em> a file called <code>models.py</code> in the <code>listing</code> app.</p> <p>Does anyone have any idea why this does not work on Linux? I couldn't find any relevant information on Google, and I really don't know why it is failing. </p> <p>Any help would be greatly appreciated.</p>
<p>I think the problem is that the <code>apps</code> path is not on your <code>sys</code> path.</p> <p>Try adding the following to your settings.py file:</p> <pre><code>import os import sys PROJECT_ROOT = os.path.dirname(__file__) sys.path.insert(0, os.path.join(PROJECT_ROOT, 'apps')) </code></pre> <p>Let me know if it helps!</p>
python|linux|django
-1
1,906,355
53,911,218
Sharing a function with different modules
<ol> <li><p>If I have a function like this </p> <pre><code>def get_time_format(value, time): for item in value: time.append(datetime.strptime(str(item),'%H').strftime('%I%P').lstrip('0').upper()) return time </code></pre> <p>I am using the above function across different modules. Instead of repeating the code, can I put it in a separate file and call it from whichever module requires it?</p></li> <li><p>The second part is related to the first one: if I have a few lines of code which are common, can I create a file which has all the common code lines?</p> <pre><code>hierarchy:- hello.py hello.py/ds/a.py hello.py/ds/b.py hello.py/ds/c.py hello.py/ds/d.py </code></pre> <p>Above, <code>hello.py</code> is my main file and <code>a,b,c and d</code> are modules.</p></li> </ol> <p>Can someone tell me where I can create a file to share common code between modules, as asked in 1 and 2? I am new to Python and using modules for the first time.</p>
<p>Save the function in an <code>example.py</code> file in any directory, for example <code>/home/ubuntu/Desktop</code>. Now use the following snippet in any code to use the function:</p> <pre><code>import sys sys.path.insert(0, '/home/ubuntu/Desktop') import example as eg eg.get_time_format(value, time) </code></pre> <p>You can define multiple functions like get_time_format(value,time), foo(x,y), bar(x,y) in <code>example.py</code>. To use them, just type <code>eg.foo(x,y)</code> <code>eg.bar(x,y)</code></p>
python|python-3.x|module
0
1,906,356
45,291,143
Preselected Radio Button Not Functioning wxPython
<p>I'm working to develop an application where the user enters information into text boxes that are generated when a radio button is selected, and the information is stored in a CSV file. When the application is opened, the first radio button is pre-selected. While this is not an issue in itself, none of the text boxes appear. If one of the other radio buttons is selected and then the first one is selected again, the text boxes appear with no problem.</p> <p>Here is the code that generates the radio buttons:</p> <pre><code> self.radioStaticBox = wx.StaticBox(self.panel,-1,"Material Type: ") self.radioStaticBoxSizer = wx.StaticBoxSizer(self.radioStaticBox, wx.VERTICAL) self.radioBox = sc.SizedPanel(self.panel, -1) self.radioBox.SetSizerType("horizontal") self.isoRadioButton = wx.RadioButton(self.radioBox,-1, "Isotropic") self.orthoRadioButton = wx.RadioButton(self.radioBox,-1, "Orthotropic") self.orthotRadioButton = wx.RadioButton(self.radioBox,-1, "Orthotropic (with thickness)") self.isoRadioButton.SetValue(True) self.radioBox.Bind(wx.EVT_RADIOBUTTON, self.set_type) </code></pre> <p>And the function that the radio buttons are being bound to:</p> <pre><code>def generate_params(self, event): self.matStaticBoxSizer.Clear(True) if self.matType == "Iso": idSb = wx.StaticBox(self.panel, 0, "Name:") idSbs = wx.StaticBoxSizer(idSb, wx.HORIZONTAL) self.idText = wx.TextCtrl(self.panel) idSbs.Add(self.idText, 0, wx.ALL|wx.LEFT, self.margin) .... </code></pre> <p>Thanks for the help!</p>
<p>In short, you appear to be defining part of the display in the initial section of your code and another part inside the function <code>generate_params</code>; this will, by definition (excuse the pun), ensure that you do not see what is defined there until that function executes.<br> Define all of the display items together in the initial section and then populate them within your functions, as appropriate to each function.<br> In other words, define the screen in one place, at the beginning, and populate values as and when needed.</p>
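<p>A rough sketch of that idea (the names here are illustrative rather than taken from your code): create the controls once at start-up, hide the ones that do not apply yet, and only show or hide them in the radio-button handler:</p> <pre><code># in __init__, build everything up front
self.idText = wx.TextCtrl(self.panel)
self.idText.Hide()            # hidden until the matching material type is chosen

# in the radio-button handler
def set_type(self, event):
    if self.isoRadioButton.GetValue():
        self.idText.Show()
    else:
        self.idText.Hide()
    self.panel.Layout()       # re-run the sizer so the change becomes visible
</code></pre>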
python|inheritance|wxpython
0
1,906,357
45,581,154
How to retrieve subclasses of the class type in python3?
<p>In python3, <code>object</code> is a base for all classes. </p> <pre><code>&gt;&gt;&gt; object &lt;class 'object'&gt; &gt;&gt;&gt; object.mro() [&lt;class 'object'&gt;] # it makes sense. </code></pre> <p>However:</p> <pre><code>&gt;&gt;&gt; object.__class__ &lt;class 'type'&gt; &gt;&gt;&gt; object.__subclasses__() [&lt;class 'type'&gt;, ....] </code></pre> <p>class 'type' is object's superclass and subclass.</p> <blockquote> <p>definition of <code>__class__</code> in the official documentation, python 3.6.2</p> <ul> <li><p><code>instance.__class__</code></p> <p>The class to which a class instance belongs.</p></li> </ul> </blockquote> <p>Trying 'type':</p> <pre><code>&gt;&gt;&gt; type &lt;class 'type'&gt; &gt;&gt;&gt; type.__class__ &lt;class 'type'&gt; &gt;&gt;&gt; type.__subclasses__ &lt;method '__subclasses__' of 'type' objects&gt; </code></pre> <p>An error occurs then:</p> <pre><code>&gt;&gt;&gt; type.__subclasses__() Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; TypeError: descriptor '__subclasses__' of 'type' object needs an argument </code></pre> <p>How to retrieve the subclasses of class type?</p>
<p>There isn't sufficient documentation for <code>__subclasses__</code> around. However, it appears calling that dunder from <code>type</code> requires an <strong>instance</strong> of the '<em>type</em>' object:</p> <pre><code>&gt;&gt;&gt; type.__subclasses__(type) # 'type' is an instance of itself [&lt;class 'abc.ABCMeta'&gt;, &lt;class 'enum.EnumMeta'&gt;, &lt;class '__main__.a'&gt;] </code></pre> <p>In fact, <code>type.__subclasses__(obj)</code> can be used instead of calling the method directly from <code>obj</code> since every object is an instance of <code>type</code>:</p> <pre><code>&gt;&gt;&gt; int.__subclasses__() [&lt;class 'bool'&gt;, &lt;enum 'IntEnum'&gt;, ...] &gt;&gt;&gt; type.__subclasses__(int) [&lt;class 'bool'&gt;, &lt;enum 'IntEnum'&gt;, ...] </code></pre> <p>And the behaviour is <em>also</em> consistent with object <code>object</code>, so that:</p> <pre><code>type.__subclasses__(object) == object.__subclasses__() </code></pre>
python
3
1,906,358
14,732,950
Blender python scripting
<p>I want to use <code>blender</code> to programmatically move the camera around the scene while remaining focused on a particular location. What's the easiest way to make the camera look at an object without having to specify <code>rx</code>,<code>ry</code>,and <code>rz</code>. I'm looking for the Python function to call and not do it through the <code>blender</code> GUI. I am using <code>blender 2.65</code>. </p>
<p>The easiest way to make the camera look at an object is the <code>Edit Object</code> <strong>Actuator</strong>. There you can replace <code>Add Object</code> to <code>Track To</code> and then you just need to specify an object.</p> <p>Perhaps you can change it in the game with a <strong>python script</strong>, using <code>EditObjectActuator.track_object</code></p>
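<p>If the goal is just to aim the camera in a normal (non-game-engine) scene, another common option is to add a Track To constraint through <code>bpy</code>. A rough sketch, assuming the scene contains objects named "Camera" and "Target" (adjust the names to your scene):</p> <pre><code>import bpy

# hypothetical object names
cam = bpy.data.objects["Camera"]
target = bpy.data.objects["Target"]

# make the camera keep pointing at the target while it moves
track = cam.constraints.new(type='TRACK_TO')
track.target = target
track.track_axis = 'TRACK_NEGATIVE_Z'   # cameras look down their local -Z axis
track.up_axis = 'UP_Y'

# now the camera can be positioned freely without touching rx, ry, rz
cam.location = (5.0, -5.0, 3.0)
</code></pre>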
python|camera|blender
0
1,906,359
41,521,431
Python: Flatten a list of Objects
<p>I have a list of objects, and each object contains a list of objects of another type. I want to extract those inner lists and combine them into one new list of the other object type.</p> <pre><code>List1:[Obj1, Obj2, Obj3] Obj1.myList = [O1, O2, O3] Obj2.myList = [O4, O5, O6] Obj3.myList = [O7, O8, O9] </code></pre> <p>I need this:</p> <p><code>L = [O1, O2, O3, O4, ...., O9];</code></p> <p>I tried <code>extend()</code> and <code>reduce()</code> but they didn't work:</p> <pre><code>bigList = reduce(lambda acc, slice: acc.extend(slice.coresetPoints.points), self.stack, []) </code></pre> <p>P.S.</p> <p>Searching for "python flatten a list of lists" didn't help, since what I have is a list of objects that each contain a list.</p>
<p>using <code>itertools.chain</code> (or even better in that case <code>itertools.chain.from_iterable</code> as niemmi noted) which avoids creating temporary lists and using <code>extend</code></p> <pre><code>import itertools print(list(itertools.chain(*(x.myList for x in List1)))) </code></pre> <p>or (much clearer and slightly faster):</p> <pre><code>print(list(itertools.chain.from_iterable(x.myList for x in List1))) </code></pre> <p>small reproduceable test:</p> <pre><code>class O: def __init__(self): pass Obj1,Obj2,Obj3 = [O() for _ in range(3)] List1 = [Obj1, Obj2, Obj3] Obj1.myList = [1, 2, 3] Obj2.myList = [4, 5, 6] Obj3.myList = [7, 8, 9] import itertools print(list(itertools.chain.from_iterable(x.myList for x in List1))) </code></pre> <p>result:</p> <pre><code>[1, 2, 3, 4, 5, 6, 7, 8, 9] </code></pre> <p>(all recipes to flatten a list of lists: <a href="https://stackoverflow.com/questions/952914/how-to-make-a-flat-list-out-of-list-of-lists">How to make a flat list out of list of lists?</a>)</p>
python|python-2.7|list|flatmap
6
1,906,360
41,244,322
How to color voronoi according to a color scale ? And the area of each cell
<p>Is it possible to color the <a href="https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.spatial.Voronoi.html" rel="noreferrer"><code>scipy.spatial.Voronoi</code></a> diagram? <a href="https://stackoverflow.com/questions/20515554/colorize-voronoi-diagram">I know it is.</a></p> <p>But now my goal is to color each cell according to a color scale to represent a physical quantity. </p> <p>As in the image below (PRL 107, 155704 (2011)): </p> <p><a href="https://i.stack.imgur.com/iE89n.png" rel="noreferrer"><img src="https://i.stack.imgur.com/iE89n.png" alt="enter image description here"></a></p> <p>I would also like to know whether it is possible to calculate the area of each cell, since that is a quantity I also need.</p>
<h2>Color scale:</h2> <p>Actually the <a href="/q/20515554">link</a> you provide gives the code needed to colorize the Voronoi diagram. In order to assign each cell a color representing a physical quantity, you need to map the values of this physical quantity to a normalized colormap using the method shown in <a href="/q/28752727">Map values to colors in matplotlib</a>.</p> <p>For example, if I want to assign each cell a color corresponding to a quantity 'speed': </p> <pre><code>import numpy as np import matplotlib as mpl import matplotlib.cm as cm import matplotlib.pyplot as plt from scipy.spatial import Voronoi, voronoi_plot_2d # generate data/speed values points = np.random.uniform(size=[50, 2]) speed = np.random.uniform(low=0.0, high=5.0, size=50) # generate Voronoi tessellation vor = Voronoi(points) # find min/max values for normalization minima = min(speed) maxima = max(speed) # normalize chosen colormap norm = mpl.colors.Normalize(vmin=minima, vmax=maxima, clip=True) mapper = cm.ScalarMappable(norm=norm, cmap=cm.Blues_r) # plot Voronoi diagram, and fill finite regions with color mapped from speed value voronoi_plot_2d(vor, show_points=True, show_vertices=False, s=1) for r in range(len(vor.point_region)): region = vor.regions[vor.point_region[r]] if not -1 in region: polygon = [vor.vertices[i] for i in region] plt.fill(*zip(*polygon), color=mapper.to_rgba(speed[r])) plt.show() </code></pre> <h3>Sample output:</h3> <p><img src="https://i.stack.imgur.com/DQI89.png" alt="(Voronoi diagram)">)</p> <h2>Area of cells:</h2> <p><code>scipy.spatial.Voronoi</code> allows you to access the vertices of each cell, which you can order and apply the <a href="https://en.wikipedia.org/wiki/Shoelace_formula" rel="noreferrer">shoelace formula</a>. I haven't tested the outputs enough to know if the vertices given by the Voronoi algorithm come already ordered. But if not, you can use the dot product to get the angles between the vector to each vertex and some reference vector, and then order the vertices using these angles:</p> <pre><code># ordering vertices x_plus = np.array([1, 0]) # unit vector in i direction to measure angles from theta = np.zeros(len(vertices)) for v_i in range(len(vertices)): ri = vertices[v_i] if ri[1]-self.r[1] &gt;= 0: # angle from 0 to pi cosine = np.dot(ri-self.r, x_plus)/np.linalg.norm(ri-self.r) theta[v_i] = np.arccos(cosine) else: # angle from pi to 2pi cosine = np.dot(ri-self.r, x_plus)/np.linalg.norm(ri-self.r) theta[v_i] = 2*np.pi - np.arccos(cosine) order = np.argsort(theta) # returns array of indices that give sorted order of theta vertices_ordered = np.zeros(vertices.shape) for o_i in range(len(order)): vertices_ordered[o_i] = vertices[order[o_i]] # compute the area of cell using ordered vertices (shoelace formula) partial_sum = 0 for i in range(len(vertices_ordered)-1): partial_sum += vertices_ordered[i,0]*vertices_ordered[i+1,1] - vertices_ordered[i+1,0]*vertices_ordered[i,1] partial_sum += vertices_ordered[-1,0]*vertices_ordered[0,1] - vertices_ordered[0,0]*vertices_ordered[-1,1] area = 0.5 * abs(partial_sum) </code></pre>
python|numpy|physics|voronoi
15
1,906,361
41,560,784
Load Firefox with python selenium script, but it will open the skype support page
<p>I recorded a script by selenium IDE in Firefox, and exported to python webdriver. but when I run the code, it will open the skype support page at the same time. Don't know why.</p> <p>My firefox is the latest(50.1.0), python 2.7.12, selenium 3.0.2</p> <pre><code># -*- coding: utf-8 -*- from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys from selenium.webdriver.support.ui import Select from selenium.common.exceptions import NoSuchElementException from selenium.common.exceptions import NoAlertPresentException import unittest, time, re class Test2(unittest.TestCase): def setUp(self): self.driver = webdriver.Firefox() self.driver.implicitly_wait(30) self.base_url = "http://www.cic.gc.ca/english/visit/" self.verificationErrors = [] self.accept_next_alert = True def test_2(self): driver = self.driver driver.get(self.base_url) driver.find_element_by_id("kw").click() driver.find_element_by_id("kw").clear() driver.find_element_by_id("kw").send_keys("tt") driver.find_element_by_id("su").click() def is_element_present(self, how, what): try: self.driver.find_element(by=how, value=what) except NoSuchElementException as e: return False return True def is_alert_present(self): try: self.driver.switch_to_alert() except NoAlertPresentException as e: return False return True def close_alert_and_get_its_text(self): try: alert = self.driver.switch_to_alert() alert_text = alert.text if self.accept_next_alert: alert.accept() else: alert.dismiss() return alert_text finally: self.accept_next_alert = True # def tearDown(self): # self.driver.quit() # self.assertEqual([], self.verificationErrors) if __name__ == "__main__": unittest.main() </code></pre>
<p>I am not a python/selenium expert, but I think I have experienced the same problem. I have just recently installed selenium in my Enthought Canopy application.The following simple code produces a Skype web-browser window:</p> <pre><code>#file: selenium2.py . from selenium import webdriver #browser = webdriver.Chrome() browser = webdriver.Firefox() browser.get('http://seleniumhq.org/') # always gets skype ??? """ This url window is the response: https://support.skype.com/en/faq/FA34612/what-is-the-skype-extension """ </code></pre>
python|selenium|firefox
0
1,906,362
6,443,869
Debugging Python pygame program
<pre><code>from livewires import games, color #Creating and validating the pygame screen. scrwidth = 640 scrheight = 480 fps = 50 myscr = games.Screen(scrwidth, scrheight) #Loading an image into memory to create an image object wall_image = games.load_image("wall.jpg", transparent = False) myscr.set_background (wall_image) #Printing Arbitary Score texty = games.Text(value = "Score: 2048321", size = 70, color = color.black, x = 400, y = 30) myscr.add(texty) myscr.mainloop() </code></pre> <p>For some reason, I am unable to print the score string in the position that I've assigned it to. </p> <p>When I did the same thing without assigning variables to the given objects, I was able to do this successfully, but now I'm not that I've assigned it to a variable. </p> <p>Any input would be appreciated. Thanks in advance.</p> <p>EDIT: As requested, the working code.</p> <pre><code>from livewires import games, color games.init(screen_width = 640, screen_height = 480, fps = 50) wall_image = games.load_image("wall.jpg", transparent = False) games.screen.background = wall_image score = games.Text(value = 1756521, size = 60, color = color.black, x = 550, y = 30) games.screen.add(score) games.screen.mainloop() </code></pre> <p>Here we are! Working code:</p> <pre><code>from livewires import games, color #Creating and validating the pygame screen. scrwidth = 640 scrheight = 480 fpsy = 50 games.init(screen_width = scrwidth, screen_height =scrheight, fps = fpsy) myscr = games.screen #Loading an image into memory to create an image object wall_image = games.load_image("wall.jpg", transparent = False) myscr.background = wall_image #Printing Arbitary Score texty = games.Text(value = "Score: 2048321", size = 70, color = color.black, x = 400, y = 30) myscr.add(texty) myscr.mainloop() </code></pre>
<p>I think I know what's happening. If I am not wrong, you could be assuming that</p> <pre><code>games.init(screen_width = 640, screen_height = 480, fps = 50)</code></pre> <p>is the same thing as</p> <pre><code>scrwidth = 640 scrheight = 480 fps = 50 games.init(scrwidth, scrheight) </code></pre> <p>But that may not be the case: the arguments in <code>Screen</code> look for name=value pairs in no particular order, so just value pairs may not work. You could, however, do this:</p> <pre><code>scrwidth = 640 scrheight = 480 fps = 50 games.init(screen_width=scrwidth, screen_height= scrheight, fps=fps) myscr = games.screen </code></pre> <p>I am guessing that since your size was not set properly, and x, y in Text are absolute coordinates, your text may be messed up.</p>
python|debugging|pygame
2
1,906,363
57,026,136
How do i exclude a certain group from a csv file?
<p>I have grouped the data together but I do not know how to exclude certain groups of data from a CSV file.</p> <p>Here is my code: <code>data2 = df.groupby(['cor','moa']).sum()</code></p> <p>In this CSV file, cor means the country that visitors stay at, so this column consists of groups like total and asean which I would like to exclude. However, I do not know how to go about doing so.</p>
<p>IIUC you can do:</p> <pre><code>data2 = df[df['cor'] != 'asean'].groupby(['cor','moa']).sum() </code></pre>
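<p>If you need to drop several groups at once (e.g. both 'total' and 'asean' mentioned in the question), the same idea works with <code>isin</code>:</p> <pre><code>data2 = df[~df['cor'].isin(['total', 'asean'])].groupby(['cor','moa']).sum()
</code></pre>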
python|pandas
0
1,906,364
53,993,729
Can I compare two sets of tuples based on the elements of the tuples without nested looping?
<p>I have the following code:</p> <pre><code>ChangedLinks = set(NewLinkData) - set(OldLinkData) ReplaceQueue = [] LinkUpdateTokenID = 0 for ChangedLink in ChangedLinks: for OldLink in OldLinkData: if ChangedLink[0] is OldLink[0]: ReplaceStrings = (OldLink[1], "&lt;&lt;LINK UPDATE TOKEN " + str(LinkUpdateTokenID) + "&gt;&gt;", ChangedLink[1]) ReplaceQueue.append(ReplaceStrings) LinkUpdateTokenID += 1 </code></pre> <p><code>ChangedLinks</code> is a set of tuples, and <code>OldLinkData</code> is a list of tuples.</p> <p>There is a noticeable dip in the performance of the method this is in as the lengths of <code>ChangedLinks</code> and <code>OldLinkData</code> increase, because of course there is; that's just sheer math! It goes from effectively instantaneous from the user perspective to taking a noticeable amount of time (though less than a second, at least on my machine).</p> <p>I need to add a new element to the <code>ReplaceQueue</code> list only when I can match the first element of a tuple in <code>OldLinkData</code> as the same object as the first element of a tuple in <code>ChangedLinks</code>. (These tuple elements are unique within their respective lists, as in, <code>OldLinkData[0][0]</code> is unique among all other members of <code>OldLinkData</code>, and the same for <code>OldLinkData[1][0]</code>, and so on.) The only way I can think of to accomplish this is to loop through each set/list as in the code above and compare the tuple elements.</p> <p>Is there a more efficient way to do this? Ideally I'd like some way to quickly construct a list of only the members of <code>OldLinkData</code> which share their first element with one of the members of <code>ChangedLinks</code>, in the same order as <code>ChangedLinks</code>, so that I can just compare the lists side-by-side. But I have no idea how to begin solving this issue.</p> <p>Edit: Some examples of expected input and output:</p> <pre><code>OldLinkData: [(&lt;Page.Page object at 0x035AF070&gt;, ']([0])'), (&lt;Page.Page object at 0x043FE4F0&gt;, ']([0, 0])'), (&lt;Page.Page object at 0x043FE590&gt;, ']([0, 0, 0])'), (&lt;Page.Page object at 0x043FE5B0&gt;, ']([0, 1])')] NewLinkData: [(&lt;Page.Page object at 0x035AF070&gt;, ']([0])'), (&lt;Page.Page object at 0x043FE5B0&gt;, ']([0, 0])'), (&lt;Page.Page object at 0x043FE4F0&gt;, ']([0, 1])'), (&lt;Page.Page object at 0x043FE590&gt;, ']([0, 1, 0])')] ChangedLinks: {(&lt;Page.Page object at 0x043FE590&gt;, ']([0, 1, 0])'), (&lt;Page.Page object at 0x043FE5B0&gt;, ']([0, 0])'), (&lt;Page.Page object at 0x043FE4F0&gt;, ']([0, 1])')} ReplaceQueue: [(']([0, 0, 0])', '&lt;&lt;LINK UPDATE TOKEN 0&gt;&gt;', ']([0, 1, 0])'), (']([0, 1])', '&lt;&lt;LINK UPDATE TOKEN 1&gt;&gt;', ']([0, 0])'), (']([0, 0])', '&lt;&lt;LINK UPDATE TOKEN 2&gt;&gt;', ']([0, 1])')] </code></pre> <p>To be clear, this is actual input and output as printed from the console in the working code. I'm looking for a way to achieve this same result more efficiently than the current code.</p> <p>The tuples in <code>OldLinkData</code> and <code>NewLinkData</code> are of the form:</p> <pre><code>(Page.Page object at X, String) </code></pre> <p>The purpose of the code is to produce <code>ReplaceQueue</code>, a list of old and new values for replacing substrings throughout a series of strings (the page contents in a hierarchical notebook). 
<code>ReplaceQueue</code>'s content has to be narrowed to cases where the same <code>Page.Page</code> object in memory has two different associated "links" (string representations of integer index paths wrapped in some markdown syntax) across <code>OldLinkData</code> and <code>NewLinkData</code>.</p> <p>The difference between <code>OldLinkData</code> and <code>NewLinkData</code> is obtained with <code>ChangedLinks</code> as <code>set(NewLinkData) - set(OldLinkData)</code>, but then I need to associate the changed strings with each other in <code>ReplaceQueue</code>.</p> <p>The <code>LinkUpdateTokenID</code> integer is just an intermediate step so that I can guarantee unique parameters for <code>str.replace</code> and not muck things up when two objects swap link strings.</p> <p>Edit: Thanks to @ParitoshSingh, the following code is noticeably faster:</p> <pre><code>def GetLinkData(self): LinkData = {} LinkData[id(self.RootPage)] = "](" + self.JSONSerializer.SerializeDataToJSONString(self.RootPage.GetFullIndexPath(), Indent=None) + ")" self.AddSubPageLinkData(self.RootPage, LinkData) return LinkData def AddSubPageLinkData(self, CurrentPage, LinkData): for SubPage in CurrentPage.SubPages: LinkData[id(SubPage)] = "](" + self.JSONSerializer.SerializeDataToJSONString(SubPage.GetFullIndexPath(), Indent=None) + ")" self.AddSubPageLinkData(SubPage, LinkData) def UpdateLinks(self, OldLinkData, NewLinkData): ReplaceQueue = [] for PageID in NewLinkData: if PageID in OldLinkData: if NewLinkData[PageID] != OldLinkData[PageID]: ReplaceStrings = (OldLinkData[PageID], "&lt;&lt;LINK UPDATE TOKEN" + str(PageID) + "&gt;&gt;", NewLinkData[PageID]) ReplaceQueue.append(ReplaceStrings) for ReplaceStrings in ReplaceQueue: self.SearchWidgetInst.ReplaceAllInNotebook(SearchText=ReplaceStrings[0], ReplaceText=ReplaceStrings[1], MatchCase=True, DelayTextUpdate=True) for ReplaceStrings in ReplaceQueue: self.SearchWidgetInst.ReplaceAllInNotebook(SearchText=ReplaceStrings[1], ReplaceText=ReplaceStrings[2], MatchCase=True, DelayTextUpdate=True) </code></pre>
<p><strong>EDIT</strong>: For users looking at a similar problem to this, please refer to a more generic solution below. This edit only addresses this specific scenario for the OP.<br> To the OP, The lookups can be sped up by using hashable values. For this specific use case, try the <a href="https://docs.python.org/3/library/functions.html#id" rel="nofollow noreferrer">id() function</a><br> <em>Warning</em>: The caveats should be kept in mind. id function is guaranteed to produce unique values for objects that coexist at the same time, but is only guaranteed to be linked to memory address in CPython, other implementations may differ. </p> <pre><code>OldLinkData = list(zip("123","abc")) print(OldLinkData) #[('1', 'a'), ('2', 'b'), ('3', 'c')] NewLinkData = list(zip('1245','axyz')) print(NewLinkData) #[('1', 'a'), ('2', 'x'), ('4', 'y'), ('5', 'z')] #code: #Create a key value mapping based on the id of objects. OldLinkDataDict = {id(OldLink[0]): OldLink for OldLink in OldLinkData} #{244392672200: ('1', 'a'), 244392672368: ('2', 'b'), 244420136496: ('3', 'c')} ReplaceQueue = [] LinkUpdateTokenID = 0 for NewLink in NewLinkData: new_id = id(NewLink[0]) if new_id in OldLinkDataDict: #only consider cases where NewLink exists in OldLinkData if NewLink[1] != OldLinkDataDict[new_id][1]: #only when the value changes (similar to ChangedLinks) ReplaceStrings = (OldLinkDataDict[new_id][1], "&lt;&lt;LINK UPDATE TOKEN " + str(LinkUpdateTokenID) + "&gt;&gt;", NewLink[1]) ReplaceQueue.append(ReplaceStrings) LinkUpdateTokenID += 1 print(ReplaceQueue) #[('b', '&lt;&lt;LINK UPDATE TOKEN 0&gt;&gt;', 'x')] </code></pre> <p>If you're curious, this demonstration only works because python caches the int objects for small numbers. <a href="https://github.com/python/cpython/blob/4830f581af57dd305c02c1fd72299ecb5b090eca/Objects/longobject.c#L18-L23" rel="nofollow noreferrer">[-5 to 256]</a></p> <hr> <p><strong>Generalized Solution</strong></p> <p>You can see very good gains by changing your datatype of OldLinkData to a dictionary if your comparison objects are hashables. <a href="https://docs.python.org/3/library/stdtypes.html#mapping-types-dict" rel="nofollow noreferrer">Link to Docs</a>. Because dictionary keys are hashables, the dictonary lookups are a constant time operation <code>O(1)</code>, and do not require iteration in the dictionary. </p> <pre><code>#Dummy data OldLinkData = list(zip("123","abc")) print(OldLinkData) #[('1', 'a'), ('2', 'b'), ('3', 'c')] NewLinkData = list(zip('1245','axyz')) print(NewLinkData) #[('1', 'a'), ('2', 'x'), ('4', 'y'), ('5', 'z')] #code: #ChangedLinks = set(NewLinkData) - set(OldLinkData) #Remove this, set creation requires an iteration anyways OldLinkDataDict = dict(OldLinkData) print(OldLinkDataDict) #{'1': 'a', '2': 'b', '3': 'c'} ReplaceQueue = [] LinkUpdateTokenID = 0 for NewLink in NewLinkData: if NewLink[0] in OldLinkDataDict: #only consider cases where NewLink exists in OldLinkData if NewLink[1] != OldLinkDataDict[NewLink[0]]: #only when the value changes (similar to ChangedLinks) ReplaceStrings = (OldLinkDataDict[NewLink[0]], "&lt;&lt;LINK UPDATE TOKEN " + str(LinkUpdateTokenID) + "&gt;&gt;", NewLink[1]) ReplaceQueue.append(ReplaceStrings) LinkUpdateTokenID += 1 print(ReplaceQueue) #[('b', '&lt;&lt;LINK UPDATE TOKEN 0&gt;&gt;', 'x')] </code></pre> <p>Some comparison. Note that ideally you should only do the dictionary creation once, but i kept it included in the time comparison in case you can't get away with changing the datatype of OldLinkData permanently. 
In that case, you just would want to create the dictionary for comparison as needed.</p> <pre><code>OldLinkData = list(zip("123","abc")) NewLinkData = list(zip('1245','axyz')) </code></pre> <p>BaseLine</p> <pre><code>%%timeit ChangedLinks = set(NewLinkData) - set(OldLinkData) ReplaceQueue = [] LinkUpdateTokenID = 0 for ChangedLink in ChangedLinks: for OldLink in OldLinkData: if ChangedLink[0] is OldLink[0]: ReplaceStrings = (OldLink[1], "&lt;&lt;LINK UPDATE TOKEN " + str(LinkUpdateTokenID) + "&gt;&gt;", ChangedLink[1]) ReplaceQueue.append(ReplaceStrings) LinkUpdateTokenID += 1 </code></pre> <p>NewCode</p> <pre><code>%%timeit OldLinkDataDict = dict(OldLinkData) ReplaceQueue = [] LinkUpdateTokenID = 0 for NewLink in NewLinkData: if NewLink[0] in OldLinkDataDict: #only consider cases where NewLink exists in OldLinkData if NewLink[1] != OldLinkDataDict[NewLink[0]]: #only when the value changes (similar to ChangedLinks) ReplaceStrings = (OldLinkDataDict[NewLink[0]], "&lt;&lt;LINK UPDATE TOKEN " + str(LinkUpdateTokenID) + "&gt;&gt;", NewLink[1]) ReplaceQueue.append(ReplaceStrings) LinkUpdateTokenID += 1 </code></pre> <p>BaseLine: <code>2.16 µs ± 52.6 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) </code></p> <p>NewCode: <code>1.62 µs ± 98.4 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)</code></p>
python|python-3.x|tuples
3
1,906,365
25,549,375
Creating Double precision columns using SqlAlchemy from Alembic
<p>I need to use a DOUBLE column in my MYSQL db. I've read the docs, which suggest I use a Float with precision of 64. This, however, doesn't seem to be working. I end up with a regular float column, and the required precision does not exist.</p> <p>I also tried:</p> <pre><code>from sqlalchemy.dialects.mysql import DOUBLE #and in the model: item = db.Column(DOUBLE()) </code></pre> <p>However, when migrating, Flask-Migrate doesn't seem to be able to tell a difference between the previous column and the new one, and generates an empty migration.</p> <p>I read this: <a href="https://stackoverflow.com/questions/12409724/no-changes-detected-in-alembic-autogeneration-of-migrations-with-flask-sqlalchem">No changes detected in Alembic autogeneration of migrations with Flask-SQLAlchemy</a> and <a href="https://github.com/miguelgrinberg/Flask-Migrate/issues/24" rel="nofollow noreferrer">https://github.com/miguelgrinberg/Flask-Migrate/issues/24</a></p> <p>And tried setting compare_type=True in the alembic settings, but there is still no difference registered between Float and Double types.</p> <p>I know I can just manually switch the columns in the db, but how can I enforce a double precision column using sqlalchemy?</p>
<p>Setting <a href="https://alembic.zzzcomputing.com/en/latest/api/runtime.html#alembic.runtime.environment.EnvironmentContext.configure.params.compare_type" rel="nofollow noreferrer"><code>compare_type=True</code></a> on the <code>EnvironmentContext</code> should work (and does work for me). If you are still having issues trying to auto-generate this, you can always just add this operation to the migration manually.</p> <pre><code>def upgrade(): #... op.alter_column('my_table', 'my_column', type_=DOUBLE, existing_nullable=False) #... </code></pre> <p>As a general note, Alembic is not perfect at auto-generating migrations. It's a good idea to check the migrations scripts and edit them as needed first.</p>
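<p>For reference, a sketch of where that flag usually lives, inside <code>context.configure()</code> in your <code>env.py</code> (names below follow the default Alembic template, so adjust if yours differs):</p> <pre><code># env.py (excerpt)
with connectable.connect() as connection:
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        compare_type=True,   # lets autogenerate detect column type changes
    )
    with context.begin_transaction():
        context.run_migrations()
</code></pre>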
python|mysql|sqlalchemy|alembic
2
1,906,366
25,575,228
Windows net use line break
<p>I would like to fetch only the lines from the Windows <code>net use</code> command which are relevant for me.</p> <p>If the UNC path is too long, <code>net use</code> wraps the entry onto a second line.</p> <p>My code:</p> <p><code>output = subprocess.Popen('net use', stdout=subprocess.PIPE).communicate() valid_lines = [ line.strip() for line in output[0].split('\r\n')] valid_lines = valid_lines[6:-3] print "output", valid_lines</code></p> <p>Sample net use: <img src="https://i.stack.imgur.com/YXUPe.jpg" alt="enter image description here"></p> <p>output ['Getrennt \\192.168.1.111\bze\export', 'Microsoft Windows Network', 'OK \\master\bze\export Microsoft Windows Network']</p> <p>I would like each entry to come out on a single line, like the 'OK.....' one does.</p> <p>thx</p>
<p>This is a good question, and I found a better and much more reliable way to do this, provided your OS supports the WMIC command. I believe it is supported from Windows XP onwards, but you can check for your specific system. Using WMIC you can format the output as CSV, which gives you fully reliable info. Sample code is below:</p> <pre><code>import subprocess output = subprocess.Popen('wmic netuse list /format:csv', stdout=subprocess.PIPE).communicate() valid_lines = [ line.strip() for line in output[0].split('\r\n')] #If you don't want the header use 2: instead of 1: myData=[line.split(',') for line in valid_lines][1:] </code></pre>
python|net-use
0
1,906,367
20,639,498
Python dictionary assignment during list comprehension
<p>I have a dictionary of permissions as such:</p> <pre><code>{'READ': True, 'WRITE': None, 'DELETE': False} </code></pre> <p>The actual dictionary has more keys, but this suffices for the example. I would like to iterate over the dictionary and change any values of <code>None</code> to <code>False</code>. This can be done easily with a for loop, but I'm wondering if there's an idiomatic way to do it with a list comprehension. If it was an object instead of a dictionary I would just do:</p> <pre><code>[setattr(k, v, False) for k, v in object if v is None] </code></pre> <p>(or similar), but I'm not sure how to do it like that without using <code>dict.__setitem__</code></p> <p>This isn't super important to solve my problem, but I'm just wondering if there's a more concise way to do it</p>
<p>Have you considered a <a href="http://www.python.org/dev/peps/pep-0274/" rel="nofollow">dictionary comprehension</a>:</p> <pre><code>&gt;&gt;&gt; dct = {'READ': True, 'WRITE': None, 'DELETE': False} &gt;&gt;&gt; dct = {k:v if v is not None else False for k,v in dct.items()} &gt;&gt;&gt; dct {'READ': True, 'WRITE': False, 'DELETE': False} &gt;&gt;&gt; </code></pre> <p>Note: If you are on Python 2.x, you should use <a href="http://docs.python.org/2.7/library/stdtypes.html#dict.iteritems" rel="nofollow"><code>dict.iteritems</code></a> instead of <code>dict.items</code>. Doing so will improve efficiency since the former returns an iterator where as the later returns a list.</p>
python|list-comprehension
6
1,906,368
36,164,986
How to install package in anaconda?
<p>I want to add the music21 package to the anaconda interpreter. I'm using ubuntu 14.04 64bit. I downloaded music21-1.9.3.tar.gz from anaconda cloud and unpacked it to anaconda3/pkgs:</p> <pre><code>ext installer.py music21 PKG-INFO setup.cfg installer.command MANIFEST.in music21.egg-info README.md setup.py </code></pre> <p>Nothing I found on the web works. How can I install it? </p>
<p>Are you using Windows? If so, open up a Command Prompt window.</p> <p>What I like to do is Copy the Link Address of the package that I would like to install. In this case, a simple google search lead me to a popular python package site : <a href="https://pypi.python.org/pypi/music21/1.9.3" rel="noreferrer">https://pypi.python.org/pypi/music21/1.9.3</a></p> <p>I right-click the tar.gz hyperlink and click "Copy Link Address" to get this : <a href="https://pypi.python.org/packages/source/m/music21/music21-1.9.3.tar.gz#md5=d271e4a8c60cfa634796fc81d1278eaf" rel="noreferrer">https://pypi.python.org/packages/source/m/music21/music21-1.9.3.tar.gz#md5=d271e4a8c60cfa634796fc81d1278eaf</a></p> <p>Now to install this, in your command prompt window, type the following :</p> <pre><code>pip install https://pypi.python.org/packages/source/m/music21/music21-1.9.3.tar.gz#md5=d271e4a8c60cfa634796fc81d1278eaf </code></pre> <p>or</p> <pre><code>conda install https://pypi.python.org/packages/source/m/music21/music21-1.9.3.tar.gz#md5=d271e4a8c60cfa634796fc81d1278eaf </code></pre> <p>And it should automatically download the package from that link address, unzip it, and then attempt to install it in your python environment. </p> <p>It's good to know how to install python packages manually as well for distributions that don't lend themselves as easily to cross-platform auto-installations. </p> <p>What you would do is unzip the tar.gz file ( or any other compressed package file ) until you have a folder directory with a "setup.py" file name. You would go into your command prompt window and "cd" into that directory. </p> <p>Then you would call the Python Executable by typing "python" which lets the command prompt know that you are calling python to run your command and finish the line so in total it looks like this : </p> <pre><code>python setup.py install </code></pre> <p>There you have it. </p>
python|package|installation|anaconda|music21
18
1,906,369
29,551,737
Read a line store it in a variable and then read another line and come back to the first line. Python 2
<p>This is a tricky question and I've read a lot of posts about it, but I haven't been able to make it work.</p> <p>I have a big file. I need to read it line by line, and once I reach a line of the form <code>"Total is: (any decimal number)"</code>, take this string and to save the number in a variable. If the number is bigger than 40.0, then I need to find the fourth line above the <code>Total</code> line (for example, if the <code>Total</code> line was line 39, this line would be line 35). This line will be in the format <code>"(number).(space)(substring)"</code>. Finally, I need to parse this substring out and do further processing on it.</p> <p>This is an example of what an input file might look like:</p> <pre class="lang-none prettyprint-override"><code>many lines that we don't care about many lines that we don't care about ... 1. Hi45 People: bla bla bla bla bla bla whitespace bla bla bla bla bla Total is: (*here there will be a decimal number*) bla bla white space ... more lines we don't care about and then more lines and then again we get 2. How144 People: bla bla bla bla bla bla whitespace bla bla bla bla bla Total is: (*here there will be a decimal number*) bla bla white space </code></pre> <p>I have tried many things, including using the <code>re.search()</code> method to capture what I need from each line I need to focus on.</p> <p>Here is my code which I modified from another stackoverflow Q &amp; A:</p> <pre class="lang-py prettyprint-override"><code>import re import linecache number = "" higher_line = "" found_line = "" with open("filename_with_many_lines.txt") as aFile: for num, line in enumerate(aFile, 1): searchObj = re.search(r'(\bTotal\b)(\s)(\w+)(\:)(\s)(\d+.\d+)', line) if searchObj: print "this is on line", line print "this is the line number:", num var1 = searchObj.group(6) print var1 if float(var1) &gt; 40.0: number = num higher_line = number - 4 print number print higher_line found_line = linecache.getline("filename_with_many_lines.txt", higher_line) print "found the", found_line </code></pre> <p>The expected output would be:</p> <pre class="lang-none prettyprint-override"><code>this is on line Total is: 45.5 this is the line number: 14857 14857 14853 found the 1. Hi145 this is on line Total is: 62.1 this is the line number: 14985 14985 14981 found the 2.How144 </code></pre>
<p>If the line you need is always four lines above the <code>Total is:</code> line, you could keep the previous lines in a bounded <a href="https://docs.python.org/3/library/collections.html#collections.deque" rel="nofollow"><code>deque</code></a>.</p> <pre><code>from collections import deque with open(filename, 'r') as file: previous_lines = deque(maxlen=4) for line in file: if line.startswith('Total is: '): try: higher_line = previous_lines[-4] # store higher_line, do calculations, whatever break # if you only want to do this once; else let it keep going except IndexError: # we don't have four previous lines yet # I've elected to simply skip this total line in that case pass previous_lines.append(line) </code></pre> <p>A bounded <code>deque</code> (one with a maximum length) will discard an item from the opposite side if adding a new item would cause it to exceed its maximum length. In this case, we're appending strings to the right side of the <code>deque</code>, so once the length of the <code>deque</code> reaches <code>4</code>, each new string we append to the right side will cause it to discard one string from the left side. Thus, at the beginning of the <code>for</code> loop, the <code>deque</code> will contain the four lines prior to the current line, with the oldest line at the far left (index <code>0</code>).</p> <p>In fact, <a href="https://docs.python.org/3/library/collections.html#collections.deque" rel="nofollow">the documentation on <code>collections.deque</code></a> mentions use cases very similar to ours:</p> <blockquote> <p>Bounded length deques provide functionality similar to the <code>tail</code> filter in Unix. They are also useful for tracking transactions and other pools of data where only the most recent activity is of interest.</p> </blockquote>
python|regex|file|line|python-2.x
2
1,906,370
62,618,013
No module named 'pandas_schema'
<p>When trying to <code>import pandas_schema</code>, I get the error <em>ModuleNotFoundError: No module named 'pandas_schema'</em><br /> I'm currently using PyCharm Community 2020.1.2</p> <p>Details:<br /> python 3.8.3<br /> pip 20.1.1<br /> pandas-schema 0.3.5<br /> pandas 1.0.5<br /> windows10 home 64-bit</p>
<p>Check your python environment. In a Powershell terminal, run</p> <pre class="lang-bsh prettyprint-override"><code>PS /home/edgar&gt; pip list | Select-String -Pattern pandas pandas 1.0.4 pandas-schema 0.3.5 </code></pre> <p>If <code>pandas-schema</code> is missing, then run</p> <pre class="lang-bsh prettyprint-override"><code>PS /home/edgar&gt; pip install pandas-schema </code></pre> <p>If the package is not missing then most likely your interpreter in PyCharm is not correctly setup. Run the following in PS:</p> <pre class="lang-bsh prettyprint-override"><code>PS /home/edgar&gt; Get-Command python CommandType Name Version Source ----------- ---- ------- ------ Application python 0.0.0.0 /path/to/python </code></pre> <p>Copy the <code>Source</code> value and go to <code>Settings &gt; Project: myproject &gt; Project Interpreter</code>, and next to the interpreter text field click the config icon and go to <code>Add...</code>. Paste the path to the Python interpreter as an existing environment.</p>
python-3.x|pandas
0
1,906,371
62,572,125
Problem when importing PySCIPOpt. AttributeError: type object 'pyscipopt.scip.Expr' has no attribute '__div__'
<p>I want to use python interface for <code>SCIP</code>; I installed <code>PySCIPOpt</code> following <a href="https://github.com/SCIP-Interfaces/PySCIPOpt/blob/master/INSTALL.md" rel="nofollow noreferrer">these steps</a>.</p> <p>I'm using <code>SCIP7</code>, <code>PySCIPOpt</code> 3, and python 3.7. <code>SCIP</code>'s interactive shell alone works well. However, when I try to <code>import pyscipopt</code>, I get the following error</p> <blockquote> <p>File &quot;src/pyscipopt/scip.pyx&quot;, line 1, in init pyscipopt.scip AttributeError: type object 'pyscipopt.scip.Expr' has no attribute '<strong>div</strong>'</p> </blockquote> <p>My operating system is Linux Mint 19.2</p> <p>I tried to test the installation as suggested, and I get the errors in the image<a href="https://i.stack.imgur.com/7ySDb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7ySDb.png" alt="enter image description here" /></a></p>
<p>The problem seems to be an update in cython. It should be fixed in the current master of PySCIPOpt. See also <a href="https://github.com/SCIP-Interfaces/PySCIPOpt/issues/397" rel="nofollow noreferrer">https://github.com/SCIP-Interfaces/PySCIPOpt/issues/397</a></p>
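<p>If you want the fix before a new release is published, installing straight from the repository linked above is one option. A sketch (this assumes SCIP itself is already installed and that your environment can build PySCIPOpt from source, e.g. <code>SCIPOPTDIR</code> pointing at your SCIP installation):</p> <pre><code>pip install --upgrade git+https://github.com/SCIP-Interfaces/PySCIPOpt.git@master
</code></pre>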
python|interface|scip
1
1,906,372
62,592,626
Extract the elements of list if they are present in the column in python dataframe?
<pre><code>frame = pd.DataFrame({'a' : ['the cat, dog is blue', 'the sky is green', 'the dog is black']}) frame a 0 the cat,dog is blue 1 the sky is green 2 the dog is black mylist = ['dog', 'cat', 'fish'] </code></pre> <p>Expected output</p> <pre><code> a matched_str 0 the cat, dog is blue cat, dog 1 the sky is green NA 2 the dog is black dog </code></pre> <p>Please advise</p> <p>Tried as below:</p> <pre><code>import re def pattern_searcher(search_str:str, search_list:str): search_obj = re.search(search_list, search_str) if search_obj : return_str = search_str[search_obj.start(): search_obj.end()] else: return_str = 'NA' return return_str pattern = '|'.join(mylist) frame['matched_str'] = frame['a'].apply(lambda x: pattern_searcher(search_str=x, search_list=pattern)) </code></pre>
<p>Try <code>str.extractall</code> after joining your values into a regex alternation (<code>|</code>) pattern.</p> <pre><code>frame = pd.DataFrame({'a' : ['the cat, dog is blue', 'the sky is green', 'the dog is black']}) mylist = ['dog', 'cat', 'fish'] words = '|'.join(mylist) #'dog|cat|fish' frame['b'] = frame['a'].str.extractall(f&quot;({words})&quot;).groupby(level=0).agg(','.join) a b 0 the cat, dog is blue cat,dog 1 the sky is green NaN 2 the dog is black dog </code></pre>
python-3.x|pandas
1
1,906,373
70,357,428
SIFT match computer vision
<p>I need to determine the location of yogurts in the supermarket. Source photo looks like <a href="https://i.stack.imgur.com/nZBK5.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nZBK5.jpg" alt="enter image description here" /></a></p> <p>With template:</p> <p><a href="https://i.stack.imgur.com/cbX9e.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cbX9e.jpg" alt="enter image description here" /></a></p> <p>I using SIFT to extract key points of template:</p> <pre><code>img1 = cv.imread('train.jpg') sift = cv.SIFT_create()# queryImage kp1, des1 = sift.detectAndCompute(img1, None) path = glob.glob(&quot;template.jpg&quot;) cv_img = [] l=0 for img in path: img2 = cv.imread(img) # trainImage # Initiate SIFT detector # find the keypoints and descriptors with SIFT kp2, des2 = sift.detectAndCompute(img2,None) # FLANN parameters FLANN_INDEX_KDTREE = 1 index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5) search_params = dict(checks=50) # or pass empty dictionary flann = cv.FlannBasedMatcher(index_params,search_params) matches = flann.knnMatch(des1,des2,k=2) # Need to draw only good matches, so create a mask # ratio test as per Lowe's paper if (l &lt; len(matches)): l = len(matches) image = img2 match = matches h_query, w_query, _= img2.shape matchesMask = [[0,0] for i in range(len(match))] good_matches = [] good_matches_indices = {} for i,(m,n) in enumerate(match): if m.distance &lt; 0.7*n.distance: matchesMask[i]=[1,0] good_matches.append(m) good_matches_indices[len(good_matches) - 1] = i bboxes = [] src_pts = np.float32([ kp1[m.queryIdx].pt for m in good ]).reshape(-1,2) dst_pts = np.float32([ kp2[m.trainIdx].pt for m in good ]).reshape(-1,2) model, inliers = initialize_ransac(src_pts, dst_pts) n_inliers = np.sum(inliers) matched_indices = [good_matches_indices[idx] for idx in inliers.nonzero()[0]] print(len(matched_indices)) model, inliers = ransac( (src_pts, dst_pts), AffineTransform, min_samples=4, residual_threshold=4, max_trials=20000 ) n_inliers = np.sum(inliers) print(n_inliers) matched_indices = [good_matches_indices[idx] for idx in inliers.nonzero()[0]] print(matched_indices) q_coordinates = np.array([(0, 0), (h_query, w_query)]) coords = model.inverse(q_coordinates) print(coords) h_query, w_query,_ = img2.shape q_coordinates = np.array([(0, 0), (h_query, w_query)]) coords = model.inverse(q_coordinates) print(coords) # bboxes_list.append((i, coords)) M, mask = cv.findHomography(src_pts, dst_pts, cv.RANSAC, 2) draw_params = dict(matchColor = (0,255,0), singlePointColor = (255,0,0), matchesMask = matchesMask, flags = cv.DrawMatchesFlags_DEFAULT) img3 = cv.drawMatchesKnn(img1,kp1,image,kp2,match,None,**draw_params) plt.imshow(img3),plt.show() </code></pre> <p>Result of SIFT looks like</p> <p><a href="https://i.stack.imgur.com/jBsse.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jBsse.png" alt="enter image description here" /></a></p> <p>The question is what is the best way to clasterise points to obtain rectangles, representing each yogurt? I tried RANSAC, but this method doesn't work in this case.</p>
<p>I am proposing an approach based on what is discussed in <a href="https://www.cs.cmu.edu/%7Emmv/papers/06humanoids-stefan.pdf" rel="nofollow noreferrer">this</a> paper. I have modified the approach a bit because the use-case is not entirely same but they do use SIFT features matching to locate multiple objects in video frames. They have used PCA for reducing time but that may not be required for still images.</p> <p>Sorry I could not write a code for this as it will take a lot of time but I believe this should work to locate all the occurrences of the template object.</p> <p>The modified approach is like this:</p> <p>Divide the template image into regions: left, middle, right along the horizontal and top, bottom along the vertical</p> <p>Now when you match features between the template and source image, you will get features matched from some of the keypoints from these regions on multiple locations on the source image. You can use these keypoints to identify which region of the template is present at what location(s) in the source image. If there are overlapping regions i.e. keypoints from different regions matched with close keypoints in source image then that would mean a wrong match.</p> <p>Mark each set of matching keypoints within a neighborhood on source image as left, center, right, top, bottom depending upon if they have majority matches from keypoints of a particular region in the template image.</p> <p>Starting from each left region on source image move towards right and if we find a central region followed by a right region then this area of source image between regions marked as left and right, can be marked as location of one template object.</p> <p>There could be overlapping objects which could result in a left region followed by another left region when moving in right direction from the left region. The area between the two left regions can be marked as one template object.</p> <p>For further refined locations, each area of source image marked as one template object can be cropped and re-matched with the template image.</p>
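<p>As a partial illustration only (not the full pipeline described above), here is a sketch of the first step: tagging each template keypoint with the horizontal region it falls in, so that matches can later be labelled left/middle/right on the source image. It reuses the variable roles from the question (the template keypoints are the "train" side of the matcher there):</p> <pre><code>import cv2 as cv

template = cv.imread('template.jpg')          # same template image as in the question
h, w = template.shape[:2]
sift = cv.SIFT_create()
kp_t, des_t = sift.detectAndCompute(template, None)

# region label per template keypoint, based on which horizontal third it lies in
region_of_kp = []
for p in kp_t:
    x = p.pt[0]
    if x &lt; w / 3:
        region_of_kp.append('left')
    elif x &lt; 2 * w / 3:
        region_of_kp.append('middle')
    else:
        region_of_kp.append('right')

# after flann.knnMatch, a good match m refers to template keypoint m.trainIdx,
# so region_of_kp[m.trainIdx] tells which part of the template matched at that spot
</code></pre>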
python|opencv|computer-vision
2
1,906,374
46,090,323
error when inserting dataframe into MS-SQL with Python
<p>I want to insert a pandas dataframe into MS-SQL using pypyodbc. Here is my code:</p> <p>Create a dataframe at first:</p> <pre><code>df = pd.DataFrame({'C1': [11, 21, 31], 'C2': [12, 22, 32], 'C3': [13, 23, 33]}) tablename = tb </code></pre> <p>and then insert dataframe into MS-SQL</p> <pre><code>def insertDFtoSQL(df,tablename): con = ppo.connect(r'Driver={SQL Server}; 'r'Server=se;' r'Database=db;' r'Trusted_Connection=yes;') cols = ','.join([k for k in df.dtypes.index]) params = ','.join('?' * len(df.columns)) sql = 'INSERT INTO {0} ({1}) VALUES ({2})'.format(tablename, cols, params) data = [tuple(x) for x in df.values] con.cursor().executemany(sql, data) con.commit() con.close() </code></pre> <p>this code resulted in an error message that said "type object is not subscriptable".</p> <p>However, everything would be right if I used </p> <pre><code>data1 = [(11,12,13),(22,23,24),(32,33,34)] </code></pre> <p>to replace data in <code>executemany(sql, data)</code> as <code>executemany(sql, data1)</code></p> <p>Any ideas about it? </p>
<p><code>df.values</code> is a <code>&lt;class 'numpy.ndarray'&gt;</code>, and when you do</p> <pre class="lang-python prettyprint-override"><code>data = [tuple(x) for x in df.values] </code></pre> <p>you get a list of tuples containing elements of type <code>&lt;class 'numpy.int64'&gt;</code>. pypyodbc is expecting the tuples to contain "normal" Python types, so you'll need to use</p> <pre class="lang-python prettyprint-override"><code>data = [tuple(int(col) for col in row) for row in df.values] </code></pre> <p>to convert the numbers into plain old <code>int</code> values.</p>
python|sql-server|pandas|dataframe|pypyodbc
3
1,906,375
33,224,176
Why doesn't numpy.where give me the correct answer?
<p>I'm trying to estimate the number of neutrons that passes through a "wall" without being reflected or absorbed for starters, but, <code>numpy.where</code> doesn't give me the right answer:</p> <pre><code>import numpy as np Tabx = np.zeros(10) Tabcos = np.ones(10) actif = np.ones(10, dtype=bool) iActif = np.where(actif)[0] pa = 0 ps = 0 for i in range(15): iActif = np.where(actif)[0] r = np.random.random(10) l = -1 * 0.2 * np.log(r[iActif]) Tabx[iActif] += l * Tabcos[iActif] if np.where(Tabx[iActif] &gt; 1)[0].size!=0: actif[np.where(Tabx[iActif] &gt; 1)[0]]=False print("itération: ", i + 1) print(Tabx) print(actif) print(iActif) </code></pre> <p>I'll show only when it starts to be wrong:</p> <pre><code>itération: 6 [ 1.34250751 1.22131969 0.61147827 0.72320522 1.18101783 0.2767469 1.87170912 0.68726641 1.44933786 1.25179186] [False False False False False False True True True True] [3 6 7 8 9] </code></pre> <p>And the problem doesn't stop.</p>
<p>This isn't part of your big problem, but this block can probably be simplified:</p> <pre><code>if np.where(Tabx[iActif] &gt; 1)[0].size!=0: actif[np.where(Tabx[iActif] &gt; 1)[0]]=False </code></pre> <p>to</p> <pre><code>actif[ Tabx[iActif]&gt;1] = False </code></pre> <p>But you haven't made any attempt to describe what this is supposed to be doing, nor what exactly it is doing wrong. I have no idea what 'right one' is.</p>
python|numpy|where
0
1,906,376
73,669,158
Discord.py 2.0 slash command integration with commands.Bot()
<p>commands.Bot() currently requires a command prefix but with discord's new slash commands that is not necessary. Is there a way to get around the required command prefix?</p> <p>I am currently using discord.Client so this isn't an issue. However, after some research it seems like commands.Bot() is the better option since it is an extension of discord.Client.</p>
<p><code>discord.Client()</code> represents a client connection to Discord; this class is used to interact with the Discord WebSocket and API. <code>commands.Bot()</code> is just an extension of it that adds prefix commands, so you don't need it if you only have slash commands.</p> <p>To answer your question: no, there's no &quot;get around&quot;, because it would make no sense. <code>commands.Bot()</code> was created to allow prefix commands, and if you bypass that, what you have is effectively a normal <code>discord.Client</code> instance.</p> <ol> <li><a href="https://discordpy.readthedocs.io/en/latest/api.html?highlight=discord%20client#discord.Client" rel="nofollow noreferrer">discord.Client()</a></li> <li><a href="https://discordpy.readthedocs.io/en/latest/ext/commands/api.html?highlight=commands%20bot#discord.ext.commands.Bot" rel="nofollow noreferrer">commands.Bot()</a></li> </ol>
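<p>For example, a minimal slash-command-only bot can be built from a plain <code>discord.Client</code> plus an <code>app_commands.CommandTree</code>. A sketch (the command, sync strategy and token are placeholders):</p> <pre><code>import discord
from discord import app_commands

intents = discord.Intents.default()
client = discord.Client(intents=intents)
tree = app_commands.CommandTree(client)

@tree.command(name="ping", description="replies with pong")
async def ping(interaction: discord.Interaction):
    await interaction.response.send_message("pong")

@client.event
async def on_ready():
    await tree.sync()   # registers the slash commands with Discord

client.run("YOUR_TOKEN")
</code></pre>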
python|python-3.x|discord|discord.py
1
1,906,377
12,877,591
Invalid Syntax; nth prime number
<pre><code>def primetest(x): if x &lt; 2: return False if x == 2: return True if x % 2 == 0: return False for i in range(3,(x**0.5)+1): if x % i == 0: return False return True def nthprime(n): primes = [] x = 2 while len(primes) &lt; n: if primetest(x) == True: primes.append(x) x = x + 1 return list(-1) print nthprime(10001) </code></pre> <p>Whenever I try to run this it says that "print nthprime(10001)" is invalid syntax.</p> <p><code>primetest</code> tests whether a number is prime, and <code>nthprime</code> builds a list of prime numbers of a certain length and then returns the last element of the list.</p>
<p><code>print</code> is a function in Python 3, not a statement. You should change your last line of code to:</p> <pre><code>print(nthprime(10001)) </code></pre>
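<p>Note that once the <code>print</code> call is fixed, a couple of further errors will surface when the code actually runs: <code>range()</code> needs integer bounds (so <code>int(x**0.5) + 1</code>), and <code>return list(-1)</code> should return the last element of <code>primes</code>. A possible corrected sketch:</p> <pre><code>def primetest(x):
    if x &lt; 2:
        return False
    if x == 2:
        return True
    if x % 2 == 0:
        return False
    for i in range(3, int(x**0.5) + 1):   # range() requires an int bound
        if x % i == 0:
            return False
    return True

def nthprime(n):
    primes = []
    x = 2
    while len(primes) &lt; n:
        if primetest(x):
            primes.append(x)
        x = x + 1                          # advance on every iteration
    return primes[-1]                      # last element of the primes list

print(nthprime(10001))
</code></pre>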
python-3.x
1
1,906,378
41,103,097
does TensorFlow automatically use sparse_softmax_cross_entropy_with_logits when possible?
<p>Let's say that I have some code such as:</p> <pre><code>out = tf.nn.softmax(x) # shape (batch,time,n) labels = .... # reference labels of type (batch,time)-&gt;int </code></pre> <p>And then I define my loss as the Cross Entropy:</p> <pre><code>loss = -tf.log(tf.gather_nd(out, labels)) </code></pre> <p>Will TensorFlow automatically replace the <code>loss</code> in the computation graph by this?</p> <pre><code>loss = sparse_softmax_cross_entropy_with_logits(x, labels) </code></pre> <p>What type of optimizations can I expect that TensorFlow will apply?</p> <p>Follow-up question: If TensorFlow doesn't do this optimization, how can I do it manually? Consider that I have a modular framework where I get some <code>out</code> tensor which could possibly be the output of a <code>softmax</code> operation, and I want to calculate Cross Entropy, and I want to use <code>sparse_softmax_cross_entropy_with_logits</code> if possible. How could I accomplish this? Can I do something like the following?</p> <pre><code>if out.op == "softmax": # how to check this? x = out.op.sources[0] # how to get this? loss = sparse_softmax_cross_entropy_with_logits(x, labels) else: loss = -tf.log(tf.gather_nd(out, labels)) </code></pre>
<p>TensorFlow generally doesn't merge nodes together in the way you're hoping. This is because other code (e.g. fetching outputs when running) may depend on intermediate nodes like the softmax, so removing them behind the user's back would be confusing.</p> <p>If you do want to do this optimization yourself as part of a higher-level framework, you can analyze the current graphdef, but there's no annotation in TF to tell you what the outputs are, since that can vary at runtime depending on how session.run is called.</p>
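<p>For the follow-up question, something along these lines should work with the (TF1-style) graph API, since a tensor records the op that produced it. This is only a sketch, not a guaranteed-complete check:</p> <pre><code># 'out' is a Tensor; out.op is the operation that produced it
if out.op.type == "Softmax":
    x = out.op.inputs[0]   # the logits that were fed into the softmax
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=x, labels=labels)
else:
    loss = -tf.log(tf.gather_nd(out, labels))
</code></pre>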
tensorflow
0
1,906,379
29,030,725
'str' object has no attribute 'decode'
<p>I'm trying to decode a hex string to binary values. I found the command below on the internet to get it done,</p> <pre><code>string_bin = string_1.decode('hex') </code></pre> <p>but I got an error saying</p> <pre><code>'str' object has no attribute 'decode' </code></pre> <p>I'm using python v3.4.1</p>
<p>You cannot decode string objects; they are <em>already</em> decoded. You'll have to use a different method.</p> <p>You can use the <a href="https://docs.python.org/3/library/codecs.html#codecs.decode"><code>codecs.decode()</code> function</a> to apply <code>hex</code> as a codec:</p> <pre><code>&gt;&gt;&gt; import codecs &gt;&gt;&gt; codecs.decode('ab', 'hex') b'\xab' </code></pre> <p>This applies a <a href="https://docs.python.org/3/library/codecs.html#binary-transforms"><em>Binary transform</em></a> codec; it is the equivalent of using the <a href="https://docs.python.org/3/library/base64.html#base64.b16decode"><code>base64.b16decode()</code> function</a>, with the input string converted to uppercase:</p> <pre><code>&gt;&gt;&gt; import base64 &gt;&gt;&gt; base64.b16decode('AB') b'\xab' </code></pre> <p>You can also use the <a href="https://docs.python.org/3/library/binascii.html#binascii.unhexlify"><code>binascii.unhexlify()</code> function</a> to 'decode' a sequence of hex digits to bytes:</p> <pre><code>&gt;&gt;&gt; import binascii &gt;&gt;&gt; binascii.unhexlify('ab') b'\xab' </code></pre> <p>Either way, you'll get a <code>bytes</code> object.</p>
python|string|python-3.x|binary|hex
13
1,906,380
28,932,257
Last hyperlink in webpage table using Python
<p>I am using Beautifulsoup4 to parse a webpage. Similar to how Bing works, if you enter a search term, it will return the first ten hits with the subsequent hits on following pages listed page 2, page 3 etc... The first page returned after the query does contain hyperlinks from page 2 until the very last page. What I am trying to establish is exactly what that very last page is (ie . Page 87) for example. </p> <p>Below is a sample of the HTML source code from the page: </p> <pre><code>&lt;tr&gt;&lt;td colspan=4 align=left class='uilt'&gt;����� ������� ��������: 3543.&lt;br&gt;��������: 1 &lt;a href="/main/search.php?str=&amp;tag=&amp;nopass=&amp;cat=25&amp;page=2"&gt;2&lt;/a&gt; &lt;a href="/main/search.php?str=&amp;tag=&amp;nopass=&amp;cat=25&amp;page=3"&gt;3&lt;/a&gt; &lt;a href="/main/search.php?str=&amp;tag=&amp;nopass=&amp;cat=25&amp;page=4"&gt;4&lt;/a&gt; &lt;a href="/main/search.php?str=&amp;tag=&amp;nopass=&amp;cat=25&amp;page=5"&gt;5&lt;/a&gt; &lt;a href="/main/search.php?str=&amp;tag=&amp;nopass=&amp;cat=25&amp;page=6"&gt;6&lt;/a&gt; &lt;a href="/main/search.php?str=&amp;tag=&amp;nopass=&amp;cat=25&amp;page=7"&gt;7&lt;/a&gt; &lt;a href="/main/search.php?str=&amp;tag=&amp;nopass=&amp;cat=25&amp;page=8"&gt;8&lt;/a&gt; &lt;a href="/main/search.php?str=&amp;tag=&amp;nopass=&amp;cat=25&amp;page=9"&gt;9&lt;/a&gt; &lt;a href="/main/search.php?str=&amp;tag=&amp;nopass=&amp;cat=25&amp;page=10"&gt;10&lt;/a&gt; &lt;br&gt;&lt;/td&gt;&lt;/tr&gt; </code></pre> <p>In the above example, how would I work out that the last link is page 10? There is further HTML after the above and so I can't simply slice X amount of positions from the end of the HTML code.</p> <p>Thanks </p>
<p>With raw Selenium you should be able to do something like this:</p> <pre><code>driver.find_elements_by_css_selector(".uilt a")[-1].text </code></pre> <p>This will find the last <code>&lt;a&gt;</code> tag that is a descendant of the element with class <code>uilt</code> and return its text. No need for BeautifulSoup.</p>
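<p>If you would rather stay with BeautifulSoup (as in the question's tags), the equivalent selector looks like this. A sketch, assuming <code>html</code> holds the page source you already fetched:</p> <pre><code>from bs4 import BeautifulSoup

soup = BeautifulSoup(html, "html.parser")
last_page = soup.select("td.uilt a")[-1].get_text()   # text of the last pagination link
print(last_page)   # e.g. '10'
</code></pre>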
python|python-2.7|selenium|beautifulsoup
2
1,906,381
52,428,088
Printing list of VMs in a Resource Group with Azure Python SDK
<p>I have a python script which simply lists the VMs in a resource group. It was working in the past but for some reason has stopped producing output. All other commands in my script work, it is just this one that is giving me bother. My code is as follows:</p> <pre><code>credentials = MSIAuthentication() subscription_client = SubscriptionClient(credentials) subscription = next(subscription_client.subscriptions.list()) subscription_id = subscription.subscription_id compute_client = ComputeManagementClient(credentials, subscription_id) resourceGroup = "myResourceGroup" for vm in compute_client.virtual_machines.list(resourceGroup): print(vm) </code></pre> <p>I have also tried appending an older API version but still nothing is printed. I can confirm there are VMs in this resource group.</p> <p>I have a similar command for listing VMs in VMSS groups, and it works fine.</p> <p>Does anyone know what could be the issue with this particular command?</p>
<p>Answering my own question.</p> <p>The problem stemmed from having incorrect/missing permissions in the custom role applied to the server. Ensure that your custom role has the 'Microsoft.Compute/virtualMachines/read' permissions.</p>
python|azure
2
1,906,382
51,758,697
Python2.7 json loaded text list back into an int list
<p>I load a list from a windows INI file via simplejson. The list is read in as a string and I need to convert it back into a proper list, so that <code>arr[0] = [30, 40, 80]</code> and <code>arr[1] = [90, 255, 255]</code>.</p> <p>config.ini:</p> <pre><code>Advanced Settings tlhsv = "[30, 40, 80], [90, 255, 255]" </code></pre> <p>Main.py</p> <pre><code>tlhsv = self.config.get('Advanced Settings', 'tlhsv') print(tlhsv) u'"[30, 40, 80], [90, 255, 255]"' </code></pre> <p>How on earth am I supposed to do this? Or even better, is there a way I can format the INI file so that it is automatically read in correctly by simplejson?</p> <p>I have tried to format the ini differently, but using [], () or a comma gives a ValueError without explaining anything. Reading it in as a string was all that worked.</p>
<p>You can use the <code>ast</code> module to do it:</p> <pre><code>&gt;&gt;&gt; import ast &gt;&gt;&gt; loaded_json = json.loads('{"tlhsv": "[30, 40, 80], [90, 255, 255]"}') &gt;&gt;&gt; li = ast.literal_eval(loaded_json['tlhsv']) &gt;&gt;&gt; li ([30, 40, 80], [90, 255, 255]) &gt;&gt;&gt; </code></pre> <p>Since the string contains more than one list, <code>literal_eval</code> returns them together as a tuple, so just loop over (or index) the tuple to access each list.</p>
python|json|python-2.7
0
1,906,383
59,856,614
Overfitting and data leakage in tensorflow/keras neural network
<p>Good morning, I'm new in machine learning and neural networks. I am trying to build a fully connected neural network to solve a regression problem. The dataset is composed by 18 features and 1 label, and all of these are physical quantities. </p> <p>You can find the code below. I upload the figure of the loss function evolution along the epochs (you can find it below). I am not sure if there is overfitting. Someone can explain me why there is or not overfitting?</p> <pre><code>import pandas as pd import numpy as np from sklearn.ensemble import RandomForestRegressor from sklearn.feature_selection import SelectFromModel from sklearn import preprocessing from sklearn.model_selection import train_test_split from matplotlib import pyplot as plt import keras import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout, BatchNormalization from tensorflow.keras.callbacks import EarlyStopping from keras import optimizers from sklearn.metrics import r2_score from keras import regularizers from keras import backend from tensorflow.keras import regularizers from keras.regularizers import l2 # ============================================================================= # Scelgo il test size # ============================================================================= test_size = 0.2 dataset = pd.read_csv('DataSet.csv', decimal=',', delimiter = ";") label = dataset.iloc[:,-1] features = dataset.drop(columns = ['Label']) y_max_pre_normalize = max(label) y_min_pre_normalize = min(label) def denormalize(y): final_value = y*(y_max_pre_normalize-y_min_pre_normalize)+y_min_pre_normalize return final_value # ============================================================================= # Split # ============================================================================= X_train1, X_test1, y_train1, y_test1 = train_test_split(features, label, test_size = test_size, shuffle = True) y_test2 = y_test1.to_frame() y_train2 = y_train1.to_frame() # ============================================================================= # Normalizzo # ============================================================================= scaler1 = preprocessing.MinMaxScaler() scaler2 = preprocessing.MinMaxScaler() X_train = scaler1.fit_transform(X_train1) X_test = scaler2.fit_transform(X_test1) scaler3 = preprocessing.MinMaxScaler() scaler4 = preprocessing.MinMaxScaler() y_train = scaler3.fit_transform(y_train2) y_test = scaler4.fit_transform(y_test2) # ============================================================================= # Creo la rete # ============================================================================= optimizer = tf.keras.optimizers.Adam(lr=0.001) model = Sequential() model.add(Dense(60, input_shape = (X_train.shape[1],), activation = 'relu',kernel_initializer='glorot_uniform')) model.add(Dropout(0.2)) model.add(Dense(60, activation = 'relu',kernel_initializer='glorot_uniform')) model.add(Dropout(0.2)) model.add(Dense(60, activation = 'relu',kernel_initializer='glorot_uniform')) model.add(Dense(1,activation = 'linear',kernel_initializer='glorot_uniform')) model.compile(loss = 'mse', optimizer = optimizer, metrics = ['mse']) history = model.fit(X_train, y_train, epochs = 100, validation_split = 0.1, shuffle=True, batch_size=250 ) history_dict = history.history loss_values = history_dict['loss'] val_loss_values = history_dict['val_loss'] y_train_pred = model.predict(X_train) y_test_pred = model.predict(X_test) y_train_pred = denormalize(y_train_pred) 
y_test_pred = denormalize(y_test_pred) plt.figure() plt.plot((y_test1),(y_test_pred),'.', color='darkviolet', alpha=1, marker='o', markersize = 2, markeredgecolor = 'black', markeredgewidth = 0.1) plt.plot((np.array((-0.1,7))),(np.array((-0.1,7))),'-', color='magenta') plt.xlabel('True') plt.ylabel('Predicted') plt.title('Test') plt.figure() plt.plot((y_train1),(y_train_pred),'.', color='darkviolet', alpha=1, marker='o', markersize = 2, markeredgecolor = 'black', markeredgewidth = 0.1) plt.plot((np.array((-0.1,7))),(np.array((-0.1,7))),'-', color='magenta') plt.xlabel('True') plt.ylabel('Predicted') plt.title('Train') plt.figure() plt.plot(loss_values,'b',label = 'training loss') plt.plot(val_loss_values,'r',label = 'val training loss') plt.xlabel('Epochs') plt.ylabel('Loss Function') plt.legend() print("\n\nThe R2 score on the test set is:\t{:0.3f}".format(r2_score(y_test_pred, y_test1))) print("The R2 score on the train set is:\t{:0.3f}".format(r2_score(y_train_pred, y_train1))) from sklearn import metrics # Measure MSE error. score = metrics.mean_squared_error(y_test_pred,y_test1) print("\n\nFinal score test (MSE): %0.4f" %(score)) score1 = metrics.mean_squared_error(y_train_pred,y_train1) print("Final score train (MSE): %0.4f" %(score1)) score2 = np.sqrt(metrics.mean_squared_error(y_test_pred,y_test1)) print(f"Final score test (RMSE): %0.4f" %(score2)) score3 = np.sqrt(metrics.mean_squared_error(y_train_pred,y_train1)) print(f"Final score train (RMSE): %0.4f" %(score3)) </code></pre> <p><a href="https://i.stack.imgur.com/1LgL9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1LgL9.png" alt="enter image description here"></a></p> <p><strong>EDIT:</strong></p> <p>I tried alse to do feature importances and to raise n_epochs, these are the results:</p> <p>Feature Importance:</p> <p><a href="https://i.stack.imgur.com/RoD1w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RoD1w.png" alt="enter image description here"></a></p> <p>No Feature Importace:</p> <p><a href="https://i.stack.imgur.com/oBGBl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oBGBl.png" alt="enter image description here"></a></p>
<p>Looks like you don't have overfitting! Your training and validation curves are descending together and converging. The clearest sign of overfitting would be a deviation between these two curves, something like this: <a href="https://i.stack.imgur.com/SQAEI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SQAEI.png" alt="overfitting example"></a></p> <p>Since your two curves are descending and are not diverging, it indicates your NN training is healthy.</p> <p><strong>HOWEVER</strong>! Your validation curve is suspiciously below the training curve. This hints at possible data leakage (train and test data have been mixed somehow). More info in a nice and short <a href="https://ibarrond.github.io/data-leakage" rel="nofollow noreferrer">blog post</a>. In general, you should <strong>split the data before any other preprocessing (normalizing, augmentation, shuffling, etc...)</strong>.</p> <p>Other causes for this could be some type of regularization (dropout, BN, etc.) that is active while computing the training loss and is deactivated when computing the validation/test loss.</p>
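<p>As a minimal sketch of the leak-free ordering (variable names follow the code in the question): split first, fit the scalers on the training portion only, then reuse those fitted scalers on the test portion:</p> <pre><code>X_train1, X_test1, y_train1, y_test1 = train_test_split(features, label, test_size=test_size, shuffle=True)

scaler_x = preprocessing.MinMaxScaler()
scaler_y = preprocessing.MinMaxScaler()

X_train = scaler_x.fit_transform(X_train1)             # fit on train only
X_test = scaler_x.transform(X_test1)                   # reuse the train statistics
y_train = scaler_y.fit_transform(y_train1.to_frame())  # same idea for the target
y_test = scaler_y.transform(y_test1.to_frame())
</code></pre>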
python|neural-network|anaconda|spyder|figure
7
1,906,384
18,851,167
Django and multi-image upload form
<p>I have this form that includes around 25 inputs on the same page. It includes a main image input and the rest are some text inputs or drop down menus.</p> <p>The problem is that I also need the user to upload multiple images. I was thinking of doing it on the next page itself.</p> <p>I have 2 questions:</p> <ul> <li>What is the best way of adding this multiple image upload form to the current form? Not related to Django, more related to the structure of the form.</li> <li>What is the best way of adding multiple image/file uploads so they work correctly with Django? Any libraries or modules for such a job, or any manual way to do it?</li> </ul>
<p>With <strong>formsets</strong> you allow the user to create several images at once. To create a formset out of an ImageForm you would do:</p> <pre><code>&gt;&gt;&gt; from django.forms.formsets import formset_factory &gt;&gt;&gt; ImageFormSet = formset_factory(ImageForm) </code></pre> <p><a href="https://docs.djangoproject.com/en/1.5/topics/forms/formsets/" rel="nofollow">https://docs.djangoproject.com/en/1.5/topics/forms/formsets/</a></p> <p>And Django comes with an optional “form wizard” application that splits forms across multiple Web pages. It maintains state in one of the backends so that the full server-side processing can be delayed until the submission of the final form.</p> <p><a href="https://docs.djangoproject.com/en/1.5/ref/contrib/formtools/form-wizard/" rel="nofollow">https://docs.djangoproject.com/en/1.5/ref/contrib/formtools/form-wizard/</a></p>
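<p>A minimal sketch of how this could look in practice; the <code>ImageForm</code>, the template name and the <code>handle_uploaded_image</code> helper below are assumptions, not part of your existing code:</p> <pre><code>from django import forms
from django.forms.formsets import formset_factory
from django.shortcuts import render

class ImageForm(forms.Form):
    image = forms.ImageField()

ImageFormSet = formset_factory(ImageForm, extra=5)

def upload_images(request):
    if request.method == 'POST':
        formset = ImageFormSet(request.POST, request.FILES)
        if formset.is_valid():
            for data in formset.cleaned_data:
                if data:  # skip the empty extra forms
                    handle_uploaded_image(data['image'])  # your own save logic (hypothetical helper)
    else:
        formset = ImageFormSet()
    return render(request, 'upload_images.html', {'formset': formset})
</code></pre> <p>Each submitted sub-form shows up as one dict in <code>formset.cleaned_data</code>, which is why the loop checks for empty entries before saving.</p>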
python|django
1
1,906,385
19,047,159
Get QUERY_STRING using python
<p>Struggling to see why this isn't working:</p> <pre><code>import MySQLdb import cgi, cgitb import os from wsgiref.simple_server import make_server from cgi import parse_qs, escape def index(req): d = parse_qs(os.environ['QUERY_STRING']) dtbox = d.get('dt', [''])[0] tmbox = d.get('tm', [''])[0] </code></pre> <p>The script returns a KeyError: QUERY_STRING</p> <p>I can see the URL which is:</p> <pre><code>http://myserver/currentcost.py?dt=2013-09-10&amp;tm=00 </code></pre> <p>I'm simply lost!</p>
<p>Try using:</p> <pre><code>d = parse_qs(os.environ.get('QUERY_STRING', '')) </code></pre> <p>This worked for me because I had the same problem. After researching for a while I came across this solution and everything seemed to work fine after that.</p>
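<p>In context, the change is just that one line; the rest of your function stays as it is:</p> <pre><code>def index(req):
    d = parse_qs(os.environ.get('QUERY_STRING', ''))  # empty default instead of a KeyError
    dtbox = d.get('dt', [''])[0]
    tmbox = d.get('tm', [''])[0]
</code></pre> <p>Note that the empty default only avoids the exception; if <code>dtbox</code> and <code>tmbox</code> still come back empty, it usually means the environment your handler runs under does not populate <code>QUERY_STRING</code> at all, so it is worth checking how the script is being served.</p>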
python
0
1,906,386
19,284,550
Using csv.reader and csv.writer to write to the same file instead of different files?
<p>How can I convert the following code to write to the same file instead of having to write to a new file?</p> <pre><code>import csv import itertools from operator import itemgetter def filter_data(data): for classname, group in itertools.groupby(data, itemgetter(2)): filtered_group = [line for line in group] new_count = len(filtered_group) for line in filtered_group: if line[5] == "Count": line[5] = "Counter" else: line[5] = new_count yield line with open('main.csv', 'rb') as f_in, open('main1.csv', 'wb') as f_out: reader = csv.reader(f_in) writer = csv.writer(f_out) writer.writerows(filter_data(reader)) </code></pre> <p>My attemot if stringio what isn't working...</p> <pre><code>import csv import itertools from operator import itemgetter import StringIO def filter_data(data): for classname, group in itertools.groupby(data, itemgetter(2)): filtered_group = [line for line in group] new_count = len(filtered_group) for line in filtered_group: if line[5] == "Count": line[5] = "Counter" else: line[5] = new_count yield line output = StringIO.StringIO() with open('../main.csv', 'rb') as f_in: reader = csv.reader(f_in) output.writelines(filter_data(reader)) contents = output.getvalue() output.close() with open('../main_test.csv', 'wb') as f_out: f_out.writelines(contents) </code></pre>
<p>Try the following:</p> <h3>code</h3> <pre><code>import csv import itertools from operator import itemgetter import StringIO def filter_data(data): for classname, group in itertools.groupby(data, itemgetter(2)): filtered_group = [line for line in group] new_count = len(filtered_group) for line in filtered_group: if line[5] == &quot;Count&quot;: line[5] = &quot;Counter&quot; else: line[5] = new_count yield ','.join(map(str, line)) + '\n' output = StringIO.StringIO() with open('../main.csv', 'rb') as f_in: reader = csv.reader(f_in) output.writelines(filter_data(reader)) contents = output.getvalue() output.close() with open('../main.csv', 'wb') as f_out: f_out.writelines(contents) </code></pre> <p>This should work :)</p>
python|csv|python-2.7
1
1,906,387
67,308,845
shutil.copy() function got error message FileNotFoundError
<p>I'm studying Python with the book Automate the Boring Stuff with Python, but I ran into a problem executing this basic code.</p> <pre><code>&gt;&gt;&gt; import shutil, os &gt;&gt;&gt; from pathlib import Path &gt;&gt;&gt; p = Path.home() &gt;&gt;&gt; shutil.copy(p / 'spam.txt', p / 'some_folder') </code></pre> <p>This is the code I'm trying to run. As soon as I enter it, I get the error message &quot;FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\name\spam.txt' &quot;. I tried making spam.txt and a spam folder, but it doesn't work. How do I solve this problem?</p>
<p>I tried the code and it's working perfectly. You have to <code>right click</code> in your <code>home</code> folder -&gt; <code>new</code> -&gt; <code>text document</code> and name it <code>spam</code>. A file named <code>some_folder</code> will then be created containing a copy of the content of <code>spam.txt</code>.</p>
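<p>If you prefer to create everything from Python instead of the file explorer, here is a small sketch (the text written to the file is just a placeholder):</p> <pre><code>import shutil
from pathlib import Path

p = Path.home()
(p / 'spam.txt').write_text('hello')       # make sure the source file exists
(p / 'some_folder').mkdir(exist_ok=True)   # create a real folder if that is what you want
shutil.copy(p / 'spam.txt', p / 'some_folder')  # now copies spam.txt into the folder
</code></pre> <p>Without the <code>mkdir</code> call, <code>shutil.copy</code> simply creates a <em>file</em> named <code>some_folder</code> containing the copied text, which is easy to mistake for a failure.</p>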
python
-1
1,906,388
67,230,123
Most efficient way to multiply a small matrix with a scalar in numpy
<p>I have a program whose main performance bottleneck involves multiplying matrices which have one dimension of size 1 and another large dimension, e.g. 1000:</p> <pre><code>large_dimension = 1000 a = np.random.random((1,)) b = np.random.random((1, large_dimension)) c = np.matmul(a, b) </code></pre> <p>In other words, multiplying matrix <code>b</code> with the scalar <code>a[0]</code>.</p> <p>I am looking for the most efficient way to compute this, since this operation is repeated millions of times.</p> <p>I tested for performance of the two trivial ways to do this, and they are practically equivalent:</p> <pre><code>%timeit np.matmul(a, b) &gt;&gt; 1.55 µs ± 45.8 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) %timeit a[0] * b &gt;&gt; 1.77 µs ± 34.6 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) </code></pre> <p>Is there a more efficient way to compute this?</p> <ul> <li>Note: I cannot move these computations to a GPU since the program is using multiprocessing and many such computations are done in parallel.</li> </ul>
<p>In this case, it is probably faster to work with an element-wise multiplication, but the time you see is mostly the <strong>overhead of Numpy</strong> (calling C functions from the CPython interpreter, wrapping/unwrapping types, making checks, doing the operation, array allocations, etc.).</p> <blockquote> <p>since this operation is repeated millions of times</p> </blockquote> <p>This is the problem. Indeed, the CPython <strong>interpreter</strong> is very bad at doing things with a <strong>low latency</strong>. This is especially true when you work on Numpy types, as calling a C function and performing checks for a trivial operation is much slower than doing it in pure Python, which is itself much slower than compiled native C/C++ code. If you really need this, and you cannot <strong>vectorize your code</strong> using Numpy (because you have a loop iterating over timesteps), then you should move away from CPython, or at least not use pure Python code. Instead, you can use <strong>Numba</strong> or <strong>Cython</strong> to mitigate the cost of doing C calls, wrapping types, etc. If this is not enough, then you will need to write native C/C++ code (or any similar language), unless you find a <strong>dedicated Python package</strong> doing exactly that for you. Note that Numba is fast only when it works on native types or Numpy arrays (containing native types). If you work with a lot of pure Python types and you do not want to rewrite your code, then you can try the <strong>PyPy</strong> JIT.</p> <hr /> <p>Here is a simple example in Numba avoiding the (costly) creation/allocation of a new array (as well as many Numpy internal checks and calls) that is specifically written to solve your case:</p> <pre class="lang-py prettyprint-override"><code>import numba as nb import numpy as np @nb.njit('void(float64[::1],float64[:,::1],float64[:,::1])') def fastMul(a, b, out): val = a[0] for i in range(b.shape[1]): out[0,i] = b[0,i] * val res = np.empty(b.shape, dtype=b.dtype) %timeit fastMul(a, b, res) # 397 ns ± 0.587 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) </code></pre> <p>At the time of writing, this solution is faster than all the others. As most of the time is spent in calling Numba and performing some internal checks, using Numba directly for the function containing the iteration loop should result in an even faster code.</p>
python|performance|numpy|matrix-multiplication|scalar
3
1,906,389
36,387,331
How do combine a random amount of lists in python
<p>I'm working on a program that reads through a <a href="https://en.wikipedia.org/wiki/FASTQ_format" rel="nofollow">FASTQ</a> file and gives the amount of N's per sequence in this file. I managed to get the number of N's per line and I put these in a list. The problem is that I need all the numbers in one list to sum up the total amount of N's in the file, but they each get printed in their own list.</p> <pre><code>C:\Users\Zokids\Desktop&gt;N_counting.py test.fastq [4] 4 [3] 3 [5] 5 </code></pre> <p>This is my output: the list and the total amount in the list. I've seen ways to manually combine lists, but one can have hundreds of sequences so that's a no go.</p> <pre><code>def Count_N(line): ''' This function takes a line and counts the amount of N's in the line ''' List = [] Count = line.count("N") # Count the amount of N's that are in the line returned by import_fastq_file List.append(int(Count)) Total = sum(List) print(List) print(Total) </code></pre> <p>This is what I have as code; another function selects the lines.</p> <p>I hope someone can help me with this. Thank you in advance.</p>
<p>The <code>List</code> you're defining in your function never gets more than one item, so it's not very useful. Instead, you should probably <code>return</code> the count from the function, and let the calling code (which is presumably running in some kind of loop) <code>append</code> the value to its own list. Of course, since there's not much to the function, you might just move its contents out to the loop too!</p> <p>For example:</p> <pre><code>list_of_counts = [] for line in my_file: count = line.count("N") list_of_counts.append(count) total = sum(list_of_counts) </code></pre>
list|python-3.x|subtotal|fastq
1
1,906,390
36,581,425
How to process short keypress in the same time long keypress is performed?
<p>I try to write simple game in Python and PyQt4. It's simple platform game and I want to process "jump" keypress during "move" keypress.</p> <p>It's like I'm holding <code>right arrow</code> key and in the same time I'm pressing <code>z</code> (or simply any key, may be shift, ctrl, cmd, alt) and I want to continue move to the right and in the same time perform jump.</p> <p>When I use <code>keyPressEvent</code> it works fine with long "move" keypress but every "jump" keypress breaks the move and I need to press arrow to continue.</p> <p>To better understand what I want to accomplish lets look at this: (> is right arrow for move, z is for jump)</p> <p><code> key: &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; . z . . o player: oooooooo </code></p> <p>As you can see I'm holding <code>&gt;</code> and player moves. In the same time when I press <code>z</code> player jumps and stops even when <code>&gt;</code> key is still pressed.</p> <p>Is it possible to do this in pyqt? Maybe I need some external library for this? Any help will be appreciated!</p> <hr> <p><strong>Solved!</strong></p> <p>Thanks to the answer from @Brendan Abel I've done everything I've needed :) Code looks similar to this (I've simplified for clarity):</p> <pre class="lang-py prettyprint-override"><code>class Test(QtGui.QMainWindow): pressed_keys = { QtCore.Qt.Key_Left: False, QtCore.Qt.Key_Right: False, QtCore.Qt.Key_Z: False, } def __init__(self): self.timer = QtCore.QTimer(self) self.timer.timeout.connect(self.key_action) self.timer.start(100) # definitions hidden for simplicity self.key_actions = { QtCore.Qt.Key_Left: self.player_move_backward, QtCore.Qt.Key_Right: self.player_move_forward, QtCore.Qt.Key_Z: self.player_jump, } def keyPressEvent(self, e): key = e.key() self.pressed_keys[key] = True def keyReleaseEvent(self, e): key = e.key() self.pressed_keys[key] = False def key_action(self): for key, is_pressed in self.pressed_keys.items(): if is_pressed: action = self.key_actions[key] action() self.update() </code></pre> <p>My game acts now like this: <a href="https://i.stack.imgur.com/e8vGR.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e8vGR.gif" alt="jump and move"></a></p>
<p>You will probably need to do some type of regular polling of the keyboard state to determine what controls to process. Unfortunately, Qt doesn't have a direct API for polling keyboard state. You'll have to keep track of which keys are down by maintaining a global map of keys that have had a <em>keypress</em> event but no <em>keyrelease</em> event. Then poll that map at a regular interval to get the list of keys that are pressed.</p> <pre><code>pressed_keys = set() def keyPressEvent(self, event): pressed_keys.add(event.key()) def keyReleaseEvent(self, event): pressed_keys.remove(event.key()) </code></pre> <p>When adding keys, you can choose whether you want to add <em>modifier keys</em> (ie. shift, ctrl, alt) as well (using <code>event.modifiers()</code>), and whether the should apply to only the key they were first pressed with, or whether they should apply to all pressed keys.</p> <p>You can use a <code>QTimer</code> to do the polling. Basically, instead of <code>keyPressEvent</code> triggering your game to update, you're going to update based off of <code>QTimer.timeout</code> instead.</p> <pre><code>timer = QTimer() timer.timeout.connect(self.update_game) timer.start(100) # Update rate @QtCore.pyqtSlot() def update_game(self): if Qt.Key_Space in pressed_keys: ... if Qt.Key_Up in pressed_keys: ... </code></pre> <p>You could also choose not to use the <code>QTimer</code> to do the updates and keep using the <code>keypress</code> and <code>keyrelease</code> events to trigger the updates, but you'll still need to keep a persistent state of pressed keys.</p>
python|python-3.x|pyqt|pyqt4
2
1,906,391
13,787,549
memory consumption and lifetime of temporaries
<p>I've a python code where the memory consumption steadily grows with time. While there are several objects which can legitimately grow quite large, I'm trying to understand whether the memory footprint I'm observing is due to these objects, or is it just me littering the memory with temporaries which don't get properly disposed of --- Being a recent convert from a world of manual memory management, I guess I just don't exactly understand some very basic aspects of how the python runtime deals with temporary objects. </p> <p>Consider a code with roughly this general structure (am omitting irrelevant details): </p> <pre><code>def tweak_list(lst): new_lst = copy.deepcopy(lst) if numpy.random.rand() &gt; 0.5: new_lst[0] += 1 # in real code, the operation is a little more sensible :-) return new_lst else: return lst lst = [1, 2, 3] cache = {} # main loop for step in xrange(some_large_number): lst = tweak_list(lst) # &lt;&lt;-----(1) # do something with lst here, cut out for clarity cache[tuple(lst)] = 42 # &lt;&lt;-----(2) if step%chunk_size == 0: # dump the cache dict to a DB, free the memory (?) cache = {} # &lt;&lt;-----(3) </code></pre> <p>Questions:</p> <ol> <li>What is the lifetime of a <code>new_list</code> created in a <code>tweak_list</code>? Will it be destroyed on exit, or will it be garbage collected (at which point?). Will repeated calls to <code>tweak_list</code> generate a gazillion of small lists lingering around for a long time? </li> <li>Is there a temporary creation when converting a <code>list</code> to a <code>tuple</code> to be used as a <code>dict</code> key?</li> <li>Will setting a <code>dict</code> to an empty one release the memory?</li> <li>Or, am I approaching the issue at hand from a completely wrong perspective? </li> </ol>
<ol> <li><p><code>new_lst</code> is cleaned up when the function exits, if it is not returned. Its reference count drops to 0, and it can be garbage collected. On current CPython implementations that happens immediately.</p> <p>If it <em>is</em> returned, the value referenced by <code>new_lst</code> replaces <code>lst</code>; the list referred to by <code>lst</code> sees its reference count drop by 1, but the value originally referred to by <code>new_lst</code> is still being referred to by another variable.</p> </li> <li><p>The <code>tuple()</code> key is a value stored in the <code>dict</code>, so that's not a temporary. No extra objects are created other than that tuple.</p> </li> <li><p>Replacing the old <code>cache</code> dict with a new one will reduce the reference count by one. If <code>cache</code> was the only reference to the dict it'll be garbage collected. This then causes the reference count for all contained tuple keys to drop by one. If nothing else references those, they will be garbage collected.</p> </li> <li><p>Note that when Python frees memory, that does not necessarily mean the operating system reclaims it immediately. Most operating systems will only reclaim the memory when it is needed for something else, instead presuming the program might need some or all of that memory again soon.</p> </li> </ol>
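<p>A quick, CPython-specific way to see point 1 in action (the exact counts are an implementation detail and may differ between versions):</p> <pre><code>import sys

def tweak_list(lst):
    new_lst = list(lst)   # stand-in for the deepcopy in the real code
    new_lst[0] += 1
    return new_lst

lst = [1, 2, 3]
lst = tweak_list(lst)        # the old [1, 2, 3] loses its last reference here and is freed
print(sys.getrefcount(lst))  # 2: the name `lst` plus the temporary reference from the call
</code></pre>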
python
4
1,906,392
53,616,368
django :signals is not working pre_save()
<p>I have user app.</p> <p>In signals.py I have</p> <pre><code>from django.db.models.signals import pre_save from user.models import User from django.dispatch import receiver import random import string @receiver(pre_save,sender=User) def create_hash_for_user(sender,instance,**kwargs): allowed_chars = ''.join((string.ascii_letters, string.digits)) unique_id = ''.join(random.choice(allowed_chars) for _ in range(32)) print("Request finished!") instance.user_hash = unique_id instance.save() </code></pre> <p>In apps.py</p> <pre><code>from django.apps import AppConfig class UserConfig(AppConfig): name = 'user' def ready(self): import user.signals </code></pre> <p>In models.py I have extended the abstractbaseuser</p> <pre><code>from django.db import models from django.shortcuts import render from django.core.exceptions import ValidationError from django.contrib.auth.models import ( AbstractBaseUser, BaseUserManager ) from .utils import file_size class MyUserManager(BaseUserManager): def create_user(self, username,email,password=None): """ Creates and saves a User with the given email, date of birth and password. """ if not email: raise ValueError('Users must have an email address') user = self.model( email=self.normalize_email(email), username=username ) user.set_password(password) user.save(using=self._db) return user def create_superuser(self,username,email,password): """ Creates and saves a superuser with the given email, date of birth and password. """ user = self.create_user( username=username, email=email, password=password, ) user.is_admin = True user.save(using=self._db) return user class User(AbstractBaseUser): username=models.CharField(max_length=200,blank=True,null=True) first_name = models.CharField(max_length=100,blank=True,null=True) last_name = models.CharField(max_length=100, blank=True, null=True) email = models.EmailField(unique=True) image = models.ImageField(upload_to='images',blank=True,validators=[file_size]) date_joined = models.DateTimeField(auto_now=False,auto_now_add=True) is_active = models.BooleanField(default=True) is_admin = models.BooleanField(default=False) user_hash = models.CharField(max_length=512,blank=True,null=True) USERNAME_FIELD='email' objects = MyUserManager() def __str__(self): return self.email def has_perm(self, perm, obj=None): return True def has_module_perms(self, app_label): return True @property def is_staff(self): return self.is_admin </code></pre> <p>But signal function is not called, and Request finished! is not printed and user_hash is not created.</p>
<p>In the <code>__init__.py</code> of your user app, you must point to the app config you created in <code>apps.py</code>; otherwise the <code>ready()</code> method that imports your signals never runs:</p> <pre><code># __init__.py default_app_config = 'user.apps.UserConfig' </code></pre>
python|django
5
1,906,393
54,393,568
How can I delete a letter from a string in a list?
<p>I have to solve the following problem: -Write a program that takes a text as input and prints the same text without the letter 'r' in it. The teacher said that we have to use a list to put all the words in it and then remove the letter "r" from them. Here is how I started:</p> <pre><code> text = input("Enter some text: ") l=text.split(" ") for i in range(len(l)): # I want to use a for loop to go through the elements of the list </code></pre> <p>I don't know what to do next or what method I should use. I thought maybe the remove() method could be useful, but I don't know how to use it on each item of the list.</p>
<p>Welcome to SO, </p> <p>Here is my version. </p> <pre><code>statement = input("Enter some text: ") listOfWords = statement.split(" ") for i, word in enumerate(listOfWords): if("r" in word.lower()): print("found an instance of letter r in the word ({0})".format(word)) listOfWords[i]=word.replace("r", "") sentence = " ".join(listOfWords) print(sentence) </code></pre> <p>First - it grabs all the words from the input text as a list. Then it iterates over all the words and, if there is an "r", it removes it from the word and updates the list as well. At the very end it generates the sentence back from the list.</p> <p>Please note - this is ultra verbose code. It does not use Python features like list comprehensions, but it is much easier to understand. I purposefully added debug statements.</p> <p>Please let me know if this works for you. </p> <p>Thanks </p>
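<p>For reference, once the verbose version makes sense, the same idea fits in one line; like the original task, this only removes the lowercase "r":</p> <pre><code>sentence = " ".join(word.replace("r", "") for word in statement.split(" "))
print(sentence)
</code></pre>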
python
2
1,906,394
71,100,550
How to print values in dataframes column by their indexes?
<p>I have a dataframe:</p> <pre><code>id value 742 aa 1711 bb 1731 qq 1799 ff 2741 pp </code></pre> <p>id is the index column. I want to print only those values from column &quot;value&quot; which are in this list: [742, 1731, 1799]. So the output must be:</p> <pre><code>aa qq ff </code></pre> <p>How to do that?</p> <p>I tried this:</p> <pre><code>for i in [742, 1731, 1799]: print(df[df.index == i][&quot;value&quot;]) </code></pre> <p>but the output is:</p> <pre><code>value 742 aa Name: value, dtype: object value 1731 qq Name: value, dtype: object value 1799 ff Name: value, dtype: object </code></pre>
<p>Use <code>.set_index</code> to declare <code>'id'</code> as the index, then <code>.loc[...]</code> to specify the list of indices you're interested in.</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'id': [8, 13, 21, 34, 55, 89], 'val': 'for sale baby shoes never worn'.split()}).set_index('id') print(df) # val # id # 8 for # 13 sale # 21 baby # 34 shoes # 55 never # 89 worn print(df.loc[[13,89]]) # val # id # 13 sale # 89 worn </code></pre>
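<p>If you only want the values themselves, without the index or the column wrapper, select the column at the same time:</p> <pre><code>print(df.loc[[13, 89], 'val'].tolist())  # ['sale', 'worn']
</code></pre> <p>With your own data that would be <code>df.loc[[742, 1731, 1799], 'value'].tolist()</code>, giving <code>['aa', 'qq', 'ff']</code>.</p>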
python|python-3.x|dataframe|indexing
0
1,906,395
9,642,000
python scikits learn - SVM options
<p>Just curious about two options in the scikit-learn SVM class. What do <code>scale_C</code> and <code>shrinking</code> do? There wasn't much in the documentation. <code>scale_C</code> seems to be able to scale the C parameter appropriately for the training data.</p> <p>Thanks</p>
<p><code>scale_C=True</code> (deprecated in the dev version and scheduled for removal in 0.12) causes the regularization parameter <code>C</code> to be divided by the number of samples before it is handed to the underlying LibSVM implementation.</p> <p><code>shrinking</code> enables or disables the "shrinking heuristic", described by <a href="http://www.joachims.org/publications/joachims_99a.ps.gz" rel="nofollow">Joachims 1999</a>, that should speed up SVM training.</p>
python|machine-learning|svm|scikits|scikit-learn
2
1,906,396
55,258,924
Add HTML tags to text using Python
<p>I have a text in file "File1" that contains the below text:</p> <pre><code>-Accounting -HR Some text -IT --Networks --Storage --DBA </code></pre> <p>I need a piece of code that will read File1 line by line and replace "-" and "--" with appropriate HTML tags and save the end result shown below in a text file File2</p> <pre><code>&lt;ul&gt; &lt;li&gt;Accounting&lt;/li&gt; &lt;li&gt;HR&lt;/li&gt; &lt;/ul&gt; Some text &lt;ul&gt;&lt;li&gt;IT &lt;ul&gt; &lt;li&gt;Networks&lt;/li&gt; &lt;li&gt;Storage&lt;/li&gt; &lt;li&gt;DBA&lt;/li&gt; &lt;/ul&gt; &lt;/li&gt;&lt;/ul&gt; </code></pre> <p>So far I tried the code below. </p> <p>I set two booleans that are used to check if the current line contains "-" or "--" to False initially. If there is a "-" or "--" in the current line then code adds appropriate tags in the beginning of the line, changes booleans to True and goes to the next line. </p> <p>Now booleans are used to see if there was "-" or "--" in the previous line, if there were dashes it will add appropriate tags to the beginning of the line which should be in the previous line but we're already in the next line so. The other way would be to check if the next line starts with "-" or "--" but I am not sure how to. When I use next() the line is skipped. Would reading from two files at the same time with one being one line ahead and checking what it has in that next line be a better solution? </p> <pre><code> single_dash_prev_line = False double_dash_prev_line = False for line in File1: current_line = line if line[0] == "-": if line[1] != "-": if single_dash_prev_line == False: new_line = "&lt;ul&gt;&lt;li&gt;" + current_line[1:] File2.write(new_line) single_dash_prev_line = True elif single_dash_prev_line == True: new_line = "&lt;/li&gt;&lt;li&gt;" + current_line[1:] File2.write(new_line) single_dash_prev_line = True elif line[1] == "-": if single_dash_prev_line == True: new_line = "&lt;ul&gt;&lt;li&gt;" + line[2:] print(new_line) File2.write(new_line) double_dash_prev_line = True elif double_dash_prev_line == True: new_line = "&lt;/li&gt;&lt;li&gt;" + line[2:] File2.write(new_line) double_dash_prev_line = True elif single_dash_prev_line == True: new_line = "&lt;/li&gt;&lt;/ul&gt;" + current_line[1:] File2.write(new_line) single_dash_prev_line = False elif double_dash_prev_line == True: new_line = "&lt;/li&gt;&lt;/ul&gt;" + current_line[1:] File2.write(new_line) single_dash_prev_line = False else: single_dash_prev_line = False double_dash_prev_line = False File2.write(current_line) </code></pre>
<p>The code below did what I needed.</p> <pre><code>with open("finalfile.txt", 'w', encoding='utf-8') as File2, open("test.txt", "r", encoding='utf-8') as File1: previous_line = "" new_line = "" double_dash_prev_line = False single_dash_prev_line = False for line in File1: current_line = line if line[0] == "-": if line[1] != "-": if single_dash_prev_line == False and double_dash_prev_line == False: new_line = "&lt;ul&gt;&lt;li&gt; " + current_line[1:] File2.write(new_line) single_dash_prev_line = True double_dash_prev_line = False elif single_dash_prev_line == True: new_line = "&lt;/li&gt;&lt;li&gt; " + current_line[1:] File2.write(new_line) single_dash_prev_line = True double_dash_prev_line = False elif double_dash_prev_line == True: new_line = "&lt;/ul&gt;&lt;/li&gt;&lt;/ul&gt;&lt;ul&gt;&lt;li&gt; " + current_line[1:] File2.write(new_line) single_dash_prev_line = True double_dash_prev_line = False elif line[1] == "-": if single_dash_prev_line == True: new_line = "&lt;ul&gt;&lt;li&gt; " + line[2:] File2.write(new_line) double_dash_prev_line = True single_dash_prev_line = False elif double_dash_prev_line == True: new_line = "&lt;/li&gt;&lt;li&gt; " + line[2:] File2.write(new_line) double_dash_prev_line = True single_dash_prev_line = False elif single_dash_prev_line == True: new_line = "&lt;/li&gt;&lt;/ul&gt; " + current_line[1:] File2.write(new_line) single_dash_prev_line = False double_dash_prev_line = False elif double_dash_prev_line == True: new_line = "&lt;/li&gt;&lt;/ul&gt;&lt;/ul&gt; " + current_line[1:] File2.write(new_line) double_dash_prev_line = False single_dash_prev_line = False else: single_dash_prev_line = False double_dash_prev_line = False File2.write(current_line) </code></pre>
python|text
0
1,906,397
52,463,788
Shuffle and split 2 numpy arrays so as to maintain their ordering with respect to each other
<p>I have 2 numpy arrays X and Y, with shape X: [4750, 224, 224, 3] and Y: [4750, 1].</p> <p>X is the training dataset and Y is the correct output label for each entry.</p> <p>I want to split the data into train and test so as to validate my machine learning model. Therefore, I want to split them randomly while keeping the two arrays aligned, i.e. every row of X still has its corresponding label in Y after the split.</p> <p>How can I achieve the above objective?</p>
<p>This is how I would do it</p> <pre><code>def split(x, y, train_ratio=0.7): x_size = x.shape[0] train_size = int(x_size * train_ratio) test_size = x_size - train_size train_indices = np.random.choice(x_size, size=train_size, replace=False) mask = np.zeros(x_size, dtype=bool) mask[train_indices] = True x_train, y_train = x[mask], y[mask] x_test, y_test = x[~mask], y[~mask] return (x_train, y_train), (x_test, y_test) </code></pre> <p>I simply choose the required number of indices I need (randomly) for my train set, remaining will be for the test set.</p> <p>Then use a mask to select the train and test samples.</p>
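<p>Usage would then look something like this (the exact train ratio is up to you; the shapes follow the description in the question):</p> <pre><code>(x_train, y_train), (x_test, y_test) = split(X, Y, train_ratio=0.8)
print(x_train.shape, y_train.shape)  # (3800, 224, 224, 3) (3800, 1)
print(x_test.shape, y_test.shape)    # (950, 224, 224, 3) (950, 1)
</code></pre>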
python|numpy
2
1,906,398
52,896,618
AttributeError: 'SelectorList' object has no attribute 'replace'
<p>I'm trying to run a Scrapy spider and dump it all into a json file. Here's my code:</p> <pre><code>import scrapy import re class MissleItem(scrapy.Item): missle_name = scrapy.Field() missle_type = scrapy.Field() missle_origin = scrapy.Field() missle_range = scrapy.Field() missle_comments = scrapy.Field() class missleSpider(scrapy.Spider): name = 'missle_list' allowed_domains = ['en.wikipedia.org'] start_urls = ['https://en.wikipedia.org/wiki/...'] def parse(self, response): table = response.xpath('///div/table[2]/tbody') rows = table.xpath('//tr') row = rows[2] row.xpath('td//text()')[0].extract() for row in response.xpath('// \ [@class="wikitable"]//tbody//tr'): name = { 'Missle' : row.xpath('td[1]//text()').extract_first(), 'Type': row.xpath('td[2]//text()').extract_first(), 'Origin' : row.xpath('td[3]/a//text()').extract_first(), 'Range': row.xpath('td[4]//text()').replace(u'\&amp;nbsp;', u' ').extract_first(), 'Comments' : row.xpath('td[5]//text()').extract_first()} yield MissleItem(missle_name=name['Missle'], missle_type=name['Type'], missle_origin=name['Origin'], missle_range=name['Range'], missle_comments=name['Comments']) </code></pre> <p>When I run the previous code, I get: AttributeError: 'SelectorList' object has no attribute 'replace'</p> <p>My question is, how can I return my Range Column without the 'nbsp;' extra output? I tried:</p> <pre><code>'Range': row.xpath('td[4]//text()').strip().extract_first() </code></pre> <p>But then I got a:</p> <pre><code>AttributeError: 'SelectorList' object has no attribute 'strip' </code></pre> <p>Any help would be greatly appreciated</p>
<pre><code>row.xpath('td[4]//text()').extract_first().replace(u'\&amp;nbsp;', u''), </code></pre> <p>Try putting <code>extract_first()</code> before the <code>replace()</code> call: <code>extract_first()</code> returns a string, and strings have a <code>replace()</code> method, whereas a <code>SelectorList</code> does not.</p>
python|scrapy
1
1,906,399
52,541,117
How can I alphabetize Python functions using Sublime Text?
<p>I installed a plugin that will alphabetize blocks. I just need a way to select all the defs in a python file. So far I've got this regex.</p> <p><a href="https://i.stack.imgur.com/oHn8R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oHn8R.png" alt="Selects all defs except the last line."></a></p> <p>This doesn't select the last line because there isn't any newline. I could enter a newline at the end, but I'd like to avoid that. In fact, ideally I'd like to avoid grabbing all the newlines above.</p> <p><a href="https://i.stack.imgur.com/pFFLN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pFFLN.png" alt="Selects all defs without any unnecessary newlines"></a></p> <p>But I'm worried that if I don't grab the newline, then it won't match functions that have a blank line in the middle.</p> <p><a href="https://i.stack.imgur.com/bxqSQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bxqSQ.png" alt="enter image description here"></a></p> <p>If there's a better way than what I'm trying--by selecting the blocks and using an alphabetizer plugin--then please suggest it. Otherwise, is there some way I can get the regex to match just the defs?</p>
<p><code>def.+(\n?\n.+)+</code></p> <p>Will accomplish what you want. (Sublime seems to follow the usual "dot is not newline" convention)</p> <hr> <p>Breaking down the components of the expression:</p> <p><code>def.+</code> - match the def line, up to a newline</p> <p><code>\n?\n.+</code> - match a newline, followed by some characters, optionally prepended by another newline (the prepend handles the case of an empty line in the middle of a def)</p> <p><code>(...)+</code> - start a capture group, and match its pattern one or more times</p> <p><code>(\n?\n.+)+</code> - combine the previous two pieces, so we match any sequence of non-empty lines with at most one empty line between any two non-empty lines (pedantically, any sequence of non-empty-line and empty-line-then-non-empty-line blocks)</p> <p>The final <code>+</code> could be a <code>*</code> instead if it's permissible to match "empty" defs like</p> <pre><code>def empty(): </code></pre>
python|regex|sublimetext3
0