Columns: Unnamed: 0 (int64, 0 – 1.91M); id (int64, 337 – 73.8M); title (string, length 10 – 150); question (string, length 21 – 64.2k); answer (string, length 19 – 59.4k); tags (string, length 5 – 112); score (int64, -10 – 17.3k)
1,908,300
29,592,783
copy values in a python list to a dictionary
<p>If I have a list, </p> <pre><code>lstValues=[1, 2, "cat", "meow"] </code></pre> <p>And I have a dictionary with keys as follows:</p> <pre><code>data {id, count, animal, sound} </code></pre> <p>If I am reading from a file formatted as follows:</p> <pre><code>#id count animal sound 10 1 tiger purr 11 2 lion roar </code></pre> <p>I am reading the file and each line is split to return a list of the above format. Is there a way to append values from the list to the dictionary? </p> <p>So if what I am asking were to be possible, I would be able to do something like:</p> <pre><code>data.append(lstValues) </code></pre> <p>I know dictionary keys have no set order. I am fairly sure this is not possible but I would like a confirmation.</p> <p><strong>EDIT:</strong> I want to avoid doing a manual assignment. I know dicts don't support the syntax I provided as an example, which is why I said "something like".</p>
<p>Here's a quick way:</p> <pre><code>keys = ["id", "count", "animal", "sound"] lstValues = [1, 2, "cat", "meow"] data = dict(zip(keys, lstValues)) </code></pre> <p>The <code>zip</code> function combines the two lists into a list of tuples (each tuple is a key and a value). The <code>dict</code> function converts the list of tuples into a dictionary.</p>
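The zip/dict approach can be checked end to end with one line from the file in the question (the split values stay strings, so numeric fields would still need an `int()` conversion):

```python
keys = ["id", "count", "animal", "sound"]

# One line from the file in the question, split into a list of values
# that line up positionally with the keys.
line = "10 1 tiger purr"
lst_values = line.split()

# zip pairs keys with values; dict builds the mapping in one step.
record = dict(zip(keys, lst_values))
print(record)
```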
python
2
1,908,301
46,204,992
How can I extract the audio embeddings (features) from Google’s AudioSet?
<p>I’m talking about the audio features dataset available at <a href="https://research.google.com/audioset/download.html" rel="nofollow noreferrer">https://research.google.com/audioset/download.html</a> as a tar.gz archive consisting of frame-level audio tfrecords.</p> <p>Extracting everything else from the tfrecord files works fine (I could extract the keys: video_id, start_time_seconds, end_time_seconds, labels), but the actual embeddings needed for training do not seem to be there at all. When I iterate over the contents of any tfrecord file from the dataset, only the four keys video_id, start_time_seconds, end_time_seconds, and labels, are printed.</p> <p>This is the code I'm using:</p> <pre><code>import tensorflow as tf import numpy as np def readTfRecordSamples(tfrecords_filename): record_iterator = tf.python_io.tf_record_iterator(path=tfrecords_filename) for string_record in record_iterator: example = tf.train.Example() example.ParseFromString(string_record) print(example) # this prints the abovementioned 4 keys but NOT audio_embeddings # the first label can be then parsed like this: label = (example.features.feature['labels'].int64_list.value[0]) print('label 1: ' + str(label)) # this, however, does not work: #audio_embedding = (example.features.feature['audio_embedding'].bytes_list.value[0]) readTfRecordSamples('embeddings/01.tfrecord') </code></pre> <p>Is there any trick to extracting the 128-dimensional embeddings? Or are they really not in this dataset?</p>
<p>Solved it, the tfrecord files need to be read as sequence examples, not as examples. The above code works if the line</p> <pre><code>example = tf.train.Example() </code></pre> <p>is replaced by</p> <pre><code>example = tf.train.SequenceExample() </code></pre> <p>The embeddings and all other content can then be viewed by simply running</p> <pre><code>print(example) </code></pre>
python|tensorflow|protocol-buffers
5
1,908,302
46,331,714
Possible combination of a nested list in python
<p>If I have a list of lists and want to find all the possible combination from each different indices, how could I do that?</p> <p>For example:</p> <pre><code>list_of_lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] </code></pre> <p>I want to find</p> <pre><code>all_possibility = [[1, 5, 9], [1, 8, 6], [4, 2, 9], [4, 8, 3], [7, 2, 6], [7, 5, 3]] </code></pre> <p>where </p> <ul> <li><p>[1,5,9]: 1 is 1st element of [1, 2, 3], 5 is 2nd element of [4, 5, 6], 9 is 3rd element of [7, 8, 9]. </p></li> <li><p>[1,8,6]: 1 is 1st element of [1, 2, 3], 8 is 2nd element of [7, 8, 9], 6 is 3rd element of [4, 5, 6]. </p></li> </ul> <p>and so on.</p> <p>(Edited) Note: I would like the result to be in the same order as the original element of the list. [1, 8, 6] but not [1, 6, 8] because 8 is the 2nd element of [7, 8, 9]. </p>
<p>What you're looking for is the Cartesian product, in Python <code>itertools.product</code>:</p> <pre><code>&gt;&gt;&gt; import itertools &gt;&gt;&gt; list_of_lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] &gt;&gt;&gt; all_possibility = list(itertools.product(*list_of_lists)) &gt;&gt;&gt; print(all_possibility) [(1, 4, 7), (1, 4, 8), (1, 4, 9), (1, 5, 7), (1, 5, 8), (1, 5, 9), (1, 6, 7), (1, 6, 8), (1, 6, 9), (2, 4, 7), (2, 4, 8), (2, 4, 9), (2, 5, 7), (2, 5, 8), (2, 5, 9), (2, 6, 7), (2, 6, 8), (2, 6, 9), (3, 4, 7), (3, 4, 8), (3, 4, 9), (3, 5, 7), (3, 5, 8), (3, 5, 9), (3, 6, 7), (3, 6, 8), (3, 6, 9)] </code></pre> <p>If you want permutations based on the indices rather than the values, you can use <code>itertools.permutations</code> to get the possible indices, then use those indices to get the respective values from the sub-lists, like this:</p> <pre><code>&gt;&gt;&gt; list_of_lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] &gt;&gt;&gt; length = 3 &gt;&gt;&gt; all_indices = list(itertools.permutations(range(length), length)) &gt;&gt;&gt; all_possibility = [[l[i] for l,i in zip(list_of_lists, indices)] for indices in all_indices] &gt;&gt;&gt; print(all_possibility) [[1, 5, 9], [1, 6, 8], [2, 4, 9], [2, 6, 7], [3, 4, 8], [3, 5, 7]] </code></pre>
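Note the asker's edit asks for [1, 8, 6] rather than [1, 6, 8]: the value at position j must be the j-th element of whichever list supplied it. That needs the indexing transposed relative to the second snippet above — permute which *list* feeds each position, while the element index stays fixed:

```python
from itertools import permutations

list_of_lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
n = len(list_of_lists)

# For each permutation p, position j takes the j-th element of list p[j],
# so every value keeps the position it had in its source list.
all_possibility = [
    [list_of_lists[p[j]][j] for j in range(n)] for p in permutations(range(n))
]
print(all_possibility)
```

This reproduces the asker's expected output exactly, including [1, 8, 6].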
python|python-3.x|list
5
1,908,303
60,895,665
Interpolation methods when working with data from multiple pandas DataFrames?
<p>I have a DataFrame that has many columns and I want to interpolate to get y-values using x, y points and my known x-values. I know how to carry out the interpolation by selecting one column (of each DataFrame) of x, y points and x-values. My goal is to have a DataFrame containing all of the interpolated y-values. Generally, I can obtain the y-values one of two ways (that I know of):</p> <p>Here is an example of how my DataFrames are formatted:</p> <pre><code>random.seed( 30 ) df_x_pts = pd.DataFrame({ &quot;x_pts_1&quot;: np.random.uniform(low=1, high=200, size=(10,)), &quot;x_pts_2&quot;: np.random.uniform(low=1, high=500, size=(10,)), &quot;x_pts_3&quot;: np.random.uniform(low=1, high=750, size=(10,)),}) df_y_pts = pd.DataFrame({ &quot;y_pts_1&quot;: np.random.uniform(low=1, high=200, size=(10,)), &quot;y_pts_2&quot;: np.random.uniform(low=1, high=500, size=(10,)), &quot;y_pts_3&quot;: np.random.uniform(low=1, high=750, size=(10,)),}) df_x_vals = pd.DataFrame({ &quot;x_vals_1&quot;: np.random.uniform(low=1, high=200, size=(50,)), &quot;x_vals_2&quot;: np.random.uniform(low=1, high=500, size=(50,)), &quot;x_vals_3&quot;: np.random.uniform(low=1, high=750, size=(50,)),}) </code></pre> <p><strong>1)</strong> I can calculate this for each column in each DataFrame using scipy:</p> <pre><code>import pandas as pd import numpy as np from scipy.interpolate import interp1d y = df_y_pts[&quot;y_pts_1&quot;] x = df_x_pts[&quot;x_pts_1&quot;] x_vals = df_x_vals[&quot;x_vals_1&quot;] # Fit the interpolation on the original index and values f = interp1d(x, y, kind='linear') # Perform interpolation for values across the full desired index f(x_vals) </code></pre> <p><strong>2)</strong> Or using numpy:</p> <pre><code>x_pts = df_x_pts[&quot;x_pts_1&quot;] y_pts = df_y_pts[&quot;y_pts_1&quot;] x_vals = df_x_vals[&quot;x_vals_1&quot;] y_vals = np.interp(x_vals, x_pts, y_pts) print(y_vals) </code></pre> <p>What is a sensible method to accomplish this across all the DataFrames so that I return a single DataFrame that would look like df_x_vals, except a DataFrame that consists of all the interpolated y-values? Any help is greatly appreciated!</p>
<p>A bit late to the party and not enough rep for a comment, so I am formulating this as an answer:</p> <blockquote> <p>how do you use this to interpolate values between columns, instead of down the row?</p> </blockquote> <p>Check the <code>axis</code> argument of <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.interpolate.html" rel="nofollow noreferrer"><code>DataFrame.interpolate</code></a>, I believe this is what you want.</p> <p>Basically <code>...interpolate(..., axis=1)</code></p>
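For the original column-by-column question, one sketch is a dict comprehension over the parallel columns of the three frames, feeding each triple to np.interp and assembling the results into one DataFrame. The tiny frames and output column names here are stand-ins following the question's naming pattern; note np.interp assumes the x points are increasing, so the random sample data in the question would need sorting first:

```python
import numpy as np
import pandas as pd

# Small, already-sorted stand-ins for the asker's frames.
df_x_pts = pd.DataFrame({"x_pts_1": [0.0, 1.0, 2.0], "x_pts_2": [0.0, 2.0, 4.0]})
df_y_pts = pd.DataFrame({"y_pts_1": [0.0, 10.0, 20.0], "y_pts_2": [0.0, 1.0, 2.0]})
df_x_vals = pd.DataFrame({"x_vals_1": [0.5, 1.5], "x_vals_2": [1.0, 3.0]})

# Pair up the i-th column of each frame and interpolate column-wise.
df_y_vals = pd.DataFrame({
    yv_col: np.interp(df_x_vals[xv_col], df_x_pts[xp_col], df_y_pts[yp_col])
    for xp_col, yp_col, xv_col, yv_col in zip(
        df_x_pts.columns, df_y_pts.columns, df_x_vals.columns,
        ["y_vals_1", "y_vals_2"])
})
print(df_y_vals)
```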
python|python-3.x|pandas|numpy|interpolation
0
1,908,304
49,659,071
Is it better to use Python's lru_cache / memoize feature on public functions or lower-level private functions, or does it matter?
<p>In Python 3, given public &amp; private functions, something like this:</p> <pre><code>def my_public_function(a, b, c) -&gt; int: rv = _my_private_function(a, b, c) return rv def _my_private_function(a, b, c) -&gt; int: return a + b + c </code></pre> <p>If I want to optimize this using <code>functools.lru_cache</code>, am I better to do that on the public function or the private one?</p> <p>My instinct is the public function so that it's cached as early as possible and returned without touching anything further.</p> <pre><code>from functools import lru_cache @lru_cache() def my_public_function... </code></pre> <p>Is there any design pattern or other design principle which would suggest using <code>@lru_cache()</code> on <code>_my_private_function</code> instead?</p>
<p>You save time (a function call and 2 adds) by handling this in the public function. In Python there isn't really a notion of "private" functions. The convention of starting a name with _ tells others to stay away, but there is nothing language-specific to it. So it has no effect on any calling function, including lru_cache.</p>
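The "cache as early as possible" instinct can be verified directly: with the cache on the public function, the private function only ever runs on a cache miss. A minimal check using a counter:

```python
from functools import lru_cache

calls = {"private": 0}

def _my_private_function(a, b, c):
    calls["private"] += 1
    return a + b + c

@lru_cache(maxsize=None)
def my_public_function(a, b, c):
    return _my_private_function(a, b, c)

my_public_function(1, 2, 3)
my_public_function(1, 2, 3)   # served from the cache; private never runs
print(calls["private"], my_public_function.cache_info().hits)
```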
python|caching
1
1,908,305
62,816,651
How do I load a csv into MySQL using pandas?
<p>Sorry for what is probably a simple question. I've been working with pandas for a little while &amp; have had no trouble loading excel files into MSSQL, however, I haven't been able to figure it out for mysql. I wasn't able to download the MySQLdb package, so I installed the mysql package &amp; used mysql.connector as shown below:</p> <p><strong>Original Code</strong></p> <pre><code>import pandas as pd import mysql.connector as mysql data = pd.read_excel(r&quot;[my file path]&quot;) df = pd.DataFrame(data, columns=['Name', 'country', 'age', 'gender']) conn = mysql.connect('Driver={SQL Server};' 'Server=[my server name];' 'Database=[my database];' 'Trusted_Connection=yes;') cursor = conn.cursor() print(df) for row in df.itertuples(): cursor.execute(''' INSERT INTO [database].dbo.[table] (Name, country, age, gender) VALUES (?,?,?,?) ''', row.Name, row.country, row.age, row.gender ) conn.commit() </code></pre> <p>Not sure what exactly I should be doing to make this all work.</p> <p><strong>Code that works</strong></p> <pre><code>import mysql.connector import pandas as pd conn= mysql.connector.connect(user='[username]', password='[password]', host='[hostname]', database='[databasename]') cursor = conn.cursor() excel_data = pd.read_excel(r'[filepath]',sep=',', quotechar='\'') for row in excel_data.iterrows(): testlist = row[1].values cursor.execute(&quot;INSERT INTO [tablename](Name, Country, Age, Gender)&quot; &quot; VALUES('%s','%s','%s','%s')&quot; % tuple(testlist)) conn.commit() cursor.close() conn.close() </code></pre> <p><strong>More concise code which also works</strong></p> <pre><code>import pandas as pd from sqlalchemy import create_engine, types engine = create_engine('mysql://root:[password]@localhost/[database]') # enter your password and database names here df = pd.read_excel(r&quot;[file path]&quot;,sep=',',quotechar='\'',encoding='utf8') # Replace Excel_file_name with your excel sheet name df.to_sql('[table name]',con=engine,index=False,if_exists='append') # Replace Table_name with your sql table name </code></pre>
<p>Pandas has a <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html" rel="nofollow noreferrer">number of great functions</a> for dealing with SQL databases. Once you have read from excel, created your <code>df</code>, and connected to your database, you can simply use:</p> <pre class="lang-py prettyprint-override"><code>df.to_sql(&quot;table_name&quot;, conn, if_exists='replace', index=False) </code></pre> <p>and the columns of your database table will be created automatically. Of course, you can visit the link above and play around with the other parameters. Also, remember to close your database connection with <code>conn.close()</code>.</p> <p>Edit: Figured I'd add how to simply read the dataframe out of the database as well.</p> <pre class="lang-py prettyprint-override"><code>df = pd.read_sql(&quot;SELECT * FROM table_name&quot;, conn) </code></pre> <p>Hope this helps!</p>
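The to_sql round trip can be demonstrated without a running MySQL server: sqlite3 stands in here for the database connection (sqlite3 is the one raw DBAPI connection pandas supports directly; for MySQL a SQLAlchemy engine is the supported route), but the to_sql/read_sql calls have the same shape:

```python
import sqlite3

import pandas as pd

# In-memory SQLite database standing in for the MySQL connection.
conn = sqlite3.connect(":memory:")
df = pd.DataFrame({"Name": ["Joe"], "country": ["US"], "age": [30], "gender": ["M"]})

# Creates the table and its columns automatically, as the answer describes.
df.to_sql("people", conn, if_exists="replace", index=False)

# Read it back out, as in the answer's edit.
back = pd.read_sql("SELECT * FROM people", conn)
print(back)
conn.close()
```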
python|mysql|pandas
1
1,908,306
53,419,424
where does the welcome message of interactive python interpreter come from?
<p>When entering <code>python</code> on Linux shell, the welcome message is printed:</p> <pre><code>[root@localhost ~]# python Python 2.7.5 (default, Nov 20 2015, 02:00:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux2 Type "help", "copyright", "credits" or "license" for more information. </code></pre> <p>where do those lines come from? Are they determined during compilation or installation?</p> <p>I have another version of <code>python</code> executable and a set of libs on my system, but when I enter that <code>python</code>, it also shows the same welcome message as above. </p> <p>Thanks,</p> <p><strong>UPDATE:</strong></p> <p>I use absolute path to start another version of python. And just found the welcome message has the same content as sys.version and sys.platform. But if I copy the other version of python to a different Linux machine B, and still use absolute path to run it. I get </p> <pre><code>Python 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0] on linux2 Type "help", "copyright", "credits" or "license" for more information. </code></pre> <p>This welcome message is the same as machine B's python.</p>
<p>Edited: The C version source code is similar: <a href="https://github.com/python/cpython/blob/7e4db2f253c555568d56177c2fd083bcf8f88d34/Modules/main.c#L705" rel="nofollow noreferrer">https://github.com/python/cpython/blob/7e4db2f253c555568d56177c2fd083bcf8f88d34/Modules/main.c#L705</a></p> <pre><code>if (!Py_QuietFlag &amp;&amp; (Py_VerboseFlag || (command == NULL &amp;&amp; filename == NULL &amp;&amp; module == NULL &amp;&amp; stdin_is_interactive))) { fprintf(stderr, "Python %s on %s\n", Py_GetVersion(), Py_GetPlatform()); if (!Py_NoSiteFlag) fprintf(stderr, "%s\n", COPYRIGHT); } </code></pre> <p>where <code>Py_GetVersion()</code> returns the version based on a macro:</p> <p><a href="https://github.com/python/cpython/blob/7e4db2f253c555568d56177c2fd083bcf8f88d34/Include/patchlevel.h#L26" rel="nofollow noreferrer">https://github.com/python/cpython/blob/7e4db2f253c555568d56177c2fd083bcf8f88d34/Include/patchlevel.h#L26</a></p> <pre><code>/* Version as a string */ #define PY_VERSION "3.7.0a0" </code></pre> <p>so it is determined at compile time; you probably have a messed-up PATH?</p> <hr> <p>Old answer, which is actually just a python module</p> <p><a href="https://github.com/python/cpython/blob/7e4db2f253c555568d56177c2fd083bcf8f88d34/Lib/code.py#L214" rel="nofollow noreferrer">https://github.com/python/cpython/blob/7e4db2f253c555568d56177c2fd083bcf8f88d34/Lib/code.py#L214</a></p> <pre><code> if banner is None: self.write("Python %s on %s\n%s\n(%s)\n" % (sys.version, sys.platform, cprt, self.__class__.__name__)) elif banner: self.write("%s\n" % str(banner)) </code></pre> <p>Not sure if this answers your question, but still fun to know.</p>
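The asker's observation that the banner matches sys.version and sys.platform can be reproduced in two lines — these are the same values the C code above prints:

```python
import sys

# Rebuild the first banner line from the interpreter's own attributes.
banner = "Python %s on %s" % (sys.version, sys.platform)
print(banner)
```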
python
1
1,908,307
45,909,038
Marker edge colors or just other markers of matplotlib.axes.Axes.errorbar (Python Package) to every point
<p>Is there any way to use color markers of matplotlib.axes.Axes.errorbar (Python Package), or just other markers, to every point? </p> <p>Example: </p> <pre><code>err = np.array([0.0, math.log(0.0423746910453,10), math.log(0.26659516937,10)]) plt.errorbar(x,y,yerr=err,marker='^',markeredgecolor='gray') </code></pre> <p>I would like to have a color marker, or just other marker, to every point (error).</p> <ul> <li>Point 1: 0.0 -> markeredgecolor='red', or marker='^'</li> <li>Point 2: math.log(0.0423746910453,10) -> markeredgecolor='blue', or marker='X'</li> <li>Point 3: math.log(0.26659516937,10) -> markeredgecolor='green', or marker='o'</li> </ul>
<p>I'm not sure if matplotlib supports this natively, however you can simply work around this limitation:</p> <p>First, draw the line connecting the points with <code>plt.plot(x, y)</code>. You can then draw each marker separately by calling <code>plt.errorbar</code> for each point with different options.</p>
python|matplotlib
1
1,908,308
33,156,042
Tkinter Application Name/Menu
<p>How do I change this menu? </p> <p><a href="https://i.stack.imgur.com/3wxEP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3wxEP.png" alt="python app info"></a></p> <p>Used py2app to convert it. PS: The app is called py.app.</p> <p><strong>Edit to further clarify:</strong></p> <p>I would like to edit the names of the menu items under the "py" cascade and what methods they trigger. For example: About py, Services, Hide py etc.</p>
<p>To edit/add items to the Application Menu go <a href="http://www.tkdocs.com/tutorial/menus.html" rel="nofollow" title="here">here</a> and read the section on Mac OS X. It tells you exactly how to do it and lots of other useful things. </p>
python|tkinter|py2app
-1
1,908,309
13,148,233
Python: Using find function not in order
<p>So the code below takes a string of inputted information (A math expression), and uses the find function to find one of the operators in "*/+-" and separates the string accordingly.</p> <pre><code>def splitting1(z): for operators in "*/+-": if operators in z: position1= z.find(operators) position2= z.rfind(operators) text_before_operators= (z[:position1]).strip() text_after_operators= (z[(position1+1):(position2)]).strip() return text_before_operators,text_after_operators </code></pre> <p>My problem is that if I have an inputted expression such as <code>3/5*7</code> then <code>position1</code> will first find <code>*</code> before finding <code>/</code>. I want the code to associate 'position1' with the left most operator. Is there a way to omit operator precedence when using the for/in functions? If not, is there a better string manipulator that can omit the order of precedence.</p> <p>Note z is the input. And the input is limited to two operators in case that created ambiguity.</p>
<p>It looks like you are trying to lex, so I suggest you look into modules designed especially for this purpose, for example <a href="http://www.dabeaz.com/ply/ply.html" rel="nofollow"><code>ply</code></a>.</p> <p>That said, I think you are on the right track for this example, but you are missing some <strong>recursion</strong> (to make a more general lexer for these): </p> <pre><code>def splitting1(z): for char in "*/+-": if char in z: position = z.find(char) text_before_operator= (z[:position]).strip() text_after_operator= (z[position+1:]).strip() return (char, splitting1(text_before_operator), splitting1(text_after_operator)) return ("number", z) </code></pre> <p>One way to find the left-most operator regardless of precedence (i.e. omitting operator precedence) is to rearrange what you iterate over:</p> <pre><code>def splitting2(z): for char in z: if char in "*/+-": position = z.find(char) text_before_operator= (z[:position]).strip() text_after_operator= (z[position+1:]).strip() return (char, splitting2(text_before_operator), splitting2(text_after_operator)) return ("number", z) </code></pre> <p><em>Note these functions return a different result from your original function.</em></p>
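The left-to-right variant can be exercised on the asker's own example, 3/5*7; the answer's splitting2 is repeated here so the check is self-contained. Scanning the characters of z (rather than the operator string) makes the leftmost operator win:

```python
def splitting2(z):
    # Scan characters left to right so the leftmost operator is found first,
    # regardless of where it appears in "*/+-".
    for char in z:
        if char in "*/+-":
            position = z.find(char)
            before = z[:position].strip()
            after = z[position + 1:].strip()
            return (char, splitting2(before), splitting2(after))
    return ("number", z)

tree = splitting2("3/5*7")
print(tree)
```

The leftmost `/` becomes the root, with the rest of the expression recursively split on the right.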
python|operator-keyword|operator-precedence
0
1,908,310
21,769,390
How to resolve ambiguous column after join in sqlalchemy
<p>When I join two tables (objects) using statement as </p> <pre><code>session.query(object, robject).filter(getattr(object.c, "hid")==getattr(robject.c,\ )).subquery() </code></pre> <p>results in column reference "hid" is ambiguous since both tables have hid column. How should I resolve this?</p> <p>Thanks </p>
<p>For a subquery you typically want to name only those columns you need (since the subquery isn't used to load full objects), and you can then use <code>label()</code> for anything additional:</p> <pre><code>subq = sess.query(object.a, object.b, object.hid.label('o_hid'), robject.c, robject.hid.label('r_hid')).filter(..).subquery() </code></pre> <p>The subquery then names those columns based on the label name:</p> <pre><code> query(Something).join(subq, subq.c.o_hid == Something.q).filter(subq.c.r_hid == 5) </code></pre>
python|sql|sqlalchemy
3
1,908,311
24,717,365
Overwrite text file on first write, then append to it - Python
<p>My first write to the file needs to overwrite it, then my next ones need to append to it. But There is no way to know what write will be first. My writes are in conditional statements. Here is what I have:</p> <pre><code>class MyHTMLParser(HTMLParser): def __init__(self): HTMLParser.__init__(self) self.strict = False self.indent = " " self.pos = 0 self.output_file = 'output_sass.txt' def handle_starttag(self, tag, attrs): if attrs != []: for attr in attrs: if ('id' in attr): id = attr.index('id') with open(self.output_file, 'w') as the_file: the_file.writelines(self.indent * self.getpos()[1] + '#' + attr[id+1] + ' {' +'\n') ## print (self.indent * self.getpos()[1] + "#" + attr[id+1] + " {") self.pos = self.getpos()[1] break elif ('class' in attr): clas = attr.index('class') with open(self.output_file, 'w') as the_file: the_file.writelines(self.indent * self.getpos()[1] + "." + attr[clas+1] + " {"+'\n') ## print (self.indent * self.getpos()[1] + "." + attr[clas+1] + " {") self.pos = self.getpos()[1] break else: with open(self.output_file, 'w') as the_file: the_file.writelines(self.indent * self.getpos()[1] + tag + " {"+'\n') ## print (self.indent * self.getpos()[1] + tag + " {") self.pos = self.getpos()[1] break else: with open(self.output_file, 'w') as the_file: the_file.writelines(self.indent * self.getpos()[1] + tag + " {"+'\n') ## print (self.indent * self.getpos()[1] + tag + " {") self.pos = self.getpos()[1] def handle_endtag(self, tag): with open(self.output_file, 'w') as the_file: the_file.writelines(self.indent * self.pos + "}"+'\n') ## print(self.indent * self.pos + "}") </code></pre>
<p>Add a class attribute that holds the changing flag:</p> <pre><code>import itertools class MyHTMLParser(HTMLParser): def __init__(self, ...): ... self.modes = itertools.chain('w', itertools.cycle('a')) @property def mode(self): return next(self.modes) def handle_starttag(self, tag, attrs): ... with open(filepath, self.mode) as the_file: # self.mode is 'w' the first time and 'a' every time thereafter # write stuff </code></pre>
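The mode iterator can be demonstrated on its own, outside the parser class: the first open truncates any stale file, every later open appends, so writes made in any order of conditional branches all survive:

```python
import itertools
import os
import tempfile

# 'w' once, then 'a' forever.
modes = itertools.chain("w", itertools.cycle("a"))

path = os.path.join(tempfile.mkdtemp(), "output_sass.txt")
for line in ["div {", "}"]:
    # First iteration opens with 'w' (overwrite), the second with 'a' (append).
    with open(path, next(modes)) as the_file:
        the_file.write(line + "\n")

with open(path) as fh:
    content = fh.read()
print(content)
```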
python-3.x|output
2
1,908,312
40,788,530
how to calculate mean of some rows for each given date in a dataframe
<p>I have a dataframe with 5 columns and 50k rows. All columns are <code>int</code> except <code>date</code>, which is <code>datetime</code>.</p> <p><a href="https://i.stack.imgur.com/WhPzc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WhPzc.png" alt="enter image description here"></a></p> <p>In this dataframe, data was gathered for 1 year and <strong>there are multiple data points for one day</strong>. I want to calculate the mean and variance of some columns for each day and put it in a new data frame.</p> <p>Is there any pandas function or other way to do this?</p>
<p>Use <code>groupby</code>; it will return a new dataframe for you.</p> <pre><code>df.groupby('date').mean() df.groupby('date').std() </code></pre> <p>Isolating columns:</p> <pre><code>df.groupby('date')['price_per_unit'].mean() </code></pre>
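Since the question asks for variance specifically, `.agg` collects mean and variance in one pass (the `price_per_unit` column name comes from the answer; the sample data is a stand-in; note pandas uses the sample variance, ddof=1, so a single-row day yields NaN):

```python
import pandas as pd

df = pd.DataFrame({
    "date": ["2020-01-01", "2020-01-01", "2020-01-02"],
    "price_per_unit": [10.0, 20.0, 30.0],
})

# One row per day, with both statistics as columns.
daily = df.groupby("date")["price_per_unit"].agg(["mean", "var"])
print(daily)
```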
python|pandas|dataframe|statistics
4
1,908,313
38,080,867
Draw a map based on values from the source point
<p>I want to make a map. So I have values in an array and I want to use those values to make a map. So the values are from 1 to 250 and there are 120 of them (one for 3 degrees). So how do I calculate where should I put my points based on the values in the right order (from 0 to 360 degrees)?</p> <p>Here is my code so far:</p> <pre><code>import pygame white = (255, 255, 255) black = (0, 0, 0) import sys pygame.init() screen = pygame.display.set_caption("Drawing is fun!") screen = pygame.display.set_mode((500, 500)) screen.fill(white) pygame.draw.circle(screen, black, (250, 250), 250, 1) clock = pygame.time.Clock() while True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit(); sys.exit(); pygame.display.update() clock.tick(60) </code></pre> <p>I don't have the array yet but I will have it in the near future. Let's say that the array is named <code>values</code>.</p> <p>For those who didn't understand my question:<br> - I have a 500 x 500 pygame canvas.<br> - I have 120 values which represent 3 degrees each. We got them by spinning an ultra sonic sensor around and reading the values.<br> - I want to create an image on the canvas that reads those values and puts points in the image and then it connects them forming lines.<br> - It will be like a map generated by an ultrasonic sensor.</p>
<p>So the full size of your field is <code>2w=500</code>, and the half-size is <code>w=250</code></p> <p>The center of the field has coordinates <code>(cx, cy)</code> = <code>(w, w)</code></p> <p>Distances are already normalized in your case (max value 250 = w)</p> <p>If your points lie at the center angle of each sector, then the screen coordinates of the object at the i-th index are</p> <pre><code>central_angle = (1.5 + i * 3) * Pi / 180 //in radians x = cx + Round(values[i] * Cos(central_angle)) //round float value to get integer coordinate y = cy + Round(values[i] * Sin(central_angle)) </code></pre>
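The pseudocode above translates directly to Python with the math module; the readings here are hypothetical stand-ins for the 120-element array (one value per 3-degree sector), and the resulting points could be passed straight to pygame.draw.lines:

```python
import math

w = 250                      # half-size of the 500 x 500 pygame canvas
cx, cy = w, w                # center of the field

# Hypothetical sensor readings; the real array has 120 of them.
values = [250, 125, 200]

points = []
for i, r in enumerate(values):
    central_angle = math.radians(1.5 + i * 3)  # center of the i-th 3-degree sector
    x = cx + round(r * math.cos(central_angle))
    y = cy + round(r * math.sin(central_angle))
    points.append((x, y))

print(points)
```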
python|python-3.x|geometry|pygame
0
1,908,314
38,297,848
Matching JSON Keys
<p>When data come from JSON in the format below, I can turn it into a dictionary in <strong>Python</strong>. However, the information is not always in the same order. For instance, <code>first_name</code>, I would assume is always 0. However, when I get the data, depending on the form used, may be in the 2 position. So now, instead of storing <code>first_name</code> in the right db field, it may be in the <code>email</code> field.</p> <p>Here's my question: how do I ensure the value matches the db field using the <code>name</code> key, value?</p> <pre><code>{ "name": "first_name", "values": [ "Joe" ] }, { "name":"last_name", "values": [ "Example" ] }, { "name": "email", "values": [ "joe@example.com" ] } </code></pre> <p>Thank you, as always! :-)</p> <hr> <p>Update Here's how I access the code:</p> <pre><code>first_name = data['field_data'][0]['values'][0].encode('utf-8') last_name = data['field_data'][1]['values'][0].encode('utf-8') email = data['field_data'][2]['values'][0].encode('utf-8') telephone = data['field_data'][3]['values'][0].encode('utf-8') </code></pre> <p>I'm making the assumption that <code>first_name</code> will always be at <strong>0</strong>, even though that's not the case. The Python saves the data, it's putting emails in places where first_name should be. I'm trying to find a way to verify the JSON <code>name</code> key corresponds with the same db field before submitting the data.</p>
<p>Instead of relying on the order which, as you indicated, is fragile, you should actually look for the element which has the name you want by iterating on the array.</p> <p>For example:</p> <pre><code>data = { 'field_data' : [ { "name": "first_name", "values": [ "Joe" ] }, { "name":"last_name", "values": [ "Example" ] }, { "name": "email", "values": [ "joe@example.com" ] } ] } def get_values(name): for data_element in data.get('field_data'): if data_element.get('name') == name: return data_element.get('values') return None if __name__ == "__main__": first_name = get_values('first_name')[0] last_name = get_values('last_name')[0] email = get_values('email')[0] print 'first_name: ' + first_name print 'last_name: ' + last_name print 'email: ' + email </code></pre> <p>In this example, the <code>get_values</code> function looks for the element which has the desired name. Then, at the very bottom of the example, we use it to retrieve <code>first_name</code>, <code>last_name</code> and <code>email</code>.</p>
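A compact variant of the same idea: build a name-to-values lookup once with a dict comprehension, instead of scanning the array for every field. The incoming field order then no longer matters:

```python
data = {"field_data": [
    {"name": "first_name", "values": ["Joe"]},
    {"name": "last_name", "values": ["Example"]},
    {"name": "email", "values": ["joe@example.com"]},
]}

# One pass over field_data builds the lookup keyed by the "name" value.
by_name = {d["name"]: d["values"] for d in data["field_data"]}

first_name = by_name["first_name"][0]
email = by_name["email"][0]
print(first_name, email)
```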
python|json
2
1,908,315
30,990,171
Import variable from module imports only first occurrence
<p>In module1.py, I have a variable, var initialized to an empty string. A function in module1 changes the value of this variable. When I import this variable from module2.py, it only reflects the initial state before the function had changed it, even though I made sure to call the changing function before starting the import.</p> <p>module1.py</p> <pre><code>class App(Frame): global nums nums = [] def __init__(self, parent): Frame.__init__(self, parent, background='lightgreen') self.parent = parent self.vcmd = parent.register(self.validate) self.centerWindow() ............ </code></pre> <p>and this is where it gets updated, by a function in same class</p> <pre><code>nums.append(self.b_eq) </code></pre> <p>However, on importing <strong>nums</strong>, I still get an empty array</p>
<p>Python modules act as singletons. If you want to change the value, you could either have the function return a result and then call that function, or you could create a class in module1 and instantiate an object. From there you can set the value as desired.</p>
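The underlying rule is that `from module1 import nums` binds your name to the *object* nums pointed at when the import ran: in-place mutation (like the question's `nums.append(...)`) is visible through either name, while rebinding inside the module (`nums = ...`) would leave the imported name stale. A self-contained demonstration that writes a throwaway module1.py (hypothetical file mirroring the question) to a temp directory:

```python
import pathlib
import sys
import tempfile

tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "module1.py").write_text(
    "nums = []\n"
    "def update():\n"
    "    nums.append(42)   # mutates the list in place\n"
)
sys.path.insert(0, str(tmp))

import module1
from module1 import nums   # binds the *same* list object

module1.update()
print(nums)                # the mutation is visible through the imported name
```

If update() instead did `nums = nums + [42]` (a rebinding), the imported `nums` would still be the original empty list.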
python|tkinter|python-import
0
1,908,316
29,055,700
Using Django, how I can achieve this in a single query?
<p>I have the following model:</p> <pre><code>class Ticket(models.Model): # ... other fields omitted active_at = models.DateTimeField() duration = models.DurationField() </code></pre> <p>Given <code>now = datetime.now()</code>, I'd like to retrieve all records for which <code>now</code> is between <code>active_at</code> and <code>active_at + duration</code>.</p> <p>I'm using Django 1.8. Here are the <a href="https://docs.djangoproject.com/en/1.8/ref/models/fields/#durationfield" rel="nofollow">DurationField docs</a>.</p>
<p>As noted in the documentation, arithmetic with a <code>DurationField</code> will not always work as expected in databases other than PostgreSQL. I don't know to what extent this works or doesn't work in other databases, so you'll have to try it yourself.</p> <p>If that works, you can use the following query:</p> <pre><code>from django.db.models import F active_tickets = Ticket.objects.filter(active_at__lte=now, active_at__gt=now-F('duration')) </code></pre> <p>The <code>F</code> object refers to a field in the database, <code>duration</code> in this case. </p>
python|django|django-queryset|timedelta|django-aggregation
3
1,908,317
29,131,569
Defining a nested dictionary in python
<p>I want to define a nested dictionary in python. I tried the following:</p> <pre><code>keyword = 'MyTest' # Later I want to pull this iterating through a list key = 'test1' sections = dict(keyword={}) #This is clearly wrong but how do I get the string representation? sections[keyword][key] = 'Some value' </code></pre> <p>I can do this: </p> <pre><code>sections = {} sections[keyword] = {} </code></pre> <p>But then there is a warning in PyCharm saying it can be defined as a dictionary literal.</p> <p>Can someone point out how to achieve this?</p>
<pre><code>keyword = 'MyTest' # Later I want to pull this iterating through a list key = 'test1' sections = {keyword: {}} sections[keyword][key] = 'Some value' print(sections) {'MyTest': {'test1': 'Some value'}} </code></pre> <p><code>dict(keyword={})</code> creates a dict with the string <code>"keyword"</code> as the key not the value of the variable keyword.</p> <pre><code>In [3]: dict(foo={}) Out[3]: {'foo': {}} </code></pre> <p>Where using a dict literal actually uses the value of the variable as above.</p>
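Since the asker mentions pulling the keyword from a list later, one sketch for that case is collections.defaultdict, which creates the inner dict on first access so no per-keyword initialization is needed (the second keyword here is a hypothetical stand-in):

```python
from collections import defaultdict

keywords = ["MyTest", "OtherTest"]   # stand-in for the list the asker iterates
key = "test1"

sections = defaultdict(dict)          # inner dict appears on first access
for kw in keywords:
    sections[kw][key] = "Some value"

print(dict(sections))
```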
python|dictionary
5
1,908,318
8,746,876
nested operators
<p>I'm trying to write a tuple list comprehension using operators in python 2.7, tkinter. Alpha is the original data, beta the result.</p> <pre><code>alpha=[(A,1,1,2), (B,2,2,2), (C,3,1,2)] </code></pre> <p>product</p> <pre><code>beta=[(alpha[0],"%.2f"% reduce(mul,alpha[1:])) for alpha in alpha] beta [(A,2.00),(B,8.00),(C,6.00)] </code></pre> <p>sum</p> <pre><code>beta=[(alpha[0],"%.2f"% reduce(add,alpha[1:])) for alpha in alpha] beta [(A,4.00),(B,6.00),(C,6.00)] </code></pre> <p>But when I try to combine these for nested operations, I'm stumped. How do I get the</p> <p>sum of products?</p> <pre><code>beta [(A,16.00),(B,16.00),(C,16.00)] </code></pre> <p>products / sum of products?</p> <pre><code>beta [(A,0.13),(B,0.44),(C,0.38)] </code></pre> <p>I've tried various iterations of the following with no success</p> <pre><code>beta=[(alpha[0],"%.2f"% reduce(add, map(mul,alpha[1:])) for alpha in alpha] </code></pre>
<p>Here is one way to do it:</p> <pre><code>In [46]: alpha=[('A',1,1,2),('B',2,2,2),('C',3,1,2)] In [49]: total = float(sum(reduce(mul,a[1:]) for a in alpha)) In [50]: total Out[50]: 16.0 In [51]: [(a[0], "%.2f" % (reduce(mul,a[1:])/total)) for a in alpha] Out[51]: [('A', '0.12'), ('B', '0.50'), ('C', '0.38')] </code></pre> <p>I assume the <code>0.44</code> is a typo. If it isn't, please clarify how it should be computed.</p>
python|nested|tuples|operator-keyword
1
1,908,319
52,259,119
SQLite query across multiple tables
<p>I have some code for my python project(though this is a SQLite question) where I'm using SQLite to keep all game items.</p> <pre><code>import sqlite3 conn = sqlite3.connect('test.db') c = conn.cursor() def item_by_owned(owned): c.execute("SELECT * FROM items WHERE owned=:owned", {'owned': 1}) return c.fetchall() def print_inventory_names(inventory): for i in inventory: #print out the name(index[0]) of each item in inventory print(i[0]) inventory = item_by_owned(1) i = 0 print_inventory_names(inventory) </code></pre> <p>This works great if i were to put every item into the one table, I'd like to split my DB up though as below: <a href="https://i.stack.imgur.com/68DVi.png" rel="nofollow noreferrer">DataBase Layout</a></p> <p>Is there a way to search across multiple tables?</p> <p>something like: SELECT * FROM items,items2 WHERE owned=:owned</p>
<p>You can combine the two tables with <code>UNION</code>. Note that a <code>WHERE</code> clause written after the <code>UNION</code> applies only to the last <code>SELECT</code>, so repeat it for each part:</p> <pre><code>c.execute("""SELECT * FROM items WHERE owned=:owned UNION SELECT * FROM items2 WHERE owned=:owned""", {'owned': 1}) </code></pre>
python-3.x|sqlite
0
1,908,320
19,141,432
python numpy machine epsilon
<p>I am trying to understand what is machine epsilon. According to the Wikipedia, it can be calculated as follows:</p> <pre><code>def machineEpsilon(func=float): machine_epsilon = func(1) while func(1)+func(machine_epsilon) != func(1): machine_epsilon_last = machine_epsilon machine_epsilon = func(machine_epsilon) / func(2) return machine_epsilon_last </code></pre> <p>However, it is suitable only for double precision numbers. I am interested in modifying it to support also single precision numbers. I read that numpy can be used, particularly <code>numpy.float32</code> class. Can anybody help with modifying the function? </p>
<p>An easier way to get the machine epsilon for a given float type is to use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.finfo.html" rel="noreferrer"><code>np.finfo()</code></a>:</p> <pre><code>print(np.finfo(float).eps) # 2.22044604925e-16 print(np.finfo(np.float32).eps) # 1.19209e-07 </code></pre>
python|numpy|epsilon
255
1,908,321
56,150,636
Use Python to extract three sentences based on word finding
<p>I'm working on a text-mining use case in python. These are the sentences of interest:</p> <blockquote> <p>As a result may continue to be adversely impacted, by fluctuations in foreign currency exchange rates. Certain events such as the threat of additional tariffs on imported consumer goods from <strong>China</strong>, have increased. Stores are primarily located in shopping malls and other shopping centers.</p> </blockquote> <p>How can I extract the sentence with the keyword "China"? I do need a sentence before and after that, actually at least two sentences before and after.</p> <p>I've tried the below, as was answered <a href="https://stackoverflow.com/questions/27074905/python-extracting-a-sentence-with-a-particular-word">here</a>:</p> <pre><code>import nltk from nltk.tokenize import word_tokenize sents = nltk.sent_tokenize(text) my_sentences = [sent for sent in sents if 'China' in word_tokenize(sent)] </code></pre> <p>Please help!</p>
<h1>TL;DR</h1> <p>Use <code>sent_tokenize</code>, keep track of the index where the focus word and window the sentences to get the desired result.</p> <pre><code>from itertools import chain from nltk import sent_tokenize, word_tokenize from nltk.tokenize.treebank import TreebankWordDetokenizer word_detokenize = TreebankWordDetokenizer().detokenize text = """As a result may continue to be adversely impacted, by fluctuations in foreign currency exchange rates. Certain events such as the threat of additional tariffs on imported consumer goods from China, have increased global economic and political uncertainty and caused volatility in foreign currency exchange rates. Stores are primarily located in shopping malls and other shopping centers, certain of which have been experiencing declines in customer traffic.""" tokenized_text = [word_tokenize(sent) for sent in sent_tokenize(text)] sent_idx_with_china = [idx for idx, sent in enumerate(tokenized_text) if 'China' in sent or 'china' in sent] window = 2 # If you want 2 sentences before and after. for idx in sent_idx_with_china: start = max(idx - window, 0) end = min(idx+window, len(tokenized_text)) result = ' '.join(word_detokenize(sent) for sent in tokenized_text[start:end]) print(result) </code></pre> <p>Another example, <code>pip install wikipedia</code> first:</p> <pre><code>from itertools import chain from nltk import sent_tokenize, word_tokenize from nltk.tokenize.treebank import TreebankWordDetokenizer word_detokenize = TreebankWordDetokenizer().detokenize import wikipedia text = wikipedia.page("Winnie The Pooh").content tokenized_text = [word_tokenize(sent) for sent in sent_tokenize(text)] sent_idx_with_china = [idx for idx, sent in enumerate(tokenized_text) if 'China' in sent or 'china' in sent] window = 2 # If you want 2 sentences before and after. 
for idx in sent_idx_with_china: start = max(idx - window, 0) end = min(idx+window, len(tokenized_text)) result = ' '.join(word_detokenize(sent) for sent in tokenized_text[start:end]) print(result) print() </code></pre> <p>[out]:</p> <blockquote> <p>Ashdown Forest in England where the Pooh stories are set is a popular tourist attraction, and includes the wooden Pooh Bridge where Pooh and Piglet invented Poohsticks. The Oxford University Winnie the Pooh Society was founded by undergraduates in 1982. == Censorship in China == In the People's Republic of China, images of Pooh were censored in mid-2017 from social media websites, when internet memes comparing Chinese president Xi Jinping to Pooh became popular. The 2018 film Christopher Robin was also denied a Chinese release.</p> </blockquote>
regex|python-3.x|nltk|text-segmentation
1
1,908,322
36,312,885
how to use python urllib2 with delay on response?
<p>I want to load an internet page, wait for the info to be fully loaded (takes about 10 sec) and then download the site's source.</p> <p>Something like <code>urllib.urlopen(&quot;http://example.com/&quot;)</code> but with a wait. Is there an easy way to do that and wait between opening the site and downloading the data? How?</p> <p>thanks, Itzik Kidana</p> <h3>sorry for being a noob...</h3>
<p>Using <code>time.sleep(10)</code> may work but it's not optimal. I assume you want the page's JavaScript to load and then get the page source. For this you can use <a href="http://www.seleniumhq.org/" rel="nofollow">selenium</a>. It opens a browser and performs all kinds of shenanigans on a web page; check out the documentation. If it's for scraping purposes, I'd have a look at <a href="http://scrapy.org/" rel="nofollow">Scrapy</a>.</p>
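To make the trade-off concrete, here is a minimal sketch of the crude sleep-then-download approach mentioned above (Python 3's `urllib.request` shown; in the question's Python 2 it would be `urllib2.urlopen`). As the comments note, this never executes the page's JavaScript, which is exactly why selenium is recommended for JS-heavy pages:

```python
import time
from urllib.request import urlopen  # Python 2: from urllib2 import urlopen

def fetch_after_wait(url, delay=10):
    """Wait `delay` seconds, then download the page source.

    The wait happens in *our* process; urllib never runs the page's
    JavaScript, so content rendered client-side still won't appear.
    """
    time.sleep(delay)
    return urlopen(url).read()
```

With selenium, by contrast, the wait happens after `driver.get(url)`, and `driver.page_source` then contains the JavaScript-rendered markup.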
python|python-2.7|urllib2|bots
1
1,908,323
22,174,041
Getting 2-d array pattern from user inputs in python
<p>I want the user to give two input patterns: input1 as <code>00,01,10,11</code> and input2 as <code>0.1,0.2,0.24,0.5</code>, entering values from these two inputs alternately, one by one. To read user input I am using <code>input = int(raw_input())</code>. But my desired output should be two separate 2-d arrays, <code>[[0,0],[0,1],[1,0],[1,1]]</code> and <code>[[0.1],[0.2],[0.24],[0.5]]</code>. Please suggest a good way to do this.</p>
<p>You can use <a href="http://docs.python.org/2/library/stdtypes.html#str.split" rel="nofollow"><code>str.split</code></a> to convert the string input into a list of substrings:</p> <pre><code>"0.1,0.2".split(",") == ["0.1", "0.2"] </code></pre> <p>and <a href="http://docs.python.org/2/library/functions.html#map" rel="nofollow"><code>map</code></a> to apply some processing rule to those substrings; in your second case, this is conversion to <code>float</code>:</p> <pre><code>map(float, "0.1,0.2".split(",")) == [0.1, 0.2] </code></pre> <p>Your first example (binary strings to lists of integers) will require a custom function for <code>map</code>, for example:</p> <pre><code>map(lambda s: map(int, s), "001,010".split(",")) == [[0, 0, 1], [0, 1, 0]] </code></pre> <p>Note that in Python 3.x <a href="http://docs.python.org/3.3/library/functions.html#map" rel="nofollow"><code>map</code></a> is an iterator, so you may need to explicitly convert it to a <code>list</code>.</p>
python|user-input|multidimensional-array
0
1,908,324
43,884,993
Odoo-8 modify journal items list view
<p>In Odoo 8 there is a list view for journal items, and in it there are two selection fields: one for the <code>account journal</code> and one for the <code>account period</code>.</p> <p><a href="https://i.stack.imgur.com/GFt3r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GFt3r.png" alt="enter image description here"></a></p> <blockquote> <p><strong>What I want to do</strong></p> <p>I would like to remove the default values from period and journal.</p> <p>I would also like to filter the period field so that only open periods are loaded.</p> <p><strong>What I have done so far</strong></p> <p>I have tried to set default values for period and journal by setting <strong>_defaults</strong>, but then the filter didn't work.</p> </blockquote>
<p>Check the file located in: <code>addons/account/static/src/xml/account_move_line_quickadd.xml</code></p> <p>This file defines the view you are referring to.</p> <p>The file: <code>addons/account/static/src/js/account_move_line_quickadd.js</code></p> <p>is the js file that sets the default values. If you take a look at the js file you will see that at some point the <code>default_get</code> method of the <code>account_move_line</code> model is called.</p> <p>What you can do is either modify the js (by extending the widget of course) or you can go and override the <code>default_get</code> method of the <code>account_move_line</code> model and do your work there.</p>
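A hedged sketch of the `default_get` override route. The field names `journal_id`/`period_id` and the base-class defaults are assumptions here; in a real Odoo 8 addon you would inherit `account.move.line` via `osv.osv`, but a stub base class stands in below so the pattern is runnable on its own:

```python
class AccountMoveLineBase(object):
    """Stub standing in for the real Odoo 8 account.move.line model."""
    def default_get(self, cr, uid, fields, context=None):
        # pretend the framework supplies default journal/period values
        return {'journal_id': 1, 'period_id': 7, 'name': '/'}

class AccountMoveLine(AccountMoveLineBase):
    _inherit = 'account.move.line'  # how the real override would hook in

    def default_get(self, cr, uid, fields, context=None):
        res = super(AccountMoveLine, self).default_get(
            cr, uid, fields, context=context)
        res.pop('journal_id', None)  # no default journal in the quick-add bar
        res.pop('period_id', None)   # no default period either
        return res

print(AccountMoveLine().default_get(None, 1, ['journal_id', 'period_id']))
# -> {'name': '/'}
```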
python-2.7|openerp|odoo-8|odoo-view
1
1,908,325
43,753,185
How to do math expression from input() in Python?
<p>I have a situation where I have to convert the user's input to <code>int</code> and do a math operation:</p> <pre><code> import ast user_input = input() if user_input.isdigit(): print('it is a number:', ast.literal_eval(user_input)) elif user_input.isalpha(): print('it is a string') elif user_input.isalnum(): print('it is something mixed') else: print('can\'t recognize', user_input) </code></pre> <p>The first case, <strong>if user_input.isdigit</strong>, doesn't work if there is a math expression like <strong>2 + 2</strong> or <strong>5 * 5</strong>. What's wrong?</p>
<p>You can use <code>eval</code> if you want to evaluate an expression. Also, you need to use <code>raw_input()</code> in Python 2:</p> <pre><code>&gt;&gt;&gt; user_input = eval(raw_input()) 2+3 &gt;&gt;&gt; user_input 5 </code></pre> <p>In all other cases, you can use the <code>map()</code> function, like <code>user_input = map(int, raw_input())</code>.</p> <p><strong>Disclaimer</strong></p> <p>Don't use <code>eval</code> on untrusted input. It executes arbitrary code, so a malicious user could, for example, destroy your database.</p>
python|string
1
1,908,326
54,590,961
Python TensorFlow - Fail in training a simple neural network - original_name_scope required
<p>I am getting acquainted with Tensorflow keras in Python.</p> <p>I try to train a very simple network, with a simple dataset created by myself. I am trying to follow the lines of the tutorial in the official tf website:</p> <p><a href="https://www.tensorflow.org/tutorials/keras/basic_regression" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/keras/basic_regression</a></p> <p>In particular, I have the following code:</p> <pre><code>import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow import layers # Generate data NumElems = 1000 NumDims = 2 TrainSize = int(0.6 * NumElems) print(TrainSize) x = np.random.rand(NumDims,NumElems)*2 - 1 y = sum(x**2) x_training = x[:, :TrainSize] y_training = y[:TrainSize] x_test = x[:, TrainSize:] y_test = y[TrainSize:] # Build the model NH1 = 10 #Number of hidden nodes on first layer NH2 = 10 #Number of hidden nodes on second layer model = keras.Sequential() model.add(layers.Dense(NH1, activation='relu')) model.add(layers.Dense(NH2, activation='relu')) model.add(layers.Dense(1)) #Compile the model optimizer = tf.keras.optimizers.RMSprop(0.001) model.compile(loss='mse', optimizer=optimizer, metrics=['mae', 'mse']) #Train the model model.fit(x_training, y_training, epochs=10, batch_size = 50) </code></pre> <p>It works well excepted for the last line, which generates the following error:</p> <pre><code>Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/home/giuseppe/TF_Regression.py", line 38, in &lt;module&gt; model.fit(x_training, y_training, epochs=10, batch_size = 50) File "/home/giuseppe/venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1536, in fit validation_split=validation_split) File "/home/giuseppe/venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 992, in _standardize_user_data class_weight, batch_size) File 
"/home/giuseppe/venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1032, in _standardize_weights self._set_inputs(x) File "/home/giuseppe/venv/lib/python3.6/site-packages/tensorflow/python/training/checkpointable/base.py", line 474, in _method_wrapper method(self, *args, **kwargs) File "/home/giuseppe/venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1242, in _set_inputs self.build(input_shape=input_shape) File "/home/giuseppe/venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/sequential.py", line 221, in build with ops.name_scope(layer._name_scope()): File "/home/giuseppe/venv/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 151, in _name_scope return self._current_scope.original_name_scope AttributeError: 'NoneType' object has no attribute 'original_name_scope' </code></pre> <p>I have no clue what it is about and how to fix it. Could someone please help me?</p> <p>I thank you in advance. My best regards, Giuseppe</p>
<p>Some object TensorFlow is working with internally is None here. One likely culprit is the import <code>from tensorflow import layers</code>: that gives you <code>tf.layers</code>, not <code>tf.keras.layers</code>, and mixing <code>tf.layers.Dense</code> into a <code>keras.Sequential</code> model can fail exactly like this in TF 1.x. Try <code>from tensorflow.keras import layers</code> instead. For background on the error itself, read: <a href="https://stackoverflow.com/questions/8949252/dont-understand-what-this-attributeerror-means/">Don&#39;t understand what this AttributeError means</a>.</p>
python|tensorflow|keras
0
1,908,327
54,294,114
django inspectdb ORA-00904 "IDENTITY_COLUMN"
<p>I'm currently trying to get Django (version 2.1.5) models from an existing Oracle 11 database with <code>python manage.py inspectdb</code>, but this error still occurs:</p> <pre><code>Unable to inspect table table_name The error was: ORA-00904: "IDENTITY_COLUMN": invalid identifier </code></pre> <p>I have tried different Django versions. For example, with 2.0 the error didn't occur, but no text for the models was created. Other questions on this topic here on SO were not helpful.</p> <p>Based on this <a href="https://javarevisited.blogspot.com/2014/09/ora-00904-invalid-identifier-error-in-11g-database.html" rel="nofollow noreferrer">link</a>, I think the error occurs because I have no primary key in the table, but since I am not sure, I don't want to make any changes to the existing database.</p> <p>Has anybody solved this problem?</p>
<p>I had the same issue and solved doing this:</p> <pre><code># database configuration settings.py # 'oracle': { # 'ENGINE': 'django.db.backends.oracle', # 'NAME': 'host:port/service', # 'USER': 'database_user', # 'PASSWORD': 'database_password', # } pip uninstall django pip install Django==1.11.22 cd &lt;django application&gt; python manage.py inspectdb --database oracle &lt;table name&gt; &gt; oraclemodel.py pip uninstall django pip install Django==2.2.4 </code></pre>
python|django|database|oracle|django-models
1
1,908,328
54,284,189
error with animation of a string - matplotlib
<p>I am trying to get an <code>animation</code> working in <code>matplotlib</code>. I had this <code>code</code> working before but now I'm returning an error. I'm unsure if an update has caused this or not?</p> <p>The <code>code</code> is below. This was working before. but now it returns an error:</p> <pre><code> raise TypeError("invalid type comparison") TypeError: invalid type comparison </code></pre> <p>Example:</p> <pre><code>import matplotlib.pyplot as plt import matplotlib.animation as animation import numpy as np import pandas as pd d = ({ 'Time' : [1,2,3,4,5,6,7,8,9,10], }) df = pd.DataFrame(data = d) fig, ax = plt.subplots(figsize = (10,6)) #Event Table Events_table = plt.table(cellText= [[''],[''],[''],[''],['']], colWidths = [1], rowLabels=['Time','1','2','3','4'], colLabels=['Events'], bbox = [0.124, 0.75, 0.236, 0.22]) Frame_number = df['Time'] label = plt.text(-180, 50, Frame_number, fontsize = 8, ha = 'center') def animate(i) : label.set_text(Frame_number[i+1]) ani = animation.FuncAnimation(fig, animate, np.arange(0,10),# init_func = init, interval = 100, blit = False) plt.draw() </code></pre>
<p>I doubt this code would have ever worked in previous versions; in any case there are two problems:</p> <ul> <li><p>I think you will want to set your label to the <em>first</em> element of the series before the animation starts.</p> <pre><code>label = plt.text(0, 0, Frame_number[0]) </code></pre></li> <li><p>If you use <code>i+1</code> to index the series, your animation needs to stop one index before the last one:</p> <pre><code>ani = animation.FuncAnimation(..., frames=np.arange(0, len(Frame_number)-1) ) </code></pre></li> </ul> <p>Complete code:</p> <pre><code>import matplotlib.pyplot as plt import matplotlib.animation as animation import numpy as np import pandas as pd df = pd.DataFrame(data = { 'Time' : [1,2,3,4,5,6,7,8,9,10], }) fig, ax = plt.subplots(figsize = (10,6)) Frame_number = df['Time'] label = plt.text(0, 0, Frame_number[0]) def animate(i) : label.set_text(Frame_number[i+1]) ani = animation.FuncAnimation(fig, animate, np.arange(0,len(Frame_number)-1), interval = 100, blit = False) plt.show() </code></pre>
python|pandas|matplotlib|animation
1
1,908,329
54,451,450
Trying to create a function to only accept numbers between 1 and 5 and no strings in Python
<p>I'm creating a function that receives a number between 1 and 5 (likert scale), from a question. When the user inputs a wrong INT my loop is ok, the question repeats. But i want the question to repeat if the user inputs a string too. But in that case the program crashes "ValueError: invalid literal for int() with base 10" </p> <pre><code>def likert(msg): while True: L = int(input(msg)) if 1 &lt;= L &lt;= 5 and type(L) == int: return L elif L &lt; 1 or L &gt; 5: print('\033[031mError [1 to 5] only\033[m') continue </code></pre>
<p>Instead of trying to convert the input to <code>int</code> right off the bat, validate the string first. Two fixes to the usual attempt: <code>isdigit</code>/<code>isalpha</code> are methods, so they must be called with parentheses, and the string has to be converted with <code>int()</code> before the range check:</p> <pre><code>def likert(msg): while True: L = input(msg) if not L.isdigit(): # not a whole number (covers letters and mixed input) print('\033[031mError [1 to 5] only\033[m') continue L = int(L) # safe to convert now if 1 &lt;= L &lt;= 5: # within range return L print('\033[031mError [1 to 5] only\033[m') # out of range </code></pre>
python|function|validation
1
1,908,330
54,352,104
Is there any difference between these two cases: passing class name vs object
<p>I'm new to Python and not sure about many idioms. I have found code where a function gets a class name as argument. Is there any reason to do it? I've simplified the code to this:</p> <pre><code>class A: def __init__(self): print ("A") def foo_1(a): inst = a() return inst def foo_2(a): inst = a return inst if __name__ == "__main__": i1 = foo_1(A) i2 = foo_2(A()) </code></pre> <p>Is there any difference between implementing it like foo_1 ( the way it is implemented now), and the foo_2 (The way I would consider to be more intuitive)</p> <p>Thanks!</p>
<p>These do slightly different things.</p> <p><code>foo_1</code> gets passed a <code>class</code> (itself an object), not just its name. It then instantiates an instance of that class, which it returns.</p> <p><code>foo_2</code> gets passed an instance (actually in this simplified example any object) and just returns it.</p> <p>In this example this will mean that <code>i1</code> and <code>i2</code> seem to be the same. But they are actually different objects and will, for example, not compare equal. </p> <p>You would use something like <code>foo_1</code> as a factory of instances of some class or to modify the class object before creating an instance. And you would use <code>foo_2</code> if you want to do something with an instance. In particular this means that if you call the function multiple times, you can pass the same instance every time, whereas <code>foo_1</code> will always generate a new instance.</p> <p>So, it depends on what you want to do.</p>
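A small runnable illustration of the difference, using a bare class similar to the question's <code>A</code>:

```python
class A:
    pass

def foo_1(cls):
    return cls()   # factory: builds a brand-new instance on every call

def foo_2(obj):
    return obj     # passthrough: returns exactly what it was given

i1 = foo_1(A)
i2 = foo_2(A())
print(i1 is i2)              # False -- two distinct objects
print(foo_1(A) is foo_1(A))  # False -- the factory makes a fresh one each time

a = A()
print(foo_2(a) is foo_2(a))  # True -- the same instance comes back every call
```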
python
2
1,908,331
9,182,485
Quirk in calculating a percentage with a while loop
<p>I am trying to do a quick percentage counter, as follow:</p> <pre><code>percentage = 1 while chunk: chunk = f.read(size/100) read += len(chunk) m.update(chunk) if size &gt; 500000000: print '%s done for %s.'%(percentage, name) percentage += 1 </code></pre> <p>This returns 102 print statements, [1,102], instead of what I am trying to do, [1,100]. The reason for this seems to be that it is doing a print statement on the first iteration (0 read) and also on the last iteration (100% read). What would be the simplest way to fix this?</p>
<p>No, it's not a simple off-by-one error.</p> <p>Let's say the file is 199 bytes long. Each iteration will read <code>size/100</code> bytes, i.e. one byte. Consequently, the loop will execute 199 times, and "percentages" would range from 1 to 199.</p> <p>Given that you only print out the percentages for large files, you won't actually see numbers as high as 199. Nonetheless, the logic is still flawed, as you have already observed.</p> <p>A better way is to get rid of the <code>percentage</code> variable and use a direct computation instead:</p> <pre><code>while chunk: chunk = f.read(size/100) read += len(chunk) m.update(chunk) if size &gt; 500000000: print '%s done for %s.'%(100.0 * read / size, name) </code></pre>
python
7
1,908,332
9,492,441
copying a nested dictionary
<p>I have a dict with integer keys and the value for each key is a map with integer keys and set of integers as value.</p> <p>Ex:</p> <pre><code>x = { 3 : {0:set([1,2,5]), 1:set([3]), 2:set([7,8])}, 4 : {0:set([1,2]), 1:set([4]), 2:set([7])}, } </code></pre> <p>I am trying to write a function that does the following operation:</p> <p>Given a key(k) which is present in each value of <code>x</code> (say 2), it should return a new dict with similar structure and following properties: </p> <ul> <li>Key(T) : An element in the union of all values under <code>k</code></li> <li>Value : A dict with same inner keys, but the value for a given inner key(<code>k'</code>) is union of all sets associated with <code>k'</code> in old dict that have <code>T in x[k]</code></li> </ul> <p>In the above example, if the argument is <code>k=2</code> then it should return:</p> <pre><code>y = { 7 : {0:set([1,2,5]), 1:set([3,4]), 2:set([7])}, 8 : {0:set([1,2,5]), 1:set([3]), 2:set([8])}, } </code></pre> <p>I am currently doing this by iterating over all possible values under <code>k</code> in the old dict and constructing the new one. Is there an efficient way of doing this ?</p>
<p>You have to create a copy of the object; for that you can use <code>deepcopy</code> from the <code>copy</code> module. Check <a href="http://docs.python.org/library/copy.html#copy.deepcopy" rel="nofollow">http://docs.python.org/library/copy.html#copy.deepcopy</a> for details. It recursively copies all nested objects as well as the basic data.</p>
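For example, with the nested dict-of-dicts-of-sets from the question, `copy.deepcopy` copies every level, so mutating the copy leaves the original alone:

```python
import copy

x = {
    3: {0: {1, 2, 5}, 1: {3}, 2: {7, 8}},
    4: {0: {1, 2},    1: {4}, 2: {7}},
}

y = copy.deepcopy(x)
y[3][0].add(99)        # mutate an inner set of the copy

print(99 in y[3][0])   # True
print(99 in x[3][0])   # False -- the original's inner set is untouched
```

A plain `dict(x)` or `x.copy()` would be shallow: both dicts would share the same inner sets.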
python|algorithm
4
1,908,333
47,806,699
How to categorize in various categories from list in python
<p>I have data in a list as follows:</p> <pre><code> ['And user clicks on the link "Statement and letter preferences" -&gt; set([0, 2])', 'And user waits for 10 seconds -&gt; set([0, 2])', 'Then page is successfully launched -&gt; set([0, 1, 2])', '@TestRun -&gt; set([0, 1, 2])', 'And user set text "#Surname" on textbox name "surname" -&gt; set([0, 1, 2])', 'And user click on "menu open user preferences" label -&gt; set([0, 2])'] </code></pre> <p>In this data each entry ends with a set like <code>set([0, 2])</code>. Now I want all the statements that occur under 0, 1, and 2 collected into different lists. How can we do this in Python?</p> <p>Expected output:</p> <p>list_0, i.e. the list which contains all statements that have 0 in their set:</p> <pre><code> list_0 [And user clicks on the link "Statement and letter preferences And user waits for 10 seconds Then page is successfully launched '@TestRun And user set text "#Surname" on textbox name "surname And user click on "menu open user preferences" label] list_1 [ Then page is successfully launched '@TestRun And user set text "#Surname" on textbox name "surname] list_2 [And user clicks on the link "Statement and letter preferences And user waits for 10 seconds Then page is successfully launched '@TestRun And user set text "#Surname" on textbox name "surname And user click on "menu open user preferences" label] </code></pre>
<p>I'd recommend appending strings to a dictionary of lists. You'll understand why. </p> <p>First, here's a high level approach to solving this problem - </p> <ol> <li>Iterate over each string</li> <li>Split the string into its content and list of IDs</li> <li>For each ID, add the string to the appropriate dict key.</li> </ol> <pre><code>from collections import defaultdict import re d = defaultdict(list) for i in data: x, y = i.split('-&gt;') for z in map(int, re.findall('\d+', y)): d[z].append(x.strip()) # for performance, move the `strip` call outside the loop </code></pre> <pre><code>print(d) { "0": [ "And user clicks on the link \"Statement and letter preferences\"", "And user waits for 10 seconds", "Then page is successfully launched", "@TestRun", "And user set text \"#Surname\" on textbox name \"surname\"", "And user click on \"menu open user preferences\" label" ], "1": [ "Then page is successfully launched", "@TestRun", "And user set text \"#Surname\" on textbox name \"surname\"" ], "2": [ "And user clicks on the link \"Statement and letter preferences\"", "And user waits for 10 seconds", "Then page is successfully launched", "@TestRun", "And user set text \"#Surname\" on textbox name \"surname\"", "And user click on \"menu open user preferences\" label" ] } </code></pre> <p>You can find all strings related to ID <code>i</code> by querying <code>d[i]</code>. This is much cleaner than initialising separate lists.</p>
python
2
1,908,334
47,871,423
python - can i name my file as logging.py
<p>So I want to have a logging wrapper file - which is in a directory like this :</p> <pre><code>| team -- | libraries -- | logging.py -- | __init__.py </code></pre> <p>Problem is that inside logging I do :</p> <pre><code>from logging import .... </code></pre> <p>which is the native Python logging mechanism. So when writing tests for my logging wrapper, or when importing that in other places like <code>import team.libraries.logging</code></p> <p>the above line fails because I think it tries to import from the local logging file.</p> <p>So my question is :</p> <p>Would it be best to change the order in which python tries to import? i.e. first try on where the python is installed etc? Is it good practise?</p> <p>Or better to just change my local file name?</p> <p>PS : I will rename the file since it's easier, but still - is there a way to do so? is it possible at all?</p>
<blockquote> <p>Would it be best to change the order in which python tries to import? i.e. first try on where the python is installed etc? Is it good practise?</p> </blockquote> <p>No. Clearly not.</p> <blockquote> <p>Or better to just change my local file name?</p> </blockquote> <p>Yes, for sure.</p> <p>All reasons are already explained in comments and are pretty obvious.</p>
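To see why the rename is the right call, here's a small demonstration (Python 3 assumed) of the clash: the script's own directory sits first on `sys.path`, so a sibling `logging.py` shadows the standard-library module:

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    # a local file named like the stdlib module...
    with open(os.path.join(d, "logging.py"), "w") as f:
        f.write("x = 'local shadow'\n")
    # ...and a script next to it that tries `import logging`
    with open(os.path.join(d, "main.py"), "w") as f:
        f.write("import logging\nprint(getattr(logging, 'x', 'stdlib'))\n")
    out = subprocess.run([sys.executable, os.path.join(d, "main.py")],
                         capture_output=True, text=True).stdout

print(out.strip())  # -> local shadow
```

Renaming the local file (and removing any stale `logging.pyc`) makes `import logging` resolve to the standard library again.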
python|import
0
1,908,335
34,182,742
Print second folder name
<p>I have a Python code that sorts the folders inside a folder. However, I want to print the name of the second folder and not all of them. Any suggestions?</p> <pre><code>for root,dirs,files in os.walk("C:\\Folder testing"): for dirname in sorted(dirs, key=int, reverse=True): print(dirs) </code></pre>
<p>I wouldn't use os.walk for printing just one folder. I'd rather make a list of all the folders, and then select the one I want:</p> <pre><code>import os some_path = "C:\\Folder testing" dirs = [f for f in os.listdir(some_path) if os.path.isdir(os.path.join(some_path, f))] dirs_sorted = sorted(dirs, key=int, reverse=True) try: print(dirs_sorted[1]) except IndexError: print("Folder doesn't exist") </code></pre> <p>Beware that your sorting method requires that the folder names are numbers only.</p>
python|python-3.x
1
1,908,336
72,756,916
Cumulative sum based on another column's boolean value
<p>I have a pandas dataframe with the following format</p> <pre><code>name | is_valid | account | transaction Adam | True | debit | +10 Adam | False | credit | +10 Adam | True | credit | +10 Benj | True | credit | +10 Benj | False | debit | +10 Adam | True | credit | +10 </code></pre> <p>I want to create two new columns <code>credit_cumulative</code> and <code>debit_cumulative</code>. For <code>credit_cumulative</code>, it counts the cumulative sum of the transaction column <strong>for the corresponding person</strong>, and <strong>for the corresponding account</strong> in that row, the transaction column will count only <strong>if is_valid column</strong> is true. debit_cumulative wants to behave in the same way.</p> <p>In the above example, the result should be:</p> <pre><code>from | is_valid | account | transaction | credit_cumulative | debit_cumulative Adam | True | debit | +10 | 0 | 10 Adam | False | credit | +10 | 0 | 10 Adam | True | credit | +10 | 10 | 10 Benj | True | credit | +10 | 10 | 0 Benj | False | debit | +10 | 10 | 0 Adam | True | credit | +10 | 20 | 10 </code></pre> <p>To illustrate, the first row is Adam, and account is debit, is_valid is true, so we increase debit_cumulative by 10.</p> <p>For the second row, is_valid is negative. So transaction does not count. Name is Adam, is credit_cumulative and debit_cumulative will remain the same.</p> <p>All rows shall behave this way.</p> <p>Here is the code to the original data I described:</p> <pre><code>d = {'name': ['Adam', 'Adam', 'Adam', 'Benj', 'Benj', 'Adam'], 'is_valid': [True, False, True, True, False, True], 'account': ['debit', 'credit', 'credit', 'credit', 'debit', 'credit'], 'transaction': [10, 10, 10, 10, 10, 10]} df = pd.DataFrame(data=d) </code></pre>
<p>Try:</p> <pre class="lang-py prettyprint-override"><code># credit mask = df.is_valid.eq(True) &amp; df.account.eq(&quot;credit&quot;) df.loc[mask, &quot;credit_cumulative&quot;] = ( df[mask].groupby([&quot;name&quot;, &quot;account&quot;])[&quot;transaction&quot;].cumsum() ) df[&quot;credit_cumulative&quot;] = df.groupby(&quot;name&quot;)[&quot;credit_cumulative&quot;].apply( lambda x: x.ffill().fillna(0) ) # debit mask = df.is_valid.eq(True) &amp; df.account.eq(&quot;debit&quot;) df.loc[mask, &quot;debit_cumulative&quot;] = ( df[mask].groupby([&quot;name&quot;, &quot;account&quot;])[&quot;transaction&quot;].cumsum() ) df[&quot;debit_cumulative&quot;] = df.groupby(&quot;name&quot;)[&quot;debit_cumulative&quot;].apply( lambda x: x.ffill().fillna(0) ) print(df) </code></pre> <p>Prints:</p> <pre class="lang-none prettyprint-override"><code> name is_valid account transaction credit_cumulative debit_cumulative 0 Adam True debit 10 0.0 10.0 1 Adam False credit 10 0.0 10.0 2 Adam True credit 10 10.0 10.0 3 Benj True credit 10 10.0 0.0 4 Benj False debit 10 10.0 0.0 5 Adam True credit 10 20.0 10.0 </code></pre>
python|pandas|pandas-groupby|cumsum
3
1,908,337
39,454,205
xarray.Dataset.where() method force-changes dtype of DataArrays to float
<h1>Problem description</h1> <p>I have a dataset with <code>int</code>s in them, and I'd like to select a subdataset by some criteria but I would like to preserve the integer datatype. It seems to me that Xarray force-changes the integer data to float datatype.</p> <h1>Example setup</h1> <h3>Code</h3> <pre><code>import numpy import xarray nums = numpy.random.randint(0, 100, 13) names = numpy.random.choice(["babadook", "samara", "jason"], 13) data_vars = {"num": xarray.DataArray(nums), "name": xarray.DataArray(names)} dataset = xarray.Dataset(data_vars) print(dataset) </code></pre> <h3>Output</h3> <pre class="lang-none prettyprint-override"><code>&lt;xarray.Dataset&gt; Dimensions: (dim_0: 13) Coordinates: * dim_0 (dim_0) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 Data variables: num (dim_0) int64 93 99 49 35 92 14 41 57 28 59 74 1 15 name (dim_0) &lt;U8 'babadook' 'samara' 'samara' 'samara' 'jason' ... In [16]: </code></pre> <h1>Example problem</h1> <h3>Code</h3> <pre><code>subdataset = dataset.where(dataset.num &lt; 50, drop=True) print(subdataset) </code></pre> <h3>Output</h3> <pre class="lang-none prettyprint-override"><code>&lt;xarray.Dataset&gt; Dimensions: (dim_0: 7) Coordinates: * dim_0 (dim_0) int64 2 3 5 6 8 11 12 Data variables: num (dim_0) float64 49.0 35.0 14.0 41.0 28.0 1.0 15.0 name (dim_0) &lt;U32 'samara' 'samara' 'jason' 'babadook' 'jason' ... </code></pre>
<p>That's because with numpy (which xarray uses under-the-hood) ints don't have a way of representing <code>NaN</code>s. So with most <code>where</code> results, the type needs to be coerced to floats.</p> <p>If <code>drop=True</code> and every value that is masked is dropped, that's not actually a constraint - you could have the new array retain its dtype, because there's no need for <code>NaN</code> values. That's not in xarray at the moment, but could be an additional feature.</p>
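The coercion comes from NumPy itself, which xarray wraps; a minimal NumPy-only sketch (not using xarray) of why masking forces floats while plain boolean indexing does not:

```python
import numpy as np

nums = np.array([93, 99, 49, 35, 92, 14, 41], dtype=np.int64)

# where-style masking must fill the rejected slots with NaN,
# which int64 cannot represent, so the result is promoted to float64
masked = np.where(nums < 50, nums, np.nan)
print(masked.dtype)  # float64

# boolean indexing simply drops the rejected values, so int64 survives
filtered = nums[nums < 50]
print(filtered.dtype)  # int64
print(list(filtered))  # [49, 35, 14, 41]
```

So when every masked value is dropped anyway, selecting with a boolean index on the underlying values is one way to keep the integer dtype.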
python|python-xarray
5
1,908,338
39,627,108
Python BeautifulSoup Mac Installation Error
<p>I am completely new to all things programming. As I am working to learn the basics of Python, I have run into a problem that I've been unable to work through by reading and Googling.</p> <p>I am trying to install BeautifulSoup, and I thought I had done so successfully, but when I try to test whether or not it's installed correctly I get an error.</p> <p>I am using PyCharm and typed the following into the Python Console:</p> <pre><code>&gt;&gt;&gt; from bs4 import BeautifulSoup </code></pre> <p>And I receive the following error:</p> <pre><code>Traceback (most recent call last): File "&lt;input&gt;", line 1, in &lt;module&gt; File "/Applications/PyCharm CE.app/Contents/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) ImportError: No module named 'bs4' </code></pre> <p>I read through a previous thread about a BeautifulSoup install error and one of the things that was mentioned was to check the preferences settings in PyCharm to ensure that it's using the right version of Python . . .</p> <p><img src="https://i.stack.imgur.com/SBcmp.png" alt="PyCharm Project Interpreter Screenshot"></p> <p>Anyway, I can't seem to figure out what's wrong, so any insight and help in resolving this issue would be tremendously appreciated.</p>
<p>You can use pip to install beautifulsoup on mac, by typing in the following command in Terminal:</p> <pre><code>pip install beautifulsoup4 </code></pre> <p>You might be facing some permission problems if you are running the OS X preinstalled python as interpreter. I would suggest installing python with Homebrew if that's the case. </p>
python|beautifulsoup
-2
1,908,339
16,235,817
Python equivalent of PHP Mcrypt
<p>I wrote the following DES encryption scheme in PHP . It uses a static Initialization Vector to make sure the output and input is one to one mapped </p> <p>PHP code :</p> <pre><code>function encrypt($plaintext, $key) { # use an explicit encoding for the plain text $plaintext_utf8 = utf8_encode($plaintext); # create a random IV to use with CBC encoding # $iv_size = mcrypt_get_iv_size(MCRYPT_DES, MCRYPT_MODE_CBC); # $iv = mcrypt_create_iv($iv_size, MCRYPT_RAND); # defining a constant IV to use with CBC encoding $iv = "kritanj "; # creates the DES cipher text $ciphertext = mcrypt_encrypt(MCRYPT_DES, $key, $plaintext_utf8, MCRYPT_MODE_CBC, $iv); # prepend the IV for it to be available for decryption $ciphertext = $iv . $ciphertext; # encode the resulting cipher text so it can be represented by a string $ciphertext_base64 = base64_encode($ciphertext); return $ciphertext_base64; } function decrypt($ciphertext_base64, $key) { $ciphertext_dec = base64_decode($ciphertext_base64); # retrieves the IV, iv_size should be created using mcrypt_get_iv_size() # $iv_size = mcrypt_get_iv_size(MCRYPT_DES, MCRYPT_MODE_CBC); # $iv = substr($ciphertext_dec, 0, $iv_size); $iv = "kritanj "; # retrieves the cipher text (everything except the $iv_size in the front) $ciphertext_dec = substr($ciphertext_dec, $iv_size); # decrypting the DES cipher text $plaintext_utf8_dec = mcrypt_decrypt(MCRYPT_DES, $key, $ciphertext_dec, MCRYPT_MODE_CBC, $iv); return $plaintext_utf8_dec; } $plaintext = "The secret message is : " ; $key = "7chrkey" ; $ciphertext = encrypt($plaintext, $key); echo "Encrypted: ".$ciphertext ."&lt;br&gt;"; $plaintext1 = decrypt($ciphertext, $key); echo "Decrypted: ".$plaintext1 ; </code></pre> <p>Output :</p> <pre><code>Encrypted: a3JpdGFuaiB3DY63WHnE9led43FyFe53HlhUEr+vVJg= Decrypted: The secret message is : </code></pre> <p>Now I tried to write a equivalent code in python Here is what I managed</p> <p>Python code:</p> <pre><code>import binascii def 
conv_sevnCharKey_to_64bit_DESkey(key): st = bin(int(binascii.hexlify(key), 16)) st = st[2:] des_key = '' for i in xrange(8) : parity = 0 sevnBits = st[i*7:i*7+7] for c in sevnBits : if c in '1' : parity += 1 if parity % 2 == 0 : eigthBytes = sevnBits + '1' else : eigthBytes = sevnBits + '0' des_key += eigthBytes n = int('0b'+des_key, 2) DESkey = binascii.unhexlify('%x' % n) return DESkey from pyDes import * plaintext = "The secret message is : " iv = 'kritanj ' key = conv_sevnCharKey_to_64bit_DESkey('7chrkey') plaintext_utf8 = plaintext.encode('utf-8') # initializing DES in cbc mode k = des(key, CBC, iv) # encrypting ciphertext = k.encrypt(plaintext_utf8) # prepending the IV ciphertext = iv + ciphertext; # encoding to base64 ciphertext_base64 = ciphertext.encode('base64','strict') print "Encrypted: ", ciphertext_base64 # decoding base64 ciphertext = ciphertext_base64.decode('base64','strict') # stripping the IV and decrypting plaintext_utf8 = k.decrypt(ciphertext[8:]) # decoding utf-8 (if necessary) plaintext1 = plaintext_utf8.decode('utf-8') print "Decrypted: ", plaintext1 assert plaintext1 == plaintext </code></pre> <p>Why are the ciphertexts different? And is there a way to make sure they are identical?</p> <p>[I want to make a 'meet in the middle' attack on a text encrypted under a double DES encryption using the above PHP code. But since I can't recreate the encryption I can't implement the attack]</p>
<p>Looks like your scheme to pad the key differs from the PHP code: </p> <pre><code>from pyDes import * plaintext = "The secret message is : " iv = 'kritanj ' key = '7chrkey' plaintext = plaintext.encode("utf-8") iv = 'kritanj ' k = des(key+'\0', CBC, iv) print (iv + k.encrypt(plaintext)).encode("base64", "strict") </code></pre> <p>Output:</p> <pre><code>'a3JpdGFuaiB3DY63WHnE9led43FyFe53HlhUEr+vVJg=\n' </code></pre> <p>In short: padding the encryption key with a NULL byte will yield the same result as your PHP code.</p>
php|python|encryption|mcrypt|des
2
1,908,340
31,978,476
Why a=1, b=1, id(a) == id(b) but a=1.0, b=1.0, id(a) != id(b) in python?
<pre><code>a = 1 b = 1 id(a) == id(b) a = 1.0 b = 1.0 id(a) != id(b) </code></pre> <p>Why is <code>id(a) != id(b)</code> in Python when <code>a</code> and <code>b</code> are decimal (float) values?<br> Will Python create two objects when the number is a decimal?</p>
<p>The only reason why <code>id(1) == id(1)</code> is that the low integers are cached for performance. Try <code>id(1000) == id(1000)</code></p> <p>Actually, sometimes that works. A better test allocates things in different statements:</p> <pre><code>&gt;&gt;&gt; x = 1 &gt;&gt;&gt; y = 1 &gt;&gt;&gt; id(x) == id(y) True &gt;&gt;&gt; x = 1000 &gt;&gt;&gt; y = 1000 &gt;&gt;&gt; id(x) == id(y) False &gt;&gt;&gt; &gt;&gt;&gt; id(1000) == id(1000) True </code></pre> <p>The same thing can happen with strings, as well, under even more conditions:</p> <pre><code>&gt;&gt;&gt; x = 'abcdefg' &gt;&gt;&gt; y = 'abcdefg' &gt;&gt;&gt; x is y True </code></pre> <p>The bottom line is that using <code>is</code> (or comparing <code>id()</code> values, which is essentially the same thing, but slower) to determine if two objects are identical is only a good strategy under certain circumstances, because Python will cache objects for performance.</p> <p>One hard and fast rule is that two different <em>mutable</em> objects <em>will</em> have different <code>id</code> values, but as the commenters below point out, there are no guarantees as to whether multiple <em>immutable</em> objects of the same value will or will not be created.</p> <p>It is easier for the interpreter to cache things when you use literal values. If you make it calculate things, then you can see which things it works really hard to cache, vs. 
which things it opportunistically caches because it noticed them in proximity:</p> <pre><code>&gt;&gt;&gt; x = 1000 &gt;&gt;&gt; y = 2000 // 2 &gt;&gt;&gt; x is y False &gt;&gt;&gt; x == y True &gt;&gt;&gt; x = 1 &gt;&gt;&gt; y = 2 // 2 &gt;&gt;&gt; x is y True &gt;&gt;&gt; &gt;&gt;&gt; x = 'abcdefg' &gt;&gt;&gt; y = 'abcdefg' &gt;&gt;&gt; x is y True &gt;&gt;&gt; y = 'abc' + 'defg' &gt;&gt;&gt; x is y True &gt;&gt;&gt; x = 'abcdefghijklmnopqrstuvwxyz' &gt;&gt;&gt; y = 'abcdefghijklmnopqrstuvwxyz' &gt;&gt;&gt; x is y True &gt;&gt;&gt; y = 'abcdefghijklm' + 'nopqrstuvwxyz' &gt;&gt;&gt; x is y False &gt;&gt;&gt; x == y True </code></pre>
python
8
1,908,341
32,125,426
Raspberry Pi - Node.js run Python script to continuously sample ADC
<p>I am trying to host a local server (using Node.js) on a Raspberry Pi. The Pi has an ADC (MCP3008) connected to it, and I have a Python script that continuously samples the ADC and prints the current value. I want to have the Node server run the Python script, and whenever it sees a print statement, to just do a console.log(current value) for the time being. I am new to Node and web development in general, so it may be something simple that I'm missing so that Node will continuously receive data from the Python script. I'm trying to use Socket.io at the moment, as that seems to make sense as the method for Node to see changes from the Python script, but maybe this isn't the best way to do it. The basic webpage is from a tutorial I found (<a href="http://www.jaredwolff.com/blog/raspberry-pi-getting-interactive-with-your-server-using-websockets/" rel="nofollow">http://www.jaredwolff.com/blog/raspberry-pi-getting-interactive-with-your-server-using-websockets/</a>). The code I am currently using is here:</p> <pre><code>var app = require('http').createServer(handler) , io = require('socket.io').listen(app) , url= require('url') , fs = require('fs') , gpio = require('onoff').Gpio , PythonShell = require('python-shell'); app.listen(5000); function handler (req, res) { var path = url.parse(req.url).pathname; if (path == '/') { index = fs.readFile(__dirname+'/public/index.html', function(error,data) { if (error) { res.writeHead(500); return res.end("Error: unable to load index.html"); } res.writeHead(200,{'Content-Type': 'text/html'}); res.end(data); }); } else if( /\.(js)$/.test(path) ) { index = fs.readFile(__dirname+'/public'+path, function(error,data) { if (error) { res.writeHead(500); return res.end("Error: unable to load " + path); } res.writeHead(200,{'Content-Type': 'text/plain'}); res.end(data); }); } else { res.writeHead(404); res.end("Error: 404 - File not found."); } } // Python var pyshell = new PythonShell('mcp3008.py'); pyshell.run('mcp3008.py', function 
(err, results) { if (err) throw err; console.log('Results: %j', results); }); io.sockets.on('connection', function (socket) { pyshell.on('message', function (message) { console.log(message); }); }); </code></pre> <p>Thank you for any hints or help that you can provide!</p>
<p>As jfriend00 recommended, I looked into node.js solutions. I had previously tried this, using several mcp3008 packages available on npm, but none of them successfully installed on my Raspberry Pi (model B). However, I ended up rewriting the one located here (<a href="https://github.com/fiskeben/mcp3008.js" rel="nofollow">https://github.com/fiskeben/mcp3008.js</a>) as a separate .js file, included it with my code (along with some work from the npm spi library), and put it into a loop to read the ADC pin. That's working for now, and should be good enough for my current needs, but it still seems like a more processor-intensive solution than it should be. Thanks for your feedback!</p>
python|node.js|socket.io|raspberry-pi
1
1,908,342
9,697,667
Compare strings in python
<p>How do I compare a string and a list of character values?</p> <pre><code>l=['s','t','a','k','','o','v','e','r'] s='stack over' </code></pre> <p>How can I compare the contents of the list and <code>s</code>? If both are equal it should return zero; if one is greater than the other, return a positive value; if lesser, a negative value. I want to compare the above list <code>l</code> and string <code>s</code>. Please tell me how to do it with Python code.</p>
<pre><code>l=['s','t','a','k','','o','v','e','r'] s='stack over' cmp(l, list(s)) </code></pre> <p>returns 1 because <code>l</code> is greater than <code>s</code> (the fourth position's <code>k</code> in <code>l</code> is greater than <code>c</code> in <code>s</code>)</p> <pre><code>l=['s','t','a','c', 'k',' ','o','v','e','r'] s='stack over' cmp(l, list(s)) </code></pre> <p>returns 0 as they are "equal".</p>
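Note that `cmp` only exists in Python 2; in Python 3 a common replacement (a sketch, assuming both operands are plain comparable sequences) is the `(a > b) - (a < b)` idiom:

```python
def cmp3(a, b):
    # negative, zero, or positive, like Python 2's cmp()
    return (a > b) - (a < b)

l = ['s', 't', 'a', 'k', '', 'o', 'v', 'e', 'r']
s = 'stack over'

print(cmp3(l, list(s)))                    # 1: 'k' > 'c' at the fourth position
print(cmp3(list('stack over'), list(s)))   # 0: equal sequences
```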
python
7
1,908,343
1,375,283
save python output to log
<p>I have a python script the runs this code:</p> <pre><code>strpath = "sudo svnadmin create /svn/repos/" + short_name os.popen (strpath, 'w') </code></pre> <p>How can I get the output of that command stored in a variable or written to a log file in the current directory?</p> <p>I know, there may not be an output, but if there is, I need to know.</p>
<p>Use the <code>'r'</code> mode to open the pipe instead:</p> <pre><code>f = os.popen (strpath, 'r') for line in f: print line f.close() </code></pre> <p>See the documentation for <a href="http://docs.python.org/library/os.html#os.popen" rel="nofollow noreferrer"><code>os.popen()</code></a> for more information.</p> <p>The <a href="http://docs.python.org/library/subprocess.html" rel="nofollow noreferrer"><code>subprocess</code></a> module is a better way to execute external commands like this, because it allows much more control over the process execution, as well as providing access to both input and output streams simultaneously.</p>
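A short sketch of the `subprocess` approach mentioned above (using a portable stand-in command here, since `svnadmin` may not be installed):

```python
import subprocess
import sys

# stand-in for the svnadmin invocation; any command-line program works the same way
result = subprocess.run(
    [sys.executable, "-c", "print('repository created')"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # repository created

# the captured output can then be stored in a variable or written to a log file
with open("command.log", "w") as log:
    log.write(result.stdout)
```

`check=True` raises `CalledProcessError` on a non-zero exit status, and `result.stderr` holds anything the command wrote to its error stream.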
python|logging|variables
3
1,908,344
1,854,827
How can a total, complete beginner read source code?
<p>I am a complete, total beginner in programming, although I do have knowledge of CSS and HTML.</p> <p>I would like to learn Python. I downloaded lots of source code but the amount of files and the complexity really confuses me. I don't know where to begin. Is there a particular order I should look for?</p> <p>Thanks.</p> <p>EDIT: Sorry guys, I forgot to mention that I already have both the online tutorial and a couple of books handy. I basically I don't quite understand how to "dismantle" and understand complex source code, in order to grasp programming techniques and concepts. </p> <p>EDIT2: Thanks for the extremely quick comments, guys. I really appreciate it. This website is awesome.</p>
<p>Have you looked at these:</p> <p><a href="https://stackoverflow.com/questions/207701/python-tutorial-for-total-beginners">Python tutorial for total beginners?</a></p> <p><a href="https://stackoverflow.com/questions/34570/what-is-the-best-quick-read-python-book-out-there">What is the best quick-read Python book out there?</a></p> <p><a href="https://stackoverflow.com/search?q=[python]+books">SO Python Book Search</a></p>
python|coding-style|code-readability
9
1,908,345
62,965,816
Custom 2D list sort with lambda expression (in 1 line) in Python
<p>I have a python 2D list like this-</p> <pre><code>[[3,4],[1,2],[2,1],[6,5]] </code></pre> <p>I like to have it sorted in both direction, row and column-wise. So, my desired output would be like this-</p> <pre><code>[[1, 2], [1, 2], [3, 4], [5, 6]] </code></pre> <p>What I have done is-</p> <pre><code>list.sort(key = lambda x: (x[1], x[0])) </code></pre> <p>And what I am getting is-</p> <pre><code>[[2, 1], [1, 2], [3, 4], [6, 5]] </code></pre> <p>Can anyone please help to sort it <a href="https://stackoverflow.com/a/16585637/2193439"><code>in-place</code></a> with <a href="https://realpython.com/python-lambda/#:%7E:text=Python%20does%20not%20encourage%20using,return%20one%20or%20more%20functions." rel="nofollow noreferrer"><code>lambda</code></a> expression?</p>
<p>The <code>key</code> parameter (and <code>lambda</code>) is not meant to modify the content of the <code>list</code>. Instead, it is meant to be used for sorting according to how each element evaluates when the <code>key</code> function is applied to it. However, you can use <code>key</code>'s side-effects to achieve what you are after by calling <code>.sort()</code> on your <code>key</code> function's argument. Since <code>.sort()</code>'s result is just <code>None</code>, you also need to provide the argument itself to be used for the actual sorting:</p> <pre><code>l = [[3,4],[1,2],[2,1],[6,5]] l.sort(key=lambda x: (x.sort(), x)) print(l) # [[1, 2], [1, 2], [3, 4], [5, 6]] </code></pre> <p>This is not considered good programming practice, though.</p> <p>A cleaner and much more efficient, but obviously not 1-line, approach would be:</p> <pre><code>l = [[3,4],[1,2],[2,1],[6,5]] for x in l: x.sort() l.sort() print(l) # [[1, 2], [1, 2], [3, 4], [5, 6]] </code></pre> <hr /> <p>The <code>key</code> based approach is also significantly less efficient, since it tries to sort the inner lists at each <code>key</code> call, which we should expect to occur <code>n log n</code> times, against the <code>n</code> times strictly required; thus some of the inner lists are inevitably being sorted more than necessary. 
Instead, looping through the outer list explicitly sorts each inner list once.</p> <p>Just to give some idea of the timings:</p> <pre><code>import random import copy import collections def double_sort_loop(seq): for x in seq: x.sort() seq.sort() def double_sort_deque(seq): collections.deque(map(lambda x: x.sort(), seq), maxlen=0) seq.sort() def double_sort_key(seq): seq.sort(key=lambda x: (x.sort(), x)) def gen_input(n, m, v_min=None, v_max=None): if v_min is None: v_min = 0 if v_max is None: v_max = v_min + (2 * m * n) return [[random.randint(v_min, v_max) for _ in range(m)] for _ in range(n)] random.seed(0) base = gen_input(100000, 10) %timeit seq = copy.deepcopy(base); double_sort_loop(seq) # 1 loop, best of 3: 1.03 s per loop %timeit seq = copy.deepcopy(base); double_sort_deque(seq) # 1 loop, best of 3: 1.02 s per loop %timeit seq = copy.deepcopy(base); double_sort_key(seq) # 1 loop, best of 3: 1.19 s per loop </code></pre> <p>(<strong>EDITED</strong>)</p>
python|arrays|list|algorithm|lambda
1
1,908,346
32,353,694
How to enable caching for Selenium's webdriver Firefox?
<p>Every time I run <code>webdriver.Firefox.get('someurl')</code>, a new Firefox driver is shown and it doesn't cache any webpage that it loads. I want Selenium to tell every Firefox instance it launches to cache webpages, so that on future runs they don't have to be loaded from the beginning. How do I do that?</p> <pre><code>def setup(self): print 'running fp' self.path = r'path to my profile folder' self.profile = webdriver.FirefoxProfile(self.path) self.web = webdriver.Firefox(self.profile) self.cache = self.web.application_cache </code></pre>
<p>How about creating a new Firefox <em>profile</em> with appropriate cache settings and then use it with <code>Selenium</code>?</p> <p>Take a look at this: <a href="http://www.toolsqa.com/selenium-webdriver/custom-firefox-profile/" rel="nofollow">http://www.toolsqa.com/selenium-webdriver/custom-firefox-profile/</a></p> <p>And then in your <code>Python</code> script:</p> <pre><code>from selenium import webdriver firefox_profile = webdriver.FirefoxProfile('path_to_your_profile') browser = webdriver.Firefox(firefox_profile) cache = browser.application_cache </code></pre>
python|firefox|selenium
3
1,908,347
32,240,013
'charmap' codec can't encode characters
<p>I'm using tweepy and get this error when printing tweet messages on the screen (Windows).</p> <pre><code>#!/usr/bin/env python from tweepy import Stream from tweepy import OAuthHandler from tweepy.streaming import StreamListener import json #consumer key, consumer secret, access token, access secret. ckey = 'xyz' csecret = 'xyz' atoken = 'xyz' asecret = 'xyz' class Listener(StreamListener): def on_data(self, data): print json.loads(data)['text'] return True def on_error(self, status): print status auth = OAuthHandler(ckey, csecret) auth.set_access_token(atoken, asecret) twitterStream = Stream(auth, Listener()) twitterStream.filter(track=['#hash1', '#hash2'], languages=['en']) </code></pre> <blockquote> <pre><code>&gt; Traceback (most recent call last): File &gt; "C:....twitterSentiment.py", &gt; line 34, in &lt;module&gt; &gt; twitterStream.filter(track=['#hash1', '#hash2'], languages=['en']) File &gt; line 430, in filter &gt; self._start(async) File "C:......streaming.py", &gt; line 346, in _start &gt; self._run() File "C:.....streaming.py", &gt; line 286, in _run &gt; raise exception UnicodeEncodeError: 'charmap' codec can't encode characters in position 108-111: character maps to &lt;undefined&gt; </code></pre> </blockquote> <p>It is caused by Windows not supporting all characters. Is there a workaround for this?</p>
<p>You are getting this error because the Windows console is not able to print the unicode part of the tweet text. Encode it to <code>utf-8</code> before printing:</p> <pre><code>def on_data(self, data): print json.loads(data)['text'].encode('utf-8') return True </code></pre>
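A related option when switching the terminal encoding is not possible is `errors='replace'`, which substitutes `?` for anything the target codec cannot represent (ASCII stands in below for a limited Windows codepage):

```python
text = "caf\u00e9 \u2013 r\u00e9sum\u00e9"  # 'café – résumé'

# strict encoding to a limited charset raises, like the charmap error
try:
    text.encode("ascii")
except UnicodeEncodeError as exc:
    print("strict:", exc.reason)

# errors='replace' degrades gracefully instead of raising
safe = text.encode("ascii", errors="replace").decode("ascii")
print(safe)  # caf? ? r?sum?
```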
python|twitter|tweepy
6
1,908,348
28,327,658
Simple form submit python
<p>I have simple python app which gets name and phone from html form submit using post method. I have this in my index.html: </p> <pre><code>&lt;form action="http://127.0.0.1:5000/get_phone/" method = "POST"&gt; First name: &lt;input type="text" name="firstname" &gt; Phone: &lt;input type="text" name="lastname" &gt; &lt;input type="submit" value="Submit"&gt; &lt;/form&gt; </code></pre> <p>And have my <code>getPhone.py</code>:</p> <pre><code>from flask import Flask app = Flask(__name__) @app.route('/get_phone/', methods=['POST', 'GET']) def get_phone(): if request.method == 'POST': print ('First name:', request.form['firstname']) print ('Phone:', request.form['lastname']) return 'Take a look at your terminal!' if __name__ == '__main__': app.run() </code></pre> <p>But when I submit my form with I get the following message in my browser:</p> <blockquote> <p>Internal Server Error</p> <p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p> </blockquote> <p>In my console I have this:</p> <pre><code>&gt; * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) 127.0.0.1 - - [04/Feb/2015 22:20:04] "GET /get_phone/ HTTP/1.1" 500 - 127.0.0.1 - - [04/Feb/2015 22:20:12] "POST /get_phone/ HTTP/1.1" 500 - 127.0.0.1 - - [04/Feb/2015 22:26:22] "GET /get_phone/ HTTP/1.1" 500 - </code></pre> <p>How to fix this?</p> <p>UPD: consoleLog with debug = true:</p> <pre><code> * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) * Restarting with stat 127.0.0.1 - - [04/Feb/2015 22:35:30] "POST /get_phone/ HTTP/1.1" 500 - Traceback (most recent call last): File "C:\Python34\lib\site-packages\flask\app.py", line 1836, in __call__ return self.wsgi_app(environ, start_response) File "C:\Python34\lib\site-packages\flask\app.py", line 1820, in wsgi_app response = self.make_response(self.handle_exception(e)) File "C:\Python34\lib\site-packages\flask\app.py", line 1403, in handle_except 
ion reraise(exc_type, exc_value, tb) File "C:\Python34\lib\site-packages\flask\_compat.py", line 33, in reraise raise value File "C:\Python34\lib\site-packages\flask\app.py", line 1817, in wsgi_app response = self.full_dispatch_request() File "C:\Python34\lib\site-packages\flask\app.py", line 1477, in full_dispatch _request rv = self.handle_user_exception(e) File "C:\Python34\lib\site-packages\flask\app.py", line 1381, in handle_user_e xception reraise(exc_type, exc_value, tb) File "C:\Python34\lib\site-packages\flask\_compat.py", line 33, in reraise raise value File "C:\Python34\lib\site-packages\flask\app.py", line 1475, in full_dispatch _request rv = self.dispatch_request() File "C:\Python34\lib\site-packages\flask\app.py", line 1461, in dispatch_requ est return self.view_functions[rule.endpoint](**req.view_args) File "C:\Users\nevernight\Desktop\visa + py\getPhone.py", line 6, in get_phone if request.method == 'POST': NameError: name 'request' is not defined 127.0.0.1 - - [04/Feb/2015 22:35:32] "GET /get_phone/?__debugger__=yes&amp;cmd=resou rce&amp;f=style.css HTTP/1.1" 200 - 127.0.0.1 - - [04/Feb/2015 22:35:32] "GET /get_phone/?__debugger__=yes&amp;cmd=resou rce&amp;f=jquery.js HTTP/1.1" 200 - 127.0.0.1 - - [04/Feb/2015 22:35:33] "GET /get_phone/?__debugger__=yes&amp;cmd=resou rce&amp;f=debugger.js HTTP/1.1" 200 - 127.0.0.1 - - [04/Feb/2015 22:35:34] "GET /get_phone/?__debugger__=yes&amp;cmd=resou rce&amp;f=ubuntu.ttf HTTP/1.1" 200 - 127.0.0.1 - - [04/Feb/2015 22:35:34] "GET /get_phone/?__debugger__=yes&amp;cmd=resou rce&amp;f=console.png HTTP/1.1" 200 - 127.0.0.1 - - [04/Feb/2015 22:35:34] "GET /get_phone/?__debugger__=yes&amp;cmd=resou rce&amp;f=source.png HTTP/1.1" 200 - 127.0.0.1 - - [04/Feb/2015 22:35:35] "GET /get_phone/?__debugger__=yes&amp;cmd=resou rce&amp;f=console.png HTTP/1.1" 200 - </code></pre>
<p>The issue is that you haven't imported <code>request</code>. Like this:</p> <pre><code>from flask import Flask, request </code></pre>
python|flask
2
1,908,349
34,750,575
RethinkDb do function based secondary indexes update themselves dynamically?
<p>Let's say that I need to maintain an index on a table where multiple documents can relate to the same item_id (not the primary key, of course).</p> <p>Can a secondary compound index, based on the result of a function that for any item_id returns the most recent document matching a condition, update itself whenever a newer document gets inserted?</p> <p>This table already holds 1.2 million documents in just 25 days, so it's a big-data case here, as it will keep growing and must always keep the old records to build whatever pivots are needed over the years.</p>
<p>I'm not 100% sure I understand the question, but if you have a secondary index and insert a new document or change an old document, the document will be in the correct place in the index once the write completes. So if you had a secondary index on a timestamp, you could write <code>r.table('items').orderBy(index: r.desc('timestamp')).limit(n)</code> to get the most recent <code>n</code> documents (and you could also subscribe to changes on that).</p>
indexing|rethinkdb|rethinkdb-python
0
1,908,350
34,627,380
Pandas:drop_duplicates() based on condition in python
<p>Given the data set below:</p> <pre><code>data_input: A B 1 C13D C07H 2 C07H C13D 3 B42C B65H 4 B65H B42C 5 A45B A47C </code></pre> <p>i.e. row 1 and row 2 in <code>data_input</code> are the same; I just want to keep one, so drop row 2.</p> <p>I want the output as below:</p> <pre><code>data_output: A B 1 C13D C07H 2 B42C B65H 3 A45B A47C </code></pre>
<p>You can create a third column <code>'C'</code> based on <code>'A'</code> and <code>'B'</code> and use it to find duplicates as such:</p> <pre><code>df['C'] = df['A'] + df['B'] df['C'] = df['C'].apply(lambda x: ''.join(sorted(x))) df = df.drop_duplicates(subset='C')[['A', 'B']] </code></pre>
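One caveat: sorting the concatenated characters would also merge two different pairs that merely happen to contain the same characters. A slightly more robust variant (a sketch using the question's data) keys on the sorted pair of values instead:

```python
import pandas as pd

df = pd.DataFrame({'A': ['C13D', 'C07H', 'B42C', 'B65H', 'A45B'],
                   'B': ['C07H', 'C13D', 'B65H', 'B42C', 'A47C']})

# treat (A, B) and (B, A) as the same unordered pair
key = pd.Series([tuple(sorted(pair)) for pair in zip(df['A'], df['B'])],
                index=df.index)
out = df[~key.duplicated()].reset_index(drop=True)
print(out)
#       A     B
# 0  C13D  C07H
# 1  B42C  B65H
# 2  A45B  A47C
```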
python|pandas
7
1,908,351
12,212,571
Unable to Detect GSL Header Folder When Installing mlpy
<p>I am having the following problem installing MlPy:</p> <pre><code>sudo python setup.py install running install running build running build_py running build_ext building 'mlpy.gsl' extension llvm-gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch i386 -arch x86_64 -pipe -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -I/Library/Python/2.7/site-packages/numpy-1.8.0.dev_f2f0ac0_20120725-py2.7-macosx-10.8-x86_64.egg/numpy/core/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c mlpy/gsl/gsl.c -o build/temp.macosx-10.7-intel-2.7/mlpy/gsl/gsl.o mlpy/gsl/gsl.c:223:24: error: gsl/gsl_sf.h: No such file or directory mlpy/gsl/gsl.c:224:39: error: gsl/gsl_statistics_double.h: No such file or directory In file included from /Library/Python/2.7/site-packages/numpy-1.8.0.dev_f2f0ac0_20120725-py2.7-macosx-10.8-x86_64.egg/numpy/core/include/numpy/ndarraytypes.h:1722, from /Library/Python/2.7/site-packages/numpy-1.8.0.dev_f2f0ac0_20120725-py2.7-macosx-10.8-x86_64.egg/numpy/core/include/numpy/ndarrayobject.h:17, from /Library/Python/2.7/site-packages/numpy-1.8.0.dev_f2f0ac0_20120725-py2.7-macosx-10.8-x86_64.egg/numpy/core/include/numpy/arrayobject.h:15, from mlpy/gsl/gsl.c:227: /Library/Python/2.7/site-packages/numpy-1.8.0.dev_f2f0ac0_20120725-py2.7-macosx-10.8-x86_64.egg/numpy/core/include/numpy/npy_deprecated_api.h:11:2: warning: #warning "Using deprecated NumPy API, disable it by #defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" mlpy/gsl/gsl.c: In function ‘__pyx_pf_4mlpy_3gsl_sf_gamma’: mlpy/gsl/gsl.c:835: warning: implicit declaration of function ‘gsl_sf_gamma’ mlpy/gsl/gsl.c: In function ‘__pyx_pf_4mlpy_3gsl_1sf_fact’: mlpy/gsl/gsl.c:887: warning: implicit declaration of function 
‘gsl_sf_fact’ mlpy/gsl/gsl.c: In function ‘__pyx_pf_4mlpy_3gsl_2stats_quantile_from_sorted_data’: mlpy/gsl/gsl.c:1123: warning: implicit declaration of function ‘gsl_stats_quantile_from_sorted_data’ mlpy/gsl/gsl.c:223:24: error: gsl/gsl_sf.h: No such file or directory mlpy/gsl/gsl.c:224:39: error: gsl/gsl_statistics_double.h: No such file or directory In file included from /Library/Python/2.7/site-packages/numpy-1.8.0.dev_f2f0ac0_20120725-py2.7-macosx-10.8-x86_64.egg/numpy/core/include/numpy/ndarraytypes.h:1722, from /Library/Python/2.7/site-packages/numpy-1.8.0.dev_f2f0ac0_20120725-py2.7-macosx-10.8-x86_64.egg/numpy/core/include/numpy/ndarrayobject.h:17, from /Library/Python/2.7/site-packages/numpy-1.8.0.dev_f2f0ac0_20120725-py2.7-macosx-10.8-x86_64.egg/numpy/core/include/numpy/arrayobject.h:15, from mlpy/gsl/gsl.c:227: /Library/Python/2.7/site-packages/numpy-1.8.0.dev_f2f0ac0_20120725-py2.7-macosx-10.8-x86_64.egg/numpy/core/include/numpy/npy_deprecated_api.h:11:2: warning: #warning "Using deprecated NumPy API, disable it by #defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" mlpy/gsl/gsl.c: In function ‘__pyx_pf_4mlpy_3gsl_sf_gamma’: mlpy/gsl/gsl.c:835: warning: implicit declaration of function ‘gsl_sf_gamma’ mlpy/gsl/gsl.c: In function ‘__pyx_pf_4mlpy_3gsl_1sf_fact’: mlpy/gsl/gsl.c:887: warning: implicit declaration of function ‘gsl_sf_fact’ mlpy/gsl/gsl.c: In function ‘__pyx_pf_4mlpy_3gsl_2stats_quantile_from_sorted_data’: mlpy/gsl/gsl.c:1123: warning: implicit declaration of function ‘gsl_stats_quantile_from_sorted_data’ mlpy/gsl/gsl.c: In function ‘__Pyx_BufFmt_ProcessTypeChunk’: mlpy/gsl/gsl.c:3761: warning: implicit conversion shortens 64-bit value into a 32-bit value mlpy/gsl/gsl.c:3764: warning: implicit conversion shortens 64-bit value into a 32-bit value lipo: can't open input file: /var/tmp//cc5P7Y1q.out (No such file or directory) error: command 'llvm-gcc-4.2' failed with exit status 1 </code></pre> <p>The folder containing my GSL header files is at: 
/usr/local/brew/Cellar/gsl/1.15/include/gsl E.g. I can find gsl_sf.h in this folder.</p> <p>Will getting the setup.py script to recognise this folder help? If so, how can I do that?</p> <p>Thank you in advance.</p>
<p>I had this same problem installing on Mac because gsl installs into /usr/local/lib and /usr/local/include, which setup.py doesn't expect. I found a post on the mlpy google group with a patched setup.py file. Below is a diff between the updated version and the original. You will need to update mp_libdir and mp_includedir.</p> <pre><code>&lt; #### macports library/include &lt; mp_libdir = ['/usr/local/lib'] &lt; mp_includedir = ['/usr/local/include'] &lt; 66,67c62 &lt; library_dirs=mp_libdir, &lt; include_dirs=py_inc + np_inc + mp_includedir), --- &gt; include_dirs=py_inc + np_inc), 104,105c99 &lt; include_dirs=py_inc + np_inc + mp_includedir, &lt; library_dirs=mp_libdir, --- &gt; include_dirs=py_inc + np_inc, 140,141c134 &lt; library_dirs=mp_libdir, &lt; include_dirs=py_inc + np_inc + mp_includedir), --- &gt; include_dirs=py_inc + np_inc), 178,179c171 &lt; include_dirs=py_inc + np_inc + mp_includedir, &lt; library_dirs=mp_libdir, --- &gt; include_dirs=py_inc + np_inc, 216,217c208 &lt; include_dirs=py_inc + np_inc + mp_includedir, &lt; library_dirs=mp_libdir, --- &gt; include_dirs=py_inc + np_inc, 222,223c213 &lt; include_dirs=py_inc + np_inc + mp_includedir, &lt; library_dirs=mp_libdir, --- &gt; include_dirs=py_inc + np_inc, </code></pre>
python-2.7|failed-installation
3
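To make the idea behind the patch concrete: the fix is to feed the non-standard GSL prefix into each Extension's keyword arguments. The sketch below only builds those argument lists — the Homebrew prefix is the one from the question and the library names are the usual GSL pair; both are assumptions for illustration, not mlpy's actual setup.py.

```python
# Sketch: the extra search paths a setup.py Extension(...) would need when
# GSL lives under a non-standard prefix. The prefix below is an assumption
# taken from the question, not a fixed location.
gsl_prefix = "/usr/local/brew/Cellar/gsl/1.15"

extension_kwargs = {
    # directory that contains the gsl/ header folder (gsl/gsl_sf.h etc.)
    "include_dirs": [gsl_prefix + "/include"],
    # directory that contains libgsl / libgslcblas
    "library_dirs": [gsl_prefix + "/lib"],
    "libraries": ["gsl", "gslcblas"],
}
```

In setup.py these lists would be spliced into each `Extension(...)` call at exactly the spots where the diff above adds `mp_includedir` and `mp_libdir`.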
1,908,352
23,230,816
Fresh Django Server, added Zinnia to project according to official docs, does not work. Many details inside
<p>The server is a virtual Ubuntu machine that I setup today, according to these directions/notes (I made notes where I deviated from the tutorial):</p> <p><a href="https://www.evernote.com/shard/s50/sh/5c4f5ed1-bdb0-40c1-b9de-39fae702d709/d906be4f255c36241a3b76bf6fc7e7b7" rel="nofollow">https://www.evernote.com/shard/s50/sh/5c4f5ed1-bdb0-40c1-b9de-39fae702d709/d906be4f255c36241a3b76bf6fc7e7b7</a></p> <p>That got the Django "It worked!" page at the server's address on the local network. I then followed the instructions at the official site (I can't post too many links, my reputation is too low), and when I tried to do a ./manage.py syncdb, I get the following error:</p> <pre><code>CommandError: One or more models did not validate: zinnia.entry: 'sites' has an m2m relation with model &lt;class 'django.contrib.sites.models.Site'&gt;, which has either not been installed or is abstract. </code></pre> <p>The Zinnia urls (/weblog/ and /comments/) produce 404 errors that indicate that the Zinnia urls, which are definitely in the project's urls.py, are not making it <em>out</em> of urls.py. I suspect the syncdb error has something to do with this:</p> <pre><code>Using the URLconf defined in homepage.urls, Django tried these URL patterns, in this order: ^admin/ The current URL, weblog/, didn't match any of these. </code></pre> <p>To be explicit, starting from a working Django server, I did the following, according to directions (I'm restating the steps I have taken so that it's totally clear):</p> <ul> <li>$ pip install django-blog-zinnia</li> <li>added 'tagging', 'mptt', 'zinnia', to the installed apps in settings.py</li> <li>also added the TEMPLATE_CONTEXT_PROCESSORS to settings.py</li> </ul> <p>I'm also a bit confused about the fact that there is no editable python code in the project directory - does Zinnia run completely like a black box? 
Oh, I also made sure all the requirements were installed, and I pasted the requirements.txt, but the site thought it was code and wouldn't let me post it. Anyways, everything listed on the Zinnia install page is in there.</p>
<p>Make sure you have all of the required installed apps. Note there are a few <code>django.contrib</code> apps that are required, including <code>django.contrib.sites</code>, which your error message indicates you missed. </p> <p>Relevant portion of docs <a href="http://docs.django-blog-zinnia.com/en/v0.14/getting-started/install.html#applications" rel="nofollow">here</a>.</p> <p>EDIT:</p> <p><code>INSTALLED_APPS</code> requires at least the following:</p> <pre><code> INSTALLED_APPS = ( 'django.contrib.auth', 'django.contrib.admin', 'django.contrib.sites', # Note this one is not included by default 'django.contrib.comments', # Note this one is not included by default 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.contenttypes', 'tagging', 'mptt', 'zinnia', ) </code></pre> <p>Also, you'll likely need to add a <code>SITE_ID</code> setting.</p> <pre><code>SITE_ID = 1 </code></pre> <p>Sites framework setup <a href="https://docs.djangoproject.com/en/dev/ref/contrib/sites/#enabling-the-sites-framework" rel="nofollow">here</a>.</p> <p>EDIT 2:</p> <p>Since Django 1.6 <code>django.contrib.comments</code> is a separated project: <a href="https://github.com/django/django-contrib-comments" rel="nofollow"><code>django_comments</code></a>.</p> <p>You must install it as in this <a href="http://django-contrib-comments.readthedocs.org/en/latest/quickstart.html" rel="nofollow">quick install guide</a> and add <code>'django_comments'</code> in <code>INSTALLED_APPS</code> (not <code>'django.contrib.comments'</code>).</p>
python|django|zinnia
5
1,908,353
42,026,345
Classes in File Handling in Python 2.7
<p>I am getting this output in the interactive mode.</p> <pre><code> class test: def p(self): print 'PP' &gt;&gt;&gt; f=open('E:\Python\Roy Progs\Test','w') &gt;&gt;&gt; t=test() &gt;&gt;&gt; import pickle &gt;&gt;&gt; pickle.dump(t,f) &gt;&gt;&gt; f.close() &gt;&gt;&gt; f=open('E:\Python\Roy Progs\Test','r') &gt;&gt;&gt; pickle.load(f).p() PP &gt;&gt;&gt; f.close() &gt;&gt;&gt; =============================== RESTART: Shell =============================== &gt;&gt;&gt; f=open('E:\Python\Roy Progs\Test','r') &gt;&gt;&gt; import pickle &gt;&gt;&gt; pickle.load(f).p() Traceback (most recent call last): File "&lt;pyshell#14&gt;", line 1, in &lt;module&gt; pickle.load(f).p() File "E:\Python\lib\pickle.py", line 1384, in load return Unpickler(file).load() File "E:\Python\lib\pickle.py", line 864, in load dispatch[key](self) File "E:\Python\lib\pickle.py", line 1075, in load_inst klass = self.find_class(module, name) File "E:\Python\lib\pickle.py", line 1132, in find_class klass = getattr(mod, name) AttributeError: 'module' object has no attribute 'test' </code></pre> <p>From the output I realize that the definition of the class (whose object is stored in the file) must be in there in the RAM at the time of retrieving data and using it. I however do not understand why this must be the case, by storing objects in the file am I not storing the class definition also?</p>
<p>The pickle module stores classes by named reference. If you change the name or location of the class, pickle will raise an error. </p> <p>A quick illustration of that can be seen in the interactive interpreter:</p> <pre><code>&gt;&gt;&gt; class test: x = 5 &gt;&gt;&gt; from pickle import dumps &gt;&gt;&gt; dumps(test) 'c__main__\ntest\np0\n.' # pickle is storing a reference to 'test' </code></pre> <p>To successfully call load, pickle must be able to find the previously defined class (which is destroyed when you restart the shell in IDLE).</p>
python-2.7|class|binaryfiles|file-handling
1
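The named-reference behaviour can be verified directly: the pickled bytes contain the module and class name, not the class body, so unpickling fails once the definition is gone. A Python 3 sketch (unlike the Python 2 session above); the throwaway module `demo_mod` is an illustrative device to simulate the IDLE "RESTART", not anything pickle requires:

```python
import pickle
import sys
import types

# Register a throwaway module so the example does not depend on how this
# script itself is executed ("__main__" vs. an embedded namespace).
mod = types.ModuleType("demo_mod")

class Test:
    def p(self):
        return "PP"

Test.__module__ = "demo_mod"
mod.Test = Test
sys.modules["demo_mod"] = mod

data = pickle.dumps(Test())

# The stream holds a *reference* (module + class name), not the class body:
assert b"demo_mod" in data and b"Test" in data
assert b"PP" not in data  # the method's code is nowhere in the pickle

# While the definition is importable, loading works...
assert pickle.loads(data).p() == "PP"

# ...but once the definition is gone (the "RESTART: Shell" in the question),
# unpickling fails because the reference can no longer be resolved.
del sys.modules["demo_mod"]
try:
    pickle.loads(data)
    err = None
except Exception as exc:
    err = type(exc).__name__
```

The failure here surfaces as a `ModuleNotFoundError` in Python 3, playing the same role as the `AttributeError` the Python 2 session shows: in both cases pickle found the stored name but not the live definition.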
1,908,354
47,301,262
how to plot a "group by" dataframe in Bokeh as Bar chart
<p>i have a dataframe </p> <pre><code> suite_name fail Pass Report_datetime 0 VOLTE-VOLTE 5 7 2017-11-14 00:00:00 1 VOLTE-VOLTE 5 7 2017-11-11 00:00:00 2 VOLTE-VOLTE 5 7 2017-11-10 00:00:00 3 VOLTE-VOLTE 5 7 2017-11-09 00:00:00 4 VOLTE-VOLTE 5 7 2017-11-14 00:00:00 5 VOLTE-VOLTE 5 7 2017-11-14 00:00:00 </code></pre> <p>i have grouped it </p> <pre><code>g1=df.groupby( [ 'Report_datetime'] ).sum() print g1 </code></pre> <p><strong>Output :</strong></p> <pre><code>Report_datetime fail Pass 2017-11-14 00:00:00 5 7 2017-11-11 00:00:00 5 7 2017-11-10 00:00:00 5 7 2017-11-10 00:00:00 5 7 ** </code></pre> <p>How to plot this data in Bokeh? Bar.charts are not supported in latest version of Bokeh , so any example with Vbar and Figure would be great</p>
<p>You can use <a href="https://bokeh.pydata.org/en/latest/docs/user_guide/categorical.html#visual-dodge" rel="noreferrer">visual dodge method</a>:</p> <p>First data preparation:</p> <pre><code>g1 = df.groupby('Report_datetime', as_index=False).sum() print (g1) Report_datetime fail Pass 0 2017-11-09 5 7 1 2017-11-10 5 7 2 2017-11-11 5 7 3 2017-11-14 15 21 #convert datetimes to strings g1['Report_datetime'] = g1['Report_datetime'].dt.strftime('%Y-%m-%d') #convert dataframe to dict data = g1.to_dict(orient='list') dates = g1['Report_datetime'].tolist() </code></pre> <hr> <pre><code>from bokeh.core.properties import value from bokeh.io import show, output_file from bokeh.models import ColumnDataSource from bokeh.plotting import figure from bokeh.transform import dodge output_file("dodged_bars.html") source = ColumnDataSource(data=data) #get max possible value of plotted columns with some offset p = figure(x_range=dates, y_range=(0, g1[['fail','Pass']].values.max() + 3), plot_height=250, title="Report", toolbar_location=None, tools="") p.vbar(x=dodge('Report_datetime', -0.25, range=p.x_range), top='fail', width=0.4, source=source, color="#c9d9d3", legend=value("fail")) p.vbar(x=dodge('Report_datetime', 0.25, range=p.x_range), top='Pass', width=0.4, source=source, color="#718dbf", legend=value("Pass")) p.x_range.range_padding = 0.1 p.xgrid.grid_line_color = None p.legend.location = "top_left" p.legend.orientation = "horizontal" show(p) </code></pre> <p><a href="https://i.stack.imgur.com/19Ibc.png" rel="noreferrer"><img src="https://i.stack.imgur.com/19Ibc.png" alt="graph"></a></p>
python|pandas|bokeh
5
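The data-preparation half of the answer — the part that needs no browser — can be exercised on its own: grouping, summing, and converting to the plain-lists dict that `ColumnDataSource` expects. This sketch assumes only pandas; the sample rows mirror the question's table:

```python
import pandas as pd

df = pd.DataFrame({
    "suite_name": ["VOLTE-VOLTE"] * 4,
    "fail": [5, 5, 5, 5],
    "Pass": [7, 7, 7, 7],
    "Report_datetime": pd.to_datetime(
        ["2017-11-14", "2017-11-11", "2017-11-10", "2017-11-14"]
    ),
})

# Sum the numeric columns per date; keep the key as a column, not the index.
g1 = df.groupby("Report_datetime", as_index=False)[["fail", "Pass"]].sum()

# Bokeh's ColumnDataSource wants plain lists keyed by column name, and a
# categorical x_range wants strings rather than Timestamps.
g1["Report_datetime"] = g1["Report_datetime"].dt.strftime("%Y-%m-%d")
data = g1.to_dict(orient="list")
dates = data["Report_datetime"]
```

Only the plotting calls themselves (`figure`, `vbar`, `dodge`) need Bokeh; everything above is plain pandas, including the merge of the two `2017-11-14` rows into a single bar's totals.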
1,908,355
33,662,292
Why do new objects in multiprocessing have the same id?
<p>I tried to create a new object in a process when using the multiprocessing module. However, something confuses me.</p> <p>When I use the multiprocessing module, the id of the new object is the same:</p> <pre><code>for i in range(4): p = multiprocessing.Process(target=worker) p.start() def worker(): # stanford named entity tagger st = StanfordNERTagger(model_path,stanford_ner_path) print id(st) # all the processes print the same id </code></pre> <p>But when I use threading, they are different:</p> <pre><code>for i in range(4): p = threading.Thread(target=worker) p.start() def worker(): # stanford named entity tagger st = StanfordNERTagger(model_path,stanford_ner_path) print id(st) # threads print different ids </code></pre> <p>I am wondering why they are different.</p>
<p><a href="https://docs.python.org/3/library/functions.html#id" rel="nofollow noreferrer">id</a> in CPython returns the pointer of the given object. As threads have shared address space, two different instances of an object will be allocated in two different locations returning two different ids (aka virtual address pointers).</p> <p>This is not the case for separate processes which own their own address space. By chance, they happen to get the same address pointer.</p> <p>Keep in mind that address pointers are virtual, therefore they represent an offset within the process address space itself. That's why they are the same.</p> <p>It is usually better not to rely on id() for distinguishing objects, as new ones might get ids of old ones making hard to track them over time. It usually leads to tricky bugs.</p>
python|multiprocessing
8
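The key fact in the answer — two objects that are alive at the same time in one address space can never share an id, while objects in separate address spaces can — is checkable for the threaded half without any heavyweight tagger. A sketch using `object()` as a stand-in for `StanfordNERTagger` (an assumption; any object works):

```python
import threading

ids = []
keep_alive = []  # hold references so all four objects coexist
lock = threading.Lock()

def worker():
    obj = object()  # stand-in for the StanfordNERTagger instance
    with lock:
        keep_alive.append(obj)   # without this, a freed address could be reused
        ids.append(id(obj))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All four objects live simultaneously in one shared address space, so
# CPython guarantees their ids (virtual addresses) are pairwise distinct.
assert len(set(ids)) == 4
```

In the multiprocessing version each worker reports an offset into its own private virtual address space, so equal values across processes are unsurprising — and, unlike here, equal ids there never imply a shared object. Note also that if the objects were *not* kept alive, even threads could print repeated ids, because a freed address can be handed to the next allocation.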
1,908,356
67,946,301
KivyMD toast appearing at the center of the screen when compiled to APK
<p>I'm New To Kivymd And working on a small project to learn it myself :)</p> <p>I found in kivymd <strong>KitchenSink</strong> Demo to How to use toasts.</p> <p>Here is The Demo code:</p> <pre><code>from kivymd.app import MDApp from kivymd.toast import toast from kivy.lang import Builder KV = ''' BoxLayout: orientation:'vertical' MDToolbar: id: toolbar title: 'Test Toast' md_bg_color: app.theme_cls.primary_color left_action_items: [['menu', lambda x: '']] FloatLayout: MDRaisedButton: text: 'TEST KIVY TOAST' on_release: app.show_toast() pos_hint: {'center_x': .5, 'center_y': .5} ''' class Test(MDApp): def show_toast(self): '''Displays a toast on the screen.''' toast('Test Kivy Toast') def build(self): return Builder.load_string(KV) Test().run() </code></pre> <hr /> <p>But The <strong>Problem</strong> is When I Compiled the exact same code using <strong>buildozer</strong> to <strong>apk</strong> it is toasting at the center of the screen when the button is clicked!</p> <p>Am i doing anything wrong here? The buildozer successfully compiles the apk and the apk opens in my android without any crash, but the toast is not appearing in the bottom of the screen!, Instead it is appearing in the center of the screen.</p> <p>What am I doing wrong in here?</p> <p><a href="https://i.stack.imgur.com/wuDcx.jpg" rel="nofollow noreferrer">Here is the ScreenShot When I Compiled The App</a></p>
<p>You must set the <code>gravity</code> parameter to specify the position. You can read about the <code>kivymd</code> Android <code>toast</code> arguments <a href="https://github.com/kivymd/KivyMD/blob/dc7224ba23d6f6e4826d462ed577227038171137/kivymd/toast/androidtoast/androidtoast.py#L68" rel="nofollow noreferrer">here</a>.</p> <pre class="lang-py prettyprint-override"><code>from kivymd.app import MDApp from kivymd.toast import toast from kivy.lang import Builder from kivy import platform KV = &quot;&quot;&quot; Screen: MDRectangleFlatButton: text: &quot;Show toast&quot; pos_hint: {&quot;center_x&quot;: .5, &quot;center_y&quot;: .5} on_release: app.toast('Text') &quot;&quot;&quot; class Test(MDApp): def build(self): return Builder.load_string(KV) def toast(self, text='', duration=2.5): if platform == 'android': toast(text=text, gravity=80, length_long=duration) else: toast(text=text, duration=duration) Test().run() </code></pre>
android|kivy|kivy-language|kivymd|python-3.9
1
1,908,357
61,396,317
How do I replace a complex string in a column (Python)
<p><strong>Background</strong></p> <p>I have a dataset, df, where I would like to replace the string: 'Connected to call (audio, video or screen sharing)' with 'Connected', as well as replace 'Ended call' with 'Ended'</p> <pre><code>Connect End Connected to call (audio, video or screen sharing) 3/3/2020 2:00:01 PM Ended call 3/3/2020 2:05:00 PM </code></pre> <p><strong>Desired Output:</strong></p> <pre><code>Connect End Connected 3/3/2020 2:00:01 PM Ended 3/3/2020 2:05:00 PM </code></pre> <p><strong>What I have tried:</strong></p> <pre><code>df1 = df["Connect"] = df["Connect"].replace(Connected to call (audio, video, or screen sharing), "Connected") </code></pre> <p>Furthermore, how would I replace strings if they are located within multiple columns? Connect and End? (As shown above)</p> <p>Any suggestion is appreciated.</p>
<p>You need to escape <code>parentheses</code> with <code>\</code> while replacing. That is what is creating the problem. </p> <p>So do something like this:</p> <pre><code>In [133]: df.Connect.str.replace("Connected to call \(audio, video or screen sharing\)", 'Connected') Out[133]: 0 Connected 1 Ended call Name: Connect, dtype: object </code></pre> <p>For all replacements together, you can do this:</p> <pre><code>In [142]: replacements= {'Connect' : {"Connected to call \(audio, video or screen sharing\)" : 'Connected', 'Ended call': 'Ended'}} In [143]: df.replace(replacements, regex=True, inplace=True) In [144]: df Out[144]: Connect 0 Connected 1 Ended </code></pre>
python|pandas|numpy
1
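Rather than escaping the parentheses by hand, `re.escape` can build the same patterns programmatically — handy when the literal strings come from data rather than being typed in. A sketch assuming pandas, with the question's two rows:

```python
import re
import pandas as pd

df = pd.DataFrame({
    "Connect": [
        "Connected to call (audio, video or screen sharing)",
        "Ended call",
    ],
    "End": ["3/3/2020 2:00:01 PM", "3/3/2020 2:05:00 PM"],
})

replacements = {
    "Connected to call (audio, video or screen sharing)": "Connected",
    "Ended call": "Ended",
}

# re.escape turns each literal string (parentheses included) into a regex
# that matches it verbatim, so no manual backslashes are needed.
pattern_map = {re.escape(old): new for old, new in replacements.items()}
df["Connect"] = df["Connect"].replace(pattern_map, regex=True)
```

This keeps the answer's dict-of-patterns shape; applying the same map to other columns (or wrapping it in a per-column dict, as the answer does) works the same way.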
1,908,358
57,249,273
How to detect paragraphs in a text document image for a non-consistent text structure in Python OpenCV
<p>I am trying to identify paragraphs of text in a <code>.pdf</code> document by first converting it into an image then using OpenCV. But I am getting bounding boxes on lines of text instead of paragraphs. How can I set some threshold or some other limit to get paragraphs instead of lines?</p> <p>Here is the sample input image:</p> <p><a href="https://i.stack.imgur.com/Zufql.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Zufql.png" alt="input"></a></p> <p>Here is the output I am getting for the above sample: </p> <p><a href="https://i.stack.imgur.com/NQlUh.png" rel="noreferrer"><img src="https://i.stack.imgur.com/NQlUh.png" alt="output"></a></p> <p>I am trying to get a single bounding box on the paragraph in the middle. I am using <a href="https://stackoverflow.com/questions/23506105/extracting-text-opencv">this</a> code.</p> <pre><code>import cv2 import numpy as np large = cv2.imread('sample image.png') rgb = cv2.pyrDown(large) small = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY) # kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)) kernel = np.ones((5, 5), np.uint8) grad = cv2.morphologyEx(small, cv2.MORPH_GRADIENT, kernel) _, bw = cv2.threshold(grad, 0.0, 255.0, cv2.THRESH_BINARY | cv2.THRESH_OTSU) kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 1)) connected = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel) # using RETR_EXTERNAL instead of RETR_CCOMP contours, hierarchy = cv2.findContours(connected.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) #For opencv 3+ comment the previous line and uncomment the following line #_, contours, hierarchy = cv2.findContours(connected.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) mask = np.zeros(bw.shape, dtype=np.uint8) for idx in range(len(contours)): x, y, w, h = cv2.boundingRect(contours[idx]) mask[y:y+h, x:x+w] = 0 cv2.drawContours(mask, contours, idx, (255, 255, 255), -1) r = float(cv2.countNonZero(mask[y:y+h, x:x+w])) / (w * h) if r &gt; 0.45 and w &gt; 8 and h &gt; 8: cv2.rectangle(rgb, (x, 
y), (x+w-1, y+h-1), (0, 255, 0), 2) cv2.imshow('rects', rgb) cv2.waitKey(0) </code></pre>
<p>This is a classic situation for <a href="https://opencv24-python-tutorials.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.html#dilation" rel="nofollow noreferrer">dilate</a>. Whenever you want to connect multiple items together, you can dilate them to join adjacent contours into a single contour. Here's a simple approach:</p> <ol> <li><p><strong>Obtain binary image.</strong> <a href="https://www.geeksforgeeks.org/python-opencv-cv2-imread-method/" rel="nofollow noreferrer">Load the image</a>, convert to <a href="https://opencv24-python-tutorials.readthedocs.io/en/stable/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html" rel="nofollow noreferrer">grayscale</a>, <a href="https://opencv24-python-tutorials.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_filtering/py_filtering.html#gaussian-filtering" rel="nofollow noreferrer">Gaussian blur</a>, then <a href="https://opencv24-python-tutorials.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_thresholding/py_thresholding.html#otsus-binarization" rel="nofollow noreferrer">Otsu's threshold</a> to obtain a binary image.</p> </li> <li><p><strong>Connect adjacent words together.</strong> We create a <a href="https://opencv24-python-tutorials.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.html#structuring-element" rel="nofollow noreferrer">rectangular kernel</a> and <a href="https://opencv24-python-tutorials.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.html#dilation" rel="nofollow noreferrer">dilate</a> to merge individual contours together.</p> </li> <li><p><strong>Detect paragraphs.</strong> From here we <a href="https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html#findcontours" rel="nofollow noreferrer">find contours</a>, obtain the rectangular <a 
href="https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=boundingrect#boundingrect" rel="nofollow noreferrer">bounding rectangle coordinates</a> and <a href="https://opencv24-python-tutorials.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contours_begin/py_contours_begin.html" rel="nofollow noreferrer">highlight the rectangular contours</a>.</p> </li> </ol> <hr /> <p>Otsu's threshold to obtain a binary image</p> <p><a href="https://i.stack.imgur.com/Mv77u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Mv77u.png" alt="enter image description here" /></a></p> <p>Here's where the magic happens. We can assume that a paragraph is a section of words that are close together, to achieve this we dilate to connect adjacent words</p> <p><a href="https://i.stack.imgur.com/Uz6nQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Uz6nQ.png" alt="enter image description here" /></a></p> <p>Result</p> <p><a href="https://i.stack.imgur.com/RdBvu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RdBvu.png" alt="enter image description here" /></a></p> <pre><code>import cv2 import numpy as np # Load image, grayscale, Gaussian blur, Otsu's threshold image = cv2.imread('1.png') gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) blur = cv2.GaussianBlur(gray, (7,7), 0) thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] # Create rectangular structuring element and dilate kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5)) dilate = cv2.dilate(thresh, kernel, iterations=4) # Find contours and draw rectangle cnts = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: x,y,w,h = cv2.boundingRect(c) cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 2) cv2.imshow('thresh', thresh) cv2.imshow('dilate', dilate) cv2.imshow('image', image) cv2.waitKey() 
</code></pre>
python|image|opencv|image-processing|computer-vision
28
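The dilation step is what turns lines into paragraphs, and the idea can be seen without OpenCV at all: in a binary signal, dilating with a kernel wider than the small gaps (but narrower than the big ones) merges nearby runs into one component. A minimal 1-D sketch using only numpy — the kernel half-width of 2 is an illustrative assumption standing in for `cv2.dilate`'s rectangular kernel and iteration count:

```python
import numpy as np

def dilate_1d(signal, k):
    """Binary dilation: a cell becomes 1 if any cell within k steps
    on either side is 1 (i.e. a (2k+1)-wide kernel)."""
    out = np.zeros_like(signal)
    for shift in range(-k, k + 1):
        rolled = np.roll(signal, shift)
        # zero the wrap-around edge that np.roll introduces
        if shift > 0:
            rolled[:shift] = 0
        elif shift < 0:
            rolled[shift:] = 0
        out |= rolled
    return out

def count_runs(signal):
    """Number of connected 1-runs (the 1-D analogue of contours)."""
    padded = np.concatenate(([0], signal))
    return int(np.sum((padded[1:] == 1) & (padded[:-1] == 0)))

# Two "words" separated by a small gap, plus a third one far away.
text = np.array([1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1], dtype=np.uint8)
before = count_runs(text)               # three separate components
after = count_runs(dilate_1d(text, 2))  # the nearby pair merges; the far one stays apart
```

In 2-D, `cv2.dilate` with a rectangular kernel does this in both directions at once, which is why lines whose gaps are smaller than the kernel's reach fuse into paragraph blobs while distant blocks stay separate.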
1,908,359
27,754,317
File Function in python 3
<p>Hi, I'm new at Python (I have been learning it for like 3 days) and will be thankful for your help.</p> <p>So I made the function you see below and created two .txt documents called oldfile and newfile. To the oldfile I wrote 64 characters (like "asasdasffafdsfsfgsgsdgS"). The function should open both files and write the old file's contents to the new file 50 characters at a time, but when I press start in PyScripter the code runs without doing anything (it won't write anything to newfile.txt).</p> <pre><code>def copy_file(oldfile, newfile): infile = open(oldfile, 'r') outfile = open(newfile, 'w') while True: text = infile.read(50) if text == "": break outfile.write(text) infile.close() outfile.close() return </code></pre>
<p>you need to call your function:</p> <pre><code>def copy_file(oldfile, newfile): infile = open(oldfile, 'r') outfile = open(newfile, 'w') while True: text = infile.read(50) if text == "": break outfile.write(text) infile.close() outfile.close() return copy_file('old_file_name_path', 'new_file_name_path') # call your function with path of file here </code></pre>
python|python-3.x
0
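Beyond calling the function, the same chunked copy reads more idiomatically with context managers (both files get closed even if something fails mid-copy) and `iter` with a sentinel. This is a generic sketch — temporary paths instead of the asker's `E:\` ones, and binary mode is an added assumption to keep byte counts exact:

```python
import functools
import os
import tempfile

def copy_file(oldfile, newfile, chunk_size=50):
    """Copy oldfile to newfile in fixed-size chunks."""
    with open(oldfile, "rb") as infile, open(newfile, "wb") as outfile:
        # iter(callable, sentinel) keeps calling read(chunk_size)
        # until it returns b"" (end of file).
        for chunk in iter(functools.partial(infile.read, chunk_size), b""):
            outfile.write(chunk)

# Self-contained demonstration using a temporary directory.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "old.txt")
dst = os.path.join(tmp, "new.txt")
payload = b"a" * 64  # 64 characters, as in the question
with open(src, "wb") as f:
    f.write(payload)

copy_file(src, dst)
with open(dst, "rb") as f:
    copied = f.read()
```

The loop body is the same as the question's; only the resource handling and the stopping condition are tightened up.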
1,908,360
65,874,755
Is there a way such that I can refresh a board on the screen instead of having it pile on and on in the console?
<p>I've just completed a Connect Four game and now I'm wondering how I can keep the board in one place without having it refresh over and over again in the console. I've searched far and wide on the web for any solutions but couldn't find anything.</p> <p>Below is my full code. Feel free to run it on your IDEs. Any help would be greatly appreciated!</p> <pre><code>import random import ast def winner(board): &quot;&quot;&quot;This function accepts the Connect Four board as a parameter. If there is no winner, the function will return the empty string &quot;&quot;. If the user has won, it will return 'X', and if the computer has won it will return 'O'.&quot;&quot;&quot; row = 0 while row &lt;= len(board) - 1: count = 0 last = '' col = 0 while col &lt;= len(board[0]) - 1: row_win = board[row][col] if row_win == &quot; &quot;: count = 0 col += 1 continue if row_win == last: count += 1 else: count = 1 if count &gt;= 4: return row_win last = row_win col += 1 row += 1 col = 0 while col &lt;= len(board[0]) - 1: count = 0 last = '' row = 0 while row &lt;= len(board) - 1: col_win = board[row][col] if col_win == &quot; &quot;: count = 0 row += 1 continue if col_win == last: count += 1 else: count = 1 if count &gt;= 4: return col_win last = col_win row += 1 col += 1 row = 0 while row &lt;= len(board) - 1: col = 0 while col &lt;= len(board[0]) - 1: try: if (board[row][col] == board[row + 1][col + 1] and board[row + 1][col + 1] == board[row + 2][ col + 2] and board[row + 2][col + 2] == board[row + 3][col + 3]) and ( board[row][col] != &quot; &quot; or board[row + 1][col + 1] != &quot; &quot; or board[row + 2][col + 2] != &quot; &quot; or board[row + 3][col + 3] != &quot; &quot;): return board[row][col] except: IndexError col += 1 row += 1 row = 0 while row &lt;= len(board) - 1: col = 0 while col &lt;= len(board[0]) - 1: try: if (board[row][col] == board[row + 1][col - 1] and board[row + 1][col - 1] == board[row + 2][ col - 2] and board[row + 2][col - 2] == board[row + 3][col - 
3]) and ( board[row][col] != &quot; &quot; or board[row + 1][col - 1] != &quot; &quot; or board[row + 2][col - 2] != &quot; &quot; or board[row + 3][col - 3] != &quot; &quot;): return board[row][col] except: IndexError col += 1 row += 1 # No winner: return the empty string return &quot;&quot; def display_board(board): &quot;&quot;&quot;This function accepts the Connect Four board as a parameter. It will print the Connect Four board grid (using ASCII characters) and show the positions of any X's and O's. It also displays the column numbers on top of the board to help the user figure out the coordinates of their next move. This function does not return anything.&quot;&quot;&quot; header = &quot; &quot; i = 1 while i &lt; len(board[0]): header = header + str(i) + &quot; &quot; i += 1 header = header + str(i) print(header) separator = &quot;+---+&quot; i = 1 while i &lt; len(board[0]): separator = separator + &quot;---+&quot; i += 1 print(separator) for row in board: print(&quot; &quot;, &quot; | &quot;.join(row)) separator = &quot;+---+&quot; i = 1 while i &lt; len(board[0]): separator = separator + &quot;---+&quot; i += 1 print(separator) print() def make_user_move(board): &quot;&quot;&quot;This function accepts the Connect Four board as a parameter. It will ask the user for a row and column. If the row and column are each within the range of 1 and 7, and that square is not already occupied, then it will place an 'X' in that square.&quot;&quot;&quot; valid_move = False while not valid_move: try: col_num = &quot;What col would you like to move to (1 - &quot; + str(len(board[0])) + &quot;)? &quot; col = int(input(col_num)) if board[0][col - 1] != ' ': print(&quot;Sorry, that column is full. Please try again!\n&quot;) else: col = col - 1 for row in range(len(board) - 1, -1, -1): if board[row][col] == ' ' and not valid_move: board[row][col] = 'X' valid_move = True except: ValueError print(&quot;Please enter a valid option! 
&quot;) return board def make_computer_move(board): &quot;&quot;&quot;This function accepts the Connect Four board as a parameter. It will randomly pick row and column values between 0 and 6. If that square is not already occupied it will place an 'O' in that square. Otherwise, another random row and column will be generated.&quot;&quot;&quot; computer_valid_move = False while not computer_valid_move: col = random.randint(0, len(board) - 1) if board[0][col] != ' ': print(&quot;Sorry, that column is full. Please try again!\n&quot;) else: for row in range(len(board) - 1, -1, -1): if board[row][col] == ' ' and not computer_valid_move: board[row][col] = 'O' computer_valid_move = True return board def main(): &quot;&quot;&quot;The Main Game Loop:&quot;&quot;&quot; cf_board = [] row = int(input(&quot;How many rows do you want your game to have? &quot;)) col = int(input(&quot;How many columns do you want your game to have? &quot;)) move_order = input(&quot;Would you like to move first? (y/n) &quot;) while move_order.lower() != &quot;y&quot; and move_order.lower() != &quot;n&quot;: print(&quot;Invalid input! Please choose y (yes) or n (no) - case insensitive. &quot;) move_order = input(&quot;Would you like to move first? 
(y/n) &quot;) if move_order.lower() == &quot;y&quot;: users_turn = True elif move_order.lower() == &quot;n&quot;: users_turn = False free_cells = col * row row_str = &quot;\&quot; \&quot;&quot; board_str = &quot;\&quot; \&quot;&quot; i = 0 j = 0 while i &lt; col - 1: row_str = row_str + &quot;,&quot; + &quot;\&quot; \&quot;&quot; i += 1 board_list = [row_str] while j &lt; row - 1: board_list.append(row_str) j += 1 cf_board = [list(ast.literal_eval(x)) for x in board_list] # for i in range(row-1): # cf_board.append(row_str) while not winner(cf_board) and (free_cells &gt; 0): display_board(cf_board) if users_turn: cf_board = make_user_move(cf_board) users_turn = not users_turn else: cf_board = make_computer_move(cf_board) users_turn = not users_turn free_cells -= 1 display_board(cf_board) if (winner(cf_board) == 'X'): print(&quot;Y O U W O N !&quot;) elif (winner(cf_board) == 'O'): print(&quot;T H E C O M P U T E R W O N !&quot;) elif free_cells == 0: print(&quot;S T A L E M A T E !&quot;) print(&quot;\n*** GAME OVER ***\n&quot;) # Start the game! main() </code></pre>
<p>I guess the idea you want to implement is similar to making a <a href="https://stackoverflow.com/questions/3173320/text-progress-bar-in-the-console">terminal-based progress bar</a>.</p> <p>For example, on unix/linux systems you can update a progress bar or a simple sentence in one place with the &quot;\r&quot; character instead of &quot;\n&quot;, which is used by the print() function by default:</p> <pre><code>from sys import stdout from time import sleep def realtime_sentence(): name = 0 while name &lt;= 100: stdout.write(&quot;\r My name is {}&quot;.format(name)) name += 1 sleep(0.2) def realtime_progress_bar(): progress = 0 while progress &lt;= 100: stdout.write(&quot;\r Progress: {} {}%&quot;.format(progress*&quot;#&quot;, progress)) progress += 1 sleep(0.2) realtime_sentence() realtime_progress_bar() </code></pre> <p>Also check this question:</p> <ol> <li><a href="https://stackoverflow.com/questions/6840420/rewrite-multiple-lines-in-the-console">Rewrite multiple lines in the console</a></li> </ol>
python|python-3.x
1
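The `\r` trick works because the terminal moves the cursor back to column 0 and lets later characters overwrite earlier ones. That behaviour can be checked without a real terminal by simulating it — the function below is an illustrative model written for this answer, not part of curses or any library:

```python
def render_terminal_line(writes):
    """Simulate one terminal line receiving a stream of writes that may
    contain carriage returns ('\r' = jump to column 0 and overwrite).
    Returns the text that would finally be visible."""
    line = []
    col = 0
    for text in writes:
        for ch in text:
            if ch == "\r":
                col = 0
            else:
                if col < len(line):
                    line[col] = ch  # overwrite an existing cell
                else:
                    line.append(ch)
                col += 1
    return "".join(line)

# Three successive status updates, each starting with '\r'. The last one
# pads with spaces so leftovers from longer earlier frames are erased.
frames = ["\rProgress: 1%", "\rProgress: 2%", "\rdone.       "]
visible = render_terminal_line(frames)
```

For a multi-line board like Connect Four, `\r` alone is not enough: each redraw would also need to move the cursor up one line per board row (for example with the ANSI sequence `\033[F`, where supported) or use a library such as curses, as the linked question discusses.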
1,908,361
65,619,697
Doesn't "isin" within a lambda function work?
<p>Can't I use a <code>isin</code> within a lambda function. E.g.</p> <pre><code>mylist = [&quot;A&quot;, &quot;B&quot;, &quot;C&quot;] df[&quot;Col2&quot;] = df[&quot;Col1&quot;].apply(lambda x: 1 if x.isin(mylist) else 0) </code></pre> <p>I'm getting an <code>AttributeError: 'str' object has no attribute 'isin'</code> but the following filter works though:</p> <pre><code>df[df[&quot;Col1&quot;].isin(mylist)] </code></pre>
<p>Use the <code>in</code> operator.</p> <p>Replace <code>x.isin(mylist)</code> with <code>x in mylist</code> as follows:</p> <pre><code>df[&quot;Col2&quot;] = df[&quot;Col1&quot;].apply(lambda x: 1 if x in mylist else 0) </code></pre>
python|python-3.x|pandas|lambda
1
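While `x in mylist` fixes the lambda, the `apply` is unnecessary: `Series.isin` is vectorized over the whole column, and the boolean result casts straight to 0/1. A sketch assuming pandas, with made-up sample values:

```python
import pandas as pd

mylist = ["A", "B", "C"]
df = pd.DataFrame({"Col1": ["A", "D", "C", "E"]})

# Element-wise membership test over the whole column at once...
mask = df["Col1"].isin(mylist)

# ...and True/False becomes 1/0 with a cast — no lambda needed.
df["Col2"] = mask.astype(int)

# Equivalent to the per-element version from the question:
df["Col3"] = df["Col1"].apply(lambda x: 1 if x in mylist else 0)
```

The error in the question comes from the fact that `apply` hands the lambda each *scalar* cell (a plain `str`), and strings have no `isin` method — `isin` exists on the Series, not on its elements.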
1,908,362
72,338,355
Python to Arduino Serial Communication-Manual Input Works but Variable Does Not
<p>I am communicating from raspberry pi using python to my Arduino via usb and serial communication. I got it to work perfectly when I ask for an input but when I try and save a variable and push that through, it doesn't work. I think the Arduino is reading the serial until a newline is found. When I type input from python and press enter, the arduino successfully recognizes a newline and executes the code. However, when I save a string with \n at the end, the Arduino does not correctly receive the string and the rest of the code does not continue.</p> <p>Is there an easy way to fix this, or even a longer way to trick python by still asking for an input, but the computer automatically types in what I need my variable to be and then presses enter?</p> <p>Here is my python code that works:</p> <pre><code>import serial import time if __name__ == '__main__': ser = serial.Serial('/dev/ttyACM0', 9600, timeout=1) ser.flush() while True: direction_distance = input(&quot;Enter String :&quot;) ser.write(direction_distance.encode('utf-8')) time.sleep(0.5) receive_string=ser.readline().decode('utf-8').rstrip() print(receive_string) </code></pre> <p>Here is Arduino Code that works:</p> <pre><code>void setup(){ Serial.begin(9600); } void loop(){ if(Serial.available() &gt; 0) { String data = Serial.readStringUntil('\n'); //Rest of code continues below with newly imported 'data' } } </code></pre> <p>Here is my python code that sends data to the Arduino, but the Arduino fails to recognize the newline '\n'</p> <pre><code>import serial import time if __name__ == '__main__': ser = serial.Serial('/dev/ttyACM0', 9600, timeout=1) ser.flush() while True: direction_distance = &quot;2000\n&quot; ser.write(direction_distance.encode('utf-8')) time.sleep(0.5) receive_string=ser.readline().decode('utf-8').rstrip() print(receive_string) </code></pre>
<p>There wasn't a long enough delay for the strings to fully go through. I was able to fix this issue by changing the sleep to 2 seconds. (And by removing the \n)</p> <pre><code>import serial import time if __name__ == '__main__': ser = serial.Serial('/dev/ttyACM0', 9600, timeout=1) ser.flush() while True: direction_distance = &quot;2000&quot; time.sleep(2) ser.write(direction_distance.encode('utf-8')) receive_string=ser.readline().decode('utf-8').rstrip() print(receive_string) </code></pre>
python|arduino|raspberry-pi|serial-port
2
1,908,363
36,885,551
Dynamic Field selection in odoo
<p>How can i define a dynamic selection field where the user can add new items that are not predefined this code define a static selection field:</p> <p>score:field.selection([('key1','value1').....],string='score'),</p>
<p>Your best bet would be to create a many2one field: </p> <pre><code>class yourmodule_score_rel(models.Model): _name = 'yourmodel.score.rel' name = fields.Char('Score') </code></pre> <p>And in your module's class (referencing the same <code>_name</code> defined above):</p> <pre><code>score_id = fields.Many2one('yourmodel.score.rel', string='Score') </code></pre>
python|odoo-9
0
1,908,364
48,759,117
Python-edit file property descriptions in windows
<p>I'm creating a simple python app to go through a folder display each JPG and allow someone to edit the photo's Title,subject,comments and add tags, then save and move on to the next. (essentially, I want to avoid, in windows, having to right click>properties>details> and edit each of the above fields, then "OK".) </p> <p>Can someone please recommend the libraries and modules I need to import to display the photo and edit the properties? </p> <p>I'm new to Python, so a snippet of code to show how do do it would be most appreciated. </p> <p>I'm using python 3.6 in Windows 10</p> <p>Thanks in advance.</p>
<p>To change a photo's EXIF data in python you can use the third-party <code>piexif</code> and <code>Pillow</code> libraries, and do something like:</p> <p>Get EXIF Data:</p> <pre><code>import piexif from PIL import Image img = Image.open(fname) # fname is the path to your JPG exif_dict = piexif.load(img.info['exif']) altitude = exif_dict['GPS'][piexif.GPSIFD.GPSAltitude] </code></pre> <p>Set / Save EXIF data:</p> <pre><code>exif_dict['GPS'][piexif.GPSIFD.GPSAltitude] = (140, 1) exif_bytes = piexif.dump(exif_dict) img.save('_%s' % fname, "jpeg", exif=exif_bytes) </code></pre> <p>Taken from <a href="https://stackoverflow.com/questions/44636152/how-to-modify-exif-data-in-python">this</a> Answer</p>
python|metadata|jpeg
0
1,908,365
48,655,822
Understanding Python
<p>Need help understanding what exactly this code does. I understand what the join functions does. I am just struggling a bit with understanding what the format function is doing and what the lambda function is doing.</p> <pre><code>t = ''.join('{0}'.format(key, val) for key, val in sorted(c.items(), key = lambda x:x[-1], reverse = True)) </code></pre>
<p><strong>TL;DR:</strong></p> <p><code>lambda</code> is just a good way of specifying (before-hand) what to look at or use. Otherwise you wouldn't be able to specify:</p> <p><em>"I want to sort on the last element of each item in the iterable"</em></p> <p><strong>Stepwise:</strong></p> <p>The <code>lambda</code> is setting the <code>key</code> as the last element (<code>x[-1]</code>) of <code>x</code>, where <code>x</code> is an element inside <code>c.items()</code>. </p> <p>So essentially, the flow is this:</p> <p><code>sorted()</code> gets executed first; it is going to sort the contents of <code>c.items()</code></p> <p>But what does it sort on? <code>lambda x:x[-1]</code> says the <code>key</code> is the last element of each element in the iterable <code>c.items()</code>. So if:</p> <p><code>c.items() == ['123', '456', '789']</code></p> <p>Then the <code>key</code> for the <code>sorted()</code> is going to be the <code>'3', '6', '9'</code>, and they will be sorted descending because <code>reverse = True</code>.</p> <p>But since <code>c.items()</code> likely represents a <code>dict</code> contents, the <code>for key, val</code> means that each element will <em>unpack</em> into 2 distinct variables: <code>key, val</code>. 
So my example before is more like:</p> <p><code>c.items() == [['this', 1], ['that', 2], ['thus', 0]]</code></p> <p>And the <code>lambda</code> allows for sorting on <code>1, 2, and 0</code>.</p> <p>You already said you understand the <code>join</code>, but the <code>format</code> is only using the value of <code>key</code>; hence the <code>{0}</code>.</p> <p>So here is an example with what you've got going on:</p> <pre><code>&gt;&gt;&gt; c = {'this': 136, 'that': 133, 'thus': 156} &gt;&gt;&gt; c.items() dict_items([('this', 136), ('that', 133), ('thus', 156)]) &gt;&gt;&gt; t = ''.join('{0}'.format(key, val) for key, val in sorted(c.items(), key = lambda x:x[-1], reverse = True)) &gt;&gt;&gt; t 'thusthisthat' </code></pre> <p>Since the <code>sorted()</code> takes the <code>lambda</code> return as the <code>key</code>, we know that the <code>key</code> will be the last (<code>[-1]</code>) element for each element in the iterable (<code>c.items()</code>). That means we are going to automatically sort <strong>ascending</strong>, but since we have <code>reverse = True</code> we are going to sort <strong>descending</strong>. </p> <p>As such, the above is what you get!</p>
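The walkthrough above condenses into a runnable check, sorting a dict's items descending by value and then joining the keys (the names are the ones from the example):

```python
c = {'this': 136, 'that': 133, 'thus': 156}

# Sort the (key, value) pairs descending on the value -- the last element
# of each pair, which is what `lambda x: x[-1]` picks out.
pairs = sorted(c.items(), key=lambda x: x[-1], reverse=True)

# '{0}'.format(key, val) only ever uses its first argument, so the join
# produces the keys alone, in descending-value order.
t = ''.join('{0}'.format(key, val) for key, val in pairs)

print(pairs)  # [('thus', 156), ('this', 136), ('that', 133)]
print(t)      # thusthisthat
```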
python|python-3.x
1
1,908,366
20,189,975
Add new key value pair in a Python Dictionary
<p>I am really stuck with a problem here, I am new to python and using it for a typical requirement. I want to add key value pairs into a dictionary within a loop for eg:</p> <pre><code>Eg_dict={} for row in iterable: dict={row[1],row[2]} </code></pre> <p>This code doesn't do what I want to achieve, this just adds the very last record of the iterable into the dictionary, its easy to assume that my code is reading each record from iterable and rewriting the dictionary over and over again. </p> <p>So my question is simple how to add all the records from the iterable in the dictionary? This would be the default behavior of an array in unix. </p> <p>P.S: the iterable here can be assumed as a csv.reader object and I am trying to insert the second and third columns into the dictionary.</p>
<p>Use dictionary assignment:</p> <pre><code>for row in iterable: Eg_dict[row[1]] = row[2] </code></pre> <p>or even replace the whole code with a dict comprehension:</p> <pre><code>Eg_dict = {row[1]: row[2] for row in iterable} # python 2.7 and up </code></pre> <p>or</p> <pre><code>Eg_dict = dict(row[1:3] for row in iterable) # python 2.4 and up </code></pre> <p>If each <code>row</code> has a fixed number of entries, you can use tuple assignment:</p> <pre><code>for key, value in iterable: # two elements per row </code></pre> <p>or</p> <pre><code>for _, key, value in iterable: # three elements per row, ignore the first </code></pre> <p>to make a loop more readable. E.g. <code>{key: value for _, key, value in iterable}</code>.</p> <p>Your code instead creates a new <strong>set</strong> object (just unique values, no keys) each loop, replacing the previous one.</p>
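A quick check that the variants above all build the same mapping, using a small made-up iterable shaped like the question's CSV rows (id, key, value):

```python
# Rows shaped like the question's csv.reader output: id, key, value.
iterable = [
    ('10', 'tiger', 'purr'),
    ('11', 'lion', 'roar'),
]

# 1. Plain assignment in a loop.
d1 = {}
for row in iterable:
    d1[row[1]] = row[2]

# 2. Dict comprehension.
d2 = {row[1]: row[2] for row in iterable}

# 3. dict() over sliced rows.
d3 = dict(row[1:3] for row in iterable)

# 4. Tuple unpacking, ignoring the first column.
d4 = {key: value for _, key, value in iterable}

print(d1)  # {'tiger': 'purr', 'lion': 'roar'}
assert d1 == d2 == d3 == d4
```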
python|dictionary
8
1,908,367
19,903,114
How to insert Records of data into MySQL database with python?
<p>I have an form which has FirstName, Lastname, Age and Gender. I am using MySQL db. While using MySQl db, do we need to create table , do the insert operation in the single pythonic script ?</p> <p>For example :</p> <pre><code>#!/usr/bin/python import MySQLdb db = MySQLdb.connect("localhost", "usename", "password", "TESTDB") cursor = db.cursor() cursor.execute ( """ CREATE TABLE PERSON ( F_NAME CHAR(20) NOT NULL, L_NAME CHAR(20), AGE INT, GENDER CHAR(4) ) """) cursor.execute( """ INSERT INTO PERSON (F_NAME, L_NAME, AGE, GENDER) VALUES ('Neeraj','Lad','22','Male'), ('Vivek','Pal','24','Male') """) print cursor.rowcount </code></pre> <p>Edited Code:</p> <pre><code>#!/usr/bin/python import MySQLdb import cgi print "Content-type: text/html\n" form = cgi.FieldStorage() f_Name = form.getvalue('firstname', '') l_Name = form.getvalue('lastname', '') age = form.getvalue('age', 0) gender = form.getvalue('gender', '') db = MySQLdb.connect(host="", user="", password="", db="") cursor = db.cursor() sql = "INSERT INTO PERSON (F_NAME, L_NAME, Age, Gender) VALUES (%s, %s, %s, %s)" %(f_name, l_name, age, gender) cursor.execute(sql) db.commit() db.close() </code></pre>
<p>I'm not 100% clear on what you're asking, but I'll take a guess.</p> <p>You have to create a table exactly once in the database before you can insert into it.</p> <p>If your Python script is talking to a brand-new database each time it runs, then it needs a <code>CREATE TABLE</code> statement.</p> <p>If your Python script <em>might be</em> talking to a brand-new database, but will usually be talking to an already-existing one, then you can use <code>CREATE TABLE IF NOT EXISTS</code>.</p> <p>But, except in toy learning projects, both of these are rare. Normally, you create the database once, then you write Python scripts that connect to it and assume it's already been created. In that case, you will not have a <code>CREATE TABLE</code> statement in your form handler.</p> <p>If you're asking about inserting multiple values in a single <code>INSERT</code> statement… normally, you won't be inserting hard-coded values like <code>'Neeraj'</code>, but rather values that you get dynamically (e.g., from the web form). So you will be using <a href="http://www.python.org/dev/peps/pep-0249/#id14" rel="nofollow">parameterized SQL statements</a> like this:</p> <pre><code>cursor.execute(""" INSERT INTO PERSON (F_NAME, L_NAME, AGE, GENDER) VALUES (%s, %s, %s, %s) """, (f_name, l_name, age, gender)) </code></pre> <p>In that case, if you have, say, a list of 4-tuples, each representing a person, and you want to insert all of them, you do that not by putting multiple copies of the parameter lists in the SQL statement, but by putting a single parameter list, and using the <a href="http://www.python.org/dev/peps/pep-0249/#executemany" rel="nofollow"><code>executemany</code></a> function:</p> <pre><code>cursor.executemany(""" INSERT INTO PERSON (F_NAME, L_NAME, AGE, GENDER) VALUES (%s, %s, %s, %s) """, list_of_people) </code></pre>
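The same create-once / parameterized-insert pattern can be exercised without a MySQL server by using the stdlib `sqlite3` module (note its paramstyle is `?` rather than MySQLdb's `%s`); the table and names below are just a sketch of the answer's example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Create the table only if it does not already exist.
cur.execute("""
    CREATE TABLE IF NOT EXISTS person (
        f_name TEXT NOT NULL,
        l_name TEXT,
        age    INTEGER,
        gender TEXT
    )
""")

# One row via a parameterized statement...
cur.execute("INSERT INTO person VALUES (?, ?, ?, ?)",
            ("Neeraj", "Lad", 22, "Male"))

# ...and many rows at once via executemany over a list of tuples.
people = [("Vivek", "Pal", 24, "Male"),
          ("Asha", "Rao", 30, "Female")]
cur.executemany("INSERT INTO person VALUES (?, ?, ?, ?)", people)
conn.commit()

cur.execute("SELECT COUNT(*) FROM person")
print(cur.fetchone()[0])  # 3
```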
python|database|cgi|mysql-python
2
1,908,368
4,316,608
WSDL XSD and soappy
<p>I have the following WSDL and XSD</p> <pre><code>from SOAPpy import WSDL import os # you'll need to configure these two values; # see http://www.google.com/apis/ WSDLFILE = os.path.join(os.path.dirname(__file__), "getiwsAesPayment.wsdl") _server = WSDL.Proxy(WSDLFILE) print _server </code></pre> <p>Which gives me the error:</p> <pre><code> schema.load(reader) File "/home/gregory/.virtualenvs/casadeal/src/soappy/SOAPpy/wstools/XMLSchema.py", line 1205, in load tp.fromDom(childNode) File "/home/gregory/.virtualenvs/casadeal/src/soappy/SOAPpy/wstools/XMLSchema.py", line 1322, in fromDom raise SchemaError, 'namespace of schema and import match' SOAPpy.wstools.XMLSchema.SchemaError: namespace of schema and import match </code></pre> <p>Apparently it may come from the fact that the targetNamespace are the same for wsdl and xsd ?</p> <p>WSDL</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;definitions name="getiwsAesPayment" targetNamespace="http://ws.AMANTY.m2t.biz/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:tns="http://ws.AMANTY.m2t.biz/" xmlns="http://schemas.xmlsoap.org/wsdl/"&gt; &lt;types&gt; &lt;xsd:schema&gt; &lt;xsd:import namespace="http://ws.AMANTY.m2t.biz/" schemaLocation="getiwsAesPayment.xsd"/&gt; &lt;/xsd:schema&gt; &lt;/types&gt; &lt;message name="getiwsaespayment"&gt; &lt;part name="parameters" element="tns:getiwsaespayment"&gt; &lt;/part&gt; &lt;/message&gt; &lt;message name="getiwsaespaymentResponse"&gt; &lt;part name="parameters" element="tns:getiwsaespaymentResponse"&gt; &lt;/part&gt; &lt;/message&gt; &lt;portType name="getiwsAesPayment"&gt; &lt;operation name="getiwsaespayment"&gt; &lt;input message="tns:getiwsaespayment"&gt; &lt;/input&gt; &lt;output message="tns:getiwsaespaymentResponse"&gt; &lt;/output&gt; &lt;/operation&gt; &lt;/portType&gt; &lt;binding name="getiwsAesPaymentPortBinding" type="tns:getiwsAesPayment"&gt; &lt;soap:binding style="document" 
transport="http://schemas.xmlsoap.org/soap/http"/&gt; &lt;operation name="getiwsaespayment"&gt; &lt;soap:operation soapAction=""/&gt; &lt;input&gt; &lt;soap:body use="literal"/&gt; &lt;/input&gt; &lt;output&gt; &lt;soap:body use="literal"/&gt; &lt;/output&gt; &lt;/operation&gt; &lt;/binding&gt; &lt;service name="getiwsAesPaymentService"&gt; &lt;port name="getiwsAesPaymentPort" binding="tns:getiwsAesPaymentPortBinding"&gt; &lt;soap:address location="http://partner.ma:8080/AMANTYWebServicesWAR/getiwsAesPayment"/&gt; &lt;/port&gt; &lt;/service&gt; &lt;/definitions&gt; </code></pre> <p>XSD</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;xs:schema version="1.0" targetNamespace="http://ws.AMANTY.m2t.biz/" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:tns="http://ws.AMANTY.m2t.biz/"&gt; &lt;xs:element name="commandReg" type="tns:commandReg"/&gt; ....... &lt;/xs:schema&gt; </code></pre>
<p>Try <code>xsd:include</code> instead of <code>xsd:import</code> for the other XSDs in your WSDL. <code>import</code> is only valid for schemas in a <em>different</em> namespace, which is exactly what the error "namespace of schema and import match" is complaining about; <code>include</code> is the mechanism for pulling in components that share the same target namespace.</p>
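Concretely, since both the WSDL and the XSD declare `targetNamespace="http://ws.AMANTY.m2t.biz/"`, the `types` section could be changed roughly like this (a sketch, not tested against SOAPpy):

```xml
<types>
  <xsd:schema targetNamespace="http://ws.AMANTY.m2t.biz/">
    <!-- include, not import: the schema shares the WSDL's target namespace -->
    <xsd:include schemaLocation="getiwsAesPayment.xsd"/>
  </xsd:schema>
</types>
```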
python|soap|wsdl|webservice-client
0
1,908,369
48,318,858
Why is filtering DataFrame by boolean mask so much faster than apply()?
<p>I want to compare the performance between 2 different methods to filter pandas DataFrames. So I created a test set with <code>n</code> points in the plane and I filter out all points which are not in the unit square. I am surprised one method is so much faster than the other. The larger <code>n</code> becomes the bigger the difference. What would be the explanation for that?</p> <p>This is my script</p> <pre><code>import numpy as np import time import pandas as pd # Test set with points n = 100000 test_x_points = np.random.uniform(-10, 10, size=n) test_y_points = np.random.uniform(-10, 10, size=n) test_points = zip(test_x_points, test_y_points) df = pd.DataFrame(test_points, columns=['x', 'y']) # Method a start_time = time.time() result_a = df[(df['x'] &lt; 1) &amp; (df['x'] &gt; -1) &amp; (df['y'] &lt; 1) &amp; (df['y'] &gt; -1)] end_time = time.time() elapsed_time_a = 1000 * abs(end_time - start_time) # Method b start_time = time.time() result_b = df[df.apply(lambda row: -1 &lt; row['x'] &lt; 1 and -1 &lt; row['y'] &lt; 1, axis=1)] end_time = time.time() elapsed_time_b = 1000 * abs(end_time - start_time) # print results print 'For {0} points.'.format(n) print 'Method a took {0} ms and leaves us with {1} elements.'.format(elapsed_time_a, len(result_a)) print 'Method b took {0} ms and leaves us with {1} elements.'.format(elapsed_time_b, len(result_b)) print 'Method a is {0} X faster than method b.'.format(elapsed_time_b / elapsed_time_a) </code></pre> <p>Results for different values of <code>n</code>:</p> <pre><code>For 10 points. Method a took 1.52087211609 ms and leaves us with 0 elements. Method b took 0.456809997559 ms and leaves us with 0 elements. Method a is 0.300360558081 X faster than method b. For 100 points. Method a took 1.55997276306 ms and leaves us with 1 elements. Method b took 1.384973526 ms and leaves us with 1 elements. Method a is 0.887819043252 X faster than method b. For 1000 points. 
Method a took 1.61004066467 ms and leaves us with 5 elements. Method b took 10.448217392 ms and leaves us with 5 elements. Method a is 6.48941211313 X faster than method b. For 10000 points. Method a took 1.59096717834 ms and leaves us with 115 elements. Method b took 98.8278388977 ms and leaves us with 115 elements. Method a is 62.1180878166 X faster than method b. For 100000 points. Method a took 2.14099884033 ms and leaves us with 1052 elements. Method b took 995.483875275 ms and leaves us with 1052 elements. Method a is 464.962360802 X faster than method b. For 1000000 points. Method a took 7.07101821899 ms and leaves us with 10045 elements. Method b took 9613.26599121 ms and leaves us with 10045 elements. Method a is 1359.5306494 X faster than method b. </code></pre> <p>When I compare it to Python native list comprehension method a is still much faster </p> <pre><code>result_c = [ (x, y) for (x, y) in test_points if -1 &lt; x &lt; 1 and -1 &lt; y &lt; 1 ] </code></pre> <p>Why is that?</p>
<p>If you follow the Pandas <a href="https://github.com/pandas-dev/pandas/blob/v0.22.0/pandas/core/frame.py#L4774-L4879" rel="nofollow noreferrer">source code for <code>apply</code></a> you will see that in general it ends up doing a python <code>for __ in __</code> loop.</p> <p>However, Pandas DataFrames are made up of Pandas Series, which are under the hood made up of numpy arrays. Masked filtering uses the fast, vectorized methods that numpy arrays allow. For info on why this is faster than doing plain python loops (as in <code>.apply</code>), see <a href="https://stackoverflow.com/questions/8385602/why-are-numpy-arrays-so-fast">Why are NumPy arrays so fast?</a></p> <p>Top answer from there:</p> <blockquote> <p>Numpy arrays are densely packed arrays of homogeneous type. Python lists, by contrast, are arrays of pointers to objects, even when all of them are of the same type. So, you get the benefits of locality of reference.</p> <p>Also, many Numpy operations are implemented in C, avoiding the general cost of loops in Python, pointer indirection and per-element dynamic type checking. The speed boost depends on which operations you're performing, but a few orders of magnitude isn't uncommon in number crunching programs.</p> </blockquote>
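The two approaches from the question can be checked for equivalence on a small frame; only the timing differs, not the result (sizes and seed here are arbitrary):

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
df = pd.DataFrame({'x': rng.uniform(-10, 10, 1000),
                   'y': rng.uniform(-10, 10, 1000)})

# Vectorized boolean mask: four numpy comparisons and two &s, no Python loop.
mask = (df['x'] < 1) & (df['x'] > -1) & (df['y'] < 1) & (df['y'] > -1)
result_a = df[mask]

# Row-wise apply: one Python-level function call per row.
result_b = df[df.apply(lambda r: -1 < r['x'] < 1 and -1 < r['y'] < 1, axis=1)]

print(result_a.equals(result_b))  # True
```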
python|pandas|dataframe
2
1,908,370
48,436,380
why blas is slower than numpy
<p>Thanks for Mats Petersson's help. The running time of his C++ does look properly finally! But I have new two questions. </p> <ol> <li>Why Mats Petersson's code is twice times faster than my code ?</li> </ol> <p>Mats Petersson's C++ code is:</p> <pre><code>#include &lt;iostream&gt; #include &lt;openblas/cblas.h&gt; #include &lt;array&gt; #include &lt;iterator&gt; #include &lt;random&gt; #include &lt;ctime&gt; using namespace std; const blasint m = 100, k = 100, n = 100; // Mats Petersson's declaration array&lt;array&lt;double, k&gt;, m&gt; AA[500]; array&lt;array&lt;double, n&gt;, k&gt; BB[500]; array&lt;array&lt;double, n&gt;, m&gt; CC[500]; // My declaration array&lt;array&lt;double, k&gt;, m&gt; AA1; array&lt;array&lt;double, n&gt;, k&gt; BB1; array&lt;array&lt;double, n&gt;, m&gt; CC1; int main(void) { CBLAS_ORDER Order = CblasRowMajor; CBLAS_TRANSPOSE TransA = CblasNoTrans, TransB = CblasNoTrans; const float alpha = 1; const float beta = 0; const int lda = k; const int ldb = n; const int ldc = n; default_random_engine r_engine(time(0)); uniform_real_distribution&lt;double&gt; uniform(0, 1); double dur = 0; clock_t start,end; double total = 0; // Mats Petersson's initialization and computation for(int i = 0; i &lt; 500; i++) { for (array&lt;array&lt;double, k&gt;, m&gt;::iterator iter = AA[i].begin(); iter != AA[i].end(); ++iter) { for (double &amp;number : (*iter)) number = uniform(r_engine); } for (array&lt;array&lt;double, n&gt;, k&gt;::iterator iter = BB[i].begin(); iter != BB[i].end(); ++iter) { for (double &amp;number : (*iter)) number = uniform(r_engine); } } start = clock(); for(int i = 0; i &lt; 500; ++i){ cblas_dgemm(Order, TransA, TransB, m, n, k, alpha, &amp;AA[i][0][0], lda, &amp;BB[i][0][0], ldb, beta, &amp;CC[i][0][0], ldc); } end = clock(); dur += (double)(end - start); cout&lt;&lt;endl&lt;&lt;"Mats Petersson spends "&lt;&lt;(dur/CLOCKS_PER_SEC)&lt;&lt;" seconds to compute it"&lt;&lt;endl&lt;&lt;endl; // It turns me! 
dur = 0; for(int i = 0; i &lt; 500; i++){ for(array&lt;array&lt;double, k&gt;, m&gt;::iterator iter = AA1.begin(); iter != AA1.end(); ++iter){ for(double&amp; number : (*iter)) number = uniform(r_engine); } for(array&lt;array&lt;double, n&gt;, k&gt;::iterator iter = BB1.begin(); iter != BB1.end(); ++iter){ for(double&amp; number : (*iter)) number = uniform(r_engine); } start = clock(); cblas_dgemm(Order, TransA, TransB, m, n, k, alpha, &amp;AA1[0][0], lda, &amp;BB1[0][0], ldb, beta, &amp;CC1[0][0], ldc); end = clock(); dur += (double)(end - start); } cout&lt;&lt;endl&lt;&lt;"I spend "&lt;&lt;(dur/CLOCKS_PER_SEC)&lt;&lt;" seconds to compute it"&lt;&lt;endl&lt;&lt;endl; } </code></pre> <p>Here is the result:</p> <pre><code>Mats Petersson spends 0.215056 seconds to compute it I spend 0.459066 seconds to compute it </code></pre> <p>So, why his code is twice times faster than my code ?</p> <ol start="2"> <li>Python is still faster?</li> </ol> <p>the numpy code is </p> <pre><code>import numpy as np import time a = {} b = {} c = {} for i in range(500): a[i] = np.matrix(np.random.rand(100, 100)) b[i] = np.matrix(np.random.rand(100, 100)) c[i] = np.matrix(np.random.rand(100, 100)) start = time.time() for i in range(500): c[i] = a[i]*b[i] print(time.time() - start) </code></pre> <p>the result is: <a href="https://i.stack.imgur.com/a4KYf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/a4KYf.png" alt="enter image description here"></a></p> <p>Still can not understand it!</p>
<p>So, I can't reproduce the original results, however, with this code:</p> <pre><code>#include &lt;iostream&gt; #include &lt;openblas/cblas.h&gt; #include &lt;array&gt; #include &lt;iterator&gt; #include &lt;random&gt; #include &lt;ctime&gt; using namespace std; const blasint m = 100, k = 100, n = 100; array&lt;array&lt;double, k&gt;, m&gt; AA[500]; array&lt;array&lt;double, n&gt;, k&gt; BB[500]; array&lt;array&lt;double, n&gt;, m&gt; CC[500]; int main(void) { CBLAS_ORDER Order = CblasRowMajor; CBLAS_TRANSPOSE TransA = CblasNoTrans, TransB = CblasNoTrans; const float alpha = 1; const float beta = 0; const int lda = k; const int ldb = n; const int ldc = n; default_random_engine r_engine(time(0)); uniform_real_distribution&lt;double&gt; uniform(0, 1); double dur = 0; clock_t start,end; double total = 0; for(int i = 0; i &lt; 500; i++){ for(array&lt;array&lt;double, k&gt;, m&gt;::iterator iter = AA[i].begin(); iter != AA[i].end(); ++iter){ for(double&amp; number : (*iter)) number = uniform(r_engine); } for(array&lt;array&lt;double, n&gt;, k&gt;::iterator iter = BB[i].begin(); iter != BB[i].end(); ++iter){ for(double&amp; number : (*iter)) number = uniform(r_engine); } } start = clock(); for(int i = 0; i &lt; 500; i++) { cblas_dgemm(Order, TransA, TransB, m, n, k, alpha, &amp;AA[i][0][0], lda, &amp;BB[i][0][0], ldb, beta, &amp;CC[i][0][0], ldc); total += CC[i][i/5][i/5]; } end = clock(); dur = (double)(end - start); cout&lt;&lt;endl&lt;&lt;"It spends "&lt;&lt;(dur/CLOCKS_PER_SEC)&lt;&lt;" seconds to compute it"&lt;&lt;endl&lt;&lt;endl; cout &lt;&lt; "total =" &lt;&lt; total &lt;&lt; endl; } </code></pre> <p>and this code:</p> <pre><code>import numpy as np import time a = {} b = {} c = {} for i in range(500): a[i] = np.matrix(np.random.rand(100, 100)) b[i] = np.matrix(np.random.rand(100, 100)) c[i] = np.matrix(np.random.rand(100, 100)) start = time.time() for i in range(500): c[i] = a[i]*b[i] print(time.time() - start) </code></pre> <p>we know that the loops do 
(nearly) the same thing. My results are these:</p> <ul> <li>python 2.7: 0.676353931427</li> <li>python 3.4: 0.6782681941986084</li> <li>clang++ -O2: 0.117377</li> <li>g++ -O2: 0.117685</li> </ul> <p>Making the arrays global ensures that we don't blow up the stack. I also changed rengine1 to rengine, since it wouldn't compile as it was.</p> <p>I then made sure both examples calculate 500 different array values.</p> <p>Interestingly, the total time for g++ is much shorter than the total time for clang++ - but that's the loop outside the time measurement, the actual matrix multiplication is the same, give or take a thousandth of a second. Total execution time for python is somewhere between clang and g++.</p>
python|c++|numpy
2
1,908,371
51,381,010
time a script with less than seconds
<p>i have this script but counts from seconds while the scripts ends in less than a second.</p> <pre><code>import time start = time.time() p=[1,2,3,4,5] print('It took {0:0.1f} seconds'.format(time.time() - start)) </code></pre> <p><code>python 3.7</code> uses a new function that can do that. I have 3.6.5. How do i do that?</p>
<p><a href="https://docs.python.org/3/library/time.html#time.perf_counter" rel="nofollow noreferrer"><code>time.perf_counter()</code>, available since Python 3.3</a>, lets you access a high-resolution wallclock.</p> <pre><code>t0 = time.perf_counter() time.sleep(.1) print(time.perf_counter() - t0) </code></pre>
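On 3.6 the start/stop boilerplate above can be wrapped up as a small context manager so it isn't repeated at every call site (the helper name is just illustrative):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed():
    """Yield a zero-arg callable that returns elapsed wallclock seconds."""
    t0 = time.perf_counter()
    yield lambda: time.perf_counter() - t0

with timed() as elapsed:
    time.sleep(0.05)

# Sub-second resolution, unlike formatting time.time() deltas to one decimal.
print('It took {0:0.4f} seconds'.format(elapsed()))
```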
python|time
2
1,908,372
17,314,959
bundling python "back end" code in a phonegap app
<p><strong>Short version</strong>:</p> <p>Is there a standard way to bundle Python "back-end server" type code in with a phonegap client application?</p> <p><strong>Long version</strong>:</p> <p>I have a number of applications that I'm considering porting to phonegap. In general, the apps are written in Python. Some of them have web front-ends, some of them are stand-alone Python apps with interfaces based on wxpython.</p> <p>So each application has some client-side stuff, which is already in html+javascript+css, or which I'm happy to translate to html+javascript+css.</p> <p>For the server side, for some of the applications it's okay to leave the server code on a server. But for some/most, I'd like to package the server-side logic in with the phonegap app, so it can be a standalone app. This requirement comes from two needs. One is that many of these apps are used for emergency response, and need to work even when cell towers and other network infrastructure is not available. The other is simply that some of the apps are quite simple, and don't warrant a client/server architecture -- they just happen to have a lot of "back end logic" type code written in Python.</p> <p>Now, I know that I could just port all of that back-end Python logic to javascript, but I was hoping to find a solution where this sizable code base could remain in Python.</p> <p>My idea is that I could write a phonegap plugin that actually contains the complete Python interpreter (at least complete enough to handle most of the stuff in my code base). (That is, both iOS and Android allow native C code, so I should be able to compile Python -- or at least much of it -- from source, or just link to iOS and Android Python libraries that others have built.)</p> <p>So in the javascript code, I could have the client call some function like "InvokeBackEndMethod()". This would act much like an ajax call, but instead of calling out on the network, it would send the query/url/message to the Python plugin. 
My understanding is that phonegap plugins can maintain persistent state (e.g., a database plugin lets you make one call to open the database and subsequent calls to read from it and close it). So the Python "server" code could maintain its state just like it does on the real server. In fact the Python code might be running a web framework like cherrpy, so it would truly be like running both the client and the server in the same mobile app.</p> <p>My questions are:</p> <p>(1) Does that plan sound reasonable?</p> <p>(2) Has someone already solved this problem? I was hoping to find a project called, say, "phonegap server", and it would essentially be a "universal" PhoneGap extension, in the sense that it would take arbitrary calls from the client, and would dispatch those calls to your choice of various mechanisms: Python, Java, mono, etc. (i.e., this universal phonegap extension would get "extended" by various language "plugins" and then those plugins in turn would get "extended" by whatever business logic the user added in the given language). It may be that such a project isn't needed by most people because they don't have a requirement to run disconnected and/or they don't have a big code base of "back end" logic that they'd like to deploy in a stand-alone app but leave in the original language. But it seems like some people must need that, no?</p>
<p>Two very different initiatives you should check out: <a href="http://omz-software.com/pythonista/" rel="nofollow noreferrer">http://omz-software.com/pythonista/</a> Allows the export to an Xcode project.</p> <p>And <a href="https://github.com/brython-dev/brython" rel="nofollow noreferrer">https://github.com/brython-dev/brython</a> Use Python instead of Javascript for HTML5 development.</p>
python|plugins|cordova|client-server
2
1,908,373
17,523,545
Pandas : histogram with fixed width
<p>I have data which I want to do an histogram, but I want the histogram to start from a given value and the width of a bar to be fixed. For example, for the serie [1, 3, 5, 10, 12, 20, 21, 25], I want, instead of</p> <pre><code>&gt;&gt;&gt; p.Series([1, 3, 5, 10, 12, 20, 21, 25]).hist(bins=3).figure # | | # | | | # | | | # 0 8.5 17 </code></pre> <p><img src="https://i.stack.imgur.com/nREIU.png" alt="Current histogram"></p> <p>I want the bars to have a width of 10 :</p> <pre><code>| | | | | | | | 0 10 20 </code></pre> <p>How can I do that ?</p> <p>EDIT : I eventually get what I wanted <img src="https://i.stack.imgur.com/XoaZ2.png" alt="good hist"></p>
<p>I think</p> <pre><code>p.Series([1, 3, 5, 10, 12, 20, 21, 25]).hist(bins=[0, 10, 20, 30]).figure </code></pre> <p>will do what you want. Alternately you can do</p> <pre><code>p.Series([1, 3, 5, 10, 12, 20, 21, 25]).hist(bins=3, range=(0,30)).figure </code></pre> <p>See <a href="http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes.hist" rel="noreferrer">documentation</a> for <code>hist</code> and the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html" rel="noreferrer">documentation</a> for <code>np.histogram</code>.</p> <p>I suspect you are also running into some issues because it is labeling the <em>center</em> of the bins, not the edges.</p>
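Underneath, pandas hands the bin edges to `numpy.histogram`, so the bar heights for the sample series above can be checked directly (note the last bin is closed on the right):

```python
import numpy as np

data = [1, 3, 5, 10, 12, 20, 21, 25]

# Explicit width-10 edges: bins are [0,10), [10,20), [20,30].
counts, edges = np.histogram(data, bins=[0, 10, 20, 30])

print(counts.tolist())  # [3, 2, 3]  ->  {1,3,5}, {10,12}, {20,21,25}
print(edges.tolist())   # [0, 10, 20, 30]

# Equivalently: three equal bins over a fixed range.
counts2, _ = np.histogram(data, bins=3, range=(0, 30))
```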
python|matplotlib|pandas
43
1,908,374
64,319,998
How can I create a DF by conditionally looking up into another DF, and appending the results horizontally into a new DF?
<p>I have two dataframes - df1 with all the data, and df2 that will be the basis of the lookup into df1. I need to create a 'result' dataframe with one row per row in DF2, where each row should be the original row from DF2, then, horizontally, each of the rows from DF1 that match the ID, and have a date before or equal to the date specified.</p> <p>I effectively need to find the mini dataframes within DF1 (see code snippet below) that corresponds to the ID and all prior dates for a given row in DF2, then append each row of said mini DF onto the right hand side of DF2.</p> <p>I understand, as per code snippet 1, how to find the mini_df based on the conditions in DF2, however I'm at a loss as to how to create the new dataframe that contains the appended rows horizontally. Ideally I'd like to specify how many rows I take from said dataframe e.g. if 10 rows match the ID and before-or-on date criteria, I'd like to take say 6 of them.</p> <pre><code> #code snippet 1 to create mini dataframe from df1 for each row in df2 mini_df = df1[(df1['ID']=='A0') &amp; (df1['Date']&lt;='20200102')] </code></pre> <pre class="lang-python prettyprint-override"><code> df1 ID1 Date1 Value 0 A0 20200101 123 1 A0 20200102 234 2 A0 20200103 345 3 A1 20200101 456 4 A1 20200102 567 5 A1 20200103 678 df2 ID2 Date2 0 A0 20200103 1 A1 20200103 result ID2 Date2 ID1-1 Date1-1 Value-1 ID1-2 Date1-2 Value-2 ID1-3 Date1-3 Value-3 0 A0 20200103 A0 20200101 123 A0 20200102 234 A0 20200103 345 1 A1 20200103 A1 20200101 456 A1 20200102 567 A1 20200103 678 </code></pre> <p>Code to reproduce the tables:</p> <pre class="lang-python prettyprint-override"><code> import pandas as pd df1 = pd.DataFrame({'ID1': ['A0', 'A0','A0', 'A1', 'A1', 'A1'], 'Date1': ['20200101', '20200102','20200103', '20200101', '20200102','20200103'], 'Value':[123,234,345,456,567,678]}) df2 = pd.DataFrame({'ID2': ['A0', 'A1'],'Date2': ['20200102', '20200102',],}) result = pd.DataFrame({'ID2':['A0','A1'], 'Date1':['20200102', 
'20200102',], 'ID1-1':['A0','A1'], 'Date1-1': ['20200101','20200101'], 'Value-1':[123,456], 'ID1-2':['A0','A1'], 'Date1-2': ['20200102','20200102'], 'Value-2':[234,567], 'ID1-3':['A0','A1'], 'Date1-3': ['20200103','20200103'], 'Value-3':[345,678], }) </code></pre> <p>Thanks in advance for any advice.</p>
<p>I hope this is what you wanted. I am a bit confused by your example output; perhaps you can explain how you arrived at it.</p> <pre><code>mini_dfs = [df1[(df1['ID1']==row[&quot;ID2&quot;]) &amp; (df1['Date1']&lt;=row['Date2'])].head(6).reset_index() for i,row in df2.iterrows()] for i, df in enumerate(mini_dfs): df.columns = map(lambda x:f&quot;{x}-{i+1}&quot;, df.columns) pd.concat(mini_dfs, axis=1) </code></pre> <p>Output</p> <pre><code> index-1 ID1-1 Date1-1 Value-1 index-2 ID1-2 Date1-2 Value-2 0 0 A0 20200101 123 3 A1 20200101 456 1 1 A0 20200102 234 4 A1 20200102 567 </code></pre>
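For reference, a self-contained run of this approach with the question's data (a sketch; it drops the leftover index column with `reset_index(drop=True)`, which the answer's version keeps):

```python
import pandas as pd

df1 = pd.DataFrame({'ID1': ['A0', 'A0', 'A0', 'A1', 'A1', 'A1'],
                    'Date1': ['20200101', '20200102', '20200103',
                              '20200101', '20200102', '20200103'],
                    'Value': [123, 234, 345, 456, 567, 678]})
df2 = pd.DataFrame({'ID2': ['A0', 'A1'], 'Date2': ['20200102', '20200102']})

# one mini dataframe per df2 row: matching ID, dates on or before Date2, at most 6 rows
mini_dfs = [df1[(df1['ID1'] == row['ID2']) & (df1['Date1'] <= row['Date2'])]
            .head(6).reset_index(drop=True)
            for _, row in df2.iterrows()]

# suffix each mini dataframe's columns so they stay distinct after the horizontal concat
for i, df in enumerate(mini_dfs):
    df.columns = [f"{c}-{i + 1}" for c in df.columns]

result = pd.concat(mini_dfs, axis=1)
print(result)
```

The date filter works here because the dates are zero-padded `YYYYMMDD` strings, so lexicographic comparison matches chronological order.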
python|pandas|dataframe
1
1,908,375
70,476,184
Make a password input in tkinter with show password option
<p>I want to create an Entry widget in tkinter to write a password. I used <code>password.config(show = &quot;*&quot;)</code> and it works fine. Is there a way to disable this configuration and make the widget show me what I typed? I'd like to make this happen when pressing a button.</p>
<p><code>password.config(show = &quot;&quot;)</code> will make the password visible again.</p> <p>So you can make a function that toggles between <code>password.config(show = &quot;*&quot;)</code> and <code>password.config(show = &quot;&quot;)</code> to hide and show the password.</p> <p>Here's a quick program using a checkbutton that illustrates this (note that calling <code>config(show = ...)</code> with a value returns nothing useful, so there is no point storing its result):</p> <pre><code>import tkinter as tk def toggle(): global password, checkbutton if checkbutton.var.get(): password.config(show = &quot;*&quot;) else: password.config(show = &quot;&quot;) root = tk.Tk() password = tk.Entry(root) password.config(show = &quot;*&quot;) checkbutton = tk.Checkbutton(root, text=&quot;Show password&quot;, onvalue=False, offvalue=True, command=toggle) checkbutton.var = tk.BooleanVar(value=True) checkbutton['variable'] = checkbutton.var password.pack() checkbutton.pack() tk.mainloop() </code></pre> <p><strong>Output</strong></p> <p>Hidden: <a href="https://i.stack.imgur.com/t0wRb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t0wRb.png" alt="enter image description here" /></a></p> <p>Shown: <a href="https://i.stack.imgur.com/KS7lq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KS7lq.png" alt="enter image description here" /></a></p>
python|tkinter
0
1,908,376
70,596,531
Performing POST requests
<p>I have some code that i have to convert from Javascript to Python.</p> <p>In JS side, the code performs HTTP requests using <code>fetch</code>:</p> <pre><code>let resp = await fetch( url + 'get/token', { method: 'POST', mode: 'cors', body: fd }); let data = await resp.json(); return data.token; </code></pre> <p>How the <code>fetch()</code> command for javascript can be performed from Python? What is the equivalent command?</p>
<p>You don't have to use <code>Flask</code> for this. If you just want to perform a simple POST request to a given URL, use Python's <code>requests</code> library like so:</p> <pre class="lang-py prettyprint-override"><code>import requests url = 'someurl' myobj = {'somekey': 'somevalue'} # data= sends the payload form-encoded (like the FormData in your fetch call); # use json=myobj instead if you want to send JSON x = requests.post(url, data = myobj) print(x.text) </code></pre> <p>EDIT:</p> <p>As @OneCricketeer mentioned in his comment, you could also do this with the <code>urllib</code> package, which is included in Python 3.</p> <pre class="lang-py prettyprint-override"><code>from urllib import request from urllib.parse import urlencode req = request.Request('your-url', method=&quot;POST&quot;) req.add_header('Content-Type', 'application/x-www-form-urlencoded') # you could set any header you want with this method data = urlencode({&quot;key&quot;:&quot;value&quot;}) data = data.encode() response = request.urlopen(req, data=data) content = response.read() </code></pre>
python|http
1
1,908,377
70,631,647
FastAPI Python Database connectivity
<p>I am new to the FastAPI world, I am creating an API to fetch and post the data to the MySQL database. I followed few link on internet and developed below code</p> <p><strong>DatabaseConnection.py</strong></p> <pre><code>from sqlalchemy import create_engine from sqlalchemy.orm import sessionmaker from sqlalchemy.ext.declarative import declarative_base sql_database_url=&quot;mysql://root:root@localhost:3306/first_db&quot; engine=create_engine(sql_database_url) sessionLocal=sessionmaker(autocommit=False,bind=engine) base=declarative_base() </code></pre> <p><strong>MainClass.py</strong></p> <pre><code>from fastapi import FastAPI,Query,Depends from sqlalchemy import Column,String,Integer from typing import Optional,List from pydantic import BaseModel from sqlalchemy.orm import Session from DatabaseConnection import engine,sessionLocal,base app=FastAPI() class User(base): __tablename__=&quot;users&quot; id=Column(Integer,primary_key=True,index=True) name=Column(String(255),unique=True,index=True) class UserSchema(BaseModel): id:int name:str class Config: orm_model=True base.metadata.create_all(bind=engine) @app.post(&quot;/create-user&quot;) def createUser(userSchema:UserSchema): user=User(id=userSchema.id,name=userSchema.name) sessionLocal().add(user) sessionLocal().commit() return user </code></pre> <p>When i try to run this API using UVICorn i was running successful and the table also created successfully but the data that i am sending through the body is not added in the table. <br /> The table is showing <em><strong>null</strong></em> value is added \</p> <p>I have referred <a href="https://www.youtube.com/watch?v=Z1VV9aMoVbE&amp;ab_channel=RedMeet" rel="nofollow noreferrer">link</a></p> <p>Any help will be appreciated. Thanks in advance</p>
<p>Thanks @anjaneyulubatta505 for the help, but we need to make the changes below to post data to the database:</p> <pre><code>@app.post(&quot;/create-user&quot;) def createUser(userSchema:UserSchema): user = User(id=userSchema.id, name=userSchema.name) with Session(bind=engine) as session: session.add(user) session.commit() return user </code></pre> <p>Referred links:</p> <ol> <li><a href="https://stackoverflow.com/questions/20201809/sqlalchemy-flask-attributeerror-session-object-has-no-attribute-model-chan">Link1</a></li> <li><a href="https://docs.sqlalchemy.org/en/14/orm/session_basics.html#opening-and-closing-a-session" rel="nofollow noreferrer">Link2</a> from SQLAlchemy</li> </ol>
python|mysql|rest|post|fastapi
1
1,908,378
72,895,956
aiohttp instance variable values shared by different users?
<p>I am testing an aiohttp app with web.run. I instantiate an imported class just after the import declaration, and a value of this instance is changed through a data channel (via an instance.changevalue() function). It works well for a single user. But when I test this with two users (mobile and laptop at the same time), the value changed from one device is reflected on the other device. Is this because aiohttp is running standalone, or am I doing something wrong with my class instance?</p>
<p>It worked after I used gunicorn to manage more than one process.</p>
python|aiohttp|class-instance-variables
0
1,908,379
49,934,720
excel file write and read show difficulty
<p>After running the code below I found a problem:</p> <pre><code>import xlsxwriter import pandas as pd workbook = xlsxwriter.Workbook('arrays.xlsx') worksheet = workbook.add_worksheet() array = [0.222333333333333, 0.048150492835172518, 'a12', 'a13', 'a14'] a=[array] print('writedata=',a) row = 0 for col, data in enumerate(a): worksheet.write_row(row, col, data) workbook.close() x = pd.read_excel('arrays.xlsx') print('readdata=',x) </code></pre> <p>The data written is the array</p> <pre><code>[0.222333333333333, 0.048150492835172518, 'a12', 'a13', 'a14'] </code></pre> <p>but after reading it back it shows like this</p> <pre><code>readdata= Empty DataFrame Columns: [0.222333333333333, 0.04815049283517252, a12, a13, a14] Index: [] </code></pre> <p>How do I get just the simple answer</p> <pre><code>[0.222333333333333, 0.04815049283517252, a12, a13, a14] </code></pre>
<p>This is just a side effect of the precision of floating point numbers. </p> <p>Excel uses <a href="https://en.wikipedia.org/wiki/Double-precision_floating-point_format" rel="nofollow noreferrer">IEEE 754 Doubles</a> to store numbers. The precision of these numbers is 16 digits whereas the number you are trying to store, 0.048150492835172518, has 17 significant digits.</p> <p>You will see the same behaviour if you insert 0.048150492835172518 into a cell in Excel, save the file, close it, open it again and then copy out the number.</p>
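The same limit can be checked directly in Python, whose float type is the same IEEE 754 double (a small illustration, not Excel-specific):

```python
# the 17-significant-digit literal from the question
x = float("0.048150492835172518")

# repr() gives the shortest decimal string that round-trips to the same double;
# for a double that is never more than 17 significant digits
shortest = repr(x)
print(shortest)

# round-tripping through the shortest form loses nothing
print(float(shortest) == x)
```

So the value read back from Excel is not "wrong"; it is simply the shortest decimal form of the double that was actually stored.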
python|python-3.x|python-2.7|subprocess|xlsxwriter
0
1,908,380
66,726,059
plotting two arrays in python with one being filled with random numbers
<p>I am trying to plot two arrays <code>r_mod and gbp</code> in python after importing <code>matplotlib</code>. Array <code>r_mod</code> contains random numbers. When I plot the two array with the command <code>plt.plot(r_mod,gbp,&quot;o&quot;)</code>, I get the first figure below which shows the global behavior of the relevant function stored in array <code>gbp</code>. However, when plotting with <code>plt.plot(r_mod,gbp)</code>, I get the second figure below which does not show the global behavior of the function.</p> <p>Can someone tell me how to fix this problem ? I need to plot with <code>lines</code> not with <code>&quot;o&quot;</code> .</p> <p><a href="https://i.stack.imgur.com/gpwS7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gpwS7.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/6bNlZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6bNlZ.png" alt="enter image description here" /></a></p>
<p>The reason for this is that matplotlib draws the line segments in the order of the first array, so unsorted x-values make the line jump back and forth. To solve this you need to sort the first array.<br /> Remember that if the second array is correlated with the first, it must not be sorted independently; its elements must be moved to the places that correspond to the new order of the first array.</p>
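A common NumPy idiom for this is argsort: compute the permutation that sorts the x-array and apply it to both arrays (a sketch with made-up data, since the original arrays are not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
r_mod = rng.random(10)      # unsorted x-values, like the random array in the question
gbp = r_mod ** 2            # stand-in y-values that depend on x

order = np.argsort(r_mod)   # permutation that sorts the x-values
r_sorted = r_mod[order]
gbp_sorted = gbp[order]     # y-values moved to their corresponding places

# plt.plot(r_sorted, gbp_sorted) now draws a clean left-to-right line
print(r_sorted[:3], gbp_sorted[:3])
```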
python|arrays|numpy|matplotlib
1
1,908,381
64,728,478
How to find if the most common value of a column appears more than X% times?
<p>Consider the following dataframe:</p> <pre><code> ID Column 0 500 2 1 500 2 2 500 2 3 500 2 4 500 2 5 500 4 </code></pre> <p>How can we see if the most common value of 'Column' appears more than X% of the times?</p> <p>I've tried to do: <code>df.locate[df.groupby('ID')['Column'].count_values(normalize=True).max() &gt; X]</code> , but I get an error.</p>
<p>I think what you had was close to a solution. It's not really clear to me whether you want to calculate this over the whole column or per group, so here is a per-group solution. You can change the variable <code>at_least_this_proportion</code> to change the minimum threshold:</p> <pre><code>import pandas as pd from io import StringIO text = &quot;&quot;&quot; ID Column 0 500 2 1 500 2 2 500 2 3 500 2 4 500 2 5 500 4 6 501 2 7 501 2 &quot;&quot;&quot; df = pd.read_csv(StringIO(text), header=0, sep='\s+') # set minimum threshold at_least_this_proportion = 0.5 </code></pre> <p>Calculate per group:</p> <pre><code># find the value that occurs at least 50% within its group value_counts_per_group = df.groupby('ID')['Column'].value_counts(normalize=True) ids_that_meet_threshold = value_counts_per_group[value_counts_per_group &gt; at_least_this_proportion].index.get_level_values(0) # get all rows for which the id meets the threshold df[df['ID'].isin(ids_that_meet_threshold)] </code></pre>
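For the whole-column case mentioned in the question, a minimal sketch: value_counts(normalize=True) sorts its result in descending order, so the first entry is the share of the most common value.

```python
import pandas as pd

df = pd.DataFrame({'ID': [500] * 6, 'Column': [2, 2, 2, 2, 2, 4]})
X = 0.5  # threshold as a fraction

# value_counts(normalize=True) is sorted descending, so iloc[0] is the top share
top_share = df['Column'].value_counts(normalize=True).iloc[0]
print(top_share > X)  # 2 appears 5/6 of the time, so this prints True
```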
pandas
1
1,908,382
53,271,163
How to get the tag name of html using Python Beautiful Soup?
<pre><code>header = head.find_all('span') [&lt;span itemprop="name"&gt;Raj&lt;/span&gt;, &lt;span itemprop="street"&gt;24 Omni Street&lt;/span&gt;, &lt;span itemprop="address"&gt;Ohio&lt;/span&gt;, &lt;span itemprop="Region"&gt;US&lt;/span&gt;, &lt;span itemprop="postal"&gt;40232&lt;/span&gt;, &lt;span class="number"&gt;334646344&lt;/span&gt;] print (header[0].tag) print(header[0].text) ####output None Raj ... ####Expected output Name Raj ... </code></pre> <p>I could not able to extract all the value of span itemprop. It throws me None output. Am I doing something wrong?</p> <p>Thanks, Raj</p>
<p>Yes, <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#tag" rel="nofollow noreferrer"><code>class 'bs4.element.Tag'</code></a> does not have a <code>tag</code> attribute, as itself <em>is</em> a <code>Tag</code>. From the docs:</p> <blockquote> <p>You can access a tag’s attributes by treating the tag like a dictionary.</p> </blockquote> <p>So you've got the list of all the <code>span</code> tags, now just iterate the list and get the attribute that you want (i.e. <code>'itemprop'</code>). Note that in Python 3 the attribute value is already a string, so there is no need to call <code>decode()</code> on it:</p> <pre><code>spans = head.find_all('span') for span in spans: try: print(span['itemprop'].title() + ': ' + span.text) except KeyError: continue </code></pre> <p>output:</p> <pre><code>Name: Raj Street: 24 Omni Street Address: Ohio Region: US Postal: 40232 </code></pre> <p>Format the output or store the data as needed</p>
python|html|css|beautifulsoup|tags
2
1,908,383
52,920,082
How to compare dynamic variables in a programmatic way in python
<p>I have a function that gets and compares the data of 2 out of 6 (or more) websites. After getting the data of 2 sites I begin to sort it. Since each site has different formatting, I need to sort each of them differently.</p> <p>And since I compare 2 of them, I only need to sort 2 of them. In order to do this I need to know which site is selected first and which one is second. My code below evaluates this with <strong>if and elif for each of the sites</strong>. With each website added to the dictionary, my current solution is to write another pair of if and elif statements.</p> <p>My question is: how can I only execute the related sites' sorting code without using if and elif pairs for each website? Is there a pythonic or programmatic way of doing this?</p> <p>My func is:</p> <pre><code>def getpairs(xx,yy): mydict = {1:"http://1stsite.com", 2:"http://2ndsite.com", ... , 6:"http://6thsite.com" } with urllib.request.urlopen(mydict[xx]) as url: dataone = json.loads(url.read().decode()) with urllib.request.urlopen(mydict[yy]) as url: datatwo = json.loads(url.read().decode()) if xx == 1: sorted1 = some code to sort 1st website data(dataone list) dataxx = sorted1 elif yy == 1: sorted1 =some code to sort 1st website data(datatwo list) datayy = sorted1 if xx == 2: ... ... ... if xx == 6: sorted6 = some code to sort 6th website data(dataone list) dataxx = sorted6 elif yy == 6: sorted6 = some code to sort 6th website data(datatwo list) datayy = sorted6 compared = set(dataxx).intersection(datayy) return compared </code></pre> <p>Thank you for your time</p>
<p>You can create another dictionary with the sorting functions, indexed in the same way that <code>mydict</code> is indexed, or perhaps with the URLs. Something like this:</p> <pre><code>def sorting_function_1(data): ... def sorting_function_2(data): ... def sorting_function_3(data): ... SORTING_FUNCTIONS = { 1: sorting_function_1, 2: sorting_function_2, 3: sorting_function_3, 4: sorting_function_2, 5: sorting_function_1, ... } def fetch_data(url, sorting_function): with urllib.request.urlopen(url) as response: data = json.loads(response.read().decode()) sorted_data = sorting_function(data) return sorted_data def getpairs(xx, yy): mydict = { ... } dataxx = fetch_data(mydict[xx], SORTING_FUNCTIONS[xx]) datayy = fetch_data(mydict[yy], SORTING_FUNCTIONS[yy]) ... </code></pre> <p>I hope this helps.</p>
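The dispatch pattern itself can be demonstrated in a tiny self-contained form, with toy sorting functions standing in for the real per-site ones:

```python
def sort_ascending(data):
    return sorted(data)

def sort_descending(data):
    return sorted(data, reverse=True)

# map each site number to its sorting function
SORTERS = {1: sort_ascending, 2: sort_descending}

def process(site_id, data):
    # look the function up by key instead of branching with if/elif
    return SORTERS[site_id](data)

print(process(1, [3, 1, 2]))  # [1, 2, 3]
print(process(2, [3, 1, 2]))  # [3, 2, 1]
```

Adding a seventh website then only requires one new dictionary entry, not another if/elif pair.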
python|dynamic
1
1,908,384
65,376,291
How to string join one column with another columns - pandas
<p>I just came across this question, how do I do <code>str.join</code> by one column to join the other, here is my <code>DataFrame</code>:</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame({'a': ['a', 'b', 'c', 'd'], 'b': ['hello', 'good', 'great', 'nice']}) a b 0 a hello 1 b good 2 c great 3 d nice </code></pre> <p>I would like the <code>a</code> column to join the values in the <code>b</code> column, so my desired output is:</p> <pre><code> a b 0 a haealalao 1 b gbobobd 2 c gcrcecact 3 d ndidcde </code></pre> <p>How would I go about that?</p> <p>Hope you can see the correlation with this, here is one example with the first row that you can do in python:</p> <pre><code>&gt;&gt;&gt; 'a'.join('hello') 'haealalao' &gt;&gt;&gt; </code></pre> <p>Just like in the desired output.</p> <p>I think it might be useful to know how two columns can interact. <code>join</code> might not be the best example but there are other functions that you could do. It could maybe be useful if you use <code>split</code> to <code>split</code> on the other columns, or replace the characters in the other columns with something else.</p> <p><strong>P.S. I have a self-answer below.</strong></p>
<h1>TL;DR</h1> <p>The below code is the fastest answer I could figure out from this question:</p> <pre><code>it = iter(df['a']) df['b'] = [next(it).join(i) for i in df['b']] </code></pre> <p>The above code first creates an iterator over the <code>a</code> column, then you can use <code>next</code> for getting the next value every time, then in the list comprehension it joins the two strings.</p> <h2>Long answer:</h2> <p>Going to show my solutions:</p> <p><strong>Solution 1:</strong></p> <p>To use a <code>list</code> comprehension and an iterator:</p> <pre><code>it = iter(df['a']) df['b'] = [next(it).join(i) for i in df['b']] print(df) </code></pre> <p><strong>Solution 2:</strong></p> <p>Group by the index, and <code>apply</code> and <code>str.join</code> the two columns' values:</p> <pre><code>df['b'] = df.groupby(df.index).apply(lambda x: x['a'].item().join(x['b'].item())) print(df) </code></pre> <p><strong>Solution 3:</strong></p> <p>Use a <code>list</code> comprehension that iterates through both columns and <code>str.join</code>s them:</p> <pre><code>df['b'] = [x.join(y) for x, y in df.values.tolist()] print(df) </code></pre> <p>These codes all output:</p> <pre><code> a b 0 a haealalao 1 b gbobobd 2 c gcrcecact 3 d ndidcde </code></pre> <h3>Timing:</h3> <p>Now it's time to move on to timing with the <code>timeit</code> module, here is the code we use to time:</p> <pre><code>from timeit import timeit df = pd.DataFrame({'a': ['a', 'b', 'c', 'd'], 'b': ['hello', 'good', 'great', 'nice']}) def u11_1(): it = iter(df['a']) df['b'] = [next(it).join(i) for i in df['b']] def u11_2(): df['b'] = df.groupby(df.index).apply(lambda x: x['a'].item().join(x['b'].item())) def u11_3(): df['b'] = [x.join(y) for x, y in df.values.tolist()] print('Solution 1:', timeit(u11_1, number=5)) print('Solution 2:', timeit(u11_2, number=5)) print('Solution 3:', timeit(u11_3, number=5)) </code></pre> <p>Output:</p> <pre><code>Solution 1: 0.007374127670871819 Solution 2: 0.05485127553865618 Solution 3:
0.05787154087587698 </code></pre> <p>So the first solution is the quickest, using a generator.</p>
python|pandas|string|dataframe
3
1,908,385
65,245,439
How to use DNN to fit these data
<p>I'm using DNN to fit these data, and I use softmax to classify them into 2 class, and each of them has a demensity of 4040, can someone with experience tell me what's wrong with my nets.</p> <p>It is strange that my initial loss is 7.6 and my initial error is 0.5524, and Basically they won't change anymore.</p> <pre><code>for train, test in kfold.split(data_pro, valence_labels): model = keras.Sequential() model.add(keras.layers.Dense(5000,activation='relu',input_shape=(4040,))) model.add(keras.layers.Dropout(rate=0.25)) model.add(keras.layers.Dense(500, activation='relu')) model.add(keras.layers.Dropout(rate=0.5)) model.add(keras.layers.Dense(1000, activation='relu')) model.add(keras.layers.Dropout(rate=0.5)) model.add(keras.layers.Dense(2, activation='softmax')) model.add(keras.layers.Dropout(rate=0.5)) model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.0001,rho=0.9), loss='binary_crossentropy', metrics=['accuracy']) print('------------------------------------------------------------------------') print(f'Training for fold {fold_no} ...') log_dir=&quot;logs/fit/&quot; + datetime.datetime.now().strftime(&quot;%Y%m%d-%H%M%S&quot;) tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1) # Fit data to model history = model.fit(data_pro[train], valence_labels[train], batch_size=128, epochs=50, verbose=1, callbacks=[tensorboard_callback] ) # Generate generalization metrics scores = model.evaluate(data_pro[test], valence_labels[test], verbose=0) print(f'Score for fold {fold_no}: {model.metrics_names[0]} of {scores[0]}; {model.metrics_names[1]} of {scores[1]*100}%') acc_per_fold.append(scores[1] * 100) loss_per_fold.append(scores[0]) # Increase fold number fold_no = fold_no + 1 # == Provide average scores == print('------------------------------------------------------------------------') print('Score per fold') for i in range(0, len(acc_per_fold)): 
print('------------------------------------------------------------------------') print(f'&gt; Fold {i+1} - Loss: {loss_per_fold[i]} - Accuracy: {acc_per_fold[i]}%') print('------------------------------------------------------------------------') print('Average scores for all folds:') print(f'&gt; Accuracy: {np.mean(acc_per_fold)} (+- {np.std(acc_per_fold)})') print(f'&gt; Loss: {np.mean(loss_per_fold)}') print('------------------------------------------------------------------------') </code></pre> <p><a href="https://i.stack.imgur.com/NKlhD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NKlhD.png" alt="enter image description here" /></a></p>
<p>You shouldn't add <code>Dropout</code> after the final <code>Dense</code>; delete the <code>model.add(keras.layers.Dropout(rate=0.5))</code> that follows it.</p> <p>I also think your code may raise an error, because your <code>labels</code>' dimension is 1 but your final <code>Dense</code>'s <code>units</code> is 2. Change <code>model.add(keras.layers.Dense(2, activation='softmax'))</code> to <code>model.add(keras.layers.Dense(1, activation='sigmoid'))</code></p> <p>Read <a href="https://www.tensorflow.org/tutorials/quickstart/beginner" rel="nofollow noreferrer">this</a> to learn <code>tensorflow</code></p> <p>Update 1 :</p> <p>Change</p> <pre><code>model.compile(optimizer= tf.keras.optimizers.SGD(learning_rate = 0.00001,momentum=0.9,nesterov=True), loss=tf.keras.losses.CategoricalCrossentropy(), metrics=['accuracy']) </code></pre> <p>to</p> <pre><code>model.compile(optimizer= tf.keras.optimizers.Adam(learning_rate=3e-4), loss=tf.keras.losses.CategoricalCrossentropy(), metrics=['accuracy']) </code></pre> <p>And change</p> <pre><code>accAll = [] for epoch in range(1, 50): model.fit(train_data, train_labels, batch_size=50,epochs=5, validation_data = (val_data, val_labels)) val_loss, val_Accuracy = model.evaluate(val_data,val_labels,batch_size=1) accAll.append(val_Accuracy) </code></pre> <p>to</p> <pre><code>accAll = model.fit( train_data, train_labels, batch_size=50,epochs=20, validation_data = (val_data, val_labels) ) </code></pre>
deep-learning|tensorflow2.0
1
1,908,386
65,332,407
Python Flask-SQLAlchemy Prepare query
<p>I'm working on an API made with Python Flask-SQLAlchemy.</p> <p>I'm looking for an elegant way to prepare my query step by step.</p> <p>Today my code works but seems very ugly to me because of all the duplicated content:</p> <pre><code> if filteron is None: if orderby == 'config': if order == 'DESC': job = JobFull.query.order_by(JobFull.config_id.desc()).limit(limit).all() else: job = JobFull.query.order_by(JobFull.config_id.asc()).limit(limit).all() elif orderby == 'crawler': if order == 'DESC': job = JobFull.query.order_by(JobFull.crawler_id.desc()).limit(limit).all() else: job = JobFull.query.order_by(JobFull.crawler_id.asc()).limit(limit).all() elif orderby == 'site': if order == 'DESC': job = JobFull.query.order_by(JobFull.site_id.desc()).limit(limit).all() else: job = JobFull.query.order_by(JobFull.site_id.asc()).limit(limit).all() else: if order == 'DESC': job = JobFull.query.order_by(JobFull.job_id.desc()).limit(limit).all() else: job = JobFull.query.order_by(JobFull.job_id.asc()).limit(limit).all() </code></pre> <p>What I would like to do is prepare my query like:</p> <pre><code>if filteron is None: if order == 'DESC': job = job.query.orderby.desc() if orderby == 'config': job = job.query.orderby.(JobFull.config_id) if limit is not None: job = job.limit(limit) </code></pre> <p>Is there an elegant way to do that, or do I need to continue in my <strong>if nightmare</strong>?</p> <p>regards,</p>
<p>Extrapolate your logic into a reusable function, combined with the <code>getattr</code> function.</p> <pre><code>def create_query(model, orderby: str='', desc: bool=False, limit: int=0): &quot;&quot;&quot; model: your SQLAlchemy Model orderby: the name of the column you want to order by desc: switch to order by Descending limit: limit the number of results returned&quot;&quot;&quot; query = model.query if orderby: col = getattr(model, orderby) col = col.desc() if desc else col.asc() query = query.order_by(col) if limit: query = query.limit(limit) return query.all() </code></pre>
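The key trick here, looking up a column object by its string name with getattr, can be illustrated without a database (toy stand-ins for the SQLAlchemy column objects; all names are made up):

```python
class Col:
    """Toy stand-in for a SQLAlchemy column with asc()/desc() methods."""
    def __init__(self, name):
        self.name = name

    def asc(self):
        return f"{self.name} ASC"

    def desc(self):
        return f"{self.name} DESC"

class JobFull:
    config_id = Col("config_id")
    job_id = Col("job_id")

def order_clause(model, orderby, desc=False):
    col = getattr(model, orderby)  # fetch the column attribute by its string name
    return col.desc() if desc else col.asc()

print(order_clause(JobFull, "config_id", desc=True))  # config_id DESC
print(order_clause(JobFull, "job_id"))                # job_id ASC
```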
python|flask|flask-sqlalchemy
2
1,908,387
66,277,597
how to add constraint to make sure weights toward to 0 as much as possible in tensorflow?
<p>say we have a simple neural network with 4 <code>Dense</code> layers, Lin -&gt; L1 -&gt; L2 -&gt; Lout; assume <code>L2 = matrix[1x5]</code> and the 5 values can be represented as <code>[a1, a2, a3, a4, a5]</code>; when we train the model, we know there are lots of groups of <code>[a1, a2, a3, a4, a5]</code> satisfying the data like <code>[1,2,3,4,5]</code> <code>[1,0,4,5,5]</code> <code>[0,0,15,0,0]</code> <code>[0,0,0,5,0]</code>;</p> <p><strong>my question is how to add a constraint to the layer weights so that we can make sure most of them are 0</strong>. for example, the 4 groups L2 weights <code>[1,2,3,4,5]</code> <code>[1,0,4,5,5]</code> <code>[0,0,15,0,0]</code> <code>[0,0,0,5,0]</code>, where the 3rd and 4th one has 4 zeros; and 5 &lt; 15 so that we treat the 4th one as the most prior among the 4 groups.</p> <p>we know TensorFlow Keras has the functionality: <a href="https://keras.io/api/layers/constraints/" rel="nofollow noreferrer">https://keras.io/api/layers/constraints/</a></p> <p>but there are no built-in constraints for my question. any idea on how to write such a constraint or maybe there is another way to do this?</p> <p>more specific, we have lots of vectors and we want to classify the vectors, we want a layer to recognize which columns are important (but we do not know exact columns, like word embedding, we need to transform a word to vector; here we need to transform a vector to importance bitmask and then do further processing) and we can drop out other columns. for example, we have features <code>[x1, x2, x3, x4, x5]</code> and we got L2 <code>[0,0,0,5,0]</code>, then we can say, the 4th column is important so that we can transform the feature vector to <code>[0, 0, 0, 5 * x4, 0]</code></p> <p>thx in advance.</p>
<blockquote> <p>so that we can make sure most of them are 0</p> </blockquote> <p>If there are no <strong>strict</strong> requirements on the number of 0s (as you might have suggested in the single-column example) you are looking for <a href="https://en.wikipedia.org/wiki/Lasso_(statistics)" rel="nofollow noreferrer">Lasso regression</a> (so-called <strong>L1</strong> regularization) which, simply put, penalizes the magnitude of each weight. The weight will only be big if it is absolutely crucial for the inference.</p> <p>In tensorflow 2.x this can be done via a <a href="https://www.tensorflow.org/api_docs/python/tf/keras/regularizers/L1" rel="nofollow noreferrer">kernel regularizer</a>. Now, this enforces weights to be small, but it does not guarantee they will be 0. Furthermore, it strongly affects performance if used abusively.</p> <p>As a side note, the problem you are <strong>probably</strong> trying to solve is related to machine learning interpretability/explainability, and while your approach is interesting, it might be worth looking at methods/models constructed solely for this purpose (there are models that are able to produce feature significance etc)</p>
tensorflow|constraints|dropout
0
1,908,388
69,077,652
How to use or in roles discord.py
<p>So, i have this code:</p> <p><code>role = discord.utils.find(lambda r: r.name == 'Member of IMP', lambda r: r.name == 'Blitzo', lambda r: r.name == 'Stolas', lambda r: r.name == 'Moxxie', lambda r: r.name == 'Millie', lambda r: r.name == 'Loona')</code></p> <p>And I'm trying to figure out how to make it mean : If user has &quot;role&quot; or &quot;role&quot; or &quot;role&quot; I set up everything else all i need to know is how to use &quot;or&quot; in this situation</p>
<p>You could use a single <code>lambda</code> and the <code>in</code> operator on a <code>tuple</code> (or another iterable like a <code>list</code> or <code>set</code>). Note that <code>discord.utils.find</code> takes the iterable to search as its second argument, e.g. the member's roles:</p> <pre class="lang-py prettyprint-override"><code>names = (&quot;Member of IMP&quot;, &quot;Blitzo&quot;, .....,) role = discord.utils.find(lambda r: r.name in names, member.roles) </code></pre> <p>If you <em>really</em> want to use <code>or</code> then this is probably what you're looking for:</p> <pre class="lang-py prettyprint-override"><code>role = discord.utils.find(lambda r: r.name == &quot;name1&quot; or r.name == &quot;name2&quot; or ...., member.roles) </code></pre> <p>However, this becomes unmaintainable &amp; unreadable very fast, so I suggest the first.</p>
python|discord|discord.py
3
1,908,389
62,389,540
Does ImageDataGenerator Class in Tensorflow Create New Data?
<p>I know for a fact that many use the ImageDataGenerator class in Tensorflow for augmentation. I wonder if ImageDataGenerator creates new data with augmentation or applies random augmentation to the data and imports without duplicating and augmenting. Is there any way to create new data with augmentation if the latter is true?</p>
<p>Augmentation is different from dataset enrichment. </p> <p>Augmentation refers to creating new samples at runtime (they do not physically exist on your storage) in order to ensure the creation of a more robust model, avoid overfitting etc.</p> <p>If you want to create different examples of your images, you have to manually iterate through your dataset, apply some image transformations, and save them to disk. Otherwise, a picture rotated by 20 degrees to the left, for instance, is present only at training time for the network and is never persisted to your disk.</p>
tensorflow|keras
0
1,908,390
54,100,105
Given a list of strings, I want to add the values of a dictionary to a new dictionary if any of the values are equal to the values in the list
<p>For example:</p> <pre><code>my_dict = {1: "apple", 2: "pear", 3: "orange", 4: "apple", 5: "grape", 6: "mango", 7: "mango", 8: "pear", 9: "lemon"} fruit_list = ["apple", "mango", "lemon"] </code></pre> <p>I want to parse through my_dict and if a value in my_dict is equal to either of the values in fruit_list, add that key and value to a new dictionary, so the output dictionary is:</p> <pre><code>{1: "apple", 4:"apple", 6:"mango", 7:"mango", 9:"lemon"} </code></pre> <p>I have already tried using the enumerate function but don't seem to be getting an output.</p> <p>Thank you!</p>
<p>You could do a dict comprehension:</p> <pre><code>new_dict = {k: v for k, v in my_dict.items() if v in fruit_list} </code></pre>
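A quick check with the question's data:

```python
my_dict = {1: "apple", 2: "pear", 3: "orange", 4: "apple", 5: "grape",
           6: "mango", 7: "mango", 8: "pear", 9: "lemon"}
fruit_list = ["apple", "mango", "lemon"]

# keep only the entries whose value appears in fruit_list
new_dict = {k: v for k, v in my_dict.items() if v in fruit_list}
print(new_dict)  # {1: 'apple', 4: 'apple', 6: 'mango', 7: 'mango', 9: 'lemon'}
```

For large inputs, converting fruit_list to a set first makes each membership test O(1) instead of scanning the list.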
python|dictionary
4
1,908,391
58,532,497
Python Get Value of Javascript Variable
<p>I'm <strong>scraping</strong> an Instagram page (<a href="https://instagram.com/celmirashop" rel="nofollow noreferrer">https://instagram.com/celmirashop</a>) and get a script back (HTML and some javascript). The result looks like this</p> <pre><code>&lt;script&gt;some script&lt;/script&gt; &lt;script&gt;some script&lt;/script&gt; &lt;script&gt;some script&lt;/script&gt; &lt;script&gt;window._sharedData = {&quot;config&quot;:{&quot;csrf_token&quot;:&quot;sSqrj6c8tfN1HwOIlwmpqONT2bAPhtNu&quot;,&quot;viewer&quot;:null etc....&lt;/script&gt; </code></pre> <p>I have created a script like this</p> <pre><code>import urllib.request import json import re from bs4 import BeautifulSoup web = urllib.request.urlopen(&quot;https://instagram.com/celmirashop&quot;) soup = BeautifulSoup(web.read(), 'lxml') pattern = re.compile(r&quot;window._sharedData = .&quot;) script = soup.find(&quot;script&quot;,text=pattern) print(script) </code></pre> <p>and it gives me the specific javascript that I want, like this</p> <pre><code>&lt;script&gt;window._sharedData = {&quot;config&quot;:{&quot;csrf_token&quot;:&quot;sSqrj6c8tfN1HwOIlwmpqONT2bAPhtNu&quot;,&quot;viewer&quot;:null etc....&lt;/script&gt; </code></pre> <p>How can I get the value of window._sharedData and loop over it? I want to save it in MySQL.</p>
<p>Assuming it ends with <code>;</code> and occurs only once, you can use the following regex pattern on <code>response.text</code>:</p> <pre><code>import re

s = '''&lt;script&gt;window._sharedData = {"config":{"csrf_token":"sSqrj6c8tfN1HwOIlwmpqONT2bAPhtNu","viewer":null"};&lt;/script&gt;'''
p = re.compile(r'window\._sharedData = (.*);')
print(p.findall(s)[0])
</code></pre>
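Once the substring is captured, it can be decoded with <code>json.loads</code>. A minimal sketch on a made-up snippet (the real Instagram payload is much larger, and this assumes the captured text is valid JSON):

```python
import json
import re

# Hypothetical page fragment; the real markup contains far more data
html = '<script>window._sharedData = {"config": {"viewer": null}};</script>'
match = re.search(r'window\._sharedData = (.*?);</script>', html)
# The captured group is plain JSON, so it decodes into a dict
shared = json.loads(match.group(1))
print(shared["config"])
```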
python|web-scraping
2
1,908,392
33,364,370
How to add chain id in pdb
<p>Using the biopython library, I would like to add chain ids to my pdb file. I'm using</p> <pre><code>p = PDBParser()
structure=p.get_structure('mypdb', 'mypdb.pdb')
model=structure[0]
model.child_list=["A","B"]
</code></pre> <p>But I got this error:</p> <pre><code>Traceback (most recent call last):
 File "../../principal_axis_v3.py", line 319, in &lt;module&gt;
   main()
 File "../../principal_axis_v3.py", line 310, in main
   protA=read_PDB(struct,ch1,s1,e1)
 File "../../principal_axis_v3.py", line 104, in read_PDB
   chain=model[ch]
 File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Bio/PDB/Entity.py", line 38, in __getitem__
   return self.child_dict[id]
KeyError: 'A'
</code></pre> <p>I tried to change the keys in the <code>child_dict</code>, but I got another error:</p> <pre><code>Traceback (most recent call last):
 File "../../principal_axis_v3.py", line 319, in &lt;module&gt;
   main()
 File "../../principal_axis_v3.py", line 310, in main
   protA=read_PDB(struct,ch1,s1,e1)
 File "../../principal_axis_v3.py", line 102, in read_PDB
   model.child_dict.keys=["A","B"]
AttributeError: 'dict' object attribute 'keys' is read-only
</code></pre> <p>How can I add chain ids?</p>
<p>Your error is that <code>child_list</code> is not a list with chain IDs, but of <code>Chain</code> objects (<code>Bio.PDB.Chain.Chain</code>). You have to create <code>Chain</code> objects and then add them to the structure. A lame example:</p> <pre><code>from Bio.PDB.Chain import Chain my_chain = Chain("C") model.add(my_chain) </code></pre> <p>Now you can access the model <code>child_dict</code>:</p> <pre><code>&gt;&gt;&gt; model.child_dict {'A': &lt;Chain id=A&gt;, 'C': &lt;Chain id=C&gt;} &gt;&gt;&gt; model.child_dict["C"] &lt;Chain id=C&gt; </code></pre>
python|biopython|protein-database
1
1,908,393
40,957,764
Replace character in numpy ndarray (Python)
<p>I have a <strong>numpy ndarray</strong> with 6 elements: </p> <p><code>['\tblah blah' '"""123' 'blah' '"""' '\t456' '78\t9']</code></p> <p>I am trying to <strong>replace all tab characters <code>\t</code> with 4 spaces each</strong> so that the numpy array would now be:</p> <p><code>[' blah blah' '"""123' 'blah' '"""' ' 456' '78 9']</code></p> <p>I have considered <strong>re.sub</strong> but cannot figure out how to implement it when it comes down to an numpy ndarray. Any suggestions/help please?</p>
<p>You could use <a href="https://docs.scipy.org/doc/numpy/reference/routines.char.html" rel="nofollow noreferrer"><code>NumPy's core.defchararray</code></a> that deals with string related operations and for this case use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.core.defchararray.replace.html#numpy.core.defchararray.replace" rel="nofollow noreferrer"><code>replace</code> method</a>, like so -</p> <pre><code>np.core.defchararray.replace(arr,'\t', ' ') </code></pre> <p>Sample run -</p> <pre><code>In [44]: arr Out[44]: array(['\tblah blah', '"""123', 'blah', '"""', '\t456', '78\t9'], dtype='|S10') In [45]: np.core.defchararray.replace(arr,'\t', ' ') Out[45]: array([' blah blah', '"""123', 'blah', '"""', ' 456', '78 9'], dtype='|S13') </code></pre>
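The same routine is also exposed as <code>np.char.replace</code>, a documented alias for <code>core.defchararray</code>, which reads a little shorter:

```python
import numpy as np

arr = np.array(['\tblah blah', '78\t9'])
# np.char.* functions apply the corresponding str method element-wise
out = np.char.replace(arr, '\t', '    ')
print(out)
```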
python|python-3.x|numpy|multidimensional-array
2
1,908,394
43,911,131
get related path when using a function in an imported python file
<p>My structure is like this:</p> <p>---file1.py</p> <p>---file2.py</p> <p>---directory1</p> <pre><code> ---file3.py
</code></pre> <p>---tools</p> <pre><code> ---setting_manager.py
 ---settings.json
</code></pre> <p>Here directory1 and tools are two directories. In setting_manager.py, I have a function that reads some settings from settings.json:</p> <pre><code>with open('settings.json', 'r') as f:
    properties = json.load(f)
    return properties
</code></pre> <p>And in file1, file2 and file3, I import the setting_manager like this:</p> <pre><code>from tools import setting_manager
</code></pre> <p>But when I use this function from file1, file2 or file3, the relative path is resolved against the calling script and Python can't find my 'settings.json'. For example, when calling from file1, I need to write</p> <pre><code>with open('tools/settings.json', 'r') as f:
</code></pre> <p>then it works, but in file3 I need to write</p> <pre><code>with open('../tools/settings.json', 'r') as f:
</code></pre> <p>Is there any way to make this work from every file?</p>
<p>I think you can define the path relative to the module file itself in your <code>setting_manager.py</code>, so it no longer depends on the caller's working directory:</p> <pre><code>import os

base_dir = os.path.dirname(os.path.abspath(__file__))
pathname = os.path.join(base_dir, 'settings.json')
</code></pre> <p>Then open <code>pathname</code> inside the function, and any file that imports the module will find the settings.</p> <p>Hope this will help.</p> <p>Thanks.</p>
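One gotcha worth noting: <code>os.path.join</code> silently discards everything before an absolute second argument, so a relative segment must not start with a slash (shown here with made-up paths, on a POSIX system):

```python
import os

# An absolute second argument throws away the base entirely
print(os.path.join('/base', '/tools/settings.json'))  # -> /tools/settings.json
print(os.path.join('/base', 'tools/settings.json'))   # -> /base/tools/settings.json
```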
python|import|module|path
2
1,908,395
37,513,519
Python GUI only shows one label instead of two
<p>So I'm making a game and I was wondering why only one of the labels (num of rows / num of columns) shows instead of both. When I comment one out, the other shows and vice versa, instead of both showing.</p> <pre><code>class OthelloGUI():
    def __init__(self):
        self._root_window = tkinter.Tk()
        self._root_window.title('Othello')

        self.read_row()
        self.read_column()

    def read_row(self) -&gt; int:
        self.row_text =tkinter.StringVar()
        self.row_text.set('Num of rows:')
        row_label = tkinter.Label(
            master = self._root_window,
            textvariable = self.row_text,
            background = 'yellow',
            height = 1,
            width = 10,
            font = DEFAULT_FONT)

        row_label.grid(row=1, column = 0, padx = 10, pady=10,
                    sticky = tkinter.W+tkinter.N)

        return self.row.get()

    def read_column(self) -&gt; int:
        self.column_text =tkinter.StringVar()
        self.column_text.set('Num of columns:')
        column_label = tkinter.Label(
            master = self._root_window,
            textvariable = self.column_text,
            background = 'yellow',
            height = 1,
            width = 13,
            font = DEFAULT_FONT)

        column_label.grid(row=1, column = 0, padx = 10, pady=50,
                    sticky = tkinter.W+tkinter.N)

        return self.column.get()
</code></pre>
<p>You are calling <code>grid</code> with the same coordinates:</p> <pre><code>row_label.grid(row=1, column = 0, padx = 10, pady=10, sticky = tkinter.W+tkinter.N) column_label.grid(row=1, column = 0, padx = 10, pady=50, sticky = tkinter.W+tkinter.N) </code></pre> <p>When you grid both at (1, 0), the second one will override the first. Instead, use different row/column arguments:</p> <pre><code>row_label.grid(row=1, column = 0, padx = 10, pady=10, sticky = tkinter.W+tkinter.N) column_label.grid(row=2, column = 0, padx = 10, pady=50, sticky = tkinter.W+tkinter.N) </code></pre> <p>Of course, set the row/column to whatever you want in your interface.</p>
python|user-interface
1
1,908,396
34,401,617
Can Eve/Cerberus Do Validation of the Schema Itself?
<p>I would like to do a sort of "pre-validation" of a schema to enforce certain fields are included in a schema used in an Eve resource. I see that we can validate and extend validation using Cerberus (<a href="http://docs.python-cerberus.org/en/latest/customize.html" rel="nofollow">http://docs.python-cerberus.org/en/latest/customize.html</a>). I don't want to validate the data against the schema, but that the schema contains certain fields itself.</p> <p>My application is created by allowing other teams in my organization to provide their own resources, and I have a few fields that I would like to enforce that they provide in their schemas.</p> <p>Is this possible with Eve/Cerberus? I assume that it probably is not and I will have to roll my own. If it isn't available, when is the proper time to inject this validation of the schema?</p>
<p>I am not sure I understand your question. You can make sure a field is included by setting the <code>required</code> constraint on it. You can also set a <code>default</code> value for a missing field if that can be of any help. See <a href="http://python-eve.org/config#schema-definition" rel="nofollow">Schema Definition</a> in the documentation for details.</p>
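For instance, a hedged sketch of what such a schema definition might look like (the field names here are made up, not part of Eve itself):

```python
# Hypothetical Eve/Cerberus schema: 'owner' must always be supplied,
# while a missing 'status' is filled in with a default value
schema = {
    'owner': {'type': 'string', 'required': True},
    'status': {'type': 'string', 'default': 'draft'},
}
print(schema['owner'])
```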
python|eve
1
1,908,397
73,309,499
Calculate sum of columns of same name pandas
<p>How can I search for duplicate columns in a dataframe and then create a new column with the same name? The new column is the result of an 'OR' over these columns; then drop the old duplicated columns.</p> <p>Example:</p> <p>For that, I tried to create a unique column 'job' that is the result of the 'OR' of the two 'job' columns in the table below.</p> <p>This is what my table looks like:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>name</th> <th>job</th> <th>maried</th> <th>children</th> <th>job</th> </tr> </thead> <tbody> <tr> <td>John</td> <td>True</td> <td>True</td> <td>True</td> <td>True</td> </tr> <tr> <td>Peter</td> <td>True</td> <td>False</td> <td>True</td> <td>True</td> </tr> <tr> <td>Karl</td> <td>False</td> <td>True</td> <td>True</td> <td>True</td> </tr> <tr> <td>jack</td> <td>False</td> <td>False</td> <td>False</td> <td>False</td> </tr> </tbody> </table> </div> <p>The result that I want is:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>name</th> <th>job</th> <th>maried</th> <th>children</th> </tr> </thead> <tbody> <tr> <td>John</td> <td>True</td> <td>True</td> <td>True</td> </tr> <tr> <td>Peter</td> <td>True</td> <td>False</td> <td>True</td> </tr> <tr> <td>Karl</td> <td>True</td> <td>True</td> <td>True</td> </tr> <tr> <td>jack</td> <td>False</td> <td>False</td> <td>False</td> </tr> </tbody> </table> </div> <p>I tried to do this (df1 is my table):</p> <pre><code>df_join = pd.DataFrame()
df1_dulp = pd.DataFrame()
df_tmp = pd.DataFrame()

for column in df1.columns:
    df1_dulp = df1.filter(like=str(column))
    if df1_dulp.shape[1] &gt;= 2:
        for i in range(0, df1_dulp.shape[1]):
            df_tmp += df1_dulp.iloc[:,i]
        if column in df1_dulp.columns:
            df1_dulp.drop(column, axis=1, inplace=True)
    df_join = df_join.join(df1_dulp, how = 'left', lsuffix='left', rsuffix='right')
</code></pre> <p>The result is an empty table (df_join).</p>
<p>You can select the boolean columns with <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.select_dtypes.html" rel="nofollow noreferrer"><code>select_dtypes</code></a>, then aggregate as OR with <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.GroupBy.any.html" rel="nofollow noreferrer"><code>groupby.any</code></a> on columns:</p> <pre><code>out = (df .select_dtypes(exclude='bool') .join(df.select_dtypes('bool') .groupby(level=0, axis=1, sort=False).any() ) ) </code></pre> <p>output:</p> <pre><code> name job maried children 0 John True True True 1 Peter True False True 2 Karl True True True 3 jack False False False </code></pre>
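If your pandas version warns about <code>axis=1</code> in <code>groupby</code> (it is deprecated in recent releases), the same OR-aggregation can be written by transposing first — a sketch on a reduced example:

```python
import pandas as pd

# Two duplicate boolean 'job' columns, as in the question
df = pd.DataFrame([[True, True], [False, True], [False, False]],
                  columns=['job', 'job'])
# Transpose, OR together the rows that share a label, transpose back
merged = df.T.groupby(level=0).any().T
print(merged)
```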
python|pandas|dataframe|sum|python-2.x
1
1,908,398
66,358,565
Gaussian Fit to Data in Python using Lmfit
<p>I'm trying to write a code that performs a Gaussian fit to a gamma ray calibration spectrum, i.e. multiple peaks. Here is my current code:</p> <pre><code>from numpy import loadtxt import numpy as np from lmfit.models import GaussianModel import matplotlib.pyplot as plt #Centre of each of the peaks we want to fit Gaussian to peakCentroids = np.array([251, 398, 803, 908, 996, 1133, 1178, 2581, 3194, 3698, 4671]) #Import total data set data = loadtxt('Phase_1_02.dat') x = data[:, 0] #channel number/gamma energy y = data[:, 1] #Count number mod = GaussianModel() pars = mod.guess(y, x=x) out = mod.fit(y, pars, x=x) print(out.fit_report(min_correl=0.25)) for currentPeakNumber in range(len(peakCentroids)): fig = plt.figure(figsize = (8, 8)) plt.plot(x, y, 'b') plt.plot(x, out.init_fit, 'k--', label='initial fit') plt.plot(x, out.best_fit, 'r-', label='best fit') plt.legend(loc='best') plt.show() </code></pre> <hr /> <p>It's outputting the spectra for my data and printing the relevant parameters (e.g. center, sigma, fwhm etc.), but I'm having a bit of brain freeze in terms of fitting the Gaussian peak to each of the centroids specified! Currently the output spectra is only fitting to the first peak at a value of about 248. Is there anyone much better at coding in Python than me that can shed some light on how to go about this and if it's possible using Lmfit please? Thanks in advance!! :)</p>
<p>If I understand the question correctly, you are looking to model the data you have with a series of Gaussian line shapes, centered at the many (10 or more) values you have.</p> <p>If that is the case, the model should be constructed from the 10 or more Gaussians, but your model only builds one Gaussian. You'll want to build a model with something like</p> <pre><code>import numpy as np from lmfit.models import GaussianModel peakCentroids = np.array([251.0, 398, 803, 908, 996, 1133, 1178, 2581, 3194, 3698, 4671.0]) mod = None for i, cen in enumerate(peakCentroids): thispeak = GaussianModel(prefix='p%d_' %(i+1)) if mod is None: mod = thispeak else: mod = mod + thispeak pars = mod.make_params() for i, cen in enumerate(peakCentroids): pars['p%d_center' % (i+1)].value = cen pars['p%d_amplitude' % (i+1)].value = 1.0 # is that a sensible initial value? pars['p%d_sigma' % (i+1)].value = 1.0 # is that a sensible initial value? out = mod.fit(y, pars, x=x) print(out.fit_report(min_correl=0.25)) </code></pre> <p>or at least that might be a good place to start...</p>
python|gaussian
0
1,908,399
64,769,301
Seaborn Scatterplot - Order the X axis lables
<p>I have the name of the day of the week on the x-axis of my seaborn <strong>scatterplot</strong>.</p> <p>I want to order it according to the following natural order:</p> <pre><code>cats = [ 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
</code></pre> <p>I tried the following, which works for other plot types:</p> <pre><code>sns.scatterplot(data=df, x='Day', y='Time', hue='Type', order= cats)
</code></pre> <p>But it gives me an error: it seems <code>order</code> is not a parameter of the scatterplot function. How can I solve this?</p> <pre><code> File &quot;/..lib/python3.7/site-packages/matplotlib/artist.py&quot;, line 970, in _update_property
   .format(type(self).__name__, k))
AttributeError: 'PathCollection' object has no property 'order'
</code></pre>
<p>Use this code instead of <code>order</code>, passing the list itself to <code>hue_order</code>:</p> <pre><code>sns.scatterplot(data=df, x='Day', y='Time', hue='Type', hue_order=cats)
</code></pre> <p><a href="https://i.stack.imgur.com/IGMo8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IGMo8.png" alt="screenshot of code" /></a></p>
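If <code>hue_order</code> alone does not give the x-axis order you want, a common alternative is to make the column an ordered <code>Categorical</code>, which fixes the order for any plot drawn from the frame — a sketch with made-up data:

```python
import pandas as pd

cats = ['Monday', 'Tuesday', 'Wednesday', 'Thursday',
        'Friday', 'Saturday', 'Sunday']
df = pd.DataFrame({'Day': ['Friday', 'Monday', 'Wednesday']})
# Ordered categoricals sort (and therefore plot) in the declared order
df['Day'] = pd.Categorical(df['Day'], categories=cats, ordered=True)
df = df.sort_values('Day')
print(df['Day'].tolist())
```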
python|matplotlib|seaborn
1