Columns: Unnamed: 0 (int64, 0 – 1.91M); id (int64, 337 – 73.8M); title (string, lengths 10 – 150); question (string, lengths 21 – 64.2k); answer (string, lengths 19 – 59.4k); tags (string, lengths 5 – 112); score (int64, -10 – 17.3k)
1,901,600
63,060,835
Remove rows with the least frequent value in a column grouping by other columns in a Pandas Dataframe
<p>I have a <em>pandas dataframe</em> with <strong>inconsistent rows</strong>. In the example below <code>key1</code> and <code>key2</code> are two values which put together must be unique, so the couple <code>(key1 ,key2)</code> is the primary key and should appear once in dataframe, while <code>info</code> is a binary information of <code>(key1 ,key2)</code> and could be <code>T</code> or <code>F</code>. Unfortunately <code>(key1 ,key2)</code> are repeated in the dataframe and sometimes they have <code>info=T</code> and other times <code>info=F</code>, which is obviously an error.</p> <p>To remove repetitions I'd like to adopt this reasoning: I'd like to count how many times (for the same couple <code>(key1 ,key2)</code>) <code>info</code> is <code>T</code> and how many times <code>info</code> is <code>F</code> and</p> <ol> <li>if the frequencies are different (most of the time) <strong>keep only one of the rows that have the most frequent value</strong> between <code>T</code> and <code>F</code> with a function like <code>df.drop_duplicates(subset = [&quot;key1&quot;,&quot;key2&quot;] , keep = &quot;first&quot;)</code> in which <code>first</code> should be the row with most frequent value of <code>info</code>.</li> <li>If instead 50% of rows has <code>info=T</code> and 50% has <code>info=F</code>, I want to <strong>remove all of them</strong>, because I have no idea which is the right one with a function like <code>df.drop_duplicates(subset = [&quot;key1&quot;,&quot;key2&quot;] , keep = False)</code>.</li> </ol> <p>I don't know how to do this kind of filter because I want to keep 1 row if one case and 0 rows in the other, depending on the values of a specific column within groups of similar rows.</p> <p><strong>Desired behaviour</strong></p> <p>In:</p> <pre class="lang-sh prettyprint-override"><code> key1 key2 info 0 a1 a2 T 1 a1 a2 T #duplicated row of index 0 2 a1 a2 F #similar row of indexes 0 and 1 but inconsistent with info field 3 b1 b2 T 4 b1 b2 T 
#duplicated row of index 3 5 b1 b3 T #not duplicated since key2 is different from indexes 3 and 4 6 c1 c2 T 7 c1 c2 F #duplicated row of index 5 but inconsistent with info field </code></pre> <p>Out:</p> <pre class="lang-sh prettyprint-override"><code> key1 key2 info 0 a1 a2 T # for(a1,a2) T:2 and F:1 3 b1 b2 T # for(b1,b2) T:2 and F:0 5 b1 b3 T # for(b1,b3) T:1 and F:0 # no rows for (c1,c2) because T:1 and F:1 </code></pre> <p>Thank you</p>
<p><code>groupby</code> and use <code>pd.Series.mode</code> to get the modal value. <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.mode.html" rel="nofollow noreferrer"><code>pd.Series.mode</code></a> will return the modes in the case of ties, so this allows us to remove these cases with <code>drop_duplicates</code> as we expect only a single mode for each unique <code>['key1', 'key2']</code>.</p> <pre><code>import pandas as pd (df.groupby(['key1', 'key2'])['info'] .apply(pd.Series.mode) .reset_index() .drop_duplicates(['key1', 'key2'], keep=False) .drop(columns='level_2') ) # key1 key2 info #0 a1 a2 T #1 b1 b2 T #2 b1 b3 T </code></pre> <hr /> <p>The result of the <code>groupby</code> + <code>mode</code> is:</p> <pre><code>key1 key2 a1 a2 0 T b1 b2 0 T b3 0 T c1 c2 0 F # Tied mode so it gets 2 rows with the last 1 T # index level indicating the # of items tied for mode. </code></pre>
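A runnable version of the snippet above, with the question's sample data reconstructed as a frame (the `level_2` column name comes from `reset_index` on the unnamed inner mode index):

```python
import pandas as pd

# The question's sample data, reconstructed.
df = pd.DataFrame({
    "key1": ["a1", "a1", "a1", "b1", "b1", "b1", "c1", "c1"],
    "key2": ["a2", "a2", "a2", "b2", "b2", "b3", "c2", "c2"],
    "info": ["T", "T", "F", "T", "T", "T", "T", "F"],
})

out = (df.groupby(["key1", "key2"])["info"]
         .apply(pd.Series.mode)        # one row per modal value; a tie yields two rows
         .reset_index()
         .drop_duplicates(["key1", "key2"], keep=False)  # drop tied groups entirely
         .drop(columns="level_2"))
print(out)
```

The `(c1, c2)` group has a tied mode (`T:1`, `F:1`), so both of its mode rows are removed by `keep=False`, matching the desired output.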
python|pandas|dataframe|duplicates
2
1,901,601
63,253,271
How to apply top_k_categorical_accuracy over batch dimension in Keras
<p>I am making a custom metric in keras, based on <code>top_k_categorical_accuracy</code>. In my custom metric function I receive y_true and pred (two tensors) with 3 dimensions, having a shape of <strong>(batch_size, d2, d3)</strong>, but apparently <code>top_k_categorical_accuracy</code> expects a 2-d tensor.</p> <pre class="lang-python prettyprint-override"><code>tf.keras.metrics.top_k_categorical_accuracy(y_true, y_pred, k=2) </code></pre> <p>My question is how can I apply this top_k function across different batches?</p> <p>In the example below I would expect the output of the metric to be 1/2 (with k=2).</p> <p>This would be done by taking the <code>K.mean</code> of <code>top_k_categorical_accuracy(y_true[0], y_pred[0])</code> (1st batch gives <strong>2/3</strong>) and <code>top_k_categorical_accuracy(y_true[1], y_pred[1])</code> (2nd batch gives <strong>1/3</strong>). So the mean would be <strong>1/2</strong></p> <pre class="lang-python prettyprint-override"><code>y_true = [ [[0, 0, 1], [0, 1, 0], [1, 0, 0]], [[0, 0, 1], [0, 1, 0], [1, 0, 0]] ] y_pred = [ [[0.1, 0.7, 0.2], [0.05, 0.95, 0], [0.2,0.3,0.5]], [[0.7, 0.2, 0.1], [0.95, 0, 0.05], [0.3,0.2,0.5]] ] </code></pre>
<p>Since only the last dimension is actual class predictions, you can reshape the first two dimensions into one using K.reshape:</p> <pre><code>y_true = K.reshape(y_true, shape=(-1,3)) y_pred = K.reshape(y_pred, shape=(-1,3)) </code></pre> <p>Then the tensors will meet the API's shape requirements and produce an average score across batch*d1, which is also average across batch as you requested since each batch has the same number of d1.</p>
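TensorFlow isn't needed to sanity-check the reshape logic; below is a NumPy sketch of the same idea (the top-k test is reimplemented by hand here, not Keras' own function), run on the question's example:

```python
import numpy as np

def top_k_accuracy_3d(y_true, y_pred, k=2):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    n_classes = y_true.shape[-1]
    # Collapse (batch, d2, classes) -> (batch*d2, classes), as the answer suggests.
    flat_true = y_true.reshape(-1, n_classes).argmax(axis=1)
    flat_pred = y_pred.reshape(-1, n_classes)
    topk = np.argsort(flat_pred, axis=1)[:, -k:]   # indices of the k highest scores
    hits = [t in row for t, row in zip(flat_true, topk)]
    return float(np.mean(hits))

y_true = [[[0, 0, 1], [0, 1, 0], [1, 0, 0]],
          [[0, 0, 1], [0, 1, 0], [1, 0, 0]]]
y_pred = [[[0.1, 0.7, 0.2], [0.05, 0.95, 0], [0.2, 0.3, 0.5]],
          [[0.7, 0.2, 0.1], [0.95, 0, 0.05], [0.3, 0.2, 0.5]]]
```

Averaging over the flattened rows gives the same 1/2 the question expects from averaging the two per-batch scores, since every batch contributes the same number of rows.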
python|tensorflow|keras|tensorflow2.0|tf.keras
1
1,901,602
62,232,101
Login required in django
<p>I am developing an ecommerce website in Django. I have a view (addToCart) and I want to make sure the user is logged in before adding to the cart, so I put @login_required('login') above the view. But when I click login it shows an error (can't access the page).</p> <p>Note: the normal login works.</p>
<p>Please check the following:</p> <ol> <li>Add the login URL (<code>LOGIN_URL</code>) to your settings.</li> <li>Add a redirect URL on the <code>login_required</code> decorator.</li> <li>If you created a custom login view, make sure to check the <code>next</code> kwarg.</li> </ol>
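A minimal sketch of points 1 and 2, assuming a URL pattern named `login` exists. Note also that `login_required`'s first positional argument is the view function itself, so `@login_required('login')` hands the string where a view is expected; the URL belongs in the `login_url` keyword:

```python
# settings.py -- tell Django where the login page lives
LOGIN_URL = "login"          # a named URL pattern (assumed to exist)

# views.py
from django.contrib.auth.decorators import login_required

@login_required(login_url="login")   # not @login_required("login")
def add_to_cart(request):
    ...
```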
python|django|django-views
0
1,901,603
31,348,265
group by in Pandas DataFrame Python
<p>I'm new to Pandas and I'd like to know what I'm doing wrong in the following example.</p> <p>I found an example <a href="https://stackoverflow.com/questions/17995024/how-to-assign-a-name-to-the-a-size-column?lq=1">here</a> explaining how to get a data frame after applying a group by instead of a series. </p> <pre><code>df1 = pd.DataFrame( { "Name" : ["Alice", "Bob", "Mallory", "Mallory", "Bob" , "Mallory"] , "City" : ["Seattle", "Seattle", "Baires", "Caracas", "Baires", "Caracas"] }) df1['size'] = df1.groupby(['City']).transform(np.size) df1.dtypes #Why is size an object? shouldn't it be an integer? df1[['size']] = df1[['size']].astype(int) #convert to integer df1['avera'] = df1.groupby(['City'])['size'].transform(np.mean) #group by again </code></pre> <p>Basically, I want to apply the same transformation to a huge data set I'm working on now, but I'm getting an error message:</p> <pre><code>budgetbid['meanpb']=budgetbid.groupby(['jobid'])['probudget'].transform(np.mean) #can't upload this data for the sake of explanation ValueError: Length mismatch: Expected axis has 5564 elements, new values have 78421 elements </code></pre> <p>Thus, my questions are:</p> <ol> <li>How can I overcome this error?</li> <li>Why do I get an object type when apply group by with size instead of an integer type?</li> <li><p>Let us say that I want to get a data frame from <code>df1</code> with unique cities and their respective <code>count(*)</code>. I know I can do something like</p> <p>newdf=df1.groupby(['City']).size()</p></li> </ol> <p>Unfortunately, this is a series, but I want a data frame with two columns, <code>City</code> and the brand new variable, let's say <code>countcity</code>. How can I get a data frame from a group-by operation like the one in this example? </p> <ol start="4"> <li>Could you give me an example of a <code>select distinct</code> equivalence here in pandas? </li> </ol>
<p>Question 2: <em>Why does <code>df1['size']</code> have dtype <code>object</code>?</em></p> <p><code>groupby/transform</code> returns a DataFrame with a <a href="https://github.com/pydata/pandas/blob/master/pandas/core/groupby.py#L2463" rel="nofollow">dtype for each column which is compatible</a> with both the original column's dtype and the result of the transformation. Since <code>Name</code> has dtype object, </p> <pre><code>df1.groupby(['City']).transform(np.size) </code></pre> <p>is converted to dtype object as well.</p> <p>I'm not sure why <code>transform</code> is coded to work like this; there might be some usecase which demands this to ensure correctness in some sense.</p> <hr> <p>Questions 1 &amp; 3: <em>Why do I get <code>ValueError: Length mismatch</code> and how can I avoid it</em></p> <p>There are probably NaNs in the column being grouped. For example, suppose we change one of the values in <code>City</code> to <code>NaN</code>:</p> <pre><code>df2 = pd.DataFrame( { "Name" : ["Alice", "Bob", "Mallory", "Mallory", "Bob" , "Mallory"] , "City" : [np.nan, "Seattle", "Baires", "Caracas", "Baires", "Caracas"] }) grouped = df2.groupby(['City']) </code></pre> <p>then </p> <pre><code>In [86]: df2.groupby(['City']).transform(np.size) ValueError: Length mismatch: Expected axis has 5 elements, new values have 6 elements </code></pre> <p>Groupby does not group the NaNs:</p> <pre><code>In [88]: [city for city, grp in df2.groupby(['City'])] Out[88]: ['Baires', 'Caracas', 'Seattle'] </code></pre> <p>To work around this, use <code>groupby/agg</code>:</p> <pre><code>countcity = grouped.agg('count').rename(columns={'Name':'countcity'}) # countcity # City # Baires 2 # Caracas 2 # Seattle 1 </code></pre> <p>and then merge the result back into <code>df2</code>:</p> <pre><code>result = pd.merge(df2, countcity, left_on=['City'], right_index=True, how='outer') print(result) </code></pre> <p>yields</p> <pre><code> City Name countcity 0 NaN Alice NaN 1 Seattle Bob 1 2 Baires 
Mallory 2 4 Baires Bob 2 3 Caracas Mallory 2 5 Caracas Mallory 2 </code></pre> <hr> <p>Question 4: Do you mean <em>what is the Pandas equivalent of the SQL <code>select distinct</code> statement?</em></p> <p>If so, perhaps you are looking for <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unique.html" rel="nofollow">Series.unique</a> or perhaps iterate through the keys in the Groupby object, as was done in</p> <pre><code>[city for city, grp in df2.groupby(['City'])] </code></pre>
python|pandas|dataframe
5
1,901,604
16,019,624
How can I cluster a list of lists in Python based on string indices? Need insight
<p>I have a list of lists in this fashion. </p> <pre><code>[['Introduction', '0 11 0 1 0'], ['Floating', '0 11 33 1 0'], ['point', '0 11 33 1 1'], ['numbers', '0 11 33 1 2'], ['IEEE', '0 11 58 1 0'], ['Standard', '0 11 58 1 1'], ['754', '0 11 58 1 2']] </code></pre> <p>I want to cluster/group the words in the list based on its string indices. The grouping is based on the first 3 numbers of the string index. What would be the best way to tackle this problem. I am thinking of using regular expressions. Is there a direct and easy way to this grouping? </p> <p><strong>Expected Output:</strong></p> <pre><code>Introduction 0 11 0 Floating point numbers 0 11 33 IEEE Standard 754 0 11 58 </code></pre>
<p>maybe using <code>itertools.groupby</code>?</p> <pre><code>from itertools import groupby def key(item): return [int(x) for x in item[1].split()[:3]] master_lst = [['Introduction', '0 11 0 1 0'], ['Floating', '0 11 33 1 0'], ['point', '0 11 33 1 1'], ['numbers', '0 11 33 1 2'], ['IEEE', '0 11 58 1 0'], ['Standard', '0 11 58 1 1'], ['754', '0 11 58 1 2']] for k,v in groupby(master_lst,key=key): print ' '.join(x[0] for x in v) +' ' + ' '.join(str(x) for x in k) </code></pre> <p>Results in:</p> <pre><code>Introduction 0 11 0 Floating point numbers 0 11 33 IEEE Standard 754 0 11 58 </code></pre>
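A Python 3 version of the same approach. One caveat worth noting: `itertools.groupby` only merges *consecutive* items with equal keys, so the list must already be ordered by the first three index numbers (as the question's data is):

```python
from itertools import groupby

master_lst = [['Introduction', '0 11 0 1 0'], ['Floating', '0 11 33 1 0'],
              ['point', '0 11 33 1 1'], ['numbers', '0 11 33 1 2'],
              ['IEEE', '0 11 58 1 0'], ['Standard', '0 11 58 1 1'],
              ['754', '0 11 58 1 2']]

def key(item):
    return item[1].split()[:3]   # group on the first three numbers

lines = [' '.join(word for word, _ in grp) + ' ' + ' '.join(k)
         for k, grp in groupby(master_lst, key=key)]
for line in lines:
    print(line)
```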
python|regex|string|list|python-2.7
5
1,901,605
59,507,256
Adding multiple columns to pandas dataframe with np.where clause
<p>I am trying to add multiple columns to a dataframe with numpy.where() in an ETL logic. </p> <p>This is my df:</p> <p><a href="https://i.stack.imgur.com/QMVND.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QMVND.png" alt="enter image description here"></a></p> <p>I am trying to get my df as:</p> <p><a href="https://i.stack.imgur.com/hasMs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hasMs.png" alt="enter image description here"></a></p> <p>And the code is:</p> <pre><code>current_time = pd.Timestamp.utcnow().strftime('%Y-%m-%d %H:%M:%S') df = pd.concat( [ df, pd.DataFrame( [ np.where( # When old hash code is available and new hash code is not available. 0 -- N ( df['new_hash'].isna() &amp; ~df['old_hash'].isna() ) | # When hash codes are available and matched. 3.1 -- 'N' ( ~df['new_hash'].isna() &amp; ~df['old_hash'].isna() &amp; ~(df['new_hash'].ne(df['old_hash'])) ), ['N', df['cr_date'], df['up_date']], np.where( # When new hash code is available and old hash code is not available. 1 -- Y ( ~df['new_hash'].isna() &amp; df['old_hash'].isna() ), ['Y', current_time, current_time], np.where( # When hash codes are available and matched. 3.2 -- 'Y' ( ~df['new_hash'].isna() &amp; ~df['old_hash'].isna() &amp; df['new_hash'].ne(df['old_hash']) ), ['Y', df['cr_date'], current_time], ['N', df['cr_date'], df['up_date']] ) ) ) ], index=df.index, columns=['is_changed', 'cr_date_new', 'up_date_new'] ) ], axis=1 ) </code></pre> <p>Tried above code with <code>df.join()</code> instead of <code>pd.concat()</code>. Still giving me below specified <code>ValueError</code></p> <p>I am able add one column at a time. and the example is:</p> <pre><code>df['is_changed'] = ( np.where( # When old hash code is available and new hash code is not available. 0 -- N ( df['new_hash'].isna() &amp; ~df['old_hash'].isna() ) | # When hash codes are available and matched. 
3.1 -- 'N' ( ~df['new_hash'].isna() &amp; ~df['old_hash'].isna() &amp; ~(df['new_hash'].ne(df['old_hash'])) ), 'N', np.where( # When new hash code is available and old hash code is not available. 1 -- Y ( ~df['new_hash'].isna() &amp; df['old_hash'].isna() ), 'Y', np.where( # When hash codes are available and matched. 3.2 -- 'Y' ( ~df['new_hash'].isna() &amp; ~df['old_hash'].isna() &amp; df['new_hash'].ne(df['old_hash']) ), 'Y', 'N' ) ) ) ) </code></pre> <p>But getting error (<code>ValueError: operands could not be broadcast together with shapes (66,) (3,) (3,)</code>) with multiple columns.</p> <p>what is the wrong with adding multiple columns? Can someone help me in this?</p>
<p>In <code>np.where(cond,A,B)</code> Python evaluates each of <code>cond</code>, <code>A</code> and <code>B</code>, and then passes them to the <code>where</code> function. <code>where</code> then <code>broadcasts</code> the inputs against each other, and performs the element-wise selection. You appear to have 3 nested <code>where</code>. I'm guessing the error occurs in the innermost one, since it will be evaluated first (<strong>I wouldn't have to guess if you provided the error traceback.</strong>)</p> <pre><code> np.where( # When hash codes are available and matched. 3.2 -- 'Y' ( ~df['new_hash'].isna() &amp; ~df['old_hash'].isna() &amp; df['new_hash'].ne(df['old_hash']) ), ['Y', df['cr_date'], current_time], ['N', df['cr_date'], df['up_date']] ) </code></pre> <p>The <code>cond</code> part is the first parenthesized logical-and expression.</p> <p>The <code>A</code> is the 3-element list, and <code>B</code> the next list.</p> <p>Assuming there are 66 rows, <code>cond</code> will have (66,) shape. </p> <p><code>np.array(['Y', df['cr_date'], current_time])</code> is probably a (3,) shape object dtype array, since the inputs consist of a string, a Series, and a timestamp string.</p> <p>That accounts for the 3 shapes in the error message: <code>shapes (66,) (3,) (3,)</code></p> <p>If you set only one column at a time, the expression is <code>np.where(cond, 'Y', 'N')</code>, or <code>np.where(cond, Series1, Series2)</code>.</p> <p>If you don't understand what I (or the error) mean by <code>broadcasting</code>, you may need to learn more about <code>numpy</code> (which underlies <code>pandas</code>).</p>
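The shape mismatch is easy to reproduce in miniature: scalar choices broadcast against the condition, while a 3-element list of per-column choices cannot.

```python
import numpy as np

cond = np.array([True, False, True, True])   # stands in for the (66,) condition

ok = np.where(cond, 'Y', 'N')                # scalars broadcast to cond's shape

try:
    # a 3-element choice list cannot broadcast against a length-4 condition
    np.where(cond, ['Y', 'a', 'b'], ['N', 'c', 'd'])
except ValueError as e:
    err = str(e)
print(ok, '|', err)
```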
python|pandas|numpy
1
1,901,606
49,087,990
Python - Request being blocked by Cloudflare
<p>I am trying to log into a website. When I look at print(g.text) I am not getting back the web page I expect but instead a cloudflare page that says 'Checking your browser before accessing'</p> <pre><code>import requests import time s = requests.Session() s.get('https://www.off---white.com/en/GB/') headers = {'Referer': 'https://www.off---white.com/en/GB/login'} payload = { 'utf8':'✓', 'authenticity_token':'', 'spree_user[email]': 'EMAIL@gmail.com', 'spree_user[password]': 'PASSWORD', 'spree_user[remember_me]': '0', 'commit': 'Login' } r = s.post('https://www.off---white.com/en/GB/login', data=payload, headers=headers) print(r.status_code) g = s.get('https://www.off---white.com/en/GB/account') print(g.status_code) print(g.text) </code></pre> <p>Why is this occurring when I have set the session? </p>
<p>You might want to try this:</p> <pre><code>import cloudscraper

scraper = cloudscraper.create_scraper()  # returns a CloudScraper instance
# Or: scraper = cloudscraper.CloudScraper()  # CloudScraper inherits from requests.Session
print(scraper.get("http://somesite.com").text)  # =&gt; "&lt;!DOCTYPE html&gt;&lt;html&gt;&lt;head&gt;..."
</code></pre> <p>It does not require a Node.js dependency. All credits go to <a href="https://pypi.org/project/cloudscraper/" rel="noreferrer">this pypi page</a></p>
python|python-3.x
39
1,901,607
70,998,847
Transfer Learning/Fine Tuning - how to keep BatchNormalization in inference mode?
<p>In the following tutorial <a href="https://www.tensorflow.org/guide/keras/transfer_learning" rel="nofollow noreferrer">Transfer learning and fine-tuning by TensorFlow</a> it is explained that that when unfreezing a model that contains BatchNormalization (BN) layers, these should be kept in inference mode by passing <code>training=False</code> when calling the base model.</p> <blockquote> <p>[…]</p> <h3>Important notes about <code>BatchNormalization</code> layer</h3> <p>Many image models contain <code>BatchNormalization</code> layers. That layer is a special case on every imaginable count. Here are a few things to keep in mind.</p> <ul> <li><code>BatchNormalization</code> contains 2 non-trainable weights that get updated during training. These are the variables tracking the mean and variance of the inputs.</li> <li>When you set <code>bn_layer.trainable = False</code>, the <code>BatchNormalization</code> layer will run in inference mode, and will not update its mean &amp; variance statistics. This is not the case for other layers in general, as weight trainability &amp; inference/training modes are two orthogonal concepts. But the two are tied in the case of the <code>BatchNormalization</code> layer.</li> <li>When you unfreeze a model that contains <code>BatchNormalization</code> layers in order to do fine-tuning, you should keep the <code>BatchNormalization</code> layers in inference mode by passing <code>training=False</code> when calling the base model. 
Otherwise the updates applied to the non-trainable weights will suddenly destroy what the model has learned.</li> </ul> <p>[…]</p> </blockquote> <p>In the examples they pass <code>training=False</code> when calling the base model, but later they set <code>base_model.trainable=True</code>, which for my understanding is the opposite of inference mode, because the BN layers will be set to trainable as well.</p> <p>For my understanding there would have to be <code>0 trainable_weights</code> and <code>4 non_trainable_weights</code> for inference mode, which would be identical to when setting the <code>bn_layer.trainable=False</code>, which they stated would be the case for running the <code>bn_layer</code> in inference mode.</p> <p>I checked the number of <code>trainable_weights</code> and number of <code>non_trainable_weights</code> and they are both <code>2</code>.</p> <p>I am confused by the tutorial, how can I really be sure BN layer are in inference mode when doing fine tuning on a model?</p> <p>Does setting <code>training=False</code> on the model overwrite the behavior of <code>bn_layer.trainable=True</code>? So that even if the <code>trainable_weights</code> get listed with <code>2</code> these would not get updated during training (fine tuning)?</p> <hr /> <p>Update:</p> <p>Here I found some further information: <a href="https://keras.io/api/layers/normalization_layers/batch_normalization/" rel="nofollow noreferrer"><code>BatchNormalization</code> layer - on keras.io</a>.</p> <blockquote> <p>[...]</p> <h3>About setting <code>layer.trainable = False</code> on a <code>BatchNormalization</code> layer:</h3> <p>The meaning of setting <code>layer.trainable = False</code> is to freeze the layer, i.e. 
its internal state will not change during training: its trainable weights will not be updated during <code>fit()</code> or <code>train_on_batch()</code>, and its state updates will not be run.</p> <p>Usually, this does not necessarily mean that the layer is run in inference mode (which is normally controlled by the <code>training</code> argument that can be passed when calling a layer). &quot;Frozen state&quot; and &quot;inference mode&quot; are two separate concepts.</p> <p>However, in the case of the <code>BatchNormalization</code> layer, <strong>setting</strong> <code>trainable = False</code> <strong>on the layer means that the layer will be subsequently run in inference mode</strong> (meaning that it will use the moving mean and the moving variance to normalize the current batch, rather than using the mean and variance of the current batch).</p> <p>This behavior has been introduced in TensorFlow 2.0, in order to enable layer.trainable = False to produce the most commonly expected behavior in the convnet fine-tuning use case.</p> <p>Note that: - Setting <code>trainable</code> on an model containing other layers will recursively set the <code>trainable</code> value of all inner layers. - If the value of the <code>trainable</code> attribute is changed after calling <code>compile()</code> on a model, the new value doesn't take effect for this model until <code>compile()</code> is called again.</p> </blockquote> <p>Question:</p> <ol> <li>In case I want to fine tune the whole model, so I am going to unfreeze the <code>base_model.trainable = True</code>, would I have to manually set the BN layers to <code>bn_layer.trainable = False</code> in order to keep them in inference mode?</li> <li>What does happen when with the call of the <code>base_model</code> passing <code>training=False</code> and additionally setting <code>base_model.trainable=True</code>? Do layers like <code>BatchNormalization</code> and <code>Dropout</code> stay in inference mode?</li> </ol>
<p>After reading the documentation and having a look at the source code of TensorFlow's implementations of <code>tf.keras.layers.Layer</code>, <code>tf.keras.layers.Dense</code>, and <code>tf.keras.layers.BatchNormalization</code>, I came to the following understanding.</p> <p>If <code>training = False</code> is passed when calling the layer or the model/base model, it will run in inference mode. This has nothing to do with the attribute <code>trainable</code>, which means something different. It would probably cause less misunderstanding if the parameter had been called <code>training_mode</code> instead of <code>training</code>. I would have preferred defining it the other way round and calling it <code>inference_mode</code>.</p> <p>When doing transfer learning or fine-tuning, <code>training = False</code> should be passed when calling the base model itself. As far as I have seen, this only affects layers like <code>tf.keras.layers.Dropout</code> and <code>tf.keras.layers.BatchNormalization</code> and has no effect on the other layers. Running in inference mode via <code>training = False</code> will result in:</p> <ul> <li><code>tf.keras.layers.Dropout</code> not applying the dropout rate at all. As <code>tf.keras.layers.Dropout</code> has no trainable weights, setting the attribute <code>trainable = False</code> has no effect at all on this layer.</li> <li><code>tf.keras.layers.BatchNormalization</code> normalizing its inputs using the mean and variance of the moving statistics it learned during training.</li> </ul> <p>The attribute <code>trainable</code> only activates or deactivates updating of the trainable weights of a layer.</p>
tensorflow|keras|tensorflow2.0|tf.keras
2
1,901,608
70,867,276
Combine 2 columns end to end pandas
<p>I'm trying to combine 2 columns end to end from the same data frame into a new data frame. My columns are</p> <p>a a1 b b1</p> <p>1 2 3 4</p> <p>5 6 7 8</p> <p>My expected output:</p> <p>a b</p> <p>1 3</p> <p>5 7</p> <p>2 4</p> <p>6 8</p> <p>I tried</p> <pre><code>import pandas as pd d1 = [d[&quot;a&quot;], d['b']] d2 = [d[&quot;a1&quot;], d['b2']] d3= pd.DataFrame({&quot;a&quot;:[],&quot;b&quot;:[]}) d3=pd.concat(d1, axis=1, ignore_index=True) d3=pd.concat(d2, axis=1, ignore_index=True) </code></pre> <p>I'm only getting series objects as a result.</p> <p>Note: Sorry if anything is confusing, I'm new in the coding Thank you!</p>
<p>Sure the below can be simplified further, but this works for now.</p> <pre><code>#import pandas import pandas as pd #recreate dataframe df = pd.DataFrame({'a':[1,5], 'a1':[2,6], 'b':[3,7], 'b1':[4,8]}) #create expected columns a = df['a'].append(df['a1'], ignore_index=True) b = df['b'].append(df['b1'], ignore_index=True) #concatenate on columns and rename columns df2 = pd.concat([a,b], axis = 1) df2.columns = ['a','b'] df2 </code></pre>
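Worth noting: `Series.append` was removed in pandas 2.0, so on current pandas the same result comes from `pd.concat`; a sketch:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 5], 'a1': [2, 6], 'b': [3, 7], 'b1': [4, 8]})

# stack each column pair end to end (pd.concat replaces the removed Series.append)
a = pd.concat([df['a'], df['a1']], ignore_index=True)
b = pd.concat([df['b'], df['b1']], ignore_index=True)

df2 = pd.concat([a, b], axis=1)
df2.columns = ['a', 'b']
print(df2)
```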
python-3.x|pandas
2
1,901,609
60,166,781
how to convert C++ tesseract-ocr code to Python?
<p>I want to convert the C++ version <a href="https://tesseract-ocr.github.io/tessdoc/APIExample" rel="nofollow noreferrer">Result iterator example</a> in tesseract-ocr doc to Python.</p> <pre><code> Pix *image = pixRead("/usr/src/tesseract/testing/phototest.tif"); tesseract::TessBaseAPI *api = new tesseract::TessBaseAPI(); api-&gt;Init(NULL, "eng"); api-&gt;SetImage(image); api-&gt;Recognize(0); tesseract::ResultIterator* ri = api-&gt;GetIterator(); tesseract::PageIteratorLevel level = tesseract::RIL_WORD; if (ri != 0) { do { const char* word = ri-&gt;GetUTF8Text(level); float conf = ri-&gt;Confidence(level); int x1, y1, x2, y2; ri-&gt;BoundingBox(level, &amp;x1, &amp;y1, &amp;x2, &amp;y2); printf("word: '%s'; \tconf: %.2f; BoundingBox: %d,%d,%d,%d;\n", word, conf, x1, y1, x2, y2); delete[] word; } while (ri-&gt;Next(level)); } </code></pre> <p>What I could do till right now is the following :</p> <pre><code>import ctypes liblept = ctypes.cdll.LoadLibrary('liblept-5.dll') pix = liblept.pixRead('11.png'.encode()) print(pix) tesseractLib = ctypes.cdll.LoadLibrary(r'C:\Program Files\tesseract-OCR\libtesseract-4.dll') tesseractHandle = tesseractLib.TessBaseAPICreate() tesseractLib.TessBaseAPIInit3(tesseractHandle, '.', 'eng') tesseractLib.TessBaseAPISetImage2(tesseractHandle, pix) #tesseractLib.TessBaseAPIRecognize(tesseractHandle, tesseractLib.TessMonitorCreate()) </code></pre> <p>I cannot convert the C++ <code>api-&gt;Recognize(0)</code> to Python(what I have tried is in the last line(commented) of the code, but it is wrong), I am not experienced with C++, so I cannot go on anymore, anyone can help with the conversion ? 
The APIs:</p> <ul> <li><p>From tess4j: <a href="http://tess4j.sourceforge.net/docs/docs-3.0/net/sourceforge/tess4j/TessAPI1.html#TessBaseAPIAnalyseLayout-net.sourceforge.tess4j.ITessAPI.TessBaseAPI-" rel="nofollow noreferrer">http://tess4j.sourceforge.net/docs/docs-3.0/net/sourceforge/tess4j/TessAPI1.html#TessBaseAPIAnalyseLayout-net.sourceforge.tess4j.ITessAPI.TessBaseAPI-</a></p></li> <li><p>From the source code: <a href="https://github.com/tesseract-ocr/tesseract/blob/420cbac876b06beeee271d9f44ba800d943a8a83/include/tesseract/capi.h" rel="nofollow noreferrer">https://github.com/tesseract-ocr/tesseract/blob/420cbac876b06beeee271d9f44ba800d943a8a83/include/tesseract/capi.h</a></p></li> </ul> <p>I guess I also have some difficulty on the subsequent conversion , for example , I don't know how to denote <code>tesseract::RIL_WORD</code> in Python, so it would be kind to provide me a full version of the conversion , thanks ! </p> <p>I know there is a project named <a href="https://github.com/sirfz/tesserocr" rel="nofollow noreferrer">tesserocr</a> can save me from the conversion , but the problem with the project is they don't provide an uptodate windows Python wheels, which is the main reason for me to do the conversion . </p>
<p>I think the problem is that <code>api-&gt;Recognize()</code> expects a pointer as first argument. <a href="https://github.com/tesseract-ocr/tesseract/blob/420cbac876b06beeee271d9f44ba800d943a8a83/include/tesseract/baseapi.h#L491" rel="nofollow noreferrer">They mistakenly put a <code>0</code> in their example but it should be <code>nullptr</code></a>. <code>0</code> and <code>nullptr</code> both have the same value but on 64bits systems they don't have the same size (usually ; I assume on some weird non-x86 systems this may not be true either).</p> <p>Their example still works with a C++ compiler because the compiler is aware that the function expects a pointer (64bits) and fix it silently.</p> <p>In your example, it seems you haven't specified the exact <a href="https://github.com/tesseract-ocr/tesseract/blob/420cbac876b06beeee271d9f44ba800d943a8a83/include/tesseract/capi.h#L334" rel="nofollow noreferrer">prototype of <code>TessBaseAPIRecognize()</code></a> to ctypes. So ctypes can't know a pointer (64 bits) is expected by this function. 
Instead it assumes that this function expects an integer (32 bits) --> it crashes.</p> <p>My suggestions:</p> <ol> <li><a href="https://gitlab.gnome.org/World/OpenPaperwork/pyocr/blob/49babfa2af93e490bcc132edb5614b5e87a14cf1/src/pyocr/libtesseract/tesseract_raw.py#L481" rel="nofollow noreferrer">Use <code>ctypes.c_void_p(None)</code> instead of 0</a></li> <li>If you intend to use that in production, <a href="https://gitlab.gnome.org/World/OpenPaperwork/pyocr/blob/49babfa2af93e490bcc132edb5614b5e87a14cf1/src/pyocr/libtesseract/tesseract_raw.py#L242" rel="nofollow noreferrer">specify to ctypes all the function prototypes</a></li> <li>Be careful with the examples you look at: Those examples use <a href="https://github.com/tesseract-ocr/tesseract/blob/420cbac876b06beeee271d9f44ba800d943a8a83/include/tesseract/baseapi.h" rel="nofollow noreferrer">Tesseract base API</a> (C++ API) whereas if you want to use libtesseract with Python + ctypes, you have to use <a href="https://github.com/tesseract-ocr/tesseract/blob/420cbac876b06beeee271d9f44ba800d943a8a83/include/tesseract/capi.h" rel="nofollow noreferrer">Tesseract C API</a>. Those 2 APIs are very similar but may not be identical.</li> </ol> <p>If you need further help, you can have a look at how things are done in <a href="https://gitlab.gnome.org/World/OpenPaperwork/pyocr/tree/49babfa2af93e490bcc132edb5614b5e87a14cf1/src/pyocr/libtesseract" rel="nofollow noreferrer">PyOCR</a>. If you decide to use <a href="https://gitlab.gnome.org/World/OpenPaperwork/pyocr" rel="nofollow noreferrer">PyOCR</a> in your project, just beware that the license of PyOCR is GPLv3+, which implies some restrictions.</p>
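The fix the answer suggests (declaring prototypes so ctypes stops assuming plain ints) looks like this in general. The demo below uses libc's `strlen`, since the Tesseract DLL can't be assumed present (POSIX-only; the commented Tesseract lines follow `capi.h` but are untested here):

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the prototype: without this, ctypes assumes int arguments/returns,
# which truncates 64-bit pointers; exactly the failure mode in the question.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t
n = libc.strlen(b"tesseract")

# For the question's call, per capi.h (untested sketch):
# tesseractLib.TessBaseAPIRecognize.argtypes = [ctypes.c_void_p, ctypes.c_void_p]
# tesseractLib.TessBaseAPIRecognize.restype = ctypes.c_int
# tesseractLib.TessBaseAPIRecognize(tesseractHandle, None)
```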
python|c++|tesseract|python-tesseract
0
1,901,610
2,883,920
Why is i++++++++i valid in python?
<p>I "accidentally" came across this weird but valid syntax</p> <pre><code>i=3
print i+++i    #outputs 6
print i+++++i  #outputs 6
print i+-+i    #outputs 0
print i+--+i   #outputs 6
</code></pre> <p>(for every even number of minus signs it outputs 6, otherwise 0. Why?)</p> <p>Does this do anything useful?</p> <p><strong>Update (don't take it the wrong way, I love Python)</strong>: One of Python's principles says "There should be one-- and preferably only one --obvious way to do it." Yet there seem to be infinitely many ways to write i+1.</p>
<p>Since Python doesn't have C-style ++ or -- operators, one is left to assume that you're negating or positivating(?) the value that follows.</p> <p>E.g. what would you expect <code>i + +5</code> to be?</p> <pre><code>i=3 print i + +(+i) #outputs 6 print i + +(+(+(+i))) #outputs 6 print i + -(+i) #outputs 0 print i + -(-(+i)) #outputs 6 </code></pre> <p>Notably, from the <a href="http://docs.python.org/reference/grammar.html" rel="nofollow noreferrer">Python Grammar Specification</a>, you'll see the line:</p> <pre><code>factor: ('+'|'-'|'~') factor | power </code></pre> <p>Which means that a factor in an expression can be a factor preceded by <code>+</code>, <code>-</code>, or <code>~</code>. I.e. it's recursive, so if <code>5</code> is a factor (which it is because factor->power->NUMBER), then <code>-5</code> is a factor and so are <code>--5</code> and <code>--------5</code>.</p>
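The grammar rule quoted above can be verified from Python itself; a small self-contained check (written for Python 3, so the original print statements become plain expressions) using the stdlib <code>ast</code> module:

```python
import ast

i = 3
# each extra +/- is a unary operator applied to the right-hand operand
assert i + +(+i) == 6
assert i + -(+i) == 0
assert i + -(-i) == 6          # an even number of minuses cancels out

# the parser agrees: "i+++i" is the binary + of i and the recursive factor +(+i)
tree = ast.parse("i+++i", mode="eval").body
assert isinstance(tree, ast.BinOp) and isinstance(tree.op, ast.Add)
assert isinstance(tree.right, ast.UnaryOp)   # the unary +(+i) part
```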
python
28
1,901,611
30,435,906
CKAN - ckanext-scheming - add dataset button for each schema
<p>I've been following the instructions and the examples to configure the ckanext-scheming extension (<a href="https://github.com/open-data/ckanext-scheming" rel="nofollow">ckanext-scheming</a>).</p> <p>Now I can access the different schemas I've configured by URL (as the example shows). At this point, I wanted to put these links on the main Dataset page so that the schema of a new dataset can be chosen before it is created.</p> <p>Does anybody know how I can get an "Add new dataset" button in the main window for each schema I have defined? Does anyone know if this is something already implemented and configurable in this extension?</p> <p>Thanks.</p>
<p>The extension does not cover this yet, but the template where the 'Add Dataset' button is defined is here:</p> <p><a href="https://github.com/ckan/ckan/blob/master/ckan/templates/package/search.html#L16" rel="nofollow">https://github.com/ckan/ckan/blob/master/ckan/templates/package/search.html#L16</a></p> <p>You could play with that to add more buttons for different dataset types</p>
python|ckan
1
1,901,612
30,451,774
How to match two text files, find matches and replace with original content?
<p>Basically I have 2 text files. </p> <p><strong>Text file A: (repeated strings)</strong></p> <pre><code>hg17_chr2_74388709_74389 hg17_chr5_137023651_1370 hg17_chr7_137880501_1378 hg17_chr5_137023651_1370 </code></pre> <p><strong>Text file B:</strong></p> <pre><code>hg17_chrX_52804801_52805856 hg17_chr15_79056833_79057564 hg17_chr2_74388709_74389559 hg17_chr1_120098891_120099441 hg17_chr5_137023651_137024301 hg17_chr11_85997073_85997627 hg17_chr7_137880501_137881251 </code></pre> <p>File A was trimmed by a tool, so the first 24 characters of each string match exactly between both files. How can I match both files and output the result to a new file with the desired content:</p> <pre><code>hg17_chr2_74388709_74389559 hg17_chr5_137023651_137024301 hg17_chr7_137880501_137881251 hg17_chr5_137023651_137024301 </code></pre>
<p>Easy solution with only opening the files once:</p> <pre><code>with open('file_a','r') as fa: # open file a --&gt; read the files into lists list_a = fa.read().splitlines() with open('file_b','r') as fb: # open file b --&gt; read the files into lists list_b = fb.read().splitlines() # get element in list_b if list_a contain the element(only first 24 characters) match_list = [n for n in list_b if n[:24] in list_a] with open('file_c','w+') as fc: # write the matching list to the new file fc.write('\n'.join(match_list)) </code></pre>
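One caveat: the list comprehension above returns matches in file B's order and collapses file A's repeated entry. If the output should follow file A, duplicates included (as in the desired result), a dict keyed on the 24-character prefix does it. A minimal in-memory sketch using the question's data, assuming the first 24 characters are unique within file B:

```python
list_a = ["hg17_chr2_74388709_74389", "hg17_chr5_137023651_1370",
          "hg17_chr7_137880501_1378", "hg17_chr5_137023651_1370"]
list_b = ["hg17_chrX_52804801_52805856", "hg17_chr15_79056833_79057564",
          "hg17_chr2_74388709_74389559", "hg17_chr1_120098891_120099441",
          "hg17_chr5_137023651_137024301", "hg17_chr11_85997073_85997627",
          "hg17_chr7_137880501_137881251"]

full = {line[:24]: line for line in list_b}   # prefix -> full string, O(1) lookups
result = [full[a] for a in list_a if a in full]
print(result)
```

`result` now matches the desired output line for line, including the repeated chr5 entry.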
python|regex
1
1,901,613
66,957,404
Save to excel with Python and Pandas every X loops
<p>I am learning to scrape and while I was testing one or two pages at a time, I was able to save the scraped data to excel with pandas at the end of all the loops. However, now that I am testing with 50 pages, there is a risk an error will be found before the final save, so I would like to periodically save every 10 loops, however I'm not sure what additional code I would need to inject into my project.</p> <p>I have tried moving the &quot;save&quot; code to the end of each loops, but it appears to create too many files (and each file seems to have the cumulative data whereas I would like just the incremental changes - or those changes that happened since the last save - to be saved). Code as follows:</p> <p>This is where the loops starts&quot;</p> <pre><code> #loop through the dictionaries to populate url for province, cityValues in provinceDictionary.items(): for city, code in cityValues.items(): for category, categoryValues in businessCategoryDictionary.items(): for catname, catcode in categoryValues.items(): for page in pageNumbers: url = (baseURL.format(province, city, catname, code, catcode)) #Get the contents of the page we're looking at by requesting the URL results = requests.get((url) + str(page) + &quot;.html&quot;, headers=headers) print('now processing page ' + str(results.url)) #parse html content soup = BeautifulSoup(results.text, &quot;html.parser&quot;) #Grab the container that holds the company info companies_div = soup.find_all('div', {'id': re.compile('result-id-.*')}) #control the speed of the loop sleep(randint(2, 10)) for x in companies_div: name = x.h2.a.text print(name) names.append(name) #save to excel after all loops completed #eliminates truncation in pd dataframe pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) pd.set_option('display.max_colwidth', None) #ininitalize pd dataframe companies = pd.DataFrame({ 'Name': names, 'Address': addresses, 'Province': provinces, 'Postal Code': postalCodes, 
'Category': categories, 'URLs': urls }) companies.to_excel('test_' + str(int(time.time())) + '.xlsx', index=False) </code></pre> <p>Any help is greatly appreciated.</p>
<p>I was able to solve this by creating a variable called <code>loopCount</code>, which tracks how many iterations have run. Once the loop hits the desired number (10 in this case), it executes the save procedure and then resets the counter to 1.</p> <pre><code>loopCount = 1 #loop through the dictionaries to populate url for province, cityValues in provinceDictionary.items(): for city, code in cityValues.items(): for category, categoryValues in businessCategoryDictionary.items(): for catname, catcode in categoryValues.items(): for page in pageNumbers: url = (baseURL.format(province, city, catname, code, catcode)) #Get the contents of the page we're looking at by requesting the URL results = requests.get((url) + str(page) + &quot;.html&quot;, headers=headers) print('now processing page ' + str(results.url)) #parse html content soup = BeautifulSoup(results.text, &quot;html.parser&quot;) #Grab the container that holds the company info companies_div = soup.find_all('div', {'id': re.compile('result-id-.*')}) #control the speed of the loop sleep(randint(2, 10)) #increase the loop count each time loopCount +=1 print('currently on loop ' + str(loopCount)) # every 10 loops, save if loopCount == 10: #eliminates truncation in pd dataframe pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) pd.set_option('display.max_colwidth', None) #initialize pd dataframe companies = pd.DataFrame({ 'Name': names }) companies.to_csv('results_' + str(int(time.time())) + '.csv', index=False) #reset loop count to 1 after saving loopCount = 1 print('currently on loop ' + str(loopCount)) </code></pre>
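Two caveats with the counter as written: it starts at 1 and is reset to 1, so the save actually fires every 9 iterations rather than 10, and <code>names</code> keeps growing, so each file repeats earlier rows instead of holding only the increment. A small sketch of an incremental flush every N items; <code>flush</code> and the <code>.upper()</code> call are stand-ins for the real pandas save and scraping work:

```python
def scrape_all(items, flush, n=10):
    """Call flush(batch) every n items, and once at the end for any remainder."""
    buffer = []
    for count, item in enumerate(items, start=1):
        buffer.append(item.upper())     # stand-in for the real scraping work
        if count % n == 0:
            flush(list(buffer))         # e.g. pd.DataFrame(...).to_csv(...)
            buffer.clear()              # keep only changes since the last save
    if buffer:                          # don't lose the final partial batch
        flush(list(buffer))

batches = []
scrape_all([f"name{i}" for i in range(23)], batches.append, n=10)
print([len(b) for b in batches])  # -> [10, 10, 3]
```

Clearing the buffer after each flush is what makes the saved files incremental rather than cumulative.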
python|web-crawler
0
1,901,614
42,909,415
Reconstructing string of words using index position of words
<p>I compressed a file and it gave each unique word in my string a value (0,1,2,3 etc)</p> <p>I now have the list of numbers in order of appearance e.g (0,1,2,1,3,4,5,2,2 etc)</p> <p>Using the numbers and the list of unique words, is there a way to decompress the sentence and get the original sentence I started with?</p> <p>I have a text file with the following</p> <p>[0,1,2,3,2,4,5,6,2,7,8,2,9,2,11,12,13,15,16,17,18,19] ["Lines","long","lines","very","many","likes","for","i","love","how","amny","does","it","take","to","make","a","cricle..","big","questions"]</p> <p>My code compressed the original sentence by getting the positions and the unique words. </p> <p>The original sentence was "Lines long lines very lines amny likes for lines i love lines how many lines does it take to make a cricle"</p> <p>Now I want to be able to reconstruct the sentence using the list of unique words and the position list. I want to be able to do this with any sentence, not just this one example sentence.</p>
<p>To go back to words, you can access your map of words and for each of the numbers add a word onto the sentence.</p> <pre><code>numbers = [1, 2] sentence = "" words = {1: "hello", 2: "world"} for number in numbers: sentence += words[number] + " " sentence = sentence[:-1] # removes last space </code></pre>
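The same lookup reads a bit more idiomatically with <code>str.join</code>, which also removes the need to trim a trailing space; the lists below are a shortened stand-in for the question's data:

```python
numbers = [0, 1, 2, 3, 2]
words = ["Lines", "long", "lines", "very", "many"]  # index -> unique word
sentence = " ".join(words[n] for n in numbers)
print(sentence)  # -> Lines long lines very lines
```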
python
0
1,901,615
66,722,563
How to create a "weight" field when sampling a population in python?
<p>I am sampling a population and I'd like to know if there is a straightforward way to generate a column called &quot;weight&quot; that indicates the sample weight in the sampled data.</p> <p>Here is my code.</p> <h1>I create the population that is to be sampled</h1> <pre><code>import pandas as pd df=pd.DataFrame({'Age':[18,20,20,56,56,57,60]}) print(df) Age 0 18 1 20 2 20 3 56 4 56 5 57 6 60 </code></pre> <h1>I take a 30% random sample of that population</h1> <pre><code>sampleData = df.sample(frac=0.3) print(sampleData) Age 6 60 5 57 </code></pre> <p>What I would like to know is whether it's possible to generate a field called &quot;weight&quot; that indicates the sample weight (without having to manually calculate the weight). So, I'd like my sample data to look like:</p> <pre><code> Age Weight 6 60 3.333 5 57 3.333 </code></pre>
<p>Just use <code>assign()</code> method and inside it use <code>round()</code> method:-</p> <pre><code>frac=0.3 sampleData=df.sample(frac=frac).assign(Weight=round(1/frac,3)) </code></pre> <p>Now if you print <code>sampleData</code> you will get your desired output:-</p> <pre><code> Age Weight 4 56 3.333 2 20 3.333 </code></pre>
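A quick check of the behaviour; <code>random_state</code> is added here only to make the sketch reproducible, and the weight 1/frac is the same constant for every sampled row:

```python
import pandas as pd

df = pd.DataFrame({"Age": [18, 20, 20, 56, 56, 57, 60]})
frac = 0.3
sample = df.sample(frac=frac, random_state=0).assign(Weight=round(1 / frac, 3))

assert len(sample) == 2                   # 30% of 7 rows rounds to 2
assert (sample["Weight"] == 3.333).all()  # each sampled row stands for ~3.3 people
```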
python|pandas|sample
1
1,901,616
66,446,582
What does [::-1] actually do in numpy?
<p>Say I have something like</p> <pre><code>import numpy as np a = np.array([10,20,30,40,50,60]) # this will get the indices of elements in reverse sorted order a.argsort()[::-1] </code></pre> <p>I can imagine that <code>-1</code> specifies the direction, but what does the <code>::</code> operator do? Is this a numpy thing or a python thing in general?</p>
<p>It reverses the array:</p> <pre><code>In [149]: a = np.array([10,20,30,40,50,60]) In [150]: b = a[::-1] In [151]: b Out[151]: array([60, 50, 40, 30, 20, 10]) </code></pre> <p>In detail the interpreter translates that indexing expression to:</p> <pre><code>In [152]: a.__getitem__(slice(None,None,-1)) Out[152]: array([60, 50, 40, 30, 20, 10]) </code></pre> <p>Under the covers <code>numpy</code> just returns a <code>view</code> with a change in <code>strides</code>:</p> <pre><code>In [153]: a.strides Out[153]: (8,) In [154]: b.strides Out[154]: (-8,) </code></pre> <p>That -1 slice step can be used elsewhere.</p> <p>To reverse strings and lists:</p> <pre><code>In [155]: 'astring'[::-1] Out[155]: 'gnirtsa' In [156]: [1,2,3,4][::-1] Out[156]: [4, 3, 2, 1] </code></pre> <p>and to generate numbers in 'reverse' order:</p> <pre><code>In [157]: np.arange(0,10,1) Out[157]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) In [158]: np.arange(9,-1,-1) Out[158]: array([9, 8, 7, 6, 5, 4, 3, 2, 1, 0]) In [160]: np.arange(9,-1,-3) Out[160]: array([9, 6, 3, 0]) </code></pre> <p>Here we have to specify the end points, while in slicing those can be <code>None</code>, and taken from the object's shape.</p>
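A self-contained check of the points above (the stride values are left out of the assertions because they depend on the platform's default integer size):

```python
import numpy as np

a = np.array([10, 20, 30, 40, 50, 60])
# [::-1] is ordinary slice syntax: start/stop omitted, step -1
assert (a[::-1] == a[slice(None, None, -1)]).all()
# argsort()[::-1] therefore yields the indices of a descending sort
assert list(a.argsort()[::-1]) == [5, 4, 3, 2, 1, 0]
# and the same syntax is plain Python, not numpy-specific
assert "astring"[::-1] == "gnirtsa"
assert [1, 2, 3, 4][::-1] == [4, 3, 2, 1]
```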
python-3.x|numpy|numpy-ndarray
3
1,901,617
72,317,704
EDITING ONE FOR ALL
<p>How can I make it so that changing a value from True to False (or vice versa) in one place changes it everywhere else? The class Ralevant defaults to True and the class Rooms defaults to True, while the value is changed through the class Registration; if I change it from True to False in the Registration class, then the Rooms class and Ralevant should change as well.</p> <pre><code> from django.contrib.auth.models import User import datetime class Ralevant(models.Model): bool_roo = models.BooleanField(default=True) def __str__(self): return f'{self.bool_roo}' year=datetime.datetime.now().year month=datetime.datetime.now().month day = datetime.datetime.now().day class Rooms(models.Model): room_num = models.IntegerField() room_bool = models.ForeignKey(Ralevant, on_delete=models.CASCADE, related_name='name1') category = models.CharField(max_length=150) def __str__(self): return f'{self.room_num}' class Registration(models.Model): rooms = models.ForeignKey(Rooms, on_delete=models.CASCADE) first_name = models.CharField(max_length=150) last_name = models.CharField(max_length=150) admin = models.ForeignKey(User, on_delete=models.CASCADE) pasport_serial_num = models.CharField(max_length=100) birth_date = models.DateField() img = models.FileField() visit_date = models.DateTimeField() leave_date = models.DateTimeField(default=datetime.datetime(year=year,month=month,day=day+1,hour=12,minute=00,second=00)) guest_count = models.IntegerField() room_bool = models.ForeignKey(Ralevant, on_delete=models.CASCADE, related_name='name2') def __str__(self): return self.rooms,self.last_name,self.first_name </code></pre>
<p>You can implement it in your <code>views.py</code> or in your <code>serializers.py</code>: simply import the model and update it with <code>Model.objects.filter(pk=obj.pk).update()</code>, something like that.</p>
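Since both <code>Rooms.room_bool</code> and <code>Registration.room_bool</code> are foreign keys to <code>Ralevant</code>, another option is to point them at the same <code>Ralevant</code> row and flip <code>bool_roo</code> on that one row; every object referencing it then sees the new value. The sharing idea in plain Python (a stand-in class, no Django required):

```python
class Ralevant:                       # stand-in for the Django model
    def __init__(self, bool_roo=True):
        self.bool_roo = bool_roo

shared = Ralevant()                   # the one row both foreign keys point at
rooms_flag = shared                   # Rooms.room_bool -> shared
registration_flag = shared            # Registration.room_bool -> shared

registration_flag.bool_roo = False    # change it in one place...
assert rooms_flag.bool_roo is False   # ...and every reference sees the change
```

In Django terms that is a single `Ralevant.objects.filter(pk=...).update(bool_roo=False)`; updating through the shared row avoids keeping three separate copies in sync.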
python|django
0
1,901,618
72,188,501
Convert JSON with dictionaries into pandas Dataframe (AWS)
<p>I have an Amazon serverless Aurora SQL database instance with some debt installments data. I was trying to connect on the DB with AWS Lambda (python 3.7) and found this method:</p> <pre><code>import boto3 rds_client = boto3.client('rds-data') database_name = 'dbname' db_cluster_arn = 'arn:aws:rds:us-east-1:xxxx:cluster:xxxx' db_credentials_secrets_store_arn = 'arn:aws:secretsmanager:us-east-1:xxxx:secret:rds-db-credentials/cluster-xxxx/' def lambda_handler(event, context): response = execute_statement('SELECT * FROM focafidc.estoque'); json_string = str(response) return response def execute_statement(sql): response = rds_client.execute_statement( secretArn=db_credentials_secrets_store_arn, database=database_name, resourceArn=db_cluster_arn, sql=sql ) return response; </code></pre> <p>The response returns something like a JSON with data nested in dictionaries:</p> <pre><code>{ &quot;ResponseMetadata&quot;: { &quot;RequestId&quot;: &quot;f7df6de2-8144-4b7b-9cf0-c828454b4a0d&quot;, &quot;HTTPStatusCode&quot;: 200, &quot;HTTPHeaders&quot;: { &quot;x-amzn-requestid&quot;: &quot;f7df6de2-8144-4b7b-9cf0-c828454b4a0d&quot;, &quot;content-type&quot;: &quot;application/json&quot;, &quot;content-length&quot;: &quot;324685&quot;, &quot;date&quot;: &quot;Tue, 10 May 2022 13:51:57 GMT&quot; }, &quot;RetryAttempts&quot;: 0 }, &quot;numberOfRecordsUpdated&quot;: 0, &quot;records&quot;: [ [ { &quot;stringValue&quot;: &quot;2022-05-02&quot; }, { &quot;longValue&quot;: 1 }, { &quot;longValue&quot;: 1 }, { &quot;stringValue&quot;: &quot;a3789&quot; }, { &quot;stringValue&quot;: &quot;519.60&quot; }, { &quot;stringValue&quot;: &quot;2023-05-02&quot; }, { &quot;stringValue&quot;: &quot;2598.00&quot; }, { &quot;longValue&quot;: 666000002 }, { &quot;stringValue&quot;: &quot;1.88&quot; }, { &quot;stringValue&quot;: &quot;b190&quot; }, { &quot;stringValue&quot;: &quot;1996-03-25&quot; }, { &quot;stringValue&quot;: &quot;Brasileiro&quot; }, { &quot;stringValue&quot;: &quot;masculino&quot; 
}, { &quot;stringValue&quot;: &quot;false&quot; }, { &quot;stringValue&quot;: &quot;SP&quot; }, { &quot;stringValue&quot;: &quot;São Paulo&quot; }, { &quot;longValue&quot;: 111111111 }, { &quot;longValue&quot;: 1111111111 }, { &quot;booleanValue&quot;: true }, { &quot;stringValue&quot;: &quot;LOJAS S.A.&quot; }, { &quot;stringValue&quot;: &quot;99999999999999&quot; } ], [ { &quot;stringValue&quot;: &quot;2022-05-02&quot; }, { &quot;longValue&quot;: 1 }, { &quot;longValue&quot;: 2 }, { &quot;stringValue&quot;: &quot;a3789&quot; }, { &quot;stringValue&quot;: &quot;519.60&quot; }, { &quot;stringValue&quot;: &quot;2024-05-01&quot; }, { &quot;stringValue&quot;: &quot;2598.00&quot; }, { &quot;longValue&quot;: 666000002 }, { &quot;stringValue&quot;: &quot;1.88&quot; }, { &quot;stringValue&quot;: &quot;b190&quot; }, { &quot;stringValue&quot;: &quot;1996-03-25&quot; }, { &quot;stringValue&quot;: &quot;Brasileiro&quot; }, { &quot;stringValue&quot;: &quot;masculino&quot; }, { &quot;stringValue&quot;: &quot;false&quot; }, { &quot;stringValue&quot;: &quot;SP&quot; }, { &quot;stringValue&quot;: &quot;São Paulo&quot; }, { &quot;longValue&quot;: 111111111 }, { &quot;longValue&quot;: 1111111111 }, { &quot;booleanValue&quot;: true }, { &quot;stringValue&quot;: &quot;LOJAS S.A.&quot; }, { &quot;stringValue&quot;: &quot;99999999999999&quot; } ], [ { &quot;stringValue&quot;: &quot;2022-05-02&quot; }, { &quot;longValue&quot;: 1 }, { &quot;longValue&quot;: 3 }, { &quot;stringValue&quot;: &quot;a3789&quot; }, { &quot;stringValue&quot;: &quot;519.60&quot; }, { &quot;stringValue&quot;: &quot;2025-05-01&quot; }, { &quot;stringValue&quot;: &quot;2598.00&quot; }, { &quot;longValue&quot;: 666000002 }, { &quot;stringValue&quot;: &quot;1.88&quot; }, { &quot;stringValue&quot;: &quot;b190&quot; }, { &quot;stringValue&quot;: &quot;1996-03-25&quot; }, { &quot;stringValue&quot;: &quot;Brasileiro&quot; }, { &quot;stringValue&quot;: &quot;masculino&quot; }, { &quot;stringValue&quot;: &quot;false&quot; 
}, { &quot;stringValue&quot;: &quot;SP&quot; }, { &quot;stringValue&quot;: &quot;São Paulo&quot; }, { &quot;longValue&quot;: 111111111 }, { &quot;longValue&quot;: 1111111111 }, { &quot;booleanValue&quot;: true }, { &quot;stringValue&quot;: &quot;LOJAS S.A.&quot; }, { &quot;stringValue&quot;: &quot;99999999999999&quot; } ], [ { &quot;stringValue&quot;: &quot;2022-05-02&quot; }, { &quot;longValue&quot;: 1 }, { &quot;longValue&quot;: 4 }, { &quot;stringValue&quot;: &quot;a3789&quot; }, { &quot;stringValue&quot;: &quot;519.60&quot; }, { &quot;stringValue&quot;: &quot;2026-05-01&quot; }, { &quot;stringValue&quot;: &quot;2598.00&quot; }, { &quot;longValue&quot;: 666000002 }, { &quot;stringValue&quot;: &quot;1.88&quot; }, { &quot;stringValue&quot;: &quot;b190&quot; }, { &quot;stringValue&quot;: &quot;1996-03-25&quot; }, { &quot;stringValue&quot;: &quot;Brasileiro&quot; }, { &quot;stringValue&quot;: &quot;masculino&quot; }, { &quot;stringValue&quot;: &quot;false&quot; }, { &quot;stringValue&quot;: &quot;SP&quot; }, { &quot;stringValue&quot;: &quot;São Paulo&quot; }, { &quot;longValue&quot;: 111111111 }, { &quot;longValue&quot;: 1111111111 }, { &quot;booleanValue&quot;: true }, { &quot;stringValue&quot;: &quot;LOJAS S.A.&quot; }, { &quot;stringValue&quot;: &quot;99999999999999&quot; } ], [ { &quot;stringValue&quot;: &quot;2022-05-02&quot; }, { &quot;longValue&quot;: 1 }, { &quot;longValue&quot;: 5 }, { &quot;stringValue&quot;: &quot;a3789&quot; }, { &quot;stringValue&quot;: &quot;519.60&quot; }, { &quot;stringValue&quot;: &quot;2027-05-01&quot; }, { &quot;stringValue&quot;: &quot;2598.00&quot; }, { &quot;longValue&quot;: 666000002 }, { &quot;stringValue&quot;: &quot;1.88&quot; }, { &quot;stringValue&quot;: &quot;b190&quot; }, { &quot;stringValue&quot;: &quot;1996-03-25&quot; }, { &quot;stringValue&quot;: &quot;Brasileiro&quot; }, { &quot;stringValue&quot;: &quot;masculino&quot; }, { &quot;stringValue&quot;: &quot;false&quot; }, { &quot;stringValue&quot;: &quot;SP&quot; }, { 
&quot;stringValue&quot;: &quot;São Paulo&quot; }, { &quot;longValue&quot;: 111111111 }, { &quot;longValue&quot;: 1111111111 }, { &quot;booleanValue&quot;: true }, { &quot;stringValue&quot;: &quot;LOJAS S.A.&quot; }, { &quot;stringValue&quot;: &quot;99999999999999&quot; } ] ] } </code></pre> <p>I need this data to be a pandas dataframe, so I tried to json_normalize the response JSON and got the following result:</p> <pre><code>bd1 = pd.json_normalize(response,['records']) print(bd1) 0 ... 20 0 {'stringValue': '2022-05-02'} ... {'stringValue': '99999999999999'} 1 {'stringValue': '2022-05-02'} ... {'stringValue': '99999999999999'} 2 {'stringValue': '2022-05-02'} ... {'stringValue': '99999999999999'} 3 {'stringValue': '2022-05-02'} ... {'stringValue': '99999999999999'} 4 {'stringValue': '2022-05-02'} ... {'stringValue': '99999999999999'} </code></pre> <p>Can you guys suggest any method to create or convert this to an only values Dataframe?</p>
<p>How about we first parsing the records to standard python objects, and then we handle the JSON-like python structure to dataframe. Assume that you've already parsed the records to nested-list of dicts like the following:</p> <pre><code>true, false, null = True, False, None records = [ [ { &quot;stringValue&quot;: &quot;2022-05-02&quot; }, { &quot;longValue&quot;: 1 }, { &quot;longValue&quot;: 1 }, { &quot;stringValue&quot;: &quot;a3789&quot; }, { &quot;stringValue&quot;: &quot;519.60&quot; }, { &quot;stringValue&quot;: &quot;2023-05-02&quot; }, { &quot;stringValue&quot;: &quot;2598.00&quot; }, { &quot;longValue&quot;: 666000002 }, { &quot;stringValue&quot;: &quot;1.88&quot; }, { &quot;stringValue&quot;: &quot;b190&quot; }, { &quot;stringValue&quot;: &quot;1996-03-25&quot; }, { &quot;stringValue&quot;: &quot;Brasileiro&quot; }, { &quot;stringValue&quot;: &quot;masculino&quot; }, { &quot;stringValue&quot;: &quot;false&quot; }, { &quot;stringValue&quot;: &quot;SP&quot; }, { &quot;stringValue&quot;: &quot;São Paulo&quot; }, { &quot;longValue&quot;: 111111111 }, { &quot;longValue&quot;: 1111111111 }, { &quot;booleanValue&quot;: true }, { &quot;stringValue&quot;: &quot;LOJAS S.A.&quot; }, { &quot;stringValue&quot;: &quot;99999999999999&quot; } ], [ { &quot;stringValue&quot;: &quot;2022-05-03&quot; }, { &quot;longValue&quot;: 1 }, { &quot;longValue&quot;: 2 }, { &quot;stringValue&quot;: &quot;a3789&quot; }, { &quot;stringValue&quot;: &quot;519.60&quot; }, { &quot;stringValue&quot;: &quot;2024-05-01&quot; }, { &quot;stringValue&quot;: &quot;2598.00&quot; }, { &quot;longValue&quot;: 666000002 }, { &quot;stringValue&quot;: &quot;1.88&quot; }, { &quot;stringValue&quot;: &quot;b190&quot; }, { &quot;stringValue&quot;: &quot;1996-03-25&quot; }, { &quot;stringValue&quot;: &quot;Brasileiro&quot; }, { &quot;stringValue&quot;: &quot;masculino&quot; }, { &quot;stringValue&quot;: &quot;false&quot; }, { &quot;stringValue&quot;: &quot;SP&quot; }, { &quot;stringValue&quot;: &quot;São 
Paulo&quot; }, { &quot;longValue&quot;: 111111111 }, { &quot;longValue&quot;: 1111111111 }, { &quot;booleanValue&quot;: true }, { &quot;stringValue&quot;: &quot;LOJAS S.A.&quot; }, { &quot;stringValue&quot;: &quot;99999999999999&quot; } ], ] </code></pre> <p>Here we start to extract the values you focus on:</p> <pre><code>def first(seq): return next(iter(seq)) import pandas as pd records_values = [[first(item.values()) for item in record] for record in records] df = pd.DataFrame(records_values) print(df) </code></pre> <p><strong>The Output is:</strong></p> <pre><code> 0 1 2 3 ... 17 18 19 20 0 2022-05-02 1 1 a3789 ... 1111111111 True LOJAS S.A. 99999999999999 1 2022-05-03 1 2 a3789 ... 1111111111 True LOJAS S.A. 99999999999999 [2 rows x 21 columns] </code></pre> <p>And if you want to keep the correspond value types, you can extract the value types from one item of the records like this, and do the related type-casting later in pandas:</p> <pre><code>candidate = records[0] value_types = [first(item.keys()) for item in candidate] # ['stringValue', 'longValue', 'longValue', 'stringValue', 'stringValue', 'stringValue', 'stringValue', # 'longValue', 'stringValue', 'stringValue', 'stringValue', 'stringValue', 'stringValue', 'stringValue', # 'stringValue', 'stringValue', 'longValue', 'longValue', 'booleanValue', 'stringValue', 'stringValue'] </code></pre>
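A compact, standalone version of the same extraction; the records here are shortened to three columns, and the column names are invented for the sketch since <code>execute_statement</code> only returns positional values (the Data API's <code>includeResultMetadata</code> flag can supply the real names):

```python
import pandas as pd

def first(seq):
    return next(iter(seq))

# two shortened records in the rds-data shape; column names are made up here
records = [
    [{"stringValue": "2022-05-02"}, {"longValue": 1}, {"booleanValue": True}],
    [{"stringValue": "2022-05-03"}, {"longValue": 2}, {"booleanValue": True}],
]
rows = [[first(item.values()) for item in record] for record in records]
df = pd.DataFrame(rows, columns=["date", "installment", "active"])

assert df.shape == (2, 3)
assert list(df["installment"]) == [1, 2]
```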
python|amazon-web-services|aws-lambda|amazon-rds
0
1,901,619
50,767,028
How do I patch a python method for a test case to run?
<p>A project I'm working on uses django for basically everything. When writing a model, I found it necessary to override the save() method to spin off a task to be run by a worker:</p> <pre><code>class MyModel(models.Model) def _start_processing(self): my_task.apply_async(args=['arg1', ..., 'argn']) def save(self, *args, **kwargs): """Saves the model object to the database""" # do some stuff self._start_processing() # do some more stuff super(MyModel, self).save(*args, **kwargs) </code></pre> <p>In my tester, I want to test the parts of the save override that are designated by <code># do some stuff</code> and <code># do some more stuff</code>, but don't want to run the task. To do this, I believe I should be using mocking (which I'm very new to). </p> <p>In my test class, I've set it up to skip the task invocation:</p> <pre><code>class MyModelTests(TestCase): def setUp(self): # Mock the _start_processing() method. Ha! @patch('my_app.models.MyModel._start_processing') def start_processing(self, mock_start_processing): print('This is when the task would normally be run, but this is a test!') # Create a model to test with self.test_object = MyModelFactory() </code></pre> <p>Since the factory creates and saves an instance of the model, I need to have overwritten the <code>_start_processing()</code> method before that is called. The above doesn't seem to be working (and the task runs and fails). What am I missing?</p>
<p>First of all, you have to wrap with the decorator not the function you want to use as a replacement, but the "scope" in which your mock should work. So, for example, if you need to mock <code>_start_processing</code> for the whole <code>MyModelTests</code> class, you should place the decorator before the class definition. If only for one test method, wrap only that test method with it.</p> <p>Secondly, define that <code>start_processing</code> function somewhere outside the class, and pass <code>@patch('my_app.models.MyModel._start_processing', new=start_processing)</code>, so it knows what to use as a replacement for the actual method. Be careful to match the actual method signature, so use just</p> <pre><code>def start_processing(self): print('This is when the task would normally be run, but this is a test!') </code></pre> <p>Thirdly, you will have to add a <code>mock_start_processing</code> argument to each test case inside this class (the test_... methods), simply because that is how mocking works :).</p> <p>And finally, you have to be aware of the <code>target</code> you are patching. Your current <code>my_app.models.MyModel._start_processing</code> could be broken. You have to patch the class using the path where it is USED, not where it is DEFINED. So, if you are creating objects with <code>MyModelFactory</code> inside the <code>TestCase</code>, and <code>MyModelFactory</code> lives in <code>my_app.factories</code> and imports <code>MyModel</code> as <code>from .models import MyModel</code>, you will have to use <code>@patch('my_app.factories.MyModel._start_processing')</code>, not <code>'my_app.models.MyModel._start_processing'</code>.</p> <p>Hopefully, this helps.</p>
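The points above condensed into a runnable sketch, using plain classes instead of Django models and the context-manager form of the same <code>patch</code> helper:

```python
from unittest import mock

class MyModel:
    def _start_processing(self):
        raise RuntimeError("real task launched!")  # pretend this enqueues a task

    def save(self):
        # "do some stuff" would go here
        self._start_processing()                   # the call we want intercepted
        # "do some more stuff" would go here
        return "saved"

# Patching the attribute on the class replaces it for the duration of the block,
# so the factory-created instance inside never launches the real task.
with mock.patch.object(MyModel, "_start_processing") as mocked:
    result = MyModel().save()

assert result == "saved"   # the surrounding save() logic still ran
assert mocked.called       # the task call was intercepted, not executed
```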
python|django|unit-testing|mocking
1
1,901,620
50,890,686
How can I get this Spider to export a JSON file for each Items List?
<p>In my following file <code>Reddit.py</code>, it has this Spider:</p> <pre><code>import scrapy class RedditSpider(scrapy.Spider): name = 'Reddit' allowed_domains = ['reddit.com'] start_urls = ['https://old.reddit.com'] def parse(self, response): for link in response.css('li.first a.comments::attr(href)').extract(): yield scrapy.Request(url=response.urljoin(link), callback=self.parse_topics) def parse_topics(self, response): topics = {} topics["title"] = response.css('a.title::text').extract_first() topics["author"] = response.css('p.tagline a.author::text').extract_first() if response.css('div.score.likes::attr(title)').extract_first() is not None: topics["score"] = response.css('div.score.likes::attr(title)').extract_first() else: topics["score"] = "0" if int(topics["score"]) &gt; 10000: author_url = response.css('p.tagline a.author::attr(href)').extract_first() yield scrapy.Request(url=response.urljoin(author_url), callback=self.parse_user, meta={'topics': topics}) else: yield topics def parse_user(self, response): topics = response.meta.get('topics') users = {} users["name"] = topics["author"] users["karma"] = response.css('span.karma::text').extract_first() yield users yield topics </code></pre> <p>What it does that it gets all the URLs from the main page of <code>old.reddit</code>, Then scrape each URL's <strong>title</strong>, <strong>author</strong> and <strong>score</strong>.</p> <p>What I've added to it is a second part, Where it checks if the <strong>score</strong> is higher than <strong>10000</strong>, If it is, Then the Spider goes to the <strong>user</strong>'s page and scrape his <strong>karma</strong> from it.</p> <p>I do understand that I can scrape the <strong>karma</strong> from the <strong>topic</strong>'s page, But I would like to do it this way, Since there is other part of the <strong>user</strong>'s page I scrape That doesn't exist in the <strong>topic</strong>'s page.</p> <p>What I want to do is to export the <code>topics</code> list which 
contains <code>title, author, score</code> into a <code>JSON</code> file named <code>topics.json</code>, Then if the <strong>topic</strong>'s score is higher than <strong>10000</strong> to export the <code>users</code> list which contains <code>name, karma</code> into a <code>JSON</code> file named <code>users.json</code>.</p> <p>I only know how to use the <code>command-line</code> of</p> <pre><code>scrapy runspider Reddit.py -o Reddit.json </code></pre> <p>Which exports all the lists into a single <code>JSON</code> file named <code>Reddit</code> but in a bad structure like this</p> <pre><code>[ {"name": "Username", "karma": "00000"}, {"title": "ExampleTitle1", "author": "Username", "score": "11000"}, {"name": "Username2", "karma": "00000"}, {"title": "ExampleTitle2", "author": "Username2", "score": "12000"}, {"name": "Username3", "karma": "00000"}, {"title": "ExampleTitle3", "author": "Username3", "score": "13000"}, {"title": "ExampleTitle4", "author": "Username4", "score": "9000"}, .... ] </code></pre> <hr> <p>I have no-knowledge at all about <strong>Scrapy</strong>'s <code>Item Pipeline</code> nor <code>Item Exporters</code> &amp; <code>Feed Exporters</code> on how to implement them on my Spider, or how to use them overall, Tried to understand it from the Documentation, But it doesn't seem I get how to use it in my Spider.</p> <hr> <p>The final result I want is two files:</p> <p><strong><em>topics.json</em></strong></p> <pre><code>[ {"title": "ExampleTitle1", "author": "Username", "score": "11000"}, {"title": "ExampleTitle2", "author": "Username2", "score": "12000"}, {"title": "ExampleTitle3", "author": "Username3", "score": "13000"}, {"title": "ExampleTitle4", "author": "Username4", "score": "9000"}, .... ] </code></pre> <p><strong><em>users.json</em></strong></p> <pre><code>[ {"name": "Username", "karma": "00000"}, {"name": "Username2", "karma": "00000"}, {"name": "Username3", "karma": "00000"}, .... 
] </code></pre> <p>while getting rid of duplicates in the list.</p>
<p>Applying approach from below SO thread</p> <p><a href="https://stackoverflow.com/questions/50083638/export-scrapy-items-to-different-files/50133981#50133981">Export scrapy items to different files</a></p> <p>I created a sample scraper</p> <pre><code>import scrapy class ExampleSpider(scrapy.Spider): name = 'example' allowed_domains = ['example.com'] start_urls = ['http://example.com/'] def parse(self, response): yield {"type": "unknown item"} yield {"title": "ExampleTitle1", "author": "Username", "score": "11000"} yield {"name": "Username", "karma": "00000"} yield {"name": "Username2", "karma": "00000"} yield {"someothertype": "unknown item"} yield {"title": "ExampleTitle2", "author": "Username2", "score": "12000"} yield {"title": "ExampleTitle3", "author": "Username3", "score": "13000"} yield {"title": "ExampleTitle4", "author": "Username4", "score": "9000"} yield {"name": "Username3", "karma": "00000"} </code></pre> <p>And then in <code>exporters.py</code></p> <pre><code>from scrapy.exporters import JsonItemExporter from scrapy.extensions.feedexport import FileFeedStorage class JsonMultiFileItemExporter(JsonItemExporter): types = ["topics", "users"] def __init__(self, file, **kwargs): super().__init__(file, **kwargs) self.files = {} self.kwargs = kwargs for itemtype in self.types: storage = FileFeedStorage(itemtype + ".json") file = storage.open(None) self.files[itemtype] = JsonItemExporter(file, **self.kwargs) def start_exporting(self): super().start_exporting() for exporters in self.files.values(): exporters.start_exporting() def finish_exporting(self): super().finish_exporting() for exporters in self.files.values(): exporters.finish_exporting() exporters.file.close() def export_item(self, item): if "title" in item: itemtype = "topics" elif "karma" in item: itemtype = "users" else: itemtype = "self" if itemtype == "self" or itemtype not in self.files: super().export_item(item) else: self.files[itemtype].export_item(item) </code></pre> <p>Add below to the 
<code>settings.py</code></p> <pre><code>FEED_EXPORTERS = { 'json': 'testing.exporters.JsonMultiFileItemExporter', } </code></pre> <p>Running the scraper I get 3 files generated</p> <p><strong>example.json</strong></p> <pre><code>[ {"type": "unknown item"}, {"someothertype": "unknown item"} ] </code></pre> <p><strong>topics.json</strong></p> <pre><code>[ {"title": "ExampleTitle1", "author": "Username", "score": "11000"}, {"title": "ExampleTitle2", "author": "Username2", "score": "12000"}, {"title": "ExampleTitle3", "author": "Username3", "score": "13000"}, {"title": "ExampleTitle4", "author": "Username4", "score": "9000"} ] </code></pre> <p><strong>users.json</strong></p> <pre><code>[ {"name": "Username", "karma": "00000"}, {"name": "Username2", "karma": "00000"}, {"name": "Username3", "karma": "00000"} ] </code></pre>
python|json|python-3.x|scrapy|scrapy-spider
1
1,901,621
51,053,578
Django choosen user permission is not showing in admin page
<p>I tried to extend the default Django user in my project by using <strong>AbstractUser</strong>. In the Django admin I couldn't see the chosen user permissions. </p> <p><a href="https://i.stack.imgur.com/3aauP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3aauP.png" alt="enter image description here"></a></p> <p>Here is my work</p> <pre><code>from django.db import models from django.contrib.auth.models import AbstractUser class ExtendedUser(AbstractUser): bio = models.TextField(max_length=500, blank=True) birth_date = models.DateField(null=True, blank=True) </code></pre> <p>After that I added my extended user in <strong>admin.py</strong></p> <pre><code>class ExtendedUserAdmin(admin.ModelAdmin): pass admin.site.register(ExtendedUser, ExtendedUserAdmin) </code></pre> <p>I also added AUTH_USER_MODEL in <strong>settings.py</strong></p> <pre><code>AUTH_USER_MODEL = '_aaron_user.ExtendedUser' </code></pre>
<p>I solved this problem by importing <code>UserAdmin</code> and registering my <code>ExtendedUser</code> with it in my <strong>admin.py</strong> file.</p> <pre><code>from .models import ExtendedUser from django.contrib.auth.admin import UserAdmin admin.site.register(ExtendedUser, UserAdmin) </code></pre> <p>The result is that the chosen groups and chosen user permissions are now available.</p> <p><a href="https://i.stack.imgur.com/nNiK3.png" rel="noreferrer"><img src="https://i.stack.imgur.com/nNiK3.png" alt="enter image description here"></a></p>
python|django|python-3.x|django-admin
5
1,901,622
50,930,805
Nested for loops looping through a python list
<p>I have to loop through a list of over 4000 item and check their similarity with a recommendation algorithm in python.</p> <p>The script takes a long time to run (10-11 Hours) and I wanted to incorporate multi-threading to improve speed but dont know how to do it exactly.</p> <pre><code> import numpy as np import pandas as pd import matplotlib.pyplot as plt data=pd.read_csv('data.csv',index_col=0, encoding="ISO-8859-1") # Get list of unique items itemList=list(set(data["product_ref"].tolist())) # Get count of customers userCount=len(set(data["customer_id"].tolist())) # Create an empty data frame to store item affinity scores for items. itemAffinity= pd.DataFrame(columns=('item1', 'item2', 'score')) def itemUsers(ind): return data[data.product_ref==itemList[ind]]["customer_id"].tolist() rowCount=0 for ind1 in range(len(itemList)): item1Users = itemUsers(ind1) pool = Pool() pool.map(loop2, data_inputs) for ind2 in range(ind1+1, len(itemList)): print(ind1, ":", ind2) item2Users = itemUsers(ind2) commonUsers= len(set(item1Users).intersection(set(item2Users))) score=commonUsers / userCount itemAffinity.loc[rowCount] = [itemList[ind1],itemList[ind2],score] rowCount +=1 </code></pre>
<p>Incorporating multi-threading will not improve your running time.</p> <p>Think about it this way: in CPython the GIL allows only one thread to execute Python bytecode at a time, so multi-threading a CPU-bound loop just spreads the same computation across multiple threads instead of running it in one process.</p> <p>Threads can help when one thread is blocked waiting, for example on user input or I/O, and you want to compute in the meantime, but <strong>this isn't your case.</strong></p>
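If the loop is genuinely CPU-bound, separate *processes* can still parallelize it, since each worker process gets its own interpreter. A minimal sketch with `multiprocessing.Pool`, using a simplified pairwise overlap score rather than the question's actual data (the sets and the `score_pair` helper here are hypothetical stand-ins):

```python
from multiprocessing import Pool
from itertools import combinations

def score_pair(pair):
    # Placeholder similarity: size of the intersection of two user sets
    users1, users2 = pair
    return len(users1 & users2)

if __name__ == "__main__":
    # Hypothetical per-item customer sets standing in for itemUsers(...)
    user_sets = [{1, 2, 3}, {2, 3, 4}, {5, 6}]
    pairs = list(combinations(user_sets, 2))
    with Pool() as pool:
        scores = pool.map(score_pair, pairs)  # scored in parallel processes
    print(scores)  # [2, 0, 0]
```

Note also that appending to a DataFrame with `itemAffinity.loc[rowCount] = ...` inside the loop is itself slow; collecting results in a list and building the DataFrame once at the end is usually much faster.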
python|multithreading|pandas|iteration|nested-loops
1
1,901,623
3,221,387
How to copy last X bits?
<p>Let's say I have two integers with the following binary representations:</p> <pre><code>01101010 00110101 </code></pre> <p>And now I want to copy the last 3 bits from the first integer over the second one so that it becomes</p> <pre><code>00110010 </code></pre> <p>What's the easiest way to do that?</p> <p>(Actually, my goal is to shift the all the X+1 bits to the right one, essentially deleting the Xth bit, and keeping the X-1 bits the same -- in this case, X is 4)</p> <hr> <p><strong>The "why?":</strong></p> <p>You have a bunch of flags,</p> <pre><code>1 = 'permission x' 2 = 'permission y' 4 = 'permission z' 8 = 'permission w' </code></pre> <p>You decide that that "permission y" is no longer needed in your program, and thus shift z and w up a position (making them 2 and 4 respectively). However, now you need to update all the values in your database.... (what formula do you use?)</p>
<p>Depending on your version of python, the way you express binary literals changes, see <a href="https://stackoverflow.com/questions/1476/how-do-you-express-binary-literals-in-python">this question for the details</a>.</p> <p>I'm using 2.5.2, so I used this:</p> <pre><code>&gt;&gt;&gt; a = int('01101010', 2) &gt;&gt;&gt; b = int('00110101', 2) &gt;&gt;&gt; mask = 07 # Mask out the last 3 bits. &gt;&gt;&gt; (b &amp; ~mask) | (a &amp; mask) 50 &gt;&gt;&gt; int('00110010', 2) 50 </code></pre> <hr> <p>Details:</p> <pre><code>(b &amp; ~mask) &lt;- This keeps the first n-3 bits. (By negating the 3bit mask). (a &amp; mask) &lt;- This keeps the last 3 bits. If you '|' (bitwise OR) them together, you get your desired result. </code></pre> <hr> <p>I didn't understand your goal in the last sentence, so I don't know how to address that :)</p>
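For the "delete the Xth bit" goal at the end of the question: keep the bits below position p unchanged, and shift the bits above it down by one. A sketch in Python 3 (0-indexed bit position, so dropping "permission y" means p = 1):

```python
def delete_bit(value, p):
    """Remove the bit at 0-indexed position p, shifting higher bits down."""
    low = value & ((1 << p) - 1)    # bits below position p, unchanged
    high = (value >> (p + 1)) << p  # bits above position p, shifted down one
    return high | low

# Dropping 'permission y' (bit 1) from the flags example:
# old: x=1, y=2, z=4, w=8  ->  new: x=1, z=2, w=4
print(delete_bit(0b1101, 1))  # x, z, w set -> 0b111 == 7
```

Running that over every stored permission value in the database would perform the migration described in the "why?" section.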
python|binary|bit-manipulation
8
1,901,624
3,937,464
Is it possible to pre-create a virtualenv for use in hudson builds?
<p>I'm following the outline of the hudson/python/virtualenv CI solution <a href="http://heisel.org/blog/2009/11/21/django-hudson/" rel="nofollow">described at heisel.org</a> but one step of this is really chafing, and that's the part where the virtualenv, created just for the CI run, is configured:</p> <pre><code>pip install -q -E ./ve -r requirements.pip pip install -q -E ./ve -r requirements-test.pip </code></pre> <p>This takes an inordinate amount of time to run, and every time a source file changes we'll end up re-downloading what amounts to a significant amount of data.</p> <p>Is it possible to create template workspaces in Hudson, so that instead of checking out into a bare workspace it checks out into one that is pre-prepared?</p>
<p>Here are a couple options:</p> <ol> <li><p>Have an archive in your source repository that blows up into the virtualenv/pip install. You'll need to make the virtualenv starting point relocatable. </p></li> <li><p>Use whatever SCM option is appropriate to not wipe out the workspace between builds (e.g. Use svn update, or don't check Mercurial's Clean Build option). Then keep the install commands in your build script, but put them in under an <code>if</code> statement so they are only run (for example) if a <code>.pip_installed</code> file is not present, or if a build parameter is set.</p></li> <li><p>You might be able to get the <a href="http://wiki.hudson-ci.org/display/HUDSON/Clone+Workspace+SCM+Plugin" rel="nofollow noreferrer">Clone Workspace</a> plugin to do what you want. But that's an alternative SCM, which I'm guessing you probably don't want since Hudson won't check out from multiple SCMs (see <a href="https://stackoverflow.com/questions/1976720/can-hudson-check-out-from-multiple-scms">this previous question</a> for some ideas about working around this). </p></li> </ol> <p>It's probably also a good idea to set up your pip configuration to pull from a local cache of packages. </p> <pre><code>pip -f http://localhost/packages/ </code></pre>
python|continuous-integration|hudson|virtualenv|pip
1
1,901,625
45,224,364
Adding a New Column in a DF based on muiltiple conditions (Beginner)
<p>I currently have a data based e.g. below:</p> <p><a href="https://i.stack.imgur.com/clJI7.jpg" rel="nofollow noreferrer">Link To Table</a></p> <p>I would like to add a new column, to the right of "Equity" called "Exposure". </p> <pre><code>If the Quantity &gt;=0 then "Exposure" = df[Equity] - df[market value]. If the Quantity &lt;0 then "Exposure" = df[Equity] - (-1*df[Market Value]) </code></pre> <p>Please help. Thank you.</p>
<p>It seems you are looking for <code>df.apply</code> with <code>axis=1</code> (note that <code>df.transform</code> raises an error here, because it must return output of the same shape as its input):</p> <pre><code>df['Exposure'] = df.apply(lambda x: (x['Equity'] - x['Market Value']) if x['Quantity'] &gt;= 0 else (x['Equity'] + x['Market Value']), axis=1) </code></pre>
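A vectorized alternative with `numpy.where` avoids calling a Python lambda per row entirely, which matters on larger frames. A sketch with made-up sample values, assuming the column names from the question:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Quantity": [10, -5],
    "Market Value": [100.0, 40.0],
    "Equity": [250.0, 250.0],
})

# If Quantity >= 0: Equity - Market Value, else Equity + Market Value
df["Exposure"] = np.where(df["Quantity"] >= 0,
                          df["Equity"] - df["Market Value"],
                          df["Equity"] + df["Market Value"])
print(df["Exposure"].tolist())  # [150.0, 290.0]
```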
python|dataframe
2
1,901,626
61,286,258
Fillna with backwards and forward looking condition in Pandas
<p>I am working with a dataframe that has a column with several NaN that I want to fill according to the following condition: If going backwards and forward up to 3 rows there are 2 equal values, then fill the NaN with that value.</p> <p>Since this might not be clear, a couple of examples below:</p> <pre><code> col1 0 10 1 10 2 NaN 3 NaN 4 NaN 5 10 6 5 7 NaN 8 5 9 NaN 10 NaN 11 NaN 12 NaN </code></pre> <ul> <li>The value in row 2 has a 10 at 1 row going back and a 10 in 3 rows going forward. --> Fill with 10 </li> <li>The value in row 3 has a 10 at 2 rows going back and a 10 in 2 rows going forward. --> Fill with 10 </li> <li>The value in row 4 has a 10 at 3 rows going back and a 10 in 1 row going forward. --> Fill with 10</li> <li>The value in row 7 has a 5 at 1 row going back and a 5 in 1 row going forward. --> Fill with 5</li> <li>The value in row 9 has a 5 at 1 row going back but no 5 in the 3 rows going forward. --> Then, don't fill</li> </ul> <p>Then, the result would be like this:</p> <pre><code> col1 0 10 1 10 2 10 3 10 4 10 5 10 6 5 7 5 8 5 9 NaN 10 NaN 11 NaN 12 NaN </code></pre> <p>Is there any functionality I can use to give this logic to the <code>fillna</code>?</p> <p>Thanks!!</p>
<p>You can compare forward filling and back filling <code>Series</code> with limit parameter, chain mask with <code>&amp;</code> for bitwise AND for only rows with missing values and replace it by forward filling column:</p> <pre><code>m1 = df['col1'].isna() f = df['col1'].ffill(limit=3) m2 = f.eq(df['col1'].bfill(limit=3)) df['col2'] = df['col1'].mask(m1 &amp; m2, f) print (df) col1 col2 0 10.0 10.0 1 10.0 10.0 2 NaN 10.0 3 NaN 10.0 4 NaN 10.0 5 10.0 10.0 6 5.0 5.0 7 NaN 5.0 8 5.0 5.0 9 NaN NaN 10 NaN NaN 11 NaN NaN 12 NaN NaN </code></pre>
python|pandas|dataframe|fillna
4
1,901,627
58,109,401
How do you unstack columns in a DataFrame?
<p>I have a DataFrame, 'df' and have 53 different columns and 1740 rows. The columns include; 'Age', 'RaceOne', 'RaceTwo', 'RaceThree', 'Name', 'Identity' .. etc. But I want to reorganize the DataFrame so that a new variable 'RaceTimes' replaces 'RaceOne', 'RaceTwo', 'RaceThree' and the remainder of the DataFrame columns follows suit in a particular manner, as shown in the second DataFrame below...</p> <p>Current df:</p> <pre><code>'Age' 'RaceOne' 'RaceTwo' 'RaceThree' 'Name' 'Identity' ... 'Male/Female 25 15:40:00 15:35:00 15:39:00 Wendy 105888 ... Female 26 15:43:00 15:25:00 15:15:00 Steve 114342 ... Male 22 15:20:00 15:31:00 15:23:00 Ant 123553 ... Male </code></pre> <p>What I'd like to see...</p> <pre><code>'Age' 'RaceTimes' 'Name' 'Identity' ... 'Male/Female' 25 15:40:00 Wendy 105888 ... Female 25 15:35:00 Wendy 105888 ... Female 25 15:39:00 Wendy 105888 ... Female 26 15:43:00 Steve 114342 ... Male 26 15:25:00 Steve 114342 ... Male 26 15:15:00 Steve 114342 ... Male 22 15:20:00 Ant 123553 ... Male 22 15:31:00 Ant 123553 ... Male 22 15:23:00 Ant 123553 ... Male </code></pre>
<p>IIUC, check <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.wide_to_long.html" rel="nofollow noreferrer"><code>pd.wide_to_long</code></a>:</p> <pre><code>final=(pd.wide_to_long(df,'Race',i='Age',j='v',sep='',suffix='\w+').reset_index(1,drop=True) .sort_index().reset_index()) </code></pre> <hr> <pre><code> Age Male/Female Name Identity ... Race 0 22 Male Ant 123553 ... 15:20:00 1 22 Male Ant 123553 ... 15:31:00 2 22 Male Ant 123553 ... 15:23:00 3 25 Female Wendy 105888 ... 15:40:00 4 25 Female Wendy 105888 ... 15:35:00 5 25 Female Wendy 105888 ... 15:39:00 6 26 Male Steve 114342 ... 15:43:00 7 26 Male Steve 114342 ... 15:25:00 8 26 Male Steve 114342 ... 15:15:00 </code></pre>
python|pandas|dataframe
1
1,901,628
57,747,413
is it possible to obtain 'groupby-transform-apply' style results with the function return series rather than scaler?
<p>I want to achieve the following behavior: </p> <pre class="lang-py prettyprint-override"><code>res = df.groupby(['dimension'], as_index=False)['metric'].transform(lambda x: foo(x)) </code></pre> <p>where foo(x) returns a series the same size as the input which is df['metric']<br> however, this will throw the following error:<br> ValueError: transform must return a scalar value for each group </p> <p>i know i can use a for loop style, but how can i achieve this in a groupby manner?</p> <p>e.g. </p> <pre><code>df: col1 col2 col3 0 A1 B1 1 1 A1 B1 2 2 A2 B2 3 </code></pre> <p>and i want to achieve: </p> <pre><code> col1 col2 col3 0 A1 B1 1 - (1+2)/2 1 A1 B1 2 - (1+2)/2 2 A2 B2 3 - 3 </code></pre>
<p>You can do this using <code>transform</code>:</p> <pre><code>df['col3']=(df.col3-df.groupby(['col1','col2'])['col3'].transform('sum'))/2 </code></pre> <p>Or using <code>apply</code>(slower):</p> <pre><code>df['col3']=df.groupby(['col1','col2'])['col3'].apply(lambda x: (x-x.sum())/2) </code></pre> <hr> <pre><code> col1 col2 col3 0 A1 B1 -1.0 1 A1 B1 -0.5 2 A2 B2 0.0 </code></pre>
pandas|pandas-groupby
0
1,901,629
55,292,082
How can i create an array with formed by a specified number of empty lists in python?
<p>Something that outputs an array of the form, in case i want 4 empty lists in the array:</p> <blockquote> <p>[[] [] [] [] []]</p> </blockquote>
<p>Try a list comprehension:</p> <pre><code>&gt;&gt;&gt; list_of_4_empty_lists = [[] for _ in range(4)] &gt;&gt;&gt; list_of_4_empty_lists [[], [], [], []] </code></pre> <p><em>Note: Your example output has 5 empty lists rather than 4.</em></p>
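One pitfall worth noting: the tempting shortcut `[[]] * 4` creates four references to the *same* inner list, so mutating one "slot" mutates all of them. The comprehension creates distinct lists:

```python
shared = [[]] * 4                      # four references to ONE inner list
shared[0].append("x")
print(shared)                          # [['x'], ['x'], ['x'], ['x']]

independent = [[] for _ in range(4)]   # four distinct inner lists
independent[0].append("x")
print(independent)                     # [['x'], [], [], []]
```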
python|arrays
0
1,901,630
57,569,220
Unable to import TokenObtainPairView,TokenRefreshView from JWT
<p>I want to use Json Web token authentication.</p> <p>but when I import, it gives me error of no reference of TokenObtainPairView, TokenRefreshView, found, however I installed jwt.</p> <p>urls.py:</p> <pre><code> from django.contrib import admin from django.urls import path from rest_framework_jwt.views import ( TokenObtainPairView, TokenRefreshView, ) from django.conf.urls import url,include urlpatterns = [ path('admin/', admin.site.urls), path('api/token/', TokenObtainPairView.as_view(), name='token_obtain_pair'), path('api/token/refresh/', TokenRefreshView.as_view(), name='token_refresh'), url(r'^auth/', include('authsystem.urls')) </code></pre> <p>Settings.py:</p> <pre><code> REST_FRAMEWORK = { 'DEFAULT_PERMISSION_CLASSES': ( 'rest_framework.permissions.IsAuthenticated', ), 'DEFAULT_AUTHENTICATION_CLASSES': ( 'rest_framework_simplejwt.authentication.JWTAuthentication', 'rest_framework.authentication.SessionAuthentication', ), </code></pre> <p>}</p> <p>when I do pip freeze I have the packages:</p> <pre><code> Django==2.2.4 django-cors-headers==3.1.0 djangorestframework==3.10.2 djangorestframework-jwt==1.11.0 djangorestframework-simplejwt==4.3.0 Pillow==6.1.0 PyJWT==1.7.1 pytz==2019.2 sqlparse==0.3.0 </code></pre> <p>I have tried to import from different way but still it giving me cannot find reference.</p>
<p>You imported it from the wrong framework, you need to import it from the <code>rest_framework_simplejwt.views</code> module, not the <s><code>rest_framework_jwt.views</code></s> module:</p> <pre><code>from <b>rest_framework_simplejwt</b>.views import ( TokenObtainPairView, TokenRefreshView, )</code></pre> <p>Is there a specific reason why you installed both <code>djangorestframework-jwt</code> and <code>djangorestframework-simplejwt</code>?</p>
python|django|django-rest-framework|jwt-auth
3
1,901,631
54,051,444
Can't get Tensorflow Version > 0.11 on Raspberry pi
<p>Setting: </p> <ul> <li>Raspberry Pi 3 (B+) running Raspbian Stretch with ARMv7 cpu</li> <li>BerryConda Python 3.6 environment</li> </ul> <p>On Raspberry pi, I can't seem to install a tensorflow version newer than <code>0.11</code> (at time of writing, <code>1.12</code> is the newest tensorflow version). If I <code>pip install tensorflow</code> (after upgrading pip of course) I get <code>0.11</code>: </p> <p><a href="https://i.stack.imgur.com/V1Bhu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/V1Bhu.png" alt="enter image description here"></a> </p> <p>If I try to force it to install a newer version, I get a <code>tensorflow-1.11.0-cp35-none-linux_armv7l.whl is not a supported wheel on this platform</code> error: </p> <p><a href="https://i.stack.imgur.com/FszDI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FszDI.png" alt="enter image description here"></a></p> <p>Looking at the <a href="https://www.piwheels.org/simple/tensorflow/" rel="nofollow noreferrer">list of wheels in piwheels for tensorflow</a> it seems that <code>0.11</code> is the only one that works for "any" architecture, so that must have something to do with it.</p>
<p>The userland for Raspbian <a href="https://raspberrypi.stackexchange.com/questions/61785/is-there-a-specific-download-for-64-bit-raspbian">is still 32-bit</a>. Given the limited amount of memory, it doesn't make much sense to use a 64-bit userland.</p> <p>I doubt that a wheel made for Python 3.5 would work with Python 3.6, but you might try <code>tensorflow-1.11.0-cp35-none-linux_armv6l.whl</code>.</p> <p>As a last resort, you could try building it yourself; you might want to use a USB hard disk or SSD for that purpose, though.</p> <p>A Raspberry Pi does seem somewhat underpowered for TensorFlow... Running it on a desktop PC (<em>especially</em> if you have a powerful GPU and a TensorFlow build that can take advantage of it) should be significantly faster.</p>
python|tensorflow|raspberry-pi|conda
0
1,901,632
58,432,977
Print parent name for each selection when selected multiple nodes in Tkinter tree
<p>I want user to select multiple nodes from different branches of Tkinter Tree. So that I can do further process I should know the parent branch of each selection. </p> <ul> <li>When I select just one node I am able to get the parent id by using code below.</li> <li>When I select multiple nodes(pressing the ctrl key),I just get parent node of first selection</li> </ul> <p>How can I get the parent node of all selections done simultaneously?</p> <p>Here is my working code:</p> <pre><code>import ttk import Tkinter as tk def select(): item_iid = tree.selection()[0] parent_iid = tree.parent(item_iid) node = tree.item(parent_iid)['text'] print node root = tk.Tk() tree = ttk.Treeview(root,show="tree")#, selectmode=EXTENDED) tree.config(columns=("col1")) #SUb treeview style = ttk.Style(root) style.configure("Treeview") tree.configure(style="Treeview") tree.insert("", "0", "item1", text="Branch1",) tree.insert("", "1", "item2", text="Branch2") #sub tree using item attribute to achieve that tree.insert("item1", "1", text="FRED") tree.insert("item1", "1", text="MAVIS") tree.insert("item1", "1", text="BRIGHT") tree.insert("item2", "2", text="SOME") tree.insert("item2", "2", text="NODES") tree.insert("item2", "2", text="HERE") tree.pack(fill=tk.BOTH, expand=True) tree.bind("&lt;Return&gt;", lambda e: select()) root.mainloop() </code></pre> <p>Current output:</p> <p><a href="https://i.stack.imgur.com/009vQ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/009vQ.jpg" alt="enter image description here"></a></p> <p>Able to display parent name when selecting one node only</p> <p>When done multiple selection parent of only the first one displayed, expecting parent name for each node selected.</p> <p>Branch1 displayed i.e only for the first selection:</p> <p><a href="https://i.stack.imgur.com/IzJYH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IzJYH.jpg" alt="enter image description here"></a></p>
<blockquote> <p>selection()</p> <p>Returns a <strong>tuple</strong> of selected items.</p> </blockquote> <p><a href="https://docs.python.org/3/library/tkinter.ttk.html#tkinter.ttk.Treeview.selection" rel="nofollow noreferrer">(source)</a> (emphasis mine)</p> <hr> <p><code>.selection()</code> returns a tuple of all items selected in the <code>Treeview</code>. On the first line of the function, you are explicitly only selecting the first item:</p> <pre><code>def select(): item_iid = tree.selection()[0] #&lt;---Right here you tell Python that you only want to use the first item from the tuple. parent_iid = tree.parent(item_iid) node = tree.item(parent_iid)['text'] print node </code></pre> <p>Making a simple change to the function to make it loop through all elements of the tuple will resolve this:</p> <pre><code>def select(): for i in tree.selection(): item_iid = i parent_iid = tree.parent(item_iid) node = tree.item(parent_iid)['text'] print(node) </code></pre>
python-2.7|tkinter|tree|treeview|tkinter-layout
1
1,901,633
65,302,140
How to execute pytest fixture on each parameter for test function?
<p>In my current setup of end-to-end tests I am using Selenium. I have a fixture that looks something like this:</p> <pre><code>@pytest.fixture(scope=&quot;session&quot;) def browser(request): # Here I do a basic setup # Setting up accounts from configuration based on input from test function # Initializing webdriver wrapper with this data # yield driver # teardown </code></pre> <p>So far I was only using parameters for a fixture and typical test function would look like this:</p> <pre><code>@pytest.mark.parametrize('browser', [(SomeEnum, AnotherEnum1), (SomeEnum, AnotherEnum2)], indirect=True) def some_test(browser): </code></pre> <p>This will result in two tests:</p> <ul> <li><code>some_test[broswer0]</code></li> <li><code>some_test[browser1]</code></li> </ul> <p>I am trying to combine parameters for a function and parameters for a fixture now, so test function looks like this:</p> <pre><code>@pytest.mark.parametrize('browser', [([SomeEnum1, SomeEnum2], AnotherEnum)], indirect=True) @pytest.mark.parametrize('param1,param2', [(DifferentEnum, False), (DifferentEnum2, True)]) def some_test(browser, param1, param2): </code></pre> <p>This setup will result in 2 tests, which I want:</p> <ul> <li>some_test[DifferentEnum-False-browser0]</li> <li>some_test[DifferentEnum2-True-browser0]</li> </ul> <p>If I run tests individually, everything is fine. But if I run them together, first one will finish and pass and it seems that second one doesn't go through the fixture at all, but browser session just stays open.</p> <p>What I need to change for fixture to be executed for each of the tests?</p>
<p>Narrow the scope of the <code>browser</code> fixture:</p> <pre><code>@pytest.fixture(scope=&quot;function&quot;) def browser(request): ... </code></pre> <p>or just drop it completely since <code>function</code> is the default scope.</p> <pre><code>@pytest.fixture def browser(request): ... </code></pre>
python|selenium|pytest
1
1,901,634
65,413,465
beg. python struggling to remove .0 at the end of an int
<p>so Im taking a course intro to python and one of the labs has me stuck.</p> <p>I have to take 2 inputs and divide them 3 times</p> <p>ex input is 2000 2</p> <p>ex output is 1000 500 250</p> <p>so the input 1 is divided by input 2 then that answer is divided by output 2 again and again then print</p> <p>the problem is that my output keeps putting a .0 at the end ex 1000.0 500.0 250.0</p> <p>and that is wrong when I submit it</p> <p>heres what i got</p> <pre><code> user_num1 = input() user_num2 = input() a = (int(user_num1) / int(user_num2)) b = (int(a) / int(user_num2)) c = (int(b) / int(user_num2)) print (a, b, c) </code></pre>
<p>Your problem is caused because dividing two integers with <code>/</code> always produces a float in Python 3. Convert the results back to <code>int</code>, or use floor division <code>//</code> instead. Note that <code>input()</code> returns strings, so the inputs still have to be converted first:</p> <pre><code># In your last line you can do print(int(a), int(b), int(c)) # Or convert the inputs once and use floor division user_num1 = int(input()) user_num2 = int(input()) a = user_num1 // user_num2 b = a // user_num2 c = b // user_num2 print(a, b, c) </code></pre>
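To see where the `.0` comes from, compare the two division operators directly. `/` is true division and always returns a float, even for evenly dividing integers; `//` is floor division and keeps ints as ints:

```python
print(2000 / 2)    # 1000.0  -> true division always returns a float
print(2000 // 2)   # 1000    -> floor division keeps ints as ints

# Chained division as in the exercise, with inputs 2000 and 2:
a = 2000 // 2
b = a // 2
c = b // 2
print(a, b, c)     # 1000 500 250
```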
python
0
1,901,635
28,735,141
Django Navigation Key Error
<p>I have a question about Django navigation bar. I got previous help on how to fix my <code>tags.py</code> but now I get an error that says:</p> <pre><code>Exception Type: KeyError Exception Value: 'request' Exception Location: /usr/lib/python2.7/site-packages/django/template/context.py in __getitem__, line 70 Python Executable: /usr/bin/python </code></pre> <p>I was told to add </p> <pre><code>TEMPLATE_CONTEXT_PROCESSORS = ( 'django.core.context_processors.request', 'django.contrib.auth.context_processors.auth' ) </code></pre> <p>In my settings.py file but the error still persists.</p> <pre><code>## file path -- Main project folder -- manage.py -- .. other files .. -- src -- urls.py -- settings.py -- .. other files .. -- templates -- base.html -- app folder -- templates -- app name -- index.html -- login.html -- register.html -- templatetags -- __init__.py -- tags.py ## urls.py from django.conf.urls import patterns, url from game import views urlpatterns = patterns('', url(r'^$', views.index, name='index'), url(r'^upload/$', views.upload_file, name='upload'), url(r'^successful_upload/$', views.successful_upload, name='successful_upload'), url(r'^play/$', views.play, name='play'), url(r'^registration/$', views.register, name='register'), url(r'^successful_registeration/$', views.successful_registeration, name='successful_registeration'), url(r'^login/$', views.login, name='login'), url(r'^logout/$', views.logout, name='logout') ) ## tags.py from django import template register = template.Library() @register.tag def active(parser, token): import re args = token.split_contents() template_tag = args[0] if len(args) &lt; 2: raise template.TemplateSyntaxError, "%r tag requires at least one argument" % template_tag return NavSelectedNode(args[1:]) class NavSelectedNode(template.Node): def __init__(self, patterns): self.patterns = patterns def render(self, context): path = context['request'].path for p in self.patterns: pValue = template.Variable(p).resolve(context) if 
path == pValue: return "-active" return "" ## base.html {% load tags %} {% url 'index' as home %} {% url 'upload' as upload %} {% url 'play' as play %} {% url 'register' as contact %} {% url 'login' as login %} &lt;div id="navigation"&gt; &lt;a class="{% active request home %}" href="{{ home }}"&gt;Home&lt;/a&gt; &lt;a class="{% active request upload %}" href="{{ upload }}"&gt;Upload&lt;/a&gt; &lt;a class="{% active request play %}" href="{{ play }}"&gt;Play&lt;/a&gt; &lt;a class="{% active request contact %}" href="{{ contact }}"&gt;Contact&lt;/a&gt; &lt;a class="{% active request login %}" href="{{ login }}"&gt;Login&lt;/a&gt; &lt;/div&gt; ## login.html {% extends "base.html" %} {% if error_message %}&lt;p&gt;&lt;strong&gt;{{ error_message }}&lt;/strong&gt;&lt;/p&gt;{% endif %} &lt;form action="/game/login/" method="post"&gt; {% csrf_token %} &lt;table border='0'&gt; &lt;div class="fieldWrapper"&gt;&lt;tr&gt;&lt;td&gt; {{ form.user_name.errors }}&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt; &lt;label for="id_user_name"&gt;User Name:&lt;/label&gt;&lt;/td&gt;&lt;td&gt; {{ form.user_name }}&lt;/td&gt;&lt;/tr&gt; &lt;/div&gt; &lt;div class="fieldWrapper"&gt;&lt;tr&gt;&lt;td&gt; {{ form.password.errors }}&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;tr&gt;&lt;td&gt; &lt;label for="id_password"&gt;Password:&lt;/label&gt;&lt;/td&gt;&lt;td&gt; {{ form.password }}&lt;/td&gt;&lt;/tr&gt; &lt;/div&gt; &lt;/table&gt; &lt;input type="submit" value="Login" /&gt; &lt;/form&gt; ## settings.py # Build paths inside the project like this: os.path.join(BASE_DIR, ...) import os BASE_DIR = os.path.dirname(os.path.dirname(__file__)) # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = 'xz6e)1=!%wma16h9$lt&amp;gl8(96^(@1t2n)&amp;lesteje_%x$+jn^' # SECURITY WARNING: don't run with debug turned on in production! 
DEBUG = True TEMPLATE_DEBUG = True ALLOWED_HOSTS = [] # Application definition INSTALLED_APPS = ( 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'game', ) MIDDLEWARE_CLASSES = ( 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ) ROOT_URLCONF = 'wam.urls' WSGI_APPLICATION = 'wam.wsgi.application' # Database # https://docs.djangoproject.com/en/1.7/ref/settings/#databases DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), } } # Internationalization # https://docs.djangoproject.com/en/1.7/topics/i18n/ LANGUAGE_CODE = 'en-us' TIME_ZONE = 'America/Chicago' USE_I18N = True USE_L10N = True USE_TZ = True # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/1.7/howto/static-files/ STATIC_URL = '/static/' TEMPLATE_DIRS = [os.path.join(BASE_DIR, 'templates')] from django.conf import global_settings TEMPLATE_CONTEXT_PROCESSORS = ( 'django.core.context_processors.request', 'django.contrib.auth.context_processors.auth' ) # views.py from django.shortcuts import render, render_to_response from django.http import HttpResponse, HttpResponseRedirect from django.core.context_processors import csrf from django.http import Http404 from forms import UploadFileForm, UserRegisterForm, UserLoginForm from game.models import UserLogin def handle_uploaded_file(f, n): with open('ais/' + n + '.py', 'wb+') as destination: for chunk in f.chunks(): destination.write(chunk) # Create your views here. def index(request): return HttpResponse("Hello, world. 
You're at the game index.") def upload_file(request): if request.method == 'POST': form = UploadFileForm(request.POST, request.FILES) if form.is_valid(): post = request.POST files = request.FILES handle_uploaded_file(files['player1_ai_code'], post['player1_ai_title']) handle_uploaded_file(files['player2_ai_code'], post['player2_ai_title']) return HttpResponseRedirect('/game/successful_upload') else: form = UploadFileForm() c = {'form': form} c.update(csrf(request)) return render_to_response('game/upload.html', c) def successful_upload(request): return HttpResponse("The two ai's were successfully upload.") def play(request): import sys sys.path.insert(0, 'ais') sys.path.insert(0, '../tictactoe/') import tictactoe from html_change import * s = tictactoe.play_game(ai=['ai1', 'randai']) return HttpResponse(change(s)) def register(request): if request.method == 'POST': form = UserRegisterForm(request.POST) if form.is_valid(): post = request.POST # check if the user exists in the database check_user_exists = UserLogin.objects.filter(user_name=post['user_name']) if check_user_exists: c = {'form': form, 'error_message': "This user name already exists."} c.update(csrf(request)) return render_to_response('game/register.html', c) # check size of user name if len(post['user_name']) &lt; 5: c = {'form': form, 'error_message': "Your username must be longer than 5 characters."} c.update(csrf(request)) return render_to_response('game/register.html', c) # check size of password if len(post['password']) &lt; 5: c = {'form': form, 'error_message': "Your password must be longer than 5 characters."} c.update(csrf(request)) return render_to_response('game/register.html', c) # check if passwords match -- for the form if post['password'] != post['re_password']: c = {'form': form, 'error_message': "Your passwords do not match"} c.update(csrf(request)) return render_to_response('game/register.html', c) # registeration successful user = UserLogin(user_name=post['user_name'], 
password=post['password']) user.save() return HttpResponseRedirect('/game/successful_registeration') else: form = UserRegisterForm() c = {'form': form} c.update(csrf(request)) return render_to_response('game/register.html', c) def successful_registeration(request): return HttpResponse("Your registration was successful") def login(request): if request.method == 'POST': form = UserLoginForm(request.POST) if form.is_valid(): m = UserLogin.objects.get(user_name=request.POST['user_name']) if m.password == request.POST['password']: request.session['member_id'] = m.id return HttpResponseRedirect('/game') else: c = {'form': form, 'error_message': "Your username and password didn't match."} c.update(csrf(request)) return render_to_response('game/login.html', c) else: form = UserLoginForm() c = {'form': form} c.update(csrf(request)) return render_to_response('game/login.html', c) def logout(request): try: del request.session['member_id'] except KeyError: pass return HttpResponseRedirect("/game") </code></pre>
<p>You should import <code>TEMPLATE_CONTEXT_PROCESSORS</code> from <code>django.conf.global_settings</code> in your <em>settings.py</em> file instead of defining a new one:</p> <p><strong>settings.py</strong></p> <pre><code>from django.conf.global_settings import TEMPLATE_CONTEXT_PROCESSORS ... TEMPLATE_CONTEXT_PROCESSORS += ('django.core.context_processors.request',) ... </code></pre> <p>Additionally, context processors are only applied to RequestContext instances so you should check your view and make sure you render your template with a <code>RequestContext</code> instance:</p> <pre><code>from django.template import RequestContext return render_to_response('base.html', {}, context_instance=RequestContext(request)) </code></pre> <p>or even better, use the <code>render</code> shortcut function:</p> <pre><code>from django.shortcuts import render return render(request, 'base.html') </code></pre> <p>See the <a href="https://docs.djangoproject.com/en/1.7/ref/templates/api/#subclassing-context-requestcontext" rel="nofollow">documentation</a> for more information on context variables in templates.</p>
python|django|navigation
0
1,901,636
28,429,229
ImportError: cannot import name is_python_keyword
<p>I am trying to execute a python script , but I get an error on line</p> <pre><code>from jinja2.utils import Markup, concat, escape, is_python_keyword, next </code></pre> <p>ImportError: cannot import name is_python_keyword</p> <p>I checked there is no file named is_python.py</p>
<p>Looking at the <a href="http://svn.python.org/projects/external/Jinja-2.3.1/jinja2/utils.py" rel="nofollow">source code</a> for 2.3.1 they have a line:</p> <pre><code>from keyword import iskeyword as is_python_keyword </code></pre> <p>They are using the builtin <code>keyword</code> module.</p> <p>The current version is 2.7.3 so it seems they have changed the code and it is no longer available.</p> <p>You could use the above import from the builtin module instead.</p>
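<p>As a quick sanity check, the stdlib import behaves the same way the old Jinja2 alias did; it simply reports whether a string is a reserved Python keyword:</p>

```python
# The alias Jinja2 used internally, taken directly from the stdlib
from keyword import iskeyword as is_python_keyword

assert is_python_keyword("for")        # reserved words are keywords
assert not is_python_keyword("jinja")  # ordinary identifiers are not
```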
python|jinja2
2
1,901,637
14,389,892
IPython Notebook: Plotting with LaTeX?
<p>Displaying lines of LaTeX in IPython Notebook has been answered previously, but how do you, for example, label the axis of a plot with a LaTeX string when plotting in IPython Notebook?</p>
<p>It works the same in IPython as it does in a stand-alone script. This example comes from <a href="http://matplotlib.sourceforge.net/users/usetex.html" rel="noreferrer">the docs</a>:</p> <pre><code>import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('text', usetex = True) mpl.rc('font', family = 'serif') plt.figure(1, figsize = (6, 4)) ax = plt.axes([0.1, 0.1, 0.8, 0.7]) t = np.arange(0.0, 1.0+0.01, 0.01) s = np.cos(2*2*np.pi*t)+2 plt.plot(t, s) plt.xlabel(r'\textbf{time (s)}') plt.ylabel(r'\textit{voltage (mV)}', fontsize = 16) plt.title(r"\TeX\ is Number $\displaystyle\sum_{n=1}^\infty\frac{-e^{i\pi}}{2^n}$!", fontsize = 16, color = 'r') plt.grid(True) plt.savefig('tex_demo') plt.show() </code></pre> <p><img src="https://i.stack.imgur.com/ouXHK.png" alt="enter image description here"></p>
python|matplotlib|latex|ipython|ipython-notebook
9
1,901,638
68,741,610
invalid command 'npm run build' when running it from setup.py
<p>I am trying to build the Javascript end of my app using <code>npm run build</code> via the <code>setup.py</code> config file. I am using the <code>build</code> class from <code>distutils</code> as suggested elsewhere, but I am getting an error when I run <code>pip install .</code></p> <pre><code>from setuptools import setup from distutils.command.build import build import json import os class javascript_build(build): def run(self): self.run_command(&quot;npm run build&quot;) build.run(self) if __name__ == &quot;__main__&quot;: setup( cmdclass={'build': javascript_build}, ) </code></pre> <p>Does anyone know why is this happening?</p> <pre><code> running npm run build error: invalid command 'npm run build' ---------------------------------------- ERROR: Failed building wheel for chemiscope </code></pre> <p>EDIT 1: So it seems that instead of running <code>npm run build</code>, it is running <code>python setup.py npm run build</code>. So my question changes a little to how do I exactly force <code>distutils</code> to run <code>npm run build</code>?</p>
<p><code>self.run_command(&quot;xxx&quot;)</code> doesn't run a program; it calls another <code>distutils</code>/<code>setuptools</code> subcommand, something like calling <code>python setup.py xxx</code> but from the same process, not via the command line. So you can do <code>self.run_command(&quot;sdist&quot;)</code> but not <code>self.run_command(&quot;npm&quot;)</code>.</p> <p>In your case you need <code>os.system(&quot;npm run build&quot;)</code> or <code>subprocess.call([&quot;npm&quot;, &quot;run&quot;, &quot;build&quot;])</code>.</p>
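<p>A minimal sketch of the external-process call, using the Python interpreter itself as a stand-in for <code>npm</code> (npm may not be installed on the machine running this):</p>

```python
import subprocess
import sys

# Stand-in for ["npm", "run", "build"]: pass the command as a list of
# arguments so no shell parsing is involved
cmd = [sys.executable, "-c", "print('building...')"]
returncode = subprocess.call(cmd)
```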
python|setuptools|setup.py|distutils
2
1,901,639
41,528,970
Pandas previous month begin
<p>Given a dataframe:</p> <pre><code>df = pd.DataFrame({'c':[0,1,1,2,2,2],'date':pd.to_datetime(['2016-01-01','2016-02-01','2016-03-01','2016-04-01','2016-05-01','2016-06-05'])}) </code></pre> <p>How to get the previous month begin for each date? The below doesn't work for 6/5 and there is some extra time portion. </p> <pre><code>pd.to_datetime(df['date'], format="%Y%m") + pd.Timedelta(-1,unit='M') + MonthBegin(0) </code></pre> <p><strong>EDIT</strong></p> <p>I have a workaround (2 steps back and 1 step forward):</p> <pre><code>(df['date']+ pd.Timedelta(-2,unit='M')+ MonthBegin(1)).dt.date </code></pre> <p>Don't like this. There should be something better.</p>
<p>You can first subtract <code>MonthEnd</code> to get to the end of the previous month, then <code>MonthBegin</code> to get to the beginning of the previous month:</p> <pre><code>df['date'] - pd.offsets.MonthEnd() - pd.offsets.MonthBegin() </code></pre> <p>The resulting output:</p> <pre><code>0 2015-12-01 1 2016-01-01 2 2016-02-01 3 2016-03-01 4 2016-04-01 5 2016-05-01 </code></pre>
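<p>With the dates from the question, the chained offsets behave like this (assuming <code>pandas</code> is importable):</p>

```python
import pandas as pd

dates = pd.Series(pd.to_datetime(['2016-01-01', '2016-06-05']))
# Step back to the previous month end, then back to that month's beginning
prev_month_begin = dates - pd.offsets.MonthEnd() - pd.offsets.MonthBegin()
```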
datetime|pandas
11
1,901,640
41,370,018
Adding stock price data frames to a list so you have a list of stock prices histories
<p>I'm trying to get the historical stock price data for all these tickers going back to 2014. All of these companies went public in 2014, so it will automatically get them from the day they first traded.</p> <p>What I would like is for the <code>stocklist</code> list to contain at the end is a list of dataframes/price histories for each company, but separately and not put together.</p> <p>So stocklist would be data frames/stock histories for each company, i.e. <code>['LC', 'ZAYO']</code> etc. </p> <pre><code>tickers = ['LC', 'ZAYO', 'GPRO', 'ANET', 'GRUB', 'CSLT', 'ONDK', 'QUOT', 'NEWR', 'ATEN'] stocklist = [] for i in tickers: stock = Share(i) adj = stock.get_historical('2014-1-1', '2016-12-27') df = pd.DataFrame(adj) df = df.set_index('Date') df['Adj_Close'] = df['Adj_Close'].astype(float, errors='coerce') price = df.sort() i = price stocklist.append(i) </code></pre>
<p>You're not appending to <code>stocklist</code> inside the loop due to bad indentation.</p> <p>Also, you're messing with the loop variable <code>i</code> needlessly.</p> <p>This might work, although it's difficult to test since the <code>Share</code> class is not available:</p> <pre><code>tickers = ['LC', 'ZAYO', 'GPRO', 'ANET', 'GRUB', 'CSLT', 'ONDK', 'QUOT', 'NEWR', 'ATEN'] stocklist = [] for ticker in tickers: stock = Share(ticker) adj = stock.get_historical('2014-1-1', '2016-12-27') df = pd.DataFrame(adj) df.set_index('Date', inplace=True) df['Adj_Close'] = pd.to_numeric(df['Adj_Close'], errors='coerce') df.sort_index(inplace=True) stocklist.append(df) </code></pre> <p>Changes I made:</p> <ul> <li>use <code>tickers</code> as a variable name instead of <code>list</code> which is the name of a built-in type</li> <li>set index and sort the dataframe in-place instead of making copies</li> <li>use <code>DataFrame.sort_index()</code> for sorting since <code>DataFrame.sort()</code> is deprecated</li> <li>use <code>pd.to_numeric()</code> for the numeric conversion, since <code>astype()</code> does not accept <code>errors='coerce'</code></li> <li>fixed indentation so <code>stocklist</code> is populated inside the loop</li> <li>removed the unnecessary assignment before <code>stocklist</code> appending</li> </ul> <p>It might also be more useful to collect the dataframes in a dictionary keyed by tickers. So you would initialize <code>stocklist = {}</code> and instead of appending do <code>stocklist[ticker] = df</code>.</p>
python|pandas
1
1,901,641
41,665,778
Change text from a FTP server to string (Python)
<p>I would like to open <a href="ftp://ftp.nasdaqtrader.com/SymbolDirectory/otherlisted.txt" rel="nofollow noreferrer">this site</a> with Python and convert it to a string. I want the text to stay as it is because I am going to extract the first word of each line later. Here's what I tried:</p> <pre><code>from ftplib import FTP ftp = FTP('ftp.nasdaqtrader.com') ftp.login() a=ftp.retrbinary('NLST /SymbolDirectory/nasdaqlisted.txt', str) print(a) </code></pre> <p>After this I get the following message</p> <pre><code>226 Transfer complete. </code></pre> <p>I would like to get the contents of the text file, not this. How do I fix it?</p>
<p>First: you have to use <code>RETR</code> instead of <code>NLST</code>.</p> <p>Second: the retrieved data is sent to the callback function you pass to <code>retrbinary</code> as the second argument.</p> <p>Third: you may have to convert <code>bytes</code> to <code>string</code> using <code>decode()</code> (or <code>decode(&quot;UTF-8&quot;)</code> or <code>decode(&quot;some_encoding_name&quot;)</code>)</p> <pre><code>from ftplib import FTP def my_function(data): print(data.decode()) ftp = FTP('ftp.nasdaqtrader.com') ftp.login() status = ftp.retrbinary('RETR /SymbolDirectory/nasdaqlisted.txt', my_function) print(status) </code></pre> <p>Doc: <a href="https://docs.python.org/3.5/library/ftplib.html" rel="nofollow noreferrer">ftplib</a>, <a href="https://docs.python.org/3.5/library/codecs.html#standard-encodings" rel="nofollow noreferrer">standard-encodings</a></p>
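<p>If you want the file contents as a single string (e.g. to extract the first word of each line later), accumulate the chunks in the callback and decode once at the end. A sketch of that pattern with simulated chunks, so no network access is needed:</p>

```python
chunks = []

def collect(data):
    # ftp.retrbinary('RETR ...', collect) would call this once per chunk of bytes
    chunks.append(data)

# Simulated chunks standing in for real FTP data
for part in (b"AAIT|Africa Israel Inv", b"estments Ltd.\n"):
    collect(part)

text = b"".join(chunks).decode("utf-8")
first_words = [line.split("|")[0] for line in text.splitlines()]
```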
python|ftp
4
1,901,642
41,393,691
Change page limit in test case
<p>Question in short: how do I override the <code>PAGE_SIZE</code> setting of <code>REST_FRAMEWORK</code> in a test case in Django?</p> <p>Details about question: I have the following settings in my project's <code>base.py</code>:</p> <pre><code>REST_FRAMEWORK = { 'DEFAULT_PERMISSION_CLASSES': ( 'rest_framework.permissions.IsAuthenticated', ), 'DEFAULT_RENDERER_CLASSES': ( 'rest_framework.renderers.JSONRenderer', ), 'DEFAULT_AUTHENTICATION_CLASSES': ( 'Masterdata.authentication.FashionExchangeAuthentication', ), 'DEFAULT_FILTER_BACKENDS': ( 'rest_framework.filters.DjangoFilterBackend', ), 'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.LimitOffsetPagination', 'PAGE_SIZE': 100 } </code></pre> <p>Now I want to create a test case, in which I change the page size to 10. However, I cannot override this particular setting while keeping the rest of the dictionary intact. Does anyone know how to do so?</p> <p>This is what I tried:</p> <p>1) Adding the modify_settings decorator above the test method:</p> <pre><code>@modify_settings(REST_FRAMEWORK={ 'remove': 'PAGE_SIZE', 'append': {'PAGE_SIZE': 10} }) </code></pre> <p>This does not change the setting.</p> <p>2) Using override_settings as context manager:</p> <pre><code>test_settings = settings.REST_FRAMEWORK test_settings['PAGE_SIZE'] = 10 with override_settings(REST_FRAMEWORK = test_settings): # do stuff </code></pre> <p>However, the line with <code>test_settings['PAGE_SIZE']=10</code> fails because apparently the variable <code>settings.REST_FRAMEWORK</code> is a list instead of a dictionary.</p> <pre><code>print(settings.REST_FRAMEWORK) ['DEFAULT_PERMISSION_CLASSES', 'DEFAULT_AUTHENTICATION_CLASSES', 'DEFAULT_FILTER_BACKENDS', 'DEFAULT_PAGINATION_CLASS', 'DEFAULT_RENDERER_CLASSES'] </code></pre> <p>How come this setting is a list here? I have verified that the variable is not overwritten anywhere else in the project.</p>
<p>Beneath your REST_FRAMEWORK settings, you can check whether you are running the test suite. This should work:</p> <pre><code>import sys if 'test' in sys.argv: REST_FRAMEWORK['PAGE_SIZE'] = 10 </code></pre> <p>Otherwise you can create a separate settings file for test cases:</p> <pre><code># settings_tests.py from settings import * # Override the values you need to change. REST_FRAMEWORK['PAGE_SIZE'] = 10 </code></pre>
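<p>Note also that your second attempt mutated the original settings: <code>test_settings = settings.REST_FRAMEWORK</code> binds a second name to the same dict, it does not copy it. If you go the override route, copy first. A sketch with a plain dict standing in for the real setting:</p>

```python
# Stand-in for settings.REST_FRAMEWORK
REST_FRAMEWORK = {'DEFAULT_PAGINATION_CLASS': 'LimitOffsetPagination',
                  'PAGE_SIZE': 100}

# dict() makes a shallow copy, so the module-level dict stays intact
test_settings = dict(REST_FRAMEWORK)
test_settings['PAGE_SIZE'] = 10
```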
python|django|pagination|django-rest-framework|django-testing
-1
1,901,643
25,866,733
Python - hardware handling, how to efficiently implement hardware registers as a property in an object
<p>I'm writing a code, which handles some hardware accessible via certain communication interface. I'd like to define certain sets of registers as properties of the object describing the whole device. Lets assume, that I have functions <code>hw_read(address, size)</code> and <code>hw_write(address, size, array_of_values)</code>, which allow to access the hardware.</p> <p>Let's assume, that I have a block of registers with length 2, starting from address 10, and want to assign it to a property "reg1". I can do it in a following way:</p> <pre><code>class my_hardware(object): def __init__(self): # Perform initialization return @property def reg1(self): return hw_read(10,2) @reg1.setter def reg1(self,value): hw_write(10,2,value) </code></pre> <p>The above implementation is not very convenient, as I have to provide address and size twice. It is error prone, especially if I have to define bigger number of registers. Is it possible to implement it in such a way, that I could easily define set a few registers like below:</p> <pre><code>dev1=my_hardware() dev1.define_register("reg1",10,2) dev1.define_register("reg2",12,6) </code></pre> <p>And then access them via:</p> <pre><code>dev1.reg1=(0x1234,0x3211) print dev1.reg2 </code></pre> <p>The second approach has also significant advantage, that list of registers (their names, addresses and sizes) may be read from an external text file (e.g. 
used also for VHDL synthesis).</p> <p><strong>Update - possible solution?</strong></p> <p>After studying soe posts related to dynamical adding of properties, I have achieved the required functionality with the code shown below:</p> <pre><code>def hw_write (first,size,val): print "write:"+str(first)+", "+str(size)+" val="+str(val) def hw_read (first,size): print "read:"+str(first)+", "+str(size) return (0x1234,)*size class my_hardware(object): def __init__(self): return def add_reg(self,name,first,size): setattr(my_hardware,name,property(lambda self : hw_read(first,size), lambda self, x: hw_write(first,size,x))) </code></pre> <p>Below are sample results with dummy hw_read and hw_write functions:</p> <pre><code>&gt;&gt;&gt; a=my_hardware() &gt;&gt;&gt; a.add_reg("reg1",10,2) &gt;&gt;&gt; a.add_reg("reg2",20,3) &gt;&gt;&gt; a.reg1 read:10, 2 (4660, 4660) &gt;&gt;&gt; a.reg2 read:20, 3 (4660, 4660, 4660) &gt;&gt;&gt; a.reg1=(10, 11) write:10, 2 val=(10, 11) &gt;&gt;&gt; a.reg2=(10, 11, 12) write:20, 3 val=(10, 11, 12) </code></pre> <p>I'll appreciate any suggestions whether the above solution is reasonable.</p> <p>One problem (or feature?) which I can see, is that registers are defined "per class", not "per instance". 
It may be helpful, if we have a few devices with the same register sets, but may be misleading, if we want to use "my_hardware" class to access different devices with different register sets connected to the same bus.</p> <p><strong>update 2</strong></p> <p>I have found a solution, which allows to define derived classes describing particular devices, and define registers per class:</p> <pre><code>def hw_write (first,size,val): print "write:"+str(first)+", "+str(size)+"val="+str(val) def hw_read (first,size): print "read:"+str(first)+", "+str(size) return (0x1234,)*size class my_hardware(object): def __init__(self, base_address): self.ba = base_address return @classmethod def add_reg(myclass,name,first,size): setattr(myclass,name,property(lambda self : hw_read(self.ba+first,size), lambda self, x: hw_write(self.ba+first,size,x))) </code></pre> <p>Below is a sample session demonstrating correct operation:</p> <pre><code>&gt;&gt;&gt; class dev1(my_hardware): ... pass ... &gt;&gt;&gt; class dev2(my_hardware): ... pass ... &gt;&gt;&gt; dev1.add_reg("reg1",10,2) &gt;&gt;&gt; dev2.add_reg("reg2",15,3) &gt;&gt;&gt; a=dev1(100) &gt;&gt;&gt; b=dev2(200) &gt;&gt;&gt; a.reg1 read:110, 2 (4660, 4660) &gt;&gt;&gt; a.reg2 Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; AttributeError: 'dev1' object has no attribute 'reg2' &gt;&gt;&gt; b.reg1 Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; AttributeError: 'dev2' object has no attribute 'reg1' &gt;&gt;&gt; b.reg2 read:215, 3 (4660, 4660, 4660) </code></pre> <p>As you can see, register reg1 is defined in devices of class "dev1", while register reg2 is defined in devices of class dev2. Of course in this case we also need a base address to be passed to each device, and therefore I had to add "base_address" to the constructor.</p> <p>Thanks, Wojtek</p>
<p><em>Update (Slight changes to match functionality of your second solution):</em> You could implement a small Domain-Specific Language (<a href="http://en.wikipedia.org/wiki/Domain-specific_language" rel="nofollow">http://en.wikipedia.org/wiki/Domain-specific_language</a>) using the Descriptor Protocol (<a href="https://docs.python.org/3.4/howto/descriptor.html" rel="nofollow">https://docs.python.org/3.4/howto/descriptor.html</a>). As an example, given a descriptor:</p> <pre><code>class RegisterDescriptor(object): """ A descriptor that models reading and writing to hardware registers. """ def __init__(self, offset, size): self.offset = offset self.size = size def __get__(self, obj, type=None): if obj is None: return self return hw_read(obj.base_address + self.offset, self.size) def __set__(self, obj, value): if obj is None: raise AttributeError("Cannot set attribute") hw_write(obj.base_address + self.offset, self.size, value) </code></pre> <p>With this descriptor, you can now write a class and denote your registers semantically. Again, for example:</p> <pre><code>class AddressableHardware(object): """ Base class for addressable hardware components. Attributes: base_address -- the base address of the device. """ def __init__(self, base_address): self.base_address = base_address class MyHardware(AddressableHardware): """ An example hardware device. """ reg1 = RegisterDescriptor(2, 4) reg2 = RegisterDescriptor(6, 1) def __init__(self, base_address): AddressableHardware.__init__(self, base_address) mh = MyHardware(0x2E) print(mh.reg1) mh.reg2 = b'\x00' </code></pre> <p><em>Update</em>: This approach is different from your second solution as I've taken a more declarative approach, where each type of device would have its own class and associated methods (maybe some helper methods to hide low-level access to the device) and deal in more Pythonic datatypes. 
Using the descriptor solution above, you would end up with something like:</p> <pre><code>class HardwareDeviceA(AddressableHardware): reg1 = RegisterDescriptor(10, 2) reg2 = RegisterDescriptor(20, 3) def __init__(self, base_address): AddressableHardware.__init__(self, base_address) class HardwareDeviceB(AddressableHardware): reg1 = RegisterDescriptor(10, 4) def __init__(self, base_address): AddressableHardware.__init__(self, base_address) </code></pre> <p>This allows you to have multiple instances of device type A and B connected to the machine at different base addresses, and you don't need to setup their registers each time.</p>
python|properties|hardware
1
1,901,644
61,616,142
Pandas pivot with interchanging column values from duplicate row values
<p>I have data with duplicate parts that looks like this:</p> <pre><code>Part | Location | ONHand A | XY | 5 A | XW | 4 B | XC | 6 B | XV | 8 C | XQ | 9 </code></pre> <p>And I'm trying to convert it all into one row per part, listing all the locations and quantities on hand in each location.</p> <p>I tried using this code</p> <pre><code>df_f = df.assign(cc=df.groupby('Part').cumcount()+1).set_index(['Part', 'cc']).unstack() df_f.columns = [f'{col[0]}{col[1]}' for col in df_f.columns] df_f.to_csv('parts_multi_location.csv') </code></pre> <p>But the problem is it returns Location 1, 2, 3 and then ONHand 1, 2, 3 and so forth.</p> <p>I need the end result to return Location 1, Onhand 1, Location 2, Onhand 2, so the headers should look like this:</p> <pre><code>Part | Location_1 | Onhand_1 | Location 2| Onhand 2 A | XY | 5 | XW | 4 B | XC | 6 | XV | 8 C | XQ | 9 </code></pre>
<p>You did most of the job. The only thing missing is <code>sort_index</code>:</p> <pre><code>df_f = df.assign(cc=df.groupby('Part').cumcount()+1).set_index(['Part', 'cc']).unstack() # this is what you are missing df_f = df_f.sort_index(level=(1,0), axis=1) df_f.columns = [f'{col[0]}{col[1]}' for col in df_f.columns] </code></pre> <p>Output:</p> <pre><code> Location1 ONHand1 Location2 ONHand2 Part A XY 5.0 XW 4.0 B XC 6.0 XV 8.0 C XQ 9.0 NaN NaN </code></pre>
python|python-3.x|excel|pandas
2
1,901,645
24,213,784
Find if any string element in list is contained in list of strings
<p>I need to create a script to accept/reject some text based on whether a list of strings is present in it.</p> <p>I have a list of keywords that should be used as a rejection mechanism:</p> <pre><code>k_out = ['word1', 'word2', 'some larger text'] </code></pre> <p>If any of those string elements is found in the list I present below, the list should be marked as <em>rejected</em>. This is the list that should be checked:</p> <pre><code>c_lst = ['This is some text that contains no rejected word', 'This is some larger text which means this list should be rejected'] </code></pre> <p>This is what I've got:</p> <pre><code>flag_r = False for text in k_out: for lst in c_lst: if text in lst: flag_r = True </code></pre> <p>is there a more <em>pythonic</em> way of going about this?</p>
<p>You can use <a href="https://docs.python.org/3/library/functions.html#any"><code>any</code></a> and a <a href="https://docs.python.org/3/reference/expressions.html#grammar-token-generator_expression">generator expression</a>:</p> <pre><code>&gt;&gt;&gt; k_out = ['word1', 'word2', 'some larger text'] &gt;&gt;&gt; c_lst = ['This is some text that contains no rejected word', 'This is some larger text which means this list should be rejected'] &gt;&gt;&gt; any(keyword in string for string in c_lst for keyword in k_out) True &gt;&gt;&gt; </code></pre>
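<p>With a list that contains no keyword, the same expression returns <code>False</code>, so both acceptance and rejection fall out of one line:</p>

```python
k_out = ['word1', 'word2', 'some larger text']
c_lst = ['This is some text that contains no rejected word',
         'This is some larger text which means this list should be rejected']

# True: the second string contains 'some larger text'
rejected = any(keyword in string for string in c_lst for keyword in k_out)

# False: no keyword occurs anywhere
clean = any(keyword in string
            for string in ['nothing suspicious here']
            for keyword in k_out)
```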
python|list
6
1,901,646
46,499,327
Django encoding error when reading from a CSV
<p>When I try to run:</p> <pre><code>import csv with open('data.csv', 'rU') as csvfile: reader = csv.DictReader(csvfile) for row in reader: pgd = Player.objects.get_or_create( player_name=row['Player'], team=row['Team'], position=row['Position'] ) </code></pre> <p>Most of my data gets created in the database, except for one particular row. When my script reaches the row, I receive the error: </p> <pre><code>ProgrammingError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings.` </code></pre> <p>The particular row in the CSV that causes this error is:</p> <pre><code>&gt;&gt;&gt; row {'FR\xed\x8aD\xed\x8aRIC.ST-DENIS', 'BOS', 'G'} </code></pre> <p>I've looked at the other similar Stackoverflow threads with the same or similar issues, but most aren't specific to using Sqlite with Django. Any advice? </p> <p>If it matters, I'm running the script by going into the Django shell by calling <code>python manage.py shell</code>, and copy-pasting it in, as opposed to just calling the script from the command line.</p> <p>This is the stacktrace I get:</p> <pre><code>Traceback (most recent call last): File "&lt;console&gt;", line 4, in &lt;module&gt; File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/csv.py", line 108, in next row = self.reader.next() File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/codecs.py", line 302, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf8' codec can't decode byte 0xcc in position 1674: invalid continuation byte </code></pre> <p><strong>EDIT:</strong> I decided to just manually import this entry into my database, rather than try to read it from my CSV, based on Alastair McCormack's feedback </p> <blockquote> <p>Based on the output from your question, it looks like the person who 
made the CSV mojibaked it - it doesn't seem to represent FRÉDÉRIC.ST-DENIS. You can try using windows-1252 instead of utf-8 but I think you'll end up with FRíŠDíŠRIC.ST-DENIS in your database.</p> </blockquote>
<p>I suspect you're using Python 2 - <code>open()</code> returns <code>str</code> objects which are simply byte strings.</p> <p>The error is telling you that you need to <strong>decode</strong> your text to a Unicode string before use.</p> <p>The simplest method is to decode each cell:</p> <pre><code>with open('data.csv', 'r') as csvfile: # 'U' means Universal line mode and is not necessary reader = csv.DictReader(csvfile) for row in reader: pgd = Player.objects.get_or_create( player_name=row['Player'].decode('utf-8'), team=row['Team'].decode('utf-8'), position=row['Position'].decode('utf-8') ) </code></pre> <p>That'll work but it's ugly to add decodes everywhere and it won't work in Python 3. Python 3 improves things by opening files in text mode and returning Python 3 strings which are the equivalent of Unicode strings in Py2.</p> <p>To get the same functionality in Python 2, use the <code>io</code> module. This gives you an <code>open()</code> function which has an <code>encoding</code> option. Annoyingly, the Python 2.x CSV module is broken with Unicode, so you need to install a backported version:</p> <pre><code>pip install backports.csv </code></pre> <p>To tidy your code and future proof it, do:</p> <pre><code>import io from backports import csv with io.open('data.csv', 'r', encoding='utf-8') as csvfile: reader = csv.DictReader(csvfile) for row in reader: # now every row is automatically decoded from UTF-8 pgd = Player.objects.get_or_create( player_name=row['Player'], team=row['Team'], position=row['Position'] ) </code></pre>
django|sqlite|encoding|python-unicode
1
1,901,647
61,022,308
Converting Double For Loop to Numpy Linear Algebra for Dual Form SVM
<p>I'm trying to create a Dual Form SVM and it's running very slow but correctly right now. I currently have this for the objective function (which is the bottleneck)...</p> <pre><code>ij = 0 for i in range(len(x)): for j in range(len(x)): ij += y[i]*y[j]*a[i]*a[j]*np.dot(x[i].T, x[j]) ij /= 2 </code></pre> <p>This runs very slow. I somehow need to convert this to linear algebra to speed it up using NumPy but I tend to struggle with that.</p> <p>FYI: a, y, and x are all the same length. a and y contain all floats. x is a two dimensional vector of floats.</p>
<p>I think that there is a better/cleaner way, but here goes my best attempt. </p> <pre><code>def np_way(): # Compute 1st part: y[i]*y[j]*a[i]*a[j] ay = a*y ya = np.outer(ay, ay) # print(ya) # Compute 2nd part: np.dot(x[i].T, x[j]) _dot = np.outer(x, x) dot = _dot[::2, ::2] + _dot[1::2, 1::2] # print(dot) return (ya * dot).sum()/2 </code></pre> <p>You can uncomment the <code>print</code> calls to debug it. Note that the <code>[::2]</code>/<code>[1::2]</code> slicing assumes each <code>x[i]</code> has exactly two components.</p> <p>I've put your code in an <code>original_way()</code> function and compared it with <code>np_way()</code> so I could <code>timeit</code>:</p> <pre><code>%timeit original_way() %timeit np_way() 1 loop, best of 3: 708 ms per loop 100 loops, best of 3: 3.21 ms per loop </code></pre> <p>The results were with a length of <code>500</code>, being <code>np_way()</code> around 220 times faster than <code>original_way()</code>.</p>
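<p>An equivalent formulation that avoids the interleaved-outer trick, and works for row vectors of any length, is to build the Gram matrix with one matrix product. Below it is checked against the double loop on random data (assuming <code>x</code> is a 2-D array of shape <code>(n, d)</code>):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
x = rng.standard_normal((n, d))
a = rng.standard_normal(n)
y = rng.standard_normal(n)

# Vectorized: Gram matrix of the rows, weighted by outer(a*y, a*y)
ay = a * y
fast = 0.5 * (np.outer(ay, ay) * (x @ x.T)).sum()

# Reference: the original double loop
slow = 0.5 * sum(y[i] * y[j] * a[i] * a[j] * np.dot(x[i], x[j])
                 for i in range(n) for j in range(n))
```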
python|numpy|svm|linear-algebra
1
1,901,648
49,367,586
How do I get row from Excel and then put it into a Pandas dataframe as a column?
<p>I plan to do this using only Pandas, however this is my first time using Pandas. I know that Pandas has a read_excel function.</p> <p>My row in excel is the 4th row and has dates but I need these dates in a dataframe on Python in a column.</p> <p>Any help will be appreciated.</p> <pre><code>import pandas as pd fp = "G:\\Data\\Data2\\myfile.xlsm" data = pd.read_excel(fp, skiprows = 4, sheet_name = "CRM View" ) </code></pre> <p>This is all I have so far, but to my understanding this will read everything from the fourth row in my excel file, where as I only want the contents of the fourth row and then this row is to be fed as a column in my dataframe. </p>
<p>So, when you read the Excel file, the first row becomes the header and the remaining rows are indexed from <code>0</code>, which makes the fourth spreadsheet row index <code>2</code>.</p> <p>If you take that into account, your desired row is fetched like this:</p> <pre><code>import pandas as pd fp = "G:\\Data\\Data2\\myfile.xlsm" data = pd.read_excel(fp, sheet_name = "CRM View" ) dates_row = data.loc[2, :] </code></pre> <p>Now you can make that row into a column like this:</p> <pre><code>new_data = pd.DataFrame({'Dates': dates_row}) </code></pre>
python|excel|pandas
2
1,901,649
20,953,127
Why built-in functions like abs works on numpy array?
<p>I feel surprised that <code>abs</code> works on numpy array but not on lists. Why is that?</p> <pre><code>import numpy as np abs(np.array((1,-2))) array([1, 2]) abs([1,-1]) TypeError: bad operand type for abs(): 'list' </code></pre> <p>Also, built in functions like <code>sum</code> also works on numpy array. I guess it is because numpy array supports <code>__getitem__</code>? But in case of <code>abs</code>, if it depends on <code>__getitem__</code> it should work for list as well, but it didn't.</p>
<p>That's because <code>numpy.ndarray</code> implements the <code>__abs__(self)</code> method. Just provide it for your own class, and <code>abs()</code> will magically work. For non-builtin types you can also provide this facility after-the-fact. E.g.</p> <pre><code>class A: "A class without __abs__ defined" def __init__(self, v): self.v = v def A_abs(a): "An 'extension' method that will be added to `A`" return abs(a.v) # Make abs() work with an instance of A A.__abs__ = A_abs </code></pre> <p>However, this will not work for built-in types, such as <code>list</code> or <code>dict</code>.</p>
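<p>A short runnable version of the patching above (note that on Python 2 this only works for new-style classes, i.e. classes inheriting from <code>object</code>, because special methods are looked up on the type):</p>

```python
class A(object):
    """A class without __abs__ defined."""
    def __init__(self, v):
        self.v = v

def A_abs(a):
    """An 'extension' method that will be added to A."""
    return abs(a.v)

A.__abs__ = A_abs  # now abs() works on A instances
result = abs(A(-3))
```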
python|arrays|numpy
25
1,901,650
21,261,330
Splitting string and removing whitespace Python
<p>I would like to split a String by comma <code>','</code> and remove whitespace from the beginning and end of each split. </p> <p>For example, if I have the string:</p> <p><code>"QVOD, Baidu Player"</code></p> <p>I would like to split and strip to:</p> <p><code>['QVOD', 'Baidu Player']</code></p> <p>Is there an elegant way of doing this? Possibly using a list comprehension?</p>
<p>Python has a spectacular function called <code>split</code> that will keep you from having to use a regex or something similar. You can split your string by just calling <code>my_string.split(delimiter)</code></p> <p>After that python has a <code>strip</code> function which will remove all whitespace from the beginning and end of a string.</p> <pre><code>[item.strip() for item in my_string.split(',')] </code></pre> <p>Benchmarks for the two methods are below:</p> <pre><code>&gt;&gt;&gt; import timeit &gt;&gt;&gt; timeit.timeit('map(str.strip, "QVOD, Baidu Player".split(","))', number=100000) 0.3525350093841553 &gt;&gt;&gt; timeit.timeit('map(stripper, "QVOD, Baidu Player".split(","))','stripper=str.strip', number=100000) 0.31575989723205566 &gt;&gt;&gt; timeit.timeit("[item.strip() for item in 'QVOD, Baidu Player'.split(',')]", number=100000) 0.246596097946167 </code></pre> <p>So the list comp is about 33% faster than the map.</p> <p>Probably also worth noting that as far as being "pythonic" goes, Guido himself votes for the LC. <a href="http://www.artima.com/weblogs/viewpost.jsp?thread=98196" rel="noreferrer">http://www.artima.com/weblogs/viewpost.jsp?thread=98196</a></p>
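<p>Putting it together with the string from the question:</p>

```python
my_string = "QVOD, Baidu Player"
# split on commas, then strip surrounding whitespace from each piece
parts = [item.strip() for item in my_string.split(',')]
```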
python|regex|split|whitespace|strip
12
1,901,651
70,330,899
How to reverse caps lock in a string in python?
<p>I have a string that can either be written with caps lock or not. &quot;With caps lock&quot; means that it is either is like tHIS or like THIS. It is easy enough to detect the second case with &quot;isupper()&quot; function, but I wasn't able to find a way to find the first case reliably. For strings of length 1 I used &quot;islower()&quot; to detect if they should be capitalized, so it shouldn't be a problem</p> <h2><strong>Code I used</strong></h2> <pre class="lang-py prettyprint-override"><code>import re inp = input() trutable = &quot;&quot; for i in inp: if i.isupper(): trutable += &quot;1&quot; if i.islower(): trutable += &quot;0&quot; pattern = re.compile(r'^01') answ = re.match(pattern, trutable) if inp.isupper() or answ != None or (len(inp) == 1 and inp.islower()): inp = inp.capitalize() print(inp) </code></pre>
<p>Would you please try:</p> <pre><code>s = &quot;hELLO wORLD!&quot; print(s.swapcase()) </code></pre> <p>Output:</p> <pre><code>Hello World! </code></pre>
python|string|capslock
2
1,901,652
53,501,046
Flask can't find applications file
<p>I have a standalone app that takes in an excel file and outputs a word doc. This works fine as standalone. </p> <p>I have now tried to integrate it into a Flask application, but flask can't find the subfolder "templates" of my application. Here is my file structure:</p> <pre><code>my_flask_site ├── flask_app.py ├── __init__.py ├── templates | ├── index.html | └── report.html ├── uploads | └── myfile.xlsx | └── apps └── convert_app ├── __init__.py ├── main.py ├── report | ├── __init__.py | ├── data_ingest.py | └── report_output.py └── templates └── output_template.docx </code></pre> <p>Now I can't get the report_output.py file to find the output_template.docx file now that it is in the Flask application. </p> <pre><code>def run_report(file): data = data_ingest.Incident(file) priority_count = dict(data.df_length()) size = sum(priority_count.values()) print(priority_count) print(size) report = report_output.Report() report.header() report.priority_header(0) i = 0 if '1' in priority_count: for _ in range(priority_count['1']): field = data.fields(i) report.priority_body(field) i += 1 report.break_page() report.priority_header(1) else: report.none() report.priority_header(1) if '2' in priority_count: for _ in range(priority_count['2']): field = data.fields(i) report.priority_body(field) i += 1 report.break_page() report.priority_header(2) else: report.none() report.break_page() report.priority_header(2) if '3' in priority_count: for _ in range(priority_count['3']): field = data.fields(i) report.priority_body(field) i += 1 report.break_page() if '4' in priority_count: for _ in range(priority_count['4']): field = data.fields(i) i += 1 output = OUTPUT_FILE+f"/Platform Control OTT Daily Report {data.field[0]}.docx" report.save(output) print(f"Report saved to:\n\n\t {output}") def main(file): run_report(file) if __name__ == "__main__": main() </code></pre> <p>and here is the report_output.py (without the word format part):</p> <pre><code>from docx import Document class Report(object): def __init__(self): self.doc = Document('./templates/pcc_template.docx') self.p_title = ['Major Incident', 'Stability Incidents (HPI)', 'Other Incidents'] self.date = datetime.now().strftime('%d %B %Y') def save(self, output): self.doc.save(output) </code></pre> <p>There is more in the format_report.py file, but it is related to the function of the app. Where I am stuck is how I get the app to again see its own template folder and the template file inside it. </p>
<p>I have solved my problem, after finding this post here <a href="https://stackoverflow.com/questions/30328586/refering-to-a-directory-in-a-flask-app-doesnt-work-unless-the-path-is-absolute">Refering to a directory in a Flask app doesn&#39;t work unless the path is absolute</a>. </p> <p>What I take from this is that the file path has to be absolute from the Flask application's root folder. In this case "my_flask_site" is the root folder, and adding the full file path solved the problem. </p>
flask|python-import|filepath
1
1,901,653
53,661,833
Alphabet position in python
<p>Newbie here...Trying to write a function that takes a string and replaces all the characters with their respective dictionary values. Here is what I have:</p> <pre><code>def alphabet_position(text): dict = {'a':'1','b':'2','c':'3','d':'4','e':'5','f':'6','g':'7','h':'8','i':'9','j':'10','k':'11','l':'12','m':'13','n':'14','o':'15','p':'16','q':'17','r':'18','s':'19','t':'20','u':'21','v':'22','w':'23','x':'24','y':'25','z':'26'} text = text.lower() for i in text: if i in dict: new_text = text.replace(i, dict[i]) print (new_text) </code></pre> <p>But when I run:</p> <pre><code>alphabet_position("The sunset sets at twelve o' clock.") </code></pre> <p>I get:</p> <pre><code>the sunset sets at twelve o' cloc11. </code></pre> <p>meaning it only changes the last character in the string. Any ideas? Any input is greatly appreciated.</p>
<p>Following your logic you need to create a <code>new_text</code> string and then iteratively replace its letters. With your code, you are only replacing one letter at a time, then start from scratch with your original string:</p> <pre><code>def alphabet_position(text): dict = {'a':'1','b':'2','c':'3','d':'4','e':'5','f':'6','g':'7','h':'8','i':'9','j':'10','k':'11','l':'12','m':'13','n':'14','o':'15','p':'16','q':'17','r':'18','s':'19','t':'20','u':'21','v':'22','w':'23','x':'24','y':'25','z':'26'} new_text = text.lower() for i in new_text: if i in dict: new_text = new_text.replace(i, dict[i]) print (new_text) </code></pre> <p>And as suggested by Kevin, you can optimize a bit using <code>set</code>. (adding his comment here since he deleted it: <code>for i in set(new_text):</code>) Note that this might be beneficial only for large inputs though...</p>
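An alternative sketch that sidesteps `replace` entirely, computing each position from the character code (this assumes plain ASCII letters; non-letters pass through unchanged):

```python
def alphabet_position(text):
    # ord('a') == 97, so ord(c) - 96 maps 'a' -> 1 ... 'z' -> 26
    return ''.join(str(ord(c) - 96) if c.isalpha() else c
                   for c in text.lower())

print(alphabet_position("The sunset sets at twelve o' clock."))
```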
python
5
1,901,654
73,547,573
Multiplication/Division across different columns and rows
<p>I have a table that is unfortunately blown up a bit and I want to perform multiplication and division between columns. These operations need to be performed in subsets within the table (in my example below groupbed by year and country) so I feel like groupby would be the solution.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">Year</th> <th style="text-align: center;">Country</th> <th style="text-align: center;">A</th> <th style="text-align: center;">B</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">2019</td> <td style="text-align: center;">EU</td> <td style="text-align: center;">3</td> <td style="text-align: center;">nan</td> </tr> <tr> <td style="text-align: center;">2019</td> <td style="text-align: center;">EU</td> <td style="text-align: center;">nan</td> <td style="text-align: center;">5</td> </tr> <tr> <td style="text-align: center;">2022</td> <td style="text-align: center;">China</td> <td style="text-align: center;">1.5</td> <td style="text-align: center;">nan</td> </tr> <tr> <td style="text-align: center;">2022</td> <td style="text-align: center;">China</td> <td style="text-align: center;">nan</td> <td style="text-align: center;">7.9</td> </tr> <tr> <td style="text-align: center;">2022</td> <td style="text-align: center;">EU</td> <td style="text-align: center;">nan</td> <td style="text-align: center;">5</td> </tr> <tr> <td style="text-align: center;">2022</td> <td style="text-align: center;">EU</td> <td style="text-align: center;">0.4</td> <td style="text-align: center;">nan</td> </tr> </tbody> </table> </div> <p>Simply put I want have a new column col[C] = col[A]/col[B] so that 3/5 &amp; 1.5/7.9 &amp; 0.4/5 and the same thing with multiplication in a fourth column. The table can not be restructured unfortunately and I am not familiar enough with Pandas. Any help would be much appreciated.</p>
<p>You can aggregate with <code>groupby.first</code> and <code>merge</code>:</p> <pre><code>out = df.merge(df.groupby(['Year', 'Country']) .first() .eval('A/B') .reset_index(name='C') ) </code></pre> <p>output:</p> <pre><code> Year Country A B C 0 2019 EU 3.0 NaN 0.600000 1 2019 EU NaN 5.0 0.600000 2 2022 China 1.5 NaN 0.189873 3 2022 China NaN 7.9 0.189873 4 2022 EU NaN 5.0 0.080000 5 2022 EU 0.4 NaN 0.080000 </code></pre>
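The same result can also be sketched without the merge, using `transform('first')` to broadcast each group's first non-null value back to every row (column names and data as in the question):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Year':    [2019, 2019, 2022, 2022, 2022, 2022],
    'Country': ['EU', 'EU', 'China', 'China', 'EU', 'EU'],
    'A':       [3, np.nan, 1.5, np.nan, np.nan, 0.4],
    'B':       [np.nan, 5, np.nan, 7.9, 5, np.nan],
})

g = df.groupby(['Year', 'Country'])
# 'first' skips NaN, so every row sees its group's A and B values
df['C'] = g['A'].transform('first') / g['B'].transform('first')
df['D'] = g['A'].transform('first') * g['B'].transform('first')
print(df)
```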
python|pandas|dataframe
0
1,901,655
12,885,860
Numpy standard deviation not working for me
<p>Where am I going wrong with this <code>for</code> loop, which is meant to take as input a specific corpus, sample size and number of samples and then give the average and standard deviation of expected no. of sentiment tokens?</p> <pre><code>def test_iterate(corpus_reader, sample_size, number_of_samples): for i in xrange(number_of_samples): tokens = corpus_reader.sample_words_by_sents(sample_size) sents = corpus_reader.sample_sents(sample_size) print expected_sentiment_tokens(tokens) s = [] s.append(expected_sentiment_tokens(tokens)) s = array(s) print "Average expected no of sentiment tokens: %s" % average(s) print "Standard deviation of sentiment tokens: %s" % std(s) test_iterate(rcr, 500, 3) </code></pre> <p>returns</p> <pre><code>181.166666667 186.277777778 185.5 Average expected no of sentiment tokens: 185.5 Standard deviation of sentiment tokens: 0.0 </code></pre> <p>For some reason the average is being set to the last sample instead of averaging and standard deviating all of the samples together.</p>
<p>Use a debugger (or even print statements) to look at the array <code>s</code> that you are calling <code>average</code> and <code>std</code> on... as DSM mentioned, it probably holds only a single number, because <code>s = []</code> re-creates the list on every pass through the loop, and the standard deviation of a single value is 0.</p>
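A minimal sketch of that diagnosis (with made-up sample values standing in for the corpus results): because `s` is rebuilt inside the loop, `std()` only ever sees one number; hoisting the list out of the loop fixes it:

```python
import numpy as np

samples = [181.2, 186.3, 185.5]  # stand-ins for expected_sentiment_tokens()

# Buggy version: s is re-created every iteration
for val in samples:
    s = []
    s.append(val)
print(np.average(s), np.std(s))  # average of *one* value, std 0.0

# Fixed version: accumulate across iterations, then convert once
s = []
for val in samples:
    s.append(val)
arr = np.array(s)
print(np.average(arr), np.std(arr))
```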
python|numpy
1
1,901,656
40,882,152
Access to c pointer in python class from python method, cython
<p>In the process of implementing my python integration I faced a problem. I have class that looks like this:</p> <pre><code>cdef class SomeClass: cdef CPPClass* cpp_impl def some_method(self): self.cpp_impl.cppMethod() </code></pre> <p>And I have cpp class that can return <code>CPPClass*</code> value. Smth like this:</p> <pre><code>class Creator { public: CPPClass* createClass(); } </code></pre> <p>So I'd like to create SomeClass instance like this:</p> <pre><code>cdef class PyCreator: cdef Creator* cpp_impl def getSomeClass(self): o = SomeClass() o.cpp_impl = self.cpp_impl.createClass() return o </code></pre> <p>But I'm getting error that cython can't convert <code>CPPClass*</code> to Python object. How can I solve my problem? Thank you.</p>
<p>In <code>getSomeClass</code> it needs to know what type <code>o</code> is so that the assignment to <code>cpp_impl</code> makes sense:</p> <pre><code>def getSomeClass(self): cdef SomeClass o = SomeClass() # define the type o.cpp_impl = self.cpp_impl.createClass() # I think you missed a "self" here return o </code></pre>
python|c++|integration|cython
1
1,901,657
38,394,084
setting PYTHON_LIBRARY during opencv-2.4.10 build
<p>I'm on CentOS6.7 and I'm building opencv-2.4.10 (I removed 2.4.9 because my python cv2 package didn't seem to go along with underneath opencv-2.4.9. When I print cv2.__version__ in python, it shows 2.4.10 so I figured I should upgrade opencv to 2.4.10 because python cv2 is just a python wrapper for real c++ opencv. Anyways..)</p> <p>The only environment variable related to python is PYTHON_PATH.</p> <pre><code>ckim@stph45:~/Downloads/opencv-2.4.10/build] echo $PYTHON_PATH /home/ckim/anaconda2/lib/python2.7/site-packages/ </code></pre> <p>Under /home/ckim/Downloads/opencv-2.4.10/build directory, I did </p> <pre><code>cmake -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_EXAMPLES=ON -D CUDA_GENERATION=Auto .. </code></pre> <p>and I can see the result configuration message some of which related to python seems odd to me. See below. </p> <pre><code>.... -- Python: -- Interpreter: /home/ckim/anaconda2/bin/python2 (ver 2.7.12) -- Libraries: /usr/local/lib/libpython2.7.so -- numpy: /home/ckim/anaconda2/lib/python2.7/site-packages/numpy/core/include (ver 1.10.2) -- packages path: lib/python2.7/site-packages .... </code></pre> <p>the Interpreter and numpy is correctly pointing to my anaconda2 python environment, but why is the Library pointing the python installed in my system(not anaconda2)? </p> <p>I tried passing python related variables in the cmake command as directed <a href="http://docs.opencv.org/master/d7/d9f/tutorial_linux_install.html#gsc.tab=0" rel="nofollow">here</a> but it didn't help. Installing something on CentOS is not a breeze almost always, but I'm sticking on CentOS.</p>
<p>I succeeded in building opencv-2.4.10 with the following commands:</p> <pre><code>make clean ; cmake -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_EXAMPLES=ON -D CUDA_GENERATION=Auto -D PYTHON_INCLUDE_DIR=/home/ckim/anaconda2/include/python2.7/ -D PYTHON_LIBRARY=/home/ckim/anaconda2/lib/libpython2.7.so .. |&amp; tee log.cmake make -j8 |&amp; tee log.make </code></pre> <p>now opencv-2.4.10 is installed and cv2 is also 2.4.10.<br> When I look into log.cmake, I still see </p> <pre><code>Libraries: /usr/local/lib/libpython2.7.so (ver 2.7.12) </code></pre> <p>But anyway the compile was a success, so it seems I can ignore this configuration message and only the parameter I set on the command line (PYTHON_LIBRARY) matters.</p>
python|opencv|centos
0
1,901,658
30,811,542
3D numpy array iteration using shared C libraries
<p>I'm currently working on an image processing project. I'm using Python, the SimpleITK, numpy, and a couple of other libraries to take a stack of DICOM images, turn them into a 3D numpy array, and then do some image processing operations using the SITK or other mathematical techniques (masking, etc.)</p> <p>Right now, I'm trying to make an averaging filter that takes the average of a 3x3 neighborhood and replaces the center pixel of that neighborhood with that average value. The result is just a blurred image. Since Python's not really good at looping through 300x300x400 pixels really fast, I'm trying to use a C library to do it for me. The problem is, I'm not good at C. (or python for that matter...)</p> <p>Below is my C code:</p> <pre><code>int i, j, k, m, n, p; double kernsum; void iter(double *data, int N, int height, int width, int depth, double *kernavg){ double kern[N*N]; for (k = 0; k &lt; depth; k++){ for (i = (N - 1)/2; i &lt; height - 1; i++){ for (j = (N - 1)/2; j &lt; width - 1; j++){ for (m = i - (N - 1)/2; m &lt; i + (N - 1)/2; m++){ for (n = j - (N - 1)/2; n &lt; j + (N - 1)/2; n++){ kern[m + n*N] = data[i + j*width + k*width*depth]; } } kernsum = 0; for (p = 0; p &lt; N*N; p++){ kernsum += kern[p]; } kernavg[i + j*width + k*width*depth] = kernsum/(N*N); } } } } </code></pre> <p>And here's some of the python code I'm using. poststack is a large 3D numpy array.</p> <pre><code>height = poststack.shape[1] width = poststack.shape[2] depth = poststack.shape[0] N = 3 kernavgimg = np.zeros(poststack.shape, dtype = np.double) lib = ctypes.cdll.LoadLibrary('./iter.so') iter = lib.iter iter(ctypes.c_void_p(poststack.ctypes.data), ctypes.c_int(N), ctypes.c_int(height), ctypes.c_int(width), ctypes.c_int(depth), ctypes.c_void_p(kernavgimg.ctypes.data)) print kernavgimg pyplot.imshow(kernavgimg[0, :, :], cmap = 'gray') pyplot.show() image.imsave('/media/sd/K/LabCode/python_code/dump/test.png', kernavgimg.data[0, :, :], cmap = 'gray') pyplot.imshow(poststack[0, :, :], cmap = 'gray') pyplot.show() image.imsave('/media/sd/K/LabCode/python_code/dump/orig.png', poststack[0, :, :], cmap = 'gray') print kernavgimg[0, :, :] == poststack[0, :, :] print kernavgimg.shape print poststack.shape </code></pre> <p>I should mention that I looked at this StackOverflow post and I don't see what I'm doing different from the guy who asked the original question... </p> <p><a href="https://stackoverflow.com/questions/5862915/passing-numpy-arrays-to-a-c-function-for-input-and-output">Passing Numpy arrays to a C function for input and output</a></p> <p>I know I'm making a stupid mistake, but what is it?</p>
<p>The problem is that the C code produces a segmentation fault, because it tries to access <code>kern[m + n*N]</code> with indexes that fall outside the allocated array's boundaries. </p> <p>The indexing of your multidimensional arrays is wrong. For an array <code>X</code> of shape <code>(n, m)</code> the equivalent of <code>X[i][j]</code> for a flattened array in C is <code>X[i*m + j]</code>, not the way you were using it in the code above. </p>
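The row-major rule stated here can be checked from Python with NumPy, whose default memory layout matches C:

```python
import numpy as np

n, m = 4, 5
X = np.arange(n * m).reshape(n, m)   # C-contiguous by default
flat = X.ravel()

for i in range(n):
    for j in range(m):
        # X[i][j] in the 2-D view equals X[i*m + j] in the flat buffer
        assert flat[i * m + j] == X[i, j]
print("row-major indexing verified")
```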
python|c|image|image-processing|medical-imaging
1
1,901,659
29,267,960
different formats for TabularAdapter columns?
<p>I've found that I can apply a format to ALL the columns in a TabularAdapter by adding a statement like this to the TabularAdapter declaration: format = '%7.4f'.</p> <p>However, I'd like to have different formatting for each column in the table...is this possible? I've tried to specify the format for just column index 2 (as seen in the example below), but it doesn't apply to just that column. I've been searching for how to do this correctly, but so far have found nothing.</p> <p>Here's a little example file:</p> <pre><code>from traits.api import HasTraits, Array from traitsui.api import View, Group,Item, TabularEditor from traitsui.tabular_adapter import TabularAdapter from numpy import dtype test_dtype = dtype([('Integer#1', 'int'), ('Integer#2', 'int'), ('Float', 'float')]) class testArrayAdapter(TabularAdapter): columns = [('Col1 #', 0), ('Col2', 1), ('Col3', 2)] even_bg_color = 0xf4f4f4 # very light gray width = 125 class test(HasTraits): test_array = Array(dtype=test_dtype) view = View( Item(name = 'test_array', show_label = False, editor = TabularEditor(adapter = testArrayAdapter()), ), Item(name = 'test_array', show_label = False, editor = TabularEditor(adapter = testArrayAdapter(column=2, format='%.4f')), ), ) Test = test() Test.test_array.resize(5, refcheck = False) Test.configure_traits() </code></pre> <p>What I'd like to see is to have the 3rd column have the 4 decmals (it is a float after all), while columns 1 &amp; 2 are presented as just integers.</p>
<p>There are at least two ways you can do this. One is to override the method <code>get_format(self, object, name, row, column)</code> of the <code>TabularAdapter</code> in your adapter class, and have it return the appropriate format based on the <code>column</code> argument. E.g.</p> <pre><code> def get_format(self, object, name, row, column): formats = ['%d', '%d', '%.4f'] return formats[column] </code></pre> <p>Another method is to use the "traits magic" that is implemented in the <code>TabularAdapter</code> class. In your subclass, you can set the format for a column by defining a specially named <code>Str</code> trait. One set of names that works for a numpy structured array such as your <code>test_array</code> is</p> <pre><code> object_0_format = Str("%d") object_1_format = Str("%d") object_2_format = Str("%.4f") </code></pre> <p>(See the <a href="http://docs.enthought.com/traitsui/traitsui_user_manual/factories_advanced_extra.html#tabularadapter" rel="nofollow"><code>TabularAdapter</code> documentation</a>, and <a href="https://github.com/enthought/traitsui/blob/master/examples/tutorials/traitsui_4.0/editors/tabular_editor/tabular_editor.rst" rel="nofollow">this file</a> in the github repo for more information.)</p> <p>Here's a modified version of your script that demonstrates both approaches. For variety, I used the format <code>"%04d"</code> for the first column. (I hope you don't mind the gratuitous name and style changes.)</p> <pre><code>from traits.api import HasTraits, Array, Str from traitsui.api import View, Item, TabularEditor from traitsui.tabular_adapter import TabularAdapter from numpy import dtype test_dtype = dtype([('Integer#1', 'int'), ('Integer#2', 'int'), ('Float', 'float')]) class TestArrayAdapter1(TabularAdapter): columns = [('Col1 #', 0), ('Col2', 1), ('Col3', 2)] even_bg_color = 0xf4f4f4 # very light gray width = 125 def get_format(self, object, name, row, column): formats = ['%04d', '%d', '%.4f'] return formats[column] class TestArrayAdapter2(TabularAdapter): columns = [('Col1 #', 0), ('Col2', 1), ('Col3', 2)] even_bg_color = 0xf4f4f4 # very light gray width = 125 object_0_format = Str("%04d") object_1_format = Str("%d") object_2_format = Str("%.4f") class Test(HasTraits): test_array = Array(dtype=test_dtype) view = \ View( Item(name='test_array', show_label=False, editor=TabularEditor(adapter=TestArrayAdapter1())), Item(name='test_array', show_label=False, editor=TabularEditor(adapter=TestArrayAdapter2())), ) test = Test() test.test_array.resize(5, refcheck=False) test.configure_traits() </code></pre>
python|enthought|traits|traitsui
2
1,901,660
59,484,793
How to remove overlapping contours and separate each character as an individual contour for character extraction?
<p>I am trying to implement character extraction from images in Python using the <code>MSER</code> in <code>opencv</code>. This is my code till now:</p> <pre><code>import cv2 import numpy as np # create MSER object mser = cv2.MSER_create() # convert image to grayscale gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # detect the regions regions,_ = mser.detectRegions(gray) # find convex hulls of the regions hulls = [cv2.convexHull(p.reshape(-1, 1, 2)) for p in regions] # initialize threshold area of the contours ThresholdContourArea = 10000 # initialize empty list for the characters and their locations char = [] loc =[] # get the character part of the image and it's location if the area of contour less than threshold for contour in hulls: if cv2.contourArea(contour) &gt; ThresholdContourArea: continue # get the bounding rectangle around the contour bound_rect = cv2.boundingRect(contour) loc.append(bound_rect) det_char = gray[bound_rect[1]:bound_rect[1]+bound_rect[3],bound_rect[0]:bound_rect[0]+bound_rect[2]] char.append(det_char) </code></pre> <p>But this method gives multiple contours for the same letter and at some places multiple words are put into one contour. Here is an eg: original image:</p> <p><a href="https://i.stack.imgur.com/w6nUT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w6nUT.png" alt="enter image description here"></a></p> <p>After adding the contours:</p> <p><a href="https://i.stack.imgur.com/9kMEp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9kMEp.png" alt="enter image description here"></a></p> <p>Here the first T has multiple contours around and the two rs are combined into one contour. How do I prevent that?</p>
<p>Instead of using <code>MSER</code>, here's a simple approach using thresholding + contour filtering. We first remove the border then Otsu's threshold to obtain a binary image. The idea is that each letter should be an individual contour. We find contours and draw each rectangle.</p> <p>Removed border <code>-&gt;</code> binary image <code>-&gt;</code> result</p> <p><img src="https://i.stack.imgur.com/DlrNP.png" height="250"> <img src="https://i.stack.imgur.com/ju1kj.png" height="250"> <img src="https://i.stack.imgur.com/c1o0e.png" height="250"></p> <p><strong>Note:</strong> In some cases, the letters are connected so to remove the merged characters, we can first enlarge the image using <code>imutils.resize()</code> then perform <a href="https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.html#erosion" rel="nofollow noreferrer">erosion</a> or <a href="https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.html#opening" rel="nofollow noreferrer">morphological opening</a> to separate each character. However, I was unable to obtain great results since the text would disappear even with the smallest sized kernel. </p> <p>Code</p> <pre><code>import cv2 import imutils # Load image, grayscale, Otsu's threshold image = cv2.imread('1.png') image = imutils.resize(image, width=500) # Remove border kernel_vertical = cv2.getStructuringElement(cv2.MORPH_RECT, (1,50)) temp1 = 255 - cv2.morphologyEx(image, cv2.MORPH_CLOSE, kernel_vertical) horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (50,1)) temp2 = 255 - cv2.morphologyEx(image, cv2.MORPH_CLOSE, horizontal_kernel) temp3 = cv2.add(temp1, temp2) result = cv2.add(temp3, image) # Convert to grayscale and Otsu's threshold gray = cv2.cvtColor(result, cv2.COLOR_BGR2GRAY) thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] # Find contours and filter using contour area cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: x,y,w,h = cv2.boundingRect(c) cv2.rectangle(result, (x, y), (x + w, y + h), (36,255,12), 2) cv2.imshow('thresh', thresh) cv2.imshow('result', result) cv2.waitKey() </code></pre>
python|opencv|image-processing|computer-vision|mser
1
1,901,661
18,830,232
Can I run line_profiler over a pytest test?
<p>I have identified some long running pytest tests with </p> <pre><code>py.test --durations=10 </code></pre> <p>I would like to instrument one of those tests now with something like line_profiler or cprofile. I really want to get the profile data from the test itself as the pytest setup or tear down could well be part of what is slow. </p> <p>However given how line_profiler or cprofile is typically involved it isn't clear to me how to make them work with pytest.</p>
<p>Run pytest like this:</p> <pre><code>python3 -m cProfile -o profile -m pytest </code></pre> <p>You can even pass in optional arguments:</p> <pre><code>python3 -m cProfile -o profile -m pytest tests/worker/test_tasks.py -s campaigns </code></pre> <p>This will create a binary file called <code>profile</code> in your current directory. This can be analyzed with pstats:</p> <pre><code>import pstats p = pstats.Stats('profile') p.strip_dirs() p.sort_stats('cumtime') p.print_stats(50) </code></pre> <p>This will print the 50 lines with the longest cumulative duration.</p>
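The same pstats workflow can also be driven in-process, which is handy for profiling a single helper without re-running the whole suite (the helper here is a stand-in for your own code):

```python
import cProfile
import io
import pstats

def slow_helper():
    # stand-in for the code under test
    return sum(i * i for i in range(100000))

pr = cProfile.Profile()
pr.enable()
slow_helper()
pr.disable()

stream = io.StringIO()
stats = pstats.Stats(pr, stream=stream)
stats.strip_dirs().sort_stats('cumtime').print_stats(10)
report = stream.getvalue()
print(report[:300])
```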
python|pytest|cprofile
48
1,901,662
19,031,189
Most Efficient Way to Automate Grouping of List Entries
<p><strong>Background:</strong><br>I have a very large list of 3D cartesian coordinates, I need to process this list to group the coordinates by their Z coordinate (ie all coordinates in that plane). Currently, I manually create groups from the list using a loop for each Z coordinate, but if there are now dozens of possible Z (was previously handling only 2-3 planes)coordinates this becomes impractical. I know how to group lists based on like elements of course, but I am looking for a method to automate this process for n possible values of Z.<br><br><strong>Question:</strong><br>What's the most efficient way to automate the process of grouping list elements of the same Z coordinate and then create a unique list for each plane?</p> <p><strong>Code Snippet:</strong><br> I'm just using a simple list comprehension to group individual planes: <br> <code>newlist=[x for x in coordinates_xyz if insert_possible_Z in x]</code> <br> I'm looking for it to automatically make a new unique list for every Z plane in the data set. <br><br><strong>Data Format:</strong><br> <code>((x1,y1,0), (x2, y2, 0), ... (xn, yn, 0), (xn+1,yn+1, 50),(xn+2,yn+2, 50), ... (x2n+1,y2n+1, 100), (x2n+2,y2n+2, 100)...)</code>etc. <br>I want to automatically get all coordinates where Z=0, Z=50, Z=100 etc. Note that the value of Z (increments of 50) is an example only, the actual data can have any value.<br><br><strong>Notes:</strong><br>My data is imported either from a file or generated by a separate module in lists. This is necessary for interface with another program (that I have not written).</p>
<p>The most efficient way to group elements by Z and make a list of them so grouped is to not make a list.</p> <p><a href="http://docs.python.org/2/library/itertools.html#itertools.groupby" rel="nofollow">itertools.groupby</a> does the grouping you want without the overhead of creating new lists. </p> <p>Python generators take a little getting used to when you aren't familiar with the general mechanism. The <a href="https://wiki.python.org/moin/Generators" rel="nofollow">official generator documentation</a> is a good starting point for learning why they are useful.</p>
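A sketch of that approach on made-up coordinates. Note that `groupby` only merges *consecutive* equal keys, so the list must be sorted by Z first (or already be ordered by plane, as the question's data appears to be):

```python
from itertools import groupby
from operator import itemgetter

coordinates_xyz = [(1, 2, 0), (3, 4, 0), (5, 6, 50), (7, 8, 50), (9, 10, 100)]

by_z = itemgetter(2)
coordinates_xyz.sort(key=by_z)          # groupby needs runs of equal keys
planes = {z: list(group) for z, group in groupby(coordinates_xyz, key=by_z)}

print(sorted(planes))   # [0, 50, 100]
print(planes[50])       # [(5, 6, 50), (7, 8, 50)]
```

Materializing each group into a list (as above) reintroduces list overhead, but each group can also be consumed lazily inside the loop if only one pass is needed.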
python|list|python-2.7|coordinates
1
1,901,663
67,438,459
Sending a variable to discord channel Python
<p>So I have this code:</p> <pre><code>while True: recent_post = api.user_timeline(screen_name = 'PartAlert', count = 1, since_id=recent_id, include_rts = True, tweet_mode='extended') if recent_post: last_post = recent_post[0].full_text urls = re.findall('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&amp;+]|[!*(),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', last_post) if urls: print(f&quot;[{datetime.datetime.now()}] Link(s) Found:&quot;) for x in range(len(urls)): if urls[x][-1] == ':': urls[x] = urls[x][:-1] print(urls[x]) recent_id = recent_post[0].id time.sleep(10) </code></pre> <p>which gets the twitter post of a specific user. What I want now is to send the <code>url</code> variable to a discord channel, but whatever I try doesn't work. Any ideas?</p>
<p>Be sure to place the <code>send_url</code> method <strong>above</strong> the line in which you call <code>send_url</code> (or you could use classes, but that might require redesigning your bot).</p> <p>To send messages to a channel, you can use:</p> <pre><code>channel_id = # some channel id here message = # some message here channel = client.get_channel(channel_id) await channel.send(message) </code></pre> <p>You could also define <code>send_url</code> like so</p> <pre><code>async def send_url(channel_id, message): channel = client.get_channel(channel_id) await channel.send(message) </code></pre> <p>Just be sure to place it <strong>above</strong> the line in which you call the method.</p>
python|discord|discord.py|tweepy
0
1,901,664
67,580,186
How can I make my string search correct when it only shows the first hit?
<p>Currently I have this code</p> <pre><code>for startLine, endLine in zip(list1, list2): print(startLine, endLine) for line in lines[startLine:endLine]: if 'error ' in line and fail_lines: line = line.rstrip() search_results.append(line) </code></pre> <p>and the lists being referenced are</p> <pre><code>list1 = [ '1' , '9', '15'] list2 = [ '7' , '12', '22'] </code></pre> <p>For example on the code when the program runs</p> <pre><code>for line in lines[1:7] </code></pre> <p>The string search only gets the first hit of the &quot;error&quot; keyword even though I used a for loop. How can I make it so that all the lines containing the error string inside the given range will be appended?</p>
<p>You need to do something like this,</p> <pre class="lang-py prettyprint-override"><code>lines = [ &quot;error progress while looping&quot;, &quot;connection time out error&quot;, &quot;timed error sem&quot;, &quot;in = error Time&quot; ] list1 = ['1', '9', '15'] list2 = ['7', '12', '22'] search_results = [] for s_line, e_line in zip(list1, list2): for line in lines[int(s_line):int(e_line)]: if line.find(&quot;error&quot;) &gt;= 0: line = line.rstrip() search_results.append(line) print(search_results) </code></pre> <p>this gives me an output,</p> <pre class="lang-sh prettyprint-override"><code>['connection time out error', 'timed error sem', 'in = error Time'] ['connection time out error', 'timed error sem', 'in = error Time'] ['connection time out error', 'timed error sem', 'in = error Time'] </code></pre> <p>make sure to convert the extracted <code>s_line</code> <code>(start_line)</code> and <code>e_line</code> <code>(end_line)</code> to <code>int</code> before giving them as a <strong>index</strong> value.</p>
python|arrays|string|list|loops
1
1,901,665
63,684,893
Calculating the area covered by at least one of three rectangles
<p>I have a problem where I would need to calculate the area covered by at least one of three rectangles.</p> <p>I've defined a function <code>calculate</code> as follows (apologies for the redundant variables it was for clarity):</p> <pre><code>def calculate(rec1, rec2, rec3): if rec1 == rec2 == rec3: return abs((rec1[1]-rec1[3])) * abs(rec1[0]-rec1[2]) else: area1 = abs((rec1[1]-rec1[3])) * abs(rec1[0]-rec1[2]) area2 = abs((rec2[1]-rec2[3])) * abs(rec2[0]-rec2[2]) area3 = abs((rec3[1]-rec3[3])) * abs(rec3[0]-rec3[2]) xmin1, ymin1, xmax1, ymax1 = rec1[0], rec1[3], rec1[2], rec1[1] xmin2, ymin2, xmax2, ymax2 = rec2[0], rec2[3], rec2[2], rec2[1] xmin3, ymin3, xmax3, ymax3 = rec3[0], rec3[3], rec3[2], rec3[1] area12 = (min(xmax1, xmax2) - max(xmin1, xmin2)) * (min(ymax1, ymax2) - max(ymin1, ymin2)) area13 = (min(xmax1, xmax3) - max(xmin1, xmin3)) * (min(ymax1, ymax3) - max(ymin1, ymin3)) area23 = (min(xmax2, xmax3) - max(xmin2, xmin3)) * (min(ymax2, ymax3) - max(ymin2, ymin3)) return (area1 + area2 + area3) - (area12 + area13 + area23) </code></pre> <p>However, this doesn't seem to be working. What am I missing in the formula? <code>area12</code>, <code>area13</code> and <code>area23</code> are the areas of the intersecting rectangles denoted by the last two digits, e.g. <code>area12</code> is the area of intersection for <code>rec1</code> and <code>rec2</code>.</p> <p>For the input ((x1, y1) denotes the left upper corner and (x2,y2) right lower corner)</p> <pre><code>(2,-1,3,-3), (0,2,3,0), (-3,0,1,-1) </code></pre> <p>I should get an output of <code>12</code>, but I get <code>13</code> and simply adding <code>+1</code> to the return value doesn't work in other test cases.</p>
<p>What you are looking for is the area of the union of the rectangles.</p> <p>In the case of two rectangles, this area is the sum of the individual areas minus the area of the intersection. It is interesting to note that the intersection is also a rectangle (or empty). If we denote the intersection by <code>&amp;</code> and the union by <code>|</code>, we have</p> <pre><code>Area(A | B) = Area(A) + Area(B) - Area(A &amp; B). </code></pre> <p>To generalize to three rectangles, we can imagine that the above union is made of two positive rectangles and a negative one. Hence</p> <pre><code>Area(A | B | C) = Area((A | B) | C) = Area(A) + Area(C) - Area(A &amp; C) + Area(B) + Area(C) - Area(B &amp; C) - Area(A &amp; B) - Area(C) + Area(A &amp; B &amp; C) = Area(A) + Area(B) + Area(C) - Area(B &amp; C) - Area(C &amp; A) - Area(A &amp; B) + Area(A &amp; B &amp; C). </code></pre> <p>Then to find the area of the intersection of two rectangles it suffices to consider the rightmost of the two left sides and the leftmost of the two right sides. If they are crossed, the intersection is empty. Otherwise their distance is the width of the intersection. A similar reasoning gives you the height.</p>
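<p>A minimal sketch of this inclusion-exclusion formula (the helper names are mine; rectangles are given as <code>(x1, y1, x2, y2)</code> with <code>(x1, y1)</code> the upper-left and <code>(x2, y2)</code> the lower-right corner, as in the question):</p>

```python
def area(r):
    # r = (x1, y1, x2, y2) with (x1, y1) the upper-left corner and
    # (x2, y2) the lower-right corner, y axis pointing up
    return (r[2] - r[0]) * (r[1] - r[3])

def inter(a, b):
    # Intersection rectangle of a and b, or None if they don't overlap.
    x1, x2 = max(a[0], b[0]), min(a[2], b[2])
    y1, y2 = min(a[1], b[1]), max(a[3], b[3])
    if x1 >= x2 or y2 >= y1:
        return None
    return (x1, y1, x2, y2)

def union_area(a, b, c):
    # Inclusion-exclusion: A + B + C - AB - AC - BC + ABC.
    total = area(a) + area(b) + area(c)
    for p, q in ((a, b), (a, c), (b, c)):
        i = inter(p, q)
        if i:
            total -= area(i)
    ab = inter(a, b)
    abc = inter(ab, c) if ab else None
    if abc:
        total += area(abc)
    return total

print(union_area((2, -1, 3, -3), (0, 2, 3, 0), (-3, 0, 1, -1)))  # 12
```

<p>Note that the key difference from the question's code is that an empty intersection contributes nothing instead of a (possibly negative) product of crossed side distances.</p>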
python|computational-geometry
2
1,901,666
36,342,872
Set IntelliJ/Pycharm to handle pandas "Unresolved references" warning
<p>Whenever I try to access a non-method attribute of a Series or DataFrame (such as <code>columns</code> or <code>loc</code>), IntelliJ throws me an "unresolved reference" warning which doesn't crash my code, but is fairly annoying to see. I'd rather not disable this inspection, and I'd like to avoid peppering my code with suppressions.</p> <p>I've set the "Collect run-time types information for code insight" option of the debugger, but this didn't work. I've also tried adding references to ignore in the "Ignore References" list in the Inspections tab, but nothing I tried seemed to work.</p> <p>The warning I get will be something like <code>Cannot find reference loc in 'Series | Series'</code>.</p>
<p>One thing that works for me when PyCharm's autocomplete is having trouble figuring out my variable type is type hinting. It happens rarely enough that I (thankfully) don't have to do it that often.</p> <pre><code>df = pd.DataFrame() # type: pd.DataFrame </code></pre> <p>More info on typing can be found here: <a href="https://stackoverflow.com/questions/32557920/what-are-type-hints-in-python-3-5">What are Type hints in Python 3.5</a></p>
python|pandas|intellij-idea|pycharm|suppress-warnings
1
1,901,667
19,522,538
How do I find out if python script was invoked by piping sth into it or called directly?
<p>My python program both accepts its main input from a command line parameter (<code>sys.argv[1]</code>) and through piping (<code>sys.stdin</code>).</p> <p>But it needs additional parameters which will shift their position in the <code>sys.argv</code> list when the main input is not present.</p> <p>How do I find out if the script was invoked directly</p> <pre><code>myscript.py "input" "add_param" </code></pre> <p>or by piping</p> <pre><code>echo "input" | myscript.py "add_param" </code></pre> <p>Just checking the number of params is not enough. There are more (optional) parameters and I also want to give the right error message for missing parameters.</p>
<p>You can use <a href="http://docs.python.org/2/library/stdtypes#file.isatty" rel="nofollow"><code>sys.stdin.isatty()</code></a>.</p> <p>If the script is executed using a pipe (<code>|</code>), it returns <code>False</code>.</p> <pre><code>&gt;&gt;&gt; sys.stdin.isatty() True </code></pre>
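<p>A sketch of how the check can drive the argument handling described in the question (the helper name is mine; the stream is passed in explicitly so the function can be exercised with a non-tty object such as <code>io.StringIO</code>):</p>

```python
import io

def read_input(argv, stdin):
    # Decide where the main input comes from. Returns
    # (main_input, remaining_params).
    if not stdin.isatty():
        # stdin is a pipe or a file: the main input is read from it,
        # and every argv entry after the script name is a parameter
        return stdin.read().rstrip("\n"), argv[1:]
    # interactive terminal: the main input is the first positional
    # argument, additional parameters follow it
    return argv[1], argv[2:]

# Simulates: echo "input" | myscript.py "add_param"
print(read_input(["myscript.py", "add_param"], io.StringIO("input\n")))
# ('input', ['add_param'])
```

<p>In the real script you would call it as <code>read_input(sys.argv, sys.stdin)</code>.</p>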
python|pipe
3
1,901,668
13,537,853
Python: send result to a new window
<p>I want to send some result from my program to a different window, not into the console (at my command prompt in Windows). That is, I want to learn to make a GUI application when I have only ever made a non-gui command line "hello world" program ever before, in Python. Where do I start:</p> <p>Here's a command line program:</p> <pre><code> print 'I love python ' # output to standard console output </code></pre> <p>How do I put that message "I love python" in its own new window?</p>
<p>One way would be to learn <a href="http://www.pythonware.com/library/tkinter/introduction/" rel="nofollow">tkinter</a>. There is a lot of work that goes into making a "GUI" application with windows you create yourself. </p> <p>Instead of one line of code where you print something, you will have to make a window, and put things into those windows, and then react to the user clicking on things. </p> <p><strong>Update</strong> Until I edited your question (after you accepted this answer) I was not sure, so I guessed you were asking "how do I make my output go to a new window, not to the console where my print statement is going now". That unclear question has now been removed and replaced by my own rewrite of your question. (People who want to see the original unreadable question should look at the edit history).</p>
python|user-interface|tkinter
0
1,901,669
22,266,867
Why does Python run my code bottom to top?
<p>Assume these 3 files:</p> <p><strong>charNames.py</strong></p> <pre><code> a = 'Susan' b = 'Charlie' c = 'Jiji' </code></pre> <p><strong>threenames.py</strong></p> <pre><code>a = 'dead' b = 'parrot' c = 'sketch' print a,b,c </code></pre> <p><strong>storytime.py</strong></p> <pre><code>#!/usr/bin/env python import charNames import threenames print charNames.a threenames </code></pre> <hr> <p>I run <code>storytime.py</code> (which is also chmod +x) by using <code>./storytime.py</code> from Terminal, this is the output that I get:</p> <pre><code>$ ./storytime.py dead parrot sketch Susan $ </code></pre> <p>Why does the result execute <code>print a,b,c</code> from <em>threenames.py</em> before it runs <code>print charNames.a</code>?</p> <p>From my understanding, Python is a top down programming language, like bash. So should it print "Susan" first, then "dead parrot sketch"?</p> <p>This is run on OSX, with Python 2.7.5</p>
<p>In Python when you import a file it is executed. That is why you are seeing the output from <strong>threenames.py</strong> first, because it is executed right after it is imported.</p> <p>If you want a way to only run code in a file if it is the main file and not an import, you can use this code in <strong>threenames.py</strong>:</p> <pre><code>if __name__ == '__main__': print a, b, c </code></pre> <p>If you run <strong>threenames.py</strong>, you will see a, b, and c printed because it is the main file, but when it is imported, it is a module, so the print statement and any other function calls inside that conditional will not be executed.</p>
python|bash
4
1,901,670
16,768,744
Wrong number of total entities on all pages except the last when using gae search and cursors for pagination
<p>I use GAE's search API and I'm getting some strange results: the reported number of matching documents is about ten times too high. In my mapreduce job I use an if test to check that the entity is visible (a boolean variable) and was modified during the last 60 days, so only such entities should be in the index. What could I be doing wrong? The strange thing is that even a blank query, which should match everything in the index, reports a total about ten times as large as it should be; but only the number is wrong, because when I page through the actual result set it has the correct length, and the total shown on the last page is correct. On all pages of the pagination except the last, the reported number of results is too high. <img src="https://i.stack.imgur.com/baSxi.png" alt="enter image description here"> </p> <p>Only on the last page is the correct number of total entities displayed. Why?
<img src="https://i.stack.imgur.com/oJQXM.png" alt="enter image description here"></p> <p>The mapreduce code I use to build to index is:</p> <pre><code>def index(entity): try: edge = datetime.datetime.now() - timedelta(days=60) if (entity.published == True and entity.modified &gt; edge): city_entity = montaomodel.City.all().filter('name =', entity.city).get() region_entity = montaomodel.Region.all().filter('name =', entity.region).get() price = 0 try: if entity.price: price = long(entity.price) except (Exception), e: price = 0 logging.info('price conversion failed for entity %s', str(entity.key().id()) ) mileage = -1 try: if entity.mileage: mileage = int(entity.mileage) except (Exception), e: mileage = -1 logging.info('mileage conversion failed for entity %s', str(entity.key().id()) ) regdate = -1 try: if entity.regdate: regdate = int(entity.regdate) except (Exception), e: regdate = -1 logging.info('regdate conversion failed for entity %s', str(entity.key().id()) ) company_ad = 0 if entity.company_ad: company_ad = 1 cityId = 0 if city_entity: cityId = city_entity.key().id() regionID = 0 if region_entity: regionID = region_entity.key().id() category = 0 if entity.category: category = entity.category doc = search.Document(doc_id=str(entity.key()), fields=[ search.TextField(name='title', value=entity.title), search.TextField(name='text', value=entity.text), search.TextField(name='city', value=entity.city), search.TextField(name='region', value=entity.region), search.NumberField(name='cityID', value=int(cityId)), search.NumberField(name='regionID', value=int(regionID)), search.NumberField(name='category', value=int(category)), search.NumberField(name='constant', value=1), search.NumberField(name='adID', value=int(entity.key().id())), search.TextField(name='name', value=entity.name), search.DateField(name='date', value=entity.modified.date()), search.NumberField(name='price', value=long(price)), search.NumberField(name='mileage', value=int(mileage)), 
search.NumberField(name='regdate', value=int(regdate)), search.TextField(name='type', value=entity.type), search.TextField(name='currency', value=entity.currency), search.NumberField(name='company_ad', value=company_ad), search.NumberField(name='hour', value=entity.modified.hour), search.NumberField(name='minute', value=entity.modified.minute), ], language='en') yield search.Index(name='ads').put(doc) #yield op.db.Put(ad) except Exception, e: logging.info('There occurred exception:%s' % str(e)) </code></pre> <p>The search code is</p> <pre><code>def find_documents(query_string, limit, cursor): try: subject_desc = search.SortExpression( expression='date', direction=search.SortExpression.DESCENDING, default_value=datetime.now().date()) # Sort up to 1000 matching results by subject in descending order sort = search.SortOptions(expressions=[subject_desc], limit=1000) # Set query options options = search.QueryOptions( limit=limit, # the number of results to return cursor=cursor, sort_options=sort, #returned_fields=['author', 'subject', 'summary'], #snippeted_fields=['content'] ) query = search.Query(query_string=query_string, options=options) index = search.Index(name=_INDEX_NAME) # Execute the query return index.search(query) except search.Error: logging.exception('Search failed') return None </code></pre>
<p>The number of documents reported found by full-text search is approximate when there are more than 1000 results. The approximation used by GAE is not working well with your data.</p> <p>You can use <a href="https://developers.google.com/appengine/docs/python/search/queryoptionsclass#QueryOptions_number_found_accuracy" rel="nofollow">the QueryOptions class</a> to change the accuracy of the reported number of search results, e.g. with</p> <pre><code> # Set query options options = search.QueryOptions( number_found_accuracy=2000 ) </code></pre> <p>This makes the reported number of documents accurate as long as it is less than 2000.</p>
python|google-app-engine|python-2.7|mapreduce|full-text-search
1
1,901,671
16,971,688
How can I return the last element that fits under the criteria for driver.find_element_by_partial_link_text?
<p>I have a document that either has 1 or two links that can be found using the selenium/python command <code>driver.find_element_by_partial_link_text</code>. Whether it's one or two links, I always want the script to return the last in the list.</p> <p>Is there any way to do this?</p> <p>i.e. the page sometimes has a link with text stackoverflow.com/xxxxxxx and stackoverflow.com/iwantthislink. Other times it just has stackoverflow.com/iwantthislink.</p> <p>When I use <code>driver.find_element_by_partial_link_text("stackoverflow.com/")</code> it returns stackoverflow.com/xxxxxxx since it comes before stackoverflow.com/iwantthislink.</p> <p>I just need a way to always return the last element in the list.</p>
<p><code>find_elements_by_partial_link_text</code> returns a list. (Notice <code>elements</code> vs <code>element</code>.)</p> <p>To access the last element do:</p> <pre><code>my_elem = driver.find_elements_by_partial_link_text('sometext')[-1] </code></pre> <p>Better yet, test the existence of the elements beforehand:</p> <pre><code>elems = driver.find_elements_by_partial_link_text('sometext') if elems: my_elem = elems[-1] </code></pre>
python|selenium
4
1,901,672
16,776,500
Do things in random order?
<p>Is there any way in Python to do things in random order? Say I'd like to run <code>function1()</code>, <code>function2()</code>, and <code>function3()</code>, but not necessarilly in that order, could that be done? The obvious answer is to make a list and choose them randomly, but how would you get the function name from the list and actually run it?</p>
<p>This is actually pretty simple. Python functions are just objects, that happen to be callable. So you can store them in a list, and then call them using the call operator (<code>()</code>).</p> <p>Make your list of functions, shuffle them with <a href="http://docs.python.org/3.3/library/random.html?highlight=random#random.shuffle" rel="noreferrer"><code>random.shuffle()</code></a>, and then loop through, calling them.</p> <pre><code>to_call = [function1, function2, function3] random.shuffle(to_call) for f in to_call: f() </code></pre> <p>If you wanted to store the returned values, you could add them to a list, and that would make a good case for a <a href="http://www.youtube.com/watch?v=pShL9DCSIUw" rel="noreferrer">list comprehension</a>:</p> <pre><code>returned_values = [f() for f in to_call] </code></pre>
python|random
15
1,901,673
43,706,947
How do you call a function stored in a variable within Python
<p>Below as you can see when I try to call the <code>mainLogin()</code> function stored in a variable it says it has not been defined.</p> <p>I know there's a way around but cant seem to figure it out.</p> <pre><code>validUser = {} answer = "" answer = mainLogin() def mainLogin(): while True: print("Are you currently a registered user [y/n]: ") answer = input().lower() if answer in "y n".split(): return answer else: print("Error: please enter [y/n].") def Login(answer): if answer == "y": while True: askUsername = input("ENTER USERNAME: ") askPassword = input("ENTER PASSWORD: ") if len(askUsername) &gt; 0 and askUsername.isalpha(): if askUsername in validUser and valid[askUsername] == askPassword: print("\nLogin Success!\n") break else: print("Error: Incorrect username or password!") else: print("Error: Don't be silly!") elif answer == "n": while True: createUsername = input("ENTER NEW USERNAME: ") if len(createUsername) &gt; 0 and createUsername.isalpha(): if createrUsername in validUser: print("Meesage: username already exists.") else: createPassword = input("ENTER NEW PASSWORD: ") if len(createPassword) &gt; 0 and createPasssword.isalpha(): validUser[createUsername] == createPassword print("\nUSER CREATED\n") else: print("Error: Please choose a different password.") else: print("Error: Don't be silly!") else: print("Error: You have not entered [y/n]") </code></pre>
<p>Put <code>answer = mainLogin()</code> at the bottom of your file.</p> <p>The functions below have not yet been registered by Python. Python will go through your script/module line by line. When you have code in the root of the module it will be executed.</p> <p>To solve this problem you can also do the following:</p> <pre><code>def main(): answer = mainLogin() def mainLogin(): # code if __name__ == "__main__": main() </code></pre> <p>Here Python will inspect the entire file from top to bottom before we start doing any work. This is the standard way to solve the problem.</p> <p><code>__name__</code> is the name of the module. It will be assigned the name <code>__main__</code> if we specifically start the module with the python command, e.g. <code>python script.py</code>. If we import the module from another script, <code>__name__</code> will contain the actual name of the module, so <code>main()</code> will not be called.</p>
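<p>A minimal sketch of why this works: definition order only matters at call time, so a function defined later in the file can be used by one defined earlier, as long as the actual call happens after both definitions (function names here are my own):</p>

```python
def main():
    print(greet("world"))  # prints: hello world

def greet(name):
    # greet is defined after main in the file, but that is fine:
    # main only looks the name up when it is actually *called*,
    # and by the time the call below runs both functions exist
    return "hello " + name

if __name__ == "__main__":
    main()
```

<p>Move the <code>main()</code> call above the <code>def greet</code> line and you get the same <code>NameError</code>-style failure you are seeing.</p>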
python|function|variables|authentication
1
1,901,674
71,423,991
Getting "AttributeError: 'float' object has no attribute 'replace'" error
<pre><code>df['Mass']=df['Mass'].apply(lambda x: x.replace('$', '').replace(',', '')).astype(float) </code></pre>
<p>It looks like <code>x</code> is a float, but <code>replace</code> only works on strings. A cast should do it:</p> <pre><code>df['Mass']=df['Mass'].apply(lambda x: str(x).replace('$', '').replace(',', '')).astype(float) </code></pre>
python|pandas|dataframe
0
1,901,675
39,124,363
Check if a directory is a mount point with python 2.7
<p>Is there a pythonic way and without shell commands (i.e. with subprocess module) to check if a directory is a mount point? </p> <p>Up to now I use:</p> <pre><code>import os import subprocess def is_mount_point(dir_path): try: check_output([ 'mountpoint', path.realpath(dir_name) ]) return True except CalledProcessError: return False </code></pre>
<p>There is an <a href="https://docs.python.org/2/library/os.path.html#os.path.ismount" rel="noreferrer"><code>os.path.ismount(path)</code></a>.</p> <blockquote> <p>Return True if pathname path is a mount point: a point in a file system where a different file system has been mounted. The function checks whether path‘s parent, path/.., is on a different device than path, or whether path/.. and path point to the same i-node on the same device — this should detect mount points for all Unix and POSIX variants.</p> </blockquote> <pre><code>import os os.path.ismount(dir_name) # returns boolean </code></pre> <p>You may also refer to <a href="https://github.com/python/cpython/blob/master/Lib/posixpath.py#L180" rel="noreferrer">implementation</a> (if you're on POSIX system). Check <code>macpath.py</code> or <code>ntpath.py</code> for other platforms.</p>
python|python-2.7
18
1,901,676
52,851,531
Qualified import in Python
<p>I am looking for a way to import certain methods from a module in a <em>qualified</em> manner; for instance (pseudo-code),</p> <pre><code>from math import sqrt as math.sqrt # ... SQRT2 = math.sqrt(2) </code></pre> <p>Is this possible?</p> <p>This is useful for managing namespaces, in order not to pollute the global namespace. Futhermore, this scheme clearly indicates the source of a method/class in any part of the code. I can also use an <code>import math</code>, but then the actual required methods (eg., <code>sqrt</code>) will be implicit.</p>
<p>You can use the built-in <a href="https://docs.python.org/3/library/functions.html#__import__" rel="nofollow noreferrer"><code>__import__</code></a> function with the <code>fromlist</code> parameter instead:</p> <pre><code>math = __import__('math', fromlist=['sqrt']) SQRT2 = math.sqrt(2) </code></pre>
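<p>The same effect can be had with the higher-level <code>importlib</code> API; the standard library documentation recommends <code>importlib.import_module()</code> over calling <code>__import__</code> directly:</p>

```python
import importlib

# bind the module under a name of your choice, keeping calls qualified
math = importlib.import_module("math")
SQRT2 = math.sqrt(2)
print(SQRT2)  # 1.4142135623730951
```
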
python-import|qualified-name
3
1,901,677
52,733,480
Machine learning algorithm which gives multiple outputs mapped from single input
<p>I need some help, i am working on a problem where i have the OCR of an image of an invoice and i want to extract certain data from it like invoice number, amount, date etc which is all present within the OCR. I tried with the classification model where i was individually passing each sentence from the OCR to the model and to predict it the invoice number or date or anything else, but this approach takes a lot of time and i don't think this is the right approach.</p> <p>So, i was thinking whether there is an algorithm where i can have an input string and have outputs mapped from that string like, invoice number, date and amount are present within the string.</p> <p>E.g:</p> <pre><code>Inp string: The invoice 1234 is due on 12 oct 2018 with amount of 287 Output: Invoice Number: 1234, Date: 12 oct 2018, Amount 287 </code></pre> <p>So, my question is, is there an algorithm which i can train on several invoices and then make predictions?</p>
<p>What you are searching for is invoice data extraction ML. There are plenty of ML algorithms available, but none of them is built for your use case. Why? Because it is a very special use case. You can't just use Tensorflow and use sentences as input, although it can return multiple outputs.</p> <p>You could use NLP (natural language processing) approaches to extract data. It is used by <a href="http://taggun.io/" rel="nofollow noreferrer">Taggun</a> to extract data from receipts. In that case, you can use only sentences. But you will still need to convert your sentences into NLP form (tokenization).</p> <p>You could use deep learning (e.g. Tensorflow). In that case, you need to vectorize your sentences into vectors that can be input into a neural network. This approach needs much more creativity, as there is no standard approach to do that. The goal is to describe every sentence as well as possible. But there is still one problem - how to parse dates, amounts, etc. Would it help the NN if you marked sentences with <em>contains_date</em> True/False? Probably yes. A similar approach is used in invoice data extraction services like:</p> <ul> <li><a href="https://rossum.ai" rel="nofollow noreferrer">rossum.ai</a></li> <li><a href="https://typless.com" rel="nofollow noreferrer">typless.com</a></li> </ul> <p>So if you are doing it for fun/research I suggest starting with a really simple invoice. Try to write a program that will extract invoice number, issue date, supplier and total amount with parsing and <em>if</em> statements. It will help you to define properties for the feature vector input of a NN. For example, <em>contains_date</em>, <em>contains_total_amount_word</em>, etc. 
See this <a href="https://elitedatascience.com/keras-tutorial-deep-learning-in-python" rel="nofollow noreferrer">tutorial</a> to start with NN.</p> <p>If you are using it for work I suggest taking a look at one of the existing services for invoice data extraction.</p> <p>Disclaimer: I am one of the creators of typless. Feel free to suggest edits.</p>
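<p>As a starting point for the "simple parsing" suggestion, here is a naive regex sketch that only handles phrasing like the example sentence (the patterns are assumptions, not a general solution):</p>

```python
import re

def extract(text):
    # Naive patterns tailored to the example phrasing only, e.g.
    # "The invoice 1234 is due on 12 oct 2018 with amount of 287";
    # real invoices would need far more robust parsing.
    number = re.search(r"invoice\s+(\d+)", text, re.I)
    date = re.search(r"due on\s+(\d{1,2}\s+\w+\s+\d{4})", text, re.I)
    amount = re.search(r"amount of\s+(\d+(?:\.\d+)?)", text, re.I)
    return {
        "invoice_number": number.group(1) if number else None,
        "date": date.group(1) if date else None,
        "amount": amount.group(1) if amount else None,
    }

print(extract("The invoice 1234 is due on 12 oct 2018 with amount of 287"))
# {'invoice_number': '1234', 'date': '12 oct 2018', 'amount': '287'}
```

<p>Whether each pattern matched (True/False) is exactly the kind of boolean feature you could later feed into a classifier.</p>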
python|machine-learning|deep-learning|ocr
0
1,901,678
47,614,940
I am getting an error in my Python code
<p>Right now I'm making an easy video game in Python using the turtle library, but this error appeared and I don't know how to solve it.</p> <p>My code is here: </p> <p>pastebin.com/wu5jM0gT</p> <p>Error:</p> <pre><code>Traceback (most recent call last): File "C:/Users/ricar/PycharmProjects/Juego/Juego.py", line 122, in &lt;module&gt; objetivo.movimiento() File "C:/Users/ricar/PycharmProjects/Juego/Juego.py", line 91, in movimiento self.forward(self.speed) File "C:\Python27\lib\lib-tk\turtle.py", line 1553, in forward self._go(distance) File "C:\Python27\lib\lib-tk\turtle.py", line 1520, in _go ende = self._position + self._orient * distance File "C:\Python27\lib\lib-tk\turtle.py", line 277, in __mul__ return Vec2D(self[0]*other, self[1]*other) TypeError: unsupported operand type(s) for *: 'float' and 'instancemethod' </code></pre> <p>Any suggestions? </p>
<p>You forgot the parentheses. <code>speed</code> is a method, so <code>self.speed</code> is the method object itself rather than a number:</p> <pre><code>self.forward(self.speed()) </code></pre>
python
0
1,901,679
37,220,460
Python: Catching an exception works outside of a function but not inside a function
<p>I have a strange problem which I can't solve myself.</p> <p>If I execute <code>outside_func.py</code> in two separate terminals, the second execution catches the BlockingIOError exception and the message is printed:</p> <p><strong>outside_func.py</strong></p> <pre><code>import fcntl import time # Raise BlockingIOError if same script is already running. try: lockfile = open('lockfile', 'w') fcntl.flock(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB) except BlockingIOError: print('Script already running.') time.sleep(20) </code></pre> <p>If I do the same with <code>inside_func.py</code> nothing is caught and no message is printed:</p> <p><strong>inside_func.py</strong></p> <pre><code>import fcntl import time # Raise BlockingIOError if same script is already running. def script_already_running(): try: lockfile = open('lockfile', 'w') fcntl.flock(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB) except BlockingIOError: print('Script already running.') script_already_running() time.sleep(20) </code></pre> <p>Any ideas?</p>
<p>The file is closed when you leave the function, so the two snippets are not the same. In the snippet where the try is <em>outside of a function</em> there is still a reference to the file object in the scope of the sleep call, so further attempts to lock the <em>lockfile</em> rightly raise. If you change the function by moving the sleep inside it, you will see the error raised, as now you have comparable code:</p> <pre><code>import fcntl import time # Raise BlockingIOError if same script is already running. def script_already_running(): try: lockfile = open('lockfile', 'w') fcntl.flock(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB) except BlockingIOError: print('Script already running.') time.sleep(20) script_already_running() </code></pre>
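<p>A single-process sketch of the same point (Linux/POSIX only; the helper name is mine): the lock lives exactly as long as the file object. A second descriptor fails while the first is still referenced, and succeeds once it is closed:</p>

```python
import fcntl
import os
import tempfile

LOCKFILE = os.path.join(tempfile.gettempdir(), "flock_demo.lock")

def try_lock(path):
    # Return the open file object on success (the caller must keep it
    # referenced, since the lock lives as long as the file stays open),
    # or None if another descriptor already holds the lock.
    f = open(path, "w")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f
    except BlockingIOError:
        f.close()
        return None

first = try_lock(LOCKFILE)   # acquired; `first` keeps the lock alive
second = try_lock(LOCKFILE)  # a second descriptor cannot take it
print(first is not None, second is None)  # True True
first.close()                # closing the file releases the lock
third = try_lock(LOCKFILE)   # now it succeeds again
third.close()
```

<p>This is why returning from the function without keeping the file object around silently drops the lock.</p>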
python|exception|exception-handling
0
1,901,680
34,010,974
I can't get this code to work in Python
<pre><code>keepgoing = True num1 = int(input("Enter a Number")) num2 = int(input("Enter a Number")) bignums = 0 smallnums = 0 counter = 0 while keepgoing: if num1 &gt; num2: bignums = bignums + num1 smallnums = smallnums + num2 else: bignums = bignums + num2 smallnums = smallnums + num1 counter + 1 if num1 == 0: keepgoing = False print (bignums / counter) print (smallnums / counter) </code></pre> <p>The program i am writing: Write a program that enters pairs of numbers until the first number in the pair is 0. The program will add the smallest number to a total for smaller numbers, the largest number to a total for the largest numbers. After exit from the loop, it will print the average of the smaller numbers, the average of the higher numbers, and the highest and lowest numbers entered.</p> <p>I think i did everything right, but it wont end/exit the loop when you type "0". Also i don't know how to make it show the highest and lowest number. Can anyone help me please?</p>
<p>Do something like this, as suggested by @Joseph: keep the number entry inside the loop:</p> <pre><code>keepgoing = True bignums = 0 smallnums = 0 counter = 0 while keepgoing: num1 = int(input("Enter a First Number")) num2 = int(input("Enter a Second Number")) if num1 &gt; num2: bignums = bignums + num1 smallnums = smallnums + num2 else: bignums = bignums + num2 smallnums = smallnums + num1 counter += 1 if num1 == 0: keepgoing = False print("Average of big numbers", bignums / counter) print("Average of small numbers", smallnums / counter) </code></pre> <p>It would also be good to handle edge cases, such as the user entering the same number for both. Let me know if you need more help.</p>
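<p>Since the question is tagged python-3.x, here is a Python 3 sketch with the input collection factored out, which also tracks the highest and lowest values the original assignment asks for (the function name and structure are my own):</p>

```python
def summarize(pairs):
    # pairs: (num1, num2) tuples; stop at the first pair whose
    # first number is 0 (that sentinel pair is not counted)
    big_total = small_total = count = 0
    highest = lowest = None
    for a, b in pairs:
        if a == 0:
            break
        big, small = max(a, b), min(a, b)
        big_total += big
        small_total += small
        count += 1
        highest = big if highest is None else max(highest, big)
        lowest = small if lowest is None else min(lowest, small)
    return big_total / count, small_total / count, highest, lowest

print(summarize([(3, 7), (10, 2), (0, 5)]))  # (8.5, 2.5, 10, 2)
```

<p>In the real script you would build <code>pairs</code> from <code>int(input(...))</code> calls inside the loop; keeping the arithmetic in its own function makes it easy to test.</p>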
python|python-3.x
1
1,901,681
39,541,846
No chars as expected (???-??????????.?? instead)
<p>I am using python3.5 asyncio+aiomysql wrapped in docker to extract some strings from my db. I expect to get 'ваш-шиномонтаж.рф' but got ???-??????????.?? instead. (Mysql table use utf-8 encoding.)</p> <p>Here is a code:</p> <pre><code># encoding: utf-8 import asyncio from aiomysql import create_pool async def get_pool(loop): return await create_pool(host='127.0.0.1', port=3306, user='dbu', password='pwd', db='db', loop=loop) async def get_sites(pool): async with pool.acquire() as conn: async with conn.cursor() as cur: await cur.execute( "select canonic_domain from site where id=1132", ()) sites = await cur.fetchall() for s in sites: print(type(s[0])) print(s[0]) return sites def process(): loop = asyncio.get_event_loop() pool = loop.run_until_complete(loop.create_task(get_pool(loop))) sites = loop.run_until_complete(loop.create_task(get_sites(pool))) if __name__ == "__main__": process() </code></pre> <p>the output:</p> <pre><code>&lt;class 'str'&gt; ???-??????????.?? I expect: &lt;class 'str'&gt; 'ваш-шиномонтаж.рф' what could be the problem? </code></pre>
<p>Solved by adding these parameters to the connection: <code>charset='utf8', use_unicode=True</code>.</p>
mysql|encoding|docker|character-encoding|python-3.5
0
1,901,682
39,446,579
Difference between devpi and pypi server
<p>Had a quick question here, am used to devpi and was wondering what is the difference between devpi and PyPI server?</p> <p>Is one better than the other? Which one scales better?</p> <p>Cheers</p>
<p><strong>PyPI</strong> (Python Package Index) is the official repository for third-party Python software packages. Every time you use e.g. <code>pip</code> to install a package that is not part of the standard library, it will get downloaded from the PyPI server.</p> <p>All of the packages that are on PyPI are publicly visible. So if you upload your own package then anybody can start using it. And obviously you need internet access in order to use it.</p> <p><strong>devpi</strong> (not sure what the acronym stands for) is a self-hosted private Python package server. Additionally you can use it for testing and releasing your own packages.</p> <p>Being self hosted it's ideal for proprietary work that maybe you wouldn't want (or can't) share with the rest of the world.</p> <p>So other features that devpi offers:</p> <ul> <li>PyPI mirror - cache locally any packages that you download from PyPI. This is excellent for CI systems. You don't have to worry if a package or server goes missing. You can even still use it if you don't have internet access.</li> <li>multiple indexes - unlike PyPI (which has only one index) in devpi you can create multiple indexes. For example a <code>main</code> index for packages that are rock solid and <code>development</code> where you can release packages that are still under development. Although you have to be careful with this because a large number of indexes can make things hard to track. </li> <li>The server has a simple web interface where you can browse and search for packages.</li> <li>You can integrate it with <code>pip</code> so that you can use your local devpi server as if you were using PyPI.</li> </ul> <p>So answering your questions:</p> <ul> <li><em>Is one better than the other?</em> - well these are two different tools really.
No clear answer here, depends on what your needs are.</li> <li><em>Which scales better?</em> - definitely devpi.</li> </ul> <p>The official website is very useful with good examples: <a href="http://doc.devpi.net/latest/" rel="nofollow noreferrer">http://doc.devpi.net/latest/</a></p>
python|pypi|devpi|python-packaging
0
1,901,683
39,700,904
Why use pandas.DataFrame.copy() for column extraction
<p>I've recently seen this kind of code:</p> <pre><code>import pandas as pd data = pd.read_csv('/path/to/some/data.csv') colX = data['colX'].copy() data.drop(labels=['colX'], inplace=True, axis=1) </code></pre> <p>I know that, to make an explicit copy of an object, I need <code>copy()</code>, but in this case, when extracting and subsequent deletion of a colum, is there a good reason to use <code>copy()</code>?</p>
<p>@EdChum stated in the comments:</p> <blockquote> <p>the user may want to separate that column from the main df, of course if the user just wanted to delete that column then taking a copy is pointless if their intention is to delete the column but for instance they didn't take a copy and instead took a reference then operations on that column may or may not affect the orig df if you didn't drop it.</p> </blockquote>
python|pandas
0
1,901,684
31,857,553
can you add a charfield to a list of checkboxes in django
<p>I have a form with a list of checkboxes. If the user can't find what he wants, he should be able to check 'Other', and then fill in a text input, hit submit and the form will register the value of the input as choice.</p> <p>Is there a way of escaping the pre-defined options found in forms.py? When I change the value of the checkbox, it fails Django validation on submission.</p> <p>These are the options in forms.py</p> <pre><code>INTEREST_DESTINATION_CHOICES = ( ('cornwall', 'Cornwall'), ('cotswolds', 'Cotswolds'), ('east anglia', 'East Anglia'), ('Lake District', 'Lake District'), ('Devon', 'Devon'), ('Dorset', 'Dorset'), ('Peak District', 'Peak District'), ('Wales', 'Wales'), ('Sussex', 'Sussex'), ('Other', 'Other'), ) </code></pre> <p>This seems like a common feature for forms, but can't find anything about it online or in the docs</p>
<p>You can use a separate field that is hidden and only shown and validated if the "Other" option is selected. The way you store that information and send it to your model does not make a difference, since you are really storing strings.</p>
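A framework-free sketch of that validation logic (the field names and the reduced choice set are made up; in a Django form this would live in a `clean()` method):

```python
# Hypothetical predefined choices, mirroring the question's CHOICES tuple.
PREDEFINED = {'cornwall', 'cotswolds', 'Devon', 'Wales', 'Other'}

def resolve_destination(choice, other_text=''):
    """Return the string to store: the picked choice, or the free text
    typed next to the 'Other' checkbox."""
    if choice != 'Other':
        if choice not in PREDEFINED:
            raise ValueError('not a valid choice: %r' % choice)
        return choice
    if not other_text.strip():
        raise ValueError("please fill in the text input for 'Other'")
    return other_text.strip()

print(resolve_destination('Devon'))          # Devon
print(resolve_destination('Other', 'Skye'))  # Skye
```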
python|django|forms
0
1,901,685
38,887,025
How to insert dictionary items into a PostgreSQL table
<p>So I've connected my Postgresql database into Python with Psycopg2, and I've pulled two specific columns to update. The first column I have used as the keys in a Python dictionary, and I have run some functions on the second one and use the results as the values in the dictionary. Now what I want to do is add those values back into Postgresql table as a new column, but I want them to pair with the correct keys they are paired with in the dictionary. Essentially, I want to take dictionary values and insert them as a new column and pick which "key" in the Postgresql table they belong to (however, I don't want to manually assign them, because, well, there's hopefully a better way).</p> <p>Postgresql Table </p> <pre><code> |col1 |col2 |col3 | ... | coln row1 | a1 | b1 | c1 | ... | n1 row2 | a2 | b2 | c2 | ... | n2 ... | ... | ... | ... | ... | n... rowm | am | bm | cm | ... | nm </code></pre> <p>This is the dictionary I made in Python, where <code>f()</code> is a series of functions ran on variable: </p> <pre><code>{ a1 : f(c1), a2 : f(c2), ... : ... } </code></pre> <p>Now my goal is to add the values column back into my table so that it corresponds to the original keys. Ideally, to look something like this:</p> <pre><code> |col1|col2|col3| ... |newcol| coln row1 | a1 | b1 | c1 | ... | f(c1)| n1 row2 | a2 | b2 | c2 | ... | f(c2)| n2 ... | ...| ...| ...| ... | ... | n... rowm | am | bm | cm | ... | f(cm)| nm </code></pre> <p>I know I can insert the column into the table, but not sure how to pair it with keys. Any help is very much appreciated!</p>
<p>You want an <code>UPDATE</code> statement, something like the following:</p> <pre><code>import psycopg2

con = psycopg2.connect('your connection string')
cur = con.cursor()

# add newcol
cur.execute('ALTER TABLE your_table ADD COLUMN newcol text;')
con.commit()

# your_dict maps col1 values to the new column's values
# (.items() on Python 3; .iteritems() is Python 2 only)
for k, v in your_dict.items():
    cur.execute('''UPDATE your_table
                   SET newcol = (%s)
                   WHERE col1 = (%s);''', (v, k))
    con.commit()

cur.close()
con.close()
</code></pre>
python|postgresql|dictionary|key-value|psycopg2
1
1,901,686
40,566,324
Django - django-autocomplete-light setup how to
<p>I am following the tutorial on how to set up django-autocomplete fields and I'm struggling to get it working. Here's the tutorial: <a href="https://django-autocomplete-light.readthedocs.io/en/master/tutorial.html" rel="noreferrer">https://django-autocomplete-light.readthedocs.io/en/master/tutorial.html</a></p> <p>settings Installed Apps</p> <pre><code>INSTALLED_APPS = (
    'dal',
    'dal_select2',
    'django.contrib.admin',
</code></pre> <p>project urls.py</p> <pre><code>from textchange.views import TextbookAutoComplete

urlpatterns = [
    url(r'^textbook-autocomplete$', TextbookAutoComplete.as_view(), name='textbook-autocomplete'),
</code></pre> <p>HTML</p> <pre><code>&lt;form method="POST"&gt;
  {% csrf_token %}
  {% for field in form3 %}
    {{ field }}
  {% endfor %}
  &lt;input id="search" class="button" type="submit" value="Search Textbooks" name="Search"&gt;&lt;/input&gt;
&lt;/form&gt;
</code></pre> <p>Forms.py</p> <pre><code>class Search(forms.ModelForm):
    longschool = forms.ModelChoiceField(
        queryset=Textbook.objects.all(),
        widget=autocomplete.ModelSelect2(url='textbook-autocomplete')
    )
    class_name = forms.ModelChoiceField(
        queryset=Textbook.objects.all(),
        widget=autocomplete.ModelSelect2(url='textbook-autocomplete')
    )
    isbn = forms.ModelChoiceField(
        queryset=Textbook.objects.all(),
        widget=autocomplete.ModelSelect2(url='textbook-autocomplete')
    )

    class Meta:
        model = Textbook
        fields = ('longschool', 'class_name', 'isbn')
</code></pre> <p>Views.py</p> <pre><code>class TextbookAutoComplete(autocomplete.Select2QuerySetView):
    def get_queryset(self):
        # Don't forget to filter out results depending on the visitor !
        if not self.request.user.is_authenticated():
            return Textbook.objects.none()

        qs = Textbook.objects.all()

        if self.q:
            qs = qs.filter(name__istartswith=self.q)

        return qs
</code></pre> <p>Jquery added</p> <pre><code>&lt;script type="text/javascript" src="{% static "admin/js/jquery.js" %}"&gt;&lt;/script&gt;
</code></pre> <p>When the form shows up in my html it's just three dropdowns without input fields (as in without anywhere to type). Can anyone see what I am missing?</p> <p>Any help would be greatly appreciated.</p>
<p>I just ran into the same problem and was able to solve it. I had 2 issues: I had forgotten to load <code>{{form.media}}</code> below the form that I loaded.<br> <br>Make sure that you run the python manage.py collectstatic command after installing the django-autocomplete-light package. (This will copy the static files of this third-party package into your own static folder.) <br><br> django-autocomplete-light tried to load these static files from <code>static/admin/js/vendor/</code>, but for some reason the files were not there and could not be loaded. I manually included the javascript and css files in the header of the template where my form with the autocomplete input was loaded, and that finally worked: <br></p> <p>when loading the form in a template: </p> <pre><code>&lt;html&gt;
&lt;head&gt;
  &lt;script src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"&gt;&lt;/script&gt;
  &lt;script src="http://code.jquery.com/ui/1.9.2/jquery-ui.js"&gt;&lt;/script&gt;
  &lt;link href="https://cdnjs.cloudflare.com/ajax/libs/select2/4.0.6-rc.0/css/select2.min.css" rel="stylesheet" /&gt;
  &lt;script src="https://cdnjs.cloudflare.com/ajax/libs/select2/4.0.6-rc.0/js/select2.full.js"&gt;&lt;/script&gt;
&lt;/head&gt;
&lt;body&gt;
  {{ form.as_p }}
  {{ form.media }}
&lt;/body&gt;
</code></pre>
jquery|python|django|autocomplete
7
1,901,687
9,887,165
Django localization with unnamed string arguments
<p>I've made a few Badge classes in Django, each containing some sort of description in a string variable:</p> <pre><code>"You get this badge because you've runned %d meters in %d minutes"
"You get this badge because you've killed %d monsters of the type %s"
</code></pre> <p>etc. And the classes also have a function <code>get_description(badge_level_requirements)</code>, so in the templates it will be called together with a list to assemble the string for a specific user: </p> <pre><code>class RunnerBadge(Badge):
    des = ugettext_lazy("You get this badge because you've runned %d meters in %d minutes")

    def get_description(cls, badge_level_requirements):
        return cls.des % badge_level_requirements
</code></pre> <p>And I've stored the requirements lists in the database without any argument names already :( As shown in the examples, different classes have different numbers of values to fill in the string, and the values mean different things as well. So I can't really name the arguments. </p> <p>However, if I want to internationalize these strings, there'll be errors: <code>'msgid' format string with unnamed arguments cannot be properly localized</code>, and the language file cannot be generated for this matter.</p> <p>Is there a way to bypass this error?</p> <p><strong>Update</strong></p> <p>I've come across this method for bypassing the error without changing the database. In the database, the level requirements are stored in a text field in the format of a dict:</p> <pre><code># Requirement of Runner's badge
"{'gold':(100,10),'silver':(50,5),'bronze':(25,2)}"
</code></pre> <p>And in the class definition, manually add argument names such as 'arg_0', 'arg_1', ... to the descriptions. 
The <code>get_description</code> method is changed to pre-process the data before it is used to fill the description strings.</p> <pre><code>class RunnersBadge(Badge):
    requirements=#get the previous dict from database
    description = "You get this badge because you've runned %(arg_0)d meters in %(arg_1)d minutes"

    @classmethod
    def get_description(cls, level):
        '''
        This is actually a method of parent class Badge.
        level is either 'gold', 'silver' or 'bronze'.
        '''
        dic = {}
        try:
            for (num, val) in enumerate(cls.requirements[level]):
                dic['arg_' + str(num)] = val
        except TypeError:
            dic['arg_0'] = cls.requirements[level]
        return cls.description % dic
</code></pre> <p>This method keeps most of the current structure (logic and database), and the translator just needs to take care of the words' placement.</p>
<ol> <li><p>like in code, <strong><a href="http://c2.com/cgi/wiki?MeaningfulName" rel="nofollow">variable names should be meaningful within their context</a></strong>: 'meter_count' and 'minute_count' are explicit, compared to 'arg_0' and 'arg_1', which are meaningless</p></li> <li><p>use <strong><a href="https://docs.djangoproject.com/en/dev/topics/i18n/translation/#working-with-lazy-translation-objects" rel="nofollow">standard translation in python code</a></strong>, it is less error prone and is recognized by the amazingly useful makemessages command</p></li> <li><p>use <strong><a href="https://docs.djangoproject.com/en/dev/topics/i18n/translation/#standard-translation" rel="nofollow">named-string interpolation</a></strong> (e.g., <code>%(day)s</code>) instead of positional interpolation (e.g., <code>%s</code> or <code>%d</code>) whenever you have more than a single parameter, because the order of parameters can change depending on the language. I.e. Germanic and Latin languages invert the noun/adjective order, dates are displayed differently depending on the language, etc, etc ...</p></li> <li><p>use <strong><a href="http://en.wiktionary.org/wiki/ran#Verb" rel="nofollow">ran instead of runned</a></strong>, and double-check the syntactical validity of your English translation strings</p></li> </ol> <p>This:</p> <pre><code>class RunnersBadge(Badge):
    requirements=#get the previous dict from database
    description="You get this badge because you've runned %(arg_0)d meters in %(arg_1)d minutes"
</code></pre> <p>Becomes:</p> <pre><code>from django.utils.translation import ugettext as _

class RunnersBadge(Badge):
    requirements=#get the previous dict from database
    description=_("You get this badge because you've ran %(meter_count)d meters in %(minute_count)d minutes")
</code></pre>
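Point 3 can be demonstrated without Django at all — named placeholders let each translation put the values in a different order, which positional `%d` cannot do (the French-style wording below is only for illustration):

```python
# Two templates for the same message; only the placeholder order differs.
template_en = "ran %(meter_count)d meters in %(minute_count)d minutes"
template_fr = "a couru, en %(minute_count)d minutes, %(meter_count)d metres"

values = {'meter_count': 100, 'minute_count': 10}
print(template_en % values)  # ran 100 meters in 10 minutes
print(template_fr % values)  # a couru, en 10 minutes, 100 metres
```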
python|django|localization|internationalization
5
1,901,688
68,422,454
Reading Yahoo Finance: Extract a smaller table from a bigger table Python (historical low price)
<p>I have a table from yahoo finance with one year of data.</p> <p>The table is like the figure below:</p> <p><a href="https://i.stack.imgur.com/e7UQ3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e7UQ3.jpg" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/NTqBM.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NTqBM.jpg" alt="enter image description here" /></a></p> <p>I want to join two columns of this table to make a new one. I want the columns date and price.</p> <p>Can anyone give me a tip?</p>
<p>I got the first part of your question, where you only want to see a table (pandas dataframe) with the columns date and low.</p> <p>Column selection in pandas is very easy. You put the columns you'd like to see in a python list, and then pass that list to the dataframe the same way you would when subsetting.</p> <pre><code>mycolumns = [&quot;date&quot;, &quot;low&quot;]
newdf = bvsp2[mycolumns]
print(newdf)
</code></pre> <p>Output</p> <pre><code>          date      low
0   2017-02-16  67661.0
1   2017-02-17  67158.0
2   2017-02-20  67756.0
3   2017-02-21  68536.0
4   2017-02-22  68282.0
..         ...      ...
</code></pre>
python|pandas|yahoo-finance
0
1,901,689
68,356,141
Python Flask Blog Message without format
<p>I have code for a simple message blog where everybody can leave a message. One of the problems is that the messages displayed from the sqlite db are without format, e.g. without paragraphs. How can I improve it (or add markdown support)? I appreciate your help. Thank you.</p> <p>main app py</p> <pre class="lang-py prettyprint-override"><code>@app.route('/')
def index():
    conn = db_conn()
    posts = conn.execute('SELECT * FROM table_posts').fetchall()
    conn.close()
    return render_template('index.html', posts=posts)

@app.route('/create_new_post', methods=('GET', 'POST'))
def create_new_post():
    if request.method == 'POST':
        content = request.form['content']
        conn = db_conn()
        conn.execute('INSERT INTO table_posts (content) VALUES (?)', (content,))
        conn.commit()
        conn.close()
        return redirect(url_for('index'))
    else:
        return render_template('create_new_post.html')
</code></pre> <p>index.html</p> <pre><code>{% extends 'base.html' %}
{% block title %} Simple Message Board {% endblock %}
{% block content %}
{% for post in posts %}
&lt;br&gt;
&lt;div class=&quot;card&quot;&gt;
  &lt;div class=&quot;card-body&quot;&gt;
    &lt;p class=&quot;card-text&quot;&gt; {{ post['content'] }} &lt;/p&gt;
    &lt;span class=&quot;badge badge-secondary&quot;&gt;{{ post['time_stamp'] }}&lt;/span&gt;
  &lt;/div&gt;
&lt;/div&gt;
{% endfor %}
{% endblock %}
</code></pre> <p>The outcome I want is as follows:</p> <pre><code>Text of 1st line
Text of 2nd line
Text of 3rd line
</code></pre> <p>But the actual content displayed is as follows:</p> <pre><code>Text of 1st line Text of 2nd line Text of 3rd line
</code></pre>
<p>I think you are trying to render the body as html, not as text.</p> <p>Jinja autoescapes the text; you can stop autoescaping by using Jinja's <code>safe</code> filter:</p> <pre><code>{{ post['content']|safe }}
</code></pre> <p>You can also see the <a href="https://flask.palletsprojects.com/en/2.0.x/templating/#controlling-autoescaping" rel="nofollow noreferrer">docs</a></p>
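For reference, this is what autoescaping does under the hood — the same transformation as the standard library's `html.escape` — which is why any tags stored in the post body show up literally until you mark the value safe (and why you should only do that for content you trust):

```python
import html

# A post body that contains markup: autoescaping turns the angle
# brackets into entities, so the <br> displays as text, not a line break.
stored = "Text of 1st line<br>Text of 2nd line"
print(html.escape(stored))
# Text of 1st line&lt;br&gt;Text of 2nd line
```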
python|html|sqlite|flask
0
1,901,690
68,444,542
Python: How to split a string by phrase from right to left (first occurrence only)
<p>Is it possible to split a string on a phrase (potentially more than one word) in Python 3 from right to left (first occurrence only)?</p> <p>Currently I'm able to split a string based on a list of phrases but I have an edge case in that if more than one of those specified phrases occurs in the string then it splits on both.</p> <p><strong>The problem</strong></p> <p>Given a sample CSV containing the following:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>SENTENCES</th> <th></th> </tr> </thead> <tbody> <tr> <td>THIS IS SENTENCE THREE</td> <td>1</td> </tr> <tr> <td>THIS IS SENTENCE TWO</td> <td>2</td> </tr> <tr> <td>I CONTAIN ONE BUT ALSO TWO</td> <td>3</td> </tr> </tbody> </table> </div> <p>And my code which opens a CSV, loops through each row, and then looks to split out specified phrases:</p> <pre class="lang-py prettyprint-override"><code>import re import csv def split_phrase(string): phrases = ['ONE', 'TWO', 'THREE'] print(f'Raw: {string}') split_phrase = '' # Only needed for testing purposes to prevent error on output for phrase in phrases: if phrase in string: list = re.split(f'\\b({phrase})\\b', string) print(f'Split: {list}') sentence = list[0] split_phrase = list[1] print(f'Phrase: {split_phrase}') return sentence, split_phrase input_dir = 'input1/' output_dir = 'output1/' filename = 'demo.csv' with open(input_dir + filename, 'r') as input_csv: csv_reader = csv.reader(input_csv) data = list(csv_reader) input_csv.close() for row in data[1:]: # Ignore the header row sentence = row[0] # First column sentence = split_phrase(sentence) # Split out specified phrase </code></pre> <p>I get the following output:</p> <pre><code>$ python3 demo.py Raw: THIS IS SENTENCE THREE Split: ['THIS IS SENTENCE ', 'THREE', ''] Phrase: THREE Raw: THIS IS SENTENCE TWO Split: ['THIS IS SENTENCE ', 'TWO', ''] Phrase: TWO Raw: I CONTAIN ONE BUT ALSO TWO Split: ['I CONTAIN ', 'ONE', ' BUT ALSO TWO'] Phrase: ONE Split: ['I CONTAIN ONE BUT ALSO ', 'TWO', 
''] Phrase: TWO </code></pre> <p><strong>NOTE:</strong> The last sentence is processed by the for loop twice due to it containing two of the phrases in the phrase list.</p> <p><strong>Desired outcome</strong></p> <p>I know that of the listed phrases to split out it will always be the last one on the right. So I'd like to grab <strong>only the first occurrence from right to left</strong>.</p> <p><strong>NOTE:</strong> A &quot;phrase&quot; can contain one or more words.</p> <p>Is this possible? And if so, how may I achieve it?</p>
<p>I've answered this by using <code>string.rfind()</code> to search from the end of the string, and iterating through the list of possible phrases. There may be better ways to do this that do not iterate, but this is the best I've found.</p> <pre><code>one = &quot;THIS IS SENTENCE THREE&quot; two = &quot;THIS IS SENTENCE TWO&quot; three = &quot;I CONTAIN ONE BUT ALSO TWO&quot; four = &quot;I CONTAIN ONE BUT ALSO TWO AND SOME MORE TEXT&quot; phrases = ['ONE', 'TWO', 'THREE'] def find_words(phrases, string): i = -1 p = &quot;&quot; for phrase in phrases: newI = string.rfind(phrase) if newI &gt; i: i = newI p = phrase return (string[:i], string[i:i+len(p)], string[i+len(p)::]) print(find_words(phrases, one)) print(find_words(phrases, two)) print(find_words(phrases, three)) print(find_words(phrases, four)) </code></pre> <p>Output:</p> <pre><code>('THIS IS SENTENCE ', 'THREE', '') ('THIS IS SENTENCE ', 'TWO', '') ('I CONTAIN ONE BUT ALSO ', 'TWO', '') ('I CONTAIN ONE BUT ALSO ', 'TWO', ' AND SOME MORE TEXT') </code></pre>
python|python-3.x|split
1
1,901,691
1,482,383
Understanding this class in python. The operator % and formatting a float
<pre><code>class FormatFloat(FormatFormatStr): def __init__(self, precision=4, scale=1.): FormatFormatStr.__init__(self, '%%1.%df'%precision) self.precision = precision self.scale = scale def toval(self, x): if x is not None: x = x * self.scale return x def fromstr(self, s): return float(s)/self.scale </code></pre> <p>The part that confuses me is this part</p> <pre><code>FormatFormatStr.__init__(self, '%%1.%df'%precision) </code></pre> <p>does this mean that the precision gets entered twice before the 1 and once before df? Does df stand for anything that you know of? I don't see it elsewhere even in its ancestors as can be seen here:</p> <pre><code>class FormatFormatStr(FormatObj): def __init__(self, fmt): self.fmt = fmt def tostr(self, x): if x is None: return 'None' return self.fmt%self.toval(x) class FormatObj: def tostr(self, x): return self.toval(x) def toval(self, x): return str(x) def fromstr(self, s): return s </code></pre> <p>also, I put this into my Ipython and get this:</p> <pre><code>In [53]: x = FormatFloat(.234324234325435) In [54]: x Out[54]: &lt;matplotlib.mlab.FormatFloat instance at 0x939d4ec&gt; </code></pre> <p>I figured that it would reduce precision to 4 and scale to 1. But instead it gets stored somewhere in my memory. Can I retrieve it to see what it does to the number?</p> <p>Thanks everyone you're very helpful!</p>
<pre><code>&gt;&gt;&gt; precision = 4
&gt;&gt;&gt; '%%1.%df' % precision
'%1.4f'
</code></pre> <p>%% gets translated to %</p> <p>1 is printed as is</p> <p>%d prints precision as a decimal number</p> <p>f is printed literally</p>
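Putting the two substitution steps together for a couple of precision values (a small illustration, not part of the original class):

```python
# First substitution builds the format string, second applies it to a float.
for precision in (2, 4):
    fmt = '%%1.%df' % precision  # the %% collapses to a single %
    print(fmt, '->', fmt % 3.14159)
# %1.2f -> 3.14
# %1.4f -> 3.1416
```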
python|class|operators
2
1,901,692
2,137,394
How to setup and run Python on Wampserver?
<p>Can anyone help me set up Python to run on Wampserver? From what I've read so far, you would need to use a combination of Wampserver, Python, mod_python, and adjustments to the Apache httpd.conf file. I've tried it, but I believe I am having conflicts when it comes to versions. Does anyone know of a combination of versions that can work so that I can do some local python development using my wampserver? Links to the downloads would be greatly appreciated.</p> <p>My current config: Wampserver 2.0c =&gt; Apache Version : 2.2.8 , PHP Version : 5.2.6 , MySQL Version : 5.0.51b </p>
<p>Do not use <code>mod_python</code>; it does not do what most people think it does. Use <a href="http://code.google.com/p/modwsgi/" rel="nofollow noreferrer"><code>mod_wsgi</code></a> instead.</p>
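For context, mod_wsgi serves a plain WSGI callable like the sketch below; the Apache side then only needs a WSGIScriptAlias directive pointing at the file (names here are illustrative):

```python
def application(environ, start_response):
    """Minimal WSGI entry point of the kind mod_wsgi expects."""
    body = b"Hello from Python behind Apache"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```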
python|apache|installation|mod-python|wampserver
3
1,901,693
32,344,985
grabbing and storing multiple regex variables as an array
<p>I ran into problems when trying to get this code to work and am currently stuck.</p> <p>At the moment I am grabbing multiple values from one pattern. The problem is that I think it stores the multiple values as a string. Ideally I want to be able to have each of the desired values stored in an array. Say there are 5 values per item and n items, I want my array to be 5 x n size. Currently it is just size n. My code is as follows:</p> <pre><code>import re import pickle regex = '''&lt;item&gt; &lt;first&gt;(.+?)&lt;/first&gt; &lt;second&gt;(.+?)&lt;/first&gt; ... &lt;fifth&gt;(.+?)&lt;/fifth&gt; &lt;/item&gt; ''' pattern = re.compile(regex) with open('d.dat') as searchfile: filetext = searchfile.read() results = re.findall(pattern, filetext) pickle.dump(results, open('save.p', 'wb')) </code></pre>
<pre><code>row = list(re.findall(pattern, filetext)[0])
</code></pre> <p><code>re.findall</code> returns a <em>list of tuples</em>, so you can convert each tuple back into a list.</p>
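To see the shape of what `findall` returns, here is a small made-up two-group example; converting every tuple (not just element `[0]`) gives the n-rows-by-groups structure described in the question:

```python
import re

# Two fake <item> records with two capture groups each.
text = ("<item><first>a1</first><second>b1</second></item>"
        "<item><first>a2</first><second>b2</second></item>")
pattern = re.compile(r"<first>(.+?)</first><second>(.+?)</second>")

rows = re.findall(pattern, text)
print(rows)                     # [('a1', 'b1'), ('a2', 'b2')]
print([list(t) for t in rows])  # [['a1', 'b1'], ['a2', 'b2']]
```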
python|arrays|regex
0
1,901,694
28,065,972
drawing transparent and click through text on top of all windows
<p><img src="https://i.stack.imgur.com/7Ky4o.jpg" alt="example screen">I am creating a lyrics display app in Tkinter and python for Ubuntu. It works fine for displaying lyrics in a window, but I want to create an on-screen presentation of lyrics lines like MiniLyrics does. So I want to draw over all windows, and the drawing should be click-through and have a transparent background. What strategy should I use to achieve this task?</p>
<p>You cannot use tkinter to do what you want. Tkinter can only affect the windows it creates. </p>
python|linux|user-interface|gtk|cairo
2
1,901,695
28,146,799
What is the best way to track a drone's position using a chessboard grid? (Python & OpenCV)
<p>I'm currently working on a project with drones and I need to be able to track the position of the drone. I'm planning on doing this using a chessboard (only using it inside so GPS function is not available) and using indexes such as A1, A2, B1, C7 etc.</p> <p>However, it's quite difficult to determine it's position doing it solely like this (say for example you push the drone and it passes a square). So I'd like to place tags on each square and do some sort of recognition on them. However, which tags would be the best to use? Since it's an 8*8 board, making it a total of 64 squares which means 64 tags.</p>
<p>Something that I used some time ago for a similar localization project is the <a href="http://www.hitl.washington.edu/artoolkit/" rel="nofollow">ARToolkit</a>. We used <a href="http://www.artoolworks.com/support/library/Creating_and_training_new_ARToolKit_markers" rel="nofollow"><em>markers</em></a> which could be detected by the toolkit to perform indoor localization.</p> <p><strong>First case: camera NOT mounted on the drone</strong></p> <p>For each square in your grid you could use a unique marker, and see if at any time it can be registered by the camera or not. If not, your drone is flying over it.</p> <p><strong>Second case: camera mounted on the drone</strong></p> <p>The ARToolkit allows you to calculate your distance to each tag. So if you always have at least three tags in your drone's view and the positions of those tags are known, you can simply triangulate and find the drone's position.</p>
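The triangulation step itself is simple geometry. A 2-D sketch (the tag layout and distances below are made up; real code would use the toolkit's measured distances): subtracting one circle equation from the other two leaves a small linear system.

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve (x, y) from three known tag positions and measured distances.

    Subtracting the first circle equation from the other two yields a
    2x2 linear system, solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero if the three tags are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Tags at three corners of the board, drone actually above (1, 2):
tags = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
dists = [math.dist((1, 2), t) for t in tags]
print(trilaterate(tags[0], dists[0], tags[1], dists[1], tags[2], dists[2]))
# approximately (1.0, 2.0)
```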
python|opencv
1
1,901,696
32,804,842
Python how to get multibyte variable size in 8-bit?
<p>I am fetching some remote binary file and saving it in the python variable. How can i get its size .i.e size after the file storing on disk ? without actually storing on disk.</p> <p>In php i am using it like this</p> <pre><code>&lt;?php $file_content contains remote downloaded binary file data. echo mb_strlen($file_content, '8bit'); </code></pre> <p>above code gives file size i.e. file data size stored in variable.</p> <p>whats the equivalent of it in python ?</p>
<p>The size of the content of a <code>bytes</code> or bytestring is its length.</p> <pre><code>&gt;&gt;&gt; len(b'12345')
5
</code></pre>
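Bytes versus characters matter here — `len()` on a `bytes` object reports bytes (matching PHP's `mb_strlen($s, '8bit')`), while `len()` on a `str` reports characters. The bytes returned by e.g. urllib's `read()` behave the same way:

```python
# One accented character takes two bytes in UTF-8:
payload = "héllo".encode("utf-8")
print(len(payload))   # 6 -- size in bytes, i.e. the on-disk size
print(len("héllo"))   # 5 -- characters, not bytes
```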
php|python|filesize|multibyte
1
1,901,697
32,803,103
Pip not Recognised in CMD
<p>I have installed Python Pip for the purpose of installing Pyaudio.<br/> I have downloaded <strong>PyAudio‑0.2.8‑cp26‑none‑win32.whl</strong> from <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#pyaudio" rel="nofollow">http://www.lfd.uci.edu/~gohlke/pythonlibs/#pyaudio</a> and placed it on my desktop. Here is the CMD log: <br/></p> <pre><code>C:\Users\Shadow Mori&gt; cd C:\users\shadow mori\desktop

C:\Users\Shadow Mori\Desktop&gt; pip install PyAudio‑0.2.8‑cp26‑none‑win32.whl
'pip' is not recognized as an internal or external command,
operable program or batch file.

C:\Users\Shadow Mori\Desktop&gt;
</code></pre> <p>When I change the directory to C:\python26\scripts\ pip is recognised, but I can't get to PyAudio‑0.2.8‑cp26‑none‑win32.whl no matter what the directory is. I have tried adding the System Variable of C:\python26\Scripts\ like you do with Python, but it doesn't work either. <br/> Thanks in advance for any help at all.</p>
<p>When you type a command in the command prompt, it looks in a number of directories to find the executable. This is usually the current directory and the directories listed in the PATH environment variable.</p> <p>A one-off solution is to provide the full path to the command, which skips the search entirely and tells windows exactly what you want to do...</p> <pre><code>cd C:\users\shadow mori\desktop
C:\python26\scripts\pip install PyAudio‑0.2.8‑cp26‑none‑win32.whl
</code></pre> <p>(You could also <code>cd</code> to the scripts directory and instead provide the full path to the <code>.whl</code>)</p> <p>A more permanent solution is to add the python scripts folder to your path.</p> <p>This varies depending on your version of windows, but on Win7 and 8 it is Control Panel-&gt;System-&gt;Advanced System Settings-&gt;Environment Variables (button)</p> <p>In the bottom half of the window, find PATH and edit it. </p> <p><a href="https://i.stack.imgur.com/ANVB3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ANVB3.png" alt="enter image description here"></a></p> <p>Add <code>;C:\Python26\Scripts</code> to the end of the existing value. The semicolon is to separate it from previous values. Eg:</p> <p><a href="https://i.stack.imgur.com/SslUq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SslUq.png" alt="enter image description here"></a></p> <p>The change will take effect for all new command prompts. You should now be able to run <code>pip</code> (or any other command in the scripts directory) from any location.</p>
python|pip|pyaudio
2
1,901,698
34,861,234
python: ignoring leading ">>>" and "..." in interactive mode?
<p>Many online python examples show interactive python sessions with normal leading ">>>" and "..." characters before each line.</p> <p>Often, there's no way to copy this code without also getting these prefixes.</p> <p>In these cases, if I want to re-paste this code into my own python interpreter after copying, I have to do some work to first strip off those prefixes.</p> <p>Does anyone know of a way to get python or iPython (or any other python interpreter) to automatically ignore leading ">>>" and "..." characters on lines that are pasted in?</p> <p>Example:</p> <pre><code>&gt;&gt;&gt; if True: ... print("x") ... </code></pre>
<p>IPython will do this for you automatically.</p> <pre><code>In [5]: &gt;&gt;&gt; print("hello")
hello

In [10]: &gt;&gt;&gt; print(
   ....: ... "hello"
   ....: )
hello
</code></pre>
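If you are not in IPython, a small helper can strip the prompts before you paste or `exec()` the code (a sketch, not a full parser — it only handles prompts at the start of a line):

```python
def strip_prompts(text):
    """Remove leading '>>> ' and '... ' doctest-style prompts."""
    out = []
    for line in text.splitlines():
        for prompt in ('>>> ', '... '):
            if line.startswith(prompt):
                line = line[len(prompt):]
                break
        else:
            # A bare prompt on its own (e.g. a trailing '...') becomes empty.
            if line in ('>>>', '...'):
                line = ''
        out.append(line)
    return '\n'.join(out)

print(strip_prompts(">>> if True:\n...     print('x')"))
# if True:
#     print('x')
```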
python|ipython|interpreter|pasting
5
1,901,699
7,788,170
append word from line when reading txt file python
<p>I am trying to create a program that will read a text file and create a list of lines of words.</p> <p>However, I am only able to append each line and not each word; any help would be appreciated with this problem.</p> <pre><code>text = open("file.txt","r")
for line in text.readlines():
    sentence = line.strip()
    list.append(sentence)
    print list
text.close()
</code></pre> <p>Example text</p> <pre><code>I am here
to do something
</code></pre> <p>and I wanted it to append it like this</p> <pre><code>[['I','am','here'],['to','do','something']]
</code></pre> <p>Thanks in advance.</p>
<p>Each <code>line</code> in the example is just a string, so something like the following would probably be okay to a first approximation:</p> <pre><code>...
PUNCTUATION = ',.?!"\''
words = [w.strip(PUNCTUATION) for w in line.split() if w.strip(PUNCTUATION)]
list.append(words)
...
</code></pre> <p>although it may not cover every edge case in the way that you want (i.e. hyphenated words, words not separated by whitespace, words that have a trailing apostrophe, etc.)</p> <p>The conditional is to avoid blank entries.</p>
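Put together as a complete, runnable version (the input file is created inline here so the example is self-contained; in your case it is the existing file.txt):

```python
PUNCTUATION = ',.?!"\''

# Create the sample input from the question so the example runs as-is.
with open("file.txt", "w") as f:
    f.write("I am here\nto do something\n")

sentences = []
with open("file.txt") as text:
    for line in text:
        words = [w.strip(PUNCTUATION) for w in line.split()
                 if w.strip(PUNCTUATION)]
        if words:  # skip blank lines
            sentences.append(words)

print(sentences)  # [['I', 'am', 'here'], ['to', 'do', 'something']]
```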
python
1