Unnamed: 0 (int64, 0–1.91M) | id (int64, 337–73.8M) | title (stringlengths 10–150) | question (stringlengths 21–64.2k) | answer (stringlengths 19–59.4k) | tags (stringlengths 5–112) | score (int64, -10–17.3k)
---|---|---|---|---|---|---
1,909,100 | 69,850,859 |
Understanding flaky Django tests: Creation order affecting array order
|
<p>I have a test in Django that looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>class WebhookClientSerializerTestCase(TestCase):
def test_serializes_agent_from_client(self):
agent1 = factories.AgentUserFactory(email='dave@agently.com')
agent2 = factories.AgentUserFactory(email='jim@agently.com')
client = factories.ClientFactory(agents=[agent1, agent2])
schema = serializers.WebhookClientSerializer() # do some stuff
data = schema.dump(client).data
self.assertEqual(data['agents'][0]['email'], 'dave@agently.com')
self.assertEqual(data['agents'][1]['email'], 'jim@agently.com')
</code></pre>
<p>We create an entity called an <code>Agent</code>, then another, and run through some custom serializer logic. The serializer returns an array of serialized agent data. There are other tests defined within the same class.</p>
<p>However, sometimes the agents come out in the wrong order. It fails with</p>
<pre><code>AssertionError: u'jim@agently.com' != 'dave@agently.com'
</code></pre>
<p>I had assumed that the entities would be created sequentially. This test runs fine locally, but it sometimes, not always, fails in CI/CD. We use the <code>--parallel</code> flag when running tests, but I don't see where or how the asynchronicity is messing with the order of the output array.</p>
<p>Why does the order change, and how could I write this test more robustly?</p>
|
<p>You should be using <code>self.assertDictEqual(dict_a, dict_b)</code>.</p>
<p>It has nothing to do with <code>--parallel</code>, as a single test case is never split across two processes.
You should always aim to use the most specific assertion available; it helps a lot to reduce occasionally failing tests.</p>
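<p>If the goal is to make the test robust to nondeterministic database ordering, an order-independent assertion is one option. This is a minimal sketch (not from the original answer) using unittest's <code>assertCountEqual</code>, which ignores order but checks contents:</p>
<pre class="lang-py prettyprint-override"><code># compare the returned emails regardless of the order the DB hands them back
emails = [agent['email'] for agent in data['agents']]
self.assertCountEqual(emails, ['dave@agently.com', 'jim@agently.com'])
</code></pre>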
|
python|django|testing
| 1 |
1,909,101 | 72,877,729 |
Find the original MeasureBase value with python-measurement
|
<p><code>python-measurement</code> is a library to read measurements pythonically, <a href="https://python-measurement.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">https://python-measurement.readthedocs.io/en/latest/index.html</a> e.g.</p>
<pre><code>>>> from measurement.measures import Weight
>>> w = Weight(lb=135) # Represents 135lbs
>>> print w
135.0 lb
>>> print w.kg
61.234919999999995
</code></pre>
<p>When I read the measurement object directly into django/ORM/DB, I'm not sure why it is automatically converted to the <code>default_unit</code>, e.g.</p>
<ul>
<li>When reading <code>1 us_qt</code>, it returns a <code>MeasureBase</code> object with the value <code>.001 cubic_meter</code></li>
</ul>
<p>In the database, the value is <code>.001</code> and the unit is <code>cubic_meter</code>, but I need to see what the original unit was - ie <code>US qt</code>.</p>
<p><strong>Is there a way to see the original unit (at a minimum, as the value can be computed) and value that was entered?</strong></p>
|
<p>Most probably, there's some mechanism in your Django/DB layer that defaults the different subclasses of <code>MeasureBase</code> provided by <code>python-measurement</code> to some standard unit, like how the "standard unit" feature is used in the <a href="https://python-measurement.readthedocs.io/en/latest/topics/creating_your_own_class.html#using-formula-based-conversions" rel="nofollow noreferrer">doc example</a>.</p>
<pre><code>from sympy import S, Symbol
from measurement.base import MeasureBase

class Temperature(MeasureBase):
    SU = Symbol('kelvin')
    STANDARD_UNIT = 'k'
    UNITS = {
        'c': SU - S(273.15),
        'f': (SU - S(273.15)) * S('9/5') + 32,
        'k': 1.0,
    }
    ALIAS = {
        'celsius': 'c',
        'fahrenheit': 'f',
        'kelvin': 'k',
    }
</code></pre>
<p>What you can try then is to guess the units independently using the native <code>python-measurement</code> functions, e.g.</p>
<pre><code>>>> from measurement.utils import guess
>>> m = guess(10, 'qt')
>>> repr(m)
Volume(qt=10.0)
</code></pre>
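<p>And if the original unit has to survive the round-trip through the database, one option (outside of <code>python-measurement</code> itself) is to persist the unit string in its own column and rebuild the measure from it. A minimal sketch, where the stored column values are hypothetical:</p>
<pre><code>from measurement.measures import Volume

stored_value, stored_unit = 1.0, 'us_qt'  # hypothetical columns saved alongside the measure
m = Volume(**{stored_unit: stored_value})  # rebuild the measure in its original unit
print(m)  # expected: 1.0 us_qt
</code></pre>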
|
python|units-of-measurement|measurement
| 0 |
1,909,102 | 73,084,159 |
Taking max of absolute values of two df columns in python
|
<p>The question looks very easy but I did not find a suitable, intuitive answer.
Suppose I have a df:</p>
<p><code>df = pd.DataFrame({"A": [-1,2,3], "B": [-2, 8, 1], "C": [-5, -6, 7]})</code></p>
<p>I want to create a column 'D' which gives the max of the absolute values of 'A' and 'B'.
In short, what I am expecting is something of the following form.</p>
<p><code>df["D"] = (df["A"].abs(), df["B"].abs()).max()</code></p>
<p>or
<code>df["D"] = max(df["A"].abs(), df["B"].abs())</code></p>
<p>or
<code>df["D"] = max(abs(df["A"]), abs(df["B"])</code></p>
<p>Obviously, none of them works because the syntax is taken from SAS and Excel.
Help please.</p>
|
<p>IIUC, You want this:</p>
<pre><code>df['D'] = df[['A', 'B']].abs().max(axis=1)
print(df)
</code></pre>
<hr />
<pre><code>   A  B  C  D
0 -1 -2 -5  2
1  2  8 -6  8
2  3  1  7  3
</code></pre>
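<p>An equivalent alternative, if you prefer the element-wise NumPy spelling (assuming <code>numpy</code> is available):</p>
<pre><code>import numpy as np

df['D'] = np.maximum(df['A'].abs(), df['B'].abs())
</code></pre>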
|
python|python-3.x|pandas|dataframe
| 3 |
1,909,103 | 73,014,592 |
Python program is executing first if condition only
|
<p>I am trying to create a basic program in Python where I perform calculations based on sentences. For that I have created if/else conditions, but after running the program it takes the first condition every time. Do you have any idea why this is happening?</p>
<pre><code>import re

query = 'plus 5 and 3'

def multiply(query):
    query = '*'.join(re.findall('\d+', query))
    print(f'{eval(query)}')
    return f'Answer is {eval(query)}'

def addition(query):
    query = '+'.join(re.findall('\d+', query))
    print(f'{eval(query)}')
    return f'Answer is {eval(query)}'

def subtract(query):
    query = '-'.join(re.findall('\d+', query))
    print(f'{eval(query)}')
    return f'Answer is {eval(query)}'

def divide(query):
    query = '/'.join(re.findall('\d+', query))
    print(f'{eval(query)}')
    return f'Answer is {eval(query)}'

if 'multiplication' or 'multiply' in query:
    print(multiply(query))
elif 'addition' or 'add' or 'plus' in query:
    print(addition(query))
elif 'subtraction' or 'subtract' or 'minus' in query:
    print(subtract(query))
elif 'division' or 'divide' in query:
    print(divide(query))
</code></pre>
|
<p>You cannot write <code>if 'multiplication' or 'multiply' in query</code>: Python parses it as <code>('multiplication') or ('multiply' in query)</code>, and a non-empty string literal is always truthy, so the first branch runs every time. If you want to check whether one of those two words is in query, then you must do it like this:
<code>if 'multiplication' in query or 'multiply' in query</code>. You will need to change this for all 4 if statements.</p>
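<p>As a sketch, the first two branches rewritten with <code>any()</code> to keep the conditions compact (the remaining branches follow the same pattern):</p>
<pre><code>if any(word in query for word in ('multiplication', 'multiply')):
    print(multiply(query))
elif any(word in query for word in ('addition', 'add', 'plus')):
    print(addition(query))
</code></pre>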
|
python|python-3.x|if-statement
| 2 |
1,909,104 | 55,857,321 |
pandanic way to express desired operations in a df.groupby().agg()
|
<p>This question may be less about pandas than about how Python handles a function passed as an argument to another function; I am not sure.</p>
<p>Anyway, please observe the intention of the following code; the question is in the triple-quoted strings:</p>
<pre><code>import pandas as pd
import numpy as np

"""given:"""
df = pd.DataFrame(
    {
        'a': [100]*2 + [200]*2,
        'b': np.arange(11, 55, 11),
    }
)
gb = df.groupby('a', as_index=0)

"""what's the pandanic way of writing the following working code:"""
gb.agg({'b': lambda sr: sr.iat[0]})

def foo(sr, arg):
    return sr.sum() + arg

gb.agg({'b': lambda sr: foo(sr, 888)})

"""into the following pseudo, but not working, code:"""
gb.agg({'b': iat[0]})
gb.agg({'b': foo( ,888)})
</code></pre>
|
<p>That is <code>nth</code></p>
<pre><code>gb.nth(0)
Out[503]:
     a   b
0  100  11
2  200  33
</code></pre>
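<p>For the second pseudo-call, <code>gb.agg( {'b': foo( ,888)} )</code>, <code>functools.partial</code> gets close to that spelling without a lambda. A sketch (plain Python, not a pandas-specific feature):</p>
<pre><code>from functools import partial

gb.agg({'b': partial(foo, arg=888)})  # equivalent to lambda sr: foo(sr, 888)
</code></pre>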
|
python|pandas|dataframe
| 1 |
1,909,105 | 73,354,565 |
Is it recommended to create a dataclass to save data queried from peewee or directly modify the model?
|
<p>I have a model Service that stores data from ping probes: I ping an IP address, insert the result into a "history" list of 30 values (dropping the oldest value), calculate the average, and save both the list and the average to the database.</p>
<pre><code>class Service(peewee.Model):
    identifier = peewee.CharField(max_length=30, primary_key=True)
    ip = peewee.CharField(max_length=15)
    latency = peewee.CharField(max_length=10)
    history = peewee.CharField(max_length=1000)
    device = peewee.ForeignKeyField(Core, on_delete='CASCADE')
</code></pre>
<p>I could create the functions to calculate the average, do the round robin, and then save the results to the database, or I could try to serialize the model into a dataclass and implement the necessary functions there. Which would be the best option? What would be the best practice?</p>
<p>Keep in mind that I must repeat this procedure for several services.</p>
|
<p>If it were me, I would store one row per ping per IP, so your rows would look like:</p>
<pre><code>123 | 1.1.1.2 | 2022-08-15T12:34:56 | 30ms
124 | 1.1.1.3 | 2022-08-15T12:34:56 | 18ms
... (other rows here) ...
151 | 1.1.1.2 | 2022-08-15T12:35:25 | 38ms
152 | 1.1.1.3 | 2022-08-15T12:35:25 | 21ms
... etc
</code></pre>
<p>Each time you add a new row, you can use a window function or similar to trim extra rows. To calculate average latencies, e.g., you might</p>
<pre><code>select avg(latency) from my_table where ip = '1.1.1.2';
</code></pre>
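<p>A minimal peewee sketch of that one-row-per-ping layout (the model and field names here are illustrative, not taken from the question; the <code>Meta</code>/database binding is omitted):</p>
<pre><code>import datetime
import peewee

class Ping(peewee.Model):
    ip = peewee.CharField(max_length=15)
    timestamp = peewee.DateTimeField(default=datetime.datetime.now)
    latency_ms = peewee.FloatField()

# average latency for one IP, matching the raw SQL above
avg_latency = (Ping
               .select(peewee.fn.AVG(Ping.latency_ms))
               .where(Ping.ip == '1.1.1.2')
               .scalar())
</code></pre>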
|
python|peewee
| 0 |
1,909,106 | 64,737,526 |
What is the correct way to test a type is bare typing.Dict in Python?
|
<p>I want to write a function <code>is_bare_dict</code> to return True for <code>Dict</code> and False for any typed dict such as <code>Dict[int, str]</code>.</p>
<p>One way I could come up with is something like this:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Dict, KT, VT
def is_bare_dict(typ) -> bool:
ktype = typ.__args__[0]
vtype = typ.__args__[1]
return ktype is KT and vtype is VT
</code></pre>
<p>You can run the above code: <a href="https://wandbox.org/permlink/sr9mGbWo3Lh7VrPh" rel="nofollow noreferrer">https://wandbox.org/permlink/sr9mGbWo3Lh7VrPh</a></p>
|
<p>It can also be done in one line</p>
<pre><code>from typing import Dict, KT, VT

def is_bare_dict(typ) -> bool:
    return not isinstance(typ.__args__, tuple)

print(is_bare_dict(Dict))            # prints True
print(is_bare_dict(Dict[int, str]))  # prints False
</code></pre>
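<p>On Python 3.8+, a version-stable alternative is <code>typing.get_args</code>, which returns an empty tuple for the bare form. A sketch:</p>
<pre><code>from typing import Dict, get_args

def is_bare_dict(typ) -> bool:
    return not get_args(typ)  # () for bare Dict, (int, str) for Dict[int, str]

print(is_bare_dict(Dict))            # True
print(is_bare_dict(Dict[int, str]))  # False
</code></pre>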
|
python|python-typing
| 1 |
1,909,107 | 64,641,244 |
Data transformation: Take column value as column name
|
<p>I have a dataframe like this</p>
<pre class="lang-py prettyprint-override"><code>df = {'ID': ['A', 'B', 'C'], '2': ['colname1', 'colname2', 'colname1'], '3': [3, 4, 0], '4':['colname3', 'colname3', 'colname3'], '5':[0, 2, 1]}
old = pd.DataFrame(data=df)
old
ID 2 3 4 5
0 A colname1 3 colname3 0
1 B colname2 4 colname3 2
2 C colname1 0 colname3 1
</code></pre>
<p>Where ID A has a value 3 for colname1 and ID B has a value 4 for colname2.</p>
<p>I am trying to clean it so that it looks like</p>
<pre class="lang-py prettyprint-override"><code>df = {'ID': ['A', 'B', 'C'], 'colname1': [3, 'None', 0], 'colname2': ['None', 4, 'None'], 'colname3':[0, 2, 1]}
new = pd.DataFrame(data=df)
new
ID colname1 colname2 colname3
0 A 3 None 0
1 B None 4 2
2 C 0 None 1
</code></pre>
<p>Please note this is a simple example. The actual dataset is a lot larger than this.</p>
<p>My thought was to build another dataframe, first extracting all the distinct column names (which appear in the even-numbered columns).</p>
<pre class="lang-py prettyprint-override"><code>df.iloc[:,1::2].T.apply(lambda x: x.unique(), axis=1)
</code></pre>
<p>Then, write a loop to extract the values from the old dataframe to the new dataframe.</p>
<p>But I am not sure how to proceed. Is there a better way of doing this?</p>
|
<p>One idea is use <code>lreshape</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot.html" rel="nofollow noreferrer"><code>DataFrame.pivot</code></a>:</p>
<pre><code>c1 = old.columns[1::2]
c2 = old.columns[2::2]

df = pd.lreshape(old, {'a': c1, 'b': c2}).pivot('ID', 'a', 'b')
# alternative if there are duplicates in `ID`, `a` pairs
# df = pd.lreshape(old, {'a': c1, 'b': c2}).pivot_table(index='ID', columns='a', values='b', aggfunc='mean')
print(df)

a   colname1  colname2  colname3
ID
A        3.0       NaN       0.0
B        NaN       4.0       2.0
C        0.0       NaN       1.0
</code></pre>
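<p>Note that on recent pandas versions the arguments of <code>DataFrame.pivot</code> are keyword-only, so the same call would be spelled:</p>
<pre><code>df = pd.lreshape(old, {'a': c1, 'b': c2}).pivot(index='ID', columns='a', values='b')
</code></pre>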
|
python|pandas|dataframe|data-cleaning
| 2 |
1,909,108 | 63,825,446 |
How to assign a new descriptive column while concatenating data frames in python
|
<p>I have two data frames that I want to concatenate in Python. However, I want to add another column <code>type</code> in order to distinguish which data frame each row came from.</p>
<p>Here is my sample data:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'numbers': [1, 2, 3], 'colors': ['red', 'white', 'blue']},
                  columns=['numbers', 'colors'])
df1 = pd.DataFrame({'numbers': [7, 9, 9], 'colors': ['yellow', 'brown', 'blue']},
                   columns=['numbers', 'colors'])

pd.concat([df, df1])
</code></pre>
<p>This code will give me the following result:</p>
<pre><code>   numbers  colors
0        1     red
1        2   white
2        3    blue
0        7  yellow
1        9   brown
2        9    blue
</code></pre>
<p>but what I would like to get is as follows:</p>
<pre><code>   numbers  colors    type
0        1     red   first
1        2   white   first
2        3    blue   first
0        7  yellow  second
1        9   brown  second
2        9    blue  second
</code></pre>
<p>type column is going to help me to differentiate between the values of the two data frames.</p>
<p>Can anyone help me with this please?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>DataFrame.assign</code></a> for new columns:</p>
<pre><code>df = pd.concat([df.assign(typ='first'), df1.assign(typ='second')])
print(df)

   numbers  colors     typ
0        1     red   first
1        2   white   first
2        3    blue   first
0        7  yellow  second
1        9   brown  second
2        9    blue  second
</code></pre>
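<p>An alternative that leaves the frames untouched, as a sketch, is <code>concat</code>'s <code>keys</code> argument, which labels each frame with an index level that can then be reset into a column:</p>
<pre><code>df = (pd.concat([df, df1], keys=['first', 'second'], names=['typ', None])
        .reset_index(level='typ')
        .reset_index(drop=True))
</code></pre>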
|
python|pandas|dataframe|merge|concat
| 2 |
1,909,109 | 53,263,850 |
Why does the "sections" property in Python wikipedia package return an empty list?
|
<p>I can get content of a Wikipedia section if I ask for a specific section, for example:</p>
<pre><code>import wikipedia
print(wikipedia.WikipediaPage('Sleep Well Beast').section("Promotion"))
</code></pre>
<p>but when I try to get all the sections, I get an empty list, why?</p>
<pre><code>print(wikipedia.WikipediaPage('Sleep Well Beast').sections)
</code></pre>
<p>According to documentation, this should give a list of sections:</p>
<p><a href="https://wikipedia.readthedocs.io/en/latest/code.html#api" rel="nofollow noreferrer">https://wikipedia.readthedocs.io/en/latest/code.html#api</a></p>
|
<p>You need to install the <code>wikipedia_sections</code> library using</p>
<pre><code>pip install wikipedia_sections
</code></pre>
|
python|wikipedia
| 0 |
1,909,110 | 71,541,427 |
How to divide all values in column by level total in multi-indexed DataFrame
|
<p>I have this multi-indexed DataFrame. The FGs are 4 groups I created.<br />
I need to convert the biomass (aka BIOMc) into percentages.<br />
To do so I have to divide by the sum over the biomasses inside each group, and I don't know how to do that in a multi-indexed DataFrame.</p>
<p>I know how I can obtain the result for a single group, for example:</p>
<pre><code>workdf.loc['RSH'] / workdf.loc['RSH'].sum()
</code></pre>
<p>But I don't know how to reiterate (without actually iterating because I don't think it's necessary here) the process for all the groups and without specifically writing the names of FGs.</p>
<pre><code>import pandas as pd

workdf = pd.DataFrame({
    'FG': ['RSH', 'RSH', 'RSH', 'RSS', 'RSS', 'SSH', 'SSH', 'SSS', 'SSS', 'SSS'],
    'Diet': ['A', 'B', 'C', 'A', 'C', 'B', 'C', 'A', 'B', 'C'],
    'BIOMc': [3, 0, 21, 0, 2, 0, 11, 0, 1, 3]
}).set_index(['FG', 'Diet'])

         BIOMc
FG  Diet
RSH A        3
    B        0
    C       21
RSS A        0
    C        2
SSH B        0
    C       11
SSS A        0
    B        1
    C        3
</code></pre>
|
<p>Use <code>groupby</code>+<code>transform</code>:</p>
<pre><code>workdf['BIOMc'] / workdf.groupby(level='FG')['BIOMc'].transform('sum')
</code></pre>
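<p>Assigning the result back and scaling to actual percentages (the column name <code>BIOMc_pct</code> is just illustrative):</p>
<pre><code>workdf['BIOMc_pct'] = (
    workdf['BIOMc'] / workdf.groupby(level='FG')['BIOMc'].transform('sum') * 100
)
</code></pre>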
|
python|python-3.x|pandas|dataframe
| 1 |
1,909,111 | 62,621,142 |
How to translate only custom error messages in DRF?
|
<p>In DRF all default error messages are already translated. But I need to translate my own error messages.
What I did:</p>
<ol>
<li>Put all my error messages into <code>_()</code>, where <code>_</code> is <code>gettext</code></li>
<li>In settings set <code>LOCALE_PATHS</code> to <code>locale</code> folder</li>
<li>Ran <code>python manage.py makemessages -l ru</code></li>
</ol>
<p>That created a .po file, and here comes the first problem. The .po file contains my messages, but alongside them a lot of default Django messages which I don't want to translate. (I do not want to override the translation, I want to extend it.)</p>
<p>I translated my messages in the .po file, then I ran <code>python manage.py compilemessages</code>, which created the .mo file. And here is the second problem.</p>
<p>All my messages are now translated, but the default DRF messages (they were not in the .po file; those were Django messages, not DRF) are no longer translated; they appear only in English (e.g. <code>Authentication credentials were not provided</code>, <code>This field is required</code>, etc.)</p>
|
<p>You need to exclude the messages coming from the packages in your virtual environment,
e.g. with</p>
<p><strong>--ignore</strong></p>
<pre><code>python manage.py makemessages -l ru --ignore=env
python manage.py compilemessages -l ru --ignore=env
</code></pre>
|
python|django|django-rest-framework|internationalization|translation
| 0 |
1,909,112 | 71,376,743 |
django.core.exceptions.ImproperlyConfigured solution
|
<p>I wonder why I get this error :</p>
<p>django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.</p>
<p>this is my code :</p>
<pre><code>from main_app.models import student
std1 = student(name='tom', fullname='scholz')
std1.save()
print(student.objects.all())
</code></pre>
<p>There is no problem with running the server and migrations, but when it comes to running a .py file I made myself, this error arises.</p>
|
<p>This problem is simply because Django doesn't know where to find the required settings.
As its <a href="https://docs.djangoproject.com/en/4.0/topics/settings/#designating-the-settings" rel="nofollow noreferrer">documentation</a> mentions:</p>
<pre><code> When you use Django, you have to tell it which settings you’re using. Do this
by using an environment variable, DJANGO_SETTINGS_MODULE.
</code></pre>
<p>Based on the documentation, you should inform Django about the settings by defining an environment variable.</p>
<p>Since there is some ambiguity about what to do when facing this error, here is a list of the different solutions you can take:</p>
<p>1- Define an environment variable in the operating system (<a href="https://www.computerhope.com/issues/ch000549.htm" rel="nofollow noreferrer">see here</a>).</p>
<p>2- If you are using a virtual environment, set a new environment variable in activate.bat:</p>
<pre><code> set "DJANGO_SETTINGS_MODULE=<your_proj_name>.settings"
</code></pre>
<p>In this way, whenever you activate the virtual environment, that variable is defined automatically.</p>
<p>3- (The way you did!) In each Python file, write the command which defines that environment variable:</p>
<pre><code> os.environ.setdefault('DJANGO_SETTINGS_MODULE', '<your_proj_name>.settings')
</code></pre>
<p>But if you use the third method, it really matters where you put that command (in the second, and certainly the first, case we don't care at all!),
so you must take care not to use Django before that command in your script.
(Some people wonder why, after using this command, they still get the same error. The reason is that they put that line of code in a place where Django was already used before it.)</p>
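<p>For a standalone script like the one in the question, a minimal sketch of the third method (<code>your_proj_name</code> is a placeholder for the real project package). Note that <code>django.setup()</code> must also run before any model import:</p>
<pre><code>import os
import django

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'your_proj_name.settings')
django.setup()  # initialize the app registry before importing models

from main_app.models import student

std1 = student(name='tom', fullname='scholz')
std1.save()
print(student.objects.all())
</code></pre>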
<p>You might ask why you didn't face this error when using migrations and so on. The reason becomes clear if you take a look at manage.py:
that command is right there (the third method of the above list).</p>
|
python|django|settings
| 1 |
1,909,113 | 71,174,267 |
Serialize dataclass with class as a field to JSON in Python
|
<p>At a project I'm contributing to we have a very simple, but important class (let's call it <code>LegacyClass</code>). Modifying it would be a long process.<br />
I'm contributing new dataclasses (like <code>NormalDataclass</code>) to this project and I need to be able to serialize them to JSON.
I don't have access to the JSON encoder, so I cannot specify custom encoder.</p>
<p>Here you can find sample code</p>
<pre class="lang-py prettyprint-override"><code>import dataclasses
import collections
import json
#region I cannot easily change this code
class LegacyClass(collections.abc.Iterable):
def __init__(self, a, b):
self.a = a
self.b = b
def __iter__(self):
yield self.a
yield self.b
def __repr__(self):
return f"({self.a}, {self.b})"
#endregion
#region I can do whatever I want to this part of code
@dataclasses.dataclass
class NormalDataclass:
legacy_class: LegacyClass
legacy_class = LegacyClass('a', 'b')
normal_dataclass = NormalDataclass(legacy_class)
normal_dataclass_dict = dataclasses.asdict(normal_dataclass)
#endregion
#region I cannot easily change this code
json.dumps(normal_dataclass_dict)
#endregion
</code></pre>
<p>What I would want to get:</p>
<pre class="lang-json prettyprint-override"><code>{"legacy_class": {"a": "a", "b": "b"}}
</code></pre>
<p>What I'm getting:</p>
<pre><code>TypeError: Object of type LegacyClass is not JSON serializable
</code></pre>
<p>Do you have any suggestions?
Specifying <code>dict_factory</code> as an argument to <code>dataclasses.asdict</code> would be an option if there were not multiple levels of <code>LegacyClass</code> nesting, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>@dataclasses.dataclass
class AnotherNormalDataclass:
custom_class: List[Tuple[int, LegacyClass]]
</code></pre>
<p>Making <code>dict_factory</code> recursive would basically mean rewriting the <code>dataclasses.asdict</code> implementation.</p>
|
<p><strong>Edit:</strong> The simplest solution, based on the most recent edit to the question above, would be to define your own <code>dict()</code> method which returns a JSON-serializable <code>dict</code> object. Though in the long term, I'd probably suggest contacting the team who implements the <code>json.dumps</code> part, to see if they can update the encoder implementation for the dataclass.</p>
<p>In any case, here's a working example you can use for the present scenario:</p>
<pre class="lang-py prettyprint-override"><code>import dataclasses
import collections
import json
class LegacyClass(collections.abc.Iterable):
def __init__(self, a, b):
self.a = a
self.b = b
def __iter__(self):
yield self.a
yield self.b
def __repr__(self):
return f"({self.a}, {self.b})"
@dataclasses.dataclass
class NormalDataclass:
legacy_class: LegacyClass
def dict(self):
return {'legacy_class': self.legacy_class.__dict__}
legacy_class = LegacyClass('a', 'b')
normal_dataclass = NormalDataclass(legacy_class)
normal_dataclass_dict = normal_dataclass.dict()
print(normal_dataclass_dict)
json.dumps(normal_dataclass_dict)
</code></pre>
<p>Output:</p>
<pre><code>{'legacy_class': {'a': 'a', 'b': 'b'}}
</code></pre>
<hr />
<p>You should be able to pass <code>default</code> argument to <code>json.dumps</code>, which will be called whenever the encoder finds an object that it can't serialize to JSON, for example a Python class or a <code>datetime</code> object.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>import dataclasses
import collections
import json
class LegacyClass(collections.abc.Iterable):
def __init__(self, a, b):
self.a = a
self.b = b
def __iter__(self):
yield self.a
yield self.b
def __repr__(self):
return f"({self.a}, {self.b})"
@dataclasses.dataclass
class NormalDataclass:
legacy_class: LegacyClass
legacy_class = LegacyClass('aa', 'bb')
normal_dataclass = NormalDataclass(legacy_class)
normal_dataclass_dict = dataclasses.asdict(normal_dataclass)
o = json.dumps(normal_dataclass_dict,
### ADDED ###
default=lambda o: o.__dict__)
print(o) # {"legacy_class": {"a": "aa", "b": "bb"}}
</code></pre>
<hr />
<p>If you have a more complex use case, you could consider creating a default function which can check the type of each value as it gets serialized to JSON:</p>
<pre class="lang-py prettyprint-override"><code>import dataclasses
import collections
import json
from datetime import date, time
from typing import Any
class LegacyClass(collections.abc.Iterable):
def __init__(self, a, b):
self.a = a
self.b = b
def __iter__(self):
yield self.a
yield self.b
def __repr__(self):
return f"({self.a}, {self.b})"
@dataclasses.dataclass
class NormalDataclass:
legacy_class: LegacyClass
my_date: date = date.min
legacy_class = LegacyClass('aa', 'bb')
normal_dataclass = NormalDataclass(legacy_class)
normal_dataclass_dict = dataclasses.asdict(normal_dataclass)
def default_func(o: Any):
# it's a date, time, or datetime
if isinstance(o, (date, time)):
return o.isoformat()
# it's a Python class (with a `__dict__` attribute)
if isinstance(type(o), type) and hasattr(o, '__dict__'):
return o.__dict__
# print a warning and return a null
print(f'couldn\'t find an encoder for: {o!r}, type={type(o)}')
return None
o = json.dumps(normal_dataclass_dict, default=default_func)
print(o) # {"legacy_class": {"a": "aa", "b": "bb"}, "my_date": "0001-01-01"}
</code></pre>
|
python|json|nested|python-dataclasses
| 1 |
1,909,114 | 70,731,471 |
Adding prefix to existing array
|
<p>I am trying to add a prefix to each element of an existing <code>array</code>. Below is my code:</p>
<pre><code>print('a' + [10, 100])
</code></pre>
<p>With this I am getting below error</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can only concatenate str (not "list") to str
</code></pre>
<p>Could you please help with how to do that? I could use a <code>for</code> loop, but I believe there may be a more straightforward way to achieve the same.</p>
|
<p>You can create a new concatenated array as:</p>
<pre><code>>>> ['{0}{1}'.format('a', num) for num in [10, 100]]
['a10', 'a100']
</code></pre>
<p>Read <a href="https://docs.python.org/2/library/string.html#format-examples" rel="nofollow noreferrer">String format</a> and <a href="https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions" rel="nofollow noreferrer">List Comprehensions</a> from doc.</p>
|
arrays|python-3.x
| 2 |
1,909,115 | 63,600,683 |
InterfaceError: Error binding parameter 4 - probably unsupported type (for image), and blob image does not display on QLabel
|
<pre><code>from PyQt5.QtWidgets import *
from PyQt5 import QtWidgets, QtGui
from PIL import Image, ImageQt
import cv2 as cv
import sqlite3 as lite  # assumed import: the snippet below references lite.connect

img = cv.imread('1.1 cat.jpeg.jpeg')
im = Image.fromarray(img)
im.save('file.png')

con = lite.connect('Final_Avirs.db')
cur = con.cursor()

def createtb():
    queryveh = """CREATE TABLE IF NOT EXISTS VehicleTB(Scan_DI INTEGER PRIMARY KEY NOT NULL UNIQUE,
    Vehicle_number TEXT NOT NULL, Vehicle_type TEXT NOT NULL, Cam_loc TEXT NOT NULL, Date_Time TEXT NOT NULL,
    Vehicle_number_pic BLOB NOT NULL) """
    cur.execute(queryveh)
    con.commit()

def vehicledetailsquery(vn, vt, cl, dt, vnp):
    scan_id = vn + "1"
    query = " INSERT INTO 'VehicleTB' ( Vehicle_number, Vehicle_type, Cam_loc, " \
            "Date_Time, Vehicle_number_pic ) VALUES( ?, ?, ?, ?, ?)"
    cur.execute(query, (vn, vt, cl, dt, vnp))
    con.commit()

img = Image.open('file.png')
createtb()
vehicledetailsquery('aswe23', '2039230', 'cam-2', '23343', img)

app = QtWidgets.QApplication([])

def vehicletbquery():
    query = "SELECT * from VehicleTB"
    vehicletb = cur.execute(query).fetchall()
    return vehicletb

data = vehicletbquery()
ww = QtWidgets.QLabel()
for i, d in enumerate(data):
    if i == 6:
        ww.w.setPixmap(QtGui.QPixmap.fromImage(d))
ww.show()
app.exec()
</code></pre>
<p>I know the problem comes from my image values, but I don't know how to handle it. The best I could do was to convert the Pillow image to a string, but even doing that I cannot display the image on the PyQt label; the label just comes out empty.</p>
|
<p>The BLOB data type saves bytes; in your case you are trying to save a PIL object, which throws the error. The solution is to convert those objects to bytes. The same then applies to the data obtained from the database, but in reverse:</p>
<pre><code>import sqlite3
import io

from PyQt5 import QtGui, QtWidgets
from PIL import Image
import cv2 as cv

img = cv.imread('1.1 cat.jpeg.jpeg')
im = Image.fromarray(img)
im.save("file.png")

con = sqlite3.connect("Final_Avirs.db")
cur = con.cursor()

def createtb():
    queryveh = """CREATE TABLE IF NOT EXISTS VehicleTB(Scan_DI INTEGER PRIMARY KEY NOT NULL UNIQUE,
    Vehicle_number TEXT NOT NULL, Vehicle_type TEXT NOT NULL, Cam_loc TEXT NOT NULL, Date_Time TEXT NOT NULL,
    Vehicle_number_pic BLOB NOT NULL) """
    cur.execute(queryveh)
    con.commit()

def vehicledetailsquery(vn, vt, cl, dt, vnp):
    scan_id = vn + "1"
    query = (
        " INSERT INTO 'VehicleTB' ( Vehicle_number, Vehicle_type, Cam_loc, "
        "Date_Time, Vehicle_number_pic ) VALUES( ?, ?, ?, ?, ?)"
    )
    cur.execute(query, (vn, vt, cl, dt, vnp))
    con.commit()

img = Image.open("file.png")
createtb()

# convert to bytes
f = io.BytesIO()
img.save(f, format="PNG")
img_bytes = f.getvalue()

vehicledetailsquery("aswe23", "2039230", "cam-2", "23343", img_bytes)

def vehicletbquery():
    query = "SELECT * from VehicleTB"
    vehicletb = cur.execute(query).fetchall()
    return vehicletb

data = vehicletbquery()

app = QtWidgets.QApplication([])

scroll_area = QtWidgets.QScrollArea(widgetResizable=True)
container = QtWidgets.QWidget()
scroll_area.setWidget(container)
lay = QtWidgets.QVBoxLayout(container)

for row_data in data:
    for i, d in enumerate(row_data):
        if i == 5:
            # QPixmap from bytes
            pixmap = QtGui.QPixmap()
            pixmap.loadFromData(d)
            label = QtWidgets.QLabel()
            label.setPixmap(pixmap)
            lay.addWidget(label)

scroll_area.show()
app.exec()
</code></pre>
|
python|pyqt|pyqt5
| 0 |
1,909,116 | 63,570,988 |
TypeError at /edit : edit() missing 1 required positional argument: 'entry'
|
<p>I am very new to Django and I am working on a web app project. This particular page is supposed to edit and save an entry, but I keep getting the "missing 1 required positional argument" error.</p>
<p><em>Views.py</em></p>
<pre><code># editPage forms
class editform(forms.Form):
    content = forms.CharField(widget=forms.Textarea(), label='')

def edit(request, entry):
    if request.method == 'GET':
        page = util.get_entry(entry)
        return render(request, "encyclopedia/edit.html", {
            "form": SearchEntry(),
            "edit": editform(initial={'content': page}),
            "entry": entry
        })
    # If this is a POST request
    else:
        form = editform(request.POST)
        if form.is_valid():
            content = form.cleaned_data["content"]
            util.save_entry(entry, content)
            page = util.get_entry(entry)
            page = mark.convert(page)
            return render(request, "encyclopedia/entry.html", {
                "form": SearchEntry(),
                "page": page,
                "entry": title
            })
</code></pre>
<p><em>urls.py</em></p>
<pre><code>from django.urls import path

from . import views

urlpatterns = [
    path("", views.index, name="index"),
    path("wiki/<str:entry>", views.entry, name="entry"),
    path("search", views.search, name="search"),
    path("newEntry", views.newEntry, name="newEntry"),
    path("edit", views.edit, name="edit"),
]
</code></pre>
<p><em>edit HTML</em></p>
<pre><code>{% extends "encyclopedia/layout.html" %}
{% block title %}
Edit {{name}}
{% endblock %}
{% block body %}
<h1>{{title}}</h1>
<form action= "{% url 'edit' %}" method="POST">
{% csrf_token %}
{{ edit }}
<br>
<input class="save btn btn-info" type="submit" value="save"/>
</form>
<p> Click the "save" button to save your entry to the encyclopedia.</p>
<br>
<a href = "{% url 'index' %}"> Return Home</a>
{% endblock %}
</code></pre>
<p><em>entry HTML</em></p>
<pre><code>{% extends "encyclopedia/layout.html" %}
{% block title %}
Encyclopedia
{% endblock %}
{% block body %}
<h1>{{title}}</h1>
{{entry | safe}}
<a href = "{% url 'edit' %}"> Edit Content</a>
<br>
<br>
<a href = "{% url 'index' %}"> Return Home</a>
{% endblock %}
</code></pre>
<p>when I change this particular url:</p>
<pre><code>path("edit/<str:entry>", views.edit, name="edit"),
</code></pre>
<p>I get a different issue:
<strong>Reverse for 'edit' with no arguments not found. 1 pattern(s) tried: ['edit/(?P<entry>[^/]+)$']</strong></p>
|
<p>The problem is in your <code>urls.py</code> file in this line:</p>
<pre class="lang-py prettyprint-override"><code> path("edit", views.edit, name="edit"),
</code></pre>
<p>because <code>views.edit</code> expects you to provide two parameters, <code>request</code> and <code>entry</code>, via your <code>url</code>, and in your case <strong>entry</strong> is missing. Try to add entry to your urlpatterns path; in this case I'm expecting your entry to be an <code>int</code>:</p>
<pre class="lang-py prettyprint-override"><code>path("edit/<int:entry>", views.edit, name="edit"),
</code></pre>
<p>and this entry can be your model pk or anything else you want. After modifying your <code>urlpatterns</code>, whenever you call the edit view in your <strong>html</strong> you need to do this:</p>
<pre class="lang-html prettyprint-override"><code>{% url 'edit' entry=your_entry_value %}
</code></pre>
<p>instead of:</p>
<pre class="lang-html prettyprint-override"><code>{% url 'edit' %}
</code></pre>
|
python|html|django
| 1 |
1,909,117 | 56,474,148 |
how to convert python float to bytes to be interpreted in C++ programm
|
<p>I need to build a Python script that exports data to a custom file format. This file is then read by a C++ program (I have the source code, but cannot compile it).</p>
<p>Custom file format spec:</p>
<ul>
<li>The file must be little endian. </li>
<li>The float must be 4 bytes long.</li>
</ul>
<p>I'm failing at exporting a Python float to bytes, which crashes the C++ app without any error trace. If I fill all the floats with 0 it loads perfectly fine, but if I try anything else it crashes.</p>
<p>This is how the C++ app reads the float:</p>
<pre><code>double Eds2ImporterFromMemory::read4BytesAsFloat(long offset) const
{
    // Read data.
    float i = 0;
    memcpy(&i, &_data[offset], 4);
    return i;
}
</code></pre>
<p>And I try to export the python float as follows:</p>
<pre><code>def write_float(self, num):
    # pack float
    self.f.write(struct.pack('<f', float(num)))
</code></pre>
<p>And also like this, as some people suggested to me:</p>
<pre><code>def write_float(self, num):
    # unpack as integer the previously packed-as-float number
    float_hex = struct.unpack('<I', struct.pack('<f', num))[0]
    self.f.write(float_hex.to_bytes(4, byteorder='little'))
</code></pre>
<p>But it fails every time. I'm not a C++ guy, as you can see. Do you have an idea why my Python script is not working?</p>
<p>Thanks in advance</p>
|
<p>Please excuse me for being really bad at Python, but I tried to produce your desired result and it works fine for me.</p>
<p>The python code:</p>
<pre><code>import struct

value = 13.37
ba = bytearray(struct.pack("f", value))
for b in ba:
    print("%02x" % b)
</code></pre>
<p>I imagine if you would just concat that (basically write the hexadecimal representation to a file in that order), it would work just fine.</p>
<p>Either way, it will output</p>
<pre><code>85
eb
55
41
</code></pre>
<p>Which I put in an array of unsigned chars and used memcpy in the same way your C++ code did:</p>
<pre><code>unsigned char data[] = {0x85, 0xeb, 0x55, 0x41};
float f;
memcpy(&f, data, sizeof(float));
std::cout << f << std::endl;
std::cin.get();
</code></pre>
<p>Which will output <code>13.37</code> as you would expect. If you cannot reproduce the correct result this way, i suspect that the error occurs somewhere else, in which case it would be helpful to see <em>how</em> the format is written by python and how it's read by C++.</p>
<p>Also, keep in mind that there are several ways to represent the byte array, like:</p>
<p><code>\x85eb5541</code>, <code>0x85, 0xeb, 0x55, 0x41</code>, <code>85eb5541</code> and so on. To me it seems very likely that you just aren't outputting the correct format and thus the "parser" crashes.</p>
|
python|c++|file|floating-point
| 0 |
1,909,118 | 56,792,212 |
Alsa Audio library - Error -> Has no PCM member
|
<p>I'm working on a project where I have to control 8 audio channels.
I'm programming in Python 3 using the alsaaudio library. It all works, but I have these 3 errors and, once I start the program, my internet connection goes down.</p>
<p>In the following code, you can see how I initialize the device (Octo sound card by AudioInjector). Please note that if the indentation is wrong, it is just because of a copy-paste error.</p>
<pre class="lang-py prettyprint-override"><code>import alsaaudio
def start_device(ch):
variables.mut.acquire()
if variables.device_flag[ch] == 1:
try:
variables.device_PCM[ch] = alsaaudio.PCM(type=alsaaudio.PCM_PLAYBACK, mode = alsaaudio.PCM_NORMAL,device=variables.device_name[ch])
variables.device_flag[ch] = 0 # device open
print('device -%s- OPEN' % (variables.device_name[ch]))
except:
print("Except raised")
json_builder.jsonerror("Init device ch" + str(ch) +" FAILED to OPEN",ch)
variables.device_flag[ch] == 1
else:
print("Device -%s- already opened" % (variables.device_name[ch]))
variables.mut.release()
</code></pre>
<p>The strange thing is that this code works and I can drive all 8 channels, but I get these 3 errors and my internet stops working:</p>
<ul>
<li><p>message: "Module 'alsaaudio' has no 'PCM' member"</p></li>
<li><p>message: "Module 'alsaaudio' has no 'PCM_PLAYBACK' member"</p></li>
<li><p>message: "Module 'alsaaudio' has no 'PCM_NORMAL' member"</p></li>
</ul>
<p>(the device=device_name[ch] works, no error)</p>
|
<p>Well, I would recommend you use the Alvas.Audio library, which can edit, convert, play, and pause audio files.
The C# Alvas.Audio library can also be used to convert headerless formats (SLINEAR) etc.:
<a href="http://alvas.net/alvas.audio,tips.aspx" rel="nofollow noreferrer">http://alvas.net/alvas.audio,tips.aspx</a>
Moreover it helps to extract AVI streams and convert one file format to another. So, try the Alvas.Audio C# library and get a free trial: <a href="https://www.filerepairtools.com/alavas-audio-library.html" rel="nofollow noreferrer">https://www.filerepairtools.com/alavas-audio-library.html</a></p>
|
python|python-3.x|alsa|pyalsaaudio
| 0 |
1,909,119 | 56,552,444 |
Shuffle rows of a DataFrame until all consecutive values in a column are different?
|
<p>I have a dataframe with rows that I'd like to shuffle continuously until the value in column <code>B</code> is not identical across any two consecutive rows:</p>
<p>initial dataframe:</p>
<pre><code>A | B
_______
a 1
b 1
c 2
d 3
e 3
</code></pre>
<p>Possible outcome:</p>
<pre><code>A | B
_______
b 1
c 2
e 3
a 1
d 3
</code></pre>
<p>I made a function <code>scramble</code> meant to do this but I am having trouble passing the newly scrambled dataframe back into the function to test for matching <code>B</code> values:</p>
<pre><code>def scramble(x):
    curr_B = 'nothing'
    for index, row in x.iterrows():
        next_B = row['B']
        if str(next_B) == str(curr_B):
            x = x.sample(frac=1)
            curr_B = next_B
        curr_B = next_B
    return x

df = scramble(df)
</code></pre>
<p>I suspect the function is finding the matching values in the next row, but I can't shuffle it continuously until there are no two sequential rows with the same <code>B</code> value.</p>
<p>Printing the output yields a dataframe shows consecutive rows with the same value in <code>B</code>.</p>
|
<p>If your goal is to eliminate consecutive duplicates, you can just use <code>groupby</code> and <code>cumcount</code>, then reindex your DataFrame:</p>
<pre><code>df.loc[df.groupby('B').cumcount().sort_values().index]

   A  B
0  a  1
2  c  2
3  d  3
1  b  1
4  e  3
</code></pre>
<hr>
<p>If you actually want randomness, then you can group on <code>cumcount</code> and call <code>shuffle</code>. This should eliminate consecutive dupes to some degree (NOT GUARANTEED) while preserving randomness and still avoiding slow iteration. Here's an example:</p>
<pre><code>np.random.seed(0)

(df.groupby(df.groupby('B').cumcount(), group_keys=False)
   .apply(lambda x: x.sample(frac=1))
   .reset_index(drop=True))

   A  B
0  d  3
1  a  1
2  c  2
3  b  1
4  e  3
</code></pre>
|
python|pandas|function|dataframe|recursion
| 2 |
1,909,120 | 18,008,434 |
Syntax Error in Python 3.2.1: Unable to write an exception to a ValueError
|
<p>I'm new at python and recently I've been trying to write a code for a hangman game: a part of the MIT OCW course: Introduction to Computer Science and Programming. My code goes as follows:</p>
<pre><code>def write_words (word, al):
    neww = (list(word))
    newal = (list(al))
    x = 0
    while (x <= (len(newal))):
        z = newal[x]
        y = neww.index(z)
        except ValueError:
            pass
        x = x + 1
    return (z)
    return (y)
</code></pre>
<p>When I call the function using <code>write_words ("word", "abcdefghijklmnopqrstuvwxy")</code>, I still get <code>ValueError: 'a' is not in list</code>, which is supposed to be handled by the exception. I've been trying to figure out the problem, and apparently it is a <code>SyntaxError</code>. I would be very grateful for any help. My Python version is 3.2.1.</p>
|
<p>You don't have a <code>try</code> statement there. The format is <code>try-except</code>. Something like this.</p>
<pre><code>try:
    a = 25 / 0
except ZeroDivisionError:
    print("Not Possible")

# Output: Not Possible
</code></pre>
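<p>Applied to the question's function, a sketch of the minimal repair is to wrap only the lookup in a <code>try</code> block (this also tightens the loop bound, since <code>x <= len(newal)</code> would overrun the list; the rest of the logic is kept as in the question):</p>
<pre><code>def write_words(word, al):
    neww = list(word)
    newal = list(al)
    x = 0
    while x < len(newal):  # '<' instead of '<=' avoids an IndexError on the last step
        z = newal[x]
        try:
            y = neww.index(z)
        except ValueError:
            pass
        x = x + 1
    return (z)  # as in the original, the second return below is unreachable
    return (y)
</code></pre>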
|
python
| 4 |
1,909,121 | 18,050,815 |
Why were True and False changed to keywords in Python 3
|
<p>In Python 2, we could reassign <code>True</code> and <code>False</code> (but not <code>None</code>), but all three (<code>True</code>, <code>False</code>, and <code>None</code>) were considered builtin variables. However, in Py3k all three were changed into keywords as per <a href="http://docs.python.org/3.0/whatsnew/3.0.html" rel="noreferrer">the docs</a>.</p>
<p>From my own speculation, I could only guess that it was to prevent shenanigans like <a href="https://stackoverflow.com/questions/2055029/why-cant-python-handle-true-false-values-as-i-expect">this</a> which derive from the old <code>True, False = False, True</code> prank. However, in Python 2.7.5, and perhaps before, statements such as <code>None = 3</code> which reassigned <code>None</code> raised <code>SyntaxError: cannot assign to None</code>.</p>
<p>Semantically, I don't believe <code>True</code>, <code>False</code>, and <code>None</code> are keywords, since they are at last semantically literals, which is what Java has done. I checked PEP 0 (the index) and I couldn't find a PEP explaining why they were changed.</p>
<p>Are there performance benefits or other reasons for making them keywords as opposed to literals or special-casing them like <code>None</code> in python2?</p>
|
<p>Possibly because Python 2.6 not only allowed <code>True = False</code> but also allowed you to say funny things like:</p>
<pre><code>__builtin__.True = False
</code></pre>
<p>which would reset <code>True</code> to <code>False</code> for the entire process. It can lead to really funny things happening:</p>
<pre><code>>>> import __builtin__
>>> __builtin__.True = False
>>> True
False
>>> False
False
>>> __builtin__.False = True
>>> True
False
>>> False
False
</code></pre>
<p><em>EDIT</em>: As pointed out by <a href="https://stackoverflow.com/users/77939/mike">Mike</a>, the <a href="http://wiki.python.org/moin/Python3.0" rel="noreferrer">Python wiki</a> also states the following under <em>Core Language Changes</em>:</p>
<ul>
<li>Make True and False keywords.
<ul>
<li>Reason: make assignment to them impossible. </li>
</ul></li>
</ul>
|
python|keyword
| 48 |
1,909,122 | 60,937,845 |
Python: print column wise differences
|
<p>I have the below Python code to compare the rows of 2 CSV files, match each column field, and display the differences. However, the output is not in order; please help me improve the code's output.</p>
<p>(I googled and found a Python package <code>csvdiff</code>, but it requires specifying the column number.)</p>
<p><code>2 CSV files:</code></p>
<pre><code>cat file1.csv
1,2,2222,3333,4444,3,
cat file2.csv
1,2,5555,6666,7777,3,
</code></pre>
<p><code>My Python3 code:</code></p>
<pre><code>with open('file1.csv', 'r') as t1, open('file2.csv', 'r') as t2:
    filecoming = t1.readlines()
    filevalidation = t2.readlines()

    for i in range(0, len(filevalidation)):
        coming_set = set(filecoming[i].replace("\n", "").split(","))
        validation_set = set(filevalidation[i].replace("\n", "").split(","))
        ReceivedDataList = list(validation_set.intersection(coming_set))
        NotReceivedDataList = list(coming_set.union(validation_set) -
                                   coming_set.intersection(validation_set))
        print(NotReceivedDataList)
</code></pre>
<p><code>output:</code></p>
<pre><code>['6666', '5555', '3333', '2222', '4444', '7777']
</code></pre>
<p>Even though it is printing the differences from both files, the output is not in order. (3 differences from file2, and 3 differences from file1) </p>
<p>I am trying to produce column-wise results, i.e., each difference in file1 matched with the corresponding difference in file2,</p>
<p>something like:</p>
<pre><code>2222 - 5555
3333 - 6666
4444 - 7777
</code></pre>
<p>Please help,, </p>
<p>Thanks in advance.</p>
|
<p>Try this:</p>
<pre><code>import pandas

with open('old.csv', 'r') as t1, open('new.csv', 'r') as t2:
    filecoming = t1.readlines()
    filevalidation = t2.readlines()

    for i in range(0, len(filevalidation)):
        coming_set = set(filecoming[i].replace("\n", "").split(","))
        validation_set = set(filevalidation[i].replace("\n", "").split(","))
        ReceivedDataList = list(validation_set.intersection(coming_set))
        NotReceivedDataList = list(coming_set.union(validation_set) - coming_set.intersection(validation_set))
        print(NotReceivedDataList)

        old = []
        new = []
        for items in NotReceivedDataList:
            if items in coming_set:
                old.append(items)
            elif items in validation_set:
                new.append(items)
        print(old)
        print(new)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>['2222', '5555', '6666', '3333', '4444', '7777']
['2222', '3333', '4444']
['5555', '6666', '7777']
</code></pre>
<p><strong>Addition</strong>:
this may help you more.
Let's have <code>old</code> and <code>new</code> from the CSV files; then <code>[item for item in old if item not in new]</code> would give you the items that are not in <code>new</code>. Also, with the help of <code>enumerate</code>, we can identify which columns are different (the differences are in columns 2, 3 and 4):</p>
<pre><code>old = [1, 2, 2222, 3333, 4444, 3]
new = [1, 2, 5555, 6666, 7777, 3]

print([item for item in old if item not in new])
print([item for item in new if item not in old])

for index, (first, second) in enumerate(zip(old, new)):
    if first != second:
        print(index, first, second)
</code></pre>
<p>Output:</p>
<pre><code>[2222, 3333, 4444]
[5555, 6666, 7777]
2 2222 5555
3 3333 6666
4 4444 7777
</code></pre>
|
python|csv
| 0 |
1,909,123 | 66,325,451 |
PyQt5 MainWindow with flag instantly goes out of scope
|
<p>I created a UI using Qt Designer, then converted the .ui file to a .py file (pyuic -x); it works fine if launched directly. Then I tried to subclass my UI in a separate file to implement additional logic, and this is where things start to go wrong. Inheriting from QMainWindow and my Qt Designer class works OK with no issues, as expected. However, the moment I set any window flag for my QMainWindow (any flag: I tried StaysOnTop and FramelessWindowHint) and run the file, the window appears and instantly disappears. The program continues to run as a console window, or as a PyCharm process, but the window is gone. It looks to me like it is going out of scope, but why would setting a simple flag make any difference to the garbage collector? Could someone explain this behaviour?</p>
<p>Minimum code required to reproduce this phenomenon:</p>
<pre><code>from ui import Ui_MainWindow
from PyQt5 import QtCore, QtWidgets, QtGui
import sys

class Logic(QtWidgets.QMainWindow, Ui_MainWindow):
    def __init__(self):
        QtWidgets.QMainWindow.__init__(self)
        self.setupUi(self)
        self.show()
        # self.setWindowFlags(QtCore.Qt.WindowStaysOnTopHint)
        # self.setWindowFlags(QtCore.Qt.FramelessWindowHint)
        self.setAttribute(QtCore.Qt.WA_NoSystemBackground, True)
        self.setAttribute(QtCore.Qt.WA_TranslucentBackground, True)

if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    window = Logic()
    sys.exit(app.exec_())
</code></pre>
<p>The window should appear and stay on the screen until one (or more) of the flags are uncommented. I use Python 3.8 (32-bit) with PyQt5. Run environment provided by PyCharm. Windows 10.</p>
|
<p>From the <a href="https://doc.qt.io/qt-5/qwidget.html#windowFlags-prop" rel="nofollow noreferrer">documentation of <code>setWindowFlags()</code></a>:</p>
<blockquote>
<p>Note: This function calls setParent() when changing the flags for a window, causing the widget to be hidden. You must call show() to make the widget visible again.</p>
</blockquote>
<p>So, just move <code>self.show()</code> after setting the flags, or call it from outside the <code>__init__</code> (<em>after</em> the instance is created), which is the most common and suggested way to do so, as it's considered good practice to show a widget only after it has been instanciated.</p>
|
python|garbage-collection|pyqt5
| 0 |
1,909,124 | 66,201,657 |
Installing Tensorflow on mac m1
|
<p>I am using PyCharm and in the Python Interpreter I install my packages. In other words</p>
<pre><code>In Pycharm: from Python 3.9 venv --> Interpreter Settings --> Install Package: Tensorflow (+) --> Search for Package --> Install
</code></pre>
<p>I almost got anything I want (numpy, scipy, pandas, even torch!). However, I tried to install Tensorflow and I get the following ERROR:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement tensorflow
ERROR: No matching distribution found for tensorflow
</code></pre>
<p>So, I tried using the shell and it is already installed in my <code>pip list</code> with the version <code>2.4.0</code> as of today.</p>
<p>Since I am using a <code>virtual env in PyCharm</code>, is there a way that I can solve the error and have the package installed? OR how can I migrate the installed packages from my pip to this venv?</p>
<p>If you are an M1 user, you already know that working on apps is a headache with the new Apple silicon Macs.</p>
<p>I look forward to some help and suggestions.</p>
<p>Thanks!</p>
|
<p>Basically, at the time of posting this issue, TensorFlow does not support Python 3.9 and only works with <code>3.8</code>.</p>
<p>Further details are provided <a href="https://github.com/tensorflow/tensorflow/issues/47151" rel="nofollow noreferrer">#47151</a></p>
|
python|tensorflow|pycharm|apple-m1
| 3 |
1,909,125 | 69,138,721 |
VS Code in Python changes text color of the code if set a return type
|
<p>The problem is that, for example, you have your color theme installed in VS Code and it works just fine (any color theme), but if you set a return type like:</p>
<pre><code>def some_function() -> str :
</code></pre>
<p>instead of just:</p>
<pre><code>def some_function():
</code></pre>
<p>then the color theme stops applying and just turns the text color into white and yellow.
(it happens not only for str obviously)</p>
<p>Has anyone been able to solve that and how?</p>
|
<p>If anyone runs into this same issue, the extension that was causing it was "Python for VSCode" (author Thomas Haakon Townsend)</p>
|
python|visual-studio-code|themes|vscode-settings|return-type
| 4 |
1,909,126 | 72,526,641 |
`matplotlib` colored output for graphs
|
<p>How can I output a colored graph (each vertex with its own color) using the <code>matplotlib</code> library for Python? Is there a method to assign a specific color to each vertex?</p>
<p>Code example without using colors:</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt

class Graph:
    def __init__(self, edges):
        self._edges = edges

    def visualize(self):
        vg = nx.Graph()
        vg.add_edges_from(self._edges)
        nx.draw_networkx(vg)
        plt.show()

nodes = [['A', 'B'], ['A', 'C'], ['B', 'D'], ['C', 'D'],
         ['C', 'E'], ['D', 'F'], ['E', 'F']]
G = Graph(nodes)
G.visualize()
</code></pre>
<p>That's how I want to see it:
<img src="https://i.stack.imgur.com/mSK1y.jpg"></p>
|
<p>I'm not sure if you want to change the colors only for this case or make it more flexible (e.g. via a <em>list comprehension</em>), but AFAIK <code>draw_networkx</code> has a <code>node_color</code> parameter which takes a list of strings (or, for <code>RGB</code>, tuples of floats), so all you have to do is prepare a list of colors:</p>
<pre class="lang-py prettyprint-override"><code>import networkx as nx
import matplotlib.pyplot as plt
class Graph:
def __init__(self, edges, colors):
self._edges = edges
self._colors = colors
def visualize(self):
vg = nx.Graph()
vg.add_edges_from(self._edges)
nx.draw_networkx(vg, node_color=self._colors)
plt.show()
nodes = [['A', 'B'], ['A', 'C'], ['B', 'D'], ['C', 'D'],
['C', 'E'], ['D', 'F'], ['E', 'F']]
colors = ['green', 'red', 'red', 'green', 'green', 'red']
G = Graph(nodes, colors)
G.visualize()
</code></pre>
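<p>One caveat worth noting: a flat color list is matched to nodes in the graph's own insertion order, which here follows the order nodes are first seen in the edge list. A sketch of an order-safe variant that derives the list from a per-node mapping (the mapping itself is illustrative):</p>
<pre><code>color_map = {'A': 'green', 'B': 'red', 'C': 'red',
             'D': 'green', 'E': 'green', 'F': 'red'}
nx.draw_networkx(vg, node_color=[color_map[node] for node in vg.nodes])
</code></pre>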
|
python|python-3.x|matplotlib|graph
| 1 |
1,909,127 | 59,121,428 |
Strange convertion of pandas dataframe to spark dataframe with defined schema
|
<p>I'm facing the following problem and couldn't find an answer yet: when converting a pandas dataframe of integers to a pyspark dataframe with a schema that expects strings, the values change into "strange" strings, as in the example below. I've saved a lot of important data like that, and I wonder why it happened and whether it is possible to "decode" these symbols back into their integer form. Thanks in advance!</p>
<pre><code>import pandas as pd
from pyspark.sql.types import StructType, StructField, StringType

df = pd.DataFrame(data={"a": [111, 222, 333]})

schema = StructType([
    StructField("a", StringType(), True)
])

sparkdf = spark.createDataFrame(df, schema)
sparkdf.show()
</code></pre>
<p>Output:</p>
<pre><code>+---+
|  a|
+---+
|  o|
|  Þ|
|  ō|
+---+
+---+
</code></pre>
|
<p>I cannot reproduce the problem on any recent version but the most likely reason is that you incorrectly defined the schema (in combination with enabled Arrow support).</p>
<p>Either cast the input:</p>
<pre><code>df["a"] = df.a.astype("str")
</code></pre>
<p>or define the correct schema:</p>
<pre><code>from pyspark.sql.types import LongType

schema = StructType([
    StructField("a", LongType(), True)
])
</code></pre>
|
pandas|dataframe|apache-spark
| 0 |
1,909,128 | 63,298,054 |
How to check if my code runs inside a SLURM environment?
|
<p>I am working on a workflow using Snakemake that is supposed to be portable to any Linux-based system, but is mainly developed to run on an HPC cluster using SLURM.
To optimize for SLURM, I'd like to check whether the code runs in a SLURM environment and then alter tasks slightly to improve resource management.</p>
<p>My first idea was to just try to resolve the environment variable $SLURM_JOB_ID via os.path.expandvars, but that is kinda dirty in my opinion, so is there a clean way to just check the environment?</p>
|
<p>Checking for the environment variable is the way to go. In Python you would do it like this:</p>
<pre><code>import os

if "SLURM_JOB_ID" in os.environ:
    print("Running in Slurm")
else:
    print("NOT running in Slurm")
</code></pre>
|
python|hpc|slurm|snakemake
| 1 |
1,909,129 | 62,433,410 |
TensorFlow fake-quantize layers are also called from TF-Lite
|
<p>I'm using TensorFlow 2.1 in order to train models with quantization-aware training.</p>
<p>The code to do that is:</p>
<pre><code>import tensorflow_model_optimization as tfmot
model = tfmot.quantization.keras.quantize_annotate_model(model)
</code></pre>
<p>This will add fake-quantize nodes to the graph. These nodes should adjust the model's weights so they are more easier to be quantized into int8 and to work with int8 data.</p>
<p>When the training ends, I convert and quantize the model to TF-Lite like so:</p>
<pre><code>converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = [give data provider]
quantized_tflite_model = converter.convert()
</code></pre>
<p>At this point, I wouldn't expect to see the fake-quantize layers in the TF-Lite graph. But surprisingly, I do see them.
Moreover, when I run this quantized model in the TF-Lite C++ <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/examples/minimal/minimal.cc" rel="noreferrer">sample app</a>, I see that it also runs the fake-quantize nodes during inference. In addition to that, it also dequantizes and quantizes the activations between each layer.</p>
<p>That's a sample of the output from the C++ code:</p>
<blockquote>
<p>Node 0 Operator Builtin Code 80 FAKE_QUANT<br>
Inputs: 1<br>
Outputs: 237<br>
Node 1 Operator Builtin Code 114 QUANTIZE<br>
Inputs: 237<br>
Outputs: 238<br>
Node 2 Operator Builtin Code 3 CONV_2D<br>
Inputs: 238 59 58<br>
Outputs: 167<br>
Temporaries: 378<br>
Node 3 Operator Builtin Code 6 DEQUANTIZE<br>
Inputs: 167<br>
Outputs: 239<br>
Node 4 Operator Builtin Code 80 FAKE_QUANT<br>
Inputs: 239<br>
Outputs: 166<br>
Node 5 Operator Builtin Code 114 QUANTIZE<br>
Inputs: 166<br>
Outputs: 240<br>
Node 6 Operator Builtin Code 3 CONV_2D<br>
Inputs: 240 61 60<br>
Outputs: 169 </p>
</blockquote>
<p>So I find all this very weird, especially taking into account that this model should run only on int8, while the fake-quantize nodes are actually getting float32 as inputs.</p>
<p>Any help here would be appreciated.</p>
|
<p>You can force TF Lite to only use the INT operations:</p>
<pre class="lang-py prettyprint-override"><code>converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
</code></pre>
<p>If an error occurs, then some layers of your network do not have an INT8 implementation yet.</p>
<p>Furthermore you could also try to investigate your network using <a href="https://github.com/lutzroeder/netron" rel="nofollow noreferrer">Netron</a>.</p>
<p>Nonetheless, if you also want to have INT8 inputs and output you also need to adjust those:</p>
<pre><code>converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
</code></pre>
<p>However, there is currently an open issue regarding the in- and output, see <a href="https://github.com/tensorflow/tensorflow/issues/38285" rel="nofollow noreferrer">Issue #38285</a></p>
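<p>Putting the pieces together, a full-integer conversion would look roughly like this (a sketch; <code>representative_dataset_fn</code> stands in for your data provider):</p>
<pre><code>converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_fn
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
quantized_tflite_model = converter.convert()
</code></pre>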
|
tensorflow|quantization|tensorflow-lite|quantization-aware-training
| 0 |
1,909,130 | 62,069,979 |
pandas merge with conditional aggregation
|
<p>I want to merge two dataframes based on a composed key. The second dataframe has duplicated rows with respect to that key. Note that the key is not unique in the first dataframe either, because there are in fact many other columns in the real data. I need to merge with aggregated (product) values from the second dataframe, but with a condition on dates: the rows to aggregate should have a date lower than the date of the row from the first dataframe.</p>
<p>Here is an example : </p>
<pre><code>df1 = pd.DataFrame({
'Code': ['Code1', 'Code1', 'Code1', 'Code2', 'Code3', 'Code4'],
'SG': ['SG1', 'SG1', 'SG1', 'SG2', 'SG3', 'SG3'],
'Date':
['2020-02-01', '2020-02-01', '2020-03-01', '2020-01-01', '2020-02-01', '2020-02-01']
})
print(df1)
Code SG Date
0 Code1 SG1 2020-02-01
1 Code1 SG1 2020-02-01
2 Code1 SG1 2020-03-01
3 Code2 SG2 2020-01-01
4 Code3 SG3 2020-02-01
5 Code4 SG3 2020-02-01
df2 = pd.DataFrame({
'Code': ['Code1', 'Code1', 'Code2', 'Code3'],
'SG': ['SG1', 'SG1', 'SG2', 'SG3'],
'Date': ["2019-01-01", "2020-02-25", "2020-01-13", "2020-01-25"],
'Coef': [0.5, 0.7, 0.3, 0.3]
})
print(df2)
Code SG Date Coef
0 Code1 SG1 2019-01-01 0.5
1 Code1 SG1 2020-02-25 0.7
2 Code2 SG2 2020-01-13 0.3
3 Code3 SG3 2020-01-25 0.3
</code></pre>
<p>I want the following result: line two has aggregated coef 0.5 x 0.7 = 0.35, as all df2.Date values for the corresponding key are lower than df1.Date</p>
<pre><code> Code SG Date Coef
0 Code1 SG1 2020-02-01 0.50
1 Code1 SG1 2020-02-01 0.50
2 Code1 SG1 2020-03-01 0.35
3 Code2 SG2 2020-01-01 NaN
4 Code3 SG3 2020-02-01 0.30
5 Code4 SG3 2020-02-01 NaN
</code></pre>
<p>Thank you for help.</p>
|
<p>OK, I finally got it!</p>
<h1>Merging (LEFT JOIN) by Code and SG</h1>
<pre><code>df_group = pd.merge(df1,df2, on=['Code','SG'], how='left', suffixes=('','_result'))
</code></pre>
<h1>Creating a filter for lower dates</h1>
<pre><code>df_group['lower_date_mask'] = df_group['Date_result'] <= df_group['Date']
</code></pre>
<h1>Filtering the Coef column with NaNs.</h1>
<pre><code>df_group.loc[df_group['lower_date_mask'] == False,'lower_date_mask'] = np.nan
df_group['Coef'] = df_group['Coef'] * df_group['lower_date_mask']
</code></pre>
<h1>We assign infinity to True values here just to avoid a Pandas bug when executing the <code>.prod()</code> function with NaNs</h1>
<pre><code>df_group.loc[df_group['lower_date_mask'] == 1.0,'lower_date_mask'] = np.inf
</code></pre>
<p>Github issue about the aggregation functions with nan: <a href="https://github.com/pandas-dev/pandas/issues/20824" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/issues/20824 </a></p>
<h1>Aggregating by .prod()</h1>
<pre><code>df_group = df_group.groupby(['Code','SG','Date']).prod()
</code></pre>
<h1>Creating final dataframe</h1>
<pre><code>df_group.reset_index(inplace = True)
df_group.loc[df_group['lower_date_mask'] == 1.0,'Coef'] = np.nan
df_group.drop(columns = ['lower_date_mask'],inplace = True)
</code></pre>
<h1>Final output</h1>
<pre><code> Code SG Date Coef
0 Code1 SG1 2020-02-01 0.50
1 Code1 SG1 2020-03-01 0.35
2 Code2 SG2 2020-01-01 NaN
3 Code3 SG3 2020-02-01 0.30
4 Code4 SG3 2020-02-01 NaN
</code></pre>
<p>It is worth saying that you can achieve this with the <code>.apply()</code> function; however, that would slow you down as your DataFrame grows bigger (see the sketch below).</p>
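<p>For reference, a minimal <code>.apply()</code> sketch (it keeps duplicate rows as-is, at the cost of speed):</p>
<pre><code>import numpy as np

df1['Date'] = pd.to_datetime(df1['Date'])
df2['Date'] = pd.to_datetime(df2['Date'])

def coef_product(row):
    # product of all df2 coefs for the same key, dated on or before this row
    mask = ((df2['Code'] == row['Code']) & (df2['SG'] == row['SG'])
            & (df2['Date'] <= row['Date']))
    return df2.loc[mask, 'Coef'].prod() if mask.any() else np.nan

df1['Coef'] = df1.apply(coef_product, axis=1)
</code></pre>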
<p>Hope I could help! It took me literally two hours to think this code through!</p>
<p><strong>EDIT</strong>:</p>
<p>As mentioned by @codesensei, his database has other columns that make the combination <code>['Code','SG','Date']</code> not unique. In that case, there are two possible ways to deal with that. First, if there are other columns in df1 or df2 that make the combination unique, just add them in the grouping, like the following:</p>
<pre><code>df_group = df_group.groupby(['Code','SG','Date','column_of_interest']).prod()
</code></pre>
<p>Second, if it's easier to make the combination unique by some sort of ID, let's say the index of df1, you can do:</p>
<pre><code>df1.reset_index(inplace = True)
# merge dataframes and follow the other steps as stated earlier in this answer
df_group = df_group.groupby(['Code','SG','Date','index']).prod()
</code></pre>
<p>If you want, you can rename the 'index' to something else, just to make it more explicit.</p>
<p>Hope I could help!</p>
|
python|pandas|merge
| 1 |
1,909,131 | 35,726,775 |
Getting region inside contour in VTK
|
<p>I'm generating a contour in 2D through <code>vtkContourFilter</code>. (see image attached below) Now I would like to get the region that is inside the contour and save it as <code>vtkImageData</code> or something similar that would result in an image just with the data inside the contour. Everything else would be black, just to have the same dimensions as the slice.</p>
<p><a href="https://i.stack.imgur.com/cZpBd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cZpBd.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/GGVKR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GGVKR.png" alt="enter image description here"></a></p>
<p>I don't know how to get the region inside the contour; is there any way to do it?</p>
<p>This is what I did so far: </p>
<pre><code>import vtk
reader = vtk.vtkXMLImageDataReader()
reader.SetFileName("sample.vti")
reader.GetOutput().SetUpdateExtentToWholeExtent()
reader.Update()
flipYFilter = vtk.vtkImageFlip()
flipYFilter.SetFilteredAxis(1)
flipYFilter.SetInput(reader.GetOutput())
flipYFilter.Update()
image = flipYFilter.GetOutput()
extractSlice = vtk.vtkExtractVOI()
extractSlice.SetInput(image)
extractSlice.SetVOI(image.GetExtent()[0], image.GetExtent()[1], \
image.GetExtent()[2], image.GetExtent()[3], \
5, 5)
extractSlice.SetSampleRate(1, 1, 1)
extractSlice.Update()
contour = vtk.vtkContourFilter()
contour.SetInputConnection(extractSlice.GetOutputPort())
contour.SetValue(1,90)
#How to get the region inside the contour?
</code></pre>
<p>Thanks in advance. </p>
|
<p>The <code>vtkContourFilter</code> produces - in your case - a line representation that does not allow for any "inside/outside" filtering. What you want is a threshold filter such as <code>vtkThreshold</code>:</p>
<pre><code>threshold = vtk.vtkThreshold()
threshold.SetInputConnection(extractSlice.GetOutputPort())
threshold.ThresholdBetween(37.35310363769531, 276.8288269042969)
threshold.Update()
</code></pre>
<p>The above code is something out of my head, and the two scalars are the min and max values which you apply the threshold to. Have a look at <a href="http://www.paraview.org/" rel="nofollow noreferrer">Paraview</a>, which you can use to assemble your visualization and record everything using the <em>Python tracer</em>. This leaves you with Python code that you can then use with normal VTK and Python, which is exactly what you need. This way, the prototyping process is much faster than with Python alone.</p>
<p><a href="https://i.stack.imgur.com/biWh6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/biWh6.png" alt="enter image description here"></a></p>
|
python|vtk|contour
| 4 |
1,909,132 | 58,670,836 |
Function output in the form of list
|
<p>I am trying to print some prime numbers fetched from excel based on some condition.
I want the output as:</p>
<pre><code>values fetched are 2, 4, 6 from excel
</code></pre>
<p>But am getting output as </p>
<pre><code>values fetched are 2 from excel
values fetched are 4 from excel
values fetched are 6 from excel
</code></pre>
<p>Code I've tried so far:</p>
<pre><code>def prim():
global K
global excel_value
co = 0
for ro in range(sheet.nrows):
if sheet.cell_value(ro,1)=='Yes':
K = sheet.cell_value(ro,co)
excel_value = K.encode('ascii')
#print(excel_value) # output ---->2 4 6
#sys.stdout.write(str(excel_value)+','+' ') # output --> 2, 4, 6
            output1 = 'values fetched are ' + excel_value + ' from excel'
sys.stdout.write(output1)
prim()
</code></pre>
|
<p>get <code>stdout</code> out of the loop, like this:</p>
<pre><code>def prim():
global K
global excel_value
co = 0
l =[]
for ro in range(sheet.nrows):
if sheet.cell_value(ro,1)=='Yes':
K = sheet.cell_value(ro,co)
excel_value = K.encode('ascii')
l.append(excel_value)
if l:
output1 = 'values fetched are ' + ','.join(l) +' from excel'
else:
output1 = 'Nothing'
sys.stdout.write(output1)
</code></pre>
|
python
| 1 |
1,909,133 | 58,815,813 |
Uploading a Video to Azure Media Services with Python SDKs
|
<p>I am currently looking for a way to upload a video to Azure Media Services (AMS v3) via the Python SDKs. I have followed its instructions, and am able to connect to AMS successfully.</p>
<p><strong>Example</strong></p>
<pre><code>credentials = AdalAuthentication(
context.acquire_token_with_client_credentials,
RESOURCE,
CLIENT,
KEY)
client = AzureMediaServices(credentials, SUBSCRIPTION_ID) # Successful
</code></pre>
<p>I also successfully get all the videos' details uploaded via its portal</p>
<pre><code>for data in client.assets.list(RESOUCE_GROUP_NAME, ACCOUNT_NAME).get(0):
print(f'Asset_name: {data.name}, file_name: {data.description}')
# Asset_name: 4f904060-d15c-4880-8c5a-xxxxxxxx, file_name: 夢想全紀錄.mp4
# Asset_name: 8f2e5e36-d043-4182-9634-xxxxxxxx, file_name: an552Qb_460svvp9.webm
# Asset_name: aef495c1-a3dd-49bb-8e3e-xxxxxxxx, file_name: world_war_2.webm
# Asset_name: b53d8152-6ecd-41a2-a59e-xxxxxxxx, file_name: an552Qb_460svvp9.webm - Media Encoder Standard encoded
</code></pre>
<p><strong>However</strong>, when I tried to use the following method; it failed. Since I have no idea what to parse as <em>parameters</em> - <a href="https://docs.microsoft.com/en-us/python/api/azure-mgmt-media/azure.mgmt.media.operations.assetsoperations?view=azure-python" rel="nofollow noreferrer">Link to Python SDKs</a></p>
<pre><code>create_or_update(resource_group_name, account_name, asset_name,
parameters, custom_headers=None, raw=False, **operation_config)
</code></pre>
<p><br></p>
<p><strong>Therefore</strong>, I would like to ask questions as follows (everything is done via Python SDKs):</p>
<ol>
<li>What kind of parameters does it expect?</li>
<li>Can a video be uploaded directly to AMS or it should be uploaded to Blob Storage first?</li>
<li>Should an <strong>Asset</strong> contain only one video or multiple files are fine?</li>
</ol>
|
<ol>
<li>The documentation for the REST version of that method is at <a href="https://docs.microsoft.com/en-us/rest/api/media/assets/createorupdate" rel="nofollow noreferrer">https://docs.microsoft.com/en-us/rest/api/media/assets/createorupdate</a>. This is effectively the same as the Python parameters.</li>
<li>Videos are stored in Azure Storage for Media Services. This is true for input assets, the assets that are encoded, and any streamed content. It all is in Storage but accessed by Media Services. You do need to create an asset in Media Services which creates the Storage container. Once the Storage container exists you upload via the Storage APIs to that Media Services created container.</li>
<li>Technically multiple files are fine, but there are a number of issues with doing that that you may not expect. I'd recommend using 1 input video = 1 Media Services asset. On the encoding output side there will be more than one file in the asset. Encoding output contains one or more videos, manifests, and metadata files.</li>
</ol>
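<p>A rough sketch of that flow (method names follow the v3 <code>azure-mgmt-media</code> docs, but exact signatures differ between SDK versions; treat this as pseudocode to adapt):</p>
<pre><code>from datetime import datetime, timedelta
from urllib.parse import urlparse
from azure.mgmt.media.models import Asset
from azure.storage.blob import BlobClient

# 1. creating the asset also creates its backing Storage container
client.assets.create_or_update(RESOUCE_GROUP_NAME, ACCOUNT_NAME, "my-input-asset", Asset())

# 2. ask Media Services for a writable SAS URL on that container
sas = client.assets.list_container_sas(
    RESOUCE_GROUP_NAME, ACCOUNT_NAME, "my-input-asset",
    permissions="ReadWrite",
    expiry_time=datetime.utcnow() + timedelta(hours=1))
container = urlparse(sas.asset_container_sas_urls[0])

# 3. upload the video with the Storage SDK, not the Media Services SDK
blob_url = f"{container.scheme}://{container.netloc}{container.path}/video.mp4?{container.query}"
with open("video.mp4", "rb") as data:
    BlobClient.from_blob_url(blob_url).upload_blob(data)
</code></pre>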
|
python|azure|azure-media-services
| 3 |
1,909,134 | 58,774,849 |
Using Adobe Reader's "Export as text" function in Python
|
<p>I want to convert lots of PDFs into text files.
The formatting is very important and only Adobe Reader seems to get it right (PDFMiner or PyPDF2 do not.)</p>
<p>Is there a way to automate the "export as text" function from Adobe Reader?</p>
|
<p>The following code will do what you want for one file. I recommend organizing the script into a few little functions and then calling the functions in a loop to process many files. You'll need to install the <code>keyboard</code> library using <code>pip</code>, or some other tool.</p>
<pre><code>import pathlib as pl
import os
import keyboard
import time
import io
KILL_KEY = 'esc'
read_path = pl.Path("C:/Users/Sam/Downloads/WS-1401-IP.pdf")
####################################################################
write_path = pl.Path(str(read_path.parent/read_path.stem) + ".txt")
overwrite_file = os.path.exists(write_path)
# alt -- activate keyboard shortcuts
# `F` -- open file menu
# `v` -- select "save as text" option
# keyboard.write(write_path)
# `alt+s` -- save button
# `ctrl+w` -- close file
os.startfile(read_path)
time.sleep(1)
keyboard.press_and_release('alt')
time.sleep(1)
keyboard.press_and_release('f') # -- open file menu
time.sleep(1)
keyboard.press_and_release('v') # -- select "save as text" option
time.sleep(1)
keyboard.write(str(write_path))
time.sleep(1)
keyboard.press_and_release('alt+s')
time.sleep(2)
if overwrite_file:
keyboard.press_and_release('y')
# wait for program to finish saving
waited_too_long = True
for _ in range(5):
time.sleep(1)
if os.path.exists(write_path):
waited_too_long = False
break
if waited_too_long:
with io.StringIO() as ss:
print(
"program probably saved to somewhere other than",
write_path,
file = ss
)
msg = ss.getvalue()
raise ValueError(msg)
keyboard.press_and_release('ctrl+w') # close the file
</code></pre>
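<p>To process many PDFs, wrap the body above in a function and loop over a folder (a sketch; the folder path is an assumption):</p>
<pre><code>def export_pdf_as_text(read_path):
    ...  # body of the script above, parameterized on read_path

for pdf in pl.Path("C:/Users/Sam/Downloads").glob("*.pdf"):
    export_pdf_as_text(pdf)
</code></pre>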
|
python|pdf|text|adobe-reader
| 0 |
1,909,135 | 73,259,547 |
Get the indexes of the values which are greater than 0 in the column of a dataframe
|
<p>I'm still looking for the right solution for this. I checked many questions but haven't found this one yet. Can someone pls help me?</p>
<p>I want to go through a column of the dataframe and check every value in it. If a value is greater than 0, I want to get its index.</p>
<p>This is what i have tried so far:
<a href="https://i.stack.imgur.com/xyDlf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xyDlf.png" alt="enter image description here" /></a></p>
|
<p>This should do the trick:</p>
<pre><code>ans = df.index[df['Column_name']>0].tolist()
</code></pre>
<p><code>ans</code> will be the list of the indexes of the values that are greater than 0 in the column <code>"Column_name"</code></p>
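<p>A quick self-contained example of what that returns:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Column_name': [-1, 2, 0, 5]})
df.index[df['Column_name'] > 0].tolist()  # [1, 3]
</code></pre>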
<p>If you have any questions feel free to ask me in the comments and if my comment helped you please consider marking it as the answer :)</p>
|
python|dataframe
| 1 |
1,909,136 | 31,657,287 |
Django 1.8 (on Python 3) - Make a SOAP request with pysimplesoap
|
<p>Since one week I'm trying to retrieve some datas from a Dolibarr application including Dolibarr's webservices.</p>
<p>In few words, I'm trying to make a soap request to retrieve user's informations.</p>
<p>At first, I tried to instantiate SoapClient with the 'wsdl' and 'trace' parameters, in vain: the SoapClient object was never created!</p>
<p>Secondly, I made a classical SoapClient object without 'wsdl'; instead I used 'location', 'action', 'namespace', 'soap_ns' and 'trace'. It was a success (I think), but it didn't work when I called the client's call method: my dolibarrkey didn't match the key on the webservice, even though the keys are the same (copy & paste).</p>
<p>For more explanation, take a look at the Dolibarr API (to retrieve data) with the XML data format.</p>
<p>Link to getUser web service (click on getUser to show parameters):
<a href="http://barrdoli.yhapps.com/webservices/server_user.php" rel="nofollow">http://barrdoli.yhapps.com/webservices/server_user.php</a></p>
<p>Link to xml dataformat (for the SOAP request maybe):
<a href="http://barrdoli.yhapps.com/webservices/server_user.php?wsdl" rel="nofollow">http://barrdoli.yhapps.com/webservices/server_user.php?wsdl</a></p>
<pre><code>from pysimplesoap.client import SoapClient, SoapFault
import sys
def listThirdParties():
# create a simple consumer
try:
# client = SoapClient(
# "[MyAppDomain]/webservices/server_user.php")
# print(client)
# client = SoapClient(wsdl="[MyAppDomain]/webservices/server_user.php?wsdl", trace=True)
client = SoapClient(
location = "[myAppDomain]/webservices/server_user.php",
action = '[myAppDomain]/webservices/server_user.php?wsdl', # SOAPAction
namespace = "[myAppDomain]/webservices/server_user.php",
soap_ns='soap',
trace = True,
)
print("connected bitch")
except:
print("error connect")
message = dict()
message['use'] = "encoded"
message["namespace"] = "http://www.dolibarr.org/ns/"
message["encodingStyle"] = "http://schemas.xmlsoap.org/soap/encoding/"
message["message"] = "getUserRequest"
parts = dict()
auth = dict()
auth['dolibarrkey'] = '********************************'
auth['sourceapplication'] = 'WebServicesDolibarrUser'
auth['login'] = '********'
auth['password'] = '********'
auth['entity'] = ''
parts["authentication"] = auth
parts["id"] = 1
parts["ref"] = "ref"
parts["ref_ext"] = "ref_ext"
message["parts"] = parts
# call the remote method
response = client.call(method='getUser', kwargs=message)
# extract and convert the returned value
# result = response.getUser
# return int(result)
print(response)
pass
</code></pre>
<p>I've "BAD_VALUE_FOR_SECURITY_KEY" into a xlm response, I think it's my request which made with a bad xml dataformat..</p>
<p>shell response : </p>
<pre><code>-------- RESPONSE -------
b'<?xml version="1.0" encoding="UTF-8"?><SOAP-ENV:Envelope SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:tns="http://www.dolibarr.org/ns/"><SOAP-ENV:Body><ns1:getUserResponse xmlns:ns1="http://barrdoli.yhapps.com/webservices/server_user.php"><result xsi:type="tns:result"><result_code xsi:type="xsd:string">BAD_VALUE_FOR_SECURITY_KEY</result_code><result_label xsi:type="xsd:string">Value provided into dolibarrkey entry field does not match security key defined in Webservice module setup</result_label></result><user xsi:nil="true" xsi:type="tns:user"/></ns1:getUserResponse></SOAP-ENV:Body></SOAP-ENV:Envelope>'
</code></pre>
<p>I really want to know what I should do to make a working SOAP request with a clean XML data format.</p>
<p>Thanks</p>
|
<p>Did you try to set up WebService Security (WSSE) on the <code>client</code> object? The following examples are taken from <a href="https://code.google.com/p/pysimplesoap/wiki/SoapClient" rel="nofollow">https://code.google.com/p/pysimplesoap/wiki/SoapClient</a></p>
<pre><code>client['wsse:Security'] = {
'wsse:UsernameToken': {
'wsse:Username': 'testwservice',
'wsse:Password': 'testwservicepsw',
}
}
</code></pre>
<p>or try to set up the AuthHeaderElement authentication header:</p>
<pre><code>client['AuthHeaderElement'] = {'username': 'mariano', 'password': 'clave'}
</code></pre>
|
php|python|django|web-services|soap
| 0 |
1,909,137 | 31,237,091 |
python xlwings in virtualenv possible?
|
<p>I've tried to start playing with xlwings using python 3.4.3 in a virtualenv, but one of the example programs errors out because it cannot see numpy, which is very much installed in the virtualenv. <code>pip freeze</code> run in the virtualenv shows (cleaned some of the obviously non-essential out):</p>
<pre><code>appscript==1.0.1
lxml==3.4.4
numpy==1.9.2
pandas==0.16.1
psutil==3.0.1
ptyprocess==0.5
pyparsing==2.0.3
python-dateutil==2.4.2
virtualenv==13.0.3
virtualenv-clone==0.2.5
virtualenvwrapper==4.6.0
xlrd==0.9.3
XlsxWriter==0.7.3
xlwings==0.3.5
</code></pre>
<p>I'm not sure that setting <code>PYTHON_MAC</code> to the location of my 3.4.3 install (done via Homebrew) is going to solve this because the location of the site-packages is elsewhere.</p>
<p>Is it possible to run xlwings from a virtualenv or do I need to have my desired packages installed in the system wide site-packages as well?</p>
|
<p>You need to set the location of <code>PYTHON_MAC</code> (or <code>PYTHON_WIN</code>) to the location of your virtualenv. E.g. <code>PYTHON_MAC = ".../env/bin/python"</code>.</p>
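<p>For example, with a virtualenvwrapper-style layout (the path is an assumption; point it at your own env):</p>
<pre><code>PYTHON_MAC = "/Users/me/.virtualenvs/myenv/bin/python"
</code></pre>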
|
python|numpy|xlwings
| 6 |
1,909,138 | 15,608,555 |
Python convert html to text and mimic formatting
|
<p>I'm learning BeautifulSoup, and found many "html2text" solutions, but the one i'm looking for should mimic the formatting:</p>
<pre><code><ul>
<li>One</li>
<li>Two</li>
</ul>
</code></pre>
<p>Would become</p>
<pre><code>* One
* Two
</code></pre>
<p>and </p>
<pre><code>Some text
<blockquote>
More magnificent text here
</blockquote>
Final text
</code></pre>
<p>to </p>
<pre><code>Some text
More magnificent text here
Final text
</code></pre>
<p>I'm reading the docs, but I'm not seeing anything straight forward. Any help? I'm open to using something other than beautifulsoup.</p>
|
<p>Take a look at Aaron Swartz's <a href="https://github.com/aaronsw/html2text" rel="noreferrer">html2text</a> script (can be installed with <code>pip install html2text</code>). Note that the output is valid <a href="http://en.wikipedia.org/wiki/Markdown" rel="noreferrer">Markdown</a>. If for some reason that doesn't fully suit you, some rather trivial tweaks should get you the exact output in your question:</p>
<pre><code>In [1]: import html2text
In [2]: h1 = """<ul>
...: <li>One</li>
...: <li>Two</li>
...: </ul>"""
In [3]: print html2text.html2text(h1)
* One
* Two
In [4]: h2 = """<p>Some text
...: <blockquote>
...: More magnificent text here
...: </blockquote>
...: Final text</p>"""
In [5]: print html2text.html2text(h2)
Some text
> More magnificent text here
Final text
</code></pre>
|
python|html|beautifulsoup
| 13 |
1,909,139 | 15,689,616 |
Generating list with constraints
|
<p>I need to generate a list of length n that:</p>
<ol>
<li>has elements from the domain [-1, 0, 1]</li>
<li>Has exactly k nonzero elements </li>
</ol>
<p>I understand that my elements will be a subset of the cross-product of [-1, 0, 1] with itself numerous times; however, simply generating the cross-product using the itertools package and then removing the "wrong" ones isn't possible in a timely manner for n bigger than about 10.</p>
<p>I'm wondering if there is a feasible way?</p>
<p>N.B. For context the problem is generating Circular Weighing Matrices using search algorithms. Any insight into the problem conceptually is also appreciated.</p>
|
<p>The problem reduces to finding lists with <code>n-k</code> zeroes and <code>k</code> non-zeroes, then specializing the non-zeroes to -1 and 1.</p>
<p>You can do this easily with combinations of indices:</p>
<pre><code>import itertools

def gen_lists(n, k):
    # choose the k positions that will be nonzero
    for nzinds in itertools.combinations(range(n), k):
        # give each nonzero position a sign
        for nz in itertools.product([-1, 1], repeat=k):
            l = [0] * n  # fresh list each time, so the yielded lists stay independent
            for i, v in zip(nzinds, nz):
                l[i] = v
            yield l
</code></pre>
<p>Sample output:</p>
<pre><code>>>> for l in gen_lists(3, 1):
...     print l
...
[-1, 0, 0]
[1, 0, 0]
[0, -1, 0]
[0, 1, 0]
[0, 0, -1]
[0, 0, 1]
</code></pre>
|
python|list
| 1 |
1,909,140 | 59,617,176 |
Wrong plotting in bokeh
|
<p>When running the below code, it displays a wrong plot.</p>
<pre><code>from pandas_datareader import data
from datetime import datetime as dt
from bokeh.plotting import figure, show, output_file
st=dt(2016,3,1)
end=dt(2016,3,10)
df=data.DataReader(name="GOOG",data_source="yahoo",start=st,end=end)
p=figure(x_axis_type="datetime",height=300,width=1000)
p.title.text="CandleStick graph"
p.rect(df.index[df.Close > df.Open],(df.Close+df.Open)/2,12*60*60*1000,abs(df.Close-df.Open))
show(p)
</code></pre>
|
<p>All the data columns need to be the same length, but you are passing one that is shorter than the others:</p>
<pre><code>df.index[df.Close > df.Open]
</code></pre>
<p>Actually running your code, Bokeh even tells you exactly this:</p>
<pre><code>BokehUserWarning: ColumnDataSource's columns must be of the same length.
Current lengths: ('height', 9), ('x', 5), ('y', 9)
</code></pre>
<p>You are only passing 5 coordinates for <code>x</code> and 9 coordinates for all the others. The arguments all need to match up. You can either:</p>
<ul>
<li><p>Not do the subsetting on <code>df.index</code> at all</p></li>
<li><p>Subset all the other arguments in the same way</p></li>
</ul>
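<p>The second option would look roughly like this (a sketch):</p>
<pre><code>inc = df.Close > df.Open  # same boolean mask applied to every argument
p.rect(df.index[inc], (df.Close[inc] + df.Open[inc]) / 2,
       12*60*60*1000, abs(df.Close[inc] - df.Open[inc]))
</code></pre>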
<p><em>(For future reference: you should <strong>always</strong> include any error or warning messages such as the one above in your SO questions—and, as @sjc mentioned, describe the problem in actual detail, not just state "is wrong")</em></p>
|
python|pandas|bokeh|pandas-bokeh
| 2 |
1,909,141 | 25,069,622 |
Pandas - read_hdf or store.select returning incorrect results for a query
|
<p>I have a large dataset (4 million rows, 50 columns) stored via pandas store.append. When I use either store.select or read_hdf with a query for 2 columns being greater than a certain value (i.e. "(a > 10) & (b > 1)") I get 15,000 or so rows returned.</p>
<p>When I read in the entire table, as say df, and do df[(df.a > 10) & (df.b > 1)] I get 30,000 rows. I narrowed down the problem - when I read in the entire table and do df.query("(a > 10) & (b > 1)") it's the same 15,000 rows, but when I set the engine to python ---> df.query("(a > 10) & (b > 1)", engine = 'python') I get the 30,000 rows.</p>
<p>I suspect it has something to do with the eval/numexpr method of querying in the HDF and Query methods.</p>
<p>The types are float64's in columns a and b, and even if I query with a float (i.e. 1. instead of 1) the problem still persists.</p>
<p>I would appreciate any feedback or if others have the same problem we need to fix this.</p>
<p>Regards,
Neil</p>
<p>========================</p>
<h1>Here's the info:</h1>
<h2>pd.show_versions()</h2>
<pre><code>INSTALLED VERSIONS
commit: None
python: 2.7.6.final.0
python-bits: 32
OS: Darwin
OS-release: 13.3.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: None
pandas: 0.14.1
nose: 1.3.3
Cython: None
numpy: 1.8.0
scipy: 0.14.0
statsmodels: 0.5.0
IPython: 1.2.1
sphinx: 1.2.2
patsy: 0.2.0
scikits.timeseries: 0.91.3
dateutil: 2.2
pytz: 2013.8
bottleneck: 0.7.0
tables: 3.1.1
numexpr: 2.4
matplotlib: 1.3.1
openpyxl: 2.0.3
xlrd: 0.9.3
xlwt: 0.7.5
xlsxwriter: 0.5.5
lxml: 3.3.5
bs4: None
html5lib: 0.95-dev
httplib2: None
apiclient: None
rpy2: None
sqlalchemy: 0.9.4
pymysql: None
psycopg2: None
</code></pre>
<h2>df.info() ---> on the selected 15,000 or so rows</h2>
<pre><code>Int64Index: 15533 entries, 67302 to 142465
Data columns (total 47 columns):
date 15533 non-null datetime64[ns]
text 15533 non-null object
date2 1090 non-null datetime64[ns]
x1 15533 non-null float64
x2 15533 non-null float64
x3 15533 non-null float64
x4 15533 non-null float64
x5 15533 non-null float64
x6 15533 non-null float64
x7 15533 non-null float64
x8 15533 non-null float64
x9 15533 non-null float64
x10 15533 non-null float64
x11 15533 non-null float64
x12 15533 non-null float64
x13 15533 non-null float64
x14 15533 non-null float64
x15 15533 non-null float64
x16 15533 non-null float64
x17 15533 non-null float64
x18 15533 non-null float64
a 15533 non-null float64
x19 15533 non-null float64
x20 15533 non-null float64
x21 15533 non-null float64
x22 15533 non-null float64
x23 15533 non-null float64
x24 15533 non-null float64
b 15533 non-null float64
x25 15533 non-null float64
x26 15533 non-null float64
x27 15533 non-null float64
x28 15533 non-null float64
x29 15533 non-null float64
x30 15533 non-null float64
x31 15497 non-null float64
x32 15497 non-null float64
x33 15497 non-null float64
x34 15497 non-null float64
x35 15533 non-null int64
x36 15533 non-null int64
x37 15533 non-null int64
x38 15533 non-null int64
x39 15533 non-null int64
x40 15533 non-null int64
x41 15533 non-null int64
x42 15533 non-null int64
dtypes: datetime64[ns](2), float64(36), int64(8), object(1)
</code></pre>
<h2>ptdump -av file</h2>
<pre><code>/ (RootGroup) ''
/._v_attrs (AttributeSet), 4 attributes:
[CLASS := 'GROUP',
PYTABLES_FORMAT_VERSION := '2.1',
TITLE := '',
VERSION := '1.0']
/MKT (Group) ''
/MKT._v_attrs (AttributeSet), 14 attributes:
[CLASS := 'GROUP',
TITLE := '',
VERSION := '1.0',
data_columns := ['date', 'text', 'a', 'x20', 'x23', 'x24', 'b', 'x25', 'x26', 'x35', 'x36', 'x37', 'x38', 'x39', 'x40', 'x41', 'x42'],
encoding := None,
index_cols := [(0, 'index')],
info := {1: {'type': 'Index', 'names': [None]}, 'index': {}},
levels := 1,
nan_rep := 'nan',
non_index_axes := [(1, ['date', 'text', 'date2', 'x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7', 'x8', 'x9', 'x10', 'x11', 'x12', 'x13', 'x14', 'x15', 'x16', 'x17', 'x18', 'a', 'x19', 'x20', 'x21', 'x22', 'x23', 'x24', 'b', 'x25', 'x26', 'x27', 'x28', 'x29', 'x30', 'x31', 'x32', 'x33', 'x34', 'x35', 'x36', 'x37', 'x38', 'x39', 'x40', 'x41', 'x42'])],
pandas_type := 'frame_table',
pandas_version := '0.10.1',
table_type := 'appendable_frame',
values_cols := ['values_block_0', 'values_block_1', 'date', 'text', 'a', 'x20', 'x23', 'x24', 'b', 'x25', 'x26', 'x35', 'x36', 'x37', 'x38', 'x39', 'x40', 'x41', 'x42']]
/MKT/table (Table(3637597,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Int64Col(shape=(1,), dflt=0, pos=1),
"values_block_1": Float64Col(shape=(29,), dflt=0.0, pos=2),
"date": Int64Col(shape=(), dflt=0, pos=3),
"text": StringCol(itemsize=30, shape=(), dflt='', pos=4),
"a": Float64Col(shape=(), dflt=0.0, pos=5),
"x20": Float64Col(shape=(), dflt=0.0, pos=6),
"x23": Float64Col(shape=(), dflt=0.0, pos=7),
"x24": Float64Col(shape=(), dflt=0.0, pos=8),
"b": Float64Col(shape=(), dflt=0.0, pos=9),
"x25": Float64Col(shape=(), dflt=0.0, pos=10),
"x26": Float64Col(shape=(), dflt=0.0, pos=11),
"x35": Int64Col(shape=(), dflt=0, pos=12),
"x36": Int64Col(shape=(), dflt=0, pos=13),
"x37": Int64Col(shape=(), dflt=0, pos=14),
"x38": Int64Col(shape=(), dflt=0, pos=15),
"x39": Int64Col(shape=(), dflt=0, pos=16),
"x40": Int64Col(shape=(), dflt=0, pos=17),
"x41": Int64Col(shape=(), dflt=0, pos=18),
"x42": Int64Col(shape=(), dflt=0, pos=19)}
byteorder := 'little'
chunkshape := (322,)
autoindex := True
colindexes := {
"x41": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"x20": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"x37": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"x42": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"x26": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"x38": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"x40": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"date": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"x36": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"text": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"x23": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"x39": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"index": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"x25": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"x24": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"a": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"x35": Index(6, medium, shuffle, zlib(1)).is_csi=False,
"b": Index(6, medium, shuffle, zlib(1)).is_csi=False}
/MKT/table._v_attrs (AttributeSet), 83 attributes:
[CLASS := 'TABLE',
x23_dtype := 'float64',
x23_kind := ['x23'],
x20_dtype := 'float64',
x20_kind := ['x20'],
FIELD_0_FILL := 0,
FIELD_0_NAME := 'index',
FIELD_10_FILL := 0.0,
FIELD_10_NAME := 'x25',
FIELD_11_FILL := 0.0,
FIELD_11_NAME := 'x26',
FIELD_12_FILL := 0,
FIELD_12_NAME := 'x35',
FIELD_13_FILL := 0,
FIELD_13_NAME := 'x36',
FIELD_14_FILL := 0,
FIELD_14_NAME := 'x37',
FIELD_15_FILL := 0,
FIELD_15_NAME := 'x38',
FIELD_16_FILL := 0,
FIELD_16_NAME := 'x39',
FIELD_17_FILL := 0,
FIELD_17_NAME := 'x40',
FIELD_18_FILL := 0,
FIELD_18_NAME := 'x41',
FIELD_19_FILL := 0,
FIELD_19_NAME := 'x42',
FIELD_1_FILL := 0,
FIELD_1_NAME := 'values_block_0',
FIELD_2_FILL := 0.0,
FIELD_2_NAME := 'values_block_1',
FIELD_3_FILL := 0,
FIELD_3_NAME := 'date',
FIELD_4_FILL := '',
FIELD_4_NAME := 'text',
FIELD_5_FILL := 0.0,
FIELD_5_NAME := 'a',
FIELD_6_FILL := 0.0,
FIELD_6_NAME := 'x20',
FIELD_7_FILL := 0.0,
FIELD_7_NAME := 'x23',
FIELD_8_FILL := 0.0,
FIELD_8_NAME := 'x24',
FIELD_9_FILL := 0.0,
FIELD_9_NAME := 'b',
a_dtype := 'float64',
a_kind := ['a'],
NROWS := 3637597,
TITLE := '',
VERSION := '2.7',
x24_dtype := 'float64',
x24_kind := ['x24'],
b_dtype := 'float64',
b_kind := ['b'],
x25_dtype := 'float64',
x25_kind := ['x25'],
x26_dtype := 'float64',
x26_kind := ['x26'],
date_dtype := 'datetime64',
date_kind := ['date'],
x39_dtype := 'int64',
x39_kind := ['x39'],
x37_dtype := 'int64',
x37_kind := ['x37'],
x41_dtype := 'int64',
x41_kind := ['x41'],
x35_dtype := 'int64',
x35_kind := ['x35'],
x40_dtype := 'int64',
x40_kind := ['x40'],
x38_dtype := 'int64',
x38_kind := ['x38'],
x42_dtype := 'int64',
x42_kind := ['x42'],
x36_dtype := 'int64',
x36_kind := ['x36'],
index_kind := 'integer',
text_dtype := 'string240',
text_kind := ['text'],
values_block_0_dtype := 'datetime64',
values_block_0_kind := ['date2'],
values_block_1_dtype := 'float64',
values_block_1_kind := ['x22', 'x18', 'x21', 'x16', 'x19', 'x17', 'x4', 'x5', 'x6', 'x7', 'x8', 'x9', 'x29', 'x30', 'x28', 'x2', 'x1', 'x3', 'x10', 'x27', 'x11', 'x12', 'x13', 'x14', 'x15', 'x33', 'x32', 'x34', 'x31']]
</code></pre>
<h1>Here's how I read in the table:</h1>
<pre><code>df = DataFrame()
store = pd.HDFStore('/Users/neil/MKT.h5')
df = store.select('MKT', "(a > 10) & (b > 1)")
store.close()
</code></pre>
<h1>Here's how I write/fill the table:</h1>
<pre><code>store = pd.HDFStore('/Users/neil/MKT.h5')
listofsearchablevars = ['date', 'text', 'a', 'x20', 'x23', 'x24', 'b', 'x25', 'x26', 'x35', 'x36', 'x37', 'x38', 'x39', 'x40', 'x41', 'x42']
df = .....
store.append('MKT', df, data_columns = listofsearchablevars, nan_rep = 'nan', chunksize=500000, min_itemsize = {'values': 30})
store.close()
</code></pre>
<p>EDIT: response to request to provide some sample data....</p>
<h2>Data</h2>
<p>For clarity,
let's call the 15,000 result: "INCORRECT"
let's call the 30,000 result: "CORRECT"
let's call items in CORRECT but not in INCORRECT: "Only in CORRECT"</p>
<p>I have confirmed that all rows/items in INCORRECT are completely found
in CORRECT.</p>
<h1>Here's a couple of rows of data in each (just took rows 10000 and 10001 of each):</h1>
<h2>Only in CORRECT:</h2>
<pre><code> 9869 9870
date 2001-08-10 00:00:00 2001-08-17 00:00:00
text DCR DCR
date2 NaN NaN
x19 1.9 1.8396
x18 1.98 1.9
x20 1.8 1.8
x9 2.54 2.54
x10 5.25 5.125
x11 9.625 9.625
x12 1.61 1.7
x13 1.05 1.05
x14 1.05 1.05
x21 75700 64800
x23 140992.7 116948.9
x24 0.0008284454 0.0007097211
x25 0.002580505 0.002630241
x26 0.001540047 0.001440302
x27 0.001850877 0.001832468
x5 17.915 17.915
x8 17.915 17.915
x2 34.0379 32.9563
a 34.0385 32.95643
x6 -42.80079 -42.80079
x7 -8.762288 -9.844354
x4 0 0
x1 -0.0003349149 -0.0003349149
x3 -0.0003349149 -0.0003349149
x28 1.579e+07 1.579e+07
b 1.261029 1.302433
x29 1.284075 1.326236
x30 1.488814 1.537697
x22 -0.2891579 -0.3205045
x17 0.31 0.31
x15 0.84 0.84
x16 2.5937 2.5937
x34 6.895 7.105
x32 -1.29055 -1.35055
x31 -0.77 -0.63
x33 -0.665 -0.49
x38 1 1
x42 0 0
x36 0 0
x40 0 0
x35 0 0
x39 0 0
x37 0 0
x41 0 0
</code></pre>
<h2>INCORRECT:</h2>
<pre><code> 153641 153642
date 2008-08-22 00:00:00 2008-08-29 00:00:00
text PRL PRL
date2 NaN NaN
x19 1.9 1.88
x18 1.95 1.94
x20 1.85 1.87
x9 2.07 2.07
x10 2.23 2.23
x11 2.94 2.94
x12 1.75 1.75
x13 1.71 1.71
x14 1.69 1.69
x21 133549 73525
x23 254119.1 140764.5
x24 0.001485416 0.0008315729
x25 0.001227271 0.001204803
x26 0.001006876 0.001048327
x27 0.0009764919 0.0009638125
x5 18.008 18.008
x8 18.058 18.058
x2 34.2152 33.855
a 34.3102 33.94904
x6 -35.07229 -35.07229
x7 -0.7620911 -1.123251
x4 0 0
x1 0.0111308 0.0111308
x3 0.0111308 0.0111308
x28 1.5488e+08 1.5488e+08
b 1.251983 1.265302
x29 1.272828 1.286369
x30 1.247996 1.261273
x22 0.1368421 0.1489362
x17 0.16 0.16
x15 0.2 0.2
x16 0.47 0.47
x34 2.25 2.34
x32 1.395 1.365
x31 1.25 1.31
x33 1.175 1.25
x38 1 1
x42 0 0
x36 0 0
x40 0 0
x35 0 0
x39 0 0
x37 0 0
x41 0 0
</code></pre>
<h2>CORRECT:</h2>
<pre><code> 99723 99725
date 2009-11-27 00:00:00 2009-12-11 00:00:00
text ACL ACL
date2 NaN NaN
x19 1.17 1.2
x18 1.22 1.39
x20 1.11 1.14
x9 1.76 1.76
x10 1.76 1.76
x11 1.76 1.76
x12 0.63 0.74
x13 0.36 0.36
x14 0.17 0.17
x21 285474 709374
x23 333678.1 868999.7
x24 0.0005489386 0.001393863
x25 0.002350057 0.002279827
x26 0.002160912 0.002111369
x27 0.002428953 0.002244943
x5 103.908 103.908
x8 103.908 103.908
x2 121.5721 124.6894
a 121.5724 124.6896
x6 92.16074 92.16074
x7 213.7331 216.8503
x4 0 0
x1 -0.008266928 -0.008266928
x3 -0.008266928 -0.008266928
x28 0.02743141 0.02703708
b 1.037747 1.011804
x29 1.421532 1.385994
x30 1.52714 1.488961
x22 1.213675 1.7
x17 0.47 0.47
x15 0.48 0.48
x16 0.48 0.48
x34 0.32 0.32
x32 1.04 1.04
x31 -0.6 -0.6
x33 -0.5901 -0.479
x38 0 0
x42 0 0
x36 0 0
x40 0 0
x35 0 0
x39 0 0
x37 0 0
x41 0 0
</code></pre>
|
<p>SUCCESS!!!!! I filled all NaNs in the data and now the read_hdf query returns the correct 30,000 rows. Column a had NaNs (and it was one of the data_columns in the query, a > 10). Man, that was painful. FYI - because of my paranoia, and to rule out any possible recurrence in the future, I completely fillna(0) the entire table, as I cannot risk reaching conclusions from this analysis with an incorrect or incomplete query of the table. It was a NaN issue for sure.</p>
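<p>For reference, the workaround boils down to this (a minimal sketch, reusing the append call from the question):</p>
<pre><code>df = df.fillna(0)  # no NaNs allowed in the queried data_columns
store.append('MKT', df, data_columns=listofsearchablevars, nan_rep='nan',
             chunksize=500000, min_itemsize={'values': 30})
</code></pre>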
|
python|pandas|pytables|hdf
| 0 |
1,909,142 | 60,267,646 |
Adding functions to content navigation drawer
|
<p>I am trying to add functions to my KivyMD navigation drawer but I can't find a way to do it. I want the items to open different pages; for example, the settings item should open the settings page when clicked. I am using the new updated KivyMD version 0.103.0.</p>
<p>this is an example code</p>
<pre><code>from kivy.lang import Builder
from kivy.uix.boxlayout import BoxLayout
from kivy.properties import StringProperty
from kivymd.app import MDApp  # needed for the MDApp base class used below
class TestNavigationDrawer(MDApp):
def build(self):
return Builder.load_string(KV)
def on_start(self):
icons_item = {
"folder": "My files",
"account-multiple": "Shared with me",
"star": "Starred",
"history": "Recent",
"checkbox-marked": "Shared with me",
"upload": "Upload",
}
for icon_name in icons_item.keys():
self.root.ids.content_drawer.ids.md_list.add_widget(
ItemDrawer(icon=icon_name, text=icons_item[icon_name])
)
</code></pre>
|
<p>A short example how to add function to content;</p>
<pre><code># .py file
class ItemDrawer(OneLineIconListItem):
icon = StringProperty()
def upload(self):
if self.icon == "upload":
print("Item uploaded")
# .kv file
<ItemDrawer>:
on_release: root.upload()
</code></pre>
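<p>If each item should open its own page, one option is switching a <code>ScreenManager</code> from the handler (a sketch; it assumes your KV defines a <code>ScreenManager</code> with id <code>screen_manager</code> and one screen named after each item):</p>
<pre><code># .py file
class ItemDrawer(OneLineIconListItem):
    icon = StringProperty()

    def open_page(self):
        app = MDApp.get_running_app()
        # assumed: screen names match the item texts, e.g. "upload"
        app.root.ids.screen_manager.current = self.text.lower()

# .kv file
<ItemDrawer>:
    on_release: root.open_page()
</code></pre>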
|
python|kivy|kivy-language
| 0 |
1,909,143 | 60,189,348 |
Remove some same-index elements from two lists based on one of them
|
<p>Let's suppose that I have these two lists:</p>
<pre><code>a = [{'id': 3}, {'id': 7}, None, {'id': 1}, {'id': 6}, None]
b = ['5', '5', '3', '5', '3', '5']
</code></pre>
<p>I want to filter both at the same indexes, based only on <code>a</code>: specifically, filtering out the <code>None</code> elements of <code>a</code>.</p>
<p>So finally I want to have this:</p>
<pre><code>[{'id': 3}, {'id': 7}, {'id': 1}, {'id': 6}]
['5', '5', '5', '3']
</code></pre>
<p>I have written this code for this:</p>
<pre><code>a_temp = []
b_temp = []
for index, el in enumerate(a):
if el:
a_temp.append(a[index])
b_temp.append(b[index])
a = a_temp[:]
b = b_temp[:]
</code></pre>
<p>I am wondering though if there is any more pythonic way to do this?</p>
|
<p>This solution </p>
<ol>
<li>uses <code>zip()</code> to group corresponding elements of <code>a</code> and <code>b</code> together</li>
<li>Makes a list of 2-tuples of corresponding elements, such that the corresponding element of <code>a</code> is not None</li>
<li>Use the <code>zip(*iterable)</code> idiom to flip the dimensions of the list, thus separating the single list of 2-tuples into two lists of singletons, which we assign to <code>new_a</code> and <code>new_b</code></li>
</ol>
<pre class="lang-py prettyprint-override"><code>a = [{'id': 3}, {'id': 7}, None, {'id': 1}, {'id': 6}, None]
b = ['5', '5', '3', '5', '3', '5']
new_a, new_b = zip(*((x, y) for x, y in zip(a, b) if x))
# new_a = ({'id': 3}, {'id': 7}, {'id': 1}, {'id': 6})
# new_b = ('5', '5', '5', '3')
</code></pre>
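<p>If you need lists instead of tuples, wrap the result: <code>new_a, new_b = map(list, zip(*((x, y) for x, y in zip(a, b) if x)))</code></p>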
|
python-3.x
| 2 |
1,909,144 | 59,979,138 |
I am trying to print the highest odd number but giving my y and z value the highest number (even if its even) creates an issue?
|
<p>I am trying to print the highest odd number but giving my y and z value the highest number (even if its even) creates an issue??</p>
<pre><code>x,y,z = 13, 14, 10
if x%2 != 0 or y%2 != 0 or z%2 != 0:
if x > y and x > z and x%2 != 0:
print(x)
elif y > z and y%2 != 0:
print(y)
elif z%2 != 0:
print(z)
else:
print('None of them are odd!')
</code></pre>
|
<p>It's easier to just use a list instead of assigning each value to a variable, but if you need to keep x, y, z then use: </p>
<pre><code>x,y,z = 13, 14, 10
try:
print(max(i for i in [x, y, z] if i % 2))
except ValueError:
print("None Are Odd")
</code></pre>
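<p>Or, the same idea without the <code>try/except</code>:</p>
<pre><code>odds = [i for i in (x, y, z) if i % 2]  # keep only the odd values
print(max(odds) if odds else "None Are Odd")
</code></pre>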
|
python|python-3.x
| 2 |
1,909,145 | 30,379,560 |
What does this function do? (Python iterators)
|
<pre><code>from itertools import izip_longest  # Python 2; use itertools.zip_longest on Python 3

def partition(n, iterable):
p = izip_longest(*([iter(iterable)] * n))
r = []
for x in p:
print(x) #I added this
s = set(x)
s.discard(None)
r.append(list(s))
return r
</code></pre>
<p>This is actually in a job posting on SO and being a newbie I thought it was interesting. So you get output like the following:</p>
<pre><code>partition(5, L)
(1, 2, 3, 4, None)
Out[86]: [[1, 2, 3, 4]]
</code></pre>
<p>To me this is already confusing because I thought <code>izip_longest(*([iter(iterable)] * n))</code>would run the <code>izip_longest</code> function on a list of n identical iterators so I would have expected first an output of <code>(1,1,1,1,1)</code> and then an output of <code>(2,2,2,2,2)</code> and so on.</p>
<p>So short version of my question, is what's going on with this line:</p>
<pre><code> p = izip_longest(*([iter(iterable)] * n))
</code></pre>
<p>Parsing it I would have thought [iter(iterable)]*n creates a list of length n of identical iterables all pointing to the same thing - that's what it does on the command line, but that doesn't seem to be what it does here based on the output printed above. </p>
<p>Also I thought the * at the beginning <code>...longest(*...</code> was there since the list is of unknown length but I don't think that entirely makes sense. What is that first <code>*</code> symbol doing inside the function call? Doesn't seem like it's simply indicating an unknown length list of arguments...</p>
<p>So at the end of the day I'm completely lost. Can someone walk me through this syntax?</p>
<p>Thanks a lot for any input!</p>
<hr>
<p>Thanks for all the helpful answers, everyone. I am not sure if I am tacking on an answer or a question here, but it seems to me this list comprehension will do the same thing for lists and tuples (I realize iterators would also apply to dictionaries, custom classes, other things...)</p>
<pre><code>[L[i*n:(i+1)*n] for i in range(int(ceil(len(L)/float(n)))) ]
</code></pre>
|
<p><code>iter(my_list)</code> converts a list into an iterable (that is one where elements are consumed as they are seen)</p>
<p><code>[my_iter]*5</code> creates a new list of <code>[my_iter,my_iter,my_iter,my_iter,my_iter]</code> in which all the <code>my_iter</code>'s point to the same exact iterator</p>
<pre><code>zip(*[my_iter,my_iter,my_iter,my_iter,my_iter])
</code></pre>
<p>is the same as </p>
<pre><code>zip(my_iter,my_iter,my_iter,my_iter,my_iter)
</code></pre>
<p>(the splat simply unpacks a list/tuple) which basically just returns a <code>5xlen(my_list)//5</code> 2d list</p>
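<p>You can see the shared-iterator effect directly: each of the n slots pulls the <em>next</em> element from the same exhaustible stream, which is why you never get <code>(1, 1, 1, 1, 1)</code>:</p>
<pre><code>>>> it = iter([1, 2, 3, 4, 5, 6])
>>> zip(it, it)  # both arguments are the SAME iterator
[(1, 2), (3, 4), (5, 6)]
</code></pre>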
<p>you could simplify it with normal zip</p>
<pre><code># this method will have no `None` entries; guard the tail, since
# big_list[-0:] would wrongly append the whole list when len(big_list) % N == 0
tail = len(big_list) % N
new_list_partitioned = zip(*[iter(big_list)]*N) + ([big_list[-tail:]] if tail else [])
</code></pre>
|
python|iterator
| 6 |
1,909,146 | 67,009,685 |
Replace 2D numpy array with 3D (elements to vectors)
|
<p>This question has probably been asked before so I'm happy to be pointed to the answer but I couldn't find it.</p>
<p>I have a 2D numpy array of <code>True</code> and <code>False</code>. Now I need to convert it into a black and white image (a 3D numpy array), that is, I need [0,0,0] in place of every <code>False</code> and [1,1,1] in place of every <code>True</code>. What's the best way to do this? For example,</p>
<pre><code>Input:
[[False, True],
[True, False]]
Output:
[[[0, 0, 0], [1, 1, 1]],
[[1, 1, 1], [0, 0, 0]]]
</code></pre>
<p>(As you probably know, 3D images are arrays of shape <code>(height, width, 3)</code> where 3 is the depth dimension i.e. number of channels.)</p>
<p>Bonus points if someone can tell me how to also convert it back, i.e., if I have a pure black and white image (purely [0,0,0] and [1,1,1] pixels), how do I get a 2D matrix of the same height-width dimensions but with <code>True</code> in place of white pixels ([1,1,1]) and <code>False</code> in place of black pixels ([0,0,0]).</p>
|
<p>The cheapest way is to view your <code>bool</code> data as <code>np.uint8</code>, and add a fake dimension:</p>
<pre><code>img = np.lib.stride_tricks.as_strided(mask.view(np.uint8),
strides=mask.strides + (0,),
shape=mask.shape + (3,))
</code></pre>
<p>Unlike <a href="https://numpy.org/doc/stable/reference/generated/numpy.ndarray.astype.html" rel="nofollow noreferrer"><code>mask.astype(np.uint8)</code></a>, <a href="https://numpy.org/doc/stable/reference/generated/numpy.ndarray.view.html" rel="nofollow noreferrer"><code>mask.view(np.uint8)</code></a> does not copy the data, instead harnesses the fact that <code>bool_</code> is stored in a single byte. Similarly, the new dimension created by <a href="https://numpy.org/doc/stable/reference/generated/numpy.lib.stride_tricks.as_strided.html" rel="nofollow noreferrer"><code>np.lib.stride_tricks.as_strided</code></a> is a view which does not copy any data.</p>
<p>You can bypass <code>as_strided</code> and <code>view</code> by creating a new <a href="https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html" rel="nofollow noreferrer">array object</a> manually:</p>
<pre><code>img = np.ndarray(shape=mask.shape + (3,), dtype=np.uint8,
strides=mask.strides + (0,), buffer=mask)
</code></pre>
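<p>For the bonus (going back from a pure black-and-white image to a 2D mask), any single channel will do, or be defensive and take <code>any</code> across channels:</p>
<pre><code>mask_back = img[..., 0].astype(bool)  # fine for pure [0,0,0] / [1,1,1] images
mask_back = img.any(axis=-1)          # robust if channels could ever disagree
</code></pre>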
|
python|arrays|numpy|image-processing|cv2
| 2 |
1,909,147 | 65,536,561 |
opening Python IDLE with AutoHotKey
|
<p>I've been messing around with AutoHotKey, trying to create some useful shortcuts for myself. I can open file explorer, calculator, chrome, etc., but I haven't been able to write a program that opens Python IDLE.</p>
<p>My program says:</p>
<pre><code>!t::
Run pythonw
return
</code></pre>
<p>I've tried pythonw.exe, python.exe, python, python3.exe, and a bunch of other combinations. Any idea why it isn't working? Thanks!</p>
|
<p>So, your problem is that Python IDLE is different from pythonw.exe, python.exe, python, and python3.exe.</p>
<p>Python IDLE is an IDE or code editor provided with Python, whereas pythonw.exe, python.exe, python, and python3.exe are the Python interpreter.</p>
<p>pythonw.exe - for executing Python scripts without opening a console window (hidden)<br>
python, python.exe or python3.exe - all the same thing, the normal Python interpreter.</p>
<p>So to open IDLE you should execute the IDLE file not the interpreter.</p>
<p>The path for IDLE(in Windows Computer) is usually :</p>
<pre><code>C:\Python39\Lib\idlelib\idle.bat
</code></pre>
<p>Here the Python version is 3.9, but it may be different in your case!
Check the version you installed by opening a command prompt and typing:</p>
<pre><code>C:\>python --version
Python 3.9.1
</code></pre>
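<p>So the AutoHotkey shortcut becomes (adjust the path to your installed version):</p>
<pre><code>!t::
Run C:\Python39\Lib\idlelib\idle.bat
return
</code></pre>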
|
python|autohotkey|python-idle
| 1 |
1,909,148 | 65,902,043 |
Python: mock or patch a class with code executed in class initialization
|
<p>I have two files, says 'example.py' and 'test.py'</p>
<p>The example.py is like:</p>
<pre><code> class Example(object):
print('hello world')
def greeting(self, words):
print(words)
</code></pre>
<p>and example.py is imported in test.py :</p>
<pre><code>import example
</code></pre>
<p>This will cause the code <code>print('hello world')</code> to be executed.</p>
<p>Is there a way in unittest.mock to mock() or patch() class Example to stop <code>print('hello world')</code> being executed but still to keep method <code>greeting()</code> working?</p>
|
<p>I am sorry but I do not have enough reputation to write a comment so I have to resort to writing an answer.</p>
<p>For the very case that you present, I believe that just suppressing sys.stdout upon initialization is feasible. However, I think that only the end result of the "print('hello world')" can be suppressed, not the execution itself.</p>
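<p>Concretely, that suppression can be done at import time (a sketch):</p>
<pre><code>import contextlib, io

with contextlib.redirect_stdout(io.StringIO()):
    import example  # the class body still runs, but its print output is swallowed
</code></pre>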
<p>I do not have evidence of it, but I think that unittest.mock works around the "class Example" object (it likely inspects its attributes and "copies" them dynamically). Because of this, it requires the "class Example" object to exist before actually creating the mock. Therefore the initialization of "class Example" would have already taken place, and the code that you are interested in suppressing would have already been executed.</p>
<p>Are you only interested in suppressing <strong>the outcome</strong> of that code, or the execution itself?</p>
|
python|unit-testing|mocking
| 0 |
1,909,149 | 50,799,465 |
Pandas to_datetime seems incompatible with numpy datetime objects
|
<p>Printing the start of the dataframe gives me</p>
<pre><code>print(IRdaily.head())
None
1 mo 3 mo 6 mo 1 yr 2 yr 3 yr 5 yr 7 yr 10 yr 20 yr 30 yr
Date
1990-01-02 NaN 7.83 7.89 7.81 7.87 7.90 7.87 7.98 7.94 NaN 8.00
1990-01-03 NaN 7.89 7.94 7.85 7.94 7.96 7.92 8.04 7.99 NaN 8.04
1990-01-04 NaN 7.84 7.90 7.82 7.92 7.93 7.91 8.02 7.98 NaN 8.04
1990-01-05 NaN 7.79 7.85 7.79 7.90 7.94 7.92 8.03 7.99 NaN 8.06
1990-01-08 NaN 7.79 7.88 7.81 7.90 7.95 7.92 8.05 8.02 NaN 8.09
</code></pre>
<p>I'd now like to filter the data by discarding data before a given date, so I tried the following:</p>
<pre><code>IRdaily = IRdaily[IRdaily['Date'] >= datetime.datetime(2000,1,1)]
</code></pre>
<p>but I get a KeyError:</p>
<pre><code>IRdaily[IRdaily['Date'] >= dt]
KeyError: 'Date'
</code></pre>
<p>also, </p>
<pre><code>type(datetime.datetime(2000,1,1))
<class 'datetime.datetime'>
</code></pre>
<p>which doesn't match the dtype of IRdaily['Date']. What am I doing wrong? </p>
<p>Many thanks in advance for your guidance </p>
<p>Thomas Philips</p>
|
<p>I think there is a <code>DatetimeIndex</code>, so you need to compare by <code>.index</code>:</p>
<pre><code>print (df.index)
DatetimeIndex(['1990-01-02', '1990-01-03', '1990-01-04', '1990-01-05',
'1990-01-08'],
dtype='datetime64[ns]', freq=None)
</code></pre>
<hr>
<pre><code>IRdaily = IRdaily[IRdaily.index >= datetime.datetime(2000,1,1)]
</code></pre>
<p>Or:</p>
<pre><code>IRdaily = IRdaily[IRdaily.index >= '2000-01-01']
</code></pre>
<hr>
<p>Or use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html" rel="nofollow noreferrer"><code>query</code></a>:</p>
<pre><code>IRdaily = IRdaily.query('Date >= "2000-01-01"')
</code></pre>
|
python-3.x|pandas
| 1 |
1,909,150 | 51,064,919 |
LightGBM Error - length not same as data
|
<p>I am using lightGBM for finding feature importance but I am getting the error <code>LightGBMError: b'len of label is not same with #data'</code>.</p>
<pre><code>X.shape
(73147, 12)
y.shape
(73147,)
</code></pre>
<p>Code:</p>
<pre><code>from sklearn.model_selection import train_test_split
import lightgbm as lgb
# Initialize an empty array to hold feature importances
feature_importances = np.zeros(X.shape[1])
# Create the model with several hyperparameters
model = lgb.LGBMClassifier(objective='binary', boosting_type = 'goss', n_estimators = 10000, class_weight = 'balanced')
# Fit the model twice to avoid overfitting
for i in range(2):
# Split into training and validation set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = i)
# Train using early stopping
model.fit(X, y_train, early_stopping_rounds=100, eval_set = [(X_test, y_test)],
eval_metric = 'auc', verbose = 200)
# Record the feature importances
feature_importances += model.feature_importances_
</code></pre>
<p>See screenshot below:</p>
<p><a href="https://i.stack.imgur.com/JP9j8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JP9j8.png" alt="enter image description here"></a></p>
|
<p>You seem to have a typo in your code; instead of</p>
<pre><code>model.fit(X, y_train, [...])
</code></pre>
<p>it should be</p>
<pre><code>model.fit(X_train, y_train, [...])
</code></pre>
<p>As it is now, it is understandable that the length of <code>X</code> and <code>y_train</code> is not the same, hence your error.</p>
|
python|machine-learning|lightgbm
| 1 |
1,909,151 | 3,423,289 |
Problem using MySQLdb on OSX: symbol not found _mysql_affected_rows
|
<p>This is related to <a href="https://stackoverflow.com/questions/1299013/problem-using-mysqldb-symbol-not-found-mysql-affected-rows">a previous question</a>.</p>
<p>However, the main posted solution there is not working for me. I'm on Snow Leopard, using the 32-bit 5.1.49 MySQL dmg install. <strike>I'm using the built in python</strike> (apparently, as noted in the comments, my Python version is different), which appears to be 2.6.5 32-bit:</p>
<pre>
Python 2.6.5 (r265:79359, Mar 24 2010, 01:32:55)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.maxint
2147483647
</pre>
<p>I've downloaded MySQL-python 1.2.3 from <a href="http://sourceforge.net/projects/mysql-python/" rel="nofollow noreferrer">the usual location</a> and changed site.cfg so that mysql_config points to the right place and the registry_key directive is commented out. The package seems to build and install just fine:</p>
<pre>
caywork:MySQL-python-1.2.3 carl$ python setup.py clean
running clean
caywork:MySQL-python-1.2.3 carl$ python setup.py build
running build
running build_py
copying MySQLdb/release.py -> build/lib.macosx-10.3-fat-2.6/MySQLdb
running build_ext
caywork:MySQL-python-1.2.3 carl$ sudo python setup.py install
running install
running bdist_egg
running egg_info
writing MySQL_python.egg-info/PKG-INFO
writing top-level names to MySQL_python.egg-info/top_level.txt
writing dependency_links to MySQL_python.egg-info/dependency_links.txt
reading manifest file 'MySQL_python.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'MANIFEST'
warning: no files found matching 'ChangeLog'
warning: no files found matching 'GPL'
writing manifest file 'MySQL_python.egg-info/SOURCES.txt'
installing library code to build/bdist.macosx-10.3-fat/egg
running install_lib
running build_py
copying MySQLdb/release.py -> build/lib.macosx-10.3-fat-2.6/MySQLdb
running build_ext
creating build/bdist.macosx-10.3-fat/egg
copying build/lib.macosx-10.3-fat-2.6/_mysql.so -> build/bdist.macosx-10.3-fat/egg
copying build/lib.macosx-10.3-fat-2.6/_mysql_exceptions.py -> build/bdist.macosx-10.3-fat/egg
creating build/bdist.macosx-10.3-fat/egg/MySQLdb
copying build/lib.macosx-10.3-fat-2.6/MySQLdb/__init__.py -> build/bdist.macosx-10.3-fat/egg/MySQLdb
copying build/lib.macosx-10.3-fat-2.6/MySQLdb/connections.py -> build/bdist.macosx-10.3-fat/egg/MySQLdb
creating build/bdist.macosx-10.3-fat/egg/MySQLdb/constants
copying build/lib.macosx-10.3-fat-2.6/MySQLdb/constants/__init__.py -> build/bdist.macosx-10.3-fat/egg/MySQLdb/constants
copying build/lib.macosx-10.3-fat-2.6/MySQLdb/constants/CLIENT.py -> build/bdist.macosx-10.3-fat/egg/MySQLdb/constants
copying build/lib.macosx-10.3-fat-2.6/MySQLdb/constants/CR.py -> build/bdist.macosx-10.3-fat/egg/MySQLdb/constants
copying build/lib.macosx-10.3-fat-2.6/MySQLdb/constants/ER.py -> build/bdist.macosx-10.3-fat/egg/MySQLdb/constants
copying build/lib.macosx-10.3-fat-2.6/MySQLdb/constants/FIELD_TYPE.py -> build/bdist.macosx-10.3-fat/egg/MySQLdb/constants
copying build/lib.macosx-10.3-fat-2.6/MySQLdb/constants/FLAG.py -> build/bdist.macosx-10.3-fat/egg/MySQLdb/constants
copying build/lib.macosx-10.3-fat-2.6/MySQLdb/constants/REFRESH.py -> build/bdist.macosx-10.3-fat/egg/MySQLdb/constants
copying build/lib.macosx-10.3-fat-2.6/MySQLdb/converters.py -> build/bdist.macosx-10.3-fat/egg/MySQLdb
copying build/lib.macosx-10.3-fat-2.6/MySQLdb/cursors.py -> build/bdist.macosx-10.3-fat/egg/MySQLdb
copying build/lib.macosx-10.3-fat-2.6/MySQLdb/release.py -> build/bdist.macosx-10.3-fat/egg/MySQLdb
copying build/lib.macosx-10.3-fat-2.6/MySQLdb/times.py -> build/bdist.macosx-10.3-fat/egg/MySQLdb
byte-compiling build/bdist.macosx-10.3-fat/egg/_mysql_exceptions.py to _mysql_exceptions.pyc
byte-compiling build/bdist.macosx-10.3-fat/egg/MySQLdb/__init__.py to __init__.pyc
byte-compiling build/bdist.macosx-10.3-fat/egg/MySQLdb/connections.py to connections.pyc
byte-compiling build/bdist.macosx-10.3-fat/egg/MySQLdb/constants/__init__.py to __init__.pyc
byte-compiling build/bdist.macosx-10.3-fat/egg/MySQLdb/constants/CLIENT.py to CLIENT.pyc
byte-compiling build/bdist.macosx-10.3-fat/egg/MySQLdb/constants/CR.py to CR.pyc
byte-compiling build/bdist.macosx-10.3-fat/egg/MySQLdb/constants/ER.py to ER.pyc
byte-compiling build/bdist.macosx-10.3-fat/egg/MySQLdb/constants/FIELD_TYPE.py to FIELD_TYPE.pyc
byte-compiling build/bdist.macosx-10.3-fat/egg/MySQLdb/constants/FLAG.py to FLAG.pyc
byte-compiling build/bdist.macosx-10.3-fat/egg/MySQLdb/constants/REFRESH.py to REFRESH.pyc
byte-compiling build/bdist.macosx-10.3-fat/egg/MySQLdb/converters.py to converters.pyc
byte-compiling build/bdist.macosx-10.3-fat/egg/MySQLdb/cursors.py to cursors.pyc
byte-compiling build/bdist.macosx-10.3-fat/egg/MySQLdb/release.py to release.pyc
byte-compiling build/bdist.macosx-10.3-fat/egg/MySQLdb/times.py to times.pyc
creating stub loader for _mysql.so
byte-compiling build/bdist.macosx-10.3-fat/egg/_mysql.py to _mysql.pyc
creating build/bdist.macosx-10.3-fat/egg/EGG-INFO
copying MySQL_python.egg-info/PKG-INFO -> build/bdist.macosx-10.3-fat/egg/EGG-INFO
copying MySQL_python.egg-info/SOURCES.txt -> build/bdist.macosx-10.3-fat/egg/EGG-INFO
copying MySQL_python.egg-info/dependency_links.txt -> build/bdist.macosx-10.3-fat/egg/EGG-INFO
copying MySQL_python.egg-info/top_level.txt -> build/bdist.macosx-10.3-fat/egg/EGG-INFO
writing build/bdist.macosx-10.3-fat/egg/EGG-INFO/native_libs.txt
zip_safe flag not set; analyzing archive contents...
creating 'dist/MySQL_python-1.2.3-py2.6-macosx-10.3-fat.egg' and adding 'build/bdist.macosx-10.3-fat/egg' to it
removing 'build/bdist.macosx-10.3-fat/egg' (and everything under it)
Processing MySQL_python-1.2.3-py2.6-macosx-10.3-fat.egg
Removing /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/MySQL_python-1.2.3-py2.6-macosx-10.3-fat.egg
Copying MySQL_python-1.2.3-py2.6-macosx-10.3-fat.egg to /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages
MySQL-python 1.2.3 is already the active version in easy-install.pth
Installed /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/MySQL_python-1.2.3-py2.6-macosx-10.3-fat.egg
Processing dependencies for MySQL-python==1.2.3
Finished processing dependencies for MySQL-python==1.2.3
</pre>
<p>But when I try to use it, I get this:</p>
<pre>
caywork:MySQL-python-1.2.3 carl$ python -c "import MySQLdb"
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/MySQL_python-1.2.3-py2.6-macosx-10.3-fat.egg/_mysql.py:3: UserWarning: Module _mysql was already imported from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/MySQL_python-1.2.3-py2.6-macosx-10.3-fat.egg/_mysql.pyc, but /Users/carl/Source/MySQL-python-1.2.3 is being added to sys.path
Traceback (most recent call last):
File "", line 1, in
File "MySQLdb/__init__.py", line 19, in
import _mysql
File "build/bdist.macosx-10.3-fat/egg/_mysql.py", line 7, in
File "build/bdist.macosx-10.3-fat/egg/_mysql.py", line 6, in __bootstrap__
ImportError: dlopen(/Users/carl/.python-eggs/MySQL_python-1.2.3-py2.6-macosx-10.3-fat.egg-tmp/_mysql.so, 2): Symbol not found: _mysql_affected_rows
Referenced from: /Users/carl/.python-eggs/MySQL_python-1.2.3-py2.6-macosx-10.3-fat.egg-tmp/_mysql.so
Expected in: dynamic lookup
</pre>
<p>All I can find on the web are older examples of this problem, along with numerous rants about how hard it is to get MySQL-python working on OSX. If anyone can help with this, I would greatly appreciate it!!</p>
|
<p>The only (admittedly kludgy) solution that I ended up getting to work was to use MySQL-python-1.2.2 after applying <a href="http://gist.github.com/511466" rel="nofollow noreferrer">this patch</a>, cobbled together from advice found here (<a href="http://www.mangoorange.com/2008/08/01/installing-python-mysqldb-122-on-mac-os-x/" rel="nofollow noreferrer">http://www.mangoorange.com/2008/08/01/installing-python-mysqldb-122-on-mac-os-x/</a>) and here (<a href="http://flo.nigsch.com/?p=62" rel="nofollow noreferrer">http://flo.nigsch.com/?p=62</a>). Sorry for the lack of links but I don't have enough rep points to post more than one link.</p>
|
python|mysql|macos
| 0 |
1,909,152 | 3,904,054 |
How to catch python syntax errors?
|
<pre><code>try:
pattern=r'<tr><td><a href='(?P<link>[\s\S]*?)'[\s\S]*?><img src='(?P<img>[\s\S]*?)' width='130' height='130'[\s\S]*?/></a></td>'
except:
try:
pattern=r"<tr><td><a href='(?P<link>[\s\S]*?)'[\s\S]*?><img src='(?P<img>[\s\S]*?)' width='130' height='130'[\s\S]*?/></a></td>"
except:
pattern=r"""<tr><td><a href='(?P<link>[\s\S]*?)'[\s\S]*?><img src='(?P<img>[\s\S]*?)' width='130' height='130'[\s\S]*?/></a></td>"""
</code></pre>
<p>I'm writing regular expressions through a tool, and then generating the Python code. There are some situations where I need to use ' or " or """ to wrap the regular expression. I want to try/except the error; if the error is caught, then I can try another. But it didn't work. Any help?</p>
|
<p>You need to escape your quotes inside the RE. In your first line, all the single quotes inside the single-quoted string need to be escaped as <code>\'</code>.</p>
<p>Note also that a quoting mistake like this is a <em>syntax</em> error: it is raised when the file is compiled, before any of your code runs, so a <code>try</code>/<code>except</code> in the same file can never catch it. Don't use a try block to fix your faulty RE. Just do it right the first time.</p>
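<p>For reference, the double-quoted version from your second attempt already needs no escaping at all, since the pattern itself only contains single quotes:</p>
<pre><code># single quotes inside a double-quoted raw string are fine as-is
pattern = r"<tr><td><a href='(?P<link>[\s\S]*?)'[\s\S]*?><img src='(?P<img>[\s\S]*?)' width='130' height='130'[\s\S]*?/></a></td>"
</code></pre>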
|
python|regex|error-handling|try-except
| 0 |
1,909,153 | 3,758,147 |
Easiest way to read/write a file's content in Python
|
<p>In Ruby you can read from a file using <code>s = File.read(filename)</code>. The shortest and clearest I know in Python is</p>
<pre><code>with open(filename) as f:
s = f.read()
</code></pre>
<p>Is there any other way to do it that makes it even shorter (preferably one line) and more readable?</p>
<p>Note: initially I phrased the question as "doing this in a single line of code". As S.Lott pointed out, shorter doesn't necessarily mean more readable, so I rephrased my question just to make clear what I meant. I think the Ruby code is better and more readable not only because it's one line versus two (though that matters as well), but also because it's a class method as opposed to an instance method, which poses no question about who closes the file, how to make sure it gets closed even if an exception is raised, etc. As pointed out in the answers below, you can rely on the GC to close your file (thus making this a one-liner), but that makes the code worse even though it's shorter: not only by being unportable, but by making it unclear.</p>
|
<pre><code>with open('x.py') as f: s = f.read()
</code></pre>
<p>***grins***</p>
|
python
| 178 |
1,909,154 | 50,582,589 |
Truncating milliseconds out of DateTimeIndex
|
<p>When I use <code>pandas.date_range()</code>, I sometimes have timestamp that have lots of milliseconds that I don't wish to keep.</p>
<p>Suppose I do...</p>
<pre><code>import pandas as pd
dr = pd.date_range('2011-01-01', '2011-01-03', periods=15)
>>> dr
DatetimeIndex([ '2011-01-01 00:00:00',
'2011-01-01 03:25:42.857142784',
'2011-01-01 06:51:25.714285824',
'2011-01-01 10:17:08.571428608',
'2011-01-01 13:42:51.428571392',
'2011-01-01 17:08:34.285714176',
'2011-01-01 20:34:17.142857216',
'2011-01-02 00:00:00',
'2011-01-02 03:25:42.857142784',
'2011-01-02 06:51:25.714285824',
'2011-01-02 10:17:08.571428608',
'2011-01-02 13:42:51.428571392',
'2011-01-02 17:08:34.285714176',
'2011-01-02 20:34:17.142857216',
'2011-01-03 00:00:00'],
dtype='datetime64[ns]', freq=None)
</code></pre>
<p>To ignore the current milliseconds, I am forced to do this.</p>
<pre><code>>>> t = []
>>> for item in dr:
... idx = str(item).find('.')
... if idx != -1:
... item = str(item)[:idx]
... t.append(pd.to_datetime(item))
...
>>> t
[Timestamp('2011-01-01 00:00:00'),
Timestamp('2011-01-01 03:25:42'),
Timestamp('2011-01-01 06:51:25'),
Timestamp('2011-01-01 10:17:08'),
Timestamp('2011-01-01 13:42:51'),
Timestamp('2011-01-01 17:08:34'),
Timestamp('2011-01-01 20:34:17'),
Timestamp('2011-01-02 00:00:00'),
Timestamp('2011-01-02 03:25:42'),
Timestamp('2011-01-02 06:51:25'),
Timestamp('2011-01-02 10:17:08'),
Timestamp('2011-01-02 13:42:51'),
Timestamp('2011-01-02 17:08:34'),
Timestamp('2011-01-02 20:34:17'),
Timestamp('2011-01-03 00:00:00')]
</code></pre>
<p>Is there a better way?
I already tried this...</p>
<ol>
<li><code>dr = [ pd.to_datetime(item, format='%Y-%m-%d %H:%M:%S') for item in dr ]</code></li>
</ol>
<p>But it doesn't do anything.</p>
<ol start="2">
<li><code>(pd.date_range('2011-01-01', '2011-01-03', periods=15)).astype('datetime64[s]')</code></li>
</ol>
<p>But it says it can't cast it.</p>
<ol start="3">
<li><code>dr = (dr.to_series()).apply(lambda x:x.replace(microseconds=0))</code></li>
</ol>
<p>But this line doesn't solve my problem, as... </p>
<pre><code>2018-04-17 15:07:04.777777664 gives --> 2018-04-17 15:07:04.000000664
</code></pre>
|
<p>I believe need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DatetimeIndex.floor.html" rel="nofollow noreferrer"><code>DatetimeIndex.floor</code></a>:</p>
<pre><code>print (dr.floor('S'))
DatetimeIndex(['2011-01-01 00:00:00', '2011-01-01 03:25:42',
'2011-01-01 06:51:25', '2011-01-01 10:17:08',
'2011-01-01 13:42:51', '2011-01-01 17:08:34',
'2011-01-01 20:34:17', '2011-01-02 00:00:00',
'2011-01-02 03:25:42', '2011-01-02 06:51:25',
'2011-01-02 10:17:08', '2011-01-02 13:42:51',
'2011-01-02 17:08:34', '2011-01-02 20:34:17',
'2011-01-03 00:00:00'],
dtype='datetime64[ns]', freq=None)
</code></pre>
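<p>As a related note, if you want rounding to the nearest second rather than truncation, <code>DatetimeIndex</code> also provides <code>round('S')</code> and <code>ceil('S')</code> with the same frequency-string interface.</p>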
|
python|pandas
| 2 |
1,909,155 | 50,394,351 |
slack: how to know who voted with reaction
|
<p>How can I know who voted with reaction on the slack? I want to create a bot which gets who voted with reaction as an input, and send message based on that information. Any simple codes for it, like with slackclient or slackbot?</p>
|
<p>You definitely want to look at using the Events API (<a href="https://api.slack.com/events-api" rel="nofollow noreferrer">https://api.slack.com/events-api</a>), specifically the <code>reaction_added</code> event (<a href="https://api.slack.com/events/reaction_added" rel="nofollow noreferrer">https://api.slack.com/events/reaction_added</a>). </p>
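<p>As a rough sketch of what a receiving endpoint might look like (the route name and Flask setup are my own illustration, not something Slack mandates; you still have to register the Request URL and subscribe to the event in your app settings):</p>
<pre><code>from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/slack/events', methods=['POST'])
def slack_events():
    payload = request.get_json()
    # Slack sends a one-time URL verification challenge when you register the endpoint
    if payload.get('type') == 'url_verification':
        return jsonify({'challenge': payload['challenge']})
    event = payload.get('event', {})
    if event.get('type') == 'reaction_added':
        # event['user'] is the voter, event['reaction'] is the emoji name,
        # event['item'] identifies the message that was reacted to
        print(event['user'], 'voted', event['reaction'], 'on', event['item'])
    return '', 200
</code></pre>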
|
python|bots|slack
| 1 |
1,909,156 | 50,485,911 |
Creating new dataframes in loop in python
|
<p>I have been working on a project and I am stuck in a situation where I need to create multiple dataframes, one per value in a list of strings, by filtering another dataframe on a column that contains those same values. I am writing the code as below:</p>
<pre><code>df = pd.DataFrame({'A': range(1, 5), 'B': np.random.randn(4), 'C':['A','A','B','C']})
list = df.C.unique()
list = list.tolist()
for r in list:
exec('df_{}=df[df.C=={}]'.format(r))
</code></pre>
<p>This has been throwing an error saying 'tuple index out of range'. Could anyone please quickly help on this?</p>
|
<p>For the record, the <code>'tuple index out of range'</code> error comes from the <code>exec</code> line: the format string contains two <code>{}</code> placeholders but <code>.format(r)</code> supplies only one argument. That aside, I suggest using a <code>dict</code>, as it does the job more safely than <code>exec</code>:</p>
<pre><code>uniqueC = df.C.unique()
dfs = {'df_{}'.format(r): df[df.C==r] for r in uniqueC}
</code></pre>
<p>Now when you need a certain dataframe just call:</p>
<pre><code>dfs['df_A']
# A B C
#0 1 1.755507 A
#1 2 -0.371027 A
</code></pre>
|
python|pandas|for-loop|dataframe
| 0 |
1,909,157 | 26,915,355 |
Django 1.7 Installed app Import Error
|
<p>I'm writing an app on django 1.7.1</p>
<p>I recently created a VM to set up a development server and moved my app to that VM (my machine was openSUSE and the VM is CentOS 7).</p>
<p>After setting up the database and the Python packages I ran the migrations and started the server. The server starts without any problem.</p>
<p>The problem is when I try to run my script to populate the DB.</p>
<p>I load the settings </p>
<pre><code>sys.path.append('/home/evtdb/FLWeb/FLWeb')
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
django.setup()
</code></pre>
<p>but I got this error:</p>
<pre><code> File "scripts/populate_database.py", line 14, in <module>
django.setup()
File "/usr/lib64/python2.7/site-packages/django/__init__.py", line 21, in setup
apps.populate(settings.INSTALLED_APPS)
File "/usr/lib64/python2.7/site-packages/django/apps/registry.py", line 85, in populate
app_config = AppConfig.create(entry)
File "/usr/lib64/python2.7/site-packages/django/apps/config.py", line 87, in create
module = import_module(entry)
File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
**ImportError: No module named events**
</code></pre>
<p>The settings.py includes the events app in INSTALLED_APPS:</p>
<pre><code>INSTALLED_APPS = (
#'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'events',
'dh5bp',
'django_tables2',
)
</code></pre>
<p>The apps.py and the <code>__init__.py</code> are also set up:</p>
<p><strong>events/apps.py</strong></p>
<pre><code>from django.apps import AppConfig
class EventsConfig(AppConfig):
name = 'events'
verbose_name = "Events"
</code></pre>
<p><strong>events/__init__.py</strong></p>
<pre><code>default_app_config = 'events.apps.EventsConfig'
</code></pre>
<p>I don't know why the server can start running but the script can't.</p>
|
<p>I don't know about Django 1.7, but I got that error because I was using Django 1.5. Once I upgraded to Django 1.9, the problem was solved. It turns out the module <code>django.apps</code> is not available in older versions of Django.</p>
|
python|django
| 0 |
1,909,158 | 61,419,248 |
separate datafiles negative sign and white-space delimiter
|
<p>I am trying to identify both whitespace <code>' '</code> and <code>'-'</code> as column delimiters. My files have the bug of not consistently being separated by a space; example:</p>
<pre><code>8.55500000 42.93079187 -99.98428964 -0.59917942 20.86164814 8.37369433 0.56431509
8.55600000 42.94500503-100.05470144 -0.55062999 20.86380446 8.38865674 0.56429834
8.55700000 42.99565203-100.11651750 -0.54444340 20.87003752 8.39975047 0.55109542
8.55800000 42.99873154-100.07383720 -0.54648262 20.85777962 8.41246904 0.55645774
</code></pre>
|
<p>This is a more complex use of <code>sep</code>, so here is the explanation. Since the separator is missing in some rows, you cannot treat the separator as something between columns; instead the pattern captures the column value itself: an optional <code>-</code> sign followed by consecutive digits and dots. This approach solves the issue; <strong>however</strong>, it creates multiple <code>nan</code> columns (which are dropped). If the file is large in terms of columns and rows, this could lead to memory problems.</p>
<pre><code>from io import StringIO
S = '''
8.500000 42.93079187 -99.98428964 -0.59917942 20.86164814 8.37369433 0.56431509
8.55600000 42.94500503-100.05470144 -0.55062999 20.86380446 8.38865674 0.56429834
8.55700000 42.99565203-100.11651750 -0.54444340 20.87003752 8.39975047 0.55109542
8.55800000 42.99873154-100.07383720 -0.54648262 20.85777962 8.41246904 0.55645774'''
df = pd.read_csv(StringIO(S),
sep='\s*(-?[0-9\.]+)',
engine='python', header=None).dropna(axis=1)
df.head()
# 1 3 5 7 9 11 13
# 0 8.500 42.930792 -99.984290 -0.599179 20.861648 8.373694 0.564315
# 1 8.556 42.945005 -100.054701 -0.550630 20.863804 8.388657 0.564298
# 2 8.557 42.995652 -100.116518 -0.544443 20.870038 8.399750 0.551095
# 3 8.558 42.998732 -100.073837 -0.546483 20.857780 8.412469 0.556458
</code></pre>
|
python|pandas
| 1 |
1,909,159 | 57,817,880 |
Pandas read_sql inconsistent behaviour dependent on driver?
|
<p>When I run a query from my local machine to a SQL server db, data is returned. If I run the same query from a JupyterHub server (using ssh), the following is returned:</p>
<blockquote>
<p>TypeError: 'NoneType' object is not iterable</p>
</blockquote>
<p>Implying it isn't getting any data.</p>
<p>The connection string is OK on both systems (albeit different), because running the same stored procedure works fine on both systems using the connection string -</p>
<p><code>Local= "Driver={SQL Server};Server=DNS-based-address;Database=name;uid=user;pwd=pwd"</code></p>
<p><code>Hub = "DRIVER=FreeTDS;SERVER=IP.add.re.ss;PORT=1433;DATABASE=name;UID=dbuser;PWD=pwd;TDS_Version=8.0"</code></p>
<p>Is there something in the FreeTDS driver that affects chunksize, or means a set nocount is required in the original query as per this <a href="https://stackoverflow.com/questions/30534095/nonetype-object-is-not-iterable-error-in-pandas">NoneType object is not iterable error in pandas</a> - I tried this fix by the way and got nowhere.</p>
|
<p>Are you using <code>pymssql</code>, which is built on top of FreeTDS?</p>
<p>For SQL-Server you could also try the Microsoft JDBC Driver with the python package <code>jaydebeapi</code>: <a href="https://github.com/microsoft/mssql-jdbc" rel="nofollow noreferrer">https://github.com/microsoft/mssql-jdbc</a>.</p>
<pre><code>import pandas as pd
import pymssql
conn = pymssql.connect(
host = r'192.168.254.254',
port = '1433',
user = r'user',
password = r'password',
database = 'DB_NAME'
)
query = """SELECT * FROM db_table"""
df = pd.read_sql(con=conn, sql=query)
</code></pre>
|
python|sql-server|pandas|freetds
| 1 |
1,909,160 | 56,343,783 |
How do I add padding between shapes in kivy?
|
<p>I'm creating multiple shapes automatically, but I have a feeling that the shapes are just overlapping each other. I want to be able to add padding between the shapes so that this is not a problem.</p>
<p>Code:</p>
<pre><code>...
with open("streak.json", "r+") as f:
data = json.load(f)
get_score = data.get(key, {}).get('score')
for x in range(get_score):
self.update_canvas()
def update_canvas(self):
can = self.root.get_screen("three")
with can.ids.my_box.canvas.before:
Color(0,0,0,1)
Line(width=5)
Rectangle(pos=can.pos, size=(30,30))
with can.ids.my_box.canvas:
Color(0, 1, 0, .95, mode='rgba')
Rectangle(pos=can.pos, size=(30,30))
</code></pre>
<p><strong>EDIT</strong></p>
<p>This question has been answered <a href="https://stackoverflow.com/questions/56329981/how-do-i-create-multiple-shapes-relative-to-each-other-in-kivy">How do I create multiple shapes relative to each other in kivy?</a></p>
|
<blockquote>
<p>Rectangle(pos=can.pos, size=(30,30))</p>
</blockquote>
<p>Your Rectangles all have the same position because that's exactly what you set for them. To give them different positions, just pass different values for the <code>pos</code> argument.</p>
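<p>For example, a minimal sketch (reusing the names from your code; the 10-pixel gap is an arbitrary choice) that offsets each square by its index:</p>
<pre><code>size = 30
gap = 10
with can.ids.my_box.canvas:
    Color(0, 1, 0, .95, mode='rgba')
    for i in range(get_score):
        # shift each rectangle right by its index so they sit side by side
        x = can.pos[0] + i * (size + gap)
        Rectangle(pos=(x, can.pos[1]), size=(size, size))
</code></pre>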
|
python|kivy
| 0 |
1,909,161 | 56,086,466 |
How to update choropleth map in Dash
|
<p>I'm making a webapp using Python and Dash; this webapp includes a choropleth map of the world with data based on the selected year. I want to be able to change the year and with that also update the map to match that year. I prefer to do this with the Dash slider, although I would appreciate any other way as well.</p>
<p>I've tried updating other graphs such as line charts with the text input, and that worked, but when I changed it to a choropleth map it stopped updating. It now only creates the map, but updates on it don't show up. I've put some print statements in the update function and they confirmed that it is actually called when I change the input, but the graph just doesn't update.</p>
<h2>The layout: with the dcc.Input I want to update the html.Div 'my-div'</h2>
<pre><code>app.layout = html.Div( children=[
html.H1(
children='UN Sustainable Development goal: Poverty',
style={
'textAlign': 'center',
'color': colors
}
),
dcc.Input(id='my-id',value='30', type='text'),
html.Div(id='my-div')
,
daq.Slider(
id='my-daq-slider',
min=1,
max=sliderlength,
step=1,
),
html.Div(id='slider-output')
], style={
'textAlign': 'center'
})
</code></pre>
<h2>The update part</h2>
<pre><code>@app.callback(
Output('my-div', 'children'),
[Input('my-id', 'value')])
def update_output_div(input_value):
return dcc.Graph(
id='my-div',
figure={'data': [go.Choropleth(
locations = df_pov['Country Code'],
z = df_pov.iloc[:,int(input_value)],
text = df_pov['Country Name'],
autocolorscale = True,
reversescale = False,
marker = go.choropleth.Marker(
line = go.choropleth.marker.Line(
color = 'rgb(180,180,180)',
width = 0.5
)),
colorbar = go.choropleth.ColorBar(
tickprefix = '%',
title = '% below 1.90$ '),
)],
'layout': go.Layout(
title = go.layout.Title(
text = list(df_pov)[int(input_value)]
),
geo = go.layout.Geo(
showframe = False,
showcoastlines = False,
projection = go.layout.geo.Projection(
type = 'equirectangular'
)
),
annotations = [go.layout.Annotation(
x = 0.55,
y = 0.1,
xref = 'paper',
yref = 'paper',
text = 'Source: Kaggle',
showarrow = False
)]
)
}
)
</code></pre>
<p>What I expected: the choropleth to update when changing the text or slider input.
Actual: the map gets created once (with the same function that should update it), but doesn't update.</p>
|
<p>Dash doesn't like new components being loaded in like this. Try initializing your graph by adding an empty graph to the layout like this:</p>
<pre class="lang-py prettyprint-override"><code>html.Div(id='my-div', children=[
dcc.Graph(
id='my-graph-id',
figure=dict(
data=[],
layout={},
),
)
])
</code></pre>
<p>With the graph already on the page, you would have to change the callback so that it updates the <code>figure</code> prop of the <code>Graph</code> instead of the <code>children</code> of the <code>div</code>.</p>
<p>Also, you have multiple IDs that are the same. Dash doesn't like that. Make sure every ID is unique. If all of this still does not work, I may need to ask you to post some more details so I can help further. Good luck!</p>
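<p>Schematically, the callback then looks like this (a sketch; the figure body is the same choropleth dict you already build):</p>
<pre><code>@app.callback(
    Output('my-graph-id', 'figure'),
    [Input('my-id', 'value')])
def update_graph(input_value):
    # build the same go.Choropleth figure dict as before, using input_value
    figure = {'data': [], 'layout': {}}
    return figure
</code></pre>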
|
python|plotly|plotly-dash
| 1 |
1,909,162 | 71,507,755 |
Identifying differing usernames in two data sets
|
<p>Say I have two sets of data in CSVs: one smaller, containing usernames, first names and last names; one larger, with similar data. My objective is to identify rows with the same first and last name but different usernames, and output the two usernames along with the first and last name to another CSV with columns like fname, lname, uname_dataset1, uname_dataset2.</p>
<p>What would be the best way to go about this using Python?</p>
<p>Sample data set 1:</p>
<pre class="lang-none prettyprint-override"><code>"u_last_name","u_first_name","u_middle_name","u_dept","u_title","u_emp_id","u_ad_account","u_ad_email"
"Smith","John","N/A"," TEST DEPARTMENT5","TestJobTitle",,"TestADUName3","testemail5@domain.mail""
"Doe","Jane","N/A","000350 - TEST DEPARTMENT4","TestJobTitle2",,"TestADUName3","testemail4@domain.mail""
"Rogers","Bob",,"003107 - TEST DEPARTMENT2","TestJobTitle3",,"TestUName2","testemail2@domain.mail"
"Adams","Mike",,"003107 - TEST DEPARTMENT","TestJobTitle4",,"j3fox","testemail@domain.mail"
</code></pre>
<p>Sample data set 2:</p>
<pre class="lang-none prettyprint-override"><code>Employee ID,Employee Name Current,Identity Active Directory ID Current,Employee Work Email Address Current,Position Number,Employee Record,Department Code,Department,
10493692,"Potential Last Name Middle Name, First Name",TestADUname,,40812966,0,000303,000303 - TESTDEPARTMENT,
10432545,"LastName2, FName2",TestADUName2,testemail2@mail.test,40837987,1,000314,000314 - TESTDEPARTMENT2,
10470189,"LastName3, PotentialMName FirstName",TestADUname3,testemail2@mail.test,40777394,0,000383,000383 - TESTDEPARTMENT3,
</code></pre>
|
<p>One possibility is to create a dictionary then iterate through the smaller file storing the data into the dictionary using the first and last name as the dictionary key. Then iterate the second csv checking at each line if the first and last combo exists in the dictionary and update it with new information if so.</p>
<p>Rough example...</p>
<pre><code>data = {}
with open("file1.csv") as csv1:
for line in csv1.read().split("\n"):
items = line.split(",")
first = items[0]
last = items[1]
data[(first,last)] = items[2:]
with open("file2.csv") as csv2:
for line in csv2.read().split("\n"):
items = line.split(",")
        key = (items[0],items[1])
        if key in data:
            # extend (not append) keeps the row a flat list of strings,
            # so the ",".join() below works
            data[key].extend(items[2:])
</code></pre>
<p>then you can write the data to another file like any other.</p>
<pre><code>with open("newfile.csv", "wt") as newfile:
for key, value in data.items():
line = ",".join([*key] + value) + "\n"
newfile.write(line)
</code></pre>
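<p>One caveat worth flagging: a plain <code>line.split(",")</code> breaks on quoted fields that themselves contain commas (like <code>"LastName2, FName2"</code> in the second sample file). The standard <code>csv</code> module handles the quoting for you; a sketch:</p>
<pre><code>import csv

with open("file2.csv", newline="") as f:
    for items in csv.reader(f):
        # a quoted field such as "LastName2, FName2" arrives as a single item
        print(items)
</code></pre>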
|
python|csv|data-analysis
| 1 |
1,909,163 | 71,512,823 |
__file__ returns a wrong relative path after changing the current working directory in Python 3.7
|
<p>Under certain circumstances, the variable <code>__file__</code> is relative (see <a href="https://stackoverflow.com/questions/7116889/is-module-file-attribute-absolute-or-relative">this stackoverflow question</a> for details). If the variable is relative and one changes the current working directory, <code>__file__</code> doesn't change accordingly. As a result, the relative path is invalid.</p>
<p>Please see this minimal example:</p>
<pre><code>import os
if __name__ == '__main__':
filepath = __file__
filepath_real = os.path.realpath(filepath)
print('pwd', os.getcwd())
print('relative', filepath, os.path.exists(filepath))
print('absolute', filepath_real, os.path.exists(filepath_real))
os.chdir('..')
print('... changing the working directory ...')
filepath = __file__
filepath_real = os.path.realpath(filepath)
print('pwd', os.getcwd())
print('relative', filepath, os.path.exists(filepath))
print('absolute', filepath_real, os.path.exists(filepath_real))
</code></pre>
<p>Suppose this file is located at <code>/home/user/project/test.py</code>. If I execute <code>test.py</code>, this is the output:</p>
<pre><code>$ cd /home/user/project/
$ python script.py
pwd /home/user/project
relative test.py True
absolute /home/user/project/test.py True
... changing the working directory ...
pwd /home/user
relative test.py False
absolute /home/user/test.py False
</code></pre>
<p>Is it possible to get a correct <code>__file__</code> path (one that exists according to <code>os.path.exists</code>) after changing the working directory in Python 3.7? I am aware that this issue does not exist in higher Python versions (such as 3.9), where <code>__file__</code> always returns an absolute path.</p>
|
<p>Get the <code>__file__</code> path at the beginning of the Python file. That ensures the path is resolved when the file is first imported, before any function that changes the working directory is called.</p>
<p>The interpreter evaluates a file line by line, and a module is evaluated only once no matter how many times you import it, so on the first import all top-level code runs and your path variable is assigned then.</p>
<p>You would only hit a problem if you changed the working directory and then programmatically forced a reload and re-import of the module; otherwise the value stays constant for the lifetime of your program.</p>
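<p>A minimal sketch of that idea:</p>
<pre><code>import os

# resolved once at import time, while the original working directory is still valid
SCRIPT_PATH = os.path.abspath(__file__)

if __name__ == '__main__':
    os.chdir('..')
    print(SCRIPT_PATH, os.path.exists(SCRIPT_PATH))  # still correct after chdir
</code></pre>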
<p><strong>EDIT</strong></p>
<p>below example to show the after you import the module, the <code>__file__</code> does not change after you change the working dir</p>
<pre><code>import os
if __name__ == '__main__':
filepath = os.__file__
filepath_real = os.path.realpath(filepath)
print('pwd', os.getcwd())
print('relative', filepath, os.path.exists(filepath))
print('absolute', filepath_real, os.path.exists(filepath_real))
os.chdir('..')
print('... changing the working directory ...')
filepath = os.__file__
filepath_real = os.path.realpath(filepath)
print('pwd', os.getcwd())
print('relative', filepath, os.path.exists(filepath))
print('absolute', filepath_real, os.path.exists(filepath_real))
</code></pre>
<p><strong>output</strong></p>
<pre><code>pwd /home/gftea/repo/jackygao/eiffel-system/utils
relative /usr/lib/python3.8/os.py True
absolute /usr/lib/python3.8/os.py True
... changing the working directory ...
pwd /home/gftea/repo/jackygao/eiffel-system
relative /usr/lib/python3.8/os.py True
absolute /usr/lib/python3.8/os.py True
</code></pre>
|
python|filepath
| 1 |
1,909,164 | 55,536,915 |
How can I determine which elements in a list are integers and which are strings?
|
<p>In Python 3.6, I need to determine which elements from a list contain integers, floats, or strings.</p>
<p>When I use the <code>type</code> function it only tells me the element is a list, but what is contained inside that element: a string or a number?</p>
<pre><code>my_list=[3],['XX'],[0],[1],[4],['3']
type(my_list[0])
<class 'list'>
</code></pre>
|
<p>You got <code>list</code> because you checked the element at index <code>0</code> in the <code>tuple</code>, which is a <code>list</code></p>
<pre><code>>>> my_list=[3],['XX'],[0],[1],[4],['3'] # my_list is a tuple now :)
>>> [y.__class__ for x in my_list for y in x]
[<type 'int'>, <type 'str'>, <type 'int'>, <type 'int'>, <type 'int'>, <type 'str'>]
>>> [y.__class__.__name__ for x in my_list for y in x]
['int', 'str', 'int', 'int', 'int', 'str']
</code></pre>
|
python|string|list
| 3 |
1,909,165 | 55,402,116 |
Format output from BeautifulSoup
|
<p>From reading the BeautifulSoup documentation I have managed to write a short Python script to scrape a table and print it; however, I can't figure out how to format the output into a table. The end goal is to get football game predictions from the website <a href="https://afootballreport.com/predictions/over-1.5-goals/" rel="nofollow noreferrer">https://afootballreport.com/predictions/over-1.5-goals/</a> and to save them to a text file.</p>
<p>Here is the code I have written so far:</p>
<pre><code>import urllib
import urllib.request
from bs4 import BeautifulSoup
def make_soup(url):
thepage = urllib.request.urlopen(url)
soupdata = BeautifulSoup(thepage, "html.parser")
return soupdata
soup = make_soup("https://afootballreport.com/predictions/over-1.5-goals/")
for record in soup.findAll('tr'):
for data in record.findAll('td'):
print(data.text.strip())
</code></pre>
<p>and this is the output:</p>
<pre><code>03/28
17:30
Iceland Reykjavik Youth Cup
Fjölnir / Vængir U19
Valur / KH U19
Over 1.5
Valur / KH U19 have over 1.5 goals in 100% of their games in the last 2 months (total games 6).
03/28
17:30
Saudi Arabia Pro League
Al Ittifaq
Al Quadisiya
Over 1.5
Al Ittifaq have over 1.5 goals in 100% of their games in the last 2 months (total games 8).
</code></pre>
<p>I want to have it so it has a column for each row: Date, Time, Football League, Hometeam, AwayTeam, Tip, Description.
Like this: </p>
<pre><code>Date, Time, Football League, HomeTeam, AwayTeam, Tip, Description
03/28, 17:30, Iceland Reykjavik Youth Cup, Fjölnir / Vængir U19, Valur / KH U19, Over 1.5, Valur / KH U19 have over 1.5 goals in 100% of their games in the last 2 months (total games 6).
</code></pre>
<p>Would someone be able to help me please?</p>
|
<p>You're doing an awful lot of work. Whenever I see a <code><table></code> tag, I'd first try pandas' <code>.read_html()</code>. It'll do most of the work for you and then you can just manipulate the dataframe as needed.</p>
<pre><code>import pandas as pd
tables = pd.read_html('https://afootballreport.com/predictions/over-1.5-goals/')
table = tables[0]
table[['Date', 'Time']] = table['Home team - Away team'].str.split(' ', expand=True)
table = table.drop(['Home team - Away team'],axis=1)
table = table.rename(columns={"Unnamed: 3":"Description"})
table[['Football League', 'Home Team', 'Away Team']] = table['Tip'].str.split(' ', expand=True)
table = table.drop(['Tip'],axis=1)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>print (table.head(5).to_string())
Logic Description Date Time Football League Home Team Away Team
0 Over 1.5 Valur / KH U19 have over 1.5 goals in 100% of ... 03/28 17:30 Iceland Reykjavik Youth Cup Fjölnir / Vængir U19 Valur / KH U19
1 Over 1.5 Al Ittifaq have over 1.5 goals in 100% of thei... 03/28 17:30 Saudi Arabia Pro League Al Ittifaq Al Quadisiya
2 Over 1.5 Sarreguemines have over 1.5 goals in 100% of t... 03/28 19:00 France National 3 Sarreguemines Strasbourg II
3 Over 1.5 Mons Calpe have over 1.5 goals in 100% of thei... 03/28 19:29 Gibraltar Premier Division Mons Calpe Glacis United
4 Over 1.5 Glacis United have over 1.5 goals in 100% of t... 03/28 19:29 Gibraltar Premier Division Mons Calpe Glacis United
</code></pre>
<p><strong>EDIT:</strong></p>
<p>If you are using Pandas Version 0.24.2</p>
<pre><code>import pandas as pd
tables = pd.read_html('https://afootballreport.com/predictions/over-1.5-goals/')
table = tables[0]
table[['Date', 'Time']] = table['Home team - Away team'].str.split(' ', expand=True)
table = table.drop(['Home team - Away team'],axis=1)
table = table.rename(columns={"Logic":"Description"})
table[['Football League', 'Home Team', 'Away Team']] = table['Home team - Away team.1'].str.split(' ', expand=True)
table = table.drop(['Home team - Away team.1'],axis=1)
</code></pre>
|
python|python-3.x|beautifulsoup
| 1 |
1,909,166 | 55,345,171 |
Remove key:value pair from dictionary based on a condition?
|
<p>I have nested dictionary 'my_dict' as given below. I want to remove common keys from nested dictionary grouped by main key name format.</p>
<pre><code>my_dict = {'abc_1': {'00000000': 0.01555745891946835,
'facility': 0.04667237675840505,
'among': 0.01555745891946835},
'abc_2': {'00000000': 0.01555745891946835,
'before': 0.04667237675840505,
'last': 0.01555745891946835},
'mno_1': {'hello': 0.01555745891946835,
'hola': 0.04667237675840505,
'0000150000': 0.01555745891946835},
'mno_2': {'hello': 0.01555745891946835,
'name': 0.04667237675840505,
'0000150000': 0.01555745891946835},
'oko_1': {'err': 0.01555745891946835,
'error': 0.04667237675840505,
'7812': 0.01555745891946835},
'oko_2': {'9872': 0.01555745891946835,
'error': 0.04667237675840505,
'00': 0.01555745891946835}}
</code></pre>
<p>For example, the common key in the nested dictionaries whose main keys start with abc* is 00000000, so I want to remove this key. Likewise, I want to do this for all of them.
Expected result is given below:</p>
<p>Expected Result: </p>
<pre><code>result_dict = {'abc_1': {'facility': 0.04667237675840505,
'among': 0.01555745891946835},
'abc_2': {'before': 0.04667237675840505,
'last': 0.01555745891946835},
'mno_1': {'hola': 0.04667237675840505},
'mno_2': {'name': 0.04667237675840505},
'oko_1': {'err': 0.01555745891946835,
'7812': 0.01555745891946835},
'oko_2': {'9872': 0.01555745891946835,
'00': 0.01555745891946835}}
</code></pre>
|
<p>First, get all the keys, then filter which keys you wish to keep. Then you can reconstruct the new dict with only the keys to keep:</p>
<pre><code>all_keys = [n for k in my_dict.values() for n in k.keys()]
keys_to_keep = {k for k in all_keys if all_keys.count(k) == 1}
result_dict = {k: {kk: v[kk] for kk in keys_to_keep if kk in v} for k, v in my_dict.items()}
</code></pre>
<p>result:</p>
<pre><code>{'abc_1': {'facility': 0.04667237675840505, 'among': 0.01555745891946835}, 'abc_2': {'before': 0.04667237675840505, 'last': 0.01555745891946835}, 'mno_1': {'hola': 0.04667237675840505}, 'mno_2': {'name': 0.04667237675840505}, 'oko_1': {'err': 0.01555745891946835, '7812': 0.01555745891946835}, 'oko_2': {'9872': 0.01555745891946835, '00': 0.01555745891946835}}
</code></pre>
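<p>As an aside, <code>all_keys.count(k)</code> rescans the whole list for every key; <code>collections.Counter</code> computes the same key set in a single pass (a sketch):</p>
<pre><code>from collections import Counter

counts = Counter(n for k in my_dict.values() for n in k)
keys_to_keep = {k for k, c in counts.items() if c == 1}
</code></pre>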
|
python|python-3.x|dictionary
| 2 |
1,909,167 | 55,220,072 |
Difference in padding integer and string in keras
|
<p>I'm trying to pad a text for a seq2seq model.</p>
<pre><code>from keras_preprocessing.sequence import pad_sequences
x=[["Hello, I'm Bhaskar", "This is Keras"], ["This is an", "experiment"]]
pad_sequences(sequences=x, maxlen=5, dtype='object', padding='pre', value="<PAD>")
</code></pre>
<p>I encounter the following error:</p>
<pre><code>ValueError: `dtype` object is not compatible with `value`'s type: <class 'str'>
You should set `dtype=object` for variable length strings.
</code></pre>
<p>However, when I try to do same for integer it works well.</p>
<pre><code>x=[[1, 2, 3], [4, 5, 6]]
pad_sequences(sequences=x, maxlen=5, padding='pre', value=0)
Output:
array([[0, 0, 1, 2, 3],
[0, 0, 4, 5, 6]], dtype=int32)
</code></pre>
<p>I hope to get output as:</p>
<pre><code>[["<PAD>", "<PAD>", "<PAD>", "Hello, I'm Bhaskar", "This is Keras"], ["<PAD>", "<PAD>","<PAD>", "This is an", "experiment"]]
</code></pre>
|
<p>As suggested by the error, change <code>dtype</code> to <code>object</code> (the type itself, not a string); it will do the job for you.</p>
<pre><code>from keras.preprocessing.sequence import pad_sequences
x=[["Hello, I'm Bhaskar", "This is Keras"], ["This is an", "experiment"]]
pad_sequences(sequences=x, maxlen=5, dtype=object, padding='pre', value="<PAD>")
</code></pre>
<p><strong>Output</strong></p>
<pre><code>array([['<PAD>', '<PAD>', '<PAD>', "Hello, I'm Bhaskar", 'This is Keras'],
['<PAD>', '<PAD>', '<PAD>', 'This is an', 'experiment']],
dtype=object)
</code></pre>
|
python|keras|nlp
| 0 |
1,909,168 | 42,463,969 |
Numeration of groups in regular expression
|
<p>I am learning regular expressions in Python but couldn't find out what the numbering in <code>.group()</code> is based on.</p>
<p>Here is my code:</p>
<pre><code>import re
string = 'suzi sabin joe brandon josh'
print(re.search(r'^.*\b(suzi|sabin|joe|brandon|josh)\b.*$', string).group(0))
# output : suzi sabin joe brandon josh
print(re.search(r'^.*\b(suzi|sabin|joe|brandon|josh)\b.*$', string).group(1))
# output : josh
</code></pre>
<p>I am wondering </p>
<ul>
<li>Why is there only group(1) and not group(1-5)?</li>
<li>Why was josh classified into group(1)?</li>
</ul>
<p>I am thankful for any advice.</p>
|
<p>When you call <code>group(0)</code>, you get the whole matched text, which is the whole string, since your pattern matches from the beginning of the string to the end.</p>
<p>While the regex <em>matches</em> everything, it only <em>captures</em> one name (in group 1; group numbers start at 1 because group 0 is reserved for the whole match). Because the first <code>.*</code> is greedy (it tries to match as much text as possible), it gobbles up the earlier names, and the captured name is the last one, <code>"josh"</code> (and the last <code>.*</code> matches an empty string). The captured name is what you get when you call <code>group(1)</code>.</p>
<p>If you want to separately capture each name, you'll need to do things differently. Probably something like this would work:</p>
<pre><code>print(re.findall(r'\b(suzi|sabin|joe|brandon|josh)\b', string))
</code></pre>
<p>This will print the list <code>['suzi', 'sabin', 'joe', 'brandon', 'josh']</code>. Each name will appear in the output in the same order it appears in the input string, which need not be the same order they were in the pattern. This might not do exactly what you want though, since it will skip over any text that isn't one of the names you're looking for (rather than failing to match anything).</p>
|
python|regex
| 0 |
1,909,169 | 42,285,159 |
Tried to write "cracker" - OFC it doesnt work but i think it could xD
|
<p>I am quite new to Python and I tried to write a "cracker" (not really, just a programme that writes all possible combinations).</p>
<p>I used linecache and then just while loops (far too many).</p>
<p>The idea was to make a dictionary file with a-Z and 0-9 characters and then, using linecache, get the characters and put them together.</p>
<p>(It worked with only 2 characters changing, but when I tried 8 characters...)</p>
<p>As I am new to Python I am not really friends with indentation, but somehow I made it work, BUT...</p>
<p>THE PROBLEM IS IT WILL NEVER DO:</p>
<pre><code>print("ITS HERE")
</code></pre>
<p>................................................................................</p>
<pre><code>import easygui
import time
import linecache
x1=1
x2=1
x3=1
x4=1
x5=1
x6=1
x7=1
x8=0
p=0
while p!=36:
p=p+1
while x1!=36:
while x2!=36:
while x3!=36:
while x4!=36:
while x5!=36:
while x6!=36:
while x7!=36:
while x8!=36:
x8=x8+1
Char1=linecache.getline("Dictionary.txt",x1).rstrip("\n")
Char2=linecache.getline("Dictionary.txt",x2).rstrip("\n")
Char3=linecache.getline("Dictionary.txt",x3).rstrip("\n")
Char4=linecache.getline("Dictionary.txt",x4).rstrip("\n")
Char5=linecache.getline("Dictionary.txt",x5).rstrip("\n")
Char6=linecache.getline("Dictionary.txt",x6).rstrip("\n")
Char7=linecache.getline("Dictionary.txt",x7).rstrip("\n")
Char8=linecache.getline("Dictionary.txt",x8).rstrip("\n")
print(Char1+Char2+Char3+Char4+Char5+Char6+Char7+Char8)
time.sleep(0.25)
if x2==36:
x1=x1+1
x2=0
if x3==36:
x2=x2+1
x3=0
if x4==36:
x3=x3+1
x4=0
if x5==36:
x4=x4+1
x5=0
if x6==36:
x5=x5+1
x6=0
if x7==36:
x6=x6+1
x7=0
if x8==36:
x7=x7+1
x8=0
time.sleep (60000)
</code></pre>
|
<p>So here is the working code; if you have millennia of time, definitely try it xD</p>
<pre><code>import easygui
import time
import linecache
x1=1
x2=1
x3=1
x4=1
x5=1
x6=1
x7=1
x8=1
while x8!=36:
Char1=linecache.getline("AlgoDictionary.txt",x1).rstrip("\n")
Char2=linecache.getline("AlgoDictionary.txt",x2).rstrip("\n")
Char3=linecache.getline("AlgoDictionary.txt",x3).rstrip("\n")
Char4=linecache.getline("AlgoDictionary.txt",x4).rstrip("\n")
Char5=linecache.getline("AlgoDictionary.txt",x5).rstrip("\n")
Char6=linecache.getline("AlgoDictionary.txt",x6).rstrip("\n")
Char7=linecache.getline("AlgoDictionary.txt",x7).rstrip("\n")
Char8=linecache.getline("AlgoDictionary.txt",x8).rstrip("\n")
print(Char1+Char2+Char3+Char4+Char5+Char6+Char7+Char8)
time.sleep (0.1)
x8=x8+1
if x2==36:
x1=x1+1
x2=1
if x3==36:
x2=x2+1
x3=1
if x4==36:
x3=x3+1
x4=1
if x5==36:
x4=x4+1
x5=1
if x6==36:
x5=x5+1
x6=1
if x7==36:
x6=x6+1
x7=1
if x8==36:
x7=x7+1
x8=1
</code></pre>
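<p>For reference, the whole ladder of counters can be replaced by <code>itertools.product</code>, the idiomatic way to enumerate fixed-length combinations (a sketch; still astronomically slow for 36 symbols over 8 positions, roughly 2.8 trillion lines):</p>
<pre><code>import itertools

# read the 36 symbols once instead of via linecache
with open("AlgoDictionary.txt") as f:
    chars = [line.rstrip("\n") for line in f]

for combo in itertools.product(chars, repeat=8):
    print("".join(combo))
</code></pre>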
|
python|cracking|linecache
| 0 |
1,909,170 | 54,144,553 |
How to return html code as a variable in python
|
<p>I try to return html code as variable.
What am I doing wrong?</p>
<p>python:
<code>return render_template("dashboard.html", services = "<h1>Service 1</h1>")</code>
html:
<code>Your services: {{services}}</code></p>
<p>But it gives the services value as text:
<code>Your services: <h1>Service 1</h1>"</code></p>
|
<p>Seems your Jinja2 is in automatic escaping mode. You need to mark your string as "safe" so that it won't be escaped:</p>
<pre><code>{{services|safe}}
</code></pre>
<p>Read more at <a href="http://jinja.pocoo.org/docs/2.10/templates/#html-escaping" rel="nofollow noreferrer">Jinja2 docs</a>.</p>
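<p>Alternatively, mark the string as safe on the Python side before handing it to the template (a sketch; <code>Markup</code> lives in the <code>markupsafe</code> package that Jinja2 depends on):</p>
<pre><code>from markupsafe import Markup

return render_template("dashboard.html", services=Markup("<h1>Service 1</h1>"))
</code></pre>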
|
python|html|flask|jinja2
| 0 |
1,909,171 | 54,093,067 |
Get '_set' for a many-to-many relationship with a through
|
<p>I've got a user model like this:</p>
<pre><code>class Person(AbstractUser):
id = models.AutoField(primary_key=True)
...(additional attributes taken out for brevity)...
kids = models.ManyToManyField(
'Person', through='Relationship', related_name='parents')
</code></pre>
<p>and a Relationship model that looks like this:</p>
<pre><code>class Relationship(models.Model):
parent_id = models.IntegerField()
kid_id = models.IntegerField()
class Meta:
unique_together = ('parent_id', 'kid_id')
</code></pre>
<p>I'm trying to figure out the best way to get a set of the kids related to a particular parent (who would be the ones logged in).</p>
<p>I've got something like this:</p>
<pre><code>user = Person.objects.get(id=request.user.id)
print(user.relationship_set.all())
</code></pre>
<p>But that gives me an error <code>'Person' object has no attribute 'relationship_set'</code></p>
<p>How best would I accomplish this?</p>
|
<p>I ended up going a slightly different route and using this filter:</p>
<pre><code>Person.objects.filter(id__in=[ r.kid_id for r in Relationship.objects.filter( parent_id=person_id ) ])
</code></pre>
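<p>For reference, the same query can be written without the Python-side list comprehension; a sketch using <code>values_list</code>:</p>
<pre><code>kid_ids = Relationship.objects.filter(parent_id=person_id).values_list('kid_id', flat=True)
kids = Person.objects.filter(id__in=kid_ids)
</code></pre>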
|
python|django|django-models
| 0 |
1,909,172 | 58,492,297 |
How to keep main loop running after running a process by multiprocessing
|
<p>My main function runs an infinite while loop that identifies objects by attributing a number to the object. This loop shall not be blocked. Depending on the value of this number a separate process shall be polling that variable within a while loop that exits only on a certain value of the variable.</p>
<p>I have tried to use multiprocessing for that polling task, but when the new process is started it apparently stops the main loop, so that the object's number is not being changed.
To avoid problems with the scope of the variable I set a GPIO pin to 0 or 1 in the main loop when a certain object is detected. The GPIO pin is read within a while-loop inside the started process, but it remains unchanged when the object changes.</p>
<p>How to keep the main while-loop running when the process is running? </p>
<pre><code>def input_polling():
print(rpigpio.input(Pi_kasse)," ..................")
condition=True
while condition:
print(RPi.GPIO(12))
def main(args):
....
inThread=multiprocessing.Process(target=input_polling,args=[])
....
while True:
print("311")
inThread=multiprocessing.Process(target=input_polling,args=[])
inThread.start()
....
If object==3
Rpi.GPIO.output(Object,Low)
else
Rpi.GPIO.output(Object,HIGH)
inThread.terminate()
inThread.join()
if __name__ == '__main__':
sys.exit(main(sys.argv))
</code></pre>
|
<p>You should use queues to transfer your variable to your sub-process, and in the process use a 'STOP' value as a sentinel.</p>
<pre><code>import multiprocessing as mp
def input_polling(in_queue):
print(rpigpio.input(Pi_kasse)," ..................")
    # polls the input queue and stops when "STOP" is sent
    # will block until an element becomes available
for a in iter(in_queue.get, 'STOP'):
print("Doing stuff")
def main(args):
....
in_queue = mp.Queue()
inThread=multiprocessing.Process(target=input_polling,args=[in_queue])
inThread.start()
while True:
print("311")
....
If object==3
in_queue.put(your_number)
# when done put 'STOP' into queue and wait for process to terminate
in_queue.put('STOP')
    # terminate() is not needed: the sentinel lets the worker exit on its own
    inThread.join()
if __name__ == '__main__':
sys.exit(main(sys.argv))
</code></pre>
|
python|multiprocessing
| 0 |
1,909,173 | 58,452,861 |
What does return do in this function
|
<p>Shouldn't a return statement return something? I mean, isn't returning something the reason <code>return</code> is here?</p>
<pre><code>def countdown(x):
if x == 0:
print("Done!!")
return
else:
print(x,"...")
countdown(x-1)
print('fooo')
countdown(10)
</code></pre>
|
<p>It returns <code>None</code> and as a consequence just <em>exits</em>/<em>stops the execution</em> of the current function. </p>
<p>It is not mandatory for <em>return</em> to actually return something meaningful. In case of this specific code, it would work the same with or without the <code>return</code>.</p>
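<p>You can see the <code>None</code> directly by printing the call's result:</p>
<pre><code>print(countdown(0))
# Done!!
# None
</code></pre>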
|
python-3.x
| 1 |
1,909,174 | 22,771,579 |
Filtering all inner text from a HTML document
|
<p>I want to take a large HTML document and strip away all the inner text between all tags. Everything I seem to find is about extracting text from the HTML. All I want is the raw HTML tags with their attributes intact. How would one go about filtering out the text?</p>
|
<p>Find all text with <code>soup.find_all(text=True)</code>, and <code>.extract()</code> on each text element to remove it from the document:</p>
<pre><code>for textelement in soup.find_all(text=True):
textelement.extract()
</code></pre>
<p>Demo:</p>
<pre><code>>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup('''\
... <html><body><p>Hello world!<p>
... <div><ul><li>This is all
... </li><li>Set to go!</li></ul></div>
... </body></html>''')
>>> soup
<html><body><p>Hello world!</p><p>
</p><div><ul><li>This is all
</li><li>Set to go!</li></ul></div>
</body></html>
>>> for textelement in soup.find_all(text=True):
... textelement.extract()
...
u'Hello world!'
u'\n'
u'This is all\n'
u'Set to go!'
u'\n'
>>> print soup.prettify()
<html>
<body>
<p>
</p>
<p>
</p>
<div>
<ul>
<li>
</li>
<li>
</li>
</ul>
</div>
</body>
</html>
</code></pre>
|
python|html|beautifulsoup
| 1 |
1,909,175 | 22,672,108 |
merging in pandas vs merging in R
|
<p>I'm afraid I do not quite understand the merging capabilities of pandas, although I much prefer python over R for now.</p>
<p>In R, I have always been able to merge dataframes very easily as follows:</p>
<pre><code>> merge(test,e2s, all.x=T)
Gene Mutation Chromosome Entrez
1 AGRN p.R451H chr1 375790
2 C1orf170 p.V663A/V683A chr1 84808
3 HES4 p.R44S chr1 57801
4 ISG15 p.S83N chr1 9636
5 PLEKHN1 p.S476P/S511P/S563P/S76P chr1 84069
</code></pre>
<p>However, I have been unable to reconstruct this in pandas with merge(how="left,right,inner,outer"). For example:</p>
<pre><code>Outer yields a union, which makes sense:
x = test.merge(e2s, how="outer")
In [133]: x.shape
Out[133]: (46271, 4)
</code></pre>
<p>But inner yields an empty dataframe, even though <code>Entrez_Gene_Id</code> has been merged successfully:</p>
<pre><code>In [143]: x = test.merge(e2s, how="inner")
In [144]: x
Out[144]:
Empty DataFrame
Columns: [Gene, Mutation, Chromosome, Entrez_Gene_Id]
Index: []
[0 rows x 4 columns]
</code></pre>
<p>The intersection should contain one row with the gene <code>HES4</code>. Is there some sort of string matching I need to turn on for this?</p>
<p>e2s:</p>
<pre><code>57794 SUGP1
57795 BRINP2
57796 DKFZP761C1711
57798 GATAD1
57799 RAB40C
57801 HES4
57804 POLD4
57805 CCAR2
57817 HAMP
</code></pre>
<p>test:</p>
<pre><code> Gene Mutation Chromosome
0 PLEKHN1 p.S476P/S511P/S563P/S76P chr1
1 C1orf170 p.V663A/V683A chr1
2 HES4 p.R44S chr1
3 ISG15 p.S83N chr1
4 AGRN p.R451H chr1
5 RNF223 p.P242H chr1
</code></pre>
<p><strong>Update:</strong></p>
<p>As far as I know the columns are labelled so that they should merge fine; I only want to merge by the <code>Gene</code> column and keep all test rows:</p>
<pre><code>In [148]: e2s.columns
Out[148]: Index([u'Gene', u'Entrez_Gene_Id'], dtype='object')
In [149]: test.columns
Out[149]: Index([u'Gene', u'Mutation', u'Chromosome'], dtype='object')
</code></pre>
<p>This was done by explicitly renaming the dataframes:</p>
<pre><code>e2s.rename(columns={"Gene":u'Gene',"Entrez_Gene_Id":u'Entrez_Gene_Id'}, inplace=True)
</code></pre>
<p>to dict:</p>
<pre><code>{u'Chromosome': {0: u'chr1',
1: u'chr1',
2: u'chr1',
3: u'chr1',
4: u'chr1',
5: u'chr1'},
u'Gene': {0: u'PLEKHN1',
1: u'C1orf170',
2: u'HES4',
3: u'ISG15',
4: u'AGRN',
5: u'RNF223'},
u'Mutation': {0: u'p.S476P/S511P/S563P/S76P',
1: u'p.V663A/V683A',
2: u'p.R44S',
3: u'p.S83N',
4: u'p.R451H',
5: u'p.P242H'}}
{u'Entrez_Gene_Id': {14118: u'SUGP1',
14119: u'BRINP2',
14120: u'DKFZP761C1711',
14121: u'GATAD1',
14122: u'RAB40C',
14123: u'HES4',
14124: u'POLD4',
14125: u'CCAR2',
14126: u'HAMP'},
u'Gene': {14118: 57794,
14119: 57795,
14120: 57796,
14121: 57798,
14122: 57799,
14123: 57801,
14124: 57804,
14125: 57805,
14126: 57817}}
</code></pre>
|
<p>Perhaps you haven't labelled the columns (this is needed otherwise how can you know which columns to use to match against!)</p>
<p>It works fine if they are both are frames with labelled columns:</p>
<pre><code>In [11]: e2s
Out[11]:
number Gene
0 57794 SUGP1
1 57795 BRINP2
2 57796 DKFZP761C1711
3 57798 GATAD1
4 57799 RAB40C
5 57801 HES4
6 57804 POLD4
7 57805 CCAR2
8 57817 HAMP
In [12]: test
Out[12]:
Gene Mutation Chromosome
0 PLEKHN1 p.S476P/S511P/S563P/S76P chr1
1 C1orf170 p.V663A/V683A chr1
2 HES4 p.R44S chr1
3 ISG15 p.S83N chr1
4 AGRN p.R451H chr1
5 RNF223 p.P242H chr1
In [13]: e2s.merge(test)
Out[13]:
number Gene Mutation Chromosome
0 57801 HES4 p.R44S chr1
In [14]: test.merge(e2s)
Out[14]:
Gene Mutation Chromosome number
0 HES4 p.R44S chr1 57801
</code></pre>
|
python|r|join|merge|pandas
| 1 |
1,909,176 | 22,575,172 |
python module import syntax
|
<p>I'm teaching myself Python (I have experience in other languages).</p>
<p>I found a way to import a "module". In PHP, this would just be named an include file. But I guess Python names it a module. I'm looking for a simple, best-practices approach. I can get fancy later. But right now, I'm trying to keep it simple while not developing bad habits. Here is what I did:</p>
<ol>
<li><p>I created a blank file named <code>__init__.py</code>, which I stored in Documents (the folder on the Mac)</p></li>
<li><p>I created a file named myModuleFile.py, which I stored in Documents</p></li>
<li><p>In myModuleFile.py, I created a function:</p>
<pre><code>def myFunction():
print("hello world")
</code></pre></li>
<li><p>I created another file: myMainFile.py, which I stored in Documents</p></li>
<li><p>In this file, I typed the following:</p>
<pre><code>import myModuleFile.py
myModuleFile.myFunction()
</code></pre></li>
</ol>
<p>This successfully printed out "hello world" to the console when I ran it on the terminal.</p>
<p>Is this a best-practices way to do this for my simple current workflow? </p>
<p>I'm not sure the dot notation means I'm onto something good or something bad. It throws an error if I try to use myFunction() instead of myModuleFile.myFunction(). I kind of think it would be good. If there were a second imported module, it would know to call myFunction() from myModuleFile rather than the other one. So the dot notation makes everybody know exactly which file you are trying to call the function from.</p>
<p>I think there is some advanced stuff using sys or some sort of exotic configuration stuff. But I'm hoping my simple little way of doing things is ok for now.</p>
<p>Thanks for any clarification on this.</p>
|
<p>For your import you don't need the ".py" extension</p>
<p>You can use:</p>
<pre><code>import myModuleFile
myModuleFile.myFunction()
</code></pre>
<p>Or</p>
<pre><code>from myModuleFile import myFunction
myFunction()
</code></pre>
<p>Last syntax is common if you import several functions or globals of your module.</p>
<p>Besides, to guard the "main" code, I'd put this in your main file:</p>
<pre><code>from myModuleFile import myFunction
if __name__ == '__main__':
myFunction()
</code></pre>
<p>Otherwise the main code could be executed in imports or other cases.</p>
<p>I'd use just one module for myModuleFile.py and myMainFile.py; using the previous pattern lets you know whether your module is called from the command line or as an import.</p>
<p>Lastly, I'd change the name of your files to avoid the CamelCase, that is, I'd replace <code>myModuleFile.py</code> by <code>my_module.py</code>. Python loves the lowercase ;-)</p>
|
python|syntax|import|module
| 3 |
1,909,177 | 22,756,324 |
How to avoid e-05 in python
|
<p>I have a Python program where I calculate some probabilities, but in some of the answers I get e.g. 8208e-06, and I want my numbers to come out in the regular form 0.000008... How do I do this?</p>
|
<p>You can use the <code>f</code> format specifier and specify the number of decimal digits:</p>
<pre><code>>>> '{:.10f}'.format(1e-10)
'0.0000000001'
</code></pre>
<p>Precision defaults to <code>6</code>, so:</p>
<pre><code>>>> '{:f}'.format(1e-6)
'0.000001'
>>> '{:f}'.format(1e-7)
'0.000000'
</code></pre>
<hr>
<p>If you want to remove trailing zeros just <a href="https://docs.python.org/2/library/stdtypes.html#str.rstrip"><code>rstrip</code></a> them:</p>
<pre><code>>>> '{:f}'.format(1.1)
'1.100000'
>>> '{:f}'.format(1.1).rstrip('0')
'1.1'
</code></pre>
<hr>
<p>By default when converting to string using <code>str</code> (or <code>repr</code>) python returns the shortest possible representation for the given float, which may be the exponential notation.</p>
<p>This is usually what you want, because the decimal representation may display the representation error which user of applications usually don't want and don't care to see:</p>
<pre><code>>>> '{:.16f}'.format(1.1)
'1.1000000000000001'
>>> 1.1
1.1
</code></pre>
|
python|int|decimal
| 20 |
1,909,178 | 28,693,320 |
What's the best way of distinguishing bools from numbers in Python?
|
<p>I have an application where I need to be able to distinguish between numbers and bools as quickly as possible. What are the alternatives apart from running <code>isinstance(value, bool)</code> first?</p>
<p>Edit:
Thanks for the suggestions. Actually, what I want is a check for numbers that leaves out bools, so that I can reorder my checks (numbers are far more prevalent) and improve my negative calls. <code>isinstance()</code> itself is fast enough. The <code>x is True or x is False</code> is intriguing.</p>
|
<p>So, Padraic Cunningham suggests that the following might be a bit faster. My own quick experiments with <code>cProfile</code>-ing it haven't shown any difference:</p>
<pre><code>isbool = value is True or value is False
</code></pre>
<p>I assume that's as fast as you can get: Two non-type-coercing comparisons.</p>
<p><strong>Edit:</strong> I replayed the timing tests from @user 5061 and added my statement. This is my result:</p>
<pre><code>>>> import timeit
>>> stmt1 = "isinstance(123, bool)"
>>> stmt2 = "123 is True or 123 is False"
>>> t1 = timeit.timeit(stmt1)
>>> t2 = timeit.timeit(stmt2)
>>> print t1
0.172112941742
>>> print t2
0.0690350532532
</code></pre>
<p><strong>Edit 2:</strong> Note that I'm using Python 2.7 here. @user 5061 might be using Python 3 (telling from the <code>print()</code> function), so any solution provided here should be tested by the OP before putting it in production, since YMMV.</p>
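<p>And since the edit to the question says the end goal is a number check that leaves out bools, one possible sketch (identity checks first, since they are the cheapest) would be:</p>
<pre><code>def is_number(x):
    # reject the two bool singletons first, then test the numeric types
    # (on Python 2 you may also want long in the tuple)
    return x is not True and x is not False and isinstance(x, (int, float, complex))
</code></pre>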
|
python
| 5 |
1,909,179 | 14,639,121 |
Row-wise scaling with Numpy
|
<p>I have an array H of dimension MxN, and an array A of dimension M. I want to scale the rows of H with the array A. I do it this way, taking advantage of the element-wise behaviour of NumPy:</p>
<pre><code>H = numpy.swapaxes(H, 0, 1)
H /= A
H = numpy.swapaxes(H, 0, 1)
</code></pre>
<p>It works, but the two swapaxes operations are not very elegant, and I feel there is a more elegant and concise way to achieve the result, without creating temporaries. Could you tell me how?</p>
|
<p>I think you can simply use <code>H/A[:,None]</code>:</p>
<pre><code>In [71]: (H.swapaxes(0, 1) / A).swapaxes(0, 1)
Out[71]:
array([[ 8.91065496e-01, -1.30548362e-01, 1.70357901e+00],
[ 5.06027691e-02, 3.59913305e-01, -4.27484490e-03],
[ 4.72868136e-01, 2.04351398e+00, 2.67527572e+00],
[ 7.87239835e+00, -2.13484271e+02, -2.44764975e+02]])
In [72]: H/A[:,None]
Out[72]:
array([[ 8.91065496e-01, -1.30548362e-01, 1.70357901e+00],
[ 5.06027691e-02, 3.59913305e-01, -4.27484490e-03],
[ 4.72868136e-01, 2.04351398e+00, 2.67527572e+00],
[ 7.87239835e+00, -2.13484271e+02, -2.44764975e+02]])
</code></pre>
<p>because <code>None</code> (or <code>newaxis</code>) extends <code>A</code> in dimension <a href="http://www.scipy.org/Numpy_Example_List#newaxis" rel="noreferrer">(example link)</a>:</p>
<pre><code>In [73]: A
Out[73]: array([ 1.1845468 , 1.30376536, -0.44912446, 0.04675434])
In [74]: A[:,None]
Out[74]:
array([[ 1.1845468 ],
[ 1.30376536],
[-0.44912446],
[ 0.04675434]])
</code></pre>
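<p>And if you want to avoid allocating a new array, the same broadcasting works for in-place division (assuming <code>H</code> has a float dtype):</p>
<pre><code>H /= A[:, None]  # divides each row of H by the matching entry of A, in place
</code></pre>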
|
python|numpy
| 5 |
1,909,180 | 41,490,526 |
What are solvers callback functions in pycaffe, and how can I use them?
|
<p>Looking at <a href="https://github.com/BVLC/caffe/pull/3020" rel="nofollow noreferrer">this PR</a>, I see that one can define <code>on_start</code> and <code>on_gradient</code> callbacks for <code>caffe.Solver</code> object.</p>
<pre><code>import caffe
solver = caffe.AdamSolver('solver.prototxt')
solver.add_callback(on_start, on_gradient) # <- ??
</code></pre>
<p>What type of objects are <code>on_start</code> and <code>on_gradient</code>?<br>
What are these callbacks for?<br>
How can one use them (an example would be nice...)?</p>
|
<p><strong>1. Where and how are the callbacks defined?</strong></p>
<p>The callbacks are part of the Solver, and are thus defined in the <a href="https://github.com/BVLC/caffe/blob/master/include/caffe/solver.hpp" rel="nofollow noreferrer"><code>solver.hpp</code></a> file. To be exact, there is a <a href="https://github.com/BVLC/caffe/blob/master/include/caffe/solver.hpp#L78" rel="nofollow noreferrer"><code>Callback</code></a> class, which looks like this:</p>
<pre><code> // Invoked at specific points during an iteration
class Callback {
protected:
virtual void on_start() = 0;
virtual void on_gradients_ready() = 0;
template <typename T>
friend class Solver;
};
const vector<Callback*>& callbacks() const { return callbacks_; }
void add_callback(Callback* value) {
callbacks_.push_back(value);
}
</code></pre>
<p>and a <a href="https://github.com/BVLC/caffe/blob/master/include/caffe/solver.hpp#L117" rel="nofollow noreferrer">protected vector</a> of such callbacks, which is a member of the <code>Solver</code> class.</p>
<pre><code> vector<Callback*> callbacks_;
</code></pre>
<p>So, this basically provides an <code>add_callback</code> function to the <code>Solver</code> class, which allows one to add an object of type <code>Callback</code> to a vector. This makes sure that each callback has two methods: <code>on_start()</code> and <code>on_gradients_ready()</code>.</p>
<p><strong>2. Where are the callbacks called?</strong></p>
<p>This happens in the <a href="https://github.com/BVLC/caffe/blob/master/src/caffe/solver.cpp" rel="nofollow noreferrer"><code>solver.cpp</code></a> file, in the <a href="https://github.com/BVLC/caffe/blob/master/src/caffe/solver.cpp#L194" rel="nofollow noreferrer"><code>step()</code></a> function, which contains the main worker loop. Here's that main loop part (with lots of things stripped out for simplicity):</p>
<pre><code>while (iter_ < stop_iter) {
for (int i = 0; i < callbacks_.size(); ++i) {
callbacks_[i]->on_start();
}
// accumulate the loss and gradient
Dtype loss = 0;
for (int i = 0; i < param_.iter_size(); ++i) {
loss += net_->ForwardBackward();
}
loss /= param_.iter_size();
for (int i = 0; i < callbacks_.size(); ++i) {
callbacks_[i]->on_gradients_ready();
}
ApplyUpdate();
++iter_;
}
</code></pre>
<p><strong>3. Where is this used?</strong></p>
<p>This callback feature was implemented when multi-GPU support was added. The only place (that I know of) where callbacks are used is to synchronize the solver between multiple GPUs:</p>
<p>The <a href="https://github.com/BVLC/caffe/blob/master/include/caffe/parallel.hpp#L85" rel="nofollow noreferrer"><code>P2PSync</code></a> class in <a href="https://github.com/BVLC/caffe/blob/master/include/caffe/parallel.hpp" rel="nofollow noreferrer"><code>parallel.hpp</code></a> inherits from the <code>Solver::Callback</code> class, and implements an <a href="https://github.com/BVLC/caffe/blob/master/src/caffe/parallel.cpp#L287" rel="nofollow noreferrer"><code>on_start()</code></a> and <a href="https://github.com/BVLC/caffe/blob/master/src/caffe/parallel.cpp#L325" rel="nofollow noreferrer"><code>on_gradients_ready()</code></a> method, which synchronize the GPUs and finally accumulate all the gradient updates.</p>
<p><strong>4. How to use callbacks from Python?</strong></p>
<p>As the pull request <a href="https://github.com/BVLC/caffe/pull/3020" rel="nofollow noreferrer">#3020</a> explains,</p>
<blockquote>
<p><code>on_start</code> and <code>on_gradient</code> are python functions.</p>
</blockquote>
<p>so it should be straightforward to use. A full, runnable example is shown in <a href="https://gist.github.com/HBadertscher/9c38dfebc91186887f11a4c2c80ead52#file-test_callbacks-py" rel="nofollow noreferrer">this Github Gist</a> I created. </p>
<p><strong>5. How is this useful?</strong></p>
<p>As the two callback functions do not take <em>any</em> arguments, you can't simply use them to keep track of the loss or similar things. To do that, you have to create a wrapper function around the <code>Solver</code> class, and call <code>add_callback</code> with two methods as callback functions. This allows you to access the net from within the callback, by using <code>self.solver.net</code>. In the following example, I use the <code>on_start</code> callback to load data into the net, and the <code>on_gradients_ready</code> callback to print the loss function.</p>
<pre><code>class SolverWithCallback:
def __init__(self, solver_file):
self.solver = caffe.SGDSolver(solver_file)
self.solver.add_callback(self.load, self.loss)
def solve(self):
self.solver.solve()
def load(self):
inp = np.random.randint(0, 255)
self.solver.net.blobs['data'].data[...] = inp
self.solver.net.blobs['labels'].data[...] = 2 * inp
def loss(self):
print "loss: " + str(self.solver.net.blobs['loss'].data)
if __name__=='__main__':
solver = SolverWithCallback('solver.prototxt')
solver.solve()
</code></pre>
|
python|neural-network|deep-learning|caffe|pycaffe
| 3 |
1,909,181 | 25,896,398 |
Running an iPython notebook server on an EC2 Ubuntu instance via Docker
|
<p>I am trying to run an iPython notebook server via Docker on an EC2 Ubuntu instance. I have enabled all incoming HTTP connections on port 80, SSH connections on port 22 and custom TCP connections on port 8888.</p>
<p>I installed docker using</p>
<p><code>sudo apt-get install docker.io</code></p>
<p>Then I pulled the ipython/notebook repository</p>
<p><code>sudo docker pull ipython/scipyserver</code></p>
<p>However, I am unable to deploy the notebook. I tried</p>
<pre><code>sudo docker run -d -p 54.187.44.99:8888:8888 -e "PASSWORD=<your password>" ipython/scipyserver
</code></pre>
<p>where 54.187.44.99 is the public IP of my aws ec2 instance.</p>
<p>This gives me the following error -</p>
<pre><code>2014/09/17 17:00:09 Error response from daemon: Cannot start container 5c9e1f998606d90b93a2652e9998373c3a200e3cf2f219bb8f5c4e03f429bfdc: port has already been allocated
</code></pre>
<p>However, the port 8888 is not being used on the host machine. I used netstat to verify this.</p>
<p>Could someone more knowledgeable please guide me where I am going wrong? Thanks.</p>
|
<p>Try to listen on <strong>0.0.0.0</strong>, because if the EC2 instance is inside a VPC you will not be able to see the public IP in the network interface list.</p>
<pre><code>sudo docker run -d -p 0.0.0.0:8888:8888 -e "PASSWORD=<your password>" ipython/scipyserver
</code></pre>
<p>or simply...</p>
<pre><code>sudo docker run -d -p 8888:8888 -e "PASSWORD=<your password>" ipython/scipyserver
</code></pre>
|
ubuntu|networking|amazon-ec2|ipython|docker
| 0 |
1,909,182 | 25,810,855 |
To Count the number of columns in a CSV file using python 2.4
|
<p>I want to count the total number of columns in a CSV file. Currently I am using Python 2.7 and 3.4. The code works perfectly in these versions, but when I try to implement the same thing in Python 2.4, it reports that next() is not defined.</p>
<p>Code i am using currently(2.7 and 3.4)</p>
<pre><code>f = open(sys.argv[1],'r')
reader = csv.reader(f,delimiter=d)
num_cols = len(next(reader)) # Read first line and count columns
</code></pre>
<p>My strong need is to implement the same in <strong>Python 2.4</strong>. Any help would be greatly appreciated.</p>
|
<p>I do not have Python 2.4 installed at the moment, so I can not really test this.</p>
<p>According to the documentation, the <a href="https://docs.python.org/2/library/functions.html#next" rel="nofollow"><code>next</code> builtin is new in Python 2.6</a>. However, the <code>csv.reader</code> has a <a href="https://docs.python.org/2/library/csv.html#csv.csvreader.next" rel="nofollow"><code>next</code> method of its own</a>, and that one seems to have existed even in 2.4, so you should be able to use this.</p>
<pre><code>num_cols = len(reader.next())
</code></pre>
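<p>Putting it together, a minimal sketch of your snippet for 2.4 (assuming <code>d</code> holds your delimiter, as in your code) would be:</p>
<pre><code>import csv
import sys

f = open(sys.argv[1], 'r')
reader = csv.reader(f, delimiter=d)
num_cols = len(reader.next())  # reader.next() instead of next(reader) on Python 2.4
f.close()
</code></pre>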
|
python|python-2.7|csv|python-2.4
| 3 |
1,909,183 | 44,730,175 |
Pandas - Concat multiindex with single index
|
<p>I have a dataframe that looks like this:</p>
<pre><code> Qty
Year Month
2017 Jan 1
Feb 2
2016 Jan 7
Feb 4
</code></pre>
<p>and <code>df.groupby(level = 0).sum()</code> gives me this:</p>
<pre><code> Qty
Year
2017 3
2016 11
</code></pre>
<p>and I want to produce this:</p>
<pre><code> Qty
Year Month
2017 Jan 1
Feb 2
2017 Total 3
2016 Jan 7
Feb 4
2016 Total 11
</code></pre>
<p>Where the value of the <code>Month</code> index is an empty string. <code>concat</code> doesn't quite work how I want; it gives:</p>
<pre><code> Qty
(2017, Jan) 1
(2017, Feb) 2
(2016, Jan) 7
(2016, Feb) 4
2017 3
2016 11
</code></pre>
|
<p>Try this:</p>
<pre><code>In [59]: df.append(df.groupby(level=0).sum().reset_index().assign(Month='Total') \
.set_index(['Year','Month'])) \
.sort_index()
Out[59]:
Qty
Year Month
2016 Feb 4
Jan 7
Total 11
2017 Feb 2
Jan 1
Total 3
</code></pre>
|
pandas
| 1 |
1,909,184 | 44,415,744 |
How do I change the colour of my button and label on Tkinter?
|
<p>My task is to create a label and button on Tkinter. The button has to change the label, and I have to change the colour of the button and the label. I have changed the colour of the background but I can't figure out how to do the same for the label and button.</p>
<pre><code>from tkinter import *
from tkinter import ttk
def change():
print("change functon called")
def main():
rootWindow = Tk()
rootWindow.geometry('400x400')
rootWindow.configure(bg="red")
global Label
label = ttk.Label( rootWindow, text="Hello World!" )
label.pack()
button1 = ttk.Button( rootWindow, text="Change Label",
command=change)
button1.pack()
rootWindow.mainloop()
main()
</code></pre>
|
<p>Configuring a button's colors is a bit different for a plain <code>tkinter</code> button vs. a <code>ttk</code> styled button.</p>
<p>For a tkinter button you would use the background = "color" argument like the following:</p>
<pre><code>button1 = Button( rootWindow, text="Change Label",
background = 'black', foreground = "white", command=change)
</code></pre>
<p>For a <code>ttk</code> button you would configure the style and then use the <code>style = "style name"</code> argument like the following.</p>
<pre><code>style = ttk.Style()
style.configure("BW.TLabel", foreground="white", background="black")
buttonTTK = ttk.Button( rootWindow, text="TTK BUTTON",style = "BW.TLabel", command=change)
</code></pre>
<p>More information on <code>ttk</code> configs can be found <a href="https://docs.python.org/3/library/tkinter.ttk.html" rel="nofollow noreferrer">here</a></p>
<pre><code>from tkinter import *
from tkinter import ttk
def change():
print("change functon called")
def main():
rootWindow = Tk()
label = ttk.Label( rootWindow, text="Hello World!",
background = 'black', foreground = "white")
label.pack()
button1 = Button( rootWindow, text="Change Label",
background = 'black', foreground = "white", command=change)
button1.pack()
style = ttk.Style()
style.configure("BW.TLabel", foreground="white", background="black")
buttonTTK = ttk.Button( rootWindow, text="TTK BUTTON",style = "BW.TLabel", command=change)
buttonTTK.pack()
rootWindow.mainloop()
main()
</code></pre>
<p>Result:</p>
<p><a href="https://i.stack.imgur.com/1y73X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1y73X.png" alt="enter image description here"></a></p>
|
python|tkinter
| 4 |
1,909,185 | 23,543,431 |
Treatment of NaN's
|
<p>I have trouble understanding how pandas and/or numpy are handling NaN values. I am extracting subsets of a pandas dataframe in order to compute t-stats, e.g. I want to know whether there is a significant difference in the mean of x2 for the group whose x1 value is A compared to those with an x1 value of B (apologies for not making this a working example, but I don't know how to recreate the NaN values that pop up in my dataframe, the original data is read in using read_csv, with the csv denoting missing values with <code>NA</code>):</p>
<pre><code>import numpy as np
import pandas as pd
import scipy.stats as st
A = data[data['x1']=='A']['x2']
B = data[data['x1']=='B'].x2
A
2 3
3 1
5 2
6 3
10 3
12 2
15 2
16 0
21 0
24 1
25 1
28 NaN
31 0
32 3
...
677 0
681 NaN
682 3
683 1
686 2
Name: praxiserf, Length: 335, dtype: float64
</code></pre>
<p>That is, I have two <code>pandas.core.series.Series</code> objects, which I then want to perform a t-test on. However, using</p>
<pre><code>st.ttest_ind(A, B)
</code></pre>
<p>returns:</p>
<pre><code>(array(nan), nan)
</code></pre>
<p>I presume this has to do with the fact that <code>ttest_ind</code> accepts arrays as inputs and there seems to be a problem with my NaN values when converting the series to an array. If I try to calculate means of the original series, I get:</p>
<pre><code>A.mean(), B.mean()
1.5802, 1.2
</code></pre>
<p>However, when I try to turn the series into an array, I get:</p>
<pre><code>A_array = np.asarray(A)
A_array
array([ 3., 1., 2., 3., 3., 2., 2., 0., 0., 1., 1.,
nan, 0., 3., ..., 1., nan, 0., 3. ])
</code></pre>
<p>That is, <code>NaN</code> turned into <code>nan</code> and taking means doesn't work anymore:</p>
<pre><code>A_array.mean()
nan
</code></pre>
<p>How should the missing values be treated in order to ensure that I can still do calculations with the series/array? </p>
|
<p><code>pandas</code> uses the same code as the <code>bottleneck</code> <code>nanmean</code> function, I believe, thus automatically ignoring <code>nan</code>s. <code>numpy</code> doesn't do that for you. What you really want to do, however, is mask the <code>nan</code>-values in both series and pass that to the t-test:</p>
<pre class="lang-python prettyprint-override"><code>mask = numpy.logical_and(numpy.isfinite(A), numpy.isfinite(B))
st.ttest_ind(A[mask], B[mask])
</code></pre>
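<p>One caveat: a shared mask like this assumes <code>A</code> and <code>B</code> have the same length, which two groups generally won't. A sketch that masks each series independently instead:</p>
<pre><code>st.ttest_ind(A[numpy.isfinite(A)], B[numpy.isfinite(B)])
</code></pre>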
|
python|numpy|nan
| 5 |
1,909,186 | 23,636,049 |
Combining index and value sorting with top-K selection
|
<p>Say I have a dataframe with columns <code>A</code>, <code>B</code>, <code>C</code>, and <code>data</code>.</p>
<p>I would like to:</p>
<ol>
<li>Convert it to a multi-index dataframe with indices <code>A</code>, <code>B</code> and <code>C</code></li>
<li>Sort the rows by the <strong>the indices</strong> <code>A</code> and <code>B</code> of this dataframe.</li>
<li>Within each <code>A</code> <code>B</code> pair of the index, sort the rows (i.e. the <code>C</code> index) <strong>by the value on the column <code>data</code></strong>.</li>
<li>Get the top 20 rows within each such <code>A</code> <code>B</code> pair, according to the previous sorting on data.</li>
</ol>
<p>This shouldn't be hard, but I have tried all sorts of approaches, and none of them give me what I want. The following, for example, is close, but it gives me only values for the first group of <code>A</code> <code>B</code> indices.</p>
<pre><code>temp = mdf.set_index(['A', 'B','C']).sort_index()
# Sorting by value and retrieving the top 20 entries:
func = lambda x: x.sort('data', ascending=False).head(20)
temp = temp.groupby(level=['A','B'],as_index=False).apply(func)
# Drop the dummy index (?) introduced in the line above
temp = temp.reset_index(level=0)['data']
</code></pre>
<h1>Update:</h1>
<pre><code>def create_random_multi_index():
df = pd.DataFrame({'A' : [np.random.random_integers(10) for x in xrange(500)],
'B' : [np.random.random_integers(10) for x in xrange(500)],
'C' : [np.random.random_integers(10) for x in xrange(500)],
'data' : randn(500) })
return df
</code></pre>
<p>E.g. of what I am looking for (showing <strong>top 3 elements</strong>; note how the data is sorted within each <code>A-B</code> pair):</p>
<pre><code> data
A B C
1 1 10 2.057864
5 1.234252
7 0.235246
2 7 1.309126
6 0.450208
8 0.397360
2 2 2 1.609126
1 0.250208
4 0.597360
...
</code></pre>
|
<p>Not sure I 100% understand what you want, but I think this will do it. When you reset, it stays in the same order. The key is <code>sortlevel()</code>: it sorts the levels lexicographically (and the remaining levels on ties). In 0.14 (coming soon) there is an option <code>sort_remaining</code> which you can play with, I think. </p>
<pre><code>In [48]: np.random.seed(1234)
In [49]: df = pd.DataFrame({'A' : [np.random.random_integers(10) for x in xrange(500)],
....: 'B' : [np.random.random_integers(10) for x in xrange(500)],
....: 'C' : [np.random.random_integers(10) for x in xrange(500)],
....: 'data' : randn(500) })
</code></pre>
<p>First set the index, then sort it and reset.</p>
<p>Then group by A, B and pull out the 20 biggest elements.</p>
<pre><code>df.set_index(['A','B','C']).sortlevel().reset_index().groupby(
['A','B']).apply(lambda x: x.sort(columns='data',ascending=False).head(20)).set_index(['A','B','C'])
Out[8]:
data
A B C
1 1 1 0.959688
2 0.918230
2 0.731919
10 0.212463
1 0.103644
1 -0.035266
2 8 1.459579
8 1.277935
5 -0.075886
2 -0.684101
3 -0.928110
3 5 0.675987
4 0.065301
5 -0.800067
7 -1.349503
4 4 1.167308
8 1.148327
9 0.417590
6 -1.274146
10 -2.656304
5 2 -0.962994
1 -0.982679
6 2 1.410920
6 1.352527
10 0.510330
4 0.033275
1 -0.679686
10 -0.896797
1 -2.858669
7 8 -0.219342
8 -0.591054
2 -0.773227
1 -0.781850
3 -1.259089
10 -1.387992
10 -1.891734
8 7 1.578855
2 -0.498898
9 3 0.644277
8 0.572177
2 0.058431
9 -0.146912
4 -0.334690
10 9 0.795346
8 -0.137661
10 -1.335385
2 1 9 1.309405
3 0.328546
5 0.198422
1 -0.561974
3 -0.578069
2 5 0.645426
1 -0.138808
5 -0.400199
5 -0.513738
10 -0.667343
9 -1.983470
3 3 1.210882
6 0.894201
3 0.743652
...
[500 rows x 1 columns]
</code></pre>
|
python|pandas
| 2 |
1,909,187 | 24,148,239 |
Extract variable names from file
|
<p>I have a text file containing a large number of lines like this. </p>
<pre><code>NOTE: Variable Variable_S1 already exists on file D1.D, using Var_S8 instead.
NOTE: The variable name more_than_eight_letters_m has been truncated to ratio_s.
NOTE: Variable ratio_s already exists on file D1.D, using Var_S9 instead.
</code></pre>
<p>I am trying to create a list containing 2 columns:</p>
<pre><code>Variable_S1 Var_S8
more_than_eight_letters Var_S9
</code></pre>
<p>Can someone tell me how to do this using sed, Python, or even R?</p>
|
<p>I don't know about sed or R, but in Python:</p>
<pre><code>>>> import re
>>> i = """NOTE: Variable Variable_S1 already exists on file D1.D, using Var_S8 instead.
NOTE: The variable name more_than_eight_letters_m has been truncated to ratio_s.
NOTE: Variable ratio_s already exists on file D1.D, using Var_S9 instead."""
>>> print(re.findall(r'(\w+_\w+)', i))
['Variable_S1', 'Var_S8', 'more_than_eight_letters_m', 'ratio_s', 'ratio_s', 'Var_S9']
</code></pre>
<p>Here is an improved version, which will give you the set of variables for each line:</p>
<pre><code>>>> print([re.findall(r'(\w+_\w+)', line) for line in i.split('\n')])
[['Variable_S1', 'Var_S8'],
['more_than_eight_letters_m', 'ratio_s'],
['ratio_s', 'Var_S9']]
</code></pre>
|
python|sed
| 1 |
1,909,188 | 20,581,267 |
Uniform Circular LBP face recognition implementation
|
<p>I am trying to implement a basic face recognition system using Uniform Circular LBP (8 Points in 1 unit radius neighborhood).
I am taking an image, <strong>re-sizing it to 200 x 200</strong> pixels and then <strong>splitting the image in 8x8 little images</strong>. I then compute the histogram for each little image and <strong>get a list of histograms</strong>. To <strong>compare 2 images</strong>, I compute chi-squared distance between the corresponding histograms and generate a score.</p>
<p><strong>Here's my Uniform LBP implementation:</strong></p>
<pre><code>import numpy as np
import math
uniform = {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 58, 6: 5, 7: 6, 8: 7, 9: 58, 10: 58, 11: 58, 12: 8, 13: 58, 14: 9, 15: 10, 16: 11, 17: 58, 18: 58, 19: 58, 20: 58, 21: 58, 22: 58, 23: 58, 24: 12, 25: 58, 26: 58, 27: 58, 28: 13, 29: 58, 30: 14, 31: 15, 32: 16, 33: 58, 34: 58, 35: 58, 36: 58, 37: 58, 38: 58, 39: 58, 40: 58, 41: 58, 42: 58, 43: 58, 44: 58, 45: 58, 46: 58, 47: 58, 48: 17, 49: 58, 50: 58, 51: 58, 52: 58, 53: 58, 54: 58, 55: 58, 56: 18, 57: 58, 58: 58, 59: 58, 60: 19, 61: 58, 62: 20, 63: 21, 64: 22, 65: 58, 66: 58, 67: 58, 68: 58, 69: 58, 70: 58, 71: 58, 72: 58, 73: 58, 74: 58, 75: 58, 76: 58, 77: 58, 78: 58, 79: 58, 80: 58, 81: 58, 82: 58, 83: 58, 84: 58, 85: 58, 86: 58, 87: 58, 88: 58, 89: 58, 90: 58, 91: 58, 92: 58, 93: 58, 94: 58, 95: 58, 96: 23, 97: 58, 98: 58, 99: 58, 100: 58, 101: 58, 102: 58, 103: 58, 104: 58, 105: 58, 106: 58, 107: 58, 108: 58, 109: 58, 110: 58, 111: 58, 112: 24, 113: 58, 114: 58, 115: 58, 116: 58, 117: 58, 118: 58, 119: 58, 120: 25, 121: 58, 122: 58, 123: 58, 124: 26, 125: 58, 126: 27, 127: 28, 128: 29, 129: 30, 130: 58, 131: 31, 132: 58, 133: 58, 134: 58, 135: 32, 136: 58, 137: 58, 138: 58, 139: 58, 140: 58, 141: 58, 142: 58, 143: 33, 144: 58, 145: 58, 146: 58, 147: 58, 148: 58, 149: 58, 150: 58, 151: 58, 152: 58, 153: 58, 154: 58, 155: 58, 156: 58, 157: 58, 158: 58, 159: 34, 160: 58, 161: 58, 162: 58, 163: 58, 164: 58, 165: 58, 166: 58, 167: 58, 168: 58, 169: 58, 170: 58, 171: 58, 172: 58, 173: 58, 174: 58, 175: 58, 176: 58, 177: 58, 178: 58, 179: 58, 180: 58, 181: 58, 182: 58, 183: 58, 184: 58, 185: 58, 186: 58, 187: 58, 188: 58, 189: 58, 190: 58, 191: 35, 192: 36, 193: 37, 194: 58, 195: 38, 196: 58, 197: 58, 198: 58, 199: 39, 200: 58, 201: 58, 202: 58, 203: 58, 204: 58, 205: 58, 206: 58, 207: 40, 208: 58, 209: 58, 210: 58, 211: 58, 212: 58, 213: 58, 214: 58, 215: 58, 216: 58, 217: 58, 218: 58, 219: 58, 220: 58, 221: 58, 222: 58, 223: 41, 224: 42, 225: 43, 226: 58, 227: 44, 228: 58, 229: 58, 230: 58, 231: 45, 232: 58, 233: 58, 234: 58, 235: 58, 236: 58, 237: 58, 238: 58, 239: 46, 240: 47, 241: 48, 242: 58, 243: 49, 244: 58, 245: 58, 246: 58, 247: 50, 248: 51, 249: 52, 250: 58, 251: 53, 252: 54, 253: 55, 254: 56, 255: 57}
def bilinear_interpolation(i, j, y, x, img):
fy, fx = int(y), int(x)
cy, cx = math.ceil(y), math.ceil(x)
# calculate the fractional parts
ty = y - fy
tx = x - fx
w1 = (1 - tx) * (1 - ty)
w2 = tx * (1 - ty)
w3 = (1 - tx) * ty
w4 = tx * ty
return w1 * img[i + fy, j + fx] + w2 * img[i + fy, j + cx] + \
w3 * img[i + cy, j + fx] + w4 * img[i + cy, j + cx]
def thresholded(center, pixels):
out = []
for a in pixels:
if a > center:
out.append(1)
else:
out.append(0)
return out
def uniform_circular(img, P, R):
ysize, xsize = img.shape
transformed_img = np.zeros((ysize - 2 * R,xsize - 2 * R), dtype=np.uint8)
for y in range(R, len(img) - R):
for x in range(R, len(img[0]) - R):
center = img[y,x]
pixels = []
for point in range(0, P):
r = R * math.cos(2 * math.pi * point / P)
c = R * math.sin(2 * math.pi * point / P)
pixels.append(bilinear_interpolation(y, x, r, c, img))
values = thresholded(center, pixels)
res = 0
for a in range(0, P):
res += values[a] << a
transformed_img.itemset((y - R,x - R), uniform[res])
transformed_img = transformed_img[R:-R,R:-R]
return transformed_img
</code></pre>
<p>I did an experiment on the <a href="http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html" rel="nofollow noreferrer">AT&T database</a>, taking 2 gallery images and 8 probe images per subject. The ROC for the experiment came out to be:</p>
<p><img src="https://i.stack.imgur.com/c2NXc.png" alt="ROC Uniform LBP"></p>
<p>In the above ROC, the <strong>x axis denotes the false accept rate</strong> and the <strong>y axis denotes the genuine accept rate</strong>. The accuracy seems to be poor by Uniform LBP standards. I am sure there is something wrong with my implementation. It would be great if someone could help me with it. Thanks for reading.</p>
<p><strong>EDIT:</strong></p>
<p>I think I made a mistake in the above code. I am going clockwise while the paper on LBP suggests that I should go anticlockwise while assigning weights. The line: <code>c = R * math.sin(2 * math.pi * point / P)</code> should be <code>c = -R * math.sin(2 * math.pi * point / P)</code>. Results after the edit are even worse. This suggests something is way wrong with my code. I guess the way I am choosing the coordinates for interpolation is messed up.</p>
<p><img src="https://i.stack.imgur.com/iZs4j.png" alt="Uniform Circular LBP ROC(2)"></p>
<p>Edit: next I tried to replicate @bytefish's code <a href="http://www.bytefish.de/blog/local_binary_patterns/" rel="nofollow noreferrer">here</a> and used the uniform hashmap to make the implementation Uniform Circular LBP.</p>
<pre><code>def uniform_circular(img, P, R):
ysize, xsize = img.shape
transformed_img = np.zeros((ysize - 2 * R,xsize - 2 * R), dtype=np.uint8)
for point in range(0, P):
x = R * math.cos(2 * math.pi * point / P)
y = -R * math.sin(2 * math.pi * point / P)
fy, fx = int(y), int(x)
cy, cx = math.ceil(y), math.ceil(x)
# calculate the fractional parts
ty = y - fy
tx = x - fx
w1 = (1 - tx) * (1 - ty)
w2 = tx * (1 - ty)
w3 = (1 - tx) * ty
w4 = tx * ty
for i in range(R, ysize - R):
for j in range(R, xsize - R):
t = w1 * img[i + fy, j + fx] + w2 * img[i + fy, j + cx] + \
w3 * img[i + cy, j + fx] + w4 * img[i + cy, j + cx]
center = img[i,j]
pixels = []
res = 0
transformed_img[i - R,j - R] += (t > center) << point
for i in range(R, ysize - R):
for j in range(R, xsize - R):
transformed_img[i - R,j - R] = uniform[transformed_img[i - R,j - R]]
</code></pre>
<p>Here's the ROC for the same:</p>
<p><img src="https://i.stack.imgur.com/gFIVx.png" alt="Uniform Circular LBP ROC(3)"></p>
<p>I tried to implement the same code in C++. Here is the code:</p>
<pre><code>#include <stdio.h>
#include <stdlib.h>
#include <opencv2/opencv.hpp>
using namespace cv;
int* uniform_circular_LBP_histogram(Mat& src) {
int i, j;
int radius = 1;
int neighbours = 8;
Size size = src.size();
int *hist_array = (int *)calloc(59,sizeof(int));
int uniform[] = {0,1,2,3,4,58,5,6,7,58,58,58,8,58,9,10,11,58,58,58,58,58,58,58,12,58,58,58,13,58,14,15,16,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,17,58,58,58,58,58,58,58,18,58,58,58,19,58,20,21,22,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,23,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,24,58,58,58,58,58,58,58,25,58,58,58,26,58,27,28,29,30,58,31,58,58,58,32,58,58,58,58,58,58,58,33,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,34,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,35,36,37,58,38,58,58,58,39,58,58,58,58,58,58,58,40,58,58,58,58,58,58,58,58,58,58,58,58,58,58,58,41,42,43,58,44,58,58,58,45,58,58,58,58,58,58,58,46,47,48,58,49,58,58,58,50,51,52,58,53,54,55,56,57};
Mat dst = Mat::zeros(size.height - 2 * radius, size.width - 2 * radius, CV_8UC1);
for (int n = 0; n < neighbours; n++) {
float x = static_cast<float>(radius) * cos(2.0 * M_PI * n / static_cast<float>(neighbours));
float y = static_cast<float>(radius) * -sin(2.0 * M_PI * n / static_cast<float>(neighbours));
int fx = static_cast<int>(floor(x));
int fy = static_cast<int>(floor(y));
int cx = static_cast<int>(ceil(x));
int cy = static_cast<int>(ceil(x));
float ty = y - fy;
float tx = y - fx;
float w1 = (1 - tx) * (1 - ty);
float w2 = tx * (1 - ty);
float w3 = (1 - tx) * ty;
float w4 = 1 - w1 - w2 - w3;
for (i = 0; i < 59; i++) {
hist_array[i] = 0;
}
for (i = radius; i < size.height - radius; i++) {
for (j = radius; j < size.width - radius; j++) {
float t = w1 * src.at<uchar>(i + fy, j + fx) + \
w2 * src.at<uchar>(i + fy, j + cx) + \
w3 * src.at<uchar>(i + cy, j + fx) + \
w4 * src.at<uchar>(i + cy, j + cx);
dst.at<uchar>(i - radius, j - radius) += ((t > src.at<uchar>(i,j)) && \
(abs(t - src.at<uchar>(i,j)) > std::numeric_limits<float>::epsilon())) << n;
}
}
}
for (i = radius; i < size.height - radius; i++) {
for (j = radius; j < size.width - radius; j++) {
int val = uniform[dst.at<uchar>(i - radius, j - radius)];
dst.at<uchar>(i - radius, j - radius) = val;
hist_array[val] += 1;
}
}
return hist_array;
}
int main( int argc, char** argv )
{
Mat src;
int i,j;
src = imread( argv[1], 0 );
if( argc != 2 || !src.data )
{
printf( "No image data \n" );
return -1;
}
const int width = 200;
const int height = 200;
Size size = src.size();
Size new_size = Size();
new_size.height = 200;
new_size.width = 200;
Mat resized_src;
resize(src, resized_src, new_size, 0, 0, INTER_CUBIC);
int count = 1;
for (i = 0; i <= width - 8; i += 25) {
for (j = 0; j <= height - 8; j += 25) {
Mat new_mat = resized_src.rowRange(i, i + 25).colRange(j, j + 25);
int *hist = uniform_circular_LBP_histogram(new_mat);
int z;
for (z = 0; z < 58; z++) {
std::cout << hist[z] << ",";
}
std::cout << hist[z] << "\n";
count += 1;
}
}
return 0;
}
</code></pre>
<p>ROC for the same:</p>
<p><img src="https://i.stack.imgur.com/QGxNz.png" alt="enter image description here"></p>
<p>I also did a rank-based experiment and got this CMC curve.</p>
<p><img src="https://i.stack.imgur.com/8dq5x.png" alt="CMC Curve"></p>
<p>Some details about the CMC curve: the x axis represents ranks (1-10) and the y axis represents accuracy (0-1). So I got 80%+ Rank-1 accuracy.</p>
|
<p>I don't know about Python, but most probably your code is broken.</p>
<p>My advice is to follow these two links and try to port one of the C++ codes to Python. The first link also contains some information about LBP.</p>
<p><a href="http://www.bytefish.de/blog/local_binary_patterns/" rel="nofollow">http://www.bytefish.de/blog/local_binary_patterns/</a></p>
<p><a href="https://github.com/berak/uniform-lbp" rel="nofollow">https://github.com/berak/uniform-lbp</a></p>
<p>One other thing: you said you are resizing images to 200x200. Why are you doing that? As far as I know the AT&T images are smaller than that, so you are just making the images bigger. I don't think it is going to help you; moreover, it may have a negative effect on performance.</p>
|
c++|python|opencv|image-processing|face-recognition
| 4 |
1,909,189 | 46,560,909 |
Is it good practise to indent inline comments?
|
<p>I found myself writing some tricky algorithmic code, and I tried to comment it as well as I could since I really do not know who is going to maintain this part of the code.<br/>
Following this idea, I've written quite a lot of block and inline comments, while trying not to over-comment. But still, when I go back to the code I wrote a week ago, I find it difficult to read because of the swarming presence of the comments, especially the inline ones.
I thought that indenting them (to ~120 chars) could ease the readability, but that would obviously make the lines way too long according to style standards.</p>
<p>Here's an example of the original code:</p>
<pre><code>fooDataTableOccurrence = nestedData.find("table=\"public\".")
if 0 > fooDataTableOccurrence: # selects only tables without tag value "public-"
otherDataTableOccurrence = nestedData.find("table=")
dbNamePos = nestedData.find("dbname=") + 7 # 7 is the length of "dbname="
if -1 < otherDataTableOccurrence: # selects only tables with tag value "table="
# database resource case
resourceName = self.findDB(nestedData, dbNamePos, otherDataTableOccurrence, destinationPartitionName)
if resourceName: #if the resource is in a wrong path
if resourceName in ["foo", "bar", "thing", "stuff"]:
return True, False, False # respectively isProjectAlreadyExported, isThereUnexpectedData and wrongPathResources
wrongPathResources.append("Database table: " + resourceName)
</code></pre>
<p>And here's what indenting the inline comments would look like:</p>
<pre><code>fooDataTableOccurrence = nestedData.find("table=\"public\".")
if 0 > fooDataTableOccurrence: # selects only tables without tag value "public-"
otherDataTableOccurrence = nestedData.find("table=")
dbNamePos = nestedData.find("dbname=") + 7 # 7 is the length of "dbname="
if -1 < otherDataTableOccurrence: #selects only tables with tag value "table="
# database resource case
resourceName = self.findDB(nestedData, dbNamePos, otherDataTableOccurrence, destinationPartitionName)
if resourceName: # if the resource is in a wrong path
if resourceName in ["foo", "bar", "thing", "stuff"]:
return True, False, False # respectively isProjectAlreadyExported, isThereUnexpectedData and wrongPathResources
wrongPathResources.append("Database table: " + resourceName)
</code></pre>
<p>The code is in Python (my company's legacy code doesn't strictly follow the PEP8 standard, so we had to stick with that), but <strong>my point is not about the cleanness of the code itself, but about the comments</strong>. I am looking for a trade-off between readability and easy understanding of the code, and sometimes I find it difficult to achieve both at the same time.<br><br>
<strong>Which of the examples is better? If neither, what would be?</strong></p>
|
<p>Maybe this is an XY_Problem?<br>
Could the comments be eliminated altogether?</p>
<p>Here is a (quick & dirty) attempt at refactoring the code posted:</p>
<pre><code>dataTableOccurrence_lacks_tag_public = nestedData.find("table=\"public\".") < 0
if dataTableOccurrence_lacks_tag_public:
    otherDataTableOccurrence = nestedData.find("table=")
    prefix = "dbname="
    dbNamePos = nestedData.find(prefix) + len(prefix)
    dataTableOccurrence_has_tag_table = otherDataTableOccurrence >= 0
    if dataTableOccurrence_has_tag_table:
        # database resource case
        resourceName = self.findDB(nestedData,
                                   dbNamePos,
                                   otherDataTableOccurrence,
                                   destinationPartitionName)
        resource_name_in_wrong_path = bool(resourceName)
        if resource_name_in_wrong_path:
            if resourceName in ["foo", "bar", "thing", "stuff"]:
                project_already_exported = True
                unexpected_data = False
                return (project_already_exported,
                        unexpected_data,
                        resource_name_in_wrong_path)
            wrongPathResources.append("Database table: " + resourceName)
</code></pre>
<p>Further work could involve extracting functions out of the block of code.</p>
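<p>As a small illustration of that last step, a helper with a descriptive (hypothetical) name can replace an inline comment entirely:</p>
<pre><code>def has_tag(nested_data, tag):
    """Return True when the resource string contains the given tag."""
    return nested_data.find(tag) >= 0
</code></pre>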
|
python|comments|pep
| 2 |
1,909,190 | 49,445,729 |
Cumulative sum at intervals
|
<p>Consider this dataframe:</p>
<pre><code>dfgg
Out[305]:
Parts_needed output
Year Month PartId
2018 1 L27849 72 72
2 L27849 75 147
3 L27849 101 248
4 L27849 103 351
5 L27849 77
6 L27849 120
7 L27849 59
8 L27849 79
9 L27849 28
10 L27849 64
11 L27849 511
12 L27849 34
2019 1 L27849 49
2 L27849 68
3 L27849 75
4 L27849 45
5 L27849 84
6 L27849 42
7 L27849 40
8 L27849 52
9 L27849 106
10 L27849 75
11 L27849 176
12 L27849 58 2193
2020 1 L27849 135 2328
2 L27849 45 2301
3 L27849 21 2247
4 L27849 35
5 L27849 17
6 L27849 39
...
2025 7 L27849 94
8 L27849 13
9 L27849 94
10 L27849 65
11 L27849 141
12 L27849 34
2026 1 L27849 22
2 L27849 132
3 L27849 49
4 L27849 33
5 L27849 48
6 L27849 53
7 L27849 103
8 L27849 122
9 L27849 171
10 L27849 182
11 L27849 68
12 L27849 23
2027 1 L27849 44
2 L27849 21
3 L27849 52
4 L27849 53
5 L27849 57
6 L27849 187
7 L27849 69
8 L27849 97
9 L27849 31
10 L27849 29
11 L27849 33
12 L27849 8
</code></pre>
<p>In this dataframe, I need to obtain the cumulative sum of Parts_needed at intervals of 2 years. For example,
for <code>1-2018, 72</code> will keep getting added to the following rows <code>75,101,103..</code> up to <code>1-2020 135</code>. Similarly, at <code>2-2018, 75</code> will keep getting added to the following rows <code>101,103..</code> up to <code>2-2020 45</code>. For the last 2 years, however, the cumulative sum will be over whatever rows are remaining. I'm not able to set a range with np.cumsum(). Can somebody help me please?</p>
<p>edit: I have edited, to include the expected output. For 2-2020, the output is 2328+45-72 (since 72 has been added for 2 years) For 3-2020, the output is 2301+21-75 (since 75 has been added for 2 years) and so on.</p>
|
<p>Basically you want a running total as if the beginning were zero-padded. You can do that with convolution. Here is a simple numpy example which you should be able to adapt to your pandas use case:</p>
<pre><code>import numpy as np
a = np.array([10,20,3,4,5,6,7])
width = 4
kernel = np.ones(width)
np.convolve(a,kernel)
</code></pre>
<p>returning </p>
<pre><code>array([10., 30., 33., 37., 32., 18., 22., 18., 13., 7.])
</code></pre>
<p>As you can see this is a cumulative sum up until <code>37</code> in the output (or <code>a[3]</code>) and after that it's a sum of a rolling 4 element window.</p>
<p>This will work for you assuming you always have 24 rows for each 2 year period.</p>
<p>Here is a pandas example using only 2 months per year (so <code>width</code> is <code>4</code> instead of <code>24</code>):</p>
<pre><code>>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame({'year':[18,18,19,19,20,20,21,21],'month':[1,2,1,2,1,2,1,2],'parts':[230,5,2,12,66,32,1,2]})
>>> df
month parts year
0 1 230 18
1 2 5 18
2 1 2 19
3 2 12 19
4 1 66 20
5 2 32 20
6 1 1 21
7 2 2 21
>>> width = 4
>>> kernel = np.ones(width)
>>> # Drop the last elements as you don't want the window to roll passed the end
>>> np.convolve(df['parts'],kernel)[:-width+1]
array([230., 235., 237., 249., 85., 112., 111., 101.])
</code></pre>
<p>Now you just assign that last array to a new column of your <code>DataFrame</code></p>
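<p>For the example above that is just (using the names already defined):</p>
<pre><code>df['output'] = np.convolve(df['parts'], kernel)[:-width+1]
</code></pre>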
|
python|pandas|cumsum
| 1 |
1,909,191 | 49,710,301 |
chatbot error EOL while scanning string literal
|
<p>This is my code:</p>
<pre><code>for files in os.listdir('C:\Users\Tatheer Hussain\Desktop\ChatBot\chatterbot-corpus-master\chatterbot_corpus\data\english\'):
    data = open('C:\Users\Tatheer Hussain\Desktop\ChatBot\chatterbot-corpus-master\chatterbot_corpus\data\english\' + files, 'r').readlines()
    bot.train(data)
</code></pre>
<p>I get this SyntaxError:
EOL while scanning string literal</p>
|
<p><code>\</code> is the escape character in Python. If you end your string with <code>\</code>, it will escape the close quote, so the string is no longer terminated properly.</p>
<p>You should use a raw string by prefixing the open quote with <code>r</code>. Note that even a raw string cannot end with a single backslash, so either drop the trailing separator or use forward slashes, which Windows also accepts:</p>
<pre><code>os.listdir(r'C:/Users/Tatheer Hussain/Desktop/ChatBot/chatterbot-corpus-master/chatterbot_corpus/data/english/')
</code></pre>
|
python|chatbot
| -1 |
1,909,192 | 49,511,471 |
Minimization python function which is constant on small intervals
|
<p>I want to minimize a convex function on an area with bounds and constraints, therefore I try to use <code>scipy.optimize.minimize</code> with the <code>SLSQP</code> option. However, my function is only defined at discrete points. Linear interpolation does not seem like an option, as computing my function at all of those points would take way too much time. As a Minimal Working Example I have:</p>
<pre><code>from scipy.optimize import minimize
import numpy as np
f=lambda x : x**2
N=1000000
x_vals=np.sort(np.random.random(N))*2-1
y_vals=f(x_vals)
def f_disc(x, x_vals, y_vals):
return y_vals[np.where(x_vals<x)[-1][-1]]
print(minimize(f_disc, 0.5, method='SLSQP', bounds = [(-1,1)], args = (x_vals, y_vals)))
</code></pre>
<p>which yields the following output:</p>
<pre><code> fun: 0.24999963136767756
jac: array([ 0.])
message: 'Optimization terminated successfully.'
nfev: 3
nit: 1
njev: 1
status: 0
success: True
x: array([ 0.5])
</code></pre>
<p>which we of course know to be false, but the definition of <code>f_disc</code> tricks the optimizer into believing that it is constant at the given index. For my problem I only have <code>f_disc</code> and do not have access to <code>f</code>. Moreover, one call to <code>f_disc</code> can take as much as a minute.</p>
|
<p>If your function is not smooth, gradient-based optimization techniques will fail. Of course you can use methods that are not based on gradients, but these usually require more function evaluations.</p>
<p>Here are two options that can work.</p>
<p>The <a href="https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method" rel="nofollow noreferrer">nelder-mead method</a> does not need a gradient, but it has the drawback that it cannot handle bounds or constraints:</p>
<pre><code>print(minimize(f_disc, 0.5, method='nelder-mead', args = (x_vals, y_vals)))
# final_simplex: (array([[ -4.44089210e-16], [ 9.76562500e-05]]), array([ 2.35756658e-12, 9.03710082e-09]))
# fun: 2.3575665763730149e-12
# message: 'Optimization terminated successfully.'
# nfev: 32
# nit: 16
# status: 0
# success: True
# x: array([ -4.44089210e-16])
</code></pre>
<p><a href="https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.optimize.differential_evolution.html" rel="nofollow noreferrer"><code>differential_evolution</code></a> is an optimizer that does not make any assumptions about smoothness. Not only can it handle bounds; it requires them. However, it takes even more function evaluations than nelder-mead.</p>
<pre><code>print(differential_evolution(f_disc, bounds = [(-1,1)], args = (x_vals, y_vals)))
# fun: 5.5515134011907119e-13
# message: 'Optimization terminated successfully.'
# nfev: 197
# nit: 12
# success: True
# x: array([ 2.76298719e-06])
</code></pre>
|
python|numpy|optimization|scipy|minimization
| 2 |
1,909,193 | 53,499,701 |
question about string using regex in python
|
<p>I am doing a Python exercise where, if there is a run of non-alphanumeric characters between 2 alphanumeric characters, it should be replaced with a single space ' ', as seen below. This applies only to non-alphanumeric chars BETWEEN 2 alphanumeric chars; in the case below that's between 'This' and 'is', and between 'is' and 'Matrix'. The last part (non-alphanumeric chars) should stay untouched. How do I go about doing this? </p>
<p>Your Output:</p>
<pre><code>'This$#is% Matrix# %!'
</code></pre>
<p>Expected Output:</p>
<pre><code>'This is Matrix# %!'
</code></pre>
|
<p>Please find the code below. Hope this will help you.</p>
<pre><code>import re

a = 'This$#is% Matrix# %!'
b = re.sub(r'([\w])([\W]{1,})([\w])', r'\1 \3', a)
print(b)
</code></pre>
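<p>One caveat with this pattern: because it consumes the second letter, adjacent separators can be missed (e.g. <code>'a%b%c'</code> becomes <code>'a b%c'</code>). A sketch using a lookahead, which leaves that letter available for the next match, avoids this:</p>
<pre><code>import re

a = 'This$#is% Matrix# %!'
b = re.sub(r'(\w)\W+(?=\w)', r'\1 ', a)  # lookahead keeps the second letter unconsumed
print(b)  # This is Matrix# %!
</code></pre>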
|
python|regex
| 0 |
1,909,194 | 46,065,879 |
Redis value fetch for 3000 keyhash takes about 10 seconds(python 3.5)
|
<p>I am new to the Redis world. I am trying to fetch values for 3000 keys from Redis. Each hash has 6 values that I want to fetch. I am using Python 3.5 to make the connection to Redis once and then loop through my key hashes to get their respective values. However, currently it is taking about 10 seconds to get the values for these 3000 rows. I am using the code below to get data from Redis. Can you please help me speed up the fetch? Is there a way to send all the keys at once and get the values related to them? </p>
<pre><code>redis_pool = redis.ConnectionPool(host='XYZ',
port='XY',
db='X')
r = redis.Redis(connection_pool=redis_pool)
field1 = r.hmget(primary_key, "Field1")
</code></pre>
|
<p>You could try <a href="https://redis.io/topics/pipelining" rel="noreferrer">pipeline</a> to speed up your query.</p>
<pre><code>r = redis.Redis(connection_pool=redis_pool)
pipe = r.pipeline()
for key in keys_list:
pipe.hget(key, "field1")
results = pipe.execute()
</code></pre>
<p>The <code>results</code> will be the list of every hget reply. You can refer to <a href="https://github.com/andymccurdy/redis-py" rel="noreferrer">redis-py</a>'s README, pipeline section, to learn more about how to use pipelines in the Python Redis client.</p>
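<p>If you need the replies keyed by their hashes again, you can zip them back together (assuming the same <code>keys_list</code>):</p>
<pre><code>values = dict(zip(keys_list, results))
</code></pre>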
|
python|performance|redis
| 6 |
1,909,195 | 33,342,266 |
Issue with request module in python
|
<p>I can't figure out how to resolve this issue.</p>
<p>I installed <code>python/pip</code> and it works, but I'm not able to import the <strong><em>requests module</em></strong> inside my Python script.</p>
<p>If I launch this command:</p>
<pre><code>pip install requests --upgrade
</code></pre>
<p>I receive this output:</p>
<pre><code>/usr/local/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Requirement already up-to-date: requests in /usr/local/lib/python2.7/site-packages
</code></pre>
<p>So, if I launch a simple script:</p>
<pre><code>#!/usr/bin/python
import requests
printl("hello")
</code></pre>
<p>I get this error:</p>
<pre><code> File "test.py", line 5, in <module>
import requests
ImportError: No module named requests
</code></pre>
<p><strong>UPDATE with sys.path output</strong></p>
<pre><code>[ '/usr/lib/python2.7/site-packages/simplejson-2.0.9-py2.7.egg', '/usr/lib64/python27.zip', '/usr/lib64/python2.7', '/usr/lib64/python2.7/plat-linux2', '/usr/lib64/python2.7/lib-tk', '/usr/lib64/python2.7/lib-old', '/usr/lib64/python2.7/lib-dynload', '/usr/lib64/python2.7/site-packages', '/usr/lib64/python2.7/site-packages/gtk-2.0', '/usr/lib/python2.7/site-packages']
</code></pre>
<p>what's wrong?</p>
|
<p>I resolved the issue.</p>
<p>If someone has got the same problem, simply run the scripts with the version of python which has the modules.</p>
<p>So for example, in my case this works:</p>
<pre><code>$python2.7 test.py
</code></pre>
|
python
| 0 |
1,909,196 | 73,771,173 |
Read multiple *.txt files into Pandas Dataframe with filename as the first column
|
<p>Currently I have this code, which only reads one specific txt file and splits it into different columns. Each txt file is stored in the same directory and looks like this:</p>
<pre><code>0 0.712518 0.615250 0.439180 0.206500
1 0.635078 0.811750 0.292786 0.092500
</code></pre>
<p>The code I wrote:</p>
<pre><code>df_2a = spark.read.format('csv').options(header='false').load("/mnt/datasets/model1/train/labels/2a.txt").toPandas()
df_2a.columns = ['Value']
df_2a_split = df_2a['Value'].str.split(' ', n=0, expand=True)
df_2a_split.columns = ['class','c1','c2','c3','c4']
display(df_2a_split)
</code></pre>
<p>And the output is like this:</p>
<pre><code>class c1 c2 c3 c4
0 0.712518 0.61525 0.43918 0.2065
1 0.635078 0.81175 0.292786 0.0925
</code></pre>
<p>However, I want to ingest all .txt files in a directory, including the filename as the first column in the pandas dataframe. The expected result looks like below:</p>
<pre><code>file_name class c1 c2 c3 c4
2a.txt 0 0.712518 0.61525 0.43918 0.2065
2a.txt 1 0.635078 0.81175 0.292786 0.0925
2b.txt 2 0.551273 0.5705 0.30198 0.0922
2b.txt 0 0.550212 0.31125 0.486563 0.2455
</code></pre>
|
<pre><code>import os
import pandas as pd
# `spark` and `display` are assumed to be the predefined Databricks globals

directory = '/mnt/datasets/model1/train/labels/'

# Get all the filenames within your directory
files = []
for file in os.listdir(directory):
    if os.path.isfile(os.path.join(directory, file)):
        files.append(file)

# Create an empty df and fill it by looping your files
df = pd.DataFrame()
for file in files:
    df_temp = spark.read.format('csv').options(header='false').load(directory + file).toPandas()
    df_temp.columns = ['Value']
    df_temp = df_temp['Value'].str.split(' ', n=0, expand=True)
    df_temp.columns = ['class', 'c1', 'c2', 'c3', 'c4']
    # Tag every row of this file's chunk with its source filename
    df_temp.insert(0, 'file_name', file)
    df = pd.concat([df, df_temp], ignore_index=True)

display(df)
</code></pre>
|
python|pandas|dataframe|csv|text
| 0 |
1,909,197 | 12,863,580 |
Python pass instance of itself as an argument to another function
|
<p>I have a UserModel class that will essentially do everything like login and update things.</p>
<p>I'm trying to pass the instance of itself (the full class) as an argument to another function of another class.</p>
<p>For example: (obviously not the code, but you get the idea)</p>
<pre><code>from Car import CarFactory
class UserModel:
def __init__(self,username):
self.username = username
def settings(self,colour,age,height):
return {'colour':colour,'age':age,'height':height}
def updateCar(self,car_id):
c = CarFactory(car_id, <<this UserModel instance>>)
</code></pre>
<p>So, as you can see from the very last line above, I would like to pass an instance of UserModel to the CarFactory class, so that within the CarFactory class I can access UserModel.settings(); however, I am unsure of the syntax. I could of course just do:</p>
<pre><code>c = CarFactory(car_id,self.settings)
</code></pre>
<p>Any help would be grateful appreciated.</p>
<p>Thanks</p>
|
<pre><code>c = CarFactory(car_id, self)
</code></pre>
<p>doesn't work?</p>
<p>On a side note, it would be <code>self.settings()</code>, not <code>self.settings</code> ... unless you define settings to be a property.</p>
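<p>For illustration, a hypothetical <code>CarFactory</code> that uses the passed instance could look like this:</p>
<pre><code>class CarFactory:
    def __init__(self, car_id, user):
        self.car_id = car_id
        self.user = user  # the UserModel instance

    def describe(self):
        # call back into the UserModel that was passed in
        return self.user.settings('red', 30, 180)
</code></pre>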
|
python
| 13 |
1,909,198 | 21,516,860 |
Tried to implement lists as dictionary keys within algorithm, what's a quick solution?
|
<p>I am trying to implement the Apriori algorithm... <a href="http://codeding.com/articles/apriori-algorithm" rel="nofollow">http://codeding.com/articles/apriori-algorithm</a> in Python.</p>
<p>The highest level data structuring goes something like this:</p>
<pre><code>frequentItemSets[ k-level : itemSetDictionary]
|
|__
itemSetDictionary[ listOfItems : supportValueOfItems]
|
|__
list of integers, sorted lexicographically
</code></pre>
<p>I need to keep track of an arbitrary number of sets, the cardinality (k-level) of those sets, and a value that I calculate for each of those sets. I thought that using a list for all of the sets would be a good idea as they maintain order and are iterable. I tried to use lists as the keys within the itemSetDictionary, as you can see above, but now I see that iterable data structures are not allowed to be keys within Python dictionaries.</p>
<p>I am trying to figure out the quickest way to fix this issue. I know that I can just create some classes so that the keys are now objects, and not iterable data structures, but I feel like that would take a lot of time for me to change.</p>
<p>Any ideas?</p>
|
<p>Dictionary keys must be <a href="http://docs.python.org/2/glossary.html#term-hashable" rel="nofollow">hashable</a>, which usually requires them to be <a href="http://docs.python.org/2/glossary.html#term-immutable" rel="nofollow">immutable</a>. Whether they are iterable is immaterial.</p>
<p>In this particular case, you probably want to use <a href="http://docs.python.org/2/library/stdtypes.html#frozenset" rel="nofollow"><code>frozenset</code></a>s as keys.</p>
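<p>A rough sketch of how that would slot into your structure (names taken from your description):</p>
<pre><code>itemSetDictionary = {}
items = [2, 5, 7]  # lexicographically sorted list of item ids
itemSetDictionary[frozenset(items)] = 0.42  # support value keyed by the (immutable) item set
</code></pre>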
|
python|python-2.7|dictionary|hashable
| 2 |
1,909,199 | 24,621,792 |
Add values in two dictionaries together on same key
|
<p>How can I approach adding items from two table-indexed dictionaries together as in this simple example?</p>
<pre><code>table1 = {'item1': {'quantity': 3, 'value': 3.0}, \
'item2': {'quantity': 10, 'value': 30} \
}
table2 = {'item1': {'quantity': 5, 'value': 5.0}, \
'item3': {'quantity': 7, 'value': 10.5} \
}
newDic = {'item1': {'quantity': 8, 'value': 8.0}, \
'item2': {'quantity': 10, 'value': 30}, \
'item3': {'quantity': 7, 'value': 10.5} \
}
</code></pre>
<p>I have a function that parses and filters contents of a flat file into a dictionary. I'd like to redesign the script to parse multiple files. Each table has exactly the same column headers. The rows have same items with different values, but some files have <em>additional</em> items.</p>
|
<p>You can add <code>table2</code> to <code>table1</code></p>
<pre><code>table1 = {'item1': {'quantity': 3, 'value': 3.0}, \
'item2': {'quantity': 10, 'value': 30} \
}
table2 = {'item1': {'quantity': 5, 'value': 5.0}, \
'item3': {'quantity': 7, 'value': 10.5} \
}
for key, val in table2.items():
if key not in table1:
table1[key] = val
else:
for k, v in val.items():
if k not in table1[key]:
table1[key][k] = v
else:
table1[key][k] += v
print table1
</code></pre>
<p>.</p>
<pre><code>{
'item2': {'value': 30, 'quantity': 10},
'item3': {'value': 10.5, 'quantity': 7},
'item1': {'value': 8.0, 'quantity': 8}
}
</code></pre>
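<p>For what it's worth, a more compact sketch using <code>collections.Counter</code> (which sums matching keys; note that Counter addition drops non-positive values, which is fine for quantities like these) would be:</p>
<pre><code>from collections import Counter

newDic = {key: dict(Counter(table1.get(key, {})) + Counter(table2.get(key, {})))
          for key in set(table1) | set(table2)}
</code></pre>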
|
python
| 1 |