Unnamed: 0 (int64, 0-1.91M) | id (int64, 337-73.8M) | title (string, length 10-150) | question (string, length 21-64.2k) | answer (string, length 19-59.4k) | tags (string, length 5-112) | score (int64, -10-17.3k) |
---|---|---|---|---|---|---|
1,907,200 | 66,460,266 |
How to render wagtail orderable to homepage template?
|
<p>I want to render my orderable items on my home page template. I created a snippet and an orderable, but now I can't get the output on my homepage. I tried related_name="question_answer", but it is not working here. How can I get the output to show in the template?</p>
<pre><code>@register_snippet
class Question(models.Model):
    text = models.CharField(max_length=255)
    slug = models.CharField(max_length=255)

    panels = [
        FieldPanel('text'),
        FieldPanel('slug')
    ]

    def __str__(self):
        return self.text


class Answer(Orderable, models.Model):
    page = ParentalKey(HomePage, on_delete=models.CASCADE, related_name="question_answer")
    question = models.ForeignKey(Question, null=True, blank=True, on_delete=models.CASCADE, related_name='+')
    answer = models.CharField(max_length=300)
    url = models.URLField(max_length=200)

    panels = [
        MultiFieldPanel([
            SnippetChooserPanel('question'),
        ], heading="Questions"),
        MultiFieldPanel([
            FieldPanel('answer'),
            FieldPanel('url'),
        ], heading="Answers & Urls")
    ]

    def __str__(self):
        return self.page.title + " -> " + self.question.text
</code></pre>
|
<p>Setting <code>related_name="question_answer"</code> on the ParentalKey means that you can access the Answer objects related to the page as <code>page.question_answer.all()</code>. So, in your template, you can do something like:</p>
<pre><code><h1>Answers</h1>
<ul>
{% for item in page.question_answer.all %}
<li>{{ item.answer }}</li>
{% endfor %}
</ul>
</code></pre>
|
python|django|wagtail
| 1 |
1,907,201 | 66,713,106 |
how can I get a previous row value in pandas column
|
<p>I have a df like this</p>
<pre><code>| count | A |
|---------|---|
| yes |2 |
| yes |2 |
| total | |
| yes |2 |
| yes |2 |
| total | |
</code></pre>
<p>I want a output like below</p>
<pre><code>| count | A |
|---------|---|
| yes |2 |
| yes |2 |
| total | 2 |
| yes |2 |
| yes |2 |
| total | 2 |
</code></pre>
<p>That is, fill the A column values where count is "total" with the previous row's value.
Any idea how I can achieve this?</p>
|
<p>You can try with <code>df.loc</code>, <code>Series.mask</code> and <code>ffill</code>:</p>
<pre><code>df.loc[df['count'].eq("total"),"A"] = df['A'].mask(df['A'].eq('')).ffill()
</code></pre>
<hr />
<pre><code>print(df)
count A
0 yes 2
1 yes 2
2 total 2
3 yes 2
4 yes 2
5 total 2
</code></pre>
|
python|pandas|dataframe|row
| 1 |
1,907,202 | 66,586,856 |
How can I make my project available in the poetry environment?
|
<p>I want to be able to run/import my packages during development, using poetry as my dependency and environment management tool. I simply cannot figure out how to do this in poetry (without manipulating sys.path in every interpreter)</p>
<p>The <a href="https://python-poetry.org/docs/basic-usage/#installing-dependencies-only" rel="noreferrer">poetry documentation</a> seems to indicate that this should be done by default:</p>
<blockquote>
<p>The current project is installed in editable mode by default.</p>
</blockquote>
<p>But I have tried this with multiple projects, and the current project is never able to be imported from the interpreter in the virtual env. It always fails with a <code>ModuleNotFoundError</code>. I also cannot see how or where this installation is supposed to happen.</p>
<p>The docs also describe adding path dependencies in editable mode:</p>
<pre><code>[tool.poetry.dependencies]
my-package = {path = "../my/path", develop = true}
</code></pre>
<p>but this always fails with "can't open file" or "Directory does not seem to be a python package". The directory has <code>__init__.py</code>, and I am using the default poetry setup with a src directory.</p>
|
<p>Two things are needed so that the package under development is available in the venv.</p>
<p>First, poetry must be able to find the package folder. poetry is able to do this by default, if the folder containing the package data is located in the same folder as the <code>pyproject.toml</code> or is a subfolder of a folder called <code>src</code>. The name of the folder containing the package data must be the same as defined under <code>name</code> in the <code>[tool.poetry]</code> section of the <code>pyproject.toml</code>. In case <code>name</code> contains <code>.</code> or <code>-</code> these characters must be replaced by <code>_</code> for the package folder.</p>
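<p>For example, with <code>name = "my-package"</code> in the <code>[tool.poetry]</code> section, either of these layouts (sketched here as directory trees) is picked up without any extra configuration:</p>
<pre><code>pyproject.toml
my_package/
    __init__.py

# or, with a src layout:

pyproject.toml
src/
    my_package/
        __init__.py
</code></pre>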
<p>If you are using a different schema, you have to tell poetry via <code>packages</code> in your <code>pyproject.toml</code> where it can find the packages, e.g.:</p>
<pre><code>packages = [
{ include = "my_package", from = "lib" },
]
</code></pre>
<p>See the <a href="https://python-poetry.org/docs/pyproject/#packages" rel="noreferrer">docs</a> for more examples.</p>
<p>Second, run <code>poetry install</code>. This will install all dependencies, and install the project's package in editable mode in a virtual environment as well.</p>
<p>Don't forget to activate the venv when you start working. This can be done via <code>poetry shell</code>, or by running your script via <code>poetry run python my_script.py</code>.</p>
|
python-poetry
| 11 |
1,907,203 | 64,912,022 |
403 Forbidden message when trying to use Beautifulsoup
|
<p>This has been mentioned here before, and I tried passing a fake User-Agent parameter, but to no avail. Can you please help?</p>
<pre><code>import requests
from bs4 import BeautifulSoup

headers = requests.utils.default_headers()
headers.update({
    'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0',
})

page = requests.get('https://ingatlan.com/lista/elado+lakas')
soup = BeautifulSoup(page.content, 'html.parser')
print(soup.prettify())
</code></pre>
<p>The error message is the following:</p>
<pre><code>$ python hello.py
<html>
<head>
<title>
403 Forbidden
</title>
</head>
<body>
<center>
<h1>
403 Forbidden
</h1>
</center>
<hr/>
<center>
nginx
</center>
</body>
</html>
</code></pre>
|
<p>This shows the response:</p>
<pre><code>import requests
from bs4 import BeautifulSoup

headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0',
}


def main(url):
    with requests.Session() as req:
        req.headers.update(headers)
        r = req.get(url).text
        soup = BeautifulSoup(r, 'lxml')
        print(soup.prettify())


url = 'https://ingatlan.com/lista/elado+lakas'
main(url)
</code></pre>
|
python|beautifulsoup
| 1 |
1,907,204 | 64,712,464 |
Is there a Python method to solve this list index error?
|
<blockquote>
<p>I have an error that says: list index out of range?</p>
</blockquote>
<pre><code>equal_score = []
for i, j in enumerate(new_gd):
    if i > len(new_gd):
        break
    if new_gd[i]['score'] == new_gd[i+1]['score']:
        equal_score.append(new_gd[i])
        equal_score.append(new_gd[i+1])
</code></pre>
|
<p>Since you refer to the index <code>i+1</code> you should do <code>if i+1 >= len(new_gd): break</code> so you make sure <code>i+1</code> exists.</p>
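<p>A minimal sketch of the loop with that fix applied (assuming <code>new_gd</code> is a list of dicts with a <code>score</code> key, as in the question):</p>
<pre><code>equal_score = []
for i, j in enumerate(new_gd):
    if i + 1 >= len(new_gd):
        break
    if new_gd[i]['score'] == new_gd[i + 1]['score']:
        equal_score.append(new_gd[i])
        equal_score.append(new_gd[i + 1])
</code></pre>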
|
python
| 1 |
1,907,205 | 63,901,816 |
sql max() over (partition by) in pandas
|
<p>I'm trying to get the same output as SQL's max() over (partition by), but in pandas.
The goal is to replace did_renew == No with Yes, but under specific conditions and for a group of the dataframe.
This is my dataframe:</p>
<pre><code> date_id sf_id renewal_date is_up_for_renewal did_renew datediff
168 2020-09-01 0010O00001n1s1rQAA 2020-09-30 Yes Undetermined NaN
169 2020-08-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0
170 2020-07-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0
171 2020-06-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0
172 2020-05-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0
173 2020-04-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0
174 2020-03-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0
175 2020-02-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0
176 2020-01-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0
177 2019-12-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0
178 2019-11-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0
179 2019-10-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0
180 2019-08-01 0010O00001n1s1rQAA 2019-08-31 Yes No 2.0
181 2019-07-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0
182 2019-06-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0
183 2019-05-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0
184 2019-04-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0
185 2019-03-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0
186 2019-02-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0
187 2019-01-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0
188 2018-12-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0
189 2018-11-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0
190 2018-10-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0
191 2018-09-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0
192 2018-08-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0
</code></pre>
<p>In SQL I would write: <code>case when datediff = 2 then max('Yes') over (partition by sf_id, renewal_date) end</code>.
That would create a new column with values only for rows 180-192 (note that the renewal date is different for rows 168-179 and 180-192).
This is how the results should look in column <code>target</code>:</p>
<pre><code> date_id sf_id renewal_date is_up_for_renewal did_renew datediff target
168 2020-09-01 0010O00001n1s1rQAA 2020-09-30 Yes Undetermined NaN Undetermined
169 2020-08-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
170 2020-07-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
171 2020-06-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
172 2020-05-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
173 2020-04-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
174 2020-03-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
175 2020-02-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
176 2020-01-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
177 2019-12-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
178 2019-11-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
179 2019-10-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
180 2019-08-01 0010O00001n1s1rQAA 2019-08-31 Yes No 2.0 Yes
181 2019-07-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
182 2019-06-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
183 2019-05-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
184 2019-04-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
185 2019-03-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
186 2019-02-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
187 2019-01-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
188 2018-12-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
189 2018-11-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
190 2018-10-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
191 2018-09-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
192 2018-08-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
</code></pre>
<p>The full dataframe would include many groups of sf_id's, so I know I need to use the groupby method on <code>[sf_id, renewal_date]</code>, but I'm not sure how to accomplish this.
Thanks in advance!</p>
|
<p>IIUC,</p>
<pre><code>df['target'] = (df.assign(target=df['datediff']==2))\
.groupby(['sf_id', 'renewal_date'])['target']\
.transform('max').map({True:'Yes',False:'Undetermined'})
</code></pre>
<p>Output:</p>
<pre><code> date_id sf_id renewal_date is_up_for_renewal did_renew datediff target
168 2020-09-01 0010O00001n1s1rQAA 2020-09-30 Yes Undetermined NaN Undetermined
169 2020-08-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
170 2020-07-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
171 2020-06-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
172 2020-05-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
173 2020-04-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
174 2020-03-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
175 2020-02-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
176 2020-01-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
177 2019-12-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
178 2019-11-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
179 2019-10-01 0010O00001n1s1rQAA 2020-09-30 No Undetermined 1.0 Undetermined
180 2019-08-01 0010O00001n1s1rQAA 2019-08-31 Yes No 2.0 Yes
181 2019-07-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
182 2019-06-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
183 2019-05-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
184 2019-04-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
185 2019-03-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
186 2019-02-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
187 2019-01-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
188 2018-12-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
189 2018-11-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
190 2018-10-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
191 2018-09-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
192 2018-08-01 0010O00001n1s1rQAA 2019-08-31 No No 1.0 Yes
</code></pre>
<p>Details: much like your <em>case when</em>, I am creating/<code>assign</code>ing a temporary column 'target' that is True when datediff equals 2. Then, just like your <em>partition by</em>, I <code>groupby</code> 'sf_id' and 'renewal_date'. Next, we use <code>transform</code> to get the max 'target' for each group, hence creating True for all records in a group where any record has datediff equal to 2. Lastly, we use <code>map</code> to change True to Yes and False to Undetermined.</p>
|
python-3.x|pandas
| 2 |
1,907,206 | 68,641,276 |
How to set event handler as passive in Dash plotly to avoid data points being skipped
|
<p>I am using Dash (Plotly) with Python to plot line graphs. I am using the extendData property of the graph to update the traces and avoid redrawing the graph. I am facing a problem where the graphs skip a few data points. I am also getting some console warnings, which are as follows:</p>
<pre><code>async-plotlyjs.v1_16_0m1617903285.js:2 [Violation] Added non-passive event listener to a scroll-blocking 'touchstart' event. Consider marking event handler as 'passive' to make the page more responsive. See https://www.chromestatus.com/feature/5745543795965952
</code></pre>
<pre><code>async-plotlyjs.v1_16_0m1617903285.js:2 [Violation] Added non-passive event listener to a scroll-blocking 'wheel' event. Consider marking event handler as 'passive' to make the page more responsive. See https://www.chromestatus.com/feature/5745543795965952
</code></pre>
<p>I think the reason for data points being skipped is this warning, as it may block the callback from being called. More data points are skipped when I move from the graph tab to another tab (let's say I move from the graph tab to the StackOverflow tab). How do I prevent this from happening? Or, if there is another way to do this, please feel free to post it in the comments.</p>
<p>These are the screenshots where it skipped one data point.</p>
<p><a href="https://i.stack.imgur.com/OHrUf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OHrUf.png" alt="data point 99" /></a></p>
<p><a href="https://i.stack.imgur.com/E3ArN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E3ArN.png" alt="skipped data point 100" /></a></p>
<p><a href="https://i.stack.imgur.com/tavXK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tavXK.png" alt="data point 101 " /></a></p>
<p>FYI, I am using
Google Chrome Version 92.0.4515.107 (Official Build) (64-bit),
but the problem persists on other browsers as well.</p>
<p>The code is below:</p>
<pre><code>import random
import webbrowser
import dash
import dash_bootstrap_components as dbc
import numpy as np
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Output, Input, State
app = dash.Dash(__name__,suppress_callback_exceptions=True)
app.layout = html.Div([
    dbc.Row([
        dbc.Col([
            dcc.Graph(
                id='graph-voltage',
                figure={
                    'layout': {
                        'title': 'Voltage',
                        'xaxis': {
                            'title': 'Time'
                        },
                        'yaxis': {
                            'title': 'Voltage in V',
                            'range': [0, 5],
                        }
                    },
                    'data': [{'name': 'Cell 01', 'type': 'line', 'x': [], 'y': []},
                             {'name': 'Cell 02', 'type': 'line', 'x': [], 'y': []},
                             {'name': 'Cell 03', 'type': 'line', 'x': [], 'y': []},
                             {'name': 'Cell 04', 'type': 'line', 'x': [], 'y': []},
                             {'name': 'Cell 05', 'type': 'line', 'x': [], 'y': []},
                             {'name': 'Cell 06', 'type': 'line', 'x': [], 'y': []},
                             {'name': 'Cell 07', 'type': 'line', 'x': [], 'y': []},
                             {'name': 'Cell 08', 'type': 'line', 'x': [], 'y': []},
                             {'name': 'Cell 09', 'type': 'line', 'x': [], 'y': []},
                             {'name': 'Cell 10', 'type': 'line', 'x': [], 'y': []},
                             {'name': 'Cell 11', 'type': 'line', 'x': [], 'y': []},
                             {'name': 'Cell 12', 'type': 'line', 'x': [], 'y': []},
                             {'name': 'Cell 13', 'type': 'line', 'x': [], 'y': []},
                             {'name': 'Cell 14', 'type': 'line', 'x': [], 'y': []},
                             {'name': 'Cell 15', 'type': 'line', 'x': [], 'y': []},
                             {'name': 'Cell 16', 'type': 'line', 'x': [], 'y': []},
                             ]
                }
            ),
        ], ),
    ]),
    dcc.Interval(
        id='interval-graph-update',
        interval=0.5 * 1000,
        n_intervals=0),
])


@app.callback(Output('graph-voltage', 'extendData'),
              [Input('interval-graph-update', 'n_intervals')])
def extend_single_trace(n_intervals):
    CVT_CELL1 = np.array([])
    CVT_CELL2 = np.array([])
    CVT_CELL3 = np.array([])
    CVT_CELL4 = np.array([])
    CVT_CELL5 = np.array([])
    CVT_CELL6 = np.array([])
    CVT_CELL7 = np.array([])
    CVT_CELL8 = np.array([])
    CVT_CELL9 = np.array([])
    CVT_CELL10 = np.array([])
    CVT_CELL11 = np.array([])
    CVT_CELL12 = np.array([])
    CVT_CELL13 = np.array([])
    CVT_CELL14 = np.array([])
    CVT_CELL15 = np.array([])
    CVT_CELL16 = np.array([])
    CVT_TIME_STAMP = np.array([])
    CVT_CELL1 = np.append(CVT_CELL1, random.randint(0, 5))
    CVT_CELL2 = np.append(CVT_CELL2, random.randint(0, 5))
    CVT_CELL3 = np.append(CVT_CELL3, random.randint(0, 5))
    CVT_CELL4 = np.append(CVT_CELL4, random.randint(0, 5))
    CVT_CELL5 = np.append(CVT_CELL5, random.randint(0, 5))
    CVT_CELL6 = np.append(CVT_CELL6, random.randint(0, 5))
    CVT_CELL7 = np.append(CVT_CELL7, random.randint(0, 5))
    CVT_CELL8 = np.append(CVT_CELL8, random.randint(0, 5))
    CVT_CELL9 = np.append(CVT_CELL9, random.randint(0, 5))
    CVT_CELL10 = np.append(CVT_CELL10, random.randint(0, 5))
    CVT_CELL11 = np.append(CVT_CELL11, random.randint(0, 5))
    CVT_CELL12 = np.append(CVT_CELL12, random.randint(0, 5))
    CVT_CELL13 = np.append(CVT_CELL13, random.randint(0, 5))
    CVT_CELL14 = np.append(CVT_CELL14, random.randint(0, 5))
    CVT_CELL15 = np.append(CVT_CELL15, random.randint(0, 5))
    CVT_CELL16 = np.append(CVT_CELL16, random.randint(0, 5))
    CVT_TIME_STAMP = np.append(CVT_TIME_STAMP, n_intervals)
    return (dict(x=[CVT_TIME_STAMP, CVT_TIME_STAMP, CVT_TIME_STAMP, CVT_TIME_STAMP, CVT_TIME_STAMP, CVT_TIME_STAMP,
                    CVT_TIME_STAMP, CVT_TIME_STAMP, CVT_TIME_STAMP, CVT_TIME_STAMP, CVT_TIME_STAMP, CVT_TIME_STAMP,
                    CVT_TIME_STAMP, CVT_TIME_STAMP, CVT_TIME_STAMP, CVT_TIME_STAMP, ],
                 y=[CVT_CELL1, CVT_CELL2, CVT_CELL3, CVT_CELL4, CVT_CELL5, CVT_CELL6, CVT_CELL7, CVT_CELL8, CVT_CELL9,
                    CVT_CELL10, CVT_CELL11, CVT_CELL12, CVT_CELL13, CVT_CELL14, CVT_CELL15, CVT_CELL16],
                 )
            )


if __name__ == "__main__":
    webbrowser.open('http://127.0.0.1:5050/')
    app.run_server(port=5050, debug=True, use_reloader=False)
</code></pre>
<p>Any help will be appreciated,
Thanks in advance.</p>
|
<p>The reason the Dash graph skips data points is that it cannot handle that amount of data at the specified update rate of the page. The workaround is to either reduce the amount of data or lengthen the update interval.
I hope this is helpful.</p>
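<p>As an illustration of the second option: in the code above, the update rate comes from the <code>dcc.Interval</code> component, so slowing it down is a one-line change (the 2 seconds here is an arbitrary example value):</p>
<pre><code>dcc.Interval(
    id='interval-graph-update',
    interval=2 * 1000,  # update every 2 s instead of every 0.5 s
    n_intervals=0),
</code></pre>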
|
python|plotly|plotly-dash|plotly-python|plotly.js
| 0 |
1,907,207 | 68,530,363 |
OpenTelemetry Python - How to instanciate a new span as a child span for a given trace_id
|
<p>My goal is to perform tracing of the whole process of my application through several components. I am using GCP and a Pub/Sub message queue to communicate information between components (developed in Python).</p>
<p>I am currently trying to keep the same root trace between component A and component B by creating a new span as a child span of my root trace.</p>
<p>Here is a small diagram:</p>
<pre><code>Component A ---> Pub/Sub message ---> component B
(create the root trace) (contain information) (new span for root trace)
</code></pre>
<p>I have a given <code>trace_id</code> and <code>span_id</code> of my parent that I can transmit through Pub/Sub, but I can't figure out how to declare a new span as a child of it. All I managed to do is link a new trace to the parent one, but that is not the behavior I am looking for.</p>
<p>Has someone already tried to do something like that?</p>
<p>Regards,</p>
|
<p>It's called trace context propagation, and there are multiple formats, such as W3C Trace Context, Jaeger, B3, etc.: <a href="https://github.com/open-telemetry/opentelemetry-specification/blob/b46bcab5fb709381f1fd52096a19541370c7d1b3/specification/context/api-propagators.md#propagators-distribution" rel="noreferrer">https://github.com/open-telemetry/opentelemetry-specification/blob/b46bcab5fb709381f1fd52096a19541370c7d1b3/specification/context/api-propagators.md#propagators-distribution</a>. You will have to use one of the propagator's inject/extract methods for this. Here is a simple example using the W3CTraceContext propagator.</p>
<pre class="lang-py prettyprint-override"><code>from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (BatchSpanProcessor,
ConsoleSpanExporter)
from opentelemetry.trace.propagation.tracecontext import \
TraceContextTextMapPropagator
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
tracer = trace.get_tracer(__name__)
prop = TraceContextTextMapPropagator()
carrier = {}
# Injecting the context into carrier and send it over
with tracer.start_as_current_span("first-span") as span:
prop.inject(carrier=carrier)
print("Carrier after injecting span context", carrier)
# Extracting the remote context from carrier and starting a new span under same trace.
ctx = prop.extract(carrier=carrier)
with tracer.start_as_current_span("next-span", context=ctx):
pass
</code></pre>
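<p>In the setup from the question, the <code>carrier</code> dict is what component A would serialize into the Pub/Sub message attributes, and component B would pass the received attributes to <code>extract</code> before starting its child span.</p>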
|
python|google-cloud-platform|open-telemetry|google-cloud-trace
| 10 |
1,907,208 | 10,521,236 |
How can I implement testable, maintainable real-time logic?
|
<p><strong>assumption 1</strong>: you have a suite of modules (very maintainable, with tests) for real-time monitoring. They all run very quickly but are executed repeatedly. They are all required to return a boolean flag, but may also return other data. For example, the CheckParrot module would return if a parrot is observed to be dead, or not. The SeekMorlocks module would return true if it found any, but additionally the number, heading, and distance.</p>
<p><strong>assumption 2</strong>: your applications will tie these modules together using some sort of custom algorithm, which might include state variables. Examples include RTS games, trading programs, vehicle monitoring systems, etc. The algorithm can be represented by a truth table, or equivalently, a <a href="http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/Comb/pla.html" rel="nofollow">programmable logic array</a>.</p>
<p><strong>question</strong>: What open-source software is out there to help with implementing a programmable logic array, where the inputs and outputs are executable modules? The goal is to isolate the algorithm (PLA) for independent testing, and to easily plug modules into it. </p>
<p>At the moment I am mostly interested in a Java solution but am also curious about any C++ or Python.</p>
<p>Thanks</p>
|
<p>You may want to take a look at <a href="http://www.jboss.org/drools/drools-expert.html" rel="nofollow">Drools</a></p>
<p>It's a rules engine and a set of tools to create / test rules.</p>
|
java|c++|logic|python-2.7
| 3 |
1,907,209 | 10,344,468 |
SQLAlchemy: how to filter on PgArray column types?
|
<p>In pure postgres we can write:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT * FROM my_table WHERE 10000 = ANY (array_field);
</code></pre>
<p>or</p>
<pre class="lang-sql prettyprint-override"><code>SELECT * FROM my_table WHERE 10000 = ALL (array_field);
</code></pre>
<p>How to do the same with the help of sqlalchemy without raw sql?</p>
|
<p><code>a = ANY(b_array)</code> is equivalent to <code>a</code><strong><code>IN</code></strong><code>(elements_of_b_array)</code><sup>1</sup>.</p>
<p>Therefore you can use the <a href="http://docs.sqlalchemy.org/en/latest/core/expression_api.html#sqlalchemy.sql.operators.ColumnOperators.in_" rel="noreferrer"><code>in_()</code> method</a>.</p>
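<p>A minimal sketch of that (assuming <code>a</code> is a column of <code>my_table</code> and <code>b_array</code> is a plain Python list):</p>
<pre class="lang-py prettyprint-override"><code>s = select([my_table], my_table.c.a.in_(b_array))
</code></pre>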
<p>I can't remember ever having used <code>a = ALL(b_array)</code> in all my years with PostgreSQL. Have you?</p>
<hr>
<p>If you are dealing with an <strong>array column</strong> and want to test whether it contains a given element (or all elements of a given array) in that column, then you can utilize <a href="http://www.postgresql.org/docs/current/interactive/functions-array.html#ARRAY-OPERATORS-TABLE" rel="noreferrer">PostgreSQL array operators</a> <code>@></code> (<code>contains</code>) or more appropriately the inverse sibling <strong><code><@</code></strong> (<code>is contained by</code>).</p>
<p>Array operators carry the advantage that they can be supported with a <strong>GIN index</strong> on the array column (unlike the <code>ANY</code> construct).</p>
<p>Your SQL statement:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT * FROM my_table WHERE 10000 = ANY (array_field);
</code></pre>
<p>is (almost)<sup>1</sup> equivalent to</p>
<pre class="lang-sql prettyprint-override"><code>SELECT * FROM my_table WHERE 10000 <@ array_field;
</code></pre>
<p>I am no expert with SQLAlchemy, but according to the <a href="http://docs.sqlalchemy.org/en/latest/core/tutorial.html" rel="noreferrer">tutorial in the SQLAlchemy manual</a>, you can use any operator:</p>
<blockquote>
<p>If you have come across an operator which really isn’t available, you
can always use the <strong><code>op()</code></strong> method; this generates whatever
operator you need:</p>
<pre class="lang-py prettyprint-override"><code>>>> print users.c.name.op('tiddlywinks')('foo') users.name tiddlywinks :name_1
</code></pre>
</blockquote>
<p>Bold emphasis mine. Your statement could look like this in SQLA:</p>
<pre class="lang-py prettyprint-override"><code>s = select([my_table], array_field.op('@>')('ARRAY[10000]'))
</code></pre>
<p>Or with alternative input syntax for PostgreSQL array values:</p>
<pre class="lang-py prettyprint-override"><code>s = select([my_table], array_field.op('@>') (cast('{10000}', int[])))
</code></pre>
<hr>
<p><sup>1</sup> There is a subtle difference with NULL handling: </p>
<pre class="lang-sql prettyprint-override"><code>SELECT '{NULL}'::int[] <@ ... -- that's an array with a single NULL element
</code></pre>
<p>always yields <code>FALSE</code>.</p>
<pre class="lang-sql prettyprint-override"><code>SELECT NULL IN (...)
SELECT NULL = ANY (...)
SELECT NULL::int[] <@ ...
</code></pre>
<p>always yield <code>NULL</code>.</p>
<p>If you are not going to query for <code>NULL</code> values, you can ignore this.</p>
|
python|sql|postgresql|sqlalchemy
| 26 |
1,907,210 | 10,734,453 |
Check If Word Is In A String errors
|
<p>Using the following from <a href="https://stackoverflow.com/questions/5319922/python-check-if-word-is-in-a-string">Python - Check If Word Is In A String</a></p>
<pre><code>>>> def findWholeWord(w):
... return re.compile(r'\b({0})\b'.format(w), flags=re.IGNORECASE).search
...
>>> findWholeWord('seek')('those who seek shall find')
<_sre.SRE_Match object at 0x22c1828>
>>> findWholeWord('seek')
<built-in method search of _sre.SRE_Pattern object at 0x22b8190>
>>> findWholeWord('seek')('those who seek shall find')
<_sre.SRE_Match object at 0x22c1828>
>>> findWholeWord('seek')('those who shall find')
</code></pre>
<p>Is this an error, or should this be the result?</p>
<pre><code><_sre.SRE_Match object at 0x22c1828>
</code></pre>
|
<p>This is a funny piece of code, but in Python finding a word in a string is actually much simpler. You don't even need a function for this:</p>
<pre><code>if some_word in some_str.split():
    ....
</code></pre>
<p>To find just substrings rather than whole words, omit the <code>split</code> part:</p>
<pre><code>print 'word' in 'funny swordfish'.split() # False
print 'word' in 'funny swordfish' # True
</code></pre>
|
python
| 3 |
1,907,211 | 5,472,501 |
SQLAlchemy - Combine Textual query with a filter
|
<p>I'm using <strong>SA 0.6.6</strong>, <strong>Python 2.6.6</strong> and <strong>Postgres 8.3</strong>. </p>
<p>I have certain queries which require somewhat complex security check that can be handled with a <code>WITH RECURSIVE</code> query. What I'm trying to do is combine a textual query with a query object so I can apply filters as necessary.</p>
<p>My original thought was was to create my text query as a subquery and then combine that with the user's query and filters. Unfortunately, this isn't working.</p>
<pre><code>subquery = session.query(sharedFilterAlias).\
from_statement(sharedFilterQuery).subquery()
</code></pre>
<p>This results in this error: </p>
<pre><code>AttributeError: 'Annotated_TextClause' object has no attribute 'alias'
</code></pre>
<p>Is there anyway to combine a textual query with SQLAlchemy's query object?</p>
|
<p>After a time going by without an answer <a href="https://groups.google.com/forum/?fromgroups#!topic/sqlalchemy/VAttoxkLlXw" rel="noreferrer">I posted to the SA Google Group</a>, where <a href="http://techspot.zzzeek.org/" rel="noreferrer">Michael Bayer</a> himself set me in the right direction.</p>
<p>The answer is to turn my text query into an SA text clause, then use that with the <code>in_</code> operator. Here's an example of the finished product:</p>
<pre><code>sharedFilterQuery = '''WITH RECURSIVE
q AS
(
SELECT h.*
FROM "Selection"."FilterFolder" h
join "Selection"."Filter" f
on f."filterFolderId" = h.id
WHERE f.id = :filterId
UNION
SELECT hp.*
FROM q
JOIN "Selection"."FilterFolder" hp
ON hp.id = q."parentFolderId"
)
SELECT f.id
FROM "Selection"."Filter" f
where f.id = :filterId and
(f."createdByUserId" = 1 or
exists(select 1 from q where "isShared" = TRUE LIMIT 1))
'''
inClause = text(sharedFilterQuery,bindparams=[bindparam('filterId',filterId)])
f = session.query(Filter)\
.filter(Filter.description == None)\
.filter(Filter.id.in_(inClause)).first()
</code></pre>
|
python|string|text|filter|sqlalchemy
| 7 |
1,907,212 | 62,499,547 |
Add a regression line on the plot with actual data
|
<p>I have the following data in a pandas dataframe:</p>
<pre><code>freq = [10, 2, 1, 10, 6, 4, 1, 1,
6, 3, 4, 10, 6, 3, 9, 5,
5, 5, 4, 2, 2, 9, 11, 7, 5,
1, 3, 10, 7, 5, 5, 5, 8,
7, 25, 17, 9, 6, 7, 8, 4,
10, 3, 1, 7, 11, 6, 5, 10,
11, 8, 11, 15, 4, 6, 11, 6,
10, 10, 10, 4, 5, 7, 15, 15,
10, 12, 17, 25, 26, 22, 14, 15,
15, 7, 9, 8, 6, 1]
date=[737444, 737445, 737446, 737447, 737448,
737449, 737450, 737451, 737452, 737453, 737454, 737455, 737456,
737457, 737458, 737459, 737460, 737461, 737462, 737463, 737464,
737465, 737466, 737467, 737468, 737469, 737470, 737472, 737473,
737474, 737475, 737476, 737477, 737478, 737479, 737480, 737481,
737482, 737483, 737484, 737485, 737486, 737487, 737488, 737489,
737490, 737491, 737492, 737493, 737494, 737495, 737496, 737497,
737498, 737499, 737500, 737501, 737502, 737503, 737504, 737505,
737506, 737507, 737508, 737509, 737510, 737511, 737512, 737513,
737514, 737515, 737516, 737517, 737518, 737519, 737520, 737521,
737522, 737523]
</code></pre>
<p>I have calculated the coefficient and intercept for regression as follows:</p>
<pre><code>from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

y = np.asarray(df['Frequency'])
X = df[['Date']]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
model = LinearRegression()
model.fit(X_train, y_train)
model.score(X_train, y_train)
coefs = zip(model.coef_, X.columns)
model.__dict__
</code></pre>
<p>Getting the following results:</p>
<pre><code> Coefficient:
[0.08711929]
Intercept:
-64241.58584385233
sl = -64241.6 + 0.1 Date
</code></pre>
<p>I would like to plot this line on top of the plot that shows the trend of the actual data.
How can I do this?</p>
|
<pre><code>import matplotlib.pyplot as plt
datemin = min(date)
datemax = max(date)
x_new = np.linspace(datemin, datemax , 100)
y_new = model.predict(x_new[:, np.newaxis])
plt.figure(figsize=(4, 3))
ax = plt.axes()
ax.scatter(date, freq)
ax.plot(x_new, y_new)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.axis('tight')
plt.show()
</code></pre>
<p>Check a detailed explanation <a href="https://scipy-lectures.org/packages/scikit-learn/auto_examples/plot_linear_regression.html" rel="nofollow noreferrer">here</a></p>
|
python|pandas|matplotlib|machine-learning|scikit-learn
| 1 |
1,907,213 | 60,586,279 |
Tensorflow `map_fn` takes long time to execute
|
<p>Given tensors <code>a</code> of shape <code>(n, f)</code> and <code>b</code> of shape <code>(m, f)</code>, I have created a function to calculate euclidean distances between these two tensors</p>
<pre><code>import tensorflow as tf
nr = tf.reduce_sum(tf.square(a), 1)
nw = tf.reduce_sum(tf.square(b), 1)
nr = tf.reshape(nr, [-1, 1])
nw = tf.reshape(nw, [1, -1])
res = nr - 2*tf.matmul(a, b, False, True) + nw
res = tf.argmin(res, axis=1)
</code></pre>
<p>So far so good, and the code runs fairly fast (I got better performance with <code>cKDTree</code> when <code>n=1000, m=1600, f=4</code>, but this is not the issue now). I will check the performance against different input sizes later.</p>
<p>In this example the <code>b</code> tensor is a rank-2, flattened version of a rank-3 tensor. I do that to be able to evaluate the euclidean distances using two tensors of the same rank (which is simpler). But after evaluating the distances, I need to know where each of the nearest elements is in the original tensor. For that I have created the custom lambda function <code>fn</code> to convert back to the rank-3 tensor coordinates.</p>
<pre><code>fn = lambda x: (x//N, x%N)
# This map takes a enormous amount of time
out = tf.map_fn(fn, res, dtype=(tf.int64, tf.int64))
return tf.stack(out, axis=1)
</code></pre>
<p>But sadly this <code>tf.map_fn</code> takes a <strong>HUGE</strong> amount of time to run, around <strong>300 ms</strong>.</p>
<p>Just for comparison, if I perform an <code>np.apply_along_axis</code> on exactly the same data (but as a numpy array), the footprint is barely noticeable: around 50 microseconds vs. the 300 ms of the tensorflow equivalent.</p>
<p>Are there better approaches in tensorflow for this <code>mapping</code>?</p>
<p>TF version 2.1.0 and CUDA is enabled and working.</p>
<p>Just to add some timings</p>
<pre><code>%timeit eucl_dist_tf_vecmap(R_tf, W_tf)
28.1 ms ± 128 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit eucl_dist_tf_nomap(R_tf, W_tf)
2.07 ms ± 122 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit eucl_dist_ckdtree_applyaxis(R, W)
878 µs ± 2.34 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit eucl_dist_ckdtree_noapplyaxis(R, W)
817 µs ± 51 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
</code></pre>
<p>The first two timings use the custom function shown here, the first one with <code>vectorized_map</code> and the second one without <code>vectorized_map</code> and the <code>stack</code> (the overhead is in <code>vectorized_map</code>; tested).</p>
<p>The last two timings are implementations based on scipy's <code>cKDTree</code>. The first one uses <code>np.apply_along_axis</code> exactly as used in the vectorized map. We can see that the overhead is much smaller with the numpy array.</p>
|
<p>You could try tf.vectorized_map. <a href="https://www.tensorflow.org/api_docs/python/tf/vectorized_map" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/vectorized_map</a></p>
<p>If you need to change the data type, you can try changing the <code>parallel_iterations</code> value in the <code>map_fn</code> params; it is set to 1 by default in eager mode.</p>
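<p>A minimal sketch of both suggestions, reusing the lambda from the question (<code>N</code> is the same constant as there; how much each helps will depend on your shapes):</p>
<pre><code>fn = lambda x: (x // N, x % N)

# vectorized_map traces fn once and vectorizes it, instead of looping per element
out = tf.vectorized_map(fn, res)

# or keep map_fn but allow more iterations to run in parallel
out = tf.map_fn(fn, res, dtype=(tf.int64, tf.int64), parallel_iterations=32)
</code></pre>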
|
python|tensorflow|mapping|tensorflow2.0
| 1 |
1,907,214 | 70,354,036 |
How to Create Multiple Data Frames in For Loop
|
<p>I have a list of names (I changed them to letters) that correspond with values in a data frame column.</p>
<pre><code>names = ['A','B','C','D','E','F','G','H']
</code></pre>
<p>I am trying to create a separate data frame for each name containing that name's associated QTY grouped by part number.</p>
<pre><code>for x in names:
    name_data = data[data['Name'] == x]
    name_dict = name_data.groupby(['Part No.']).Quantity.sum().to_dict()
    df1 = pd.DataFrame.from_dict(name_dict, orient='index', columns=['QTY'])
</code></pre>
<p>As you can see from the code, each time it loops it writes the new loop's data over the previous loop's data in df1. How do I create a new data frame name on each iteration, so that I end up with 8 separate data frames?</p>
|
<p>You could save the dataframes to a <code>list</code>:</p>
<pre><code>list_of_dfs = list()
for x in names:
    df = data[data['Name'].eq(x)].groupby('Part No.')['Quantity'].sum().rename("QTY").to_frame()
    list_of_dfs.append(df)
</code></pre>
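<p>If you want to look the frames up by name instead of position, a dict keyed by name works the same way (a sketch, under the same assumptions about <code>data</code> as above):</p>
<pre><code>dfs_by_name = {}
for x in names:
    dfs_by_name[x] = data[data['Name'].eq(x)].groupby('Part No.')['Quantity'].sum().rename("QTY").to_frame()

# e.g. dfs_by_name['A'] is the QTY frame for name 'A'
</code></pre>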
|
python|dataframe|for-loop
| 1 |
1,907,215 | 70,237,667 |
How to loop over the tickers and save the results in the dictionary
|
<p>I'm not so sure how to do the last part, which is the dictionary part and the ticker part. Also, on</p>
<pre><code>file = open("/home/ubuntu/environment/hw5/" + tickers + ".txt")
</code></pre>
<p>this line keeps showing:</p>
<p>TypeError: must be str, not list</p>
<p>Any suggestions on how to fix this or make the code work?</p>
<p>Here's my code:</p>
<pre><code>import json


def meanReversionStrategy(prices):
    total_profit = 0
    first_buy = None
    buy = 0
    for i in range(len(prices)):
        if i >= 5:
            current_price = prices[i]
            moving_average = (prices[i-1] + prices[i-2] + prices[i-3] + prices[i-4] + prices[i-5]) / 5
            if current_price < moving_average * 0.95 and buy == 0:
                buy = current_price
                print("buy at: ", round(current_price, 2))
                if first_buy is None:
                    first_buy = buy
            elif current_price > moving_average * 1.05 and buy != 0:
                print("sell at: ", round(current_price, 2))
                print("trade profit: ", round(current_price - buy, 2))
                total_profit = current_price - buy
                buy = 0
    final_profit_percentage = (total_profit / first_buy) * 100
    print("First buy: ", round(first_buy, 2))
    print("Total profit: ", round(total_profit, 2))
    print("Percentage return: ", round(final_profit_percentage, 2), "%")


def simpleMovingAverageStrategy(prices):
    i = 0
    buy = 0
    total_profit = 0
    first_buy = 0
    for p in prices:
        if i >= 5:
            moving_average = (prices[i-1] + prices[i-2] + prices[i-3] + prices[i-4] + prices[i-5]) / 5
            # simple moving average logic, not mean
            if p > moving_average and buy == 0:  # buy
                print("buying at: ", p)
                buy = p
                if first_buy == 0:
                    first_buy = p
            elif p < moving_average and buy != 0:  # sell
                print("selling at: ", p)
                print("trade profit: ", p - buy)
                total_profit += p - buy
                buy = 0
        i += 1
    final_percentage = (total_profit / first_buy) * 100
    print("first buy: ", first_buy)
    print("total profit: ", total_profit)
    print("final percentage: ", final_percentage, "%")
    return total_profit, final_percentage


tickers = ["AAPL1", "ADBE", "BA", "CMCSA", "CSCO", "CVS", "GOOG", "TLSYY", "TM"]

file = open("/home/ubuntu/environment/hw5/" + tickers + ".txt")
lines = file.readlines()
# print(lines)

prices = []
for line in lines:
    prices.append(float(line))

profit, returns = simpleMovingAverageStrategy(prices)

results = {}
results["AAPL1_profit"] = profit
results["AAPL1_returns"] = returns

json.dump(results, open("/home/ubuntu/environment/hw5/results.json", "w"))
</code></pre>
<p>Coding Requirements</p>
<p>-Create a function called meanReversionStrategy which takes a list called “prices” as an argument. The function runs a mean reversion strategy, and outputs to the console the buys and sells of the strategy (like you did in HW4). The function returns the profit and final returns percentage.</p>
<p>-Create a function called simpleMovingAverageStrategy which takes a list called “prices” as an argument. The function runs a Simple Moving Average strategy, and outputs to the console the buys and sells of the strategy. The function returns the profit and final returns percentage.</p>
<p>-Create a function called saveResults which takes a dictionary as an argument. Save the dictionary to a json file called “results.json”.</p>
<blockquote>
<p>loop through the list of tickers</p>
<p>for ticker in tickers:</p>
<ul>
<li>load prices from a file &lt;ticker&gt;.txt, and store them in the results dictionary with the key "&lt;ticker&gt;_prices"</li>
<li>call meanReversionStrategy(prices) and store the profit and returns in the results dictionary with the keys "&lt;ticker&gt;_mr_profit" and "&lt;ticker&gt;_mr_returns"</li>
<li>call simpleMovingAverageStrategy(prices) and store the profit and returns in the results dictionary with the keys "&lt;ticker&gt;_sma_profit" and "&lt;ticker&gt;_sma_returns"</li>
</ul>
<p>call saveResults(results) and save the results dictionary to a file called results.json</p>
</blockquote>
|
<p>Welcome!</p>
<p>As this is a homework question, I will not solve it for You, but here is my advice for You:</p>
<p><code>tickers</code> is an array and as your homework description states, You must 'loop through the list of tickers' and <code>open</code> each of them. So how do You loop over <code>tickers</code>?</p>
<p>And reflect about the error. What do You think should be the result of <code>"/home/ubuntu/environment/hw5/" + tickers + ".txt"</code>?</p>
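<p>As a hint: the concatenation only works once You have a single ticker string instead of the whole list, e.g.:</p>
<pre><code>for ticker in tickers:
    path = "/home/ubuntu/environment/hw5/" + ticker + ".txt"  # ticker is a str here, so this works
</code></pre>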
|
python|json|dictionary
| 0 |
1,907,216 | 70,529,696 |
Plot confidence interval of a duration series
|
<p>I measured the duration of 6000 requests.</p>
<p>I now have an array of 6000 elements. Each element represents the duration of a connection request in milliseconds.
<code>[3,2,2,3,4,2,2,4,2,3,3,4,2,4,4,3,3,3,4,3,2,3,5,5,2,4,4,2,2,2,3,5,3,2,2,3,3,3,5,4........]</code></p>
<p>I want to plot the confidence interval in Python and in a clearly arranged manner.</p>
<p>Do you have any Idea how I should plot them?</p>
|
<p>From what I understood, this code should answer your question:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from statistics import NormalDist
X = np.random.sample(100)
data = ((X - min(X)) / (max(X) - min(X))) * 3 + 3
confidence_interval = 0.95
def getCI(data, ci):
    normalDist = NormalDist.from_samples(data)
    z = NormalDist().inv_cdf((1 + ci) / 2.)
    p = normalDist.stdev * z / ((len(data) - 1) ** .5)
    return normalDist.mean, normalDist.mean - p, normalDist.mean + p
avg, lower, upper = getCI(data, confidence_interval)
sns.set_style("whitegrid")
plt.figure(figsize=(8, 4))
sns.histplot(data, bins = 10)
plt.axvspan(lower, upper, facecolor='r', alpha=0.2)
plt.axvline(avg, color = 'b', label = 'Average')
plt.ylabel("Operations")
plt.xlabel("Connection Request Duration (ms)")
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/U5nti.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U5nti.png" alt="enter image description here" /></a></p>
<p>For boxplot:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from statistics import NormalDist
X = np.random.sample(100)
data = ((X - min(X)) / (max(X) - min(X))) * 3 + 3
confidence_interval = 0.95
def getCI(data, ci):
    normalDist = NormalDist.from_samples(data)
    z = NormalDist().inv_cdf((1 + ci) / 2.)
    p = normalDist.stdev * z / ((len(data) - 1) ** .5)
    return normalDist.mean, normalDist.mean - p, normalDist.mean + p
avg, lower, upper = getCI(data, confidence_interval)
sns.set_style("whitegrid")
plt.figure(figsize=(8, 4))
sns.boxplot(data = data, orient = "h")
plt.axvspan(lower, upper, facecolor='r', alpha=0.4)
plt.axvline(avg, color = 'b', label = 'Average')
plt.ylabel("Operations")
plt.xlabel("Connection Request Duration (ms)")
plt.yticks([0],["Server Retry Request Delay"])
plt.savefig("fig.png")
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/d06At.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d06At.png" alt="enter image description here" /></a></p>
<p>For Multiple Plots:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from statistics import NormalDist
X1, X2 = np.random.sample(100), np.random.sample(100)
data1, data2 = ((X1 - min(X1)) / (max(X1) - min(X1))) * 3 + 3, ((X2 - min(X2)) / (max(X2) - min(X2))) * 2 + 3
confidence_interval = 0.95
def getCI(data, ci):
    normalDist = NormalDist.from_samples(data)
    z = NormalDist().inv_cdf((1 + ci) / 2.)
    p = normalDist.stdev * z / ((len(data) - 1) ** .5)
    return normalDist.mean, normalDist.mean - p, normalDist.mean + p
sns.set_style("whitegrid")
avg1, lower1, upper1 = getCI(data1, confidence_interval)
avg2, lower2, upper2 = getCI(data2, confidence_interval)
fig = plt.figure(figsize=(12, 6))
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212, sharex = ax1, sharey = ax1)
sns.boxplot(data = data1, orient = "h", ax = ax1)
ax1.axvspan(lower1, upper1, facecolor='r', alpha=0.4)
ax1.axvline(avg1, color = 'b', label = 'Average')
sns.boxplot(data = data2, orient = "h", ax = ax2)
ax2.axvspan(lower2, upper2, facecolor='r', alpha=0.4)
ax2.axvline(avg2, color = 'b', label = 'Average')
ax2.set_xlabel("Connection Request Duration (ms)")
plt.setp(ax1.get_xticklabels(), visible=False)
plt.setp(ax1.get_yticklabels(), visible=False)
plt.setp(ax2.get_yticklabels(), visible=False)
fig.text(0.08, 0.5, "Operations", va='center', rotation='vertical')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/2AbAl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2AbAl.png" alt="enter image description here" /></a></p>
|
python|pandas|matplotlib|seaborn
| 0 |
1,907,217 | 63,388,572 |
TensorFlow 4-d Tensor
|
<p>Use <code>tf.zeros</code> to initialize a 4-d Tensor of zeros with size 10 x 256 x 256 x 3.</p>
<pre class="lang-py prettyprint-override"><code>images = # TODO
assert isinstance(images, tf.Tensor), "matrix must be a tf Tensor object"
assert tf.rank(images).numpy() == 4, "matrix must be of rank 4"
assert tf.shape(images).numpy().tolist() == [10, 256, 256, 3], "matrix is incorrect shape"
</code></pre>
|
<pre class="lang-py prettyprint-override"><code>images = tf.zeros(shape=[10, 256, 256, 3])
</code></pre>
<p>This is homework, right?</p>
|
tensorflow|tensor
| 2 |
1,907,218 | 17,818,667 |
tkinter create_window erases previous window
|
<p>For some reason, when I go to create_window in my Tkinter canvas, it erases everything that was previously in said window, and jams the window in the top left corner (even though I set it somewhere else).</p>
<pre><code>canvas.create_window(30, height - 40, anchor = NW, width = 40,
window = canvas.data.buildSquareButton)
</code></pre>
<p>precedes </p>
<pre><code>canvas.create_rectangle(0,0,width, 40, fill = "#888888888",
outline = "#888888888")
canvas.create_rectangle(0, height, width, (height-40), fill = "#888888888",
outline = "#888888888")
canvas.create_rectangle(0, 40, width, (height - 40), fill = "#fffffffff",
outline = "#fffffffff")
</code></pre>
<p>and an image.</p>
<p>I put in a 1 second time.sleep after the create_window, and I could see that the button was put in the right place. Then after the time.sleep was over, the button threw itself in the top right corner and the rectangle never appeared. I commented out the window, and the rectangles appeared fine.</p>
<p>Am I doing something wrong when I call the window, or is there a Tkinter glitch?</p>
|
<p>There's not enough information in your question to know for sure. However, my guess is that you are <code>pack</code>ing or <code>grid</code>ing a widget in the canvas, and that's causing the canvas to shrink to fit its contents. Or, you're doing something else to cause the canvas to shrink.</p>
<p>To compound the problem, your canvas probably has the same background color as your main window, so you <em>think</em> the contents are being erased, but in reality you're looking at the widget that the canvas is in rather than the canvas itself.</p>
<p>To help prove or disprove that theory, give your canvas a garish background color, such as a bright red. Then run your code and see what happens to the red part of the screen.</p>
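<p>For example, if the canvas is created with something like this (the widget and size names here are just placeholders), make it red while debugging:</p>
<pre><code>canvas = Canvas(root, width=width, height=height, bg="red")
</code></pre>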
<p>Bottom line: there is no bug in tkinter that would cause the behavior you describe. There is a bug in some code that you aren't showing us. </p>
<p>The best thing is for you to create the smallest possible program that reproduces the problem. The mere act of trying to do that may expose the bug in your code. If you are able to reproduce it in a dozen or two lines of code, update the question and we can probably spot the error. </p>
|
python|window|tkinter
| 0 |
1,907,219 | 60,945,942 |
How to resolve this issue "ValueError: not enough values to unpack (expected 5, got 4)"?
|
<p>How to resolve this issue "ValueError: not enough values to unpack (expected 5, got 4)"?</p>
<pre class="lang-py prettyprint-override"><code>import sklearn
from sklearn.datasets import load_breast_cancer
data = load_breast_cancer()
label_names = data['target_names']
labels = data['target']
feature_names = data['feature_names']
features = data['data']
print(label_names)
print(labels[0])
print(feature_names[0])
print(features[0])
from sklearn.model_selection import train_test_split
</code></pre>
<p>The error occurs on the following line:</p>
<pre class="lang-py prettyprint-override"><code>train, test, train_labels, test_labels, test_labels = train_test_split(features,labels,test_size = 0.40, random_state = 42)
</code></pre>
|
<p>Your function "train_test_split" only returns 4 values. Modify your line like this:</p>
<pre><code>train, test, train_labels, test_labels = train_test_split(features,labels,test_size = 0.40, random_state = 42)
</code></pre>
|
python|valueerror
| 0 |
1,907,220 | 61,072,490 |
How to get the Key Pair value from a json file?
|
<p>data = "[{"id":"abc, "content":"Bye", "child": [{"id":"dsd", "parent id": "abc", "content": "dds"}]}, {"id": xcv, "content": "hello"}]"</p>
<pre><code>with open("data.json", "w") as f:
    json.dump(data, f)

# reads it back
with open("data.json", "r") as f:
    parsed_json = json.load(f)

for e in parsed_json:
    print(e["content"])
</code></pre>
<p>I would like to extract Bye and hello, but I stumbled upon this error. I was wondering how to loop through it.</p>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-2-1aa8088c77a7> in <module>
46
47 for e in parsed_json:
---> 48 print (e["content"])
49
50
TypeError: string indices must be integer
</code></pre>
|
<p>Don't use <code>json.dump</code> to write a string to a file. Use it to write a data structure (list, dictionary, etc.) to a file.</p>
<p>So, don't put the original value for your <code>data</code> variable inside quotes.</p>
<p>Also, you're missing some of the quotes in the data (<code>abc</code> is missing the closing quote, and <code>xcv</code> is missing both quotes).</p>
<pre><code>import json

data = [{"id": "abc", "content": "Bye", "child": [{"id": "dsd", "parent id": "abc", "content": "dds"}]},
        {"id": "xcv", "content": "hello"}]

with open("data.json", "w") as f:
    json.dump(data, f)

# reads it back
with open("data.json", "r") as f:
    parsed_json = json.load(f)

for e in parsed_json:
    print(e["content"])
</code></pre>
|
python|json|dictionary
| 1 |
1,907,221 | 60,895,514 |
Declaration in C considered definition in C++
|
<p>I am working on an open-source C file containing the following declaration</p>
<pre><code>static PyTypeObject Bitarraytype;
</code></pre>
<p>followed later by the definition</p>
<pre><code>static PyTypeObject Bitarraytype = {
/* A bunch of stuff */
};
</code></pre>
<p>I am porting this code to C++ (<code>-std=c++2a</code>), however the above declaration and definition are no longer allowed: the compiler reports <code>error: redefinition of 'Bitarraytype'</code>.</p>
<p>I'm not sure what's causing this, as the first block above is only a declaration from my understanding. Why doesn't this work in C++ and how can I get around it?</p>
|
<p>The declaration you show is actually a <em>tentative definition</em> in C. C++ doesn't have that, so you get a multiple definition error.</p>
<p>The declaration should be marked <code>extern</code> to mark it is as declaration:</p>
<pre><code>extern PyTypeObject Bitarraytype;
</code></pre>
<p>You'll also need to remove the <code>static</code> keyword, as the two are incompatible.</p>
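<p>A minimal sketch of the declaration/definition pair as C++ expects it:</p>
<pre><code>extern PyTypeObject Bitarraytype;   // declaration only

PyTypeObject Bitarraytype = {       // the single definition
    /* A bunch of stuff */
};
</code></pre>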
|
python|c++|c|python-c-api
| 1 |
1,907,222 | 66,121,520 |
How to get all items shown in the visible region of a QTreeWidget?
|
<p>I am making a tree widget where I want to get all the items present in the visible region only (not all items present in the tree widget) while scrolling, like the images below show:</p>
<p><a href="https://i.stack.imgur.com/kek6k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kek6k.png" alt="enter image description here" /></a></p>
<p>In the 1st image, as you see, I want to get all the items present in the visible region. In the second image, I changed the scrollbar position and the items present in the visible region changed accordingly. So I want to get all items in the visible region while scrolling.</p>
<p><a href="https://i.stack.imgur.com/Z2MsD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z2MsD.png" alt="enter image description here" /></a></p>
|
<p>A reasonably efficient way to do this would be to use <a href="https://doc.qt.io/qt-5/qtreewidget.html#itemAt" rel="nofollow noreferrer">indexAt</a> to get the indexes at the top and bottom of the viewport, and then create a range from the row numbers:</p>
<pre><code>def visibleRange(self):
    top = QtCore.QPoint(0, 0)
    bottom = self.tree.viewport().rect().bottomLeft()
    return range(self.tree.indexAt(top).row(),
                 self.tree.indexAt(bottom).row() + 1)
</code></pre>
<p>You can then iterate over that to pull out whatever information you need from each row. Here's a complete demo script:</p>
<pre><code>import sys
from PyQt5 import QtCore, QtWidgets


class Window(QtWidgets.QWidget):
    def __init__(self):
        super().__init__()
        self.button = QtWidgets.QPushButton('Test')
        self.button.clicked.connect(self.handleButton)
        self.tree = QtWidgets.QTreeWidget()
        layout = QtWidgets.QVBoxLayout(self)
        layout.addWidget(self.tree)
        layout.addWidget(self.button)
        columns = 'ABCDE'
        self.tree.setColumnCount(len(columns))
        for index in range(100):
            QtWidgets.QTreeWidgetItem(
                self.tree, [f'{char}{index:02}' for char in columns])

    def visibleRange(self):
        top = QtCore.QPoint(0, 0)
        bottom = self.tree.viewport().rect().bottomLeft()
        return range(self.tree.indexAt(top).row(),
                     self.tree.indexAt(bottom).row() + 1)

    def handleButton(self):
        for row in self.visibleRange():
            item = self.tree.topLevelItem(row)
            print(item.text(0))


if __name__ == '__main__':
    app = QtWidgets.QApplication(sys.argv)
    window = Window()
    window.setWindowTitle('Test')
    window.setGeometry(800, 100, 540, 300)
    window.show()
    sys.exit(app.exec_())
</code></pre>
|
python|pyqt|pyqt5|qtreeview|qtreewidget
| 4 |
1,907,223 | 66,328,006 |
how to index into a numpy array using another array of the same size
|
<p>I have a numpy array <code>a</code> and another one, <code>dex</code>, of type int and the same shape. I want to use <code>dex</code> to index into <code>a</code>. How do I do that?</p>
<pre><code>a = np.arange(10).reshape(2,5)
array([[0, 1, 2, 3, 4],
[5, 6, 7, 8, 9]])
dex = np.zeros((2,5)).astype(np.int)
dex[:,1] =1
array([[0, 1, 0, 0, 0],
[0, 1, 0, 0, 0]])
</code></pre>
<p>I was trying something like this, which didn't work:
<code>a[dex] = 100</code>.
Then <code>print(a)</code> gave:</p>
<pre><code>array([[100, 100, 100, 100, 100],
[100, 100, 100, 100, 100]])
</code></pre>
<p>I actually want the result of <code>print(a)</code> to be:</p>
<pre><code>array([[0, 100, 2, 3, 4],
[5, 100, 7, 8, 9]])
</code></pre>
|
<p>When you give <code>a[dex]</code>, you are referring to all the items. When you give <code>a[dex==1]=100</code>, it checks for a specific value of <code>dex</code> and assigns <code>100</code> only where the condition is met.</p>
<pre><code>a[dex==1]=100
</code></pre>
<p>will give you:</p>
<pre><code>[[ 0 100 2 3 4]
[ 5 100 7 8 9]]
</code></pre>
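<p>For reference, a short sketch contrasting the two kinds of indexing:</p>
<pre><code>import numpy as np
a = np.arange(10).reshape(2, 5)
dex = np.zeros((2, 5), dtype=int)
dex[:, 1] = 1
a[dex == 1] = 100   # boolean mask: assigns only where dex equals 1
print(a)            # [[0 100 2 3 4], [5 100 7 8 9]]
# By contrast, a[dex] is fancy indexing: every entry of dex is read as a
# row index, so a[dex] = 100 overwrites rows 0 and 1 entirely.
</code></pre>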
|
arrays|numpy
| 0 |
1,907,224 | 65,929,210 |
How to save a model with DenseVariational layer?
|
<p>I'm trying to build a model with DenseVariational layer so that it can report epistemic uncertainties. Something like <a href="https://www.tensorflow.org/probability/examples/Probabilistic_Layers_Regression#figure_3_epistemic_uncertainty" rel="nofollow noreferrer">https://www.tensorflow.org/probability/examples/Probabilistic_Layers_Regression#figure_3_epistemic_uncertainty</a></p>
<p>The model training works just fine and now I would like to save the model and load it in a production environment. However, when I tried <code>model.save('path/model.h5')</code>, I got</p>
<pre><code>Layer DenseVariational has arguments in `__init__` and therefore must override `get_config`.
</code></pre>
<p>Then I added</p>
<pre><code>class CustomVariational(tfp.layers.DenseVariational):
def get_config(self):
config = super().get_config().copy()
config.update({
'units': self.units,
'make_posterior_fn': self._make_posterior_fn,
'make_prior_fn': self._make_prior_fn
})
return config
</code></pre>
<p>but it failed with a new error</p>
<pre><code>Unable to create link (name already exists)
</code></pre>
<p>Is DenseVariational layer for research only?</p>
|
<p>I think we can circumvent this problem by using the <code>save_weights</code> method.</p>
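<p>A minimal sketch of that workaround (<code>build_model</code> is a hypothetical function that recreates the exact same architecture, including the DenseVariational layer):</p>
<pre><code># after training: persist only the weights
model.save_weights('model_weights.h5')
# in production: rebuild the identical architecture, then restore the weights
model = build_model()
model.load_weights('model_weights.h5')
</code></pre>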
|
tensorflow|tensorflow-serving|tensorflow-probability|densevariational
| 0 |
1,907,225 | 66,337,608 |
Files or Folder Browse in PySimpleGUI
|
<p>Is there a way to either choose one file in a folder or multiple files in a folder or just a folder (and then process all the files inside it) with PySimpleGUI? So far I've made something like this:</p>
<pre><code>import PySimpleGUI as sg
layout = [[sg.Text("Select files or folder:"), sg.Input(key='-IN1-'), sg.FilesBrowse('Select')],
[sg.Button("Ok"),sg.Button("Cancel")]]
window = sg.Window("Test_window", layout)
...
</code></pre>
<p>But with this code I can only choose one or multiple files in a folder, and cannot choose a folder. I want a way to choose either one file, multiple files or a folder.</p>
|
<p>You can select a folder, but a single Browse button cannot offer both files and a folder. Here is how to select a folder:</p>
<pre><code>left_col = [[sg.Text('Folder'), sg.In(size=(25,1), enable_events=True ,key='-FOLDER-'), sg.FolderBrowse()]]
layout = [[sg.Column(left_col, element_justification='c')]]
window = sg.Window('Multiple Format Image Viewer', layout,resizable=True)
while True:
event, values = window.read()
if event in (sg.WIN_CLOSED, 'Exit'):
break
if event == '-FOLDER-':
folder = values['-FOLDER-']
</code></pre>
<p>This may not be the best suggestion, but you could have two buttons [Files] [Folder] and let the user select one or the other; would that work? A sketch of that idea is below.</p>
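<p>Here is that two-button idea, with both Browse buttons explicitly targeting the same input (the key names are placeholders):</p>
<pre><code>import PySimpleGUI as sg
layout = [[sg.Text("Select files or folder:"), sg.Input(key='-IN-'),
           sg.FilesBrowse('Files', target='-IN-'),
           sg.FolderBrowse('Folder', target='-IN-')],
          [sg.Button("Ok"), sg.Button("Cancel")]]
window = sg.Window("Test_window", layout)
event, values = window.read()
# values['-IN-'] now holds either the chosen file path(s) or the folder path
window.close()
</code></pre>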
|
python|pysimplegui
| 2 |
1,907,226 | 65,948,746 |
Python : Using Decorators to write Logs on a file
|
<p>I'm trying to use python decorators to write logs on a file, it works when I'm using it on one method, but once I'm starting to try using it on multiple methods, things start to get messy.</p>
<p>For instance, if I have 2 log messages for 2 methods A() and B(), with B() called inside A() (one message when calling and one when ending), the output looks like: A1 B1 B2 A2 B1 B2 B1 B2</p>
<p>A1 to A2 is fine but after that B() is called x times (the number of times it is called apparently changes) and I can't figure out why.</p>
<p>Here is my Decorator:</p>
<pre><code>class LogDecorator(object):
state: str
def __init__(self, state):
self.state = state
self.log_file = 'log.txt'
def __call__(self, *function):
if len(function) >= 1:
def wrapper(params=None):
if self.state == 'main':
self.reset_log_file()
function_name = function[0].__name__
self.append_log_to_file('Calling function ' + function_name + '...')
result = function[0]() if params is None else function[0](params)
self.append_log_to_file('Function ' + function_name + ' ended. Returned ' + str(result))
return result
return wrapper
def __get__(self, obj, objtype):
return functools.partial(self.__call__, obj)
def append_log_to_file(self, message: str) -> None:
log_file = open(self.log_file, 'a')
log_file.write(message)
log_file.close()
def reset_log_file(self):
log_file = open(self.log_file, 'w+')
log_file.write('')
log_file.close()
</code></pre>
<p>I use the 'main' state because I'm on an endpoint of an API and I want to reset the file for each API call.</p>
<p>Here is my first class with the main state</p>
<pre><code>class AppService:
@staticmethod
@LogDecorator(state='main')
def endpoint() -> Response:
response: Response = Response()
response.append_message('Getting all tests')
tests: list = TestDAO.get_all()
return response
</code></pre>
<p>Here is my second class</p>
<pre><code>class TestDAO(BaseDAO):
@staticmethod
@LogDecorator(state='sub')
def get_all() -> list:
return db.session.query(Test).all()
</code></pre>
<p>The expected output in this sample would be</p>
<pre><code>Calling function endpoint...
Calling function get_all...
Function get_all ended. Returned [Objects]
Function endpoint ended. Returned {Object}
</code></pre>
<p>but I got</p>
<pre><code>Calling function endpoint...
Calling function get_all...
Function get_all ended. Returned [Objects]
Calling function get_all...
Function get_all ended. Returned [Objects]
Calling function get_all...
Function get_all ended. Returned [Objects]
Function endpoint ended. Returned {Object}
Calling function get_all...
Function get_all ended. Returned [Objects]
Calling function get_all...
Function get_all ended. Returned [Objects]
Calling function get_all...
Function get_all ended. Returned [Objects]
Calling function get_all...
Function get_all ended. Returned [Objects]
Calling function get_all...
Function get_all ended. Returned [Objects]
Calling function get_all...
Function get_all ended. Returned [Objects]
</code></pre>
<p>Could anyone figure out why the decorator is behaving like that ?</p>
<p>Thank you in advance</p>
|
<p>Let's inspect the output of the following example.</p>
<pre><code>def decorator(f):
def g():
print('Hello, G.')
return g
@decorator
def f():
print('Hello, F.')
f()
</code></pre>
<p>It will print</p>
<pre><code>Hello, G.
</code></pre>
<p>The decorator did not decorate <code>f</code> at all; instead, without touching <code>f</code>, it returned a completely new method (defined as <code>g</code>). A decorator can return an anonymous method as well.</p>
<pre><code>def decorator(f):
return lambda : print('Hello, G.')
</code></pre>
<p>What a decorator does is this: it takes a method (with arguments if necessary) and defines a new method, usually built around the given method (that is the decoration). Then it returns the newly defined method under the same name as the given function. The following abstraction may help.</p>
<pre><code>@decorator
def f():
print('Hello, F.')
vvvvvvvvvvvvvvvvvvvvvv
def f(): # the name is not changed
#def g(): as if anonymous function
print('Hello, G.')
</code></pre>
<p>So it looks like a <em>decorated</em> <code>f</code>; however, it is a <em>new</em> method that merely carries the name <code>f</code>. When you call <code>TestDAO.get_all</code> from <code>AppService</code>, you are calling the already <strong>decorated</strong> <code>TestDAO.get_all</code>. For reference, a single-wrap alternative is sketched below.</p>
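<p>This is a minimal function-based logging decorator (a sketch, not your class) that uses <code>functools.wraps</code> and wraps each function exactly once:</p>
<pre><code>import functools
def log_calls(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f'Calling function {func.__name__}...')
        result = func(*args, **kwargs)
        print(f'Function {func.__name__} ended. Returned {result}')
        return result
    return wrapper
@log_calls
def get_all():
    return [1, 2, 3]
get_all()  # logs exactly one Calling/ended pair
</code></pre>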
|
python|python-3.x|python-decorators
| 2 |
1,907,227 | 69,039,324 |
Head Pose Estimation Using Facial Landmarks
|
<p>I want to implement a vector starting from my nose and pointing in the direction I'm looking. The problem is that the few examples I have found that use facial landmarks without dlib are all broken. I don't want to use dlib because it will not install on this machine and I don't have time to troubleshoot it any longer. All landmarks are accurate, so the problem has to lie elsewhere.</p>
<p><a href="https://www.youtube.com/watch?v=-w4o55aF1tA" rel="nofollow noreferrer">This </a>is what I'm shooting for.</p>
<p>The code I have written is here. The vector line is off significantly.</p>
<pre><code>import cv2  # needed below (was missing from the snippet)
import numpy as np
import mediapipe as mp
def x_element(elem):
return elem[0]
def y_element(elem):
return elem[1]
cap = cv2.VideoCapture(0)
pTime = 0
faceXY = []
mpDraw = mp.solutions.drawing_utils
mpFaceMesh = mp.solutions.face_mesh
faceMesh = mpFaceMesh.FaceMesh(max_num_faces=5, min_detection_confidence=.9, min_tracking_confidence=.01)
drawSpec = mpDraw.DrawingSpec(0,1,1)
success, img = cap.read()
height, width = img.shape[:2]
size = img.shape
# 3D model points.
face3Dmodel = np.array([
(0.0, 0.0, 0.0), # Nose tip
(0.0, -330.0, -65.0), # Chin
(-225.0, 170.0, -135.0), # Left eye left corner
(225.0, 170.0, -135.0), # Right eye right corne
(-150.0, -150.0, -125.0), # Left Mouth corner
(150.0, -150.0, -125.0) # Right mouth corner
],dtype=np.float64)
dist_coeffs = np.zeros((4, 1)) # Assuming no lens distortion
focal_length = size[1]
center = (size[1] / 2, size[0] / 2)
camera_matrix = np.array(
[[focal_length, 0, center[0]],
[0, focal_length, center[1]],
[0, 0, 1]], dtype="double"
)
while True:
success, img = cap.read()
imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
results = faceMesh.process(imgRGB)
if results.multi_face_landmarks: # if faces found
dist=[]
for faceNum, faceLms in enumerate(results.multi_face_landmarks): # loop through all matches
mpDraw.draw_landmarks(img, faceLms, landmark_drawing_spec=drawSpec) # draw every match
faceXY = []
for id,lm in enumerate(faceLms.landmark): # loop over all land marks of one face
ih, iw, _ = img.shape
x,y = int(lm.x*iw), int(lm.y*ih)
# print(lm)
faceXY.append((x, y)) # put all xy points in neat array
image_points = np.array([
faceXY[1],
faceXY[175],
faceXY[446],
faceXY[226],
faceXY[57],
faceXY[287]
], dtype="double")
for i in image_points:
cv2.circle(img,(int(i[0]),int(i[1])),4,(255,0,0),-1)
maxXY = max(faceXY, key=x_element)[0], max(faceXY, key=y_element)[1]
minXY = min(faceXY, key=x_element)[0], min(faceXY, key=y_element)[1]
xcenter = (maxXY[0] + minXY[0]) / 2
ycenter = (maxXY[1] + minXY[1]) / 2
dist.append((faceNum, (int(((xcenter-width/2)**2+(ycenter-height/2)**2)**.4)), maxXY, minXY)) # faceID, distance, maxXY, minXY
print(image_points)
(success, rotation_vector, translation_vector) = cv2.solvePnP(face3Dmodel, image_points, camera_matrix, dist_coeffs)
(nose_end_point2D, jacobian) = cv2.projectPoints(np.array([(0.0, 0.0, 1000.0)]), rotation_vector, translation_vector, camera_matrix, dist_coeffs)
p1 = (int(image_points[0][0]), int(image_points[0][1]))
p2 = (int(nose_end_point2D[0][0][0]), int(nose_end_point2D[0][0][1]))
cv2.line(img, p1, p2, (255, 0, 0), 2)
dist.sort(key=y_element)
# print(dist)
for i,faceLms in enumerate(results.multi_face_landmarks):
if i == 0:
cv2.rectangle(img,dist[i][2],dist[i][3],(0,255,0),2)
else:
cv2.rectangle(img, dist[i][2], dist[i][3], (0, 0, 255), 2)
cv2.imshow("Image", img)
cv2.waitKey(1)
</code></pre>
|
<p>Turns out my table of facial points was out of order.</p>
<p>This is the basic face template depths, it has an order.</p>
<pre><code>face3Dmodel = np.array([
(0.0, 0.0, 0.0), # Nose tip
(0.0, -330.0, -65.0), # Chin
(-225.0, 170.0, -135.0), # Left eye left corner
(225.0, 170.0, -135.0), # Right eye right corner
(-150.0, -150.0, -125.0), # Left Mouth corner
(150.0, -150.0, -125.0) # Right mouth corner
], dtype=np.float64)
</code></pre>
<p>I typed it in the wrong order originally. This is now the same order as above.</p>
<pre><code> image_points = np.array([
faceXY[1], # "nose"
faceXY[152], # "chin"
faceXY[226], # "left eye"
faceXY[446], # "right eye"
faceXY[57], # "left mouth"
faceXY[287] # "right mouth"
], dtype="double")
</code></pre>
|
python|opencv|mesh|mediapipe
| 0 |
1,907,228 | 68,400,732 |
Pandas Dataframe to CSV, but writing n rows lower
|
<p>Given a 3x3 dataframe, with index and column names to be included as a row/column themselves when converting the dataframe to a CSV file, <strong>how can I shift the table down 1 row?</strong></p>
<p>I want to shift down 1 row, leaving 1 empty row to write to the CSV after using a completely separate list.</p>
<p>The code and comments below include more detail and clarity regarding my goal:</p>
<pre><code>import pandas as pd
separate_row = [' ', 'Z', 'Y', 'X']
# Note: The size is 3x3
x = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
header_cols = ['a','b','c']
df = pd.DataFrame(x, index=[1,2,3], columns=header_cols)
# Note: Exporting as 4x4 now
df.to_csv('data.csv', index=True, header=True)
# How to make CSV file 5x4??
</code></pre>
<p>Row 1 in the CSV file will be filled by <code>separate_row</code>, though I cannot have <code>separate_row</code> as the column name when creating the dataframe. The column name MUST be <code>header_cols</code> but <code>separate_row</code> is to go above.</p>
|
<p>Try:</p>
<pre><code>with open('data.csv', 'w') as csvfile:
pd.DataFrame(columns=separate_row).to_csv(csvfile, index=None)
df.to_csv(csvfile, index=True, header=True)
</code></pre>
<pre><code>>>> %cat data.csv
,Z,Y,X
,a,b,c
1,0,0,0
2,0,0,0
3,0,0,0
</code></pre>
|
python|pandas|list|dataframe|csv
| 0 |
1,907,229 | 59,046,921 |
Networkx: NetworkXException: nodelist contains duplicate for stochastic_block_model
|
<p>I'm new to networkx (version 2.4) and a bit puzzled by the error that I get for <a href="https://networkx.github.io/documentation/stable/reference/generated/networkx.generators.community.stochastic_block_model.html" rel="nofollow noreferrer">stochastic_block_model</a> when I try to add a nodelist. I'm trying to have a different color attribute for each block in the network using this code:</p>
<pre><code>import networkx as nx
N_p = 10
N_n = 10
N_0 = 30
sizes = [N_p, N_n, N_0]
probs = [[0.25, 0.05, 0.02],
[0.05, 0.35, 0.07],
[0.02, 0.07, 0.40]]
nodelist = ['blue' for i in range(N_p)]
nodelist.extend(['red' for i in range(N_n)])
nodelist.extend(['green' for i in range(N_0)])
G = nx.stochastic_block_model(sizes, probs,nodelist=nodelist, seed=0,directed=1)
</code></pre>
<p>But I get the following error message:</p>
<pre><code>...
/opt/anaconda3/lib/python3.7/site-packages/networkx/generators/community.py in stochastic_block_model(sizes, p, nodelist, seed, directed, selfloops, sparse)
576 raise nx.NetworkXException("'nodelist' and 'sizes' do not match.")
577 if len(nodelist) != len(set(nodelist)):
--> 578 raise nx.NetworkXException("nodelist contains duplicate.")
579 else:
580 nodelist = range(0, sum(sizes))
NetworkXException: nodelist contains duplicate.
</code></pre>
<p>What am I doing wrong?</p>
|
<p>The error is just that - the nodelist contains duplicates:</p>
<pre><code>>>> nodelist
['blue', 'blue', ..., 'red', 'red', ..., 'green', 'green', ...]  # i.e. ['blue']*10 + ['red']*10 + ['green']*30
</code></pre>
<p>As in your documentation link:</p>
<blockquote>
<p><strong>Raises NetworkXError –</strong> </p>
<p>If probabilities are not in [0,1]. If the
probability matrix is not square (directed case). If the probability
matrix is not symmetric (undirected case). If the sizes list does not
match nodelist or the probability matrix. <strong>If nodelist contains
duplicate.</strong></p>
</blockquote>
<p>To fix this, either don't use a nodelist, or do something like the following:</p>
<pre><code>nodelist = [f'blue_{i}' for i in range(N_p)]
nodelist.extend([f'red_{i}' for i in range(N_n)])
nodelist.extend([f'green_{i}' for i in range(N_0)])
</code></pre>
|
python|nodes|networkx|graph-theory|nodelist
| 1 |
1,907,230 | 63,088,336 |
How to fill missing values in pandas series with 1 if and only if the last and next non missing value is 1
|
<p>I've got a pandas series with <code>0</code>, <code>1</code> and <code>np.nan</code> values only:</p>
<pre><code>pd.Series([0, np.nan, np.nan, 0, np.nan, np.nan, np.nan, 1, np.nan, 1, np.nan,
np.nan, np.nan, 1, np.nan, 0, np.nan, np.nan, 1, np.nan, np.nan, np.nan, 1])
</code></pre>
<p>I'd like to fill missing values, but my logic is to replace <code>np.nan</code> value with 1 if and only if the previous and next non-missing values are 1 as well, otherwise 0. So the expected output is:</p>
<pre><code>pd.Series([0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1])
</code></pre>
<p>How can I do that?</p>
|
<p>Following your prescription for imputing, the first and the last elements are always <code>0</code> if they need to be imputed. You can walk through <code>vals = ds.values</code> with <code>for i in range(1, len(vals)-1)</code> and check whether the nearest non-missing values on both sides equal <code>1</code>, setting <code>vals[i] = 1</code> in that case and <code>0</code> otherwise (note that testing only the immediate neighbours via <code>vals[i-1]*vals[i+1] == 1</code> breaks on runs of consecutive NaNs). When done, reassign to the Series and use <code>ds.fillna(0)</code> for the first and last values.</p>
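<p>A vectorized sketch of the same rule, using <code>ffill</code>/<code>bfill</code> instead of the explicit loop (this also handles runs of consecutive NaNs):</p>
<pre><code>import numpy as np
import pandas as pd
s = pd.Series([0, np.nan, np.nan, 0, np.nan, np.nan, np.nan, 1, np.nan, 1, np.nan,
               np.nan, np.nan, 1, np.nan, 0, np.nan, np.nan, 1, np.nan, np.nan, np.nan, 1])
# a position becomes 1 only if the last and next non-missing values are both 1
result = ((s.ffill() == 1) & (s.bfill() == 1)).astype(int)
</code></pre>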
|
python|pandas
| 0 |
1,907,231 | 63,100,479 |
Multiple photos in discord.py embed
|
<p>Okay, I've got two chart images and I want to send them in one embed message.
Here's the code I wrote:</p>
<pre><code> charts = [
discord.File("/root/discord.py/chart-render/tempchart.png", filename="tempchart.png"),
discord.File("/root/discord.py/chart-render/ramchart.png", filename="ramchart.png")
]
stats.set_image(url="attachment://tempchart.png")
stats.set_image(url="attachment://ramchart.png")
await ctx.send(embed=stats, files=charts)
</code></pre>
<p>The problem is that one image is sent inside the embed, but the second one isn't; it's sent above the embed message.</p>
<p>How can I solve this problem?</p>
|
<p>I have linked an image below that I hope helps you understand how embeds work a bit better. Essentially, an embed can only contain one image, so you will unfortunately need to send two embeds (see the sketch after the image). Also, <a href="https://leovoel.github.io/embed-visualizer/" rel="nofollow noreferrer">here</a> is a nice online embed visualizer.</p>
<p>I hope this helped</p>
<p><img src="https://cdn.discordapp.com/attachments/84319995256905728/252292324967710721/embed.png" alt="" /></p>
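<p>A minimal sketch of sending your two charts as two separate messages, each with its own embed and attachment (the embed titles are placeholders):</p>
<pre><code>temp_file = discord.File("/root/discord.py/chart-render/tempchart.png", filename="tempchart.png")
temp_embed = discord.Embed(title="Temperature")
temp_embed.set_image(url="attachment://tempchart.png")
await ctx.send(file=temp_file, embed=temp_embed)
ram_file = discord.File("/root/discord.py/chart-render/ramchart.png", filename="ramchart.png")
ram_embed = discord.Embed(title="RAM")
ram_embed.set_image(url="attachment://ramchart.png")
await ctx.send(file=ram_file, embed=ram_embed)
</code></pre>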
|
python-3.x|embed|discord.py
| 3 |
1,907,232 | 63,287,167 |
From list column in pandas, access each string in list to remove the numbers and periods
|
<p>I have a dataframe with a column containing a list of strings in each row, but each string has numbers and periods which I have to remove. I'm unable to access the strings of the list in each row; here's the sample dataframe:</p>
<pre><code>df['column_name']
output:
['1.one','2.two','3. three','4.four ']
['1.one','2.two','3. three','4.four ','5.five']
['1.one','2.two','3. three']
...
</code></pre>
<p>I tried as below, and my output is:</p>
<pre><code>df4['column_name'].str[0].str.replace('\d+\.','')
output:
one
one
one
...
</code></pre>
<p>but i need an output like this:</p>
<pre><code>df4['column_name'].str[0].str.replace('\d+\.','')
output:
'one', 'two', 'three', 'four'
</code></pre>
<p>Likewise, I have to loop over all the rows of the dataframe. :( Any help would be very much appreciated!</p>
|
<p>You could try this, to get the column of type string:</p>
<pre><code>df['column_name'].str.join(',').str.replace('\d+\.|[ ]','').str.replace(',',', ')
</code></pre>
<p>Or this to get the column of type list:</p>
<pre><code>df['column_name'].str.join(',').str.replace('\d+\.|[ ]','').str.split(',')
</code></pre>
<hr />
<p>Output:</p>
<pre><code>#first solution:
0 one, two, three, four
1 one, two, three, four, five
2 one, two, three
Name: column_name, dtype: object
#second solution:
0 [one, two, three, four]
1 [one, two, three, four, five]
2 [one, two, three]
Name: column_name, dtype: object
</code></pre>
|
python|pandas
| 1 |
1,907,233 | 63,256,435 |
sagetex: SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape
|
<p>On Windows 10, when trying to execute the following command</p>
<pre><code>"C:/Program Files/SageMath 9.1/runtime/bin/bash" -l "C:/Program Files/SageMath 9.1/runtime/opt/sagemath-9.1/sage" -c "os.chdir('C:\Users\Diaa\Desktop\Test'); load('testsagetex.sagetex.sage')"
</code></pre>
<p>I get the following error</p>
<blockquote>
<p>SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes
in position 2-3: truncated \UXXXXXXXX escape</p>
</blockquote>
<p>and the answers to <a href="https://stackoverflow.com/q/37400974/2849383">this question</a> can't help me to fix it.</p>
<p>So, what is wrong or missing here knowing that the full the output can be found <a href="https://pastebin.com/tp31HHbs" rel="nofollow noreferrer">here</a>.</p>
|
<p>This is answered in the question you linked to: in a regular string literal, the <code>\U</code> in <code>'C:\Users\...'</code> starts an eight-hex-digit unicode escape, which is why you get the "truncated \UXXXXXXXX escape" error. Use a raw string instead: <code>os.chdir(r'C:\Users\Diaa\Desktop\Test')</code></p>
|
python|windows|command-line|latex|sage
| 1 |
1,907,234 | 62,148,090 |
How to send all members a message in on_ready event discord.py
|
<p>When my bot starts up, I want it to send a message to all the members in each server, for example I want it to say <code>Hello</code>.</p>
<p>Any ideas?</p>
<pre class="lang-py prettyprint-override"><code>@bot.event
async def on_ready():
sentmessage.toallmember.allserver("Hello")
print("[+] Bot is ready")
</code></pre>
|
<p>Do not use this maliciously.</p>
<pre class="lang-py prettyprint-override"><code>@bot.event
async def on_ready():
for m in bot.get_all_members():
try: # this can fail if a user has DMs disabled
await m.send("Something here")
except:
pass
print("[+] Bot is ready")
</code></pre>
<hr>
<p><strong>References:</strong></p>
<ul>
<li><a href="https://discordpy.readthedocs.io/en/latest/api.html#discord.Client.get_all_members" rel="nofollow noreferrer"><code>Client.get_all_members()</code></a></li>
<li><a href="https://discordpy.readthedocs.io/en/latest/api.html#discord.Member.send" rel="nofollow noreferrer"><code>Member.send()</code></a></li>
</ul>
|
python|discord|discord.py
| 0 |
1,907,235 | 58,641,702 |
how to get `bq://project_id'
|
<p>Hello everyone. I am processing some CSV data in Google Cloud AutoML Tables, and it asks me to fill in this variable: output_path = `bq://project_id' , but I don't know what it is. If anyone can help, I would appreciate it a lot.</p>
<pre><code># TODO(developer): Uncomment and set the following variables
# project_id = 'PROJECT_ID_HERE'
# compute_region = 'COMPUTE_REGION_HERE'
# model_id = 'MODEL_ID_HERE'
# input_path = 'gs://path/to/file.csv' or
# 'bq://project_id.dataset_id.table_id'
# output_path = 'gs://path' or `bq://project_id'
</code></pre>
|
<p><code>bq</code> is the BigQuery command-line tool from Google Cloud, and the <code>bq://</code> prefix refers to a BigQuery location; please refer to the documentation: <a href="https://cloud.google.com/bigquery/docs/bq-command-line-tool" rel="nofollow noreferrer">https://cloud.google.com/bigquery/docs/bq-command-line-tool</a>.
It is asking for the output path, where you can store your output either in Google Cloud Storage or in a BigQuery table.</p>
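<p>For example, the output path can point at either service (the bucket and project names below are placeholders):</p>
<pre><code># store the results in a Cloud Storage bucket
output_path = 'gs://my-bucket/automl-output'
# or store them in BigQuery under your project
output_path = 'bq://my-project-id'
</code></pre>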
|
python|google-cloud-platform
| 0 |
1,907,236 | 58,967,846 |
I am using request.post method of python for calling post method of API Developed in C#
|
<p>I am using Python's <strong>requests.post</strong> method to call the POST method of an API developed in C#. While calling it via Python I get an error, but when I use POSTMAN it works without error.
I am getting the following error:</p>
<p>{'Status': False,
'Message': <strong><em>'Object reference not set to an instance of an object.'</em></strong>,
'Data': {'succeeded': False},
'TotalRecords': 0}</p>
<p>So I have some doubts :</p>
<ol>
<li>If C# API working fine while hitting from POSTMAN but not working with my Python Script. So is there any problem with my Python code?</li>
<li>Is it valid to use <strong>requests.post</strong> for calling C# API?</li>
<li>Is there any problem with JSON Format of Data that I am passing to C# API?</li>
</ol>
<p><strong>Python Script:</strong></p>
<pre><code>import requests
headers = {'Token': 'AnyRandomToken','Content-Type':'application/json'}
your_data = {'EmployeeId': 'XXXXXXXX'}
r = requests.post("URLForC#ApiCan'tShareOverHere", headers=headers, data=your_data).json()
</code></pre>
|
<p>Finally, I found a solution:
I simply removed the <strong>Content-Type</strong> entry from the <strong>headers</strong>, and when I hit the C# API again I got a valid response. A sketch of the working call is below.</p>
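<p>For reference, the working call then looks roughly like this (the URL is still omitted, as in the question):</p>
<pre><code>import requests
url = '...'  # the C# API endpoint (cannot be shared here)
headers = {'Token': 'AnyRandomToken'}  # note: no 'Content-Type' entry
your_data = {'EmployeeId': 'XXXXXXXX'}
r = requests.post(url, headers=headers, data=your_data).json()
</code></pre>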
|
c#|python|json|api|postman
| 0 |
1,907,237 | 15,823,166 |
Linecache adding an extra line to the line that i get
|
<p>When I try to get a line using linecache in Python:</p>
<pre><code>loginpass = raw_input("> ")
if loginpass == linecache.getline('Password.txt', 1):
</code></pre>
<p>The line that it gets always returns with an extra line.
So if line one is </p>
<pre><code>"Test"
</code></pre>
<p>It returns</p>
<pre><code>"Test
"
</code></pre>
<p>It worked earlier in the code but anything after that it adds that line after it.</p>
|
<p>This is normal; reading lines from a file includes the line-ending newline character. Just strip it off:</p>
<pre><code>linecache.getline('Password.txt', 1).rstrip('\n')
</code></pre>
<p>I'm more concerned that you're storing passwords in plain text, though....</p>
|
python|lines|getline|linecache
| 3 |
1,907,238 | 16,029,600 |
Which way of using variables is more efficient in terms of speed, cpu, memory, etc in python?
|
<p>Assume we have a function which wants to operate some actions on attribute z of objA. objA is a property of objB and objB is a property of objC etc... which of these two approaches is faster ? Is there any difference?</p>
<p>Approach 1: Using <code>objC.objB.objA.z</code>for every statement in the function.</p>
<p>Approach 2: Assigning a local variable like x in function as:</p>
<pre><code>x=objC.objB.objA.z
</code></pre>
<p>then operate on x, then assign the output to the preferable variable.</p>
<p>I know Approach 2 makes it easier in terms of writing the actual code but doesn't defining a new variable cost more memory? Which approach is more pythonic and is there any other (better) way to do things other than aforementioned approaches? </p>
|
<p>Approach 2 will in general be quicker, although it may not be a noticeable difference unless it's in a tight loop.</p>
<p>Every time you do <code>a.b.c.d</code>, Python has to look up the values of those attributes, even if they don't change in between uses. If you create a variable <code>x</code> for that value once, then Python only has to look up the attributes once, saving time.</p>
<p>Doing <code>x = a.b.c.d</code> does <em>not</em> create a new object, so it doesn't use any memory. <code>x</code> will be a reference to the same object that <code>a.b.c.d</code> pointed to. However, because of this, you do need to be careful. Any operations that mutate <code>x</code> will affect the original object. For instance, if <code>a.b.c.d</code> is a list, and you do <code>x = a.b.c.d</code>, then <code>x.append(1)</code> will alter the value of the original <code>a.b.c.d</code> object. If this is what you want, great. If not, be sure to explicitly copy the value.</p>
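<p>A tiny sketch of the idea (the objects here are made up purely for illustration):</p>
<pre><code>class Obj:
    pass
a = Obj(); a.b = Obj(); a.b.c = Obj(); a.b.c.d = [1, 2, 3]
x = a.b.c.d               # one attribute-chain lookup
total = 0
for _ in range(1000):
    total += x[0]         # cheaper than a.b.c.d[0] on every iteration
x.append(4)               # mutates the same list that a.b.c.d refers to
</code></pre>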
|
python|variables
| 4 |
1,907,239 | 48,912,672 |
How to configure PyBuilder to look locally for certain files needed for testing?
|
<p>My tests use two CSV files as a large part of my program involves interpreting and then posting data to elasticsearch. When running PyBuilder, it can't find these files since it's running from a different directory. For example, one of the errors I get is this:</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/lib/python3.6/site-packages/PythonUtilities-1.0-py3.6.egg/tests/in/data.csv'
</code></pre>
<p>How can I configure PyBuilder in a way that allows me to work with the files that are in the same directory as my tests?</p>
|
<p>I used a MANIFEST.in file to specify which files PyBuilder should add; a sketch is below.</p>
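<p>A sketch of such an entry (the path is an assumption based on the error message in the question):</p>
<pre><code># MANIFEST.in
include tests/in/*.csv
</code></pre>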
|
python|build-tools|pybuilder
| 0 |
1,907,240 | 49,162,550 |
Getting "'int' object is not subscriptable" error while apply a method to a pandas data series
|
<p>I have a stocks_df data frame that looks like the one in the picture. When I apply the lambda as in the picture, it doesn't throw any errors.</p>
<p>However when I do</p>
<pre><code>list = pandas.Series([1,2,3,4,5])
new_list = list.apply(lambda x: x/x[0])
</code></pre>
<p>It gives me "'int' object is not subscriptable" error. Is there any difference between the two? What am I doing wrong here? </p>
<p><a href="https://i.stack.imgur.com/AqYhv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AqYhv.png" alt="mentioned dataframe"></a></p>
|
<p>For a series, apply operates element wise. To reference the first element of the series, you need to use list[0] instead of x[0]:</p>
<pre><code>new_list = list.apply(lambda x: x/list[0])
</code></pre>
<p>For a DataFrame, apply by default operates column wise, that's why x/x[0] works.</p>
<p>To use the same syntax, you could use:</p>
<pre><code>new_list = list.to_frame().apply(lambda x: x/x[0])
</code></pre>
<p>By the way, using built-in type name (list) as variable name is not a good idea.</p>
|
python|pandas|analytics
| 2 |
1,907,241 | 49,015,736 |
404: Unknown Message (Discord.py v0.16.12)
|
<p><code>
msg = await self.bot.get_message(channel, req["message_id"])
</code>
channel and req["message_id"] are defined, the IDs are parsed as <code>str</code> (if <code>int</code>, then AttributeError would be raised) and the message is in the channel, yet the console output is this <a href="https://i.imgur.com/TVEX9uV.png" rel="nofollow noreferrer">https://i.imgur.com/TVEX9uV.png</a>. The bot has Administrator permission.</p>
|
<p>The error means that either the channel or the message was not found. It would be helpful to see how you are defining both the channel and the message in your code.</p>
<p><a href="https://discordpy.readthedocs.io/en/v0.16.12/api.html#discord.Client.get_message" rel="nofollow noreferrer">Per the documentation</a>, you should only need to pass the message_id as a string. I'm not sure what the req["message_id"] block does as I'm relatively new to this but it should be as simple as calling get_message(channel, message_id)</p>
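<p>A minimal sketch of that call (the IDs below are placeholders; in discord.py 0.16.x they must be strings):</p>
<pre><code>channel = self.bot.get_channel('123456789012345678')
msg = await self.bot.get_message(channel, '876543210987654321')
</code></pre>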
|
python|python-3.x|discord|discord.py
| 0 |
1,907,242 | 49,020,417 |
.csv File changes to .txt when emailing in Python3
|
<p>I am trying to create a program that stores tweets in a csv file, then emails them via Gmail. Everything seems to be working, until I get to the part where I send the email. Instead of the file coming over as .csv, it is sent via .txt. I have tried to figure it out using the email.mime documentation on the official python website, but it's extremely hard to understand without any sort of examples.</p>
<p>Here is a snippet of the code I currently have:</p>
<pre><code>msg = MIMEMultipart()
msg['From'] = user
msg['To'] = receiver
msg['Subject'] = 'Here are some tweets'
body = 'Enjoy these tweets'
msg.attach(MIMEText(body, 'plain'))
filename = 'tweets.csv'
attachment = open('tweets.csv', 'rb')
part = MIMEBase('application', 'octet-stream')
part.set_payload((attachment).read())
encoders.encode_base64(part)
part.add_header('Content-Dispostition', 'attchment; filename %s' % (filename))
msg.attach(part)
text = msg.as_string()
</code></pre>
|
<p>This is the same code I used to send a <code>csv</code> file using <code>smtplib</code>; I hope this helps you. Note that the <code>add_header</code> call in your code is not in the format stated <a href="https://docs.python.org/2/library/email.message.html#email.message.Message.add_header" rel="nofollow noreferrer">here</a> in the <code>python</code> docs (it also misspells <code>Content-Disposition</code> and <code>attachment</code>, so the attachment loses its filename).</p>
<pre><code>import smtplib
import mimetypes
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.base import MIMEBase
msg=MIMEMultipart()
msg['From'] = user
msg['To'] = receiver
msg['Subject'] = 'Here are some tweets'
filename="tweets.csv"
ctype, encoding = mimetypes.guess_type(filename)
if ctype is None or encoding is not None:
ctype = "application/octet-stream"
maintype, subtype = ctype.split("/", 1)
body = 'Enjoy these tweets'
msg.attach(MIMEText(body,'plain'))
attachment = open(filename, "rb")
part = MIMEBase(maintype, subtype)
part.set_payload(attachment.read())
attachment.close()
part.add_header("Content-Disposition", "attachment", filename=filename)
msg.attach(part)
server = smtplib.SMTP()
server.connect(address,port)
server.ehlo()
server.sendmail(user, receiver, msg.as_string())
server.close()
</code></pre>
|
python-3.x
| 0 |
1,907,243 | 71,007,929 |
Voluptuous : give error line in yaml file
|
<p>I am using <code>voluptuous</code> a lot to validate yaml description files. Often the errors are cumbersome to decipher, especially for regular users.</p>
<p>I am looking for a way to make the error a bit more readable. One way is to identify which line in the YAML file is incrimined.</p>
<pre><code>from voluptuous import Schema
import yaml
from io import StringIO
Validate = Schema({
'name': str,
'age': int,
})
data = """
name: John
age: oops
"""
data = Validate(yaml.load(StringIO(data)))
</code></pre>
<p>In the above example, I get this error:</p>
<pre><code>MultipleInvalid: expected int for dictionary value @ data['age']
</code></pre>
<p>I would rather prefer an error like:</p>
<pre><code>Error: validation failed on line 2, data.age should be an integer.
</code></pre>
<p>Is there an elegant way to achieve this?</p>
|
<p>The problem is that on the API boundary of <code>yaml.load</code>, all representational information of the source has been lost. <code>Validate</code> gets a Python dict and does not know where it originated from, and moreover the dict does not contain this information.</p>
<p>You can, however, implement this yourself. voluptuous' <code>Invalid</code> error carries a <code>path</code> which is a list of keys to follow. Having this path, you can parse the YAML again into nodes (which carry representation information) and discover the position of the item:</p>
<pre class="lang-py prettyprint-override"><code>import yaml
def line_from(path, yaml_input):
node = yaml.compose(yaml_input)
for item in path:
for entry in node.value:
if entry[0].value == item:
node = entry[1]
break
else: raise ValueError("unknown path element: " + item)
return node.start_mark.line
# demonstrating this on more complex input than yours
data = """
spam:
egg:
sausage:
spam
"""
print(line_from(["spam", "egg", "sausage"], data))
# gives 4
</code></pre>
<p>Having this, you can then do</p>
<pre class="lang-py prettyprint-override"><code>try:
data = Validate(yaml.load(StringIO(data)))
except Invalid as e:
line = line_from(e.path, data)
path = "data." + ".".join(e.path)
print(f"Error: validation failed on line {line} ({path}): {e.error_message}")
</code></pre>
<p>I'll go this far for this answer as it shows you how to discover the origin line of an error. You will probably need to extend this to:</p>
<ul>
<li>handle YAML sequences (my code assumes that every intermediate node is a <code>MappingNode</code>, a <code>SequenceNode</code> will have single nodes in its <code>value</code> list instead of a key-value tuple)</li>
<li>handle <code>MultipleInvalid</code> to issue a message for each inner error</li>
<li>rewrite <code>expected int</code> to <code>should be an integer</code> if you really want to (no idea how you'd do that)</li>
<li>abort after printing the error</li>
</ul>
|
python|validation|yaml|voluptuous
| 1 |
1,907,244 | 60,087,370 |
how can I shorten this piece of code that is very redundant
|
<p>Is it possible to shorten this piece of code to only a few lines?</p>
<pre><code> if rulesVersion:
payload["rulesVersion"] = rulesVersion
if scriptsVersion:
payload["scriptsVersion"] = scriptsVersion
if csq:
payload["CSQ"] = csq
if rebootTimes:
payload["RebootTimes"] = rebootTimes
if acdcSwitch:
payload["PowerSource"] = acdcSwitch
if temperature:
payload["Temperature"] = temperature
</code></pre>
|
<p>Making a <code>dict</code> directly, then filtering to omit the falsy values is probably the safest/most straightforward solution:</p>
<pre><code>payload = {"rulesVersion": rulesVersion,
"scriptsVersion": scriptsVersion,
"CSQ": csq,
"RebootTimes": rebootTimes,
"PowerSource": acdcSwitch,
"Temperature": temperature}
payload = {k: v for k, v in payload.items() if v} # Filter out falsy entries
</code></pre>
<p>An alternative (that risks mismatching names and values if you're not careful) would be to tuple stuff up and loop over the <code>zip</code>-ed pairs in a simple <code>dict</code> comprehension:</p>
<pre><code>names = ("rulesVersion", "scriptsVersion", "CSQ", "RebootTimes", "PowerSource", "Temperature")
values = (rulesVersion, scriptsVersion, csq, rebootTimes, acdcSwitch, temperature)
payload = {name: val for name, val in zip(names, values) if val}
</code></pre>
<p>If the <code>payload</code> is an already existing, non-empty <code>dict</code>, you'd change the final line to something like the following to add the new values rather than rebinding <code>payload</code> to a brand new <code>dict</code>:</p>
<pre><code>payload.update({name: val for name, val in zip(names, values) if val})
# Or genexpr for lower memory overhead, but slightly slower/uglier:
# payload.update((name, val) for name, val in zip(names, values) if val)
</code></pre>
<p>Similarly, for the "build a <code>dict</code> then filter it" case where <code>payload</code> already exists, just build and filter a separate <code>dict</code> (<code>additional_payload</code> or the like), then make the last line:</p>
<pre><code>payload.update(additional_payload)
</code></pre>
|
python
| 1 |
1,907,245 | 5,921,743 |
Views error in Django
|
<p>I have used Django forms in this manner and have got an error:
Error: <code>invalid literal for int() with base 10: 'check'</code></p>
<pre><code>#this is forms.py
from django import forms
class PersonalInfo(forms.Form):
Name = forms.CharField(max_length=20)
Email_ID = forms.EmailField(required=False)
Address = forms.CharField(max_length=50,required=False)
Contact_Phone = forms.CharField(max_length=20)
Image = forms.FileField(required=False)
</code></pre>
<p>The PersonalInfo is used in register.html</p>
<pre><code>#This is view.py, register calling register.html
def register(request):
form = PersonalInfo()
return render_to_response('register.html', {'form':form}, context_instance=RequestContext(request))
</code></pre>
<p>In register.html this is the way I will use it :</p>
<pre><code> {% if form.errors %}
<p style="color: red;">
Please correct the error{{ form.errors|pluralize }} below.
</p>
{% endif %}
<form action="/uregister/" method="post">
<table>
{{ form.as_table }}
</table>
<input type="submit" value="Submit">
</form>
</code></pre>
<p>This is views of uregister:</p>
<pre><code>def uregister(request):
if request.method == 'POST':
form = PersonalInfo(request.POST)
if form.is_valid():
cd = form.cleaned_data
per_job = Personal(cd['Name'], cd['Email_ID'], cd['Address'], cd['Contact_Phone'], cd['Image'])
per_job.save()
return HttpResponseRedirect('/')
else:
form = PersonalInfo()
return render_to_response('register.html', {'form': form}, context_instance=RequestContext(request))
</code></pre>
<p>This is the Personal model in models.py:</p>
<pre><code>class Personal(models.Model):
name = models.CharField(max_length=20)
email = models.EmailField(blank=True,null=True)
address = models.CharField(max_length=50,blank=True,null=True)
contact = models.CharField(max_length=20)
pic = models.FileField(upload_to='image/',blank=True,null=True)
</code></pre>
<p>The error I get is :</p>
<pre><code>invalid literal for int() with base 10: 'check'
</code></pre>
<p>and </p>
<pre><code>Exception Type: ValueError
Exception Value: invalid literal for int() with base 10: 'check'
Exception Location: /usr/local/lib/python2.6/dist-packages/django/db/models/fields/__init__.py in get_prep_value, line 479
</code></pre>
<p><code>Check</code> is the name I had given in the dummy data.
Can anyone tell me where I am going wrong? Please.</p>
<p><strong>Update:</strong>
Trace</p>
<pre><code>Traceback:
File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py" in get_response
111. response = callback(request, *callback_args, **callback_kwargs)
File "/home/nagaraj/ghar/gharnivas/views.py" in uregister
49. per_job.save()
File "/usr/local/lib/python2.6/dist-packages/django/db/models/base.py" in save
460. self.save_base(using=using, force_insert=force_insert, force_update=force_update)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/base.py" in save_base
522. manager.using(using).filter(pk=pk_val).exists())):
File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py" in filter
550. return self._filter_or_exclude(False, *args, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py" in _filter_or_exclude
568. clone.query.add_q(Q(*args, **kwargs))
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py" in add_q
1172. can_reuse=used_aliases, force_having=force_having)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py" in add_filter
1107. connector)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/where.py" in add
67. value = obj.prepare(lookup_type, value)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/where.py" in prepare
316. return self.field.get_prep_lookup(lookup_type, value)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/fields/__init__.py" in get_prep_lookup
292. return self.get_prep_value(value)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/fields/__init__.py" in get_prep_value
479. return int(value)
Exception Type: ValueError at /uregister/
Exception Value: invalid literal for int() with base 10: 'check
</code></pre>
|
<p>I think the problem is in line</p>
<pre><code>per_job = Personal(cd['Name'], cd['Email_ID'], cd['Address'], cd['Contact_Phone'], cd['Image'])
</code></pre>
<p>I don't know if it's possible to create a model instance with only positional parameters, but it's not mentioned in the docs. You should be <a href="http://docs.djangoproject.com/en/1.2/topics/db/queries/#creating-objects" rel="nofollow">using keyword parameters</a>:</p>
<pre><code>per_job = Personal(name=cd['Name'], email=cd['Email_ID'], etc.
</code></pre>
<p>The error you're seeing probably results from trying to assign a non-integer value to the default integer object-ID field, so it might be caused by this.</p>
<hr>
<p>Regarding the other things:</p>
<ul>
<li>Image is not stored probably because you're not <a href="http://docs.djangoproject.com/en/1.2/topics/http/file-uploads/#basic-file-uploads" rel="nofollow">using form attribute</a> <code>enctype="multipart/form-data"</code> which is required for correctly processing uploaded files.</li>
<li>The errors are not displayed most probably because they're contained in the form after validation, and you're replacing that with an empty instance in <code>else:</code> branch of your <code>uregister</code> view.</li>
</ul>
|
python|django|django-forms|django-views
| 2 |
1,907,246 | 67,773,984 |
How to handle SSL Certificate in IE using selenium with python?
|
<p>I'm getting the error as per the image.
<a href="https://i.stack.imgur.com/K0kYa.png" rel="nofollow noreferrer">Error_img</a></p>
<p>I tried the following code to solve it.</p>
<p><strong>Method 1 :</strong></p>
<pre><code>from selenium import webdriver
from selenium.webdriver.ie.options import Options
options = Options()
options.set_capability={"acceptInsecureCerts", True}
options.set_capability={"ignoreProtectedModeSettings":True, "ignoreZoomSetting":True}
driver = webdriver.Ie(options=options,executable_path='D:/
Project/Testing/IEDriverServer_Win32_3.150.1/IEDriverServer.exe')
driver.get(url)
options.set_capability={"ie.ensureCleanSession",True}
driver.close()
</code></pre>
<p><strong>Method 2:</strong></p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
desired_capabilities = DesiredCapabilities.INTERNETEXPLORER.copy()
desired_capabilities['acceptInsecureCerts'] = True
driver = webdriver.Ie(capabilities=desired_capabilities,executable_path='E:/DriverServer_Win32_3.150.1/IEDriverServer.exe')
driver.get(url)
print(driver.title)
driver.close()
</code></pre>
<p><strong>Note:</strong> I can't share the URL here, so I have just written the word "URL" instead.</p>
<p>I tried both code snippets but they're not working.</p>
<p>Is there another solution?</p>
|
<p>The <code>acceptInsecureCerts</code> capability doesn't work because IE doesn't allow accepting insecure certificates. You can refer to <a href="https://github.com/SeleniumHQ/selenium/issues/4704#issuecomment-329539218" rel="nofollow noreferrer">this link</a> for more detailed information.</p>
<p>In IE 11, you can click the link <strong>Go on to the webpage (not recommended)</strong> as a workaround to bypass the SSL certificate error. This link has an id "overridelink". You can find the id using F12 dev tools.</p>
<p>I use this site: <a href="https://expired.badssl.com/" rel="nofollow noreferrer">https://expired.badssl.com/</a> as an example, the sample code is like below:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
import time
url = "https://expired.badssl.com/"
ieoptions = webdriver.IeOptions()
ieoptions.ignore_protected_mode_settings = True
driver = webdriver.Ie(executable_path='IEDriverServer.exe', options=ieoptions)
driver.get(url)
time.sleep(3)
driver.find_element_by_id('moreInfoContainer').click()
time.sleep(3)
driver.find_element_by_id('overridelink').click()
</code></pre>
<p>It works well in IE 11, you can also try the same method.</p>
|
python|selenium-webdriver|internet-explorer|ssl-certificate
| 1 |
1,907,247 | 67,927,075 |
RecursionError when inheriting from float and using str and repr
|
<p>I was testing some features in Python for fun ;)
But I have a recursion error that I don't understand</p>
<pre class="lang-py prettyprint-override"><code>class Test(float):
def __new__(cls, value):
return super().__new__(cls, value)
def __str__(self):
return super().__str__()
def __repr__(self):
return f'<value: {str(self)}>'
test = Test(12)
print(test)
</code></pre>
<p>Traceback:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "C:\temp\test_float.py", line 13, in <module>
print(test)
File "C:\temp\test_float.py", line 6, in __str__
return super().__str__()
File "C:\temp\test_float.py", line 9, in __repr__
return f'<value: {str(self)}>'
File "C:\temp\test_float.py", line 6, in __str__
return super().__str__()
File "C:\temp\test_float.py", line 9, in __repr__
return f'<value: {str(self)}>'
...the above 2 errors repeated many times...
File "C:\temp\test_float.py", line 6, in __str__
return super().__str__()
RecursionError: maximum recursion depth exceeded
</code></pre>
<p>The line <code>return super().__str__()</code> should call <code>float.__str__()</code> and just returns '12'.</p>
<p>Do you have any ideas ?</p>
|
<p>Your <code>__repr__</code> calls your <code>__str__</code>, which calls the super's <code>__str__</code>, which defers to <code>repr</code>, which calls your <code>__repr__</code>, which is an infinite recursion. You could call <code>super().__repr__</code> in your <code>__repr__</code> method, instead of calling <code>str(self)</code>.</p>
<pre><code>class Test(float):
def __new__(cls, value):
return super().__new__(cls, value)
def __str__(self):
return super().__str__()
def __repr__(self):
return f'<value: {super().__repr__()}>'
</code></pre>
<pre><code>>>> Test(12)
<value: 12.0>
</code></pre>
|
python|string|inheritance|repr|recursionerror
| 4 |
1,907,248 | 30,675,898 |
Python - Add a tuple to an existing list of tuples in a specific position
|
<p>Suppose I have a list of tuples as follows:</p>
<pre><code>listA = [ (B,2), (C,3), (D,4) ]
</code></pre>
<p>I would like to add another tuple <code>(E,1)</code> to this list. How can I do this?</p>
<p>And more specifically, I would like to add this tuple as the 1st tuple in the list so that I get:</p>
<pre><code>newList = [ (E,1), (B,2), (C,3), (D,4) ]
</code></pre>
<p>I am using Python 2.7.</p>
<p>Thanks in advance!</p>
|
<p>If you are going to be appending to the beginning, a <a href="https://docs.python.org/2/library/collections.html#collections.deque" rel="nofollow">collections.deque</a> would be a more efficient structure:</p>
<pre><code>from collections import deque
deq = deque([("B",2), ("C",3), ("D",4) ])
deq.appendleft(("E",1))
print(deq)
deque([('E', 1), ('B', 2), ('C', 3), ('D', 4)])
</code></pre>
<p>appending to the start of the deque is <code>O(1)</code>.</p>
<p>If you actually wanted a new list and to keep the old you can simply:</p>
<pre><code>newList = [(E,1)] + listA
</code></pre>
|
python|list|python-2.7|tuples
| 4 |
1,907,249 | 66,946,636 |
Migrate Python3 environment to virtual - managing import dependencies
|
<p>I'm running Python3.8.5 on Ubuntu 20.04. I've been putting off working from virtual envs for my projects so I have some third party packages in my local environment. I've been using this OS for about 1 month so not too many third party packages.</p>
<p>I aim to clean up my Python environment and use this as clean slate for future virtual envs.</p>
<p>Am I right in assuming the best way is to remove all third party packages I've installed via pip? If so how do I know which ones to remove? I've read a few nightmare stories so don't want to go willy nilly deleting packages without knowing what they are or if they are built-ins.</p>
<p>Here is the list of outputs from <code>python3 -m pip freeze</code> command.
Note
<code>>> python3 --version</code> = <code>Python 3.8.5</code> which is my default:</p>
<pre><code>appdirs==1.4.4
apturl==0.5.2
asn1crypto==1.4.0
backcall==0.2.0
bcrypt==3.1.7
beautifulsoup4==4.9.3
black==20.8b1
blinker==1.4
Brlapi==0.7.0
bs4==0.0.1
cached-property==1.5.2
certifi==2019.11.28
cffi==1.14.5
chardet==3.0.4
click==7.1.2
coincurve==15.0.0
colorama==0.4.3
command-not-found==0.3
cryptography==2.8
cupshelpers==1.0
cytoolz==0.11.0
dbus-python==1.2.16
decorator==5.0.5
defer==1.0.6
distro==1.4.0
distro-info===0.23ubuntu1
duplicity==0.8.12.0
entrypoints==0.3
eth-hash==0.3.1
eth-typing==2.2.2
eth-utils==1.10.0
ethereum==2.3.2
fasteners==0.14.1
future==0.18.2
html5lib==1.1
httplib2==0.14.0
idna==2.8
ipython==7.22.0
ipython-genutils==0.2.0
jedi==0.18.0
joblib==1.0.1
key==0.4
keyring==18.0.1
language-selector==0.1
launchpadlib==1.10.13
lazr.restfulclient==0.14.2
lazr.uri==1.0.3
lockfile==0.12.2
louis==3.12.0
lxml==4.6.3
macaroonbakery==1.3.1
Mako==1.1.0
MarkupSafe==1.1.0
monotonic==1.5
mypy-extensions==0.4.3
netifaces==0.10.4
numpy==1.20.2
oauthlib==3.1.0
olefile==0.46
pandas==1.2.3
paramiko==2.6.0
parso==0.8.2
pathspec==0.8.1
pbkdf2==1.3
pexpect==4.6.0
pickleshare==0.7.5
Pillow==7.0.0
prompt-toolkit==3.0.18
protobuf==3.6.1
py-ecc==5.2.0
pycairo==1.16.2
pycparser==2.20
pycryptodome==3.10.1
pycups==1.9.73
pyethash==0.1.27
Pygments==2.8.1
PyGObject==3.36.0
PyJWT==1.7.1
pymacaroons==0.13.0
PyNaCl==1.3.0
PyPDF2==1.26.0
pyRFC3339==1.1
pysha3==1.0.2
python-apt==2.0.0+ubuntu0.20.4.4
python-dateutil==2.7.3
python-debian===0.1.36ubuntu1
pytz==2019.3
pyxdg==0.26
PyYAML==5.3.1
regex==2020.11.13
reportlab==3.5.34
repoze.lru==0.7
requests==2.22.0
requests-unixsocket==0.2.0
rlp==1.2.0
scrypt==0.8.17
SecretStorage==2.3.1
selenium==3.141.0
simplejson==3.16.0
six==1.14.0
soupsieve==2.2.1
speedtest-cli==2.1.2
systemd-python==234
toml==0.10.2
toolz==0.11.1
traitlets==5.0.5
typed-ast==1.4.2
typing-extensions==3.7.4.3
ubuntu-advantage-tools==20.3
ubuntu-drivers-common==0.0.0
ufw==0.36
unattended-upgrades==0.1
urllib3==1.25.8
usb-creator==0.3.7
wadllib==1.3.3
wcwidth==0.2.5
webencodings==0.5.1
xkit==0.0.0
</code></pre>
<p>I had a look at the <a href="https://pip.pypa.io/en/stable/reference/pip_freeze/#" rel="nofollow noreferrer">docs</a> and checked both <code>python3 -m pip freeze</code> <code>--user</code> & <code>--local</code> options but saw lots of packages which I didn't directly install. The outputs of these were less than above example.</p>
|
<p>Turns out it's never too late to start using virtual environments. Following on from @sinoroc above, each venv establishes a 'clean' Python version which includes only built-in modules and drops all third-party packages (for Ubuntu 20.04, <code>--version</code> = 3.8.5). See below for pip freeze after initializing the virtual environment venv:</p>
<pre><code>>>> python3 -m venv venv
>>> ls
Desktop Downloads Pictures repos Templates Videos
Documents Public snap venv
>>> source ./venv/bin/activate
(venv) izpad ~
>>> python3 -m pip install -U pip
Collecting pip
Using cached pip-21.0.1-py3-none-any.whl (1.5 MB)
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 20.0.2
Uninstalling pip-20.0.2:
Successfully uninstalled pip-20.0.2
Successfully installed pip-21.0.1
(venv) izpad ~
>>> pip freeze
pkg-resources==0.0.0
>>> python --version
Python 3.8.5
</code></pre>
|
dependencies|virtualenv|local|freeze|python-3.8
| 1 |
1,907,250 | 42,955,649 |
Can a python generator work like a dictionary?
|
<p>I have a big list of company information in an excel spreadsheet. I need to bring the company info into my program to process.</p>
<p>Each company has a unique label which is used for accessing the companies. I can create a dictionary using the labels as the keys and the company info as the values, such as <code>{label1: company1, label2: company2, ...}</code>. By doing it this way, when the dictionary is created, it eats up too much memory.</p>
<p>Is it possible to create a generator that can be used like a dictionary?</p>
|
<p>It seems the primary goal of the question is to have an object that <em>behaves</em> like a dictionary, without having the dictionary's contents in RAM (OP: "By doing it this way, when the dictionary is created, it eats up too much memory."). One option here is to use <a href="https://pypi.python.org/pypi/sqlitedict" rel="nofollow noreferrer">sqlitedict</a>, which mimics the Python dictionary API, and uses a Sqlite database under the hood.</p>
<p>Here's the example from the current documentation:</p>
<pre><code>>>> # using SqliteDict as context manager works too (RECOMMENDED)
>>> with SqliteDict('./my_db.sqlite') as mydict: # note no autocommit=True
... mydict['some_key'] = u"first value"
... mydict['another_key'] = range(10)
... mydict.commit()
... mydict['some_key'] = u"new value"
... # no explicit commit here
>>> with SqliteDict('./my_db.sqlite') as mydict: # re-open the same DB
... print mydict['some_key'] # outputs 'first value', not 'new value'
</code></pre>
|
python|dictionary|generator
| 2 |
1,907,251 | 66,724,649 |
when trying to apply my code on data frame column i face the following list index out of range error
|
<pre class="lang-py prettyprint-override"><code>data1 =pd.read_json('C:\\machine learning\\csvjson.json')
data3=data1.iloc[4:]
data3 = data3.reset_index()
data3.drop('index',axis=1)
for i in range(len(data3['coverageData (S)'])):
inpu_t = data3['coverageData (S)'].iloc[i]
re_dict = (inpu_t[0])
coverageStatsDict = (re_dict['CoverageStats'])
blocksData = coverageStatsDict[0]
</code></pre>
<blockquote>
<p>IndexError: list index out of range at re_dict = (inpu_t[0])</p>
</blockquote>
|
<p>It looks like you are attempting to select particular columns. However, the way you used <em>iloc</em> selects rows, not columns (<a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.iloc.html" rel="nofollow noreferrer">Refer Pandas docs on iloc here</a>).</p>
<p>If you are looking to select all rows and all columns 4th column on wards, this should do the trick. Try replacing</p>
<pre><code>data3 = data1.iloc[4:]
</code></pre>
<p>with</p>
<pre><code>data3 = data1.iloc[:, 4:]
</code></pre>
|
python|pandas
| 0 |
1,907,252 | 72,445,746 |
subprocess popen function get lock when calling it with muscle or mafft in a pipeline
|
<p>I'm trying to include sequence alignment using muscle or mafft, depending on the user, in a pipeline.
To do so, I'm using the <code>subprocess</code> package, but sometimes the subprocess never terminates and my script doesn't continue. Here is how I call the subprocess:</p>
<pre><code>child = subprocess.Popen(str(muscle_cline), stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
child.wait()
</code></pre>
<p>The command <code>muscle_cline</code> looks like this:</p>
<pre><code>./tools/muscle/muscle5.1.win64.exe -align C:\Users\alexis\Desktop\git-repo\MitoSplitter\results\genes-fasta\12S_tmp.fasta -output C:\Users\alexis\Desktop\git-repo\MitoSplitter\results\alignement\12S_tmp_muscle_align.fasta
</code></pre>
<p>I'm calling this line in a function that just creates the command line and calls the subprocess, and converts the output.</p>
<p>I'm then calling this function in a <code>for</code> loop</p>
<pre><code>for file in getFastaFile(my_dir):
alignSequenceWithMuscle(file)
</code></pre>
<p>The issue is that sometimes, for unknown reasons, the subprocess never finishes and gets locked...</p>
<p>I tried to check the returncode of the child, or print stuff to see where it gets locked, and it's getting locked when I'm calling the subprocess.</p>
<p>Any ideas?</p>
|
<p>You generally want to avoid bare <code>Popen</code>, especially if you don't have a good understanding of its requirements. This is precisely why Python offers you <code>subprocess.check_output</code> and other higher-level functions which take care of the nitty-gritty of managing a subprocess.</p>
<pre><code>output = subprocess.check_output(
["./tools/muscle/muscle5.1.win64.exe",
"-align", r"C:\Users\alexis\Desktop\git-repo\MitoSplitter\results\genes-fasta\12S_tmp.fasta",
"-output", r"C:\Users\alexis\Desktop\git-repo\MitoSplitter\results\alignement\12S_tmp_muscle_align.fasta"],
text=True)
</code></pre>
<p>Notice also the raw strings <code>r"..."</code> to avoid having to double the backslashes, and the <code>text=True</code> keyword argument to instruct Python to implicitly decode the <code>bytes</code> you receive from the subprocess.</p>
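<p>If you do need to keep <code>Popen</code> (for example, to run several alignments concurrently), a sketch of the safe pattern is to use <code>communicate()</code>, which drains both pipes while waiting, instead of <code>wait()</code>:</p>
<pre><code>child = subprocess.Popen(str(muscle_cline), stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE, shell=True, text=True)
# communicate() reads stdout/stderr to completion, so the child can never
# block on a full pipe buffer the way it can with a bare wait()
stdout, stderr = child.communicate()
</code></pre>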
|
python|subprocess
| 0 |
1,907,253 | 65,549,441 |
Getting started with Socket IO - how to get response
|
<p>So I want to use Flask-SocketIO to emit a message to all clients, but I'm not sure, on the JavaScript side, how to receive the message.</p>
<pre><code>from flask import Flask, render_template
from flask_socketio import SocketIO
app = Flask(__name__)
app.config['SECRET_KEY'] = 'hidden'
socketio = SocketIO(app)
@app.route('/')
def sessions():
return render_template('index.html')
@socketio.on('test')
def test():
print('test my event')
socketio.emit('test response')
if __name__ == '__main__':
socketio.run(app, debug=True)
</code></pre>
<pre><code> <!DOCTYPE html>
<html lang="en">
<head>
<title>Test</title>
</head>
<body>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.min.js" integrity="sha512-bLT0Qm9VnAYZDflyKcBaQ2gg0hSYNQrJ8RilYldYQ1FxQYoCLtUjuuRuZo+fjqhx/qtq/1itJ0C2ejDxltZVFg==" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/3.0.4/socket.io.js" integrity="sha512-aMGMvNYu8Ue4G+fHa359jcPb1u+ytAF+P2SCb+PxrjCdO3n3ZTxJ30zuH39rimUggmTwmh2u7wvQsDTHESnmfQ==" crossorigin="anonymous"></script>
<script type="text/javascript">
var socket = io.connect('http://' + document.domain + ':' + location.port);
        socket.on('connect', function() {
            socket.emit('test')
        });
</script>
</body>
</html>
</code></pre>
<p>Pretty much, how do I get the response from socketio.emit('test response') ?</p>
|
<p>Add another socket.on() inside your script tag.</p>
<pre><code> <!DOCTYPE html>
<html lang="en">
<head>
<title>Test</title>
</head>
<body>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.min.js" integrity="sha512-bLT0Qm9VnAYZDflyKcBaQ2gg0hSYNQrJ8RilYldYQ1FxQYoCLtUjuuRuZo+fjqhx/qtq/1itJ0C2ejDxltZVFg==" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/3.0.4/socket.io.js" integrity="sha512-aMGMvNYu8Ue4G+fHa359jcPb1u+ytAF+P2SCb+PxrjCdO3n3ZTxJ30zuH39rimUggmTwmh2u7wvQsDTHESnmfQ==" crossorigin="anonymous"></script>
<script type="text/javascript">
var socket = io.connect('http://' + document.domain + ':' + location.port);
        socket.on('connect', function() {
            socket.emit('test')
        });
socket.on("test response", (response) => {
console.log(response);
});
</script>
</body>
</html>
</code></pre>
<p>The socket.io calls are not necessarily sequential, so you can have as many <code>socket.on</code> handlers as you need; based on which event arrives on the socket, the corresponding function will get triggered.</p>
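<p>Note that the Python side currently emits the event with no payload, so <code>response</code> in the callback will be undefined. If you want data to arrive in the callback, the server can pass it as a second argument (the dict here is just an illustration):</p>
<pre><code>socketio.emit('test response', {'msg': 'it worked'})
</code></pre>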
<p>Hope it helps! Do ping me in the comments if you have any doubts still.</p>
<p><strong>--EDIT--</strong></p>
<p>Pointer to the docs for exact info on this.</p>
<p><a href="https://socket.io/docs/v3/client-api/index.html#socket-on-eventName-callback" rel="nofollow noreferrer">https://socket.io/docs/v3/client-api/index.html#socket-on-eventName-callback</a></p>
|
python|flask|flask-socketio
| 1 |
1,907,254 | 65,587,772 |
macOS pyenv: pip install not working [SSL: CERTIFICATE_VERIFY_FAILED]
|
<p>I am trying to install numpy package using pip while working with pyenv (global version 3.8.6).</p>
<p>Command:</p>
<pre><code>pip install numpy
</code></pre>
<p>Output:</p>
<pre><code>WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1124)'))': /simple/numpy/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1124)'))': /simple/numpy/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1124)'))': /simple/numpy/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1124)'))': /simple/numpy/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1124)'))': /simple/numpy/
Could not fetch URL https://pypi.org/simple/numpy/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/numpy/ (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1124)'))) - skipping
ERROR: Could not find a version that satisfies the requirement numpy (from versions: none)
ERROR: No matching distribution found for numpy
</code></pre>
<p>What I tried to fix this issue:</p>
<ul>
<li>Reinstalling openssl using <code>brew reinstall openssl</code></li>
<li>Reinstalling pyenv using <code>brew reinstall pyenv</code></li>
<li>Reinstalling pyenv-virtualenv using <code>brew reinstall pyenv-virtualenv</code></li>
</ul>
<p>When I try to disable pyenv by removing <code>eval "$(pyenv init -)"</code> and <code>eval "$(pyenv virtualenv-init -)"</code> from the <code>~/.bash_profile</code>, it works fine as it uses the system Python version.</p>
<p>A short term solution is to add <code>--trusted-host pypi.org</code> flag, but I am not sure why it is not working without the flag.</p>
<p>Please help!</p>
|
<p>As seen <a href="https://bugs.python.org/issue28150" rel="nofollow noreferrer">here</a>, in previous versions of Python, Apple provided the OpenSSL packages, but they no longer do.</p>
<p>For a temporary fix, add pypi.org and files.pythonhosted.org as trusted hosts when using pip (pythonhosted.org actually hosts the files, but they are downloaded via pypi, which is why both are added):</p>
<pre><code>pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org pip setuptools
</code></pre>
<p>For a more permanent fix, install certifi and Scrapy:</p>
<pre><code>pip install certifi
pip install Scrapy
</code></pre>
<p><a href="https://stackoverflow.com/questions/25981703/pip-install-fails-with-connection-error-ssl-certificate-verify-failed-certi">Similar question 1</a> <br />
<a href="https://stackoverflow.com/questions/42509902/ssl-certificate-verify-failed-using-pip-to-install-packages/49910594">Similar question 2</a></p>
|
python|macos|ssl|openssl|pyenv
| 0 |
1,907,255 | 50,932,755 |
Arbitrarily nested dictionary from tuples
|
<p>Given a dictionary with tuples as keys (and numbers/scalars as values), what is a Pythonic way to convert to a nested dictionary? The hitch is that from input-to-input, the tuples are of arbitrary length.</p>
<p>Below, <code>d1</code>, <code>d2</code>, and <code>d3</code> demonstrate increasing nestedness:</p>
<pre><code>from itertools import product
d1 = dict(zip(product('AB', [0, 1]), range(2*2)))
d2 = dict(zip(product('AB', [0, 1], [True, False]), range(2*2*2)))
d3 = dict(zip(product('CD', [0, 1], [True, False], 'AB'), range(2*2*2*2)))
</code></pre>
<p>And their resulting nested versions would be:</p>
<pre><code># For d1
{'A': {0: 0, 1: 1}, 'B': {0: 2, 1: 3}}
# For d2
{'A': {0: {True: 0, False: 1}, 1: {True: 2, False: 3}},
'B': {0: {True: 4, False: 5}, 1: {True: 6, False: 7}}}
# Beginning of result for d3
{
'C': {
0: {
True: {
'A': 0
'B': 1
},
False: {
'A': 2,
'B': 3
},
1: # ...
</code></pre>
<hr>
<p>My attempts: I like this trick for initializing an empty data structure, which is given in a number of other SO answers:</p>
<pre><code>from collections import defaultdict
def nested_dict():
return defaultdict(nested_dict)
</code></pre>
<p>But am having trouble implementing this because the number of levels is uncertain. I could use something like:</p>
<pre><code>def nest(d: dict) -> dict:
res = nested_dict()
for (i, j, k), v in d.items():
res[i][j][k] = v
return res
</code></pre>
<p>But this would <em>only</em> work for <code>d2</code> because its keys have 3 levels (i, j, k) above. </p>
<p>Here's my attempt at a solution to generalizing this, but I'm guessing there is a simpler route.</p>
<pre><code>def set_arbitrary_nest(keys, value):
"""
>>> keys = 1, 2, 3
>>> value = 5
result --> {1: {2: {3: 5}}}
"""
it = iter(keys)
last = next(it)
res = {last: {}}
lvl = res
while True:
try:
k = next(it)
lvl = lvl[last]
lvl[k] = {}
last = k
except StopIteration:
lvl[k] = value
return res
>>> set_arbitrary_nest([1, 2, 3], 5)
{1: {2: {3: 5}}}
</code></pre>
|
<p>Just loop over each key, and use all but the last element of the key to add dictionaries. Keep a reference to the last dictionary so set, then use the last element in the key tuple to actually set a key-value pair in the output dictionary:</p>
<pre><code>def nest(d: dict) -> dict:
result = {}
for key, value in d.items():
target = result
for k in key[:-1]: # traverse all keys but the last
target = target.setdefault(k, {})
target[key[-1]] = value
return result
</code></pre>
<p>You could use <a href="https://docs.python.org/3/library/functools.html#functools.reduce" rel="nofollow noreferrer"><code>functools.reduce()</code></a> to handle the traversing-down-the-dictionaries work:</p>
<pre><code>from functools import reduce
def nest(d: dict) -> dict:
result = {}
traverse = lambda r, k: r.setdefault(k, {})
for key, value in d.items():
reduce(traverse, key[:-1], result)[key[-1]] = value
return result
</code></pre>
<p>I used <code>dict.setdefault()</code> rather than the auto-vivification <code>defaultdict(nested_dict)</code> option, as this produces a regular dictionary that won't further implicitly add keys when they don't yet exist.</p>
<p>Demo:</p>
<pre><code>>>> from pprint import pprint
>>> pprint(nest(d1))
{'A': {0: 0, 1: 1}, 'B': {0: 2, 1: 3}}
>>> pprint(nest(d2))
{'A': {0: {False: 1, True: 0}, 1: {False: 3, True: 2}},
'B': {0: {False: 5, True: 4}, 1: {False: 7, True: 6}}}
>>> pprint(nest(d3))
{'C': {0: {False: {'A': 2, 'B': 3}, True: {'A': 0, 'B': 1}},
1: {False: {'A': 6, 'B': 7}, True: {'A': 4, 'B': 5}}},
'D': {0: {False: {'A': 10, 'B': 11}, True: {'A': 8, 'B': 9}},
1: {False: {'A': 14, 'B': 15}, True: {'A': 12, 'B': 13}}}}
</code></pre>
<p>Note that the above solution is a clean O(N) loop (N being the length of the input dictionary), whereas a groupby solution as proposed by Ajax1234 has to <em>sort</em> the input to work, making that a O(NlogN) solution. That means that for a dictionary with 1000 elements, a <code>groupby()</code> would need 10.000 steps to produce the output, whereas an O(N) linear loop only takes 1000 steps. For a million keys, this increases to 20 million steps, etc.</p>
<p>Moreover, recursion in Python is slow, as Python can't optimise such solutions into an iterative approach. Function calls are relatively expensive, so using recursion can carry significant performance costs, as you greatly increase the number of function calls and, by extension, frame stack operations.</p>
<p>A time trial shows by how much this matters; using your sample <code>d3</code> and 100k runs, we easily see a 5x speed difference:</p>
<pre><code>>>> from timeit import timeit
>>> timeit('n(d)', 'from __main__ import create_nested_dict as n, d3; d=d3.items()', number=100_000)
8.210276518017054
>>> timeit('n(d)', 'from __main__ import nest as n, d3 as d', number=100_000)
1.6089267160277814
</code></pre>
|
python|python-3.x|dictionary
| 3 |
1,907,256 | 3,401,623 |
How to upload pdf and pptx files to google docs via the gdata python client?
|
<p>I'm using the gdata python client for the google docs api for a project. I use oauth authentication and all the dance, and have successfully uploaded .doc, .xls and every file type in <a href="http://code.google.com/intl/es/apis/documents/faq.html#WhatKindOfFilesCanIUpload" rel="nofollow noreferrer">Their FAQ</a>.
<strong>but</strong> I cannot seem to upload pdf files, even though is right there, listed on the supported filetypes. I tried with the latest version of gdata (released last week) to no avail. Also, I'd like to be able to upload .pptx files, though I realize that that extension is not supported.</p>
<p>Has anybody out there <strong><em>succesfully uploaded a pdf file</em></strong> to google docs via their gdata python client?</p>
|
<p>Done.
First did this
<a href="http://code.google.com/p/gdata-issues/issues/detail?id=591#c77" rel="nofollow noreferrer">http://code.google.com/p/gdata-issues/issues/detail?id=591#c77</a></p>
<p><em>but</em> now I was getting a bad request error <code>"invalid request uri"</code>. So I then discovered in another google thread that the uri for the v3.0 apis was no longer <code>http://docs.google.com/feeds/folders/private/full/<resource-id></code> but <code>http://docs.google.com/feeds/default/private/full/<resource-id>/contents</code></p>
<p>Hacked my copy of gdata to replace <code>folders</code> with <code>default</code> and append <code>/contents</code> and voilà, now it worked for pdfs and all the other supported stuff. </p>
<p>Haven't solved the pptx issue, though...</p>
|
python|google-docs|gdata|gdata-python-client
| -1 |
1,907,257 | 3,325,028 |
What are the technologies that i should use for developing SaaS. Software as a service
|
<p>I would like to know what tools and technologies are needed to start a software-as-a-service business. What are the requirements?</p>
|
<p>Kind of a silly question? You need a web application that provides a service. You can use any language or hardware that you want. Most likely you'll need a web server or the cloud. You'll need a profitable idea, some money, and some programmers that are worth a crap. You need some way to convince them to do the work for you, generally this involves money or pizza. You'll need to not run out of money before you have something useful that somebody in the world wants to pay you for. Then you need some way for them to pay you. Good luck.</p>
<p>Edit: As far as technologies, a good programmer can write it in 5 different languages for you with a similar result. Let them decide what to use and focus on clearly defining requirements, like scalability, performance, business logic requirements</p>
<p>And you'll need some common sense.</p>
<p>If you're developing it for yourself, stick with what you're familiar with unless there's a business requirement that it can't meet. You should be concerned with the fastest way to get money in the bank and be profitable, which may force you to use a particular technology depending on requirements. It seems you know python, which is pretty good for web stuff after all.</p>
|
c#|java|python
| 1 |
1,907,258 | 35,052,885 |
Elasticsearch-py update API with script
|
<p>I'm trying to use the Update API via the elasticsearch-py python client on ES 2.1.1 and I'm having trouble.</p>
<pre><code>es.index(index='boston', doc_type='stem_map', id=111, body={'word': 'showing', 'counter': 29})
es.get(index='boston', doc_type='stem_map', id=111)
{'_id': '111',
'_index': 'boston',
'_source': {'counter': 29, 'word': 'showing'},
'_type': 'stem_map',
'_version': 1,
'found': True}
upscript = {
'script': {
'inline': 'ctx._source.counter += count',
'params': {
'count': 100
}
}
}
</code></pre>
<p>Then I tried both of the following:</p>
<pre><code>es.update(index='boston', doc_type='stem_map', id=111, body=upscript)
es.update(index='boston', doc_type='stem_map', id=111, script=upscript)
</code></pre>
<p>I'm getting the following error:</p>
<pre><code>RequestError: TransportError(400, 'illegal_argument_exception', '[John Walker][127.0.0.1:9300][indices:data/write/update[s]]')
</code></pre>
<p>Does anybody know what I'm doing wrong?</p>
<p>UPDATE: This also does not work</p>
<pre><code>es.index(index='boston', doc_type='stem_map', id='111', body={'word': 'showing', 'counter': 29})
{'_id': '111',
'_index': 'boston',
'_shards': {'failed': 0, 'successful': 1, 'total': 2},
'_type': 'stem_map',
'_version': 1,
'created': True}
upscript = {
"script" : "ctx._source.counter += count",
"params" : {
'count': 100
}
}
es.update(index='boston', doc_type='stem_map', id='111', body=upscript)
WARNING:elasticsearch:POST /boston/stem_map/111/_update [status:400 request:0.002s]
...
RequestError: TransportError(400, 'illegal_argument_exception', '[Atum][127.0.0.1:9300][indices:data/write/update[s]]')
</code></pre>
<p>ANSWER: My mistake. I didn't realize I had to enable scripting on ES first by changing the config/elasticsearch.yml file. My original code works after enabling scripting.</p>
|
<p>I think your <code>upscript</code> is the wrong format. Try this</p>
<pre><code>upscript = {
"script" : "ctx._source.counter += count",
"params" : {
'count': 100
}
}
</code></pre>
<p>And use <code>body=upscript</code>. </p>
<p>Using <code>script=upscript</code> doesn't work because that requires a url-encoded string of <code>ctx._source.counter += count</code> with the body being <code>{"params" : { "count" : 100 }}</code>. </p>
|
python|elasticsearch|elasticsearch-py
| 2 |
1,907,259 | 34,874,383 |
Changing an object's class while maintaining its attributes and functions
|
<p>If I have 2 classes defined like this:</p>
<pre><code>class A(object):
a = 10
class B(A):
b = 20
</code></pre>
<p>If I create an object:</p>
<pre><code>c = A()
</code></pre>
<p>And then do:</p>
<pre><code>c.__class__ = B
</code></pre>
<p>Is it a valid way to change ('upgrading') the class of the object, maintaining the primary class attributes and methods and gaining the secondary class attributes and methods?</p>
<p>If true, this only makes sense for this cases where the class to which we are changing the object inherits from the previous class? Best regards.</p>
<p>UPDATED:</p>
<p>To give more context.
I have the following class EmbrionDevice.</p>
<pre><code>class EmbrionDevice(object):
def __init__(self, device_info, *args, **kwargs):
super(EmbrionDevice, self).__init__(*args, **kwargs)
# Serial number unique 64-bit address factory-set
self.shl = device_info['source_addr_long']
# 16-bit network address
self.my = device_info['source_addr']
# Node identifier
self.ni = device_info['node_identifier']
# Parent Address
self.pa = device_info['parent_address']
# Device type, 0-coordinator, 1-router, 2-End Device
self.dt = device_info['device_type']
# Device type identifier xbee or Digi device
self.dd = device_info['device_type_identifier']
# Device attributes summary in a dictionary
self.info = device_info
# Embrion future function
self.function_identifier = None
# Device state definition
self.state = DEV_STATE_CODES['embrion']
self.status = DEV_STATUS_CODES['no status']
</code></pre>
<p>That i would later like to change/upgrade, to one of the following specific device classes:</p>
<pre><code>class PassiveDevice(EmbrionDevice):
pass
class ActiveDevice(EmbrionDevice):
pass
</code></pre>
<p>Basically i wanted to ease my copy, avoiding the copy of all the attributes.</p>
|
<p>This is not a valid way to change the class of an instance object. A simple example can demonstrate it:</p>
<pre><code>class A(object):
a = 10
def __init__(self):
self.b = 20
self.c = 30
class B(A):
d = 35
def __init__(self):
self.x = 70
self.y = 80
c = A()
c.__class__ = B
print c
</code></pre>
<blockquote>
<p><code><__main__.B object at 0x02643F10></code></p>
</blockquote>
<p>So now <code>c</code> is instance of class B, Try printing instance attributes:</p>
<pre><code>print c.x
print c.y
</code></pre>
<p>It says:</p>
<blockquote>
<p><code>AttributeError: 'B' object has no attribute 'x'</code></p>
</blockquote>
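<p>What <em>does</em> survive the reassignment are the instance attributes that <code>A.__init__</code> already set, plus class attributes looked up on <code>B</code>; only <code>B.__init__</code> never ran, so <code>x</code> and <code>y</code> were never created. Continuing the example above:</p>
<pre><code>print c.b, c.c   # 20 30 - set by A.__init__, still present
print c.d        # 35    - class attribute found on B
</code></pre>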
|
python|python-2.7|python-internals
| 2 |
1,907,260 | 26,822,591 |
When I try to multiply a number it just duplicates it that many times (Python)
|
<pre><code>#Initialising Variables
inputISBN = input("What is your 10 digit ISBN number? ")
itemInList = 0
timesNumber = 11
listISBN = []
timesISBN = []
#Checking if the input is only numbers
if len(inputISBN) == 10 and inputISBN.isdigit() :
while itemInList < 10 :
listISBN.append(inputISBN[itemInList])
itemInList = itemInList + 1
print(listISBN)
itemInList = 0
while itemInList < 10 :
num = listISBN[itemInList]
int(num)
timesISBN.append(num * timesNumber)
itemInList = itemInList + 1
timesNumber = timesNumber - 1
print(timesISBN)
else:
print("Invalid input")
</code></pre>
<p>HALP. It just prints the input number 11 times, then 10 times, ARRAGGH.
This code is meant to multiply your input by 11, then 10, and so on, but it only duplicates the number that many times. I don't understand why this does not work.</p>
|
<p>You need to store the return value of the <code>int()</code> call; <code>int(num)</code> returns a new integer but does not modify <code>num</code> itself. Since <code>num</code> stays a string, <code>num * timesNumber</code> repeats the text that many times instead of doing arithmetic:</p>
<pre><code>num = listISBN[itemInList]
num = int(num)
</code></pre>
|
python
| 0 |
1,907,261 | 26,581,726 |
How to write recursive number guess function?
|
<p>I am trying to make a program that uses recursion to guess a number the user is thinking of. I am a bit stumped. I am having trouble with the second block of code, the <code>def binarySearch(bottom,top)</code> function. The program is supposed to start out by guessing 50. If "N", it then asks whether the number is greater or lower. If G, it should then ask 75, and so on. I am having trouble setting the values for the different answers. Also, how should I go about it so it will not give me floating-point numbers? </p>
<pre><code>def guess(x):
response = input("Is the number " + str(x) + "? Enter Y/N:\n")
if response == "Y":
print("YAY! I got it!")
return response
else:
print("BOO. . . Ok, I'll keep trying.")
response = input("Is the number greater than or less than " + str(x) + "?Enter G/L:\n")
while response != "G" and reponse != "L":
print("Sorry, that isn't a valid response/")
response = input("Pleae try again. Enter G/L:\n")
return response
</code></pre>
<p>
<pre><code>def binarySearch(bottom,top):
if bottom == top: #base case
return bottom
elif guess(top+bottom/2) == "G":
top = top + bottom
return binarySearch(middle,top)
elif guess == "L":
top = (top/2)
return binarySearch(bottom,top)
binarySearch(bottom,top)
</code></pre>
|
<p>To avoid floats, use <code>int(number)</code>, which converts a value to an integer (integer division with <code>//</code> works too).</p>
<p>The next step is to set boundaries on the number being guessed, either hard-coded or chosen by the user. That is what makes proper binary guessing possible: binary search works by tracking the lowest and highest values the number could still be.</p>
<p>If the number being guessed is greater than the guess, the next guess should be <code>int((guess + top boundary) / 2)</code>. Note that the way it is in your code, <code>guess(top+bottom/2)</code>, the division happens first, so you are not actually taking the midpoint. If the number is less, take <code>int((guess + bottom boundary) / 2)</code>. Finally, if the number matches, we have a success.</p>
<p>Thus, there is a need to constantly keep four things in check: the guess, the top boundary, the bottom boundary, and the user's actual answer.</p>
<p>Regarding the recursion, make a variable newGuess based on the user input. newGuess will be the midpoint in your code, and should become the new low or new high depending on the user's response; a minimal sketch follows.</p>
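<p>A minimal sketch of that recursive structure, reusing the question's <code>guess()</code> helper (the 0-100 starting range is an assumption):</p>
<pre><code>def binary_search(bottom, top):
    middle = (bottom + top) // 2   # integer midpoint, no floats
    answer = guess(middle)
    if answer == "Y":
        return middle
    elif answer == "G":            # number is greater, raise the lower bound
        return binary_search(middle + 1, top)
    else:                          # "L": number is lower, drop the upper bound
        return binary_search(bottom, middle - 1)

binary_search(0, 100)
</code></pre>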
|
python|recursion
| 1 |
1,907,262 | 45,244,043 |
Check all possible labels (y) of an array or Dataframe
|
<p>I have a dataset with 12k samples, where each sample has one label y. How can I extract all possible outputs from these 12k samples? (The outputs can vary across 50 to 60 different values... I don't know exactly.)</p>
<p>Is there a built-in function for this? Something other than looping over all samples with a for loop ~60 times; that waste of processing bothers me and makes the code ugly.</p>
<p>Note: I don't want a list with the y of each sample, I just want to know how much y I have so I can set the 'number of outputs' of my learning model.</p>
<p>I solved it with:</p>
<pre><code> notfound = 0
n_outputs = 0
for num in range(1,80):
temp = n_outputs
try:
for i in range(len(y)):
if int(y[i]) == num:
n_outputs += 1
raise StopIteration
except StopIteration:
pass
if temp == n_outputs:
notfound += 1
if notfound == 3:
break
print(n_outputs)
</code></pre>
<p>But is there another way?</p>
|
<p>If you have the data in the form of arrays, convert it into a pandas dataframe first and then do <code>data['output'].unique()</code>. It will give you a list of unique outputs. <code>data['output'].nunique()</code> gives you the number of unique values in your output column. <code>data</code> is your dataframe and <code>output</code> is your label column.</p>
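<p>For example, with a toy label column (names assumed):</p>
<pre><code>import pandas as pd

data = pd.DataFrame({'output': [3, 7, 3, 42, 7]})

print(data['output'].unique())    # [ 3  7 42]
print(data['output'].nunique())   # 3 -> the number of outputs for the model
</code></pre>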
|
python|pandas|numpy|dataframe
| 1 |
1,907,263 | 45,208,387 |
Pandas: count rows between two date columns
|
<p>I need to count the rows that fall between the dates in the columns date_from and date_to. I have this DataFrame:</p>
<pre><code>   date_from    date_to
0 2017-07-01 2017-07-03
1 2017-07-01 2017-07-05
2 2017-07-02 2017-07-04
3 2017-07-03 2017-07-04
</code></pre>
<p>I need, for each date, the count of rows whose date_from/date_to range contains it, like this:</p>
<pre><code> count
date
2017-07-01 2
2017-07-02 3
2017-07-03 3
2017-07-04 1
</code></pre>
<p>I have tried:</p>
<pre><code>df.groupby(['date_from','date_to']).size()
</code></pre>
<p>but pandas counts each row only once.</p>
<p><strong>EDIT:</strong></p>
<p>I need to count how many rows are between two dates. A dataframe that has only this one row:</p>
<pre><code> date_from date_to
0 2017-07-01 2017-07-03
</code></pre>
<p>should have this output:</p>
<pre><code>2017-07-01    1
2017-07-02    1
</code></pre>
|
<p>I think you need:</p>
<ul>
<li>first substract one day from <code>date_to</code></li>
<li>reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>stack</code></a> and create <code>DatetimeIndex</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a></li>
<li><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.resample.html" rel="nofollow noreferrer"><code>resample</code></a> by <code>day</code>s and aggregate by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.resample.Resampler.ffill.html" rel="nofollow noreferrer"><code>ffill</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.resample.Resampler.count.html" rel="nofollow noreferrer"><code>count</code></a></li>
<li>last use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> + <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow noreferrer"><code>size</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>value_counts</code></a></li>
</ul>
<hr>
<pre><code>df['date_to'] = df['date_to'] - pd.to_timedelta(1, unit='d')
df = df.stack().rename_axis(('a','b')).reset_index(name='c').set_index('c')
df = df.groupby('a').resample('d').ffill().groupby('c').size().reset_index(name='a')
print (df)
c a
0 2017-07-01 2
1 2017-07-02 3
2 2017-07-03 3
3 2017-07-04 1
</code></pre>
<p>Similar solution:</p>
<pre><code>df['date_to'] = df['date_to'] - pd.to_timedelta(1, unit='d')
df = df.stack().rename_axis(('a','b')).reset_index(name='c').set_index('c')
df = df.groupby('a').resample('d')['b'].size().reset_index()
#
df = df['c'].value_counts().sort_index().rename_axis('a').reset_index()
print (df)
a c
0 2017-07-01 2
1 2017-07-02 3
2 2017-07-03 3
3 2017-07-04 1
</code></pre>
<p>And another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.itertuples.html" rel="nofollow noreferrer"><code>itertuples</code></a>:</p>
<pre><code>df['date_to'] = df['date_to'] - pd.to_timedelta(1, unit='d')
df = pd.concat([pd.Series(r.Index, pd.date_range(r.date_from, r.date_to))
                for r in df.itertuples()]).reset_index()
df = df['index'].value_counts().sort_index().rename_axis('a').reset_index(name='c')
print (df)
a c
0 2017-07-01 2
1 2017-07-02 3
2 2017-07-03 3
3 2017-07-04 1
</code></pre>
|
python|pandas|dataframe
| 2 |
1,907,264 | 45,014,325 |
Most efficient and most pythonic way to create a NumPy array within a loop
|
<p>I'm currently trying to figure out the most efficient way to create a numpy array in a loop, here are the examples:</p>
<pre><code>import numpy as np
from time import time
tic = time()
my_list = range(1000000)
a = np.zeros((len(my_list),))
for i in my_list:
a[i] = i
toc = time()
print(toc-tic)
</code></pre>
<p>vs</p>
<pre><code>tic = time()
a = []
my_list = range(1000000)
for i in my_list:
a.append(i)
a = np.array(a)
toc = time()
print(toc-tic)
</code></pre>
<p>I was expecting the second one to be much slower than the first, because of the need for new memory at each step of the for loop. However, they take roughly the same time, and I was wondering why, just out of curiosity, since I can do it either way.</p>
<p>I actually want to write a simple numpy array with data extracted from a dataframe and it looks quite messy. I was wondering if there would be a more pythonic way to do it. I have this dataframe and a list of labels that I need and the simpliest idea would be to do the following (the value I need is the last one of each column):</p>
<pre><code>vars_outputs = ["x1", "x2", "ratio_x1_x2"]
my_df = pd.read_excel(path)
outpts = np.array(my_df[vars_outputs][-1])
</code></pre>
<p>However it is not possible because some of the labels I want are not directly available in the dataframe : for example the ratio_x1_x2 need to be computed from the two first columns. So I added a dict with the missing label and the way to compute them (it's only ratio):</p>
<pre><code>missing_labels = {"ratio_x1_x2" : ["x1", "x2"]}
</code></pre>
<p>and check the condition and create the numpy array (hence the previous question about efficiency)</p>
<pre><code>outpts = []
for var in vars_outputs:
if var in missing_labels.keys():
outpts.append(my_df[missing_labels[var][0]][-1]/my_df[missing_labels[var][1]][-1])
else:
outpts.append(my_df[var][-1])
outpts = np.array(outpts)
</code></pre>
<p>It seems to me way too complicated but I cannot think of an easier way to do so (especially because I need to have this specific order in my numpy output array)</p>
<p>The other idea I have is to add columns in the dataframe with the operation I want but because there are roughly 8000 labels I don't know if it's the best to do because I would have to look into all these labels after this preprocessing step</p>
<p>Thanks a lot</p>
|
<p>Here is the final code; np.fromiter() does the trick, and a list comprehension reduces the number of lines:</p>
<pre><code>df = pd.read_excel(path)
print(df.columns)
</code></pre>
<p>It outputs ['x1', 'x2']</p>
<pre><code>vars_outputs = ["x1", "x2", "ratio_x1_x2"]
missing_labels = {"ratio_x1_x2" : ["x1", "x2"]}
it = [df[missing_labels[var][0]].iloc[-1]/df[missing_labels[var][1]].iloc[-1] if var in missing_labels
else df[var].iloc[-1] for var in vars_outputs]
t = np.fromiter(it, dtype = float)
</code></pre>
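<p>As a side note (assuming the same dataframe layout), swapping the comprehension's square brackets for parentheses turns it into a generator expression, so <code>np.fromiter()</code> consumes values lazily without building an intermediate list:</p>
<pre><code>it = (df[missing_labels[var][0]].iloc[-1] / df[missing_labels[var][1]].iloc[-1]
      if var in missing_labels else df[var].iloc[-1]
      for var in vars_outputs)
t = np.fromiter(it, dtype=float)
</code></pre>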
|
python|numpy
| 1 |
1,907,265 | 61,306,931 |
Retrieve Large Data From MySQL DB With Chunks And Save Them Dataframe Pandas
|
<p>I want to retrieve about 100 million rows and 30 columns of data from an SQL database into a dataframe where I can sort and filter based on certain requirements. I only have 2 Gig memory. Everything comes to a standstill even though I am using chunksize. Here is my code.</p>
<pre><code>import pymysql
chunksize = 100
import pandas as pd
import pymysql.cursors
from urllib import parse
sqlEngine = create_engine('mysql+pymysql://username:%s@localhost/db' % parse.unquote_plus('password'))
dbConnection = sqlEngine.connect()
for chunk in pd.read_sql("select * from db.db_table", dbConnection, chunksize = chunksize):
print(chunk)
    # Do something with chunk (the dataframe holding up to chunksize rows)
</code></pre>
<p>I have reduced my chunksize but still not getting anything.</p>
|
<p>To elaborate on my comment, something like this.</p>
<p>I foresee you're going to have a bad time trying to fit 100 million rows x 30 columns in 2 gigabytes of memory, though.</p>
<pre><code>import itertools

import pandas as pd

df = None
for offset in itertools.count(step=chunksize):
    print("Reading chunk %d..." % offset)
    query = "select * from db.db_table order by id limit %d offset %d" % (chunksize, offset)
    chunk_df = pd.read_sql(query, dbConnection)
    if chunk_df.empty:
        # No data in new chunk, so we probably have it all
        break
    if df is None:
        df = chunk_df
    else:
        df = pd.concat([df, chunk_df], copy=False)

# do things with df
</code></pre>
|
python|mysql|pandas|large-data
| 1 |
1,907,266 | 61,345,310 |
Assigning a new value to a python variable
|
<p>I'm extremely new to Python and working on my first text-based game. In this game, I would like player to choose from a list of characters. In the course of the game, I would also like to give the player the ability to change the character (which ultimately affects the outcome of the game). I'm unable to understand what I need to do to ensure the new choice of character is saved appropriately. Below is a stripped down version of the code:</p>
<pre><code>def identity_choice():
identity = input("""Your options are:
1. Ordinary Tourist
2. Chef
Please choose a number""")
if identity == "1":
print("You now look like a tourist!")
return identity
elif identity == "2":
print("You are now a chef")
return identity
else:
print("Sorry, I don't understand that.")
return identity_choice()
def action(identity):
if identity == "1":
print("You can't do what you need to as a tourist")
response = input("Would you like to change your disguise?")
if "y" in response:
identity_choice()
else:
print("You have chosen to retain your identity")
identity = identity_choice()
action(identity)
</code></pre>
|
<p>The variable "identity" is used only local to the function(s). If you need the variable global, just declare the variable outside all functions, and inside the functions you enter the line "global identity".</p>
|
python
| 1 |
1,907,267 | 58,085,641 |
How to use key press events in PyQt5
|
<p>I want the "Add" function to run when I input a number into "LE1" and press the "Enter" key on the keyboard. I also want the line edit to clear its text when I select it for editing. </p>
<pre><code>from PyQt5 import QtWidgets, QtCore
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QLineEdit, QLabel, QGridLayout, QWidget, QDialog
class MyWindow(QtWidgets.QMainWindow):
def __init__(self):
super(MyWindow, self).__init__()
centralWidget = QWidget()
self.setCentralWidget(centralWidget)
self.Glayout = QGridLayout(centralWidget)
self.LE1 = QLineEdit('Input Number',self)
self.LE1.keyPressEvent(self.KPE)
Label1 = QLabel('+ 1 =',self)
self.LE2 = QLineEdit(self)
self.Glayout.addWidget(self.LE1)
self.Glayout.addWidget(Label1)
self.Glayout.addWidget(self.LE2)
def Add(self):
Num = float(self.LE1.text())
math = Num + 1
ans = str(math)
self.LE2.setText(ans)
def KPE(self):
if event.key() == Qt.Key_Enter:
self.Add()
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
window = MyWindow()
window.show()
sys.exit(app.exec_())
</code></pre>
|
<p>keyPressEvent is a method; the line <code>self.LE1.keyPressEvent(self.KPE)</code> calls it rather than overriding it, and overriding it that way would lose the default behavior anyway. It is also unnecessary, since QLineEdit has the returnPressed signal that notifies you when <kbd>Enter</kbd> is pressed.</p>
<p>On the other hand, converting a string to a float can throw an exception, so you should handle that case. A better option is to use a widget that only accepts numerical values, such as QSpinBox or QDoubleSpinBox, or at least to restrict what can be typed into the QLineEdit with an appropriate QValidator.</p>
<p>Finally, do not use the word <code>math</code> as a variable name, since it shadows the standard-library module of the same name and could cause you problems in the future.</p>
<p>Considering the above, the solution is:</p>
<pre class="lang-py prettyprint-override"><code>from PyQt5.QtWidgets import (
QApplication,
QGridLayout,
QLineEdit,
QLabel,
QMainWindow,
QWidget,
)
class MyWindow(QMainWindow):
def __init__(self, parent=None):
super(MyWindow, self).__init__(parent)
self.LE1 = QLineEdit("Input Number")
self.LE1.returnPressed.connect(self.add)
Label1 = QLabel("+ 1 =")
self.LE2 = QLineEdit()
centralWidget = QWidget()
self.setCentralWidget(centralWidget)
layout = QGridLayout(centralWidget)
layout.addWidget(self.LE1)
layout.addWidget(Label1)
layout.addWidget(self.LE2)
def add(self):
try:
num = float(self.LE1.text())
num += 1
self.LE2.setText(str(num))
except ValueError:
pass
if __name__ == "__main__":
import sys
app = QApplication(sys.argv)
window = MyWindow()
window.show()
sys.exit(app.exec_())
</code></pre>
|
python|pyqt|pyqt5
| 1 |
1,907,268 | 58,152,315 |
df.set_index() Not Working as What I Expected
|
<p><a href="https://i.stack.imgur.com/er5ND.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/er5ND.png" alt="Setting the Index"></a></p>
<p>From the above, you can see that I have set the index to 'index'.
My expectation is to be able to use the column 'index' for dropping rows and just use the column 'Barangay' as a feature not as an index of my data frame.</p>
<p><a href="https://i.stack.imgur.com/osrzW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/osrzW.png" alt="Using Index to Drop records"></a></p>
<p>As seen above, rows are dropped still using the 'Barangay' column as a reference index. I tried dropping using index [0, 1] but returns an error. </p>
|
<p>You need assign back:</p>
<pre><code>city_prop = city_prop.set_index('index')
</code></pre>
<p>Or:</p>
<pre><code>city_prop.set_index('index', inplace = True)
</code></pre>
<hr>
<p>EDIT:</p>
<pre><code>df = pd.read_csv('CityProperEskwenilaExtraIndicators.csv',
skiprows=1,
header=None,
sep=';',
index_col=[0,1]).T
</code></pre>
<hr>
<pre><code>print (df.head())
0 Barangay Longitude Latitude Poverty rate Terrain type \
1 # See annex See annex Per 100 inhabitants See annex
2 1 27,67231183 66,3112793 18 Difficult
3 2 65,15620167 53,32027629 54 Difficult
4 3 34,94438385 89,7970517 63 Difficult
5 4 10,97542641 84,26323733 42 Normal
6 5 26,05436012 61,30689679 70 Difficult
0 Roads needing repair Access to WASH Access to clean water \
1 kilometers of road % of population % of population
2 55,40469584 50,2 71,2
3 14,08228761 51,8 88,9
4 33,20044684 77 97,4
5 1,695918463 74,7 52,1
6 85,08259271 70,1 99,3
0 Violent incidents Homicides
1 rate per 100K rate per 100K
2 7,72 6,833797715
3 8,3 5,513650409
4 3,72 2,931838433
5 6,26 5,883509349
6 6,55 5,348430398
</code></pre>
<hr>
<pre><code>#replace ,
df = df.replace(',','.', regex=True)
#remove second level
df.columns = df.columns.droplevel(1)
#convert columns to numeric
excluded = ['Terrain type','Poverty rate']
cols = df.columns.difference(excluded)
#to floats
df[cols] = df[cols].astype(float)
#to integer
df['Poverty rate'] = df['Poverty rate'].astype(int)
print (df.head())
0 Barangay Longitude Latitude Poverty rate Terrain type \
2 1.0 27.672312 66.311279 18 Difficult
3 2.0 65.156202 53.320276 54 Difficult
4 3.0 34.944384 89.797052 63 Difficult
5 4.0 10.975426 84.263237 42 Normal
6 5.0 26.054360 61.306897 70 Difficult
0 Roads needing repair Access to WASH Access to clean water \
2 55.404696 50.2 71.2
3 14.082288 51.8 88.9
4 33.200447 77.0 97.4
5 1.695918 74.7 52.1
6 85.082593 70.1 99.3
0 Violent incidents Homicides
2 7.72 6.833798
3 8.30 5.513650
4 3.72 2.931838
5 6.26 5.883509
6 6.55 5.348430
</code></pre>
<hr>
<pre><code>print (df.dtypes)
0
Barangay float64
Longitude float64
Latitude float64
Poverty rate int32
Terrain type object
Roads needing repair float64
Access to WASH float64
Access to clean water float64
Violent incidents float64
Homicides float64
dtype: object
</code></pre>
|
python|pandas
| 2 |
1,907,269 | 18,534,663 |
Python Print Output to Email
|
<p>I have a script which prints variables (set by user) perfectly.</p>
<pre><code>os.system('clear')
print "Motion Detection Started"
print "------------------------"
print "Pixel Threshold (How much) = " + str(threshold)
print "Sensitivity (changed Pixels) = " + str(sensitivity)
print "File Path for Image Save = " + filepath
print "---------- Motion Capture File Activity --------------"
</code></pre>
<p>I now wish to email this output to myself to confirm when the script is running. I have included email in the script using <code>email.mime.text</code> and <code>multipart</code>. But the output no longer shows the variables' values, just the code.</p>
<pre><code> body = """ Motion Detection Started \n Pixel Threshold (How much) = " + str(threshold) \n Sensitivity (changed Pixels) = " + str(sensitivity) \n File Path for Image Save = " + filepath """
</code></pre>
<p>I'm sure it is the """ wrapper, but I'm unclear on what I should use instead.</p>
|
<p>in python <code>"""</code> quotes mean to take everything between them literally.</p>
<p>The easiest solution here would be to define a string <code>myString=""</code>, then at every print statement, instead of printing you can append to your string with <code>myString=myString+"whatever I want to append\n"</code></p>
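<p>A minimal sketch with the variables from your script (threshold, sensitivity and filepath are assumed to be defined already):</p>
<pre><code>body = "Motion Detection Started\n"
body += "------------------------\n"
body += "Pixel Threshold (How much) = " + str(threshold) + "\n"
body += "Sensitivity (changed Pixels) = " + str(sensitivity) + "\n"
body += "File Path for Image Save = " + filepath + "\n"
# body can now be passed to email.mime.text.MIMEText(body)
</code></pre>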
|
python
| 2 |
1,907,270 | 55,362,219 |
How can i add an image to a header of a xlsx file using xlsxwriter and python?
|
<p>I want to add an image to the header of the xlsx but it's showing nothing on the generated file (our application picks a .csv, then converts to .xlsx using .py file with xlsxwriter, and then to .pdf using a libreoffice command)</p>
<p>We've already tried with different image formats and sizes but it made no difference.</p>
<p>Also tried with the examples from the library (<a href="https://xlsxwriter.readthedocs.io/example_headers_footers.html?highlight=set_header" rel="nofollow noreferrer">https://xlsxwriter.readthedocs.io/example_headers_footers.html?highlight=set_header</a>) with no luck.</p>
<p>We used the <code>worksheet.insert_image()</code>, it adds the image but not in the header. This is our current result: <a href="https://ibb.co/QNXv8bM" rel="nofollow noreferrer">https://ibb.co/QNXv8bM</a></p>
<p>We want to add the image directly to the header (maybe using <code>set_header()</code>), but so far our tries with this method haven't produced any results. When we use <code>set_header()</code> to place the image, it shows nothing in the header.</p>
<p>Here is a piece of the python file that we are using:</p>
<pre><code>def create_worksheet(workbook, data, image, p_header_text):
'''
Creates and formats worksheet
:param workbook: Main workbook
:type workbook: xlsxwriter.Workbook
:param data: dict with data to use in the worksheet
:type data: dict
data example:
data = {'headings': [head1, head2, ..., headn], 'rows': [[data1, ..., datan], ...]}
:return: Nothing
'''
worksheet = workbook.add_worksheet()
### Page Setup
worksheet.set_margins(top=1.4)
worksheet.set_landscape()
worksheet.hide_gridlines(2)
#worksheet.set_paper(9) # 9 = A4
worksheet.fit_to_pages(1, 0)
### Header and footer
header_text = p_header_text
#worksheet.set_header('&C&16&"Calibri,Bold"{}'.format(header_text))
worksheet.set_header('&L&G', {'image_left': '/home/reports/LTA-logo.jpg'})
worksheet.set_footer('&L&D&RPage &P of &N')
#worksheet.insert_image('A1', '/home/reports/LTA-logo.jpg', {'x_offset': 0, 'y_offset': 0})
#worksheet.set_header('&C&G', {'image_left': '/home/reports/LTA-logo.jpg'})
### Create table
create_table(worksheet, data)
</code></pre>
<p><strong>Note</strong>: The <code>worksheet.set_header('&C&16&"Calibri,Bold"{}'.format(header_text))</code> works fine, it shows the text on the header. The problem is when we try to put the image...</p>
<p>The expected result is to make the image appear in the header, left aligned with the title as shown on this picture: <a href="https://ibb.co/vQTytK2" rel="nofollow noreferrer">https://ibb.co/vQTytK2</a></p>
<p><strong>Note 2</strong>: For business reasons (company) i cannot show the data on the print screens</p>
|
<p>It should work with XlsxWriter. You just need to build the format string in the right way with the <code>&L</code> left part and the <code>&C</code> centre part.</p>
<p>For example:</p>
<pre><code>import xlsxwriter
workbook = xlsxwriter.Workbook('headers_footers.xlsx')
worksheet = workbook.add_worksheet('Image')
# Adjust the page top margin to allow space for the header image.
worksheet.set_margins(top=1.3)
worksheet.set_header('&L&[Picture]&C&16&"Calibri,Bold"Revenue Report',
{'image_left': 'python-200x80.png'})
workbook.close()
</code></pre>
<p>Note, I use the more explicit <code>&[Picture]</code> in the example but <code>&G</code> works as well.</p>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/RnzQt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RnzQt.png" alt="enter image description here"></a></p>
|
python|unix|libreoffice|xlsxwriter
| 4 |
1,907,271 | 55,455,219 |
Probability of re occurring an event
|
<p>I have a dataframe like below:</p>
<pre><code>ItemNumber ItemName
264 400
264 420
264 400
264 420
264 420
513 508
513 508
513 400
513 400
513 126
513 126
</code></pre>
<p>Here I would like to see the frequency of each <code>ItemName</code>, and the probability of an <code>ItemName</code> reoccurring with respect to <code>ItemNumber</code>.</p>
<p>I have tried using the <code>groupby</code> function, but I'm not getting the desired format with the approach below:</p>
<pre><code>import numpy as np
import pandas as pd
ByItemName = df.groupby(['ItemName'])
</code></pre>
<p>My desired output:</p>
<pre><code>ItemNumber ItemName ItemNameFrequency
264 400 2
264 420 3
513 508 2
513 400 2
513 126 2
</code></pre>
|
<p>Perhaps:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'ItemNumber' : ['264', '264', '264', '264','264','513','513','513','513', '513','513'], 'ItemName' : ['400','420','400','420','420','508','508','400','400', '126','126']})
df = df.groupby(['ItemNumber', 'ItemName']).size().reset_index(name = 'ItemNameFrequency')
print(df)
</code></pre>
<p><strong>OUTPUT</strong>:</p>
<pre><code> ItemNumber ItemName ItemNameFrequency
0 264 400 2
1 264 420 3
2 513 126 2
3 513 400 2
4 513 508 2
</code></pre>
|
python|pandas
| 1 |
1,907,272 | 57,536,576 |
Python 3: how to scrape research results from a website using CSFR?
|
<p>I am trying to scrape the research outcome of a website listing French crowdlending Fintech: <a href="https://www.orias.fr/web/guest/search" rel="nofollow noreferrer">https://www.orias.fr/web/guest/search</a></p>
<p>Doing it manually, I select (IFP) in the radio button and then it provides me with 13 results page with 10 results per page. Each results has a hyperlink I would also like to get information from into the final table.</p>
<p>My main problem seems to come from CSRF, where in the result address, there is:
p_auth=8mxk0SsK
So I cannot simply loop through results pages by changing "p=2" to "p=13" in the link:
<a href="https://www.orias.fr/search?p_auth=8mxk0SsK&p_p_id=intermediaryDetailedSearch_WAR_oriasportlet&p_p_lifecycle=1&p_p_state=normal&p_p_mode=view&p_p_col_id=column-1&p_p_col_count=1&_intermediaryDetailedSearch_WAR_oriasportlet_myaction=fullSearch" rel="nofollow noreferrer">https://www.orias.fr/search?p_auth=8mxk0SsK&p_p_id=intermediaryDetailedSearch_WAR_oriasportlet&p_p_lifecycle=1&p_p_state=normal&p_p_mode=view&p_p_col_id=column-1&p_p_col_count=1&_intermediaryDetailedSearch_WAR_oriasportlet_myaction=fullSearch</a></p>
<p>If I try to use a VPN manually, the wesite adress become "stable":
<a href="https://www.orias.fr/search?p_p_id=intermediaryDetailedSearch_WAR_oriasportlet&p_p_lifecycle=0&p_p_state=normal&p_p_mode=view&p_p_col_id=column-1&p_p_col_count=1&_intermediaryDetailedSearch_WAR_oriasportlet_d-16544-p=1&_intermediaryDetailedSearch_WAR_oriasportlet_implicitModel=true&_intermediaryDetailedSearch_WAR_oriasportlet_spring_render=searchResult" rel="nofollow noreferrer">https://www.orias.fr/search?p_p_id=intermediaryDetailedSearch_WAR_oriasportlet&p_p_lifecycle=0&p_p_state=normal&p_p_mode=view&p_p_col_id=column-1&p_p_col_count=1&_intermediaryDetailedSearch_WAR_oriasportlet_d-16544-p=1&_intermediaryDetailedSearch_WAR_oriasportlet_implicitModel=true&_intermediaryDetailedSearch_WAR_oriasportlet_spring_render=searchResult</a></p>
<p>So I tried to use it in the python code:</p>
<pre class="lang-py prettyprint-override"><code> import requests
from bs4 import BeautifulSoup
k = 1
# test k from 1 to 13
url = "http://www.orias.fr/search?p_p_id=intermediaryDetailedSearch_WAR_oriasportlet&p_p_lifecycle=0&p_p_state=normal&p_p_mode=view&p_p_col_id=column-1&p_p_col_count=1&_intermediaryDetailedSearch_WAR_oriasportlet_d-16544-p=" + str(k) + "&_intermediaryDetailedSearch_WAR_oriasportlet_implicitModel=true&_intermediaryDetailedSearch_WAR_oriasportlet_spring_render=searchResult"
response = requests.get(url, proxies=proxies)  # status 200 meant it went through
soup = BeautifulSoup(response.text, "html.parser")
table = soup.find('table', attrs={'class':'table table-condensed table-striped table-bordered'})
table_rows = table.find_all('tr')
l = []
for tr in table_rows:
td = tr.find_all('td')
row = [tr.text for tr in td]
l.append(row)
</code></pre>
<p>This doesn't work as it would in a web browser; it just provides a page as if no results had been requested. Would you know how to make it work?</p>
|
<p>I would alter the page param in the post requests during a loop. Do an initial request to find out number of pages</p>
<pre><code>from bs4 import BeautifulSoup as bs
import requests, re, math
import pandas as pd
headers = {
'Content-Type': 'application/x-www-form-urlencoded',
'User-Agent': 'Mozilla/5.0',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3',
'Referer': 'https://www.orias.fr/web/guest/search'
}
params = [['p_p_id', 'intermediaryDetailedSearch_WAR_oriasportlet'],
['p_p_lifecycle', '0'],
['p_p_state', 'normal'],
['p_p_mode', 'view'],
['p_p_col_id', 'column-1'],
['p_p_col_count', '1'],
['_intermediaryDetailedSearch_WAR_oriasportlet_d-16544-p', '1'],
['_intermediaryDetailedSearch_WAR_oriasportlet_implicitModel', 'true'],
['_intermediaryDetailedSearch_WAR_oriasportlet_spring_render', 'searchResult']]
data = {
'searchString': '',
'address': '',
'zipCodeOrCity': '',
'_coa': 'on',
'_aga': 'on',
'_ma': 'on',
'_mia': 'on',
'_euIAS': 'on',
'mandatorDenomination': '',
'wantsMandator': 'no',
'_cobsp': 'on',
'_mobspl': 'on',
'_mobsp': 'on',
'_miobsp': 'on',
'_bankActivities': '1',
'_euIOBSP': 'on',
'_cif': 'on',
'_alpsi': 'on',
'_cip': 'on',
'ifp': 'true',
'_ifp': 'on',
'submit': 'Search'
}
p = re.compile(r'(\d+)\s+intermediaries found')
with requests.Session() as s:
r= requests.post('https://www.orias.fr/search', headers=headers, params= params, data=data)
soup = bs(r.content, 'lxml')
num_results = int(p.findall(r.text)[0])
results_per_page = 20
num_pages = math.ceil(num_results/results_per_page)
df = pd.read_html(str(soup.select_one('.table')))[0]
for i in range(2, num_pages + 1):
params[6][1] = str(i)
r= requests.post('https://www.orias.fr/search', headers=headers, params= params, data=data)
soup = bs(r.content, 'lxml')
df_next = pd.read_html(str(soup.select_one('.table')))[0]
df = pd.concat([df, df_next])
df.drop('Unnamed: 6', axis = 1, inplace = True)
df.reset_index(drop=True)
</code></pre>
<hr>
<p><strong>Check:</strong></p>
<pre><code>print(len(df['Siren Number'].unique()))
#245
</code></pre>
|
python|web-scraping|beautifulsoup|csrf
| 1 |
1,907,273 | 54,046,711 |
Python 3 threading post request passing header params and data
|
<p>I'm trying to make my post requests faster because at the moment each post takes 3 seconds. As I need to iterate n times, it could take hours. So I started looking into threading, async calls and many other options, but none solved my problem. Most of the problems were due to the fact that I couldn't specify the headers and the params of my post request.</p>
<p>My Python version is 3.6.7</p>
<p>My code:</p>
<pre><code>for i in range(0, 1000):
assetId = jsonAssets[i]['id']
uuidValue = uuid.uuid4()
headers = {'Content-Type': 'application/json',}
params = (('remember_token', '123456'),)
data = ('{{"asset":'
'{{"template":1,'
'"uuid":"{uuidValue}", '
'"assetid":{assetId}}}}}'
.format(uuidValue = uuidValue,
assetId = assetId))
response = requests.post('http://localhost:3000/api/v1/assets', headers=headers, params=params, data=data)
</code></pre>
<p>Some of the tries were using:</p>
<pre><code>pool.apply_async
</code></pre>
<p>or</p>
<pre><code>ThreadResponse
</code></pre>
<p>But I couldn't set headers or params like in the request.post</p>
<p>So, how can I make this post request using this header, params and data faster?</p>
<p>Thanks in advance and sorry for any trouble, this is my first stackoverflow post.</p>
|
<p>If you can make a single request properly, the shortest way for you is to use <a href="https://docs.python.org/3/library/concurrent.futures.html" rel="nofollow noreferrer">ThreadPoolExecutor</a>:</p>
<pre><code>from concurrent.futures import ThreadPoolExecutor, as_completed

def single_request(i):
assetId = jsonAssets[i]['id']
uuidValue = uuid.uuid4()
# ... all other requests stuff here
return response
with ThreadPoolExecutor(max_workers=10) as executor:
futures = {
executor.submit(single_request, i): i
for i
in range(1000)
}
for future in as_completed(futures):
i = futures[future]
try:
res = future.result()
except Exception as exc:
            print(f'exception in {i}: {exc}')
else:
print(res.text)
</code></pre>
|
python-3.x|python-requests|python-asyncio|python-multithreading
| 1 |
1,907,274 | 58,460,653 |
How can I convert a Pandas DataFrame or QTableWidget to a PDF?
|
<p>How can I convert a Panda DataFrame or QTableWidget to a Pdf?</p>
<p>I have my Sqlite3 database products listed in a QtableWidget (PyQt5) and have them also listed in a panda dataframe. How can I convert one of these to PDF?</p>
<p>I want to generate a product report and any of these methods would suit me. I tried a lot of things I saw on the stack and google but nothing worked.</p>
<p>DataFrame Function</p>
<pre><code> def gerarRelatorio(self):
self.banco = sqlite3.connect ( 'Vendas.db' )
self.cursor = banco.cursor ( )
engine = create_engine('sqlite:///Vendas.db')
df = pd.read_sql_table("Produtos", engine)
print(df)
</code></pre>
<p>QtableWidget Function</p>
<pre><code> def LoadDatabase(self):
self.banco = sqlite3.connect ( 'Vendas.db' )
self.cursor = banco.cursor ( )
query = "SELECT * FROM Produtos"
result = self.banco.execute ( query )
self.listaprodutos.setRowCount ( 0 )
for row_number, row_data in enumerate ( result ):
self.listaprodutos.insertRow ( row_number )
for colum_number, data in enumerate ( row_data ):
self.listaprodutos.setItem(row_number, colum_number, QtWidgets.QTableWidgetItem(str(data)))
</code></pre>
|
<p>you can convert a pandas df to a PDF using the following method:</p>
<pre><code>import sys

import pandas as pd
from PyQt5.QtGui import QTextDocument
from PyQt5.QtPrintSupport import QPrinter
from PyQt5.QtWidgets import QApplication
df = pd.DataFrame({'test1':[1],'test2':[2]}) #the dataframe
html = df.to_html()
app = QApplication(sys.argv)
out = QTextDocument()
out.setHtml(html)
printer = QPrinter()
printer.setOutputFileName("test.pdf")
printer.setOutputFormat(QPrinter.PdfFormat)
printer.setPageSize(QPrinter.A4)
printer.setPageMargins(15, 15, 15, 15, QPrinter.Millimeter)
out.print_(printer)
</code></pre>
<p>of course you'll have to play around with the pdf formatting to make it look how you want</p>
<p>I don't have a lot of experience with PyQt5, but you could probably skip the df/html parts somehow if you already have things stored in a QTableWidget.</p>
|
python|python-3.x|pandas|sqlite|pyqt5
| 1 |
1,907,275 | 58,510,737 |
Freezing layers in pre-trained bert model
|
<p><a href="https://i.stack.imgur.com/tsr0m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tsr0m.png" alt="Pre Trained BERT Model"></a></p>
<p>How can I freeze the last two layers in the above pre-trained model (the dropout and classifier layers), so that when the model is run I get the dense layer's output?</p>
|
<p>I would like to point you to the definition of <a href="https://github.com/huggingface/transformers/blob/master/transformers/modeling_bert.py#L849" rel="noreferrer">BertForSequenceClassification</a> and you can easily avoid the dropout and classifier by using:</p>
<pre><code>model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
outputs = model.bert(input_ids)  # input_ids = your tokenized batch; this gives the encoder (dense layer) output
</code></pre>
<p>Why you can do the above? If you take a look at the constructor of BertForSequenceClassification:</p>
<pre><code>def __init__(self, config):
super(BertForSequenceClassification, self).__init__(config)
self.num_labels = config.num_labels
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.classifier = nn.Linear(config.hidden_size, self.config.num_labels)
self.init_weights()
</code></pre>
<p>As you can see, you just want to ignore the <code>dropout</code> and <code>classifier</code> layers.</p>
<p>One more thing: freezing a layer and removing a layer are two different things. In your question you mention that you want to freeze the classifier layer, but freezing a layer will not help you avoid it; freezing just means you do not want to train the layer, i.e. its weights are not updated.</p>
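<p>If what you actually want is freezing (keep the layers in the forward pass but stop training them), the standard PyTorch idiom is to turn off gradients on their parameters; a short sketch:</p>
<pre><code># stop training the classifier head while keeping it in the forward pass
for param in model.classifier.parameters():
    param.requires_grad = False
# nn.Dropout has no parameters, so there is nothing to freeze there
</code></pre>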
|
pytorch
| 7 |
1,907,276 | 65,473,784 |
Pandas groupby average after slicing over non-zero values across remaining groups
|
<p>Hi, consider the following dataframe:</p>
<pre><code>import pandas as pd
import numpy as np
a = pd.DataFrame(np.array([[1,1,1,1], [1,1,2,0], [1,2,1,1], [1,2,2,0], [2,1,1,0], [2,1,2,0], [2,2,1,1], [2,2,2,1]]), columns = ['k1','k2','k3','v'])
print(a)
k1 k2 k3 v
0 1 1 1 1
1 1 1 2 0
2 1 2 1 1
3 1 2 2 0
4 2 1 1 0
5 2 1 2 0
6 2 2 1 1
7 2 2 2 1
</code></pre>
<p>I want to compute how <code>v</code> varies with respect to <code>k1</code> and am therefore grouping over <code>k1</code> and computing the mean.</p>
<pre><code>print(a.groupby('k1').mean()['v'])
k1
1 0.5
2 0.5
</code></pre>
<p>However we can see that when <code>k2</code> = 1 and <code>k3</code>= 2, value of <code>v</code> is always 0 (for both <code>k1</code> = 1 and 2). I wish to ignore such rows. So, in order to filter such groups of <code>k2</code> and <code>k3</code> I am doing the following</p>
<pre><code>b = (a.groupby(['k2','k3']).mean()['v']!=0).reset_index()
b = b[b['v']]
del b['v']
print(b)
k2 k3
0 1 1
2 2 1
3 2 2
c = a.merge(b, how='inner', on=['k2','k3'])
print(c)
k1 k2 k3 v
0 1 1 1 1
1 2 1 1 0
2 1 2 1 1
3 2 2 1 1
4 1 2 2 0
5 2 2 2 1
</code></pre>
<p>And then finally taking grouped mean over <code>k1</code> I get a better/desirable metric.</p>
<pre><code>print(c.groupby('k1').mean()['v'])
k1
1 0.666667
2 0.666667
</code></pre>
<p>Is there any simpler way to implement this computation? It seems like a pretty common analysis approach, but it requires a pretty long chain of operations.</p>
|
<blockquote>
<p>However we can see that when k2 = 1 and k3= 2, value of v is always 0 (for both k1 = 1 and 2). I wish to ignore such rows.</p>
</blockquote>
<p>If you check the standard deviation within each <code>(k2, k3)</code> group:</p>
<pre><code>(a.groupby(['k2','k3']).transform(pd.Series.std) > 0).v
0 True
1 False
2 False
3 True
4 True
5 False
6 False
7 True
Name: v, dtype: bool
</code></pre>
<p>it flags the rows whose group is not constant in <code>v</code>. You can filter on this boolean mask.</p>
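<p>If you want exactly the computation in the question (drop the <code>(k2, k3)</code> groups whose mean is zero, then average over <code>k1</code>), <code>groupby().filter()</code> collapses the whole merge chain into one expression; a sketch using the same <code>a</code> as above:</p>
<pre><code>(a.groupby(['k2', 'k3'])
  .filter(lambda g: g['v'].mean() != 0)  # drop the all-zero (k2, k3) groups
  .groupby('k1')['v']
  .mean())  # k1=1 -> 0.666667, k1=2 -> 0.666667
</code></pre>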
|
python|pandas|dataframe|pandas-groupby
| 2 |
1,907,277 | 22,781,718 |
Average value for each user from pivot table (dataframe)
|
<p>I have extracted the table below from a csv file :</p>
<pre><code>timestamp user_id main_val val1 val2 val3 transport
01/01/2011 1 1 3 1491 0 bus
01/07/2012 1 19 57 4867 5 bus
01/09/2013 1 21 63 3455 5 bus
01/02/2011 2 20 8 2121 5 bus
01/12/2012 2 240 30 3558 3 bus
01/01/2011 3 100 5 3357 3 bus
01/11/2012 3 3100 49 1830 bus
01/12/2013 3 3200 51 4637 4 bus
</code></pre>
<p>For this purpose I used the following statement:</p>
<pre><code>import pandas as pd
newnames = ['date','user_id', 'cost', 'val1']
df = pd.read_csv('expenses.csv', names = newnames, header = False)
pivoted = df.pivot('date','user_id')
</code></pre>
<p>and now I have the dataframe pivoted containing the table below :</p>
<pre><code> cost cost cost val1 val1 val1
user_id 1 2 3 1 2 3
timestamp
01/01/2011 1 100 3 5
01/02/2011 20 8
01/07/2012 19 57
01/09/2013 21 63
01/11/2012 3100 49
01/12/2012 240 30
01/12/2013 3200 51
</code></pre>
<p><strong>How can I now calculate a monthly average cost and val1 for each user_id?</strong></p>
<p>Thanks in advance for your help.</p>
|
<p>You probably want to use the resample method
<a href="http://pandas.pydata.org/pandas-docs/version/0.13.1/generated/pandas.DataFrame.resample.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/version/0.13.1/generated/pandas.DataFrame.resample.html</a></p>
<pre><code>import pandas as pd
import numpy as np
newnames = ['date','user_id', 'cost', 'val1']
df = pd.read_csv('expenses.csv', names = newnames, header = False)
df['date'] = pd.to_datetime(df['date'])
pivoted = df.pivot('date','user_id')
pivoted.resample('M').mean()  # in current pandas, resample needs an explicit aggregation
</code></pre>
|
python|pandas
| 1 |
1,907,278 | 22,624,183 |
This python code that I am running returns nothing but a blank string when I run it
|
<p>I tried running the sentence... "Can you speak pig latin?" </p>
<pre><code>def igpay(sentence):
alist = sentence.split(" ")
NewSentence = ""
vowels = "aeoiu"
cons = "qwrtypsdfghjklzxcvbnm"
for i in range(len(alist)):
c = alist[i]
if c[0] in vowels:
a = c + "way"
NewSentence += a
elif c[0] not in vowels:
for j in range(len(c)):
f = c[j]
if f in cons:
o = c.replace(c[j],"")
a = c[j:j+1]
b = o + a
if f in vowels:
v = b + "ay"
NewSentence += v
return(NewSentence)
</code></pre>
|
<p>The reason you're seeing nothing is that neither of the lines with <code>NewSentence +=</code> is ever reached.</p>
<p>The first line is never reached because there are no words that begin with vowels.</p>
<p>The second line is never reached because your test <code>if f in vowels</code> is never executed unless <code>if f in cons</code> is already known to be true. I think you may possibly have an indentation error here.</p>
<p>A few other notes:</p>
<ul>
<li>Your two <code>for</code> statements could be more clearly written <code>for word in alist:</code> and <code>for ltr in word:</code> (I used the variable <code>word</code> instead of <code>c</code> because I think it's clearer). You do not need to loop on an integer value and then index based on that variable.</li>
<li>Your outermost <code>if/elif</code> pair can more clearly be written <code>if/else</code>. There's no other possible route of execution.</li>
<li>Your statements testing <code>in vowels</code> or <code>in cons</code> will fail for upper-case letters.</li>
<li>Your replace() call will replace <em>all</em> instances of the vowel, not just the first one. (I'm not entirely sure what you're trying to do here; is it to strip off the initial consonants and place them on the end? A guess at that is sketched below.)</li>
</ul>
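<p>Assuming the goal is standard pig latin (move the leading consonant cluster to the end and add "ay"; vowel-initial words just get "way"), here is a minimal sketch; the exact rules are my guess, per the last bullet:</p>
<pre><code>def igpay(sentence):
    vowels = "aeiou"
    words = []
    for word in sentence.lower().split():
        if word[0] in vowels:
            words.append(word + "way")
        else:
            # index of the first vowel; whole word if there is none
            i = next((k for k, ch in enumerate(word) if ch in vowels), len(word))
            words.append(word[i:] + word[:i] + "ay")
    return " ".join(words)

print(igpay("Can you speak pig latin"))  # ancay ouyay eakspay igpay atinlay
</code></pre>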
|
python
| 2 |
1,907,279 | 22,493,394 |
Fastest Way to Populate a Pandas DataFrame When Order Matters
|
<p>I'm building some basic support code for possibly large data grabs from an API. The results come out as a dict for each index value. i.e.</p>
<pre><code>[(index0, {col3:val3, col0:val0, col12:val12, ...}), (index1,{...}), ...]
</code></pre>
<p>However, while the indices come out in order the columns do not. Also, not all columns will necessarily be available for all indices.</p>
<p>It is important the columns end up in the correct order <code>col_list = [col0, col1, ...]</code> as well as the indicies <code>index_list = [index0, index1, ...]</code></p>
<p>My inclination is to just predefine the dataframe</p>
<pre><code>df = DataFrame(index=index_list, columns=col_list)
</code></pre>
<p>and just assign the data by <code>df.loc[idx, col] = val</code> which might be the fastest way if the data was sparse. However, the data is almost certainly dense.</p>
<p>Are there any alternate constructors that would be significantly faster?</p>
|
<p>An idea is to bulk load the data from the list of dicts and sort on the index column(s) afterwards. Pandas is optimized for this kind of thing.</p>
<p>First you need to adjust your list of tuples+dicts to be a list of dicts (so that you can initialize the dataframe easily). One way (a one-liner) to do that is this (assuming you have no control over how you parse them before, and the format is as in your example):</p>
<pre><code>your_data = [(2,{"col1":2,"col2":3}),(-1,{"col3":22,"col1":4})]
records = [x[1].update({"idx_col": x[0]}) or x[1] for x in your_data]  # avoid shadowing the built-in dict
# records >> [{'col1': 2, 'col2': 3, 'idx_col': 2}, {'col1': 4, 'col3': 22, 'idx_col': -1}]
</code></pre>
<p>Then:</p>
<pre><code>df = pd.DataFrame(columns=["col1","col2","col3"]) #not necessary if every col appears
#at least once in the data
df = df.append([{"idx_col":2,"col1":2,"col2":3},{"idx_col":-1,"col3":22,"col1":4}])
#column order preserved
df = df.set_index("idx_col", drop=True).sort_index()  # index order preserved now
</code></pre>
<p>Resulting df:</p>
<pre><code> col1 col2 col3
idx_col
-1 4 NaN 22
2 2 3 NaN
</code></pre>
<p>If you have multiple index columns just use an array ["idx0","idx1",...] in the set_index method (although your example leads me to believe there is one index)</p>
|
python|pandas|dataframe
| 0 |
1,907,280 | 14,618,100 |
how to get python to return a list?
|
<p>So I'm writing a Python script that will clean file names of useless and unwanted characters, but I'm running into a problem: I can't seem to figure out how to return a list or dictionary with all of the items I iterated over; it only returns the first item. This is my first time writing in Python, so any help would be greatly appreciated; I'm mostly writing this to learn. The clean_title() method is the one I'm trying to return from, and I call it at the bottom.</p>
<pre><code>import os
import re
# here is how all my video files will look after this
# show name Season 1 Episode 1
filename = os.listdir("E:/Videos/TV/tv-show")
def clean_title(filename):
name = {}
for title in filename:
n_title = title.split('.')
index = [i for i, item in enumerate(n_title) if re.search('\w\d{2}\w\d{2}', item)]
if len(index) > 0:
name = {'title':n_title[0:index[0]], 'ep_info':n_title[index[0]]}
return name
def get_show_name(filename):
pass
def update_title():
#show_title = get_show_name + ' ' + get_episode_info
#print show_title
if __name__=="__main__":
test = clean_title(filename)
print test
</code></pre>
|
<p>You have two distinct problems.</p>
<p>The first is that you're returning from inside your loop, so you will only have processed a single iteration of the loop when you hit the <code>return</code> statement. That's why it looks like you're getting the first iterated value when, in fact, you're never reaching the other iterations.</p>
<p>You can fix this by out-denting the <code>return</code> statement to the correct level.</p>
<p>The second problem is in the way you're accruing your results to return from <code>clean_title()</code>. You are only ever storing a single cleaned title in the <code>name</code> variable. Each time through the loop you're overwriting the previously calculated title with the new one from that iteration. Even if you fix the <code>return</code> issue, the current version would then return the <em>last</em> title you computed.</p>
<p>You can accrue your results in either a list or a dictionary. If a list, you initialize with <code>name = []</code> and add new titles with <code>name.append(title_goes_here)</code>. If you want to accrue your results in a dictionary, you initialize with <code>name = {}</code> and add new titles with <code>name[index_goes_here] = title_goes_here</code>. Note that in the case of a dictionary you need to have some logical key value (usually an integer or string) that you will use to recover the value later on.</p>
<p>Finally, I have to add that I find your use of singular case (<code>filename</code>, <code>title</code>, <code>clean_title</code>) for plural objects and actions to be confusing. I'd call a list of file names <code>filenames</code>, and so on.</p>
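<p>Putting all of those fixes together, a corrected sketch of the function (renamed to plurals, accruing results in a list) might look like this:</p>
<pre><code>def clean_titles(filenames):
    names = []  # accrue one result per file name
    for title in filenames:
        n_title = title.split('.')
        index = [i for i, item in enumerate(n_title)
                 if re.search(r'\w\d{2}\w\d{2}', item)]
        if index:
            names.append({'title': n_title[:index[0]],
                          'ep_info': n_title[index[0]]})
    return names  # outside the loop, so every file is processed
</code></pre>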
|
python|regex|list|return
| 2 |
1,907,281 | 14,771,233 |
Python optimize.curve_fit (regarding existing answer)
|
<p>I wanted to ask a question about a user's reply to other question, but for some reason the comment box isn't showing up. Sorry if I'm doing something wrong.</p>
<p>In any case, regarding this reply:
<a href="https://stackoverflow.com/a/11507723/1950164">https://stackoverflow.com/a/11507723/1950164</a></p>
<p>I have the following question: how can I use this code to fit different data to different functions? I have a similar problem to the one he solved, except I want to fit the cumulative distribution. So I started trying to generalize the code. I made three modifications:</p>
<p>a) After the line where the histogram is calculated, I added</p>
<pre><code>hist = numpy.cumsum(hist)
</code></pre>
<p>This transforms our distribution into a cumulative distribution</p>
<p>b) Instead of the gaussian function in the example, I defined a new function</p>
<pre><code>def myerf(x, *p):
A, mu, sigma = p
return A/2. * (1+math.erf((x-mu)/(math.sqrt(2)*sigma)))
</code></pre>
<p>This is what the cumulative distribution of a gaussian should be.</p>
<p>c) Lastly, of course, I changed the curve_fit line to call my function:</p>
<pre><code>coeff, var_matrix = curve_fit(myerf, bin_centres, hist, p0=p0)
</code></pre>
<p>This should be a trivial exercise, except it doesn't work. The program now returns the following error message:</p>
<pre><code>bash-3.2$ python fitting.py
Traceback (most recent call last):
File "fitting.py", line 27, in <module>
coeff, var_matrix = curve_fit(myerf, bin_centres, hist, p0=p0)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/optimize/minpack.py", line 506, in curve_fit
res = leastsq(func, p0, args=args, full_output=1, **kw)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/optimize/minpack.py", line 348, in leastsq
m = _check_func('leastsq', 'func', func, x0, args, n)[0]
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/optimize/minpack.py", line 14, in _check_func
res = atleast_1d(thefunc(*((x0[:numinputs],) + args)))
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/optimize/minpack.py", line 418, in _general_function
return function(xdata, *params) - ydata
File "fitting.py", line 22, in myerf
return A/2. * (1+math.erf((x-mu)/(math.sqrt(2)*sigma)))
TypeError: only length-1 arrays can be converted to Python scalars
</code></pre>
<p>So what am I doing wrong?</p>
<p>Bonus: give me a reference which explains what that *p is in the argument of the function.</p>
<p>Thanks!</p>
<p>EDIT: I tried running the program with cumulative distribution data, but still calling the gaussian function. That works, you just get a bad fit. So the mistake should be somewhere in myerf function.</p>
<p>EDIT2: If I try substituting the myerf function's return with something simpler, like</p>
<pre><code>return A + mu*x + sigma*x**2
</code></pre>
<p>then it works. So there must be something in that return that isn't doing what it's supposed to.</p>
<p>EDIT3: So, I tried using the error function from scipy instead of that from math, and it works now. I have no idea why it wasn't working before, but it's working now. So the code is:</p>
<pre><code>import matplotlib
matplotlib.use('Agg')
import numpy, math
import pylab as pl
from scipy.optimize import curve_fit
from scipy.special import erf
# Define some test data which is close to Gaussian
data = numpy.random.normal(size=10000)
hist, bin_edges = numpy.histogram(data, density=True)
bin_centres = (bin_edges[:-1] + bin_edges[1:])/2
hist = numpy.cumsum(hist)
def myerf(x, *p):
A, mu, sigma = p
return A/2. * ( 1+erf(((x-mu)/(math.sqrt(2)*sigma))) )
# p0 is the initial guess for the fitting coefficients (A, mu and sigma above)
p0 = [1., 0., 1.]
coeff, var_matrix = curve_fit(myerf, bin_centres, hist, p0=p0)
# Get the fitted curve
hist_fit = myerf(bin_centres, *coeff)
pl.plot(bin_centres, hist, label='Test data')
pl.plot(bin_centres, hist_fit, label='Fitted data')
# Finally, lets get the fitting parameters, i.e. the mean and standard deviation:
print 'Fitted mean = ', coeff[1]
print 'Fitted standard deviation = ', coeff[2]
pl.savefig('fitting.png')
pl.show()
</code></pre>
|
<p>Unlike the <code>math</code> functions, the <code>numpy</code> functions do accept vector input:</p>
<pre><code>>>> import numpy, math
>>> numpy.exp([4,5])
array([ 54.59815003, 148.4131591 ])
>>> math.exp([4,5])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: a float is required
</code></pre>
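<p>The same holds for <code>scipy.special.erf</code>, which is why swapping it in fixed the fit; it happily accepts an array:</p>
<pre><code>>>> from scipy.special import erf
>>> erf(numpy.array([0.5, 1.0]))
array([ 0.52049988,  0.84270079])
</code></pre>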
|
python|curve-fitting
| 0 |
1,907,282 | 57,083,122 |
Scraping a webpage that constantly updates
|
<p>Thanks for taking some time to check out this question.</p>
<p>I'm trying to scrape public bids data from the <a href="http://oasis.caiso.com/mrioasis/logon.do" rel="nofollow noreferrer">CAISO website</a>. And I'm running into these problems:</p>
<p>a. The page is constantly updating, so I think that my code is
getting stuck. </p>
<p>b. The XML objects tags change at every new session.</p>
<p>For (a), I tried using time.sleep and sending an ESC key to stop the refreshing, but it's not working.</p>
<p>I don't know how to solve (b), though. What I typically do is I use this Chrome extension that allows me to get the XML elements in a page and I use those in my code to do what I want. If they change everytime, this strategy doesn't work anymore.</p>
<p>What I want Selenium to do is:</p>
<ol>
<li>Open '<a href="http://oasis.caiso.com/mrioasis/logon.do" rel="nofollow noreferrer">http://oasis.caiso.com/mrioasis/logon.do</a>'</li>
<li>Click on PUBLIC BIDS>Public Bids</li>
<li>Loop over a list of dates, downloading the CSV files for each.</li>
</ol>
<p>Here's my code so far:</p>
<pre><code>import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('http://oasis.caiso.com/mrioasis/logon.do')
PublicBids = driver.find_element(By.XPATH, '//*[@id="IMG_111854124"]')
PublicBids.click()
dates = ['04/18/2019']
def BidsScraper(d):
time.sleep(2)
dateField = driver.find_element(By.XPATH,'//*[@id="TB_101685670"]')
dateField.send_keys(d)
DownloadCSV = driver.find_element(By.XPATH, '//*[@id="BTN_101685706"]')
DownloadCSV.click()
</code></pre>
<p>Any suggestions are welcome! Thanks again.</p>
<p>EDIT: formatting</p>
|
<p>A couple of things to try are forcing the refresh to stop and clicking only if the element is found with Selenium. If that still doesn't work for you, I usually try something like moving the mouse to the X/Y coordinates with a macro program like AppRobotic Personal and then simulating a mouse click on the button's X/Y coordinates. Something similar to this in a Try/Except:</p>
<pre><code>import win32com.client
x = win32com.client.Dispatch("AppRobotic.API")
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.get('http://oasis.caiso.com/mrioasis/logon.do')
PublicBids = driver.find_element(By.XPATH, '//*[@id="IMG_111854124"]')
PublicBids.click()
dates = ['04/18/2019']
def BidsScraper(d):
# wait for loading
x.Wait(2000)
# forcefully stop page reload at this point
driver.execute_script("window.stop();")
try:
dateField = driver.find_element(By.XPATH,'//*[@id="TB_101685670"]')
dateField.send_keys(d)
DownloadCSV = driver.find_element(By.XPATH, '//*[@id="BTN_101685706"]')
        # confirm the button was found (find_element raises if not)
        if DownloadCSV:
            DownloadCSV.click()
except:
dateField = driver.find_element(By.XPATH,'//*[@id="TB_101685670"]')
x.Type(d)
# use UI Item Explorer to find the X,Y coordinates of button
x.MoveCursor(438, 435)
# click on button
x.MouseLeftClick
x.Wait(2000)
</code></pre>
|
python|selenium|web-scraping
| 0 |
1,907,283 | 56,918,360 |
Grabbing contact information with python and beautifulsoup
|
<p>I'm trying to grab the contact information from a page. I need the name, job title, phone, and email address.</p>
<p>I'm learning Python and trying to write code against data I know. I was able to pull out the div blocks with the individual contacts, but I'm not sure how to crawl through them once I have them.</p>
<pre class="lang-py prettyprint-override"><code>tags = soup.find_all('div', attrs={'class':'tshowcase-inner-box'})
</code></pre>
<p>but then I wanted to crawl through the children divs and had no luck.</p>
<pre class="lang-py prettyprint-override"><code> fullname = soup.find('div', attrs={'class':'tshowcase-box-title'})
title = soup('div', attrs={'class':'tshowcase-single-position'})
phone = soup('div', attrs={'class':'tshowcase-single-telephone'})
email = soup('div', attrs={'class':'tshowcase-box-social'})
</code></pre>
<p>I'm not sure what's next though and appreciate any pointers.</p>
<p>Here is the sample HTML:</p>
<pre class="lang-html prettyprint-override"><code><div class="tshowcase-inner-box ts-float-left ">
<div class="tshowcase-box-info ts-align-left ">
<div class="tshowcase-box-title">FULL NAME</div>
<div class="tshowcase-box-details">
<div class="tshowcase-single-position"><i class="fa fa-chevron-circle-right"></i>JOB TITLE</div>
<div class="tshowcase-single-telephone"><i class="fa fa-phone-square"></i><a href="tel:PHONE">PHONE</a></div>
</div>
<div class="tshowcase-box-social"><a href="mailto:EMAIL" rel="nofollow" target="_blank"><i class="fa fa-envelope-o fa-lg"></i></a></div>
</div>
</div>
</code></pre>
|
<p>You can use <code>soup.find_all</code> to locate the elements, and then access the <code>text</code> and <code>href</code> values:</p>
<pre><code>from bs4 import BeautifulSoup as soup
import re
d = soup(html, 'html.parser')
s = [i.text for i in d.find_all('div', {'class':re.compile('title$|position$|telephone$')})]
result = [*s, d.find('div', {'class':'tshowcase-box-social'}).a['href'][7:]]
</code></pre>
<p>Output:</p>
<pre><code>['FULL NAME', 'JOB TITLE', 'PHONE', 'EMAIL']
</code></pre>
<hr>
<p>If you are trying to scrape multiple contact blocks on the page, you can convert the code above into a function that accepts a <code>bs4</code> object to scrape a single listing and iterate over all the block <code>div</code>s:</p>
<pre><code>def get_contact(d):
s = [i.text for i in d.find_all('div', {'class':re.compile('title$|position$|telephone$')})]
return [*s, d.find('div', {'class':'tshowcase-box-social'}).a['href'][7:]]
results = [get_contact(i) for i in soup(html, 'html.parser').find_all('div', {'class':'tshowcase-inner-box'})]
</code></pre>
<p>Output:</p>
<pre><code>[['FULL NAME', 'JOB TITLE', 'PHONE', 'EMAIL']]
</code></pre>
|
python|html|beautifulsoup
| 0 |
1,907,284 | 25,641,459 |
How to pagination many entities in ndb [GAE / Python]
|
<p>I want to get a page from a large set of entities.</p>
<p>The model has 10000 entities,<br>
and I want to get index 5000.</p>
<pre><code>entities = Model.query().fetch(10, offset=5000)
</code></pre>
<p>But this is an anti-pattern.</p>
<p>use cursor pattern,</p>
<pre><code>entities, cursor, more = Model.query().fetch_page(10)  # entities 0-10
entities, cursor, more = Model.query().fetch_page(10, start_cursor=cursor)  # entities 10-20
</code></pre>
<p>This gets a start cursor.</p>
<pre><code>cursor = ???  # how do I get a cursor starting at index 5000?
</code></pre>
<p>I want to get a cursor.<br>
Is this a good idea?</p>
|
<p>I don't think GAE supports random access; that is why you need an offset. As for the cursor: you already know the cursor pattern, and <code>fetch_page</code> already returns the cursor for you. Cursors are meant for paging, while offset is in general for skipping some entities.</p>
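<p>For completeness, the usual ndb round-trip is to serialize the cursor with <code>urlsafe()</code>, hand it to the client, and rebuild it on the next request; a sketch:</p>
<pre><code>from google.appengine.datastore.datastore_query import Cursor

entities, cursor, more = Model.query().fetch_page(10)
token = cursor.urlsafe()  # opaque string, safe to send to the client

# on the next request:
cursor = Cursor(urlsafe=token)
entities, cursor, more = Model.query().fetch_page(10, start_cursor=cursor)
</code></pre>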
|
python|google-app-engine
| 0 |
1,907,285 | 44,465,221 |
attempt to split a string using a regex
|
<p>I have a big block of text:</p>
<pre><code>SELECT TOP 10 * FROM APPLES
GO
SELECT TOP 10 * FROM PEARS TREE
GO
SELECT TOP 10 * FROM FRUITS
...
</code></pre>
<p>And I'm simply trying to split this text into a list of individual strings based around GO, this works:</p>
<pre><code>commandlist = textblock.split("GO")
</code></pre>
<p>but.. what I'd like to do is use something like:</p>
<pre><code>commandlist = textblock.split(r"\bGO\b")
</code></pre>
<p>because I'm expecting some text to look like:</p>
<pre><code>SELECT TOP 10 * FROM GOPATRIOTS
GO
SELECT TOP 10 * FROM PEARS LETITGO
GO SELECT TOP 10 * FROM FRUITS
...
</code></pre>
<p>but it seems that I can't just shove a regex into split? Or can I and I'm just missing the way to do so?</p>
|
<p>You need to use <a href="https://docs.python.org/2/library/re.html#re.split" rel="nofollow noreferrer"><code>re.split</code></a>, not a <a href="https://docs.python.org/2/library/stdtypes.html#str.split" rel="nofollow noreferrer">string split</a>:</p>
<pre><code>import re
commandlist = re.split(r"\bGO\b", textblock)
</code></pre>
<p>Or, since you need to split with lines equal to <code>GO</code>:</p>
<pre><code>commandlist = re.split(r"(?m)^GO$", textblock)
</code></pre>
|
python|regex|split
| 2 |
1,907,286 | 44,590,610 |
Automating slicing procedures using pandas
|
<p>I am currently using Pandas and Python to handle much of the repetitive tasks I need done for my master's thesis. At this point, I have written some code (with help from Stack Overflow) that, based on some event dates in one file, finds a start and end date to use as a date range in another file. These dates are then located and appended to an empty list, which I can then output to Excel. However, using the below code I get a dataframe with 5 columns and 400.000+ rows (which is basically what I want), but not in the form I want outputted to Excel. Below is my code:</p>
<pre><code>end_date = pd.DataFrame(data=(df_sample['Date']-pd.DateOffset(days=2)))
start_date = pd.DataFrame(data=(df_sample['Date']-pd.offsets.BDay(n=252)))
merged_dates = pd.merge(end_date,start_date,left_index=True,right_index=True)
ff_factors = []
for index, row in merged_dates.iterrows():
time_range= (df['Date'] > row['Date_y']) & (df['Date'] <= row['Date_x'])
df_factor = df.loc[time_range]
ff_factors.append(df_factor)
appended_data = pd.concat(ff_factors, axis=0)
</code></pre>
<p>I need the data to be 5 columns and 250 rows (columns are variable identifiers) side by side, so that when outputting it to excel I have, for example column A-D and then 250 rows for each column. This then needs to be repeated for column E-H and so on. Using iloc, I can locate the 250 observations using <code>appended_data.iloc[0:250]</code>, with both 5 columns and 250 rows, and then output it to excel.</p>
<p>Is there any way for me to automate the process, so that after selecting the first 250 and outputting them to Excel, it selects the next 250 and outputs them next to the first 250, and so on?</p>
<p>I hope the above is precise and clear, else I'm happy to elaborate!</p>
<p>EDIT:</p>
<p><a href="https://i.stack.imgur.com/ulxh1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ulxh1.png" alt="My entire dataframe" /></a></p>
<p>The above picture illustrate what I get when outputting to excel; 5 columns and 407.764 rows. What I needed is to get this split up into the following way:
<a href="https://i.stack.imgur.com/BWxl4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BWxl4.png" alt="Using iloc[0:250] on the entire smaple" /></a></p>
<p>The second picture illustrates how I needed the total sample to be split up. The first five columns and corresponding 250 rows needs to be as the second picture. When I do the next split using iloc[250:500], I will get the next 250 rows, which needs to be added after the initial five columns and so on.</p>
|
<p>My best guess at solving the problem would be to loop until the counter is greater than the length, so:</p>
<pre><code>i = 250  # right limit (counter)
j = 0    # left limit
n = len(appended_data)  # total number of rows
for x in range(n):
    chunk = appended_data.iloc[j:i]  # output this 250-row block
    i += 250
    if i > n:
        chunk = appended_data.iloc[j:n]  # final, possibly shorter block
        break
    else:
        j = i
</code></pre>
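<p>A tidier equivalent (a sketch on my part) steps through the frame in fixed 250-row blocks; <code>iloc</code> clips the last slice automatically, so no special-casing is needed:</p>
<pre><code>chunks = [appended_data.iloc[j:j + 250]
          for j in range(0, len(appended_data), 250)]
# each element is a 5-column block of up to 250 rows,
# ready to be written side by side to Excel
</code></pre>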
|
python|excel|pandas
| 0 |
1,907,287 | 23,753,868 |
Order QComboBox items alphabetically regardless of index
|
<p>I have a list of plants arranged in a specific order which should be kept and expressed by the index. In a QComboBox, however, the plants should be displayed in alphabetical order.</p>
<p>To do so I tried this:</p>
<pre><code> self.plant = QComboBox(self)
self.plant.insertItem(4, "A")
self.plant.insertItem(1, "B")
self.plant.insertItem(3, "C")
self.plant.insertItem(0, "D")
self.plant.insertItem(5, "E")
self.plant.insertItem(2, "F")
</code></pre>
<p>The resulting ComboBox however is ordered like D A F B C E</p>
<p>Looks like my approach isn't working...</p>
<p>EDIT (further explanation of question):
The plants are listed in historical order in a database (plant D is the oldest, then came B, F, C, A - E has been added recently). The script needs to know which plant has been selected - I'd like to do this with the index and an associative array.
To ease plant selection (there are more than six of them) they should be listed in alphabetical order in the dropdown list. </p>
<p>If I want to add a new plant named Ca, I'd edit the script like this: </p>
<pre><code> self.plant = QComboBox(self)
self.plant.insertItem(4, "A")
self.plant.insertItem(1, "B")
self.plant.insertItem(3, "C")
self.plant.insertItem(6, "Ca")
self.plant.insertItem(0, "D")
self.plant.insertItem(5, "E")
self.plant.insertItem(2, "F")
</code></pre>
<p>I hope this clarifies the meaning of my question: it should be possible to easily place a new plant at the alphabetically correct position without changing all the other indexes.</p>
<p>In a next step the right task should be executed according to the plant selection: </p>
<pre><code>options = {0 : D,
1 : B,
2 : F,
3 : C,
4 : A,
5 : E,
6 : Ca,
}
def D():
#do task for plant D
def B():
   #do task for plant B
...
</code></pre>
|
<p>A QComboBox is always ordered by index. If you really need the items to be alphabetical while also having some numerical data stored, then set the data on each item.</p>
<pre><code>from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QComboBox

cb = QComboBox()
cb.addItem("A", 4)
cb.addItem("B")
cb.setItemData(1, 1, Qt.UserRole)
# To get the data. or cb.itemData(cb.currentIndex(), Qt.UserRole)
cb.itemData(0, Qt.UserRole)
</code></pre>
|
python|pyqt5
| 1 |
1,907,288 | 23,820,374 |
extracting numbers from list of strings with python
|
<p>I have a list of strings that I am trying to parse for data that is meaningful to me. I need an ID number that is contained within the string. Sometimes it might be two or even three of them. Example string might be: </p>
<pre><code>lst1 = [
"(Tower 3rd floor window corner_ : option 3_floor cut out_large : GA - floors : : model lines : id 3999595(tower 4rd floor window corner : option 3_floor: : whatever else is in iit " new floor : id 3999999)",
"(Tower 3rd floor window corner_ : option 3_floor cut out_large : GA - floors : : model lines : id 3998895(tower 4rd floor window corner : option 3_floor: : id 5555456 whatever else is in iit " new floor : id 3998899)"
]
</code></pre>
<p>I would like to be able to iterate over that list of strings and extract only those highlighted id values. </p>
<p>The output would be <code>lst1 = ["3999595; 3999999", "3998895; 5555456; 3998899"]</code>, where the id values from the same input string are separated by a semicolon, and the list order still matches the input list.</p>
|
<p>You can use <code>id\s(\d{7})</code> regular expression. </p>
<p>Iterate over items in a list and <a href="https://docs.python.org/2/library/stdtypes.html#str.join" rel="nofollow"><code>join</code></a> the results of <a href="https://docs.python.org/2/library/re.html#re.findall" rel="nofollow"><code>findall()</code></a> call by <code>;</code>:</p>
<pre><code>import re
lst1 = [
'(Tower 3rd floor window corner_ : option 3_floor cut out_large : GA - floors : : model lines : id 3999595(tower 4rd floor window corner : option 3_floor: : whatever else is in iit " new floor : id 3999999)',
'(Tower 3rd floor window corner_ : option 3_floor cut out_large : GA - floors : : model lines : id 3998895(tower 4rd floor window corner : option 3_floor: : id 5555456 whatever else is in iit " new floor : id 3998899)'
]
pattern = re.compile(r'id\s(\d{7})')
print ["; ".join(pattern.findall(item)) for item in lst1]
</code></pre>
<p>prints:</p>
<pre><code>['3999595; 3999999', '3998895; 5555456; 3998899']
</code></pre>
|
python|string|list|parsing
| 3 |
1,907,289 | 71,940,999 |
How to create a level system in Pygame?
|
<p>I am making a simple math game that has a set order of 5 questions. I currently only have the first two problems implemented as functions as well as "victory" and "loss" functions that are called based on the player answer. However, I cannot figure out how to make a proper level system. Here is the current code for the four functions I have implemented:</p>
<pre><code>def victory():
global num_correct
global time
while True:
screen.fill((150, 200, 255))
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
victory_text = font.render("Correct!", True, dark_blue)
time -= 1
timer.tick(100)
if time <= 0:
second_problem()
score_display(30, 25)
screen.blit(victory_text, (325, 240))
pygame.display.update()
# when incorrect answer is chosen, call this function
def loss():
global num_correct
global time
while True:
screen.fill((150, 200, 255))
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
loss_text = font.render("Incorrect!", True, dark_blue)
time -= 1
timer.tick(100)
if time <= 0:
second_problem()
score_display(30, 25)
screen.blit(loss_text, (310, 240))
pygame.display.update()
# first math problem (9 - 6) + 3 = 6
def first_problem():
global num_correct
while True:
screen.fill((150, 200, 255))
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
gametext = font.render("(9 - 6) + 3 = ", True, dark_blue)
if six.draw(screen):
print("true")
num_correct += 1
victory()
if eight.draw(screen):
print("false")
loss()
score_display(30, 25)
screen.blit(gametext, (290, 75))
pygame.display.update()
# second math problem 3 + (4 * 1) = 7
def second_problem():
global num_correct
while True:
screen.fill((150, 200, 255))
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
gametext = font.render("3 + (4 * 1) = ", True, dark_blue)
if seven.draw(screen):
print("true")
num_correct += 1
victory()
if nine.draw(screen):
print("false")
loss()
score_display(30, 25)
screen.blit(gametext, (290, 75))
pygame.display.update()
</code></pre>
<p>Currently "victory" and "loss" only advance to the second problem. I plan to implement 5 in total. How can I create a level system of sorts to call upon the problems in a set order?</p>
|
<p>You could reduce your code. Instead of <code>first_problem()</code> and <code>second_problem()</code> you could use one function with different values</p>
<pre><code>problem("3 + (4 * 1) = ", "7", "9")
problem("(9 - 6) + 3 = ", "6", "9")
</code></pre>
<p>And the function should return <code>True</code> or <code>False</code> instead of running <code>victory()</code> or <code>loss()</code>.</p>
<p>And then you could keep the values in a list and use a <code>for</code>-loop:</p>
<pre><code>all_problems = [
# (question, answer1, answer2)
("(9 - 6) + 3 = ", "6", "9"),
("3 + (4 * 1) = ", "7", "9"),
]
score = 0
for data in all_problems:
result = problem(*data)
if result:
score += 1
victory()
else:
loss()
break # exit when wrong answer
</code></pre>
<p>You could reduce <code>victory()</code> and <code>loss()</code> to one function that takes the message as a parameter, e.g. <code>message('Correct!')</code> or <code>message('Incorrect!')</code>, and then you could do:</p>
<pre><code>all_problems = [
# (question, answer1, answer2)
("(9 - 6) + 3 = ", "6", "9"),
("3 + (4 * 1) = ", "7", "9"),
]
score = 0
for data in all_problems:
result = problem(*data)
if result:
score += 1
        message('Correct!')
else:
        message('Incorrect!')
break # exit when wrong answer
# - end -
message(f"Final Result: {score}")
</code></pre>
<hr />
<p>Full working code with more changes.</p>
<pre><code>import pygame
# --- constants ---
BLACK = (0, 0, 0)
WHITE = (255, 255, 255)
RED = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE = (0, 0, 255)
DARK_BLUE = (0, 0, 255)
# --- classes ---
class Button:
def __init__(self, answer, correct, x, y):
self.answer = answer
self.correct = correct
self.image = pygame.Surface((100, 100))
self.image.fill(WHITE)
self.rect = self.image.get_rect(centerx=x, centery=y)
self.text_image = font.render(answer, True, DARK_BLUE)
self.text_rect = self.text_image.get_rect(center=self.rect.center)
def draw(self, screen):
screen.blit(self.image, self.rect)
screen.blit(self.text_image, self.text_rect)
def on_mouse_clicked(self, event):
if event.type == pygame.MOUSEBUTTONDOWN:
return self.rect.collidepoint(event.pos)
def on_mouse_motion(self, event):
if event.type == pygame.MOUSEMOTION:
if self.rect.collidepoint(event.pos):
self.image.fill(RED)
else:
self.image.fill(WHITE)
# --- functions ---
def score_display(screen, value, x, y):
text = f'Score: {value}'
text_image = font.render(text, True, DARK_BLUE)
text_rect = text_image.get_rect(x=x, y=y)
screen.blit(text_image, text_rect)
def message(text, time=2):
text_image = font.render(text, True, DARK_BLUE)
text_rect = text_image.get_rect()
text_rect.center = screen.get_rect().center
screen.fill((150, 200, 255))
score_display(screen, score, 30, 25)
screen.blit(text_image, text_rect)
pygame.display.update()
time = time * 10
while time >= 0:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
exit()
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_ESCAPE:
return
time -= 1
        timer.tick(10)  # 10 ticks per second, so time*10 iterations last 'time' seconds
def problem(question, answer1, answer2, correct1, correct2):
centerx, centery = screen.get_rect().center
text_image = font.render(question, True, DARK_BLUE)
text_rect = text_image.get_rect()
text_rect.centerx = centerx
text_rect.centery = centery - 100
button1 = Button(answer1, correct1, centerx-100, centery+100)
button2 = Button(answer2, correct2, centerx+100, centery+100)
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
exit()
elif event.type == pygame.MOUSEBUTTONDOWN:
if button1.on_mouse_clicked(event):
return button1.correct
if button2.on_mouse_clicked(event):
return button2.correct
elif event.type == pygame.MOUSEMOTION:
button1.on_mouse_motion(event)
button2.on_mouse_motion(event)
screen.fill((150, 200, 255))
score_display(screen, score, 30, 25)
screen.blit(text_image, text_rect)
button1.draw(screen)
button2.draw(screen)
pygame.display.update()
timer.tick(10)
# --- main ---
all_problems = [
# (question, answer1, answer2, correct1, correct2)
("(9 - 6) + 3 = ", "6", "9", True, False),
("3 + (4 * 1) = ", "7", "9", True, False),
]
pygame.init()
screen = pygame.display.set_mode((800,600))
font = pygame.font.SysFont(None, 50)
timer = pygame.time.Clock()
score = 0
for data in all_problems:
result = problem(*data)
if result:
score += 1
        message('Correct!')
else:
        message('Incorrect!')
break
message(f"Final Result: {score}")
pygame.quit()
</code></pre>
<p><a href="https://i.stack.imgur.com/V7oV0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/V7oV0.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/uw1LH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uw1LH.png" alt="enter image description here" /></a></p>
|
python|pygame
| 0 |
1,907,290 | 35,955,750 |
How to find every walk in a numpy array
|
<p>I'm trying to find every single "walk" of length n through an array. A walk in this case is defined as a sequence of length n of adjacent elements (horizontal, diagonal, or vertical) in the array such that no point is repeated. For example, a 2x2 matrix</p>
<p>[1 2]<br>
[4 8]</p>
<p>would have walks of length 2: (1, 2), (1, 4), (1, 8), (2, 1), (2, 4), (2, 8) ...<br>
walks of length 3: (1, 2, 4), (1, 2, 8), (1, 4, 2), (1, 4, 8) ... and so on</p>
<p>How could I implement a fast implementation of such an algorithm for small (5x5) matrices in python/numpy, possibly using some aspect of maths that I don't know currently?</p>
<p>Current slow implementation:</p>
<pre><code>from copy import deepcopy
def get_walks(arr, n):
n = n-1
dim_y = len(arr)
dim_x = len(arr[0])
# Begin with every possibly starting location
walks = [[(y, x)] for y in range(dim_y) for x in range(dim_x)]
# Every possible direction to go in
directions = [(0,1), (1,1), (1,0), (1, -1), (0, -1), (-1,-1), (-1, 0), (-1, 1)]
temp_walks = []
for i in range(n):
# Go through every single current walk and add every
# possible next move to it, making sure to not repeat any points
#
# Do this n times
for direction in directions:
for walk in walks:
y, x = walk[-1]
y, x = y+direction[0], x+direction[1]
if -1 < y < dim_y and -1 < x < dim_x and (y, x) not in walk:
temp_walks.append(walk + [(y, x)])
# Overwrite current main walks list with the temporary one and start anew
walks = deepcopy(temp_walks)
temp_walks = []
return walks
</code></pre>
|
<p>I've come up with a recursive solution. Since you only want to treat small problems, this approach should be feasible. I don't have numpy installed for Python 3, so this is only guaranteed to work for Python 2 as-is (but it should be fairly compatible). Also, I'm pretty sure my implementation is far from optimal.</p>
<p>When checking my output against yours, it occurred to me that I get 200 paths for a 3x3 case, while you get 160. Looking at the paths, I think your code has some bug, and you are the one missing paths (and not me having additional ones). Here's my version:</p>
<pre><code>import numpy as np
import timeit
def get_walks_rec(shape,inpath,ij,n):
# add n more steps to mypath, with dimensions shape
# procedure: call shorter walks for allowed neighbouring sites
mypath = inpath[:]
mypath.append(ij)
# return if this is the last point
if n==0:
return mypath
i0 = ij[0]
j0 = ij[1]
neighbs = [(i,j) for i in (i0-1,i0,i0+1) for j in (j0-1,j0,j0+1) if 0<=i<shape[0] and 0<=j<shape[1] and (i,j)!=(i0,j0)]
subpaths = [get_walks_rec(shape,mypath,neighb,n-1) for neighb in neighbs]
# flatten out the sublists for higher levels
if n>1:
flatpaths = []
map(flatpaths.extend,subpaths)
else:
flatpaths = subpaths
return flatpaths
# front-end for recursive function, called only once
def get_walks_rec_caller(mat,n):
# collect all the paths starting from each point of the matrix
sh = mat.shape
imat,jmat = np.meshgrid(np.arange(sh[0]),np.arange(sh[1]))
tmppaths = [get_walks_rec(sh,[],ij,n-1) for ij in zip(imat.ravel(),jmat.ravel())]
# flatten the list of lists of paths to a single list of paths
allpaths = []
map(allpaths.extend,tmppaths)
return allpaths
# input
mat = np.random.rand(3,3)
nmax = 3
# original:
walks_old = get_walks(mat,nmax)
# new recursive:
walks_new = get_walks_rec_caller(mat,nmax)
# timing:
number = 1000
print(timeit.timeit('get_walks(mat,nmax)','from __main__ import get_walks,mat,nmax',number=number))
print(timeit.timeit('get_walks_rec_caller(mat,nmax)','from __main__ import get_walks_rec_caller,mat,nmax',number=number))
</code></pre>
<p>For this 3x3 case with a max path length of 3, 1000 runs with <code>timeit</code> gives me 1.81 seconds with yours vs 0.53 seconds with mine (and you're missing 20% of your paths). For a 4x4 case with max length of 4, 100 runs give 2.1 seconds (yours) vs 0.67 seconds (mine).</p>
<p>An example path, which is present in mine but seems to be missing from yours:</p>
<pre><code>[(0, 0), (0, 1), (0, 0)]
</code></pre>
|
python|arrays|numpy|optimization
| 1 |
1,907,291 | 29,616,292 |
convertion of datetime to numpy datetime without timezone info
|
<p>Suppose I have a <code>datetime</code> variable: </p>
<pre><code> dt = datetime.datetime(2001,1,1,0,0)
</code></pre>
<p>and I convert it to numpy as follows <code>numpy.datetime64(dt)</code> I get</p>
<pre><code> numpy.datetime64('2000-12-31T19:00:00.000000-0500')
</code></pre>
<p>with <code>dtype('<M8[us]')</code></p>
<p>But this automatically takes into account my time-zone (i.e. EST in this case) and gives me back a date of 2000-12-31 and a time of 19:00 hours. </p>
<p>How can I convert it to <code>datetime64[D]</code> in numpy that ignores the timezone information and simply gives me </p>
<pre><code> numpy.datetime64('2001-01-01')
</code></pre>
<p>with <code>dtype('<M8[D]')</code></p>
<p><a href="http://docs.scipy.org/doc/numpy/reference/arrays.datetime.html" rel="nofollow">The numpy datetime64 doc page</a> gives no information on how to ignore the time-zone or give the default time-zone as UTC</p>
|
<p>I was just playing around with this the other day. I think there are 2 issues - how the <code>datetime.datetime</code> object is converted to <code>np.datetime64</code>, and how the later is displayed.</p>
<p>The <code>numpy</code> doc talks about creating a <code>datetime64</code> object from a date string. It appears that when given a <code>datetime.datetime</code> object, it first produces a string.</p>
<pre><code>np.datetime64(dt) == np.datetime64(dt.isoformat())
</code></pre>
<p>I found that I could add timezone info to that string</p>
<pre><code>np.datetime64(dt.isoformat()+'Z') # default assumption
np.datetime64(dt.isoformat()+'-0500')
</code></pre>
<blockquote>
<p>Numpy 1.7.0 reads ISO 8601 strings w/o TZ as local (ISO specifies this)</p>
<p>Datetimes are always stored based on POSIX time with an epoch of 1970-01-01T00:00Z</p>
</blockquote>
<p>As for display, the <code>test_datetime.py</code> file offers some clues as to the undocumented behavior.</p>
<p><a href="https://github.com/numpy/numpy/blob/280f6050d2291e50aeb0716a66d1258ab3276553/numpy/core/tests/test_datetime.py" rel="nofollow">https://github.com/numpy/numpy/blob/280f6050d2291e50aeb0716a66d1258ab3276553/numpy/core/tests/test_datetime.py</a></p>
<p>e.g.:</p>
<pre><code>def test_datetime_array_str(self):
a = np.array(['2011-03-16', '1920-01-01', '2013-05-19'], dtype='M')
assert_equal(str(a), "['2011-03-16' '1920-01-01' '2013-05-19']")
a = np.array(['2011-03-16T13:55Z', '1920-01-01T03:12Z'], dtype='M')
assert_equal(np.array2string(a, separator=', ',
formatter={'datetime': lambda x :
"'%s'" % np.datetime_as_string(x, timezone='UTC')}),
"['2011-03-16T13:55Z', '1920-01-01T03:12Z']")
</code></pre>
<p>So you can customize the print behavior of an array with <code>np.array2string</code>, and <code>np.datetime_as_string</code>. <code>np.set_printoptions</code> also takes a <code>formatter</code> parameter.</p>
<p>The <code>pytz</code> module is used to add further timezone handling:</p>
<pre><code> @dec.skipif(not _has_pytz, "The pytz module is not available.")
def test_datetime_as_string_timezone(self):
# timezone='local' vs 'UTC'
a = np.datetime64('2010-03-15T06:30Z', 'm')
assert_equal(np.datetime_as_string(a, timezone='UTC'),
'2010-03-15T06:30Z')
assert_(np.datetime_as_string(a, timezone='local') !=
'2010-03-15T06:30Z')
....
</code></pre>
<p>Examples:</p>
<pre><code>In [48]: np.datetime_as_string(np.datetime64(dt),timezone='local')
Out[48]: '2000-12-31T16:00:00.000000-0800'
In [49]: np.datetime64(dt)
Out[49]: numpy.datetime64('2000-12-31T16:00:00.000000-0800')
In [50]: np.datetime_as_string(np.datetime64(dt))
Out[50]: '2001-01-01T00:00:00.000000Z'
In [51]: np.datetime_as_string(np.datetime64(dt),timezone='UTC')
Out[51]: '2001-01-01T00:00:00.000000Z'
In [52]: np.datetime_as_string(np.datetime64(dt),timezone='local')
Out[52]: '2000-12-31T16:00:00.000000-0800'
In [81]: np.datetime_as_string(np.datetime64(dt),timezone=pytz.timezone('US/Eastern'))
Out[81]: '2000-12-31T19:00:00.000000-0500'
</code></pre>
|
python-3.x|numpy|python-datetime
| 3 |
1,907,292 | 29,357,658 |
scrapy crawl multiple pages, extracting data and saving into mysql
|
<p>Hi, can someone help me out? I seem to be stuck. I am learning how to crawl and save into MySQL using Scrapy. I am trying to get Scrapy to crawl all of the website's pages, starting with "start_urls", but it does not seem to automatically crawl all of the pages, only the first one; it does save into MySQL with pipelines.py. It also crawls all pages when provided with URLs in an f = open("urls.txt"), and saves the data using pipelines.py.</p>
<p>here is my code</p>
<h2>test.py</h2>
<pre><code>import scrapy
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.selector import HtmlXPathSelector
from gotp.items import GotPItem
from scrapy.log import *
from gotp.settings import *
from gotp.items import *
class GotP(CrawlSpider):
name = "gotp"
allowed_domains = ["www.craigslist.org"]
start_urls = ["http://sfbay.craigslist.org/search/sss"]
rules = [
Rule(SgmlLinkExtractor(
allow=('')),
callback ="parse",
follow=True
)
]
def parse(self, response):
hxs = HtmlXPathSelector(response)
        prices = hxs.select('//div[@class="sliderforward arrow"]')
        for price in prices:
            item = GotPItem()
            item["price"] = price.select("text()").extract()
yield item
</code></pre>
|
<p>If I understand correctly, you are trying to follow the pagination and extract the results.</p>
<p>In this case, you can avoid using <code>CrawlSpider</code> and use regular <code>Spider</code> class. </p>
<p>The idea would be to parse the first page, extract total results count, calculate how much pages to go and yield <code>scrapy.Request</code> instances to the same URL providing <code>s</code> GET parameter value.</p>
<p>Implementation example:</p>
<pre><code>import scrapy
class GotP(scrapy.Spider):
name = "gotp"
allowed_domains = ["www.sfbay.craigslist.org"]
start_urls = ["http://sfbay.craigslist.org/search/sss"]
results_per_page = 100
def parse(self, response):
total_count = int(response.xpath('//span[@class="totalcount"]/text()').extract()[0])
for page in xrange(0, total_count, self.results_per_page):
yield scrapy.Request("http://sfbay.craigslist.org/search/sss?s=%s&" % page, callback=self.parse_result, dont_filter=True)
def parse_result(self, response):
results = response.xpath("//p[@data-pid]")
for result in results:
try:
print result.xpath(".//span[@class='price']/text()").extract()[0]
except IndexError:
print "Unknown price"
</code></pre>
<p>This would follow the pagination and print prices on the console. Hope this is a good starting point.</p>
|
python|mysql|scrapy
| 0 |
1,907,293 | 46,312,337 |
How to cross check 2 json documents
|
<p>Can someone shed some light on how to do this in Python?
There are 2 JSON documents to compare the pin directions for the same cell. Each JSON document has a list of cells, where each cell has a list of pins with their respective directions. How do I compare the data?</p>
<pre><code>Json 1:
cellA pin1 in
CellA pin2 in
CellA pin3 out
CellB pin1 in
CellB pin2 out
Json 2:
cellA pin1 out
cellA pin2 in
cellA pin3 out
cellB pin1 in
</code></pre>
<p>For the above 2 cells, Python should indicate the mismatches. How should I compare the two? So far, I managed to get each cell with its respective pins and directions, but I'm not sure how to compare the two documents so that the log shows errors in this syntax:</p>
<pre><code>Mismatch [cellA] [pin] [direction_frm_json_1] [direction_frm_json_2]
</code></pre>
<p>Thank you in advance.</p>
<p>Updated with the sample JSON.
Json Type 1:</p>
<pre><code>{
"cell_name": "cellA",
"pins": [
{
"attributes": [
"DIRECTION in ;",
"Comment line ;"
],
"name": "a"
},
{
"attributes": [
"DIRECTION in ;",
"Comment line ;"
],
"name": "b"
},
{
"attributes": [
"DIRECTION out ;",
"Comment line ;"
],
"name": "o"
},
{
"attributes": [
"DIRECTION inout ;",
"Comment line ;"
],
"name": "vcc"
},
{
"attributes": [
"DIRECTION inout ;",
"Comment line ;"
],
"name": "vss"
},
],
"sessionid": "grace_test",
"time_stamp": 1505972674.332383,
"file_type": "file1"
}
</code></pre>
<p>Json Type 2:</p>
<pre><code>{'cell_name': 'cellA',
'power_pin': [{'direction': ['inout'],
               'name': 'vcc',
},
{'direction': ['inout'],
'name': 'vss',
}],
'pin': [{'direction': ['out'],
'name': 'a',
},
{'direction': ['in'],
'name': 'b',
},
{'direction': ['out'],
'name': 'o',
}],
"sessionid": "grace_test",
"time_stamp": 1505885461.0,
"file_type": "file2"
}
</code></pre>
|
<p>I'm guessing you're working with JSON objects, so you can have keys and values. If that's the case, the first thing to do is parse your documents:</p>
<pre><code>import json
docA = json.loads('{"cellA":{"pin1":"in","pin2":"in","pin3":"out"}, \
"cellB":{"pin1":"in","pin2":"out"}}')
docB = json.loads('{"cellA":{"pin1":"out","pin2":"in","pin3":"out"}, \
"cellB":{"pin1":"in"}}')
</code></pre>
<p>So now you can work with Python data structures (dictionaries in this case). Then you can iterate over each dictionary by cell and pin, taking care of the case where some cells or pins are missing in one of the documents:</p>
<pre><code>#Check cells in docA
for cell in docA:
#Check cell pins in docA
for pin in docA[cell]:
valueDocB = docB.get(cell,{}).get(pin,None)
if valueDocB != docA[cell][pin]:
print("Mismatch",cell,pin,docA[cell][pin],valueDocB)
#Check cell pins in docB but not in docA
if cell in docB:
for pin in set(docB[cell]).difference(set(docA[cell])):
print("Mismatch",cell,pin,None,docB[cell][pin])
#Check cells in docB but not in docA
for cell in set(docB).difference(set(docA)):
for pin in docB[cell]:
print("Mismatch",cell,pin,None,docB[cell][pin])
</code></pre>
<p>The output for your example data would be:</p>
<pre><code>Mismatch cellA pin1 in out
Mismatch cellB pin2 out None
</code></pre>
|
python|json|python-3.x
| 1 |
1,907,294 | 60,872,687 |
Is there any JSON tag filtering example?
|
<p>I have one JSON file and I need to list all "selftext" elements of all the data.
Is there any example of how to do it?</p>
<p>data example</p>
<pre><code>{ "data": [
{
"selftext": "hello there",
"textex": true,
},
</code></pre>
|
<p>If you want to be able to find a key from an arbitrary json at an arbitrary level, you should use recursion:</p>
<pre><code>def findkey(data, key, resul = None):
if resul is None: resul=[] # initialize an empty list for the results
if isinstance(data, list): # walk down into lists
for d in data:
findkey(d, key, resul)
elif isinstance(data, dict): # dict processing
for k,v in data.items():
if (k == key) and isinstance(v, str): # the expected key and a string value?
resul.append(v)
elif isinstance(v, list) or isinstance(v, dict):
findkey(v, key, resul) # recurse if value is a list or a dict
return resul
</code></pre>
<p>Example:</p>
<pre><code>>>> data = { "data": [
{
"selftext": "hello there",
"textex": True,
},
]}
>>> findkey(data, 'selftext')
['hello there']
</code></pre>
|
python|json
| 0 |
1,907,295 | 21,083,195 |
Paramiko: how to ensure data is received between commands
|
<p>I'm using Paramiko to issue a number of commands and collect results for further analysis. Every once in a while the results from the first command are not fully returned in time and end up in the output for the second command. </p>
<p>I'm attempting to use recv_ready to account for this but it is not working so I assume I am doing something wrong. Here's the relevant code:</p>
<pre><code>pause = 1
def issue_command(chan, pause, cmd):
# send commands and return results
chan.send(cmd + '\n')
while not chan.recv_ready():
time.sleep(pause)
data = chan.recv(99999)
ssh = paramiko.SSHClient()
ssh.load_system_host_keys()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
chan = ssh.connect(host, port=22, username=username, password=password, timeout=3,)
resp1 = issue_command(chan, pause, cmd1)
resp2 = issue_command(chan, pause, cmd2)
</code></pre>
<p>The output for these commands is relatively small (a few sentences). Increasing the pause would likely solve the problem but is not an ideal solution.</p>
<p>Any suggestions or recommendations would be appreciated. </p>
|
<p>I would use <code>transport</code> directly and create a new channel for each command. Then you can use something like:</p>
<pre><code>def issue_command(transport, pause, command):
    chan = transport.open_session()
    chan.exec_command(command)
    buff_size = 1024
    stdout = ""
    stderr = ""
    while not chan.exit_status_ready():
        time.sleep(pause)
        if chan.recv_ready():
            stdout += chan.recv(buff_size).decode()  # recv() returns bytes on Python 3
        if chan.recv_stderr_ready():
            stderr += chan.recv_stderr(buff_size).decode()
    exit_status = chan.recv_exit_status()
    # Need to gobble up any remaining output after the program terminates...
    while chan.recv_ready():
        stdout += chan.recv(buff_size).decode()
    while chan.recv_stderr_ready():
        stderr += chan.recv_stderr(buff_size).decode()
    return exit_status, stdout, stderr

ssh = paramiko.SSHClient()
ssh.load_system_host_keys()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, port=22, username=username, password=password, timeout=3)
transport = ssh.get_transport()
pause = 1
resp1 = issue_command(transport, pause, cmd1)
resp2 = issue_command(transport, pause, cmd2)
</code></pre>
<p>An even better way would be to take a list of commands and spawn a new channel for each, poll each chan's <code>recv_ready</code>, and suck up their stdout/stderr when output is available. :-)</p>
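<p>A minimal sketch of that multi-channel idea (assuming the same <code>time</code>/<code>paramiko</code> imports as above; the <code>run_commands</code> name and buffer sizes are illustrative, not part of Paramiko):</p>
<pre><code>def run_commands(transport, commands, pause=1):
    # open one exec channel per command up front
    chans = [(cmd, transport.open_session(), [], []) for cmd in commands]
    for cmd, chan, _, _ in chans:
        chan.exec_command(cmd)
    results = {}
    while chans:
        time.sleep(pause)
        pending = []
        for cmd, chan, out, err in chans:
            # drain whatever is currently available on each channel
            while chan.recv_ready():
                out.append(chan.recv(1024).decode())
            while chan.recv_stderr_ready():
                err.append(chan.recv_stderr(1024).decode())
            if chan.exit_status_ready() and not chan.recv_ready():
                results[cmd] = (chan.recv_exit_status(), ''.join(out), ''.join(err))
            else:
                pending.append((cmd, chan, out, err))
        chans = pending
    return results
</code></pre>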
<p>Edit: There are potential issues with reading data after the command exits. Please see the comments!</p>
|
python|paramiko
| 11 |
1,907,296 | 70,066,949 |
python type hint field of parent object
|
<p>Given the class <code>User</code> below, is there any way to type hint the type of the field <code>User.foo</code> in the function <code>f</code> (without explicitly hard coding <code>int</code>)? There are sometimes more complex cases where I would like to be able to refer to the type of a parent object's field directly, derived from the parent type. Is that possible with Python?</p>
<pre class="lang-py prettyprint-override"><code>import typing
import pydantic
class User(pydantic.BaseModel):
foo: int
# what type annotation to use for type of User.foo?
# some things i've tried:
# User.foo
# typing.Type[User.foo]
# typing.Type[User].foo
# typing.Type[User]["foo"]
def f(foo: typing.Type[User].foo):
print(foo)
user = User(foo=42)
f(user.foo)
</code></pre>
<p>Whatever I have tried so far, I always get an error when running <code>mypy</code> or when running <code>python</code>.</p>
<hr />
<p><strong>Update</strong>. I think the example I gave above is too simple to really describe the problem. The example below is a bit more realistic. There are some places marked "??" where I would like to add a type annotation (something like <code>Type[TResult.items]</code>), but I can't find a solution.</p>
<pre class="lang-py prettyprint-override"><code>import abc
from typing import TypeVar, Generic, Literal
from pydantic.generics import GenericModel
TResultItem = TypeVar("TResultItem")
class ResultBase(
GenericModel, Generic[TResultItem]
):
status: Literal["success", "failure"]
items: list[TResultItem]
TResult = TypeVar("TResult", bound=ResultBase)
class CalculatorBase(Generic[TResult], abc.ABC):
def calculate(self) -> TResult:
try:
items = [] # items: ??
for i in range(10):
items.append(
self._calculate_one(i)
)
except Exception:
return self._create_result(
"failure", []
)
else:
return self._create_result(
"success", items
)
@abc.abstractmethod
def _calculate_one(self, i: int): # -> ??
...
@abc.abstractmethod
def _create_result(
self,
status: Literal["success", "failure"],
items, # items: ??
) -> TResult:
...
</code></pre>
<hr />
<p><strong>Update</strong>. Here's some TypeScript; I'm basically looking for the Python equivalent, if it exists.</p>
<pre><code>type User = {
    foo: number;
}

function f(foo: User["foo"]) {
    console.log(foo)
}

const user: User = {foo: 42}
f(user.foo)
</code></pre>
|
<p>You can use a variable to abstract the type away from the object:</p>
<pre><code>import pydantic

foo_type = int  # holds the type of User.foo

class User(pydantic.BaseModel):
    foo: foo_type

def f(foo: foo_type):
    print(foo)

user = User(foo=42)
f(user.foo)
</code></pre>
<p>Prints:</p>
<pre><code>42
</code></pre>
<p>You could also get the type automatically by using an object of <code>User</code>:</p>
<pre><code>foo_type = type(User(foo=0).foo)
</code></pre>
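<p>As a sketch of a less ad-hoc variant, you could resolve the annotation itself with the standard library (note that static checkers such as <code>mypy</code> will not follow a dynamically computed alias like this, so this only helps at runtime):</p>
<pre><code>import typing
import pydantic

class User(pydantic.BaseModel):
    foo: int

# Resolve the annotation at runtime instead of repeating it:
foo_type = typing.get_type_hints(User)["foo"]  # resolves to int

def f(foo: foo_type):
    print(foo)

f(User(foo=42).foo)  # prints 42
</code></pre>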
|
python|type-hinting|mypy|typing|pydantic
| 2 |
1,907,297 | 53,763,760 |
Pandas Dataframe HTML alignment without CSS?
|
<p>I am inserting a dataframe into an HTML MIME email that I am sending out. Some columns need to be left aligned, and others need to be right aligned. I have gone through various posts, and it seems the only option is to use CSS. Before I commit to this method, can anyone tell me if there is an easier, more practical method of aligning the various columns?</p>
<p>So far the best answer I've found that uses CSS is <a href="https://stackoverflow.com/a/50939211/9414465">https://stackoverflow.com/a/50939211/9414465</a> </p>
|
<p>Have you considered a script that reopens the HTML document and inserts an <a href="https://www.w3schools.com/tags/att_td_align.asp" rel="nofollow noreferrer">HTML alignment attribute</a> (<code>&lt;td align="right"&gt;</code>) per cell?</p>
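<p>A sketch of that post-processing idea, assuming <code>bs4</code> (BeautifulSoup) is available; the data frame and the right-aligned column name "price" are made up for illustration:</p>
<pre><code>import pandas as pd
from bs4 import BeautifulSoup

df = pd.DataFrame({"name": ["a", "b"], "price": [1.5, 2.25]})
soup = BeautifulSoup(df.to_html(index=False), "html.parser")

# column positions whose header matches a right-aligned column
right_cols = [i for i, th in enumerate(soup.find_all("th")) if th.get_text() == "price"]
for row in soup.find_all("tr"):
    for i, td in enumerate(row.find_all("td")):
        td["align"] = "right" if i in right_cols else "left"

html = str(soup)  # paste this into the MIME email body
</code></pre>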
|
python|html|pandas
| 0 |
1,907,298 | 53,471,032 |
Adding charts to a Flask webapp
|
<p>I created a web app with Flask where I'll be showing data, so I need charts for it.</p>
<p>The problem is that I don't really know how to do that, so I'm trying to find the best approach. I tried using a JavaScript charting library on my frontend and sending the data to the chart using <em>SocketIO</em>, but I need to send that data frequently, and at a certain point I'll have a lot of it, so pushing a huge load of data through AJAX/SocketIO every time would not be the best thing to do.</p>
<p>To solve this, I had an idea: could I generate the chart from my backend, instead of sending data to the frontend? I think that would be better, since I won't have to send the data to the frontend each time and the chart won't have to be rebuilt on the frontend each time the page is loaded.</p>
<p>So would it be possible to generate a chart from my Flask code in Python and visualize it on my webpage? Is there a good library to do that?</p>
|
<p>Try Dash, a Python library (built on top of Flask) for interactive web charts.</p>
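<p>A minimal sketch of a Dash app (assuming Dash 2.x and plotly are installed; the data frame here is just placeholder data):</p>
<pre><code>import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [4, 1, 7, 3]})  # placeholder data

app = Dash(__name__)
app.layout = html.Div([dcc.Graph(figure=px.line(df, x="x", y="y"))])

if __name__ == "__main__":
    app.run_server(debug=True)  # newer Dash versions also offer app.run()
</code></pre>
<p>The chart itself is rendered client-side by plotly.js, but the data and layout come straight from your Python code, so you avoid hand-rolling the SocketIO plumbing.</p>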
|
python|flask|data-visualization
| 2 |
1,907,299 | 53,359,072 |
For loop to print old value and sum of old value
|
<p>I am using a for loop in Python to display the old value and the sum with the new value. Following is my code:</p>
<pre><code>numbers = [6, 5, 3, 8, 4, 2, 5, 4, 11]
sum = 0
for val in numbers:
    sum = sum + val
print(sum)
</code></pre>
<p>and the output of this loop is <code>48</code>.</p>
<p>But I want to show the output like this:</p>
<pre><code>6
6+5 = 11
11+3 = 14
14+8 = 22
22+4 = 26
26+2 = 28
28+5 = 33
33+4 = 37
37+11 = 48
</code></pre>
<p>Please let me know what I need to change in my code to display the output like this.</p>
|
<p>You could just iterate through the elements of the list, printing the required line in the loop and updating the <code>total</code>:</p>
<pre><code>numbers = [6, 5, 3, 8, 4, 2, 5, 4, 11]

total = numbers[0]
print(f'{total}')
for val in numbers[1:]:
    print(f'{total} + {val} = {total + val}')
    total += val
# 6
# 6 + 5 = 11
# 11 + 3 = 14
# 14 + 8 = 22
# 22 + 4 = 26
# 26 + 2 = 28
# 28 + 5 = 33
# 33 + 4 = 37
# 37 + 11 = 48
</code></pre>
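<p>For what it's worth, here is a sketch of the same output with <code>itertools.accumulate</code> computing the running sums up front:</p>
<pre><code>from itertools import accumulate

numbers = [6, 5, 3, 8, 4, 2, 5, 4, 11]
totals = list(accumulate(numbers))  # [6, 11, 14, 22, 26, 28, 33, 37, 48]

print(totals[0])
for prev, val, cur in zip(totals, numbers[1:], totals[1:]):
    print(f'{prev} + {val} = {cur}')
</code></pre>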
|
python|python-3.x
| 10 |