Unnamed: 0 | id | title | question | answer | tags | score
---|---|---|---|---|---|---
1,907,600 | 17,910,281 |
Animate flood fill in python
|
<p>I have just finished my flood fill algorithm in Python. It runs on an N*N matrix filled with integers. I would like to somehow animate its workings. Is that possible on the console? I am thinking of something like updating the nodes with a wait() in between the updates.</p>
|
<p>You could use something like this:</p>
<pre><code>#! /usr/bin/python3
import time
m = [ [c for c in line] for line in '''............................
..XXXXXXXXXX...........XXX..
..X........X...........X.X..
..XXXXXX...X....XXXXXXXX.X..
.......X...X....X........X..
....XXXX...XXXXXX........X..
....X....................X..
....X.................XXXX..
....XXXXXXXXXXXXXXXXXXX.....'''.split ('\n') ]
def flood (matrix, start):
    maxX = len (matrix [0] )
    maxY = len (matrix)
    tbf = [start]
    while tbf:
        x, y = tbf [0]
        tbf = tbf [1:]
        if x < 0 or x >= maxX or y < 0 or y >= maxY: continue
        if matrix [y] [x] == 'X': continue
        matrix [y] [x] = 'X'
        tbf += [ (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1) ]
        print ('\x1b[0J\x1b[1;1H') # Clear screen and position cursor top left
        for line in matrix: print (''.join (line) )
        time.sleep (.2)

#flood (m, (0, 0) )
flood (m, (4, 2) )
</code></pre>
<p>This should work on any console that supports ANSI escape sequences (CSI). You can use the same CSI codes for outputting colours (<a href="http://en.wikipedia.org/wiki/ANSI_escape_code" rel="nofollow">Wiki</a>).</p>
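<p>One note on the queue: <code>tbf = tbf [1:]</code> copies the list on every pop, which is O(n). A <code>collections.deque</code> keeps the same algorithm with O(1) pops; a minimal sketch of that variant (without the animation):</p>

```python
from collections import deque

def flood(matrix, start):
    max_x, max_y = len(matrix[0]), len(matrix)
    queue = deque([start])
    while queue:
        x, y = queue.popleft()  # O(1), unlike slicing the list
        if not (0 <= x < max_x and 0 <= y < max_y):
            continue
        if matrix[y][x] == 'X':
            continue
        matrix[y][x] = 'X'
        queue.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])

grid = [list(row) for row in ['...', '.X.', '...']]
flood(grid, (0, 0))  # fills everything reachable from the top-left corner
```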
|
python|console|console-application|animated
| 2 |
1,907,601 | 65,961,877 |
Pandas: How to preserve _id when parsing nested list?
|
<p>I am trying to access a nested list within a pandas DataFrame, but when I do so I somehow cannot hold on to the <code>_id</code>. But the <code>_id</code> is needed for later processing.</p>
<p>The DataFrame looks like, where coordinates is a list of floats:</p>
<pre><code> _id coordinates
0 6012fc07360d927bdb07b5ee [47.95816055, 12.470216551879616]
1 6012fc07360d927bdb07b5ef [51.46471555, 13.026622019540596]
2 6012fc07360d927bdb07b5f0 [51.141569849999996, 10.40367219787925]
3 6012fc07360d927bdb07b5f1 [53.5104874, 12.847652330627001]
4 6012fc07360d927bdb07b5f2 [52.196924, 11.396162132482953]
...
</code></pre>
<p>Same DataFrame as dictionary:</p>
<pre><code>{'_id': {0: ObjectId('6012fc07360d927bdb07b5ee'), 1: ObjectId('6012fc07360d927bdb07b5ef'), 2: ObjectId('6012fc07360d927bdb07b5f0'), 3: ObjectId('6012fc07360d927bdb07b5f1'), 4: ObjectId('6012fc07360d927bdb07b5f2')}, 'coordinates': {0: [47.95816055, 12.470216551879616], 1: [51.46471555, 13.026622019540596], 2: [51.141569849999996, 10.40367219787925], 3: [53.5104874, 12.847652330627001], 4: [52.196924, 11.396162132482953]}}
</code></pre>
<p>The DataFrame is being parsed to grab lat and lng values</p>
<pre><code>c_dict = []
for c in data['coordinates']:
    c_dict.append({'lat': c[0], 'lng': c[1]})

coords_df = pd.DataFrame(c_dict)
</code></pre>
<p>The output is now missing the document <code>_id</code><br />
<a href="https://i.stack.imgur.com/xoGAP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xoGAP.png" alt="Output after parsing the DataFrame" /></a></p>
<p>The data is now being filtered, i.e. "Which lat > 52.00?"</p>
<pre><code>north = coords_df[coords_df.lat > 52.00]
north
Output:
lat lng
3 53.510487 12.847652
4 52.196924 11.396162
</code></pre>
<p>By using <code>north.index</code> I am able to retrieve the DataFrame indices and could use these again to filter the starting DataFrame.
I believe that this is not the most efficient way to do so, but I also cannot figure out how to optimize the process.</p>
<p>Help is much appreciated. Thank you.</p>
|
<p>You can keep the <code>_id</code> in the dictionary as well. Note that inside the loop you need the id belonging to the current row, so iterate over both columns together:</p>
<pre><code>for _id, c in zip(data['_id'], data['coordinates']):
    c_dict.append({'_id': _id, 'lat': c[0], 'lng': c[1]})
</code></pre>
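<p>If you want to avoid the Python loop entirely, a vectorized sketch (column names follow the question; the sample values are made up):</p>

```python
import pandas as pd

df = pd.DataFrame({
    '_id': ['a', 'b'],
    'coordinates': [[47.95, 12.47], [53.51, 12.84]],
})

# Expand the coordinate lists into lat/lng columns, keeping _id alongside
coords = pd.DataFrame(df['coordinates'].tolist(),
                      columns=['lat', 'lng'], index=df.index)
coords_df = pd.concat([df['_id'], coords], axis=1)

north = coords_df[coords_df.lat > 52.00]
```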
|
python|pandas|dataframe
| 2 |
1,907,602 | 69,215,764 |
Python Selenium - how to click and pick from the dropdown list items without select tag
|
<p>I'm new to Python and Selenium, trying to automate filling out the form and getting the game gift.</p>
<p>Here is the HTML; it uses list items instead of a select tag.</p>
<pre><code><tr>
<th>Server Name</th>
<td>
<!-- toDev liのdata-value属性に設定して頂いた値が下のhiddenに入ります。 --> <input
type="hidden" name="server" class="js-selected-server">
<ul class="serialForm__select js-select-form" data-type="server">
<li><span class="js-selected-text">Select Server</span>
<div class="serialForm__select--btn">
<img class="iconArrow" data-type="primary"
src="/img/nav/nav_arrow01.svg"> <img class="iconArrow"
data-type="secondary" src="/img/nav/nav_arrow02.svg">
</div></li>
<li data-value="1">Orchard(KOR)</li>
<li data-value="2">Dynasty(CHN)</li>
<li data-value="3">Warlord(SEA)</li>
<li data-value="4">Conquest(US)</li>
<li data-value="5">Invincible(JP)</li>
<li data-value="6">Inferno(TW)</li>
<li data-value="7">Ascension(KOR)</li>
</ul>
</td>
</tr>
<tr>
</code></pre>
<p>Script is able to click the dropdown menu, but not able to pick any of the listed items.
<a href="https://i.stack.imgur.com/CUnwo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CUnwo.png" alt="enter image description here" /></a></p>
<p>I tried the following and none of those works.</p>
<pre><code># dropdown = Select(select_server)
# dropdown.select_by_visible_text('Conquest(US)')
# dropdown.select_by_index('4')
</code></pre>
<p>My code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.support.select import Select
import time
edge_options = {
    "executable_path": "/edgedriver_mac64/msedgedriver",
    # of course change the path to the driver you downloaded.
    "capabilities": {
        "platformName": 'mac os x',  # I got this from the Chrome driver's capabilities
        # "os": "OS X",  # also ok.
    },
}
browser = webdriver.Edge(**edge_options)
browser.get('https://kstory.hangame.com/en/index')
time.sleep(2)
select_server = browser.find_element_by_css_selector(
"span[class='js-selected-text']")
time.sleep(2)
select_server.click()
## below are still testing, not working yet
select_server.send_keys('Conquest(US)')
# dropdown = Select(select_server)
# dropdown.select_by_visible_text('Conquest(US)')
# dropdown.select_by_index('4')
</code></pre>
<p>Any help is appreciated, thanks.</p>
|
<p>Since this is not a native <code>&lt;select&gt;</code> element, the <code>Select</code> helper cannot be used; click the list item directly instead:</p>
<pre><code>driver.find_element_by_xpath("//ul[@data-type='server']/li[text()='Conquest(US)']").click()
</code></pre>
|
python|selenium
| 0 |
1,907,603 | 69,292,293 |
Github actions: Run step after failed step, but only when scheduled
|
<p>I have the following github actions/workflow file below. Behave execution skips the report upload to allure if any test fails or raise an exception. I was able to fix it by using <code>always()</code> on the next step, but unfortunately now the reports are being uploaded even when we are running the tests manually, and we'd like to upload only when scheduled (daily basis). Does anybody know how to achieve this?</p>
<pre><code>...
- name: Run BDD tests
  env:
    STAGE_EMAIL: ${{ secrets.STAGE_EMAIL }}
    STAGE_PASSWORD: ${{ secrets.STAGE_PASSWORD }}
  run: |
    rm -rf allure-results
    mkdir -p allure-results
    behave --junit --junit-directory allure-results
- name: Upload test report to Allure
  if: ${{ always() }} && github.event_name == 'schedule'
  env:
    ALLURE_SERVER: ${{ secrets.ALLURE_SERVER }}
  run: |
    python send_test_results.py $GITHUB_REPOSITORY $GITHUB_RUN_ID $ALLURE_SERVER default
...
</code></pre>
|
<p>This expression will do what you want: <code>${{ always() && github.event_name == 'schedule' }}</code>. In your version, <code>${{ always() }}</code> is expanded first, so the condition becomes the literal string <code>true && github.event_name == 'schedule'</code>, which is always truthy; the whole condition has to live inside a single <code>${{ }}</code> expression.</p>
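<p>Applied to your step above, that means moving the whole condition inside one expression:</p>

```yaml
- name: Upload test report to Allure
  if: ${{ always() && github.event_name == 'schedule' }}
  env:
    ALLURE_SERVER: ${{ secrets.ALLURE_SERVER }}
  run: |
    python send_test_results.py $GITHUB_REPOSITORY $GITHUB_RUN_ID $ALLURE_SERVER default
```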
|
python|github|continuous-integration|github-actions
| 4 |
1,907,604 | 68,119,005 |
Class whose instances are treated as Any by mypy with --strict-equality
|
<p>I'm creating a library of matchers that are used by comparing against values with <code>==</code>. Unfortunately, when comparing a compound type containing a matcher instance against a value with a non-<code>Any</code> type, like so:</p>
<pre class="lang-py prettyprint-override"><code>assert {"foo": SomeMatcher(arg)} == {"foo": 3}
</code></pre>
<p>then mypy with the <code>--strict-equality</code> flag complains about a "Non-overlapping equality check". For now, I'm working around this by calling the constructors via functions whose return types are annotated as <code>Any</code>, but I'm wondering if it's possible to make mypy treat my matcher instances as always <code>Any</code> automatically. I know something like this is possible, as mypy allows <code>unittest.mock.Mock</code> instances to be passed in place of non-<code>Mock</code> arguments, as though the <code>Mock</code> instances all had type <code>Any</code> — or is that something hardcoded into mypy?</p>
<p>I've already tried getting this to work by giving my matchers a trivial <code>__new__</code> that's annotated with return type <code>Any</code>, as well as using a metaclass whose <code>__new__</code> had return type <code>Any</code>, but neither worked.</p>
<p>An MVCE of a failing matcher:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any

class InRange:
    def __init__(self, start: int, end: int) -> None:
        self.start = start
        self.end = end

    def __eq__(self, other: Any) -> bool:
        try:
            return bool(self.start <= other <= self.end)
        except (TypeError, ValueError):
            return False

assert {"foo": InRange(1, 5)} == {"foo": 3}
assert {"foo": 3} == {"foo": InRange(1, 5)}
</code></pre>
|
<p>Mypy does have some hardcoded exceptions for <a href="https://github.com/python/mypy/commit/7678f28f65cf6e878cacd5ba663e6e54347653ae" rel="nofollow noreferrer">certain cases, for example <code>set</code> and <code>frozenset</code></a>. Although not for <code>unittest.mock.Mock</code>, there it's solved <a href="https://github.com/python/typeshed/blob/f260ea238308fe9ad1b9bd57ed6b573e6b653c97/stdlib/unittest/mock.pyi#L75" rel="nofollow noreferrer">by subclassing <code>Any</code></a>.</p>
<p>Subclassing <code>Any</code> works for type stubs, but at runtime python throws an error: <code>TypeError: Cannot subclass typing.Any</code>. The following solves that problem:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TYPE_CHECKING, Any

if TYPE_CHECKING:
    Base = Any
else:
    Base = object

class InRange(Base):
    def __init__(self, start: int, end: int) -> None:
        self.start = start
        self.end = end

    def __eq__(self, other: Any) -> bool:
        try:
            return bool(self.start <= other <= self.end)
        except (TypeError, ValueError):
            return False

assert {"foo": InRange(1, 5)} == {"foo": 3}
assert {"foo": 3} == {"foo": InRange(1, 5)}
</code></pre>
|
python|python-3.x|mypy
| 1 |
1,907,605 | 68,084,941 |
TI Nspire Python: eval_function
|
<p>Is there an example of how <code>eval_function</code> from the TI System module works?</p>
<p>I have attempted</p>
<pre><code>s=eval_function("sin",0.5)
print(s)
</code></pre>
<p>but this code returns an error.</p>
<p>So does</p>
<pre><code>s=eval_function("sin(0.5)",True)
print(s)
</code></pre>
<p>Thank you in advance,</p>
<p>Eddie</p>
|
<p>You will need a user-defined function on the Notes or Calculator page to assign a name to the OS function, like this:</p>
<p><code>ef(x) := sin(x)</code> or <code>Define ef(x) = sin(x)</code></p>
<p>Example of Python script on the editor page:</p>
<pre class="lang-py prettyprint-override"><code>from ti_system import *
s = eval_function("ef", 0.5)
print(s)
</code></pre>
|
python|ti-nspire
| 2 |
1,907,606 | 59,229,421 |
Pyodbc to SQLAlchemy connection string for Sage 50
|
<p>I am trying to switch a pyodbc connection to sqlalchemy engine. My working pyodbc connection is:</p>
<pre><code>con = pyodbc.connect('DSN=SageLine50v23;UID=#####;PWD=#####;')
</code></pre>
<p>This is what I've tried. </p>
<pre><code>con = create_engine('pyodbc://'+username+':'+password+'@'+url+'/'+db_name+'?driver=SageLine50v23')
</code></pre>
<p>I am trying to connect to my Sage 50 accounting data but just can't work out how to build the connection string. This is where I downloaded the odbc driver <a href="https://my.sage.co.uk/public/help/askarticle.aspx?articleid=19136" rel="nofollow noreferrer">https://my.sage.co.uk/public/help/askarticle.aspx?articleid=19136</a>. </p>
<p>I got some orginal help for the pyodbc connection using this website (which is working) <a href="https://www.cdata.com/kb/tech/sageuk-odbc-python-linux.rst" rel="nofollow noreferrer">https://www.cdata.com/kb/tech/sageuk-odbc-python-linux.rst</a> but would like to use SQLAlchemy for it connection with pandas. Any ideas? Assume the issue is with this part <code>pyodbc://</code></p>
|
<p>According to <a href="https://www.sagecity.com/ca/sage_50_accounting_ca/f/sage-50-ca-accounts-payable-receivable-modules/75796/connect-to-database-sage-50" rel="nofollow noreferrer">this thread</a> Sage 50 uses MySQL to store its data. However, Sage also provides its own ODBC driver which may or may not use the same SQL dialect as MySQL itself.</p>
<p>SQLAlchemy needs to know which SQL dialect to use, so you could try using the <code>mysql+pyodbc://...</code> prefix for your connection URI. If that doesn't work (presumably because "Sage SQL" is too different from "MySQL SQL") then you may want to ask Sage support if they know of a SQLAlchemy dialect for their product.</p>
|
python|sqlalchemy|pyodbc|sage50
| 0 |
1,907,607 | 59,420,892 |
How to print 2 horizontal lines
|
<p>I have the range of numbers 132-149, and I have to print two horizontal lines,
the first for even numbers and the second for odd numbers.
I can print odd and even numbers separately using:</p>
<pre><code>>>> for num in range (132, 150):
if num % 2 == 0:
print (num, end = ' ')
132 134 136 138 140 142 144 146 148
>>> for num in range (132, 150):
if num % 2 != 0:
print (num, end = ' ')
133 135 137 139 141 143 145 147 149
want these two to combine and printe something like this:
132 134 136 138 140 142 144 146 148
133 135 137 139 141 143 145 147 149
</code></pre>
|
<p>Thanks for your edit.
First of all, you don't need to iterate over your numbers multiple times; you can just use Python's <a href="https://www.w3schools.com/python/python_conditions.asp" rel="nofollow noreferrer">if/else</a> construct.</p>
<pre><code>even_list = []
odd_list = []
for num in range(132, 150):
    if num % 2 == 0:
        even_list.append(str(num))  # cast to string to use the join function later
    else:
        odd_list.append(str(num))
print(" ".join(even_list))
print(" ".join(odd_list))
</code></pre>
<h1>Output</h1>
<pre><code>132 134 136 138 140 142 144 146 148
133 135 137 139 141 143 145 147 149
</code></pre>
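<p>Since the numbers alternate, you can also build both lines directly with a step of 2 (equivalent result, fewer passes):</p>

```python
# range(start, stop, 2) walks only every second number
evens = ' '.join(str(n) for n in range(132, 150, 2))
odds = ' '.join(str(n) for n in range(133, 150, 2))
print(evens)
print(odds)
```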
|
python
| 0 |
1,907,608 | 59,311,779 |
Python class and function separated?
|
<p>Why does my IDE separate the class and the function from each other? If I make that change and run, I get an error like this: <code>TypeError: check_turn() missing 1 required positional argument: 'turn'</code></p>
<p><strong>This is how my code first looks like</strong></p>
<pre><code>class Dev:
players = ['Bob', 'Anne']
turn = [0, 1]
def check_turn(self, players, turn):
for i in turn:
if i == 0:
print("It's {} turn".format(players[0]))
elif i == 1:
print("It's {} turn".format(players[1]))
return i
check_turn(players, turn)
</code></pre>
<p><strong>This is after I need to change it</strong></p>
<pre><code>def check_turn(players, turn):
for i in turn:
if i == 0:
print("It's {} turn".format(players[0]))
elif i == 1:
print("It's {} turn".format(players[1]))
return i
class Dev:
players = ['Bob', 'Anne']
turn = [0, 1]
check_turn(players, turn)
</code></pre>
|
<p>In your case, I believe there is no point in using a class; a simple function will do.</p>
<blockquote>
<p>Classes provide a means of bundling data and functionality together.
Creating a new class creates a new type of object, allowing new
instances of that type to be made. Each class instance can have
attributes attached to it for maintaining its state. Class instances
can also have methods (defined by its class) for modifying its state.</p>
</blockquote>
<pre><code>players = ['Bob', 'Anne']
turn = [0, 1]
def check_turn(players, turn):
for i in turn:
if i == 0:
print("It's {} turn".format(players[0]))
elif i == 1:
print("It's {} turn".format(players[1]))
return i
check_turn(players, turn)
</code></pre>
<p>The output : </p>
<pre><code>It's Bob turn
</code></pre>
<p>But if you want to run the <code>check_turn()</code> method when creating an instance of the class, you have to correct it like this:</p>
<pre><code>class Dev:
    players = ['Bob', 'Anne']
    turn = [0, 1]

    def check_turn(self, players, turn):
        for i in turn:
            if i == 0:
                print("It's {} turn".format(players[0]))
            elif i == 1:
                print("It's {} turn".format(players[1]))
            return i

    def __init__(self):
        self.check_turn(self.players, self.turn)

Dev()
</code></pre>
<p>The output : </p>
<pre><code>It's Bob turn
</code></pre>
|
python
| 0 |
1,907,609 | 59,326,290 |
What is the Django query for the below SQL statement?
|
<p>There is a simple SQL table with 3 columns: id, sku, last_update
and a very simple SQL statement: SELECT DISTINCT sku FROM product_data ORDER BY last_update ASC</p>
<p>What would be a django view code for the aforesaid SQL statement?</p>
<p>This code:<br>
<code>q = ProductData.objects.values('sku').distinct().order_by('sku')</code><br>
<b>returns 145 results</b></p>
<p>whereas this statement:<br>
<code>q = ProductData.objects.values('sku').distinct().order_by('last_update')</code><br>
<b>returns over 1000</b> results</p>
<p>Why is it so? Can someone, please, help?</p>
<p>Thanks a lot in advance!</p>
|
<p>The difference is that in the first query the result is a list of (<code>sku</code>)s, in the second is a list of (<code>sku</code>, <code>last_update</code>)s, this because any fields included in the <code>order_by</code>, are also included in the SQL <code>SELECT</code>, thus the distinct is applied to a different set or records, resulting in a different count.</p>
<p>Take a look to the queries Django generates, they should be something like the followings:</p>
<p>Query #1</p>
<pre class="lang-py prettyprint-override"><code>>>> str(ProductData.objects.values('sku').distinct().order_by('sku'))
'SELECT DISTINCT "yourproject_productdata"."sku" FROM "yourproject_productdata" ORDER BY "yourproject_productdata"."sku" ASC'
</code></pre>
<p>Query #2</p>
<pre class="lang-py prettyprint-override"><code>>>> str(ProductData.objects.values('sku').distinct().order_by('last_update'))
'SELECT DISTINCT "yourproject_productdata"."sku", "yourproject_productdata"."last_update" FROM "yourproject_productdata" ORDER BY "yourproject_productdata"."last_update" ASC'
</code></pre>
<p>This behaviour is described in the <a href="https://docs.djangoproject.com/en/3.0/ref/models/querysets/#distinct" rel="nofollow noreferrer"><code>distinct</code> documentation</a></p>
<blockquote>
<p>Any fields used in an order_by() call are included in the SQL SELECT
columns. This can sometimes lead to unexpected results when used in
conjunction with distinct(). If you order by fields from a related
model, those fields will be added to the selected columns and they may
make otherwise duplicate rows appear to be distinct. Since the extra
columns don’t appear in the returned results (they are only there to
support ordering), it sometimes looks like non-distinct results are
being returned.</p>
<p>Similarly, if you use a values() query to restrict the columns
selected, the columns used in any order_by() (or default model
ordering) will still be involved and may affect uniqueness of the
results.</p>
<p>The moral here is that if you are using distinct() be careful about
ordering by related models. Similarly, when using distinct() and
values() together, be careful when ordering by fields not in the
values() call.</p>
</blockquote>
|
python|sql|django|orm
| 2 |
1,907,610 | 63,117,475 |
How to attach to a running Popen process
|
<p>I have a shell script that runs for a while and then takes an input.
I want to create the process:</p>
<pre><code>process = subprocess.Popen(
'/my/longscript/wait.sh',
shell=True,
stdout=subprocess.PIPE,
stdin=subprocess.PIPE,
stderr=subprocess.PIPE,
universal_newlines=True
)
process
<subprocess.Popen at 0x7fb7b319ef60>
process.pid
10248
</code></pre>
<p>And then in a different session attach to this process and send it some stdin. How can I reattach to the process, using the pid, or how do I do this?</p>
<pre><code>running_process = subprocess.attach_to_my_old_process(pid=10248)???
running_process.communicate(input='someinput')
</code></pre>
|
<p>Turning my comment into an answer: you have to use a named pipe for this.</p>
<p>How this can be done, I answered in your <a href="https://stackoverflow.com/questions/63132778/how-to-use-fifo-named-pipe-as-stdin-in-popen-python">related question</a>.</p>
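<p>For reference, the core mechanism is a named pipe (FIFO): the long-running process opens it as its stdin, and any later session writes into the same path. A minimal single-process demonstration of the mechanism (POSIX only; in practice the reader and writer are your two separate sessions):</p>

```python
import os
import tempfile

fifo = os.path.join(tempfile.mkdtemp(), 'wait_stdin')
os.mkfifo(fifo)  # create the named pipe on disk

# Session 1 would pass open(fifo) as stdin= to Popen; here we simulate both ends.
rd = os.open(fifo, os.O_RDONLY | os.O_NONBLOCK)  # reader (the waiting process)
wr = os.open(fifo, os.O_WRONLY)                  # writer (the second session)
os.write(wr, b'someinput\n')
data = os.read(rd, 1024)
os.close(wr)
os.close(rd)
```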
|
python|popen
| 1 |
1,907,611 | 62,070,737 |
How to build a scalable image search engine
|
<p>I have been working on building an image search engine which use Machine Learning to find similar images.</p>
<p><strong>My approach</strong> is, I have used a pre-trained CNN model, and when user inputs an image it is passed through that CNN and image embeddings are taken from the last hidden layer. Cosine Similarity is calculated with embeddings of stored images and most similar images are returned.</p>
<p><strong>Question:</strong> I want to make an <strong>API</strong> out of it which is <strong>scalable</strong> and can be used by <strong>hundreds of users</strong> at the same time, with <strong>minimal response time</strong>. Also there can be <strong>huge number of images</strong> in the database so what would be the best way to <strong>store</strong> them?</p>
|
<p>Here are two alternatives you might follow.</p>
<ol>
<li>You can store each image's embedding (encoded vector) as a <strong>pair</strong>
with the image itself, in binary format for example.</li>
</ol>
<h3>Pros</h3>
<hr>
<p>Much higher <strong>time</strong> efficiency than encoding at runtime, as you just encode the user's image and compare it with the <strong>existing</strong> encoded values.</p>
<h3>Cons</h3>
<hr>
<p><strong>Memory</strong> consuming (database size), but you may improve this by building a network with a strong encoding result; this way you can <strong>reduce</strong> the output vector length significantly. I guess setting some constraints on the encoder output is going to help achieve this.</p>
<hr>
<ol start="2">
<li><p>Have an architecture to compress your images: simply an
<strong>encoder</strong> and <strong>decoder</strong> architecture.</p>
<p>You target <strong>two layers</strong>: the <strong>first</strong> one encodes the
image a little (no loss of information), and the last layer
has strongly encoded weights.</p>
<p>Store both layers' outputs: the <strong>encoded</strong> value of the image, and the
<strong>encoded</strong> value which you are going to use for measuring
<strong>similarity</strong>.</p>
<p>You pass the <strong>encoded image</strong> to the <strong>decoder</strong>, which could be a
<strong>transposed convolution layer</strong> (simply called <strong>deconvolution</strong>), for
rendering your image.</p></li>
</ol>
<h3>Pros</h3>
<hr>
<p>Still high <strong>time</strong> efficiency, but less than the <strong>first</strong> approach, as you need to decode the image whenever you need to show it.</p>
<h3>Cons</h3>
<hr>
<p><strong>Memory</strong> consuming (database size), but much less than the 1st approach.</p>
<hr>
<p>You can improve overall <strong>memory</strong> and <strong>time</strong> by getting rid of the last layer's encoded value and just storing the <strong>intermediate</strong> layer's encoded value (a <strong>semi-encoded image</strong>). This way the <strong>memory</strong> needed to store images is reduced significantly, by <strong>compressing</strong> their <strong>dimensions</strong>, while time stays about the same per <strong>request</strong> (continue <strong>encoding</strong> each incoming image, and <strong>decoding</strong> when rendering an image).</p>
<hr>
<h2>Update</h2>
<p>For <strong>building the model</strong>, the easiest way is using <strong>Keras</strong>, a high level <strong>API</strong> that runs on top of <strong>TensorFlow</strong>; or you can use <strong>PyTorch</strong>.</p>
<p>As I usually use it, <strong>TensorFlow</strong> is really great for both efficiency and for handling the whole process, such as <strong>preprocessing</strong> (it uses an efficient format for your images), <strong>training</strong> and <strong>predicting</strong>, and it has very nice functionality for handling your input <strong>pipeline</strong>.</p>
<hr>
<p>After your model has been trained, you need to <strong>search as efficiently as possible</strong>. If you <strong>cluster</strong> your <strong>encoded vectors</strong> using an algorithm such as <strong>KMeans</strong>, then you just need to compare against <strong>N clusters</strong> rather than the <strong>entire dataset</strong>; you can use <strong>KMeans</strong> from <strong>TensorFlow</strong> or <strong>Sklearn</strong>. Anyway, I recommend you keep some logic for each case, searching the entire dataset or searching <strong>N clusters</strong>, and <strong>alternate</strong> between them.</p>
<p>Another way to search <strong>efficiently</strong> is to use a <strong>data structure</strong> such as a <strong>k-d tree</strong>, but you need to add some logic to handle your case; <strong>KMeans</strong> might be a better choice for you, as a <strong>k-d tree</strong> might need some additional effort.</p>
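<p>For the similarity step itself, the whole comparison against the stored embeddings can be a single matrix product (a NumPy sketch; the embeddings here are random stand-ins for your precomputed CNN outputs):</p>

```python
import numpy as np

def top_k_similar(query, embeddings, k=5):
    # Normalize rows so a dot product equals cosine similarity
    q = query / np.linalg.norm(query)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = e @ q                       # one similarity score per stored image
    return np.argsort(sims)[::-1][:k]  # indices of the k most similar images

rng = np.random.default_rng(0)
db = rng.normal(size=(100, 128))             # 100 fake image embeddings
query = db[7] + 0.01 * rng.normal(size=128)  # a slightly perturbed copy of #7
best = top_k_similar(query, db, k=3)
```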
<hr>
<p>Then you can use the <strong>Django</strong> framework, or even <strong>Flask</strong>, to package your implementation and logic into a web based app.</p>
|
python|machine-learning|web|architecture|scalability
| 1 |
1,907,612 | 62,223,850 |
Folding code in a Python function
|
<p>Let's say I have a python function like this:</p>
<pre><code>class Something:
    def my_function(...):  <---- start fold
        ...
        return None        <---- end fold

    def my_function2(...):
        ...
</code></pre>
<p>If I am on the first function line, <code>def my_function</code> -- and let's suppose that function is ~50 locs -- how would I fold that function in vim? The first thing I thought of doing is <code>zf/return</code> -- but this is quite flawed, as (1) lots of functions won't have return statements; or, even more common, there will be multiple return statements within a single function.</p>
<p>What would be the best way to do this?</p>
<p>(StackOverflow doesn't allow the word 'code' in a post??)</p>
|
<p>Try <code>:set foldmethod=indent</code>. It may work for you. <a href="https://vim.fandom.com/wiki/Syntax_folding_of_Python_files" rel="nofollow noreferrer">VimWiki</a> can be quite helpful though.</p>
<p>The problem with python is the lack of explicit block delimiters. So you may want to use some plugins like <a href="https://github.com/tmhedberg/SimpylFold" rel="nofollow noreferrer">SimpylFold</a></p>
|
python|vim
| 0 |
1,907,613 | 62,048,555 |
How do I convince JsonResponse to serialize a custom class?
|
<p>I'm using Django 3.0.6 with Python 3.7.</p>
<p>My views/controllers return a <code>JsonResponse</code>, like that:</p>
<pre><code> return JsonResponse({
'My IP string': champion.ip_string,
'Chiefdom full name': chiefdom.FullName(),
'Chiefdom': chiefdom, # This will raise an exception
})
</code></pre>
<p>I would like to <strong>upgrade the Chiefdom class</strong> to behave nicely with <code>JsonResponse</code>.</p>
<p>The most obvious way to do that to me would be to <strong>override a method</strong> that <code>JsonResponse</code> is going to call to serialise this (I guess? It has to, come on…), but… I haven't been able to find any information at all regarding that.</p>
<p><strong>The whole point of this question is NOT modifying the VIEW</strong>. Only modify the Chiefdom class itself, in a way that then <code>JsonResponse</code> suddenly decides that "oh well now it is indeed serialisable".</p>
<p><em>I could only find information that involves modifying the view itself, or stuff even more convoluted. There's tons and tons of irrelevant information, and if what I'm searching for is there somewhere, it was buried under piles of other stuff.</em></p>
|
<p>Pass a <strong><em>custom encoder</em></strong> by specifying the <a href="https://docs.djangoproject.com/en/3.0/ref/request-response/#django.http.JsonResponse" rel="nofollow noreferrer"><strong><code>encoder</code></strong></a> parameter.</p>
<p>By default, <strong><code>JsonResponse</code></strong> uses <a href="https://docs.djangoproject.com/en/3.0/topics/serialization/#django.core.serializers.json.DjangoJSONEncoder" rel="nofollow noreferrer"><strong><code>django.core.serializers.json.DjangoJSONEncoder</code></strong></a> class.</p>
<h3>Example:</h3>
<pre><code>from django.core.serializers.json import DjangoJSONEncoder

class <b>MyCustomEncoder</b>(DjangoJSONEncoder):
    ...

def sample_view(request):
    return JsonResponse({
        'My IP string': champion.ip_string,
        'Chiefdom full name': chiefdom.FullName(),
        'Chiefdom': chiefdom,  # Now handled by MyCustomEncoder
    }, <b>encoder=MyCustomEncoder</b>)</code></pre>
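<p>A sketch of what the encoder override might look like: the hook to override is <code>default()</code>, inherited from <code>json.JSONEncoder</code> (which <code>DjangoJSONEncoder</code> subclasses). The <code>Chiefdom</code> fields here are made up for illustration:</p>

```python
import json

class Chiefdom:
    def __init__(self, name, region):
        self.name = name
        self.region = region

# DjangoJSONEncoder extends json.JSONEncoder, so the same override applies there
class MyCustomEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, Chiefdom):
            return {'name': o.name, 'region': o.region}
        return super().default(o)  # fall back for everything else

payload = json.dumps({'Chiefdom': Chiefdom('Bo', 'South')}, cls=MyCustomEncoder)
```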
|
json|django|python-3.x|serialization
| 1 |
1,907,614 | 35,454,430 |
Run an executable from Python in the background and keep it running
|
<p>I have an executable made by <code>py2exe</code> which checks in an infinite loop whether my VPN is connected, running on Windows. I want to make sure it runs in the background, hidden. I searched several forums and found 2 approaches that worked partially.</p>
<ol>
<li><p>Rename the script to <code>scrypt.py</code> and run <code>py2exe</code> again: the .exe is hidden when I run it, but it closes or vanishes. It doesn't continue running after the click.</p></li>
<li><p>I made another exe to call the first:</p></li>
</ol>
<pre class="lang-py prettyprint-override"><code> import os
import subprocess
os.chdir("C:\Users\dl\Documents\Log\Py")
proc = subprocess.Popen('ipLog.exe', creationflags=subprocess.SW_HIDE, shell=True)
proc.wait()
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code> os.chdir("C:\Users\dl\Documents\Log\Py")
proc = subprocess.Popen('ipLog.exe', creationflags=subprocess.SW_HIDE, shell=True)
</code></pre>
<p>This works, but the first command window is still visible, and when I close it, the exe it called quits too.</p>
<ol start="3">
<li>I tried to install a module called self.hide but I can't.</li>
</ol>
<p>I am a newbie in Python and am trying to move my hobby VB/VBA projects to Python.</p>
<p>Thanks for any help.</p>
|
<p>I found a solution in this thread: <a href="https://stackoverflow.com/questions/12843903/how-to-start-daemon-process-from-python-on-windows">How to start daemon process from python on windows?</a>. Thanks to everyone who helped with that thread; it helped my script too.</p>
|
python-2.7
| 1 |
1,907,615 | 58,607,942 |
Python: too many keyword arguments
|
<p>The code below can use the same data as input, but I need to use <code>**_</code> to avoid a "too many arguments" error.
Is there a way to remove the <code>**_</code> parameter and pass only the correct parameters to <code>f1</code> and <code>f2</code>?</p>
<pre><code>def f1(freq, bw, **_):
    print(freq, bw)

def f2(FW_ID, **_):
    print(FW_ID)

db = {
    'freq': 2414,
    'bw': 20,
    'FW_ID': 0.1,
}

f1(**db)
f2(**db)
</code></pre>
|
<p>Any method you use to hack this to save lines of code would be unreadable, and would lead to bugs that are difficult to trace when your data in <code>db</code> is malformed.</p>
<p>Do it the correct way, by being <a href="https://www.python.org/dev/peps/pep-0020/" rel="nofollow noreferrer">explicit, not implicit</a>. Call <code>f1</code> and <code>f2</code> like this:</p>
<pre><code>f1(freq=db['freq'], bw=db['bw'])
f2(FW_ID=db['FW_ID'])
</code></pre>
<p>Or even just:</p>
<pre><code>f1(db['freq'], db['bw'])
f2(db['FW_ID'])
</code></pre>
<p>If you do that then there's no need for your <code>**_</code> argument, and you can write a more readable function signature:</p>
<pre><code>def f1(freq, bw):
    print(freq, bw)

def f2(FW_ID):
    print(FW_ID)
</code></pre>
<p>An alternative to all that is to have both functions accept the full <code>db</code> (<code>dict</code>) as an argument, and parse it inside (but that might be too repetitive)</p>
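<p>For completeness, that last alternative looks like this (a sketch; each function pulls only the keys it needs from the dict):</p>

```python
db = {
    'freq': 2414,
    'bw': 20,
    'FW_ID': 0.1,
}

def f1(db):
    freq, bw = db['freq'], db['bw']  # each function reads only its own keys
    return freq, bw

def f2(db):
    return db['FW_ID']

result1 = f1(db)
result2 = f2(db)
```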
|
python|arguments
| 2 |
1,907,616 | 73,292,690 |
How to assign a slice to a slice in a Pandas dataframe?
|
<p>Having:</p>
<pre><code>np.random.seed(42)
df_one = pd.DataFrame(np.random.rand(4,3), columns=['colA', 'colB', 'colC'])
</code></pre>
<p>And:</p>
<pre><code>df_two = pd.DataFrame(np.ones([2,3]), columns=['colA', 'colB', 'colC'])
</code></pre>
<p>When I try to assign, like this:</p>
<pre><code>df_one.loc[2:4, 'colB'] = df_two.loc[:, 'colB']
</code></pre>
<p>I get:</p>
<pre><code>colA colB colC
0.37 0.95 0.73
0.59 0.15 0.15
0.05 NaN 0.60
0.70 NaN 0.96
</code></pre>
<p>When expecting to get:</p>
<pre><code>colA colB colC
0.37 0.95 0.73
0.59 0.15 0.15
0.05 1.0 0.60
0.70 1.0 0.96
</code></pre>
|
<p>The NaNs appear because <code>.loc</code> assignment aligns on index labels: <code>df_two</code> has index <code>[0, 1]</code> while the target rows are labelled <code>2</code> and <code>3</code>. This can be fixed by adding <code>.to_list()</code> at the end, which discards the index:</p>
<pre><code>df_one.loc[2:4, 'colB'] = df_two.loc[:, 'colB'].to_list()
</code></pre>
<p>Or:</p>
<pre><code>df_one.loc[2:4, 'colB'] = df_two.loc[:, 'colB'].values
</code></pre>
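<p>A third option, shown as a sketch, is to relabel the source Series so that pandas' label alignment (the cause of the NaNs) matches the target rows instead of missing them. The <code>[2, 3]</code> labels below assume the same two target rows as in the question:</p>

```python
import numpy as np
import pandas as pd

np.random.seed(42)
df_one = pd.DataFrame(np.random.rand(4, 3), columns=['colA', 'colB', 'colC'])
df_two = pd.DataFrame(np.ones([2, 3]), columns=['colA', 'colB', 'colC'])

# Give the source Series the same labels as the target rows,
# so label alignment lines up instead of producing NaN.
df_one.loc[2:4, 'colB'] = df_two['colB'].set_axis([2, 3])
print(df_one['colB'])
```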
|
python|pandas|slice
| 0 |
1,907,617 | 31,634,952 |
Why does this print the correct result in python 2x
|
<pre><code>print 'The value of pi is ' + str() + '3.14'
</code></pre>
<p>This will throw an error with int() and float(), but not with str(). Why is that?
Any help greatly appreciated. </p>
|
<p>Because writing <code>str()</code> is equivalent to writing <code>''</code>, i.e. an empty string. With the <code>+</code> operator, you can either use it to add numbers or you can use it to concatenate strings together. When you try to add a string to a non-string, it won't work.</p>
<p>Your code is equivalent to the following:</p>
<pre><code>print 'The value of pi is ' + '' + '3.14'
</code></pre>
<p>Which, in turn, is equivalent to:</p>
<pre><code>print 'The value of pi is 3.14'
</code></pre>
<hr>
<p>Using <code>int()</code>, your code is equivalent to this:</p>
<pre><code>print 'The value of pi is ' + 0 + '3.14'
</code></pre>
<p>And with <code>float()</code>:</p>
<pre><code>print 'The value of pi is ' + 0.0 + '3.14'
</code></pre>
<p>Neither of these will work because they try to add a string to a number.</p>
<hr>
<p>You might have meant to do this instead:</p>
<pre><code>print 'The value of pi is ' + str(3.14)
</code></pre>
<p>This converts <code>3.14</code> from a float (number with decimals) to a string, so that in can be concatenated with the string <code>'The value of pi is '</code>.</p>
|
python|string
| 3 |
1,907,618 | 31,622,751 |
Show user a message in python
|
<p>quick question here. Is there a really easy way to show a user a message in like a text box in python? I just need to input a string beforehand so I can show the message after a loop runs. Thanks!</p>
<p>EDIT: Just Windows 8.1 and straight python 2.7</p>
|
<p>If I'm correct that you want a window to let a user type in some text, then show it back as a message box, then you need two modules, <code>tkMessageBox</code> and <code>Tkinter</code>, both standard modules in Python 2.7. </p>
<p>The following documented code does the above</p>
<pre><code>from Tkinter import *
import tkMessageBox

def getInfo():
    #This function gets the entry of the text area and shows it to the user in a message box
    tkMessageBox.showinfo("User Input", E1.get())

def quit():
    #This function closes the root window
    root.destroy()

root = Tk()  #specify the root window
label1 = Label(root, text="User Input")  #specify the label to show the user what the text area is about
E1 = Entry(root, bd=5)  #specify the input text area
submit = Button(root, text="Submit", command=getInfo)  #specify the "submit" button that runs "getInfo" when clicked
quit_button = Button(root, text='Quit', command=quit)  #specify the quit button that quits the main window when clicked

#Add the items we specified to the window
label1.pack()
E1.pack()
submit.pack(side=BOTTOM)
quit_button.pack(side=BOTTOM)

root.mainloop()  #Run the "loop" that shows the windows
</code></pre>
|
python|text|output
| 1 |
1,907,619 | 15,814,087 |
Prevent terminal character set switch on data printing
|
<p>I am running a console application that takes data coming from various sensors around the house. Sometimes the transmission is interrupted and thus the packet does not make sense. When that happens, the contents of the packet is output to a terminal session.
However, what has happened is that while outputting the erroneous packet, it has contained characters that changed character set of the current terminal window, rendering any text (apart from numbers) as unreadable gibberish.</p>
<p>What would be the best way to filter the erroneous packets before their display while retaining most of the special characters? What exactly are sequences that can change behaviour of the terminal?</p>
<p>I would also like to add that apart from scrambled output, the application still works as it should.</p>
|
<p>Your terminal may be responding to <a href="http://en.wikipedia.org/wiki/ANSI_escape_code" rel="nofollow noreferrer">ANSI escape codes</a>.
To prevent the data from inadvertently affecting the terminal, you could print the <code>repr</code> of the data, rather than the data itself:</p>
<p>For example,</p>
<pre><code>good = 'hello'
bad = '\033[41mRED'
print('{!r}'.format(good))
print('{!r}'.format(bad))
</code></pre>
<p>yields</p>
<pre><code>'hello'
'\x1b[41mRED'
</code></pre>
<p>whereas </p>
<pre><code>print(good)
print(bad)
</code></pre>
<p>yields</p>
<p><img src="https://i.stack.imgur.com/tD0Hk.png" alt="enter image description here"></p>
<p>(Btw, typing <code>reset</code> will reset the terminal.) </p>
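<p>If you do want to keep printing the raw text while stripping only the dangerous parts, here is a rough sketch. It assumes the problem bytes are ANSI/CSI escape sequences plus stray control characters; other escape forms (such as charset-designation sequences) lose their ESC byte in the second pass and print as harmless text:</p>

```python
import re

# ESC [ ... final-byte: the common CSI form of ANSI escape sequences
CSI_RE = re.compile(r'\x1b\[[0-9;?]*[ -/]*[@-~]')

def sanitize(packet):
    # drop whole CSI sequences first
    packet = CSI_RE.sub('', packet)
    # then drop remaining control characters, keeping tab/newline and all printables
    return ''.join(c for c in packet if c in '\t\n' or ord(c) >= 32)

print(sanitize('\x1b[41mRED sensor=23.5\x07'))  # prints: RED sensor=23.5
```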
|
python|bash|raspberry-pi|xbee
| 2 |
1,907,620 | 48,939,533 |
Invalid syntax error in code
|
<p>Trying to run this code for the first time and getting a syntax error on line 21. Not really sure where this is coming from since I haven't done much before that in the code.</p>
<pre><code>#defining the read method
def read():
    #opening data from first line
    pulsars = open("pulsars1.txt","r")
    #opening data from second line
    signals = open("signals1.txt","r")
    #creating a new empty list
    astro_list = []
    #reading pulsar line by line/turns all of this into string
    pulsar_data = pulsars.read()
    #reading signal data/turns all of this into string
    signal_data = signals.read()
    #appending pulsar values to list
    for all_pulsar_data in range(0,len(pulsar_data)):
        astro_list.append(pulsar_data)
    #appending signal data to list
    for all_signal_data in range(0,len(signal_data)):
        astro_list.append(signal_data)
    return(astro_list)

#defining the main function
def main():
    #displaying a descriptiong of what the program does
    purpose = "This program proccess data from the Purdue Pulsar Laboratory"
    underheading = "=" * len(purpose)
    print(purpose)
    print(underheading)
    print("It reads the data from 2 files containing the pulsar name and signal strength, \nthen combines them and displays the results.")
    #accepting inputs from the user about file names
    pulsar_name = input("\nPulsar name file: ")
    signal_strength = input("Signal strength: ")
    #reading values
    print("\nAnalyzing data from" , pulsar_name, "and", signal_strength, "files...")
    print(" ","Reading from" ,pulsar_name,"...")
    print(" ","Reading from" ,signal_strength,"...")
    print(" ","Combining values...")
    #displaying the top part of the table
    astro_list = read()
    count_head= "\n The combined BOOYA data includes", len(astro_list), "values."
    print(count_head)
    print("=" * len(count_head))
    print(astro_list)
</code></pre>
|
<p>If you open up a file like this in PyCharm it becomes very obvious where the mistakes are. Hopefully this helps!</p>
<pre><code># defining the read function
def read():
    #opening data from first line
    pulsars = open("pulsars1.txt","r")
    #opening data from second line
    signals = open("signals1.txt","r")
    #creating a new empty list
    astro_list = []
    #reading pulsar line by line/turns all of this into string
    pulsar_data = pulsars.read()
    #reading signal data/turns all of this into string
    signal_data = signals.read()
    #appending pulsar values to list
    for all_pulsar_data in range(0,len(pulsar_data)):
        astro_list.append(pulsar_data)
    #appending signal data to list
    for all_signal_data in range(0,len(signal_data)):
        astro_list.append(signal_data)
    return(astro_list)

#defining the main function
def main():
    astro_list = read()
    #displaying a descriptiong of what the program does
    purpose = "This program proccess data from the Purdue Pulsar Laboratory"
    underheading = "=" * len(purpose)
    print(purpose)
    print(underheading)
    print("It reads the data from 2 files containing the pulsar name and signal strength, \nthen combines them and displays the results.")
    #accepting inputs from the user about file names
    pulsar_name = input("\nPulsar name file: ")
    signal_strength = input("Signal strength: ")
    #reading values
    print("\nAnalyzing data from" , pulsar_name, "and", signal_strength, "files...")
    print(" ","Reading from" ,pulsar_name,"...")
    print(" ","Reading from" ,signal_strength,"...")
    print(" ","Combining values...")
    #displaying the top part of the table
    count_head= "\n The combined BOOYA data includes", len(astro_list), "values."
    print(count_head)
    print("=" * len(count_head))
    print(astro_list)

if __name__ == '__main__':
    main()
</code></pre>
|
python-3.x
| 0 |
1,907,621 | 6,009,928 |
How to read an uploaded file in Google App Engine
|
<p>Hi I have another question in app engine. You have one form where there's one field for file upload, I need to read the uploaded file, validate it and store it into the datastore.</p>
<p>My question is how can I read the uploaded file?</p>
<p>In django I would have used request.FILES, in GAE is there anything of that sort?</p>
<p>I'd appreciate any help on this</p>
|
<p>This question has been answered in
<a href="https://stackoverflow.com/questions/81451/upload-files-in-google-app-engine">Upload files in Google App Engine</a>.</p>
<p>The Google App Engine documents also explain it:
<a href="http://code.google.com/appengine/docs/images/usingimages.html#Uploading" rel="nofollow noreferrer">http://code.google.com/appengine/docs/images/usingimages.html#Uploading</a></p>
<p>In brief, you just need to use <code>self.request.get('name_of_file_in_the_input_form')</code>.</p>
|
python|django|google-app-engine
| 2 |
1,907,622 | 68,008,572 |
Python Deep Destruction Like JavaScript and assingment to new Object
|
<pre class="lang-js prettyprint-override"><code>let newJson = {};
({
"foo": {
"name": newJson.FirstName,
"height": newJson.RealHeight
}
} =
{
"foo": {
"name": "Felipe",
"height": "55"
}
});
console.log({newJson})
</code></pre>
<p>As we know, in JS the above code produces the following output:</p>
<pre class="lang-js prettyprint-override"><code>{newJson :{FirstName: "Felipe", RealHeight: "55"}}
</code></pre>
<p>I would like to know if there is a library or a way to do this in Python.</p>
|
<p>Searching for "destructured assignment in Python" yields results.</p>
<p>You can use the native "tuple unpacking" as defined in <a href="https://www.python.org/dev/peps/pep-0448/" rel="nofollow noreferrer">PEP 448</a> :</p>
<pre class="lang-py prettyprint-override"><code>json_data = {
"foo": {
"name": "Felipe",
"height": "55"
}
}
first_name, real_height = json_data["foo"]["name"], json_data["foo"]["height"]
print(first_name, real_height)
# Felipe 55
</code></pre>
<p>Or you can use something a bit closer which is based on the <a href="https://docs.python.org/3/reference/datamodel.html" rel="nofollow noreferrer">Python Data Model</a> (inspired from <a href="https://stackoverflow.com/a/52083390/11384184">this answer</a>) :</p>
<pre class="lang-py prettyprint-override"><code>from operator import itemgetter
json_data = {
"foo": {
"name": "Felipe",
"height": "55"
}
}
first_name, real_height = itemgetter("name", "height")(json_data["foo"])
print(first_name, real_height)
# Felipe 55
</code></pre>
|
javascript|python|object|destruction
| 1 |
1,907,623 | 30,404,885 |
Text Mining using tweepy
|
<p>I have collected tweets using the tweepy API, tokenized them, and removed the stopwords, but when I load them using json it throws the following error:</p>
<pre><code>"File "C:\Python27\Projects\kik.py", line 26, in <module>
tweet = json.loads(tokens)
File "C:\Python27\lib\json\__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "C:\Python27\lib\json\decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
TypeError: expected string or buffer"
</code></pre>
<p>Please help me out.</p>
<pre><code>tweets_data_path = 'c:\\Python27\\Projects\\newstweets.txt'
stopset = set(stopwords.words('english'))
tweets_data = []
tweets_file = open(tweets_data_path, "r")
text = tweets_file.read()
tokens=word_tokenize(str(text))
tokens = [w for w in tokens if not w in stopset]
tweet = json.loads(tokens)
tweets_data.append(tweet)
</code></pre>
|
<p><code>json.loads</code> expects a JSON string; you are passing it a list.</p>
<p>Instead of:</p>
<pre><code>tokens = [w for w in tokens if not w in stopset]
</code></pre>
<p>Try:</p>
<pre><code>tokens = json.dumps([w for w in tokens if not w in stopset])
</code></pre>
|
python|json|twitter|tweepy
| 1 |
1,907,624 | 66,822,916 |
Can't login to Django admin panel after successfully creating superuser using Django custom user model
|
<p>After I create superuser in command line, it says the superuser is created successfully, but when I try to log in it says "Please enter the correct email address and password for a staff account. Note that both fields may be case-sensitive." I tried to delete all migrations and database and try again but it did not help.</p>
<p><strong>Here is my model.py</strong></p>
<pre><code>from django.db import models
from django.contrib.auth.models import AbstractBaseUser, PermissionsMixin, BaseUserManager
from django.db.models.deletion import CASCADE
from django.utils import timezone
from django.utils.translation import gettext_lazy

# Create your models here.
class UserAccountManager(BaseUserManager):
    def create_user(self, Email, Password=None, **other_fields):
        if not Email:
            raise ValueError(gettext_lazy('You must provide email address'))
        email = self.normalize_email(Email)
        user = self.model(Email=email, **other_fields)
        user.set_password(Password)
        user.save(using=self._db)
        return user

    def create_superuser(self, Email, Password=None, **other_fields):
        other_fields.setdefault('is_staff', True)
        other_fields.setdefault('is_superuser', True)
        other_fields.setdefault('is_active', True)
        return self.create_user(Email=Email, Password=Password, **other_fields)

class Customer(AbstractBaseUser, PermissionsMixin):
    Email = models.EmailField(gettext_lazy('email address'), max_length=256, unique=True)
    Name = models.CharField(max_length=64, null=True)
    Surname = models.CharField(max_length=64, null=True)
    Birthday = models.DateField(auto_now=False, null=True, blank=True)
    PhoneNumber = models.CharField(max_length=16, unique=True, null=True, blank=True)
    Address = models.CharField(max_length=128, blank=True)
    RegistrationDate = models.DateTimeField(default=timezone.now, editable=False)
    is_staff = models.BooleanField(default=False)
    is_active = models.BooleanField(default=True)
    is_superuser = models.BooleanField(default=False)

    objects = UserAccountManager()

    USERNAME_FIELD = 'Email'
    REQUIRED_FIELDS = []

    def __str__(self):
        return self.Name + " " + self.Surname

    def has_perm(self, perm, obj=None):
        return self.is_superuser
</code></pre>
<p><strong>Here is my admin.py</strong></p>
<pre><code>from django import forms
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin as BaseUserAdmin
from django.contrib.auth.forms import ReadOnlyPasswordHashField
from django.core.exceptions import ValidationError
from .models import *

# Register your models here.
class UserCreationForm(forms.ModelForm):
    password1 = forms.CharField(label='Password', widget=forms.PasswordInput)
    password2 = forms.CharField(label='Password confirmation', widget=forms.PasswordInput)

    class Meta:
        model = Customer
        fields = ('Email', 'Name', 'Surname')

    def clean_password2(self):
        password1 = self.cleaned_data.get("password1")
        password2 = self.cleaned_data.get("password2")
        if password1 and password2 and password1 != password2:
            raise ValidationError("Passwords don't match")
        return password2

    def save(self, commit=True):
        # Save the provided password in hashed format
        user = super().save(commit=False)
        user.set_password(self.cleaned_data["password1"])
        if commit:
            user.save()
        return user

class UserChangeForm(forms.ModelForm):
    password = ReadOnlyPasswordHashField()

    class Meta:
        model = Customer
        fields = ('Email', 'Name', 'Surname', 'Birthday', 'PhoneNumber', 'Address', 'is_staff', 'is_active', 'is_superuser')

    def clean_password(self):
        return self.initial["password"]

class UserAdmin(BaseUserAdmin):
    form = UserChangeForm
    add_form = UserCreationForm
    list_display = ('Email', 'Name', 'Surname')
    list_filter = ('is_superuser', 'Email', 'Name', 'Birthday')
    fieldsets = (
        (None, {'fields': ('Email', 'password')}),
        ('Personal info', {'fields': ('Birthday', 'Name', 'Surname')}),
        ('Permissions', {'fields': ('is_superuser',)}),
    )
    add_fieldsets = ()
    search_fields = ('Email',)
    ordering = ('Email',)
    filter_horizontal = ()

admin.site.register(Customer, UserAdmin)
</code></pre>
</code></pre>
|
<p>In:</p>
<pre><code> def create_user(self, Email, Password = None, **other_fields):
</code></pre>
<p>Remove = None, leaving Password only.</p>
<p>And add password=password as below:</p>
<pre><code>user = self.model(email=self.normalize_email(email),
                  username=username, password=password)

user = self.create_user(email=self.normalize_email(email),
                        username=username, password=password)
</code></pre>
<p>Hope it fixes it. If you still cannot log in, run python manage.py createsuperuser and create a new superuser and try to log in with that one.</p>
|
python|django|authentication|django-models|django-admin
| 1 |
1,907,625 | 66,995,900 |
Scrape 1 More Field From Webpage
|
<p>My code goes into a webpage, and takes certain data from each row</p>
<p>I however want to also get the "topics" from each row. For example listed as "Presidential Session and Community Psychiatry" in row 1, above the "Speakers" text.</p>
<p>My code is currently able to scrape Titles and Chairs of each row (denoted as Role and Name) but not the topic?</p>
<pre><code>from selenium import webdriver
import time
from bs4 import BeautifulSoup
import pandas as pd

driver = webdriver.Chrome()
driver.get('https://s7.goeshow.com/apa/annual/2021/session_search.cfm?_ga=2.259773066.1015449088.1617295032-97934194.1617037074')
page_source = driver.page_source
soup = BeautifulSoup(page_source, 'html.parser')

tables = soup.select('#datatable')
for table in tables:
    for title in table.select('tr td.title'):
        print(title.text.strip())
        title_row = title.parent
        speaker_row = title_row.next_sibling
        for speaker in speaker_row.select('span.session-speaker'):
            role = speaker.select_one('span.session-speaker-role').text.strip()
            name = speaker.select_one('span.session-speaker-name').text.strip()
            topic = speaker.select_one('span.session-track-label').text.strip()
            print(role, name, topic)
        print()
</code></pre>
|
<pre><code>tables = soup.select('#datatable')
for table in tables:
    for title in table.select('tr td.title'):
        print(title.text.strip())
        title_row = title.parent
        speaker_row = title_row.next_sibling
        for topic in speaker_row.select('span.session-track-label'):
            print(topic.text.strip())
        for speaker in speaker_row.select('span.session-speaker'):
            role = speaker.select_one('span.session-speaker-role').text.strip()
            name = speaker.select_one('span.session-speaker-name').text.strip()
            print(role, name)
</code></pre>
<p>If you want all the topics prior to the names and roles you have to target them from the row and not the following sibling.</p>
|
python|selenium|web-scraping|beautifulsoup
| 1 |
1,907,626 | 67,082,993 |
Azure Data Factory run Databricks Python Wheel
|
<p>I have a python package (created in PyCharm) that I want to run on Azure Databricks. The python code runs with Databricks from the command line of my laptop in both Windows and Linux environments, so I feel like there are no code issues.</p>
<p>I've also successfully created a python wheel from the package, and am able to run the wheel from the command line locally.</p>
<p>Finally I've uploaded the wheel as a library to my Spark cluster, and created the Databricks Python object in Data Factory pointing to the wheel in dbfs.</p>
<p>When I try to run the Data Factory Pipeline, it fails with the error that it can't find the module that is the very first import statement of the main.py script. This module (GlobalVariables) is one of the other scripts in my package. It is also in the same folder as main.py; although I have other scripts in sub-folders as well. I've tried installing the package into the cluster head and still get the same error:</p>
<p><code>ModuleNotFoundError: No module named 'GlobalVariables'Tue Apr 13 21:02:40 2021 py4j imported</code></p>
<p>Has anyone managed to run a wheel distribution as a Databricks Python object successfully, and did you have to do any trickery to have the package find the rest of the contained files/modules?</p>
<p>Your help greatly appreciated!</p>
<p>Configuration screen grabs:</p>
<p><a href="https://i.stack.imgur.com/onsDm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/onsDm.png" alt="Confirm the cluster is working in ADF:" /></a></p>
<p><a href="https://i.stack.imgur.com/8b2HR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8b2HR.png" alt="Config after Appending the library " /></a></p>
|
<p>We run pipelines using egg packages but it should be similar to wheel. Here is a summary of the steps:</p>
<ol>
<li>Build the package with with <code>python setup.py bdist_egg</code></li>
<li>Place the egg/whl file and the <code>main.py</code> script into Databricks FileStore (dbfs)</li>
<li>In Azure DataFactory's Databricks Activity go to the Settings tab</li>
<li>In Python file, set the dbfs path to the python entrypoint file (<code>main.py</code> script).</li>
<li>In Append libraries section, select type egg/wheel set the dbfs path to the egg/whl file</li>
<li>Select pypi and set all the dependencies of your package. It is recommended to specify the versions used in development.</li>
</ol>
<p><a href="https://i.stack.imgur.com/E5Pto.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E5Pto.png" alt="Databricks Activity (Azure Data Factory)" /></a></p>
<p>Ensure the <code>GlobalVariables</code> module code is inside the egg. As you are working with wheels, try using them in step 5 (I have not tested this myself).</p>
|
python|pyspark|azure-data-factory-2|azure-databricks
| 1 |
1,907,627 | 67,014,961 |
How to fix "IndexError" on Selenium when trying to automate filling out google form text box? (I used inspect element to find class name of text-box)
|
<p>Essentially I am trying to automate/autofill a google form by using selenium and inspect element to find the class name. I am especially having trouble finding the solution to an "IndexError" problem when I try to input text into a "short answer" section of the google form. I am a beginner in Python so sorry if this may come across as a very low-level issue.</p>
<pre><code>from selenium import webdriver
option = webdriver.ChromeOptions()
option.add_argument("-incognito")
browser = webdriver.Chrome(executable_path="path to selenium")
option = webdriver.ChromeOptions()
option.add_argument("-incognito")
email = "my email address to sign into the google form"
browser = webdriver.Chrome(executable_path="path to selenium", options=option)
browser.get('url of google form')
sign_in = browser.find_elements_by_class_name("whsOnd zHQkBf")
sign_in[0].send_keys(email)
Next = browser.find_elements_by_class_name("VfPpkd-RLmnJb")
Next[0].click()
textboxes = browser.find_elements_by_class_name("quantumWizTextinputPaperinputInput exportInput")
textboxes[0].send_keys("name")
radio_buttons = browser.find_elements_by_class_name("freebirdFormviewerComponentsQuestionCheckboxRoot")
radio_buttons[1].click()
submit=browser.find_element_by_class_name("appsMaterialWizButtonPaperbuttonLabel quantumWizButtonPaperbuttonLabel exportLabel")
submit.click()
</code></pre>
|
<p>Are you sure <code>sign_in</code> actually has anything stored in it? It looks to me like those class names are auto-generated when the page loads, so it could be that there is no web element with the class name "whsOnd zHQkBf" (note also that <code>find_elements_by_class_name</code> does not accept compound class names containing spaces). It would be better to use XPath with a relative path to your short-answer input field. This way, if the general layout of the page changes, you can still find your web element, making the solution more robust.</p>
<p>Updated:</p>
<p>The following code is taken directly from the Selenium Python documentation on waits and can be used to fix the "NoSuchElement" exception or "element not interactable" errors in particular situations.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("http://somedomain/url_that_delays_loading")
try:
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "myDynamicElement"))
    )
finally:
    driver.quit()
</code></pre>
<p>You may use <code>By.XPATH</code> or any of the other locator strategies. If this code times out, then the element you are looking for does not exist.</p>
|
python|selenium-webdriver|google-forms|inspect-element
| 1 |
1,907,628 | 43,025,382 |
Error in Tensorflow Inception V3 geting while pool_3 layer output
|
<p>I am trying to get the pool_3 layer output of the TensorFlow Inception v3. My input is an ndarray of shape (64,64,3), but I get the following error:</p>
<pre><code>with tf.Session() as sess:
    pool_3_tensor = sess.graph.get_tensor_by_name('pool_3:0')
    feat1 = sess.run(pool_3_tensor, {'DecodeJpeg/contents:0': image})
    feat1 = np.squeeze(feat1)
</code></pre>
<hr>
<pre><code>ValueError                                Traceback (most recent call last)
<ipython-input-26-fb1865f7fbee> in <module>()
      3 pool_3_tensor = sess.graph.get_tensor_by_name('pool_3:0')
      4
----> 5 feat1 = sess.run(pool_3_tensor,{'DecodeJpeg/contents:0': image})
      6 feat1 = np.squeeze(feat1)

/N/u/mbirla/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    765   try:
    766     result = self._run(None, fetches, feed_dict, options_ptr,
--> 767                        run_metadata_ptr)
    768   if run_metadata:
    769     proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/N/u/mbirla/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
    942       'Cannot feed value of shape %r for Tensor %r, '
    943       'which has shape %r'
--> 944       % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
    945   if not self.graph.is_feedable(subfeed_t):
    946     raise ValueError('Tensor %s may not be fed.' % subfeed_t)

ValueError: Cannot feed value of shape (64, 64, 3) for Tensor 'DecodeJpeg/contents:0', which has shape '()'
</code></pre>
<p>-------Update----------</p>
<p>After converting it into a string, I am getting "Invalid JPEG data, size 12288".
For the detailed error: <a href="https://gist.github.com/mridulbirla/0d710d7ccd7b22c8f87989c37837e10e" rel="nofollow noreferrer">https://gist.github.com/mridulbirla/0d710d7ccd7b22c8f87989c37837e10e</a></p>
|
<p>change </p>
<pre><code>feat1 = sess.run(pool_3_tensor,{'DecodeJpeg/contents:0': image})
</code></pre>
<p>into</p>
<pre><code>feat1 = sess.run(pool_3_tensor,{'DecodeJpeg/contents:0': image.tostring()})
</code></pre>
<p>and try again.</p>
<p><code>'DecodeJpeg/contents:0'</code> is a scalar string tensor from which TensorFlow can decode a three-dimensional image tensor.</p>
<hr>
<p>When I did transfer learning with Inception-v3, I didn't read the image into an np.array; instead, I read the image content into a string with:</p>
<pre><code>with tf.gfile.FastGFile("your_image_file_name") as f:
    content = f.read()
</code></pre>
<p>then fed the <code>content</code> to <code>'DecodeJpeg/contents:0'</code>.</p>
|
python|python-3.x|tensorflow
| 0 |
1,907,629 | 42,965,689 |
Replacing a text with \n in it, with a real \n output
|
<p>I am trying to get a config from a juniper router and I have the following problem:</p>
<p>After setting this</p>
<pre><code>stdin, stdout, stderr = client1.exec_command('show configuration interfaces %s' % SID)
CONFIG = stdout.read()
print CONFIG
</code></pre>
<p>It brings me something like this:</p>
<pre><code>'description El_otro_Puerto_de_la_Routing-instance;\nvlan-id 309;\nfamily inet {\n mtu 1600;\n address 10.100.10.10/24;\n}\n'
</code></pre>
<p>and the problem is that I want to receive that information in this format:</p>
<pre><code>description El_otro_Puerto_de_la_Routing-instance;
vlan-id 309;
family inet {
    mtu 1600;
    address 10.100.10.10/24;
}
</code></pre>
<p>So I want the \n to actually be a new line, and not just to show me the "\n" string.</p>
|
<p>If you're running this in the Python interpreter, it is the regular behavior of the interpreter to show newlines as "\n" instead of actual newlines, because it makes it easier to debug the output. If you want to get actual newlines within the interpreter, you should <code>print</code> the string you get.</p>
<p>If this is what the program is outputting (i.e.: You're getting newline escape sequences from the external program), you should use the following:</p>
<pre><code>OUTPUT = stdout.read()
formatted_output = OUTPUT.replace('\\n', '\n').replace('\\t', '\t')
print formatted_output
</code></pre>
<p>This will replace escaped newlines by actual newlines in the output string.</p>
|
python|blank-line|juniper
| 32 |
1,907,630 | 66,527,661 |
How to convert time into specific format in python?
|
<p>I have a column of time in my pandas DataFrame containing more than 800,000 rows. The time format is something like this:</p>
<pre><code>08:28:31
08:28:35
08:28:44
08:28:44
</code></pre>
<p>I want to expand each time hour by hour: if the first time is 08:28:31, the following generated times should be 09:28:31, 10:28:31, and so on. How do we achieve this in Python using the datetime library?</p>
<p>output data:</p>
<pre><code>08:28:31
09:28:31
...
23:28:31
08:28:35
09:28:35
...
23:28:35
08:28:44
...
08:28:44
...
</code></pre>
|
<p>Use:</p>
<pre><code>#convert values to datetimes
df['date'] = pd.to_datetime(df['date'])
#count number of repeated values
df = df.loc[df.index.repeat(24 - df['date'].dt.hour)]
#generate hour timedeltas
hours = pd.to_timedelta(df.groupby(level=0).cumcount(), unit='H')
#add to dates and generate times with convert index to default values
s = df['date'].add(hours).dt.time.reset_index(drop=True)
print (s)
0 08:28:31
1 09:28:31
2 10:28:31
3 11:28:31
4 12:28:31
59 19:28:44
60 20:28:44
61 21:28:44
62 22:28:44
63 23:28:44
Length: 64, dtype: object
</code></pre>
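<p>If plain datetime (without pandas) is enough, the same hour-by-hour expansion can be sketched like this; <code>expand_hourly</code> is a made-up helper name:</p>

```python
from datetime import datetime

def expand_hourly(t):
    # yield t itself, then the same minute/second for every later hour up to 23
    start = datetime.strptime(t, '%H:%M:%S')
    for h in range(start.hour, 24):
        yield start.replace(hour=h).strftime('%H:%M:%S')

print(list(expand_hourly('08:28:31'))[:3])  # ['08:28:31', '09:28:31', '10:28:31']
```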
|
python|pandas|datetime|time|rows
| 1 |
1,907,631 | 65,629,747 |
Structure in coding
|
<p>I have some questions about structure in coding, and I hope that this is the right place to ask these questions/have this discussion. I am fairly new to Python and have understood that in order to have a good structure you should (among other things) separate calculations and user interaction; for example, you should have a separate print function.</p>
<p>I tried to think about this when I refined my code, so I created a function <code>print_and_save_stats</code> that both saves statistics from my program and presents them. Is this a good way to handle this, or should I have a separate function for saving stats and one for printing stats? For example <code>save_stats</code> and <code>print_stats</code>?</p>
<pre><code>def print_and_save_stats(games, h, k):
    player1wins = 0
    player2wins = 0
    ties = 0
    for game in range(games):
        result = scenario(h, k)
        if result == -1: ties += 1
        elif result == 1: player1wins += 1
        else: player2wins += 1

    print('Player 1 wins:', player1wins)
    print('Player 2 wins:', player2wins)
    print('Tie:', ties)

    # Fake dataset
    height = [player1wins, player2wins, ties]
    bars = ('Player 1', 'Player 2', 'Tie')
    y_pos = np.arange(len(bars))

    # Create bars and choose color
    plt.bar(y_pos, height, color=(0.5, 0.1, 0.5, 0.6))

    # Add title and axis names
    plt.title('My title')
    plt.xlabel('')
    plt.ylabel('')

    # Limits for the Y axis
    plt.ylim(0, games)

    # Create names
    plt.xticks(y_pos, bars)

    # Show graphic
    plt.show()
</code></pre>
</code></pre>
<p>I gladly accept all tips as I try to develop my knowledge in Python.
<strong>Edit</strong> I have now created two separate functions, but the values from <code>save_stats</code> don't reach <code>print_stats</code> and I can't see why. When I try to print, for example, player1wins inside save_stats, the correct information shows, so it should not be empty. The error I get is <code>TypeError: cannot unpack non-iterable NoneType object</code></p>
<pre><code>def save_stats(games, h, k):
player1wins=0
player2wins=0
ties=0
for game in range(games):
result=scenario(h, k)
if result==-1: ties+=1
elif result==1: player1wins+=1
else: player2wins+=1
return [player1wins, player2wins, ties] # for returning
def print_stats(games, h, k):
if k ==3 or k == 5 or k == 7:
if h == 1 or h == 2:
player1wins, player2wins, ties = save_stats(games, h, k)
# Fake dataset
height = [player1wins, player2wins, ties]
bars = ('Player 1', 'Player 2', 'Tie')
y_pos = np.arange(len(bars))
# Create bars and choose color
plt.bar(y_pos, height, color = (0.5,0.1,0.5,0.6))
# Limits for the Y axis
plt.ylim(0,games)
# Create names
plt.xticks(y_pos, bars)
# Show graphic
plt.show()
print('Player 1 wins:',player1wins)
print('Player 2 wins:',player2wins)
print('Tie:',ties)
else:
print('Playing surface does not exist, try again')
</code></pre>
|
<p>In a way, you have the answer already.
The name of your function already indicates that it is doing two different things. On the one hand it analyzes the game results and derives the stats, and on the other hand it prints the results. Separating code that does different actions usually supports readability for you and others.</p>
<p><strong>Edit</strong>
What you are missing here is that your variables in <code>save_stats</code> are local to that function's scope (<a href="https://www.w3schools.com/python/python_scope.asp" rel="nofollow noreferrer">further reading</a>).</p>
<p>You could experiment a little with a minimal example</p>
<pre class="lang-py prettyprint-override"><code>def first_scope():
x=1
def second_scope():
x = first_scope()
print(x)
# Be aware that using another variable name than "x" in second_scope will produce the same result
second_scope()
</code></pre>
<p>This example will print <code>None</code>. But we want <code>1</code>. To achieve that, you need to get the value of <code>x</code> out of <code>first_scope</code> by <code>return</code>ing its value (<a href="https://blog.tecladocode.com/python-30-day-13-return-statements/" rel="nofollow noreferrer">further reading</a>).</p>
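<p>A minimal fixed version of that experiment, with the value <code>return</code>ed so the outer scope can use it:</p>

```python
def first_scope():
    x = 1
    return x  # hand the value back to the caller

def second_scope():
    x = first_scope()  # x is now 1, not None
    print(x)

second_scope()  # prints 1
```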
<p>There are also some further improvements you can do to your code, e.g.:</p>
<ul>
<li>variable naming: <code>h,k</code> is hard to get</li>
<li><a href="https://stackoverflow.com/questions/47882/what-is-a-magic-number-and-why-is-it-bad">What is a magic number, and why is it bad?</a></li>
<li>maybe think about using a class or dictionary for your stats</li>
</ul>
|
python
| 1 |
1,907,632 | 65,730,113 |
Build a translator using python. I don't know why translation = translation + "G"
|
<pre><code>def translate(phrase):
translation = ""
for letter in phrase:
if letter.lower() in "aeiou":
if letter.isupper():
translation = translation + "G"
else:
translation = translation + "g"
else:
translation = translation + letter
return translation
</code></pre>
<p>I don't understand why <code>translation = translation + "G"</code>. Can someone help me?</p>
|
<p>The <code>translation = translation + "G"</code> adds 'G' to the existing string <code>translation</code></p>
<p>Eg. if <code>translation = "Hi"</code>
then <code>translation = translation + "G"</code> comes out to be "Hi"+"G" which is "HiG"</p>
<p>All I can do is describe what this code does in words. If you could tell me what output you wanted, I could help you more.</p>
<p>So what the code does is take a phrase/sentence and replace every vowel with a g (capital G if the vowel is capital, or small g if the vowel is small).</p>
<p>Eg. phrase="Welcome to Overflow"</p>
<p>Output -> translation="Wglcgmg tg Gvgrflgw"</p>
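<p>To see the whole thing in action, here is the same function with an example call (the phrase is just an illustration):</p>

```python
def translate(phrase):
    translation = ""
    for letter in phrase:
        if letter.lower() in "aeiou":
            # vowel found: append G/g, matching the original letter's case
            translation = translation + ("G" if letter.isupper() else "g")
        else:
            translation = translation + letter
    return translation

print(translate("Welcome to Overflow"))  # Wglcgmg tg Gvgrflgw
```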
|
python|python-3.x|raspberry-pi|pycharm
| 1 |
1,907,633 | 50,821,660 |
Data augmentation using python function with tf.Dataset API
|
<p>I'm looking to dynamically read images and apply data augmentation for my image segmentation problem. From what I've seen so far, the best way would be the <code>tf.Dataset</code> API with the <code>.map</code> function.</p>
<p>However, from the examples I've seen I think I'd have to adapt all my functions to tensorflow style (use <code>tf.cond</code> instead of <code>if</code>, etc..). The problem is that I have some really complex functions that I need to apply. Therefore I was considering using <code>tf.py_func</code> like this:</p>
<pre><code>import tensorflow as tf
img_path_list = [...] # List of paths to read
mask_path_list = [...] # List of paths to read
dataset = tf.data.Dataset.from_tensor_slices((img_path_list, mask_path_list))
def parse_function(img_path_list, mask_path_list):
'''load image and mask from paths'''
return img, mask
def data_augmentation(img, mask):
'''process data with complex logic'''
return aug_img, aug_mask
# py_func wrappers
def parse_function_wrapper(img_path_list, mask_path_list):
return tf.py_func(func=parse_function,
inp=(img_path_list, mask_path_list),
Tout=(tf.float32, tf.float32))
def data_augmentation_wrapper(img, mask):
return tf.py_func(func=data_augmentation,
inp=(img, mask),
Tout=(tf.float32, tf.float32))
# Maps py_funcs to dataset
dataset = dataset.map(parse_function_wrapper,
num_parallel_calls=4)
dataset = dataset.map(data_augmentation_wrapper,
num_parallel_calls=4)
dataset = dataset.batch(32)
iter = dataset.make_one_shot_iterator()
imgs, labels = iter.get_next()
</code></pre>
<p>However, from <a href="https://stackoverflow.com/a/48781036/6850241">this answer</a> it seems that using <code>py_func</code> for parallelism does not work. Is there any other alternative?</p>
|
<p>py_func is limited by the python GIL, so you won't get much parallelism there. Your best bet is to write your data augmentation in tensorflow proper (or to precompute it and serialize it to disk).</p>
<p>If you do want to write it in tensorflow you can try to use tf.contrib.autograph to convert simple python ifs and for loops into tf.conds and tf.while_loops, which might simplify your code quite a bit.</p>
|
python|tensorflow|deep-learning|dataset|tensorflow-datasets
| 1 |
1,907,634 | 3,611,961 |
How to fix broken relative links in offline webpages?
|
<p>I wrote a simple Python script to download a web page for offline viewing. The problem is that the relative links are broken. So the offline file "c:\temp\webpage.html" has a href="index.aspx" but when opened in a browser it resolves to "file:///C:/temp/index.aspx" instead of "<a href="http://myorginalwebsite.com/index.aspx" rel="nofollow noreferrer">http://myorginalwebsite.com/index.aspx</a>".</p>
<p>So I imagine that I would have to modify my script to fix each of the relative links so that it points to the original website. Is there an easier way? If not, anyone have some sample Python code that can do this? I'm a Python newbie so any pointers will be appreciated.</p>
<p>Thanks.</p>
|
<p>If you just want your relative links to refer to the website, just add a base tag in the head:</p>
<pre><code><base href="http://myoriginalwebsite.com/" />
</code></pre>
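<p>If you prefer to patch the tag in from the download script itself, plain string replacement is enough for simple pages (a rough sketch; <code>myoriginalwebsite.com</code> stands in for whatever site you downloaded):</p>

```python
def add_base_tag(html, base_url):
    # insert a <base> right after <head> so relative links resolve remotely
    tag = '<base href="{}/" />'.format(base_url.rstrip('/'))
    return html.replace('<head>', '<head>' + tag, 1)

page = '<html><head><title>t</title></head><body><a href="index.aspx">x</a></body></html>'
print(add_base_tag(page, 'http://myoriginalwebsite.com'))
```

<p>Pages whose head tag carries attributes would need a real HTML parser instead of a plain replace.</p>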
|
python|html|hyperlink|offline-browsing
| 5 |
1,907,635 | 35,175,018 |
replace matching words python
|
<p>I have this condition:</p>
<pre><code>if "Exit" in name:
replace_name = name.replace("Exit","`Exit`")
name = replace_name
</code></pre>
<p>and it should replace Exit with 'Exit', but if I have another word like exited, it also replaces it with 'Exit'. I only want it to replace the exact word "Exit", not exited. What is the best way to overcome this issue?</p>
<p>Thanks.</p>
|
<p>You can use a <a href="https://docs.python.org/3/library/re.html#re.sub" rel="nofollow">regular expression</a> with word boundary (<code>\b</code>) characters. Also, no need for the <code>if</code> check; if the word is not in the string, then nothing is replaced.</p>
<pre><code>>>> import re
>>> s = "he exited through the exit"
>>> re.sub(r"\bexit\b", "'exit'", s)
"he exited through the 'exit'"
</code></pre>
<p>You could also use flags to make the match case insensitive, or use a callback function for determining the replacement</p>
<pre><code>>>> s = "he exited through the Exit"
>>> re.sub(r"\b(exit)\b", lambda m: "'%s'"%m.group(1).upper(), s, flags=re.I)
"he exited through the 'EXIT'"
</code></pre>
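<p>Applied to the question's exact case of wrapping <code>Exit</code> in backticks, the word boundaries leave longer words untouched:</p>

```python
import re

name = "Exit now or be Exited"
name = re.sub(r"\bExit\b", "`Exit`", name)
print(name)  # `Exit` now or be Exited
```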
|
python|python-2.7
| 3 |
1,907,636 | 26,476,218 |
How to delete elements in a list based on another list?
|
<p>Suppose I have a list called icecream_flavours, and two lists called new_flavours and unavailable. I want to remove the elements in flavours that appear in 'unavailable', and add those in new_flavours to the original one. I wrote the following program: </p>
<pre><code>for i in unavailable:
icecream_flavours.remove(i)
for j in new_flavours:
icecream_flavours.append(j)
</code></pre>
<p>The append one is fine, but it keeps showing 'ValueError: list.remove(x): x not in list' for the first part of the program. What's the problem? </p>
<p>thanks</p>
|
<p>The <code>ValueError</code> happens because at least one item in <code>unavailable</code> is not present in <code>icecream_flavours</code>; <code>list.remove</code> raises in that case. Filtering with a list comprehension avoids this entirely: to add all the <code>new_flavours</code> that are not <code>unavailable</code>, use a list comprehension, then use the <code>+=</code> operator to add it to the existing flavors.</p>
<pre><code>icecream_flavours += [i for i in new_flavours if i not in unavailable]
</code></pre>
<p>If there are already flavors in the original list you want to remove, you can remove them in the same way</p>
<pre><code>icecream_flavours = [i for i in icecream_flavours if i not in unavailable]
</code></pre>
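<p>If the lists are large, it can also be worth converting <code>unavailable</code> to a set first, since membership tests against a set are much faster (the flavour names below are made up for illustration):</p>

```python
icecream_flavours = ['vanilla', 'mint', 'durian']
unavailable = ['durian']
new_flavours = ['mango', 'chocolate']

blocked = set(unavailable)  # O(1) membership tests
icecream_flavours = [f for f in icecream_flavours if f not in blocked]
icecream_flavours += [f for f in new_flavours if f not in blocked]
print(icecream_flavours)  # ['vanilla', 'mint', 'mango', 'chocolate']
```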
|
python|list|append
| 1 |
1,907,637 | 45,151,023 |
Convert ascii string to base64 without the "b" and quotation marks
|
<p>I wanted to convert an ASCII string (well, just text to be precise) to base64.
I know how to do that; I just use the following code:</p>
<pre><code>import base64
string = base64.b64encode(bytes("string", 'utf-8'))
print (string)
</code></pre>
<p>Which gives me</p>
<pre><code>b'c3RyaW5n'
</code></pre>
<p>However the problem is, I'd like it to just print</p>
<pre><code>c3RyaW5n
</code></pre>
<p>Is it possible to print the string without the "b" and the '' quotation marks?
Thanks!</p>
|
<p>The <code>b</code> prefix denotes that it is a <strong>binary string</strong>. A binary string is <em>not</em> a string: it is a sequence of bytes (values in the 0 to 255 range). It is simply typeset like a string to make it more compact.</p>
<p>In case of base64 however, all characters are valid ASCII characters, you can thus simply decode it like:</p>
<pre><code>print(string.decode('ascii'))
</code></pre>
<p>So here we will decode each byte to its ASCII equivalent. Since base64 guarantees that every byte it produces is a printable ASCII character (letters, digits, <code>'+'</code>, <code>'/'</code> and the <code>'='</code> padding), we will always produce a valid string. Mind however that this is <em>not</em> guaranteed for an arbitrary binary string.</p>
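<p>Putting it all together, the full round trip looks like this:</p>

```python
import base64

encoded = base64.b64encode(bytes("string", "utf-8"))  # b'c3RyaW5n' (bytes)
text = encoded.decode("ascii")                        # plain str, no b''
print(text)  # c3RyaW5n
```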
|
python|python-3.x|base64
| 15 |
1,907,638 | 58,094,337 |
Python3: Print request's IP from flask_restful's get method
|
<p>Here is my get method of my <code>Resource</code> class:</p>
<pre class="lang-py prettyprint-override"><code>from flask_restful import Resource
import subprocess
CMD = ['some', 'cmd' ]
class MyResource(Resource):
def get(self):
try:
completed_process = subprocess.run(CMD, check=True,
capture_output=True)
....
</code></pre>
<p>At which point in the above code can I retrieve (and print) the IP of the incoming <code>XGET</code> request?</p>
<p>I am aware of <a href="https://stackoverflow.com/questions/3759981/get-ip-address-of-visitors-using-flask-for-python">this</a> answer but it does not indicate how to go about it with <code>flask_restful</code> and the <code>Resource</code> class.</p>
|
<p>Here is a full example returning the remote IP. All you need to do is import <code>request</code> and access the <code>remote_addr</code> attribute.</p>
<p>The <code>request</code> object is local to the current request. See <a href="https://flask.palletsprojects.com/en/1.1.x/reqcontext/" rel="nofollow noreferrer">The Request Context</a> for more information.</p>
<pre><code>from flask import Flask, request
from flask_restful import Api, Resource
app = Flask(__name__)
api = Api(app)
class HelloWorld(Resource):
def get(self):
return {'hello': 'world', 'remote-ip': request.remote_addr}
api.add_resource(HelloWorld, '/')
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
|
python|python-3.x|flask|flask-restful
| 1 |
1,907,639 | 69,608,955 |
aws_cdk python error: Unzipped size must be smaller than 262144000 bytes
|
<p>I use CDK to deploy a lambda function that uses several python modules.
But I got the following error during deployment.</p>
<pre><code>Unzipped size must be smaller than 262144000 bytes (Service: AWSLambdaInte
rnal; Status Code: 400; Error Code: InvalidParameterValueException;
</code></pre>
<p>I have searched following other questions, related to this issue.</p>
<p><a href="https://stackoverflow.com/questions/45342990/aws-lambda-error-unzipped-size-must-be-smaller-than-262144000-bytes">question1</a>
<a href="https://stackoverflow.com/questions/59931761/unzipped-size-must-be-smaller-than-262144000-bytes-aws-lambda-error">question2</a></p>
<p>But they focus on serverless.yaml and don't solve my problem.
Is there any way around this problem?</p>
<p>Here is my app.py for CDK.</p>
<pre><code>from aws_cdk import (
aws_events as events,
aws_lambda as lam,
core,
)
class MyStack(core.Stack):
def __init__(self, app: core.App, id: str) -> None:
super().__init__(app, id)
layer = lam.LayerVersion(
self, "MyLayer",
code=lam.AssetCode.from_asset('./lib'),
);
makeQFn = lam.Function(
self, "Singleton",
function_name='makeQ',
code=lam.AssetCode.from_asset('./code'),
handler="makeQ.main",
timeout=core.Duration.seconds(300),
layers=[layer],
runtime=lam.Runtime.PYTHON_3_7,
)
app = core.App()
MyStack(app, "MS")
app.synth()
</code></pre>
<p>In ./lib directory, I put python modules like,</p>
<pre><code>python -m pip install numpy -t lib/python
</code></pre>
|
<p>Thanks a lot.</p>
<p>In my case, the issue was solved just by removing all <code>__pycache__</code> directories from the local modules before deployment.</p>
<p>I hope the situation will improve so that we only have to upload requirements.txt instead of preparing all the modules locally.</p>
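<p>A small helper you can run over the asset directory right before <code>cdk deploy</code> (a sketch; point it at wherever your modules live, e.g. <code>./lib</code>):</p>

```python
import pathlib
import shutil

def strip_pycache(root):
    """Delete every __pycache__ directory (and stray .pyc file) under root."""
    root = pathlib.Path(root)
    for cache_dir in list(root.rglob('__pycache__')):
        shutil.rmtree(cache_dir, ignore_errors=True)
    for pyc in list(root.rglob('*.pyc')):
        pyc.unlink()
```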
|
python|aws-lambda|aws-cdk
| 0 |
1,907,640 | 57,507,193 |
Selenium driver: find elements by xpath; how do i parse a level 2 table (i.e. a table within a table)
|
<p>I asked a question that got me to this point <a href="https://stackoverflow.com/questions/57498933/selenium-driver-find-elements-by-xpath-why-is-there-no-data-returned/57499464#57499464">here</a>; since this is a specific, different question I have it separate, but let me know if this isn't the right place.</p>
<p>I have this script:</p>
<pre><code>from selenium import webdriver
from bs4 import BeautifulSoup
import os
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
options = Options()
options.binary_location=r'C:\Program Files (x86)\Google\Chrome\Application\chrome.exe'
options.add_experimental_option('excludeSwitches', ['enable-logging'])
driver = webdriver.Chrome(options=options,executable_path='/mnt/c/Users/kela/Desktop/selenium/chromedriver.exe')
#get the url
driver.get('http://147.8.185.62/services/NutriChem-2.0/')
#find the food name
element = driver.find_element_by_id("input_food_name")
element.send_keys("22663")
#click food-disease association
element = Select(driver.find_element_by_css_selector('[name=food_search_section]'))
element.select_by_value('food_disease')
#click submit and click plant-disease associations
driver.find_element_by_css_selector('[value="Submit"]').click()
driver.switch_to.frame(driver.find_element_by_css_selector('frame'))
driver.find_element_by_css_selector('[onclick*="plant-disease"]').click()
#to click into each drop down table rows
driver.switch_to_default_content()
driver.switch_to.frame(driver.find_element_by_name('mainFrame'))
driver.switch_to.frame(driver.find_element_by_name('ListWeb'))
</code></pre>
<p>This gets me to the page I want to scrape <a href="https://i.stack.imgur.com/evxml.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/evxml.png" alt=""></a>:</p>
<p>The next stage, for each of the grey boxes, I want to pull out (1) the PMID ID, (2) Plant, (3) direction (signified by whether the image is up_arrow.png or down_arrow.png, so just printing the image name is fine) and (4) The disease.</p>
<p>As you can see from my previous question, I am very new to selenium, and thought once I got to this stage, I would just loop through the table rows and print these with beautifulSoup. The short version of my issue is I just cannot get this to work.</p>
<p>Things I have tried:</p>
<p>Attempt 1:</p>
<pre><code>rows = driver.find_elements_by_xpath("//table[@class='Level1Table']//tr[contains(@name,'hList')]")
test_row = rows[0]
print(test_row.text)
</code></pre>
<p>The above code will print 'Pomegranate Osteoartritis 3'; but then I can't work out how to loop within this (I just get empty data).</p>
<p>Attempt 2:
Then I tried to loop through each r in rows, but that still only gives me the level 1 data. (i.e. just prints multiple lines of attempt 1).</p>
<p>Attempt 3:</p>
<pre><code>rows = Select(driver.find_elements_by_xpath("//table[@class='Level2Table']//tr[contains(@name,'hList')]"))
print(rows)
</code></pre>
<p>Above, I was wondering why I can't just run the same as attempt 1, but looping through the level 2 tables instead of level 1. This output is empty. I'm not sure why this doesn't work; I can see from inspecting the page that the level2table is there. </p>
<p>Attempt 4:
This was the way I was originally thinking of doing it, but it doesn't work:</p>
<pre><code>for row in rows.findAll('tr'):
food_source = row.find_all('td')[1].text
pmid = row.find_all('td')[0].text
disease = row.find_all('td')[3].text
#haven't figured out how to get the association direction yet
print(food_source + '\t' + pmid + '\t' + disease + '\t' + association)
</code></pre>
<p>This is my first selenium script, so at this point I'm just out of my depth. Could someone please show me how to loop through the level 2 tables within the level 1 table and extract the required info (reference, plant, direction and disease).</p>
<p>Edit 1: Based on Guy's suggestion below, this is the full script:</p>
<pre><code>from selenium import webdriver
from bs4 import BeautifulSoup
import os
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
import pandas as pd
options = Options()
options.binary_location=r'C:\Program Files (x86)\Google\Chrome\Application\chrome.exe'
options.add_experimental_option('excludeSwitches', ['enable-logging'])
driver = webdriver.Chrome(options=options,executable_path='/mnt/c/Users/kela/Desktop/selenium/chromedriver.exe')
#get the url
driver.get('http://147.8.185.62/services/NutriChem-2.0/')
#find the food name
element = driver.find_element_by_id("input_food_name")
element.send_keys("22663")
#click food-disease association
element = Select(driver.find_element_by_css_selector('[name=food_search_section]'))
element.select_by_value('food_disease')
#click submit and click plant-disease associations
driver.find_element_by_css_selector('[value="Submit"]').click()
driver.switch_to.frame(driver.find_element_by_css_selector('frame'))
driver.find_element_by_css_selector('[onclick*="plant-disease"]').click()
#to click into each drop down table rows
driver.switch_to_default_content()
driver.switch_to.frame(driver.find_element_by_name('mainFrame'))
#driver.switch_to.frame(driver.find_element_by_name('ListWeb'))
#rows = driver.find_elements_by_xpath("//table[@class='Level1Table']//tr[contains(@name,'hList')]")
#test_row = rows[0]
driver.switch_to.frame('ListWeb') # no need for find_element, name or id are sufficient
rows = driver.find_elements_by_css_selector('[id^="ListTAXID"] [name^="Item"]')
for row in rows:
row_data = row.find_elements_by_xpath('.//td')
pmid = row_data[0].text
plant = row_data[1].text
direction = row_data[2].get_attribute('src')
disease = row_data[3].text
print(str(pmid) + '\t' + str(plant) + '\t' + str(direction) + '\t' + str(disease))
</code></pre>
<p>That leads to this output:</p>
<pre><code> None
None
None
None
None
None
None
None
None
None
None
None
</code></pre>
|
<p>The inner table is not part of the header row (with <code>'Pomegranate Osteoartritis 3'</code> text), but inside a sibling row that is not visible.</p>
<p>Those rows have an <code>id</code> attribute that starts with <code>ListTAXID</code>, which can help identify them, and the data you are looking for is in descendant elements with a <code>name</code> attribute that starts with <code>Item</code>.</p>
<p>The text will be available only if the table is open. You can click on all the header rows before collecting the data, or use <code>get_attribute('innerText')</code> instead of <code>text</code>; the latter will get the data even if the table is still closed.</p>
<pre><code>driver.switch_to.frame('ListWeb') # no need for find_element, name or id are sufficient
rows = driver.find_elements_by_css_selector('[id^="ListTAXID"] [name^="Item"]')
for row in rows:
row_data = row.find_elements_by_xpath('.//td')
pmid = row_data[0].get_attribute('innerText')
plant = row_data[1].get_attribute('innerText')
direction = 'up_arrow' if 'up_arrow' in row_data[2].find_element_by_xpath('.//img').get_attribute('src') else 'down_arrow'
disease = row_data[3].get_attribute('innerText')
</code></pre>
<p>As a side note, you should maximize your window <code>driver.maximize_window()</code></p>
|
python|selenium
| 1 |
1,907,641 | 59,202,250 |
pd.read_html importing a long string rather than a table
|
<p>I used pd.read_html to try and import a table, but I'm getting a long string instead when I run it. Is there a simple way to change the format of the result to get one word per row rather than a long string, or should I be using a function other than pd.read_html? Thank you!</p>
<p>here is my code:</p>
<pre><code>import requests
import pandas as pd
url ='http://www.linfo.org/acronym_list.html'
dfs = pd.read_html(url, header =0)
df = pd.concat(dfs)
df
</code></pre>
<p>i also used this and got the same result:</p>
<pre><code>import pandas as pd
url ='http://www.linfo.org/acronym_list.html'
data = pd.read_html(url, header=0)
data[0]
</code></pre>
<p>Out[1]:</p>
<p>ABCDEFGHIJKLMNOPQRSTUVWXYZ A AMD Advanced Micro Devices API application programming interface ARP address resolution protocol ARPANET Advanced Research Projects Agency Network AS autonomous system ASCII American Standard Code for Information Interchange AT&T American Telephone and Telegraph Company ATA advanced technology attachment ATM asynchronous transfer mode B B byte BELUG Bellevue Linux Users Group BGP border gateway protocol...</p>
|
<p>The problem is how the table was created in this site. </p>
<p>According to <a href="https://www.w3schools.com/html/html_tables.asp" rel="nofollow noreferrer">https://www.w3schools.com/html/html_tables.asp</a>, an HTML table is defined with the < table > tag. Each table row is defined with the < tr > tag. A table header is defined with the < th > tag. By default, table headings are bold and centered. A table data/cell is defined with the < td > tag.</p>
<p>If you press CTRL+SHIFT+I, you can inspect the html elements of your site and you will see that this site does not follow this standard. That is why you are not getting the correct dataframe using pandas.read_html.</p>
|
python|pandas|dataframe|import
| 0 |
1,907,642 | 59,205,870 |
multiplying and combining all elements in a list by all others but without duplication/repetition
|
<pre><code>we = [1,2,3,4]
for i in we:
for e in we:
print('('+str(i)+' - '+str(e)+')')
#del we[0]
</code></pre>
<p>This is the result:</p>
<pre><code>(1 - 1)
(1 - 2)
(1 - 3)
(1 - 4)
(2 - 1)
(2 - 2)
(2 - 3)
(2 - 4)
(3 - 1)
(3 - 2)
(3 - 3)
(3 - 4)
(4 - 1)
(4 - 2)
(4 - 3)
(4 - 4)
</code></pre>
<p>But I do not want the same elements to repeat: since I have <code>(1 - 3)</code>, I do not want <code>(3 - 1)</code> to show, and so on.
I also need to use for loops for this.</p>
|
<p>You can do this in a single line using <code>list comprehension</code> and <code>itertools.combinations</code>.</p>
<pre class="lang-py prettyprint-override"><code>from itertools import combinations
['({} - {})'.format(a, b) for a, b in combinations([1, 2, 3, 4], 2)]
</code></pre>
<p><strong>Output</strong>: </p>
<pre><code>['(1 - 2)', '(1 - 3)', '(1 - 4)', '(2 - 3)', '(2 - 4)', '(3 - 4)']
</code></pre>
<h3>References:</h3>
<ol>
<li><a href="https://stackoverflow.com/questions/50754724/python-how-to-find-all-combinations-of-two-numbers-in-a-list">Python - how to find all combinations of two numbers in a list</a></li>
<li><a href="https://stackoverflow.com/questions/22199099/how-to-get-combination-of-element-from-a-python-list">How to get combination of element from a python list?</a></li>
</ol>
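<p>Since the question also asks for plain for loops, the same result can be produced without <code>itertools</code> by starting the inner loop one index past the outer one:</p>

```python
we = [1, 2, 3, 4]
pairs = []
for i in range(len(we)):
    for j in range(i + 1, len(we)):  # j > i, so (a - b) never repeats as (b - a)
        pairs.append('({} - {})'.format(we[i], we[j]))
print(pairs)  # ['(1 - 2)', '(1 - 3)', '(1 - 4)', '(2 - 3)', '(2 - 4)', '(3 - 4)']
```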
|
python
| 0 |
1,907,643 | 54,034,844 |
Exponential to Float (Getting ALL decimal numbers of Float)
|
<p>I need to get ALL possible decimal digits, but I have a limit of 19 characters on the spinbox. Setting a static number of decimal places (setDecimals()) is not acceptable since the value I am getting is dynamic.</p>
<p>For example, I want to convert Hertz:</p>
<p>Hertz: 1.00
Megahertz: 1e-6
GigaHertz: 1e-9</p>
<p>I want it to be in this format:
1.00,
0.000006,
0.000000009,</p>
<p>Yes! Decimal places are dynamic.</p>
<p>This is the code for conversion (Value of dictionary is not yet finished):</p>
<pre><code>conversion_formula = {'Hz': 1, 'kHz': 1e-3, 'MHz': 1e-6, 'GHz': 1e-9,
                      's': 1, 'ms': 1e-3, 'us': 1e-6, 'ns': 1e-9,
                      'V': 1, 'mV': 1e-3}
</code></pre>
<pre><code>if FREQUENCY_UNIT_NAME == title or AMPLITUDE_UNIT_NAME == title:
output_value = input_value * (conversion_formula[current_unit] / conversion_formula[base_unit])
elif title == TIME_UNIT_NAME:
output_value = input_value * (conversion_formula[base_unit] / conversion_formula[current_unit])
</code></pre>
|
<p>The Python package <a href="https://bitbucket.org/william_rusnack/to-precision" rel="nofollow noreferrer"><code>to-precision</code></a> does the job, as per <a href="https://stackoverflow.com/a/44134235/4791226">this answer</a>.</p>
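<p>If pulling in a package is not an option, the standard <code>decimal</code> module can also render exponent notation as plain digits with a dynamic number of decimal places (a minimal sketch):</p>

```python
from decimal import Decimal

def plain(value):
    # 'f' forces fixed-point output with exactly as many digits as needed
    return format(Decimal(str(value)), 'f')

print(plain(1e-6))  # 0.000001
print(plain(1e-9))  # 0.000000001
```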
|
python
| 1 |
1,907,644 | 58,307,864 |
The TextBackend supports only expressions over a 1D range: Implicit plotting in Sympy
|
<p>On doing the following, </p>
<pre><code>from sympy import *
x, y = symbols('x y')
p1 = plot_implicit((Eq(x**2 + y**2, 5)))
</code></pre>
<p>I get the following traceback:</p>
<pre><code>Traceback (most recent call last):
File "test.py", line 3, in <module>
p1 = plot_implicit((Eq(x**2 + y**2, 5)))
File "/home/tinkidinki/.local/lib/python3.6/site-packages/sympy/plotting/plot_implicit.py", line 377, in plot_implicit
p.show()
File "/home/tinkidinki/.local/lib/python3.6/site-packages/sympy/plotting/plot.py", line 187, in show
self._backend.show()
File "/home/tinkidinki/.local/lib/python3.6/site-packages/sympy/plotting/plot.py", line 1101, in show
'The TextBackend supports only expressions over a 1D range')
ValueError: The TextBackend supports only expressions over a 1D range
</code></pre>
<p>It doesn't seem to get affected by making it a one-variable expression. How do you plot implicitly in Sympy?</p>
|
<p>If you install matplotlib it will use that for plotting instead of TextBackend. I ran <code>pip install matplotlib</code> and when I tried your expression/command it worked.</p>
|
python|plot|expression|sympy
| 8 |
1,907,645 | 58,460,139 |
Why is subtraction faster when doing arithmetic with a Numpy array and a int compared to using vectorization with two Numpy arrays?
|
<p>I am confused as to why this code:</p>
<pre><code>start = time.time()
for i in range(1000000):
_ = 1 - np.log(X)
print(time.time()-start)
</code></pre>
<p>Executes faster than this implementation:</p>
<pre><code>start = time.time()
for i in range(1000000):
_ = np.subtract(np.ones_like(X), np.log(X))
print(time.time()-start)
</code></pre>
<p>My understanding was that it should be the opposite, as in the second implementation I'm utilizing the speed-up provided by vectorization, since it's able to operate on the elements of X simultaneously rather than sequentially, which is how I assumed the first implementation works.</p>
<p>Can someone shed some light on this for me, as I am genuinely confused? Thank you!</p>
|
<p>Both versions of your code are equally vectorized. The array you created to try to vectorize the second version is just overhead.</p>
<hr>
<p>NumPy vectorization doesn't refer to hardware vectorization. If the compiler is smart enough, it might end up using hardware vectorization, but NumPy doesn't explicitly use AVX or anything.</p>
<p>NumPy vectorization refers to writing Python-level code that operates on entire arrays at once, not using hardware instructions that operate on multiple operands at once. It's vectorization at the Python level, not at the machine language level. The benefit of this over writing explicit loops is that NumPy can perform the work in C-level loops instead of Python, avoiding a massive amount of dynamic dispatch, boxing, unboxing, trips through the bytecode evaluation loop, etc.</p>
<p>Both versions of your code are vectorized in that sense, but the second one wastes a bunch of memory and memory bandwidth on writing and reading a giant array of ones.</p>
<p>Also, even if we were talking about hardware-level vectorization, the <code>1 -</code> version would be just as amenable to hardware-level vectorization as the other version. You would just load the scalar <code>1</code> into all positions of a vector register and proceed as normal. It would involve far less transfers to and from memory than the second version, thus still probably running faster than the second version.</p>
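<p>The cost of that throwaway array has an analogue even in plain Python: building a list of ones first adds work and memory for no benefit (timings below are illustrative only):</p>

```python
import timeit

xs = list(range(1000))

direct = timeit.timeit(lambda: [1 - x for x in xs], number=1000)
with_ones = timeit.timeit(
    lambda: [o - x for o, x in zip([1] * len(xs), xs)], number=1000)

print(direct, with_ones)  # the second pays for an extra 1000-element list
```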
|
python|arrays|numpy|matrix|linear-algebra
| 7 |
1,907,646 | 58,574,031 |
Understanding inheritance in Django
|
<p>I am trying to understand some basics in django inheritance - I'm sure it is something trivial, but I just can't get it.</p>
<p>I've got my CartItemForm(forms.ModelForm) and I override the <code>__init__</code> method to get the user from the request, like that:</p>
<pre><code>def __init__(self, *args, **kwargs):
self.request = kwargs.pop('request', None)
super().__init__(*args, **kwargs)
</code></pre>
<p>And it works, but I don't really get why it doesn't work when I call the inherited <code>__init__</code> method first:</p>
<pre><code>def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.request = kwargs.pop('request', None)
</code></pre>
<blockquote>
<p><code>__init__()</code> got an unexpected keyword argument 'request'</p>
</blockquote>
<p>What am I missing here?</p>
|
<p>It doesn't work because the <a href="https://github.com/django/django/blob/master/django/forms/models.py#L280" rel="nofollow noreferrer">base class</a> uses an explicit list of keyword args, and <code>request</code> isn't one of them</p>
<pre><code>def __init__(self, data=None, files=None, auto_id='id_%s', prefix=None,
initial=None, error_class=ErrorList, label_suffix=None,
empty_permitted=False, instance=None, use_required_attribute=None,
renderer=None):
</code></pre>
<p>For completeness, it works beforehand because you're <code>pop</code>-ing the request keyword out of the keyword dictionary, so it no longer exists when you're calling super.</p>
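<p>The same mechanism can be reproduced without Django at all; here <code>Base</code> is a hypothetical stand-in for <code>ModelForm</code> and its explicit keyword list:</p>

```python
class Base:
    def __init__(self, data=None):  # explicit keywords only, like ModelForm
        self.data = data

class Child(Base):
    def __init__(self, *args, **kwargs):
        # pop 'request' BEFORE calling super(), so Base never sees it
        self.request = kwargs.pop('request', None)
        super().__init__(*args, **kwargs)

c = Child(data=1, request='r')
print(c.request)  # r

try:
    Base(data=1, request='r')  # what super() receives if you pop too late
except TypeError as e:
    print(e)  # the TypeError names the unexpected 'request' keyword
```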
|
python|django
| 3 |
1,907,647 | 45,717,464 |
How to append a variable from another function python3
|
<p>I want to append the date2 from this function into </p>
<pre><code>def date_register():
print("Enter date of registration")
year = int(input("Enter a year: "))
month = int(input("Enter a month: "))
day = int(input("Enter a day: "))
date1 = datetime.date(year,month,day)
date2 = date1 + timedelta(days = 140)
print("Check out date:",date2)
</code></pre>
<p>this function and it came out date2 is not defined</p>
<pre><code>def update_A(row): #to update the roomA
if len(roomA[row]) < 2: #if roomA is less than 2
name = input("Enter your name here: ")
print(date_register())
roomA[row].append((name,date2))
print("Your room no. is {} at row {}".format(roomA[row].index((name,date2))+1,row))
print(Continue())
</code></pre>
<p>Seeking for help thank you </p>
|
<p><code>date2</code> is not defined because it is not within the scope of <code>update_A</code>
Please read <a href="http://python-textbok.readthedocs.io/en/1.0/Variables_and_Scope.html" rel="nofollow noreferrer">here</a> for more information on scope.</p>
<p>You also seem to be confusing <code>return</code> and <code>print</code></p>
<p>In <code>update_A</code>, you write <code>print(date_register())</code> but <code>date_register</code> doesn't return anything to be printed.</p>
<p><code>print</code> sends string representations to the console and can't be used for assignment. Instead use <code>return</code>, which makes the function call evaluate to the value that follows the <code>return</code> statement.
For example:</p>
<pre><code>def foo():
return "bar"
print(foo())
</code></pre>
<p>when <code>foo</code> is called, it will resolve to <code>"bar"</code> which is then printed to the console. for more on the difference and usage of <code>print()</code> and <code>return</code> see <a href="https://stackoverflow.com/questions/7129285/why-would-you-use-the-return-statement-in-python">here</a></p>
<p>To use <code>date2</code> in <code>update_A</code> you should return it and assign it as follows:</p>
<pre><code>def date_register():
print("Enter date of registration")
year = int(input("Enter a year: "))
month = int(input("Enter a month: "))
day = int(input("Enter a day: "))
date1 = datetime.date(year,month,day)
date2 = date1 + timedelta(days = 140)
print("Check out date:",date2)
return date2
def update_A(row): #to update the roomA
if len(roomA[row]) < 2: #if roomA is less than 2
name = input("Enter your name here: ")
date2 = date_register() #assign date2 returned value
print(date2)
roomA[row].append((name,date2))
print("Your room no. is {} at row {}".format(roomA[row].index((name,date2))+1,row))
print(Continue())
</code></pre>
|
python-3.x|append
| 2 |
1,907,648 | 45,434,811 |
Calculate time difference of a Datetime object and get output as floatvalues?
|
<p>I have a dataframe column of Date and time:</p>
<pre><code>0 2017-06-24 08:37:00
1 2017-06-24 08:40:00
2 2017-06-24 08:42:01
3 2017-06-24 08:44:01
4 2017-06-24 08:46:00
5 2017-06-24 08:48:00
6 2017-06-24 08:50:01
7 2017-06-24 08:52:01
8 2017-06-24 08:54:01
9 2017-06-24 08:56:00
10 2017-06-24 08:58:01
11 2017-06-24 09:00:01
12 2017-06-24 09:04:01
13 2017-06-24 09:06:01
Name: Datetime, dtype: datetime64[ns]
</code></pre>
<p>I want the time difference of two timestamp such as:</p>
<pre><code>2017-06-24 08:40:00 - 2017-06-24 08:37:00 = 3.0
2017-06-24 08:42:01 - 2017-06-24 08:40:00 = 2.1
</code></pre>
<p>I tried this code:</p>
<pre><code>for z in range(len(df)):
abc = (df["Datetime"].iat[z+1] - df["Datetime"].iat[z])
</code></pre>
<p>I am getting the output as this with an error:</p>
<pre><code>0 days 00:03:00
0 days 00:02:01
0 days 00:02:00
0 days 00:01:59
0 days 00:02:00
0 days 00:02:01
0 days 00:02:00
0 days 00:02:00
0 days 00:01:59
0 days 00:02:01
IndexError: index 14 is out of bounds for axis 0 with size 14
</code></pre>
<p>expected output:</p>
<p>3.0</p>
<p>2.1</p>
<p>2.0</p>
<p>1.59</p>
<p>Any help would be appreciated.</p>
|
<p>Your loop fails because at <code>z = 13</code> it accesses <code>iat[14]</code>, one past the last row (<code>range(len(df) - 1)</code> would fix that), but you don't need the loop at all. Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.diff.html" rel="nofollow noreferrer"><code>diff</code></a> - the output is <code>timedelta</code>s, so convert with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.total_seconds.html" rel="nofollow noreferrer"><code>total_seconds</code></a> and, if you need the output in minutes, divide by <code>60</code>:</p>
<pre><code>df['diff'] = df['Datetime'].diff().dt.total_seconds().div(60)
print (df)
Datetime diff
0 2017-06-24 08:37:00 NaN
1 2017-06-24 08:40:00 3.000000
2 2017-06-24 08:42:01 2.016667
3 2017-06-24 08:44:01 2.000000
4 2017-06-24 08:46:00 1.983333
5 2017-06-24 08:48:00 2.000000
6 2017-06-24 08:50:01 2.016667
7 2017-06-24 08:52:01 2.000000
8 2017-06-24 08:54:01 2.000000
9 2017-06-24 08:56:00 1.983333
10 2017-06-24 08:58:01 2.016667
11 2017-06-24 09:00:01 2.000000
12 2017-06-24 09:04:01 4.000000
13 2017-06-24 09:06:01 2.000000
</code></pre>
|
python|pandas|datetime|dataframe|timedelta
| 0 |
1,907,649 | 28,450,867 |
Why does list comprehension in my Python Interactive shell append a list of Nones?
|
<p>I'm testing some Django functionality in my interactive shell</p>
<p>Here's my attempt to probe these objects, note the list of Nones at the end</p>
<pre><code>>>> [print(foo) for foo in CharacterSkillLink.objects.all() if foo.speciality]
Streetwise (Street Countdown) Roran
[None]
</code></pre>
<p>And with a more orthodox list comprehension:</p>
<pre><code>>>> [print(foo) for foo in range(1,10)]
1
2
3
4
5
6
7
8
9
[None, None, None, None, None, None, None, None, None]
</code></pre>
<p>Nine Nones, all in a row.</p>
<p>Why am I getting that?</p>
|
<p>Because <code>print</code> returns a value, namely <code>None</code>. What it prints and what it returns are two different things.</p>
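<p>You can see this by binding the result of a <code>print</code> call - and for side effects, the idiomatic form is a plain <code>for</code> loop, which builds no throwaway list:</p>

```python
result = print("hello")  # prints: hello
print(result)            # prints: None

# idiomatic version of the loop: no list of Nones is built
for foo in range(1, 10):
    print(foo)
```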
|
python|python-3.x|python-interactive
| 6 |
1,907,650 | 41,589,062 |
How to test Python 2 code using nose2
|
<p>I've asked this question before (<a href="https://stackoverflow.com/questions/40406122/force-nose2-to-use-python-2-7-instead-of-python-3-5">Force nose2 to use Python 2.7 instead of Python 3.5</a>) but didn't get an answer, and thought I might give it another try. I'm trying to run tests using the command</p>
<pre><code>nose2
</code></pre>
<p>but I'm getting an error that ends with</p>
<pre><code>SyntaxError: Missing parentheses in call to 'print'
</code></pre>
<p>It seems like <a href="https://github.com/nose-devs/nose2" rel="nofollow noreferrer">nose2</a> assumes that the code is in Python 3, whereas in this case it is in Python 2. Is there any way to make <code>nose2</code> work on Python 2 code? (For example by changing its configuration)?</p>
|
<p>nose2 runs under whichever Python interpreter launches it - by default, the one configured in its shebang line.</p>
<p>To test a python2 project use (executable and path might differ on your machine):</p>
<pre><code>python2.7 /usr/local/bin/nose2
</code></pre>
<p>verified with this example:</p>
<p><strong>test.py</strong>:</p>
<pre><code>def test_the_program():
print "foo"
</code></pre>
<p><strong>with python3</strong>:</p>
<pre><code>$ python3 /usr/local/bin/nose2
======================================================================
ERROR: test (nose2.loader.ModuleImportFailure)
----------------------------------------------------------------------
ImportError: Failed to import test module: test
(...)
    print "foo"
^
SyntaxError: Missing parentheses in call to 'print'
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
</code></pre>
<p><strong>with python2.7</strong>:</p>
<pre><code>$ python2.7 /usr/local/bin/nose2
foo
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
</code></pre>
|
python|unit-testing|nose2
| 3 |
1,907,651 | 6,388,778 |
app engine python urlfetch timing out
|
<p>I have two instances of app engine applications running that I want to communicate via a Restful interface. Once the data of one is updated, it calls a web hook on the second, which will retrieve a fresh copy of the data for its own system.
Inside 'site1' I have: </p>
<pre><code> from google.appengine.api import urlfetch
url = www.site2.com/data_updated
result = urlfetch.fetch(url)
</code></pre>
<p>Inside the handler for data_updated on 'site2' I have:</p>
<pre><code> url = www.site1.com/get_new_data
result = urlfetch.fetch(url)
</code></pre>
<p>There is very little data being passed between the two sites but I receive the following error. I've tried increasing the deadline to 10 seconds but this still doesn't work. </p>
<pre><code> DeadlineExceededError: ApplicationError: 5
</code></pre>
<p>Can anyone provide any insight into what might be happening? </p>
<p>Thanks - Richard</p>
|
<p>App Engine's <code>urlfetch</code> doesn't always behave as it is expected, you have about 10 seconds to fetch the URL. Assuming the URL you're trying to fetch is up and running, you should be able to catch the <code>DeadlineExceededError</code> by calling <code>from google.appengine.runtime import apiproxy_errors</code> and then wrapping the urlfetch call within a try/except block using <code>except apiproxy_errors.DeadlineExceededError:</code>.</p>
<p>Relevant answer <a href="https://stackoverflow.com/questions/5738146/unable-to-handle-deadlineexceedederror-while-using-urlfetch">here</a>.</p>
|
python|google-app-engine|urlfetch
| 3 |
1,907,652 | 57,068,719 |
how to redirect to login page for users who are not logged in when they want to post
|
<p>I am creating a site where users post their stuff. Posting requires login, so when a user who is not logged in tries to post, I want to redirect them to the login page and display a pop-up message "Post requires login".</p>
<p>This is for Python 3.7.3 and Django 2.2.3. For users who are not logged in, I have added <code>@login_required</code>, which throws a Page Not Found error; instead of that I want to redirect them to the login form. </p>
<p>Views.py for posting</p>
<pre><code>@login_required
def PostNew(request):
if request.method == "POST":
form = PostForm(request.POST)
if form.is_valid():
post = form.save(commit=False)
post.author = request.user
post.save()
return redirect('loststuffapp:IndexView')
else:
form = PostForm()
return render(request, 'loststuffapp/form.html', {'form': form})
</code></pre>
<p>views.py for login</p>
<pre><code>def login_request(request):
if request.method == "POST":
user_form = AuthenticationForm(request, data=request.POST)
if user_form.is_valid():
username = user_form.cleaned_data.get("username")
password = user_form.cleaned_data.get("password")
user = authenticate(username=username, password=password)
if user is not None:
login(request, user)
messages.info(request, f"You are now logged in as {username}")
return redirect("loststuffapp:IndexView")
else:
messages.error(request, "Invalid username or password")
user_form = AuthenticationForm()
return render(request,
"loststuffapp/login.html",
{"user_form":user_form}
)
</code></pre>
<p>login.html</p>
<pre><code>{% extends "loststuffapp/base.html" %}
{% block content %}
<form method="POST">
{% csrf_token %}
{{user_form.as_p}}
<p><button class="btn" type="submit" >Login</button></p>
<p>If you already have an account, <a href="/login"><strong>register</strong></a> instead</p>
</form>
{% endblock %}
</code></pre>
<p>form.html</p>
<pre><code>{% extends 'loststuffapp/base.html' %}
{% block content %}
<h2>New post</h2>
<form method="POST" class="post-form">{% csrf_token %}
{{ form.as_p }}
<p><button type="submit" class="save btn btn-default">Post</button></p>
</form>
{% endblock %}
</code></pre>
<p>urls.py</p>
<pre><code>urlpatterns = [
path('', views.IndexView, name="IndexView"),
path('IndexView', views.IndexView, name="IndexView"),
path('PostNew/', views.PostNew, name="PostNew"),
path('register/', views.register, name="register"),
path('logout/', views.logout_request, name="logout"),
path('login/', views.login_request, name="login"),
path('ContactForm/', views.ContactForm, name="ContactForm"),
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
</code></pre>
<p>urls.py</p>
<pre><code>urlpatterns = [
path('',include('loststuffapp.urls')),
path('admin/', admin.site.urls),
]
</code></pre>
|
<p>You need to update your <a href="https://docs.djangoproject.com/en/2.2/ref/settings/#login-url" rel="nofollow noreferrer"><strong><code>LOGIN_URL</code></strong></a> in settings.py:</p>
<pre><code>LOGIN_URL = '/login/' # or use login url name "login"
</code></pre>
<p>With that set, <code>@login_required</code> will redirect anonymous users to <code>/login/?next=/PostNew/</code> instead of returning a 404. For the "Post requires login" pop-up, you can check for the <code>next</code> parameter in <code>login_request</code> and add a message with the <code>messages</code> framework you're already using.</p>
|
django|python-3.x
| 2 |
1,907,653 | 25,515,834 |
Read lines between empty spaces of data file and write in new files
|
<p>I have a big data text file, for example:</p>
<pre><code>#01textline1
1 2 3 4 5 6
2 3 5 6 7 3
3 5 6 7 6 4
4 6 7 8 9 9
1 2 3 6 4 7
3 5 7 7 8 4
4 6 6 7 8 5
3 4 5 6 7 8
4 6 7 8 8 9
..
..
</code></pre>
<p>I want to extract the data between empty lines and write each chunk to a new file. It is hard to know how many empty lines are in the file, which means you also don't know in advance how many new files you will be writing. Can anyone guide me? Thank you. I hope my question is clear.</p>
|
<p>Unless your file is very large, split it into individual sections using <code>re</code>, splitting on 2 or more whitespace characters:</p>
<pre><code>import re
with open("in.txt") as f:
lines = re.split("\s{2,}",f.read())
print lines
['#01textline1\n1 2 3 4 5 6\n2 3 5 6 7 3\n3 5 6 7 6 4\n4 6 7 8 9 9', '1 2 3 6 4 7\n3 5 7 7 8 4\n4 6 6 7 8 5', '3 4 5 6 7 8\n4 6 7 8 8 9']
</code></pre>
<p>Just iterate over lines and write your new files each iteration</p>
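<p>As a sketch of that last step (using inline sample data instead of <code>in.txt</code>, and illustrative <code>block_%d.txt</code> file names, so it stands alone):</p>

```python
import re

data = """1 2 3 4 5 6
2 3 5 6 7 3

1 2 3 6 4 7
3 5 7 7 8 4

3 4 5 6 7 8"""

# blank lines match \s{2,}, the same split as above
sections = re.split(r"\s{2,}", data)

# one output file per section
for i, section in enumerate(sections, 1):
    with open("block_%d.txt" % i, "w") as out:
        out.write(section + "\n")

print(len(sections))  # 3
```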
|
python
| 1 |
1,907,654 | 25,646,631 |
How to compare inputted numbers without storing them in list
|
<p>Note: This is not my homework. Python is not taught in my college so i am doing it my myself from MITOCW.</p>
<p>So far i have covered while loop, input & print</p>
<p>Q) Write a program that asks the user to input 10 integers, and then prints the largest odd number that was entered. If no odd number was entered, it should print a message to that effect. </p>
<p>How can I compare those 10 numbers without storing them in a list or something else? I haven't covered lists yet. </p>
<pre><code>print "Enter 10 numbers: "
countingnumber=10
while countingnumber<=10:
number=raw_input():
if number % 2 == 0:
print "This is odd"
countingnumber=countingnumber+1
else:
print "This is even. Enter the odd number again"
</code></pre>
<p>I think the program will look something like this, but it has some error I can't find. How can I compare all the numbers to get the largest odd number without storing those 10 numbers in a list?</p>
|
<p>You can just define a <code>maxnum</code> variable and keep the largest odd value seen so far in it. Also, use <code>int(raw_input())</code> instead of <code>raw_input()</code> so you compare numbers rather than strings - and note that the parity test for an odd number is <code>number % 2 != 0</code>, not <code>== 0</code>:</p>
<pre><code>print "Enter 10 numbers: "
maxnum=0
for i in range(10):
    number=int(raw_input())
    if number%2 != 0:
        print "This is odd"
        if number>maxnum:
            maxnum=number
    else:
        print "This is even"
if maxnum:
    print "max odd is :{0}".format(maxnum)
else:
    print "No odd number was entered"
</code></pre>
<p>DEMO:</p>
<pre><code>Enter 10 numbers:
3
This is odd
5
This is odd
7
This is odd
9
This is odd
13
This is odd
15
This is odd
16
This is even
101
This is odd
2
This is even
4
This is even
max odd is :101
</code></pre>
|
python
| 2 |
1,907,655 | 44,609,336 |
Logistic Regression in python. probability threshold
|
<p>So I am approaching the classification problem with logistic regression algorithm and I obtain all of the predictions for the test set for class "1". The set is very imbalanced as it has over 200k inputs and more or less 92% are from class "1". Logistic regression generally classifies the input to class "1" if the P(Y=1|X)>0.5. So since all of the observations in test set are being classified into class 1 I thought that maybe there is a way to change this threshold and set it for example to 0.75 so that only observations with P(Y=1|X)>0.75 are classified to class 1 and otherwise class 0. How to implement it in python?</p>
<pre><code>model = LogisticRegression(penalty='l2', C=1)
model.fit(X_train, y_train)
score = accuracy_score(y_test, model.predict(X_test))
fpr, tpr, thresholds = roc_curve(y_test, model.predict_proba(X_test)[:,1])
roc = roc_auc_score(y_test, model.predict_proba(X_test)[:,1])
cr = classification_report(y_test, model.predict(X_test))
</code></pre>
<p>PS. Since all the observations from test set are being classified to class 1 the F1 score and recall from classification report are 0. Maybe by changing the threshold this problem will be solved.</p>
|
<p>A thing you might want to try is balancing the classes instead of changing the threshold. Scikit-learn is supporting this via <code>class_weights</code>. For example you could try <code>model = LogisticRegression(penalty='l2', class_weight='balanced', C=1)</code>. Look at the documentation for more details:</p>
<p><a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html" rel="nofollow noreferrer">http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html</a></p>
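<p>If you do want the custom threshold itself, skip <code>predict()</code> and threshold the probabilities directly. A sketch, with hard-coded probabilities standing in for <code>model.predict_proba(X_test)[:, 1]</code>:</p>

```python
import numpy as np

# stand-in for: probs = model.predict_proba(X_test)[:, 1]
probs = np.array([0.55, 0.80, 0.95, 0.60, 0.74])

threshold = 0.75
y_pred = (probs > threshold).astype(int)  # class 1 only above the threshold
print(y_pred)  # [0 1 1 0 0]
```

<p>With a real model this is just <code>y_pred = (model.predict_proba(X_test)[:, 1] > 0.75).astype(int)</code>, which you can feed to <code>classification_report</code> as usual.</p>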
|
python-3.x|scikit-learn|logistic-regression
| 2 |
1,907,656 | 24,144,934 |
Flask app on Heroku gives error: TypeError: 'unicode' does not have the buffer interface
|
<p>I have a flask app that I'm trying to deploy on Heroku. It works perfectly under foreman but when I deploy it, my login procedure fails with the error:</p>
<p>TypeError: 'unicode' does not have the buffer interface</p>
<p>The line of code where the error occurs looks like this:</p>
<pre><code>person = verify_email_password(email, request.form["xyzABC123"])
</code></pre>
<p>Googling around, I've seen a very recent (early june 2014) regression in setuptools that causes this error but I am not using setuptools.</p>
|
<p>Upgrading Werkzeug to 0.9.6 (per this discussion: <a href="https://github.com/miguelgrinberg/flasky/issues/17" rel="nofollow">https://github.com/miguelgrinberg/flasky/issues/17</a>) worked for me today.</p>
|
python|heroku|unicode
| 1 |
1,907,657 | 15,098,619 |
Change a pixel value
|
<p>I have an image that I opened using <code>LoadImageM</code> and I get the pixel data using <code>Get2D</code>, but I can't seem to find any built-in function to change a pixel value.
I've tried using multiple things from <code>Rectangle</code> to <code>CV_RGB</code> but with no successful results.</p>
|
<p>Consider checking out the new version of the opencv library. </p>
<p>You import it with </p>
<pre><code>import cv2
</code></pre>
<p>and it directly returns numpy arrays. </p>
<p>So for example if you do </p>
<pre><code>image_array = cv2.imread('image.png')
</code></pre>
<p>then you can just access and change the pixels values by simply manipulating <code>image_array</code> :</p>
<pre><code>image_array[0,0] = 100
</code></pre>
<p>sets the top left pixel to the value to 100. </p>
<p>Depending on your installation, you may already have the <code>cv2</code> bindings, so check if <code>import cv2</code> works. </p>
<p>Otherwise just install <code>opencv</code> and <code>numpy</code> and you are good to go.</p>
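<p>Note that <code>cv2.imread</code> returns color images in BGR order, so each pixel is a length-3 array. The same indexing works on any numpy array - a synthetic array stands in for the image here so the sketch doesn't depend on an image file:</p>

```python
import numpy as np

# stand-in for cv2.imread('image.png'): a 4x4 3-channel (BGR) image of zeros
image_array = np.zeros((4, 4, 3), dtype=np.uint8)

image_array[0, 0] = (0, 0, 255)   # set the top-left pixel to red (BGR order)
image_array[1, :, 1] = 100        # set the green channel of the whole second row

print(image_array[0, 0])  # [  0   0 255]
```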
|
python|opencv
| 5 |
1,907,658 | 29,685,386 |
Pivot Table to Dictionary
|
<p>I have this pivot table:</p>
<pre><code>[in]:unit_d
[out]:
units
store_nbr item_nbr
1 9 27396
28 4893
40 254
47 2409
51 925
89 157
93 1103
99 492
2 5 55104
11 655
44 117125
85 106
93 653
</code></pre>
<p>I want to have a dictionary with 'store_nbr' as the key and 'item_nbr' as the values.<br>
So, <code>{'1': [9, 28, 40,...,99], '2': [5, 11 ,44, 85, 93], ...}</code></p>
|
<p>I'd use <a href="http://pandas.pydata.org/pandas-docs/version/0.16.0/groupby.html"><code>groupby</code></a> here, after resetting the index to make it into columns:</p>
<pre><code>>>> d = unit_d.reset_index()
>>> {k: v.tolist() for k, v in d.groupby("store_nbr")["item_nbr"]}
{1: [9, 28, 40, 47, 51, 89, 93, 99], 2: [5, 11, 44, 85, 93]}
</code></pre>
|
python|dictionary|pandas|pivot-table
| 6 |
1,907,659 | 46,608,204 |
Using datetime.time for comparison and column creation
|
<p>I've been using Pandas for a while and I'm sure it's a dumb question.</p>
<p>I need to create a column in a data frame that is conditional on the datetime.time. If datetime.time < 12, fill the column with 'morning', then the same process for 'afternoon' and 'night'. </p>
<pre><code>import datetime
b['time'] = ['01-01-2000 10:00:00', '01-01-2000 15:00:00', '01-01-2000 21:00:00']
b['time'].dt.time
(output)
1 10:00:00
2 15:00:00
3 21:00:00
b['time'].dt.time < 12 #example
TypeError: can't compare datetime.time to int
</code></pre>
<p>Question is: I can't compare datetime.time to int. How can I fix this?</p>
<p>Much appreciated.</p>
|
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.cut.html" rel="nofollow noreferrer"><code>cut</code></a> or <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.searchsorted.html" rel="nofollow noreferrer"><code>numpy.searchsorted</code></a> for labels by bins:</p>
<pre><code>rng = pd.date_range('2017-04-03', periods=24, freq='H')
df = pd.DataFrame({'Date': rng})
bins = [0, 5, 13, 17, 25]
labels = ['Morning','Afternoon','Evening','Night']
hours = df['Date'].dt.hour
df['bin'] = pd.cut(hours-5+24 *(hours<5),bins=bins,labels=labels,right=False)
</code></pre>
<hr>
<pre><code>bins = [-1,4,9,17,21]
labels = ['Night', 'Morning','Afternoon','Evening','Night']
df['bin1'] = np.array(labels)[np.array(bins).searchsorted(hours)-1]
print (df)
Date bin bin1
0 2017-04-03 00:00:00 Night Night
1 2017-04-03 01:00:00 Night Night
2 2017-04-03 02:00:00 Night Night
3 2017-04-03 03:00:00 Night Night
4 2017-04-03 04:00:00 Night Night
5 2017-04-03 05:00:00 Morning Morning
6 2017-04-03 06:00:00 Morning Morning
7 2017-04-03 07:00:00 Morning Morning
8 2017-04-03 08:00:00 Morning Morning
9 2017-04-03 09:00:00 Morning Morning
10 2017-04-03 10:00:00 Afternoon Afternoon
11 2017-04-03 11:00:00 Afternoon Afternoon
12 2017-04-03 12:00:00 Afternoon Afternoon
13 2017-04-03 13:00:00 Afternoon Afternoon
14 2017-04-03 14:00:00 Afternoon Afternoon
15 2017-04-03 15:00:00 Afternoon Afternoon
16 2017-04-03 16:00:00 Afternoon Afternoon
17 2017-04-03 17:00:00 Afternoon Afternoon
18 2017-04-03 18:00:00 Evening Evening
19 2017-04-03 19:00:00 Evening Evening
20 2017-04-03 20:00:00 Evening Evening
21 2017-04-03 21:00:00 Evening Evening
22 2017-04-03 22:00:00 Night Night
23 2017-04-03 23:00:00 Night Night
</code></pre>
|
python|pandas|datetime
| 1 |
1,907,660 | 49,456,831 |
Multiple sets of duplicate records from a pandas dataframe
|
<p>How to get all the existing duplicated sets of records(based on a column) from a dataframe?</p>
<p>I got a dataframe as follows:</p>
<pre><code>flight_id | from_location | to_location | schedule |
1 | Vancouver | Toronto | 3-Jan |
2 | Amsterdam | Tokyo | 15-Feb |
4 | Fairbanks | Glasgow | 12-Jan |
9 | Halmstad | Athens | 21-Jan |
3 | Brisbane | Lisbon | 4-Feb |
4 | Johannesburg | Venice | 23-Jan |
9 | LosAngeles | Perth | 3-Mar |
</code></pre>
<p>Here flight_id is the column on which I need to check duplicates. And there are 2 sets of duplicates.</p>
<p>Output for this specific example should look like--<code>[(2,5),(3,6)]</code>. List of tuples of record index values</p>
|
<p>Is this what you need? <code>duplicated</code> + <code>groupby</code>:</p>
<pre><code>(df.loc[df['flight_id'].duplicated(keep=False)].reset_index()).groupby('flight_id')['index'].apply(tuple)
Out[510]:
flight_id
4 (2, 5)
9 (3, 6)
Name: index, dtype: object
</code></pre>
<p>Adding <code>tolist</code> at the end </p>
<pre><code>(df.loc[df['flight_id'].duplicated(keep=False)].reset_index()).groupby('flight_id')['index'].apply(tuple).tolist()
Out[511]: [(2, 5), (3, 6)]
</code></pre>
<p>And another solution ... for fun only</p>
<pre><code>s=df['flight_id'].value_counts()
list(map(lambda x : tuple(df[df['flight_id']==x].index.tolist()), s[s.gt(1)].index))
Out[519]: [(2, 5), (3, 6)]
</code></pre>
|
python|pandas|dataframe|group-by|pandas-groupby
| 9 |
1,907,661 | 70,296,316 |
TypeError: Calculator.main() takes 5 positional arguments but 6 were given
|
<pre><code>import math
class Calculator():
def __init__(self, num1=0.0, op=None, num2=0.0, result=None):
self.num1 = num1
self.op = op
self.num2 = num2
self.result = result
def main(self, num1, op, num2, result):
if op == "+":
result = float(num1) + float(num2)
print(result)
elif op == "-":
result = float(num1) - float(num2)
print(result)
elif op == "*":
result = float(num1) * float(num2)
print(result)
elif op == "/" and float(num2) == 0:
result = None
print("You can't divide by zero")
p.main(self, num1, op, num2, result)
elif op == "/" and float(num2) != 0:
result = float(num1) / float(num2)
print(result)
elif op == "power":
result = float(num1)**float(num2)
print(result)
else:
print("invalid input")
while True:
p = Calculator()
p.main(num1=input("Write a number: "),
op=input("+ or - or * or / or power: "),
num2=input("Write another number: "),
result=None)
ans = input("Would you like to do another equation: ")
if ans == "yes":
p.main()
ans = input("Would you like to do another equation: ")
elif ans == "no":
exit()
</code></pre>
<p>I tried dividing 5 by 6 to test out if everything was working fine and I got this error:</p>
<pre><code>Traceback (most recent call last):
  File "d:\Visual Studio Code\Projects\HelloWorld python\tempCodeRunnerFile.py", line 37, in &lt;module&gt;
    p.main(num1=input("Write a number: "),
  File "d:\Visual Studio Code\Projects\HelloWorld python\tempCodeRunnerFile.py", line 24, in main
    p.main(self, num1, op, num2, result)
TypeError: Calculator.main() takes 5 positional arguments but 6 were given
</code></pre>
|
<p>This line:</p>
<pre><code> p.main(self, num1, op, num2, result)
</code></pre>
<p>should be:</p>
<pre><code> self.main(num1, op, num2, result)
</code></pre>
<p>This will raise a different error, however: calling the same function with the same arguments reproduces the same division-by-zero branch, which calls the function again, and so on until a <code>RecursionError</code> is raised. What needs to happen instead is that you prompt the user for new input; the function that handles the error should be the one that takes the input from the user.</p>
<p>One way to make this easy is using exceptions, which actually happens automatically if you just do the division by zero and let it raise a <code>ZeroDivisionError</code>. Just have the code that takes the input catch that exception so it can re-prompt the user:</p>
<pre><code>class Calculator():
def operate(self, num1: float, op: str, num2: float) -> float:
"""Perform one operation according to the value of op.
May raise ValueError or ZeroDivisionError."""
if op == "+":
return num1 + num2
elif op == "-":
return num1 - num2
elif op == "*":
return num1 * num2
elif op == "/":
return num1 / num2
elif op == "power":
return num1 ** num2
else:
raise ValueError(f"invalid operator {op}")
def do_one_equation(self) -> None:
"""Prompt the user for input and do one equation.
Loop on invalid input until we have one successful result."""
while True:
try:
result = self.operate(
float(input("Write a number: ")),
input("+ or - or * or / or power: "),
float(input("Write another number: "))
)
print(result)
return
except ZeroDivisionError:
print("You can't divide by zero")
except ValueError:
print("Invalid input")
p = Calculator()
while True:
p.do_one_equation()
if input("Would you like to do another equation: ") == "no":
break
</code></pre>
<p>Note in the code above that <code>result</code> becomes whatever is <code>return</code>ed by the <code>operate()</code> method. The arguments to the function are its <em>input</em>, and the result should be its <em>output</em> (i.e. the thing you <code>return</code>).</p>
|
python
| 1 |
1,907,662 | 53,771,065 |
Less Memory-intense way of copying tables & renaming columns in sqlite/pandas
|
<p>I have found a very nice way to:</p>
<ol>
<li>read a table from a sql database</li>
<li>rename the columns with a dict (read from a yaml file) </li>
<li>rewrite the table to another database</li>
</ol>
<p>The only problem is that, as the table becomes bigger (10 cols x several million rows), reading the table into a pandas DataFrame is so memory-intensive that it causes the process to be killed. </p>
<p>There must be an easier way. I looked at alter table statements but they seem to be very complicated as well& will not do the copying in another db. Any ideas on how to do the same operation without using this much memory. Feeling like pandas are a crutch I use due to my bad sql.</p>
<pre><code>import pandas as pd
import sqlite3
def translate2generic(sourcedb, targetdb, sourcetable,
targettable, toberenamed):
"""Change table's column names to fit generic api keys.
:param: Path to source db
:param: Path to target db
:param: Name of table to be translated in source
:param: Name of the newly to be created table in targetdb
:param: dictionary of translations
:return: New column names in target db
"""
sourceconn = sqlite3.connect(sourcedb)
targetconn = sqlite3.connect(targetdb)
table = pd.read_sql_query('select * from ' + sourcetable, sourceconn) #this is the line causing the crash
# read dict in the format {"oldcol1name": "newcol1name", "oldcol2name": "newcol2name"}
rename = {v: k for k, v in toberenamed.items()}
# rename columns
generic_table = table.rename(columns=rename)
# Write table to new database
generic_table.to_sql(targettable, targetconn, if_exists="replace")
targetconn.close()
sourceconn.close()
</code></pre>
<p>I've also looked at solutions such as <a href="https://tableplus.io/blog/2018/04/sqlite-rename-a-column.html" rel="nofollow noreferrer">this one</a>, but they assume you know the types of the columns.</p>
<p>An elegant solution would be very much appreciated.</p>
<p>Edit: I know there is a method in sqlite since the September release 3.25.0, but I am stuck with version 2.6.0</p>
|
<p>To elaborate on my comments...</p>
<p>If you have a table in foo.db and want to copy that table's data to a new table in bar.db with different column names:</p>
<pre><code>$ sqlite3 foo.db
sqlite> ATTACH 'bar.db' AS bar;
sqlite> CREATE TABLE bar.newtable(newcolumn1, newcolumn2);
sqlite> INSERT INTO bar.newtable SELECT oldcolumn1, oldcolumn2 FROM main.oldtable;
</code></pre>
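<p>The same ATTACH approach can be driven from Python's <code>sqlite3</code> module without ever loading the rows into pandas. A sketch (the function name is mine, and it assumes the <code>rename</code> dict maps old to new column names and keeps insertion order, i.e. Python 3.7+):</p>

```python
import sqlite3

def copy_with_renames(sourcedb, targetdb, sourcetable, targettable, rename):
    """Copy sourcetable from sourcedb into targetdb under new column
    names without pulling the rows into Python.
    `rename` maps old column name -> new column name."""
    conn = sqlite3.connect(targetdb)
    conn.execute("ATTACH DATABASE ? AS src", (sourcedb,))
    old_cols = ", ".join('"%s"' % c for c in rename)
    new_cols = ", ".join('"%s"' % c for c in rename.values())
    # create the target table with the new names, then let SQLite
    # stream the rows across internally
    conn.execute('CREATE TABLE "%s" (%s)' % (targettable, new_cols))
    conn.execute('INSERT INTO "%s" SELECT %s FROM src."%s"'
                 % (targettable, old_cols, sourcetable))
    conn.commit()
    conn.close()
```

<p>This keeps memory usage flat regardless of table size, since SQLite streams the rows internally. If you do need pandas in the middle, <code>pd.read_sql_query(..., chunksize=50000)</code> combined with <code>to_sql(..., if_exists="append")</code> per chunk is the other memory-friendly route.</p>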
|
python|pandas|sqlite
| 2 |
1,907,663 | 53,436,055 |
Aggregate dataframe rows into a dictionary
|
<p>I have a pandas DataFrame object where each row represents one object in an image. </p>
<p>One example of a possible row would be:</p>
<pre><code>{'img_filename': 'img1.txt', 'img_size':'20', 'obj_size':'5', 'obj_type':'car'}
</code></pre>
<p>I want to aggregate all the objects that belong to the same image, and get something whose rows would be like:</p>
<pre><code>{'img_filename': 'img1.txt', 'img_size':'20', 'obj': [{'obj_size':'5', 'obj_type':'car'}, {{'obj_size':'6', 'obj_type':'bus'}}]}
</code></pre>
<p>That is, the third column is a list of columns containing the data of each group.</p>
<p>How can I do this?</p>
<p>EDIT:</p>
<p>Consider the following example.</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame([
{'img_filename': 'img1.txt', 'img_size':'20', 'obj_size':'5', 'obj_type':'car'},
{'img_filename': 'img1.txt', 'img_size':'20', 'obj_size':'6', 'obj_type':'bus'},
{'img_filename': 'img2.txt', 'img_size':'25', 'obj_size':'4', 'obj_type':'car'}
])
df2 = pd.DataFrame([
{'img_filename': 'img1.txt', 'img_size':'20', 'obj': [{'obj_size':'5', 'obj_type':'car'}, {'obj_size':'6', 'obj_type':'bus'}]},
{'img_filename': 'img2.txt', 'img_size':'25', 'obj': [{'obj_size':'4', 'obj_type':'car'}]}
])
</code></pre>
<p>I want to turn <code>df1</code> into <code>df2</code>.</p>
|
<p>One way using <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_dict.html" rel="nofollow noreferrer"><code>to_dict</code></a></p>
<pre><code>df2 = df1.groupby('img_filename')['obj_size','obj_type'].apply(lambda x: x.to_dict('records'))
df2 = df2.reset_index(name='obj')
# Assuming the same img file can appear with different sizes, I take the first.
# If that's not the case, group by both columns directly and reset the index:
#df1.groupby(['img_filename', 'img_size'])['obj_size','obj_type'].apply(lambda x: x.to_dict('records'))
df2['img_size'] = df1.groupby('img_filename')['img_size'].first().values
print (df2)
img_filename obj img_size
0 img1.txt [{'obj_size': '5', 'obj_type': 'car'}, {'obj_s... 20
1 img2.txt [{'obj_size': '4', 'obj_type': 'car'}] 25
</code></pre>
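A slightly different sketch of the same idea (my own variant, built on the sample frame from the question): grouping by both identifying columns avoids patching <code>img_size</code> back in afterwards. This assumes <code>img_size</code> is constant per file, which matches the example data.

```python
import pandas as pd

df1 = pd.DataFrame([
    {'img_filename': 'img1.txt', 'img_size': '20', 'obj_size': '5', 'obj_type': 'car'},
    {'img_filename': 'img1.txt', 'img_size': '20', 'obj_size': '6', 'obj_type': 'bus'},
    {'img_filename': 'img2.txt', 'img_size': '25', 'obj_size': '4', 'obj_type': 'car'},
])

# Group by both identifying columns so img_size survives the aggregation,
# then pack the remaining columns into a list of record dicts per group.
df2 = (df1.groupby(['img_filename', 'img_size'])[['obj_size', 'obj_type']]
          .apply(lambda g: g.to_dict('records'))
          .reset_index(name='obj'))
print(df2)
```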
|
python|pandas
| 1 |
1,907,664 | 54,894,705 |
BeautifulSoup perform find() on an iterator
|
<p>I have a page that I need to parse with beautifulsoup, here's the code:</p>
<pre><code>from bs4 import BeautifulSoup

source_html = 'WFM1.html'
data = []

with open(source_html) as html_file:
    soup = BeautifulSoup(html_file, 'lxml')
    for tr in soup.find('tbody'):
        name_div = tr.find('div', class_= 'person-name-text inline-block ng-binding ng-hide')
        name = name_div.text.strip()
        shift_span = tr.find('span', class_= 'inline-block ng-binding ng-scope')
        shift = shift_span.text.strip()
        data.append((name, shift))
</code></pre>
<p>When I run this it returns "TypeError: find() takes no keyword arguments". Is it possible to perform find() on an iterator? How do I go about extracting only specific content from an iterator?</p>
<p>To make it clearer, here's how an iterator looks like:</p>
<pre><code><tr class="ng-scope" role="button" style="" tabindex="0">
<td class="person-name-column">
<div aria-hidden="false" class="wfm-checkbox">
<label><input aria-invalid="false" class="ng-pristine ng-untouched ng-valid ng-empty" type="checkbox"> <span class="wfm-checkbox-toggle"></span> <span class="wfm-checkbox-label person-name-text inline-block ng-binding">FirstName LastName <!-- ngIf: vm.toggles.ViewScheduleOnTimezoneEnabled && vm.selectedTimezone && personSchedule.Timezone.IanaId !== vm.selectedTimezone --></span></label> <!-- ngIf: vm.toggles.ViewScheduleOnTimezoneEnabled && vm.selectedTimezone && personSchedule.Timezone.IanaId !== vm.selectedTimezone -->
</div>
<div aria-hidden="true" class="person-name-text inline-block ng-binding ng-hide">
FirstName LastName
</div><!-- ngIf: vm.showWarnings -->
</td><!-- ngIf: ::vm.toggles.ViewShiftCategoryEnabled -->
<td class="shift-category-cell ng-scope" role="button" style="cursor: pointer;" tabindex="0"><!-- ngIf: ::personSchedule.ShiftCategory.Name --> <span class="inline-block ng-binding ng-scope" id="name" style="background: rgb(255, 99, 71); color: black;">EX</span> <!-- end ngIf: ::personSchedule.ShiftCategory.Name -->
<!-- ngIf: ::personSchedule.ShiftCategory.Name --><!-- end ngIf: ::personSchedule.ShiftCategory.Name --></td><!-- end ngIf: ::vm.toggles.ViewShiftCategoryEnabled -->
<td class="schedule schedule-column">
<div class="relative time-line-for">
<!-- ngRepeat: dayOff in ::personSchedule.DayOffs -->
<!-- ngRepeat: shift in ::personSchedule.Shifts -->
<div class="shift ng-scope">
<!-- ngRepeat: projection in ::shift.Projections -->
<div aria-label="Phone 04:00 - 08:00" class="layer absolute floatleft selectable projection-layer ng-scope noneSelected" role="button" style="left: 3.7037%; width: 14.8148%; background-color: rgb(255, 255, 0);" tabindex="0"></div><!-- end ngRepeat: projection in ::shift.Projections -->
<div aria-label="Lunch 08:00 - 08:30" class="layer absolute floatleft selectable projection-layer ng-scope noneSelected" role="button" style="left: 18.5185%; width: 1.85185%; background-color: rgb(0, 255, 0);" tabindex="0"></div><!-- end ngRepeat: projection in ::shift.Projections -->
<div aria-label="Coffee 08:30 - 08:45" class="layer absolute floatleft selectable projection-layer ng-scope noneSelected" role="button" style="left: 20.3704%; width: 0.925926%; background-color: rgb(224, 224, 224);" tabindex="0"></div><!-- end ngRepeat: projection in ::shift.Projections -->
<div aria-label="Phone 08:45 - 10:30" class="layer absolute floatleft selectable projection-layer ng-scope noneSelected" role="button" style="left: 21.2963%; width: 6.48148%; background-color: rgb(255, 255, 0);" tabindex="0"></div><!-- end ngRepeat: projection in ::shift.Projections -->
<div aria-label="FL 10:30 - 12:30" class="layer absolute floatleft selectable projection-layer ng-scope noneSelected" role="button" style="left: 27.7778%; width: 7.40741%; background-color: rgb(255, 140, 0);" tabindex="0"></div><!-- end ngRepeat: projection in ::shift.Projections -->
</div><!-- end ngRepeat: shift in ::personSchedule.Shifts -->
<!-- ngIf: vm.hasHiddenScheduleAtStart(personSchedule) -->
<!-- ngIf: vm.hasHiddenScheduleAtEnd(personSchedule) -->
</div>
</td><!-- ngIf: ::!vm.toggles.EditAndViewInternalNoteEnabled -->
<!-- ngIf: ::vm.toggles.EditAndViewInternalNoteEnabled -->
<td class="schedule-note-column ng-scope" role="button" tabindex="0"><span class="noComment"><i class="mdi mdi-comment"></i></span> <!-- ngIf: vm.getScheduleNoteForPerson(personSchedule.PersonId) && vm.getScheduleNoteForPerson(personSchedule.PersonId).length > 0 --></td><!-- end ngIf: ::vm.toggles.EditAndViewInternalNoteEnabled -->
<!-- ngIf: ::vm.toggles.ShowContractTimeEnabled -->
<td class="contract-time contract-time-column ng-binding ng-scope">8:00</td><!-- end ngIf: ::vm.toggles.ShowContractTimeEnabled -->
</tr>
</code></pre>
|
<p>Your <code>soup</code> object is of type <code><class 'bs4.BeautifulSoup'></code>, so you do not need to iterate over it with a <code>for</code> loop.</p>
<pre><code>name_div = soup.find('div', class_= 'person-name-text inline-block ng-binding ng-hide')
name = name_div.text.strip()
shift_span = soup.find('span', class_= 'inline-block ng-binding ng-scope')
shift = shift_span.text.strip()
data.append((name, shift))
print(data)
</code></pre>
<p>OUTPUT :</p>
<pre><code>[('FirstName LastName', 'EX')]
</code></pre>
<p>UPDATE:</p>
<p>If you have more than one element with the class <code>person-name-text inline-block ng-binding ng-hide</code>, suppose this is the HTML in your file:</p>
<pre><code><div aria-hidden="true" class="person-name-text inline-block ng-binding ng-hide">
FirstName LastName
</div>
<div aria-hidden="true" class="person-name-text inline-block ng-binding ng-hide">
FirstName LastName
</div>
<div aria-hidden="true" class="person-name-text inline-block ng-binding ng-hide">
FirstName LastName
</div>
<div aria-hidden="true" class="person-name-text inline-block ng-binding ng-hide">
FirstName LastName
</div>
</code></pre>
<p>You can get them all by using <code>find_all()</code>:</p>
<pre><code>name_divs = soup.find_all('div', class_= 'person-name-text inline-block ng-binding ng-hide')
for div in name_divs:
    print(div.text.strip())
</code></pre>
<p>OUTPUT :</p>
<pre><code>FirstName LastName
FirstName LastName
FirstName LastName
FirstName LastName
</code></pre>
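As for the original <code>TypeError</code>: iterating over <code>soup.find('tbody')</code> yields <em>all</em> of its children, including <code>NavigableString</code> whitespace nodes, and on a string <code>find()</code> is the plain string method, which takes no keyword arguments. A sketch that avoids this (using a made-up minimal table and the stdlib <code>html.parser</code>) iterates over <code>find_all('tr')</code>, which returns only <code>Tag</code> objects:

```python
from bs4 import BeautifulSoup

html = """
<table><tbody>
  <tr><td><div class="person-name-text">Alice A</div></td></tr>
  <tr><td><div class="person-name-text">Bob B</div></td></tr>
</tbody></table>
"""
soup = BeautifulSoup(html, "html.parser")

names = []
for tr in soup.find_all("tr"):  # only Tag objects, so tr.find() accepts keyword args
    div = tr.find("div", class_="person-name-text")
    if div is not None:
        names.append(div.text.strip())
print(names)
```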
|
python|python-3.x|loops|beautifulsoup|iterable
| 1 |
1,907,665 | 33,399,244 |
How can I create initial values for a model in django?
|
<p>Every time a new user is registered I want to create two mail folders, Draft and Deleted, and also allow the user to create his own folders.</p>
<p>How can I override the user registration so the two folders are created for each new user?</p>
<p>Another alternative would be to give a default folder value and allow users to add their own new folders. </p>
<p><strong>Models.py</strong></p>
<pre><code>class UserFolder(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, null = True, blank = True)

class MessageFolder(models.Model):
    folder = models.ForeignKey(UserFolder, null = True, blank = True)
    message = models.ForeignKey(Message, null = True, blank = True)
</code></pre>
|
<p>You can use Django signals to automatically create those by listening for the signal on the <code>user</code> model. </p>
<p>Below is an example that runs a few functions to recalculate stuff when an item is saved/deleted from an order. </p>
<pre><code>from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver

@receiver((post_save, post_delete), sender='main.OrderItem')
def update_order_when_items_changed(sender, instance, **kwargs):
    order = instance.order
    order.set_weight()
    order.set_total_price()
    order.save()
</code></pre>
<p>So yours might look something like this (not tested):</p>
<pre><code>from django.conf import settings

@receiver(post_save, sender=settings.AUTH_USER_MODEL)
def create_user_folders(sender, instance, created, **kwargs):
    if not created: return
    # generate MessageFolder && UserFolder
</code></pre>
|
python|django|model|subclass
| 1 |
1,907,666 | 33,271,702 |
Pandas: Index of last non equal row
|
<p>I have a pandas data frame <code>F</code> with a sorted index <code>I</code>. I am interested in knowing about the last change in one of the columns, let's say <code>A</code>. In particular, I want to construct a series with the same index as <code>F</code>, namely <code>I</code>, whose value at <code>i</code> is <code>j</code> where <code>j</code> is the greatest index value less than <code>i</code> such that <code>F[A][j] != F[A][i]</code>. For example, consider the following frame:</p>
<pre><code> A
1 5
2 5
3 6
4 2
5 2
</code></pre>
<p>The desired series would be:</p>
<pre><code>1 NaN
2 NaN
3 2
4 3
5 3
</code></pre>
<p>Is there a pandas/numpy idiomatic way to construct this series?</p>
|
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.idxmax.html" rel="nofollow"><code>pd.Series.idxmax</code></a> on Boolean data returns the index label of the first <code>True</code> value, which (after reversing the series) is the last <code>True</code> value in the original order. (Older pandas exposed the same label-returning behaviour as <code>Series.argmax</code>; in modern pandas <code>argmax</code> returns a position instead.) You still have to loop over the series in this solution, though.</p>
<pre><code># Initiate source data
F = pd.DataFrame({'A':[5,5,6,2,2]}, index=list('fobni'))
# Initiate resulting Series to NaN
result = pd.Series(np.nan, index=F.index)

for i in range(1, len(F)):
    value_at_i = F['A'].iloc[i]
    values_before_i = F['A'].iloc[:i]
    # Get differences as a Boolean Series
    # (keeping the original index)
    diffs = (values_before_i != value_at_i)
    if diffs.sum() == 0:
        continue
    # Reverse the Series of differences, then take the index
    # label of the first True value (the last one in the
    # original order); idxmax returns the label
    j = diffs[::-1].idxmax()
    result.iloc[i] = j
</code></pre>
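If the Python-level loop ever becomes a bottleneck, here is a vectorized sketch (my own addition, not part of the original answer, using the integer index from the question): the desired value for each row is simply the index label just before the start of that row's run of consecutive equal values.

```python
import numpy as np
import pandas as pd

F = pd.DataFrame({'A': [5, 5, 6, 2, 2]}, index=[1, 2, 3, 4, 5])
s = F['A']

run_id = s.ne(s.shift()).cumsum()  # label consecutive runs of equal values
# first index label of each run, broadcast back to every row of that run
first_label = s.groupby(run_id).transform(lambda g: g.index[0])
pos = F.index.get_indexer(first_label) - 1  # position just before the run start
labels = F.index.to_numpy()
# pos == -1 marks rows in the very first run, which have no earlier differing value
result = pd.Series(np.where(pos >= 0, labels[pos], np.nan), index=F.index)
print(result)
```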
|
python|pandas|indexing|dataframe
| 0 |
1,907,667 | 24,878,081 |
Python 3.4.1 make test Fails: ERROR: test_connect_starttls (test.test_smtpnet.SmtpTest)
|
<p>I just downloaded python 3.4.1 on my centos 6.4 virtual machine and was following the instructions in the readme. first I did ./configure, then make, then make test.</p>
<p>When I ran make test it had an error. I have no idea why; I just followed the instructions exactly as they were given. Here is the output:</p>
<pre><code>test test_smtpnet failed -- Traceback (most recent call last):
File "/usr/local/Python-3.4.1/Lib/test/test_smtpnet.py", line 30, in test_connect_starttls
server = smtplib.SMTP(self.testServer, self.remotePort)
File "/usr/local/Python-3.4.1/Lib/smtplib.py", line 242, in __init__
(code, msg) = self.connect(host, port)
File "/usr/local/Python-3.4.1/Lib/smtplib.py", line 321, in connect
self.sock = self._get_socket(host, port, self.timeout)
File "/usr/local/Python-3.4.1/Lib/smtplib.py", line 292, in _get_socket
self.source_address)
File "/usr/local/Python-3.4.1/Lib/socket.py", line 509, in create_connection
raise err
File "/usr/local/Python-3.4.1/Lib/socket.py", line 495, in create_connection
sock = socket(af, socktype, proto)
File "/usr/local/Python-3.4.1/Lib/socket.py", line 123, in __init__
_socket.socket.__init__(self, family, type, proto, fileno)
OSError: [Errno 97] Address family not supported by protocol
368 tests OK.
1 test failed:
test_smtpnet
7 tests altered the execution environment:
test_calendar test_float test_locale test_site test_strptime
test_types test_warnings
12 tests skipped:
test_curses test_devpoll test_gdb test_kqueue test_msilib
test_ossaudiodev test_startfile test_tk test_ttk_guionly
test_winreg test_winsound test_zipfile64
Re-running failed tests in verbose mode
Re-running test 'test_smtpnet' in verbose mode
test_connect_starttls (test.test_smtpnet.SmtpTest) ... ERROR
test_connect (test.test_smtpnet.SmtpSSLTest) ... ok
test_connect_default_port (test.test_smtpnet.SmtpSSLTest) ... ok
test_connect_using_sslcontext (test.test_smtpnet.SmtpSSLTest) ... ok
test_connect_using_sslcontext_verified (test.test_smtpnet.SmtpSSLTest) ... ok
======================================================================
ERROR: test_connect_starttls (test.test_smtpnet.SmtpTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/Python-3.4.1/Lib/test/test_smtpnet.py", line 30, in test_connect_starttls
server = smtplib.SMTP(self.testServer, self.remotePort)
File "/usr/local/Python-3.4.1/Lib/smtplib.py", line 242, in __init__
(code, msg) = self.connect(host, port)
File "/usr/local/Python-3.4.1/Lib/smtplib.py", line 321, in connect
self.sock = self._get_socket(host, port, self.timeout)
File "/usr/local/Python-3.4.1/Lib/smtplib.py", line 292, in _get_socket
self.source_address)
File "/usr/local/Python-3.4.1/Lib/socket.py", line 509, in create_connection
raise err
File "/usr/local/Python-3.4.1/Lib/socket.py", line 495, in create_connection
sock = socket(af, socktype, proto)
File "/usr/local/Python-3.4.1/Lib/socket.py", line 123, in __init__
_socket.socket.__init__(self, family, type, proto, fileno)
OSError: [Errno 97] Address family not supported by protocol
----------------------------------------------------------------------
Ran 5 tests in 42.864s
FAILED (errors=1)
test test_smtpnet failed
make: *** [test] Error 1
</code></pre>
<p>so it seems like the _socket module tried using an "address family" (whatever the heck that is) that is not supported by the protocol. Can this exception be more vague please?</p>
<p>Does anybody know what I'm missing here?</p>
|
<p>It looks like the test case requires IPv6 to be enabled, so it will fail on a system without IPv6 support. Check the output of <code>ip a s</code> to see if the VM has an IPv6 address set, usually one starting with <code>fe80::</code>. The address family can be <code>AF_INET</code> for IPv4 or <code>AF_INET6</code> for IPv6.
Actually, that is a test-case bug.</p>
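A small sketch (not part of the CPython test suite) to check programmatically whether IPv6 sockets can be created at all:

```python
import socket

def ipv6_supported():
    """Return True if Python was built with IPv6 and the kernel allows AF_INET6 sockets."""
    if not socket.has_ipv6:
        return False
    try:
        s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        s.close()
        return True
    except OSError:
        # Errno 97 ("Address family not supported by protocol") lands here
        return False

print(ipv6_supported())
```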
|
python|ssl
| 2 |
1,907,668 | 24,705,101 |
OpenCV VideoCapture Not Opening
|
<p>I'm trying to use OpenCV's cv2 python bindings on an Amazon server running Ubuntu 14.04 and I can't seem to get VideoCapture to work properly.</p>
<p>I tried opening the default capture as follows:</p>
<pre><code>import cv2
cap = cv2.VideoCapture(0)
cap.isOpened() #Returns false
</code></pre>
<p>I tested this on my local machine and it was true as expected, so there is something wrong with my open CV configuration. I've tried a variety of things:</p>
<ul>
<li>Using an actual filepath that I confirmed points to an .mp4 file</li>
<li>Using -1 and 1 in place of 0 on the second line</li>
<li>Installing ffmpeg (from a ppa as it isn't available by default on Ubuntu 14.04) and rebuilding OpenCV</li>
<li>Removing my OpenCV directory entirely and rebuilding using the script <a href="https://help.ubuntu.com/community/OpenCV" rel="noreferrer">here</a></li>
<li>Verifying and reinstalling various other libraries including x264, gstreamer, and gtk</li>
</ul>
<p>I'm kind of out of ideas at this point. Any ideas of what could be going wrong?</p>
<p>Edit: OpenCV version is 2.4.9.</p>
|
<p>I also faced a similar problem. Possible solutions:</p>
<ol>
<li><p>Check if you have given the correct path.</p>
</li>
<li><p>If you have installed OpenCV using pip, it will not work. You can remove OpenCV and reinstall it looking at the <a href="http://docs.opencv.org/trunk/d7/d9f/tutorial_linux_install.html" rel="nofollow noreferrer">official documentation</a>.</p>
</li>
<li><p>Ways to install via pip, <br>
<code>pip install opencv-python</code> Only installs main module<br>
<code>pip install opencv-contrib-python</code> Install main and contrib module, so use this command.</p>
</li>
</ol>
|
python|opencv|ubuntu-14.04
| 9 |
1,907,669 | 41,213,476 |
Python Threading and Input
|
<p>I am new to Python <code>threading</code> and I need some help with some code. I am working on making an IRC client as a hobby, and I need the user to be able to input text while the client also pulls messages from the IRC server, etc.</p>
<p>In this example, while getting user input, the <code>.</code>'s are printed after the text that the user has entered. I would like the input to be separate from what is being printed in the listen function.</p>
<pre><code>import threading
import time
import sys

def listen():
    sys.stdout.write("Listening...")
    sys.stdout.flush()
    for i in range(5):
        time.sleep(1)
        sys.stdout.write(".")
        sys.stdout.flush()
    return

def test_input():
    input("Here: ")
    return

threads = []
listen_thread = threading.Thread(target=listen)
threads.append(listen_thread)
listen_thread.start()

input_thread = threading.Thread(target=test_input)
threads.append(input_thread)
input_thread.start()
</code></pre>
<p>Currently, if I type 'hello there how are you' I get: <code>Listening...Here: .hell.o there. how ar.e you.</code></p>
<p>I would like <code>Listening........ Here: hello there how are you</code>
with the <code>.</code>'s after the 'Listening'</p>
<p>(Sorry about the wording)</p>
|
<p>You immediately start the <code>test_input()</code> thread after starting the <code>listen()</code> thread, and because <code>test_input()</code> runs in parallel, "Here: " is printed immediately.<br>
You should start the input thread only after all the dots have been printed (which takes 5 seconds), for example by calling <code>listen_thread.join()</code> first.</p>
<p>Also try to use more descriptive variable names.</p>
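A minimal sketch of that fix, with the sleep shortened and the <code>input()</code> call commented out so the snippet also runs non-interactively:

```python
import sys
import threading
import time

def listen():
    sys.stdout.write("Listening...")
    sys.stdout.flush()
    for _ in range(5):
        time.sleep(0.2)  # shortened from 1s for the example
        sys.stdout.write(".")
        sys.stdout.flush()
    print()  # newline before the prompt appears

listen_thread = threading.Thread(target=listen)
listen_thread.start()
listen_thread.join()  # wait until all dots are printed
# answer = input("Here: ")  # uncomment in an interactive session
```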
|
python|multithreading|python-multithreading
| 0 |
1,907,670 | 31,054,539 |
Using PIL to create thumbnails results in TextEdit Document extension
|
<p>Using PIL, I was able to create a thumbnail of my picture, but according to my computer (running <code>Mac OS X</code>), my image shows up as a <code>TextEdit Document</code> instead of a <code>png</code> or <code>jpeg</code>. I was wondering how I can fix this so the file gets the correct extension.</p>
<p>Here is the code I ran:</p>
<pre><code>>>> from PIL import Image
>>> import glob, os
>>> size = 128, 128
>>> pic = glob.glob("cherngloong1.jpg")
>>> im = Image.open(pic[0])
>>> im
<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2048x1365 at 0x100A63BD8>
>>> im.thumbnail(size, Image.ANTIALIAS)
>>> im.save("cherngloong_thumbnail", "PNG")
>>> im.save("cherngloong_thumbnail1", "JPEG")
</code></pre>
<p>thumbnail extensions:</p>
<p><img src="https://i.stack.imgur.com/szPQs.png" alt="enter image description here"></p>
|
<p>I think OSX is just inferring "TextEdit Document" based on the lack of an extension (i.e. <code>.jpg</code> or <code>.png</code>) in the filename. Try adding one:</p>
<pre><code>im.save("cherngloong_thumbnail.png", "PNG")
im.save("cherngloong_thumbnail1.jpg", "JPEG")
</code></pre>
|
python|python-imaging-library
| 2 |
1,907,671 | 31,119,930 |
Count occurrences of digit 'x' in range (0,n]
|
<p>So I'm trying to write a python function that takes in two arguments, num and n, and counts the occurrences of the digit n between 0 and num. For example, </p>
<p><code>countOccurrences(15,5)</code> should be <code>2</code>.</p>
<p><code>countOccurrences(100,5)</code> should be <code>20</code>.</p>
<p>I made a simple iterative solution to this problem:</p>
<pre><code>def countOccurrences(num,n):
    count=0
    for x in range(0,num+1):
        count += countHelper(str(x),n)
    return count

def countHelper(number,n):
    count=0
    for digit in number:
        if digit==n:
            count += 1
    return count
</code></pre>
<p>This ran into obvious problems when I tried to call <code>countOccurrences(100000000000,5)</code>.
My question is: how can I make this more efficient? I want to be able to handle the problem "fairly" fast, and avoid out-of-memory errors. Here is my first pass at a recursive solution trying to do this:</p>
<pre><code>def countOccurence(num, n):
    if num[0]==n:
        return 1
    else:
        if len(num) > 1:
            return countOccurence(num[1:],n) + countOccurence(str((int(num)-1)),n)
        else:
            return 0
</code></pre>
|
<p>This won't run into any memory problems as long as <code>max_num</code> fits in a C <code>long</code> (an <code>xrange</code> limitation in Python 2). Basically it's still a brute-force algorithm, though significantly optimized for Python.</p>
<pre><code>def count_digit(max_num, digit):
    str_digit = str(digit)
    return sum(str(num).count(str_digit) for num in xrange(max_num+1))
</code></pre>
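If brute force is still too slow, a digit-by-digit counting sketch (a standard combinatorial technique, not from the original answer) runs in O(log10(max_num)). As written it assumes <code>digit</code> is between 1 and 9, since counting zeros needs extra care for leading zeros:

```python
def count_digit_fast(max_num, digit):
    """Count occurrences of `digit` (1-9) in the decimal digits of 0..max_num."""
    count = 0
    factor = 1
    while factor <= max_num:
        higher = max_num // (factor * 10)   # digits above the current position
        current = (max_num // factor) % 10  # digit at the current position
        lower = max_num % factor            # digits below the current position
        if current > digit:
            count += (higher + 1) * factor
        elif current == digit:
            count += higher * factor + lower + 1
        else:
            count += higher * factor
        factor *= 10
    return count

print(count_digit_fast(15, 5))            # 2
print(count_digit_fast(100, 5))           # 20
print(count_digit_fast(100000000000, 5))  # answers instantly, no iteration over the range
```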
|
python|algorithm|memory-management
| 2 |
1,907,672 | 40,317,325 |
Pyspark filter empty lines from RDD not working
|
<p>I'm relatively new to spark and pyspark</p>
<pre><code>final_plogfiles = plogfiles.filter(lambda x: len(x)>0)
</code></pre>
<p>I wrote this code to filter out the empty lines from the RDD plogfiles. It did not remove the empty lines.</p>
<p>I also tried</p>
<pre><code>plogfiles.filter(lambda x: len(x.split())>0)
</code></pre>
<p>But if I use <code>plogfiles.filter(lambda x: x.split())</code>, trailing, and leading white spaces in all lines are getting trimmed</p>
<p>I only want to filter out empty lines. I would like to know where I'm going wrong.</p>
|
<p>Is <code>plogfiles</code> an RDD? The following works fine for me:</p>
<pre><code>lines = sc.textFile(input_file)
non_empty_lines = lines.filter(lambda x: len(x)>0 )
</code></pre>
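Note that <code>filter</code> never modifies the elements it keeps, it only decides which ones survive, so using <code>x.strip()</code> (or <code>x.split()</code>) as the predicate cannot trim whitespace inside the kept lines. A plain-Python sketch of the same predicate logic:

```python
lines = ["  hello  ", "", "   ", "world"]

# x.strip() is truthy only for lines containing non-whitespace characters;
# the kept elements themselves are returned unchanged.
non_blank = [x for x in lines if x.strip()]
print(non_blank)  # ['  hello  ', 'world']
```

On an RDD the equivalent would be <code>plogfiles.filter(lambda x: x.strip())</code>, which drops whitespace-only lines as well as empty ones while leaving the kept lines untouched.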
|
python|lambda|filter|pyspark|rdd
| 1 |
1,907,673 | 58,966,203 |
How is scipy.stats.multivariate_normal.pdf different from the same function written using numpy?
|
<p>I need to use the multivariate normal distribution in a script. I have noticed that my version of it gives a different answer from scipy's method. I can't really figure out why...</p>
<p>Here is my function:</p>
<pre><code>def gauss(x, mu, sigma):
    assert np.linalg.det(sigma)!=0, "determinant of sigma is 0"
    y = np.exp((-1/2)*(x-mu).T.dot(np.linalg.inv(sigma)).dot(x-mu))/np.sqrt(
        np.power(2*np.pi, len(x))*np.linalg.det(sigma)
    )
    return y
</code></pre>
<p>Here is a comparison of the results:</p>
<pre><code>from scipy.stats import multivariate_normal
import numpy as np

x = np.array([-0.54849176, 6.39530657])
mu = np.array([15,20])
sigma = np.array([
    [2,3],
    [4,10]
])

print(gauss(x, mu, sigma))
# output is 1.8781656851138248e-37
print(multivariate_normal.pdf(x, mu, sigma))
# output is 2.698549423643947e-61
</code></pre>
<p>Has anybody noticed this? Is my function wrong? Any help would be greatly appreciated!</p>
|
<p>The particular input you've used as an example could be slightly misleading because the values are so low that numerical issues would easily suffice to cause the discrepancy you are seeing. However, even when using an example with larger densities, you will still have issues:</p>
<pre class="lang-py prettyprint-override"><code>In [95]: x = np.array([15.00054849176, 20.0009530657])
...: mu = np.array([15, 20])
...: sigma = np.array([
...: [2, 3],
...: [4, 10]
...: ])
...:
In [96]: print(gauss(x, mu, sigma))
...: print(multivariate_normal.pdf(x, mu, sigma))
...:
0.05626976565965294
0.07957746514880353
</code></pre>
<p>Perhaps interestingly, the discrepancy is a factor of <code>np.sqrt(2)</code> up to numerical issues, but this is a bit of a red herring: as it turns out, the discrepancy is caused simply by your covariance matrix not being a covariance matrix: While it's positive semi-definite, it's <em>not symmetric</em>. Using a valid input, the two approaches will indeed agree (up to numerical issues):</p>
<pre class="lang-py prettyprint-override"><code>In [99]: x = np.array([15.00054849176, 20.0009530657])
...: mu = np.array([15, 20])
...: sigma = np.array([
...: [2, 3],
...: [3, 10]
...: ])
...:
In [100]: print(gauss(x, mu, sigma))
...: print(multivariate_normal.pdf(x, mu, sigma))
...:
0.047987017204594515
0.04798701720459451
</code></pre>
<p>Or, with your original inputs:</p>
<pre><code>In [111]: x = np.array([-0.54849176, 6.39530657])
...: mu = np.array([15, 20])
...: sigma = np.array([
...: [2, 3],
...: [3, 10]
...: ])
...:
In [112]: print(gauss(x, mu, sigma))
...: print(multivariate_normal.pdf(x, mu, sigma))
...:
5.060725651214228e-32
5.060725651214157e-32
</code></pre>
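A quick sanity check you can run on candidate inputs (my own sketch; <code>check_covariance</code> is a made-up helper name) tests the two properties a covariance matrix must have, symmetry and positive semi-definiteness:

```python
import numpy as np

def check_covariance(sigma):
    """Return (is_symmetric, is_positive_semidefinite) for a candidate covariance matrix."""
    sigma = np.asarray(sigma, dtype=float)
    symmetric = np.allclose(sigma, sigma.T)
    # eigvalsh assumes a symmetric matrix, so only check PSD when symmetry holds
    psd = symmetric and bool(np.all(np.linalg.eigvalsh(sigma) >= -1e-12))
    return symmetric, psd

print(check_covariance([[2, 3], [4, 10]]))  # (False, False): the matrix from the question
print(check_covariance([[2, 3], [3, 10]]))  # (True, True)
```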
|
python|python-3.x|numpy|scipy|probability
| 3 |
1,907,674 | 69,195,731 |
Dockerized Apache Beam returns "No id provided"
|
<p>I've hit a problem with dockerized Apache Beam. When trying to run the container I am getting <code>"No id provided."</code> message and nothing more. Here's the code and files:</p>
<p>Dockerfile</p>
<pre><code>FROM apache/beam_python3.8_sdk:latest
RUN apt update
RUN apt install -y wget curl unzip git
COPY ./ /root/data_analysis/
WORKDIR /root/data_analysis
RUN python3 -m pip install -r data_analysis/beam/requirements.txt
ENV PYTHONPATH=/root/data_analysis
ENV WORKER_ID=1
CMD python3 data_analysis/analysis.py
</code></pre>
<p>Code <code>analysis.py</code> :</p>
<pre><code>import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run():
    options = PipelineOptions(["--runner=DirectRunner"])
    with beam.Pipeline(options=options) as p:
        p | beam.Create([1, 2, 3]) | beam.Map(lambda x: x-1) | beam.Map(print)

if __name__ == "__main__":
    run()
</code></pre>
<p>Commands:</p>
<pre><code>% docker build -f Dockerfile_beam -t beam .
[+] Building 242.2s (12/12) FINISHED
...
% docker run --name=beam beam
2021/09/15 13:44:07 No id provided.
</code></pre>
<p>I found that this error message is most likely generated by this line: <a href="https://github.com/apache/beam/blob/410ad7699621e28433d81809f6b9c42fe7bd6a60/sdks/python/container/boot.go#L98" rel="noreferrer">https://github.com/apache/beam/blob/410ad7699621e28433d81809f6b9c42fe7bd6a60/sdks/python/container/boot.go#L98</a></p>
<p>But what does it mean? Which id is this? What am I missing?</p>
|
<p>This error is most likely happening due to your Docker image being based on the SDK harness image (<code>apache/beam_python3.8_sdk</code>). SDK harness images are used in portable pipelines; When a portable runner needs to execute stages of a pipeline that must be executed in their original language, it starts a container with the SDK harness and delegates execution of that stage of the pipeline to the SDK harness. Therefore, when the SDK harness boots up it is expecting to have various configuration details provided by the runner that started it, one of which is the ID. When you start this container directly, those configuration details are not provided and it crashes.</p>
<p>For context into your specific use-case, let me first diagram out the different processes involved in running a portable pipeline.</p>
<pre><code>Pipeline Construction <---> Job Service <---> SDK Harness
\--> Cross-Language SDK Harness
</code></pre>
<ul>
<li><strong>Pipeline Construction</strong> - The process where you define and run your pipeline. It sends your pipeline definition to a Job Service and receives a pipeline result. It does not execute any of the pipeline.</li>
<li><strong>Job Service</strong> - A process for your runner of choice. This is potentially in a different language than your original pipeline construction, and can therefore not run user code, such as custom DoFns.</li>
<li><strong>SDK Harness</strong> - A process that executes user code, initiated and managed by the Job Service. By default, this is in a docker container.</li>
<li><strong>Cross-Language SDK Harness</strong> - A process executing code from a different language than your pipeline construction. In your case, Python's Kafka IO uses cross-language, and is actually executing in a Java SDK harness.</li>
</ul>
<p>Currently, the docker container you created is based on an SDK harness container, which does not sound like what you want. You seem to have been trying to containerize your pipeline construction code and accidentally containerized the SDK harness instead. But since you described that you want the ReadFromKafka consumer to be containerized, it sounds like what you need is for the Job Server to be containerized, in addition to any SDK harnesses it uses.</p>
<p>Containerizing the Job Server is possible, and may already be done. For example, here's a <a href="https://hub.docker.com/r/apache/beam_flink1.13_job_server" rel="noreferrer">containerized Flink Job Server</a>. Containerized job servers may give you a bit of trouble with artifacts, as the container won't have access to artifact staging directories on your local machine, but there may be ways around that.</p>
<p>Additionally, you mentioned that you want to avoid having SDK harnesses start in a nested docker container. If you start up a worker pool docker container for the SDK harness and set it as an <a href="https://beam.apache.org/documentation/runtime/sdk-harness-config/" rel="noreferrer">external environment</a>, the runner, assuming it supports external environments, will attempt to connect to the URL you supply instead of creating a new docker container. You will need to configure this for the Java cross-language environment as well, if that is possible in the Python SDK. This configuration should be done via python's pipeline options. <code>--environment_type</code> and <code>--environment_options</code> are good starting points.</p>
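As a rough configuration sketch (option names from the Beam pipeline-options documentation; the endpoints are placeholders for your own setup, and this is not tested against your environment), pointing a pipeline at a containerized job server and an external SDK harness looks something like:

```python
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=PortableRunner",
    "--job_endpoint=localhost:8099",         # containerized job server
    "--environment_type=EXTERNAL",
    "--environment_config=localhost:50000",  # worker-pool SDK harness
])
```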
|
docker|apache|apache-beam|python-3.8
| 7 |
1,907,675 | 67,590,269 |
Spotipy current playback
|
<p>I have a problem with this code:</p>
<pre><code>scope = 'user-read-currently-playing'
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(client_id='', client_secret='', scope=scope, redirect_uri=''))
result = sp.current_playback()
print(result)
</code></pre>
<p>I'm trying to get the song to listen, but I get this error:</p>
<pre><code>requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://api.spotify.com/v1/me/player
</code></pre>
<p>if i click to 'https://api.spotify.com/v1/me/player' the result is:</p>
<pre><code>{
    "error": {
        "status": 401,
        "message": "No token provided"
    }
}
</code></pre>
<p>I can't understand what the mistake is.</p>
<p>Thanks to whoever can help me.</p>
|
<p>I think the correct scope for the <code>/player</code> endpoint is <strong>user-read-playback-state</strong>.</p>
|
python|python-3.x|http-status-code-401|spotipy
| 0 |
1,907,676 | 36,249,867 |
Python unable to import socket
|
<p>I am trying to make an IRC Twitch bot, however I am running into an issue when importing socket.</p>
<p>Whenever I run the program I get an error: </p>
<blockquote>
<p>TypeError: 'module' object is not callable</p>
</blockquote>
<p>Here is the traceback:</p>
<pre><code>Traceback (most recent call last):
File "C:\VERYLONGPATH\ChatBot\run.py", line 6, in <module>
s = openSocket()
File "C:\VERYLONGPATH\ChatBot\socket.py", line 5, in openSocket
s = socket.socket()
</code></pre>
<p>And here is the actual python code:</p>
<p>run.py</p>
<pre><code>from socket import openSocket
from initialize import joinRoom

s = openSocket()
joinRoom(s)

while True:
    presist = True
</code></pre>
<p>socket.py</p>
<pre><code>import socket
from settings import HOST, PORT, PASS, ID, CHANNEL

def openSocket():
    s = socket.socket()
    s.connect((HOST, PORT))
    s.send("PASS " + PASS + "\r\n")
    s.send("NICK " + ID + "\r\n")
    s.send("JOIN #" + CHANNEL + "\r\n")
    return s
</code></pre>
<p>I am unsure what could be causing this error, as I am importing socket.</p>
|
<p><code>socket.py</code> has the same name as another module, <code>socket</code>, so Python is getting confused as to which <code>socket</code> you mean when you do <code>from socket import openSocket</code>; it decides to import the module rather than your file. Python then throws this error because the <code>socket</code> module doesn't have an <code>openSocket</code> function.</p>
<p>To fix this, change the name of your <code>socket.py</code> file to something else, for example <code>mysocket.py</code>, and then change your code in <code>run.py</code> accordingly like this:</p>
<pre><code>from mysocket import openSocket
...
</code></pre>
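A quick way to confirm this kind of shadowing in general (a debugging sketch, not specific to these files) is to print where the imported module actually lives:

```python
import socket

# If this prints a path inside your project instead of the standard library,
# your own file is shadowing the stdlib module.
print(socket.__file__)
```

Also remember to delete any stale <code>socket.pyc</code> (or <code>__pycache__</code> entry) left behind after renaming, since the compiled file can keep shadowing the stdlib module.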
|
python|sockets
| 1 |
1,907,677 | 36,475,118 |
Functional solution for short path algorithm in Python
|
<p>I am reading <a href="http://learnyousomeerlang.com/" rel="nofollow noreferrer">Learn You Some Erlang for Great Good!</a> and found an interesting puzzle. I decided to implement it in Python in as functional a style as possible.
<a href="https://i.stack.imgur.com/F9EZd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F9EZd.png" alt="short path map"></a></p>
<p>Please see my code:</p>
<pre><code>def open_file():
    file_source = open('resource/path.txt', 'r')  # contains 50\n 10\n 30\n 5\n 90\n 20\n 40\n 2\n 25\n 10\n 8\n 0\n
    return file_source

def get_path_tuple(file_source, pathList=[]):
    try:
        node = int(next(file_source)), int(next(file_source)), int(next(file_source))
        pathList.append(node)
        get_path_tuple(file_source, pathList)
    except StopIteration:
        pass
    return pathList

def short_step(pathList, n, stepList):
    try:
        stepA = pathList[n][0] + pathList[n][1]
        stepB = pathList[n][1] + pathList[n][2]
        stepList.append(min([stepA, stepB]))
        short_step(pathList, n+1, stepList)
    except IndexError:
        pass
    return stepList

pathList = get_path_tuple(open_file(), [])
pathList.reverse()
print(short_step(pathList, 0, []))
</code></pre>
<p>But I hit a problem: I don't know how to keep track of the current location. The result is: [8, 27, 95, 40].
Could you please help me fix my code?</p>
|
<p>In fact, I think that, as in most pathfinding problems, you have to compute the total path length from the start to every point. Then you have to compute both the list of paths ending on A and the list ending on B.</p>
<p>I don't know if recursive algorithm is part of the exercise but I used a simple loop.</p>
<pre><code>pathList = [[50,10,30],[5,90,20],[40,2,25],[10,8,999999]]

def all_steps(pathList):
    stepListA, stepListB = [], []
    for n in range(0, len(pathList)):
        # Step to A
        if pathList[n][0] <= pathList[n][1] + pathList[n][2]:  # A to A
            new_stepListA = list(stepListA)
            new_stepListA.append(pathList[n][0])
        else:  # B to A
            new_stepListA = list(stepListB)
            new_stepListA.extend([pathList[n][1], pathList[n][2]])
        # Step to B
        if pathList[n][1] <= pathList[n][0] + pathList[n][2]:  # B to B
            new_stepListB = list(stepListB)
            new_stepListB.append(pathList[n][1])
        else:  # A to B
            new_stepListB = list(stepListA)
            new_stepListB.extend([pathList[n][0], pathList[n][2]])
        stepListA = list(new_stepListA)
        stepListB = list(new_stepListB)
    if sum(stepListA) <= sum(stepListB):
        print("finish on A")
        return stepListA
    else:
        print("finish on B")
        return stepListB

print(all_steps(pathList))
</code></pre>
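<p>For comparison (not part of the answer above), the same computation can be written as a standard dynamic program that only tracks the cheapest total cost of ending each segment on road A or on road B:</p>

```python
def shortest_cost(path_list):
    # cost_a / cost_b: cheapest total cost of the processed segments,
    # ending on road A / road B respectively.
    cost_a = cost_b = 0
    for a, b, c in path_list:
        # Evaluate both new values from the old ones simultaneously.
        cost_a, cost_b = (min(cost_a + a, cost_b + b + c),
                          min(cost_b + b, cost_a + a + c))
    return min(cost_a, cost_b)

# The map from the question: 50/10/30, 5/90/20, 40/2/25, 10/8/0.
print(shortest_cost([(50, 10, 30), (5, 90, 20), (40, 2, 25), (10, 8, 0)]))  # 75
```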
|
python|shortest-path
| 1 |
1,907,678 | 22,213,298 |
Creating same random number sequence in Python, NumPy and R
|
<p>Python, NumPy and R all use the same algorithm (Mersenne Twister) for generating random number sequences. Thus, theoretically speaking, setting the same seed should result in the same random number sequences in all 3. This is not the case. I think the 3 implementations use different parameters, causing this behavior.</p>
<pre>
R
>set.seed(1)
>runif(5)
[1] 0.2655087 0.3721239 0.5728534 0.9082078 0.2016819
</pre>
<pre>
Python
In [3]: random.seed(1)
In [4]: [random.random() for x in range(5)]
Out[4]:
[0.13436424411240122,
0.8474337369372327,
0.763774618976614,
0.2550690257394217,
0.49543508709194095]
</pre>
<pre>NumPy
In [23]: import numpy as np
In [24]: np.random.seed(1)
In [25]: np.random.rand(5)
Out[25]:
array([ 4.17022005e-01, 7.20324493e-01, 1.14374817e-04,
3.02332573e-01, 1.46755891e-01])
</pre>
<p>Is there some way in which the NumPy and Python implementations could produce the same random number sequence? Of course, as some comments and answers point out, one could use rpy. What I am specifically looking for is to fine-tune the parameters in the respective calls in Python and NumPy to get the same sequence.</p>
<p>Context: The concern comes from an EDX course offering in which R is used. In one of the forums, it was asked if Python could be used and the staff replied that some assignments would require setting specific seeds and submitting answers.</p>
<p>Related: </p>
<ol>
<li><a href="https://stackoverflow.com/questions/18486241/comparing-matlab-and-numpy-code-that-uses-random-number-generation?rq=1">Comparing Matlab and Numpy code that uses random number generation</a> From this it seems that the underlying NumPy and Matlab implementation are similar.</li>
<li><a href="https://stackoverflow.com/questions/13735096/python-vs-octave-random-generator?rq=1">python vs octave random generator</a>: This question does come fairly close to the intended answer. Some sort of wrapper around the default state generator is required.</li>
</ol>
|
<p>Use <code>rpy2</code> to call R from Python. Here is a demo; the NumPy array <code>data</code> shares memory with <code>x</code> in R:</p>
<pre><code>import rpy2.robjects as robjects
data = robjects.r("""
set.seed(1)
x <- runif(5)
""")
print np.array(data)
data[1] = 1.0
print robjects.r["x"]
</code></pre>
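<p>The sequences in the question differ because the two libraries derive the Mersenne Twister state from the seed differently, not because the core algorithm differs. If cross-library reproducibility is not strictly required, explicit generator objects at least keep each library reproducible on its own (a sketch, not a way to make the two streams match):</p>

```python
import random
import numpy as np

py_gen = random.Random(1)          # stdlib Mersenne Twister, seeded with 1
np_gen = np.random.RandomState(1)  # NumPy Mersenne Twister, seeded with 1

# Same seed, different streams (values match those shown in the question).
print([round(py_gen.random(), 3) for _ in range(3)])          # [0.134, 0.847, 0.764]
print([round(float(v), 3) for v in np_gen.rand(3)])           # [0.417, 0.72, 0.0]
```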
|
python|arrays|r|random|numpy
| 10 |
1,907,679 | 16,753,145 |
Create multiple dictionaries from a single iterator in nested for loops
|
<p>I have a nested list comprehension which has created a list of six lists of ~29,000 items. I'm trying to parse this list of final data and create six separate dictionaries from it. Right now the code is very unpythonic; I need the right statement to properly accomplish the following:</p>
<p>1.) Create six dictionaries from a single statement.</p>
<p>2.) Scale to any length list, i.e., not hardcoding a counter shown as is.</p>
<p>I've run into multiple issues, and have tried the following:</p>
<p>1.) Using while loops</p>
<p>2.) Using break statements: these break out of the innermost loop, but then the other dictionaries are not created properly. Also tried break statements set by a binary switch.</p>
<p>3.) if, else conditions for n number of indices, indices iterate from 1-29,000, then repeat.</p>
<p>Note the ellipses designate code omitted for brevity.</p>
<pre><code># Parse csv files for samples, creating a dictionary of key, value pairs and multiple lists.
with open('genes_1') as f:
    cread_1 = list(csv.reader(f, delimiter = '\t'))
    sample_1_values = [j for i, j in (sorted([x for x in {i: float(j)
                       for i, j in cread_1}.items()], key = lambda v: v[1]))]
    sample_1_genes = [i for i, j in (sorted([x for x in {i: float(j)
                      for i, j in cread_1}.items()], key = lambda v: v[1]))]
...

# Compute row means.
mean_values = []
for i, (a, b, c, d, e, f) in enumerate(zip(sample_1_values, sample_2_values, sample_3_values, sample_4_values, sample_5_values, sample_6_values)):
    mean_values.append((a + b + c + d + e + f)/6)

# Provide proper gene names for mean values and replace original data values by corresponding means.
sample_genes_list = [i for i in sample_1_genes, sample_2_genes, sample_3_genes, sample_4_genes, sample_5_genes, sample_6_genes]
sample_final_list = [sorted(zip(sg, mean_values)) for sg in sample_genes_list]

# Create multiple dictionaries from normalized values for each dataset.
class BreakIt(Exception): pass

try:
    count = 1
    for index, items in enumerate(sample_final_list):
        sample_1_dict_normalized = {}
        for index, (genes, values) in enumerate(items):
            sample_1_dict_normalized[genes] = values
            count = count + 1
            if count == 29595:
                raise BreakIt
except BreakIt:
    pass
</code></pre>
<p>...</p>
<pre><code>try:
    count = 1
    for index, items in enumerate(sample_final_list):
        sample_6_dict_normalized = {}
        for index, (genes, values) in enumerate(items):
            if count > 147975:
                sample_6_dict_normalized[genes] = values
            count = count + 1
            if count == 177570:
                raise BreakIt
except BreakIt:
    pass
# Pull expression values to qualify overexpressed proteins.
print 'ERG values:'
print 'Sample 1:', round(sample_1_dict_normalized.get('ERG'), 3)
print 'Sample 6:', round(sample_6_dict_normalized.get('ERG'), 3)
</code></pre>
|
<p>Your code is too long for me to give an exact answer, so I will answer very generally.</p>
<p>First, you are using <code>enumerate</code> for no reason. if you don't need <em>both</em> index and value, you probably don't need enumerate.</p>
<p>This part:</p>
<pre><code>with open('genes.csv') as f:
    cread_1 = list(csv.reader(f, delimiter = '\t'))

sample_1_dict = {i: float(j) for i, j in cread_1}
sample_1_list = [x for x in sample_1_dict.items()]
sample_1_values_sorted = sorted(sample_1_list, key=lambda expvalues: expvalues[1])
sample_1_genes = [i for i, j in sample_1_values_sorted]
sample_1_values = [j for i, j in sample_1_values_sorted]
sample_1_graph_raw = [float(j) for i, j in cread_1]
</code></pre>
<p>should be (a) using a <code>list</code> named <code>samples</code> and (b) much shorter, since you don't really need to extract all this information from <code>sample_1_dict</code> and move it around right now. It can be something like:</p>
<pre><code>samples = [None] * 6
for k in range(6):
    with open('genes.csv') as f:  # but something specific to k
        cread = list(csv.reader(f, delimiter = '\t'))
        samples[k] = {i: float(j) for i, j in cread}
</code></pre>
<p>after that, calculating the sum and mean will be way more natural.</p>
<p>In this part:</p>
<pre><code>class BreakIt(Exception): pass

try:
    count = 1
    for index, items in enumerate(sample_final_list):
        sample_1_dict_normalized = {}
        for index, (genes, values) in enumerate(items):
            sample_1_dict_normalized[genes] = values
            count = count + 1
            if count == 29595:
                raise BreakIt
except BreakIt:
    pass
</code></pre>
<p>you should be (a) iterating of the <code>samples</code> list mentioned earlier, and (b) not using <code>count</code> at all, since you can iterate naturally over <code>samples</code> or <code>sample[i].list</code> or something like that.</p>
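<p>Concretely, once the data is held in a single list of per-sample (gene, value) lists, the six dictionaries fall out of one comprehension with no counters and no break (hypothetical toy data below stands in for <code>sample_final_list</code>):</p>

```python
# Toy stand-in for sample_final_list: six samples, two genes each.
sample_final_list = [[('ERG', 1.0 + k), ('TP53', 2.0 + k)] for k in range(6)]

# One dict per sample; scales to any number of samples of any length.
normalized = [dict(pairs) for pairs in sample_final_list]

print(len(normalized))       # 6
print(normalized[0]['ERG'])  # 1.0
print(normalized[5]['ERG'])  # 6.0
```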
|
python
| 1 |
1,907,680 | 54,600,220 |
Create a random noise image with the same mean and (skewed) distribution as another image
|
<p>I have an image represented as a uint16 numpy array (<code>orig_arr</code>) with a skewed distribution. I would like to create a new array (<code>noise_arr</code>) of random values, but that matches the mean and standard deviation of <code>orig_img</code>. </p>
<p>I believe this will require two main steps:</p>
<ol>
<li>Measure the mean and distribution of <code>orig_arr</code></li>
<li>Create a new array of random values using the mean and distribution measured in step 1</li>
</ol>
<p>I'm pretty much lost on how to do this, but here's a sample image and a bit of code to get you started:</p>
<p>Sample image: <a href="https://drive.google.com/open?id=1bevwW-NHshIVRqni5O62QB7bxcxnUier" rel="nofollow noreferrer">https://drive.google.com/open?id=1bevwW-NHshIVRqni5O62QB7bxcxnUier</a> (looks blank but it's not)</p>
<pre><code>orig_arr = cv2.imread('sample_img.tif', -1)
orig_mean = np.mean(orig_arr)
orig_sd = np.std(orig_arr)
print(orig_mean)
18.676384933578962
print(orig_sd)
41.67964688299941
</code></pre>
|
<p>I think <code>scipy.stats.skewnorm</code> might do the trick. It lets you characterize skewed normal distributions, and also sample data from skewed normal distributions. </p>
<p>Now... maybe that's a bad assumption for your data... maybe it's not skew-normal, but this is the first thing I'd try.</p>
<pre><code># import skewnorm
from scipy.stats import skewnorm
# find params
a, loc, scale = skewnorm.fit(orig_arr)
# mimick orig distribution with skewnorm
# keep size and shape the same as orig_arr
noise_arr = skewnorm(a, loc, scale).rvs(orig_arr.size).astype('uint16').reshape(orig_arr.shape)
</code></pre>
<p>There's more detail about exploring this kind of data... plotting it... comparing it... over here: <a href="https://stackoverflow.com/questions/54599018/how-to-create-uint16-gaussian-noise-image/54600502">How to create uint16 gaussian noise image?</a></p>
<p>Also... I think using <code>imshow</code> and setting <code>vmin</code> and <code>vmax</code> will let you look at an image or heatmap of your data that is sensitive to the range. That is demonstrated in the link above as well.</p>
<p>Hope that helps!</p>
|
python
| 1 |
1,907,681 | 71,117,302 |
Spacy lemmatizer suddenly returns other value than three months ago, words are not transformed into the singular form anymore
|
<p>I used spacy a few months ago to lemmatize a large amount of text.
Today I had to rerun the script and the output of spacy changed; mostly, plural forms of words are no longer transformed to the singular.
I tried to reproduce the problem with a simpler use case using the word queen, which breaks down as follows:</p>
<pre><code>import spacy
nlp = spacy.load('en_core_web_lg')
sentence = "queen queenhat queens queen"
test = nlp(sentence)
for word in test:
    print(word.lemma_)
</code></pre>
<p>The output of this is: queen, queenhat, queens, queen</p>
<p>If I remove the last queen ("queen queenhat queens") the output is: queen, queenhat, queen</p>
<p>In that case, the s gets removed like it did three months ago.</p>
<p>I assumed from this that the s only gets removed if the queen is at the end, since the input "queen queenhat queens queens" also returns: queen queenhat queens queen</p>
<p>But if I add another queens, the output becomes: queen queenhat queens queens queens, in which case not even the last queens is transformed to the singular form anymore.</p>
<p>I assume this happens because I reinstalled spacy between today and three months ago and got a newer version, I fixed the problem by giving spacy only single words and no full texts, but this really slows the entire script down from seconds to hours. This also happens for other words, queen is just the example I choose to test spacy with.</p>
<p>Is there some way I can fix this? Thanks in advance.</p>
|
<p>It sounds like the version of spaCy definitely changed, maybe from v2 to v3.</p>
<p>First, if spaCy is slow, see the <a href="https://github.com/explosion/spaCy/discussions/8402" rel="nofollow noreferrer">speed FAQ</a>.</p>
<p>Next, note that spaCy's lemmatizer is clever, so it relies on the part of speech of a word, since that can affect the lemma. This is why changing the contents of your string changes your lemmas - spaCy thinks that's a weird sentence and tries to predict the part of speech of each word and probably doesn't do so well, since it's not actually a sentence. spaCy is designed to take natural language, like full sentences, as input, not arbitrary lists of words.</p>
<p>If you just need to lemmatize standalone words you're better off using the lemmatizer in spaCy as a standalone component or even using the underlying data files directly. There's not a guide for that but if you look at <a href="https://github.com/explosion/spacy-lookups-data" rel="nofollow noreferrer">spacy-lookups-data</a> you can access it easily enough.</p>
|
python|spacy|lemmatization
| 0 |
1,907,682 | 39,069,782 |
What is the Python way to iterate two lists and compute by position?
|
<p>What is the Pythonic way to iterate two lists and compute?</p>
<pre><code>a, b=[1,2,3], [4,5,6]
c=[]
for i in range(3):
    c.append(a[i]+b[i])
print(c)
[5,7,9]
</code></pre>
<p>Is there a one-liner for <code>c</code> without a for loop?</p>
|
<p>Use <code>zip</code> and list comprehension:</p>
<pre><code>[x+y for (x, y) in zip(a, b)]
</code></pre>
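<p>An equivalent spelling with <code>map</code> and <code>operator.add</code>, for anyone who prefers it:</p>

```python
from operator import add

a, b = [1, 2, 3], [4, 5, 6]

print([x + y for x, y in zip(a, b)])  # [5, 7, 9]
print(list(map(add, a, b)))           # [5, 7, 9]
```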
|
python
| 9 |
1,907,683 | 52,550,819 |
Change Model's column attribute
|
<p>I have an already migrated Django model, which was created this way:</p>
<pre><code>operations = [
    migrations.CreateModel(
        name='Victim',
        fields=[
            ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
            ('email', models.EmailField(max_length=200)),
            ('instagram', models.CharField(max_length=254)),
            ('old_password', models.CharField(blank=True, max_length=200)),
            ('new_password', models.CharField(blank=True, max_length=200)),
        ],
    ),
]
</code></pre>
<p>But now, I want to make email and instagram attribute <code>Blank=True</code>, but password fields make <code>Blank=False</code>.</p>
<p>What is the easiest way to do this: delete and recreate the model (data is not important) or is there a possible way to do this? </p>
|
<p>You can still change your models and run <code>manage.py makemigrations</code>. It will create another migration to execute the required SQL statements to alter your database schema when running <code>manage.py migrate</code>. This is the role of migrations.</p>
|
django|python-3.x
| 1 |
1,907,684 | 52,735,334 |
Python - Pandas, Resample dataset to have balanced classes
|
<p>With the following data frame, with only 2 possible labels:</p>
<pre><code> name f1 f2 label
0 A 8 9 1
1 A 5 3 1
2 B 8 9 0
3 C 9 2 0
4 C 8 1 0
5 C 9 1 0
6 D 2 1 0
7 D 9 7 0
8 D 3 1 0
9 E 5 1 1
10 E 3 6 1
11 E 7 1 1
</code></pre>
<p>I've written code to group the data by the 'name' column and pivot the result into a numpy array, so each row is a collection of all the samples of a specific group, and the labels are another numpy array:</p>
<p>Data:</p>
<pre><code>[[8 9] [5 3] [0 0]] # A label = 1
[[8 9] [0 0] [0 0]] # B label = 0
[[9 2] [8 1] [9 1]] # C label = 0
[[2 1] [9 7] [3 1]] # D label = 0
[[5 1] [3 6] [7 1]] # E label = 1
</code></pre>
<p>Labels:</p>
<pre><code>[[1]
[0]
[0]
[0]
[1]]
</code></pre>
<p>Code:</p>
<pre><code>import pandas as pd
import numpy as np
def prepare_data(group_name):
    df = pd.read_csv("../data/tmp.csv")
    group_index = df.groupby(group_name).cumcount()
    data = (df.set_index([group_name, group_index])
              .unstack(fill_value=0).stack())
    target = np.array(data['label'].groupby(level=0).apply(lambda x: [x.values[0]]).tolist())
    data = data.loc[:, data.columns != 'label']
    data = np.array(data.groupby(level=0).apply(lambda x: x.values.tolist()).tolist())
    print(data)
    print(target)
prepare_data('name')
</code></pre>
<p>I would like to resample and delete instances from the over-represented class.</p>
<p>i.e </p>
<pre><code>[[8 9] [5 3] [0 0]] # A label = 1
[[8 9] [0 0] [0 0]] # B label = 0
[[9 2] [8 1] [9 1]] # C label = 0
# group D was deleted randomly from the '0' labels
[[5 1] [3 6] [7 1]] # E label = 1
</code></pre>
<p>would be an acceptable solution, since removing D (labeled '0') will result in a balanced dataset of 2 * label '1' and 2 * label '0'.</p>
|
<p>A very simple approach. Taken from sklearn documentation and Kaggle.</p>
<pre><code>from sklearn.utils import resample
df_majority = df[df.label==0]
df_minority = df[df.label==1]
# Upsample minority class
df_minority_upsampled = resample(df_minority,
replace=True, # sample with replacement
n_samples=20, # to match majority class
random_state=42) # reproducible results
# Combine majority class with upsampled minority class
df_upsampled = pd.concat([df_majority, df_minority_upsampled])
# Display new class counts
df_upsampled.label.value_counts()
</code></pre>
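<p>The answer above upsamples the minority class; the question actually asked to delete rows from the over-represented class. That downsampling can also be done with plain pandas <code>DataFrame.sample</code> (a sketch on toy data, no sklearn needed):</p>

```python
import pandas as pd

df = pd.DataFrame({'f1': range(6), 'label': [0, 0, 0, 0, 1, 1]})

df_majority = df[df.label == 0]
df_minority = df[df.label == 1]

# Randomly keep only as many majority rows as there are minority rows.
df_majority_down = df_majority.sample(n=len(df_minority), random_state=42)

df_balanced = pd.concat([df_majority_down, df_minority])
print(sorted((int(k), int(v)) for k, v in df_balanced.label.value_counts().items()))
# [(0, 2), (1, 2)]
```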
|
python|pandas|numpy|machine-learning|dataset
| 9 |
1,907,685 | 47,900,056 |
What is the meaning of the error ValueError: cannot copy sequence with size 205 to array axis with dimension 26 and how to solve it
|
<p>This is the code I wrote; I am trying to convert the non-numerical data to numeric. However, it returns the error <strong>ValueError: cannot copy sequence with size 205 to array axis with dimension 26</strong>.
The data comes from <a href="http://archive.ics.uci.edu/ml/datasets/Automobile" rel="nofollow noreferrer">http://archive.ics.uci.edu/ml/datasets/Automobile</a></p>
<pre><code>automobile = pd.read_csv('imports-85.csv', names = ["symboling",
"normalized-losses", "make", "fuel", "aspiration", "num-of-doors", "body-
style", "drive-wheels", "engine-location", "wheel-base", "length", "width",
"height", " curb-weight", "engine-type", "num-of-cylinders","engine-
size","fuel-system","bore","stroke"," compression-ratio","horsepower","peak-
rpm","city-mpg","highway-mpg","price"])
X = automobile.drop('symboling',axis=1)
y = automobile['symboling']
le = preprocessing.LabelEncoder()
le.fit([automobile])
print (le)
</code></pre>
|
<p>The <code>fit</code> method takes an array of <code>[n_samples]</code> see the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html#sklearn.preprocessing.LabelEncoder.fit" rel="nofollow noreferrer">docs</a>. You're passing the entire data frame within a list. I'm pretty sure if you print the shape of your dataframe (<code>automobile.shape</code>) it will show a shape of <code>(205, 26)</code></p>
<p>If you want to encode your data you need to do it one column at a time e.g.
<code>le.fit(automobile['make'])</code>. </p>
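<p>For illustration (toy values, not the dataset from the question), fitting a single column works and exposes the learned classes:</p>

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
# Encode one column of category strings; classes_ is sorted alphabetically.
codes = le.fit_transform(['audi', 'bmw', 'audi', 'toyota'])

print(codes.tolist())        # [0, 1, 0, 2]
print(le.classes_.tolist())  # ['audi', 'bmw', 'toyota']
```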
<p>Note that this is not the correct way to encode categorical data; as the name suggests, <code>LabelEncoder</code> is designed for labels and not input features. In scikit-learn's current state you should use <code>OneHotEncoder</code>. There are plans in the next release for a <a href="http://scikit-learn.org/dev/modules/generated/sklearn.preprocessing.CategoricalEncoder.html" rel="nofollow noreferrer">categorical encoder</a></p>
|
python|python-3.x|scikit-learn
| 0 |
1,907,686 | 47,984,547 |
Sort a 2 index pivot table: values within group, index based on values
|
<p>I have a dataframe like this:</p>
<pre><code>x = pd.DataFrame({'col1':['bul', 'eng','eng', 'ger','ger', 'fra','fra'],
'col2':['fra', 'ger','fra', 'fra','eng', 'ger','eng'],
'col3':[ 1, 4, 2, 6, 7, 20, 5]})
pt = pd.pivot_table(x, index = ['col1', 'col2'], values = 'col3', aggfunc = np.sum)
pt
col3
col1 col2
bul fra 1
eng fra 2
ger 4
fra eng 5
ger 20
ger eng 7
fra 6
</code></pre>
<p>which I want to sort to arrive at:</p>
<pre><code> col3
col1 col2
fra ger 20
eng 5
ger eng 7
fra 6
eng ger 4
fra 2
bul fra 1
</code></pre>
<p>the third column sorted descendingly (within col1 cell) and col1 sorted based on a property of col3, <em>here max (20 > 7 > 4 > 1)</em> </p>
<p>There are several questions dealing with similar problems, mine is relevant because it features a descriptive title and sample data (also other questions answers don't work for me)</p>
<p><a href="https://stackoverflow.com/a/45300480/3014199">https://stackoverflow.com/a/45300480/3014199</a> suggests </p>
<pre><code>df = (pt.reset_index()
        .sort_values(['col1','col3'], ascending=[True, False])
        .set_index(['col1','col2']))
print(df)
col3
col1 col2
bul fra 1
eng fra 2
ger 4
fra eng 5
ger fra 6
eng 7
fra ger 20
</code></pre>
<p>Which seems to sort col3 for the dataFrame there, but doesn't seem to work at all for my data.</p>
<p><a href="https://stackoverflow.com/questions/10595327/pandas-sort-pivot-table">Pandas: Sort pivot table</a> seems promising as well, but like others I get <code>ValueError: all keys need to be the same shape</code></p>
<p><strong>Update:</strong><br>
My example was not general enough, sorry! It should also work if 2 groups share the same max, e.g.</p>
<pre><code>x2 = pd.DataFrame({'col1':['bul', 'eng','eng', 'ger','ger', 'fra','fra'],
'col2':['fra', 'ger','fra', 'fra','eng', 'ger','eng'],
'col3':[ 1, 7, 2, 6, 7, 20, 5]})
</code></pre>
<p>E.g. MaxU's solution yields: </p>
<pre><code> col3
col1 col2
fra ger 20
eng 5
ger eng 7
eng ger 7
ger fra 6
eng fra 2
bul fra 1
</code></pre>
<p>I bet adding a hash (or rather a grouping number divided by 10) of col1 to the 'max' would work, but there has to be a better way...<br>
Yes! This seems to work:</p>
<pre><code>pt['New']=pt.groupby(level='col1').col3.transform('max')
pt['New'] = 1/(pt.index.labels[0]+1)+pt['New'].values
pt=pt.sort_values(['New','col3'],ascending=False).drop('New',1)
</code></pre>
|
<p>We can use a new helper column to achieve this:</p>
<pre><code>pt['New']=pt.groupby(level='col1').col3.transform('max')
pt=pt.sort_values(['New','col3'],ascending=False).drop('New',1)
pt
Out[1445]:
col3
col1 col2
fra ger 20
eng 5
ger eng 7
fra 6
eng ger 4
fra 2
bul fra 1
</code></pre>
<p>Updated :</p>
<pre><code>pt['New']=pt.groupby(level='col1').col3.transform('max')
pt['New1']=pt.groupby(level='col1').col3.ngroup()
pt=pt.sort_values(['New','New1','col3'],ascending=False)
pt
Out[151]:
col3 New New1
col1 col2
fra ger 20 20 2
eng 5 20 2
ger eng 7 7 3
fra 6 7 3
eng ger 7 7 1
fra 2 7 1
bul fra 1 1 0
</code></pre>
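<p>Putting the updated recipe together on the x2 data from the question (same logic, written as a runnable sketch; <code>aggfunc='sum'</code> is used instead of <code>np.sum</code>, which newer pandas deprecates here):</p>

```python
import pandas as pd

x2 = pd.DataFrame({'col1': ['bul', 'eng', 'eng', 'ger', 'ger', 'fra', 'fra'],
                   'col2': ['fra', 'ger', 'fra', 'fra', 'eng', 'ger', 'eng'],
                   'col3': [1, 7, 2, 6, 7, 20, 5]})

pt = pd.pivot_table(x2, index=['col1', 'col2'], values='col3', aggfunc='sum')
pt['New'] = pt.groupby(level='col1').col3.transform('max')   # group max
pt['New1'] = pt.groupby(level='col1').col3.ngroup()          # breaks ties between groups
pt = pt.sort_values(['New', 'New1', 'col3'], ascending=False).drop(['New', 'New1'], axis=1)

print(list(pt.index))
```

Here eng and ger share the same max (7), and the group number keeps each group's rows contiguous instead of interleaving them.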
|
pandas|sorting|pivot-table
| 4 |
1,907,687 | 37,529,469 |
Dynamic Point Manipulation in 3D Matplotlib
|
<p>In trying to manipulate a graph in matplotlib, I came across something interesting. When using the plot function, the x and y coordinates can be modified through aliasing; however, the z coordinate cannot. Is this easily explained?
I have included a small example; note that I was trying to get the point to move in a helix along the cylinder. </p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.set_xlabel("X")
ax.set_ylabel("Y")
ax.set_zlabel("Z")
radius = 8
# Cylinder
x = np.linspace(-radius, radius, 100)
z = np.linspace(0, 2, 100)
X, Z = np.meshgrid(x, z)
Y = np.sqrt(radius**2 - X**2)
ax.plot_surface(X, Y, Z, alpha=0.3, linewidth=0)
ax.plot_surface(X, (-Y), Z,alpha=0.3, linewidth=0)
X = np.array(np.zeros(1))
Y = np.array(np.zeros(1))
Z = np.array(np.zeros(1))
X[0]= radius*np.cos(np.deg2rad(0))
Y[0]= radius*np.sin(np.deg2rad(0))
Z[0]= 0.
ax.plot(X,Y,Z, 'or',markersize=3)
for i in range(0,360,5):
    X[0] = radius*np.cos(np.deg2rad(i))
    Y[0] = radius*np.sin(np.deg2rad(i))
    Z[0] = i/140.
    plt.draw()
    plt.pause(0.01)
</code></pre>
<p>For reference, the end goal of this exercise is to be able to position the point based on sensor inputs in real-time, in three dimensions.</p>
<p>Thank you to anyone who has insight as to why this is occurring and possible solutions! </p>
|
<p>This is indeed interesting. <code>Line3D</code> is derived from <code>Line2D</code>, so functions like <code>get_xdata</code> or <code>set_xdata</code> are not available for the <code>z</code> direction. Looking into the source code (<code>art3d.py</code>), it seems that the <code>z</code> values are completely ignored:</p>
<pre><code>def draw(self, renderer):
    xs3d, ys3d, zs3d = self._verts3d
    xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)
    self.set_data(xs, ys)
    lines.Line2D.draw(self, renderer)
</code></pre>
<p>But perhaps I don't correctly understand how the <code>Line3D</code>'s work...</p>
<p>There is however a function available to manipulate the <code>z</code> values (<code>set_3d_properties</code>), but since that function calls <code>get_xdata</code> and <code>get_ydata</code> itself, the only way I could get this to work is:</p>
<pre><code>p = ax.plot(X,Y,Z, 'or', markersize=3)[0]
for i in range(0,360,5):
    p.set_xdata(radius*np.cos(np.deg2rad(i)))
    p.set_ydata(radius*np.sin(np.deg2rad(i)))
    p.set_3d_properties(zs=i/140.)
</code></pre>
|
python|matplotlib
| 2 |
1,907,688 | 37,451,964 |
connecting and separating string inputs
|
<p>I need help with taking inputs in a loop, like so:</p>
<pre><code>example = input("enter the example !")
</code></pre>
<p>Then I need to add that input to one single variable and later print it all out
on separate lines EG:</p>
<pre><code>loop cycle 1:
enter the example ! test1
loop cycle 2:
enter the example ! test2
loop cycle 3:
enter the example ! test3
</code></pre>
<p>inputs:</p>
<pre><code>1. test1
2. test2
3. test3
</code></pre>
<p>One thing: I am unable to use .append, since using lists is not efficient enough in my case (I will probably need to be taught to use \n).</p>
|
<p>You can append a newline character to each input:</p>
<p><strong>for python2.</strong></p>
<pre><code>example = ""
for i in range(1,4):
    example = example + str(i) + ". " + raw_input("enter the example !") + "\n"

print example
</code></pre>
<p><strong>for python3</strong></p>
<pre><code>example = ""
for i in range(1,4):
    example = example + str(i) + ". " + input("enter the example !") + "\n"

print (example)
</code></pre>
<p>output</p>
<pre><code>messi@messi-Hi-Fi-B85S3:~/Desktop/soc$ python sample.py
enter the example !text 1
enter the example !text 2
enter the example !text 2
1. text 1
2. text 2
3. text 2
</code></pre>
|
python|string
| 2 |
1,907,689 | 34,135,672 |
how to step over list comprehension in pycharm?
|
<p>I am new to Python and Pycharm,</p>
<p>I am trying to step over a line with list comprehension, </p>
<p>but instead of moving me to the next line, PyCharm advances the comprehension by one iteration.</p>
<p>Any ideas how to move to the next line without pressing F8 3000 times?</p>
<p>thanks!</p>
|
<p>PyCharm has a 'Run to Cursor' option - just move your cursor one line down and hit it.</p>
|
python|debugging|pycharm
| 2 |
1,907,690 | 34,174,478 |
Creating virtualenv using altinstall
|
<p>I have just done a fresh install of Linux Mint 17.3. It comes with both python 2.7 and 3.4. I usually work with multiple versions of python so I just do an altinstall and then for each project I create a virtualenv using the desired version of python. However im running into issues with the newly installed OS. First a few things i've already done following the install:</p>
<pre><code>sudo apt-get update
sudo apt-get install build-essential
sudo apt-get install python-virtualenv
sudo apt-get install python-pip
</code></pre>
<p>I did an altinstall of python3.3.5:</p>
<pre><code>downloaded the source tarball
./configure --with-zlib
sudo make
sudo make altinstall
</code></pre>
<p>I then tried creating a virtualenv in a new folder to test:</p>
<pre><code>virtualenv -p python3.3 venv
</code></pre>
<p>This gave an error:</p>
<pre><code>no module named zlib
</code></pre>
<p>I've had this issue in the past, so I did:</p>
<pre><code>sudo apt-get install python-dev
sudo apt-get install zlib1g-dev
</code></pre>
<p>Now when I create the virtualenv the zlib error is gone, however im getting a new error and I can't seem to figure out how to fix it:</p>
<pre><code>Running virtualenv with interpreter /usr/local/bin/python3.3
Using base prefix '/usr/local'
New python executable in venv/bin/python3.3
Also creating executable in venv/bin/python
Installing setuptools, pip, wheel...
Complete output from command /home/vega/Documents...8/venv/bin/python3.3 -c "import sys, pip; sys...d\"] + sys.argv[1:]))" setuptools pip wheel:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/virtualenv_support/pip-7.1.2-py2.py3-none-any.whl/pip/__init__.py", line 15, in <module>
File "/usr/local/lib/python2.7/dist-packages/virtualenv_support/pip-7.1.2-py2.py3-none-any.whl/pip/vcs/subversion.py", line 9, in <module>
File "/usr/local/lib/python2.7/dist-packages/virtualenv_support/pip-7.1.2-py2.py3-none-any.whl/pip/index.py", line 30, in <module>
File "/usr/local/lib/python2.7/dist-packages/virtualenv_support/pip-7.1.2-py2.py3-none-any.whl/pip/wheel.py", line 35, in <module>
File "/usr/local/lib/python2.7/dist-packages/virtualenv_support/pip-7.1.2-py2.py3-none-any.whl/pip/_vendor/distlib/scripts.py", line 14, in <module>
File "/usr/local/lib/python2.7/dist-packages/virtualenv_support/pip-7.1.2-py2.py3-none-any.whl/pip/_vendor/distlib/compat.py", line 66, in <module>
ImportError: cannot import name HTTPSHandler
----------------------------------------
...Installing setuptools, pip, wheel...done.
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 2363, in <module>
main()
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 832, in main
symlink=options.symlink)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 1004, in create_environment
install_wheel(to_install, py_executable, search_dirs)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 969, in install_wheel
'PIP_NO_INDEX': '1'
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 910, in call_subprocess
% (cmd_desc, proc.returncode))
OSError: Command /home/vega/Documents...8/venv/bin/python3.3 -c "import sys, pip; sys...d\"] + sys.argv[1:]))" setuptools pip wheel failed with error code 1
</code></pre>
<p>I read somewhere that this might have something to do with openssl so I did:</p>
<pre><code>sudo apt-get install openssl
sudo apt-get install libcurl4-openssl-dev
</code></pre>
<p>No luck, still having the same issue.</p>
|
<p>I let someone else do the work, so I don't have to worry about building alternate versions myself from source. </p>
<p>I've had really good luck using Felix Krull's "deadsnakes" PPA for installing alternate Pythons on Ubuntu. Would this work for Mint 17? (based on Ubuntu Trusty)</p>
<p>The 'deadsnakes' PPA packages versions of Python: 2.3, 2.4, 2.5, 2.6, 2.7, 3.1, 3.2, 3.3, 3.4, 3.5 ... all available to install from <code>apt</code>. Once installed, you can manage versions and dependencies with virtualenv and pip.</p>
<p>Installing Python 3.5 from deadsnakes PPA:</p>
<pre><code>$ sudo add-apt-repository ppa:fkrull/deadsnakes
$ sudo apt-get update
$ sudo apt-get install python3.5 python3.5-dev
</code></pre>
<p>The PPA maintainer has maintained these for quite a long time and updates with each Ubuntu release.</p>
<p><a href="https://launchpad.net/~fkrull/+archive/ubuntu/deadsnakes" rel="nofollow">https://launchpad.net/~fkrull/+archive/ubuntu/deadsnakes</a></p>
|
python|virtualenv
| 3 |
1,907,691 | 66,091,099 |
find the index of the first value in x that is greater than 0.999
|
<p>I am confused by this question; how can I solve it? I tried something like the code below, but without the index; how can I combine them together? Thank you</p>
<p>Q.Try to find the index of the first value in x that is greater than 0.999 using a for loop and break.</p>
<p>Hint: try iterating over range(len(x)).</p>
<pre><code>for x in range(len(x)):
if x > 0.999:
break
print("The answer is", x)
</code></pre>
|
<p>If you really want to use a for loop with a break, you can use enumerate.</p>
<pre><code>x = [0, 0.3, 0.6, 0.8, 1, 0.4, 0.5, 6]
for i, num in enumerate(x):
if num > 0.999:
break
print("The answer is", i)
#The answer is 4
</code></pre>
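<p>An equivalent alternative (my addition, not part of the original answer) avoids the explicit loop by combining <code>next()</code> with a generator expression:</p>

```python
# next() yields the first index whose value passes the test; the
# default of -1 avoids a StopIteration when no value qualifies.
x = [0, 0.3, 0.6, 0.8, 1, 0.4, 0.5, 6]
idx = next((i for i, num in enumerate(x) if num > 0.999), -1)
print("The answer is", idx)  # The answer is 4
```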
|
python|indexing
| 0 |
1,907,692 | 65,981,163 |
How to import from another folder
|
<p>very good day!</p>
<p>I come to you for help.</p>
<p>I'm new to learning Python with an Udemy course.</p>
<p>Within this course a basic console project must be completed; the problem I have is the following:
according to the attached image, in the left panel you can see the project structure (yellow box). Inside it I have the "models" directory, where 3 modules are located.</p>
<p>Inside the module "users.py" I need to import the method located inside "db_pro", but Visual Studio Code flags it as an error, saying it cannot be found.</p>
<p>I have tried several ways without success, I really appreciate your collaboration to be able to carry out this import correctly.</p>
<p><a href="https://i.stack.imgur.com/AuFHf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AuFHf.png" alt="I attach the project" /></a></p>
<p>Thank you so much</p>
|
<p>As KJDII mentioned, try adding an <code>__init__.py</code> file in your modules folder.
It tells the Python interpreter that this specific folder should be treated as a Python package.
The <code>__init__.py</code> file may be left empty.</p>
<p>If this doesn't help, read about adding packages to Python's search path. Python searches a specific list of locations for packages to be imported.</p>
<p>EDIT:
As I wrote above, Python searches specific paths for packages. Your import is more complex than having a single module file and the main app file in the exact same folder, so the Python interpreter does not see your module on its search path (this topic is more complex, but I encourage you to read about it). What you need to do is:</p>
<pre><code>import sys
sys.path.append(absolute_path_to_your_module)
import your_module_name
</code></pre>
<p>What this does: every time you run the app, <code>sys.path.append</code> adds the module path to <code>sys.path</code>, the list of locations Python searches for imports.
An absolute path means the full path from the root up to your models directory, for example: '/home/desktop/my_project/modules'.
After this you will gain access to the modules directory and can just</p>
<pre><code>import db_pro
</code></pre>
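<p>As a self-contained sketch of this approach (the file and function names below are made up for illustration), the snippet creates a throwaway module on disk, appends its folder to <code>sys.path</code>, and imports it:</p>

```python
import os
import sys
import tempfile

# Write a tiny stand-in for db_pro.py into a temporary folder.
module_dir = tempfile.mkdtemp()
with open(os.path.join(module_dir, "db_pro.py"), "w") as f:
    f.write("def connect():\n    return 'connected'\n")

sys.path.append(module_dir)  # make the folder importable
import db_pro

print(db_pro.connect())  # connected
```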
|
python|python-3.x
| 1 |
1,907,693 | 16,176,128 |
Enthought Canopy 1.0 missing Python.h and other includes
|
<p>I installed the Canopy distribution from Enthought for Mac OSX 64 bit and the include/python-2.7 folder is missing. </p>
<p>I couldn't find a package that includes this, has anyone had this problem and a solution? </p>
|
<p>Canopy uses a backport of <a href="http://docs.python.org/3/library/venv" rel="nofollow"><code>venv</code></a> to provide the Python that gets exposed to users. <code>venv</code> does not copy the include directories, but <code>distutils</code> will find and use the include directory from the parent distribution, for example, modulo specific version numbers, it will probably look like this:</p>
<p><code>/Applications/Canopy.app/appdata/canopy-1.0.0.1160.macosx-x86_64/Canopy.app/Contents/include/python2.7/</code></p>
<p>If you try compiling a simple extension module, it should work. Please give it a try and let us know if a simple extension module does not work.</p>
|
python-2.7|build|enthought|setup.py|canopy
| 2 |
1,907,694 | 38,668,680 |
Return all keys along with value in nested dictionary
|
<p>I am working on getting all text that exists in several <code>.yaml</code> files placed into a new singular YAML file that will contain the English translations that someone can then translate into Spanish.</p>
<p>Each YAML file has a lot of nested text. I want to print the full 'path', aka all the keys, along with the value, for each value in the YAML file. Here's an example input for a <code>.yaml</code> file that lives in the myproject.section.more_information file:</p>
<pre><code>default:
heading: Here’s A Title
learn_more:
title: Title of Thing
url: www.url.com
description: description
opens_new_window: true
</code></pre>
<p>and here's the desired output: </p>
<pre><code>myproject.section.more_information.default.heading: Here’s a Title
myproject.section.more_information.default.learn_more.title: Title of Thing
mproject.section.more_information.default.learn_more.url: www.url.com
myproject.section.more_information.default.learn_more.description: description
myproject.section.more_information.default.learn_more.opens_new_window: true
</code></pre>
<p>This seems like a good candidate for recursion, so I've looked at examples such as <a href="https://stackoverflow.com/questions/36808260/python-recursive-search-of-dict-with-nested-keys?rq=1">this answer</a></p>
<p>However, I want to preserve all of the keys that lead to a given value, not just the last key in a value. I'm currently using PyYAML to read/write YAML. </p>
<p>Any tips on how to save each key as I continue to check if the item is a dictionary and then return all the keys associated with each value?</p>
|
<p>What you're wanting to do is flatten nested dictionaries. This would be a good place to start: <a href="https://stackoverflow.com/questions/6027558/flatten-nested-python-dictionaries-compressing-keys">Flatten nested Python dictionaries, compressing keys</a></p>
<p>In fact, I think the code snippet in the top answer would work for you if you just changed the sep argument to <code>.</code>.</p>
<p>edit:</p>
<p>Check this for a working example based on the linked SO answer <a href="http://ideone.com/Sx625B" rel="nofollow noreferrer">http://ideone.com/Sx625B</a></p>
<pre><code>import collections.abc  # MutableMapping moved to collections.abc in Python 3.3
some_dict = {
'default': {
'heading': 'Here’s A Title',
'learn_more': {
'title': 'Title of Thing',
'url': 'www.url.com',
'description': 'description',
'opens_new_window': 'true'
}
}
}
def flatten(d, parent_key='', sep='_'):
items = []
for k, v in d.items():
new_key = parent_key + sep + k if parent_key else k
        if isinstance(v, collections.abc.MutableMapping):
items.extend(flatten(v, new_key, sep=sep).items())
else:
items.append((new_key, v))
return dict(items)
results = flatten(some_dict, parent_key='', sep='.')
for item in results:
print(item + ': ' + results[item])
</code></pre>
<p>If you want it in order, you'll need an OrderedDict though.</p>
|
python|dictionary|recursion|pyyaml
| 1 |
1,907,695 | 38,837,222 |
Read Textfile Write new CSV file
|
<p>Currently I have the following code which prints my desired lines from a .KAP file.</p>
<pre><code>f = open('120301.KAP')
for line in f:
if line.startswith('PLY'):
print line
</code></pre>
<p>This results in the following output</p>
<pre><code>PLY/1,48.107478621032,-69.733975000000
PLY/2,48.163516399836,-70.032838888053
PLY/3,48.270000002883,-70.032838888053
PLY/4,48.270000002883,-69.712824977522
PLY/5,48.192379262383,-69.711801581207
PLY/6,48.191666671083,-69.532840015422
PLY/7,48.033358898628,-69.532840015422
PLY/8,48.033359033880,-69.733975000000
PLY/9,48.107478621032,-69.733975000000
</code></pre>
<p>My goal is not to have it just print these lines. I'd like to have a CSV file created named 120301.csv with the coordinates in there own columns (leaving the PLY/# behind). Simple enough? I've been trying different import CSV functions for awhile now. I can't seem to get anywhere.</p>
|
<p>Step by step, since it looks like you're struggling with some basics:</p>
<pre><code>f_in = open("120301.KAP")
f_out = open("outfile.csv", "w")
for line in f_in:
if line.startswith("PLY"): # copy this line to the new file
# split it into columns, ignoring the first one ("PLY/x")
        _, col1, col2 = line.strip().split(",")  # strip the trailing newline first
# format your output
outstring = col1 + "," + col2 + "\n"
# and write it out
f_out.write(outstring)
f_in.close()
f_out.close() # really bad practice, but I'll get to that
</code></pre>
<p>Of course this is really not the best way to do this. There's a reason we have things like the <code>csv</code> module.</p>
<pre><code>import csv
with open("120301.KAP") as inf, open("outfile.csv", "wb") as outf:
reader = csv.reader(inf)
writer = csv.writer(outf)
for row in reader:
# if the first cell of row starts with "PLY"...
if row[0].startswith("PLY"):
# write out the row, ignoring the first column
writer.writerow(row[1:])
# opening the files using the "with" context managers means you don't have
# to remember to close them when you're done with them.
</code></pre>
|
python|csv
| 2 |
1,907,696 | 38,902,239 |
Performance issues with pandas and filtering on datetime column
|
<p>I've a pandas dataframe with a datetime64 object on one of the columns.</p>
<pre><code> time volume complete closeBid closeAsk openBid openAsk highBid highAsk lowBid lowAsk closeMid
0 2016-08-07 21:00:00+00:00 9 True 0.84734 0.84842 0.84706 0.84814 0.84734 0.84842 0.84706 0.84814 0.84788
1 2016-08-07 21:05:00+00:00 10 True 0.84735 0.84841 0.84752 0.84832 0.84752 0.84846 0.84712 0.8482 0.84788
2 2016-08-07 21:10:00+00:00 10 True 0.84742 0.84817 0.84739 0.84828 0.84757 0.84831 0.84735 0.84817 0.847795
3 2016-08-07 21:15:00+00:00 18 True 0.84732 0.84811 0.84737 0.84813 0.84737 0.84813 0.84721 0.8479 0.847715
4 2016-08-07 21:20:00+00:00 4 True 0.84755 0.84822 0.84739 0.84812 0.84755 0.84822 0.84739 0.84812 0.847885
5 2016-08-07 21:25:00+00:00 4 True 0.84769 0.84843 0.84758 0.84827 0.84769 0.84843 0.84758 0.84827 0.84806
6 2016-08-07 21:30:00+00:00 5 True 0.84764 0.84851 0.84768 0.84852 0.8478 0.84857 0.84764 0.84851 0.848075
7 2016-08-07 21:35:00+00:00 4 True 0.84755 0.84825 0.84762 0.84844 0.84765 0.84844 0.84755 0.84824 0.8479
8 2016-08-07 21:40:00+00:00 1 True 0.84759 0.84812 0.84759 0.84812 0.84759 0.84812 0.84759 0.84812 0.847855
9 2016-08-07 21:45:00+00:00 3 True 0.84727 0.84817 0.84743 0.8482 0.84743 0.84822 0.84727 0.84817 0.84772
</code></pre>
<p>My application follows the (simplified) structure below:</p>
<pre><code>class Runner():
def execute_tick(self, clock_tick, previous_tick):
candles = self.broker.get_new_candles(clock_tick, previous_tick)
if candles:
run_calculations(candles)
class Broker():
def get_new_candles(clock_tick, previous_tick)
start = previous_tick - timedelta(minutes=1)
end = clock_tick - timedelta(minutes=3)
return df[(df.time > start) & (df.time <= end)]
</code></pre>
<p>I noticed when profiling the app, that calling the <code>df[(df.time > start) & (df.time <= end)]</code> causes the highest performance issues and I was wondering if there is a way to speed up these calls?</p>
<p>EDIT: I'm adding some more info about the use-case here (also, source is available at: <a href="https://github.com/jmelett/pyFxTrader" rel="noreferrer">https://github.com/jmelett/pyFxTrader</a>)</p>
<ul>
<li>The application will accept a list of <a href="https://en.wikipedia.org/wiki/Foreign_exchange_market#Financial_instruments" rel="noreferrer">instruments</a> (e.g. EUR_USD, USD_JPY, GBP_CHF) and then <a href="https://github.com/jmelett/pyFxTrader/blob/master/trader/broker/oanda_backtest.py#L68" rel="noreferrer">pre-fetch</a> ticks/<a href="https://en.wikipedia.org/wiki/Candlestick_chart" rel="noreferrer">candles</a> for each one of them and their timeframes (e.g. 5 minutes, 30 minutes, 1 hour etc.). The initialised data is basically a <code>dict</code> of Instruments, each containing another <code>dict</code> with candle data for M5, M30, H1 timeframes. </li>
<li>Each "timeframe" is a pandas dataframe like shown at the top</li>
<li>A <a href="https://github.com/jmelett/pyFxTrader/blob/master/trader/controller.py#L145" rel="noreferrer">clock simulator</a> is then used to query the individual candles for the specific time (e.g. at 15:30:00, give me the last x "5-minute-candles") for EUR_USD</li>
<li>This piece of data is then used to "<a href="https://en.wikipedia.org/wiki/Backtesting" rel="noreferrer">simulate</a>" specific market conditions (e.g. average price over last 1 hour increased by 10%, buy market position)</li>
</ul>
|
<p>If efficiency is your goal, I'd use numpy for just about everything</p>
<p>I rewrote <code>get_new_candles</code> as <code>get_new_candles2</code></p>
<pre><code>from datetime import timedelta  # needed for the offsets below

def get_new_candles2(clock_tick, previous_tick):
    start = previous_tick - timedelta(minutes=1)
    end = clock_tick - timedelta(minutes=3)
    ge_start = df.time.values >= start.to_datetime64()
    le_end = df.time.values <= end.to_datetime64()
    mask = ge_start & le_end
    return pd.DataFrame(df.values[mask], df.index[mask], df.columns)
</code></pre>
<h3>Setup of data</h3>
<pre><code>from StringIO import StringIO
import pandas as pd
text = """time,volume,complete,closeBid,closeAsk,openBid,openAsk,highBid,highAsk,lowBid,lowAsk,closeMid
2016-08-07 21:00:00+00:00,9,True,0.84734,0.84842,0.84706,0.84814,0.84734,0.84842,0.84706,0.84814,0.84788
2016-08-07 21:05:00+00:00,10,True,0.84735,0.84841,0.84752,0.84832,0.84752,0.84846,0.84712,0.8482,0.84788
2016-08-07 21:10:00+00:00,10,True,0.84742,0.84817,0.84739,0.84828,0.84757,0.84831,0.84735,0.84817,0.847795
2016-08-07 21:15:00+00:00,18,True,0.84732,0.84811,0.84737,0.84813,0.84737,0.84813,0.84721,0.8479,0.847715
2016-08-07 21:20:00+00:00,4,True,0.84755,0.84822,0.84739,0.84812,0.84755,0.84822,0.84739,0.84812,0.847885
2016-08-07 21:25:00+00:00,4,True,0.84769,0.84843,0.84758,0.84827,0.84769,0.84843,0.84758,0.84827,0.84806
2016-08-07 21:30:00+00:00,5,True,0.84764,0.84851,0.84768,0.84852,0.8478,0.84857,0.84764,0.84851,0.848075
2016-08-07 21:35:00+00:00,4,True,0.84755,0.84825,0.84762,0.84844,0.84765,0.84844,0.84755,0.84824,0.8479
2016-08-07 21:40:00+00:00,1,True,0.84759,0.84812,0.84759,0.84812,0.84759,0.84812,0.84759,0.84812,0.847855
2016-08-07 21:45:00+00:00,3,True,0.84727,0.84817,0.84743,0.8482,0.84743,0.84822,0.84727,0.84817,0.84772
"""
df = pd.read_csv(StringIO(text), parse_dates=[0])
</code></pre>
<h3>Test input variables</h3>
<pre><code>previous_tick = pd.to_datetime('2016-08-07 21:10:00')
clock_tick = pd.to_datetime('2016-08-07 21:45:00')
</code></pre>
<hr>
<pre><code>get_new_candles2(clock_tick, previous_tick)
</code></pre>
<p><a href="https://i.stack.imgur.com/Ni07K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ni07K.png" alt="enter image description here"></a></p>
<hr>
<h3>Timing</h3>
<p><a href="https://i.stack.imgur.com/NE3OL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NE3OL.png" alt="enter image description here"></a></p>
|
python|pandas|numpy|dataframe
| 3 |
1,907,697 | 40,389,703 |
List occurrences in list of lists
|
<p>We have a list containing lists like so:</p>
<pre><code>My_list = [ [1,2,3],[4,5,6],[7,8,9],[1,2,3] ]
</code></pre>
<p>Is there an easy way to learn that the list item <code>[1,2,3]</code> appears twice in <code>My_list</code>? I am not interested in the individual numbers but i am interested in them as a list.</p>
|
<p>You may use <code>list.count()</code> in order to find the occurrence of item in the <code>list</code> as:</p>
<pre><code>>>> My_list.count([1, 2, 3])
2
</code></pre>
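<p>If you need the counts of every distinct sublist at once (an alternative not in the original answer), <code>collections.Counter</code> works after converting the unhashable lists to tuples:</p>

```python
from collections import Counter

My_list = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3]]
# Lists are unhashable, so map each one to a tuple before counting.
counts = Counter(tuple(item) for item in My_list)
print(counts[(1, 2, 3)])  # 2
```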
|
python|list
| 1 |
1,907,698 | 9,679,653 |
Python libraries to generate charts and save as image file
|
<p>I require to generate bar charts and then save them as images, in .png or .bmp format.
Can anyone please point me to such libraries.</p>
<p>My basic need is to generate bar charts in an excel report.</p>
<p>I thought of generating bar charts from any chart library as image files and then insert them into the excel report i am generating using xlwt library.</p>
<p>Please let me know if there is a better way to achieve this objective.</p>
<p>** I had read about pygooglechart module, but i think we need internet connection to generate charts and then download them. Please correct me if i am wrong about this.</p>
|
<p>Have you tried <a href="http://matplotlib.sourceforge.net/" rel="noreferrer">Matplotlib</a> ?</p>
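<p>For example, a minimal sketch (my addition; the file name and data are placeholders) that draws a bar chart and writes it straight to a PNG without opening a window:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend: render to files only
import matplotlib.pyplot as plt

labels = ["Q1", "Q2", "Q3", "Q4"]
values = [10, 24, 17, 30]

plt.bar(labels, values)
plt.ylabel("Sales")
plt.savefig("chart.png")  # PNG out of the box; convert if you need .bmp
```

<p>The resulting image file can then be inserted into the Excel report.</p>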
|
python|charts|python-2.7|bar-chart|pygooglechart
| 6 |
1,907,699 | 9,925,381 |
Python script Exception with Tor
|
<p>I have the following script which uses <a href="http://sourceforge.net/projects/socksipy/" rel="noreferrer">SocksiPY</a></p>
<p>and Tor:</p>
<pre><code>from TorCtl import TorCtl
import socks
import socket
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9050)
socket.socket = socks.socksocket
import urllib2
import sqlite3
from BeautifulSoup import BeautifulSoup
def newId():
conn = TorCtl.connect(controlAddr="127.0.0.1", controlPort=9051, passphrase="123")
TorCtl.Connection.send_signal(conn, "NEWNYM")
newId()
print(urllib2.urlopen("http://www.ifconfig.me/ip").read())
</code></pre>
<p>This code should change Tor identity but it waits for some time and gives the following error:</p>
<pre><code>tuple index out of range
Traceback (most recent call last):
File "template.py", line 16, in <module>
newId()
File "template.py", line 14, in newId
TorCtl.Connection.send_signal(conn, "NEWNYM")
TypeError: unbound method send_signal() must be called with Connection instance as first argument (got NoneType instance instead)
</code></pre>
<p>But above script is divided into 2 separate scripts:</p>
<pre><code>import socks
import socket
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9050)
socket.socket = socks.socksocket
import urllib2
import sqlite3
from BeautifulSoup import BeautifulSoup
print(urllib2.urlopen("http://www.ifconfig.me/ip").read())
</code></pre>
<p>AND:</p>
<pre><code>from TorCtl import TorCtl
def newId():
conn = TorCtl.connect(controlAddr="127.0.0.1", controlPort=9051, passphrase="123")
TorCtl.Connection.send_signal(conn, "NEWNYM")
newId()
</code></pre>
<p>When second script is called then first is called it is ok. Can anyone explain what is the problem and how to fix?</p>
|
<p>Anonymous explained this socket overwrite very well; the answer is almost perfect, except that you have to <strong>close the control socket</strong>. That is safer because of the TorCtl event loop, though I would have to look deeper into the TorCtl code to fully understand that event loop.</p>
<p>To summarize your code becomes:</p>
<pre><code>from TorCtl import TorCtl
import socks
import socket
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9050)
import urllib2
import sqlite3
from BeautifulSoup import BeautifulSoup
__originalSocket = socket.socket
def newId():
''' Clean circuit switcher
Restores socket to original system value.
Calls TOR control socket and closes it
Replaces system socket with socksified socket
'''
socket.socket = __originalSocket
conn = TorCtl.connect(controlAddr="127.0.0.1", controlPort=9051, passphrase="123")
TorCtl.Connection.send_signal(conn, "NEWNYM")
conn.close()
socket.socket = socks.socksocket
newId()
print(urllib2.urlopen("http://www.ifconfig.me/ip").read())
</code></pre>
|
python|tor
| 5 |