Unnamed: 0 (int64: 0 to 1.91M) | id (int64: 337 to 73.8M) | title (string: 10 to 150 chars) | question (string: 21 to 64.2k chars) | answer (string: 19 to 59.4k chars) | tags (string: 5 to 112 chars) | score (int64: -10 to 17.3k)
---|---|---|---|---|---|---|
1,906,400 | 47,654,338 |
TclStackFree: incorrect freePtr. Call out of sequence? in python program
|
<p>
I want to build a multithreaded dynamic demo. When the program starts to run, it shows the plot and everything is OK, but it obviously runs abnormally: the mouse pointer becomes a spinning circle. Then the program crashes and exits with the error code in the title. The complete code is as follows.
</p>
<pre><code>#-*-coding:utf-8-*-
import matplotlib
from matplotlib.patches import Circle
import dask
import matplotlib.pyplot as plt
import xlrd
import numpy as np
from matplotlib.animation import FuncAnimation
import matplotlib.ticker as mticker
import cartopy.crs as ccrs
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import time
from matplotlib.offsetbox import AnnotationBbox,OffsetImage
from PIL import Image
import random
from time import ctime,sleep
import threading

#matplotlib.use('Agg')
# map visualization
fig=plt.figure(figsize=(20,10))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
ax.stock_img()
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
                  linewidth=2, color='gray', alpha=15, linestyle='--')
gl.xlabels_top = False
gl.ylabels_left = False
gl.xlines = False
gl.xlocator = mticker.FixedLocator([-180, -45, 0, 45, 180])
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.xlabel_style = {'size': 15, 'color': 'gray'}
gl.xlabel_style = {'color': 'red', 'weight': 'bold'}
img=Image.open(r'E:\python_file\untitled\p.png')
imagebox=OffsetImage(img,zoom=0.05)
imagebox.image.axes=ax
ab=AnnotationBbox(imagebox,[55,10],pad=0,frameon=False)
ax.add_artist(ab)
ac=AnnotationBbox(imagebox,[63,0],pad=0,frameon=False)
ax.add_artist(ac)
ad=AnnotationBbox(imagebox,[70,-10],pad=0,frameon=False)
ax.add_artist(ad)
#============================================# attack
tolerance=1
x_m1,y_m1=random.randint(-180,180),random.randint(-90,90)
v_m1=170
x_m2,y_m2=random.randint(-180,180),random.randint(-90,90)
v_m2=v_m1
x_m3,y_m3=random.randint(-180,180),random.randint(-90,90)
v_m3=v_m1
x_m4,y_m4=55,10
x_m5,y_m5=63,0
x_m6,y_m6=70,-10

class target():
    """docstring for target"""
    def __init__(self, x, y):
        self.x = x
        self.y = y

target1=target(x_m4,y_m4)
target2=target(x_m5,y_m5)
target3=target(x_m6,y_m6)
v=v_m1

class missile(threading.Thread):
    """docstring for missile"""
    def __init__(self, x, y,name):
        super(missile,self).__init__()
        self.x = x
        self.y = y
        self.name=name

    def forward(self, v, target1):
        """docstring for forward"""
        if self.x < target1.x:
            alpha = np.arctan((target1.y - self.y) / (target1.x - self.x))
        elif self.x > target1.x:
            alpha = np.pi + np.arctan((target1.y - self.y) / (target1.x - self.x))
        elif self.x == target1.x and self.y < target1.y:
            alpha = np.pi / 2
        else:
            alpha = -np.pi / 2
        self.x = self.x + v * 0.01 * np.cos(alpha)
        self.y = self.y + v * 0.01 * np.sin(alpha)
        return self.x, self.y

    def distance(self, target1):
        """docstring for distance"""
        return np.sqrt((self.x - target1.x) ** 2 + (self.y - target1.y) ** 2)

    def run(self):
        while True:
            if self.distance(target1) < tolerance or self.distance(target2) < tolerance or self.distance(
                    target3) < tolerance:
                print ("collision")
                break
            if self.distance(target1) < self.distance(target2) and self.distance(target1) < self.distance(target3):
                self.x, self.y = self.forward(v, target1)
            if self.distance(target2) < self.distance(target1) and self.distance(target2) < self.distance(target3):
                self.x, self.y = self.forward(v, target2)
            if self.distance(target3) < self.distance(target2) and self.distance(target3) < self.distance(target1):
                self.x, self.y = self.forward(v, target3)
            plt.plot(self.x, self.y, 'o')
            fig.canvas.draw()
            fig.canvas.flush_events()

m2=missile(x_m2,y_m2,'mm')
m1=missile(x_m1,y_m1,'mn')
m3=missile(x_m3,y_m3,'md')
print "before m1"
m1.start()
print "after m1"
m2.start()
print "after m2"
m3.start()
print "after m3"
plt.show()
</code></pre>
<p>
Could you give some suggestions to fix the crash so that the program runs normally? In my opinion, the problem is related to multithreading.</p>
|
<p>Finally, I found there is resource contention between the threads. Putting a lock around the plotting calls solves the problem.</p>
|
python|multithreading|matplotlib
| 0 |
1,906,401 | 47,803,790 |
Python : ValueError: need more than 0 values to unpack
|
<p>I'm trying to save in different lists two parts of a line contained in a file.txt, this file shows:</p>
<p>127.0.0.0.2 23344</p>
<p>127.0.0.0.5 43354</p>
<p>I want to save the IP as a string in one list and the port in another int list.
Everything is okay, but when I add another line, for example:</p>
<p>127.0.0.0.2 23344</p>
<p>127.0.0.0.5 43354</p>
<p>127.0.0.0.4 25565</p>
<p>the compiler gets this error: <strong>Traceback (most recent call last):
File "cliente1.py", line 81, in
ip , port = lineas[x].split()
ValueError: need more than 0 values to unpack</strong></p>
<p>Here is the piece of code:</p>
<pre><code>iplista = list()  # create the lists
portlista = list()
for x in range (0,numero_de_lineas):
    ip , port = lineas[x].split()
    iplista.append(ip)      # add the IPs to the list
    portlista.append(port)  # add the ports to the list
</code></pre>
<p>Thanks to all of you for helping me!</p>
|
<p>Your code works perfectly for me.
The issue might be the way the lines are written in the input file. Don't include unnecessary blank lines; a blank line will be read into the list of lines and cannot be split.
<a href="https://i.stack.imgur.com/6lEYC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6lEYC.png" alt="enter image description here"></a></p>
<p>The list now becomes : </p>
<p><a href="https://i.stack.imgur.com/68501.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/68501.png" alt="enter image description here"></a></p>
<p>Also check again if the ip address and the port number have a space between them.</p>
|
python-2.7|list|for-loop|split
| 1 |
1,906,402 | 72,794,850 |
Fastest way to convert huge dictionary to a dataframe
|
<p>I have a dictionary that looks like this:</p>
<pre><code>dict = {A: {A: 0,
            B: 1,
            C: 1,
            D: 2,
            E: 2,
            F: 2,
            G: 2,
            H: 2,
            I: 3},
        B: {B: 0,
            A: 1,
            K: 1,
            O: 1,
            M: 1,
            Q: 1,
            L: 1,
            Z: 2,
            T: 2},
        C: {C: 0,
            R: 1,
            A: 1,
            D: 2,
            F: 2,
            J: 2,
            E: 2,
            Y: 2,
            B: 2},
        D: {D: 0,
            F: 1,
            H: 1,
            I: 1,
            E: 1,
            A: 2,
            C: 2,
            S: 2,
            U: 3}}
</code></pre>
<p>But in fact it is way bigger (up to 60K keys) and I need a very fast and efficient way to turn this dictionary into a dataframe that looks like this:</p>
<pre><code>person_1  person_2  degree
A         A         0
A         B         1
A         C         1
A         D         2
A         E         2
A         F         2
A         G         2
A         H         2
A         I         3
B         B         0
B         A         1
B         K         1
B         O         1
B         M         1
B         Q         1
B         L         1
B         Z         2
B         T         2
C         C         0
C         R         1
C         A         1
C         D         2
C         F         2
C         J         2
C         E         2
C         Y         2
C         B         2
D         D         0
D         F         1
D         H         1
D         I         1
D         E         1
D         A         2
D         C         2
D         S         2
D         U         3
</code></pre>
<p>So basically I need a dataframe where the first two columns come from the dictionary's outer and inner keys, and the third column is the number inside that key. What I'm doing right now is converting the dictionary to a df using <code>df = pd.DataFrame(dict)</code> and then</p>
<pre><code>df = pd.melt(df, 'index').rename(columns = {'index': 'hcp_npi',
'variable':'connected_hcp_npi',
'value': 'degree_of_separation'}).dropna()
</code></pre>
<p>And I get the result I need. But the problem with this approach is that when the dictionary exceeds 20K keys, the melt function just takes forever to run. So I'm looking for a faster or more efficient way to go from the initial dictionary to the final dataframe.</p>
<p>Thanks!</p>
|
<p>It looks like it's faster to pre-process the dictionary into the column values:</p>
<pre class="lang-py prettyprint-override"><code>from collections import defaultdict

d2 = defaultdict(list)
for k, v in d.items():
    d2['person_1'] += [k] * len(v)
    d2['person_2'] += list(v.keys())
    d2['degree'] += list(v.values())

df = pd.DataFrame(d2)
</code></pre>
<p>I tested your method, @jezrael, @BENYs (now deleted) and mine using <code>timeit</code> and code like this (replacing the <code>stmt</code> as appropriate):</p>
<pre class="lang-py prettyprint-override"><code>timeit.timeit(setup='''
import pandas as pd
d = {'A': {'A': 0, 'B': 1, 'C': 1, 'D': 2, 'E': 2, 'F': 2, 'G': 2, 'H': 2, 'I': 3},
'B': {'B': 0, 'A': 1, 'K': 1, 'O': 1, 'M': 1, 'Q': 1, 'L': 1, 'Z': 2, 'T': 2},
'C': {'C': 0, 'R': 1, 'A': 1, 'D': 2, 'F': 2, 'J': 2, 'E': 2, 'Y': 2, 'B': 2},
'D': {'D': 0, 'F': 1, 'H': 1, 'I': 1, 'E': 1, 'A': 2, 'C': 2, 'S': 2, 'U': 3}
}
''',
stmt='''
df = pd.DataFrame(d)
df = pd.melt(df).rename(columns = {'index': 'hcp_npi',
'variable':'connected_hcp_npi',
'value': 'degree_of_separation'}).dropna()
''',
number=1000)
</code></pre>
<p>For 1000 iterations, the results were:</p>
<pre><code>Nick 0.2878
jezrael 0.3178
BENY 2.2822
TomasCB 2.2774
</code></pre>
<p>For reference, I include @BENY answer here:</p>
<pre class="lang-py prettyprint-override"><code>pd.concat({x : pd.Series(y) for x , y in d.items()}).reset_index()
</code></pre>
|
python|pandas
| 3 |
1,906,403 | 72,579,505 |
how to filter a .csv/.txt file using a list from another .txt
|
<p>So I have an excel sheet that contains in this order:</p>
<p>Sample_name | column data | column data2 | column data ... n</p>
<p>I also have a .txt file that contains</p>
<p>Sample_name</p>
<p>What I want to do is filter the excel file for only the sample names contained in the .txt file. My current idea is to go through each column (excel sheet) and see if it matches any name in the .txt file; if it does, then grab the whole column. However, this seems like an inefficient way to do it. I also need to do this using Python. I was hoping someone could give me an idea on how to approach this better. Thank you very much.</p>
|
<p>Excel PowerQuery should do the trick:</p>
<ol>
<li>Load .txt file as a table (list)</li>
<li>Load sheet with the data columns as another table</li>
<li>Merge (e.g. Left join) first table with second table</li>
<li>Optional: adjust/select the columns to be included or excluded in the resulting table</li>
</ol>
<p>In Python with Pandas’ data frames the same can be accomplished (joining 2 data frames)</p>
<p>P.S. Pandas supports loading CSV files and txt files (as a variant of CSV) into a data frame</p>
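<p>For reference, a minimal pandas sketch of that join/filter approach (hypothetical file names; assumes the sheet has a <code>Sample_name</code> column and the .txt holds one name per line):</p>
<pre><code>import pandas as pd

# load the data sheet and the list of wanted sample names
df = pd.read_excel("data.xlsx")
wanted = pd.read_csv("names.txt", header=None)[0]

# keep only the rows whose Sample_name appears in the .txt list
filtered = df[df["Sample_name"].isin(wanted)]
filtered.to_csv("filtered.csv", index=False)
</code></pre>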
|
python|excel
| 1 |
1,906,404 | 39,684,364 |
Heroku gunicorn flask login is not working properly
|
<p>I have a Flask app that uses Flask-Login for authentication. Everything works fine locally, both using Flask's built-in web server and gunicorn run locally. But when it's on Heroku it's faulty: sometimes it logs me in and sometimes it does not. When I successfully log in, within a few seconds of navigating my session just gets destroyed and I am logged out automatically. This should only happen when the user logs out.</p>
<p>The following code snippet in my view might be relevant:</p>
<pre><code>@app.before_request
def before_request():
    g.user = current_user

# I have index (/) and other views (/new) decorated with @login_required
</code></pre>
<p>I might be having similar issues <a href="https://stackoverflow.com/questions/13614877/flask-login-and-heroku-issues">with this</a>. It does not have any answers yet and, from what I read in the comments, the author just ran his app with <code>python app.py</code>, that is, using Flask's built-in web server. However I can't seem to duplicate his workaround, since running <code>app.run(host='0.0.0.0')</code> runs the app on port <code>5000</code> and I can't set <code>port=80</code> because of permissions.</p>
<p>I don't see anything helpful with the logs except that it does not authenticate even when I should.</p>
<p>Part of the logs when I got authenticated and tried to navigate to <code>/new</code> and <code>/</code> alternately until it logs me out:</p>
<pre><code>2016-09-25T06:57:53.052378+00:00 app[web.1]: authenticated - IP:10.179.239.229
2016-09-25T06:57:53.455145+00:00 heroku[router]: at=info method=GET path="/" host=testdep0.herokuapp.com request_id=c7c8f4c9-b003-446e-92d8-af0a81985e72 fwd="124.100.201.61" dyno=web.1 connect=0ms service=116ms status=200 bytes=6526
2016-09-25T06:58:11.415837+00:00 heroku[router]: at=info method=GET path="/new" host=testdep0.herokuapp.com request_id=ae5e4e29-0345-4a09-90c4-36fb64785079 fwd="124.100.201.61" dyno=web.1 connect=0ms service=7ms status=200 bytes=2552
2016-09-25T06:58:13.543098+00:00 heroku[router]: at=info method=GET path="/" host=testdep0.herokuapp.com request_id=47696ab9-57b9-4f20-810a-66033e3e9e50 fwd="124.100.201.61" dyno=web.1 connect=0ms service=8ms status=200 bytes=5982
2016-09-25T06:58:18.037766+00:00 heroku[router]: at=info method=GET path="/new" host=testdep0.herokuapp.com request_id=98912601-6342-4d71-a106-26056e4bbb21 fwd="124.100.201.61" dyno=web.1 connect=0ms service=3ms status=200 bytes=2552
2016-09-25T06:58:19.619369+00:00 heroku[router]: at=info method=GET path="/" host=testdep0.herokuapp.com request_id=2b04d31f-93a2-4653-83a4-f95ca9b97149 fwd="124.100.201.61" dyno=web.1 connect=0ms service=3ms status=302 bytes=640
2016-09-25T06:58:19.953910+00:00 heroku[router]: at=info method=GET path="/login?next=%2F" host=testdep0.herokuapp.com request_id=e80d15cd-e9ad-45ff-ae54-e156412fe4ff fwd="124.100.201.61" dyno=web.1 connect=0ms service=3ms status=200 bytes=2793
</code></pre>
<p>Procfile:</p>
<p><code>web: gunicorn app:app</code></p>
|
<p>The problem was solved by adding the <code>--preload</code> option to gunicorn. I'm not entirely sure how that solved the problem and would appreciate it if someone could explain.</p>
<p>Updated Procfile:</p>
<p><code>web: gunicorn app:app --preload</code></p>
|
python|heroku|flask|gunicorn|flask-login
| 15 |
1,906,405 | 39,792,891 |
What are some ways to post python pandas dataframes to slack?
|
<p>How can I export a pandas dataframe to slack? </p>
<p>df.to_json() seems like a potential candidate, coupled with the slack incoming webhook, but then parsing the message to display as a nice markdown/html-ized table isn't obvious to me.</p>
<p>Long time listener, first time caller, please go easy on me...</p>
|
<p>There is a .<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_markdown.html" rel="nofollow noreferrer"><code>to_markdown</code></a>() method on DataFrames, so that might work. But if you are just looking to cut and paste, <a href="https://pypi.python.org/pypi/tabulate" rel="nofollow noreferrer">Tabulate</a> is a good choice. From the docs:</p>
<pre><code>from tabulate import tabulate
df = pd.DataFrame([["Name","Age"],["Alice",24],["Bob",19]])
print tabulate(df, tablefmt="grid")
</code></pre>
<p>Returns</p>
<pre><code>+---+-------+-----+
| 0 | Name | Age |
+---+-------+-----+
| 1 | Alice | 24 |
+---+-------+-----+
| 2 | Bob | 19 |
+---+-------+-----+
</code></pre>
<p>Paste that in a code block in Slack and it should show up nicely.</p>
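<p>For the <code>to_markdown</code> route mentioned above, a one-liner does it (assuming pandas 1.0+ with the <code>tabulate</code> package installed):</p>
<pre><code>print(df.to_markdown())
</code></pre>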
|
python|pandas|slack
| 8 |
1,906,406 | 16,488,497 |
python-daemon start multiple instances of same program and passing in instance specific arguments
|
<p>I have a log mining tool I have written. I can launch it using nohup and pass in arguments like what file it is to parse (it uses a platform independent tail class I wrote in Python). However I would like it to start as an init script or from the command line (as a daemon). And I would like to be able to start multiple instances if there is more than one log file to look at on the same server. </p>
<p>I have looked at the <a href="https://pypi.python.org/pypi/python-daemon/" rel="nofollow">python-daemon package</a> but it is unclear in the reference documents if it is possible to pass in <em>process/instance specific arguments</em>. e.g. like what log file each daemon instance of the program is supposed to scan. </p>
<p>One of the issues I am trying to get my head around is how to stop or restart the individual daemon instances created. </p>
|
<p>I've had to do the exact same thing today, multiple (same) apps that need to run as Daemons.
I did not use the new python-daemon package because I've never used it, but I've used the Daemon class from Sander Marechal (<a href="http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/" rel="nofollow">A simple unix/linux daemon in Python</a>) many times before.</p>
<p>I've created a simple test app, not the greatest python code, but it works as expected. The sample uses a single extra parameter that can be used like this: <code>./runsample.py start <param></code></p>
<p>You will see a new log file, and pid file created in /tmp for each running Daemon.</p>
<p>You can get the Daemon class from here: <a href="http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/" rel="nofollow">A simple unix/linux daemon in Python</a></p>
<p><strong>The Test App</strong></p>
<pre><code>import sys, time
from daemon import Daemon

#simple test app that writes to a file every second
#this is just to check that the correct daemons are running
class testapp(Daemon):
    ID = 0

    def __init__(self, id):
        print 'Init (ID): ' + id
        #set the params
        self.ID = id
        #set the pid file
        pid = '/tmp/testapp-' + str(id) + '.pid'
        #init the base with the pid file location
        Daemon.__init__(self, pid)

    #this is the overwritten method from the article by Sander Marechal
    # http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
    def run(self):
        while True:
            #open file, append mode
            f = open('/tmp/log-' + self.ID + '.log', 'a')
            #write
            f.write(str(time.time()))
            #close
            f.close()
            #wait
            time.sleep(1)
</code></pre>
<p><strong>The Init Script / Daemon</strong></p>
<pre><code>#!/usr/bin/env python
#
# Multiple daemons for the same app test
#
import sys
from testapp import testapp

#check if enough arguments are passed
if len(sys.argv) != 3:
    print "usage: %s start|stop|restart <param>" % sys.argv[0]
    sys.exit(2)

#get the extra arguments
id = sys.argv[2]
print 'Param (ID): ' + sys.argv[2]

#start the app with the parameters
daemon = testapp(id)

#from the article by Sander Marechal
# http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
if len(sys.argv) == 3:
    if 'start' == sys.argv[1]:
        print 'Start'
        daemon.start()
    elif 'stop' == sys.argv[1]:
        daemon.stop()
        print 'Stop'
    elif 'restart' == sys.argv[1]:
        print 'Restarting...'
        daemon.restart()
        print 'Restarted'
    else:
        print "Unknown command"
        sys.exit(2)
    sys.exit(0)
else:
    print "usage: %s start|stop|restart" % sys.argv[0]
    sys.exit(2)
</code></pre>
<p>I hope this works for you as well.</p>
|
python|python-daemon
| 2 |
1,906,407 | 16,108,446 |
Drawing a hollow asterisk square
|
<p>I'm trying to figure out how to turn my <em>whole</em> square into a hollow one. The few things I've tried so far haven't been very successful as I end up getting presented with a rather distorted triangle! </p>
<p>This is the code I have to form my square currently ..</p>
<pre><code>size = 5
for i in range(size):
    print ('*' * size)
</code></pre>
<p>When run, this is the result ..</p>
<pre><code>*****
*****
*****
*****
*****
</code></pre>
<p>Do I need to run <code>if</code> or <code>while</code> statements when <code>size</code> is greater than 3 to specify a condition?</p>
|
<p>I think this is what you want to do:</p>
<pre><code>m, n = 10, 10
for i in range(m):
    for j in range(n):
        print('*' if i in [0, m-1] or j in [0, n-1] else ' ', end='')
    print()
</code></pre>
<p>Output:</p>
<pre><code>**********
*        *
*        *
*        *
*        *
*        *
*        *
*        *
*        *
**********
</code></pre>
<p>You can also draw a triangle this way:</p>
<pre><code>m, n = 10, 10
for i in range(m):
    for j in range(n):
        print('*' if i in [j, m-1] or j == 0 else ' ', end='')
    print()
</code></pre>
<p>Output:</p>
<pre><code>*
**
* *
*  *
*   *
*    *
*     *
*      *
*       *
**********
</code></pre>
|
python
| 8 |
1,906,408 | 38,839,401 |
Pygame Button getRect Collidepoint not working?
|
<p>I have finished the main code for my game and I have started on making a menu screen. I can display the buttons on the screen just fine but when I click somewhere I get this <a href="http://i.stack.imgur.com/pOjmi.png" rel="nofollow">Error</a>:</p>
<p>How can I go about fixing this? If I didn't make anything clear in this question please tell me so I can clarify. Thanks!</p>
<p>Here is my code for the menuscreen:</p>
<pre><code>import pygame
import random
import time

pygame.init()

#colours
white = (255,255,255)
black = (0,0,0)
red = (255,0,0)
green = (0,155,0)
blue = (50,50,155)

display_width = 800
display_height = 600

gameDisplay = pygame.display.set_mode((display_width,display_height))
pygame.display.set_caption('Numeracy Ninjas')
clock = pygame.time.Clock()

#Fonts
smallfont = pygame.font.SysFont("comicsansms", 25)
medfont = pygame.font.SysFont("comicsansms", 50)
largefont = pygame.font.SysFont("comicsansms", 75)

#Sprites
img_button_start = pygame.image.load('Sprites/Buttons/button_start.png')
img_button_options = pygame.image.load('Sprites/Buttons/button_options.png')

gameDisplay.fill(white)
pygame.display.update()

class Button(pygame.sprite.Sprite):
    def __init__(self, image, buttonX, buttonY):
        super().__init__()
        gameDisplay.blit(image, (buttonX, buttonY))
        pygame.display.update()
        selfrect = image.get_rect()

    def wasClicked(event):
        if selfrect.collidepoint(event.pos):
            return True

def gameIntro():
    buttons = pygame.sprite.Group()
    button_start = Button(img_button_start, 27, 0)
    button_options = Button(img_button_options, 27, 500)
    buttons.add(button_start)
    buttons.add(button_options)
    print(buttons)

    #main game loop
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.MOUSEBUTTONDOWN:
                print(event.pos)
                #check for every button whether it was clicked
                for btn in buttons:
                    print('forbtninbuttons')
                    if btn.wasClicked():
                        print('clicked!')
            if event.type == pygame.QUIT:
                pygame.quit()
</code></pre>
|
<p>You haven't declared any attributes for your class, just local variables. Try doing <code>self.selfrect = image.get_rect()</code> in your initializer, and in your <code>wasClicked(event)</code> method do:</p>
<pre><code>def wasClicked(self, event):
    if self.selfrect.collidepoint(event.pos):
        return True
</code></pre>
<p>It's usually convention to name your rect variable just <em>rect</em> though.</p>
<pre><code>class Button(pygame.sprite.Sprite):
    def __init__(self, image, buttonX, buttonY):
        super().__init__()
        # This code doesn't make sense here. It should be inside your game loop.
        # gameDisplay.blit(image, (buttonX, buttonY))
        # pygame.display.update()
        self.image = image  # It's usually good to have a reference to your image.
        self.rect = image.get_rect()

    def wasClicked(self, event):
        if self.rect.collidepoint(event.pos):
            return True
        else:
            return False
</code></pre>
|
python|button|menu|pygame
| 3 |
1,906,409 | 40,545,505 |
Force twisted reactor to stop on sigterm
|
<p>I have a GCE server setup to handle some data analysis. I can communicate with it via <code>ws</code> using <code>twisted</code>. I am the only client of this server.</p>
<p>System is setup like this:</p>
<pre><code>spawn_multiprocessing_hierarchy()
reactor.run() # Blocks main thread
stop_everything_and_cleanup()
</code></pre>
<p>When I'm trying to stop the system while a client is connected, <code>reactor</code> will ignore (or perhaps delay indefinitely?) <code>SIGTERM</code> because it is handling the client's connection. However, every other part of the system is fault-tolerant and <code>reactor</code> <em>never</em> handles <em>any</em> critical data. It exists solely for monitoring purposes. This means that I could easily <code>SIGKILL</code> it were it not for the other <code>multiprocess.Process</code>es, which need to dump their in-memory data so that they can continue where they left off on the next launch.</p>
<p>Is it possible to have <code>SIGTERM</code> immediately (without waiting for running tasks in reactor to finish) drop any connections and stop the reactor?</p>
|
<p>Without seeing the rest of your code, it's difficult to say what the exact issue is. Generally, when the reactor doesn't stop, it's because a task is running in a thread or process. Twisted will try to do the "right thing" and will wait until all threads/processes are finished before exiting. You could add an event for when the reactor is stopped via <code>reactor.addSystemEventTrigger</code>.</p>
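<p>A minimal sketch of that approach (assuming the standard <code>twisted.internet.reactor</code>):</p>
<pre><code>from twisted.internet import reactor

def cleanup():
    # called while the reactor is shutting down, before it finally stops
    print("dropping connections / dumping state")

reactor.addSystemEventTrigger('before', 'shutdown', cleanup)
reactor.run()
</code></pre>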
|
python|twisted|reactor|twisted.internet|sigterm
| 0 |
1,906,410 | 40,616,049 |
executing python function on Linux
|
<p>I'm new to Python but would really like to execute the following function on a Linux server command line. Please help me figure out why nothing is printed when I execute the following script (test.py). To execute it I typed <code>python test.py</code>. Thank you.</p>
<pre><code>##!/usr/bin/python
def get_minimal_representation(pos, ref, alt):
    """
    Get the minimal representation of a variant, based on the ref + alt alleles in a VCF
    This is used to make sure that multiallelic variants in different datasets,
    with different combinations of alternate alleles, can always be matched directly.
    Note that chromosome is ignored here - in xbrowse, we'll probably be dealing with 1D coordinates
    Args:
        pos (int): genomic position in a chromosome (1-based)
        ref (str): ref allele string
        alt (str): alt allele string
    Returns:
        tuple: (pos, ref, alt) of remapped coordinate
    """
    pos = int(pos)
    # If it's a simple SNV, don't remap anything
    if len(ref) == 1 and len(alt) == 1:
        return pos, ref, alt
    else:
        # strip off identical suffixes
        while(alt[-1] == ref[-1] and min(len(alt),len(ref)) > 1):
            alt = alt[:-1]
            ref = ref[:-1]
        # strip off identical prefixes and increment position
        while(alt[0] == ref[0] and min(len(alt),len(ref)) > 1):
            alt = alt[1:]
            print "Alt: ", alt
            ref = ref[1:]
            print "Ref: ", ref
            pos += 1
            print "Pos: ", pos
        return pos, ref, alt
    print "the result is: ", get_minimal_representation( pos = 1001, ref = "CTCC", alt = "CCC,C,CCCC")
</code></pre>
|
<p>You are not calling the function.</p>
<p>Try</p>
<pre><code>if __name__ == '__main__':
print "the result is: ", get_minimal_representation( pos = 1001, ref = "CTCC", alt = "CCC,C,CCCC")
</code></pre>
<p>at the bottom of your file.</p>
<p>It should be like this:</p>
<pre><code>##!/usr/bin/python
def get_minimal_representation(pos, ref, alt):
    """
    Get the minimal representation of a variant, based on the ref + alt alleles in a VCF
    This is used to make sure that multiallelic variants in different datasets,
    with different combinations of alternate alleles, can always be matched directly.
    Note that chromosome is ignored here - in xbrowse, we'll probably be dealing with 1D coordinates
    Args:
        pos (int): genomic position in a chromosome (1-based)
        ref (str): ref allele string
        alt (str): alt allele string
    Returns:
        tuple: (pos, ref, alt) of remapped coordinate
    """
    pos = int(pos)
    # If it's a simple SNV, don't remap anything
    if len(ref) == 1 and len(alt) == 1:
        return pos, ref, alt
    else:
        # strip off identical suffixes
        while(alt[-1] == ref[-1] and min(len(alt),len(ref)) > 1):
            alt = alt[:-1]
            ref = ref[:-1]
        # strip off identical prefixes and increment position
        while(alt[0] == ref[0] and min(len(alt),len(ref)) > 1):
            alt = alt[1:]
            print "Alt: ", alt
            ref = ref[1:]
            print "Ref: ", ref
            pos += 1
            print "Pos: ", pos
        return pos, ref, alt

if __name__ == '__main__':
    print "the result is: ", get_minimal_representation( pos = 1001, ref = "CTCC", alt = "CCC,C,CCCC")
</code></pre>
|
python
| 4 |
1,906,411 | 68,108,910 |
Python Print Function Fails
|
<p>I get a syntax error when running this on my Raspberry Pi. Code was developed on my Mac and runs fine there. Any ideas what the cause may be?</p>
<pre><code>print(f"Updating information {datetime.now().isoformat()}")
^
SyntaxError: invalid syntax
</code></pre>
|
<p>Python 3's f-strings are available only from <code>Python 3.6</code> onwards.</p>
<p>You can follow this gist (<a href="https://gist.github.com/dschep/24aa61672a2092246eaca2824400d37f#installing-python-36-on-raspbian" rel="nofollow noreferrer">https://gist.github.com/dschep/24aa61672a2092246eaca2824400d37f#installing-python-36-on-raspbian</a>) to upgrade and then try the same command</p>
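<p>Alternatively, a quick workaround that avoids upgrading (an equivalent sketch using <code>str.format</code>, which works on Python 2.7 and every Python 3 version):</p>
<pre><code>from datetime import datetime

print("Updating information {}".format(datetime.now().isoformat()))
</code></pre>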
|
python
| 1 |
1,906,412 | 1,571,224 |
ActiveMQ : Use Django Auth with Stomp
|
<p>I am working on <a href="http://blog.gridspy.co.nz/2009/09/introducing-the-nexus.html" rel="nofollow noreferrer">power monitoring</a> and want to send live power data to authorised users only. Some users have opted to install power sensors in their houses, others are viewing those sensors. Each sensor sends samples to a <a href="http://twistedmatrix.com/" rel="nofollow noreferrer">Twisted</a> backend - the goal is to have this backend forward the data to Javascript running in the browser.</p>
<p>My current solution to forwarding the data is an <a href="http://orbited.org/" rel="nofollow noreferrer">Orbited</a> server and an instance of <a href="http://www.morbidq.com/" rel="nofollow noreferrer">MorbidQ</a> (MorbidQ is a Stomp server). Each building in my system (<a href="http://your.gridspy.co.nz/powertech/" rel="nofollow noreferrer">example here</a>) has its own channel for updates. The twisted backend broadcasts the data through the MorbidQ channel to anyone watching, but anyone can watch. There is an entry on my blog about <a href="http://blog.gridspy.co.nz/2009/10/realtime-data-from-sensors-to-browsers.html" rel="nofollow noreferrer">the data flow from sensor to site</a></p>
<p><strong>For many buildings, I only want a couple of users to be able to see live data in a given building. I would like to use Django Auth if possible, or some sort of workaround if not.</strong></p>
<p>What is the easiest way to secure these channels per user?
Can I use Django Auth?
Should I use RabbitMQ or ActiveMQ instead of MorbidQ?
What measures can I take to keep this solution secure?</p>
<p>For coding I am most confident in C++ and Python.</p>
<p>Thanks!</p>
|
<p>Reviving an old thread: MorbidQ is not meant for production use AFAIK. ActiveMQ is a much more robust beast and provides much better ways to handle user-based authentication. I wrote <a href="http://pythonware.blogspot.com/2011/12/user-based-authentication-using-orbited.html" rel="nofollow">this</a> back in 2010 which deals with static user authentication - but ActiveMQ allows you to pass a dynamic list of users for authentication, which can come from whichever backend the application has available. The post I mentioned above does not deal with it, but a little digging into the ActiveMQ authentication/security manual section (plus some Java knowledge) can enable a pretty nasty setup for such use. If LDAP is available, even better.</p>
|
python|django|stomp|orbited
| 1 |
1,906,413 | 63,000,519 |
How to correctly use 'percentParent' in sunburst graphs with only 1 central node?
|
<p>So I'm trying to build a Plotly sunburst graph that displays <code>percentParent</code> for each element in the graph. This works fine for all elements except for when I have only a single option for the central node/ring/whatever (see example below)</p>
<p><a href="https://i.stack.imgur.com/9KdMY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9KdMY.png" alt="enter image description here" /></a></p>
<p>Since the central node obviously does not have a parent, it appears to bug out and display the bracketed call on <code>percentParent</code> from the <code>texttemplate</code> field. However, if there are 2 (or more) central nodes, it automatically calculates each one's percentage of the sum total of the two.</p>
<p>My question is:
When I have only 1 central node, how can I either hide this field for the central node only or make it correctly display "100%"?</p>
<p>Example code:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import plotly.graph_objects as go
df = pd.DataFrame({'node_names': ['Center', 'Yes', 'No'],
'node_parent': ['', 'Center', 'Center'],
'node_labels': ['Center', 'Center_Yes', 'Center_No'],
'node_counts': [1000, 701, 299]})
fig = go.Figure(
data=go.Sunburst(
ids=df["node_names"],
labels=df["node_labels"],
parents=df["node_parent"],
values=df["node_counts"],
branchvalues="total",
texttemplate = ('%{label}<br>%{percentParent:.1%}'),
),
)
fig.show()
</code></pre>
|
<p>Here is a possible way I found by reading the help (<code>go.Sunburst.texttemplate?</code>):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import plotly.graph_objects as go
df = pd.DataFrame({'node_names': ['Center', 'Yes', 'No'],
'node_parent': ['', 'Center', 'Center'],
'node_labels': ['Center', 'Center_Yes', 'Center_No'],
'node_counts': [1000, 701, 299]})
fig=go.Figure(
data=go.Sunburst(
ids=df["node_names"],
labels=df["node_labels"],
parents=df["node_parent"],
values=df["node_counts"],
branchvalues="total",
texttemplate = ('%{label}',
'%{label}<br>%{percentParent:.1%}',
'%{label}<br>%{percentParent:.1%}',
'%{label}<br>%{percentParent:.1%}'),
),
)
fig.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/LUXfy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LUXfy.png" alt="enter image description here" /></a></p>
<p>You could also modify the first element in <code>texttemplate</code> to <code>'%{label}<br>100%'</code>.</p>
|
python|plotly|plotly-dash|sunburst-diagram
| 3 |
1,906,414 | 32,144,043 |
Is there any method in python like `lag.plot1` in r?
|
<p>I want to find a method in seaborn or statsmodels with the ability to produce scatterplot matrices like <code>lag.plot1</code> in R. I was able to implement a simple version as follows:</p>
<pre><code>In [74]: def lag_plot1(x, nrow, ncol):
...: import matplotlib.pyplot as plt
...: fig, axs = plt.subplots(nrow, ncol, figsize=(3*ncol, 3*nrow))
...: for row in range(nrow):
...: for col in range(ncol):
...: offset = row*ncol + col + 1
...: axs[row][col].scatter(x[offset:], x[:-offset], marker='o')
...: axs[row][col].set_ylabel('x(t)')
...: axs[row][col].set_title('x(t-%d)' % offset)
...: return fig
...:
In [75]: lag_plot1(recur, 4, 3)
</code></pre>
<p><a href="https://i.stack.imgur.com/YXlDw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YXlDw.png" alt="enter image description here"></a></p>
|
<p>There is a <code>lag_plot</code> in pandas <a href="http://pandas.pydata.org/pandas-docs/stable/visualization.html#lag-plot" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/visualization.html#lag-plot</a> but it doesn't plot the grid of plots for different lags, AFAICS.</p>
<p>statsmodels doesn't have a lag_plot, but there are still open issues to add more plots to support model diagnostics.</p>
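<p>For the single-lag case, a minimal sketch (assuming a recent pandas where <code>lag_plot</code> lives in <code>pandas.plotting</code>):</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import lag_plot

s = pd.Series(np.random.randn(200).cumsum())
lag_plot(s, lag=3)  # scatter of x(t) against x(t + 3)
plt.show()
</code></pre>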
|
python|r|statsmodels|seaborn
| 1 |
1,906,415 | 32,236,184 |
Error Time Limit exceeded Add digits of number
|
<p>I have used an iterative approach to add all the digits of a number until the sum is a single digit. My code is:</p>
<pre><code>def addDigits(self, num):
    self.x=num
    a=[]
    sum=0
    count=0
    count1=0
    p=0
    while((self.x)/10>0):
        while(self.x>0):
            self.x=self.x/10
            count=count+1
        self.x=num
        while(count>0):
            if(count==1):
                self.x=self.x%10
                sum=sum+self.x
            else:
                self.x=self.x/(10**(count-1))
                sum=sum+self.x
                self.x=num
                self.x=self.x%(10**(count-1))
            count=count-1
        self.x=sum
        num=self.x
    return self.x
</code></pre>
<p>I am getting a time-limit-exceeded error for inputs where sum > 10. Please suggest some ways to solve this problem so that the correct output is produced.</p>
|
<p>I simplified the algorithm; here are 2 implementations (with 1 and 2 nested loops, respectively):</p>
<pre><code>def addDigits(self, num):
    aux = num
    sum = 0
    while aux > 9:
        sum = sum + aux % 10
        aux = aux / 10
        if aux <= 9:
            sum = sum + aux
            aux = sum
            sum = 0
    return aux

def addDigits(self, num):
    aux = num
    sum = 0
    while True:
        while aux > 9:
            sum = sum + aux % 10
            aux = aux / 10
        sum = sum + aux
        aux = sum
        sum = 0
        if aux <= 9:
            break
    return aux
</code></pre>
<p><strong>Notes</strong>:</p>
<ul>
<li><p>I replaced <code>self.x</code> with <code>aux</code>; there's no reason to have a private member used just for storing a method internal (and temporary) data</p></li>
<li><p>Since the method does not depend on any class members, it can be decorated as <code>@classmethod</code></p></li>
</ul>
|
python
| 0 |
1,906,416 | 44,332,821 |
TypeError at /index/:Expected binary or unicode string, got <Image: 43474673_7bb4465a86.jpg>
|
<p>I want to use a retrained model to classify images using Django.</p>
<p>In my Django project:</p>
<blockquote>
<p>model.py:</p>
</blockquote>
<pre><code>from django.db import models

class Image(models.Model):
    photo = models.ImageField(null=True, blank=True)

    def __str__(self):
        return self.photo.name
</code></pre>
<blockquote>
<p>setttings.py</p>
</blockquote>
<pre><code>STATIC_URL = '/static/'
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'imageupload')
</code></pre>
<blockquote>
<p>urls.py</p>
</blockquote>
<pre><code>from django.conf.urls import url
from django.contrib import admin
from django.conf import settings
from django.conf.urls.static import static
from imageupload import views

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^index/', views.index, name='index'),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
</code></pre>
<blockquote>
<p>views.py</p>
</blockquote>
<pre><code>from django.shortcuts import render
from .form import UploadImageForm
from .models import Image
import os, sys
import tensorflow as tf

def index(request):
    if request.method == 'POST':
        form = UploadImageForm(request.POST, request.FILES)
        if form.is_valid():
            picture = Image(photo=request.FILES['image'])
            picture.save()
            #if os.path.isfile(picture.photo.url):
            os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
            image_path = picture
            # Read in the image_data
            image_data = tf.gfile.FastGFile(image_path, 'rb').read()
            # Loads label file, strips off carriage return
            label_lines = [line.rstrip() for line in
                           tf.gfile.GFile("retrained_labels.txt")]
            # Unpersists graph from file
            with tf.gfile.FastGFile("retrained_graph.pb", 'rb') as f:
                graph_def = tf.GraphDef()
                graph_def.ParseFromString(f.read())
                tf.import_graph_def(graph_def, name='')
            with tf.Session() as sess:
                # Feed the image_data as input to the graph and get first prediction
                softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
                predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data})
                # Sort to show labels of first prediction in order of confidence
                top_k = predictions[0].argsort()[-len(predictions[0]):][::-1]
                a = []
                label = []
                for node_id in top_k:
                    human_string = label_lines[node_id]
                    score = predictions[0][node_id]
                    a = [human_string, score]
                    label.append(a)
            return render(request, 'show.html', {'picture':picture, 'label':label})
    else:
        form = UploadImageForm()
    return render(request, 'index.html', {'form': form})
</code></pre>
<blockquote>
<p>index.html</p>
</blockquote>
<pre><code><p>hello index!</p>
<form method="post" enctype="multipart/form-data">
{% csrf_token %}
{{ form.as_p }}
<input type="submit" value="Submit" />
</form>
</code></pre>
<blockquote>
<p>show.html</p>
</blockquote>
<pre><code><h1>This is show!!</h1>
<img src="{{ picture.photo.url }}" />
<br>
<p>Picture'name is: </p>{{ picture.photo.name }}
<br>
<p>The picture's label:</p>
{{ label }}
<br>
</code></pre>
<p>I succeeded in uploading an image, but then the browser shows this error:</p>
<p><a href="https://i.stack.imgur.com/xYObJ.png" rel="nofollow noreferrer">the screenshot of the error </a></p>
<p>Thank you!!</p>
<blockquote>
<p>The question had solved!!This is the change:</p>
</blockquote>
<pre><code>image_path = picture.photo.path
</code></pre>
<p>and there are two more things to change:</p>
<pre><code>1. label_lines = [line.rstrip() for line in tf.gfile.GFile("imageupload/retrained_labels.txt")]
2. with tf.gfile.FastGFile("imageupload/retrained_graph.pb", 'rb') as f:
</code></pre>
<p>The change is to use a <code>relative path</code>.</p>
|
<p><code>image_path</code> is an image, not the path:</p>
<pre><code>...
image_path = picture
# Read in the image_data
image_data = tf.gfile.FastGFile(image_path, 'rb').read()
...
</code></pre>
<p>Get the image file's path and pass it to <code>FastGFile</code>.</p>
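<p>In Django terms (matching the edit in the question), that is the <code>ImageField</code>'s filesystem path:</p>
<pre><code>image_path = picture.photo.path  # absolute path on disk, not a URL
image_data = tf.gfile.FastGFile(image_path, 'rb').read()
</code></pre>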
|
django|tensorflow
| 0 |
1,906,417 | 43,981,231 |
Dlib "Error deserializing object of type short"
|
<p>I am getting an error on <code>pose_predictor = dlib.shape_predictor(predictor_model)</code> in dlib with Python.</p>
<pre><code>RuntimeError: Error deserializing object of type short
while deserializing a floating point number.
while deserializing a dlib::matrix
while deserializing object of type std::vector
while deserializing object of type std::vector
while deserializing object of type std::vector
</code></pre>
|
<p>I met the same issue; after re-downloading the file, it's OK now.</p>
<p>I downloaded it from
<a href="https://github.com/JeffTrain/selfie/blob/master/shape_predictor_68_face_landmarks.dat" rel="noreferrer">https://github.com/JeffTrain/selfie/blob/master/shape_predictor_68_face_landmarks.dat</a></p>
|
python|raspberry-pi|face-recognition|dlib
| 11 |
1,906,418 | 32,841,700 |
How to use Flot charts with Flask?
|
<p>I have a SQLAlchemy model in Flask holding series of timestamps and values. I'd like to draw a chart using Flot charts from these data but can't find any information on how to do this. </p>
<pre><code>class Reading(database.Model):
    __tablename__ = 'Readings'

    index = database.Column('id', database.Integer, primary_key = True)
    timestamp = database.Column('Timestamp', database.DateTime)
    value = database.Column('Value', database.Float)
</code></pre>
<p>I know how to get the data in Flask app and how to use Flot charts in html but have no idea how to connect these two.</p>
|
<p>You will need to convert the timestamp to milliseconds; other than that, it's quite easy to plot graphs with flot. You just need to create a list of datapoints, <code>[[timestamp1, value1], [timestamp2, value2], ...]</code> and pass it along with the render_template. Hopefully the example below can help you.</p>
<pre><code>def request_graph_data():
    data = [[0, 3], [4, 5], [8, 1], [9, 3]]
    return flask.render_template("graph.html",
                                 list_of_data=data)

<script type="text/javascript">
$(function() {
    var d1 = {{ list_of_data }};
    var d2 = [[0, 3], [4, 8], [8, 5], [9, 13]];

    // A null signifies separate line segments
    var d3 = [[0, 12], [7, 12], null, [7, 2.5], [12, 2.5]];

    $.plot("#placeholder", [ d1, d2, d3 ]);
});
</script>

<div class="demo-container">
    <div id="placeholder" class="demo-placeholder"></div>
</div>
</code></pre>
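<p>To feed real rows from the <code>Reading</code> model instead of the hard-coded list, a sketch of the millisecond conversion mentioned above (assuming a SQLAlchemy session is available):</p>
<pre><code>import calendar

# turn each Reading row into a flot-ready [milliseconds, value] pair
readings = session.query(Reading).all()
data = [[calendar.timegm(r.timestamp.utctimetuple()) * 1000, r.value]
        for r in readings]
</code></pre>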
|
python|flask|flot|flask-sqlalchemy
| 3 |
1,906,419 | 32,901,701 |
smtplib can't send mail: 550 5.7.1 This system is configured to reject spoofed sender addresses
|
<p>I got an error mail message:</p>
<pre><code>User_One@mycompany.com
Your message wasn't delivered due to a permission or security issue.
It may have been rejected by a moderator, the address may only accept e-mail
from certain senders, or another restriction may be preventing delivery.
smtp; 550 5.7.1 This system is configured to reject spoofed sender addresses> #SMTP#
Original message headers:
Return-Path: <User_One@mycompany.com>
Received: from localhost.localdomain (unknown [192.X.X.X]) (using
TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client
certificate requested) by smtpcompany.tw (Postfix) with ESMTPS id CB8752E03B7 for
<User_One@mycompany.com>; Fri, 2 Oct 2015 14:24:41 +0800 (CST)
Content-Type: multipart/mixed; boundary="===============1672092220=="
MIME-Version: 1.0
From: <User_One@mycompany.com>
To: <User_One@mycompany.com>
Subject: We got something
</code></pre>
<p>I've been stuck on this for a while and I don't know where it goes wrong.<br>
Using <code>smtplib.SMTP('smtp.gmail.com',587)</code> works well,<br>
but the company SMTP fails.<br>
I think this line is weird: </p>
<pre><code> Received: from localhost.localdomain (unknown [192.X.X.X])
</code></pre>
<p>Do you have any ideas?<br>
Please help me. Thank you.</p>
<p>Here is code :</p>
<pre><code>def send_email(mail_from, mail_to, subject, body):
    import smtplib
    from email.mime.multipart import MIMEMultipart

    fromaddr = mail_from
    toaddr = mail_to if type(mail_to) is list else [mail_to]
    msg = MIMEMultipart()
    msg['From'] = fromaddr
    msg['To'] = ", ".join(toaddr)
    msg['Subject'] = subject
    body = body

    server = smtplib.SMTP('mail.stmpcompany.tw', 25)
    server.set_debuglevel(True)
    server.starttls()
    text = msg.as_string()
    server.sendmail(fromaddr, toaddr, text)
    server.quit()
</code></pre>
|
<p>Your sender address (the From field in the email) must be an existing email address on the mailserver you use (mail.stmpcompany.tw).</p>
<p>The error message</p>
<blockquote>
<p>5.7.1 This system is configured to reject spoofed sender addresses </p>
</blockquote>
<p>occurs when you use an email address in the From field that either does not belong to a domain the mailserver is configured for, or the server additionally requires that a user with that email address actually exists. </p>
<p>Your test with Gmail worked because Google's mailserver doesn't check whether the sender's address is valid or not. </p>
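<p>In terms of the question's code, that means calling <code>send_email</code> with a <code>mail_from</code> that is a real mailbox on that server (hypothetical address shown):</p>
<pre><code># hypothetical sender that actually exists on mail.stmpcompany.tw
send_email('real.user@stmpcompany.tw', 'User_One@mycompany.com',
           'We got something', 'body text')
</code></pre>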
|
python-2.7|email
| 2 |
1,906,420 | 13,851,507 |
performing calculations on dictionary values
|
<p>I have a dictionary:</p>
<pre><code>adict = {'key1':{'t1':{'thedec':.078, 'theint':1000, 'thechar':100},
't2':{'thedec':.0645, 'theint':10, 'thechar':5},
't3':{'thedec':.0871, 'theint':250, 'thechar':45},
't4':{'thedec':.0842, 'theint':200, 'thechar':37},
't5':{'thedec':.054, 'theint':409, 'thechar':82},
't6':{'thedec':.055, 'theint':350, 'thechar':60}}}
</code></pre>
<p>I use the following loop so that I can pair the values of 'theint' in a vector so that ultimately I can easily perform statistical calculations on them:</p>
<pre><code>for k1 in adict:
    x = []
    for t, record in sorted(adict[k1].items(), key = lambda(k,v):v['thedec']):
        x.append(record['theint'])
    y = [0]*(len(x)/2)
    for i in xrange(0,len(x),2):
        y[i/2] = sum(x[i:i+2])
</code></pre>
<p>I am wondering if:</p>
<ol>
<li>There is a faster way to extract the values of 'theint' than to use .append()</li>
<li>There is a way that I can take, for example, the average of all 'theint' values</li>
<li>There is a way that I can loop through the dictionary in twos, so that I could skip the step of first copying all of the values and go immediately to adding them to a vector as summed pairs.</li>
</ol>
<p>Thanks for the help.</p>
|
<pre><code>>>> [x['theint'] + y['theint'] for x, y in zip(*[iter(sorted(adict['key1'].values(), key=operator.itemgetter('thedec')))] * 2)]
[759, 1010, 450]
</code></pre>
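<p>The one-liner pairs consecutive records with the <code>zip(*[iter(...)]*2)</code> grouping idiom; the same steps spelled out as a sketch:</p>
<pre><code>import operator

records = sorted(adict['key1'].values(), key=operator.itemgetter('thedec'))
it = iter(records)
pairs = zip(it, it)  # consecutive (record, record) pairs from one iterator
y = [r1['theint'] + r2['theint'] for r1, r2 in pairs]
</code></pre>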
|
python|list|dictionary|iterator
| 3 |
1,906,421 | 27,260,553 |
Problems installing IDLE2HTML
|
<p>I'm trying to install IDLE2HTML from the following <a href="https://pypi.python.org/pypi/IDLE2HTML/0.2" rel="nofollow">link</a> in order to print from IDLE in color. I use Python 2.7 on a Mac.</p>
<p>I follow the instructions from the <code>readme.txt</code>, but when I reload IDLE nothing has changed. I don't have the Save As HTML option.</p>
<p>In the past I've installed and reinstalled Python a few times.</p>
<p>When trying to find my <code>idlelib</code> folder (as written in the readme instructions), I eventually found it under the path <code>/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/</code></p>
<p>Is my problem that my python is in <code>/System/Library/</code>? </p>
<p>Any ideas please?</p>
|
<p>If the problem is not a configuration problem with the IDLE2HTML Python IDLE extension, like <a href="https://stackoverflow.com/a/27262794/3787376">this answer (for the same question)</a> explains, it may be a problem with the code itself (this has happened with Python 3 IDLE). This can be checked by running IDLE from the Python file that starts it (<code><PYTHON LIBRARY DIR>\idlelib\idle.py</code>, where <code><PYTHON LIBRARY DIR></code> is the location of the standard library for the Python installation) in a command-line interpreter (use <code><PYTHON EXECUTABLE DIR>\python.exe</code>, where <code><PYTHON EXECUTABLE DIR></code> is the location of the executable files that start Python). It will launch IDLE and show any errors in the C.L.I..</p>
<p>Quote from Python <em>idlelib</em> module docstring:</p>
<blockquote>
<p>Idle includes an interactive shell and editor. Use the files named idle.* to start Idle.</p>
</blockquote>
<p>IDLE2HTML real error example:
<a href="https://i.stack.imgur.com/P8Yca.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P8Yca.png" alt="Example image of IDLE2HTML error shown in C.L.I. - http://i.stack.imgur.com/P8Yca.png"></a></p>
<p>A common problem is that IDLE2HTML <strong>version 2.0</strong> (latest at time of writing) needs a simple <strong>code tweak to work with Python 3</strong>.x (a fixed version that works as part of the <a href="http://idlex.sourceforge.net/extensions.html" rel="nofollow noreferrer">IDLEX</a> Python module had this tweak). File comparison image below (left is the original file, right is the IDLEX version); if the Python version is Python 3.x, <code>import Tkinter</code> needs to be changed to <code>import tkinter as Tkinter</code> and <code>import tkFileDialog</code> needs to be changed to <code>import tkinter.filedialog as tkFileDialog</code>:<br>
<a href="https://i.stack.imgur.com/bvaJY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bvaJY.png" alt="The image described above."></a></p>
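<p>A sketch of the described tweak as a version-agnostic import block (not the extension's original code):</p>
<pre><code>try:
    import Tkinter        # Python 2
    import tkFileDialog
except ImportError:
    import tkinter as Tkinter                  # Python 3
    import tkinter.filedialog as tkFileDialog
</code></pre>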
|
python|python-2.7|python-idle
| 1 |
1,906,422 | 12,499,078 |
lxml: parse XML in Python between 2 databases using an XPath function - how to?
|
<p>So, here's my game plan .....</p>
<p>xpath in python<img src="https://i.stack.imgur.com/75xlS.png" alt="enter image description here"></p>
<p>Here's my xml</p>
<p>So this XML is stored in a database (ca); I need to extract this "data" to get all these value layers out and store them in another database (a):</p>
<p>Here's what I came up with so far...</p>
<pre><code>import pyodbc
from lxml import etree
from StringIO import StringIO

con_ca = pyodbc.connect(..)
con_a = pyodbc.connect(..)
cur_ca = con_ca.cursor()
cur_c = con_c.cursor()

cur_ca.execute("""
    select id_original, data
    from table
""")
rows_ca = cur_ca.fetchall()

for row in rows_ca:
    id_original = id_original
    x = str(row.data)
    root = etree.fromstring(x)
    BValid = etree.XPath('/Data/Response/Detail/B/Valid')
    BPass = etree.XPath('/Data/Response/Detail/B/Pass')
    BDetails = etree.XPath('/Data/Response/Detail/B/Details')
    BCode = etree.XPath('/Data/Response/Detail/B/Code')
    BDecisionS = etree.XPath('/Data/Response/Detail/B/Decision/Result')
    BDecisionB = etree.XPath('/Data/Response/Detail/B/Decision/Bucket')
    con_a.execute("""
        INSERT INTO table2 (id_original, BValid, BPass, BDetails, BCode, BDecisionS, BDecisionB)
        VALUES(?, ?, ?, ?, ?, ?, ?)
    """)
</code></pre>
<p>.. everything works out, except that after fetchall() I was only able to get ('//text'): how can I use XPath to go into a specific node and get the value or text from this example?</p>
|
<pre><code>(b_valid_text,) = root.xpath('/Data/Response/Detail/B/Valid/text()')
</code></pre>
<p>Thanks Andrean! Just in case someone needs this answer: this works!!</p>
|
python|xml|database|parsing
| 0 |
1,906,423 | 12,328,975 |
Sqlalchemy session.flush with threading
|
<p>I am currently using scoped_session provided by sqlalchemy with autocommit=True and autoflush=True. </p>
<p>I notice that autoflush is not called properly, as some of the updated results are not flushed when my script finishes executing. </p>
<p>Is autoflush not meant to be run with scoped_session in a multithreaded environment? </p>
|
<blockquote>
<p>Is autoflush not meant to be run with scoped_session in a multithreaded environment? </p>
</blockquote>
<p>there is no such restriction, no.</p>
<blockquote>
<p>I notice that autoflush is not called properly, as some of the updated results are not flushed when my script finishes executing. </p>
</blockquote>
<p>This is a misunderstanding of autoflush. Autoflush is intended to flush pending data to the database before a query emits a SELECT to the database. It does not provide the feature however that data is flushed immediately as each attribute of an object is changed, as this would be very inefficient and is not feasible with any kind of ORM, unit of work or not. So if you modify a bunch of objects, then throw away the Session without further interaction with it, those pending changes are lost.</p>
<p>Autoflush is intended to be used within the context of a transaction. In its default mode of usage, the Session begins a transaction for you, and you only need call commit() when a series of changes are ready to be finalized. See the docs for background <a href="http://docs.sqlalchemy.org/en/rel_0_7/orm/session.html#flushing" rel="nofollow">http://docs.sqlalchemy.org/en/rel_0_7/orm/session.html#flushing</a> as well as the strong recommendations to <strong>avoid autocommit</strong> at <a href="http://docs.sqlalchemy.org/en/rel_0_7/orm/session.html#autocommit-mode" rel="nofollow">http://docs.sqlalchemy.org/en/rel_0_7/orm/session.html#autocommit-mode</a> .</p>
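<p>A minimal sketch of that recommended pattern (hypothetical <code>engine</code>, <code>Model</code> and <code>some_id</code>; sessionmaker defaults, i.e. no autocommit):</p>
<pre><code>from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine)  # defaults: autocommit=False, autoflush=True
session = Session()
obj = session.query(Model).get(some_id)
obj.value = 42      # pending change, held in the session
session.commit()    # flushes pending changes and commits the transaction
</code></pre>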
|
python|sqlalchemy
| 0 |
1,906,424 | 23,166,593 |
How to draw a list of sprites in Pygame
|
<p>I am asking if it is possible to draw a list of sprites in Pygame, and if so how?</p>
<p>I am attempting to draw from a two dimensional list, that will draw the map</p>
<p>import pygame</p>
<pre><code>BLACK = (0, 0, 0)
WHITE = (255, 255, 255)
RED = (255, 0, 0)
screen_width = 700
screen_height = 400
screen = pygame.display.set_mode([screen_width, screen_height])
pygame.init()
image = pygame.image.load("Textures(Final)\Grass_Tile.bmp").convert()
imagerect = image.get_rect()
tilemap = [
    [image, image, image],
    [image, image, image]
]

done = False
clock = pygame.time.Clock()

while not done:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            done = True

    screen.fill(BLACK)
    screen.blit(tilemap, imagerect)
    clock.tick(20)
    pygame.display.flip()

pygame.quit()
</code></pre>
<p>I know I can do something much simpler, like drawing each image. But, I am wondering if I can do this, so in the future when I have more sprites I can just add to the list, and create the map.</p>
<p>Thank you!</p>
|
<p>I don't believe this is quite possible as you describe it, but if these images aren't going to change often I would suggest pre-blitting all your map images to a pygame Surface which can later be used, using something like:</p>
<pre><code>map_surface = pygame.Surface(( len(tilemap[0])*TILE_WIDTH, len(tilemap)*TILE_HEIGHT ))
for y,row in enumerate(tilemap):
    for x,tile_surface in enumerate(row):
        map_surface.blit(tile_surface,(x*TILE_WIDTH,y*TILE_HEIGHT))
</code></pre>
<p>and then later you can simply use:</p>
<pre><code>screen.blit(map_surface)
</code></pre>
<p>One thing to note is that even if your map does change, you will only have to blit the changed tiles onto the <code>map_surface</code> surface, and not have to recreate the whole thing.</p>
|
python|pygame
| 0 |
1,906,425 | 23,303,165 |
Python: method with variable name for setting value of an object
|
<p>Let's have a simple class with values and methods to set them:</p>
<pre><code>class Object:
    def __init__(self, id):
        self.__id = id
        self.__value1 = None
        self.__value2 = None
        # ... etc

    def set_value1(self, value1):
        self.__value1 = value1

    def set_value2(self, value2):
        self.__value2 = value2

    # ... etc
</code></pre>
<p>Can I somehow merge these .set_valueX(valueX) functions into one? How is that done, and can it be done easily without importing libraries?</p>
|
<p>It can be done using <code>setattr</code>:</p>
<pre><code>class obj(object):
    def __setter(self, **kwargs):
        key, value = kwargs.items()[0]
        setattr(self, "__" + key, value)

    def __init__(self, id):
        self.id = id
        self.__value1 = None
        self.__value2 = None
        # You should assign setter as the set function like this:
        self.set_value1 = self.__setter
        self.set_value2 = self.__setter
        # and so on...
</code></pre>
<p>The <code>__init__</code> method could be made shorter using the same technique.</p>
|
python|class|variables|object|methods
| 0 |
1,906,426 | 23,164,665 |
Python: Create a Map (tuple) from a text file
|
<p>I have a text file with the following data.</p>
<pre><code>river,4
-500, -360
-500, 360
500, 360
500,-360
sand, 3
400, 300
500, 300
200, 100
</code></pre>
<p>My question is: I need to take this file, load it, and create a <code>tuple</code> that looks like the following:</p>
<pre><code>block=("river",4,(-500, -360),(-500, 360),(500, 360),(500,-360)), ("sand", 3,(400,300), (500, 300), (200, 100))
</code></pre>
<p>This is my code so far</p>
<pre><code>file=open("file.txt", "r")
lineString=file.readlines()
</code></pre>
|
<p>This will give you what you want:</p>
<pre><code>import csv

output = []
block = ()
with open('input_file') as in_file:
    csv_reader = csv.reader(in_file)
    for row in csv_reader:
        output.append(tuple(row))

first_element = output[0]
a, b, c, d = output[1:]
block = (first_element[0], a, b, c, d)
</code></pre>
<p>prints</p>
<pre><code>("river",(-500, -360),(-500, 360),(500, 360),(500,-360))
</code></pre>
<p>I must say your way of organizing the data makes no sense to me. The above code will work only when there are 4 lines after the <code>river, 4</code> line. If there are more, replace:</p>
<pre><code>a, b, c, d = output[1:]
block = (first_element[0], a, b, c, d)
</code></pre>
<p>with </p>
<pre><code>block = (first_element[0], output[1:])
</code></pre>
<p>But in this case the output will be:</p>
<pre><code>("river",[(-500, -360),(-500, 360),(500, 360),(500,-360)])
</code></pre>
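<p>If you want the exact nested-tuple layout from the question, including the <code>sand</code> section, a sketch that reads each <code>name,count</code> header and then that many coordinate lines might look like this (converting fields to <code>int</code> along the way):</p>
<pre><code>def load_blocks(path):
    blocks = []
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]
    i = 0
    while i < len(lines):
        name, count = lines[i].split(',')   # e.g. 'river', '4'
        count = int(count)
        points = tuple(tuple(int(v) for v in lines[i + j + 1].split(','))
                       for j in range(count))
        blocks.append((name, count) + points)
        i += count + 1
    return tuple(blocks)

block = load_blocks('file.txt')
# (('river', 4, (-500, -360), (-500, 360), (500, 360), (500, -360)),
#  ('sand', 3, (400, 300), (500, 300), (200, 100)))
</code></pre>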
|
python|python-3.x
| 0 |
1,906,427 | 41,697,455 |
How can I pull only Xth and Nth item from my huge list?
|
<pre><code>soup = BeautifulSoup(c, 'html.parser')
text = soup.find('p').getText().split()
# I want to print only every 8th and 23th item
print text
print len(text)
</code></pre>
<p>and there is an output containing a huge list.</p>
<pre><code>[u'15', u'Jan', u'Moscow', u'(DME)', u'Geneva', u'(GVA)', u'A319',...
2355
</code></pre>
<p>In my case I need to get only the <strong>8th and 23rd items</strong>. I would love to use a list comprehension, but I am not sure how to do it. I would appreciate your help.</p>
<p>Thanks</p>
|
<pre><code>import requests, bs4
url = 'http://therewithu.com/flights/'
r = requests.get(url)
soup = bs4.BeautifulSoup(r.text, 'lxml')
lines = soup.find(class_="entry-content").p.text.splitlines()
for line in lines:
l = line.strip().split('\t')
print(l)
</code></pre>
<p>out:</p>
<pre><code>['15 Jan', 'Moscow (DME)', 'Geneva (GVA)', 'A319 (HB-IPT)', '3:15', '4:30 PM', '5:16 PM', '6:20 PM']
['14 Jan', 'Moscow (DME)', 'Geneva (GVA)', 'A320 (HB-IJN)', '3:14', '4:30 PM', '5:13 PM', '6:20 PM']
['13 Jan', 'Moscow (DME)', 'Geneva (GVA)', 'A320 (HB-JLR)', '3:19', '4:30 PM', '5:59 PM', '6:20 PM']
['12 Jan', 'Moscow (DME)', 'Geneva (GVA)', 'A320 (HB-IJH)', '3:26', '4:30 PM', '4:54 PM', '6:20 PM']
['11 Jan', 'Moscow (DME)', 'Geneva (GVA)', 'A320 (HB-IJN)', '3:15', '4:30 PM', '4:51 PM', '6:20 PM']
['10 Jan', 'Moscow (DME)', 'Geneva (GVA)', 'A320 (HB-JLQ)', '3:03', '4:30 PM', '6:17 PM', '6:20 PM']
['09 Jan', 'Moscow (DME)', 'Geneva (GVA)', 'A320 (HB-JLS)', '3:14', '4:30 PM', '5:17 PM', '6:20 PM']
['08 Jan', 'Moscow (DME)', 'Geneva (GVA)', 'A320 (HB-JLS)', '3:03', '4:30 PM', '5:48 PM', '6:20 PM']
['07 Jan', 'Moscow (DME)', 'Geneva (GVA)', 'A320 (HB-JLQ)', '3:14', '4:30 PM', '5:12 PM', '6:20 PM']
['06 Jan', 'Moscow (DME)', 'Geneva (GVA)', 'A320 (HB-IJQ)', '3:08', '4:30 PM', '5:20 PM', '6:20 PM']
['05 Jan', 'Moscow (DME)', 'Geneva (GVA)', 'A320 (HB-IJE)', '3:09', '4:30 PM', '5:24 PM', '6:20 PM']
['04 Jan', 'Moscow (DME)', 'Geneva (GVA)', 'A320 (HB-IJD)', '3:16', '4:30 PM', '5:16 PM', '6:20 PM']
</code></pre>
<p>You split the wrong thing: split each line into a list of fields, then use an index to get the items you need.</p>
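<p>If you really do want every 8th and 23rd element of the flat token list from your original code, slicing covers that; a sketch:</p>
<pre><code>every_8th = text[7::8]      # items 8, 16, 24, ... (1-based counting)
every_23rd = text[22::23]   # items 23, 46, 69, ...
</code></pre>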
|
python|list|beautifulsoup
| 3 |
1,906,428 | 47,281,931 |
How to get columns name in mysqldb with a Python 2.7?
|
<p>If I use a <code>select *</code> query it works well, but when I try to query the column names too, it isn't working (maybe because I have a column called "FROM", but that's why I used 'FROM'!?)</p>
<p>Here my code:</p>
<pre><code>connection = MySQLdb.connect(host='localhost',
user='admin',
passwd='',
db='database1',
use_unicode=True,
charset="utf8")
cursor = connection.cursor()
query = """ select ACTUAL_TIME, 'FROM, ID
union all
select ACTUAL_TIME, FROM , ID
from TEST
into outfile '/tmp/test.csv'
fields terminated by ';'
enclosed by '"'
lines terminated by '\n';
"""
cursor.execute(query)
connection.commit()
cursor.close()
</code></pre>
<p>I get this error message:</p>
<pre><code>raise errorvalue
_mysql_exceptions.OperationalError: (1054, "Unknown column 'ACTUAL_TIME' in 'field list'")
</code></pre>
<p>EDIT: SHOW CREATE TABLE TEST;</p>
<pre><code>| TEST | CREATE TABLE `TEST` (
`ACTUAL_TIME` varchar(100) DEFAULT NULL,
`FROM` varchar(100) DEFAULT NULL,
`STATUS` varchar(100) DEFAULT NULL,
`ID` int(10) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=76287 DEFAULT CHARSET=utf8 |
</code></pre>
|
<p>Try this:</p>
<pre><code>connection = MySQLdb.connect(host='localhost',
user='admin',
passwd='',
db='database1',
use_unicode=True,
charset="utf8")
cursor = connection.cursor()
query = """ select 'ACTUAL_TIME', 'FROM', 'ID' -- add single quotes
union all
select `ACTUAL_TIME`, `FROM`, `ID` -- add here backtick in column names
from TEST
into outfile '/tmp/test.csv'
fields terminated by ';'
enclosed by '"'
lines terminated by '\n';
"""
cursor.execute(query)
connection.commit()
cursor.close()
</code></pre>
<p>Alternatively, you can get the column names with <code>SHOW COLUMNS FROM TEST;</code></p>
<p>or :</p>
<pre><code>cursor.execute("SELECT * FROM table_name LIMIT 0")
print cursor.description
</code></pre>
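<p>With the <code>cursor.description</code> route, the first element of each entry is the column name, so you can pull just the names out; a small sketch:</p>
<pre><code>cursor.execute("SELECT * FROM TEST LIMIT 0")
column_names = [desc[0] for desc in cursor.description]
print column_names   # ['ACTUAL_TIME', 'FROM', 'STATUS', 'ID']
</code></pre>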
|
python|mysql|sql|mysql-python
| 1 |
1,906,429 | 47,292,798 |
Command "python setup.py egg_info" failed with error code 1 in c:\temp\pip-build-9s6c_h\pyenchant\
|
<p>Having trouble installing <code>pyenchant</code>:</p>
<pre><code>Command "python setup.py egg_info" failed with error code 1 in c:\temp\pip-build-9s6c_h\pyenchant\
</code></pre>
<p>I tried using <code>pip install --upgrade setuptools</code> but that didn't help at all.</p>
<p>Not sure what to do.</p>
<p>EDIT:</p>
<p>Additional traceback:</p>
<pre><code>Collecting pyenchant
Using cached pyenchant-1.6.11.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "c:\temp\pip-build-farfu_\pyenchant\setup.py", line 212, in <module>
import enchant
File "enchant\__init__.py", line 92, in <module>
from enchant import _enchant as _e
File "enchant\_enchant.py", line 145, in <module>
raise ImportError(msg)
ImportError: The 'enchant' C library was not found. Please install it via your OS package manager, or use a pre-built binary wheel from PyPI.
</code></pre>
|
<p>I think you can use the whl file instead, for example:</p>
<pre><code>pip install name.whl
</code></pre>
<p>I also encountered this problem today. Fortunately, I used this method to solve it.</p>
<pre><code>pip install basemap-1.1.0-cp36-cp36m-win_amd64.whl
</code></pre>
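<p>For <code>pyenchant</code> itself the equivalent would be something like the following; the wheel filename here is purely illustrative (it depends on the version and your platform), so download the matching wheel from PyPI first:</p>
<pre><code>pip install pyenchant-1.6.11-py2.py3-none-win_amd64.whl
</code></pre>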
|
python|pip|installation|pyenchant
| 0 |
1,906,430 | 57,351,741 |
Add a new column to dataframe and add unique values to each row
|
<p>I have a dataframe with two columns: id_code & diagnosis</p>
<p>I have a folder of images whose filenames match <code>id_code</code>. I want to add new columns to the dataframe: image, height, and width.</p>
<p>I have the following code:</p>
<pre><code>for idx, row in df_train.iterrows():
img = Image.open('train_images/{}.png'.format(row['id_code']))
height, width = img.size
row['image'] = img
row['height'] = height
row['width'] = width
</code></pre>
<p>I tried using <code>df_train.iloc[idx]['image'] = img</code> instead of <code>row['image'] = img</code> and so on for the rest of the lines of code but that would give me an error.</p>
<p>This code updates the row copy but doesn't update the dataframe itself.</p>
|
<p>We should assign it cell by cell:</p>
<pre><code>l = []
for idx, row in df_train.iterrows():
    img = Image.open('train_images/{}.png'.format(row['id_code']))
    l.append(img)
    width, height = img.size   # note: PIL's Image.size is (width, height)
    df_train.loc[idx, 'height'] = height   # assign inside the loop, row by row
    df_train.loc[idx, 'width'] = width
df_train['img'] = l
</code></pre>
|
python|python-3.x|pandas|dataframe
| 2 |
1,906,431 | 33,663,657 |
stack function python error
|
<p>I created the following <code>stack</code> class for a project and am having trouble getting it to function properly. I can't tell if I made the error or if it is in the main function I was given by my TA; anyway, here is my code:</p>
<pre><code>class Stack:
#takes in the object as input argument
#will not return anything
def __init__(self):
#initialise an instance variable to an empty list.
self.items=[]
#takes in the object as input argument
#return value Type: True or False
def isEmpty(self):
#check if the list is empty or not. If empty, return True else return False
if self.items == []:
return True
else:
return False
#takes in the object as the first argument
#takes the element to be inserted into the list as the second argument
#should not return anything
def push(self, x):
#add the element to be inserted at the end of the list
self.items.append(x)
#takes in the object as the input argument
#if the list is not empty then returns the last element deleted from the list. If the list is empty, don't return anything
def pop(self):
#check if the list is Empty
#if Empty: print the list is empty
#if the list is not empty, then remove the last element from the list and return it
if self.isEmpty()==True:
print("the list is empty")
else:
return self.items.pop()
#takes in the object as the input argument
#should not return anything
def printContents(self):
#if the list is not empty, then print each element of the list
print("The content of the list is", self.items)
</code></pre>
<p>Based on the comments, can anyone give me any advice on how I might make this work more appropriately? Sorry, I am not a computer scientist and I am trying my hardest to understand classes and functions for my Python class.</p>
<pre><code>from stack import *
def main():
s = Stack()
s.push(1)
s.pop()
s.pop()
s.push(2)
s.push(3)
s.push(4)
s.printContents()
main()
</code></pre>
|
<p>You should take a good look at spacing and indentation. For example, <code>printContents</code> is not properly aligned. Note that proper indentation is very, very important in Python.</p>
<p>Also make sure you are actually printing in <code>printContents</code>. This should work:</p>
<pre><code>class Stack:
#takes in the object as input argument
#will not return anything
def __init__(self):
#initialise an instance variable to an empty list.
self.items=[]
#takes in the object as input argument
#return value Type: True or False
def isEmpty(self):
#check if the list is empty or not. If empty, return True else return False
if self.items == []:
return True
else:
return False
#takes in the object as the first argument
#takes the element to be inserted into the list as the second argument
#should not return anything
def push(self, x):
#add the element to be inserted at the end of the list
self.items.append(x)
#takes in the object as the input argument
#if the list is not empty then returns the last element deleted from the list. If the list is empty, don't return anything
def pop(self):
#check if the list is Empty
#if Empty: print the list is empty
#if the list is not empty, then remove the last element from the list and return it
if self.isEmpty():
print("the list is empty")
else:
return self.items.pop()
#takes in the object as the input argument
#should not return anything
def printContents(self):
#if the list is not empty, then print each element of the list
print("the contents of the list are", self.items)
def main():
s = Stack()
s.push(1)
s.pop()
s.pop()
s.push(2)
s.push(3)
s.push(4)
s.printContents()
main()
</code></pre>
<p>You can see it working online here:</p>
<p><a href="https://repl.it/BZW4" rel="nofollow">https://repl.it/BZW4</a></p>
|
python|stack
| 0 |
1,906,432 | 46,762,633 |
plot ellipse in a seaborn scatter plot
|
<p>I have a data frame in pandas format (pd.DataFrame) with columns = [z1,z2,Digit], and I did a scatter plot in seaborn:</p>
<pre><code>dataframe = dataFrame.apply(pd.to_numeric, errors='coerce')
sns.lmplot("z1", "z2", data=dataframe, hue='Digit', fit_reg=False, size=10)
plt.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/cEqqg.png" alt="hidden space[![]">
What I want to is plot an ellipse around each of these points. But I can't seem to plot an ellipse in the same figure.</p>
<p>I know the normal way to plot an ellipse is like:</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
elps = Ellipse((0, 0), 4, 2,edgecolor='b',facecolor='none')
a = plt.subplot(111, aspect='equal')
a.add_artist(elps)
plt.xlim(-4, 4)
plt.ylim(-4, 4)
plt.show()
</code></pre>
<p>But because I have to do "a = plt.subplot(111, aspect='equal')", the plot will be on a different figure. And I also can't do:</p>
<pre><code>a = sns.lmplot("z1", "z2", data=rect, hue='Digit', fit_reg=False, size=10)
a.add_artist(elps)
</code></pre>
<p>because the 'a' returned by sns.lmplot() is of "seaborn.axisgrid.FacetGrid" object. Any solutions? Is there anyway I can plot an ellipse without having to something like a.set_artist()?</p>
|
<p><a href="https://seaborn.pydata.org/generated/seaborn.lmplot.html" rel="nofollow noreferrer">Seaborn's lmplot()</a> uses a <code>FacetGrid</code> object to do the plot, and therefore your variable <code>a = sns.lmplot(...)</code> is a reference to that <code>FacetGrid</code> object.</p>
<p>To add your ellipse, you need a reference to the <code>Axes</code> object. The problem is that a <code>FacetGrid</code> can contain multiple axes depending on how you split your data. Thankfully there is a function <a href="https://seaborn.pydata.org/generated/seaborn.FacetGrid.html#seaborn.FacetGrid" rel="nofollow noreferrer"><code>FacetGrid.facet_axis(row_i, col_j)</code></a> which can return a reference to a specific <code>Axes</code> object.</p>
<p>In your case, you would do:</p>
<pre><code>a = sns.lmplot("z1", "z2", data=rect, hue='Digit', fit_reg=False, size=10)
ax = a.facet_axis(0,0)
ax.add_artist(elps)
</code></pre>
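<p>As an aside: when the grid holds only a single facet (as here), the same <code>Axes</code> should also be available as the <code>ax</code> attribute; an equivalent shortcut sketch:</p>
<pre><code>a = sns.lmplot("z1", "z2", data=rect, hue='Digit', fit_reg=False, size=10)
a.ax.add_artist(elps)   # FacetGrid.ax exists when there is exactly one facet
</code></pre>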
|
python|pandas|matplotlib|plot|seaborn
| 3 |
1,906,433 | 47,075,440 |
invalid literal for int() with base 10: 'john'
|
<p>I am trying to print the second-lowest grade in a list which looks like this:</p>
<pre><code>students = [['Harry', 37.21], ['Berry', 37.21], ['Tina', 37.2], ['Akriti', 41], ['Harsh', 39]]
</code></pre>
<p>I used the following code: </p>
<pre><code>a = [[input(), float(input())] for i in range(int(input()))]
s = sorted(set([x[1] for x in a]))
for name in sorted(x[0] for x in a if x[1] == s[1]):
print (name)
</code></pre>
<p>But the error which I am getting is:</p>
<pre><code>Traceback (most recent call last):
File "test.py", line 1, in <module>
a = [[input(), float(input())] for i in range(int(input()))]
ValueError: invalid literal for int() with base 10: 'john'
</code></pre>
<p>I am not sure how to get the input in the form of a nested list as shown in <code>students</code>. Can anybody help me out with it?</p>
|
<p><code>a = [[input(), float(input())] for i in range(int(input()))]</code></p>
<p>In this case, the <code>input()</code> calls are performed in the following order:</p>
<pre><code>[[input(), float(input())] for i in range(int(input()))]
# 2 3 1
</code></pre>
<p>You can confirm this by using prompts:</p>
<pre><code>a = [[input('input name\n'), float(input('input grade\n'))]
for i in range(int(input('input num of students\n')))]
</code></pre>
<p>will output:</p>
<pre><code>>> input num of students
1
>> input name
'a'
>> input grade
1
</code></pre>
<p>So when the code runs, you first need to input a number, and not a name.</p>
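<p>Once the inputs are entered in that order (count first, then alternating names and grades), the rest of the code works unchanged; for example, assuming the five students from the question are typed in:</p>
<pre><code>a = [[input(), float(input())] for i in range(int(input()))]
s = sorted(set(x[1] for x in a))
second_lowest = s[1]                   # 37.21 for the sample data
for name in sorted(x[0] for x in a if x[1] == second_lowest):
    print(name)                        # Berry, then Harry
</code></pre>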
|
python|python-3.x|list|nested
| 2 |
1,906,434 | 30,107,093 |
Pygame game loop trouble
|
<p>I am a novice programming a game in Python using Pygame. I'm having a problem with my game loop: I can only run my game once, and then I can't make the game loop again. I will post my code below.</p>
<p>This is my mouse based menu (Menu.py)</p>
<pre><code>import pygame, os, sys
from pygame.locals import *
pygame.init()
def play_music(name, volume=0.7, loop=-1):
fullname = os.path.join('data', name)
try:
pygame.mixer.music.load(fullname)
pygame.mixer.music.set_volume(volume)
pygame.mixer.music.play(loop)
except:
raise SystemExit, "Can't load: " + fullname
def load_sound(name, volume=0.05):
fullname = os.path.join('data', name)
try:
sound = pygame.mixer.Sound(fullname)
sound.set_volume(volume)
except:
raise SystemExit, "Can't load: " + fullname
return sound
def run_game():
import Grab
def reset_highscore():
open("data/highscore.sav", "w").write(str(0))
print "Grab High score reset to 0"
class GrabMenu():
os.environ["SDL_VIDEO_CENTERED"] = "1"
pygame.mouse.set_visible(0) #make the mouse cursor invisible
pygame.display.set_caption("Grab") #name the game
pygame.display.set_icon(pygame.image.load("data/Icon.png")) #set an icon for the game
screen = pygame.display.set_mode((800, 800)) #screen resolution width and height
background = pygame.Surface(screen.get_size()) #make the background
background = pygame.image.load("data/BG.png").convert() #convert background
background = pygame.transform.scale(background,(800, 800)) #scale the background down to the screen resulution
cursor = pygame.image.load("data/Hand1.png") #make a new cursor inside the game
WHITE = (255, 255, 255) # colors
YELLOW = (255, 216, 0)
running = True # value for main loop
clock = pygame.time.Clock()
font = pygame.font.SysFont("times new roman", 28, bold=True) # menu font and text.
fontcolor = WHITE
fontcolor2 = WHITE
fontcolor3 = WHITE
fontcolor4 = WHITE
mousetrigger = False
mouseclick = load_sound("Click.wav", 3.5) # mouse click
Menu1 = True
Menu2 = False #options menu
Menu3 = False #reset score menu
#JOYSTICK
try:
j = pygame.joystick.Joystick(0) # create a joystick instance
j.init() # init instance
print 'Enabled joystick: ' + j.get_name()
except:
pass
#main_loop
while running == True:
clock.tick(30)
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
if event.type == pygame.KEYDOWN and event.key == pygame.K_ESCAPE:
running = False, sys.exit()
if event.type == MOUSEBUTTONDOWN and event.button == 1: # if mouse button is pressed, (left mouse key only)
if Menu1:
if ren_r.collidepoint(pygame.mouse.get_pos()): # play the game
if Menu1 == False:
pass
else:
mouseclick.play()
run_game()
elif ren2_r.collidepoint(pygame.mouse.get_pos()): #options
if Menu1 == False:
pass
else:
mouseclick.play()
Menu1 = False
Menu2 = True
elif ren3_r.collidepoint(pygame.mouse.get_pos()): #how to play
if Menu1 == False:
pass
else:
mouseclick.play()
elif ren4_r.collidepoint(pygame.mouse.get_pos()): #quit the game
if Menu1 == False:
pass
else:
mouseclick.play()
running = False, pygame.quit(), sys.exit()
elif Menu2:
if ren5_r.collidepoint(pygame.mouse.get_pos()): #reset high score
mouseclick.play()
reset_highscore()
elif ren6_r.collidepoint(pygame.mouse.get_pos()): #go back
mouseclick.play()
Menu1 = True
Menu2 = False
screen.blit(background, (0, 0)) #draw the background
screen.blit(cursor, pygame.mouse.get_pos()) #set the cursor image as the new mouse cursor
if Menu1:
ren = font.render("Play", True, (fontcolor))
ren_r = ren.get_rect()
ren_r.x, ren_r.y = 340, 530
ren2 = font.render("Options", True, (fontcolor2))
ren2_r = ren2.get_rect()
ren2_r.x, ren2_r.y = 340, 590
ren3 = font.render("How to Play", True, (fontcolor3))
ren3_r = ren3.get_rect()
ren3_r.x, ren3_r.y = 340, 650
ren4 = font.render("Quit Game", True, (fontcolor4))
ren4_r = ren4.get_rect()
ren4_r.x, ren4_r.y = 340, 710
screen.blit(ren, (340, 530))
screen.blit(ren2, (340, 590))
screen.blit(ren3, (340, 650))
screen.blit(ren4, (340, 710))
if ren_r.collidepoint(pygame.mouse.get_pos()):
mousetrigger = True
fontcolor = YELLOW
else:
fontcolor = WHITE
if ren2_r.collidepoint(pygame.mouse.get_pos()):
mousetrigger = True
fontcolor2 = YELLOW
else:
fontcolor2 = WHITE
if ren3_r.collidepoint(pygame.mouse.get_pos()):
mousetrigger = True
fontcolor3 = YELLOW
else:
fontcolor3 = WHITE
if ren4_r.collidepoint(pygame.mouse.get_pos()):
mousetrigger = True
fontcolor4 = YELLOW
else:
fontcolor4 = WHITE
elif Menu2:
options = font.render("Options", 1, (WHITE))
ren5 = font.render("Reset High Score", True, (fontcolor))
ren5_r = ren5.get_rect()
ren5_r.x, ren5_r.y = 350, 650
ren6 = font.render("Go back", True, (fontcolor2))
ren6_r = ren6.get_rect()
ren6_r.x, ren6_r.y = 350, 700
screen.blit(options, (350, 600))
screen.blit(ren5, (350, 650))
screen.blit(ren6, (350, 700))
if ren5_r.collidepoint(pygame.mouse.get_pos()):
mousetrigger = True
fontcolor = YELLOW
else:
fontcolor = WHITE
if ren6_r.collidepoint(pygame.mouse.get_pos()):
mousetrigger = True
fontcolor2 = YELLOW
else:
fontcolor2 = WHITE
pygame.display.update()
</code></pre>
<p>Furthermore, I use this when I press the "Play" text on my menu:</p>
<pre><code>def run_game():
import Grab
</code></pre>
<p>to start my main program "Grab.py":</p>
<pre><code>import pygame, time, math, sys, random, os
from pygame.locals import *
pygame.init() #make pygame ready
pygame.joystick.init
#my game code
done = False
while done == False:
for event in pygame.event.get():
if event.type == pygame.QUIT:
done = True
elif event.key == K_ESCAPE:
if highscore == 0:
pass
if highscore < get_highscore():
pass
else:
save_highscore(highscore)
if score == 0:
pass
else:
save_score(score)
save_level(level)
save_world(world)
done = True
pygame.mixer.music.stop()
</code></pre>
<p>The main game loop ends here and sends me back to my menu just like I want. However, when I try to import Grab again from my menu, nothing happens. I searched Google for "Python import cycle" and discovered a few things, but I'm still stuck. There must be an easier way to make Grab.py loop than this:</p>
<pre><code>def run_game():
import Grab
</code></pre>
<p>Any help is very much appreciated.
Regards HJ.</p>
|
<p>Generally speaking, use <code>import</code> to load the definition of constants, functions or classes. Do not use it to run a bunch of statements. Call a function to run a bunch of statements.</p>
<p>For efficiency, Python only <em>imports modules once</em>. After it is loaded the first time, the module is cached in <a href="https://docs.python.org/3.4/library/sys.html#sys.modules" rel="nofollow"><code>sys.modules</code></a>. Every time Python encounters an import statement it checks if the module is in <code>sys.modules</code> and uses the cached module if it exists. Only if the module is not in <code>sys.modules</code> does Python execute the code in the module. Thus the code in the module is never executed more than once. That is why <code>import Grab</code> does not execute the code in <code>Grab.py</code> the second time the import statement is reached.</p>
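<p>(For illustration, Python 2's builtin <code>reload()</code>, or <code>importlib.reload()</code> in Python 3, forces a cached module to be re-executed, though the refactor below is the cleaner fix:)</p>
<pre><code>import Grab    # executes the code in Grab.py only the first time
reload(Grab)   # Python 2 builtin: re-executes the cached module
</code></pre>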
<p>So, instead, refactor <code>Grab.py</code> to put all the code you wish to run multiple times inside a function:</p>
<pre><code>import pygame, time, math, sys, random, os
from pygame.locals import *
pygame.init() #make pygame ready
pygame.joystick.init() # <-- Note you need parentheses to *call* the init function
def main():
done = False
while done == False:
for event in pygame.event.get():
if event.type == pygame.QUIT:
done = True
            elif event.type == pygame.KEYDOWN and event.key == K_ESCAPE:  # not every event has a .key
if highscore == 0:
pass
if highscore < get_highscore():
pass
else:
save_highscore(highscore)
if score == 0:
pass
else:
save_score(score)
save_level(level)
save_world(world)
done = True
pygame.mixer.music.stop()
</code></pre>
<p>and then change </p>
<pre><code>import Grab
</code></pre>
<p>to</p>
<pre><code>import Grab
Grab.main()
</code></pre>
<p>You could move <code>import Grab</code> to the top of Menu.py; generally speaking it is <a href="http://legacy.python.org/dev/peps/pep-0008/#imports" rel="nofollow">recommended to put all the import statements at the top</a> of any Python script or module.</p>
|
python|import|while-loop|pygame
| 3 |
1,906,435 | 30,055,779 |
Tesseract 3.03 compile error: 'select' was not declared in this scope
|
<p>I am using <code>cygwin</code> to compile the <a href="https://drive.google.com/folderview?id=0B7l10Bj_LprhQnpSRkpGMGV2eE0&usp=sharing" rel="nofollow noreferrer">Tesseract 3.03 source code</a>.</p>
<p>The following error is encountered when I run <code>make</code> after <code>configure</code>. I don't know enough to modify the Tesseract source code. Has anyone seen this error before? Or is there any prebuilt version of Tesseract 3.03? I need this very version because it contains the training tool <code>text2image</code>, and they claim it can be built with <code>make training</code>.</p>
<p><img src="https://i.stack.imgur.com/06r6f.png" alt="enter image description here"></p>
<h2>ADD 1</h2>
<p>Below is the code snippet in trouble.</p>
<p><img src="https://i.stack.imgur.com/C88HB.png" alt="enter image description here"></p>
<p>It seems to me the <code>select</code> function is a C++ library function. Maybe some library is missing on my Cygwin installation. But I am not sure which one.</p>
<h2>ADD 2</h2>
<p>Following <code>rubenvb</code>'s suggestion on this thread:<a href="https://stackoverflow.com/questions/29960825/error-during-making-xz-5-2-1-with-mingw-msys">Error during making "xz-5.2.1" with MinGW/MSYS</a> </p>
<p>I started to use <code>MSYS2 + MinGW-w64</code> to compile <code>Tesseract 3.03</code>. After fighting through all the dependencies and prerequisites, I finally managed to <code>configure</code> the Tesseract 3.03 source, and then encountered the following error during <code>make</code>:</p>
<p><img src="https://i.stack.imgur.com/ayX1L.png" alt="enter image description here"></p>
<p>I found a similar thread: <a href="https://stackoverflow.com/questions/12973750/fatal-error-strtok-r-h-no-such-file-or-directory-while-compiling-tesseract-oc">fatal error: strtok_r.h: No such file or directory (while compiling tesseract-ocr-3.01 in MinGW)</a></p>
<p>It seems I need to manually add some file to the tesseract source. But I am not sure where to place it.</p>
<p>For now I need to take some sleep.</p>
<p>Hope someone could shed some light on this issue. I will continue with it tomorrow...</p>
<h2>Reference</h2>
<p>Compile Tesseract 3.03 with vs2013</p>
<p><a href="http://vorba.ch/2014/tesseract-3.03-vs2013.html" rel="nofollow noreferrer">http://vorba.ch/2014/tesseract-3.03-vs2013.html</a></p>
<p>Compile Tesseract 3.02 with Cygwin</p>
<p><a href="http://vorba.ch/2014/tesseract-cygwin.html" rel="nofollow noreferrer">http://vorba.ch/2014/tesseract-cygwin.html</a></p>
|
<p>Found a tutorial <a href="http://vorba.ch/2014/tesseract-cygwin.html" rel="nofollow">here</a>.
As said in the comments:</p>
<blockquote>
<p>Try to replace "c++11" with "gnu++11" in the file "configure", then rerun this script.</p>
</blockquote>
|
cygwin|tesseract|python-tesseract
| 0 |
1,906,436 | 29,832,659 |
Use python to extract and plot data from netCDF
|
<p>I am new to using python for scientific data so apologies in advance if anything is unclear. I have a netCDF4 file with multiple variables including latitude, longitude and density. I am trying to plot the variable density on a matplotlib basemap using only density values from coordinates between 35-40 N and 100-110 W.</p>
<pre><code>import numpy as np
import netCDF4 as nc
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
in: f = nc.Dataset('C:\\Users\\mdc\\data\\density.nc', 'r')
in: f.variables['latitude'].shape
out:(120000,)
</code></pre>
<p>(the variables longitude and density have the same shape)</p>
<p>I am stuck trying to find a way to extract only the latitude and longitude coordinate pairs (and their associated density values) that fit the criteria of [35 < lat < 40 & -110 < lon < -100]. Any advice on how to do this would be appreciated.</p>
<p>I have tried extracting each of the relevant variables and compiling them into a 2d-array but I have not figured out how to select only the data I need.</p>
<pre><code>lats = f.variables['latitude'][:]
lons = f.variables['longitude'][:]
dens = f.variables['density'][:]
combined = np.vstack((lats,lons,dens))
in: combined
out: array([[ -4.14770737e+01, -3.89834557e+01, -3.86000137e+01, ...,
4.34283943e+01, 4.37634315e+01, 4.40338402e+01],
[ 1.75510895e+02, 1.74857147e+02, 1.74742798e+02, ...,
7.83558655e+01, 7.81687775e+01, 7.80410919e+01],
[ 7.79418945e-02, 7.38342285e-01, 9.94934082e-01, ...,
5.60119629e-01, -1.60522461e-02, 5.52429199e-01]], dtype=float32)
</code></pre>
<p>As for plotting I am trying to plot the coordinate pairs by different colors, rather than sizes, according to their density value.</p>
<pre><code>m = Basemap(projection='robin', resolution='i', lat_0 = 37, lon_0 = -105)
m.drawcoastlines()
for lats,lons,dens in zip(lats,lons,dens):
x,y = m(lats,lons)
size = dens*3
m.plot(x,y, 'r', markersize=size)
plt.show()
</code></pre>
|
<p>The data selection, using pandas (can't install netCDF here, sorry, and pandas is satisfactory):</p>
<pre><code>import pandas as pd
tinyd = pd.DataFrame(np.array(
[[ -4.14770737e+01, -3.89834557e+01, -3.86000137e+01,
4.34283943e+01, 4.37634315e+01, 4.40338402e+01],
[ 1.75510895e+02, 1.74857147e+02, 1.74742798e+02,
7.83558655e+01, 7.81687775e+01, 7.80410919e+01],
[ 7.79418945e-02, 7.38342285e-01, 9.94934082e-01,
5.60119629e-01, -1.60522461e-02, 5.52429199e-01]]).T,
columns=['lat','lon','den'])
mask = (tinyd.lat > -39) & (tinyd.lat < 44) & \
(tinyd.lon > 80) & (tinyd.lon < 175)
toplot = tinyd[mask]
print(toplot)
</code></pre>
<blockquote>
<pre><code> lat lon den
1 -38.983456 174.857147 0.738342
2 -38.600014 174.742798 0.994934
</code></pre>
</blockquote>
<pre><code>plt.scatter(toplot.lat, toplot.lon, s=90, c=toplot.den)
plt.colorbar()
</code></pre>
<p><img src="https://i.stack.imgur.com/si1sD.png" alt="enter image description here"></p>
<p>plotting on top of Basemap is the same, and you can specify a different colormap, etc. </p>
|
python|matplotlib|extract|netcdf|data-extraction
| 0 |
1,906,437 | 72,186,930 |
Is there a robust way to cleanly detect lines in Dots And Boxes game with opencv?
|
<p>I'm currently developing an <code>OpenCV</code> method that can extract the last placed moves in a game of Dots And Boxes. The most straightforward approach in my eyes was to detect the lines normally and make sure there are no excess lines. For this I have used the <code>HoughBundler</code> class proposed <a href="https://stackoverflow.com/questions/45531074/how-to-merge-lines-after-houghlinesp">here</a>.</p>
<p>This works for simple input images like this one:</p>
<p><img src="https://i.stack.imgur.com/Mlm4a.png" alt="this one" /></p>
<p>or this one</p>
<p><img src="https://i.stack.imgur.com/0LhGR.png" alt="this one" /></p>
<p>However when I make the lines more complicated like this:</p>
<p><img src="https://i.stack.imgur.com/6t1PD.png" alt="like this" /></p>
<p>it makes the wrong connections:</p>
<p><img src="https://i.stack.imgur.com/25o49.png" alt="Here" /></p>
<p>and</p>
<p><img src="https://i.stack.imgur.com/wn1u8.png" alt="here" /></p>
<p>the lines are detected and they are counted correctly; 4 and 5 lines.</p>
<p>However this is the output of the more complicated input:</p>
<p><img src="https://i.stack.imgur.com/86YaX.png" alt="this" /></p>
<p>As you can see the lines are combined so that it connects points that shouldn't be connected. It has left me puzzled as to why this is. Following is my code.</p>
<pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np
import math
MAX_KERNEL_LENGTH = 11
MIN_SIZE = 150
ZOOM_FACTOR = 0.80
src = None
#HoughBundler from stackoverflow:
class HoughBundler:
def __init__(self,min_distance=5,min_angle=2):
self.min_distance = min_distance
self.min_angle = min_angle
def get_orientation(self, line):
orientation = math.atan2(abs((line[3] - line[1])), abs((line[2] - line[0])))
return math.degrees(orientation)
def check_is_line_different(self, line_1, groups, min_distance_to_merge, min_angle_to_merge):
for group in groups:
for line_2 in group:
if self.get_distance(line_2, line_1) < min_distance_to_merge:
orientation_1 = self.get_orientation(line_1)
orientation_2 = self.get_orientation(line_2)
if abs(orientation_1 - orientation_2) < min_angle_to_merge:
group.append(line_1)
return False
return True
def distance_point_to_line(self, point, line):
px, py = point
x1, y1, x2, y2 = line
def line_magnitude(x1, y1, x2, y2):
line_magnitude = math.sqrt(math.pow((x2 - x1), 2) + math.pow((y2 - y1), 2))
return line_magnitude
lmag = line_magnitude(x1, y1, x2, y2)
if lmag < 0.00000001:
distance_point_to_line = 9999
return distance_point_to_line
u1 = (((px - x1) * (x2 - x1)) + ((py - y1) * (y2 - y1)))
u = u1 / (lmag * lmag)
if (u < 0.00001) or (u > 1):
#// closest point does not fall within the line segment, take the shorter distance
#// to an endpoint
ix = line_magnitude(px, py, x1, y1)
iy = line_magnitude(px, py, x2, y2)
if ix > iy:
distance_point_to_line = iy
else:
distance_point_to_line = ix
else:
# Intersecting point is on the line, use the formula
ix = x1 + u * (x2 - x1)
iy = y1 + u * (y2 - y1)
distance_point_to_line = line_magnitude(px, py, ix, iy)
return distance_point_to_line
def get_distance(self, a_line, b_line):
dist1 = self.distance_point_to_line(a_line[:2], b_line)
dist2 = self.distance_point_to_line(a_line[2:], b_line)
dist3 = self.distance_point_to_line(b_line[:2], a_line)
dist4 = self.distance_point_to_line(b_line[2:], a_line)
return min(dist1, dist2, dist3, dist4)
def merge_lines_into_groups(self, lines):
groups = [] # all lines groups are here
# first line will create new group every time
groups.append([lines[0]])
# if line is different from existing gropus, create a new group
for line_new in lines[1:]:
if self.check_is_line_different(line_new, groups, self.min_distance, self.min_angle):
groups.append([line_new])
return groups
def merge_line_segments(self, lines):
orientation = self.get_orientation(lines[0])
if(len(lines) == 1):
return np.block([[lines[0][:2], lines[0][2:]]])
points = []
for line in lines:
points.append(line[:2])
points.append(line[2:])
if 45 < orientation <= 90:
#sort by y
points = sorted(points, key=lambda point: point[1])
else:
#sort by x
points = sorted(points, key=lambda point: point[0])
return np.block([[points[0],points[-1]]])
def process_lines(self, lines):
lines_horizontal = []
lines_vertical = []
for line_i in [l[0] for l in lines]:
orientation = self.get_orientation(line_i)
# if vertical
if 45 < orientation <= 90:
lines_vertical.append(line_i)
else:
lines_horizontal.append(line_i)
lines_vertical = sorted(lines_vertical , key=lambda line: line[1])
lines_horizontal = sorted(lines_horizontal , key=lambda line: line[0])
merged_lines_all = []
# for each cluster in vertical and horizantal lines leave only one line
for i in [lines_horizontal, lines_vertical]:
if len(i) > 0:
groups = self.merge_lines_into_groups(i)
merged_lines = []
for group in groups:
merged_lines.append(self.merge_line_segments(group))
merged_lines_all.extend(merged_lines)
return np.asarray(merged_lines_all)
#globals:
img = cv2.imread('gridwithmoves.png')
src = img
cpy = img
cv2.imshow('BotsAndBoxes',img)
#finding and zooming to paper:
dst = cv2.medianBlur(img, MAX_KERNEL_LENGTH)
# cv2.imshow(window_name,dst)
gray = cv2.cvtColor(dst,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray,0,400,apertureSize = 3)
# cv2.imshow("Bounding", edges)
cnts = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
zoomx,zoomy,zoomw,zoomh = cv2.boundingRect(c)
if (zoomw >= MIN_SIZE and zoomh >= MIN_SIZE):
zoomx = int(zoomx/ZOOM_FACTOR)
zoomy = int(zoomy/ZOOM_FACTOR)
zoomw = int(zoomw*ZOOM_FACTOR)
zoomh = int(zoomh*ZOOM_FACTOR)
img = img[zoomy:zoomy+zoomh, zoomx:zoomx+zoomw]
break
gray = cv2.cvtColor(cpy[zoomy:zoomy+zoomh, zoomx:zoomx+zoomw], cv2.COLOR_BGR2GRAY)
#detecting dots:
edges = cv2.Canny(gray, 75, 150, 3)
cv2.imshow("linesEdges", edges)
# cv2.imshow("src", src)
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray,(5,5),0)
threshed = cv2.adaptiveThreshold(blur, 255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV,5,2)
cnts = cv2.findContours(threshed, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
xcnts = []
for cnt in cnts:
x,y,w,h = cv2.boundingRect(cnt)
if (0 < cv2.contourArea(cnt) < 15):
xcnts.append(cnt)
cv2.rectangle(img, (x, y), (x + w, y + h), (0,0,255), 2)
#detecting lines:
lines = cv2.HoughLinesP(edges, 3, np.pi/360, 35, maxLineGap=4) #changing this makes simpler images not work
count_lines = 0
i = 0
lines_cpy = []
for line in lines:
x1, y1, x2, y2 = line[0]
i+=1
draw = True
for cnt in cnts:
x,y,w,h = cv2.boundingRect(cnt)
if (0 < cv2.contourArea(cnt) < 15):
if (x-w < (x1 + x2)/2 < x+w and y-h < (y1 + y2)/2 < y+h):
draw = False
if (draw is True):
for line in lines:
x3, y3, x4, y4 = line[0]
if (x1 is x3 and y1 is y3 and x2 is x4 and y2 is y4):
continue
else:
if (abs((x1-x3)) < 5 and abs((x2-x4)) < 5 and abs((y1-y3)) < 5 and abs((y2-y4)) < 5):
lines_cpy.append(line)
#grouping lines:
bundler = HoughBundler(min_distance=18,min_angle=40) #This bundles the lines incorrectly when input image is more complex
lines_cpy = bundler.process_lines(lines_cpy)
lines_cpy2 = []
for line in lines_cpy:
x1, y1, x2, y2 = line[0]
if (abs((x1-x2)) < 10 and abs((y1-y2)) < 10):
pass
else:
lines_cpy2.append(line)
count_lines+=1
print(count_lines)
print(lines_cpy2)
#draw the lines that are left:
for line in lines_cpy2:
x1, y1, x2, y2 = line[0]
cv2.line(img, (x1, y1), (x2, y2), (255,0,0), 2)
print("\nDots number: {}".format(len(xcnts)))
cv2.imshow("linesDetected", img)
#wait for user to quit:
while(1):
key = cv2.waitKey(1)
if key == 27: # exit on ESC
break
cv2.destroyAllWindows()
</code></pre>
<p>The part where I am detecting lines seems to almost work perfectly. However the grouping of the lines makes some lines that aren't supposed to be there. Is there a better algorithm for grouping lines in this scenario or are my arguments just off?</p>
|
<p>Instead of using Hough Lines, FastLineDetector or LineSegmentDetector, you can directly use the contours that you already have and filter their points and edges, which is useful in case you are not satisfied with the results of the existing algorithms and want more control over what is actually happening. This method can also be helpful in <a href="https://stackoverflow.com/a/72100030/18667225">other use cases</a>. Hope it helps.</p>
<pre><code>import cv2
import numpy as np
import numpy.linalg as la
MAX_KERNEL_LENGTH = 11
MIN_SIZE = 150
ZOOM_FACTOR = 0.80
def get_angle(p1, p2, p3):
v1 = np.subtract(p2, p1)
v2 = np.subtract(p2, p3)
cos = np.inner(v1, v2) / la.norm(v1) / la.norm(v2)
rad = np.arccos(np.clip(cos, -1.0, 1.0))
return np.rad2deg(rad)
def get_angles(p, d):
n = len(p)
return [(p[i], get_angle(p[(i-d) % n], p[i], p[(i+d) % n])) for i in range(n)]
def remove_straight(p, d):
angles = get_angles(p, 2) # approximate angles at points
return [p for (p, a) in angles if a < (180 - d)] # remove points with almost straight angles
img = cv2.imread('img.png')
#finding and zooming to paper:
dst = cv2.medianBlur(img, MAX_KERNEL_LENGTH)
gray = cv2.cvtColor(dst,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray,0,400,apertureSize = 3)
cnts = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
zoomx, zoomy, zoomw, zoomh = cv2.boundingRect(c)
if (zoomw >= MIN_SIZE and zoomh >= MIN_SIZE):
zoomx = int(zoomx/ZOOM_FACTOR)
zoomy = int(zoomy/ZOOM_FACTOR)
zoomw = int(zoomw*ZOOM_FACTOR)
zoomh = int(zoomh*ZOOM_FACTOR)
img = img[zoomy:zoomy+zoomh, zoomx:zoomx+zoomw]
break
# detecting dots:
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
threshed = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 5, 2)
cnts = cv2.findContours(threshed, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
xcnts = []
lcnts = []
for cnt in cnts:
x,y,w,h = cv2.boundingRect(cnt)
if 0 < cv2.contourArea(cnt) < 15:
xcnts.append(cnt)
cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
else:
lcnts.append(cnt)
for cnt in lcnts:
pts = [v[0] for v in cnt] # get pts from contour
pts = remove_straight(pts, 60) # consider bends of less than 60deg deviation as straight
n = len(pts)
for i in range(n):
p1 = pts[i]
p2 = pts[(i + 1) % n]
if (p1[0] - p2[0] > 4) or (p1[1] - p2[1] > 4): # filter for long lines, only one direction
cv2.circle(img, p1, 3, (0, 255, 255), 1)
cv2.circle(img, p2, 3, (0, 255, 255), 1)
cv2.line(img, p1, p2, (0, 255, 255), 1)
cv2.imwrite("out.png", img)
</code></pre>
<p><a href="https://i.stack.imgur.com/35x8V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/35x8V.png" alt="enter image description here" /></a></p>
|
python|opencv|image-processing|computer-vision|houghlinesp
| 1 |
1,906,438 | 43,252,858 |
how to keep a lot of arrays in one variable python numpy
|
<p>I have a lot of arrays; every one is 2D, but they have different sizes. I am looking for a good way to keep them in one variable. Their order is important. What do you recommend? Arrays? Dictionaries? Any ideas?</p>
<p>My problem:
I have a numpy array:</p>
<pre><code>b=np.array([])
</code></pre>
<p>And now I want to add to them e.g. array:</p>
<pre><code>a=np.array([0,1,2])
</code></pre>
<p>And later:</p>
<pre><code>c=np.array([[0,1,2],[3,4,5]])
</code></pre>
<p>Etc</p>
<p>Result should be:</p>
<pre><code>b=([0,1,2], [[0,1,2],[3,4,5]])
</code></pre>
<p>I don't know how to get this in numpy without initializing the size of the first array.</p>
|
<p>If the ordering is important, store them in a list (<code>mylist = [array1, array2, ...]</code>) - or, if you're not going to need to change or shuffle them around after creating the list, store them in a tuple (<code>mylist = (array1, array2, ...)</code>). </p>
<p>Both of these structures can store arbitrary object types (they don't care that your arrays are different sizes, or even that they are all the same kind of object at all) and both maintain a consistent ordering which can be accessed through <code>mylist[0]</code>, <code>mylist[1]</code> etc. They will also appear in the correct order when you go through them using <code>for an_array in mylist:</code> etc.</p>
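<p>A tiny sketch with the arrays from the question:</p>
<pre><code>import numpy as np

a = np.array([0, 1, 2])
c = np.array([[0, 1, 2], [3, 4, 5]])

b = [a, c]                     # ordered container; shapes can differ
b.append(np.zeros((3, 3)))     # grow it later if needed
print(b[1])                    # [[0 1 2]
                               #  [3 4 5]]
</code></pre>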
|
python|arrays|numpy
| 1 |
1,906,439 | 36,778,681 |
Calling dll in python 3 with LPSTR
|
<p>I have a <code>.dll</code> named <code>my.dll</code>, with 4 functions, called in python 3.5 using:<br>
<code>myDLL = ctypes.cdll.LoadLibrary(Path:\to\my.dll)</code></p>
<p>My problem is calling a function that has <code>LPSTR</code>:</p>
<pre><code>#include "stdafx.h"
#include "newheader.h"
double _stdcall pain_function(double arg1, double arg2, LPSTR arg_string_with_spaces)
</code></pre>
<p>the other 3 calls to <code>my.dll</code> work fine.</p>
<p>Below is the code I tried for the <code>pain_function</code>: </p>
<pre><code>import ctypes
from ctypes import wintypes
# Load dll
myDLL = ctypes.WinDLL(Path:\to\my.dll)
# Call function
pain_function = myDLL.pain_function
# Typecast the arguments
pain_function.argtypes = [ctypes.c_double, ctypes.c_double, wintypes.LPSTR]
# Typecast the return
pain_function.restype = ctypes.c_double
pain_return = pain_function(arg1, arg2, some_string)
</code></pre>
<p><code>pain_return</code> returns some nonsensical number. I tried some variations, mostly along the lines of: </p>
<pre><code>some_string = ctypes.create_string_buffer(b' ' * 200)
</code></pre>
<p>I have looked here among other places:</p>
<ol>
<li><p><a href="https://stackoverflow.com/questions/10896633/python-ctypes-prototype-with-lpcstr-out-parameter">python-ctypes-prototype-with-lpcstr-out-parameter</a></p></li>
<li><p><a href="https://support.smartbear.com/viewarticle/75417/?FullScreen=1&ShowTree=0" rel="nofollow noreferrer">Using String Parameters in DLL Function Calls</a></p></li>
</ol>
<p>What is the correct way to pass the string of 200 spaces to the dll?</p>
<p>Thank you in advance</p>
|
<p>You indicated the prototype was:</p>
<pre><code>double _stdcall pain_function(double arg1, double arg2, LPSTR arg_string_with_spaces)
</code></pre>
<p>but using:</p>
<pre><code>myDLL = ctypes.cdll.LoadLibrary(Path:\to\my.dll)
</code></pre>
<p>It should be, for <code>__stdcall</code> calling convention:</p>
<pre><code>myDLL = ctypes.WinDLL(r'Path:\to\my.dll')  # raw string: \t would otherwise be a tab
</code></pre>
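<p>With the loading fixed, passing a writable buffer could look like this sketch (assuming the DLL writes back into <code>arg_string_with_spaces</code>):</p>
<pre><code>buf = ctypes.create_string_buffer(b' ' * 200)   # 200-byte writable buffer
pain_return = pain_function(1.0, 2.0, buf)
print(buf.value)                                # whatever the DLL wrote back
</code></pre>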
|
python-3.x|dll|ctypes|lpstr
| 1 |
1,906,440 | 66,874,480 |
PyQt5 QtableWidget update row on cell click (mac)
|
<p>I recently switched to Mac and my PyQt5 application isn't running like it should. I have a table set up with checkboxes in the rows. On a PC, when you click on a checkbox it first updates the row, then hits the checkbox; on Mac it just hits the checkbox without triggering a row change. I can't even figure out what row I'm on to make any changes.</p>
<p>Here is basic code that would work on a PC, but always outputs "0, 3" on a Mac:</p>
<pre><code> for i in range(4):
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(i, item)
item.setText(_translate("MainWindow", str(i)))
self.tableWidget.setCellWidget(i, 0, QtWidgets.QCheckBox())
self.tableWidget.cellWidget(i, 0).clicked.connect(lambda: print(self.tableWidget.currentRow(), i))
</code></pre>
<p>Here is the full file if you would like to test this yourself:</p>
<pre><code>from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setObjectName("MainWindow")
MainWindow.resize(800, 600)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.tableWidget = QtWidgets.QTableWidget(self.centralwidget)
self.tableWidget.setGeometry(QtCore.QRect(195, 101, 371, 321))
self.tableWidget.setObjectName("tableWidget")
MainWindow.setCentralWidget(self.centralwidget)
self.tableWidget.setColumnCount(1)
self.tableWidget.setRowCount(4)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(0, item)
item = self.tableWidget.horizontalHeaderItem(0)
item.setText(_translate("MainWindow", "Value"))
for i in range(4):
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(i, item)
item.setText(_translate("MainWindow", str(i)))
self.tableWidget.setCellWidget(i, 0, QtWidgets.QCheckBox())
self.tableWidget.cellWidget(i, 0).clicked.connect(lambda: print(self.tableWidget.currentRow(), i))
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
</code></pre>
|
<p>Not overly sure if this will be the solution, but be careful about connecting signals in a loop, as you may end up with the wrong variables being used (e.g. <code>i</code> might always be 3).</p>
<p>Instead of creating the lambda like this:</p>
<pre><code>lambda: print(self.tableWidget.currentRow(), i)
</code></pre>
<p>Try manually setting the <code>i</code> parameter:</p>
<pre><code>lambda checked, i=i: print(self.tableWidget.currentRow(), i)
</code></pre>
<p>As pointed out by musicamante, clicked signals emit a <code>checked</code> variable, which will need to be ignored otherwise it will overwrite the value you've set for <code>i</code>.</p>
<p>As an alternative way of doing things, this is a potentially cleaner solution, but without testing I'm not sure if manually defining the slot as <code>int</code> will bypass the <code>checked</code> argument or not.</p>
<pre><code>from functools import partial

def setupUi(self):
    ...
    self.tableWidget.cellWidget(i, 0).clicked.connect(partial(self.someMethod, i))

@QtCore.pyqtSlot(int)   # PyQt5 spells it pyqtSlot (Slot is the PySide name)
def someMethod(self, i):
    print(self.tableWidget.currentRow(), i)
</code></pre>
|
python|pyqt5
| 1 |
1,906,441 | 4,102,487 |
Multiline regex replace
|
<p>I want to transform a text like:</p>
<pre><code>$$
foo
bar
$$
</code></pre>
<p>to</p>
<pre><code><% tex
foo
bar
%>
</code></pre>
<p>and <code>$\alpha$</code> to <code><% tex \alpha %></code>.</p>
<p>For the single line replace, I did this:</p>
<pre><code>re.sub(r"\$(.*)\$", r"<% tex \1 %>", text)
</code></pre>
<p>...and it works fine.</p>
<p>Now, I added the multiline flag to catch the multiline one:</p>
<pre><code>re.sub(r"(?i)\$\$(.*)\$\$", r"<% tex \1 %>", text)
</code></pre>
<p>...but it returns:</p>
<pre><code><% tex %>
foo
bar
<% tex %>
</code></pre>
<p>Why? I'm sure it's something trivial, but I can't imagine what.</p>
|
<p>I'd suggest gobbling up everything that is not a dollar sign in your capture: unlike <code>.</code>, a character class such as <code>[^$]</code> also matches newlines, so no flag is needed. (The flag that makes <code>.</code> span lines is <code>re.S</code>/DOTALL, not <code>re.M</code>; inline, that's <code>(?s)</code>, while <code>(?i)</code> is IGNORECASE. Also note that the fourth positional argument of <code>re.sub</code> is <code>count</code>, so flags must be passed by keyword.)</p>
<pre><code>>>> import re
>>> t = """$$
foo
bar
$$"""
>>> re.sub(r"\$\$([^$]+)\$\$", r"<% tex \1 %>", t)
'<% tex \nfoo\nbar\n %>'
</code></pre>
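<p>To handle the inline <code>$\alpha$</code> case as well, run the display-math substitution first, so the single-dollar pattern cannot eat the <code>$$</code> pairs; a sketch:</p>
<pre><code>text = re.sub(r"\$\$([^$]+)\$\$", r"<% tex \1 %>", text)  # $$...$$ first
text = re.sub(r"\$([^$]+)\$", r"<% tex \1 %>", text)      # then $...$
</code></pre>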
|
python|regex
| 11 |
1,906,442 | 48,036,153 |
Dockerized Django can't connect to MySQL
|
<p>Using this tutorial <a href="https://semaphoreci.com/community/tutorials/dockerizing-a-python-django-web-application" rel="nofollow noreferrer">https://semaphoreci.com/community/tutorials/dockerizing-a-python-django-web-application</a>, I'm dockerizing my Django application in a VirtualBox using <code>docker-machine</code>. Everything has gone splendidly until I go to my browser and my application says that it's having issues with MySQL.</p>
<p>Then I found this documentation for dockerizing an instance of MySQL <a href="https://github.com/mysql/mysql-docker" rel="nofollow noreferrer">https://github.com/mysql/mysql-docker</a>, which I followed, creating the image in the same development VirtualBox. The error I was originally getting was:</p>
<pre><code>Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'
</code></pre>
<p>My Django DATABASES looked like this</p>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'db_name',
'USER': 'root',
'HOST': 'localhost',
'PORT': ''
}
}
</code></pre>
<p>I then changed the host to <code>127.0.0.1</code> and even tried specifying the port as <code>3306</code> and I got a new error which was </p>
<pre><code>(2003, "Can't connect to MySQL server on '127.0.0.1' (111)")
</code></pre>
<p>I also went in to MySQL workbench and changed the connection of my Local instance to be <code>127.0.0.1:3306</code> and that hasn't helped.</p>
<p>The commands that I'm running are</p>
<p><code>eval "$(docker-machine env development)"</code> ---> supposedly does THESE things:</p>
<pre><code> export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://123.456.78.910:1112"
export DOCKER_CERT_PATH="/Users/me/.docker/machine/machines/development"
export DOCKER_MACHINE_NAME="development"
</code></pre>
<p>Then, now that i'm in my virtualbox, I run:</p>
<p><code>docker run -it -p 8000:8000 <docker image name></code> ---> forwards exposed port 8000 to port 8000 on my local machine</p>
<p><code>docker run -it -p 3306:3306 mysql/mysql-server</code> ---> forwards exposed port 3306 to port 3306 on my local machine</p>
|
<p>The problem is that you are trying to connect with <code>127.0.0.1</code> or <code>localhost</code> which from the perspective of the django container will refer to itself and not to the mysql container.</p>
<p>In general, for containers to communicate, the best "docker way" is to make the containers share a common docker network.</p>
<pre><code>docker network create mynet
docker run -it --network mynet -p 8000:8000 <docker image name>
docker run -it --network mynet -p 3306:3306 --name mysql mysql/mysql-server
</code></pre>
<p>Now the application container can connect to mysql using <code>mysql</code> as a hostname and <code>3306</code> as the port.</p>
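<p>Correspondingly, the Django settings would point at that hostname; a sketch follows (database name and password are whatever the mysql container was initialized with):</p>
<pre><code>DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'db_name',
        'USER': 'root',
        'PASSWORD': 'your_root_password',  # hypothetical
        'HOST': 'mysql',   # the container name on the shared network
        'PORT': '3306',
    }
}
</code></pre>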
|
python|mysql|django|docker|docker-machine
| 5 |
1,906,443 | 48,293,384 |
Python not installing packages with pip
|
<p>I've had a look around, and cannot find an answer to my problem.</p>
<p>All I try to run is</p>
<pre><code>pip3 install -r requirements.txt
</code></pre>
<p>but in just about everything I run, I get "invalid syntax" or </p>
<blockquote>
<p>Traceback (most recent call last): File "", line 1, in
NameError: name 'python' is not defined</p>
</blockquote>
<p>All I am trying to do is install a set of packages from a text file in a folder on my desktop. </p>
<p>This is stressing me, as nothing seems to work.</p>
<p>I'd appreciate any help!</p>
|
<p>Did you by any chance type <a href="https://stackoverflow.com/questions/16857105/nameerror-name-python-is-not-defined">python before running that?</a> If you did, it's throwing that traceback because the interpreter treats <code>python</code> as a variable, not a command; <code>pip</code> must be run from the system shell, not from the Python prompt. Also, did you already try installing the packages directly? Most can be installed with <code>pip3.x install package</code>, where <code>package</code> is the name of what you want to install (e.g. Pygame). I would open a new command shell, exit the interpreter, and run <code>python -V</code> to confirm the exact version you have installed; then use the matching <code>pip2.x</code> or <code>pip3.x</code> (where x is the minor version) to install the package.</p>
|
python|pip|package
| 0 |
1,906,444 | 73,732,016 |
Is it possible to calculate 2-D confidence intervals with the boot package in R?
|
<p>I'm making a scatter plot of two statistics X(e) and Y(e) for various values of scalar parameter e. The sampling distribution of both X and Y is not normally distributed.</p>
<p>Now I want to calculate a 2-D confidence interval for each point (X(e),Y(e)) in the scatter plot. How do I do that?</p>
<p>Since the sampling distribution is not normally distributed I'm using the boot package in R. With this I can calculate a confidence interval for each X(e) and Y(e) independently. Is this approach statistically sound or should I sample from a 2-D sampling distribution? In that case how do I do that?</p>
|
<p>Here is a way with base package <code>boot</code> to bootstrap confidence intervals for a statistic, the mean, of two vectors simultaneously.</p>
<pre class="lang-r prettyprint-override"><code>df1 <- iris[1:50, 1:2]
head(df1)
#> Sepal.Length Sepal.Width
#> 1 5.1 3.5
#> 2 4.9 3.0
#> 3 4.7 3.2
#> 4 4.6 3.1
#> 5 5.0 3.6
#> 6 5.4 3.9
library(boot)
bootfun <- function(x, i) colMeans(x[i,])
R <- 1000L
set.seed(2022)
b <- boot(df1, bootfun, R)
colMeans(b$t)
#> [1] 5.010754 3.431042
boot.ci(b)
#> BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
#> Based on 1000 bootstrap replicates
#>
#> CALL :
#> boot.ci(boot.out = b)
#>
#> Intervals :
#> Level Normal Basic Studentized
#> 95% ( 4.905, 5.097 ) ( 4.902, 5.092 ) ( 4.904, 5.093 )
#>
#> Level Percentile BCa
#> 95% ( 4.920, 5.110 ) ( 4.914, 5.096 )
#> Calculations and Intervals on Original Scale
</code></pre>
<p><sup>Created on 2022-09-15 with <a href="https://reprex.tidyverse.org" rel="nofollow noreferrer">reprex v2.0.2</a></sup></p>
<hr />
<p>But computing two different bootstrapped statistics is more complicated: <code>boot</code> only bootstraps one statistic at a time.<br />
Create a list with the statistics of interest and a function bootstrapping the stats in the list one by one. The return value of this function is a list of objects of class <code>"boot"</code> and a <code>lapply</code> loop can compute the confidence intervals.</p>
<pre class="lang-r prettyprint-override"><code>library(boot)
bootfun2 <- function(data, stat_list, R, ...) {
stat <- function(x, i, f) {
y <- x[i]
f(y)
}
lapply(stat_list, \(f) {
boot(data, stat, R = R, f = f)
})
}
R <- 1000L
set.seed(2022)
e <- df1[[1]]
flist <- list(X = mean, Y = sd)
blist <- bootfun2(e, flist, R)
ci_list <- lapply(blist, boot.ci)
#> Warning in FUN(X[[i]], ...): bootstrap variances needed for studentized
#> intervals
#> Warning in FUN(X[[i]], ...): bootstrap variances needed for studentized
#> intervals
ci_list[[1]]$percent[4:5]
#> [1] 4.920000 5.109949
ci_list[[2]]$percent[4:5]
#> [1] 0.2801168 0.4103270
ci_list[[1]]$bca[4:5]
#> [1] 4.91400 5.09639
ci_list[[2]]$bca[4:5]
#> [1] 0.2956919 0.4308415
</code></pre>
<p><sup>Created on 2022-09-15 with <a href="https://reprex.tidyverse.org" rel="nofollow noreferrer">reprex v2.0.2</a></sup></p>
|
python|r|statistics|statistics-bootstrap
| 1 |
1,906,445 | 64,308,257 |
GCP Function Missing Attribute, Converting PDFs to MP3s
|
<p>I'm trying to replicate this GCP project, where a couple of developers created a GCP function that could perform Optical Character Recognition on PDFs, extract the text, and then convert the text to an MP3. <a href="https://www.youtube.com/watch?v=q-nvbuc59Po" rel="nofollow noreferrer">Link to the video for that project can be seen here.</a></p>
<p>I work in accessibility, so this will be an incredible solution for my users, but I am running into a couple issues.</p>
<p>I created the storage bucket for the PDFs, then went to create the function, and used the code from one of the developers' GitHub repos, <a href="https://github.com/kazunori279/pdf2audiobook/blob/master/functions/app/main.py" rel="nofollow noreferrer">linked here.</a></p>
<p>However, I running into this error when trying to deploy the function:</p>
<p><img src="https://i.stack.imgur.com/Xp03I.png" alt="GCP Function Error, missing attribute" /></p>
<pre><code>Function failed on loading user code. Error message: File main.py is expected to contain a function named hello_gcs
Detailed stack trace:
Traceback (most recent call last):
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v1.py", line 315, in check_or_load_user_function
_function_handler.load_user_function()
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v1.py", line 193, in load_user_function
self._user_function = getattr(main_module, _ENTRY_POINT)
AttributeError: module 'main' has no attribute 'hello_gcs'
</code></pre>
<p>This is probably a really easy fix, but I haven't played around with GCP functions all that much. Any suggestions for fixing this would be appreciated. Thank you!</p>
|
<p>I think I figured it out! There was an original snippet of code from when I first created the function that I deleted when pasting the developer's code. I pasted the original snippet back in, and now it's deploying just fine! Now to see if I can figure out the rest of the project lol.</p>
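<p>For anyone hitting the same error: the function's configured entry point has to name a function that actually exists in <code>main.py</code>. A minimal sketch of what that default snippet looks like for a Storage-triggered background function (the name must match your entry point, <code>hello_gcs</code> here):</p>
<pre><code>def hello_gcs(event, context):
    """Background Cloud Function triggered by a Cloud Storage event."""
    print('Processing file: {}'.format(event['name']))
</code></pre>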
|
python|google-cloud-platform|accessibility|ocr
| 1 |
1,906,446 | 70,456,962 |
"This page isn't working 10.110.15.17 didn’t send any data" after pressing "Submit" button
|
<p>I'm currently taking Udacity's course Full Stack Foundations. One lesson in this course wants me to make a web server for a user to input something; the message will then be displayed on the screen. The user can "Submit" repeatedly, but it will only show the message that the user last submitted. I am able to see the webpage with "Hello!", the input box, and the "Submit" button. But after I enter something in the box and click the button, it shows "This page isn't working. 10.110.15.17 didn’t send any data". The console also didn't throw me any error, and the printed output seems correct. I really couldn't figure out my mistake. Could anyone help me?</p>
<p>Below is my full code,</p>
<pre><code>from http.server import BaseHTTPRequestHandler, HTTPServer
import cgi
class webserverHandler(BaseHTTPRequestHandler):
def do_GET(self):
try:
if self.path.endswith('/hello'):
self.send_response(200)
self.send_header('Content-type', 'text/html')
self.end_headers()
output = ""
output += "<html><body>"
output += "<h1>Hello!</h1>"
output += "<form method='POST' enctype='multipart/form-data' action='/hello'>"
output += "<h2>What would you like me to say?</h2>"
output += "<input name='message' type='text' >"
output += "<input type='submit' value='Submit'>"
output += "</form>"
output += "</body></html>"
self.wfile.write(output.encode())
return
except IOError:
self.send_error(404, 'File Not Found %s' % self.path)
def do_POST(self):
try:
self.send_response(301)
self.send_header('Content-type', 'text/html')
self.end_headers
c_type, p_dict = cgi.parse_header(
self.headers.get('Content-Type')
)
content_len = int(self.headers.get('Content-length'))
p_dict['boundary'] = bytes(p_dict['boundary'], "utf-8")
p_dict['CONTENT-LENGTH'] = content_len
message_content = ''
if c_type == 'multipart/form-data':
fields = cgi.parse_multipart(self.rfile, p_dict)
message_content = fields.get('message')
output = ""
output += "<html><body>"
output += " <h2> Okay, how about this: </h2>"
output += "<h1>%s</h1>" % message_content[0]
output += "<form method='POST' enctype='multipart/form-data' action='/hello'>"
output += "<h2>What would you like me to say?</h2>"
output += "<input name='message' type='text'>"
output += "<input type='submit' value='Submit'>"
output += "</form>"
output += "</body></html>"
self.wfile.write(output.encode())
print(output)
return
except:
pass
def main():
try:
port = 8080
server = HTTPServer(('', port), webserverHandler)
print('Server running on port %s' % port)
server.serve_forever()
except KeyboardInterrupt:
print('^C entered, stopping web server...')
server.socket.close()
if __name__ == "__main__":
main()
</code></pre>
<p>Inside the console:</p>
<pre><code>Server running on port 8080
10.110.15.17 - - [22/Dec/2021 23:53:54] "GET /hello HTTP/1.1" 200 -
10.110.15.17 - - [22/Dec/2021 23:53:58] "POST /hello HTTP/1.1" 301 -
<html><body> <h2> Okay, how about this: </h2><h1>hello!!!!!</h1><form method='POST' enctype='multipart/form-data' action='/hello'><h2>What would you like me to say?</h2><input name='message' type='text'><input type='submit' value='Submit'></form></body></html>
^C entered, stopping web server...
</code></pre>
<p>the web page will only show up if the path ends with '/hello'.</p>
<p>ps: I'm using Python 3.7.</p>
|
<p>It turns out I forgot the parentheses after <code>self.end_headers</code>... Without them, the method is only referenced instead of called, so the response headers are never terminated and the browser receives no data.</p>
<p>Below is the full code that works:</p>
<pre><code>from http.server import BaseHTTPRequestHandler, HTTPServer
import cgi
class webserverHandler(BaseHTTPRequestHandler):
def do_GET(self):
try:
if self.path.endswith('/hello'):
self.send_response(200)
self.send_header('Content-type', 'text/html')
self.end_headers()
output = ""
output += "<html><body>"
output += "<h1>Hello!</h1>"
output += "<form method='POST' enctype='multipart/form-data' action='/hello'>"
output += "<h2>What would you like me to say?</h2>"
output += "<input name='message' type='text' >"
output += "<input type='submit' value='Submit'>"
output += "</form>"
output += "</body></html>"
self.wfile.write(output.encode())
return
except IOError:
self.send_error(404, 'File Not Found %s' % self.path)
def do_POST(self):
try:
self.send_response(301)
self.send_header('Content-type', 'text/html')
self.end_headers()
c_type, p_dict = cgi.parse_header(
self.headers.get('Content-Type')
)
content_len = int(self.headers.get('Content-length'))
p_dict['boundary'] = bytes(p_dict['boundary'], "utf-8")
p_dict['CONTENT-LENGTH'] = content_len
message_content = ''
if c_type == 'multipart/form-data':
fields = cgi.parse_multipart(self.rfile, p_dict)
message_content = fields.get('message')
output = ""
output += "<html><body>"
output += " <h2> Okay, how about this: </h2>"
output += "<h1>%s</h1>" % message_content[0]
output += "<form method='POST' enctype='multipart/form-data' action='/hello'>"
output += "<h2>What would you like me to say?</h2>"
output += "<input name='message' type='text'>"
output += "<input type='submit' value='Submit'>"
output += "</form>"
output += "</body></html>"
self.wfile.write(output.encode())
print(output)
return
except:
pass
def main():
try:
port = 8080
server = HTTPServer(('', port), webserverHandler)
print('Server running on port %s' % port)
server.serve_forever()
except KeyboardInterrupt:
print('^C entered, stopping web server...')
server.socket.close()
if __name__ == "__main__":
main()
</code></pre>
|
python|http|webserver|httpserver
| 0 |
1,906,447 | 70,534,146 |
DataFrame: how to draw a 3D graph using Index and Columns as x and y, and data as z?
|
<p>My DataFrame line index and column index store <code>x</code> and <code>y</code> values, while the data values are <code>z</code> values, i.e. <code>f(x, y)</code>.</p>
<p>Let's take an example:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([[150, 120, 170], [190, 160, 130]],
index=[2, 4], columns=[10, 30, 70])
print(df)
# 10 30 70
# 2 150 120 170
# 4 190 160 130
</code></pre>
<p>then <code>f(4, 30)</code> is <code>160</code>.</p>
<p>I would like to make a 3D plot of function <code>f</code>. I don't really mind if it looks like a 3D histogram or a surface plot - both answers would be appreciated.</p>
|
<p>You need to create a <a href="https://numpy.org/doc/stable/reference/generated/numpy.meshgrid.html" rel="nofollow noreferrer"><code>meshgrid</code></a> from your row and col indices and then you can use <a href="https://matplotlib.org/stable/api/_as_gen/mpl_toolkits.mplot3d.axes3d.Axes3D.html?highlight=plot_surface#mpl_toolkits.mplot3d.axes3d.Axes3D.plot_surface" rel="nofollow noreferrer"><code>plot_surface</code></a>:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.DataFrame([[150, 120, 170], [190, 160, 130]],
index=[2, 4], columns=[10, 30, 70])
x,y = np.meshgrid(df.columns, df.index)
fig, ax = plt.subplots(subplot_kw={"projection": "3d"})
ax.plot_surface(x, y, df)
</code></pre>
<p><a href="https://i.stack.imgur.com/GKZv8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GKZv8.png" alt="enter image description here" /></a></p>
|
python|pandas|dataframe|plot
| 2 |
1,906,448 | 72,836,327 |
Sklearn's roc_auc_score error when in debug mode but runs fine while running normally?
|
<p>I'm using <code>roc_auc_score</code> to evaluate the AUC between two arrays, the truth and the estimation. My code runs fine when I'm executing it normally in PyCharm; however, when I use the debug mode, the strange errors below pop up. I tried pausing the code prior to the <code>roc_auc_score</code> line and attempted to just run it using the debug console with just 2 small arrays. Same issue. Strangely, everything is fine using the normal Python console. Any ideas?</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\User\anaconda3\envs\project\lib\site-packages\numpy\core\getlimits.py", line 649, in __init__
self.dtype = numeric.dtype(int_type)
TypeError: 'NoneType' object is not callable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2022.1.2\plugins\python\helpers\pydev\pydevd.py", line 1491, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2022.1.2\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/User/PycharmProjects/project/main-resnet.py", line 707, in <module>
results = train_net(net=net,
File "C:/Users/User/PycharmProjects/project/main-resnet.py", line 495, in train_net
train_auc = roc_auc_score(training_true, training_estimated)
File "C:\Users\User\anaconda3\envs\project\lib\site-packages\sklearn\metrics\_ranking.py", line 566, in roc_auc_score
y_true = label_binarize(y_true, classes=labels)[:, 0]
File "C:\Users\User\anaconda3\envs\project\lib\site-packages\sklearn\preprocessing\_label.py", line 546, in label_binarize
Y = sp.csr_matrix((data, indices, indptr), shape=(n_samples, n_classes))
File "C:\Users\User\anaconda3\envs\project\lib\site-packages\scipy\sparse\compressed.py", line 66, in __init__
idx_dtype = get_index_dtype((indices, indptr),
File "C:\Users\User\anaconda3\envs\project\lib\site-packages\scipy\sparse\sputils.py", line 153, in get_index_dtype
int32min = np.iinfo(np.int32).min
File "C:\Users\User\anaconda3\envs\project\lib\site-packages\numpy\core\getlimits.py", line 651, in __init__
self.dtype = numeric.dtype(type(int_type))
TypeError: 'NoneType' object is not callable
python-BaseException
</code></pre>
|
<p>Well... guess I was simply googling the wrong thing :P</p>
<p>It seems to be an issue between PyCharms and the python version (<a href="https://youtrack.jetbrains.com/issue/PY-52137" rel="nofollow noreferrer">https://youtrack.jetbrains.com/issue/PY-52137</a>).</p>
<p>Switching from Python 3.10.1 to 3.9 should work.</p>
|
python|python-3.x|debugging|scikit-learn|pycharm
| 0 |
1,906,449 | 55,836,731 |
Does time series forecasting belong to supervised learning? or is it another category of machine learning?
|
<p>I have been analyzing several different methods of time series forecasting, such as ARIMA and SARIMA, using the <a href="https://www.statsmodels.org/stable/index.html" rel="nofollow noreferrer">statsmodels</a> library for my final year project. Checking past literature, I have seen that regression algorithms can also be used in combination with methods such as the sliding window. However, what I can't clarify is what type of algorithm time series forecasting falls into. I'm sure it is not unsupervised, so does this mean that time series forecasting algorithms are supervised algorithms? Or is it a different type of machine learning?</p>
|
<p>Forecasting is a task and supervised learning describes a certain type of algorithm.</p>
<p>So, saying that "forecasting belongs to supervised learning" is incorrect.</p>
<p>However, you can use supervised learning algorithms on forecasting tasks, even though this has well-known <a href="https://towardsdatascience.com/how-not-to-use-machine-learning-for-time-series-forecasting-avoiding-the-pitfalls-19f9d7adf424" rel="nofollow noreferrer">pitfalls</a> you should be aware of.</p>
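<p>For instance, a minimal sketch of the sliding-window idea you mention, which turns a series into a supervised (X, y) dataset:</p>
<pre><code>import numpy as np

def sliding_window(series, window):
    """Each window of past values becomes the features for the next value."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X, y

X, y = sliding_window(np.arange(10, dtype=float), window=3)
# X[0] is [0., 1., 2.] and y[0] is 3.0 -- ready for any sklearn regressor
</code></pre>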
|
python-3.x|machine-learning|time-series|arima
| 0 |
1,906,450 | 73,332,746 |
A http proxy same as Fiddler AutoResponder
|
<p>Hey, I'm trying to create something like Fiddler's AutoResponder.</p>
<p>I want to replace a specific URL's content with another, for example:</p>
<p>I researched everywhere but couldn't find anything. I tried creating a proxy server, a Node.js script, a Python script. Nothing worked.</p>
<p><code>https://randomdomain.com content replace with https://cliqqi.ml</code></p>
<p>PS: I'm doing this because I want to intercept an Electron app fetching the main game file from a site, and then redirect that game-file site to my site.</p>
|
<p>If you're looking for a way to do this programmatically in Node.js, I've written a library you can use to do exactly that, called <a href="https://github.com/httptoolkit/mockttp" rel="nofollow noreferrer">Mockttp</a>.</p>
<p>You can use Mockttp to create a HTTPS-intercepting & rewriting proxy, which will allow you to send mock responses directly, redirect traffic from one address to another, rewrite anything including the headers & body of existing traffic, or just log everything that's sent & received. There's a full guide here: <a href="https://httptoolkit.tech/blog/javascript-mitm-proxy-mockttp/" rel="nofollow noreferrer">https://httptoolkit.tech/blog/javascript-mitm-proxy-mockttp/</a></p>
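<p>If you would rather stay in Python (one of the question's tags), the same host rewrite can also be done with a <a href="https://mitmproxy.org" rel="nofollow noreferrer">mitmproxy</a> addon instead. A minimal sketch, not Mockttp itself:</p>
<pre><code># save as redirect.py and run:  mitmproxy -s redirect.py
from mitmproxy import http

class Redirect:
    def request(self, flow: http.HTTPFlow) -> None:
        # rewrite requests for one host to point at another
        if flow.request.pretty_host == "randomdomain.com":
            flow.request.host = "cliqqi.ml"

addons = [Redirect()]
</code></pre>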
|
python|node.js|http|proxy|fiddler
| 1 |
1,906,451 | 73,204,341 |
How to find the optimal value x and y values from a scatter plot?
|
<p>At the moment I have the following scatter plot in Matplotlib Python:</p>
<p><a href="https://i.stack.imgur.com/ZQtSb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZQtSb.png" alt="enter image description here" /></a></p>
<p>I also have about 7 other similar scatter plots but with slightly different z (colour) values but test the same x and y values all stored in json files.</p>
<p>I was wondering if there was an algorithm/function I can use to find the optimal average x and y values such that the z value (colour) is maximized on average out of all my scatter plots, since the values aren't very consistent and vary between scatter plots quite a lot.</p>
<p>(or even just the optimal values of a single scatter plot would suffice as I could combine them into a single one)</p>
|
<p>Just one idea: I think this can be framed as a regression task. The target value could be the norm of the z values at each point (x, y).</p>
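<p>A minimal numpy sketch of the averaging idea, assuming each scatter plot gives equally long <code>x</code>, <code>y</code>, <code>z</code> arrays sampled on the same (x, y) points in the same order (the <code>z1</code>, <code>z2</code>, <code>z3</code> names are placeholders for your loaded JSON data):</p>
<pre><code>import numpy as np

z_stack = np.stack([z1, z2, z3])  # one row per scatter plot
z_mean = z_stack.mean(axis=0)     # average colour value per (x, y) point
best = np.argmax(z_mean)
print(x[best], y[best])           # the (x, y) with the highest average z
</code></pre>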
|
python|matplotlib|scatter-plot
| 0 |
1,906,452 | 66,737,570 |
How Do I Create My Own Datasets To Work With Sklearn
|
<p>I have found a piece of <code>sklearn</code> code that I am finding relatively straightforward to run the example Iris Dataset, but how do I create my own dataset similar to this?</p>
<p><code>iris.data</code> - contains the data measurements of three types of flower <br>
<code>iris.target</code> - contains the labels of three types of flower <br></p>
<p>e.g. rather than analysing the three types of flower in the Iris Dataset, I would like to make my own datasets that follow this format and that I can pass through the code.</p>
<p><code>example_sports.data</code> - contains the data measurements of three types of sports players <br>
<code>example_sports.target</code> - contains the labels of three types of sport <br></p>
<pre><code>from sklearn.datasets import load_iris #load inbuilt dataset from sklearn
iris = load_iris() #assign variable name iris to inbuilt dataset
iris.data # contains the four numeric variables, sepal length, sepal width, petal length, petal width
print(iris.data) #printing the measurements of the Iris Dataset below
iris.target # relates to the different species shown as 0,1,2 for the three different
# species of Iris, a categorical variable, basically a label
print(iris.target)
</code></pre>
<p>The full code can be found at <a href="https://www.youtube.com/watch?v=asW8tp1qiFQ" rel="nofollow noreferrer">https://www.youtube.com/watch?v=asW8tp1qiFQ</a></p>
|
<p><code>sklearn</code> datasets are stored in <code>Bunch</code> which is basically just a type of <code>dict</code>. Sklearn <code>data</code> and <code>targets</code> are basically just NumPy arrays and can be fed into the <code>fit()</code> method of the <code>sklearn</code> estimator you are interested in. But if you want to make your own data as a <code>Bunch</code>, you can do something like the following:</p>
<pre><code>from sklearn.utils import Bunch
import numpy as np
example_sports = Bunch(data = np.array([[1.0,1.2,2.1],[3.0,2.3,1.0],[4.5,3.4,0.5]]), target = np.array([3,2,1]))
print(example_sports.data)
print(example_sports.target)
</code></pre>
<p>Naturally, you can read your own custom lists into the <code>data</code> and <code>target</code> entries of the <code>Bunch</code>. <strong>Pandas</strong> is a good tool if you have the data in <em>Excel/CSV</em> files.</p>
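<p>For example, a sketch of loading such a file into a <code>Bunch</code> (the file name and the <code>label</code> column are hypothetical):</p>
<pre><code>import pandas as pd
from sklearn.utils import Bunch

df = pd.read_csv("example_sports.csv")
example_sports = Bunch(
    data=df.drop(columns="label").to_numpy(),   # the measurements
    target=df["label"].to_numpy(),              # the sport labels
)
</code></pre>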
|
python|scikit-learn
| 2 |
1,906,453 | 65,048,188 |
scp between two remote servers in Python
|
<p>I am trying to connect to a remote server using Paramiko and send some files to another remote server. I tried the code below, but it didn't work. I checked all connections and the username and password parameters; they don't have any problem. Also, the file that I want to transfer exists on the first remote server at the proper path.</p>
<p>The reason why I don't download the files to my local computer and upload them to the second server is that the connection speed between the two remote servers is a lot faster.</p>
<p>Things that I tried:</p>
<ul>
<li><p>I set paramiko log level to debug, but couldn't find any useful information.</p>
</li>
<li><p>I tried same <code>scp</code> command from first server to second server from command line, worked fine.</p>
</li>
<li><p>I tried to log by <code>data = stdout.readlines()</code> after <code>stdin.flush()</code> line but that didn't log anything.</p>
</li>
</ul>
<pre><code>import paramiko
s = paramiko.SSHClient()
s.set_missing_host_key_policy(paramiko.AutoAddPolicy())
s.connect("10.10.10.10", 22, username='oracle', password='oracle', timeout=4)
stdin, stdout, stderr = s.exec_command(
"scp /home/oracle/myFile.txt oracle@10.10.10.20:/home/oracle/myFile.txt")
stdin.write('password\n')
stdin.flush()
s.close()
</code></pre>
|
<p>You cannot write a password to the standard input of OpenSSH <code>scp</code>.</p>
<p>Try it in a shell, it won't work either:</p>
<pre class="lang-sh prettyprint-override"><code>echo password | scp /home/oracle/myFile.txt oracle@10.10.10.20:/home/oracle/myFile.txt
</code></pre>
<p>OpenSSH tools (including <code>scp</code>) read the password from a terminal only.</p>
<p>You can emulate the terminal by setting <code>get_pty</code> parameter of <a href="https://docs.paramiko.org/en/stable/api/client.html#paramiko.client.SSHClient.exec_command" rel="nofollow noreferrer"><code>SSHClient.exec_command</code></a>:</p>
<pre><code>stdin, stdout, stderr = s.exec_command("scp ...", get_pty=True)
stdin.write('password\n')
stdin.flush()
</code></pre>
<p>Though enabling terminal emulation can bring you unwanted side effects.</p>
<p>A way better solution is to use a public key authentication. There also other workarounds. See <a href="https://stackoverflow.com/q/50096/850848">How to pass password to scp?</a> (though they internally have to do something similar to <code>get_pty=True</code> anyway).</p>
<hr />
<p>Other issues:</p>
<ul>
<li>You have to wait for the command to complete. Calling <code>s.close()</code> will likely terminate the transfer. Using <code>stdout.readlines()</code> will do in most cases. But it may hang, see <a href="https://stackoverflow.com/q/31625788/850848">Paramiko ssh die/hang with big output</a>.</li>
<li>Do not use <code>AutoAddPolicy</code> – You are losing a protection against <a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack" rel="nofollow noreferrer">MITM attacks</a> by doing so. For a correct solution, see <a href="https://stackoverflow.com/q/10670217/850848#43093883">Paramiko "Unknown Server"</a>.</li>
</ul>
|
python|paramiko|scp
| 1 |
1,906,454 | 65,028,641 |
How to add group choices in the Object Permissions form of the admin interface with django-guardian?
|
<p>Using <code>guardian</code> in a <code>django</code> project, I want the admins to be able to assign object permissions over the admin interface. With <code>guardian</code> this is possible, but in the <em>Object permissions</em> form in the admin interface, the <em>Group</em> field is a <code>TextField</code>. How do I make it a <code>ChoiceField</code> with all existing groups as choices?</p>
<p>Is there a solution that only requires adding code in the <code>admin.py</code> file of the app, or do we have to override some <code>guardian</code> code? How can we do it without messing with the functionality of <code>guardian</code>?
This is my <code>admin.py</code> file:</p>
<pre><code>from django.contrib import admin
from .models import MyModel
from guardian.admin import GuardedModelAdmin
class MyModelAdmin(GuardedModelAdmin):
pass
admin.site.register(MyModel, MyModelAdmin)
</code></pre>
|
<p>Here is a solution that works like a charm.
First, subclass the GuardedModelAdminMixin:</p>
<pre><code>from guardian.admin import GuardedModelAdmin, GuardedModelAdminMixin
class CustomGuardedModelAdminMixin(GuardedModelAdminMixin):
def get_obj_perms_group_select_form(self, request):
"""
Returns form class for selecting a group for permissions management. By default :form:`GroupManage` is
returned. This enhancement returns GroupModelManage instead, allowing admins to get a queryset of groups.
"""
return GroupModelManage
</code></pre>
<p>Then define the form for groups, overriding the clean_group function (the <code>forms</code> and <code>Group</code> imports are needed for it to run):</p>
<pre><code>from django import forms
from django.contrib.auth.models import Group

class GroupModelManage(forms.Form):
    """
    Extends the Django Guardian GroupManage form to select a Group from a
    queryset containing all Group objects rather than a blank CharField input.
    """
    group = forms.ModelChoiceField(queryset=Group.objects.all())
def clean_group(self):
"""
Returns ``Group`` instance based on the given group name.
"""
try:
return self.cleaned_data['group']
except Group.DoesNotExist:
raise forms.ValidationError(self.fields['group'].error_messages['does_not_exist'])
</code></pre>
<p>Finally, use the new Mixin in your Admin class:</p>
<pre><code>class MyModelAdmin(CustomGuardedModelAdminMixin, admin.ModelAdmin):
pass
admin.site.register(MyModel, MyModelAdmin)
</code></pre>
<p>To do custom querysets on the user level, use this function in the new mixin:</p>
<pre><code> def get_obj_perms_user_select_form(self, request):
return UserModelManage
</code></pre>
<p>A full example is here: <a href="https://github.com/geex-arts/django-jet/issues/268#issuecomment-459125970" rel="nofollow noreferrer">LINK</a></p>
|
python|django|django-forms|django-admin|django-guardian
| 1 |
1,906,455 | 64,818,962 |
Using Tweepy to fetch recent Tweets from a specific location
|
<p>I'm trying to fetch the recent Tweets from a specified location which can be either city/country. I went through the Tweepy documentation and followed the snippets as mentioned but it looks like I'm missing something here.</p>
<pre><code># the coordinates and params
lat = 28
long = 77
granularity = 'city'
max_results = 1
# fetching the locations
locations = tweepy.api.reverse_geocode(lat=lat, long=long, granularity=granularity, max_results=max_results)
</code></pre>
<p><a href="https://i.stack.imgur.com/Bo5g0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Bo5g0.png" alt="enter image description here" /></a></p>
<p>It would be helpful if someone could point out what I am missing here.</p>
|
<p><code>tweepy.api</code> is an unauthenticated instance of <a href="https://tweepy.readthedocs.io/en/latest/api.html#tweepy.API" rel="nofollow noreferrer"><code>tweepy.API</code></a>.<br />
You need to instantiate your own instance of <code>API</code> using your authentication/credentials.</p>
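<p>A minimal sketch of the authenticated setup (the credential variables are placeholders for your own keys from the Twitter developer portal):</p>
<pre><code>import tweepy

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

locations = api.reverse_geocode(lat=28, long=77,
                                granularity='city', max_results=1)
</code></pre>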
|
python|python-3.x|twitter|tweepy
| 0 |
1,906,456 | 63,866,035 |
Drawing arrows between two tables (as an image) (Python)
|
<p>I want to make a plot of two tables next to each other with arrows shooting between them in a particular way (in <code>matplotlib</code>). So far, I know more or less how to use <code>plt.arrow</code> to get the arrows to do what I want, and I found an article that shows how to plot just the table portion of <code>plt.table</code>.</p>
<p><a href="https://towardsdatascience.com/simple-little-tables-with-matplotlib-9780ef5d0bc4" rel="nofollow noreferrer">https://towardsdatascience.com/simple-little-tables-with-matplotlib-9780ef5d0bc4</a></p>
<p>Source code, in case the link dies:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
title_text = 'Loss by Disaster'
footer_text = 'June 24, 2020'
fig_background_color = 'skyblue'
fig_border = 'steelblue'
data = [
[ 'Freeze', 'Wind', 'Flood', 'Quake', 'Hail'],
[ '5 year', 66386, 174296, 75131, 577908, 32015],
['10 year', 58230, 381139, 78045, 99308, 160454],
['20 year', 89135, 80552, 152558, 497981, 603535],
['30 year', 78415, 81858, 150656, 193263, 69638],
['40 year', 139361, 331509, 343164, 781380, 52269],
]
# Pop the headers from the data array
column_headers = data.pop(0)
row_headers = [x.pop(0) for x in data]
# Table data needs to be non-numeric text. Format the data
# while I'm at it.
cell_text = []
for row in data:
cell_text.append([f'{x/1000:1.1f}' for x in row])
# Get some lists of color specs for row and column headers
rcolors = plt.cm.BuPu(np.full(len(row_headers), 0.1))
ccolors = plt.cm.BuPu(np.full(len(column_headers), 0.1))
# Create the figure. Setting a small pad on tight_layout
# seems to better regulate white space. Sometimes experimenting
# with an explicit figsize here can produce better outcome.
plt.figure(linewidth=2,
edgecolor=fig_border,
facecolor=fig_background_color,
tight_layout={'pad':1},
#figsize=(5,3)
)
# Add a table at the bottom of the axes
the_table = plt.table(cellText=cell_text,
rowLabels=row_headers,
rowColours=rcolors,
rowLoc='right',
colColours=ccolors,
colLabels=column_headers,
loc='center')
# Scaling is the only influence we have over top and bottom cell padding.
# Make the rows taller (i.e., make cell y scale larger).
the_table.scale(1, 1.5)
# Hide axes
ax = plt.gca()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# Hide axes border
plt.box(on=None)
# Add title
plt.suptitle(title_text)
# Add footer
plt.figtext(0.95, 0.05, footer_text, horizontalalignment='right', size=6, weight='light')
# Force the figure to update, so backends center objects correctly within the figure.
# Without plt.draw() here, the title will center on the axes and not the figure.
plt.draw()
# Create image. plt.savefig ignores figure edge and face colors, so map them.
fig = plt.gcf()
plt.savefig('pyplot-table-demo.png',
#bbox='tight',
edgecolor=fig.get_edgecolor(),
facecolor=fig.get_facecolor(),
dpi=150
)
</code></pre>
<p>However, I cannot get two tables to plot next to each other and look nice, and the arrow between the two won't go across the two plotting zones.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(1, 2)
title_text = 'Loss by Disaster'
footer_text = 'June 24, 2020'
fig_background_color = 'skyblue'
fig_border = 'steelblue'
data = [
[ 'Freeze', 'Wind', 'Flood', 'Quake', 'Hail'],
[ '5 year', 66386, 174296, 75131, 577908, 32015],
['10 year', 58230, 381139, 78045, 99308, 160454],
['20 year', 89135, 80552, 152558, 497981, 603535],
['30 year', 78415, 81858, 150656, 193263, 69638],
['40 year', 139361, 331509, 343164, 781380, 52269],
]
# Pop the headers from the data array
column_headers = data.pop(0)
row_headers = [x.pop(0) for x in data]
# Table data needs to be non-numeric text. Format the data
# while I'm at it.
cell_text = []
for row in data:
cell_text.append([f'{x/1000:1.1f}' for x in row])
# Get some lists of color specs for row and column headers
rcolors = plt.cm.BuPu(np.full(len(row_headers), 0.1))
ccolors = plt.cm.BuPu(np.full(len(column_headers), 0.1))
# Create the figure. Setting a small pad on tight_layout
# seems to better regulate white space. Sometimes experimenting
# with an explicit figsize here can produce better outcome.
# ax1.figure(linewidth=2,
# edgecolor=fig_border,
# facecolor=fig_background_color,
# tight_layout={'pad':1},
# #figsize=(5,3)
# )
# Add a table at the bottom of the axes
the_table = ax1.table(cellText=cell_text,
rowLabels=row_headers,
rowColours=rcolors,
rowLoc='right',
colColours=ccolors,
colLabels=column_headers,
loc='center')
# Scaling is the only influence we have over top and bottom cell padding.
# Make the rows taller (i.e., make cell y scale larger).
the_table.scale(1, 1.5)
# Hide axes
ax1 = plt.gca()
ax1.get_xaxis().set_visible(False)
ax1.get_yaxis().set_visible(False)
#
# Do it again
#
the_table = ax2.table(cellText=cell_text,
rowLabels=row_headers,
rowColours=rcolors,
rowLoc='right',
colColours=ccolors,
colLabels=column_headers,
loc='center')
# Scaling is the only influence we have over top and bottom cell padding.
# Make the rows taller (i.e., make cell y scale larger).
the_table.scale(1, 1.5)
# Hide axes
ax2 = plt.gca()
ax2.get_xaxis().set_visible(False)
ax2.get_yaxis().set_visible(False)
#
plt.arrow(0.5, 0.5, -5, 1)
</code></pre>
<p>(The <code>do it again</code> could be tidied up with a function, perhaps.)</p>
<p><a href="https://i.stack.imgur.com/5UJzw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5UJzw.png" alt="enter image description here" /></a></p>
<p>Ultimately, I want to get something like the following sketch.</p>
<p><a href="https://i.stack.imgur.com/iEk3F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iEk3F.png" alt="enter image description here" /></a></p>
<p>I am open to minor tweaks to my approach to complete changing what I'm doing and not even using <code>matplotlib</code>, but how can I wrangle my idea into some code that will produce something like my sketch?</p>
|
<p>You were nearly there (and perhaps have already solved it in the meantime); the main point missing was that you needed to create an arrow (or rather a <code>ConnectionPatch</code>) which needs to transform with the whole figure (i.e. <code>transform=fig.transFigure</code>) instead of using only the individual subplots for reference. Note that this solution is <a href="https://stackoverflow.com/a/44023219/565489">adapted from this answer</a> and <a href="https://matplotlib.org/3.3.3/api/_as_gen/matplotlib.patches.ConnectionPatch.html#matplotlib.patches.ConnectionPatch" rel="nofollow noreferrer">here</a> is some relevant documentation.</p>
<p>Complete code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import ConnectionPatch
fig, (ax1, ax2) = plt.subplots(1, 2,figsize=(10,5))
title_text = 'Loss by Disaster'
footer_text = 'June 24, 2020'
fig_background_color = 'skyblue'
fig_border = 'steelblue'
data = [
[ 'Freeze', 'Wind', 'Flood', 'Quake', 'Hail'],
[ '5 year', 66386, 174296, 75131, 577908, 32015],
['10 year', 58230, 381139, 78045, 99308, 160454],
['20 year', 89135, 80552, 152558, 497981, 603535],
['30 year', 78415, 81858, 150656, 193263, 69638],
['40 year', 139361, 331509, 343164, 781380, 52269],
]
# Pop the headers from the data array
column_headers = data.pop(0)
row_headers = [x.pop(0) for x in data]
# Table data needs to be non-numeric text. Format the data
# while I'm at it.
cell_text = []
for row in data:
cell_text.append([f'{x/1000:1.1f}' for x in row])
# Get some lists of color specs for row and column headers
rcolors = plt.cm.BuPu(np.full(len(row_headers), 0.1))
ccolors = plt.cm.BuPu(np.full(len(column_headers), 0.1))
# Create the figure. Setting a small pad on tight_layout
# seems to better regulate white space. Sometimes experimenting
# with an explicit figsize here can produce better outcome.
# ax1.figure(linewidth=2,
# edgecolor=fig_border,
# facecolor=fig_background_color,
# tight_layout={'pad':1},
# #figsize=(5,3)
# )
# Add a table at the bottom of the axes
the_table = ax1.table(cellText=cell_text,
rowLabels=row_headers,
rowColours=rcolors,
rowLoc='right',
colColours=ccolors,
colLabels=column_headers,
loc='center')
# Scaling is the only influence we have over top and bottom cell padding.
# Make the rows taller (i.e., make cell y scale larger).
the_table.scale(1, 1.5)
# Do it again
#
the_table = ax2.table(cellText=cell_text,
rowLabels=row_headers,
rowColours=rcolors,
rowLoc='right',
colColours=ccolors,
colLabels=column_headers,
loc='center')
# Scaling is the only influence we have over top and bottom cell padding.
# Make the rows taller (i.e., make cell y scale larger).
the_table.scale(1, 1.5)
### actually turn axes off:
for a in [ax1,ax2]:
a.set_axis_off()
con = ConnectionPatch(xyA=(0.38,0.4), xyB=(0.43, 0.52),
coordsA="data", coordsB="data",
axesA=ax1, axesB=ax2,
shrinkA=1, shrinkB=1,
ec="red", fc="w", linewidth=2, alpha=1,
arrowstyle="-|>",connectionstyle="arc3",
mutation_scale=20, )
ax2.add_artist(con)
plt.show()
</code></pre>
<p>yields this image:
<a href="https://i.stack.imgur.com/cw3Ac.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cw3Ac.png" alt="two tables connected with an arrow" /></a></p>
<p>Note how the arrow start and end points are given in the subplot coordinates (e.g. <code>(0.38,0.4)</code>), i.e. you should be able to quickly find good points via mouse hovering.</p>
|
python|matplotlib|plot
| 0 |
1,906,457 | 53,175,742 |
Fluent Python example 2.11
|
<p>In the book Fluent Python by L. Ramalho example 2.11 I encountered the following line of code ...</p>
<pre><code> line_items = invoice.split("\n") [2:]
</code></pre>
<p>with invoice being ...</p>
<pre><code> invoice = """
line0
line1
line2
"""
</code></pre>
<p>I understand what the code does ... but I'm astounded that after
<code>line_items = invoice.split("\n")</code> a slicing operation <code>[2:]</code> is permitted. Can someone explain to me why this is valid code, since <code>[2:]</code> does not appear to be a separate parameter of <code>.split("\n")</code>?</p>
<p>Thank you ...</p>
|
<p>That syntax is valid because <code>split</code> returns a list. The <code>[2:]</code> is slicing that returned list.</p>
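<p>A quick illustration:</p>
<pre><code>invoice = "header\nskip me\nline0\nline1"
print(invoice.split("\n"))      # ['header', 'skip me', 'line0', 'line1']
print(invoice.split("\n")[2:])  # ['line0', 'line1']
</code></pre>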
|
python|python-3.x|string|split|slice
| 1 |
1,906,458 | 53,271,929 |
Low score in Linear Regression with discrete attributes
|
<p>I'm trying to do a linear regression on my dataframe. The dataframe is about Apple applications, and I want to predict the notes (ratings) of the applications. The notes are in the following format:</p>
<pre><code>1.0
1.5
2.0
2.5
...
5.0
</code></pre>
<p>My code is:</p>
<pre><code>atributos = ['size_bytes','price','rating_count_tot','cont_rating','sup_devices_num','num_screenshots','num_lang','vpp_lic']
atrib_prev = ['nota']
X = np.array(data_regress.drop(['nota'],1))
y = np.array(data_regress['nota'])
X = preprocessing.scale(X)
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.2)
clf = LinearRegression()
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(accuracy)
</code></pre>
<p>But my accuracy is 0.046295306696438665. I think this occurs because the linear model is predicting real values, while my 'note' is real but at fixed intervals. I don't know how to round these values before the <code>clf.score</code> call.</p>
|
<p>First, for regression models, <code>clf.score()</code> calculates the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html" rel="nofollow noreferrer">R-squared value</a>, not accuracy. So you would need to decide if you want to treat this problem as a classification problem (for some fixed number of target labels) or a regression problem (for a real-valued target).</p>
<p>Secondly, if you insist on using regression models and not classification, you can call <code>clf.predict()</code> to first get the predicted values and then round off as you want to, and then call <code>r2_score()</code> on actual and predicted labels. Something like:</p>
<pre><code># Get actual predictions
y_pred = clf.predict(X_test)
# You will need to implement the round function yourself
y_pred_rounded = round(y_pred)
# Call the appropriate scorer
score = r2_score(y_test, y_pred_rounded)
</code></pre>
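<p>Since the notes in the question go in 0.5 steps, one possible way to implement that rounding (a sketch) is:</p>
<pre><code>import numpy as np
from sklearn.metrics import r2_score

y_pred = clf.predict(X_test)
y_pred_rounded = np.round(y_pred * 2) / 2   # round to the nearest 0.5
score = r2_score(y_test, y_pred_rounded)
</code></pre>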
<p>You can look at the <a href="https://scikit-learn.org/stable/modules/model_evaluation.html#the-scoring-parameter-defining-model-evaluation-rules" rel="nofollow noreferrer">sklearn documentation here</a> for available metrics in sklearn.</p>
|
pandas|jupyter-notebook|sklearn-pandas
| 1 |
1,906,459 | 65,326,869 |
problem in installing arcade package in python
|
<p>I got an error while installing the arcade package using the <code>pip install arcade</code> command. It starts downloading successfully, but the installation fails with an error related to metadata.</p>
<pre><code>ERROR: Command errored out with exit status 1:
'c:\program files\python39\python.exe'
'C:\Users\kaush\AppData\Roaming\Python\Python39\site-packages\pip_vendor\pep517_in_process.py' prepare_metadata_for_build_wheel
'C:\Users\kaush\AppData\Local\Temp\tmpm02r2wt7'
</code></pre>
<p><a href="https://i.stack.imgur.com/lLHhP.png" rel="nofollow noreferrer">enter image description here</a></p>
|
<p>It is a bug in Python 3.9, but it works fine in an earlier version of Python, i.e. Python 3.8.6. The following steps show how to install arcade with Python 3.8.6:</p>
<ol>
<li>Uninstall the latest version, i.e. Python 3.9 (or whichever version you have).</li>
<li>Go to the path where Python is stored on your PC. In my case the path is</li>
</ol>
<blockquote>
<p>C:\Users\kaush\AppData\Local\Programs\Python\</p>
</blockquote>
<p>as you have to delete the python39 folder if you have uninstalled Python 3.9.</p>
<ol start="3">
<li><p>Go to the <a href="https://pypi.org/project/arcade/#files" rel="nofollow noreferrer">arcade file</a> page and download the wheel (.whl) file.</p>
</li>
<li><p>Now download Python version 3.8.6 from <a href="https://www.python.org/downloads/" rel="nofollow noreferrer">python</a> and install it on your computer.</p>
</li>
<li><p>Once it is installed, open a command prompt, type <code>pip install <arcade whl file full name you downloaded from the link in step 3></code> and press Enter.</p>
</li>
<li><p>Just wait a few minutes and it will be installed.</p>
</li>
</ol>
<p>This process worked on Windows.</p>
|
python|python-3.x|windows|pip|package
| 0 |
1,906,460 | 68,531,632 |
How to see the values of a keras layer's variables' slots
|
<p>Using keras (tf2), I'm writing my own optimiser. In this optimiser, I keep track of the moving average of the squares of all the weights, and of all the gradients. I store these in so-called <em>slots</em>, created as follows:</p>
<pre><code> def _create_slots(self, var_list):
# Separate for-loops required for some reason
for var in var_list:
self.add_slot(var, 'squared_gradient_ma', initializer="zeros")
for var in var_list:
self.add_slot(var, 'squared_param_ma', initializer="zeros")
</code></pre>
<p>I update these in the <code>_resource_apply_dense</code> method, as follows:</p>
<pre><code> def _resource_apply_dense(self, grad, var):
squared_gradient_ma = self.get_slot(var, 'squared_gradient_ma')
squared_param_ma = self.get_slot(var, 'squared_param_ma')
ma_decay = 1 / 1000000
new_squared_gradient_ma = state_ops.assign(
squared_gradient_ma,
(1.0 - ma_decay) * squared_gradient_ma + ma_decay * math_ops.square(grad),
use_locking=self._use_locking,
)
new_squared_param_ma = state_ops.assign(
squared_param_ma,
(1.0 - ma_decay) * squared_param_ma + ma_decay * math_ops.square(var),
use_locking=self._use_locking,
)
</code></pre>
<p>(Naturally, there's also code to update the actual weights, not shown.)</p>
<p>This makes it so that these two moving averages are updated on every training batch.</p>
<p>Now, at the end of training, I can inspect the weights that the model learned, by calling <code>layer.get_weights()</code>. Is there a similar thing I can call to see the values of these two slots that I made?</p>
|
<p>Assuming that your optimizer is subclassed from <code>tf.keras.optimizers.Optimizer</code>, and that you are training with <code>model.fit</code>, then you can access the optimizer weights with:</p>
<pre><code>model.optimizer.get_weights()
</code></pre>
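<p>If you want just the two slots rather than all optimizer weights, a sketch using <code>get_slot</code>, assuming the custom optimizer from the question is attached to <code>model</code>:</p>
<pre><code>for var in model.trainable_variables:
    grad_ma = model.optimizer.get_slot(var, 'squared_gradient_ma')
    param_ma = model.optimizer.get_slot(var, 'squared_param_ma')
    print(var.name, grad_ma.numpy().mean(), param_ma.numpy().mean())
</code></pre>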
|
python|tensorflow|keras|tf.keras
| 1 |
1,906,461 | 62,863,675 |
python scrapy FormRequest.FormResponse not giving any output
|
<p>Very new to coding and especially new to scrapy.</p>
<p>I have a simple scraper written with "scrapy". I wrote it hoping to produce a list of options (from an 'x' element), but it doesn't produce anything. I tried <a href="https://stackoverflow.com/q/24289854/3604513">this</a> but failed to make any headway in solving the issue I am facing.</p>
<p>I am pasting the full code followed by the full output.
(Note: in settings.py I have changed ROBOTSTXT_OBEY from True to False.)</p>
<pre><code>from scrapy import FormRequest
import scrapy
class FillFormSpider(scrapy.Spider):
name = 'fill_form'
allowed_domains = ['citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx']
start_urls = ['http://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx/']
def parse(self, response):
yield FormRequest.from_response(
response,
formid='form1',
formdata={
'ctl00$ContentPlaceHolder1$txtDateOfRegistrationFrom': "01/07/2020",
'ContentPlaceHolder1_txtDateOfRegistrationTo': "03/07/2020",
'ctl00$ContentPlaceHolder1$ddlDistrict': "19372"},
callback=(self.after_login)
)
def after_login(self, response):
print(response.xpath('//*[@id="ContentPlaceHolder1_ddlPoliceStation"]/text()'))
</code></pre>
<p>And the output in the terminal after running the command <code>scrapy crawl spider_name</code>:</p>
<pre><code>2020-07-12 21:32:21 [scrapy.utils.log] INFO: Scrapy 2.2.0 started (bot: maha_police)
2020-07-12 21:32:21 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.8.2 (default, Apr 27 2020, 15:53:34) - [GCC 9.3.0], pyOpenSSL 19.1.0 (OpenSSL 1.1.1f 31 Mar 2020), cryptography 2.8, Platform Linux-5.4.0-40-generic-x86_64-with-glibc2.29
2020-07-12 21:32:21 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.epollreactor.EPollReactor
2020-07-12 21:32:21 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'maha_police',
'NEWSPIDER_MODULE': 'maha_police.spiders',
'SPIDER_MODULES': ['maha_police.spiders']}
2020-07-12 21:32:21 [scrapy.extensions.telnet] INFO: Telnet Password: 02d86496f7d27542
2020-07-12 21:32:21 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats']
2020-07-12 21:32:22 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-07-12 21:32:22 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-07-12 21:32:22 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-07-12 21:32:22 [scrapy.core.engine] INFO: Spider opened
2020-07-12 21:32:22 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-07-12 21:32:22 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-07-12 21:32:25 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://citizen.mahapolice.gov.in/Citizen/MH/index.aspx> from <GET http://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx/>
2020-07-12 21:32:25 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://citizen.mahapolice.gov.in/Citizen/MH/index.aspx> (referer: None)
2020-07-12 21:32:26 [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'citizen.mahapolice.gov.in': <POST https://citizen.mahapolice.gov.in/Citizen/MH/index.aspx>
2020-07-12 21:32:26 [scrapy.core.engine] INFO: Closing spider (finished)
2020-07-12 21:32:26 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 501,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 8149,
'downloader/response_count': 2,
'downloader/response_status_count/200': 1,
'downloader/response_status_count/302': 1,
'elapsed_time_seconds': 3.542722,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2020, 7, 12, 16, 2, 26, 43681),
'log_count/DEBUG': 3,
'log_count/INFO': 10,
'memusage/max': 54427648,
'memusage/startup': 54427648,
'offsite/domains': 1,
'offsite/filtered': 1,
'request_depth_max': 1,
'response_received_count': 1,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2020, 7, 12, 16, 2, 22, 500959)}
</code></pre>
<p>I wanted to get the list of police stations, so I tried to extract text from the related element. Where am I going wrong? Also, if I want to click on search, how should I proceed? Thanks for the kind help.</p>
|
<p>As you can see in this line:</p>
<pre><code>2020-07-12 21:32:26 [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'citizen.mahapolice.gov.in': <POST https://citizen.mahapolice.gov.in/Citizen/MH/index.aspx>
</code></pre>
<p>Your request is being filtered. In your case, since it's filtered because it's considered an offsite by the <code>OffsiteMiddleware</code>. I suggest you to change your <code>allowed_domains</code> from:</p>
<pre><code>allowed_domains = ['citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx']
</code></pre>
<p>to:</p>
<pre><code>allowed_domains = ['citizen.mahapolice.gov.in']
</code></pre>
<p><strong>This change alone already worked for me, so it's a preferable solution</strong>.</p>
<p>If you check the <a href="https://docs.scrapy.org/en/latest/topics/spiders.html#scrapy.spiders.Spider.allowed_domains" rel="nofollow noreferrer">docs</a> you will see that this field is supposed to contain the <strong>domains</strong> that this spider is allowed to crawl.</p>
<p>What is happening to your spider is that the <code>OffsiteMiddleware</code> compiles a regex with the domains in <code>allowed_domains</code> field and while processing the requests it checks the method <code>should_follow</code> (<a href="https://github.com/scrapy/scrapy/blob/master/scrapy/spidermiddlewares/offsite.py" rel="nofollow noreferrer">code</a>) to know if the request should be executed. The regex would return <code>None</code>, and it would cause it to be filtered.</p>
<p>You could also use the <code>dont_filter</code> <a href="https://docs.scrapy.org/en/latest/topics/request-response.html#request-objects" rel="nofollow noreferrer">parameter</a> when building your request, this would also work, however the solution above is better. It would look like this:</p>
<pre><code> yield FormRequest.from_response(
response,
formid='form1',
formdata={
'ctl00$ContentPlaceHolder1$txtDateOfRegistrationFrom': "01/07/2020",
'ContentPlaceHolder1_txtDateOfRegistrationTo': "03/07/2020",
'ctl00$ContentPlaceHolder1$ddlDistrict': "19372"},
callback=self.after_login,
dont_filter=True, # <<< Here
)
</code></pre>
|
python|web-scraping|scrapy|web-crawler
| 1 |
1,906,462 | 62,862,072 |
Fixed columns in dash data table breaks layout spread
|
<p>I want the first two columns of my table to be fixed,<br/>
but when I set the <code>fixed_columns</code> property the entire layout gets very small.<br/>
I am using dash 1.13 (but also tried 1.12).<br/>
Code as small as possible to reproduce my problem:</p>
<pre><code>import dash
import dash_table
import random
def define_table():
headers = ['Fixed1', 'Fixed2', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J']
columns = [{'name': h, 'id': h} for h in headers]
data = []
for i in range(0, 20):
row = {}
for h in headers:
row[h] = random.random()
data = data + [row]
return dash_table.DataTable(
id='table-results',
data=data,
columns=columns,
fixed_columns={'headers': True, 'data': 2}, # TODO
style_cell={'textAlign': 'left'},
row_deletable=True,
export_columns='visible',
export_format='csv')
app = dash.Dash(__name__)
app.layout = define_table()
if __name__ == '__main__':
app.run_server(debug=True)
</code></pre>
<br/>
A picture of how it looks in my browser:<br/>
<p><img src="https://i.stack.imgur.com/4wNPQ.jpg" alt="On Firefox1" />
<br/></p>
|
<p>from:<br/>
<a href="https://community.plotly.com/t/fixed-columns-in-dash-data-table-breaks-layout-spread/42361" rel="nofollow noreferrer">https://community.plotly.com/t/fixed-columns-in-dash-data-table-breaks-layout-spread/42361</a><br/>
adding <code>style_table={'minWidth': '100%'}</code> to the table attributes fixed it</p>
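<p>Applied to the example in the question, that is one extra keyword argument in <code>define_table()</code>:</p>
<pre><code>    return dash_table.DataTable(
        id='table-results',
        data=data,
        columns=columns,
        fixed_columns={'headers': True, 'data': 2},
        style_table={'minWidth': '100%'},  # keeps the layout at full width
        style_cell={'textAlign': 'left'},
        row_deletable=True,
        export_columns='visible',
        export_format='csv')
</code></pre>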
|
python|css|plotly-dash
| 1 |
1,906,463 | 61,892,438 |
Python Dask Delete From Row
|
<p>I'm trying to write a script to scrub information from csvs using dask. I have a dask df created from a csv like below:</p>
<pre><code>CUSTOMER ORDERS
hashed_customer firstname lastname email order_id status timestamp
0 eater 1_uuid 1_firstname 1_lastname 1_email 12345 OPTED_IN 2020-05-14 20:45:15
1 eater 2_uuid 2_firstname 2_lastname 2_email 23456 OPTED_IN 2020-05-14 20:29:22
2 eater 3_uuid 3_firstname 3_lastname 3_email 34567 OPTED_IN 2020-05-14 19:31:55
</code></pre>
<p>I have another csv with the hashed_customers that I need to scrub from this file. So if the hashed_customer in this file is in CUSTOMER ORDERS, I need to remove the firstname, lastname, and email from the row while keeping the rest, to look something like this:</p>
<pre><code>CUSTOMER ORDERS
hashed_customer firstname lastname email order_id status timestamp
0 eater 1_uuid NULL NULL NULL 12345 OPTED_IN 2020-05-14 20:45:15
1 eater 2_uuid 2_firstname 2_lastname 2_email 23456 OPTED_IN 2020-05-14 20:29:22
2 eater 3_uuid 3_firstname 3_lastname 3_email 34567 OPTED_IN 2020-05-14 19:31:55
</code></pre>
<p>My current script looks like this:</p>
<pre><code>print('FIND ORDERS FROM OPT-OUT CUSTOMERS')
cust_opt_out_order = []
for index, row in df_in.iterrows():
if row.hashed_eater_uuid in cust_opt_out_id:
cust_opt_out_order.append(row.order_id)
print('REMOVE OPT-OUT FROM OPT-IN FILE')
df_cust_out = df_in[~df_in['hashed_eater_uuid'].isin(cust_opt_out_id)]
</code></pre>
<p>But this is removing the entire row, and now I need to keep the row and only remove the name and email elements from the row. How can I drop elements from a row using pandas?</p>
<p>I'm trying to get a dask equivalent to pandas:</p>
<pre><code>df_cust_out.loc[df_in['hashed_eater_uuid'].isin(cust_opt_out_id),['firstname','lastname', 'email']]=np.nan
</code></pre>
|
<p>I recommend looking at the Dataframe.where or Series.where methods:</p>
<p><a href="https://docs.dask.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.where" rel="nofollow noreferrer">https://docs.dask.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.where</a></p>
|
python|dataframe|dask
| 0 |
1,906,464 | 61,786,707 |
What effects should tensorflow.compat.v1.disable_v2_behavior() have on training using the Keras API?
|
<p>I have a CNN that trains, on a few hundred thousand examples, to a validation accuracy of ~95% after one epoch. It's straightforward code, using Keras to define a network using the Sequential API. Originally I prepared and used this model on TF 1.3. When I port it over to TF 2.1, replacing the keras calls with tensorflow.keras, it gets to ~60% quickly and gets stuck there (seemingly for many epochs), and the training loss always seems to converge to the same value.</p>
<p>If I add in <code>tf.disable_v2_behavior()</code> at the top of the script, it trains similarly to before.</p>
<p>The documentation states simply that "It switches all global behaviors that are different between TensorFlow 1.x and 2.x to behave as intended for 1.x". Hidden behind the Keras API, I haven't found a clear answer to what this really means in practice. Why should I expect a VGG-like CNN, defined using Keras and trained with <code>model.fit()</code>, to work well without v2 behaviour but to fail so consistently with it?</p>
<p>Edit: <code>disable_eager_execution()</code> produces the same result, with improved performance.</p>
|
<p>Please try disabling eager execution and see if that helps. </p>
<pre><code>tf.compat.v1.disable_eager_execution()
</code></pre>
<p>(Add this to the top of your script)</p>
|
tensorflow|machine-learning|keras|deep-learning|tensorflow2.0
| 1 |
1,906,465 | 67,235,014 |
smtplib.SMTPNotSupportedError: SMTP AUTH extension not supported by server - trying to send email via Django
|
<p>I have a Docker container where I am trying to send an email via Django.</p>
<p><strong>I have a separate email server on another domain that I want to use for this purpose. I have other applications connect with it with no problem.</strong></p>
<p>My Django production email setup looks like this (I intend to replace username and password with Kubernetes secrets in the future, but for testing I am just putting them inside the file):</p>
<pre><code>EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'mail.mydomain.io'
EMAIL_USE_TLS = True
EMAIL_PORT = 587
EMAIL_HOST_USER = "<username>"
EMAIL_HOST_PASSWORD = "<password>"
</code></pre>
<p>In my module, I have the following code:</p>
<pre><code>from rest_framework.views import APIView
from django.core.mail import send_mail
class MailView(APIView):
def post(self, request):
subject = request.data.get("subject")
message = request.data.get("message")
sender = request.data.get("sender")
receipients = request.data.get("receipients")
send_mail(subject,message,sender,receipients,fail_silently=False,)
... more
</code></pre>
<p>This works locally, however when I try to run it inside the container, I get the following error:</p>
<pre><code>smtplib.SMTPNotSupportedError: SMTP AUTH extension not supported by server.
</code></pre>
<p>Do I need to install some sort of SMTP relay or server to my Docker container?</p>
<p>My Docker container is based on the <code>python:3.7</code> container and I am not currently installing any SMTP extensions or anything to it.</p>
|
<p><code>EMAIL_USE_TLS</code> needs to be set to <code>True</code></p>
<pre><code>EMAIL_USE_TLS = True
</code></pre>
|
python|django|docker|email
| 0 |
1,906,466 | 70,185,506 |
Replace a particular string in a column with strings in another column in Pandas
|
<p>I wonder how to replace the string value of 'Singapore' in the location1 column with the string values from the location2 column. In this case, they're <code>Tokyo, Boston, Toronto</code> and <code>Hong Kong, Boston</code>.</p>
<pre><code>import pandas as pd
data = {'location1':["London, Paris", "Singapore", "London, New York", "Singapore", "Boston"],
'location2':["London, Paris", "Tokyo, Boston, Toronto", "London, New York", "Hong Kong, Boston", "Boston"]}
df = pd.DataFrame(data)
location1 location2
0 London, Paris London, Paris
1 Singapore Tokyo, Boston, Toronto
2 London, New York London, New York
3 Singapore Hong Kong, Boston
4 Boston Boston
</code></pre>
|
<p>We can do it using the <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a> method :</p>
<pre class="lang-py prettyprint-override"><code>>>> import numpy as np
>>> df["location1"] = np.where(df["location1"] == 'Singapore', df["location2"], df["location1"])
>>> df
location1 location2
0 London, Paris London, Paris
1 Tokyo, Boston, Toronto Tokyo, Boston, Toronto
2 London, New York London, New York
3 Hong Kong, Boston Hong Kong, Boston
4 Boston Boston
</code></pre>
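<p>Alternatively, plain pandas indexing with <code>.loc</code> achieves the same result:</p>
<pre class="lang-py prettyprint-override"><code>>>> df.loc[df["location1"] == "Singapore", "location1"] = df["location2"]
</code></pre>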
|
python|pandas
| 2 |
1,906,467 | 56,852,923 |
Split an array in rows and columns
|
<p>Using Python 2.</p>
<p>I need to split an array into its rows and columns, but I can't seem to get the solution the exercise asks for.</p>
<pre><code>import numpy as np
a = np.array([[5, 0, 3, 3],
[7, 9, 3, 5],
[2, 4, 7, 6],
[8, 8, 1, 6]])
</code></pre>
<p>So far I have these functions </p>
<pre><code>def _rows(a):
print("array:"+ str(a[:,]))
_rows(a)
def _col(a):
alt=a.T
print ("array:"+ str(alt[:,]))
_col(a)
</code></pre>
<p>but I need to return a list, and when I use the <code>list()</code> function it separates each individual character</p>
<p>I need the result to be:</p>
<pre><code>[array([5, 0, 3, 3]), array([7, 9, 3, 5]), array([2, 4, 7, 6]), array([8, 8, 1, 6])]
[array([5, 7, 2, 8]), array([0, 9, 4, 8]), array([3, 3, 7, 1]), array([3, 5, 6, 6])]
</code></pre>
|
<p>You can unpack the rows and columns into a list with:</p>
<pre><code>res1, res2 = [*a], [*a.T]
</code></pre>
<hr>
<pre><code>print(res1)
[array([5, 0, 3, 3]),
array([7, 9, 3, 5]),
array([2, 4, 7, 6]),
array([8, 8, 1, 6])]
print(res2)
[array([5, 7, 2, 8]),
array([0, 9, 4, 8]),
array([3, 3, 7, 1]),
array([3, 5, 6, 6])]
</code></pre>
<hr>
<p><a href="https://www.python.org/dev/peps/pep-3132/" rel="nofollow noreferrer">Extended iterable unpacking</a> was introduced in python 3.0, for older versions you can call the list constructor as in @U9-Forward 's answer</p>
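<p>That is, for Python 2 the equivalent would be:</p>
<pre><code>res1, res2 = list(a), list(a.T)
</code></pre>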
|
python|list|numpy
| 8 |
1,906,468 | 61,164,023 |
How do you make a hexadecimal private key into a private key WIF Compressed
|
<p>I am trying to make a python program to take the 64 character hexadecimal private bitcoin key and make it the 52 characters base58 WIF compressed private key. If anyone has a python code snippet or even a formula I could reference it would be great.</p>
|
<p>Algorithm is:</p>
<pre><code>//initialization
define version "80"
ByteString PK,CheckSum
//getting to work
// double sha version+PK
CheckSum=SHA256(SHA256(version+PK).AsHex).AsHex
CheckSum=CheckSum.SubString(1,4) // first 4 bytes
// now create base58
Result = base58encode(version+PK+CheckSum)
</code></pre>
<p>Note that this is an algorithm in pseudocode, not actual code; unfortunately I don't know Python (I use C/C++), I just tried to be helpful.</p>
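<p>For reference, here is a minimal Python 3 sketch of that algorithm using only the standard library. For the WIF <em>compressed</em> form the question asks about, a <code>0x01</code> byte is appended to the private key before the checksum is computed; the base58 encoder is written out by hand to avoid a third-party dependency:</p>
<pre><code>import hashlib

B58_ALPHABET = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def base58_encode(data):
    n = int.from_bytes(data, 'big')
    encoded = ''
    while n > 0:
        n, rem = divmod(n, 58)
        encoded = B58_ALPHABET[rem] + encoded
    for byte in data:          # each leading zero byte becomes '1'
        if byte != 0:
            break
        encoded = '1' + encoded
    return encoded

def hex_to_wif_compressed(priv_hex):
    payload = bytes.fromhex('80' + priv_hex + '01')  # version + key + compression flag
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    return base58_encode(payload + checksum)
</code></pre>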
|
python|python-3.x|bitcoin
| 1 |
1,906,469 | 66,127,482 |
append variables to a pickle file and read them
|
<p>I'm trying to append several variables to a pickle file to read them later, but it doesn't work as I expected. I would expect that at the end of this script c='A' and d='B', but instead it throws an error. Could you please explain why, and how to get what I want? Many thanks</p>
<pre><code>import pickle
filename = 'test.pkl'
a = 'A'
b = 'B'
with open(filename, 'wb') as handle:
pickle.dump(a, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open(filename, 'ab') as handle:
pickle.dump(b, handle)
with open(filename, 'rb') as filehandle:
c,d = pickle.load(filehandle)
</code></pre>
|
<p>After running your code, I got <code>ValueError: not enough values to unpack (expected 2, got 1)</code>.</p>
<p>If you run <code>help(pickle.load)</code>, it will tell you that it reads only one object per call. If you have multiple objects in the file, you have to call <code>pickle.load</code> multiple times to read them sequentially.</p>
<p>Your issue is basically you stored them as 2 separate objects but are attempting to read them as a single tuple.</p>
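<p>In other words, something like this should give you <code>c == 'A'</code> and <code>d == 'B'</code>:</p>
<pre><code>with open(filename, 'rb') as filehandle:
    c = pickle.load(filehandle)
    d = pickle.load(filehandle)
</code></pre>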
|
python|append|pickle
| 0 |
1,906,470 | 69,051,961 |
regex python. this code cannot read 2nd line which have username as "-" only
|
<p>I am trying to get a dictionary of different keys as follows. The problem is that this code skips the 2nd line, which has a username of a different type, i.e. user name = "-". Also, on the last line I see there are leading spaces in front of the host. Can you please help me understand the issue?</p>
<pre><code>import re
def logs():
logdata = """146.204.224.152 - feest6811 [21/Jun/2019:15:45:24 -0700] "POST /incentivize HTTP/1.1" 302 4622
159.253.153.40 - - [21/Jun/2019:15:46:10 -0700] "POST /e-business HTTP/1.0" 504 19845
197.109.77.178 - kertzmann3129 [21/Jun/2019:15:45:25 -0700] "DELETE /virtual/solutions/target/web+services HTTP/2.0" 203 26554"""
</code></pre>
<hr />
<pre><code># YOUR CODE HERE
pattern="""
(?P<host>.*)
(\ -\ )
(?P<user_name>\w.*)
(\ \[)
(?P<time>.*)
(\]\ \")
(?P<request>\w.*)
(\")
"""
for item in re.finditer(pattern,logdata,re.VERBOSE):
print(item.groupdict())
return
</code></pre>
|
<p>You're only looking for usernames made up of word characters, but '-' isn't a word character.</p>
<p>If you change the regex to:</p>
<pre><code>pattern="""
(?P<host>.*)
(\ -\ )
(?P<user_name>(\w.*|-))
(\ \[)
(?P<time>.*)
(\]\ \")
(?P<request>\w.*)
(\")
"""
</code></pre>
<p>You'll then match either word-character usernames or '-'.</p>
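<p>Equivalently, a character class covering word characters and the hyphen would also work (a small variation, not tested against your full logs):</p>
<pre><code>(?P<user_name>[\w-]+)
</code></pre>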
|
python|regex|verbose
| 0 |
1,906,471 | 69,087,629 |
A progress bar for my function (Python, pandas)
|
<p>I'm trying to read big csv files while also working effectively on other stuff at the same time. That is why my solution is to create a progress bar (something that shows how far through the read I am, giving me a sense of how much time remains before it completes). However, I have tried tqdm as well as homemade while loops, but to my misfortune I have not found a solution. I tried this thread: <a href="https://stackoverflow.com/questions/57174012/how-to-see-the-progress-bar-of-read-csv">How to see the progress bar of read_csv</a>
without luck. Maybe I can apply tqdm in a different way? Are there any other solutions?</p>
<p>Heres the important part of the code (the one I want to add a progress bar to)</p>
<pre><code>def read_from_csv(filepath: str,
sep: str = ",",
header_line: int = 43,
skip_rows: int = 48) -> pd.DataFrame:
"""Reads a csv file at filepath containing the vehicle trip data and
performs a number of formatting operations
"""
# The first call of read_csv is used to get the column names, which allows
# the typing to take place at the same time as the second read, which is
# faster than forcing type afterwards
df_names: pd.Index[str] = pd.read_csv(
filepath,
sep = sep,
header = header_line,
skip_blank_lines = False,
skipinitialspace = True,
index_col = False,
engine = 'c',
nrows = 0,
encoding = 'iso-8859-1'
).columns
# The "Time" and "Time_abs" columns have some inconsistent
# "Storage group code" preceeding the actual column name, so their
# full column names are stored so they can be renamed later. Also, we want
# to interpret "Time_abs" as a string, while the rest are floats. This is
# stored in a dict to use in the next call to read_csv
time_col = ""
time_abs_col = ""
names_dict = {}
for name in df_names:
if ": Time_abs" in name:
names_dict[name] = 'str'
time_abs_col = name
elif ": Time" in name:
time_col = name
else:
names_dict[name] = 'float'
# A list of values that we want pandas to interpret as having no value.
# "NOVALUE" is the only one of these that's actually used in the files,
# the rest are copy-pasted defaults.
na_vals = ['', '#N/A N/A', '#NA', '-1.#IND', '-1.#QNAN', '-NaN', '-nan',
'1.#IND', '1.#QNAN', '<NA>', 'N/A', 'NA', 'NULL', 'NaN', 'n/a',
'nan', 'null', 'NOVALUE']
# The whole file is parsed and put in a dataframe
df: pd.DataFrame = pd.read_csv(filepath,
sep = sep,
skiprows = skip_rows,
header = 0,
names = df_names,
skip_blank_lines = False,
skipinitialspace = True,
index_col = False,
engine = 'c',
na_values = na_vals,
dtype = names_dict,
encoding = 'iso-8859-1'
)
# Renames the "Time" and "Time_abs" columns so they don't include the
# storage group part
df.rename(columns = {time_col: "Time", time_abs_col: "Time_abs"},
inplace = True)
# Second retyping of this column (here from string to datetime).
# Very rarely, the Time_abs column in the csv data only has the time and
# not the date, in which case this line throws an error. We manage this by
# simply letting it stay as a string
try:
df[defs.time_abs] = pd.to_datetime(df[defs.time_abs])
except:
pass
# Every row ends with an extra delimiter which python interprets as another
# column, but it's empty so we remove it. This is not really necessary, but
# is done to reduce confusion when debugging
df.drop(df.columns[-1], axis=1, inplace=True)
# Adding extra columns to the dataframe used later
df[defs.lowest_gear] = np.nan
df[defs.lowest_speed] = np.nan
for i in list(defs.second_trailer_axles_dict.values()):
df[i] = np.nan
return df
</code></pre>
<p>It's the csv read that takes most of the time, which is why that is where I want to add the progress bar.</p>
<p>Thank you in advance!</p>
|
<p>You can easily do this with Dask. For example:</p>
<pre class="lang-py prettyprint-override"><code>import dask.dataframe as dd
from dask.diagnostics import ProgressBar
ddf = dd.read_csv(path, blocksize=1e+6)
with ProgressBar():
df = ddf.compute()
</code></pre>
<pre><code>[########################################] | 100% Completed | 37.0s
</code></pre>
<p>And you will see the progress of the file being read.
The <code>blocksize</code> parameter controls the size of the blocks your file is read in; by tuning it you can achieve good performance. Plus, Dask uses several threads for reading by default, which speeds up the reading itself.</p>
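<p>If you would rather stay with pandas and tqdm, a rough sketch is to read the file in chunks with <code>chunksize</code> and update a bar per chunk (this shows chunks processed rather than an exact percentage, and omits your other read_csv arguments for brevity):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from tqdm import tqdm

chunks = []
with tqdm(desc='Reading csv', unit=' chunks') as pbar:
    for chunk in pd.read_csv(filepath, chunksize=100_000):
        chunks.append(chunk)
        pbar.update(1)
df = pd.concat(chunks, ignore_index=True)
</code></pre>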
|
python|pandas
| 1 |
1,906,472 | 72,582,842 |
Distributing the values of one column to multiple columns in Pandas
|
<p>I have the following DataFrame:</p>
<pre><code>df = pd.DataFrame([['A',1,10],
['A',1,20],
['C',2,15],
['B',3,20]], columns=['Val1','Val2','Val3'])
colList = [f'{col}_{i}' for col in ['A', 'B', 'C'] for i in range(5)]
data = np.zeros(shape=(len(df), len(colList)))
df = pd.concat([df, pd.DataFrame(data, columns=colList)], axis=1)
df['REF'] = df['Val1'] + '_' +df['Val2'].astype(str)
Val1 Val2 Val3 A_0 A_1 A_2 A_3 ... B_4 C_0 C_1 C_2 C_3 C_4 REF
0 A 1 10 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 A_1
1 A 1 20 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 A_1
2 C 2 15 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 C_2
3 B 3 20 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 B_3
</code></pre>
<p>Column Val3 contains values which should be assigned to the corresponding columns under the REF columns. This can be done by using the following lambda function:</p>
<pre><code>def func(x):
x[x['REF']] = x['Val3']
return x
df = df.apply(lambda x: func(x), axis=1)
print(df)
Val1 Val2 Val3 A_0 A_1 A_2 A_3 ... B_4 C_0 C_1 C_2 C_3 C_4 REF
0 A 1 10 0.0 10.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 A_1
1 A 1 20 0.0 20.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 A_1
2 C 2 15 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 15.0 0.0 0.0 C_2
3 B 3 20 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 B_3
</code></pre>
<p>Is there any way I can vectorize this process without using a Lambda function in order to speed up the process?</p>
|
<h2><code>Pivot</code> and <code>update</code></h2>
<pre><code>df.update(df.pivot(columns='REF', values='Val3'))
</code></pre>
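<p><code>update</code> aligns on both index and columns: the pivot produces one column per unique <code>REF</code> value, and only its non-NaN cells (each row's own <code>REF</code> column) overwrite the zeros in <code>df</code>.</p>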
<hr />
<pre><code>>>> df
Val1 Val2 Val3 A_0 A_1 A_2 A_3 A_4 B_0 B_1 B_2 B_3 B_4 C_0 C_1 C_2 C_3 C_4 REF
0 A 1 10 0.0 10.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 A_1
1 A 1 20 0.0 20.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 A_1
2 C 2 15 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 15.0 0.0 0.0 C_2
3 B 3 20 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 20.0 0.0 0.0 0.0 0.0 0.0 0.0 B_3
</code></pre>
|
python|pandas|dataframe
| 4 |
1,906,473 | 68,379,900 |
Pandas rename one value as other value in a column and add corresponding values in the other column
|
<p>So, I have a pandas data frame:</p>
<pre><code> df =
a b c
a1 b1 c1
a2 b2 c1
a2 b3 c2
a2 b4 c2
</code></pre>
<p>I want to rename <code>a2</code> into <code>a1</code> and then group by <code>a</code> and <code>c</code> and add the corresponding values of <code>b</code></p>
<pre><code> df =
a b c
a1 b1+b2 c1
a1 b3+b4 c2
</code></pre>
<p>So, something like this</p>
<pre><code> df =
a value c
a1 10 c1
a2 20 c1
a2 50 c2
a2 60 c2
df =
a value c
a1 30 c1
a1 110 c2
</code></pre>
<p>How to do this?</p>
|
<p>Try this:</p>
<pre><code>df.groupby([df['a'].replace({'a2':'a1'}),'c']).sum().reset_index()
</code></pre>
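<p>Which should give:</p>
<pre><code>    a   c  value
0  a1  c1     30
1  a1  c2    110
</code></pre>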
|
python-3.x|pandas|dataframe|pandas-groupby
| 2 |
1,906,474 | 62,947,277 |
Django form field size
|
<p>Right now my filter form text field size is very large I'm trying to make the form textfield size smaller.How can I make them smaller I tried giving an id to my form and tried changing the size using css , but it didn't work.</p>
<pre><code>filter.py
import django_filters
from django_filters import DateFilter
from .models import Order
class OrderFilter(django_filters.FilterSet):
start_date = DateFilter(field_name="date_created", lookup_expr='gte')
end_date = DateFilter(field_name="date_created", lookup_expr='lte',)
class Meta:
model = Order
fields = ['status']
</code></pre>
<pre><code>order_list.html
<form class="filter-form" method="GET" action="">
{{myFilter.form}}
<button class="btn btn-primary" type="submit">Search</button>
</form>
</code></pre>
|
<p>I think you have to define the widget:</p>
<pre><code>start_date = DateFilter(field_name="date_created",lookup_expr='gte',widget=DateInput(attrs={'style':'width:150px; height:30px'}))
</code></pre>
|
python|django
| 0 |
1,906,475 | 67,724,712 |
Custom Id in Django Models
|
<p>In my model I need an ID field that is different from the default ID given by Django. I need my IDs in the following format: [year][ascending number]</p>
<p><em>Example: <code>2021001</code>,<code>2021002</code>,<code>2021003</code></em></p>
<p>The IDs shall not be editable, but model entries shall take the year for the ID from a DateTimeField in the model. I am unsure whether to keep the normal Django ID and also create some kind of additional ID for the model, or to replace the normal Django ID with a <a href="https://docs.djangoproject.com/en/dev/ref/models/fields/#uuidfield" rel="nofollow noreferrer">Custom ID</a>.</p>
|
<p>This problem is pretty similar to one I had solved for a previous project of mine. What I had done for this was to simply use the default id for the primary key, while using some extra fields to make the composite identifier needed.</p>
<p>To ensure uniqueness and the restarting of the count, I made a model which (by logic only, with no actual constraints) would hold just one row. Whenever a new instance of the model that needs this identifier is created, this row is updated in a transaction and its stored value is used.</p>
<p>The implementation of it is as follows:</p>
<pre><code>from django.db import models, transaction
import datetime
class TokenCounter(models.Model):
counter = models.IntegerField(default=0)
last_update = models.DateField(auto_now=True)
@classmethod
def get_new_token(cls):
with transaction.atomic():
token_counter = cls.objects.select_for_update().first()
if token_counter is None:
token_counter = cls.objects.create()
if token_counter.last_update.year != datetime.date.today().year:
token_counter.counter = 0
token_counter.counter += 1
token_counter.save()
return_value = token_counter.counter
return return_value
def save(self, *args, **kwargs):
if self.pk:
self.__class__.objects.exclude(pk=self.pk).delete()
super().save(*args, **kwargs)
</code></pre>
<p>Next suppose you need to use this in some other model:</p>
<pre><code>class YourModel(models.Model):
created_at = models.DateTimeField(auto_now_add=True)
yearly_token = models.IntegerField(default=TokenCounter.get_new_token)
@property
def token_number(self):
return '{}{}'.format(self.created_at.year, str(self.yearly_token).zfill(4))
@classmethod
def get_from_token(cls, token):
year = int(token[:4])
yearly_token = int(token[4:])
try:
obj = cls.objects.get(created_at__year=year, yearly_token=yearly_token)
except cls.DoesNotExist:
obj = None
return obj
</code></pre>
<hr>
<p><strong>Note</strong>: This might not be very refined as the code was written when I was very inexperienced, and there may be many areas where it can be refined. For example you can add a <code>unique_for_year</code> in the <code>yearly_token</code> field so:</p>
<pre><code>yearly_token = models.IntegerField(default=TokenCounter.get_new_token, unique_for_year='created_at')
</code></pre>
|
python|django|django-models
| 1 |
1,906,476 | 67,838,417 |
Assign category or integer to row if value within an interval
|
<p>I have this problem that I cannot get around. I have this dataframe:</p>
<pre><code>item distance
0 1 0
1 2 1
2 3 1
3 4 3
4 5 4
5 6 4
6 7 5
7 8 6
8 9 7
9 10 7
10 11 7
11 12 7
12 13 8
13 14 8
14 15 20
15 16 20
</code></pre>
<p>and I need to associate each row to an interval. So, I thought about creating "bins" this way:</p>
<pre><code>max_distance = df['distance'].max()
min_distance = df['distance'].min()
number_bins = (round(max_distance)-round(min_distance))/0.5
</code></pre>
<p>This means that each interval has length 0.5. This creates 40 "bins". But this is where I get stuck. I do not know how to</p>
<ol>
<li>create these interval, e.g. <code>(0,0.5], (0.5,1],(1,1.5] ,(1.5,2],(2,2.5] ,(2,5,3]......</code> and give each of them a name <code>1, 2, 3, ...., 40</code></li>
<li>associate each <code>df['distance']</code> to a specific interval number (from 1.)</li>
</ol>
<pre><code>item distance bin
0 1 0 1
1 2 1 2
2 3 1 2
3 4 3 6
4 5 4 6
5 6 4 #and so on
6 7 5
7 8 6
8 9 7
9 10 7
10 11 7
11 12 7
12 13 8
13 14 8
14 15 20
15 16 20
</code></pre>
<p>Now, I tried something using <code>pd.cut</code>, but doing so:</p>
<pre><code>bins_df = pd.cut(df['distance'], round(number_bins))
bins_unique = bins.unique()
</code></pre>
<p>returns intervals with gaps and not enough categories:</p>
<pre><code>[(-0.02, 0.155], (0.93, 1.085], (2.946, 3.101], (3.876, 4.031], (4.961, 5.116], (5.891, 6.047], (6.977, 7.132], (7.907, 8.062], (19.845, 20.0]]
Categories (9, interval[float64]): [(-0.02, 0.155] < (0.93, 1.085] < (2.946, 3.101] < (3.876, 4.031] ... (5.891, 6.047] < (6.977, 7.132] < (7.907, 8.062] < (19.845, 20.0]]
</code></pre>
<p>Ideally, I would associate every <code>distance</code> value with a category in <code>[1, number_bins]</code>.
Any idea on how I could achieve my desired output would be greatly appreciated.</p>
|
<p>You seemed on the right track with the 2 steps you specified. Here’s how I would carry them out:</p>
<ol>
<li>generate bounds on the bins</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import numpy as np
bounds = np.arange(df['distance'].min(), df['distance'].max() + .5, .5)
bounds
</code></pre>
<ul>
<li><code>np.arange</code> is mostly like <code>range()</code>, but you can specify floating-point bounds and step.</li>
<li>The <code>+.5</code> ensures you get the final bound.</li>
</ul>
<p>This gives you the following:</p>
<pre><code>array([ 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5. , 5.5, 6. ,
6.5, 7. , 7.5, 8. , 8.5, 9. , 9.5, 10. , 10.5, 11. , 11.5,
12. , 12.5, 13. , 13.5, 14. , 14.5, 15. , 15.5, 16. ])
</code></pre>
<ol start="2">
<li>use pd.cut</li>
</ol>
<pre class="lang-py prettyprint-override"><code>dist_bins = pd.cut(df['distance'], bins=bounds, include_lowest=True)
dist_bins
</code></pre>
<p>This uses the fact that you can specify the bins manually, see <a href="https://pandas.pydata.org/pandas-docs/version/1.2.0/reference/api/pandas.cut.html" rel="nofollow noreferrer">the doc</a>:</p>
<blockquote>
<p>bins : int, sequence of scalars, or IntervalIndex</p>
<p>The criteria to bin by.</p>
<ul>
<li>[...]</li>
<li>sequence of scalars : Defines the bin edges allowing for non-uniform width. No extension of the range of x is done.</li>
</ul>
</blockquote>
<p>Which returns:</p>
<pre><code>0 (0.999, 1.5]
1 (1.5, 2.0]
2 (2.5, 3.0]
3 (3.5, 4.0]
4 (4.5, 5.0]
5 (5.5, 6.0]
6 (6.5, 7.0]
7 (7.5, 8.0]
8 (8.5, 9.0]
9 (9.5, 10.0]
10 (10.5, 11.0]
11 (11.5, 12.0]
12 (12.5, 13.0]
13 (13.5, 14.0]
14 (14.5, 15.0]
15 (15.5, 16.0]
Name: distance, dtype: category
Categories (30, interval[float64]): [(0.999, 1.5] < (1.5, 2.0] < (2.0, 2.5] < (2.5, 3.0] < ... <
(14.0, 14.5] < (14.5, 15.0] < (15.0, 15.5] < (15.5, 16.0]]
</code></pre>
<p>Note that as per your specification of bins the distance <code>1</code> would not fall in any bin, which is why I used <code>include_lowest=True</code> and why the first bin looks like <code>(0.999, 1.5]</code> (which is basically <code>[1, 1.5]</code>). If you don’t want this, you need to start the bins below your <code>min()</code>.</p>
<p>You get a (sorted) <code>category</code> dtype column (<code>pd.Series</code>) as expected.</p>
<p>If you want the list of the 30 categories that were created, you can access them with the <a href="https://pandas.pydata.org/pandas-docs/version/1.2.0/reference/api/pandas.Series.cat.categories.html" rel="nofollow noreferrer"><code>.cat</code> accessor</a></p>
<pre class="lang-py prettyprint-override"><code>dist_bins.cat.categories
</code></pre>
<p>This returns an <code>IntervalIndex</code>:</p>
<pre><code>IntervalIndex([(0.999, 1.5], (1.5, 2.0], (2.0, 2.5], (2.5, 3.0], (3.0, 3.5] ... (13.5, 14.0], (14.0, 14.5], (14.5, 15.0], (15.0, 15.5], (15.5, 16.0]],
closed='right',
dtype='interval[float64]')
</code></pre>
<p>As with every index you can access the list of values:</p>
<pre><code>>>> dist_bins.cat.categories.to_list()
[Interval(0.999, 1.5, closed='right'), Interval(1.5, 2.0, closed='right'), Interval(2.0, 2.5, closed='right'), Interval(2.5, 3.0, closed='right'), Interval(3.0, 3.5, closed='right'), Interval(3.5, 4.0, closed='right'), Interval(4.0, 4.5, closed='right'), Interval(4.5, 5.0, closed='right'), Interval(5.0, 5.5, closed='right'), Interval(5.5, 6.0, closed='right'), Interval(6.0, 6.5, closed='right'), Interval(6.5, 7.0, closed='right'), Interval(7.0, 7.5, closed='right'), Interval(7.5, 8.0, closed='right'), Interval(8.0, 8.5, closed='right'), Interval(8.5, 9.0, closed='right'), Interval(9.0, 9.5, closed='right'), Interval(9.5, 10.0, closed='right'), Interval(10.0, 10.5, closed='right'), Interval(10.5, 11.0, closed='right'), Interval(11.0, 11.5, closed='right'), Interval(11.5, 12.0, closed='right'), Interval(12.0, 12.5, closed='right'), Interval(12.5, 13.0, closed='right'), Interval(13.0, 13.5, closed='right'), Interval(13.5, 14.0, closed='right'), Interval(14.0, 14.5, closed='right'), Interval(14.5, 15.0, closed='right'), Interval(15.0, 15.5, closed='right'), Interval(15.5, 16.0, closed='right')]
</code></pre>
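<p>Finally, if you want the integer bin number (1 to the number of categories) rather than the interval itself, the 0-based category codes give it directly:</p>
<pre class="lang-py prettyprint-override"><code>df['bin'] = dist_bins.cat.codes + 1
</code></pre>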
|
python-3.x|pandas
| 1 |
1,906,477 | 30,473,602 |
Scaling test data to 0 and 1 using MinMaxScaler
|
<p>Using the MinMaxScaler from sklearn, I scale my data as below.</p>
<pre><code>min_max_scaler = preprocessing.MinMaxScaler()
X_train_scaled = min_max_scaler.fit_transform(features_train)
X_test_scaled = min_max_scaler.transform(features_test)
</code></pre>
<p>However, when printing X_test_scaled.min(), I have some negative values (the values do not fall between 0 and 1). This is due to the fact that the lowest value in my test data was lower than the lowest value in the train data, on which the min-max scaler was fit.</p>
<p>How much effect does not having data exactly normalized between 0 and 1 have on the SVM classifier? Also, is it bad practice to concatenate the train and test data into a single matrix, perform min-max scaling to ensure values are between 0 and 1, then separate them again?</p>
|
<p>If you can scale all your data in one shot this would be better, because all your data are then managed by the scaler in a consistent way (all between 0 and 1). But for the SVM algorithm it should make no difference: the scaler applies the same linear transformation everywhere, so the relative differences between samples are preserved even when some values come out negative.</p>
<p><a href="http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC" rel="nofollow">In the documentation</a> we can see that there are negative values so I don't think it has an impact on the result</p>
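<p>If you do need the test values to lie strictly within [0, 1], one simple option (a sketch) is to clip after transforming:</p>
<pre><code>import numpy as np
X_test_scaled = np.clip(X_test_scaled, 0, 1)
</code></pre>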
|
python|machine-learning|scikit-learn|svm
| 0 |
1,906,478 | 67,055,370 |
Want to generalize the python script for three excel format(.xlsx) / (.xlsm) / (.xls) without doing manually
|
<p>I have written code for converting each Excel sheet into a separate csv file. But the problem is that, the way I have written it, the code works for only one format. See this if-else statement.</p>
<pre><code>if (".xlsx" in str(path_xlsx).lower()) and path_xlsx.is_file():
xlsx_files = [Path(path_xlsx)]
else:
xlsx_files = list(Path(path_xlsx).glob("*.xlsx"))
</code></pre>
<p>The current if-else statement above works only for .xlsx files. But if I want to handle the .xlsm format, I need to change the if-else statement.</p>
<pre><code>if (".xlsm" in str(path_xlsm).lower()) and path_xlsm.is_file():
xlsm_files = [Path(path_xlsm)]
else:
xlsm_files = list(Path(path_xlsm).glob("*.xlsm"))
</code></pre>
<p><strong>Is there any way we can tweak/automate this code so that it works for all three Excel formats (.xls/.xlsm/.xlsx) without manually changing the code for each format?</strong></p>
<pre><code>from pathlib import Path
import time
import parser
import argparse
import pandas as pd
import os
import warnings
warnings.filterwarnings("ignore")
parser = argparse.ArgumentParser(description="Process some integers.")
parser.add_argument("path", help="define the directory to folder/file")
parser.add_argument("--verbose", help="display processing information")
start = time.time()
def main(path_xlsx, verbose):
if (".xlsx" in str(path_xlsx).lower()) and path_xlsx.is_file():
xlsx_files = [Path(path_xlsx)]
else:
xlsx_files = list(Path(path_xlsx).glob("*.xlsx"))
df = pd.DataFrame()
for fn in xlsx_files:
all_dfs = pd.read_excel(fn, sheet_name=None)
for sheet_name, df in all_dfs.items():
df = df.assign(DataSource=Path(fn.name))
x=os.path.splitext(fn.name)[0]
path=r'Output'
df.to_csv(os.path.join(path,f'{sheet_name}+{x}.csv'),index=False)
if __name__ == "__main__":
start = time.time()
args = parser.parse_args()
path = Path(args.path)
verbose = args.verbose
main(path, verbose) #Calling Main Function
print("Processed time:", time.time() - start) #Total Time
</code></pre>
<p><strong>Remember: I am running this code through a batch script, so every Excel file in the folder should be converted into its own .csv file. The folder contains 5 to 6 files across the three Excel extensions (.xlsx/.xlsm/.xls).</strong></p>
|
<p>I am not entirely clear what your issue is. The following will iterate all three types of Excel files. I assume that <code>pandas</code> can handle the conversion of each type with no issue. If you have both a <code>file.xlsx</code> and a <code>file.xlsm</code> with the same sheet names, for example, I imagine they would both convert to the same <code>csv</code> file name(s) with one overlaying the other. You should be able to figure out a way of handling that if such a possibility exists.</p>
<pre class="lang-py prettyprint-override"><code>from itertools import chain
import re
if re.search(r"\.xls[xm]?$", str(path_xlsx).lower()) and path_xlsx.is_file():
xlsx_files = [Path(path_xlsx)]
else:
xlsx_files = list(chain(Path(path_xlsx).glob("*.xls"), Path(path_xlsx).glob("*.xlsx"), Path(path_xlsx).glob("*.xlsm")))
</code></pre>
<p><strong>Note</strong></p>
<p>But there is really no need to first convert <code>xlsx_files</code> to be a <code>list</code> since you will be iterating the results of <code>chain</code> (or <code>glob</code> in your original code) with <code>for fn in xlsx_files:</code>. It is both wasteful of time and space. So:</p>
<pre class="lang-py prettyprint-override"><code>from itertools import chain
import re
if re.search(r"\.xls[xm]?$", str(path_xlsx).lower()) and path_xlsx.is_file():
xlsx_files = [Path(path_xlsx)]
else:
xlsx_files = chain(Path(path_xlsx).glob("*.xls"), Path(path_xlsx).glob("*.xlsx"), Path(path_xlsx).glob("*.xlsm"))
</code></pre>
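<p>As a side note, a shorter variant (just a sketch) is a single glob with an explicit suffix check, since <code>*.xls*</code> on its own would also match unrelated extensions such as <code>.xlsb</code>:</p>
<pre class="lang-py prettyprint-override"><code>xlsx_files = [p for p in Path(path_xlsx).glob("*.xls*")
              if p.suffix.lower() in (".xls", ".xlsx", ".xlsm")]
</code></pre>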
|
python|excel|if-statement
| 1 |
1,906,479 | 51,096,081 |
Can we patch requests module by eventlet.patcher.import_patched?
|
<p>Trying to use the requests module with eventlet on Python 2, I hit the error below.</p>
<pre><code>>>> import eventlet
>>> eventlet.patcher.import_patched('requests')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/site-packages/eventlet/patcher.py", line 120, in import_patched
*additional_modules + tuple(kw_additional_modules.items()))
File "/usr/lib/python2.7/site-packages/eventlet/patcher.py", line 94, in inject
module = __import__(module_name, {}, {}, module_name.split('.')[:-1])
File "/usr/lib/python2.7/site-packages/requests/__init__.py", line 112, in <module>
from . import utils
ImportError: cannot import name utils
</code></pre>
<p>So why can this module not be patched?</p>
<p>How can we know whether a given module can be patched?</p>
|
<p>It's a known issue, we don't have a solution yet. Sorry.</p>
<p>Workaround: <code>eventlet.patcher.import_patched('requests.__init__')</code></p>
<p>Subscribe for news on this problem here: <a href="https://github.com/eventlet/eventlet/issues/7" rel="nofollow noreferrer">https://github.com/eventlet/eventlet/issues/7</a></p>
|
python-requests|eventlet
| 0 |
1,906,480 | 45,172,553 |
How do I make python create a new class instance for each session?
|
<p>Here is the situation:</p>
<p>I have an app which includes a user login feature. </p>
<p>The problem I have is this: </p>
<p>When a user logs in or fails a login attempt, a user object is created. For a single user, it conforms to the expected behavior:</p>
<ol>
<li>Go to a page while not logged in EXPECT redirect to login page. </li>
<li>Give incorrect login information not a 5th attempt or beyond EXPECT redirect to login page.</li>
<li>Incorrect login information given a 5th time or beyond, EXPECT redirect to fatal login error message from any route attempted, no further login attempts to be accepted. </li>
<li>Correct login information given EXPECT content is made visible for all routes</li>
</ol>
<p><strong>The problem I am trying to fix is that when one user is logged in, this is interpreted in the context of all user sessions, not the session that was created by that user.</strong> Once one user is logged in, all users attempting to access the app are given access until that user logs out, without the subsequent users having to enter the password. </p>
<p>So I have the class like this which serves as a user object:</p>
<pre><code>class User:
    def __init__(self,password,user_name):
self.password = some_password_hashing_method(password)
self.username = user_name
self.loggedin = False
self.passwordguesses = 0
</code></pre>
<p>This is the functionality that evaluates a login request. It will start by checking if there is a user object (this should be on a session-by-session basis), and if non-existent, it instantiates the object per the login request and checks the values against the list of hashed passwords via the hashing algorithm used.</p>
<p>It works with one user, but the problem is that when one user logs in and then a second user enters the app from another computer with a different IP address, it does not create a new User object. It will see that the existing user.loggedin is set to True, and will treat the second user as if they were logged in when they are not; a different user is.</p>
<p><strong>How do I make Python treat this variable as an instance variable that will be generated once each time a new user session is created?</strong></p>
<pre><code>@app.route('/verifylogin',methods = ['POST'])
def verifylogin():
checksPassword = g.readcsv('filewithpasswords.csv')
usersList = g.readcsv('filelistingusers.csv')
global user
try:
        user.passwordguesses
except NameError:
        user = User(request.form['password'],request.form['userid'])
i = 0
    if user.passwordguesses < 5:
while i < len(checksPassword):
if somehashcheckingmethod(checksPassword[i][0], request.form['password']) and usersList[i][0] == request.form['userid']:
user.loggedin = True
return redirect('/index')
i += 1
user.passwordguesses += 1
return redirect('/login')
else:
return render_template('fatal.html')
</code></pre>
<p>Lastly, here is the redirect header which is included at the beginning of all pages. The expected behavior is to 1. check if there is an active user session for that user, and if not, then they are redirected to the login route. 2. It will check if the current user is logged in, and if not, they are redirected to the login page as well. 3. if neither of these are true, it moves on to the subsequent lines of code and provides access to the content. </p>
<pre><code>try:
    user.loggedin
except NameError:
    return redirect('login')
if user.loggedin != True:
return redirect('login')
</code></pre>
<p>Again, there is apparently a problem with the context this code is being interpreted in, because it is applying the logged in status of one user to all subsequent user sessions.</p>
<p>Please note that commenting out the following line does not correct this issue:</p>
<pre><code>global user
</code></pre>
<p>Once this is commented out, this alters the scope of the user object to be only within the scope of the function verifylogin() wherein it was declared.</p>
<p>So if I write a new route and the function to render its content, now the object 'user' is outside the scope of the function I am writing to render the content for the new route:</p>
<pre><code>@app.route('/index.html')
def index():
try:
        user.loggedin
    except NameError:
        return redirect('login')
    if user.loggedin != True: # this code and all code below it is unreachable, because you will always get a NameError exception from the above lines of code. This is not a duplicate.
return redirect('login')
return render_template('index.html')
</code></pre>
|
<p>This is usually accomplished in the web by using <a href="http://flask.pocoo.org/snippets/category/sessions/" rel="nofollow noreferrer">sessions</a> delivered through <a href="https://en.wikipedia.org/wiki/Secure_cookies" rel="nofollow noreferrer">secure cookies</a>.</p>
<p><a href="https://blog.miguelgrinberg.com/post/how-secure-is-the-flask-user-session" rel="nofollow noreferrer">https://blog.miguelgrinberg.com/post/how-secure-is-the-flask-user-session</a></p>
<p>A session is a secure (hopefully) cookie that your app will provide each client. Your app should be able to generate a persistent individual session that will last for as long as the session is alive. For many of the popular web frameworks the only thing that the cookie contains is a session ID. This way you can store persistent state for the user and use their session ID as a reference. </p>
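<p>A minimal sketch of how this could look in your app; <code>check_password</code> is a hypothetical helper standing in for your CSV lookup and hash check:</p>
<pre><code>from flask import Flask, session, redirect, render_template, request

app = Flask(__name__)
app.secret_key = 'change-me'  # signs the session cookie; keep it secret

@app.route('/verifylogin', methods=['POST'])
def verifylogin():
    if check_password(request.form['userid'], request.form['password']):  # hypothetical helper
        session['userid'] = request.form['userid']  # stored per client in the cookie
        return redirect('/index')
    return redirect('/login')

@app.route('/index')
def index():
    if 'userid' not in session:  # each browser carries its own session
        return redirect('/login')
    return render_template('index.html')
</code></pre>
<p>Because the session lives in each client's cookie, logging in from one computer no longer affects any other visitor.</p>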
|
python|flask|scope
| 0 |
1,906,481 | 61,191,224 |
How can I transform a code that split '|' into a function
|
<p>I have a column called brands. The rows look like this</p>
<pre><code> |Brands|
|Gucci|Prada|
|Versace|Levis|Adidas|
|Champion|Diesel|Nike|
</code></pre>
<p>I have a code to split the '|' from the column brands.</p>
<pre><code>split_brands=['Brands']
for brand in split_brands:
brands_data[brand]=brands_data[brand].apply(lambda x: x.split('|'))
</code></pre>
<p>That code works well and splits on the '|'.</p>
<p>My question is :</p>
<p>How can I transform that code into a function? I would like to learn how to do that.</p>
|
<p>Do this</p>
<pre><code>def func(split_brands):
for brand in split_brands:
brands_data[brand]=brands_data[brand].apply(lambda x: x.split('|'))
</code></pre>
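<p>A slightly more reusable sketch passes the dataframe and separator in explicitly instead of relying on globals:</p>
<pre><code>def split_columns(df, columns, sep='|'):
    """Return a copy of df with each listed column split on sep."""
    df = df.copy()
    for col in columns:
        df[col] = df[col].str.split(sep)
    return df

brands_data = split_columns(brands_data, ['Brands'])
</code></pre>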
|
python|python-3.x|pandas|python-2.7
| 0 |
1,906,482 | 61,361,792 |
np.prod with units from pint.UnitRegistry() - python
|
<p>I'm trying to find the volume of a voxel with given side lengths using <code>pint.UnitRegistry()</code>.<br>
Example of the error:</p>
<pre class="lang-py prettyprint-override"><code>import pint
import numpy as np
ureg = pint.UnitRegistry()
voxel_size = (81.3, 30.2, 45.3) * ureg.micrometer
volume = np.prod(voxel_size)
</code></pre>
<p>Results in:</p>
<pre class="lang-py prettyprint-override"><code>TypeError: no implementation found for 'numpy.prod' on types
that implement __array_function__: [<class 'pint.quantity.build_quantity_class.<locals>.Quantity'>]
</code></pre>
<p>How can I solve this issue?</p>
|
<p>Basically <code>pint</code> doesn't support <code>numpy.prod</code>. See <a href="https://pint.readthedocs.io/en/0.11/numpy.html" rel="nofollow noreferrer">docs</a> for supported <code>numpy</code> functions.</p>
<p>The problem is that multiplying by <code>ureg.micrometer</code> returns an object of type <code>pint.quantity.build_quantity_class.<locals>.Quantity</code>, which is not a plain array of numbers, so <code>numpy.prod</code> does not recognize it.</p>
<p>To use <code>pint</code> for what you are trying to do, try the following...</p>
<pre><code>import pint
import numpy as np
ureg = pint.UnitRegistry()
vox_volume = [81.3] * ureg.micrometer * [30.2] * ureg.micrometer * [45.3] * ureg.micrometer
print(vox_volume)
</code></pre>
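<p>Alternatively, as a sketch of another workaround using <code>voxel_size</code> from the question, you can take the product of the bare magnitudes and reattach the cubed unit yourself:</p>
<pre><code>volume = np.prod(voxel_size.magnitude) * voxel_size.units ** 3
print(volume)  # roughly 111223.3 micrometer ** 3
</code></pre>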
|
python|numpy
| 1 |
1,906,483 | 57,613,189 |
How to display multiple background colors in a chart?
|
<p>I am trying to plot a performance diagram and I would like to have certain sections of it be different colors based on the CSI value.</p>
<p>I am able to plot a performance diagram with an all white background so far.</p>
<pre><code> line_label=[]
line_str=[]
line_label2=[]
line_str2=[]
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
x = np.arange(0,1.01,0.01)
y = np.arange(0,1.01,0.01)
xi,yi = np.meshgrid(x, y)
#Calculate bias and CSI; set contour levels
bias = yi/xi
blevs = [0.1, 0.25, 0.5, 0.75, 1, 1.25, 2.5, 5, 10]
csi = 1/( (1/xi) + (1/yi) - 1 )
csilevs = np.arange(0.1,1,0.1)
#Axis labels, tickmarks
ax.set_xlabel('Success Ratio (1 - False Alarm Ratio)',fontsize=16,fontweight='bold',labelpad=30)
ax.set_ylabel('Probability of Detection',fontsize=16,fontweight='bold')
ax.set_xticks(np.arange(0,1.1,0.1))
ax.set_yticks(np.arange(0,1.1,0.1))
plt.setp(ax.get_xticklabels(),fontsize=13)
plt.setp(ax.get_yticklabels(),fontsize=13)
#Second y-axis for bias values < 1
ax2 = ax.twinx()
ax2.set_yticks(blevs[0:5])
plt.setp(ax2.get_yticklabels(),fontsize=13)
#Axis labels for bias values > 1
ax.text(0.1,1.015,'10',fontsize=13,va='center',ha='center')
ax.text(0.2,1.015,'5',fontsize=13,va='center',ha='center')
ax.text(0.4,1.015,'2.5',fontsize=13,va='center',ha='center')
ax.text(0.8,1.015,'1.25',fontsize=13,va='center',ha='center')
#Plot bias and CSI lines at specified contour intervals
cbias = ax.contour(x,y,bias,blevs,colors='black',linewidths=1,linestyles='--')
ccsi = ax.contour(x,y,csi,csilevs,colors='gray',linewidths=1,linestyles='-')
plt.clabel(ccsi,csilevs,inline=True,fmt='%.1f',fontsize=14,fontweight='bold')
</code></pre>
<p>This is the current result:
<a href="https://imgur.com/a/Uojy2Ja" rel="nofollow noreferrer">https://imgur.com/a/Uojy2Ja</a>.
I want different colors between the gray curved lines that go from 0, 0.1, 0.2, 0.3, etc.</p>
|
<p>Add </p>
<pre><code>ax.contourf(x,y,csi, np.r_[0, csilevs, 1],linestyles='-')
</code></pre>
<p>before <code>cbias = ...</code></p>
<p>The <code>np.r_</code> adds levels at 0 and 1 so those regions are filled too.</p>
<p><a href="https://i.stack.imgur.com/nWYgE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nWYgE.png" alt="figure"></a></p>
|
python|graph|charts|background
| 0 |
1,906,484 | 54,038,025 |
XML convertion to DF in Python
|
<p>I am opening a large XML file and getting some results from it using this code:</p>
<pre><code>import os
import xml.etree.ElementTree as et
base_path=os.path.dirname(os.path.realpath(__file__))
xml_file=os.path.join(base_path,'my xml path file')
tree=et.parse(xml_file)
root=tree.getroot()
for child in root:
for element in child:
print (element.tag,':',element.text)
for one in element:
print(one.tag,':',one.text)
</code></pre>
<p>example of result:</p>
<pre><code>code_one : a
value_one : blue
default: 3
code_one : a
value_one : black
default: 12
code_one : b
value_one : green
default: 4
Rte:
Rte:
</code></pre>
<p>Up to this point everything is clear and fine, but I want to save this output that I am printing to a DataFrame or, if that is a problem, to a file that I will then open and save as a DataFrame.</p>
<p>I need to convert the output to look like this:</p>
<pre><code>code_one, value_one, default
a, blue, 3
a, black, 12
b, green, 4
</code></pre>
<p>Thanks in advance</p>
|
<p>This solved my problem:
I wrote the output to a file and then opened it with pandas as a DataFrame.
This is how to save the output (note the added newlines so each record is on its own line):</p>
<pre><code>f=open('file.txt','w')
for child in root:
    for element in child:
        f.write(str(element.tag)+':'+str(element.text)+'\n')
        for one in element:
            f.write(str(one.tag)+':'+str(one.text)+'\n')
f.close()
</code></pre>
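<p>With one record per line, the read-back step (a sketch, assuming values do not themselves contain ':') can then be as simple as:</p>
<pre><code>import pandas as pd
df = pd.read_csv('file.txt', sep=':', names=['tag', 'value'])
</code></pre>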
|
python|xml|dataframe
| 0 |
1,906,485 | 58,470,961 |
Python Error [TypeError: 'str' object is not callable] comes up when using input() funtion
|
<pre><code>notes =[]
def newNote(notes):
note = input("Whats up")
notes.append(note)
return notes
input = input("in or out? ")
if (input == "in"):
newNote(notes)
</code></pre>
<p><code>note = input("Whats up")</code> is the line that has the problem, and I see nothing wrong with it. I have tried the line just by itself (not in a function) and it works, but for some reason it doesn't work inside the function.</p>
<p>Can anyone explain this to me?</p>
|
<p>The problem is with the line <code>input = input("in or out? ")</code>.</p>
<p>You rebind the name <code>input</code> to the result of <code>input("in or out? ")</code>, so <code>input</code> is now a string instead of the built-in function.</p>
<p>The solution is simply to store the result of <code>input("in or out? ")</code> in a different variable:</p>
<pre class="lang-py prettyprint-override"><code>notes =[]
def newNote(notes):
note = input("Whats up")
notes.append(note)
return notes
choice = input("in or out? ")
if (choice == "in"):
newNote(notes)
</code></pre>
|
python|input|callable
| 1 |
1,906,486 | 58,288,101 |
Pandas, drop duplicated rows based on other columns values
|
<p>Example data: </p>
<pre><code>df1 = pd.DataFrame({
'file': ['file1','file1','file1','file2','file2','file2','file3','file3','file3'],
'prop1': ['True','False','True','False','False','False','True','False','False'],
'prop2': ['False','False','False','False','True','False','False','True','False'],
'prop3': ['False','True','False','True','False','True','False','False','True']
})
file prop1 prop2 prop3
0 file1 True False False
1 file1 False False True
2 file1 True False False
3 file2 False False True
4 file2 False True False
5 file2 False False True
6 file3 True False False
7 file3 False True False
8 file3 False False True
</code></pre>
<p>File1 has prop1 true 2 times, file2 has prop3 true 2 times, and file3 has each prop true 1 time. So I need to make another dataframe like this:</p>
<pre><code> file prop
0 file1 prop1
1 file2 prop3
2 file3 diff (file3 props are different)
</code></pre>
|
<p>We can use <code>idxmax</code> combined with <code>sum</code> to detect the <code>max</code> value:</p>
<pre><code>s=df1.set_index('file').sum(level=0)
s.idxmax(1).mask(s.eq(s.max(1),axis=0).sum(1)==3,'diff')
file
file1 prop1
file2 prop3
file3 diff
dtype: object
</code></pre>
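<p>Here <code>sum(level=0)</code> counts, per file, how many times each prop is true, <code>idxmax(1)</code> picks the most frequent prop, and the mask marks files where all three counts tie as 'diff'. Note this assumes the prop columns are boolean; if they hold the strings 'True'/'False' as in the example constructor, convert them first, for instance:</p>
<pre><code>cols = ['prop1', 'prop2', 'prop3']
df1[cols] = df1[cols] == 'True'
</code></pre>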
|
python|pandas
| 2 |
1,906,487 | 45,488,363 |
How to change datetime format in Django-filter form?
|
<p>I'm creating a <code>filter</code> which contains <code>datetime</code> range choice. I would like user to be able to input date in this format: <code>24.08.2017 17:09</code></p>
<p>From docs, it is possible to specify a widget (from <code>django.forms</code>) and widget has attribute <code>input_formats</code>. </p>
<p>So this would be a solution:</p>
<pre><code>datetime_range = django_filters.DateTimeFromToRangeFilter(method='datetime',
label=u'Čas od do',widget=forms.widgets.DateTimeInput(format="%D.%m.%Y %H:%M:%S")
</code></pre>
<p>The problem is that it uses <code>DateTimeFromToRangeFilter</code> which uses two <code>DateTimeInput</code> fields. So if I specify the widget, it renders one <code>DateTimeInput</code> instead of two inputs.</p>
<p>So the question is, how to specify the format (without changing widget)?</p>
<p>I'm trying to specify <code>input_formats</code> inside <code>__init__(...)</code> but it raises:</p>
<blockquote>
<p>Exception Value: can't set attribute</p>
</blockquote>
<pre><code>def __init__(self,*args,**kwargs):
self.base_filters['datetime_range'].field = (forms.DateTimeField(input_formats=["%D.%m.%Y %H:%M"]),forms.DateTimeField(input_formats=["%D.%m.%Y %H:%M"]))
</code></pre>
|
<p>An easy solution would be to simply modify the <a href="https://docs.djangoproject.com/en/1.11/ref/settings/#datetime-input-formats" rel="nofollow noreferrer"><code>DATETIME_INPUT_FORMATS</code></a> setting.</p>
<p>Alternatively, you could create a custom field, based on the existing <a href="https://github.com/carltongibson/django-filter/blob/1.0.4/django_filters/fields.py#L58-L64" rel="nofollow noreferrer">DateTimeRangeField</a>.</p>
<pre class="lang-py prettyprint-override"><code>class CustomDateTimeRangeField(django_filters.RangeField):
def __init__(self, *args, **kwargs):
fields = (
forms.DateTimeField(input_formats=["%D.%m.%Y %H:%M"]),
forms.DateTimeField(input_formats=["%D.%m.%Y %H:%M"]),
)
super(CustomDateTimeRangeField, self).__init__(fields, *args, **kwargs)
</code></pre>
<p>From here, you can either create a separate class, or override the <code>field_class</code> for all <code>DateTimeFromToRangeFilter</code>s.</p>
<pre class="lang-py prettyprint-override"><code>class CustomDateTimeFromToRangeFilter(django_filters.RangeFilter):
field_class = CustomDateTimeRangeField
# or
django_filters.DateTimeFromToRangeFilter.field_class = CustomDateTimeRangeField
</code></pre>
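<p>Then, in your FilterSet, use it like the original filter (a sketch based on the declaration from the question):</p>
<pre class="lang-py prettyprint-override"><code>datetime_range = CustomDateTimeFromToRangeFilter(method='datetime', label=u'Čas od do')
</code></pre>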
|
python|django|django-forms|django-filter|django-widget
| 3 |
1,906,488 | 44,532,788 |
Expand multivalued column to new columns in pandas
|
<p>I run</p>
<p>Python Version: 2.7.12 |Anaconda 4.1.1 (64-bit)| (default, Jun 29 2016, 11:07:13) [MSC v.1500 64 bit (AMD64)] Pandas Version: 0.18.1 IPython Version: 4.2.0</p>
<p>on Windows 7 64.</p>
<p>What would be a quick way of getting a dataframe like </p>
<pre><code>pd.DataFrame([[1,'a',1,'b',2,'c',3,'d',4],
[2,'e',5,'f',6,'g',7],
[3,'h',8,'i',9],
[4,'j',10]],columns=['ID','var1','var2','newVar1_1','newVar1_2','newVar2_1','newVar2_2','newVar3_1','newVar3_2'])
</code></pre>
<p>from </p>
<pre><code>pd.DataFrame([[1,'a',1],
[1,'b',2],
[1,'c',3],
[1,'d',4],
[2,'e',5],
[2,'f',6],
[2,'g',7],
[3,'h',8],
[3,'i',9],
[4,'j',10]],columns=['ID','var1','var2'])
</code></pre>
<p>What I would do is group by ID and then iterate over the groupby object to make a new row from each item and append it to an initially empty dataframe, but this is slow, since in the real case the starting dataframe has several thousand rows.</p>
<p>Any suggestions?</p>
|
<pre><code>df.set_index(['ID', df.groupby('ID').cumcount()]).unstack().sort_index(1, 1)
var1 var2 var1 var2 var1 var2 var1 var2
0 0 1 1 2 2 3 3
ID
1 a 1.0 b 2.0 c 3.0 d 4.0
2 e 5.0 f 6.0 g 7.0 None NaN
3 h 8.0 i 9.0 None NaN None NaN
4 j 10.0 None NaN None NaN None NaN
</code></pre>
<hr>
<p>Or more complete</p>
<pre><code>d1 = df.set_index(['ID', df.groupby('ID').cumcount()]).unstack().sort_index(1, 1)
d1.columns = d1.columns.to_series().map('new{0[0]}_{0[1]}'.format)
d1.reset_index()
ID newvar1_0 newvar2_0 newvar1_1 newvar2_1 newvar1_2 newvar2_2 newvar1_3 newvar2_3
0 1 a 1.0 b 2.0 c 3.0 d 4.0
1 2 e 5.0 f 6.0 g 7.0 None NaN
2 3 h 8.0 i 9.0 None NaN None NaN
3 4 j 10.0 None NaN None NaN None NaN
</code></pre>
|
pandas|pandas-groupby
| 2 |
1,906,489 | 49,753,131 |
implementation of scipy.stats.ncx2
|
<p>How can I find details about the implementation of the non-central chi-squared distribution in scipy.stats.ncx2?</p>
<p>The scipy documentation has no information about implementation.</p>
<p>Tracing the source, it seems as if the function is implemented via scipy.special.chndtr. But there is no link to this source from the scipy pages:
<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.chndtr.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.chndtr.html</a></p>
<p>I feel I'm missing something obvious, but haven't been able to find any info on this.</p>
|
<p>It's a <a href="https://docs.scipy.org/doc/numpy-1.14.0/reference/ufuncs.html" rel="nofollow noreferrer">ufunc</a> as mentioned in <a href="https://github.com/scipy/scipy/blob/8dca57d9e08118da7b05232349036bf5ec5774df/scipy/special/cdf_wrappers.c#L163" rel="nofollow noreferrer">/scipy/special/cdf_wrappers.c</a> and <a href="https://github.com/scipy/scipy/blob/895a7741b12c2c3f816bfd27e5249468bea64a26/scipy/special/add_newdocs.py#L1086" rel="nofollow noreferrer">/scipy/special/add_newdocs.py</a>.</p>
<p>The associated Fortran source code is located here:<br>
<a href="https://github.com/scipy/scipy/tree/master/scipy/special/cdflib" rel="nofollow noreferrer">https://github.com/scipy/scipy/tree/master/scipy/special/cdflib</a>,<br>
i.e. <a href="https://github.com/scipy/scipy/blob/master/scipy/special/cdflib/cdfchn.f" rel="nofollow noreferrer">https://github.com/scipy/scipy/blob/master/scipy/special/cdflib/cdfchn.f</a> (as pointed out by @Ras).</p>
|
python|scipy
| 0 |
1,906,490 | 53,654,489 |
Integrating Zope with socket-io
|
<p>Is there a way of integrating Zope 2 (2.13.19), using Python 2.6.8, with socket-io?</p>
<p>I've found <a href="https://python-socketio.readthedocs.io/en/latest/" rel="nofollow noreferrer">https://python-socketio.readthedocs.io/en/latest/</a> but it doesn't seem to fit the requirement.</p>
|
<p>Zope contains a traditional HTTP server, but you could write a ZEO client that would use the socketio library and integrate with Zope's transactions.</p>
|
socket.io|plone|python-2.6|zope|gevent-socketio
| 1 |
1,906,491 | 52,254,611 |
DataTables - server-side processing in Python/Flask - Not working connection to server-side script
|
<p>I'm struggling with this for several days. I'm trying to implement Sergio Llanna's server-side solution to my existing project.</p>
<p><a href="https://github.com/SergioLlana/datatables-flask-serverside" rel="nofollow noreferrer">Sergio's DataTables server-side processing</a></p>
<p>and this is my implementation:</p>
<p><a href="https://github.com/culterone/clt" rel="nofollow noreferrer">My DataTables server-side processing</a></p>
<p>The difference is that my solution grabs the data from a MySQL database instead of a local file. I think I'm pretty close to making it work; data from the database are displayed in the table correctly. <strong>When I turn off the serverSide option, sorting and searching work fine, but when I turn it on, every time I click on a column, search, or do anything else I just get "Processing..."</strong> Flask debug mode is on. There is no error in the browser's development tools or in the Flask development server.</p>
<p>My JSON XHR (response):</p>
<pre><code>{
"data": [
{
"cola": "Hello",
"colb": "How is it going",
"colc": 1,
"cold": 4
},
{
"cola": "Sample text",
"colb": "Another sample",
"colc": 2,
"cold": 9
},
{
"cola": "Kurnik hosi",
"colb": "Guten tag",
"colc": 3,
"cold": 3
},
{
"cola": "Achiles",
"colb": "Patus",
"colc": 4,
"cold": 1
},
{
"cola": "Kim",
"colb": "Kiduk",
"colc": 5,
"cold": 8
},
{
"cola": "Pastina",
"colb": "Zavada",
"colc": 6,
"cold": 9
},
{
"cola": "Dolna",
"colb": "Marikova",
"colc": 7,
"cold": 9
}
],
"draw": 1,
"recordsFiltered": 10,
"recordsTotal": 7
}
</code></pre>
<p>I think the problem is with the request sent from the browser, but this is my first Python/Flask project, so I'm not sure.
Thank you for any advice.</p>
<p>Used technologies:
OS Debian 9,
DB MariaDB 10,
Flask-SQLAlchemy</p>
|
<p>It was a bit unclear to me what you exactly changed compared to the original files, so I went ahead and tried adding SQLAlchemy support to Sergio's original project. </p>
<p><code>__init__.py</code> changes:</p>
<pre><code>from flask import Flask
from flask_sqlalchemy import SQLAlchemy
flask_app = Flask(__name__)
db = SQLAlchemy(flask_app)
from app.mod_tables.models import TableBuilder, SomeTable, add_some_random_db_entries
db.create_all()
add_some_random_db_entries()
table_builder = TableBuilder()
</code></pre>
<p><code>models.py</code> changes:</p>
<pre><code>from app.mod_tables.serverside.serverside_table import ServerSideTable
from app.mod_tables.serverside import table_schemas
from app import db
DATA_SAMPLE = [
{'A': 'Hello!', 'B': 'How is it going?', 'C': 3, 'D': 4},
{'A': 'These are sample texts', 'B': 0, 'C': 5, 'D': 6},
{'A': 'Mmmm', 'B': 'I do not know what to say', 'C': 7, 'D': 16},
{'A': 'Is it enough?', 'B': 'Okay', 'C': 8, 'D': 9},
{'A': 'Just one more', 'B': '...', 'C': 10, 'D': 11},
{'A': 'Thanks!', 'B': 'Goodbye.', 'C': 12, 'D': 13}
]
class SomeTable(db.Model):
__tablename__ = 'some_table'
cola = db.Column('A', db.String(2))
colb = db.Column('B', db.String(2))
colc = db.Column('C', db.Integer, primary_key=True)
cold = db.Column('D', db.Integer)
def __init__(self, cola, colb, colc, cold):
self.cola = cola
self.colb = colb
self.colc = colc
self.cold = cold
@property
def serialize(self):
return {
'A': self.cola,
'B': self.colb,
'C': self.colc,
'D': self.cold
}
def add_some_random_db_entries():
letters = 'arstarstarstarstaars'
for i in range(10):
item = SomeTable(letters[i], letters[i + 1: i + 3], i, i + 1)
db.session.add(item)
db.session.commit()
def make_data_sample_from_db():
newlist = []
for row in SomeTable.query.all():
newlist.append(row.serialize)
return newlist
class TableBuilder(object):
def collect_data_clientside(self):
return {'data': DATA_SAMPLE}
def collect_data_serverside(self, request):
columns = table_schemas.SERVERSIDE_TABLE_COLUMNS
data = make_data_sample_from_db()
return ServerSideTable(request, data, columns).output_result()
</code></pre>
<p>You should be able to just add the database URL, and it should work the same from there.</p>
<p>However, this method of getting filtered results is quite inefficient, especially with large databases. It is only good for small databases, because you have to query the entire table and then filter it using dictionaries.</p>
<p>I would probably check out <code>flask-admin</code>. It has a lot of the filtering built into the package itself, using SQLAlchemy's filtering support, which means it is a lot quicker than this approach.</p>
<p>Here's the modified repo on <a href="https://github.com/joost823/datatables-flask-serverside" rel="nofollow noreferrer">github</a>.</p>
|
python|flask|datatables
| 1 |
1,906,492 | 64,540,800 |
Technical solution for large geospatial data
|
<p>I made an app which places points on the map within certain polygons. It is a Windows Forms app with a simple UI. These points are also pre-filtered. Specifically, I use Python with the geopandas and folium libraries. As a result, the program saves an .html map with all layers and an .xlsx summary. However, the datasets can be really huge, and the html map can contain, for example, 100,000 points. Obviously, neither Google Earth nor a browser can deal with such big .html maps. So, are there any solutions for working with this amount of data?</p>
<p>Maybe there are folium enthusiasts here? Can marker clusters improve the situation?</p>
|
<p>100,000 points is not really a large amount of data these days. Even pure web-based tools can draw that. In the JavaScript / web world I know of <a href="https://deck.gl" rel="nofollow noreferrer">deck.gl</a>, which scales well beyond this size before you need marker clusters (though clusters might be a better way to represent such data even if you can technically put 100k markers on the map).</p>
<p>I don't have personal experience programming with deck.gl, but the tool I frequently use (Google BigQuery GeoViz - <a href="https://bigquerygeoviz.appspot.com/" rel="nofollow noreferrer">https://bigquerygeoviz.appspot.com/</a>) can draw 100k points, and more. It is a client-side page, it gets data from BigQuery, but it should be easy to adopt it to draw static resource (it is an open source tool), or find another solution to use in deck.gl examples.</p>
|
python|html|leaflet|geospatial|folium
| 0 |
1,906,493 | 64,599,167 |
Return a list containing occurrence of each word of a list in another list
|
<p>I would like to write a function that returns a list of integers, where each integer is the number of occurrences of a word from one list (called my_words) in another list (called list_words). Let's say my function is called "mywords_occurrence"; the expected output is below:</p>
<pre><code>my_words="apple pear lemon orange"
list_words="apple apple orange lemon banana orange kiwi tomato kiwi apple mango"
mywords_occurrence(my_words,list_words)
[3,0,1,2]
</code></pre>
<p>The function should return [3,0,1,2] because "apple" occurs 3 times in list_words, "pear" occurs 0 times in list_words, etc.</p>
<p>My codes for the function is below:</p>
<pre><code>def mywords_occurrence(my_words,list_words):
my_words=my_words.split()
list_words=list_words.split
count=0
l=[]
i=0
for n in range(len(list_words)):
if my_words[i]==list_words[n]:
count=count+1
i=i+1
l.append[count]
else:
i=i+1
l.append[count]
return l
</code></pre>
<p>When I run my code, this error message pops up:</p>
<pre><code>for n in range(len(list_words)):
TypeError: object of type 'builtin_function_or_method' has no len()
</code></pre>
<p>I tried to change <code>for n in range(len(list_words))</code> to <code>for n in list_words</code>, but then another error message pops up:</p>
<pre><code>for n in list_words:
TypeError: 'builtin_function_or_method' object is not iterable
</code></pre>
|
<pre><code>list_words = list_words.split
^^^
</code></pre>
<p>Here you missed the () after <code>split</code>. Therefore <code>list_words</code> is the <code>split</code> method itself rather than the result of calling it.</p>
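<p>Note that the original function also has a second bug: <code>l.append[count]</code> uses brackets instead of parentheses. For completeness, a compact corrected version (a sketch using <code>list.count</code>) that produces the expected output:</p>
<pre><code>def mywords_occurrence(my_words, list_words):
    words = list_words.split()  # note the parentheses
    return [words.count(w) for w in my_words.split()]

print(mywords_occurrence(my_words, list_words))  # [3, 0, 1, 2]
</code></pre>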
|
python
| 2 |
1,906,494 | 64,579,723 |
Training neural nets simultaneously in keras and have them share losses jointly while training?
|
<p>Let's say I want to train three models simultaneously (model1, model2, and model3) and, while training, have models two and three share losses jointly with the main network (model1), so the main model can learn representations from the two other models in its intermediate layers.</p>
<p>Loss_total = w1 * loss_m1 + w2 * (loss_m1 - loss_m2) + w3 * (loss_m1 - loss_m3)</p>
<p>So far I have the following:</p>
<pre><code>def threemodel(num_nodes, num_class, w1, w2, w3):
#w1; w2; w3 are loss weights
in1 = Input((6373,))
enc1 = Dense(num_nodes)(in1)
enc1 = Dropout(0.3)(enc1)
enc1 = Dense(num_nodes, activation='relu')(enc1)
enc1 = Dropout(0.3)(enc1)
enc1 = Dense(num_nodes, activation='relu')(enc1)
out1 = Dense(units=num_class, activation='softmax')(enc1)
in2 = Input((512,))
enc2 = Dense(num_nodes, activation='relu')(in2)
enc2 = Dense(num_nodes, activation='relu')(enc2)
out2 = Dense(units=num_class, activation='softmax')(enc2)
in3 = Input((768,))
enc3 = Dense(num_nodes, activation='relu')(in3)
enc3 = Dense(num_nodes, activation='relu')(enc3)
out3 = Dense(units=num_class, activation='softmax')(enc3)
adam = Adam(lr=0.0001)
model = Model(inputs=[in1, in2, in3], outputs=[out1, out2, out3])
    model.compile(loss='categorical_crossentropy',  # continue together
                  optimizer='adam',
                  metrics=['accuracy'])  # not sure what changes need to be made here
## I am confused on how to formulate the shared losses equation here to share the losses of out2 and out3 with out1.
</code></pre>
<p>After searching a little, it seems that maybe I can do the following:</p>
<pre><code>loss_1 = tf.keras.losses.categorical_crossentropy(y_true_1, out1)
loss_2 = tf.keras.losses.categorical_crossentropy(y_true_2, out2)
loss_3 = tf.keras.losses.categorical_crossentropy(y_true_3, out3)
model.add_loss((w1)*loss_1 + (w2)*(loss_1 - loss_2) + (w3)*(loss_1 - loss_3))
</code></pre>
<p>Can this work? I feel like what I suggested above is not really doing what I want, which is to have the main model (model1) learn representations from the two other models (model2 and model3) in between layers.
Any suggestions?</p>
|
<p>Since you are not interested in using trainable weights (I call them coefficients to distinguish them from trainable weights), you can concatenate the outputs and pass them as a single output to a custom loss function. The coefficients are then available when training starts.</p>
<p>You should provide a custom loss function as mentioned. A Keras loss function is expected to take only two arguments (<code>y_true</code>, <code>y_pred</code>), but here the loss also needs to know the parameters you are interested in, like <code>coeffs</code> and <code>num_class</code>. So I write a wrapper function that takes those extra arguments and returns the actual loss function, and pass that inner function to <code>compile</code> as the main loss.</p>
<pre><code>from tensorflow.keras.layers import Dense, Dropout, Input, Concatenate
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Model
from tensorflow.python.keras import backend as K
def categorical_crossentropy_base(coeffs, num_class):
def categorical_crossentropy(y_true, y_pred, from_logits=False, label_smoothing=0):
"""Computes the categorical crossentropy loss.
Args:
y_true: tensor of true targets.
y_pred: tensor of predicted targets.
from_logits: Whether `y_pred` is expected to be a logits tensor. By default,
we assume that `y_pred` encodes a probability distribution.
label_smoothing: Float in [0, 1]. If > `0` then smooth the labels.
Returns:
Categorical crossentropy loss value.
https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/python/keras/losses.py#L938-L966
"""
y_pred1 = y_pred[:, :num_class] # the 1st prediction
y_pred2 = y_pred[:, num_class:2*num_class] # the 2nd prediction
y_pred3 = y_pred[:, 2*num_class:] # the 3rd prediction
# you should adapt the ground truth to contain all 3 ground truth of course
y_true1 = y_true[:, :num_class] # the 1st gt
y_true2 = y_true[:, num_class:2*num_class] # the 2nd gt
y_true3 = y_true[:, 2*num_class:] # the 3rd gt
loss1 = K.categorical_crossentropy(y_true1, y_pred1, from_logits=from_logits)
loss2 = K.categorical_crossentropy(y_true2, y_pred2, from_logits=from_logits)
loss3 = K.categorical_crossentropy(y_true3, y_pred3, from_logits=from_logits)
# combine the losses the way you like it
total_loss = coeffs[0]*loss1 + coeffs[1]*(loss1 - loss2) + coeffs[2]*(loss2 - loss3)
return total_loss
return categorical_crossentropy
in1 = Input((6373,))
enc1 = Dense(num_nodes)(in1)
enc1 = Dropout(0.3)(enc1)
enc1 = Dense(num_nodes, activation='relu')(enc1)
enc1 = Dropout(0.3)(enc1)
enc1 = Dense(num_nodes, activation='relu')(enc1)
out1 = Dense(units=num_class, activation='softmax')(enc1)
in2 = Input((512,))
enc2 = Dense(num_nodes, activation='relu')(in2)
enc2 = Dense(num_nodes, activation='relu')(enc2)
out2 = Dense(units=num_class, activation='softmax')(enc2)
in3 = Input((768,))
enc3 = Dense(num_nodes, activation='relu')(in3)
enc3 = Dense(num_nodes, activation='relu')(enc3)
out3 = Dense(units=num_class, activation='softmax')(enc3)
adam = Adam(lr=0.0001)
total_out = Concatenate(axis=1)([out1, out2, out3])
model = Model(inputs=[in1, in2, in3], outputs=[total_out])
coeffs = [1, 1, 1]
model.compile(loss=categorical_crossentropy_base(coeffs=coeffs, num_class=num_class), optimizer=adam, metrics=['accuracy'])
</code></pre>
<p>I am not sure about the accuracy metric in this setup, though. Otherwise I think it will work without further changes. I am using <code>K.categorical_crossentropy</code> here, but you can of course swap in another implementation.</p>
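<p>To actually train this, the three ground-truth arrays have to be concatenated in the same column order as the <code>Concatenate</code> layer above. A sketch: <code>x1</code>, <code>x2</code>, <code>x3</code> and the one-hot targets <code>y1</code>, <code>y2</code>, <code>y3</code> (each of shape <code>(n, num_class)</code>) are hypothetical placeholders for your data:</p>
<pre><code>import numpy as np

# column order must match Concatenate(axis=1)([out1, out2, out3])
y_total = np.concatenate([y1, y2, y3], axis=1)
model.fit([x1, x2, x3], y_total, epochs=10, batch_size=32)
</code></pre>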
|
python|tensorflow|keras|deep-learning|loss-function
| 1 |
1,906,495 | 70,584,349 |
How to "split" the multiple (numeric) values in XML data, in python?
|
<p>I have a list of tags like this:</p>
<pre><code><duet pair="2,1;3,2;4,5;" day="1;7;9" type="YES"/>
<duet pair="2,4;1,6;2,0;" day="2;5;8" type="NO"/>
</code></pre>
<p>These are wanted (type = yes) or unwanted (type = no) pairings; imagine many more combinations and more attributes to filter on, but I will post only these for my question. In this example, take the first tag: I want to check (against a schedule I already have) whether any of the pairs (2,1), (3,2) or (4,5) are scheduled to be together on any of the days 1, 7 or 9. So I want to separate the values of both the pair attribute and the day attribute in order to proceed with my calculations.</p>
<p>I tried the following, getting the value with <code>.get</code> and splitting it on ";":</p>
<pre><code>for i in root.findall("./duet/[@type='YES']"):
meeting = i.get("pair")
day = i.get("day")
meeting_split = meeting.split(';')
day_split = day.split(';')
print("The pairs ",meeting_split)
print("The days ", day_split)
</code></pre>
<p>The results I get are the following:</p>
<pre><code>The pairs ['2,1', '3,2', '4,5', '']
The days ['1', '7', '9']
</code></pre>
<p>I see 2 problems with the above: 1. these are strings, and 2. the pairs list has an empty string '' at the end (???).
What I want is for the pairs to be a list of tuples, preferably, and the days a list of integers.
With <code>i.get</code> I get the entire value of the attribute, and I don't know any other way to "get" the value. And of course I can't <code>int(day.split())</code>, since the split result is a list.</p>
<p>How could I do it? Either actually turn these data into ints/tuples, or pick up the values from the attribute one by one.</p>
|
<p>You get the empty string at the end because your string ends on the delimiter. You can drop it by slicing the string with <code>[:-1]</code> before splitting, then split each sub-string on the comma and map <code>int</code> over the parts.</p>
<p>The day attribute is simpler: just map <code>int</code> over the split.</p>
<pre><code>meeting = '2,1;3,2;4,5;'
day = "1;7;9"
meeting_split = [tuple(map(int,x.split(','))) for x in meeting.split(';')[:-1]]
day_split = list(map(int,day.split(';')))
print(meeting_split, day_split, sep='\n')
</code></pre>
<p>Output</p>
<pre><code>[(2, 1), (3, 2), (4, 5)]
[1, 7, 9]
</code></pre>
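<p>A small variant that does not depend on the trailing ";" always being there, filtering out empty pieces instead of slicing (just an alternative, not required for the data shown):</p>
<pre><code>meeting_split = [tuple(map(int, p.split(','))) for p in meeting.split(';') if p]
day_split = [int(d) for d in day.split(';') if d]
</code></pre>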
|
python|xml-parsing|attributes
| 1 |
1,906,496 | 73,036,329 |
list indices (DateTimeIndex) must be integers or slices, not str
|
<p>I have the following <code>DatetimeIndex</code>:</p>
<pre><code>DatetimeIndex(['2022-08-19', '2023-02-19', '2023-08-19', '2024-02-19',
'2024-08-19', '2025-02-19', '2025-08-19', '2026-02-19',
'2026-08-19', '2027-02-19', '2027-08-19', '2028-02-19',
'2028-08-19', '2029-02-19', '2029-08-19', '2030-02-19',
'2030-08-19', '2031-02-19', '2031-08-19', '2032-02-19',
'2032-08-19', '2033-02-19', '2033-08-19', '2034-02-19',
'2034-08-19', '2035-02-19', '2035-08-19', '2036-02-19',
'2036-08-19', '2037-02-19', '2037-08-19', '2038-02-19',
'2038-08-19', '2039-02-19', '2039-08-19', '2040-02-19',
'2040-08-19', '2041-02-19', '2041-08-19', '2042-02-19',
'2042-08-19', '2043-02-19', '2043-08-19', '2044-02-19',
'2044-08-19', '2045-02-19', '2045-08-19', '2046-02-19',
'2046-08-19', '2047-02-19', '2047-08-19', '2048-02-19',
'2048-08-19', '2049-02-19', '2049-08-19', '2050-02-19',
'2050-08-19', '2051-02-19', '2051-08-19', '2052-02-19'],
dtype='datetime64[ns]', freq='<DateOffset: months=6>')
</code></pre>
<p>Given by:</p>
<pre><code>dates = pd.date_range("2022-08-19", "2052-02-19", freq=pd.DateOffset(months=6))
</code></pre>
<p>The idea was to add it to a new DataFrame, DF:</p>
<pre><code>DF=[]
DF['DateCol']=dates
</code></pre>
<p>But it throws the following error:</p>
<pre><code>TypeError: list indices must be integers or slices, not str
</code></pre>
<p>What am I doing wrong?</p>
<p>Note that I also tried <code>DF.insert</code>.</p>
|
<p>You need to instantiate a <strong>DataFrame</strong>, not a list:</p>
<pre><code>DF = pd.DataFrame({"DateCol": dates})
</code></pre>
<p>output:</p>
<pre><code> DateCol
0 2022-08-19
1 2023-02-19
2 2023-08-19
3 2024-02-19
...
57 2051-02-19
58 2051-08-19
59 2052-02-19
</code></pre>
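<p>And once <code>DF</code> is an actual DataFrame (even an empty one) rather than a list, the column-assignment style from the question works as well. A quick sketch:</p>
<pre><code>import pandas as pd

dates = pd.date_range("2022-08-19", "2052-02-19", freq=pd.DateOffset(months=6))
DF = pd.DataFrame()    # an empty DataFrame, not a list
DF["DateCol"] = dates  # now this assignment is valid
</code></pre>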
|
python|pandas
| 2 |
1,906,497 | 73,025,174 |
How to add Multilevel Columns and create new column?
|
<p>I am trying to create a "total" column in my dataframe</p>
<pre><code>idx = pd.MultiIndex.from_product([['Room 1','Room 2', 'Room 3'],['on','off']])
df = pd.DataFrame([[1,4,3,6,5,15], [3,2,1,5,1,7]], columns=idx)
</code></pre>
<p>My dataframe</p>
<pre><code> Room 1 Room 2 Room 3
on off on off on off
0 1 4 3 6 5 15
1 3 2 1 5 1 7
</code></pre>
<p>For each room, I want to create a total column and then an on% column.</p>
<p>I have tried the following, however, it does not work.</p>
<pre><code>df.loc[:, slice(None), "total" ] = df.xs('on', axis=1,level=1) + df.xs('off', axis=1,level=1)
</code></pre>
|
<p>Let us try something fancy ~</p>
<pre><code>df.stack(0).eval('total=on + off \n on_pct=on / total').stack().unstack([1, 2])
</code></pre>
<hr />
<pre><code> Room 1 Room 2 Room 3
off on total on_pct off on total on_pct off on total on_pct
0 4.0 1.0 5.0 0.2 6.0 3.0 9.0 0.333333 15.0 5.0 20.0 0.250
1 2.0 3.0 5.0 0.6 5.0 1.0 6.0 0.166667 7.0 1.0 8.0 0.125
</code></pre>
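<p>If you prefer something more explicit, a plain loop over the top-level columns does the same job (a sketch; <code>sort_index</code> just regroups the new columns under each room):</p>
<pre><code>for room in df.columns.levels[0]:
    df[(room, 'total')] = df[(room, 'on')] + df[(room, 'off')]
    df[(room, 'on%')] = df[(room, 'on')] / df[(room, 'total')]
df = df.sort_index(axis=1)
</code></pre>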
|
pandas|dataframe
| 2 |
1,906,498 | 73,295,549 |
BERT training error - forward() got an unexpected keyword argument 'labels'
|
<p>I'm trying to train BERT for question answering using SQuAD. Eventually I want to use LaBSE for this, train it again on another language, and see how the score grows. As soon as I start training BERT I get this error:
<code>forward() got an unexpected keyword argument 'labels'</code></p>
<p>To be honest I have no idea what I am doing wrong. Maybe some of you can help me. I am using the SQuAD v1.0 dataset.</p>
<pre><code>from datasets import load_dataset
raw_datasets = load_dataset("squad", split='train')
from transformers import BertTokenizerFast, BertModel
from transformers import AutoTokenizer
model_checkpoint = "setu4993/LaBSE"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = BertModel.from_pretrained(model_checkpoint)
max_length = 384
stride = 128
def preprocess_training_examples(examples):
questions = [q.strip() for q in examples["question"]]
inputs = tokenizer(
questions,
examples["context"],
max_length=max_length,
truncation="only_second",
stride=stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length",
)
offset_mapping = inputs.pop("offset_mapping")
sample_map = inputs.pop("overflow_to_sample_mapping")
answers = examples["answers"]
start_positions = []
end_positions = []
for i, offset in enumerate(offset_mapping):
sample_idx = sample_map[i]
answer = answers[sample_idx]
start_char = answer["answer_start"][0]
end_char = answer["answer_start"][0] + len(answer["text"][0])
sequence_ids = inputs.sequence_ids(i)
# Find the start and end of the context
idx = 0
while sequence_ids[idx] != 1:
idx += 1
context_start = idx
while sequence_ids[idx] == 1:
idx += 1
context_end = idx - 1
# If the answer is not fully inside the context, label is (0, 0)
if offset[context_start][0] > start_char or offset[context_end][1] < end_char:
start_positions.append(0)
end_positions.append(0)
else:
# Otherwise it's the start and end token positions
idx = context_start
while idx <= context_end and offset[idx][0] <= start_char:
idx += 1
start_positions.append(idx - 1)
idx = context_end
while idx >= context_start and offset[idx][1] >= end_char:
idx -= 1
end_positions.append(idx + 1)
inputs["start_positions"] = start_positions
inputs["end_positions"] = end_positions
return inputs
train_dataset = raw_datasets.map(
preprocess_training_examples,
batched=True,
remove_columns=raw_datasets.column_names,
)
len(raw_datasets), len(train_dataset)
from transformers import TrainingArguments
args = TrainingArguments(
"bert-finetuned-squad",
save_strategy="epoch",
learning_rate=2e-5,
num_train_epochs=3,
weight_decay=0.01,
)
from transformers import DataCollatorForLanguageModeling
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer)
from transformers import Trainer
trainer = Trainer(
model=model,
args=args,
data_collator=data_collator,
train_dataset=train_dataset,
tokenizer=tokenizer,
)
trainer.train()
</code></pre>
<p>The full traceback:</p>
<pre><code>TypeError                                 Traceback (most recent call last)
<ipython-input-23-2920a50b14d4> in <module>()
10 tokenizer=tokenizer,
11 )
---> 12 trainer.train()
4 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
TypeError: forward() got an unexpected keyword argument 'labels'
</code></pre>
|
<p>Hi, I suggest you change the import to <code>BertForSequenceClassification</code>.</p>
<p>I suggest you check this out in the <a href="https://github.com/huggingface/transformers/blob/v4.21.1/src/transformers/models/bert/modeling_bert.py#L1510" rel="nofollow noreferrer">docs</a>: the Trainer class actually looks for this specific argument, "labels", in the forward pass, and the plain <code>BertModel</code> does not accept it; this is not stated very clearly in the huggingface docs.</p>
<pre class="lang-py prettyprint-override"><code>from transformers import BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained(model_checkpoint)
</code></pre>
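<p>One caveat worth adding (this goes beyond the original answer, so treat it as a suggestion): the preprocessing in the question produces <code>start_positions</code>/<code>end_positions</code> rather than <code>labels</code>, which is the extractive-QA setup. For that case transformers provides a dedicated head, and the language-modeling collator is not needed; a sketch:</p>
<pre class="lang-py prettyprint-override"><code>from transformers import BertForQuestionAnswering, Trainer, default_data_collator

model = BertForQuestionAnswering.from_pretrained(model_checkpoint)
# this head consumes start_positions / end_positions directly
trainer = Trainer(
    model=model,
    args=args,
    data_collator=default_data_collator,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
</code></pre>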
|
nlp|pytorch|torch|bert-language-model|training
| 0 |
1,906,499 | 50,147,236 |
Python issue: TypeError: unhashable type: 'slice' during web scraping
|
<p>I am attempting to scrape some info from a website. I was able to successfully scrape the text I was looking for, but when I try to create a function to append the texts together, I get a TypeError about an unhashable type.</p>
<p>Do you know what may be happening here? Does anybody know how to fix this issue? </p>
<p>Here is the code in question:</p>
<pre><code>records = []
for result in results:
name = result.contents[0][0:-1]
</code></pre>
<p>and here is the code in its entirety, for reproduction purposes:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
r = requests.get('https://skinsalvationsf.com/2012/08/updated-comedogenic-ingredients-list/')
soup = BeautifulSoup(r.text, 'html.parser')
results = soup.find_all('td', attrs={'valign':'top'})
records = []
for result in results:
name = result.contents[0][0:-1]
</code></pre>
<p>A sample of <code>results</code> items:</p>
<pre><code><td valign="top" width="33%">Acetylated Lanolin <sup>5</sup></td>,
<td valign="top" width="33%">Coconut Butter<sup> 8</sup></td>,
...
<td valign="top" width="33%"><sup> </sup></td>
</code></pre>
<p>Thanks in advance!!</p>
|
<p>In some of your collected results, <code>contents</code> holds no text, only <code>Tag</code> objects. Slicing a <code>Tag</code> with <code>[0:-1]</code> triggers a lookup in its attribute dictionary, and since a slice is not hashable you get the <code>TypeError</code>.</p>
<p>You can catch such errors with a try-except block, </p>
<pre><code>for result in results:
try:
name = result.contents[0][0:-1]
except TypeError:
continue
</code></pre>
<p>Or you could use <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#strings-and-stripped-strings" rel="nofollow noreferrer"><code>.strings</code></a> to select only <code>NavigableString</code> contents, </p>
<pre><code>for result in results:
name = list(result.strings)[0][0:-1]
</code></pre>
<hr>
<p>But it seems like it's only the last item that has no text content, so you could just ignore it. </p>
<pre><code>results = soup.find_all('td', attrs={'valign':'top'})[:-1]
for result in results:
name = result.contents[0][:-1]
</code></pre>
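<p>On the stated goal of appending the texts together: the <code>records</code> list in the question is never filled. Building on the last snippet above, you would collect the names like this (a trivial sketch):</p>
<pre><code>records = []
for result in results:
    records.append(result.contents[0][:-1])
</code></pre>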
|
python|function|loops|beautifulsoup|scraper
| 4 |