Unnamed: 0 | id | title | question | answer | tags | score
---|---|---|---|---|---|---|
1,902,500 | 62,186,218 |
python multiprocessing : AttributeError: Can't pickle local object
|
<p>I have a method inside a class that returns a function whose parameters may change.</p>
<p>The interface function accepts two parameters, f and its args. I want to use mp.Pool to accelerate it. However, it returns an error.</p>
<pre><code>from multiprocessing import Pool
# from multiprocess import Pool
# from pathos.multiprocessing import ProcessingPool as Pool
import pickle
import dill

class Temp:
    def __init__(self, a):
        self.a = a

    def test(self):
        def test1(x):
            return self.a + x
        return test1

def InterfaceFunc(f, x):
    mypool = Pool(4)
    return list(mypool.map(f, x))

if __name__ == "__main__":
    t1 = Temp(1).test()
    x = [1, 2, 3, 1, 2]
    res1 = list(map(t1, x))
    print(res1)
    res2 = InterfaceFunc(t1, x)
</code></pre>
<p>It raises the following error:</p>
<pre><code>AttributeError: Can't pickle local object 'Temp.test.<locals>.test1'
</code></pre>
<p>I have tried 3 methods:</p>
<p><a href="https://stackoverflow.com/questions/19984152/what-can-multiprocessing-and-dill-do-together">What can multiprocessing and dill do together?</a></p>
<p><a href="https://stackoverflow.com/questions/40234771/replace-pickle-in-python-multiprocessing-lib">Replace pickle in Python multiprocessing lib</a></p>
<p><a href="https://stackoverflow.com/questions/52265120/python-multiprocessing-pool-map-attributeerror-cant-pickle-local-object">Python Multiprocessing Pool Map: AttributeError: Can't pickle local object</a></p>
<p>Methods 1 and 2:</p>
<pre><code> from multiprocess import Pool
from pathos.multiprocessing import ProcessingPool as Pool
</code></pre>
<p>They raise this error:</p>
<pre><code> File "E:\Users\ll\Anaconda3\lib\site-packages\dill\_dill.py", line 577, in _load_type
return _reverse_typemap[name]
KeyError: 'ClassType'
</code></pre>
<p>Method 3 requires changing the code, but I can't simply move the function out of the class, because I need f to be a parameter for the interface.</p>
<p>Do you have any suggestions? I'm an inexperienced newcomer.</p>
|
<p>Python can't pickle the closure, but all you really need is something callable that retains state. The <code>__call__</code> method makes a class instance callable, so use that:</p>
<pre><code>from multiprocessing import Pool

class TempTest1:
    def __init__(self, a):
        self.a = a

    def __call__(self, x):
        return self.a + x

class Temp:
    def __init__(self, a):
        self.a = a

    def test(self):
        return TempTest1(self.a)

def InterfaceFunc(f, x):
    mypool = Pool(4)
    return list(mypool.map(f, x))

if __name__ == "__main__":
    t1 = Temp(1).test()
    x = [1, 2, 3, 1, 2]
    res1 = list(map(t1, x))
    print(res1)
    res2 = InterfaceFunc(t1, x)
    print(res2)
</code></pre>
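To see why the callable class pickles where the closure does not, the check can be made directly with pickle, independent of any pool (a minimal Python 3 sketch; `Adder` and `make_closure` are made-up names for illustration):

```python
import pickle

class Adder:
    """Picklable callable: the state lives in an attribute, not in a closure."""
    def __init__(self, a):
        self.a = a

    def __call__(self, x):
        return self.a + x

def make_closure(a):
    def add(x):  # local function: the stdlib pickle cannot serialise this
        return a + x
    return add

restored = pickle.loads(pickle.dumps(Adder(1)))
print(restored(2))  # 3

try:
    pickle.dumps(make_closure(1))
except (pickle.PicklingError, AttributeError) as exc:
    print("closure not picklable:", type(exc).__name__)
```

This is the same reason `mypool.map` fails: the pool has to pickle the callable to send it to the worker processes.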
|
python
| 6 |
1,902,501 | 56,110,662 |
Replace to remove newline
|
<p>For example, I have a text file:</p>
<pre><code>bla bla "TEXT TEXT"
,BLA BLA TEXT
</code></pre>
<p>And I would like to make it</p>
<pre><code>bla bla "TEXT TEXT",BLA BLA TEXT
</code></pre>
<p>What should I put here, to make this happen?</p>
<pre><code>.replace("\n ", "")
</code></pre>
<p><strong>UPDATE:</strong></p>
<p>Sorry, my fault: I had not made it clear.</p>
<pre><code>bla bla "TEXT TEXT",BLA BLA TEXT
bla bla "TEXT TEXT",BLA BLA TEXT
bla bla "TEXT TEXT"
,BLA BLA TEXT
bla bla "TEXT TEXT",BLA BLA TEXT
bla bla "TEXT TEXT",BLA BLA TEXT
bla bla "TEXT TEXT",BLA BLA TEXT
</code></pre>
<p>to </p>
<pre><code>bla bla "TEXT TEXT",BLA BLA TEXT
bla bla "TEXT TEXT",BLA BLA TEXT
bla bla "TEXT TEXT",BLA BLA TEXT
bla bla "TEXT TEXT",BLA BLA TEXT
bla bla "TEXT TEXT",BLA BLA TEXT
bla bla "TEXT TEXT",BLA BLA TEXT
</code></pre>
|
<p>I recommend using <code>re.sub</code> here (remember to <code>import re</code> first):</p>
<pre><code>import re

text = """bla bla \"TEXT TEXT\"
,BLA BLA TEXT"""
output = re.sub(r'\n\s*', '', text)
</code></pre>
<p>This removes each newline together with any whitespace that follows it. (The variable is named <code>text</code> rather than <code>input</code> so it doesn't shadow the built-in.)</p>
<p>Your current approach should also work; the issue is that <code>str.replace</code> returns a new string, so you need to assign the result, e.g.</p>
<pre><code>output = text.replace("\n ", "")
</code></pre>
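For the multi-line input in the update, note that stripping every newline would also join the lines that are already correct. A sketch that removes a newline only when the next line begins with a comma (the sample text is abbreviated):

```python
import re

text = (
    'bla bla "TEXT TEXT",BLA BLA TEXT\n'
    'bla bla "TEXT TEXT"\n'
    ',BLA BLA TEXT\n'
    'bla bla "TEXT TEXT",BLA BLA TEXT\n'
)

# The lookahead (?=,) restricts the substitution to line breaks
# that are followed (after optional whitespace) by a comma, so
# complete lines are left untouched.
fixed = re.sub(r'\n\s*(?=,)', '', text)
print(fixed)
```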
|
python
| 1 |
1,902,502 | 67,321,862 |
First number to determine number of elements in a list (Python)
|
<p>For example, in the first line the user types a number, and that number will determine the number of elements in a list.</p>
|
<p>Python lists are dynamic. You can set one up with a given number of elements, but there is nothing to stop subsequent code from adding or removing elements and so changing the size of the list.</p>
<p>You can do something like:</p>
<pre><code>size = int(input("Enter number of elements:"))
mylist = [None] * size
print(mylist)
</code></pre>
<p>which, for an input of <code>3</code>, gives the output</p>
<pre><code>[None, None, None]
</code></pre>
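Since the list stays dynamic after creation, the preallocation is only a starting point. A tiny sketch, with a hard-coded size standing in for the `input()` call:

```python
size = 3  # stands in for int(input("Enter number of elements:"))
mylist = [None] * size   # preallocated, as above
mylist.append("extra")   # nothing stops the list growing past `size`...
del mylist[0]            # ...or shrinking again
print(mylist)  # [None, None, 'extra']
```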
|
python|list|element
| 2 |
1,902,503 | 63,401,884 |
Issues with grid layout when calling tkraise() in fairly simple GUI
|
<p>I am playing around with a few new ideas for a program I am writing that has five frames within the root, managed by the grid layout. One of the larger frames is a container that holds multiple frames within itself, and you can push a button to flip between them. When you push the button the first time, the frame changes, but it knocks off two of the original five frames, leaving me with a total of three. The only thing I could do to fix this was redefining the frames in my switch_frame method, but I was wondering whether there is another way to deal with this without having to create new objects.</p>
<pre><code>from tkinter import *

class NewFrame(Frame):
    def __init__(self, parent: Tk):
        Frame.__init__(self, master=parent, width=parent.winfo_screenwidth() * .75,
                       height=parent.winfo_screenheight() * .8, bg="purple")
        self.switch_button = Button(self, text='back', command=lambda: parent.switch_multi(1)).pack()

class NewFrame2(Frame):
    def __init__(self, parent: Tk):
        Frame.__init__(self, master=parent, width=parent.winfo_screenwidth() * .75,
                       height=parent.winfo_screenheight() * .8, bg="black")
        self.switch_button = Button(self, text='back', command=lambda: parent.switch_multi(0)).pack()

class Main_UI(Tk):
    def __init__(self):
        Tk.__init__(self)
        self.title("RobinFree")
        self.iconbitmap("C:/RobinFree/pics/robinhood.ico")
        self.state("zoomed")

        # logo frame
        self.logo_frame = Frame(self, width=self.winfo_screenwidth() * .45,
                                height=self.winfo_screenheight() * .2, bg="yellow")
        self.logo_frame.grid(row=0, column=0, stick=N + E + S + W)

        # funds frame
        self.funds_frame = Frame(self, width=self.winfo_screenwidth() * .55,
                                 height=self.winfo_screenheight() * .2, bg="orange")
        self.funds_frame.grid(row=0, column=1, stick=N + E + S + W)

        # multi frame
        self.multi_frame_holder = []
        self.multi_frame_container = Frame(self, width=self.winfo_screenwidth() * .75,
                                           height=self.winfo_screenheight() * .8, bg="red")
        self.multi_frame_container.grid(row=1, column=0, columnspan=2, rowspan=2, stick=N + E + S + W)
        self.switch_button = Button(self.multi_frame_container, text='back',
                                    command=lambda: self.switch_multi(1)).pack()
        self.second_frame = NewFrame(self)
        self.second_frame.grid(row=1, column=0, columnspan=2, rowspan=2, stick=N + E + S + W)
        self.multi_frame_holder.append(self.second_frame)
        self.third_frame = NewFrame2(self)
        self.third_frame.grid(row=1, column=0, columnspan=2, rowspan=2, stick=N + E + S + W)
        self.multi_frame_holder.append(self.third_frame)
        self.switch_multi(0)

        # positions frame
        self.positions_frame = Frame(self, width=self.winfo_screenwidth() * .25,
                                     height=self.winfo_screenheight() * .4, bg="blue")
        self.positions_frame.grid(row=1, column=1, stick=N + E + S)

        # lvl2 frame
        self.lvl2_frame = Frame(self, width=self.winfo_screenwidth() * .25,
                                height=self.winfo_screenheight() * .4, bg="green")
        self.lvl2_frame.grid(row=2, column=1, stick=N + E + S)

    def switch_multi(self, index: int):
        label = self.multi_frame_holder[index]
        label.tkraise()

x = Main_UI()
x.mainloop()
</code></pre>
|
<p><code>tkraise</code> can take another widget as an argument, in which case the caller is raised just above that widget. So if you pass each of the other elements in <code>multi_frame_holder</code>, the frame is raised only above those, not above everything else:</p>
<pre><code>label = self.multi_frame_holder[index]
for i, elem in enumerate(self.multi_frame_holder):
    if i != index:
        # raise above all other elements in multi_frame_holder, but no higher
        label.tkraise(elem)
</code></pre>
|
python|tkinter
| 0 |
1,902,504 | 63,684,825 |
Tracking the year of a policy
|
<p>I have a database of insurance policies that contains two columns: the current and the previous policy number.</p>
<pre><code>Current policy number | Previous policy number
ABCD-0001 |
ABCD-0002 | ABCD-0001
ABCD-0003 | ABCD-0002
XYZ-001 |
</code></pre>
<p>Now I want to track the year of each policy. (Note: the policy numbers above are just examples; in reality they are random.)</p>
<p>E.g. 1st row: the previous policy number is blank, so the current policy is in its 1st year.
2nd row: 2nd year, since its previous policy is the 1st row (which is in its 1st year).</p>
<p>I'm just starting out with Python and MySQL, so I don't have any idea how to track the year of a policy.</p>
<p>With Python, I loop through each row and insert the result into a list. E.g: 1st row is 1st year => insert into list_tracking</p>
<pre><code>Current policy number | Previous policy number | Year
ABCD-0001 | | 1
</code></pre>
<p>Then in the next row, if there is a previous policy, I get the year of the previous policy using <code>list.index</code> and add 1.</p>
<p>However, this approach seems complicated and slow as the data grows or gets updated.
In Excel I can do this easily with a combination of IF and VLOOKUP:
if the previous policy is blank then 1; otherwise VLOOKUP the previous policy and add 1 to its value.</p>
<p>Please kindly share some ideas for Python, MySQL, or a combination of both.</p>
<p>Thanks in advance.</p>
|
<p>I think you want a recursive query:</p>
<pre><code>with cte as (
select current_policy_number, previous_policy_number, 1 rn
from mytable
where previous_policy_number is null
union all
select t.current_policy_number, t.previous_policy_number, c.rn + 1
from cte c
inner join mytable t on t.previous_policy_number = c.current_policy_number
)
select * from cte
</code></pre>
<p>The recursive <code>with</code> clause starts from rows whose previous policy number is <code>null</code>, and assigns them row number <code>1</code>. Then, it traverses the hierarchical structure from the root to the leaves, incrementing the row number as it goes.</p>
<p>Note that recursive queries are available only in MySQL 8.0 and later.</p>
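If you would rather compute the numbering in Python after fetching the rows, the IF + VLOOKUP logic maps naturally onto a memoised lookup keyed by the previous policy number (a sketch over the sample data from the question; the dict would come from your query):

```python
from functools import lru_cache

# previous policy number per current policy number (None = no predecessor)
prev = {
    "ABCD-0001": None,
    "ABCD-0002": "ABCD-0001",
    "ABCD-0003": "ABCD-0002",
    "XYZ-001": None,
}

@lru_cache(maxsize=None)
def policy_year(policy):
    # year 1 when there is no predecessor, otherwise one more than the predecessor
    return 1 if prev[policy] is None else policy_year(prev[policy]) + 1

print({p: policy_year(p) for p in prev})
# {'ABCD-0001': 1, 'ABCD-0002': 2, 'ABCD-0003': 3, 'XYZ-001': 1}
```

The memoisation keeps the total work linear in the number of policies, unlike the repeated `list.index` scans.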
|
python|mysql|string|hierarchical-data|recursive-query
| 0 |
1,902,505 | 22,132,306 |
Clipping a VBO rendered with a shader
|
<p>I'm developping a 2D game engine, using PyOpenGL.</p>
<p>For coherence with previous versions of my engine, which used SDL, the graphic elements are first stored in a VBO, with a 2D coordinate system where (0, 0) is top-left and (640, 448) is bottom-right (so the y axis is reversed). Let's call these SDL coordinates.</p>
<p>Since my graphics use palette effects, I rendered them with shaders. My vertex shader simply convert my 2D coordinate system to the [-1;1] cube.</p>
<p>Now, I need to clip the display. My first idea was to do it in the vertex shader, by sending all vertices outside the clipping zone to a point outside the [-1 ; 1] cube (I took (2.0, 0.0, 0.0, 1.0)), but it went wrong: it deformed squared tiles that had some, but not all, of their edges outside the clipping zone.</p>
<p>So I consider using glFrustum, but I don't understand in which coordinate system I must specify the params.</p>
<p>In fact, I tried putting more or less anything as parameters without noticing any difference when running the code. What am I doing wrong?</p>
<p>For the moment, my drawing routine looks like that :</p>
<pre><code>def draw(self):
    glClearColor(1.0, 1.0, 0.0, 1.0)
    glClear(GL_COLOR_BUFFER_BIT)

    glEnable(GL_TEXTURE_2D)
    glActiveTexture(GL_TEXTURE0)
    glBindTexture(GL_TEXTURE_2D, self.v_texture)

    glEnable(GL_TEXTURE_1D)
    glActiveTexture(GL_TEXTURE1)

    shaders.glUseProgram(self.shaders_program)
    shaders.glUniform1i(self.texture_uniform_loc, 0)
    shaders.glUniform1i(self.palette_uniform_loc, 1)
    shaders.glUniform2f(self.offset_uniform_loc, 0, 0)
    shaders.glUniform4f(self.color_uniform_loc, 1, 1, 1, 1)

    # Draw layers
    for layer in self.layers:  # [0:1]:
        layer.draw()

    shaders.glUseProgram(0)
    pygame.display.flip()
</code></pre>
<p>In class Layer:</p>
<pre><code>def draw(self):
    glFrustum(0.0, 0.5, 0.0, 0.5, 0.1, 1.0)  # I tried anything here...

    # offset is an offset to add to coordinates (in SDL coordinates)
    shaders.glUniform2f(self.vdp.offset_uniform_loc, self.x, self.y)
    # color is likely irrelevant here
    shaders.glUniform4f(self.vdp.color_uniform_loc, *self.color_modifier)

    glBindTexture(GL_TEXTURE_1D, self.palette.get_id())

    self.vbo.bind()
    glEnableClientState(GL_VERTEX_ARRAY)
    glEnableClientState(GL_TEXTURE_COORD_ARRAY)
    glVertexPointer(3, GL_FLOAT, 20, self.vbo)
    glTexCoordPointer(2, GL_FLOAT, 20, self.vbo + 12)
    glDrawArrays(GL_QUADS, 0, len(self.vbo))
    self.vbo.unbind()
    glDisableClientState(GL_TEXTURE_COORD_ARRAY)
    glDisableClientState(GL_VERTEX_ARRAY)
</code></pre>
<p>Note: I must say that I'm new to OpenGL. I learnt by reading tutorials and was quite confused by the 'old' and 'new' OpenGL.</p>
<p>I felt like glFrustum was more 'old' OpenGL, like much of the transformation-matrix manipulation (most of which can be handled by vertex shaders). I may be totally wrong about that, and glFrustum (or something else) may be unavoidable in my case. I'd like to read an article about what can be totally forgotten from 'old' OpenGL.</p>
|
<p>Unless you are using the built-in matrices (<code>gl_ModelViewProjectionMatrix</code> etc.) in your shaders, <code>glFrustum</code> won't do anything. If you aren't using those matrices, don't start using them now (you are correct that this is part of 'old' OpenGL).</p>
<p>It sounds like you want to use <a href="https://www.khronos.org/opengles/sdk/docs/man/xhtml/glScissor.xml" rel="nofollow"><code>glScissor</code></a> which defines a clipping rectangle in window coordinates (note that the origin of these is at the lower-left of the window). You have to enable it with <code>glEnable(GL_SCISSOR_TEST)</code>.</p>
<p>As far as articles about what can be totally forgotten in 'old' OpenGL: Googling "Modern OpenGL", or "OpenGL deprecated" should give you a few starting points.</p>
|
python|opengl
| 1 |
1,902,506 | 17,044,661 |
how to filter search by values that are not available
|
<p>I have a list of items as:</p>
<pre><code>i = SearchQuerySet().models(Item)
</code></pre>
<p>now, each item in <code>i</code> has an attribute, <code>price</code></p>
<p>I want to narrow the results to those in which price information is <strong>not available</strong>, along with the ones falling in a given range</p>
<p>something like</p>
<pre><code>i.narrow('price:( None OR [300 TO 400 ] )')
</code></pre>
<p>how can that be done?</p>
|
<p>Try this:</p>
<pre><code>-(-price:[300 TO 400] AND price:[* TO *])
</code></pre>
<p>is logically the same and it works in Solr. </p>
|
python|django|solr|django-haystack
| 13 |
1,902,507 | 58,140,580 |
How do you trivially change a Python script to capture everything written to stdout by itself and its subprocesses?
|
<p>Suppose you have a big script that writes to stdout in many places, both directly (using Python features like <code>print()</code> or <code>logging</code> that goes to <code>stdout</code>) and indirectly by launching <code>subprocess</code>es which write to stdout.</p>
<p>Is there a trivial way to capture all this stdout?</p>
<p>For example, if you want the script to send an email with all its output when it completes.</p>
<p>By "trivial" I mean a constant rather than linear code change. Otherwise, I believe you will have to introduce redirection parameters (and some accumulation logic) into every single <code>subprocess</code> call. You can capture all the output of the script itself by redirecting <code>sys.stdout</code>; however, I don't see a similar catch-all trivial solution for all the <code>subprocess</code> calls, or indeed whatever other types of code you may be using to launch these subprocesses.</p>
<p>Is there any such solution, or must one use a runner script that will call this Python script as a subprocess and capture all <code>stdout</code> from that subprocess?</p>
|
<p>Probably the shortest way to do this, and not Python-specific, would be to use <a href="https://docs.python.org/3/library/os.html#os.dup2" rel="nofollow noreferrer"><code>os.dup2()</code></a>, e.g.:</p>
<pre><code>f = open('/tmp/OUT', 'w')
os.dup2(f.fileno(), 1)
f.close()
</code></pre>
<p>What it does is replace file descriptor <code>1</code>, which would normally be your <code>stdout</code>, with the file descriptor of <code>f</code> (which you can then close). After that, all writes to <code>stdout</code> end up in <code>/tmp/OUT</code>. The duplication is inheritable: subprocesses get fd <code>1</code> writing to the same file.</p>
|
python|subprocess
| 1 |
1,902,508 | 43,923,728 |
Trouble with RoboBrowser in Python 3.6
|
<p>This is probably very simple to anyone experienced, but I am a beginner and spent hours trying to open a website using RoboBrowser. I have Python 3.6 installed and believe I successfully installed the package through the command prompt. When I execute the code below using Geany, nothing happens besides a box that pops up saying "hit any key to continue". The next step is to log in, but there is no point worrying about that until I can actually get the website to open.</p>
<pre><code>import re
from robobrowser import RoboBrowser
browser = RoboBrowser(history=True)
browser.open('https://login.salesforce.com/')
</code></pre>
|
<p>To install RoboBrowser, <code>pip install robobrowser</code> in an administrative shell should be sufficient.</p>
<p>The code seems OK; you could try to start it from a shell.
Just cd into the directory of the Python source file and execute it via <code>py <filename.py></code>, <code>python <filename.py></code> or <code>python3 <filename.py></code> (depending on your environment variables).</p>
<p>The "hit any key to continue" probably just tells you that the program finished successfully. You are not printing anything to the screen.</p>
|
python
| 0 |
1,902,509 | 43,823,431 |
Python inheritance - logging when search of __mro__ finds matches and when it does not
|
<p>In Python, suppose I have a basic class structure that looks like this:</p>
<pre><code>class Foo(object):
    def do_something(self):
        print 'doing a thing'

    def do_another_thing(self):
        print 'doing another thing'

class Bar(Foo):
    def do_another_thing(self):
        super(Bar, self).do_another_thing()
        print 'doing more stuff still'
</code></pre>
<p>I understand how the <code>__mro__</code> attribute is constructed, but I would like to add logging so that I can see in the output what methods it found/called when each class made its call. So, example, I would like it to log as commented below:</p>
<pre><code>f = Foo()
b = Bar()

f.do_something()
# print 'Implementing foo method do_something'

b.do_something()
# print 'Did not find method do_something for bar'
# print 'Implementing foo method do_something'

f.do_another_thing()
# print 'Implementing foo method do_another_thing'

b.do_another_thing()
# print 'Implementing bar method do_another_thing'
# print 'Implementing foo method do_another_thing'
</code></pre>
<p>I have fiddled around with <code>__getattribute__</code> and <code>__get__</code>, but evidently I do not understand these methods well enough to implement as desired. I also looked at using decorators, but I think using descriptors in some way is probably the route to take here.</p>
<p>Here is what I have tried so far:</p>
<pre><code>class Bar(Foo):
    def do_another_thing(self):
        super(Bar, self).do_another_thing()
        print 'doing more stuff still'

    def __getattribute__(self, key):
        self_dict = object.__getattribute__(type(self), '__dict__')
        if key in self_dict:
            print 'Implementing {} method {}'.format(type(self).__name__, key)
            v = object.__getattribute__(self, key)
            if hasattr(v, '__get__'):
                return v.__get__(None, self)
            return v
        print 'Did not find method {} for {}'.format(key, type(self).__name__)
        mro = object.__getattribute__(type(self), '__mro__')
        for thing in mro[1:]:
            v = thing.__getattribute__(self, key)
            if hasattr(v, '__get__'):
                return v.__get__(None, self)
            return v
</code></pre>
<p>I have also redefined this <code>__getattribute__</code> in Foo also, and my output is as follows:</p>
<pre><code>Implementing Foo method do_something
doing a thing
Did not find method do_something for Bar
Did not find method do_something for Bar
doing a thing
Implementing Foo method do_another_thing
doing another thing
Implementing Bar method do_another_thing
doing another thing
doing more stuff still
</code></pre>
<p>So I am able to capture the correct logging at the first level of inheritance, but not able to correctly pass the call back up from Bar to Foo such that I can utilise Foo's <code>__getattribute__</code>. </p>
|
<p>I did it. This seems to work in my use case, though I may have issues with multiple inheritance:</p>
<pre><code>class loggingObject(object):
    def __getattribute__(self, item):
        # we let the smart __getattribute__ method of object get the object we want
        v = object.__getattribute__(self, item)
        # if it's not a function, then we don't care about logging where it came from
        import types
        if not isinstance(v, types.MethodType):
            # so we finish off default implementation
            if hasattr(v, '__get__'):
                return v.__get__(None, self)
            return v
        # get the dictionary of all explicitly defined items in the class that self is an instance of
        self_dict = object.__getattribute__(type(self), '__dict__')
        # if item is in self_dict, then the class that self is an instance of did implement this function
        if item in self_dict:
            # we log, and then implement default __getattribute__ behaviour
            print 'Implementing {} method {}'.format(type(self).__name__, item)
            if hasattr(v, '__get__'):
                return v.__get__(None, self)
            return v
        # if we get here, item is not explicitly in self_dict, and hence the class self is an instance of did not implement this function
        print 'Did not find explicit method {} for {}'.format(item, type(self).__name__)
        # unbind the function from self so it can be used for comparison
        actual_function = v.__func__
        # get the __mro__ for the class that self is an instance of
        mro = object.__getattribute__(type(self), '__mro__')
        # we loop through the mro to compare functions in each class, going up the inheritance tree until we find a match
        for cls in mro[1:]:
            try:
                # get the function for cls
                function = object.__getattribute__(cls, '__dict__')[item]
                # if the function from class cls and the actual_function agree, that is where we got it from
                if function == actual_function:
                    print 'Implementing {} method {}'.format(cls.__name__, item)
                    break
            except KeyError:
                print 'Did not find explicit method {} for {}'.format(item, cls.__name__)
        # now that we have logged where we got the function from, default behaviour
        if hasattr(v, '__get__'):
            return v.__get__(None, self)
        return v

class Foo(loggingObject):
    def do_something(self):
        print 'doing a thing'

    def do_another_thing(self):
        print 'doing another thing'

class Bar(Foo):
    def do_another_thing(self):
        super(Bar, self).do_another_thing()
        print 'doing more stuff still'
</code></pre>
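If the goal is only to log which class a lookup resolved to, walking `type(obj).__mro__` and checking each class's own `__dict__` is a much shorter alternative to overriding `__getattribute__` (a Python 3 sketch reusing the Foo/Bar shape from the question, not the mechanism used above):

```python
def defining_class(obj, name):
    """Return the first class in the MRO whose own __dict__ defines `name`."""
    for cls in type(obj).__mro__:
        if name in cls.__dict__:
            return cls
    raise AttributeError(name)

class Foo(object):
    def do_something(self):
        return 'doing a thing'

class Bar(Foo):
    def do_another_thing(self):
        return 'doing more stuff still'

print(defining_class(Bar(), 'do_something').__name__)      # Foo
print(defining_class(Bar(), 'do_another_thing').__name__)  # Bar
```

This mirrors how attribute lookup itself works, so it never interferes with the lookup it is reporting on.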
|
python|python-2.7|inheritance|descriptor|method-resolution-order
| 0 |
1,902,510 | 54,381,940 |
Neural Network: Convert HTML Table into JSON data
|
<p>I'm kind of new to neural networks and just started learning to code them by trying some examples.
Two weeks ago I was looking for an interesting challenge and I found one. But I'm about to give up because it seems to be too hard for me... I was curious whether any of you can solve it.</p>
<p><strong>The Problem:</strong> Assume there are ".htm" files that contain tables about the same topic, but the table structure isn't the same in every file. For example: we have a lot of ".htm" files containing information about teacher substitutions per day per school. Because the structure of those ".htm" files differs from file to file, it would be hard to program a parser that could extract the data from those tables. So my thought was that this is a task for a neural network.</p>
<p><strong>First Question:</strong> Is it a task a Neural Network can/should handle or am I mistaken by that?</p>
<p>Because a neural network seemed to fit this kind of challenge, I tried to think of an input. I came up with two options:</p>
<p><strong>First Input Option:</strong> Take the HTML Code (only from the body-tag) as string and convert it as Tensor</p>
<p><strong>Second Input Option:</strong> Convert the HTML Tables into Images (via Canvas maybe) and feed this input to the DNN through Conv2D-Layers.</p>
<p><strong>Second Question:</strong> Are those Options any good? Do you have any better solution to this?</p>
<p>After that I wanted to figure out how to make a DNN output this heavily dynamic data. My thought was to convert my desired JSON output into tensors and feed them to the DNN during training, and for every prediction I would expect the DNN to return a tensor that is convertible back into JSON output...</p>
<p><strong>Third Question:</strong> Is it even possible to get such a detailed Output from a DNN? And if Yes: Do you think the Output would be suitable for this task?</p>
<p><strong>Last Question:</strong> Assuming all my assumptions are correct: wouldn't training this DNN take forever? Let's say you have an RTX 2080 Ti for it. What would you guess?</p>
<p>I guess that's it. I hope i can learn a lot from you guys!</p>
<p>(I'm sorry about my bad English - it's not my native language)</p>
<p><strong>Addition:</strong></p>
<p>Here is a more in-depth Example. Lets say we have a ".htm"-file that looks like this:
<a href="https://i.stack.imgur.com/BJQWN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BJQWN.png" alt="Example of an .html-file"></a></p>
<p>The task would be to get all the relevant informations from this table. For example:
All Students from Class "9c" don't have lessons in their 6th hour due to cancellation.</p>
|
<p>1) This is not a particularly suitable problem for a neural network, as your domain is structured data with clear dependencies inside it. Tree-based ML algorithms tend to show much better results on such problems.</p>
<p>2) Both of your input choices are very unstructured. Learning from such data would be nearly impossible. There are clear ways to give more knowledge to the model. For example, you have the same data in different formats; the difference is only the structure. That means a model only needs to learn a mapping from one structure to another; it doesn't need to know anything about the data itself. Hence, words can be tokenized with unique identifiers to remove unnecessary information. Htm data can be parsed into a tree, as can JSON. There are then different ways to represent graph structures that can be used in an ML model.</p>
<p>3) It seems that the only adequate option for the output is a sequence of identifiers pointing to unique entities from the text. The whole problem is then similar to Seq2Seq, best solved by RNNs with an encoder-decoder architecture.</p>
<p>I believe that, if there is enough data and the htm files don't have a huge amount of noise, the task can be completed. Training time depends hugely on the selected model and its complexity, as well as on the diversity of the initial data.</p>
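It is also worth noting how far a deterministic parse gets for any one known layout before reaching for a model at all. A stdlib-only sketch that turns one simple table into JSON records (the table snippet is made up; real timetable files would need per-layout handling):

```python
import json
from html.parser import HTMLParser

class TableParser(HTMLParser):
    """Collect each <tr> as a list of its cell texts."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], None, False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th") and self._row is not None:
            self._in_cell = True
            self._row.append("")

    def handle_endtag(self, tag):
        if tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell:
            self._row[-1] += data.strip()

html = '<table><tr><th>Class</th><th>Hour</th></tr><tr><td>9c</td><td>6</td></tr></table>'
p = TableParser()
p.feed(html)
header, *body = p.rows
records = [dict(zip(header, row)) for row in body]
print(json.dumps(records))  # [{"Class": "9c", "Hour": "6"}]
```

A handful of such per-layout parsers may well be cheaper than labelling enough training data for a model.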
|
tensorflow|machine-learning|keras|neural-network|deep-learning
| 2 |
1,902,511 | 9,061,793 |
Error when using zlib.compress function in Python 3.2
|
<p>I'm importing zlib in my Python program. It works fine in Python 2.6 but shows an error when I try to run it in Python 3.2.</p>
<p>This is my code:</p>
<pre><code>import zlib
s = 'sam'
print ("Your string length is",len(s))
t = zlib.compress(s)
print ("Your compressed string is",t)
print ("Your compressed string length is",len(t))
print ("Your decompressed string is",zlib.decompress(t))
print ("Crc32 is",zlib.crc32(t))
</code></pre>
<p>The error I get is this:</p>
<pre><code>Your string length is 3
Traceback (most recent call last):
File "F:\workspace\samples\python\zip.py", line 4, in <module>
t = zlib.compress(s)
TypeError: 'str' does not support the buffer interface
</code></pre>
<p>But the above program works fine in Python 2.6. Should I use an alternative to zlib? Please help me.</p>
<p><strong>Edit</strong>: I got it to work. It seems I needed to encode it. Here is the revised code:</p>
<pre><code>import zlib
s = 'sam'
print ("Your string length is",len(s))
s=s.encode('utf-8')
t = zlib.compress(s)
print ("Your compressed string is",t)
print ("Your compressed string length is",len(t))
print ("Your decompressed string is",zlib.decompress(t))
print ("Crc32 is",zlib.crc32(t))
</code></pre>
|
<p>The <code>str</code> type in Python 3 is no longer a sequence of 8-bit characters, but a sequence of Unicode characters. You need to use the <code>bytes</code> type for binary data. You convert between strings and bytes by encoding/decoding.</p>
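A minimal round trip showing where the encode/decode calls sit relative to the compression boundary (Python 3):

```python
import zlib

s = "sam"
data = s.encode("utf-8")        # str -> bytes: zlib only accepts bytes
packed = zlib.compress(data)
round_tripped = zlib.decompress(packed).decode("utf-8")  # bytes -> str
print(round_tripped)  # sam
```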
|
python-3.x|zlib
| 4 |
1,902,512 | 9,284,199 |
supplemental codepoints to unicode string in python
|
<p><code>unichr(0x10000)</code> fails with a <code>ValueError</code> when cpython is compiled without <code>--enable-unicode=ucs4</code>.</p>
<p>Is there a language builtin or core library function that converts an arbitrary unicode scalar value or code-point to a <code>unicode</code> string that works regardless of what kind of python interpreter the program is running on?</p>
|
<p>Yes, here you go:</p>
<pre><code>>>> unichr(0xd800)+unichr(0xdc00)
u'\U00010000'
</code></pre>
<p>The crucial point to understand is that <code>unichr()</code> converts an integer to a single code unit in the Python interpreter's string encoding. <a href="http://docs.python.org/2/library/functions.html#unichr" rel="noreferrer">The Python Standard Library documentation for 2.7.3, <em>2. Built-in Functions</em>, on <code>unichr()</code></a> reads,</p>
<blockquote>
<p>Return the Unicode string of <strong>one character</strong> whose Unicode code is the integer i.... The valid range for the argument depends how Python was configured – it may be either UCS2 [0..0xFFFF] or UCS4 [0..0x10FFFF]. <code>ValueError</code> is raised otherwise. </p>
</blockquote>
<p>I added emphasis to "one character", by which they mean <a href="http://www.unicode.org/glossary/#code_unit" rel="noreferrer">"one code unit" in Unicode terms</a>.</p>
<p>I'm assuming that you are using Python 2.x. The Python 3.x interpreter has no built-in <code>unichr()</code> function. Instead, <a href="http://docs.python.org/3.3/library/functions.html#chr" rel="noreferrer">the Python Standard Library documentation for 3.3.0, <em>2. Built-in Functions</em>, on <code>chr()</code></a> reads,</p>
<blockquote>
<p>Return the <strong>string representing a character</strong> whose Unicode codepoint is the integer i.... The valid range for the argument is from 0 through 1,114,111 (0x10FFFF in base 16). </p>
</blockquote>
<p>Note that the return value is now a string of unspecified length, not a string with a single code unit. So in Python 3.x, <code>chr(0x10000)</code> would behave as you expected. It "converts an arbitrary unicode scalar value or code-point to a <code>unicode</code> string that works regardless of what kind of python interpreter the program is running on".</p>
<p>But back to Python 2.x. If you use <code>unichr()</code> to create Python 2.x <code>unicode</code> objects, and you are using Unicode scalar values above 0xFFFF, then you are committing your code to being aware of the Python interpreter's implementation of <code>unicode</code> objects. </p>
<p>You can isolate this awareness with a function which tries <code>unichr()</code> on a scalar value, catches <code>ValueError</code>, and tries again with the corresponding UTF-16 surrogate pair:</p>
<pre><code>def unichr_supplemental(scalar):
try:
return unichr(scalar)
except ValueError:
return unichr( 0xd800 + ((scalar-0x10000)//0x400) ) \
+unichr( 0xdc00 + ((scalar-0x10000)% 0x400) )
>>> unichr_supplemental(0x41),len(unichr_supplemental(0x41))
(u'A', 1)
>>> unichr_supplemental(0x10000), len(unichr_supplemental(0x10000))
(u'\U00010000', 2)
</code></pre>
<p>But you might find it easier to just convert your scalars to 4-byte UTF-32 values in a UTF-32 byte <code>string</code>, and decode this byte <code>string</code> into a <code>unicode</code> string:</p>
<pre><code>>>> '\x00\x00\x00\x41'.decode('utf-32be'), \
... len('\x00\x00\x00\x41'.decode('utf-32be'))
(u'A', 1)
>>> '\x00\x01\x00\x00'.decode('utf-32be'), \
... len('\x00\x01\x00\x00'.decode('utf-32be'))
(u'\U00010000', 2)
</code></pre>
<p>The code above was tested on Python 2.6.7 with UTF-16 encoding for Unicode strings. I didn't test it on a Python 2.x interpreter with UTF-32 encoding for Unicode strings. However, it should work unchanged on any Python 2.x interpreter with any Unicode string implementation.</p>
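<p>For reference, the surrogate-pair arithmetic inside <code>unichr_supplemental()</code> can be checked on its own, without a narrow-build interpreter (a sketch; the helper name is mine, and it runs on Python 3 as well):</p>

```python
# UTF-16 surrogate-pair arithmetic from unichr_supplemental, factored out
# so it can be verified independently of the interpreter's Unicode build.
def to_surrogates(scalar):
    assert 0x10000 <= scalar <= 0x10FFFF
    high = 0xD800 + ((scalar - 0x10000) // 0x400)  # lead (high) surrogate
    low = 0xDC00 + ((scalar - 0x10000) % 0x400)    # trail (low) surrogate
    return high, low
```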
|
python|unicode|python-2.x|supplementary
| 8 |
1,902,513 | 39,009,349 |
python boto 'NoneType' object has no attribute 'stop_instances'
|
<p>I am learning the Python boto module and I am trying to stop a running instance.</p>
<pre><code>import boto.ec2
conn = boto.ec2.connect_to_region("us-west-2")
conn.stop_instances(instance_ids=['i-0aa5ce441ef7e0e2a'])
</code></pre>
<p>but I am getting an error which says:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'stop_instances'
</code></pre>
<p>I supplied AWS access keys to boto. </p>
<p>Can anyone please help me fix this error?</p>
|
<p>As @kindall points out, your <code>conn</code> object is not initialized (it's NoneType). I also see that you're using boto in the example, but I thought that I would provide an example of this using boto3:</p>
<pre><code>import boto3
boto3_session = boto3.Session(profile_name='some_profile_you_configured')
boto3_client = boto3_session.client('ec2', region_name='us-west-2')
response = boto3_client.stop_instances(InstanceIds=['i-0aa5ce441ef7e0e2a'])
</code></pre>
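<p>One more note on the boto 2 code from the question: <code>connect_to_region()</code> typically returns <code>None</code> rather than raising when the region name isn't recognized or credentials can't be resolved, so a fail-fast guard makes the root cause visible (a sketch; the helper name is mine):</p>

```python
def require_connection(conn, region):
    """Raise immediately instead of failing later with a NoneType AttributeError."""
    if conn is None:
        raise RuntimeError(
            "connect_to_region(%r) returned None -- "
            "check the region name and your AWS credentials" % region)
    return conn

# Usage (with boto 2):
# conn = require_connection(boto.ec2.connect_to_region("us-west-2"), "us-west-2")
```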
|
python|python-2.7|python-3.x|boto|boto3
| 0 |
1,902,514 | 7,300,230 |
Using list as a data type in a column (SQLAlchemy)
|
<p>I want to store a list of RSS feed URLs in an SQLite db. I'm using SQLAlchemy and was wondering how to store these. I can't seem to find any documentation about lists, and was wondering if this is legal for a column: <code>Column('rss_feed_urls', List)</code></p>
<p>Or is there an array type that I could use?</p>
|
<p>If you really must you could use the <a href="http://docs.sqlalchemy.org/en/latest/core/type_basics.html#sqlalchemy.types.PickleType" rel="noreferrer">PickleType</a>. But what you probably want is another table (which consists of a <em>list</em> of rows, right?). Just create a table to hold your RSS feeds:</p>
<pre><code>class RssFeed(Base):
__tablename__ = 'rssfeeds'
id = Column(Integer, primary_key=True)
url = Column(String)
</code></pre>
<p>Add new urls:</p>
<pre><code>feed = RssFeed(url='http://url/for/feed')
session.add(feed)
</code></pre>
<p>Retrieve your list of urls:</p>
<pre><code>session.query(RssFeed).all()
</code></pre>
<p>Find a specific feed by index:</p>
<pre><code>session.query(RssFeed).get(1)
</code></pre>
<p>I'd recommend SQLAlchemy's <a href="http://www.sqlalchemy.org/docs/orm/tutorial.html" rel="noreferrer">Object Relational Tutorial</a>.</p>
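<p>If the feed list belongs to a parent object, a <code>relationship()</code> exposes the child table as an ordinary Python list (a sketch; the <code>Reader</code> class and column names are mine):</p>

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Reader(Base):
    __tablename__ = 'readers'
    id = Column(Integer, primary_key=True)
    feeds = relationship('RssFeed')  # behaves like a list of RssFeed rows

class RssFeed(Base):
    __tablename__ = 'rssfeeds'
    id = Column(Integer, primary_key=True)
    reader_id = Column(Integer, ForeignKey('readers.id'))
    url = Column(String)
```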
|
python|sqlalchemy
| 33 |
1,902,515 | 32,037,190 |
How to capture the current namespace inside nested functions?
|
<p>Suppose one had the following:</p>
<pre><code>def outer(foo, bar, baz):
frobozz = foo + bar + baz
def inner(x, y, z):
return dict(globals().items() + locals().items())
return inner(7, 8, 9)
</code></pre>
<p>The value returned by <code>inner</code> is the dictionary obtained by merging the dictionaries returned by <code>globals()</code> and <code>locals()</code>, as shown. In general<sup>1</sup>, this returned value will <em>not</em> contain entries for <code>foo</code>, <code>bar</code>, <code>baz</code>, and <code>frobozz</code>, even though these variables are visible within <code>inner</code>, and therefore, arguably belong in <code>inner</code>'s namespace.</p>
<p>One way to facilitate the capturing of <code>inner</code>'s namespace would be the following kluge:</p>
<pre><code>def outer(foo, bar, baz):
frobozz = foo + bar + baz
def inner(x, y, z, _kluge=locals().items()):
return dict(globals().items() + _kluge + locals().items())
return inner(7, 8, 9)
</code></pre>
<p>Is there a better way to capture <code>inner</code>'s namespace than this sort of kluge?</p>
<hr>
<p><sup><sup>1</sup> Unless, that is, variables having those names happen to be present in the current global namespace.</sup></p>
|
<p>This isn't dynamic, and most likely it's not the best way, but you <em>could</em> simply access the variables inside <code>inner</code> to add them to its "namespace":</p>
<pre><code>def outer(foo, bar, baz):
frobozz = foo + bar + baz
def inner(x, y, z):
foo, bar, baz, frobozz
return dict(globals().items() + locals().items())
return inner(7, 8, 9)
</code></pre>
<p>You could also store the <code>outer</code> function's locals into a variable, and use the variable inside <code>inner</code>'s return value:</p>
<pre><code>def outer(foo, bar, baz):
frobozz = foo + bar + baz
outer_locals = locals()
def inner(x, y, z):
return dict(outer_locals.items() + globals().items() + locals().items())
return inner(7, 8, 9)
</code></pre>
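<p>On Python 3, dict views can't be concatenated with <code>+</code>, but the same name-referencing trick works with dict unpacking (a sketch combining both ideas):</p>

```python
def outer(foo, bar, baz):
    frobozz = foo + bar + baz
    def inner(x, y, z):
        foo, bar, baz, frobozz            # referencing the names captures them
        return {**globals(), **locals()}  # locals() now includes the closure vars
    return inner(7, 8, 9)
```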
|
python|python-2.7
| 2 |
1,902,516 | 31,977,236 |
windows not allowing me to replace file
|
<p>I'm using python to iterate through a csv file, and then delete a number of rows. To do this, I'm creating a new file and then attempting to remove the old file and rename the new file to the old file name. I keep getting an error and it's unclear to me why. The error is as follows:</p>
<pre><code> Traceback (most recent call last):
File "sendemails.py", line 85, in <module>
os.remove('C:\Python27\emails.csv')
WindowsError: [Error 32] The process cannot access the file because it is being used by another process: 'C:\\Python27\\emails.csv'
</code></pre>
<p>The relevant code is as follows:</p>
<pre><code>FIRST_ROW_NUM = 1
ROWS_TO_DELETE = {1,3}
with open('C:\\Python27\\emails.csv', 'rt') as infile, open('C:\\Python27\\emailed.csv', 'wt') as outfile:
outfile.writelines(row for row_num, row in enumerate(infile, FIRST_ROW_NUM)
if row_num not in ROWS_TO_DELETE)
os.remove('C:\\Python27\\emails.csv')
os.rename('C:\\Python27\\emailed.csv','C:\\Python27\\emails.csv')
</code></pre>
<p>The csv file is not open anywhere that I'm aware of, and I printed infile and outfile (to check whether they were closed); both are closed prior to the remove and rename. Any help? I'm completely lost. </p>
|
<p>On Microsoft Windows systems, if the file is open in another program (say, Notepad, or an antivirus scanner touching it in the background), it cannot be deleted because that program still holds a handle to it.</p>
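<p>If the lock is transient (an antivirus scan, a search indexer, an editor briefly touching the file), one defensive option is to retry the swap a few times; on Python 3.3+, <code>os.replace()</code> also folds the remove-and-rename into one atomic step. A sketch, not the poster's code:</p>

```python
import os
import time

def replace_with_retry(src, dst, attempts=5, delay=0.5):
    """Move src over dst, retrying briefly if another process holds dst open
    (the WindowsError 32 case from the traceback)."""
    for attempt in range(attempts):
        try:
            os.replace(src, dst)  # atomic when src and dst are on the same volume
            return
        except OSError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```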
|
python|delete-row|overwrite|file-rename
| 1 |
1,902,517 | 40,553,886 |
how to reshape a 4D tensorflow to a 2D
|
<p>I have an X_train image as:</p>
<pre><code>X-train (37248, 32, 32, 3)
</code></pre>
<p>y_train (37248, 43)</p>
<p>I have a feed-dictionary as</p>
<pre><code>train_feed_dict = {features: X_train, labels: train_labels}
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
</code></pre>
<p>My features is:</p>
<pre><code>features = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])
features = tf.reshape(features, [-1, 3072])
</code></pre>
<p>But when I run the code I get this error; the reshape does not seem to take effect.</p>
<pre><code>ValueError: Cannot feed value of shape (37248, 32, 32, 3) for Tensor 'Reshape_5:0', which has shape '(?, 3072)'
</code></pre>
|
<p>I believe the error was two-pronged:</p>
<p>I had to convert the data to <code>np.array()</code> format, and I had to bind the reshape to a new name: <code>flat_features = tf.reshape(features, [-1, 3072])</code>.</p>
<p>With <code>features = tf.reshape(features, [-1, 3072])</code>, the name <code>features</code> no longer referred to the placeholder, so the feed dict targeted the reshape op instead. Because there were two problems, just changing the name to <code>flat_features</code> by itself did not work.</p>
<p>So @drpng was also right in his comment.</p>
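<p>A minimal numpy analogue of the intended flattening (illustrative only; a small batch stands in for the real data, and the comment notes why the placeholder and the reshaped tensor need separate names):</p>

```python
import numpy as np

X_train = np.zeros((8, 32, 32, 3), dtype=np.float32)  # small stand-in batch

# Keep a *separate* name for the reshaped result; rebinding `features` to the
# tf.reshape output is what made the feed dict target the wrong tensor.
flat = X_train.reshape(-1, 32 * 32 * 3)
```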
|
python|numpy|tensorflow
| 2 |
1,902,518 | 51,864,008 |
Numpy no module/attribute when importing other packages depending on it
|
<p>On 2 distinct independent systems, I have Anaconda/Miniconda. On my default virtual environment on either one, since 2018-8-15, when I import pandas or matplotlib, I get errors <code>module 'numpy' has no attribute '__version__'</code> and <code>module 'numpy' has no attribute 'square'</code>, respectively.</p>
<p>An update to conda or one of the modules may have done damage, but I haven't been able to figure out what. Of the two computers, one is a Mac and one is a Windows PC. The only common features are that both are default environments, both may have deep-learning packages installed (as opposed to the other environments, which are still functioning well), and I use the Jupyter Notebook environment to code on both.</p>
<p>Has anyone come across this problem? Is there a solution? Is it waiting for a fix update?</p>
|
<p>I had this issue and I think it is related to a broken install. I fixed mine with <code>conda update --all</code>. But you could also try uninstalling numpy and reinstalling it. </p>
|
python|numpy|anaconda|jupyter|miniconda
| 0 |
1,902,519 | 68,122,290 |
Interprocess communication with multiprocessing
|
<p>I support an application in which the user selects one or more files to be loaded. The data in each is then transformed during the load process. I would like to make this concurrent. Profiling shows that the transformations are the bottleneck. The transformations are CPU bound (e.g. numpy array manipulations), so I'm using multiprocessing to maximize clock cycles. I would like to communicate with the processes, or at least receive messages from them, so that I can update the GUI.</p>
<p>To this end, I have the following toy example. I create a "Relay" thread which manages "Liaisons", each of which listens for messages from a related process. When messages are received, they're forwarded to the main thread by a signal. Each process works for a random number of seconds, limited by the spinbox value (default is at most 5 seconds). Workers send a message for every second they work.</p>
<pre class="lang-py prettyprint-override"><code>import os
import sys
import time
import random
import multiprocessing as mp
from PyQt5 import QtCore, QtWidgets
class RelayMessages:
stop = 'stop'
class Liaison(QtCore.QObject):
message = QtCore.pyqtSignal(str)
kill = QtCore.pyqtSignal(int)
def __init__(self, id, pipe):
super().__init__()
self.id = id
self.pipe = pipe
def run(self, item):
self.pipe.send(item)
self.listen()
def listen(self):
self.message.emit(f'Liaison {self.id} listening')
while True:
try:
msg = self.pipe.recv()
if msg == RelayMessages.stop:
self.message.emit(f'Liaison {self.id} stopping listening')
self.kill.emit(self.id)
break
self.message.emit(str(msg))
except EOFError: # nothing left to receive
pass
class Relay(QtCore.QThread):
def __init__(self, connections):
super().__init__()
self.liaisons = []
for id, pipe in enumerate(connections):
liaison = Liaison(id, pipe)
self.liaisons.append(liaison)
def run(self):
for liaison in self.liaisons:
liaison.listen()
def start_workers(self, item):
for liaison in self.liaisons:
liaison.pipe.send(item)
class Worker(mp.Process):
def __init__(self, id, pipe, daemon=True):
super().__init__()
self.daemon = daemon
self.pipe = pipe
self.id = id
def run(self):
while True:
try:
item = self.pipe.recv()
if item:
work_load = random.randrange(item)
self.pipe.send(f"worker ({self.id}): task will take {work_load} seconds")
for i in range(work_load):
time.sleep(1)
self.pipe.send(f"worker ({self.id}): {i}")
self.pipe.send(f"worker ({self.id}): task complete")
self.pipe.send(RelayMessages.stop)
break
except EOFError: # nothing left to receive
pass
class MyDialog(QtWidgets.QDialog):
start_workers = QtCore.pyqtSignal(int)
def __init__(self):
super().__init__()
self.setWindowTitle('MP Concurrency')
self.num_procs = int(mp.cpu_count() / 2) # 4
self.relay = None
self.worker_pool = {}
self.button = QtWidgets.QPushButton(f'Start {self.num_procs} processes')
self.button.pressed.connect(self.to_process)
self.browser = QtWidgets.QTextBrowser()
spin_layout = QtWidgets.QHBoxLayout()
self.spin = QtWidgets.QSpinBox()
self.spin.setValue(5)
spin_layout.addWidget(QtWidgets.QLabel('Max work time:'))
spin_layout.addWidget(self.spin)
layout = QtWidgets.QVBoxLayout()
layout.addLayout(spin_layout)
layout.addWidget(self.button)
layout.addWidget(self.browser)
self.setLayout(layout)
def to_process(self):
master_connections = []
slave_connections = []
print('ID | M_CONN | S_CONN', flush=True)
for id in range(self.num_procs):
m_conn, s_conn = mp.Pipe()
print(id, m_conn.fileno(), s_conn.fileno(), flush=True)
master_connections.append(m_conn)
slave_connections.append(s_conn)
self.relay = Relay(master_connections)
self.start_workers.connect(self.relay.start_workers)
for liaison in self.relay.liaisons:
liaison.message.connect(self.update_ui)
liaison.kill.connect(self.remove_worker)
for id, pipe in enumerate(slave_connections):
worker = Worker(id, pipe)
self.worker_pool[id] = worker
self.relay.start()
self.browser.clear()
load = self.spin.value()
for i in range(self.num_procs):
self.worker_pool[i].start()
print(f'Started worker ({self.worker_pool[i].pid})', flush=True)
self.start_workers.emit(load)
def update_ui(self, text):
self.browser.append(text)
def remove_worker(self, id):
popped = self.worker_pool.pop(id)
print(f'Removed worker: {id}', flush=True)
if not self.worker_pool:
self.browser.append('Processes complete')
def closeEvent(self, event):
        for id, worker in self.worker_pool.items():
            print(f'Terminating worker {id}', flush=True)
            worker.terminate()
if self.relay:
print('Quitting relay thread', flush=True)
self.relay.quit()
event.accept()
if __name__ == '__main__':
app = QtWidgets.QApplication(sys.argv)
dialog = MyDialog()
dialog.show()
sys.exit(app.exec_())
</code></pre>
<p>The problem is, messages are displayed in sequence by workers (worker 1, worker 2, etc.) and only when the previous worker has completed. For example, if the first worker's task takes 2 seconds and the second worker's task is 4 seconds, the messages of the first worker are printed before those of the second worker. When the second worker's messages are printed, those which were backlogged while the first worker completed are immediately displayed.</p>
<p>Is there a way to make my implementation display messages as they come in? Is there (for goodness's sake) a better way to implement multiprocessing in PyQt/PySide? I've opted for Pipes because, although Pool.apply_async is a bit cleaner, I couldn't find a way to have the Pool process communicate back to the main process. Maybe I should use a QThreadPool and have a separate thread associated with each process?</p>
|
<p>It looks like using a separate thread for each process produces the expected result. The original version serialized because <code>Relay.run</code> called each <code>Liaison.listen()</code> in turn on a single thread, and <code>listen()</code> blocks in <code>pipe.recv()</code> until its worker sends the stop message.</p>
<p><strong>Original Multiprocess Answer</strong></p>
<pre class="lang-py prettyprint-override"><code>import os
import sys
import time
import random
import multiprocessing as mp
from PyQt5 import QtCore, QtWidgets
def trap_exc_during_debug(*args):
# when app raises uncaught exception, print info
print(args, flush=True)
# install exception hook: without this, uncaught exception would cause application to exit
sys.excepthook = trap_exc_during_debug
class CommandMessages:
start = 'start'
stop = 'stop'
class ProcessWorker(mp.Process):
def __init__(self, id, pipe, work_load, daemon=True):
super().__init__()
self.daemon = daemon
self.pipe = pipe
self.id = id
self.work_load = work_load
print(f'Created worker {self.id} with work_load {self.work_load}', flush=True)
def run(self):
self.pipe.send(f"Worker {self.id} in ({os.getpid()})")
while True:
try:
item = self.pipe.recv()
if item == CommandMessages.start:
self.pipe.send(f"worker ({self.id}): task will take {self.work_load} seconds")
for i in range(self.work_load):
time.sleep(1)
self.pipe.send(f"worker ({self.id}): {i}")
self.pipe.send(f"worker ({self.id}): task complete")
self.pipe.send(CommandMessages.stop)
break
except EOFError: # nothing left to receive
pass
class ThreadWorkerSignals(QtCore.QObject):
done = QtCore.pyqtSignal(int) # worker id
message = QtCore.pyqtSignal(str)
class ThreadWorker(QtCore.QRunnable):
def __init__(self, id, max_load):
super().__init__()
self.signals = ThreadWorkerSignals()
self.id = id
self.max_load = max_load
self._abort = False
def run(self):
thread_name = QtCore.QThread.currentThread().objectName()
thread_id = int(QtCore.QThread.currentThreadId()) # cast to int() to get Id, otherwise it's sip object
self.signals.message.emit(f'Running ThreadWorker {self.id} from thread "{thread_name}" (#{thread_id})')
work_load = random.randrange(self.max_load)
self.signals.message.emit(f'ThreadWorker #{self.id} work_load is {work_load}')
m_conn, s_conn = mp.Pipe()
self.pipe = m_conn
self.process_worker = ProcessWorker(self.id, s_conn, work_load)
self.signals.message.emit(f'ThreadWorker {self.id}: starting self.process_worker...')
self.process_worker.start()
self.pipe.send(CommandMessages.start)
self.listen()
def listen(self):
self.signals.message.emit(f'ThreadWorker {self.id} listening')
while True:
try:
msg = self.pipe.recv()
if msg == CommandMessages.stop:
self.signals.message.emit(f'ThreadWorker {self.id}: closing process_worker...')
self.process_worker.join(2)
self.process_worker.terminate()
print(f'ThreadWorker {self.id}: process_worker closed', flush=True)
self.signals.message.emit(f'ThreadWorker {self.id}: process_worker closed')
self.signals.done.emit(self.id)
break
self.signals.message.emit(str(msg))
except EOFError: # nothing left to receive
pass
def abort(self):
self.signals.message.emit(f'ThreadWorker #{self.id} notified to abort')
self._abort = True
class MyDialog(QtWidgets.QDialog):
abort = QtCore.pyqtSignal()
def __init__(self):
super().__init__()
self.start_time = None
self.setWindowTitle('MP Concurrency with QThreadPool')
self.resize(600, 400)
self.num_procs = int(mp.cpu_count() / 2) # 4
self._thread_pool = QtCore.QThreadPool.globalInstance()
QtCore.QThread.currentThread().setObjectName('main')
self.threads = []
self.start_button = QtWidgets.QPushButton(f'Start {self.num_procs} processes')
self.start_button.pressed.connect(self.start_processes)
self.abort_button = QtWidgets.QPushButton(f'Abort')
self.abort_button.pressed.connect(self.on_abort)
self.abort_button.setEnabled(False)
self.log = QtWidgets.QTextBrowser()
spin_layout = QtWidgets.QHBoxLayout()
self.spin = QtWidgets.QSpinBox()
self.spin.setValue(5)
spin_layout.addWidget(QtWidgets.QLabel('Max work time:'))
spin_layout.addWidget(self.spin)
layout = QtWidgets.QVBoxLayout()
layout.addLayout(spin_layout)
layout.addWidget(self.start_button)
layout.addWidget(self.abort_button)
layout.addWidget(self.log)
self.setLayout(layout)
def start_processes(self):
self.log.clear()
self.thread_count = self.num_procs
self.threads = []
self._threads_completed = 0
max_load = self.spin.value()
self.start_time = time.time()
for idx in range(self.thread_count):
thread_worker = ThreadWorker(idx, max_load)
self.threads.append(thread_worker)
thread_worker.signals.done.connect(self.on_done)
thread_worker.signals.message.connect(self.on_message)
self.abort.connect(thread_worker.abort)
self._thread_pool.start(thread_worker)
self.start_button.setEnabled(False)
self.abort_button.setEnabled(True)
def on_message(self, text):
self.log.append(str(text))
def on_done(self, id):
self.log.append(f'ThreadWorker {id} is done')
self._threads_completed += 1
if self._threads_completed == self.thread_count:
self.log.append('No more workers active')
self.log.append(f'Elapsed time: {time.time() - self.start_time}')
self.start_button.setEnabled(True)
self.abort_button.setEnabled(False)
@QtCore.pyqtSlot()
def on_abort(self):
self.abort.emit()
self.log.append('Asking each thread worker to abort')
done = self._thread_pool.waitForDone(10000)
if not done:
self.log.append('WARNING: COULD NOT CLOSE THREADS')
else:
self.log.append('All threads exited')
self.log.append(f'Elapsed time: {time.time() - self.start_time}')
def closeEvent(self, event):
self.abort.emit()
done = self._thread_pool.waitForDone(5000)
if not done:
print('Threads still open!. Open that task manager!', flush=True)
else:
print('All threads exited', flush=True)
event.accept()
if __name__ == '__main__':
app = QtWidgets.QApplication(sys.argv)
dialog = MyDialog()
dialog.show()
sys.exit(app.exec_())
</code></pre>
<p><strong>Multiprocess vs Threading</strong></p>
<p>I happened to have a "pure" threading example lying around which was close enough that I figured I could compare the two approaches. The threads and processes are set to the max, which may or may not be ideal. There's a global containing a set of large numbers. Those numbers are passed to each process or thread and used to do menial work. I was surprised to see that the multiprocessing approach wins, though in hindsight the GIL predicts it: the work loop is CPU-bound pure Python, so the threads serialize on the interpreter lock while the processes run in parallel:</p>
<pre><code>Multiprocess time: 34.82416486740112
Threaded time: 57.59582781791687
</code></pre>
<p>This was fun, in an odd sort of way. Maybe someone else will find it useful or improve upon it. (The abort/cleanup process was not thoroughly put together).</p>
<pre class="lang-py prettyprint-override"><code>import os
import sys
import time
import multiprocessing as mp
from PyQt5 import QtCore, QtWidgets
WORK_LOAD = [123456779, 98765554, 7666111, 966325, 978798, 65465, 447733331, 94613697]
def trap_exc_during_debug(*args):
# when app raises uncaught exception, print info
print(args, flush=True)
# install exception hook: without this, uncaught exception would cause application to exit
sys.excepthook = trap_exc_during_debug
class CommandMessages:
start = 'start'
stop = 'stop'
class ProcessWorker(mp.Process):
def __init__(self, id, pipe, work_load, daemon=True):
super().__init__()
self.daemon = daemon
self.pipe = pipe
self.id = id
self.work_load = work_load
print(f'Created worker {self.id} with work_load {self.work_load}', flush=True)
def run(self):
        self.pipe.send(f"ProcessWorker {self.id} in ({os.getpid()})")
while True:
try:
item = self.pipe.recv()
if item == CommandMessages.start:
self.pipe.send(f"worker ({self.id}): starting task")
lst = []
for i in range(self.work_load):
lst.append('x')
self.pipe.send(f"worker ({self.id}): task complete")
self.pipe.send(CommandMessages.stop)
break
except EOFError: # nothing left to receive
pass
class ThreadWorkerSignals(QtCore.QObject):
done = QtCore.pyqtSignal(int) # worker id
message = QtCore.pyqtSignal(str)
class ProcessThreadWorker(QtCore.QRunnable):
def __init__(self, id, work_load):
super().__init__()
self.signals = ThreadWorkerSignals()
self.id = id
self.work_load = work_load
self._abort = False
def run(self):
thread_name = QtCore.QThread.currentThread().objectName()
thread_id = int(QtCore.QThread.currentThreadId()) # cast to int() to get Id, otherwise it's sip object
self.signals.message.emit(f'Running ProcessThreadWorker {self.id} from thread "{thread_name}" (#{thread_id})')
self.signals.message.emit(f'ProcessThreadWorker #{self.id} work_load is {self.work_load}')
m_conn, s_conn = mp.Pipe()
self.pipe = m_conn
self.process_worker = ProcessWorker(self.id, s_conn, self.work_load)
self.signals.message.emit(f'ProcessThreadWorker {self.id}: starting self.process_worker...')
self.process_worker.start()
self.pipe.send(CommandMessages.start)
self.listen()
def listen(self):
self.signals.message.emit(f'ProcessThreadWorker {self.id} listening')
while True:
try:
msg = self.pipe.recv()
if msg == CommandMessages.stop:
self.signals.message.emit(f'ProcessThreadWorker {self.id}: closing process_worker...')
self.process_worker.join(2)
self.process_worker.terminate()
print(f'ProcessThreadWorker {self.id}: process_worker closed', flush=True)
self.signals.message.emit(f'ProcessThreadWorker {self.id}: process_worker closed')
self.signals.done.emit(self.id)
break
self.signals.message.emit(str(msg))
except EOFError: # nothing left to receive
pass
def abort(self):
self.signals.message.emit(f'ProcessThreadWorker #{self.id} notified to abort')
self._abort = True
class PureThreadWorker(QtCore.QRunnable):
def __init__(self, id, work_load):
super().__init__()
self.signals = ThreadWorkerSignals()
self.id = id
self.work_load = work_load
self._abort = False
def run(self):
thread_name = QtCore.QThread.currentThread().objectName()
thread_id = int(QtCore.QThread.currentThreadId()) # cast to int() to get Id, otherwise it's sip object
        self.signals.message.emit(f'Running PureThreadWorker {self.id} from thread "{thread_name}" (#{thread_id})')
self.signals.message.emit(f'PureThreadWorker {self.id} work_load is {self.work_load}')
self.signals.message.emit(f"PureThreadWorker {self.id}: starting task")
lst = []
for i in range(self.work_load):
lst.append('x')
self.signals.message.emit(f"PureThreadWorker {self.id}: task complete")
self.signals.done.emit(self.id)
def abort(self):
self.signals.message.emit(f'PureThreadWorker #{self.id} notified to abort')
self._abort = True
class MyDialog(QtWidgets.QDialog):
abort = QtCore.pyqtSignal()
def __init__(self):
super().__init__()
self.start_time = None
self.setWindowTitle('MP Concurrency with QThreadPool')
self.resize(600, 400)
self.num_procs = mp.cpu_count()
self._thread_pool = QtCore.QThreadPool.globalInstance()
QtCore.QThread.currentThread().setObjectName('main')
self.threads = []
self.button_start_processes = QtWidgets.QPushButton(f'Start {self.num_procs} processes')
self.button_start_processes.pressed.connect(self.start_processes)
self.button_start_threads = QtWidgets.QPushButton()
self.button_start_threads.clicked.connect(self.start_threads)
self.button_start_threads.setText(f"Start {self._thread_pool.maxThreadCount()} threads")
self.abort_button = QtWidgets.QPushButton(f'Abort')
self.abort_button.pressed.connect(self.on_abort)
self.abort_button.setEnabled(False)
self.log = QtWidgets.QTextBrowser()
layout = QtWidgets.QVBoxLayout()
layout.addWidget(self.button_start_processes)
layout.addWidget(self.button_start_threads)
layout.addWidget(self.abort_button)
layout.addWidget(self.log)
self.setLayout(layout)
def start_processes(self):
self.log.clear()
self.thread_count = self.num_procs
self.threads = []
self._threads_completed = 0
self.log.append(f'Max procs: {self.num_procs}')
self.start_time = time.time()
for idx in range(self.thread_count):
thread_worker = ProcessThreadWorker(idx, WORK_LOAD[idx])
self.threads.append(thread_worker)
thread_worker.signals.done.connect(self.on_process_done)
thread_worker.signals.message.connect(self.on_message)
self.abort.connect(thread_worker.abort)
self._thread_pool.start(thread_worker)
self.button_start_processes.setEnabled(False)
self.abort_button.setEnabled(True)
def start_threads(self):
self.log.clear()
self.pure_workers_done = 0
self.pure_workers = []
self.worker_count = self._thread_pool.maxThreadCount()
self.log.append(f'Max threads: {self._thread_pool.maxThreadCount()}')
self.start_time = time.time()
for idx in range(self.worker_count):
worker = PureThreadWorker(idx, WORK_LOAD[idx])
self.pure_workers.append(worker)
# get progress messages from worker:
worker.signals.done.connect(self.on_pure_done)
worker.signals.message.connect(self.log.append)
# control worker:
self.abort.connect(worker.abort)
self._thread_pool.start(worker)
self.button_start_processes.setEnabled(False)
self.abort_button.setEnabled(True)
def on_message(self, text):
self.log.append(str(text))
def on_process_done(self, id):
self.log.append(f'ProcessThreadWorker {id} is done')
self._threads_completed += 1
if self._threads_completed == self.thread_count:
self.log.append('No more workers active')
self.log.append(f'Elapsed time: {time.time() - self.start_time}')
print(f'Elapsed time: {time.time() - self.start_time}', flush=True)
self.button_start_processes.setEnabled(True)
self.abort_button.setEnabled(False)
def on_pure_done(self, id):
self.log.append(f'PureThreadWorker {id} is done')
self.pure_workers_done += 1
if self.pure_workers_done == self.worker_count:
self.log.append('No more pure workers active')
self.log.append(f'Elapsed time: {time.time() - self.start_time}')
print(f'Elapsed time: {time.time() - self.start_time}', flush=True)
self.button_start_processes.setEnabled(True)
self.abort_button.setEnabled(False)
@QtCore.pyqtSlot()
def on_abort(self):
self.abort.emit()
self.log.append('Asking each thread worker to abort')
done = self._thread_pool.waitForDone(10000)
if not done:
self.log.append('WARNING: COULD NOT CLOSE THREADS')
else:
self.log.append('All threads exited')
self.log.append(f'Elapsed time: {time.time() - self.start_time}')
print(f'Elapsed time: {time.time() - self.start_time}', flush=True)
def closeEvent(self, event):
self.abort.emit()
done = self._thread_pool.waitForDone(5000)
if not done:
print('Threads still open!. Open that task manager!', flush=True)
else:
print('All threads exited', flush=True)
event.accept()
if __name__ == '__main__':
app = QtWidgets.QApplication(sys.argv)
dialog = MyDialog()
dialog.show()
sys.exit(app.exec_())
</code></pre>
|
multithreading|pyqt|multiprocessing|python-multiprocessing|pyside
| 0 |
1,902,520 | 32,364,499 |
Truncating too long varchar when inserting to MySQL via SQLAlchemy
|
<p>I am inserting data to MySQL via SQLAlchemy models. Recently, this app is running against MySQL configured with <code>STRICT_TRANS_TABLES</code> and app fails occasionally because of <em>Data too long for column</em> error.</p>
<p>I know that I can disable <strong>strict</strong> sql_mode for my session (like here <a href="https://stackoverflow.com/questions/18459184/mysql-too-long-varchar-truncation-error-setting">MySQL too long varchar truncation/error setting</a>),</p>
<p>but I was curious if SQLAlchemy can enforce max String() length for column data. Documentation says, the <code>String()</code> length is for <code>CREATE TABLE</code> only. My question:</p>
<ol>
<li>Is it possible to enforce max length (truncate too long strings) in SQLAlchemy? </li>
<li>Can I set it <strong>for individual columns</strong> or for all columns in all tables/database only?</li>
</ol>
|
<p>If you would like to enforce the max length by automatically truncating values on the Python/SQLAlchemy side, I think that using <a href="http://docs.sqlalchemy.org/en/rel_1_0/orm/mapped_attributes.html#simple-validators" rel="noreferrer">Simple Validators</a> is the easiest way to achieve this:</p>
<pre><code>class MyTable(Base):
__tablename__ = 'my_table'
id = Column(Integer, primary_key=True)
code = Column(String(4))
name = Column(String(10))
@validates('code', 'name')
def validate_code(self, key, value):
max_len = getattr(self.__class__, key).prop.columns[0].type.length
if value and len(value) > max_len:
return value[:max_len]
return value
</code></pre>
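<p>The truncation happens at attribute-assignment time, before any SQL is emitted, so the core rule can be sanity-checked without a database (a sketch of the validator's logic with the column length inlined):</p>

```python
def truncate(value, max_len):
    # Same rule the @validates hook applies, with the column length inlined
    if value and len(value) > max_len:
        return value[:max_len]
    return value
```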
|
python|mysql|sqlalchemy
| 11 |
1,902,521 | 32,156,703 |
Cannot print unicode string
|
<p>I'm working with a dbf database and Armenian letters. The DBF encoding was unknown, so I've created a letter map to decode the received string. Now I have a valid Unicode string, but I cannot print it out because of this error: </p>
<blockquote>
<p>UnicodeEncodeError: 'charmap' codec can't encode characters in position 0-5: character maps to </p>
</blockquote>
<p>What I have tried so far:</p>
<pre><code>print u'%s' %str ## Returns mentioned error
print repr(str) ## Returns string in this form u'\u054c\u0561\u0586\u0561\u0575\u0565\u056c
</code></pre>
<p>How to fix it?</p>
|
<p>Try encoding the string to UTF-8 before printing; the <code>UnicodeEncodeError</code> comes from the console's charmap codec, which cannot represent the Armenian characters:</p>
<pre><code>newStr = str.encode("utf-8")
print newStr
</code></pre>
<p>P.S. I had this problem with another language and was able to view the letters when I wrote them to a file.</p>
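<p>In isolation, the encode step produces UTF-8 bytes and sidesteps the console's charmap codec entirely (Python 3 spelling shown; on Python 2 the string would be a <code>u''</code> literal and the <code>unicode</code> type):</p>

```python
s = '\u054c\u0561\u0586\u0561\u0575\u0565\u056c'  # the Armenian string from the question
data = s.encode('utf-8')       # bytes -- safe to write to a file or a UTF-8 terminal
round_trip = data.decode('utf-8')
```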
|
python|unicode|python-unicode
| 1 |
1,902,522 | 28,325,525 |
python Gtk.PrintOperation print a pdf
|
<p>Until now i was using a variant of this code to print a pdf that i create with pisa.</p>
<p>That is taken from pygtk faq:</p>
<pre><code>import gtk
import gtkunixprint
def print_cb(printjob, data, errormsg):
if errormsg:
print('Error occurred while printing:\n%s' % errormsg)
filename = 'the_pdf_file_to_be_printed.pdf'
pud = gtkunixprint.PrintUnixDialog()
response = pud.run()
if response == gtk.RESPONSE_OK:
printer = pud.get_selected_printer()
settings = pud.get_settings()
setup = pud.get_page_setup()
printjob = gtkunixprint.PrintJob('Printing %s' % filename, printer, settings, setup)
printjob.set_source_file(filename)
printjob.send(print_cb)
pud.destroy()
</code></pre>
<p>Now I am porting to Gtk3 with PyGObject and I can't solve the problem.
I found that Gtk.PrintOperation is the way, but I can't relate a PrintOperation to a PrintJob or figure out how to pass the file to print.<br>
Thanks </p>
|
<p>Here's an example, I hope you find it useful</p>
<pre><code>#!/usr/bin/env python
import os
import sys
from gi.repository import GLib, Gtk, Poppler
class PrintingApp:
def __init__(self, file_uri):
self.operation = Gtk.PrintOperation()
self.operation.connect('begin-print', self.begin_print, None)
self.operation.connect('draw-page', self.draw_page, None)
self.doc = Poppler.Document.new_from_file(file_uri)
def begin_print(self, operation, print_ctx, print_data):
operation.set_n_pages(self.doc.get_n_pages())
def draw_page(self, operation, print_ctx, page_num, print_data):
cr = print_ctx.get_cairo_context()
page = self.doc.get_page(page_num)
page.render(cr)
def run(self, parent=None):
result = self.operation.run(Gtk.PrintOperationAction.PRINT_DIALOG,
parent)
if result == Gtk.PrintOperationResult.ERROR:
message = self.operation.get_error()
dialog = Gtk.MessageDialog(parent,
0,
Gtk.MessageType.ERROR,
Gtk.ButtonsType.CLOSE,
message)
dialog.run()
dialog.destroy()
Gtk.main_quit()
def main():
if len(sys.argv) != 2:
print "%s FILE" % sys.argv[0]
sys.exit(1)
file_uri = GLib.filename_to_uri(os.path.abspath(sys.argv[1]))
main_window = Gtk.OffscreenWindow()
app = PrintingApp(file_uri)
GLib.idle_add(app.run, main_window)
Gtk.main()
if __name__ == '__main__':
main()
</code></pre>
|
python|gtk3|pyobject
| 5 |
1,902,523 | 28,258,813 |
passing arguments to a cURL shell script file
|
<p>I do not have much experience with shell or python scripts so I am looking for some help on how I can accomplish this. </p>
<p>Goal:</p>
<p>Pass arguments to a shell or python script file that will be used to perform either a cURL Post request or a python post request. </p>
<p>Let's say I go the python route and the filename is api.py</p>
<pre><code>import json,httplib
connection = httplib.HTTPSConnection('api.example.com', 443)
connection.connect()
connection.request('POST', '/message', json.dumps({
"where": {
"devicePlatform": "andriod"
},
"data": {
"body": "Test message!",
"subject": "Test subject"
}
}), {
"X-Application-Id": "XXXXXXXXX",
"X-API-Key": "XXXXXXXX",
"Content-Type": "application/json"
})
result = json.loads(connection.getresponse().read())
print result
</code></pre>
<p>How would I go about passing in arguments for the body and subject values, and how would that look via the command line?</p>
<p>Thanks</p>
|
<p>Try using argparse to parse command line arguments</p>
<pre><code>from argparse import ArgumentParser
import json
import httplib
parser = ArgumentParser()
parser.add_argument("-s", "--subject", help="Subject data", required=True)
parser.add_argument("-b", "--body", help="Body data", required=True)
args = parser.parse_args()
connection = httplib.HTTPSConnection('api.example.com', 443)
connection.connect()
connection.request('POST', '/message', json.dumps({
"where": {
"devicePlatform": "andriod"
},
"data": {
"body": args.body,
"subject": args.subject,
}
...
</code></pre>
<p>On the CLI it would look like</p>
<pre><code>python script.py -b "Body" -s "Subject"
</code></pre>
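<p>For the shell route the question also mentions, here is a hedged sketch (the endpoint and headers are the placeholders from the question): positional parameters <code>$1</code> and <code>$2</code> feed the JSON payload, and the <code>curl</code> line is commented out so the script runs without the API:</p>

```shell
#!/bin/sh
# Usage: ./api.sh "Test message!" "Test subject"
BODY="${1:-Test message!}"
SUBJECT="${2:-Test subject}"

# Build the JSON payload with the arguments substituted in
PAYLOAD=$(printf '{"where":{"devicePlatform":"andriod"},"data":{"body":"%s","subject":"%s"}}' \
    "$BODY" "$SUBJECT")
echo "$PAYLOAD"

# curl -X POST https://api.example.com/message \
#   -H "X-Application-Id: XXXXXXXXX" -H "X-API-Key: XXXXXXXX" \
#   -H "Content-Type: application/json" -d "$PAYLOAD"
```

<p>Note this naive <code>printf</code> substitution assumes the body and subject contain no characters that need JSON escaping (quotes, backslashes); for arbitrary input, build the payload with a JSON-aware tool instead.</p>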
|
python|shell|curl|command-line-arguments
| 1 |
1,902,524 | 32,700,797 |
Saving a cross-validation trained model in Scikit
|
<p>I have trained a model in <code>scikit-learn</code> using <code>Cross-Validation</code> and <code>Naive Bayes</code> classifier. How can I persist this model to later run against new instances?</p>
<p>Here is simply what I have, I can get the <code>CV</code> scores but I don't know how to have access to the trained model</p>
<pre><code>gnb = GaussianNB()
scores = cross_validation.cross_val_score(gnb, data_numpy[0],data_numpy[1], cv=10)
</code></pre>
|
<p><code>cross_val_score</code> doesn't change your estimator, and it will not return a fitted estimator. It just returns the cross-validation scores of the estimator.</p>
<p>To fit your estimator, call <code>fit</code> on it explicitly with the provided dataset.
To save (serialize) it, you can use pickle:</p>
<pre><code># To fit your estimator
gnb.fit(data_numpy[0], data_numpy[1])
# To serialize
import pickle
with open('our_estimator.pkl', 'wb') as fid:
pickle.dump(gnb, fid)
# To deserialize estimator later
with open('our_estimator.pkl', 'rb') as fid:
gnb = pickle.load(fid)
</code></pre>
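<p>If you'd rather not touch the filesystem (for example in tests), the same dump/load round trip works in memory. A sketch with a stand-in class (<code>DummyEstimator</code> is illustrative only, not part of scikit-learn; any picklable fitted estimator behaves the same way):</p>

```python
import io
import pickle

class DummyEstimator(object):
    """Stand-in for GaussianNB; any picklable fitted estimator works the same."""
    def fit(self, X, y):
        self.mean_ = float(sum(y)) / len(y)
        return self

est = DummyEstimator().fit([[0], [1]], [1.0, 3.0])

buf = io.BytesIO()
pickle.dump(est, buf)        # serialize the fitted estimator
buf.seek(0)
restored = pickle.load(buf)  # deserialize it later

print(restored.mean_)  # 2.0
```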
|
python|scikit-learn|pickle|cross-validation
| 16 |
1,902,525 | 34,797,101 |
how test all the possible outcomes of a formula in Python?
|
<p>Hi, I have written a formula and I wanted to know how I can test all the outcomes of the formula for all real numbers (or a specific range). Also, I wanted to know if I could plot the outcomes with <code>matplotlib</code>.
My formula is <code>x = freq / t * 2</code>.</p>
<p><code>X</code> is the output <code>freq</code> is the frequency change and <code>t</code> is time. <code>freq</code> can be between <code>75 to 300</code> and <code>t</code> is mostly from <code>0 to 5</code></p>
<p>I had written this code in Python but I have to change it for every possibility of the variables. </p>
<p>I use python 3.4 btw</p>
<pre><code>freq = -80
t = 5
x = freq / t * 2
print (x)
</code></pre>
<p>Edit: I wrote the code for intuition.</p>
|
<p>First, there are either zero, one, or infinitely many real numbers within a range. You cannot test a function over any non-trivial range of real numbers, although you could generate a mathematical proof that the function will work over that range. That pedantry aside, what you want is nested loops:</p>
<pre><code>freq = 75.0
while freq <= 500.0:
t = 0.5
while t <= 5.0:
x = freq / t * 2
        print('%f\t%f\t%f' % (freq, t, x))
t += 0.5
freq += 25.0
</code></pre>
<p>Note that t cannot be exactly zero.</p>
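<p>Since the question targets Python 3.4, the same sweep can be written with <code>print()</code> as a function; using integer loop counters also sidesteps floating-point drift in the loop conditions:</p>

```python
def sweep():
    rows = []
    for freq in range(75, 501, 25):          # freq = 75, 100, ..., 500
        for half_steps in range(1, 11):      # t = 0.5, 1.0, ..., 5.0
            t = half_steps * 0.5
            x = freq / t * 2
            rows.append((freq, t, x))
            print('%g\t%g\t%g' % (freq, t, x))
    return rows

rows = sweep()
```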
|
python|matplotlib
| 3 |
1,902,526 | 27,175,874 |
Remove new line \n reading from CSV
|
<p>I have a CSV file which looks like:</p>
<pre><code>Name1,1,2,3
Name2,1,2,3
Name3,1,2,3
</code></pre>
<p>I need to read it into a 2D list line by line. The code I have written almost does the job; however, I am having problems removing the new line characters <code>'\n'</code> at the end of the third index.</p>
<pre><code> score=[]
for eachLine in file:
student = eachLine.split(',')
score.append(student)
print(score)
</code></pre>
<p>The output currently looks like: </p>
<pre><code>[['name1', '1', '2', '3\n'], ['name2', '1', '2', '3\n'],
</code></pre>
<p>I need it to look like:</p>
<pre><code>[['name1', '1', '2', '3'], ['name2', '1', '2', '3'],
</code></pre>
|
<p>Simply call <a href="https://docs.python.org/2/library/stdtypes.html#str.strip" rel="nofollow"><code>str.strip</code></a> on each line as you process them:</p>
<pre><code>score=[]
for eachLine in file:
student = eachLine.strip().split(',')
score.append(student)
print(score)
</code></pre>
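<p>Since the file is CSV, the <code>csv</code> module sidesteps the issue entirely: its reader handles the line endings for you. A small self-contained sketch (<code>io.StringIO</code> stands in for the open file object):</p>

```python
import csv
import io

# io.StringIO stands in for the open file object here
data = io.StringIO('Name1,1,2,3\nName2,1,2,3\n')
score = [row for row in csv.reader(data)]
print(score)  # [['Name1', '1', '2', '3'], ['Name2', '1', '2', '3']]
```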
|
python|list|csv
| 3 |
1,902,527 | 23,190,156 |
Pandas GroupBy Mean of Large DataSet in CSV
|
<p>A common SQLism is "Select A, mean(X) from table group by A" and I would like to replicate this in pandas. Suppose that the data is stored in something like a CSV file and is too big to be loaded into memory.</p>
<p>If the CSV could fit in memory a simple two-liner would suffice:</p>
<pre><code>data=pandas.read_csv("report.csv")
mean=data.groupby(data.A).mean()
</code></pre>
<p>When the CSV cannot be read into memory one might try:</p>
<pre><code>chunks=pandas.read_csv("report.csv",chunksize=whatever)
cmeans=pandas.concat([chunk.groupby(chunk.A).mean() for chunk in chunks])
badMeans=cmeans.groupby(cmeans.A).mean()
</code></pre>
<p>Except that the resulting cmeans table contains repeated entries for each distinct value of A, one for each appearance of that value of A in distinct chunks (since read_csv's chunksize knows nothing about the grouping fields). As a result the final badMeans table has the wrong answer... it needs to compute a weighted average mean.</p>
<p>So a working approach seems to be something like:</p>
<pre><code>final=pandas.DataFrame({"A":[],"mean":[],"cnt":[]})
for chunk in chunks:
t=chunk.groupby(chunk.A).sum()
c=chunk.groupby(chunk.A).count()
cmean=pandas.DataFrame({"tot":t,"cnt":c}).reset_index()
    joined=pandas.concat([final, cmean])
    final=joined.groupby(joined.A).sum().reset_index()
mean=final.tot/final.cnt
</code></pre>
<p>Am I missing something? This seems insanely complicated... I would rather write a for loop that processes a CSV line by line than deal with this. There has to be a better way.</p>
|
<p>I think you could do something like the following which seems a bit simpler to me. I made the following data:</p>
<pre><code>id,val
A,2
A,5
B,4
A,2
C,9
A,7
B,6
B,1
B,2
C,4
C,4
A,6
A,9
A,10
A,11
C,12
A,4
A,4
B,6
B,5
C,7
C,8
B,9
B,10
B,11
A,20
</code></pre>
<p>I'll do chunks of 5:</p>
<pre><code>chunks = pd.read_csv("foo.csv",chunksize=5)
pieces = [x.groupby('id')['val'].agg(['sum','count']) for x in chunks]
agg = pd.concat(pieces).groupby(level=0).sum()
print agg['sum']/agg['count']
id
A 7.272727
B 6.000000
C 7.333333
</code></pre>
<p>Compared to the non-chunk version:</p>
<pre><code>df = pd.read_csv('foo.csv')
print df.groupby('id')['val'].mean()
id
A 7.272727
B 6.000000
C 7.333333
</code></pre>
|
python|pandas
| 9 |
1,902,528 | 23,262,039 |
String in Python 3.4
|
<p>I have a question about how to return True for strings. Can anyone help me with how to prompt for user input in Python 3.4 and answer this question?</p>
<p>Write a class/function to return True if 2 input strings are anagram to each other. string1 is an anagram of string2 if string2 can be obtained by rearranging the characters in string1. </p>
<pre><code>Example:
string1 = 'smart'
string2 = 'marts'
result: True
string1 = 'secure'
string2 = 'rescue'
result: True
</code></pre>
|
<p>Perhaps something along the lines of (warning: untested code):</p>
<pre><code>def isAnagram(string1, string2):
if sorted(list(string1)) == sorted(list(string2)):
return True
else:
return False
</code></pre>
<p>Admittedly there are more concise ways of doing this; however, this is particularly easy to understand in my view.</p>
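<p>For reference, a more concise version is possible because <code>sorted()</code> accepts a string directly, so the <code>list()</code> call and the explicit <code>if/else</code> are unnecessary:</p>

```python
def is_anagram(s1, s2):
    # sorted() on a string returns a sorted list of its characters,
    # so two anagrams produce identical lists
    return sorted(s1) == sorted(s2)

print(is_anagram('smart', 'marts'))    # True
print(is_anagram('secure', 'rescue'))  # True
print(is_anagram('abc', 'abd'))        # False
```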
|
python|string|python-3.4
| 0 |
1,902,529 | 8,108,539 |
Trac spawning for nginx deployment without using tracd
|
<p>I am trying to run Trac on nginx.</p>
<p>There is a simple solution that consists of running the tracd server, but I'm trying to avoid that. It doesn't support unix sockets.</p>
<p>Instead, I'm trying to use <a href="http://pypi.python.org/pypi/Spawning" rel="nofollow">Spawning</a> that should be able to launch any WSGI application.</p>
<p>But I don't know how to use it. After <a href="http://trac.edgewall.org/wiki/TracInstall#cgi-bin" rel="nofollow">deployment</a>, I have my <code>cgi-bin</code> directory with <code>trac.wsgi</code> in it, but I don't know how to launch it using Spawning.</p>
<p>It doesn't accept a file name as an argument; I have to provide the module and the application names, like <code>spawning my_module.my_wsgi_app</code>. But how do I do it with trac.wsgi?</p>
|
<p>Reading the Spawning docs, I saw that it receives on the command line, as the first parameter, the <em>dotted name</em> of your application's WSGI object. Specifically for Trac, the WSGI object is defined at <code>trac.web.main.dispatch_request</code> <a href="http://trac.edgewall.org/browser//trunk/trac/web/main.py#L326" rel="nofollow">[1]</a>. Try passing this to Spawning.</p>
<p>But remember that Trac needs some environment variables in order to run correctly, namely: <code>TRAC_ENV</code>, pointing to your Trac environment, and <code>PYTHON_EGG_CACHE</code>, where Python will extract any loaded egg files.</p>
<p>Since Spawning does not receive a file as the first argument, you won't need <code>trac.wsgi</code>. </p>
<p>You can try this, running directly from you shell.</p>
<pre><code>$ TRAC_ENV=/path/to/your/trac-env PYTHON_EGG_CACHE=/tmp/.egg-cache spawning trac.web.main.dispatch_request
</code></pre>
<p>Good luck!</p>
|
python|wsgi|trac|spawning
| 2 |
1,902,530 | 8,139,219 |
How to set up django with a postgres database in a specific tablespace?
|
<p>What you'd expect:</p>
<p>settings.py</p>
<pre><code>DATABASES = {
'default': {
'NAME': 'db_name',
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'USER': 'user',
'PASSWORD': 'password',
'HOST': 'host',
'PORT': '5432',
'DEFAULT_TABLESPACE': 'tablespace-name',
}
}
</code></pre>
<p>When I use DEFAULT_TABLESPACE, I would expect that the access will be granted using this default tablespace. But it doesn't matter what I use there. Also, if I explicitly use db_tablespace in a models Meta class, it doesn't do anything as far I can tell.</p>
<p>I've tried different users as well, but even user postgres does not work. If I define db_table = "tablespace.tablename", it also does not work.</p>
<p>The SQL that will work:</p>
<pre><code>select count(*) from schemaname.tablename
</code></pre>
<p>I made a backup of the database and restored it locally, without creating the tablespace. It imports well and ignores the tablespace. Then it just works.</p>
<p>How do I configure Django with postgres with tablespaces? Or schemaname?? Actually I don't understand anything of this, so I hope you guys can help me out.</p>
<p>See also: <a href="https://docs.djangoproject.com/en/dev/topics/db/tablespaces/" rel="nofollow">https://docs.djangoproject.com/en/dev/topics/db/tablespaces/</a></p>
<p>Using Django 1.3.1, postgresql 8.4, psycopg2</p>
<p>Update:</p>
<p>It seems that the "default schema" is not picked up when using tablespaces. I tried out some tests and here are the results using ./manage.py dbshell:</p>
<p>This does work on my database without tablespaces, but does not work on the database with tablespaces:</p>
<pre><code>select * from parameter;
</code></pre>
<p>This does work on both databases:</p>
<pre><code>select * from tablespace.parameter;
</code></pre>
<p>Unfortunately, the option "DATABASE_SCHEMA" in the db settings does not work, neither does "db_schema" in the models Meta object. So the problem is a bit clearer now. Does anybody have a solution to this?</p>
|
<p>A schema and a tablespace are two different things.</p>
<p>A <a href="http://www.postgresql.org/docs/current/static/ddl-schemas.html" rel="noreferrer">schema</a> is essentially a namespace. A <a href="http://www.postgresql.org/docs/current/static/manage-ag-tablespaces.html" rel="noreferrer">tablespace</a> is a place in the filesystem. Schemas are logical things; tablespaces are physical things.</p>
<p>If you're trying to group together a bunch of related tables (and possibly other named objects) in a namespace, you want a schema.</p>
<p>If you're trying to increase speed by putting indexes on a fast SSD disk--a different disk than the tables, views, etc.--you want a tablespace.</p>
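<p>If a schema (namespace) is what you actually need, one common approach is to point the connection's <code>search_path</code> at it. A hedged sketch of the settings (<code>schemaname</code> is a placeholder, and whether <code>OPTIONS</code> is honored this way depends on your Django and psycopg2 versions, so verify against your setup):</p>

```python
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'db_name',
        'USER': 'user',
        'PASSWORD': 'password',
        'HOST': 'host',
        'PORT': '5432',
        'OPTIONS': {
            # passed straight through to libpq: unqualified table names
            # will resolve against schemaname first, then public
            'options': '-c search_path=schemaname,public',
        },
    }
}
```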
|
python|django|postgresql|settings|psycopg2
| 5 |
1,902,531 | 651,949 |
Django: Access primary key in models.filefield(upload_to) location
|
<p>I'd like to save my files using the primary key of the entry.</p>
<p>Here is my code:</p>
<pre><code>def get_nzb_filename(instance, filename):
if not instance.pk:
instance.save() # Does not work.
name_slug = re.sub('[^a-zA-Z0-9]', '-', instance.name).strip('-').lower()
name_slug = re.sub('[-]+', '-', name_slug)
return u'files/%s_%s.nzb' % (instance.pk, name_slug)
class File(models.Model):
nzb = models.FileField(upload_to=get_nzb_filename)
name = models.CharField(max_length=256)
</code></pre>
<p>I know the first time an object is saved the primary key isn't available, so I'm willing to take the extra hit to save the object just to get the primary key, and then continue on. </p>
<p>The above code doesn't work. It throws the following error:</p>
<pre><code>maximum recursion depth exceeded while calling a Python object
</code></pre>
<p>I'm assuming this is an infinite loop. Calling the <code>save</code> method would call the <code>get_nzb_filename</code> method, which would again call the <code>save</code> method, and so on.</p>
<p>I'm using the latest version of the Django trunk.</p>
<p>How can I get the primary key so I can use it to save my uploaded files?</p>
<hr>
<p><strong>Update @muhuk:</strong></p>
<p>I like your solution. Can you help me implement it? I've updated my code to the following and the error is <code>'File' object has no attribute 'create'</code>. Perhaps I'm using what you've written out of context?</p>
<pre><code>def create_with_pk(self):
instance = self.create()
instance.save()
return instance
def get_nzb_filename(instance, filename):
if not instance.pk:
create_with_pk(instance)
name_slug = re.sub('[^a-zA-Z0-9]', '-', instance.name).strip('-').lower()
name_slug = re.sub('[-]+', '-', name_slug)
return u'files/%s_%s.nzb' % (instance.pk, name_slug)
class File(models.Model):
nzb = models.FileField(upload_to=get_nzb_filename, blank=True, null=True)
name = models.CharField(max_length=256)
</code></pre>
<p>Instead of enforcing the required field in my model I'll do it in my Form class. No problem.</p>
|
<p>It seems you'll need to pre-generate your <code>File</code> models with empty file fields first. Then pick up one and save it with the given file object.</p>
<p>You can have a custom manager method like this;</p>
<pre><code>def create_with_pk(self):
instance = self.create()
instance.save() # probably this line is unneeded
return instance
</code></pre>
<p>But this will be troublesome if either of your fields is required. Because you are initially creating a null object, you can't enforce required fields on the model level.</p>
<h3>EDIT</h3>
<p><code>create_with_pk</code> is supposed to be a <a href="http://docs.djangoproject.com/en/dev/topics/db/managers/#adding-extra-manager-methods" rel="nofollow noreferrer">custom manager method</a>, in your code it is just a regular method. Hence <code>self</code> is meaningless. It is all properly <a href="http://docs.djangoproject.com/en/dev/topics/db/managers/" rel="nofollow noreferrer">documented</a> with examples.</p>
|
python|django|file-io
| 4 |
1,902,532 | 42,097,247 |
Can someone help me spot logical error Python
|
<p>I made a simple stack program, but I am able to enter more elements than the size given to the stack by user input. <em>The <code>if len(self.stk)==self.size</code> statement doesn't seem to work and I can't understand why.</em>
This is my code:</p>
<pre><code>class stack():
def __init__(self,size):
self.stk=[]
self.size = size
def push(self,item):
if len(self.stk)==self.size:
            print "OVERFLOW!"
else:
self.stk.append(item)
print "Len of stack is ",len(self.stk)
print "Size is ",self.size
def pop(self):
if self.isempty()==True:
print "UNDERFLOW"
else:
del self.stk[-1]
def isempty(self):
if self.stk==[]:
return True
else:
return False
def display(self):
print "\nNow going to show you the stack: \n "
for i in range(len(self.stk)-1,0,-1):
print self.stk[i]
size = raw_input("Enter size of stack you want: ")
stak = stack(size)
while True:
choice = int(raw_input("\nEnter \n 1.To push an item \n 2.Pop an item \n 3. Display Stack \n 4.Quit: "))
if choice == 1:
elem = raw_input("Enter the element you want to enter: ")
stak.push(elem)
if choice == 2:
stak.pop()
if choice == 3:
stak.display()
if choice==4:
break
</code></pre>
|
<p>You need to typecast the input to a number</p>
<pre><code>size = int(raw_input("Enter size of stack you want: "))
</code></pre>
<p>Alternatively, since you're working in Python2.7:</p>
<pre><code>size = input("Enter size of stack you want: ")
</code></pre>
<p>Would work, as it will evaluate what they give it and (if given a whole number) will return an integer.</p>
<hr>
<p>In the future, one thing I may suggest:</p>
<p>Add the following function and you can then inspect all of your variables at any point:</p>
<pre><code>def debug(*args):
print('Variable debugger output:')
for arg in args:
if arg in ["__builtins__", "inspect", "debug"]:
continue # skip these variables
try:
var = eval(arg)
print('{0}: {1} {2}'.format(arg, type(var), var))
# Handles variables not passed as strings
except (TypeError, NameError):
print('{0}: {1}'.format(type(arg), arg))
</code></pre>
<p>with</p>
<pre><code>debug(*dir()) # All variables in scope
</code></pre>
<p>or </p>
<pre><code>debug('size', 'self.size') # Specific variables - Note it accepts both a list
debug(size, self.size) # of strings or a list of variables
</code></pre>
<p>which will give you something like:</p>
<pre><code>debug: <type 'function'> <function debug at 0x7fa046a4f938>
in2: <type 'int'> 5
normin: <type 'int'> 23 # normal input
rawin: <type 'str'> 23 # raw input
sys: <type 'module'> <module 'sys' (built-in)>
test: <type 'str'> Hello
testfloat: <type 'float'> 5.234
</code></pre>
<p><em>Note</em>: debug won't show itself if you use the code above... this shows what a function looks like from the debugger.</p>
|
python|if-statement
| 3 |
1,902,533 | 42,112,128 |
Store large amount of data and access it when needed
|
<p>I need to store some data which is a float number and the epoch of the current time (frequency is 1 second but might be higher in the future). I need to do this for long periods of time, maybe even years, and then I need to access it by date. Ex: Let's assume I've stored 1 year's worth of data; now I need to know the value of this float number when the epoch was <code>X</code>.</p>
<p>My first thought was to divide this data in directories, ex <code>Dir Year-2017, Month-02, Day-08</code>. And I was wondering what would be the best approach. Which language is more suited for this kind of thing? I mainly use <a href="/questions/tagged/python" class="post-tag" title="show questions tagged 'python'" rel="tag">python</a> and <a href="/questions/tagged/c%23" class="post-tag" title="show questions tagged 'c#'" rel="tag">c#</a>, but I can code in other languages too.</p>
<p>I'm not looking for code snippets directly ( although if you want to put them they're highly appreciated ) but where can I learn to do so?</p>
|
<p>An easy way to do this would be to store the data in an SQL database such as SQLite, with each entry containing your float, your epoch number and any other metadata you want to include. </p>
<p>Python's built-in <code>sqlite3</code> module is an easy API to create and add to databases. Loads of tutorials online too ;)</p>
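<p>A minimal <code>sqlite3</code> sketch of that idea (an in-memory database keeps the example self-contained; point <code>connect()</code> at a file path for persistence). Making the epoch the primary key means lookups by timestamp are indexed:</p>

```python
import sqlite3

# ':memory:' keeps the demo self-contained; use a file path for persistence
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE readings (epoch INTEGER PRIMARY KEY, value REAL)')
conn.executemany('INSERT INTO readings VALUES (?, ?)',
                 [(1486500000, 3.14), (1486500001, 2.71)])
conn.commit()

# Look up the value recorded at a given epoch X
row = conn.execute('SELECT value FROM readings WHERE epoch = ?',
                   (1486500000,)).fetchone()
print(row[0])  # 3.14
```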
|
c#|python|c|storage
| 1 |
1,902,534 | 47,234,542 |
I keep receiving the unhashable error when trying to deal with Turn Order?
|
<p>So I am trying to create a small turn-based program, and I came up with some code to determine the players' turn order just fine. However, when I tried to determine the enemies' order based upon the players', I ran into a problem. It keeps returning the unhashable error to me. Is there a way to deal with that so I get my desired result? If you have a better solution for my problem, please let me know. Here is the code:</p>
<pre><code>Pm1Order = random.randint(1,8)
Pm2Order = random.randint(1,8)
if Pm2Order == Pm1Order:
Pm2Order = Pm1Order - 1
if Pm2Order == 0:
Pm2Order = Pm1Order + 1
Pm3Order = random.randint(1,8)
if Pm3Order == Pm2Order:
Pm3Order = Pm1Order - 2
if Pm3Order == 0:
Pm3Order = Pm1Order + 2
Pm4Order = random.randint(1,8)
if Pm4Order == Pm3Order:
Pm4Order = Pm1Order - 3
if Pm4Order == 0:
Pm4Order = Pm1Order + 3
print("The turn orders for your party is " +str(Pm1Order)+ " for the knight, " +str(Pm2Order)+ " for the theif, " +str(Pm3Order)+ " for the doctor, and " +str(Pm4Order)+ " for the priest.")
PlayerOrder = set([Pm1Order , Pm2Order , Pm3Order , Pm4Order])
print(str(PlayerOrder))
FullOrder = set([1, 2, 3, 4, 5, 6, 7, 8])
EnemyOrder = FullOrder.difference(PlayerOrder)
EnemyOrder2 = FullOrder.difference(PlayerOrder)
print(str(EnemyOrder))
Enemy1Order = random.sample(set([EnemyOrder]), 1)
print(Enemy1Order)
</code></pre>
<hr>
<pre><code>Traceback (most recent call last):
File "foo.py", line xx, in <module>
Enemy1Order = random.sample(set([EnemyOrder]), 1)
TypeError: unhashable type: 'set'
</code></pre>
|
<p><a href="https://docs.python.org/3/library/stdtypes.html#frozenset.difference" rel="nofollow noreferrer">set.difference()</a> returns a set so <code>EnemyOrder</code> is already a set. Try</p>
<pre><code>Enemy1Order = random.sample(EnemyOrder, 1)
</code></pre>
<p>Or </p>
<pre><code>Enemy1Order = random.sample(set(*[EnemyOrder]), 1)
</code></pre>
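<p>One caveat worth knowing: on newer Python versions (3.11+), <code>random.sample</code> requires a sequence rather than a set, so converting first keeps the code future-proof:</p>

```python
import random

enemy_order = {1, 4, 6, 8}
# sorted() turns the set into a sequence; required on Python 3.11+
pick = random.sample(sorted(enemy_order), 1)[0]
print(pick in enemy_order)  # True
```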
|
python
| 0 |
1,902,535 | 47,376,705 |
Python Write terminal output to file then read file
|
<p>I'm having trouble writing the terminal output (all print statements) to a textfile then reading that textfile in the same script. I keep getting an I/O error if I close the program to finish writing to the file and then re-open the file to read it, or no output for the final print(file_contents) statement.</p>
<p>Here's my code:</p>
<pre><code>import sys
filename = open("/Users/xxx/documents/python/dump.txt", 'r+')
filename.truncate()
sys.stdout = filename
print('Hello')
print('Testing')
filename.close()
with open("/Users/xxx/documents/python/dump.txt") as file:
data = file.read()
print(file)
</code></pre>
<p>Any suggestions would be great! I'm planning to use this to print output's from some longer scripts to a slack channel.</p>
<p>Thanks!</p>
|
<p>The error you get is:<br>
<code>IOError: [Errno 2] No such file or directory: '/Users/xxx/documents/python/dump.txt'</code><br>
because file open mode <code>r+</code> does not create a file. Use mode <code>w</code> instead, like this:</p>
<p>You also have to reattach <code>stdout</code> to the console again to print to the console.</p>
<pre><code>import sys
filename = open('/Users/xxx/documents/python/dump.txt', 'w')
# filename.truncate() # mode 'w' truncates file
sys.stdout = filename
print('Hello')
print('Testing')
filename.close()
# reattach stdout to console
sys.stdout = sys.__stdout__
with open('/Users/xxx/documents/python/dump.txt') as file:
data = file.read()
print(data)
</code></pre>
<p>will print: </p>
<pre><code>Hello
Testing
</code></pre>
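<p>On Python 3, <code>contextlib.redirect_stdout</code> (available since 3.4) handles the save-and-restore for you, so <code>sys.stdout</code> cannot be left pointing at a closed file if something raises:</p>

```python
import contextlib
import io

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    print('Hello')
    print('Testing')
# stdout is automatically restored here

captured = buf.getvalue()
print(captured, end='')
```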
|
python|file
| 0 |
1,902,536 | 47,168,026 |
How to make a windows standalone installer for my test.py
|
<p>For learning purposes, I wrote a test.py which simply prints out "Hello World". Now the problem is that I want something like an installer for Windows so the program installs and executes after being installed.</p>
<p>test.py</p>
<pre><code>def hello():
    print('HELLO WORLD')

hello()
</code></pre>
<p>So do I have to change the code in the program a little bit or something else?</p>
|
<p>You do not need an installer. You can create a standalone executable that runs on open with a packaging tool. </p>
<p>I mostly use PyInstaller. If you have pip already installed, you can execute the following command: <code>pip install pyinstaller</code></p>
<p>Make sure pip is in your Path. If not, google for <code>add pip to path</code></p>
<p>Then, navigate into the folder with the test.py and open a new cmd.</p>
<p>Now type <code>pyinstaller test.py --onefile</code> and press enter. </p>
<p>It should now create some new folders including one called <code>dist</code>. In there you can find the standalone exe. </p>
|
python|windows|installation
| 0 |
1,902,537 | 70,886,428 |
tkinter returns "no such file or directory" error
|
<p>This is my code:</p>
<pre><code>from tkinter import *
root = Tk()
logoImage = PhotoImage(file='logo.png')
logoLabel = Label(root, image=logoImage, bg='dodgerblue3')
logoLabel.grid(row=0, column=0)
</code></pre>
<p>The image is in the same directory as the project. In fact, when I open it with cmd or Python it works. But when I use VS Code or Turn it into .exe file, it shows this error:</p>
<pre><code>Traceback (most recent call last):
File "c:\Users\Simo\OneDrive\Python\Scientific calculator\calc.py", line 180, in <module>
logoImage = PhotoImage(file='logo.png')
File "C:\Users\Simo\AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py", line 4093, in __init__
Image.__init__(self, 'photo', name, cnf, master, **kw)
File "C:\Users\Simo\AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py", line 4038, in __init__
self.tk.call(('image', 'create', imgtype, name,) + options)
_tkinter.TclError: couldn't open "logo.png": no such file or directory
</code></pre>
<p>How can I fix this?</p>
|
<p>The path <code>logo.png</code> is resolved relative to the current working directory, which is wherever VS Code (or the packaged .exe) was launched from, not necessarily the script's folder. Either run the script from its own directory, or build an absolute path from the script's location.</p>
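<p>A sketch of the absolute-path approach (this assumes <code>logo.png</code> sits next to the script):</p>

```python
import os

# __file__ is this script's own path, so the result does not depend on
# the directory VS Code (or the packaged .exe) was launched from
base_dir = os.path.dirname(os.path.abspath(__file__))
logo_path = os.path.join(base_dir, 'logo.png')
print(logo_path)
```

<p>Note that a PyInstaller one-file build extracts bundled data elsewhere (<code>sys._MEIPASS</code>), which is a separate topic.</p>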
|
python|python-3.x|tkinter
| -1 |
1,902,538 | 58,262,302 |
How to change the order of x-axis labels in a seaborn lineplot?
|
<p>I have the following data frame:</p>
<pre><code>df1_Relax_Pulse_Melted.head()
Task Pulse Time Pulse Measure
0 Language PRE_RELAX_PULSE 90.0
1 Language PRE_RELAX_PULSE 94.0
2 Language PRE_RELAX_PULSE 52.0
3 Language PRE_RELAX_PULSE 70.0
4 Language PRE_RELAX_PULSE 84.0
</code></pre>
<p>When I attempt a barplot of this data, I get the following:</p>
<pre><code>ax = sns.barplot(x="Pulse Time", y="Pulse Measure", hue="Task", data=df1_Relax_Pulse_Melted)
</code></pre>
<p><a href="https://i.stack.imgur.com/zoPXX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/zoPXX.png" alt="enter image description here"></a></p>
<p>However, when I try to use a line plot, I get the following:</p>
<pre><code>ax = sns.lineplot(x="Pulse Time", y="Pulse Measure", hue="Task", data=df1_Relax_Pulse_Melted)
</code></pre>
<p><a href="https://i.stack.imgur.com/2xmoU.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2xmoU.png" alt="enter image description here"></a></p>
<p>As can be seen in the image, the order of the x-axis labels is in a different order from the barplot. Is it possible to change the order of the x-axis in the lineplot? I tried to use the "order" function within the sns.lineplot as follows:</p>
<pre><code>ax = sns.lineplot(x="Pulse Time", y="Pulse Measure", hue="Task", data=df1_Relax_Pulse_Melted, order='PRE_RELAX_PULSE','30S_RELAX_PULSE','POST_RELAX_PULSE')
</code></pre>
<p>However, that produces an error.</p>
<p>``AttributeError: 'Line2D' object has no property 'order'</p>
|
<p><code>sort=False</code> will do it.</p>
<p>As the <a href="https://seaborn.pydata.org/generated/seaborn.lineplot.html" rel="noreferrer">seaborn doc</a> states:</p>
<blockquote>
<p><strong>sort</strong> : boolean, optional</p>
<p>If True, the data will be sorted by the x and y variables, otherwise
lines will connect points in the order they appear in the dataset.</p>
</blockquote>
<p>The x variables are sorted in their "string-order":</p>
<pre><code>'30s_RELAX_PULSE' < 'POST_RELAX_PULSE' < 'PRE_RELAX_PULSE'
</code></pre>
<p>which is not wanted.</p>
<p>The wanted behaviour is the aggregation by the x-values. This is done with the <code>estimator='mean'</code> (default). Every "Pulse Measure"(y) is grouped by the "Pulse Time" (x) and then the mean is calculated.</p>
<pre><code>ax = sns.lineplot(x="Pulse Time", y="Pulse Measure", hue="Task",sort= False, data=df1_Relax_Pulse_Melted)
</code></pre>
<p>My Plot with other sample data:</p>
<p><a href="https://i.stack.imgur.com/MKycz.png" rel="noreferrer"><img src="https://i.stack.imgur.com/MKycz.png" alt="correct order of the x variables"></a></p>
|
python|python-3.x|pandas|seaborn
| 12 |
1,902,539 | 58,332,597 |
Python3.x: Reading multiple columns in a csv file in a for-loop
|
<p>I have numerous csv files in a folder that I want to analyze using python. Each csv file contains one set of y and multiple x values. For example, one csv file looks like the following. (all the csv files have the same number of x and y)</p>
<pre><code>y x1 x2 x3
1 0.5 0.1 2.0
2 1.0 0.2 3.0
3 2.5 0.7 0.5
4 0.4 1.2 0.1
5 0.2 4.0 9.0
6 1.2 5.0 0.2
</code></pre>
<p>I am reading all the csv files in the folder and attempted to read the y and x:</p>
<pre><code>my_xlist=[x1,x2,x3] #these are string.
for file in myfiles:
my_y = []
my_x = []
with open('my_folder/'+file, 'r') as f:
data = csv.reader(f,delimiter=',')
for row in data:
my_y.append(float(row[0]))
my_x.append(float(row[1:len(my_xlist)])) #not sure about this line
</code></pre>
<p>Currently, it reads the x values by row. </p>
<p>The desired outcome is the followings (x values read by columns):</p>
<pre><code>my_y=[1,2,3,4,5,6]
my_x=([0.1,1.0,2.5,0.4,0.2,1.2], [0.1,0.2,0.7,1.2,4.0,5.0], [2.0,3.0,0.5,0.1,9.0,0.2])
</code></pre>
<p>Could any of you help me reading all the columns for x?</p>
|
<p>You can try pandas to read the csv files. <code>pandas.read_csv</code> creates a DataFrame object, which is very helpful for further operations on the data. The following code may help you:</p>
<pre><code>import pandas as pd
import os
list_of_dataframe = []
for file in os.listdir("path/to/directory"):
temp = pd.read_csv(str("path/to/directory/" + file))
list_of_dataframe.append(temp)
df = pd.concat(list_of_dataframe) # This dataframe contains all information
</code></pre>
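<p>If you then want exactly the <code>my_y</code>/<code>my_x</code> layout from the question (x values grouped by column), you can pull the columns out of each DataFrame. A minimal sketch, assuming each file has a header row <code>y,x1,x2,x3</code> with comma delimiters (an in-memory string stands in for a file here):</p>

```python
import io
import pandas as pd

# Stand-in for one csv file (the question's sample data, comma-separated)
csv_text = """y,x1,x2,x3
1,0.5,0.1,2.0
2,1.0,0.2,3.0
3,2.5,0.7,0.5
4,0.4,1.2,0.1
5,0.2,4.0,9.0
6,1.2,5.0,0.2
"""
df = pd.read_csv(io.StringIO(csv_text))

my_y = df["y"].tolist()
# One list per x column, in column order
my_x = tuple(df[col].tolist() for col in ["x1", "x2", "x3"])

print(my_y)     # [1, 2, 3, 4, 5, 6]
print(my_x[0])  # [0.5, 1.0, 2.5, 0.4, 0.2, 1.2]
```

The same extraction can be run on each DataFrame inside the loop over files.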
|
python-3.x|for-loop
| 0 |
1,902,540 | 37,979,138 |
Display merged values using a condition, while including NaN values
|
<p>I have a resultant dataset (called Result) after merging two datasets. I want to display only those rows from Result where company_name1 is equal to company_name2. The output is stored in Result1.This can be done as follows:</p>
<ul>
<li>Result1=Result[Result.company_name1==Result.company_name2] </li>
</ul>
<p>The above statement works fine. The problem is - </p>
<p>There are few rows in Result where either company_name1 or company_name2 is NaN, and those rows won't become part of Result1. My requirement is to over pass the condition in all such cases and also include those rows in Result1.
How do I incorporate that condition?</p>
|
<p>try this:</p>
<pre><code>Result1=Result[(Result.company_name1==Result.company_name2) | \
(pd.isnull(Result.company_name1) | pd.isnull(Result.company_name2))]
</code></pre>
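<p>A quick demonstration with toy data (the column names are from the question; the values are made up):</p>

```python
import pandas as pd

# Hypothetical merged result: row 1 has mismatched names, rows 2 and 3 have NaN
Result = pd.DataFrame({
    "company_name1": ["A", "B", None, "D"],
    "company_name2": ["A", "X", "C", None],
})

# Keep rows where the names match OR either side is NaN
mask = (Result.company_name1 == Result.company_name2) | \
       pd.isnull(Result.company_name1) | pd.isnull(Result.company_name2)
Result1 = Result[mask]

print(Result1.index.tolist())  # [0, 2, 3]
```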
|
python|pandas
| 0 |
1,902,541 | 37,631,679 |
Add data to 3d array from genfromtxt
|
<p>I am trying to generate a 3d NumPy array with the structure [[string, [ID, (int,int,string, string, string)]]]. I use a dictionary now for this data that looks like target[chr][ID]=(a,b,c,d,e,f). I cannot figure out how to get the data from the genfromtxt object into the array. This is what I have done.</p>
<pre><code>data = numpy.genfromtxt(self.target_bed_file, delimiter='\t', usecols=(0,1,2,3,4,5,6), dtype=None)
print(data[0])
#(b'chr1', 3004040, 3005439, 1, b'L1_Rod', b'LINE', b'L1')
target_array = numpy.zeros([21, len(data)], dtype=None)
target_array = target_array.reshape(target_array.shape+(1,))
print(target_array.shape)
#(21, 425372, 1)
target_array[data[0:,0],data[3:,1],:]=(data[1:,2])
</code></pre>
<p>I have tried a few variations of that last line but I always get "IndexError: too many indices for array."</p>
<p>Edit: Based on hpaulj answer I have done this</p>
<pre><code>dt1 = numpy.dtype([('start', int), ('stop', int), ('family', 'S10'), ('type', 'S10'), ('class', 'S10')])
dt = numpy.dtype([('chr', 'S3'), ('ID', int), ('f1', dt1)])
data = numpy.genfromtxt(self.target_bed_file, delimiter='\t', usecols=(0,3,1,2,4,5,6), dtype=dt)
print(data[0])
#(b'chr', 1, (3004040, 3005439, b'L1_Rod', b'LINE', b'L1'))
target_array = numpy.zeros([len(data), 21, 1], dtype=object)
target_array[data['chr'], data['ID'][3:]] = data['f1'][1:]
</code></pre>
<p>Now I get "IndexError: arrays used as indices must be of integer (or boolean) type." I am finding 3d arrays difficult to wrap my head around.</p>
|
<p>Look at the <code>shape</code> and <code>dtype</code> of <code>data</code>. It is not a 2d array.</p>
<pre><code>data[0]
</code></pre>
<p>produces <code>(b'chr1', 3004040, 3005439, 1, b'L1_Rod', b'LINE', b'L1')</code> a structured array record, not a row of a 2d array.</p>
<p>You access 'columns' of such an array by name, e.g. <code>['f0']</code>.</p>
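<p>For example, field access on a small structured array (toy data, not the question's file):</p>

```python
import numpy as np

# One record with three named fields, like a genfromtxt result with dtype=None
data = np.array([(b'chr1', 3004040, 3005439)],
                dtype=[('f0', 'S5'), ('f1', int), ('f2', int)])

print(data['f0'])     # [b'chr1']  -- the whole 'column'
print(data['f1'][0])  # 3004040    -- one element of a field
```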
<p>So your assignment expression should be:</p>
<pre><code>target_array[data['f0'], data['f1'][3:]] = data['f2'][1:]
</code></pre>
<p>Although that will give dimensional errors. For one field you are using all the rows of <code>data</code>, another is all but 3, another all but 1. They should all index the same number of rows.</p>
<p>On a minor point, you don't need to reshape <code>target_array</code> - just include the extra <code>1</code> in the initial definition:</p>
<pre><code>target_array = numpy.zeros([21, len(data),1], dtype=float)
</code></pre>
<p>What's the purpose of that last <code>1</code> dimension? What's the purpose of the <code>21</code> dimension?</p>
<p><code>dtype=None</code> is the same as <code>dtype=float</code>. Where you wanting <code>dtype=object</code>?</p>
<p>You mention wanting an array with structure:</p>
<pre><code>[[string, [ID, (int,int,string, string, string)]]]
</code></pre>
<p>Do you have in mind a real structured array, with a compound <code>dtype</code>, or an object array with do-it-yourself objects in each slot.</p>
<p><a href="https://stackoverflow.com/a/37228678/901925">https://stackoverflow.com/a/37228678/901925</a> - an example of a complex compound dtype.</p>
|
python-3.x|numpy|multidimensional-array
| 0 |
1,902,542 | 67,854,599 |
Issue installing brownie for solidity/python
|
<p>I've been having issues running brownie, so I uninstalled it, then reinstalled it and got the following error:</p>
<blockquote>
<p>Requirement already satisfied: atomicwrites>=1.0 in
c:\python39\lib\site-packages (from pytest==6.2.3->eth-brownie)
(1.4.0) Requirement already satisfied: colorama in
c:\python39\lib\site-packages (from pytest==6.2.3->eth-brownie)
(0.4.4) Requirement already satisfied: pywin32>=223 in
c:\python39\lib\site-packages (from web3==5.18.0->eth-brownie) (301)
WARNING: Ignoring invalid distribution -p
(c:\python39\lib\site-packages) WARNING: Ignoring invalid distribution
-ip (c:\python39\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python39\lib\site-packages) Installing collected
packages: eth-brownie WARNING: Ignoring invalid distribution -p
(c:\python39\lib\site-packages) WARNING: Ignoring invalid distribution
-ip (c:\python39\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python39\lib\site-packages) Successfully installed
eth-brownie-1.14.6 WARNING: Ignoring invalid distribution -p
(c:\python39\lib\site-packages) WARNING: Ignoring invalid distribution
-ip (c:\python39\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python39\lib\site-packages)</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/YlFgm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YlFgm.png" alt="enter image description here" /></a></p>
|
<p>The only way I found to fix this was to uninstall Python and all packages, then reinstall it, making sure it was selected as the path in Control Panel -> System -> Advanced system settings -> Environment Variables -> Path.</p>
<p>If anyone else stumbles upon this while trying to fix Python, my advice is just to reinstall everything Python-related, as something is most likely broken; save your time and start fresh.</p>
|
python|solidity
| 0 |
1,902,543 | 29,857,935 |
Square Root of Floats Error
|
<p>I created a very simple square root program and it would just immediately print the response for <code>elif b > x</code> when using floats. However, if I just used integers it works (but that means I can't find the square root of 9.61, for example). </p>
<p>Here is the program:</p>
<pre><code>x = float(raw_input("What is the number? "))
a = 1.0
b = a*a
while True:
if b == x:
print "The answer is", a
break
elif b > x:
#print "a = ",a
#print "x = ",x
print "That is beyond my computing power. Sorry."
break
elif b < x:
a = a + 0.1
b = a*a
continue
</code></pre>
|
<p>Your problem is checking for <a href="https://docs.python.org/2/tutorial/floatingpoint.html" rel="nofollow"><strong>floating point equality</strong></a>.</p>
<p>The final iteration of your loop is comparing 3.1 * 3.1 against 9.61, and (the floating point representation of...) 3.1 * 3.1 is greater than 9.61, which terminates your loop with "beyond my computing power".</p>
<pre><code>>>> 3.1 * 3.1 == 9.61
False
>>> 3.1 * 3.1 > 9.61
True
>>> 3.1 * 3.1
9.6100000000000001
</code></pre>
<p>If you want to compare floating point numbers like this, check that the difference between them is small enough (<a href="http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm" rel="nofollow"><strong>epsilon</strong></a>), instead of checking for equality.</p>
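<p>For example, a tolerance-based comparison might look like this (a sketch; the tolerance value is a judgment call for your problem):</p>

```python
def is_close(a, b, tol=1e-9):
    """Compare two floats within an absolute tolerance instead of ==."""
    return abs(a - b) < tol

print(is_close(3.1 * 3.1, 9.61))  # True
print(3.1 * 3.1 == 9.61)          # False
```

With this, the loop's <code>b == x</code> check becomes <code>is_close(b, x)</code> and 9.61 is found as expected.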
<p>If you want to explore more numeric methods for root finding, read the Wikipedia article on <a href="http://en.wikipedia.org/wiki/Newton%27s_method" rel="nofollow"><strong>Newton-Raphson method</strong></a>.</p>
<p>(Note: Some rational numbers can be represented by floats, so your loop might be able to find the square root of 3.61, for example.)</p>
|
python|math|floating-point|square-root
| 2 |
1,902,544 | 27,841,326 |
Error while setting the color of a cell
|
<p>I'm trying to use openpyxl to set the color of cell, however I'm getting:</p>
<blockquote>
<p>AttributeError: type object 'Color' has no attribute 'DARKYELLO</p>
</blockquote>
<p>This is the code:</p>
<pre><code>from openpyxl import load_workbook
from openpyxl.styles import Font, Style
from openpyxl import Workbook
from openpyxl.writer.styles import StyleWriter
from openpyxl.styles import Border, Color, Font
# ...
self.bworkSheet["A" + str(self.indexLine)].style.fill.start_color = Color.DARKYELLOW
self.bworkSheet["A" + str(self.indexLine)].value = "Duplicate Photo"
</code></pre>
|
<p>Styles are immutable and cannot be changed once they've been created.</p>
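<p>Instead of mutating the existing style, assign a brand-new fill object to the cell. A sketch, assuming openpyxl 2.x or later (the color is an RGB hex string; there is no built-in <code>DARKYELLOW</code> constant):</p>

```python
from openpyxl import Workbook
from openpyxl.styles import PatternFill

wb = Workbook()
ws = wb.active

# Assign a new PatternFill rather than editing cell.style.fill in place
ws["A1"].fill = PatternFill(start_color="FFFF00",   # yellow
                            end_color="FFFF00",
                            fill_type="solid")
ws["A1"].value = "Duplicate Photo"
```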
|
python|python-2.7|openpyxl
| 1 |
1,902,545 | 72,310,532 |
Commission Split Calculation PowerBI/ Excel/Python/Tableau
|
<p>I need to create a measure from the pictured exported data, where sometimes the Total will be e.g. $1000 from one representative (i.e. Kate Pearson) and at other times it will be split between two people (Kate and Randal, each have $500) or even three.</p>
<p>It is somewhat similar to argmax in Python, but I cannot figure out what calculation to use to extract the data in this manner.</p>
<p>What I want is for each opportunity, a list of totals by either 1 or more people, and if more than one, to list the total along with the person who was responsible.</p>
<p>If anyone is happy to share their logic in python code or even Excel or PowerQuery, that will be a great starting point.<a href="https://i.stack.imgur.com/TnV3F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TnV3F.png" alt="enter image description here" /></a></p>
<p>Example of output I want to achieve:</p>
<pre><code>Totals:
Randal total: 1750
Kate Total: 2100
Kevin Total: 1150
</code></pre>
|
<p>You could do this in Power Query (<code>Home=>Transform</code>)</p>
<p>I assume your above table name is <code>Sales Table</code>. If not make the change in the <code>Source</code> line of the below code.</p>
<p>The algorithm is outlined in the code comments. But also explore the applied steps to see what each step is doing:</p>
<pre><code>let
Source = #"Sales Table",
//Remove unneeded columns
#"Removed Columns" = Table.RemoveColumns(Source,{"OpportunityID", "Total Sales"}),
//Unpivot all the columns and remove the Attribute column
#"Unpivoted Columns" = Table.UnpivotOtherColumns(#"Removed Columns", {}, "Attribute", "Value"),
#"Removed Columns1" = Table.RemoveColumns(#"Unpivoted Columns",{"Attribute"}),
//We now have pairs of Rep and Split in a single column
// So add an index to group the pairs
#"Added Index" = Table.AddIndexColumn(#"Removed Columns1", "Index", 0, 1, Int64.Type),
#"Inserted Integer-Division" = Table.AddColumn(#"Added Index", "Integer-Division", each Number.IntegerDivide([Index], 2), Int64.Type),
#"Removed Columns2" = Table.RemoveColumns(#"Inserted Integer-Division",{"Index"}),
//Aggregate by creating a two column table
#"Grouped Rows" = Table.Group(#"Removed Columns2", {"Integer-Division"}, {
{"all", each [rep=[Value]{0}, Split=[Value]{1}] }}),
#"Expanded all" = Table.ExpandRecordColumn(#"Grouped Rows", "all", {"rep", "Split"}, {"rep", "Split"}),
#"Removed Columns3" = Table.RemoveColumns(#"Expanded all",{"Integer-Division"}),
//Remove the "NA" reps
//then group by Rep and aggregate by Sum
#"Filtered Rows" = Table.SelectRows(#"Removed Columns3", each ([rep] <> "NA")),
#"Grouped Rows1" = Table.Group(#"Filtered Rows", {"rep"}, {{"Rep Total", each List.Sum([Split]), type number}})
in
#"Grouped Rows1"
</code></pre>
<p><a href="https://i.stack.imgur.com/T9noJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T9noJ.png" alt="enter image description here" /></a></p>
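<p>For the Python route the question also mentions, the same melt-and-group logic can be sketched with pandas. The column names and figures below are made-up stand-ins for the screenshot's table (each row holds an opportunity with up to two rep/split pairs, with "NA" marking an empty slot):</p>

```python
import pandas as pd

# Hypothetical stand-in for the screenshot's table
df = pd.DataFrame({
    "OpportunityID": [1, 2, 3],
    "Rep1":   ["Kate",   "Randal", "Kevin"],
    "Split1": [500,      1000,     800],
    "Rep2":   ["Randal", "NA",     "Kate"],
    "Split2": [500,      0,        200],
})

# Stack each rep/split pair into long form, then sum per rep
pairs = pd.concat([
    df[["Rep1", "Split1"]].rename(columns={"Rep1": "rep", "Split1": "split"}),
    df[["Rep2", "Split2"]].rename(columns={"Rep2": "rep", "Split2": "split"}),
])
totals = pairs[pairs["rep"] != "NA"].groupby("rep")["split"].sum()
print(totals.to_dict())  # {'Kate': 700, 'Kevin': 800, 'Randal': 1500}
```

With more than two rep/split pairs per row, the same concat pattern (or <code>pd.wide_to_long</code>) extends naturally.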
|
python|excel|aggregation
| 1 |
1,902,546 | 48,451,503 |
Incompatible shapes in merge layers in Keras
|
<p>I'm using Keras to build a semantic segmentation model. The model is fully convolutional, so while I'm training on specific sized inputs i.e. <code>(224,224,3)</code>, when predicting, the model should be able to take any sized input, right? However, when I try to predict on an image of a different resolution, I get an error about mismatched shapes in my merge layers. Here is the error:</p>
<pre><code>(1, 896, 1200, 3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "predict.py", line 60, in main
n = base_model.predict(x)
File "/home/.local/lib/python2.7/site-packages/keras/engine/training.py", line 1790, in predict
verbose=verbose, steps=steps)
File "/home/.local/lib/python2.7/site-packages/keras/engine/training.py", line 1299, in _predict_loop
batch_outs = f(ins_batch)
File "/home/.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2357, in __call__
**self.session_kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 778, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 982, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1032, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1052, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [1,56,74,512] vs. [1,56,75,512]
[[Node: add_1/add = Add[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](conv2d_3/Relu, block5_conv3/Relu)]]
[[Node: conv2d_14/div/_813 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_157_conv2d_14/div", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op u'add_1/add', defined at:
File "<stdin>", line 1, in <module>
File "predict.py", line 42, in <module>
base_model = models.load_model('mod.h5', custom_objects={'loss':loss})
File "/home/.local/lib/python2.7/site-packages/keras/models.py", line 240, in load_model
model = model_from_config(model_config, custom_objects=custom_objects)
File "/home/.local/lib/python2.7/site-packages/keras/models.py", line 314, in model_from_config
return layer_module.deserialize(config, custom_objects=custom_objects)
File "/home/.local/lib/python2.7/site-packages/keras/layers/__init__.py", line 55, in deserialize
printable_module_name='layer')
File "/home/.local/lib/python2.7/site-packages/keras/utils/generic_utils.py", line 140, in deserialize_keras_object
list(custom_objects.items())))
File "/home/.local/lib/python2.7/site-packages/keras/engine/topology.py", line 2500, in from_config
process_node(layer, node_data)
File "/home/.local/lib/python2.7/site-packages/keras/engine/topology.py", line 2459, in process_node
layer(input_tensors, **kwargs)
File "/home/.local/lib/python2.7/site-packages/keras/engine/topology.py", line 603, in __call__
output = self.call(inputs, **kwargs)
File "/home/.local/lib/python2.7/site-packages/keras/layers/merge.py", line 146, in call
return self._merge_function(inputs)
File "/home/.local/lib/python2.7/site-packages/keras/layers/merge.py", line 210, in _merge_function
output += inputs[i]
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 821, in binary_op_wrapper
return func(x, y, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 73, in add
result = _op_def_lib.apply_op("Add", x=x, y=y, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 768, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2336, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1228, in __init__
self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): Incompatible shapes: [1,56,74,512] vs. [1,56,75,512]
[[Node: add_1/add = Add[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](conv2d_3/Relu, block5_conv3/Relu)]]
[[Node: conv2d_14/div/_813 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_157_conv2d_14/div", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
</code></pre>
<p>My model architecture is the following: I use VGG16 and strip the top layer, and basically put layers in reverse order on top. I also have skip connections between the last convolutional layers of each block. Basically, I'm implementing <a href="https://arxiv.org/pdf/1511.00561.pdf" rel="nofollow noreferrer">SegNet</a>. I don't really understand why I'm getting <code>Incompatible shapes: [1,56,74,512] vs. [1,56,75,512]</code>. I understand that adding an extra connection on to a layer must change its dimensions, but why does Keras's padding not take care of this? </p>
<p>Here is also the code that builds my model:</p>
<pre><code>input_tensor = Input(shape=(None,None,3))
vgg = VGG16(weights='imagenet', include_top=False, input_shape=(None, None,3))
# vgg.summary()
if vgg_train is False:
# Freeze VGG layers
for layer in vgg.layers:
layer.trainable = False
l1_1 = Model.get_layer(vgg, 'block1_conv1')
l1_2 = Model.get_layer(vgg, 'block1_conv2')
l1_p = Model.get_layer(vgg, 'block1_pool')
l2_1 = Model.get_layer(vgg, 'block2_conv1')
l2_2 = Model.get_layer(vgg, 'block2_conv2')
l2_p = Model.get_layer(vgg, 'block2_pool')
l3_1 = Model.get_layer(vgg, 'block3_conv1')
l3_2 = Model.get_layer(vgg, 'block3_conv2')
l3_3 = Model.get_layer(vgg, 'block3_conv3')
l3_p = Model.get_layer(vgg, 'block3_pool')
l4_1 = Model.get_layer(vgg, 'block4_conv1')
l4_2 = Model.get_layer(vgg, 'block4_conv2')
l4_3 = Model.get_layer(vgg, 'block4_conv3')
l4_p = Model.get_layer(vgg, 'block4_pool')
l5_1 = Model.get_layer(vgg, 'block5_conv1')
l5_2 = Model.get_layer(vgg, 'block5_conv2')
l5_3 = Model.get_layer(vgg, 'block5_conv3')
l5_p = Model.get_layer(vgg, 'block5_pool')
#Encoder: Basically re-building VGG layer by layer, because Keras's concat only takes tensors, not layers
x = l1_1(input_tensor)
o1 = l1_2(x)
x = l1_p(o1)
x = l2_1(x)
o2 = l2_2(x)
x = l2_p(o2)
x = l3_1(x)
x = l3_2(x)
o3 = l3_3(x)
x = l3_p(o3)
x = l4_1(x)
x = l4_2(x)
o4 = l4_3(x)
x = l4_p(o4)
x = l5_1(x)
x = l5_2(x)
o5 = l5_3(x)
x = l5_p(o5)
#Decoder layers: VGG architecture in reverse with skip connections and dropout layers
#Block 1
up1 = UpSampling2D()(x)
conv1 = Conv2D(512, 3, activation='relu', padding='same')(up1)
conv1 = Conv2D(512, 3, activation='relu', padding='same')(conv1)
conv1 = Conv2D(512, 3, activation='relu', padding='same')(conv1)
conv1 = add([conv1,o5])
batch1 = BatchNormalization()(conv1)
#Block 2
up2 = UpSampling2D()(batch1)
conv2 = Conv2D(512, 3, activation='relu', padding='same')(up2)
conv2 = Conv2D(512, 3, activation='relu', padding='same')(conv2)
conv2 = Conv2D(512, 3, activation='relu', padding='same')(conv2)
conv2 = add([conv2,o4])
batch2 = BatchNormalization()(conv2)
#Block 3
up3 = UpSampling2D()(batch2)
conv3 = Conv2D(256, 3, activation='relu', padding='same')(up3)
conv3 = Conv2D(256, 3, activation='relu', padding='same')(conv3)
conv3 = Conv2D(256, 3, activation='relu', padding='same')(conv3)
conv3 = add([conv3,o3])
batch3 = BatchNormalization()(conv3)
#Block 4
up4 = UpSampling2D()(batch3)
conv4 = Conv2D(128, 3, activation='relu', padding='same')(up4)
conv4 = Conv2D(128, 3, activation='relu', padding='same')(conv4)
conv4 = add([conv4,o2])
batch4 = BatchNormalization()(conv4)
#Block 5
up5 = UpSampling2D()(batch4)
conv5 = Conv2D(64, 3, activation='relu', padding='same')(up5)
conv5 = Conv2D(64, 3, activation='relu', padding='same')(conv5)
conv5 = add([conv5,o1])
batch5 = BatchNormalization()(conv5)
#Final prediction layer
soft5 = Conv2D(dims, kernel_size=8, strides=8, activation='softmax', padding='same')(batch5)
model = Model(input_tensor,soft5)
model.summary()
return model
</code></pre>
|
<p>Found the solution, in case anyone else runs into this issue. Merge layers need to have the same dimensionality, for obvious reasons. The problem arises when downsampling/upsampling. An image with a width of 115, for example, when downsampled by a factor of 2, is reduced to the ceiling of 57.5, i.e. 58. When upsampling this, the resulting tensor has width 116, which causes problems when trying to merge the 115-width layer with one of width 116. The solution for my case was pretty simple: since all of my training data is the same size, the issue only occurs during inference. At that time, if the image has a dimension not divisible by 32, I just resize and then crop so that it does.</p>
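<p>A minimal sketch of that pre-resize step: compute the nearest multiple of 32 at or below each dimension, then resize/crop to it with your image library of choice (the helper name is mine, not a Keras API):</p>

```python
def snap_to_multiple(size, base=32):
    """Largest multiple of `base` that fits in `size` (a crop target)."""
    return (size // base) * base

# The failing image from the traceback was 896 x 1200
h, w = 896, 1200
print(snap_to_multiple(h), snap_to_multiple(w))  # 896 1184
```

896 is already divisible by 32, but 1200 is not, which is exactly why the skip connections ended up with widths 74 vs. 75.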
|
tensorflow|neural-network|deep-learning|keras|conv-neural-network
| 2 |
1,902,547 | 4,279,560 |
Accessing static java methods in Python through jython
|
<p>I am currently trying to access a static class in java, within python. I import as normal, then I try to get the class instance of the java class. </p>
<pre><code>from com.exmaple.util import Foo
Foo. __class___.run_static_method()
</code></pre>
<p>This doesn't seem to work. suggestions? What am i doing wrong. </p>
|
<p>Try using</p>
<pre><code>Foo.run_static_method()
</code></pre>
|
java|python|jython
| 2 |
1,902,548 | 48,421,712 |
My program is giving more than one result when it should only give one?
|
<p>I have written this scissors paper rock game in python, sometimes it works perfectly but other times it prints more than one result. For example it will say that you tied and then also that you won and sometimes it will also say that you lost. This is not dependent on the actual outcome though and it seems completely random, please help!</p>
<pre><code> while True:
import time
print('Scissors ')
time.sleep(1)
print(' paper')
time.sleep(1)
print(' rock!')
choice = input()
if choice not in ('rock', 'paper', 'scissors'):
print('Invalid Input')
import random
RPS = ['rock', 'paper', 'scissors']#BOT CHOICE
#TIE
TIE = ['The bot also chose ' + choice, 'It is a tie!', 'TIE!']
#WIN
RWIN = ['The bot chose scissors, but you smashed them with your rock!', 'YOU WIN!', 'Your rock beat the bots scissors']
PWIN = ['The bot chose rock, but you smothered it with your paper', 'YOU WIN!', 'Your paper beat the bots rock']
SWIN = ['The bot chose paper but you cut it to bits!', 'YOU WIN', 'Your scissors beat the bots paper!']
#LOSS
RLOSS = ['The bot chose paper and smotherd your rock :(', 'You loose...', 'Nice try but you lost...better luck next time']
PLOSS = ['The bot chose scissors and chopped your paper to bits :(', 'You loose...', 'Nice try but you lost...better luck next time']
SLOSS = ['The bot chose rock and smashed your scissors :(', 'You loose...', 'Nice try but you lost...better luck next time']
#TIE
if choice ==(random.choice(RPS)):
print(random.choice(TIE))
#WIN
#ROCK
if choice =='rock' and (random.choice(RPS)) =='scissors':
print(random.choice(RWIN))
#PAPER
if choice =='paper' and (random.choice(RPS)) =='rock':
print(random.choice(PWIN))
#SCISSORS
if choice =='scissors' and (random.choice(RPS)) =='paper':
print(random.choice(SWIN))
#LOSS
#ROCK
if choice =='rock' and (random.choice(RPS)) =='paper':
print(random.choice(RLOSS))
#PAPER
if choice =='paper' and (random.choice(RPS)) =='scissors':
print(random.choice(PLOSS))
#SCISSORS
if choice =='scissors' and (random.choice(RPS)) =='rock':
print(random.choice(SLOSS))
while True:
time.sleep(1)
print(' ')
print('Again? (y/n) ')
answer = input()
if answer in ('y', 'n'):
break
print('Invalid input')
if answer =='y':
continue
else:
print('Goodbye')
break
</code></pre>
|
<p>You call <code>random.choice(RPS)</code> multiple times, so each call gives a different result, which means your choice can match multiple ifs, or none (you want it to match exactly one).</p>
<p>To fix it, add a variable that stores the result of <code>random.choice(RPS)</code> (called once) and use that variable in all your if statements.</p>
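<p>A sketch of the fix (the user input is hard-coded here so it runs standalone; the key point is that <code>bot_choice</code> is drawn once and reused everywhere):</p>

```python
import random

RPS = ['rock', 'paper', 'scissors']
WINS = {('rock', 'scissors'), ('paper', 'rock'), ('scissors', 'paper')}

def play(choice, bot_choice):
    """Decide the outcome for one fixed pair of choices."""
    if choice == bot_choice:
        return 'tie'
    return 'win' if (choice, bot_choice) in WINS else 'loss'

choice = 'rock'                  # stands in for input()
bot_choice = random.choice(RPS)  # pick ONCE, then reuse
print(play(choice, bot_choice))
```

Because the bot's move is fixed before any comparison, exactly one outcome is printed per round.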
|
python
| 3 |
1,902,549 | 48,412,858 |
words related to a topic (another word) using NTLK
|
<p>I'm working on building my own virtual assistant based on intent<->action mechanism and want to run some NLP on sentences that my users write.</p>
<p>I want to find out words related to a topic(another word) so that I can define an intent, for example:</p>
<p>If a user asks: Will it rain tomorrow? What's the weather today? Is it sunny? Is it going to be windy this afternoon?</p>
<p>I want to be able to say that rain, weather, sunny, sun are related to the intent called weather so that I can communicate with the relevant API and retrieve the requested information.</p>
<p>I'm currently working with Python 3 and NLTK but using synonyms, path to parent and similarity doesn't really do the trick:</p>
<pre><code>wordFromList1 = wn.synsets('weather')[0]
wordFromList2 = wn.synsets('cold')[0]
value = wn.wup_similarity(wordFromList1, wordFromList2)
print(value)
---------------------------
0.1
</code></pre>
<p>You can see that the similarity here for cold and weather is really weak. Any suggestions?</p>
<p>Thanks,</p>
|
<p>Yeah, don't use WordNet for this! It might be (very loosely) correlated with what you want, but it will yield horrendous accuracy and recall unless you engineer the hell out of it. For practical results, you need to look at things like LSA, LDA or word embeddings, something along those lines.</p>
|
python|python-3.x|nlp|artificial-intelligence|nltk
| 0 |
1,902,550 | 51,199,548 |
Python function is returning incorrect data
|
<p>The code below, when referenced in a separate file is returning incorrect data. When given data that does not match the if/else statements, it will loop through the function again, but the variable in the other file (client_type) will still be the incorrect choice. </p>
<p>function:</p>
<pre><code>def create_client():
client_type = input()
if client_type == 'Mickey':
return 'Mickey'
elif client_type == 'Jenny':
return 'Jenny'
elif client_type == 'McElroy':
return 'McElroy'
else:
create_client()
</code></pre>
<p>call to the function:</p>
<pre><code>client_type = functions.create_client()
if client_type == 'Mickey':
client = functions.client(3, 5, 2)
elif client_type == 'Jenny':
client = functions.client(5, 2, 3)
elif client_type == 'McElroy':
client = functions.client(4, 1, 5)
else:
print('Error on choosing client in function create_client.')
</code></pre>
|
<p>Your problem is that when your function recurses, it returns nothing.</p>
<p>You should change</p>
<pre><code>else:
create_client()
</code></pre>
<p>to </p>
<pre><code>else:
return create_client()
</code></pre>
<p>Now, not a direct answer, but you really shouldn't use recursion in this case, it is better with a loop:</p>
<pre><code>def create_client():
while True:
client_type = input()
if client_type == 'Mickey':
return 'Mickey'
elif client_type == 'Jenny':
return 'Jenny'
elif client_type == 'McElroy':
return 'McElroy'
</code></pre>
<p>That won't exhaust the recursive call stack, and saves resources.</p>
<p>I would even go ahead and use a <code>dict</code> instead of a sequence of <code>if</code>/<code>elif</code>:</p>
<pre><code>client_types = {
'Mickey': (3, 5, 2),
'Jenny': (5, 2, 3),
'McElroy': (4, 1, 5),
}
</code></pre>
<p>then you can make your code search the dict and return the correct numbers:</p>
<pre><code>while True:
t = input()
if t in client_types:
break
client = functions.client(*client_types[t])
</code></pre>
|
python|recursion
| 1 |
1,902,551 | 73,822,605 |
TypeError: '>' not supported between instances of 'str' and 'int'
|
<p>I know I’m missing something with this code. Can someone please help me? I’m new to coding and I’ve struggling with this all day. I don’t want to keep emailing my instructor so maybe I can get help from here. I’m trying to get it to run through the if statements with user input and then calculate the amount but I don’t know what I’m missing.<a href="https://i.stack.imgur.com/rXBlp.png" rel="nofollow noreferrer">enter image description here</a></p>
|
<p>You should post code you're asking about as text in your question.</p>
<p>Going over your code with some comments:</p>
<pre><code>print("Welcome") # no issue here, although Python default is single quotes, so 'Welcome'
print = input("Please enter company name:")
</code></pre>
<p>After that last line, <code>print</code> is a variable that has been assigned whatever text was entered by the user. (even if that text consists of digits, it's still going to be a text)</p>
<p>A command like <code>print("You total cost is:")</code> will no longer work at this point, because <code>print</code> is no longer the name of a function, since you redefined it.</p>
<pre><code>num = input("Please enter number of fiber cables requested:")
</code></pre>
<p>This is OK, but again, <code>num</code> has a <em>text</em> value. <code>'123'</code> is not the same as <code>123</code>. You need to convert text into numbers to work with numbers, using something like <code>int(num)</code> or <code>float(num)</code>.</p>
<pre><code>print("You total cost is:")
</code></pre>
<p>The line is fine, but won't work, since you redefined <code>print</code>.</p>
<pre><code>if num > 500:
cost = .5
</code></pre>
<p>This won't work until you turn <code>num</code> into a number, for example:</p>
<pre><code>if int(num) > 500:
...
</code></pre>
<p>Or:</p>
<pre><code>num = int(num)
if num > 500:
...
</code></pre>
<p>Also, note that the default indentation depth for Python is 4 spaces. You would do well to start using that yourself. Your code will work if you don't, but others you have to work with (including future you) will thank you for using standards.</p>
<p>Finally:</p>
<pre><code>print = ("Total cost:, num")
</code></pre>
<p>Not sure what you're trying to do here. But assiging to <code>print</code> doesn't print anything. And the value you're assigning is just the string <code>'Total cost:, num'</code>. If you want to include the value of a variable in a string, you could use an f-string:</p>
<pre><code>print(f"Total cost: {num}")
</code></pre>
<p>Or print them like this:</p>
<pre><code>print("Total cost:", num) # there will be a space between printed values
</code></pre>
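<p>Putting those pieces together, here is a minimal sketch of the pricing logic as a function. The tier boundary (500) and price (0.5) come from your code; the second tier (0.87) is a made-up placeholder, since the real tiers are only visible in your screenshot:</p>

```python
def total_cost(num, price_over_500=0.5, price_otherwise=0.87):
    """Tiered pricing sketch; 0.87 is a hypothetical second tier."""
    cost = price_over_500 if num > 500 else price_otherwise
    return num * cost

company = "Acme"  # stands in for input("Please enter company name: ")
num = 600         # stands in for int(input("...number of cables: "))
print(f"Total cost for {company}: {total_cost(num)}")  # Total cost for Acme: 300.0
```

Note that <code>num</code> is converted to <code>int</code> before the comparison, and <code>print</code> keeps its original meaning because nothing is assigned to that name.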
|
python|new-operator
| 0 |
1,902,552 | 17,364,936 |
HSV Color space help for OpenCV project
|
<p>I am working on detecting the Ball on Table Football. A screenshot of the game is:
<a href="https://www.dropbox.com/s/tsvxqlb358sshob/Screen%20Shot%202013-06-28%20at%2014.24.53.png" rel="nofollow">https://www.dropbox.com/s/tsvxqlb358sshob/Screen%20Shot%202013-06-28%20at%2014.24.53.png</a></p>
<p>I am trying to use the following code:</p>
<pre><code>hsv_min = cv.Scalar (0,0,100)
hsv_max = cv.Scalar (60,4.3,100)
cv.InRangeS (hsvframe, hsv_min, hsv_max, threshpic)
</code></pre>
<p>I am confused about the color range for the ball in HSV color space. Can anyone tell me what the color range should be in HSV scalars?</p>
|
<p>In the C API, it's
H: 0-360, S: 0-1, V: 0-1. Try using these limits.</p>
|
python|opencv|hsv
| 0 |
1,902,553 | 17,520,711 |
2D motion estimation using Python, OpenCV & Kalman filtering
|
<p>I have a set of images, and would like to recursively predict where a bunch of pixels will be in the next image. I am using Python, OpenCV, and believe Kalman filtering may be the way forward, but am struggling on the implementation. For simplicity, the code below opens an image and extracts just one colour channel, in this case the red one.</p>
<p>So far, I am using optical flow to determine the motion between images in X and Y for each pixel. After each iteration, I would like to use the last N iterations, and by using the X/Y motions found each time, calculate the velocity of the pixel, and predict where it will end up in the next frame. The group of pixels I will look at and predict is not specified, but is not relevant for the example. It would just be a Numpy array of (x,y) values.</p>
<p>Any help would be greatly appreciated. Simplified code snippet below:</p>
<pre><code>import numpy as np
import cv2
from PIL import Image
imageNames = ["image1.jpg", "image2.jpg", "image3.jpg", "image4.jpg", "image5.jpg"]
for i in range(len(imageNames) - 1):   # stop before the last image, since we pair i with i+1
    # Load images and extract just one colour channel (e.g., red)
    image1 = Image.open(imageNames[i])
    image2 = Image.open(imageNames[i + 1])
    image1R = np.asarray(image1)[:, :, 0].astype(np.uint8)
    image2R = np.asarray(image2)[:, :, 0].astype(np.uint8)
    # Get optical flow
    flow = cv2.calcOpticalFlowFarneback(image1R, image2R, 0.5, 1, 5, 15, 10, 5, 1)
    change_in_x = flow[:, :, 0]
    change_in_y = flow[:, :, 1]
    # Use previous flows to obtain velocity in x and y
    # For a subset of the image, predict where points will be in the next image
    # Use Kalman filtering?
    # Repeat recursively
</code></pre>
|
<p>I am not sure if I can explain this well here, but I will have a shot. The Kalman filter is nothing but a prediction-measurement (correction) loop. </p>
<p>You have your initial state (position and velocity) after two images:</p>
<pre><code>X0 = [x0 v0]
</code></pre>
<blockquote>
<p>where v0 is the flow between image1 and image2.</p>
<p>and x0 is the position at image2.</p>
</blockquote>
<p>Make an assumption (like constant velocity model). Under constant velocity assumption, you will <strong><em>predict</em></strong> this object will move to X1 = A* X0 where A is found from constant velocity model equations:</p>
<pre><code>x1 = x0 + v0*T
v1 = v0
=> X1 = [x1 v1]
= [1 T ; 0 1] * [x0 v0]
= [1 T ; 0 1] * X0
</code></pre>
<p>T is your sampling time (with cameras, generally the inverse of the frame rate). You need to know the time difference between your images here.</p>
<p>Later, you are going to correct this assumption with the next <strong><em>measurement</em></strong> (load image3 here and obtain v1' from flow of image2 and image3. Also take x1' from image3).</p>
<pre><code>X1' = [x1' v1']
</code></pre>
<p>For a simpler version of KF, find the average point as the estimation, i.e.</p>
<pre><code>~X1 = (X1 + X1')/2.
</code></pre>
<p>If you want to use the exact filter, with the Kalman gain and covariance calculations, I'd say you need to check out the <a href="http://www.eee.metu.edu.tr/~umut/ee793/files/METULecture1.pdf" rel="nofollow">algorithm</a>, page 4. Take R small if your images are accurate enough (it is the sensor noise).</p>
<p>The ~X1 you will find takes you to the start. Replace initial state with ~X1 and go over same procedure.</p>
<p>If you check the <a href="http://docs.opencv.org/modules/video/doc/motion_analysis_and_object_tracking.html" rel="nofollow">opencv doc</a>, the algorithm might already be there for you to use.</p>
<p>If you are not going to use a camera and the OpenCV methods, I would suggest using MATLAB, simply because it is easier to manipulate matrices there.</p>
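<p>To make the loop concrete, here is a minimal NumPy sketch of the predict/correct cycle under the constant-velocity model. The time step <code>T</code>, the initial state, and the measurements below are made-up values, and the simple averaging stands in for the full Kalman-gain correction described above:</p>

```python
import numpy as np

T = 1.0                       # assumed time step between frames
A = np.array([[1.0, T],
              [0.0, 1.0]])    # constant-velocity transition matrix

# Initial state after the first two images: position x0, flow velocity v0
state = np.array([10.0, 2.0])

# Made-up (position, velocity-from-flow) measurements from later frames
measurements = [(12.5, 2.1), (14.4, 1.9), (16.6, 2.0)]

for z in measurements:
    predicted = A @ state                  # predict: X1 = A * X0
    measured = np.array(z)                 # measure: X1' from the next flow
    state = (predicted + measured) / 2.0   # simple correction (average)
    print(state)
```

<p>Replacing the averaging with the proper gain-weighted update gives the full filter; <code>cv2.KalmanFilter</code> implements that predict/correct pair for you.</p>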
|
python|opencv|numpy|motion|kalman-filter
| 1 |
1,902,554 | 17,304,252 |
Does python's recvfrom() queue packets?
|
<p>My impression was that recvfrom() gives you the next packet on the IP and port it is listening on, and that packets are missed when it is not listening. We are having an issue where the explanation could be that packets are queued up for recvfrom(), so that all packets are caught even when recvfrom() is not actively being called.</p>
<p>I could not find definitive documentation on this. Does anybody know for sure whether recvfrom() queues packets when it is not being called?</p>
<p>code example:</p>
<pre><code>s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
mcast_g = socket.inet_aton(group)
mreq = struct.pack('4sL', mcast_g, socket.INADDR_ANY)
s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
s.bind(('', port))
while True:
    data, sender = s.recvfrom(1500)
    # Do stuff
    # Are packets being queued up here?
</code></pre>
|
<p>There is a socket receive buffer in the kernel. <code>recv()</code> and friends read from the buffer, or block while it's empty. If you don't read fast enough, the buffer fills, and UDP datagrams that arrive when the buffer is full are dropped. You can vary the size of the buffer with the socket option <code>SO_RCVBUF.</code></p>
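<p>If you expect bursts while your code is busy between <code>recvfrom()</code> calls, you can ask the kernel for a bigger buffer. A sketch (the 65536 is an arbitrary example size; on Linux the kernel may double the value you set, capped by <code>net.core.rmem_max</code>):</p>

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Request a larger kernel receive buffer to ride out bursts
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65536)
# Read back what the kernel actually granted
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```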
|
python|sockets|networking|listener|recv
| 4 |
1,902,555 | 64,475,020 |
Difference of Columns in Two Files Python
|
<p>I have two files similar to file1 and file2, and I'm trying to compute the difference for each column and save it to out.
Examples of the files and desired output: <a href="https://i.stack.imgur.com/FPQcY.png" rel="nofollow noreferrer">https://i.stack.imgur.com/FPQcY.png</a>
I've tried using pandas and a few other methods but couldn't get it working. This is what I have so far, thanks:</p>
<pre><code>import sys
import pandas as pd
import numpy as np
files = [sys.argv[1], sys.argv[2]]
f1 = open(sys.argv[1])
lines = f1.readlines()
f1.close()
df1 = pd.DataFrame(file1,columns = ['A_1','B_1','C_1']
f2 = open(sys.argv[2])
lines = f2.readlines()
f2.close()
df2 = pd.DataFrame(file2,columns = ['A_2','B_2','C_2']
df1['Difference'] = np.where((df1['A_1'] - df2['A_2']),(df1['B_1'] - df2['B_2']),(df1['C_1'] - df2['C_2']))
print (df1)
</code></pre>
|
<p>Use numpy:</p>
<pre><code>f1 = np.loadtxt(sys.argv[1])
f2 = np.loadtxt(sys.argv[2])
dif = f2 - f1
</code></pre>
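<p>A self-contained sketch of the same idea, using <code>StringIO</code> with made-up contents in place of the real files named by <code>sys.argv</code>:</p>

```python
import numpy as np
from io import StringIO

# Stand-ins for the files named by sys.argv[1] / sys.argv[2]
file1 = StringIO("1 2 3\n4 5 6\n")
file2 = StringIO("2 4 6\n8 10 12\n")

f1 = np.loadtxt(file1)
f2 = np.loadtxt(file2)
dif = f2 - f1          # elementwise, column-by-column difference
print(dif)
# np.savetxt("out.txt", dif) would write the result to disk
```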
|
python|pandas|numpy|dataframe|enumerate
| 0 |
1,902,556 | 64,204,222 |
Python edit multiple worksheets
|
<p>I would like to edit multiple worksheets present in the same Excel File and then save them with the adjustments made. These worksheets have the same columns headers and are called Credit and Debit. The code that I have created is the following:</p>
<pre><code>import pandas as pd
import numpy as np
class blah:
    def __init__(self, path, file_in, file_out):
        self.path = path
        self.file_in = file_in
        self.file_out = file_out

    def process_file(self):
        df = pd.read_excel(self.path + self.file_in, sheet_name=None, skiprows=4)
        # ****Here is where I am struggling in amending both worksheets at the same time****
        # df = df.columns.str.strip()
        # df['Col1'] = np.where((df['Col2'] == 'KO') | (df['Col2'] == 'OK'), 0, df['Col1'])
        writer = pd.ExcelWriter(self.path + self.file_out, engine='xlsxwriter')
        for sheet_name in df.keys():
            df[sheet_name].to_excel(writer, sheet_name=sheet_name, index=False)
        writer.save()

b = blah('path....',
         'file in....xlsx',
         'file out.xlsx')
b.process_file()
</code></pre>
|
<p>I found a workaround:</p>
<pre><code>for sheet_name in df.keys():
    df[sheet_name] = df[sheet_name].rename(columns=lambda x: x.strip())
    df[sheet_name]['Col1'] = np.where(
        (df[sheet_name]['Col2'] == 'KO') | (df[sheet_name]['Col2'] == 'OK'),
        0,
        df[sheet_name]['Col1'])
</code></pre>
|
python|function|class|worksheet-function
| 0 |
1,902,557 | 64,541,421 |
Pip not retrieving the latest version of a package: Google Colab
|
<p>I'm trying to use the google-ads package in Google Colab. I install it with the magic command</p>
<pre><code>%pip install google-ads
</code></pre>
<p>The downloaded version is 4.0.0, but the current version in pypi.org is the <a href="https://pypi.org/project/google-ads/" rel="nofollow noreferrer">7.0.0</a>, is there any way to download the newest version?
Thanks a bunch</p>
|
<p>I've found the problem: google-ads 7.0.0 requires Python >= 3.7, while Colab uses 3.6.9.</p>
|
python|pip|google-colaboratory
| 2 |
1,902,558 | 70,381,891 |
Binning a dataframe when a condition matches in a second dataframe
|
<p>Good morning all.
I want to create a binning column in my main dataframe using data from a second one.
Dataframe#1 has "Runner ID" and "Cumulative Distance" columns. Dataframe#2 has "Runner ID", "Section Start" and "Section Name" columns
I'm trying to create a third column on Dataframe #1 named "Section Name Binning" based on matching "Runner ID" in both dataframes, and then binning "Cumulative Distance" from Dataframe#1 using the data from columns "Section Start" and "Section Name" from Dataframe#2.
"Cumulative Distance" from Dataframe#1 and "Section Start" from Dataframe#2 will always be in increasing order and they restart when "Runner ID" changes.
Attached are a picture and sample dataframes.
As always, I appreciate your support.</p>
<p>Dataframes for binning
<a href="https://i.stack.imgur.com/VJ9AD.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VJ9AD.jpg" alt="Dataframes for binning" /></a></p>
<pre><code>df1=pd.DataFrame({'Runner_ID':['John','John','John','John','John','John','John','John','John','John','John','Jen','Jen','Jen','Jen','Jen','Jen','Jen','Jen','Jen','Jen','Jen'],'Cumulative_Distance':[1,1.5,1.8,3,3.2,3.7,4,4.3,5,6.6,8,2,2.3,2.8,3.2,3.5,3.9,4.8,5,5.3,5.8,6]})
df2=pd.DataFrame({'Runner_ID':['John','John','John','Jen','Jen','Jen','Jen'],'Section_Start':[0,3,5,0,2.5,3.5,5], 'Section_Name':['Flats', 'Uphill', 'Downhill', 'Flats', 'Uphill','Curve', 'Downhill']})
</code></pre>
|
<p>This is <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html" rel="nofollow noreferrer"><code>pd.merge_asof</code></a>:</p>
<pre><code>(pd.merge_asof(df1.sort_values('Cumulative_Distance'),df2.sort_values('Section_Start'),
               left_on='Cumulative_Distance', right_on='Section_Start',
               by='Runner_ID', allow_exact_matches=False)
   .sort_values(['Runner_ID','Cumulative_Distance'])
)
</code></pre>
<p>Output:</p>
<pre><code> Runner_ID Cumulative_Distance Section_Start Section_Name
3 Jen 2.0 0.0 Flats
4 Jen 2.3 0.0 Flats
5 Jen 2.8 2.5 Uphill
8 Jen 3.2 2.5 Uphill
9 Jen 3.5 2.5 Uphill
11 Jen 3.9 3.5 Curve
14 Jen 4.8 3.5 Curve
15 Jen 5.0 3.5 Curve
17 Jen 5.3 5.0 Downhill
18 Jen 5.8 5.0 Downhill
19 Jen 6.0 5.0 Downhill
0 John 1.0 0.0 Flats
1 John 1.5 0.0 Flats
2 John 1.8 0.0 Flats
6 John 3.0 0.0 Flats
7 John 3.2 3.0 Uphill
10 John 3.7 3.0 Uphill
12 John 4.0 3.0 Uphill
13 John 4.3 3.0 Uphill
16 John 5.0 3.0 Uphill
20 John 6.6 5.0 Downhill
21 John 8.0 5.0 Downhill
</code></pre>
|
python|pandas|binning
| -1 |
1,902,559 | 73,282,566 |
Need help python open browser using specific profile
|
<p>Still new to Python.
I need to be able to open a Chrome browser with a specific profile,
and verify that the proper page is opened by checking for a specific element on the page.</p>
|
<p>Use Selenium to control the browser from Python.
<a href="https://selenium-python.readthedocs.io/getting-started.html#simple-usage" rel="nofollow noreferrer">https://selenium-python.readthedocs.io/getting-started.html#simple-usage</a></p>
<p>You can use the chrome driver that you download from <a href="https://chromedriver.chromium.org/downloads" rel="nofollow noreferrer">https://chromedriver.chromium.org/downloads</a></p>
<p>You will use XPATH to locate the elements and validate the page
<a href="https://selenium-python.readthedocs.io/locating-elements.html" rel="nofollow noreferrer">https://selenium-python.readthedocs.io/locating-elements.html</a></p>
|
python|browser
| 0 |
1,902,560 | 66,434,682 |
Replace column values with the mean of rows that have a particular value in another column in pandas
|
<p>I have a data frame</p>
<pre><code>df = pd.DataFrame([[10, -1], [20, 1], [30, -1],[40, 1], [50, 1], [60, 1], [70,-1], [80,-1], [90,-1], [100,1]], columns=['A', 'B'])
</code></pre>
<p>Calculate the mean of column A over the rows whose value in column B is 1, and assign it to c; here <strong>c = 54</strong>. I want to replace with c only those values of column A that have -1 in column B and are less than c.</p>
<p><strong>Expected Output:</strong></p>
<pre><code>df = pd.DataFrame([[54, -1], [20, 1], [54, -1],[40, 1], [50, 1], [60, 1], [70,-1], [80,-1], [90,-1], [100,1]], columns=['A', 'B'])
</code></pre>
<p>How to do it?</p>
|
<p>First select the values of <code>A</code> where <code>B == 1</code> using <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>, take their mean, and then assign the mean to <code>A</code> wherever <code>A</code> is less than the mean and <code>B == -1</code>:</p>
<pre><code>mean = df.loc[df['B'].eq(1), 'A'].mean()
print (mean)
54.0
mask = df['A'].lt(mean) & df['B'].eq(-1)
df.loc[mask, 'A'] = mean
print (df)
A B
0 54 -1
1 20 1
2 54 -1
3 40 1
4 50 1
5 60 1
6 70 -1
7 80 -1
8 90 -1
9 100 1
</code></pre>
|
python|python-3.x|pandas|dataframe
| 2 |
1,902,561 | 68,623,805 |
Difference between get_attribute and get in Selenium in python
|
<p>I was doing web <strong>scraping</strong> when I noticed that the two methods below seem to perform almost the same job. I'm sure there are differences, but I cannot understand them.</p>
<p>Example of <code>get_attribute()</code>:</p>
<pre><code>channel=driver.find_element_by_xpath('//*[@id="main-link"]') #Accessing the link of the channel
url=channel.get_attribute('href')
print(url)
</code></pre>
<p>Example of <strong>get()</strong></p>
<pre><code>urls=[]
for i in listOfLinks:
    u = i.get('href')
    try:
        # The fetched urls are not complete, so prepend https://www.youtube.com
        urls.append('https://www.youtube.com' + u)
    except:
        continue
print(urls)
</code></pre>
|
<p><code>get()</code> is used to open or launch a web URL in Selenium with Python.</p>
<p>Whereas</p>
<p><code>get_attribute</code> is used to get the <strong>attribute</strong> of a <strong>tag</strong> (which you generally see in HTML).</p>
<p>In your example, <code>//*[@id="main-link"]</code> must represent a web element, right? You may have seen it on the UI as well. Here <code>id</code> is an attribute, and in the same way <code>'href'</code> is an attribute.</p>
<p>so basically, this line means</p>
<pre><code>url = channel.get_attribute('href')
</code></pre>
<p>for the web element <code>channel</code>, which is located by the xpath <code>//*[@id="main-link"]</code>, grab its <code>href</code> attribute.</p>
|
python|selenium|selenium-webdriver
| 0 |
1,902,562 | 66,140,314 |
Way to create a dictionary while iterating through a dataframe
|
<p>this is my first question here so please excuse me for any formatting errors.</p>
<p>I have a dataframe that let's say looks like this</p>
<pre><code>ID | Contact | First Name | Last Name
1 | A | Joe | Doe
1 | B | Jane | Doe
2 | C | Peter | Parker
2 | D | Iron | Man
</code></pre>
<p>And I want to iterate through the dataframe and create a dictionary so I get results such as:</p>
<pre><code>{1:{A:[Joe, Doe]}, {B:[Jane, Doe]}, 2:{C:[Peter, Parker]},{D:[Iron, Man]}}
</code></pre>
<p>I'm using pandas to create the dataframe and I've been struggling with this for a while, maybe I'm too far down the rabbithole and the answer is rather easy or something different from what I've already tried.</p>
<p>I tried using a for loop to iterate through the contacts, checking whether the ID repeated, and then grouping them together, but it didn't work.</p>
<p>Thanks in advance for any help.</p>
|
<p>Since the ID seems to be the key for the rest of the information in each row, you can try this (which collects all the entries for an ID into a list):</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'ID': [1,1,2,2], 'Contact': ['A', 'B', 'C', 'D'], 'First Name': ['Joe', 'Jane', 'Peter', 'Iron'], 'Last Name': ['Doe', 'Doe', 'Parker', 'Man']})
res = {}
for index, row in df.iterrows():
    cur_id = row[0]
    cur_contact = row[1]
    cur_fname = row[2]
    cur_lname = row[3]
    if cur_id in res:
        res[cur_id].append({cur_contact: [cur_fname, cur_lname]})
    else:
        res[cur_id] = [{cur_contact: [cur_fname, cur_lname]}]
print(res)
</code></pre>
<p>output:</p>
<pre><code>{1: [{'A': ['Joe', 'Doe']}, {'B': ['Jane', 'Doe']}],
2: [{'C': ['Peter', 'Parker']}, {'D': ['Iron', 'Man']}]}
</code></pre>
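<p>If you'd rather avoid the explicit row loop, a dictionary comprehension over <code>groupby</code> gives a similar result. Note that this sketch merges each ID's contacts into one dict rather than a list of single-entry dicts:</p>

```python
import pandas as pd

df = pd.DataFrame({'ID': [1, 1, 2, 2],
                   'Contact': ['A', 'B', 'C', 'D'],
                   'First Name': ['Joe', 'Jane', 'Peter', 'Iron'],
                   'Last Name': ['Doe', 'Doe', 'Parker', 'Man']})

# {ID: {Contact: [first, last], ...}}, built group by group
res = {gid: dict(zip(g['Contact'], g[['First Name', 'Last Name']].values.tolist()))
       for gid, g in df.groupby('ID')}
print(res)
```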
|
python|pandas
| 2 |
1,902,563 | 62,442,449 |
Error: The Node.js native addon module (tfjs_binding.node) can not be found at path
|
<p>I tried to run the example of <a href="https://github.com/tensorflow/tfjs-models/tree/master/posenet" rel="nofollow noreferrer">this</a> app:</p>
<p><a href="https://i.stack.imgur.com/IiPIW.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>By this script :</p>
<pre><code>const tf = require('@tensorflow/tfjs-node');
const posenet = require('@tensorflow-models/posenet');
const {
    createCanvas, Image
} = require('canvas')
const imageScaleFactor = 0.5;
const outputStride = 16;
const flipHorizontal = false;
const tryModel = async () => {
    console.log('start');
    const net = await posenet.load(0.75);
    const img = new Image();
    img.src = './test.jpg';
    const canvas = createCanvas(img.width, img.height);
    const ctx = canvas.getContext('2d');
    ctx.drawImage(img, 0, 0);
    const input = tf.browser.fromPixels(canvas);
    const pose = await net.estimateSinglePose(input, imageScaleFactor, flipHorizontal, outputStride);
    // console.log(pose);
    for (const keypoint of pose.keypoints) {
        console.log(`${keypoint.part}: (${keypoint.position.x},${keypoint.position.y})`);
    }
    console.log('end');
}
tryModel();
</code></pre>
<p>but when I tried :</p>
<p><code>node index.js</code></p>
<p>i get this error :</p>
<pre><code>so@so-notebook:~/Desktop/trash/tensorflow$ node index.js
Overriding the gradient for 'Max'
Overriding the gradient for 'OneHot'
Overriding the gradient for 'PadV2'
Overriding the gradient for 'SpaceToBatchND'
Overriding the gradient for 'SplitV'
/home/so/node_modules/@tensorflow/tfjs-node/dist/index.js:49
throw new Error("The Node.js native addon module (tfjs_binding.node) can not "
^
Error: The Node.js native addon module (tfjs_binding.node) can not be found at path: /home/so/node_modules/@tensorflow/tfjs-node/lib/napi-v6/tfjs_binding.node.
Please run command 'npm rebuild @tensorflow/tfjs-node build-addon-from-source' to rebuild the native addon module.
If you have problem with building the addon module, please check https://github.com/tensorflow/tfjs/blob/master/tfjs-node/WINDOWS_TROUBLESHOOTING.md or file an issue.
at Object.<anonymous> (/home/so/node_modules/@tensorflow/tfjs-node/dist/index.js:49:11)
at Module._compile (internal/modules/cjs/loader.js:1138:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1158:10)
at Module.load (internal/modules/cjs/loader.js:986:32)
at Function.Module._load (internal/modules/cjs/loader.js:879:14)
at Module.require (internal/modules/cjs/loader.js:1026:19)
at require (internal/modules/cjs/helpers.js:72:18)
at Object.<anonymous> (/home/so/Desktop/trash/tensorflow/index.js:1:12)
at Module._compile (internal/modules/cjs/loader.js:1138:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1158:10)
</code></pre>
<p>Following its suggestion, I tried:</p>
<p><code>sudo npm rebuild @tensorflow/tfjs-node build-addon-from-source</code></p>
<p>but get this error:</p>
<pre><code>so@so-notebook:~/Desktop/trash/tensorflow$ sudo npm rebuild @tensorflow/tfjs-node build-addon-from-source
> @tensorflow/tfjs-node@2.0.1 install /home/so/node_modules/@tensorflow/tfjs-node
> node scripts/install.js
CPU-linux-2.0.1.tar.gz
* Downloading libtensorflow
[==============================] 64333/bps 100% 0.0s
* Building TensorFlow Node.js bindings
node-pre-gyp install failed with error: Error: Command failed: node-pre-gyp install --fallback-to-build
node-pre-gyp WARN Using request for node-pre-gyp https download
node-pre-gyp WARN Tried to download(403): https://storage.googleapis.com/tf-builds/pre-built-binary/napi-v6/2.0.1/CPU-linux-2.0.1.tar.gz
node-pre-gyp WARN Pre-built binaries not found for @tensorflow/tfjs-node@2.0.1 and node@14.4.0 (node-v83 ABI, glibc) (falling back to source compile with node-gyp)
In file included from ../binding/tfjs_backend.cc:18:0:
../binding/tfjs_backend.h:25:10: fatal error: tensorflow/c/c_api.h: No such file or directory
#include "tensorflow/c/c_api.h"
^~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make: *** [Release/obj.target/tfjs_binding/binding/tfjs_backend.o] Error 1
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:194:23)
gyp ERR! stack at ChildProcess.emit (events.js:315:20)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:276:12)
gyp ERR! System Linux 5.6.9-050609-generic
gyp ERR! command "/home/so/node_modules/node/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "build" "--fallback-to-build" "--module=/home/so/node_modules/@tensorflow/tfjs-node/lib/napi-v6/tfjs_binding.node" "--module_name=tfjs_binding" "--module_path=/home/so/node_modules/@tensorflow/tfjs-node/lib/napi-v6" "--napi_version=6" "--node_abi_napi=napi" "--napi_build_version=6" "--node_napi_label=napi-v6"
gyp ERR! cwd /home/so/node_modules/@tensorflow/tfjs-node
gyp ERR! node -v v14.4.0
gyp ERR! node-gyp -v v5.1.0
gyp ERR! not ok
node-pre-gyp ERR! build error
node-pre-gyp ERR! stack Error: Failed to execute '/home/so/node_modules/node/bin/node /usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js build --fallback-to-build --module=/home/so/node_modules/@tensorflow/tfjs-node/lib/napi-v6/tfjs_binding.node --module_name=tfjs_binding --module_path=/home/so/node_modules/@tensorflow/tfjs-node/lib/napi-v6 --napi_version=6 --node_abi_napi=napi --napi_build_version=6 --node_napi_label=napi-v6' (1)
node-pre-gyp ERR! stack at ChildProcess.<anonymous> (/home/so/node_modules/node-pre-gyp/lib/util/compile.js:83:29)
node-pre-gyp ERR! stack at ChildProcess.emit (events.js:315:20)
node-pre-gyp ERR! stack at maybeClose (internal/child_process.js:1051:16)
node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:287:5)
node-pre-gyp ERR! System Linux 5.6.9-050609-generic
node-pre-gyp ERR! command "/home/so/node_modules/node/bin/node" "/home/so/node_modules/@tensorflow/tfjs-node/node_modules/.bin/node-pre-gyp" "install" "--fallback-to-build"
node-pre-gyp ERR! cwd /home/so/node_modules/@tensorflow/tfjs-node
node-pre-gyp ERR! node -v v14.4.0
node-pre-gyp ERR! node-pre-gyp -v v0.14.0
node-pre-gyp ERR! not ok
@tensorflow/tfjs-node@2.0.1 /home/so/node_modules/@tensorflow/tfjs-node
</code></pre>
<p>My OS is:</p>
<pre><code>so@so-notebook:~/Desktop/trash/tensorflow$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.4 LTS
Release: 18.04
Codename: bionic
</code></pre>
<p>Thanks.</p>
|
<p>On Ubuntu 20.04 I solved it with:</p>
<ol>
<li><code>sudo apt install build-essential</code></li>
<li><code>sudo npm i -g node-pre-gyp</code></li>
<li><code>npm rebuild @tensorflow/tfjs-node build-addon-from-source</code></li>
</ol>
|
javascript|node.js|tensorflow|npm
| 3 |
1,902,564 | 67,048,112 |
Add Columns in Rows with a Negative Sign
|
<p>This is the text file:</p>
<pre><code>description, date, amount
Car,02/06/2021,26
Desk,06/21/2020,-6
Hockey Stick,07/20/2021,-26
</code></pre>
<p>I essentially want to take the amounts and add the numbers together, even when a number is negative (has a "-" sign). I don't know how to do this; I think Python is not recognising the negative numbers as <code>int</code>s:</p>
<pre><code>data = open("transactions.txt", "r")
info = data.readlines()
data.close()
budget = 0
for line in info:
    splitting = line.split(",")
    budget += float(splitting[2])  # <-- Error
print(budget)
</code></pre>
|
<p>Skip the first line, as it's a text header. The value that fails to convert is not <code>-6</code> but the word <code>amount</code>.<br />
Something like</p>
<pre><code>for line in info[1:]:
</code></pre>
<p>should do the job.</p>
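<p>Putting it together, a sketch with the file contents inlined as a list of lines (standing in for <code>readlines()</code>):</p>

```python
# Lines as they would come back from readlines()
info = ["description, date, amount\n",
        "Car,02/06/2021,26\n",
        "Desk,06/21/2020,-6\n",
        "Hockey Stick,07/20/2021,-26\n"]

budget = 0
for line in info[1:]:               # skip the header line
    splitting = line.split(",")
    budget += float(splitting[2])   # float() handles the '-' sign fine

print(budget)  # -6.0
```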
|
python|python-3.x
| 0 |
1,902,565 | 35,169,236 |
Group by min of date in pandas dataframe, returning nanoseconds not days
|
<p>I'm trying to group entries by the column 'Client Name' and take the minimum of the corresponding date values.</p>
<pre><code>Client Name Recency
A -10 days
B -4 days
C -1 days
A -5 days
B -2 days
C 0 days
</code></pre>
<p>So the final result should be:</p>
<pre><code>A -5
B -2
C 0
</code></pre>
<p>When I check the type of my 'recency' I get:</p>
<pre><code>>> df['recency'].dtype
dtype('<m8[ns]')
</code></pre>
<p>Which I think is my problem as my days are in nanoseconds? But I find that odd because it says days in the column.</p>
<p>However when I do the grouping:</p>
<pre><code>>> df.groupby(['Client Name'], sort=False)['recency'].min()
A -73785600000000000
B -345600000000000
C 0
</code></pre>
<p>Which leads me to believe I am subtracting nanoseconds and not days.</p>
<p>Why would the column of the dataframe change to nanoseconds? How do I return the min of the days rather than the nanoseconds?</p>
<p>Thanks</p>
|
<p>I tried your test dataframe and the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.min.html" rel="nofollow"><code>min</code></a> values come out different from yours - maybe you were using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.max.html" rel="nofollow"><code>max</code></a>.</p>
<p>You can try <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> on the subset <code>df[['Recency','Client Name']]</code>:</p>
<pre><code>print df
Client Name Recency
0 A -10 days
1 B -4 days
2 C -1 days
3 A -5 days
4 B -2 days
5 C 0 days
print df[['Recency','Client Name']].groupby(['Client Name'], sort=False).min()
Recency
Client Name
A -10 days
B -4 days
C -1 days
</code></pre>
<p>If you need to remove the text <code>days</code>, convert the <code>timedelta</code> to <code>int</code>:</p>
<pre><code>df['RecencyNo'] = (df['Recency'] / np.timedelta64(1, 'D')).astype(int)
print df
Client Name Recency RecencyNo
0 A -10 days -10
1 B -4 days -4
2 C -1 days -1
3 A -5 days -5
4 B -2 days -2
5 C 0 days 0
print df.groupby(['Client Name'], sort=False)['RecencyNo'].min()
Client Name
A -10
B -4
C -1
Name: RecencyNo, dtype: int32
</code></pre>
<p>EDIT:</p>
<p>You can check <a href="https://github.com/pydata/pandas/issues/5724" rel="nofollow"><code>issue 5724</code></a> - maybe it is bug.</p>
|
python|date|pandas|dataframe
| 0 |
1,902,566 | 56,152,270 |
Python 3.6 - Altair Chart Prints Object, Not Graph
|
<p>What can cause chart objects to display a textual representation of the object instead of the actual graph? Reproducible example below.</p>
<pre><code>from pydataset import data
import altair
cars = data('cars')
cars
c = altair.Chart(cars).mark_line().encode(
    x='speed',
    y='dist'
)
</code></pre>
<p>Outputs</p>
<pre><code>Chart({
data: speed dist
1 4 2
2 4 10
3 7 4
4 7 22
5 8 16
...
encoding: FacetedEncoding({
x: X({
shorthand: 'speed'
}),
y: Y({
shorthand: 'dist'
})
}),
mark: 'line'
})
</code></pre>
<p>Expected output is a graph, such as the one shown here <a href="https://altair-viz.github.io/user_guide/troubleshooting.html#display-troubleshooting" rel="nofollow noreferrer">https://altair-viz.github.io/user_guide/troubleshooting.html#display-troubleshooting</a></p>
<p>Obviously I've read the trouble shooting, but it isn't clear to me this problem. They talk about getting no output, but not about getting the object as an output.</p>
<p>Edit to clarify : They DO talk about that, but specifically if using Jupyter Notebook and IPython being underversioned. I have Jupyter installed, but am not using. I have IPython installed, but not underversioned.</p>
|
<blockquote>
<p>I have Jupyter installed, but am not using.</p>
</blockquote>
<p>If you are not using Jupyter notebook. JupyterLab, or a similar notebook environment, then you will need some other Javascript-enabled frontend in which to render your charts. You can find more information on this at <a href="https://altair-viz.github.io/user_guide/display_frontends.html#display-general" rel="nofollow noreferrer">https://altair-viz.github.io/user_guide/display_frontends.html#display-general</a>.</p>
|
python|graph|data-visualization|altair
| 1 |
1,902,567 | 55,486,085 |
Dictionary operations on a dataframe column containing dictionary
|
<p>One of my dataframes, <code>df1</code>, has a column <code>WR</code> with a dictionary in every row - </p>
<pre><code>WR
----
{'M-NET':1, 'C-VTR':2, 'I-INK':9}
{'H-NKG':6, 'M-NET':2, 'C-VTR':2}
{'N-NOC':7, 'I-INK':4}
{'L-TKP':4, 'C-VTR':3, 'H-NKG':3, 'M-NET':9}
{'M-NET':1, 'C-VTR':4}
</code></pre>
<p>How can I do dictionary operations on this column? For example, I want to make another column that contains the number of keys in each row of <code>WR</code>. Or, I want to get a sum of all the values of each dictionary. </p>
<p>I've tried -</p>
<pre><code>df1['WR#'] = df1['WR'].apply(lambda x: len(x.to_dict().values()))
</code></pre>
<p>and </p>
<pre><code>df1['WR#'] = len(df1['WR'].str.split(', '))
</code></pre>
<p>but these didn't work for me.</p>
<p>I need a column <code>WR#</code> that gives me </p>
<pre><code>3
3
2
4
2
</code></pre>
|
<p>If you need the number of keys (the length of each dict), try: </p>
<pre><code>df['WR'].str.len()
0 3
1 3
2 2
3 4
4 2
dtype: int64
</code></pre>
<p>If you need the <code>sum</code> </p>
<pre><code>pd.DataFrame(df['WR'].tolist()).sum(1)
0 12.0
1 10.0
2 11.0
3 19.0
4 5.0
dtype: float64
</code></pre>
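<p>An alternative sketch that works on the dictionaries directly with <code>apply</code> (more explicit, though slower on large frames):</p>

```python
import pandas as pd

df = pd.DataFrame({'WR': [{'M-NET': 1, 'C-VTR': 2, 'I-INK': 9},
                          {'M-NET': 1, 'C-VTR': 4}]})

df['WR#'] = df['WR'].apply(len)                          # number of keys
df['WRsum'] = df['WR'].apply(lambda d: sum(d.values()))  # sum of values
print(df[['WR#', 'WRsum']])
```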
|
python-3.x|pandas|dataframe
| 1 |
1,902,568 | 53,695,618 |
selenium and tweepy installed but not imported in jupyter notebook
|
<p>I created and installed packages (numpy, pandas, scikit-learn, tweepy, nltk, selenium, bs4, ipykernel) inside directory (TextProcessing) with pipenv. </p>
<p>I installed the ipython as:</p>
<pre><code>pipenv install ipykernel
</code></pre>
<p>when I activate the environment and start jupyter notebook as:</p>
<pre><code>(TextProcessing) bash-3.2$ jupyter notebook
</code></pre>
<p>Within the jupyter notebook I can import numpy, sklearn, pandas, bs4 with success but not selenium and tweepy as they return:</p>
<pre><code>ModuleNotFoundError: No module named 'selenium'
ModuleNotFoundError: No module named 'tweepy'
</code></pre>
<p>More info:
macOS Mojave
python 3.7, Anaconda</p>
<p>pipfile:</p>
<pre><code>[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
[packages]
selenium = "*"
bs4 = "*"
pandas = "*"
numpy = "*"
tweepy = "*"
nltk = "*"
scikit-learn = "*"
ipykernel = "*"
[requires]
python_version = "3.7"
</code></pre>
<p>part of pipfile.loc related to ipython:</p>
<pre><code>},
"ipykernel": {
"hashes": [
"sha256:0aeb7ec277ac42cc2b59ae3d08b10909b2ec161dc6908096210527162b53675d",
"sha256:0fc0bf97920d454102168ec2008620066878848fcfca06c22b669696212e292f"
],
"index": "pypi",
"version": "==5.1.0"
},
"ipython": {
"hashes": [
"sha256:6a9496209b76463f1dec126ab928919aaf1f55b38beb9219af3fe202f6bbdd12",
"sha256:f69932b1e806b38a7818d9a1e918e5821b685715040b48e59c657b3c7961b742"
],
"version": "==7.2.0"
},
"ipython-genutils": {
"hashes": [
"sha256:72dd37233799e619666c9f639a9da83c34013a73e8bbc79a7a6348d93c61fab8",
"sha256:eb2e116e75ecef9d4d228fdc66af54269afa26ab4463042e33785b887c628ba8"
],
"version": "==0.2.0"
},
"jedi": {
"hashes": [
"sha256:0191c447165f798e6a730285f2eee783fff81b0d3df261945ecb80983b5c3ca7",
"sha256:b7493f73a2febe0dc33d51c99b474547f7f6c0b2c8fb2b21f453eef204c12148"
],
"version": "==0.13.1"
},
"jupyter-client": {
"hashes": [
"sha256:27befcf0446b01e29853014d6a902dd101ad7d7f94e2252b1adca17c3466b761",
"sha256:59e6d791e22a8002ad0e80b78c6fd6deecab4f9e1b1aa1a22f4213de271b29ea"
],
"version": "==5.2.3"
},
"jupyter-core": {
"hashes": [
"sha256:927d713ffa616ea11972534411544589976b2493fc7e09ad946e010aa7eb9970",
"sha256:ba70754aa680300306c699790128f6fbd8c306ee5927976cbe48adacf240c0b7"
],
"version": "==4.4.0"
</code></pre>
|
<p>It turned out I had to install Jupyter itself inside the environment and start it as below; it is now working and I can import all packages within the Jupyter notebook:</p>
<pre><code>(TextProcessing) bash-3.2$ pipenv install jupyter
(TextProcessing) bash-3.2$ pipenv run jupyter notebook
</code></pre>
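<p>For future debugging, a quick way to see which interpreter a kernel is actually using (and therefore why an import fails) is to inspect it from inside the notebook; if <code>sys.executable</code> points outside the pipenv virtualenv, the kernel is not running in the environment you installed the packages into:</p>

```python
import sys

# The interpreter the current kernel runs on. If this is not the pipenv
# virtualenv's python, packages installed with pipenv won't be importable.
print(sys.executable)
print(sys.prefix)
```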
|
python|selenium|jupyter-notebook|virtualenv|pipenv
| 0 |
1,902,569 | 41,870,034 |
Python: extract the core of a 2D numpy array
|
<p>Say I have a 2D numpy array like this:</p>
<pre><code>In[1]: x
Out[1]:
array([[0, 0, 0, 0, 0],
[1, 1, 1, 1, 1],
[2, 2, 2, 2, 2],
[3, 3, 3, 3, 3],
[4, 4, 4, 4, 4],
[5, 5, 5, 5, 5]], dtype=int64)
</code></pre>
<p>and I want to extract the <code>(n-1)*(m-1)</code> core, which would be:</p>
<pre><code>array([[1, 1, 1],
[2, 2, 2],
[3, 3, 3],
[4, 4, 4]], dtype=int64)
</code></pre>
<p>How could I do this, since the data structure is not <em>flat</em>? Do you suggest flattening it first? </p>
<p>This is a simplified version of a much bigger array, whose core has dimension <code>(n-33)*(n-33)</code>.</p>
|
<p>You can use negative stop indices to exclude the last x rows/columns and normal start indices:</p>
<pre><code>>>> x[1:-1, 1:-1]
array([[1, 1, 1],
[2, 2, 2],
[3, 3, 3]], dtype=int64)
</code></pre>
<p>For your new example:</p>
<pre><code>>>> t = np.array([[0, 0, 0, 0, 0],
[1, 1, 1, 1, 1],
[2, 2, 2, 2, 2],
[3, 3, 3, 3, 3],
[4, 4, 4, 4, 4],
[5, 5, 5, 5, 5]], dtype=np.int64)
>>> t[1:-1, 1:-1]
array([[1, 1, 1],
[2, 2, 2],
[3, 3, 3],
[4, 4, 4]], dtype=int64)
</code></pre>
<p>You could also remove 2 leading and trailing columns:</p>
<pre><code>>>> t[1:-1, 2:-2]
array([[1],
[2],
[3],
[4]], dtype=int64)
</code></pre>
<p>or rows:</p>
<pre><code>>>> t[2:-2, 1:-1]
array([[2, 2, 2],
[3, 3, 3]], dtype=int64)
</code></pre>
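<p>The same slicing generalizes to any margin, including the larger case mentioned at the end of the question. A small helper (the function name is my own) for trimming <code>k</code> rows/columns from each side:</p>

```python
import numpy as np

def core(a, margin=1):
    """Return a view of 2D array `a` with `margin` rows/columns trimmed from each side."""
    if margin == 0:
        return a
    return a[margin:-margin, margin:-margin]

x = np.arange(30).reshape(6, 5)
print(core(x, 1).shape)  # (4, 3)
```

Since this is a basic slice, it returns a view rather than a copy, so no flattening or extra memory is needed.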
|
python|arrays|numpy|slice
| 2 |
1,902,570 | 47,997,772 |
How to use kwargs in matplotlib.axes.Axes.arrow python 2.7
|
<p>I'm working from arrow_simple_demo.py <a href="https://matplotlib.org/examples/pylab_examples/arrow_simple_demo.html" rel="nofollow noreferrer">here</a> which I have already modified to be:</p>
<pre><code>import matplotlib.pyplot as plt
ax = plt.axes()
ax.arrow(0, 0, 0.5, 0.5, head_width=0.05, head_length=0.1)
</code></pre>
<p>however, I want to change the line style of the arrow to dashed using a kwarg. The arrow docs suggest it is possible. <a href="https://matplotlib.org/api/_as_gen/matplotlib.axes.Axes.arrow.html#matplotlib.axes.Axes.arrow" rel="nofollow noreferrer">arrow docs</a></p>
<p>So I tried to give arrow the **kwargs argument to import: </p>
<pre><code>kwargs = {linestyle:'--'}
</code></pre>
<p>Now my code looks like this:</p>
<pre><code>import matplotlib.pyplot as plt
ax = plt.axes()
kwargs={linestyle:'--'}
ax.arrow(0, 0, 0.5, 0.5, head_width=0.05, head_length=0.1, **kwargs)
</code></pre>
<p>But the result is:</p>
<pre><code>NameError: name 'linestyle' is not defined
</code></pre>
<p>I'm wondering if anyone can tell me if I am using the kwargs correctly, and whether I need to import Patch from matplotlib's patches class to make this work. The statement "Other valid kwargs (inherited from :class:<code>Patch</code>)" which is in the arrow docs above the listing of kwargs makes me think it might be necessary. I have also been looking at the patches docs to figure that question out. <a href="https://matplotlib.org/1.5.3/api/patches_api.html" rel="nofollow noreferrer">here</a></p>
<p>EDIT:</p>
<p>Code finished when I passed linestyle key as a string, but I am not getting the dashed arrow line that I was hoping for. </p>
<pre><code>import matplotlib.pyplot as plt
ax = plt.axes()
kwargs={'linestyle':'--'}
ax.arrow(0, 0, 0.5, 0.5, head_width=0.05, head_length=0.1, **kwargs)
</code></pre>
<p>see picture:</p>
<p><a href="https://i.stack.imgur.com/s9EZE.png" rel="nofollow noreferrer">arrow plot solid line</a></p>
|
<p>The key of your <code>kwargs</code> dictionary should be a string. In your code, Python looks for an object called <code>linestyle</code>, which does not exist.</p>
<pre><code>kwargs = {'linestyle':'--'}
</code></pre>
<p>Unfortunately, doing so is not enough to produce the desired effect. The line <em>is</em> dashed, but the problem is that the arrow is drawn with a closed path and the dashes are overlaid on top of one another, which cancels the effect. You can see the dashed line by plotting a thicker arrow.</p>
<pre><code>ax = plt.axes()
kwargs={'linestyle':'--', 'lw':2, 'width':0.05}
ax.arrow(0, 0, 0.5, 0.5, head_width=0.05, head_length=0.1, **kwargs)
</code></pre>
<p><a href="https://i.stack.imgur.com/rT8lG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rT8lG.png" alt="enter image description here"></a></p>
<p>If you want a simple dashed arrow, you have to use a simpler arrow, using <code>annotate</code></p>
<pre><code>ax = plt.axes()
kwargs={'linestyle':'--', 'lw':2}
ax.annotate("", xy=(0.5, 0.5), xytext=(0, 0),
arrowprops=dict(arrowstyle="->", **kwargs))
</code></pre>
<p><a href="https://i.stack.imgur.com/UHbIl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UHbIl.png" alt="enter image description here"></a></p>
|
python|python-2.7|matplotlib|plot
| 4 |
1,902,571 | 48,047,548 |
In Django how to decrypt the session id in database and in cookie with my SECRET_KEY?
|
<p>I created one Django application with below settings - (for cookie base session)</p>
<pre><code>SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'
SESSION_SERIALIZER = 'django.contrib.sessions.serializers.PickleSerializer'
</code></pre>
<p>Then I got a session id</p>
<pre><code>sessionid=.eJxrYJk6gwECaqdo9PDGJ5aWZMSXFqcWxWemTOlhMjSY0iOEJJiUmJydmgeU0UzJSsxLz9dLzs8rKcpM0gMp0YPKFuv55qek5jjB1PIjGZCRWJwxpUfDMNUk1STJ1MLc0tLczDLNyMg0ydDQzDTJzCjZ0jg50SLR3NDc3DzReEqpHgBcETf7:1eVt50:xtWtUp9mwcxusxtg6fZB_tHzlYw
</code></pre>
<p>With another setting (for database-backed sesisons)</p>
<pre><code>SESSION_ENGINE = 'django.contrib.sessions.backends.db'
SESSION_SERIALIZER = 'django.contrib.sessions.serializers.JSONSerializer'
</code></pre>
<p>I got below encrypted string in database:</p>
<pre><code>gzc9c9nwwraqhbdsk9xg935ypkqp7ecs|MmExZWI0NjZjYzIwNDYyZDhjNWVmODJlNmMwNjI0ZmJmMjQ4MTljNDp7Il9hdXRoX3VzZXJfaWQiOiIxMCIsIl9hdXRoX3VzZXJfYmFja2VuZCI6ImRqYW5nby5jb250cmliLmF1dGguYmFja2VuZHMuTW9kZWxCYWNrZW5kIiwiX2F1dGhfdXNlcl9oYXNoIjoiMWU0ZTRiNTg3OTk3NjlmMjI1YjExNjViNjJjOTNjYThhNzE3NzdhMyIsImxhc3RfbG9naW4iOjIyMjJ9
</code></pre>
<p>I want to know what is inside both the encrypted strings. </p>
<ol>
<li>How can I decrypt both? </li>
<li>Which encryption algorithm django uses for encryption? </li>
<li>Where can I set the encryption algorithms? </li>
</ol>
<p>It will be great, if anyone can give me a sample code.</p>
|
<p>First off, I would not recommend you use <code>PickleSerializer</code> unless you have a good reason to change the default session serializer and understand <a href="https://docs.djangoproject.com/en/2.0/topics/http/sessions/#using-cookie-based-sessions" rel="nofollow noreferrer">the security implications</a>.</p>
<p>The cookies you have aren't encrypted, they're just encoded as url-safe base64 (optionally compressed with zlib) and then signed:</p>
<pre><code>In [8]: import base64
In [9]: base64.urlsafe_b64decode('MmExZWI0NjZjYzIwNDYyZDhjNWVmODJlNmMwNjI0ZmJmMjQ4MTljNDp7Il9hdXRoX3VzZXJfaWQiOiIxMCIsIl9hdXRoX3VzZXJfYmFja2VuZCI6ImRqYW5nby5jb250cmliLmF1dGguYmFja2V
... uZHMuTW9kZWxCYWNrZW5kIiwiX2F1dGhfdXNlcl9oYXNoIjoiMWU0ZTRiNTg3OTk3NjlmMjI1YjExNjViNjJjOTNjYThhNzE3NzdhMyIsImxhc3RfbG9naW4iOjIyMjJ9')
Out[9]: '2a1eb466cc20462d8c5ef82e6c0624fbf24819c4:{"_auth_user_id":"10","_auth_user_backend":"django.contrib.auth.backends.ModelBackend","_auth_user_hash":"1e4e4b58799769f225b1165b62c93ca8a71777a3","last_login":2222}'
In [10]: base64.urlsafe_b64decode('.eJxrYJk6gwECaqdo9PDGJ5aWZMSXFqcWxWemTOlhMjSY0iOEJJiUmJydmgeU0UzJSsxLz9dLzs8rKcpM0gMp0YPKFuv55qek5jjB1PIjGZCRWJwxpUfDMNUk1STJ1MLc0tLczDLNyMg0ydDQz
... DTJzCjZ0jg50SLR3NDc3DzReEqpHgBcETf7').decode('zlib')
Out[10]: '\x80\x04\x95\x98\x00\x00\x00\x00\x00\x00\x00}\x94(\x8c\r_auth_user_id\x94\x8c\x0210\x94\x8c\x12_auth_user_backend\x94\x8c)django.contrib.auth.backends.ModelBackend\x94\x8c\x0f_auth_user_hash\x94\x8c(1e4e4b58799769f225b1165b62c93ca8a71777a3\x94u.'
</code></pre>
<p>This is all handled by your <a href="https://docs.djangoproject.com/en/2.0/topics/http/sessions/#using-sessions-out-of-views" rel="nofollow noreferrer"><code>SESSION_ENGINE</code></a>:</p>
<pre><code>from importlib import import_module
from django.conf import settings
SessionStore = import_module(settings.SESSION_ENGINE).SessionStore
session_data = SessionStore().decode('.eJxrYJk6gwECaqdo9PDGJ5aWZMSXFq......')
</code></pre>
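<p>For completeness, the base64-plus-optional-zlib step can also be reproduced with just the standard library. This only peeks at the payload of a signed-cookie value like the one above; it does <em>not</em> verify the signature, and the helper name is my own:</p>

```python
import base64
import zlib

def peek_session_cookie(value):
    # Signed-cookie format: payload:timestamp:signature. A leading '.'
    # on the payload (as in '.eJx...') marks zlib compression.
    payload = value.split(':')[0]
    compressed = payload.startswith('.')
    if compressed:
        payload = payload[1:]
    payload += '=' * (-len(payload) % 4)  # restore stripped base64 padding
    raw = base64.urlsafe_b64decode(payload.encode())
    return zlib.decompress(raw) if compressed else raw
```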
|
python|django
| 3 |
1,902,572 | 55,717,544 |
How do I remove the quotation marks off of a string to be used as a variable - python
|
<p>I am trying to pick a random item from my list and then pick a random item of the dictionary that has the same name as the item in the list.</p>
<pre class="lang-py prettyprint-override"><code>import random
letters = ["a" , "b" , "c"]
a = {
"item1": "Q",
"item2": "W",
}
b = {
"item1": "E",
"item2": "R",
}
c = {
"item1": "T",
"item2": "Y",
}
item_num = random.randint(0, len(letters)-1)
print(letters[item_num].get([random.randint(0, 1)]))
</code></pre>
<p>I always get the same error: <code>AttributeError: 'str' object has no attribute 'get'</code>, so I'm thinking that this is being caused because <code>item_num</code> would = something like <code>'a'</code> instead of just <code>a</code> without the quotation marks. Any help is appreciated!</p>
|
<p>The reason you get that error is that you're calling <code>get</code> on a <code>str</code>.</p>
<p><code>item_num</code> is an <code>int</code>. You access an element of <code>letters</code> with it, which is fine because <code>letters</code> is a <code>list</code>. However, an element of <code>letters</code> is a <code>str</code> (<code>'a'</code>, <code>'b'</code> or <code>'c'</code>), which you obviously can't call <code>get</code> on.</p>
<p>What you want to do is call <code>get</code> on the <em>dictionary</em> corresponding to the value of <code>letters[item_num]</code> you got. Accordingly, this probably does what you want:</p>
<pre><code>import random
letters = ["a" , "b" , "c"]
a = {
"item1": "Q",
"item2": "W",
}
b = {
"item1": "E",
"item2": "R",
}
c = {
"item1": "T",
"item2": "Y",
}
source = random.choice(letters)
print(globals()[source].get(f'item{random.randint(1, 2)}'))
</code></pre>
<p>That said, it's likely not what you <em>should</em> do (programmatic access of global variables). Instead, you can consider:</p>
<ul>
<li>Using a nested dictionary, where <code>['a', 'b', 'c']</code> are the keys</li>
<li>Not requiring additional processing of the random integer to get to the second-level keys (<code>1</code>, <code>2</code> as keys instead of <code>'item1'</code>, <code>'item2'</code>)</li>
</ul>
<p>Example:</p>
<pre><code>data = {'a': {1: 'Q',
2: 'W'}
...}
</code></pre>
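<p>Putting the nested-dictionary suggestion together, a complete sketch (keeping the original letters and values) might look like:</p>

```python
import random

data = {
    'a': {1: 'Q', 2: 'W'},
    'b': {1: 'E', 2: 'R'},
    'c': {1: 'T', 2: 'Y'},
}

letter = random.choice(list(data))          # pick 'a', 'b' or 'c'
value = data[letter][random.randint(1, 2)]  # then one of its two entries
print(letter, value)
```

No global lookups, no string-to-variable mapping, and no extra processing of the random integer.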
|
python
| 4 |
1,902,573 | 55,738,711 |
sklearn doesn't recognize my array as 2d even after reshape?
|
<p>I am trying to run the following code. I know I need to reshape my arrays to fit them into the linear regression model. However, after I reshape them it still gives the error saying that my arrays are scalar. I have also tried atleast_2d with no luck. </p>
<pre><code>from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_boston
boston = load_boston()
x = np.array(boston.data[:,5])
y = boston.target
x=np.array(x).reshape(-1,1)
y=np.array(y).reshape(-1,1)
lr = LinearRegression(fit_intercept=True)
lr.fit(x,y)
fig,ax= plt.subplots()
ax.set_xlabel("Average number of rooms(RM)")
ax.set_ylabel("House Price")
xmin = x.min()
xmax = x.max()
ax.plot([xmin,xmax],
[lr.predict(xmin),lr.predict(xmax)],
"-",
lw=2,color="#f9a602")
ax.scatter(x,y,s=2)
> ValueError Traceback (most recent call last)
<ipython-input-6-8c6977f43703> in <module>
7 xmax = xmax
8 ax.plot([xmin,xmax],
----> 9 [lr.predict(xmin), lr.predict(xmax)],
10 "-",
11 lw=2,color="#f9a602")
~\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\linear_model\base.py in predict(self, X)
211 Returns predicted values.
212 """
--> 213 return self._decision_function(X)
214
215 _preprocess_data = staticmethod(_preprocess_data)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\linear_model\base.py in _decision_function(self, X)
194 check_is_fitted(self, "coef_")
195
--> 196 X = check_array(X, accept_sparse=['csr', 'csc', 'coo'])
197 return safe_sparse_dot(X, self.coef_.T,
198 dense_output=True) + self.intercept_
~\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\utils\validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype, estimator)
543 "Reshape your data either using array.reshape(-1, 1) if "
544 "your data has a single feature or array.reshape(1, -1) "
--> 545 "if it contains a single sample.".format(array))
546 # If input is 1D raise error
547 if array.ndim == 1:
ValueError: Expected 2D array, got scalar array instead:
array=<built-in method min of numpy.ndarray object at 0x0000019960BF9CB0>.
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
</code></pre>
|
<p>Found this solution using <code>np.squeeze</code>:</p>
<pre><code>from sklearn.linear_model import LinearRegression

lr = LinearRegression(fit_intercept=True)
lr.fit(x, y)

# predict() expects a 2D array, so reshape the scalar endpoints first
xmin = x.min().reshape(-1, 1)
xmax = x.max().reshape(-1, 1)

# squeeze the 2D predictions (and endpoints) back to scalars for plotting
s = np.squeeze(lr.predict(xmin))
r = np.squeeze(lr.predict(xmax))
xmin = np.squeeze(xmin)
xmax = np.squeeze(xmax)

fig, ax = plt.subplots()
ax.set_xlabel("Average number of rooms (RM)")
ax.set_ylabel("House Price")
ax.plot([xmin, xmax],
        [s, r],
        "-",
        lw=2, color="#f9a602")
ax.scatter(x, y, s=2)
</code></pre>
|
python|python-3.x|numpy|scikit-learn
| 0 |
1,902,574 | 73,452,101 |
Count words of interest in a list or array
|
<p>I have the following scenario:</p>
<pre><code>import pandas as pd
import numpy as np
stuff = ['Elon Musk', 'elon musk', 'elon Musk', 'Elon Musk is awesome', "Who doesn't love Elon Musk"]
</code></pre>
<p>I'd like to count the times the name Elon Musk is shown in each element of the 'stuff' list. Upper or lower case would count. The expected result is a count of <code>5</code> (since Elon Musk, case insensitive, appears in each element of the list).</p>
|
<p>Something like this should work</p>
<pre><code>l = ['Elon Musk', 'elon musk', 'elon Musk', 'Elon Musk is awesome', "Who doesn't love Elon Musk"]
sum(1 for x in l if 'elon musk' in x.lower())
</code></pre>
<p>Output</p>
<pre><code>5
</code></pre>
<p>Edit:</p>
<p>In the event the word could be repeated you could use regex</p>
<pre><code>import re
l = ['Elon Musk', 'elon musk', 'elon Musk', 'Elon Musk is awesome', "Who doesn't love Elon Musk loving Elon Musk"]
sum(len(re.findall('elon musk', x.lower())) for x in l)
</code></pre>
<p>Output</p>
<pre><code>6
</code></pre>
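<p>An equivalent variant lets the regex engine handle case via <code>re.IGNORECASE</code> instead of lowering each string:</p>

```python
import re

l = ['Elon Musk', 'elon musk', 'elon Musk', 'Elon Musk is awesome',
     "Who doesn't love Elon Musk loving Elon Musk"]
total = sum(len(re.findall('elon musk', x, re.IGNORECASE)) for x in l)
print(total)  # 6
```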
|
python|list
| 2 |
1,902,575 | 66,378,093 |
Implementing Smith-Waterman algorithm for local alignment in python
|
<p>I have created a sequence alignment tool to compare two strands of DNA (X and Y) to find the best alignment of substrings from X and Y. The algorithm is summarized here (<a href="https://en.wikipedia.org/wiki/Smith%E2%80%93Waterman_algorithm" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Smith–Waterman_algorithm</a>). I have been able to generate a list of lists, filled with zeros, to represent my matrix. I created a scoring algorithm to return a numerical score for each kind of alignment between bases (e.g. plus 4 for a match). Then I created an alignment algorithm that should put a score in each coordinate of my "matrix". However, when I go to print the matrix, it only returns the original with all zeros (rather than actual scores).</p>
<p>I know there are other methods of implementing this method (with numpy for example), so could you please tell me why this specific code (below) does not work? Is there a way to modify it, so that it does work?</p>
<p>code:</p>
<pre><code>def zeros(X: int, Y: int):
lenX = len(X) + 1
lenY = len(Y) + 1
matrix = []
for i in range(lenX):
matrix.append([0] * lenY)
def score(X, Y):
if X[n] == Y[m]: return 4
if X[n] == '-' or Y[m] == '-': return -4
else: return -2
def SmithWaterman(X, Y, score):
for n in range(1, len(X) + 1):
for m in range(1, len(Y) + 1):
align = matrix[n-1, m-1] + (score(X[n-1], Y[m-1]))
indelX = matrix[n-1, m] + (score(X[n-1], Y[m]))
indelY = matrix[n, m-1] + (score(X[n], Y[m-1]))
matrix[n, m] = max(align, indelX, indelY, 0)
print(matrix)
zeros("ACGT", "ACGT")
</code></pre>
<p>output:</p>
<pre><code>[[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
</code></pre>
|
<p>The reason it's just printing out the zeroed-out matrix is that the <code>SmithWaterman</code> function is never called, so the matrix is never updated.</p>
<p>You would need to do something like</p>
<pre class="lang-py prettyprint-override"><code># ...
SmithWaterman(X, Y, score)
print(matrix)
# ...
</code></pre>
<p>However, If you do this, you will find that this code is actually quite broken in many other ways. I've gone through and annotated some of the syntax errors and other issues with the code:</p>
<pre class="lang-py prettyprint-override"><code>def zeros(X: int, Y: int):
# ^ ^ incorrect type annotations. should be str
lenX = len(X) + 1
lenY = len(Y) + 1
matrix = []
for i in range(lenX):
matrix.append([0] * lenY)
# A more "pythonic" way of expressing the above would be:
# matrix = [[0] * len(Y) + 1 for _ in range(len(x) + 1)]
def score(X, Y):
# ^ ^ shadowing variables from outer scope. this is not a bug per se but it's considered bad practice
if X[n] == Y[m]: return 4
# ^ ^ variables not defined in scope
if X[n] == '-' or Y[m] == '-': return -4
# ^ ^ variables not defined in scope
else: return -2
def SmithWaterman(X, Y, score): # this function is never called
# ^ unnecessary function passed as parameter. function is defined in scope
for n in range(1, len(X) + 1):
for m in range(1, len(Y) + 1):
align = matrix[n-1, m-1] + (score(X[n-1], Y[m-1]))
# ^ invalid list lookup. should be: matrix[n-1][m-1]
indelX = matrix[n-1, m] + (score(X[n-1], Y[m]))
# ^ out of bounds error when m == len(Y)
indelY = matrix[n, m-1] + (score(X[n], Y[m-1]))
# ^ out of bounds error when n == len(X)
matrix[n, m] = max(align, indelX, indelY, 0)
# this should be nested in the inner for-loop. m, n, indelX, and indelY are not defined in scope here
print(matrix)
zeros("ACGT", "ACGT")
</code></pre>
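<p>For reference, here is one way the corrected version might look with those issues fixed. I've assumed the indel moves should be scored against a gap character <code>'-'</code>, matching the intent of the scoring function (this only fills the score matrix; traceback to recover the alignment itself is a separate step):</p>

```python
def smith_waterman(X, Y):
    # (len(X)+1) x (len(Y)+1) score matrix with a zeroed first row and column
    matrix = [[0] * (len(Y) + 1) for _ in range(len(X) + 1)]

    def score(a, b):
        if a == b:
            return 4
        if a == '-' or b == '-':
            return -4
        return -2

    for n in range(1, len(X) + 1):
        for m in range(1, len(Y) + 1):
            align = matrix[n - 1][m - 1] + score(X[n - 1], Y[m - 1])
            indel_x = matrix[n - 1][m] + score(X[n - 1], '-')  # gap in Y
            indel_y = matrix[n][m - 1] + score('-', Y[m - 1])  # gap in X
            matrix[n][m] = max(align, indel_x, indel_y, 0)
    return matrix

print(smith_waterman("ACGT", "ACGT")[4][4])  # 16 for a perfect 4-base match
```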
|
python|algorithm|dna-sequence|sequence-alignment
| 0 |
1,902,576 | 64,655,156 |
Why Python needs two storage blocks
|
<p>Why list in Python needs two storage blocks?</p>
<p><a href="https://www.educative.io/edpresso/tuples-vs-list-in-python" rel="nofollow noreferrer">List is stored in two blocks of memory (One is fixed-sized and the other is variable-sized for storing data)</a></p>
<p>Is it because one block stores the root address and the other one is to keep track of the dynamic changes of the list?</p>
|
<p>Splitting lists into a fixed-size metadata header and a variable-size data buffer lets the data buffer be reallocated without invalidating pointers other code is holding, since other code only holds pointers to the metadata header.</p>
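<p>You can observe a consequence of this layout from Python itself: the object's identity (in CPython, the address of its header) stays fixed while appends force the variable-size buffer to be reallocated behind the scenes. Note that <code>id()</code> being an address is a CPython implementation detail:</p>

```python
lst = []
header_id = id(lst)  # CPython: identity of the fixed-size header block

for i in range(100_000):
    lst.append(i)    # the variable-size data buffer is reallocated as needed

# The header never moved, so every existing reference to lst stays valid.
print(id(lst) == header_id)  # True
```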
|
python|python-3.x|list|memory-management
| 6 |
1,902,577 | 63,920,639 |
Conversion of Gstreamer launch to OpenCV pipeline for camera OV9281
|
<p>I’m trying to convert a gst-launch command to opencv pipeline. Using the following gst-launch , I am able to launch the camera,</p>
<pre><code>gst-launch-1.0 v4l2src device="/dev/video0" ! "video/x-raw,width=1280,height=800,format=(string)GRAY8" ! videoconvert ! videoscale ! "video/x-raw,width=640,height=400" ! xvimagesink sync=false
</code></pre>
<p>Now I need to convert this into Opencv Pipeline. I tried but I always get the following error:</p>
<blockquote>
<p>[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (711) open OpenCV | GStreamer warning: Error opening bin: could not parse caps “video/x-raw, , format=(string)GRAY8”
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
libpng warning: Image width is zero in IHDR
libpng warning: Image height is zero in IHDR
libpng error: Invalid IHDR data</p>
</blockquote>
<p>This is the pipeline I’m trying to run. I know openCV requires BGR format so,</p>
<pre><code>camSet='v4l2src device="/dev/video0" ! "video/x-raw,width=1280,height=800,format=(string)GRAY8" ! videoconvert ! "video/x-raw,width=640,height=400,format=BGRx" ! videoconvert ! video/x-raw, format=BGR ! appsink'
cam = cv2.VideoCapture(camSet, cv2.CAP_GSTREAMER)
_, frame = cam.read()
cv2.imwrite('test' + '.png', frame)
cam.release()
</code></pre>
<p>Can anyone assist me with this?</p>
|
<p>I resolved it by using</p>
<pre><code>camSet = 'v4l2src device=/dev/video0 ! video/x-raw,width=1280,height=800,format=(string)GRAY8 ! videoconvert ! videoscale ! video/x-raw,width=640,height=400,format=BGR ! appsink'
</code></pre>
|
opencv|python-3.6|gstreamer
| 1 |
1,902,578 | 64,150,463 |
keep rows and the previous one which are not equal
|
<p>I have this kind of dataframe :</p>
<pre><code>d = {1 : [False,False,False,False,True],2:[False,True,True,True,False],3 :[True,False,False,False,True]}
df = pd.DataFrame(d)
df :
1 2 3
0 False False True
1 False True False
2 False True False
3 False True False
4 True False True
</code></pre>
<p>My goal is to keep rows n+1 and n where rows n+1 and n are differents. In the example df, the result would be :</p>
<pre><code>df_result :
1 2 3
0 False False True
1 False True False
3 False True False
4 True False True
</code></pre>
<p>I have already tried the line <code>df_result = df.ne(df.shift())</code> and kept only rows where there is at least one True, but it doesn't get row 3.</p>
<p>Any idea how i can have the expected result ?</p>
<p>Thanks !</p>
|
<p>I believe you need to compare not-equal with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.ne.html" rel="nofollow noreferrer"><code>DataFrame.ne</code></a>, shifting by <code>1</code> and by <code>-1</code>, get at least one per-row match with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.any.html" rel="nofollow noreferrer"><code>DataFrame.any</code></a>, and chain the two masks with <code>|</code> for bitwise <code>OR</code>:</p>
<pre><code>df_result = df[df.ne(df.shift()).any(axis=1) | df.ne(df.shift(-1)).any(axis=1)]
print (df_result)
1 2 3
0 False False True
1 False True False
3 False True False
4 True False True
</code></pre>
<p>Another similar idea:</p>
<pre><code>df_result = df[(df.ne(df.shift()) | df.ne(df.shift(-1))).any(axis=1)]
</code></pre>
|
python|pandas|dataframe
| 1 |
1,902,579 | 65,333,465 |
pygame chromebook key names
|
<p>I am currently following a pygame tutorial on a chromebook on which i have installed and linux to use IDLE. i am writing a block of code which assigns and x-axis increase or decrase to the arrow keys:</p>
<pre><code>import pygame
pygame.init()
displayWidth = 800
displayHeight = 600
black = (0, 0, 0)
white = (255, 255, 255)
clock = pygame.time.Clock()
crashed = False
carImg = pygame.image.load('racecar.png')
def car(x,y):
gameDisplay.blit(carImg, (x,y))
x = (displayWidth * 0.45)
y = (displayHeight * 0.8)
x_change = 0
car_speed = 0
gameDisplay = pygame.display.set_mode((displayHeight, displayWidth))
pygame.display.set_caption('Zoomer')
clock = pygame.time.Clock()
while not crashed:
for event in pygame.event.get():
if event.type == pygame.QUIT:
crashed = True
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_RIGHT:
x_change = -5
elif event.key == pygame.K_RIGHT:
x_change = 5
if event.type == pygame.KEYUP:
if event.key == pygame.K_LEFT or event.key == pygame.K_RIGHT:
x_change = 0
x += x_change
gameDisplay.fill(white)
car(x,y)
pygame.display.update()
clock.tick(60)
pygame.quit()
quit()
</code></pre>
<p>When I try to run the code, the car sprite won't budge. Is this due to the fact that I am on a Chromebook and the key names are different, or is there another reason? Thanks.</p>
|
<p>It doesn't work because your indentation is wrong.</p>
<p>You check <code>pygame.KEYDOWN</code> inside <code>if ... pygame.QUIT:</code>, which is executed only when you close the window.</p>
<p>You need all the <code>if event.type</code> checks to start in the same column (note also that your code tests <code>K_RIGHT</code> twice; one of them should be <code>K_LEFT</code>):</p>
<pre><code>while not crashed:
for event in pygame.event.get():
if event.type == pygame.QUIT:
crashed = True
if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_LEFT:   # the question tested K_RIGHT twice
x_change = -5
elif event.key == pygame.K_RIGHT:
x_change = 5
if event.type == pygame.KEYUP:
if event.key == pygame.K_LEFT or event.key == pygame.K_RIGHT:
x_change = 0
</code></pre>
|
python|pygame
| 1 |
1,902,580 | 65,193,744 |
How to run wsadmin scripts in single wrapper shell script
|
<p>I have some wsadmin Python scripts and I tried to include them in a single script, as shown below.
I need to execute all these scripts with the help of a single wrapper script, but I have been facing errors while executing it, so can anyone kindly let me know what's wrong with my script here?</p>
<pre><code>#!/usr/bin/env python3
sh wsadmin.sh -lang jython -f /home/Devop/listApps.py
sh wsadmin.sh -lang jython -f /home/Devop/cluster.py
sh wsadmin.sh -lang jython -f /home/Devop/heap.py
sh wsadmin.sh -lang jython -f /home/Devop/Dslist.py
sh wsadmin.sh -lang jython -f /home/Devop/listservers.py
</code></pre>
<pre><code>root@bin]# ./wsadmin.sh -lang jython -f /home/Devop/wrapper.py
WASX7209I: Connected to process "server1" on node localhostNode02 using SOAP connector; The type of process is: UnManagedProcess
WASX7017E: Exception received while running file "/home/Devop/wrapper.py"; exception information: com.ibm.bsf.BSFException: exception from Jython:
Traceback (innermost last):
(no code object) at line 0
File "<string>", line 3
sh wsadmin.sh -lang jython -f /home/Devop/listApps.py
^
SyntaxError: invalid syntax
</code></pre>
<p>I am getting this syntax error</p>
|
<p>What you have now could work if you rename /home/Devop/wrapper.py to /home/Devop/wrapper.sh (with the full path to wsadmin.sh inside) and run it as a shell script:
root@bin]# /home/Devop/wrapper.sh</p>
<p>But I believe you are using Python in the wrong way here: those lines are shell commands, not Python, which is why wsadmin's Jython interpreter raises a <code>SyntaxError</code>. I would suggest you define a function for each of your Python files and include them in one base script to run.</p>
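<p>If you do keep a shell wrapper, it might look like the sketch below. <code>WSADMIN</code> defaults to <code>echo</code> here so the script just prints the commands (a dry run); point it at your real <code>wsadmin.sh</code> to execute them. The paths are assumptions taken from the question:</p>

```shell
#!/bin/sh
# Hypothetical wrapper.sh: loop over the jython scripts and feed each one
# to wsadmin.sh. WSADMIN defaults to `echo` for a safe dry run.
WSADMIN="${WSADMIN:-echo}"

for script in listApps.py cluster.py heap.py Dslist.py listservers.py; do
    "$WSADMIN" -lang jython -f "/home/Devop/$script" || exit 1
done
```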
|
python|shell|unix|websphere|websphere-8
| -1 |
1,902,581 | 65,246,817 |
Getting array of classes from model
|
<p>I am trying to get a prediction with the probabilities for each possible class.
The model is made with Azure AutoML and I don't know what the indices of the classes are. I know I could get this information from multiple runs with different data that could predict different classes, but I would like it returned each time.</p>
<pre><code>def init():
global model
# This name is model.id of model that we want to deploy deserialize the model file back
# into a sklearn model
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'model.pkl')
path = os.path.normpath(model_path)
path_split = path.split(os.sep)
log_server.update_custom_dimensions({'model_name': path_split[1], 'model_version': path_split[2]})
try:
logger.info("Loading model from path.")
model = joblib.load(model_path)
logger.info("Loading successful.")
except Exception as e:
logging_utilities.log_traceback(e, logger)
raise
@input_schema('data', PandasParameterType(input_sample))
@output_schema(NumpyParameterType(output_sample))
def run(data):
try:
resultclass = model.predict(data)
resultprob = model.predict_proba(data)
return json.dumps({"classes": model.classes_.tolist(), "probability": resultprob.tolist()})
except Exception as e:
result = str(e)
return json.dumps({"error": result})
</code></pre>
<p>using</p>
<pre><code>result = model.predict(data)
</code></pre>
<p>returns one of the classes like "PARTS"</p>
<p>using</p>
<pre><code>result = model.predict_proba(data)
</code></pre>
<p>returns the array of the different classes probabilities like [[0.2001282610210249, 0.0636559071698174, 0.03661803212989511, 0.4096565578555216, 0.2866744587788889, 0.003266783044852147]]</p>
<p>following other recommendations I used</p>
<pre><code>model.classes_.tolist()
</code></pre>
<p>which I thought would give me the list of classes but I get [0, 1, 2, 3, 4, 5]</p>
<p>Is there a way to get all of the classes from the model?</p>
|
<p>For a single prediction you may try <code>np.argmax</code> like:</p>
<pre><code>proba = [[0.2001282610210249, 0.0636559071698174, 0.03661803212989511, 0.4096565578555216, 0.2866744587788889, 0.003266783044852147]]
np.argmax(proba)
</code></pre>
<p>If your got a 2D <code>proba</code> array from <code>predict_proba</code> try:</p>
<pre><code>[np.argmax(p) for p in proba]
</code></pre>
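<p>As for the class names: <code>model.classes_</code> returning <code>[0, 1, 2, 3, 4, 5]</code> suggests the string labels were label-encoded before fitting, so <code>classes_</code> only holds the encoded values. If you keep your own list of label names in the same encoded order, you can map each argmax index back to a name. The names below are hypothetical (only 'PARTS' comes from the question):</p>

```python
import numpy as np

proba = [[0.2001, 0.0637, 0.0366, 0.4097, 0.2867, 0.0033]]

# Hypothetical label names in encoded order -- replace with your real mapping.
labels = ['FREIGHT', 'LABOR', 'MISC', 'PARTS', 'TAX', 'OTHER']

predicted = [labels[int(np.argmax(p))] for p in proba]
print(predicted)  # ['PARTS']
```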
|
python|machine-learning|scikit-learn|azure-machine-learning-studio
| 0 |
1,902,582 | 71,879,203 |
Curve 25519 Symmetric Key with Python
|
<p>I'm using Apple's CryptoKit to create keys for an iOS app, encrypt the data and then send it to the backend via JSON and store it in a PGSQL database.</p>
<p>While all of that is working perfectly, I need to be able to decrypt the data from the backend, and thus need to be able to create the same symmetric key I used to encrypt the data.</p>
<p>When I created the keys via Swift, it was done as follows:</p>
<pre><code>let privateKey = Curve25519.KeyAgreement.PrivateKey()
let publicKey = privateKey.publicKey
let sharedSecret = try! privateKey.sharedSecretFromKeyAgreement(with: publicKey)
let symmetricKey = sharedSecret.hkdfDerivedSymmetricKey(using: SHA512.self,
salt: "\(vhvioerhvoreovjreoivgifjtughrygryrufejewdf))".data(using: .utf8)!,
sharedInfo: Data(),
outputByteCount: 32)
</code></pre>
<p>Note: The salt is just me typing a bunch of random characters for this example code, but you get the idea.</p>
<p>I need to accomplish the same thing using Python. The keys are base64 encoded strings sent via JSON to the backend as well so I need to do a b64decode on them (which I've already got working).</p>
<p>I tried using pynacl but I cannot figure out how to create the symmetric key, using sha512 and the same salt I used to create the symmetric key in Swift. I am also in no way tied to pynacl if there's a better option.</p>
<p>Again, the above code is just an example of the process used to create a private, public, and symmetric key in Swift. I am actually using a public key from one account, with a private key from another to create the symmetric key and encrypt the data. I will be doing the same in reverse to decrypt (i.e. a public key from the private key account, and a private key from the public key account).</p>
<p>So far, playing around with it in python I have the following (but again, I am not tied to pynacl if there's a better solution):</p>
<pre><code>from nacl.public import Box, PrivateKey, PublicKey, SealedBox
from nacl.hash import sha512
import nacl.encoding
from base64 import b64decode, b64encode
import binascii
privKeyRead = b64decode('uBruInrnbtrberverv6XZZqQDDeS4SwORSHriW04=')
private_key = PrivateKey(privKeyRead)
pubKeyRead = b64decode('UIZSc3QBfewojfoewjgowjgCqA/P8PjXDlQwU7rTHFBw=')
public_key = PublicKey(pubKeyRead)
salt = b64decode('HIHIUGUBLJOIHIBIOHO9nM0NSSkNDejUwcXJMNUdlUT0=')
cryptoBox = Box(private_key, public_key)
sharedSecret = Box.shared_key(cryptoBox)
# Print Keys and Salt
print("Private Key:", privKeyRead)
print("Public Key:", pubKeyRead)
print("Salt:", salt)
print("Crypto Box:", cryptoBox)
print("Shared Secret:", sharedSecret)
return printed
</code></pre>
<p>Note: The privKeyRead, pubKeyRead, and salt values have garbage characters for this example as I can't obviously post the real values. I have also noticed that both the Crypto Box and the Shared Secret are identical, so I'm pretty sure I would only need one and not both.</p>
<p>Lastly, and I don't want anyone to get hung up on this, I am using Zope5 which is why you see my python example as it is. This is a python script in zope which is perfectly valid. I can also create external methods, so if you have functions, etc. that will work better, please feel free to post exactly how you would do it where zope isn't in the equation. I'll reconfigure for zope if necessary.</p>
|
<p>The Swift code generates a private key, determines the related public key, derives a shared secret using X25519, and derives the symmetric key using HKDF:</p>
<pre class="lang-swift prettyprint-override"><code>import Foundation
import Crypto
let privateKey = Curve25519.KeyAgreement.PrivateKey()
let publicKey = privateKey.publicKey
let sharedSecret = try! privateKey.sharedSecretFromKeyAgreement(with: publicKey)
let symmetricKey = sharedSecret.hkdfDerivedSymmetricKey(using: SHA512.self,
salt: "a test salt".data(using: .utf8)!,
sharedInfo: Data(),
outputByteCount: 32)
print("Private key : ", privateKey.rawRepresentation.base64EncodedString())
print("Public key : ", publicKey.rawRepresentation.base64EncodedString())
print("Shared secret: ", sharedSecret.withUnsafeBytes {return Data(Array($0)).base64EncodedString()})
print("Symmetric key: ", symmetricKey.withUnsafeBytes {return Data(Array($0)).base64EncodedString()})
</code></pre>
<p>A possible output is:</p>
<pre class="lang-none prettyprint-override"><code>Private key : 8AtqpW6UJBhAEzTnvHMQ8ki28TrDvAEbKuV3FDiROWw=
Public key : kICzRWQYcawmlVJpSJ2TINuUDmI0xGm2BnH10qHVxxs=
Shared secret: ijACBaZfaCQuvwwILb5uncYroZ4MLBzOfNZLD5khjz4=
Symmetric key: Il570rzDZ9brWBtxUtWv/Hv29rN6pBls7/AtOK+2oPU=
</code></pre>
<p>For the implementation in Python a library is needed that supports X25519 and HKDF, e.g. Cryptography (Cryptography seems to be the better choice here compared to PyNaCl, since the latter does not support HKDF):</p>
<pre class="lang-py prettyprint-override"><code>from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
import base64
# X25519
private_key = X25519PrivateKey.generate()
public_key = private_key.public_key()
shared_secret = private_key.exchange(public_key)
# HKDF
symmetric_key = HKDF(
algorithm=hashes.SHA512(),
length=32,
salt='a test salt'.encode('utf-8'),
info=b'',
).derive(shared_secret)
print(base64.b64encode(shared_secret))
print(base64.b64encode(symmetric_key))
</code></pre>
<p>Test:<br>
For testing, use the keys of the Swift code, which can be imported as follows:</p>
<pre class="lang-py prettyprint-override"><code>from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
...
private_key = X25519PrivateKey.from_private_bytes(base64.b64decode("8AtqpW6UJBhAEzTnvHMQ8ki28TrDvAEbKuV3FDiROWw="))
public_key = X25519PublicKey.from_public_bytes(base64.b64decode("kICzRWQYcawmlVJpSJ2TINuUDmI0xGm2BnH10qHVxxs="))
</code></pre>
<p>With this, the Python code provides the shared secret and the key of the Swift code.</p>
|
python|python-3.x|encryption|encryption-symmetric|elliptic-curve
| 1 |
1,902,583 | 71,818,066 |
Python - Can't divide column values
|
<p>I'm trying to divide each value of a specific column in <code>df_rent</code> dataframe by simply accessing each value and divide by 1000. But it's returning an error and I cannot understand the reason.</p>
<p>Type of the column is float64.</p>
<pre><code>for i in df_rent['distance_new']:
df_rent[i] = df_rent[i] / 1000
print(df_rent[i])
</code></pre>
|
<p>The error occurs because when you loop over <code>df_rent['distance_new']</code>, <code>i</code> is assigned the value of each cell in 'distance_new' (the first, then the second, then the third), not a positional index. What you should do is rather simple:</p>
<pre><code>df_rent['distance_new']/=1000
</code></pre>
<p>In case someone doesn't understand, the /= operator takes the LHS value, divides it by the RHS, then replaces the LHS value with the result. The LHS can be an int, a float, or in this case a whole column. This solution also works on multiple columns if you slice them correctly, something like:</p>
<pre><code>df_rent.loc[:,['distance_new','other_col_1','other_col2']]/=1000
</code></pre>
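<p>For illustration, a quick demonstration of both forms on a toy frame (the column values here are made up for the example):</p>

```python
import pandas as pd

# Hypothetical data standing in for df_rent
df_rent = pd.DataFrame({'distance_new': [1500.0, 2500.0],
                        'other_col_1': [1000.0, 3000.0]})

# Divide a single column in place
df_rent['distance_new'] /= 1000

# Divide several columns at once via .loc slicing
df_rent.loc[:, ['other_col_1']] /= 1000

print(df_rent)
```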
|
python|jupyter-notebook|divide
| 1 |
1,902,584 | 71,812,683 |
Removing values from column within groups based on conditions
|
<p>I am really struggling with this even though I feel like it should be extremely easy.</p>
<p>I have a dataframe that looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Title</th>
<th>Release Date</th>
<th>Released</th>
<th>In Stores</th>
</tr>
</thead>
<tbody>
<tr>
<td>Seinfeld</td>
<td>1995</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Seinfeld</td>
<td>1999</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>Seinfeld</td>
<td>1999</td>
<td></td>
<td>Yes</td>
</tr>
<tr>
<td>Friends</td>
<td>2000</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>Friends</td>
<td>2004</td>
<td></td>
<td>Yes</td>
</tr>
<tr>
<td>Friends</td>
<td>2004</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>I am first grouping by <code>Title</code>, and then <code>Release Date</code> and then observing the values of <code>Released</code> and <code>In Stores</code>. If both <code>Released</code> and <code>In Stores</code> have a value of "Yes" in the same <code>Release Date</code> year, then remove the <code>In Stores</code> value.</p>
<p>So in the above dataframe, the category Seinfeld --> 1999 would have the "Yes" removed from <code>In Stores</code>, but the "Yes" would stay in the <code>In Stores</code> category for "2004" since it is the only "Yes" in the Friends --> 2004 category.</p>
<p>I am starting by using</p>
<pre><code>df.groupby(['Title', 'Release Date'])['Released', 'In Stores'].count()
</code></pre>
<p>But I cannot figure out the syntax of removing values from <code>In Stores</code>.</p>
<p>Desired output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Title</th>
<th>Release Date</th>
<th>Released</th>
<th>In Stores</th>
</tr>
</thead>
<tbody>
<tr>
<td>Seinfeld</td>
<td>1995</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Seinfeld</td>
<td>1999</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>Seinfeld</td>
<td>1999</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Friends</td>
<td>2000</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>Friends</td>
<td>2004</td>
<td></td>
<td>Yes</td>
</tr>
<tr>
<td>Friends</td>
<td>2004</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>EDIT: I have tried this line given in the top comment:</p>
<pre><code>flag = (df.groupby(['Title', 'Release Date']).transform(lambda x: (x == 'Yes').any()) .all(axis=1))
</code></pre>
<p>but the kernel runs indefinitely.</p>
|
<p>You can use <code>groupby.transform</code> to flag rows where <code>In Stores</code> needs to be removed, based on whether the row's <code>['Title', 'Release Date']</code> group has at least one value of 'Yes' in column <code>Released</code>, and also in column <code>In Stores</code>.</p>
<pre><code>flag = (df.groupby(['Title', 'Release Date'])
.transform(lambda x: (x == 'Yes').any())
.all(axis=1))
print(flag)
0 False
1 True
2 True
3 False
4 False
5 False
dtype: bool
df.loc[flag, 'In Stores'] = np.nan
</code></pre>
<p>Result:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Title</th>
<th style="text-align: right;">Release Date</th>
<th style="text-align: left;">Released</th>
<th style="text-align: left;">In Stores</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Seinfeld</td>
<td style="text-align: right;">1995</td>
<td style="text-align: left;">nan</td>
<td style="text-align: left;">nan</td>
</tr>
<tr>
<td style="text-align: left;">Seinfeld</td>
<td style="text-align: right;">1999</td>
<td style="text-align: left;">Yes</td>
<td style="text-align: left;">nan</td>
</tr>
<tr>
<td style="text-align: left;">Seinfeld</td>
<td style="text-align: right;">1999</td>
<td style="text-align: left;">nan</td>
<td style="text-align: left;">nan</td>
</tr>
<tr>
<td style="text-align: left;">Friends</td>
<td style="text-align: right;">2000</td>
<td style="text-align: left;">Yes</td>
<td style="text-align: left;">nan</td>
</tr>
<tr>
<td style="text-align: left;">Friends</td>
<td style="text-align: right;">2004</td>
<td style="text-align: left;">nan</td>
<td style="text-align: left;">Yes</td>
</tr>
<tr>
<td style="text-align: left;">Friends</td>
<td style="text-align: right;">2004</td>
<td style="text-align: left;">nan</td>
<td style="text-align: left;">nan</td>
</tr>
</tbody>
</table>
</div>
|
python|pandas|dataframe|filtering
| 1 |
1,902,585 | 68,648,642 |
Approximate function from two dimensional array
|
<p>I have a black and white image that is filtered to contain only the value 0 or 255. The image shows a white line on a black background. As the line is multiple pixels thick, the array has columns with multiple non-zero values. <br />
However, I want to approximate the white line as a function from the image, therefore I only want to have one value per column. <br />
This value should ideally be the median of all the other disregarded pixels. <br />
I achieved this by retrieving the indices of the non-zero values, taking their median, and setting the values at all remaining indices to zero. However, this is a really slow approach; is there something more efficient? Another approach was to use np.argmax(), which also turned out to be too slow.</p>
<pre><code>def multi_valued_threshold(image_data: np.ndarray) -> np.ndarray:
return cv.threshold(image_data, 50, 255, cv.THRESH_BINARY)[1]
def one_valued_threshold(image_data: np.ndarray) -> np.ndarray:
data = multi_valued_threshold(image_data)
y = data.shape[1]
for c, d in enumerate(data):
# print("Before",d)
# get the median of the row (center of gravity)
if not np.any(d): continue
center = np.nonzero(d)
# center = int(center)
mark = True
print(center[0])
for i in center[0]:
if mark:
data[c][i] = 255
mark = False
else:
data[c][i] = 0
# data[c][data[c] == 255] = 0
# data[c][int(np.median(center))] = 255
return data
</code></pre>
<p><a href="https://i.stack.imgur.com/yM5S6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yM5S6.png" alt="Orignial image (.bmp)" /></a></p>
<p><a href="https://i.stack.imgur.com/r5Uw4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r5Uw4.png" alt="Goal (achieved by using np.argmax() too slow!)" /></a>
<a href="https://i.stack.imgur.com/BU2KF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BU2KF.png" alt="Using thresholding" /></a></p>
|
<p>I would do something like this:</p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
img = plt.imread('yM5S6.png')
img = img.sum(-1)>0.9
plt.figure(figsize = (10, 5))
plt.imshow(img)
col_idx = np.arange(img.shape[0], 0, -1).astype(float)
col_idxs = col_idx.reshape(-1, 1).repeat(img.shape[1], 1)
col_idxs[(1-img).astype(bool)] = np.nan
data = np.nanmedian(col_idxs, 0)
plt.figure(figsize = (10, 5))
plt.plot(data)
</code></pre>
<p>Output:
<a href="https://i.stack.imgur.com/jJ05E.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jJ05E.png" alt="enter image description here" /></a></p>
|
python|numpy|opencv|computer-vision
| 0 |
1,902,586 | 5,264,492 |
How to calculate tag-wise precision and recall for POS tagger?
|
<p>I am using some rule-based and statistical POS taggers to tag a corpus (of around <strong>5000 sentences</strong>) with parts of speech (POS). Following is a snippet of my test corpus, where each word is separated from its respective POS tag by '/'.</p>
<pre><code>No/RB ,/, it/PRP was/VBD n't/RB Black/NNP Monday/NNP ./.
But/CC while/IN the/DT New/NNP York/NNP Stock/NNP Exchange/NNP did/VBD n't/RB fall/VB apart/RB Friday/NNP as/IN the/DT Dow/NNP Jones/NNP Industrial/NNP Average/NNP plunged/VBD 190.58/CD points/NNS --/: most/JJS of/IN it/PRP in/IN the/DT final/JJ hour/NN --/: it/PRP barely/RB managed/VBD *-2/-NONE- to/TO stay/VB this/DT side/NN of/IN chaos/NN ./.
Some/DT ``/`` circuit/NN breakers/NNS ''/'' installed/VBN */-NONE- after/IN the/DT October/NNP 1987/CD crash/NN failed/VBD their/PRP$ first/JJ test/NN ,/, traders/NNS say/VBP 0/-NONE- *T*-1/-NONE- ,/, *-2/-NONE- unable/JJ *-3/-NONE- to/TO cool/VB the/DT selling/NN panic/NN in/IN both/DT stocks/NNS and/CC futures/NNS ./.
</code></pre>
<p>After tagging the corpus, it looks like this:</p>
<pre><code>No/DT ,/, it/PRP was/VBD n't/RB Black/NNP Monday/NNP ./.
But/CC while/IN the/DT New/NNP York/NNP Stock/NNP Exchange/NNP did/VBD n't/RB fall/VB apart/RB Friday/VB as/IN the/DT Dow/NNP Jones/NNP Industrial/NNP Average/JJ plunged/VBN 190.58/CD points/NNS --/: most/RBS of/IN it/PRP in/IN the/DT final/JJ hour/NN --/: it/PRP barely/RB managed/VBD *-2/-NONE- to/TO stay/VB this/DT side/NN of/IN chaos/NNS ./.
Some/DT ``/`` circuit/NN breakers/NNS ''/'' installed/VBN */-NONE- after/IN the/DT October/NNP 1987/CD crash/NN failed/VBD their/PRP$ first/JJ test/NN ,/, traders/NNS say/VB 0/-NONE- *T*-1/-NONE- ,/, *-2/-NONE- unable/JJ *-3/-NONE- to/TO cool/VB the/DT selling/VBG panic/NN in/IN both/DT stocks/NNS and/CC futures/NNS ./.
</code></pre>
<p>I need to calculate the tagging accuracy (<strong>tag-wise recall & precision</strong>), and therefore need to find any tagging errors for each word-tag pair. </p>
<p>The approach I am thinking of is to loop through these 2 text files and store them in a list and later compare the 'two' lists element by element. </p>
<p>The approach seems really crude to me, so would like you guys to suggest some better solution to the above problem.</p>
<p>From the <a href="http://en.wikipedia.org/wiki/Precision_and_recall" rel="noreferrer">wikipedia</a> page:</p>
<blockquote>
<p>In a classification task, the
<strong>precision</strong> for a class is the number of
true positives (i.e. the number of
items correctly labeled as belonging
to the positive class) divided by the
total number of elements labeled as
belonging to the positive class (i.e.
the sum of true positives and false
positives, which are items incorrectly
labeled as belonging to the class).
<strong>Recall</strong> in this context is defined as
the number of true positives divided
by the total number of elements that
actually belong to the positive class
(i.e. the sum of true positives and
false negatives, which are items which
were not labeled as belonging to the
positive class but should have been).</p>
</blockquote>
|
<p>Note that since every word has exactly one tag, overall recall and precision scores are meaningless for this task (they'll both just equal the accuracy measure). But it does make sense to ask for recall and precision measures per tag - for example, you could find the recall and precision for the DT tag.</p>
<p>The most efficient way to do this for all tags at once is similar to the way you suggested, though you can save one pass over the data by skipping the list-making stage. Read in a line of each file, compare the two lines word by word, and repeat until you reach the end of the files. For each word comparison, you probably want to check the words are equal for sanity, rather than assuming the two files are in sync. For each kind of tag, you keep three running totals: true positives, false positives and false negatives. If the two tags for the current word match, increment the true positive total for the tag. If they don't match, you need to increment the false negative total for the true tag and the false positive total for the tag your machine mistakenly chose. At the end, you can calculate recall and precision scores for each tag by following the formula in your Wikipedia excerpt.</p>
<p>I haven't tested this code and my Python's a bit rusty, but this should give you the idea. I'm assuming the files are open and the <code>totals</code> data structure is a dictionary of dictionaries:</p>
<pre><code>finished = False
while not finished:
trueLine = testFile.readline()
if not trueLine: # end of file
finished = True
else:
trueLine = trueLine.split() # tokenise by whitespace
taggedLine = taggedFile.readline()
if not taggedLine:
print 'Error: files are out of sync.'
taggedLine = taggedLine.split()
if len(trueLine) != len(taggedLine):
print 'Error: files are out of sync.'
for i in range(len(trueLine)):
truePair = trueLine[i].split('/')
taggedPair = taggedLine[i].split('/')
if truePair[0] != taggedPair[0]: # the words should match
print 'Error: files are out of sync.'
trueTag = truePair[1]
guessedTag = taggedPair[1]
if trueTag == guessedTag:
totals[trueTag]['truePositives'] += 1
else:
totals[trueTag]['falseNegatives'] += 1
totals[guessedTag]['falsePositives'] += 1
</code></pre>
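<p>For the final step, a sketch of turning those running totals into per-tag scores (written in Python 3 here; the counter keys match the snippet above, and a zero-division guard is added for tags that never occur):</p>

```python
def precision_recall(totals):
    """Compute per-tag precision and recall from true positive,
    false positive and false negative counts."""
    scores = {}
    for tag, c in totals.items():
        tp = c['truePositives']
        fp = c['falsePositives']
        fn = c['falseNegatives']
        # precision = tp / (tp + fp), recall = tp / (tp + fn)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores[tag] = (precision, recall)
    return scores

# Example: 8 DT tags correct, 2 words wrongly tagged DT,
# and 1 true DT tagged as something else.
totals = {'DT': {'truePositives': 8, 'falsePositives': 2, 'falseNegatives': 1}}
print(precision_recall(totals))
```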
|
python|shell|nlp|machine-learning|text-processing
| 7 |
1,902,587 | 5,264,228 |
Store a lot of data inside python
|
<p>Maybe I'll start with a small introduction to my problem. I'm writing a Python program which will be used for post-processing of different physical simulations. Every simulation can create up to 100 GB of output. I deal with different information (like positions, fields and densities, ...) for different time steps. I would like to have access to all this data at once, which isn't possible because I don't have enough memory on my system. Normally I read a file, do some operations, and clear the memory. Then I read other data, do some operations, and clear the memory.</p>
<p>Now my problem. If I do it this way, I spend a lot of time reading the same data more than once, which takes a lot of time. I would like to read it only once and store it for easy access. Do you know a method to store a lot of data which is really fast to access, or which doesn't need a lot of space?</p>
<p>I just created a method which is around ten times faster than a normal open-read. But I use <code>cat</code> (linux command) for that. It's a really dirty method and I would like to kick it out of my script.</p>
<p>Is it possible to use databases to store this data and to get the data faster than normal reading? (Sorry for this question, but I'm not a computer scientist and I don't have a lot of knowledge of databases.)</p>
<p><strong>EDIT:</strong></p>
<p>My cat-code look something like this - only a example:</p>
<pre><code>out = string.split(os.popen("cat "+base+"phs/phs01_00023_"+time).read())
# and if I want to have this data as arrays then I normally use and reshape (if I
# need it)
out = array(out)
out = reshape(out)
</code></pre>
<p>Normally I would use the numpy method <code>numpy.loadtxt</code>, which needs about the same time as normal reading:</p>
<pre><code>f = open('filename')
f.read()
...
</code></pre>
<p>I think that <code>loadtxt</code> just uses the normal reading methods with some additional code lines.</p>
<p>I know there are some better ways to read out data. But everything I found was really slow. I will now try <code>mmap</code> and hopefully get better performance.</p>
|
<p>I would try using <a href="http://en.wikipedia.org/wiki/Hdf5" rel="noreferrer">HDF5</a>. There are two commonly used Python interfaces, <a href="http://code.google.com/p/h5py/" rel="noreferrer">h5py</a> and <a href="http://www.pytables.org/" rel="noreferrer">PyTables</a>. While the latter seems to be more widespread, I prefer the former.</p>
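<p>A minimal h5py sketch of how that might look (the dataset name and array shapes here are invented for the example; the point is that chunked datasets let you slice single time steps off disk without loading everything into memory):</p>

```python
import os
import tempfile

import h5py
import numpy as np

path = os.path.join(tempfile.mkdtemp(), 'simulation.h5')

# Write once: one chunk per time step, with compression,
# so later reads can pull out a single step cheaply.
with h5py.File(path, 'w') as f:
    positions = np.random.rand(100, 1000, 3)  # (time step, particle, xyz)
    f.create_dataset('positions', data=positions,
                     chunks=(1, 1000, 3), compression='gzip')

# Read back only time step 42: slicing an h5py dataset reads
# just that part from disk instead of the whole array.
with h5py.File(path, 'r') as f:
    step42 = f['positions'][42]

print(step42.shape)
```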
|
python
| 7 |
1,902,588 | 61,687,307 |
R not changing Plotly colorscales for Contour plots
|
<p>Okay, so I am using a generic data set to troubleshoot this problem. Here is the code I am entering into R: </p>
<pre><code>library(plotly)
fig <- plot_ly(
type = 'contour',
z = matrix(c(10, 10.625, 12.5, 15.625, 20, 5.625, 6.25, 8.125,
11.25, 15.625, 2.5, 3.125, 5, 8.125, 12.5, 0.625,
1.25, 3.125, 6.25, 10.625, 0, 0.625, 2.5, 5.625,
10), nrow=5, ncol=5),
colorscale = 'plasma',
autocontour = F,
contours = list(
start = 0,
end = 8,
size = 2
)
)
fig
</code></pre>
<p>As you can see, I have the colorscale argument set to plasma, which is a built-in colorscale according to plotly ( <a href="https://plotly.com/python/builtin-colorscales/" rel="nofollow noreferrer">https://plotly.com/python/builtin-colorscales/</a> )</p>
<p>However, when I actually execute the code, the resulting graph is NOT in plasma colorscale, which goes from purple to red to yellow.</p>
<p><a href="https://i.stack.imgur.com/9Tnfz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9Tnfz.png" alt="enter image description here"></a></p>
<p>But when I set the colorscale = 'Jet', the resulting graph is in jet colorscale.
<a href="https://i.stack.imgur.com/2Kh4a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2Kh4a.png" alt="enter image description here"></a></p>
<p>How do I fix this? I want to be able to quickly change the colorscale to any of the built-in ones, so I can see which one my plot looks best in. I also don't want to have to manually define the color for each level of the plot.</p>
<p>Furthermore, when going to the colorscale page for R, the section on contour plots doesn't specify the colorscale variable at all. </p>
<p><a href="https://plotly.com/r/colorscales/" rel="nofollow noreferrer">https://plotly.com/r/colorscales/</a></p>
<p>Maybe I am missing something in the <a href="https://plotly.com/r/contour-plots/" rel="nofollow noreferrer">https://plotly.com/r/contour-plots/</a> page, but I can't find code that integrates defined start and stop points and built-in colorscale besides the one shown above.</p>
|
<p>Your issue is that the color scale 'plasma' is available for the <a href="https://plotly.com/python/builtin-colorscales/" rel="nofollow noreferrer">python version</a> of plotly, but not for the <a href="https://plotly.com/r/builtin-colorscales/" rel="nofollow noreferrer">R version</a></p>
<p>According to the <a href="https://plotly.com/r/builtin-colorscales/" rel="nofollow noreferrer">R documentation</a>, you can check the supported color scales in R with:</p>
<pre><code>library("RColorBrewer")
brewer.pal.info
</code></pre>
<blockquote>
<p>BrBG PiYG PRGn PuOr RdBu RdGy RdYlBu RdYlGn Spectral Accent Dark2
Paired Pastel1 Pastel2 Set1 Set2 Set3 Blues BuGn BuPu GnBu Greens
Greys Oranges OrRd PuBu PuBuGn PuRd Purples RdPu Reds YlGn YlGnBu
YlOrBr YlOrRd</p>
</blockquote>
<p>Specifically for <a href="https://plotly.com/r/reference/#contour-colorscale" rel="nofollow noreferrer">contour plots</a> the supported scales:</p>
<blockquote>
<p>Blackbody,Bluered,Blues,Cividis,Earth,Electric,Greens,Greys,Hot,Jet,Picnic,Portland,Rainbow,RdBu,Reds,Viridis,YlGnBu,YlOrRd.</p>
</blockquote>
<p>The strings are case-sensitive, so <code>colorscale = "Viridis"</code> will work, but <code>colorscale = "viridis"</code> won't.</p>
|
python|r|plotly
| 1 |
1,902,589 | 61,856,270 |
How do I put max and min limit on Django Admin Table's float fields?
|
<p>I have a <code>Details</code> model as below:</p>
<pre><code>class Detail(models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE)
subject = models.CharField(max_length=50)
skype_session_attendance = models.FloatField()
internal_course_marks = models.FloatField()
programming_lab_activity = models.FloatField()
mid_term_marks = models.FloatField()
final_term_marks = models.FloatField()
def __str__(self):
return f'{self.user.username}-{self.subject}'
</code></pre>
<p>I want the Admin Panel to have the min and max limit on all the Float Fields, such as, when Admin try to enter the values through Admin Panel in the Detail's Table float fields, he/she should not be allowed to enter the values except within the range of min and max limit.</p>
<p>Can anyone help me in this?</p>
|
<p>You can use validators, like this (they are enforced whenever the model is validated through a form, which is what the admin does, so the admin will reject out-of-range values; note that a plain <code>save()</code> does not run them):</p>
<pre><code>from django.core.validators import MaxValueValidator, MinValueValidator

# named min_value/max_value to avoid shadowing the built-in min/max
min_value = 0
max_value = 10

skype_session_attendance = models.FloatField(
    validators=[MinValueValidator(min_value), MaxValueValidator(max_value)],
)
</code></pre>
|
python|django|django-admin
| 3 |
1,902,590 | 67,447,306 |
Can you run python scripts in chrome extensions?
|
<p>I was using python to send requests to an API and process it but can the script be integrated in a chrome extension?</p>
|
<p>I'm not sure what you're trying to achieve. If you want to run Python code from the browser, try using Jupyter Notebook <a href="https://jupyter.org/" rel="nofollow noreferrer">https://jupyter.org/</a> or the Python Shell Chrome extension (<a href="https://chrome.google.com/webstore/detail/python-shell/gdiimmpmdoofmahingpgabiikimjgcia?hl=en" rel="nofollow noreferrer">https://chrome.google.com/webstore/detail/python-shell/gdiimmpmdoofmahingpgabiikimjgcia?hl=en</a>). If you can achieve your goal with the above options, they might be easier to use.</p>
<p>On the other hand, if you want complete control over the browser from Python code, please try tools like Selenium (<a href="https://www.selenium.dev/" rel="nofollow noreferrer">https://www.selenium.dev/</a>) and run those tools with Chrome.</p>
|
python|google-chrome-extension
| -1 |
1,902,591 | 70,177,806 |
How do I exit my python3 application cleanly from asyncio event loop run_forever() when user clicks tkinter root window close box?
|
<p>I'm trying to make a python3 application for my Raspberry Pi 4B and I have the tkinter windows working fine, but need to add asynchronous handling to allow tkinter widgets to respond while processing asynchronous actions initiated by the window's widgets.</p>
<p>The test code uses asyncio and tkinter, but without root.mainloop(), since asyncio's loop.run_forever() is called at the end instead. The idea is that when the user clicks the main window's close box, RequestQuit() gets called to set the quitRequested flag, and then when control returns to the event loop, root.after_idle(AfterIdle) would cause AfterIdle to be called, where the flag is checked and, if true, the event loop is stopped, or, that failing, the app is killed with exit(0).</p>
<p>The loop WM_DELETE_WINDOW protocol coroutine RequestQuit is somehow not getting called when the user clicks the main window close box, so the AfterIdle coroutine never gets the flag to quit and I have to kill the app by quitting XQuartz.</p>
<p>I'm using ssh via Terminal on MacOS X Big Sur 11.5.2, connected to a Raspberry Pi 4B with Python 3.7.3.</p>
<p>What have I missed here?</p>
<p>(I haven't included the widgets or their handlers or the asynchronous processing here, for brevity, since they aren't part of the problem at hand.)</p>
<pre><code>from tkinter import *
from tkinter import messagebox
import aiotkinter
import asyncio
afterIdleProcessingIntervalMsec = 500 # Adjust for UI responsiveness here.
busyProcessing = False
quitRequested = False
def RequestQuit():
global quitRequested
global busyProcessing
if busyProcessing:
answer = messagebox.askquestion('Exit application', 'Do you really want to abort the ongoing processing?', icon='warning')
if answer == 'yes':
quitRequested = True
def AfterIdle():
global quitRequested
global loop
global root
if not quitRequested:
root.after(afterIdleProcessingIntervalMsec, AfterIdle)
else:
print("Destroying GUI at: ", time.time())
try:
loop.stop()
root.destroy()
except:
exit(0)
if __name__ == '__main__':
global root
global loop
asyncio.set_event_loop_policy(aiotkinter.TkinterEventLoopPolicy())
loop = asyncio.get_event_loop()
root = Tk()
root.protocol("WM_DELETE_WINDOW", RequestQuit)
root.after_idle(AfterIdle)
# Create and pack widgets here.
loop.run_forever()
</code></pre>
|
<p>The reason why your program doesn't work is that there is no Tk event loop, or its equivalent. Without it, Tk will not process events; no Tk callback functions will run. So your program doesn't respond to the WM_DELETE_WINDOW event, or any other.</p>
<p>Fortunately Tk can be used to perform the equivalent of an event loop as an asyncio.Task, and it's not even difficult. The basic concept is to write a function like this, where "w" is any tk widget:</p>
<pre><code>async def new_tk_loop():
while some_boolean:
w.update()
await asyncio.sleep(sleep_interval_in_seconds)
</code></pre>
<p>This function should be created as an asyncio.Task when you are ready to start processing tk events, and should continue to run until you are ready to stop doing that.</p>
<p>Here is a class, TkPod, that I use as the basic foundation of any Tk + asyncio program. There is also a trivial little demo program, illustrating how to close the Tk loop from another Task. If you click the "X" before 5 seconds pass, the program will close immediately by exiting the mainloop function. After 5 seconds the program will close by cancelling the mainloop task.</p>
<p>I use a default sleep interval of 0.05 seconds, which seems to work pretty well.</p>
<p>When exiting such a program there are a few things to think about.</p>
<p>When you click on the "X" button on the main window, the object sets its <code>app_closing</code> variable to false. If you need to do some other clean-up, you can subclass Tk and over-ride the method <code>close_app</code>.</p>
<p>Exiting the mainloop doesn't call the <code>destroy</code> function. If you need to do that, you must do it separately. The class is a context manager, so you can make sure that <code>destroy</code> is called using a <code>with</code> block.</p>
<p>Like any asyncio Task, <code>mainloop</code> can be cancelled. If you do that, you need to catch that exception to avoid a traceback.</p>
<pre><code>#! python3.8
import asyncio
import tkinter as tk
class TkPod(tk.Tk):
def __init__(self, sleep_interval=0.05):
self.sleep_interval = sleep_interval
self.app_closing = False
self.loop = asyncio.get_event_loop()
super().__init__()
self.protocol("WM_DELETE_WINDOW", self.close_app)
# Globally suppress the Tk menu tear-off feature
# In the following line, "*tearOff" works as documented
# while "*tearoff" does not.
self.option_add("*tearOff", 0)
def __enter__(self):
return self
def __exit__(self, *_x):
self.destroy()
def close_app(self):
self.app_closing = True
# I don't know what the argument n is for.
# I include it here because pylint complains otherwise.
async def mainloop(self, _n=0):
while not self.app_closing:
self.update()
await asyncio.sleep(self.sleep_interval)
async def main():
async def die_in5s(t):
await asyncio.sleep(5.0)
t.cancel()
print("It's over...")
with TkPod() as root:
label = tk.Label(root, text="Hello")
label.grid()
t = asyncio.create_task(root.mainloop())
asyncio.create_task(die_in5s(t))
try:
await t
except asyncio.CancelledError:
pass
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
|
python|tkinter|python-asyncio|exit
| 0 |
1,902,592 | 70,128,630 |
How to pass Byref argument in python function
|
<p>I am new to Python, at the beginner level. I want to pass an argument as ByRef, similar to what we use in VB or C#.</p>
<p>Below is my code. But i am not getting the expected Output.</p>
<p>Thank you for the help in advance</p>
<pre><code>
class calculator:
arg = 0
globvar = 0
def add(a,b):
global globvar
globvar = a + b
print("Inside function call",globvar)
print("Before function call",globvar)
add(2,4)
print("After function call",globvar)
</code></pre>
<pre><code>Output
Before function call 0
Inside function call 6
After function call 0
</code></pre>
<p>The print after the function call should show 6, but it is still displaying 0.</p>
|
<p>It works as expected if you get rid of the unnecessary <code>class</code> statement.</p>
<pre><code>globvar = 0
def add(a,b):
global globvar
globvar = a + b
print("Inside function call",globvar)
print("Before function call",globvar)
add(2,4)
print("After function call",globvar)
</code></pre>
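<p>On the ByRef part of the question: Python passes object references by value, so rebinding a parameter inside a function never affects the caller, but mutating a shared mutable object does. A small sketch of that distinction (the function names here are made up):</p>

```python
def add_into(result, a, b):
    # Mutating the caller's list is visible outside --
    # the closest Python idiom to a ByRef parameter.
    result.append(a + b)

def rebind(result, a, b):
    # Rebinding the local name has no effect on the caller.
    result = [a + b]

shared = []
add_into(shared, 2, 4)
rebind(shared, 10, 20)
print(shared)  # [6]
```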
|
python|pass-by-reference
| 0 |
1,902,593 | 11,016,825 |
Bug in Python's 'a+' file open mode?
|
<p>I'm currently making a filesystem using python-fuse and was looking up where file pointers start for each of the different modes ('r', 'r+', etc.) and found on multiple sites that the file pointer starts at zero unless it is opened in 'a' or 'a+' when it starts at the end of the file.</p>
<p>I tested this in Python to make sure (opening a text file in each of the modes and calling tell() immediately) but found that when it was opened in 'a+' the file pointer was at zero not the end of the file.</p>
<p>Is this a bug in python, or are the websites wrong?</p>
<p>For reference:</p>
<ul>
<li><a href="http://www.tutorialspoint.com/python/python_files_io.htm" rel="nofollow">one of the websites</a> (search for "file pointer")</li>
<li>I am using Python 2.7.3 on Ubuntu</li>
</ul>
|
<p>No, it's not a bug.</p>
<p>What happens when you call <code>tell()</code> <strong>after</strong> writing some data?</p>
<p>Does it write at position 0, or at the end of file as you would expect? I would almost bet my life that it is the latter.</p>
<pre><code>>>> f = open('test', 'a+')
>>> f.tell()
0
>>> f.write('this is a test\n')
>>> f.tell()
15
>>> f.close()
>>> f = open('test', 'a+')
>>> f.tell()
0
>>> f.write('this is a test\n')
>>> f.tell()
30
</code></pre>
<p>So, it does seek to the end of the file <em>before</em> it writes data.</p>
<p>This is how it should be. From the <code>fopen()</code> man page:</p>
<blockquote>
<pre><code> a+ Open for reading and appending (writing at end of file). The
file is created if it does not exist. The initial file position
for reading is at the beginning of the file, but output is
always appended to the end of the file.
</code></pre>
</blockquote>
<p>Phew, lucky I was right.</p>
|
python|file|file-io|python-2.7|mode
| 5 |
1,902,594 | 63,410,686 |
How to link Qt Designer button to a function in a separate file
|
<p>I'm new to Python and I've searched for an answer but couldn't find it (or rather couldn't properly implement it).</p>
<p>I've generated a window with a few buttons in QtDesigner's file named "arch.ui", converted to arch.py.</p>
<p>As I'll be updating GUI occasionally, I don't want to create functions in arch.py, so I've created a main.py file for that.</p>
<p>I've a problem with linking button click to a function => I try to link "btnSource" (from arch.py) to function "printMe" (in main.py).</p>
<p>Obviously it doesn't work. Any help welcome.</p>
<p>Here is generated Designer file:</p>
<pre><code># Form implementation generated from reading ui file 'arch.ui'
#
# Created by: PyQt5 UI code generator 5.15.0
#
# WARNING: Any manual changes made to this file will be lost when pyuic5 is
# run again. Do not edit this file unless you know what you are doing.
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(460, 233)
        self.centralwidget = QtWidgets.QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        self.btnSource = QtWidgets.QPushButton(self.centralwidget)
        self.btnSource.setGeometry(QtCore.QRect(80, 60, 75, 23))
        self.btnSource.setObjectName("btnSource")
        self.lblSource = QtWidgets.QLabel(self.centralwidget)
        self.lblSource.setGeometry(QtCore.QRect(180, 60, 511, 21))
        self.lblSource.setObjectName("lblSource")
        self.lblTarget = QtWidgets.QLabel(self.centralwidget)
        self.lblTarget.setGeometry(QtCore.QRect(180, 120, 481, 16))
        self.lblTarget.setObjectName("lblTarget")
        self.btnTarget = QtWidgets.QPushButton(self.centralwidget)
        self.btnTarget.setGeometry(QtCore.QRect(80, 120, 75, 23))
        self.btnTarget.setObjectName("btnTarget")
        self.btnGo = QtWidgets.QPushButton(self.centralwidget)
        self.btnGo.setGeometry(QtCore.QRect(280, 120, 75, 23))
        self.btnGo.setObjectName("btnGo")
        MainWindow.setCentralWidget(self.centralwidget)
        self.menubar = QtWidgets.QMenuBar(MainWindow)
        self.menubar.setGeometry(QtCore.QRect(0, 0, 460, 21))
        self.menubar.setObjectName("menubar")
        MainWindow.setMenuBar(self.menubar)
        self.statusbar = QtWidgets.QStatusBar(MainWindow)
        self.statusbar.setObjectName("statusbar")
        MainWindow.setStatusBar(self.statusbar)

        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
        _translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
        self.btnSource.setText(_translate("MainWindow", "Source"))
        self.lblSource.setText(_translate("MainWindow", "TextLabel"))
        self.lblTarget.setText(_translate("MainWindow", "TextLabel"))
        self.btnTarget.setText(_translate("MainWindow", "Target"))
        self.btnGo.setText(_translate("MainWindow", "Go"))
</code></pre>
<p>And here is my main.py file:</p>
<pre><code>from PyQt5 import QtCore, QtGui, QtWidgets
from arch import Ui_MainWindow
import sys

app = QtWidgets.QApplication(sys.argv)

class myWindow(Ui_MainWindow):
    def __init__(self):
        super(myWindow, self).__init__()
        self.btnSource.clicked.connect(self.btnSource.printMe)#

    def printMe(self):
        print('blah blah blah')

MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
</code></pre>
|
<h2>tl;dr</h2>
<p>Subclass from both <code>QMainWindow</code> <em>and</em> <code>Ui_MainWindow</code>, and call <code>setupUi</code> from there; then create an instance of <code>myWindow</code>:</p>
<pre><code>class MyWindow(<b>QtWidgets.QMainWindow, Ui_MainWindow</b>):
    def __init__(self):
        super(MyWindow, self).__init__()
        <b>self.setupUi(self)</b>
        self.btnSource.clicked.connect(self.printMe)

    def printMe(self):
        print('blah blah blah')


if __name__ == '__main__':
    app = QtWidgets.QApplication(sys.argv)
    mainWindow = <b>MyWindow</b>()
    mainWindow.show()
    sys.exit(app.exec_())
</code></pre>
<h2>Explanation</h2>
<p>Your code doesn't work for many reasons; while the main problem might be that you actually <em>never</em> created an instance of <code>myWindow</code> (about that, you should always use capitalized names for classes), making it completely useless, it wouldn't have worked anyway.</p>
<p>That's because you should <em>not</em> subclass from the <code>ui</code> class object, but from the QWidget descendant (QMainWindow, in your case) you're going to use.</p>
<p>The <code>ui_*</code> objects created from <code>pyuic</code> are only intended as a high level (and <strong>unmodified</strong>) interface to create the UI <em>on top</em> of a QWidget subclass.<br />
Calling <code>setupUi(something)</code> actually creates all child widgets for the widget <code>something</code>, sets the layout and, possibly, <a href="https://www.riverbankcomputing.com/static/Docs/PyQt5/signals_slots.html#connecting-slots-by-name" rel="nofollow noreferrer">automatically connects to slots with a compatible name</a>, but that's all: in fact, if you closely look at the code from the ui file, it actually does nothing besides <code>setupUi</code> and <code>retranslateUi</code> (nor it should!): there's not even an <code>__init__</code>!</p>
<p>If you need to add interaction and create connections from signals to slot/functions, you should use the single/multiple inheritance approaches as explained in the official guide about <a href="https://www.riverbankcomputing.com/static/Docs/PyQt5/designer.html" rel="nofollow noreferrer">using Designer</a> with PyQt; the only other possibility is to use <code>loadUi</code> (while still subclassing from the base class) with the source <code>.ui</code> file:</p>
<pre><code>from PyQt5 import QtWidgets, uic

class MyWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        uic.loadUi('path/to/gui.ui', self)
        self.someWidget.someSignal.connect(self.someSlot)
        # ...

    def someSlot(self, signalArguments, [...]):
        # do something...
</code></pre>
<p><sub>PS: for various reasons, it's usually better to run a QApplication only if the script is the one that's been run (hence the <code>if __name__ ...</code>), mostly because there should be just only <em>one</em> QApplication instance for every running program; in any case, it shouldn't be created <em>before</em> the class declarations (unless, you <strong>really</strong> know what you're doing); it's not a big deal in your case, but, as usual, better safe than sorry.</sub></p>
|
python|pyqt5|qt-designer
| 1 |
1,902,595 | 63,681,033 |
Error when trying to load 30GB SAS file with Pyspark
|
<p>I am trying to replicate what was done in this article <a href="http://blog.rubypdf.com/2018/10/12/how-two-read-sas-data-with-pyspark/" rel="nofollow noreferrer">Loading Big SAS files</a></p>
<p>What I am doing is starting up a jupyter notebook and running the code below. I keep getting a Java load error and I can't figure out why.</p>
<pre><code>Spark Version:2.4.6
Scala Version:2.12.2
Java Version:1.8.0_261
import findspark
findspark.init()

from pyspark.sql.session import SparkSession

spark = SparkSession.builder \
    .config("spark.jars.packages", "saurfang:spark-sas7bdat:2.0.0-s_2.11") \
    .enableHiveSupport().getOrCreate()

df = spark.read.format('com.github.saurfang.sas.spark') \
    .load(r'D:\IvyDB\opprcd\opprcd2019.sas7bdat')
</code></pre>
<p>Error I always get is below</p>
<pre><code>Py4JJavaError: An error occurred while calling o163.load.
: java.util.concurrent.TimeoutException: Timed out after 60 sec while reading file metadata, file might be corrupt. (Change timeout with 'metadataTimeout' paramater)
at com.github.saurfang.sas.spark.SasRelation.inferSchema(SasRelation.scala:189)
at com.github.saurfang.sas.spark.SasRelation.(SasRelation.scala:62)
at com.github.saurfang.sas.spark.SasRelation$.apply(SasRelation.scala:43)
at com.github.saurfang.sas.spark.DefaultSource.createRelation(DefaultSource.scala:209)
at com.github.saurfang.sas.spark.DefaultSource.createRelation(DefaultSource.scala:42)
at com.github.saurfang.sas.spark.DefaultSource.createRelation(DefaultSource.scala:27)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:341)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:174)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
</code></pre>
|
<p>In our case, we were able to fix this issue by adding the <a href="https://github.com/epam/parso/" rel="nofollow noreferrer">Parso</a> library to PySpark. <a href="https://github.com/epam/parso/" rel="nofollow noreferrer">Parso</a> is one of the requirements of the <a href="https://github.com/saurfang/spark-sas7bdat/tree/v2.1.0" rel="nofollow noreferrer">Spark SAS Data Source</a>.</p>
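<p>One way to do that is to list Parso next to the SAS reader in <code>spark.jars.packages</code>. A sketch only — the <code>com.epam:parso:2.0.10</code> Maven coordinate and both version numbers are assumptions; verify them on Maven Central against your Spark/Scala build:</p>

```python
# Build the comma-separated package list Spark expects.
# Coordinates/versions below are assumed examples, not verified pins.
packages = ",".join([
    "saurfang:spark-sas7bdat:2.0.0-s_2.11",
    "com.epam:parso:2.0.10",
])

# Then pass it to the builder (commented out so this sketch runs without Spark):
# spark = (SparkSession.builder
#          .config("spark.jars.packages", packages)
#          .enableHiveSupport()
#          .getOrCreate())
print(packages)
```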
|
python-3.x|apache-spark|pyspark|sas
| 0 |
1,902,596 | 63,620,989 |
The sys, subprocess, timeit packages are missing from the PyPi site
|
<p>During the last three months I've developed several <em><strong>Python</strong></em> (<em><strong>Python 3.6.9</strong></em>) system-oriented applications on <em><strong>CentOS 7.8</strong></em>.
These applications use the <em><strong>sys</strong></em>, <em><strong>subprocess</strong></em> and <em><strong>timeit</strong></em> packages, which were installed in May 2020 by <em><strong>pip install</strong></em>.</p>
<p>Now I was requested to create a <em><strong>Docker container</strong></em> with these applications and tried to <em><strong>pip install</strong></em> the listed packages within the container (starting with <em><strong>sys</strong></em>):</p>
<pre><code>sudo docker run -i -t centos/python-36-centos7 /bin/bash
(app-root) python -V
Python 3.6.9
(app-root) pip install sys
ERROR: Could not find a version that satisfies the requirement sys (from versions: none)
ERROR: No matching distribution found for sys
</code></pre>
<p>The listed packages weren't found and failed to install with the same ERROR messages.</p>
<p>What might be the problem, and how should I solve it?</p>
<p>Thanks</p>
<p>Zeev</p>
|
<p><code>sys</code>, <code>subprocess</code>, <code>timeit</code>, etc. are standard-library modules (you don't need to install them; they come with the Python installation itself), so using pip to install them is pointless. If imports really fail, reinstall Python — your standard library may have been overwritten by a third-party package.</p>
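<p>A quick way to confirm this inside the container (a minimal check, nothing project-specific assumed):</p>

```python
# These modules ship with CPython itself -- no pip install required.
import subprocess
import sys
import timeit

# If the imports above succeed, the standard library is intact.
print(sys.version.split()[0])                 # interpreter version, e.g. "3.6.9"
print(timeit.timeit("1 + 1", number=10) >= 0) # timeit works without any setup
```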
|
python|docker|installation|pip|docker-container
| 3 |
1,902,597 | 56,758,951 |
UnicodeDecodeError in decoding the torrent file
|
<p>I am trying to write the torrent app from scratch just for learning purpose.So after few hours of reading wiki, I wrote some code for decoding of torrent files which use 'Bencoding' "<a href="https://en.wikipedia.org/wiki/Bencode" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Bencode</a>" . But unfortunately, I didn't notice about byte string and python string. My code work fine with python string like torrent data, but when I passed torrent byte data, I got the encoding error.</p>
<p>I tried <code>open(file, 'rb', encoding='utf-8', errors='ignore')</code>. It did change the byte string into a Python string. I also tried all the available answers on Stack Overflow, but some data were lost as errors, so I can't properly decode the torrent data. Pardon my messy coding and please help... I also read the bencoder library; it works directly on byte strings, so if there is any way that I don't have to rewrite the code, please...</p>
<pre><code>with open(torrent_file1, 'rb') as _file:
    data = _file.read()


def int_decode(meta, cur):
    print('inside int_decode function')
    cursor = cur
    start = cursor + 1
    end = start
    while meta[end] != 'e':
        end += 1
    value = int(meta[start:end])
    cursor = end + 1
    print(value, cursor)
    return value, cursor


def chr_decode(meta, cur):
    print('inside chr_decode function')
    cursor = cur
    start = cursor
    end = start
    while meta[end] != ':':
        end += 1
    chr_len = int(meta[start:end])
    chr_start = end + 1
    chr_end = chr_start + chr_len
    value = meta[chr_start:chr_end]
    cursor = chr_end
    print(value, cursor)
    return value, cursor


def list_decode(meta, cur):
    print('inside the list decoding')
    cursor = cur + 1
    new_list = list()
    while cursor < (len(meta)):
        if meta[cursor] == 'i':
            item, cursor = int_decode(meta, cursor)
            new_list.append(item)
        elif meta[cursor].isdigit():
            item, cursor = chr_decode(meta, cursor)
            new_list.append(item)
        elif meta[cursor] == 'e':
            print('list is ended')
            cursor += 1
            break
    return (new_list, cursor)


def dict_decode(meta, cur=0, key_=False, key_val=None):
    if meta[cur] == 'd':
        print('dict found')
        new_dict = dict()
        key = key_
        key_value = key_val
        cursor = cur + 1
        while cursor < (len(meta)):
            if meta[cursor] == 'i':
                value, cursor = int_decode(meta, cursor)
                if not key:
                    key = True
                    key_value = value
                else:
                    new_dict[key_value] = value
                    key = False
            elif meta[cursor].isdigit():
                value, cursor = chr_decode(meta, cursor)
                if not key:
                    key = True
                    key_value = value
                else:
                    new_dict[key_value] = value
                    key = False
            elif meta[cursor] == 'l':
                lists, cursor = list_decode(meta, cursor)
                if key:
                    new_dict[key_value] = lists
                    key = False
                else:
                    print('list cannot be used as key')
            elif meta[cursor] == 'd':
                dicts, cursor = dict_decode(meta, cursor)
                if not key:
                    key = True
                    key_value = dicts
                else:
                    new_dict[key_value] = dicts
                    key = False
            elif meta[cursor] == 'e':
                print('dict is ended')
                cursor += 1
                break
    return (new_dict, cursor)


test = 'di323e4:spami23e4:spam5:helloi23e4:spami232ei232eli32e4:doneei23eli1ei2ei3e4:harmee'
test2 = 'di12eli23ei2ei22e5:helloei12eli1ei2ei3eee'
test3 = 'di12eli23ei2ei22ee4:johndi12e3:dggee'
print(len(test2))

new_dict = dict_decode(data)
print(new_dict)
</code></pre>
<pre><code>Traceback (most recent call last):
  File "C:\Users\yewaiyanoo\Desktop\python\torrent\read_torrent.py", line 8, in &lt;module&gt;
    data = _file.read()
  File "C:\Users\yewaiyanoo\AppData\Local\Programs\Python\Python37-32\lib\codecs.py", line 701, in read
    return self.reader.read(size)
  File "C:\Users\yewaiyanoo\AppData\Local\Programs\Python\Python37-32\lib\codecs.py", line 504, in read
    newchars, decodedbytes = self.decode(data, self.errors)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xad in position 204: invalid start byte
</code></pre>
|
<p>Python 3 decodes text files when reading and encodes when writing. The default encoding is taken from <code>locale.getpreferredencoding(False)</code>. See the <code>open()</code> function documentation:</p>
<blockquote>
<p>In text mode, if encoding is not specified the encoding used is platform dependent: locale.getpreferredencoding(False) is called to get the current locale encoding.</p>
</blockquote>
<p>Instead of relying on a system setting, you should open your text files using an explicit codec (note that binary mode <code>'rb'</code> does not accept an <code>encoding</code> argument):</p>
<pre><code>currentFile = open(file, 'r', encoding='latin1', errors='ignore')
</code></pre>
<p>where you set the encoding parameter to match the file you are reading.</p>
<p>Python 3 supports UTF-8 as the default for source code.</p>
<p>The same applies to writing to a writeable text file; data written will be encoded, and if you rely on the system encoding you are liable to get UnicodeEncodingError exceptions unless you explicitly set a suitable codec. What codec to use when writing depends on what text you are writing and what you plan to do with the file afterward.</p>
<p>You may want to read up on Python 3 and Unicode in the Unicode HOWTO, which explains both about source code encoding and reading and writing Unicode data.</p>
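<p>As a side note on why <code>latin1</code> is a common choice for binary-ish data: it maps every byte value 0x00–0xFF to exactly one code point, so the decode is lossless and reversible (a minimal standalone sketch, not part of the original answer):</p>

```python
# latin-1 is a 1:1 byte <-> code point mapping, so nothing is lost.
data = bytes(range(256))          # stand-in for raw torrent bytes
text = data.decode('latin1')      # never raises, unlike 'utf-8'
assert text.encode('latin1') == data

# By contrast, utf-8 rejects many byte sequences outright:
try:
    bytes([0xAD]).decode('utf-8')
except UnicodeDecodeError as exc:
    print('utf-8 failed as expected:', exc.reason)
```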
|
python-3.x
| 0 |
1,902,598 | 60,889,801 |
Fill missing values in a dataframe using numpy.ndarray
|
<p>I have a dataframe and nparray as follows </p>
<pre><code>import pandas as pd
import numpy as np
dic = {'A': {0: 0.9, 1: "NaN", 2: 1.8, 3: "NaN"},
       'C': {0: 0.1, 1: 2.8, 2: -0.1, 3: 0.5},
       'B': {0: 0.7, 1: -0.6, 2: -0.1, 3: -0.1}}
df=pd.DataFrame(dic)
print(df)
     A    C    B
0  0.9  0.1  0.7
1  NaN  2.8 -0.6
2  1.8 -0.1 -0.1
3  NaN  0.5 -0.1
a = np.array([1.,2.])
a
array([1., 2.])
</code></pre>
<p>How would I fill the missing (NaN) values in column A with the values from the nparray? I want to fill the column sequentially based on the order of the array so first array element goes into 1A and second goes into 3A.</p>
|
<p>Use <code>numpy.tile</code> to create an array by repeating the elements of <code>a</code>:</p>
<pre><code>import math

df['A'].replace('NaN', np.nan, inplace=True)
len_tile = math.ceil(df['A'].isnull().sum() / len(a))
non_null_a = np.tile(a, len_tile)
</code></pre>
<p>Then use <code>loc</code> to fill the NaNs using the array:</p>
<pre><code>df.loc[df['A'].isnull(), 'A'] = non_null_a
A C B
0 0.9 0.1 0.7
1 1.0 2.8 -0.6
2 1.8 -0.1 -0.1
3 2.0 0.5 -0.1
</code></pre>
<p>Note: For the dummy df that you have provided, simply using array <code>a</code> to replace missing values will work. The code I used takes into account situation where there are more NaNs than the length of the array.</p>
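<p>Putting the pieces together into one runnable sketch (the trailing slice is an added guard for the case where the tiled array is longer than the number of NaNs):</p>

```python
import math

import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [0.9, np.nan, 1.8, np.nan],
                   'C': [0.1, 2.8, -0.1, 0.5],
                   'B': [0.7, -0.6, -0.1, -0.1]})
a = np.array([1., 2.])

n_missing = int(df['A'].isnull().sum())
reps = math.ceil(n_missing / len(a))
fill = np.tile(a, reps)[:n_missing]   # trim in case tile overshoots

# Assign sequentially: first array element -> first NaN, and so on.
df.loc[df['A'].isnull(), 'A'] = fill
print(df['A'].tolist())   # [0.9, 1.0, 1.8, 2.0]
```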
|
python|arrays|pandas|numpy|dataframe
| 2 |
1,902,599 | 69,075,974 |
FPDF: insert variable into PDF
|
<p>Using FPDF2 and Python3 to create a report. Would like to insert a string into a PDF but cannot see how to do it.
Just cannot find the syntax.</p>
<p>Something simple like:</p>
<pre><code>from fpdf import FPDF, HTMLMixin

class PDF(FPDF, HTMLMixin):
    pass

name = "Chris"

pdf = PDF()
pdf.add_page()
pdf.write_html("""
<h1>My name -{name} should print here</h1>
""")
pdf.output("htmlnew.pdf")
</code></pre>
|
<p>Did you try using an <code>f</code>-string?</p>
<pre class="lang-py prettyprint-override"><code>
from fpdf import FPDF, HTMLMixin

class PDF(FPDF, HTMLMixin):
    pass

name = "Chris"
...
pdf.write_html(f"""<h1>My name {name} should print here</h1>""")
...
</code></pre>
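<p>The difference is visible without FPDF at all — a plain string leaves the braces literal, while an <code>f</code>-string interpolates at the point where it is evaluated (a minimal standalone check):</p>

```python
name = "Chris"

plain = "<h1>My name {name} should print here</h1>"    # no f-prefix
interp = f"<h1>My name {name} should print here</h1>"  # f-prefix

assert "{name}" in plain    # braces kept as-is
assert "Chris" in interp    # value substituted
print(interp)
```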
|
python|pdf|fpdf
| 1 |