Unnamed: 0 (int64: 0 to 1.91M) | id (int64: 337 to 73.8M) | title (string: 10 to 150 chars) | question (string: 21 to 64.2k chars) | answer (string: 19 to 59.4k chars) | tags (string: 5 to 112 chars) | score (int64: -10 to 17.3k)
---|---|---|---|---|---|---|
1,908,800 | 3,018,348 |
The LoadLibraryA method returns error code 1114 (ERROR_DLL_INIT_FAILED) after more than 1000 cycles of loading/unloading
|
<p>I'm programming in C++, using Visual Studio 2008 on Windows XP, and I have the following problem:
My application, which is a DLL that can be used from Python, loads an external DLL, uses the required methods, and then unloads it.
It works properly, but after more than 1000 cycles the "LoadLibraryA" method returns NULL.</p>
<p>The main steps are:</p>
<pre><code>HINSTANCE h = NULL;
h = LoadLibraryA(dllfile.c_str());
DWORD dw = GetLastError();
</code></pre>
<p>The error returned is:</p>
<pre><code>ERROR_DLL_INIT_FAILED
1114 (0x45A) A dynamic link library (DLL) initialization routine failed.
</code></pre>
<p>The Dll is unloaded by using the following:</p>
<pre><code>FreeLibrary(mDLL);
mDLL = NULL;
</code></pre>
<p>Where mDLL is defined like this:</p>
<pre><code>HINSTANCE mDLL;
</code></pre>
<p>First alternative tried:
Load the DLL only once, and unload it when the application ends. This fixes the problem but introduces a new one.</p>
<p>When the application ends, instead of first executing the DllMain method of my application, which unloads the external DLL, the DllMain method of the other DLL executes first. This causes the following error, because my application is then trying to unload a DLL that has already been unloaded:</p>
<p>"Unhandled exception at 0x04a00d07 (DllName.DLL) in Python.exe: 0xC0000005: Access violation reading location 0x0000006b".</p>
<p>Any suggestion will be welcomed.
Thanks in advance.
Regards.</p>
|
<p>Make sure that the initialization code of the loaded/unloaded library doesn't leak memory. Many libraries expect to be loaded only once and do not always clean up their resources properly.</p>
<p>E.g. in C++ file at the top level one can declare and initialize a variable like this:</p>
<pre><code>AClass *a = new AClass(1,2,3);
</code></pre>
<p>This code is executed automatically when the library is loaded. Yet now it is impossible to free the hanging instance, as the library doesn't know precisely when/how it is going to be unloaded. In that case one can either replace "AClass *a" with "AClass a" or write your own <a href="http://msdn.microsoft.com/en-us/library/ms682583%28v=VS.85%29.aspx" rel="nofollow noreferrer">DllMain</a> for the library and free the resources on DLL_PROCESS_DETACH.</p>
<p>If you have no control over the library's code, then it might make sense to create a cache of loaded libraries and simply never unload them. It is very hard to imagine that there would be an unlimited number of libraries to overflow such a cache.</p>
|
c++|python|loadlibrary
| 1 |
1,908,801 | 67,970,514 |
Where to initialize an instance of a class in a multiple file python code?
|
<p>I'm working on a project that is broken into multiple files, and while I know my issue is that I'm not initializing an instance of the class I'm trying to use, I can't figure out where to do so. The two files are below - the DataLayer and the main file. The issue in question is under the first button - I'm passing data entered by the user to the DataLayer and attempting to use a method (d_read_from_pandas_all) in that class. The method's only parameter is "self." What am I missing here?</p>
<pre><code>from DataLayer import *
from ViewLayer import *
class NotebookDemo(Frame):
def __init__(self, isapp=True, name='notebookdemo'):
Frame.__init__(self, name=name)
self.pack(expand=Y, fill=BOTH)
self.master.title('Inventory Data Cleaner')
self.isapp = isapp
self._create_widgets()
def _create_widgets(self):
self._create_demo_panel()
def _create_demo_panel(self):
demoPanel = Frame(self, name='demo')
demoPanel.pack(side=TOP, fill=BOTH, expand=Y)
# create the notebook
nb = Notebook(demoPanel, name='notebook')
nb.enable_traversal()
nb.pack(fill=BOTH, expand=Y, padx=2, pady=3)
self._create_descrip_tab(nb)
self._create_view_tab(nb)
self._create_text_tab(nb)
def _create_descrip_tab(self, nb):
# frame to hold contentx
frame = Frame(nb, name='descrip')
# widgets to be displayed on 'Description' tab
docName = tk.StringVar()
docLabel = tk.Label(frame, text = 'enter cruise document name')
docEntry = tk.Entry(frame, textvariable = docName)
btn = Button(frame, text='Load', underline=0,
command=lambda: [DataLayer(str(docName)), DataLayer.d_read_from_pandas_all()])
#
# position and set resize behaviour
btn.grid(row=1, column=0, pady=(2, 4))
docLabel.grid(row = 0, column = 0, pady = (2,4))
docEntry.grid(row = 0, column = 1, pady = (2,4))
frame.rowconfigure(1, weight=1)
frame.columnconfigure((0, 1), weight=1, uniform=1)
# add to notebook (underline = index for short-cut character)
nb.add(frame, text='View Data', underline=0, padding=0)
# =============================================================================
def _create_view_tab(self, nb):
# Populate the second pane. Note that the content doesn't really matter
frame = Frame(nb, name="view")
btn1 = Button(frame, text='Click here to view data', underline=0,
command=lambda:[DataLayer.d_read_from_pandas_all(self, docName), ViewLayer.create_display_table(self,frame), ViewLayer.viewTable(self)])
btn1.grid(row=1, column=0, pady=(2, 4))
nb.add(frame, text='Check Data', underline=0, padding=2)
# =============================================================================
def _create_text_tab(self, nb):
# populate the third frame with a text widget
frame = Frame(nb)
txt = Text(frame, wrap=WORD, width=40, height=10)
vscroll = Scrollbar(frame, orient=VERTICAL, command=txt.yview)
txt['yscroll'] = vscroll.set
vscroll.pack(side=RIGHT, fill=Y)
txt.pack(fill=BOTH, expand=Y)
# add to notebook (underline = index for short-cut character)
nb.add(frame, text='Text Editor', underline=0)
if __name__ == '__main__':
NotebookDemo().mainloop()
</code></pre>
<pre><code>import pandas as pd
class DataLayer:
def __init__(self, filename):
self.filename = filename
def d_read_from_pandas_all(self):
global df
df = pd.read_csv(self.filename, index_col=0)
print(df.head(10))
return df
</code></pre>
|
<p>To import a class from another file you need to import it like:</p>
<pre><code>from file import class
</code></pre>
<p>This also works if you have structured your files in folders:</p>
<pre><code>from folder.file import class
</code></pre>
<p>Assuming your <code>DataLayer</code> class is saved in <code>datalayer.py</code> within the same directory, you would need to import:</p>
<pre><code>from datalayer import DataLayer
</code></pre>
<p>If you save your separate classes in a folder called <code>classes</code> that is in the same directory as your main file, you would import:</p>
<pre><code>from classes.datalayer import DataLayer
</code></pre>
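<p>Once the class is imported, you still need to create an instance before calling an instance method such as <code>d_read_from_pandas_all</code>. A minimal sketch of the button callback (an assumption layered on the question's code, not part of the original answer; note <code>docName.get()</code>, since <code>str(docName)</code> would stringify the StringVar object itself rather than its contents):</p>
<pre><code>def load_data():
    layer = DataLayer(docName.get())       # build an instance with the entered filename
    return layer.d_read_from_pandas_all()  # call the method on that instance

btn = Button(frame, text='Load', underline=0, command=load_data)
</code></pre>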
|
python|oop
| 0 |
1,908,802 | 67,723,222 |
Using regex to remove sentence after occurrence of key phrase python
|
<p>I am looking for a regex solution to remove any words in the rest of the sentence after the occurrence of a key phrase.</p>
<p><strong>Example</strong></p>
<p>sentence = "The weather forecast for today is mostly sunny. The forecast for tomorrow will be rainy. The rest of the week..."</p>
<p>Key_phrase = "for tomorrow"</p>
<p>Desired output = "The weather forecast for today is mostly sunny. The forecast. The rest of the week..."</p>
<p><strong>Attempt</strong></p>
<pre><code>head, sep, tail = sentence.partition(key_phrase)
print(head)
</code></pre>
<p>My idea is to first split the string into sentences, apply the above technique and then join the results. However, I feel like there must be a more elegant way to do this with regex?</p>
<p>Thanks for the help</p>
|
<p>Using <code>re.sub</code></p>
<p><strong>Ex:</strong></p>
<pre><code>import re

sentence = "The weather forecast for today is mostly sunny. The forecast for tomorrow will be rainy. The rest of the week..."
key_phrase = "for tomorrow"
print(re.sub(fr"({key_phrase}.*?)(?=\.)", "", sentence))
</code></pre>
<p>Output</p>
<pre><code>The weather forecast for today is mostly sunny. The forecast . The rest of the week...
</code></pre>
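<p>Note the stray space left before the period in that output. A hedged variant (not from the original answer) that also consumes the whitespace before the key phrase reproduces the exact desired output:</p>
<pre><code>import re

sentence = "The weather forecast for today is mostly sunny. The forecast for tomorrow will be rainy. The rest of the week..."
key_phrase = "for tomorrow"
# \s* swallows the space before the key phrase; the lookahead keeps the period
print(re.sub(fr"\s*{key_phrase}.*?(?=\.)", "", sentence))
# The weather forecast for today is mostly sunny. The forecast. The rest of the week...
</code></pre>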
|
python|regex
| 2 |
1,908,803 | 64,009,659 |
Python Pandas - how to remove duplicates depending on column values
|
<p>So, I'd like to transform a table like the one below:
<a href="https://i.stack.imgur.com/PrhIT.png" rel="nofollow noreferrer">Input data</a></p>
<p>into a table like this one:
<a href="https://i.stack.imgur.com/KbiT2.png" rel="nofollow noreferrer">Output data</a></p>
<p>The goal is to remove duplicates and at the same time preserve the information from the "Value_c" column in True/False notation.</p>
|
<p>You can apply <code>groupby</code> to the output of <code>get_dummies</code> to get the desired result.</p>
<pre><code>>>> df = pd.DataFrame({"A":[1,1,1,2,2,2], "B":[1,1,1,2,2,2], "C":["Q","R","QR","R","QR","Q"], "D":[1,1,1,2,2,2], "E":["X","X","X","Y","Y","Y"]})
>>> df
A B C D E
0 1 1 Q 1 X
1 1 1 R 1 X
2 1 1 QR 1 X
3 2 2 R 2 Y
4 2 2 QR 2 Y
5 2 2 Q 2 Y
>>> df = pd.get_dummies(df, columns=["C","E"])
>>> df.groupby(["A","B","D"]).agg(sum).reset_index()
A B D C_Q C_QR C_R E_X E_Y
0 1 1 1 1 1 1 3 0
1 2 2 2 1 1 1 0 3
>>> df.groupby(["A","B","D"]).agg(max).reset_index()
A B D C_Q C_QR C_R E_X E_Y
0 1 1 1 1 1 1 1 0
1 2 2 2 1 1 1 0 1
>>>
</code></pre>
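<p>Since the question asks for True/False notation specifically, a small follow-up sketch (an assumption on top of the answer above) converts the <code>max</code>-aggregated dummy columns to booleans:</p>
<pre><code>>>> df.groupby(["A","B","D"]).agg(max).astype(bool).reset_index()
   A  B  D   C_Q  C_QR   C_R    E_X    E_Y
0  1  1  1  True  True  True   True  False
1  2  2  2  True  True  True  False   True
</code></pre>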
|
python|pandas|dataframe|filter|duplicates
| 1 |
1,908,804 | 42,804,244 |
How to dynamically load class and call method in Python?
|
<p>I have a Python script that gets two user inputs as command line arguments. The first user input is called <code>load-class</code> and the second <code>call-method</code>.</p>
<p>The main Python script acts as a so-called controller, which loads other (let's call them controller case) Python scripts, calls methods within those scripts and does something with the method returns. This is why the operator cannot execute the controller case Python scripts directly.</p>
<p>Here is the non-working example:</p>
<pre><code>#!/usr/bin/env python
import sys
import argparse
import textwrap
import ControllerCases
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="{0}".format('-' * 80),
description=textwrap.dedent('''\
TODO: Enter Title Here
{0}
TODO: Enter Info Here
{0}
'''.format('-' * 80))
)
parser.add_argument('--load-class', '-c', type=str, required=True, help="Name of the class.")
parser.add_argument('--call-method', '-m', type=str, required=True, help="Name of the method within selected class.")
parsed_args = parser.parse_args()
def main(argv):
print 'Number of arguments:', len(sys.argv), 'arguments.'
print 'Argument List:', str(sys.argv)
print 'Load class: ', str(parsed_args.load_class)
print 'Call method: ', str(parsed_args.call_method)
getattr(ControllerCases[parsed_args.load_class], parsed_args.call_method)(argv)
if __name__ == "__main__":
main(sys.argv[1:])
</code></pre>
<p>The controller case Python scripts look like this:</p>
<pre><code>#!/usr/bin/env python
class dummy():
def hello(argv):
print 'Hello World'
</code></pre>
<p>Within the <code>ControllerCases</code> folder I have a <code>__init__.py</code>:</p>
<pre><code>#!/usr/bin/env python
import os
for module in os.listdir(os.path.dirname(__file__)):
if module == '__init__.py' or module[-3:] != '.py':
continue
__import__(module[:-3], locals(), globals())
del module
</code></pre>
<p>Now I have tried to execute the controller and this happened:</p>
<pre><code>$ python controller.py --load-class dummy --call-method hello
Number of arguments: 5 arguments.
Argument List: ['controller.py', '--load-class', 'dummy', '--call-method', 'hello']
Load class: dummy
Call method: hello
Traceback (most recent call last):
File "controller.py", line 33, in <module>
main(sys.argv[1:])
File "controller.py", line 30, in main
getattr(ControllerCases[parsed_args.load_class], parsed_args.call_method)(argv)
TypeError: 'module' object has no attribute '__getitem__'
</code></pre>
<p>I am using Python 2.7.13.</p>
<p><strong>How to dynamically load class and call method in Python?</strong></p>
|
<p>I solved it with a little real-life help. I had to convert the controller cases from classes to modules and call <code>getattr</code> twice: once for the module and once for the method.</p>
<hr>
<p>Convert the <strong>controller case</strong> class into a controller case module by removing the <code>class XYZ():</code> line.</p>
<p>From:</p>
<pre><code>#!/usr/bin/env python
class dummy():
def hello(argv):
print 'Hello World'
</code></pre>
<p>To:</p>
<pre><code>#!/usr/bin/env python
def hello(argv):
print 'Hello World'
</code></pre>
<hr>
<p>Change the <code>getattr</code> code in the <strong>controller</strong>.</p>
<p>From:</p>
<pre><code>def main(argv):
print 'Number of arguments:', len(sys.argv), 'arguments.'
print 'Argument List:', str(sys.argv)
print 'Load class: ', str(parsed_args.load_class)
print 'Call method: ', str(parsed_args.call_method)
getattr(ControllerCases[parsed_args.load_class], parsed_args.call_method)(argv)
if __name__ == "__main__":
main(sys.argv[1:])
</code></pre>
<p>To:</p>
<pre><code>def main(argv):
print 'Number of arguments:', len(sys.argv), 'arguments.'
print 'Argument List:', str(sys.argv)
print 'Load class: ', str(parsed_args.load_class)
print 'Call method: ', str(parsed_args.call_method)
load_class = getattr(ControllerCases, parsed_args.load_class)
call_method = getattr(load_class, parsed_args.call_method)
try:
call_method(argv)
except KeyboardInterrupt:
print 'UserAborted'
if __name__ == "__main__":
main(sys.argv[1:])
</code></pre>
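<p>A hedged alternative sketch that keeps the class instead of flattening it into a module. Two assumptions: the module file and the class share the same name (class <code>dummy</code> inside <code>dummy.py</code>, as in the question), and the method signature is fixed to take the instance first (<code>def hello(self, argv)</code>):</p>
<pre><code>module = getattr(ControllerCases, parsed_args.load_class)  # the module, e.g. 'dummy'
klass = getattr(module, parsed_args.load_class)             # the class 'dummy' inside it
call_method = getattr(klass(), parsed_args.call_method)     # bind the method to a fresh instance
call_method(argv)
</code></pre>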
|
python|python-2.7|class|methods
| 1 |
1,908,805 | 42,977,207 |
Complete the index and columns in pandas(DataFrame)?
|
<p>Here is a datafrmae.</p>
<pre><code>a = pd.DataFrame({'a':np.arange(10)}, index=np.arange(0,20,2))
# then I can create new dataframe and complete the index.
b = pd.DataFrame(index=np.arange(20))
b['a'] = a
# Now the index np.arange(0,20,2) is converted to np.arange(20). Fill non-existent values with np.nan.
</code></pre>
<p>But how can I do the same for columns? Suppose the columns' dtype is int32 and the names are np.arange(0,20,2).</p>
|
<p>It seems you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>reindex</code></a>:</p>
<pre><code>print (a.reindex(b.index))
a
0 0.0
1 NaN
2 1.0
3 NaN
4 2.0
5 NaN
6 3.0
7 NaN
8 4.0
9 NaN
10 5.0
11 NaN
12 6.0
13 NaN
14 7.0
15 NaN
16 8.0
17 NaN
18 9.0
19 NaN
</code></pre>
<p>Also can reindex columns:</p>
<pre><code>a.columns = [0]
print (a.reindex(index=b.index, columns=np.arange(0,20,2)))
0 2 4 6 8 10 12 14 16 18
0 0.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 1.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 2.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN
5 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
6 3.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN
7 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
8 4.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN
9 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
10 5.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN
11 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
12 6.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN
13 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
14 7.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN
15 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
16 8.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN
17 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
18 9.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN
19 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
</code></pre>
|
python|pandas
| 1 |
1,908,806 | 66,609,715 |
What is wrong with my implementation of this sentinel value?
|
<p><strong>Here is the problem:</strong> This program reads a sequence of floating-point values and computes their average. A value of zero is used as the sentinel. Improve the loop so that the loop is only exited if the user enters the letter Q.</p>
<p><strong>Here is my code:</strong></p>
<pre><code>total = 0.0
count = 0
# TODO: Fix this program so that it only stops reading numbers
# when the user enters Q.
value = float(input("Enter a real value or Q to quit: "))
while value != "Q" :
total = total + value
count = count + 1
value = float(input("Enter a real value or Q to quit: "))
avg = total / count
print(avg)
</code></pre>
<p><strong>Here is my error message:</strong></p>
<pre><code>File "/tmp/codecheck.zUZpqo8apX/computesum.py", line 11, in <module>
    value = float(input("Enter a real value or Q to quit: "))
ValueError: could not convert string to float: 'Q'
</code></pre>
|
<p>Regarding your code:</p>
<pre class="lang-py prettyprint-override"><code>value = float(input("Enter a real value or Q to quit: "))
</code></pre>
<p>If you want to handle a non-float value like <code>Q</code>, you should be treating it as a <em>string</em> at that point, not immediately trying to convert it to a <code>float</code> (which will fail for <code>Q</code>, as you've discovered).</p>
<p>In other words, something like:</p>
<pre class="lang-py prettyprint-override"><code>value = input("Enter a real value or Q to quit: ")
while value != "Q" :
total = total + float(value)
count = count + 1
value = input("Enter a real value or Q to quit: ")
</code></pre>
<hr />
<p>That will still cause problems if you enter a value that is neither <code>Q</code> <em>nor</em> a valid <code>float</code> but you could handle this by catching an exception and adjusting behaviour:</p>
<pre class="lang-py prettyprint-override"><code>value = input("Enter a real value or Q to quit: ")
while value != "Q" :
try:
total = total + float(value)
count = count + 1
except ValueError:
print("Not a valid float value, try again.")
value = input("Enter a real value or Q to quit: ")
</code></pre>
<p>If the <code>float()</code> call fails in that case, neither <code>total</code> nor <code>count</code> will be updated and the <code>except</code> block will be executed, letting the user know there was a problem.</p>
<hr />
<p>The only other thing I'll mention is that you should think about the case where the <em>first</em> thing you do is enter <code>Q</code>. In that case, <code>count</code> will still be set to zero so you probably <em>don't</em> want to divide by it. Instead, you could do something like:</p>
<pre class="lang-py prettyprint-override"><code>if count == 0:
print("No values entered therefore no average.")
else:
avg = total / count
print(avg)
</code></pre>
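<p>Putting those pieces together, a minimal complete sketch of the fixed program (an assembly of the fragments above, not the original answer's code):</p>
<pre class="lang-py prettyprint-override"><code>total = 0.0
count = 0

value = input("Enter a real value or Q to quit: ")
while value != "Q":
    try:
        total = total + float(value)
        count = count + 1
    except ValueError:
        print("Not a valid float value, try again.")
    value = input("Enter a real value or Q to quit: ")

if count == 0:
    print("No values entered therefore no average.")
else:
    print(total / count)
</code></pre>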
|
python
| 1 |
1,908,807 | 50,922,876 |
Python: Counting unique instances of lists/DataFrame in a list of lists/DataFrames
|
<p>I have a for loop that creates - let's say - 1000 lists. The generation of these lists is slightly randomized, so there are differences between the generated lists, but there will also be some that overlap. And I want to count how many times each unique list occurs, that is, how many times a given list matches another generated list.</p>
<p>Each item in the list is formatted as follows: </p>
<p><code>TeamRecord(name='GER', group='F', p=9, gs=6, ga=2, defeated=['SWE', 'MEX', 'KOR']),</code></p>
<p>If it helps, here's the context: As the list item might indicate, I'm simulating the soccer World Cup group stages, and each simulation results in a list containing each team's performance for that given simulation. So I want to see, given for example 10000 simulations, which outcomes are most likely given how many times they occur in the simulations.</p>
<p>I think this is more of an abstract question, and I don't really have any code to provide that would be useful. I did try to tinker a bit with converting the lists to DataFrames and thought of using the .equals method, but I'm not sure how that could be done effectively. </p>
<p>So again, the question is: </p>
<p>How would you go about counting the occurrence of each unique instance of a list generated by a for-loop - that is, all the items in the list should be identical to those of another generated list. Is this even possible to do, or is it simply a dumb way of looking at it?</p>
<p><strong>EDIT</strong>
Simple example illustrating the purpose:</p>
<pre><code>list_of_lists = [['Test1', 'Test2', 'Test3'],
['Test1', 'Test2', 'Test3'],
['Test4', 'Test5', 'Test6']]
</code></pre>
<p>How would you go about counting that there are two instances of the first list, one instance of the third list, and so on?</p>
|
<p>Any solution will be specific to the type of object you are counting. I deal <strong>only</strong> with the specific example you have highlighted, i.e. a list of lists of strings.</p>
<p>You can use <code>collections.Counter</code> on tuple versions of your sublists. This works because tuples are hashable while lists are not.</p>
<pre><code>from collections import Counter
L = [['Test1', 'Test2', 'Test3'],
['Test1', 'Test2', 'Test3'],
['Test4', 'Test5', 'Test6']]
res = Counter(map(tuple, L))
print(res)
Counter({('Test1', 'Test2', 'Test3'): 2,
('Test4', 'Test5', 'Test6'): 1})
</code></pre>
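<p>For the actual <code>TeamRecord</code> data from the question, note that each record contains a list field (<code>defeated</code>), which makes the records unhashable even if <code>TeamRecord</code> is a namedtuple, as its repr suggests. A hedged sketch that freezes that field first (<code>simulations</code> is a hypothetical name for your list of per-simulation outcome lists):</p>
<pre><code>from collections import Counter

def freeze(record):
    # replace the unhashable 'defeated' list with a tuple so the record becomes hashable
    return record._replace(defeated=tuple(record.defeated))

counts = Counter(tuple(freeze(r) for r in outcome) for outcome in simulations)
</code></pre>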
|
python|pandas|dataframe
| 1 |
1,908,808 | 26,499,005 |
How to get a function to execute until the conditions are not sufficient
|
<pre><code>import copy
def remove_fully_correct(answer, guess):
"""(list,list) -> list
Return a list that removes the chars from the first list that are the same and in the same position in the second list
>>>remove_fully_correct(['a','b','c','d'], ['d','b','a','d'])
['a','c']
"""
res = copy.copy(answer)
for index in range(len(res)):
for x, y in zip(res, guess):
if res[index] == guess[index]:
res.remove(x)
return res
</code></pre>
<p>Basically I have a function that removes all the characters from one list that are found in the second list, but my function only seems to remove the first value found from the list. Any help appreciated</p>
|
<p>You are returning inside the loop, so you only have one pass through the outer loop:</p>
<pre><code>import copy
def remove_fully_correct(answer, guess):
"""(list,list) -> list
Return a list that removes the chars from the first list that are the same and in the same position in the second list
>>>remove_fully_correct(['a','b','c','d'], ['d','b','a','d'])
['a','c']
"""
res = copy.copy(answer)
for index in range(len(res)):
for x, y in zip(res, guess):
if res[index] == guess[index]:
res.remove(x)
return res # move outside
</code></pre>
<p>You should use enumerate if the lists are the same size:</p>
<pre><code>def remove_fully_correct(answer, guess):
"""(list,list) -> list
Return a list that removes the chars from the first list that are the same and in the same position in the second list
>>>remove_fully_correct(['a','b','c','d'], ['d','b','a','d'])
['a','c']
"""
return [ele for index,ele in enumerate(answer) if ele != guess[index]]

In [6]: remove_fully_correct(['a','b','c','d'], ['d','b','a','d'])
Out[6]: ['a', 'c']
</code></pre>
<p>using zip:</p>
<pre><code>def remove_fully_correct(answer, guess):
"""(list,list) -> list
Return a list that removes the chars from the first list that are the same and in the same position in the second list
>>>remove_fully_correct(['a','b','c','d'], ['d','b','a','d'])
['a','c']
"""
return [a for a,b in zip(answer,guess) if a != b]
</code></pre>
|
python|for-loop
| 0 |
1,908,809 | 26,857,218 |
reversing a string, if it contains only mirrorable letters
|
<p>Here is the question. No, this is not homework; this is self-taught.</p>
<p>I am attempting the following question:</p>
<p>The mirror image of string vow is string wov, and the mirror image of string wood is string boow. The mirror image of string bed cannot be represented as a string, however, because the mirror image of e is not a valid character.
The characters in the alphabet whose mirror image is a valid character are: b, d, i, o, v, w, and x. Develop function mirror() that takes a string and returns its mirror image but only if the mirror image can be represented using letters in the alphabet.</p>
<p><strong>My code:</strong></p>
<pre><code>def mirror(word):
'''returns mirror image of word but if it can be represented
using letters in the alphabet'''
for i in range (0,len(word)):
if i == 'bdiovwx':
print(str(word.reverse))
else:
print ('NOPE')
</code></pre>
<p>The result I am getting is nothing: I execute the program and nothing prints.</p>
<p>Any thoughts?</p>
<p>Thank you</p>
|
<p>You don't need a for loop for this. Essentially, you are testing whether all the characters of the word belong to the set of characters in <code>'bdiovwx'</code>, and so you can just check for exactly that: the subset relationship between the set of characters in word and <code>'bdiovwx'</code>.</p>
<p>Also, strings do not have a reverse method in Python, so you can use the trick <code>"string"[::-1]</code> to get the reverse.</p>
<pre><code>def mirror(word):
'''
returns mirror image of word but if it can be represented
using letters in the alphabet
'''
if set(word).issubset(set('bdiovwx')):
print(word[::-1])
else:
print ('NOPE')
</code></pre>
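<p>One caveat, grounded in the question's own examples: mirroring "wood" gives "boow", so b and d swap under reflection, and reversing alone is not enough. A hedged sketch layering that mapping on top of the subset check (Python 3):</p>
<pre><code>MIRROR = str.maketrans('bdiovwx', 'dbiovwx')  # b and d swap; the rest are their own mirror

def mirror(word):
    if set(word).issubset(set('bdiovwx')):
        print(word[::-1].translate(MIRROR))
    else:
        print('NOPE')

mirror('wood')  # boow
mirror('vow')   # wov
</code></pre>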
|
python|string
| 3 |
1,908,810 | 61,299,115 |
python define variable difference before/after main block
|
<p>What is the difference in Python when defining a variable before or after the main block? See the variable "lock" in the following code examples.</p>
<p>The following code works:</p>
<pre><code>import multiprocessing
from multiprocessing import Lock
lock=Lock()
def single_process(num):
lock.acquire()
print(num)
lock.release()
if __name__ == "__main__":
p1 = multiprocessing.Process(target=single_process, args=(123,))
p2 = multiprocessing.Process(target=single_process, args=(456,))
p1.start()
p2.start()
p1.join()
p2.join()
</code></pre>
<p>but the following code does not work, saying lock is not defined:</p>
<pre><code>import multiprocessing
from multiprocessing import Lock
def single_process(num):
lock.acquire()
print(num)
lock.release()
if __name__ == "__main__":
lock=Lock()
p1 = multiprocessing.Process(target=single_process, args=(123,))
p2 = multiprocessing.Process(target=single_process, args=(456,))
p1.start()
p2.start()
p1.join()
p2.join()
</code></pre>
|
<p>You should pass the lock to the child processes, via the args.</p>
<hr>
<p><strong>Explanation:</strong></p>
<p>I think you are running this on Windows. Either way, you should not expect all the variables you declare or values you set in the parent process to be available in child processes automatically. See <a href="https://docs.python.org/dev/library/multiprocessing.html#synchronization-between-processes" rel="nofollow noreferrer">this</a> for synchronisation (locks) and <a href="https://docs.python.org/dev/library/multiprocessing.html#sharing-state-between-processes" rel="nofollow noreferrer">this</a> for different ways of sharing data with child processes.</p>
<p>That said, both code snippets you posted run for me on my system (Linux). The difference on Linux is that the <a href="https://en.wikipedia.org/wiki/Fork_(system_call)" rel="nofollow noreferrer">fork()</a> system call is used to start new processes. Fork makes a copy of the calling process, so the child inherits the state of the main process.</p>
<p>On <a href="https://docs.python.org/2/library/multiprocessing.html#windows" rel="nofollow noreferrer">Windows</a>, there is no fork() system call. So when a child is created, the module is reloaded in the child process without <code>__name__</code> set to <code>__main__</code> (otherwise it would lead to a <a href="https://en.wikipedia.org/wiki/Fork_bomb" rel="nofollow noreferrer">fork bomb</a> in your code). So in your first snippet, the <code>lock</code> variable will be set in the child process as well, because it is not inside <code>if __name__ == "__main__"</code>. But in your second snippet, the lock variable will not be set, because it is inside the <code>if __name__ == "__main__"</code> block.</p>
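<p>A minimal sketch of the recommended fix, passing the lock explicitly through <code>args</code> so it works regardless of the start method (a rewrite of the question's second snippet, not code from the original answer):</p>
<pre><code>import multiprocessing
from multiprocessing import Lock

def single_process(lock, num):
    lock.acquire()
    print(num)
    lock.release()

if __name__ == "__main__":
    lock = Lock()
    p1 = multiprocessing.Process(target=single_process, args=(lock, 123))
    p2 = multiprocessing.Process(target=single_process, args=(lock, 456))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
</code></pre>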
|
python|variables|scope
| 1 |
1,908,811 | 60,636,558 |
ERROR conda.core.link:_execute(502): An error occurred while installing package 'conda-forge::astor-0.7.1-py_0'
|
<p>I am trying to follow a Python tutorial and I have been able to execute almost everything, until the point of deploying an endpoint to Azure with Python.</p>
<p>In order to give some context I have uploaded the scripts to my git account:
<a href="https://github.com/levalencia/MLTutorial" rel="nofollow noreferrer">https://github.com/levalencia/MLTutorial</a></p>
<p>File 1 and 2 Work perfectly fine</p>
<p>However the following section in File 3 fails:</p>
<pre><code>%%time
from azureml.core.webservice import Webservice
from azureml.core.model import InferenceConfig
inference_config = InferenceConfig(runtime= "python",
entry_script="score.py",
conda_file="myenv.yml")
service = Model.deploy(workspace=ws,
name='keras-mnist-svc2',
models=[amlModel],
inference_config=inference_config,
deployment_config=aciconfig)
service.wait_for_deployment(show_output=True)
</code></pre>
<p>with below error:</p>
<pre><code>ERROR - Service deployment polling reached non-successful terminal state, current service state: Transitioning
Operation ID: 8353cad2-4218-450a-a03b-df418725acb1
More information can be found here: https://machinelearnin1143382465.blob.core.windows.net/azureml/ImageLogs/8353cad2-4218-450a-a03b-df418725acb1/build.log?sv=2018-03-28&sr=b&sig=UKzefxIrm3l7OsXxj%2FT4RsvUfAuhuaBwaz2P4mJu7vY%3D&st=2020-03-11T12%3A23%3A33Z&se=2020-03-11T20%3A28%3A33Z&sp=r
Error:
{
"code": "EnvironmentBuildFailed",
"statusCode": 400,
"message": "Failed Building the Environment."
}
ERROR - Service deployment polling reached non-successful terminal state, current service state: Transitioning
Operation ID: 8353cad2-4218-450a-a03b-df418725acb1
More information can be found here: https://machinelearnin1143382465.blob.core.windows.net/azureml/ImageLogs/8353cad2-4218-450a-a03b-df418725acb1/build.log?sv=2018-03-28&sr=b&sig=UKzefxIrm3l7OsXxj%2FT4RsvUfAuhuaBwaz2P4mJu7vY%3D&st=2020-03-11T12%3A23%3A33Z&se=2020-03-11T20%3A28%3A33Z&sp=r
Error:
{
"code": "EnvironmentBuildFailed",
"statusCode": 400,
"message": "Failed Building the Environment."
}
</code></pre>
<p>When I download the logs, I got this:</p>
<pre><code>wheel-0.34.2 | 24 KB | | 0% [0m[91m
wheel-0.34.2 | 24 KB | ########## | 100% [0m
Downloading and Extracting Packages
Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... failed
[91m
ERROR conda.core.link:_execute(502): An error occurred while installing package 'conda-forge::astor-0.7.1-py_0'.
FileNotFoundError(2, "No such file or directory: '/azureml-envs/azureml_6abde325a12ccdba9b5ba76900b99b56/bin/python3.6'")
Attempting to roll back.
[0mRolling back transaction: ...working... done
[91m
FileNotFoundError(2, "No such file or directory: '/azureml-envs/azureml_6abde325a12ccdba9b5ba76900b99b56/bin/python3.6'")
[0mThe command '/bin/sh -c ldconfig /usr/local/cuda/lib64/stubs && conda env create -p /azureml-envs/azureml_6abde325a12ccdba9b5ba76900b99b56 -f azureml-environment-setup/mutated_conda_dependencies.yml && rm -rf "$HOME/.cache/pip" && conda clean -aqy && CONDA_ROOT_DIR=$(conda info --root) && rm -rf "$CONDA_ROOT_DIR/pkgs" && find "$CONDA_ROOT_DIR" -type d -name __pycache__ -exec rm -rf {} + && ldconfig' returned a non-zero code: 1
2020/03/11 12:28:11 Container failed during run: acb_step_0. No retries remaining.
failed to run step ID: acb_step_0: exit status 1
Run ID: cb3 failed after 2m21s. Error: failed during run, err: exit status 1
</code></pre>
<p>Update 1:</p>
<p>I tried to run <code>conda list --name base conda</code></p>
<p>inside the notebook and I got this:</p>
<pre><code> # packages in environment at /anaconda:
#
# Name Version Build Channel
_anaconda_depends 2019.03 py37_0
anaconda custom py37_1
anaconda-client 1.7.2 py37_0
anaconda-navigator 1.9.6 py37_0
anaconda-project 0.8.4 py_0
conda 4.8.2 py37_0
conda-build 3.17.6 py37_0
conda-env 2.6.0 1
conda-package-handling 1.6.0 py37h7b6447c_0
conda-verify 3.1.1 py37_0
Note: you may need to restart the kernel to use updated packages.
</code></pre>
<p>However in the deployment log I got this:</p>
<pre><code>Solving environment: ...working...
done
[91m
==> WARNING: A newer version of conda exists. <==
current version: 4.5.11
latest version: 4.8.2
Please update conda by running
$ conda update -n base -c defaults conda
</code></pre>
|
<p>Unfortunately there seems to be an issue with this version of Conda (4.5.11). To complete this task in the tutorial, you can just update the dependency for Tensorflow and Keras to be from <code>pip</code> and not <code>conda</code>. There are reasons why this is less than ideal for a production environment. The Azure ML <a href="https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.conda_dependencies.condadependencies?view=azure-ml-py" rel="nofollow noreferrer">documentation states</a>:</p>
<blockquote>
<p>"If your dependency is available through both Conda and pip (from
PyPi), use the Conda version, as Conda packages typically come with
pre-built binaries that make installation more reliable."</p>
</blockquote>
<p>In this case though, if you update the following code block:</p>
<pre><code>from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies()
myenv.add_conda_package("tensorflow")
myenv.add_conda_package("keras")
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
# Review environment file
with open("myenv.yml","r") as f:
print(f.read())
</code></pre>
<p>To be the following:</p>
<pre><code>from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies()
myenv.add_pip_package("tensorflow==2.0.0")
myenv.add_pip_package("azureml-defaults")
myenv.add_pip_package("keras")
with open("myenv.yml", "w") as f:
f.write(myenv.serialize_to_string())
with open("myenv.yml", "r") as f:
print(f.read())
</code></pre>
<p>The tutorial should be able to be completed. Let me know if any of this does not work for you once this update has been made.</p>
<p>I have also reported this issue to Microsoft (in regards to the Conda version).</p>
|
python|azure|jupyter-notebook|conda|azure-machine-learning-studio
| 2 |
1,908,812 | 57,847,521 |
Summing two datetime columns
|
<p>I have a dataframe with two columns, for example</p>
<pre><code>A B
00:01:05 2018-10-10 23:58:10
</code></pre>
<p>and I want to get a third column C which is the sum of A + B</p>
<pre><code>A B C
00:01:05 2018-10-10 23:58:10 2018-10-10 23:59:15
</code></pre>
<p>If I do:</p>
<pre><code>df['C']= df['A'] + df['B']
</code></pre>
<p>I get</p>
<pre><code>cannot add DatetimeArray and DatetimeArray
</code></pre>
|
<p>Convert column <code>A</code> to timedeltas by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_timedelta.html" rel="nofollow noreferrer"><code>to_timedelta</code></a> and if necessary column <code>B</code> to <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a>:</p>
<pre><code>df = pd.DataFrame({'A':['00:01:05'],
'B':['2018-10-10 23:58:10']})
df['C'] = pd.to_timedelta(df['A']) + pd.to_datetime(df['B'])
print (df)
A B C
0 00:01:05 2018-10-10 23:58:10 2018-10-10 23:59:15
</code></pre>
<p>If column <code>A</code> contains python times:</p>
<pre><code>df['C'] = pd.to_timedelta(df['A'].astype(str)) + pd.to_datetime(df['B'])
</code></pre>
|
pandas|datetime|sum
| 4 |
1,908,813 | 58,007,676 |
Unable to match using regex in pandas
|
<p>Hi, I have a dataframe as below, for which I need to match a number followed by its unit and return only the number.</p>
<p>I have units like ml, gallon, l, etc.</p>
<p>Input:</p>
<pre><code> text
1234567-CAR WA GK5 9x78x90 12L
3456789 TOP-L BD3 195x169x62 TopL
</code></pre>
<p>Expected output:</p>
<pre><code> text extract Return
1234567-CAR WA GK5 9x78x90 12L 12L 12
3456789 TOP-L BD3 195x169x62 TopL - -
</code></pre>
<p>code:</p>
<pre><code> def names(header):
if re.search('([0-9]+(\.[0-9]*|)(\s|[a-z]*)(\s|[a-z]*)(\s|)ml)',header):
pos_start = re.search('([0-9]+(\.[0-9]*|)(\s|[a-z]*)(\s|[a-z]*)(\s|)ml)', header).start()
pos_end = re.search('([0-9]+(\.[0-9]*|)(\s|[a-z]*)(\s|[a-z]*)(\s|)ml)', header).end()
return header[pos_start:pos_end]
elif re.search('((\d*)l)',header):
pos_start = re.search('((\d*)l)', header).start()
pos_end = re.search('((\d*)l)', header).end()
return header[pos_start:pos_end]
def measure(val):
ml=['ml','ML','mL','Ml']
l=['l','L','Lt','lt']
if any(x in val for x in ml):
return float(re.findall('(\d+\.\d+|\d+)', val)[0])
if any(x in val for x in l):
return float(re.findall('(\d+\.\d+|\d+)', val)[0])*1000
df_result = pd.concat([df['A'],df['text'],df['B'],df['text'].apply(names),(df['text'].apply(names)).dropna().apply(measure)],axis=1)
</code></pre>
<p>Error:</p>
<pre><code> ---> 22 return float(re.findall('(\d+\.\d+|\d+)', val)[0])*1000
IndexError: list index out of range
</code></pre>
|
<p>See if this works for you</p>
<pre><code>df['extract']= [val[-1] for val in df['text'].str.split()]
df['Return']=df['extract'].str.extract(r'(\d+)').fillna('-')
print(df)
text extract Return
0 1234567-CAR WA GK5 9x78x90 12L 12L 12
1 3456789 TOP-L BD3 195x169x62 TopL TopL -
</code></pre>
|
regex|python-3.x|pandas|dataframe
| 0 |
1,908,814 | 56,370,001 |
How to fix ModuleNotFoundError when uploading virtualenv to IBM Cloud Functions?
|
<p>I'm trying to upload a function to IBM Cloud Functions with a virtualenv that has opencv installed. However, when I try to run the action in IBM Cloud it says:</p>
<pre><code>{
"error": "Traceback (most recent call last):
File \"/action/1/src/exec__.py\", line 43, in <module>
from main__ import main as main
File \"/action/1/src/main__.py\", line 1, in <module>
import requests, base64, json, cv2\nModuleNotFoundError: No module named 'cv2'"
}
</code></pre>
<p>I'm using the python:3.7 runtime for this. I thought this was a library issue, since this runtime uses Debian Stretch and I've had problems importing opencv with the python:3-slim-stretch docker image before, as it didn't have some required libraries like libsm6, libxext6 and libxrender.</p>
<p>However, when I ran <code>apt list</code> in the <a href="https://hub.docker.com/r/ibmfunctions/action-python-v3.7" rel="nofollow noreferrer">docker image</a> that IBM uses for its python:3.7 runtime, it had those libraries included.</p>
<p>I created the virtualenv using the docker method shown <a href="https://github.com/apache/incubator-openwhisk/blob/master/docs/actions-python.md#packaging-python-actions-with-a-virtual-environment-in-zip-files" rel="nofollow noreferrer">here.</a> The exact command I used was the following:</p>
<pre><code>docker run --rm -v "$PWD:/tmp" ibmfunctions/action-python-v3.7 /bin/bash -c
"cd tmp; virtualenv virtualenv; source virtualenv/bin/activate;
pip install --no-deps opencv-python;"
</code></pre>
<p>I used --no-deps because <a href="https://cloud.ibm.com/docs/openwhisk?topic=cloud-functions-runtimes#openwhisk_ref_python_environments_3.7" rel="nofollow noreferrer">the runtime already has numpy installed,</a> which is the only dependency of opencv and because with numpy included the zip file exceeded the 48MB limit to upload it to Cloud Functions.</p>
<p>I should be able to import cv2 with no problems but I still get the previous message. Any help would be great!</p>
|
<p>Using a virtualenv folder to include local packages does not automatically <a href="https://stackoverflow.com/questions/12079607/make-virtualenv-inherit-specific-packages-from-your-global-site-packages">inherit the global site-packages</a> from the runtime. This can be enabled using the <code>--system-site-packages</code> flag when using the virtualenv command. </p>
<p>Change the Docker command to the following to make this work:</p>
<pre><code>docker run --rm -v "$PWD:/tmp" ibmfunctions/action-python-v3.7 /bin/bash -c
"cd tmp; virtualenv --system-site-packages virtualenv; source virtualenv/bin/activate;
pip install opencv-python;"
</code></pre>
<p><em><code>--no-deps</code> is no longer needed as the numpy dependency is already satisfied by the global site-packages.</em></p>
<p>Following your commands with this updated Docker script now works for me.</p>
<p>Make sure you allocate enough memory to the OpenWhisk action. I had issues running the code with the default 256MB memory limit. Increasing this to 1024MB fixed any issues I encountered.</p>
|
python-3.x|opencv|ibm-cloud|ibm-cloud-functions
| 3 |
1,908,815 | 56,426,647 |
How to filter and a find a subset of a dataframe in which categorical data in two columns occur more than n, m times
|
<p>I have a dataframe from a csv which contains userId, ISBN and ratings for a bunch of books. I want to find a subset of this dataframe in which both userIds occur more than 200 times and ISBNs occur more than 100 times.</p>
<p>Following is what I tried:</p>
<pre><code>ratings = pd.read_csv('../data/BX-Book-Ratings.csv', sep=';', error_bad_lines=False, encoding="latin-1")
ratings.columns = ['userId', 'ISBN', 'bookRating']
# Choose users with more than 200 ratings and books with more than 100 ratings
user_rating_count = ratings['userId'].value_counts()
relevant_ratings = ratings[ratings['userId'].isin(user_rating_count[user_rating_count >= 200].index)]
print(relevant_ratings.head())
print(relevant_ratings.shape)
books_rating_count = relevant_ratings['ISBN'].value_counts()
relevant_ratings_book = relevant_ratings[relevant_ratings['ISBN'].isin(
books_rating_count[books_rating_count >= 100].index)]
print(relevant_ratings_book.head())
print(relevant_ratings_book.shape)
# Check that userId occurs more than 200 times
users_grouped = pd.DataFrame(relevant_ratings.groupby('userId')['bookRating'].count()).reset_index()
users_grouped.columns = ['userId', 'ratingCount']
sorted_users = users_grouped.sort_values('ratingCount')
print(sorted_users.head())
# Check that ISBN occurs more than 100 times
books_grouped = pd.DataFrame(relevant_ratings.groupby('ISBN')['bookRating'].count()).reset_index()
books_grouped.columns = ['ISBN', 'ratingCount']
sorted_books = books_grouped.sort_values('ratingCount')
print(sorted_books.head())
</code></pre>
<p>Following is the output I got:</p>
<pre><code> userId ISBN bookRating
1456 277427 002542730X 10
1457 277427 0026217457 0
1458 277427 003008685X 8
1459 277427 0030615321 0
1460 277427 0060002050 0
(527556, 3)
userId ISBN bookRating
1469 277427 0060930535 0
1471 277427 0060934417 0
1474 277427 0061009059 9
1495 277427 0142001740 0
1513 277427 0312966091 0
(13793, 3)
userId ratingCount
73 26883 200
298 99955 200
826 252827 200
107 36554 200
240 83671 200
ISBN ratingCount
0 0330299891 1
132873 074939918X 1
132874 0749399201 1
132875 074939921X 1
132877 0749399295 1
</code></pre>
<p>As seen above, when sorting the table grouped by userId in ascending order, it shows only userIds that occur 200 or more times.
But when sorting the table grouped by ISBN in ascending order, it shows ISBNs that occur even a single time.</p>
<p>I expected both userIds and ISBNs to occur more than 200 and 100 times respectively.
Please let me know what I have done wrong and how to get the correct result.</p>
|
<p>You should try and produce a small version of the problem that can be solved without access to large csv files. Check this page for more details: <a href="https://stackoverflow.com/help/how-to-ask">https://stackoverflow.com/help/how-to-ask</a></p>
<p>That said, here is a dummy version of your dataset:</p>
<pre><code>import pandas as pd
import random
import string
n=1000
isbn = [random.choice(['abc','def','ghi','jkl','mno']) for x in range(n)]
rating = [random.choice(range(9)) for x in range(n)]
userId = [random.choice(['x','y','z']) for x in range(n)]
df = pd.DataFrame({'isbn':isbn,'rating':rating,'userId':userId})
</code></pre>
<p>You can get the counts by userId and isbns this way:</p>
<pre><code>df_userId_count = df.groupby('userId',as_index=False)['rating'].count()
df_isbn_count = df.groupby('isbn',as_index=False)['rating'].count()
</code></pre>
<p>and extract the unique values by:</p>
<pre><code>userId_select = (df_userId_count[df_userId_count.rating>200].userId.values)
isbn_select = (df_isbn_count[df_isbn_count.rating>100].isbn.values)
</code></pre>
<p>So that your final filtered dataframe is:</p>
<pre><code>df = df[df.userId.isin(userId_select) & df.isbn.isin(isbn_select) ]
</code></pre>
|
python|dataframe|data-analysis
| 0 |
1,908,816 | 18,593,224 |
python matplotlib.pyplot and numpy problems
|
<p>I have the following problem. I want to evaluate the following function</p>
<pre><code>def sigLinZ(self,val,omega):
A = 1
C = 0
D = 1
B =1./(omega*val*1j)
return np.asmatrix(np.array([[A,B],[C,D]]))
</code></pre>
<p>in such a way that I can use it with pyplot like this:</p>
<pre><code>omega = numpy.arange(0,100,1)
y = classInstance.sigLinZ(12,omega)
plt.plot(omega,y)
</code></pre>
<p>but this does not work. Python says:</p>
<pre><code>Traceback (most recent call last):
File "testImpedanz.py", line 132, in test6_lineImpedanz
print "neue Matrix: ", lineImpe.sigLinZ('C',lineImpe.C(),np.array([600e6,300e6]))
File "/afs/physnet.uni-hamburg.de/users/ap_h/pgwozdz/Dokumente/PythonSkriptsPHD/ImpedanzCalculation.py", line 350, in sigLinZ
return np.mat(np.array([[A,B],[C,D]]))
TypeError: only length-1 arrays can be converted to Python scalars
</code></pre>
<p>I know for numpy functions this procedure works just fine but for my function it does not work at all.</p>
|
<p>You are attempting to insert an array into an element of the matrix, because the <code>omega</code> you're passing into the method is an array. Either you need to iterate over <code>omega</code>, passing each element to <code>sigLinZ</code> separately, or you need to re-write <code>sigLinZ</code> to handle the array and return something like a list of matrices or a 3D array.</p>
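<p>A hedged sketch of the first option, iterating over <code>omega</code> (the range starts at 1 because B = 1/(omega*val*1j) divides by zero at omega = 0; plotting the magnitude of the B entry is just an illustrative choice):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

omega = np.arange(1, 100, 1)                          # avoid omega = 0
mats = [classInstance.sigLinZ(12, w) for w in omega]  # one 2x2 matrix per frequency
y = [abs(m[0, 1]) for m in mats]                      # e.g. |B| for each matrix
plt.plot(omega, y)
plt.show()
</code></pre>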
|
python|numpy|matplotlib
| 3 |
1,908,817 | 55,415,747 |
Download File using python script behind corporate proxy
|
<p>I am putting together a script that will download a file from the web. However, some people sit behind a corporate firewall, which means that if you are at home the code below works, but if you are in the office it hangs unless you set the proxy variable manually first.</p>
<p>What I am thinking is to create an if statement that checks the user's IP address: if the user has an IP address starting with 8.x, 9.x or 7.x, then use the proxy; otherwise ignore it and proceed with the download.</p>
<p>The code I am using for this download is below. I am pretty new to this, so I am not sure how to do the if statement for the IP and the proxy piece, so any help would be great.</p>
<pre><code>import urllib.request
import shutil
import subprocess
import os
from os import system
url = "https://downloads.com/App.exe"
output_file = "C:\\User\\Downloads\\App.exe"
with urllib.request.urlopen(url) as response, open(output_file, 'wb') as out_file:
shutil.copyfileobj(response, out_file)
</code></pre>
|
<p>You can read the local IP as <a href="https://stackoverflow.com/questions/55415747/download-file-using-python-script-behind-corporate-proxy#comment97550601_55415747">@nickthefreak</a> commented and then set the proxy using the <a href="http://www.python-requests.org/en/latest/user/advanced/#proxies" rel="nofollow noreferrer">requests</a> lib:</p>
<pre><code>import socket
import requests
URL = 'https://downloads.com/App.exe'
if socket.gethostbyname(socket.gethostname()).startswith(('8', '9', '7')):
r = requests.get(URL, stream=True, proxies={'http': 'http://10.10.1.10:3128', 'https': 'http://10.10.1.10:1080'})
else:
r = requests.get(URL, stream=True)
with open('C:\\User\\Downloads\\App.exe', 'wb') as f:
for chunk in r:
f.write(chunk)
</code></pre>
|
python|python-3.x|shell|if-statement|output
| 1 |
1,908,818 | 42,570,500 |
What is happening when I assign two Dataframes in Python
|
<p>I am noticing some interesting behavior in my code.</p>
<p>If I do df1=df2, and then df2=df3, why does df1 also equal df3 if I look inside? Something to do with DataFrame.copy(deep=True)?</p>
<p>Would the same behavior be observed in simple variables, or only complex objects like DFs? </p>
<p><a href="https://i.stack.imgur.com/NnyKV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NnyKV.png" alt="enter image description here"></a></p>
<p>Thanks. </p>
|
<p>In order to copy the values instead of just binding another name to the same object in memory, you need to use <code>df1 = df2.copy()</code>. This matters for complex (mutable) objects like DataFrames; with simple immutable values such as ints you never observe the aliasing, because they cannot be modified in place.</p>
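<p>A short sketch illustrating the difference (standard pandas behaviour, not code from the original answer):</p>
<pre><code>import pandas as pd

df2 = pd.DataFrame({'a': [1, 2]})
df1 = df2                  # both names now refer to the same object
df2.loc[0, 'a'] = 99
print(df1.loc[0, 'a'])     # 99, the change shows through the alias

df1 = df2.copy()           # independent copy (deep by default)
df2.loc[0, 'a'] = 0
print(df1.loc[0, 'a'])     # still 99
</code></pre>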
|
python|pandas|dataframe
| 0 |
1,908,819 | 58,247,540 |
Pandas parse week numbers
|
<p>Consider the following file <code>test.csv</code>:</p>
<pre><code>"Time","RegionCode","RegionName","NumValue"
"2009-W40","AT","Austria",0
"2009-W40","BE","Belgium",54
"2009-W40","BG","Bulgaria",0
"2009-W40","CZ","Czech Republic",1
</code></pre>
<p>I'd like to parse the date which is stored in the first column and would like to create a dataframe like so:</p>
<pre class="lang-py prettyprint-override"><code>parser = lambda x: pd.datetime.strptime(x, "%Y-W%W")
df = pd.read_csv("test.csv", parse_dates=["Time"], date_parser=parser)
</code></pre>
<p>Result:</p>
<pre><code> Time RegionCode RegionName NumValue
0 2009-01-01 AT Austria 0
1 2009-01-01 BE Belgium 54
2 2009-01-01 BG Bulgaria 0
3 2009-01-01 CZ Czech Republic 1
</code></pre>
<p>However, the resulting time column is not correct. All I get is "2009-01-01" and this is certainly not the 40th week of the year. Am I doing something wrong? Has anybody else had this issue when parsing weeks?</p>
|
<p>You are almost correct. The only problem is that from a week number and year alone, you cannot determine a specific date. The trick is to just add the day of the week as 1.</p>
<p>I would recommend sticking with <code>pd.to_datetime()</code> like you tried initially and supplying a <a href="https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior" rel="nofollow noreferrer">date-format string</a>. That should work out fine with the added 1:</p>
<pre><code>pd.to_datetime(df['Time'] + '-1', format='%Y-W%W-%w')
# 0 2009-10-05
# 1 2009-10-05
# 2 2009-10-05
# 3 2009-10-05
</code></pre>
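<p>The same fix also works inside the original <code>read_csv</code> date parser, if you prefer to keep that approach (a hedged sketch; <code>datetime.datetime.strptime</code> is used directly, since <code>pd.datetime</code> is just an alias for it):</p>
<pre class="lang-py prettyprint-override"><code>import datetime
import pandas as pd

parser = lambda x: datetime.datetime.strptime(x + "-1", "%Y-W%W-%w")
df = pd.read_csv("test.csv", parse_dates=["Time"], date_parser=parser)
# the Time column now reads 2009-10-05 for "2009-W40"
</code></pre>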
|
pandas|datetime
| 2 |
1,908,820 | 28,772,710 |
How to verify that two different .csv files column ids match with python?
|
<p>I have two different <code>.csv</code> files, but they have the same <code>id</code> column.</p>
<pre><code>file_1.csv:
id, column1, column2
4543DFGD_werwe_23, string
4546476FGH34_wee_24, string
....
45sd234_w32rwe_2342342, string
</code></pre>
<p>The other one:</p>
<pre><code>file_1.csv:
id, column3, column4
4543DFGD_werwe_23, bla bla bla
4546476FGH34_wee_24, bla bla bla
....
45sd234_w32rwe_2342342, bla bla bla
</code></pre>
<p>How can I verify that these two columns match (have the same ids), or are the same, with the csv module or with pandas?</p>
|
<p>After loading you can call <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.equals.html#pandas.Series.equals" rel="nofollow"><code>equals</code></a> on the id column:</p>
<pre><code>df['id'].equals(df1['id'])
</code></pre>
<p>This will return <code>True</code> or <code>False</code> depending on whether they are exactly the same: same length, with the same values in the same order.</p>
<pre><code>In [3]:
df = pd.DataFrame({'id':np.arange(10)})
df1 = pd.DataFrame({'id':np.arange(10)})
df.id.equals(df1.id)
Out[3]:
True
In [7]:
df = pd.DataFrame({'id':np.arange(10)})
df1 = pd.DataFrame({'id':[0,1,1,3,4,5,6,7,8,9]})
df.id.equals(df1.id)
Out[7]:
False
In [8]:
df.id == df1.id
Out[8]:
0 True
1 True
2 False
3 True
4 True
5 True
6 True
7 True
8 True
9 True
Name: id, dtype: bool
</code></pre>
<p>To load the csvs:</p>
<pre><code>df = pd.read_csv('file_1.csv')
df1 = pd.read_csv('file_2.csv') # I'm assuming your real other csv is not the same name as file_1.csv
</code></pre>
<p>Then you can perform the same comparison as above:</p>
<pre><code>df.id.equals(df1.id)
</code></pre>
<p>If you just want to compare the id columns you can specify just to load that column:</p>
<pre><code>df = pd.read_csv('file_1.csv', usecols=['id'])
df1 = pd.read_csv('file_2.csv', usecols=['id'])
</code></pre>
|
python|python-2.7|csv|pandas
| 3 |
1,908,821 | 28,577,255 |
Pythonic way of defining the function
|
<p>A homework assignment asks us to write some functions, namely <code>orSearch</code> and <code>andSearch</code>.</p>
<pre><code>"""
Input: an inverse index, as created by makeInverseIndex, and a list of words to query
Output: the set of document ids that contain _any_ of the specified words
Feel free to use a loop instead of a comprehension.
>>> idx = makeInverseIndex(['Johann Sebastian Bach', 'Johannes Brahms', 'Johann Strauss the Younger', 'Johann Strauss the Elder', ' Johann Christian Bach', 'Carl Philipp Emanuel Bach'])
>>> orSearch(idx, ['Bach','the'])
{0, 2, 3, 4, 5}
>>> orSearch(idx, ['Johann', 'Carl'])
{0, 2, 3, 4, 5}
"""
</code></pre>
<p>Given above is the documentation of <code>orSearch</code> similarly in <code>andSearch</code> we return only those set of docs which contains all instances of the query list. </p>
<p>We can assume that the inverse index has already been provided. An example of an inverse index for <code>['hello world','hello','hello cat','hellolot of cats']</code> is <code>{'hello': {0, 1, 2}, 'cat': {2}, 'of': {3}, 'world': {0}, 'cats': {3}, 'hellolot': {3}}</code> </p>
<p>So my question is, I was able to write a single line comprehension for the <code>orSearch</code> method given by</p>
<pre><code>def orSearch(inverseIndex, query):
return {index for word in query if word in inverseIndex.keys() for index in inverseIndex[word]}
</code></pre>
<p>But I am unable to think of the most pythonic way of writing <code>andSearch</code>. I have written the following code; it works, but I guess it is not that pythonic:</p>
<pre><code>def andSearch(inverseIndex, query):
if len(query) != 0:
result = inverseIndex[query[0]]
else:
result = set()
for word in query:
if word in inverseIndex.keys():
result = result & inverseIndex[word]
return result
</code></pre>
<p>Any suggestions on more compact code for <code>andSearch</code> ? </p>
|
<p>Rewrite <code>orSearch()</code> to use <code>any()</code> to find any of the terms, and then derive <code>andSearch()</code> by modifying your solution to use <code>all()</code> instead to find all of the terms.</p>
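<p>A hedged sketch of what that suggestion could look like (one possible reading of the hint, not the answerer's code); note that with an empty query <code>all()</code> is vacuously true, so andSearch would return every document:</p>
<pre><code>def orSearch(inverseIndex, query):
    docs = {d for ids in inverseIndex.values() for d in ids}
    return {d for d in docs
            if any(d in inverseIndex.get(w, set()) for w in query)}

def andSearch(inverseIndex, query):
    docs = {d for ids in inverseIndex.values() for d in ids}
    return {d for d in docs
            if all(d in inverseIndex.get(w, set()) for w in query)}
</code></pre>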
|
python
| 2 |
1,908,822 | 28,730,578 |
How to use python and OpenCV to mark license plates in image
|
<p>I am trying to use a Python script with OpenCV to pick out license plates in an image and return the coordinates / draw a bounding box around the license plate. The script I wrote is not able to find the license plate; it often returns a different area of the car.</p>
<pre><code>import numpy as np
import cv2
def find_license(image):
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.bilateralFilter(gray, 11, 17, 17)
edged = cv2.Canny(gray, 30, 200)
cv2.imwrite('detect.png', edged)
(cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnts=sorted(cnts, key = cv2.contourArea, reverse = True)[:20]
# loop over our contours
for c in cnts:
peri = cv2.arcLength(c, True)
approx = cv2.approxPolyDP(c, 0.02 * peri, True)
cv2.drawContours(image, [approx], -1, (0,255,0), 3)
# compute the bounding box of the plate region and return it
return cv2.minAreaRect(c)
</code></pre>
|
<p>If you are looking to improve your code I would suggest you try thresholding the image first to return only colours that you would associate with the license plate.</p>
<p>(Yellow and white in the UK, depends on your country)</p>
<p>This will remove all parts of the image that are not that colour and your script may have a higher chance of working</p>
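<p>A hedged sketch of that colour-thresholding step with <code>cv2.inRange</code> (the HSV bounds here are assumptions for yellow plates and will need tuning for your own images and lighting):</p>
<pre><code>import cv2
import numpy as np

hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lower_yellow = np.array([15, 80, 80])    # assumed lower HSV bound
upper_yellow = np.array([35, 255, 255])  # assumed upper HSV bound
mask = cv2.inRange(hsv, lower_yellow, upper_yellow)
plate_candidates = cv2.bitwise_and(image, image, mask=mask)
</code></pre>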
<p><a href="http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html#converting-colorspaces" rel="nofollow">Here</a> is a link to a <em>very</em> useful set of tutorials/tools to help you achieve this (and most other computer vision problems)</p>
<p>Another process that may help would be to check the length of each contour per object and discard any that do not fall under a ratio that you set (i.e. the top and bottom contours are longer than the two sides), as most licence plates have a <em>standard</em> size (maybe not all; again, I don't know what country you are making this for).</p>
<p>Another approach entirely would be to train your own haar cascade classifier for license plates, which would probably have an even better chance of success. To do this you will need <strong>a lot</strong> of images containing license plates and even more that do <strong>not</strong> contain them.</p>
<p><a href="http://nayakamitarup.blogspot.in/2011/07/how-to-make-your-own-haar-trained-xml.html" rel="nofollow">Here</a> is a link to a tutorial with tools to help you achieve this.</p>
<p>By following the tutorial you should then end up with a .XML file that will be your new trained classifier.</p>
<p><a href="http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html#face-detection" rel="nofollow">Here</a> is a link to a tutorial that will help you to <em>use</em> your new classifier. I would also suggest reading up on how haar classifiers work in general as this may give you a better understanding of what images to use to train your classifier and what preprocessing techniques you could use on your images to improve the accuracy of your classifier.</p>
<p>Good luck, hope this helps.</p>
|
python|opencv|object-detection
| 1 |
1,908,823 | 14,486,040 |
Raspberry pi 'Response IndentationError'
|
<p>I have been looking at a tutorial on how to send sms texts through the rasp pi. Here is the code that I have and I'm not sure why I have an error.</p>
<pre><code>#!/usr/bin/python
#-----------------------------------
# Send SMS Text Message
#
# Author : Matt Hawkins
# Site : http://www.raspberrypi-spy.co.uk/
# Date : 30/08/2012
#
# Requires account with TxtLocal
# http://www.txtlocal.co.uk/?tlrx=114032
#
#-----------------------------------
# Import required libraries
import urllib # URL functions
import urllib2 # URL functions
# Define your message
message = 'Test message sent from my Raspberry Pi'
# Set your username and sender name.
# Sender name must alphanumeric and
# between 3 and 11 characters in length.
username = 'jonfdom1@aol.com'
sender = 'Jonny.D'
# Your unique hash is available from the docs page
# https://control.txtlocal.co.uk/docs/
hash = '8fe5dae7bafdbbfb00c7aebcfb24e005b5cb7be8'
# Set the phone number you wish to send
# message to.
# The first 2 digits are the country code.
# 44 is the country code for the UK
# Multiple numbers can be specified if required
# e.g. numbers = ('447xxx123456','447xxx654321')
numbers = ('447xxxxxx260')
# Set flag to 1 to simulate sending
# This saves your credits while you are
# testing your code.
# To send real message set this flag to 0
test_flag = 1
#-----------------------------------
# No need to edit anything below this line
#-----------------------------------
values = {'test' : test_flag,
          'uname' : username,
          'hash' : hash,
          'message' : message,
          'from' : sender,
          'selectednums' : numbers }
url = 'http://www.txtlocal.com/sendsmspost.php'
postdata = urllib.urlencode(values)
req = urllib2.Request(url, postdata)
print 'Attempt to send SMS ...'
try:
response = urllib2.urlopen(req)
response_url = response.geturl()
if response_url==url:
print 'SMS sent!'
except urllib2.URLError, e:
print 'Send failed!'
print e.reason
</code></pre>
<p>And here is the error message I have popping up on the terminal</p>
<pre><code>  File "send_sms.py", line 331
    response = urllib2.urlopen(req)
    ^
IndentationError: expected an indented block
</code></pre>
|
<p>Python requires proper indentation, like this:</p>
<pre><code>try:
    response = urllib2.urlopen(req)
    response_url = response.geturl()
    if response_url==url:
        print 'SMS sent!'
except urllib2.URLError, e:
    print 'Send failed!'
    print e.reason
</code></pre>
<p>Here's <a href="http://getpython3.com/diveintopython3/your-first-python-program.html#indentingcode" rel="nofollow">a section on Python indentation from Dive Into Python 3</a>.</p>
|
python|sms|response|local|raspberry-pi
| 2 |
1,908,824 | 41,677,830 |
Parsing free-form salary information
|
<p>I'm trying to parse salary information from a free-form source. I'd like to be able to store the parsed info in a standard format after parsing. There are a number of permutations that I expect to encounter based on experience.</p>
<p>Here's an example of some of the cases I expect:</p>
<pre><code>$10/hr,
$10.00/hr,
$10 per hour,
$10 per hr,
$10.00 per hour,
$10.00 per hr,
10$/hr,
10$/hour,
10.00$/hr,
10.00$/hour
</code></pre>
<p>I could go on and on but I think you get the idea.</p>
<p>Generalized, the formats I'm expecting can be explained like so:</p>
<p><strong>[curr][amount[.xx]][[k][,000]][curr][-][per][-][timeframe]</strong></p>
<ul>
<li>[<strong>curr</strong>] can be any currency symbol, can either appear before or after amount, but not both. Optional</li>
<li>[<strong>amount</strong>] can be int or float, the .xx is optional and can either be .x or .xx. Mandatory</li>
<li>[<strong>[k][,000]</strong>] indicates amount = amount * 1000. Optional</li>
<li>[<strong>-</strong>] separator can either be a space or a dash. Optional</li>
<li>[<strong>per</strong>] can either be "per" or "/". Mandatory</li>
<li>[<strong>timeframe</strong>] can be: year, yr, hour, hr. Mandatory</li>
</ul>
<p>I definitely suspect that I'll need to use some sort of regex here, but I have no experience at all implementing regex and frankly, it confuses me a bit. I'm not looking for someone to solve this problem for me, but if you can help push me off into the right direction it would be greatly appreciated.</p>
<p>Ultimately, I'd like to store the results like:</p>
<pre><code>Class Salary():
    float hourly_pay
    String pay_type  # hourly or salary
</code></pre>
|
<p>This snippet accomplishes something close to what I was looking for. It's certainly not elegant, but it works.</p>
<p>This is my first regex ever, any suggestions on how to improve this?</p>
<p><a href="https://regex101.com/r/aW3pR4/43" rel="nofollow noreferrer">https://regex101.com/r/aW3pR4/44</a></p>
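<p>For reference, here is roughly what that pattern expresses, written out as a verbose regex. This is only a sketch: the currency set, separators, and timeframe spellings are assumptions that may need widening.</p>
<pre><code>import re

SALARY_RE = re.compile(r"""
    (?P<curr1>[$£€])?                # optional leading currency symbol (assumed set)
    (?P<amount>\d+(?:\.\d{1,2})?)    # amount, with optional .x or .xx
    (?P<thousands>k|,000)?           # optional "k" or ",000" multiplier
    (?P<curr2>[$£€])?                # optional trailing currency symbol
    [\s-]*                           # optional space/dash separator
    (?:per|/)                        # mandatory "per" or "/"
    [\s-]*                           # optional separator
    (?P<timeframe>year|yr|hour|hr)   # mandatory timeframe
""", re.VERBOSE | re.IGNORECASE)

def parse_salary(text):
    m = SALARY_RE.search(text)
    if not m:
        return None
    amount = float(m.group('amount'))
    if m.group('thousands'):
        amount *= 1000
    pay_type = 'hourly' if m.group('timeframe').lower() in ('hour', 'hr') else 'salary'
    return amount, pay_type
</code></pre>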
|
python|regex|parsing
| 0 |
1,908,825 | 41,619,998 |
In Python, how does one override the behavior of a class instance typed on a line by itself
|
<p>Without print (which I believe invokes <code>__str__()</code>), what happens when a variable is on a line by itself?</p>
<p>This is a bit contrived, I know, but I ran into this in a Jupyter notebook when testing a class I'm creating and now I'm curious. I can't seem to find the right set of Google search terms to find the answer in the docs.</p>
<p>I defined a class thusly:</p>
<pre><code>class ExceptionList(BaseException):
    pass
# I've implemented, in very standard ways, the following methods
# __str__()
# __getitem__()
# __delitem__()
# __repr__()
# I doubt any other specifics of the class are pertinent
</code></pre>
<p>EDIT</p>
<p>Here is the <code>__repr__()</code> implementation:</p>
<pre><code>def __repr__(self):
    return "{}({})".format(self.__class__.__name__, repr(self.__exception_list))
</code></pre>
<p>P.S. I coded that method based on <a href="http://brennerm.github.io/posts/python-str-vs-repr.html" rel="nofollow noreferrer">http://brennerm.github.io/posts/python-str-vs-repr.html</a></p>
<p>EDIT</p>
<p>My implementation of <strong>repr</strong>() causes this behavior:</p>
<pre><code>e = ExceptionList(["Oh, no"])
e
</code></pre>
<blockquote>
<blockquote>
<p>"ExceptionList(['Oh, no'])"</p>
</blockquote>
</blockquote>
<p>So consider:</p>
<pre><code>e1 = Exception("Oh no!")
e2 = ExceptionList("Oh no!")
</code></pre>
<p>In separate notebook cells:</p>
<pre><code>e1
</code></pre>
<blockquote>
<blockquote>
<p>Exception('Oh no!')</p>
</blockquote>
</blockquote>
<pre><code>e2
</code></pre>
<blockquote>
<blockquote>
<p>__main__.ExceptionList()</p>
</blockquote>
</blockquote>
<p>Incidentally (maybe?) the output of:</p>
<pre><code>e2.__class__
</code></pre>
<p>is close:</p>
<blockquote>
<blockquote>
<p>__main__.ExceptionList</p>
</blockquote>
</blockquote>
<p>Does it just have something to do with the scope in which the class was defined? Is it some special behavior of builtins?</p>
<p>Is this behavior the result of invoking some <code>__method__</code> that I'm unaware of? I tried implementing all of the methods produced with <code>dir()</code>, though I'm willing to bet that's not exhaustive.</p>
<p>It probably doesn't matter for my implementation but now I <strong>need</strong> to know!</p>
<p>Boilerplate Hater Deterrent:</p>
<ul>
<li>I don't know anything.</li>
<li>I'm a terrible programmer.</li>
<li>"I thought a python was a snake..."</li>
<li>I'm barely qualified to use a toaster.</li>
<li>Please forgive this post's pathetic usage of SO disk space.</li>
</ul>
|
<p>If the sole content of a line is a variable or object, then that line is evaluated and the variable or object is returned and not stored to a variable.</p>
<pre><code>a = 5 + 2  # 5+2 is evaluated and stored in a
5 + 2      # 5+2 is evaluated
a          # a is evaluated

class SomeCustomClass:
    def __init__(self, *args):
        pass

scc = SomeCustomClass()
scc  # returns a reference to this instance, but ref is not stored
</code></pre>
<p>Many python interfaces like Jupyter, IPython, IDLE will display the evaluation of the line.</p>
<pre><code>>>> 5+2
7
>>> a=5+2
[Nothing]
>>> a
7
>>> scc
<__main__.SomeCustomClass instance at 0x7fb6d5943d40>
</code></pre>
<p>The <code><__main__.SomeCustomClass instance at 0x7fb6d5943d40></code> is called a representation of the class. If you look at <a href="https://docs.python.org/2/reference/datamodel.html" rel="nofollow noreferrer">Python's Data Model</a> you will see that this representation is specified by the object's <code>__repr__</code> method.</p>
<p>Modifying <code>SomeCustomClass</code>:</p>
<pre><code>class SomeCustomClass:
    def __init__(self, *args):
        pass
    def __repr__(self):
        return "SomeCustomClass repr"

>>> scc = SomeCustomClass()
>>> scc
SomeCustomClass repr
>>> s = str(scc) #implicit call to __repr__
>>> s
SomeCustomClass repr
</code></pre>
<p>Now adding <code>__str__</code> method:</p>
<pre><code>class SomeCustomClass:
    def __init__(self, *args):
        pass
    def __repr__(self):
        return "SomeCustomClass repr"
    def __str__(self):
        return "SomeCustomClass str"

>>> scc = SomeCustomClass()
>>> scc
SomeCustomClass repr
>>> s = str(scc) #explicit call to __str__
>>> s
SomeCustomClass str
</code></pre>
|
python|class|jupyter
| 0 |
1,908,826 | 56,994,288 |
Aggregating data in a window in apache beam
|
<p>I am receiving a stream of a complex and nested JSON object as my input to my pipeline.</p>
<p>My goal is to create small batches to feed to another <code>pubsub</code> topic for processing downstream. I am struggling with the <code>beam.GroupByKey()</code> function - from what I have read this is the correct method to try to aggregate with. </p>
<p>A simplified example, the input events:</p>
<pre><code>{ data:['a', 'b', 'c'], url: 'websiteA.com' }
{ data:['a', 'b', 'c'], url: 'websiteA.com' }
{ data:['a'], url: 'websiteB.com' }
</code></pre>
<p>I am trying to create the following:</p>
<pre><code>{
'websiteA.com': {a:2, b:2, c:2},
'websiteB.com': {a:1},
}
</code></pre>
<p>My issue lies in trying to group on anything more than the simplest tuple, which throws a <code>ValueError: too many values to unpack</code>.</p>
<p>I could run this in two steps, but from my reading using <code>beam.GroupByKey()</code> is expensive and therefore should be minimised.</p>
<p>EDIT based on answer from @Cubez.</p>
<p>This is my combine function which seems to half work :(</p>
<pre class="lang-py prettyprint-override"><code>class MyCustomCombiner(beam.CombineFn):
    def create_accumulator(self):
        logging.info('accum_created')  # Logs OK!
        return {}

    def add_input(self, counts, input):
        counts = {}
        for i in input:
            counts[i] = 1
        logging.info(counts)  # Logs OK!
        return counts

    def merge_accumulators(self, accumulators):
        logging.info('accumcalled')  # never logs anything
        c = collections.Counter()
        for d in accumulators:
            c.update(d)
        logging.info('accum: %s', accumulators)  # never logs anything
        return dict(c)

    def extract_output(self, counts):
        logging.info('Counts2: %s', counts)  # never logs anything
        return counts
</code></pre>
<p>It seems past <code>add_input</code> nothing is being called?</p>
<p>Adding pipeline code:</p>
<pre class="lang-py prettyprint-override"><code>with beam.Pipeline(argv=pipeline_args) as p:
    raw_loads_dict = (p
        | 'ReadPubsubLoads' >> ReadFromPubSub(topic=PUBSUB_TOPIC_NAME).with_output_types(bytes)
        | 'JSONParse' >> beam.Map(lambda x: json.loads(x))
    )
    fixed_window_events = (raw_loads_dict
        | 'KeyOnUrl' >> beam.Map(lambda x: (x['client_id'], x['events']))
        | '1MinWindow' >> beam.WindowInto(window.FixedWindows(60))
        | 'CustomCombine' >> beam.CombinePerKey(MyCustomCombiner())
    )
    fixed_window_events | 'LogResults2' >> beam.ParDo(LogResults())
</code></pre>
|
<p>This is a perfect example of needing to use <a href="https://beam.apache.org/documentation/programming-guide/#combine" rel="nofollow noreferrer">combiners</a>. These are transforms that are used to aggregate or combine collections across multiple workers. As the doc says, CombineFns work by reading in your element (beam.CombineFn.add_input), merging multiple elements (beam.CombineFn.merge_accumulators), then finally outputting the final combined value (beam.CombineFn.extract_output). See the Python docs for the parent class <a href="https://beam.apache.org/releases/pydoc/2.12.0/apache_beam.transforms.core.html?highlight=combinefn#apache_beam.transforms.core.CombineFn" rel="nofollow noreferrer">here</a>.</p>
<p>For example, to create a combiner that outputs an average of a collection of numbers looks like this:</p>
<pre><code>class AverageFn(beam.CombineFn):
    def create_accumulator(self):
        return (0.0, 0)

    def add_input(self, sum_count, input):
        (sum, count) = sum_count
        return sum + input, count + 1

    def merge_accumulators(self, accumulators):
        sums, counts = zip(*accumulators)
        return sum(sums), sum(counts)

    def extract_output(self, sum_count):
        (sum, count) = sum_count
        return sum / count if count else float('NaN')

pc = ...
average = pc | beam.CombineGlobally(AverageFn())
</code></pre>
<p>For your use case, I would suggest something like this:</p>
<pre><code>values = [
    {'data': ['a', 'b', 'c'], 'url': 'websiteA.com'},
    {'data': ['a', 'b', 'c'], 'url': 'websiteA.com'},
    {'data': ['a'], 'url': 'websiteB.com'}
]

# This counts the number of elements that are the same.
def combine(counts):
    # A counter is a dictionary from keys to the number of times it has
    # seen that particular key.
    c = collections.Counter()
    for d in counts:
        c.update(d)
    return dict(c)

with beam.Pipeline(options=pipeline_options) as p:
    pc = (p
          # You should replace this step with reading data from your
          # source and transforming it to the proper format for below.
          | 'create' >> beam.Create(values)
          # This step transforms the dictionary to a tuple. For this
          # example it returns:
          # [ ('url': 'websiteA.com', 'data':['a', 'b', 'c']),
          #   ('url': 'websiteA.com', 'data':['a', 'b', 'c']),
          #   ('url': 'websiteB.com', 'data':['a'])]
          | 'url as key' >> beam.Map(lambda x: (x['url'], x['data']))
          # This is the magic that combines all elements with the same
          # URL and outputs a count based on the keys in 'data'.
          # This returns the elements:
          # [ ('url': 'websiteA.com', {'a': 2, 'b': 2, 'c': 2}),
          #   ('url': 'websiteB.com', {'a': 1})]
          | 'combine' >> beam.CombinePerKey(combine))

    # Do something with pc
    new_pc = pc | ...
</code></pre>
|
python|google-cloud-dataflow|apache-beam
| 7 |
1,908,827 | 25,626,109 |
Python Argparse conditionally required arguments
|
<p>I have done as much research as possible but I haven't found the best way to make certain cmdline arguments necessary only under certain conditions, in this case only if other arguments have been given. Here's what I want to do at a very basic level:</p>
<pre><code>p = argparse.ArgumentParser(description='...')
p.add_argument('--argument', required=False)
p.add_argument('-a', required=False) # only required if --argument is given
p.add_argument('-b', required=False) # only required if --argument is given
</code></pre>
<p>From what I have seen, other people seem to just add their own check at the end:</p>
<pre><code>if args.argument and (args.a is None or args.b is None):
    # raise argparse error here
</code></pre>
<p>Is there a way to do this natively within the argparse package?</p>
|
<p>I've been searching for a simple answer to this kind of question for some time. All you need to do is check if <code>'--argument'</code> is in <code>sys.argv</code>, so basically for your code sample you could just do:</p>
<pre><code>import argparse
import sys

if __name__ == '__main__':
    p = argparse.ArgumentParser(description='...')
    p.add_argument('--argument', required=False)
    p.add_argument('-a', required='--argument' in sys.argv)  # only required if --argument is given
    p.add_argument('-b', required='--argument' in sys.argv)  # only required if --argument is given
    args = p.parse_args()
</code></pre>
<p>This way <code>required</code> receives either <code>True</code> or <code>False</code> depending on whether the user has used <code>--argument</code>. Already tested it; it seems to work and guarantees that <code>-a</code> and <code>-b</code> behave independently of each other.</p>
|
python|argparse
| 86 |
1,908,828 | 44,569,758 |
CloudFormation: Pass List to Lambda Function
|
<p>I have a CloudFormation script that creates a Lambda Function for RDS backups. How can I pass a list of servers from the CloudFormation template to the lambda function? Right now they are hard-coded, and I don't think that is ideal.</p>
<p><strong>CloudFormation Script</strong>:</p>
<pre><code>{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "ruleName": {
      "Description": "Name for CloudWatch Rule.",
      "Type": "String"
    },
    "cronSchedule": {
      "Description": "Cron Schedule Expression",
      "Type": "String",
      "Default": "cron(0 05 * * ? *)"
    },
    "bucketName": {
      "Description": "S3 Bucket storing the lambda script",
      "Type": "String"
    },
    "lambdaTimeout": {
      "Description": "Timeout for Lambda",
      "Type": "String",
      "Default": "3"
    },
    "instanceList": {
      "Description": "",
      "Type": "String"
    }
  },
  "Resources": {
    "cloudWatchRule": {
      "Type": "AWS::Events::Rule",
      "DependsOn": "lambdaFunction",
      "Properties": {
        "Description": "Cron Schedule",
        "Name": { "Ref": "ruleName" },
        "ScheduleExpression": { "Ref": "cronSchedule" },
        "State": "ENABLED",
        "Targets": [
          {
            "Arn": { "Fn::GetAtt": ["lambdaFunction", "Arn"] },
            "Id": { "Ref": "lambdaFunction" }
          }
        ]
      }
    },
    "lambdaFunction": {
      "Type": "AWS::Lambda::Function",
      "DependsOn": [
        "lambdaRdsBackupRole",
        "rdsBackupExecutionPolicy"
      ],
      "Properties": {
        "Code": {
          "S3Bucket": { "Ref": "bucketName" },
          "S3Key": "lambdaFunctions/rdsBackup.zip"
        },
        "Role": { "Fn::GetAtt": ["lambdaRdsBackupRole", "Arn"] },
        "Handler": "rdsBackup.lambda_handler",
        "Environment": {
          "Variables": {
            "dbInstances": { "Ref": "instanceList" }
          }
        },
        "Runtime": "python3.6",
        "MemorySize": 128,
        "Timeout": { "Ref": "lambdaTimeout" }
      }
    },
    "lambdaRdsBackupRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Principal": {
                "Service": ["lambda.amazonaws.com"]
              },
              "Action": ["sts:AssumeRole"]
            }
          ]
        },
        "Path": "/"
      }
    },
    "rdsBackupExecutionPolicy": {
      "DependsOn": ["lambdaRdsBackupRole"],
      "Type": "AWS::IAM::Policy",
      "Properties": {
        "PolicyName": "lambdaRdsBackupRolePolicy",
        "Roles": [
          { "Ref": "lambdaRdsBackupRole" }
        ],
        "PolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
              ],
              "Resource": "arn:aws:logs:*:*:*"
            },
            {
              "Effect": "Allow",
              "Action": [
                "rds:AddTagsToResource",
                "rds:DeleteDBSnapshot"
              ],
              "Resource": "arn:aws:rds:*:*"
            },
            {
              "Effect": "Allow",
              "Action": [
                "rds:ListTagsForResource",
                "rds:CreateDBSnapshot"
              ],
              "Resource": "arn:aws:rds:*:*"
            },
            {
              "Effect": "Allow",
              "Action": ["rds:DescribeDBSnapshots"],
              "Resource": "*"
            }
          ]
        }
      }
    }
  }
}
</code></pre>
<p>I added this section in, but I'm not quite sure if it's right, and if it is I'm still not quite sure where to go from here:</p>
<pre><code>"Environment": {
  "Variables": {
    "dbInstances": { "Ref": "instanceList" }
  }
},
</code></pre>
<p><strong>Lambda Function</strong>:</p>
<pre><code>import boto3
import datetime

def lambda_handler(event, context):
    print("Connecting to RDS")
    client = boto3.client('rds')

    # Instance to backup
    dbInstances = ['testdb', 'testdb2']

    for dbInstance in dbInstances:
        print("RDS snapshot backups started at %s...\n" % datetime.datetime.now())

        for snapshot in client.describe_db_snapshots(DBInstanceIdentifier=dbInstance, MaxRecords=50)['DBSnapshots']:
            try:
                createTs = snapshot['SnapshotCreateTime'].replace(tzinfo=None)
                if createTs < datetime.datetime.now() - datetime.timedelta(days=30):
                    print("Deleting snapshot id:", snapshot['DBSnapshotIdentifier'])
                    client.delete_db_snapshot(
                        DBSnapshotIdentifier=snapshot['DBSnapshotIdentifier']
                    )
            except Exception as e:
                print("Error: " + str(e))
                pass

        client.create_db_snapshot(
            DBInstanceIdentifier=dbInstance,
            DBSnapshotIdentifier=dbInstance + '{}'.format(datetime.datetime.now().strftime("%y-%m-%d-%H")),
            Tags=[
                {
                    'Key': 'Name',
                    'Value': 'dbInstance'
                },
            ]
        )
</code></pre>
|
<p>There are probably several ways to do this. A few that come to mind are listed below.</p>
<p>1) If you are set on adding the variable into the cloud formation template I would add the python script inline to the cloudformation template and you can pass the array in as a variable to the template. <a href="http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html" rel="nofollow noreferrer">http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html</a></p>
<p>2) You can create an environment variable for the lambda function and every time you execute it (either console or command line) you can update the environment variable with the new data base instances. <a href="http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html#cfn-lambda-function-environment" rel="nofollow noreferrer">http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html#cfn-lambda-function-environment</a></p>
<p>3) You can use something like API Gateway and tie it to a lambda function. You could pass the array in a POST request to the lambda function. <a href="http://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started.html" rel="nofollow noreferrer">http://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started.html</a></p>
<p>Without knowing your end goal it is hard to recommend one of these over the other.</p>
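<p>As a small sketch of option 2, with the <code>Environment</code> block already in the template the function can read the variable back at runtime. This assumes <code>instanceList</code> is supplied as a comma-separated string:</p>
<pre><code>import os

# "dbInstances" is the variable name from the template's Environment block;
# the value is assumed to be a comma-separated string such as "testdb,testdb2".
dbInstances = [name.strip() for name in os.environ.get('dbInstances', '').split(',') if name.strip()]
</code></pre>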
|
python|amazon-web-services|aws-lambda|amazon-cloudformation
| 3 |
1,908,829 | 44,450,197 |
Logic error when using Newton-Raphson method in Python
|
<p>I'm trying to calculate the approximated square root of a number in python using the Newton-Raphson method(The formula)</p>
<p><img src="https://i.stack.imgur.com/iBbmz.png" alt=""></p>
<p>However, the code does not work: it gets stuck in the while loop (at least I think so). My plan is to calculate approximations until successive approximations differ by less than 1e-10.
This is the code I have right now:</p>
<pre><code>k = input("Enter a number")

try:
    k = int(k)
    xi = 1
    xi2 = xi - (xi**2 - k)/(xi*2)
    diff = xi2 - xi
    while (diff > 0.0000000001):
        xi = xi2
        xi2 = xi - (xi**2 - k)/(xi*2)
        print(xi2)
    print(xi2)
except:
    print("bye")
</code></pre>
<p>I'm new to python so any help will be much appreciated! thanks a lot! :)</p>
<p>Update:
I tried using the code below as an answer suggested; however, when given the input 2 it only runs one loop iteration before giving the answer (1.4166666666666667). The correct answer should be 1.4142135623730951.</p>
<pre><code>while (diff > 0.0000000001):
    xi = xi2
    xi2 = xi - (xi**2 - k)/(xi*2)
    diff = xi2 - xi
</code></pre>
|
<p>The problem is that you are not changing the <code>diff</code> within the loop. The diff is always the initial value you assigned, so it never becomes less than 1e-10</p>
<pre><code>while (diff > 0.0000000001):
    xi = xi2
    xi2 = xi - (xi**2 - k)/(xi*2)
    diff = xi2 - xi
</code></pre>
<p><strong>EDIT:</strong></p>
<p>Your problem is not a programming one.
After running some iterations with a <code>print(diff)</code> added to the <code>while</code> block I noticed there were negative values involved.</p>
<p>Consider a value of <code>- 1e-2</code>. It does satisfy the loop break condition because it's less than <code>1e-10</code>, but it's actually a big distance away from desired result.</p>
<p>If you are dealing with differences, use absolutes:</p>
<pre><code>k = input("Enter a number ")
k = int(k)
xi = 1
xi2 = xi - (xi**2 - k)/(xi*2)
diff = abs(xi2 - xi)

# 0.0000000001
while (diff > 0.0000000001):
    xi = xi2
    xi2 = xi - (xi**2 - k)/(xi*2)
    diff = abs(xi2 - xi)
    print(diff)

print(xi2)
</code></pre>
<p>Output:</p>
<pre><code>$ python newton-raphson.py
Enter a number 2
0.08333333333333326
0.002450980392156854
2.123899820016817e-06
1.5947243525715749e-12
1.4142135623730951
</code></pre>
|
python
| 0 |
1,908,830 | 64,247,484 |
Is there a simple way to comment out or disable a python try statement without re-indenting
|
<p><strong>Problem:</strong><br />
Sometimes it is nice to be able to remove or apply a try statement temporarily. Is there a convenient way to disable the try statement without re-indenting?</p>
<p>For example, if there was a python block statement equivalent called "goForIt:" one could edit the word "try:" to "goForIt:" and it would just execute the block as though it were not wrapped in a "try" and ignore the "except" line too.</p>
<p>The problem I'm trying to solve is that while I want the try statement in production I want to be able to remove it temporarily while debugging to see the error traceback rather than have it trap and process the exception.</p>
<p>Currently I work around this by commenting out the "try" then re-indent the code in the block. Then comment out the entire "except" block. This seems clumsy.</p>
|
<p>Instead of removing the try, you could make the <code>except</code> re-raise the exception:</p>
<pre><code>try:
    raise ValueError('whoops')
except ValueError as e:
    raise  # <-- just put this here

print('caught')
</code></pre>
<p>This will raise the error, just as if it were not caught:</p>
<pre><code>ValueError                                Traceback (most recent call last)
<ipython-input-146-a6be6779c161> in <module>
      1 try:
----> 2     raise ValueError('whoops')
      3 except ValueError as e:
      4     raise
      5 print('caught')

ValueError: whoops
</code></pre>
|
python|exception
| 3 |
1,908,831 | 69,913,966 |
ValueError: invalid literal for int() with base 10: 'go'
|
<p>I have a strange problem and can't figure it out. I would be thankful for any help. <a href="https://i.stack.imgur.com/C5kly.png" rel="nofollow noreferrer">enter image description here</a></p>
|
<p>You have to enter an integer if you want to convert the input to <code>int</code>. You cannot convert <code>"hi"</code> to an integer, for example: what would the value of <code>int("hi")</code> be? It will raise an error. Try entering integers as input. If you're confused, take the input in integer form only by writing <code>varodi=int(input(":"))</code>, enter 1 to break the loop, and change the statement to <code>if varodi==1:</code> followed by <code>break</code>.</p>
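<p>A minimal sketch of that suggestion (<code>varodi</code> is the variable name assumed from your screenshot):</p>
<pre><code>while True:
    varodi = int(input(":"))  # only enter whole numbers here
    if varodi == 1:           # type 1 to stop the loop
        break
</code></pre>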
|
python|windows
| 0 |
1,908,832 | 72,839,263 |
Access python interpreter in VSCode version control when using pre-commit
|
<p>I'm using pre-commit for most of my Python projects, and in many of them, I need to use pylint as a local repo. When I want to commit, I always have to activate python venv and then commit; otherwise, I'll get the following error:</p>
<pre class="lang-bash prettyprint-override"><code>black....................................................................Passed
pylint...................................................................Failed
- hook id: pylint
- exit code: 1
Executable `pylint` not found
</code></pre>
<p>When I use vscode version control to commit, I get the same error; I searched about the problem and didn't find any solution to avoid the error in VSCode.</p>
<p>This is my typical <code>.pre-commit-config.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>repos:
  - repo: https://github.com/ambv/black
    rev: 21.9b0
    hooks:
      - id: black
        language_version: python3.8
        exclude: admin_web/urls\.py
  - repo: local
    hooks:
      - id: pylint
        name: pylint
        entry: pylint
        language: python
        types: [python]
        args:
          - --rcfile=.pylintrc
</code></pre>
|
<p>you have ~essentially two options here -- neither are great (<code>language: system</code> is kinda the unsupported escape hatch so it's on you to make those things available on <code>PATH</code>)</p>
<p>you could use a specific path to the virtualenv <code>entry: venv/bin/pylint</code> -- though that will reduce the portability.</p>
<p>or you could start vscode with your virtualenv activated (usually <code>code .</code>) -- this doesn't always work if vscode is already running</p>
<hr />
<p>disclaimer: I created pre-commit</p>
|
python|visual-studio-code|pre-commit-hook|pre-commit|pre-commit.com
| 1 |
1,908,833 | 49,825,075 |
404 when closing pull request on github api
|
<p>I have the following code for interacting with pull requests on the github api.</p>
<pre><code>def merge(pull):
    url = "https://api.github.com/repos/{}/{}/pulls/{}/merge".format(os.environ.get("GITHUB_USERNAME"), os.environ.get("GITHUB_REPO"), pull['number'])
    response = requests.put(url, auth=get_auth(), data={})
    if response.status_code == 200:
        # Merge was successful
        return True
    else:
        # Something went wrong. Oh well.
        return response.status_code

def close(pull):
    url = "https://api.github.com/repos/{}/{}/pulls/{}".format(os.environ.get("GITHUB_USERNAME"), os.environ.get("GITHUB_REPO"), pull['number'])
    payload = {"state" : "closed"}
    response = requests.put(url, auth=get_auth(), data=payload)
    if response.status_code == 200:
        # Close was successful
        return True
    else:
        # Something went wrong. Oh well.
        return response.status_code
</code></pre>
<p>Now merge works just fine, when I run it with a pull request, the pull request is merged and it feels good.</p>
<p>But close gives me a 404. This is strange since merge can clearly find the pull request, and also shows that I clearly have permissions set up properly so I can close the request.</p>
<p>I have also confirmed that I can close the request manually by logging in on github and pressing the 'close pull request' button.</p>
<p>Why does github give me a 404 for the close function but not the merge function? What is different between these two functions?</p>
|
<p>The answer is that the 'update a pull request' api call should be a POST request, not a put request.</p>
<p>Changing </p>
<p><code>response = requests.put(url, auth=get_auth(), data=payload)</code></p>
<p>to</p>
<pre><code>response = requests.post(url, auth=get_auth(), data=payload)
</code></pre>
<p>Fixed the issue.</p>
|
python|github|github-api
| 0 |
1,908,834 | 50,077,995 |
Word Count in between XML tags
|
<p>The file '<a href="http://www.uva.nl/binaries/content/assets/programmas/information-studies/txt-for-assignment-data-science.txt?3015083536432" rel="nofollow noreferrer">data_science_assignment.txt</a>' contains three articles from the LA Times in a semi-structured format. The tags in the collection dictate the beginning and the end of an article (<code><doc></code> and <code></doc></code>), the article id, the headline of the article and the main text (<code><text></code> and <code></text></code>).</p>
<p>I am trying to code a class that can preprocess and store the LA Times articles.</p>
<p>The methods of the class should take as input the LA Times article collection, extract each article in the collection, and construct a hash table whose key is a word (in the collection) and whose value is a linked list of all the documents that contain this word, together with the count of the word in each document.</p>
<p>E.g the word “the” appears in all three articles, 20 times in the first, 34 times in the second, and 12 times in the third</p>
<p><strong>Desired Output:</strong> the -> [1, 20] -> [2, 34] -> [3, 12]</p>
<p><strong>Current Output:</strong> the -> [1,16] -> [2,16] -> [3,16]</p>
<p>Problem: I am unable to properly count the words between <code><text> </text></code> tags while ignoring <code><p></p></code> tags. How do I improve my current code to get an accurate word count?</p>
<pre><code>__author__ = 'Sam'

import lxml.html as LH
from lxml import html
from lxml import etree
import xml.etree.ElementTree as ET
from collections import Counter

doc = ET.parse("data_science_assignment.txt")
root = doc.getroot()

# Initialise a list to append results to
# root = html.fromstring(doc)
art1 = ""
art2 = ""
art3 = ""
i = 0

# Loop through the pages to search for text
for page in root:
    id = page.findtext('docno', default='None')
    text = page.findtext('text/*', default='None')
    # text = page.attrib.get('text',None)
    if i == 0:
        art1 = text
    elif i == 1:
        art2 = text
    else:
        art3 = text
    i += 1

# article1 = art1.split()
# article2 = art2.split()
# article3 = art3.split()
# print article1
# print (len(article1) + len(article2) + len(article3))

dict1 = {}
dict2 = {}
dict3 = {}
words = []
words.extend(art1.split())
words.extend(art2.split())
words.extend(art3.split())
# print len(words)

for word in words:
    if word.lower() in art1:
        # print word.lower()
        if word.lower() in dict1:
            dict1[word.lower()] += 1
        else:
            dict1[word.lower()] = 1

for word in words:
    if word.lower() in art1:
        # print word.lower()
        if word.lower() in dict2:
            dict2[word.lower()] += 1
        else:
            dict2[word.lower()] = 1

for word in words:
    if word.lower() in art1:
        # print word.lower()
        if word.lower() in dict3:
            dict3[word.lower()] += 1
        else:
            dict3[word.lower()] = 1

# for k,v in dict1.iteritems():
#     print k,v

# Get words present in all the articles
dict4 = {}
check = []
for word in words:
    if word.lower() in dict1.keys() and word.lower() in dict2.keys() and word.lower() in dict3.keys():
        if word.lower not in dict4:
            dict4[word.lower()] = "-> [1," + str(dict1[word.lower()]) + "] -> " + "[2," + str(dict2[word.lower()]) + "] -> " + "[3," + str(dict3[word.lower()]) + "]"

for k, v in dict4.items():
    print(k, v)

dict5 = {}
# get words present in only first two articles
for word in words:
    if word.lower() in dict1.keys() and word.lower() in dict2.keys() and word.lower() not in dict3.keys():
        if word not in dict5:
            dict5[word.lower()] = "-> [1," + str(dict1[word.lower()]) + "] -> " + "[2," + str(dict2[word.lower()]) + "] "  # + "[3," + str(dict3[word.lower()]) + "]"

for k, v in dict5.items():
    print(k, v)
|
<p>With some clean up, this is my take on the issue:</p>
<p>Changed xpath parser and expression<br>
Created 1 variable per article<br>
Counts are not all correct so word splitting debug is needed </p>
<pre class="lang-python prettyprint-override"><code>import lxml.html as LH
from lxml import html
from lxml import etree
import xml.etree.ElementTree as ET
from collections import Counter

doc = etree.parse("test.xml")

# Initialise a list to append results to
art1 = ""
art2 = ""
art3 = ""
i = 0

art1 = doc.xpath('string((//text)[1])')
art2 = doc.xpath('string((//text)[2])')
art3 = doc.xpath('string((//text)[3])')

dict1 = {}
dict2 = {}
dict3 = {}
words = []
words1 = []
words2 = []
words3 = []
words1.extend(art1.split())
words2.extend(art2.split())
words3.extend(art3.split())
words.extend(words1)
words.extend(words2)
words.extend(words3)

for word in words1:
    # if word.lower() in art1:
    # print word.lower()
    # print("'%s'" % word)
    if word.lower() in dict1:
        dict1[word.lower()] += 1
    else:
        dict1[word.lower()] = 1

for word2 in words2:
    # if word.lower() in art2:
    # print word.lower()
    if word2.lower() in dict2:
        dict2[word2.lower()] += 1
    else:
        dict2[word2.lower()] = 1

for word3 in words3:
    # if word.lower() in art3:
    # print word.lower()
    if word3.lower() in dict3:
        dict3[word3.lower()] += 1
    else:
        dict3[word3.lower()] = 1

# Get words present in all the articles
print("Words present in all articles\n")
dict4 = {}
check = []
for word in words:
    if word.lower() in dict1.keys() and word.lower() in dict2.keys() and word.lower() in dict3.keys():
        if word.lower() not in dict4:
            dict4[word.lower()] = "\t-> [1,%d] -> [2,%d] -> [3,%d]" % (dict1[word.lower()], dict2[word.lower()], dict3[word.lower()])

for k, v in sorted(dict4.items()):
    print(k, v)

print("\n\nWords present in articles 1,2\n")
dict5 = {}
# get words present in only first two articles
for word in words:
    if word.lower() in dict1.keys() and word.lower() in dict2.keys() and word.lower() not in dict3.keys():
        if word not in dict5:
            dict5[word.lower()] = "\t-> [1,%d] -> [2,%d]" % (dict1[word.lower()], dict2[word.lower()])

for k, v in sorted(dict5.items()):
    print(k, v)
</code></pre>
<p>Result:</p>
<pre><code><!-- language: lang-none -->
Words present in all articles
a -> [1,27] -> [2,4] -> [3,23]
all -> [1,1] -> [2,2] -> [3,3]
an -> [1,6] -> [2,1] -> [3,3]
and -> [1,34] -> [2,3] -> [3,51]
as -> [1,6] -> [2,1] -> [3,5]
at -> [1,4] -> [2,3] -> [3,5]
be -> [1,4] -> [2,1] -> [3,7]
by -> [1,6] -> [2,2] -> [3,8]
for -> [1,7] -> [2,5] -> [3,9]
in -> [1,26] -> [2,3] -> [3,31]
is -> [1,16] -> [2,1] -> [3,12]
of -> [1,56] -> [2,6] -> [3,54]
one -> [1,4] -> [2,1] -> [3,1]
so -> [1,4] -> [2,1] -> [3,1]
that -> [1,11] -> [2,1] -> [3,16]
the -> [1,94] -> [2,12] -> [3,65]
their -> [1,1] -> [2,2] -> [3,6]
then -> [1,1] -> [2,1] -> [3,1]
these -> [1,1] -> [2,2] -> [3,4]
to -> [1,22] -> [2,3] -> [3,35]
with -> [1,7] -> [2,1] -> [3,4]
Words present in articles 1,2
accident. -> [1,1] -> [2,1]
entire -> [1,1] -> [2,1]
from -> [1,1] -> [2,1]
story -> [1,3] -> [2,1]
</code></pre>
|
python|xml
| 1 |
1,908,835 | 66,674,322 |
Explain [:0] in Python
|
<p>Ok, so before I get flamed for not RTFM, I understand that [:0] in my case of:</p>
<pre><code>s ="itsastring"
newS= []
newS[:0] = s
</code></pre>
<p>ends up converting s to a list through slicing. This is my end goal, but coming from a Java background, I don't fully understand the "0" part in "[:0] and syntactically why it's placed there (I know it roughly means increase by 0). Finally, how does Python know that I want to have each char of s be an element based on this syntax? I want to understand it so I can remember it more clearly.</p>
|
<p>If <em>S</em> and <em>T</em> are sequences, <code>S[a:b] = T</code> will replace the subsequence from index <em>a</em> to <em>b-1</em> of <em>S</em> by the elements of <em>T</em>.<br />
If <code>a == b</code>, it will act as a simple insertion.<br />
And <code>S[:0]</code> is the same thing as <code>S[0:0]</code> : so it's a simple insertion at the front.</p>
<pre><code>s = [11,22,33,44,55,66,77]
s[3:3] = [1,2,3] # insertion at position 3
print( s )
s = [11,22,33,44,55,66,77]
s[3:4] = [1,2,3] # deletion of element at position 3, and then insertion
print( s )
s = [11,22,33,44,55,66,77]
s[3:6] = [1,2,3] # deletion of elements from position 3 to 5, and then insertion
print( s )
s = [11,22,33,44,55,66,77]
s[:] = [1,2,3] # deletion of all elements, and then insertion : whole replacement
print( s )
</code></pre>
<p>output:</p>
<pre><code>[11, 22, 33, 1, 2, 3, 44, 55, 66, 77]
[11, 22, 33, 1, 2, 3, 55, 66, 77]
[11, 22, 33, 1, 2, 3, 77]
[1, 2, 3]
</code></pre>
|
python
| 2 |
1,908,836 | 64,849,189 |
Python - Parse Ifconfig File
|
<p>I am attempting to parse an ifconfig file that will have the following format:</p>
<pre><code>Bond10G: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 9000
        inet 10.117.62.135 netmask 255.255.254.0 broadcast 10.117.63.255
        ether 00:50:56:9e:89:10 txqueuelen 1000 (Ethernet)
        RX packets 14315389 bytes 39499265855 (36.7 GiB)
        RX errors 0 dropped 35686 overruns 0 frame 0
        TX packets 13009616 bytes 38702751346 (36.0 GiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Bond1G: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
        inet 10.117.60.135 netmask 255.255.254.0 broadcast 10.117.61.255
        inet6 fe80::250:56ff:fe9e:ed0d prefixlen 64 scopeid 0x20<link>
        ether 00:50:56:9e:ed:0d txqueuelen 1000 (Ethernet)
        RX packets 1573455 bytes 172628399 (164.6 MiB)
        RX errors 0 dropped 10946 overruns 0 frame 0
        TX packets 185449 bytes 50369231 (48.0 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 9000
        ether 00:50:56:9e:89:10 txqueuelen 1000 (Ethernet)
        RX packets 13493291 bytes 39433797198 (36.7 GiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 13006856 bytes 38701854528 (36.0 GiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 9000
        ether 00:50:56:9e:89:10 txqueuelen 1000 (Ethernet)
        RX packets 822097 bytes 65468597 (62.4 MiB)
        RX errors 0 dropped 35673 overruns 0 frame 0
        TX packets 2760 bytes 896818 (875.7 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
        ether 00:50:56:9e:ed:0d txqueuelen 1000 (Ethernet)
        RX packets 961003 bytes 127916200 (121.9 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 182704 bytes 49477386 (47.1 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth3: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
        ether 00:50:56:9e:ed:0d txqueuelen 1000 (Ethernet)
        RX packets 612452 bytes 44712199 (42.6 MiB)
        RX errors 0 dropped 10930 overruns 0 frame 0
        TX packets 2745 bytes 891845 (870.9 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        inet6 ::1 prefixlen 128 scopeid 0x10<host>
        loop txqueuelen 1000 (Local Loopback)
        RX packets 3164912 bytes 12725232051 (11.8 GiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 3164912 bytes 12725232051 (11.8 GiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
<p>What I would like to get is a dictionary who's value is the interface, and the keys and values looking in this manner:</p>
<pre><code>{'Bond10G' : {'mtu' : '9000', 'inet' : '10.117.62.135', 'netmask' : '255.255.254.0' # all remaining values that are space delimited},
'Bond1G' : {'mtu' : '9000', 'inet' : '10.117.60.135' # all remaining values that are space delimited} }
</code></pre>
<p>I have been able to split by new line to segregate each interface, however I am unsure as to how to continue. Sample code:</p>
<pre><code>with open('ifconfig_file') as data:
    for line in data:
        temp_array = line.split("\n\n")
</code></pre>
<p>My logic would be (correct me if im wrong):</p>
<ol>
<li><p>Split by colon to grab interface name to find the key (issue is the ether has colons in it).</p>
</li>
<li><p>While not empty line, take those values delimited by spaces and array[0] would be the key and array[1] would be the value.</p>
</li>
</ol>
|
<p>You can leverage <code>split()</code> as well as regex. Here's an example where I parse <code>mtu</code> and <code>inet</code>:</p>
<pre class="lang-py prettyprint-override"><code>import re
def parse_info(interface):
info = {}
mtuMatches = re.findall("mtu [0-9]*\s", interface) # find all matches for mtu
if (len(mtuMatches) > 0):
info['mtu'] = mtuMatches[0].replace("mtu", "").strip() # use the first match
inetMatches = re.findall("inet [0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\s", interface) # find all matches for inet
if (len(inetMatches) > 0):
info['inet'] = inetMatches[0].replace("inet", "").strip() # use the first match
# add more here
return info
def parse_name(interface):
parts = interface.split(":")
return parts[0] # grab the name
def parse_interface(interface):
name = parse_name(interface)
info = parse_info(interface)
return name, info
def parse_file(data):
interfaces = data.split("\n\n")
parsed = {}
for interface in interfaces:
name, info = parse_interface(interface)
parsed[name] = info
return parsed
with open('ifconfig.txt') as file:
print(parse_file(file.read()))
</code></pre>
<p>This is probably not the most performant way of doing it (if you want performance then maybe split by space and iterate over the result) but this is the cleanest in my opinion</p>
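<p>For what it's worth, here is a rough sketch of that split-and-iterate idea (untested for speed, and it assumes each key of interest is immediately followed by its value):</p>
<pre><code>def parse_info_fast(interface):
    # walk the whitespace-separated tokens once and take the token
    # that follows each key we care about
    info = {}
    tokens = interface.split()
    for key, value in zip(tokens, tokens[1:]):
        if key in ('mtu', 'inet') and key not in info:
            info[key] = value
    return info
</code></pre>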
|
python|dictionary
| 0 |
1,908,837 | 64,802,362 |
Pandas / Python - Merge dataframes where the key is located in 2 sub-strings
|
<p>I have been asking this question quite a few times and it seems that no one can answer it...</p>
<p>I am looking for a loop/fuction or a simple code that can look through 2 columns in different dataframes and output a third column. This example is quite different from a simple merge or a merge where we have one string and one substring... in this example we have 2 substrings to compare and output a third column if one of the key stored in the substring line is present in in the other substring line of the diffrent dataframe.</p>
<p>This is the example:</p>
<pre><code>data = [['Alex','11111111 20'],['Bob','2222222 0000'],['Clarke','33333 999999']]
df = pd.DataFrame(data,columns=['Name','Code'])
df
data = [['Reed','0000 88'],['Ros',np.nan],['Jo','999999 66']]
df1 = pd.DataFrame(data,columns=['SecondName','Code2'])
</code></pre>
<p><a href="https://i.stack.imgur.com/UpYFV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UpYFV.png" alt="enter image description here" /></a></p>
<p>What I need is to find where part of both codes is the same, like <code>999999</code> or <code>0000</code>, and output the <code>SecondName</code>.</p>
<p>The expected output:</p>
<p><a href="https://i.stack.imgur.com/Rfgre.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Rfgre.png" alt="enter image description here" /></a></p>
<p>I have done my research and I found a way to locate a substring within a string, but not within another substring like in my case.</p>
|
<p>You need to split the codes and concat all possible combinations of merged-results.</p>
<p>Here is the working code:</p>
<pre><code>import pandas as pd
import numpy as np

data = [['Alex', '11111111 20'], ['Bob', '2222222 0000'], ['Clarke', '33333 999999']]
df = pd.DataFrame(data, columns=['Name', 'Code'])
data = [['Reed', '0000 88'], ['Ros', np.nan], ['Jo', '999999 66']]
df1 = pd.DataFrame(data, columns=['SecondName', 'Code2'])

df[['c1', 'c2']] = df.Code.str.split(" ", expand=True)
df1[['c1', 'c2']] = df1.Code2.str.split(" ", expand=True)

rdf = pd.DataFrame()
for col1 in ['c1', 'c2']:
    for col2 in ['c1', 'c2']:
        rdf = pd.concat([rdf, df.merge(df1, left_on=[col1], right_on=[col2], how='inner')], axis=0)

rdf = df.merge(rdf[['Name', 'SecondName']], on='Name', how='outer')
print(rdf[['Name', 'SecondName']])
</code></pre>
<p>Output:</p>
<pre><code> Name SecondName
0 Alex NaN
1 Bob Reed
2 Clarke Jo
</code></pre>
|
python|pandas|for-loop|merge|substring
| 2 |
1,908,838 | 63,837,586 |
How to use matplotlib to plot a function with the argument on an axis
|
<p>I want to plot the function</p>
<p><a href="https://i.stack.imgur.com/ClgHB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ClgHB.png" alt="enter image description here" /></a></p>
<p>up to some finite k. I'll take the t values from the horizontal axis.</p>
<p>What I have so far:</p>
<pre><code>def f_func(n, t):
    summation = sum([np.sin(k*t)/k for k in range(n)])
    return summation
</code></pre>
<p>Now that I have the function, I want to tell matplotlib to use its horizontal axis as the time parameter, while I choose a specific k parameter. How do I go about doing this?</p>
|
<p>You can call <code>f_func</code> in a loop and place the values in a list. Note that the summation needs to start at <code>k=1</code> to prevent division by zero.</p>
<p>The following example code creates the curve for successive values of <code>n</code>:</p>
<pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt
import numpy as np

def f_func(n, t):
    summation = sum([np.sin(k * t) / k for k in range(1, n + 1)])
    return summation

ts = np.linspace(-5, 12.5, 200)
for n in range(1, 11):
    fs = [f_func(n, t) for t in ts]
    plt.plot(ts, fs, label=f'$n={n}$')

plt.margins(x=0, tight=True)
plt.xlabel('$t$')
plt.ylabel('$f(n, t)$')
plt.legend(ncol=2)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/7byio.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7byio.png" alt="example plot" /></a></p>
<p>PS: You could play around with numpy's <a href="https://numpy.org/doc/stable/user/basics.broadcasting.html" rel="nofollow noreferrer">broadcasting</a> and calculate the f-values in one go. The function needs to be adapted a bit, taking sums of columns of an intermediate matrix:</p>
<pre class="lang-py prettyprint-override"><code>ts = np.linspace(-5, 12.5, 200)
ks = np.arange(1, n+1).reshape(-1, 1)
fs = np.sum(np.sin(ks * ts) / ks, axis=0)
plt.plot(ts, fs, label=f'$n={n}$')
</code></pre>
|
python|matplotlib|plot|trigonometry
| 3 |
1,908,839 | 53,306,474 |
I have two variables that I know are equal but my if statement does not recognise this?
|
<p>This is my code:</p>
<pre><code>bookings = ['blue,red', 'green,orange', 'yellow, purple']
number = 0
b = 0
c = 1
file_test = open('test_1.txt' , 'wt')
results_song = []

for item in bookings:
    words = bookings[number].split(',')
    results_song.append(words[0])
    results_song.append(words[1])
    number = number + 1

results_song_str = '\n'.join(results_song)
print(results_song_str)
file_test.write(results_song_str)
file_test.close()

file_test = open('test_1.txt' , 'r')
line = file_test.readlines()

for item in bookings:
    line_1 = line[b]
    line_2 = line[c]
    answer = input('If first word is then what is the second word')
    if answer == line_2:
        print('correct')
    else:
        print('wrong')
    b = b + 2
    c = c + 2
</code></pre>
<p>However the code will not recognise that answer is equal to <code>line_2</code>. I cannot figure out why this is happening. I have checked that <code>c</code> is the correct number and that <code>line_2</code> is the same as answer. But I did notice that when I ran the code while printing answer and <code>line_2</code> that this would return:</p>
<pre><code>red
red
</code></pre>
<p>but I never put a new line feature in here.</p>
<p>Any help would be much appreciated as I need to use this code for a school assignment.</p>
|
<p>Debugging by printing</p>
<pre><code># ...
for item in bookings:
    line_1 = line[b]
    line_2 = line[c]

    print("Your Answer:", repr(answer))
    print("Actual Answer:", repr(line_2))
    # ...
</code></pre>
<p>gives</p>
<pre><code>Your Answer: 'red'
Actual Answer: 'red\n'
</code></pre>
<p>Aha! A sneaky newline character! Seems like when the program was reading text from the file and splitting the lines, it saved the newline character for you. How sweetly annoying. : |</p>
<p>To remove it, you can use the <code>str.replace()</code> method</p>
<pre><code># ...
for _ in range(len(bookings)):  # I took the freedom to modify the loop conditions
    line_1 = line[b].replace('\n','')
    line_2 = line[c].replace('\n','')
    # ...
</code></pre>
<p>or change the way lines are read from the file, manually splitting the lines using the <code>str.split()</code> method</p>
<pre><code># ...
with open('test_1.txt' , 'r') as file_test:
    line = file_test.read().split('\n')

for _ in range(len(bookings)):
    line_1 = line[b]
    line_2 = line[c]
    # ...
</code></pre>
<hr>
<p><em>Credit goes to @juanpa.arrivillaga for suggesting the use of <code>repr()</code> to check values.</em></p>
|
python|python-3.x|if-statement
| 1 |
1,908,840 | 68,455,601 |
Working with python, do I need to write down AST to evaluate a boolean function?
|
<p>I am working on a project; one part of it is to generate a truth table for a given boolean expression (such as: (A NOT B) NOR (C OR D)). I could not find a built-in bitwise operator for a NOR gate, so I wrote code of my own for it, which works well with simple expressions. But if brackets are introduced for unambiguity, the code gives false results and does a terrible job. [For example, for the expression mentioned above, it gives an output of (a~b~()|()c|d).]
Here is the code snippet:</p>
<pre><code># to replace NOR gate with bitwise NOR operation of ~(var_a | var_b)
for m in range(0, len(expression)):
    v = len(expression)
    for v in range(0, len(expression)):
        if (expression[v] == "!"):
            prev = '~(' + expression[v-1]
            nextt = expression[v+1] + ')'
            replacement = prev + "|" + nextt
            leftear = expression[:v-1]
            rightear = expression[v+2:]
            print('left right ears:', leftear, rightear)
            expression = leftear + replacement + rightear
            print('expression refined= X= ', expression)
</code></pre>
<p>The only solution that I found on Google was to write a parse tree (which Python says has been deprecated, so we are recommended to use AST instead). I am a total beginner and have searched a little bit about parse trees, but I wanted to ask:
<strong>1) Is an AST or a parser the only way to do it? Are there any built-in functions for it in Python?</strong>
<strong>2) What is the better way of handling a NOR gate?</strong></p>
|
<p>Assuming you can also use prefix notation, you might be able to use the
<code>sympy</code> library.</p>
<p>Below is a small program that demonstrates using it for
a sample expression derived from your question, with some code at the
end to generate the truth table for the expression using the <code>satisfiable</code>
function, which generates all possible models for the logic expression.</p>
<pre><code>import itertools
from sympy import *
from sympy.logic import simplify_logic
from sympy.logic.inference import satisfiable

my_names = 'ABCD'
A, B, C, D = symbols(','.join(my_names))

e1 = Nor(Nor(A, B), Or(C, D))
e2 = ~(~(A | B) | (C | D))
print('e1 and e2 are same:', e1 == e2)
print(simplify_logic(e1))

my_symbols = sorted(e1.atoms(Symbol), key=lambda x: x.name)
print('Set of symbols used:', my_symbols)

models = satisfiable(e1, all_models=True)
sat_mods = []
for m in models:
    sat_mods.append(dict(sorted(m.items(), key=lambda x: x[0].name)))

truth_tab = []
for c in itertools.product((True, False), repeat=4):
    model = dict(zip(my_symbols, c))
    truth_tab.append((model, model in sat_mods))

print(truth_tab)
</code></pre>
<p>Output:</p>
<pre><code># e1 and e2 are same: True
# ~C & ~D & (A | B)
# Set of symbols used: [A, B, C, D]
# [({A: True, B: True, C: True, D: True}, False),
# ({A: True, B: True, C: True, D: False}, False),
# ...
</code></pre>
|
python|bitwise-operators|abstract-syntax-tree|boolean-expression|brackets
| 0 |
1,908,841 | 62,621,063 |
Nesting multiple context manager functions of an object
|
<p>I am using python 2.7(migration not complete yet) and trying to build an aggregate class(Cluster) which provides the same context manager function(reserved) as on its individual items(Node). I would ideally like to do this without refactoring the context manager function of the contained object Node. If it had separate enter() and exit() methods, perhaps I would have directly called them in the Cluster.reserved() context manager function. But I thought there might be more elegant ways of doing it. I tried following code with ExitStack:</p>
<pre><code>from contextlib import contextmanager
from contextlib2 import ExitStack

class Node:
    def __init__(self, id):
        self._id = id

    @contextmanager
    def reserved(self):
        print("Node {} reserved".format(self._id))
        try:
            yield
        except:
            print("Exception while handling node")
        finally:
            print("Node released")

    def reserve(self):
        print("Node {} reserved".format(self._id))

    def release(self):
        print("Node {} released".format(self._id))

class Cluster:
    def __init__(self, id):
        self._id = id
        self._nodes = [Node(1), Node(2), Node(3), Node(4)]

    @contextmanager
    def reserved(self):
        with ExitStack() as cm:
            cm.enter_context(node.reserved() for node in self._nodes)
            print("Cluster {} reserved".format(self._id))
            try:
                yield
            except:
                print("Exception while handling cluster")
            finally:
                print("Cluster released")

    @contextmanager
    def reserved2(self):
        with ExitStack() as cm:
            for node in self._nodes:
                reserve = node.reserve()
                cm.callback(node.release(), reserve)
            print("Cluster {} reserved".format(self._id))
            try:
                yield
            except:
                print("Exception while handling cluster")
            finally:
                print("Cluster released")

node = Node(10)
with node.reserved():
    print('Node {} handled'.format(node._id))

cluster = Cluster(10)
with cluster.reserved2():
    print('Cluster {} handled'.format(cluster._id))
</code></pre>
<p>The first function reserved() returns</p>
<pre><code>AttributeError: type object 'generator' has no attribute '__exit__'
</code></pre>
<p>while the second one doesn't really yield after reserving all the nodes. Wondering what is the right way to handle this...</p>
|
<pre><code>cm.enter_context(node.reserved() for node in self._nodes)
</code></pre>
<p>That code wasn't right. I should have passed each context manager object to <code>enter_context</code> individually instead of handing it a generator expression.</p>
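<p>In other words, <code>enter_context</code> takes a single context manager per call, so a working version looks roughly like this sketch (error handling omitted):</p>
<pre><code>@contextmanager
def reserved(self):
    with ExitStack() as cm:
        for node in self._nodes:
            cm.enter_context(node.reserved())  # one context manager at a time
        print("Cluster {} reserved".format(self._id))
        yield
</code></pre>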
|
python-2.7|with-statement|contextmanager
| 0 |
1,908,842 | 62,028,161 |
How to apply the same function to multiple lists
|
<p>I have a few lists and want to apply the same function to all of them. I know how to apply it to one at a time, but not to all at once. Then I want to add the elements together to get a total.</p>
<pre><code>a = ['a','b','c','d','e']
b = ['b', np.nan,'c','e','a']
c = ['a','b','c','d','e']
</code></pre>
<p>I know you could do below to get the output, but I wanted to do it with serparation</p>
<pre><code>a = [1 if 'a' in a else 99 for x in a]
b = [1 if 'a' in b else 99 for x in b]
c = [1 if 'a' in c else 99 for x in c]
</code></pre>
<p>I first want to outputs below:</p>
<pre><code>a = [1, 99, 99, 99, 99]
b = [99, 99, 99, 99, 1]
c = [99, 99, 99, 99, 1]
</code></pre>
<p>Then Add each elements into one final list</p>
<pre><code>sum = [199, 297, 297, 297, 101]
</code></pre>
|
<p>pandas makes this quite easy
(although I'm sure it's almost as easy with plain numpy)</p>
<pre><code>import pandas
df = pandas.DataFrame({'a':a,'b':b,'c':c})
mask = df == 'a'
df[mask] = 1
df[~mask] = 99
df.sum(axis=1)
</code></pre>
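<p>If you'd rather avoid the pandas dependency, here is a plain-Python sketch of the same idea: apply one scoring function to every list, then sum element-wise.</p>
<pre><code>def score(lst):
    return [1 if x == 'a' else 99 for x in lst]

# zip lines the scored lists up element by element
totals = [sum(vals) for vals in zip(score(a), score(b), score(c))]
</code></pre>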
|
python|list
| 0 |
1,908,843 | 61,695,450 |
How to sort a list of tuples by 2 conditions with priority on these 2 conditions in python
|
<p>I need to sort something like</p>
<p>list = [('B', 2), ('A', 6), ('D', 4), ('E', 6), ('C', 2)]</p>
<p>into:</p>
<p>sorted_list = [('A', 6), ('E', 6), ('D', 4), ('B', 2), ('C', 2)]</p>
<p>So it is first sorted from the tuple with the highest number first, then if the numbers are equal, the tuples are sorted alphabetically by the letter in the first element. </p>
<p>So priority is highest to lowest in terms of the numbers in each tuple, then alphabetically if 2 or more values are equal. </p>
|
<p>You can do it like this:</p>
<pre><code>sorted([('B', 2), ('A', 6), ('D', 4), ('E', 6), ('C', 2)], key = lambda x: (-x[1],x[0]))
</code></pre>
<p>yields:</p>
<pre><code>[('A', 6), ('E', 6), ('D', 4), ('B', 2), ('C', 2)]
</code></pre>
<p>The key returns a tuple: negating the number (<code>-x[1]</code>) makes the numeric sort descending, while <code>x[0]</code> breaks ties alphabetically in ascending order.</p>
|
python|list|sorting|tuples
| 0 |
1,908,844 | 60,364,370 |
How to get the coefficients of the model using sklearn's AdaBoostClassifier (with Logistic regression as the base estimator)
|
<p>I have built a model using scikit-learn's <a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html#sklearn.ensemble.AdaBoostClassifier.decision_function" rel="nofollow noreferrer">AdaBoostClassifier</a> with <a href="https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression.predict" rel="nofollow noreferrer">Logistic regression</a> as the base estimator.</p>
<pre><code>model = AdaBoostClassifier(base_estimator=linear_model.LogisticRegression()).fit(X_train, Y_train)
</code></pre>
<p>How do I obtain the coefficients of the model? I want to see how much each feature will contribute numerically towards the target variable <code>log(p/(1-p))</code>.</p>
<p>Many thanks.</p>
|
<p>AdaBoost has an <code>estimators_</code> attribute that allows you to iterate over all fitted base learners. You can use the <code>coef_</code> attribute of each base learner to get the coefficients assigned to each feature, and then average the coefficients. Note that you'll have to take into account the fact that AdaBoost's base learners are assigned individual weights.</p>
<pre><code>coefs = []
for clf, w in zip(model.estimators_, model.estimator_weights_):
    coefs.append(clf.coef_ * w)
coefs = np.array(coefs).mean(axis=0)
print(coefs)
</code></pre>
<p>If you've got binary classification, you might want to change the line inside the loop to:</p>
<pre><code>coefs.append(clf.coef_.reshape(-1)*w)
</code></pre>
|
python|scikit-learn|logistic-regression|adaboost
| 2 |
1,908,845 | 71,428,902 |
Python error: ModuleNotFound: encodings which does in fact exist
|
<p>I have Python (3.9) installed in my local user account's programs folder. When I execute it, I get the following error. A few things are odd:</p>
<ol>
<li>In my main Python script, I cannot even do a simple <code>print()</code> first thing, so the problem is directly with Python itself</li>
<li><code>sys.path</code> has 2 entries that don't exist. I am not sure how they were set to those values, or what set them, but they are wrong as those paths don't exist and a third entry references a zip file, which is probably related to the issue I am having</li>
<li>I inspected all the paths manually and everything is as it should be, and the encodings module does exist</li>
</ol>
<p>Python only exists in my <code>PATH</code> environment variable once, which is:
<code>C:\Users\<username>\AppData\Local\Programs\Python\Launcher\</code> and that Launcher folder doesn't exist. I have no idea how it was even set, as I intentionally told Python not to add itself to the <code>PATH</code> variable so it would never interfere with other Python installations (of which there are currently none).</p>
<pre><code>Python path configuration:
PYTHONHOME = (not set)
PYTHONPATH = (not set)
program name = 'C:\Users\<username>\AppData\Local\Programs\Python\python.exe'
isolated = 0
environment = 1
user site = 1
import site = 1
sys._base_executable = 'C:\\Users\\<username>\\AppData\\Local\\Programs\\Python\\python.exe'
sys.base_prefix = ''
sys.base_exec_prefix = ''
sys.platlibdir = 'lib'
sys.executable = 'C:\\Users\\<username>\\AppData\\Local\\Programs\\Python\\python.exe'
sys.prefix = ''
sys.exec_prefix = ''
sys.path = [
'C:\\Users\\<username>\\AppData\\Local\\Programs\\Python\\python39.zip',
'C:\\Python39\\Lib\\',
'C:\\Python39\\DLLs\\',
'C:\\Users\\<username>\\AppData\\Local\\Programs\\Python',
]
Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
ModuleNotFoundError: No module named 'encodings'
Current thread 0x000071d4 (most recent call first):
<no Python frame>
</code></pre>
<p>If <code>sys.path</code> is incorrect (which it appears to be), how can I manually set or fix it, especially given that my script never gets the opportunity to execute?</p>
|
<p>I had a similar problem: I wanted to use the Ubuntu Python installed in /usr/bin, but when I typed python, the default pointed to the Anaconda installation instead of the Python installed in /usr/bin.</p>
<p>I solved that using this:
<a href="https://stackoverflow.com/questions/24664435/use-the-default-python-rather-than-the-anaconda-installation-when-called-from-th">Use the default Python rather than the Anaconda installation when called from the terminal</a></p>
<p>So in file <code>~/.bashrc</code>, instead of</p>
<pre><code># Added by the Anaconda3 4.3.0 installer
export PATH="/home/user/anaconda3/bin:$PATH"
</code></pre>
<p>one would use</p>
<pre><code>export PATH="$PATH:/home/user/anaconda3/bin"
</code></pre>
|
python|windows
| 0 |
1,908,846 | 64,452,218 |
How to extract only uppercase substring from pandas series?
|
<p>I have been trying to extract the uppercase substring from a pandas dataframe, but to no avail. How can I extract only the uppercase substring in pandas?</p>
<p>Here is my MWE:</p>
<h1>MWE</h1>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame({'col': ['cat', 'cat.COUNT(example)','cat.N_MOST_COMMON(example.ord)[2]']})
df['feat'] = df['col'].str.extract(r"[^A-Z]*([A-Z]*)[^A-Z]*")
print(df)
"""
col feat
0 cat NaN
1 cat.COUNT(example) T
2 cat.N_MOST_COMMON(example.ord)[2] N
""";
</code></pre>
<h1>Expected output</h1>
<pre><code> col feat
0 cat
1 cat.COUNT(example) COUNT
2 cat.N_MOST_COMMON(example.ord)[2] N_MOST_COMMON
</code></pre>
|
<p>How about the following? <code>[A-Z_]+</code> matches the first run of uppercase letters and underscores, and <code>fillna('')</code> replaces the non-matches with an empty string:</p>
<pre><code> df['feat'] = df.col.str.extract('([A-Z_]+)').fillna('')
</code></pre>
<p>Output:</p>
<pre><code> col feat
0 cat
1 cat.COUNT(example) COUNT
2 cat.N_MOST_COMMON(example.ord)[2] N_MOST_COMMON
</code></pre>
|
python|pandas
| 3 |
1,908,847 | 70,073,274 |
SQLAlchemy relationships conflict
|
<p>I have a problem with forming relationships in a table.</p>
<p>I need to put the pets in boxes. One box - one pet.
Pets are divided into two tables according to their characteristics.</p>
<p><strong>How can I put the</strong> <code>dog_id</code>(Dogs) <strong>or</strong> <code>cat_id</code>(Cats) <strong>to the</strong> <code>pet_id</code>(Boxes)<strong>?</strong></p>
<p>I tried the following:</p>
<pre><code>class Boxes():
    __tablename__ = 'Boxes table'
    box_id = Column('Box ID', NVARCHAR(5), primary_key=True)
    pet_id = Column('Pet ID', ForeignKey('Dogs table.DOG ID'), ForeignKey('Cats table.CAT ID'))
    pet_cat = relationship('Cats')
    pet_dog = relationship('Dogs')

class Dogs():
    __tablename__ = 'Dogs table'
    dog_id = Column('DOG ID', NVARCHAR(10), primary_key=True)
    dog_characteristics = Column('Dog Characteristics', NVARCHAR(20))

class Cats():
    __tablename__ = 'Cats table'
    cat_id = Column('CAT ID', NVARCHAR(10), primary_key=True)
    cat_characteristics = Column('Cat Characteristics', NVARCHAR(50))
</code></pre>
<p>But there is a conflict:</p>
<pre><code>relationship 'Boxes.pet_cat' will copy column Cats table.CAT ID to column Boxes table.Pet ID, which conflicts with relationship(s): 'Boxes.pet_dog'
</code></pre>
<p>How should I correctly establish a relationship? Thanks</p>
|
<p>Your structure looks <a href="https://docs.sqlalchemy.org/en/13/orm/extensions/declarative/inheritance.html" rel="nofollow noreferrer">polymorphic</a>: you have a pet class and subclasses dog and cat. Define the polymorphic type in <code>__mapper_args__</code>. Add a foreign key in the Boxes class, and create the relationship with Pet instead of Dogs or Cats. Pet will give you the Dog or Cat attributes according to <code>pet_type</code>.</p>
<pre><code>from sqlalchemy.orm import backref, declarative_base, relationship
from sqlalchemy.sql.schema import Column, ForeignKey
from sqlalchemy.sql.sqltypes import Integer, NVARCHAR, String

Base = declarative_base()

class Boxes(Base):
    __tablename__ = 'Boxes table'
    box_id = Column('Box ID', NVARCHAR(5), primary_key=True)
    pet_id = Column(Integer, ForeignKey('pet.id'))
    pet = relationship("Pet", backref=backref("boxes", cascade="all, delete-orphan", lazy=True))

class Pet(Base):
    __tablename__ = 'pet'
    id = Column(Integer, primary_key=True, autoincrement=True)
    name = Column(String)
    pet_type = Column(String)
    __mapper_args__ = {"polymorphic_identity": "pet", "polymorphic_on": pet_type}

class Dogs(Pet):
    __tablename__ = 'Dogs table'
    id = Column(Integer, ForeignKey("pet.id"), primary_key=True)
    dog_characteristics = Column('Dog Characteristics', NVARCHAR(20))
    __mapper_args__ = {"polymorphic_identity": "pet_dog"}

class Cats(Pet):
    __tablename__ = 'Cats table'
    id = Column(Integer, ForeignKey("pet.id"), primary_key=True)
    cat_characteristics = Column('Cat Characteristics', NVARCHAR(50))
    __mapper_args__ = {"polymorphic_identity": "pet_cat"}
</code></pre>
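<p>A quick usage sketch (my addition, assuming an in-memory SQLite engine and SQLAlchemy 1.4+) showing the polymorphic loading in action:</p>
<pre><code>from sqlalchemy import create_engine
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    dog = Dogs(name="Rex", dog_characteristics="friendly")
    session.add(Boxes(box_id="B1", pet=dog))
    session.commit()
    box = session.query(Boxes).first()
    # box.pet comes back as a Dogs instance thanks to polymorphic_identity
    print(type(box.pet).__name__, box.pet.pet_type)  # Dogs pet_dog
</code></pre>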
|
python|sql|sqlalchemy
| 2 |
1,908,848 | 70,192,707 |
Why is my program not generating two graphs at once? I want both graphs to appear within the plot explorer at the same time
|
<p>With this program, my goal was to import the two given files with pandas and use that data to create two separate graphs. I was able to create the graphs and save them in different documents, but my assignment calls for creating them at the same time and displaying them in the same window. How would I adjust my code to make that work? I suspect the file_name portion of the code is preventing both portions from running properly, but I'm not sure; it could also be the subplot portion.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import interactive
import numpy as np

def main(file_name):
    file_name = r"College Enrollments.csv"
    df = pd.read_csv(file_name)
    department = df['Department']
    enrollment_count = df[' Enrollment count']
    plt.figure(figsize=(9,3))
    plt.subplot(131)
    plt.bar(department, enrollment_count)
    plt.title('College Enrollments')
    plt.show
    x_axis = 'Department'
    y_axis = 'Enrollment count'
    plt.yticks(np.arange(0, 4001, 500))
    plt.xticks(rotation=90)

    file_name = r"CS Faculty.csv"
    df = pd.read_csv(file_name)
    professor = df['CS Professor']
    yoe = df['Years of Experience']
    #end of graph 1
    plt.figure(figsize=(9,3))
    plt.subplot(132)
    plt.bar(professor, yoe)
    plt.title('CS Faculty')
    plt.show
    x_axis = 'CS Professor'
    y_axis = 'Years of Experience'
    plt.yticks(np.arange(0, 31, 5))
    plt.xticks(rotation=90)

    # read the file_name into a pandas dataframe
    # plot the dataframe using arguments "title", "legend", "x", "y", "kind" and "color"
    # The only four statements that may use the matplotlib library appear next.
    # Do not modify them.
    plt.xlabel(x_axis)  # Note: x-axis should be determined above
    plt.ylabel(y_axis)  # Note: y-axis should be determined above
    interactive(True)   # This allows multiple figures to be displayed simultaneously
    plt.show()
    #end of graph 2

# -------------------------------------------------
main("College Enrollments.csv")
main("CS Faculty.csv")
</code></pre>
|
<p>I don't think you need to write a function for this, but you're looking for something like:</p>
<pre><code>fig, (ax1, ax2) = plt.subplots(1, 2)
fig.suptitle('Horizontally stacked subplots')
ax1.plot(x, y)
ax2.plot(x, -y)
</code></pre>
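<p>Applied to the two CSVs from the question, a minimal sketch (my addition; it assumes the column names used in the question's code) could be:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))

df1 = pd.read_csv("College Enrollments.csv")
ax1.bar(df1['Department'], df1[' Enrollment count'])
ax1.set_title('College Enrollments')
ax1.tick_params(axis='x', labelrotation=90)

df2 = pd.read_csv("CS Faculty.csv")
ax2.bar(df2['CS Professor'], df2['Years of Experience'])
ax2.set_title('CS Faculty')
ax2.tick_params(axis='x', labelrotation=90)

plt.tight_layout()
plt.show()
</code></pre>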
<p>The best thing to get used to is reading documentation; you will be spending a lot of time on it if you plan on coding anything.
You can find everything you need here: <a href="https://matplotlib.org/devdocs/gallery/subplots_axes_and_figures/subplots_demo.html" rel="nofollow noreferrer">https://matplotlib.org/devdocs/gallery/subplots_axes_and_figures/subplots_demo.html</a></p>
|
python|pandas|matplotlib
| 1 |
1,908,849 | 70,520,424 |
How to load a tensorflow keras model saved with saved_model to use the predict function?
|
<p>I have a Keras sequential model. I have saved that model using the command:</p>
<pre><code>tf.keras.models.save_model(model, 'model')
</code></pre>
<p>Now it has the following folder structure,</p>
<p><a href="https://i.stack.imgur.com/MWQ1h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MWQ1h.png" alt="Now it has the following folder structure," /></a></p>
<p>Now I am loading the model using</p>
<pre><code>model = tf.saved_model.load('model')
</code></pre>
<p>I also tried with</p>
<pre><code>model = tf.keras.models.load_model('model')
</code></pre>
<p>then I am trying to predict using</p>
<pre><code>model.predict(padded_seq, verbose=0)
</code></pre>
<p>it is giving me error</p>
<pre><code>AttributeError: '_UserObject' object has no attribute 'predict'
</code></pre>
<p>How do I use predict on the loaded model? I tried with an h5 model and it worked fine, but my main use case is with this kind of model, which is throwing the error.</p>
|
<p>I have encountered the same problem with SavedModel models downloaded from TFHub (example: InceptionV3). Even loading with <code>tf.keras.models.load_model()</code> returns a plain model (a sort of basic generic model kept for backward compatibility) that does not have the Keras API (predict, fit, summary, build, etc.) on top of it; the object type is: <code><tensorflow.python.saved_model.load.Loader._recreate_base_user_object.<locals>._UserObject object at 0x14a42ac2bcf8></code>.
If you want to use just the inference call (predict), you can call your model directly on data (the <code>__call__</code> method is defined), as follows:</p>
<pre><code>model(padded_seq) # or model.__call__(padded_seq)
</code></pre>
<p>One workaround I have found to get the Keras API back is wrapping the model inside a KerasLayer in a Sequential model, as follows:</p>
<pre><code>import tensorflow as tf
import tensorflow_hub as hub
model = tf.keras.Sequential([
hub.KerasLayer("saved/model/path")
])
model.build(<input_shape>)
</code></pre>
<p>Now the model supports the full Keras API (predict, summary, etc.), and this should work:</p>
<pre><code>model.predict(padded_seq, verbose=0)
</code></pre>
|
python|tensorflow|keras|sequential
| 0 |
1,908,850 | 56,498,027 |
No Numeric Type To Aggregate
|
<p>I have a dataframe in which there are the following columns:</p>
<pre><code>Date - Seller - Amount
</code></pre>
<p>Code sample:</p>
<pre><code>import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
base = pd.DataFrame({'date': ['01/02/2019', '01/02/2019', '01/02/2019',
                              '02/02/2019', '02/02/2019', '03/02/2019'],
                     'Seller': ['John', 'Ada', 'John', 'Ada', 'Ada', 'Ada'],
                     'Amount': [150, 200, 158, 200, 60, 90]})
</code></pre>
<p>I aggregated the data like so:</p>
<pre><code>agg=pd.DataFrame(base.groupby(['date','Seller'])['Amount'].sum())
agg.reset_index(inplace=True)
</code></pre>
<p>Then I tried, using FacetGrid, to display the Amount along the days per Seller (each Seller would be a row in the FacetGrid), like so:</p>
<pre><code>g=sns.FacetGrid(data=agg,row='Seller')
g.map(sns.lineplot, 'Amount', 'date')
</code></pre>
<p>But I get the following error:</p>
<blockquote>
<p>No numeric types to aggregate</p>
</blockquote>
<p>There was another post here on Stack Overflow, but it didn't quite give me a clue about what to do in order to solve this.</p>
<p>I checked the variables and they returned numpy.int64. I tried to convert them to float and int using <code>.astype()</code>, but that didn't work either.</p>
<p>Environment: Jupyter Notebook</p>
<p>Can someone shed a light on this?</p>
<p>Thanks in advance</p>
|
<p>Possibly you mean something like this:</p>
<pre><code>import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
base = pd.DataFrame({'date': ['01/02/2019', '01/02/2019', '01/02/2019',
'02/02/2019','02/02/2019','03/02/2019'],
'Seller': ['John', 'Ada', 'John', 'Ada', 'Ada', 'Ada'], 'Amount':
[150,200,158,200,60,90]})
base["date"] = pd.to_datetime(base["date"])
agg=pd.DataFrame(base.groupby(['date','Seller'])['Amount'].sum())
agg.reset_index(inplace=True)
g=sns.FacetGrid(data=agg, row='Seller')
g.map(sns.lineplot, 'date', 'Amount')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/r84Qr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r84Qr.png" alt="enter image description here"></a></p>
<p>Note that John's plot is empty because he only sold something on a single day, and one cannot draw a line from a point to itself.</p>
|
python|seaborn|facet-grid
| 1 |
1,908,851 | 56,827,055 |
Unable to fetch url in python
|
<p>I was not able to fetch a URL from biblegateway.com; it shows the error
<code>urllib2.URLError: <urlopen error [Errno 1] _ssl.c:510: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure></code>. Please don't mark this as a duplicate: I went through the sites shown as duplicates and didn't understand the issue by visiting them.</p>
<p>here is my code</p>
<pre><code>import urllib2
url = 'https://www.biblegateway.com/passage/?search=Deuteronomy+1&version=NIV'
response = urllib2.urlopen(url)
html = response.read()
print html
</code></pre>
|
<p>Here is a good reference for <a href="https://stackoverflow.com/questions/15138614/how-can-i-read-the-contents-of-an-url-with-python">fetching a URL</a>.</p>
<p>In Python 3 you can do:</p>
<pre><code>from urllib.request import urlopen
URL = 'https://www.biblegateway.com/passage/?search=Deuteronomy+1&version=NIV'
f = urlopen(URL)
myfile = f.read()
print(myfile)
</code></pre>
<p>Not sure it clears up the SSL problem though. Maybe there are some clues <a href="https://stackoverflow.com/questions/27835619/urllib-and-ssl-certificate-verify-failed-error">here</a>.</p>
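<p>If installing third-party packages is an option, another route is <code>requests</code>; on Python 2, <code>pip install requests[security]</code> pulls in pyOpenSSL for SNI support, which is a common cause of these sslv3 handshake failures. A sketch (not tested against this particular site):</p>
<pre><code>import requests

url = 'https://www.biblegateway.com/passage/?search=Deuteronomy+1&version=NIV'
r = requests.get(url)
print(r.text)
</code></pre>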
|
python|python-2.7|urllib2
| 1 |
1,908,852 | 69,275,280 |
ValueError: Expected 2D array, got scalar array instead: array=6.5
|
<p>While practicing Support Vector Regression Model I got this error:</p>
<pre><code>ValueError: Expected 2D array, got scalar array instead:
array=6.5.
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
</code></pre>
<p>This is my code (Python 3.7)</p>
<pre><code>#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Sep 20 14:39:06 2021
@author: lulu
"""
# SVR
# simple learning regression
# Data preprocessing
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd # import data sets and manage data sets
# Importing dataset
dataset = pd.read_csv('/home/lulu/machineLearning/Position_Salaries.csv')
X = dataset.iloc[:, 1:2].values
y = dataset.iloc[:, 2].values
# Splitting the dataset into the training set and test set
"""from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size = 1/3,random_state = 0)"""
# Feature scaling
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
sc_y = StandardScaler()
X = sc_X.fit_transform(X)
y = sc_X.fit_transform(y)
# Fitting SVR to the training set
# Create your regressor here
from sklearn.svm import SVR
regressor = SVR(kernel = 'rbf')
regressor.fit(X, y)
# Producing a new result
y_pred = regressor.predict(sc_X.transform(6.5))
# Visualizing the test VR results
plt.scatter(X, y, color='red')
plt.plot(X, regressor.predict(X),color = 'blue')
plt.title('Truth or Bluff (SVR)')
plt.xlabel('Position level')
plt.ylabel('Salary')
plt.show()
</code></pre>
<p>I want to predict a new result, and since 6.5 is not yet transformed, we need to transform it with the following function, but I don't know how to apply it correctly:</p>
<pre><code>sc_X.transform
</code></pre>
<p>I also want to know why I don't get an ideal result compared with multiple linear regression and so forth. The result seems senseless, and I can't draw a conclusion from it.</p>
|
<p><code>transform</code> expects a 2-D array of shape (n_samples, n_features), so wrap the scalar in a nested list and try this:</p>
<pre><code>y_pred = regressor.predict(sc_X.transform([[6.5]]))
</code></pre>
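<p>Note that the target <code>y</code> was scaled too (the question fits <code>sc_X</code> on <code>y</code>, presumably a typo for <code>sc_y</code>), so the prediction comes back in scaled units. A sketch of the full flow under that assumption:</p>
<pre><code># y must be 2-D for StandardScaler, hence the reshape
y_scaled = sc_y.fit_transform(y.reshape(-1, 1))
regressor.fit(X, y_scaled.ravel())

y_pred_scaled = regressor.predict(sc_X.transform([[6.5]]))
# map the prediction back to the original salary units
y_pred = sc_y.inverse_transform(y_pred_scaled.reshape(-1, 1))
</code></pre>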
|
python-3.x|scikit-learn|non-linear-regression
| 0 |
1,908,853 | 69,172,250 |
How to print multiple specific JSON values with Python?
|
<p>I am attempting to print values from an API via a JSON response. I was successful when I tried to print the first and foremost "live" value of the response, but I started running into problems when I tried printing anything other than the "live" value. Below is a sample of what I usually receive from the API; my goal here is to <em>print only every visible "name" value</em>.</p>
<pre><code>{
"live":[
{
"id":203003098,
"yt_video_key":"K0uWjPoiMRY",
"bb_video_id":"None",
"title":"【Minecraft】Nature, Please Guide Me! ft. @Ceres Fauna Ch. hololive-EN #holoCouncil",
"thumbnail":"None",
"status":"live",
"live_schedule":"2021-09-14T02:00:00.000Z",
"live_start":"2021-09-14T02:00:51.000Z",
"live_end":"None",
"live_viewers":11000,
"channel":{
"id":2260367,
"yt_channel_id":"UC3n5uGu18FoCy23ggWWp8tA",
"bb_space_id":"None",
"name":"Nanashi Mumei Ch. hololive-EN",
"photo":"https://yt3.ggpht.com/MI8E8Wfmc_ngNZXUwu8ad0D-OtqDhmqGVULEu25z-ccscwzJpAw-7ewFXzZYLK2jHB9d5OgQDq4=s800-c-k-c0x00ffffff-no-rj",
"published_at":"2021-07-26T15:45:01.162Z",
"twitter_link":"nanashimumei_en",
"view_count":4045014,
"subscriber_count":281000,
"video_count":14
}
},
{
"id":202920144,
"yt_video_key":"owk8w59Lcus",
"bb_video_id":"None",
"title":"【Undertale】平和なPルートでハッピーエンド目指す!【雪花ラミィ/ホロライブ】",
"thumbnail":"None",
"status":"live",
"live_schedule":"2021-09-14T00:00:00.000Z",
"live_start":"2021-09-14T00:04:22.000Z",
"live_end":"None",
"live_viewers":6200,
"channel":{
"id":31879,
"yt_channel_id":"UCFKOVgVbGmX65RxO3EtH3iw",
"bb_space_id":"None",
"name":"Lamy Ch. 雪花ラミィ",
"description":"ホロライブ所属。\n人里離れた白銀の大地に住む、雪の一族の令嬢。\nホロライブの笑顔や彩りあふれる配信に心を打たれ、\nお供のだいふくと共に家を飛び出した。\n真面目だが世間知らずで抜けたところがある。\n\n\n\nお問い合わせ\nカバー株式会社:http://cover-corp.com/ \n公式Twitter:https://twitter.com/hololivetv",
"photo":"https://yt3.ggpht.com/ytc/AKedOLQDR06gp26jxNNXh88Hhv1o-pNrnlKrYruqUIOx=s800-c-k-c0x00ffffff-no-rj",
"published_at":"2020-04-13T03:51:15.590Z",
"twitter_link":"yukihanalamy",
"view_count":66576847,
"subscriber_count":813000,
"video_count":430
}
},
{
"id":203019193,
"yt_video_key":"QM2DjVNl1gY",
"bb_video_id":"None",
"title":"【MINECRAFT】 Adventuring with Mumei! #holoCouncil",
"thumbnail":"None",
"status":"live",
"live_schedule":"2021-09-14T02:00:00.000Z",
"live_start":"2021-09-14T02:00:58.000Z",
"live_end":"None",
"live_viewers":8600,
"channel":{
"id":2260365,
"yt_channel_id":"UCO_aKKYxn4tvrqPjcTzZ6EQ",
"bb_space_id":"None",
"name":"Ceres Fauna Ch. hololive-EN",
"description":"A member of the Council and the Keeper of \"Nature,\" the second concept created by the Gods.\nShe has materialized in the mortal realm as a druid in a bid to save nature.\nShe has Kirin blood flowing in her veins, and horns that are made out of the branches of a certain tree; they are NOT deer antlers.\n\n\"Nature\" refers to all organic matter on the planet except mankind.\nIt is long said that her whispers, as an avatar of Mother Nature, have healing properties. Whether or not that is true is something only those who have heard them can say.\nWhile she is usually affable, warm, and slightly mischievous, any who anger her will bear the full brunt of Nature\\'s fury.\n\n",
"photo":"https://yt3.ggpht.com/0lkccaVapSr1Z3uuXWbnaQxeqRWr9Tcs4R9rLBRSrAsN9gLacpiT2OFWfFKr4NhF97_hqK3eTg=s800-c-k-c0x00ffffff-no-rj",
"published_at":"2021-07-26T15:38:58.797Z",
"twitter_link":"ceresfauna",
"view_count":5003954,
"subscriber_count":253000,
"video_count":17
}
}
],
</code></pre>
<p>My code:</p>
<pre><code>url = "https://api.holotools.app/v1/live"
response = urlopen(url)
data_json = json.loads(response.read())
print(data_json['live'])
</code></pre>
|
<p>I think you're new to programming, so here is a note for new programmers.</p>
<blockquote>
<p>You did well in printing the data, but this is not the end: your
goal is to get the <strong><code>name</code></strong>, so you need to traverse
the response one element at a time. Let me show you.</p>
</blockquote>
<pre><code>from urllib.request import urlopen
import json

url = "https://api.holotools.app/v1/live"
response = urlopen(url)
data_json = json.loads(response.read())

# Why a loop? Because data_json['live'] is a list, and we want every element.
for stream in data_json['live']:
    # Each element is a dict; select its "channel" key, then the nested "name".
    print(stream["channel"]["name"])
</code></pre>
<p>Here are some useful links on how to traverse <strong><code>json</code></strong>:</p>
<ol>
<li><p><a href="https://www.kite.com/python/answers/how-to-iterate-through-a-json-string-in-python" rel="nofollow noreferrer">https://www.kite.com/python/answers/how-to-iterate-through-a-json-string-in-python</a></p>
</li>
<li><p><a href="https://www.delftstack.com/howto/python/iterate-through-json-python/" rel="nofollow noreferrer">https://www.delftstack.com/howto/python/iterate-through-json-python/</a></p>
</li>
</ol>
<p>There are also <strong><code>stackoverflow</code></strong> answers about <em><strong>how to get data from JSON</strong></em>, though they need some programming skills too. Here are the links:</p>
<ol>
<li><a href="https://stackoverflow.com/questions/2733813/iterating-through-a-json-object">Iterating through a JSON object</a></li>
<li><a href="https://stackoverflow.com/questions/42445237/looping-through-a-json-array-in-python">Looping through a JSON array in Python</a></li>
<li><a href="https://stackoverflow.com/questions/14547916/how-can-i-loop-over-entries-in-json">How can I loop over entries in JSON?</a></li>
</ol>
|
json|python-3.x
| 0 |
1,908,854 | 68,876,101 |
How do I get error messages from nosetests
|
<p>The <code>nosetest</code> command is failing with no messages. If I cd to my home directory I get the message I would expect:</p>
<pre class="lang-sh prettyprint-override"><code>(base) raysalemi@RayProMac ~ % nosetests
----------------------------------------------------------------------
Ran 0 tests in 0.003s
OK
</code></pre>
<p>But if I cd to my tests directory I get this:</p>
<pre class="lang-sh prettyprint-override"><code>/Users/raysalemi/repos/pyuvm/tests/nosetests
(base) raysalemi@RayProMac nosetests % ls
__pycache__ pyuvm_unittest.py test_05_base_classes.py test_06_reporting_classes.py
(base) raysalemi@RayProMac nosetests % nosetests
(base) raysalemi@RayProMac nosetests % echo $?
1
</code></pre>
<p>This has been running for months so I'm not certain of the change, but I can't get an error message to check, only the exit status.</p>
<p>Suggestions?</p>
|
<p>The solution was to cd to my test directory and run Python with the unittest module on the same command line:</p>
<pre class="lang-sh prettyprint-override"><code>(base) raysalemi@RayProMac nosetests % python -m unittest
No module named 'copyz'
No module named 'copyz'
EE
</code></pre>
<p>I had accidentally stuck a <code>z</code> into my code and the import was failing. The only way to get that message was to use <code>unittest</code> directly.</p>
|
python|nose
| 0 |
1,908,855 | 69,155,826 |
How to stop repeat on button press/hold - Python
|
<p>I was hoping someone might have some insight on how to stop a script from continuing to repeat if a button is held (or in my case pressed longer than a second)?</p>
<p>Basically, I've got a button set up on the breadboard, coded to play an audio file when the button is pressed. This works; however, unless the button is tapped very quickly, the audio repeats itself until the button is fully released. Also, if the button is pressed and held, the audio file just repeats indefinitely.</p>
<p>I've recorded a quick recording to demonstrate the issue if its helpful, here: <a href="https://streamable.com/esvoy6" rel="nofollow noreferrer">https://streamable.com/esvoy6</a></p>
<p>I should also note that I am very new to python (coding in general actually), so its most likely something simple that I just haven't been able to find yet. I am using gpiozero for my library.</p>
<p>Any help or insight is greatly appreciated!</p>
<hr />
<p><strong>Here is what my code looks like right now:</strong></p>
<pre><code>from gpiozero import LED, Button
import vlc
import time
import sys

def sleep_minute(minutes):
    sleep(minutes * 60)

# GPIO Pins of Green LED
greenLight = LED(17)
greenButton = Button(27)

# Green Button Pressed Definition
def green_btn_pressed():
    print("Green Button Pressed")
    greenButton.when_pressed = greenLight.on
    greenButton.when_released = greenLight.on

# Executed Script
while True:
    if greenButton.is_pressed:
        green_btn_pressed()
        time.sleep(.1)
        print("Game Audio Start")
        p = vlc.MediaPlayer("/home/pi/Desktop/10 Second Countdown.mp3")
        p.play()
</code></pre>
|
<p>So from a brief look at it, it seems that <code>time.sleep(.1)</code> is not doing what you are expecting, i.e. it is being interrupted by button presses. This is not abnormal behaviour, as button presses on Arduino and Raspberry Pi (guessing here) are processed as interrupts.
The script itself does not contain any protection against double pressing, press-and-hold, etc.</p>
<p>Have you put in any debug lines to see what is executing when you press the button?
I would start there and make adjustments based on what you are seeing.</p>
<p>I am not familiar with this gpiozero, so I can't give any insight about what it may be doing, but looking at the code and given the issue you are having, I would start with some debug lines in both functions to confirm what is happening.</p>
<p>Thinking about it for a minute though, could you not just change the check to 'if greenButton.is_released:'? As then you know the button has already been pressed, and the amount of time it is held in for becomes irrelevant. May also want to put in a check for if the file is already playing to stop it and start it again, or ignore and continue playing (if that is the desired behaviour).</p>
<p>Further suggestions:</p>
<p>For this section of code:</p>
<pre><code># Executed Script
while True:
    if greenButton.is_pressed:
        green_btn_pressed()
        time.sleep(.1)
        print("Game Audio Start")
        p = vlc.MediaPlayer("/home/pi/Desktop/10 Second Countdown.mp3")
        p.play()
</code></pre>
<p>You want to change this to something along these lines:</p>
<pre><code>already_playing = False
p = None

# Executed Script
while True:
    if greenButton.is_pressed:
        green_btn_pressed()
        # Check whether we already started playing the file.
        if already_playing:
            # vlc.MediaPlayer exposes is_playing(); once playback has
            # finished, allow the sound to be triggered again.
            if not p.is_playing():
                already_playing = False
            continue  # don't start a second playback while one is running
        # Nothing playing yet, so start the audio.
        time.sleep(.1)
        print("Game Audio Start")
        p = vlc.MediaPlayer("/home/pi/Desktop/10 Second Countdown.mp3")
        p.play()
        already_playing = True
</code></pre>
<p>Hopefully you get the idea of what I am saying. Best of luck!</p>
|
python|python-3.x|button|gpiozero
| 0 |
1,908,856 | 73,150,922 |
Can't convert .ts files that downloaded from .m3u8 file to mp4
|
<p>I have these files:</p>
<pre><code>001.ts 014.ts 027.ts 040.ts 053.ts 066.ts 079.ts 092.ts 105.ts 118.ts 131.ts 144.ts 157.ts 170.ts 183.ts 196.ts 209.ts 222.ts 235.ts 248.ts 261.ts 274.ts 287.ts 300.ts 313.ts 326.ts
002.ts 015.ts 028.ts 041.ts 054.ts 067.ts 080.ts 093.ts 106.ts 119.ts 132.ts 145.ts 158.ts 171.ts 184.ts 197.ts 210.ts 223.ts 236.ts 249.ts 262.ts 275.ts 288.ts 301.ts 314.ts 327.ts
003.ts 016.ts 029.ts 042.ts 055.ts 068.ts 081.ts 094.ts 107.ts 120.ts 133.ts 146.ts 159.ts 172.ts 185.ts 198.ts 211.ts 224.ts 237.ts 250.ts 263.ts 276.ts 289.ts 302.ts 315.ts 328.ts
004.ts 017.ts 030.ts 043.ts 056.ts 069.ts 082.ts 095.ts 108.ts 121.ts 134.ts 147.ts 160.ts 173.ts 186.ts 199.ts 212.ts 225.ts 238.ts 251.ts 264.ts 277.ts 290.ts 303.ts 316.ts 329.ts
005.ts 018.ts 031.ts 044.ts 057.ts 070.ts 083.ts 096.ts 109.ts 122.ts 135.ts 148.ts 161.ts 174.ts 187.ts 200.ts 213.ts 226.ts 239.ts 252.ts 265.ts 278.ts 291.ts 304.ts 317.ts 330.ts
006.ts 019.ts 032.ts 045.ts 058.ts 071.ts 084.ts 097.ts 110.ts 123.ts 136.ts 149.ts 162.ts 175.ts 188.ts 201.ts 214.ts 227.ts 240.ts 253.ts 266.ts 279.ts 292.ts 305.ts 318.ts 331.ts
007.ts 020.ts 033.ts 046.ts 059.ts 072.ts 085.ts 098.ts 111.ts 124.ts 137.ts 150.ts 163.ts 176.ts 189.ts 202.ts 215.ts 228.ts 241.ts 254.ts 267.ts 280.ts 293.ts 306.ts 319.ts 332.ts
008.ts 021.ts 034.ts 047.ts 060.ts 073.ts 086.ts 099.ts 112.ts 125.ts 138.ts 151.ts 164.ts 177.ts 190.ts 203.ts 216.ts 229.ts 242.ts 255.ts 268.ts 281.ts 294.ts 307.ts 320.ts 333.ts
009.ts 022.ts 035.ts 048.ts 061.ts 074.ts 087.ts 100.ts 113.ts 126.ts 139.ts 152.ts 165.ts 178.ts 191.ts 204.ts 217.ts 230.ts 243.ts 256.ts 269.ts 282.ts 295.ts 308.ts 321.ts 334.ts
010.ts 023.ts 036.ts 049.ts 062.ts 075.ts 088.ts 101.ts 114.ts 127.ts 140.ts 153.ts 166.ts 179.ts 192.ts 205.ts 218.ts 231.ts 244.ts 257.ts 270.ts 283.ts 296.ts 309.ts 322.ts
011.ts 024.ts 037.ts 050.ts 063.ts 076.ts 089.ts 102.ts 115.ts 128.ts 141.ts 154.ts 167.ts 180.ts 193.ts 206.ts 219.ts 232.ts 245.ts 258.ts 271.ts 284.ts 297.ts 310.ts 323.ts
012.ts 025.ts 038.ts 051.ts 064.ts 077.ts 090.ts 103.ts 116.ts 129.ts 142.ts 155.ts 168.ts 181.ts 194.ts 207.ts 220.ts 233.ts 246.ts 259.ts 272.ts 285.ts 298.ts 311.ts 324.ts
013.ts 026.ts 039.ts 052.ts 065.ts 078.ts 091.ts 104.ts 117.ts 130.ts 143.ts 156.ts 169.ts 182.ts 195.ts 208.ts 221.ts 234.ts 247.ts 260.ts 273.ts 286.ts 299.ts 312.ts 325.ts
</code></pre>
<p>which I downloaded with this Python program (the m3u8 file does not work!):</p>
<pre><code>import requests
import shutil
import os
import subprocess

def strip_end(text, suffix):
    if not text.endswith(suffix):
        return text
    return text[:len(text)-len(suffix)]

def download_file(url):
    cwd = os.getcwd()
    command = f"wget -O {cwd}/ts_files/{url.split('/')[-1]} {url}"
    subprocess.call(command, shell=True)

base_url = "https://stream.example.com/video/2021/example/720p_{}.ts"

if not os.path.exists('ts_files'):
    print('ts_file folder is not found, creating the folder.')
    os.makedirs('ts_files')

i = 1
while True:
    if len(str(i)) == 1:
        num = f"00{i}"
    elif len(str(i)) == 2:
        num = f"0{i}"
    else:
        num = str(i)
    url = base_url.replace("{}", num)
    r = requests.get(url, stream=True)
    print(f'downloading {i}')
    if r.status_code != 404:
        download_file(url)  # comment out this line to download ts files.
    else:
        print("404")
        break
    i = i + 1

cwd = os.getcwd()  # Get the current working directory (cwd)
TS_DIR = 'ts_files'
with open('merged.ts', 'wb') as merged:
    # sort so the segments are concatenated in order
    for ts_file in sorted(os.listdir(f'{cwd}/{TS_DIR}')):
        with open(f'{cwd}/{TS_DIR}/{ts_file}', 'rb') as mergefile:
            shutil.copyfileobj(mergefile, merged)
</code></pre>
<p>My problem is that when I try to concatenate all these files into one .ts file and then convert it to an MP4 with ffmpeg, I get an error:</p>
<pre><code>nima@funlife:~/ts_files$ cat ./*.ts > all.ts
nima@funlife:~/ts_files$ ffmpeg -i all.ts -acodec copy -vcodec copy all.mp4
ffmpeg version 5.0.1-3+b1 Copyright (c) 2000-2022 the FFmpeg developers
built with gcc 11 (Debian 11.3.0-4)
configuration: --prefix=/usr --extra-version=3+b1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libglslang --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librist --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --disable-sndio --enable-pocketsphinx --enable-librsvg --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libplacebo --enable-libx264 --enable-shared
libavutil 57. 17.100 / 57. 17.100
libavcodec 59. 18.100 / 59. 18.100
libavformat 59. 16.100 / 59. 16.100
libavdevice 59. 4.100 / 59. 4.100
libavfilter 8. 24.100 / 8. 24.100
libswscale 6. 4.100 / 6. 4.100
libswresample 4. 3.100 / 4. 3.100
libpostproc 56. 3.100 / 56. 3.100
all.ts: Invalid data found when processing input
</code></pre>
<p>The content of the .ts files looks like this:</p>
<pre><code> }��,.g���}��
�����c����Ww�c���c���eo��m�����ŧ� 䱉
�b(+��D�FG�zPe��7�&#bz�1ɶ��� C
�`,��>Ϲc4J��̀��T�I}�"��ކ�R�1��w͋� "� <�#B`ƪ�̸�co
�9���+��W�
P���N���w��T\5g��
\�E�N�E�v��͑4f��U�@]�ΩX�U�x�E��bwm=ְ�iA�����p���M�����\=�_�I3C�hL�h����0)�ο��*��`���eZ� �ؗ4To�0V��S,�+�>�8_]�W�lNJD�|7e�2s�1X)̃5�0h�������~8ߩg���?e��EK�>۷�L
��:6|������>\ �N�WW��,�w
bk��1?*��/��/�5��k����~�� Lޕ}�a���2�{��l��$�d=����g�{a2��L�����
jҫַ��ʿ�"1`ZZ.he)�=�x��E_4:Vg�����H=���x1�����}��W::y�
</code></pre>
<p>Are they encrypted or something?
I'm doing this on Debian 11.3.0-4 with Python 3.10.5.</p>
<p>Edit: Thanks Johnny, I saw this post: <a href="https://stackoverflow.com/questions/7333232/how-to-concatenate-two-mp4-files-using-ffmpeg">How to concatenate two MP4 files using FFmpeg?</a>
I tried it, but I get an error again!</p>
<pre><code>nima@funlife:~/ts_files$ ffmpeg -f concat -i file.txt -c copy output.ts
ffmpeg version 5.0.1-3+b1 Copyright (c) 2000-2022 the FFmpeg developers
built with gcc 11 (Debian 11.3.0-4)
configuration: --prefix=/usr --extra-version=3+b1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libglslang --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librist --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --disable-sndio --enable-pocketsphinx --enable-librsvg --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libplacebo --enable-libx264 --enable-shared
libavutil 57. 17.100 / 57. 17.100
libavcodec 59. 18.100 / 59. 18.100
libavformat 59. 16.100 / 59. 16.100
libavdevice 59. 4.100 / 59. 4.100
libavfilter 8. 24.100 / 8. 24.100
libswscale 6. 4.100 / 6. 4.100
libswresample 4. 3.100 / 4. 3.100
libpostproc 56. 3.100 / 56. 3.100
[concat @ 0x55932dbf2500] Impossible to open '001.ts'
file.txt: Invalid data found when processing input
</code></pre>
<p>I get the same error with <code>ffmpeg -f concat -safe 0 -i file.txt -c copy output.ts</code>.</p>
<p>(file.txt is the list of files.)</p>
|
<ol>
<li>A .ts file is not a plain text file, so you won't see anything readable when you open it with vim.</li>
<li>Check the m3u8 file: if there is a license URL, the .ts segments are encrypted. You can also try to open a .ts file with a media player such as VLC; it will play if it is not encrypted.</li>
</ol>
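<p>You can check the second point from Python as well. A sketch (the playlist URL below is hypothetical; substitute the m3u8 you originally downloaded the segment list from): per the HLS spec, an <code>#EXT-X-KEY</code> line in the playlist means the segments are encrypted.</p>
<pre><code>import requests

# hypothetical playlist URL; substitute the real m3u8 address
playlist_url = "https://stream.example.com/video/2021/example/playlist.m3u8"
m3u8_text = requests.get(playlist_url).text

# "#EXT-X-KEY" declares segment encryption (the key URI is on that line)
if "#EXT-X-KEY" in m3u8_text:
    print("Segments are encrypted")
else:
    print("No encryption declared in the playlist")
</code></pre>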
|
python|ffmpeg|blob|m3u8
| 0 |
1,908,857 | 62,182,115 |
How to find index in a string format for a particular value in a column of a pandas dataframe?
|
<p>I have a dataframe named Top15 like this:</p>
<pre><code>Country          % Renewable Energy   Rank
Brazil           69.65                15
Canada           61.95                56
China            19.75                32
Germany          17.90                2
Spain            37.97                11
France           17.02                12
United Kingdom   10.60                5
India            14.97                10
Iran             5.71                 21
......           ......               .....
United States    11.57                38
</code></pre>
<p>Here I want to find the country with the greatest % Renewable Energy. Country is the index of the dataframe, so I want the name of the index only, as a string. But when I use the following code:</p>
<pre><code>maximum = Top15['% Renewable'].max()
Top15[Top15['% Renewable'] == maximum].index    # (Method 1)
</code></pre>
<p>I get: <code>Index(['Brazil'], dtype='object', name='Country')</code></p>
<p>But when I use <code>Top15[Top15['% Renewable'] == maximum].index[0]</code> (Method 2), I get: <code>'Brazil'</code></p>
<p>So I am unable to understand the meaning of <code>index[0]</code> here, and how it gives only the index in string format (which is the output I want) compared to Method 1. Also, when I use <code>index[1]</code>, I get the error:
<code>IndexError: index 1 is out of bounds for axis 0 with size 1</code></p>
<p>Can somebody please clarify the meaning of <code>index[0]</code> here?</p>
|
<p>When you are calling <code>.index</code> it returns you the index object, the only element of which, <code>'Brazil'</code>, is accessible via indexing the index object at position 0, so <code>.index[0]</code>.</p>
<p>When you are calling <code>.index[1]</code> you are trying to access the second element of the index object while it only contains one element, hence the error.</p>
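<p>As an aside, pandas also has a built-in shortcut that returns the index label of the maximum directly, skipping the boolean mask:</p>
<pre><code>Top15['% Renewable'].idxmax()  # returns 'Brazil' as a plain string
</code></pre>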
|
python|pandas|jupyter-notebook|data-science
| 0 |
1,908,858 | 35,439,255 |
python 2.7 mass zip file extraction to target directory
|
<p>I'm trying to iterate through a folder of zipped files and extract them to a target directory. My code is:</p>
<pre><code>import os
import zipfile

def mass_extract():
    source_directory = raw_input("Where are the zips? ")
    if not os.path.exists(source_directory):
        print "Sorry, that folder doesn't seem to exist."
        source_directory = raw_input("Where are the zips? ")
    target_directory = raw_input("To where do you want to extract the files? ")
    if not os.path.exists(target_directory):
        os.mkdir(target_directory)
    for path, directory, filename in os.walk(source_directory):
        zip_file = zipfile.ZipFile(filename, 'w')
        zipfile.extract(zip_file, target_directory)
        zip_file.close()
    print "Done."
</code></pre>
<p>I'm getting two errors here:</p>
<pre><code>AttributeError: 'module' object has no attribute 'extract'
Exception AttributeError:"'list' object has no attribute 'tell'" in <bound method ZipFile.__del__ of <zipfile.ZipFile object at 0xb701d52c>> ignored
</code></pre>
<p>Any ideas what's wrong?</p>
|
<p>Try changing <code>zipfile.extract</code> to <code>zip_file.extractall</code></p>
<p>Edit: Back from mobile, here is some cleaner code. I noticed the initial code would not run as-is, because 'filename' is actually a list of files for that directory. Also, opening it as <code>write</code> aka <code>w</code> will just overwrite your existing zip files, and you don't want that.</p>
<pre><code>for path, directory, filenames in os.walk(source_directory):
    for each_file in filenames:
        file_path = os.path.join(path, each_file)
        if os.path.splitext(file_path)[1] == ".zip":  # Make sure it's a zip file
            with zipfile.ZipFile(file_path) as zip_file:
                zip_file.extractall(path=target_directory)
</code></pre>
<p><a href="https://github.com/cdgriffith/Reusables/blob/master/reusables/reusables.py#L506" rel="nofollow">Here</a> is an example of zipfile code in use I did a while ago. </p>
|
python|zip|os.walk
| 3 |
1,908,859 | 35,528,398 |
Weird json value urllib python
|
<p>I'm trying to manipulate a dynamic JSON from this site:</p>
<pre><code>http://esaj.tjsc.jus.br/cposgtj/imagemCaptcha.do
</code></pre>
<p>It has 3 elements: <code>imagem</code>, a base64 image; <code>labelValorCaptcha</code>, just a message; and <code>uuidCaptcha</code>, a value to pass as a parameter to play a sound at the link below:</p>
<pre><code>http://esaj.tjsc.jus.br/cposgtj/somCaptcha.do?timestamp=1455996420264&uuidCaptcha=sajcaptcha_e7b072e1fce5493cbdc46c9e4738ab8a
</code></pre>
<p>When I open the first site in a browser and put the uuidCaptcha into the second link after the equals sign ("...uuidCaptcha="), the sound plays normally. I wrote some simple code to catch these elements.</p>
<pre><code>import urllib, json
url = "http://esaj.tjsc.jus.br/cposgtj/imagemCaptcha.do"
response = urllib.urlopen(url)
data = json.loads(response.read())
urlSound = "http://esaj.tjsc.jus.br/cposgtj/somCaptcha.do?timestamp=1455996420264&uuidCaptcha="
print urlSound + data['uuidCaptcha']
</code></pre>
<p>But I don't know what's happening: the captured value of the <code>uuidCaptcha</code> doesn't work, and it opens an error web page.</p>
<p>Does anyone know why?
Thanks!</p>
|
<p>As @Charlie Harding said, the best way is to download the page and get the JSON values from it, because this JSON is dynamic and needs an open web session to exist.</p>
<p>More info <a href="https://stackoverflow.com/questions/802134/changing-user-agent-on-urllib2-urlopen">here</a>.</p>
|
python|json|urllib
| 0 |
1,908,860 | 58,810,911 |
Is there a simpler way to write this code involving multiple comboboxes and nested dictionaries, and also avoid the KeyError?
|
<p>I am rather new to Python, and after spending many hours I managed to get this working without having to ask a question. However, it definitely seems like it could be rewritten better, in a way that gets rid of this <code>KeyError: ''</code>. The key error appears (and stalls the function) until I have chosen an item from each combobox; then it resumes, due to my math in the function. I can't figure out another way to write it that would resolve that issue and make the code more compact. I am sure there is a way, but I could definitely use a pointer in the right direction, thanks!
Here is a simpler demonstration version of my program:</p>
<pre><code>import tkinter as tk
import tkinter.ttk as ttk
#DICTIONARIES#
weapondict = {"Bronze Sword": {"atk":32, "def":4}, "Iron Sword": {"atk":47, "def":5}}
shielddict = {"Bronze Shield": {"atk":3, "def":10}, "Iron Shield": {"atk":5, "def":27}}
#FUNCTION#
def selected(func):
    a = weapondict[weaponvar.get()]["atk"]
    b = shielddict[shieldvar.get()]["atk"]
    atkvar.set(a + b)
    c = weapondict[weaponvar.get()]["def"]
    d = shielddict[shieldvar.get()]["def"]
    defvar.set(c + d)
#WINDOWLOOP#
root = tk.Tk()
root.geometry("250x125")
#VARIABLES#
weaponvar = tk.StringVar()
shieldvar = tk.StringVar()
atkvar = tk.IntVar()
defvar = tk.IntVar()
#COMBOBOXES#
weaponbox = ttk.Combobox(root, height=5, state="readonly", values=list(weapondict.keys()), textvariable=weaponvar)
weaponbox.place(x=10, y=10, width=130)
weaponbox.bind('<<ComboboxSelected>>', func=selected)
shieldbox = ttk.Combobox(root, height=5, state="readonly", values=list(shielddict.keys()), textvariable=shieldvar)
shieldbox.place(x=10, y=70, width=130)
shieldbox.bind('<<ComboboxSelected>>', func=selected)
#LABELS#
atklabel = tk.Label(root, text='Atk Bonus:')
atklabel.place(x=150, y=10, width=70, height=20)
deflabel = tk.Label(root, text='Def Bonus:')
deflabel.place(x=150, y=70, width=70, height=20)
atktotal = tk.Label(root, textvariable=atkvar)
atktotal.place(x=155, y=30, width=100, height=20)
deftotal = tk.Label(root, textvariable=defvar)
deftotal.place(x=155, y=90, width=100, height=20)
root.mainloop()
</code></pre>
<p>The goal is simply to take a selection from each combobox and take it's specified value, add that integer to the other one to give a total, while resolving the keyerror issue and making the code more readable and easier to edit. I want to put multiple boxes, each with multiple items and it will get very messy with this approach, thank you in advance!</p>
|
<p>The problem is the <code>tk.StringVar</code> <code>get()</code> method calls will return <code>""</code> when there's nothing in them. A simple fix would be to add an entry to both of the dictionaries that would match this "empty" key and give them some associated values to be used when nothing is selected (i.e. zero) in the corresponding <code>Combobox</code>:</p>
<pre><code>#DICTIONARIES#
weapondict = {"": {"atk":0, "def":0}, # Use when nothing is selected.
"Bronze Sword": {"atk":32, "def":4},
"Iron Sword": {"atk":47, "def":5}}
shielddict = {"": {"atk":0, "def":0}, # Use when nothing is selected.
"Bronze Shield": {"atk":3, "def":10},
"Iron Shield": {"atk":5, "def":27}}
</code></pre>
<p>An alternative would be to modify the <code>selected()</code> function to check whether the values returned by <code>weaponvar.get()</code> and <code>shieldvar.get()</code> were keys in the corresponding dictionary before trying to use them. If they aren't, then some default values for <code>a</code>, <code>b</code>, <code>c</code>, or <code>d</code> could then be provided.</p>
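<p>For reference, a sketch of that alternative using <code>dict.get</code> with a default, so an unselected combobox simply contributes zero:</p>
<pre><code>def selected(event):
    default = {"atk": 0, "def": 0}
    weapon = weapondict.get(weaponvar.get(), default)
    shield = shielddict.get(shieldvar.get(), default)
    atkvar.set(weapon["atk"] + shield["atk"])
    defvar.set(weapon["def"] + shield["def"])
</code></pre>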
<p>That's completely feasible, but changing the two dictionaries seems a lot easier IMO.</p>
<p>Also note how the two nested dictionaries are defined. I find doing it on multiple lines to be a lot more readable.</p>
|
python|dictionary|combobox|ttk|keyerror
| 0 |
1,908,861 | 59,035,240 |
Add UNIX timestamp to start/end of webcam recording
|
<p>I am using OpenCV in Python to record data from a webcam. The videos are recorded for a fixed number of frames so they all have the same length. I would like to get the exact UNIX timestamp for the start and end of the recording. </p>
<p>Below is my code for an example video of 5s (30 fps, so 150 frames in total).</p>
<pre><code>import cv2
import time

video_capture_0 = cv2.VideoCapture(1)

#define frame height and width
frame_width0 = int(video_capture_0.get(3))
frame_height0 = int(video_capture_0.get(4))

# create output file
out0 = cv2.VideoWriter('test.avi', cv2.VideoWriter_fourcc(str('X'),str('V'),str('I'),str('D')), 30.0, (frame_width0,frame_height0))

counter = 0
start_time = time.time()
while True:
    counter += 1
    # Capture frame-by-frame
    ret0, frame0 = video_capture_0.read()
    if (ret0):
        #write video
        out0.write(frame0)
        # show video
        cv2.imshow('Cam 0', frame0)
    if (cv2.waitKey(1) & 0xFF == ord('q')) or counter == 150:
        end_time = time.time()
        break

# When everything is done, release the capture
video_capture_0.release()
out0.release()
cv2.destroyAllWindows()  # closes all frames

time_passed = end_time - start_time
print('Start time : ', start_time, '\n')
print('End time : ', end_time, '\n')
print('Time passed : ', time_passed, '\n')
</code></pre>
<p>Implementing the time.time() component however always gives me a delay of roughly 0.23s. </p>
<pre><code>Start time : 1574694237.1550183
End time : 1574694242.387653
Time passed : 5.232634782791138
</code></pre>
<p>Does anybody know why this is and how I could improve this?</p>
|
<p>If you pre-run <code>ret0, frame0 = video_capture_0.read()</code> once before <code>start_time = time.time()</code> to activate the camera, and comment out <code>cv2.imshow('Cam 0', frame0)</code> and <code>(cv2.waitKey(1) & 0xFF == ord('q'))</code>, then the result will be very close to 5.0.</p>
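<p>A minimal sketch of the measuring loop with those two changes (same variable names as the question):</p>
<pre><code># Warm-up: grab one frame before starting the clock, so camera
# initialization time is not counted in the measurement
ret0, frame0 = video_capture_0.read()

counter = 0
start_time = time.time()
while True:
    counter += 1
    ret0, frame0 = video_capture_0.read()
    if ret0:
        out0.write(frame0)
    # cv2.imshow / cv2.waitKey removed: display adds per-frame latency
    if counter == 150:
        end_time = time.time()
        break
</code></pre>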
|
python|opencv|unix|timestamp
| 0 |
1,908,862 | 58,672,294 |
scapy did not send packets
|
<p>I'm trying to create a Scapy tool to test sniffing; this is my code:</p>
<pre><code>def scan(ip):
    arp_req = sc.ARP(pdst=ip)
    bc = sc.Ether(dst="ff:ff:ff:ff:ff:ff")
    arp_req_bc = bc/arp_req
    answer = sc.srp(arp_req_bc, timeout=1, verbose=True)[0]
    print("IP\t\t\tMAC Address\n-----------------------")
    for element in answer:
        print(element[1].psrc + "\t\t" + element[1].hwsrc)

scan("192.168.43.1/24")
</code></pre>
<p>the output is :</p>
<pre><code> "Sniffing and sending packets is not available at layer 2: "
RuntimeError: Sniffing and sending packets is not available at layer 2: winpcap is not installed. You may use conf.L3socket orconf.L3socket6 to access layer 3
</code></pre>
|
<p>Download and install WinPcap from this link:
<a href="https://www.winpcap.org/install/" rel="nofollow noreferrer">https://www.winpcap.org/install/</a>
and run your program again.</p>
<p>Note: you have to close and restart the terminal window if you are using a command-line terminal to run your program.</p>
|
python|python-3.x|scapy
| 1 |
1,908,863 | 73,433,750 |
How Do I Develop a negative film image using python
|
<p>I have tried inverting a negative film image's colors with the <code>bitwise_not()</code> function in Python, but the result has this blue tint. I would like to know how I could develop a negative film image so that it looks somewhat good. Here's the outcome of what I did. (I just cropped the negative image for a new test I was doing, so don't mind that.)</p>
<p><a href="https://i.stack.imgur.com/Ljy2L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ljy2L.png" alt="enter image description here" /></a></p>
|
<p>If you don't use exact maximum and minimum, but <em>1st and 99th percentile</em>, or something nearby (0.1%?), you'll get some nicer contrast. It'll cut away outliers due to noise, compression, etc.</p>
<p>Additionally, you may want to mess with gamma, or scale the values linearly, to achieve <em>white balance</em>.</p>
<p>I'll apply a "gray world assumption" and scale each plane so the mean is gray. I'll also mess with gamma, but that's just messing around.</p>
<p>And... all of that completely ignores gamma mapping, both of the "negative" and of the outputs.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import cv2 as cv
import skimage
im = cv.imread("negative.png")
(bneg,gneg,rneg) = cv.split(im)
def stretch(plane):
    # take 1st and 99th percentile
    imin = np.percentile(plane, 1)
    imax = np.percentile(plane, 99)
    # stretch the image
    plane = (plane - imin) / (imax - imin)
    return plane
</code></pre>
<pre class="lang-py prettyprint-override"><code>b = 1 - stretch(bneg)
g = 1 - stretch(gneg)
r = 1 - stretch(rneg)
bgr = cv.merge([b,g,r])
cv.imwrite("positive.png", bgr * 255)
</code></pre>
<p><a href="https://i.stack.imgur.com/cx7k7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/cx7k7.png" alt="plain" /></a></p>
<pre class="lang-py prettyprint-override"><code>b = 1 - stretch(bneg)
g = 1 - stretch(gneg)
r = 1 - stretch(rneg)
# gray world
b *= 0.5 / b.mean()
g *= 0.5 / g.mean()
r *= 0.5 / r.mean()
bgr = cv.merge([b,g,r])
cv.imwrite("positive_grayworld.png", bgr * 255)
</code></pre>
<p><a href="https://i.stack.imgur.com/FUfu0.png" rel="noreferrer"><img src="https://i.stack.imgur.com/FUfu0.png" alt="gray world" /></a></p>
<pre class="lang-py prettyprint-override"><code>b = 1 - np.clip(stretch(bneg), 0, 1)
g = 1 - np.clip(stretch(gneg), 0, 1)
r = 1 - np.clip(stretch(rneg), 0, 1)
# goes in the right direction
b = skimage.exposure.adjust_gamma(b, gamma=b.mean()/0.5)
g = skimage.exposure.adjust_gamma(g, gamma=g.mean()/0.5)
r = skimage.exposure.adjust_gamma(r, gamma=r.mean()/0.5)
bgr = cv.merge([b,g,r])
cv.imwrite("positive_gamma.png", bgr * 255)
</code></pre>
<p><a href="https://i.stack.imgur.com/PHDlU.png" rel="noreferrer"><img src="https://i.stack.imgur.com/PHDlU.png" alt="gamma" /></a></p>
<p>Here's what happens when gamma is applied to the inverted picture... a reasonably tolerable transfer function results from applying the same factor twice, instead of applying its inverse.</p>
<p><a href="https://www.wolframalpha.com/input?i=plot+1-%281+-+%28x%5E2.2%29%29+%5E+%282.2%29+for+x+from+0+to+1" rel="noreferrer"><img src="https://i.stack.imgur.com/QlHuL.png" alt="screenshot" /></a></p>
<p><a href="https://www.wolframalpha.com/input?i=plot+1-%281+-+%28x%5E%281%2F2.2%29%29%29+%5E+%281%2F2.2%29+for+x+from+0+to+1" rel="noreferrer"><img src="https://i.stack.imgur.com/fSbf8.png" alt="screenshot" /></a></p>
<p>Trying to "undo" the gamma while ignoring that the values were inverted... causes serious distortions:</p>
<p><a href="https://www.wolframalpha.com/input?i=plot+1-%281+-+%28x%5E2.2%29%29+%5E+%281%2F2.2%29+for+x+from+0+to+1" rel="noreferrer"><img src="https://i.stack.imgur.com/o3yJu.png" alt="screenshot" /></a></p>
<p><a href="https://www.wolframalpha.com/input?i=plot+1-%281+-+%28x%5E%281%2F2.2%29%29%29+%5E+%282.2%29+for+x+from+0+to+1" rel="noreferrer"><img src="https://i.stack.imgur.com/BsoNy.png" alt="screenshot" /></a></p>
<p>And the min/max values for contrast stretching also affect the whole thing.</p>
<p>A simple photo of a negative simply won't do. It'll include stray light that offsets the black point, at the very least. You need a proper scan of the negative.</p>
|
python|opencv|image-processing
| 7 |
1,908,864 | 31,233,610 |
Import Error: No module name
|
<p>I am facing an issue while importing a class.
The folder structure is shown below:</p>
<pre><code>python_space
|_ __init__.py
|_ ds_Tut
   |_ __init__.py
   |_ stacks
      |_ __init__.py
      |_ stacks.py (contains class Stack)
   |_ trees
      |_ __init__.py
      |_ parseTree.py (wants to import Stack class from above)
</code></pre>
<p>Used the following code to import:</p>
<pre><code>from stacks.stacks import Stack
</code></pre>
<p>Getting the following error:</p>
<pre><code>"ImportError: No module named stacks.stacks"
</code></pre>
|
<p><code>stacks</code> is inside the <code>ds_Tut</code> package. Does this work?</p>
<pre><code>from ds_Tut.stacks.stacks import Stack
</code></pre>
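<p>If <code>parseTree.py</code> runs as part of the package rather than as a top-level script, a relative import is a hedged alternative (assuming the layout shown in the question):</p>
<pre><code>from ..stacks.stacks import Stack
</code></pre>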
|
python-3.x
| 0 |
1,908,865 | 49,316,425 |
How to find column number when an element is found in Pandas
|
<p>I have read an Excel file into a pandas data frame. I am iterating over the indexed column, comparing each element of the row with some value. When I find a match, I need to find the column number in which the match occurred.</p>
<p>Example:</p>
<pre><code>df = pd.DataFrame({'A': [0, 0, 2, 1], 'B': [1,2,3,4], 'C' : [5,7,2,5]})
print df
A B C
0 0 1 5
1 0 2 7
2 2 3 2
3 1 4 5
</code></pre>
<p>If I find a match for element 3, I should be able to print 'B' along with B's column number, i.e. 1.
How can I achieve that?
Thanks!</p>
|
<p>Use <code>np.where</code>. It'll give you the row and corresponding column positions for all matches</p>
<pre><code>i, j = np.where(df.values == 3)
j
array([1])
</code></pre>
<p>If you want the column labels</p>
<pre><code>df.columns[j]
Index(['B'], dtype='object')
</code></pre>
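<p>If you want the label and the column number side by side, a small sketch combining both results:</p>
<pre><code>list(zip(df.columns[j], j))
# [('B', 1)]
</code></pre>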
|
python|pandas|dataframe
| 4 |
1,908,866 | 60,104,495 |
ModuleNotFoundError: No module named 'serial'
|
<p>This is Arduino Python.</p>
<p>The Python script is giving me an error with the following line:</p>
<p><code>import serial</code></p>
<p>The error is:</p>
<p><code>ModuleNotFoundError: No module named 'serial'</code></p>
<p>I want to understand the relationship between the software libraries. Where is the serial library?
I think the library is installed, but the present Python script is not finding it.<br>
john</p>
|
<p>After checking to see if it is there like @araldo-van-de-kraats says (<code>pip freeze</code>), it looks like you probably want to make sure you've installed pySerial:</p>
<pre><code>pip install pyserial
</code></pre>
<p>See the documentation <a href="https://pythonhosted.org/pyserial/pyserial.html#installation" rel="nofollow noreferrer">here</a>.</p>
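<p>Once installed, a minimal sanity check may help; note the port name and baud rate below are placeholders, so substitute whatever your Arduino actually enumerates as:</p>
<pre><code>import serial

ser = serial.Serial('COM3', 9600)  # placeholder port; e.g. '/dev/ttyUSB0' on Linux
print(ser.name)
ser.close()
</code></pre>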
|
python|arduino|serial-port
| 3 |
1,908,867 | 59,978,521 |
How to use Replace method in xlwings
|
<p>I'm working on data collection using Python and xlwings.<br>
I'd like to replace one formula with another, but I can't find the correct method or any tips for it.
I know that I could combine VBA and Python, which would be simpler to write, but I'd rather use only Python for simplicity and efficiency.</p>
<p>Problem:
I'm trying to edit an xlsb file, but openpyxl doesn't support xlsb, so I'd like to work around that using xlwings.<br>
Ex. replace "AAA" -> "BBB"</p>
<p>If you know some methods to replace the formula in the sheet of Excel(xlsb) using python, it would be appreciated if you teach me how to do it.</p>
<p>Thank you.</p>
|
<p>Try this; <code>.api</code> drops down to the native Excel object, whose <code>Replace</code> method does the work:</p>
<pre><code>sheet.range("A1:A9").api.Replace("AAA", "BBB")
</code></pre>
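<p>Put in context, a hedged sketch of a full round trip on an xlsb file (the file name is a placeholder):</p>
<pre><code>import xlwings as xw

wb = xw.Book('workbook.xlsb')  # xlwings drives Excel itself, so xlsb is fine
sht = wb.sheets[0]
# .api exposes the underlying COM object, so the native Range.Replace is available
sht.used_range.api.Replace("AAA", "BBB")
wb.save()
</code></pre>
<p>The <code>.api</code> accessor is the escape hatch to the Excel object model, which is why a VBA-level method like <code>Replace</code> works here even though xlwings does not wrap it directly.</p>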
|
python|excel|excel-formula|xlwings
| 0 |
1,908,868 | 60,051,586 |
Paginator is not showing appropriate result
|
<p>I want to show only 5 results, with the next 5 on another page.
My view.py:</p>
<pre><code>class QuestionList(APIView):
def get(self, request, *args, **kwargs):
res = Question.objects.all()
paginator = Paginator(res, 5)
serializer = QuestionSerializers(res, many=True)
return Response({"Section": serializer.data})
</code></pre>
<p>How do I make my paginator work here?</p>
|
<p>Django REST framework provides its own pagination machinery. You should use that instead of building your own. You can find its docs <a href="https://www.django-rest-framework.org/api-guide/pagination/" rel="nofollow noreferrer">here</a>.</p>
<p>You just have to add the following to your settings:</p>
<pre><code>REST_FRAMEWORK = {
'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.LimitOffsetPagination',
'PAGE_SIZE': 100
}
</code></pre>
<p>Data in a <code>ListAPIView</code> will be paginated automatically with this. To keep your current view, note that <code>paginate_queryset</code> and <code>get_paginated_response</code> come from <code>GenericAPIView</code>, not the plain <code>APIView</code>, so subclass that instead:</p>
<pre><code>from rest_framework import generics

class QuestionList(generics.GenericAPIView):
    def get(self, request, *args, **kwargs):
        res = Question.objects.all()
        page = self.paginate_queryset(res)
        serialized = QuestionSerializers(page, many=True)
        return self.get_paginated_response(serialized.data)
</code></pre>
|
python|django|django-models|django-rest-framework|django-views
| 1 |
1,908,869 | 2,642,451 |
Translate Java to Python -- signing strings with PEM certificate files
|
<p>I'm trying to translate the follow Java into its Python equivalent.</p>
<pre><code> // certificate is contents of https://fps.sandbox.amazonaws.com/certs/090909/PKICert.pem
// signature is a string that I need to verify.
CertificateFactory factory = CertificateFactory.getInstance("X.509");
X509Certificate x509Certificate =
(X509Certificate) factory.generateCertificate(new ByteArrayInputStream(certificate.getBytes()));
Signature signatureInstance = Signature.getInstance(signatureAlgorithm);
signatureInstance.initVerify(x509Certificate.getPublicKey());
signatureInstance.update(stringToSign.getBytes(UTF_8_Encoding));
return signatureInstance.verify(Base64.decodeBase64(signature.getBytes()));
</code></pre>
<p>This is for the PKI signature verification used by AWS FPS. <a href="http://docs.amazonwebservices.com/AmazonFPS/latest/FPSAccountManagementGuide/VerifyingSignature.html" rel="nofollow noreferrer">http://docs.amazonwebservices.com/AmazonFPS/latest/FPSAccountManagementGuide/VerifyingSignature.html</a></p>
<p>Thanks for your help!</p>
|
<p>I looked into doing this with pyCrypto and keyczar, but the problem is that neither has the ability to parse X509 certificates (keyczar has keyczar.util.ParseX509(), but it is limited and doesn't work for the AWS cert or, I'm guessing, any real-world cert).</p>
<p>I believe M2Crypto works though. See the following code snippet, which needs a real signature and plaintext filled in to really test.</p>
<pre><code>from M2Crypto import X509
cert = X509.load_cert("PKICert.pem")
pub_key = cert.get_pubkey()
plaintext = "YYY" # Put plaintext message here
signature = "XXX" # Put signature of plaintext here
pub_key.verify_init()
pub_key.verify_update(plaintext)
if not pub_key.verify_final(signature):
print "Signature failed"
</code></pre>
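<p>One detail from the Java version worth mirroring: the signature there is base64-decoded before verification. A hedged addition, where <code>signature_b64</code> is a placeholder name for the base64 string you receive from AWS:</p>
<pre><code>import base64

signature = base64.b64decode(signature_b64)  # signature_b64 is hypothetical; your raw AWS value
</code></pre>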
|
java|python|x509certificate|frame-rate|pem
| 2 |
1,908,870 | 6,207,557 |
How to reduce queries in django model has_relation method?
|
<p>Here are two example Django models. Pay special attention to the has_pet method.</p>
<pre><code>class Person(models.Model):
name = models.CharField(max_length=255)
def has_pet(self):
return bool(self.pets.all().only('id'))
class Pet(models.Model):
name = models.CharField(max_length=255)
owner = models.ForeignKey(Person, blank=True, null=True, related_name="pets")
</code></pre>
<p>The problem here is that the has_pet method always generates a query. If you do something like this.</p>
<pre><code>p = Person.objects.get(id=1)
if p.has_pet():
...
</code></pre>
<p>Then you will actually be doing an extra query just to check if one person has a pet. That is a big problem if you have to check multiple people. It will also generate queries if used in templates like this.</p>
<pre><code>{% for person in persons %}
{% if person.has_pet %}
{{ person.name }} owns a pet
{% else %}
{{ person.name }} is petless
{% endif %}
{% endfor %}
</code></pre>
<p>This example will actually perform an extra query for every person in the persons queryset while it is rendering the template.</p>
<p>Is there a way to do this with just one query, or at least doing less than one extra query per person? Maybe there is another way to design this to avoid the problem altogether.</p>
<p>I thought of adding a BooleanField to Person, and having that field be updated whenever a pet is saved or deleted. Is that really the right way to go?</p>
<p>Also, I already have memcached setup properly, so those queries only happen if the results are not already cached. I'm looking to remove the queries in the first place for even greater optimization.</p>
|
<p>If you want a list of all the people with pets you can do that in a single query:</p>
<pre><code>Person.objects.exclude(pets=None)
</code></pre>
<p>Sounds like you want to iterate over a single list of people, using annotations would probably make sense:</p>
<pre><code>from django.db.models import Count

for person in Person.objects.annotate(has_pet=Count('pets')):
    if person.has_pet:  # a count > 0 is truthy, and no extra query is issued
</code></pre>
<p>It'd be nice if Django had an <code>Exists</code> aggregate but it doesn't, and I don't know how difficult it would be to add one. You should profile of course, and figure out if this works for you.</p>
<p>Personally, I'd probably just store <code>has_pets</code> as a boolean on the model, it's probably the most efficient approach.</p>
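<p>A hedged sketch of that stored-boolean approach, assuming you add a <code>has_pets = models.BooleanField(default=False)</code> field to <code>Person</code> and keep it in sync with signals:</p>
<pre><code>from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver

@receiver(post_save, sender=Pet)
@receiver(post_delete, sender=Pet)
def update_has_pets(sender, instance, **kwargs):
    owner = instance.owner
    if owner is not None:
        owner.has_pets = owner.pets.exists()  # one cheap EXISTS query per pet change
        owner.save()
</code></pre>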
|
python|django|django-models|django-queryset
| 4 |
1,908,871 | 5,779,916 |
Submitting empty form and weird output
|
<p>Here's my form : </p>
<pre><code><form action = "/search/" method = "get">
<input type = "text" name = "q">
<input type = "submit" value = "Search">
</form>
</code></pre>
<p>And here's my view:</p>
<pre><code>def search(request):
if 'q' in request.GET:
message = 'You searched for: %r' % request.GET['q']
else:
message = 'You submitted an empty form :('
return HttpResponse(message)
</code></pre>
<p>When I try to input something everything works fine, except for weird u' ' thing. For example when I enter asdasda I get the output <code>You searched for: u'asdsa'</code>. Another problem is that when I submit an empty form the output is simply <code>u''</code>, when it should be "You submitted an empty form :(". I'm reading "The Django Book", the 1.x.x version and this was an example..</p>
|
<p>The "weird u thing" is a unicode string. You can read about it here: <a href="http://docs.python.org/tutorial/introduction.html#unicode-strings" rel="nofollow">http://docs.python.org/tutorial/introduction.html#unicode-strings</a></p>
<p>And I'm guessing since the user pressed submit, you get a request that has an empty q value (u'') since the user didn't enter anything. That makes sense, right? You should change your if statement to check for this empty unicode string.</p>
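<p>Putting both fixes together, a minimal revision of the view using <code>%s</code> and treating the empty string as "no query":</p>
<pre><code>def search(request):
    q = request.GET.get('q', '')
    if q:
        message = 'You searched for: %s' % q
    else:
        message = 'You submitted an empty form :('
    return HttpResponse(message)
</code></pre>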
|
python|django|forms
| 3 |
1,908,872 | 67,864,616 |
replacing list items in reverse order and skipping every other item python
|
<p>I am making a program to check whether a card number is potentially valid using the Luhn algorithm.</p>
<pre><code>num = "79927398713" #example num
digits = [int(x) for x in num]
reverse = digits[1:][::2][::-1] #step 1: start from rightmost digit, skip first, skip every other
count = 0
digitsum = 0
print(reverse) #output here is: [1, 8, 3, 2, 9]
for x in (reverse):
reverse[count] *= 2
if reverse[count] > 9:
for x in str(reverse[count]): #multiply each digit in step 1 by 2, if > 9, add digits to make single-digit number
digitsum += int(x)
reverse[count] = digitsum
count += 1
digitsum = 0
count = 0
print(reverse) #output here is [2, 7, 6, 4, 9]
</code></pre>
<p>Basically, I want to put [2, 7, 6, 4, 9] back into the corresponding places in the list <code>digits</code>. It would look like this (changed numbers in asterisks):</p>
<p><code>[7, **9**, 9, **4**, 7, **6**, 9, **7**, 7, **2**, 3]</code></p>
<p>The problem is, I would have to read <code>digits</code> backwards, skipping the first (technically last) element and every other element from there, replacing the values each time.</p>
<p>Am I going about this the wrong way/making it too hard on myself? Or is there a way to index backwards, skipping the first (technically last) element, and skipping every other element?</p>
|
<p>You can do this with simple indexing</p>
<p>Once you have the variable <code>reverse</code>, you can index on the left hand side:</p>
<pre class="lang-py prettyprint-override"><code># reversed is [2, 7, 6, 4, 9] here
digits[1::2] = reversed(reverse) # will place 9,4,6,7,2 in your example
</code></pre>
<p>Note, you can use this trick too for your line where you initialize reverse</p>
<pre class="lang-py prettyprint-override"><code>reverse = digits[1::2][::-1]
</code></pre>
<p>I think you could even use:</p>
<pre class="lang-py prettyprint-override"><code>reverse = digits[-1 - len(digits) % 2::-2]
</code></pre>
<p>which should be even more efficient</p>
<h3>Edit</h3>
<p>Running <code>timeit</code>, the last solution of <code>digits[-1 - len(digits) % 2::-2]</code> on an array of size 10,000 was 3.6 times faster than the original, I'd highly suggest using this</p>
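<p>A quick check with the numbers from the question confirms the slice assignment lands the values exactly where the asterisks are:</p>
<pre class="lang-py prettyprint-override"><code>digits = [7, 9, 9, 2, 7, 3, 9, 8, 7, 1, 3]
reverse = [2, 7, 6, 4, 9]
digits[1::2] = reverse[::-1]
print(digits)  # [7, 9, 9, 4, 7, 6, 9, 7, 7, 2, 3]
</code></pre>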
|
python|loops|iteration|reverse|luhn
| 1 |
1,908,873 | 67,673,276 |
import psycopg2 : SystemError: initialization of _psycopg raised unreported exception\r on 2nd URL hit fails
|
<p>I have two URLs for my application that refer to the same server, e.g. localhost:8080 and abc.com. The pattern is that after restarting the Apache server, only whichever URL I hit first works; the second URL does not. Ideally both should work, since both refer to the same server.</p>
<p>I am getting the error below with Python 3.9. After following <a href="https://github.com/psycopg/psycopg2/issues/1144" rel="nofollow noreferrer">this</a>, I uninstalled and reinstalled mod_wsgi 4.7.1 using the Python 3.9 pip and updated the Apache httpd.conf with the output of <code>mod_wsgi-express.exe module-config</code>.</p>
<pre><code>[Mon May 24 04:31:30.115963 2021] [mpm_winnt:notice] [pid 8560:tid 104] AH00422: Parent: Received shutdown signal -- Shutting down the server.
[Mon May 24 04:31:32.121364 2021] [mpm_winnt:notice] [pid 9452:tid 904] AH00364: Child: All worker threads have exited.
[Mon May 24 04:31:32.303380 2021] [mpm_winnt:notice] [pid 8560:tid 104] AH00430: Parent: Child process 9452 exited successfully.
[Mon May 24 04:31:33.770657 2021] [mpm_winnt:notice] [pid 612:tid 908] AH00455: Apache/2.4.46 (Win64) mod_wsgi/4.7.1 Python/3.9 configured -- resuming normal operations
[Mon May 24 04:31:33.771659 2021] [mpm_winnt:notice] [pid 612:tid 908] AH00456: Apache Lounge VS16 Server built: Mar 27 2021 11:42:37
[Mon May 24 04:31:33.771659 2021] [core:notice] [pid 612:tid 908] AH00094: Command line: 'F:\\Program Files\\NGDM\\Apache24\\bin\\httpd.exe -d F:/Program Files/NGDM/Apache24'
[Mon May 24 04:31:33.781650 2021] [mpm_winnt:notice] [pid 612:tid 908] AH00418: Parent: Created child process 10632
[Mon May 24 04:31:34.815778 2021] [mpm_winnt:notice] [pid 10632:tid 872] AH00354: Child: Starting 64 worker threads.
[Mon May 24 04:32:18.161398 2021] [wsgi:error] [pid 10632:tid 1440] RUN_ID==>51683<==\r
[Mon May 24 04:32:18.163398 2021] [wsgi:error] [pid 10632:tid 1440] RUN_ID==>51683<==\r
C:\A\34\s\Modules\_decimal\libmpdec\context.c:57: warning: mpd_setminalloc: ignoring request to set MPD_MINALLOC a second time
[Mon May 24 04:32:29.651626 2021] [wsgi:error] [pid 10632:tid 1440] [client 148.173.41.4:61202] mod_wsgi (pid=10632): Failed to exec Python script file 'F:/XYZ/test_wsgi.wsgi'.
[Mon May 24 04:32:29.651626 2021] [wsgi:error] [pid 10632:tid 1440] [client 148.173.41.4:61202] mod_wsgi (pid=10632): Exception occurred processing WSGI script 'F:/XYZ/test_wsgi.wsgi'.
[Mon May 24 04:32:29.661629 2021] [wsgi:error] [pid 10632:tid 1440] [client 148.173.41.4:61202] Traceback (most recent call last):\r
[Mon May 24 04:32:29.661629 2021] [wsgi:error] [pid 10632:tid 1440] [client 148.173.41.4:61202] File "F:/XYZ/test_wsgi.wsgi", line 6, in <module>\r
[Mon May 24 04:32:29.661629 2021] [wsgi:error] [pid 10632:tid 1440] [client 148.173.41.4:61202] from app_e1 import app as application\r
[Mon May 24 04:32:29.661629 2021] [wsgi:error] [pid 10632:tid 1440] [client 148.173.41.4:61202] File "f:/XYZ\\app_e1.py", line 6, in <module>\r
[Mon May 24 04:32:29.661629 2021] [wsgi:error] [pid 10632:tid 1440] [client 148.173.41.4:61202] import psycopg2\r
[Mon May 24 04:32:29.661629 2021] [wsgi:error] [pid 10632:tid 1440] [client 148.173.41.4:61202] File "c:\\python39\\lib\\site-packages\\psycopg2\\__init__.py", line 51, in <module>\r
[Mon May 24 04:32:29.661629 2021] [wsgi:error] [pid 10632:tid 1440] [client 148.173.41.4:61202] from psycopg2._psycopg import ( # noqa\r
[Mon May 24 04:32:29.661629 2021] [wsgi:error] [pid 10632:tid 1440] [client 148.173.41.4:61202] SystemError: initialization of _psycopg raised unreported exception\r
</code></pre>
|
<p>In my case, restarting the server after adding the line below to httpd.conf fixed the issue. It forces the WSGI application to run in the main Python interpreter rather than a sub-interpreter, which some C extensions (psycopg2 among them) cannot handle. <a href="https://modwsgi.readthedocs.io/en/master/user-guides/application-issues.html" rel="nofollow noreferrer">Ref</a>.</p>
<pre><code>WSGIApplicationGroup %{GLOBAL}
</code></pre>
|
python-3.x|psycopg2|mod-wsgi
| 0 |
1,908,874 | 30,573,681 |
Map not unpacking tuples
|
<p>I have this simple formula that converts an IP to a 32-bit integer:</p>
<pre><code>(first octet * 256**3) + (second octet * 256**2) + (third octet * 256**1) + (fourth octet)
</code></pre>
<p>I made a program that does that:</p>
<pre><code>def ip_to_int32(ip):
# split values
ip = ip.split(".")
# formula to convert to 32, x is the octet, y is the power
to_32 = lambda x, y: int(x) * 256** (3 - y)
# Sum it all to have the int32 of the ip
# reversed is to give the correct power to octet
return sum(
to_32(octet, pwr) for pwr, octet in enumerate(ip)
)
ip_to_int32("128.32.10.1") # --> 2149583361
</code></pre>
<p>And it works as intended. </p>
<p>Then I tried to make a one-liner, just for the sake of doing it.</p>
<pre><code>sum(map(lambda x, y: int(x) * 256 ** (3 - y), enumerate(ip.split("."))))
</code></pre>
<p>But this raises</p>
<pre><code>TypeError: <lambda>() takes exactly 2 arguments (1 given)
</code></pre>
<p>So the tuple (y, x) is not being unpacked. I can fix this with</p>
<pre><code>sum(map(lambda x: int(x[1]) * 256 ** (3 - x[0]), enumerate(ip.split("."))))
</code></pre>
<p>But this seems uglier (one liners are always ugly)</p>
<p>I even tried using a list comprehensions, but map still doesn't unpack the values.</p>
<p><strong>Is this a feature or am I doing something wrong? Is there a specific way to do this?</strong></p>
|
<p>True, <code>map</code> doesn't unpack, but <a href="https://docs.python.org/2/library/itertools.html#itertools.starmap" rel="nofollow">starmap</a> does:</p>
<pre><code>from itertools import starmap

sum(starmap(lambda x, y: int(y) * 256 ** (3 - x), enumerate(ip.split("."))))
</code></pre>
|
python|python-2.7|dictionary|iterable-unpacking
| 3 |
1,908,875 | 30,627,321 |
Converting string data to binary and decoding it in Python?
|
<p>I previously asked a question, but it was a bit unclear.</p>
<p>If I have this line of data:</p>
<pre><code>52, 123, 0, ./commands/command_fw_update.c, "Testing String"
52, 123, 0, ./commands/command_fw_updat2e.c, "Testing String2"
</code></pre>
<p>How can I convert this data into a .bin file, then read back in the data from the bin file as a string?</p>
|
<p>The data is already in your desired format. If you want to copy the input file to another file called <code>input.bin</code>, use <code>shutil.copyfile</code>:</p>
<pre><code># Copy the data to a .bin file:
import shutil
shutil.copyfile("input.txt", "input.bin")
# Read the data as a string:
with open("input.bin") as data_file:
data = data_file.read()
# Now, to convert the string to useful data,
# parse it any way you want. For example, to take
# the first number in each line and store it into
# an array called "numbers":
rows = [[field.strip() for field in line.split(",")]
        for line in data.splitlines()]
numbers = [int(row[0]) for row in rows]
print numbers
</code></pre>
|
python
| 0 |
1,908,876 | 66,959,571 |
Is there a way to iterate through the vectors of Gensim's Word2Vec?
|
<p>I'm trying to perform a simple task which requires iterating over and interacting with specific vectors after loading them into gensim's Word2Vec.</p>
<p>Basically, given a txt file of the form:</p>
<pre><code>t1 -0.11307 -0.63909 -0.35103 -0.17906 -0.12349
t2 0.54553 0.18002 -0.21666 -0.090257 -0.13754
t3 0.22159 -0.13781 -0.37934 0.39926 -0.25967
</code></pre>
<p>where t1 is the name of the vector and what follows are the vectors themselves. I load it in using the function <code>vecs = KeyedVectors.load_word2vec_format(datapath(f), binary=False)</code>.</p>
<p>Now, I want to iterate through the vectors I have and make a calculation, take summing up all of the vectors as an example. If this was read in using <code>with open(f)</code>, I know I can just use <code>.split(' ')</code> on it, but since this is now a KeyedVector object, I'm not sure what to do.</p>
<p>I've looked through the word2vec documentation, as well as used <code>dir(KeyedVectors)</code> but I'm still not sure if there is an attribute like <code>KeyedVectors.vectors</code> or something that allows me to perform this task.</p>
<p>Any tips/help/advice would be much appreciated!</p>
|
<p>There's a list of all words in the <code>KeyedVectors</code> object in its <code>.index_to_key</code> property. So one way to sum all the vectors would be to retrieve each by name in a list comprehension:</p>
<pre class="lang-py prettyprint-override"><code>np.sum([vecs[key] for key in vecs.index_to_key], axis=0)
</code></pre>
<p>But, if all you really wanted to do is sum the vectors – and the keys (word tokens) aren't an important part of your calculation, the set of all the raw word-vectors is available in the <code>.vectors</code> property, as a numpy array with one vector per row. So you could also do:</p>
<pre class="lang-py prettyprint-override"><code>np.sum(vecs.vectors, axis=0)
</code></pre>
|
python|gensim|word2vec
| 0 |
1,908,877 | 50,949,332 |
Silhouette Coefficient of HAC if k = 1
|
<p>How do you calculate the Silhouette Coefficient for HAC clustering when <code>k=1</code> (i.e., all data in one cluster)? The Silhouette Coefficient ranges from <code>-1</code> to <code>1</code>, but for singletons (<code>k=maximum</code>, clusters containing only one data point) the Silhouette Coefficient is <code>0</code>. For <code>k=1</code>, is the silhouette coefficient <code>0</code>, <code>-1</code>, or <code>1</code>? The formula for the silhouette coefficient is
<a href="https://i.stack.imgur.com/V9NHL.jpg" rel="nofollow noreferrer">here</a>.</p>
<p><code>SC(i) = (b(i)-a(i))/max(a(i), b(i))</code></p>
<p><code>a(i) = average distance from object i to the other objects in its own cluster.</code></p>
<p><code>b(i) = minimum, over the other clusters, of the average distance from object i to the objects in that cluster.</code></p>
<p>*sorry for my bad english</p>
|
<p>The silhouette coefficient is <strong>not defined</strong> for a single cluster only.</p>
<p>So the proper value would be undefined, although I suggest using 0 in that case, because 0 is the value the authors suggest for one-element clusters, where the Silhouette value would also be undefined.</p>
<p>A negative Silhouette means the points are closer to another cluster; with a one-cluster solution there is no other cluster, so that cannot hold. Thus the value should be 0, or undefined.</p>
|
python-2.7|cluster-analysis|evaluation|hierarchical-clustering|silhouette
| 0 |
1,908,878 | 3,359,204 |
Python - Assign global variable to function return requires function to be global?
|
<p>So, I'm confused. I have a module containing some function that I use in another module. Imported like so:</p>
<pre><code>from <module> import *
</code></pre>
<p>Inside my module, there exist functions whose purpose is to set global variables in the main program.</p>
<p>main.py:</p>
<pre><code>from functions import *
bar = 20
print bar
changeBar()
print bar
</code></pre>
<p>functions.py:</p>
<pre><code>def changeBarHelper(variable):
variable = variable * 2
return variable
def changeBar():
global bar
bar = changeBarHelper(bar)
</code></pre>
<p>Now, this is a simplification, but it is the least code that yields the same result:</p>
<pre><code>Traceback (most recent call last):
File "/path/main.py", line 5, in
changeBar()
File "/path/functions.py", line 7, in changeBar
bar = changeBarHelper(bar)
NameError: global name 'bar' is not defined
</code></pre>
|
<p>Doing an <code>import *</code> in the way that you've done it is a one way process. You've imported a bunch of names, much the same way as you'd do:</p>
<pre><code>from mymodule import foo, bar, baz, arr, tee, eff, emm
</code></pre>
<p>So they are all just assigned to names in the global scope of the module where the <code>import</code> is done. What this does not do is connect the global namespaces of these two modules. <code>global</code> means <strong>module-global</strong>, not global-to-all-modules. So every module might have its own <code>fubar</code> global variable, and assigning to one won't assign to every module.</p>
<p>If you want to access a name from another module, you must import it. So in this example:</p>
<pre><code>def foo(var1, var2):
global bar
from mainmodule import fubar
bar = fubar(var1)
</code></pre>
<p>By doing the import inside the function itself, you can avoid circular imports. </p>
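<p>That said, the cleanest fix is usually to avoid cross-module globals entirely and pass values in and out of functions. A hedged restructuring of the original example:</p>
<pre><code># functions.py
def change_bar(value):
    return value * 2

# main.py
import functions

bar = 20
print bar
bar = functions.change_bar(bar)
print bar
</code></pre>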
|
python|function|module|variables|global
| 3 |
1,908,879 | 3,250,393 |
How do I modify program files in Python?
|
<p>In the actual window where I write code, is there a way to insert a piece of text into every line that I already have? Like inserting a comma into all lines at the first spot?</p>
|
<p>If you are in a UNIX environment, open up a terminal, <code>cd</code> to the directory your file is in, and use the <code>sed</code> command. I think this may work:</p>
<pre><code>sed "s/\n/\n,/" your_filename.py > new_filename.py
</code></pre>
<p>What this says is to replace each <code>\n</code> (newline character) with <code>\n,</code> (newline character + comma character) in <code>your_filename.py</code> and to output the result into <code>new_filename.py</code>. (Note that sed strips the trailing newline before matching, which is why the updated command below is more reliable.)</p>
<hr>
<p><strong>UPDATE</strong>: This is much better:</p>
<pre><code>sed "s/^/,/" your_filename.py > new_filename.py
</code></pre>
<p>This is very similar to the previous example, however we use the regular expression token <code>^</code> which matches the beginning of each line (and <code>$</code> is the symbol for end).</p>
<hr>
<p>There are chances this doesn't work or that it doesn't even apply to you because you didn't really provide that much information in your question (and I would have just commented on it, but I can't because I don't have enough reputation or something). Good luck.</p>
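<p>Since the question is tagged python, a hedged equivalent in Python itself (file names are the same placeholders as above):</p>
<pre><code>with open("your_filename.py") as src, open("new_filename.py", "w") as dst:
    for line in src:
        dst.write("," + line)
</code></pre>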
|
python
| 2 |
1,908,880 | 34,879,508 |
Get max or min n-elements out of numpy array? (preferably not flattened)
|
<p>I know that I can get min or max values with:</p>
<pre><code>max(matrix)
min(matrix)
</code></pre>
<p>out of a numpy matrix/vector. The indices for those vales are returned by:</p>
<pre><code>argmax(matrix)
argmin(matrix)
</code></pre>
<p>So e.g. when I have a 5x5 matrix:</p>
<pre><code>a = np.arange(5*5).reshape(5, 5) + 10
# array([[10, 11, 12, 13, 14],
# [15, 16, 17, 18, 19],
# [20, 21, 22, 23, 24],
# [25, 26, 27, 28, 29],
# [30, 31, 32, 33, 34]])
</code></pre>
<p>I could get the max value via:</p>
<pre><code>In [86]: np.max(a) # getting the max-value out of a
Out[86]: 34
In [87]: np.argmax(a) # index of max-value 34 is 24 if array a were flattened
Out[87]: 24
</code></pre>
<p>...but what is the most efficient way to get the max or min n-elements?</p>
<p>So let's say out of <em>a</em> I want to have the 5 highest and 5 lowest elements. This should return me <code>[30, 31, 32, 33, 34]</code> for the 5 highest values respectively <code>[20, 21, 22, 23, 24]</code> for their indices. Likewise <code>[10, 11, 12, 13, 14]</code> for the 5 lowest values and <code>[0, 1, 2, 3, 4]</code> for the indices of the 5 lowest elements.</p>
<p>What would be an efficient, reasonable solution for this?</p>
<p><strong>My first idea</strong> was <strong>flattening and sorting</strong> the array and taking the last and first 5 values. Afterwards I search through the original 2D matrix for the indices of those values. <strong>Although this procedure works flattening + sorting isn't very efficient...does anyone know a faster solution?</strong></p>
<p>Additionally I would like to have the indices of the original 2D array and not the flattening one. So instead of <code>24</code> returned by <code>np.argmax(a)</code> I would like to have <code>(4, 4)</code>.</p>
|
<p>The standard way to get the indices of the largest or smallest values in an array is to use <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.argpartition.html" rel="nofollow"><code>np.argpartition</code></a>. This function uses an introselect algorithm and runs with linear complexity - this performs better than fully sorting for larger arrays (which is typically O(n log n)).</p>
<p>By default this function works along the last axis of the array. To consider an entire array, you need to use <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.ravel.html" rel="nofollow"><code>ravel()</code></a>. For example, here's a random array <code>a</code>:</p>
<pre><code>>>> a = np.random.randint(0, 100, size=(5, 5))
>>> a
array([[60, 68, 86, 66, 9],
[66, 26, 83, 87, 50],
[41, 26, 0, 55, 9],
[57, 80, 71, 50, 22],
[94, 30, 95, 99, 76]])
</code></pre>
<p>Then to get the indices of the five largest values in the (flattened) 2D array, use:</p>
<pre><code>>>> i = np.argpartition(a.ravel(), -5)[-5:] # argpartition(a.ravel(), 5)[:5] for smallest
>>> i
array([ 2, 8, 22, 23, 20])
</code></pre>
<p>To get back the corresponding 2D indices of these positions in <code>a</code>, use <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.unravel_index.html" rel="nofollow"><code>unravel_index</code></a>:</p>
<pre><code>>>> i2d = np.unravel_index(i, a.shape)
>>> i2d
(array([0, 1, 4, 4, 4]), array([2, 3, 2, 3, 0]))
</code></pre>
<p>Then indexing <code>a</code> with <code>i2d</code> gives back the five largest values:</p>
<pre><code>>>> a[i2d]
array([86, 87, 95, 99, 94])
</code></pre>
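<p>One caveat worth knowing: <code>argpartition</code> only guarantees that the five largest values end up in the last five slots; it does not sort them. If you need them in ascending order, sort that small selection afterwards:</p>
<pre><code>>>> i_sorted = i[np.argsort(a.ravel()[i])]
>>> a.ravel()[i_sorted]
array([86, 87, 94, 95, 99])
</code></pre>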
|
python|arrays|numpy|max|slice
| 4 |
1,908,881 | 35,048,582 |
How to fix error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
|
<p>I'm trying to install odoo on a fresh installation of Linux on a VirtualBox machine. I have entered in the commands found here as is: <a href="http://odoo-development.readthedocs.org/en/latest/install.html" rel="noreferrer">Odoo Development Read the Docs</a>. The following command is what prompts the error: command 'x86_64-linux-gnu-gcc' failed with exit status 1:</p>
<pre><code>sudo pip install -r requirements.txt
</code></pre>
<p>So now I'm trying to solve the problem. I have gone to <a href="https://stackoverflow.com/questions/26053982/error-setup-script-exited-with-error-command-x86-64-linux-gnu-gcc-failed-wit">error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1</a>,<a href="https://stackoverflow.com/questions/11094718/error-command-gcc-failed-with-exit-status-1-while-installing-eventlet">error: command 'gcc' failed with exit status 1 while installing eventlet</a> , and entered in the following commands. </p>
<p>For the first link:</p>
<pre><code>sudo apt-get install build-essential autoconf libtool pkg-config python-opengl python-imaging python-pyrex python-pyside.qtopengl idle-python2.7 qt4-dev-tools qt4-designer libqtgui4 libqtcore4 libqt4-xml libqt4-test libqt4-script libqt4-network libqt4-dbus python-qt4 python-qt4-gl libgle3 python-dev
sudo easy_install greenlet
sudo easy_install gevent
sudo pip install -r requirements.txt
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
</code></pre>
<p>The Second Link:</p>
<pre><code>sudo apt-get install python-dev
sudo pip install -r requirements.txt
</code></pre>
<p>I still get the error. Then I tried:</p>
<pre><code>sudo apt-get install libevent-dev
sudo pip install -r requirements.txt
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
</code></pre>
<p>What am I doing wrong? </p>
<p>Here is what happens after I enter the command:</p>
<pre><code>$ sudo pip install -r requirements.txt
Requirement already satisfied (use --upgrade to upgrade): Babel==1.3 in /usr/lib/python2.7/dist-packages (from -r requirements.txt (line 1))
Requirement already satisfied (use --upgrade to upgrade): Jinja2==2.7.3 in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 2))
Requirement already satisfied (use --upgrade to upgrade): Mako==1.0.1 in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 3))
Requirement already satisfied (use --upgrade to upgrade): MarkupSafe==0.23 in /usr/lib/python2.7/dist-packages (from -r requirements.txt (line 4))
Requirement already satisfied (use --upgrade to upgrade): Pillow==2.7.0 in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 5))
Requirement already satisfied (use --upgrade to upgrade): Python-Chart==1.39 in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 6))
Requirement already satisfied (use --upgrade to upgrade): PyYAML==3.11 in /usr/lib/python2.7/dist-packages (from -r requirements.txt (line 7))
Requirement already satisfied (use --upgrade to upgrade): Werkzeug==0.9.6 in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 8))
Requirement already satisfied (use --upgrade to upgrade): argparse==1.2.1 in /usr/lib/python2.7 (from -r requirements.txt (line 9))
Requirement already satisfied (use --upgrade to upgrade): decorator==3.4.0 in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 10))
Requirement already satisfied (use --upgrade to upgrade): docutils==0.12 in /usr/lib/python2.7/dist-packages (from -r requirements.txt (line 11))
Requirement already satisfied (use --upgrade to upgrade): feedparser==5.1.3 in /usr/lib/python2.7/dist-packages (from -r requirements.txt (line 12))
Requirement already satisfied (use --upgrade to upgrade): gdata==2.0.18 in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 13))
Requirement already satisfied (use --upgrade to upgrade): gevent==1.0.2 in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 14))
Requirement already satisfied (use --upgrade to upgrade): greenlet==0.4.7 in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 15))
Requirement already satisfied (use --upgrade to upgrade): jcconv==0.2.3 in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 16))
Downloading/unpacking lxml==3.4.1 (from -r requirements.txt (line 17))
Downloading lxml-3.4.1.tar.gz (3.5MB): 3.5MB downloaded
Running setup.py (path:/tmp/pip-build-Hqt4sF/lxml/setup.py) egg_info for package lxml
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url'
warnings.warn(msg)
Building lxml version 3.4.1.
Building without Cython.
ERROR: /bin/sh: 1: xslt-config: not found
** make sure the development packages of libxml2 and libxslt are installed **
Using build configuration of libxslt
warning: no previously-included files found matching '*.py'
Downloading/unpacking mock==1.0.1 (from -r requirements.txt (line 18))
Downloading mock-1.0.1.tar.gz (818kB): 818kB downloaded
Running setup.py (path:/tmp/pip-build-Hqt4sF/mock/setup.py) egg_info for package mock
warning: no files found matching '*.png' under directory 'docs'
warning: no files found matching '*.css' under directory 'docs'
warning: no files found matching '*.html' under directory 'docs'
warning: no files found matching '*.js' under directory 'docs'
Downloading/unpacking ofxparse==0.14 (from -r requirements.txt (line 19))
Downloading ofxparse-0.14.tar.gz (42kB): 42kB downloaded
Running setup.py (path:/tmp/pip-build-Hqt4sF/ofxparse/setup.py) egg_info for package ofxparse
Downloading/unpacking passlib==1.6.2 (from -r requirements.txt (line 20))
Downloading passlib-1.6.2.tar.gz (408kB): 408kB downloaded
Running setup.py (path:/tmp/pip-build-Hqt4sF/passlib/setup.py) egg_info for package passlib
Downloading/unpacking psutil==2.2.0 (from -r requirements.txt (line 21))
Downloading psutil-2.2.0.tar.gz (223kB): 223kB downloaded
Running setup.py (path:/tmp/pip-build-Hqt4sF/psutil/setup.py) egg_info for package psutil
warning: no previously-included files matching '*' found under directory 'docs/_build'
Downloading/unpacking psycogreen==1.0 (from -r requirements.txt (line 22))
Downloading psycogreen-1.0.tar.gz
Running setup.py (path:/tmp/pip-build-Hqt4sF/psycogreen/setup.py) egg_info for package psycogreen
Downloading/unpacking psycopg2==2.5.4 (from -r requirements.txt (line 23))
Downloading psycopg2-2.5.4.tar.gz (682kB): 682kB downloaded
Running setup.py (path:/tmp/pip-build-Hqt4sF/psycopg2/setup.py) egg_info for package psycopg2
Requirement already satisfied (use --upgrade to upgrade): pyPdf==1.13 in /usr/lib/python2.7/dist-packages (from -r requirements.txt (line 24))
Downloading/unpacking pydot==1.0.2 (from -r requirements.txt (line 25))
Downloading pydot-1.0.2.tar.gz
Running setup.py (path:/tmp/pip-build-Hqt4sF/pydot/setup.py) egg_info for package pydot
Couldn't import dot_parser, loading of dot files will not be possible.
Requirement already satisfied (use --upgrade to upgrade): pyparsing==2.0.3 in /usr/lib/python2.7/dist-packages (from -r requirements.txt (line 26))
Requirement already satisfied (use --upgrade to upgrade): pyserial==2.7 in /usr/lib/python2.7/dist-packages (from -r requirements.txt (line 27))
Downloading/unpacking python-dateutil==2.4.0 (from -r requirements.txt (line 28))
Downloading python_dateutil-2.4.0-py2.py3-none-any.whl (175kB): 175kB downloaded
Downloading/unpacking python-ldap==2.4.19 (from -r requirements.txt (line 29))
Downloading python-ldap-2.4.19.tar.gz (138kB): 138kB downloaded
Running setup.py (path:/tmp/pip-build-Hqt4sF/python-ldap/setup.py) egg_info for package python-ldap
defines: HAVE_SASL HAVE_TLS HAVE_LIBLDAP_R
extra_compile_args:
extra_objects:
include_dirs: /opt/openldap-RE24/include /usr/include/sasl /usr/include
library_dirs: /opt/openldap-RE24/lib /usr/lib
libs: ldap_r
file Lib/ldap.py (for module ldap) not found
file Lib/ldap/controls.py (for module ldap.controls) not found
file Lib/ldap/extop.py (for module ldap.extop) not found
file Lib/ldap/schema.py (for module ldap.schema) not found
warning: no files found matching 'Makefile'
warning: no files found matching 'Modules/LICENSE'
Requirement already satisfied (use --upgrade to upgrade): python-openid==2.2.5 in /usr/lib/python2.7/dist-packages (from -r requirements.txt (line 30))
Requirement already satisfied (use --upgrade to upgrade): pytz==2014.10 in /usr/lib/python2.7/dist-packages (from -r requirements.txt (line 31))
Downloading/unpacking pyusb==1.0.0b2 (from -r requirements.txt (line 32))
Downloading pyusb-1.0.0b2.tar.gz (57kB): 57kB downloaded
Running setup.py (path:/tmp/pip-build-Hqt4sF/pyusb/setup.py) egg_info for package pyusb
warning: no files found matching 'ChangeLog'
Downloading/unpacking qrcode==5.1 (from -r requirements.txt (line 33))
Downloading qrcode-5.1.tar.gz
Running setup.py (path:/tmp/pip-build-Hqt4sF/qrcode/setup.py) egg_info for package qrcode
Downloading/unpacking reportlab==3.1.44 (from -r requirements.txt (line 34))
Downloading reportlab-3.1.44.tar.gz (1.9MB): 1.9MB downloaded
Running setup.py (path:/tmp/pip-build-Hqt4sF/reportlab/setup.py) egg_info for package reportlab
################################################
#Attempting install of _rl_accel & pyHnj
#extensions from '/tmp/pip-build-Hqt4sF/reportlab/src/rl_addons/rl_accel'
################################################
################################################
#Attempting install of _renderPM
#extensions from '/tmp/pip-build-Hqt4sF/reportlab/src/rl_addons/renderPM'
will use package libart 2.3.12
# installing without freetype no ttf, sorry!
# You need to install a static library version of the freetype2 software
# If you need truetype support in renderPM
# You may need to edit setup.cfg (win32)
# or edit this file to access the library if it is installed
################################################
Downloading standard T1 font curves
Finished download of standard T1 font curves
()
########## SUMMARY INFO #########
################################################
#Attempting install of _rl_accel & pyHnj
#extensions from '/tmp/pip-build-Hqt4sF/reportlab/src/rl_addons/rl_accel'
################################################
################################################
#Attempting install of _renderPM
#extensions from '/tmp/pip-build-Hqt4sF/reportlab/src/rl_addons/renderPM'
will use package libart 2.3.12
# installing without freetype no ttf, sorry!
# You need to install a static library version of the freetype2 software
# If you need truetype support in renderPM
# You may need to edit setup.cfg (win32)
# or edit this file to access the library if it is installed
################################################
Downloading standard T1 font curves
Finished download of standard T1 font curves
Downloading/unpacking requests==2.6.0 (from -r requirements.txt (line 35))
Downloading requests-2.6.0-py2.py3-none-any.whl (469kB): 469kB downloaded
Requirement already satisfied (use --upgrade to upgrade): six==1.9.0 in /usr/lib/python2.7/dist-packages (from -r requirements.txt (line 36))
Downloading/unpacking suds-jurko==0.6 (from -r requirements.txt (line 37))
Downloading suds-jurko-0.6.tar.bz2 (143kB): 143kB downloaded
Running setup.py (path:/tmp/pip-build-Hqt4sF/suds-jurko/setup.py) egg_info for package suds-jurko
Requirement already satisfied (use --upgrade to upgrade): vatnumber==1.2 in /usr/lib/python2.7/dist-packages (from -r requirements.txt (line 38))
Downloading/unpacking vobject==0.6.6 (from -r requirements.txt (line 39))
Downloading vobject-0.6.6.tar.gz (53kB): 53kB downloaded
Running setup.py (path:/tmp/pip-build-Hqt4sF/vobject/setup.py) egg_info for package vobject
Requirement already satisfied (use --upgrade to upgrade): wsgiref==0.1.2 in /usr/lib/python2.7 (from -r requirements.txt (line 40))
Requirement already satisfied (use --upgrade to upgrade): xlwt==0.7.5 in /usr/lib/python2.7/dist-packages (from -r requirements.txt (line 41))
Requirement already satisfied (use --upgrade to upgrade): beautifulsoup4 in /usr/lib/python2.7/dist-packages (from ofxparse==0.14->-r requirements.txt (line 19))
Requirement already satisfied (use --upgrade to upgrade): setuptools in /usr/lib/python2.7/dist-packages (from pydot==1.0.2->-r requirements.txt (line 25))
Requirement already satisfied (use --upgrade to upgrade): pip>=1.4.1 in /usr/lib/python2.7/dist-packages (from reportlab==3.1.44->-r requirements.txt (line 34))
Requirement already satisfied (use --upgrade to upgrade): python-stdnum in /usr/lib/python2.7/dist-packages (from vatnumber==1.2->-r requirements.txt (line 38))
Installing collected packages: lxml, mock, ofxparse, passlib, psutil, psycogreen, psycopg2, pydot, python-dateutil, python-ldap, pyusb, qrcode, reportlab, requests, suds-jurko, vobject
Found existing installation: lxml 3.4.4
Not uninstalling lxml at /usr/lib/python2.7/dist-packages, owned by OS
Running setup.py install for lxml
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url'
warnings.warn(msg)
Building lxml version 3.4.1.
Building without Cython.
ERROR: /bin/sh: 1: xslt-config: not found
** make sure the development packages of libxml2 and libxslt are installed **
Using build configuration of libxslt
building 'lxml.etree' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/tmp/pip-build-Hqt4sF/lxml/src/lxml/includes -I/usr/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.linux-x86_64-2.7/src/lxml/lxml.etree.o -w
In file included from src/lxml/lxml.etree.c:239:0:
/tmp/pip-build-Hqt4sF/lxml/src/lxml/includes/etree_defs.h:14:31: fatal error: libxml/xmlversion.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-Hqt4sF/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-tUvZhB-record/install-record.txt --single-version-externally-managed --compile:
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url'
warnings.warn(msg)
Building lxml version 3.4.1.
Building without Cython.
ERROR: /bin/sh: 1: xslt-config: not found
** make sure the development packages of libxml2 and libxslt are installed **
Using build configuration of libxslt
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-2.7
creating build/lib.linux-x86_64-2.7/lxml
copying src/lxml/doctestcompare.py -> build/lib.linux-x86_64-2.7/lxml
copying src/lxml/builder.py -> build/lib.linux-x86_64-2.7/lxml
copying src/lxml/usedoctest.py -> build/lib.linux-x86_64-2.7/lxml
copying src/lxml/pyclasslookup.py -> build/lib.linux-x86_64-2.7/lxml
copying src/lxml/sax.py -> build/lib.linux-x86_64-2.7/lxml
copying src/lxml/_elementpath.py -> build/lib.linux-x86_64-2.7/lxml
copying src/lxml/ElementInclude.py -> build/lib.linux-x86_64-2.7/lxml
copying src/lxml/__init__.py -> build/lib.linux-x86_64-2.7/lxml
copying src/lxml/cssselect.py -> build/lib.linux-x86_64-2.7/lxml
creating build/lib.linux-x86_64-2.7/lxml/includes
copying src/lxml/includes/__init__.py -> build/lib.linux-x86_64-2.7/lxml/includes
creating build/lib.linux-x86_64-2.7/lxml/html
copying src/lxml/html/html5parser.py -> build/lib.linux-x86_64-2.7/lxml/html
copying src/lxml/html/diff.py -> build/lib.linux-x86_64-2.7/lxml/html
copying src/lxml/html/builder.py -> build/lib.linux-x86_64-2.7/lxml/html
copying src/lxml/html/usedoctest.py -> build/lib.linux-x86_64-2.7/lxml/html
copying src/lxml/html/_html5builder.py -> build/lib.linux-x86_64-2.7/lxml/html
copying src/lxml/html/_setmixin.py -> build/lib.linux-x86_64-2.7/lxml/html
copying src/lxml/html/ElementSoup.py -> build/lib.linux-x86_64-2.7/lxml/html
copying src/lxml/html/defs.py -> build/lib.linux-x86_64-2.7/lxml/html
copying src/lxml/html/_diffcommand.py -> build/lib.linux-x86_64-2.7/lxml/html
copying src/lxml/html/clean.py -> build/lib.linux-x86_64-2.7/lxml/html
copying src/lxml/html/formfill.py -> build/lib.linux-x86_64-2.7/lxml/html
copying src/lxml/html/__init__.py -> build/lib.linux-x86_64-2.7/lxml/html
copying src/lxml/html/soupparser.py -> build/lib.linux-x86_64-2.7/lxml/html
creating build/lib.linux-x86_64-2.7/lxml/isoschematron
copying src/lxml/isoschematron/__init__.py -> build/lib.linux-x86_64-2.7/lxml/isoschematron
copying src/lxml/lxml.etree.h -> build/lib.linux-x86_64-2.7/lxml
copying src/lxml/lxml.etree_api.h -> build/lib.linux-x86_64-2.7/lxml
copying src/lxml/includes/tree.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
copying src/lxml/includes/xpath.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
copying src/lxml/includes/c14n.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
copying src/lxml/includes/dtdvalid.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
copying src/lxml/includes/xmlparser.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
copying src/lxml/includes/relaxng.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
copying src/lxml/includes/config.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
copying src/lxml/includes/xinclude.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
copying src/lxml/includes/uri.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
copying src/lxml/includes/xmlerror.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
copying src/lxml/includes/etreepublic.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
copying src/lxml/includes/htmlparser.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
copying src/lxml/includes/schematron.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
copying src/lxml/includes/xmlschema.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
copying src/lxml/includes/xslt.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
copying src/lxml/includes/etree_defs.h -> build/lib.linux-x86_64-2.7/lxml/includes
copying src/lxml/includes/lxml-version.h -> build/lib.linux-x86_64-2.7/lxml/includes
creating build/lib.linux-x86_64-2.7/lxml/isoschematron/resources
creating build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/rng
copying src/lxml/isoschematron/resources/rng/iso-schematron.rng -> build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/rng
creating build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl
copying src/lxml/isoschematron/resources/xsl/RNG2Schtrn.xsl -> build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl
copying src/lxml/isoschematron/resources/xsl/XSD2Schtrn.xsl -> build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl
creating build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_svrl_for_xslt1.xsl -> build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_skeleton_for_xslt1.xsl -> build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_abstract_expand.xsl -> build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_message.xsl -> build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_dsdl_include.xsl -> build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/readme.txt -> build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
running build_ext
building 'lxml.etree' extension
creating build/temp.linux-x86_64-2.7
creating build/temp.linux-x86_64-2.7/src
creating build/temp.linux-x86_64-2.7/src/lxml
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/tmp/pip-build-Hqt4sF/lxml/src/lxml/includes -I/usr/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.linux-x86_64-2.7/src/lxml/lxml.etree.o -w
In file included from src/lxml/lxml.etree.c:239:0:
/tmp/pip-build-Hqt4sF/lxml/src/lxml/includes/etree_defs.h:14:31: fatal error: libxml/xmlversion.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Can't roll back lxml; was not uninstalled
Cleaning up...
Command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-Hqt4sF/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-tUvZhB-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip-build-Hqt4sF/lxml
Storing debug log for failure in /home/aaa/.pip/pip.log
</code></pre>
|
<p>Try installing the build-essential package first.</p>
<pre><code>apt-get install build-essential
</code></pre>
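<p>Given the actual failure in your log (<code>xslt-config: not found</code> and <code>fatal error: libxml/xmlversion.h: No such file or directory</code> while building lxml), you very likely also need the libxml2/libxslt development headers, as the build output itself suggests:</p>
<pre><code>apt-get install libxml2-dev libxslt1-dev
</code></pre>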
|
python|c++|linux|gcc|openerp
| 30 |
1,908,882 | 34,930,662 |
Continually adding path to sys.path
|
<p>I have inherited quite a bit of python code and all over it is the following snippet which adds the file path of the parent directory to the system path.</p>
<pre><code>import sys
from os.path import join, dirname
sys.path.insert(0, join(dirname(sys.argv[0]), "..\\"))
from utilities import find, execute
</code></pre>
<p>My understanding is that this adds a path to the search path. During the run of a program, that adds numerous paths to the search path and presumably makes it slower, as each file adds its own parent directory.</p>
<p>I prefer the syntax </p>
<pre><code>from scm_tools.general.utilities import find, execute
</code></pre>
<p>because this is easier to understand and far less code. This might have implications if I am moving the code around but it's all in a single package.</p>
<p>Am I right in assuming that, inside a package, the latter syntax is the more Pythonic way of doing things?</p>
<p>Or does it not really matter, since Python is doing some magic under the hood?</p>
|
<p>Use relative imports when you can:</p>
<p><code>from ..utilities import find, execute</code></p>
<p>This requires that you stay within the module space, which means each directory you traverse requires an <code>__init__.py</code> file.</p>
<p>There are cases where this breaks down, for example if your tests directory isn't inside the module structure. In these cases you need to edit the path, but you shouldn't edit the path blindly like the above example.</p>
<p>Either add to the <code>PYTHONPATH</code> environment variable before you code starts so you can always reference the root of the directory or only add paths that aren't already in the <code>sys.path</code> and try avoiding adding anything but module roots.</p>
<p>The <code>PYTHONPATH</code> change is a bit risky for code you wish to distribute. It's easy to have a change in <code>PYTHONPATH</code> you can't control, or to define that addition in a way that doesn't transfer to distributed code. It also adds an annoying module requirement that others have to deal with -- so reserve this for adding whole swaths of modules that you want to include, like custom site-package directories. It's <em>almost</em> always better to use virtualenv for such situations.</p>
<p>If you do need to change a <code>sys.path</code> inside code you should try to at least avoid clobbering it all over the place or you'll have a headache trying to fix it when it goes awry. To avoid this try to only add root module paths so you can always import in a <code>root.submodule.desiredmodule</code> pattern. Additionally check if a path is already present before you insert it into <code>sys.path</code> to avoid very long <code>sys.path</code>s. In my test directories I oftentimes have an importable file that fixes the <code>sys.path</code> to the root of the directory structures I am testing:</p>
<pre><code>import os
import sys

# Add parent import capabilities
parentdir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
if parentdir not in sys.path:
    sys.path.insert(0, parentdir)
</code></pre>
|
python
| 3 |
1,908,883 | 26,878,677 |
"No matching records found" when using bootstrap-table in Flask
|
<p>I'm developing a website based on Flask, and I want to load data.json using bootstrap-table. But I only get the table, without the data.</p>
<p>The directory structure displays as below:</p>
<pre><code>index.py
templates/
new.html
data.json
data1.json
static/
css/
bootstrap-table.css
bootstrap-theme.css
bootstrap-theme.min.css
bootstrap.css.map
base.css
bootstrap-table.min.css
bootstrap-theme.css.map
bootstrap.css
bootstrap.min.css
js/
bootstrap-table.js
bootstrap.js
bower_components/
jquery.min.js
bootstrap-table.min.js
bootstrap.min.js
index.js
npm.js
</code></pre>
<p>and the index.py looks like this:</p>
<pre><code>62 @app.route("/")
63 def new():
64 return render_template('new.html')
</code></pre>
<p>the 'new .html' looks like this:</p>
<pre><code><!doctype html>
{% extends 'base.html' %}
{% block title %}Config{% endblock %}
{% block head %}
{{ super() }}
<!--<script type="text/javascript" src='../static/js/index.js'></script> -->
{% endblock %}
{% block header %}
<p class="title">test</p>
{% endblock %}
{% block content %}
<table data-toggle="table" data-url="data1.json" data-cache="false" data-height="299">
<thead>
<tr>
<th data-field="id">Item ID</th>
<th data-field="name">Item Name</th>
<th data-field="price">Item Price</th>
</tr>
</thead>
</table>
{% endblock %}
</code></pre>
<p>and I've link all the css and js files needed in the base.html</p>
<p>But when index.py is running, I get the page below. It displays the table, but without the data.</p>
<p><img src="https://i.stack.imgur.com/6N92z.png" alt="enter image description here"></p>
<p>Has anyone run into this problem?</p>
|
<p>Firstly, in terms of Flask, make sure the json file is located in the static folder, not in templates. It's a trivial detail, but it took me several attempts to figure out.</p>
<p>Then, make sure the format of your json file is correct; bootstrap-table expects a json array.</p>
<p>For your reference:
<a href="https://stackoverflow.com/questions/25187349/bootstrap-table-showing-json-data">Bootstrap table showing JSON data</a></p>
|
python|html|json|twitter-bootstrap|flask
| 0 |
1,908,884 | 45,019,854 |
Extracting All Combinations in nested dictionary python
|
<p>I have a dictionary like:</p>
<pre><code>{'6400': {'6401': '1.0', '6407': '0.3333333333333333', '6536': '0.0', '6448': '0.0'}}
</code></pre>
<p>And I would like to product a structure similar to preferably in Pyspark:</p>
<pre><code>('6400',['6400','6401','1.0'])
('6400',['6400','6407','0.3333333333333333'])
('6400',['6400','6536','0.0'])
('6400',['6400','6448','0.0'])
</code></pre>
|
<p>If you do this in plain Python, you can use the following code to produce the structure you want.</p>
<pre><code>d = {'6400': {'6401': '1.0', '6407': '0.3333333333333333',
              '6536': '0.0', '6448': '0.0'}}
result = []
for outer_e in d:
    for inner_e in d[outer_e]:
        e = [outer_e, inner_e, d[outer_e][inner_e]]
        e = (outer_e, e)
        result.append(e)
</code></pre>
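<p>Since the question prefers PySpark, here is a minimal sketch of the same expansion on an RDD (assuming an existing <code>SparkContext</code> named <code>sc</code>):</p>
<pre><code>rdd = sc.parallelize(list(d.items()))  # pairs of (outer_key, inner_dict)
result = rdd.flatMap(
    lambda kv: [(kv[0], [kv[0], k, v]) for k, v in kv[1].items()])
print(result.collect())
</code></pre>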
|
python|dictionary|pyspark|pyspark-sql
| 2 |
1,908,885 | 64,629,114 |
How to add new empty column sequentially
|
<p>I have this Data frame with these columns</p>
<pre><code>dd = pd.DataFrame({'a':[1],'1':[1],'2':[1],'4':[1],'6':[1],'b':[1]})
a 1 2 4 6 b
0 1 1 1 1 1 1
</code></pre>
<p>I want to add the missing column numbers (col 3 and col 5 are missing from the sequence). I can surely do this manually, which gives the expected output:</p>
<pre><code>dd['3'] = 0
dd['5'] = 0
dd=dd.reindex(columns= ['a', '1','2','3','4','5','6','b'])
a 1 2 3 4 5 6 b
0 1 1 1 0 1 0 1 1
</code></pre>
<p>I have thousands of columns, so I can't do it manually. Is there a way to add them via a loop or something?</p>
|
<p>Let's <code>filter</code> the numeric columns, then use <code>get_loc</code> to obtain the locations in the dataframe corresponding to the start and end of the numeric columns, and finally use <code>reindex</code> with <code>fill_value=0</code> to reindex accordingly:</p>
<pre><code>c = dd.filter(regex=r'^\d+$').columns
l1, l2 = dd.columns.get_loc(c[0]), dd.columns.get_loc(c[-1])
idx = np.hstack([dd.columns[:l1], np.r_[c.astype(int).min():c.astype(int).max() + 1].astype(str), dd.columns[l2 + 1:]])
dd = dd.reindex(idx, axis=1, fill_value=0)
</code></pre>
<hr />
<pre><code> a 1 2 3 4 5 6 b
0 1 1 1 0 1 0 1 1
</code></pre>
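<p>If the numeric block is contiguous, as in the example, a shorter sketch (assuming the non-numeric columns always frame the numeric ones) builds the full target column list and reindexes once:</p>
<pre><code>nums = [str(i) for i in range(1, 7)]  # '1' through '6'
dd = dd.reindex(columns=['a', *nums, 'b'], fill_value=0)
</code></pre>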
|
python|pandas
| 3 |
1,908,886 | 61,420,246 |
AttributeError: '_AppCtxGlobals' object has no attribute 'user'
|
<p>To set some context I'm creating an API through Flask. To authenticate users, I'm using
<a href="https://flask-httpauth.readthedocs.io/en/latest/" rel="nofollow noreferrer">flask-HTTPAuth</a>. As a part of accessing login protected resources, I've defined my <code>verify_password</code> callback in <code>auth.py</code>. If the user credentials provided evaluate to True, the user is attached to the <code>g</code> object.</p>
<p>In <code>app.py</code>, there is the route <code>/api/v1/users/token</code>, that when requested, a token is issued to a user that is logged in. However when I try to access <code>g.user</code> in <code>app.py</code>, I get the error: <code>AttributeError: '_AppCtxGlobals' object has no attribute 'user'</code>.</p>
<p>Why isn't there an existing 'user' attribute when accessing the <code>g</code> object in <code>app.py</code>?</p>
<p><strong>auth.py</strong></p>
<pre><code>from flask import g
from flask_httpauth import HTTPBasicAuth
from models import User

basic_auth = HTTPBasicAuth()

@basic_auth.verify_password
def verify_password(username, password):
    try:
        api_user = User.get(User.username == username)
    except User.DoesNotExist:
        return False
    user_verified = api_user.check_password(password)
    if user_verified:
        g.user = api_user
        return True
    return False
</code></pre>
<p><strong>app.py</strong></p>
<pre><code>from flask import Flask, g, jsonify
from auth import basic_auth as auth

app = Flask(__name__)

@auth.login_required
@app.route("/api/v1/users/token")
def issue_api_token():
    token = g.user.request_token()
    return jsonify({'token': token})
</code></pre>
|
<p>The order of your decorators is wrong; <code>@app.route</code> should always be first. Decorators apply bottom-up, so with <code>@auth.login_required</code> on top, <code>app.route</code> registers the unwrapped view function, meaning the login check never runs and <code>g.user</code> is never set.</p>
<pre><code>@app.route("/api/v1/users/token")
@auth.login_required
def issue_api_token():
    # ...
</code></pre>
|
python|flask|flask-httpauth
| 1 |
1,908,887 | 61,577,903 |
How to split data into train and test sets from one directory with PyTorch?
|
<p>I have a data folder that doesn't have data split into train and test folders. How do I split the data into train and test sets? The labels come from the names of the files, so any change in that order would have to include the labels. I want to split the data before using ImageFolder so the different transforms can be done on train and test datasets.</p>
<pre><code>train_transforms = transforms.Compose([transforms.RandomRotation(10),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       transforms.Normalize([0.485, 0.456, 0.406],
                                                            [0.229, 0.224, 0.225])])

test_transforms = transforms.Compose([transforms.Resize(256),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor(),
                                      transforms.Normalize([0.485, 0.456, 0.406],
                                                           [0.229, 0.224, 0.225])])

train_image_dataset = datasets.ImageFolder(data_dir, transform=train_transforms)
test_image_dataset = datasets.ImageFolder(data_dir, transform=test_transforms)

train_dataloader = torch.utils.data.DataLoader(train_image_dataset, batch_size=64, shuffle=True)
test_dataloader = torch.utils.data.DataLoader(test_image_dataset, batch_size=32)
</code></pre>
|
<p>I think what you are looking for is cross-validation; check this <a href="https://discuss.pytorch.org/t/what-is-the-best-way-to-apply-k-fold-cross-validation-in-cnn/15035" rel="nofollow noreferrer">answer</a>. You can add a column with the labels and then apply any cross-validation method to split into test and train.</p>
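<p>Alternatively, a minimal sketch without cross-validation machinery: create two <code>ImageFolder</code> datasets over the same directory (one per transform) and split one shared, shuffled index list with <code>SubsetRandomSampler</code> (assuming the imports and variables from the question):</p>
<pre><code>import numpy as np
from torch.utils.data import SubsetRandomSampler

train_ds = datasets.ImageFolder(data_dir, transform=train_transforms)
test_ds = datasets.ImageFolder(data_dir, transform=test_transforms)

# One shared permutation so both datasets are split over the same files
indices = np.random.permutation(len(train_ds))
split = int(0.8 * len(train_ds))  # 80/20 split, an arbitrary choice
train_idx, test_idx = indices[:split], indices[split:]

train_dataloader = torch.utils.data.DataLoader(
    train_ds, batch_size=64, sampler=SubsetRandomSampler(train_idx))
test_dataloader = torch.utils.data.DataLoader(
    test_ds, batch_size=32, sampler=SubsetRandomSampler(test_idx))
</code></pre>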
|
python|neural-network|pytorch|training-data|torchvision
| 0 |
1,908,888 | 61,331,463 |
Maximum Amount Shown in Money
|
<p>I need to show the maximum amount of money that was in the pot, but I always end up with "0", and when I try to put it into a list, it doesn't work. The rest of the program works fine; I just don't know how to show the maximum amount of money that was in the pot. Here's my code:</p>
<pre><code>import random

def main():
    """
    param: None
    return: None
    """
    print("Welcome to Lucky Sevens! Take your chances and win big if the die equal to 7! There are many ways to win!")
    pot_money = 0
    initial_bet = float(input("Enter your bet: "))
    die1 = random.randint(1,6)
    die2 = random.randint(1,6)
    diceTotal = die1 + die2
    roll_number = 0
    # pot money would be equal to the initial bet until the player starts to gamble or it is empty
    pot_money = initial_bet + pot_money
    while pot_money > 0:
        diceTotal = die1 + die2
        roll_number += 1
        print("Die 1 was " + str(die1) + " and die 2 was " + str(die2))
        print("The total of the die was: " + str(diceTotal))
        print("You are currently on roll " + str(roll_number))
        if diceTotal == 7:
            pot_money = pot_money + 4
        else:
            pot_money = pot_money - 1
        if pot_money == 0:
            print("It took " + str(roll_number) + " rolls to break you.")
            print("The maximum amount of money in the pot was " + max(str(pot_money)))
        print("The pot currently holds: $" + str(pot_money))
        die1 = random.randint(1,6)
        die2 = random.randint(1,6)

main()
</code></pre>
|
<p>Try this code. It prints the max pot at the end. You just needed to save a variable for the max pot and keep updating it in the loop:</p>
<pre><code>import random

print("Welcome to Lucky Sevens! Take your chances and win big if the die equal to 7! There are many ways to win!")
pot_money = 0
initial_bet = float(input("Enter your bet: "))
die1 = random.randint(1,6)
die2 = random.randint(1,6)
diceTotal = die1 + die2
roll_number = 0
# pot money would be equal to the initial bet until the player starts to gamble or it is empty
pot_money = initial_bet + pot_money
max_pot = pot_money
while pot_money > 0:
    diceTotal = die1 + die2
    roll_number += 1
    print("Die 1 was " + str(die1) + " and die 2 was " + str(die2))
    print("The total of the die was: " + str(diceTotal))
    print("You are currently on roll " + str(roll_number))
    if diceTotal == 7:
        pot_money = pot_money + 4
    else:
        pot_money = pot_money - 1
    if pot_money == 0:
        print("It took " + str(roll_number) + " rolls to break you.")
        print("The maximum amount of money in the pot was ${}".format(max_pot))
    if pot_money > max_pot:
        max_pot = pot_money
    print("The pot currently holds: $" + str(pot_money))
    die1 = random.randint(1,6)
    die2 = random.randint(1,6)
</code></pre>
<p>returns</p>
<pre><code>Welcome to Lucky Sevens! Take your chances and win big if the die equal to 7! There are many ways to win!
Enter your bet: 2
Die 1 was 3 and die 2 was 2
The total of the die was: 5
You are currently on roll 1
The pot currently holds: $1.0
Die 1 was 3 and die 2 was 6
The total of the die was: 9
You are currently on roll 2
It took 2 rolls to break you.
The maximum amount of money in the pot was $2.0
The pot currently holds: $0.0
</code></pre>
|
python|python-3.x|max
| 1 |
1,908,889 | 61,594,344 |
Multiply values of two columns per row
|
<p>I'd like to multiply the values of two columns per row...</p>
<p>from this:</p>
<p><a href="https://i.stack.imgur.com/UV4iA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UV4iA.png" alt="enter image description here"></a></p>
<p>to this:</p>
<p><a href="https://i.stack.imgur.com/DNrE1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DNrE1.png" alt="enter image description here"></a></p>
|
<p>This can be done easily with pandas. Here is a sample solution:</p>
<pre><code>import pandas as pd
dataframe = pd.DataFrame({"A": ['a','b','c'], "B": [1,2,3], "C": [2,2,2]})
dataframe['D'] = dataframe['B']*dataframe['C']
print(dataframe)
</code></pre>
|
python-3.x
| 1 |
1,908,890 | 57,908,084 |
Creating a list in Python and inserting into Oracle table, and then able to retrieve the count ,but the values are not found in oracle table
|
<p>Creating a list in Python and inserting it into an Oracle table, but no records are found in the Oracle table.</p>
<ol>
<li>Created a list in python.</li>
<li>Created a Oracle table using Python code.</li>
<li>Using executeMany inserted the list.</li>
<li>Run the count(*) query in python and obtained the number of rows and printed in python.</li>
</ol>
<p>Output: the table has been created in Oracle successfully using Python code, but the records inserted via Python cannot be found.</p>
<pre><code>import cx_Oracle
con = cx_Oracle.connect('username/password@127.0.0.1/orcl')
cursor = con.cursor()
create_table = """CREATE TABLE python_modules ( module_name VARCHAR2(1000) NOT NULL, file_path VARCHAR2(1000) NOT NULL )"""
cursor.execute(create_table)
M = []
M.append(('Module1', 'c:/1'))
M.append(('Module2', 'c:/2'))
M.append(('Module3', 'c:3'))
cursor.prepare("INSERT INTO python_modules(module_name, file_path) VALUES (:1, :2)")
cursor.executemany(None,M)
con.commit
cursor.execute("SELECT COUNT(*) FROM python_modules")
print(cursor.fetchone() [0])
</code></pre>
<p>Executing the query <code>select * from python_modules</code> should show the 3 records in the Oracle SQL Developer tool.</p>
|
<p>Change your commit to <code>con.commit()</code>. Without the parentheses, <code>con.commit</code> only references the method and never calls it, so the inserted rows are never committed.</p>
|
python
| 0 |
1,908,891 | 57,882,584 |
How to reorder a python list backwards starting with the 0th element?
|
<p>I'm trying to go through a list in reverse order, starting with the -0 indexed item (which is also the 0th item), rather than the -1 indexed item, so that I'll now have the new list to use. I've come up with two ways to do this, but neither seems both concise and clear.</p>
<pre><code>a_list = [1, 2, 3, 4, 5]
print(a_list[:1] + a_list[:0:-1]) # take two slices of the list and add them
# [1, 5, 4, 3, 2]
list_range = range(-len(a_list)+1,1)[::-1] # create an appropriate new index range mapping
print([a_list[i] for i in list_range]) # list comprehension on the new range mapping
# [1, 5, 4, 3, 2]
</code></pre>
<p>Is there a way in python 3 to use slicing or another method to achieve this more simply?</p>
|
<p>If you are up for a programming golf:</p>
<pre><code>>>> a_list = [1, 2, 3, 4, 5]
>>> [a_list[-i] for i in range(len(a_list))]
[1, 5, 4, 3, 2]
</code></pre>
|
python|list
| 5 |
1,908,892 | 56,190,605 |
Can't use pendulum to parse dates in Series, but works one by one
|
<p>I'm trying to parse dates using <code>pendulum</code>. I have a <code>TimeStamp</code> date, so I did the following:</p>
<pre><code>df['aux']=df['Date'].dt.date
df['p_date']=df.aux.apply(lambda x: pendulum.parse(x))
</code></pre>
<p>Which brings the following error:</p>
<pre><code>AttributeError: 'DateTime' object has no attribute 'nanosecond'
</code></pre>
<p>But if I do, something like:</p>
<pre><code>pendulum.parse(df.aux[0])
</code></pre>
<p>It gets parsed no problem. I thought <code>apply(lambda x:)</code> applied the same function to all rows of the <code>Series</code>, but now it isn't working. What's happening?</p>
<p>Sample code:</p>
<pre><code>dates=pd.Series(['2018-03-20','2019-03-21'])
dates.apply(lambda x: pendulum.parse(x)) #Doesn't work
pendulum.parse(dates[0]) #Works
</code></pre>
|
<p>pandas tries to read a <code>nanosecond</code> attribute that pendulum's <code>DateTime</code> objects don't have (see this <a href="https://github.com/sdispater/pendulum/issues/246" rel="nofollow noreferrer">github</a> issue), so convert the parsed result to <code>str</code> instead of receiving the following error:</p>
<blockquote>
<p>'DateTime' object has no attribute 'nanosecond'</p>
</blockquote>
<pre><code>dates.apply(lambda x: str(pendulum.parse(x)))
Out[256]:
0 2018-03-20T00:00:00+00:00
1 2019-03-21T00:00:00+00:00
dtype: object
</code></pre>
|
python|pandas|date|pendulum
| 0 |
1,908,893 | 56,239,674 |
Using regex to selectively pull data into pandas dataframe
|
<p>I am using regex and pandas to read through lines of text in a file and selectively pull data into a dataframe.</p>
<p>Say I have the following line of text</p>
<pre><code>Name : "Bob" Occupation : "Builder" Age : "42" Name : "Jim" Occupation : "" Age : "25"
</code></pre>
<p>I want to pull in all of this information into a dataframe so it looks like the following:</p>
<pre><code>Name Occupation Age
Bob Builder 42
</code></pre>
<p>I want to ignore reading in any of the information about the second person because their occupation is blank.</p>
<p>Code:</p>
<pre><code>with open(txt, 'r') as txt:
    for line in txt:
        line = line.strip()
        a = re.findall(r'Name : \"(\S+)\"', line)
        if a:
            b = re.findall(r'Occupation : \"(\S+)\"', line)
            if b:
                c = re.findall(r'Age : \"(\S+)\"', line)
                if c:
                    df = df.append({'Name' : a, 'Occupation' : b, 'Age' : c}, ignore_index = True)
</code></pre>
<p>This would return the following (incorrect) dataframe </p>
<pre><code> Name Occupation Age
["Bob", "Jim"] ["Builder"] ["42","25"]
</code></pre>
<p>I want to modify this code so that it doesn't ever include the situation that "Jim" is in. i.e. if the person has no "occupation" then don't read their info into the dataframe. You can also see that this code is incorrect because it is now saying that "Jim" has an Occupation of "Builder".</p>
<p>If I was given the below line of text:</p>
<pre><code>Name : "Bob" Occupation : "Builder" Age : "42" Name : "Jim" Occupation : "" Age : "25" Name : "Steve" Occupation : "Clerk" Age : "110"
</code></pre>
<p>The resulting df would be: </p>
<pre><code> Name Occupation Age
["Bob", "Steve"] ["Builder", "Clerk"] ["42","110"]
</code></pre>
<p>This is handy because I would no longer run into any indexing issues, so I could then expand this df into my end goal (which I know how to do):</p>
<pre><code>Name Occupation Age
Bob Builder 42
Steve Clerk 110
</code></pre>
|
<p>Based on your comment that the 3 keys <code>Name</code>, <code>Occupation</code> and <code>Age</code> are always in the same order, we can use a single regex pattern to retrieve the field values while making sure the matched values are non-EMPTY. Below is an example using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.extractall.html" rel="nofollow noreferrer">Series.str.extractall()</a>:</p>
<pre><code># example texts copied from your post
str="""
Name : "Bob" Occupation : "Builder" Age : "42" Name : "Jim" Occupation : "" Age : "25" Name : "Steve" Occupation : "Clerk" Age : "110"
Name : "Bob" Occupation : "Builder" Age : "42" Name : "Jim" Occupation : "" Age : "25"
"""
# read all lines into one field dataframe with column name as 'text'
df = pd.read_csv(pd.io.common.StringIO(str), squeeze=True, header=None).to_frame('text')
# 3 fields which have the same regex sub-pattern
fields = ['Name', 'Occupation', 'Age']
# regex pattern used to retrieve values of the above fields. There are 3 sub-patterns
# corresponding to the above 3 fields and joined by at least one white spaces(\s+)
ptn = r'\s+'.join([ r'{0}\s*:\s*"(?P<{0}>[^"]+)"'.format(f) for f in fields ])
print(ptn)
#Name\s*:\s*"(?P<Name>[^"]+)"\s+Occupation\s*:\s*"(?P<Occupation>[^"]+)"\s+Age\s*:\s*"(?P<Age>[^"]+)"
</code></pre>
<p><strong>Where:</strong> </p>
<ul>
<li>The sub-pattern <code>Name\s*:\s*"(?P<Name>[^"]+)"</code> is basically doing the same as <strong><code>Name : "([^"]+)"</code></strong>, but it allows <em>zero or more</em> white spaces around the colon <code>:</code> and uses a named capturing group. </li>
<li>The plus character <strong><code>+</code></strong> in <code>"([^"]+)"</code> makes sure the value enclosed by double-quotes is not EMPTY, and thus skips Jim's profile since his <em>Occupation</em> is EMPTY. </li>
<li>Using named capturing groups gives correct column names after running Series.str.extractall(); otherwise the resulting column names would default to <code>0</code>, <code>1</code> and <code>2</code>.</li>
</ul>
<p>Then you can check the result from Series.str.extractall():</p>
<pre><code>df['text'].str.extractall(ptn)
Name Occupation Age
match
0 0 Bob Builder 42
1 Steve Clerk 110
1 0 Bob Builder 42
</code></pre>
<p>Drop the level-1 index and you will get a dataframe with the original index. You can join this back to the original dataframe if there are other columns used in your tasks.</p>
<pre><code>df['text'].str.extractall(ptn).reset_index(level=1, drop=True)
###
Name Occupation Age
0 Bob Builder 42
0 Steve Clerk 110
1 Bob Builder 42
</code></pre>
|
python|regex|pandas
| 2 |
1,908,894 | 18,225,843 |
Where goes wrong for this High Pass Filter in Python?
|
<pre><code>import scipy.signal as sig
from scipy.signal import filter_design as fd  # aliases assumed for the calls below

# Specifications for HPF
Wp = 0.01 # Cutoff frequency
Ws = 0.004 # Stop frequency
Rp = 0.1 # passband maximum loss (gpass)
As = 60 # stoppand min attenuation (gstop)
b,a = fd.iirdesign(Wp, Ws, Rp, As, ftype='butter')
y = sig.lfilter(b, a, x, axis=-1)
</code></pre>
<p>I adjusted the parameters but the result never turned up as expected.</p>
<p>For example, when I decreased <code>Wp</code>, I was expecting that more frequency components survived after the filtering. Thus, I expected to see a more "shaky" signal.</p>
<p>However, it turned out to be 0 everywhere.</p>
<p>It seems that my understanding on this HPF is wrong.</p>
<p><strong>Is it correct to do this to implement a HPF?</strong></p>
<p><strong>How may I adjust the parameters?</strong></p>
|
<p>It would seem that your transition band is too tight for the iirdesign tool. The resulting filter has a large gain boost at low frequencies, essentially creating a lowpass filter after all. Try creating your filter with, for example,</p>
<pre><code>Wp = 0.1
Ws = 0.04
</code></pre>
<p>This should give you a highpass filter. Try plotting the resulting coefficients with the Octave/MATLAB <code>freqz</code> function (or scipy's equivalent) to check that it produced the desired filter response.</p>
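<p>The same check can be done directly in Python with scipy's <code>freqz</code> (a sketch, assuming matplotlib is available and <code>b, a</code> are the coefficients computed above):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

w, h = signal.freqz(b, a)
# Magnitude response in dB; clip to avoid log10(0)
plt.plot(w / np.pi, 20 * np.log10(np.maximum(np.abs(h), 1e-12)))
plt.xlabel('Normalized frequency (x pi rad/sample)')
plt.ylabel('Gain [dB]')
plt.show()
</code></pre>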
<p>If you must have such a narrow transition, you can try filter types other than Butterworth. For example, an elliptic filter manages to produce the desired cutoff, transition and stop bands, but introduces ringing in both the pass and stop bands (and has a non-linear phase response).</p>
<pre><code>b, a = fd.iirdesign(0.1, 0.04, 0.1, 60, ftype='ellip')
</code></pre>
|
python|filter|scipy|signals|signal-processing
| 3 |
1,908,895 | 69,562,028 |
Roll up output by endpoint name when they have unique ids for Locust.io
|
<p>I have a task that makes two POST requests. The first creates the parent and the second uses the key from the parent to create a child of that parent. However, in the output, while all the parent POST endpoints are rolled up, the child ones are not, because they include the parent ID in the endpoint URL.</p>
<p>How can I have the output roll up by endpoint where it uses a placeholder for the id.</p>
<p><strong>Parent Endpoint</strong>
<code>/v1/foo/bar</code></p>
<p><strong>Child Endpoint</strong>
<code>/v1/foo/bar/{id}/upload</code></p>
<h3>Actual</h3>
<p>With the above examples, I get one line in the output for the parent that shows the number of requests, failures, etc., and one line for each child request. So if the parent has 50 requests on it, I will have 50 separate lines, one for each child request. Something like this is currently in the output.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Type</th>
<th>Name</th>
<th style="text-align: right;"># Requests</th>
<th style="text-align: right;"># Fails</th>
<th style="text-align: center;">...</th>
</tr>
</thead>
<tbody>
<tr>
<td>POST</td>
<td>/v1/foo/bar</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">0</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td>POST</td>
<td>/v1/foo/bar/1/upload</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">0</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td>POST</td>
<td>/v1/foo/bar/2/upload</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">0</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td>POST</td>
<td>/v1/foo/bar/3/upload</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">0</td>
<td style="text-align: center;"></td>
</tr>
</tbody>
</table>
</div><h3>Would Like</h3>
<p>I don't want the web or CLI output to show a line for each unique ID; I want the children rolled up under one entry with a placeholder, as shown below. Something similar to this, but anything that shows the two lines with the counts matching is fine.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Type</th>
<th>Name</th>
<th style="text-align: right;"># Requests</th>
<th style="text-align: right;"># Fails</th>
<th style="text-align: center;">...</th>
</tr>
</thead>
<tbody>
<tr>
<td>POST</td>
<td>/v1/foo/bar</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">0</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td>POST</td>
<td>/v1/foo/bar/{id}/upload</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">0</td>
<td style="text-align: center;"></td>
</tr>
</tbody>
</table>
</div>
|
<p>When you make the request, you can pass in <code>name='/v1/foo/bar/{id}/upload'</code> and that's what Locust will report it as. From <a href="https://docs.locust.io/en/stable/api.html?highlight=with#locust.clients.HttpSession" rel="nofollow noreferrer">the docs</a>:</p>
<blockquote>
<p>Each of the methods for making requests also takes two additional optional arguments which are Locust specific and doesn’t exist in python-requests. These are:</p>
<p>Parameters:</p>
<p><strong>name</strong> – (optional) An argument that can be specified to use as label in Locust’s statistics instead of the URL path. This can be used to group different URL’s that are requested into a single entry in Locust’s statistics.</p>
</blockquote>
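<p>A minimal sketch of a task using it (hypothetical class, payload and response shape; the <code>name=</code> argument is the point):</p>
<pre><code>from locust import HttpUser, task

class FooUser(HttpUser):
    @task
    def create_and_upload(self):
        resp = self.client.post("/v1/foo/bar", json={"some": "payload"})  # hypothetical payload
        parent_id = resp.json()["id"]  # hypothetical response shape
        # All child requests roll up under one stats entry
        self.client.post("/v1/foo/bar/%s/upload" % parent_id,
                         name="/v1/foo/bar/{id}/upload")
</code></pre>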
|
python-3.x|locust
| 2 |
1,908,896 | 55,382,525 |
How to find the minimum value of a list element which is based on unique value of the same list using Python
|
<p>I have a csv like the following</p>
<pre><code>SKU;price;availability;Title;Supplier
SUV500;21,50 €;1;27-03-2019 14:46;supplier1
MZ-76E;5,50 €;1;27-03-2019 14:46;supplier1
SUV500;49,95 €;0;27-03-2019 14:46;supplier2
MZ-76E;71,25 €;0;27-03-2019 14:46;supplier2
SUV500;32,60 €;1;27-03-2019 14:46;supplier3
</code></pre>
<p>I am trying to get as an output a csv that will have the following</p>
<pre><code>SKU;price;availability;Title;Supplier
SUV500;21,50 €;1;27-03-2019 14:46;supplier1
MZ-76E;5,50 €;1;27-03-2019 14:46;supplier1
</code></pre>
<p>Where for each SKU I want to get <strong>only</strong> the record in which the price is the minimum</p>
<p>How can I do it? I am totally lost: with pandas? With classical if/for loops? With lists? Sets?</p>
<p>Any ideas?</p>
|
<p>In pandas you can do the following</p>
<pre><code>import pandas as pd
df = pd.read_csv('your file', sep=';')  # the data is semicolon-delimited
</code></pre>
<p>As Andy pointed out below, this returns only the price and SKU columns:</p>
<pre><code>df_reduced= df.groupby('SKU')['price'].min()
</code></pre>
<p>To keep all the columns, you can change the groupby to a list of the columns you want to keep, though note this only yields the per-SKU minimum when those columns are constant within each SKU:</p>
<pre><code>df_reduced= df.groupby(['SKU', 'availability', 'Title', 'Supplier'])['price'].min()
</code></pre>
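<p>Two caveats with this data: the prices are strings like <code>21,50 €</code>, so <code>min()</code> compares them lexicographically, and grouping by extra columns gives one row per combination rather than the cheapest record per SKU. A sketch that converts the price to a number and keeps the entire cheapest row per SKU:</p>
<pre><code>df['price_num'] = (df['price'].str.replace(' €', '', regex=False)
                              .str.replace(',', '.', regex=False)
                              .astype(float))
cheapest = df.loc[df.groupby('SKU')['price_num'].idxmin()]
</code></pre>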
|
python|python-3.x
| 1 |
1,908,897 | 55,423,157 |
Python to read xml
|
<p>I am not able to make Python read the XML info. Please see my XML line below. It does not seem to be in the normal form of others I have seen before.</p>
<pre><code>b'<?xml version="1.0" encoding="UTF-8" standalone="yes"?>\n<mtsInputForm source="None">\n <type>2109</type>\n <serial>000777R</serial>\n <xmlMessages>\n <xmlMessage type="system"/>\n <xmlMessage type="error" num="1">No information was found matching the search criteria.</xmlMessage>\n </xmlMessages>\n</mtsInputForm>\n'
</code></pre>
<p>I need to get this message "No information was found matching the search criteria" from the XML line above.
How do I do that?</p>
|
<p>I just found out:</p>
<pre><code>import xml.etree.ElementTree as ET

XML_line = '''<?xml version="1.0" encoding="UTF-8" standalone="yes"?>\n<mtsInputForm source="None">\n <type>2109</type>\n <serial>000777R</serial>\n <xmlMessages>\n <xmlMessage type="system"/>\n <xmlMessage type="error" num="1">No information was found matching the search criteria.</xmlMessage>\n </xmlMessages>\n</mtsInputForm>\n'''
source = str.encode(XML_line)
root = ET.fromstring(source)
info_list = []
for msg in root.findall(".//xmlMessage"):  # the XML has two tags with the same name "xmlMessage"
    information = msg.text
    if isinstance(information, str):  # the "system" tag is empty, so its text is None
        info_list.append(information)
        print(information)
</code></pre>
<p>The XML has two tags with the same name "xmlMessage", and the code above finds both: the first (type="system") is empty, so its text is None, while the second is a str holding the error message.</p>
|
python-3.x
| 0 |
1,908,898 | 57,691,610 |
Filling rows with conditions in Pandas
|
<p>input data:</p>
<pre><code>df = pd.DataFrame({'A': ['NBN 3','test text1','test text2',
                         'NBN 3.1 new text','test 1','test 2']}, columns=['A','B'])
print(df)
A B
0 NBN 3
1 test text1
2 test text2
3 NBN 3.1 new text
4 test 1
5 test 2
</code></pre>
<p>I need to create a new column <code>df['B']</code> filled with the NBN value and number. I want to go from top to bottom of this df and fill the rows with the first NBN value until the next NBN value shows up.</p>
<p>expected output:</p>
<pre><code> A B
0 NBN 3 NBN 3
1 test text1 NBN 3
2 test text2 NBN 3
3 NBN 3.1 new text NBN 3.1
4 test 1 NBN 3.1
5 test 2 NBN 3.1
</code></pre>
<p>and so on.</p>
<p>right now i can only use</p>
<p><code>df['B'] = df['A'].str.contains(r'^NBN \d|^NBN \d\.\d')</code></p>
<pre><code> A B
0 NBN 3 True
1 test text1 False
2 test text2 False
3 NBN 3.1 new text True
4 test 1 False
5 test 2 False
</code></pre>
<p>it will show me which rows are True or not. but i have problem with filling then in the way i need.
Any help? Thanks!</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.where.html" rel="nofollow noreferrer"><code>Series.where</code></a> with your mask and forward filling missing values:</p>
<pre><code>df['B'] = df['A'].where(df['A'].str.contains('NBN')).ffill()
#your solution should be changed
#df['B'] = df['A'].where(df['A'].str.contains(r'^NBN \d|^NBN \d\.\d')).ffill()
print(df)
A B
0 NBN 3 NBN 3
1 test text1 NBN 3
2 test text2 NBN 3
3 NBN 3.1 NBN 3.1
4 test 1 NBN 3.1
5 test 2 NBN 3.1
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.extract.html" rel="nofollow noreferrer"><code>Series.str.extract</code></a> and forward filling missing values:</p>
<pre><code>df['B'] = df['A'].str.extract(r'^(NBN\s+\d\.\d|NBN\s+\d)', expand=False).ffill()
</code></pre>
|
python|pandas
| 4 |
1,908,899 | 54,042,605 |
How can you fix this error while installing PIP on Windows 10?
|
<p>I am trying to install pip on my Windows machine. I get the latest version of get-pip.py from <code>https://bootstrap.pypa.io/</code></p>
<p>Then, I fire up the command prompt, locate <code>get-pip.py</code> on my system and try the command: </p>
<p><code>python get-pip.py</code></p>
<p>Every time I try this, I get the following error:</p>
<p><a href="https://i.stack.imgur.com/tBL6G.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tBL6G.png" alt="enter image description here"></a></p>
<p>Please help! </p>
|
<p>Two initial things:</p>
<ol>
<li>Which version of Python have you installed? Recent versions come with pip installed by default.</li>
<li>Have you tried running cmd as administrator? I've had issues in the past that running cmd as admin resolved.</li>
</ol>
<p>Let us know!</p>
|
python|pip|windows-10
| 1 |