Unnamed: 0 | id | title | question | answer | tags | score |
---|---|---|---|---|---|---|
1,901,300 | 16,278,660 |
Print only variables that are True Python 3.3
|
<p>I have a list of variables. Some are equal to True, some are not. I would like to know the easiest way to print only the ones that are 'True'. It doesn't matter what order they are printed in, but they need to be separated by a blank line in the output.</p>
<p>For example:</p>
<pre><code>Var1 = "Cat"
Var2 = "Dog"
Var3 = "Sheep"
Var4 = ""
Var5 = ""
Var6 = "Horse"
</code></pre>
<p>And I need to somehow get the system to print:</p>
<pre><code>Cat
Dog
Sheep
Horse
</code></pre>
<p>I was thinking something like:</p>
<pre><code>print True in [Var1, Var2, Var3, Var4, Var5, Var6]
</code></pre>
<p>But obviously that doesn't work as it doesn't seem to be correct usage (I was just guessing). Any ideas would be very much appreciated!</p>
|
<p>Although, as noted in a comment to your question, <code>VarX, VarY, ...</code> is a common anti-pattern and you should use a list instead, you could use:</p>
<pre><code>print('\n'.join(filter(bool, [Var1, Var2, Var3, Var4, Var5, Var6])))
</code></pre>
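<p>A quick sketch of how this behaves, using the variables from the question (<code>filter(bool, ...)</code> drops every falsy value, which includes empty strings):</p>

```python
# Variables mirroring the question's example
Var1, Var2, Var3, Var4, Var5, Var6 = "Cat", "Dog", "Sheep", "", "", "Horse"

# filter(bool, ...) keeps only truthy (non-empty) strings
result = '\n'.join(filter(bool, [Var1, Var2, Var3, Var4, Var5, Var6]))
print(result)  # Cat, Dog, Sheep, Horse on separate lines
```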
|
python|python-3.3
| 5 |
1,901,301 | 31,784,081 |
time.sleep() in unit test Python when using threads
|
<p>I'm learning threads in Python and thought unit testing would fit my needs.</p>
<p>Using this <a href="http://www.tutorialspoint.com/python/python_multithreading.htm" rel="nofollow">http://www.tutorialspoint.com/python/python_multithreading.htm</a> as my starting point. However, when I use time.sleep() in my function, the test seems to return before that code block runs. </p>
<pre><code>import unittest
import thread
import time


class Test(unittest.TestCase):

    def test_threads(self):
        thread.start_new_thread(self.print_time, ("Thread 1", 1))

    def print_time(self, threadName, delay):
        count = 0
        while count < 5:
            time.sleep(delay)
            count += 1
            print "%s: %s" % (threadName, time.ctime(time.time()))


if __name__ == "__main__":
    unittest.main()
</code></pre>
<p>Running this code outputs nothing.
Removing time.sleep(delay) outputs:</p>
<pre><code>Thread 1: Mon Aug 03 11:36:56 2015
Thread 1: Mon Aug 03 11:36:56 2015
Thread 1: Mon Aug 03 11:36:56 2015
Thread 1: Mon Aug 03 11:36:56 2015
Thread 1: Mon Aug 03 11:36:56 2015
</code></pre>
<p>How can I use time.sleep() when I'm unit testing in Python?</p>
|
<p>The behaviour you see is because your script has ended before the thread has had a chance to print its output. According to the <a href="https://docs.python.org/2/library/thread.html" rel="nofollow" title="docs">docs</a>, under 'Caveats': </p>
<blockquote>
<p>When the main thread exits, it is system defined whether the other threads survive</p>
</blockquote>
<p>So it might well be that your thread is killed during its first sleep(). Even if the thread survived, I doubt you would see output; from the same doc:</p>
<blockquote>
<p>When the main thread exits [..] the standard I/O files are not flushed</p>
</blockquote>
<p>Without your <code>time.sleep(delay)</code> apparently the <code>print</code>'s are ready before the script has finished. </p>
<p>A naive way to fix this is to have your test method sleep() for the amount of time it takes the subthread to finish, i.e. to replace your test method with</p>
<pre><code>from threading import Thread

def test_threads(self):
    Thread(target=self.print_time, args=("Thread 1", 1)).start()
    time.sleep(6)
</code></pre>
<p>Normally, though, you would use the <code>threading</code> module for this type of work. When you use it to start threads in non-daemon mode (the default), the calling thread only stops after all threads it started have finished.</p>
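<p>A more deterministic variant of that idea is to <code>join()</code> the thread so the test waits exactly as long as the worker needs, instead of guessing a sleep duration. A sketch (the names are illustrative, and the delay is shortened so it runs quickly):</p>

```python
import threading
import time

results = []

def print_time(thread_name, delay, count=5):
    # same loop shape as the question's print_time, collecting output instead
    for _ in range(count):
        time.sleep(delay)
        results.append("%s: %s" % (thread_name, time.ctime()))

t = threading.Thread(target=print_time, args=("Thread 1", 0.01))
t.start()
t.join()  # block until the worker finishes, so no output is lost
print(len(results))  # 5
```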
|
python|multithreading|unit-testing
| 2 |
1,901,302 | 38,842,133 |
How to sort dictionary's multiple values in ascending order?
|
<p>I have to sort dictionary values in ascending order. Then I have to exclude the zero element from the dictionary. I have found lots of examples of sorting a dictionary by keys or values, but not sorting the values themselves. Any help on this? </p>
<p>Input:</p>
<pre><code>1467606570.0,192.168.123.241,0.0
1467606817.0,192.168.123.241,247.0
1467607136.0,192.168.123.241,319.0
1467607244.0,192.168.123.241,108.0
1467607642.0,192.168.123.241,398.0
1467608334.0,192.168.123.241,692.0
1467606628.0,192.168.123.240,0.0
1467606876.0,192.168.123.240,248.0
1467607385.0,192.168.123.240,509.0
1467606679.0,192.168.123.246,0.0
1467607084.0,192.168.123.246,405.0
1467607713.0,192.168.123.246,629.0
1467608102.0,192.168.123.246,389.0
1467607524.0,192.168.123.242,0.0
1467608257.0,192.168.123.242,733.0
1467608607.0,192.168.123.242,350.0
1467608669.0,192.168.123.245,0.0
1467608813.0,192.168.123.245,144.0
</code></pre>
<p>Code:</p>
<pre><code>import csv
import operator

mydict = {}
#sorted_dict = {}
reader = csv.reader(open("/tmp/log/qpm.csv", "rb"))
for i, rows in enumerate(reader):
    if i == 0: continue  # skip the header row
    k = rows[1]
    v = rows[2]
    if not k in mydict:
        mydict[k] = [v]
    else:
        mydict[k].append(v)

#for key, value in mydict.iteritems():
for key, value in sorted(mydict.iteritems(), key=operator.itemgetter(1), reverse=True):
    print key, value
</code></pre>
<p>Output :</p>
<pre><code>192.168.123.241 ['247.0', '319.0', '108.0', '398.0', '692.0']
192.168.123.242 ['0.0', '733.0', '350.0']
192.168.123.246 ['0.0', '405.0', '629.0', '389.0']
192.168.123.240 ['0.0', '248.0', '509.0']
192.168.123.245 ['0.0', '144.0']
</code></pre>
<p>Required Output:</p>
<pre><code>192.168.123.241 ['108.0', '319.0', '398.0', '692.0']
192.168.123.242 ['0.0', '350.0', '733.0']
192.168.123.246 ['0.0', '389.0', '405.0', '629.0']
192.168.123.240 ['0.0', '248.0', '509.0']
192.168.123.245 ['0.0', '144.0']
</code></pre>
<p>How do I sort the values in ascending order?</p>
|
<p>Based on the sample output I suspect you want to sort the <em>values</em> for each key? If so you could do this by changing your print statement to be:</p>
<pre><code>print key, sorted(value)
</code></pre>
<p>Note that the entries in the <code>values</code> list are strings and so will sort in lexicographic order, which I suspect is not what you want. To sort as floats you can specify a custom key function:</p>
<pre><code>print key, sorted(value, key=lambda x: float(x))
</code></pre>
<p>Alternatively you could store the values as floats when you initially read the data:</p>
<pre><code>if not k in mydict:
    mydict[k] = [float(v)]
else:
    mydict[k].append(float(v))
</code></pre>
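<p>A quick check of the two sort orders (values adapted from the question's data, with '55.0' added so the difference is visible):</p>

```python
values = ['247.0', '55.0', '108.0', '692.0']

# String (lexicographic) order: '55.0' sorts after '398.0'-style values
print(sorted(values))             # ['108.0', '247.0', '55.0', '692.0']
# Numeric order via a key function
print(sorted(values, key=float))  # ['55.0', '108.0', '247.0', '692.0']
```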
|
python-2.7
| 1 |
1,901,303 | 38,735,225 |
typeerror: __init__() got an unexpected keyword argument 'timeout' pymongo
|
<p>I'm trying to extract Facebook data into MongoDB. I'm using Python 2.7.3 and pymongo 3.3.0 on a Linux (RHEL) environment. While extracting the data, I got the following error:</p>
<blockquote>
<p><code>Exception AttributeError: "'Cursor' object has no attribute '_Cursor__id'" in <bound method Cursor.__del__ of <pymongo.cursor.Cursor object at 0x48fa110>> ignored
(<type 'exceptions.TypeError'>, TypeError("__init__() got an unexpected keyword argument 'timeout'",),<traceback object at 0x490a638>)</code></p>
</blockquote>
<p>Please suggest me how to fix this.</p>
|
<p>I had the same issue while using <code>collection.find()</code>; the parameter to use is <strong>not</strong> "<em>timeout</em>".</p>
<p>The <strong>correct parameter is</strong> "<em>no_cursor_timeout</em>". This parameter will avoid the exception on cursor timeout.</p>
<p>Example of usage:</p>
<pre><code>collection.find(no_cursor_timeout=True)
</code></pre>
<p>This will avoid your (probably) original exception:</p>
<pre><code>pymongo.errors.CursorNotFound: Cursor not found, cursor id:
</code></pre>
<p>PS: I will update my answer if you are not using <code>find</code> on a collection. In case this is not helpful, please update your question with an example of usage.</p>
|
linux|python-2.7|pymongo|rhel
| 4 |
1,901,304 | 38,690,358 |
Andrews Plot random numbers in corner
|
<p>I am trying to plot some country data in an Andrews plot, but the number '1e12' keeps showing up in the top right corner and I have no idea why it's there or how to get rid of it. Here is the plot itself: </p>
<p><a href="https://i.stack.imgur.com/6PoTc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6PoTc.png" alt="enter image description here"></a></p>
<p>Here is the code I used to make it, pretty standard Andrews Plot: </p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from pandas.tools.plotting import radviz
from pandas.tools.plotting import table
from pandas import read_csv
from pandas.tools.plotting import andrews_curves
import os
filepath ="/Users/.../DefenseAndrews.csv"
os.chdir(os.getcwd())
os.getcwd()
dc = read_csv(filepath,
              header=0, usecols=['Country','GDP','ME','GE','Trade','PopDensity'])
plt.figure()
andrews_curves(dc, 'Country')
plt.legend(loc='best', bbox_to_anchor=(1.0, 4.3))
plt.savefig('figure4_AndrewsPlot.eps', format='eps', dpi=1200)
plt.show()
</code></pre>
<p>My previous solution was to save the figure and manually erase the number in an art program. However, I now have to create the images as EPS files, which I can't edit after the fact. Any help or advice would be greatly appreciated. </p>
|
<p>That value is the scale on the axis. You'll have to divide your data by a factor of about 1e11. See the following example with iris data.</p>
<p><a href="https://raw.github.com/pydata/pandas/master/pandas/tests/data/iris.csv" rel="nofollow noreferrer">iris data linked here</a></p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from pandas.tools.plotting import andrews_curves

data1 = pd.read_csv('iris.csv')
data2 = data1.copy()
data2.iloc[:, :4] *= 1e11
fig, axes = plt.subplots(1, 2, figsize=(10, 5))
andrews_curves(data1, 'Name', ax=axes[0])
andrews_curves(data2, 'Name', ax=axes[1])
</code></pre>
<p><a href="https://i.stack.imgur.com/uNXSV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uNXSV.png" alt="enter image description here"></a></p>
<p>You'll notice the left chart does not have this scale number while the right chart does. I deliberately multiplied the data charted on the right by a factor of 1e11.</p>
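<p>If you would rather keep the original data values, matplotlib can also be told not to use a scale factor on the axis at all. This is an alternative to dividing the data, and it assumes the '1e12' really is the y-axis offset/scale text that matplotlib draws for large values:</p>

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 5e11, 1e12])
# 'plain' disables scientific notation, so no 1e12 multiplier is drawn
ax.ticklabel_format(style='plain', axis='y')
fig.canvas.draw()
print(repr(ax.yaxis.get_offset_text().get_text()))  # '' : the scale text is gone
```

The trade-off is that the tick labels themselves become long (e.g. 1000000000000), so dividing the data as shown above usually reads better.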
|
python|pandas|matplotlib
| 1 |
1,901,305 | 38,631,831 |
How to write a small tokenizer in Python?
|
<p>Normally, Python calls functions by</p>
<pre><code>func(arg0, arg1)
</code></pre>
<p>But I would like to change to</p>
<pre><code>func arg0 arg1
</code></pre>
<p>For example,</p>
<pre><code>#Something...
cmd = input()
interpret(cmd)
#Something...
</code></pre>
<p>If I input <code>'func arg0 arg1'</code>, then I expect Python to execute <code>func(arg0, arg1)</code>.</p>
<p><strong>Args will contain string, so that we can't simply split words.</strong></p>
<p>Actually, I would like to write some scripts to use on my mobile. So it would be a little annoying to type parentheses.</p>
|
<p>You can do this:</p>
<pre><code>class tryClass:
    def callFunction(self, arg, arg2):
        print("In call")
        print(arg)
        print(arg2)

input = str(input())
input = input.split(" ")
funcName = input[0]

my_cls = tryClass()
method = getattr(my_cls, funcName)
method(input[1], input[2])
</code></pre>
<p>If I type <code>callFunction hello world</code> as input, it works :)</p>
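<p>Since the question notes that arguments may contain strings (so a plain <code>split(" ")</code> breaks on text with spaces), the standard library's <code>shlex</code> module is worth a mention here: it tokenizes shell-style, keeping quoted strings together. A sketch (the command string is illustrative):</p>

```python
import shlex

# A quoted argument survives tokenization as a single item
cmd = 'callFunction "hello world" arg2'
tokens = shlex.split(cmd)
print(tokens)  # ['callFunction', 'hello world', 'arg2']
```

The first token can then be looked up with <code>getattr</code> exactly as in the answer above, with the remaining tokens passed as arguments.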
|
python|parsing|interpreter
| 0 |
1,901,306 | 52,708,379 |
Comparing two different files from Separate Folders with different extensions and opening one with pywinauto
|
<p>I am trying to compare two files from separate folders with different extensions and then trying to open one of them with pywinauto. The application opens, but the file with the extension I specify does not open.</p>
<p>I tried to iterate over multiple files; when that did not work, I gave a specific file name. Still, only the application opens.</p>
<p>Below is the code that i have tried.</p>
<pre><code>from pywinauto.application import Application
import os

app = Application(backend="uia").start('C:\Program Files (x86)\Datawatch Monarch 14\DWMonarch.exe')
#app.Dialog.print_control_identifiers()
path = (r'C:\Check\Monarch\ICRDIS.dprj')
path2 = (r'C:\DOLV\ICRDIS.txt')
name1 = path.rsplit('.', 1)[0]
name2 = path2.rsplit('.', 1)[0]
#for name1 in path:
#    for name2 in path2:
if name1 == name2:
    try:
        print(name1)
        app.Dialog.child_window(title="Open", auto_id="Open", control_type="SplitButton")
        app.Dialog.child_window(title="File", auto_id="PART_ApplicationButton", control_type="Button")
        app.Dialog.Menu.Open(path.dprj)
        #app.Dialog.Open('name1.dprj')
    except:
        print("No File Name Matches")
</code></pre>
<p>This is the control identifiers for Monarch with the Open Option:</p>
<pre><code> Menu - 'Ribbon' (L-4, T30, R1924, B171)
| ['RibbonMenu', 'Ribbon', 'Menu']
| child_window(title="Ribbon", auto_id="MainRibbon", control_type="MenuBar")
| |
| | Separator - '' (L26, T1, R29, B21)
| | ['17', 'Separator3']
| | child_window(auto_id="beforeSeparator", control_type="Separator")
| |
| | SplitButton - 'Open' (L31, T-3, R72, B25)
| | ['OpenSplitButton', 'Open', 'SplitButton', 'SplitButton0', 'SplitButton1']
| | child_window(title="Open", auto_id="Open", control_type="SplitButton")
</code></pre>
<p>A couple of questions: how do I open the specific file using the mentioned controls, and how do I iterate over multiple files?</p>
<p>What am I doing incorrectly for this not to work? Please advise.</p>
<p>Regards,
Ren.</p>
|
<p>Thanks Vasily for the suggestions. Below is my entire code that works.</p>
<pre><code>import os
import shutil
from pywinauto.application import Application
from pywinauto import Desktop
import pandas as pd
from datetime import date
from datetime import datetime, timedelta
from dateutil.relativedelta import relativedelta
from os import walk
from os.path import splitext
import subprocess
import time
from time import sleep

monarch_files = r'C:\Health Check\Support Monarch Project'
monarchPath = r'C:\Program Files (x86)\Datawatch Monarch 14\DWMonarch.exe'
fileformat = 'xprj'
fileformat2 = 'dprj'
rundate = date.today()

# Assumption: dolv_files lists the folder of report files being matched
# (this definition was missing from the original post)
dolv_files = os.listdir(r'C:\DOLV')

job_name = pd.read_excel(r'C:\Check\Job Name.xlsx',
                         sheet_name='JobName',
                         header=0
                         )

for index, row in job_name.iterrows():
    jobname = row['JobName']
    filename = row['ReportName']
    for file_name in dolv_files:
        fname = file_name.rsplit('.', 1)[0]
        if fname == filename:
            print(jobname)
            file_open = os.path.join(monarch_files, jobname + "." + fileformat)
            file_open1 = os.path.join(monarch_files, jobname + "." + fileformat2)
            if os.path.exists(file_open):
                subprocess.Popen([monarchPath, file_open])
                sleep(10)
                dlg = Desktop(backend='uia').window(title_re='Datawatch Monarch *')
                sleep(5)
                try:
                    dlg.Resolve_Missing_Model_Files.OK.invoke()
                    dlg = Desktop(backend='uia').window(title_re='Datawatch Monarch *')
                    dlg.Export.invoke()
                    dlg.Select_All_Exports.invoke()
                    dlg.Run_Exports.invoke()
                    sleep(5)
                    dlg.close()
                    try:
                        dlg.No.invoke()
                    except:
                        print('e')
                except:
                    dlg.Export.invoke()
                    dlg.Select_All_Exports.invoke()
                    dlg.Run_Exports.invoke()
                    sleep(5)
                    dlg.close()
                    try:
                        dlg.No.invoke()
                    except:
                        print('e')
            elif os.path.exists(file_open1):
                subprocess.Popen([monarchPath, file_open1])
                dlg = Desktop(backend='uia').window(title_re='Datawatch Monarch *')
                sleep(5)
                try:
                    dlg.Resolve_Missing_Model_Files.OK.invoke()
                    dlg = Desktop(backend='uia').window(title_re='Datawatch Monarch *')
                    dlg.Export.invoke()
                    dlg.Select_All_Exports.invoke()
                    dlg.Run_Exports.invoke()
                    sleep(5)
                    dlg.close()
                    try:
                        dlg.No.invoke()
                    except:
                        print('e')
                except:
                    dlg.Export.invoke()
                    dlg.Select_All_Exports.invoke()
                    dlg.Run_Exports.invoke()
                    sleep(5)
                    dlg.close()
                    try:
                        dlg.No.invoke()
                    except:
                        print('e')
            else:
                print("File Not Found")
</code></pre>
<p>Regards,
Ren.</p>
|
python-3.x|pywinauto
| 1 |
1,901,307 | 40,431,952 |
Replace all except first row in a multi index
|
<p>I am using pandas and have loaded some data into a dataframe. What I would like to do is replace the scenario frequency column in my data for all but the first value in each group.</p>
<p>My data looks like this:</p>
<pre><code>ExplosionID FireWater FireID Scenario Frequency
111 0 213 4.209055e-15
214 4.209055e-15
215 4.209055e-15
217 4.209055e-15
219 4.209055e-15
220 4.209055e-15
112 0 232 8.388742e-16
233 8.388742e-16
234 8.388742e-16
235 8.388742e-16
237 8.388742e-16
239 8.388742e-16
240 8.388742e-16
</code></pre>
<p>I would like to replace all but the first values in scenario frequency column with 0, so that I end up with this:</p>
<pre><code>ExplosionID FireWater FireID Scenario Frequency
111 0 213 4.209055e-15
214 0
215 0
217 0
219 0
220 0
112 0 232 8.388742e-16
233 0
234 0
235 0
237 0
239 0
240 0
</code></pre>
<p>The first three columns (<code>ExplosionI</code>, <code>FireWater</code>, <code>FireID</code>) are the indexes in a multi-index.</p>
<p>I've defined a function:</p>
<pre><code>#function to replace all but first value in group with 0
def replace_all_except_first(group):
    group.iloc[1:] = 0
    return group
</code></pre>
<p>and have tried the following:</p>
<pre><code>data_to_sum = HL_df_subset.groupby(level=0).apply(replace_all_except_first)
</code></pre>
<p>where <code>HL_df_subset</code> is my dataframe. However, this replaces all values with 0.</p>
<p>I'm new to Python and I know I'm completely misunderstanding how groupby works, but I've been trying all sorts of things and can't get it to work.</p>
<p>Thanks for your help.</p>
|
<ul>
<li><strong><em><code>cumcount</code></em></strong>: finds the ordering within each group. Create a boolean series of where it is not equal to <code>0</code>, i.e. every row that is not the first in its group. </li>
<li><strong><em><code>mask</code></em></strong>: takes the true values and masks the relevant parts of the dataframe. In this case, it turns every position where the cumcount isn't zero into <code>np.nan</code>. </li>
<li><strong><em><code>fillna</code></em></strong>: takes those <code>np.nan</code> and fills them with zero</li>
</ul>
<hr>
<pre><code>HL_df_subset.mask(HL_df_subset.groupby(level=0).cumcount().ne(0)).fillna(0)
</code></pre>
<hr>
<p>consider <code>df</code></p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame(
    dict(A=np.arange(100, 116)),
    pd.MultiIndex.from_product(
        [list('ab'), list('xy'), [1, 2, 3, 4]]))
df
A
a x 1 100
2 101
3 102
4 103
y 1 104
2 105
3 106
4 107
b x 1 108
2 109
3 110
4 111
y 1 112
2 113
3 114
4 115
</code></pre>
<hr>
<pre><code>df.mask(df.groupby(level=[0, 1]).cumcount().ne(0)).fillna(0)
A
a x 1 100.0
2 0.0
3 0.0
4 0.0
y 1 104.0
2 0.0
3 0.0
4 0.0
b x 1 108.0
2 0.0
3 0.0
4 0.0
y 1 112.0
2 0.0
3 0.0
4 0.0
</code></pre>
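<p>One small caveat: going through <code>mask</code> + <code>fillna</code> passes through <code>np.nan</code>, which is why the integer column above comes back as floats (<code>100.0</code>, <code>0.0</code>, ...). A variant of the same idea that skips the NaN step, shown on a single column with <code>where</code> (keep values where the condition is True, otherwise 0):</p>

```python
import numpy as np
import pandas as pd

# A smaller frame with a two-level index, same shape of problem
df = pd.DataFrame(
    dict(A=np.arange(100, 108)),
    pd.MultiIndex.from_product([list('ab'), [1, 2, 3, 4]]))

# cumcount().eq(0) is True only on the first row of each level-0 group
out = df['A'].where(df['A'].groupby(level=0).cumcount().eq(0), 0)
print(out.tolist())  # [100, 0, 0, 0, 104, 0, 0, 0]
```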
|
python|pandas
| 1 |
1,901,308 | 2,106,178 |
PyQt post installation question
|
<p>I successfully installed PyQt on both Mac and PC. To do so I had to install MinGW (on the PC), Xcode (on the Mac), and the Qt 4.6 library. Now that I have PyQt working perfectly, I would like to uninstall MinGW, Xcode, and the Qt library from both machines. </p>
<p>I know I can remove Xcode and MinGW, but what care should I take before removing the Qt library? I know PyQt is still using it, but it is not using the whole 1.5 GB of files installed by the Qt installer. So which files should I copy before removing Qt, and where should I copy them to?</p>
|
<p>You can remove the <code>demos</code> and <code>examples</code> directories inside your qt installation directory... they take up over 1GB of space and are not required. I would leave the rest there, unless you are really worried about space.</p>
<p>If you <em>do</em> try to clean up the Qt installation directory, start by renaming larger files/directories (e.g. add a <code>.old</code> suffix to the name), and see if the features you use in Qt still function. If it breaks, just rename the files/directories back (remove <code>.old</code>).</p>
|
python|pyqt4
| 1 |
1,901,309 | 28,327,108 |
Where can I find VIDLE?
|
<p>I am new to Python. I was trying to learn Python using Anaconda Python 2.7 and its Spyder app. I'd like to use VPython to create 3D objects, so I installed vpython using</p>
<pre><code>conda install -c mwcraig vpython
</code></pre>
<p>which is what they suggested, and it worked. I just couldn't find the VIDLE shortcut anywhere. Can anyone point me in the right direction? </p>
|
<p>Well if anyone else is wondering, I went to <a href="https://vpython.org/contents/download_windows.html" rel="nofollow noreferrer">https://vpython.org/contents/download_windows.html</a></p>
<p>Then followed the instructions under the section labeled "Windows downloads for VPython 6"</p>
|
python-2.7|anaconda|vpython
| 0 |
1,901,310 | 44,078,437 |
Python : ValueError: invalid literal for float()
|
<p>I'm trying to create a dictionary by importing data from an Excel file converted to CSV, and I want to convert the string values of the dictionary into floats, but I get this error in return: <code>ValueError: invalid literal for float(): 437,33</code></p>
<pre><code>import csv
from collections import defaultdict

my_dict = {}
my_dict = defaultdict(lambda: 0, my_dict)
with open('excel_csv_file.csv', 'rb') as file_object:
    reader = csv.reader(file_object, delimiter=';')
    for x in reader:
        my_dict[(x[0], x[1])] = x[2]
my_dict = dict((k, float(v)) for k, v in my_dict.iteritems())
print my_dict
</code></pre>
<p>This is what my_dict looks like </p>
<pre><code>{('11605', 'TV'): '437,33',
('10850', 'SMARTPHONE'): '163,47',
('11380', 'TV'): '1911,72',
('11177', 'SMARTPHONE'): '255,80',
('11237', 'TABLET'): '382,28',
('11238', 'TABLET'): '458,01',
('11325', 'TABLET'): '309,55',
...}
</code></pre>
<p>Why am I getting this error?</p>
<p>Also, is there a way to convert the string value inside the tuple key into an int? (for instance <code>('11605', 'TV')</code> to <code>(11605, 'TV')</code>)?</p>
|
<p>Python uses <code>.</code> (period) to separate the integer and fraction parts of a floating-point number, but your data uses <code>,</code> (comma).</p>
<p>To convert to <code>int</code>, you can use value unpacking:</p>
<pre><code>my_dict = dict(((int(k1), k2), float(v.replace(',', '.'))) for (k1,k2),v in my_dict.iteritems())
</code></pre>
<p>Since you’re using Python 2.7, you can also use a <code>dict</code> comprehension to make this simpler:</p>
<pre><code>my_dict = {(int(k1), k2): float(v.replace(',', '.')) for (k1,k2),v in my_dict.iteritems()}
</code></pre>
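<p>Applied to a couple of entries from the sample data (written with <code>.items()</code> so the sketch also runs on Python 3; on 2.7, <code>iteritems()</code> works the same way):</p>

```python
# Two entries from the question's my_dict
my_dict = {('11605', 'TV'): '437,33', ('10850', 'SMARTPHONE'): '163,47'}

# Convert the string key to int and the comma-decimal string to float
converted = {(int(k1), k2): float(v.replace(',', '.'))
             for (k1, k2), v in my_dict.items()}
print(converted)  # {(11605, 'TV'): 437.33, (10850, 'SMARTPHONE'): 163.47}
```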
<hr>
<p><strong>Bonus:</strong></p>
<p>If you have input from countries other than the US, and you expect you might run into other issues similar to your decimal separation one, you can use the <code>locale</code> module. If you start the script with </p>
<pre><code>import locale
locale.setlocale(locale.LC_ALL, '')
</code></pre>
<p>you can use the various functions in the locale module <code>locale.atof</code>) instead of the built-in conversion methods like <code>float</code>, and it will automatically handle the user’s locale settings.</p>
|
python|python-2.7
| 1 |
1,901,311 | 32,954,612 |
PyYAML customized yaml processing
|
<p>I would like to extend YAML with some custom macros so that I can "reuse" parts of definitions within the same file. Sample:</p>
<pre><code>DEFAULTS:
- a
- b
- c
CUSTOM1:
- %DEFAULTS
- d
CUSTOM2:
- %DEFAULTS
- e
</code></pre>
<p>resulting in </p>
<pre><code>CUSTOM1==['a','b','c','d']
CUSTOM2==['a','b','c','e']
</code></pre>
<p>Doesn't have to be exact same syntax, as long as I can get same functionality out of it. what are my options?</p>
<p>P.S.
I do realize that it is possible to just walk the dictionary after parsing and re-adjust the values, however I'd like to do it while loading.</p>
|
<p>There are no options within the YAML specification. The only thing that comes close is the <a href="http://yaml.org/type/merge.html" rel="nofollow">merge syntax</a>, but that is for merging mappings and doesn't work for sequences. </p>
<p>If you cannot switch to using mappings in your context (and use the <code><<</code> merge), the cleanest way, IMO, to implement this is to make the values of <code>CUSTOM1</code> and <code>CUSTOM2</code> specific types, e.g. <code>expander</code>:</p>
<pre><code>CUSTOM1: !expander
- %DEFAULTS
- d
</code></pre>
<p>that map to objects that interpret the first sequence element as a replaceable value when it starts with <code>%</code>.</p>
|
python|pyyaml
| 1 |
1,901,312 | 34,638,261 |
Unable to pass list to this Python function when using timeit
|
<p>I wrote a small script to generate the running times of a function with varying input. My intent was to plot the numbers and prove to myself that the function indeed had a quadratic running time. Here's the code:</p>
<pre><code>import timeit

seq = [875011, 549220, 400865, 913974, 699921, 108386, 328934, 593902, 805659, 916502, 597662]
subseq = []
num = 1000000 # How many times the algorithm must run

# Quadratic running time
def quad (S):
    n = len(S)
    A = [0] * n
    for j in range(n):
        total = 0
        for i in range(j+1):
            total += S[i]
        A[j] = total / (j+1)
    return A

def plot_func (name):
    print('\n')
    for i in range(len(seq)):
        subseq = seq[0:i+1]
        t = timeit.Timer('{}(subseq)'.format(name), 'from __main__ import {}, subseq'.format(name))
        print(t.timeit(number=num))

plot_func('quad')
</code></pre>
<p>The problem is that the running time doesn't vary, and that's because every time it runs, the function <code>quad</code> refers to the global <code>subseq</code>, which is empty. How can I pass this sub-sequence correctly to this function?</p>
<p>P.S.: I'm also fine using another tool for this job, as long as it can give me the exact running time (in terms of CPU time used) of each iteration of the function.</p>
|
<p>By default, Python thinks that <code>subseq</code> is a variable local to your function. This local name shadows the global variable that you're passing as a parameter to the <code>timeit</code> timer.</p>
<p>To make the assignment operation globally-visible, you need to declare the <code>subseq</code> variable as <code>global</code> before using it in the function:</p>
<pre><code>def plot_func (name):
    global subseq
    print('\n')
    for i in range(len(seq)):
        subseq = seq[0:i+1]
</code></pre>
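<p>On Python 3.5+ there is also a way to avoid the <code>global</code> (and the <code>from __main__ import ...</code> setup string) entirely: <code>timeit.Timer</code> accepts a <code>globals</code> argument, so each measurement can be handed its own namespace. A sketch with a shortened sequence and iteration count so it runs quickly:</p>

```python
import timeit

seq = [875011, 549220, 400865, 913974]

def quad(S):
    # same quadratic prefix-average as in the question
    n = len(S)
    A = [0] * n
    for j in range(n):
        total = 0
        for i in range(j + 1):
            total += S[i]
        A[j] = total / (j + 1)
    return A

for i in range(len(seq)):
    subseq = seq[:i + 1]
    # hand the statement exactly the names it needs, no globals involved
    t = timeit.Timer('quad(subseq)', globals={'quad': quad, 'subseq': subseq})
    print(len(subseq), t.timeit(number=1000))
```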
|
python|algorithm|timeit
| 1 |
1,901,313 | 34,738,764 |
Size of widget in Gtk.Grid
|
<p>I'm trying to arrange multiple Gtk.Entry widgets in a 9x9 Gtk.Grid. I have set the size of the entries with <code>width_chars=1</code> (yes, I want them small). The problem is that the grid doesn't respect the entry's size and expands it. I tried using a Gtk.Box instead of a Grid (though I couldn't arrange it 9x9 that way), and it did render the entries with width 1. The program is in Python 3. Here is my code; what am I doing wrong?</p>
<pre><code>from gi.repository import Gtk
wind=Gtk.Window(title='My App')
wind.connect('delete-event',Gtk.main_quit)
grid=Gtk.Grid()
wind.add(grid)
s=[[0 for x in range(9)] for x in range(9)]
for i,j in [(x,y) for x in range(9) for y in range(9)]:
    s[i][j] = Gtk.Entry(width_chars=1, xalign=0.5, text='0')
    grid.attach(s[i][j], j, i, 1, 1)
wind.show_all()
Gtk.main()
</code></pre>
<p>This outputs this: <a href="http://i.stack.imgur.com/Be2jB.jpg" rel="nofollow">Program output</a></p>
<p>Edit 2: After posting this question, I came to know that the grid inherits the 'expand' property from its children. So I have tried setting the <code>halign</code> and <code>hexpand</code> properties of the entries to <code>False</code>, but it still produces the same result.<br>
I also read about Glade and decided to try to create the same layout through it. But it still produces the same output. Is there no way to stop the widgets / grid / window from expanding like that?</p>
<p>Edit 3: I was on Ubuntu GNOME 15.10 when I originally asked and answered this question. Right now I am on regular Ubuntu 14.04, and I noticed something interesting: I tried the same program, and this time it turned out that I didn't need either of the above-mentioned commands to make it work. It would be very helpful if someone could explain why that happens.</p>
|
<p>The solution turned out to be setting a default size for the window and setting its <code>hexpand</code> value to <code>False</code>.</p>
<pre><code>wind.set_default_size(200,200)
wind.set_hexpand(False)
</code></pre>
|
python|gtk|python-3.4
| 1 |
1,901,314 | 12,078,575 |
Unix: Getting Mouse -coordinates over X like the Mathematica?
|
<p><strong>Mathematica</strong></p>
<pre><code>DynamicModule[{list = {}},
EventHandler[
Dynamic[Framed@
Graphics[{BSplineCurve[list], Red, Line[list], Point[list]},
PlotRange -> 2]], {{"MouseClicked",
1} :> {AppendTo[list,
MousePosition["Graphics"]]}}, {"MouseClicked", 2} :>
Print[list]]]
</code></pre>
<p>I want to do the above at home, where I do not have Mathematica. Use whatever tool you want; I like to use Python and R, but I am happy with any solution candidate. The first thing that came to my mind was RStudio and this question <a href="https://stackoverflow.com/questions/9812547/r-gui-vizualiser-with-command-line-access-browser-based-letting-users-to-s">here</a>, but I am unsure whether there is a better way to do this.</p>
<p><em>How can I do this kind of interactive GUI work over X?</em></p>
<p><strong>Procedure of the Mathematica -snippet outlined</strong></p>
<pre><code>1. you click points
2. you see a BSplineCurve forming between the points, and the points are red
3. the points are saved to an array
4. when finished, you click the right mouse button so the array is printed to stdout
</code></pre>
|
<p>Here is an R function that does what you describe:</p>
<pre><code>dynmodfunc <- function() {
    plot(0:1, 0:1, ann=FALSE, type='n')
    mypoints <- matrix(ncol=2, nrow=0)
    while( length(p <- locator(1, type='p', col='red')) ) {
        mypoints <- rbind(mypoints, unlist(p))
        plot(mypoints, col='red', ann=FALSE, xlim=0:1, ylim=0:1)
        if (nrow(mypoints) > 1) {
            xspline(mypoints, shape=-1)
        }
    }
    mypoints
}

(out <- dynmodfunc())
</code></pre>
<p>You can change the <code>shape</code> argument to <code>xspline</code> to change the style of spline. This version returns a 2 column matrix with the x and y values, but that could be changed to another structure if prefered. There are plenty of other things that could be customized as well.</p>
<p>Added function to get the output to paste into Mathematica:</p>
<pre><code>matrix2mathematica <- function(x) {
    paste0('{',
           paste0('{', x[,1], ', ', x[,2], '}', collapse=', '),
           '}')
}

cat(matrix2mathematica(out))
</code></pre>
|
python|r|language-agnostic|wolfram-mathematica|x11
| 5 |
1,901,315 | 41,924,857 |
Fitting partial Gaussian
|
<p>I'm trying to fit a sum of gaussians using <a href="http://scikit-learn.org/stable/index.html" rel="noreferrer">scikit-learn</a> because the scikit-learn <a href="http://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html#sklearn.mixture.GaussianMixture.fit" rel="noreferrer">GaussianMixture</a> seems much more robust than using curve_fit.</p>
<p><strong>Problem</strong>: It doesn't do a great job of fitting a truncated part of even a single Gaussian peak:</p>
<pre><code>from sklearn import mixture
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import numpy as np

clf = mixture.GaussianMixture(n_components=1, covariance_type='full')
data = np.random.randn(10000)
data = [[x] for x in data]
clf.fit(data)

data = [item for sublist in data for item in sublist]
rangeMin = int(np.floor(np.min(data)))
rangeMax = int(np.ceil(np.max(data)))
h = plt.hist(data, range=(rangeMin, rangeMax), normed=True);
plt.plot(np.linspace(rangeMin, rangeMax),
         mlab.normpdf(np.linspace(rangeMin, rangeMax),
                      clf.means_, np.sqrt(clf.covariances_[0]))[0])
</code></pre>
<p>gives
<a href="https://i.stack.imgur.com/m5iaA.png" rel="noreferrer"><img src="https://i.stack.imgur.com/m5iaA.png" alt="enter image description here"></a>
now changing <code>data = [[x] for x in data]</code> to <code>data = [[x] for x in data if x <0]</code> in order to truncate the distribution returns
<a href="https://i.stack.imgur.com/3qIPV.png" rel="noreferrer"><img src="https://i.stack.imgur.com/3qIPV.png" alt="enter image description here"></a>
Any ideas how to get the truncation fitted properly? </p>
<p><strong>Note</strong>: The distribution isn't necessarily truncated in the middle, there could be anything between 50% and 100% of the full distribution left.</p>
<p>I would also be happy if anyone can point me to alternative packages. I've only tried curve_fit but couldn't get it to do anything useful as soon as more than two peaks are involved.</p>
|
<p>A bit brutish, but a simple solution would be to split the curve into two halves (<code>data = [[x] for x in data if x < 0]</code>), mirror the left part (<code>data.append([-data[d][0]])</code>), and then do the regular Gaussian fit.</p>
<pre><code>import numpy as np
from sklearn import mixture
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
np.random.seed(seed=42)
n = 10000
clf = mixture.GaussianMixture(n_components=1, covariance_type='full')
#split the data and mirror it
data = np.random.randn(n)
data = [[x] for x in data if x < 0]
n = len(data)
for d in range(n):
data.append([-data[d][0]])
clf.fit(data)
data = [item for sublist in data for item in sublist]
rangeMin = int(np.floor(np.min(data)))
rangeMax = int(np.ceil(np.max(data)))
h = plt.hist(data[0:n], bins=20, range=(rangeMin, rangeMax), normed=True);
plt.plot(np.linspace(rangeMin, rangeMax),
mlab.normpdf(np.linspace(rangeMin, rangeMax),
clf.means_, np.sqrt(clf.covariances_[0]))[0] * 2)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/Nb4FB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Nb4FB.png" alt="enter image description here"></a></p>
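<p>The effect of the mirroring trick can be sanity-checked with plain NumPy, without sklearn: after truncating a standard normal at zero and mirroring the left half, the sample mean and standard deviation should recover the original parameters (0 and 1). A minimal sketch of the same idea:</p>

```python
import numpy as np

np.random.seed(42)
data = np.random.randn(100_000)

# Keep only the truncated (left) half, as in the question
left = data[data < 0]

# Mirror it to rebuild a symmetric sample
mirrored = np.concatenate([left, -left])

# The mirrored sample should look like the original N(0, 1)
print(mirrored.mean())  # ~0 by construction
print(mirrored.std())   # close to 1
```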
|
numpy|scipy|scikit-learn|curve-fitting|gaussian
| 2 |
1,901,316 | 47,357,612 |
Convert integers inside a list into strings and then a date in python 3.x
|
<p>I've just started studying Python in college and I have a problem with this exercise:
basically, I have to take a list of integers, for example [10,2,2013,11,2,2014,5,23,2015], turn each group of elements that forms a date into a string, like ['1022013', '1122014', '5232015'], and then put a / between the parts so I get ['10/2/2013', '11/2/2014', '5/23/2015']. It needs to be a function, and the length of the list is assumed to be a multiple of 3. How do I go about doing this?
I wrote this code to start:</p>
<pre><code>def convert(lst):
    for element in lst:
        result = str(element)
        return result
</code></pre>
<p>but from a list [1,2,3] only returns me '1'.</p>
|
<p>To split your list into size 3 chunks you use a <a href="https://docs.python.org/3/library/stdtypes.html#range" rel="nofollow noreferrer"><code>range</code></a> with a <code>step</code> of 3</p>
<pre><code>for i in range(0, len(l), 3):
print(l[i:i+3])
</code></pre>
<p>And joining the pieces with <code>/</code> is as simple as </p>
<pre><code>'/'.join([str(x) for x in l[i:i+3]])
</code></pre>
<p>Throwing it all together into a function:</p>
<pre><code>def make_times(l):
results = []
for i in range(0, len(l), 3):
results.append('/'.join([str(x) for x in l[i:i+3]]))
return results
</code></pre>
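<p>Running the function on the question's sample input illustrates the result (note that the input triple 11, 2, 2014 yields '11/2/2014', not the '11/22/2014' shown in the question):</p>

```python
def make_times(l):
    results = []
    for i in range(0, len(l), 3):
        # Join each group of three numbers with slashes
        results.append('/'.join(str(x) for x in l[i:i+3]))
    return results

print(make_times([10, 2, 2013, 11, 2, 2014, 5, 23, 2015]))
# → ['10/2/2013', '11/2/2014', '5/23/2015']
```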
|
python|python-3.x|ipython
| 1 |
1,901,317 | 47,314,558 |
How to organize Twitter data of CSV in PhpMyAdmin
|
<p>I'm working on an application where I need to store a collection of tweets along with attributes such as Tweet ID, Date of Tweet, Language and Username inside of a MySQL database. </p>
<p><strong>This is an image of what I'm aiming for:</strong> <a href="https://i.imgur.com/1EC3ICc.png" rel="nofollow noreferrer">https://i.imgur.com/1EC3ICc.png</a></p>
<p>To do this, I created a program in python that collects 100+ tweets on Twitter in a JSON file. I then converted the JSON file to a CSV file using Microsoft Excel. After this I imported the CSV file in PHPMyAdmin as a table and I got the following outcome: <a href="https://i.imgur.com/tLkIA0T.png" rel="nofollow noreferrer">https://i.imgur.com/tLkIA0T.png</a> <em>(10 rows x 185 columns)</em>.</p>
<p>The problem with the above is that some tweets have more data such as media, this causes the data to expand over multiple columns.</p>
<p>How do I <strong>quickly</strong> clean this table so that I only have my desired attributes in the table? Do I need to go back to scratch and work from my Python code or can I clean from the Table/CSV file?</p>
|
<p>If Tweets are parsed in JSON format and you need only some of the fields, I recommend you to use JSON module to parse the needed fields and Pandas module to convert them into structured table in order to write it to MySQL, for example:</p>
<pre><code>import json
import pandas as pd
#Open and read the text file where all the Tweets are
with open('tweets.txt') as f:
tweets = f.readlines()
#Convert the read Tweets into JSON object
tweets_json = [json.loads(tweet) for tweet in tweets]
#Convert the list of Tweets into a structured dataframe
df = pd.DataFrame(tweets_json)
#Finally choose the attributes you need
df = df[['created_at', 'id', ...]]
#To write table into MySQL
df.to_sql(...)
</code></pre>
|
python|mysql|csv|twitter
| 0 |
1,901,318 | 47,243,812 |
how to copy a file contents and add a user input in the correct order
|
<p>I am trying to make a new file, copy the words from another file into it, ask the user for input, and insert the new word in the right place.</p>
<pre><code>fo = open("search.txt", 'r')
item = fo.readlines()
print(item)
fo.close()
newitem = input("please enter a word")
item.append(newitem)
mynewitem = sorted(item)
print(mynewitem)
with open("file2.txt", "w") as output:
output.write(mynewitem)
</code></pre>
|
<p>The <code>search.txt</code> file has the following contents:</p>
<pre><code>1
2
3
4
5
7
8
</code></pre>
<p>By giving that, here is my code:</p>
<pre><code>new_item = raw_input("please enter a word: ") # entry 10 as example
with open("search.txt", "r") as fp:
items = [item.strip() for item in fp.readlines()]
items.append(new_item)
output = "\n".join(items)
with open("result.txt", "w") as fp:
fp.write(output)
</code></pre>
<p>In the <code>result.txt</code> file, you will have:</p>
<pre><code>1
2
3
4
5
7
8
10
</code></pre>
<p>Hope it helps.</p>
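<p>If the entries must stay sorted and the new item should be placed in order rather than appended at the end, the stdlib <code>bisect</code> module does the insertion for you. A sketch assuming the file lines have been parsed as integers (the sample values below are illustrative):</p>

```python
import bisect

items = [1, 2, 3, 4, 5, 7, 8]   # parsed file contents
new_item = 6
bisect.insort(items, new_item)  # insert while keeping the list sorted
print(items)
# → [1, 2, 3, 4, 5, 6, 7, 8]
```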
|
python|list|file
| 0 |
1,901,319 | 11,712,328 |
How to programmatically classify a list of objects
|
<p>I'm trying to take a long list of objects (in this case, applications from the iTunes App Store) and classify them more specifically. For instance, there are a bunch of applications currently classified as "Education," but I'd like to label them as Biology, English, Math, etc.</p>
<p>Is this an AI/Machine Learning problem? I have no background in that area whatsoever but would like some resources or ideas on where to start for this sort of thing.</p>
|
<p>Yes, you are correct. Classification is a machine learning problem, and classifying stuff based on text data involves natural language processing.</p>
<p>The canonical classification problem is spam detection using a Naive Bayes classifier, which is very simple. The idea is as follows:</p>
<ol>
<li>Gather a bunch of data (emails), and label them by class (spam, or not spam)</li>
<li>For each email, remove stopwords, and get a list of the unique words in that email</li>
<li>Now, for each word, calculate the probability it appears in a spam email, vs a non-spam email (ie count occurrences in spam, vs non spam)</li>
<li>Now you have a model: the probability of an email being spam, given it contains a word. However, an email contains many words. In Naive Bayes, you assume the words occur independently of each other (which turns out to be an OK assumption), and multiply the probabilities of all words in the email together.</li>
<li>You usually divide data into training and testing, so you'll have a set of emails you train your model on, and then a set of labeled stuff you test against where you calculate precision and recall.</li>
</ol>
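<p>The steps above can be sketched in a few lines of plain Python (word counts with add-one smoothing, equal class priors; a toy illustration with made-up training strings, not a production classifier):</p>

```python
from collections import Counter
import math

spam = ["win money now", "free money offer"]
ham = ["meeting at noon", "project status update"]

spam_words = Counter(w for doc in spam for w in doc.split())
ham_words = Counter(w for doc in ham for w in doc.split())
vocab = set(spam_words) | set(ham_words)

def log_prob(words, counts):
    total = sum(counts.values())
    # Add-one smoothing so unseen words don't zero out the product;
    # log-space sums avoid underflow from multiplying tiny probabilities
    return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in words)

def classify(text):
    words = text.split()
    return "spam" if log_prob(words, spam_words) > log_prob(words, ham_words) else "ham"

print(classify("free money"))       # → spam
print(classify("project meeting"))  # → ham
```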
<p>I'd highly recommend playing around with NLTK, a python machine learning and nlp library. It's very user friendly and has good docs and tutorials, and is a good way to get acquainted with the field.</p>
<p>EDIT: <a href="http://bionicspirit.com/blog/2012/02/09/howto-build-naive-bayes-classifier.html" rel="nofollow">Here's an explanation</a> of how to build a simple NB classifier with code.</p>
|
python|machine-learning|artificial-intelligence|classification
| 3 |
1,901,320 | 33,928,484 |
How to run a python program in one folder and import and run a python program from another folder
|
<p>Good evening.</p>
<p>I have scriptone.py in folderone and scripttwo.py in foldertwo. </p>
<p>How do I tell scriptone.py to run scripttwo.py from foldertwo?</p>
<p>If both scriptone.py and scripttwo.py are in the same folder I can run scripttwo.py with</p>
<pre><code>import scripttwo
</code></pre>
<p>But I would really like to run scripttwo.py from foldertwo</p>
<p>Thank you.</p>
|
<p>Look at the environment variable <code>PYTHONPATH</code>, or extend <code>sys.path</code> at runtime.</p>
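<p>For example, appending a directory to <code>sys.path</code> makes its modules importable. A self-contained sketch (here <code>foldertwo</code> is simulated with a temporary directory so the snippet runs anywhere; in practice you would append the real path to your folder):</p>

```python
import os
import sys
import tempfile

# Simulate a "foldertwo" containing a scripttwo.py
foldertwo = tempfile.mkdtemp()
with open(os.path.join(foldertwo, "scripttwo.py"), "w") as f:
    f.write("MESSAGE = 'hello from scripttwo'\n")

# Adding the folder to sys.path makes its modules importable
sys.path.append(foldertwo)
import scripttwo

print(scripttwo.MESSAGE)  # → hello from scripttwo
```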
|
python|python-import|python-module
| 1 |
1,901,321 | 46,769,328 |
Django: Queries and calculations inside CreateView
|
<p>I am relatively new to Django, so I am not sure if what I am asking is possible.</p>
<p>I am building a website with functionality to rate users and write reviews about them. I have model for users (that has average rating field) and a model for reviews (with fields of <code>author</code>, <code>user_profile</code>, <code>grade</code> and <code>review</code>). I am using <code>CreateView</code> for making reviews.</p>
<p>I am trying to do the following:</p>
<ol>
<li><p>To make query to get all previous grades of that person (from <code>Reviews</code> model).</p></li>
<li><p>Make calculations (sum all previous grades, add the new one and all that divide by number of grades (including new grade))</p></li>
<li><p>Save new average grade to <code>UserProfile</code> model</p></li>
<li><p>Save review to <code>Reviews</code> model</p></li>
<li><p>Redirect user to current detail view</p></li>
</ol>
<p>Models.py</p>
<pre><code>class UserProfile(models.Model):
...
avg_grade = models.FloatField(blank=True, null=True)
...
class Reviews(models.Model):
user_profile = models.ForeignKey(UserProfile, on_delete=models.CASCADE)
grade = models.PositiveIntegerField()
review = models.CharField(max_length=256, null=True, blank=True)
author = models.CharField(max_length=256)
</code></pre>
<p>In <code>views.py</code> I managed to make query of grades of that user, but not sure where to do calculations for new average grade (if this is possible inside a Class-Based-View):</p>
<pre><code>class CreateReview(LoginRequiredMixin, CreateView):
form_class = Forma_recenzije
success_url = reverse_lazy('detail')
template_name = 'accounts/recenzija.html'
def get_queryset(self):
u = UserProfile.objects.get(id=int(self.kwargs['pk']))
return Reviews.objects.filter(user_profile=u)
def form_valid(self, form):
form.instance.author = self.request.user
form.instance.user_profile = UserProfile.objects.get(id=int(self.kwargs['pk']))
return super(CreateReview, self).form_valid(form)
</code></pre>
<p>urlpatterns:</p>
<pre><code>[...
url(r'^dadilje/(?P<pk>[-\w]+)/$', views.DadiljaDetailView.as_view(), name="detail"),
url(r'^dadilje/(?P<pk>[-\w]+)/recenzija$', views. CreateReview.as_view(), name="recenzije")
...
]
</code></pre>
|
<p>For the kind of things you want to do Django has signals which you can listen out for.</p>
<p>As an example, you could have a function that listens out for when <code>UserProfile</code> has been saved which clears the cache keys related to that profile.</p>
<p>These functions are usually added to a <code>signals.py</code> within your apps, or in <code>models.py</code> files after you've defined your model.</p>
<p>Signals have to be loaded after your models, so if using a <code>signals.py</code> the way I tend to do it is in <code>apps.py</code>;</p>
<pre><code>class MyAppConfig(AppConfig):
"""App config for the members app. """
name = 'my app'
verbose_name = _("My App")
def ready(self):
""" Import signals once the app is ready """
# pylint: disable=W0612
import myapp.signals # noqa
</code></pre>
<p>Here's an example of a signal receivers, <code>pre_save</code> happens just before the object is saved, so you could run your calcs at this point;</p>
<pre><code>from django.db.models import Avg
from django.db.models.signals import pre_save
from django.dispatch import receiver

@receiver(pre_save, sender=UserProfile)
def userprofile_pre_save(sender, instance, **kwargs):
    """
    Calc avg grade
    """
    reviews = Reviews.objects.filter(user_profile=instance).aggregate(Avg('grade'))
    instance.avg_grade = reviews['grade__avg']
</code></pre>
<p>You'd probably want your receiver on the <code>Review</code> change, but the above was an easy example!!</p>
<p>If you're new to django this might be a bit complex, but give this a read; <a href="https://simpleisbetterthancomplex.com/tutorial/2016/07/28/how-to-create-django-signals.html" rel="nofollow noreferrer">https://simpleisbetterthancomplex.com/tutorial/2016/07/28/how-to-create-django-signals.html</a></p>
|
python|django|calculation|create-view
| 1 |
1,901,322 | 46,923,963 |
How to get list from string which you pre splitted?
|
<p>So there is a string <code>foo = 'asdfasdfzxc<test>afx<one>'</code><br>
How to get <code>bar = ['asdfasdfzxc','test','afx','one']</code> ? </p>
|
<p>Try,</p>
<pre><code>import re
foo = 'asdfasdfzxc<test>afx<one>'
bar = re.split(r'[<>]', foo)[:-1]
bar
Out[89]:
['asdfasdfzxc', 'test', 'afx', 'one']
</code></pre>
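<p>An alternative that avoids the empty trailing element entirely is <code>re.findall</code> with a negated character class:</p>

```python
import re

foo = 'asdfasdfzxc<test>afx<one>'
# Match runs of characters that are neither '<' nor '>'
bar = re.findall(r'[^<>]+', foo)
print(bar)
# → ['asdfasdfzxc', 'test', 'afx', 'one']
```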
|
python|string|list|split
| 2 |
1,901,323 | 67,799,302 |
Compare two columns of one dataframe to one column of another dataframe
|
<p>I have one dataframe as below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Id1</th>
<th style="text-align: center;">Id2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">4</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">5</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;"></td>
</tr>
</tbody>
</table>
</div>
<p>The 2nd dataframe is:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">ID</th>
<th style="text-align: center;">Comment</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">Pears</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">Grapes</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">Orange</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">Banana</td>
</tr>
<tr>
<td style="text-align: left;">5</td>
<td style="text-align: center;">Apple</td>
</tr>
</tbody>
</table>
</div>
<p>How can I get the output like:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Id1</th>
<th style="text-align: center;">Id2</th>
<th style="text-align: center;">Review</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">4</td>
<td style="text-align: center;">Banana</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">5</td>
<td style="text-align: center;">Apple</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;"></td>
<td style="text-align: center;">Orange</td>
</tr>
</tbody>
</table>
</div>
<p>So, basically I am trying to do a look up for Id2 (from dataframe 1) and get the comment from 2nd dataframe but if the Id2 (in first dataframe) is null then get the Id1 comment from 2nd dataframe.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.fillna.html" rel="nofollow noreferrer"><code>Series.fillna</code></a> for replace missing values in <code>Id2</code> by <code>Id1</code> and then mapping column by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a> by <code>Series</code> created by another <code>DataFrame</code>:</p>
<pre><code>s = df2.set_index('ID')['Comment']
df1['Comment'] = df1['Id2'].fillna(df1['Id1']).map(s)
</code></pre>
<p>If there is multiple <code>ID</code> columns is possible forward filling missing values and selected last column, then mapping:</p>
<pre><code>df1['Comment'] = df1.ffill(axis=1).iloc[:, -1].map(s)
</code></pre>
<p>Solution with <code>merge</code> is possible with helper column:</p>
<pre><code>df1['ID'] = df1['Id2'].fillna(df1['Id1'])
#another idea
#df1['ID'] = df1.ffill(axis=1).iloc[:, -1]
df = df1.merge(df2, on='ID', how='left')
</code></pre>
<p>Or:</p>
<pre><code>df = df1.assign(ID = df1['Id2'].fillna(df1['Id1'])).merge(df2, on='ID', how='left')
</code></pre>
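<p>Putting the map-based approach together on the question's data (casting back to int after <code>fillna</code>, since the NaN in <code>Id2</code> makes that column float):</p>

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'Id1': [1, 2, 3], 'Id2': [4, 5, np.nan]})
df2 = pd.DataFrame({'ID': [1, 2, 3, 4, 5],
                    'Comment': ['Pears', 'Grapes', 'Orange', 'Banana', 'Apple']})

s = df2.set_index('ID')['Comment']
# Fall back to Id1 where Id2 is missing, then look up the comment
df1['Comment'] = df1['Id2'].fillna(df1['Id1']).astype(int).map(s)
print(df1['Comment'].tolist())
# → ['Banana', 'Apple', 'Orange']
```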
|
python|pandas|lookup
| 1 |
1,901,324 | 68,009,610 |
Issue with virtual environment - python
|
<p>I have an issue with a virtual environment. I'm learning Python, so I really don't know what is wrong here.</p>
<p>I have a folder on my desktop called 'learning' in which I am trying to make a virtual environment called venv.</p>
<p>When I am in VSCode I am using the terminal and write <code>python -m venv venv</code> which spits out the following:</p>
<pre><code>PS C:\Users\XXX\Desktop\learning> virtualenv venv
created virtual environment CPython3.9.5.final.0-64 in 1553ms
creator CPython3Windows(dest=C:\Users\XXX\Desktop\learning\venv, clear=False, no_vcs_ignore=False, global=False)
seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=C:\Users\XXX\AppData\Local\pypa\virtualenv)
added seed packages: pip==21.1.2, setuptools==57.0.0, wheel==0.36.2
activators BashActivator,BatchActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator
PS C:\Users\XXX\Desktop\learning> python -m venv venv
[{'first': 'Csr', 'last': 'vR'}, {'first': 'Jessie', 'last': 'vdd'}, {'first': 'Bill', 'last': 'Gates'}]
Traceback (most recent call last):
File "C:\Users\XXX\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 15, in <module>
import importlib.util
File "C:\Users\XXX\AppData\Local\Programs\Python\Python39\lib\importlib\util.py", line 2, in <module>
from . import abc
File "C:\Users\XXX\AppData\Local\Programs\Python\Python39\lib\importlib\abc.py", line 17, in <module>
from typing import Protocol, runtime_checkable
File "C:\Users\XXX\AppData\Local\Programs\Python\Python39\lib\typing.py", line 22, in <module>
import collections.abc
ModuleNotFoundError: No module named 'collections.abc'; 'collections' is not a package
PS C:\Users\XXX\Desktop\learning> python -m venv try
[{'first': 'Csr', 'last': 'vR'}, {'first': 'Jessie', 'last': 'vdd'}, {'first': 'Bill', 'last': 'Gates'}]
Could not import runpy module
Traceback (most recent call last):
File "C:\Users\XXX\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 15, in <module>
import importlib.util
File "C:\Users\XXX\AppData\Local\Programs\Python\Python39\lib\importlib\util.py", line 2, in <module>
from . import abc
File "C:\Users\XXX\AppData\Local\Programs\Python\Python39\lib\importlib\abc.py", line 17, in <module>
from typing import Protocol, runtime_checkable
File "C:\Users\XXX\AppData\Local\Programs\Python\Python39\lib\typing.py", line 22, in <module>
import collections.abc
ModuleNotFoundError: No module named 'collections.abc'; 'collections' is not a package
</code></pre>
<p>As you can see, for some reason it is throwing back errors referencing a file called collections.py in the 'learning' folder... no idea why.
Now when I use "virtualenv venv" it works fine, but I am trying to understand why the other command is not working.</p>
|
<p>You have already created the virtual environment (<code>venv</code>). No need to type <code>python -m venv venv</code> again.</p>
<p>To activate virtual environment (<code>venv</code>), type</p>
<pre class="lang-sh prettyprint-override"><code>PS C:\Users\XXX\Desktop\learning> venv/Scripts/activate
</code></pre>
<p>This will activate the virtual environment named <code>venv</code> in the powershell.</p>
|
python|virtualenv
| 1 |
1,901,325 | 27,626,526 |
Sharing objects between setup and teardown functions in nose
|
<p>I'm using nose and I need to start an HTTP server for a test. I'm starting it in the setup function, and stopping it in the teardown function like this:</p>
<pre class="lang-py prettyprint-override"><code>from my_service import MyClient, MyServer
def setup():
global server
server = MyServer()
server.start()
def teardown():
server.stop()
def test_client():
client = MyClient('localhost', server.port)
assert client.get_colour() == "blue"
</code></pre>
<p>Is there a more elegant way to have the server object available to teardown function and tests other than this global variable? Perhaps a value returned from setup which would be passed as an argument to tests and teardown?</p>
|
<p>Have you considered <a href="https://docs.python.org/2/library/unittest.html" rel="nofollow">unittest</a>? It does exist for this reason, and nose will work with it nicely:</p>
<pre><code>import unittest
class MyLiveServerTest(unittest.TestCase):
def setUp(self):
self.server = MyServer()
self.server.start()
def test_client(self):
client = MyClient('localhost', self.server.port)
assert client.get_colour() == "blue"
def tearDown(self):
self.server.stop()
</code></pre>
|
python|testing|nose|nosetests
| 2 |
1,901,326 | 27,517,425 |
Apply vs transform on a group object
|
<p>Consider the following dataframe:</p>
<pre><code>columns = ['A', 'B', 'C', 'D']
records = [
['foo', 'one', 0.162003, 0.087469],
['bar', 'one', -1.156319, -1.5262719999999999],
['foo', 'two', 0.833892, -1.666304],
['bar', 'three', -2.026673, -0.32205700000000004],
['foo', 'two', 0.41145200000000004, -0.9543709999999999],
['bar', 'two', 0.765878, -0.095968],
['foo', 'one', -0.65489, 0.678091],
['foo', 'three', -1.789842, -1.130922]
]
df = pd.DataFrame.from_records(records, columns=columns)
"""
A B C D
0 foo one 0.162003 0.087469
1 bar one -1.156319 -1.526272
2 foo two 0.833892 -1.666304
3 bar three -2.026673 -0.322057
4 foo two 0.411452 -0.954371
5 bar two 0.765878 -0.095968
6 foo one -0.654890 0.678091
7 foo three -1.789842 -1.130922
"""
</code></pre>
<p>The following commands work:</p>
<pre><code>df.groupby('A').apply(lambda x: (x['C'] - x['D']))
df.groupby('A').apply(lambda x: (x['C'] - x['D']).mean())
</code></pre>
<p>but none of the following work:</p>
<pre><code>df.groupby('A').transform(lambda x: (x['C'] - x['D']))
# KeyError or ValueError: could not broadcast input array from shape (5) into shape (5,3)
df.groupby('A').transform(lambda x: (x['C'] - x['D']).mean())
# KeyError or TypeError: cannot concatenate a non-NDFrame object
</code></pre>
<p><strong>Why?</strong> <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#transformation" rel="noreferrer">The example on the documentation</a> seems to suggest that calling <code>transform</code> on a group allows one to do row-wise operation processing:</p>
<pre><code># Note that the following suggests row-wise operation (x.mean is the column mean)
zscore = lambda x: (x - x.mean()) / x.std()
transformed = ts.groupby(key).transform(zscore)
</code></pre>
<p>In other words, I thought that transform is essentially a specific type of apply (the one that does not aggregate). Where am I wrong?</p>
<p>For reference, below is the construction of the original dataframe above:</p>
<pre><code>df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
'foo', 'bar', 'foo', 'foo'],
'B' : ['one', 'one', 'two', 'three',
'two', 'two', 'one', 'three'],
'C' : randn(8), 'D' : randn(8)})
</code></pre>
|
<h3>Two major differences between <code>apply</code> and <code>transform</code></h3>
<p>There are two major differences between the <code>transform</code> and <code>apply</code> groupby methods.</p>
<ul>
<li><strong>Input</strong>:
<ul>
<li><code>apply</code> implicitly passes all the columns for each group as a <strong>DataFrame</strong> to the custom function.</li>
<li>while <code>transform</code> passes each column for each group individually as a <strong>Series</strong> to the custom function.</li>
</ul>
</li>
<li><strong>Output</strong>:
<ul>
<li>The custom function passed to <strong><code>apply</code> can return a scalar, or a Series or DataFrame (or numpy array or even list)</strong>.</li>
<li>The custom function passed to <strong><code>transform</code> must return a sequence</strong> (a one dimensional Series, array or list) <strong>the same length as the group</strong>.</li>
</ul>
</li>
</ul>
<p>So, <code>transform</code> works on just one Series at a time and <code>apply</code> works on the entire DataFrame at once.</p>
<h3>Inspecting the custom function</h3>
<p>It can help quite a bit to inspect the input to your custom function passed to <code>apply</code> or <code>transform</code>.</p>
<h3>Examples</h3>
<p>Let's create some sample data and inspect the groups so that you can see what I am talking about:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'State':['Texas', 'Texas', 'Florida', 'Florida'],
'a':[4,5,1,3], 'b':[6,10,3,11]})
State a b
0 Texas 4 6
1 Texas 5 10
2 Florida 1 3
3 Florida 3 11
</code></pre>
<p>Let's create a simple custom function that prints out the type of the implicitly passed object and then raises an exception so that execution can be stopped.</p>
<pre><code>def inspect(x):
print(type(x))
raise
</code></pre>
<p>Now let's pass this function to both the groupby <code>apply</code> and <code>transform</code> methods to see what object is passed to it:</p>
<pre><code>df.groupby('State').apply(inspect)
<class 'pandas.core.frame.DataFrame'>
<class 'pandas.core.frame.DataFrame'>
RuntimeError
</code></pre>
<p>As you can see, a DataFrame is passed into the <code>inspect</code> function. You might be wondering why the type, DataFrame, got printed out twice. Pandas runs the first group twice. It does this to determine if there is a fast way to complete the computation or not. This is a minor detail that you shouldn't worry about.</p>
<p>Now, let's do the same thing with <code>transform</code></p>
<pre><code>df.groupby('State').transform(inspect)
<class 'pandas.core.series.Series'>
<class 'pandas.core.series.Series'>
RuntimeError
</code></pre>
<p>It is passed a Series - a totally different Pandas object.</p>
<p>So, <code>transform</code> is only allowed to work with a single Series at a time. It is impossible for it to act on two columns at the same time. So, if we try and subtract column <code>a</code> from <code>b</code> inside of our custom function we would get an error with <code>transform</code>. See below:</p>
<pre><code>def subtract_two(x):
return x['a'] - x['b']
df.groupby('State').transform(subtract_two)
KeyError: ('a', 'occurred at index a')
</code></pre>
<p>We get a KeyError as pandas is attempting to find the Series index <code>a</code> which does not exist. You can complete this operation with <code>apply</code> as it has the entire DataFrame:</p>
<pre><code>df.groupby('State').apply(subtract_two)
State
Florida 2 -2
3 -8
Texas 0 -2
1 -5
dtype: int64
</code></pre>
<p>The output is a Series and a little confusing as the original index is kept, but we have access to all columns.</p>
<hr />
<h3>Displaying the passed pandas object</h3>
<p>It can help even more to display the entire pandas object within the custom function, so you can see exactly what you are operating with. You can use <code>print</code> statements, but I like to use the <code>display</code> function from the <code>IPython.display</code> module so that the DataFrames get nicely rendered as HTML in a Jupyter notebook:</p>
<pre><code>from IPython.display import display
def subtract_two(x):
display(x)
return x['a'] - x['b']
</code></pre>
<p>Screenshot:
<a href="https://i.stack.imgur.com/8LMn5.png" rel="noreferrer"><img src="https://i.stack.imgur.com/8LMn5.png" alt="enter image description here" /></a></p>
<hr />
<h3>Transform must return a single dimensional sequence the same size as the group</h3>
<p>The other difference is that <code>transform</code> must return a single dimensional sequence the same size as the group. In this particular instance, each group has two rows, so <code>transform</code> must return a sequence of two rows. If it does not then an error is raised:</p>
<pre><code>def return_three(x):
return np.array([1, 2, 3])
df.groupby('State').transform(return_three)
ValueError: transform must return a scalar value for each group
</code></pre>
<p>The error message is not really descriptive of the problem. You must return a sequence the same length as the group. So, a function like this would work:</p>
<pre><code>def rand_group_len(x):
return np.random.rand(len(x))
df.groupby('State').transform(rand_group_len)
a b
0 0.962070 0.151440
1 0.440956 0.782176
2 0.642218 0.483257
3 0.056047 0.238208
</code></pre>
<hr />
<h3>Returning a single scalar object also works for <code>transform</code></h3>
<p>If you return just a single scalar from your custom function, then <code>transform</code> will use it for each of the rows in the group:</p>
<pre><code>def group_sum(x):
return x.sum()
df.groupby('State').transform(group_sum)
a b
0 9 16
1 9 16
2 4 14
3 4 14
</code></pre>
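<p>As a side note, for common aggregations you do not need a custom function at all: passing the name of the reduction as a string gives the same broadcast-back-to-rows behaviour as the <code>group_sum</code> example above.</p>

```python
import pandas as pd

df = pd.DataFrame({'State': ['Texas', 'Texas', 'Florida', 'Florida'],
                   'a': [4, 5, 1, 3], 'b': [6, 10, 3, 11]})

# Each row gets its group's sum, same shape as the input
out = df.groupby('State')[['a', 'b']].transform('sum')
print(out)
# a: 9, 9, 4, 4   b: 16, 16, 14, 14
```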
|
python|pandas
| 302 |
1,901,327 | 72,147,265 |
How to make fixed ticks for graph using matplotlib
|
<p>I am trying to make a graph with fixed ticks, regardless of the point values.
For example, I want the ticks to be 0-6 and the point values to be 0, 1, 3. I'd use this code:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x_points = np.array([0, 1, 3])
y_points = np.array([0, 1, 3])
x_ticks = np.arange(0, 7, 1)
y_ticks = np.arange(0, 7, 1)
plt.xticks(x_ticks)
plt.yticks(y_ticks)
plt.plot(x_points, y_points)
plt.show()
</code></pre>
<p>But the result is only 0, 1, 3 ticks - which are the ticks for the actual current values, and not the values I set using xticks and yticks:</p>
<p><img src="https://i.stack.imgur.com/u9Wz7.png" alt="Current graph" /></p>
<p>And I would like to have the ticks fixed, regardless of whether the point values are actually represented in the graph or not, something like this:</p>
<p><img src="https://i.stack.imgur.com/xCxwH.png" alt="Example of target graph" /></p>
<p>How can I make the axis ticks fixed, regardless of the values plotted?</p>
|
<p>Try this:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x_ticks = np.arange(0, 7, 1)
y_ticks = np.arange(0, 7, 1)
plt.xticks(x_ticks)
plt.yticks(y_ticks)
plt.plot(range(0, 7))
plt.show()
</code></pre>
<p>Then add your points to the figure</p>
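<p>An approach that keeps the plotted data untouched is to pin the axis limits explicitly: ticks outside the current axis limits are simply not drawn, so fixing the limits makes the whole 0-6 range visible. A minimal sketch (the Agg backend and the output filename are just so it runs headless):</p>

```python
import os
import tempfile

import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np

x_points = np.array([0, 1, 3])
y_points = np.array([0, 1, 3])

fig, ax = plt.subplots()
ax.plot(x_points, y_points)
ax.set_xticks(np.arange(0, 7))
ax.set_yticks(np.arange(0, 7))
ax.set_xlim(0, 6)  # fix the visible range regardless of the data
ax.set_ylim(0, 6)

out_path = os.path.join(tempfile.mkdtemp(), "fixed_ticks.png")
fig.savefig(out_path)
```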
|
python|numpy|matplotlib
| 0 |
1,901,328 | 72,298,834 |
List comprehension with Set comprehension in Python
|
<p>I have a list whose items include a set, and I want a list of lists, each containing a single element from that set in place of the set.</p>
<p>i.e. <code>["item1", "item2", {1,2,3,4}, "item4"]</code> --> <code>[["item1", "item2", 1, "item4"],["item1", "item2", 2, "item4"],["item1", "item2", 3, "item4"],["item1", "item2", 4, "item4"]]</code></p>
<p>Any idea how can I achieve this?</p>
|
<p>Assuming you only have a single set you need to expand in this way and it always sits at index 2 in the original list:</p>
<pre><code>data = ["item1", "item2", {1, 2, 3, 4}, "item4"]
result = [data[:2] + [x] + data[3:] for x in data[2]]
print(result)
</code></pre>
<p>Output:</p>
<pre><code>[['item1', 'item2', 1, 'item4'], ['item1', 'item2', 2, 'item4'], ['item1', 'item2', 3, 'item4'], ['item1', 'item2', 4, 'item4']]
</code></pre>
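<p>If the set can appear at any position, or there can be several sets, <code>itertools.product</code> generalises the idea. A sketch (the sets are sorted only to make the output order deterministic):</p>

```python
from itertools import product

data = ["item1", "item2", {1, 2, 3, 4}, "item4"]

# Wrap non-set items in a 1-element list so product expands only the sets
pools = [sorted(x) if isinstance(x, set) else [x] for x in data]
result = [list(combo) for combo in product(*pools)]

print(result)
# → [['item1', 'item2', 1, 'item4'], ['item1', 'item2', 2, 'item4'],
#    ['item1', 'item2', 3, 'item4'], ['item1', 'item2', 4, 'item4']]
```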
|
python|python-3.x|python-2.7
| 1 |
1,901,329 | 43,267,820 |
How to bind a field in __init__ function of a form
|
<pre><code>class Example_Form(Form):
field_1 = TextAreaField()
field_2 = TextAreaField()
def __init__(self, type, **kwargs):
super(Example_Form, self).__init__(**kwargs)
if type == 'type_1':
self.field_3 = TextAreaField()
</code></pre>
<p>In some scenarios I need to dynamically add fields to the form. The field_3 added to the example form turns out to be an UnboundField. I tried to bind field_3 to the form in the <code>__init__</code> function, but it won't work.</p>
<pre><code>field_3 = TextAreaField()
field_3.bind(self, 'field_3')
</code></pre>
<p>How to bind field_3 to example form?</p>
|
<p>Use <code>self.meta.bind_field</code> to create a bound field, and assign it to the instance and the <code>_fields</code> dict.</p>
<pre><code>self.field_3 = self._fields['field_3'] = self.meta.bind_field(
self, TextAreaField(),
{'name': 'field_3', 'prefix': self._prefix}
)
</code></pre>
<hr>
<p>In most cases, it's clearer to use a subclass and decide which class to use when creating the form instance.</p>
<pre><code>class F1(Form):
x = StringField()
class F2(F1):
y = StringField()
form = F1() if type == 1 else F2()
</code></pre>
<p>If you need to be more dynamic, you can subclass the form and assign fields to it. Assigning fields to classes works directly, unlike with instances.</p>
<pre><code>class F3(F1):
pass
if type == 3:
F3.z = StringField()
form = F3()
</code></pre>
<p>You can also define all fields, then choose to delete some before validating the form.</p>
<pre><code>class F(Form):
x = StringField()
y = StringField()
form = F()
if type == 1:
del form.y
</code></pre>
|
python|flask|wtforms|flask-wtforms
| 4 |
1,901,330 | 36,927,779 |
Why wont collide_rect work in this scenario? The game keeps printing "Collided!" even when the two sprites aren't touching?
|
<p>This is a snake remake. My goal here was to make the apple respawn randomly after the snake collided with it. For some reason the collide_rect function seems to think they are continuously colliding at all times after I start the game. </p>
<p>Any other tips to help clean this mess up are also welcome:</p>
<pre><code>import pygame
import time
import random
pygame.init()
WHITE = (pygame.Color("white"))
BLACK = ( 0, 0, 0)
RED = (245, 0, 0)
TURQ = (pygame.Color("turquoise"))
GREEN = ( 0, 155, 0)
GREY = ( 90, 90, 90)
SCREEN = (800, 600)
gameDisplay = pygame.display.set_mode(SCREEN)
#Set the window title and picture
pygame.display.set_caption('Block Worm')
ICON = pygame.image.load("apple10pix.png")
pygame.display.set_icon(ICON)
CLOCK = pygame.time.Clock()
FPS = 20
FONT = pygame.font.SysFont("arial", 25)
APPLE_SIZE = 10
TINY_FONT = pygame.font.SysFont("candara", 15)
SMALL_FONT = pygame.font.SysFont("candara", 25)
MED_FONT = pygame.font.SysFont("candara", 50)
LARGE_FONT = pygame.font.SysFont("krabbypatty", 75)
HUGE_FONT = pygame.font.SysFont("krabbypatty", 150)
IMG = pygame.image.load("snakehead.png")
APPLE_IMG = pygame.image.load("apple10pix.png")
DIRECTION = "up"
def pause():
paused = True
message_to_screen("Paused",
BLACK,
Y_DISPLACE = -100,
size = "huge")
message_to_screen("Press C to continue or Q to quit.",
BLACK,
Y_DISPLACE = 25)
pygame.display.update()
while paused:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
quit()
if event.type == pygame.KEYDOWN:
if event.key in (pygame.K_c, pygame.K_p):
paused = False
elif event.key in(pygame.K_q, pygame.K_ESCAPE):
pygame.quit()
quit()
CLOCK.tick(5)
def score(score):
text = SMALL_FONT.render("Score: " + str(score), True, BLACK)
gameDisplay.blit(text, [0, 0])
def game_intro():
intro = True
while intro:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
quit()
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_c:
intro = False
if event.key in (pygame.K_q, pygame.K_ESCAPE):
pygame.quit()
quit()
gameDisplay.fill(WHITE)
message_to_screen("Welcome to",
GREEN,
Y_DISPLACE = -170,
size = "large")
message_to_screen("Block Worm",
GREEN,
Y_DISPLACE = -50,
size = "huge")
message_to_screen("The objective of the game is to eat apples.",
BLACK,
Y_DISPLACE = 36,
size = "tiny")
message_to_screen("The more apples you eat the longer you get.",
BLACK,
Y_DISPLACE = 68,
size = "tiny")
message_to_screen("If you run into yourself or the edges, you die.",
BLACK,
Y_DISPLACE = 100,
size = "tiny")
message_to_screen("Press C to play or Q to quit.",
GREY,
Y_DISPLACE = 210,)
pygame.display.update()
CLOCK.tick(FPS)
def text_objects(text, color, size):
if size == "tiny":
TEXT_SURFACE = TINY_FONT.render(text, True, color)
elif size == "small":
TEXT_SURFACE = SMALL_FONT.render(text, True, color)
elif size == "medium":
TEXT_SURFACE = MED_FONT.render(text, True, color)
elif size == "large":
TEXT_SURFACE = LARGE_FONT.render(text, True, color)
elif size == "huge":
TEXT_SURFACE = HUGE_FONT.render(text, True, color)
return TEXT_SURFACE, TEXT_SURFACE.get_rect()
def message_to_screen(msg, color, Y_DISPLACE = 0, size = "small"):
TEXT_SURF, TEXT_RECT = text_objects(msg, color, size)
TEXT_RECT.center = (SCREEN[0] / 2), (SCREEN[1] / 2) + Y_DISPLACE
gameDisplay.blit(TEXT_SURF, TEXT_RECT)
class Snake(pygame.sprite.Sprite):
def __init__(self, image, size, trail, start_size):
pygame.sprite.Sprite.__init__(self)
self.image = image
self.size = size
self.trail = trail
self.start_size = start_size
self.rect = self.image.get_rect()
if DIRECTION == "right":
HEAD = pygame.transform.rotate(self.image, 270)
elif DIRECTION == "left":
HEAD = pygame.transform.rotate(self.image, 90)
elif DIRECTION == "down":
HEAD = pygame.transform.rotate(self.image, 180)
else:
HEAD = image
gameDisplay.blit(HEAD, (self.trail[-1][0], self.trail[-1][1]))
for XnY in self.trail[:-1]:
pygame.draw.rect(gameDisplay, GREEN, [XnY[0], XnY[1], self.size, self.size])
class Apple(pygame.sprite.Sprite):
def __init__(self, image):
pygame.sprite.Sprite.__init__(self)
self.image = image
self.rect = self.image.get_rect()
self.size = self.rect.width
gameDisplay.blit(self.image, (100, 100))
def random_location(self):
self.rect.x = random.randrange(0, (SCREEN[0] - 10), 10)
self.rect.y = random.randrange(0, (SCREEN[1]- 10), 10)
gameDisplay.blit(self.image, (self.rect.x, self.rect.y))
def gameLoop():
global DIRECTION
DIRECTION = "up"
gameExit = False
gameOver = False
SCORE = 0
player_snake_trail = [] # Where the snake head has been.
enemy_snake_trail = [] # If I create an AI snake.
start_length = 1
lead_x = (SCREEN[0] / 2)
lead_y = (SCREEN[1] - (SCREEN[1] / 5))
move_speed = 10
move_speed_neg = move_speed * -1
lead_x_change = 0
lead_y_change = -move_speed
while not gameExit:
if gameOver == True:
message_to_screen("Game over",
RED,
Y_DISPLACE = -50,
size = "huge")
message_to_screen("Press C to play again or Q to quit.",
BLACK,
Y_DISPLACE = 50,
size = "small")
pygame.display.update()
while gameOver == True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
gameExit = True
gameOver = False
elif event.type == pygame.KEYDOWN:
if event.key == pygame.K_q:
gameOver = False
gameExit = True
elif event.key == pygame.K_c:
gameLoop()
for event in pygame.event.get():
if event.type == pygame.QUIT:
gameExit = True
elif event.type == pygame.KEYDOWN:
if event.key in (pygame.K_LEFT, pygame.K_a):
lead_x_change = move_speed_neg
lead_y_change = 0
DIRECTION = "left"
elif event.key in (pygame.K_RIGHT, pygame.K_d):
lead_x_change = move_speed
lead_y_change = 0
DIRECTION = "right"
elif event.key in (pygame.K_UP, pygame.K_w):
lead_y_change = move_speed_neg
lead_x_change = 0
DIRECTION = "up"
elif event.key in (pygame.K_DOWN, pygame.K_s):
lead_y_change = move_speed
lead_x_change = 0
DIRECTION = "down"
elif event.key in (pygame.K_p, pygame.K_ESCAPE):
pause()
# If the snake goes beyond the screen borders the game will end.
if lead_x >= SCREEN[0] or lead_x < 0 or lead_y >= SCREEN[1] or lead_y <0:
gameOver = True
lead_x += lead_x_change
lead_y += lead_y_change
gameDisplay.fill(WHITE)
# Draw the apple on screen
red_apple = Apple(APPLE_IMG)
# Draw the snake on screen
SNAKE_HEAD = []
SNAKE_HEAD.append(lead_x)
SNAKE_HEAD.append(lead_y)
player_snake_trail.append(SNAKE_HEAD)
# If you hit yourself, game over.
if SNAKE_HEAD in player_snake_trail[:-1]:
gameOver = True
if len(player_snake_trail) > start_length:
del player_snake_trail[0]
player_snake = Snake(IMG, 10, player_snake_trail, start_length)
if pygame.sprite.collide_rect(player_snake, red_apple) == True:
print("Collided!")
## start_length += 1
## SCORE += 1
## red_apple.random_location()
# If the snake eats the apple
# Old code below, needs to be rewritten. Disregard.
## if APPLE_RECT.collidepoint(lead_x, lead_y) == True:
## randAppleX, randAppleY = randAppleGen()
## start_length += 1
## SCORE += 1
score(SCORE)
pygame.display.update()
CLOCK.tick(FPS)
pygame.quit()
quit()
game_intro()
gameLoop()
</code></pre>
|
<p>When you set the rects in the constructors for the apple and snake head, they have x and y co-ordinates of 0. You need to set the rects and update them as the snake moves. I have added a set_rect method to them. I have also moved the draw from apple.random_location to its own method. Your code should look like this:</p>
<pre><code>init()
run()
</code></pre>
<p>and in run:</p>
<pre><code>check_events()
update_snake()
collision_check()
draw_things()
clock.tick(FPS)
</code></pre>
<p>Here is the altered code, try to break it up into sections as above.</p>
<pre><code>import pygame
import time
import random
pygame.init()
WHITE = (pygame.Color("white"))
BLACK = ( 0, 0, 0)
RED = (245, 0, 0)
TURQ = (pygame.Color("turquoise"))
GREEN = ( 0, 155, 0)
GREY = ( 90, 90, 90)
SCREEN = (800, 600)
gameDisplay = pygame.display.set_mode(SCREEN)
#Set the window title and picture
pygame.display.set_caption('Block Worm')
ICON = pygame.Surface((10,10)) #pygame.image.load("apple10pix.png")
pygame.display.set_icon(ICON)
CLOCK = pygame.time.Clock()
FPS = 20
FONT = pygame.font.SysFont("arial", 25)
APPLE_SIZE = 10
TINY_FONT = pygame.font.SysFont("candara", 15)
SMALL_FONT = pygame.font.SysFont("candara", 25)
MED_FONT = pygame.font.SysFont("candara", 50)
LARGE_FONT = pygame.font.SysFont("krabbypatty", 75)
HUGE_FONT = pygame.font.SysFont("krabbypatty", 150)
IMG = pygame.Surface((10,10)) #pygame.image.load("snakehead.png")
APPLE_IMG = pygame.Surface((10,10)) #pygame.image.load("apple10pix.png")
DIRECTION = "up"
def pause():
paused = True
message_to_screen("Paused",
BLACK,
Y_DISPLACE = -100,
size = "huge")
message_to_screen("Press C to continue or Q to quit.",
BLACK,
Y_DISPLACE = 25)
pygame.display.update()
while paused:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
quit()
if event.type == pygame.KEYDOWN:
if event.key in (pygame.K_c, pygame.K_p):
paused = False
elif event.key in(pygame.K_q, pygame.K_ESCAPE):
pygame.quit()
quit()
CLOCK.tick(5)
def score(score):
text = SMALL_FONT.render("Score: " + str(score), True, BLACK)
gameDisplay.blit(text, [0, 0])
def game_intro():
intro = True
while intro:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
quit()
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_c:
intro = False
if event.key in (pygame.K_q, pygame.K_ESCAPE):
pygame.quit()
quit()
gameDisplay.fill(WHITE)
message_to_screen("Welcome to",
GREEN,
Y_DISPLACE = -170,
size = "large")
message_to_screen("Block Worm",
GREEN,
Y_DISPLACE = -50,
size = "huge")
message_to_screen("The objective of the game is to eat apples.",
BLACK,
Y_DISPLACE = 36,
size = "tiny")
message_to_screen("The more apples you eat the longer you get.",
BLACK,
Y_DISPLACE = 68,
size = "tiny")
message_to_screen("If you run into yourself or the edges, you die.",
BLACK,
Y_DISPLACE = 100,
size = "tiny")
message_to_screen("Press C to play or Q to quit.",
GREY,
Y_DISPLACE = 210,)
pygame.display.update()
CLOCK.tick(FPS)
def text_objects(text, color, size):
if size == "tiny":
TEXT_SURFACE = TINY_FONT.render(text, True, color)
elif size == "small":
TEXT_SURFACE = SMALL_FONT.render(text, True, color)
elif size == "medium":
TEXT_SURFACE = MED_FONT.render(text, True, color)
elif size == "large":
TEXT_SURFACE = LARGE_FONT.render(text, True, color)
elif size == "huge":
TEXT_SURFACE = HUGE_FONT.render(text, True, color)
return TEXT_SURFACE, TEXT_SURFACE.get_rect()
def message_to_screen(msg, color, Y_DISPLACE = 0, size = "small"):
TEXT_SURF, TEXT_RECT = text_objects(msg, color, size)
TEXT_RECT.center = (SCREEN[0] / 2), (SCREEN[1] / 2) + Y_DISPLACE
gameDisplay.blit(TEXT_SURF, TEXT_RECT)
class Snake(pygame.sprite.Sprite):
def __init__(self, image, size, trail, start_size):
pygame.sprite.Sprite.__init__(self)
self.image = image
self.size = size
self.trail = trail
self.start_size = start_size
self.rect = self.image.get_rect()
if DIRECTION == "right":
HEAD = pygame.transform.rotate(self.image, 270)
elif DIRECTION == "left":
HEAD = pygame.transform.rotate(self.image, 90)
elif DIRECTION == "down":
HEAD = pygame.transform.rotate(self.image, 180)
else:
HEAD = image
gameDisplay.blit(HEAD, (self.trail[-1][0], self.trail[-1][1]))
for XnY in self.trail[:-1]:
pygame.draw.rect(gameDisplay, GREEN, [XnY[0], XnY[1], self.size, self.size])
def set_rect(self, rect):
self.rect = rect
class Apple(pygame.sprite.Sprite):
def __init__(self, image):
pygame.sprite.Sprite.__init__(self)
self.image = image
self.random_location()
self.size = self.rect.width
gameDisplay.blit(self.image, (100, 100))
def random_location(self):
x = random.randrange(0, (SCREEN[0] - 10), 10)
y = random.randrange(0, (SCREEN[1]- 10), 10)
self.rect = pygame.Rect(x,y, 10, 10)
print "apple rect =",
print self.rect
def draw(self):
gameDisplay.blit(self.image, (self.rect.x, self.rect.y))
def gameLoop():
global DIRECTION
DIRECTION = "up"
gameExit = False
gameOver = False
SCORE = 0
player_snake_trail = [] # Where the snake head has been.
enemy_snake_trail = [] # If I create an AI snake.
start_length = 1
lead_x = (SCREEN[0] / 2)
lead_y = (SCREEN[1] - (SCREEN[1] / 5))
move_speed = 10
move_speed_neg = move_speed * -1
lead_x_change = 0
lead_y_change = -move_speed
red_apple = Apple(APPLE_IMG)
while not gameExit:
if gameOver == True:
message_to_screen("Game over",
RED,
Y_DISPLACE = -50,
size = "huge")
message_to_screen("Press C to play again or Q to quit.",
BLACK,
Y_DISPLACE = 50,
size = "small")
pygame.display.update()
while gameOver == True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
gameExit = True
gameOver = False
elif event.type == pygame.KEYDOWN:
if event.key == pygame.K_q:
gameOver = False
gameExit = True
elif event.key == pygame.K_c:
gameLoop()
for event in pygame.event.get():
if event.type == pygame.QUIT:
gameExit = True
elif event.type == pygame.KEYDOWN:
if event.key in (pygame.K_LEFT, pygame.K_a):
lead_x_change = move_speed_neg
lead_y_change = 0
DIRECTION = "left"
elif event.key in (pygame.K_RIGHT, pygame.K_d):
lead_x_change = move_speed
lead_y_change = 0
DIRECTION = "right"
elif event.key in (pygame.K_UP, pygame.K_w):
lead_y_change = move_speed_neg
lead_x_change = 0
DIRECTION = "up"
elif event.key in (pygame.K_DOWN, pygame.K_s):
lead_y_change = move_speed
lead_x_change = 0
DIRECTION = "down"
elif event.key in (pygame.K_p, pygame.K_ESCAPE):
pause()
# If the snake goes beyond the screen borders the game will end.
if lead_x >= SCREEN[0] or lead_x < 0 or lead_y >= SCREEN[1] or lead_y <0:
gameOver = True
lead_x += lead_x_change
lead_y += lead_y_change
gameDisplay.fill(WHITE)
# Draw the snake on screen
SNAKE_HEAD = []
SNAKE_HEAD.append(lead_x)
SNAKE_HEAD.append(lead_y)
player_snake_trail.append(SNAKE_HEAD)
# draw the apple on the screen
red_apple.draw()
# If you hit yourself, game over.
if SNAKE_HEAD in player_snake_trail[:-1]:
gameOver = True
if len(player_snake_trail) > start_length:
del player_snake_trail[0]
player_snake = Snake(IMG, 10, player_snake_trail, start_length)
player_snake.set_rect(pygame.Rect(lead_x, lead_y, 10, 10))
if pygame.sprite.collide_rect(player_snake, red_apple) == True:
print("Collided!")
start_length += 1
SCORE += 1
red_apple.random_location()
# If the snake eats the apple
# Old code below, needs to be rewritten. Disregard.
## if APPLE_RECT.collidepoint(lead_x, lead_y) == True:
## randAppleX, randAppleY = randAppleGen()
## start_length += 1
## SCORE += 1
score(SCORE)
pygame.display.update()
CLOCK.tick(FPS)
pygame.quit()
quit()
game_intro()
gameLoop()
</code></pre>
|
python|class|pygame
| 0 |
1,901,331 | 36,757,256 |
Python - Exiting while loop externally
|
<p>I am writing a web server that will log temperatures. The user clicks "collect data" on the web interface, which triggers a Flask function that runs a "collect temperature" function collecting temperature data indefinitely. I then want the user to be able to hit a "stop data collection" button that stops the collect-temperature function's while loop.</p>
<p>The problem (my understanding at least) boils down to something like the following code:</p>
<pre><code>class myClass:
counterOn = 0
num = 0
def __init__(self):
self.num = 0
def setCounterOn(self, value):
self.counterOn = value
def printCounterOn(self):
print self.counterOn
def count(self):
while True:
if self.counterOn == 1:
self.num += 1
print self.num
time.sleep(1)
</code></pre>
<p>then the server file:</p>
<pre><code>myCounter = myClass.myClass()
myCounter.setCounterOn(1)
myCounter.count()
time.sleep(5)
myCounter.setCounterOn(0)
</code></pre>
<p>Ideally I would like the server file to create a counter object, then turn on and off the counter function externally. As it functions now, it is stuck in the while loop. I tried threading only to discover you can't pause or stop a thread. Am I looking at this completely wrong, or is it as simple as a try/except?</p>
<p><strong>Edit:</strong></p>
<p>The external file idea is great. I was having some trouble parsing the text file consistently across my functions and wound up stumbling across ConfigParser to read .ini files. I think I'm going to go that way since eventually I want to have a PID controller controlling the temperature, and it will be great to be able to store configurations externally.</p>
<p>I implemented just a while loop that looped forever and only recorded if it saw the config file configured to collect. The problem was that, in my Flask file, I would run</p>
<pre><code>@app.route('/startCollection', methods=['POST'])
def startCollectData():
print "collectPressed"
config.read('congif.ini')
config.set('main', 'counterOn', '1')
with open('config.ini', 'w') as f:
config.write(f)
C.count()
return "collect data pressed"
@app.route('/stopCollection', methods=['POST'])
def stopCollectData():
print "stop hit"
config.read('config.ini')
config.set('main', 'counterOn', '0')
with open('config.ini', 'w') as f:
config.write(f)
C.count()
return "stop pressed"
def count(self):
while True:
self.config.read('config.ini')
print self.num
time.sleep(1)
if self.config.get('main', 'counterOn') == '1':
self.num += 1
</code></pre>
<p>From my observation, startCollectData was getting stuck on count(). It would never return, so when I then tried to stop data collection, the Flask script wasn't there to interpret the stop command.</p>
<p>So I moved on to the mutex. That is exactly the functionality I thought would come out of the box with threads. It seems to be working fine, other than there usually being a really long delay the second time I stop collection.</p>
<pre><code>@app.route('/')
def main():
print "MYLOG - asdf"
cls.start()
cls.pause()
return render_template('index.html')
@app.route('/startCollection', methods=['POST'])
def startCollectData():
print "collectPressed"
cls.unpause()
return "collect data pressed"
@app.route('/stopCollection', methods=['POST'])
def stopCollectData():
print "stop hit"
cls.pause()
return "collect data pressed"
</code></pre>
<p>results in the following output if i click start, stop, start, then stop:</p>
<pre><code>collectPressed
1
10.240.0.75 - - [22/Apr/2016 15:58:42] "POST /startCollection HTTP/1.1" 200 -
2
3
4
5
6
7
8
9
stop hit
10.240.0.207 - - [22/Apr/2016 15:58:51] "POST /stopCollection HTTP/1.1" 200 -
collectPressed
10
10.240.0.166 - - [22/Apr/2016 15:58:57] "POST /startCollection HTTP/1.1" 200 -
11
12
13
14
15
16
stop hit
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
10.240.0.75 - - [22/Apr/2016 15:59:24] "POST /stopCollection HTTP/1.1" 200 -
</code></pre>
<p>So I hit stop, then it collects for 20 seconds, and then it finally stops. My collection points are going to be 5 minutes apart, so it's not a big deal, but I'm just curious.</p>
<pre><code>import threading
import time
class myThread(threading.Thread):
num = 0
def __init__(self, threadID, name, counter):
threading.Thread.__init__(self)
self.threadID = threadID
self.name = name
self.counter = counter
self.mutex = threading.Lock()
self.paused = False
def pause(self):
if(not self.paused):
self.mutex.acquire()
self.paused = True
def unpause(self):
self.mutex.release()
self.paused = False
def run(self):
print "starting" + self.name
while True:
self.mutex.acquire()
self.num += 1
print self.num
time.sleep(1)
self.mutex.release()
</code></pre>
<p>Anyway, thanks for the help. I've been stuck on how to handle this for about 4 months and it's great to finally make some progress on it!</p>
<p><strong>Edit 2</strong>
Actually, I just ran it again and it took 100 seconds to actually stop counting. That's not going to cut it. Any idea what's going on?</p>
|
<p>I would try using threads again. The fact of the matter is that you have a computation that needs to run, while another instruction sequence (namely the GUI logic) also needs to execute.</p>
<p>I would approach the problem with mutexes (a standard concurrency-control technique), which should be able to supply pause/unpause functionality:</p>
<pre><code>import time
import threading
class myClass(threading.Thread):
num = 0
def __init__(self):
super(myClass, self).__init__()
self.num = 0
self.mutex = threading.Lock()
self.paused = False
def pause(self):
if(not self.paused):
self.mutex.acquire()
self.paused = True
def unpause(self):
self.mutex.release()
self.paused = False
def run(self):
while True:
self.mutex.acquire()
self.num += 1
print self.num
time.sleep(1)
self.mutex.release()
cls = myClass()
cls.start()
time.sleep(10)
cls.pause()
time.sleep(2)
cls.unpause()
time.sleep(2)
</code></pre>
<p>And this should output: (or something similar)</p>
<pre><code>1
2
3
4
5
6
7
8
9
10
(wait)
11
12
</code></pre>
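<p>One caveat with the Lock version: the worker holds the mutex while it sleeps, so <code>pause()</code> has to win a race against the loop and can block for many iterations, which would explain long pause delays. A <code>threading.Event</code> is the more idiomatic pause switch, since <code>wait()</code> parks the worker without the caller ever contending for a lock. A sketch (class and attribute names are illustrative):</p>

```python
import threading
import time

class Counter(threading.Thread):
    """Counts while its `running` event is set; pausing never blocks the caller."""

    def __init__(self):
        super(Counter, self).__init__()
        self.daemon = True
        self.num = 0
        self.running = threading.Event()  # cleared == paused

    def run(self):
        while True:
            self.running.wait()    # parks here while paused, wakes as soon as set()
            self.num += 1
            time.sleep(0.01)

c = Counter()
c.start()
c.running.set()    # unpause
time.sleep(0.2)
c.running.clear()  # pause takes effect within one loop iteration
```

<p><code>clear()</code> returns immediately; at worst the worker performs one more increment before it parks in <code>wait()</code>.</p>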
|
python|flask
| 0 |
1,901,332 | 48,577,242 |
Find the PYTHONPATH/env bitnami uses and how to use it
|
<p>I've set up the one-click install of Bitnami on Google Cloud. It has Django 2.0 installed, which only works with Python 3.x, as shown when I deactivate the virtualenv I've created:</p>
<pre><code>(djangoenv) bitnami@britecore-vm:/home/muiruri_samuel/apps/django$ cd ..
(djangoenv) bitnami@britecore-vm:/home/muiruri_samuel/apps$ deactivate
bitnami@britecore-vm:/home/muiruri_samuel/apps$ . /opt/bitnami/scripts/setenv.sh
bitnami@britecore-vm:/home/muiruri_samuel/apps$ python -v
# installing zipimport hook
import zipimport # builtin
# installed zipimport hook
ImportError: No module named site
# clear __builtin__._
# clear sys.path
# clear sys.argv
# clear sys.ps1
# clear sys.ps2
# clear sys.exitfunc
# clear sys.exc_type
# clear sys.exc_value
# clear sys.exc_traceback
# clear sys.last_type
# clear sys.last_value
# clear sys.last_traceback
# clear sys.path_hooks
# clear sys.path_importer_cache
# clear sys.meta_path
# clear sys.flags
# clear sys.float_info
# restore sys.stdin
# restore sys.stdout
# restore sys.stderr
# cleanup __main__
# cleanup[1] zipimport
# cleanup[1] signal
# cleanup[1] exceptions
# cleanup[1] _warnings
# cleanup sys
# cleanup __builtin__
# cleanup ints: 5 unfreed ints
# cleanup floats
bitnami@britecore-vm:/home/muiruri_samuel/apps$ python
ImportError: No module named site
</code></pre>
<p>I tried a snippet I saw on the Bitnami community forum for starting the env, but it didn't work. I need to pip install a new package to wherever Bitnami keeps its packages so it can be used. I'm OK with just running my commands from the other virtualenv afterwards, in case that turns out to be easier.</p>
|
<p>Hi, I think the problem is that when you type <strong>"python"</strong>, you're using the Python installed on the system (<code>/usr/bin/python</code>). However, when you type <strong>"python3"</strong>, you will be using the one included in the stack (<code>/opt/bitnami/python/bin/python3</code>). You can check it by running:</p>
<pre><code>which python
which python3
</code></pre>
|
python|django|bitnami
| 0 |
1,901,333 | 19,935,457 |
Disable serial console of raspberry pi
|
<p>I followed the guide on this site <a href="http://elinux.org/RPi_Serial_Connection" rel="nofollow">http://elinux.org/RPi_Serial_Connection</a> to stop Linux from using the serial port, but clearly this also prevents me from logging in to the Raspberry Pi using Minicom. How can I still access the Raspberry Pi via Minicom?
I am trying to transmit data from the Raspberry Pi to a Linux PC, but it doesn't work. I thought disabling the serial port was a solution, but it creates another problem: I can no longer log in and run my script.
Help me please???</p>
|
<p>There are several options:</p>
<p>1) Connect a monitor and keyboard to the Raspberry Pi and log in that way.</p>
<p>2) configure a fixed IP and other network configuration before disabling the serial port login</p>
<p>3) use a DHCP server to automatically assign a network configuration to the Pi. DHCP can be configured to assigned a fixed address, or you can check its tables to see what address it has assigned to the Pi each time you boot it.</p>
<p>Edit to add:
4) disabling serial port login doesn't have to be permanent. Leave serial port login enabled when the Pi boots, but have your script turn it off as it starts and maybe turn it back on when it's done.</p>
|
python|serial-port|raspberry-pi
| 0 |
1,901,334 | 4,186,194 |
how to find the POST or GET variables posted by mechanize (python)
|
<p>i'm using mechanize to submit a form like this...</p>
<pre><code>import mechanize
br = mechanize.Browser()
br.open('http://stackoverflow.com')
br.select_form(nr=0)
br['q'] = "test"
br.set_handle_robots(False)
response = br.submit()
print response.info()
print response.read()
</code></pre>
<p>using firebug i can see that the actual variables posted are:</p>
<p>q test</p>
<p>How can I retrieve these programmatically using my Python script?</p>
<p>please note i'm not actually scraping SO - just using it as an example!</p>
<p>also, i know in this case the posted variables are obvious, since there's only the one i specified - often this is not the case!</p>
<p>thanks :) </p>
|
<p>You can enable debug mode in mechanize by putting this:</p>
<pre><code>import mechanize
br = mechanize.Browser()
br.set_debug_http(True)
...
</code></pre>
<p>Hope this can help :)</p>
|
python|forms|post|get|mechanize
| 2 |
1,901,335 | 69,350,640 |
Creating submatrix in python
|
<p>Given a matrix S and a binary matrix W, I want to create a submatrix of S corresponding to the non zero coordinates of W.</p>
<p>For example:</p>
<pre><code>S = [[1,1],[1,2],[1,3],[1,4],[1,5]]
W = [[1,0,0],[1,1,0],[1,1,1],[0,1,1],[0,0,1]]
</code></pre>
<p>I want to get matrices</p>
<pre><code>S_1 = [[1,1],[1,2],[1,3]]
S_2 = [[1,2],[1,3],[1,4]]
S_3 = [[1,3],[1,4],[1,5]]
</code></pre>
<p>I couldn't figure out a slick way to do this in python. The best I could do for each S_i is</p>
<pre><code>S_1 = S[0,:]
for i in range(np.shape(W)[0]):
if W[i, 0] == 1:
S_1 = np.vstack((S_1, S[i, :]))
</code></pre>
<p>but if i want to change the dimensions of the problem and have, say, 100 S_i's, writing a for loop for each one seems a bit ugly. (Side note: S_1 should be initialized to some empty 2d array but I couldn't get that to work, so initialized it to S[0,:] as a placeholder).</p>
<p>EDIT: To clarify what I mean:</p>
<p>I have a matrix S</p>
<pre><code>1 1
1 2
1 3
1 4
1 5
</code></pre>
<p>and I have a binary matrix</p>
<pre><code>1 0 0
1 1 0
1 1 1
0 1 1
0 0 1
</code></pre>
<p>Given the first column of the binary matrix W</p>
<pre><code>1
1
1
0
0
</code></pre>
<p>The 1's are in the first, second, and third positions. So I want to create a corresponding submatrix of S with just the first, second and third positions of every column, so S_1 (corresponding to the 1st column of W) is</p>
<pre><code>1 1
1 2
1 3
</code></pre>
<p>Similarly, if we look at the third column of W</p>
<pre><code>0
0
1
1
1
</code></pre>
<p>The 1's are in the last three coordinates and so I want a submatrix of S with just the last three coordinates of every column, called S_3</p>
<pre><code>1 3
1 4
1 5
</code></pre>
<p>So given any ith column of the binary matrix, I'm looking to generate a submatrix S_i where the columns of S_i contain the columns of S, but only the entries corresponding to the positions of the 1's in the ith column of the binary matrix.</p>
|
<p>It probably is more useful to work with the transpose of W rather than W itself, both for human-readability and to facilitate writing the code. This means that the entries that affect each S_i are grouped together in one of the inner parentheses of W, i.e. in a row of W rather than a column as you have it now.</p>
<p>Then, <code>S_i = np.array([S[j, :] for j in range(np.shape(S)[0]) if W_T[i, j] == 1])</code>, where W_T is the transpose of W. If you need or want to stick with W as is, reverse the indices i and j.</p>
<p>As for the outer loop, you could try to nest this in another similar comprehension without an if statement--however this might be awkward since you aren't actually building one output <em>matrix</em> (the S_i can easily be different dimensions, unless you're somehow guaranteed to have the same number of 1s in every column of W). This in fact raises the question of what you want--a list of these arrays S_i? Otherwise if they are separate variables as you have it written, there's no good way to refer to them in a generalizable way as they don't have indices.</p>
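<p>If S and W are numpy arrays (sharing their row count), boolean mask indexing builds each submatrix directly, with no Python-level loop over rows. A sketch:</p>

```python
import numpy as np

S = np.array([[1, 1], [1, 2], [1, 3], [1, 4], [1, 5]])
W = np.array([[1, 0, 0], [1, 1, 0], [1, 1, 1], [0, 1, 1], [0, 0, 1]])

# One submatrix per column of W: keep the rows of S where that column is 1.
submatrices = [S[W[:, i] == 1] for i in range(W.shape[1])]
```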
|
python|arrays|submatrix
| 1 |
1,901,336 | 48,105,486 |
Update list value in list of dictionaries
|
<p>I have a list of dictionaries (much like in JSON). I want to apply a function to a key in every dictionary of the list.</p>
<pre><code>>> d = [{'a': 2, 'b': 2}, {'a': 1, 'b': 2}, {'a': 1, 'b': 2}, {'a': 1, 'b': 2}]
# Desired value
[{'a': 200, 'b': 2}, {'a': 100, 'b': 2}, {'a': 100, 'b': 2}, {'a': 100, 'b': 2}]
# If I do this, I can only get the changed key
>> map(lambda x: {k: v * 100 for k, v in x.iteritems() if k == 'a'}, d)
[{'a': 200}, {'a': 100}, {'a': 100}, {'a': 100}]
# I try to add the non-modified key-values but get an error
>> map(lambda x: {k: v * 100 for k, v in x.iteritems() if k == 'a' else k:v}, d)
SyntaxError: invalid syntax
File "<stdin>", line 1
map(lambda x: {k: v * 100 for k, v in x.iteritems() if k == 'a' else k:v}, d)
</code></pre>
<p>How can I achieve this?</p>
<p>EDIT: 'a' and 'b' are not the only keys. These were selected for demo purposes only.</p>
|
<p>Iterate through the list and update the desired dict item, </p>
<pre><code>lst = [{'a': 2, 'b': 2}, {'a': 1, 'b': 2}, {'a': 1, 'b': 2}, {'a': 1, 'b': 2}]
for d in lst:
d['a'] *= 100
</code></pre>
<p>Using a list comprehension will give you speed, but it creates a <strong>new list and n new dicts</strong>. It's useful if you don't want to mutate your list; here it is:</p>
<pre><code>new_lst = [{**d, 'a': d['a']*100} for d in lst]
</code></pre>
<p>In <strong>python 2.X</strong> we can't use <code>{**d}</code> so I built <code>custom_update</code> based on the <code>update</code> method and the code will be</p>
<pre><code>def custom_update(d):
new_dict = dict(d)
new_dict.update({'a':d['a']*100})
return new_dict
[custom_update(d) for d in lst]
</code></pre>
<p>If for every item in the list you want to update a different key</p>
<pre><code>keys = ['a', 'b', 'a', 'b'] # keys[0] correspond to lst[0] and keys[0] correspond to lst[0], ...
for index, d in enumerate(lst):
key = keys[index]
d[key] *= 100
</code></pre>
<p>using list comprehension </p>
<pre><code>[{**d, keys[index]: d[keys[index]] * 100} for index, d in enumerate(lst)]
</code></pre>
<p>In <strong>python 2.x</strong> the list comprehension will be</p>
<pre><code>def custom_update(d, key):
new_dict = dict(d)
new_dict.update({key: d[key]*100})
return new_dict
[custom_update(d, keys[index]) for index, d in enumerate(lst)]
</code></pre>
|
python|dictionary|lambda
| 7 |
1,901,337 | 51,369,023 |
Split tensor according to value
|
<p>How do I generate N different sparse tensors for N unique values for a given tensor?
For example, if I have:</p>
<pre><code>tensor = [[1,3,4,5],[1,2,3,2],[3,3,4,5],[2,2,1,4]]
</code></pre>
<p>I want the results to be:</p>
<pre><code>ch1 = [[1,0,0,0],[1,0,0,0],[0,0,0,0],[0,0,1,0]]
ch2 = [[0,0,0,0],[0,1,0,1],[0,0,0,0],[1,1,0,0]]
ch3 = [[0,1,0,0],[0,0,1,0],[0,0,0,0],[0,0,0,0]]
...
</code></pre>
<p>How can I do this in tensorflow? Assume I have a NHWC-formatted tensor.</p>
|
<p>Got it. We can use <code>tf.one_hot()</code>.</p>
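<p>For the example in the question, <code>tf.one_hot(tensor - 1, depth)</code> adds one channel per value along a new last axis, and <code>tf.unstack(..., axis=-1)</code> splits them into separate tensors. The numpy sketch below illustrates the same comparison (it assumes integer values starting at 1):</p>

```python
import numpy as np

tensor = np.array([[1, 3, 4, 5], [1, 2, 3, 2], [3, 3, 4, 5], [2, 2, 1, 4]])

# Compare every entry against each value 1..depth along a new trailing axis.
depth = tensor.max()
one_hot = (tensor[..., None] == np.arange(1, depth + 1)).astype(int)  # (4, 4, depth)

ch1, ch2 = one_hot[..., 0], one_hot[..., 1]
```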
|
tensorflow
| 1 |
1,901,338 | 51,406,090 |
Pythonic way for Zigzag Iterator?
|
<p>I'm implementing a Zigzag Iterator, which iterates a 2D list in the following way:</p>
<pre><code>[1,4,7]
[2,5,8,9]
[3,6]
</code></pre>
<p>to </p>
<pre><code>[1,2,3,4,5,6,7,8,9]
</code></pre>
<p>I implemented an algorithem:</p>
<pre><code>class ZigzagIterator:
def __init__(self, vecs):
self.vecs = []
self.turns = 0
for vec in vecs:
vec and self.vecs.append(iter(vec))
def next(self):
try:
elem = self.vecs[self.turns].next()
self.turns = (self.turns+1) % len(self.vecs)
return elem
except StopIteration:
self.vecs.pop(self.turns)
if self.hasNext():
self.turns %= len(self.vecs)
def hasNext(self):
return len(self.vecs) > 0
if __name__ == "__main__":
s = ZigzagIterator([[1,4,7],[2,5,8,9],[3,6]])
while s.hasNext():
print s.next()
>>> 1 2 3 4 5 6 7 8 None None 9 None
</code></pre>
<p>I know the problem is that I call <code>next()</code> one extra time on each list, so I get three <code>None</code> values. In Java I could resolve this by checking a <code>hasNext</code> method, and I can also implement a <code>hasNext</code> iterator in Python. My question is how I can solve this problem in a more Pythonic way rather than thinking in Java.</p>
|
<p>This is the round robin recipe in the <a href="https://docs.python.org/3/library/itertools.html#itertools-recipes" rel="nofollow noreferrer"><code>itertools</code> docs</a>.</p>
<pre><code>from itertools import cycle, islice

def roundrobin(*iterables):
"roundrobin('ABC', 'D', 'EF') --> A D E B F C"
# Recipe credited to George Sakkis
num_active = len(iterables)
nexts = cycle(iter(it).__next__ for it in iterables)
while num_active:
try:
for next in nexts:
yield next()
except StopIteration:
# Remove the iterator we just exhausted from the cycle.
num_active -= 1
nexts = cycle(islice(nexts, num_active))
</code></pre>
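<p>A quick self-contained check of the recipe against the example input (Python 3 syntax; the recipe relies on <code>cycle</code> and <code>islice</code> from <code>itertools</code>):</p>

```python
from itertools import cycle, islice

def roundrobin(*iterables):
    "roundrobin('ABC', 'D', 'EF') --> A D E B F C"
    num_active = len(iterables)
    nexts = cycle(iter(it).__next__ for it in iterables)
    while num_active:
        try:
            for nxt in nexts:
                yield nxt()
        except StopIteration:
            # remove the iterator we just exhausted from the cycle
            num_active -= 1
            nexts = cycle(islice(nexts, num_active))

print(list(roundrobin([1, 4, 7], [2, 5, 8, 9], [3, 6])))
# -> [1, 2, 3, 4, 5, 6, 7, 8, 9]
```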
|
python|iterator
| 3 |
1,901,339 | 55,731,265 |
Can you reference newly made columns in assign while creating new columns?
|
<p>Using Python 2.7, I have a function through which I need to create a new column and then from that new column create a 2nd new column:</p>
<pre class="lang-py prettyprint-override"><code>def read_assign(fp, col_name):
df = pd.read_csv(fp).assign(model_id=col_name)
df = df.assign(analytic_sol = k95(df.average_fuel_T, df.average_rod_burnup),
error = np.log10((df.analytic_sol - df.avg_th_cond)/df.analytic_sol))
return df
</code></pre>
<p>Currently I am getting an error saying that it does not recognize <code>df.analytic_sol</code> as an attribute of <code>df</code>. Do I have to create a whole new variable and do my assignment a 2nd time? Is there a better way to do this? </p>
<p>Currently this code works but seems inefficient to me:</p>
<pre class="lang-py prettyprint-override"><code>def read_assign(fp, col_name):
df = pd.read_csv(fp).assign(model_id=col_name)
df = df.assign(analytic_sol = k95(df.average_fuel_T, df.average_rod_burnup))
df = df.assign(error = np.log10((df.analytic_sol - df.avg_th_cond)/df.analytic_sol))
return df
</code></pre>
|
<h2>For <code>python 3.6+</code></h2>
<p>Try writing using <a href="https://docs.python.org/3/reference/expressions.html#lambda" rel="nofollow noreferrer"><code>lambda functions</code></a>. This works because columns aren't <em>"assigned"</em> until the <a href="https://docs.python.org/3/reference/expressions.html#lambda" rel="nofollow noreferrer"><code>assign</code></a> function has finished and returned.</p>
<p>So within the first <code>assign</code> call, <code>df['analytic_sol']</code> does not exist yet... but with <code>lambda</code>, you are essentially referencing 'self' within the function, which <strong>does</strong> already have the column <code>analytic_sol</code>. </p>
<pre><code>def read_assign(fp, col_name):
df = pd.read_csv(fp).assign(model_id=col_name)
df = df.assign(analytic_sol = k95(df.average_fuel_T, df.average_rod_burnup),
error = lambda x: np.log10((x['analytic_sol'] - df.avg_th_cond) / x['analytic_sol']))
return df
</code></pre>
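<p>A minimal toy illustration of the difference (the column names here are made up, not the ones from the question): the plain expression is evaluated before <code>assign</code> runs, while the <code>lambda</code> receives the frame with the earlier column already added:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [1.0, 2.0]})

# 'b' is computed as a plain expression; 'c' must use a lambda because
# 'b' does not exist on df yet when the keyword arguments are evaluated
out = df.assign(b=df['a'] * 10,
                c=lambda x: x['b'] + 1)

print(out)
```

<p>Note that relying on keyword order within a single <code>assign</code> call needs Python 3.6+ (and a reasonably recent pandas); on Python 2.7, as in the question, the two-step version is the safe route.</p>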
|
python|pandas
| 2 |
1,901,340 | 66,476,679 |
google.api_core.exceptions.InternalServerError: 500 Failed to process all the documents
|
<p>I am getting this error when trying to implement the Document OCR from google cloud in python as explained here: <a href="https://cloud.google.com/document-ai/docs/ocr#documentai_process_document-python" rel="nofollow noreferrer">https://cloud.google.com/document-ai/docs/ocr#documentai_process_document-python</a>.</p>
<p>When I run</p>
<pre><code>operation.result(timeout=None)
</code></pre>
<p>I get this error</p>
<pre><code>Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Users/Niolo/Desktop/project/venv/lib/python3.8/site-packages/google/api_core/future/polling.py", line 134, in result
raise self._exception
google.api_core.exceptions.InternalServerError: 500 Failed to process all the documents
</code></pre>
<p>My full code</p>
<pre><code>import re
import os
from google.cloud import storage
from google.cloud import documentai_v1beta3 as documentai
from google.api_core.client_options import ClientOptions
project_id = 'my_project_id'
location = 'eu' # Format is 'us' or 'eu'
processor_id = 'my_processor_id' # Create processor in Cloud Console
gcs_input_uri = "gs://my_bucket/toy1.py"
gcs_output_uri = "gs://my_bucket"
gcs_output_uri_prefix = "gs://"
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/Users/Niolo/Desktop/Work/DocumentAI/OCR/key.json"
def batch_process_documents(
project_id,
location,
processor_id,
gcs_input_uri,
gcs_output_uri,
gcs_output_uri_prefix,
timeout: int = 300,
):
# Set endpoint to EU
options = ClientOptions(api_endpoint="eu-documentai.googleapis.com:443")
# Instantiates a client
client = documentai.DocumentProcessorServiceClient(client_options=options)
destination_uri = f"{gcs_output_uri}/{gcs_output_uri_prefix}/"
# 'mime_type' can be 'application/pdf', 'image/tiff',
# and 'image/gif', or 'application/json'
input_config = documentai.types.document_processor_service.BatchProcessRequest.BatchInputConfig(
gcs_source=gcs_input_uri, mime_type="application/pdf"
)
# Where to write results
output_config = documentai.types.document_processor_service.BatchProcessRequest.BatchOutputConfig(
gcs_destination=destination_uri
)
# Location can be 'us' or 'eu'
name = f"projects/{project_id}/locations/{location}/processors/{processor_id}"
request = documentai.types.document_processor_service.BatchProcessRequest(
name=name,
input_configs=[input_config],
output_config=output_config,
)
operation = client.batch_process_documents(request)
# Wait for the operation to finish
operation.result(timeout=None)
# Results are written to GCS. Use a regex to find
# output files
match = re.match(r"gs://([^/]+)/(.+)", destination_uri)
output_bucket = match.group(1)
prefix = match.group(2)
storage_client = storage.Client()
bucket = storage_client.get_bucket(output_bucket)
blob_list = list(bucket.list_blobs(prefix=prefix))
print("Output files:")
for i, blob in enumerate(blob_list):
# Download the contents of this blob as a bytes object.
if ".json" not in blob.name:
print(f"skipping non-supported file type {blob.name}")
return
# Only parses JSON files
blob_as_bytes = blob.download_as_bytes()
document = documentai.types.Document.from_json(blob_as_bytes)
print(f"Fetched file {i + 1}")
# For a full list of Document object attributes, please reference this page: https://googleapis.dev/python/documentai/latest/_modules/google/cloud/documentai_v1beta3/types/document.html#Document
# Read the text recognition output from the processor
for page in document.pages:
for form_field in page.form_fields:
field_name = get_text(form_field.field_name, document)
field_value = get_text(form_field.field_value, document)
print("Extracted key value pair:")
print(f"\t{field_name}, {field_value}")
for paragraph in document.pages:
paragraph_text = get_text(paragraph.layout, document)
print(f"Paragraph text:\n{paragraph_text}")
</code></pre>
|
<p>For the following variables, you need to supply the correct values.</p>
<ul>
<li><code>gcs_input_uri</code> the full path of the pdf/tiff/gif file you would like to process</li>
</ul>
<blockquote>
<p>gcs_input_uri = 'gs://cloud-samples-data/documentai/loan_form.pdf'</p>
</blockquote>
<ul>
<li><code>gcs_output_uri</code> the bucket where you will store the output. <strong>NOTE: don't add a "/" at the end of the bucket name. This will also result to a error 500!</strong></li>
</ul>
<blockquote>
<p>gcs_output_uri = 'gs://samplebucket'</p>
</blockquote>
<ul>
<li><code>gcs_output_uri_prefix</code> this will serve as a folder in your bucket.</li>
</ul>
<blockquote>
<p>gcs_output_uri_prefix = 'test'</p>
</blockquote>
<p>Keep the timeout in <code>operation.result()</code> since <a href="https://googleapis.dev/python/documentai/latest/documentai_v1beta3/services.html#google.cloud.documentai_v1beta3.services.document_processor_service.DocumentProcessorServiceAsyncClient.batch_process_documents" rel="nofollow noreferrer">client.batch_process_documents(request)</a> returns a long running operation.</p>
<blockquote>
<p>An object representing a long-running operation.
The result type for the operation will be
:class:~.document_processor_service.BatchProcessResponse: Response
message for batch process document method.</p>
</blockquote>
<pre><code># Wait for the operation to finish
operation.result(timeout=timeout)
</code></pre>
<p>Here is the working code:</p>
<pre><code>import re
import os
from google.cloud import storage
from google.cloud import documentai_v1beta3 as documentai
from google.api_core.client_options import ClientOptions
project_id = 'tiph-ricconoel-batch8'
location = 'eu' # Format is 'us' or 'eu'
processor_id = 'your_processor_id' # Create processor in Cloud Console
gcs_input_uri = 'gs://cloud-samples-data/documentai/loan_form.pdf'
gcs_output_uri = 'gs://samplebucket'
gcs_output_uri_prefix = 'test'
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = '/full_path/your_json_file.json'
def batch_process_documents(
project_id,
location,
processor_id,
gcs_input_uri,
gcs_output_uri,
gcs_output_uri_prefix,
timeout: int = 300,
):
# Set endpoint to EU
options = ClientOptions(api_endpoint="eu-documentai.googleapis.com:443")
# Instantiates a client
client = documentai.DocumentProcessorServiceClient(client_options=options)
destination_uri = f"{gcs_output_uri}/{gcs_output_uri_prefix}/"
# 'mime_type' can be 'application/pdf', 'image/tiff',
# and 'image/gif', or 'application/json'
input_config = documentai.types.document_processor_service.BatchProcessRequest.BatchInputConfig(
gcs_source=gcs_input_uri, mime_type="application/pdf"
)
# Where to write results
output_config = documentai.types.document_processor_service.BatchProcessRequest.BatchOutputConfig(
gcs_destination=destination_uri
)
# Location can be 'us' or 'eu'
name = f"projects/{project_id}/locations/{location}/processors/{processor_id}"
request = documentai.types.document_processor_service.BatchProcessRequest(
name=name,
input_configs=[input_config],
output_config=output_config,
)
operation = client.batch_process_documents(request)
# Wait for the operation to finish
operation.result(timeout=timeout)
# Results are written to GCS. Use a regex to find
# output files
match = re.match(r"gs://([^/]+)/(.+)", destination_uri)
output_bucket = match.group(1)
prefix = match.group(2)
storage_client = storage.Client()
bucket = storage_client.get_bucket(output_bucket)
blob_list = list(bucket.list_blobs(prefix=prefix))
print("Output files:")
for i, blob in enumerate(blob_list):
# Download the contents of this blob as a bytes object.
if ".json" not in blob.name:
print(f"skipping non-supported file type {blob.name}")
return
# Only parses JSON files
blob_as_bytes = blob.download_as_bytes()
document = documentai.types.Document.from_json(blob_as_bytes)
print(f"Fetched file {i + 1}")
# For a full list of Document object attributes, please reference this page: https://googleapis.dev/python/documentai/latest/_modules/google/cloud/documentai_v1beta3/types/document.html#Document
# Read the text recognition output from the processor
for page in document.pages:
for form_field in page.form_fields:
field_name = get_text(form_field.field_name, document)
field_value = get_text(form_field.field_value, document)
print("Extracted key value pair:")
print(f"\t{field_name}, {field_value}")
for paragraph in document.pages:
paragraph_text = get_text(paragraph.layout, document)
print(f"Paragraph text:\n{paragraph_text}")
</code></pre>
<p>This will create the output file in <code>gs://samplebucket/test/xxxxx/x/output.json</code>. See testing below:</p>
<p><a href="https://i.stack.imgur.com/fM081.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fM081.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/GZbV5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GZbV5.png" alt="enter image description here" /></a></p>
|
python|google-cloud-platform|google-cloud-storage|cloud-document-ai
| 1 |
1,901,341 | 60,546,573 |
How to check character count for Google Translation API?
|
<p>I am using the following code to translate using the google translation API </p>
<pre><code>from google.cloud import translate_v2 as translate
translate_client = translate.Client(credentials=credentials)
# if isinstance(text, six.binary_type):
# text = text.decode('utf-8')
# Text can also be a sequence of strings, in which case this method
# will return a sequence of results for each text.
result = translate_client.translate(
text, target_language='en')
print(u'Text: {}'.format(result['input']))
print(u'Translation: {}'.format(result['translatedText']))
print(u'Detected source language: {}'.format(
result['detectedSourceLanguage']))
</code></pre>
<p>How can I keep track as to how many characters a remaining or have been used till now? I have 1 million free characters. </p>
|
<p>Even though I do not think there is a direct way of requesting such information (apart from the console), there is a way of creating alerting policies internally.</p>
<p>You can set an alerting policy that is triggered for a certain number of requested bytes, applying 1 char = 1 byte (8 bits).</p>
<p>In order to do that, you should go Monitoring -> Alerting -> Create New Policy -></p>
<pre><code> ·Resource type: Consumed API
·Metric: Request sizes
·Filter -> Service = translate.googleapis.com
</code></pre>
<p>and configure as much triggers as you like.
I hope this finds well!</p>
|
python|google-cloud-platform|google-translate|google-translation-api
| 3 |
1,901,342 | 56,759,761 |
Concatenating dataframes creates too many columns
|
<p>I am reading in a number of csv files using a loop; all have 38 columns. I add them all to a list and then concatenate/create a dataframe. My issue is that despite all these csv files having 38 columns, my resultant dataframe somehow ends up with 105 columns.</p>
<p>Here is a screenshot: </p>
<p><a href="https://i.stack.imgur.com/zEWlW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zEWlW.png" alt="image"></a></p>
<p>How can I make the resultant dataframe have the correct 38 columns and stack all of the rows on top of each other?</p>
<pre><code>import boto3
import pandas as pd
import io
s3 = boto3.resource('s3')
client = boto3.client('s3')
bucket = s3.Bucket('alpha-enforcement-data-engineering')
appended_data = []
for obj in bucket.objects.filter(Prefix='closed/closed_processed/year_201'):
print(obj.key)
df = pd.read_csv(f's3://alpha-enforcement-data-engineering/{obj.key}', low_memory=False)
print(df.shape)
appended_data.append(df)
df_closed = pd.concat(appended_data, axis=0, sort=False)
print(df_closed.shape)
</code></pre>
|
<p><strong>TLDR</strong>; check your column headers.</p>
<pre><code>c = appended_data[0].columns
df_closed = pd.concat([df.set_axis(
c, axis=1, inplace=False) for df in appended_data], sort=False)
</code></pre>
<hr>
<p>This happens because your column headers are different. Pandas will align your DataFrames on the headers when concatenating vertically, and will insert empty columns for DataFrames where that header is not present. Here's an illustrative example:</p>
<pre><code>df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})
df2 = pd.DataFrame({'C': [7, 8, 9], 'D': [10, 11, 12]})
df
A B
0 1 4
1 2 5
2 3 6
df2
C D
0 7 10
1 8 11
2 9 12
</code></pre>
<pre><code>pd.concat([df, df2], axis=0, sort=False)
A B C D
0 1.0 4.0 NaN NaN
1 2.0 5.0 NaN NaN
2 3.0 6.0 NaN NaN
0 NaN NaN 7.0 10.0
1 NaN NaN 8.0 11.0
2 NaN NaN 9.0 12.0
</code></pre>
<p>Creates 4 columns. Whereas, you wanted only two. Try,</p>
<pre><code>df2.columns = df.columns
pd.concat([df, df2], axis=0, sort=False)
A B
0 1 4
1 2 5
2 3 6
0 7 10
1 8 11
2 9 12
</code></pre>
<p>Which works as expected. </p>
|
python|pandas
| 2 |
1,901,343 | 69,244,071 |
Multi-threading click macro / click recorder
|
<p>I am working on a script that listens to keystrokes until the 'q' key is pressed; afterwards it should stop and print out the mouse positions that were saved at 2-second intervals. I can't manage the threads, and I am still learning this topic. Each time I run the code nothing happens, but the process keeps running:</p>
<pre><code>from pynput.keyboard import Listener
import pyautogui
from multiprocessing import Process
import time
mouse_positions = []
def func1():
while True:
time.sleep(2)
mouse_positions.append(pyautogui.position())
cordinates = []
quit_status = False
keystrokes = []
def on_press(key):
if "q" in str(key) :
print('q was pressed!')
exit("Stopped running")
#qprint(key)
keystrokes.append(key)
print(keystrokes)
#print(keystrokes)
if __name__ == '__main__':
p1 = Process(target=func1)
p1.start()
p1.join()
with Listener(on_press=on_press) as listener: # Create an instance of Listener
listener.join() # Join the listener thread to the main thread to keep waiting for keys
</code></pre>
<p>EDIT:
To anyone interested, here is a click macro I built; the script I built previously was more like mouse movement capture. The script below will record your mouse clicks and afterwards replay them. Much better.</p>
<pre><code>from pynput.keyboard import Listener
import pyautogui
from pynput import mouse
import time
x_pos = []
y_pos = []
both_pos = []
pressed_key = None
def on_click(x, y, button, pressed):
if pressed:
#print ("{0} {1}".format(x,y))
print(pressed_key)
if pressed_key == "1":
both_pos.append("{0}".format(x,y))
both_pos.append("{1}".format(x,y))
#print("test" + x_pos + y_pos)
print (x_pos + y_pos)
else:
pass
if pressed_key == 'q':
return False
def on_press(key):
print("To replay press 'q' , to stop recording press '1' , to record again press '1' .")
global pressed_key
if 'Key.esc' in str(key):
return False
if '1' in str(key):
pressed_key= None if pressed_key == '1' else '1'
if 'q' in str(key):
print("Replaying actions")
print(str(len(both_pos)))
for point in range(0,len(both_pos),2):
time.sleep(3)
print("clicking")
pyautogui.click(x=int(both_pos[point]),y=int(both_pos[point+1]))
print("done...")
return False
mouse_listener = mouse.Listener(on_click=on_click)
mouse_listener.start()
with Listener(on_press=on_press) as listener: # Create an instance of Listener
listener.join()
#print(mouse_listener.mouse_positions)
</code></pre>
|
<p>Hi, you can use the <code>threading</code> module.
I have created a class <code>MouseListener</code> which inherits from <code>threading.Thread</code>. Everything you want to run goes into the <code>run</code> method. As a thread stopper I used the <code>still_run</code> attribute.
When a key is pressed, I pass the pressed key and the <code>mouse_listener</code> to the <code>on_press</code> function. If <strong>q</strong> is pressed I set <code>mouse_listener.still_run</code> to <code>False</code>, which stops the mouse listener.</p>
<p>I moved <code>mouse_positions</code> from the global scope into <code>MouseListener</code>.</p>
<pre class="lang-py prettyprint-override"><code>
import threading
from pynput.keyboard import Listener
import pyautogui
import time
class MouseListener(threading.Thread):
still_run = True
mouse_positions = []
def run(self):
self.func()
def func(self):
while self.still_run:
time.sleep(2)
self.mouse_positions.append(pyautogui.position())
print(self.mouse_positions)
coordinates = []
quit_status = False
keystrokes = []
def on_press(key, mouse_listener):
print('kp')
if "q" in str(key):
print('q was pressed!')
mouse_listener.still_run = False
print(key)
exit("Stopped running")
keystrokes.append(key)
print(keystrokes)
print(keystrokes)
if __name__ == '__main__':
mouse_listener = MouseListener()
mouse_listener.start()
with Listener(on_press=lambda key: on_press(key, mouse_listener)) as listener: # Create an instance of Listener
listener.join()
print(mouse_listener.mouse_positions)
</code></pre>
|
python-3.x|multithreading|automation|python-multiprocessing|pyautogui
| 0 |
1,901,344 | 68,064,814 |
Modify NDarray for specific columns
|
<p>I have a 2d NDarray of shape (12, 8), like this:</p>
<pre><code>[[ 89 65 4 86 44 137 113 124]
[ 88 71 2 89 40 140 109 129]
[ 93 71 5 87 40 139 111 129]
[ 87 74 6 96 47 143 113 129]
[ 81 74 3 99 47 144 112 129]
[ 86 64 4 89 47 139 115 123]
[ 85 76 1 93 38 142 106 132]
[ 89 80 4 94 38 143 107 134]
[ 84 68 4 93 48 141 114 125]
[ 95 42 14 80 65 130 135 107]
[ 90 35 1 67 50 124 123 104]
[ 68 36 1 84 63 129 126 97]]
</code></pre>
<p>I need to apply a simple function <code>val = 200 - val</code> to all odd-numbered columns. How could I do that?</p>
|
<p>Simply by taking odd numbers as your indices:</p>
<pre><code>>>> a = np.asarray([[ 89, 65, 4, 86, 44, 137, 113, 124],
[ 88, 71, 2, 89, 40, 140, 109, 129],
[ 93, 71, 5, 87, 40, 139, 111, 129],
[ 87, 74, 6, 96, 47, 143, 113, 129],
[ 81, 74, 3, 99, 47, 144, 112, 129],
[ 86, 64, 4, 89, 47, 139, 115, 123],
[ 85, 76, 1, 93, 38, 142, 106, 132],
[ 89, 80, 4, 94, 38, 143, 107, 134],
[ 84, 68, 4, 93, 48, 141, 114, 125],
[ 95, 42, 14, 80, 65, 130, 135, 107],
[ 90, 35, 1, 67, 50, 124, 123, 104],
[ 68, 36, 1, 84, 63, 129, 126, 97]])
>>> idx = np.arange(1, a.shape[1], 2)  # odd-numbered column indices
>>> a[:, idx] = 200 - a[:, idx]
>>> a
array([[ 89, 135, 4, 114, 44, 63, 113, 76],
[ 88, 129, 2, 111, 40, 60, 109, 71],
[ 93, 129, 5, 113, 40, 61, 111, 71],
[ 87, 126, 6, 104, 47, 57, 113, 71],
[ 81, 126, 3, 101, 47, 56, 112, 71],
[ 86, 136, 4, 111, 47, 61, 115, 77],
[ 85, 124, 1, 107, 38, 58, 106, 68],
[ 89, 120, 4, 106, 38, 57, 107, 66],
[ 84, 132, 4, 107, 48, 59, 114, 75],
[ 95, 158, 14, 120, 65, 70, 135, 93],
[ 90, 165, 1, 133, 50, 76, 123, 96],
[ 68, 164, 1, 116, 63, 71, 126, 103]])
</code></pre>
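<p>Equivalently, the odd columns can be selected with a step slice, which avoids building an explicit index array:</p>

```python
import numpy as np

a = np.array([[89, 65, 4, 86],
              [88, 71, 2, 89]])

# every other column starting at index 1, modified in place
a[:, 1::2] = 200 - a[:, 1::2]
print(a)
```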
|
python|numpy
| 2 |
1,901,345 | 60,143,799 |
How to convert a HyperSpectral image or an image with many bands in TFRecord format?
|
<p>I've been trying to use a hyperspectral image dataset that was in .mat files. I found that using the <em>scipy</em> library with its <em>loadmat</em> function I can load the hyperspectral images and select some bands to view them as an RGB image.</p>
<pre><code> def RGBread(image):
images = loadmat(image).get('new_image')
return abs(images[:,:,(12,6,4)])
def SIread(image):
images = loadmat(image).get('new_image')
return abs(images[:,:,:])
</code></pre>
<p>After trying to implement the pix2pix architecture I hit an unexpected error. When passing the list of dataset file names (still .mat files) to the function responsible for loading the data, TensorFlow has no direct method for reading or decoding them, so I load the data with my RGBread and SIread methods and then turn it into tensors.</p>
<pre><code> def load_image(filename, augment=True):
inimg = tf.cast( tf.convert_to_tensor(RGBread(ImagePATH+'/'+filename)
,dtype=tf.float32),tf.float32)[...,:3]
tgimg = tf.cast( tf.convert_to_tensor(SIread(ImagePATH+'/'+filename)
,dtype=tf.float32),tf.float32)[...,:12]
inimg, tgimg = resize(inimg, tgimg,IMG_HEIGH,IMG_WIDTH)
if augment:
inimg, tgimg = random_jitter(inimg, tgimg)
return inimg, tgimg
</code></pre>
<p>When loading an image with the load_image method, using the name and path of a single .mat file (a hyperspectral image) from my dataset as the argument, the method worked perfectly.</p>
<pre><code>plt.imshow(load_train_image(tr_urls[1])[0])
</code></pre>
<p>The problem started when I created my dataset tensor, because my RGBread function cannot receive a tensor as a parameter, since <em>loadmat('.mat')</em> expects a string. I get the following error:</p>
<pre><code>train_dataset = tf.data.Dataset.from_tensor_slices(tr_urls)
train_dataset = train_dataset.map(load_train_image,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
</code></pre>
<pre><code>TypeError: expected str, bytes or os.PathLike object, not Tensor
</code></pre>
<p>After reading a lot about reading .mat files I found a user who recommended converting the data to TFRecord format. I've been trying to do it but I couldn't. Could someone help me?</p>
|
<p>Rasterio may be useful here. </p>
<p><a href="https://rasterio.readthedocs.io/en/latest/" rel="nofollow noreferrer">https://rasterio.readthedocs.io/en/latest/</a></p>
<p>It can read hyperspectral .tif which can be passed to tf.data using a tf.keras data-generator. It may be a bit slow and perhaps should be done before training rather than at runtime.</p>
<p>An alternative is to ask whether you need the geotiff metadata. If not, you can preprocess and save as numpy arrays for tfrecords. </p>
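<p>A minimal sketch of that preprocessing route with numpy only (the file name and array shape are illustrative): convert each .mat cube to a plain array once, before training, and later load the arrays for tf.data or TFRecord writing.</p>

```python
import os
import tempfile
import numpy as np

# stand-in for the array returned by loadmat(...)['new_image']
cube = np.random.rand(8, 8, 12).astype(np.float32)

out_dir = tempfile.mkdtemp()
path = os.path.join(out_dir, "obs0.npy")
np.save(path, cube)          # preprocess once, outside the tf.data pipeline

restored = np.load(path)     # later: plain arrays, no string-vs-tensor issue
print(restored.shape)
```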
|
tensorflow|image-processing|dataset|tfrecord
| 0 |
1,901,346 | 67,884,517 |
Python: how to pivot a dataframe that contains lists?
|
<p>I have a pandas dataframe that looks like the following</p>
<pre><code>df
A B
0 'X1' [3,2,1,5]
1 'X2' [0,-2,1,2]
2 'X3' [5,1,1,-6]
</code></pre>
<p>I would like to get a dataframe like the following one:</p>
<pre><code>df
X1 X2 X3
0 3 0 5
1 2 -2 1
2 1 1 1
3 5 2 6
</code></pre>
|
<p>Convert column from <code>B</code> to DataFrame with index by <code>A</code> column and then transpose by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.T.html" rel="noreferrer"><code>DataFrame.T</code></a>:</p>
<pre><code>df = pd.DataFrame(df.B.tolist(), index=df.A.tolist()).T
print (df)
'X1' 'X2' 'X3'
0 3 0 5
1 2 -2 1
2 1 1 1
3 5 2 -6
</code></pre>
<hr />
<pre><code>df = pd.DataFrame(df.B.tolist(), index=df.A.str.strip("'").tolist()).T
print (df)
X1 X2 X3
0 3 0 5
1 2 -2 1
2 1 1 1
3 5 2 -6
</code></pre>
|
python|pandas
| 5 |
1,901,347 | 42,596,719 |
Early stopping with LSTM on Tensorflow
|
<p>I have implemented an LSTM in tensorflow in order to give a label (a float from -1 to 1) to each timestep of a feature vector, as in this example code:</p>
<pre><code>import tensorflow as tf
from tensorflow.contrib import rnn
from tensorflow.contrib.learn.python.learn.datasets import base
import random
import numpy as np
import fx
import load_data as ld
import matplotlib.pyplot as plt
import random
# Parameters
LEARNING_RATE = 0.001
TRAINING_ITERS = 20000
BATCH_SIZE = 128
DISPLAY_STEP = 10
# Network Parameters
N_TIMESTEPS = 1260 # Number of timesteps in each observation
N_OBSERVATIONS = 2000 # Number of observations in the training set
N_HIDDEN = 32 # Number of features in the hidden layer
# Ratios for splitting the data
DATA_SPLITTING = {'train': 0.6,
'validation': 0.2,
'test': 0.2}
# Generate a bunch of data points and then package them up in the array format
# needed by
# tensorflow
def generate_data (n_observations, Fs, n_timesteps, impose_slow_sine):
features = []
labels = []
for i in range (n_observations):
features_obs = generate_sinusoid (Fs, n_timesteps, impose_slow_sine)
labels_obs = label_data (features_obs)
features.append(features_obs)
labels.append(labels_obs)
# plot stuff to confirm labels are correct
#plot_labels(features_obs, labels_obs)
# Convert to 2d array
features = np.array(features)
labels = np.array(labels)
# I want the data to have 3 dimensions because that's
# the dimension my real data has. Here dimension 0 will be singleton.
# Expand to 3 dimensions
features = np.expand_dims(np.array (features), axis = 0)
labels = np.expand_dims(np.array (labels), axis = 0)
return features, labels
def label_data (x):
max = np.amax (x)
min = np.amin (x)
return 2 * (x - max) / (max - min) + 1
def main ():
# Generate the data
features, labels = generate_data (N_OBSERVATIONS, N_TIMESTEPS, N_TIMESTEPS, True)
# Split data into train, validation, and test sets
data_split = fx.split_data (features, labels, DATA_SPLITTING)
# Create objects that are iterable over batches
train = fx.DataSet (data_split['train_features'], data_split['train_labels'])
validation = fx.DataSet (data_split['validation_features'], data_split['validation_labels'])
test = fx.DataSet (data_split['test_features'], data_split['test_labels'])
# Create tf object that contains all the datasets
data_sets = base.Datasets (train=train, validation=validation, test=test)
# Get the dimensions for in the placeholders
features_dimension = features.shape[0]
labels_dimension = labels.shape[0]
n_timesteps = features.shape[2]
# TF Graph Placeholders
# Dimension 0 is the number of dimensions in the features and labels;
# dimension 1 is the number of observations;
# dimension 2 is the number of timesteps.
x = tf.placeholder ("float", [features_dimension, None, n_timesteps])
y = tf.placeholder ("float", [labels_dimension, None, n_timesteps])
# Define weights
weights = {'out': tf.Variable (tf.zeros ([N_HIDDEN, labels_dimension]))}
biases = {'out': tf.Variable (tf.zeros ([labels_dimension]))}
def RNN (x, weights, biases):
# Prepare data shape to match `rnn` function requirements
# Current data input shape: (features_dimension, n_observations, n_timesteps)
# Permuting features_dimension and n_timesteps
x = tf.transpose (x, [2, 1, 0])
# Reshaping to (n_observations*n_timesteps, features_dimension) (we are removing the depth dimension with this)
x = tf.reshape(x, [-1, features_dimension])
# Split the previous 2D tensor to get a list of `n_timesteps` tensors of
# shape (n_observations, features_dimension).
x = tf.split (x, n_timesteps, 0)
# Define a lstm cell with tensorflow
lstm_cell = rnn.LSTMCell (N_HIDDEN, use_peepholes=True)
# Get lstm cell output
# outputs is a list of `n_timesteps` tensors with shape (n_observations, N_HIDDEN)
outputs, states = rnn.static_rnn (lstm_cell, x, dtype=tf.float32)
# Transform the list into a 3D tensor with dimensions (n_timesteps, n_observations, N_HIDDEN)
outputs = tf.stack(outputs)
# Linear activation
def pred_fn(current_output):
return tf.matmul(current_output, weights['out']) + biases['out']
# Use tf.map_fn to apply pred_fn to each tensor in outputs, along dimension 0 (timestep dimension)
pred = tf.map_fn(pred_fn, outputs)
# Return pred with the same dimensions as the placeholder y
# Current shape: (n_timesteps, n_observations, labels_dimension)
# Required shape: (labels_dimension, n_observations, n_timesteps)
# Permute n_timesteps and n_timesteps
return tf.transpose(pred, [2, 1, 0])
# Results from the RNN
pred = RNN (x, weights, biases)
cost = tf.reduce_mean (tf.square (pred - y))
optimizer = tf.train.GradientDescentOptimizer (LEARNING_RATE).minimize (cost)
# Evaluate model
accuracy = tf.reduce_mean (tf.cast (tf.square (pred - y), "float"))
# Initializing the variables
init = tf.global_variables_initializer ()
# Launch the graph
with tf.Session () as sess:
sess.run (init)
step = 1
# Keep training until reach max iterations
while step * BATCH_SIZE < TRAINING_ITERS:
batch_x, batch_y = data_sets.train.next_batch(BATCH_SIZE)
# Run optimization op (backprop)
sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
if step % DISPLAY_STEP == 0:
# Calculate batch accuracy
acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
# Calculate batch loss
loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
print("Iter " + str(step*BATCH_SIZE) + ", Minibatch Loss= " + \
"{:.6f}".format(loss) + ", Training Accuracy= " + \
"{:.5f}".format(acc))
step += 1
print ("Optimization Finished!")
# Calculate accuracy for the test data
test_features = data_sets.test.features
test_labels = data_sets.test.labels
print ("Testing Accuracy:", \
sess.run (accuracy, feed_dict={x: test_features, y: test_labels}))
if __name__ == '__main__':
main ()
</code></pre>
<p>Now I want to implement early stopping on my code in order to avoid over-fitting. What is the most straightforward way to accomplish this? Is there anything already implemented on tensorflow? I think the <code>tf.contrib.learn</code> API might allow me to do this but I am not sure about how I can apply it to my case in particular.</p>
|
<p>Use the below code to early stopping</p>
<pre><code>import tensorflow as tf
class myCallback(tf.keras.callbacks.Callback):
  def on_epoch_end(self, epoch, logs=None):
    logs = logs or {}
    # use logs.get(..., 0) so a missing metric does not raise a TypeError;
    # in older TF/Keras versions the metric key is 'acc' rather than 'accuracy'
    if logs.get('accuracy', 0) > 0.8:
      print("\nReached 80% accuracy so cancelling training!")
      self.model.stop_training = True
callbacks = myCallback()
model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])
</code></pre>
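<p>If you are training with a manual loop, as in the question's low-level TensorFlow code, you can also implement the patience logic by hand. A minimal framework-agnostic sketch (the class and variable names are my own, not part of any TF API):</p>

```python
class EarlyStopper:
    """Stop training when the monitored loss has not improved for `patience` checks."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience    # checks without improvement to tolerate
        self.min_delta = min_delta  # minimum change that counts as an improvement
        self.best_loss = float("inf")
        self.bad_checks = 0

    def should_stop(self, val_loss):
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss  # improvement: reset the counter
            self.bad_checks = 0
        else:
            self.bad_checks += 1       # no improvement this check
        return self.bad_checks >= self.patience

# losses stall after the third value, so the loop stops once patience runs out
stopper = EarlyStopper(patience=3)
losses = [1.0, 0.8, 0.7, 0.71, 0.72, 0.70, 0.73]
for step, loss in enumerate(losses):
    if stopper.should_stop(loss):
        print("early stop at step", step)
        break
```

<p>You would call <code>should_stop</code> with a validation loss every <code>DISPLAY_STEP</code> iterations inside the question's <code>while</code> loop and <code>break</code> when it returns <code>True</code>.</p>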
|
python|tensorflow
| 0 |
1,901,348 | 61,231,709 |
error with user for pip install with custom index-url
|
<p>I have an internal nexus server where I store my python packages. </p>
<p>When I try to run </p>
<p><code>pip install --index-url=https://my_pip_user:my_pip_pass@my_pip_url ...</code> </p>
<p>while building docker image I'm getting </p>
<p><code>my_pip_user is not a valid value for user option, please specify a boolean value like yes/no, true/false or 1/0 instead.</code></p>
<p>The error is not thrown if cmd is run just in bash.</p>
<p>I've tried to put <code>index-url</code> into <code>~/.pip/pip.conf</code> but it doesn't change anything.</p>
|
<p>I had the similar problem before, just check your environment variables, make sure you do not have any env vars which start with "<strong>PIP_</strong>", e.g. "<strong>PIP_USER</strong>", otherwise it will consider this is a pip command option and will be passed to pip command.</p>
<p>Reference: <a href="https://pip.pypa.io/en/stable/user_guide/#environment-variables" rel="nofollow noreferrer">https://pip.pypa.io/en/stable/user_guide/#environment-variables</a></p>
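<p>To check quickly whether any such variable is set in your build environment, you can list everything starting with <code>PIP_</code>; a small Python sketch:</p>

```python
import os

# pip turns any environment variable named PIP_<OPTION> into a command-line
# option, e.g. PIP_USER becomes --user (a boolean flag, hence the error message)
pip_vars = {k: v for k, v in os.environ.items() if k.startswith("PIP_")}
for name, value in pip_vars.items():
    print(name, "=", value)
```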
|
python|docker|pip|nexus
| 2 |
1,901,349 | 58,101,411 |
How to read non-ASCII characters in Python 2.7
|
<p>I know this could be a very common problem with lots of solutions already given.
I am unable to find a solution for my problem; can someone please let me know if there is a duplicate post, or how to fix it?</p>
<p>I need to read source data which has both ASCII and non-ASCII characters (I need help with Python 2.7). After reading I need to do some comparison on the source data and then write it into a target file.</p>
<pre><code>with open('read.txt', "r") as file:
reader = csv.reader(file, delimiter='\t')
for lines in reader:
LST_NM = (lines[0])
print(LST_NM)
</code></pre>
<p>My Source File is :
read.txt</p>
<p><code>"Abràmoff"</code></p>
<p>With this non-ascii character, my code is giving below error
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 266: ordinal not in range(128)</p>
<p>Thanks!!!</p>
|
<p>You'll need to determine what encoding was used to create your file.
For example, if your file was written using utf-8 then you can use something like this:</p>
<pre><code>your_encoding = 'utf-8'
import codecs
f = codecs.open('read.txt', encoding=your_encoding)
for line in f:
print repr(line)
</code></pre>
<p>Some other encodings you can try include 'cp1252' which is common on Windows and maybe 'latin_1'</p>
<p><a href="https://docs.python.org/2/howto/unicode.html#reading-and-writing-unicode-data" rel="nofollow noreferrer">Reference</a></p>
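<p>Alternatively, <code>io.open</code> (available in both Python 2.7 and Python 3) accepts an <code>encoding</code> argument as well. A minimal sketch, assuming the file is UTF-8 (it creates a small sample file first so it is self-contained):</p>

```python
import io

# create a small UTF-8 sample file like the question's read.txt
with io.open('read.txt', 'w', encoding='utf-8') as f:
    f.write(u'"Abr\xe0moff"\tother\n')

# io.open decodes the raw bytes for you, yielding unicode text line by line
with io.open('read.txt', encoding='utf-8') as f:
    for line in f:
        print(repr(line))
```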
|
python-2.7
| 1 |
1,901,350 | 57,433,897 |
How to prevent tkinter program from running button related code upon start?
|
<p>Python beginner with little tkinter or OOP experience. </p>
<p>Functional code from a tkinter YouTube tutorial, which called all tkinter functions globally, was copied and modified into separate functions.
The code now calls the functions immediately upon start.
The photo no longer displays if the exit button is commented out.
What am I missing?</p>
<pre class="lang-py prettyprint-override"><code>
#simple glossary example demonstrating tkinter basics
#program broken by adding main and attempting to partition into separate functions
from tkinter import *
dictionary_db = {
'algorithm' : 'Algorithm: Step by step instructions to complete a task.',
'bug' : 'Bug: Piece of code that causes a program to fail.'
}
photo_file_name = "logo.png"
#click function used by submit button
def click(text_entry, output):
entered_text = text_entry.get()
output.delete(0.0, END)
try:
definition = dictionary_db[entered_text]
except:
definition = "Sorry not an exisiting word. Please try again."
output.insert(END, definition)
#exit function used by exit button
def close_window(window):
window.destroy()
exit()
#configure window
def config(window):
# config
window.title("My tkinter demo")
window.configure(background="black")
#create objects
photo1 = PhotoImage(file=photo_file_name)
photo_label = Label (window, image=photo1, bg="black")
entry_label = Label(window, text="Enter the word you would like a definition for: ", bg="black", fg="white", font="none 12 bold")
text_entry = Entry(window, width=20, bg="white")
output = Text(window, width=75, height=6, wrap=WORD, background="white")
submit_button = Button(window, text="SUBMIT", width=6, command=click(text_entry, output))
definition_label = Label (window, text="\nDefinition", bg="black", fg="white", font="none 12 bold")
exit_label = Label (window, text="click to exit", bg="black", fg="white", font="none 12 bold")
exit_button = Button(window, text="Exit", width=6, command=close_window(window))
#place objects
photo_label.grid(row=0, column=0, sticky=E)
entry_label.grid(row=1, column=0, sticky=W)
text_entry.grid(row=2, column=0, sticky=W)
submit_button.grid(row=3, column=0, sticky=W)
definition_label.grid(row=4, column=0, sticky=W)
output.grid(row=5, column=0, columnspan=2, sticky=W)
exit_label.grid(row=6, column=0, sticky=W)
exit_button.grid(row=7, column=0, sticky=W)
#main
def main():
window = Tk()
config(window)
window.mainloop()
if __name__ == '__main__':
main()
</code></pre>
|
<p>You're invoking the functions, when all you really should be doing is binding the functions to the command keyword argument.</p>
<p>Wrong:</p>
<pre><code>def my_function():
print("you clicked the button")
button = Button(command=my_function())
</code></pre>
<p>Right:</p>
<pre><code>def my_function():
print("you clicked the button")
button = Button(command=my_function)
</code></pre>
<p>However, the "Right" way of doing this won't work for you, since your functions accept parameters. In that case you can make a lambda:</p>
<pre><code>def my_function(arg):
print(arg)
button = Button(command=lambda: my_function("hello world"))
</code></pre>
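<p>An equivalent alternative to the lambda is <code>functools.partial</code>, which pre-binds the arguments and defers the call. Sketched here without a real <code>Button</code> so it runs headless:</p>

```python
from functools import partial

def my_function(arg):
    return arg

# partial binds "hello world" now but only calls my_function later,
# which is exactly what a tkinter command callback needs (a zero-argument callable)
callback = partial(my_function, "hello world")

# tkinter would invoke it like this when the button is clicked
print(callback())
```

<p>In the question's code that would read <code>submit_button = Button(window, text="SUBMIT", width=6, command=partial(click, text_entry, output))</code>.</p>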
|
python|tkinter
| 0 |
1,901,351 | 43,849,707 |
Using Lambda with function that takes argument from different columns of dataframe
|
<p>I want to learn how to use lambdas with this type of setting without using a for loop which a function takes arguments from rows of two columns of the dataframe and write the result to another column. </p>
<pre><code>import pandas as pd
df = pd.DataFrame({"A": [1,2,3], "B": [2,3,4]})
print(df)
df["C"] = ""
print(df)
def add_num(num1 ,num2):
return num1 + num2
for i in range(len(df)):
df["C"][i] = add_num(df["A"][i], df["B"][i])
print(df)
</code></pre>
|
<p>You can call <code>apply</code> on the df, passing <code>axis=1</code>; this will iterate row-wise. You can then sub-select the columns of interest in the <code>lambda</code> to pass to your func:</p>
<pre><code>In [49]:
df = pd.DataFrame({"A": [1,2,3], "B": [2,3,4]})
df["C"] = ""
def add_num(num1 ,num2):
return num1 + num2
df["C"] = df.apply(lambda x: add_num(x["A"], x["B"]), axis=1)
print(df)
A B C
0 1 2 3
1 2 3 5
2 3 4 7
</code></pre>
<p>Note that one should avoid using <code>apply</code>; most operations can be performed using vectorised methods. I know this is just for learning, but you should look for a numpy or other vectorised ufunc.</p>
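<p>For this particular case the vectorised equivalent is plain column arithmetic, which avoids the per-row Python call entirely:</p>

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]})
# vectorised: operates on whole columns at once, no apply/lambda needed
df["C"] = df["A"] + df["B"]
print(df)
```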
|
python|pandas|dataframe|lambda
| 2 |
1,901,352 | 47,679,361 |
Python cx_freeze Create dirs for included files in build
|
<p>Is it possible to create dirs (folders) in the cx_freeze build output? I include (<code>include_files</code>) many database files and I want these to be in a specific folder etc. I can take them easily from my folders.....</p>
<pre><code>"include_files": ["databases/nations.txt","databases/newafrica.txt",
"databases/neweeurope.txt","databases/neweurope.txt","databases/newmeast.txt","graph.py",
"databases/newnamerica.txt","databases/plates.txt",
"databases/ACN/rigidA.txt","databases/ACN/rigidB.txt",
"databases/ACN/rigidC.txt","databases/ACN/rigidD.txt","databases/ACN/flexibleA.txt",
"databases/ACN/flexibleB.txt","databases/ACN/flexibleC.txt",
"databases/ACN/flexibleD.txt","alternates.xlsx",
</code></pre>
<p>but this will just copy all of them into the exe build dir and it's a mess.
Thanks in advance.</p>
|
<p>There are several ways you can go around solving the problem.</p>
<p><strong>Method 1 - Using <code>include_files</code></strong></p>
<p>Rather than ask for each individual text file you could just put the file name in the setup script and leave out the individual text files. In your case it would be like this:</p>
<pre><code>"include_files": ["databases"]
</code></pre>
<p>This would copy the entire <code>databases</code> folder with everything in it into you build folder.</p>
<p>Absolute file paths work as well.</p>
<p>If you are going to use the installer feature (bdist_msi) this is the method to use.</p>
<p>You can copy sub folders only using <code>"include_files": ["databases/ACN"]</code></p>
<p><strong>Method 2 - Manually</strong></p>
<p>OK, it's rather un-Pythonic, but one way to do it is to copy the folder manually into the build folder.</p>
<p><strong>Method 3 - Using the <code>os</code> module</strong></p>
<p>Much the same as method two, this would copy the folder into your build folder, but instead of copying it manually it would use Python. You also have the option of using additional Python features as well.</p>
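<p>A minimal sketch of Method 3 using the standard library (<code>shutil</code> builds on top of <code>os</code>; the folder names are only examples):</p>

```python
import os
import shutil

def copy_databases(src="databases", build_dir="build"):
    """Copy the databases folder into the build output, replacing any old copy."""
    dest = os.path.join(build_dir, "databases")
    if os.path.isdir(dest):
        shutil.rmtree(dest)      # drop a stale copy from a previous build
    shutil.copytree(src, dest)   # recursive copy, sub-folders included
    return dest
```

<p>You could call this from your <code>setup.py</code> after the build step.</p>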
<p>Hope I was helpful. </p>
|
python-3.x|cx-freeze
| 2 |
1,901,353 | 64,303,276 |
Why is BaseDeleteView throwing an AttributeError saying object has no attribute render_to_response?
|
<p>I am trying to implement BaseDeleteView, but I am getting this error message -> 'DeletePostView' object has no attribute 'render_to_response'. I did not use DeleteView mainly because it expects a confirmation template and I am using Bootstrap's modal (like a pop-up) for confirmation.</p>
<p>I have found a very similar question here -> <a href="https://stackoverflow.com/questions/7664662/basedeleteview-throws-attributeerror-render-to-response-missing">BaseDeleteView throws AttributeError (render_to_response missing)</a></p>
<pre><code>class DeletePostView(SuccessMessageMixin, BaseDeleteView):
model = Post
context_object_name = 'remove_post_confirm_object'
# template_name = "posts/delete_confirm_post.html"
success_url = reverse_lazy('users:profile')
success_message = 'Post has been deleted SuccessFully!'
</code></pre>
<p>Error Tranceback:</p>
<pre><code>Traceback (most recent call last):
File "/home/gaurav/Programming_Practice/DjangoProjects/Blog/blog-project/venv/lib/python3.8/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/home/gaurav/Programming_Practice/DjangoProjects/Blog/blog-project/venv/lib/python3.8/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/home/gaurav/Programming_Practice/DjangoProjects/Blog/blog-project/venv/lib/python3.8/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/gaurav/Programming_Practice/DjangoProjects/Blog/blog-project/venv/lib/python3.8/site-packages/django/views/generic/base.py", line 71, in view
return self.dispatch(request, *args, **kwargs)
File "/home/gaurav/Programming_Practice/DjangoProjects/Blog/blog-project/venv/lib/python3.8/site-packages/django/utils/decorators.py", line 43, in _wrapper
return bound_method(*args, **kwargs)
File "/home/gaurav/Programming_Practice/DjangoProjects/Blog/blog-project/venv/lib/python3.8/site-packages/django/contrib/auth/decorators.py", line 21, in _wrapped_view
return view_func(request, *args, **kwargs)
File "/home/gaurav/Programming_Practice/DjangoProjects/Blog/blog-project/venv/lib/python3.8/site-packages/django/views/generic/base.py", line 97, in dispatch
return handler(request, *args, **kwargs)
File "/home/gaurav/Programming_Practice/DjangoProjects/Blog/blog-project/venv/lib/python3.8/site-packages/django/views/generic/detail.py", line 108, in get
return self.render_to_response(context)
Exception Type: AttributeError at /post/2020/10/10/wertyuio/remove
Exception Value: 'DeletePostView' object has no attribute 'render_to_response'
</code></pre>
|
<p>Because <code>BaseDeleteView</code> has a <code>get</code> method whose body calls another method, <code>self.render_to_response(context)</code>. That method comes from a built-in mixin named <code>TemplateResponseMixin</code>, and since you used <code>BaseDeleteView</code> the method does not exist, so it raises an error. You should therefore override the <code>get</code> method of your view from this:</p>
<pre><code>def get(self, request, *args, **kwargs):
self.object = self.get_object()
context = self.get_context_data(object=self.object)
return self.render_to_response(context)
</code></pre>
<p>to something like this one or any type that you need:</p>
<pre><code>from django.shortcuts import render
# override get method of your DeletePostView
def get(self, request, *args, **kwargs):
self.object = self.get_object()
context = self.get_context_data(object=self.object)
return render(request, 'your_template.html', context=context)
</code></pre>
<p>Or you can add <code>SingleObjectTemplateResponseMixin</code> (i.e. <code>from django.views.generic.detail import SingleObjectTemplateResponseMixin</code>) to your <code>DeletePostView</code>'s parents to add this functionality to your view, and override any method to customize it (which will be the same as <code>DeleteView</code>).</p>
|
python|django|django-rest-framework|django-views
| 0 |
1,901,354 | 70,392,290 |
How to leave gaps in plot of incomplete timeseries?
|
<p>I'm plotting timeseries with gaps. Observations come from 6 different dates and lags between consecutive datapoints within each day are on average about 10 seconds.</p>
<p>On some days the observation window is just a part of the day, e.g. from 00:00 to 7:40, which results in huge gaps in the merged series containing all days. I'd like to plot the merged data, but leave the gaps between days empty. Instead there is now some kind of linear interpolation
- on the plot below you can see the blue plot of the data and red dots above the x-axis values that are really present in the data:</p>
<p><a href="https://i.stack.imgur.com/onoBb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/onoBb.png" alt="enter image description here" /></a></p>
<p>I find it misleading; for example, sometimes the lines in the gaps sit on y = 0, and because of that I initially thought that the device (this is power consumption data) was inactive in those periods and the value of y was 0 (when in reality we just don't have data for those periods at all).</p>
<p><strong>What can I do to prevent matplotlib from plotting lines in gaps?</strong> This is how I created the blue plot:</p>
<pre><code>dev_data.plot(kind = "line", ax = axes[0], legend = None, title = "Device data")
</code></pre>
<p>Edit: I've found someone with opposite problem. I want my plot to look like in <a href="https://stackoverflow.com/questions/67895127/gap-in-timeseries-plot">this question</a> but I don't know what should I change in my code.</p>
|
<p>Matplotlib doesn't plot missing (NaN or masked) values; see this <a href="https://matplotlib.org/devdocs/gallery/lines_bars_and_markers/masked_demo.html" rel="nofollow noreferrer">demo</a>.</p>
<p>To use this, you find the gaps (for example, difference between timestamps > 10 s) and add additional NaN-values at timestamps within these gaps.</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
idx = np.concatenate((pd.date_range('2011-04-17', periods=200, freq='10S'),
pd.date_range('2011-04-18', periods=100, freq='10S'),
pd.date_range('2011-04-19', periods=300, freq='10S')))
s = pd.Series(np.random.rand(len(idx)), index=idx)
fig, (ax1,ax2) = plt.subplots(2, layout='constrained')
s.plot(ax=ax1, legend=None, title="Original without gaps")
# positions with gaps > 10 s
gaps = np.flatnonzero(np.diff(s.index) > np.timedelta64(10, 's'))
# add empty (NaN) values 1 s after each gap start; Series.append was
# removed in pandas 2.0, so concatenate and re-sort instead
filler = pd.Series(index=s.index[gaps] + np.timedelta64(1, 's'), dtype=float)
s1 = pd.concat([s, filler]).sort_index()
s1.plot(ax=ax2, legend=None, title="With gaps")
</code></pre>
<p><a href="https://i.stack.imgur.com/a30AE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/a30AE.png" alt="enter image description here" /></a></p>
|
python|matplotlib|time-series|timeserieschart|gaps-in-visuals
| 0 |
1,901,355 | 69,946,709 |
Filling a new column with values from a different column
|
<p>Supposing I have a dataframe like so :
<a href="https://i.stack.imgur.com/rvi0P.png" rel="nofollow noreferrer">dataframe</a></p>
<p>I have to make a new column (column 4) which has the values from column 3 shifted down, like so:</p>
<pre><code>4
N/A
-1.135632
-1.044236
1.071804
0.271860
-1.087401
0.524988
-1.039268
0.844885
-1.469388
-0.968914
</code></pre>
<p>i.e, entry 1 of column 4 is filled with entry 0 of column 3, entry 2 of column 4 is filled with entry 1 of column 3 and so on...until the nth entry in the 4th column is filled with the (n-1)th entry of the 3rd column</p>
|
<p><code>df['column_4'] = df['column_3'].shift(1)</code></p>
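<p>A quick demonstration of what <code>shift(1)</code> does (with made-up numbers):</p>

```python
import pandas as pd

df = pd.DataFrame({"column_3": [-1.135632, -1.044236, 1.071804]})
# shift(1) moves every value down one row; the first row becomes NaN
df["column_4"] = df["column_3"].shift(1)
print(df)
```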
|
python|pandas
| 1 |
1,901,356 | 72,845,657 |
move | copy a file from one docker container to another container with ansible
|
<p>I created with ansible 2 containers:</p>
<ul>
<li>appOne | python 3.8 | Port: 8082</li>
<li>appTwo | python 3.8 | Port: 8083</li>
</ul>
<p>I created a file on appOne that I want to send to appTwo, but there is a small challenge: everything must be done via Ansible.</p>
<p>I see a lot of tutorials on copying from the host machine to a container, but nothing on copying from one container to another.</p>
<p>Since the Ansible docs are not intuitive at all for me, I have come to ask for help here...</p>
<p>Do you have a solution for this?
How can I set source and destination?</p>
|
<p>You can just use the docker cp API.</p>
<pre><code>docker cp CONTAINER:/<some_file> .
</code></pre>
<p>This command works in both directions
(from local to container, and from a container towards local).</p>
<p>So you could use appOne as a "host" destination, and appTwo as a source.</p>
<p>Make sure there is a connection between both of the containers.</p>
|
python|docker|ansible
| 0 |
1,901,357 | 72,945,935 |
sending Email using outlook through python
|
<p>I am trying to send email through Outlook using Python, but I get the following error:</p>
<pre><code>Traceback (most recent call last):
  File "c:\Users\Variable\Documents\python_files\learning_new\sending_email.py", line 23, in <module>
    session.sendmail(sender_address, receiver_address, text)
  File "C:\Users\Variable\AppData\Local\Programs\Python\Python310\lib\smtplib.py", line 908, in sendmail
    raise SMTPDataError(code, resp)
smtplib.SMTPDataError: (554, b'5.2.0 STOREDRV.Submission.Exception:OutboundSpamException; Failed to process message due to a permanent exception with message [BeginDiagnosticData]WASCL UserAction verdict is not None. Actual verdict is Suspend, ShowTierUpgrade. OutboundSpamException: WASCL UserAction verdict is not None. Actual verdict is Suspend, ShowTierUpgrade.[EndDiagnosticData] [Hostname=PAXP193MB1710.EURP193.PROD.OUTLOOK.COM]')
</code></pre>
<p>The strange thing is that this error does not always occur; sometimes it works and sends the mails.</p>
<p>my code:</p>
<pre><code>import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
mail_content = '''Hello,
This is a simple mail. There is only text, no attachments are there The mail is sent using
Python SMTP library.
Thank You'''
#The mail addresses and password
sender_address = 'Fahe_em@outlook.com'
sender_pass = '*********'
receiver_address = 'fah909190@gmail.com'
#Setup the MIME
message = MIMEMultipart()
message['From'] = sender_address
message['To'] = receiver_address
message['Subject'] = 'A test mail sent by Python. It has an attachment.' #The subject line
#The body and the attachments for the mail
message.attach(MIMEText(mail_content, 'plain'))
#Create SMTP session for sending the mail
session = smtplib.SMTP('smtp.office365.com', 587) #use gmail with port
session.starttls() #enable security
session.login(sender_address, sender_pass) #login with mail_id and password
text = message.as_string()
session.sendmail(sender_address, receiver_address, text)
session.quit()
print('Mail Sent')
</code></pre>
|
<p>A quick search shows that you encountered an exception related to your account, due to things like: not being verified, spamming, not upgrading, and more.</p>
<p>Sources: <a href="https://docs.microsoft.com/en-us/answers/questions/539571/error-sending-emails-via-outlook.html" rel="nofollow noreferrer">[1]</a>, <a href="https://stackoverflow.com/questions/48778663/c-sharp-sending-email-storedrv-submission-exceptionoutboundspamexception">[2]</a>, <a href="https://support.mozilla.org/en-US/questions/1231162" rel="nofollow noreferrer">[3]</a></p>
|
python
| 0 |
1,901,358 | 73,255,321 |
Define a dataclass in a doctest body
|
<p>I need to define a temporary dataclass in order to test a given function via <a href="https://docs.python.org/3.10/library/doctest.html" rel="nofollow noreferrer">Doctest</a>.
However, I am not able to use the <a href="https://docs.python.org/3/library/dataclasses.html#dataclasses.dataclass" rel="nofollow noreferrer"><code>@dataclass</code> decorator syntax</a> in a doctest body.</p>
<p>Sample file <code>test.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any
def myprint(x: Any):
"""
>>> from dataclasses import dataclass
>>> @dataclass
>>> class SomeDataClass:
... field1: int
... field2: str
>>> myprint(SomeDataClass(1, "txt"))
SomeDataClass(field1=1, field2='txt')
"""
print(x)
</code></pre>
<p>The doctest invocation is done, as usual, with:</p>
<pre><code>python -m doctest test.py
</code></pre>
<p>This results in a "unexpected EOF while parsing" error.</p>
<p>What am I doing wrong?</p>
<pre><code>**********************************************************************
File "test.py", line 6, in test.myprint
Failed example:
@dataclass
Exception raised:
Traceback (most recent call last):
File "/usr/lib/python3.9/doctest.py", line 1336, in __run
exec(compile(example.source, filename, "single",
File "<doctest test.myprint[1]>", line 1
@dataclass
^
SyntaxError: unexpected EOF while parsing
**********************************************************************
[ this error is triggered by the first one and is not relevant ]
File "test.py", line 10, in test.myprint
Failed example:
myprint(SomeDataClass(1, "txt"))
Exception raised:
Traceback (most recent call last):
File "/usr/lib/python3.9/doctest.py", line 1336, in __run
exec(compile(example.source, filename, "single",
File "<doctest test.myprint[3]>", line 1, in <module>
myprint(SomeDataClass(1, "txt"))
TypeError: SomeDataClass() takes no arguments
**********************************************************************
1 items had failures:
2 of 4 in test.myprint
***Test Failed*** 2 failures.
</code></pre>
<p>This test was done on Python 3.9.</p>
|
<p>A decorator isn't a statement on its own; it connects with the definition below (<code>class</code> or <code>def</code>). So replace the <code>>>> class</code> with <code>... class</code>.</p>
|
python|python-dataclasses|doctest
| 3 |
1,901,359 | 50,020,683 |
Which one is efficient: join queries using SQL, or merge queries using pandas?
|
<p>I want to use data from multiple tables in a <code>pandas dataframe</code>. I have 2 ideas for downloading data from the server: one is to use an <code>SQL</code> join to retrieve the data, and the other is to download the dataframes separately and merge them using pandas.merge.</p>
<h1>SQL Join</h1>
<p>when I want to download data into <code>pandas</code>.</p>
<pre><code>query='''SELECT table1.c1, table2.c2
FROM table1
INNER JOIN table2 ON table1.ID=table2.ID where condidtion;'''
df = pd.read_sql(query,engine)
</code></pre>
<h1>Pandas Merge</h1>
<pre><code>df1 = pd.read_sql('select c1 from table1 where condition;',engine)
df2 = pd.read_sql('select c2 from table2 where condition;',engine)
df = pd.merge(df1,df2,on='ID', how='inner')
</code></pre>
<p>Which one is faster? Assume that I want to do that for more than 2 tables and 2 columns.
Is there any better idea?
If it is necessary to know, I use <code>PostgreSQL</code>.</p>
|
<p>To really know which is faster, you need to try out the two queries using your data on your databases.</p>
<p>The rule of thumb is to do the logic in a single query. Databases are designed for queries. They have sophisticated algorithms, multiple processors, and lots of memory to handle them. So, relying on the database is quite reasonable. In addition, each query has a bit of overhead, so two queries have twice the overhead of one.</p>
<p>That said, there are definitely circumstances where doing the work in pandas is going to be faster. Pandas is going to do the work in local memory. That is limited -- but much less so than in the "good old days". It is probably not going to be multi-threaded.</p>
<p>For example, the result set might be much larger than the two tables. Moving the data from the database to the application might be (relatively) expensive in that case. Doing the work in in pandas could be faster than in the database.</p>
<p>At the other extreme, no records might match the <code>JOIN</code> conditions. That is definitely a case where a single query would be faster.</p>
|
python|sql|postgresql|pandas
| 3 |
1,901,360 | 64,764,501 |
Is it redundant to use df.copy() when writing a function for feature engineering?
|
<p>I'm wondering if there's any benefit to writing this pattern</p>
<pre><code>def feature_eng(df):
df1 = df.copy()
...
return df1
</code></pre>
<p>as opposed to this pattern</p>
<pre><code>def feature_eng(df):
...
return df
</code></pre>
|
<p>Say you have a raw dataframe <code>df_raw</code> and you create <code>df_feature</code> using <code>feature_eng</code>. If the function mutates its argument, your second pattern will modify <code>df_raw</code> when calling <code>df_feature = feature_eng(df_raw)</code>, while the first pattern will not. So in case you want to keep <code>df_raw</code> as it is and not modify it, the first pattern will lead to the correct result.</p>
<p>A little example:</p>
<pre><code>def feature_eng1(df):
df.drop(columns=['INDEX'], inplace=True)
return df
def feature_eng2(df):
df1 = df.copy()
df1.drop(columns=['INDEX'], inplace=True)
return df1
df_feature = feature_eng1(df_raw)
</code></pre>
<p>Here <code>df_raw</code> will no longer contain the column <code>INDEX</code>, while using <code>feature_eng2</code> it still would.</p>
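<p>A self-contained demonstration of the difference (the column name is just an example):</p>

```python
import pandas as pd

def feature_eng_inplace(df):
    df.drop(columns=["INDEX"], inplace=True)   # mutates the caller's frame
    return df

def feature_eng_copy(df):
    df1 = df.copy()
    df1.drop(columns=["INDEX"], inplace=True)  # only the copy is modified
    return df1

df_raw = pd.DataFrame({"INDEX": [0, 1], "x": [10, 20]})
df_feature = feature_eng_copy(df_raw)
print("INDEX" in df_raw.columns)      # True: the raw frame is untouched
print("INDEX" in df_feature.columns)  # False: the returned copy dropped it
```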
|
python|feature-engineering
| 1 |
1,901,361 | 63,787,733 |
When to use int() and when to use var: int
|
<p>I'm wondering, when do you use <code>var: int</code> and when do you use <code>int(var)</code>.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>def func(var: int, var2):
print(var - int(var2))
</code></pre>
|
<p><code>var: int</code> is a type annotation declaring the type a variable is expected to have when passed to the function or class, while <code>int(var)</code> casts the value of <code>var</code> to an instance of the <code>int</code> class:</p>
<pre><code>>> a = '3' # a is String
>> a = int(a)
>> a
out: 3 # a is an Integer now
</code></pre>
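<p>Note that annotations are only hints; Python does not enforce them at runtime, whereas <code>int()</code> actually converts the value:</p>

```python
def func(var: int):
    # the annotation documents the expected type but is never checked
    return var

value = func("3")
print(type(value))   # <class 'str'>: the annotation did not convert anything
print(int("3") + 1)  # 4: int() really converts the string
```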
|
python|casting|integer
| 5 |
1,901,362 | 64,020,403 |
Pandas multiply selected columns by previous column
|
<p>Assume I have a 3 x 9 Dataframe with index from 0 - 2 and columns from 0 - 8</p>
<pre><code>nums = np.arange(1, 28)
arr = np.array(nums)
arr = arr.reshape((3, 9))
df = pd.DataFrame(arr)
</code></pre>
<p>I want to multiply selected columns (example [2, 5, 7]) by the columns behind them (example [1, 4, 6])
My obstacle is getting the correct index of the previous column to match with the column I want to multiply</p>
<p>Issue:</p>
<pre><code>df[[2, 5, 7]] = df[[2, 5, 7]].multiply(___, axis="index") # in this case I want the blank to be df[[1, 4, 6]], but how to get these indexes for the general case when selected columns vary?
</code></pre>
|
<p>Let's try working with the numpy array:</p>
<pre><code>cols = np.array([2,5,7])
df[cols] *= df[cols-1].values
</code></pre>
<p>Output:</p>
<pre><code> 0 1 2 3 4 5 6 7 8
0 1 2 6 4 5 30 7 56 9
1 10 11 132 13 14 210 16 272 18
2 19 20 420 22 23 552 25 650 27
</code></pre>
<p>Or you can use:</p>
<pre><code>df.update(df[cols]*df.shift(1, axis=1))
</code></pre>
<p>(note <code>shift(1, axis=1)</code>, which shifts the values one column to the right so that each selected column is multiplied by the column before it) which gives:</p>
<pre><code>    0   1      2   3   4      5   6      7   8
0   1   2    6.0   4   5   30.0   7   56.0   9
1  10  11  132.0  13  14  210.0  16  272.0  18
2  19  20  420.0  22  23  552.0  25  650.0  27
</code></pre>
|
python|pandas
| 3 |
1,901,363 | 64,102,826 |
Efficient method comparing 2 different tables columns
|
<p><img src="https://i.stack.imgur.com/Lopgz.png" alt="Example_List" /></p>
<p>Hi all guys,</p>
<p>I have got 2 dfs and I need to check if the values from the first are matching on the second, only for a specific column in each, and save the matching values in a new list. This is what I did, but it is taking quite a lot of time and I was wondering if there's a more efficient way. The lists are like in the image above, from 2 different tables.</p>
<pre><code>for x in df_bd_names['Building_Name']:
for y in df_sup['Source_String']:
if x == y:
matching_words_sup.append(x)
</code></pre>
<p>Thanks</p>
|
<p>Let's create both dataframes:</p>
<pre><code>df1 = pd.DataFrame({
'Building_Name': ['Exces', 'Excs', 'Exec', 'Executer', 'Executor']
})
df2 = pd.DataFrame({
'Source_String': ['Executer', 'Executor', 'Executor Of', 'Executor For', 'Exeutor']
})
</code></pre>
<p>Perform inner merge between dataframes and convert first column to list:</p>
<pre><code>pd.merge(df1, df2, left_on='Building_Name', right_on='Source_String', how='inner')['Building_Name'].tolist()
</code></pre>
<p>Output:</p>
<pre><code>['Executer', 'Executor']
</code></pre>
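<p>Another vectorised option is <code>isin</code>, which skips building the merged frame when you only need the matching values; a small sketch using the same example data:</p>

```python
import pandas as pd

df1 = pd.DataFrame({"Building_Name": ["Exces", "Excs", "Exec", "Executer", "Executor"]})
df2 = pd.DataFrame({"Source_String": ["Executer", "Executor", "Executor Of", "Executor For", "Exeutor"]})

# boolean mask marking rows whose Building_Name appears anywhere in Source_String
mask = df1["Building_Name"].isin(df2["Source_String"])
matching_words_sup = df1.loc[mask, "Building_Name"].tolist()
print(matching_words_sup)  # ['Executer', 'Executor']
```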
|
python|python-3.x|pandas|list|string-matching
| 0 |
1,901,364 | 53,077,902 |
python version check command works only in user mode and not root
|
<p>I'm new to Linux and Amazon AWS. I installed Python3.6 on Amazon Linux and when I run the command to check python version, the command works only for the normal user but it doesn't work for the root account. Why is that?</p>
<blockquote>
<pre><code>[ec2-user@ip-172-31-30-209 /]$ python3.6 --version
Python 3.6.6
[ec2-user@ip-172-31-30-209 /]$ sudo su
[root@ip-172-31-30-209 /]# python3.6 --version
bash: python3.6: command not found
[root@ip-172-31-30-209 /]#
</code></pre>
</blockquote>
|
<p>Check the owner of the Python installation folder, or add the <code>ec2-user</code> to the root user group.</p>
<p><a href="https://askubuntu.com/questions/213958/can-i-add-myself-to-group-root">Add User to sudo group</a></p>
|
python|amazon-web-services
| 1 |
1,901,365 | 52,989,595 |
Python TCP/IP Communication
|
<p>This is my Python Modbus TCP communication code. It waits at the line below and then just hangs on the connection; where is my fault?</p>
<pre><code>sock.connect((TCP_IP, TCP_PORT))
</code></pre>
<p>(It also does not work if I use a slave program.)
On the client side I am using this code:</p>
<pre><code>TCP_IP='10.0.2.15'
TCP_PORT=502
BUFFER_SIZE=39
sock=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
sock.connect((TCP_IP,TCP_PORT))
</code></pre>
<p>This is the master side:</p>
<pre><code># Create a TCP/IP socket
TCP_IP = '10.0.2.2'
TCP_PORT = 502
BUFFER_SIZE = 39
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((TCP_IP, TCP_PORT))
try:
    unitId = 16  # Plug Socket11
    functionCode = 3  # Write coil
    print("\nSwitching Plug ON...")
    coilId = 1
    req = struct.pack('12B', 0x00, 0x01, 0x00, 0x00, 0x00, 0x06, int(unitId), 0x03, 0xff, 0xc0, 0x00,
                      0x00)
    sock.send(req)
    print("TX: (%s)" % req)
    rec = sock.recv(BUFFER_SIZE)
    print("RX: (%s)" % rec)
    time.sleep(2)

    print("\nSwitching Plug OFF...")
    coilId = 2
    req = struct.pack('12B', 0x00, 0x01, 0x00, 0x00, 0x00, 0x06, int(unitId), 0x03, 0xff, 0xc0, 0x00,
                      0x00)
    sock.send(req)
    print("TX: (%s)" % req)
    rec = sock.recv(BUFFER_SIZE)
    print("RX: (%s)" % rec)
    time.sleep(2)
finally:
    print('\nCLOSING SOCKET')
    sock.close()
</code></pre>
|
<p>I think your problem is the IP address <code>10.0.2.2</code>, as stated here: <a href="https://stackoverflow.com/questions/35441481/connection-to-localhost-10-0-2-2-from-android-emulator-timed-out">Connection to LocalHost/10.0.2.2 from Android Emulator timed out</a>.
You can replace <code>'10.0.2.2'</code> with <code>'localhost'</code> or try to find your <code>IPv4</code> address.</p>
<p>To do so type ifconfig in your command prompt if you use Linux or type ipconfig in windows and search for the <code>IPv4</code> address.</p>
<p>I used a simple client-server example to run your code and replaced <code>'10.0.2.2'</code> with <code>'localhost'</code> and everything went fine.</p>
<p>server side:</p>
<pre><code>import socket
import struct
import time
TCP_IP = 'localhost'
TCP_PORT = 502
BUFFER_SIZE = 39
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((TCP_IP, TCP_PORT))
s.listen(1)
sock, addr = s.accept()
print 'Connected by', addr
try:
    unitId = 16  # Plug Socket11
    functionCode = 3  # Write coil
    print("\nSwitching Plug ON...")
    coilId = 1
    req = struct.pack('12B', 0x00, 0x01, 0x00, 0x00, 0x00, 0x06,
                      int(unitId), 0x03, 0xff, 0xc0, 0x00,
                      0x00)
    while 1:
        sock.send(req)
        print("TX: (%s)" % repr(req))
        rec = sock.recv(BUFFER_SIZE)
        print("RX: (%s)" % repr(rec))
        time.sleep(2)
        break

    print("\nSwitching Plug OFF...")
    coilId = 2
    req = struct.pack('12B', 0x00, 0x01, 0x00, 0x00, 0x00, 0x06,
                      int(unitId),
                      0x03, 0xff, 0xc0, 0x00,
                      0x00)
    while 1:
        sock.send(req)
        print("TX: (%s)" % repr(req))
        rec = sock.recv(BUFFER_SIZE)
        print("RX: (%s)" % repr(rec))
        time.sleep(2)
        break
finally:
    sock.close()
</code></pre>
<p>client side:</p>
<pre><code>import socket
TCP_IP = 'localhost'
TCP_PORT = 502
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((TCP_IP, TCP_PORT))
data = sock.recv(1024)
print repr(data)
while 1:
    sock.send(data)
    print("send back to server: (%s)" % repr(data))
    break
data = sock.recv(1024)
print repr(data)
while 1:
    sock.send(data)
    print("send back to server: (%s)" % repr(data))
    break
sock.close()
</code></pre>
<p>make sure you run server and client in seperate files/terminals</p>
|
python|tcp|modbus
| 1 |
1,901,366 | 52,922,647 |
Rotate covariance matrix
|
<p>I am generating 3D Gaussian point clouds. I'm using the <code>scipy.stats.multivariate_normal()</code> function, which takes a mean value and a covariance matrix as arguments. It can then provide random samples using the <code>rvs()</code> method.</p>
<p>Next I want to perform a rotation of the cloud in 3D, but rather than rotate each point I would like to rotate the random variable parameters, and then regenerate the point cloud. </p>
<p>I'm really struggling to figure this out. After an rotation, the axes of variance will no longer align with the coordinate system. So I guess what what I want is to express variance along three arbitrary orthogonal axes.</p>
<p>Thank you for any help.</p>
<p>Final edit: Thank you, I got what I needed. Below is an example</p>
<pre><code>cov = np.array([
[ 3.89801357, 0.38668784, 1.47657614],
[ 0.38668784, 0.87396495, 1.43575688],
[ 1.47657614, 1.43575688, 15.09192414]])
rotation_matrix = np.array([
[ 2.22044605e-16, 0.00000000e+00, 1.00000000e+00],
[ 0.00000000e+00, 1.00000000e+00, 0.00000000e+00],
[-1.00000000e+00, 0.00000000e+00, 2.22044605e-16]]) # 90 degrees around y axis
new_cov = rotation_matrix @ cov @ rotation_matrix.T # based on Warren and Paul's comments
rv = scipy.stats.multivariate_normal(mean=mean,cov=new_cov)
</code></pre>
<p>If you get an error</p>
<pre><code>ValueError: the input matrix must be positive semidefinite
</code></pre>
<p><a href="https://stackoverflow.com/questions/41515522/numpy-positive-semi-definite-warning">This</a> page I found useful</p>
|
<p>I edited the question with the answer, but again it is</p>
<pre><code>new_cov = rotation_matrix @ cov @ rotation_matrix.T
</code></pre>
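<p>For intuition: if <code>y = R x</code> for a rotation <code>R</code>, then <code>Cov(y) = R Cov(x) R^T</code>. Below is a minimal numeric check of that identity with the matrices from this question (a sketch, not part of the original answer):</p>

```python
import numpy as np

cov = np.array([
    [ 3.89801357,  0.38668784,  1.47657614],
    [ 0.38668784,  0.87396495,  1.43575688],
    [ 1.47657614,  1.43575688, 15.09192414]])

# 90 degrees around the y axis: the old z axis becomes the new x axis
R = np.array([
    [ 0.0, 0.0, 1.0],
    [ 0.0, 1.0, 0.0],
    [-1.0, 0.0, 0.0]])

new_cov = R @ cov @ R.T

# still a valid covariance matrix: symmetric ...
assert np.allclose(new_cov, new_cov.T)
# ... and the large variance that used to sit on z now sits on x
assert np.isclose(new_cov[0, 0], cov[2, 2])
```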
|
numpy|scipy|linear-algebra|covariance|covariance-matrix
| 2 |
1,901,367 | 65,453,831 |
Get a file that is in a folder that is a child of the parent directory with python
|
<p>I have attempted this heavily but have found no way to do this, I think I am not understanding something.</p>
<p>Here is my tree of the .bat that runs it: <a href="https://i.stack.imgur.com/lrpvG.png" rel="nofollow noreferrer">https://i.stack.imgur.com/lrpvG.png</a></p>
<p>and here is my 'src' folder: <a href="https://i.stack.imgur.com/fLEc2.png" rel="nofollow noreferrer">https://i.stack.imgur.com/fLEc2.png</a></p>
<p>I am trying to be able to get the 'webdriver.exe' in the src file, from betav1.0.1.py.</p>
<p>So far I have gotten to this code:</p>
<pre><code>import pathlib
from pathlib import Path
import os
path = pathlib.Path().absolute()
pathbase = os.path.basename(path)
import selenium
from selenium import webdriver
driver=webdriver.Chrome(os.path.basename(path).Path('\src\chromedriver.exe'))
</code></pre>
<p>but that returns the error</p>
<pre><code>AttributeError: 'str' object has no attribute 'Path'
</code></pre>
<p>How would I do this?</p>
|
<p>I have found a solution to this. Instead, I used chromedriver-py (<a href="https://pypi.org/project/chromedriver-py/" rel="nofollow noreferrer">https://pypi.org/project/chromedriver-py/</a>) to find the nearest webdriver.</p>
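<p>If you still want to build the path relative to the script yourself, here is a <code>pathlib</code> sketch. The script location below is a made-up placeholder (in a real script use <code>Path(__file__).resolve()</code>), and the commented-out Chrome call uses the selenium 3 style <code>executable_path</code> argument:</p>

```python
from pathlib import Path

# hypothetical script location; in a real script use Path(__file__).resolve()
script = Path("/home/user/project/app/betav1.0.1.py")

# assumption: the script lives one level below the project root, next to src/
project_root = script.parent.parent
driver_path = project_root / "src" / "chromedriver.exe"

# driver = webdriver.Chrome(executable_path=str(driver_path))
print(driver_path.as_posix())  # /home/user/project/src/chromedriver.exe
```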
|
python|selenium|path|driver|pathlib
| 1 |
1,901,368 | 65,191,048 |
FutureWarning not displayed with warnings.simplefilter(action = "error", category=FutureWarning)
|
<p>I am having trouble finding the line of code producing the FutureWarning message: elementwise comparison failed.</p>
<p>There are other questions on SO that describe the Python / numpy conflict that causes this warning.
I am trying to find which lines are causing it in my code.</p>
<p>When I include these lines in the header section of the code :</p>
<pre><code>import warnings
warnings.simplefilter(action = "default", category=FutureWarning)
</code></pre>
<p>Then the warning message displays on console output, but without info to identify where the problem is occurring.</p>
<p>When I include these lines :</p>
<pre><code>import warnings
warnings.simplefilter(action = "error", category=FutureWarning)
</code></pre>
<p>then the warning message is not displayed.</p>
<p>I have also used</p>
<pre><code>warnings.filterwarnings()
</code></pre>
<p>with the same arguments as simplefilter, and have the same result.</p>
<p>I am trying to run the code and produce a traceback which identifies the offending lines.</p>
<p>What am I not doing correctly ?</p>
|
<p>Try removing your warning / filter warnings and add this instead:</p>
<pre><code>numpy.seterr(all='raise')
</code></pre>
<p>If as intended, it will raise an exception where it occurs.</p>
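<p>A minimal sketch of what <code>seterr(all='raise')</code> does: numeric problems that would otherwise only emit a warning become exceptions with a normal traceback pointing at the offending line (note it governs numpy floating-point errors, not Python warnings in general):</p>

```python
import numpy as np

np.seterr(all='raise')

caught = None
try:
    np.array([1.0]) / np.array([0.0])  # would normally just warn
except FloatingPointError as exc:
    caught = exc

print(type(caught).__name__)  # FloatingPointError
```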
<p><strong>More info:</strong></p>
<ul>
<li><p><a href="https://stackoverflow.com/questions/40659212/futurewarning-elementwise-comparison-failed-returning-scalar-but-in-the-futur">FutureWarning: elementwise comparison failed; returning scalar, but in the future will perform elementwise comparison</a></p>
</li>
<li><p><a href="https://github.com/numpy/numpy/issues/6784" rel="nofollow noreferrer">https://github.com/numpy/numpy/issues/6784</a></p>
</li>
</ul>
|
python|warnings
| 1 |
1,901,369 | 68,459,635 |
What is "Letter Distribution" and what is "Word Distribution" in NLP dataset while preforming Exploratory data analysis(EDA)
|
<p>Guys, I'm new to data analysis. I'm trying to improve my skills, so I took a dataset from Kaggle. <a href="https://i.stack.imgur.com/GdQuW.jpg" rel="nofollow noreferrer">These are the tasks of the dataset.</a>
I'm stuck on tasks 3 and 4 of the EDA. Can anyone help me with how to perform them?
[Note: This is not a project. I just want to improve my skills for a job]</p>
|
<p>They want you to count the # (instances) of each word or letter in the dataset.</p>
<p>This is part of the EDA, however, so I believe you don't strictly need to do it, it is just potentially helpful for identifying further avenues for analysis.</p>
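<p>As a sketch of what those two distributions mean in practice, here is counting with <code>collections.Counter</code> (the sample text is made up):</p>

```python
from collections import Counter

text = "the quick brown fox jumps over the lazy dog the end"

word_counts = Counter(text.split())                       # word distribution
letter_counts = Counter(c for c in text if c.isalpha())   # letter distribution

print(word_counts['the'])    # 3
print(letter_counts['e'])    # 5
```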
|
python|database|nlp|data-science|data-analysis
| 1 |
1,901,370 | 68,708,679 |
Why do I get this error when working with Selenium? And what is the solution?
|
<p>I want create an account on <a href="https://en.wikipedia.org/wiki/ProtonMail" rel="nofollow noreferrer">ProtonMail</a>.</p>
<p>But when I sign up, I locate the username field, but I cannot send any keys to it.</p>
<p>And it gives the following error:</p>
<blockquote>
<p>ElementNotInteractableException: Message: Element is not reachable by keyboard.
address page='https://account.protonmail.com/signup?language=en'</p>
</blockquote>
<pre><code>user=d.find_element_by_xpath('//*[@id="username"]')
print(user.tag_name) # Is input
user.send_keys('username') # Give an error
</code></pre>
<p>I also used Action Chains, but the problem was not solved.</p>
|
<p>There are 2 issues here:<br />
1. There are 2 elements located by the <code>//*[@id="username"]</code> XPath.<br />
2. The element you want to access is inside an iframe.<br />
Try this:</p>
<pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
WebDriverWait(driver, 10).until(EC.frame_to_be_available_and_switch_to_it(driver.find_element_by_xpath('//iframe')))
user = driver.find_element_by_xpath('//body[@class="color-norm bg-norm sign-layout-container"]//*[@id="username"]')
user.send_keys('username')
</code></pre>
|
python-3.x|selenium|firefox|xpath|iframe
| 1 |
1,901,371 | 10,309,783 |
How do I "truncate" the last column of a tuple of same length tuples?
|
<p>Suppose I have </p>
<pre><code>x = ((1, 2, 3), (4, 5, 6), (7, 8, 9))
</code></pre>
<p>How do I get to</p>
<pre><code>x = ((1, 2), (4, 5), (7, 8))
</code></pre>
<p>?</p>
<p>The only way I figured out was using a list comprehension and then converting back to a tuple:</p>
<pre><code>x = tuple([n[1:len(n)] for n in x])
</code></pre>
<p>But I feel that's an ugly way of doing it...</p>
|
<pre><code>In [1]: x = ((1, 2, 3), (4, 5, 6), (7, 8, 9))
In [2]: tuple(a[:-1] for a in x)
Out[2]: ((1, 2), (4, 5), (7, 8))
</code></pre>
|
python|tuples
| 8 |
1,901,372 | 62,809,641 |
youtube-dl extracted video description contains no newlines and is truncated
|
<p>I have a script that download a playlist of video info as json file.</p>
<p>Yesterday I got video descriptions with <code>\n</code> newline characters, but today those newlines are just a space and the extracted description is truncated. I made no change to my code and no update to <code>youtube-dl</code>.</p>
<p>Did youtube change something? Or did I make a mistake somewhere?</p>
<p>Python 3.8.1, youtube-dl 2020.6.16.1</p>
<p>Here's the code that currently extract video description with no newlines.</p>
<pre class="lang-py prettyprint-override"><code>import youtube_dl
import json

ydl_opts = {
    'ignoreerrors': True,
}

urls = ['http://youtu.be/wFWihZTktUw', 'http://youtu.be/QvuQH4_05LI']

with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    print('Extracting playlist info...')
    playlist_info = []
    for idx, url in enumerate(urls):
        video_info = ydl.extract_info(url, download=False)
        if not video_info:
            continue
        properties = ['title', 'id', 'description']
        for k in list(video_info.keys()):
            if k not in properties:
                del video_info[k]
        playlist_info.append(video_info)

with open(f'playlist_info.json', 'w') as fp:
    json.dump(playlist_info, fp)
</code></pre>
<p>Example result with newlines (yesterday's result):</p>
<pre><code>[
{
"id": "wTJI_WuZSwE",
"title": "The impossible chessboard puzzle",
"description": "An information puzzle with an interesting twist\nSolution on Stand-up Maths: https://youtu.be/as7Gkm7Y7h4\nHome page: https://www.3blue1brown.com\nBrought to you by you: https://3b1b.co/chess-thanks\n\n------------------\n0:00 Introduction\n3:58 Visualizing the two-square case\n5:46 Visualizing the three-square case\n12:19 Proof that it's impossible\n16:22 Explicit painting of the hypercube\n------------------\n\nThese animations are largely made using manim, a scrappy open-source python library: https://github.com/3b1b/manim\n\nIf you want to check it out, I feel compelled to warn you that it's not the most well-documented tool, and it has many other quirks you might expect in a library someone wrote with only their own use in mind.\n\nMusic by Vincent Rubinetti.\nDownload the music on Bandcamp:\nhttps://vincerubinetti.bandcamp.com/album/the-music-of-3blue1brown\n\nStream the music on Spotify:\nhttps://open.spotify.com/album/1dVyjwS8FBqXhRunaG5W5u\n\nIf you want to contribute translated subtitles or to help review those that have already been made by others and need approval, you can click the gear icon in the video and go to subtitles/cc, then \"add subtitles/cc\". I really appreciate those who do this, as it helps make the lessons accessible to more people.\n\n------------------\n\n3blue1brown is a channel about animating math, in all senses of the word animate. And you know the drill with YouTube, if you want to stay posted on new videos, subscribe: http://3b1b.co/subscribe\n\nVarious social media stuffs:\nWebsite: https://www.3blue1brown.com\nTwitter: https://twitter.com/3blue1brown\nReddit: https://www.reddit.com/r/3blue1brown\nInstagram: https://www.instagram.com/3blue1brown_animations/\nPatreon: https://patreon.com/3blue1brown\nFacebook: https://www.facebook.com/3blue1brown"
},
{
"id": "QvuQH4_05LI",
"title": "Tips to be a better problem solver [Last lecture] | Lockdown math ep. 10",
"description": "Tips on problem-solving, with examples from geometry, trig, and probability.\nPast episodes with integrated quizzes: https://itempool.com/c/3b1b\nFull playlist: https://www.youtube.com/playlist?list=PLZHQObOWTQDP5CVelJJ1bNDouqrAhVPev\nHome page: https://www.3blue1brown.com\nBrought to you by you: https://3b1b.co/ldm-thanks\nHuge huge thanks to Ben Eater: https://www.youtube.com/user/eaterbc\nAnd Cam Christensen, creator of ItemPool: https://itempool.com/\n\nNotes by Ngân Vũ:\nhttps://twitter.com/ThuyNganVu/status/1265480770832855040\n\nMistakes:\n50:35, there should be a dx in the integral\n54:40, if you notice the mistake here and are inclined to complain, keep watching\n\n\n------------------\nVideo timeline (thanks to user \"noonesperfect\")\n0:34 9-Problem Solving Principles/Tip\n1:15 Question 1 (Probability)\n2:08 Who is Ben Eater?\n4:25 Inscribed angle theorem, θL=2*θs\n5:58 Tip-1\n7:48 Tip-2\n8:16 Question 2\n9:34 Answer 2\n10:29 Tip-3\n15:17 Tip-4\n22:48 Question 3\n25:56 Answer 3 (Marked incorrectly, Answer: Option D)\n26:31 Answer 1\n27:28 Explanation for Q1, Floor function\n30:38 Tip-5\n32:53 Tip-6\n33:36 Question 4\n34:43 Answer 4\n36:37 Question 5\n38:10 Answer 5\n41:48 Probability graph in Desmos\n44:08 Revisiting an alternating series sum for ln(2)\n47:29 Tip-7\n51:08 Tip-8\n55:23 Audience questions through tweets\n57:28 Tip-9\n58:29 Python programming for various probability question\n1:04:31 Arts created using Desmos graph tool with mathematical expressions\n1:05:54 Thank you, appreciation to the team and all.\n------------------\nThe live question setup with stats on-screen is powered by Itempool.\nhttps://itempool.com/\n\nCurious about other animations?\nhttps://www.3blue1brown.com/faq#manim\n\nMusic by Vincent Rubinetti.\nDownload the music on Bandcamp:\nhttps://vincerubinetti.bandcamp.com/album/the-music-of-3blue1brown\n\nStream the music on Spotify:\nhttps://open.spotify.com/album/1dVyjwS8FBqXhRunaG5W5u\n\nIf you want to 
contribute translated subtitles or to help review those that have already been made by others and need approval, you can click the gear icon in the video and go to subtitles/cc, then \"add subtitles/cc\". I really appreciate those who do this, as it helps make the lessons accessible to more people.\n\n------------------\n\n3blue1brown is a channel about animating math, in all senses of the word animate. And you know the drill with YouTube, if you want to stay posted on new videos, subscribe: http://3b1b.co/subscribe\n\nVarious social media stuffs:\nWebsite: https://www.3blue1brown.com\nTwitter: https://twitter.com/3blue1brown\nReddit: https://www.reddit.com/r/3blue1brown\nInstagram: https://www.instagram.com/3blue1brown_animations/\nPatreon: https://patreon.com/3blue1brown\nFacebook: https://www.facebook.com/3blue1brown"
}
</code></pre>
<p>Example result with no newlines and truncated description (today's result):</p>
<pre><code>[
{
"id": "wTJI_WuZSwE",
"title": "The impossible chessboard puzzle",
"description": "An information puzzle with an interesting twist Solution on Stand-up Maths: https://youtu.be/as7Gkm7Y7h4 Home page: https://www.3blue1brown.com Brought to yo..."
},
{
"id": "QvuQH4_05LI",
"title": "Tips to be a better problem solver [Last lecture] | Lockdown math ep. 10",
"description": "Tips on problem-solving, with examples from geometry, trig, and probability. Past episodes with integrated quizzes: https://itempool.com/c/3b1b Full playlist..."
}
]
</code></pre>
|
<p>This is an <a href="https://github.com/ytdl-org/youtube-dl/issues/25937" rel="nofollow noreferrer">issue with youtube-dl</a> that seems to have started today. It is most likely related to changes on Youtube's side.</p>
|
python|youtube|youtube-api|youtube-dl
| 1 |
1,901,373 | 62,693,933 |
RegEx for separating number tail from any-character string?
|
<p>I am keen to find a RegEx approach for my problem, for separating the number tail from the any-character leading string. I have done work on similar issues with string processing, but I think RegEx might require less effort and be useful in the future.</p>
<p>I have kinda complex strings with number at the end:</p>
<pre><code>'TxAnt0Standard_deviation_peak_variation'
'MeasCfg5Seq1TxAnt2MaxDiff_phase2'
'MeasCfg6Seq1TxAnt0MinAmpl_error_ant10'
</code></pre>
<p>etc.</p>
<p>I need to separate the number tail from the rest:</p>
<p><code>'TxAnt0Standard_deviation_peak_variation'</code> and <code>''</code></p>
<p><code>'MeasCfg5Seq1TxAnt2MaxDiff_phase'</code> and <code>'2'</code></p>
<p><code>'MeasCfg6Seq1TxAnt0MinAmpl_error_ant'</code> and <code>'10'</code></p>
<p>I found one example that uses the re.match() method, and I am trying something like this:</p>
<pre><code> match = re.match(r"(.+)([0-9]*)", limitName, re.I)
items=tuple()
if match:
items = match.groups()
basis = items[0] #res: whole string
tail =items[1] #res: ''
</code></pre>
<p>which in turn does not do the task: what I get is the whole string and an empty string.</p>
|
<p>You may use</p>
<pre><code>match = re.match(r"(.*\D)?(\d*)$", limitName)
</code></pre>
<p>See the <a href="https://regex101.com/r/Vrviwz/1/" rel="nofollow noreferrer">regex demo</a>. Note that <code>re.match</code> looks for a match at the start of the string, this is why I am not using the <code>^</code> anchor.</p>
<p><strong>Pattern details</strong></p>
<ul>
<li><code>(.*\D)?</code> - an optional capturing group matching any 0 or more chars other than line break chars as many as possible up to the last non-digit char followed with...</li>
<li><code>(\d*)</code> - Capturing group #2: any 0 or more digit</li>
<li><code>$</code> - end of string.</li>
</ul>
<p>Digits are caseless, you need no <code>re.I</code> modifier.</p>
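<p>A quick sanity check of the pattern against the sample strings from the question:</p>

```python
import re

samples = {
    'TxAnt0Standard_deviation_peak_variation': ('TxAnt0Standard_deviation_peak_variation', ''),
    'MeasCfg5Seq1TxAnt2MaxDiff_phase2': ('MeasCfg5Seq1TxAnt2MaxDiff_phase', '2'),
    'MeasCfg6Seq1TxAnt0MinAmpl_error_ant10': ('MeasCfg6Seq1TxAnt0MinAmpl_error_ant', '10'),
}

for limitName, expected in samples.items():
    basis, tail = re.match(r"(.*\D)?(\d*)$", limitName).groups()
    assert (basis, tail) == expected, (basis, tail)

print("all samples split as expected")
```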
|
python-3.x|regex
| 2 |
1,901,374 | 62,614,302 |
Print orderly with confirmation form a random list
|
<p>I have the following list that has been previously generated at random:</p>
<pre><code>Car
Boat
Bicycle
House
Apple
</code></pre>
<p>I'm trying to print them one at a time with confirmation, so that after running, the program asks me to confirm each item:</p>
<pre><code>Car
...Y
Boat
...Y
Bicycle
...Y
House
...Y
Apple
...Y
</code></pre>
<p><code>...Y </code> represents the user's confirmation.</p>
<p>I have no idea how to do this so any hint is really appreciated.</p>
|
<pre><code>for item in "Car Boat Bicycle House Apple".split(" "):
x = input(f"{item}\n")
# do something with confirmation x
</code></pre>
|
python|list
| 1 |
1,901,375 | 71,215,926 |
Incorrectly appending/concat. to list at bottom of nested Python dictionary
|
<p>When I created a nested dictionary structure I ran into a issue where when I tried to append an item to a list in the nested dictionary, it would append that item too all lists across all levels of my nested dictionary that had the corresponding final key.</p>
<pre><code>d = {"p": [], "d": []}
d = {value: dict(d) for value in [1,2,3]}
d = {value: dict(d) for value in ["h", "i"]}
d
Out[14]:
{'h': {1: {'p': [], 'd': []}, 2: {'p': [], 'd': []}, 3: {'p': [], 'd': []}},
'i': {1: {'p': [], 'd': []}, 2: {'p': [], 'd': []}, 3: {'p': [], 'd': []}}}
</code></pre>
<p>When I start to populate my "bottom-level" dictionaries' lists using append(), this happens:</p>
<pre><code>d["h"][1]["p"].append("Please Help")
d
Out[16]:
{'h': {1: {'p': ['Please Help'], 'd': []},
2: {'p': ['Please Help'], 'd': []},
3: {'p': ['Please Help'], 'd': []}},
'i': {1: {'p': ['Please Help'], 'd': []},
2: {'p': ['Please Help'], 'd': []},
3: {'p': ['Please Help'], 'd': []}}}
</code></pre>
<p>As seen above, the string was appended to each list in <em>d</em> that had the final key 'p'. Using concatenation of lists produces a similar, but different result:</p>
<pre><code>d = {"p": [], "d": []}
d = {value: dict(d) for value in [1,2,3]}
d = {value: dict(d) for value in ["h", "i"]}
d["h"][1]["p"] = d["h"][1]["p"] + ["Please Help"]
d
Out[23]:
{'h': {1: {'p': ['Please Help'], 'd': []},
2: {'p': [], 'd': []},
3: {'p': [], 'd': []}},
'i': {1: {'p': ['Please Help'], 'd': []},
2: {'p': [], 'd': []},
3: {'p': [], 'd': []}}}
</code></pre>
<p>Here, the string was added to lists that shared the same 2nd- and 3rd-level key (1 and 'p', respectively). I do not understand what is happening and any help would be appreciated. This is on Python 3.7.</p>
|
<p>Like @Mark says, you need a real copy: <code>dict(d)</code> only makes a shallow copy, so every nested dictionary still shares the <em>same</em> inner list objects, and appending to one of them shows up everywhere. <code>deepcopy</code> copies the lists too...</p>
<pre><code>from copy import deepcopy as dc
d = {"p": [], "d": []}
d = {value: dc(d) for value in [1,2,3]}
d = {value: dc(d) for value in ["h", "i"]}
d["h"][1]["p"].append("Please Help")
print(d)
#{'h': {1: {'p': ['Please Help'], 'd': []}, 2: {'p': [], 'd': []}, 3: {'p': [], 'd': []}}, 'i': {1: {'p': [], 'd': []}, 2: {'p': [], 'd': []}, 3: {'p': [], 'd': []}}}
</code></pre>
|
python|dictionary|nested
| 0 |
1,901,376 | 64,183,757 |
Looping through list of dictionaries
|
<p>My task is to make 3 dictionaries to store information about 3 people I know, such as their first name, last name, age, and the city in which they live:</p>
<pre><code>Sissel = {'first_name': 'Sissel', 'last_name': 'Johnsen', 'age': '23', 'city': 'Copenhagen'}
David = {'first_name': 'David', 'last_name': 'Hansen', 'age': '35', 'city': 'Randers'}
Olivia = {'first_name': 'Olivia', 'last_name': 'Petersen', 'age': '57', 'city': 'New York'}
</code></pre>
<p>Then I had to store them in a list:</p>
<pre><code>people = [Sissel, David, Olivia]
</code></pre>
<p>I have to loop through my list of people. And as I loop through the list, it has to print everything I know about each person by printing the key and associated values in each dictionary.</p>
<p>I tried using a for loop:</p>
<pre><code>for k, v in people:
print(k, v)
</code></pre>
<p>But I just got an error message saying</p>
<pre><code>ValueError: too many values to unpack (expected 2)
</code></pre>
|
<p>People is a list of dictionaries, which is why it throws a <em>too many values to unpack</em> error. In Python 3, you need to call <a href="https://docs.python.org/3/library/stdtypes.html#dict.items" rel="nofollow noreferrer"><code>dict.items()</code></a>:</p>
<pre><code>for person in people:
for k, v in person.items():
print(k, v)
print() # extra space between people
</code></pre>
|
python
| 2 |
1,901,377 | 70,179,548 |
how to get 3d space coordinates from 2d coordinates using python?
|
<p>I have a 2D orthomosaic image and also its point cloud. If I draw a polygon around an object in the image, then the output should return the coordinates of that object from the 3D point cloud. As soon as I get the coordinates of the object from the 3D cloud, I want to calculate the volume of that object.
Can anyone help me with this?</p>
|
<p>Retrieving the corresponding point of an orthomosaic image from a point cloud isn’t as trivial as it may seem, since in a digital image what you have is a pixel with a ‘location’ represented by row and column index, while in point cloud data each point has an (X, Y, Z) coordinate measured in a real-world coordinate system.</p>
<p>So what you need is uniforming the coordinates between image and point cloud. For orthomosaic image (assume it is generated from some commonly used software), it should be a tiff file with a transformation rule defined in its meta data. This transformation rule can transform image pixel coordinate <code>i, j</code> into world coordinate <code>X, Y</code>.</p>
<p>These transformation rule, named ‘Geotransform’, is usually defined by a transformation matrix, and can be retrieved from the meta data using library such as GDAL:
(<em><strong>Make sure the coordinate system geotransform targets is the exact system your point cloud data is, otherwise, a point cloud registration is needed</strong></em>)</p>
<pre><code>from osgeo import gdal  # note: the module name is lowercase

def get_geotransform(path: str):
    img = gdal.Open(path, 0)  # orthomosaic image path
    return img.GetGeoTransform()
</code></pre>
<p>The method above returns a 6-element tuple containing the transformation parameters.</p>
<p>The official GDAL docs have <a href="https://gdal.org/tutorials/geotransforms_tut.html" rel="nofollow noreferrer">a good article</a> on how to use these parameters. If you want to transform multiple points, I’ve also implemented a vectorized approach (use at your own risk):</p>
<pre><code>geotransform = get_geotransform(sample_img)
trans_matrix = np.array([[geotransform[2], geotransform[5]], [geotransform[1], geotransform[4]]])
XY = xy @ trans_matrix + np.array([geotransform[0], geotransform[3]])
</code></pre>
<p>Here, <code>xy</code> is a <code>(N, 2)</code> ndarray containing <code>(row, col)</code> index in each row.</p>
<p>So we get the world coordinate X, Y from the process above, now we need to search in the point cloud data for the closest point to X, Y, since there’s no guarantee that after the transformation, you can find the exact same point entry in point cloud data.</p>
<p>You can achieve this goal in a bunch of ways; even a brute-force search can do the job, though considering overall efficiency, I suggest using the <a href="https://en.wikipedia.org/wiki/K-d_tree" rel="nofollow noreferrer">KD-Tree</a> data structure. Suppose your point cloud data is an <code>(N, 3)</code> size ndarray; you can fit a KDTree using <code>scikit-learn</code>:</p>
<pre><code>
from sklearn.neighbors import NearestNeighbors
kd_tree = NearestNeighbors(n_neighbors=1, n_jobs=-1)
kd_tree.fit(cloud[:, 0: 2])
</code></pre>
<p>And get the nearest point cloud entry index:</p>
<pre><code>nearest_point_index = kd_tree.kneighbors(XY, return_distance=False)
</code></pre>
<p>Accessing point cloud data with this index should give you the most approximate coordinate for the image point you selected.</p>
<p><strong>In conclusion</strong>, two steps are needed:</p>
<ul>
<li>Transform the image point index to the world coordinate system</li>
<li>Perform a least distance search through the point cloud to find the corresponding coordinate</li>
</ul>
<p><strong>Also</strong>, the code above is just for pure demonstration, and should be adjusted for your own using scenario. For example, if you just need to query for a small amount of points and your point cloud data is very large, KD-Tree method may be less efficient overall, more details are needed to answer this question more specifically.</p>
|
python|math|image-processing|point-cloud-library|3d-model
| 0 |
1,901,378 | 56,460,344 |
How to iterate over the n-th dimenstion of a numpy array?
|
<p>I tend to concatenate my numpy arrays of arbitrary shape to make my code cleaner; however, it seems pretty hard to iterate over them in a Pythonic way.</p>
<p>Let's consider a 4-dimensional array x (thus <code>len(x.shape) = 4</code>), and suppose the index I want to iterate over is 2. The naive solution that I usually use is something like</p>
<pre class="lang-py prettyprint-override"><code>y = np.array([my_operation(x[:, :, i, :])
for i in range(x.shape[2])])
</code></pre>
<p>I'm looking for something more readable, because it is annoying to have so many ":" and any change in the dimensions of x would require rewriting part of my code. Something magic like</p>
<pre><code>y = np.array([my_operation(z) for z in magic_function(x, 2)])
</code></pre>
<p>Is there a numpy method that allows me to iterate over any arbitrary dimension of an array ?</p>
|
<p>One possible solution is to use a dict().</p>
<p>What you can do is:</p>
<pre><code>x = dict()
x['param1'] = [1, 1, 1, 1]
x['param2'] = [2, 2, 2, 2]
print(x['param1'])
# > [1, 1, 1, 1]
</code></pre>
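<p>Not part of the original answer, but if you want to stay in numpy, the <code>magic_function</code> from the question can be written with <code>np.moveaxis</code>, which presents the chosen axis as the leading one (as a view, without copying):</p>

```python
import numpy as np

def magic_function(x, axis):
    # move `axis` to the front so that plain iteration walks over it
    return np.moveaxis(x, axis, 0)

x = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)

# my_operation is just a stand-in here; each z below is x[:, :, i, :]
y = np.array([z.sum() for z in magic_function(x, 2)])

assert y.shape == (4,)
assert np.array_equal(y, x.sum(axis=(0, 1, 3)))
```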
|
python|numpy
| 0 |
1,901,379 | 69,952,035 |
Program does not catch predefined ZeroDivisionError
|
<p>I am building a simple BMI calculator and I want it to take users back to the input prompt if they enter values that cause a zero division. I defined what the output should be in that situation, but it still raises ZeroDivisionError.</p>
<pre><code>def bmi_calculator():
    user_height = float(input('Enter your height in cm: '))  # user height in cm
    user_weight = float(input('Enter your weight in kg: '))  # user weight in kg
    bmi = user_weight / (user_height / 100) ** 2  # BMI formula
    while user_height <= 0 or user_weight <= 0:
        try:
            print("your weight or height can't be below or equal 0 \n enter your credentials again... ")
            user_height = float(input('Enter your height in cm: '))
            user_weight = float(input('Enter your weight in kg: '))
        except ZeroDivisionError:
            print("You can't divide by 0")
            user_height = float(input('Enter your height in cm: '))
            user_weight = float(input('Enter your weight in kg: '))
    print(f'Your BMI is: {bmi}')
print(bmi_calculator())
</code></pre>
|
<p>For the ZeroDivisionError exception to be caught, put the line of code for the division <em>inside</em> the <code>try</code> block.<br>
Then, if the specified exception occurs, the <code>except</code> statement comes into action and prints,<br>
<code>You can't divide by 0</code>.</p>
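<p>A minimal sketch of that fix as a standalone function (the name and messages are placeholders, not the original code):</p>

```python
def compute_bmi(weight_kg, height_cm):
    try:
        # the division now lives inside the try block,
        # so ZeroDivisionError is actually caught
        return weight_kg / (height_cm / 100) ** 2
    except ZeroDivisionError:
        print("You can't divide by 0")
        return None

print(compute_bmi(70.0, 175.0))  # 22.857142857142858
print(compute_bmi(70.0, 0.0))    # prints the message and returns None
```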
|
python|python-3.x
| 0 |
1,901,380 | 69,829,511 |
parse a txt file delimited by pipe line by line
|
<p>I have a text file delimited by pipes. The problem is that the format of this file is like below:</p>
<pre><code>value 1|column name 1|value 2| column name 2|value 3|column name 3|...etc.
</code></pre>
<p>How to parse this kind of file? My goal is to transform it into a csv file in python. The value of each column could be empty, which means it could be like below:</p>
<pre><code>value1|column name 1||column name 2|value 3|column name 3|...etc.
</code></pre>
|
<p>Would something like this suffice?</p>
<pre><code>import pandas as pd
path = r'C:\Path\To\Your\CSVFile.csv'
df = pd.read_csv(path, sep='|', header=None)
new_names = [name for i in range(len(df.columns)) for name in ('val_' + str(i), 'col_name_' + str(i))]
df = df.rename(dict(zip(df.columns, new_names)), axis='columns')
df = df[df.columns.drop(list(df.filter(regex='col_name')))]
print(df)
</code></pre>
|
python|parsing
| 0 |
1,901,381 | 61,071,675 |
Unable to install library in virtual environment
|
<p>I'm not able to install any library in a virtual environment. It gives me the following error:</p>
<pre><code>venv) D:\python projects>pip install pandas
Cannot open D:\python projects\venv\Scripts\pip-script.py
</code></pre>
<p>However, if I get out of the venv and try to install it globally, it finishes without any error.</p>
<p>I don't know the reason for the error. Another question: if I have to upgrade the pip version, do I have to upgrade it separately, first globally and then in my venv? Also, is there any way to use a globally installed library in a venv?</p>
|
<p>Fixed this by rebuilding my venv. It's a bit destructive, but you can run <code>pip freeze &gt; requirements.txt</code> beforehand to record the packages you're using, then <code>pip install -r requirements.txt</code> in the new venv to restore them.</p>
|
python|pip
| 0 |
1,901,382 | 61,056,779 |
Serialize custom dynamic layer in keras tensorflow
|
<p>I am rather new to tensorflow as well as Python (transferring from R). Currently I am working on a recommendation system in python using keras and tensorflow. The data is "unary", so I only know if someone clicked on something or if he didn't. </p>
<p>The core model is build with the functional API and is a basic wide model (input = multi-hot encoded binary rating matrix, output = probability per class) with one hidden layer in between. To make the model easier to use, I want to be able to throw in class labels and output topN predictions with class labels and probabilities. So instead of "Input = [[0,1,0,1,0,0,0,0]], Output = [[0.3,0.1,0.6,0.0]" I want to be able to do like "Input = [['apple','orange','bean']], Output = [['lemon', 'banana'],[0.3,0.2]]".</p>
<p>To do this, I train the basic model and then wrap two custom layers around the model, one at the beginning and one at the end (like here: <a href="https://towardsdatascience.com/customize-classification-model-output-layer-46355a905b86" rel="nofollow noreferrer">https://towardsdatascience.com/customize-classification-model-output-layer-46355a905b86</a>) (I also tried feature_columns, they did not really do it for me). To make the input custom layer, I had to set the layer to "dynamic = True", to enable eager execution. I could not find a way to create a layer for this "tokenization" without using eager execution. This works fine so far.</p>
<p>But now I can not restore the saved model (neither using h5 nor save_model.pb). I also specified the "get_config" method for the custom layers, and everything works fine as long as I only save the model with the second custom layer at the end. So I suppose the error occurs because the first layer is dynamic. So how do I serialize a dynamic custom layer in keras?</p>
<p>I would really appreciate any help or even thoughts as I could not find any matching topic (or even any topic covering dynamic custom layers in general). Please find some code here:</p>
<pre><code>class LabelInputLayer(Layer):
def __init__(self, labels, n_labels, **kwargs):
self.labels = labels
self.n_labels = n_labels
super(LabelInputLayer, self).__init__(**kwargs)
def call(self, x):
batch_size = tf.shape(x)[0]
tf_labels = tf.constant([self.labels], dtype="int32")
n_labels = self.n_labels
x = tf.constant(x)
tf_labels = tf.tile(tf_labels,[batch_size,1])
# go through every instance and multi-hot encode labels
for j in tf.range(batch_size):
index = []
for i in tf.range(n_labels):
if tf_labels[j,i].numpy() in x[j,:].numpy():
index.append(True)
else:
index.append(False)
# create initial rating matrix and append for each instance
if j == 0:
rating_te = tf.where(index,1,0)
else:
rating_row = tf.where(index,1,0)
rating_te = tf.concat([rating_te,rating_row],0)
rating_te = tf.reshape(rating_te, [batch_size,-1])
return [rating_te]
def compute_output_shape(self, input_shape):
return tf.TensorShape([None, self.n_labels])
# define get_config to enable serialization of the layer
def get_config(self):
config={'labels':self.labels,
'n_labels':self.n_labels}
base_config = super(LabelInputLayer, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
</code></pre>
<p>Here I am creating the basic model:</p>
<pre><code>#Create Functional Model---------------------
a = tf.keras.layers.Input(shape=[n_classes])
b = tf.keras.layers.Dense(4000, activation = "relu", input_dim = n_classes)(a)
c = tf.keras.layers.Dropout(rate = 0.2)(b)
d = tf.keras.layers.Dense(n_classes, activation = "softmax")(c)
nn_model = tf.keras.Model(inputs = a, outputs = d)
</code></pre>
<p>And this is the final model (works in the notebook but can not be restored once saved, so no use in production):</p>
<pre><code>input_layer = Input(shape = input_shape, name = "first_input")
encode_layer = LabelInputLayer(labels = labels, n_labels = n_labels, dynamic = True, input_shape = input_shape)(input_layer)
pre_trained = tf.keras.models.Sequential(nn_model.layers[1:])(encode_layer)
decode_layer = LabelLimitLayer(labels, n_preds)(pre_trained)
encoder_model = tf.keras.Model(inputs = input_layer, outputs = decode_layer)
</code></pre>
<p>Save and restore the model:</p>
<pre><code>tf.saved_model.save(encoder_model, "encoder_model")
model = tf.keras.models.load_model("encoder_model")
</code></pre>
<p>This is the error I receive if I want to restore the model in another notebook (Unfortunately I can also not use the "custom_objects" parameter in the load method, as I have to deploy the model from the save file only):</p>
<pre><code>ValueError: Could not find matching function to call loaded from the SavedModel. Got:
Positional arguments (1 total):
* Tensor("x:0", shape=(None, 10), dtype=float32)
Keyword arguments: {}
Expected these arguments to match one of the following 0 option(s):
</code></pre>
|
<p>I managed to get around the issue by using <code>tf.function</code> instead of <code>dynamic=True</code>. I did not, however, manage to save a dynamic layer in TensorFlow. Maybe that helps someone.</p>
|
python|tensorflow2.0|keras-layer|tf.keras
| 0 |
1,901,383 | 66,036,271 |
Splitting a tensorflow dataset into training, test, and validation sets from keras.preprocessing API
|
<p>I'm new to tensorflow/keras and I have a file structure with 3000 folders containing 200 images each to be loaded in as data. I know that <a href="https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory" rel="nofollow noreferrer">keras.preprocessing.image_dataset_from_directory</a> allows me to load the data and split it into training/validation set as below:</p>
<pre><code>val_data = tf.keras.preprocessing.image_dataset_from_directory('etlcdb/ETL9G_IMG/',
image_size = (128, 127),
validation_split = 0.3,
subset = "validation",
seed = 1,
color_mode = 'grayscale',
shuffle = True)
</code></pre>
<blockquote>
<p>Found 607200 files belonging to 3036 classes.
Using 182160 files for validation.</p>
</blockquote>
<p>But then I'm not sure how to further split my validation into a test split while maintaining proper classes. From what I can tell (through the GitHub <a href="https://github.com/tensorflow/tensorflow/blob/85c8b2a817f95a3e979ecd1ed95bff1dc1335cff/tensorflow/python/data/ops/dataset_ops.py#L3848" rel="nofollow noreferrer">source code</a>), the take method simply takes the first x elements of the dataset, and skip does the same. I am unsure if this maintains stratification of the data or not, and I'm not quite sure how to return labels from the dataset to test it.</p>
<p>Any help would be appreciated.</p>
|
<p>I could not find supporting documentation, but I believe <code>image_dataset_from_directory</code> is taking the end portion of the dataset as the validation split. <code>shuffle</code> is now set to <code>True</code> by default, so the dataset is shuffled before training, to avoid using only some classes for the validation split.
The split done by <code>image_dataset_from_directory</code> only relates to the training process. If you need a (highly recommended) test split, you should split your data beforehand into training and testing. Then, <code>image_dataset_from_directory</code> will split your training data into training and validation.</p>
<p>I usually take a smaller percent (10%) for the in-training validation, and split the original dataset 80% training, 20% testing.
With these values, the final splits (from the initial dataset size) are:</p>
<ul>
<li>80% training:
<ul>
<li>72% training (used to adjust the weights in the network)</li>
<li>8% in-training validation (used only to check the metrics of the model after each epoch)</li>
</ul>
</li>
<li>20% testing (never seen by the training process at all)</li>
</ul>
<p>There is additional information how to split data in your directories in this question: <a href="https://stackoverflow.com/questions/42443936/keras-split-train-test-set-when-using-imagedatagenerator">Keras split train test set when using ImageDataGenerator</a></p>
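<p>A minimal sketch of that beforehand split using only the standard library. The directory names and the 80/20 ratio are assumptions, and files are moved class by class so the class balance is preserved:</p>

```python
import os
import random
import shutil

def split_train_test(src_dir, test_dir, test_frac=0.2, seed=1):
    """Move test_frac of the files of every class folder from src_dir to test_dir."""
    random.seed(seed)
    for cls in os.listdir(src_dir):
        files = sorted(os.listdir(os.path.join(src_dir, cls)))
        random.shuffle(files)
        n_test = int(len(files) * test_frac)
        os.makedirs(os.path.join(test_dir, cls), exist_ok=True)
        for name in files[:n_test]:
            shutil.move(os.path.join(src_dir, cls, name),
                        os.path.join(test_dir, cls, name))
```

<p>Run it once before training, then point <code>image_dataset_from_directory</code> at the two resulting directories.</p>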
|
python|tensorflow|keras
| 3 |
1,901,384 | 65,918,026 |
Convert a list of bigram tuples to a list of strings
|
<p>I am trying to create Bigram tokens of sentences.
I have a list of tuples such as</p>
<pre><code>tuples = [('hello', 'my'), ('my', 'name'), ('name', 'is'), ('is', 'bob')]
</code></pre>
<p>and I was wondering if there is a way to convert it to a list using Python, so it would look like this:</p>
<pre><code>list = ['hello my', 'my name', 'name is', 'is bob']
</code></pre>
<p>thank you</p>
|
<p>Try this snippet:</p>
<pre><code>result = [' '.join(x) for x in tuples]
</code></pre>
<p><code>join</code> is a string method that concatenates all items of a list (or tuple), separated by the string it is called on, here a single space. (Avoid naming the result <code>list</code>, since that shadows the built-in type.)</p>
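<p>Going the other way, if you start from a plain sentence rather than the tuples, the bigram tuples themselves can be built with <code>zip</code> before joining (the sentence is the question's example):</p>

```python
words = "hello my name is bob".split()
bigram_tuples = list(zip(words, words[1:]))
# [('hello', 'my'), ('my', 'name'), ('name', 'is'), ('is', 'bob')]
bigrams = [' '.join(pair) for pair in bigram_tuples]
# ['hello my', 'my name', 'name is', 'is bob']
```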
|
python|list|nlp|tuples|token
| 2 |
1,901,385 | 72,622,791 |
Python selenium gets stuck on site loading screen
|
<p>First of all, this is the <a href="https://www.gamermarkt.com/" rel="nofollow noreferrer">website</a> I use.</p>
<p><strong>The code block I used:</strong></p>
<pre><code>from selenium import webdriver
browserProfile = webdriver.ChromeOptions()
browserProfile.add_argument("start-maximized")
browserProfile.add_argument('--disable-blink-features=AutomationControlled')
browserProfile.add_argument('--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36')
browserProfile.add_experimental_option("excludeSwitches", ["enable-automation"])
browserProfile.add_experimental_option('useAutomationExtension', False)
browser = webdriver.Chrome("chromedriver.exe", chrome_options=browserProfile)
browser.get('https://www.gamermarkt.com')
</code></pre>
<p><strong>ChromeDriver image:</strong></p>
<p><a href="https://i.stack.imgur.com/E4ntv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E4ntv.png" alt="enter image description here" /></a></p>
<p>stays on this screen.</p>
<p>I think there is a bot block on the site, but I have no idea how to bypass it.</p>
|
<p>I would suggest giving the page time to finish loading (and any bot checks) before interacting with it, for example:</p>
<pre><code>import time
time.sleep(1) #sleep for 1 sec
time.sleep(0.25) #sleep for 250 milliseconds
</code></pre>
|
python|selenium|selenium-webdriver|selenium-chromedriver
| 0 |
1,901,386 | 72,724,387 |
accessing ujson content using ESP32
|
<p>There is a file dumped using ujson. It contains a list of dictionaries.
When I try to load it again using ujson it throws an error: ValueError: syntax error in JSON.
What am I missing, if you can explain? I am running it on an ESP32 using Thonny and I am fairly new to this.</p>
<pre><code>updated_f = open("riversss.txt", 'r')
data = ujson.loads(updated_f.read())
</code></pre>
<p>This is the file content:</p>
<pre><code>[{
"zjawisko_lodowe": "0",
"temperatura_wody_data_pomiaru": "\"2022-06-23 05:10:00\"",
"stan_wody": "521",
"temperatura_wody": "17.54",
"zjawisko_zarastania_data_pomiaru": "\"2018-11-21 10:57:00\"",
"zjawisko_lodowe_data_pomiaru": "\"2020-01-24 08:00:00\"",
"stan_wody_data_pomiaru": "\"2022-06-23 05:10:00\"",
"stacja": "Ustka",
"zjawisko_zarastania": "0",
"id_stacji": "154160110",
"rzeka": "Bałtyk",
"województwo": "pomorskie"
}][{
"zjawisko_lodowe": "0",
"temperatura_wody_data_pomiaru": "\"2022-06-23 05:10:00\"",
"stan_wody": "521",
"temperatura_wody": "17.54",
"zjawisko_zarastania_data_pomiaru": "\"2018-11-21 10:57:00\"",
"zjawisko_lodowe_data_pomiaru": "\"2020-01-24 08:00:00\"",
"stan_wody_data_pomiaru": "\"2022-06-23 05:10:00\"",
"stacja": "Ustka",
"zjawisko_zarastania": "0",
"id_stacji": "154160110",
"rzeka": "Bałtyk",
"województwo": "pomorskie"
}][{
"zjawisko_lodowe": "0",
"temperatura_wody_data_pomiaru": "\"2022-06-23 05:10:00\"",
"stan_wody": "521",
"temperatura_wody": "17.54",
"zjawisko_zarastania_data_pomiaru": "\"2018-11-21 10:57:00\"",
"zjawisko_lodowe_data_pomiaru": "\"2020-01-24 08:00:00\"",
"stan_wody_data_pomiaru": "\"2022-06-23 05:10:00\"",
"stacja": "Ustka",
"zjawisko_zarastania": "0",
"id_stacji": "154160110",
"rzeka": "Bałtyk",
"województwo": "pomorskie"
}]
</code></pre>
|
<p>Everything between <code>[</code> and <code>]</code> is an array in JSON. Currently you've tried to define three unnamed arrays at the top level, each containing a single data record. That's not valid JSON, and probably not what you intended.</p>
<p>I assume you wanted a single top-level array with 3 data records like so:</p>
<pre><code>[{
"zjawisko_lodowe": "0",
"temperatura_wody_data_pomiaru": "\"2022-06-23 05:10:00\"",
"stan_wody": "521",
"temperatura_wody": "17.54",
"zjawisko_zarastania_data_pomiaru": "\"2018-11-21 10:57:00\"",
"zjawisko_lodowe_data_pomiaru": "\"2020-01-24 08:00:00\"",
"stan_wody_data_pomiaru": "\"2022-06-23 05:10:00\"",
"stacja": "Ustka",
"zjawisko_zarastania": "0",
"id_stacji": "154160110",
"rzeka": "Bałtyk",
"województwo": "pomorskie"
},
{
"zjawisko_lodowe": "0",
"temperatura_wody_data_pomiaru": "\"2022-06-23 05:10:00\"",
"stan_wody": "521",
"temperatura_wody": "17.54",
"zjawisko_zarastania_data_pomiaru": "\"2018-11-21 10:57:00\"",
"zjawisko_lodowe_data_pomiaru": "\"2020-01-24 08:00:00\"",
"stan_wody_data_pomiaru": "\"2022-06-23 05:10:00\"",
"stacja": "Ustka",
"zjawisko_zarastania": "0",
"id_stacji": "154160110",
"rzeka": "Bałtyk",
"województwo": "pomorskie"
},
{
"zjawisko_lodowe": "0",
"temperatura_wody_data_pomiaru": "\"2022-06-23 05:10:00\"",
"stan_wody": "521",
"temperatura_wody": "17.54",
"zjawisko_zarastania_data_pomiaru": "\"2018-11-21 10:57:00\"",
"zjawisko_lodowe_data_pomiaru": "\"2020-01-24 08:00:00\"",
"stan_wody_data_pomiaru": "\"2022-06-23 05:10:00\"",
"stacja": "Ustka",
"zjawisko_zarastania": "0",
"id_stacji": "154160110",
"rzeka": "Bałtyk",
"województwo": "pomorskie"
}]
</code></pre>
<p>The double quotes around timestamps are fishy, though valid syntax. Remember, <a href="https://jsonlint.com/" rel="nofollow noreferrer">https://jsonlint.com/</a> is your friend.</p>
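<p>If the file was produced by repeatedly dumping and appending one-record lists (which would explain the <code>][</code> boundaries), a workaround is to patch the text into one array before parsing. This sketch assumes <code>][</code> never occurs inside a string value:</p>

```python
import json  # MicroPython's ujson exposes the same loads/dumps interface

def load_concatenated(text):
    """Parse text made of several JSON arrays glued together: [..][..][..]"""
    merged = "[" + text.replace("][", "],[") + "]"  # -> [[..],[..],[..]]
    return [record for chunk in json.loads(merged) for record in chunk]

records = load_concatenated('[{"a": 1}][{"b": 2}]')
# records -> [{'a': 1}, {'b': 2}]
```

<p>The cleaner long-term fix is to keep one Python list, append each record to it, and <code>ujson.dump</code> the whole list once.</p>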
|
python|esp32|micropython|ujson
| 1 |
1,901,387 | 68,130,238 |
How to define range in "for" at Django template
|
<p>Django ListView returns object_list, and in Template I want to take the data only once using "for".</p>
<p>So I wrote this code.</p>
<blockquote>
<p>{% for item in object_list(range(1)) %}</p>
</blockquote>
<p>But this code does not work. Here is the error code.</p>
<pre><code>Exception Type: TemplateSyntaxError
Exception Value:
Could not parse the remainder: '(range(1))' from 'object_list(range(1))'
</code></pre>
<p>I know my for code is something wrong, please teach me how to write properly.</p>
<p>I have only included the settings above; if more code is required, tell me and I'll update my question with that information. Thank you.</p>
|
<p>You should not define this in the template, but in the view. The view thus looks like:</p>
<pre><code>def my_view(request):
context = {
'myrange': range(1),
# …
}
return render(request, '<i>some-template.html</i>', context)</code></pre>
<p>and then enumerate with:</p>
<pre><code>{% for <b>item in myrange</b> %}
{{ item }}
{% endfor %}</code></pre>
|
python|django|django-rest-framework|django-templates
| 0 |
1,901,388 | 68,125,847 |
'>' not supported between instances of 'str' and 'int' pandas function for getting threshold
|
<p>I have a df</p>
<pre><code>import pandas as pd
df= pd.DataFrame({'ID': [1,2,3],
'Text':['This num dogs and cats is (111)888-8780 and other',
'dont block cow 23 here',
'cat two num: dog and cows here'],
'Match':[[('cats', 86), ('dogs', 86), ('dogs', 29)],
[('cow', 33), ('dont', 57), ('cow', 100)],
[('cat', 100), ('dog', 100), ('cows', 86)] ]
})
</code></pre>
<p>And it looks like this</p>
<pre><code> ID Text Match
0 1 This num dogs and cats is (111)888-8780 and other [(cats, 86), (dogs, 86), (dogs, 29)]
1 2 dont block cow 23 here [(cow, 33), (dont, 57), (cow, 100)]
2 3 cat two num: dog and cows here [(cat, 100), (dog, 100), (cows, 86)]
</code></pre>
<p>My goal is to create a function that only keeps certain item within <code>Match</code> column that are above a certain threshold (e.g. 80) so I tried the following</p>
<pre><code>def threshold(column):
column_tup = column
keep_tuple = []
for col in column_tup:
if column_tup > 80:
keep_tuple.append()
return pd.Series([keep_tuple], index = ['Keep_Words'])
df_thresh = df.join(df.apply(lambda x: threshold(x), axis = 1))
</code></pre>
<p>But this gives me an error</p>
<pre><code>'>' not supported between instances of 'str' and 'int'
</code></pre>
<p>My goal is to get a df with a new column <code>Keep_Words</code> that looks like the following where only score above 85 are kept in <code>Keep_Words</code></p>
<pre><code> ID Text Match Keep_Words
0 1 data data [(cats, 86), (dogs, 86)]
1 2 data data [(cow, 100)]
2 3 data data [(cat, 100), (dog, 100)]
</code></pre>
<p>How do I alter my code to reach my goal?</p>
|
<p>Since you're trying to change only the <code>Match</code> column, you might as well only pass that column to <code>apply</code>:</p>
<pre><code>df.Match.apply(threshold)
</code></pre>
<p>where we don't use <code>axis</code> argument anymore since it is a Series we are applying over and it has only one axis anyway.</p>
<p>Then, each time your function is called, a row of <code>df.Match</code> will be passed and get assigned to the function argument, so we can rename the function signature to:</p>
<pre><code>def threshold(match_row):
</code></pre>
<p>for readability.</p>
<p>So, <code>match_row</code> will be a list, e.g., in the first turn it'll be <code>[(cats, 86), (dogs, 86), (dogs, 29)]</code>. We can iterate over as you did but with 2 for-loop variable as:</p>
<pre><code>for name, val in match_row:
</code></pre>
<p>so that <code>name</code> will become the first entry of each tuple and <code>val</code> is the second. Now we can do the filtering:</p>
<pre><code>keep_tuple = []
for name, val in match_row:
if val > 85:
keep_tuple.append((name, val))
</code></pre>
<p>which is fine but not so Pythonic because there is list comprehensions:</p>
<pre><code>keep_tuple = [(name, val) for name, val in match_row if val > 85]
</code></pre>
<p>Lastly we can return this as you did:</p>
<pre><code>return pd.Series([keep_tuple], index=["Keep_Words"])
</code></pre>
<p>As for calling and assignment, we can <code>join</code> as you did:</p>
<pre><code>df_thresh = df.join(df.Match.apply(threshold))
</code></pre>
<p>All in all,</p>
<pre><code>def threshold(match_row):
keep_tuple = [(name, val) for name, val in match_row if val > 85]
return pd.Series([keep_tuple], index=["Keep_Words"])
df_thresh = df.join(df.Match.apply(threshold))
</code></pre>
<p>which gives</p>
<pre><code>>>> df_thresh
ID Text Match Keep_Words
0 1 This num dogs and cats is (111)888-8780 and other [(cats, 86), (dogs, 86), (dogs, 29)] [(cats, 86), (dogs, 86)]
1 2 dont block cow 23 here [(cow, 33), (dont, 57), (cow, 100)] [(cow, 100)]
2 3 cat two num: dog and cows here [(cat, 100), (dog, 100), (cows, 86)] [(cat, 100), (dog, 100), (cows, 86)]
</code></pre>
<hr>
<p>Lastly, for the error you got: I didn't get that error but the infamous</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>error, which was because of this line</p>
<pre><code>if column_tup > 80:
</code></pre>
<p>where <code>column_tup</code> is a whole row as a <code>pd.Series</code> but its behaviour in boolean context is ambiguous.</p>
|
python|pandas|list|function|tuples
| 3 |
1,901,389 | 59,410,508 |
Printing changed file paths in latest commit in gitpython
|
<p>I'm trying to get the changed file paths between the latest commit and the one before it in GitPython.
The problem is that even if the latest commit has only 1 changed file, a lot more are displayed.
Below is my code:</p>
<pre><code>repo = git.Repo(path)
commits_list = list(repo.iter_commits())
a_commit = commits_list[0]
b_commit = commits_list[-1]
itemDiff = a_commit.diff(b_commit)
for item in itemDiff:
print(item.a_path)
</code></pre>
<p>I'm trying this against a local cloned repo. What am I doing wrong?</p>
|
<p>If you need to read from the repo, consider using GitPython's abstraction <a href="https://github.com/ishepard/pydriller" rel="nofollow noreferrer">Pydriller</a>.</p>
<pre><code>from pydriller import RepositoryMining

for commit in RepositoryMining("repo").traverse_commits():
    for modified_file in commit.modifications:
        modified_file.new_path  # here you have the path of each file changed in the commit
</code></pre>
|
python|git|gitpython
| 0 |
1,901,390 | 72,969,302 |
Rewrite json object list into key value pairs
|
<p>I am trying to format oauth token logs pulled from Google Workspace API using python. The objects returned from the google API call use a mix of formats.</p>
<p>Some sections are formatted like <code>"kind": "admin#reports#activity"</code>, which is preferred, while other sections are formtted like <code>"name": "api_name", "value": "caldav"</code>. How can I rewrite the sections formatted like the second example to match the first example? <code>"name": "api_name", "value": "caldav"</code> would become <code>"api_name" : "caldav"</code>.</p>
<p>Sample log returned (some sections have been redacted):</p>
<pre><code>{"kind": "admin#reports#activity", "id": {"time": "2022-07-13T11:45:59.181Z", "uniqueQualifier": "<redacted>", "applicationName": "token", "customerId": "<redacted>"}, "etag": "<redacted>", "actor": {"email": "<redacted>", "profileId": "<redacted>"}, "ipAddress": "<redacted>", "events": [{"type": "auth", "name": "activity", "parameters": [{"name": "api_name", "value": "caldav"}, {"name": "method_name", "value": "caldav.calendars.report"}, {"name": "client_id", "value": "<redacted>"}, {"name": "num_response_bytes", "intValue": "165416"}, {"name": "product_bucket", "value": "CALENDAR"}, {"name": "app_name", "value": "<redacted>"}, {"name": "client_type", "value": "<redacted>"}]}]}
</code></pre>
<p>Thanks,</p>
<p>Dan</p>
|
<p>I hope I've understood you correctly:</p>
<pre class="lang-py prettyprint-override"><code>lst = [
{"kind": "admin#reports#activity", "other_key": 1},
{"name": "api_name", "value": "caldav", "other_key": 2},
{"name": "other_name", "intValue": 999, "other_key": 3},
]
for d in lst:
if "kind" not in d:
d[d.pop("name")] = d.pop("value") if "value" in d else d.pop("intValue")
print(lst)
</code></pre>
<p>will transform the <code>lst</code> to:</p>
<pre class="lang-py prettyprint-override"><code>[
{"kind": "admin#reports#activity", "other_key": 1},
{"other_key": 2, "api_name": "caldav"},
{"other_key": 3, "other_name": 999},
]
</code></pre>
|
python|json|google-workspace
| 3 |
1,901,391 | 62,213,025 |
How to copy files from a finished container in AWS boto3
|
<p>I am starting with AWS ECS/Fargate. I have managed to create a cluster, service, task and everything else and run a container from a custom Docker image. However, all tutorials and similar seem to expect I use AWS for e.g., hosting a web app or similar. In my case, I simply do some processing (computations) in the container and when it's finished I need to get the resulting file (a .csv).</p>
<p>In Docker, I would achieve what I want doing </p>
<pre><code>docker cp container_id:/outputs /some/local/path/
</code></pre>
<p>I am using boto3.</p>
<p><strong>Question</strong>: How can I access the container's file system after it has finished using boto3? </p>
<p>I am completely new to web services so if I'm using wrong terminology or if it is a silly question for whatever reason, please let me know.</p>
|
<p>In general, you don't. If you're using Docker as a job runner, make sure to leave the results somewhere you can retrieve them later. (Even in plain Docker, I'd try to avoid <code>docker cp</code> for routine tasks.)</p>
<p>In an AWS context, try copying them to an S3 bucket. You can <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html" rel="nofollow noreferrer">assign an IAM role to an ECS task</a> which would give it the required permissions. Depending on the size of the data and the frequency of results, other potential options include directly HTTP POSTing the data to another service you're running, or sending the content in the body of a RabbitMQ or SQS message. Sending an S3 path in a queue message is often a good combination to set up orchestration between tasks.</p>
|
python|amazon-web-services|docker|boto3|amazon-ecs
| 2 |
1,901,392 | 62,270,865 |
Changing a variable name affects the code behavior
|
<p>When I run this code, one of the selenium windows is not closed</p>
<pre><code>import multiprocessing
from selenium import webdriver
class Worker:
def __init__(self):
self.driver = webdriver.Chrome()
def run(self):
self.driver.get('https://www.google.com')
processes = []
for i in range(2):
worker = Worker()
process = multiprocessing.Process(target=worker.run)
process.start()
processes.append(process)
for any_name in processes:
any_name.terminate()
</code></pre>
<p>But if I change variable name from <code>any_name</code> to <code>worker</code>, then all selenium windows are closed. Why is this happening?</p>
<p>PS version: python 3.7, chromedriver 83, selenium 3.141.0</p>
|
<p>This is because the browser closing behavior depends on the <code>__del__</code> method of <code>selenium.webdriver.common.service.Service</code> to make the browser Windows exit, and <code>__del__</code> will only be called when there are no more references to your <code>WebDriver</code> instances. Here is the implementation of <code>Service.__del__</code>:</p>
<pre class="lang-py prettyprint-override"><code> def __del__(self):
# `subprocess.Popen` doesn't send signal on `__del__`;
# so we attempt to close the launched process when `__del__`
# is triggered.
try:
self.stop()
except Exception:
pass
</code></pre>
<p>The <code>stop()</code> method shuts everything down.</p>
<p>Now, the reason the variable naming matters is that it affects whether or not there are any references to a <code>WebDriver</code> when your program exits. When your first for loop completes, <code>worker</code> is still in-scope, which holds a reference to the second <code>Worker</code> you created, which holds a reference to the <code>WebDriver</code>. That keeps it in scope when your main program completes, which means <code>__del__</code> is never called, and the browser window doesn't close. </p>
<p>However, when you re-use <code>worker</code> for the second for loop, it means the reference to the second <code>Worker</code> is no longer held, which means there are no references to <code>WebDriver</code> in-memory, which means <code>__del__</code> will be called and the window will close. You can confirm this behavior by explicitly adding <code>worker = None</code> outside of the first for loop. With that change, both browser windows always exit, no matter what variable name you use in the second loop. </p>
|
python|selenium|process|multiprocessing|terminate
| 1 |
1,901,393 | 58,886,304 |
requests.get crashes on certain urls
|
<pre><code>import requests
r = requests.get('https://www.whosampled.com/search/?q=marvin+gaye')
</code></pre>
<p>This returns the following error</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\thoma\Downloads\RealisticMellowProfile\Python\New folder\Term project demo.py", line 8, in <module>
r = requests.get('https://www.whosampled.com/search/?q=marvin+gaye')
File "c:\users\thoma\miniconda3\lib\site-packages\requests\api.py", line 75, in get
return request('get', url, params=params, **kwargs)
File "c:\users\thoma\miniconda3\lib\site-packages\requests\api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "c:\users\thoma\miniconda3\lib\site-packages\requests\sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "c:\users\thoma\miniconda3\lib\site-packages\requests\sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "c:\users\thoma\miniconda3\lib\site-packages\requests\adapters.py", line 498, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
</code></pre>
|
<p>You can change the user agent so the server does not close the connection:</p>
<pre><code>import requests
headers = {"User-Agent": "Mozilla/5.0"}
r = requests.get('https://www.whosampled.com/search/?q=marvin+gaye', headers=headers)
</code></pre>
|
python|python-requests
| 1 |
1,901,394 | 31,618,329 |
How do I set post permalink(or a slug)?
|
<p>How can I use python to set the value of a permalink on a wordpress post?</p>
<p>Slug will work too, but permalink would be better(I want to set it to a different domain).</p>
|
<p>I think you could do it with the WordPress REST API:</p>
<p><a href="http://wp-api.org/" rel="nofollow">http://wp-api.org/</a></p>
<p><a href="https://github.com/stylight/python-wordpress-json" rel="nofollow">https://github.com/stylight/python-wordpress-json</a></p>
|
python|wordpress
| 0 |
1,901,395 | 15,889,998 |
Pandas force matrix multiplication
|
<p>I would like to force matrix multiplication "orientation" using Python Pandas, whether DataFrames against DataFrames, DataFrames against Series, or Series against Series.</p>
<p>As an example, I tried the following code:</p>
<pre><code>t = pandas.Series([1, 2])
print(t.T.dot(t))
</code></pre>
<p>Which outputs: 5</p>
<p>But I expect this:</p>
<pre><code>[1 2
2 4]
</code></pre>
<p>Pandas is great, but this inability to do matrix multiplications the way I want is what is the most frustrating, so any help would be greatly appreciated.</p>
<p>PS: I know Pandas tries to implicitly use index to find the right way to compute the matrix product, but it seems this behavior can't be switched off!</p>
|
<p>Here:</p>
<pre><code>In [1]: import pandas
In [2]: import numpy as np
In [3]: t = pandas.Series([1, 2])
In [4]: np.outer(t, t)
Out[4]:
array([[1, 2],
       [2, 4]])
</code></pre>
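<p>If you want to stay inside pandas (and keep index labels on the result), promoting the Series to a one-column DataFrame forces the column-vector times row-vector orientation:</p>

```python
import pandas as pd

t = pd.Series([1, 2])
outer = t.to_frame().dot(t.to_frame().T)  # (2, 1) @ (1, 2) -> (2, 2) DataFrame
#    0  1
# 0  1  2
# 1  2  4
```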
|
python|pandas|matrix-multiplication|dot-product|dataframe
| 4 |
1,901,396 | 48,926,511 |
TensorFlow Assign
|
<p>I am trying to write a custom version of an RNN and would like to just store the state and last output of the cells in variables but it is not working. My guess is that TensorFlow sees the storing of the values unnecessary and does not execute it. Here is a snippet that illustrates the problem.</p>
<p>For this example, I have five layers of "cells" that intentionally ignore the input and output the sum of the biases for the cell and the previous output, which is initialized to zero. However, as we run this, the output of the network is always just the values of the biases in the final layer and the value of <code>last_output</code> remains zero.</p>
<pre><code>import tensorflow as tf
import numpy as np
def cell_function(cell_inputs, layer):
last_output = tf.get_variable('last_output_{}'.format(layer), shape=(10, 1),
initializer=tf.zeros_initializer, trainable=False)
biases = tf.get_variable('biases_{}'.format(layer), shape=(10, 1),
initializer=tf.zeros_initializer)
cell_output = last_output + biases
last_output.assign(cell_output)
return cell_output
def rnn_function(inputs):
with tf.variable_scope('rnn', reuse=tf.AUTO_REUSE):
next_inputs = inputs
for layer in range(num_layers):
next_inputs = cell_function(next_inputs, layer)
return next_inputs
num_layers = 5
data = np.random.uniform(0, 10, size=(1001, 10, 1))
x = tf.placeholder('float', shape=(10, 1))
y = tf.placeholder('float', shape=(10, 1))
predictions = rnn_function(x)
loss = tf.losses.mean_squared_error(predictions=predictions, labels=y)
optimizer = tf.train.AdamOptimizer(learning_rate=0.1).minimize(loss=loss)
with tf.variable_scope('rnn', reuse=tf.AUTO_REUSE):
last = tf.get_variable('last_output_4', shape=(10, 1),
initializer=tf.zeros_initializer, trainable=False)
layer_biases = tf.get_variable('biases_4', shape=(10, 1),
initializer=tf.zeros_initializer)
with tf.Session() as sess:
tf.global_variables_initializer().run()
for t in range(1000):
rnn_input = data[t]
rnn_output = data[t+1]
feed_dict = {x: rnn_input, y: rnn_output}
        fetches = [optimizer, predictions, loss, last, layer_biases]
_, pred, mse, value, bias = sess.run(fetches, feed_dict=feed_dict)
print('Predictions:')
print(rnn_predictions)
print(last.name)
print(value)
print(layer_biases.name)
print(bias)
</code></pre>
<p>If I change the line of <code>cell_function</code> just before the return to <code>last_output = tf.assign(last_output, cell_output)</code>, return it alongside <code>cell_output</code>, pass it back out of <code>rnn_function</code>, and use that for the variable <code>last</code>, everything works. I think that is because we are forcing TensorFlow to compute that node in the graph.</p>
<p>Is there any way to make this work without passing <code>last_output</code> out of the cell? It would be much nicer if I didn't have to keep passing all this stuff out to get the assignment operation to be executed.</p>
|
<p>Make it dependent on an operation that will be run, in this example I'll use the cost function, but use whatever makes sense:</p>
<pre><code>with tf.control_dependencies([cost]):
tf.assign(last_output, cell_output)
</code></pre>
<p>Now the assign operation will be required in order for <code>cost</code> to be computed, which should solve your problem. For any operation you request tensorflow to compute with <code>sess.run(some_op)</code>, tensorflow will work backwards through the dependency graph and only compute the minimum elements necessary to produce the requested output.</p>
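<p>That "works backwards through the dependency graph" behaviour can be mimicked with a toy sketch in plain Python. This is only an illustration of the scheduling idea, not TensorFlow's actual executor, and all names here (<code>Node</code>, <code>state</code>, etc.) are made up:</p>

```python
# A node runs only if something that is fetched depends on it -- the same
# reason an un-fetched tf.assign never executes.
class Node:
    def __init__(self, fn, deps=()):
        self.fn, self.deps = fn, deps

    def run(self, executed):
        for d in self.deps:          # evaluate dependencies first
            d.run(executed)
        executed.append(self.fn())   # then do this node's own work

state = {'last_output': 0}
assign = Node(lambda: state.update(last_output=42) or 'assign')
cost = Node(lambda: 'cost')          # cost does NOT depend on assign

executed = []
cost.run(executed)                   # like sess.run(cost): assign never runs
print(state['last_output'])          # still 0

cost_dep = Node(lambda: 'cost', deps=(assign,))  # add the control dependency
cost_dep.run(executed)
print(state['last_output'])          # now 42
```

<p>In TensorFlow terms, <code>tf.control_dependencies</code> is what inserts that extra edge into the graph, so the assign lands on the path to the fetched node.</p>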
|
python|tensorflow|assign
| 0 |
1,901,397 | 49,042,221 |
Converting a column of String to integers?
|
<pre><code># Convert string column to integer
def str_column_to_int(dataset, column):
    class_values = [row[column] for row in dataset]
    unique = set(class_values)
    lookup = dict()
    for i, value in enumerate(unique):
        lookup[value] = i
    for row in dataset:
        row[column] = lookup[row[column]]
    return lookup
</code></pre>
<p>The above code is a basic machine-learning snippet for converting a column of strings to integers (or one-hot encoding). However, I am having difficulty understanding it, especially <code>class_values = [row[column] for row in dataset]</code> and <code>unique = set(class_values)</code> &mdash; what do these two lines do that makes the encoding work?</p>
|
<pre><code>>>> dataset = [
... [1, 2],
... [1, 2],
... [1, 2]
... ]
>>> column = 1
>>> class_values = [row[column] for row in dataset]
>>> class_values
[2, 2, 2]
>>> unique = set(class_values)
>>> unique
{2}
</code></pre>
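<p>To see the whole function in context, here is a self-contained run (the sample data is made up for illustration). Note that the function produces integer labels &mdash; often called label encoding &mdash; rather than a true one-hot encoding:</p>

```python
# Each unique string in the chosen column is mapped to an integer index,
# and the column is rewritten in place.
def str_column_to_int(dataset, column):
    class_values = [row[column] for row in dataset]  # collect the column
    unique = set(class_values)                       # distinct labels
    lookup = dict()
    for i, value in enumerate(unique):
        lookup[value] = i                            # label -> integer
    for row in dataset:
        row[column] = lookup[row[column]]            # replace in place
    return lookup

dataset = [[1.2, 'cat'], [3.4, 'dog'], [5.6, 'cat']]
lookup = str_column_to_int(dataset, 1)
print(lookup)   # e.g. {'cat': 0, 'dog': 1} (set order is not guaranteed)
print(dataset)  # both 'cat' rows now carry the same integer
```

<p>Because the labels come out of a <code>set</code>, the specific integer assigned to each string can vary between runs; only the grouping is guaranteed.</p>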
|
python|algorithm|machine-learning
| 1 |
1,901,398 | 25,025,189 |
PYQT4 - Why I can't add Buttons to QMainWindow in Slot/callback function?
|
<p>I need to create buttons dynamically in a QMainWindow, and I'm trying to do it through the RefreshData() slot function.
The point is, although the function runs and the buttons are created, they do not appear in the QMainWindow!!
But when I call that function standalone, it works. What could I be doing wrong? I can't figure it out.
Lots of thanks</p>
<pre><code>array_stations = {}
a = Station("A", 0, 0, 0)
b = Station("B", 50, 50, 0)
c = Station("C", 100, 100, 0)
array_stations[a.ID] = a
array_stations[b.ID] = b
array_stations[c.ID] = c


class GuiView(QtGui.QMainWindow):

    def __init__(self):
        super(GuiView, self).__init__()
        self.initUI()

    def initUI(self):
        # STATION TRACKING - TO TELL WHETHER A STATION IS NEW OR NOT
        self.estacoes = {}

        # Set a timer to refresh the widget
        self.timer2 = QtCore.QTimer()
        self.timer2.timeout.connect(self.RefreshData)  ### THIS ONE DOESN'T ADD THE BUTTONS...
        self.timer2.start(2000)

        self.RefreshData()  ### ... BUT THIS ONE DOES!

        self.layout = QtGui.QVBoxLayout()

    @pyqtSlot()
    def RefreshData(self):
        print "blabla"
        global array_stations

        ########## ADD OR UPDATE BUTTONS #################
        for s in array_stations:
            if s not in self.estacoes:
                # ADD A BUTTON TO THE LIST
                self.estacoes[s] = QtGui.QPushButton(s, self)
                self.estacoes[s].move(array_stations[s].x, array_stations[s].y)
</code></pre>
|
<p>Add into the loop:</p>
<pre><code>self.estacoes[s].show()
</code></pre>
<p>EDIT:</p>
<p>I typed this response when I was in a bit of a hurry. To clarify a bit more: you are creating and adding the new buttons, but you're not telling them to show up. By default, widgets are not shown (including child widgets). If you add new widgets at runtime, you will need to call <code>show()</code> on each new widget to make it appear. </p>
|
python|pyqt|signals|qmainwindow|slots
| 0 |
1,901,399 | 60,087,418 |
Iterate over pandas dataframe and apply condition
|
<p>Consider I have this dataframe, where I want to remove "toy" from the Topic column, and if a row's only topic is "toy", remove that row entirely. How can I do that in pandas?</p>
<pre><code>+---+-----------------------------------+-------------------------+
| | Comment | Topic |
+---+-----------------------------------+-------------------------+
| 1 | ----- | toy, bottle, vegetable |
| 2 | ----- | fruit, toy, electronics |
| 3 | ----- | toy |
| 4 | ----- | electronics, fruit |
| 5 | ----- | toy, electronic |
+---+-----------------------------------+-------------------------+
</code></pre>
|
<p>Try using <code>str.replace</code> with <code>str.rstrip</code> and <code>ne</code> inside <code>[...]</code>:</p>
<pre><code>df['topic'] = df['topic'].str.replace('toy', ' ').str.replace(' , ', '').str.rstrip()
print(df[df['topic'].ne('')])
</code></pre>
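<p>If you want to sanity-check the replace chain before applying it column-wise, the same logic can be mirrored on plain Python strings (sample values taken from the question's table; the helper name <code>drop_toy</code> is made up):</p>

```python
def drop_toy(topic):
    # Mirror of the pandas chain: blank out the word, collapse the leftover
    # " , " separator, and strip trailing whitespace.
    return topic.replace('toy', ' ').replace(' , ', '').rstrip()

samples = ['toy, bottle, vegetable', 'fruit, toy, electronics', 'toy']
cleaned = [drop_toy(s) for s in samples]
kept = [c for c in cleaned if c != '']  # mirrors df[df['topic'].ne('')]
print(cleaned)  # ['bottle, vegetable', 'fruit, electronics', '']
print(kept)     # ['bottle, vegetable', 'fruit, electronics']
```

<p>The toy-only row becomes an empty string after the cleanup, which is exactly what the <code>ne('')</code> filter drops from the dataframe.</p>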
|
python|pandas
| 1 |