Unnamed: 0 (int64, 0 to 1.91M) | id (int64, 337 to 73.8M) | title (string, length 10 to 150) | question (string, length 21 to 64.2k) | answer (string, length 19 to 59.4k) | tags (string, length 5 to 112) | score (int64, -10 to 17.3k)
---|---|---|---|---|---|---
1,902,200 | 66,122,000 |
Compare two tsv files and save to another file, but only one line is saved
|
<p>I need to compare two tsv files and save the needed data to another file.</p>
<p>Tsv_file stands for the IMDb data file, which contains id-rating-votes separated by tabs, and tsv_file2 stands for my file, which contains id-year separated by tabs.</p>
<p>All id's from tsv_file2 are in tsv_file. I need to save to the zapis file the data in the format id-year-rating, where the id from tsv_file2 matches the id from tsv_file.</p>
<p>The problem is that the code below works, but it saves only one line into the zapis file.</p>
<p>What can I improve to save all records?</p>
<pre><code>for linia in read_tsv2:
    for linia2 in read_tsv:
        if linia[0] == linia2[0]:
            zapis.write(linia[0]+'\t')
            zapis.write(linia[1]+'\t')
            zapis.write(linia2[1])
</code></pre>
|
<p>It would have made life much simpler if you had provided actual examples of these tsv files instead of describing them. <a href="https://stackoverflow.com/help/how-to-ask">Here</a> is a guide to ask a good question.</p>
<p>As I understand you have</p>
<pre><code> 123456 4 15916
123888 1 151687
115945 5 35051
</code></pre>
<p>vs</p>
<pre><code> 123456 1993
123888 2013
</code></pre>
<p>and you want</p>
<pre><code> 123456 1993 4
123888 2013 1
</code></pre>
<p>There are multiple ways to cut this. I would use the SQLite support of whatever language you choose, load the data into two temp tables, and then make a query to get the data you want. It should be fairly trivial to join the two tables.</p>
<p><strong>Edit:</strong> If the SQLite path is taken, there are plenty of good tutorials around. <a href="https://www.sqlitetutorial.net/sqlite-python/" rel="nofollow noreferrer">Working with SQLite in Python</a>, <a href="https://stackoverflow.com/questions/2887878/importing-a-csv-file-into-a-sqlite3-database-table-using-python">How to import a CSV in SQLite</a> (with Python).</p>
<p>I would do the following, in broad terms:</p>
<ul>
<li>create a :memory: database in SQLite</li>
<li>create tables</li>
</ul>
<p>(these steps can be omitted, and a database +table can be created by an SQLite editor in advance)</p>
<ul>
<li>connect to DB</li>
<li>import each TSV file into a table</li>
<li>execute a query on the database</li>
<li>output the result</li>
</ul>
<p>Python should have libraries for most of these tasks.</p>
<p>The exact same result can be done with other tools.</p>
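<p>For illustration, here is a minimal sketch of the SQLite route in Python. The file names (<code>tsv_file.tsv</code>, <code>tsv_file2.tsv</code>, <code>zapis.tsv</code>) and column names are assumptions based on the question, not the asker's actual data:</p>
<pre><code>import csv
import sqlite3

# Assumed file names; adjust to the real paths.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ratings (id TEXT, rating TEXT, votes TEXT)")
con.execute("CREATE TABLE years (id TEXT, year TEXT)")

# Load each TSV into its table (skip header rows first if your files have them).
with open("tsv_file.tsv", newline="") as f:
    con.executemany("INSERT INTO ratings VALUES (?, ?, ?)", csv.reader(f, delimiter="\t"))
with open("tsv_file2.tsv", newline="") as f:
    con.executemany("INSERT INTO years VALUES (?, ?)", csv.reader(f, delimiter="\t"))

# Join the two tables and write id-year-rating to the output file.
with open("zapis.tsv", "w", newline="") as out:
    writer = csv.writer(out, delimiter="\t")
    for row in con.execute("SELECT y.id, y.year, r.rating FROM years y JOIN ratings r ON r.id = y.id"):
        writer.writerow(row)
</code></pre>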
|
python|csv
| 0 |
1,902,201 | 68,873,666 |
Python fillna using mean of row values for selected columns
|
<p>In the table below I need to fillna only for the Week columns. NaN should be filled with the mean value of all weeks in that row.</p>
<pre><code>+----+---------+------+-------+-------+-------+-------+
| ID | Feature | Paid | Week1 | Week2 | Week3 | Week4 |
+----+---------+------+-------+-------+-------+-------+
| 1 | 1 | 1 | 12 | NaN | NaN | NaN |
+----+---------+------+-------+-------+-------+-------+
| 2 | 0 | 1 | 34 | 23 | NaN | NaN |
+----+---------+------+-------+-------+-------+-------+
| 3 | 1 | 0 | 24 | 13 | 14 | NaN |
+----+---------+------+-------+-------+-------+-------+
</code></pre>
<p><strong>Code</strong></p>
<pre><code>df.fillna(df[['Week1','Week2','Week3','Week4']].mean(axis=1),axis=1,inplace=True)
</code></pre>
<p>This gives an error saying <code>NotImplementedError: Currently only can fill with dict/Series column by column</code></p>
|
<p>Create a dictionary that maps the <code>Week</code> names to the <code>mean</code> values of weeks along <code>axis=1</code>, then fill the <code>NaN</code> values using this dictionary</p>
<pre><code>c = df.filter(like='Week').columns
df.fillna(dict.fromkeys(c, df[c].mean(1)))
</code></pre>
<hr />
<pre><code> ID Feature Paid Week1 Week2 Week3 Week4
0 1 1 1 12 12.0 12.0 12.0
1 2 0 1 34 23.0 28.5 28.5
2 3 1 0 24 13.0 14.0 17.0
</code></pre>
|
python|pandas|fillna
| 2 |
1,902,202 | 68,949,477 |
Installing some wheels with pip (and tar's?)
|
<p>I have read <a href="https://stackoverflow.com/questions/27885397/how-do-i-install-a-python-package-with-a-whl-file">this answer</a> already regarding installing a Python package with a wheel file.</p>
<p>However, I have been asked to install a set of packages. These are all in a folder and include several .whl files, one .tar file and one .tar.gz file.</p>
<p>I am trying to figure out how I should install these.</p>
<p>Something along the lines of</p>
<pre><code>pip install --user --no-index --find-links <the folder name> <a package????>
</code></pre>
<p>It is a set of packages, so I am not sure how to put that in there. Or should I just stop at <code><the folder name></code>?</p>
|
<p>The following two commands will help you out:</p>
<pre><code>rem for the whl files
for %x in (path\*.whl) do python -m pip install --user --no-index --no-deps %x

rem for the tar.gz file
python -m pip install --user --no-index --no-deps path\to\file
</code></pre>
<br>
<p>If you want to use a single command, you can even modify the for loop to install both whl and tar.gz files:</p>
<pre><code>for %x in (path\*.whl, path\*.tar.gz) do python -m pip install --user --no-index --no-deps %x
</code></pre>
<p>Reference 1: <a href="https://stackoverflow.com/questions/65844775/how-to-install-multiple-whl-files-in-the-right-order">How to install multiple .whl files in the right order</a>
<br>
Reference 2: <a href="https://stackoverflow.com/questions/43314517/how-to-install-multiple-whl-files-in-cmd">How to install multiple whl files in cmd</a></p>
|
python|pip|python-wheel
| 1 |
1,902,203 | 72,548,982 |
SyntaxError in pyspark using Window and row_number
|
<p>Can someone help with what could be the problem in the following code snippets?
I get a SyntaxError and cannot figure it out.
I want to remove duplicates, keeping the first row in descending order of a Spark df.
I use these:</p>
<pre><code>window = Window.partitionBy(["RECEIPT_DT","nummer","CODE"]).orderBy("Year")
df.withColumn('row', f.row_number().over(window.desc()).filter(f.col('row') == 1).drop('row')
</code></pre>
<p>or</p>
<pre><code>(df
.withColumn('row', f.row_number().over( Window.partitionBy(["RECEIPT_DT","nummer","CODE"]).orderBy("Year").desc() ) \
.filter(f.col('row') == 1)
.drop("row"))
</code></pre>
<p>I always get <strong>SyntaxError: unexpected EOF while parsing</strong> at the end of the code.</p>
<p>Can you spot the problem, please? I do not understand why.
I'm working in Databricks and imported:</p>
<pre><code>from pyspark.sql import Window
from pyspark.sql import functions as f
</code></pre>
<p>I have tried removing the drop or filter rows from the code, but it is still not working.</p>
|
<p>You are missing a closing <code>)</code> after the over clause.
Also note you can't use Window.desc(), you have to put the desc in the orderBy clause.</p>
<p>The following snippet should work:</p>
<pre><code>window = Window.partitionBy(["RECEIPT_DT","nummer","CODE"]).orderBy(f.col("Year").desc())
df.withColumn('row', f.row_number().over(window)).filter(f.col('row') == 1).drop('row')
</code></pre>
|
python|pyspark|syntax|syntax-error|filtering
| 1 |
1,902,204 | 72,667,592 |
Program cannot Sort
|
<p>New to Python, trying to learn OOP.
In the code below my objective is to sort the employee list based on rating, but I'm stuck at "object not iterable".</p>
<pre><code>class Employee:
    def getfn(self):
        self.empid=(input(" enter emp id:"))
        self.name=input("enter name:")
        self.gender=input("enter emp gender: ")
        self.salary=input(" enter emp salary:")
        self.rating=int(input("enter rating:"))

empz=[]

class menu:
    n=0
    def entry(self):
        n=int(input(" enter no of employees:"))
        i=0
        while i<n:
            temp_emp=Employee()
            temp_emp.getfn()
            empz.append(temp_emp)
            i+=1
    def print_rec(self):
        #
        print("-id--name--gender--salary--rating--")
        for i in empz:
            print(i.empid,i.name,i.gender,i.salary,i.rating)
        #print(sorted(empz,key=lambda x:x[4]))
        def sort_rating(empz):
            return empz.rating
        sorted_emp=sorted(empz, key= sort_rating)
        print(empz)
</code></pre>
|
<p>The design of your Employee class isn't great. Values used as its attributes should be validated before class construction.</p>
<p>You can control the number of employees to be input more easily than asking for a count.</p>
<p>Hopefully this will give a better idea of how this might be done.</p>
<pre><code>class Employee:
    def __init__(self, empid, name, gender, salary, rating):
        self.empid = empid
        self.name = name
        self.gender = gender
        self.salary = salary
        self.rating = rating

    def __str__(self):
        return f'ID={self.empid}, Name={self.name}, Gender={self.gender}, Salary={self.salary}, Rating={self.rating}'

# common input functions
def getInput(prompt, t=str):
    while True:
        v = input(f'{prompt}: ')
        try:
            return t(v)
        except ValueError:
            print('Invalid input')

def getInt(prompt):
    return getInput(prompt, int)

def getFloat(prompt):
    return getInput(prompt, float)
# end of common input functions

employeeList = []

while eid := getInput('ID (Enter to finish)'):
    name = getInput('Name')
    gender = getInput('Gender')
    salary = getFloat('Salary')
    rating = getInt('Rating')
    employeeList.append(Employee(eid, name, gender, salary, rating))

for employee in sorted(employeeList, key=lambda x: x.rating):
    print(employee)
</code></pre>
<p>The common input functions should be in a separate py file so you can import them when needed rather than re-writing them every time. They're trivial but you'll find them helpful when trying to ensure that input is appropriate</p>
|
python|oop
| 1 |
1,902,205 | 68,436,356 |
Fill area between two points in python plotly
|
<p>MRE:</p>
<pre><code>import numpy as np
from scipy.stats import beta
import plotly.graph_objects as go
x = np.arange(0.01, 1, 0.01)
y = beta.pdf(x,2,5)
fig = go.Figure()
fig.add_trace(
go.Scatter(
x = x,
y = y,
)
)
fig.show()
</code></pre>
<p>outputting
<a href="https://i.stack.imgur.com/QiANm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QiANm.png" alt="enter image description here" /></a></p>
<p>I want to fill between certain x points, for example only between x in [0.2, 0.4]. How can I do this using Plotly?</p>
<p>There are multiple questions and answers that fill the whole area (<a href="https://plotly.com/python/filled-area-plots/" rel="nofollow noreferrer">https://plotly.com/python/filled-area-plots/</a>), however I cannot find an answer that solves my problem.</p>
|
<p>You can add the following code:</p>
<pre><code>xx = np.arange(0.2, 0.4, 0.01)
yy = beta.pdf(xx,2,5)
fig.add_trace(
go.Scatter(
x = xx,
y = yy,
fill = 'tozeroy'
)
)
</code></pre>
<p>It gives:</p>
<p><a href="https://i.stack.imgur.com/0SBtg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0SBtg.png" alt="plot" /></a></p>
|
python|plotly
| 1 |
1,902,206 | 63,138,063 |
Why does Windows have issues with the encoding, but Linux doesn't?
|
<p>For my toy project <a href="https://github.com/MartinThoma/mpu" rel="nofollow noreferrer"><code>mpu</code></a> I have two CI solutions running:</p>
<ul>
<li><a href="https://travis-ci.org/github/MartinThoma/mpu" rel="nofollow noreferrer">Travis</a> / Linux: Works</li>
<li><a href="https://dev.azure.com/martinthoma/mpu/_build/results?buildId=11&view=logs&j=19c563d8-3dcd-57d1-cd5e-9dd946b0a29b&t=c276ab56-9c7b-560d-e018-453666437518" rel="nofollow noreferrer">Azure</a> / Windows: Fails</li>
</ul>
<p>It fails with this message:</p>
<pre><code>_______________________________ test_read_json ________________________________
def test_read_json():
path = "files/example.json"
source = pkg_resources.resource_filename(__name__, path)
data_real = read(source)
data_exp = {
"a list": [1, 42, 3.141, 1337, "help", "�"],
"a string": "bla",
"another dict": {"foo": "bar", "key": "value", "the answer": 42},
}
> assert data_real == data_exp
E AssertionError: assert {'a list': [1... answer': 42}} == {'a list': [1... answer': 42}}
E Omitting 2 identical items, use -vv to show
E Differing items:
E {'a list': [1, 42, 3.141, 1337, 'help', '€']} != {'a list': [1, 42, 3.141, 1337, 'help', '�']}
E Use -v to get the full diff
tests\test_io.py:175: AssertionError
</code></pre>
<p>Why can it read the € sign from the JSON, but within the test it fails? (Python 3.6)</p>
|
<p>I assume that the <code>read</code> function which is used in the test wraps <code>open</code> in some way or another.</p>
<p>TL;DR Try adding <strong><code>encoding='utf8'</code></strong> to the call to <code>open</code>.</p>
<p>From my experience, Windows does not always play nice with non-ascii characters when reading files unless the encoding is set explicitly.</p>
<p>Also, it does not help that <a href="https://docs.python.org/3/library/functions.html#open" rel="nofollow noreferrer">the default value for <code>encoding</code> is platform-dependent</a>:</p>
<blockquote>
<p>encoding is the name of the encoding used to decode or encode the
file. This should only be used in text mode. The default encoding is
platform dependent (whatever <a href="https://docs.python.org/3/library/locale.html#locale.getpreferredencoding" rel="nofollow noreferrer">locale.getpreferredencoding()</a> returns),
but any text encoding supported by Python can be used. See the codecs
module for the list of supported encodings.</p>
</blockquote>
<p>Some tests (run on Win 10, Python 3.7, <code>locale.getpreferredencoding()</code> returns <code>cp1252</code>):</p>
<p>test.csv</p>
<pre><code>€
</code></pre>
<br>
<pre><code>with open('test.csv') as f:
    print(f.read())
# â‚¬

with open('test.csv', encoding='utf8') as f:
    print(f.read())
# '€'
</code></pre>
|
windows|encoding|python-3.6
| 2 |
1,902,207 | 63,058,863 |
Changing desktop screen orientation
|
<p>Is there a way to change the screen orientation using Python? On Ubuntu I can invoke <code>xrandr</code> through the <code>os</code> library, but what about Windows? I want to rotate the screen as a whole (as if the user pressed "ctrl+alt+down/right/left") and not just the app window. Although I'm asking for Python, I can accept answers in other languages too (C++, JS, C#) as long as there is a way to do it on both Linux and Windows (even if it is through a terminal call).</p>
|
<p>Rotating the screen using C# - <a href="https://stackoverflow.com/questions/39288135/rotating-the-display-programmatically">Rotating the display programmatically?</a></p>
<p>If you wanted to use Python, you could use the ctypes library to execute C functions - <a href="https://www.journaldev.com/31907/calling-c-functions-from-python" rel="nofollow noreferrer">https://www.journaldev.com/31907/calling-c-functions-from-python</a>. Just follow the logic from the C# example and re-create it in Python.</p>
<p>The key here is that there are c functions that you need to invoke to actually rotate the screen, primarily ChangeDisplaySettingsExa - <a href="https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-changedisplaysettingsexa" rel="nofollow noreferrer">https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-changedisplaysettingsexa</a></p>
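<p>As a rough illustration, here is a minimal ctypes sketch for Windows that rotates the primary display by 90 degrees. It assumes the common ctypes pattern of a truncated <code>DEVMODE</code> (with <code>dmSize</code> set to the declared size); treat it as a starting point rather than production code:</p>
<pre><code>import ctypes
from ctypes import wintypes

ENUM_CURRENT_SETTINGS = -1       # from winuser.h
DM_DISPLAYORIENTATION = 0x80     # dmFields flag
DMDO_90 = 1                      # 90-degree rotation

class DEVMODE(ctypes.Structure):
    # Truncated DEVMODE: the display branch of the union is flattened,
    # trailing ICM/panning fields are omitted (dmSize tells the API so).
    _fields_ = [
        ("dmDeviceName", ctypes.c_wchar * 32),
        ("dmSpecVersion", wintypes.WORD),
        ("dmDriverVersion", wintypes.WORD),
        ("dmSize", wintypes.WORD),
        ("dmDriverExtra", wintypes.WORD),
        ("dmFields", wintypes.DWORD),
        ("dmPositionX", wintypes.LONG),
        ("dmPositionY", wintypes.LONG),
        ("dmDisplayOrientation", wintypes.DWORD),
        ("dmDisplayFixedOutput", wintypes.DWORD),
        ("dmColor", ctypes.c_short),
        ("dmDuplex", ctypes.c_short),
        ("dmYResolution", ctypes.c_short),
        ("dmTTOption", ctypes.c_short),
        ("dmCollate", ctypes.c_short),
        ("dmFormName", ctypes.c_wchar * 32),
        ("dmLogPixels", wintypes.WORD),
        ("dmBitsPerPel", wintypes.DWORD),
        ("dmPelsWidth", wintypes.DWORD),
        ("dmPelsHeight", wintypes.DWORD),
        ("dmDisplayFlags", wintypes.DWORD),
        ("dmDisplayFrequency", wintypes.DWORD),
    ]

user32 = ctypes.windll.user32
dm = DEVMODE()
dm.dmSize = ctypes.sizeof(DEVMODE)
user32.EnumDisplaySettingsW(None, ENUM_CURRENT_SETTINGS, ctypes.byref(dm))
# Rotating by 90 degrees swaps the resolution's width and height.
dm.dmPelsWidth, dm.dmPelsHeight = dm.dmPelsHeight, dm.dmPelsWidth
dm.dmDisplayOrientation = DMDO_90
dm.dmFields |= DM_DISPLAYORIENTATION
user32.ChangeDisplaySettingsExW(None, ctypes.byref(dm), None, 0, None)
</code></pre>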
|
python|linux|windows|fullscreen
| 1 |
1,902,208 | 62,308,649 |
How to remove a Coefficient of (1) from SymPy symbolic expression?
|
<p>I want to remove any coefficient that is equal to 1 in a SymPy symbolic expression. For example,
I want <code>1.0*x**2</code> to become <code>x**2</code>. Is there any way to do it?
Also, if possible, I'd like float coefficients that are whole numbers rounded, for example <code>2.0*x**2</code> to become <code>2*x**2</code>.</p>
|
<p>You can use <code>nsimplify</code>:</p>
<pre><code>In [4]: nsimplify(2.0*x**2)
Out[4]:
   2
2⋅x
</code></pre>
<p>in a Python shell</p>
<pre class="lang-py prettyprint-override"><code>>>> import sympy
>>> sympy.nsimplify("1.0*x**2")
x**2
>>> sympy.nsimplify("2.0*x**2")
2*x**2
</code></pre>
|
python|python-3.x|sympy|symbolic-math
| 5 |
1,902,209 | 62,090,888 |
Trying to convert a .npy file (float64) to uint8 or uint16
|
<p>I'm trying to display a 3D Numpy Array using VTK (python 3) but it needs the type to be uint8 or uint16.</p>
<p>I don't know how to do this and any help would be greatly appreciated. </p>
<p>In case, there's nothing I can do, I just want to display my .npy file using VTK. Any suggestions will be highly appreciated. </p>
|
<p>You can use the NumPy array method <code>astype()</code> to change the data type. Here's an example:</p>
<pre><code>>>> arr = np.array([10., 20., 30., 40., 50.])
>>> print(arr)
[10. 20. 30. 40. 50.]
>>> print(arr.dtype)
float64
>>> arr = arr.astype('uint16')
>>> print(arr)
[10 20 30 40 50]
>>> print(arr.dtype)
uint16
</code></pre>
<p>To address your further questions, you want to normalize your data when converting float64 to uint8 or uint16. Basically you want to map the [min, max] data range of your array to [0, 255] for uint8 or [0, 65535] for uint16. You'll still lose information, but normalization minimizes the data loss.</p>
<p>Here's how you'd map float64 to uint8:</p>
<pre><code>dmin = np.min(arr)
dmax = np.max(arr)
drange = dmax-dmin
dscale = 255.0/drange
new_arr = (arr-dmin)*dscale
new_arr_uint8 = new_arr.astype('uint8')
</code></pre>
<p>And, yes, the astype function will work on a loaded array.</p>
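<p>The analogous mapping for uint16 just scales to 65535 instead; continuing the snippet above:</p>
<pre><code>dscale16 = 65535.0/drange
new_arr_uint16 = ((arr-dmin)*dscale16).astype('uint16')
</code></pre>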
|
python|python-3.x|numpy|vtk
| 0 |
1,902,210 | 62,125,688 |
How do I fix my variables on my code as it does not appear
|
<p>So when I ran the code below, I noticed that my variables <code>myName</code> and <code>myAge</code> didn't function at all. Am I getting something wrong here?</p>
<h1>This program says hello and asks for my name.</h1>
<pre class="lang-py prettyprint-override"><code>print ('Hello world!')
print ('What is your name?') # ask for their name
myName = input('Michael')
print ('It is good to meet you, ' + myName)
print('The length of your name is:')
print (len(myName))
print ('What is your age?') # ask for their age
myAge = input('16')
print('You will be " + str(int(myAge) + 1) + ' in a year.')
</code></pre>
|
<p>In the line <code>print('You will be " + str(int(myAge) + 1) + ' in a year.')</code> you are using two different types of quotes, <code>'</code> and <code>"</code>; a string must be opened and closed with the same type.</p>
<p>This is a string: <code>'Michael'</code>.
Or like this: <code>"Michael"</code>.</p>
<p>Both of the above are valid strings in Python.</p>
<p><code>"Michael'</code> is an invalid string in Python: it mixes two different types of quotation symbols.</p>
<p>Consider running my version of your code and see if it all makes sense. </p>
<pre><code>print ('Hello world!')
myName = input('What is your name? ') # ask for their name
print ('It is good to meet you, ' + myName) # Shows the name
print('The length of your name is:')
print (len(myName)) # Shows the length of the name
myAge = input('What is your age? ') # ask for their age
print("You will be " + str(int(myAge) + 1) + " in a year.") # Shows the age + 1
</code></pre>
|
python
| 1 |
1,902,211 | 62,399,570 |
I am getting a warning while downloading a module using pip
|
<p>WARNING: The script pipwin.exe is installed in 'c:\users\dell\appdata\local\programs\python\python37-32\Scripts' which is not on PATH.</p>
|
<p>Simple! Add the Scripts folder to your PATH (the warning is about PATH, not PYTHONPATH):</p>
<pre><code>set PATH=%PATH%;c:\users\dell\appdata\local\programs\python\python37-32\Scripts
</code></pre>
<p>Type this into cmd!</p>
<hr>
<p>Edit:</p>
<p>If you want this to work permanently, make a .bat file, add that line to it, and put it under Windows > Start menu > Programs > Startup.</p>
|
python
| 0 |
1,902,212 | 35,564,063 |
comparing ndarray with values in 1D array to get a mask
|
<p>I have two numpy array, 2D and 1D respectively.
I want to obtain a 2D binary mask where each element of the mask is true if it matches any of the element of 1D array.</p>
<p>Example</p>
<pre><code> 2D array
-----------
1 2 3
4 9 6
7 2 3
1D array
-----------
1,9,3
Expected output
---------------
True False True
False True False
False False True
</code></pre>
<p>Thanks</p>
|
<p>You could use <code>np.in1d</code>. Although <code>np.in1d</code> returns a 1D array, you could simply reshape the result afterwards:</p>
<pre><code>In [174]: arr = np.array([[1,2,3],[4,9,6],[7,2,3]])
In [175]: bag = [1,9,3]
In [177]: np.in1d(arr, bag).reshape(arr.shape)
Out[177]:
array([[ True, False, True],
[False, True, False],
[False, False, True]], dtype=bool)
</code></pre>
<p>Note that <code>in1d</code> checks whether the elements in <code>arr</code> match <em>any</em> of the elements in <code>bag</code>. In contrast, <code>arr == bag</code> tests if the elements of <code>arr</code> equal the broadcasted elements of <code>bag</code> <em>element-wise</em>. You can see the difference by permuting <code>bag</code>:</p>
<pre><code>In [179]: arr == np.array([1,3,9])
Out[179]:
array([[ True, False, False],
[False, False, False],
[False, False, False]], dtype=bool)
In [180]: np.in1d(arr, [1,3,9]).reshape(arr.shape)
Out[180]:
array([[ True, False, True],
[False, True, False],
[False, False, True]], dtype=bool)
</code></pre>
<hr>
<p>When you compare two arrays of unequal shape, NumPy tries to <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow">broadcast</a> the two arrays to a single compatible shape before testing for equality. In this case, <code>[1, 3, 9]</code> gets broadcasted to </p>
<pre><code>array([[1, 3, 9],
[1, 3, 9],
[1, 3, 9]])
</code></pre>
<p>since new axes are added on the left. You can check the effect of broadcasting this way:</p>
<pre><code>In [181]: np.broadcast_arrays(arr, [1,3,9])
Out[185]:
[array([[1, 2, 3],
[4, 9, 6],
[7, 2, 3]]),
array([[1, 3, 9],
[1, 3, 9],
[1, 3, 9]])]
</code></pre>
<p>Once the two arrays are broadcasted up to a common shape, equality is tested
<em>element-wise</em>, which means the values in corresponding locations are tested for
equality. In the top row, for example, the equality tests are <code>1 == 1</code>, <code>2 == 3</code>, <code>3 == 9</code>. Hence,</p>
<pre><code>In [179]: arr == np.array([1,3,9])
Out[179]:
array([[ True, False, False],
[False, False, False],
[False, False, False]], dtype=bool)
</code></pre>
|
python|numpy
| 2 |
1,902,213 | 58,783,196 |
Django 2 combine ListView and DetailView in my template page
|
<p>I have two models, "Product" and "Brand".
Product has a <code>brand</code> field, a ManyToMany linking the products to the relevant brand.</p>
<pre><code>## models.py
class Product(models.Model):
    title = models.CharField(max_length=120)
    slug = models.SlugField(blank=True, unique=True)
    description = models.TextField()
    brand = models.ManyToManyField(Brand)

class Brand(models.Model):
    title = models.CharField(max_length=250, unique=True)
    slug = models.SlugField(max_length=250, unique=True)
    description = models.TextField(blank=True)

## url.py
re_path(r'^brands/(?P<slug>[\w-]+)/$', BrandDetail.as_view(), name = 'branddetail'),

## views.py
class BrandDetail(DetailView):
    queryset = Brand.objects.all()
    template_name = "brands/brand.html"

## brands/brand.html
{{ object.title }} <br/>
{{ object.description }} <br/>
</code></pre>
<p>Now when I render brand.html it shows the brand title and description fine.</p>
<p>My question is:
if I want to render on the same page <strong>the list of products</strong> linked to a specific brand (the brand slug is already passed in the URL), how can I do that?</p>
<p>The class is a DetailView and it only has the Brand details in the queryset, as shown. Any solution is welcome.</p>
|
<p>You do not need a <code>ListView</code> for that, you can iterate over the <code>product_set</code> of the brand, like:</p>
<pre><code>{{ object.title }} <br/>
{{ object.description }} <br/>
products:
{% for product in <b>object.product_set.all</b> %}
{{ product.title }} <br/>
{% endfor %}</code></pre>
|
python|django|listview|detailview
| 2 |
1,902,214 | 58,882,929 |
Fine-Tune Universal Sentence Encoder Large with TF2
|
<p>Below is my code for fine-tuning the Universal Sentence Encoder Multilingual Large 2. I am not able to resolve the resulting error. I tried adding a tf.keras.layers.Input layer which results in the same error. Any suggestion on how to successfully build a fine-tuning sequential model for USEM2 will be much appreciated.</p>
<pre><code>import tensorflow as tf
import tensorflow_text
import tensorflow_hub as hub
module_url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/2"
embedding_layer = hub.KerasLayer(module_url, trainable=True, input_shape=[None,], dtype=tf.string)
hidden_layer = tf.keras.layers.Dense(32, activation='relu')
output_layer = tf.keras.layers.Dense(5, activation='softmax')
model = tf.keras.models.Sequential()
model.add(embedding_layer)
model.add(hidden_layer)
model.add(output_layer)
model.summary()
</code></pre>
<pre><code>WARNING:tensorflow:Entity <tensorflow.python.saved_model.function_deserialization.RestoredFunction object at 0x7fdf34216390> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Shape must be rank 1 but is rank 2 for 'text_preprocessor_1/SentenceTokenizer/SentencepieceTokenizeOp' (op: 'SentencepieceTokenizeOp') with input shapes: [], [?,?], [], [], [], [], [].
WARNING:tensorflow:Entity <tensorflow.python.saved_model.function_deserialization.RestoredFunction object at 0x7fdf34216390> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Shape must be rank 1 but is rank 2 for 'text_preprocessor_1/SentenceTokenizer/SentencepieceTokenizeOp' (op: 'SentencepieceTokenizeOp') with input shapes: [], [?,?], [], [], [], [], [].
WARNING: Entity <tensorflow.python.saved_model.function_deserialization.RestoredFunction object at 0x7fdf34216390> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Shape must be rank 1 but is rank 2 for 'text_preprocessor_1/SentenceTokenizer/SentencepieceTokenizeOp' (op: 'SentencepieceTokenizeOp') with input shapes: [], [?,?], [], [], [], [], [].
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-61-7ea0d071abf8> in <module>
1 model = tf.keras.models.Sequential()
2
----> 3 model.add(embedding_layer)
4 model.add(hidden_layer)
5 model.add(output)
~/pyenv36/lib/python3.6/site-packages/tensorflow_core/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
455 self._self_setattr_tracking = False # pylint: disable=protected-access
456 try:
--> 457 result = method(self, *args, **kwargs)
458 finally:
459 self._self_setattr_tracking = previous_value # pylint: disable=protected-access
~/pyenv36/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/sequential.py in add(self, layer)
176 # and create the node connecting the current layer
177 # to the input layer we just created.
--> 178 layer(x)
179 set_inputs = True
180
~/pyenv36/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
840 not base_layer_utils.is_in_eager_or_tf_function()):
841 with auto_control_deps.AutomaticControlDependencies() as acd:
--> 842 outputs = call_fn(cast_inputs, *args, **kwargs)
843 # Wrap Tensors in `outputs` in `tf.identity` to avoid
844 # circular dependencies.
~/pyenv36/lib/python3.6/site-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)
235 except Exception as e: # pylint:disable=broad-except
236 if hasattr(e, 'ag_error_metadata'):
--> 237 raise e.ag_error_metadata.to_exception(e)
238 else:
239 raise
ValueError: in converted code:
relative to /home/neubig/pyenv36/lib/python3.6/site-packages:
tensorflow_hub/keras_layer.py:209 call *
result = f()
tensorflow_core/python/saved_model/load.py:436 _call_attribute
return instance.__call__(*args, **kwargs)
tensorflow_core/python/eager/def_function.py:457 __call__
result = self._call(*args, **kwds)
tensorflow_core/python/eager/def_function.py:494 _call
results = self._stateful_fn(*args, **kwds)
tensorflow_core/python/eager/function.py:1823 __call__
return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
tensorflow_core/python/eager/function.py:1141 _filtered_call
self.captured_inputs)
tensorflow_core/python/eager/function.py:1230 _call_flat
flat_outputs = forward_function.call(ctx, args)
tensorflow_core/python/eager/function.py:540 call
executor_type=executor_type)
tensorflow_core/python/ops/functional_ops.py:859 partitioned_call
executor_type=executor_type)
tensorflow_core/python/ops/gen_functional_ops.py:672 stateful_partitioned_call
executor_type=executor_type, name=name)
tensorflow_core/python/framework/op_def_library.py:793 _apply_op_helper
op_def=op_def)
tensorflow_core/python/framework/func_graph.py:548 create_op
compute_device)
tensorflow_core/python/framework/ops.py:3429 _create_op_internal
op_def=op_def)
tensorflow_core/python/framework/ops.py:1773 __init__
control_input_ops)
tensorflow_core/python/framework/ops.py:1613 _create_c_op
raise ValueError(str(e))
ValueError: Shape must be rank 1 but is rank 2 for 'text_preprocessor_1/SentenceTokenizer/SentencepieceTokenizeOp' (op: 'SentencepieceTokenizeOp') with input shapes: [], [?,?], [], [], [], [], [].
</code></pre>
|
<p>As far as I know, <code>Universal Sentence Encoder Multilingual</code> in tf.hub does not support <code>trainable=True</code> so far.</p>
<p>However, these code snippets can make the model do inference:</p>
<p><strong>Using V2</strong></p>
<pre><code>module_url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/2"
embedding_layer = hub.KerasLayer(module_url)
hidden_layer = tf.keras.layers.Dense(32, activation='relu')
output_layer = tf.keras.layers.Dense(5, activation='softmax')
inputs = tf.keras.layers.Input(shape=(1,), dtype=tf.string)
x = embedding_layer(tf.squeeze(tf.cast(inputs, tf.string)))["outputs"]
x = hidden_layer(x)
outputs = output_layer(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
</code></pre>
<p><strong>Using V3</strong></p>
<pre><code>module_url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/3"
embedding_layer = hub.KerasLayer(module_url)
hidden_layer = tf.keras.layers.Dense(32, activation='relu')
output_layer = tf.keras.layers.Dense(5, activation='softmax')
inputs = tf.keras.layers.Input(shape=(1,), dtype=tf.string)
x = embedding_layer(tf.squeeze(tf.cast(inputs, tf.string)))
x = hidden_layer(x)
outputs = output_layer(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
</code></pre>
<p><strong>inference</strong></p>
<pre><code>model.predict([["hello tf2"]])
</code></pre>
|
tensorflow2.0
| 2 |
1,902,215 | 58,758,439 |
How can I get HEX or RGB color code of the window background color?
|
<p>I'd like to find the window background color in HEX format. Looking for a solution that works on all platforms... Windows/Linux/Mac... </p>
<p>The following code <code>print (self.cget('bg'))</code> just prints <code>SystemButtonFace</code> but I'd like to get the actual HEX format. The reason is that I need to use this color as a base to create a new slightly darker color shade.</p>
|
<p>The <code>winfo_rgb</code> method on all widgets will accept a color name and return the r, g, and b components as integers in the range of 0-65535 (16 bits). You can then convert those to hex using standard python string formatting.</p>
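<p>For example, a minimal sketch (darkening by 10% is an arbitrary choice):</p>
<pre><code>import tkinter as tk

root = tk.Tk()
r, g, b = root.winfo_rgb(root.cget('bg'))  # each component is 0-65535
factor = 0.9  # darken by 10%
r, g, b = (int(c / 256 * factor) for c in (r, g, b))
hex_color = f'#{r:02x}{g:02x}{b:02x}'
print(hex_color)
</code></pre>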
|
python|tkinter
| 2 |
1,902,216 | 73,216,697 |
Writing list into csv
|
<pre><code>rows=[]
for row in Final_20:
    Query="select * from table where value='"+value+"'"
    c.execute(Query)
    rows.append(c.fetchall())

with open(CSVFilename, 'w+',newline='') as file:
    writer = csv.writer(file, delimiter=',')
    for row in rows:
        writer.writerow(row)
</code></pre>
<p>The output is inserted into one column, e.g.:</p>
<pre><code>    column A    column B
1   output 1    null
2   output 2    null
</code></pre>
|
<p>When you call <code>rows.append(c.fetchall())</code> you append the whole 2D array returned by the query as a single element of <code>rows</code>.</p>
<p>Your loop then finds that single element and writes it as one row.</p>
<p>Use <code>rows = c.fetchall()</code> instead.</p>
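<p>Since the question accumulates results over several queries in a loop, a sketch of the fixed loop might look like this; the <code>extend</code> call and the parameterized query (assuming a DB-API driver that uses <code>?</code> placeholders, like sqlite3) are my additions:</p>
<pre><code>rows = []
for row in Final_20:
    # parameterized query instead of string concatenation
    c.execute("select * from table where value=?", (value,))
    rows.extend(c.fetchall())  # extend keeps rows a flat list of result tuples

with open(CSVFilename, 'w+', newline='') as file:
    writer = csv.writer(file, delimiter=',')
    writer.writerows(rows)
</code></pre>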
|
python|export-to-csv|csvwriter
| 0 |
1,902,217 | 59,661,212 |
Copy previous line when the condition is met [Python]
|
<p>I am trying to print the previous line when a condition is met.
Example: the <code>abc.txt</code> file contains the lines below:</p>
<pre><code>User 1 - I like eating Apple
User 2 - I like eating Apple
User 3 - I like eating Apple
User 4 - I like eating Grapes
User 5 - I like eating Apple
User 6 - I like eating Orange
</code></pre>
<p>If I do not find the text "Apple" in a line, then I need to print the previous lines, "User 3 - I like eating Apple" and "User 5 - I like eating Apple".</p>
<p>What I have tried:</p>
<pre><code>with open('abc.txt', 'r') as f:
    for line in f:
        prev_line = line[:-1]
        try:
            if "Apple" not in line:
                print(prev_line)
                continue
        except StopIteration:
            pass
</code></pre>
<p><strong>Actual output:</strong></p>
<pre><code>User 4 - I like eating Grapes
User 6 - I like eating Orange
</code></pre>
<p><strong>Expected Output:</strong></p>
<pre><code>User 3 - I like eating Apple
User 5 - I like eating Apple
</code></pre>
|
<p><code>line[:-1]</code> gives you the content of the <em>current</em> line, up to the second-to-last char. It doesn't give you the previous line.</p>
<p>You can read all lines using:</p>
<pre class="lang-py prettyprint-override"><code>with open('C:\\Users\\chaitr2x\\Desktop\\abc.txt', 'r') as f:
    lines = f.readlines()
</code></pre>
<p>Then you can do something like this:</p>
<pre class="lang-py prettyprint-override"><code>for idx, line in enumerate(lines):
    if "Apple" not in line:
        print(lines[idx - 1])
</code></pre>
|
python
| 2 |
1,902,218 | 49,031,935 |
Dijkstra's algorithm with Time Tables and varying missing edges
|
<p>I know Dijkstra's algorithm is a popular solution for the "shortest path" problem, however it seems to be backfiring when implementing time tables.</p>
<p>Say I have this graph with the following weights (time to get from one point to another):</p>
<pre><code>A-----C A->C: 10
\--B--/ A->B: 5
B->C: 5
</code></pre>
<p>If you throw it into Dijkstra, it'll return route A->C. That's fine until you refer to a timetable that says route A->C only exists within a certain time frame. You could easily remove the A->C edge if the requested time frame falls outside the range when that edge is used. But obviously the data set I'm working with has a bunch of other ways to get from A->C with other, higher, costs. Not to mention what if you want to get from Z->Y which requires going from A->C in the middle. It doesn't seem like an ideal solution.</p>
<p>Is there a better way, other than Dijkstra, to create a shortest path while also keeping a timetable in mind? Or should the algorithm be modified to consider two weights when finding the optimal path?</p>
<p>If it matters, I'm using python.</p>
<p>[edit]</p>
<p>The time table is a basic table that says a train (in my case) leaves from point A at (say) 12:00, leaves from station B at 12:05, then leaves from C at 12:10. When it doesn't stop at B, its column is empty; A will have 8:00 and C will have 8:10.</p>
<pre><code>A       B       C
8:00            8:10
12:00   12:05   12:10
</code></pre>
|
<p>One way could be to create the set of all simple paths between the two given nodes and select the shortest one among those that do not contain a deprecated edge. You can find all paths by adapting Dijkstra's algorithm or another algorithm like DFS or BFS. Note that finding all paths between two nodes is considered a hard problem, but depending on your needs and the type of graphs you're dealing with, you can restrict the search, i.e. keep a limited set of paths (if using Dijkstra, the top N shortest). You can also read this post regarding the matter: <a href="https://stackoverflow.com/questions/9535819/find-all-paths-between-two-graph-nodes">Find all paths between two graph nodes</a>.</p>
<p>Now, to optimize the step that finds out whether an edge is deprecated, I suggest keeping a dictionary with all edge ids (or names) as keys and their deprecation timestamps as values, then filtering the dictionary by comparing the values with <code>now().timestamp()</code> and removing each matched item from the dictionary. Also note that before filtering you should check whether the edge exists in the dictionary at all (to prevent the algorithm from running the filter multiple times for duplicate edges).</p>
<p>The code could look like the following:</p>
<pre><code>def filter_edge(u_id):
    if u_id in deprecation:
        time_stamp = deprecation[u_id]
        if time_stamp > datetime.now().timestamp():
            return True
    return False
</code></pre>
<p>And the path validation is something like the following:</p>
<pre><code>def validate_path(path):
    return not any(filter_edge(edge.id) for edge in path)
</code></pre>
|
python|python-3.x|algorithm|dijkstra
| 0 |
1,902,219 | 24,945,148 |
Extra characters when sending String from Android client to Python Server
|
<p>I am sending a String from an Android device to a python server via TCP socket, but when the message arrives on the server, there are extra characters in the front. For example, if I send the string</p>
<pre><code>asdf
</code></pre>
<p>the result on the server would be</p>
<pre><code>\x00\x13asdf
</code></pre>
<p>Anyone know why these characters are added to the front of the string? Is there a way to avoid this, or should I just cut these out at the server end?</p>
<p>For the reverse, the server sends</p>
<pre><code>fdsa
</code></pre>
<p>The Android client receives</p>
<pre><code>Nullfdsa
</code></pre>
<p>Client Code (Written in Android, Java):</p>
<pre><code>public static class PlaceholderFragment extends Fragment {
    TextView recieve;
    EditText addressText, portText, messageText;
    Button send, test;
    Socket socket = null;

    public PlaceholderFragment() {
    }

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
            Bundle savedInstanceState) {
        View rootView = inflater.inflate(
                R.layout.fragment_customize_gateway, container, false);
        recieve = (TextView) rootView.findViewById(R.id.textView1);
        addressText = (EditText) rootView.findViewById(R.id.editText1);
        portText = (EditText) rootView.findViewById(R.id.editText2);
        messageText = (EditText) rootView.findViewById(R.id.editText3);
        send = (Button) rootView.findViewById(R.id.send);
        send.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                // TODO Auto-generated method stub
                AsyncTCPSend tcpSend = new AsyncTCPSend(addressText.getText().toString(), Integer.parseInt(portText.getText().toString()), messageText.getText().toString());
                tcpSend.execute();
            }
        });
        return rootView;
    }

    public class AsyncTCPSend extends AsyncTask<Void, Void, Void> {
        String address;
        int port;
        String message;
        String response;

        AsyncTCPSend(String addr, int p, String mes) {
            address = addr;
            port = p;
            message = mes;
        }

        @Override
        protected Void doInBackground(Void... params) {
            Socket socket = null;
            try {
                socket = new Socket("127.0.0.1", 4999);
                DataOutputStream writeOut = new DataOutputStream(socket.getOutputStream());
                writeOut.writeUTF(message);
                writeOut.flush();

                ByteArrayOutputStream writeBuffer = new ByteArrayOutputStream(1024);
                byte[] buffer = new byte[1024];
                int bytesRead;
                InputStream writeIn = socket.getInputStream();
                while ((bytesRead = writeIn.read(buffer)) != -1) {
                    writeBuffer.write(buffer, 0, bytesRead);
                    response += writeBuffer.toString("UTF-8");
                }
                response = response.substring(4); // Server sends extra "Null" string in front of data. This cuts it out
            } catch (UnknownHostException e) {
                e.printStackTrace();
                response = "Unknown HostException: " + e.toString();
                System.out.println(response);
            } catch (IOException e) {
                response = "IOException: " + e.toString();
                System.out.println(response);
            } finally {
                if (socket != null) {
                    try {
                        socket.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }
            return null;
        }

        @Override
        protected void onPostExecute(Void result) {
            recieve.setText(response);
            super.onPostExecute(result);
        }
    }
</code></pre>
<p>Server Code (In Python):</p>
<pre><code>class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):

    def handle(self):
        # Connect to database
        try:
            from pymongo import MongoClient
            dbclient = MongoClient()
            db = dbclient.WDI_database
            print("Database Connected")
        except pymongo.errors.ConnectionFailure as e:
            print("Database Failed: {}".format(e))
        col = db.users

        data2 = str(self.request.recv(1024), 'utf-8')
        print("Server: {}".format(data2))
        data = data2.split("||")
        try:
            # [2:] because we get two extra symbols in front of the username from Android
            username = data[0][2:]
        except IndexError:
            username = ""
        try:
            password = data[1]
        except IndexError:
            password = ""
        try:
            camunits = data[2]
        except IndexError:
            camunits = 0
        try:
            homunits = data[3]
        except IndexError:
            homunits = 0

        post = {"user": username,
                "pass": password,
                "cam": camunits,
                "disp": homunits}
        col.insert(post)
        print(col.count())

        response = bytes("Received data for: {}".format(username), 'utf-8')
        self.request.sendall(response)


class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass


if __name__ == "__main__":
    # Port 0 means to select an arbitrary unused port
    HOST, PORT = "", 5000

    tcpserver = ThreadedTCPServer((HOST, PORT-1), ThreadedTCPRequestHandler)
    server_thread = threading.Thread(target=tcpserver.serve_forever)
    server_thread.daemon = True
    server_thread.start()
    print("TCP serving at port", PORT-1)
    while True:
        pass
    tcpserver.shutdown()
</code></pre>
|
<p>I think I can explain the extra characters.</p>
<p>In the Java code, you are not getting an extra "Null" from the socket: the <code>response</code> string variable is never initialized, so by default it is null, and when you write <code>response += writeBuffer.toString("UTF-8");</code> you append something to a null string, which becomes <code>"null" + something</code>.</p>
<p>I would initialize the variable in the declaration or just before the while loop:</p>
<p><code>String response = "";</code></p>
<p>In the Python code I see nothing wrong. The <code>\x00\x13</code> prefix comes from the Java side: <code>writeUTF</code> writes a two-byte big-endian length before the (modified) UTF-8 bytes, which is exactly the prefix the server sees. I'd therefore suggest writing what you send to the log and checking whether the extra characters are in the bytes you send.</p>
<p>Instead of <code>writeOut.writeUTF(message);</code></p>
<p>try <code>socket.getOutputStream().write(message.getBytes());</code> // UTF-8 is the default.</p>
<p>And write it to the log:</p>
<p><code>android.util.Log.w("SENT", String.format("[%s] %d", message, message.length()));</code></p>
<p>See the log to find out what you're really sending.</p>
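<p>Alternatively, if you keep <code>writeUTF</code> on the Android side, here is a small sketch of how the Python server could strip the length prefix explicitly (the frame format comes from the <code>DataOutputStream.writeUTF</code> contract):</p>
<pre><code>import struct

def decode_write_utf(raw):
    """Decode a frame written by Java's DataOutputStream.writeUTF:
    a 2-byte big-endian length followed by (modified) UTF-8 bytes."""
    (length,) = struct.unpack('>H', raw[:2])
    return raw[2:2 + length].decode('utf-8')

print(decode_write_utf(b'\x00\x04asdf'))  # -> asdf
</code></pre>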
|
android|python|string|sockets|tcp
| 1 |
1,902,220 | 70,818,813 |
Plotting data points over a box plot with specific colors & jitter in plotly
|
<p>I have a <code>plotly.graph_objects.Box</code> plot and I am showing all points in the box plot. I need to color the markers by an attribute of the data (shown below). I also want to jitter the points (not shown below).</p>
<p>Using <code>Box</code> I can plot the points and jitter them, but I don't think I can color them.</p>
<pre><code>fig.add_trace(go.Box(
    name='Data',
    y=y,
    jitter=0.5,
    boxpoints='all',
))
</code></pre>
<p>In order to color the plots, I added a separate trace per group using <code>Scatter</code> instead. It looks like this (pseudo code):</p>
<pre><code>for data in group_of_data:
    fig.add_trace(go.Scatter(
        name=f'{data.name}',
        x=['trace 0', 'trace 0', ..., 'trace 0'],
        y=data.values,
        marker=dict(color=data.color),
        mode='markers',
    ))
</code></pre>
<p>Notably the <code>x</code> value is the text label of the <code>Box</code> plot. I found that in the question: <a href="https://stackoverflow.com/questions/45828480/is-it-possible-to-overlay-a-marker-on-top-of-a-plotly-js-box-plot">Is it possible to overlay a marker on top of a plotly.js box plot?</a>.</p>
<p>Now I can plot the scatter overlay in the right color by using <code>go.Scatter</code> + <code>go.Box</code> together, but since my <code>x</code> values are text labels (to line them up with the <code>Box</code> plot), I don't know how to add jitter to the <code>Scatter</code> plot. Normally you can add a random value to the <code>x</code> values to make a scatter plot jitter, but when <code>x</code> is a text label I can't.</p>
<p><a href="https://i.stack.imgur.com/CPWLqm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CPWLqm.png" alt="enter image description here" /></a></p>
|
<p>Since no data was provided, I am using sample data to create the graph. The data is structured as a data frame for the strip plot: the graph-name column combines y0 and y1, and a color column is assigned to each point. The box plots use y0 and y1. First draw the strip plot, then add the box plots on top. Note that the legend in the resulting graph is not arranged in numerical order; I checked, and the trace order can only be normal, reversed, or grouped, so it could not be changed here.</p>
<pre><code>import plotly.express as px
import plotly.graph_objects as go
import numpy as np
import pandas as pd

np.random.seed(1)
y0 = np.random.randn(50) - 1
y1 = np.random.randn(50) + 1

df = pd.DataFrame({'graph_name': ['trace 0']*len(y0) + ['trace 1']*len(y1),
                   'value': np.concatenate([y0, y1], 0),
                   'color': np.random.choice([0,1,2,3,4,5,6,7,8,9], size=100, replace=True)}
                  )

fig = px.strip(df,
               x='graph_name',
               y='value',
               color='color',
               stripmode='overlay')

fig.add_trace(go.Box(y=df.query('graph_name == "trace 0"')['value'], name='trace 0'))
fig.add_trace(go.Box(y=df.query('graph_name == "trace 1"')['value'], name='trace 1'))

fig.update_layout(autosize=False,
                  width=600,
                  height=600,
                  legend={'traceorder': 'normal'})

fig.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/Fl4uu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Fl4uu.png" alt="enter image description here" /></a></p>
|
python|plotly
| 2 |
1,902,221 | 70,999,509 |
my on click function is not working in ursina
|
<p>I don't know how to make everything invisible when the user clicks on Settings. I tried</p>
<pre><code>Play.on_click = Settings.visible = False
Play.on_click = quit.visible = False
Play.on_click = Play.visible = False
</code></pre>
<p>but it doesn't work</p>
|
<p><code>on_click</code> expects a function but you assigned a value. Try this instead:</p>
<pre><code>def play_onclick():
    Settings.visible = False
    quit.visible = False
    Play.visible = False

Play.on_click = play_onclick  # without parentheses!
</code></pre>
<p>Also, I'm assuming that <code>Play</code> is a <code>ursina.Button</code> instance. If that's the case, you should follow the Python naming convention of having its name start with a lowercase letter, i.e. <code>play</code>.</p>
|
python|ursina
| 1 |
1,902,222 | 60,303,869 |
resizing image in tkinter for my browsing file code
|
<h1>How can I resize my image? Here is my code:</h1>
<pre><code>from tkinter import *
from tkinter import ttk
from tkinter import filedialog
from PIL import Image, ImageTk

class Root(Tk):
    def __init__(self):
        super(Root, self).__init__()
        self.title("Python Tkinter Dialog Widget")
        self.minsize(640, 400)

        self.labelFrame = ttk.LabelFrame(self, text = "Open File")
        self.labelFrame.grid(column = 0, row = 1, padx = 20, pady = 20)
        self.button()

    def button(self):
        self.button = ttk.Button(self.labelFrame, text = "Browse A File", command = self.fileDialog)
        self.button.grid(column = 1, row = 1)

    def fileDialog(self):
        self.filename = filedialog.askopenfilename(initialdir = "/", title = "Select A File", filetype =
            (("jpeg files","*.jpg"),("all files","*.*")) )
        self.label = ttk.Label(self.labelFrame, text = "")
        self.label.grid(column = 1, row = 2)
        self.label.configure(text = self.filename)

        img = Image.open(self.filename)
        photo = ImageTk.PhotoImage(img)
        self.label2 = Label(image=photo)
        self.label2.image = photo
        self.label2.grid(column=3, row=4)

root = Root()
root.mainloop()
</code></pre>
|
<p>You can either use <code>Image.resize()</code> which does not keep the image aspect ratio, or <code>Image.thumbnail()</code> which keeps the image aspect ratio:</p>
<pre><code>img = Image.open(self.filename)
imgsize = (600, 400) # change to whatever size you want
#img = img.resize(imgsize)
img.thumbnail(imgsize)
photo = ImageTk.PhotoImage(img)
</code></pre>
|
python-3.x|file|tkinter|python-imaging-library
| 0 |
1,902,223 | 2,904,274 |
globals and locals in python exec()
|
<p>I'm trying to run a piece of python code using exec.</p>
<pre><code>my_code = """
class A(object):
    pass

print 'locals: %s' % locals()
print 'A: %s' % A

class B(object):
    a_ref = A
"""

global_env = {}
local_env = {}

my_code_AST = compile(my_code, "My Code", "exec")
exec(my_code_AST, global_env, local_env)
print local_env
</code></pre>
<p>which results in the following output</p>
<pre><code>locals: {'A': <class 'A'>}
A: <class 'A'>
Traceback (most recent call last):
File "python_test.py", line 16, in <module>
exec(my_code_AST, global_env, local_env)
File "My Code", line 8, in <module>
File "My Code", line 9, in B
NameError: name 'A' is not defined
</code></pre>
<p>However, if I change the code to this -</p>
<pre><code>my_code = """
class A(object):
    pass

print 'locals: %s' % locals()
print 'A: %s' % A

class B(A):
    pass
"""

global_env = {}
local_env = {}

my_code_AST = compile(my_code, "My Code", "exec")
exec(my_code_AST, global_env, local_env)
print local_env
</code></pre>
<p>then it works fine - giving the following output -</p>
<pre><code>locals: {'A': <class 'A'>}
A: <class 'A'>
{'A': <class 'A'>, 'B': <class 'B'>}
</code></pre>
<p>Clearly A is present and accessible - what's going wrong in the first piece of code? I'm using 2.6.5, cheers,</p>
<p>Colin</p>
<p><strong>* UPDATE 1 *</strong></p>
<p>If I check the locals() inside the class -</p>
<pre><code>my_code = """
class A(object):
    pass

print 'locals: %s' % locals()
print 'A: %s' % A

class B(object):
    print locals()
    a_ref = A
"""

global_env = {}
local_env = {}

my_code_AST = compile(my_code, "My Code", "exec")
exec(my_code_AST, global_env, local_env)
print local_env
</code></pre>
<p>Then it becomes clear that locals() is not the same in both places -</p>
<pre><code>locals: {'A': <class 'A'>}
A: <class 'A'>
{'__module__': '__builtin__'}
Traceback (most recent call last):
File "python_test.py", line 16, in <module>
exec(my_code_AST, global_env, local_env)
File "My Code", line 8, in <module>
File "My Code", line 10, in B
NameError: name 'A' is not defined
</code></pre>
<p>However, if I do this, there is no problem -</p>
<pre><code>def f():
    class A(object):
        pass
    class B(object):
        a_ref = A

f()
print 'Finished OK'
</code></pre>
<p><strong>* UPDATE 2 *</strong></p>
<p>ok, so the docs here - <a href="http://docs.python.org/reference/executionmodel.html" rel="noreferrer">http://docs.python.org/reference/executionmodel.html</a></p>
<p><i>'A class definition is an executable statement that may use and define names. These references follow the normal rules for name resolution. The namespace of the class definition becomes the attribute dictionary of the class. Names defined at the class scope are not visible in methods.'</i></p>
<p>It seems to me that 'A' should be made available as a free variable within the executable statement that is the definition of B, and this happens when we call f() above, but not when we use exec(). This can be more easily shown with the following -</p>
<pre><code>my_code = """
class A(object):
    pass

print 'locals in body: %s' % locals()
print 'A: %s' % A

def f():
    print 'A in f: %s' % A

f()

class B(object):
    a_ref = A
"""
</code></pre>
<p>which outputs</p>
<pre><code>locals in body: {'A': <class 'A'>}
A: <class 'A'>
Traceback (most recent call last):
File "python_test.py", line 20, in <module>
exec(my_code_AST, global_env, local_env)
File "My Code", line 11, in <module>
File "My Code", line 9, in f
NameError: global name 'A' is not defined
</code></pre>
<p>So I guess the new question is - why aren't those locals being exposed as free variables in functions and class definitions - it seems like a pretty standard closure scenario.</p>
|
<p>Well, I believe it's either an implementation bug or an undocumented design decision. The crux of the issue is that a name-binding operation in the module-scope should bind to a global variable. The way it is achieved is that when in the module level, globals() IS locals() (try that one out in the interpreter), so when you do any name-binding, it assigns it, as usual, to the locals() dictionary, which is also the globals, hence a global variable is created.</p>
<p>When you look up a variable, you first check your current locals, and if the name is not found, you recursively check locals of containing scopes for the variable name until you find the variable or reach the module-scope. If you reach that, you check the globals, which are supposed to be the module scope's locals.</p>
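<p>For example, at the module scope:</p>
<pre><code>>>> globals() is locals()
True
</code></pre>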
<pre><code>>>> exec(compile("import sys\nprint sys._getframe().f_code.co_name", "blah", "exec"), {}, {})
<module>
>>> exec("a = 1\nclass A(object):\n\tprint a\n", {}, {})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 2, in <module>
File "<string>", line 3, in A
NameError: name 'a' is not defined
>>> d = {}
>>> exec("a = 1\nclass A(object):\n\tprint a\n", d,d)
1
</code></pre>
<p>This behavior is why inheritance worked (The name-lookup used code object's scope locals(), which indeed had A in it).</p>
<p>In the end, it's an ugly hack in the CPython implementation, special-casing globals lookup. It also causes some nonsensical artifical situations - e.g.:</p>
<pre><code>>>> def f():
...     global a
...     a = 1
...
>>> f()
>>> 'a' in locals()
True
</code></pre>
<p>Please note that this is all my inference based on messing with the interpreter while reading section 4.1 (Naming and binding) of the python language reference. While this isn't definitive (I haven't opened CPython's sources), I'm fairly sure I'm correct about the behavior.</p>
|
python|scope
| 20 |
1,902,224 | 3,056,179 |
Binomial test in Python for very large numbers
|
<p>I need to do a binomial test in Python that allows calculation for values of n on the order of 10000.</p>
<p>I have implemented a quick binomial_test function using scipy.misc.comb; however, it is pretty much limited to around n = 1000, I guess because it reaches the biggest representable number while computing the factorials or the combinatorial itself. Here is my function:</p>
<pre><code>from scipy.misc import comb

def binomial_test(n, k):
    """Calculate binomial probability
    """
    p = comb(n, k) * 0.5**k * 0.5**(n-k)
    return p
</code></pre>
<p>How could I use a native python (or numpy, scipy...) function in order to calculate that binomial probability? If possible, I need scipy 0.7.2 compatible code.</p>
<p>Many thanks!</p>
|
<p>Edited to add this comment: please note that, as Daniel Stutzbach mentions, the "binomial test" is probably not what the original poster was asking for (though he did use this expression). He seems to be asking for the probability density function of a binomial distribution, which is not what I'm suggesting below.</p>
<p>Have you tried scipy.stats.binom_test?</p>
<pre><code>rbp@apfelstrudel ~$ python
Python 2.6.2 (r262:71600, Apr 16 2009, 09:17:39)
[GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from scipy import stats
>>> print stats.binom_test.__doc__
Perform a test that the probability of success is p.
This is an exact, two-sided test of the null hypothesis
that the probability of success in a Bernoulli experiment
is `p`.
Parameters
----------
x : integer or array_like
the number of successes, or if x has length 2, it is the
number of successes and the number of failures.
n : integer
the number of trials. This is ignored if x gives both the
number of successes and failures
p : float, optional
The hypothesized probability of success. 0 <= p <= 1. The
default value is p = 0.5
Returns
-------
p-value : float
The p-value of the hypothesis test
References
----------
.. [1] http://en.wikipedia.org/wiki/Binomial_test
>>> stats.binom_test(500, 10000)
4.9406564584124654e-324
</code></pre>
<p>Small edit to add documentation link: <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.binom_test.html#scipy.stats.binom_test" rel="noreferrer">http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.binom_test.html#scipy.stats.binom_test</a></p>
<p>BTW: works on scipy 0.7.2, as well as on current 0.8 dev.</p>
|
python|binomial-coefficients
| 10 |
1,902,225 | 67,715,205 |
Visualising trees from random forest using graphviz with feature and class names
|
<p>I have trained a random forest classifier with RandomizedSearchCV and would like to export, say, the first 5 decision trees using graphviz. My features data is in dataframe format and my classes data is in series format.</p>
<p>I am able to export the trees using the code below</p>
<pre><code>from sklearn.pipeline import Pipeline
#define classifier
clf = RandomForestClassifier()
#define the pipeline - chain the column transformer and the classifier
pipe = Pipeline([('ct',column_trans), ('clf',clf)])
#randomised search cv with hyper parameters
#input the parameters for search space
param_grid = {
'clf__n_estimators':[500, 1000, 5000],
'clf__max_features':['sqrt','log2'],
'clf__max_depth': [5, 10, 15, 20],
'clf__min_samples_split': [2,5,10,15],
'clf__min_samples_leaf': [2,5,10,15],
'clf__bootstrap': [True, False],
'clf__criterion': ['gini','entropy']
}
#create the random forest classifier object
rscv_rf = RandomizedSearchCV(estimator = pipe, param_distributions=param_grid, scoring= 'f1_macro', verbose=1, random_state=42)
#fit the rf model with X-train and y_train data
rf_model = rscv_rf.fit(X_train, y_train)
#plot decision trees
fig, axes = plt.subplots(nrows = 1,ncols = 5,figsize = (5,5), dpi=800)
for index in range(0, 5):
tree.plot_tree(rscv_rf.best_estimator_.named_steps['clf'].estimators_[index],filled = True, ax = axes[index])
axes[index].set_title('Estimator: ' + str(index), fontsize = 11)
fig.savefig('rf_5trees.png')
</code></pre>
<p>However, when I try to include the feature names and the class names using the code below, I get index errors. Is there something I'm missing? I'm guessing each tree shows different features: my original dataframe only has 25 features, while the column transformer in my pipeline one-hot encodes them into 155 features. I'd appreciate any form of help. Thank you.</p>
<pre><code>#plot decision trees
fn= list(X.columns)
cn= [str(s) for s in y.unique()]
fig, axes = plt.subplots(nrows = 1,ncols = 5,figsize = (5,5), dpi=800)
for index in range(0, 5):
tree.plot_tree(rscv_rf.best_estimator_.named_steps['clf'].estimators_[index], feature_names = fn, class_names = cn, filled = True,ax = axes[index])
axes[index].set_title('Estimator: ' + str(index), fontsize = 11)
fig.savefig('rf_5trees.png')
</code></pre>
|
<p>Managed to get a working solution but happy to receive other solutions as well</p>
<pre><code>#retrieve the numerical and categorical variables before pipe and put into lists
numerical_columns = X.columns[X.dtypes == 'int64'].tolist()
categorical_columns = X.columns[X.dtypes != 'int64'].tolist()
#get the features that were one-hot encoded
onehot_columns = rscv_rf.best_estimator_.named_steps['ct'].named_transformers_['onehotencoder'].get_feature_names(input_features=categorical_columns)
#create a list of all the feature columns
feature_columns = numerical_columns + list(onehot_columns)
fn = feature_columns
cn= [str(s) for s in y.unique()]
#plot the chart and save the figure
fig, axes = plt.subplots(nrows = 1,ncols = 5,figsize = (5,5), dpi=800)
for index in range(0, 5):
tree.plot_tree(rscv_rf.best_estimator_.named_steps['clf'].estimators_[index], feature_names = fn, class_names = cn, filled = True,ax = axes[index])
axes[index].set_title('Estimator: ' + str(index), fontsize = 11)
fig.savefig('rf_5trees.png')
</code></pre>
|
python|dataframe|scikit-learn|random-forest|pygraphviz
| 0 |
1,902,226 | 67,896,685 |
Getting Error: cannot import name 'DB' from partially initialized module. Not finding the circular import issue
|
<p>I'm working on a assignment for school.</p>
<p>But when I try to run the Flask app, I get some kind of circular import error. I've been trying to go back and remove things step by step while still keeping my project functional. I've hit a wall:</p>
<pre><code>Usage: __main__.py run [OPTIONS]

Error: While importing 'Help.app', an ImportError was raised:

Traceback (most recent call last):
  File "/home/chris/Help/venv/lib/python3.8/site-packages/flask/cli.py", line 256, in locate_app
    __import__(module_name)
  File "/home/chris/Help/__init__.py", line 1, in <module>
    from .app import create_app
  File "/home/chris/Help/app.py", line 5, in <module>
    from .models import DB, User, insert_example_users
  File "/home/chris/Help/models.py", line 3, in <module>
    from .twitter import add_or_update_user
  File "/home/chris/Help/twitter.py", line 5, in <module>
    from .models import DB, Tweet, User
ImportError: cannot import name 'DB' from partially initialized module 'Help.models' (most likely due to a circular import) (/home/chris/Help/models.py)
</code></pre>
<p>app.py file:</p>
<pre><code># Main app/routing file for Twitoff
from os import getenv
from flask import Flask, render_template
from .models import DB, User, insert_example_users
# creates application
def create_app():
# Creating and configuring an instance of the Flask application
app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = getenv("DATABASE_URI")
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
DB.init_app(app)
# TODO - Make rest of application
@app.route('/')
def root():
DB.drop_all()
DB.create_all()
insert_example_users()
return render_template("base.html", title="home", users=User.query.all())
@app.route("/update")
def update():
insert_example_users()
return render_template("base.html", title="home", users=User.query.all())
@app.route("/reset")
def reset():
DB.drop_all()
DB.create_all()
return render_template(
"base.html",
title="home",
)
return app
</code></pre>
<p>Twitter.py file:</p>
<pre><code>"""Retrieve tweets and users then create embeddings and populate DB"""
from os import getenv
import tweepy
import spacy
from .models import DB, Tweet, User
# TODO - Don't include raw keys and tokens (create .env file)
TWITTER_API_KEY = getenv("TWITTER_API_KEY")
TWITTER_API_SECRET_KEY = getenv("TWITTER_API_SECRET_KEY")
TWITTER_OAUTH = tweepy.OAuthHandler(TWITTER_API_KEY, TWITTER_API_SECRET_KEY)
TWITTER = tweepy.API(TWITTER_OAUTH)
# NLP model
nlp = spacy.load("my_model")
def vectorize_tweet(tweet_text):
return nlp(tweet_text).vector
def add_or_update_user(username):
try:
twitter_user = TWITTER.get_user(username)
db_user = (User.query.get(twitter_user.id)) or User(
id=twitter_user.id, name=username
)
DB.session.add(db_user)
tweets = twitter_user.timeline(
count=200, exclude_replies=True, include_rts=False, tweet_mode="extended"
)
if tweets:
db_user.newest_tweet_id = tweets[0].id
for tweet in tweets:
vectorized_tweet = vectorize_tweet(tweet.full_text)
db_tweet = Tweet(id=tweet.id, text=tweet.full_text, vect=vectorized_tweet)
db_user.tweets.append(db_tweet)
DB.session.add(db_tweet)
DB.session.commit()
except Exception as e:
print(f"Error processing {username}: {e}")
raise e
</code></pre>
<p>models.py file:</p>
<pre><code>""""SQLAlchemy models and utility functions for Twitoff Application"""
from flask_sqlalchemy import SQLAlchemy
from .twitter import add_or_update_user
DB = SQLAlchemy()
class User(DB.Model):
"""Twitter User table that will correspond to tweets - SQLAlchemy syntax"""
id = DB.Column(DB.BigInteger, primary_key=True)
name = DB.Column(DB.String, nullable=False)
newest_tweet_id = DB.Column(DB.BigInteger)
def __repr__(self):
return f"<User:{self.name}>"
class Tweet(DB.Model):
"""tweet text data - associated with User table"""
id = DB.Column(DB.BigInteger, primary_key=True)
text = DB.Column(DB.Unicode(290))
vect = DB.Column(DB.PickleType, nullable=False)
user_id = DB.Column(DB.BigInteger, DB.ForeignKey("user.id"), nullable=False)
user = DB.relationship("User", backref=DB.backref('tweets', lazy=True))
def __repr__(self):
return f"<Tweet: {self.text}"
def insert_example_users():
"""We will get an error if we run this twice without dropping & creating"""
users = ["elonmusk", "geoffkeighley", "iamjohnoliver", "neiltyson"]
for user in users:
DB.session.add(add_or_update_user(user))
DB.session.commit()
</code></pre>
<p><strong>init</strong>.py file:</p>
<pre><code>from .app import create_app
APP = create_app()
</code></pre>
|
<p>You could place the app instance not in <code>__init__.py</code> but in <code>app.py</code>. The instance should be a WSGI callable and not a package identifier.</p>
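<p>If the cycle between <code>models.py</code> and <code>twitter.py</code> is what remains, a common workaround is to defer one of the imports to function level. A minimal sketch of <code>models.py</code> (only the changed part; this assumes <code>add_or_update_user</code> commits its own session, as in the code above):</p>
<pre><code>"""models.py -- sketch: import twitter lazily to break the cycle"""
from flask_sqlalchemy import SQLAlchemy

DB = SQLAlchemy()

def insert_example_users():
    # Imported here instead of at module top, so models.py no longer
    # needs twitter.py at import time and the circular import disappears.
    from .twitter import add_or_update_user
    for user in ["elonmusk", "geoffkeighley", "iamjohnoliver", "neiltyson"]:
        add_or_update_user(user)
</code></pre>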
|
python|flask|flask-sqlalchemy|spacy
| 1 |
1,902,227 | 67,750,745 |
Try/Exception giving multiple results in Python
|
<p>I have this program that takes user input (the name of an installed application) and then opens the application with subprocess.Popen(). All possible application names are keys in a dictionary, and each value is the path to the application's .exe file. I'm using a <strong>try/except</strong> to see if there will be a <strong>KeyError</strong> (if the input name doesn't exist in the dictionary).</p>
<p>So the program should take the user input, see if the application name is in the dictionary, and then open the application. If the name isn't in the dictionary, it will give an error message and then ask for a name again.</p>
<p>But when I enter a non-existent name <em>n</em> times, it will run the <strong>finally</strong> block <em>n</em> times too. How to solve this?</p>
<pre><code>import subprocess
apps ={
"vlc":"app_path\\vlc.exe",
"sublime":"app_path\\sublime_text.exe",
"chrome":"app_path\\chrome.exe",
"opera":"app_path\\launcher.exe",
"torrent":"app_path.\\qbittorrent.exe"
}
def test():
answer = str(input("What program do you want to run: ")).lower()
try:
print(apps[answer])
except KeyError:
print(f"{answer.capitalize()} application unknown. Try again with a valid application name")
print("===============================================================")
test()
except:
print("==============================")
print("Unknown error.")
else:
subprocess.Popen(apps[answer])
finally:
print("===========================================")
print("FINISHED")
test()
</code></pre>
|
<p>You're using recursion here, so the finally block runs once for every (re)invocation of the function, i.e. once per name you enter.</p>
<p>The code within <code>finally</code> is always executed regardless if the try block raises an error or not.</p>
<p>W3Schools has good examples for this: <a href="https://www.w3schools.com/python/python_try_except.asp" rel="nofollow noreferrer">https://www.w3schools.com/python/python_try_except.asp</a></p>
<p>Instead of using try/except for flow control, you can simply use a while loop and <code>apps.get(answer, 'ERROR')</code> or <code>if answer in apps</code> to check whether the entered input is in the <code>apps</code> dictionary. The following solution takes that approach.</p>
<pre class="lang-py prettyprint-override"><code>def test():
while True:
answer = input("What program do you want to run: ").lower()
if answer in apps:
try:
res = subprocess.Popen(apps[answer]).communicate()
print("===========================================")
print("FINISHED")
return res
except:
print("==============================")
print("Unknown error.")
else:
print(f"{answer.capitalize()} application unknown. Try again with a valid application name")
print("===============================================================")
test()
</code></pre>
|
python
| 1 |
1,902,228 | 67,977,916 |
Remove the Last Vowel in Python
|
<p>I have the following problem and I am wondering if there is a faster and cleaner implementation of the <code>removeLastChar()</code> function. Specifically, if one can already remove the last vowel without having to find the corresponding index first.</p>
<p><em><strong>PROBLEM</strong></em></p>
<p>Write a function that removes the last vowel in each word in a sentence.</p>
<p>Examples:</p>
<p><code>removeLastVowel("Those who dare to fail miserably can achieve greatly.")</code></p>
<p>"Thos wh dar t fal miserbly cn achiev gretly."</p>
<p><code>removeLastVowel("Love is a serious mental disease.")</code></p>
<p>"Lov s serios mentl diseas"</p>
<p><code>removeLastVowel("Get busy living or get busy dying.")</code></p>
<p>"Gt bsy livng r gt bsy dyng"</p>
<p>Notes: Vowels are: a, e, i, o, u (both upper and lowercase).</p>
<p><em><strong>MY SOLUTION</strong></em></p>
<p><em>A PSEUDOCODE</em></p>
<ol>
<li>Decompose the sentence</li>
<li>For each word find the index of the last vowel</li>
<li>Then remove it and make the new "word"</li>
<li>Concatenate all the words</li>
</ol>
<p><strong>CODE</strong></p>
<pre><code>def findLastVowel(word):
set_of_vowels = {'a','e','i','o','u'}
last_vowel=''
for letter in reversed(word):
if letter in set_of_vowels:
last_vowel = letter
break
return last_vowel
def removeLastChar(input_str,char_to_remove):
index = input_str.find(char_to_remove)
indices = []
tmp_str = input_str
if index != -1:
while index != -1:
indices.append(index)
substr1 = tmp_str[:index]
substr2 = tmp_str[index+1:]
tmp_str = substr1+"#"+substr2
index = tmp_str.find(char_to_remove)
index = indices[-1]
substr1 = input_str[:index]
substr2 = input_str[index+1:]
return (substr1+substr2)
else:
return (input_str)
def removeLastVowel(sentence):
decomposed_sentence = sentence.split()
out = []
for word in decomposed_sentence:
out.append(removeLastChar(word,findLastVowel(word)))
print(" ".join(out))
#MAIN
removeLastVowel("Those who dare to fail miserably can achieve greatly.")
removeLastVowel("Love is a serious mental disease.")
removeLastVowel("Get busy living or get busy dying.")
</code></pre>
<p><strong>OUTPUT</strong></p>
<pre><code>Thos wh dar t fal miserbly cn achiev gretly.
Lov s serios mentl diseas.
Gt bsy livng r gt bsy dyng.
</code></pre>
<p><em><strong>QUESTION</strong></em></p>
<p>Can you suggest a better implementation of the <code> removeLastChar()</code> function? Specifically, if one can already remove the last vowel without having to find the corresponding index first.</p>
|
<p>This can be more easily achieved with a regex substitution: it removes a vowel only when the lookahead <code>(?=[^\Waeiou]*\b)</code> sees nothing but non-vowel word characters between it and the end of the word, i.e. exactly the last vowel of each word, and <code>re.I</code> makes the match case-insensitive:</p>
<pre><code>import re
def removeLastVowel(s):
return re.sub(r'[aeiou](?=[^\Waeiou]*\b)', '', s, flags=re.I)
</code></pre>
<p>so that:</p>
<pre><code>removeLastVowel("Those who dare to fail miserably can achieve greatly.")
</code></pre>
<p>returns:</p>
<pre><code>Thos wh dar t fal miserbly cn achiev gretly.
</code></pre>
|
python|string
| 2 |
1,902,229 | 30,712,916 |
looking for help in configuring .emacs file for python (python.el),
|
<p>My Emacs version is 24.5, using the built-in python.el. I have written these lines in my <code>.emacs</code> for it:</p>
<pre><code>(require 'python)
(setq python-shell-interpreter "C:/Python34")
</code></pre>
<p>The problem is none of the commands (when I am trying to run <code>test.py</code>) are working. I have tried several commands named like</p>
<pre><code>M-x python-shell-*
</code></pre>
<p>and they all return</p>
<pre><code>"wrong type argument:arrayp, nil".
</code></pre>
<p>What am I doing wrong?
What am I supposed to do?
What would the ideal configuration (<code>.emacs</code>) be?</p>
<p>Further info:</p>
<ul>
<li>Python 3.4 installed at <code>C:/</code></li>
<li>Emacs at <code>C:/Program Files/</code></li>
<li><code>$HOME</code> is <code>C:/user/akk/appdata/roaming/</code></li>
</ul>
|
<p>That variable is for the Python <em>interpreter</em>, not the Python <em>directory</em>.</p>
<p>I don't have a Windows machine to test on, but if you update your configuration to point to the actual binary, e.g. <code>(setq python-shell-interpreter "C:/Python34/python.exe")</code> (assuming that is where python.exe lives), you should find that it works.</p>
|
emacs|python.el
| 3 |
1,902,230 | 30,667,890 |
Javascript + Ajax + Django - chat application issue
|
<p>So I have this application in django+javascript+ajax</p>
<pre><code>var friends = "{{friend}}";
function LoadJson(){ //start function
$.getJSON( "/messages/message/friend="+friends, function( data ) {
var items = [];
var lastitem = parseInt($("#showdata div:last-child").attr("id"));
if (lastitem !== lastitem) {
var lastitem = 0;
}
$.each( data, function( key, val ) {
</code></pre>
<p>I want to add the following here: it should read the message and, if it is longer than 30 characters and contains no space, insert a space every 30 characters:</p>
<pre><code> if ( key > lastitem ) {
var str = val.msg;
var search = str.search(" ")
if (str > 30) {
if ( (search == -1) || (search > 30) ) {
newVal = str.replace(/(.{30})/g, "$1\n");
console.log(newVal)
}
}
</code></pre>
<p>But every time I add something to the base application, the application crashes and displays the last message many times.</p>
<pre><code> if ( key > lastitem ) {
$("#showdata").append("<div class='well well-lg lighter row col-md-12' id='" + key + "'>"+"<span class='pull-left'><span class='sender col-md-3'>"+val.user+"</span>" + "<span class='message col-md-6' style='margin-left:30px;'>" + val.msg + "</span></span></br><small class='pull-right col-md-3'>"+ val.time + "</small></div>");
$('#bottom').scrollTop($('#bottom').prop("scrollHeight"));
$(window).scrollTop($(document).height());
};
});
});
setTimeout(LoadJson, 5000);
</code></pre>
<p>};</p>
<p>I would really appreciate any kind of help.
Thank you!</p>
|
<p>This temporarily solved my problem:</p>
<pre><code>var friends = "{{friend}}";
function LoadJson(){ //start function
$.getJSON( "/messages/message/friend="+friends, function( data ) {
var items = [];
var lastitem = parseInt($("#showdata div:last-child").attr("id"));
if (lastitem !== lastitem) {
var lastitem = 0;
}
$.each( data, function( key, val ) {
if (val.msg) {
var str = val.msg;
var search = str.search(" ")
}
if ( key > lastitem ) {
if ( (search == -1) || (search > 30) ) {
newVal = str.replace(/(.{30})/g, "$1\n");
$("#showdata").append("<div class='well well-lg lighter row col-md-12' id='" + key + "'>"+"<span class='pull-left'><span class='sender col-md-3'>"+val.user+"</span>" + "<span class='message col-md-6' style='margin-left:30px;'>" + newVal + "</span></span></br><small class='pull-right col-md-3'>"+ val.time + "</small></div>");
}
else {
$("#showdata").append("<div class='well well-lg lighter row col-md-12' id='" + key + "'>"+"<span class='pull-left'><span class='sender col-md-3'>"+val.user+"</span>" + "<span class='message col-md-6' style='margin-left:30px;'>" + val.msg + "</span></span></br><small class='pull-right col-md-3'>"+ val.time + "</small></div>");
}
$('#bottom').scrollTop($('#bottom').prop("scrollHeight"));
$(window).scrollTop($(document).height());
};
});
});
setTimeout(LoadJson, 5000);
</code></pre>
<p>};</p>
|
javascript|jquery|python|ajax|django
| 0 |
1,902,231 | 67,154,206 |
pandas groupby then filter by date to get mean
|
<p>Using pandas dataframes, I'm attempting to get the average number of purchases in the last 90 days for each row (not including the current row itself), grouped by CustId, and then add a new column "PurchaseMeanLast90Days".</p>
<p>This is the code I tried, which is incorrect:</p>
<pre><code>group = df.groupby(['CustId'])
df['PurchaseMeanLast90Days'] = group.apply(lambda g: g[g['Date'] > (pd.DatetimeIndex(g['Date']) + pd.DateOffset(-90))])['Purchases'].mean()
</code></pre>
<p>Here's my data:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>CustId</th>
<th>Date</th>
<th>Purchases</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>1</td>
<td>1/01/2021</td>
<td>5</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>1/12/2021</td>
<td>1</td>
</tr>
<tr>
<td>2</td>
<td>1</td>
<td>3/28/2021</td>
<td>2</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>4/01/2021</td>
<td>4</td>
</tr>
<tr>
<td>4</td>
<td>1</td>
<td>4/20/2021</td>
<td>2</td>
</tr>
<tr>
<td>5</td>
<td>1</td>
<td>5/01/2021</td>
<td>5</td>
</tr>
<tr>
<td>6</td>
<td>2</td>
<td>1/01/2021</td>
<td>1</td>
</tr>
<tr>
<td>7</td>
<td>2</td>
<td>2/01/2021</td>
<td>1</td>
</tr>
<tr>
<td>8</td>
<td>2</td>
<td>3/01/2021</td>
<td>2</td>
</tr>
<tr>
<td>9</td>
<td>2</td>
<td>4/01/2021</td>
<td>3</td>
</tr>
</tbody>
</table>
</div>
<p>For example, row index 5 would include these rows in its mean() = 3.33</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>CustId</th>
<th>Date</th>
<th>Purchases</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>1</td>
<td>3/28/2021</td>
<td>2</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>4/01/2021</td>
<td>4</td>
</tr>
<tr>
<td>4</td>
<td>1</td>
<td>4/20/2021</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
<p>The new dataframe would look like this(I didn't do the calcs for CustId=2):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>CustId</th>
<th>Date</th>
<th>Purchases</th>
<th>PurchaseMeanLast90Days</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>1</td>
<td>1/09/2021</td>
<td>5</td>
<td>0</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>1/12/2021</td>
<td>1</td>
<td>5</td>
</tr>
<tr>
<td>2</td>
<td>1</td>
<td>3/28/2021</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>4/01/2021</td>
<td>4</td>
<td>2.67</td>
</tr>
<tr>
<td>4</td>
<td>1</td>
<td>4/20/2021</td>
<td>2</td>
<td>3.0</td>
</tr>
<tr>
<td>5</td>
<td>1</td>
<td>5/01/2021</td>
<td>5</td>
<td>3.33</td>
</tr>
<tr>
<td>6</td>
<td>2</td>
<td>1/01/2021</td>
<td>1</td>
<td>...</td>
</tr>
<tr>
<td>7</td>
<td>2</td>
<td>2/01/2021</td>
<td>1</td>
<td>...</td>
</tr>
<tr>
<td>8</td>
<td>2</td>
<td>3/01/2021</td>
<td>2</td>
<td>...</td>
</tr>
<tr>
<td>9</td>
<td>2</td>
<td>4/01/2021</td>
<td>3</td>
<td>...</td>
</tr>
</tbody>
</table>
</div>
|
<p>You can do a rolling computation:</p>
<pre><code>df["Date"] = pd.to_datetime(df["Date"], dayfirst=False)
df["PurchaseMeanLast90Days"] = (
(
df.groupby("CustId")
.rolling("90D", min_periods=1, on="Date", closed="both")["Purchases"]
.apply(lambda x: x.shift(1).sum() / (len(x) - 1))
)
.fillna(0)
.values
)
print(df)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> Index CustId Date Purchases PurchaseMeanLast90Days
0 0 1 2021-01-01 5 0.000000
1 1 1 2021-01-12 1 5.000000
2 2 1 2021-03-28 2 3.000000
3 3 1 2021-04-01 4 2.666667
4 4 1 2021-04-20 2 3.000000
5 5 1 2021-05-01 5 2.666667
6 6 2 2021-01-01 1 0.000000
7 7 2 2021-02-01 1 1.000000
8 8 2 2021-03-01 2 1.000000
9 9 2 2021-04-01 3 1.333333
</code></pre>
|
python|pandas|dataframe|filter|mean
| 1 |
1,902,232 | 66,914,092 |
How to exit out of a Python Script while the script invoked by it still continues to run
|
<p>I have two Python scripts, one of which does some processing on the data in an Excel file. The second script is run whenever the user updates the Excel file, to reload the data. The problem is that the first script contains an infinite while loop, so when the second script executes it, the second script never exits. I just want the second script to launch the first one once and then exit, while the first script keeps running indefinitely. Here is the code:</p>
<pre><code>import notification
import importlib
import subprocess
import os
import sys
importlib.reload(notification)
if os.name == 'nt':
subprocess.run(["python","notification.py"], capture_output=True)
else:
subprocess.run(["python3","notification.py"], capture_output=True)
sys.exit(0)
</code></pre>
<p>Here the name of my first file is notification.py.</p>
|
<p>You need to use <code>subprocess.Popen</code> instead of <code>subprocess.run</code></p>
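<p>A minimal sketch of the change (dropping <code>capture_output</code>, since the parent exits immediately and would never read the pipes):</p>
<pre><code>import os
import subprocess
import sys

# Popen starts the child and returns at once instead of waiting for it.
interpreter = "python" if os.name == "nt" else "python3"
subprocess.Popen([interpreter, "notification.py"])
sys.exit(0)  # the child keeps running after this script exits
</code></pre>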
<p>Reference:</p>
<p><a href="https://stackoverflow.com/questions/5772873/python-spawn-off-a-child-subprocess-detach-and-exit">Python spawn off a child subprocess, detach, and exit</a></p>
<p><a href="https://docs.python.org/3/library/subprocess.html#popen-objects" rel="nofollow noreferrer">https://docs.python.org/3/library/subprocess.html#popen-objects</a></p>
|
python|subprocess
| 0 |
1,902,233 | 50,654,365 |
Running all python unittests from different modules as a suite having parameterized class constructor
|
<p>I am working on a design pattern to structure my Python unittests as a POM (Page Object Model). So far I have written my page classes in the modules <code>HomePageObject.py</code> and <code>FilterPageObject.py</code>, my base class (for common stuff) <code>TestBase</code> in <code>BaseTest.py</code>, my test case modules <code>TestCase1.py</code> and <code>TestCase2.py</code>, and one runner module <code>runner.py</code>.
In the runner I use <code>loader.getTestCaseNames</code> to get all the tests from the test case class of a module. In both test case modules the test class is named '<code>Test</code>' and the method is named '<code>testName</code>'.
Since the names conflict when imported into the runner, only one test gets executed. I want Python to scan all the modules I specify for tests and run them even when the class names are the same.
I understand that <code>nose</code> might help here, but I'm not sure how to apply it. Any advice?</p>
<h2>BaseTest.py</h2>
<pre><code>from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver import ChromeOptions
import unittest
class TestBase(unittest.TestCase):
driver = None
def __init__(self,testName,browser):
self.browser = browser
super(TestBase,self).__init__(testName)
def setUp(self):
if self.browser == "firefox":
TestBase.driver = webdriver.Firefox()
elif self.browser == "chrome":
options = ChromeOptions()
options.add_argument("--start-maximized")
TestBase.driver = webdriver.Chrome(chrome_options=options)
self.url = "https://www.airbnb.co.in/"
self.driver = TestBase.getdriver()
TestBase.driver.implicitly_wait(10)
def tearDown(self):
self.driver.quit()
@staticmethod
def getdriver():
return TestBase.driver
@staticmethod
def waitForElementVisibility(locator, expression, message):
try:
WebDriverWait(TestBase.driver, 20).\
until(EC.presence_of_element_located((locator, expression)),
message)
return True
except:
return False
</code></pre>
<h2>TestCase1.py and TestCase2.py (same)</h2>
<pre><code>from airbnb.HomePageObject import HomePage
from airbnb.BaseTest import TestBase
class Test(TestBase):
def __init__(self,testName,browser):
super(Test,self).__init__(testName,browser)
def testName(self):
try:
self.driver.get(self.url)
h_page = HomePage()
f_page = h_page.seachPlace("Sicily,Italy")
f_page.selectExperience()
finally:
self.driver.quit()
</code></pre>
<h2>runner.py</h2>
<pre><code>import unittest
from airbnb.TestCase1 import Test
from airbnb.TestCase2 import Test
loader = unittest.TestLoader()
test_names = loader.getTestCaseNames(Test)
suite = unittest.TestSuite()
for test in test_names:
suite.addTest(Test(test,"chrome"))
runner = unittest.TextTestRunner()
result = runner.run(suite)
</code></pre>
<p>Also, even though that one test case passes, the following error message appears:</p>
<pre><code>Ran 1 test in 9.734s
OK
Traceback (most recent call last):
File "F:\eclipse-jee-neon-3-win32\eclipse\plugins\org.python.pydev.core_6.3.3.201805051638\pysrc\runfiles.py", line 275, in <module>
main()
File "F:\eclipse-jee-neon-3-win32\eclipse\plugins\org.python.pydev.core_6.3.3.201805051638\pysrc\runfiles.py", line 97, in main
return pydev_runfiles.main(configuration) # Note: still doesn't return a proper value.
File "F:\eclipse-jee-neon-3-win32\eclipse\plugins\org.python.pydev.core_6.3.3.201805051638\pysrc\_pydev_runfiles\pydev_runfiles.py", line 874, in main
PydevTestRunner(configuration).run_tests()
File "F:\eclipse-jee-neon-3-win32\eclipse\plugins\org.python.pydev.core_6.3.3.201805051638\pysrc\_pydev_runfiles\pydev_runfiles.py", line 773, in run_tests
all_tests = self.find_tests_from_modules(file_and_modules_and_module_name)
File "F:\eclipse-jee-neon-3-win32\eclipse\plugins\org.python.pydev.core_6.3.3.201805051638\pysrc\_pydev_runfiles\pydev_runfiles.py", line 629, in find_tests_from_modules
suite = loader.loadTestsFromModule(m)
File "C:\Python27\lib\unittest\loader.py", line 65, in loadTestsFromModule
tests.append(self.loadTestsFromTestCase(obj))
File "C:\Python27\lib\unittest\loader.py", line 56, in loadTestsFromTestCase
loaded_suite = self.suiteClass(map(testCaseClass, testCaseNames))
TypeError: __init__() takes exactly 3 arguments (2 given)
</code></pre>
|
<p>I did this by searching for all the modules of test classes with a pattern and then used <code>__import__(modulename)</code> and called its <code>Test</code> class with desired parameters,</p>
<p>Here is my <strong>runner.py</strong></p>
<pre><code>import unittest
import glob
loader = unittest.TestLoader()
suite = unittest.TestSuite()
test_file_strings = glob.glob('Test*.py')
module_strings = [s[:-3] for s in test_file_strings]
for module in module_strings:
mod = __import__(module)
test_names =loader.getTestCaseNames(mod.Test)
for test in test_names:
suite.addTest(mod.Test(test,"chrome"))
runner = unittest.TextTestRunner()
result = runner.run(suite)
</code></pre>
<p><strong>This worked</strong>, but I'm still looking for a more organized solution.
(Not sure why it reports "Ran 0 tests in 0.000s" the second time.)</p>
<pre><code>Finding files... done.
Importing test modules ... ..done.
----------------------------------------------------------------------
Ran 2 tests in 37.491s
OK
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
</code></pre>
|
python-unittest|nose
| 0 |
1,902,234 | 35,058,766 |
python 2.4.3 subprocess check_call
|
<p>I apparently have an old version of Python, and when I try to use</p>
<pre><code>subprocess.check_call(...)
</code></pre>
<p>an error is returned saying that <code>check_call</code> doesn't exist.
Is there an equivalent? By the way, I also need to understand which line is invoked when I use <code>subprocess.call(...)</code>.</p>
|
<p>You should be able to use <code>call</code> as <code>check_call</code> is actually a wrapper for <code>subprocess.call()</code>, which exists in Python 2.4. You can write your own <code>check_call</code> function:</p>
<p>(Warning: this is not tested as I don't have Python 2.4):</p>
<pre><code>import subprocess

class CalledProcessError(Exception):
def __init__(self, returncode, cmd, output=None):
self.returncode = returncode
self.cmd = cmd
self.output = output
def __str__(self):
return "Command '%s' returned non-zero exit status %d" % (self.cmd, self.returncode)
def check_call(*popenargs, **kwargs):
retcode = subprocess.call(*popenargs, **kwargs)
if retcode:
cmd = kwargs.get("args")
if cmd is None:
cmd = popenargs[0]
raise CalledProcessError(retcode, cmd)
return 0
</code></pre>
<p>and instead of <code>subprocess.check_call(...)</code> you can simply call your own <code>check_call(...)</code> using the same arguments.</p>
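<p>For example (like the snippet above, untested on 2.4):</p>
<pre><code>check_call(["ls", "-l"])          # returns 0 on success
check_call("exit 1", shell=True)  # raises CalledProcessError
</code></pre>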
|
python|process
| 0 |
1,902,235 | 26,513,473 |
Finding words that contain all but one letter in string
|
<p>First off, this is a homework assignment (due tomorrow, fun fact). The goal for this assignment is to take a string the the user inputs, and then search through a dictionary file given to us and print out any word that has all but one of those letters.<br>
The code I have so far is below. I attempted to use regular expressions, but it seems that those only work for patterns; I do not want to find only matching patterns, since the letters can be anywhere in the word.<br>
Another important note: I'm not allowed to use lists in this assignment, which is where I think I'm running into the most trouble. Also, the length of the string will vary each time.</p>
<pre><code>import re
dictionary = open('dictionary.txt','r')
def all_but_one_letter():
user_string = input("Please enter a string of characters: ")
print(user_string)
line = 'begin'
while line != "":
line = dictionary.readline()
line = line.rstrip()
if re.findall(user_string, line) == 1:
print(line)
all_but_one_letter()
dictionary.close()
</code></pre>
<p>I also have to print a statement saying if nothing matches the criteria, but right now I'm more concerned with getting this working first. Any help, or hints in the right direction would be greatly appreciated.</p>
|
<p>Use <a href="https://docs.python.org/2/library/sets.html" rel="nofollow">sets</a>:</p>
<pre><code>s1 = "foobar"
s2 = "fooba"
st1 = set(s1)
print(len(st1.intersection(s2)) == len(st1) - 1)
True
</code></pre>
<p>You should use <code>with</code> to open your files, and you can just iterate over the file object:</p>
<pre><code>def all_but_one_letter():
with open('dictionary.txt','r') as dictionary:
user_string = input("Please enter a string of characters: ")
for line in dictionary:
words = line.rstrip().split() # split into individual words
for word in words:
st1 = set(user_string)
if len(st1.intersection(word)) == len(st1) - 1:
print(word)
</code></pre>
<p><code>intersection</code> finds the common letters; if the size of the intersection is one less than the size of <code>s1</code>'s set, then all but one of the user string's letters appear in the word.</p>
<pre><code>In [1]: s1 = "foobar"
In [2]: s2 = "fooba"
In [3]: st1 = set(s1)
In [4]: len(st1.intersection(s2)) == len(st1) - 1
Out[4]: True
In [5]: s1 = "fooba"
In [6]: s2 = "fooba"
In [7]: st1 = set(s1)
In [8]: len(st1.intersection(s2)) == len(st1) - 1
Out[8]: False
</code></pre>
|
python|regex|string
| 3 |
1,902,236 | 45,247,500 |
Incremental data load using pandas
|
<p>I am trying to implement incremental data import using pandas.</p>
<p>I have two dataframes: df_old (original data, loaded before) and df_new (new data, to be merged with df_old). </p>
<p>Data in df_old/df_new are unique on multiple columns
(for simplicity, let's say just 2: key1 and key2). The other columns hold the data to be merged; let's say there are only 2 of them too: val1 and val2.</p>
<p>Besides these, there is one more column to take care of: change_id. It increases for each new entry overwriting an old one.</p>
<p>The logic of the import is pretty straightforward:</p>
<ol>
<li>if there is new key pair in df_new, it should be added (with corresponding val1/val2 values) to df_old</li>
<li><p>if there is a key pair in df_new which exists in df_old, then:</p>
<p>2a) if corresponding values in df_old and df_new are same, the old ones should be kept</p>
<p>2b) if corresponding values in df_old and df_new are different, the values in df_new should replace the old ones in df_old</p></li>
<li><p>there's no need to care about data deletion (if some data in df_old exist which are not present in df_new)</p></li>
</ol>
<p>so far, I came up with 2 different solutions:</p>
<pre><code>>>> df_old = pd.DataFrame([['A1','B2',1,2,1],['A1','A2',1,3,1],['B1','A2',1,3,1],['B1','B2',1,4,1],], columns=['key1','key2','val1','val2','change_id'])
>>> df_old.set_index(['key1','key2'], inplace=True)
>>> df_old
val1 val2 change_id
key1 key2
A1 B2 1 2 1
A2 1 3 1
B1 A2 1 3 1
B2 1 4 1
>>> df_new = pd.DataFrame([['A1','B2',2,1,2],['A1','A2',1,3,2],['C1','B2',2,1,2]], columns=['key1','key2','val1','val2','change_id'])
>>> df_new.set_index(['key1','key2'], inplace=True)
>>> df_new
val1 val2 change_id
key1 key2
A1 B2 2 1 2
A2 1 3 2
C1 B2 2 1 2
</code></pre>
<p>solution 1</p>
<pre><code># this solution groups concatenated old data with new ones, group them by keys and for each group evaluates if new data are different
def merge_new(x):
if x.shape[0] == 1:
return x.iloc[0]
else:
if x.iloc[0].loc[['val1','val2']].equals(x.iloc[1].loc[['val1','val2']]):
return x.iloc[0]
else:
return x.iloc[1]
def solution1(df_old, df_new):
merged = pd.concat([df_old, df_new])
return merged.groupby(level=['key1','key2']).apply(merge_new).reset_index()
</code></pre>
<p>solution 2</p>
<pre><code># this solution uses pd.merge to merge data + additional logic to compare merged rows and select new data
>>> def solution2(df_old, df_new):
>>> merged = pd.merge(df_old, df_new, left_index=True, right_index=True, how='outer', suffixes=('_old','_new'), indicator='ind')
>>> merged['isold'] = (merged.loc[merged['ind'] == 'both',['val1_old','val2_old']].rename(columns=lambda x: x[:-4]) == merged.loc[merged['ind'] == 'both',['val1_new','val2_new']].rename(columns=lambda x: x[:-4])).all(axis=1)
>>> merged.loc[merged['ind'] == 'right_only','isold'] = False
>>> merged['isold'] = merged['isold'].fillna(True)
>>> return pd.concat([merged[merged['isold'] == True][['val1_old','val2_old','change_id_old']].rename(columns=lambda x: x[:-4]), merged[merged['isold'] == False][['val1_new','val2_new','change_id_new']].rename(columns=lambda x: x[:-4])])
>>> solution1(df_old, df_new)
key1 key2 val1 val2 change_id
0 A1 A2 1 3 1
1 A1 B2 2 1 2
2 B1 A2 1 3 1
3 B1 B2 1 4 1
4 C1 B2 2 1 2
>>> solution2(df_old, df_new)
val1 val2 change_id
key1 key2
A1 A2 1.0 3.0 1.0
B1 A2 1.0 3.0 1.0
B2 1.0 4.0 1.0
A1 B2 2.0 1.0 2.0
C1 B2 2.0 1.0 2.0
</code></pre>
<p>Both of them work; however, I am still quite disappointed with the performance on huge dataframes.
The question is: is there some better way to do this? Any hint for a decent speed improvement will be more than welcome.</p>
<pre><code>>>> %timeit solution1(df_old, df_new)
100 loops, best of 3: 10.6 ms per loop
>>> %timeit solution2(df_old, df_new)
100 loops, best of 3: 14.7 ms per loop
</code></pre>
|
<p>Here's one way to do this that's pretty fast:</p>
<pre><code>merged = pd.concat([df_old.reset_index(), df_new.reset_index()])
merged = merged.drop_duplicates(["key1", "key2", "val1", "val2"]).drop_duplicates(["key1", "key2"], keep="last")
# 100 loops, best of 3: 1.69 ms per loop
# key1 key2 val1 val2 change_id
# 1 A1 A2 1 3 1
# 2 B1 A2 1 3 1
# 3 B1 B2 1 4 1
# 0 A1 B2 2 1 2
# 2 C1 B2 2 1 2
</code></pre>
<p>The rationale here is to concatenate all rows and simply call <code>drop_duplicates</code> twice, rather than relying on join logic to get the rows you want. The first call to <code>drop_duplicates</code> drops rows originating in <code>df_new</code> that match on both the key & value columns since the default behavior of this method is to keep the first of the duplicate rows (in this case the row from <code>df_old</code>). The second call drops duplicates that match on the key columns, but specifies that the <code>last</code> row for each set of duplicates should be kept.</p>
<p>This approach assumes that the rows are sorted on <code>change_id</code>; this is a safe assumption given the order in which the example DataFrames are concatenated. If this is a faulty assumption with your real data, however, simply call <code>.sort_values('change_id')</code> on <code>merged</code> before dropping the duplicates.</p>
|
python|pandas|merge
| 4 |
1,902,237 | 57,918,307 |
Month Aggregate PyMongo
|
<p>How do you run a find query using PyMongo to aggregate by month?</p>
<p>I have tried the following:</p>
<pre><code>searchdate = datetime.datetime.now().strftime("%Y-%m-%d")
eventscreated = list(db.event.find({"creator._id":ObjectId(myid)}, {"eventdate": {"$month": 'new Date("'+searchdate+'")'}} ))
</code></pre>
<p>results in:</p>
<blockquote>
<p>pymongo.errors.OperationFailure: Unsupported projection option:
eventdate: { $month: "new Date("2019-09-13")" }</p>
</blockquote>
<p>Tried:</p>
<pre><code>isodate = datetime.datetime.now()
isodate = isodate.replace(tzinfo=pytz.utc).isoformat()
eventscreated = list(db.event.find({"creator._id":ObjectId(myid)},
{"eventdate": {"$month": 'ISODate("'+isodate+'")'}} ))
</code></pre>
<p>results in:</p>
<blockquote>
<p>pymongo.errors.OperationFailure: Unsupported projection option:
eventdate: { $month: "ISODate("2019-09-13T07:17:17.222737+00:00")" }</p>
</blockquote>
<p>Can anyone help?</p>
|
<p>You need to use an aggregation pipeline with the <a href="https://docs.mongodb.com/manual/reference/operator/aggregation/month/index.html" rel="nofollow noreferrer">$month</a> operator. The snippet creates a test datetime for each day of the year and the aggregation pipeline counts the number of days in each month.</p>
<pre><code>import pymongo
import datetime
db = pymongo.MongoClient()['mydatabase']
# Data setup
db.testdate.delete_many({})
for d in range(0, 365):
dt = datetime.datetime(2019, 1, 1, 12, 0) + datetime.timedelta(days=d)
db.testdate.insert_one({"Testdate": dt})
# Aggregation pipeline
months = db.testdate.aggregate(
[
{"$project":
{"month": {"$month": "$Testdate"}}
},
{"$group":
{"_id": "$month", "count": {"$sum": 1}}
},
{"$sort":
{"_id": 1}
}
]
)
for item in months:
print(item)
</code></pre>
|
python-3.x|mongodb|flask|pymongo
| 1 |
1,902,238 | 56,052,499 |
Split columns in data frame in multiple columns
|
<p>I have created a data frame from a dictionary; now I want to split 2 of its columns into 4 columns.</p>
<p>Initially there are 3 columns in the data frame, i.e. Parent, Child and Score. I want to split the "Parent" column into "col1" and "col2" and the "Child" column into "col3" and "col4", using '+' as the delimiter.</p>
<p>I have tried methods like the following; can anyone help?</p>
<pre><code>def request_service(Sentence,id):
dict_ip = {"id": "2018 Regression", "Sentence": "What is the customers Issue/problem? Customer spoke to our mobile banking help
payload = json.dumps(dict_ip)
print(payload)
response = requests.request("POST", url, data=payload, headers=headers)
dict_list = json.loads(response.text)
print(dict_list)
#dict_list = {'results': [{'Parent': 'A+B', 'child': 'C+D', 'score': 0.36283498590263946}, {'Parent': 'D+E', 'child': 'A+B', 'score': 0.10505374311256221}, {'Parent': 'N+M', 'child': 'Q+R', 'score': 0.09593593898873307}]}
df_op = pd.DataFrame(columns=['Parent', 'Child', 'Score'])
for idx, result in enumerate(dict_list['results']):
df_op.loc[idx] = [result['Parent'], result['child'], result['score']]
df_op.Score = df_op.Score.round(2)
return df_op
</code></pre>
<p>Expected output is Datframe with 5 columns</p>
<pre><code> col1 col2 col3 col4 Score
A B C D 0.36
D E A B 0.10
N M Q R 0.09
</code></pre>
|
<p>I have found a way to split the columns:</p>
<pre><code>df_op = pd.DataFrame(columns=['Parent', 'Child', 'Score'])
for idx, result in enumerate(dict_list['results']):
df_op.loc[idx] = [result['Parent'], result['child'], result['score']]
df3 = pd.DataFrame(df_op.Parent.str.split('+', expand=True).values,
columns=['col1', 'col2'])
df4 = pd.DataFrame(df_op.Child.str.split('+', expand=True).values,
columns=['col3', 'col4'])
df_op_mergerd = pd.concat([df3, df4, df_op], axis=1)
df_op_mergerd.drop(['Parent','Child'], axis=1, inplace=True)
</code></pre>
|
python|dataframe|dictionary
| 0 |
1,902,239 | 18,669,024 |
Passing variable with Python CGI
|
<p>I am using Python CGI to handle a simple web interface that has the following function:</p>
<p>User enters an ID in a form-field, clicks submit and gets a list of variables back from a dictonary containing lists.
With a textfield the user can enter another item, click submit to append the value to the list corresponding with the previously entered ID. </p>
<p>The first submit works well: it checks whether the ID exists in the dictionary and then prints the form field for adding a new item. But the value of the ID is lost at that point. What would be the best way to pass that value on?</p>
<p>Would posting the first value as GET and retrieving it through the URL be a solution to this or is there a better way?</p>
<p>I have to use python & cgi for this project, without any additional framework.</p>
<pre><code>data = cgi.FieldStorage()
if data.has_key('ID'):
    myID = data['ID'].value
    if testme(myID):  # checks if ID exists in dictionary
        printmessages(myID)
        addmessage(myID)
elif data.has_key('newitem'):
    newitem = data['newitem'].value
    insertmessage(myID, newitem)  # insert newitem into the dictionary under myID
</code></pre>
|
<p>Usually the original id would be passed back in the second form as a hidden input field <code><input name="id" type="hidden" value="myID"></code> to ensure that it is passed back in the subsequent insert form.</p>
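<p>A minimal sketch of what <code>addmessage()</code> could print (the script name <code>myscript.py</code> is hypothetical; the field names match the question's code). The second request then carries both <code>ID</code> and <code>newitem</code>, so the handler can read <code>data['ID'].value</code> again:</p>
<pre><code>def addmessage(myID):
    # Re-emit the ID as a hidden field so the next submit carries it back.
    print '<form method="post" action="myscript.py">'
    print '<input type="hidden" name="ID" value="%s">' % myID
    print '<input type="text" name="newitem">'
    print '<input type="submit" value="Add item">'
    print '</form>'
</code></pre>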
|
python|forms|web|cgi|form-submit
| 1 |
1,902,240 | 18,369,363 |
Python Tab Completion Pager
|
<p>I've tried searching for this answer, and either I don't have the right word combination or I simply can't find it, but I apologize if it's a repeat:</p>
<p>When in a python interpreter (running python[3] from the command line, i.e. not IPython or anything), how do I get the interpreter to "page" my tab completions when there are too many to fit on one screen?</p>
<p>For example, if I</p>
<pre><code>import os
os.<tab>
</code></pre>
<p>on some computers, it will fill the screen with columnar output of all os.* options, and the bottom line is "More" (as if I've run 'more' or 'less' on the output, so to speak), and I page through with Enter or the space bar. On my current OS, though, it just spits out all the possibilities, which requires me to scroll up to see everything. </p>
<p>Is there a simple function that I should have included in, say, my .pythonstartup that would alleviate this? All I have in there now is:</p>
<pre><code>import readline
readline.parse_and_bind("tab: complete")
</code></pre>
<p>which obviously isn't enough to get what I want; I get tab completion, but not paged output.</p>
|
<p>use <a href="http://docs.python.org/2/library/readline#readline.set_completion_display_matches_hook" rel="nofollow">readline.set_completion_display_matches_hook</a> to set the display function.</p>
<p>Here's a quick-and-dirty example that just pipes all matches through <code>column</code> to format them in columns and uses <code>less</code> to display.</p>
<pre><code>import readline
import subprocess
import rlcompleter
def display_matches(substitutions, matches, longest_match_length):
m = '\n'.join(matches) + '\n'
proc = subprocess.Popen('column | less', shell=True, stdin=subprocess.PIPE)
# python2:
proc.communicate(m)
# python3:
# proc.communicate(m.encode('utf-8'))
readline.set_completion_display_matches_hook(display_matches)
readline.parse_and_bind('tab: complete')
</code></pre>
|
python|interpreter|readline|tab-completion
| 3 |
1,902,241 | 69,387,606 |
Using Python Cassandra Driver for large no. of Queries
|
<p>We have a script which talks to Scylla (a Cassandra drop-in replacement). The script is supposed to run for a few thousand systems and runs a few thousand queries to get its required data. However, after some time the script crashes with this error:</p>
<pre><code>2021-09-29 12:13:48 Could not execute query because of : errors={'x.x.x.x': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=x.x.x.x
2021-09-29 12:13:48 Trying for : 4th time
Traceback (most recent call last):
File ".../db_base.py", line 92, in db_base
ret_val = SESSION.execute(query)
File "cassandra/cluster.py", line 2171, in cassandra.cluster.Session.execute
File "cassandra/cluster.py", line 4062, in cassandra.cluster.ResponseFuture.result
cassandra.OperationTimedOut: errors={'x.x.x.x': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=x.x.x.x
</code></pre>
<p>The DB Connection code:</p>
<pre><code>def db_base(current_keyspace, query, try_for_times, current_IPs, port):
global SESSION
if SESSION is None:
# This logic to ensure given number of retrying runs on failure of connecting to the Cluster
for i in range(try_for_times):
try:
cluster = Cluster(contact_points = current_IPs, port=port)
session = cluster.connect() # error can be encountered in this command
break
except NoHostAvailable:
print("No Host Available! Trying for : " + str(i) + "th time")
if i == try_for_times - 1:
# shutting down cluster
cluster.shutdown()
raise db_connection_error("Could not connect to the cluster even in " + str(try_for_times) + " tries! Exiting")
SESSION = session
# This logic to ensure given number of retrying runs in the case of failing the actual query
for i in range(try_for_times):
try:
# setting keyspace
SESSION.set_keyspace(current_keyspace)
# execute actual query - error can be encountered in this
ret_val = SESSION.execute(query)
break
except Exception as e:
print("Could not execute query because of : " + str(e))
print("Trying for : " + str(i) + "th time")
if i == (try_for_times -1):
# shutting down session and cluster
cluster.shutdown()
session.shutdown()
raise db_connection_error("Could not execute query even in " + str(try_for_times) + " tries! Exiting")
return ret_val
</code></pre>
<p>How can this code be improved so it can sustain this large number of queries? Or should we look into other tools/approaches to get this data? Thank you.</p>
|
<p>The Client session timeout indicates that the driver is timing out before the server does or - should it be overloaded - that Scylla hasn't replied back the timeout to the driver. There are a couple of ways to figure this out:</p>
<p>1 - Ensure that your default_timeout is higher than Scylla enforced timeouts in /etc/scylla/scylla.yaml</p>
<p>2 - Check the Scylla logs for any sign of overload. If there is, consider throttling your requests to find a balanced sweet spot to ensure they no longer fail. If it continues, consider resizing your instances.</p>
<p>In addition to this, it is worth mentioning that your sample code does not use prepared statements, token awareness and other best practices as described under <a href="https://docs.datastax.com/en/developer/python-driver/3.19/api/cassandra/policies/" rel="noreferrer">https://docs.datastax.com/en/developer/python-driver/3.19/api/cassandra/policies/</a>, which will certainly improve your overall throughput down the road.</p>
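<p>A hedged sketch of those practices with the Python driver (contact point, keyspace, table and column names are illustrative):</p>
<pre><code>from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
from cassandra.policies import DCAwareRoundRobinPolicy, TokenAwarePolicy

# Raise the client-side timeout and route each request to a replica node.
profile = ExecutionProfile(
    load_balancing_policy=TokenAwarePolicy(DCAwareRoundRobinPolicy()),
    request_timeout=60,
)
cluster = Cluster(contact_points=["x.x.x.x"],
                  execution_profiles={EXEC_PROFILE_DEFAULT: profile})
session = cluster.connect("my_keyspace")

# Prepare once, execute many times: parsing happens once server-side,
# and bound values let token awareness pick the right node.
prepared = session.prepare("SELECT col FROM my_table WHERE id = ?")
row = session.execute(prepared, ("some-id",)).one()
</code></pre>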
<p>You can find further information on Scylla docs:
<a href="https://docs.scylladb.com/using-scylla/drivers/cql-drivers/scylla-python-driver/" rel="noreferrer">https://docs.scylladb.com/using-scylla/drivers/cql-drivers/scylla-python-driver/</a>
and Scylla University
<a href="https://university.scylladb.com/courses/using-scylla-drivers/lessons/coding-with-python/" rel="noreferrer">https://university.scylladb.com/courses/using-scylla-drivers/lessons/coding-with-python/</a></p>
|
python|optimization|connection|scylla
| 6 |
1,902,242 | 55,379,950 |
Create a file throught code in Kubernetes
|
<p>I need (or at least I think I need) to create a file (it could be a temp file, but that did not work while I was testing) into which I can copy a file stored in Google Cloud Storage.</p>
<p>This file is a geojson file, and after loading it I will read it using geopandas.</p>
<p>The code will run inside Kubernetes on Google Cloud.</p>
<hr>
<blockquote>
<p>The code:</p>
</blockquote>
<pre><code>def geoalarm(self,input):
from shapely.geometry import Point
import uuid
from google.cloud import storage
import geopandas as gpd
fp = open("XXX.geojson", "w+")
storage_client = storage.Client()
bucket = storage_client.get_bucket('YYY')
blob = bucket.get_blob('ZZZ.geojson')
blob.download_to_file(fp)
fp.seek(0)
PAIS = gpd.read_file(fp.name)
(dictionaryframe,_)=input
try:
place = Point((float(dictionaryframe["lon"])/100000), (float(dictionaryframe["lat"]) / 100000))
<...>
</code></pre>
<p>The questions are:</p>
<p>How could I create the file in Kubernetes?</p>
<p>Or, how could I use the content of the file as a string (if I use download_as_string) in geopandas, to do the equivalent of <code>geopandas.read_file(name)</code>?</p>
<hr>
<blockquote>
<p>Extra</p>
</blockquote>
<p>I tried using:</p>
<pre><code>PAIS = gpd.read_file("gs://bucket/xxx.geojson")
</code></pre>
<p>But I have the following error:</p>
<pre><code>DriverError: '/vsigs/bucket/xxx.geojson' does not exist in the file system, and is not recognized as a supported dataset name.
</code></pre>
|
<p>A VERY general overview of the pattern: </p>
<p>You can start by putting the code on a git repository. On Kubernetes, create a deployment/pod with the ubuntu image and make sure you install python, your python dependencies and pull your code in an initialization script, with the final line invoking python to run your code. In the "command" attribute of your pod template, you should use /bin/bash to run your script. Assuming you have the correct credentials, you will be able to grab the file from Google Storage and process it. To debug, you can attach to the running container using "kubectl exec". </p>
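<p>Regarding the second question in the post (using the blob's content directly): <code>geopandas.read_file</code> accepts file-like objects, so a sketch like the following avoids the file on disk entirely, assuming a reasonably recent <code>google-cloud-storage</code> (<code>download_as_bytes</code>; older releases call it <code>download_as_string</code>):</p>
<pre><code>import io

import geopandas as gpd
from google.cloud import storage

client = storage.Client()
blob = client.get_bucket("YYY").get_blob("ZZZ.geojson")
# Hand the bytes to geopandas through an in-memory buffer.
PAIS = gpd.read_file(io.BytesIO(blob.download_as_bytes()))
</code></pre>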
<p>Hope this helps!</p>
|
python|kubernetes|google-cloud-platform|geopandas
| 0 |
1,902,243 | 55,566,739 |
'text()' from xpath returns error, invalid argument
|
<p>Using <code>contains('text', 'some text')</code> works, but I want to check whether it contains only what I'm inserting. I found <code>contains(text()="some text")</code>, but it returns an invalid-argument error. Can you tell me what the problem is?</p>
<pre><code>lxml.etree.XPathEvalError: Invalid number of arguments
</code></pre>
<p>Thanks in advance.</p>
<pre><code>from requests_html import HTMLSession
session = HTMLSession()
html_source = session.get('any website')
passengers = html_source.html.xpath('*//div//h2[contains(text()="Passengers")]//ancestor::div[contains(@class, "wpb_wrapper")]')[1:-4]
for passenger in passengers:
print(passenger.attrs)
</code></pre>
|
<p><code>contains()</code> takes two arguments, as in <code>contains(text(), "some text")</code>; writing <code>contains(text()="some text")</code> passes only one expression, which is why lxml reports "Invalid number of arguments". For example:</p>
<pre><code>r = session.get('https://stackoverflow.com/questions/55566739/text-from-xpath-returns-error-invalid-argument')
question = r.html.xpath('//div[@id="question-header"]/h1[contains(text(),"from xpath returns")]')
</code></pre>
<p>This is the output: <code>["'text()' from xpath returns error, invalid argument"]</code></p>
<p>See the example and change your XPath accordingly. I could have made it clearer had you given a proper URL; the URL I used is this page's, and the output is your question's title. Hope this helps.</p>
|
python|web-scraping|python-requests-html
| 0 |
1,902,244 | 54,186,576 |
Generate iterable choices tuple from model class in Django
|
<p>I'm working on a django model and I want to generate a tuple out of my model instances : </p>
<p><strong>model.py</strong></p>
<pre><code>class Extra(models.Model):
extra_n = models.CharField(max_length=200)
extra_price = models.IntegerField(default=0)
def __str__(self):
return self.extra_n
</code></pre>
<p>The output I'm expecting, built from the entries users have added through the associated form:</p>
<pre><code>choices = (('extra_price 1','extra_n1'),
('extra_price 2','extra_n2'),
('extra_price 3','extra_n3')
)
</code></pre>
|
<p>You can make an ORM call with <a href="https://docs.djangoproject.com/en/dev/ref/models/querysets/#values-list" rel="nofollow noreferrer"><strong><code>.values_list(..)</code></strong> [Django-doc]</a>:</p>
<pre><code>tuple(Extra.objects.<b>values_list('extra_price', 'extra_n')</b>)</code></pre>
<p>That being said, Django forms can work with a <a href="https://docs.djangoproject.com/en/dev/ref/forms/fields/#django.forms.ModelChoiceField" rel="nofollow noreferrer"><strong><code>ModelChoiceField</code></strong> [Django-doc]</a> that will make choices itself based on the model (or a filtered queryset if you provide one).</p>
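<p>A minimal sketch (the form name is illustrative):</p>
<pre><code>from django import forms

class ExtraChoiceForm(forms.Form):
    # Renders one option per Extra row, labelled through Extra.__str__.
    extra = forms.ModelChoiceField(queryset=Extra.objects.all())
</code></pre>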
|
python|django|django-models|django-forms|tuples
| 2 |
1,902,245 | 58,582,171 |
Python HTTP Server Serves Two Paths Using Different Kinds of Handlers
|
<p>From other SO posts, it's clear how to <a href="https://stackoverflow.com/questions/39801718/how-to-run-a-http-server-which-serve-a-specific-path">serve content from a specific directory</a>, and how to <a href="https://stackoverflow.com/questions/18346583/how-do-i-map-incoming-path-requests-when-using-httpserver">map an incoming path to different <code>do_GET</code> handlers</a>. </p>
<p>To expand on the second question in a way relating to the first, how do you map paths to different kinds of handlers? Specifically, I'd like to map one path to a <code>do_GET</code> handler, and another to just serve content from a specific directory.</p>
<p>If that is not possible, what's the easiest way to serve the two different kinds of content? I know the two could be run in two threads each serving a different port, but that's not very neat.</p>
|
<p>I've got an answer by tracking the code from <a href="https://stackoverflow.com/a/46332163/362754">the first reference question answered by Jaymon</a>, and incorporating the code from <a href="https://stackoverflow.com/a/18346685/362754">the second reference question</a>. </p>
<p>The sample follows. It serves content on the local machine from the directory <code>web/</code> to the URL base path <code>/file/</code>, and handles requests with URL base path <code>/api</code> in the user-supplied method <code>do_GET()</code> itself. Initially the code was derived from <a href="https://2ality.com/2014/06/simple-http-server.html" rel="nofollow noreferrer">a sample on the web by Dr. Axel Rauschmayer</a>. </p>
<pre><code>#!/usr/bin/env python
# https://2ality.com/2014/06/simple-http-server.html
# https://stackoverflow.com/questions/39801718/how-to-run-a-http-server-which-serves-a-specific-path
from SimpleHTTPServer import SimpleHTTPRequestHandler
from BaseHTTPServer import HTTPServer as BaseHTTPServer
import os
PORT = 8000
class HTTPHandler(SimpleHTTPRequestHandler):
"""This handler uses server.base_path instead of always using os.getcwd()"""
def translate_path(self, path):
if path.startswith(self.server.base_url_path):
path = path[len(self.server.base_url_path):]
if path == '':
path = '/'
else:
#path = '/'
return None
path = SimpleHTTPRequestHandler.translate_path(self, path)
relpath = os.path.relpath(path, os.getcwd())
fullpath = os.path.join(self.server.base_local_path, relpath)
return fullpath
def do_GET(self):
path = self.path
if (type(path) is str or type(path) is unicode) and path.startswith('/api'):
# call local handler
self.send_response(200)
self.send_header('Content-type', 'text/html')
self.end_headers()
# Send the html message
self.wfile.write("<b> Hello World !</b>")
self.wfile.close()
return
elif (type(path) is str or type(path) is unicode) and path.startswith(self.server.base_url_path):
return SimpleHTTPRequestHandler.do_GET(self)
elif (type(path) is str or type(path) is unicode) and path.startswith('/'):
self.send_response(441)
self.end_headers()
self.wfile.close()
return
else:
self.send_response(442)
self.end_headers()
self.wfile.close()
return
Handler = HTTPHandler
Handler.extensions_map.update({
'.webapp': 'application/x-web-app-manifest+json',
});
class HTTPServer(BaseHTTPServer):
"""The main server, you pass in base_path which is the path you want to serve requests from"""
def __init__(self, base_local_path, base_url_path, server_address, RequestHandlerClass=HTTPHandler):
self.base_local_path = base_local_path
self.base_url_path = base_url_path
BaseHTTPServer.__init__(self, server_address, RequestHandlerClass)
web_dir = os.path.join(os.path.dirname(__file__), 'web')
httpd = HTTPServer(web_dir, '/file', ("", PORT))
print "Serving at port", PORT
httpd.serve_forever()
</code></pre>
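<p>Assuming the server runs locally on port 8000, a request to <code>http://localhost:8000/api</code> is answered by <code>do_GET</code> directly, a request under <code>http://localhost:8000/file/</code> is served from the local <code>web/</code> directory, and any other path gets the custom 441 status defined in the handler.</p>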
|
python|httpserver|simplehttpserver
| 2 |
1,902,246 | 58,207,759 |
How do I specifically use \n for the print function?
|
<pre><code>a = int(input("Enter a number: "))
print("The next number for the number", a, "is", a+1)
print("The previous number for the number", a, "is", a-1)
</code></pre>
<p>I'd like to use just a one-line command with the print function, in which I tell it to print on 2 lines. I've tried the following code, but it doesn't work properly, because it prints the second line after a space. I'd like to remove this useless space.</p>
<pre><code>print("The next number for the number", a, "is", a+1,"\n","The previous number for the number", a, "is", a-1)
</code></pre>
|
<p>You can do the following:</p>
<pre><code>print("The next number for the number %i \nThe next number for the number %i" % (int(a)+1, int(a)-1))
</code></pre>
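<p>An f-string (Python 3.6+) does the same thing in arguably cleaner form:</p>
<pre><code>print(f"The next number for the number {a} is {a+1}\nThe previous number for the number {a} is {a-1}")
</code></pre>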
|
python|printing|newline
| 0 |
1,902,247 | 45,322,194 |
Most efficient way to reshape tensor into sequences
|
<p>I am working with audio in TensorFlow, and would like to obtain a series of sequences which could be obtained from <em>sliding a window</em> over my data, so to speak. Examples to illustrate my situation:</p>
<p><strong>Current Data Format:</strong></p>
<p>Shape = [batch_size, num_features]</p>
<pre><code>example = [
[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[10, 11, 12],
[13, 14, 15]
]
</code></pre>
<p><strong>What I want:</strong></p>
<p>Shape = [batch_size - window_length + 1, window_length, num_features]</p>
<pre><code>example = [
[
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
],
[
[4, 5, 6],
[7, 8, 9],
[10, 11, 12]
],
[
[7, 8, 9],
[10, 11, 12],
[13, 14, 15]
],
]
</code></pre>
<p>My current solution is to do something like this:</p>
<pre><code>list_of_windows_of_data = []
for x in range(batch_size - window_length + 1):
list_of_windows_of_data.append(tf.slice(data, [x, 0], [window_length,
num_features]))
windowed_data = tf.squeeze(tf.stack(list_of_windows_of_data, axis=0))
</code></pre>
<p>And this does the transform. However, it also creates 20,000 operations which slows TensorFlow down a lot when creating a graph. If anyone else has a fun and more efficient way to do this, please do share.</p>
|
<p>You can do that using <code>tf.map_fn</code> as follows:</p>
<pre><code>example = tf.constant([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[10, 11, 12],
[13, 14, 15]
]
)
res = tf.map_fn(lambda i: example[i:i+3], tf.range(example.shape[0]-2), dtype=tf.int32)
sess=tf.InteractiveSession()
res.eval()
</code></pre>
<p>This prints</p>
<pre><code>array([[[ 1, 2, 3],
[ 4, 5, 6],
[ 7, 8, 9]],
[[ 4, 5, 6],
[ 7, 8, 9],
[10, 11, 12]],
[[ 7, 8, 9],
[10, 11, 12],
[13, 14, 15]]])
</code></pre>
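<p>If graph size is still a concern, a gather-based sketch (assuming the same window length of 3) builds every window with just a couple of ops instead of one slice per window:</p>
<pre><code>window_length = 3
num_windows = example.shape[0] - window_length + 1
# indices[i, j] = i + j, i.e. the row numbers that make up the i-th window
indices = tf.range(num_windows)[:, None] + tf.range(window_length)[None, :]
windowed = tf.gather(example, indices)  # shape [num_windows, window_length, num_features]
</code></pre>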
|
tensorflow
| 2 |
1,902,248 | 41,348,874 |
Python O365 send email with HTML file
|
<p>I'm using O365 for Python.
I am sending an email and building the body using the setBodyHTML() function. However, at present I need to write the actual HTML code inside the function. I don't want to do that. I want to just have Python look at an HTML file I saved somewhere and send an email using that file as the body. Is that possible? Or am I confined to copy/pasting my HTML into that function? I'm using Office 365 for business. Thanks.</p>
<p>In other words instead of this: <code>msg.setBodyHTML("<h3>Hello</h3>")</code> I want to be able to do this: <code>msg.setBodyHTML("C:\somemsg.html")</code></p>
|
<p>I guess you can assign the file content to a variable first, i.e.:</p>
<pre><code>with open('C:/somemsg.html', 'r') as f:
    content = f.read()

msg.setBodyHTML(content)
</code></pre>
|
python|html|email|office365
| 1 |
1,902,249 | 41,629,651 |
Implementing recursive functions for trees in python class
|
<p>I have created a class <code>Tree</code> and a class <code>Node</code>. Now I needed to implement <code>preOrder</code>, <code>postOrder</code> and <code>inOrder</code> traversals. I did it using private functions. But is there a way to do the same using only one function? </p>
<pre><code>class Node:
def __init__(self, data):
self.left = None
self.right = None
self.data = data
class Tree:
def __init__(self):
self.root = None
# Private helper functions
def __insert(self, data, root):
if data < root.data:
if root.left is None:
root.left = Node(data)
else:
self.__insert(data, root.left)
elif data >= root.data:
if root.right is None:
root.right = Node(data)
else:
self.__insert(data, root.right)
# Traversals
def __preOrder(self, root):
print root.data
if root.left:
self.__preOrder(root.left)
if root.right:
self.__preOrder(root.right)
# Wrapper Functions
def insert(self, data):
if self.root == None:
self.root = Node(data)
else:
self.__insert(data, self.root)
def preOrder(self):
self.__preOrder(self.root)
tree = Tree()
print "Enter elements to be inserted in the tree(End with a -1): "
while True:
elem = int(raw_input())
if elem == -1:
break
tree.insert(elem)
print "Preorder traversal: "
tree.preOrder()
</code></pre>
<p>Here I have to use the helper functions because I don't want the user to be providing the root element explicitly. </p>
|
<p>Yes, you can implement all 3 types of traversal in a single function. I've turned the traversal functions into generators to make them more versatile. So instead of printing their data they are iterators that yield their data. This lets you process the data as it's yielded, or you can capture it into a list (or other collection). </p>
<p>In Python 2 your classes should inherit from <code>object</code>, otherwise you get old-style classes, which are rather limited compared to new-style classes (Python 3 only has new-style classes).</p>
<p>BTW, there's no need to use double underscores for the private functions (which invokes Python's <a href="https://stackoverflow.com/questions/7456807/python-name-mangling">name-mangling</a> machinery), a single leading underscore is adequate.</p>
<p>I've also added simple <code>__repr__</code> methods to the classes, which can be handy during development & debugging.</p>
<pre><code>class Node(object):
def __init__(self, data):
self.left = None
self.right = None
self.data = data
def __repr__(self):
return repr((self.data, self.left, self.right))
class Tree(object):
def __init__(self):
self.root = None
def __repr__(self):
return repr(self.root)
# Private helper functions
def _insert(self, data, root):
if data < root.data:
if root.left is None:
root.left = Node(data)
else:
self._insert(data, root.left)
else: # data >= root.data:
if root.right is None:
root.right = Node(data)
else:
self._insert(data, root.right)
def _traverse(self, root, mode):
if mode == 'pre':
yield root.data
if root.left:
for u in self._traverse(root.left, mode):
yield u
if mode == 'in':
yield root.data
if root.right:
for u in self._traverse(root.right, mode):
yield u
if mode == 'post':
yield root.data
# Wrapper Functions
def insert(self, data):
if self.root == None:
self.root = Node(data)
else:
self._insert(data, self.root)
def preOrder(self):
for u in self._traverse(self.root, 'pre'):
yield u
def inOrder(self):
for u in self._traverse(self.root, 'in'):
yield u
def postOrder(self):
for u in self._traverse(self.root, 'post'):
yield u
# Test
tree = Tree()
for elem in '31415926':
tree.insert(elem)
print tree
print "Preorder traversal: "
print list(tree.preOrder())
print "InOrder Traversal: "
print list(tree.inOrder())
print "PostOrder Traversal: "
print list(tree.postOrder())
</code></pre>
<p><strong>output</strong></p>
<pre><code>('3', ('1', None, ('1', None, ('2', None, None))), ('4', None, ('5', None, ('9', ('6', None, None), None))))
Preorder traversal:
['3', '1', '1', '2', '4', '5', '9', '6']
InOrder Traversal:
['1', '1', '2', '3', '4', '5', '6', '9']
PostOrder Traversal:
['2', '1', '1', '6', '9', '5', '4', '3']
</code></pre>
<p>Here's an example of processing the data as it's yielded:</p>
<pre><code>for data in tree.inOrder():
print data
</code></pre>
<hr>
<p>FWIW, this code would be a lot cleaner in Python 3 because we could use the <code>yield from</code> syntax instead of those <code>for</code> loops. So instead of </p>
<pre><code>for u in self._traverse(root.left, mode):
yield u
</code></pre>
<p>we could do</p>
<pre><code>yield from self._traverse(root.left, mode)
</code></pre>
|
python|python-2.7|recursion|tree
| 6 |
1,902,250 | 41,655,163 |
python code to get (latest) file with timestamp as name to attach to a e-mail
|
<ol>
<li><p>I have a BASH script that takes a photo with a webcam.</p>
<pre><code>#!/bin/bash
# datum (in swedish) = date
datum=$(date +'%Y-%m-%d-%H:%M')
fswebcam -r --no-banner /home/pi/webcam/$datum.jpg
</code></pre></li>
<li><p>I have Python code that runs the BASH script when it receives a signal from a motion detector and also calls a module which sends the e-mail </p>
<pre><code>import RPi.GPIO as GPIO
import time
import gray_camera
import python_security_mail
GPIO.setmode(GPIO.BCM)
GPIO.setup(23, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
while True:
if(GPIO.input (23)== 1):
print('discovered!!!')
gray_camera.camera()
time.sleep(1)
python_security_mail.mail()
time.sleep(1.5)
GPIO.cleanup()
</code></pre></li>
</ol>
<p>And the mail code:</p>
<pre><code> import os
import smtplib
import userpass
from email.mime.text import MIMEText
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart
def SendMail(ImgFileName):
img_data = open('/home/pi/solstol.png', 'rb').read()
msg = MIMEMultipart()
msg['Subject'] = 'subject'
msg['From'] = userpass.fromaddr
msg['To'] = userpass.toaddr
fromaddr = userpass.fromaddr
toaddr = userpass.toaddr
text = MIMEText("test")
msg.attach(text)
image = MIMEImage(img_data, name=os.path.basename('/home/pi/solstol.png'))
msg.attach(image)
s = smtplib.SMTP('smtp.gmail.com', 587)
s.ehlo()
s.starttls()
s.ehlo()
s.login(fromaddr, userpass.password)
s.sendmail(fromaddr, toaddr, msg.as_string())
s.quit()
</code></pre>
<p>I have just learned how to attach a file to an e-mail.
The code works so far. But I would like to get the latest photo taken and attach it to the email.</p>
<p>I am still just a beginner in Python. The code in here I have mostly copied from various tutorials and changed a little bit to work for me. I have no deep understanding of all of this; in a few parts I may perhaps have intermediate knowledge.
I have no idea how to write the code to get Python to find the file (photo in jpg format) I want and attach it to the mail.</p>
<p>So I would be very glad if there is someone who can guide me on how to fill in the missing part. </p>
<hr>
<p>I put in the wrong code for the mail function. I got a slightly better result with this one:</p>
<pre><code>#!/usr/bin/python
import userpass
import smtplib
from email.MIMEMultipart import MIMEMultipart
from email.MIMEBase import MIMEBase
from email.MIMEText import MIMEText
from email import Encoders
import os
def send_a_picture():
gmail_user = userpass.fromaddr
gmail_pwd = userpass.password
def mail(to, subject, text, attach):
msg = MIMEMultipart()
msg['From'] = gmail_user
msg['To'] = userpass.toaddr
msg['Subject'] = subject
msg.attach(MIMEText(text))
part = MIMEBase('application', 'octet-stream')
part.set_payload(open(attach, 'rb').read())
Encoders.encode_base64(part)
part.add_header('Content-Disposition',
'attachment; filename="%s"' % os.path.basename(attach))
msg.attach(part)
mailServer = smtplib.SMTP("smtp.gmail.com", 587)
mailServer.ehlo()
mailServer.starttls()
mailServer.ehlo()
mailServer.login(gmail_user, gmail_pwd)
mailServer.sendmail(gmail_user, to, msg.as_string())
mailServer.close()
mail(userpass.toaddr,
"One step further",
"Hello. Thanks for the link. I am a bit closer now.",
"solstol.png")
send_a_picture()
</code></pre>
<p>Edit.</p>
<p>Hello. I have now added seconds to the filename. If there is no picture in the folder when running <code>glob.glob("/home/pi/*.jpg")</code> I get:</p>
<pre><code>Traceback (most recent call last):
  File "", line 1, in <module>
    last_photo_taken = glob.glob("/home/pi/*.jpg")[-1]
IndexError: list index out of range
</code></pre>
<p>When I take a picture I get '/home/pi/2017-01-16-23:39:46.jpg' as the return value.
If I then take another picture the return is still '/home/pi/2017-01-16-23:39:46.jpg'.
Only if I restart the shell do I get the next picture as the return value. Thank you for your help today. I will write more tomorrow.</p>
|
<p>The format you chose <code>'%Y-%m-%d-%H:%M'</code> has the nice property that alphabetical order and date order are the same.</p>
<p>So since your files are located in <code>/home/pi/webcam</code> and have the jpg extension, you could compute the last photo like this:</p>
<pre><code>import glob
# glob returns files in arbitrary order, so sort before taking the last one
last_photo_taken = sorted(glob.glob("/home/pi/webcam/*.jpg"))[-1]
</code></pre>
<p>Attach the <code>last_photo_taken</code> file to your e-mail.</p>
<p>Note: photos taken at the same minute will overlap: last photo overwrites the previous one. You should consider adding seconds to the file name.</p>
<p>Note: even if your files weren't conveniently named with the date, you could sort the images by modification date and take the last one:</p>
<pre><code>import os
last_photo_taken = sorted(glob.glob("/home/pi/webcam/*.jpg"), key=os.path.getmtime)[-1]
</code></pre>
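<p>To also survive an empty directory (an <code>IndexError</code> like the one in your edit happens when the glob matches nothing), a small helper sketch:</p>
<pre><code>import glob
import os

def latest_photo(directory='/home/pi/webcam'):
    photos = glob.glob(os.path.join(directory, '*.jpg'))
    if not photos:
        return None  # nothing to attach yet
    return max(photos, key=os.path.getmtime)
</code></pre>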
|
python|bash|email-attachments
| 2 |
1,902,251 | 41,554,540 |
How to set image shape in Python for Tensorflow prediction?
|
<p>I'm dealing with the following error:</p>
<pre><code>ValueError: Cannot feed value of shape (32, 32, 3) for Tensor 'Placeholder:0', which has shape '(?, 32, 32, 3)'
</code></pre>
<p>The placeholder is set to: <code>x = tf.placeholder(tf.float32, (None, 32, 32, 3))</code></p>
<p>And the image (when running <code>print(img1.shape)</code>) has the output: <code>(32, 32, 3)</code></p>
<p>How can I update the image to be aligned when running: <code>print(sess.run(correct_prediction, feed_dict={x: img1}))</code></p>
|
<p>The placeholder <code>x</code> in your program represents a <strong>batch</strong> of 32x32 (presumably) RGB images, for which predictions will be computed in a single step. If you want to compute a prediction on a single image—i.e. an array of shape <code>(32, 32, 3)</code>—you must reshape it to have an additional leading dimension. There are many ways to do this, but <a href="https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#numpy.newaxis" rel="nofollow noreferrer"><code>np.newaxis</code></a> is one nice way to do it:</p>
<pre><code> img1 = ... # Array of shape (32, 32, 3)
img1_as_batch = img1[np.newaxis, ...] # Array of shape (1, 32, 32, 3)
print(sess.run(correct_prediction, feed_dict={x: img1_as_batch}))
</code></pre>
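<p>Two equivalent ways to add the leading batch dimension, both standard NumPy:</p>
<pre><code>img1_as_batch = np.expand_dims(img1, axis=0)  # shape (1, 32, 32, 3)
img1_as_batch = img1.reshape(1, 32, 32, 3)    # same result
</code></pre>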
|
python|tensorflow
| 1 |
1,902,252 | 57,049,675 |
How to specify number of layers in keras?
|
<p>I'm trying to define a fully connected neural network in Keras using the TensorFlow backend. I have sample code but I don't know what it means.</p>
<pre><code>model = Sequential()
model.add(Dense(10, input_dim=x.shape[1], kernel_initializer='normal', activation='relu'))
model.add(Dense(50, input_dim=x.shape[1], kernel_initializer='normal', activation='relu'))
model.add(Dense(20, input_dim=x.shape[1], kernel_initializer='normal', activation='relu'))
model.add(Dense(10, input_dim=x.shape[1], kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))
model.add(Dense(y.shape[1],activation='softmax'))
</code></pre>
<p>From the above code I want to know the number of inputs to my network, the number of outputs, the number of hidden layers and the number of neurons in each layer. And what is the number coming directly after <code>model.add(Dense(</code>? Assume x.shape[1]=60.
What is the name of this network exactly? Should I call it a fully connected network or a convolutional network?</p>
|
<p>That should be quite easy.</p>
<ol>
<li><p>For knowing about the model's inputs and outputs use,</p>
<pre><code>input_tensor = model.input
output_tensor = model.output
</code></pre>
<p>You can print these <code>tf.Tensor</code> objects to get the <code>shape</code> and <code>dtype</code>.</p></li>
<li><p>For fetching the Layers of a model use,</p>
<pre><code>layers = model.layers
print( layers[0].units )
</code></pre></li>
</ol>
<p>With these tricks you can easily get the input and output tensors for a model or its layer.</p>
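<p>Also, since every layer here is a <code>Dense</code> layer, this is a fully connected (feed-forward) network, not a convolutional one. For a quick per-layer overview (layer names, output shapes, parameter counts), Keras has a built-in summary:</p>
<pre><code>model.summary()
</code></pre>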
|
tensorflow|keras|deep-learning
| 0 |
1,902,253 | 25,830,639 |
Return multiple lines in web.py
|
<p>I got my data out of MySQL using MySQLdb, however when I try to display them in the webpage using web.py with JSON</p>
<pre><code>fetch_resu = cur.fetchall()
json_list = []
for each_tup in fetch_resu:
json_list.append(each_tup)
return json.dumps(json_list, encoding="UTF-8", ensure_ascii=False, indent=4, separators=(',', ': '))
</code></pre>
<p>I get a bunch of data stuck on one line in the web page (while the same statement runs perfectly in local debugging).</p>
<p>Since I can only return data, how could I make it prettier?
Or shall I use a template? But my database is dynamically changing.</p>
|
<p>IMHO, there are two options.</p>
<ul>
<li>surround the content with <code><pre>..</pre></code> or <code><code>..</code></code></li>
</ul>
<p>OR</p>
<ul>
<li><p>respond as a <code>text/plain</code> content-type:</p>
<pre><code>web.header('Content-Type', 'text/plain')
return json.dumps(...)
</code></pre></li>
</ul>
|
python|mysql|json|django|web.py
| 0 |
1,902,254 | 25,874,686 |
How can I simulate key events with selenium in a CodeMirror Editor
|
<p>There are plenty of examples around that show how to use the following to enter text using selenium</p>
<pre><code>driver.execute_script('cm.setValue("text")');
</code></pre>
<p>This works, but isn't very "selenium" of us. We want to simulate actual keyboard key presses like the send_keys function in selenium. We created an <strong>enterFormData</strong> helper that gets an element and types into it using driver.send_keys() <em>(e.g. a textarea with an ID where we can easily simulate typing)</em>. How can we simulate actual key presses into the CodeMirror editor? We also want to be able to test HotKeys <em>(e.g. Ctrl-Shift-M)</em> and then take a <strong>driver.get_screenshot_as_base64()</strong></p>
|
<p>For selenium to detect the keyboard events, you'll first have to bring codemirror into focus. </p>
<p>You can do something like this:</p>
<pre><code>/* getting codemirror element */
WebElement codeMirror = driver.findElement(By.className("CodeMirror"));
/* getting the first line of code inside codemirror and clicking it to bring it in focus */
WebElement codeLine = codeMirror.findElements(By.className("CodeMirror-line")).get(0);
codeLine.click();
/* sending keystokes to textarea once codemirror is in focus */
WebElement txtbx = codeMirror.findElement(By.cssSelector("textarea"));
txtbx.sendKeys("Hello World");
</code></pre>
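<p>Since the question is tagged python, a rough translation of the same steps (method names as in the selenium Python bindings of that era; the Ctrl-Shift-M chord at the end is a sketch for testing hotkeys):</p>
<pre><code>from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys

code_mirror = driver.find_element_by_class_name("CodeMirror")
# click the first line to bring CodeMirror into focus
code_mirror.find_elements_by_class_name("CodeMirror-line")[0].click()
# keystrokes go to the hidden textarea once CodeMirror has focus
textarea = code_mirror.find_element_by_css_selector("textarea")
textarea.send_keys("Hello World")
# hotkey example: Ctrl-Shift-M
ActionChains(driver).key_down(Keys.CONTROL).key_down(Keys.SHIFT) \
    .send_keys("m").key_up(Keys.SHIFT).key_up(Keys.CONTROL).perform()
</code></pre>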
|
javascript|python|selenium|codemirror
| 4 |
1,902,255 | 25,666,241 |
How to override default create method in django-rest-framework
|
<p>I have a model which uses a different create method in the manager. How do I override this method so that the post method in ListCreateAPIView uses the method I've written instead of the default one? Here's the method.</p>
<pre><code>class WeddingInviteManager(models.Manager):
def create(self, to_user, from_user, wedding):
wedding_invitation = self.create(from_user=from_user,to_user=to_user,
wedding=wedding)
notification.send([self.to_user], 'wedding_invite',{'invitation':wedding_invitation})
return wedding_invitation
</code></pre>
|
<p>I suppose the reason for doing this is actually the notification system.</p>
<p>I would recommend doing something like this instead:</p>
<pre><code>class MyModel(models.Model):
...
def save(self, silent=False, *args, **kwargs):
# Send notification if this is a new instance that has not been saved
# before:
if not silent and not self.pk:
notification.send([self.to_user], 'wedding_invite', {'invitation': self})
return super(MyModel, self).save(*args, **kwargs)
</code></pre>
<p>But if you must, this is (in theory) how you do it (code not tested):</p>
<pre><code>from rest_framework import serializers, viewsets
class MyModelSerializer(serializers.ModelSerializer):
def save_object(self, obj, **kwargs):
"""
Save the deserialized object.
"""
if getattr(obj, '_nested_forward_relations', None):
# Nested relationships need to be saved before we can save the
# parent instance.
for field_name, sub_object in obj._nested_forward_relations.items():
if sub_object:
self.save_object(sub_object)
setattr(obj, field_name, sub_object)
##### EDITED CODE #####
if obj.pk:
obj.save(**kwargs)
else:
# Creating a new object. This is silly.
obj = MyModel.objects.create(obj.to_user, obj.from_user, obj.wedding)
##### /EDITED CODE #####
if getattr(obj, '_m2m_data', None):
for accessor_name, object_list in obj._m2m_data.items():
setattr(obj, accessor_name, object_list)
del(obj._m2m_data)
if getattr(obj, '_related_data', None):
related_fields = dict([
(field.get_accessor_name(), field)
for field, model
in obj._meta.get_all_related_objects_with_model()
])
for accessor_name, related in obj._related_data.items():
if isinstance(related, RelationsList):
# Nested reverse fk relationship
for related_item in related:
fk_field = related_fields[accessor_name].field.name
setattr(related_item, fk_field, obj)
self.save_object(related_item)
# Delete any removed objects
if related._deleted:
[self.delete_object(item) for item in related._deleted]
elif isinstance(related, models.Model):
# Nested reverse one-one relationship
fk_field = obj._meta.get_field_by_name(accessor_name)[0].field.name
setattr(related, fk_field, obj)
self.save_object(related)
else:
# Reverse FK or reverse one-one
setattr(obj, accessor_name, related)
del(obj._related_data)
class MyModelViewSet(viewsets.ModelViewSet):
serializer_class = MyModelSerializer
queryset = MyModel.objects.all()
</code></pre>
|
python|django|django-rest-framework
| 4 |
1,902,256 | 44,472,479 |
Running a file using terminal on Ubuntu: os.system is functional where subprocess.Popen cannot find the file
|
<p>I'm trying to run a file via Python using <code>subprocess.Popen</code> on Ubuntu version 16. An open source application is installed: using the command <code>CopasiUI</code> at the terminal opens the Copasi GUI, whereas the command <code>CopasiSE</code> at the terminal opens the command line interface to the same program. Using <code>CopasiSE <file path></code>, where <code><file path></code> is the full path to a Copasi file, will submit the Copasi file for running. This fully works when done manually. </p>
<p><strong>Code</strong>:</p>
<pre><code>In [13]: f='/home/b3053674/Documents/PyCoTools/PyCoTools/PyCoToolsTutorial/Kholodenko_0.cps'
In [14]: import os
In [15]: f
Out[15]: '/home/b3053674/Documents/PyCoTools/PyCoTools/PyCoToolsTutorial/Kholodenko_0.cps'
In [16]: os.path.isfile(f)
Out[16]: True
</code></pre>
<p>And the symlink to the program also is functional:</p>
<pre><code>In [20]: subprocess.Popen('CopasiSE')
Out[20]: <subprocess.Popen at 0x7f72a2643e90>
In [21]: COPASI 4.16 (Build 104)
The use of this software indicates the acceptance of the attached license.
To view the license please use the option: --license
Usage: CopasiSE [options] [file]
--SBMLSchema schema The Schema of the SBML file to export.
--configdir dir The configuration directory for copasi. The
default is .copasi in the home directory.
--configfile file The configuration file for copasi. The
default is copasi in the ConfigDir.
--exportBerkeleyMadonna file The Berkeley Madonna file to export.
--exportC file The C code file to export.
--exportXPPAUT file The XPPAUT file to export.
--home dir Your home directory.
--license Display the license.
--maxTime seconds The maximal time CopasiSE may run in
seconds.
--nologo Surpresses the startup message.
--validate Only validate the given input file (COPASI,
Gepasi, or SBML) without performing any
calculations.
--verbose Enable output of messages during runtime to
std::error.
-c, --copasidir dir The COPASI installation directory.
-e, --exportSBML file The SBML file to export.
-i, --importSBML file A SBML file to import.
-s, --save file The file the model is saved to after work.
-t, --tmp dir The temp directory used for autosave.
</code></pre>
<p>And using <code>os.system</code> works:</p>
<pre><code>In [21]: os.system('CopasiSE {}'.format(f))
COPASI 4.16 (Build 104)
The use of this software indicates the acceptance of the attached license.
To view the license please use the option: --license
</code></pre>
<p>This is expected output for a running Copasi file. </p>
<p>BUT <code>subprocess.Popen</code> give me this:</p>
<pre><code>In [22]: subprocess.Popen('CopasiSE {}'.format(f))
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-22-c8cd60af5d46> in <module>()
----> 1 subprocess.Popen('CopasiSE {}'.format(f))
/home/b3053674/anaconda2/lib/python2.7/subprocess.pyc in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags)
388 p2cread, p2cwrite,
389 c2pread, c2pwrite,
--> 390 errread, errwrite)
391 except Exception:
392 # Preserve original exception in case os.close raises.
/home/b3053674/anaconda2/lib/python2.7/subprocess.pyc in _execute_child(self, args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, to_close, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite)
1022 raise
1023 child_exception = pickle.loads(data)
-> 1024 raise child_exception
1025
1026
OSError: [Errno 2] No such file or directory
In [23]:
</code></pre>
<p>Could anybody suggest why? </p>
|
<p>With <code>subprocess.Popen</code> you should pass the parameters separately: <code>subprocess.Popen(['CopasiSE', f])</code>. It is the shell (which <code>os.system()</code> invokes, but <code>Popen</code> by default does not) that splits a command line into separate arguments.</p>
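<p>Both forms side by side (a sketch; <code>f</code> is the file path from the question):</p>
<pre><code>import subprocess

# argument list: no shell involved, the arguments reach CopasiSE directly
subprocess.Popen(['CopasiSE', f])

# single string: only valid together with shell=True, which lets a shell split it
subprocess.Popen('CopasiSE {}'.format(f), shell=True)
</code></pre>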
|
python|operating-system|subprocess|popen
| 1 |
1,902,257 | 44,599,317 |
Renaming file names using a csv dictionary
|
<p>There are quite a few threads on the website regarding this question,
but none of them answers mine.
I'm looking to rename folders from community code to community name.
I keep getting a Windows error: file specified not found.
Here is sample code:</p>
<pre><code> import csv,os
path=r"files location"
reader = csv.reader(open(path+'\CommunityDictionary.csv', 'rb'))
cdict = {}
for row in reader:
sym, community = row
cdict[sym] = community
dir=r"root folder path" #folder contains sub folders with Abbreviatedcodes#
for folder in os.walk(dir):
for folder in cdict:
os.rename(os.path.join(dir,folder), os.path.join(dir,cdict[folder]))
</code></pre>
<p>If anybody could point out what I'm doing wrong, it would be greatly appreciated.
The same code worked a couple of weeks ago but not now.</p>
|
<p>Thanks double_j!!</p>
<p>I figured out that my CSV had a key that does not exist among the folders I'm trying to rename.
The code posted in my question works like a charm!</p>
|
python|csv
| 0 |
1,902,258 | 61,880,048 |
Tkinter Matplotlib and Thread
|
<p>In the main loop, a 2 x 2 tkinter grid is created with
one label in each cell of the first row.
In the second row, two Matplotlib figures are created, each with a subplot.
Two functions are in charge of dynamically refreshing the grid;
each one runs in its own thread.
The first row (two labels) is refreshed correctly by the two functions.
But in the second row, nothing ... no plot!</p>
<pre><code>from tkinter import *
import threading
import time
from matplotlib.figure import Figure
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
from matplotlib import style
def read_api1():
n = 0
while 1:
n = n + 1
texte1.config(text="fig1 " + str(n))
ax1.plot([1,2], [12,14])
time.sleep(2)
ax1.cla()
def read_api2():
m = 0
while 1:
m = m + 1
texte2.config(text="fig2 " + str(m))
ax2.plot([.1,.2,.3], [2,4,3])
time.sleep(1)
main = Tk()
style.use("ggplot")
texte1 = Label(main, text="fig1")
texte1.grid(row=0,column=0)
fig1 = Figure(figsize=(2, 2), dpi=112)
ax1 = fig1.add_subplot()
fig1.set_tight_layout(True)
graph = FigureCanvasTkAgg(fig1, master=main)
canvas = graph.get_tk_widget()
canvas.grid(row=1, column=0)
texte2 = Label(main, text="fig2")
texte2.grid(row=0,column=1)
fig2 = Figure(figsize=(2, 2), dpi=112)
ax2 = fig2.add_subplot()
fig2.set_tight_layout(True)
graph = FigureCanvasTkAgg(fig2, master=main)
canvas = graph.get_tk_widget()
canvas.grid(row=1, column=1)
t = threading.Thread(target=read_api1)
t.start()
t = threading.Thread(target=read_api2)
t.start()
main.mainloop()
</code></pre>
<p>Any help would be appreciated :)</p>
<hr>
<p><strong>EDIT:</strong></p>
<p>Some more details @furas: </p>
<ul>
<li><p><code>read_api2</code> is supposed to get data from a <code>WEB API</code> at specific times. So what you recommend (the <code>after</code> method) should work. </p></li>
<li><p><code>read_api1</code> is supposed to acquire data from a serial port (<code>GPIO UART</code>). So the thread will be waiting for data to become available for reading. </p></li>
</ul>
<p>In that case, I don't see how to use the after method.</p>
<p>In other words, the question is: how to refresh a matplotlib plot in a tkinter environment based on asynchronous input? The asynchronous serial data read cannot be in the mainloop, so I put it in a thread, but even with <code>graph.draw()</code> it does not work. Any suggestion?</p>
|
<p>There are two problems:</p>
<ol>
<li><p>it needs <code>graph.draw()</code> to update/redraw the plot</p></li>
<li><p>usually GUIs don't like to be driven from threads, and it seems <code>graph.draw()</code> doesn't work in a thread (at least on my Linux). </p></li>
</ol>
<p>You may have to use <code>main.after(1000, read_api1)</code> to run the same function again after <code>1000ms</code> (<code>1s</code>), without using a <code>thread</code> and without blocking <code>mainloop()</code>.</p>
<hr>
<pre><code>import tkinter as tk # PEP8: `import *` is not preferred
from matplotlib.figure import Figure
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
from matplotlib import style
import random
# --- functions ---
def read_api1():
global n
n = n + 1
texte1.config(text="fig1 " + str(n))
ax1.cla()
ax1.plot([1,2], [random.randint(0,10),random.randint(0,10)])
graph1.draw()
main.after(1000, read_api1)
def read_api2():
global m
m = m + 1
texte2.config(text="fig2 " + str(m))
ax2.cla()
ax2.plot([.1,.2,.3], [random.randint(0,10),random.randint(0,10),random.randint(0,10)])
graph2.draw()
main.after(1000, read_api2)
# --- main ---
m = 0
n = 0
main = tk.Tk()
style.use("ggplot")
texte1 = tk.Label(main, text="fig1")
texte1.grid(row=0, column=0)
fig1 = Figure(figsize=(2, 2), dpi=112)
ax1 = fig1.add_subplot()
fig1.set_tight_layout(True)
graph1 = FigureCanvasTkAgg(fig1, master=main)
canvas1 = graph1.get_tk_widget()
canvas1.grid(row=1, column=0)
#graph1.draw()
texte2 = tk.Label(main, text="fig2")
texte2.grid(row=0,column=1)
fig2 = Figure(figsize=(2, 2), dpi=112)
ax2 = fig2.add_subplot()
fig2.set_tight_layout(True)
graph2 = FigureCanvasTkAgg(fig2, master=main)
canvas2 = graph2.get_tk_widget()
canvas2.grid(row=1, column=1)
#graph2.draw()
read_api1()
read_api2()
main.mainloop()
</code></pre>
<hr>
<p><strong>EDIT:</strong> Example which runs two threads. Each thread generates data at a different speed and uses its own <code>queue</code> to send data to the main thread. The main thread uses two <code>after()</code> loops to check the two queues and update the two plots.</p>
<pre><code>import tkinter as tk # PEP8: `import *` is not preferred
from matplotlib.figure import Figure
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
from matplotlib import style
import random
import threading
import queue
import time
# --- functions ---
def WEB_API(queue):
# it will run in thread
print('WEB_API: start')
while web_api_running:
value = random.randint(1, 3)
time.sleep(.1)
print('WEB_API:', value)
queue.put(value)
def GPIO_API(queue):
# it will run in thread
print('GPIO_API: start')
while gpio_api_running:
value = random.randint(1, 3)
time.sleep(value)
print('GPIO_API:', value)
queue.put(value)
def read_api1():
global n
global data1
if not queue1.empty():
value = queue1.get()
# remove the first item and append the new item at the end
data1 = data1[1:] + [value]
n += 1
texte1.config(text="fig1 " + str(n))
ax1.cla()
ax1.plot(range(10), data1)
graph1.draw()
main.after(100, read_api1)
def read_api2():
global m
global data2
if not queue2.empty():
value = queue2.get()
# remove the first item and append the new item at the end
data2 = data2[1:] + [value]
m = m + 1
texte2.config(text="fig2 " + str(m))
ax2.cla()
ax2.plot([.1,.2,.3], data2)
graph2.draw()
main.after(100, read_api2)
# --- before GUI ---
# default data at start (to add new value at the end and remove first value)
data1 = [0,0,0,0,0,0,0,0,0,0]
data2 = [0,0,0]
m = 0
n = 0
# queues to communicate with threads
queue1 = queue.Queue()
queue2 = queue.Queue()
# global variables to control loops in thread
web_api_running = True
gpio_api_running = True
# start threads and send queues as arguments
thread1 = threading.Thread(target=WEB_API, args=(queue1,))
thread1.start()
thread2 = threading.Thread(target=GPIO_API, args=(queue2,))
thread2.start()
# --- GUI ---
main = tk.Tk()
style.use("ggplot")
texte1 = tk.Label(main, text="fig1")
texte1.grid(row=0, column=0)
fig1 = Figure(figsize=(2, 2), dpi=112)
ax1 = fig1.add_subplot()
fig1.set_tight_layout(True)
graph1 = FigureCanvasTkAgg(fig1, master=main)
canvas1 = graph1.get_tk_widget()
canvas1.grid(row=1, column=0)
texte2 = tk.Label(main, text="fig2")
texte2.grid(row=0,column=1)
fig2 = Figure(figsize=(2, 2), dpi=112)
ax2 = fig2.add_subplot()
fig2.set_tight_layout(True)
graph2 = FigureCanvasTkAgg(fig2, master=main)
canvas2 = graph2.get_tk_widget()
canvas2.grid(row=1, column=1)
# draw plots first time
ax1.plot(range(10), data1)
ax2.plot([.1,.2,.3], data2)
# run after which will update data and redraw plots
read_api1()
read_api2()
main.mainloop()
# --- after GUI ---
# stop loops in threads
web_api_running = False
gpio_api_running = False
# wait for the threads to end
thread1.join()
thread2.join()
</code></pre>
|
python|multithreading|matplotlib|tkinter
| 0 |
1,902,259 | 61,680,758 |
Best way to map Pandas column using dictionary
|
<p>I have a dictionary whose keys are names and whose values are other dictionaries that map numbers to their normalized values (essentially just numbers to numbers). For example:</p>
<pre><code>dict1 = {'name1':{0.02:0.04, 0.034:0.06, 0.051:0.08...0.59:0.71, 0.611:0.723}}
</code></pre>
<p>I have a dataframe <code>df</code> of ~5 million rows. One column is named <code>Value</code>. Another is <code>Name</code>. Overall, I want to add an extra column <code>Map</code> that takes <code>Value</code> and maps it using <code>dict1</code>. I need to use the <code>Name</code> column to look up in the outer dictionary of <code>dict1</code> and then the <code>Value</code> column to look up in the inner dictionary. However, <code>Value</code> may not be a key in <code>dict1</code> -- if it isn't, I want to take the closest key. </p>
<p>This is what I have that I know works, but it has been running for a pretty long time:</p>
<pre><code>df['Map'] = df.apply(lambda row: dict1[row['Name']][min(dict1[row['Name']].keys(), key = lambda k: abs(row['Value'] - k))], axis = 1)
</code></pre>
<p>Is there a more efficient way to do this so it does not take forever?</p>
<p>Example <code>df</code>:</p>
<pre><code>In [2]: df
Out[2]:
Name Value
0 'name1' 0.02
1 'name1' 0.03
2 'name1' 0.6
</code></pre>
<p>Here's the output I would want:</p>
<pre><code>In [2]: df
Out[2]:
Name Value Map
0 'name1' 0.02 0.04
1 'name1' 0.03 0.06
2 'name1' 0.6 0.71
</code></pre>
|
<p>This is <code>merge_asof</code>:</p>
<pre><code># prepare data for merge
s = (pd.DataFrame(dict1).stack()
.reset_index(name='Map')
.sort_values(['level_0','Map'])
)
# output
(pd.merge_asof(df, s,
left_on='Value', right_on='level_0',
left_by='Name', right_by='level_1',
direction='nearest')
.drop(['level_0', 'level_1'], axis=1)
)
</code></pre>
<p>Output:</p>
<pre><code> Name Value Map
0 name1 0.02 0.04
1 name1 0.03 0.06
2 name1 0.60 0.71
</code></pre>
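<p>Note that <code>merge_asof</code> requires both inputs to be sorted on their merge keys (<code>Value</code> on the left, <code>level_0</code> on the right), which is why <code>s</code> is sorted before merging.</p>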
|
python|pandas|dictionary
| 2 |
1,902,260 | 61,701,142 |
Nagios check giving error and not the output I expect
|
<p>I have Python code which, when run locally, gives correct output, but when I run it as a Nagios check locally it gives errors.</p>
<p>Code :</p>
<pre><code>#!/usr/bin/env python
import pandas as pd
df = pd.read_csv("...")
print(df)
</code></pre>
<p>Nagios configuration :</p>
<ol>
<li>inside localhost.cfg</li>
</ol>
<pre><code>define service {
use local-service
host_name localhost
service_description active edges
check_command check_edges
}
</code></pre>
<ol start="2">
<li>inside commands.cfg</li>
</ol>
<pre><code>define command {
command_name check_edges
command_line $USER1$/check_edges.py $HOSTADDRESS$
}
</code></pre>
<p>Error : </p>
<pre><code>(No output on stdout) stderr : Traceback File "/usr/local/nagios/libexec/check_edges.py" line 3, in <module> import pandas as pd
ImportError: No module named pandas
</code></pre>
<p>Please give as much details as possible to solve this problem</p>
<p><code>pip show python</code> gives:
Location: /usr/lib/python2.7/lib-dynload</p>
<p><code>pip show pandas</code> gives:
Location: /home/nwvepops01/.local/lib/python2.7/site-packages</p>
|
<p>As the user, from the shell, check which instance of Python is being run.
For example with this command:</p>
<pre><code>env python
</code></pre>
<p>Modify the script and on the first line replace this</p>
<pre><code>#!/usr/bin/env python
</code></pre>
<p>with the absolute path of the python executable.</p>
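<p>For example, if pandas was installed for <code>/usr/bin/python2.7</code> (the path here is illustrative), the first line would become:</p>
<pre><code>#!/usr/bin/python2.7
</code></pre>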
|
python|pandas|nagios
| 0 |
1,902,261 | 23,612,436 |
Refactor: Eliminate two each in Ruby
|
<p>I am trying to generate all poker cards (52 of cards), here is how I do it:</p>
<pre><code>ranks = '23456789TJQKA'.split ''
suits = 'SHDC'.split ''
my_deck = []
ranks.each do |r|
suits.each { |s| my_deck << r+s }
end
my_deck # => ["2S", "2H", "2D", "2C", "3S", "3H", "3D", "3C", "4S", "4H", "4D", "4C", "5S", "5H", "5D", "5C", "6S", "6H", "6D", "6C", "7S", "7H", "7D", "7C", "8S", "8H", "8D", "8C", "9S", "9H", "9D", "9C", "TS", "TH", "TD", "TC", "JS", "JH", "JD", "JC", "QS", "QH", "QD", "QC", "KS", "KH", "KD", "KC", "AS", "AH", "AD", "AC"]
</code></pre>
<p>My friends who use python shows me this:</p>
<pre><code>[r+s for r in '23456789TJQKA' for s in 'SHDC']
</code></pre>
<p>Does anyone could give me advice on how to make the above code more beautiful as the Python version? Thank you in advance.</p>
|
<p>Another way to write this using <a href="http://www.ruby-doc.org/core-2.1.1/Array.html#method-i-product" rel="noreferrer"><code>Array#product</code></a>:</p>
<pre><code>ranks = %w(2 3 4 5 6 7 8 9 T J Q K A)
suits = %w(S H D C)
my_deck = ranks.product(suits).map(&:join)
#=> ["2S", "2H", "2D", "2C", "3S", "3H", "3D", "3C", "4S", "4H", "4D", "4C", "5S", "5H", "5D", "5C", "6S", "6H", "6D", "6C", "7S", "7H", "7D", "7C", "8S", "8H", "8D", "8C", "9S", "9H", "9D", "9C", "TS", "TH", "TD", "TC", "JS", "JH", "JD", "JC", "QS", "QH", "QD", "QC", "KS", "KH", "KD", "KC", "AS", "AH", "AD", "AC"]
</code></pre>
|
python|ruby|refactoring
| 6 |
1,902,262 | 24,278,190 |
can't print '\' (single backslash) in Python
|
<p>I am using Python 3 and am trying to find a way to insert a '\' (single backslash) into my program. </p>
<p>I am getting this error:
SyntaxError: EOL while scanning string literal</p>
|
<p>or use:</p>
<pre><code>print(r'This is \backslash')
</code></pre>
<p>But escaping it with a doubled backslash, as in <code>print('This is \\backslash')</code>, is recommended.</p>
|
python
| 3 |
1,902,263 | 24,433,357 |
How can I get QListWidget item by name?
|
<p>I have a <code>QListWidget</code> which displays a list of names using PyQt in <strong>Python</strong>. How can I get the <code>QListWidgetItem</code> for a given name?</p>
<p>For example, if I have the following <code>QListWidget</code> with 4 items, how can I get the item which contains text = dan?</p>
<p><img src="https://i.stack.imgur.com/42TX6.png" alt="QListWidget"></p>
|
<p>The python equivalent to vahancho's answer:</p>
<pre><code>from PyQt4.QtCore import Qt  # adjust the import to your binding (PyQt4 shown)

items = self.listWidgetName.findItems("dan", Qt.MatchExactly)
if len(items) > 0:
for item in items:
print "row number of found item =",self.listWidgetName.row(item)
print "text of found item =",item.text()
</code></pre>
|
python|qt|pyqt|pyqt4|pyside
| 17 |
1,902,264 | 20,792,143 |
How to generate a Zip from a set of streams and producing a stream with the Zip data?
|
<p>I have an app with manages a set of files, but those files are actually stored in Rackspace's CloudFiles, because most of the files will be ~100GB. I'm using the Cloudfile's TempURL feature to allow individual files, but sometimes, the user will want to download a set of files. But downloading all those files and generating a local Zip file is impossible since the server only have 40GB of disk space.</p>
<p>From the user view, I want to implement it the way GMail does when you get an email with several pictures: It gives you a link to download a Zip file with all the images in it, and the download is immediate.</p>
<p>How to accomplish this with Python/Django? I have found <a href="https://github.com/SpiderOak/ZipStream" rel="nofollow">ZipStream</a> and looks promising because of the iterator output, but it still only accepts filepaths as arguments, and the <code>writestr</code> method would need to fetch all the file data at once (~100GB).</p>
|
<p>Since <strong>Python 3.5</strong> it is possible to stream a zip of huge files/folders in chunks, because the zipfile module can write to an unseekable stream. So there is no need to use <a href="https://github.com/SpiderOak/ZipStream" rel="nofollow noreferrer">ZipStream</a> now.
See my answer <a href="https://stackoverflow.com/a/55169752/5717886">here</a>.</p>
<p>And live example here: <a href="https://repl.it/@IvanErgunov/zipfilegenerator" rel="nofollow noreferrer">https://repl.it/@IvanErgunov/zipfilegenerator</a></p>
<p>If you don't have filepath, but have chunks of bytes you can exclude <code>open(path, 'rb') as entry</code> from example and replace <code>iter(lambda: entry.read(16384), b'')</code> with your iterable of bytes. And prepare ZipInfo manually:</p>
<pre><code>zinfo = ZipInfo(filename='any-name-of-your-non-existent-file', date_time=time.localtime(time.time())[:6])
zinfo.compress_type = zipfile.ZIP_STORED
# permissions:
if zinfo.filename[-1] == '/':
# directory
zinfo.external_attr = 0o40775 << 16 # drwxrwxr-x
zinfo.external_attr |= 0x10 # MS-DOS directory flag
else:
# file
zinfo.external_attr = 0o600 << 16 # ?rw-------
</code></pre>
<p>You should also remember that the zipfile module writes chunks of its own size. So, if you send a piece of 512 bytes, the stream will receive a piece of data only when and only with the size the zipfile module decides to use. It depends on the compression algorithm, but I think it is not a problem, because the zipfile module makes small chunks <= 16384.</p>
|
python|django|stream|zip
| 3 |
1,902,265 | 72,017,126 |
GeoPandas .clip() on example from the docs throws TypeError
|
<p>I am unable to reproduce the <code>.clip()</code>-example from the <a href="https://geopandas.org/en/stable/gallery/plot_clip.html" rel="nofollow noreferrer">GeoPandas docs</a> without error. I suspect it has something to do with my setup, since the same thing worked a few months ago in a different environment and I have not found reports of this happening to others. But I cannot figure out what the problem is--I am hoping someone here has an idea.</p>
<p>Copying and pasting the example code into my jupyter notebook looks something like this:</p>
<pre><code>import geopandas
from shapely.geometry import Polygon
# get a set of points
capitals = geopandas.read_file(geopandas.datasets.get_path("naturalearth_cities"))
# Create a custom polygon
polygon = Polygon([(0, 0), (0, 90), (180, 90), (180, 0), (0, 0)])
# Attempt to clip points by polygon
capitals_clipped = capitals.clip(polygon)
</code></pre>
<p>Running it gives me the following error:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [7], in <cell line: 1>()
----> 1 capitals_clipped = capitals.clip(polygon)
File ~/.conda/envs/test-env/lib/python3.8/site-packages/pandas/util/_decorators.py:311, in deprecate_nonkeyword_arguments.<locals>.decorate.<locals>.wrapper(*args, **kwargs)
305 if len(args) > num_allow_args:
306 warnings.warn(
307 msg.format(arguments=arguments),
308 FutureWarning,
309 stacklevel=stacklevel,
310 )
--> 311 return func(*args, **kwargs)
File ~/.conda/envs/test-env/lib/python3.8/site-packages/pandas/core/frame.py:10917, in DataFrame.clip(self, lower, upper, axis, inplace, *args, **kwargs)
10905 @deprecate_nonkeyword_arguments(
10906 version=None, allowed_args=["self", "lower", "upper"]
10907 )
(...)
10915 **kwargs,
10916 ) -> DataFrame | None:
> 10917 return super().clip(lower, upper, axis, inplace, *args, **kwargs)
File ~/.conda/envs/test-env/lib/python3.8/site-packages/pandas/core/generic.py:7569, in NDFrame.clip(self, lower, upper, axis, inplace, *args, **kwargs)
7567 result = self
7568 if lower is not None:
-> 7569 result = result._clip_with_one_bound(
7570 lower, method=self.ge, axis=axis, inplace=inplace
7571 )
7572 if upper is not None:
7573 if inplace:
File ~/.conda/envs/test-env/lib/python3.8/site-packages/pandas/core/generic.py:7417, in NDFrame._clip_with_one_bound(self, threshold, method, axis, inplace)
7414 else:
7415 threshold_inf = threshold
-> 7417 subset = method(threshold_inf, axis=axis) | isna(self)
7419 # GH 40420
7420 return self.where(subset, threshold, axis=axis, inplace=inplace)
File ~/.conda/envs/test-env/lib/python3.8/site-packages/pandas/core/ops/__init__.py:470, in flex_comp_method_FRAME.<locals>.f(self, other, axis, level)
466 axis = self._get_axis_number(axis) if axis is not None else 1
468 self, other = align_method_FRAME(self, other, axis, flex=True, level=level)
--> 470 new_data = self._dispatch_frame_op(other, op, axis=axis)
471 return self._construct_result(new_data)
File ~/.conda/envs/test-env/lib/python3.8/site-packages/pandas/core/frame.py:6973, in DataFrame._dispatch_frame_op(self, right, func, axis)
6970 if not is_list_like(right):
6971 # i.e. scalar, faster than checking np.ndim(right) == 0
6972 with np.errstate(all="ignore"):
-> 6973 bm = self._mgr.apply(array_op, right=right)
6974 return self._constructor(bm)
6976 elif isinstance(right, DataFrame):
File ~/.conda/envs/test-env/lib/python3.8/site-packages/pandas/core/internals/managers.py:302, in BaseBlockManager.apply(self, f, align_keys, ignore_failures, **kwargs)
300 try:
301 if callable(f):
--> 302 applied = b.apply(f, **kwargs)
303 else:
304 applied = getattr(b, f)(**kwargs)
File ~/.conda/envs/test-env/lib/python3.8/site-packages/pandas/core/internals/blocks.py:402, in Block.apply(self, func, **kwargs)
396 @final
397 def apply(self, func, **kwargs) -> list[Block]:
398 """
399 apply the function to my values; return a block if we are not
400 one
401 """
--> 402 result = func(self.values, **kwargs)
404 return self._split_op_result(result)
File ~/.conda/envs/test-env/lib/python3.8/site-packages/pandas/core/ops/array_ops.py:283, in comparison_op(left, right, op)
280 return invalid_comparison(lvalues, rvalues, op)
282 elif is_object_dtype(lvalues.dtype) or isinstance(rvalues, str):
--> 283 res_values = comp_method_OBJECT_ARRAY(op, lvalues, rvalues)
285 else:
286 res_values = _na_arithmetic_op(lvalues, rvalues, op, is_cmp=True)
File ~/.conda/envs/test-env/lib/python3.8/site-packages/pandas/core/ops/array_ops.py:73, in comp_method_OBJECT_ARRAY(op, x, y)
71 result = libops.vec_compare(x.ravel(), y.ravel(), op)
72 else:
---> 73 result = libops.scalar_compare(x.ravel(), y, op)
74 return result.reshape(x.shape)
File ~/.conda/envs/test-env/lib/python3.8/site-packages/pandas/_libs/ops.pyx:107, in pandas._libs.ops.scalar_compare()
TypeError: '>=' not supported between instances of 'str' and 'Polygon'
</code></pre>
<p>So far I have tried to repeat this in a clean conda environment obtained like so:</p>
<pre><code>conda create -n test-env
conda activate test-env
conda install ipykernel geopandas
ipython kernel install --user --name=test_ipython
conda deactivate test-env
</code></pre>
<p>According to conda, running python 3.7.11 and geopandas 0.9.0. Even using this fresh <code>test_ipython</code>-Kernel, I get the same <code>TypeError</code> when I attempt clipping the stock world map.</p>
<p>I don't currently think this is a bug, and am assuming mere ignorance on my part.</p>
|
<p>This was a problem with geopandas versions.</p>
<p>It turns out that <code>geopandas.GeoDataFrame.clip()</code> did not work the same way in v0.9.0. Checking the docs for the appropriate version of geopandas reveals that back then, clip was not a <code>GeoDataFrame</code>-method but a standalone one, making the solution simply</p>
<pre><code>capitals_clipped = geopandas.clip(capitals, polygon)
</code></pre>
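<p>The other fix is simply upgrading geopandas to a release where <code>GeoDataFrame.clip</code> exists, after which the original <code>capitals.clip(polygon)</code> call from the docs works as written.</p>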
|
jupyter-notebook|conda|geopandas
| 1 |
1,902,266 | 72,072,666 |
How do I get this If statement condition to work properly?
|
<p>I'm having a problem with the following piece of code:</p>
<pre><code>decision = str(input("Would you like to change it?:"))
if decision.lower == 'yes':
new_holiday = input("What is your new favorite holiday?:")
</code></pre>
<p>The problem with this is that when I input 'yes' in the first prompt, instead of showing me the second one as I want, it just skips the if statement completely. Am I missing something here?</p>
|
<p><code>decision.lower</code> is a reference to the method object (<code><built-in method lower of str object at ...></code>), which can never compare equal to the string <code>'yes'</code>. You should call it: <code>decision.lower()</code>.</p>
<p>Change your code to the following:</p>
<pre><code>decision = str(input("Would you like to change it?:"))
if decision.lower() == 'yes':
new_holiday = input("What is your new favorite holiday?:")
</code></pre>
|
python|python-3.x|if-statement|input
| 1 |
1,902,267 | 72,062,836 |
In Python, If there is a duplicate, use the date column to choose the what duplicate to use
|
<p>I have code that runs 16 test cases against a CSV, checking for anomalies from poor data entry. A new column, 'Test case failed,' is created. A number corresponding to which test it failed is added to this column when a row fails a test. These failed rows are separated from the passed rows; then, they are sent back to be corrected before they are uploaded into a database.</p>
<p>There are duplicates in my data, and I would like to add code to check for duplicates, then decide what field to use based on the date, selecting the most updated fields.</p>
<p>Here is my data with two duplicate IDs, with the first row having the most recent Address while the second row has the most recent name.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">ID</th>
<th style="text-align: left;">MnLast</th>
<th style="text-align: left;">MnFist</th>
<th style="text-align: left;">MnDead?</th>
<th style="text-align: left;">MnInactive?</th>
<th style="text-align: left;">SpLast</th>
<th style="text-align: left;">SpFirst</th>
<th style="text-align: left;">SPInactive?</th>
<th style="text-align: left;">SpDead</th>
<th style="text-align: left;">Addee</th>
<th style="text-align: left;">Sal</th>
<th style="text-align: left;">Address</th>
<th style="text-align: left;">NameChanged</th>
<th style="text-align: left;">AddrChange</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">123</td>
<td style="text-align: left;">Doe</td>
<td style="text-align: left;">John</td>
<td style="text-align: left;">No</td>
<td style="text-align: left;">No</td>
<td style="text-align: left;">Doe</td>
<td style="text-align: left;">Jane</td>
<td style="text-align: left;">No</td>
<td style="text-align: left;">No</td>
<td style="text-align: left;">Mr. John Doe</td>
<td style="text-align: left;">Mr. John</td>
<td style="text-align: left;">123 place</td>
<td style="text-align: left;">05/01/2022</td>
<td style="text-align: left;">11/22/2022</td>
</tr>
<tr>
<td style="text-align: left;">123</td>
<td style="text-align: left;">Doe</td>
<td style="text-align: left;">Dan</td>
<td style="text-align: left;">No</td>
<td style="text-align: left;">No</td>
<td style="text-align: left;">Doe</td>
<td style="text-align: left;">Jane</td>
<td style="text-align: left;">No</td>
<td style="text-align: left;">No</td>
<td style="text-align: left;">Mr. John Doe</td>
<td style="text-align: left;">Mr. John</td>
<td style="text-align: left;">789 road</td>
<td style="text-align: left;">11/01/2022</td>
<td style="text-align: left;">05/06/2022</td>
</tr>
</tbody>
</table>
</div>
<p>Here is a snippet of my code showing the 5th testcase, which checks for the following: Record has Name information, Spouse has name information, no one is marked deceased, but Addressee or salutation doesn't have "&" or "AND." Addressee or salutation needs to be corrected; this record is married.</p>
<pre><code>import pandas as pd
import numpy as np
data = pd.read_csv("C:/Users/file.csv", encoding='latin-1' )
# Create array to store which test number the row failed
data['Test Case Failed']= ''
data = data.replace(np.nan,'',regex=True)
data.insert(0, 'ID', range(0, len(data)))
# There are several test cases, but they function primarily the same
# Testcase 1
# Testcase 2
# Testcase 3
# Testcase 4
# Testcase 5 - comparing strings in columns
df = data[((data['FirstName']!='') & (data['LastName']!='')) &
((data['SRFirstName']!='') & (data['SRLastName']!='') &
(data['SRDeceased'].str.contains('Yes')==False) & (data['Deceased'].str.contains('Yes')==False)
)]
df1 = df[df['PrimAddText'].str.contains("AND|&")==False]
data_5 = df1[df1['PrimSalText'].str.contains("AND|&")==False]
ids = data_5.index.tolist()
# Assign 5 for each failed
for i in ids:
data.at[i,'Test Case Failed']+=', 5'
# Failed if column 'Test Case Failed' is not empty, Passed if empty
failed = data[(data['Test Case Failed'] != '')]
passed = data[(data['Test Case Failed'] == '')]
failed['Test Case Failed'] =failed['Test Case Failed'].str[1:]
failed = failed[(failed['Test Case Failed'] != '')]
# Clean up
del failed["ID"]
del passed["ID"]
failed['Test Case Failed'].value_counts()
# Print to console
print("There was a total of",data.shape[0], "rows.", "There was" ,data.shape[0] - failed.shape[0], "rows passed and" ,failed.shape[0], "rows failed at least one test case")
# output two files
failed.to_csv("C:/Users/Failed.csv", index = False)
passed.to_csv("C:/Users/Passed.csv", index = False)
</code></pre>
<p>What is the best approach to check for duplicates, choose the most updated fields, drop the outdated fields/row, and perform my test?</p>
|
<p>First, try to set a mapping that associates update date columns to their corresponding value columns.</p>
<pre class="lang-py prettyprint-override"><code>date2val = {"AddrChange": ["Address"], "NameChanged": ["MnFist", "MnLast"], ...}
</code></pre>
<p>Then, transform date columns into <code>datetime</code> format to be able to compare them (using <code>argmax</code> later).</p>
<pre class="lang-py prettyprint-override"><code>for key in date2val.keys():
failed[key] = pd.to_datetime(failed[key])
</code></pre>
<p>Then, group by ID the duplicates (since ID is the value that decides whether it is a duplicate), and for each date column get the maximum value in the group (which refers to the most recent update) and retrieve the columns to update from the initial mapping. I'll update the last row and set it as the final updated result (by putting it in <code>corrected</code> list).</p>
<pre class="lang-py prettyprint-override"><code>corrected = list()
for _, grp in failed.groupby("ID"):
for key in date2val.keys():
recent = grp[key].argmax()
for col in date2val[key]:
            grp.iloc[-1, grp.columns.get_loc(col)] = grp.iloc[recent][col]  # positional assignment so the update is kept
corrected.append(grp.iloc[-1])
corrected = pd.DataFrame(corrected)
</code></pre>
|
python|dataframe|indexing|etl|testcase
| 4 |
1,902,268 | 72,089,883 |
refresh web in django
|
<p>I have a question: how do I refresh the current page in Django? I am new to all this. I added a favorite button, but I am redirecting to another page, which is wrong, since I only need the page I am currently on to be refreshed. Could someone help me?</p>
|
<p>Greetings! I just had to do a redirect to <code>request.META.get('HTTP_REFERER')</code>.</p>
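<p>A minimal sketch of what that can look like in a view (the view and parameter names here are hypothetical):</p>
<pre><code>from django.shortcuts import redirect
def toggle_favorite(request, item_id):
    # ... toggle the favorite flag on the item ...
    # fall back to the home page if the Referer header is missing
    return redirect(request.META.get('HTTP_REFERER', '/'))
</code></pre>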
|
python|django
| 0 |
1,902,269 | 35,912,041 |
Quick way to modify large Dictionary Values
|
<p>I have a large dictionary, each value itself is a large list. </p>
<p>I have to go through each value and remove members that don't follow this pattern: </p>
<p>"Each member which is smaller than the previous member should be smaller than the next member or if the member is greater than the previous member then it should be greater than the next number too."</p>
<p>This is what I have tried: </p>
<pre><code>Nodes = {1:[2,...,3],..., 2:[3,...5]} # a short example of the big dict
for key in Nodes.keys():
for i in range(1, len(Nodes[key])-1):
if Nodes[key][i-1] < Nodes [key][i] < Nodes[key][i+1] or Nodes[key][i-1] > Nodes [key][i] > Nodes[key][i+1]:
del Nodes[key][i]
</code></pre>
<p>And this is what I am getting: </p>
<pre><code>Traceback (most recent call last):
if Nodes[key][i-1] < Nodes [key][i] < Nodes[key][i+1] or Nodes[key][i-1] > Nodes [key][i] > Nodes[key][i+1]:
IndexError: list index out of range
</code></pre>
<p>One thing I can predict is that the operator deletes the list member right away then the list messes up, because when I tried to check this by printing the list member instead of deleting it, the code worked.</p>
|
<blockquote>
<p>One thing I can predict is that the operator deletes the list member
right away then the list messes up, because when I tried to check this
by printing the list member instead of deleting it, the code worked.</p>
</blockquote>
<p>You are right. And as @Daniel said, "Don't modify a list you are iterating over."</p>
<p>You can create a new list, append all the valid values to it, and assign the new list to the key in the dict. Note that the first and last members have only one neighbour, so they cannot be tested against the pattern and are kept (your original code never deletes them either).</p>
<pre><code>Nodes = {1:[2,...,3],..., 2:[3,...5]} # a short example of the big dict
for key in Nodes.keys():
    values = Nodes[key]
    new_list = [values[0]]  # first member has no previous neighbour, keep it
    for i in range(1, len(values) - 1):
        if not (values[i-1] < values[i] < values[i+1] or values[i-1] > values[i] > values[i+1]):
            new_list.append(values[i])
    new_list.append(values[-1])  # likewise keep the last member
    Nodes[key] = new_list
</code></pre>
|
python|performance|dictionary
| 0 |
1,902,270 | 15,407,419 |
Using a string, to give a name to a new string
|
<p><strong>What I'm trying to do:</strong></p>
<p>I have some files in a directory, and I made a list of these:</p>
<pre><code>filesInDir = os.listdir("scanfiles")
</code></pre>
<p>And after I got these, I'm trying to split the lines into separate lists:</p>
<pre><code>for files in filesInDir:
sourceFile = open("scanfiles/" + files, "r")
dynmicNameList = sourceFile.read().splitlines()
</code></pre>
<p>I would like it so that the array name is the file's name. So far I've only seen way more complicated scenarios for this problem. But I can't get this working.</p>
|
<p>You want a dictionary for those lines, not local variables:</p>
<pre><code>lines = {}
for files in filesInDir:
sourceFile = open("scanfiles/" + files, "r")
lines[files] = sourceFile.read().splitlines()
</code></pre>
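<p>As a small refinement of the same idea, you might use <code>with</code> so files are closed automatically and build the paths with <code>os.path.join</code> (a sketch):</p>
<pre><code>import os
lines = {}
for name in os.listdir("scanfiles"):
    with open(os.path.join("scanfiles", name), "r") as source_file:
        lines[name] = source_file.read().splitlines()
# look up the lines of a given file by its name, e.g. a hypothetical "scan1.txt"
print(lines["scan1.txt"])
</code></pre>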
|
python|list
| 3 |
1,902,271 | 29,735,300 |
Calling a function in another function causing error due to arguments in parentheses
|
<p>As it happens I am just getting into programming with Python and I was about to program a little rock-paper-scissors game.</p>
<p>Unfortunately when I'm trying to run my script, I am receiving the following error:</p>
<pre><code>file rps.py, line 53 in game
compare (move,choice)
NameError: name 'move' is not defined
</code></pre>
<p>Here's my code so far:</p>
<pre><code>from random import randint
possibilities = ['rock', 'paper', 'scissors']
def CPU(list):
i = randint(0, len(list)-1)
move = list[i]
#print (str(move))
return move
def User():
choice = str(input('Your choice? (Rock [r], Paper[p], Scissors[s])'))
choice = choice.lower()
if choice == 'rock' or choice == 'r':
choice = 'rock'
elif choice == 'scissors' or choice =='s':
choice = 'scissors'
elif choice == 'paper' or choice == 'p':
choice = 'paper'
#print ('Your choice: ' + str(choice))
return choice
def compare(c, u):
if c == u:
print ('Your choice was: ' + str(u) + 'and I chose: ' + str(c))
print ('That is what we call a tie. Nobody wins.')
elif c == 'paper' and u == 'rock':
print ('Your choice was: ' + str(u) + 'and I chose: ' + str(c))
print ('This means that you, my friend, lose.')
elif c == 'paper' and u == 'scissors':
print ('Your choice was: ' + str(u) + 'and I chose: ' + str(c))
print ('Congratulations, you win....this time.')
    elif c == 'rock' and u == 'paper':
print ('Your choice was: ' + str(u) + 'and I chose: ' + str(c))
print ('Congratulations, you win....this time.')
elif c == 'rock' and u == 'scissors':
print ('Your choice was: ' + str(u) + 'and I chose: ' + str(c))
print ('This means that you lose.')
elif c == 'scissors' and u == 'paper':
print ('Your choice was: ' + str(u) + 'and I chose: ' + str(c))
print ('This means that you lose.')
elif c == 'scissors' and u == 'rock':
print ('Your choice was: ' + str(u) + 'and I chose: ' + str(c))
print ('Congratulations, you win....this time.')
def game():
CPU(possibilities)
User()
compare(move, choice)
game()
</code></pre>
<p>I am pretty sure that I did something wrong when I defined the function <code>compare(c,u)</code> and added the arguments 'c' and 'u' in the parentheses.
I thought that I made sure that I was able to use these variables by using the return statement before.</p>
<p>I am quite new to programming in general and therefore inexperienced, so please be kind!</p>
|
<p>The problem is that you are only calling the functions <code>CPU</code> and <code>User</code> but you are not assigning them to any variables. Hence you need to re-define your function <code>game</code> as in</p>
<pre><code>def game():
move = CPU(possibilities)
choice = User()
compare(move, choice)
</code></pre>
<p>In this way you will be calling the function <code>compare</code> with a local copy of the values <code>return</code>ed after calling the two other functions.</p>
<p>You can refer more about functions and the <code>return</code> statement by referring the official <a href="https://docs.python.org/2/tutorial/controlflow.html#defining-functions" rel="nofollow">documentation</a></p>
|
python|function
| 5 |
1,902,272 | 29,482,401 |
Python 2.7: set variable name
|
<p>I would like to set the variable name in a for loop, like:</p>
<pre><code>for i in range(5):
namei = i # this is a variable name
</code></pre>
<p>It will give me:</p>
<pre><code>name0 = 0
name1 = 1
name2 = 2
name3 = 3
name4 = 4
</code></pre>
<p>does anyone know how to do that?</p>
<p>Thank you!</p>
|
<p>Instead of having 5 separate variables, you should use an array.</p>
<p>Eg.</p>
<pre><code>name = [] #Create an empty array
for i in range(5):
name.append(i) #Add each value to your array
</code></pre>
<p>This will leave you with</p>
<pre><code>name[0] = 0
name[1] = 1
...
etc.
</code></pre>
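<p>If you really do want the "nameN" labels, a dictionary keyed by those strings gets you there without creating dynamic variables (a sketch):</p>
<pre><code>name = {}
for i in range(5):
    name['name%d' % i] = i
print(name['name3'])  # 3
</code></pre>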
|
python-2.7
| -1 |
1,902,273 | 46,217,444 |
python lightweight solution for typecasting float(64) to bytes
|
<p>I'd like to have a very simple solution for displaying the raw bytes of a float value (or of several consecutive ones in memory). In my understanding this is called typecasting (reading the memory values as bytes), not to be confused with casting (reading the value and interpreting it as bytes).</p>
<p>The simplest test seem to be:</p>
<pre><code>import numpy
a=3.14159265
print(a.hex())
# Returns 0x1.921fb53c8d4f1p+1
b=numpy.array(a)
print(b.tobytes())
# returns b'\xf1\xd4\xc8S\xfb!\t@'
# expected is something like 'F1' 'D4' 'C8' '53' 'FB' '21' '09' '40'
</code></pre>
<p>but the method hex() returns an Interpretation of the IEEE FLOAT represantation in hex. The second method shows four hex-Byte markers <code>\x</code> but I wonder as a float64 should read 8 Bytes. Further I'm wondering by the other characters.<br>
I good old simple C I would have implemented that by simply using an unsigned int pointer on the memory address of the float and printing 8 values from that unsigned int "Array" (pointer.)<br>
I know, that I can use C within python - but are there other simple solutions?<br>
Maybe as it is of interest: I need that functionalaty to save many big float vectors to a BLOB into a database.</p>
<p>I think, similiar Problems are to be found in <a href="https://stackoverflow.com/questions/38874139/correct-interpretation-of-hex-byte-convert-it-to-float">Correct interpretation of hex byte, convert it to float</a> reading that if possible (displayable characters) they are not printed in <code>\x</code>-form. How can I change that?</p>
|
<p>You can use the built-in module <code>struct</code> for this. If you have Python 3.5 or later:</p>
<pre><code>import struct
struct.pack('d', a).hex()
</code></pre>
<p>It gives:</p>
<pre><code>'f1d4c853fb210940'
</code></pre>
<p>If you have Python older than 3.5:</p>
<pre><code>import binascii
binascii.hexlify(struct.pack('d', a))
</code></pre>
<p>Or:</p>
<pre><code>hex(struct.unpack('>Q', struct.pack('d', a))[0])
</code></pre>
<p>If you have an array of floats and want to use NumPy:</p>
<pre><code>import numpy as np
np.set_printoptions(formatter={'int':hex})
np.array([a]).view('u8')
</code></pre>
|
python|python-3.x|typecasting-operator|typecast-operator
| 4 |
1,902,274 | 62,600,525 |
Dataframe: get_dummies for arrays in column
|
<p>Working with Python/Pandas</p>
<p>I have a csv file pretty simple except for one column: the source is an array.</p>
<p>An example of my table:</p>
<pre><code>Column A |Column B |Column C |Column D |
__________________________|__________|__________|__________|
[Water, Food, Groceries] | 0 |true |9 |
[Water, Desert, Sand] | 1 |false |1 |
[Earth, Groceries] | 2 |null |12 |
[Air, Food, Car] | 3 |true |8 |
[Cristal, Love, Groceries]| 4 |false |0 |
</code></pre>
<p>What I want to accomplish:</p>
<pre><code>Column B |Column C |Column D |column_a_water |column_a_food | column_a_groceries |
__________|__________|__________|_______________|_______________|____________________|
0 |true |9 | 1 | 1 | 1 |
1 |false |1 | 1 | 0 | 0 |
2 |null |12 | 0 | 0 | 1 |
3 |true |8 | 0 | 1 | 0 |
4 |false |0 | 0 | 0 | 1 |
</code></pre>
<p>With pandas get_dummies I can make it work with Column C, but not with Column A. Using the same technique it does not work.</p>
<p>What can I do to deal with this situation?</p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.explode.html" rel="nofollow noreferrer"><code>Series.explode</code></a> on <code>Column A</code>, then <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.get_dummies.html" rel="nofollow noreferrer"><code>Series.str.get_dummies</code></a> on the exploded column, then <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sum.html" rel="nofollow noreferrer"><code>DataFrame.sum</code></a> on <code>level=0</code>. Next, add the prefix <code>Column A_</code> to each dummy column using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.add_prefix.html" rel="nofollow noreferrer"><code>DataFrame.add_prefix</code></a>, and finally use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a> to join the original dataframe with the dataframe containing the dummy columns:</p>
<pre><code># Use this line IF the values in Column A are type of `string` instead of lists.
df['Column A'] = df['Column A'].str.strip('[]').str.split('\s*,\s*')
df1 = (
df['Column A'].explode()
.str.get_dummies().sum(level=0).add_prefix('Column A_')
)
df1 = df.drop('Column A', 1).join(df1)
</code></pre>
<p>Result:</p>
<pre><code># print(df1)
Column B Column C Column D Column A_Air ... Column A_Groceries Column A_Love Column A_Sand Column A_Water
0 0 True 9 0 ... 1 0 0 1
1 1 False 1 0 ... 0 0 1 1
2 2 NaN 12 0 ... 1 0 0 0
3 3 True 8 1 ... 0 0 0 0
4 4 False 0 0 ... 1 1 0 0
</code></pre>
|
python|pandas|dataframe
| 4 |
1,902,275 | 70,184,876 |
Absolute path works when hardcoded but not when stored in variable in python
|
<p>I am trying to write a python bot that will just simply upload mass files in a directory to a server. (Mostly game clips with a few screenshots.) The issue is when I pass the file path dynamically I get a file not found error. When passing it hardcoded it works fine. I have printed and even sent to discord the file path and it is correct. Tried .strip() and .encode('unicode-escape') and various other options but haven't found anything that works. This has me a bit puzzled. Any ideas?</p>
<pre><code>import os
import discord
import time
from discord.ext import commands
client = commands.Bot(command_prefix = '!!')
#locations to upload
locations = [
'/root/discord/',
'/home/discord',
]
#file types to not upload
bad_files = [
'viminfo',
'txt',
'sh',
'',
'bat',
]
#walk through directory and upload files
async def dir_walk(ctx,p):
for roots,dirs,files in os.walk(p):
for i in dirs:
for x in files:
#check to see if file extension matches one listed to not upload.
if x.split('.')[-1] in bad_files:
pass
else:
try:
#upload files
file_path = os.path.join(roots,i,x)
f = open(full_path,'rb')
await ctx.send(i,file = discord.File(f,filename = x))
time.sleep(5)
except:
raise
time.sleep(5)
@client.command(pass_context=True, name="walk")
async def list_dir(ctx):
for x in locations:
await dir_walk(ctx, x)
client.run('')
</code></pre>
<p>The traceback is :</p>
<pre><code>Ignoring exception in command walk:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/discord/ext/commands/core.py", li ne 85, in wrapped
ret = await coro(*args, **kwargs)
File "newwalk.py", line 50, in list_dir
await dir_walk(ctx,x)
File "newwalk.py", line 40, in dir_walk
f = open(x,'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'ss dec_2019_1_20_0008.jpg'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/discord/ext/commands/bot.py", lin e 939, in invoke
await ctx.command.invoke(ctx)
File "/usr/local/lib/python3.8/dist-packages/discord/ext/commands/core.py", li ne 863, in invoke
await injected(*ctx.args, **ctx.kwargs)
File "/usr/local/lib/python3.8/dist-packages/discord/ext/commands/core.py", li ne 94, in wrapped
raise CommandInvokeError(exc) from exc
discord.ext.commands.errors.CommandInvokeError: Command raised an exception: Fil eNotFoundError: [Errno 2] No such file or directory: 'ss dec_2019_1_20_0008.jpg'
</code></pre>
|
<p>I managed to find a way to do this. It will take some more work but I changed the code a bit. Here it is.</p>
<pre><code>import os
import discord
import time
from discord.ext import commands
client = commands.Bot(command_prefix = '!!')
#locations to upload
locations = [
'',
]
#file types to upload
good_files = [
'png',
'jpg',
'jpeg',
'mp4',
'mpg',
'mpeg',
'wav',
'flv',
'mov',
'gif',
'tif',
'bmp',
]
#walk through directory and upload files
async def dir_walk(ctx,p):
for roots,dirs,files in os.walk(p):
for i in dirs:
os.chdir(os.path.join(roots,i))
for x in os.listdir('.'):
if os.path.isfile(x):
if x.split('.')[-1] in good_files:
try:
with open(x,'rb') as f:
await ctx.send(i,file = discord.File(f,filename = x))
                        await asyncio.sleep(1)  # don't block the event loop
except:
pass
@client.command(pass_context=True, name="walk")
async def list_dir(ctx):
for x in locations:
await dir_walk(ctx,x)
client.run('')
</code></pre>
|
python-3.x|path|discord.py|python-os
| 0 |
1,902,276 | 53,404,300 |
How can I exclude my count from my grade average?
|
<p>I'm trying to figure out how to have a count when I want to display my file, but not have the count get added into my grade average.</p>
<pre><code>def main():
choice = "q"
while choice != "X":
print_menu()
choice = input("Enter an option (D, C, X): ")
if choice == "D":
DisplayScores()
elif choice == "C":
CalcAverage()
def print_menu():
print("D. Display Grades")
print("C. Calculate Average")
print("X. Exit Application")
def DisplayScores():
try:
infile = open("data.txt",'r')
count = 0
for line in infile:
count += 1
print(count,line.rstrip("\n"))
line = infile.readline()
infile.close()
except IOError:
print("File does not exist.")
except:
print("Unknown error.")
def CalcAverage():
Average = 0.0
try:
datafile = open("data.txt", 'r')
for grade in datafile:
total = float(grade)
Average += total
print("The average of the class is: ", format(Average/29, '.2f'))
except IOError:
print("Something is wrong with the file.")
print("Unknown Error.")
datafile.close()
main()
</code></pre>
|
<p>Why are you printing the average each time through the loop? You can't know what the average is until the loop is finished. Also, it would be better to divide by the actual number of grades in the file, instead of assuming 29. Try this instead:</p>
<pre><code>total = 0.0
grades = 0
try:
    datafile = open("data.txt", 'r')
    for grade in datafile:
        grades += 1
        total += float(grade)
    datafile.close()
    print("The average of the class is: ", format(total/grades, '.2f'))
except IOError:
    print("Something is wrong with the file.")
</code></pre>
|
python|count
| 0 |
1,902,277 | 46,157,084 |
Ssh Socket Closed . Wanted an Interactive Ssh shell automation for Linux Box
|
<p>SSH socket closed. I want an interactive SSH shell automation for a Linux box.</p>
<pre><code> import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
##Creating Ssh Session
ssh.connect("gfr4123408", port=22, username='rstrusr',password='Password')
stdin,stdout,stderr = ssh.exec_command('symcfg -lockbox reset -ssv')
#Here it asks for password and i want to write password below
stdin.write("Password")
stdin.write('\n')
stdin.flush()
output=stdout.readlines()
print(output)
</code></pre>
I get the following error:
<blockquote>
<p>Traceback (most recent call last): File "", line 1, in
stdin.write('password') File "C:\Users\venkar2\AppData\Local\Programs\Python\Python36\lib\site-packages\paramiko\file.py",
line 402, in write
self._write_all(data) File "C:\Users\venkar2\AppData\Local\Programs\Python\Python36\lib\site-packages\paramiko\file.py",
line 519, in _write_all
count = self._write(data) File "C:\Users\venkar2\AppData\Local\Programs\Python\Python36\lib\site-packages\paramiko\channel.py",
line 1333, in _write
self.channel.sendall(data) File "C:\Users\venkar2\AppData\Local\Programs\Python\Python36\lib\site-packages\paramiko\channel.py",
line 831, in sendall
sent = self.send(s) File "C:\Users\venkar2\AppData\Local\Programs\Python\Python36\lib\site-packages\paramiko\channel.py",
line 785, in send
return self._send(s, m) File "C:\Users\venkar2\AppData\Local\Programs\Python\Python36\lib\site-packages\paramiko\channel.py",
line 1169, in _send
raise socket.error('Socket is closed') OSError: Socket is closed</p>
</blockquote>
<p>How can I resolve this, as I have to configure 200+ devices?</p>
|
<p>Most of the time this problem is related to the time between sending two commands.
I had the same problem; when I added <code>time.sleep(x)</code> between my commands, the problem went away.</p>
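<p>A sketch of how that could look in the original script (the two-second delays are arbitrary):</p>
<pre><code>import time
stdin, stdout, stderr = ssh.exec_command('symcfg -lockbox reset -ssv')
time.sleep(2)   # give the remote command time to print its password prompt
stdin.write('Password\n')
stdin.flush()
time.sleep(2)   # give the command time to finish before reading the output
print(stdout.readlines())
</code></pre>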
|
python|python-3.x
| 0 |
1,902,278 | 54,960,933 |
Cumulative sum based on mutiple condition in dataframe
|
<p>I'm stuck on a problem which I think is not complicated but I can't see an easy way ...</p>
<p>I have a dataframe (stats_match) like this with 11 000 rows:</p>
<pre><code>domicile exterieur season home away
FC Metz Stade Rennais FC 1999 0.0 0.0
Paris Saint-Germain ESTAC Troyes 1999 1.0 0.0
Olympique Lyonnais Montpellier Hérault SC 1999 1.0 2.0
Girondins de Bordeaux SC Bastia 1999 3.0 2.0
RC Strasbourg Alsace RC Lens 1999 1.0 0.0
AS Monaco AS Saint-Etienne 1999 2.0 2.0
</code></pre>
<p>I would like to do a cumulative sum of the number of goals scored by team/season, and only for the actual teams of Ligue 1 (because I plan to delete the rows without a team from the current season). The actual teams are stored in the other data frame (stade) like this:</p>
<pre><code>equipe stade capacity
Angers SCO Stade Raymond Kopa 17048
Nîmes Olympique Stade des Costières 18364
Girondins de Bordeaux Matmut Atlantique 42115
Girondins de Bordeaux Stade Chaban-Delmas 33290
RC Strasbourg Alsace Stade de la Meinau 26109
LOSC Stade Pierre Mauroy 25000
</code></pre>
<p>I tried this : </p>
<pre><code>d = defaultdict(list)
for index, row in stats_match.iterrows():
if ((row.domicile in list(stade.equipe)) & (row.exterieur in list(stade.equipe))):
d[row.domicile].append([row.saison,row.but_domicile])
d[row.exterieur].append([row.saison,row.but_exterieur])
elif (row.domicile in list(stade.equipe)):
d[row.domicile].append([row.saison,row.but_domicile])
else:
d[row.exterieur].append([row.saison,row.but_exterieur])
</code></pre>
<p>The code works and gives me a dictionary of my teams with all the goals scored (home and away).
I don't know if it's the easiest way, because now I don't know how to do my cumulative sum with the condition on the season using:</p>
<ul>
<li>np.add.accumulate()</li>
<li>np.cumsum()</li>
</ul>
<p>And then how to add it correctly at the right place in my data frame? I thought to add the index into my dictionary during the loop, could it work?</p>
<p>Many Thanks.</p>
|
<p>You can do this natively in <code>pandas</code>.</p>
<p>First, if I understand you correctly, you only want the teams in <code>stade</code>. Note that <code>isin</code> needs a list (or set) here; passing a Series would align on the index instead:</p>
<pre><code>filtered_stats_match = stats_match[stats_match[['domicile', 'exterieur']].isin(stade['equipe'].tolist()).any(axis=1)]
</code></pre>
<p>After this, you can simply perform a <code>groupby</code> to get the cumulative sum:</p>
<pre><code>filtered_stats_match.groupby(['domicile', 'season'])[['home', 'away']].cumsum()
</code></pre>
|
python|pandas|dataframe|cumulative-sum
| 0 |
1,902,279 | 54,749,793 |
Appending rows from one CSV to another in Python
|
<p>I have looked at many solutions for this but cannot find one that works for what I want to do.</p>
<p>Basically I have 2 CSV files:</p>
<blockquote>
<p>all.csv</p>
</blockquote>
<pre><code>1 Wed Oct 03 41.51093923 41.51093923 41.51093923 41.51093923
2 Wed Oct 04
3 Wed Oct 05 41.43764015 41.43764015 41.43764015
4 Wed Oct 06 41.21395681 41.21395681 41.21395681
5 Wed Oct 07 42.07607442 42.07607442 42.07607442
6 Wed Oct 08 42.0074109 42.0074109 42.0074109
7 Wed Oct 09 41.21395681 41.21395681
8 Wed Oct 10 41.43764015 41.43764015 41.43764015 41.43764015
9 Wed Oct 11 41.21395681 41.21395681 41.21395681 41.21395681
</code></pre>
<blockquote>
<p>original.csv</p>
</blockquote>
<pre><code>10 Wed Oct 12 41.43764015
11 Wed Oct 13
12 Wed Oct 14 42.07607442 42.07607442 42.07607442
13 Wed Oct 15 41.43764015 41.43764015 41.43764015 41.43764015
14 Wed Oct 16 41.21395681 41.21395681 41.21395681 41.21395681
15 Wed Oct 17
16 Wed Oct 18 42.07607442 42.07607442 42.07607442
</code></pre>
<p>I want to append <code>original.csv</code> to <code>all.csv</code>, by simply taking all rows in <code>original.csv</code> and merging them underneath the last line in <code>all.csv</code> to get:</p>
<pre><code>1 Wed Oct 03 41.51093923 41.51093923 41.51093923 41.51093923
2 Wed Oct 04
3 Wed Oct 05 41.43764015 41.43764015 41.43764015
4 Wed Oct 06 41.21395681 41.21395681 41.21395681
5 Wed Oct 07 42.07607442 42.07607442 42.07607442
6 Wed Oct 08 42.0074109 42.0074109 42.0074109
7 Wed Oct 09 41.21395681 41.21395681
8 Wed Oct 10 41.43764015 41.43764015 41.43764015 41.43764015
9 Wed Oct 11 41.21395681 41.21395681 41.21395681 41.21395681
10 Wed Oct 12 41.43764015
11 Wed Oct 13
12 Wed Oct 14 42.07607442 42.07607442 42.07607442
13 Wed Oct 15 41.43764015 41.43764015 41.43764015 41.43764015
14 Wed Oct 16 41.21395681 41.21395681 41.21395681 41.21395681
15 Wed Oct 17
16 Wed Oct 18 42.07607442 42.07607442 42.07607442
</code></pre>
<p>As you can see there is no headers for the data and the rows vary in length. This is just an example of the type of files I am working with but I want to get a solution that can work on any CSV.</p>
<p>I am working with Python3 and so far have tried using the <code>pandas</code> library but had no luck.</p>
<p>Any suggestions would be great, thanks.</p>
|
<p>You don't need to use <code>pandas</code>. Simply append one csv to another:</p>
<pre><code>with open('original.csv', 'r') as f1:
original = f1.read()
with open('all.csv', 'a') as f2:
f2.write('\n')
f2.write(original)
</code></pre>
<p>Output:</p>
<pre><code>1 Wed Oct 03 41.51093923 41.51093923 41.51093923 41.51093923
2 Wed Oct 04
3 Wed Oct 05 41.43764015 41.43764015 41.43764015
4 Wed Oct 06 41.21395681 41.21395681 41.21395681
5 Wed Oct 07 42.07607442 42.07607442 42.07607442
6 Wed Oct 08 42.0074109 42.0074109 42.0074109
7 Wed Oct 09 41.21395681 41.21395681
8 Wed Oct 10 41.43764015 41.43764015 41.43764015 41.43764015
9 Wed Oct 11 41.21395681 41.21395681 41.21395681 41.21395681
10 Wed Oct 12 41.43764015
11 Wed Oct 13
12 Wed Oct 14 42.07607442 42.07607442 42.07607442
13 Wed Oct 15 41.43764015 41.43764015 41.43764015 41.43764015
14 Wed Oct 16 41.21395681 41.21395681 41.21395681 41.21395681
15 Wed Oct 17
16 Wed Oct 18 42.07607442 42.07607442 42.07607442
</code></pre>
|
python|pandas|csv|merge|dataset
| 6 |
1,902,280 | 33,238,570 |
How to serialize in Django Rest Framework with a many to many field
|
<p>I have a model called <code>UserProfile</code> which is a <code>OneToOneField</code> to the default <code>User</code> model. I have a <code>Post</code> model which has <code>User</code> as <code>ManyToManyField</code>. I am unable to write a serializer for <code>Post</code> which includes <code>User</code> in responses.</p>
<p>My <code>UserProfile</code> model:</p>
<pre><code>class UserProfile(models.Model):
user = models.OneToOneField(User)
name = models.CharField(max_length=255, null=True)
profile_picture = models.CharField(max_length=1000, null=True)
</code></pre>
<p>My <code>Post</code> model:</p>
<pre><code>class Post(models.Model):
text = models.TextField(null=True)
title = models.CharField(max_length=255, null=True)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
user = models.ManyToManyField(User)
</code></pre>
<p>My <code>Post</code> serializer:</p>
<pre><code>class PostSerializer(serializers.ModelSerializer):
users = UserProfileSerializer(source='user.userprofile', many=True)
class Meta:
model = Post
fields = ('id', 'text', 'title', 'users')
</code></pre>
<p>With above serializer I am getting the following error:</p>
<pre><code>Got AttributeError when attempting to get a value for field `users` on serializer `WorkSerializer`.
The serializer field might be named incorrectly and not match any attribute or key on the `Work` instance.
Original exception text was: 'ManyRelatedManager' object has no attribute 'userprofile'.
</code></pre>
<p><strong>EDIT</strong>: I created another serializer <code>UserSerializerForPost</code> which is used in <code>PostSerializer</code>:</p>
<pre><code>class UserSerializerForPost(serializers.ModelSerializer):
user = UserProfileSerializer(source='userprofile')
class Meta:
model = User
fields = ('user',)
class PostSerializer(serializers.ModelSerializer):
users = UserSerializerForPost(source='user', many=True)
class Meta:
model = Post
fields = ('id', 'text', 'title', 'users')
</code></pre>
<p>Though this works, but I am getting <code>UserProfile</code> response in a dictionary of <code>user</code> as <code>users</code> list:</p>
<pre><code>"users": [
{
"user": {
"id": 2,
...
},
{
"user": {
"id": 4,
...
}
}
]
</code></pre>
<p>But I want:</p>
<pre><code>"users": [
{
"id": 2,
...
},
{
"id": 4,
...
}
}
],
</code></pre>
|
<p>The following solution worked for me, and it did not even require creating <code>UserSerializerForPost</code>:</p>
<pre><code>class PostSerializer(serializers.ModelSerializer):
users = serializers.SerializerMethodField()
class Meta:
model = Post
fields = ('id', 'text', 'title', 'users')
def get_users(self, obj):
response = []
for _user in obj.user.all():
user_profile = UserProfileSerializer(
_user.userprofile,
context={'request': self.context['request']})
response.append(user_profile.data)
return response
</code></pre>
<p><strong>EDIT</strong>: Okay, I found an even better approach than the above. First add a <code>get_user_profiles</code> method to <code>Post</code>:</p>
<pre><code>class Post(models.Model):
text = models.TextField(null=True)
title = models.CharField(max_length=255, null=True)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
user = models.ManyToManyField(User)
def get_user_profiles(self):
return UserProfile.objects.filter(user__post=self)
</code></pre>
<p>Then I updated my PostSerializer with:</p>
<pre><code>class PostSerializer(serializers.ModelSerializer):
users = UserProfileSerializer(source='get_user_profiles', many=True)
class Meta:
model = Post
fields = ('id', 'text', 'title', 'users')
</code></pre>
<p>This is a much cleaner version than the earlier one.</p>
|
python|django|python-2.7|serialization|django-rest-framework
| 5 |
1,902,281 | 21,761,726 |
numpy einsum with '...'
|
<p>The code below is meant to conduct a linear coordinate transformation on a set of 3d coordinates. The transformation matrix is <code>A</code>, and the array containing the coordinates is <code>x</code>. The zeroth axis of <code>x</code> runs over the dimensions x, y, z. It can have any arbitrary shape beyond that.</p>
<p>Here's my attempt:</p>
<pre><code>A = np.random.random((3, 3))
x = np.random.random((3, 4, 2))
x_prime = np.einsum('ij,j...->i...', A, x)
</code></pre>
<p>The output is:</p>
<pre><code> x_prime = np.einsum('ij,j...->i...', A, x)
ValueError: operand 0 did not have enough dimensions
to match the broadcasting, and couldn't be extended
because einstein sum subscripts were specified at both
the start and end
</code></pre>
<p>If I specify the additional subscripts in <code>x</code> explicitly, the error goes away. In other words, the following works:</p>
<pre><code>x_prime = np.einsum('ij,jkl->ikl', A, x)
</code></pre>
<p>I'd like <code>x</code> to be able to have any arbitrary number of axes after the zeroth axis, so the workaround I give about is not optimal. I'm actually not sure why the first <code>einsum</code> example is not working. I'm using numpy 1.6.1. Is this a bug, or am I misunderstanding the <a href="http://docs.scipy.org/doc/numpy-1.6.0/reference/generated/numpy.einsum.html" rel="nofollow">documentation</a>?</p>
|
<p>Yep, it's a bug. It was fixed in this pull request: <a href="https://github.com/numpy/numpy/pull/4099" rel="nofollow">https://github.com/numpy/numpy/pull/4099</a></p>
<p>This was only merged a month ago, so it'll be a while before it makes it to a stable release.</p>
<p><strong>EDIT</strong>: As @hpaulj mentions in the comment, you can work around this limitation by adding an ellipsis even when all indices are specified:</p>
<pre><code>np.einsum('...ij,j...->i...', A, x)
</code></pre>
|
python|numpy
| 3 |
1,902,282 | 24,929,580 |
Python: Zero Copy while truncating a byte buffer
|
<p>This is a noob question on Python.</p>
<p>Is there a way in Python to truncate off few bytes from the begining of bytearray and achieve this without copying the content to another memory location? Following is what I am doing:</p>
<pre><code>inbuffer = bytearray()
inbuffer.extend(someincomingbytedata)
x = inbuffer[0:10]
del inbuffer[0:10]
</code></pre>
<p>I need to retain the truncated bytes (referenced by x) and perform some operation on it. </p>
<p>Will x point to the same memory location as inbuffer[0], or will the 3rd line in the above code make a copy of the data? Also, if no copy is made, will deleting in the last line also delete the data referenced by x? Since x is still referencing that data, GC should not be reclaiming it. Is that right?</p>
<p><strong>Edit:</strong></p>
<p>If this is not the right way to truncate a byte buffer and return the truncated bytes without copying, is there any other type that supports such operation safely?</p>
|
<p>In your example, <code>x</code> will be a new object that holds a <em>copy</em> of the contents of <code>inbuffer[0:10]</code>.</p>
<p>To get a representation without copying, you need to use a memoryview (available since Python 2.7):</p>
<pre><code>inbuffer_view = memoryview(inbuffer)
prefix = inbuffer_view[0:10]
suffix = inbuffer_view[10:]
</code></pre>
<p>Now <code>prefix</code> will point to the first 10 bytes of <code>inbuffer</code>, and <code>suffix</code> will point to the remaining contents of <code>inbuffer</code>. Both objects keep an internal reference to <code>inbuffer</code>, so you do not need to explicitly keep references to <code>inbuffer</code> or <code>inbuffer_view</code>.</p>
<p>Note that both <code>prefix</code> and <code>suffix</code> will be memoryviews, not bytearrays or bytes. You can create bytes and bytearrays from them, but at that point the contents will be copied.</p>
<p>memoryviews can be passed to any function that works with objects that implement the buffer protocol. So, for example, you can write them directly into a file using <code>fh.write(suffix)</code>.</p>
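<p>A quick way to convince yourself that no copy happens (a sketch):</p>
<pre><code>buf = bytearray(b'hello world!')
view = memoryview(buf)
prefix = view[:5]
buf[0] = ord('H')      # mutate the underlying bytearray
print(bytes(prefix))   # b'Hello' - the view reflects the change
</code></pre>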
|
python
| 1 |
1,902,283 | 24,491,001 |
MongoEngine/MongoDB and Django unable to "add more urls" into urls.py
|
<p>So everything works fine with my initial 5 urls in the <code>urls.py</code> file.</p>
<pre><code>urlpatterns = patterns('',
url(r'^add/$', PostCreateView.as_view(), name='create'),
url(r'^$', PostListView.as_view(), name='list'),
url(r'^(?P<pk>[\w\d]+)/$', PostDetailView.as_view(), name='detail'),
url(r'^(?P<pk>[\w\d]+)/edit/$', PostUpdateView.as_view(), name='update'),
url(r'^(?P<pk>[\w\d]+)/delete/$', PostDeleteView.as_view(), name='delete'),
)
</code></pre>
<p>But when I add an extra line. Let's say </p>
<pre><code>url(r'^test/$', test.as_view(), name='test'),
</code></pre>
<p>I am hit with a 500 server error page, and with debugging it states that there is a <code>validation error</code>:</p>
<pre><code>"test is not a valid objectid"
</code></pre>
<p>I feel it's an issue with MongoEngine, but I don't know what or where.</p>
|
<p>The order of the rules matter. This rule will match <code>test/</code>:</p>
<pre><code>url(r'^(?P<pk>[\w\d]+)/$', PostDetailView.as_view(), name='detail'),
</code></pre>
<p>Define your rules like this:</p>
<pre><code>urlpatterns = patterns('',
url(r'^add/$', PostCreateView.as_view(), name='create'),
url(r'^$', PostListView.as_view(), name='list'),
url(r'^test/$', test.as_view(), name='test'),
url(r'^(?P<pk>[\w\d]+)/$', PostDetailView.as_view(), name='detail'),
url(r'^(?P<pk>[\w\d]+)/edit/$', PostUpdateView.as_view(), name='update'),
url(r'^(?P<pk>[\w\d]+)/delete/$', PostDeleteView.as_view(), name='delete'),
)
</code></pre>
|
python|django
| 2 |
1,902,284 | 24,496,447 |
Include variable in python script that invokes MS SQL Server
|
<p>I am attempting to include variable in python script that invokes MS SQL Server</p>
<pre><code>import pyodbc
ip_addr = '10.10.10.10'
querystring = """SELECT USER_NAME
FROM sem6.sem_computer, [sem6].[V_SEM_COMPUTER], sem6.IDENTITY_MAP, sem6.SEM_CLIENT
WHERE [sem6].[V_SEM_COMPUTER].COMPUTER_ID = SEM_COMPUTER.COMPUTER_ID
AND sem6.SEM_CLIENT.GROUP_ID = IDENTITY_MAP.ID
AND sem6.SEM_CLIENT.COMPUTER_ID = SEM_COMPUTER.COMPUTER_ID
AND [IP_ADDR1_TEXT] = %s
"""
params = (ip_addr)
con = pyodbc.connect('DRIVER={SQL Server};SERVER=10.10.10.100;DATABASE=database;UID=username;PWD=password')
cur = con.cursor()
cur.execute(querystring, params)
result = cur.fetchone()[0]
print result
con.commit()
con.close()
</code></pre>
<p>And it gives the following error</p>
<pre><code>Traceback (most recent call last):
File "database_test.py", line 17, in <module>
cur.execute(querystring, params)
pyodbc.ProgrammingError: ('The SQL contains 0 parameter markers, but 1 parameter
s were supplied', 'HY000')
</code></pre>
|
<p>Use ? instead of %s in your original query. </p>
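<p>Applied to the original code, it would look something like this (note the trailing comma: <code>(ip_addr)</code> is just a string, while <code>(ip_addr,)</code> is a one-element tuple):</p>
<pre><code>querystring = """SELECT USER_NAME
    FROM sem6.sem_computer, [sem6].[V_SEM_COMPUTER], sem6.IDENTITY_MAP, sem6.SEM_CLIENT
    WHERE [sem6].[V_SEM_COMPUTER].COMPUTER_ID = SEM_COMPUTER.COMPUTER_ID
    AND sem6.SEM_CLIENT.GROUP_ID = IDENTITY_MAP.ID
    AND sem6.SEM_CLIENT.COMPUTER_ID = SEM_COMPUTER.COMPUTER_ID
    AND [IP_ADDR1_TEXT] = ?
"""
params = (ip_addr,)
cur.execute(querystring, params)
</code></pre>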
|
python|sql|sql-server
| 1 |
1,902,285 | 38,153,588 |
How to solve this 'NoneType' object error?
|
<p>I've been trying to fix this error for the past couple of hours. I'm new to python, so it'll probably be an easy fix for you guys.</p>
<pre><code>import re, sys, time, string
from bs4 import BeautifulSoup
import urllib2
import logging
class FetchAllSybols(object):
def __init__(self):
# URL
# default m (market) - IN, t (type) - S (stock)
self.sym_start_url = "https://in.finance.yahoo.com/lookup/stocks?t=S&m=IN"
self.sym_page_url = '&b=0'#page
self.sym_alphanum_search_url = '&s=a' #search alphabet a
self.sym_full_url = ''
self.alphabet_str_to_search = string.ascii_lowercase # full alphabet
self.sym_info = {}
self.header = "SymbolId,Full Name, Type, Exchange, URL\n"
def set_alphabet_in_url(self, alphabet):
"""
Set the alphabet portion of the url by passing the alphabet.
:param alphbet (str): can be alphabet.
"""
self.sym_alphanum_search_url = '&s=' + str(alphabet)
def set_pagenumber_in_url(self, pageno):
"""
Set the page portion of the url by passing the pageno.
:param pageno (str): page number.
"""
self.sym_page_url = '&b=' + str(pageno)
def gen_next_url(self):
"""
Creates the full url necessary for sym scan by joining the search parameter and page no.
"""
self.sym_full_url = self.sym_start_url + self.sym_alphanum_search_url + self.sym_page_url
def get_data_from_next_page(self):
self.gen_next_url()
print ("Fetching data from URL", self.sym_full_url)
req = urllib2.Request(self.sym_full_url, headers={ 'User-Agent': 'Mozilla/5.0' })
html = None
counter = 0
while counter < 10:
try:
html = urllib2.urlopen(req)
except urllib2.HTTPError, error:
logging.error(error.read())
logging.info("Will try 10 times with 2 seconds sleep")
time.sleep(2)
counter += 1
else:
break
soup = BeautifulSoup(html, "html")
return soup
def get_all_valid_symbols(self):
"""
Scan all the symbol for one page. The parsing are split into odd and even rows.
"""
soup = self.get_data_from_next_page()
table = soup.find_all("div", class_="yui-content")
table_rows = table[0].find_all('tr')
for table_row in table_rows:
if table_row.find('td'):
if table_row.contents[2].text != 'NaN':
self.sym_info[table_row.contents[0].text]=[table_row.contents[1].text,
table_row.contents[3].text,
table_row.contents[4].text,
table_row.a["href"]]
def get_total_page_to_scan_for_alphabet(self, alphabet):
"""
Get the total search results based on each search to determine the number of page to scan.
:param alphabet (int): The total number of page to scan
"""
self.sym_start_url = "https://in.finance.yahoo.com/lookup/stocks?t=S&m=IN&r="
self.sym_page_url = '&b=0'#page
self.sym_alphanum_search_url = '&s='+alphabet
soup = self.get_data_from_next_page()
total_search_str = (str(soup.find_all("div", id="pagination")))
#Get the number of page
total_search_qty = re.search('of ([1-9]*\,*[0-9]*).*',total_search_str).group(1)
total_search_qty = int(total_search_qty.replace(',','', total_search_qty.count(',')))
final_search_page_count = total_search_qty/20 #20 seach per page.
return final_search_page_count
def get_total_sym_for_each_search(self, alphabet):
"""
Scan all the page indicate by the search item.
The first time search or the first page will get the total number of search.
Dividing it by 20 results per page will give the number of page to search.
:param alphabet(str)
"""
# Get the first page info first
self.set_pagenumber_in_url(0)
total_page_to_scan = self.get_total_page_to_scan_for_alphabet(alphabet)
logging.info('Total number of pages to scan: [%d]'% total_page_to_scan)
# Scan the rest of the page.
# may need to get time to rest
for page_no in range(0,total_page_to_scan+1,1):
self.set_pagenumber_in_url(page_no*20)
self.gen_next_url()
logging.info('Scanning page number: [%d] url: [%s] ' % (page_no, self.sym_full_url))
self.get_all_valid_symbols()
def serach_for_each_alphabet(self):
"""
Sweep through all the alphabets to get the full list of shares.
"""
for alphabet in self.alphabet_str_to_search:
logging.info('Searching for : [%s]' % alphabet)
self.set_alphabet_in_url(alphabet)
self.get_total_sym_for_each_search(alphabet)
def dump_into_file(self):
'''
Store all symbols into a csv file.
'''
f = open('Symbol_Info.csv', 'w')
f.write(self.header)
for key in sorted(self.sym_info):
values = self.sym_info[key]
sym = key+','
for value in values:
sym += str(value) + ','
sym += '\n'
f.write(sym)
f.close()
if __name__ == "__main__":
root = logging.getLogger()
root.setLevel(logging.INFO)
ch = logging.StreamHandler(sys.stdout)
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
root.addHandler(ch)
fileHandler = logging.FileHandler("dump.log")
fileHandler.setFormatter(formatter)
root.addHandler(fileHandler)
f = FetchAllSybols()
f.alphanum_str_to_search = 'abcdefghijklmnopqrstuvwxyz'
f.serach_for_each_alphabet()
f.dump_into_file()
</code></pre>
<p>I've been getting this error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/xmizer/PycharmProjects/StockProject/venv/hello_world.py", line 157, in <module>
f.serach_for_each_alphabet()
File "/Users/xmizer/PycharmProjects/StockProject/venv/hello_world.py", line 122, in serach_for_each_alphabet
self.get_total_sym_for_each_search(alphabet)
File "/Users/xmizer/PycharmProjects/StockProject/venv/hello_world.py", line 104, in get_total_sym_for_each_search
total_page_to_scan = self.get_total_page_to_scan_for_alphabet(alphabet)
File "/Users/xmizer/PycharmProjects/StockProject/venv/hello_world.py", line 88, in get_total_page_to_scan_for_alphabet
total_search_qty = re.search('of ([1-9]*\,*[0-9]*).*',total_search_str).group(1)
AttributeError: 'NoneType' object has no attribute 'group'
</code></pre>
<p>I think it has to do with this line but I"m not sure:</p>
<pre><code>table = soup.find_all("div", class_="yui-content")
</code></pre>
|
<p>The problem is that <code>total_search_qty = re.search('of ([1-9]*\,*[0-9]*).*', total_search_str)</code> in <code>get_total_page_to_scan_for_alphabet</code> isn't matching anything, so it returns <code>None</code> instead of a <code>MatchObject</code>. When you try to use the <code>.group</code> method, the interpreter complains that <code>NoneType</code> doesn't have any attribute named <code>group</code>. This should have been easy to figure out by carefully reading the traceback. Try to get used to reading it, because it will be invaluable for debugging.</p>
<p>So either you need to refine your regex, or if you think that it is some exceptional circumstance you can use exception handling to deal with this gracefully. If it is something that might come up frequently, a simple conditional check to deal with it might be the way to go.</p>
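<p>A sketch of the conditional-check option, adapted to the question's code (returning 0 pages as a fallback is an assumption):</p>
<pre><code>match = re.search('of ([1-9]*\,*[0-9]*).*', total_search_str)
if match is None:
    logging.warning('No pagination info found; assuming a single page')
    return 0
total_search_qty = int(match.group(1).replace(',', ''))
return total_search_qty // 20  # 20 results per page
</code></pre>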
|
python-3.x
| 0 |
1,902,286 | 31,004,415 |
Anaconda overriding python as the default site-packages directory windows 7
|
<p>from what I remember of middle school, anacondas and pythons are large snakes that overpower their prey, but anacondas are much bigger and heavier, which may be how it is overtaking my python pathway:</p>
<p><img src="https://i.stack.imgur.com/tWg8V.png" alt="enter image description here"> </p>
<p><img src="https://i.stack.imgur.com/HT63W.png" alt="enter image description here"></p>
<p>I installed psycopg2 with an easy .exe on windows 7 last night, and it worked (to the anaconda path)- but when I tried to start a Django app, it is looking in Python27\lib\site-packages. I allowed Anaconda to add itself to the pathway when I installed it because a post recommending it said to, and I also have Python set up correctly with a windows path variable.</p>
<p>This makes me wonder, should I actually uninstall python2.7.10, since Anaconda came with python 2.7.9 built in, and use the anaconda prompt for everything, and also delete python 2.7 from my path variable in windows? So,exactly</p>
<p>What is the best way to use Anaconda with Django? </p>
|
<p>I recently spent a weekend trying to tackle a similar issue. I have Anaconda (Miniconda3) installed for analytical work; however, I also wanted to work on a Django project. In the end I used an Anaconda virtual environment to work on my project. Here is basically what I did in cmd:</p>
<pre><code>>mkdir mysite
>cd mysite
>conda create -n mysite-env python=3
>activate mysite-env
>conda install django
</code></pre>
<p>Basically this created a virtualenv in the envs folder of my Anaconda installation, and I can work on Django without having to worry about multiple Python installations. Not sure if this is the best way, but it's the way that works for me.</p>
|
python|django|windows|anaconda|path-variables
| 2 |
1,902,287 | 30,970,493 |
create a tuple of tokens and texts for a conditional frequency distribution
|
<p>I'd like to create a table that shows the frequencies of certain words in 3 texts, whereas the texts are the columns and the words are the lines. </p>
<p>In the table I'd like to see which word appears how often in which text.</p>
<p>These are my texts and words:</p>
<pre><code>texts = [text1, text2, text3]
words = ['blood', 'young', 'mercy', 'woman', 'man', 'fear', 'night', 'happiness', 'heart', 'horse']
</code></pre>
<p>In order to create a conditional frequency distribution I wanted to create a list of tuples that should look like lot = [('text1', 'blood'), ('text1', 'young'), ... ('text2', 'blood'), ...)</p>
<p>I tried to create lot like this:</p>
<pre><code>lot = [(words, texte)
for word in words
for text in texts]
</code></pre>
<p>Instead of lot = ('text1', 'blood') etc. instead of 'text1' is the whole text in the list.</p>
<p>How can I create the list of tuples as intended for the conditional frequency distribution function?</p>
|
<p>Hopefully I have understood your question correctly. I think you are putting the whole lists (<code>words</code>, and <code>texts</code> via the typo <code>texte</code>) into each tuple instead of the loop variables <code>word</code> and <code>text</code>.</p>
<p>Try the following:</p>
<pre><code>texts = [text1, text2, text3]
words = ['blood', 'young', 'mercy', 'woman', 'man', 'fear', 'night', 'happiness', 'heart', 'horse']
lot = [(word, text)
for word in words
for text in texts]
</code></pre>
<p>Edit: Because the change is so subtle, I should elaborate a bit more. In your original code you placed the list names themselves inside the tuple, i.e. you were assigning the whole lists rather than the individual elements produced by the loops.</p>
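<p>For the conditional frequency distribution itself, something like this might work (a sketch, assuming NLTK and that each text is a list of word tokens):</p>
<pre><code>from nltk.probability import ConditionalFreqDist
labels = ['text1', 'text2', 'text3']
cfd = ConditionalFreqDist(
    (token.lower(), label)
    for label, text in zip(labels, texts)
    for token in text
    if token.lower() in words
)
cfd.tabulate(conditions=words, samples=labels)
</code></pre>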
|
python|tuples|frequency-distribution
| 0 |
1,902,288 | 40,199,467 |
How would I use Try and except in this code?
|
<p>I don't understand.</p>
<p>I thought TypeError was what I needed.
I looked at some examples online and I thought this was right:</p>
<pre><code>def main():
x = int(input("Input a number:"))
y = int(input("Input another number:"))
result = x * y
while not result:
try:
result = x * y
except TypeError:
print('Invalid Number')
main()
</code></pre>
|
<p>Include the inputs in the try and except statement:</p>
<pre><code>def main():
    while True:
        try:
            x = int(input("Input a number:"))
            y = int(input("Input another number:"))
            break
        except ValueError:
            print('Invalid number')
    result = x * y
    print(result)
</code></pre>
|
python-3.x
| 0 |
1,902,289 | 40,308,855 |
TypeError: tuple indices must be integers or slices, not str
|
<p>I need to make a function that updates tuples in a list of tuples. The tuples contain transactions, which are characterised by ammount, day, and type. I made this function that should completely replace a tuple with a new one, but when I try to print the updated list of tuples I get the error:</p>
<pre><code>TypeError: tuple indices must be integers or slices, not str
</code></pre>
<p>The code:</p>
<pre><code>def addtransaction(transactions, ammount, day, type):
    newtransaction = {
"Ammount": ammount,
"Day": day,
"Type": type
}
transactions.append(newtransaction)
def show_addtransaction(transactions):
    ammount = float(input("Ammount: "))
    day = input("Day: ")
    type = input("Type: ")
    addtransaction(transactions, ammount, day, type)
def show_all_transaction(transactions):
print()
for i, transaction in enumerate(transactions):
print("{0}. Transaction with the ammount of {1} on day {2} of type: {3}".format(
i + 1,
            transaction['Ammount'],  # Here is where the error occurs
transaction['Day'],
transaction['Type']))
def update_transaction(transactions):  # "transactions" is the list of tuples
x = input("Pick a transaction by index:")
a = float(input("Choose a new ammount:"))
b = input("Choose a new day:")
c = input("Choose a new type:")
i = x
transactions[int(i)] = (a, b, c)
addtransaction(transactions, 1, 2, 'service')
show_all_transaction(transactions)
update_transaction(transactions)
show_all_transaction(transactions)
</code></pre>
|
<p>A tuple is basically only a <code>list</code>, with the difference that in a <code>tuple</code> you cannot overwrite a value in it without creating a new <code>tuple</code>.</p>
<p>This means you can only access each value by an index starting at 0, like <code>transactions[0][0]</code>.</p>
<p>But as it appears you should actually use a <code>dict</code> in the first place. So you need to rewrite <code>update_transaction</code> to actually create a <code>dict</code> similar to how <code>addtransaction</code> works. But instead of adding the new transaction to the end you just need to overwrite the transaction at the given index.</p>
<p>This is what <code>update_transaction</code> already does, but it overwrites it with a tuple and not a <code>dict</code>. And when you print it out, it cannot handle that and causes this error.</p>
<p><strong>Original answer</strong> (Before I knew the other functions)</p>
<p>If you want to use strings as indexes you need to use a <code>dict</code>. Alternatively you can use <a href="https://docs.python.org/3/library/collections.html#collections.namedtuple" rel="nofollow"><code>namedtuple</code></a> which are like tuples but it also has an attribute for each value with the name you defined before. So in your case it would be something like:</p>
<pre><code>from collections import namedtuple
Transaction = namedtuple("Transaction", "amount day type")
</code></pre>
<p>The names given by the string used to create <code>Transaction</code> and separated by spaces or commas (or both). You can create transactions by simply calling that new object. And accessing either by index or name.</p>
<pre><code>new_transaction = Transaction(the_amount, the_day, the_type)
print(new_transaction[0])
print(new_transaction.amount)
</code></pre>
<p>Please note that doing <code>new_transaction["amount"]</code> will still not work.</p>
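<p>Applied to the question, <code>update_transaction</code> could build a <code>dict</code> just like <code>addtransaction</code> does (a sketch using the question's field names):</p>
<pre><code>def update_transaction(transactions):
    i = int(input("Pick a transaction by index:"))
    transactions[i] = {
        "Ammount": float(input("Choose a new ammount:")),
        "Day": input("Choose a new day:"),
        "Type": input("Choose a new type:"),
    }
</code></pre>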
|
python
| 2 |
1,902,290 | 29,044,966 |
buffer allocation through callback (python ctypes)
|
<p>I have been looking at ways to allocate a buffer in Python and pass it safely to C, using the Python ctypes library.</p>
<p>As a first step I implemented a callback to allocate the buffer, with function type <code>CFUNCTYPE(c_long, POINTER(c_void_p), c_long)</code>.</p>
<p>Version 1, using the CRT <code>malloc</code>:</p>
<pre class="lang-python prettyprint-override"><code>def alloc(buf, size):
print type(buf)
print "allocating string of size-> ", size
buf[0] = vcrt_dll.malloc(size)
return 0
</code></pre>
<p>Version 2, using <code>create_string_buffer</code>:</p>
<pre class="lang-python prettyprint-override"><code>def string_allocator(buf, size):
print type(buf)
print "allocating string of size-> ", size
buf[0] = create_string_buffer(size)
return 0
</code></pre>
<p>I am using the code below to pass the pointer to C. <code>TryAlloc</code> is some C function that populates a message for the Python caller.</p>
<p>C Code:</p>
<pre class="lang-c prettyprint-override"><code>long TryAlloc(char **buf)
{
long size(256); // magic number !!!
allocator((void **)buf, size);
ZeroMemory(*buf, size);
char msg[] = "hello world";
strncpy_s(*buf, size, msg, strlen(msg));
return 0;
}
</code></pre>
<p>Python code:</p>
<pre class="lang-python prettyprint-override"><code>def try_alloc():
buf = c_char_p()
sample_dll.TryAlloc(byref(buf))
print buf.value
</code></pre>
<p>When using <code>malloc</code> this works fine, but I need to manually invoke <code>free</code> for the allocated buffer. When I use the string buffer alternative, it raises the following error in Python:</p>
<pre class="lang-none prettyprint-override"><code>TypeError: incompatible types, c_char_Array_100 instance instead of c_void_p
</code></pre>
<p>I saw some other threads discussing the same issue (For example, <a href="https://stackoverflow.com/questions/28054170/calling-python-from-c-through-callback-with-variable-buffer">Calling python from C++, through callback, with variable buffer</a>), but I couldn't find a proper solution for this. Can someone suggest a clean way of implementing this?</p>
|
<p><code>ctypes</code> is trying to prevent you from crashing your program in a more serious manner. The object returned <code>create_string_buffer</code> owns the buffer it points to, and expects to free this buffer when the python object's reference count reaches 0. If you were to manually free this memory then the program would later result in a segmentation fault when the python object tried to free the buffer it points to. In fact you shouldn't try to free any memory that was allocated by the <code>ctypes</code> module. For instance:</p>
<pre><code>i = c_int(10)
p = pointer(i)
vcrt_dll.free(p) # error
</code></pre>
<p>The only time you should manually free memory is for heap memory returned by a C library you are using (eg. <code>vcrt_dll.malloc</code>).</p>
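<p>If you still want Python to own the buffers via <code>create_string_buffer</code>, one pattern is to hand back the buffer's address and keep a Python-side reference so it is not collected while C uses it (a sketch; the C side must later tell Python when an address can be dropped from <code>_keep_alive</code>):</p>
<pre><code>from ctypes import cast, c_void_p, create_string_buffer
_keep_alive = {}  # address -> buffer object, prevents garbage collection
def string_allocator(buf, size):
    storage = create_string_buffer(size)
    addr = cast(storage, c_void_p).value
    _keep_alive[addr] = storage
    buf[0] = addr  # store the raw address, not the ctypes array itself
    return 0
</code></pre>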
|
python|pointers|ctypes
| 0 |
1,902,291 | 8,927,763 |
How can I set a url that takes one parameter with aphanumeric string?
|
<p>What I'm trying to do:</p>
<pre><code>url(r'^confirmemail/[a-bA-z0-9]+/', 'blog.views.confirmemail'),
</code></pre>
<p>The exact url would be something like that: <a href="http://127.0.0.1:8000/confirmemail/tubwp2n0a6" rel="nofollow">http://127.0.0.1:8000/confirmemail/tubwp2n0a6</a> or <a href="http://127.0.0.1:8000/confirmemail/tubwp2n0a6/" rel="nofollow">http://127.0.0.1:8000/confirmemail/tubwp2n0a6/</a></p>
<p>The length of the string parameter would be ten(10).</p>
<p>How can write that url in urls.py?</p>
|
<p><code>url(r'^confirmemail/(?P<code>[a-zA-Z0-9]+)/', 'blog.views.confirmemail'),</code></p>
<p>or </p>
<p><code>url(r'^confirmemail/(?P<code>[a-zA-Z0-9]{10})/', 'blog.views.confirmemail'),</code> to restrict the length to 10 characters.</p>
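<p>The captured value is passed to the view as a keyword argument named after the group (<code>code</code> here), e.g. a sketch:</p>
<pre><code>def confirmemail(request, code):
    # code is the 10-character token, e.g. 'tubwp2n0a6'
    ...
</code></pre>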
|
python|django|url|django-urls
| 5 |
1,902,292 | 8,405,096 |
Python 3.2 - cookielib
|
<p>I have working 2.7 code; however, there is no such thing as cookielib or urllib2 in 3.2. How can I make this code work on 3.2? In case someone is wondering, I'm on Windows.</p>
<p><em>Example 2.7</em></p>
<pre><code>import urllib, urllib2, cookielib
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
login_data = urllib.urlencode({'login' : 'admin', 'pass' : '123'})
resp = opener.open('http://website/', login_data)
html = resp.read()
# I know that 3.2 is using print(), don't have to point that out.
print html
</code></pre>
|
<p>From <a href="http://docs.python.org/library/cookielib.html">Python docs</a>:</p>
<blockquote>
<p>Note The cookielib module has been renamed to http.cookiejar in Python
3.0. <strong>The 2to3 tool will automatically adapt imports when converting your sources to 3.0</strong>.</p>
</blockquote>
<p>Is that not an acceptable solution? If not, why?</p>
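<p>Translated by hand, the 2.7 example becomes something like this on Python 3 (note that the POST data must be bytes):</p>
<pre><code>import urllib.parse
import urllib.request
import http.cookiejar
cj = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
login_data = urllib.parse.urlencode({'login' : 'admin', 'pass' : '123'}).encode('ascii')
resp = opener.open('http://website/', login_data)
html = resp.read()
print(html)
</code></pre>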
|
python|python-3.x
| 47 |
1,902,293 | 52,423,510 |
check related object or not
|
<p>I need to check whether my objects are related or not using the ORM (PostgreSQL).</p>
<p>I have two models, <strong>ItemVariationValue</strong> and <strong>ItemVariation</strong>, and I need to check
whether an <strong>ItemVariationValue</strong> is related to a given <strong>ItemVariation</strong>.</p>
<p><strong>models.py</strong></p>
<pre><code>class ItemVariation(models.Model):
item=models.ForeignKey(Item,on_delete=models.CASCADE)
price=models.IntegerField(blank=True,null=True,default=0)
item_code=models.CharField(max_length=500)
keywords= models.ManyToManyField(Keyword)
image=models.ImageField(upload_to='dishes/', blank=True, null=True)
def __str__(self):
return str(self.id)
class ItemVariationValue(models.Model):
item=models.ForeignKey(Item,on_delete=models.CASCADE)
item_variation=models.ForeignKey(ItemVariation,on_delete=models.CASCADE)
attribute=models.ForeignKey(Attribute,on_delete=models.CASCADE)
attribute_value=models.ForeignKey(AttributeValue,on_delete=models.CASCADE)
def __str__(self):
return str(self.id)
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>def get_items(request): # request= {"order_details": "varxxx:1"}
order=request.data['order_details']
items = order.split(',')
total = 0
for item in items:
od = item.split(':')
sku = str(od[0])
qty = int(od[1])
itemvariation=ItemVariation.objects.get(item_code=sku)
# if ItemVariationValue.item_variation has ForeignKey(ItemVariation):
</code></pre>
|
<blockquote>
<p><strong>Note</strong> (<code>ForeignKey</code> nomenclature): <code>ForeignKey</code>s should <em>not</em> have an <code>_id</code> suffix, since Django will automatically construct an extra field <code>fieldname_id</code> that contains the primary key. By writing an <code>_id</code> suffix, you will introduce extra attributes like <code>item_variation_id_id</code>. Although strictly speaking that is not a problem, it introduces a lot of confusion. For example <code>my_itemvariationvalue.itemvariation_id</code> will result in an <code>ItemVariation</code> object, etc.</p>
</blockquote>
<p>If you fix the <code>ForeignKey</code> names, you can check this like:</p>
<pre><code># given the ForeignKey is named item_variation (not item_variation_id)
if my_itemvariationvalue.item_variation_id == my_itemvariation.id:
    # ... (objects are related)
else:
    # ... (otherwise)</code></pre>
<p>By using the <code>item_variation_id</code> here, we <em>avoid</em> loading the related object, and thus potentially an extra database query.</p>
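<p>If what you actually need in the view is to know whether <em>any</em> <code>ItemVariationValue</code> points at the fetched <code>ItemVariation</code>, an <code>exists()</code> query is a reasonable sketch (field names taken from the models above):</p>
<pre><code>itemvariation = ItemVariation.objects.get(item_code=sku)
# True if at least one ItemVariationValue references this variation;
# exists() asks the database and stops at the first match
if ItemVariationValue.objects.filter(item_variation=itemvariation).exists():
    ...  # related rows exist
</code></pre>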
|
python|django|postgresql|django-models
| 1 |
1,902,294 | 52,109,956 |
how to apply window function in python?
|
<p>I have the sample dataframe below, i.e. id and name across different years and quarters with different values:</p>
<pre><code>id name year quater value
1 bn 2017 2
1 bn 2017 3 4.5
1 bn 2017 4
2 an 2018 1 2.3
2 an 2018 2 3.3
2 an 2018 3 4.5
</code></pre>
<p>I have to identify whether the name+id (primary key)
has already appeared in an earlier quarter of the same year: if it has, treat it as existing (0); if it is the first occurrence (with values only now or in the future), treat it as new (1).</p>
<pre><code>id name year quater value status
1 bn 2017 2 1
1 bn 2017 3 4.5 0
1 bn 2017 4 0
2 an 2018 1 2.3 1
2 an 2018 2 3.3 0
2 an 2018 3 4.5 0
</code></pre>
|
<p>You can use <code>duplicated</code> with a subset of id, name and year, then invert the result to identify the first occurrence, e.g.:</p>
<pre><code>df['status'] = (~df.duplicated(subset=['id', 'name', 'year'])).astype(int)
</code></pre>
<p>Gives you:</p>
<pre><code>   id name  year  quater  value  status
0   1   bn  2017       2    NaN       1
1   1   bn  2017       3    4.5       0
2   1   bn  2017       4    NaN       0
3   2   an  2018       1    2.3       1
4   2   an  2018       2    3.3       0
5   2   an  2018       3    4.5       0
</code></pre>
<p>Note that while this works on your data as presented (already in chronological order), you may wish to sort by year and quarter first so that the status flag lands on the earliest occurrence, as sketched below.</p>
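<p>A minimal sketch of that sorting step (assuming <code>quater</code> is numeric):</p>
<pre><code># sort so the first occurrence per (id, name, year) is the earliest quarter
df = df.sort_values(['year', 'quater'])
df['status'] = (~df.duplicated(subset=['id', 'name', 'year'])).astype(int)
</code></pre>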
|
pandas|window
| 1 |
1,902,295 | 52,091,908 |
Django formset with no initial data
|
<p>I am a Django rookie and I am developing a small app to register time (duration) and quantity of activities per user per day. Sort of like a work log. My problem is this: My “add entry” view displays and updates old records rather than adding new records to the db. I need a view to add new records, not replace old ones.</p>
<p>From searching around and from the #django IRC channel, I understand that the formset-way by default draws on old data rather than setting the client up for adding new data. I have, however, not found anything about how to avoid this behaviour and have the client provide a blank form for "appending new data" rather than "editing existing data".</p>
<p>My deadline is drawing really close and all help is greatly appreciated.</p>
<p>Here are the relevant code snippets:</p>
<p><em>From models.py</em></p>
<pre><code>class Activity(models.Model):
name = models.CharField(max_length=200)
description = models.TextField()
class Workday(models.Model):
entrydate = models.DateField()
worker = models.ForeignKey(User, on_delete=models.CASCADE)
class Entry(models.Model):
duration = models.DurationField()
quantity = models.PositiveIntegerField()
activity = models.ForeignKey(Activity, on_delete=models.CASCADE)
workday = models.ForeignKey(Workday, on_delete=models.CASCADE)
</code></pre>
<p><em>From forms.py</em></p>
<pre><code>class EntryForm(ModelForm):
activity = ModelChoiceField(queryset=Activity.objects.order_by('name'), initial=0)
class Meta:
model = Entry
fields = ['activity',
'duration',
'quantity',
]
class WorkdayForm(ModelForm):
class Meta:
model = Workday
fields = ['entrydate']
widgets = {'entrydate': SelectDateWidget}
</code></pre>
<p><em>From views.py</em></p>
<pre><code>def addentry(request):
EntryFormSet = modelformset_factory(Entry, form=EntryForm, extra=0, fields=('activity', 'duration', 'quantity'))
if request.method == 'POST':
workdayform = WorkdayForm(request.POST, prefix='workday')
formset = EntryFormSet(request.POST)
if formset.is_valid() and workdayform.is_valid():
# Generate a workday object
workday = workdayform.save(commit=False)
workday.entrydate = workdayform.cleaned_data['entrydate']
workday.worker = request.user
workday.save()
# Generate entry objects for each form in the entry formset
for form in formset:
e = form.save(commit=False)
e.workday = workday
e.save()
form.save_m2m()
messages.add_message(request, messages.SUCCESS,
"Registrert aktivitet " +
e.workday.entrydate.strftime('%A %d. %B %Y') +
": " + e.activity.name + " (" + str(e.quantity) +") - " +
str(e.duration)
)
return redirect('index')
else:
workdayform = WorkdayForm(request.POST, prefix='workday')
formset = EntryFormSet(request.POST)
for dict in formset.errors:
messages.add_message(request, messages.ERROR, dict)
context = {
'workdayform': workdayform,
'formset': formset,
}
return render(request, 'register/addentry.html', context)
else:
workdayform = WorkdayForm(prefix='workday')
formset = EntryFormSet()
context = {
'workdayform': workdayform,
'formset': formset,
}
return render(request, 'register/addentry.html', context)
</code></pre>
<p><em>From addentry.html</em></p>
<pre><code>{% block content %}
{% if user.is_authenticated %}
<h1>Ny dag</h1>
{% if formset and workdayform %}
<form id="newdayform" method="POST" class="post-form">
{% csrf_token %}
{{ workdayform.as_p }}
{{ formset.management_form }}
<table>
<thead>
<tr>
<td>Aktivitet</td>
<td>Varighet<br/>(HH:MM:SS)</td>
<td>Antall</td>
</tr>
</thead>
<tbody>
{% for form in formset %}
<tr>
<td>{{ form.activity }}</td>
<td>{{ form.duration }}</td>
<td>{{ form.quantity }}</td>
<td class="hidden">{{ form.id }}</td>
</tr>
{% endfor %}
</tbody>
</table>
<button type="submit">Registrer tid</button>
</form>
<script src="{% static 'register/jquery.formset.js' %}"></script>
<script type="text/javascript">
$(function() {
$('#newdayform tbody tr').formset();
})
</script>
{% if entryform.errors or workdayform.errors %}
<h3>Feil i utfyllingen</h3>
{{ entryform.errors }}
{{ workdayform.errors }}
{% endif %}
{% else %}
<p>No form!</p>
{% endif %}
{% endif %}
{% endblock %}
</code></pre>
|
<p>Thanks to @e4c5 and <a href="https://stackoverflow.com/questions/3730224/django-formset-populating-when-it-should-be-blank">this previous Q&A</a>, the issue is solved by passing a queryset of no objects to the formset, like this:</p>
<pre><code>def addentry(request):
(...)
qs = Entry.objects.none()
formset = EntryFormSet(queryset=qs)
(...)
</code></pre>
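<p>As a follow-up note, it is common practice to pass the same empty queryset when binding the POST data as well, so the bound formset never tries to match existing rows (a sketch):</p>
<pre><code>formset = EntryFormSet(request.POST, queryset=Entry.objects.none())
</code></pre>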
|
python|django|django-forms|modelform|formset
| 3 |
1,902,296 | 51,673,847 |
Python if branch “expected an indented block”
|
<p>Sorry, I am still a newbie; I get an indentation error at line 13, please help me. I have already read many articles from Google, especially Stack Overflow.</p>
<pre><code>list1 = []
long = False
count = 0
TVR_count = 0
for i in range(0,len(df1Lat)):
for j in range(0, len(df_ANTV)):
if (df1Lat.start_time.values[i][0:5] == df_ANTV.daypart_variable.values[j][0:5]):
if (df1Lat.end_time.values[i][0:5] == df_ANTV.daypart_variable.values[j][0:5]):
df1Lat.TVR_total = df_ANTV.TVR.values[j];
list1.append(df1Lat.iloc[i];
else:
long = True
count += 1
elif (long == True):
count += 1
TVR_count += df_ANTV.TVR.values[j]
if ((str(df1Lat.end_time.values[i])[0:5]) == (str(df_ANTV.daypart_variable.values[j])[0:5])):
long = False
df1Lat.TVR_total = TVR_count/count
list1.append(df1Lat.iloc[i])
count = 0
TVR_count=0
else:
pass
dfLat = pd.DataFrame(list1)
dfLat[['date','channel','product','start_time','end_time','TVR_total']].head(60)
</code></pre>
|
<p>You forgot a closing bracket on this line: <code>list1.append(df1Lat.iloc[i];</code>.
Replace it with this: <code>list1.append(df1Lat.iloc[i])</code>. Because the parenthesis is never closed, Python keeps reading the following lines as a continuation of the call, which is why the error is reported as an indentation problem further down.</p>
|
python
| 0 |
1,902,297 | 51,794,954 |
How to make a python mocked out function return a specific value conditional on an argument to the function?
|
<p>I have a python 2.7x Tornado application that when run serves up a handful of RESTful api endpoints.</p>
<p>My project folder includes numerous test cases that rely on the python <code>mock</code> module such as shown below.</p>
<pre><code>from tornado.testing import AsyncHTTPTestCase
from mock import Mock, patch
import json
from my_project import my_model
class APITestCases(AsyncHTTPTestCase):
def setUp(self):
pass
def tearDown(self):
pass
@patch('my_project.my_model.my_method')
def test_something(
self,
mock_my_method
):
response = self.fetch(
path='http://localhost/my_service/my_endpoint',
method='POST',
headers={'Content-Type': 'application/json'},
body=json.dumps({'hello':'world'})
)
</code></pre>
<p>The RESTful endpoint <code>http://localhost/my_service/my_endpoint</code> has two internal calls to <code>my_method</code> respectively: <code>my_method(my_arg=1)</code> and <code>my_method(my_arg=2)</code>.</p>
<p>I want to mock out <code>my_method</code> in this test-case such that it returns <code>0</code> if it is called with <code>my_arg</code>==2, but otherwise it should return what it would always normally return. How can I do it?</p>
<p>I know that I should do something like this:</p>
<pre><code>mock_my_method.return_value = SOMETHING
</code></pre>
<p>But I don't know how to properly specify that something so that its behavior is conditional on the arguments that my_method is called with. Can someone show me or point me to an example??</p>
|
<blockquote>
<p>I want to mock out <code>my_method</code> in this test-case such that it returns 0 if it is called with <code>my_arg==2</code>, but otherwise it should return what it would always normally return. How can I do it?</p>
</blockquote>
<p>Write your own mock function that dispatches to the original method conditionally:</p>
<pre><code>from my_project import my_model

my_method_orig = my_model.my_method

def my_method_mocked(*args, **kwargs):
    if kwargs.get('my_arg') == 2:  # fake call
        return 0
    # otherwise, dispatch to the real method
    return my_method_orig(*args, **kwargs)
</code></pre>
<p>For patching: if you don't need to assert how often the mocked method was called, with what args, etc., it is sufficient to pass the replacement via the <code>new</code> argument. Note that with <code>new</code>, <code>patch</code> does not inject a mock argument into the test:</p>
<pre><code>@patch('my_project.my_model.my_method', new=my_method_mocked)
def test_something(self):
    response = self.fetch(...)
    # with new=..., patch does not pass a mock into the test,
    # so there is nothing to make call assertions on
</code></pre>
<p>If you want to invoke the whole mock assertion machinery, use <code>side_effect</code> as suggested in the other answer. Example:</p>
<pre><code>@patch('my_project.my_model.my_method', side_effect=my_method_mocked, autospec=True)
def test_something(
self,
mock_my_method
):
response = self.fetch(...)
    # the mock records calls, so assertions work here
    mock_my_method.assert_any_call(my_arg=2)
</code></pre>
|
python|mocking
| 6 |
1,902,298 | 59,842,306 |
How can I replace items within a list in python3 using a condition?
|
<p>Write a function which prints all integers between 0 and 100:
For every number that is divisible by <strong>7</strong> or has the digit <strong>7</strong> print <code>‘boom’</code> instead of the number itself.</p>
<p>My solution so far: </p>
<pre><code>def boom():
r = []
for i in range(1, 100):
if i % 7 == 0 or '7' in str(i):
r.append("boom")
else:
r.append(i)
return r

print(boom())
</code></pre>
<p>Helped by: AkshayNevrekar</p>
<p>Question: Do you see a better way to solve this specific problem? A better algorithm? Thanks!</p>
|
<p>You can try this.</p>
<p>You need to check whether <em>i</em> is divisible by <strong>7</strong>; if yes, add <code>'BOOM'</code> to your result list. To check whether a number has the digit <strong>7</strong> in it, typecast <em>i</em> to a string and use <code>in</code>. If <em>i</em> is neither divisible by 7 nor contains a 7, just append <em>i</em> to your result list.</p>
<pre><code>def boom():
return ['BOOM' if i%7==0 or '7' in str(i) else i for i in range(1,101)]
print(boom())
[1, 2, 3, 4, 5, 6, 'BOOM', 8, 9, 10, 11, 12, 13, 'BOOM', 15, 16, 'BOOM', 18, 19, 20, 'BOOM', 22, 23, 24, 25, 26, 'BOOM', 'BOOM', 29, 30, 31, 32, 33, 34, 'BOOM', 36, 'BOOM', 38, 39, 40, 41, 'BOOM', 43, 44, 45, 46, 'BOOM', 48, 'BOOM', 50, 51, 52, 53, 54, 55, 'BOOM', 'BOOM', 58, 59, 60, 61, 62, 'BOOM', 64, 65, 66, 'BOOM', 68, 69, 'BOOM', 'BOOM', 'BOOM', 'BOOM', 'BOOM', 'BOOM', 'BOOM', 'BOOM', 'BOOM', 'BOOM', 80, 81, 82, 83, 'BOOM', 85, 86, 'BOOM', 88, 89, 90, 'BOOM', 92, 93, 94, 95, 96, 'BOOM', 'BOOM', 99, 100]
</code></pre>
<hr>
<pre><code>def boom(n:int,inp:str):
return [inp if i%7==0 or '7' in str(i) else i for i in range(1,n+1)]
print(boom(20,'7 used to be here'))
for i in boom(20,'7 used to be here'):
print(i)
</code></pre>
<p>Here, <code>n</code> is the upper bound of the range and <code>inp</code> is the string that replaces any number that contains a 7 or is divisible by 7.</p>
<hr>
<p>output</p>
<pre><code>[1, 2, 3, 4, 5, 6, '7 used to be here', 8, 9, 10, 11, 12, 13, '7 used to be here', 15, 16, '7 used to be here', 18, 19, 20]
1
2
3
4
5
6
7 used to be here
8
9
10
11
12
13
7 used to be here
15
16
7 used to be here
18
19
20
</code></pre>
<p>Try this to make your <code>boom()</code> complete.</p>
<pre><code>def boom(n=100,inp='BOOM'):
return [inp if i%7==0 or '7' in str(i) else i for i in range(1,n+1)]
</code></pre>
<ol>
<li>If <code>boom()</code> is called with no parameters, it generates numbers 1 through <em>100</em> with inp as <code>'BOOM'</code>.</li>
<li>If <code>boom(n=x)</code> is called, it generates numbers 1 through <em>x</em> with inp as <code>'BOOM'</code>.</li>
<li>If <code>boom(inp=any_string)</code> is called, it generates numbers 1 through <em>100</em> with inp as any_string.</li>
<li>If <code>boom(n=x,inp=any_string)</code> is called, it generates numbers 1 through <em>x</em> with inp as any_string.</li>
</ol>
<hr>
<p>output</p>
<pre><code>>>> boom()
[1, 2, 3, 4, 5, 6, 'BOOM', 8, 9, 10, 11, 12, 13, 'BOOM', 15, 16, 'BOOM', 18, 19, 20, 'BOOM', 22, 23, 24, 25, 26, 'BOOM', 'BOOM', 29, 30, 31, 32, 33, 34, 'BOOM', 36, 'BOOM', 38, 39, 40, 41, 'BOOM', 43, 44, 45, 46, 'BOOM', 48, 'BOOM', 50, 51, 52, 53, 54, 55, 'BOOM', 'BOOM', 58, 59, 60, 61, 62, 'BOOM', 64, 65, 66, 'BOOM', 68, 69, 'BOOM', 'BOOM', 'BOOM', 'BOOM', 'BOOM', 'BOOM', 'BOOM', 'BOOM', 'BOOM', 'BOOM', 80, 81, 82, 83, 'BOOM', 85, 86, 'BOOM', 88, 89, 90, 'BOOM', 92, 93, 94, 95, 96, 'BOOM', 'BOOM', 99, 100]
>>> boom(50)
[1, 2, 3, 4, 5, 6, 'BOOM', 8, 9, 10, 11, 12, 13, 'BOOM', 15, 16, 'BOOM', 18, 19, 20, 'BOOM', 22, 23, 24, 25, 26, 'BOOM', 'BOOM', 29, 30, 31, 32, 33, 34, 'BOOM', 36, 'BOOM', 38, 39, 40, 41, 'BOOM', 43, 44, 45, 46, 'BOOM', 48, 'BOOM', 50]
>>> boom(inp='hehe')
[1, 2, 3, 4, 5, 6, 'hehe', 8, 9, 10, 11, 12, 13, 'hehe', 15, 16, 'hehe', 18, 19, 20, 'hehe', 22, 23, 24, 25, 26, 'hehe', 'hehe', 29, 30, 31, 32, 33, 34, 'hehe', 36, 'hehe', 38, 39, 40, 41, 'hehe', 43, 44, 45, 46, 'hehe', 48, 'hehe', 50, 51, 52, 53, 54, 55, 'hehe', 'hehe', 58, 59, 60, 61, 62, 'hehe', 64, 65, 66, 'hehe', 68, 69, 'hehe', 'hehe', 'hehe', 'hehe', 'hehe', 'hehe', 'hehe', 'hehe', 'hehe', 'hehe', 80, 81, 82, 83, 'hehe', 85, 86, 'hehe', 88, 89, 90, 'hehe', 92, 93, 94, 95, 96, 'hehe', 'hehe', 99, 100]
>>> boom(n=77,inp='77777')
[1, 2, 3, 4, 5, 6, '77777', 8, 9, 10, 11, 12, 13, '77777', 15, 16, '77777', 18, 19, 20, '77777', 22, 23, 24, 25, 26, '77777', '77777', 29, 30, 31, 32, 33, 34, '77777', 36, '77777', 38, 39, 40, 41, '77777', 43, 44, 45, 46, '77777', 48, '77777', 50, 51, 52, 53, 54, 55, '77777', '77777', 58, 59, 60, 61, 62, '77777', 64, 65, 66, '77777', 68, 69, '77777', '77777', '77777', '77777', '77777', '77777', '77777', '77777']
</code></pre>
|
python|list
| 3 |
1,902,299 | 18,968,569 |
Integrate other html documentation into sphinx docs
|
<p>How do you include html docs generated by other tools such as nose, coverage, and pylint reports into Sphinx documentation. Should they go into the _static directory? and if so, how do you link to them?</p>
<p>I am trying to build a concise package of all of the code development tool documentation into a .pdf or html.</p>
|
<p>I have a similar problem and I could solve it by doing the following. Note this is just for the HTML output.</p>
<ol>
<li>Create an index.rst file with just the following:</li>
</ol>
<pre><code>======================
Javadoc of the API XXX
======================
</code></pre>
<p>As I am using the extension "sphinx.ext.autosectionlabel", this becomes a "level 1" section.</p>
<ol start="2">
<li>Modify the Makefile so that, once the HTML is generated, it replaces the index.html of the section "Javadoc of the API XXX" with the Javadoc of my API.</li>
</ol>
<p>After this change, on the toctree I have a link to "Javadoc of the API XXX" and when you click it, you see the Javadoc.</p>
<p>I know it is not the <em>proper way</em> of doing it, but it is the only one I have come up with after many Google searches.</p>
|
python|html|code-coverage|python-sphinx|nose
| 0 |